Inspiration

Save the earth while looking stylish.

What it does

Helps you avoid buying clothing you already own while trying to match a fashion look.

How we built it

PostgreSQL + Python + Jupyter + FashionCLIP

Challenges we ran into

Ollama does not support FashionCLIP. We ended up dropping Ollama for managing the LLM models and using FashionCLIP's own Python library instead.

We also ran into some package version dependency conflicts that had to be resolved.

Accomplishments that we're proud of

We managed to accomplish the minimum goals that we set for ourselves:

  • Designed and set up a RAG architecture
  • Kept it scalable and flexible using open-source technologies and libraries
    • Debian (13 Trixie)
    • Docker
    • PostgreSQL
    • Django
    • Python's AI ecosystem (Jupyter, NumPy, etc.)
  • Stored embeddings of clothing images
  • Converted the user's query, in the form of a submitted clothing image, to an embedding and compared it against the stored clothing items
  • Built a working web application
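The embed-and-compare step above can be sketched with NumPy. This is a minimal illustration, assuming the FashionCLIP embeddings are already computed (the model call itself is omitted); the wardrobe vectors and their labels are made up for the example:

```python
import numpy as np

def cosine_similarity(query: np.ndarray, stored: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of stored vectors."""
    q = query / np.linalg.norm(query)
    s = stored / np.linalg.norm(stored, axis=1, keepdims=True)
    return s @ q

# Hypothetical wardrobe: each row is the embedding of a stored clothing image.
wardrobe = np.array([
    [1.0, 0.0, 0.0],   # e.g. "blue jeans"
    [0.0, 1.0, 0.0],   # e.g. "white tee"
    [0.7, 0.3, 0.0],   # e.g. "faded jeans"
])

query = np.array([1.0, 0.05, 0.0])  # embedding of the submitted photo
scores = cosine_similarity(query, wardrobe)
best = int(np.argmax(scores))       # index of the closest wardrobe item
```

With real FashionCLIP vectors the dimensionality is much higher, but the comparison logic is the same.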

What we learned

The Ollama library expects LLM models in a particular format, so not every model can be loaded into it.

Using Ollama in a RAG architecture is optional rather than required, especially for models that cannot be used within Ollama.

We're still not sure whether it was faster to compute vector similarity scores in Python or with PostgreSQL's vector extension.
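For comparison, the database-side approach might look like the following sketch, assuming the pgvector extension; the `clothing_items` table and column names are hypothetical:

```sql
-- Assumes the pgvector extension is installed; names are illustrative only.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE clothing_items (
    id        serial PRIMARY KEY,
    name      text,
    embedding vector(512)  -- FashionCLIP (ViT-B/32 base) embeddings are 512-d
);

-- Nearest stored items to a query embedding by cosine distance (<=> operator).
SELECT id, name
FROM clothing_items
ORDER BY embedding <=> '[0.12, -0.03, 0.08]'::vector
LIMIT 5;
```

The appeal of this route is that ranking happens next to the data, with indexing available; the Python route keeps everything in one process. Benchmarking both remains an open question for us.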

What's next for Fashion Wardrobe

Have the computer recognize individual clothing items in detail; in computer vision this is called segmentation.

We will have to build a computer vision pipeline that preprocesses the image (e.g., blurring or jittering it) and pipes it through a segmentation algorithm to identify individual clothing pieces.
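A first cut of that pipeline's preprocessing stage could be sketched as follows. The jitter and blur steps are NumPy-only; the segmentation step is left as a stub, since any instance-segmentation model (e.g., Mask R-CNN) could slot in there. All function names here are our own placeholders:

```python
import numpy as np

def brightness_jitter(img: np.ndarray, factor: float) -> np.ndarray:
    """Scale pixel intensities: a simple augmentation for robustness."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Naive k x k box blur via a sliding-window mean (grayscale image)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

def segment(img: np.ndarray) -> list[np.ndarray]:
    """Stub: a real pipeline would call an instance-segmentation model here
    and return one boolean mask per detected clothing piece."""
    return [np.ones(img.shape[:2], dtype=bool)]

img = np.full((8, 8), 100, dtype=np.uint8)   # dummy grayscale "photo"
pre = box_blur(brightness_jitter(img, 1.2))  # preprocess
masks = segment(pre)                         # one mask per clothing piece
```

Each returned mask would then be cropped out, embedded with FashionCLIP, and stored, the same way whole images are handled today.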

The idea is that a user submits a photo, and the app picks apart every clothing piece in the image, classifies each one individually, and stores it in a database. A digital wardrobe, but more streamlined.

Git Repo

https://github.com/kennardpeters/digital-wardrobe
