Project gallery captions:
- The design stack for VERSION 10, our final product
- An example of an input image and its converted output on our Streamlit UI
- Our UI input, where you provide your budget, an image of the area to be reModeled, and a potential style (e.g. cottagecore, office, etc.)
- Conversion and decor design produced by our stack
- Stack for VERSION 1, our first prototype
- Converting an image into a potential decorated corner!
- An example of one of our stack's outputs
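The UI inputs mentioned above (a budget, a photo of the area, and a target style) can be sketched as a small Streamlit app. The widget labels and the `build_prompt` helper below are illustrative assumptions, not the project's actual code:

```python
# Illustrative Streamlit front end for the inputs described above:
# a budget, a photo of the area, and a target style. Widget labels and
# the build_prompt helper are assumptions, not the reModel codebase.
try:
    import streamlit as st
except ImportError:  # let the prompt helper run without Streamlit installed
    st = None

def build_prompt(style: str, budget: int) -> str:
    """Compose a redesign prompt for the downstream image-generation stack."""
    return f"Redesign this area in a {style} style, keeping decor cost under ${budget}."

if st is not None:
    st.title("reModel")
    budget = st.number_input("Budget (USD)", min_value=0, value=500)
    style = st.selectbox("Style", ["cottagecore", "office", "minimalist"])
    image = st.file_uploader("Photo of the area to reModel", type=["png", "jpg", "jpeg"])
    if image is not None:
        st.image(image, caption="Area to reModel")
        st.write(build_prompt(style, budget))
```

Run with `streamlit run app.py`; the composed prompt would then be passed to the image-generation models.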
What and Why reModel?
Looking at our increasingly digital reality, our team wanted to find a way to root ourselves in physicality. We wanted to experiment with various image-generation models and see how close we could get to replicating reality: not just any reality, but the reality around us. We then wanted to connect the digital world directly back to our real world through a series of classical and novel machine learning models.
What Does reModel Do?
Every day a new GPT, DeepSeek, Siri, or DALL·E is released, and each has its own specialization and methods, its ups and downs.
Models Used
- GPT-4o
- GPT-4 Vision
- DALL·E
- Stable Diffusion
- StableDesign (Realistic Vision V5.1, SSD-1B Stable Diffusion, ControlNet segmentation mapping, and UPerNet semantic segmentation)
- YOLOv7 object detection and bounding
- Meta's Segment Anything Model (SAM)
- OpenAI CLIP neural-network similarity check for web-crawled products
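The last step in the list, matching web-crawled products against the generated design with CLIP, reduces to a cosine-similarity ranking over embedding vectors. A minimal sketch, assuming the embeddings were already produced by a CLIP image encoder (the function name and stand-in vectors are illustrative):

```python
# Cosine-similarity ranking over CLIP-style embeddings.
# In the real stack the vectors would come from CLIP's image encoder;
# here they are stand-in NumPy arrays so the sketch stays self-contained.
import numpy as np

def rank_products(design_emb: np.ndarray, product_embs: np.ndarray) -> np.ndarray:
    """Return product indices sorted from most to least similar to the design."""
    design = design_emb / np.linalg.norm(design_emb)
    products = product_embs / np.linalg.norm(product_embs, axis=1, keepdims=True)
    sims = products @ design            # cosine similarity per product
    return np.argsort(-sims)            # highest similarity first

# Toy example: product 1 points the same way as the design embedding.
design = np.array([1.0, 0.0, 0.0])
catalog = np.array([
    [0.0, 1.0, 0.0],   # orthogonal: dissimilar
    [2.0, 0.0, 0.0],   # same direction: most similar
    [1.0, 1.0, 0.0],   # partially aligned
])
print(rank_products(design, catalog).tolist())  # → [1, 2, 0]
```

Normalizing both sides first makes the dot product equal to cosine similarity, so embedding magnitude does not affect the ranking.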
Built With
- ai-applied-sentiment-analysis
- api
- classification
- clip
- controlnet
- meta
- neuralnetwork
- openai
- python
- replicate
- segmentanythingmodel
- selenium
- similarity
- stabledesign
- stablediffusion