Inspiration

The idea for ArtFit was inspired by my own move into a new apartment. After I furnished everything, the walls still felt empty. Store-bought art never matched my space, and even the paintings I created myself clashed with the décor, so they ended up stacked in a corner. Realizing many people face the same mismatch, and many artists struggle to place their work meaningfully, I set out to build a tool that bridges both gaps, serving individuals, supporting creators, and enriching spaces.

What it does

ArtFit lets users upload a photo of any interior, choose or refine style preferences, and then:

  • Analyzes the room’s palette, lighting, and layout with a vision model.
  • Generates or retrieves artwork that matches the detected style and the user’s taste.
  • Renders a realistic preview so users can see the piece on their own wall before buying.
  • Saves successful matches to a community gallery where others can discover similar works.

How we built it

Image analysis: OpenAI CLIP extracts style tags such as “industrial”, “warm beige”, and “oak wood”.
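
For illustration, here is a minimal sketch of the tagging step, assuming the public openai/clip-vit-base-patch32 checkpoint via Hugging Face transformers; the tag vocabulary and caption template are placeholders, not our full production list.

```python
# Zero-shot style tagging with CLIP: score a room photo against a small
# vocabulary of style/colour/material tags and keep the top matches.
# The tag list below is illustrative, not our production vocabulary.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

STYLE_TAGS = [
    "industrial", "scandinavian", "mid-century modern",
    "warm beige", "cool grey", "oak wood", "exposed brick",
]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def extract_style_tags(image_path: str, top_k: int = 3) -> list[str]:
    image = Image.open(image_path).convert("RGB")
    # Phrase each tag as a caption so it sits closer to CLIP's training text.
    prompts = [f"a photo of a {tag} interior" for tag in STYLE_TAGS]
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    ranked = probs.argsort(descending=True)[:top_k]
    return [STYLE_TAGS[i] for i in ranked]

print(extract_style_tags("living_room.jpg"))  # e.g. ['industrial', 'oak wood', ...]
```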
Recommendation engine: A tag-bundle method combines room tags with user keywords to search DynamoDB for suitable pieces or trigger GPT-4.1-powered image generation.
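
A simplified sketch of the tag-bundle lookup follows; the table name, index, and attribute names are hypothetical stand-ins for our schema.

```python
# Tag-bundle lookup: merge room tags with user keywords, then rank artworks
# by how many tags they share with the bundle. An empty result signals the
# caller to fall back to image generation. Table/index/attribute names are
# hypothetical.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ArtworksByTag")  # hypothetical table name

def recommend(room_tags: list[str], user_keywords: list[str],
              min_overlap: int = 3) -> list[dict]:
    bundle = set(room_tags) | set(user_keywords)
    scores: dict[str, int] = {}
    items: dict[str, dict] = {}
    # One query per tag against a tag-keyed GSI, counting overlaps in memory.
    for tag in bundle:
        resp = table.query(
            IndexName="tag-index",  # hypothetical GSI: partition key = tag
            KeyConditionExpression=Key("tag").eq(tag),
        )
        for item in resp["Items"]:
            art_id = item["artwork_id"]
            scores[art_id] = scores.get(art_id, 0) + 1
            items[art_id] = item
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [items[a] for a in ranked if scores[a] >= min_overlap]
```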
Generation and rendering: Custom artwork is produced with a diffusion pipeline orchestrated by GPT-4.1, then composited on the original photo with perspective correction.
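
The compositing step boils down to a homography warp. Here is a sketch using OpenCV, assuming the four wall-corner coordinates come out of the layout analysis; the coordinates shown are placeholders.

```python
# Composite generated artwork onto the room photo with a perspective warp.
# The wall-corner quad is assumed to come from the layout-analysis step;
# the rest is standard OpenCV homography compositing.
import cv2
import numpy as np

def composite(room_path: str, art_path: str,
              wall_quad: np.ndarray) -> np.ndarray:
    room = cv2.imread(room_path)
    art = cv2.imread(art_path)
    h, w = art.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Map the flat artwork onto the (possibly skewed) wall rectangle.
    H = cv2.getPerspectiveTransform(src, wall_quad.astype(np.float32))
    warped = cv2.warpPerspective(art, H, (room.shape[1], room.shape[0]))
    # Mask where the warped artwork landed and paste it over the room.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H,
                               (room.shape[1], room.shape[0]))
    room[mask > 0] = warped[mask > 0]
    return room

# Placeholder quad: top-left, top-right, bottom-right, bottom-left (pixels).
quad = np.float32([[420, 180], [780, 210], [775, 520], [415, 500]])
cv2.imwrite("preview.jpg", composite("room.jpg", "art.png", quad))
```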
Infrastructure: React front-end, AWS Lambda for inference, S3 for storage, and a Stripe checkout flow for purchases.
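
As a rough shape of one inference endpoint behind API Gateway (the bucket name, payload fields, and helper import are illustrative assumptions):

```python
# Minimal Lambda handler shape: fetch the uploaded photo from S3, run the
# tagging step, and return the tags as JSON. Names here are stand-ins.
import json
import boto3

from style_tags import extract_style_tags  # hypothetical module wrapping the CLIP sketch above

s3 = boto3.client("s3")
UPLOAD_BUCKET = "artfit-uploads"  # hypothetical bucket name

def handler(event, context):
    body = json.loads(event["body"])
    key = body["photo_key"]  # illustrative payload field
    local_path = f"/tmp/{key.split('/')[-1]}"  # Lambda's writable scratch dir
    s3.download_file(UPLOAD_BUCKET, key, local_path)
    tags = extract_style_tags(local_path)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"tags": tags}),
    }
```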

Challenges we ran into

Early experiments showed that feeding a full-resolution room photo directly into the image-generation model produced unfocused, often unattractive results. Without any guiding strategy, the AI tried to reinterpret every pixel of the scene, leading to clashing colours, mismatched styles, and compositions that felt “off” inside the space.

Accomplishments that we're proud of

  • We built and deployed a fully functional MVP in under a week, then spent a second week iterating on the algorithm and system architecture.
  • Our mock-up generation process delivers HD previews in less than 10 seconds.
  • We filed a provisional patent covering the combined style-analysis and generation workflow.

What we learned

  • Serverless primitives (S3 + Lambda + API Gateway) let us stitch computer-vision, LLM and image-generation workflows without managing servers.
  • Most importantly, rapid iteration with AWS tools allowed us to focus on user experience—turning a personal pain point into a production-ready prototype in days.
  • Generating artwork directly from a raw room photo produced noisy, mismatched compositions. By extracting semantic tags first (colour palette, dominant style, mood, layout) and then crafting a focused prompt, we guide the generative model toward harmonious results (see the prompt-building sketch after this list).
  • Purely algorithmic style tags rarely capture a homeowner’s personal taste. Merging system-derived tags with user-supplied mood, subject, and size preferences yields recommendations that feel both on-brand for the space and personally meaningful.
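
To make the last two points concrete, here is a minimal sketch of how tag extraction and preference merging can feed one focused prompt. The field names, template wording, and precedence rules are illustrative assumptions, not our exact production logic.

```python
# Build a focused generation prompt from system-derived room tags plus
# user-supplied preferences, instead of handing the raw photo straight to
# the generator. Field names and the template are illustrative.
def build_prompt(room_tags: dict[str, str], user_prefs: dict[str, str]) -> str:
    # System tags win on palette/style (they must match the room);
    # user preferences win on subject and mood (they must match the person).
    parts = [
        f"{user_prefs.get('subject', 'abstract')} wall art",
        f"in {room_tags['style']} style",
        f"using a {room_tags['palette']} colour palette",
        f"with a {user_prefs.get('mood', room_tags['mood'])} mood",
        f"sized for a {user_prefs.get('size', 'medium')} frame",
    ]
    return ", ".join(parts)

room = {"style": "industrial", "palette": "warm beige", "mood": "calm"}
prefs = {"subject": "botanical line drawing", "mood": "serene", "size": "large"}
print(build_prompt(room, prefs))
# botanical line drawing wall art, in industrial style, using a warm beige
# colour palette, with a serene mood, sized for a large frame
```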

What's next for ArtFit

  • Evaluate additional foundation models and managed tooling, such as AWS Bedrock, for the generation pipeline.
  • Launch a closed beta on iOS with AR view so users can reposition art in real time.
  • Expand the style vocabulary to include regional aesthetics like Mediterranean and Shaker.
  • Open the community gallery, letting users follow creators and share curated sets.

Built With

React, AWS Lambda, Amazon S3, Amazon API Gateway, Amazon DynamoDB, OpenAI CLIP, GPT-4.1, Stripe
