Inspiration
As a team of backend engineers, we lean heavily on ChatGPT for our frontend work. The process often feels tedious as we flip between the rendered site, VSCode, and ChatGPT -- especially when our changes aren't rendering properly. Hence, we wanted to make a more convenient, interactive tool that lets us simply click on different site components to directly prompt changes.
What it does
We hope this tool can help teams with plenty of design savvy but limited engineering capacity. We imagine a future in which teams click on site components and prompt Rendr to make the desired updates. The interface is no-code and prompt-guided, which is perfect for tedious tasks like formatting grids and tables or centering divs.
Developers can click or highlight a specific component in their local site preview. A text input then appears for prompting changes specific to that component, and a second popup offers AI suggestions: possible images (DALL-E generated) and replacement text.
How we built it
There are three layers to the project:
1) Backend (Python/Flask)
- Python script calling the OpenAI API: gpt-3.5-turbo for chat completions and DALL-E for image generation
- Modifies the HTML/CSS by splicing in updated GPT-generated code snippets (see the sketch after this list)
- Lets users download the updated HTML/CSS to save their changes
2) Live-rendered frontend (HTML/CSS)
- Rendered inside the ReactJS app; represents the project's source code
- Modified by the backend script
3) Interactive Overlay (ReactJS)
- Layer on top of the actual frontend (an iframe displays the HTML inside the ReactJS wrapper)
- Adds hover functionality to specific HTML elements, identifying individual components as well as their child elements; the selected source snippet can then be sent to the backend for changes/suggestions
- Maintains the relevant HTML as a state string and can stitch changes back together
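
To make the backend flow concrete, here is a minimal sketch of the edit endpoint. The route, field names, and prompt wording are illustrative assumptions for this example, not our exact code: the overlay posts the clicked snippet plus the user's prompt, gpt-3.5-turbo rewrites the snippet, and the result is spliced back into the page source.

```python
# Minimal sketch of the backend edit flow (illustrative names, not our exact
# code). Assumes openai>=1.0 and OPENAI_API_KEY set in the environment.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()

@app.route("/edit", methods=["POST"])  # hypothetical route name
def edit_component():
    data = request.get_json()
    page_html = data["page_html"]  # full source held by the overlay
    snippet = data["snippet"]      # component the user clicked
    prompt = data["prompt"]        # user's requested change

    # Ask gpt-3.5-turbo to rewrite just the selected snippet.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Rewrite the given HTML/CSS snippet per the "
                        "instruction. Return only the updated snippet."},
            {"role": "user",
             "content": f"Snippet:\n{snippet}\n\nInstruction: {prompt}"},
        ],
    )
    new_snippet = resp.choices[0].message.content

    # Splice the updated snippet back into the page source; the overlay
    # re-renders the result inside its iframe.
    updated_page = page_html.replace(snippet, new_snippet, 1)
    return jsonify({"page_html": updated_page, "snippet": new_snippet})
```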
Challenges we ran into
We had issues setting up PostgreSQL on our local systems; for whatever reason, machines with prior installations ran into persistent conflicts. We had planned a semantic search over community-uploaded components (using PostgreSQL, Lantern, and vector embeddings), but between these issues and time constraints we were unable to implement it. We hope to add this functionality once we sort out the bugs; a sketch of the planned flow is below.
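
This is roughly what the unshipped semantic search would have looked like. Treat it as a sketch under assumptions: the database name and table schema are hypothetical, and the Lantern index/distance names follow its docs as we understood them.

```python
# Rough sketch of the planned component search (never shipped). Assumes
# openai>=1.0, psycopg2, and a Postgres instance with the Lantern extension.
import psycopg2
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> list[float]:
    # text-embedding-ada-002 returns a 1536-dimensional vector.
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return resp.data[0].embedding

conn = psycopg2.connect("dbname=rendr")  # hypothetical database name
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS lantern;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS components (
            id serial PRIMARY KEY,
            html text,
            embedding real[]  -- Lantern indexes plain real[] arrays
        );
    """)
    # HNSW index with cosine distance; dim matches ada-002 embeddings.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS components_embedding_idx
        ON components USING lantern_hnsw (embedding dist_cos_ops)
        WITH (dim = 1536);
    """)
    # Find the five components most similar to a natural-language query.
    cur.execute(
        "SELECT html FROM components "
        "ORDER BY cos_dist(embedding, %s::real[]) LIMIT 5;",
        (embed("a well-designed signup form"),),
    )
    matches = [row[0] for row in cur.fetchall()]
```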
Accomplishments that we're proud of
- Isolating the right part of the HTML tree for a component and stitching it back into the DOM was a huge concern that we eventually conquered
- Only spent 15 cents on OpenAI backend costs
- Slept in the Dome last night
What we learned
Chatting with the mentors was extremely helpful and serendipitous. We originally planned a much more complicated approach for modifying the HTML, but after speaking with a mentor we came up with a much simpler approach for our demo.
What's next for Rendr Dev
Implement community sharing of components. This could grow into a marketplace in which users pre-select popular designs (e.g., well-designed buttons, advanced forms with backend functionality).
Implement a few-shot generative AI approach for analyzing components. We currently use GPT to score the "quality" of a component, and we want to expand on this functionality. Over time we can collect more examples to fine-tune the model for our own custom UI/UX analysis (accessibility, visibility, etc.); a sketch of the few-shot idea is below.
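
As a sketch of the few-shot direction: seed the prompt with a couple of hand-labeled examples, then ask gpt-3.5-turbo to score a new component. The rubric, example snippets, and scores here are hypothetical, not our production prompt.

```python
# Hypothetical sketch of few-shot component scoring. The rubric and the
# labeled examples are illustrative placeholders. Assumes openai>=1.0.
from openai import OpenAI

client = OpenAI()

FEW_SHOT = [
    {"role": "system",
     "content": "Score the HTML component from 1-10 for UI/UX quality "
                "(accessibility, visibility, layout). Reply with the "
                "score only."},
    # Hand-labeled examples steer the model toward our notion of quality.
    {"role": "user",
     "content": '<button style="font-size:4px">ok</button>'},
    {"role": "assistant", "content": "2"},
    {"role": "user",
     "content": '<button class="btn" aria-label="Submit form">Submit</button>'},
    {"role": "assistant", "content": "8"},
]

def score_component(html: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=FEW_SHOT + [{"role": "user", "content": html}],
    )
    return int(resp.choices[0].message.content.strip())
```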
Built With
- chatgpt
- css
- dalle
- html
- javascript
- lantern
- postgresql
- python
- vector