Inspiration

Kareem has done a lot of personal research on the benefits/drawbacks of various credit cards, and he thought he would share his knowledge with people who haven't invested as much time in learning about personal finance. We wanted to build an app that could make the journey toward financial literacy as painless as possible.

Currently, recommenders need certain information from you to gauge your preferences; we believe this is very subjective, and sometimes people may not know what they want. They may think that a certain category, like travel, will save them a lot of money when in reality it doesn't. This is why we wanted to look at transaction history instead and determine, based on the customer's current spending, which credit card would be best for them.

What it does

Given the user's financial history, our web app researches and recommends the credit cards that provide the most utility. Note that the recommendations may not be mutually exclusive: using multiple cards in conjunction could provide more value across different categories of spending.

In addition, our interface displays the average monetary value of each card, allowing for easy visual comparison between the recommendations. This also gives users options: those who do not feel comfortable holding multiple credit cards can still see which single card benefits them the most.

How we built it

We developed a simple frontend UI in React with Tailwind CSS. We developed our backend REST API with Flask.
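The backend boils down to a JSON-in, JSON-out REST endpoint that the React frontend calls. The sketch below shows the general shape of that contract; the route name and request fields are illustrative assumptions, not our exact API.

```python
# Minimal Flask sketch of the recommendation endpoint. The route name,
# request fields, and response shape are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/recommendations", methods=["POST"])
def recommendations():
    payload = request.get_json(force=True)
    transactions = payload.get("transactions", [])
    # The real handler passes the transaction history to the LLM pipeline;
    # here we return a stub so the frontend contract is visible.
    return jsonify({
        "recommendations": [],
        "transaction_count": len(transactions),
    })
```

Running `app.run()` serves this locally; the frontend only ever deals with the JSON body, which keeps the two halves of the app loosely coupled.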

We developed our application's core business logic using the OpenAI chat completions API along with its structured outputs (a.k.a. grammar-constrained decoding) functionality. Structured outputs is a new API feature that takes a JSON schema as input and ensures that the LLM's output is 100% adherent to the provided schema. This feature was very important, because we needed to be confident that we could directly render our LLM generations on the frontend with minimal post-processing.
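Concretely, structured outputs takes a JSON Schema wrapped in a `response_format` object with `strict` mode enabled. The sketch below builds such a payload; the schema fields (`card_name`, `estimated_annual_value`, `reasoning`) are illustrative, not our full production schema.

```python
# Sketch of the response_format payload passed to the Chat Completions API.
# The recommendation schema fields here are illustrative assumptions.
CARD_SCHEMA = {
    "type": "object",
    "properties": {
        "recommendations": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "card_name": {"type": "string"},
                    "estimated_annual_value": {"type": "number"},
                    "reasoning": {"type": "string"},
                },
                "required": ["card_name", "estimated_annual_value", "reasoning"],
                "additionalProperties": False,
            },
        }
    },
    "required": ["recommendations"],
    "additionalProperties": False,
}

def build_response_format(name: str, schema: dict) -> dict:
    """Wrap a JSON Schema in the structure the structured-outputs feature
    expects; strict mode is what guarantees 100% schema adherence."""
    return {
        "type": "json_schema",
        "json_schema": {"name": name, "strict": True, "schema": schema},
    }

response_format = build_response_format("card_recommendations", CARD_SCHEMA)
```

This dict is passed as the `response_format` argument to `client.chat.completions.create(...)`, after which the model's JSON output can be handed to the frontend as-is.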

To ensure our recommendations are grounded in evidence and solid reasoning, we manually scraped information on over 40 well-known credit cards and compiled that into a JSON dataset. We then provided this dataset as additional context to the LLM during its recommendation generation in a process known as retrieval augmented generation (RAG).
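The retrieval step can be sketched as a simple filter over the card dataset before it is serialized into the prompt, so only plausibly relevant cards consume context-window space. The three cards and the filtering criteria below are made-up stand-ins for the real 40+-card dataset.

```python
import json

# Toy slice of the card dataset; the real JSON file covers 40+ cards,
# and these entries are illustrative assumptions.
CARDS = [
    {"name": "TravelMax", "annual_fee": 95, "categories": {"travel": 3.0, "dining": 2.0}},
    {"name": "CashBasic", "annual_fee": 0, "categories": {"groceries": 2.0}},
    {"name": "DinePlus", "annual_fee": 0, "categories": {"dining": 3.0}},
]

def retrieve_cards(fee_tolerance: float, top_categories: list) -> list:
    """Keep only cards the user could plausibly want: within their annual
    fee tolerance and rewarding at least one of their top spend categories."""
    return [
        card for card in CARDS
        if card["annual_fee"] <= fee_tolerance
        and any(cat in card["categories"] for cat in top_categories)
    ]

def build_context(cards: list) -> str:
    """Serialize the retrieved cards into a prompt-ready JSON block."""
    return "Candidate cards:\n" + json.dumps(cards, indent=2)
```

The resulting string is appended to the LLM's context, which is what grounds its recommendations in the compiled dataset rather than in whatever it memorized during training.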

We also went through multiple iterations of prompt engineering, ultimately constructing a Chain-of-Thought (CoT) prompt that instructs the LLM to provide its reasoning along with its recommendations. The goal of this prompting methodology is to elicit well-reasoned responses. As a side benefit, we can also show this reasoning to the user in the UI.

Challenges we ran into

It took us some time to figure out which features were most important for providing the best recommendations. For example, we identified early on that "income" and "credit score" would be important features. However, our experimentation later revealed that "age," "oldest account length," and "annual fee tolerance" were also important for providing relevant credit card recommendations. We don't want to recommend a card that doesn't suit the user's preferences or one that the user is unlikely to get approved for.

The most challenging aspect of the project was connecting the frontend to the backend. Because we split up our work 50/50 between the frontend and backend, we had a hard time reconciling differences in how we formatted data.

One full all-nighter and a lot of frustrating moments later, we're done with our app!

Built With

React, Tailwind CSS, Flask, Python, OpenAI API
