Inspiration

With the rise of Large Language Models in recent years, there have been widespread applications for them in assisting users with context-specific tasks. For example, training an LLM on company-specific information could let employees get answers to questions about their company. Our idea was to apply the same concept to George Mason students: an LLM trained on a wide variety of career-related data from GMU could help students find answers to questions they might not know where else to ask.

What it does

When a user submits a question, our system forwards it to an LLM hosted on AWS that has been trained on information from GMU websites (such as the CS department's website). The model then returns a response to the user based on that information.
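The flow above can be sketched roughly as follows. This is an illustrative outline, not our actual code: the snippet store, `build_prompt()`, and the stubbed model call are all hypothetical stand-ins for the real GMU data and the hosted LLM.

```python
# Hypothetical GMU-sourced context snippets (placeholders, not real scraped data).
GMU_SNIPPETS = [
    "The CS department requires CS 310 before CS 330.",
    "Career fairs are held each fall and spring semester.",
]

def build_prompt(question: str) -> str:
    """Wrap the user's question with GMU-sourced context for the model."""
    context = "\n".join(GMU_SNIPPETS)
    return (
        "Answer using only the GMU information below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def answer(question: str, llm=lambda prompt: "(model output)") -> str:
    """Send the assembled prompt to the LLM (stubbed here with a lambda)."""
    return llm(build_prompt(question))

print(answer("What classes do I need before CS 330?"))
```

In the real system, the `llm` callable would be replaced by a request to the model hosted on AWS.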

How we built it

Our website runs on an EC2 instance in the AWS cloud. Our LLM is the Llama-2-7b model.
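A website talking to the model on EC2 could issue requests along these lines. The endpoint URL and the JSON body shape are placeholders for illustration, not our real API.

```python
import json
import urllib.request

# Placeholder endpoint; the real instance address is not shown here.
API_URL = "http://example-ec2-host/ask"

def make_request(question: str) -> urllib.request.Request:
    """Build a POST request carrying the user's question as JSON."""
    body = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )

# The request would then be sent with urllib.request.urlopen(make_request(...))
# and the model's answer read from the response body.
```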

Challenges we ran into

Setting up the website was difficult because we were unaccustomed to AWS. Our biggest challenge, however, was the sheer amount of data we had to collect from GMU websites.

Accomplishments that we're proud of

We are proud of setting up a functioning website on AWS and of learning to utilize AWS services to complete our project.

What's next for four_guys

We had additional ideas to expand our project, including a 4-year plan generator and integration of Blackboard, Mason360, and other GMU websites with calendar services, so students could receive notifications about upcoming assignments, club meetings, and other events based on their preferences. Due to time constraints, we could not implement these features. After this hackathon, we plan to polish our chatbot assistant and build them out.
