Inspiration
In today’s digital world, people consume all forms of multimedia faster than ever. Businesses and influencers have capitalized on this mass-media market to distribute content, campaigns, and products on a global scale. However, with the velocity and magnitude of outreach that a media presence now makes possible, the line between a well-loved release and a harmful campaign with lasting damage to societal progress and business reputations can be scarily thin. What’s more, social media brings together large groups of people, which may encourage interdisciplinary conversation but can just as quickly foster an unshakable mob mentality. For example, Pepsi’s infamous ad with Kendall Jenner, though aired without malicious intent, was quickly criticized for trivializing the BLM movement, a considerable setback both for societal progress and for the company itself. Globally, companies have spent over $600 billion on digital advertising (Spendesk); given the resources at stake and the sheer size of audience an influential voice can reach, it is crucial that potential disasters and harmful media be caught and refined before release through preventative measures rather than reactive ones.
Problem
Companies and influencers lack a reliable way to predict how their target audience will react to new content, products, or campaigns before release. Current methods such as focus groups or surveys are limited in scale and often fail to capture the nuanced, evolving nature of public opinion and upcoming trends. Nor do these methods give companies a dependable way to gauge the impact of their actions on their long-term reputation and on society as a whole, especially through approaches that robustly model human behavior and reception.
Our Solution: What we did
We address this critical issue by simulating interdisciplinary human-to-human interaction, along with the evolution of human thought and behavior under shared ideals and mutual influence, through a robust multi-agent system. The result is a state-of-the-art approach to predicting public sentiment before media is published, with capabilities that extend to intelligently identifying areas of improvement and to real-time, dynamically generated visualizations of how group opinions progress over simulated time. MarketMind is an AI-powered human-interaction-simulation platform that generates diverse, realistic user personas based on demographic data, psychographics, and behavioral patterns of a company’s target market. The platform simulates the release of proposed content in multimodal input formats (video, text, audio, and images/art) to these personas, generating reactions and predicting evolving sentiments over time. It then models how these opinions spread and change through multiple “cycles” of people interacting and sharing their thoughts, dynamically processing this data into real-time visualizations and insights. The platform provides real-time analytics on public sentiment based on these simulation cycles, allowing companies to identify potential issues or polarizing content in their media and make data-driven decisions before release.
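To make the cycle idea concrete, here is a minimal, runnable toy sketch of the simulation structure. It replaces the LLM-driven discussion with a simple numeric nudge toward the group average; the Persona fields, the openness parameter, and the convergence tolerance are illustrative assumptions, not our actual implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    sentiment: float   # -1 (negative) .. 1 (positive)
    openness: float    # how strongly the persona is swayed by others

def run_simulation(personas, max_cycles=10, tolerance=0.01):
    """Toy cycle-based opinion model: each cycle, every persona nudges its
    sentiment toward the group mean, scaled by its openness, until convergence."""
    history = [[p.sentiment for p in personas]]
    for _ in range(max_cycles):
        mean = sum(p.sentiment for p in personas) / len(personas)
        for p in personas:
            p.sentiment += p.openness * (mean - p.sentiment)
        current = [p.sentiment for p in personas]
        history.append(current)
        new_mean = sum(current) / len(current)
        if max(abs(s - new_mean) for s in current) < tolerance:
            break  # opinions have (roughly) converged
    return history  # per-cycle sentiment, the kind of data that feeds the charts

personas = [Persona(f"persona_{i}", random.uniform(-1, 1), random.uniform(0.1, 0.5))
            for i in range(8)]
print(run_simulation(personas)[-1])
```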
How we built it
We built MarketMind by using Google Gemini to turn consumer and market data into interdisciplinary, diverse user personas, then integrating Fetch AI to spin up a distinct agent for each persona, with specialized instructions and system prompts, in the Agentverse. We chose Gemini because it excels at finding “needles in a haystack,” meaning it can surface controversial or biased points that many humans would miss. An arbiter agent exposes multimedia of any form (text, video, audio, image) to the “users” in the Agentverse and facilitates productive conversation while monitoring individual and overall shifts in sentiment and reception of the media, highlighting any potential points of controversy or caution. This data flows into a SingleStore database, which performs sophisticated analytics in real time to drive visualizations, key summaries, and metrics on the agent conversation until we observe public opinion converging toward a more unified sentiment. We use semantic scores to rate the media based on these agent-to-agent interactions, giving companies a concrete benchmark to work with and improve against while modeling human psychographics as closely as possible. Finally, we built a frontend with React and TypeScript, using Recharts for cleaner, dynamic visualizations.
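As a rough illustration of the agent layer (not our exact code), an arbiter and a persona agent can be wired up with Fetch.ai's uagents library along these lines; the message models, seeds, interval, and the hard-coded reaction are placeholders, and in MarketMind the reaction text comes from Gemini prompted with the persona's system prompt.

```python
from uagents import Agent, Bureau, Context, Model

class MediaDrop(Model):
    content: str               # text, or a caption/transcript of the media

class Reaction(Model):
    persona: str
    sentiment: float           # -1 .. 1
    comment: str

arbiter = Agent(name="arbiter", seed="arbiter-seed-phrase")
persona = Agent(name="persona_gen_z_artist", seed="persona-seed-phrase")

@arbiter.on_interval(period=60.0)
async def release_media(ctx: Context):
    # The arbiter exposes the proposed campaign to each persona agent.
    await ctx.send(persona.address, MediaDrop(content="Draft ad copy goes here"))

@persona.on_message(model=MediaDrop)
async def react(ctx: Context, sender: str, msg: MediaDrop):
    # Placeholder reaction; the real one is generated by the persona's LLM prompt.
    await ctx.send(sender, Reaction(persona="persona_gen_z_artist",
                                    sentiment=-0.3,
                                    comment="This feels tone-deaf to me."))

@arbiter.on_message(model=Reaction)
async def collect(ctx: Context, sender: str, msg: Reaction):
    ctx.logger.info(f"{msg.persona}: {msg.sentiment:+.2f} {msg.comment}")

bureau = Bureau()
bureau.add(arbiter)
bureau.add(persona)

if __name__ == "__main__":
    bureau.run()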
Frontend:
- React
- TypeScript
- Material UI and Recharts Libraries
Backend:
- Flask Server and REST API
- Fetch.AI
- Google Gemini API
- SingleStore
- SQL
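To give a sense of how the reaction data can be stored and aggregated in SingleStore, here is a small sketch using the singlestoredb Python client. The connection string, table name, and columns are illustrative assumptions rather than our actual schema.

```python
import singlestoredb as s2

# Placeholder connection string; supply your own SingleStore credentials.
conn = s2.connect("user:password@host:3306/marketmind")

with conn.cursor() as cur:
    # Hypothetical table of per-cycle persona reactions.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS reactions (
            cycle INT,
            persona VARCHAR(64),
            sentiment FLOAT,
            comment TEXT
        )
    """)
    cur.execute(
        "INSERT INTO reactions (cycle, persona, sentiment, comment) "
        "VALUES (%s, %s, %s, %s)",
        (1, "persona_gen_z_artist", -0.3, "This feels tone-deaf to me."),
    )
    conn.commit()

    # Aggregate sentiment per cycle: the kind of query that feeds the live charts.
    cur.execute("""
        SELECT cycle, AVG(sentiment) AS mean_sentiment, STDDEV(sentiment) AS spread
        FROM reactions
        GROUP BY cycle
        ORDER BY cycle
    """)
    for row in cur.fetchall():
        print(row)
```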
Challenges we ran into
We struggled to get a single Gemini model to generate many diverse user personas, and worked around this by integrating few-shot learning into our persona-generation prompt. It was also difficult to enforce objectivity in the arbiter agent’s observations of the subjective conversations taking place in the Agentverse, and to prompt the LLMs so that they would engage with difficult conversations about potentially controversial topics instead of shunning them, as most models usually do.
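A minimal sketch of the few-shot workaround, assuming the google-generativeai Python SDK; the model name, API key, and the example personas embedded in the prompt are placeholders chosen for illustration, not our real ones.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")      # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")   # model name is illustrative

# Few-shot examples nudge a single model toward varied, non-repetitive personas.
FEW_SHOT = """Example persona 1: Maya, 24, freelance illustrator, skeptical of brand activism, gets news from TikTok.
Example persona 2: Dale, 57, small-business owner, price-driven, rarely uses social media."""

def generate_personas(market_brief: str, n: int = 8) -> str:
    prompt = (
        "You generate distinct consumer personas for a market simulation.\n"
        f"{FEW_SHOT}\n\n"
        f"Target market: {market_brief}\n"
        f"Generate {n} new personas, each with a name, age, occupation, values, "
        "media habits, and likely biases. Make them demographically and "
        "ideologically diverse."
    )
    return model.generate_content(prompt).text

print(generate_personas("Gen Z and millennial consumers of a sparkling beverage brand"))
```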
Accomplishments that we're proud of
- Inclusive creation of user personas, incorporating a wide variety of perspectives within the company’s target market
- Combining original human thought with the natural bias that comes from human connection to simulate how news realistically spreads over time
- Multimodal input support for our LLM, which can find “needles in haystacks,” i.e. identify even the smallest details that could lead to larger consequences or harmful, negative implications
- A high standard of overt bias detection, with model bias counteracted by an explicit choice of LLM informed by modern research papers
- Reactive front end that clearly displays the information returned by the LLM in real time, letting users understand sentiment change over time and the specific aspects of the campaign that may have caused it
- Learning a lot of new technologies and techniques we had never used before, from robust prompt engineering to real-time data streaming to building multi-agent systems
- Successfully completing a hackathon! Very very proud :D
What we learned
- Using ready-made frontend components from libraries such as Material UI and Recharts simplifies and speeds up frontend development
- LLM responses provide many nuanced details when scoring a campaign as positive or negative; however, they sometimes fail on scores that should obviously be very low or very high
What's next for MarketMind
- Model not only market influence, but also growth by dynamically generating new users and adopters as opinions progress, simulating marketing campaigns on an even deeper level while depicting potential barriers to expansion
- Simulate human connection more intricately with out-of-turn information-seeking agents that possess more tools instead of cycle-based learning with single-source information feeds
- Fine-tune the LLM to more accurately generate appropriate user personas and simulate responses
- Store the market simulations and data for users to access and review later, comparing reactions across different campaigns
Built With
- fetchai
- geminiapi
- materialui
- react
- recharts
- singlestore
- sql
- typescript