Inspiration

Revolut users accumulate points but often fail to redeem them because they lack timely awareness of personally relevant opportunities. Discovering vendors that accept points requires manual searching that rarely lines up with their actual spending habits or moments of purchase intent. This leaves points feeling disconnected, undervalued, and ultimately unused.

Revolut partner vendors, particularly smaller ones, struggle to gain visibility among relevant user segments within the app. They lack an automated way to reach users whose demonstrated purchase history indicates a high likelihood of interest, especially users who are incentivized to spend (via points).

The low redemption rate of Revolut points signifies untapped potential for user engagement and loyalty. Revolut isn't maximizing the Points feature's ability to provide tangible, frequent value back to the user, weakening it as a differentiator and potentially increasing liability (outstanding points value).

What It Does - AI-Powered Personalized Point Redemption Recommendations

We built a system that analyzes a user's (simulated) transaction history (merchants, categories, potentially frequency/timing) to proactively recommend specific Revolut partner vendors where they can spend their points. Users can explore potential vendors offering discounts and reward programs in exchange for Revolut Points.

We also built a second interface for registering new vendors with Revolut. Small shop owners can upload pictures of their product catalogue, and the system generates a custom marketing campaign for them based on the product categories users frequently buy at similar businesses. Interested users are notified about these campaigns as soon as they launch.

How We Built It

  • Front‑end (Streamlit)

    • Rapid prototyping & deployment – Spin up interactive UIs in minutes, not days.
    • Revolut‑inspired design – Card‑based layout, tabs for navigation, consistent typography.
    • Real‑time interactivity – Built‑in support for plots, insights, file uploads, and state management.
  • Back‑end (Python + Redis)

    • Plain Python modules – Clear separation of business logic, easy to test and extend.
    • Redis cache – Ultra‑low latency reads/writes for user profiles, campaign drafts, and recommendations.
    • LangGraph workflows – Orchestrate LLM calls, data validation (via Pydantic), and storage steps.
  • Campaigner Agent

    • Input: Vendor brochure upload
    • Process: Gemini‑2.5‑Pro multimodal model → Pydantic for structured JSON → LangGraph workflow → Redis database update
    • Output: Draft marketing campaign, refinable by the vendor, then saved to Redis on confirmation (structured‑output step sketched after this list).
  • User Profiling Agent

    • Input: User transaction history
    • Process: A Gemini LLM plus simple filtering extracts top categories, frequent merchants, and spend frequency & timeframes
    • Output: Dynamic JSON profile used for personalized recommendations (profile aggregation sketched after this list).
  • Dashboards

    • User dashboard:
      • RevPoints balance & trends
      • Spending breakdown by category/merchant (line, bar, stacked bar charts)
    • Vendor dashboard:
      • Offer performance metrics
      • Revenue impact and customer engagement insights
  • Explore Feed

    • Goal: Personalized suggestions (stores, restaurants, travel, experiences)
    • Algorithm: Sentence‑Transformer embeddings + recency‑weighted transaction scoring (sketched after this list)
    • Delivery: Pub/Sub system updates each user’s feed in real time when new transactions occur.
  • Recruiter Agent

    • Input: New or existing vendor data (name, email, description)
    • Process: Perplexity API enriches business details → auto‑generate JSON profile → create email template
    • Output: Ready‑to‑send outreach emails for vendor acquisition.
  • Push Notifications

    • Data sources: User Profile + Campaign drafts
    • Logic: Schedule messages based on individual activity patterns
    • Result: Timely, targeted promotions that boost engagement and conversions.
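
To make the Campaigner Agent concrete, here is a minimal sketch of its structured‑output step, assuming a hypothetical `call_gemini` helper and illustrative field names; the real LangGraph workflow adds more nodes around this.

```python
# Minimal sketch of the Campaigner Agent's validation-and-retry step.
# `call_gemini` is a hypothetical stand-in for the Gemini-2.5-Pro multimodal
# call; only the Pydantic validation and Redis caching are illustrated.
import redis
from pydantic import BaseModel, Field, ValidationError

class CampaignDraft(BaseModel):
    vendor_name: str
    headline: str = Field(..., max_length=80)
    offer: str                     # e.g. "2x RevPoints on weekday lunches"
    target_categories: list[str]   # spending categories to match against user profiles

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def draft_campaign(brochure_bytes: bytes, vendor_id: str, max_retries: int = 3) -> CampaignDraft:
    """Ask the LLM for a JSON campaign draft, validate it, and cache it as a draft."""
    for _ in range(max_retries):
        raw = call_gemini(brochure_bytes, schema=CampaignDraft.model_json_schema())  # hypothetical helper
        try:
            draft = CampaignDraft.model_validate_json(raw)
        except ValidationError:
            continue  # retry; the real flow feeds the validation error back into the prompt
        # Drafts live under a separate key until the vendor confirms them in the UI
        r.set(f"campaign:draft:{vendor_id}", draft.model_dump_json())
        return draft
    raise RuntimeError("LLM did not return a valid campaign draft")
```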
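
Similarly, the filtering half of the User Profiling Agent boils down to a small aggregation over the transaction list. The sketch below assumes illustrative field names (`category`, `merchant`, and `timestamp` as datetime objects) and omits the LLM labelling step.

```python
# Minimal sketch of the profile aggregation behind the User Profiling Agent.
# Field names are illustrative, not the exact production schema.
from collections import Counter

def build_profile(transactions: list[dict]) -> dict:
    """Summarize top categories, frequent merchants, and spend cadence."""
    categories = Counter(t["category"] for t in transactions)
    merchants = Counter(t["merchant"] for t in transactions)
    timestamps = sorted(t["timestamp"] for t in transactions)
    span_days = max((timestamps[-1] - timestamps[0]).days, 1)
    return {
        "top_categories": [c for c, _ in categories.most_common(3)],
        "frequent_merchants": [m for m, _ in merchants.most_common(5)],
        "transactions_per_week": round(len(transactions) / (span_days / 7), 2),
        "first_seen": timestamps[0].isoformat(),
        "last_seen": timestamps[-1].isoformat(),
    }
```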
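
The Explore Feed ranking combines Sentence‑Transformer embeddings with recency‑weighted transaction scores. The sketch below uses an assumed model name (`all-MiniLM-L6-v2`) and half‑life rather than the tuned production values.

```python
# Minimal sketch of the recency-weighted embedding ranking for the Explore Feed.
# Assumes each transaction/campaign dict has a "description" text field and a
# timezone-aware "timestamp"; model name and half-life are illustrative.
from datetime import datetime, timezone
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def rank_campaigns(transactions: list[dict], campaigns: list[dict],
                   half_life_days: float = 14.0) -> list[dict]:
    """Score campaigns by similarity to the user's transactions, weighted by recency."""
    now = datetime.now(timezone.utc)
    tx_emb = model.encode([t["description"] for t in transactions], normalize_embeddings=True)
    # Exponential decay: a transaction `half_life_days` old counts half as much as one from today
    ages = np.array([(now - t["timestamp"]).days for t in transactions])
    weights = 0.5 ** (ages / half_life_days)

    camp_emb = model.encode([c["description"] for c in campaigns], normalize_embeddings=True)
    # Cosine similarities (embeddings are normalized), collapsed to one weighted score per campaign
    scores = (camp_emb @ tx_emb.T) @ weights / weights.sum()
    order = np.argsort(scores)[::-1]
    return [{**campaigns[i], "score": float(scores[i])} for i in order]
```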

Challenges We Ran Into

| Situation | Task | Action | Result |
| --- | --- | --- | --- |
| Dependency conflicts between LangGraph, LangChain, and Streamlit versions | Get workflows and UI running together | Pinned compatible versions; isolated virtual environments; wrote custom adapters to bridge minor API changes | Resolved runtime errors and ensured stable CI builds; added version checks to the documentation |
| Unexpected outputs from the Gemini‑2.5‑Pro multimodal model | Extract structured campaign data from vendor brochures | Implemented a function‑calling schema in LLM prompts; wrapped outputs in Pydantic models; added validation & retry logic | Achieved consistent JSON output; reduced parsing errors by 80% |
| LangGraph workflow failures due to resolution conflicts in chained LLM calls | Coordinate multiple agent calls with shared context | Upgraded to the latest LangGraph release; introduced explicit context‑passing nodes; wrote unit tests for each sub‑workflow | Workflows stabilized; error rates dropped and debug logs surfaced root causes faster |
| Backend integration issues between Python modules and the Redis cache | Store & retrieve campaign drafts, user profiles, and recommendations | Standardized the data serialization format (MessagePack); added Redis schema migration scripts; monitored Redis memory & eviction metrics | Achieved near‑real‑time performance; eliminated data corruption under high load |
| Real‑time Explore Feed not updating on new transactions | Push personalized recommendations instantly | Built a Pub/Sub layer with Redis Streams (sketched below); debounced rapid updates; added health checks and auto‑replay for missed events | Feed latency dropped below 200 ms; users saw new recommendations immediately after transactions |
| Designing human‑in‑the‑loop review for the Campaigner Agent | Allow vendors to refine AI‑drafted campaigns | Added a “Review & Edit” UI step in Streamlit; stored draft state separately from confirmed state; logged user edits | |
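
As a concrete illustration of the Pub/Sub fix above, here is a minimal sketch of a Redis Streams producer/consumer pair, assuming an illustrative stream name and a hypothetical `refresh_feed` helper; debouncing and auto‑replay are omitted.

```python
# Minimal sketch of the Redis Streams layer: the backend appends transaction
# events, and a consumer rebuilds the affected user's Explore Feed.
# Stream name and `refresh_feed` are illustrative/hypothetical.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
STREAM = "transactions"

def publish_transaction(user_id: str, tx: dict) -> None:
    """Producer side: append a new transaction event to the stream."""
    r.xadd(STREAM, {"user_id": user_id, "payload": json.dumps(tx)})

def consume_and_refresh(last_id: str = "$") -> None:
    """Consumer side: block for new events, then refresh each affected user's feed."""
    while True:
        events = r.xread({STREAM: last_id}, block=5000, count=10)
        for _stream, entries in events:
            for entry_id, fields in entries:
                last_id = entry_id
                refresh_feed(fields["user_id"], json.loads(fields["payload"]))  # hypothetical helper
```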

Accomplishments That We’re Proud Of

  • End‑to‑end AI automation: From brochure upload to a live campaign, and on to customer recommendations, in under 2 minutes
  • Robust real‑time feed: Personalized Explore Feed updates per transaction
  • Seamless human‑AI collaboration: Intuitive “review & refine” flow for campaign drafts
  • Modular architecture: LangGraph agents, Python services, and Streamlit front‑end cleanly decoupled

What We Learned

  • Version management matters: Pinning LLM‑related libraries avoids unexpected breaking changes (mamba worked better for us for managing library versions)
  • Structured prompts & function calling: Critical for reliable multimodal LLM outputs
  • Observability is key: Detailed logging in LangGraph and Redis helped debug subtle race conditions
  • Human‑in‑loop design: Balancing automation with control boosts user trust and satisfaction
  • Iterative UI/UX: Rapid Streamlit prototyping surfaces usability issues early

What’s Next for RevPoints+

  1. Testing of campaign variants
    • Automatically generate multiple campaign drafts and measure user engagement
  2. Dynamic reward tiers
    • Adjust point valuations in real time based on demand, seasonality, and vendor goals
  3. Cross‑vendor promotions
    • Bundle offers from complementary merchants (e.g., coffee + co‑working) using graph algorithms
  4. Mobile app wrapper
    • Convert the Streamlit UI into iOS/Android apps with push‑notification integration
  5. Open API for partners
    • Expose secure endpoints so third‑party developers can build on RevPoints+ ecosystem

Built With

Python · Streamlit · Redis · LangGraph · LangChain · Pydantic · Gemini 2.5 Pro · Sentence‑Transformers · Perplexity API