SquadHang — AI Group Trip Planner

Inspiration

Planning a group trip is somehow always tedious. Between juggling everyone's schedules, chasing responses across a dozen different threads, and manually comparing flights and hotels, the planning process often kills the vibe before the trip even starts. We've all been there: the Google Sheet no one updates, the Telegram thread that goes 300 messages deep without a decision, the friend who says "just pick something" and then complains about the pick.

We wanted to build something that lives exactly where the conversation already happens — the group chat — and takes the chaos out of coordinating travel. SquadHang was born from that frustration.

What it does

SquadHang is an AI-powered trip planning assistant that lives inside your Telegram group. You just add the bot to your chat, mention it with a travel idea, and it takes it from there.

Through a natural, conversational flow it:

  • Collects everyone's preferences one question at a time — destination, dates, group size, transport preference, lodging type, budget, and activities
  • Presents a summary and gets the whole group to agree before doing anything
  • Searches in parallel for real flights and hotels (via the Amadeus test API) and activities (via the Google Places API) once the plan is confirmed
  • Presents numbered options for each category so group members can simply reply "I want option 2 for flights"
  • Persists all results and selections to memory so the conversation can continue across sessions without repeating itself
  • Allows any group member to request alternative options if nothing looks right

No app to download, no account to create, no spreadsheet to maintain. Just your squad, your chat, and a plan.

How we built it

SquadHang is built on AWS using a multi-agent architecture powered by the Strands Agents SDK and Amazon Bedrock with Amazon Nova Pro.

Amazon Nova Pro — The Brain of SquadHang

Amazon Nova Pro is the reasoning engine that makes SquadHang feel like a real conversation partner rather than a rigid form filler. We chose Nova Pro specifically for several key reasons:

Instruction-following precision — The orchestrator system prompt contains dozens of nuanced rules: when to infer intent vs. ask a clarifying question, how to handle multiple group members responding at once, when to trigger specialist agents and when not to. Nova Pro's strong instruction-following capability means the agent reliably respects those rules across long, unpredictable multi-turn conversations.

Tool use reliability — SquadHang's orchestrator coordinates up to 5 tools per turn (memory reads, memory writes, and specialist agent calls). Nova Pro handles multi-step tool orchestration accurately, correctly chaining dependent calls and knowing when a tool result is sufficient to proceed without over-calling.

Context retention across long conversations — Group trip planning involves many back-and-forth messages, corrections, and tangents. Nova Pro's large context window lets the agent hold the full conversation in view and avoid asking for information already provided — a critical quality-of-life feature in a group setting where no one wants to repeat themselves.

Cost-performance balance for serverless — Each Telegram message triggers a Lambda invocation. Nova Pro delivers frontier-quality reasoning at a cost profile that makes per-message inference practical in a serverless architecture, without needing to sacrifice capability for a cheaper model.

Multi-agent coordination — After specialist agents return results, Nova Pro synthesizes transport, lodging, and activity options from three independent sources into a single coherent, readable reply for the group. The model's summarization and formatting quality directly determines how useful the final output feels to users.

Architecture Overview

Orchestrator Agent — a Strands agent running on AWS Lambda, triggered by Telegram webhooks. It conducts the group conversation, manages trip state, and coordinates all specialist agents using Amazon Nova Pro via Bedrock as its reasoning model.

Specialist Agents — three independent Lambda-based agents, each an expert in its domain:

  • TransportAgent — searches flights and buses using the Amadeus Self-Service API
  • LodgingAgent — searches hotels via Amadeus, with a Booking.com scraper as fallback
  • PlannerAgent — suggests activities and builds an itinerary for the destination

Agent-to-Agent (A2A) Communication — specialist agents are invoked via Lambda Function URLs using the A2A JSON-RPC protocol, authenticated with SigV4. This lets the orchestrator call each specialist synchronously and wait for real results before replying to the group.

Memory & Persistence — trip state, conversation history, research results, and user selections are all stored in DynamoDB. This gives the bot full context across messages and sessions.

Infrastructure — everything runs serverless on AWS: Lambda for compute, DynamoDB for persistence, API Gateway for the Telegram webhook, and IAM roles for secure inter-service communication.

Challenges we ran into

Context window and tool-use limits — Nova Pro has strict limits on what can be passed through tool-use arguments. Our first design passed the full specialist tool results (large text strings) back through the model as tool call arguments, which triggered modelStreamErrorException in Bedrock. We had to rearchitect so specialist tools save their results directly to DynamoDB as a side-effect, and the orchestrator only passes a chat_id to consolidate — keeping tool-use payloads tiny and the model's reasoning clean.
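The rearchitected save flow can be sketched like this, with an in-memory dict standing in for DynamoDB and the function names being ours, not the production tool names:

```python
def persist_results(chat_id: int, category: str, results: list, store: dict) -> dict:
    """Tool body: write the full specialist output to storage as a side effect
    (DynamoDB in production; a plain dict here) and return only a tiny receipt,
    so the model never echoes large payloads through tool-use arguments."""
    store[(chat_id, category)] = results      # full payload stays out of the model
    return {"chat_id": chat_id, "category": category, "saved": len(results)}

def consolidate(chat_id: int, store: dict) -> dict:
    """Second tool: given nothing but a chat_id, read everything back
    for the final group-facing summary."""
    return {cat: res for (cid, cat), res in store.items() if cid == chat_id}
```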

Amadeus API availability errors — The hotel offers search API returns a 400 error if any hotel in the batch has no availability, rejecting the entire request. We had to implement batched hotel fetching (groups of 3) with per-batch error handling so one unavailable property doesn't block all results.
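The batching-with-isolation pattern, sketched with a pluggable `fetch_batch` callable standing in for the actual Amadeus hotel-offers request:

```python
def batched(items, size=3):
    """Yield fixed-size batches (hotel offers are fetched 3 hotels at a time)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def fetch_offers_resilient(hotel_ids, fetch_batch):
    """Call `fetch_batch` once per batch; a 400 caused by one unavailable
    hotel only drops that batch instead of rejecting every result."""
    offers = []
    for batch in batched(hotel_ids):
        try:
            offers.extend(fetch_batch(batch))
        except Exception:
            continue   # one unavailable property must not block the rest
    return offers
```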

Multi-agent latency — Running three specialist agents in a serverless environment, each making external API calls, pushed close to Lambda's timeout limit. Managing timeouts, retries with exponential backoff, and cold starts required careful tuning.
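One way to budget those retries so backoff never sleeps past the remaining Lambda time. This is a generic sketch of the tuning described, not the production retry code:

```python
import random
import time

def backoff_delays(retries=3, base=0.5, cap=8.0):
    """Exponential backoff schedule: base * 2**attempt, capped."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(retries)]

def call_with_budget(fn, deadline_s, retries=3):
    """Retry `fn` with jittered exponential backoff, but give up early
    rather than sleep past the remaining invocation budget."""
    start = time.monotonic()
    for attempt, delay in enumerate(backoff_delays(retries)):
        try:
            return fn()
        except Exception:
            remaining = deadline_s - (time.monotonic() - start)
            if attempt == retries - 1 or remaining <= delay:
                raise
            time.sleep(delay + random.uniform(0, delay / 2))   # jitter
```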

Conversation inference — Getting Nova Pro to correctly interpret short, ambiguous replies in a group chat context (e.g., "5" after asking about group size, or "Medellín" when the destination is already set) required detailed prompt engineering with explicit inference rules tailored to the model's behavior.

Group dynamics — Unlike single-user chat, multiple people can respond to the same question. Teaching Nova Pro to accept any group member's answer and not ask the same question twice took significant iteration on the system prompt.
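A condensed, hypothetical excerpt showing the shape of those explicit inference and turn-taking rules (the real system prompt is far longer and is SquadHang's own):

```python
# Hypothetical excerpt of the orchestrator system prompt: explicit rules for
# interpreting short, ambiguous replies in a multi-user group chat.
INFERENCE_RULES = """\
Rules for interpreting short replies:
1. If the last question was about group size and the reply is a bare number
   (e.g. "5"), treat it as the group size. Do not re-ask.
2. If the reply is a city name and the destination is unset, set it as the
   destination; if the destination is already set, treat it as a correction
   only when the user says so explicitly.
3. Accept an answer from ANY group member. Never repeat a question just
   because a different member is now speaking.
4. If two members give conflicting answers in the same turn, summarize the
   conflict and ask the group to pick one. Do not choose for them.
"""
```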

Accomplishments that we're proud of

  • A fully working end-to-end trip planning flow — from first message to real hotel and flight results — inside a Telegram group chat
  • A clean multi-agent architecture where each specialist agent is independently deployable and testable, all orchestrated by a single Nova Pro-powered agent
  • Persistent trip memory that survives across Lambda invocations and lets the conversation resume naturally
  • The A2A integration pattern that lets a lightweight orchestrator coordinate heavyweight specialist agents without coupling their implementations
  • A resilient hotel search that gracefully handles partial API failures instead of failing entirely
  • Proving that Amazon Nova Pro can reliably handle complex, multi-rule system prompts in a real-time conversational product without hallucinating tool calls or breaking conversation flow
  • Special thanks to Slalom for supporting the team with resources, cloud infrastructure access, and the space to build and test this solution end to end — this project wouldn't have shipped without that backing

What we learned

Nova Pro excels at multi-tool orchestration — when you give it a well-structured system prompt with clear rules, it consistently makes the right tool call at the right moment, chains dependent calls correctly, and knows when not to call a tool. That last point — restraint — is what separates a good agent from a noisy one.

LLM tool-use has hard payload limits — designing tools so the model never needs to pass large content as arguments is a first-class architectural concern, not an afterthought. We redesigned our save flow specifically to keep every tool-use message small.

Prompt engineering for group chats is genuinely different from single-user assistants — ambiguity is higher, turn-taking is non-deterministic, and you need explicit rules for inferring intent from short replies. Nova Pro's instruction-following made this tractable.

Serverless multi-agent systems need careful timeout budgeting — when three agents can each take 30+ seconds, the composition math matters.

Strands + Bedrock is a productive stack — the tool-decoration pattern and built-in event loop handle scaffolding that would otherwise be significant boilerplate, letting us focus on product behavior.
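To make the tool-decoration pattern concrete, here is a generic re-implementation of the idea — not the Strands API itself — showing why it removes boilerplate: the decorator derives a dispatchable registry entry from the function's own signature and docstring:

```python
import inspect

TOOL_REGISTRY = {}

def tool(fn):
    """Minimal stand-in for a tool decorator: register the function plus a
    schema derived from its signature and docstring, so an agent event loop
    can advertise it to the model and dispatch tool calls back to it."""
    TOOL_REGISTRY[fn.__name__] = {
        "fn": fn,
        "description": inspect.getdoc(fn) or "",
        "params": list(inspect.signature(fn).parameters),
    }
    return fn

@tool
def search_flights(origin: str, destination: str, date: str) -> list:
    """Search flights between two cities on a date."""
    return []   # the real version would call the TransportAgent

def dispatch(name: str, **kwargs):
    """What the event loop does when the model emits a tool call."""
    return TOOL_REGISTRY[name]["fn"](**kwargs)
```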

What's next for SquadHang

  • Voting system — let every group member vote on options and automatically surface the most popular choice, using Nova Pro to summarize the consensus
  • Budget tracking — split estimated costs per person and track who has confirmed their share
  • Booking integration — deep-link or direct-book selected options without leaving the chat
  • WhatsApp and Discord support — the core Nova Pro agent is platform-agnostic; the webhook adapter is the only thing that needs to change
  • Post-trip recaps — archive completed trips with a shareable summary of where the squad went, what they did, and what it cost
  • Proactive reminders — ping the group when a trip is approaching or when a selected flight price changes
  • Expanded Nova model usage — explore Nova Lite for low-stakes turns (simple memory reads, confirmations) to reduce cost, while keeping Nova Pro for complex reasoning turns that require multi-step tool use and synthesis
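That last item could start as simple per-turn routing. A heuristic sketch under our own assumptions (the markers and routing rule are illustrative; the model IDs are the public Bedrock identifiers):

```python
# Turns likely to need multi-step tool use and synthesis get Nova Pro.
COMPLEX_MARKERS = ("search", "book", "compare", "plan", "option")

def choose_model(message: str, tools_expected: bool) -> str:
    """Route simple turns (confirmations, memory reads) to Nova Lite and
    complex reasoning turns to Nova Pro, trading cost against capability."""
    text = message.lower()
    if tools_expected or any(m in text for m in COMPLEX_MARKERS):
        return "amazon.nova-pro-v1:0"
    return "amazon.nova-lite-v1:0"
```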
