Inspiration
As mobile developers, we have repeatedly hit frustration during the crucial debugging phase, spending disproportionate amounts of time resolving app bugs that severely hinder productivity. Despite exploring robust monitoring tools like Sentry, Instabug, and BetterStack, and AI-assisted development tools such as Devin, Windsurf, and Cursor, we noticed a significant gap: no existing solution provided a fully integrated, agent-based mobile debugging workflow capable of automatic ingestion, context-aware analysis, automated replication, and test script generation. This realization inspired us to build Alto, a complete agentic debugging system that transforms how mobile developers approach debugging.
What it does
Alto implements a comprehensive, agentic debugging pipeline that integrates directly with bug reporting platforms (demonstrated with Instabug), code repositories (GitHub via `gitingest`), and notifications (Slack). When a bug is reported, Alto activates an orchestrated series of specialized agents:
- ExtractorAgent: Receives data from Instabug, classifying the bug type, priority, and error source. Driven by asi1-mini.
- TriageAgent: Pulls relevant code context from the specified repository using the `gitingest` library via a locally running agent.
- HypothesisAgent: Uses Google Gemini 2.5 Pro to generate root-cause hypotheses, suggest relevant code locations, and estimate bug severity.
- MaestroGenAgent (experimental): Generates Maestro UI automation scripts, leveraging LLMs to automate bug-replication steps based on the provided data.
- ReplicatorAgent: Uses the generated Maestro YAML script to set up a local environment, ensuring the bug is reproducible and verifying the replication steps.
- NotifierAgent: Sends detailed results, including hypotheses, bug severity, and replication status, to Slack via webhooks.
This workflow dramatically accelerates the debugging process from initial reporting through actionable insights, automated replication, and verification.
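As a concrete sketch of the pipeline's final step, the NotifierAgent's Slack message can be built as a simple Incoming Webhook payload. The field names and message layout below are illustrative assumptions, not Alto's exact schema:

```python
def build_slack_payload(hypothesis: str, severity: str, replicated: bool) -> dict:
    """Build a Slack Incoming Webhook body summarising one analysis run.

    Hypothetical sketch: the exact fields Alto reports may differ.
    """
    text = (
        "*Alto bug report*\n"
        f"Severity: {severity}\n"
        f"Replicated: {'yes' if replicated else 'no'}\n"
        f"Hypothesis: {hypothesis}"
    )
    # Slack Incoming Webhooks accept a JSON body with a "text" field.
    return {"text": text}

# Sending is then a single POST of the JSON body, e.g. with requests:
# requests.post(webhook_url, json=build_slack_payload(...), timeout=10)
```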
How we built it
Alto is developed as a multi-agent system using Fetch.ai’s uagents framework, integrated with Agentverse for agent management:
- Agents: Python-based uAgents (`TriageAgent`, `HypothesisAgent`, `NotifierAgent`, `MaestroGenAgent`) handle specific, modular tasks within the debugging pipeline.
- AI/LLM: We integrated Google's Gemini API (specifically testing `gemini-1.5-flash-latest` and `gemini-2.5-pro-preview`) for the core analysis (hypothesis generation) and Maestro script generation, due to the large context window needed for processing repository content.
- Code Fetching: The `gitingest` library is wrapped by our locally running `TriageAgent` to efficiently pull relevant code context from specified GitHub repositories.
- UI Automation Target: The Maestro framework for mobile UI testing is the target output format for our `MaestroGenAgent`.
- Integrations & APIs: We worked with data formats from Instabug (for bug reports), used `gitingest` for GitHub interaction, and the Slack API (Incoming Webhooks) for notifications.
- Backend/Gateway: A local Flask server integrated with a `GatewayAgent` provides a standard REST API endpoint, demonstrating how external systems (such as a CI/CD pipeline or custom backend) could trigger the workflow.
- Core Language/Libs: Python 3, `requests`, `asyncio`, `Flask`, and the `uAgents` framework.
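The request-response pattern between agents can be sketched with a pair of message types carrying a shared session ID. In Alto these are `uagents.Model` subclasses; plain dataclasses are used here so the shapes stand alone, and the field names are illustrative assumptions rather than the production schema:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class TriageRequest:
    """Hypothetical request sent to the TriageAgent (uagents.Model in Alto)."""
    repo_url: str
    bug_description: str
    # A session ID lets the asynchronous reply be matched to its request.
    session_id: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class TriageResponse:
    """Hypothetical reply carrying the code context pulled via gitingest."""
    session_id: str        # echoed back so the caller can correlate
    code_context: str      # relevant source extracted from the repository
    files_considered: int

def correlate(request: TriageRequest, response: TriageResponse) -> bool:
    """A response belongs to a request iff the session IDs match."""
    return request.session_id == response.session_id
```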
Challenges we ran into
- Designing Agentic Workflows: Architecting a robust multi-agent system required significant design effort. Defining clear agent responsibilities, establishing reliable communication protocols using `uagents.Model`, and orchestrating asynchronous tasks across agents (including implementing a request-response pattern using session IDs) were key challenges, especially with limited examples for this specific debugging domain.
- Bridging Agent/Non-Agent Systems: Integrating external tools and environments posed hurdles:
  - Wrapping the local-only `gitingest` library within the `TriageAgent` required careful handling of execution context (using `run_in_executor` to avoid blocking the asyncio loop).
  - Developing the `GatewayAgent` with Flask allowed standard backends to interact with the uAgent system, but required managing communication between the Flask thread and the agent's async loop (`asyncio.run_coroutine_threadsafe`).
  - Reliably calling external APIs (Gemini, Slack) and handling potential network errors from within asynchronous agents.
- LLM Reliability & Prompt Engineering: Obtaining consistent, accurate, and correctly formatted output (JSON for analysis, YAML for Maestro) from the LLM required extensive prompt engineering. Guiding the LLM to correlate natural language bug reports with specific code logic and generate syntactically valid, useful outputs was an iterative process.
- Problem Validation & Vertical Focus: We spent time researching the mobile debugging landscape, analyzing existing tools (Sentry, Instabug, Devin, Cursor, etc.) to identify a specific underserved niche – the lack of an integrated agentic system combining contextual code analysis with bug reports and test generation – ensuring our solution targeted a real-world pain point.
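The `run_in_executor` pattern mentioned above can be sketched as follows. The blocking library call is replaced here by a stand-in function (`fetch_repo_context` is a hypothetical name, not gitingest's actual API), since the point is the thread-pool hand-off:

```python
import asyncio

def fetch_repo_context(repo_url: str) -> str:
    """Stand-in for a blocking, synchronous library call (e.g. gitingest).

    Hypothetical placeholder: the real call would do network and disk I/O.
    """
    return f"<digest of {repo_url}>"

async def fetch_without_blocking(repo_url: str) -> str:
    # run_in_executor hands the blocking call to the default thread pool,
    # so the agent's asyncio event loop keeps servicing other messages.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, fetch_repo_context, repo_url)
```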
Accomplishments that we're proud of
- End-to-End Working Prototype: We successfully built and demonstrated a multi-agent system that orchestrates the core workflow: receiving inputs via the Gateway, fetching code context with the TriageAgent, performing LLM-based analysis with the HypothesisAgent, and sending results via the NotifierAgent.
- Integration of Local Tools: We effectively wrapped the local `gitingest` tool within an agent, demonstrating a practical pattern for incorporating specialized libraries into the Fetch.ai agent network.
- LLM-Powered Analysis & Generation: We successfully leveraged a large language model (Gemini) with significant code context (provided by `gitingest`) to generate debugging hypotheses and syntactically plausible Maestro UI automation scripts, showcasing the potential of LLMs in this domain.
- Agent Gateway Implementation: We built a functional `GatewayAgent` using Flask, providing a clean REST API interface for external systems to trigger complex agent workflows.
- Identifying a Niche & Designing a Solution: We are proud to have moved beyond simply applying AI generally, instead identifying a specific industry gap in mobile debugging and designing a novel, agent-based solution (Alto) to address it.
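The gateway pattern, Flask serving HTTP in one thread while an asyncio loop runs in another, can be sketched minimally. Here a bare event loop stands in for the uAgent runtime, and the endpoint name and payload shape are illustrative assumptions:

```python
import asyncio
import threading

from flask import Flask, jsonify, request

app = Flask(__name__)
# A dedicated event loop standing in for the agent runtime's loop.
agent_loop = asyncio.new_event_loop()

async def trigger_workflow(bug_report: dict) -> dict:
    """Hypothetical entry point; in Alto this would forward the report
    into the agent pipeline (e.g. via ctx.send)."""
    return {"status": "accepted", "bug_id": bug_report.get("id")}

@app.route("/report", methods=["POST"])
def report():
    # Bridge from the Flask worker thread into the agent's async loop:
    # run_coroutine_threadsafe schedules the coroutine and returns a
    # concurrent.futures.Future we can block on from this thread.
    future = asyncio.run_coroutine_threadsafe(
        trigger_workflow(request.get_json(force=True)), agent_loop
    )
    return jsonify(future.result(timeout=10))

def start_agent_loop():
    """Run the agent loop forever in a background daemon thread."""
    threading.Thread(target=agent_loop.run_forever, daemon=True).start()
```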
What we learned
This hackathon provided deep practical experience with agent-based systems and the Fetch.ai ecosystem. We gained proficiency with the uagents framework, Agentverse concepts, and the intricacies of designing communication protocols for asynchronous inter-agent messaging. We learned firsthand the difference between simple LLM API calls and architecting agentic workflows where specialized agents collaborate, manage state, and potentially make decisions. We encountered the real-world challenges of prompt engineering for complex code-related tasks and ensuring LLM reliability. Critically, we validated the immense potential for targeted agentic systems to create significant value in established industries like mobile QA, where current solutions often lack deep AI integration and automation.
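One concrete lesson about LLM reliability: models asked for strict JSON often wrap their answer in a markdown fence, so a small guard before parsing saves a re-prompt. This is a hypothetical helper illustrating the idea, not Alto's actual validation code:

```python
import json

def parse_llm_json(raw: str) -> dict:
    """Parse JSON an LLM was asked to emit, tolerating a markdown fence.

    Raises ValueError/JSONDecodeError on invalid output, so the caller
    can retry with a corrective prompt.
    """
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        # Drop a ```json ... ``` wrapper if the model added one.
        cleaned = cleaned.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(cleaned)
```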
What's next for Alto
We are incredibly excited by Alto's potential and are committed to pushing this concept forward. Our immediate next steps include:
- Direct Webhook Integration: Implement robust webhook handlers in a `CollectorAgent` to directly ingest reports from Instabug and Sentry.
- Refine LLM Analysis: Continue iterating on prompts for higher accuracy in bug localization and hypothesis generation. Critically evaluate integrating ASI-1 Mini for comparison and to align with the Fetch.ai ecosystem.
- Enhance Maestro Generation: Improve the reliability and coverage of the `MaestroGenAgent`'s output.
- Develop User Interface: Build a dashboard for users to manage integrations, view analysis history, and potentially provide feedback.
- Startup Exploration: Validate the business case with mobile development teams, gather user feedback, and pursue building Alto into a dedicated startup. We genuinely believe this agent-based approach to debugging has unicorn potential!

