Inspiration

This project was inspired by the need for autonomous project management coordination systems that can provide continuous oversight without making implementation decisions. The team recognized that traditional project management tools often lack the ability to proactively identify risks and coordinate complex workflows, especially in agile environments where tasks and dependencies frequently change.

What it does

The PM Agentic Workflow Backend is an AI-powered project management coordination system that leverages four specialized AI agents to analyze, track, and optimize project delivery. The system provides autonomous oversight of project planning, task coordination, risk monitoring, and stakeholder reporting without making implementation decisions.

Key features include:

  • Autonomous project state analysis using AI agents
  • Real-time risk detection and monitoring
  • Task progress tracking and stall detection
  • Milestone sequencing and dependency analysis
  • Executive reporting generation
  • Dynamic ticket splitting for complex topics
  • MongoDB-based data persistence
  • FastAPI-based RESTful API

How we built it

The system was built with a modular architecture using Python, FastAPI, and MongoDB. The core components include:

Architecture:

  • FastAPI backend with async/await support for performance
  • MongoDB Atlas for data persistence
  • Grok (xAI) API for LLM capabilities
  • Four specialized AI agents working in parallel:
    1. Planning Agent - Analyzes milestone sequencing and dependencies
    2. Coordination Agent - Tracks task progress and detects stalled work
    3. Risk Agent - Monitors delivery risks and identifies blockers
    4. Reporting Agent - Generates stakeholder summaries and executive reports
  • Agent Orchestrator that coordinates all agents for comprehensive analysis
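To make the fan-out pattern concrete, here is a minimal sketch of an orchestrator running agents in parallel with asyncio.gather. The class and method names (AgentOrchestrator, BaseAgent.analyze, PlanningAgent) are illustrative assumptions for this example, not the project's actual implementation:

```python
# Hypothetical sketch of the parallel fan-out pattern; names are illustrative.
import asyncio
from typing import Any, Dict, List


class BaseAgent:
    """Common interface: each agent analyzes the project state asynchronously."""
    name = "base"

    async def analyze(self, project_state: Dict[str, Any]) -> Dict[str, Any]:
        raise NotImplementedError


class PlanningAgent(BaseAgent):
    name = "planning"

    async def analyze(self, project_state: Dict[str, Any]) -> Dict[str, Any]:
        # In the real system this would call the Grok API with a structured prompt.
        return {"milestone_order": [], "dependency_issues": []}


# CoordinationAgent, RiskAgent, and ReportingAgent would follow the same shape.


class AgentOrchestrator:
    """Runs all agents concurrently and merges their findings into one report."""

    def __init__(self, agents: List[BaseAgent]) -> None:
        self.agents = agents

    async def run_analysis(self, project_state: Dict[str, Any]) -> Dict[str, Any]:
        # return_exceptions=True keeps one failing agent from crashing the run.
        results = await asyncio.gather(
            *(agent.analyze(project_state) for agent in self.agents),
            return_exceptions=True,
        )
        report: Dict[str, Any] = {}
        for agent, result in zip(self.agents, results):
            report[agent.name] = (
                {"error": str(result)} if isinstance(result, Exception) else result
            )
        return report


if __name__ == "__main__":
    orchestrator = AgentOrchestrator([PlanningAgent()])
    print(asyncio.run(orchestrator.run_analysis({"tasks": []})))
```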

Key Technical Components:

  • app/ directory containing main application logic
  • app/agents/ with specialized agent implementations
  • app/core/ for configuration, database, and LLM integration
  • app/models/ with Pydantic data models (a sketch follows this list)
  • app/routes/ for API endpoints
  • docs/ directory with comprehensive documentation
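As an illustration of the app/models/ layer, a Pydantic project-state model could look like the sketch below. The field names, TaskStatus values, and document layout are assumptions made for the example rather than the project's actual schema:

```python
# Hypothetical Pydantic models; field names are illustrative, not the real schema.
from datetime import datetime
from enum import Enum
from typing import List, Optional

from pydantic import BaseModel, Field


class TaskStatus(str, Enum):
    TODO = "todo"
    IN_PROGRESS = "in_progress"
    BLOCKED = "blocked"
    DONE = "done"


class Task(BaseModel):
    id: str
    title: str
    status: TaskStatus = TaskStatus.TODO
    depends_on: List[str] = Field(default_factory=list)
    due_date: Optional[datetime] = None
    last_updated: Optional[datetime] = None  # used by stall detection


class Milestone(BaseModel):
    id: str
    name: str
    target_date: Optional[datetime] = None
    task_ids: List[str] = Field(default_factory=list)


class ProjectState(BaseModel):
    """Single document the agents read; maps naturally onto a MongoDB collection."""
    project_id: str
    name: str
    tasks: List[Task] = Field(default_factory=list)
    milestones: List[Milestone] = Field(default_factory=list)
```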

Challenges we ran into

  • LLM Integration: Implementing consistent, structured output from LLMs across different agents (see the sketch after this list)
  • Agent Coordination: Ensuring agents can work in parallel while maintaining data consistency
  • Data Modeling: Designing a flexible project state model that supports all agent analyses
  • Temporal Reasoning: Handling time-sensitive analysis like overdue tasks and timeline realism
  • API Design: Creating a clean, consistent API that exposes all agent capabilities
  • Error Handling: Ensuring agents degrade gracefully without crashing the entire system
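One common way to address the structured-output and error-handling challenges is to ask the LLM for JSON and fall back to a harmless default whenever parsing fails. The sketch below illustrates that pattern; the call_grok helper and the risk schema are stand-ins for whatever client and format the project actually uses:

```python
# Sketch of structured-output parsing with graceful degradation.
# call_grok is a hypothetical async helper standing in for the real LLM client.
import json
import logging
from typing import Any, Dict

logger = logging.getLogger(__name__)

RISK_PROMPT = """Analyze the project state below and respond with JSON only:
{{"risks": [{{"description": str, "severity": "low"|"medium"|"high"}}]}}

Project state:
{state}
"""


async def call_grok(prompt: str) -> str:  # placeholder for the real API call
    raise NotImplementedError


async def analyze_risks(project_state: Dict[str, Any]) -> Dict[str, Any]:
    """Return parsed risks, or an empty result if the LLM output is unusable."""
    try:
        raw = await call_grok(RISK_PROMPT.format(state=json.dumps(project_state)))
        parsed = json.loads(raw)
        if not isinstance(parsed.get("risks"), list):
            raise ValueError("missing 'risks' list")
        return parsed
    except Exception as exc:  # degrade gracefully instead of crashing the run
        logger.warning("Risk agent fell back to empty output: %s", exc)
        return {"risks": [], "degraded": True}
```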

Accomplishments that we're proud of

  • Modular Agent System: Created a flexible agent architecture that is easy to extend with new agents
  • Comprehensive Coverage: Built all four core agents (Planning, Coordination, Risk, Reporting) that work together for complete project oversight
  • Real-time Analysis: Implemented parallel execution of agents for fast, comprehensive project analysis
  • Structured Output: Developed robust parsing and formatting for agent outputs to ensure consistent data structures
  • Full API Coverage: Created complete API endpoints for all project management operations and agent capabilities (a representative route is sketched after this list)
  • Documentation: Provided comprehensive documentation for developers to understand and extend the system
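To illustrate the API surface, a minimal FastAPI route that triggers a full analysis might look like the following. The path, response shape, and get_orchestrator dependency are assumptions made for the example, not the project's actual endpoints:

```python
# Hypothetical FastAPI route; the path and dependency wiring are illustrative.
from fastapi import APIRouter, Depends, HTTPException

router = APIRouter(prefix="/projects", tags=["analysis"])


def get_orchestrator():
    # In the real app this would come from app/core/ wiring (LLM client, DB, agents).
    raise NotImplementedError


@router.post("/{project_id}/analyze")
async def analyze_project(project_id: str, orchestrator=Depends(get_orchestrator)):
    """Run all agents against the stored project state and return their findings."""
    project_state = await orchestrator.load_project_state(project_id)
    if project_state is None:
        raise HTTPException(status_code=404, detail="Project not found")
    insights = await orchestrator.run_analysis(project_state)
    return {"project_id": project_id, "insights": insights}
```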

What we learned

  • Agent Design Principles: Understanding the importance of the "coordination-only" principle where agents provide insights but don't make irreversible changes
  • LLM Prompt Engineering: The critical importance of structured prompts and output formats for reliable LLM integration
  • Async Programming: How to effectively use async/await in Python for handling parallel agent execution
  • Project State Management: How to design a flexible data model that can support multiple agent analyses
  • API Consistency: The value of maintaining consistent response formats across all API endpoints
  • Error Resilience: How to build systems that can gracefully handle LLM failures or data inconsistencies

What's next for typeshii

  • Enhanced Agent Capabilities: Adding more specialized agents for areas like resource allocation, budget tracking, or quality assurance
  • Advanced Risk Modeling: Implementing more sophisticated risk prediction algorithms
  • Improved UI Integration: Developing better frontend integrations to visualize agent insights
  • Multi-Provider Support: Expanding beyond Grok to support other LLM providers
  • Performance Optimization: Further optimizing parallel execution and database queries
  • Advanced Ticket Splitting: Enhancing the ticket splitting agent with more sophisticated topic decomposition
  • Deployment Automation: Improving Docker and Render configurations for easier deployment
  • Testing Framework: Building a comprehensive testing suite for agents and API endpoints
  • User Authentication: Adding JWT/OAuth support for production use
  • WebSocket Integration: Adding real-time updates for agent insights

Built With

  • Python
  • FastAPI
  • MongoDB Atlas
  • Pydantic
  • Grok (xAI) API
