Screenshots:
- Main dashboard with search features
- Team Expertise Map: expertise data from git history, real-time availability from Jira
- Team member profiles
- Team Capacity: real-time workload from Jira issues
- Knowledge Risk Analysis: domains where knowledge is concentrated in one person
- Smart Recommendations: top experts by domain
- Suggested learning pairs
- Real-Time Integrations: test MCP integrations with Jira & Slack (no Rovo Premium required)
💡 The Spark: When Expertise Became Invisible
It started with a simple observation: our best engineers were drowning in review requests while junior developers waited days for feedback. The irony? Nobody knew who the real experts were for specific domains. Org charts don't capture expertise—code does.
I watched a senior engineer spend 45 minutes trying to find the right person to review a React state management change. The "expert" listed in the documentation had left the company six months ago. That's when I realized: we're solving collaboration with static tools in a dynamic environment.
🔭 The Vision: Making Expertise Actionable
Traditional team management assumes expertise is known and documented. The reality is that expertise is fluid, distributed, and often invisible. I envisioned a system that could:
- See the invisible: Analyze code ownership, commit patterns, and review history to map real expertise.
- Think intelligently: Route work based on actual expertise, not job titles.
- Adapt continuously: Learn as the team evolves, never requiring manual updates.
- Integrate seamlessly: Work within the tools teams already use (Confluence, Jira, Slack).
The goal wasn't just another productivity tool—it was building team intelligence infrastructure.
⚙️ How We Built It
Phase 1: The Foundation (Expertise Inference)
I started with the hardest problem: expertise inference. How do you quantify expertise from code? I built an analyzer that examines Git commit history, authorship patterns, code review participation, and file ownership distribution.
The breakthrough came when I realized expertise isn't just about the quantity of commits—it's about consistency over time and peer recognition through reviews.
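As an illustrative sketch only (the shipped analyzer's exact signals and weights aren't described in this post, so the function name, inputs, and the 0.4/0.2/0.4 blend below are assumptions), a score combining consistency, volume, and peer recognition might look like:

```javascript
// Hypothetical expertise score from git history: consistency over time and
// peer recognition through reviews outweigh raw commit volume.
function computeExpertise(commits, reviewsGiven, now = Date.now()) {
  const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

  // Consistency: fraction of the last 26 weeks with at least one commit.
  const activeWeeks = new Set(
    commits
      .filter(c => now - c.timestamp < 26 * WEEK_MS)
      .map(c => Math.floor((now - c.timestamp) / WEEK_MS))
  );
  const consistency = activeWeeks.size / 26;

  // Volume: raw commit count with diminishing returns.
  const volume = Math.min(1, Math.log10(1 + commits.length) / 3);

  // Peer recognition: reviews performed, also with diminishing returns.
  const recognition = Math.min(1, Math.log10(1 + reviewsGiven.length) / 2);

  // Blend reflecting the insight above: consistency + recognition > volume.
  return 0.4 * consistency + 0.2 * volume + 0.4 * recognition;
}
```

A steady contributor who also reviews others' work scores higher under this blend than someone with a single large burst of commits.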
Phase 2: The Intelligence Layer (Routing Algorithm)
Building the routing algorithm was the most intellectually challenging part. I needed to balance expertise matching with capacity awareness to avoid burnout. I implemented a weighted scoring system using multi-criteria decision analysis:
$$RouteScore(expert, task) = \alpha \cdot E_{match} + \beta \cdot C_{available} + \gamma \cdot R_{historical}$$
Where:
- $E_{match}$ = expertise alignment (0–1)
- $C_{available}$ = current capacity (0–1)
- $R_{historical}$ = historical response quality (0–1)
- $\alpha, \beta, \gamma$ = configurable weights (defaults: 0.5, 0.3, 0.2)
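A minimal JavaScript sketch of that scoring, using the default weights from the post; the shapes of the `expert` and `task` objects and the `expertiseMatch` helper are illustrative assumptions, not the actual implementation:

```javascript
// Average the expert's per-domain score (0–1) over the task's domains.
function expertiseMatch(expert, task) {
  if (task.domains.length === 0) return 0;
  const total = task.domains.reduce((sum, d) => sum + (expert.domains[d] ?? 0), 0);
  return total / task.domains.length;
}

// RouteScore = α·E_match + β·C_available + γ·R_historical
function routeScore(expert, task, { alpha = 0.5, beta = 0.3, gamma = 0.2 } = {}) {
  const eMatch = expertiseMatch(expert, task);   // expertise alignment
  const cAvailable = 1 - expert.utilization;     // capacity, e.g. from Jira workload
  const rHistorical = expert.responseQuality;    // past review response quality
  return alpha * eMatch + beta * cAvailable + gamma * rHistorical;
}

// Route the task to the highest-scoring candidate.
function routeTask(experts, task) {
  return experts.reduce((best, e) =>
    routeScore(e, task) > routeScore(best, task) ? e : best);
}
```

With these weights, a slightly less expert but far less loaded engineer can win the assignment, which is the capacity awareness the formula is meant to encode.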
Phase 3: The Integration
This is where Atlassian Forge became game-changing. Instead of building infrastructure from scratch, Forge let me focus on core logic. I built three interconnected apps:
- Confluence Hub: Real-time expertise dashboard.
- Jira Tracker: Capacity monitoring.
- Rovo Agent: AI-powered review routing.
The Model Context Protocol (MCP) was my secret weapon—it enabled real-time data flow between apps and external systems (Slack, Bitbucket) without custom APIs.
🚧 Challenges We Ran Into
1. The Cold Start Problem
- Problem: New teams have no historical data. How do you map expertise with zero commits?
- Solution: I implemented a hybrid approach—analyze existing git history if available, otherwise use a simple self-assessment wizard that gets refined over time.
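The hybrid fallback can be sketched as follows; the wizard itself isn't shown, and the thresholds and blend weights here are assumptions for illustration:

```javascript
// Blend git-derived expertise with wizard self-assessment, shifting trust
// toward git data as history accumulates (50-commit threshold and the
// 0.3/0.7 vs 0.7/0.3 weights are assumptions).
function resolveExpertise(gitScore, selfAssessed, commitCount) {
  if (gitScore === null) return selfAssessed;     // brand-new team: wizard only
  if (commitCount < 50) {
    return 0.3 * gitScore + 0.7 * selfAssessed;   // thin history: lean on wizard
  }
  return 0.7 * gitScore + 0.3 * selfAssessed;     // rich history: git dominates
}
```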
2. Privacy vs. Intelligence
- Problem: Teams worried about "Big Brother" tracking their work.
- Solution: Transparency and control. Every expertise inference is explainable, all analysis happens inside the team's own Atlassian workspace, and users can opt out of specific domains.
3. The Availability Paradox
- Problem: The best experts were always unavailable because they were always assigned work.
- Solution: Implemented "capacity throttling"—the system automatically limits assignments to high-demand experts and intentionally routes work to adjacent experts to develop depth.
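A sketch of the throttling idea: cap per-expert assignments and fall through to the next-ranked (adjacent) expert. The cap of 5 per week and the object shapes are assumptions:

```javascript
// Hypothetical weekly assignment cap per expert.
const MAX_WEEKLY_ASSIGNMENTS = 5;

// rankedCandidates: experts sorted best-first; weeklyCounts: Map of id → count.
function assignWithThrottling(rankedCandidates, weeklyCounts) {
  for (const expert of rankedCandidates) {
    const count = weeklyCounts.get(expert.id) ?? 0;
    if (count < MAX_WEEKLY_ASSIGNMENTS) {
      weeklyCounts.set(expert.id, count + 1);
      return expert; // may be an adjacent expert, which deliberately builds depth
    }
  }
  return null; // everyone is saturated: queue the review for later
}
```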
4. Performance at Scale
- Problem: Analyzing 10,000+ commits per query was too slow (8+ seconds).
- Solution: Implemented smart caching with TTL-based invalidation. We process only new commits and merge them with cached profiles, reducing query time to 200ms.
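The incremental-cache idea can be sketched like this; the function names, injected helpers, and the 1-hour TTL are assumptions, not the production code:

```javascript
// TTL cache of expertise profiles: on a warm hit, analyze only commits
// newer than the cached snapshot and merge them into the cached profile.
const TTL_MS = 60 * 60 * 1000; // assumed 1-hour TTL
const profileCache = new Map(); // repo → { profile, lastSha, cachedAt }

function getProfile(repo, fetchCommitsSince, analyze, merge, now = Date.now()) {
  const entry = profileCache.get(repo);
  if (entry && now - entry.cachedAt < TTL_MS) {
    // Warm path: fetch only commits added since the cached snapshot.
    const fresh = fetchCommitsSince(repo, entry.lastSha);
    if (fresh.length === 0) return entry.profile;
    const profile = merge(entry.profile, analyze(fresh));
    profileCache.set(repo, { profile, lastSha: fresh[0].sha, cachedAt: now });
    return profile;
  }
  // Cold path (or expired TTL): full analysis, then cache the result.
  const commits = fetchCommitsSince(repo, null);
  const profile = analyze(commits);
  profileCache.set(repo, { profile, lastSha: commits[0]?.sha ?? null, cachedAt: now });
  return profile;
}
```

The expensive full scan happens at most once per TTL window; every query in between touches only the delta, which is how an 8-second analysis can drop to a few hundred milliseconds.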
🧠 What I Learned
- Expertise is temporal: People's knowledge evolves. Building a system that learns continuously is far more valuable than one that needs manual updates.
- Integration > Features: Teams won't adopt a tool that disrupts their workflow. Building within Atlassian was the right call—zero friction adoption.
- MCP is transformative: The Model Context Protocol isn't just an API standard—it's a paradigm shift. Building context-aware systems becomes trivial. This project would've taken 3x longer without MCP.
🏆 The Impact (Early Testing)
In early testing with a 25-person engineering team:
- 60% reduction in review assignment time (from ~30min to ~12min).
- 40% faster review cycles (experts respond quicker when properly matched).
- Zero misrouted reviews after week 2.
- 92% daily active usage after 1 week.
🚀 What's Next for ExpertMatch
This is version 4.5.0—production-ready but just the beginning.
- Short-term: GitHub/GitLab integration and Machine Learning expertise prediction.
- Long-term: An internal "Expertise Marketplace" where teams can borrow experts from other departments.
Vision: Every engineering team should have real-time team intelligence. Not just project management, but people intelligence. That's the future I'm building.
Built With
- atlassian
- bitbucket
- confluence
- css3
- forge
- javascript
- jira-cloud
- machine-learning
- mcp
- node.js
- react
- rovo-agent
- slack-api

