
Agenda
Please note that the displayed time slots are not final; the full agenda will be published in the next few weeks. Stay tuned!
- 08:00 - 09:00
- 09:35 - 10:05
- Hall A
LLMs are evolving from single, monolithic systems into collaborative networks of specialized agents, each with its own role, knowledge, and perspective.
- 09:35 - 10:05
- Hall B
Remember when you could build products that didn't change under your feet every 5 minutes?
- 10:10 - 10:40
- Hall B
What if you could ask your visual data a question, and it showed you exactly what your model missed?
- 10:10 - 10:40
- Hall A
This talk dives deep into the pros and cons of codebase indexing for AI-powered tools...
- 10:45 - 11:00
- Hall A
This talk presents our GenAI-first approach—spanning planning, architecture, agentic automation, and measurement—to scale...
- 10:45 - 11:00
- Hall B
We’ll show how to take a quantized open-weight LLM and load it directly in the browser with WebGPU: no server, no API calls.
- 11:05 - 11:20
- Hall A
- 11:05 - 11:20
- Hall B
This talk explores how to elevate an open-source LLM to frontier-level performance. We’ll start with training-free methods, such as process supervision and outcome supervision, that can deliver up to a 1.5-2x quality boost without modifying the model.
- 11:20 - 11:40
- 11:40 - 11:55
- Hall A
In this talk, Oren Yosifon explores practical methods to reshape base models themselves - from pruning and ablation to targeted tuning - to boost performance, reduce cost, and bypass common model limitations.
- 11:40 - 11:55
- Hall B
- 12:00 - 12:15
- Hall A
In this talk, we’ll walk through the practical process of transforming any website into structured, meaningful context that can power your LLM applications.
- 12:00 - 12:15
- Hall B
- 12:20 - 12:50
- Hall A
Most AI apps don’t fail because of bad models—they fail because they stop learning...
- 12:20 - 12:50
- Hall B
Slow inference, sky-high GPU bills, users complaining about latency—sound familiar? ...
- 12:55 - 13:25
- Hall A
Join us to discover how MCP-UI enables the new web, where users can access their favorite apps uniformly through any agent.
- 12:55 - 13:25
- Hall B
How can they let developers move fast with AI while still holding onto the hard-won practices of good platform engineering?
- 13:30 - 14:00
- Hall A
This talk introduces a pragmatic toolkit for context management, demonstrating how mastering these principles helps explain why an unwanted outcome occurred, how to correct it, and how to prevent it next time.
- 13:30 - 13:45
- Hall B
In this talk, Shahar Polak shares a real-world case study from ImagenAI on building and deploying AI agents that tackle these exact challenges in production.
- 13:55 - 14:45
- 14:40 - 15:10
- Hall B
In a series of real-world application hacking demos, I’ll demonstrate how developers mistakenly trust LLMs in generative AI code assistants...
- 14:40 - 15:10
- Hall A
What if your AI could think like a detective before searching?
- 15:15 - 15:30
- Hall B
In this talk, we’ll cover how to take image generation from personal use to a practical tool for developers in production - from choosing the right model and mastering prompt engineering to ensuring output quality.
- 15:15 - 15:30
- Hall A
In this talk, we’ll share how this system works end-to-end: UI patterns for contributors, heuristics for safe merges, approval flows, and rollout metrics.
- 15:35 - 15:50
- Hall B
In this talk, we introduce CAIR (Counterfactual-based Agent Influence Ranker), recently accepted at EMNLP 2025, and show how it helps developers answer this question at inference time.
- 15:35 - 15:50
- Hall A
This session covers practical know-how learned through painful production iterations: how to build validation frameworks that catch LLM errors before they reach users, architectural patterns for constraining problem spaces without losing effectiveness, and techniques for creating evidence-based reasoning that can be audited and improved systematically.
- 15:55 - 16:10
- Hall B
Fasten your mental seatbelts—we're rocketing through a rollercoaster ride unraveling the hidden biases of Large Language Models (LLMs).
- 15:55 - 16:10
- Hall A
Everyone is excited about conversational AI. Everyone is building their own chatbot, until they have to make a conversation behave in production.
- 16:15 - 16:30
- Hall B
This talk explores practical techniques for debugging LLMs, focusing on systematic troubleshooting methodologies and analysis approaches you can create yourself.
- 16:30 - 16:45
- Hall A
What are agents, and how can they be leveraged to revolutionize GenAI systems?