Search subagent - set configurable exp variables #3205
roblourens merged 9 commits into microsoft:main
Conversation
Pull request overview
This PR makes the search subagent's model endpoint and tool call limit configurable via experiment-based settings, replacing hardcoded values with configurable defaults.
Changes:
- Added two new experiment-based configuration settings: SearchSubagentModel (model endpoint) and SearchSubagentToolCallLimit (max tool calls, default: 4); a sketch of these settings follows the file table below
- Updated SearchSubagentToolCallingLoop to retrieve and use these configuration values when determining which endpoint to use and when building prompts
- Modified SearchSubagentPrompt to accept maxSearchTurns as a prop instead of using the hardcoded MAX_SEARCH_TURNS constant
- Updated SearchSubagentTool to retrieve and pass the configured tool call limit to the tool calling loop
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/platform/configuration/common/configurationService.ts | Added two new experiment-based configuration settings for search subagent: model name and tool call limit |
| src/extension/tools/node/searchSubagentTool.ts | Updated to retrieve the tool call limit from configuration and inject configuration/experimentation services |
| src/extension/prompts/node/agent/searchSubagentPrompt.tsx | Modified interface to accept maxSearchTurns as a prop and removed hardcoded MAX_SEARCH_TURNS constant |
| src/extension/prompt/node/searchSubagentToolCallingLoop.ts | Added getEndpoint method to select model based on configuration, and updated buildPrompt to pass maxSearchTurns from configuration |
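For orientation, here is a rough sketch of what the two new experiment-based settings in configurationService.ts could look like. The defineExpSetting helper, the ExpSetting shape, and the setting id strings are assumptions made for illustration; only the key names SearchSubagentModel / SearchSubagentToolCallLimit and the default of 4 tool calls come from this PR.

```ts
// Minimal stand-ins so this sketch type-checks; the real repo has its own
// setting type and registration helper, whose names here are assumptions.
interface ExpSetting<T> {
	readonly id: string;
	readonly defaultValue: T;
}

function defineExpSetting<T>(id: string, defaultValue: T): ExpSetting<T> {
	return { id, defaultValue };
}

namespace ConfigKey.Advanced {
	// Model endpoint for the search subagent; undefined means "use the main
	// agent's model endpoint", the default behavior described in this PR.
	export const SearchSubagentModel = defineExpSetting<string | undefined>(
		'github.copilot.advanced.searchSubagent.model', undefined);

	// Maximum number of tool calls per search subagent run; the PR's default is 4.
	export const SearchSubagentToolCallLimit = defineExpSetting<number>(
		'github.copilot.advanced.searchSubagent.toolCallLimit', 4);
}
```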
let endpoint = await this.endpointProvider.getChatEndpoint(this.options.request);
if (!endpoint.supportsToolCalls) {
	endpoint = await this.endpointProvider.getChatEndpoint('gpt-4.1');
const modelName = this._configurationService.getExperimentBasedConfig(ConfigKey.Advanced.SearchSubagentModel, this._experimentationService) as ChatEndpointFamily | undefined;
The type casting from string | undefined to ChatEndpointFamily | undefined is unsafe. The configuration setting SearchSubagentModel allows any string value, but ChatEndpointFamily is a union of specific literals: 'gpt-4.1' | 'gpt-5-mini' | 'copilot-base' | 'copilot-fast'. If an invalid model name is configured, it will pass through without validation and potentially cause runtime errors when passed to getChatEndpoint.
Consider adding validation to ensure the configured model name is a valid ChatEndpointFamily value, or handle the case where an invalid value is provided by falling back to the default endpoint.
See below for a potential fix:
private isValidChatEndpointFamily(value: string): value is ChatEndpointFamily {
	return value === 'gpt-4.1'
		|| value === 'gpt-5-mini'
		|| value === 'copilot-base'
		|| value === 'copilot-fast';
}

// get the endpoint to use for the search subagent
private async getEndpoint(request: ChatRequest) {
	const configuredModel = this._configurationService.getExperimentBasedConfig(ConfigKey.Advanced.SearchSubagentModel, this._experimentationService);
	const modelName: ChatEndpointFamily | undefined =
		typeof configuredModel === 'string' && this.isValidChatEndpointFamily(configuredModel)
			? configuredModel
			: undefined;
}

// get the endpoint to use for the search subagent
private async getEndpoint(request: ChatRequest) {
The request parameter is unused in this method. It appears to be a leftover from the previous implementation. Consider removing it to clean up the method signature.
	return context;
}

// get the endpoint to use for the search subagent
Please use /** tsdoc-style comments.
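For instance, the line comment above getEndpoint could become a tsdoc block along these lines (the description wording is only a suggestion):

```ts
/**
 * Gets the endpoint to use for the search subagent: the experiment-configured
 * model when one is set and available, otherwise the main agent's endpoint.
 */
private async getEndpoint(request: ChatRequest) {
```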
let endpoint;
if (!modelName) {
	endpoint = await this.endpointProvider.getChatEndpoint(this.options.request);
getChatEndpoint throws if you ask for a model that it doesn't know about. If you set up an experiment to use haiku, and a user is in an org where Claude is disabled, what do you want to happen? Fall back to a different model? Or should the searchSubagent be disabled when the model you want to use isn't supported?
I think this is why getChatEndpoint isn't set up to easily ask for arbitrary models, because we generally don't want to encourage people to write code which does that. Your scenario is fine as long as you handle these sorts of cases.
Ideally, if it can't get the endpoint with "haiku", it would default to the main agent model. I can update this to include try/catch logic if getChatEndpoint throws an error because a certain model isn't available.
@roblourens is there a way for me to debug in a state where I don't have access to a certain model, so I can confirm my implementation works as expected?
I don't know of a way to configure that. You can hack the code somewhere in the model fetcher or endpoint provider maybe. I think all you need is a try/catch here.
Ok, I can use that. I was worried it would be slow, since this function is called many times during the trajectory & falling into the catch statement tends to be slow relative to a regular function call.
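For reference, here is a minimal sketch of the try/catch fallback discussed in this thread, built only from the calls visible in the diff above; the merged change may be shaped differently:

```ts
// Sketch only: fall back to the main agent's endpoint when the configured
// model (e.g. an experiment pointing at a Claude model) is unknown or disabled.
private async getEndpoint(request: ChatRequest) {
	const modelName = this._configurationService.getExperimentBasedConfig(
		ConfigKey.Advanced.SearchSubagentModel, this._experimentationService) as ChatEndpointFamily | undefined;

	if (modelName) {
		try {
			return await this.endpointProvider.getChatEndpoint(modelName);
		} catch {
			// getChatEndpoint throws for models it doesn't know about or that are
			// disabled for the user's org; fall through to the default endpoint.
		}
	}
	return this.endpointProvider.getChatEndpoint(this.options.request);
}
```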
Rather than hardcoding the model endpoint and the tool call limit for the search subagent, make them configurable via exp settings.
Defaults: max tool calls = 4, subagent endpoint = main agent model endpoint.
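As a rough illustration of how these defaults flow through the change (names other than the ConfigKey entries and the maxSearchTurns prop are assumptions):

```ts
// Sketch only: SearchSubagentTool reads the configured limit (default 4 when
// no experiment overrides it) and the tool calling loop forwards it to the prompt.
const toolCallLimit = this._configurationService.getExperimentBasedConfig(
	ConfigKey.Advanced.SearchSubagentToolCallLimit, this._experimentationService);

// In SearchSubagentToolCallingLoop.buildPrompt, the value ends up as the
// prompt's maxSearchTurns prop, roughly:
//   <SearchSubagentPrompt maxSearchTurns={toolCallLimit} ... />
```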