
Search subagent - set configurable exp variables #3205

Merged
roblourens merged 9 commits into microsoft:main from 24anisha:anisha/search_config_tools_model
Jan 29, 2026

Conversation

@24anisha
Contributor

Rather than hardcoding the model endpoint and the tool call limit for the search subagent, make them configurable via exp settings.

Defaults: max tool calls = 4 and subagent endpoint = main agent model endpoint.
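
A rough sketch of how the loop might read these experiment-based settings, assuming the service and key names quoted in the review excerpts below (not the exact implementation):

	// Hypothetical reads inside SearchSubagentToolCallingLoop; names taken from the review excerpts below.
	const modelName = this._configurationService.getExperimentBasedConfig(
		ConfigKey.Advanced.SearchSubagentModel, this._experimentationService);
	const maxSearchTurns = this._configurationService.getExperimentBasedConfig(
		ConfigKey.Advanced.SearchSubagentToolCallLimit, this._experimentationService) ?? 4; // the default of 4 may instead live on the config key itself
	// When modelName is undefined, the main agent's model endpoint is used.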

Contributor

Copilot AI left a comment

Pull request overview

This PR makes the search subagent's model endpoint and tool call limit configurable via experiment-based settings, replacing hardcoded values with configurable defaults.

Changes:

  • Added two new experiment-based configuration settings: SearchSubagentModel (model endpoint) and SearchSubagentToolCallLimit (max tool calls, default: 4)
  • Updated SearchSubagentToolCallingLoop to retrieve and use these configuration values when determining which endpoint to use and when building prompts
  • Modified SearchSubagentPrompt to accept maxSearchTurns as a prop instead of using the hardcoded MAX_SEARCH_TURNS constant
  • Updated SearchSubagentTool to retrieve and pass the configured tool call limit to the tool calling loop

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.

Changed files:

  • src/platform/configuration/common/configurationService.ts: Added two new experiment-based configuration settings for the search subagent: model name and tool call limit
  • src/extension/tools/node/searchSubagentTool.ts: Updated to retrieve the tool call limit from configuration and inject the configuration/experimentation services
  • src/extension/prompts/node/agent/searchSubagentPrompt.tsx: Modified the interface to accept maxSearchTurns as a prop and removed the hardcoded MAX_SEARCH_TURNS constant
  • src/extension/prompt/node/searchSubagentToolCallingLoop.ts: Added a getEndpoint method to select the model based on configuration, and updated buildPrompt to pass maxSearchTurns from configuration

let endpoint = await this.endpointProvider.getChatEndpoint(this.options.request);
if (!endpoint.supportsToolCalls) {
endpoint = await this.endpointProvider.getChatEndpoint('gpt-4.1');
const modelName = this._configurationService.getExperimentBasedConfig(ConfigKey.Advanced.SearchSubagentModel, this._experimentationService) as ChatEndpointFamily | undefined;

Copilot AI Jan 27, 2026

The type casting from string | undefined to ChatEndpointFamily | undefined is unsafe. The configuration setting SearchSubagentModel allows any string value, but ChatEndpointFamily is a union of specific literals: 'gpt-4.1' | 'gpt-5-mini' | 'copilot-base' | 'copilot-fast'. If an invalid model name is configured, it will pass through without validation and potentially cause runtime errors when passed to getChatEndpoint.

Consider adding validation to ensure the configured model name is a valid ChatEndpointFamily value, or handle the case where an invalid value is provided by falling back to the default endpoint.

See below for a potential fix:

	private isValidChatEndpointFamily(value: string): value is ChatEndpointFamily {
		return value === 'gpt-4.1'
			|| value === 'gpt-5-mini'
			|| value === 'copilot-base'
			|| value === 'copilot-fast';
	}

	// get the endpoint to use for the search subagent
	private async getEndpoint(request: ChatRequest) {
		const configuredModel = this._configurationService.getExperimentBasedConfig(ConfigKey.Advanced.SearchSubagentModel, this._experimentationService);
		const modelName: ChatEndpointFamily | undefined =
			typeof configuredModel === 'string' && this.isValidChatEndpointFamily(configuredModel)
				? configuredModel
				: undefined;

}

// get the endpoint to use for the search subagent
private async getEndpoint(request: ChatRequest) {

Copilot AI Jan 27, 2026

The request parameter is unused in this method. It appears to be a leftover from the previous implementation. Consider removing it to clean up the method signature.

return context;
}

// get the endpoint to use for the search subagent
Member

Please use /** tsdoc style comments
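
For illustration only, the line comment above could become a TSDoc comment along these lines:

	/**
	 * Gets the endpoint to use for the search subagent.
	 */
	private async getEndpoint(request: ChatRequest) {
		// ...
	}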


let endpoint;
if (!modelName) {
endpoint = await this.endpointProvider.getChatEndpoint(this.options.request);
Member

getChatEndpoint throws if you ask for a model that it doesn't know about. If you set up an experiment to use haiku, and a user is in an org where Claude is disabled, what do you want to happen? Fall back to a different model? Or should the searchSubagent be disabled when the model you want to use isn't supported?

I think this is why getChatEndpoint isn't set up to easily ask for arbitrary models, because we generally don't want to encourage people to write code which does that. Your scenario is fine as long as you handle these sorts of cases.

Contributor Author

Ideally, if it can't get the endpoint with "haiku", it would default to the main agent model. I can update this to include try/catch logic if getChatEndpoint throws an error because a certain model isn't available.
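
A minimal sketch of that fallback, assuming the method shape and service names from the excerpts above (an illustration, not the merged code):

	/**
	 * Gets the endpoint to use for the search subagent, falling back to the main agent's endpoint.
	 */
	private async getEndpoint() {
		const modelName = this._configurationService.getExperimentBasedConfig(ConfigKey.Advanced.SearchSubagentModel, this._experimentationService) as ChatEndpointFamily | undefined;
		if (modelName) {
			try {
				// Use the experiment-configured model when it is available to this user/org.
				return await this.endpointProvider.getChatEndpoint(modelName);
			} catch {
				// Model unknown or disabled (e.g. Claude turned off for the org): fall back below.
			}
		}
		return this.endpointProvider.getChatEndpoint(this.options.request);
	}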

Contributor Author

@roblourens is there a way for me to debug in a state where I don't have access to a certain model, so I can confirm my implementation works as expected?

Member

I don't know of a way to configure that. You can hack the code somewhere in the model fetcher or endpoint provider maybe. I think all you need is a try/catch here.

Contributor Author

Ok, I can use that. I was worried it would be slow, since this function is called many times during the trajectory & falling into the catch statement tends to be slow relative to a regular function call.

roblourens enabled auto-merge January 29, 2026 22:27
vs-code-engineering bot added this to the January 2026 milestone Jan 29, 2026
roblourens added this pull request to the merge queue Jan 29, 2026
Merged via the queue into microsoft:main with commit 3e5f9a1 Jan 29, 2026
9 checks passed