Conversation

Collaborator

@aibrahim-oai aibrahim-oai commented Dec 3, 2025

  • This PR treats the ModelsManager like the AuthManager and propagates it into the TUI, replacing builtin_model_presets
  • It also decreases the visibility of builtin_model_presets

based on #7552

@aibrahim-oai aibrahim-oai changed the title from "Migrate tui to models manager" to "Migrate tui to use models manager" Dec 3, 2025
@aibrahim-oai aibrahim-oai changed the base branch from main to migrate-app-server December 3, 2025 23:56
@chatgpt-codex-connector
Contributor

💡 Codex Review

Self {
conversations: Arc::new(RwLock::new(HashMap::new())),
auth_manager: auth_manager.clone(),
session_source,
models_manager: Arc::new(ModelsManager::new(auth_manager.get_auth_mode())),

P1: Update available models when auth mode changes

ModelsManager is constructed once with the initial auth_manager.get_auth_mode() and its cached list is reused by the TUI/server without any refresh. If the user logs in or switches between API key and ChatGPT after startup, the model picker continues to reflect the initial mode (e.g., still hides gpt-5.1-codex-max after upgrading to ChatGPT, or still shows it after switching to API keys), so the available model list becomes stale and incorrect compared to the active auth.
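One way to address this, sketched below with a blocking `std::sync::RwLock` and simplified stand-in `AuthMode`/`ModelPreset` types (the real crate's types and lock differ), is to give `ModelsManager` a refresh hook that recomputes the cached list whenever the active auth mode changes:

```rust
use std::sync::RwLock;

// Simplified stand-ins for the real crate's types.
#[derive(Clone, Copy, PartialEq, Debug)]
enum AuthMode {
    ApiKey,
    ChatGpt,
}

#[derive(Clone, Debug, PartialEq)]
struct ModelPreset {
    model: &'static str,
}

fn builtin_model_presets(auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
    let mut presets = vec![ModelPreset { model: "gpt-5.1-codex" }];
    // In this sketch, gpt-5.1-codex-max is only surfaced for ChatGPT auth.
    if auth_mode == Some(AuthMode::ChatGpt) {
        presets.push(ModelPreset { model: "gpt-5.1-codex-max" });
    }
    presets
}

struct ModelsManager {
    available_models: RwLock<Vec<ModelPreset>>,
}

impl ModelsManager {
    fn new(auth_mode: Option<AuthMode>) -> Self {
        Self {
            available_models: RwLock::new(builtin_model_presets(auth_mode)),
        }
    }

    // Called whenever the active auth mode changes, so the cached list
    // never goes stale relative to the current credentials.
    fn refresh(&self, auth_mode: Option<AuthMode>) {
        *self.available_models.write().unwrap() = builtin_model_presets(auth_mode);
    }
}

fn main() {
    let manager = ModelsManager::new(None);
    manager.refresh(Some(AuthMode::ChatGpt));
    for preset in manager.available_models.read().unwrap().iter() {
        println!("{}", preset.model);
    }
}
```

Wiring `refresh` into the auth-change path (e.g. wherever `AuthManager` notifies on login/logout) is the part the review is asking for; the sketch only shows the recompute-on-demand shape.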

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@aibrahim-oai
Collaborator Author

@codex review this

conversations: Arc<RwLock<HashMap<ConversationId, Arc<CodexConversation>>>>,
auth_manager: Arc<AuthManager>,
models_manager: Arc<ModelsManager>,
pub models_manager: Arc<ModelsManager>,
Collaborator

consider a getter method
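A minimal sketch of that suggestion: keep the field private and expose an accessor that clones the `Arc`. The `Server` owner type here is illustrative, not the real struct from the PR:

```rust
use std::sync::Arc;

struct ModelsManager;

// Hypothetical owner type standing in for the struct that holds the field.
struct Server {
    models_manager: Arc<ModelsManager>,
}

impl Server {
    fn models_manager(&self) -> Arc<ModelsManager> {
        // Cloning an Arc only bumps the refcount; the manager itself
        // is shared, not copied.
        Arc::clone(&self.models_manager)
    }
}

fn main() {
    let server = Server {
        models_manager: Arc::new(ModelsManager),
    };
    let shared = server.models_manager();
    println!("{}", Arc::strong_count(&shared)); // prints 2
}
```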

.blocking_read()
.iter()
.find(|&preset| preset.model == "gpt-5.1-codex-max")
.cloned()
Collaborator

do you need cloned?

Collaborator Author

yeah because we don't want the reference
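The exchange above comes down to iterator semantics: `find` over `.iter()` yields `Option<&T>`, which borrows from the collection (in the PR, data behind a lock guard), and `.cloned()` turns it into an owned `Option<T>` that can outlive that borrow. A standalone illustration with a simplified `ModelPreset`:

```rust
#[derive(Clone, Debug, PartialEq)]
struct ModelPreset {
    model: String,
}

fn find_max_preset(presets: &[ModelPreset]) -> Option<ModelPreset> {
    presets
        .iter()
        .find(|preset| preset.model == "gpt-5.1-codex-max")
        .cloned() // Option<&ModelPreset> -> Option<ModelPreset>
}

fn main() {
    let presets = vec![
        ModelPreset { model: "gpt-5.1-codex".to_string() },
        ModelPreset { model: "gpt-5.1-codex-max".to_string() },
    ];
    println!("{:?}", find_max_preset(&presets));
}
```

Without `.cloned()`, the function would try to return a reference tied to the borrow of `presets`, which in the PR's case would be a reference into data guarded by the lock.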

Contributor

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 2132 to +2147
pub(crate) fn open_model_popup(&mut self) {
let current_model = self.config.model.clone();
let auth_mode = self.auth_manager.auth().map(|auth| auth.mode);
let presets: Vec<ModelPreset> = builtin_model_presets(auth_mode);
let presets: Vec<ModelPreset> = self
.models_manager
.available_models
.blocking_read()
.iter()
.cloned()
.collect();
Contributor

P1: Model picker uses stale preset list after auth changes

The model popup now reads the presets from models_manager.available_models instead of calling builtin_model_presets with the current auth. ModelsManager seeds available_models only once at construction (ModelsManager::new(auth_manager.get_auth_mode())) and nothing refreshes it when the user logs in or changes auth mode, so the picker keeps showing whatever list was computed at startup. For example, starting unauthenticated and then logging in with an API key will still surface gpt-5.1-codex-max even though builtin_model_presets is supposed to hide it for API key auth. This regresses the model picker’s filtering based on current credentials.

let auth_mode = self.auth_manager.auth().map(|auth| auth.mode);
builtin_model_presets(auth_mode)
.into_iter()
let models = self.models_manager.available_models.blocking_read();
Collaborator

this should be an async method at some point

Collaborator Author

The issue is that ChatWidget methods are mostly sync. It needs some work to convert.

let presets: Vec<ModelPreset> = builtin_model_presets(auth_mode);
let presets: Vec<ModelPreset> = self
.models_manager
.available_models
Collaborator

can we have a single method that does all this chain at once? get_available_models() ??
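A sketch of the suggested `get_available_models()` helper, using a blocking `std::sync::RwLock` for illustration (the real field sits behind an async lock read via `blocking_read`). It hides the lock/iterate/clone chain behind a single call:

```rust
use std::sync::RwLock;

#[derive(Clone, Debug)]
struct ModelPreset {
    model: String,
}

struct ModelsManager {
    available_models: RwLock<Vec<ModelPreset>>,
}

impl ModelsManager {
    // Returns an owned snapshot of the current model list. Cloning
    // under the read lock means the guard is released immediately and
    // callers never hold a borrow into the manager.
    fn get_available_models(&self) -> Vec<ModelPreset> {
        self.available_models.read().unwrap().clone()
    }
}

fn main() {
    let manager = ModelsManager {
        available_models: RwLock::new(vec![ModelPreset {
            model: "gpt-5.1-codex".to_string(),
        }]),
    };
    println!("{:?}", manager.get_available_models());
}
```

Call sites like `open_model_popup` then collapse to `let presets = self.models_manager.get_available_models();`.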

Collaborator Author

okay

Base automatically changed from migrate-app-server to main December 4, 2025 01:17
@aibrahim-oai aibrahim-oai merged commit 8da91d1 into main Dec 4, 2025
26 checks passed
@aibrahim-oai aibrahim-oai deleted the migrate-tui-to-models-manager branch December 4, 2025 02:00
@github-actions github-actions bot locked and limited conversation to collaborators Dec 4, 2025