Conversation
please fix failing tests
gnanam1990
left a comment
Thanks for the PR. Separating Mistral from the generic OpenAI-compatible path looks like a sensible direction, and I like the goal of avoiding max_completion_tokens for Mistral requests.
I had a few things I wanted to double-check before approving:
- In `openaiShim.ts`, it looks like the Mistral path removes `max_completion_tokens`, but I wasn't sure whether the intended replacement is `max_tokens`. My concern is that we may be dropping the explicit output cap entirely for Mistral requests.
- In `auth.ts`, Mistral seems to be added to one 3P-auth check, but I may be missing a matching update for `isUsing3PServices()`. If that is still unchanged, I wonder whether some CI/test auth paths could still treat Mistral like first-party.
- In `spawnUtils.ts`, I noticed `MISTRAL_API_KEY` and `MISTRAL_MODEL` are forwarded to teammates, but I didn't see `MISTRAL_BASE_URL`. That made me wonder whether subagents would fall back to the default endpoint when a custom Mistral base URL is configured.
- In `python/smart_router.py`, could you also take another quick look at the new Mistral provider block? I may be misreading it, but the added entry looked like it might need a small syntax fix as well.
If I'm missing something on any of these, happy to be corrected. I'd be glad to re-check after that.
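On the first point, a minimal sketch of how the shim could keep an explicit cap instead of dropping it; `normalizeMistralBody` is a hypothetical name for illustration, not the actual function in `openaiShim.ts`:

```typescript
// Hypothetical sketch: instead of deleting max_completion_tokens outright,
// carry its value over to max_tokens so Mistral requests keep an output cap.
type RequestBody = Record<string, unknown>;

function normalizeMistralBody(body: RequestBody): RequestBody {
  const { max_completion_tokens, ...rest } = body;
  // Only fill in max_tokens when the caller did not already set one.
  if (max_completion_tokens !== undefined && rest.max_tokens === undefined) {
    rest.max_tokens = max_completion_tokens;
  }
  return rest;
}
```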
please fix conflicts
Vasanthdev2004
left a comment
Rechecked the latest head 9490af7efbb960dca8736f4e0480a89338d4cb4d against current origin/main 72e6a945fe1311a0b9eea1033703a74dc6cdbc18.
I still can't approve this because the new provider-flag clearing logic introduces a real branch-specific regression.
Current blocker:
- `src/utils/providerFlag.ts:82-88` clears provider flags with assignments like `process.env.CLAUDE_CODE_USE_GEMINI = undefined` instead of deleting them. Under Node/Bun that leaves the literal string `"undefined"` in the environment, so the flags are no longer truly absent.

  Direct repro on this head:
  - call `applyProviderFlag('anthropic', [])`
  - resulting env contains values like:
    - `CLAUDE_CODE_USE_OPENAI === "undefined"`
    - `CLAUDE_CODE_USE_GEMINI === "undefined"`
    - `CLAUDE_CODE_USE_MISTRAL === "undefined"`
- downstream effects are real, not just cosmetic:
  - `hasExplicitProviderSelection()` returns `true`
  - `buildInheritedEnvVars()` forwards `CLAUDE_CODE_USE_GEMINI=undefined`, `CLAUDE_CODE_USE_MISTRAL=undefined`, etc. to teammates

This also regresses the existing unit contract on this branch:

- `bun test src/utils/providerFlag.test.ts` -> fails on this head
- same test on current `origin/main` -> passes

So even though truthy provider detection still happens to work for the simple case, the env is observably wrong and leaks through code paths that check `!== undefined` rather than truthiness.
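The assignment-versus-delete difference can be shown directly; a small illustrative snippet (flag name borrowed from this review, behavior as Node coerces `process.env` values to strings):

```typescript
// In Node, process.env values are strings: assigning undefined coerces it
// to the literal string "undefined" instead of removing the variable.
process.env.CLAUDE_CODE_USE_GEMINI = '1';

process.env.CLAUDE_CODE_USE_GEMINI = undefined;
const afterAssign = process.env.CLAUDE_CODE_USE_GEMINI; // the string "undefined"

// delete actually removes the key, which is what flag-clearing needs.
delete process.env.CLAUDE_CODE_USE_GEMINI;
const afterDelete = process.env.CLAUDE_CODE_USE_GEMINI; // truly absent
```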
What I verified on this head:

- direct provider-flag repro above
- direct downstream repro through `hasExplicitProviderSelection()` and `buildInheritedEnvVars()`
- `bun test src/utils/providerFlag.test.ts` -> fails on this head, passes on current `origin/main`
- Mistral request-body concern is fixed: direct shim repro now sends `max_tokens` and omits `max_completion_tokens`
- `bun run build` -> success
- `bun run smoke` -> success
I didn't find a second high-confidence blocker beyond the provider-flag regression, but I wouldn't merge this until those env vars are actually deleted rather than assigned undefined.
Vasanthdev2004
left a comment
Rechecked the latest head 653637acdd6e001a5066d770d3f5504c686294dd against current origin/main aff2bd89e22161f57ef330737b7f149208b96097.
The previous provider-flag blocker is fixed on this head, but I still can't approve because custom Mistral endpoints are not preserved consistently across the new user-facing flows.
Current blocker:

- Custom `MISTRAL_BASE_URL` values are still lost or misreported in the new Mistral UX paths. There are two concrete places this shows up:
  - `src/commands/provider/provider.tsx:150-171/1205-1223/1245-1249`
    - `getProviderWizardDefaults()` captures `mistralModel` but not `mistralBaseUrl`
    - the `mistral-base` step uses an `initialValue` expression that is always `''`
    - on submit, the wizard passes `processEnv: {}` into `buildMistralProfileEnv()`

    Direct repros on this head:
    - `getProviderWizardDefaults({ MISTRAL_BASE_URL: 'https://custom.mistral.test/v1', MISTRAL_MODEL: 'devstral-latest' })` returns no Mistral base-url default at all
    - `buildMistralProfileEnv({ model: 'devstral-latest', baseUrl: null, apiKey: 'mistral-key', processEnv: {} })` returns `{"MISTRAL_API_KEY":"mistral-key","MISTRAL_MODEL":"devstral-latest"}` with no `MISTRAL_BASE_URL`

    So if a user with a working custom Mistral endpoint reopens the provider wizard and just confirms the defaults, the saved profile silently drops the custom endpoint and falls back to the default Mistral API.
  - `src/components/StartupScreen.ts:96-100`
    - the startup screen hardcodes `https://api.mistral.ai/v1` instead of using `process.env.MISTRAL_BASE_URL`

    Direct repro on this head:
    - with `CLAUDE_CODE_USE_MISTRAL=1` and `MISTRAL_BASE_URL='https://custom.mistral.test/v1'`
    - captured startup-screen output shows the default `https://api.mistral.ai/v1`
    - it does not show the configured custom endpoint

  This is exactly the kind of regression that can trip up an already-working custom Mistral project: the runtime request path may be correct, but the setup and status UI around it still nudges the user back toward the default endpoint.
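One hedged sketch of a fix on the profile-env side: fall back to the surrounding `processEnv` so a previously configured endpoint survives the wizard round-trip. The real `buildMistralProfileEnv` signature may differ; `buildMistralProfileEnvSketch` is illustrative only.

```typescript
// Illustrative sketch (not the branch's actual buildMistralProfileEnv):
// prefer an explicit baseUrl, otherwise fall back to MISTRAL_BASE_URL from
// the caller's environment instead of silently dropping the custom endpoint.
interface MistralProfileArgs {
  model: string;
  baseUrl: string | null;
  apiKey: string;
  processEnv: Record<string, string | undefined>;
}

function buildMistralProfileEnvSketch(args: MistralProfileArgs): Record<string, string> {
  const env: Record<string, string> = {
    MISTRAL_API_KEY: args.apiKey,
    MISTRAL_MODEL: args.model,
  };
  const baseUrl = args.baseUrl ?? args.processEnv.MISTRAL_BASE_URL;
  if (baseUrl) env.MISTRAL_BASE_URL = baseUrl;
  return env;
}
```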
What I verified on this head:

- previous provider-flag blocker is fixed:
  - `applyProviderFlag('anthropic', [])` now leaves provider env vars absent
  - `hasExplicitProviderSelection()` -> `false`
  - `buildInheritedEnvVars()` stays clean
- direct Mistral request-path repro is correct:
  - custom `MISTRAL_BASE_URL` is used
  - `Authorization: Bearer mistral-key`
  - `max_tokens` present, `max_completion_tokens` absent
- `bun test src/utils/providerFlag.test.ts` -> 20 pass
- `bun test src/commands/model/model.test.tsx` -> 1 pass
- `bun test src/commands/provider/provider.test.tsx` -> 10 pass
- `bun test src/utils/providerProfile.test.ts src/utils/providerProfiles.test.ts src/utils/model/providers.test.ts src/services/api/providerConfig.local.test.ts` -> 68 pass
- `py -3 -m py_compile python/smart_router.py` -> success
- `bun run build` -> success
- `bun run smoke` -> success
Vasanthdev2004
left a comment
Rechecked the latest head 1c84b27127b66052ae1eb7579ad7391fcf0e146c against current origin/main.
The earlier Mistral UX/base-url blocker is fixed on this head, and I reverified the runtime request path:

- custom `MISTRAL_BASE_URL` is used
- `Authorization: Bearer mistral-key`
- `max_tokens` present, `max_completion_tokens` absent
I still can't approve because this branch now introduces a real regression for already-working Ollama/custom-endpoint setups.
Current blocker:

- `src/utils/providerFlag.ts:124-130` no longer preserves explicit Ollama/OpenAI-compatible env overrides. The function header still says explicit env vars should win, but the new `ollama` branch now does this:
  - forces `OPENAI_BASE_URL` back to `http://localhost:11434/v1` unless it matches one hardcoded string
  - forces `OPENAI_API_KEY = 'ollama'`

  Direct repro on this head:
  - set `OPENAI_BASE_URL='http://remote-ollama.internal:11434/v1'`
  - set `OPENAI_API_KEY='secret-token'`
  - call `applyProviderFlag('ollama', [])`
  - resulting env becomes:
    - `OPENAI_BASE_URL === 'http://localhost:11434/v1'`
    - `OPENAI_API_KEY === 'ollama'`

  So a working remote/self-hosted Ollama-compatible setup gets silently rewritten to localhost with a fake key just by using `--provider ollama`. This is exactly the kind of regression we should block: it breaks an already-working configuration outside the new Mistral path.
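A sketch of the override-preserving behavior the function header promises: only apply the localhost defaults when the user has not set values themselves. `applyOllamaDefaults` is an illustrative name, not the code in `providerFlag.ts`:

```typescript
// Illustrative sketch: fill Ollama defaults without clobbering explicit
// user-provided overrides (the ??= assignment only runs when unset).
function applyOllamaDefaults(env: Record<string, string | undefined>): void {
  env.OPENAI_BASE_URL ??= 'http://localhost:11434/v1';
  env.OPENAI_API_KEY ??= 'ollama';
}
```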
What I verified on this head:

- `bun test src/utils/providerFlag.test.ts` -> 20 pass
- `bun test src/commands/model/model.test.tsx` -> 1 pass
- `bun test src/commands/provider/provider.test.tsx` -> 10 pass
- `bun test src/utils/providerProfile.test.ts src/utils/providerProfiles.test.ts src/utils/model/providers.test.ts src/services/api/providerConfig.local.test.ts` -> 68 pass
- `bun run build` -> success
- `bun run smoke` -> success
- `py -3 -m py_compile python/smart_router.py` -> success
- direct Mistral wizard/env repro -> fixed
- direct Ollama override repro above -> still broken

Non-blocking note:

- the existing Ollama test only preserves one exact URL (`http://my-ollama:11434/v1`), which is why this broader regression slipped through.
@Vasanthdev2004 thank you for your reviews thus far. Could I request another review? This is a product and model I use a lot, which is why I felt the need to fix it; I would appreciate it.
gnanam1990
left a comment
Thanks for continuing to iterate on this. I rechecked the latest head and I still see two blocking regressions:
- `src/utils/auth.ts`: `isUsing3PServices()` now checks `MISTRAL_API_KEY` instead of the actual active-provider flag. That means simply having a Mistral key in the environment can make the app behave as if it is in 3P mode even when Mistral is not the active provider.
- `src/utils/model/model.ts`: in GitHub mode, the env-configured model path is no longer handled consistently. That can cause a saved `settings.model` to override the intended `OPENAI_MODEL` value for GitHub setups, which is a regression for env-based configurations.
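For the first point, the flag-gated check could look roughly like this sketch (`isUsing3PServicesSketch` is hypothetical; the flag names are taken from elsewhere in this thread, and the real check may use different truthiness rules):

```typescript
// Illustrative sketch: 3P mode should follow the active-provider flags,
// not the mere presence of a provider API key in the environment.
function isUsing3PServicesSketch(env: Record<string, string | undefined>): boolean {
  return (
    env.CLAUDE_CODE_USE_OPENAI === '1' ||
    env.CLAUDE_CODE_USE_GEMINI === '1' ||
    env.CLAUDE_CODE_USE_MISTRAL === '1'
  );
}
```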
I’d want those provider-selection regressions fixed before approving.
Summary
Mistral was previously erroring because requests sent `max_completion_tokens`. This PR decouples Mistral into a separate provider, following the Gemini pattern, and no longer sends `max_completion_tokens` for Mistral.
Before: (screenshot not preserved)

After: (screenshot not preserved)
Now Mistral can be configured via:
Testing
- `bun run build`
- `bun run smoke`