fix(provider): only set thinkingConfig for models with reasoning capability #18283
Protocol-zero-0 wants to merge 1 commit into anomalyco:dev
Conversation
Pull request overview
Fixes Vertex AI 400 errors by only including the Google/Vertex `thinkingConfig` when the selected model is marked as having reasoning ("thinking") capability.
Changes:
- Gate `thinkingConfig: { includeThoughts: true }` behind `input.model.capabilities.reasoning` for `@ai-sdk/google` and `@ai-sdk/google-vertex`.
- Preserve the existing gemini-3-specific `thinkingLevel: "high"` behavior, but only when reasoning is enabled.
```diff
 if (input.model.api.npm === "@ai-sdk/google" || input.model.api.npm === "@ai-sdk/google-vertex") {
-  result["thinkingConfig"] = {
-    includeThoughts: true,
-  }
-  if (input.model.api.id.includes("gemini-3")) {
-    result["thinkingConfig"]["thinkingLevel"] = "high"
+  if (input.model.capabilities.reasoning) {
+    result["thinkingConfig"] = {
+      includeThoughts: true,
+    }
+    if (input.model.api.id.includes("gemini-3")) {
+      result["thinkingConfig"]["thinkingLevel"] = "high"
+    }
   }
 }
```
Add unit coverage for this new gating to prevent regressions: ProviderTransform.options should NOT set thinkingConfig for Google/Vertex models when model.capabilities.reasoning is false (e.g. gemini-2.0-flash), and should set it when reasoning is true. There are existing ProviderTransform.options tests in packages/opencode/test/provider/transform.test.ts, but none currently assert this Google/Vertex behavior.
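The requested coverage could look roughly like the sketch below. It replicates the gating logic in a standalone function so both sides of the gate can be asserted in isolation; `buildOptions`, `ModelInput`, and the `model()` helper are illustrative stand-ins, not the actual `ProviderTransform.options` signature from the repo.

```typescript
// Illustrative stand-in for the model input shape; the real type lives in
// packages/opencode and may differ.
interface ModelInput {
  model: {
    api: { npm: string; id: string }
    capabilities: { reasoning: boolean }
  }
}

// Standalone replication of the gated logic from the diff above, so the
// expected behavior can be asserted without the full provider setup.
function buildOptions(input: ModelInput): Record<string, unknown> {
  const result: Record<string, unknown> = {}
  const npm = input.model.api.npm
  if (npm === "@ai-sdk/google" || npm === "@ai-sdk/google-vertex") {
    if (input.model.capabilities.reasoning) {
      const thinkingConfig: Record<string, unknown> = { includeThoughts: true }
      if (input.model.api.id.includes("gemini-3")) {
        thinkingConfig["thinkingLevel"] = "high"
      }
      result["thinkingConfig"] = thinkingConfig
    }
  }
  return result
}

// Small fixture helper for readability.
function model(npm: string, id: string, reasoning: boolean): ModelInput {
  return { model: { api: { npm, id }, capabilities: { reasoning } } }
}

// Non-reasoning Vertex model: thinkingConfig must be absent (it would
// otherwise trigger the HTTP 400 this PR fixes).
const flash = buildOptions(model("@ai-sdk/google-vertex", "gemini-2.0-flash", false))
console.log("flash has thinkingConfig:", "thinkingConfig" in flash)

// Reasoning-capable gemini-3 model: thinkingConfig plus thinkingLevel.
const pro = buildOptions(model("@ai-sdk/google-vertex", "gemini-3-pro", true))
console.log("gemini-3 options:", JSON.stringify(pro))
```

In the actual suite these assertions would go through `ProviderTransform.options` in `packages/opencode/test/provider/transform.test.ts` rather than a local copy of the logic.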
Thanks for updating your PR! It now meets our contributing guidelines. 👍
Issue for this PR
Closes #18243
Type of change
What does this PR do?
When using Vertex AI with non-thinking models (e.g. `gemini-2.0-flash`, `gemini-2.5-flash-lite`), the `options()` function in `transform.ts` unconditionally adds `thinkingConfig: { includeThoughts: true }` to every Google/Vertex request. Vertex AI returns HTTP 400 for models that don't support thinking.

The fix adds a guard: `thinkingConfig` is only set when `input.model.capabilities.reasoning` is true. This is consistent with how `variants()` in the same file already checks for reasoning capability before returning thinking-related config.

How did you verify your code works?
- `bun test` on `test/provider/transform.test.ts`: all 115 tests passed
- `turbo typecheck` passed during push
- `variants()` already uses the same `capabilities.reasoning` guard, confirming this is the intended pattern

Checklist