
fix(provider): only set thinkingConfig for models with reasoning capability#18283

Open
Protocol-zero-0 wants to merge 1 commit into anomalyco:dev from Protocol-zero-0:fix/vertex-thinking-config

Conversation


@Protocol-zero-0 Protocol-zero-0 commented Mar 19, 2026

Issue for this PR

Closes #18243

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

When using Vertex AI with non-thinking models (e.g. gemini-2.0-flash, gemini-2.5-flash-lite), the options() function in transform.ts unconditionally adds thinkingConfig: { includeThoughts: true } to every Google/Vertex request. Vertex AI returns HTTP 400 for models that don't support thinking.

The fix adds a guard: thinkingConfig is only set when input.model.capabilities.reasoning is true. This is consistent with how variants() in the same file already checks for reasoning capability before returning thinking-related config.
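As a sketch, the guarded branch can be illustrated as follows (the `ModelInput` type and the standalone `options()` function here are simplified assumptions for illustration, not the actual transform.ts definitions):

```typescript
// Simplified model input shape (assumption: the real opencode types differ).
type ModelInput = {
  model: {
    api: { npm: string; id: string }
    capabilities: { reasoning: boolean }
  }
}

// Sketch of the guarded options() branch: thinkingConfig is only added
// when the model is flagged as reasoning-capable.
function options(input: ModelInput): Record<string, any> {
  const result: Record<string, any> = {}
  const npm = input.model.api.npm
  if (npm === "@ai-sdk/google" || npm === "@ai-sdk/google-vertex") {
    if (input.model.capabilities.reasoning) {
      result["thinkingConfig"] = { includeThoughts: true }
      if (input.model.api.id.includes("gemini-3")) {
        result["thinkingConfig"]["thinkingLevel"] = "high"
      }
    }
  }
  return result
}
```

With this guard, a request for a non-thinking model such as gemini-2.0-flash produces an options object without thinkingConfig, instead of a payload that Vertex AI would reject with HTTP 400.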

How did you verify your code works?

  • Ran bun test on test/provider/transform.test.ts — all 115 tests passed
  • turbo typecheck passed during push
  • Verified that variants() already uses the same capabilities.reasoning guard, confirming this is the intended pattern

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

Copilot AI review requested due to automatic review settings March 19, 2026 20:22
@github-actions github-actions bot added the needs:compliance This means the issue will auto-close after 2 hours. label Mar 19, 2026

Copilot AI left a comment


Pull request overview

Fixes Vertex AI 400 errors by only including Google/Vertex thinkingConfig when the selected model is marked as having reasoning (“thinking”) capability.

Changes:

  • Gate thinkingConfig: { includeThoughts: true } behind input.model.capabilities.reasoning for @ai-sdk/google and @ai-sdk/google-vertex.
  • Preserve existing gemini-3-specific thinkingLevel: "high" behavior, but only when reasoning is enabled.


Comment on lines 757 to 765

```diff
 if (input.model.api.npm === "@ai-sdk/google" || input.model.api.npm === "@ai-sdk/google-vertex") {
-  result["thinkingConfig"] = {
-    includeThoughts: true,
-  }
-  if (input.model.api.id.includes("gemini-3")) {
-    result["thinkingConfig"]["thinkingLevel"] = "high"
+  if (input.model.capabilities.reasoning) {
+    result["thinkingConfig"] = {
+      includeThoughts: true,
+    }
+    if (input.model.api.id.includes("gemini-3")) {
+      result["thinkingConfig"]["thinkingLevel"] = "high"
+    }
   }
 }
```

Copilot AI Mar 19, 2026


Add unit coverage for this new gating to prevent regressions: ProviderTransform.options should NOT set thinkingConfig for Google/Vertex models when model.capabilities.reasoning is false (e.g. gemini-2.0-flash), and should set it when reasoning is true. There are existing ProviderTransform.options tests in packages/opencode/test/provider/transform.test.ts, but none currently assert this Google/Vertex behavior.

@github-actions github-actions bot removed the needs:compliance This means the issue will auto-close after 2 hours. label Mar 19, 2026
@github-actions

Thanks for updating your PR! It now meets our contributing guidelines. 👍



Development

Successfully merging this pull request may close these issues.

[Bug] Vertex AI: thinkingConfig.includeThoughts sent unconditionally, breaks non-thinking Gemini models (2.0-flash, 2.5-flash-lite, etc.)
