Conversation


@Chesars Chesars commented Jan 22, 2026

What does this PR do?

Fixes thinking mode when using Claude/Anthropic models via OpenAI-compatible proxies like LiteLLM.

Summary:
The @ai-sdk/openai-compatible package passes parameters as-is without converting to snake_case. OpenAI-compatible APIs expect budget_tokens (snake_case) but OpenCode was sending budgetTokens (camelCase).
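A minimal sketch of the mismatch (the option shapes here are illustrative, not OpenCode's actual types): `@ai-sdk/anthropic` converts the camelCase option to the snake_case wire format before the request goes out, while `@ai-sdk/openai-compatible` forwards whatever keys it is given.

```typescript
// Hypothetical option shapes for illustration only.
type ThinkingCamel = { type: "enabled"; budgetTokens: number };
type ThinkingSnake = { type: "enabled"; budget_tokens: number };

// Roughly what @ai-sdk/anthropic does internally: map camelCase
// options onto the snake_case keys the Anthropic API expects.
function toWireFormat(opts: ThinkingCamel): ThinkingSnake {
  return { type: opts.type, budget_tokens: opts.budgetTokens };
}

// @ai-sdk/openai-compatible performs no such conversion, so a proxy
// like LiteLLM receives `budgetTokens` verbatim and ignores it.
const camel: ThinkingCamel = { type: "enabled", budgetTokens: 16000 };
const wire = toWireFormat(camel);
console.log(wire); // { type: 'enabled', budget_tokens: 16000 }
```

This is why the fix supplies snake_case keys itself when the openai-compatible SDK is in the path.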

Changes:

  • maxOutputTokens(): Now supports both budgetTokens and budget_tokens for proper token limit calculation
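A hedged sketch of the dual-key lookup (the helper name matches the PR, but the signature and the fallback arithmetic are assumptions, not OpenCode's exact code):

```typescript
// The thinking budget may arrive under either key depending on which
// SDK produced the options object.
type ThinkingOptions = {
  budgetTokens?: number; // camelCase, from @ai-sdk/anthropic callers
  budget_tokens?: number; // snake_case, from openai-compatible payloads
};

const DEFAULT_OUTPUT_TOKENS = 32_000; // illustrative fallback cap

function maxOutputTokens(thinking?: ThinkingOptions): number {
  // Accept either spelling so the limit is computed correctly on both paths.
  const budget = thinking?.budgetTokens ?? thinking?.budget_tokens ?? 0;
  // Reserve room for the thinking budget on top of the normal output cap.
  return DEFAULT_OUTPUT_TOKENS + budget;
}

console.log(maxOutputTokens({ budget_tokens: 8_000 })); // 40000
```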

Compatibility:

  • Direct Anthropic SDK (@ai-sdk/anthropic): No changes, continues using camelCase
  • Other openai-compatible models (GPT, Llama, etc.): No changes, continue using reasoningEffort
  • Claude via openai-compatible: Now uses snake_case as expected by the spec

Tests: Added 8 new unit tests

…le APIs

When connecting to OpenAI-compatible proxies (like LiteLLM) with Claude/Anthropic
models, the thinking parameter must use snake_case (budget_tokens) to match the
OpenAI API spec.

The @ai-sdk/anthropic package handles camelCase → snake_case conversion automatically,
but @ai-sdk/openai-compatible passes parameters as-is without conversion. This fix
detects Claude/Anthropic models using openai-compatible SDK and sends budget_tokens
(snake_case) instead of reasoningEffort.
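The detection described above can be sketched as follows (the model-ID matching and option names are assumptions for illustration, not OpenCode's exact implementation):

```typescript
// Pick the thinking parameters based on which SDK carries the request
// and whether the target model is Claude/Anthropic.
function thinkingParams(
  sdk: "anthropic" | "openai-compatible",
  modelID: string,
  budget: number,
): Record<string, unknown> {
  const isClaude = /claude|anthropic/i.test(modelID);
  if (sdk === "openai-compatible" && isClaude) {
    // Proxies like LiteLLM forward this object verbatim to Anthropic,
    // so it must already use the snake_case wire format.
    return { thinking: { type: "enabled", budget_tokens: budget } };
  }
  if (sdk === "anthropic") {
    // @ai-sdk/anthropic converts camelCase to snake_case itself.
    return { thinking: { type: "enabled", budgetTokens: budget } };
  }
  // Other openai-compatible models (GPT, Llama, ...) keep reasoningEffort.
  return { reasoningEffort: "medium" };
}
```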

Also updated maxOutputTokens() to handle both camelCase (budgetTokens) and snake_case
(budget_tokens) for proper token limit calculation.

Fixes thinking mode when using Claude via LiteLLM or other OpenAI-compatible proxies.
@github-actions
Contributor

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@github-actions
Contributor

The following comment was generated by an LLM; it may be inaccurate:

Potential Duplicate/Related PRs Found

PR #8900: feat(opencode): add copilot specific provider to properly handle copilot reasoning tokens

  • Related to reasoning/thinking token handling across different providers. May have addressed similar parameter formatting issues.

PR #5531: Feature/OpenAI compatible reasoning

PR #8359: feat(opencode): add auto model detection for OpenAI-compatible providers

  • Related to OpenAI-compatible provider handling and model detection, relevant to the provider-specific logic in this PR.

These PRs address related functionality around provider-specific parameter handling and reasoning tokens, but your PR (10109) appears to be the specific fix for the snake_case conversion issue with OpenAI-compatible APIs.

@rekram1-node
Collaborator

Which API are you using? I don't think this is the case for all providers.

@neronlux

Thanks for reporting on our behalf.

@neronlux

"Which API are you using? I don't think this is the case for all providers."

Correct, we hit this when using LiteLLM to proxy Anthropic thinking models and trying to switch thinking modes with c-t.
