I noticed that `OpenAIAugmentedLLM` sets the `self._reasoning` flag on init. The default fallback model is `gpt-4o`, but when I specifically pass a `model_name` that is a reasoning model via `generate_str(request_params=...)`, it fails with the error below.
```
[mcp_agent.workflows.llm.augmented_llm_openai.writing_assistant] Error: Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}
```
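For context, this is roughly how I hit it (a hypothetical snippet — the agent/LLM setup is elided, and the `RequestParams` import path and usage are my assumption):

```python
from mcp_agent.workflows.llm.augmented_llm import RequestParams

# llm is an OpenAIAugmentedLLM instance. __init__ computed self._reasoning
# from the gpt-4o default, so passing a reasoning model per-request
# triggers the 'max_tokens' 400 above.
result = await llm.generate_str(
    "Draft a short summary of the meeting notes.",
    request_params=RequestParams(model="o3-mini"),
)
```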
I believe we would need to update `self._reasoning` after `self.select_model()` returns (line 154) here — something like the sketch below?
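A minimal sketch of what I mean, not the actual code — the `payload` dict, `params.maxTokens`, and the model-name prefix check are assumptions about how the class is currently wired:

```python
# inside OpenAIAugmentedLLM.generate(), after the model is selected
model = await self.select_model(params)

# Re-derive the flag from the model actually selected for this request,
# instead of the value computed once in __init__ from the gpt-4o default.
self._reasoning = bool(model) and model.startswith(("o1", "o3"))

if self._reasoning:
    # Reasoning models reject 'max_tokens' (the 400 above) and expect
    # 'max_completion_tokens' instead.
    payload["max_completion_tokens"] = params.maxTokens
else:
    payload["max_tokens"] = params.maxTokens
```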
If there is an alternate way to resolve this, or if a code change is required, I'd be happy to implement it based on your suggestion :) Thanks!
Hi @Pythonista7, thanks for raising the issue! If you would like to take this on, another thing worth resolving is changing `max_tokens` to `max_completion_tokens`, since `max_tokens` has been deprecated.
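`max_completion_tokens` is accepted by both reasoning models (o1/o3) and non-reasoning models like gpt-4o, so the per-model branch could go away entirely. A sketch against the OpenAI Python SDK, not the current mcp-agent code:

```python
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def complete(model: str, messages: list, max_completion_tokens: int):
    # Sending only 'max_completion_tokens' works across model families,
    # so there is no need to branch on a reasoning flag once the
    # deprecated 'max_tokens' parameter is dropped.
    return await client.chat.completions.create(
        model=model,
        messages=messages,
        max_completion_tokens=max_completion_tokens,
    )
```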