
Added support for Invalid LLM API KEYS #638


Open · wants to merge 2 commits into base: main

Conversation

yash1744
Contributor

@yash1744 yash1744 commented Apr 5, 2025

why

Previously, a generic StagehandDefaultError was displayed when LLM API keys were missing, which did not provide concise details about the error.

what changed

It now reports invalid LLM API keys with a clean, structured message.
Added error handling in the createChatCompletion method of the various LLM clients (OpenAI, Groq, Anthropic, Cerebras).
Created a new InvalidLLMKeyError class for the custom invalid-key message.
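The custom error class described above could look roughly like this; the class shape is a sketch based on the PR description, not Stagehand's exact code:

```typescript
// Hypothetical sketch of the InvalidLLMKeyError class named in the PR
// description. The constructor signature is assumed for illustration.
class InvalidLLMKeyError extends Error {
  constructor(provider: string = "the configured LLM provider") {
    super(
      `Invalid or missing API key for ${provider}. ` +
        `Check that the key is set in your environment and has not expired.`,
    );
    this.name = "InvalidLLMKeyError";
  }
}
```

A dedicated class lets callers match on the error type (or name) rather than string-parsing a generic StagehandDefaultError message.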

test plan

Invalid or missing LLM API keys now produce a clean, structured error message instead of a generic StagehandDefaultError.

changeset-bot bot commented Apr 5, 2025

🦋 Changeset detected

Latest commit: 8c7c375

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package: @browserbasehq/stagehand (Minor).



@greptile-apps greptile-apps bot left a comment


PR Summary

This PR introduces a dedicated error handling approach for invalid API keys across multiple LLM client modules, providing clearer, structured error messages via a new custom error class.

  • Updated /lib/llm/AnthropicClient.ts to catch authentication errors and throw InvalidLLMKeyError.
  • Modified /lib/llm/CerebrasClient.ts and /lib/llm/GroqClient.ts to handle OpenAI.AuthenticationError with detailed logging and custom error propagation.
  • Revised /lib/llm/OpenAIClient.ts to wrap core calls in try/catch blocks and implement the new invalid API key handling.
  • Introduced InvalidLLMKeyError in /types/stagehandErrors.ts for consistent error feedback.
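The per-client catch pattern the bullets describe can be sketched as below. A 401 status check stands in for the SDK-specific AuthenticationError classes (e.g. OpenAI.AuthenticationError); the function and class names here are illustrative, not the PR's exact code:

```typescript
// Illustrative stand-in for the error class introduced in stagehandErrors.ts.
class InvalidLLMKeyError extends Error {
  constructor() {
    super("Invalid LLM API key. Check your provider credentials.");
    this.name = "InvalidLLMKeyError";
  }
}

// Wraps a provider SDK call; auth failures are translated into the
// structured error, everything else propagates unchanged.
async function createChatCompletionGuarded<T>(
  call: () => Promise<T>,
): Promise<T> {
  try {
    return await call();
  } catch (err) {
    // Major LLM SDKs attach an HTTP status to their API error objects.
    if ((err as { status?: number }).status === 401) {
      throw new InvalidLLMKeyError();
    }
    throw err;
  }
}
```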


5 files reviewed, no comments.

@kamath
Member

kamath commented Apr 5, 2025

thanks so much for contributing to stagehand! instead of modifying each client individually, can we instead change LLMClient so that it wraps createChatCompletion, catches any error, and throws a ChatCompletionError?

also, please add a changeset - just run npx changeset, and this is a minor PR
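The suggestion above amounts to a template-method pattern on the base class: concrete clients implement only the raw SDK call, and the try/catch lives in one place. A minimal sketch, with assumed names (ChatCompletionError, doCreateChatCompletion) rather than Stagehand's actual API:

```typescript
// Hypothetical common error type for any failed completion call.
class ChatCompletionError extends Error {
  constructor(cause: unknown) {
    super(`Chat completion failed: ${String(cause)}`);
    this.name = "ChatCompletionError";
  }
}

abstract class LLMClient {
  // Each provider client overrides this with its SDK-specific call.
  protected abstract doCreateChatCompletion(prompt: string): Promise<string>;

  // Callers use this wrapper; errors are normalized in one place, so new
  // clients get the behavior for free.
  async createChatCompletion(prompt: string): Promise<string> {
    try {
      return await this.doCreateChatCompletion(prompt);
    } catch (err) {
      throw new ChatCompletionError(err);
    }
  }
}

// Toy client demonstrating that a failing provider call is normalized.
class FakeClient extends LLMClient {
  protected doCreateChatCompletion(): Promise<string> {
    return Promise.reject(new Error("bad key"));
  }
}
```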

@yash1744
Contributor Author

yash1744 commented Apr 5, 2025

Yeah, that sounds like a good idea. But since createChatCompletion is an abstract method, we can't add the try/catch there directly.
Instead, I can create a new method like safeCreateChatCompletion that wraps the call and handles errors. It could throw a common ChatCompletionError instead of introducing a new InvalidLLMKeyError class. Two methods in the operatorHandler.ts file currently call createChatCompletion, so they would need to be updated to use the new one.

However, even with that, we'll still need to check which client is being used: OpenAI, Groq, and Cerebras share the OpenAI client shape, while Anthropic has a different API and error pattern, so both error formats need handling.

So if we add a new client in the future, this part will need to be updated again to support its error format.

Do you want me to proceed this way?
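The cross-provider check described above could be isolated in a small helper so the wrapper stays client-agnostic. This is a sketch under assumed error shapes (isAuthError is a hypothetical name, not Stagehand code):

```typescript
// Hypothetical helper that recognizes auth failures from both error shapes
// mentioned in the discussion: OpenAI-compatible SDKs (OpenAI, Groq,
// Cerebras) and Anthropic.
function isAuthError(err: unknown): boolean {
  const e = err as { status?: number; error?: { type?: string } };
  // OpenAI-style SDKs: AuthenticationError carries an HTTP 401 status.
  if (e.status === 401) return true;
  // Anthropic-style body: { error: { type: "authentication_error" } }.
  if (e.error?.type === "authentication_error") return true;
  return false;
}
```

Adding a new client with yet another error format would then mean extending only this one function rather than every call site.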
