base: main
fix: accept open ai compatible local llm #1671
Conversation
❌ Changes requested. Reviewed everything up to ff96499 in 2 minutes and 24 seconds.

More details:
- Looked at 35 lines of code in 1 file.
- Skipped 0 files when reviewing.
- Skipped posting 6 drafted comments based on config settings.
1. extensions/llms/openai/pandasai_openai/openai.py:93
- Draft comment: Consider refactoring repeated calls to openai.OpenAI(**self._client_params) into a single variable assignment to adhere to DRY.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. extensions/llms/openai/pandasai_openai/openai.py:85
- Draft comment: Ensure unit tests cover this new branch of functionality for local OpenAI-compatible LLM services.
- Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 30% vs. threshold = 50%). The comment concerns testing new code changes, which is one of the additional rules to check, and the change does add new error paths, API calls, and configuration options that should be tested. However, the comment starts with "Ensure...", which violates the base rule against asking authors to verify things; it is also vague about which scenarios need testing, and tests may already exist in another file. While requesting unit tests for new functionality is explicitly part of the review rules and this is a significant new code path, the comment would need to be rewritten to be more direct and specific.
3. extensions/llms/openai/pandasai_openai/openai.py:86
- Draft comment: Wrap the requests.get call in a try/except block and add a timeout to handle potential HTTP errors or connection issues.
- Reason this comment was not posted: Marked as duplicate.
4. extensions/llms/openai/pandasai_openai/openai.py:86
- Draft comment: Consider renaming 'model_names' to 'available_model_ids' for improved clarity.
- Reason this comment was not posted: Confidence changes required: 50% <= threshold of 50%.
5. extensions/llms/openai/pandasai_openai/openai.py:92
- Draft comment: When raising UnsupportedModelError, include more context in the error message (e.g., model details) to aid debugging.
- Reason this comment was not posted: Comment was on unchanged code.
6. extensions/llms/openai/pandasai_openai/openai.py:86
- Draft comment: Validate the structure of the response from the API before iterating over it, and ensure each model dict contains an 'id' key to avoid a potential KeyError.
- Reason this comment was not posted: Comment looked like it was already resolved.
Workflow ID: wflow_O9YNly4l6eYjuOYc

Want Ellipsis to fix these issues? Tag @ellipsis-dev in a comment. You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
```python
self._is_chat_model = kwargs.get("is_chat_model", True)
model_names = [
    model.get("id")
    for model in requests.get(f"{self.api_base}/models")
```
Add error handling (e.g., try/except and a timeout) for the requests.get call to handle network/API failures gracefully.
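A minimal sketch of the requested pattern, assuming the endpoint follows the OpenAI `/models` response shape (`{"data": [{"id": ...}, ...]}`); the helper name, timeout value, and exception type are illustrative rather than part of the PR:

```python
import requests

def fetch_model_ids(api_base: str, timeout: float = 5.0) -> list[str]:
    """Return the model ids served by an OpenAI-compatible /models endpoint."""
    try:
        # A timeout keeps a hung local server from blocking __init__ forever.
        response = requests.get(f"{api_base}/models", timeout=timeout)
        response.raise_for_status()
        payload = response.json()
    except (requests.RequestException, ValueError) as exc:
        # Surface connection errors, HTTP errors, and bad JSON as one clear failure.
        raise RuntimeError(f"Could not fetch models from {api_base}/models: {exc}") from exc
    # Skip malformed entries rather than risking a KeyError while iterating.
    return [m["id"] for m in payload.get("data", []) if isinstance(m, dict) and "id" in m]
```

A helper in this shape would also address draft comments 3 and 6 above (the missing timeout/try-except and the unvalidated 'id' key).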
```diff
@@ -81,7 +82,21 @@ def __init__(
             self._is_chat_model = False
             self.client = openai.OpenAI(**self._client_params).completions
         else:
-            raise UnsupportedModelError(self.model)
+            self._is_chat_model = kwargs.get("is_chat_model", True)
```
Consider documenting the new `is_chat_model` kwarg in the `__init__` docstring. This improves clarity on the default behavior.
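Something along these lines could satisfy that; the class and signature are abridged here, so the exact parameter list is an assumption:

```python
from typing import Optional

class OpenAI:  # abridged sketch; the real class has more parameters
    def __init__(self, api_token: Optional[str] = None, **kwargs):
        """
        Args:
            api_token: API key; local OpenAI-compatible servers usually
                accept any placeholder value.
            **kwargs: Extra parameters. ``is_chat_model`` (bool, default
                ``True``): whether a model served from a custom
                ``api_base`` should use the chat completions client
                rather than the legacy completions client.
        """
        self._is_chat_model = kwargs.get("is_chat_model", True)
```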
Please implement the changes requested by the bot and re-commit, especially the try/except error handling.
Adding try/except here does not make sense to me; if the local LLM does not work, PandasAI should not run at all. My impression is that they intentionally do not support local LLMs.
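There is a middle ground here: a try/except can keep the fail-fast behavior while replacing a raw network traceback with an actionable message. A rough sketch of that pattern as it might appear inside `__init__` (the exception type and wording are placeholders):

```python
try:
    response = requests.get(f"{self.api_base}/models", timeout=5)
    response.raise_for_status()
except requests.RequestException as exc:
    # Still abort startup, just with a clearer reason than a bare traceback.
    raise RuntimeError(
        f"No OpenAI-compatible server reachable at {self.api_base}/models"
    ) from exc
```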
It works for me; you can use SiliconFlow (硅基流动) for free. Awesome!!!!
To accept a local OpenAI-compatible LLM service.
Important

Adds support for local OpenAI-compatible LLM services by checking available models via an API call in `openai.py`:

- Checks `self.api_base` and sets `self.client` based on model availability in `openai.py`.
- Uses `requests` to fetch the model list and determine whether `self.model` is supported.
- Raises `UnsupportedModelError` if `self.model` is not found in the fetched model list.
- Adds a `requests` import to `openai.py`.

This description was created by Ellipsis for ff96499. It will automatically update as commits are pushed.
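For reference, a usage sketch of what this change enables. The endpoint, model name, and token below are placeholders for whatever a local OpenAI-compatible server (Ollama, LM Studio, vLLM, etc.) exposes, and the import path assumes the `pandasai_openai` extension shown in the diffs above:

```python
from pandasai_openai import OpenAI

# Point the LLM at a local OpenAI-compatible server instead of api.openai.com.
llm = OpenAI(
    api_token="sk-placeholder",            # local servers usually ignore this
    api_base="http://localhost:11434/v1",  # e.g. an Ollama endpoint
    model="llama3",                        # must appear in GET {api_base}/models
    is_chat_model=True,                    # new kwarg introduced by this PR
)
```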