feat(agent): Persistent execute_code session #7078
base: master
Conversation
This PR exceeds the recommended size of 1000 lines. Please make sure you are NOT addressing multiple issues with one PR. Note this PR might be rejected due to its size.
❌ Deploy Preview for auto-gpt-docs canceled.
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request. |
@CodiumAI-Agent /review
PR Review (review updated until commit 60cdfd7)
Code feedback:
✨ Review tool usage guide: The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR. See the review usage page for a comprehensive guide on using this tool.
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
Force-pushed from 970c95c to 60cdfd7.
@CodiumAI-Agent /review
Persistent review updated to latest commit 60cdfd7.
Force-pushed from 42b8c17 to 4f91372.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #7078      +/-   ##
==========================================
+ Coverage   44.65%   44.81%   +0.16%
==========================================
  Files         133      133
  Lines        6306     6321      +15
  Branches      822      824       +2
==========================================
+ Hits         2816     2833      +17
+ Misses       3379     3377       -2
  Partials      111      111

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Force-pushed from 23ffd50 to 97ac81b.
Thank you for submitting this! It needs a bit of work but it will be very cool to get this merged :)
Force-pushed from 28b9510 to 6a8e728.
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
Force-pushed from 6a8e728 to 4fabc34.
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
- Add notebook libs
- Add two tests for the execute_code part
- Add a Python kernel to each agent
- Change the execute_code command to persist the state of executed code
Force-pushed from 4fabc34 to 02897f4.
This looks OK to me overall, but:
- It introduces new dependencies.
- I'm not sure if this is going to work properly in a container and on the agent protocol server (please test); also refer to feat(forge): Add `mount` method to `FileStorage` & execute code in mounted workspace #7115.
- benchmark = ["agbenchmark @ git+https://github.com/Significant-Gravitas/AutoGPT.git#subdirectory=benchmark"]
+ benchmark = ["agbenchmark @ file:///home/mk/.cache/pypoetry/virtualenvs/agpt-xgev2_OR-py3.11/src/AutoGPT/benchmark"]
This happens to me for some reason as well; please revert this to the git address, and be careful when updating Poetry packages.
Oh sure, thanks.
OK, I'll test it with the agent protocol.
if not self.legacy_config.execute_local_commands:
    logger.info(
        "Local shell commands are disabled. To enable them,"
        " set EXECUTE_LOCAL_COMMANDS to 'True' in your config file."
    )
self.notebook = new_notebook()
With this, does it always make a new notebook on init between instances of the agent? Will that session be shared across multiple agents running within the agent protocol service?
@kcze I'm not sure how components are initialized yet.
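To make the reviewer's question concrete: the intended behavior is one session per agent, created on init and not shared between agents. A minimal sketch of that contract, using a plain `exec()` namespace as a stand-in for the Jupyter kernel the PR actually attaches (the `AgentSession` class and its names are hypothetical, not from the PR):

```python
# Hypothetical sketch: one persistent execution namespace per agent.
# The real PR attaches a Python kernel and notebook per agent; a plain
# dict passed to exec() stands in for the kernel session here.

class AgentSession:
    def __init__(self) -> None:
        # Created on agent init, like the kernel/notebook in the PR.
        self.namespace: dict = {}

    def execute(self, code: str) -> None:
        # State written by one call is visible to later calls.
        exec(code, self.namespace)

agent_a = AgentSession()
agent_b = AgentSession()

agent_a.execute("x = 41")
agent_a.execute("x += 1")  # persists within agent_a's session

print("x" in agent_a.namespace)  # True: state survives across calls
print("x" in agent_b.namespace)  # False: sessions are per-agent
```

If agents on the agent protocol server were to share one kernel, the second print would be True as well, which is exactly the isolation question raised above.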
def _execute_code_in_agent_python_session(self, code: str) -> str:
    """
    This function will run code on python_kernel. `self.python_kernel.execute(code)`
    does not return stdout and error direclty.The while part
nit:
- does not return stdout and error direclty.The while part
+ does not return stdout and error directly. The while part
raise IOError(
    f"{io_msg['content']['evalue']}\n \
    {io_msg['content']['traceback']}"
)
nit: is IOError appropriate here?
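Since the kernel error payload carries `evalue` and `traceback` fields for a failed code execution rather than an I/O failure, a `RuntimeError` is arguably the better fit. A hedged sketch of that alternative, again modeling the kernel session with `exec()` (the `run_in_session` helper is hypothetical, not the PR's code):

```python
import traceback

def run_in_session(code: str, namespace: dict) -> None:
    # Stand-in for the kernel's io_msg handling: on failure, surface the
    # error value and traceback, but as RuntimeError rather than IOError,
    # since the failure is in the executed code, not in I/O.
    try:
        exec(code, namespace)
    except Exception as exc:
        raise RuntimeError(f"{exc}\n{traceback.format_exc()}") from exc

ns: dict = {}
run_in_session("y = 10", ns)  # succeeds; state kept in ns
try:
    run_in_session("y / 0", ns)
except RuntimeError as err:
    print("division by zero" in str(err))  # True
```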
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
Background

As we talked about in Persistent Python session / Jupyter Notebook integration, we need to change the `execute_code` command to save the state of each piece of executed code. We attach a Python kernel and a notebook to each agent when initializing the agent. After that, we can use that kernel to run Python code, and the kernel will keep the session until the agent gets destroyed. The error-handling part may be a little dirty, and I think maybe we can find something better than `kernel.execute`. You can check the tests for `execute_code` to see the scenarios we can support right now.

Changes 🏗️

In the `execute_code` command, instead of creating a temp file and running that file, we run the code directly with the Python kernel.

PR Quality Scorecard ✨
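The behavior change described in the PR description can be sketched briefly: the old path wrote code to a temp file and ran it in a fresh process each time, so no state survived between calls; the new path executes in one long-lived session, so it does. The sketch below models this with `exec()` and stdout capture (the `execute_code` helper here is a hypothetical stand-in, not the PR's kernel-backed implementation):

```python
import io
from contextlib import redirect_stdout

# One long-lived namespace standing in for the per-agent kernel session.
session: dict = {}

def execute_code(code: str) -> str:
    # Run code in the shared session and return whatever it printed,
    # mimicking how the kernel path collects stdout from io messages.
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(code, session)
    return buf.getvalue()

execute_code("greeting = 'hello'")     # first call defines state
out = execute_code("print(greeting)")  # later call still sees it
print(out.strip())  # hello
```

Under the old temp-file approach, the second call would fail with a `NameError`, since each invocation started from an empty interpreter.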
- …agbenchmark to verify that these changes do not regress performance? ✅ +10 pts