Commit 2433836: Update README.md
1 parent 4d38afc

1 file changed: README.md (50 additions & 4 deletions)
@@ -44,6 +44,56 @@ This will save the code in `persistent_source.py`

Change the task in the `task.txt` file to perform another task.

## Using it as a standalone package in your program

You can reuse the code from https://github.com/paolorechia/code-it/blob/main/code_it/__main__.py

Here's the bare minimum code to use this library:

```python
from code_it.code_editor.python_editor import PythonCodeEditor
from code_it.models import build_text_generation_web_ui_client_llm, build_llama_base_llm
from code_it.task_executor import TaskExecutor, TaskExecutionConfig


code_editor = PythonCodeEditor()
model_builder = build_llama_base_llm
config = TaskExecutionConfig()

task_executor = TaskExecutor(code_editor, model_builder, config)

with open("task.txt", "r") as fp:
    task = fp.read()
task_executor.execute(task)
```

Here we import the `PythonCodeEditor` (currently the only supported editor) along with a llama LLM builder. Note that this assumes a server running on 0.0.0.0:8000, which comes from my other repo: https://github.com/paolorechia/learn-langchain/blob/main/servers/vicuna_server.py

You can easily switch to the text-generation-web-ui tool from oobabooga by importing the builder `build_text_generation_web_ui_client_llm` instead. Implementing your own model client should also be straightforward; look at the source code in https://github.com/paolorechia/code-it/blob/main/code_it/models.py
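Swapping backends amounts to passing a different builder callable to `TaskExecutor`. A minimal sketch of that pattern, with hypothetical stand-in builders (the real ones live in `code_it.models` and return actual LLM clients):

```python
from typing import Callable


def build_llama_base_llm() -> str:
    # Hypothetical stand-in: the real builder constructs a client that
    # talks to the vicuna server on 0.0.0.0:8000.
    return "llama-client"


def build_text_generation_web_ui_client_llm() -> str:
    # Hypothetical stand-in for the text-generation-web-ui client builder.
    return "webui-client"


def make_executor(model_builder: Callable[[], str]) -> str:
    # TaskExecutor receives the builder itself, not a built model, so
    # switching backends is just a matter of passing a different function.
    return model_builder()


print(make_executor(build_text_generation_web_ui_client_llm))  # webui-client
```

Because the executor only calls the builder, any zero-argument callable returning a compatible client will do.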
### Modifying the behavior

Notice that in the example above we imported `TaskExecutionConfig`. Let's look at this class:
```python
@dataclass
class TaskExecutionConfig:
    execute_code = True
    install_dependencies = True
    apply_linter = True
    check_package_is_in_pypi = True
    log_to_stdout = True
    coding_samples = 3
    code_sampling_strategy = "PYLINT"
    sampling_temperature_multiplier = 0.1
    dependency_samples = 3
    max_coding_attempts = 5
    dependency_install_attempts = 5
    planner_temperature = 0
    coder_temperature = 0
    linter_temperature = 0.3
    dependency_tracker_temperature = 0.2
```

You can change these parameters to adjust how the program behaves. Not all settings apply at the same time; for instance, if you set `code_sampling_strategy` to `NO_SAMPLING`, then the `sampling_temperature_multiplier` parameter is of course not used.

To understand these settings better, read the task execution code directly, as there is no detailed documentation for this yet: https://github.com/paolorechia/code-it/blob/main/code_it/task_executor.py

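Since the config fields are plain attributes, overriding them is just attribute assignment on the instance before handing it to `TaskExecutor`. A minimal sketch mirroring a few of the fields above (the class here is an illustrative stand-in; import the real one from `code_it.task_executor`):

```python
from dataclasses import dataclass


@dataclass
class TaskExecutionConfig:
    # Illustrative subset of the real config's fields.
    apply_linter: bool = True
    coding_samples: int = 3
    code_sampling_strategy: str = "PYLINT"
    coder_temperature: float = 0.0


config = TaskExecutionConfig()
config.code_sampling_strategy = "NO_SAMPLING"  # sampling multiplier now unused
config.coder_temperature = 0.2                 # nudge the coder to be less greedy

print(config.code_sampling_strategy)  # NO_SAMPLING
```

Untouched fields keep their defaults, so you only need to set the options you actually want to change.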
## Using it with Langchain
@@ -55,11 +105,7 @@ Change the task in the `task.txt` file to perform another task.

-## Modifying the behavior
-When you're importing `code_it` package in your own code, you can change some settings on how it should behave. Specifically, these are the supported config options at the moment:
-```python
-```