Commit da1d9a1

cleanup: add better documentation and comments
1 parent 662af37 commit da1d9a1

10 files changed

Lines changed: 81 additions & 50 deletions

fast-api-react/README.md

Lines changed: 25 additions & 4 deletions
````diff
@@ -28,27 +28,48 @@ OPENAI_API_KEY="<openai-key>"
 poetry install
 ```
 
-## Running the API
+### Running the API
 
 To start the FastAPI server, run the following command in the terminal:
 
 ```shell
 poetry run api
 ```
 
-## Running the Hatchet Worker
+### Running the Hatchet Worker
 
 In a separate terminal, start the Hatchet worker by running the following command:
 
 ```shell
 poetry run hatchet
 ```
 
-## (Optional) Running the Example Frontend Application
+### (Optional) Running the Example Frontend Application
 
 We've included a basic chat engine frontend to play with the example workflow. To run this script:
 
-1. Open a new terminal window and cd into the `fast-api-react/frontend` directory.
+1. Open a new terminal window and cd into the [`./frontend`](./frontend/) directory.
 2. Run `npm install`
 3. Run `npm start`
 4. By default you can access the application in your browser at `http://localhost:3000` or by following the instructions in the terminal window.
+
+## Project Overview
+
+### Example Workflows
+
+The project contains two example workflows in the [`./backend/src/workflows`](./backend/src/workflows/) directory. These workflows are registered with Hatchet in [`./backend/src/workflows/main.py`](./backend/src/workflows/main.py), which is started when running `poetry run hatchet`.
+
+1. [Simple Response Generation](./backend/src/workflows/simple.py): a single-step workflow making a request to OpenAI.
+2. [Basic Retrieval Augmented Generation](./backend/src/workflows/basicrag.py): a multi-step workflow that loads the contents of a website with Beautiful Soup, reasons about the information, and generates a response with OpenAI.
+
+### Exposing the Workflows via a REST API
+
+A common design pattern is to start a Hatchet workflow run from a REST API endpoint. This way, you can handle authentication and authorization as you normally do and let Hatchet handle execution. The simple FastAPI example can be found at [`./backend/src/api/main.py`](./backend/src/api/main.py).
+
+### Starting a Run
+
+The `POST /message` endpoint initiates a Hatchet workflow run, using the message body as input. Because Hatchet operates asynchronously, this endpoint immediately returns a run ID. This ID acts as a reference for clients to track the status and results of the initiated run.
+
+### Streaming Responses
+
+After initiating a workflow run and receiving a run ID, clients can subscribe to updates through a `GET /message/{id}` request. This allows clients to receive real-time notifications and results from the asynchronous Hatchet worker, associated with their specific run ID.
````
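The endpoints documented above can be exercised from any SSE-capable client. As a rough sketch, a client could parse each `data: {...}\n\n` frame the server emits (the frame shape follows `event_stream_generator` in this commit; the `"step_completed"` event type below is a made-up placeholder, not a documented Hatchet event name):

```python
import json

def parse_sse_frame(frame: str) -> dict:
    """Parse a single server-sent-event frame of the form 'data: {...}\\n\\n'."""
    payload = frame.strip()
    if not payload.startswith("data: "):
        raise ValueError("not a data frame")
    return json.loads(payload[len("data: "):])

# Example frame shaped like the ones event_stream_generator yields
frame = "data: " + json.dumps({
    "type": "step_completed",            # hypothetical event type
    "payload": {"message": "hello"},
    "messageId": "run-123",
}) + "\n\n"

event = parse_sse_frame(frame)
print(event["payload"]["message"])  # hello
```

A browser client gets this parsing for free via `EventSource`, as the included frontend does.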

fast-api-react/backend/README.md

Whitespace-only changes.

fast-api-react/backend/poetry.lock

Lines changed: 4 additions & 4 deletions
Some generated files are not rendered by default.

fast-api-react/backend/pyproject.toml

Lines changed: 2 additions & 2 deletions
```diff
@@ -3,7 +3,7 @@ name = "src"
 version = "0.0.0"
 description = "Easily run background tasks in FastAPI with Hatchet"
 authors = []
-readme = "README.md"
+readme = "../README.md"
 
 [tool.poetry.scripts]
 api = "src.api.main:start"
@@ -18,7 +18,7 @@ openai = "^1.11.0"
 beautifulsoup4 = "^4.12.3"
 requests = "^2.31.0"
 urllib3 = "1.26.15"
-hatchet-sdk = "0.10.3"
+hatchet-sdk = "^0.10.4"
 
 [build-system]
 requires = ["poetry-core"]
```
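Worth noting: the dependency change above swaps an exact pin for a caret constraint. Under Poetry's caret semantics (as I understand them), a caret requirement on a 0.x version holds the minor version fixed:

```toml
# hatchet-sdk = "0.10.3"   -> only exactly 0.10.3 is allowed
hatchet-sdk = "^0.10.4"    # -> allows >=0.10.4, <0.11.0
```

This lets patch releases of the SDK flow in via `poetry update` without risking a 0.11 breaking change.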

fast-api-react/backend/src/api/main.py

Lines changed: 15 additions & 7 deletions
```diff
@@ -33,31 +33,39 @@
 
 @app.post("/message")
 def message(data: MessageRequest):
-
-    messageId = hatchet.client.admin.run_workflow("GenerateWorkflow", {
+    ''' This endpoint is called by the client to start a message generation workflow. '''
+    messageId = hatchet.client.admin.run_workflow("BasicRagWorkflow", {
         "request": data.model_dump()
     })
 
-    # save step message id -> workflowRunId
+    # normally, we'd save the workflowRunId to a database and return a reference to the client
+    # for this simple example, we just return the workflowRunId
 
-    return {"workflowRunId": messageId}
+    return {"messageId": messageId}
 
 
 def event_stream_generator(workflowRunId):
+    ''' This helper function is a generator that yields events from the Hatchet event stream. '''
     stream = hatchet.client.listener.stream(workflowRunId)
 
     for event in stream:
+        # you can filter and transform event data here before it is sent to the client
         data = json.dumps({
             "type": event.type,
             "payload": event.payload,
-            "workflowRunId": workflowRunId
+            "messageId": workflowRunId
         })
         yield "data: " + data + "\n\n"
 
 
-@app.get("/stream/{messageId}")
+@app.get("/message/{messageId}")
 async def stream(messageId: str):
-    # message id -> workflowRunId
+    '''
+    in a normal application you might use the message id to look up a workflowRunId
+    for this simple case, we have no persistence and just use the message id as the workflowRunId
+
+    you might also consider looking up the workflowRunId in a database and returning the results
+    if the message has already been processed
+    '''
     workflowRunId = messageId
    return StreamingResponse(event_stream_generator(workflowRunId), media_type='text/event-stream')
```

fast-api-react/backend/src/workflows/generate.py renamed to fast-api-react/backend/src/workflows/basicrag.py

Lines changed: 1 addition & 26 deletions
```diff
@@ -3,13 +3,12 @@
 from bs4 import BeautifulSoup
 from openai import OpenAI
 import requests
-import time
 
 openai = OpenAI()
 
 
 @hatchet.workflow(on_events=["question:create"])
-class GenerateWorkflow:
+class BasicRagWorkflow:
 
     @hatchet.step()
     def start(self, context: Context):
@@ -88,27 +87,3 @@ def generate_response(self, ctx: Context):
             "status": "idle",
             "message": completion.choices[0].message.content,
         }
-
-
-@hatchet.workflow()
-class SimpleWorkflow:
-    @hatchet.step()
-    def start(self, ctx: Context):
-        message = ctx.workflow_input()["messages"][-1]
-
-        prompt = ctx.playground("prompt", "The user is asking the following question: {message}")
-
-        prompt = prompt.format(message=message['content'])
-
-        model = ctx.playground("model", "gpt-3.5-turbo")
-
-        completion = openai.chat.completions.create(
-            model=model,
-            messages=[
-                {"role": "system", "content": prompt},
-                message
-            ]
-        )
-
-        return {
-            "answer": completion.choices[0].message.content,
-        }
```
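The renamed `BasicRagWorkflow` loads a web page, reasons about it, and generates a response. Its page-loading step boils down to stripping HTML to visible text; a dependency-free sketch of that idea using only the standard library is shown below (BeautifulSoup, which the workflow actually uses, handles this far more robustly):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def page_text(html: str) -> str:
    """Return the visible text of an HTML document as one whitespace-joined string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

print(page_text("<html><body><h1>Docs</h1><script>x=1</script><p>Hello</p></body></html>"))
# Docs Hello
```

The text extracted this way is what gets handed to the model in the later "reason" and "generate" steps.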
Lines changed: 0 additions & 1 deletion
```diff
@@ -1,6 +1,5 @@
 from hatchet_sdk import Hatchet
 from dotenv import load_dotenv
-
 load_dotenv()
 
 hatchet = Hatchet(debug=True)
```
Lines changed: 3 additions & 3 deletions
```diff
@@ -1,12 +1,12 @@
 from .hatchet import hatchet
-from .generate import GenerateWorkflow, SimpleWorkflow
+from .basicrag import BasicRagWorkflow
+from .simple import SimpleWorkflow
 
 
 def start():
     worker = hatchet.worker('example-worker')
 
-    generate = GenerateWorkflow()
-    worker.register_workflow(generate)
+    worker.register_workflow(BasicRagWorkflow())
     worker.register_workflow(SimpleWorkflow())
 
     worker.start()
```
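The registration code above follows a simple register-then-start pattern: construct a worker, attach each workflow instance, then block on `start()`. Stripped of the Hatchet specifics, the shape is roughly the following (`FakeWorker` is an illustrative stand-in, not the real `hatchet.worker` API, and the real `start()` blocks and polls for work rather than returning):

```python
class FakeWorker:
    """Illustrative stand-in for a Hatchet worker: collect workflows, then run."""

    def __init__(self, name: str):
        self.name = name
        self.workflows = []

    def register_workflow(self, workflow) -> None:
        self.workflows.append(workflow)

    def start(self) -> list[str]:
        # a real worker would block here; we just report what was registered
        return [type(w).__name__ for w in self.workflows]

# stand-ins for the two workflow classes registered in main.py
class BasicRagWorkflow: ...
class SimpleWorkflow: ...

worker = FakeWorker("example-worker")
worker.register_workflow(BasicRagWorkflow())
worker.register_workflow(SimpleWorkflow())
print(worker.start())  # ['BasicRagWorkflow', 'SimpleWorkflow']
```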
Lines changed: 31 additions & 0 deletions
```diff
@@ -0,0 +1,31 @@
+from .hatchet import hatchet
+from hatchet_sdk import Context
+from openai import OpenAI
+
+openai = OpenAI()
+
+
+@hatchet.workflow()
+class SimpleWorkflow:
+    @hatchet.step()
+    def start(self, ctx: Context):
+        message = ctx.workflow_input()["messages"][-1]
+
+        prompt = ctx.playground(
+            "prompt", "The user is asking the following question: {message}")
+
+        prompt = prompt.format(message=message['content'])
+
+        model = ctx.playground("model", "gpt-3.5-turbo")
+
+        completion = openai.chat.completions.create(
+            model=model,
+            messages=[
+                {"role": "system", "content": prompt},
+                message
+            ]
+        )
+
+        return {
+            "answer": completion.choices[0].message.content,
+        }
```
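With the Hatchet context calls replaced by plain values, the prompt assembly in `SimpleWorkflow.start` reduces to ordinary string formatting. The sketch below mirrors the step body (minus `ctx.playground` and the OpenAI call); the message shape follows the `workflow_input()["messages"]` access in the step:

```python
messages = [
    {"role": "user", "content": "What is Hatchet?"},
]

# take the most recent message, as the step does
message = messages[-1]

# the default prompt template from the step, then substitute the content
prompt = "The user is asking the following question: {message}"
prompt = prompt.format(message=message["content"])

print(prompt)  # The user is asking the following question: What is Hatchet?
```

In the real step, `ctx.playground` makes both the template and the model name tweakable from the Hatchet dashboard before this formatting happens.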

fast-api-react/frontend/src/App.tsx

Lines changed: 3 additions & 3 deletions
```diff
@@ -22,7 +22,7 @@ function App() {
   useEffect(() => {
     if (!openRequest) return;
 
-    const sse = new EventSource(`${API_URL}/stream/${openRequest}`, {
+    const sse = new EventSource(`${API_URL}/message/${openRequest}`, {
       withCredentials: true,
     });
 
@@ -38,7 +38,7 @@ function App() {
       {
         role: "assistant",
         content: data.payload.message,
-        hatchetRunId: data.workflowRunId,
+        messageId: data.messageId,
       },
     ]);
     setOpenRequest(undefined);
@@ -81,7 +81,7 @@ function App() {
 
     if (response.ok) {
       // Handle successful response
-      setOpenRequest((await response.json()).workflowRunId);
+      setOpenRequest((await response.json()).messageId);
     } else {
       // Handle error response
     }
```
