```diff
-    f'User: extract temperature and duration values for each step of the following recipe. Use the following format for each sentence of the recipe: temperature=..., duration=....\nRecipe:\n{recipe}\n\nAssistant:',
-    f'User: from the following recipe, list temperature and time like: temperature=..., duration=...\n{recipe}\n\nAssistant:',
-    f'User: summarize temperature and time values for this recipe, where applicable in the following format: step1: temperature=..., time=...; step2: etc.\n{recipe}\n\nAssistant:'
```
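The templates above each substitute a `recipe` string into an instruction prompt via an f-string. A minimal sketch of how the retained template expands, using a hypothetical one-step recipe (the recipe text below is illustrative, not from the diff):

```python
# Hypothetical one-step recipe, used only to show the template substitution.
recipe = "Preheat the oven to 180C and bake for 25 minutes."

prompt = (
    f'User: extract temperature and duration values for each step of the '
    f'following recipe. Use the following format for each sentence of the '
    f'recipe: temperature=..., duration=....\nRecipe:\n{recipe}\n\nAssistant:'
)
print(prompt)
```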
```diff
+    prompt = f'User: extract temperature and duration values for each step of the following recipe. Use the following format for each sentence of the recipe: temperature=..., duration=....\nRecipe:\n{recipe}\n\nAssistant:'
+    prompts.append(prompt)
+
+sampling_params = SamplingParams(max_tokens=300)
+
+start_time = time.perf_counter()
+outputs = llm.generate(prompts, sampling_params)
+end_time = time.perf_counter()
+
+print(f"Batch inferencing completed in {end_time-start_time:.2f} seconds")
+
+start_time_output = time.perf_counter()
+# iterate through the outputs of each prompt
+results = []
+for i, output in enumerate(outputs):
+    result = {
+        "input": prompts[i],
+        "output": output.outputs[0].text
+    }
+    results.append(result)
+
+# write output to a single file for the whole batch
```
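The trailing comment leaves the batch write itself outside the hunk. A minimal sketch of one way to do it, assuming JSON output and a hypothetical `batch_results.json` filename (neither the format nor the filename is specified in the diff):

```python
import json

# Hypothetical results list with the same shape as the `results` built in
# the loop above: one {"input", "output"} dict per prompt in the batch.
results = [
    {"input": "User: extract temperature and duration...",
     "output": "temperature=180C, duration=25min"},
]

# Write all batch results to a single JSON file.
with open("batch_results.json", "w") as f:
    json.dump(results, f, indent=2)

# Read it back to confirm the round trip.
with open("batch_results.json") as f:
    loaded = json.load(f)
```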