@@ -71,11 +71,11 @@ def create(
 
     overflow_strategy: The strategy for handling content exceeding the model's maximum input length.
 
-    `auto`, which is the default and recommended setting, currently behaves the same
-    as `chunk`, which intelligently breaks the input up into smaller chunks and then
-    stitches the results back together into a single prediction. In the future
-    `auto` may implement even more sophisticated strategies for handling long
-    contexts such as leveraging chunk overlap and/or a specialized stitching model.
+    `auto`, which is the recommended setting, currently behaves the same as `chunk`,
+    which intelligently breaks the input up into smaller chunks and then stitches
+    the results back together into a single prediction. In the future `auto` may
+    implement even more sophisticated strategies for handling long contexts such as
+    leveraging chunk overlap and/or a specialized stitching model.
 
     `chunk` breaks the input up into smaller chunks that fit within the model's
     context window and then intelligently merges the results into a single
@@ -159,11 +159,11 @@ async def create(
 
     overflow_strategy: The strategy for handling content exceeding the model's maximum input length.
 
-    `auto`, which is the default and recommended setting, currently behaves the same
-    as `chunk`, which intelligently breaks the input up into smaller chunks and then
-    stitches the results back together into a single prediction. In the future
-    `auto` may implement even more sophisticated strategies for handling long
-    contexts such as leveraging chunk overlap and/or a specialized stitching model.
+    `auto`, which is the recommended setting, currently behaves the same as `chunk`,
+    which intelligently breaks the input up into smaller chunks and then stitches
+    the results back together into a single prediction. In the future `auto` may
+    implement even more sophisticated strategies for handling long contexts such as
+    leveraging chunk overlap and/or a specialized stitching model.
 
     `chunk` breaks the input up into smaller chunks that fit within the model's
     context window and then intelligently merges the results into a single
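The chunk-and-stitch behavior the docstring describes can be sketched roughly as below. This is illustrative only: the helper name `chunk_and_stitch`, the `predict` callable, and fixed-size chunk boundaries are all assumptions for the sketch, whereas the actual `chunk` strategy splits at intelligent boundaries and merges (rather than simply concatenating) the per-chunk results.

```python
def chunk_and_stitch(text: str, max_len: int, predict) -> str:
    """Naive sketch of a `chunk`-style overflow strategy.

    Splits `text` into pieces no longer than `max_len` (the model's
    context budget), runs `predict` on each piece, then stitches the
    per-chunk results back into a single prediction.
    """
    chunks = [text[i:i + max_len] for i in range(0, len(text), max_len)]
    return "".join(predict(chunk) for chunk in chunks)
```

The future refinements the docstring mentions for `auto` (chunk overlap, a specialized stitching model) would replace the fixed-size slicing and the plain `"".join` merge step here.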