Commit d8d08e7

docs(api): minor description correction
1 parent f7627f8 commit d8d08e7

3 files changed

Lines changed: 17 additions & 17 deletions

.stats.yml

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
 configured_endpoints: 5
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/isaacus%2Fisaacus-46de7b353c33e2b6c139e564caeec9e0462ad714121690f65167a7943e325000.yml
-openapi_spec_hash: 6527287f9709a8741c9cc5b4181d7bb1
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/isaacus%2Fisaacus-c2fccbd5de79f8d02aedb89c4674a7a21988dbb965671d3142196d1f4e98d46e.yml
+openapi_spec_hash: 34ca779f9d68f45b3a0f3be3889994fb
 config_hash: 9040e7359f066240ad536041fb2c5185

src/isaacus/resources/enrichments.py

Lines changed: 10 additions & 10 deletions
@@ -71,11 +71,11 @@ def create(
 
 overflow_strategy: The strategy for handling content exceeding the model's maximum input length.
 
-`auto`, which is the default and recommended setting, currently behaves the same
-as `chunk`, which intelligently breaks the input up into smaller chunks and then
-stitches the results back together into a single prediction. In the future
-`auto` may implement even more sophisticated strategies for handling long
-contexts such as leveraging chunk overlap and/or a specialized stitching model.
+`auto`, which is the recommended setting, currently behaves the same as `chunk`,
+which intelligently breaks the input up into smaller chunks and then stitches
+the results back together into a single prediction. In the future `auto` may
+implement even more sophisticated strategies for handling long contexts such as
+leveraging chunk overlap and/or a specialized stitching model.
 
 `chunk` breaks the input up into smaller chunks that fit within the model's
 context window and then intelligently merges the results into a single

@@ -159,11 +159,11 @@ async def create(
 
 overflow_strategy: The strategy for handling content exceeding the model's maximum input length.
 
-`auto`, which is the default and recommended setting, currently behaves the same
-as `chunk`, which intelligently breaks the input up into smaller chunks and then
-stitches the results back together into a single prediction. In the future
-`auto` may implement even more sophisticated strategies for handling long
-contexts such as leveraging chunk overlap and/or a specialized stitching model.
+`auto`, which is the recommended setting, currently behaves the same as `chunk`,
+which intelligently breaks the input up into smaller chunks and then stitches
+the results back together into a single prediction. In the future `auto` may
+implement even more sophisticated strategies for handling long contexts such as
+leveraging chunk overlap and/or a specialized stitching model.
 
 `chunk` breaks the input up into smaller chunks that fit within the model's
 context window and then intelligently merges the results into a single
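
In practice, the strategy is just a string argument on the create call. A minimal usage sketch, not taken from this diff: the `Isaacus` client class follows the usual Stainless naming convention, and `model` and `texts` are hypothetical parameters included only to make the call complete.

from isaacus import Isaacus  # assumed entry point; not shown in this commit

client = Isaacus()  # assumed to read the API key from the environment

# `auto` is the recommended setting; per the docstring above, it currently
# chunks over-length input and stitches the results into one prediction.
enrichment = client.enrichments.create(
    model="...",    # hypothetical parameter, not confirmed by this diff
    texts=["..."],  # hypothetical parameter, not confirmed by this diff
    overflow_strategy="auto",  # one of "auto", "drop_end", "chunk"
)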

src/isaacus/types/enrichment_create_params.py

Lines changed: 5 additions & 5 deletions
@@ -28,11 +28,11 @@ class EnrichmentCreateParams(TypedDict, total=False):
 overflow_strategy: Optional[Literal["auto", "drop_end", "chunk"]]
 """The strategy for handling content exceeding the model's maximum input length.
 
-`auto`, which is the default and recommended setting, currently behaves the same
-as `chunk`, which intelligently breaks the input up into smaller chunks and then
-stitches the results back together into a single prediction. In the future
-`auto` may implement even more sophisticated strategies for handling long
-contexts such as leveraging chunk overlap and/or a specialized stitching model.
+`auto`, which is the recommended setting, currently behaves the same as `chunk`,
+which intelligently breaks the input up into smaller chunks and then stitches
+the results back together into a single prediction. In the future `auto` may
+implement even more sophisticated strategies for handling long contexts such as
+leveraging chunk overlap and/or a specialized stitching model.
 
 `chunk` breaks the input up into smaller chunks that fit within the model's
 context window and then intelligently merges the results into a single
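
The `Literal` type above means a type checker enforces the three allowed strategy values at call sites. A self-contained sketch of that constraint, with the class reduced to the one field shown in this diff:

from typing import Literal, Optional

from typing_extensions import TypedDict


class EnrichmentCreateParams(TypedDict, total=False):
    # total=False makes the field optional; None defers to the server default.
    overflow_strategy: Optional[Literal["auto", "drop_end", "chunk"]]


params: EnrichmentCreateParams = {"overflow_strategy": "chunk"}  # accepted
bad: EnrichmentCreateParams = {"overflow_strategy": "truncate"}  # flagged by mypy/pyright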
