fix: replace all instances of deprecated torch._check_is_size with torch._check #1940
Merged
matthewdouglas merged 1 commit into bitsandbytes-foundation:main on May 7, 2026
Conversation
PyTorch's `_check_is_size` is being removed in a future release per pytorch/pytorch#169400 ("Use `_check(i >= 0)` instead"). This replaces all 23 occurrences with the recommended `torch._check` pattern, matching the existing `torch._check` style already used elsewhere in these files.

Affected files (23 occurrences):

- bitsandbytes/_ops.py (8)
- bitsandbytes/backends/triton/ops.py (5)
- bitsandbytes/backends/default/ops.py (4)
- bitsandbytes/backends/cpu/ops.py (3)
- bitsandbytes/backends/cuda/ops.py (2)
- bitsandbytes/backends/hpu/ops.py (1)

`torch._check` is available in all torch >= 2.0; bitsandbytes requires torch >= 2.2 per pyproject.toml, so no backwards-compat shim is needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
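For context, a small sketch (assuming a PyTorch build where the deprecation warning is already active) of how the deprecated helper surfaces the FutureWarning:

```python
import warnings

import torch

# Sketch: surface the deprecation warning, if the installed torch emits one.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    torch._check_is_size(256)

for w in caught:
    print(w.category.__name__, w.message)  # e.g. FutureWarning: "... Use _check(i >= 0) instead"
```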
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
matthewdouglas (Member) approved these changes on May 7, 2026:

LGTM, thanks!
Merged commit 4396187 into bitsandbytes-foundation:main. 94 of 95 checks passed.
Contributor (Author):

You are most welcome. I'm very happy to be of service. I appreciate the excellent community support you've provided.
Summary
Closes #1933.
PyTorch's `torch._check_is_size` is being removed in a future release per pytorch/pytorch#169400 — the FutureWarning advises "Use `_check(i >= 0)` instead". This PR finds and replaces every instance of the older approach with the recommended substitution.

Scope
23 occurrences across 6 files:
- `bitsandbytes/_ops.py`
- `bitsandbytes/backends/triton/ops.py`
- `bitsandbytes/backends/default/ops.py`
- `bitsandbytes/backends/cpu/ops.py`
- `bitsandbytes/backends/cuda/ops.py`
- `bitsandbytes/backends/hpu/ops.py`

Each call to `torch._check_is_size(blocksize)` is replaced with the upstream-recommended pattern `torch._check(blocksize >= 0, ...)`; a sketch follows below. This matches the existing
`torch._check(...)` style already used on adjacent lines in the same files.
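As a minimal before/after sketch of the substitution (the error-message lambda is illustrative, not copied from the PR diff):

```python
import torch

blocksize = 256

# Before (deprecated; scheduled for removal upstream):
# torch._check_is_size(blocksize)

# After (upstream-recommended replacement; the message text is hypothetical):
torch._check(blocksize >= 0, lambda: f"blocksize must be non-negative, got {blocksize}")
```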
Backwards compatibility

`torch._check` is available in all torch >= 2.0; bitsandbytes requires torch >= 2.2 per `pyproject.toml`. No backwards-compat shim is needed.
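A trivial local sanity check of that availability claim (a sketch, not part of the PR):

```python
import torch

# Confirm the installed torch exposes the replacement API.
assert callable(getattr(torch, "_check", None)), torch.__version__
print(f"torch {torch.__version__}: torch._check is available")
```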
Test plan

- `pre-commit run --files <changed>` passes (ruff + ruff-format + hygiene hooks)

Additional audits
Audit structure adopted from @rapsealk's #1931 (asserts → exceptions refactor) for similar mechanical-substitution PRs.
- `python -m py_compile` on all 6 changed files: all pass
- no remaining `torch._check_is_size` anywhere in `bitsandbytes/` after the substitution (count went 23 → 0; see the sketch after this list)
- the checked argument is `blocksize` at all 23 sites — no slip in argument naming
- every call site sits under either a `@register_fake(...)` (FakeTensor / dispatch hook) or a `@register_kernel(...)` (backend kernel impl) decorator
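For illustration, a hypothetical version of the occurrence-count audit (a sketch, not the exact commands run for this PR; assumes execution from the repository root):

```python
# Hypothetical audit sketch: count remaining torch._check_is_size call sites.
import pathlib
import re

pattern = re.compile(r"torch\._check_is_size\s*\(")
hits = [
    (str(path), lineno)
    for path in pathlib.Path("bitsandbytes").rglob("*.py")
    for lineno, line in enumerate(path.read_text().splitlines(), start=1)
    if pattern.search(line)
]
print(len(hits), hits)  # expected after this PR: 0 []
```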
Semantic equivalence under SymInt (FakeTensor / torch.compile)

`_check_is_size(i)` does two things: (1) it asserts `i >= 0`, and (2) it informs PyTorch's dynamic-shape engine that `i` is a size (i.e., non-negative). The substitution `_check(blocksize >= 0, ...)` preserves both behaviors:

- When `blocksize` is a plain `int`, both produce identical runtime checks.
- When `blocksize` is a `SymInt` (which can flow through `register_fake` under torch.compile), `blocksize >= 0` evaluates to a `SymBool`, and `torch._check` of a `SymBool` informs the dynamic-shape engine of the same `>= 0` guard.

This matches the upstream migration explicitly recommended in pytorch/pytorch#169400 ("Use `_check(i >= 0)` instead"). A compile-mode sketch follows below.
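To illustrate the SymInt path, a minimal sketch (an assumed setup, not bitsandbytes code): under `torch.compile(dynamic=True)` an integer argument may be traced as a `SymInt`, so the `>= 0` comparison yields a `SymBool` and `torch._check` records the guard:

```python
import torch

@torch.compile(dynamic=True)
def rows(x: torch.Tensor, blocksize: int) -> torch.Tensor:
    # Under dynamic tracing, blocksize may be a SymInt; the comparison then
    # produces a SymBool and torch._check registers the >= 0 guard.
    torch._check(blocksize >= 0, lambda: "blocksize must be non-negative")
    return x.reshape(-1, blocksize)

print(rows(torch.arange(64, dtype=torch.float32), 8).shape)  # torch.Size([8, 8])
```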
Out of scope

- `csrc/` (C++/CUDA) — `_check_is_size` is a Python-only API
- `tests/` and `examples/` — no calls to `_check_is_size` outside `bitsandbytes/` package code