
Commit 338706c

deepme987 and claude committed
fix: use non-deprecated is_autocast_enabled API and fix docstring
- torch.is_autocast_enabled() without device_type is deprecated in PyTorch 2.8; use torch.is_autocast_enabled(device_type) instead
- Update docstring to accurately describe the root cause (upstream autocast leakage from nodes like SAM3, not fp8 specifically)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
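A minimal sketch of the difference between the two calls (the device_type selection below is illustrative; torch.autocast and the device-aware torch.is_autocast_enabled overload are existing PyTorch APIs):

import torch

device_type = "cuda" if torch.cuda.is_available() else "cpu"

# Deprecated: the zero-argument form. Recent PyTorch emits a deprecation
# warning for it, and it only reflects the CUDA autocast state.
#   torch.is_autocast_enabled()

# Non-deprecated: pass the device type explicitly.
with torch.autocast(device_type=device_type):
    assert torch.is_autocast_enabled(device_type)
assert not torch.is_autocast_enabled(device_type)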
1 parent 8e4bc0e commit 338706c

1 file changed



comfy/ldm/wan/model.py

Lines changed: 6 additions & 4 deletions
@@ -175,10 +175,12 @@ def repeat_e(e, x):
 
 
 def _addcmul(x, y, z):
-    """torch.addcmul wrapper that disables autocast to avoid
-    'Unexpected floating ScalarType in at::autocast::prioritize' with fp8 weights."""
-    if torch.is_autocast_enabled():
-        with torch.autocast(device_type=x.device.type, enabled=False):
+    """torch.addcmul wrapper that disables autocast to prevent
+    'Unexpected floating ScalarType in at::autocast::prioritize' when
+    upstream nodes (e.g. SAM3) leave CUDA autocast enabled."""
+    device_type = x.device.type
+    if torch.is_autocast_enabled(device_type):
+        with torch.autocast(device_type=device_type, enabled=False):
             return torch.addcmul(x, y, z)
     return torch.addcmul(x, y, z)
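For context, here is the patched helper in full as it reads after this commit, followed by a hypothetical call sketch; the demo tensors and the scenario of a caller that enters autocast without exiting are illustrative, not taken from the commit:

import torch

def _addcmul(x, y, z):
    """torch.addcmul wrapper that disables autocast to prevent
    'Unexpected floating ScalarType in at::autocast::prioritize' when
    upstream nodes (e.g. SAM3) leave CUDA autocast enabled."""
    device_type = x.device.type
    if torch.is_autocast_enabled(device_type):
        with torch.autocast(device_type=device_type, enabled=False):
            return torch.addcmul(x, y, z)
    return torch.addcmul(x, y, z)

# Hypothetical scenario: a caller enters autocast and never exits before
# invoking the model, mimicking the upstream leakage the commit describes.
device = "cuda" if torch.cuda.is_available() else "cpu"
x, y, z = (torch.randn(8, device=device) for _ in range(3))
with torch.autocast(device_type=device):
    out = _addcmul(x, y, z)  # addcmul runs with autocast disabled internally
assert out.dtype == x.dtype  # result keeps the input dtype (float32 here)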
