fix: prevent autocast crash in WAN model addcmul ops
Wraps torch.addcmul calls in WAN attention blocks in an autocast-disabled
context to prevent the 'Unexpected floating ScalarType in at::autocast::prioritize'
RuntimeError. The error occurs when upstream nodes (e.g. SAM3) leave CUDA autocast
enabled: PyTorch 2.8's autocast promote dispatch for addcmul then hits an
unhandled dtype in the prioritize function.
Uses torch.is_autocast_enabled(device_type) (the non-deprecated API) and applies
the workaround only when autocast is actually active, so there is zero overhead otherwise.