
Commit b0813cc

sayakpaul and stevhliu authored
Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
1 parent 8a78e96 commit b0813cc

1 file changed

Lines changed: 7 additions & 4 deletions

File tree

docs/source/en/training/distributed_inference.md

@@ -237,8 +237,7 @@ By selectively loading and unloading the models you need at a given stage and sh
 
 Use [`~ModelMixin.set_attention_backend`] to switch to a more optimized attention backend. Refer to this [table](../optimization/attention_backends#available-backends) for a complete list of available backends.
 
-> [!NOTE]
-> Most attention backends are compatible with context parallelism. If one is not compatibel with context parallelism, please [file a feature request](https://github.com/huggingface/diffusers/issues/new).
+Most attention backends are compatible with context parallelism. Open an [issue](https://github.com/huggingface/diffusers/issues/new) if a backend is not compatible.

 ### Ring Attention
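As an illustration of the doc text in the hunk above (not part of the commit), switching the backend might look like the sketch below. The pipeline class, checkpoint ID, and the backend name `"flash"` are assumptions, and running it requires a GPU and the model weights.

```python
import torch
from diffusers import FluxPipeline

# Illustrative sketch: checkpoint ID and backend name are assumptions,
# not taken from the commit. Requires a GPU and downloaded weights.
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Switch the transformer's attention layers to the chosen backend.
pipeline.transformer.set_attention_backend("flash")
```

The backend is set on the transformer (the component that runs attention), not on the pipeline object itself.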

@@ -296,7 +295,11 @@ if __name__ == "__main__":
     main()
 ```
 
-The script above needs to be run with a distributed launcher that is compatible with PyTorch. You can use `torchrun` for this: `torchrun --nproc-per-node 2 above_script.py`. `--nproc-per-node` depends on the number of GPUs available.
+The script above needs to be run with a distributed launcher, such as [torchrun](https://docs.pytorch.org/docs/stable/elastic/run.html), that is compatible with PyTorch. `--nproc-per-node` is set to the number of GPUs available.
+
+```shell
+torchrun --nproc-per-node 2 above_script.py
+```

 ### Ulysses Attention
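As a quick aside to the launcher note above (not from the commit): every worker process spawned by `torchrun` receives its identity through environment variables, so a script can verify how many processes were launched. The snippet below falls back to single-process defaults when run without a launcher.

```python
import os

# torchrun exports RANK and WORLD_SIZE to every worker it spawns.
# The defaults make this runnable as a plain single process too.
rank = int(os.environ.get("RANK", "0"))
world_size = int(os.environ.get("WORLD_SIZE", "1"))

print(f"worker {rank} of {world_size}")
```

With `torchrun --nproc-per-node 2`, two processes run this and print `worker 0 of 2` and `worker 1 of 2`.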

@@ -313,7 +316,7 @@ pipeline.transformer.enable_parallelism(config=ContextParallelConfig(ulysses_deg
 
 ### parallel_config
 
-It's possible to pass a `ContextParallelConfig` to `parallel_config` during initializing a model and a pipeline:
+Pass `parallel_config` during model initialization to enable context parallelism.
 
 ```py
 CKPT_ID = "black-forest-labs/FLUX.1-dev"
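The diff is truncated after the `CKPT_ID` line, so the following is an assumption about where that snippet is headed: a sketch of passing the config at initialization rather than calling `enable_parallelism()` afterwards. The `ring_degree=2` value and the two-GPU setup are illustrative; it must be run under a distributed launcher.

```python
import torch
from diffusers import AutoModel, ContextParallelConfig

CKPT_ID = "black-forest-labs/FLUX.1-dev"

# Hypothetical sketch (not the commit's full snippet): supply the parallelism
# config when the model is created. Requires >= 2 GPUs and a torchrun launch.
transformer = AutoModel.from_pretrained(
    CKPT_ID,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    parallel_config=ContextParallelConfig(ring_degree=2),
)
```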
