
Commit 9106e6f

Merge branch 'main' into overhaul-release-workflow
2 parents dc5cd04 + dc8d903 commit 9106e6f

227 files changed

Lines changed: 9658 additions & 3300 deletions


.ai/AGENTS.md

Lines changed: 19 additions & 9 deletions

@@ -10,24 +10,34 @@ Strive to write code as simple and explicit as possible.
 
 ---
 
-### Dependencies
-- No new mandatory dependency without discussion (e.g. `einops`)
-- Optional deps guarded with `is_X_available()` and a dummy in `utils/dummy_*.py`
-
 ## Code formatting
+
 - `make style` and `make fix-copies` should be run as the final step before opening a PR
 
 ### Copied Code
+
 - Many classes are kept in sync with a source via a `# Copied from ...` header comment
 - Do not edit a `# Copied from` block directly — run `make fix-copies` to propagate changes from the source
 - Remove the header to intentionally break the link
 
 ### Models
-- All layer calls should be visible directly in `forward` — avoid helper functions that hide `nn.Module` calls.
-- Avoid graph breaks for `torch.compile` compatibility — do not insert NumPy operations in forward implementations and any other patterns that can break `torch.compile` compatibility with `fullgraph=True`.
-- See the **model-integration** skill for the attention pattern, pipeline rules, test setup instructions, and other important details.
+
+- See [models.md](models.md) for model conventions, attention pattern, implementation rules, dependencies, and gotchas.
+- See the [model-integration](./skills/model-integration/SKILL.md) skill for the full integration workflow, file structure, test setup, and other details.
+
+### Pipelines & Schedulers
+
+- Pipelines inherit from `DiffusionPipeline`
+- Schedulers use `SchedulerMixin` with `ConfigMixin`
+- Use `@torch.no_grad()` on pipeline `__call__`
+- Support `output_type="latent"` for skipping VAE decode
+- Support `generator` parameter for reproducibility
+- Use `self.progress_bar(timesteps)` for progress tracking
+- Don't subclass an existing pipeline for a variant — DO NOT use an existing pipeline class (e.g., `FluxPipeline`) to override another pipeline (e.g., `FluxImg2ImgPipeline`) which will be a part of the core codebase (`src`)
 
 ## Skills
 
-Task-specific guides live in `.ai/skills/` and are loaded on demand by AI agents.
-Available skills: **model-integration** (adding/converting pipelines), **parity-testing** (debugging numerical parity).
+Task-specific guides live in `.ai/skills/` and are loaded on demand by AI agents. Available skills include:
+
+- [model-integration](./skills/model-integration/SKILL.md) (adding/converting pipelines)
+- [parity-testing](./skills/parity-testing/SKILL.md) (debugging numerical parity).
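The pipeline conventions added above (no-grad `__call__`, a `generator` for reproducibility, `output_type="latent"` to skip decoding, `self.progress_bar`) can be condensed into a toy sketch. Everything here is a hypothetical stand-in assuming only `torch` — `ToyPipeline`, its one-line "denoising" step, and the pass-through `progress_bar` are illustrations, not diffusers APIs:

```python
import torch


class ToyPipeline:
    """Hypothetical minimal pipeline illustrating the call conventions (not a diffusers class)."""

    def progress_bar(self, iterable):
        # Real pipelines inherit a tqdm-backed progress_bar from DiffusionPipeline;
        # a pass-through stands in for it here.
        return iterable

    @torch.no_grad()  # inference only: prevents gradient accumulation (and the OOM it causes)
    def __call__(self, num_steps=4, generator=None, output_type="pil"):
        # `generator` makes the initial noise (and thus the whole run) reproducible
        latents = torch.randn(1, 4, 8, 8, generator=generator)
        for _ in self.progress_bar(range(num_steps)):
            latents = latents * 0.99  # placeholder for the real denoising step
        if output_type == "latent":
            return latents  # skip the (placeholder) VAE decode entirely
        image = latents.clamp(-1, 1)  # placeholder for VAE decode + postprocessing
        return image


pipe = ToyPipeline()
out = pipe(generator=torch.Generator().manual_seed(0), output_type="latent")
```

Calling twice with generators seeded identically returns identical latents, which is the property the `generator` convention exists to guarantee.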

.ai/models.md

Lines changed: 76 additions & 0 deletions

@@ -0,0 +1,76 @@

# Model conventions and rules

Shared reference for model-related conventions, patterns, and gotchas.
Linked from `AGENTS.md`, `skills/model-integration/SKILL.md`, and `review-rules.md`.

## Coding style

- All layer calls should be visible directly in `forward` — avoid helper functions that hide `nn.Module` calls.
- Avoid graph breaks for `torch.compile` compatibility — do not insert NumPy operations in forward implementations and any other patterns that can break `torch.compile` compatibility with `fullgraph=True`.
- No new mandatory dependency without discussion (e.g. `einops`). Optional deps guarded with `is_X_available()` and a dummy in `utils/dummy_*.py`.

## Common model conventions

- Models use `ModelMixin` with `register_to_config` for config serialization

## Attention pattern

Attention must follow the diffusers pattern: both the `Attention` class and its processor are defined in the model file. The processor's `__call__` handles the actual compute and must use `dispatch_attention_fn` rather than calling `F.scaled_dot_product_attention` directly. The attention class inherits `AttentionModuleMixin` and declares `_default_processor_cls` and `_available_processors`.

```python
# transformer_mymodel.py

class MyModelAttnProcessor:
    _attention_backend = None
    _parallel_config = None

    def __call__(self, attn, hidden_states, attention_mask=None, ...):
        query = attn.to_q(hidden_states)
        key = attn.to_k(hidden_states)
        value = attn.to_v(hidden_states)
        # reshape, apply rope, etc.
        hidden_states = dispatch_attention_fn(
            query, key, value,
            attn_mask=attention_mask,
            backend=self._attention_backend,
            parallel_config=self._parallel_config,
        )
        hidden_states = hidden_states.flatten(2, 3)
        return attn.to_out[0](hidden_states)


class MyModelAttention(nn.Module, AttentionModuleMixin):
    _default_processor_cls = MyModelAttnProcessor
    _available_processors = [MyModelAttnProcessor]

    def __init__(self, query_dim, heads=8, dim_head=64, ...):
        super().__init__()
        self.to_q = nn.Linear(query_dim, heads * dim_head, bias=False)
        self.to_k = nn.Linear(query_dim, heads * dim_head, bias=False)
        self.to_v = nn.Linear(query_dim, heads * dim_head, bias=False)
        self.to_out = nn.ModuleList([nn.Linear(heads * dim_head, query_dim), nn.Dropout(0.0)])
        self.set_processor(MyModelAttnProcessor())

    def forward(self, hidden_states, attention_mask=None, **kwargs):
        return self.processor(self, hidden_states, attention_mask, **kwargs)
```

Consult the implementations in `src/diffusers/models/transformers/` if you need further references.

## Gotchas

1. **Forgetting `__init__.py` lazy imports.** Every new class must be registered in the appropriate `__init__.py` with lazy imports. Missing this causes `ImportError` that only shows up when users try `from diffusers import YourNewClass`.

2. **Using `einops` or other non-PyTorch deps.** Reference implementations often use `einops.rearrange`. Always rewrite with native PyTorch (`reshape`, `permute`, `unflatten`). Don't add the dependency. If a dependency is truly unavoidable, guard its import: `if is_my_dependency_available(): import my_dependency`.

3. **Missing `make fix-copies` after `# Copied from`.** If you add `# Copied from` annotations, you must run `make fix-copies` to propagate them. CI will fail otherwise.

4. **Wrong `_supports_cache_class` / `_no_split_modules`.** These class attributes control KV cache and device placement. Copy from a similar model and verify -- wrong values cause silent correctness bugs or OOM errors.

5. **Missing `@torch.no_grad()` on pipeline `__call__`.** Forgetting this causes GPU OOM from gradient accumulation during inference.

6. **Config serialization gaps.** Every `__init__` parameter in a `ModelMixin` subclass must be captured by `register_to_config`. If you add a new param but forget to register it, `from_pretrained` will silently use the default instead of the saved value.

7. **Forgetting to update `_import_structure` and `_lazy_modules`.** The top-level `src/diffusers/__init__.py` has both -- missing either one causes partial import failures.

8. **Hardcoded dtype in model forward.** Don't hardcode `torch.float32` or `torch.bfloat16` in the model's forward pass. Use the dtype of the input tensors or `self.dtype` so the model works with any precision.
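The config-serialization gotcha is easier to see with a toy version of the mechanism. The sketch below is a simplified, hypothetical re-implementation of the `register_to_config` idea using only the standard library — it is not the diffusers decorator, just an illustration of how `__init__` arguments get captured into a reloadable config, and why a value that bypasses that capture silently reverts to its default on reload:

```python
import inspect


def register_to_config(init):
    """Toy decorator: snapshot the caller's __init__ arguments into self.config."""
    signature = inspect.signature(init)

    def wrapper(self, *args, **kwargs):
        bound = signature.bind(self, *args, **kwargs)
        bound.apply_defaults()
        # Everything except `self` becomes part of the serialized config.
        self.config = {k: v for k, v in bound.arguments.items() if k != "self"}
        init(self, *args, **kwargs)

    return wrapper


class ToyModel:
    @register_to_config
    def __init__(self, hidden_dim=32, num_layers=2):
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers

    @classmethod
    def from_config(cls, config):
        # Reload path: only values recorded in the config survive a save/load round trip.
        return cls(**config)


model = ToyModel(hidden_dim=64)
restored = ToyModel.from_config(model.config)
```

If a parameter were set inside `__init__` without flowing through the captured signature (say, read from a global), it would never land in `model.config`, and `from_config` would rebuild the model with the default value — exactly the silent mismatch the gotcha warns about.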

.ai/review-rules.md

Lines changed: 3 additions & 2 deletions

@@ -3,8 +3,9 @@
 Review-specific rules for Claude. Focus on correctness — style is handled by ruff.
 
 Before reviewing, read and apply the guidelines in:
-- [AGENTS.md](AGENTS.md) — coding style, dependencies, copied code, model conventions
-- [skills/model-integration/SKILL.md](skills/model-integration/SKILL.md) — attention pattern, pipeline rules, implementation checklist, gotchas
+- [AGENTS.md](AGENTS.md) — coding style, copied code
+- [models.md](models.md) — model conventions, attention pattern, implementation rules, dependencies, gotchas
+- [skills/model-integration/modular-conversion.md](skills/model-integration/modular-conversion.md) — modular pipeline patterns, block structure, key conventions
 - [skills/parity-testing/SKILL.md](skills/parity-testing/SKILL.md) — testing rules, comparison utilities
 - [skills/parity-testing/pitfalls.md](skills/parity-testing/pitfalls.md) — known pitfalls (dtype mismatches, config assumptions, etc.)
 

.ai/skills/model-integration/SKILL.md

Lines changed: 4 additions & 74 deletions

@@ -65,89 +65,19 @@ docs/source/en/api/
 - [ ] Run `make style` and `make quality`
 - [ ] Test parity with reference implementation (see `parity-testing` skill)
 
-### Attention pattern
-
-Attention must follow the diffusers pattern: both the `Attention` class and its processor are defined in the model file. The processor's `__call__` handles the actual compute and must use `dispatch_attention_fn` rather than calling `F.scaled_dot_product_attention` directly. The attention class inherits `AttentionModuleMixin` and declares `_default_processor_cls` and `_available_processors`.
-
-```python
-# transformer_mymodel.py
-
-class MyModelAttnProcessor:
-    _attention_backend = None
-    _parallel_config = None
-
-    def __call__(self, attn, hidden_states, attention_mask=None, ...):
-        query = attn.to_q(hidden_states)
-        key = attn.to_k(hidden_states)
-        value = attn.to_v(hidden_states)
-        # reshape, apply rope, etc.
-        hidden_states = dispatch_attention_fn(
-            query, key, value,
-            attn_mask=attention_mask,
-            backend=self._attention_backend,
-            parallel_config=self._parallel_config,
-        )
-        hidden_states = hidden_states.flatten(2, 3)
-        return attn.to_out[0](hidden_states)
-
-
-class MyModelAttention(nn.Module, AttentionModuleMixin):
-    _default_processor_cls = MyModelAttnProcessor
-    _available_processors = [MyModelAttnProcessor]
-
-    def __init__(self, query_dim, heads=8, dim_head=64, ...):
-        super().__init__()
-        self.to_q = nn.Linear(query_dim, heads * dim_head, bias=False)
-        self.to_k = nn.Linear(query_dim, heads * dim_head, bias=False)
-        self.to_v = nn.Linear(query_dim, heads * dim_head, bias=False)
-        self.to_out = nn.ModuleList([nn.Linear(heads * dim_head, query_dim), nn.Dropout(0.0)])
-        self.set_processor(MyModelAttnProcessor())
-
-    def forward(self, hidden_states, attention_mask=None, **kwargs):
-        return self.processor(self, hidden_states, attention_mask, **kwargs)
-```
+### Model conventions, attention pattern, and implementation rules
 
-Consult the implementations in `src/diffusers/models/transformers/` if you need further references.
+See [../../models.md](../../models.md) for the attention pattern, implementation rules, common conventions, dependencies, and gotchas. These apply to all model work.
 
-### Implementation rules
+### Model integration specific rules
 
-1. **Don't combine structural changes with behavioral changes.** Restructuring code to fit diffusers APIs (ModelMixin, ConfigMixin, etc.) is unavoidable. But don't also "improve" the algorithm, refactor computation order, or rename internal variables for aesthetics. Keep numerical logic as close to the reference as possible, even if it looks unclean. For standard → modular, this is stricter: copy loop logic verbatim and only restructure into blocks. Clean up in a separate commit after parity is confirmed.
-2. **Pipelines must inherit from `DiffusionPipeline`.** Consult implementations in `src/diffusers/pipelines` in case you need references.
-3. **Don't subclass an existing pipeline for a variant.** DO NOT use an existing pipeline class (e.g., `FluxPipeline`) to override another pipeline (e.g., `FluxImg2ImgPipeline`) which will be a part of the core codebase (`src`).
+**Don't combine structural changes with behavioral changes.** Restructuring code to fit diffusers APIs (ModelMixin, ConfigMixin, etc.) is unavoidable. But don't also "improve" the algorithm, refactor computation order, or rename internal variables for aesthetics. Keep numerical logic as close to the reference as possible, even if it looks unclean. For standard → modular, this is stricter: copy loop logic verbatim and only restructure into blocks. Clean up in a separate commit after parity is confirmed.
 
 ### Test setup
 
 - Slow tests gated with `@slow` and `RUN_SLOW=1`
 - All model-level tests must use the `BaseModelTesterConfig`, `ModelTesterMixin`, `MemoryTesterMixin`, `AttentionTesterMixin`, `LoraTesterMixin`, and `TrainingTesterMixin` classes initially to write the tests. Any additional tests should be added after discussions with the maintainers. Use `tests/models/transformers/test_models_transformer_flux.py` as a reference.
 
-### Common diffusers conventions
-
-- Pipelines inherit from `DiffusionPipeline`
-- Models use `ModelMixin` with `register_to_config` for config serialization
-- Schedulers use `SchedulerMixin` with `ConfigMixin`
-- Use `@torch.no_grad()` on pipeline `__call__`
-- Support `output_type="latent"` for skipping VAE decode
-- Support `generator` parameter for reproducibility
-- Use `self.progress_bar(timesteps)` for progress tracking
-
-## Gotchas
-
-1. **Forgetting `__init__.py` lazy imports.** Every new class must be registered in the appropriate `__init__.py` with lazy imports. Missing this causes `ImportError` that only shows up when users try `from diffusers import YourNewClass`.
-
-2. **Using `einops` or other non-PyTorch deps.** Reference implementations often use `einops.rearrange`. Always rewrite with native PyTorch (`reshape`, `permute`, `unflatten`). Don't add the dependency. If a dependency is truly unavoidable, guard its import: `if is_my_dependency_available(): import my_dependency`.
-
-3. **Missing `make fix-copies` after `# Copied from`.** If you add `# Copied from` annotations, you must run `make fix-copies` to propagate them. CI will fail otherwise.
-
-4. **Wrong `_supports_cache_class` / `_no_split_modules`.** These class attributes control KV cache and device placement. Copy from a similar model and verify -- wrong values cause silent correctness bugs or OOM errors.
-
-5. **Missing `@torch.no_grad()` on pipeline `__call__`.** Forgetting this causes GPU OOM from gradient accumulation during inference.
-
-6. **Config serialization gaps.** Every `__init__` parameter in a `ModelMixin` subclass must be captured by `register_to_config`. If you add a new param but forget to register it, `from_pretrained` will silently use the default instead of the saved value.
-
-7. **Forgetting to update `_import_structure` and `_lazy_modules`.** The top-level `src/diffusers/__init__.py` has both -- missing either one causes partial import failures.
-
-8. **Hardcoded dtype in model forward.** Don't hardcode `torch.float32` or `torch.bfloat16` in the model's forward pass. Use the dtype of the input tensors or `self.dtype` so the model works with any precision.
-
 ---
 
 ## Modular Pipeline Conversion

.ai/skills/model-integration/modular-conversion.md

Lines changed: 1 addition & 0 deletions

@@ -148,5 +148,6 @@ ComponentSpec(
 - [ ] Create pipeline class with `default_blocks_name`
 - [ ] Assemble blocks in `modular_blocks_<model>.py`
 - [ ] Wire up `__init__.py` with lazy imports
+- [ ] Add `# auto_docstring` above all assembled blocks (SequentialPipelineBlocks, AutoPipelineBlocks, etc.), run `python utils/modular_auto_docstring.py --fix_and_overwrite`, and verify the generated docstrings — all parameters should have proper descriptions with no "TODO" placeholders indicating missing definitions
 - [ ] Run `make style` and `make quality`
 - [ ] Test all workflows for parity with reference

.github/labeler.yml

Lines changed: 97 additions & 0 deletions

@@ -0,0 +1,97 @@

# https://github.com/actions/labeler
pipelines:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/pipelines/**

models:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/models/**

schedulers:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/schedulers/**

single-file:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/loaders/single_file.py
    - src/diffusers/loaders/single_file_model.py
    - src/diffusers/loaders/single_file_utils.py

ip-adapter:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/loaders/ip_adapter.py

lora:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/loaders/lora_base.py
    - src/diffusers/loaders/lora_conversion_utils.py
    - src/diffusers/loaders/lora_pipeline.py
    - src/diffusers/loaders/peft.py

loaders:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/loaders/textual_inversion.py
    - src/diffusers/loaders/transformer_flux.py
    - src/diffusers/loaders/transformer_sd3.py
    - src/diffusers/loaders/unet.py
    - src/diffusers/loaders/unet_loader_utils.py
    - src/diffusers/loaders/utils.py
    - src/diffusers/loaders/__init__.py

quantization:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/quantizers/**

hooks:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/hooks/**

guiders:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/guiders/**

modular-pipelines:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/modular_pipelines/**

experimental:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/experimental/**

documentation:
- changed-files:
  - any-glob-to-any-file:
    - docs/**

tests:
- changed-files:
  - any-glob-to-any-file:
    - tests/**

examples:
- changed-files:
  - any-glob-to-any-file:
    - examples/**

CI:
- changed-files:
  - any-glob-to-any-file:
    - .github/**

utils:
- changed-files:
  - any-glob-to-any-file:
    - src/diffusers/utils/**
    - src/diffusers/commands/**
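The labeler config above maps glob patterns to PR labels: a label is applied when any of its globs matches any changed file. As a rough illustration of that `any-glob-to-any-file` behavior, Python's `fnmatch` can approximate the action's minimatch-style globs (an approximation only — `fnmatch` happens to treat `**` like `*` across path separators and is not the engine the action uses; `labels_for` is a hypothetical helper):

```python
from fnmatch import fnmatch

# Two of the label rules from the config above
rules = {
    "pipelines": ["src/diffusers/pipelines/**"],
    "lora": ["src/diffusers/loaders/lora_base.py", "src/diffusers/loaders/peft.py"],
}


def labels_for(changed_files, rules):
    """A label applies if any of its globs matches any changed file."""
    return sorted(
        label
        for label, globs in rules.items()
        if any(fnmatch(path, glob) for path in changed_files for glob in globs)
    )


print(labels_for(["src/diffusers/pipelines/flux/pipeline_flux.py"], rules))  # → ['pipelines']
```

A PR touching only `README.md` would match neither rule and receive no label from these two entries.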

.github/workflows/benchmark.yml

Lines changed: 2 additions & 2 deletions

@@ -28,7 +28,7 @@ jobs:
       options: --shm-size "16gb" --ipc host --gpus all
     steps:
       - name: Checkout diffusers
-        uses: actions/checkout@v6
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 2
       - name: NVIDIA-SMI
@@ -58,7 +58,7 @@
 
       - name: Test suite reports artifacts
         if: ${{ always() }}
-        uses: actions/upload-artifact@v6
+        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
         with:
           name: benchmark_test_reports
           path: benchmarks/${{ env.BASE_PATH }}

.github/workflows/build_docker_images.yml

Lines changed: 8 additions & 8 deletions

@@ -25,14 +25,14 @@ jobs:
     if: github.event_name == 'pull_request'
     steps:
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
+        uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
 
       - name: Check out code
-        uses: actions/checkout@v6
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
 
       - name: Find Changed Dockerfiles
        id: file_changes
-        uses: jitterbit/get-changed-files@v1
+        uses: jitterbit/get-changed-files@b17fbb00bdc0c0f63fcf166580804b4d2cdc2a42 # v1
        with:
          format: "space-delimited"
          token: ${{ secrets.GITHUB_TOKEN }}
@@ -99,16 +99,16 @@
 
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
+        uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
       - name: Login to Docker Hub
-        uses: docker/login-action@v3
+        uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
        with:
          username: ${{ env.REGISTRY }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
       - name: Build and push
-        uses: docker/build-push-action@v6
+        uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
        with:
          no-cache: true
          context: ./docker/${{ matrix.image-name }}
@@ -117,7 +117,7 @@
 
       - name: Post to a Slack channel
         id: slack
-        uses: huggingface/hf-workflows/.github/actions/post-slack@main
+        uses: huggingface/hf-workflows/.github/actions/post-slack@a88e7fa2eaee28de5a4d6142381b1fb792349b67 # main
         with:
           # Slack channel id, channel name, or user id to post message.
           # See also: https://api.slack.com/methods/chat.postMessage#channels
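All of the action bumps in these workflow diffs follow one supply-chain hardening convention: third-party actions are pinned to a full-length, immutable commit SHA, with the human-readable tag kept as a trailing comment. A tag like `v6` can be moved or re-pointed after a compromise; a commit SHA cannot. The general shape, using a pin from the diffs above:

```yaml
steps:
  - name: Checkout
    # Pin to the full 40-character commit SHA; keep the tag as a comment for readers
    uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
```

Note that even mutable refs like `post-slack@main` are replaced with the SHA of the commit they currently resolve to, trading auto-updates for reproducibility.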
