Commit 56333b9

Merge branch 'main' into enable-hotswap-testing-ci
2 parents: 580e7ae + e30d3bf

23 files changed: +2876 −189 lines

Diff for: docs/source/en/api/loaders/lora.md (+5)

@@ -28,6 +28,7 @@ LoRA is a fast and lightweight training method that inserts and trains a signifi
 - [`WanLoraLoaderMixin`] provides similar functions for [Wan](https://huggingface.co/docs/diffusers/main/en/api/pipelines/wan).
 - [`CogView4LoraLoaderMixin`] provides similar functions for [CogView4](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogview4).
 - [`AmusedLoraLoaderMixin`] is for the [`AmusedPipeline`].
+- [`HiDreamImageLoraLoaderMixin`] provides similar functions for [HiDream Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hidream)
 - [`LoraBaseMixin`] provides a base class with several utility methods to fuse, unfuse, unload, LoRAs and more.

 <Tip>

@@ -91,6 +92,10 @@ To learn more about how to load LoRA weights, see the [LoRA](../../using-diffuse

 [[autodoc]] loaders.lora_pipeline.AmusedLoraLoaderMixin

+## HiDreamImageLoraLoaderMixin
+
+[[autodoc]] loaders.lora_pipeline.HiDreamImageLoraLoaderMixin
+
 ## LoraBaseMixin

 [[autodoc]] loaders.lora_base.LoraBaseMixin
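
As with the other pipeline-specific mixins listed above, the new class surfaces the usual LoRA entry points on `HiDreamImagePipeline`. A minimal sketch of how that is expected to look (the LoRA repo id is hypothetical, and depending on the checkpoint the pipeline's auxiliary text encoders may need to be supplied explicitly):

```python
# Sketch of the LoRA entry points the new mixin exposes (hypothetical repo id).
import torch
from diffusers import HiDreamImagePipeline

pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("some-user/hidream-lora")  # hypothetical LoRA checkpoint
pipe.fuse_lora()            # from LoraBaseMixin: merge the adapter into the weights
pipe.unfuse_lora()          # undo the merge
pipe.unload_lora_weights()  # drop the adapter entirely
```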

Diff for: examples/controlnet/README_flux.md (+15 −2)

@@ -6,7 +6,19 @@ Training script provided by LibAI, which is an institution dedicated to the prog
 > [!NOTE]
 > **Memory consumption**
 >
-> Flux can be quite expensive to run on consumer hardware devices and as a result, ControlNet training of it comes with higher memory requirements than usual.
+> Flux can be quite expensive to run on consumer hardware devices and as a result, ControlNet training of it comes with higher memory requirements than usual.
+
+For reference, here is the GPU memory consumption at each stage, measured on a single 80 GB A100:
+
+| Stage | GPU memory |
+| - | - |
+| Load as float32 | ~70 GB |
+| Move transformer and VAE to bf16 | ~48 GB |
+| Precompute text embeddings | ~62 GB |
+| **Offload text encoders to CPU** | ~30 GB |
+| Training | ~58 GB |
+| Validation | ~71 GB |
+

 > **Gated access**
 >
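
The second row of the table corresponds to a plain dtype cast of the frozen modules; a minimal sketch of that step, assuming the gated `black-forest-labs/FLUX.1-dev` checkpoint the script trains against (the ControlNet being trained stays in float32):

```python
# Cast the frozen transformer and VAE to bfloat16 to shrink their footprint.
import torch
from diffusers import AutoencoderKL, FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer"
)
vae = AutoencoderKL.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="vae")

# Halving the parameter precision of these two frozen modules accounts for
# roughly the ~70 GB -> ~48 GB drop shown in the table above.
transformer.to(dtype=torch.bfloat16)
vae.to(dtype=torch.bfloat16)
```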
@@ -98,8 +110,9 @@ accelerate launch train_controlnet_flux.py \
   --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
   --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
   --train_batch_size=1 \
-  --gradient_accumulation_steps=4 \
+  --gradient_accumulation_steps=16 \
   --report_to="wandb" \
+  --lr_scheduler="cosine" \
   --num_double_layers=4 \
   --num_single_layers=0 \
   --seed=42 \

Diff for: examples/controlnet/train_controlnet_flux.py (+3 −4)

@@ -148,7 +148,7 @@ def log_validation(
         pooled_prompt_embeds=pooled_prompt_embeds,
         control_image=validation_image,
         num_inference_steps=28,
-        controlnet_conditioning_scale=0.7,
+        controlnet_conditioning_scale=1,
         guidance_scale=3.5,
         generator=generator,
     ).images[0]
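
`controlnet_conditioning_scale` multiplies the ControlNet residuals before they are added to the base model's hidden states, so moving it from 0.7 to 1 makes the validation samples follow the conditioning image at full strength.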
@@ -1085,8 +1085,6 @@ def compute_embeddings(batch, proportion_empty_prompts, flux_controlnet_pipeline
         return {"prompt_embeds": prompt_embeds, "pooled_prompt_embeds": pooled_prompt_embeds, "text_ids": text_ids}

     train_dataset = get_train_dataset(args, accelerator)
-    text_encoders = [text_encoder_one, text_encoder_two]
-    tokenizers = [tokenizer_one, tokenizer_two]
     compute_embeddings_fn = functools.partial(
         compute_embeddings,
         flux_controlnet_pipeline=flux_controlnet_pipeline,

@@ -1103,7 +1101,8 @@ def compute_embeddings(batch, proportion_empty_prompts, flux_controlnet_pipeline
         compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint, batch_size=50
     )

-    del text_encoders, tokenizers, text_encoder_one, text_encoder_two, tokenizer_one, tokenizer_two
+    text_encoder_one.to("cpu")
+    text_encoder_two.to("cpu")
     free_memory()

     # Then get the training dataset ready to be passed to the dataloader.
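
The replacement keeps the two text encoders alive but parked in host RAM rather than deleting them, so they can later be moved back to the GPU without reloading from disk. A generic sketch of the pattern (not the script's exact code; `free_memory` in the script performs the same cleanup as the last two lines here):

```python
# Generic CPU-offload pattern: park modules in host RAM and release the
# CUDA memory they occupied, instead of deleting them outright.
import gc

import torch
from torch import nn


def offload_to_cpu(*modules: nn.Module) -> None:
    for module in modules:
        module.to("cpu")
    gc.collect()
    torch.cuda.empty_cache()


# In the training script this corresponds to:
#   offload_to_cpu(text_encoder_one, text_encoder_two)
```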

Diff for: examples/dreambooth/README_hidream.md (new file, +133)

# DreamBooth training example for HiDream Image

[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2image models like stable diffusion given just a few (3~5) images of a subject.

The `train_dreambooth_lora_hidream.py` script shows how to implement the training procedure with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) and adapt it for [HiDream Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/).

This will also allow us to push the trained model parameters to the Hugging Face Hub.

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the `examples/dreambooth` folder and run

```bash
pip install -r requirements_hidream.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or for a default accelerate configuration without answering questions about your environment:

```bash
accelerate config default
```

Or, if your environment doesn't support an interactive shell (e.g., a notebook):

```python
from accelerate.utils import write_basic_config

write_basic_config()
```

When running `accelerate config`, specifying torch compile mode as True can give dramatic speedups.
Note also that we use the PEFT library as the backend for LoRA training, so make sure `peft>=0.14.0` is installed in your environment.

### Dog toy example

Now let's get our dataset. For this example we will use some dog images: https://huggingface.co/datasets/diffusers/dog-example.

Let's first download it locally:

```python
from huggingface_hub import snapshot_download

local_dir = "./dog"
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir, repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```
Now, we can launch training using:

> [!NOTE]
> The following training configuration prioritizes lower memory consumption by using gradient checkpointing,
> the 8-bit Adam optimizer, latent caching, offloading, and no validation.
> Additionally, when provided with `--instance_prompt` only and no `--caption_column` (used for custom prompts for each image),
> text embeddings are pre-computed to save memory.

```bash
export MODEL_NAME="HiDream-ai/HiDream-I1-Dev"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-hidream-lora"

accelerate launch train_dreambooth_lora_hidream.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --use_8bit_adam \
  --rank=16 \
  --learning_rate=2e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=1000 \
  --cache_latents \
  --gradient_checkpointing \
  --validation_epochs=25 \
  --seed="0" \
  --push_to_hub
```
To use `push_to_hub`, make sure you're logged into your Hugging Face account:

```bash
huggingface-cli login
```

To better track our training experiments, we're using the following flags in the command above:

* `report_to="wandb"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login <your_api_key>` before training if you haven't done it before.
* `validation_prompt` and `validation_epochs` allow the script to do a few validation inference runs, letting us qualitatively check whether training is progressing as expected.
## Notes

Additionally, we welcome you to explore the following CLI arguments:

* `--lora_layers`: The transformer modules to apply LoRA training on. Please specify the layers as a comma-separated string, e.g. `"to_k,to_q,to_v"` will result in LoRA training of the attention layers only.
* `--rank`: The rank of the LoRA layers. The higher the rank, the more parameters are trained. The default is 16.

We provide several options for optimizing memory usage:

* `--offload`: When enabled, we will offload the text encoder and VAE to CPU when they are not being used.
* `--cache_latents`: When enabled, we will pre-compute the latents from the input images with the VAE and remove the VAE from memory once done.
* `--use_8bit_adam`: When enabled, we will use the 8-bit version of AdamW provided by the `bitsandbytes` library.
* `--instance_prompt` and no `--caption_column`: when only an instance prompt is provided, we will pre-compute the text embeddings and remove the text encoders from memory once done.

Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/) of the `HiDreamImagePipeline` to learn more about the model.
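
After training, the LoRA can be loaded back for inference. A minimal sketch (the repo id below is the hypothetical result of `--push_to_hub` with the settings above; a local `--output_dir` path works as well, and depending on the checkpoint the pipeline's auxiliary text encoders may need to be supplied explicitly):

```python
# Minimal inference sketch: generate with the DreamBooth LoRA trained above.
import torch
from diffusers import HiDreamImagePipeline

pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("your-username/trained-hidream-lora")  # hypothetical repo id
image = pipe("a photo of sks dog in a bucket").images[0]
image.save("dog_lora.png")
```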

Diff for: examples/dreambooth/requirements_hidream.txt (new file, +8)

accelerate>=1.4.0
torchvision
transformers>=4.50.0
ftfy
tensorboard
Jinja2
peft>=0.14.0
sentencepiece
