README.md (+8 −1): 8 additions & 1 deletion
@@ -33,6 +33,8 @@ We believe that a well-developed open-source code framework can lower the thresh
 
 > Currently, this project has a limited number of developers, with most of the work handled by [Artiprocher](https://github.com/Artiprocher). As a result, new features will be developed relatively slowly, and our capacity to respond to and resolve issues is limited. We apologize for this and ask for developers' understanding.
 
+- **January 27, 2026**: [Z-Image](https://modelscope.cn/models/Tongyi-MAI/Z-Image) is released, and our [Z-Image-i2L](https://www.modelscope.cn/models/DiffSynth-Studio/Z-Image-i2L) model is released concurrently. You can try it in [ModelScope Studios](https://modelscope.cn/studios/DiffSynth-Studio/Z-Image-i2L). For details, see the [documentation](/docs/zh/Model_Details/Z-Image.md).
+
 - **January 19, 2026**: Added support for the [FLUX.2-klein-4B](https://modelscope.cn/models/black-forest-labs/FLUX.2-klein-4B) and [FLUX.2-klein-9B](https://modelscope.cn/models/black-forest-labs/FLUX.2-klein-9B) models, including training and inference. [Documentation](/docs/en/Model_Details/FLUX2.md) and [example code](/examples/flux2/) are available.
 
 - **January 12, 2026**: We trained and open-sourced a text-guided image layer separation model ([model link](https://modelscope.cn/models/DiffSynth-Studio/Qwen-Image-Layered-Control)). Given an input image and a textual description, the model isolates the image layer corresponding to the described content. For more details, see our blog post ([Chinese](https://modelscope.cn/learn/4938), [English](https://huggingface.co/blog/kelseye/qwen-image-layered-control)).
@@ -269,9 +271,14 @@ image.save("image.jpg")
 
 Example code for Z-Image is available at [/examples/z_image/](/examples/z_image/).
 
-|Model ID|Inference|Low-VRAM Inference|Full Training|Full Training Validation|LoRA Training|LoRA Training Validation|
+|Model ID|Inference|Low-VRAM Inference|Full Training|Validation After Full Training|LoRA Training|Validation After LoRA Training|
@@ -75,6 +80,9 @@ Input parameters for `ZImagePipeline` inference include:
 * `seed`: Random seed. Default is `None`, meaning fully random.
 * `rand_device`: Device on which the random Gaussian noise matrix is generated; default is `"cpu"`. When set to `"cuda"`, different GPUs will produce different generation results.
 * `num_inference_steps`: Number of inference steps; default is 8.
+* `controlnet_inputs`: Inputs for ControlNet models.
+* `edit_image`: Input image(s) for image-editing models; multiple images are supported.
+* `positive_only_lora`: LoRA weights applied only to the positive prompt.
 
 If VRAM is insufficient, please enable [VRAM Management](/docs/en/Pipeline_Usage/VRAM_management.md). We provide a recommended low-VRAM configuration for each model in the example code; see the table in the "Model Overview" section above.
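
Putting the documented parameters together, here is a minimal inference sketch. The import path and the loading call are assumptions for illustration only; the authoritative version lives in [/examples/z_image/](/examples/z_image/). The keyword arguments are the parameters listed above.

```python
# Minimal Z-Image inference sketch. The import path and the loading call are
# assumed for illustration -- see /examples/z_image/ for the exact API.
import torch
from diffsynth.pipelines.z_image import ZImagePipeline  # hypothetical import path

pipe = ZImagePipeline.from_pretrained(  # hypothetical loader
    "Tongyi-MAI/Z-Image", torch_dtype=torch.bfloat16, device="cuda"
)
# If VRAM is insufficient, enable VRAM management (see the linked docs):
# pipe.enable_vram_management()

image = pipe(
    prompt="a cat sitting on a windowsill at sunset",
    seed=42,                 # fixed seed; None means fully random
    rand_device="cpu",       # generate noise on CPU so results match across GPUs
    num_inference_steps=8,   # documented default
    # For editing / control models, the extra parameters above apply, e.g.:
    # edit_image=[edit_img],             # image(s) for image-editing models
    # controlnet_inputs=[control_input], # inputs for ControlNet models
)
image.save("image.jpg")
```

Keeping `rand_device="cpu"` makes a fixed `seed` reproducible across machines; switch noise generation to `"cuda"` only if GPU-dependent results are acceptable.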