
Conversation

@r4inm4ker (Contributor) commented Dec 23, 2025

What does this PR do?

Adds Differential Diffusion support to Z-Image.
Mostly copied from the pipeline_flux_differential_img2img.py implementation.
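For context, the core idea is a per-step blend between the denoised latents and the re-noised original latents, driven by the change map. Below is a minimal sketch of that update, not the exact code in this pipeline; it assumes the convention that 1.0 in the map means "fully regenerate" and 0.0 means "keep the original", and a flow-match scheduler exposing scale_noise:

import torch

def differential_blend(latents, orig_latents, noise, change_map, scheduler, timesteps, i):
    # Fraction of the denoising schedule completed so far.
    progress = i / len(timesteps)

    # A region with map value m stays frozen until the last m fraction of steps,
    # i.e. it is frozen while m < 1 - progress.
    frozen = (change_map < (1.0 - progress)).to(latents.dtype)

    # Re-noise the original image latents to the current noise level so frozen
    # regions stay consistent with the rest of the latent.
    renoised = scheduler.scale_noise(orig_latents, timesteps[i].unsqueeze(0), noise)

    # Frozen regions follow the re-noised original; the rest keep the denoised latents.
    return frozen * renoised + (1.0 - frozen) * latents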

Before submitting

How to test:

import torch
from pipeline_z_image_differential_img2img import ZImageDifferentialImg2ImgPipeline
from diffusers.utils import load_image

pipe = ZImageDifferentialImg2ImgPipeline.from_pretrained("Z-a-o/Z-Image-Turbo", torch_dtype=torch.bfloat16)
pipe.to("cuda")

init_image = load_image("https://github.com/exx8/differential-diffusion/blob/main/assets/input.jpg?raw=true")

mask = load_image("https://github.com/exx8/differential-diffusion/blob/main/assets/map.jpg?raw=true")

prompt = "painting of a mountain landscape with a meadow and a forest, meadow background, anime countryside landscape"

image = pipe(
    prompt,
    image=init_image,
    mask_image=mask,
    strength=0.75,
    num_inference_steps=9,
    guidance_scale=0.0,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("image.png")

Who can review?

@yiyixuxu @asomoza

@asomoza (Member) commented Dec 23, 2025

@bot /style

@github-actions (Contributor)

Style fix is beginning... View the workflow run here.

@asomoza (Member) commented Dec 23, 2025

@r4inm4ker thanks a lot! The bot can't fix the style, so can you please run:

make style
make quality

Also, if that doesn't fix it, there is some whitespace on some empty lines in the example docstring.
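If make style doesn't catch it, one quick way to strip that whitespace is a small script along these lines (the path is illustrative; adjust it to wherever the pipeline file sits in your checkout):

from pathlib import Path

# Hypothetical path to the new community pipeline file; adjust as needed.
path = Path("examples/community/pipeline_z_image_differential_img2img.py")

# Strip trailing whitespace from every line, including whitespace-only lines.
cleaned = "\n".join(line.rstrip() for line in path.read_text().splitlines()) + "\n"
path.write_text(cleaned)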

@asomoza (Member) commented Dec 23, 2025

In the meantime, this is the apple/pear test with this pipeline:

[Images: original (20240329211129_4024911930) | gradient_mask | zimage_diff_diff result]

This one is also more finicky than SDXL; I had to raise the strength to 0.9 and use 12 steps to get that result.
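Mapped onto the example call from the PR description, those settings would look roughly like this (my test used a different image and mask, so treat this as a sketch of the parameters only):

# Same call as in the PR description, with the strength and step count I ended up needing.
image = pipe(
    prompt,
    image=init_image,
    mask_image=mask,
    strength=0.9,
    num_inference_steps=12,
    guidance_scale=0.0,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]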

@bghira (Contributor) commented Dec 23, 2025

need sigma shift

@r4inm4ker (Contributor, Author)

Thanks for the feedback! I will work on the updates after coming back from holiday.

Commit pushed: "…n make style, make quality, and fix white spaces in example doc string."
@r4inm4ker (Contributor, Author)

Hi all, Happy New Year!
I have run make style and make quality and pushed an update.

@bghira Regarding the "need sigma shift" comment: I am not familiar with what it means, as I am still very new to ML/diffusion and this is my first attempt to contribute to the community, so any help elaborating or explaining it a bit more would be really appreciated. Thanks!

In the meantime, I have tried rerunning the example inputs from https://differential-diffusion.github.io/, and this is the result with strength 1.0 and 9 steps. Prompt: "tree of life under the sea"

[Images: original | mask | result]

@asomoza (Member) commented Jan 5, 2026

@r4inm4ker the sigma shift comment was for me, I think. If I make the shift higher, I don't need to use a higher strength or more steps.
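For reference, "sigma shift" here is the shift parameter of the flow-match sigma schedule; a higher shift pushes more of the step budget toward high noise levels, which is why a higher strength or extra steps become unnecessary. A minimal sketch of raising it, assuming the pipeline's scheduler is a flow-match scheduler whose config accepts shift (the value 6.0 is purely illustrative, not something tuned in this thread):

from diffusers import FlowMatchEulerDiscreteScheduler

# Hypothetical: rebuild the scheduler with a larger shift before calling the pipeline.
# Assumes Z-Image uses a flow-match scheduler whose config accepts `shift`.
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config, shift=6.0)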

Your example works fine because you're using a strength of 1.0. I don't use it like that, since I use diff-diff more for inpainting.

But since this is a community pipeline, it just needs to work and pass the basic tests. I do see some code that can be improved, but it's not necessary at this point.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@asomoza (Member) commented Jan 5, 2026

@r4inm4ker thanks a lot! The failing tests are not related to this PR.

asomoza merged commit 5ffb658 into huggingface:main on Jan 5, 2026. 24 of 26 checks passed.
@r4inm4ker (Contributor, Author)

Thanks for merging!
