
Conversation

@vanDuven commented Dec 8, 2025

This PR improves FP16 stability for Z-Image by using scaling instead of clamping.

Because the tensor passes through a Linear layer followed by RMSNorm, and RMSNorm normalizes away its input's scale, the fp16 tensor can be scaled down to prevent overflow without changing the final output.
The scale value (a power of two, 2^x) was chosen based on testing.
No noticeable impact on inference speed.
Tested with: Z-Image and Lumina 2.

The clamp_fp16 function can now be safely removed, or kept just in case.
Attachments: workflow.json, comparison_result
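
For readers following along, here is a minimal sketch of the scaling idea, assuming a bias-free Linear followed by RMSNorm; the function names, the hand-rolled rms_norm helper, and the 2^-8 exponent are illustrative assumptions, not the PR's actual code:

```python
import torch

# Illustrative only: the names and the scale exponent are assumptions,
# not the PR's actual code.
FP16_DOWNSCALE = 2.0 ** -8  # power of two, chosen empirically

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Compute statistics in fp32 so x**2 cannot itself overflow fp16.
    inv_rms = x.float().pow(2).mean(dim=-1, keepdim=True).add(eps).rsqrt()
    # RMSNorm divides by the RMS of its input, so a positive scale s on x
    # cancels out (up to eps): rms_norm(s * x) ~= rms_norm(x).
    return (x.float() * inv_rms).to(x.dtype) * weight

def stable_linear_rmsnorm(x: torch.Tensor, linear: torch.nn.Linear,
                          norm_weight: torch.Tensor) -> torch.Tensor:
    # Downscale fp16 activations so the Linear's output stays inside the
    # fp16 range (max ~65504). For a bias-free Linear,
    # linear(s * x) == s * linear(x), and the RMSNorm that follows removes
    # the scale again, so the final output is unchanged while the
    # intermediate values never overflow.
    if x.dtype == torch.float16:
        x = x * FP16_DOWNSCALE
    return rms_norm(linear(x), norm_weight)
```

Because the scale is a power of two, the multiply only adjusts the fp16 exponent bits, so no mantissa precision is lost in the downscaled activations.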

@Kosinkadink (Collaborator)

This will get looked at and potentially merged after the next stable release!

@gelukuMLG

Does this fix the fried/low-quality images with Lumina 2 in fp16?

@comfyanonymous (Owner)

The reason I don't enable fp16 in Lumina 2 is that the Neta Yume 3.5 model breaks with fp16 + my clamping. It also breaks with the downscaling in this PR.
