Checkpoint files will always be loaded safely.
Total VRAM 16311 MB, total RAM 65346 MB
pytorch version: 2.9.1+cu130
xformers version: 0.0.33.post2
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 5060 Ti : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 29405.0
working around nvidia conv3d memory bug.
Found comfy_kitchen backend cuda: {'available': False, 'disabled': False, 'unavailable_reason': "cannot import name '_C' from partially initialized module 'comfy_kitchen.backends.cuda' (most likely due to a circular import) (D:\AI\ComfyUI\.ext\Lib\site-packages\comfy_kitchen\backends\cuda\__init__.py)", 'capabilities': []}
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend triton: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Using sage attention
Python version: 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 20:05:38) [MSC v.1929 64 bit (AMD64)]
ComfyUI version: 0.7.0
ComfyUI frontend version: 1.35.9
[Prompt Server] web root: D:\AI\ComfyUI\.ext\Lib\site-packages\comfyui_frontend_package\static
Total VRAM 16311 MB, total RAM 65346 MB
pytorch version: 2.9.1+cu130
xformers version: 0.0.33.post2
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 5060 Ti : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 29405.0
NumExpr defaulting to 16 threads.
ComfyUI-GGUF: Allowing full torch compile