April 24, 2025

Workaround for running ComfyUI on RTX 5090

By Mark Nguyen

ComfyUI is one of the best ways to experiment with generative AI on consumer-grade GPUs.

However, if you have an RTX 5090 and run the ComfyUI desktop app, you’ll likely get the following error:

CUDA error: no kernel image is available for execution on the device. CUDA kernel errors might be....

This is because the current stable release of PyTorch (and the one that ships with the ComfyUI desktop app) doesn’t yet target CUDA 12.8, the CUDA version required for RTX 50-series GPUs.
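You can check whether your PyTorch build will hit this error before launching ComfyUI. The helper below is a hypothetical sketch (not part of ComfyUI): it assumes RTX 50-series (Blackwell) GPUs need a PyTorch build compiled against CUDA 12.8 or newer, and simply compares the CUDA version your PyTorch was built with against that floor.

```python
def cuda_build_supports_blackwell(cuda_version: str) -> bool:
    """Return True if a CUDA build version string (e.g. torch.version.cuda)
    is 12.8 or newer -- the minimum for RTX 50-series (Blackwell) GPUs."""
    major, minor = (int(part) for part in cuda_version.split(".")[:2])
    return (major, minor) >= (12, 8)

if __name__ == "__main__":
    try:
        import torch  # only available if PyTorch is installed
        print(torch.version.cuda, cuda_build_supports_blackwell(torch.version.cuda))
    except ImportError:
        print("PyTorch not installed")
```

If this prints `12.4 False` (or similar), your current install is the one producing the "no kernel image" error.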

The fix

The fix is described in this post: https://github.com/comfyanonymous/ComfyUI/discussions/6643#discussion-7891140

Screenshot of the GitHub issue that describes how to run ComfyUI on an RTX 50 series GPU

An easy-to-miss, but extremely useful GitHub issue

1. Download standalone ComfyUI with PyTorch 2.7 cu128 (CUDA 12.8)

https://github.com/comfyanonymous/ComfyUI/releases/download/latest/new_ComfyUI_windows_portable_nvidia_cu128_or_cpu.7z

2. Extract the archive and run run_nvidia_gpu.bat

This starts the ComfyUI server in a command prompt. Somewhere in the command prompt you’ll see:

...to see the GUI go to: http://127.0.0.1:8188

Ctrl+click the URL.

3. Select a template and download models

With the ComfyUI web interface open, go to Workflow > Browse Templates and select a template. You’ll probably see the following message:

ComfyUI missing-models warning message

When selecting the download location, navigate to folder-where-you-extracted-ComfyUI/models/ and choose the subfolder named in the missing-models message. E.g. vae/wan_2.1_vae.safetensors goes in models/vae. If the folder doesn’t exist, create it.

4. Now go create 😊

Created using Mochi