If you are using Runpod or a similar hosted GPU service, you can access your running pod/instance through a terminal.
# CD into the ./ComfyUI/models/clip/ folder
wget https://huggingface.co/city96/umt5-xxl-encoder-gguf/resolve/main/umt5-xxl-encoder-Q8_0.gguf
# CD into the ./ComfyUI/models/loras/ folder
wget https://huggingface.co/lightx2v/Wan2.2-Lightning/resolve/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/high_noise_model.safetensors -O Wan2.2-I2V-A14B-lora-high_noise.safetensors
# CD into the ./ComfyUI/models/audio_encoders/ folder (create it if it doesn't exist)
wget https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/audio_encoders/wav2vec2_large_english_fp16.safetensors
# CD into the ./ComfyUI/models/unet/ folder
wget https://huggingface.co/QuantStack/Wan2.2-S2V-14B-GGUF/resolve/main/Wan2.2-S2V-14B-Q8_0.gguf
# CD into the ./ComfyUI/models/vae/ folder
wget https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors
Wait for every file to finish downloading fully before running your workflows.
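If you prefer to run everything in one go, the steps above can be collected into a single script. This is a minimal sketch that assumes ComfyUI is installed at `~/ComfyUI`; adjust `MODELS_DIR` to match your pod's actual install path:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumption: ComfyUI lives in your home directory -- change if needed.
MODELS_DIR="$HOME/ComfyUI/models"

# download <subfolder> <url> [output filename]
download() {
  local subdir="$1" url="$2" outname="${3:-}"
  mkdir -p "$MODELS_DIR/$subdir"   # creates audio_encoders/ etc. if missing
  if [ -n "$outname" ]; then
    # -c resumes interrupted downloads; -O renames the saved file
    wget -c "$url" -O "$MODELS_DIR/$subdir/$outname"
  else
    wget -c -P "$MODELS_DIR/$subdir" "$url"
  fi
}

download clip https://huggingface.co/city96/umt5-xxl-encoder-gguf/resolve/main/umt5-xxl-encoder-Q8_0.gguf
download loras https://huggingface.co/lightx2v/Wan2.2-Lightning/resolve/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/high_noise_model.safetensors Wan2.2-I2V-A14B-lora-high_noise.safetensors
download audio_encoders https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/audio_encoders/wav2vec2_large_english_fp16.safetensors
download unet https://huggingface.co/QuantStack/Wan2.2-S2V-14B-GGUF/resolve/main/Wan2.2-S2V-14B-Q8_0.gguf
download vae https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors
```

Using `wget -c` means a dropped connection can be resumed by simply re-running the script instead of re-downloading the multi-gigabyte files from scratch.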