
What I Wish I Knew Before Attempting to Download ComfyUI Models:
5 Hard-Learned Lessons from Stalls, 404s, and 130GB Hauls

October 16, 2025 • 8 min read

Part 2 of the n8n AI Studio Journey Series


The Download Reality: From "Quick Pull" to Overnight Saga
[Image: my download list]

I thought grabbing models for ComfyUI would be simple: spot FLUX.1-dev on Hugging Face, fire off hf download, and watch 24GB of text-to-image magic unfold on my RTX 3090. Wrong. It was cryptic hex temp files, 95% stalls on WAN 2.2's 14GB chunks, and auth walls that turned optimism into 2AM log dives. After 130GB+ of trial and error, fueled by Reddit rants and GitHub issues, these five lessons emerged. If you're prepping checkpoints, LoRAs, or upscalers like SUPIR, they could save you the full day they cost me. Pulled from HF docs, community threads, and my own sweat equity, this is your shield against the "is it frozen?" curse.

Lesson 1: Repo IDs and Paths Play Tricks—Always Scout with Searches and Go Direct

Hugging Face repos sound straightforward, but "Fanghua-Yu/SUPIR" 404'd hard—the ComfyUI-optimized fork hides at "Kijai/SUPIR_pruned." WAN 2.2's diffusion models lurk in nested split_files/diffusion_models/, stalling CLI pulls until I grabbed Comfy-Org's repack. Even basics like ControlNet variants trip you if the ID's off.

The Fix: Google "model-name ComfyUI Hugging Face" to confirm the exact repo and structure. For singles, bypass repo confusion with resolve URLs—wget lands them clean in models/controlnet for auto-scan. Common pitfall: Mismatched paths lead to "model not found" in workflows.

Example for ControlNet inpaint (1.4GB, ~5 mins on decent Wi-Fi):

```shell
cd /path/to/ComfyUI/models/controlnet
wget -c https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint.pth
```

Refresh ComfyUI's UI—boom, it's there. Pro tip: Test with tiny files like clip_l.safetensors (246MB) before beasts.
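
If you do this often, a tiny helper saves the hand-assembly. This is a sketch of my own convenience function (hf_url is a made-up name, not part of any official CLI); it just stitches a repo ID, file path, and optional revision into the resolve URL shape shown above.

```shell
# Hypothetical helper: build a Hugging Face "resolve" URL from a repo ID
# and file path, ready to paste into wget -c or aria2c.
hf_url() {
  local repo="$1" file="$2" rev="${3:-main}"
  printf 'https://huggingface.co/%s/resolve/%s/%s\n' "$repo" "$rev" "$file"
}

# Example:
# wget -c "$(hf_url lllyasviel/ControlNet-v1-1 control_v11p_sd15_inpaint.pth)"
```

Handy when you're scripting a batch of pulls and only want to maintain a list of repo/file pairs.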

Lesson 2: Authentication Bites Only Gated Repos—Don't Blanket-Login Publics

I hammered hf login on every pull, bloating sessions unnecessarily. Public staples like clip_l.safetensors or WAN's base files? Zero token needed—they fly free. But gated gems like FLUX.1-schnell demand it post-ToS acceptance, or it's instant 403s, even if you've "agreed" on the page.

Streamline Your Flow: Eye the HF repo for a "gated" banner. Publics: Straight wget or aria2c. Gated: Whip up a read token at huggingface.co/settings/tokens, then CLI-login or append ?token=hf_YourToken to URLs. ComfyUI-Manager shines here—set your token as an env var for in-app pulls without venv toggles.

Public starter (clip_l for FLUX encoders):

```shell
cd /path/to/ComfyUI/models/clip
wget -c https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors
```

Gated hack: Aria2c with token for resilience. Watch for 2025 updates—HF's CLI now flags gated status upfront in searches.
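
For the gated path, the token rides along as an Authorization header, which both wget and aria2c accept. A minimal sketch, assuming you've accepted the repo's terms on its model page and exported a read token as HF_TOKEN (hf_auth_header is my own wrapper name):

```shell
# Print the auth header gated HF downloads expect; fail loudly if no token.
hf_auth_header() {
  [ -n "${HF_TOKEN:-}" ] || { echo "HF_TOKEN not set" >&2; return 1; }
  printf 'Authorization: Bearer %s\n' "$HF_TOKEN"
}

# Usage (FLUX.1-schnell is gated; filename per the repo at time of writing):
# aria2c -x 8 -s 8 -c --header="$(hf_auth_header)" \
#   https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/flux1-schnell.safetensors
```

The header approach beats pasting tokens into URLs, since it keeps them out of shell history and logged request lines.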

Lesson 3: hf_transfer Rockets Speeds but Ghosts Resumes—Switch to Aria2c for Bulletproof Big Files

HF CLI's hf_transfer flag teased 2-3x boosts on my gigabit setup, but it flaked on WAN's 14GB low-noise—partials trashed, restarts from zero every hiccup. Default ETag resuming? Gold, but single-threaded crawls. Enter aria2c: Multi-threaded champ that chews CDNs, resumes flawlessly, and hits 100+ MB/s on spotty lines.

Level Up: Unset for CLI safety: unset HF_HUB_ENABLE_HF_TRANSFER before hf download. But for speed + reliability, aria2c rules—install via sudo apt install aria2, crank threads. HF docs note hf_transfer's production-ready but proxy-averse; aria2c fills the gap.
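
Tying the flags together, here's the resilient-pull pattern I'd reach for: aria2c with 8 connections, resume via -c, and a retry loop so one CDN hiccup doesn't strand a 14GB file. fetch() is my own wrapper name, not an aria2c feature, and the retry/backoff numbers are guesses to tune:

```shell
# Resilient pull: segmented download, resume on restart, retry up to 5 rounds.
fetch() {
  local url="$1" dir="${2:-.}" tries=0
  until aria2c -x 8 -s 8 -c -d "$dir" "$url"; do
    tries=$((tries + 1))
    if [ "$tries" -ge 5 ]; then
      return 1                 # give up after 5 failed rounds
    fi
    sleep 10                   # back off, then resume the partial
  done
}

# fetch "https://huggingface.co/Kijai/SUPIR_pruned/resolve/main/SUPIR-v0F_fp16.safetensors" \
#   /path/to/ComfyUI/models/upscale_models/supir
```

Because -c resumes from the existing partial, each retry picks up where the last attempt died instead of restarting from zero.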

SUPIR FP16 upscale (7GB):

```shell
cd /path/to/ComfyUI/models/upscale_models/supir
aria2c -x 8 -s 8 -c -d . --summary-interval=60 https://huggingface.co/Kijai/SUPIR_pruned/resolve/main/SUPIR-v0F_fp16.safetensors
mv [hex-temp].safetensors SUPIR-v0F_fp16.safetensors
```

Rename that temp hex blob post-pull. Cloud users: -x 16 for bandwidth feasts. GitHub tools like hfd wrap aria2c for repo-wide grabs.

Lesson 4: Venv Crutches HF CLI Only—Ditch It for Wget/Aria2c's No-Fuss Freedom

Sourcing source venv/bin/activate unlocked HF CLI's huggingface_hub magic but sparked PATH meltdowns when I forgot to deactivate for plain wget. Reality: System wget/aria2c snag public files sans Python overhead; venv's CLI-exclusive for gated/repo clones.

Keep It Simple: Activate just for hf download --repo-type model; deactivate for directs. Post-venv, peek: pip show huggingface_hub (0.23+ for ComfyUI harmony). Issues spike on portable installs—deactivate prevents venv bleed.
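
One way to keep activation surgical is to run the HF CLI command in a subshell, so the venv never leaks into your main session. A sketch under my own conventions (with_venv and COMFY_VENV are names I made up; point the path at your install):

```shell
# Run a single command inside the ComfyUI venv, then drop straight back out.
with_venv() {
  local venv="${COMFY_VENV:-/path/to/ComfyUI/venv}"
  ( . "$venv/bin/activate" && "$@" )
}

# e.g. with_venv hf download Comfy-Org/Wan_2.2_ComfyUI_Repackaged \
#        --include "split_files/diffusion_models/*"
```

The parentheses are the trick: activation happens in a child shell, so there's nothing to deactivate afterward and no PATH bleed into your wget/aria2c work.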

GitHub-sourced RealESRGAN (upscale essential):

```shell
deactivate  # if a venv is active
cd /path/to/ComfyUI/models/upscale_models
wget -c https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
```

Perfect for Docker mounts where envs clash. ComfyUI's extra_model_paths.yaml ties multi-drive paths sans venv drama.

Lesson 5: "Stalls" Are Sneaky—Monitor Aggressively and Nail Permissions Early

Downloads "froze" at 95%? Nah—slow CDNs or silent permission blocks on mounted drives. My HDD (/media/inky/abc2) rejected writes till chown; no error, just limbo.

Eyes Wide Open: Bash a watcher: watch -n 30 'du -sh *.safetensors | tail -1; ls -1 | wc -l'. Aria2c's verbose summaries catch redirects. Pre-pull: sudo chown -R $USER:$USER /path/to/ComfyUI/models. Post: Inventory with find /path/to/ComfyUI/models -type f \( -name "*.safetensors" -o -name "*.pth" \) -exec ls -lh {} + | sort -h. Reddit gripes: Precision mismatches (FP32 vs. FP16) crash loads—stick to CUDA-matched FP16 for RTX.
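
A related gotcha: when an auth wall or 404 bites, wget happily saves the HTML error page under the model's filename, so the "download" looks done but is a few KB of markup. A sanity-check sketch (check_model is my own helper; the size floor is a guess you'd tune per model):

```shell
# Flag "finished" model files that are suspiciously small - usually a saved
# HTML error page or a truncated pull, not real weights. Uses GNU stat (Linux).
check_model() {
  local f="$1" min_mb="${2:-100}"
  local bytes size_mb
  bytes=$(stat -c %s "$f" 2>/dev/null || echo 0)
  size_mb=$((bytes / 1024 / 1024))
  if [ "$size_mb" -lt "$min_mb" ]; then
    echo "WARN: $f is ${size_mb}MB (< ${min_mb}MB) - truncated or an error page?"
    return 1
  fi
  echo "OK: $f (${size_mb}MB)"
}
```

Run it over a models directory after a big haul and anything that silently failed stands out immediately.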

Toolkit Triumph: Chaos to Clean Hauls

Starting with direct URLs, aria2c threads, and venv smarts flipped my marathon from pain to process. Small tests first (clip_l), then scale—FLUX.1-dev lands overnight, no sweat. Edit extra_model_paths.yaml for SSD/HDD splits:

```yaml
base_path: /ComfyUI/models
external:
  base_path: /media/inky/abc2/models
  checkpoints: checkpoints
```

ComfyUI-Manager next for node-tied ease. Payoff: Offline gen sans API chains. Models shift fast—HF updates, community tweaks keep it fresh.

Resources to Fast-Track Your Pulls

See Part 1

Regard Vermeulen is a self-taught AI Workflow Engineer based in Pretoria, South Africa. In January 2025 he began an intensive deep-dive into AI, and within eleven months shipped multiple production agentic systems on local hardware.
His flagship projects include an autonomous content pipeline that has posted over 70 videos to YouTube, Instagram, TikTok, and X with zero manual intervention after trigger; a zero-cloud Claude-based coding team that reduces three-day development cycles to three-hour turnarounds; and specialised CrewAI multi-agent systems for PDF journal generation, trading automation, and personal finance reporting.
With a background spanning banking, real-estate investment, and scaling a nationwide distribution business, Regard brings a relentless focus on measurable ROI, cost control, and production reliability to every system he builds.
He documents his work openly on GitHub and realandworks.com, sharing code, workflows, and lessons to help creators and teams move from manual execution to automated outcomes.
Regard is available for selective collaborations on high-impact AI workflow projects.

