Building a Remotion Docker Container with Best Practices in Mind

How I built a Remotion Docker setup, with best practices applied.

September 18, 2025 · 14 min read

UPDATE: THIS IS THE SECOND ATTEMPT AND IT ALSO FAILED. THE LOCAL GPU REQUIREMENT WAS SET UP AS REQUIRED, BUT THE CONTAINER STILL FAILS.

SEE THIS BLOG POST FOR THE THIRD ATTEMPT (HERE)

Believing you've got it all figured out is one thing; truly mastering the material is another ball game. Even when you're convinced you've nailed it, curveballs come your way: updates shatter comfort zones, APIs evolve, code transforms. So embrace the perpetual cycle of learning and adapting; that's the secret sauce. Take my journey of crafting a Remotion Docker setup to complement my n8n: it's been a wild ride. Just as I geared up to launch, let's just say... I'm back to the drawing board, this time holding the official best practices as my North Star.

Building Remotion in Docker is inherently more issue-riddled, or so they say. I need it this way because I have a home server running:

  • n8n

  • ComfyUI

  • Chatterbox-tts

    and I need Remotion to stitch everything together.

Starting here is the build, step by step:

How to Verify Your Ubuntu Host for Remotion GPU Rendering in Docker

Why This Matters

Remotion, a powerful React-based video rendering tool, shines with NVIDIA GPU acceleration for faster encodes via NVENC and ANGLE. But to run it smoothly in Docker on an Ubuntu host (like 24.04 LTS) with an RTX 3090, your system must have the right NVIDIA drivers, CUDA, Container Toolkit, and FFmpeg support. This guide walks you through verifying these components step-by-step—based on real troubleshooting from a fresh setup. It's quick, command-line only, and ensures no surprises during rendering.

Expect 10-15 minutes if everything's installed; longer if fixes are needed.

Prerequisites

  • Ubuntu 24.04 LTS (or 22.04; adjust repos if older).

  • Root/sudo access.

  • NVIDIA RTX 3090 (or compatible GPU; driver 470+ required).

  • Basic terminal familiarity.

Run commands in your terminal. Copy-paste outputs if issues arise for debugging.

Step 1: Confirm GPU Detection

First, ensure your RTX 3090 is visible.

Command:


lspci | grep -i nvidia

Expected Output: Something like VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090].

If Missing: Check hardware connections or BIOS settings. Reboot and retry.

Step 2: Check NVIDIA Drivers

Drivers handle GPU communication—Remotion needs version 470+ for RTX 30-series.

Command:


nvidia-smi

Expected Output: Table showing driver (e.g., 570.172.08), CUDA version (12.8+), and RTX 3090 details like temp (46C) and memory (24GB total).

If Missing: Install via sudo apt update && sudo apt install nvidia-driver-570 (use NVIDIA's PPA for latest: add ppa:graphics-drivers/ppa first).

Step 3: Verify CUDA Toolkit

CUDA enables NVENC encoding; Remotion requires 11.0+.

Command:


nvcc --version

Expected Output: Cuda compilation tools, release 13.0 (or similar 11+).

If Missing: Add the NVIDIA CUDA repository, install the toolkit, and put it on your PATH:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-ubuntu2404.pin
sudo mv cuda-ubuntu2404.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/ /"
sudo apt update && sudo apt install cuda-toolkit
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc && source ~/.bashrc

Step 4: Confirm NVIDIA Container Toolkit

This passes GPU to Docker—essential for Remotion's --gpus all.

Command:


dpkg -l | grep nvidia-container-toolkit

Expected Output: ii nvidia-container-toolkit 1.15.0-1 amd64.

If Missing: Install and configure the toolkit, then restart Docker:

curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/libnvidia-container/ubuntu24.04/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/libnvidia-container.list
sudo apt update && sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Step 5: Test GPU Passthrough in Docker

Simulate Remotion's container env.

Command:


docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu24.04 nvidia-smi

Expected Output: Matches host nvidia-smi (RTX 3090 visible inside container).

If Fails: Reconfigure toolkit (Step 4) or check Docker version (docker --version; needs 19.03+).

Step 6: Validate FFmpeg NVENC Support

Remotion uses FFmpeg for encodes—must support NVENC.

Command:


ffmpeg -encoders | grep nvenc

Expected Output: Lines like h264_nvenc, hevc_nvenc, av1_nvenc.

If Missing: Update FFmpeg (sudo add-apt-repository ppa:savoury1/ffmpeg6 && sudo apt update && sudo apt install ffmpeg). Ignore config mismatch warnings—they're harmless.


Wrap-Up

With these checks green, your Ubuntu host is Remotion-ready for GPU-accelerated Docker renders—expect 5-10x speedups on RTX 3090. Test with a simple npx remotion render in a container. Share your wins (or woes) in the comments!


Dockerizing Remotion for GPU Magic: A Deep Dive into What Keeps It Running (and What'll Crash It)

Hey folks, if you've ever tried wrangling video generation with code—especially when you throw a beast like an RTX 3090 into the mix—you know it can feel like herding cats on steroids. I'm talking about Remotion, that slick React-based framework for spitting out videos frame by frame. It's a dream for automating social clips or AI-driven narratives, but slap it into Docker for scalability, and suddenly you're debugging GPU passthrough at 2 a.m. because your render worker ghosts you mid-job.

Over the last couple of weeks, I dove headfirst into hardening a Remotion setup for a production pipeline. We're talking a split architecture: a Studio container for the interactive UI (think previewing your compositions like a video editor on caffeine) and a dedicated Render Worker hooked into N8N for API-triggered jobs. No fluff—just the gritty details on each config file, what makes it tick, where it could spectacularly fail, and the must-haves to keep things humming. This isn't a tutorial; it's the war stories from the trenches, so you don't have to learn the hard way.


Now that we have the host verified and running, on to the container setup.

The Docker Compose Glue: Orchestrating Studio and Worker Without the Drama

At the heart of this beast is the docker-compose.yml—the conductor waving its baton over your services. For Remotion, we carved out two services: remotion-studio for that live preview magic and remotion-render-worker for the heavy lifting via N8N webhooks.

What could break here? Oh man, port conflicts are a killer—expose 3001 for Studio without mapping it right, and your UI vanishes into the ether. Or worse, forget GPU device passthrough (deploy.resources.devices with driver: nvidia and device_ids: ['0']), and your RTX 3090 sits idle while renders crawl on CPU. Sequential workflows? If you don't cap REMOTION_CONCURRENCY at 1 or 2 in the worker, you'll OOM your container mid-batch, especially with N8N queuing jobs like an overzealous intern.

Required fixes: Nail the env vars like NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,graphics—it's the secret handshake for Docker to hand over GPU reins without a fight. Volumes are non-negotiable too; bind-mount /app/out for outputs, or you'll lose rendered videos to the void on restarts. Healthchecks (curling /api/ping for Studio, /health for worker) keep Compose from resurrecting zombies. Bottom line: This file's your blueprint—mess up the dependencies (Studio waiting on Redis, worker on Studio), and your pipeline turns into a game of musical chairs.
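For reference, here's a trimmed sketch of what that compose wiring can look like. The service names, ports, env vars, concurrency cap, and mount points come from this post; the build contexts, healthcheck intervals, and exact paths are illustrative assumptions, not my production file.

```yaml
# docker-compose.yml (trimmed sketch; adapt images, paths, and intervals to your stack)
services:
  remotion-studio:
    build:
      context: .
      dockerfile: Dockerfile.remotion-studio
    ports:
      - "3001:3001"                       # Studio UI must be mapped or it is unreachable
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,graphics
      - REMOTION_GL=angle
    volumes:
      - ./src:/app/src
      - ./out:/app/out                    # bind-mount outputs so renders survive restarts
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3001/api/ping"]
      interval: 30s
      retries: 3
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [gpu]

  remotion-render-worker:
    build:
      context: .
      dockerfile: Dockerfile.remotion-worker
    # no published ports: n8n reaches the worker internally on 3002
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,graphics
      - REMOTION_CONCURRENCY=2
    volumes:
      - ./out:/app/out
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3002/health"]
      interval: 30s
      retries: 3
    depends_on:
      remotion-studio:
        condition: service_healthy
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [gpu]
```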

Dockerfile.remotion-studio: The UI's Cozy Nest

Shifting to the Studio's Dockerfile—this is where your interactive playground lives, pulling in Node, Chromium, and all the GPU niceties for smooth previews.

Break points? Bloating it with full CUDA toolkits is a classic trap; it balloons your image by gigabytes and invites driver mismatches, leaving your ANGLE acceleration DOA. Or skip apt-get clean after installs, and you're shipping apt caches like unwanted luggage, slowing pulls and eating disk.

Must-haves: Stick to node:22-bookworm-slim as base—it's lean and Remotion-approved. Layer in essentials like libnss3 and libgbm-dev for Chromium's GPU hooks, then swap to a non-root user (remotion:1001) early to dodge privilege escalation risks. npx remotion browser ensure is gold—it grabs the bundled headless Chrome tuned for Remotion, sidestepping system binary roulette. Env like REMOTION_GL=angle ties it to your config, ensuring previews leverage that 3090 without hiccups. Without these, your Studio loads but renders like molasses.
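A minimal sketch of such a Studio Dockerfile, assuming the essentials named above; the exact library list and user IDs are illustrative, not a drop-in file.

```dockerfile
# Dockerfile.remotion-studio (illustrative sketch; package list trimmed to the essentials named above)
FROM node:22-bookworm-slim

# Chromium/GPU runtime libraries for Remotion's bundled browser, plus FFmpeg and curl for healthchecks
RUN apt-get update && apt-get install -y --no-install-recommends \
        libnss3 libgbm-dev libasound2 libatk-bridge2.0-0 libgtk-3-0 ffmpeg curl \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Create the non-root user early so nothing below ends up root-owned
RUN groupadd -g 1001 remotion && useradd -m -u 1001 -g remotion remotion \
    && mkdir -p /app && chown remotion:remotion /app
WORKDIR /app
USER remotion

COPY --chown=remotion:remotion package*.json ./
RUN npm ci
COPY --chown=remotion:remotion . .

# Download the headless Chrome build Remotion ships for rendering
RUN npx remotion browser ensure

ENV REMOTION_GL=angle \
    NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,graphics

EXPOSE 3001
CMD ["bash", "./start-remotion.sh"]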

Dockerfile.remotion-worker: The Silent Render Beast

The worker's Dockerfile mirrors the Studio's but amps up for API grind—think Express server churning N8N jobs without the UI frills.

Potential pitfalls: Overlooking libnvidia-container1 means NVENC encoding flops, turning 30-second renders into hours. Or leaving PyTorch env vars in (they're for ML, not Remotion), cluttering your runtime with irrelevant noise.

Essentials: Same slim base, but add mesa-utils and vulkan-tools for broader GPU diagnostics—handy when N8N pings /gpu-test and you need quick sanity checks. Cleanup after every apt run keeps the image under 2GB, vital for swarm deploys. USER remotion before npm install prevents root-owned node_modules disasters. Pin REMOTION_CONCURRENCY=6 (or whatever your 24GB VRAM can swallow) to match N8N's sequential mode—bump it too high, and you'll hit memory walls faster than a bad plot twist.
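And the worker variant, again as a sketch rather than my exact file: the package names follow the essentials above, and the conservative concurrency default is an assumption you'd tune to your VRAM.

```dockerfile
# Dockerfile.remotion-worker (sketch of the worker-specific bits; same base/user pattern as the Studio image)
FROM node:22-bookworm-slim

# Encoding plus GPU diagnostic tools on top of the Chromium libs
RUN apt-get update && apt-get install -y --no-install-recommends \
        ffmpeg mesa-utils vulkan-tools libnss3 libgbm-dev curl \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

RUN groupadd -g 1001 remotion && useradd -m -u 1001 -g remotion remotion \
    && mkdir -p /app/out && chown -R remotion:remotion /app
WORKDIR /app

# Switch before npm install so node_modules is never root-owned
USER remotion
COPY --chown=remotion:remotion package*.json ./
RUN npm ci
COPY --chown=remotion:remotion . .

# Concurrency is deliberately conservative here; the post settled on 6 for a 24GB card
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,graphics \
    REMOTION_CONCURRENCY=2

EXPOSE 3002
CMD ["bash", "./start-worker.sh"]
```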

remotion.config.ts: The Rendering Recipe Book

This TypeScript config is Remotion's secret sauce—tuning codecs, timeouts, and Webpack for your Docker reality.

Where it crumbles: Mismatched GL renderers (angle-egl vs. angle) confuse Chromium, leading to black-screen previews or CPU fallbacks. Or skimpy MIME types in Webpack overrides, and your audio/video assets 404 as HTML, nuking compositions with media.

Key ingredients: Config.setChromiumOpenGlRenderer("angle") syncs with your Docker env for seamless GPU handoff. Override Webpack's devServer with full MIME coverage (audio like MP3/WAV, video like MP4/WebM)—it's a Docker volume lifesaver, stopping SPA fallbacks from hijacking your assets. Polling at 1000ms watches for N8N-fed changes without thrashing CPU. Set concurrency low for worker stability, and browserExecutable to your bundled Chrome path. Skip these, and renders stutter or fail silently—I've seen whole pipelines halt over a missing .mp4 header.
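As a sketch, the config can look like this. The setter names follow the Remotion 4.x config API as I understand it (verify them against your CLI version), the browser path is a placeholder, and the full MIME-type override from the post would be layered into the same Webpack hook.

```ts
// remotion.config.ts (minimal sketch; verify setter names against your Remotion version)
import { Config } from "@remotion/cli/config";

// Match the GL renderer to the REMOTION_GL env var used in the containers
Config.setChromiumOpenGlRenderer("angle");

// Keep concurrency low so sequential n8n jobs don't OOM the worker container
Config.setConcurrency(2);

// Point at the Chrome that `npx remotion browser ensure` downloaded
// (hypothetical path; adjust to where your image actually stores it)
Config.setBrowserExecutable("/home/remotion/.remotion/chrome-headless-shell/linux64/chrome-headless-shell");

// Poll for file changes so assets dropped onto a Docker volume by n8n get picked up
Config.overrideWebpackConfig((current) => ({
  ...current,
  watchOptions: {
    ...current.watchOptions,
    poll: 1000,
  },
}));
```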

start-worker.sh and start-remotion.sh: The Ignition Sequences

These bash scripts are your pre-flight checklists—firing up the worker's API or Studio's UI with GPU probes and dep verifies.

Failure modes: Hardcoded CLI versions (like 4.0.349) drift from your package.json, sparking import errors mid-render. Or testing system Chromium instead of bundled, and GPU flags mismatch, leaving N8N jobs hanging.

Non-negotiables: Export NVIDIA_DRIVER_CAPABILITIES early in the worker script for ironclad GPU access—N8N won't wait for lazy loading. Use npx remotion browser path in Studio's test for reliable binary picks. Pin CLI to @4.0.350 on reinstalls, and cap tests with timeout to avoid hangs. These scripts aren't glamorous, but botch the exec node or npx remotion studio, and your containers idle like forgotten prototypes.
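A stripped-down sketch of what such a worker start script can look like; the checks mirror the description above, while the file names and messages are assumptions.

```bash
#!/usr/bin/env bash
# start-worker.sh (illustrative sketch of the pre-flight checks described above)
set -euo pipefail

# Export GPU capabilities up front; don't rely on lazy loading once n8n starts firing jobs
export NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,graphics

echo "== GPU probe =="
# Don't let a wedged driver hang container startup forever
timeout 15s nvidia-smi || echo "WARN: nvidia-smi failed; renders will fall back to CPU"

echo "== Remotion CLI check =="
# Pin the CLI in package.json (e.g. 4.0.350) so this never drifts from the renderer
npx remotion versions || { echo "Remotion CLI missing"; exit 1; }

echo "== Required files =="
for f in render-worker.js remotion.config.ts; do
  [ -f "$f" ] || { echo "Missing $f"; exit 1; }
done

echo "== Starting render worker on :3002 =="
exec node render-worker.js
```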

render-worker.js: The N8N API Nerve Center

Finally, the JS heart of the worker—Express routes handling health pings, GPU tests, and crucially, renders.

Crash scenarios: No /render or /compositions endpoints, and N8N workflows blind-fire, wasting cycles on invalid comps. Dupe routes (like those twin /gpu-tests) confuse Express, or hardcoding chromium ignores your Puppeteer path, breaking GPU probes.

Imperatives: Zod schemas validate N8N payloads—z.object({ composition: z.string() }) catches bad inputs before they torch your queue. renderMediaOnWorker with onProgress hooks webhooks for real-time N8N feedback, keeping pipelines alive. Persist GPU test PDFs to /app/out for debugging persistence. Ditch the dupe endpoint, and lean on PUPPETEER_EXECUTABLE_PATH for bin flexibility. Without this, your worker's just a pretty health checker—add the SSR muscle, and it becomes the API powerhouse N8N craves.
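To make that concrete, here is a minimal sketch of such a worker built on the stock @remotion/renderer API (bundle, selectComposition, renderMedia). The post's renderMediaOnWorker wrapper, webhook URLs, and /compositions and /gpu-test routes would slot into the same structure; the ./src/index.ts entry point is a placeholder.

```javascript
// render-worker.js (minimal sketch; not the exact production file from this post)
const express = require("express");
const { z } = require("zod");
const path = require("path");
const { bundle } = require("@remotion/bundler");
const { renderMedia, selectComposition } = require("@remotion/renderer");

const app = express();
app.use(express.json({ limit: "500mb" })); // large inputProps payloads from n8n

// Validate n8n payloads before they reach the render queue
const renderSchema = z.object({
  composition: z.string(),
  inputProps: z.record(z.any()).default({}),
});

app.get("/health", (_req, res) => res.json({ ok: true, ts: Date.now() }));

app.post("/render", async (req, res) => {
  const parsed = renderSchema.safeParse(req.body);
  if (!parsed.success) {
    return res.status(400).json({ error: parsed.error.flatten() });
  }
  const { composition, inputProps } = parsed.data;
  try {
    // Entry point path is an assumption; point it at your Remotion root
    const serveUrl = await bundle({ entryPoint: path.resolve("./src/index.ts") });
    const comp = await selectComposition({ serveUrl, id: composition, inputProps });
    const outputLocation = path.resolve("out", `${composition}-${Date.now()}.mp4`);
    await renderMedia({
      composition: comp,
      serveUrl,
      codec: "h264",
      outputLocation,
      inputProps,
      chromiumOptions: { gl: "angle" },
      onProgress: ({ progress }) => {
        // fire a webhook back to n8n here for real-time feedback
        console.log(`render ${composition}: ${(progress * 100).toFixed(1)}%`);
      },
    });
    res.json({ ok: true, outputLocation });
  } catch (err) {
    res.status(500).json({ error: String(err) });
  }
});

app.listen(3002, () => console.log("Remotion render worker listening on :3002"));
```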

Wrapping It Up: From Fragile to Fortress

Piecing this Remotion stack together taught me one thing: Docker amplifies every little oversight. Version syncs, GPU handshakes, and MIME minutiae aren't optional—they're the difference between silky 4K renders and endless tail-chasing. We stripped bloat, aligned configs, and bolted in N8N hooks, turning a hobby rig into a workflow workhorse. If you're spinning up something similar, start with the Compose orchestration—it's the domino that topples the rest.

Got your own Docker-Remotion scars? Drop 'em in the comments. Until next time, keep those frames flying.

Technical Deep Dive:
Hardening a Dockerized Remotion Stack for Production

If you're building video pipelines with Remotion—especially tying it into workflows like N8N on a GPU rig like the RTX 3090—this guide breaks down the key files we refined. I'll cover what each does, the best practices baked in (pulled from official docs and fresh 2025 updates), and the gotchas that can tank your setup. Think of this as the blueprint for future-proofing: no fluff, just the mechanics to avoid midnight rebuilds.

docker-compose.yml (Remotion Services)

This YAML orchestrates the two Remotion containers: remotion-studio for UI previews and remotion-render-worker for API-driven renders via N8N. It handles networking, volumes, GPU passthrough, and env tuning.

Core Function: Defines service isolation—Studio on port 3001 for interactive editing, worker on internal 3002 for webhook-triggered jobs. Volumes mount source code, assets, and outputs; deploy.resources.devices passes the GPU.


What Can Break It: Exposing internal ports (like 3002) invites security holes; skip it for API-only workers. Mismatched volumes (e.g., no :rw on /app/out) lose renders on restarts. Without depends_on conditions (e.g., worker on Redis), jobs queue into voids.

Dockerfile.remotion-studio

Builds the Studio container: Node base, deps, GPU libs, and startup script for the preview UI.

Core Function: Layers Chromium/FFmpeg for rendering, installs NVIDIA toolkit stubs for passthrough, sets non-root user, and bundles Remotion's Chrome for headless previews.


What Can Break It: Full CUDA installs bloat to 3GB+ and conflict with host drivers—stick to libnvidia-container1 for lean passthrough. No chown on COPY (--chown=remotion:remotion) roots your node_modules, exploding on writes.

Dockerfile.remotion-worker

Mirrors Studio but tailored for SSR: Focuses on API server, fewer UI deps, more encoding tools.

Core Function: Installs GPU/FFmpeg libs, bundles Remotion renderer, copies worker scripts, exposes 3002 for N8N calls.


What Can Break It: Irrelevant vars like PyTorch allocator settings clutter logs; strip them. Overlooking libnvidia-container-tools dooms NVENC encodes to CPU slogs. Selective COPY (e.g., only src/) caches better, but files can go missing when runtime volumes override those paths.

remotion.config.ts

Global tuning file: Sets render params, overrides Webpack for Docker quirks.

Core Function: Configures codecs (H.264/CRF 18), timeouts (600s), and Webpack devServer for asset serving/static watching.


What Can Break It: Partial MIME coverage serves videos as HTML, crashing embeds. No watchFiles on volumes, and hot-reloads ignore N8N asset drops. Mismatched GL (angle-egl) forces CPU, spiking render times 5x.

start-remotion.sh

Bootstrap for Studio UI: Checks deps, tests GPU, launches npx remotion studio.

Core Function: Probes nvidia-smi/Chrome, verifies CLI/compositions, exports args, execs Studio on 3001.


What Can Break It: System Chrome tests fail on bundled paths, masking GPU issues. Unpinned CLI drifts to betas, breaking Studio sidebar. No set -e exits on partial fails, leaving half-started UIs.

start-worker.sh

Ignition for render API: GPU init, CLI check, launches Express worker.

Core Function: Dumps system stats, exports NVIDIA caps, verifies files, execs node render-worker.js.


What Can Break It: Missing exports let GPU flake on first render. Generic CLI installs pull mismatches, crashing SSR calls. No file verifies, and N8N hits 500s on absent JS.

render-worker.js

Express backbone for worker: Routes for health, tests, comps, and renders.

Core Function: Handles /health/GPU probes, lists comps via getCompositions, renders via renderMediaOnWorker with N8N hooks.

Best Practices:

  • Zod schemas validate POST /render bodies—catches bad N8N props early. onProgress callbacks fire webhooks for real-time flow.

  • Bundle once per request; use selectComposition for targeted renders.

  • 500MB JSON limits for props; error handler with timestamps for logs.

What Can Break It: No SSR endpoints starves N8N—add /compositions or jobs blind-fire. Dupe routes waste cycles; hardcode bins ignore PUPPETEER_EXECUTABLE_PATH, flopping GPU tests. Skip webhooks, and pipelines go dark on fails.

Quick Tips for Future Builders

Sync versions across all (4.0.350)—mismatches are silent killers. Test GPU with nvidia-smi in containers pre-prod. Volumes with :delegated boost perf on macOS hosts. This stack's now bulletproof for GPU pipelines; tweak concurrency for your VRAM, and you're golden.
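If it helps, the pre-prod checks I'd run look roughly like this; service and composition names are placeholders for whatever your compose file and project actually use.

```bash
# Check that all Remotion packages resolve to the same pinned version
npm ls remotion @remotion/cli @remotion/renderer @remotion/bundler

# Confirm the GPU is visible from inside the worker container before sending real jobs
docker compose exec remotion-render-worker nvidia-smi

# One-off smoke render inside the container (composition id is an example)
docker compose exec remotion-render-worker npx remotion render MyComp out/smoke-test.mp4
```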

About Regard: Building Freedom Through Shared Knowledge

Regard launched Real & Works after grinding through the chaos of content marketing, wearing every hat in the book—writer, WordPress coder, systems architect, graphic designer, video editor, and analytics guru. The hustle was relentless, but the burnout was inevitable. Running a one-person show while competing with studios flush with staff wasn’t just tough—it was draining every ounce of time and resources he had.

Armed with a deep background in programming and systems design, Regard decided to break the cycle. He built automated content pipelines, starting with a streamlined YouTube shorts video workflow that hums along via self-hosted setups, powered by service APIs for inference, composition, and posting. It’s lean, it’s mean, and it’s entirely under his control—no subscriptions, no middlemen, just pure, efficient creation on his own terms.

Now, Regard’s mission isn’t about landing clients—it’s about spreading knowledge to set creators free. He builds in public, sharing every step, stumble, and success, from the code to the crashes. His goal? To show that anyone with enough grit and guidance can build their own automated systems, right on their own servers, using APIs to make it happen. Follow his journey, grab the lessons from his wins and losses, and take charge of your own creative freedom.

Regard Vermeulen

