
Happyhorse-1.0 ComfyUI Integration

The official Happyhorse ComfyUI custom node — connect the #1 ranked video model directly to your ComfyUI workflow. Install in 3 steps and start generating cinematic 4K video.

1. pip install
2. Load node
3. Add API key

Requirements & Installation

Get the Happyhorse ComfyUI custom node running in your local or cloud ComfyUI environment. Supports ComfyUI v1.3+ with Python 3.10+.

Requirements

  • ComfyUI v1.3.0 or later
  • Python 3.10 or later
  • A Happyhorse API key (free on sign-up)
  • Internet access for inference API calls

Step 1 — Install the custom node via pip

Open a terminal in your ComfyUI root directory and run:

bash
pip install comfyui-happyhorse

Step 2 — Restart ComfyUI and load the node

After installation, restart ComfyUI. In the node graph, right-click → Add Node → Happyhorse → HappyhorseVideoGenerate. The node will appear in your palette under the Happyhorse category.

bash
# Alternatively, install via ComfyUI Manager:
# Open ComfyUI Manager → Install Custom Nodes → search "happyhorse"
# Click Install → Restart ComfyUI

Step 3 — Enter your API key

In the HappyhorseVideoGenerate node, paste your API key into the api_key field, or set it as an environment variable:

bash
# Option A: set as environment variable (recommended)
export HAPPYHORSE_API_KEY="hh_xxxxxxxxxxxxxxxxxxxx"

# Option B: enter directly in the node's api_key field
# (do not share workflows with the key embedded)

Loading the Workflow

Download our pre-built Happyhorse ComfyUI workflow JSON files and drag them directly into your ComfyUI canvas to get started immediately.

Text-to-Video — Basic

Simple prompt-to-video workflow. The best starting point for Happyhorse video generation in ComfyUI.

happyhorse-t2v-basic.json
Text-to-Video — Advanced

Full parameter control: resolution, motion strength, guidance scale, seed. Recommended for Happyhorse-1.0 power users.

happyhorse-t2v-advanced.json
Image-to-Video

Upload a reference image and animate it with Happyhorse-1.0 motion physics via ComfyUI.

happyhorse-i2v.json

Drag the downloaded .json file onto your ComfyUI canvas, or use File → Open Workflow. The Happyhorse nodes will load automatically if the custom node is installed correctly.

API Key Setup

The Happyhorse ComfyUI node uses the Happyhorse REST API under the hood. Your API key authenticates each video generation request.
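The exact API schema has not been published, so purely as an illustrative sketch (the endpoint URL, field names, and auth header format below are all assumptions, not official), the node's per-generation request might be assembled like this:

```python
import json
import os

# Hypothetical endpoint; the real URL will be documented when the API launches.
API_URL = "https://api.happyhorselab.net/v1/generate"

def build_request(prompt, resolution="1280x720", duration=5,
                  guidance_scale=7.5, seed=None):
    """Assemble one video-generation request body (illustrative field names)."""
    body = {
        "model": "happyhorse-1.0",
        "prompt": prompt,
        "resolution": resolution,
        "duration": duration,
        "guidance_scale": guidance_scale,
    }
    if seed is not None:
        body["seed"] = seed  # omit for a random seed
    return body

def auth_headers():
    """Read the key from the environment rather than embedding it in workflows."""
    key = os.environ.get("HAPPYHORSE_API_KEY", "")
    return {"Authorization": f"Bearer {key}", "Content-Type": "application/json"}

print(json.dumps(build_request("aerial shot of mountain peaks", seed=42), indent=2))
```

Reading the key from `HAPPYHORSE_API_KEY` mirrors the recommended setup from Step 3, so shared workflow files never contain the secret.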

Happyhorse API — Launching Soon

The Happyhorse public API is under final testing and will be available very shortly. Join the waitlist to get early access and be the first to connect your ComfyUI workflows to Happyhorse-1.0.

Join the Waitlist

1. Get your free API key

Sign up at Happyhorselab.net — you receive 10 free compute credits on registration, no credit card required. Navigate to Dashboard → API Keys → Create New Key.

2. Configure the node

The HappyhorseVideoGenerate node has an api_key input field. You can either type the key directly or reference an environment variable using the built-in ComfyUI secret loader.

The node's inputs are stored in your workflow JSON like this:

json
{
  "api_key": "hh_xxxxxxxxxxxxxxxxxxxx",
  "model": "happyhorse-1.0",
  "prompt": "cinematic aerial shot of mountain peaks at sunrise",
  "resolution": "1280x720",
  "duration": 5,
  "guidance_scale": 7.5,
  "seed": 42
}

3. Monitor usage

Each generation deducts credits from your balance. Check your remaining credits at Dashboard → Credits. Purchase additional compute packs on the Pricing page with no subscription required.

Example Outputs

Videos generated with the Happyhorse ComfyUI custom node using the workflow files above. Happyhorse-1.0 delivers temporal consistency and cinematic motion physics directly inside ComfyUI.

Happyhorse-1.0 output

Aerial Mountain Sequence

cinematic aerial shot of snow-capped mountain peaks at golden hour, smooth camera drift, volumetric clouds

4K · T2V · Landscape

Character Close-Up

portrait of a woman in soft studio light, slight head turn, photorealistic skin texture, shallow depth of field

1080p · T2V · Portrait

Image-to-Video Demo

Product image animated with gentle parallax motion and ambient lighting shift

720p · I2V · Product

Abstract Motion

fluid paint simulation in deep blue and gold, swirling organic patterns, macro lens

1080p · T2V · Abstract

FAQ

Common questions about the Happyhorse ComfyUI integration, compatibility, API usage, and troubleshooting.

1. Which ComfyUI versions are compatible with the Happyhorse custom node?

The Happyhorse ComfyUI custom node supports ComfyUI v1.3.0 and later. We recommend running the latest stable release. If you encounter a version error, upgrade ComfyUI via git pull in your ComfyUI directory and reinstall the node with pip install --upgrade comfyui-happyhorse.

2. How many API credits does one ComfyUI video generation consume?

Standard 720p generations consume 1 credit per 5 seconds of video. 1080p and 4K upscaled outputs require 2–4 credits depending on motion complexity and the use of the daVinci-MagiHuman physics engine. You can monitor usage in real time on your Dashboard → Credits page.
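Based on the pricing stated above, a rough estimator can be sketched as follows; the per-resolution multipliers and 5-second block billing are illustrative assumptions within the stated 2–4 credit range, since actual cost varies with motion complexity:

```python
import math

# Assumed multipliers per 5-second block, within the stated 2-4 credit range
# for 1080p/4K; actual billing depends on motion complexity.
CREDITS_PER_5S = {"720p": 1, "1080p": 2, "4k": 4}

def estimate_credits(resolution: str, duration_s: float) -> int:
    """Estimate credits for one generation, assuming billing in 5-second blocks."""
    blocks = math.ceil(duration_s / 5)
    return blocks * CREDITS_PER_5S[resolution.lower()]

print(estimate_credits("720p", 5))    # one 5-second block at 720p
print(estimate_credits("1080p", 10))  # two blocks at the assumed 1080p rate
```

Treat the output as a planning aid only; the Dashboard → Credits page remains the authoritative balance.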

3. I get a 'Node not found' error after installing — how do I fix it?

This usually means ComfyUI was not restarted after installation. Fully close and reopen ComfyUI. If the error persists, verify the install path: the comfyui-happyhorse package should appear under your Python environment's site-packages. Run pip show comfyui-happyhorse to confirm. Also check that your Python version is 3.10 or later.
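A quick way to confirm the package is importable from the same interpreter ComfyUI runs is a check like this; note that the import name comfyui_happyhorse is a guess derived from the pip package name and may need adjusting:

```python
import importlib.util

def check_install(module_name: str = "comfyui_happyhorse") -> str:
    """Report whether the node package is importable from this interpreter.

    The default module name is an assumption based on the pip package name
    'comfyui-happyhorse'; adjust it if the installed module differs.
    """
    spec = importlib.util.find_spec(module_name)
    if spec is None:
        return (f"{module_name} not importable from this environment; "
                "rerun: pip install comfyui-happyhorse")
    return f"{module_name} found at {spec.origin}"

print(check_install())
```

Run it with the same `python` binary that launches ComfyUI; a mismatch between that interpreter and the one pip installed into is the most common cause of this error.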

4. Can I use the Happyhorse ComfyUI node without an internet connection?

No — Happyhorse-1.0 runs on our cloud inference cluster, so the ComfyUI node requires an active internet connection to send requests to the Happyhorse API. Local inference is not currently supported. Requests are lightweight (text prompts or image uploads) and results are streamed back efficiently.

5. Is there a pre-built ComfyUI workflow for Happyhorse image-to-video generation?

Yes. Download the happyhorse-i2v.json workflow from the Workflow section above. Load it in ComfyUI (drag onto canvas or File → Open Workflow), connect your reference image to the Image input node, add your API key, and queue the prompt. The node handles encoding and motion synthesis automatically.

Ready to use Happyhorse-1.0 in ComfyUI?

Get your free API key, download a workflow, and generate your first video in under 5 minutes.