The #1-ranked AI video model on Artificial Analysis — now on gVideo
HappyHorse 1.0 — Alibaba's #1-Ranked AI Video Model
Run Alibaba's HappyHorse 1.0 on gVideo alongside nine other models — Kling 3.0, Veo 3.1, Sora 2 Pro, Luma Ray 2, Wan 2.6, Seedance 2.0, Hailuo 2.3, Kling 2.5 Turbo, and Pika 2.2. One subscription, one credit pool, no extra setup. HappyHorse leads the Artificial Analysis blind-test rankings for both text-to-video and image-to-video — and it ships with native audio synthesis baked in.
What’s different
Why creators reach for HappyHorse 1.0
#1 on Artificial Analysis
HappyHorse 1.0 climbed to the top of blind-test leaderboards for both text-to-video and image-to-video generation in April 2026. Built on a 15B-parameter transformer, it ranks ahead of every other open-weights and proprietary model on the benchmark.
Native audio + lip-sync
Audio is synthesized in the same pass as video — no separate dubbing step. Speech, ambient sound, and music are all generated together, with precise lip-sync when characters speak.
Up to 12 multimodal inputs
Combine text prompts with up to 12 reference inputs — images, audio clips, or short reference videos — to guide style, character likeness, and scene composition with precise control.
50+ visual styles, 5 aspect ratios
Cinematic, anime, documentary, commercial, fashion, sci-fi, and 40+ more out of the box. All five aspect ratios supported (16:9, 9:16, 1:1, 4:3, 3:4) for any platform.
Sample generations
Best prompts for HappyHorse 1.0
Hover to preview. Click any example to prefill the generator.
Cinematic close-up of a young woman in a sunlit Parisian cafe, gentle steam from her espresso, slow push-in, warm tungsten light
Anime: a teenage girl in school uniform on a rooftop at golden hour, wind blowing her hair, cherry blossom petals drifting past, painterly background
Macro 360-degree rotation of a luxury silver watch on black velvet, dramatic key light, dust motes through shaft of light, realistic chrome reflections
Vertical 9:16 cinematic — rainy neon-lit Tokyo street at night, lone figure with umbrella walking past noodle shop, reflections in puddles
Skateboarder landing a kickflip in slow motion at sun-bathed concrete plaza, dust kicking up on impact, dynamic camera tracking, motion blur
Documentary handheld shot of a fishmonger at Tokyo market preparing tuna, steam, natural daylight, ambient market chatter
One prompt · 10 engines
Same prompt, 10 models
This is what aggregator access unlocks: one prompt, run in parallel against every model on gVideo — Sora 2 Pro, Kling, Veo, Luma, Seedance, Hailuo, Wan. You see the style differences side by side before committing credits to a final render.
Prompt: “A golden retriever sprinting through shallow waves at sunset, slow motion, cinematic 35mm, warm backlight”
Samples are placeholders — real gVideo generations will replace these in the next release.
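The fan-out workflow above can be sketched in a few lines. Everything below is an illustrative assumption: gVideo's actual API is not documented on this page, so `render_stub` stands in for whatever client call you would really make — only the parallel-dispatch pattern is the point.

```python
from concurrent.futures import ThreadPoolExecutor

# The 10 models available on gVideo, per the comparison table on this page.
MODELS = [
    "HappyHorse 1.0", "Kling 3.0", "Veo 3.1", "Sora 2 Pro", "Luma Ray 2",
    "Wan 2.6", "Seedance 2.0", "Hailuo 2.3", "Kling 2.5 Turbo", "Pika 2.2",
]

PROMPT = ("A golden retriever sprinting through shallow waves at sunset, "
          "slow motion, cinematic 35mm, warm backlight")

def render_stub(model: str, prompt: str) -> dict:
    """Placeholder for a real generation call (hypothetical -- gVideo's
    client API is not shown on this page). Just echoes its inputs."""
    return {"model": model, "prompt": prompt, "status": "queued"}

def fan_out(prompt: str) -> list[dict]:
    """Submit the same prompt to every model in parallel, collect results."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = [pool.submit(render_stub, m, prompt) for m in MODELS]
        return [f.result() for f in futures]

results = fan_out(PROMPT)
```

With a real client call in place of `render_stub`, this gives you all 10 renders of one prompt side by side before you commit credits to a final pass.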
Head-to-head
HappyHorse 1.0 vs the other models on gVideo
| Model | Short-run credits | Speed | Max duration | Best for |
|---|---|---|---|---|
| Pika 2.2 | 20 / 5s | 30–75s | 10s | Stylized · 720p default |
| Kling 2.5 Turbo | 25 / 5s | 30–60s | 10s | Fast Kling renders |
| Hailuo 2.3 | 30 / 6s | 40–90s | 10s | Chinese realism · 768p default |
| Wan 2.6 | 30 / 5s | 30–75s | 5s | Versatility + low cost |
| Luma Ray 2 | 35 / 5s | 40–90s | 9s | Dream Machine · base default |
| Kling 3.0 | 40 / 5s | 60–120s | 10s | Lifelike motion |
| Veo 3.1 | 44 / 4s | 90–180s | 8s | Photoreal + audio |
| HappyHorse 1.0 (this page) | 45 / 5s | 90–150s | 10s | #1 benchmark · native audio · 720p default |
| Seedance 2.0 | 90 / 5s | 40–90s | 10s | Speed + adherence |
| Sora 2 Pro | 120 / 4s | 90–180s | 20s | OpenAI cinematic · HD default |
Credits
HappyHorse 1.0 credit cost on gVideo
HappyHorse 1.0 Standard 720p costs 45 credits per 5-second video, or 90 credits for 10 seconds: the same tier as Pika 2.2 HD and meaningfully cheaper than Seedance, Veo, or Sora. HD 1080p (85 credits) is available when you need higher-resolution output. All 10 models share a single credit pool under your gVideo subscription, and the free 100-credit signup covers two 5-second Standard renders to test.
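As a quick sanity check on the numbers above, here is a small sketch of the credit math. The rates are the ones listed on this page; the function name and the assumption that Standard renders come only in 5- and 10-second lengths are mine:

```python
def happyhorse_cost(seconds: int) -> int:
    """Credits for a HappyHorse 1.0 Standard 720p render on gVideo.

    Rates from this page: 45 credits / 5s, 90 credits / 10s.
    Assumption: only 5s and 10s renders are offered at this tier.
    """
    if seconds not in (5, 10):
        raise ValueError("Standard renders are 5s or 10s")
    return 45 * (seconds // 5)

five = happyhorse_cost(5)    # 45 credits
ten = happyhorse_cost(10)    # 90 credits

# The free 100-credit signup covers two 5-second renders: 2 * 45 = 90 <= 100.
signup_covers_two = 2 * five <= 100
```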
Common questions about HappyHorse 1.0
What is HappyHorse 1.0?
HappyHorse 1.0 is Alibaba's flagship AI video model, released April 2026. It topped the Artificial Analysis blind-test leaderboard for both text-to-video and image-to-video on launch — a first for an open-source model. Built on a 15B-parameter transformer with native audio synthesis.
Is HappyHorse 1.0 really #1 on Artificial Analysis?
Yes, as of April-May 2026. HappyHorse 1.0 climbed to the top of both text-to-video and image-to-video leaderboards on artificialanalysis.ai shortly after release. The model was unattributed at first ("the mystery #1 model") and later confirmed as Alibaba's project.
How does HappyHorse 1.0 compare to Sora 2 Pro?
HappyHorse 1.0 ranks above Sora 2 Pro on the current Artificial Analysis blind tests while costing roughly a third as much on gVideo: 45 credits per 5-second render versus 120 credits per 4 seconds for Sora. Sora still has the edge on long-form coherence (up to 20s); HappyHorse caps at 10s on gVideo for now.
Does HappyHorse 1.0 support audio?
Yes — natively. Audio (speech, ambient, music) is synthesized in the same pass as video, with precise lip-sync when characters speak. This puts it in the same audio class as Veo 3.1 and Sora 2 Pro.
What aspect ratios does HappyHorse 1.0 support on gVideo?
All five: 16:9, 9:16 vertical, 1:1 square, 4:3, and 3:4. More aspect ratios than any other model on gVideo.
Can I use HappyHorse 1.0 for commercial work?
Yes on all paid plans. The free tier is personal-use only. The base HappyHorse 1.0 model is open-source under a permissive license, but using it via gVideo wraps it under our standard commercial license for paying users.
Ready to generate with HappyHorse 1.0?
Start free — 100 credits on signup, no credit card required.