
Pioneering the future of content creation

We're building end-to-end foundation models that let anyone go from idea to audience in minutes. Our models understand pacing, framing, and attention dynamics natively, all the way from script to audio-video.

Our research spans real-world generative modeling, multimodal reasoning, dataset design and collection, audio-video quality evaluation, and large-scale training and inference. If you want to push the frontier of video, audio, and engagement for content that real audiences watch and follow, this is the place.

Research principles

End-to-end rapid iteration

We build the model together with the products it powers. We ship quickly, often releasing features and improvements the same day we build them, and we don’t stop improving after launch. Mirage’s iteration cycle lets us rapidly optimize against real user signals at scale.

Multimodal models

Short-form is inherently multimodal. Making a good video requires a creator to nail everything from the nuances of the script and the timing and cadence of their delivery to the composition and editing of their video. Our foundation model treats audio, video, and text on equal footing, learning phrasing, pacing, pose, micro-expressions, and framing directly from data. The result is harmonized and controllable short-form content that delivers on the right message and emotion.

Data-first system design

Data is at the heart of every good model. Our models are built on a curated, licensed dataset of the highest-quality short-form audio-video data. At Mirage, we design and evaluate our dataset with close attention to rights, safety, and brand control. Our metadata, data infrastructure, and training protocols are built to reflect everything needed to make compelling short-form content, including appearance, shot styles, delivery techniques, and scene structure.

Frontier training and inference

At Mirage, training and inference are done in-house, letting us move our models and products as fast as the frontier of AI research. We're training the next generation of large-scale multimodal models, where inference speed and model capabilities matter in equal measure. We're pushing on efficient, scalable serving to deliver quality, speed, and affordability.

Research leadership

Drew Jaegle

Drew leads AI research at Mirage, shaping the foundation models at the core of our technology. Formerly a Research Scientist at Google DeepMind, he advanced the state of the art in multimodal representation learning and generative modeling, developing the Perceiver family of architectures and building the AI music generation model Lyria.

Drew’s work spans high-impact projects in creative AI, combining deep technical expertise with a proven track record of publishing, shipping, and leading research that sets new standards in the field.

Captions

Read our latest research

To learn more, check out our white paper. The paper unpacks how Mirage generates ready-to-use footage of people speaking, laughing, gesturing, and communicating with real charisma and emotion.
