Genmo
Live Demo

Check out our beta live demo.

What is this?

Genmo (short for Generative Mosaic) is a visual effects process that recreates any video (or photo) input using an entirely separate set of images. You upload a video, select a dataset to render it with, and get your recreated clip back in seconds.

How does it work?

We use our own deep neural network architecture to process your video. No original video content remains after processing, and no filters are applied; the new clip is completely synthesized by an AI model.

Our architecture hybridizes a generative adversarial network (GAN) with other neural network models. The network is feed-forward, which contributes to its speed, and, unlike other popular AI-based image manipulation tools, it uses no classifier network. Patent pending.
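"Feed-forward" is the key to the speed claim: each frame is produced in a single pass through fixed, already-trained weights, with no per-image optimization loop (the main cost in classic style transfer). A minimal sketch of that idea in plain NumPy, with made-up layer sizes and random stand-in weights (the actual architecture is proprietary and not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes; the real architecture is not public.
FRAME_DIM, HIDDEN_DIM = 64 * 64, 256

# Stand-ins for weights that would have been learned adversarially
# (GAN training). At inference time the generator is just a fixed
# feed-forward function of its input.
W1 = rng.standard_normal((FRAME_DIM, HIDDEN_DIM)) * 0.01
W2 = rng.standard_normal((HIDDEN_DIM, FRAME_DIM)) * 0.01

def generate(frame: np.ndarray) -> np.ndarray:
    """One forward pass: input frame -> synthesized frame.

    No iterative optimization and no classifier network -- just a
    single pipeline of matrix operations, hence the rendering speed.
    """
    h = np.maximum(frame.reshape(-1) @ W1, 0.0)  # ReLU hidden layer
    return (h @ W2).reshape(frame.shape)

frame = rng.random((64, 64))   # a dummy grayscale input frame
out = generate(frame)          # synthesized output, same shape
```

Because every frame takes the same fixed amount of compute, throughput scales linearly with video length, which is what makes near real-time rendering plausible.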

Have I seen this before?

Not exactly. You’ve seen photomosaics. You may have seen style transfer and deep dream. You might even know the great Arcimboldo. Genmo differs from all of them.

You haven’t seen video redrawn at near real-time from an arbitrarily chosen image set. You haven’t seen total flexibility in mosaic tile size, arrangement, and shape, all selected at runtime. You definitely haven’t seen the potential to recreate any input video with any other image dataset on demand. And much more is yet to come, including true real-time rendering.


Generative Mosaics
Photo & video reconstruction from constituent image sets


These works are part of an ongoing project that reconstructs arbitrary photos and videos from all kinds of constituent elements (h/t Arcimboldo). We render at near real-time speeds and anticipate a wide variety of uses.

The engine of these works is an in-house deep learning architecture we call Generative Mosaic (no style transfer). There is no original photo/video content here, and no filters; everything is completely synthesized by neural network models.


Transfiguration II
Cross-domain tiling with yGAN



Transfiguration
Cross-domain video composition with yGAN


Using our in-house neural network architecture (a GAN/autoencoder hybrid), we take a given input video and generate new sequences that map the input’s compositional and temporal qualities onto a destination dataset (transforming an episode of Seinfeld into Family Guy, for example).
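One common way to realize this kind of cross-domain mapping, sketched here as an assumption rather than the patented yGAN design, is an encoder that extracts domain-agnostic structure from each source frame, paired with a per-domain decoder that re-renders that structure in the destination dataset’s style. A toy NumPy version with random stand-in weights and invented domain names:

```python
import numpy as np

rng = np.random.default_rng(1)

FRAME, LATENT = 32 * 32, 64  # illustrative sizes only

# Encoder: frame -> compositional latent code (weights assumed learned).
W_enc = rng.standard_normal((FRAME, LATENT)) * 0.05

# One decoder per destination dataset (hypothetical domain names).
decoders = {
    "family_guy": rng.standard_normal((LATENT, FRAME)) * 0.05,
    "seinfeld": rng.standard_normal((LATENT, FRAME)) * 0.05,
}

def encode(frame: np.ndarray) -> np.ndarray:
    """Extract a domain-agnostic description of the frame's composition."""
    return np.tanh(frame.reshape(-1) @ W_enc)

def render(frame: np.ndarray, domain: str) -> np.ndarray:
    """Re-render the frame's composition in the destination domain."""
    z = encode(frame)
    return (z @ decoders[domain]).reshape(frame.shape)

clip = rng.random((3, 32, 32))  # three dummy source frames
out = np.stack([render(f, "family_guy") for f in clip])
```

Temporal consistency would come from conditioning each frame’s latent on its neighbors; that detail is omitted here for brevity.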

Learned Helplessness
Adventures in training to failure (it's a montage)

LATENTCULTURE is a research group focused on generative systems and AI for creative applications.