This is a fun experiment to generate and morph faces. Type something in both text boxes and click morph to generate a video morphing between them.
Not sure what to type?
Pick from a list of popular baby names at names.facemorph.me
You can even upload your own images by clicking "Change Mode" in the textbox.
When you upload your own images, encoder4editing is used to encode each one as a latent. It attempts to find a balance between accuracy and editability.
This tradeoff means it won't look quite the same as the input image but should work well for morphing.
Images based on text input or numeric seeds are not real people. They are randomly generated.
Custom images may be encodings of real people.
There is no correlation between what you type and the generated faces, other than that the same text will always generate the same face.
Generally the intermediate faces are a good mix of the two endpoints, but sometimes you'll notice the morph adds glasses, introduces a frown, transitions old-young-old, or changes some other feature along the way.
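The morphing itself can be pictured as interpolation in latent space. This is a minimal sketch, assuming each frame is rendered from a linear blend of the two endpoint latents; the generator call is a placeholder, and all names here are illustrative rather than the real implementation:

```python
import numpy as np


def morph_latents(z_a: np.ndarray, z_b: np.ndarray, n_frames: int) -> list:
    """Linearly interpolate between two latent vectors."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        z_t = (1.0 - t) * z_a + t * z_b  # intermediate latent
        frames.append(z_t)  # in practice: generator(z_t) -> image frame
    return frames


z_a, z_b = np.zeros(512), np.ones(512)
frames = morph_latents(z_a, z_b, 5)
```

The intermediate latents are exact blends of the endpoints, but nothing guarantees the generator decodes a 50/50 blend of latents into a 50/50 blend of facial features, which is why glasses or frowns can appear mid-morph.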
The model trained to generate images has no concept of human features; it has just learned to produce images that perceptually look like faces from a set of numbers. Although StyleGAN2 does have a notion of perceptual path length, there is no guidance on how it represents features internally. It just happens that optimising for convincing images coincides with having gradients for many of the features we would expect.
TL;DR: nobody really knows.
The dataset that the model was trained on has a small number of images that have a second face in the photo. Enough images for it to learn to sometimes generate a second face, but not enough to learn how to make it realistic.
For an example, try "a".
Well, it depends on how you count. For example, when you morph between two distinct faces, there is usually no distinct point where you can say "now it's a different face". Do you count each frame as a different face?
Technically, as SHA-256 is used on the input, that puts an upper limit of 2^256 on the number of endpoints based on text values.
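The hashing step can be sketched roughly as follows, assuming the digest is used to seed the latent sampler. The function names and the 512-dimensional latent are assumptions for illustration, not the site's actual code:

```python
import hashlib

import numpy as np

LATENT_DIM = 512  # typical StyleGAN2 latent size (assumed here)


def text_to_latent(text: str) -> np.ndarray:
    """Map input text to a reproducible latent vector."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    # Use part of the 256-bit digest as an RNG seed; the digest space
    # is what bounds the number of text-based endpoints at 2**256.
    seed = int.from_bytes(digest[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(LATENT_DIM)
```

Because SHA-256 is deterministic, the same text always maps to the same latent and therefore the same face, while different texts give unrelated latents.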
We're students doing an experiment. We don't have the means to spin up extra infrastructure if this gets popular. GPUs are expensive!
If you have a bunch of GPUs and would like to help, please get in touch.
This is just an experiment, and we make no commitment to keeping the server up. We might be developing new features or training new models, so try again later.
In the input textbox there is a button to change mode and upload an image.