FAQ
What is this?
These are examples of generations from a 350M-parameter diffusion model. It takes a brightfield image as input and generates the corresponding cell paint channels (DNA, ER, Mitochondria, AGP, RNA) at 512x512 resolution. The model is a first-generation architecture trained only on publicly available data. Feel free to use the Try It tab to upload your own brightfield images.
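For intuition, here is a minimal sketch of that input/output contract in Python. The function name and API are hypothetical stand-ins (this is not the site's actual interface), and the stub returns empty channels purely to show the expected shapes.

```python
import numpy as np

# Hypothetical sketch of the model's input/output contract. The function
# name and API are illustrative stand-ins, not the real model interface.
CHANNELS = ["DNA", "ER", "Mitochondria", "AGP", "RNA"]

def virtual_cell_paint(brightfield: np.ndarray) -> dict[str, np.ndarray]:
    """One 512x512 grayscale brightfield in, five 512x512 stain channels out."""
    assert brightfield.shape == (512, 512)
    # The real 350M-parameter model would run iterative diffusion sampling
    # here; this stub returns empty channels just to show the shapes.
    return {name: np.zeros((512, 512), dtype=np.float32) for name in CHANNELS}

brightfield = np.random.rand(512, 512).astype(np.float32)  # placeholder input
stains = virtual_cell_paint(brightfield)
print({name: arr.shape for name, arr in stains.items()})
```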
Why do the brightfields look so rough?
The brightfield images are not post-processed at all. This is done to give viewers a sense of exactly what the model itself is seeing. Brightfields with artifacts are left as-is so that viewers can see how the model handles them.
The generated images look a little different from the ground truth. How do you measure model quality?
There are many degrees of freedom available to a researcher doing cell painting: how long the dye is left to incubate, the illumination, how the 5 channels are combined into 3 RGB channels for display, and so on. There is no single "right" cell paint for a given brightfield. See the technical report for a more quantitative evaluation.
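To illustrate one of those degrees of freedom, here is a sketch of one possible way to collapse five stain channels into an RGB composite. The palette and normalization below are assumptions for illustration, not the renderer actually used on this site.

```python
import numpy as np

# One possible mapping from five stain channels to a three-channel RGB
# composite. The colors below are an assumed palette, not the site's own.
PALETTE = {
    "DNA":          (0.0, 0.4, 1.0),  # blue
    "ER":           (0.0, 1.0, 0.0),  # green
    "Mitochondria": (1.0, 0.0, 0.0),  # red
    "AGP":          (1.0, 1.0, 0.0),  # yellow
    "RNA":          (0.0, 1.0, 1.0),  # cyan
}

def composite_rgb(stains: dict[str, np.ndarray]) -> np.ndarray:
    """Collapse per-channel 512x512 stain images into a 512x512x3 RGB image."""
    rgb = np.zeros((512, 512, 3), dtype=np.float32)
    for name, channel in stains.items():
        # Rescale each channel to [0, 1], then tint it with its palette color.
        lo, hi = channel.min(), channel.max()
        norm = (channel - lo) / (hi - lo + 1e-8)
        rgb += norm[..., None] * np.array(PALETTE[name], dtype=np.float32)
    # Tinted channels overlap, so clip the sum back into displayable range.
    return np.clip(rgb, 0.0, 1.0)
```

Changing the palette or the normalization changes the rendered image even though the underlying channels are identical, which is part of why there is no single pixel-level "right" answer to compare against.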
Are you saying you can completely replace physical staining with generative models and brightfield imaging?
No. A virtual cell painting model simply highlights information already present in the brightfield. Virtual cell paints can be generated purely computationally and without fixing the cells, which enables new capabilities like generating human-interpretable images from time lapses. Virtual cell painting is somewhat lossy compared to real cell painting (see the technical report), so there are use cases where brightfield plus virtual cell staining will be sufficient, and others where it will not be.
Why is this called thiscellpaintingdoesnotexist.com?
Generative Adversarial Networks (GANs) were exciting image generation models that took the AI world by storm. During that period, websites like thispersondoesnotexist.com were created to show off these models' generative potential. This site's name pays homage to the technology on whose shoulders we stand.