Written by Ethan Smith

Intro

The dominant methods for reducing the number of inference steps needed by a diffusion model revolve around three areas:

These methods have brought inference all the way down to a single step with at least reasonably convincing quality.

A question I keep coming back to, though, is whether it is reasonable to believe diffusion can be replicated faithfully in very few steps, or whether the iterative prediction along complex trajectories is intrinsically what gives diffusion its diversity of output and high quality.

In this post, I’ll review these methods and share some of my own thoughts.

GAN-based methods

DiffusionGAN

Starting in 2022, we have DiffusionGAN. Interestingly, it takes the perspective of improving GAN training through noise augmentation, drawing loose inspiration from diffusion. Inference, however, does not look like a diffusion model’s: there is no multi-step loop with an update rule, just a single pass through the generator.
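To make the training-time trick concrete, here is a minimal PyTorch sketch of the idea (not the paper’s actual code): both real and generated images are forward-diffused to a randomly sampled timestep before the discriminator sees them, with the discriminator conditioned on that timestep. The `G`/`D` modules, the linear beta schedule, and the `max_t` cap are placeholder assumptions of mine.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)           # assumed linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative product, \bar{alpha}_t

def diffuse(x0, t):
    """Forward-diffuse clean images x0 to timestep t, i.e. sample q(x_t | x_0)."""
    noise = torch.randn_like(x0)
    a = alpha_bars[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise

def d_step(G, D, real, z, opt_d, max_t=64):
    """One discriminator update on noise-augmented real and fake samples."""
    t = torch.randint(0, max_t, (real.size(0),))   # how much noise to inject is a training knob
    fake = G(z).detach()
    d_real = D(diffuse(real, t), t)                # discriminator sees noised real images
    d_fake = D(diffuse(fake, t), t)                # ...and noised generated images
    loss = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()  # non-saturating GAN loss
    opt_d.zero_grad()
    loss.backward()
    opt_d.step()
    return loss.item()

# Inference is a single generator call, with no iterative denoising loop:
# sample = G(torch.randn(n, z_dim))
```

The noise augmentation only touches the discriminator’s inputs during training, which is why sampling stays a one-shot GAN forward pass rather than anything resembling a diffusion sampler.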