We present an accessible first course on diffusion models and flow matching for machine learning, aimed at a technical audience with no diffusion experience. We try to simplify the mathematical details as much as possible (sometimes heuristically), while retaining enough precision to derive correct algorithms.
There are many existing resources for learning diffusion models. Why did we write another? Our goal was to teach diffusion as simply as possible, with minimal mathematical and machine learning prerequisites, but in enough detail to reason about its correctness. Unlike most tutorials on this subject, we take neither a Variational Auto-Encoder (VAE) nor a Stochastic Differential Equations (SDE) approach. In fact, for the core ideas we will not need any SDEs, Evidence Lower Bounds (ELBOs), Langevin dynamics, or even the notion of a score. The reader need only be familiar with basic probability, calculus, linear algebra, and multivariate Gaussians. The intended audience for this tutorial is technical readers at the level of at least advanced undergraduate or graduate students, who are learning diffusion for the first time and want a mathematical understanding of the subject.
This tutorial has five parts, each relatively self-contained, but covering closely related topics. Section 1 presents the fundamentals of diffusion: the problem we are trying to solve and an overview of the basic approach. Sections 2 and 3 show how to construct a stochastic and deterministic diffusion sampler, respectively, and give intuitive derivations for why these samplers correctly reverse the forward diffusion process. Section 4 covers the closely-related topic of Flow Matching, which can be thought of as a generalization of diffusion that offers additional flexibility (including what are called rectified flows or linear flows). Finally, in Section 5 we return to diffusion and connect this tutorial to the broader literature while highlighting some of the design choices that matter most in practice, including samplers, noise schedules, and parametrizations.
Acknowledgements
We are grateful for helpful feedback and suggestions from many people, in particular: Josh Susskind, Eugene Ndiaye, Dan Busbridge, Sam Power, De Wang, Russ Webb, Sitan Chen, Vimal Thilak, Etai Littwin, Chenyang Yuan, Alex Schwing, Miguel Angel Bautista Martin, and Dilip Krishnan.
1 Fundamentals of Diffusion
The goal of generative modeling is: given i.i.d. samples from some unknown distribution $p^*(x)$, construct a sampler for (approximately) the same distribution. For example, given a training set of dog images from some underlying distribution $p_{\text{dog}}$, we want a method of producing new images of dogs from this distribution.
One way to solve this problem, at a high level, is to learn a transformation from some easy-to-sample distribution (such as Gaussian noise) to our target distribution $p^*$. Diffusion models offer a general framework for learning such transformations. The clever trick of diffusion is to reduce the problem of sampling from the distribution $p^*(x)$ to a sequence of easier sampling problems.
This idea is best explained via the following Gaussian diffusion example. We'll sketch the main ideas now, and in later sections we will use this setup to derive what are commonly known as the DDPM and DDIM samplers$^{1}$, and reason about their correctness.
1.1 Gaussian Diffusion
For Gaussian diffusion, let $x_0$ be a random variable in $\mathbb{R}^d$ distributed according to the target distribution $p^*$ (e.g., images of dogs). Then construct a sequence of random variables $x_1, x_2, \ldots, x_T$ by successively adding independent Gaussian noise with some small scale $\sigma$:
$$x_{t+1} := x_t + \eta_t, \quad \eta_t \sim \mathcal{N}(0, \sigma^2). \tag{1}$$
This is called the forward process$^{2}$, which transforms the data distribution into a noise distribution. Equation (1) defines a joint distribution over all $(x_0, x_1, \ldots, x_T)$, and we let $\{p_t\}_{t \in [T]}$ denote the marginal distributions of each $x_t$. Notice that at large step count $T$, the distribution $p_T$ is nearly Gaussian$^{3}$, so we can approximately sample from $p_T$ by just sampling a Gaussian.
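To make the forward process concrete, here is a minimal numpy sketch; the two-point target distribution and all constants are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D target distribution p*: an equal mixture of two points.
def sample_p_star(n):
    return rng.choice([-1.0, 1.0], size=n)

T, sigma = 1000, 0.03            # step count and per-step noise scale
x = sample_p_star(50_000)        # x_0 ~ p*
for _ in range(T):               # forward process: x_{t+1} = x_t + N(0, sigma^2)
    x = x + sigma * rng.standard_normal(x.shape)

# x_T ~ x_0 + N(0, T * sigma^2); for large T this is nearly Gaussian.
print(x.mean(), x.std())         # ~0 and ~sqrt(1 + T * sigma^2)
```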
Figure 1: Probability distributions defined by the diffusion forward process on a one-dimensional target distribution $p_0$.
Now, suppose we can solve the following subproblem:
"Given a sample marginally distributed as p_(t)p_{t}, produce a sample marginally distributed as p_(t-1)p_{t-1} ". "给定一个边际分布为 p_(t)p_{t} 的样本,生成一个边际分布为 p_(t-1)p_{t-1} 的样本。"
We will call a method that does this a reverse sampler$^{4}$, since it tells us how to sample from $p_{t-1}$ assuming we can already sample from $p_t$. If we had a reverse sampler, we could sample from our target $p_0$ by simply starting with a Gaussian sample from $p_T$, and iteratively applying the reverse sampling procedure to get samples from $p_{T-1}, p_{T-2}, \ldots$, and finally $p_0 = p^*$.
The key insight of diffusion is that learning to reverse each intermediate step can be easier than learning to sample from the target distribution in one step$^{5}$. There are many ways to construct reverse samplers, but for concreteness let us first see the standard diffusion sampler, which we will call the DDPM sampler$^{6}$.
$^{5}$ Intuitively, this is because the distributions $(p_{t-1}, p_t)$ are already quite close, so the reverse sampler does not need to do much.
The ideal DDPM sampler uses the obvious strategy: at time $t$, given input $z$ (which is promised to be a sample from $p_t$), we output a sample from the conditional distribution
$$p(x_{t-1} \mid x_t = z).$$
$^{6}$ This is the sampling strategy originally proposed in Sohl-Dickstein et al. [2015].
This is clearly a correct reverse sampler. The problem is, it requires learning a generative model for the conditional distribution $p(x_{t-1} \mid x_t)$ for every $x_t$, which could be complicated. But if the per-step noise $\sigma$ is sufficiently small, then it turns out this conditional distribution becomes simple:
Fact 1 (Diffusion Reverse Process). For small $\sigma$, and the Gaussian diffusion process defined in (1), the conditional distribution $p(x_{t-1} \mid x_t)$ is itself close to Gaussian. That is, for all times $t$ and conditionings $z \in \mathbb{R}^d$, there exists some mean parameter $\mu \in \mathbb{R}^d$ such that
$$p(x_{t-1} \mid x_t = z) \approx \mathcal{N}(x_{t-1}; \, \mu, \, \sigma^2).$$
This is not an obvious fact; we will derive it in Section 2.1. It enables a drastic simplification: instead of having to learn an arbitrary distribution $p(x_{t-1} \mid x_t)$ from scratch, we now know everything about this distribution except its mean, which we denote$^{7}$ $\mu_{t-1}(x_t)$. The fact that we can approximate the posterior distribution as Gaussian when $\sigma$ is sufficiently small is illustrated in Figure 2. This is an important point, so to re-iterate: for a given time $t$ and conditioning value $x_t$, learning the mean of $p(x_{t-1} \mid x_t)$ is sufficient to learn the full conditional distribution $p(x_{t-1} \mid x_t)$.

Figure 2: Illustration of Fact 1. The prior distribution $p(x_{t-1})$, leftmost, defines a joint distribution $(x_{t-1}, x_t)$ where $p(x_t \mid x_{t-1}) = \mathcal{N}(0, \sigma^2)$. We plot the reverse conditional distributions $p(x_{t-1} \mid x_t)$ for a fixed conditioning $x_t$ and varying noise levels $\sigma$. Notice these distributions become close to Gaussian for small $\sigma$.
Learning the mean of $p(x_{t-1} \mid x_t)$ is a much simpler problem than learning the full conditional distribution, because we can solve it by regression. To elaborate: we have a joint distribution $(x_{t-1}, x_t)$ from which we can easily sample, and we would like to estimate $\mathbb{E}[x_{t-1} \mid x_t]$. This can be done by optimizing a standard regression loss$^{8}$:
$$\mu_{t-1}(z) := \mathbb{E}[x_{t-1} \mid x_t = z] = \operatorname{argmin}_{f}\; \mathbb{E}_{x_t, x_{t-1}} \|f(x_t) - x_{t-1}\|^2, \tag{6}$$
where the expectation is taken over samples $x_0$ from our target distribution $p^*$.$^{9}$ This particular regression problem is well-studied in certain settings. For example, when the target $p^*$ is a distribution on images, the corresponding regression problem (Equation 6) is exactly an image-denoising objective, which can be approached with familiar methods (e.g., convolutional neural networks).
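As a sketch of how this works in practice, the following hypothetical helper estimates the regression loss of Equation (6) by Monte Carlo, simulating $(x_{t-1}, x_t)$ from samples of $x_0$ as in footnote 9 (all names and constants here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def regression_loss(f, x0, t, sigma):
    """Monte-Carlo estimate of E || f(x_t) - x_{t-1} ||^2 at step t,
    simulating (x_{t-1}, x_t) by adding noise to samples x0 of p*."""
    x_prev = x0 + sigma * np.sqrt(t - 1) * rng.standard_normal(x0.shape)
    x_t = x_prev + sigma * rng.standard_normal(x0.shape)  # one more forward step
    return np.mean((f(x_t) - x_prev) ** 2)

# Example: the identity predictor f(x_t) = x_t incurs loss ~ sigma^2,
# the variance of the final noise step.
x0 = rng.standard_normal(10_000)
print(regression_loss(lambda x: x, x0, t=100, sigma=0.1))
```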
Stepping back, we have seen something remarkable: we have reduced the problem of learning to sample from an arbitrary distribution to the standard problem of regression.
1.2 Diffusions in the Abstract
Let us now abstract away the Gaussian setting, to define diffusion-like models in a way that will capture their many instantiations (including deterministic samplers, discrete domains, and flow matching).
Abstractly, here is how to construct a diffusion-like generative model: We start with our target distribution $p^*$, and we pick some base distribution $q(x)$ which is easy to sample from, e.g. a standard Gaussian or i.i.d. bits. We then try to construct a sequence of distributions which interpolate between our target $p^*$ and the base distribution $q$. That is, we construct distributions
$$p_0, \, p_1, \, p_2, \, \ldots, \, p_T, \tag{7}$$
$^{7}$ We denote the mean as a function $\mu_{t-1}: \mathbb{R}^d \to \mathbb{R}^d$ because the mean of $p(x_{t-1} \mid x_t)$ depends on the time $t$ as well as the conditioning $x_t$, as described in Fact 1.
$^{8}$ Recall the generic fact that for any distribution over $(x, y)$, we have $\operatorname{argmin}_{f} \mathbb{E}\|f(x) - y\|^2 = \mathbb{E}[y \mid x]$.
$^{9}$ Notice that we simulate samples of $(x_{t-1}, x_t)$ by adding noise to the samples of $x_0$, as defined in Equation 1.
such that $p_0 = p^*$ is our target, $p_T = q$ is the base distribution, and adjacent distributions $(p_{t-1}, p_t)$ are marginally "close" in some appropriate sense. Then, we learn a reverse sampler which transforms distributions $p_t$ to $p_{t-1}$. This is the key learning step, which presumably is made easier by the fact that adjacent distributions are "close." Formally, reverse samplers are defined below.
Definition 1 (Reverse Sampler). Given a sequence of marginal distributions $p_t$, a reverse sampler for step $t$ is a potentially stochastic function $F_t$ such that if $x_t \sim p_t$, then the marginal distribution of $F_t(x_t)$ is exactly $p_{t-1}$:
$$\{F_t(z) : z \sim p_t\} \equiv p_{t-1}. \tag{8}$$
There are many possible reverse samplers$^{10}$, and it is even possible to construct reverse samplers which are deterministic. In the remainder of this tutorial we will see three popular reverse samplers more formally: the DDPM sampler discussed above (Section 2.1), the DDIM sampler (Section 3), which is deterministic, and the family of flow-matching models (Section 4), which can be thought of as a generalization of DDIM.$^{11}$
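In code, the overall sampling loop implied by Definition 1 is just iterated function application; here is a minimal sketch, where `reverse_samplers` and `sample_base` are hypothetical placeholders for the learned per-step samplers and the base distribution:

```python
def generate(reverse_samplers, sample_base):
    """Sample from p_0 by starting at the base distribution p_T and applying
    the reverse samplers F_T, F_{T-1}, ..., F_1 in turn (Definition 1)."""
    x = sample_base()                       # x_T ~ q = p_T
    for F_t in reversed(reverse_samplers):  # reverse_samplers = [F_1, ..., F_T]
        x = F_t(x)                          # marginally: p_t -> p_{t-1}
    return x                                # approximately distributed as p_0
```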
1.3 Discretization
Before we proceed further, we need to be more precise about what we mean by adjacent distributions $p_t, p_{t-1}$ being "close". We want to think of the sequence $p_0, p_1, \ldots, p_T$ as the discretization of some (well-behaved) time-evolving function $p(x, t)$ that starts from the target distribution $p_0$ at time $t = 0$ and ends at the noisy distribution $p_T$ at time $t = 1$:
{:(9)p(x","k Delta t)=p_(k)(x)","quad" where "Delta t=(1)/(T):}\begin{equation*}
p(x, k \Delta t)=p_{k}(x), \quad \text { where } \Delta t=\frac{1}{T} \tag{9}
\end{equation*}
The number of steps $T$ controls the fineness of the discretization (hence the closeness of adjacent distributions).$^{12}$
In order to ensure that the variance of the final distribution $p_T$ is independent of the number of discretization steps, we also need to be more specific about the variance of each increment. Note that if $x_k = x_{k-1} + \mathcal{N}(0, \sigma^2)$, then $x_T \sim \mathcal{N}(x_0, T\sigma^2)$. Therefore, we need to scale the variance of each increment by $\Delta t = 1/T$; that is, choose
$$\sigma^2 = \sigma_q^2 \, \Delta t, \tag{10}$$
where $\sigma_q^2$ is the desired terminal variance. This choice ensures that the variance of $p_T$ is always $\sigma_q^2$, regardless of $T$. The forward process then becomes
$$x_{t+\Delta t} = x_t + \eta_t, \quad \eta_t \sim \mathcal{N}(0, \sigma_q^2 \Delta t). \tag{11}$$
(The $\sqrt{\Delta t}$ scaling will turn out to be important in our arguments for the correctness of our reverse solvers in the next chapter, and also connects to the SDE formulation in Section 2.4.)
$^{10}$ Notice that none of this abstraction is specific to the case of Gaussian noise; in fact, it does not even require the concept of "adding noise". It is even possible to instantiate it in discrete settings, where we consider distributions $p^*$ over a finite set, and define corresponding "interpolating distributions" and reverse samplers.
$^{11}$ Given a set of marginal distributions $\{p_t\}$, there are many possible joint distributions consistent with these marginals (such joint distributions are called couplings). There is therefore no canonical reverse sampler for a given set of marginals $\{p_t\}$; we are free to choose whichever coupling is most convenient.
At this point, it is convenient to adjust our notation. From here on, $t$ will represent a continuous value in the interval $[0, 1]$ (specifically, taking one of the values $0, \Delta t, 2\Delta t, \ldots, T\Delta t = 1$). Subscripts will indicate time rather than index, so for example $x_t$ will now denote $x$ at a discretized time $t$. That is, Equation (1) becomes:
$$x_t \sim \mathcal{N}(x_0, \sigma_t^2), \quad \text{where } \sigma_t := \sigma_q \sqrt{t}, \tag{12}$$
since the total noise added up to time $t$ (i.e., $\sum_{\tau \in \{0, \Delta t, 2\Delta t, \ldots, t - \Delta t\}} \eta_\tau$) is also Gaussian, with mean zero and variance $\sum_\tau \sigma_q^2 \Delta t = \sigma_q^2 t$.
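A quick numerical check of this scaling, in the same numpy style as before (constants illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_q = 2.0

def terminal_std(T):
    """Simulate the scaled forward process (Eq. 11) from x_0 = 0; the
    terminal std should be ~sigma_q regardless of the step count T."""
    dt = 1.0 / T
    x = np.zeros(100_000)
    for _ in range(T):
        x += sigma_q * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x.std()

print(terminal_std(10), terminal_std(1000))  # both approximately 2.0
```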
2 Stochastic Sampling: DDPM
In this section we review the DDPM-like reverse sampler discussed in Section 1, and heuristically prove its correctness. This sampler is conceptually the same as the sampler popularized in Denoising Diffusion Probabilistic Models (DDPM) by Ho et al. [2020] and originally introduced by Sohl-Dickstein et al. [2015], when adapted to our simplified setting. However, a word of warning for the reader familiar with Ho et al. [2020]: although the overall strategy of our sampler is identical to Ho et al. [2020], certain technical details (like constants, etc.) are slightly different$^{13}$.
We consider the setup from Section 1.3, with some target distribution $p^*$ and the joint distribution of noisy samples $(x_0, x_{\Delta t}, \ldots, x_1)$ defined by Equation (11). The DDPM sampler will require estimates of the following conditional expectations:
$$\mu_{t-\Delta t}(z) := \mathbb{E}[x_{t-\Delta t} \mid x_t = z]. \tag{13}$$
This is a set of functions $\{\mu_t\}$, one for every time step $t \in \{0, \Delta t, \ldots, 1 - \Delta t\}$. In the training phase, we estimate these functions from i.i.d. samples of $x_0$, by optimizing the denoising regression objective
$$\min_{f}\; \mathbb{E}_{x_t, x_{t-\Delta t}} \|f(x_t, t) - x_{t-\Delta t}\|^2, \tag{14}$$
typically with a neural network$^{14}$ parameterizing $f$. Then, in the inference phase, we use the estimated functions in the following reverse sampler.
{:[" Algorithm 1: Stochastic Reverse Sampler (DDPM-like) "],[" For input sample "x_(t)", and timestep "t", output: "]:}\begin{aligned}
& \text { Algorithm 1: Stochastic Reverse Sampler (DDPM-like) } \\
& \hline \text { For input sample } x_{t} \text {, and timestep } t \text {, output: }
\end{aligned}
To actually generate a sample, we first sample $x_1$ as an isotropic Gaussian $x_1 \sim \mathcal{N}(0, \sigma_q^2)$, and then run the iteration of Algorithm 1 down to $t = 0$, to produce a generated sample $\widehat{x}_0$. (Recall that in our discretized notation (12), $x_1$ is the fully-noised terminal distribution, and the iteration takes steps of size $\Delta t$.) Explicit pseudocode for these algorithms is given in Section 2.2.
We want to reason about the correctness of this entire procedure: why does iterating Algorithm 1 produce a sample from (approximately) our target distribution $p^*$? The key missing piece is that we need to prove some version of Fact 1: that the true conditional $p(x_{t-\Delta t} \mid x_t)$ can be well-approximated by a Gaussian, and that this approximation gets better as we scale $\Delta t \to 0$.
$^{13}$ For the experts: the main difference is that we use the "Variance Exploding" diffusion forward process. We also use a constant noise schedule, and we do not discuss how to parameterize the predictor ("predicting $x_0$ vs. $x_{t-1}$ vs. noise $\eta$"). We elaborate on the latter point in Section 2.3.
2.1 Correctness of DDPM
Here is a more precise version of Fact 1, along with a heuristic derivation. This will complete the argument that Algorithm 1 is correct, i.e., that it approximates a valid reverse sampler in the sense of Definition 1.
Claim 1 (Informal). Let $p_{t-\Delta t}(x)$ be an arbitrary, sufficiently-smooth density over $\mathbb{R}^d$. Consider the joint distribution of $(x_{t-\Delta t}, x_t)$, where $x_{t-\Delta t} \sim p_{t-\Delta t}$ and $x_t \sim x_{t-\Delta t} + \mathcal{N}(0, \sigma_q^2 \Delta t)$. Then, for sufficiently small $\Delta t$, the following holds. For all conditionings $z \in \mathbb{R}^d$, there exists $\mu_z$ such that:
$$p(x_{t-\Delta t} \mid x_t = z) \approx \mathcal{N}(x_{t-\Delta t}; \, \mu_z, \, \sigma_q^2 \Delta t), \tag{16}$$
where$^{15}$
$$\mu_z := z + (\sigma_q^2 \Delta t) \, \nabla_x \log p_t(z), \tag{18}$$
and $p_t$ is the marginal distribution of $x_t$.
Before we see the derivation, a few remarks: Claim 1 implies that to sample $x_{t-\Delta t}$, it suffices to first sample $x_t$, then sample from a Gaussian distribution centered around $\mathbb{E}[x_{t-\Delta t} \mid x_t]$. This is exactly what DDPM does, in Equation (15). Finally, in these notes we will not actually need the expression for $\mu_z$ in Equation (18); it is enough for us to know that such a $\mu_z$ exists, so we can learn it from samples.
Proof of Claim 1 (Informal). Here is a heuristic argument for why the score appears in the reverse process. We will essentially just apply Bayes' rule and then Taylor-expand appropriately. We start with Bayes' rule:
$$p(x_{t-\Delta t} \mid x_t) = \frac{p(x_t \mid x_{t-\Delta t}) \, p_{t-\Delta t}(x_{t-\Delta t})}{p_t(x_t)}. \tag{19}$$
Then take logs of both sides. Throughout, we will drop any additive constants in the $\log$ (which translate to normalizing factors), and drop all terms of order $\mathcal{O}(\Delta t)$.$^{16}$ Note that we should think of $x_t$ as a constant in this derivation, since we want to understand the
$^{15}$ Experts will recognize this mean as related to the score. In fact, Tweedie's formula implies that this mean is exactly correct even for large $\Delta t$, with no approximation required. That is, $\mathbb{E}[x_{t-\Delta t} \mid x_t = z] = z + \sigma_q^2 \Delta t \, \nabla \log p_t(z)$. The distribution $p(x_{t-\Delta t} \mid x_t)$ may deviate from Gaussian, however, for larger $\sigma$.
$^{16}$ Note that $x_t - x_{t-\Delta t} \sim \mathcal{O}(\sqrt{\Delta t})$. Dropping $\mathcal{O}(\Delta t)$ terms means dropping $(x_t - x_{t-\Delta t})^2 \sim \mathcal{O}(\Delta t)$ in the expansion of $p_t(x_t)$, but keeping $\frac{1}{2\sigma_q^2 \Delta t}(x_t - x_{t-\Delta t})^2 \sim \mathcal{O}(1)$ in $p(x_t \mid x_{t-\Delta t})$.
conditional probability as a function of $x_{t-\Delta t}$. Now:
$$
\begin{aligned}
\log p(x_{t-\Delta t} \mid x_t) &= \log p(x_t \mid x_{t-\Delta t}) + \log p_{t-\Delta t}(x_{t-\Delta t}) - \log p_t(x_t) \\
&\approx \log p(x_t \mid x_{t-\Delta t}) + \log p_t(x_{t-\Delta t}) - \log p_t(x_t) \\
&= -\frac{1}{2\sigma_q^2 \Delta t}\left\|x_{t-\Delta t} - x_t\right\|^2 + \log p_t(x_{t-\Delta t}) - \log p_t(x_t) \\
&\approx -\frac{1}{2\sigma_q^2 \Delta t}\left\|x_{t-\Delta t} - x_t\right\|^2 + \left\langle \nabla_x \log p_t(x_t), \, x_{t-\Delta t} - x_t \right\rangle \\
&= -\frac{1}{2\sigma_q^2 \Delta t}\left\|x_{t-\Delta t} - \mu\right\|^2 + C.
\end{aligned}
$$
Here the second line uses $p_{t-\Delta t}(\cdot) = p_t(\cdot) + \Delta t \frac{\partial}{\partial t} p_t(\cdot)$; the third line is the definition of $\log p(x_t \mid x_{t-\Delta t})$; the fourth line Taylor expands around $x_t$ and drops constants; and the last line completes the square in $(x_{t-\Delta t} - x_t)$, for $\mu := x_t + (\sigma_q^2 \Delta t)\nabla_x \log p_t(x_t)$ and a constant $C$ involving only $x_t$, which we drop.

This is identical, up to additive factors, to the log-density of a Normal distribution with mean $\mu$ and variance $\sigma_q^2 \Delta t$. Therefore,
$$p(x_{t-\Delta t} \mid x_t) \approx \mathcal{N}(x_{t-\Delta t}; \, \mu, \, \sigma_q^2 \Delta t).$$
Reflecting on this derivation, the main idea was that for small enough $\Delta t$, the Bayes-rule expansion of the reverse process $p(x_{t-\Delta t} \mid x_t)$ is dominated by the term $p(x_t \mid x_{t-\Delta t})$ from the forward process. This is intuitively why the reverse process and the forward process have the same functional form (both are Gaussian here)$^{17}$.
Technical Details [Optional]. The meticulous reader may notice that Claim 1 is not obviously sufficient to imply correctness of the entire DDPM algorithm. The issue is: as we scale down $\Delta t$, the error in our per-step approximation (Equation 16) decreases, but the number of total steps required increases. So if the per-step error does not decrease fast enough (as a function of $\Delta t$), then these errors could accumulate to a non-negligible error by the final step. Thus, we need to quantify how fast the per-step error decays. Lemma 1 below is one way of quantifying this: it states that if the step-size (i.e., the variance of the per-step noise) is $\sigma^2$, then the KL error of the per-step Gaussian approximation is $\mathcal{O}(\sigma^4)$. This decay rate is fast enough, because the number of steps only grows as$^{18}$ $\Omega(1/\sigma^2)$.
Lemma 1. Let $p(x)$ be an arbitrary density over $\mathbb{R}$, with bounded 1st through 4th order derivatives. Consider the joint distribution $(x_0, x_1)$, where $x_0 \sim p$ and $x_1 \sim x_0 + \mathcal{N}(0, \sigma^2)$. Then, for any conditioning $z \in \mathbb{R}$, we have
$$\mathrm{KL}\Big( p(x_0 \mid x_1 = z) \;\Big\|\; \mathcal{N}(\mu_z, \sigma^2) \Big) \leq \mathcal{O}(\sigma^4), \quad \text{where } \mu_z := z + \sigma^2 \nabla \log p(z). \tag{24}$$
$^{17}$ This general relationship between forward and reverse processes holds somewhat more generally than just Gaussian diffusion; see e.g. the discussion in Sohl-Dickstein et al. [2015].
It is possible to prove Lemma 1 by doing essentially a careful Taylor expansion; we include the full proof in Appendix B.1.
2.2 Algorithms
Pseudocode listings 1 and 2 give the explicit DDPM train loss and sampling code. To train$^{19}$ the network $f_\theta$, we minimize the expected loss $L_\theta$ output by Pseudocode 1, typically by backpropagation.
Pseudocode 3 describes the closely-related DDIM sampler, which will be discussed later in Section 3.
$^{19}$ Note that the training procedure optimizes $f_\theta$ for all timesteps $t$ simultaneously, by sampling $t \in [0, 1]$ uniformly in Line 2.
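For concreteness, here is a minimal numpy sketch of the loss that Pseudocode 1 computes; the model `f_theta` is a placeholder for any trained regressor (e.g., a neural network), and the batch shape is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def ddpm_train_loss(f_theta, x0, sigma_q, dt):
    """Monte-Carlo estimate of the denoising objective (Eq. 14) on a batch
    x0 of shape (B, d): predict x_{t - dt} from (x_t, t)."""
    T = round(1.0 / dt)
    t = dt * rng.integers(1, T + 1, size=(x0.shape[0], 1))  # t in {dt, ..., 1}
    x_prev = x0 + sigma_q * np.sqrt(t - dt) * rng.standard_normal(x0.shape)
    x_t = x_prev + sigma_q * np.sqrt(dt) * rng.standard_normal(x0.shape)
    return np.mean((f_theta(x_t, t) - x_prev) ** 2)
```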
Pseudocode 2: DDPM sampling (code for Algorithm 1)

Input: Trained model $f_\theta$.
Data: Terminal variance $\sigma_q$; step-size $\Delta t$.
Output: $x_0$

$x_1 \leftarrow \mathcal{N}(0, \sigma_q^2)$
for $t = 1, (1-\Delta t), (1-2\Delta t), \ldots, \Delta t$ do
$\quad \eta \leftarrow \mathcal{N}(0, \sigma_q^2 \Delta t)$
$\quad x_{t-\Delta t} \leftarrow f_\theta(x_t, t) + \eta$
end
return $x_0$
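The same loop in plain Python, again with `f_theta` a placeholder for the trained model (a sketch, not the authors' reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def ddpm_sample(f_theta, sigma_q, dt, d):
    """Run Pseudocode 2: start from x_1 ~ N(0, sigma_q^2) and iterate
    Algorithm 1 for t = 1, 1 - dt, ..., dt."""
    x = sigma_q * rng.standard_normal(d)
    t = 1.0
    while t > dt / 2:
        eta = sigma_q * np.sqrt(dt) * rng.standard_normal(d)
        x = f_theta(x, t) + eta          # x_{t - dt} <- f(x_t, t) + noise
        t -= dt
    return x                             # x_0, an approximate sample from p*
```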
2.3 Variance Reduction: Predicting $x_0$
Thus far, our diffusion models have been trained to predict $\mathbb{E}[x_{t-\Delta t} \mid x_t]$: this is what Algorithm 1 requires, and what the training procedure of Pseudocode 1 produces. However, many practical diffusion implementations actually train to predict $\mathbb{E}[x_0 \mid x_t]$, i.e., to predict the expectation of the initial point $x_0$ instead of the previous point $x_{t-\Delta t}$. This difference turns out to be just a variance-reduction trick, which estimates the same quantity in expectation. Formally, the two quantities can be related as follows:
Claim 2. For the Gaussian diffusion setting of Section 1.3, we have:
$$\mathbb{E}[x_{t-\Delta t} \mid x_t] = \left(1 - \frac{\Delta t}{t}\right) x_t + \frac{\Delta t}{t}\, \mathbb{E}[x_0 \mid x_t]. \tag{23}$$
This claim implies that if we want to estimate $\mathbb{E}[x_{t-\Delta t} \mid x_t]$, we can instead estimate $\mathbb{E}[x_0 \mid x_t]$ and then essentially divide by $(t / \Delta t)$, which is the number of steps taken thus far. The variance-reduced versions of the DDPM training and sampling algorithms do exactly this; we include them in Appendix B.9.
The intuition behind Claim 2 is illustrated in Figure 3: first, observe that predicting $x_{t-\Delta t}$ given $x_t$ is equivalent to predicting the last noise step, which is $\eta_{t-\Delta t} = (x_t - x_{t-\Delta t})$ in the forward process of Equation (11). But if we are only given the final $x_t$, then all of the previous noise steps $\{\eta_i\}_{i < t}$ intuitively "look the same": we cannot distinguish between noise that was added at the last step and noise that was added at the 5th step, for example. By this symmetry, we can conclude that all of the individual noise steps are distributed identically (though not independently) given $x_t$. Thus, instead of estimating a single noise step, we can equivalently estimate the average of all prior noise steps, which has much lower variance. There are $(t/\Delta t)$ elapsed noise steps by time $t$, so we divide the total noise by this quantity in Equation (23) to compute the average. See Appendix B.8 for a formal proof.
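In code, converting an $x_0$-prediction into the mean that Algorithm 1 needs is a one-liner; here is a sketch of Equation (23), with hypothetical argument names:

```python
def mean_from_x0_prediction(x0_pred, x_t, t, dt):
    """Claim 2 (Eq. 23): E[x_{t-dt} | x_t] is a convex combination of x_t
    and E[x_0 | x_t], with weight dt/t on the x_0 prediction."""
    return (1 - dt / t) * x_t + (dt / t) * x0_pred
```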
Word of warning: Diffusion models should always be trained to estimate expectations. In particular, when we train a model to predict $\mathbb{E}[x_0 \mid x_t]$, we should not think of this as trying to learn "how to sample from the distribution $p(x_0 \mid x_t)$". For example, if we are training an image diffusion model, then the optimal model will output $\mathbb{E}[x_0 \mid x_t]$, which will look like a blurry mix of images (e.g., Figure 1b in Karras et al. [2022]); it will not look like an actual image sample. It is good to keep in mind that when diffusion papers colloquially discuss models "predicting $x_0$", they do not mean producing something that looks like an actual sample of $x_0$.
Figure 3: The intuition behind Claim 2. Given $x_t$, the final noise step $\eta_{t-\Delta t}$ is distributed identically as all other noise steps, intuitively because we only know the sum $x_t = x_0 + \sum_i \eta_i$.
2.4 Diffusions as SDEs [Optional]
In this section$^{20}$, we connect the discrete-time processes we have discussed so far to stochastic differential equations (SDEs). In the continuous limit, as $\Delta t \to 0$, our discrete diffusion process turns into a stochastic differential equation. SDEs can also represent many other diffusion variants (corresponding to different drift and diffusion terms), offering flexibility in design choices, like scaling and noise-scheduling. The SDE perspective is powerful because existing theory provides a general closed-form solution for the time-reversed SDE. Discretizing the reverse-time SDE for our particular diffusion immediately yields the sampler we derived in this section, but reverse-time SDEs for other diffusion variants are also available automatically (and can then be solved with any off-the-shelf or custom SDE solver), enabling better training and sampling strategies, as we will discuss further in Section 5. Though we mention these connections only briefly here, the SDE perspective has had significant impact on the field. For a more detailed discussion, we recommend Yang Song's blog post [Song, 2021].
$^{20}$ Sections marked "[Optional]" are advanced material, and can be skipped on first read. None of the main sections depend on Optional material.
In the limit as $\Delta t \to 0$, our discrete process corresponds to a zero-drift SDE:
$$dx = \sigma_q \, dw, \tag{25}$$
where $w$ is a Brownian motion. A Brownian motion is a stochastic process with i.i.d. Gaussian increments whose variance scales with $\Delta t$.$^{21}$ Very heuristically, we can think of $dw \sim \lim_{\Delta t \to 0} \sqrt{\Delta t}\, \mathcal{N}(0, 1)$, and thus "derive" (25) by
$$dx = \lim_{\Delta t \to 0} (x_{t+\Delta t} - x_t) = \sigma_q \lim_{\Delta t \to 0} \sqrt{\Delta t}\, \xi = \sigma_q \, dw, \quad \text{where } \xi \sim \mathcal{N}(0, 1).$$
More generally, different variants of diffusion are equivalent to SDEs with different choices of drift and diffusion terms:
{:(26)dx=f(x","t)dt+g(t)dw:}\begin{equation*}
d x=f(x, t) d t+g(t) d w \tag{26}
\end{equation*}
The SDE (25) simply has $f = 0$ and $g = \sigma_q$. This formulation encompasses many other possibilities, though, corresponding to different choices of $f, g$ in the SDE. As we will revisit in Section 5, this flexibility is important for developing effective algorithms. Two important choices made in practice are tuning the noise schedule and scaling $x_t$; together these can help to control the variance of $x_t$, and control how much we focus on different noise levels. Adopting a flexible noise schedule $\{\sigma_t\}$ in place of the fixed schedule $\sigma_t \equiv \sigma_q \sqrt{t}$ corresponds to the SDE [Song et al., 2020]
$$x_t \sim \mathcal{N}(x_0, \sigma_t^2) \iff x_t = x_{t-\Delta t} + \sqrt{\sigma_t^2 - \sigma_{t-\Delta t}^2}\; z_{t-\Delta t} \iff dx = \sqrt{\frac{d}{dt}\sigma^2(t)}\, dw,$$
where $z_{t-\Delta t} \sim \mathcal{N}(0, I)$.
If we also wish to scale each $x_t$ by a factor $s(t)$, Karras et al. [2022] show that this corresponds to the SDE$^{22}$
$$dx = \frac{\dot{s}(t)}{s(t)}\, x\, dt + s(t)\sqrt{\frac{d}{dt}\sigma^2(t)}\, dw.$$
{:[^(22)" As a sketch of how "f" arises, let's "],[" ignore the noise and note that: "],[qquad{:[x_(t)=s(t)x_(0)],[Longleftrightarrowx_(t+Delta t)=(s(t+Delta t))/(s(t))x_(t)],[=x_(t)+(s(t)-s(t+Delta t))/(s(t))x_(t)],[Longleftrightarrow dx//dt=((s^(˙)))/(s)x]:}]:}\begin{aligned}
& { }^{22} \text { As a sketch of how } f \text { arises, let's } \\
& \text { ignore the noise and note that: } \\
& \qquad \begin{aligned}
x_{t} & =s(t) x_{0} \\
\Longleftrightarrow x_{t+\Delta t} & =\frac{s(t+\Delta t)}{s(t)} x_{t} \\
& =x_{t}+\frac{s(t)-s(t+\Delta t)}{s(t)} x_{t} \\
\Longleftrightarrow d x / d t & =\frac{\dot{s}}{s} x
\end{aligned}
\end{aligned}
Reverse-Time SDE
The time-reversal of an SDE runs the process backward in time. Reverse-time SDEs are the continuous-time analog of samplers like DDPM. A deep result due to Anderson [1982] (and nicely re-derived in Winkler [2021]) states that the time-reversal of SDE (26) is given by:
$$dx = \left(f(x, t) - g(t)^2 \nabla_x \log p_t(x)\right) dt + g(t)\, d\bar{w}. \tag{27}$$
That is, SDE (27) tells us how to run any SDE of the form (26) backward in time! This means that we don't have to re-derive the reversal in each case, and we can choose any SDE solver to yield a practical sampler. But nothing is free: we still cannot use (27) directly to sample backward, since the term $\nabla_x \log p_t(x)$, which is in fact the score that previously appeared in Equation (18), is unknown in general, since it depends on $p_t$. However, if we can learn the score, then we can solve the reverse SDE. This is analogous to discrete diffusion, where the forward process is easy to model (it just adds noise), while the reverse process must be learned.
Let us take a moment to discuss the score, $\nabla_x \log p_t(x)$, which plays a central role. Intuitively, since the score "points toward higher probability", it helps to reverse the diffusion process, which "flattens out" the probability as it runs forward. The score is also related to the conditional expectation of $x_0$ given $x_t$. Recall that in the discrete case
$$\sigma_q^2 \Delta t\, \nabla \log p_t(x_t) = \mathbb{E}[x_{t-\Delta t} - x_t \mid x_t] = \frac{\Delta t}{t}\, \mathbb{E}[x_0 - x_t \mid x_t]$$
(by Equations (18) and (23)).
Similarly, in the continuous case we have$^{23}$
$$\sigma_q^2\, \nabla_x \log p_t(x_t) = \frac{1}{t}\, \mathbb{E}[x_0 - x_t \mid x_t]. \tag{28}$$
Returning to the reverse SDE, we can show that its discretization yields the DDPM sampler of Claim 1 as a special case. The reversal of the simple SDE (25) is:
$$
\begin{aligned}
dx &= -\sigma_q^2 \nabla_x \log p_t(x)\, dt + \sigma_q\, d\bar{w} \qquad &(29)\\
&= -\frac{1}{t}\, \mathbb{E}[x_0 - x_t \mid x_t]\, dt + \sigma_q\, d\bar{w}, \qquad &(30)
\end{aligned}
$$
which is exactly the stochastic (DDPM) sampler derived in Claim 1.
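Concretely, one Euler-Maruyama step of Equation (30), stepping backward from $t$ to $t - \Delta t$ with an estimate `x0_pred` of $\mathbb{E}[x_0 \mid x_t]$, might look like this (a sketch with hypothetical argument names):

```python
import numpy as np

def reverse_sde_step(x_t, t, x0_pred, sigma_q, dt, rng):
    """Euler-Maruyama discretization of the reverse SDE (Eq. 30). By
    Eq. (23) the mean equals mu_{t-dt}(x_t), so this is the DDPM update."""
    mean = x_t + (dt / t) * (x0_pred - x_t)    # = E[x_{t - dt} | x_t]
    noise = sigma_q * np.sqrt(dt) * rng.standard_normal(np.shape(x_t))
    return mean + noise
```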
3 Deterministic Sampling: DDIM
We will now show a deterministic reverse sampler for Gaussian diffusion, one which appears similar to the stochastic sampler of the previous section but is conceptually quite different. This sampler is equivalent to the DDIM$^{24}$ update of Song et al. [2021], adapted to our simplified setting.
We consider the same Gaussian diffusion setup as the previous section, with the joint distribution $(x_0, x_{\Delta t}, \ldots, x_1)$ and conditional expectation function $\mu_t(z) := \mathbb{E}[x_t \mid x_{t+\Delta t} = z]$. The reverse sampler is defined below, and listed explicitly in Pseudocode 3.

Algorithm 2: Deterministic Reverse Sampler (DDIM-like)
For input sample $x_t$ and timestep $t$, output:
$$\widehat{x}_{t-\Delta t} \leftarrow x_t + \lambda\left(\mu_{t-\Delta t}(x_t) - x_t\right), \tag{33}$$

where $\lambda := \left(\frac{\sigma_t}{\sigma_{t-\Delta t} + \sigma_t}\right)$ and $\sigma_t \equiv \sigma_q \sqrt{t}$ from Equation (12).
How do we show that this defines a valid reverse sampler? Since Algorithm 2 is deterministic, it does not make sense to argue that it samples from $p(x_{t-\Delta t} \mid x_t)$, as we argued for the DDPM-like stochastic sampler. Instead, we will directly show that Equation (33) implements a valid transport map between the marginal distributions $p_t$ and $p_{t-\Delta t}$. That is, if we let $F_t$ be the update of Equation (33):
$$F_t(z) := z + \lambda\left(\mu_{t-\Delta t}(z) - z\right), \tag{35}$$
then we claim that $F_t \sharp p_t = p_{t-\Delta t}$.$^{25}$
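Here is a minimal sketch of one Algorithm 2 step, where `mu_prev` stands for the learned estimate of $\mathbb{E}[x_{t-\Delta t} \mid x_t]$:

```python
import numpy as np

def ddim_step(x_t, t, mu_prev, sigma_q, dt):
    """One deterministic DDIM step (Eq. 33): move x_t a fraction lambda
    of the way toward the predicted mean mu_prev."""
    sigma_t = sigma_q * np.sqrt(t)
    sigma_prev = sigma_q * np.sqrt(t - dt)
    lam = sigma_t / (sigma_prev + sigma_t)
    return x_t + lam * (mu_prev - x_t)
```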
Proof overview: The usual way to prove this is to use tools from stochastic calculus, but we'll present an elementary derivation. Our strategy will be to first show that Algorithm 2 is correct in the simplest case of a point-mass distribution, and then lift this result to full distributions by marginalizing appropriately. For the experts, this is similar to "flow-matching" proofs.
3.1 Case 1: Single Point
Let's first understand the simple case where the target distribution $p_0$ is a single point mass in $\mathbb{R}^d$. Without loss of generality$^{26}$, we can assume the point is at $x_0 = 0$. Is Algorithm 2 correct in this case?
$^{24}$ DDIM stands for Denoising Diffusion Implicit Models, which reflects a perspective used in the original derivation of Song et al. [2021]. Our derivation follows a different perspective, and the "implicit" aspect will not be important to us.
$^{25}$ The notation $F \sharp p$ means the distribution of $\{F(x)\}_{x \sim p}$. This is called the pushforward of $p$ by the function $F$.
To reason about correctness, we want to consider the distributions of $x_t$ and $x_{t-\Delta t}$ for an arbitrary step $t$. According to the diffusion forward process (Equation 11), at time $t$ the relevant random variables are$^{27}$
$$x_{t-\Delta t} \sim \mathcal{N}(0, \, \sigma_{t-\Delta t}^2), \qquad x_t = x_{t-\Delta t} + \mathcal{N}(0, \, \sigma_q^2 \Delta t).$$
The marginal distribution of $x_{t-\Delta t}$ is $p_{t-\Delta t} = \mathcal{N}(0, \sigma_{t-\Delta t}^2)$, and the marginal distribution of $x_t$ is $p_t = \mathcal{N}(0, \sigma_t^2)$.
Let us first find some deterministic function $G_t: \mathbb{R}^d \to \mathbb{R}^d$ such that $G_t \sharp p_t = p_{t-\Delta t}$. There are many possible functions which will work$^{28}$, but this is the obvious one:
$$G_t(z) := \left(\frac{\sigma_{t-\Delta t}}{\sigma_t}\right) z. \tag{37}$$
The function $G_t$ above simply rescales the Gaussian distribution $p_t$ to match the variance of the Gaussian distribution $p_{t-\Delta t}$. It turns out this $G_t$ is exactly equivalent to the step $F_t$ taken by Algorithm 2, which we will now show.
Claim 3. When the target distribution is a point mass $p_0 = \delta_0$, the update $F_t$ (as defined in Equation 35) is equivalent to the scaling $G_t$ (as defined in Equation 37):
$$F_t \equiv G_t. \tag{38}$$
Thus Algorithm 2 defines a reverse sampler for target distribution $p_0 = \delta_0$.
Proof. To apply $F_t$, we need to compute $\mathbb{E}[x_{t-\Delta t} \mid x_t]$ for our simple distribution. Since $(x_{t-\Delta t}, x_t)$ are jointly Gaussian, this is$^{29}$
$$\mathbb{E}[x_{t-\Delta t} \mid x_t] = \frac{\sigma_{t-\Delta t}^2}{\sigma_t^2}\, x_t. \tag{39}$$
Therefore:
$$
\begin{aligned}
F_t(x_t) &= x_t + \lambda\left(\mathbb{E}[x_{t-\Delta t} \mid x_t] - x_t\right) \qquad &\text{by definition of } F_t\\
&= x_t + \frac{\sigma_t}{\sigma_{t-\Delta t} + \sigma_t}\left(\frac{\sigma_{t-\Delta t}^2}{\sigma_t^2} - 1\right) x_t \qquad &\text{by definition of } \lambda \text{ and Equation (39)}\\
&= x_t - \frac{\sigma_t - \sigma_{t-\Delta t}}{\sigma_t}\, x_t = \left(\frac{\sigma_{t-\Delta t}}{\sigma_t}\right) x_t = G_t(x_t).
\end{aligned}
$$
We therefore conclude that Algorithm 2 is a correct reverse sampler, since it is equivalent to $G_t$, and $G_t$ is valid.
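A quick numerical sanity check of Claim 3 (all constants illustrative):

```python
import numpy as np

sigma_q, t, dt = 1.0, 0.5, 0.01
s_t, s_prev = sigma_q * np.sqrt(t), sigma_q * np.sqrt(t - dt)
lam = s_t / (s_prev + s_t)

z = np.linspace(-3.0, 3.0, 7)       # arbitrary conditioning values x_t
mu = (s_prev**2 / s_t**2) * z       # E[x_{t - dt} | x_t = z], Eq. (39)
F = z + lam * (mu - z)              # DDIM update F_t (Eq. 35)
G = (s_prev / s_t) * z              # scaling map G_t (Eq. 37)
assert np.allclose(F, G)            # F_t == G_t, as Claim 3 states
```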
The correctness of Algorithm 2 still holds$^{30}$ if $x_0$ is an arbitrary point instead of $x_0 = 0$, since everything is translationally symmetric.
$^{27}$ We omit the identity matrix in these covariances for notational simplicity. The reader may assume dimension $d = 1$ without loss of generality.
$^{28}$ For example, we can always add a rotation around the origin to any valid map.
$^{29}$ Recall that the conditional expectation of two jointly Gaussian random variables $(X, Y)$ is $\mathbb{E}[X \mid Y = y] = \mu_X + \Sigma_{XY}\Sigma_{YY}^{-1}(y - \mu_Y)$, where $\mu_X, \mu_Y$ are the respective means, and $\Sigma_{XY}, \Sigma_{YY}$ are the cross-covariance of $(X, Y)$ and the covariance of $Y$. Since $X = x_{t-\Delta t}$ and $Y = x_t$ are centered at $0$, we have $\mu_X = \mu_Y = 0$. For the covariance term, since $x_t = x_{t-\Delta t} + \eta$ we have $\Sigma_{XY} = \mathbb{E}[x_t x_{t-\Delta t}^T] = \mathbb{E}[x_{t-\Delta t} x_{t-\Delta t}^T] = \sigma_{t-\Delta t}^2 I_d$. Similarly, $\Sigma_{YY} = \mathbb{E}[x_t x_t^T] = \sigma_t^2 I_d$.
3.2 Velocity Fields and Gases
Before we move on, it will be helpful to think of the DDIM update as equivalent to a velocity field, which moves points at time $t$ to their positions at time $(t - \Delta t)$. Specifically, define the vector field
$$v_t(z) := \frac{F_t(z) - z}{\Delta t} = \frac{\lambda}{\Delta t}\left(\mu_{t-\Delta t}(z) - z\right), \tag{40}$$
so that the DDIM update (33) can be written as $\widehat{x}_{t-\Delta t} = x_t + v_t(x_t)\, \Delta t$.
The physical intuition for $v_t$ is: imagine a gas of non-interacting particles, with density field given by $p_t$. Then, suppose a particle at position $z$ moves in the direction $v_t(z)$. The resulting gas will have density field $p_{t-\Delta t}$. We write this process as
$$p_t \xrightarrow{v_t} p_{t-\Delta t}. \tag{41}$$
In the limit of small stepsize $\Delta t$, speaking informally, we can think of $v_t$ as a velocity field, which specifies the instantaneous velocity of particles moving according to the DDIM algorithm.
As a concrete example, if the target distribution is $p_0 = \delta_{x_0}$, as in Section 3.1, then the velocity field of DDIM is
$$v_t(x_t) = \left(\frac{\sigma_t - \sigma_{t-\Delta t}}{\sigma_t}\right)\frac{(x_0 - x_t)}{\Delta t},$$
which is a vector field that always points towards the initial point $x_0$ (see Figure 4).
3.3 Case 2: Two Points
Now let us show that Algorithm 2 is correct when the target distribution is a mixture of two points:
$$p_0 = \frac{1}{2}\,\delta_a + \frac{1}{2}\,\delta_b, \tag{43}$$
for some $a, b \in \mathbb{R}^d$. According to the diffusion forward process, the distribution at time $t$ will be a mixture of Gaussians$^{31}$:
$$p_t = \frac{1}{2}\,\mathcal{N}(a, \sigma_t^2) + \frac{1}{2}\,\mathcal{N}(b, \sigma_t^2). \tag{44}$$
We want to show that with these distributions $p_t$, the DDIM velocity field $v_t$ (of Equation 40) transports $p_t \xrightarrow{v_t} p_{t-\Delta t}$.
Let us first try to construct some velocity field $v_t^*$ such that $p_t \xrightarrow{v_t^*} p_{t-\Delta t}$. From our result in Section 3.1 (the fact that the DDIM update works for single points), we already know velocity fields, from Equation (33), which transport each mixture component $\{a, b\}$ individually. That is, we know the velocity field $v_t^{[a]}$ defined as$^{32}$
$$v_t^{[a]}(x_t) := \left(\frac{\sigma_t - \sigma_{t-\Delta t}}{\sigma_t}\right)\frac{(a - x_t)}{\Delta t}, \quad \text{which transports} \quad \mathcal{N}(a, \sigma_t^2) \xrightarrow{v_t^{[a]}} \mathcal{N}(a, \sigma_{t-\Delta t}^2), \tag{45}$$
and the analogous field $v_t^{[b]}$ for the component around $b$.

Figure 4: Velocity field $v_t$ when $p_0 = \delta_{x_0}$, overlaid on the Gaussian distribution $p_t$.

$^{31}$ Linearity of the forward process (with respect to $p_0$) was important here. That is, roughly speaking, diffusing a distribution is equivalent to diffusing each individual point in that distribution independently; the points don't interact.
We may be tempted to just take the average velocity field $\left(v_t^* = 0.5\, v_t^{[a]} + 0.5\, v_t^{[b]}\right)$, but this is incorrect. The correct combined velocity $v_t^*$ is a weighted average of the individual velocity fields, weighted by their corresponding density fields$^{33}$. Explicitly, the weight for $v_t^{[a]}$ at a point $x_t$ is the probability that $x_t$ was generated from initial point $x_0 = a$, rather than $x_0 = b$:
$$v_t^*(x_t) = p(x_0 = a \mid x_t)\, v_t^{[a]}(x_t) + p(x_0 = b \mid x_t)\, v_t^{[b]}(x_t) = \frac{\mathcal{N}(x_t;\, a, \sigma_t^2)\, v_t^{[a]}(x_t) + \mathcal{N}(x_t;\, b, \sigma_t^2)\, v_t^{[b]}(x_t)}{\mathcal{N}(x_t;\, a, \sigma_t^2) + \mathcal{N}(x_t;\, b, \sigma_t^2)}. \tag{49}$$
To be intuitively convinced of this$^{34}$, consider the corresponding question about gases, illustrated in Figure 5. Suppose we have two overlapping gases: a red gas with density $\mathcal{N}(a, \sigma^2)$ and velocity $v_t^{[a]}$, and a blue gas with density $\mathcal{N}(b, \sigma^2)$ and velocity $v_t^{[b]}$. We want to know: what is the effective velocity of the combined gas (as if we saw only in grayscale)? We should clearly take a weighted average of the individual gas velocities, weighted by their respective densities, just as in Equation (49).
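In code, the combined field of Equation (49) for the two-point mixture looks like this (a 1-D sketch; the function name and arguments are illustrative):

```python
import numpy as np

def combined_velocity(x_t, t, a, b, sigma_q, dt):
    """Effective DDIM velocity v*_t for p_0 = (delta_a + delta_b)/2:
    a density-weighted average of the two single-point velocities (Eq. 49)."""
    s_t = sigma_q * np.sqrt(t)
    s_prev = sigma_q * np.sqrt(t - dt)
    v_a = (s_t - s_prev) / (s_t * dt) * (a - x_t)   # field pointing toward a
    v_b = (s_t - s_prev) / (s_t * dt) * (b - x_t)   # field pointing toward b
    w_a = np.exp(-((x_t - a) ** 2) / (2 * s_t**2))  # proportional to N(x_t; a, s_t^2)
    w_b = np.exp(-((x_t - b) ** 2) / (2 * s_t**2))  # proportional to N(x_t; b, s_t^2)
    return (w_a * v_a + w_b * v_b) / (w_a + w_b)
```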
We have now solved the main subproblem of this section: we have found one particular vector field $v_t^*$ which transports $p_t$ to $p_{t-\Delta t}$, for our two-point distribution $p_0$. It remains to show that this $v_t^*$ is equivalent to the velocity field of Algorithm 2 ($v_t$ from Equation 40).
$^{32}$ Pay careful attention to which distributions we take expectations over! The expectation in Equation (45) is w.r.t. the single-point distribution $\delta_a$, but our definition of the DDIM algorithm, and its vector field in Equation (40), are always w.r.t. the target distribution. In our case, the target distribution is $p_0$ of Equation (43).
$^{33}$ Note that we can write the density $\mathcal{N}(x_t; a, \sigma_t^2)$ as $p(x_t \mid x_0 = a)$.
$^{34}$ The time step must be small enough for this analogy to hold, so that the DDIM updates are essentially infinitesimal steps. Otherwise, if the step size is large, it may not be possible to combine the two transport maps with "local" (i.e., pointwise) operations alone.
Figure 5: Illustration of combining the velocity fields of two gases. Left: the density and velocity fields of two independent gases (in red and blue). Right: the effective density and velocity field of the combined gas, including streamlines.
To show this, first notice that the individual vector field $v_t^{[a]}$ can be written as a conditional expectation. Using the definition in Equation (45),$^{35}$
$$v_t^{[a]}(x_t) = \left(\frac{\sigma_t - \sigma_{t-\Delta t}}{\sigma_t \Delta t}\right) \mathbb{E}[x_0 - x_t \mid x_t, \, x_0 = a],$$
so the density-weighted combination (49) collapses, by the tower rule, to
$$v_t^*(x_t) = \left(\frac{\sigma_t - \sigma_{t-\Delta t}}{\sigma_t \Delta t}\right) \mathbb{E}[x_0 - x_t \mid x_t] = \frac{\lambda}{\Delta t}\left(\mathbb{E}[x_{t-\Delta t} \mid x_t] - x_t\right) = v_t(x_t),$$
where the middle equality uses Claim 2 (Equation 23) and the identity $\frac{\sigma_t - \sigma_{t-\Delta t}}{\sigma_t} = \frac{\lambda \Delta t}{t}$,
and where all expectations are w.r.t. the distribution $x_0 \sim \frac{1}{2}\delta_a + \frac{1}{2}\delta_b$. Thus, the combined velocity field $v_t^*$ is exactly the velocity field $v_t$ given by the updates of Algorithm 2, so Algorithm 2 is a correct reverse sampler for our two-point mixture distribution.
3.4 Case 3: Arbitrary Distributions
Now that we know how to handle two points, we can generalize this idea to arbitrary distributions of $x_0$. We will not go into details here, because the general proof will be subsumed by the subsequent section.
It turns out that our overall proof strategy for Algorithm 2 can be generalized significantly to other types of diffusions, without much work. This yields the idea of flow matching, which we will see in the following section. Once we develop the machinery of flows, it is actually straightforward to derive DDIM directly from the simple single-point scaling algorithm of Equation (37): see Appendix B.5.
3.5 The Probability Flow ODE [Optional]
Finally, we generalize our discrete-time deterministic sampler to an ordinary differential equation (ODE) called the probability flow ODE [Song et al., 2020]. The following section builds on our discussion of SDEs as the continuous limit of diffusion in Section 2.4. Just as the reverse-time SDEs of Section 2.4 offered a flexible continuous-time generalization of discrete stochastic samplers, so we will see that discrete deterministic samplers generalize to ODEs. The ODE formulation offers both a useful theoretical lens through which to view diffusion, as well as practical advantages, like the opportunity to choose from a variety of off-the-shelf and custom ODE solvers to improve sampling (like the popular DPM++ method, as discussed in Section 5).
Recall the general SDE (26) from Section 2.4:
$$dx = f(x, t)\, dt + g(t)\, dw.$$
Song et al. [2020] showed that it is possible to convert this SDE into a deterministic equivalent called the probability flow ODE (PF-ODE):$^{36}$
$$dx = \left(f(x, t) - \frac{1}{2}\, g(t)^2\, \nabla_x \log p_t(x)\right) dt. \tag{56}$$
SDE (26) and ODE (56) are equivalent in the sense that trajectories obtained by solving the PF-ODE have the same marginal distributions as the SDE trajectories at every point in time$^{37}$. However, note that the score appears here again, as it did in the reverse SDE (27); just as for the reverse SDE, we must learn the score to make the ODE (56) practically useful.
Just as DDPM was a (discretized) special case of the reverse-time SDE (27), so DDIM can be seen as a (discretized) special case of the PF-ODE (56). Recall from Section 2.4 that the simple diffusion we have been studying corresponds to the SDE (25), with $f = 0$ and $g = \sigma_q$. The corresponding ODE is
$$dx = -\frac{1}{2}\, \sigma_q^2\, \nabla_x \log p_t(x)\, dt. \tag{57}$$
Noting that $\lim_{\Delta t \to 0}\left(\frac{\sigma_t}{\sigma_{t-\Delta t} + \sigma_t}\right) = \frac{1}{2}$, we recover the deterministic (DDIM) sampler (33).
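As a sketch, one Euler step of this ODE, written via the score identity of Equation (28) with an estimate `x0_pred` of $\mathbb{E}[x_0 \mid x_t]$ (argument names hypothetical):

```python
def pf_ode_step(x_t, t, x0_pred, dt):
    """Euler step of the PF-ODE (Eq. 57) from t to t - dt. Using Eq. (28),
    sigma_q^2 * score = (x0_pred - x_t) / t, so sigma_q cancels out."""
    dx_dt = -0.5 * (x0_pred - x_t) / t   # right-hand side of Eq. (57)
    return x_t - dx_dt * dt              # step backward in time
```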
3.6 Discussion: DDPM vs DDIM
The two reverse samplers defined above (DDPM and DDIM) are conceptually significantly different: one is deterministic, and the other stochastic. To review, these samplers use the following strategies:
DDPM ideally implements a stochastic map $F_t$, such that the output $F_t(x_t)$ is, pointwise, a sample from the conditional distribution $p(x_{t-\Delta t} \mid x_t)$.
DDIM ideally implements a deterministic map $F_t$, such that the output $F_t(x_t)$ is marginally distributed as $p_{t-\Delta t}$. That is, $F_t \sharp p_t = p_{t-\Delta t}$.
Although they both happen to take steps in the same direction$^{38}$ (given the same input $x_t$), the two algorithms end up evolving very differently. To see this, let's consider how each sampler ideally behaves when started from the same initial point $x_1$ and iterated to completion.
$^{38}$ Steps proportional to $\left(\mu_{t-\Delta t}(x_t) - x_t\right)$.
DDPM will ideally produce a sample from $p(x_0 \mid x_1)$. If the forward process mixes sufficiently (i.e., for large $\sigma_q$ in our setup), then the final point $x_1$ will be nearly independent of the initial point. Thus $p(x_0 \mid x_1) \approx p(x_0)$, so the distribution output by the ideal DDPM will not depend at all$^{39}$ on the starting point $x_1$. In contrast, DDIM is deterministic, so it will always produce a fixed value for a given $x_1$, and thus will depend very strongly on $x_1$.
The picture to have in mind is that DDIM defines a deterministic map $\mathbb{R}^d \to \mathbb{R}^d$, taking samples from a Gaussian distribution to our target distribution. At this level, the DDIM map may sound similar to other generative models; after all, GANs and Normalizing Flows also define maps from Gaussian noise to the true distribution. What is special about the DDIM map is that it is not allowed to be arbitrary: the target distribution $p^*$ exactly determines the ideal DDIM map (which we train models to emulate). This map is "nice"; for example, we expect it to be smooth if our target distribution is smooth. GANs, in contrast, are free to learn any arbitrary mapping between noise and images. This feature of diffusion models may make the learning problem easier in some cases (since it is supervised), or harder in other cases (since there may be easier-to-learn maps which other methods could find).
3.7 Remarks on Generalization
In this tutorial, we have not discussed the learning-theoretic aspects of diffusion models: How do we learn properties of the underlying distribution, given only finite samples and bounded compute? These are fundamental aspects of learning, but are not yet fully understood for diffusion models; it is an active area of research ^(40){ }^{40}.
To appreciate the subtlety here, suppose we learn a diffusion model using the classic strategy of Empirical Risk Minimization (ERM): we sample a finite train set from the underlying distribution, and optimize all regression functions w.r.t. this empirical distribution. The problem is, we should not perfectly minimize the empirical risk, because this would yield a diffusion model which only reproduces the train samples ^(41){ }^{41}.
In general, the diffusion model must be regularized, implicitly or explicitly, to prevent overfitting and memorization of the training data. When we train deep neural networks for use in diffusion models, this regularization often occurs implicitly: factors such as finite model size and optimization randomness prevent the trained model from perfectly memorizing its train set. We will revisit these factors (as sources of error) in Section 5.
This issue of memorizing training data has been seen "in the wild" in diffusion models trained on small image datasets, and it has been observed that memorization decreases as the training set size increases [Somepalli et al., 2023, Gu et al., 2023]. Additionally, memorization has been noted as a potential security and copyright issue for neural networks, as in Carlini et al. [2023], where the authors found they can recover training data from Stable Diffusion with the right prompts.
Figure 6 demonstrates the effect of training set size, showing the DDIM trajectories for a diffusion model trained using a 3-layer ReLU network. We see that the diffusion model trained on $N=10$ samples "memorizes" its train set: its trajectories all collapse to one of the train points, instead of producing the underlying spiral distribution. As we add more samples, the model starts to generalize: the trajectories converge to the underlying spiral manifold. The trajectories also start to become more perpendicular to the underlying manifold, suggesting that the low-dimensional structure is being learned. We also note that in the $N=10$ case where the diffusion model fails, it is not at all obvious that a human would be able to identify the "correct" pattern from these samples, so generalization may be too much to expect.
^{40} We recommend the introductions of Chen et al. [2022] and Chen et al. [2024b] for an overview of recent learning-theoretic results. This line of work includes e.g. De Bortoli et al. [2021], De Bortoli [2022], Lee et al. [2023], Chen et al. [2023, 2024a].
^(41){ }^{41} This is not specific to diffusion models: any perfect generative model of the empirical distribution will always output a uniformly random train point, which is far-from-optimal w.r.t. the true underlying distribution.
Figure 6: The DDIM trajectories (shaded by timestep $t$) for a spiral dataset. We compare the trajectories with 10, 20, and 40 training samples. Note that as we add more training points (moving left to right) the diffusion algorithm begins to learn the underlying spiral, and the trajectories look more perpendicular to the underlying manifold. The network used here is a 3-layer ReLU network with 128 neurons per layer.
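For readers who want to reproduce a toy version of this experiment, here is a compact PyTorch sketch under our own assumptions ($x_{0}$-prediction, $\sigma_{q}=1$, and a hypothetical `make_spiral` data generator); it is an illustration, not the authors' exact setup.

```python
# A 3-layer ReLU net (128 units) trained with x0-prediction on N spiral
# points, under the sigma_t = sqrt(t) schedule. `make_spiral` is our stand-in.
import torch
import torch.nn as nn

def make_spiral(n):
    theta = 3 * torch.pi * torch.rand(n)
    r = theta / (3 * torch.pi)
    return torch.stack([r * torch.cos(theta), r * torch.sin(theta)], dim=1)

net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x0_train = make_spiral(10)                 # N = 10; try 20 and 40 as in Figure 6
for step in range(5000):
    x0 = x0_train[torch.randint(len(x0_train), (256,))]
    t = torch.rand(256, 1)
    xt = x0 + torch.sqrt(t) * torch.randn_like(x0)    # forward-process sample
    pred = net(torch.cat([xt, t], dim=1))             # predict E[x0 | xt]
    loss = ((pred - x0) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```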
4 Flow Matching
We now introduce the framework of flow matching [Peluchetti, 2022, Liu et al., 2022b,a, Lipman et al., 2023, Albergo et al., 2023]. Flow matching can be thought of as a generalization of DDIM, which allows for more flexibility in designing generative models, including for example the rectified flows (sometimes called linear flows) used by Stable Diffusion 3 [Liu et al., 2022a, Esser et al., 2024].
We have actually already seen the main ideas behind flow matching, in our analysis of DDIM in Section 3. At a high level, here is how we constructed a generative model in Section 3:
First, we defined how to generate a single point. Specifically, we constructed vector fields {v_(t)^([a])}_(t)\left\{v_{t}^{[a]}\right\}_{t} which, when applied for all time steps, transported a standard Gaussian distribution to an arbitrary delta distribution delta_(a)\delta_{a}.
Second, we determined how to combine two vector fields into a single effective vector field. This lets us construct a transport from the standard Gaussian to two points (or, more generally, to a distribution over points - our target distribution).
Neither of these steps particularly requires the Gaussian base distribution, or the Gaussian forward process (Equation 1). The second step of combining vector fields, for example, remains identical for any two arbitrary vector fields.
So let's drop all the Gaussian assumptions. Instead, we will begin by thinking at a basic level about how to map between any two points x_(0)x_{0} and x_(1)x_{1}. Then, we see what happens when the two points are sampled from arbitrary distributions pp (data) and qq (base), respectively. We will see that this point of view encompasses DDIM as a special case, but that it is significantly more general.
4.1 Flows
Let us first define the central notion of a flow. A flow is simply a collection of time-indexed vector fields v={v_(t)}_(t in[0,1])v=\left\{v_{t}\right\}_{t \in[0,1]}. We should think of this as the velocity-field v_(t)v_{t} of a gas at each time tt, as we did earlier in Section 3.2. Any flow defines a trajectory taking initial points x_(1)x_{1} to final points x_(0)x_{0}, by transporting the initial point along the velocity fields {v_(t)}\left\{v_{t}\right\}.
Figure 7: Running a flow which generates a spiral distribution (bottom) from an annular distribution (top).
Formally, for a flow $v$ and initial point $x_{1}$, consider the ODE^{42}

$$
\frac{d x_{t}}{d t}=-v_{t}\left(x_{t}\right), \quad \text { with initial condition } x_{1} \text { at } t=1 . \tag{59}
$$

We write

$$
x_{t}=\operatorname{RunFlow}\left(v, x_{1}, t\right)
$$

to denote the solution to the flow ODE (Equation 59) at time $t$, terminating at final point $x_{0}$. That is, RunFlow is the result of transporting point $x_{1}$ along the flow $v$ up to time $t$.
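Here is a minimal sketch of RunFlow by Euler integration (our own; the discrete update follows footnote 42, and `v(x, t)` is an assumed callable implementing $v_{t}(x)$):

```python
import numpy as np

def run_flow(v, x1, t_end=0.0, n_steps=100):
    """Transport x1 along the flow v from t = 1 down to t_end; a discrete
    analogue of RunFlow via the iteration x_{t-dt} <- x_t + v_t(x_t) dt."""
    x, t = np.array(x1, dtype=float), 1.0
    dt = (1.0 - t_end) / n_steps
    for _ in range(n_steps):
        x = x + v(x, t) * dt
        t -= dt
    return x
```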
Just as flows define maps between initial and final points, they also define transports between entire distributions, by "pushing forward" points from the source distribution along their trajectories. If $p_{1}$ is a distribution on initial points^{43}, then applying the flow $v$ yields the distribution on final points^{44}: the law of $\operatorname{RunFlow}\left(v, x_{1}, t=0\right)$ for $x_{1} \sim p_{1}$.
We denote this process as $p_{1} \stackrel{v}{\hookrightarrow} p_{0}$, meaning the flow $v$ transports initial distribution $p_{1}$ to final distribution $p_{0}$.^{45}
The ultimate goal of flow matching is to somehow learn a flow v^(**)v^{*} which transports q↪^(v^(**))pq \stackrel{v^{*}}{\hookrightarrow} p, where pp is the target distribution and qq is some easy-to-sample base distribution (such as a Gaussian). If we had this v^(**)v^{*}, we could generate samples from our target pp by first sampling x_(1)∼qx_{1} \sim q, then running our flow with initial point x_(1)x_{1} and outputting the resulting final point x_(0)x_{0}. The DDIM algorithm of Section 3 was actually a special case ^(46){ }^{46} of this, for a very particular choice of flow v^(**)v^{*}. Now, how do we construct such flows in general?
4.2 Pointwise Flows
Our basic building-block will be a pointwise flow which just transports a single point x_(1)x_{1} to a point x_(0)x_{0}. Intuitively, given an arbitrary path {x_(t)}_(t in[0,1])\left\{x_{t}\right\}_{t \in[0,1]} that connects x_(1)x_{1} to x_(0)x_{0}, a pointwise flow describes this trajectory by giving its velocity v_(t)(x_(t))v_{t}\left(x_{t}\right) at each point x_(t)x_{t} along it (see Figure 8). Formally, a pointwise flow between x_(1)x_{1} and x_(0)x_{0} is any flow {v_(t)}_(t)\left\{v_{t}\right\}_{t} that satisfies Equation 59 with boundary conditions x_(1)x_{1} and x_(0)x_{0} at times t=1,0t=1,0 respectively. We denote such flows as v^([x_(1),x_(0)])v^{\left[x_{1}, x_{0}\right]}. Pointwise flows are not unique: there are many different choices of path between x_(0)x_{0} and x_(1)x_{1}.
4.3 Marginal Flows
Suppose that for all pairs of points $\left(x_{1}, x_{0}\right)$, we can construct an explicit pointwise flow $v^{\left[x_{1}, x_{0}\right]}$ that transports a source point $x_{1}$ to target point $x_{0}$.
^{42} The corresponding discrete-time analog is the iteration $x_{t-\Delta t} \leftarrow x_{t}+v_{t}\left(x_{t}\right) \Delta t$, starting at $t=1$ with initial point $x_{1}$.
^(43){ }^{43} Notational warning: Most of the flow matching literature uses a reversed time convention, so t=1t=1 is the target distribution. We let t=0t=0 be the target distribution to be consistent with the DDPM convention.
^{44} We could equivalently write this as the pushforward $\operatorname{RunFlow}(v, \cdot, 0) \sharp p_{1}$.
^{45} In our gas analogy, this means if we start with a gas of particles distributed according to $p_{1}$, and each particle follows the trajectory defined by $v$, then the final distribution of particles will be $p_{0}$.
^(46){ }^{46} To connect to diffusion: The continuous-time limit of DDIM (58) is a flow with v_(t)(x_(t))=(1)/(2t)E[x_(0)-x_(t)∣x_(t)]v_{t}\left(x_{t}\right)=\frac{1}{2 t} \mathbb{E}\left[x_{0}-x_{t} \mid x_{t}\right]. The base distribution p_(1)p_{1} is Gaussian. DDIM Sampling (algorithm 3) is a discretized method for evaluating RunFlow. DDPM Training (algorithm 2) is a method for learning v^(***)v^{\star} - but it relies on the Gaussian structure and differs somewhat from the flow-matching algorithm we will present in this chapter.
Figure 8: A pointwise flow v_(t)^([x_(1),x_(0)])v_{t}^{\left[x_{1}, x_{0}\right]} transporting x_(1)x_{1} to x_(0)x_{0}.
For example, we could let $x_{t}$ travel along a straight line from $x_{1}$ to $x_{0}$, or along any other explicit path. Recall that in our gas analogy, this corresponds to an individual particle that moves between $x_{1}$ and $x_{0}$. Now, let us try to set up a collection of individual particles, such that at $t=1$ the particles are distributed according to $q$, and at $t=0$ they are distributed according to $p$. This is actually easy to do: we can pick any coupling^{47} $\Pi_{q, p}$ between $q$ and $p$, and consider particles corresponding to the pointwise flows $\left\{v^{\left[x_{1}, x_{0}\right]}\right\}_{\left(x_{1}, x_{0}\right) \sim \Pi_{q, p}}$. This gives us a distribution over pointwise flows (i.e. a collection of particle trajectories) with the desired behavior in aggregate.
We would like to combine all of these pointwise flows somehow, to get a single flow $v^{*}$ that implements the same transport between distributions^{48}. Our previous discussion^{49} in Section 3 tells us how to do this: to determine the effective velocity $v_{t}^{*}\left(x_{t}\right)$, we should take a weighted average of all individual particle velocities $v_{t}^{\left[x_{1}, x_{0}\right]}$, weighted by the probability that a particle at $x_{t}$ was generated by the pointwise flow $v^{\left[x_{1}, x_{0}\right]}$. The final result is^{50}

$$
v_{t}^{*}\left(x_{t}\right)=\mathbb{E}\left[v_{t}^{\left[x_{1}, x_{0}\right]}\left(x_{t}\right) \mid x_{t}\right], \tag{64}
$$

where the expectation is w.r.t. the joint distribution of $\left(x_{1}, x_{0}, x_{t}\right)$ induced by sampling $\left(x_{1}, x_{0}\right) \sim \Pi_{q, p}$ and letting $x_{t} \leftarrow \operatorname{RunFlow}\left(v^{\left[x_{1}, x_{0}\right]}, x_{1}, t\right)$.
At this point, we have a "solution" to our generative modeling problem in principle, but some important questions remain to make it useful in practice:
Which pointwise flow $v^{\left[x_{1}, x_{0}\right]}$ and coupling $\Pi_{q, p}$ should we choose?
How do we compute the marginal flow v^(**)v^{*} ? We cannot compute it from Equation (64) directly, because this would require sampling from p(x_(0)∣x_(t))p\left(x_{0} \mid x_{t}\right) for a given point x_(t)x_{t}, which may be complicated in general.
We answer these in the next sections.
4.4 A Simple Choice of Pointwise Flow
We need explicit choices of: pointwise flow, base distribution $q$, and coupling $\Pi_{q, p}$. There are many simple choices which would work^{51}.
The base distribution $q$ can be essentially any easy-to-sample distribution. Gaussians are a popular choice, but certainly not the only one; Figure 7 uses an annular base distribution, for example. As for the coupling $\Pi_{q, p}$ between the base and target distribution, the simplest choice is the independent coupling, i.e. sampling from $p$ and $q$ independently.
^{47} A coupling $\Pi_{q, p}$ between $q$ and $p$ specifies how to jointly sample pairs $\left(x_{1}, x_{0}\right)$ of source and target points, such that $x_{0}$ is marginally distributed as $p$, and $x_{1}$ as $q$. The most basic coupling is the independent coupling, which corresponds to sampling $x_{1}, x_{0}$ independently.
^(48){ }^{48} Why would we like this? As we will see later, it simplifies our learning problem: instead of having to learn the distribution of all the individual trajectories, we can instead just learn one velocity field representing their bulk evolution.
^(49){ }^{49} Compare to Equation (49) in Section 3. A formal statement of how to combine flows is given in Appendix B.4.
^{50} An alternate way of viewing this result at a high level is: we start with pointwise flows $v^{\left[x_{1}, x_{0}\right]}$ which transport delta distributions, $\delta_{x_{1}} \hookrightarrow \delta_{x_{0}}$. Equation (64) then gives us a way of averaging these flows over $x_{1}$ and $x_{0}$, to get a flow $v^{*}$ transporting $q \hookrightarrow p$.
Figure 9: A marginal flow with linear pointwise flows, base distribution $q$ uniform over an annulus, and target distribution $p$ equal to a Dirac-delta at $x_{0}$. (This can also be thought of as the average over $x_{1}$ of the pointwise linear flows from $x_{1} \sim q$ to a fixed $x_{0}$.) Gray arrows depict the flow field at different times $t$. The leftmost $(t=1)$ plot shows samples from the base distribution $q$. Subsequent plots show these samples transported by the flow at intermediate times $t$. The final $(t=0)$ plot shows all points collapsed to the target $x_{0}$. This particular $x_{0}$ happens to be one point on the spiral distribution of Figure 7.
For a pointwise flow, arguably the simplest construction is a linear pointwise flow:

$$
v_{t}^{\left[x_{1}, x_{0}\right]}\left(x_{t}\right)=x_{0}-x_{1}, \tag{65}
$$

with corresponding trajectories

$$
x_{t}=t x_{1}+(1-t) x_{0}, \tag{66}
$$

which simply linearly interpolate between $x_{1}$ and $x_{0}$ (and correspond to the choice made in Liu et al. [2022a]). In Figure 9 we visualize a marginal flow composed of linear pointwise flows, with the same annular base distribution $q$ as in Figure 7, and target distribution equal to a point-mass $\left(p=\delta_{x_{0}}\right)^{52}$.
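As a quick check (ours), the trajectory in Equation (66) indeed has the constant velocity of Equation (65) under the reverse-time update $x_{t-\Delta t}=x_{t}+v_{t}\left(x_{t}\right) \Delta t$:

\begin{aligned}
x_{t-\Delta t} &=(t-\Delta t)\, x_{1}+(1-t+\Delta t)\, x_{0} \\
&=x_{t}+\left(x_{0}-x_{1}\right) \Delta t, \quad \text { so } \quad v_{t}^{\left[x_{1}, x_{0}\right]}\left(x_{t}\right)=x_{0}-x_{1} .
\end{aligned}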
4.5 Flow Matching
Now, the only remaining problem is that naively evaluating $v^{*}$ using Equation (64) requires sampling from $p\left(x_{0} \mid x_{t}\right)$, which we cannot do directly. Fortunately, since $v_{t}^{*}\left(x_{t}\right)$ is a conditional expectation,

$$
v_{t}^{*}\left(x_{t}\right)=\mathbb{E}\left[v_{t}^{\left[x_{1}, x_{0}\right]}\left(x_{t}\right) \mid x_{t}\right], \tag{67}
$$

we can learn it by standard L2 regression:^{53}

$$
v_{t}^{*}=\underset{f: \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}}{\operatorname{argmin}} \mathbb{E}\left\|f\left(x_{t}\right)-v_{t}^{\left[x_{1}, x_{0}\right]}\left(x_{t}\right)\right\|_{2}^{2} \tag{68}
$$
^{52} A marginal flow with a point-mass target distribution (equivalently, the average of pointwise flows over the base distribution only) is sometimes called a (one-sided) conditional flow [Lipman et al., 2023].
^(53){ }^{53} This result is analogous to Theorem 2 in Lipman et al. [2023], but ours is for a two-sided flow.
(by using the generic fact that argmin_(f)E||f(x)-y||^(2)=E[y∣x]\operatorname{argmin}_{f} \mathbb{E}\|f(x)-y\|^{2}=\mathbb{E}[y \mid x] ).
In words, Equation (68) says that to compute the loss of a model f_(theta)f_{\theta} for a fixed time tt, we should:
Sample source and target points (x_(1),x_(0))\left(x_{1}, x_{0}\right) from their joint distribution.
Compute the point $x_{t}$ deterministically, by running^{54} the pointwise flow: $x_{t} \leftarrow \operatorname{RunFlow}\left(v^{\left[x_{1}, x_{0}\right]}, x_{1}, t\right)$.
^{54} If we chose linear pointwise flows, for example, this would mean $x_{t} \leftarrow t x_{1}+(1-t) x_{0}$, via Equation (66).
Evaluate the model's prediction at $x_{t}$, as $f_{\theta}\left(x_{t}\right)$. Evaluate the deterministic vector $v_{t}^{\left[x_{1}, x_{0}\right]}\left(x_{t}\right)$. Then compute the L2 loss between these two quantities.
To sample from the trained model (our estimate of $v_{t}^{*}$), we first sample a source point $x_{1} \sim q$, then transport it along the learnt flow to a target sample $x_{0}$. Pseudocode listings 4 and 5 give the explicit procedures for training and sampling from flow-based models (including the special case of linear flows for concreteness, matching Algorithm 1 in Liu et al. [2022a]).
Summary
To summarize, here is how to learn a flow-matching generative model for target distribution pp.
The Ingredients. We first choose:
A source distribution qq, from which we can efficiently sample (e.g. a standard Gaussian).
A coupling Pi_(q,p)\Pi_{q, p} between qq and pp, which specifies a way to jointly sample a pair of source and target points (x_(1),x_(0))\left(x_{1}, x_{0}\right) with marginals qq and pp respectively. A standard choice is the independent coupling, i.e. sample x_(1)∼qx_{1} \sim q and x_(0)∼px_{0} \sim p independently.
For all pairs of points (x_(1),x_(0))\left(x_{1}, x_{0}\right), an explicit pointwise flow v^([x_(1),x_(0)])v^{\left[x_{1}, x_{0}\right]} which transports x_(1)x_{1} to x_(0)x_{0}. We must be able to efficiently compute the vector field v_(t)^([x_(1),x_(0)])v_{t}^{\left[x_{1}, x_{0}\right]} at all points.
These ingredients determine, in theory, a marginal vector field $v^{*}$ which transports $q$ to $p$:

$$
v_{t}^{*}\left(x_{t}\right)=\mathbb{E}\left[v_{t}^{\left[x_{1}, x_{0}\right]}\left(x_{t}\right) \mid x_{t}\right] . \tag{69}
$$
Training. Train a neural network $f_{\theta}$ by backpropagating the stochastic loss function computed by Pseudocode 4. The optimal function for this expected loss is $f_{\theta}\left(x_{t}, t\right)=v_{t}^{*}\left(x_{t}\right)$.
Sampling. Run Pseudocode 5 to generate a sample $x_{0}$ from (approximately) the target distribution $p$.
Pseudocode 4: Flow-matching train loss (generic pointwise flow [or linear flow])
Input: Neural network $f_{\theta}$
Data: Sample-access to coupling $\Pi_{q, p}$; pointwise flows $\left\{v_{t}^{\left[x_{1}, x_{0}\right]}\right\}$ for all $x_{1}, x_{0}$.
1: $\left(x_{1}, x_{0}\right) \leftarrow \operatorname{Sample}\left(\Pi_{q, p}\right)$
2: $t \leftarrow \operatorname{Unif}[0,1]$
3: $x_{t} \leftarrow \operatorname{RunFlow}\left(v^{\left[x_{1}, x_{0}\right]}, x_{1}, t\right)$ [or $x_{t} \leftarrow t x_{1}+(1-t) x_{0}$ for linear flows]
4: $L \leftarrow\left\|f_{\theta}\left(x_{t}, t\right)-v_{t}^{\left[x_{1}, x_{0}\right]}\left(x_{t}\right)\right\|_{2}^{2}$ [for linear flows, the regression target is $x_{0}-x_{1}$]
5: return $L$
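For concreteness, here is a minimal runnable PyTorch sketch of the training loss (Pseudocode 4) and sampling loop (Pseudocode 5) for linear flows with the independent coupling. All names and hyperparameters (`net`, `sample_p`, `sample_q`, the step counts) are our own choices, not prescribed by the text.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 2))                # f_theta(x_t, t)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def train_step(sample_p, sample_q, batch=256):
    x0, x1 = sample_p(batch), sample_q(batch)         # independent coupling
    t = torch.rand(batch, 1)
    xt = t * x1 + (1 - t) * x0                        # RunFlow, Eq. (66)
    target = x0 - x1                                  # linear-flow velocity, Eq. (65)
    loss = ((net(torch.cat([xt, t], dim=1)) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def sample(sample_q, n=1000, n_steps=100):
    x, dt = sample_q(n), 1.0 / n_steps
    for i in range(n_steps):                          # Euler from t = 1 down to 0
        t = torch.full((n, 1), 1.0 - i * dt)
        x = x + net(torch.cat([x, t], dim=1)) * dt    # x <- x + v_t(x) dt
    return x
```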
4.6 DDIM as Flow Matching [Optional]
The DDIM algorithm of Section 3 can be seen as a special case of flow matching, for a particular choice of pointwise flows and coupling. We describe the exact correspondence here, which will allow us to notice an interesting relation between DDIM and linear flows.
We claim DDIM is equivalent to flow-matching with the following parameters:
Pointwise Flows: Either of the two equivalent^{55} pointwise flows:

$$
v_{t}^{\left[x_{1}, x_{0}\right]}\left(x_{t}\right)=\frac{1}{2 t}\left(x_{0}-x_{t}\right) \tag{70}
$$

$$
v_{t}^{\left[x_{1}, x_{0}\right]}\left(x_{t}\right)=\frac{1}{2 \sqrt{t}}\left(x_{0}-x_{1}\right), \tag{71}
$$

whose shared trajectories are the straight paths

$$
x_{t}=x_{0}+\sqrt{t}\left(x_{1}-x_{0}\right) . \tag{72}
$$

Coupling: The diffusion coupling $\Pi_{q, p}$ determined by the forward process: sample $x_{0} \sim p$ and let $x_{1}=x_{0}+\mathcal{N}\left(0, \sigma_{q}^{2}\right)$. (73)
This claim is straightforward to prove (see Appendix B.5), but the implication is somewhat surprising: we can recover the DDIM trajectories (which are not straight in general) as a combination of the straight pointwise trajectories in Equation (72). In fact, the DDIM trajectories are exactly equivalent to flow-matching trajectories for the above linear flows, with a different scaling of time ($\sqrt{t}$ vs. $t$)^{56}.
Claim 4 (DDIM as Linear Flow; Informal). The DDIM sampler (Algorithm 2) is equivalent, up to time-reparameterization, to the marginal flow produced by linear pointwise flows (Equation 65) with the diffusion coupling (Equation 73).
A formal statement of this claim^{57} is provided in Appendix B.7.
4.7 Additional Remarks and References [Optional]
See Figure 11 for a diagram of the different methods described in this tutorial, and their relations.
We highly recommend the flow-matching tutorial of Fjelde et al. [2024], which includes helpful visualizations of flows, and uses notation more consistent with the current literature.
As a curiosity, note that we never had to define an explicit "forward process" for flow-matching, as we did for Gaussian diffusion. Rather, it was enough to define the appropriate "reverse processes" (via flows).
What we called pointwise flows are also called two-sided conditional flows in the literature, and were developed in Albergo and Vanden-Eijnden [2022], Pooladian et al. [2023], Liu et al. [2022a], Tong et al. [2023].
Albergo et al. [2023] define the framework of stochastic interpolants, which can be thought of as considering stochastic pointwise flows, instead of only deterministic ones. Their framework strictly generalizes both DDPM and DDIM.
See Stark et al. [2024] for an interesting example of non-standard flows. They derive a generative model for discrete spaces by embedding into a continuous space (the probability simplex), then constructing a special flow on these simplices.
^{56} DDIM at time $t$ corresponds to the linear flow at time $\sqrt{t}$; thus linear flows are "slower" than DDIM when $t$ is small. This may be beneficial for linear flows in practice (speculatively).
Figure 10: The trajectories of individual samples x_(1)∼qx_{1} \sim q for the flow in Figure 7.
5 Diffusion in Practice
To conclude, we mention some aspects of diffusion which are important in practice, but were not covered in this tutorial.
Samplers in Practice. Our DDPM and DDIM samplers (Algorithms 2 and 3) correspond to the samplers presented in Ho et al. [2020] and Song et al. [2021], respectively, but with different choices of schedule and parametrization (see footnote 13). DDPM and DDIM were some of the earliest samplers to be used in practice, but since then there has been significant progress in samplers for fewer-step generation (which is crucial since each step requires a typically-expensive model forward-pass).^{58} In Sections 2.4 and 3.5, we showed that DDPM and DDIM can be seen as discretizations of the reverse SDE and Probability Flow ODE, respectively. The SDE and ODE perspectives automatically lead to many samplers corresponding to different black-box SDE and ODE numerical solvers (such as Euler, Heun, and Runge-Kutta). It is also possible to take advantage of the specific structure of the diffusion ODE, to improve upon black-box solvers [Lu et al., 2022a,b, Zhang and Chen, 2023].
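As an illustration of a black-box higher-order solver, here is a sketch (ours) of Heun's second-order method applied to a learned velocity field `v(x, t)`. This is a generic integrator in the spirit of the solvers mentioned above, not any specific paper's sampler; the corrector is skipped at the final step since the field may be ill-behaved at $t=0$.

```python
import numpy as np

def heun_sampler(v, x1, n_steps=20):
    """Integrate dx/dt-style updates of a learned flow field v(x, t),
    stepping t from 1 down to 0 with Heun's (trapezoidal) corrector."""
    x, dt = np.array(x1, dtype=float), 1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 - i * dt
        d1 = v(x, t)                        # slope at the current point
        x_euler = x + d1 * dt               # plain Euler proposal
        if i < n_steps - 1:
            d2 = v(x_euler, t - dt)         # slope at the proposal
            x = x + 0.5 * (d1 + d2) * dt    # average the two slopes
        else:
            x = x_euler                     # last step: Euler only
    return x
```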
Noise Schedules. The noise schedule typically refers to $\sigma_{t}$, which determines the amount of noise added at time $t$ of the diffusion process. The simple diffusion (1) has $x_{t} \sim \mathcal{N}\left(x_{0}, \sigma_{t}^{2}\right)$ (conditional on $x_{0}$) with $\sigma_{t} \propto \sqrt{t}$. Notice that the variance of $x_{t}$ increases at every timestep.^{59}
In practice, schedules with controlled variance are often preferred. One of the most popular schedules, introduced in Ho et al. [2020], uses a time-dependent variance and scaling such that the variance of $x_{t}$ remains bounded. Their discrete update is

$$
x_{t}=\sqrt{1-\beta(t)}\, x_{t-1}+\sqrt{\beta(t)}\, \varepsilon_{t}, \quad \varepsilon_{t} \sim \mathcal{N}(0, I), \tag{74}
$$

where $0<\beta(t)<1$ is chosen so that $x_{t}$ is (very close to) clean data at $t=1$ and pure noise at $t=T$.
The general SDE (26) introduced in Section 2.4 offers additional flexibility. Our simple diffusion (1) has $f=0, g=\sigma_{q}$, while the diffusion (74) of Ho et al. [2020] has $f=-\frac{1}{2} \beta(t), g=\sqrt{\beta(t)}$. Karras et al. [2022] reparametrize the SDE in terms of an overall scaling $s(t)$ and variance $\sigma(t)$ of $x_{t}$, as a more interpretable way to think about diffusion designs, and suggest a schedule with $s(t)=1, \sigma(t)=t$ (which corresponds to $f=0, g=\sqrt{2 t}$). Generally, the choice of $f, g$, or equivalently $s, \sigma$, offers a convenient way to explore the design-space of possible schedules.
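To make the variance-exploding vs. variance-preserving distinction concrete, here is a small sketch (ours) of the two forward-process marginals, using the standard closed form $\bar{\alpha}=\prod_{i}\left(1-\beta_{i}\right)$ obtained by iterating the update (74):

```python
import numpy as np

def ve_marginal(x0, t, eps):
    # Variance-exploding: x_t = x_0 + sigma_t * eps with sigma_t = sqrt(t)
    # (sigma_q = 1); Var(x_t) grows without bound as t increases.
    return x0 + np.sqrt(t) * eps

def vp_marginal(x0, betas, eps):
    # Variance-preserving: iterating Eq. (74) gives
    # x_t = sqrt(alpha_bar) x_0 + sqrt(1 - alpha_bar) eps, alpha_bar = prod(1 - beta_i),
    # so Var(x_t) stays bounded (-> 1 as alpha_bar -> 0).
    alpha_bar = np.prod(1.0 - np.asarray(betas))
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
```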
^{58} Even the best samplers still require around 10 sampling steps, which may be impractical. A variety of time-distillation methods seek to train one-step generator student models to match the output of diffusion teacher models, with the goal of high-quality sampling in one (or few) steps. Some examples include consistency models [Song et al., 2023b] and adversarial distillation methods [Lin et al., 2024, Xu et al., 2023, Sauer et al., 2024]. Note, however, that the distilled models are no longer diffusion models, nor are their samplers (even if multi-step) diffusion samplers.
^{59} Song et al. [2020] made the distinction between "variance-exploding" (VE) and "variance-preserving" (VP) schedules while comparing SMLD [Song and Ermon, 2019] and DDPM [Ho et al., 2020]. The terms VE and VP often refer specifically to SMLD and DDPM, respectively. Our diffusion (1) could also be called a variance-exploding schedule, though our noise schedule differs from the one originally proposed in Song and Ermon [2019].
Likelihood Interpretations and VAEs. One popular and useful interpretation of diffusion models is the Variational Auto Encoder (VAE) perspective ^(60){ }^{60}. Briefly, diffusion models can be viewed as a special case of a deep hierarchical VAE, where each diffusion timestep corresponds to one "layer" of the VAE decoder. The corresponding VAE encoder is given by the forward diffusion process, which produces the sequence of noisy {x_(t)}\left\{x_{t}\right\} as the "latents" for input xx. Notably, the VAE encoder here is not learnt, unlike usual VAEs. Because of the Markovian structure of the latents, each layer of the VAE decoder can be trained in isolation, without forward/backward passing through all previous layers; this helps with the notorious training instability of deep VAEs. We recommend the tutorials of Turner [2021] and Luo [2022] for more details on the VAE perspective.
One advantage of the VAE interpretation is, it gives us an estimate of the data likelihood under our generative model, by using the standard Evidence-Based-Lower-Bound (ELBO) for VAEs. This allows us to train diffusion models directly using a maximum-likelihood objective. It turns out that the ELBO for the diffusion VAE reduces to exactly the L2 regression loss that we presented, but with a particular time-weighting that weights the regression loss differently at different time-steps tt. For example, regression errors at large times tt (i.e. at high noise levels) may need to be weighted differently from errors at small times, in order for the overall loss to properly reflect a likelihood. ^(61){ }^{61} The best choice of time-weighting in practice, however, is still up for debate: the "principled" choice informed by the VAE interpretation does not always produce the best generated samples ^(62){ }^{62}. See Kingma and Gao [2023] for a good discussion of different weightings and their effect.
Parametrization: $x_{0} / \varepsilon / v$-prediction. Another important practical choice is which of several closely-related quantities (partially-denoised data, fully-denoised data, or the noise itself) we ask the network to predict.^{63} Recall that in DDPM Training (Algorithm 1), we asked the network $f_{\theta}$ to learn to predict $\mathbb{E}\left[x_{t-\Delta t} \mid x_{t}\right]$ by minimizing $\left\|f_{\theta}\left(x_{t}, t\right)-x_{t-\Delta t}\right\|_{2}^{2}$. However, other parametrizations are possible. For example, recalling that $\mathbb{E}\left[x_{t-\Delta t}-x_{t} \mid x_{t}\right] \stackrel{\text { eq. } 23}{=} \frac{\Delta t}{t} \mathbb{E}\left[x_{0}-x_{t} \mid x_{t}\right]$, we see that asking the network to predict the fully-denoised data $x_{0}$, by minimizing $\left\|f_{\theta}\left(x_{t}, t\right)-x_{0}\right\|_{2}^{2}$, is a (nearly) equivalent problem, which is often called $x_{0}$-prediction.^{64} The objectives differ only by a time-weighting factor of $\frac{1}{t}$. Similarly, defining the noise $\varepsilon_{t}=\frac{1}{\sigma_{t}}\left(x_{t}-x_{0}\right)$, we see that we could alternatively ask the network to predict $\mathbb{E}\left[\varepsilon_{t} \mid x_{t}\right]$: this is usually called
^{60} This was actually the original approach to derive the diffusion objective function, in Sohl-Dickstein et al. [2015] and also Ho et al. [2020].
^(61){ }^{61} See also Equation (5) in Kadkhodaie et al. [2024] for a simple bound on KL divergence between the true distribution and generated distribution, in terms of regression excess risks.
^(62){ }^{62} For example, Ho et al. [2020] drops the time-weighting terms, and just uniformly weights all timesteps.
^{63} More accurately, the network always predicts conditional expectations of these quantities.
^(64){ }^{64} This corresponds to the variancereduced algorithm (6).
$\varepsilon$-prediction. Another parametrization, $v$-prediction, asks the model to predict $v=\alpha_{t} \varepsilon-\sigma_{t} x_{0}$ [Salimans and Ho, 2022], which is mostly data at high noise levels and mostly noise at low noise levels. All the parametrizations differ only by time-weightings (see Appendix B. 10 for more details).
Although the different time-weightings do not affect the optimal solution, they do impact training as discussed above. Furthermore, even if the time-weightings are adjusted to yield equivalent problems in principle, the different parametrizations may behave differently in practice, since learning is not perfect and certain objectives may be more robust to error. For example, x_(0)x_{0}-prediction combined with a schedule that places a lot of weight on low noise levels may not work well in practice, since for low noise the identity function can achieve a relatively low objective value, but clearly is not what we want.
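As a sketch of how the parametrizations interconvert (ours, assuming the scaled forward process $x_{t}=\alpha_{t} x_{0}+\sigma_{t} \varepsilon$, with $\alpha_{t}=1$ for our simple diffusion):

```python
def eps_to_x0(x_t, eps_hat, alpha_t, sigma_t):
    # Invert x_t = alpha_t * x_0 + sigma_t * eps, given a noise prediction.
    return (x_t - sigma_t * eps_hat) / alpha_t

def v_to_x0(x_t, v_hat, alpha_t, sigma_t):
    # Combine v = alpha_t * eps - sigma_t * x_0 with x_t = alpha_t x_0 + sigma_t eps:
    # sigma_t * v = alpha_t * x_t - (alpha_t^2 + sigma_t^2) * x_0.
    return (alpha_t * x_t - sigma_t * v_hat) / (alpha_t**2 + sigma_t**2)
```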
Sources of Error. Finally, when using diffusion and flow models in practice, there are a number of sources of error which prevent the learnt generative model from exactly producing the target distribution. These can be roughly segregated into training-time and sampling-time errors.
Train-time error: Regression errors in learning the population-optimal regression function. The regression target is the marginal flow $v_{t}^{*}$ in flow-matching, or the conditional expectation $\mathbb{E}\left[x_{0} \mid x_{t}\right]$ (equivalently, the score) in diffusion models. For each fixed time $t$, this is a standard kind of statistical error. It depends on the neural network architecture and size as well as the number of samples, and can be decomposed further into approximation and estimation errors in the usual way (e.g. see Advani et al. [2020, Sec. 4], decomposing a 2-layer network into approximation error and over-fitting error).
Sampling-time error: Discretization errors from using finite stepsizes Delta t\Delta t. This error is exactly the discretization error of the ODE or SDE solver used in sampling. These errors manifest in different ways: for DDPM, this reflects the error in using a Gaussian approximation of the reverse process (i.e. Fact 1 breaks for large sigma\sigma ). For DDIM and flow matching, it reflects the error in simulating continuous-time flows in discrete time.
These errors interact and compound in nontrivial ways, which are not yet fully understood. For example, it is not clear exactly how train-time error in the regression estimates translates into distributional error of the entire generative model. (And this question itself is complicated, since it is not always clear what type of distributional divergence we care about in practice). Interestingly, these "errors" can also have a beneficial effect on small train sets, because they act
as a kind of regularization which prevents the diffusion model from just memorizing the train samples (as discussed in Section 3.7).
Conclusion
We have now covered the basics of diffusion models and flow matching. This is an active area of research, and there are many interesting aspects and open questions which we did not cover (see Page 36 for recommended reading). We hope the foundations here equip the reader to understand more advanced topics in diffusion modeling, and perhaps contribute to the research themselves.
Figure 11: Commutative diagram of the different reverse samplers described in this tutorial, and their relations. Each deterministic sampler produces identical marginal distributions as its stochastic counterpart. There are also various ways to construct stochastic versions of flows, which are not pictured here (e.g. Albergo et al. [2023]).
A Additional Resources
Several other helpful resources for learning diffusion (tutorials, blogs, papers), roughly in order of mathematical background required.
Perspectives on diffusion.
Dieleman [2023]. (Webpage.)
Overview of many interpretations of diffusion, and techniques.
Tutorial on Diffusion Models for Imaging and Vision.
Chan [2024]. (49 pgs.)
More focus on intuitions and applications.
Interpreting and improving diffusion models using the euclidean distance function.
Permenter and Yuan [2023]. (Webpage.)
Distance-field interpretation. See accompanying blog with simple code [Yuan, 2024].
On the Mathematics of Diffusion Models.
McAllester [2023]. (4 pgs.)
Short and accessible.
Building Diffusion Model's theory from ground up
Das [2024]. (Webpage.)
ICLR 2024 Blogposts Track. Focus on SDE and score-matching perspective.
Denoising Diffusion Models: A Generative Learning Big Bang.
Song, Meng, and Vahdat [2023a]. (Video, 3 hrs.)
CVPR 2023 tutorial, with recording.
Diffusion Models From Scratch.
Duan [2023]. (Webpage, 10 parts.)
Fairly complete on topics, includes: DDPM, DDIM, Karras et al. [2022], SDE/ODE solvers. Includes practical remarks and code.
Understanding Diffusion Models: A Unified Perspective.
Luo [2022]. (22 pgs.)
Focus on VAE interpretation, with explicit math details.
Demystifying Variational Diffusion Models.
Ribeiro and Glocker [2024]. (44 pgs.)
Focus on VAE interpretation, with explicit math details.
Diffusion and Score-Based Generative Models.
Song [2023]. (Video, 1.5 hrs.)
Discusses several interpretations, applications, and comparisons to other generative modeling methods.
Deep Unsupervised Learning using Nonequilibrium Thermodynamics
Sohl-Dickstein, Weiss, Maheswaranathan, and Ganguli [2015]. (9 pgs + Appendix)
Original paper introducing diffusion models for ML. Includes unified description of discrete diffusion (i.e. diffusion on discrete state spaces).
An Introduction to Flow Matching.
Fjelde, Mathieu, and Dutordoir [2024]. (Webpage.)
Insightful figures and animations, with rigorous mathematical exposition.
Elucidating the Design Space of Diffusion-Based Generative Models.
Karras, Aittala, Aila, and Laine [2022]. (10 pgs + Appendix.)
Discusses the effect of various design choices such as noise schedule, parameterization, ODE solver, etc. Presents a generalized framework that captures many choices.
Denoising Diffusion Models
Peyré [2023]. (4 pgs.)
Fast-track through the mathematics, for readers already comfortable with Langevin dynamics and SDEs.
Generative Modeling by Estimating Gradients of the Data Distribution.
Song, Sohl-Dickstein, Kingma, Kumar, Ermon, and Poole [2020]. (9 pgs + Appendix.)
Presents the connections between SDEs, ODEs, DDIM, and DDPM.
Stochastic Interpolants: A Unifying Framework for Flows and Diffusions.
Albergo, Boffi, and Vanden-Eijnden [2023]. (46 pgs + Appendix.)
Presents a general framework that captures many diffusion variants and learning objectives. For readers comfortable with SDEs.
Sampling, Diffusions, and Stochastic Localization.
Montanari [2023]. (22 pgs + Appendix.)
Presents diffusion as a special case of "stochastic localization," a technique used in high-dimensional statistics to establish mixing of Markov chains.
B Omitted Derivations
B. 1 KL Error in Gaussian Approximation of Reverse Process
Here we prove Lemma 1, restated below.
Lemma 2. Let $p(x)$ be an arbitrary density over $\mathbb{R}$, with bounded 1st to 4th order derivatives. Consider the joint distribution $\left(x_{0}, x_{1}\right)$, where $x_{0} \sim p$ and $x_{1} \sim x_{0}+\mathcal{N}\left(0, \sigma^{2}\right)$. Then, for any conditioning $z \in \mathbb{R}$ we have

$$
K L\left(\mathcal{N}\left(\mu_{z}, \sigma^{2}\right) \| p\left(x_{0}=\cdot \mid x_{1}=z\right)\right) \leq O\left(\sigma^{4}\right), \quad \text { where } \mu_{z}:=z+\sigma^{2} \nabla \log p(z) .
$$
Proof. WLOG, we can take z=0z=0. We want to estimate the KL:
\begin{equation*}
KL\left(\mathcal{N}\left(\mu, \sigma^{2}\right) \| p\left(x_{0}=\cdot \mid x_{1}=0\right)\right) \tag{77}
\end{equation*}
where we will let mu\mu be arbitrary for now.
Let q:=N(mu,sigma^(2))q:=\mathcal{N}\left(\mu, \sigma^{2}\right), and p(x)=:exp(F(x))p(x)=: \exp (F(x)). We have x_(1)∼p***N(0,sigma^(2))x_{1} \sim p \star \mathcal{N}\left(0, \sigma^{2}\right). This implies:
Notice that our choice of mu_(**)\mu_{*} in the above proof was crucial; for example if we had set mu_(**)=0\mu_{*}=0, the Omega(sigma^(2))\Omega\left(\sigma^{2}\right) terms in Line (113) would not have cancelled out.
B. 2 SDE proof sketches
Here is a sketch of the proof of the equivalence of the SDE and Probability Flow ODE, which relies on the equivalence of the SDE to a Fokker-Planck equation. (See Song et al. [2020] for the full proof.)
Proof.
\begin{align*}
d x & =f(x, t) d t+g(t) d w \\
\Longleftrightarrow \frac{\partial p_{t}(x)}{\partial t} & =-\nabla_{x}\left(f p_{t}\right)+\frac{1}{2} g^{2} \nabla_{x}^{2} p_{t} \tag{FP}\\
& =-\nabla_{x}\left(f p_{t}\right)+\frac{1}{2} g^{2} \nabla_{x}\left(p_{t} \nabla_{x} \log p_{t}\right) \\
& =-\nabla_{x}\left\{\left(f-\frac{1}{2} g^{2} \nabla_{x} \log p_{t}\right) p_{t}\right\} \\
& =-\nabla_{x}\left\{\tilde{f}(x, t) p_{t}(x)\right\}, \quad \tilde{f}(x, t)=f(x, t)-\frac{1}{2} g(t)^{2} \nabla_{x} \log p_{t}(x) \\
\Longrightarrow d x & =\tilde{f}(x, t) d t
\end{align*}
The equivalence of the SDE and Fokker-Planck equations follows from Itô's formula and integration-by-parts. Here is an outline for a simplified case in 1d, where gg is constant (see Winkler [2023] for full proof):
Proof.
\begin{aligned}
d x & =f(x) d t+g d w, \quad d w \sim \sqrt{d t} \mathcal{N}(0,1) & & \\
\text { For any } \phi: \quad d \phi(x) & =\left(f(x) \partial_{x} \phi(x)+\frac{1}{2} g^{2} \partial_{x}^{2} \phi(x)\right) d t+g \partial_{x} \phi(x) d w & & \\
\Longrightarrow \frac{d}{d t} \mathbb{E}[\phi] & =\mathbb{E}\left[f \partial_{x} \phi+\frac{1}{2} g^{2} \partial_{x}^{2} \phi\right], \quad(\mathbb{E}[d w]=0) & & \\
\int \phi(x) \partial_{t} p(x, t) d x & =\int f(x) \partial_{x} \phi(x) p(x, t) d x+\frac{1}{2} g^{2} \int \partial_{x}^{2} \phi(x) p(x, t) d x & & \\
& =-\int \phi(x) \partial_{x}(f(x) p(x, t)) d x+\frac{1}{2} g^{2} \int \phi(x) \partial_{x}^{2} p(x, t) d x, & & \text { integration-by-parts } \\
\partial_{t} p(x) & =-\partial_{x}(f(x) p(x, t))+\frac{1}{2} g^{2} \partial_{x}^{2} p(x), & & \text { Fokker-Planck }
\end{aligned}
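As a numerical sanity check (ours, not from the text), the following sketch simulates the SDE $d x=g\, d w$ and its probability-flow ODE started from a Gaussian $p_{0}$, where the score is available in closed form, and confirms that the marginal variances agree:

```python
import numpy as np

rng = np.random.default_rng(0)
g, s0, T, n_steps, n = 1.0, 0.5, 1.0, 1000, 100_000
dt = T / n_steps

x_sde = s0 * rng.standard_normal(n)    # ensemble following the SDE
x_ode = s0 * rng.standard_normal(n)    # ensemble following the PF-ODE
t = 0.0
for _ in range(n_steps):
    x_sde += g * np.sqrt(dt) * rng.standard_normal(n)   # dx = g dw
    score = -x_ode / (s0**2 + g**2 * t)                 # analytic: p_t is Gaussian
    x_ode += -0.5 * g**2 * score * dt                   # dx = -1/2 g^2 (score) dt
    t += dt

print(np.var(x_sde), np.var(x_ode))    # both approach s0^2 + g^2 T = 1.25
```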
B. 3 DDIM Point-mass Claim
Here is a version of Claim 3 where $p_{0}$ is a delta at an arbitrary point $x_{0}$.
Claim 5. Suppose the target distribution is a point mass at $x_{0} \in \mathbb{R}^{d}$, i.e. $p_{0}=\delta_{x_{0}}$. Define the function

$$
F_{t}(x):=x_{0}+\frac{\sigma_{t-\Delta t}}{\sigma_{t}}\left(x-x_{0}\right) .
$$

Then $F_{t}$ transports $p_{t}$ to $p_{t-\Delta t}$, i.e. $F_{t} \sharp p_{t}=p_{t-\Delta t}$.
Thus Algorithm 2 defines a valid reverse sampler for target distribution p_(0)=delta_(x_(0))p_{0}=\delta_{x_{0}}.
B. 4 Flow Combining Lemma
Here we provide a more formal statement of the marginal flow result stated in Equation (64).
Equation (64) follows from a more general lemma (Lemma 3) which formalizes the "gas combination" analogy of Section 3. The motivation for this lemma is, we need a way of combining flows: of taking several different flows and producing a single "effective flow."
As a warm-up for the lemma, suppose we have $n$ different flows, each with their own initial and final distributions $q_{i}, p_{i}$: that is, $q_{i} \stackrel{v^{(i)}}{\hookrightarrow} p_{i}$.
We can imagine these as the flows of $n$ different gases, where gas $i$ has initial density $q_{i}$ and final density $p_{i}$. Now we want to construct an overall flow $v^{*}$ which takes the average initial-density to the average final-density: $\frac{1}{n} \sum_{i} q_{i} \stackrel{v^{*}}{\hookrightarrow} \frac{1}{n} \sum_{i} p_{i}$.
To construct v_(t)^(**)(x_(t))v_{t}^{*}\left(x_{t}\right), we must take an average of the individual vector fields v^((i))v^{(i)}, weighted by the probability mass the ii-th flow places on x_(t)x_{t}, at time tt. (This is exactly analogous to Figure 5).
This construction is formalized in Lemma 3. There, instead of averaging over just a finite set of flows, we are allowed to average over any distribution over flows. To recover Equation (64), we can apply Lemma 3 to a distribution Gamma\Gamma over (v,q_(v))=(v^([x_(1),x_(0)]),delta_(x_(1)))\left(v, q_{v}\right)=\left(v^{\left[x_{1}, x_{0}\right]}, \delta_{x_{1}}\right), that is, pointwise flows and their associated initial delta distributions.
Lemma 3 (Flow Combining Lemma). Let $\Gamma$ be an arbitrary joint distribution over pairs $\left(v, q_{v}\right)$ of flows $v$ and their associated initial distributions $q_{v}$. Let $v\left(q_{v}\right)$ denote the final distribution when initial distribution $q_{v}$ is transported by flow $v$, so $q_{v} \stackrel{v}{\hookrightarrow} v\left(q_{v}\right)$.
For fixed t in[0,1]t \in[0,1], consider the joint distribution over (x_(1),x_(t),w_(t))in\left(x_{1}, x_{t}, w_{t}\right) \in(R^(d))^(3)\left(\mathbb{R}^{d}\right)^{3} generated by:
B. 5 DDIM as Flow Matching
Now that we have the machinery of flows in hand, it is fairly easy to derive the DDIM algorithm "from scratch", by extending our simple scaling algorithm from the single point-mass case.
First, we need to find the pointwise flow. Recall from Claim 5 that for the simple case where the target distribution $p_{0}$ is a Dirac-delta at $x_{0}$, the following scaling maps $p_{t}$ to $p_{t-\Delta t}$:

$$
x_{t-\Delta t}=x_{0}+\frac{\sigma_{t-\Delta t}}{\sigma_{t}}\left(x_{t}-x_{0}\right) .
$$
Now let us compute the marginal flow $v^{*}$ generated by the pointwise flow of Equation (70) and the coupling implied by the diffusion forward process. By Equation (69), the marginal flow is:

$$
v_{t}^{*}\left(x_{t}\right)=\mathbb{E}\left[\frac{1}{2 t}\left(x_{0}-x_{t}\right) \,\Big|\, x_{t}\right]=\frac{1}{2 t} \mathbb{E}\left[x_{0}-x_{t} \mid x_{t}\right] .
$$
B. 6 Equivalence of Pointwise Flows (70) and (71)
To see that the pointwise flows (70) and (71) are equivalent along their trajectories, we can solve the ODE determined by (70) by separation of variables:
\begin{aligned}
\frac{d x_{t}}{d t} & =-\frac{1}{2 t}\left(x_{0}-x_{t}\right) \\
\Longrightarrow \frac{\frac{d x_{t}}{d t}}{x_{t}-x_{0}} & =\frac{1}{2 t} \\
\Longrightarrow \int \frac{1}{x_{t}-x_{0}} d x & =\int \frac{1}{2 t} d t, \text { since } \frac{d x_{t}}{d t} d t=d x \\
\Longrightarrow \log \left(x_{t}-x_{0}\right) & =\log \sqrt{t}+c \\
c & =\log \left(x_{1}-x_{0}\right) \text { (boundary cond.) } \\
\Longrightarrow \log \left(x_{t}-x_{0}\right) & =\log \left(\sqrt{t}\left(x_{1}-x_{0}\right)\right) \\
\Longrightarrow x_{t}-x_{0} & =\sqrt{t}\left(x_{1}-x_{0}\right)
\end{aligned}
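As a sanity check (ours), the solution $x_{t}=x_{0}+\sqrt{t}\left(x_{1}-x_{0}\right)$ indeed satisfies the ODE above:

\begin{aligned}
\frac{d x_{t}}{d t}=\frac{x_{1}-x_{0}}{2 \sqrt{t}}=\frac{\sqrt{t}\left(x_{1}-x_{0}\right)}{2 t}=\frac{x_{t}-x_{0}}{2 t}=-\frac{1}{2 t}\left(x_{0}-x_{t}\right) .
\end{aligned}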
B. 7 DDIM vs Time-reparameterized linear flows
Lemma 4 (DDIM vs Linear Flows). Let $p_{0}$ be an arbitrary target distribution. Let $\left\{x_{t}\right\}_{t}$ be the joint distribution defined by the DDPM forward process applied to $p_{0}$, so the marginal distribution of $x_{t}$ is $p_{t}=p_{0} \star \mathcal{N}\left(0, t \sigma_{q}^{2}\right)$.
Let x^(**)inR^(d)x^{*} \in \mathbb{R}^{d} be an arbitrary initial point. Consider the following two deterministic trajectories:
The trajectory {y_(t)}_(t)\left\{y_{t}\right\}_{t} of the continuous-time DDIM flow, with respect to target distribution p_(0)p_{0}, when started at initial point y_(1)=x^(**)y_{1}=x^{*}.
That is, $y_{t}$ is the solution to the following ODE (Equation 58):

$$
\frac{d y_{t}}{d t}=-\frac{1}{2 t} \mathbb{E}\left[x_{0}-x_{t} \mid x_{t}=y_{t}\right] .
$$
The trajectory $\left\{z_{t}\right\}_{t}$ produced when initial point $z_{1}=x^{*}$ is transported by the marginal flow constructed from linear pointwise flows (Equation 65) with the diffusion coupling (Equation 73). Then the two trajectories are related by the time reparameterization $z_{\sqrt{t}}=y_{t}$.
Now, we claim that for given $x_{t}$, the conditional distributions $p\left(\eta_{i} \mid x_{t}\right)$ are identical for all $i<t$. To see this, notice that the joint distribution $p\left(x_{0}, x_{t}, \eta_{0}, \eta_{\Delta t}, \ldots, \eta_{t-\Delta t}\right)$ is symmetric in the $\left\{\eta_{i}\right\}$s, by definition of the forward process, and therefore the conditional distribution $p\left(\eta_{0}, \eta_{\Delta t}, \ldots, \eta_{t-\Delta t} \mid x_{t}\right)$ is also symmetric in the $\left\{\eta_{i}\right\}$s. Therefore, all $\eta_{i}$ have identical conditional expectations:
Here we give the "variance-reduced" versions of the DDPM training and sampling algorithms, where we train a network $g_{\theta}$ to approximate
the scaling factors are more complex in the general case (see Luo [2022] for VP diffusion, for example) but the idea is the same. The DDPM training algorithm 1 has objective and optimal value
That is, the network $f_{\theta}$ learns to predict $\mathbb{E}\left[x_{t-\Delta t} \mid x_{t}\right]$. However, we could instead require the network to predict other related quantities, as follows. Noting that
Madhu S Advani, Andrew M Saxe, and Haim Sompolinsky. High-dimensional dynamics of generalization error in neural networks. Neural Networks, 132:428-446, 2020. uarr34\uparrow 34
Michael S. Albergo, Nicholas M. Boffi, and Eric Vanden-Eijnden. Stochastic interpolants: A unifying framework for flows and diffusions, 2023. uarr25,uarr31,uarr35,uarr37\uparrow 25, \uparrow 31, \uparrow 35, \uparrow 37
Michael Samuel Albergo and Eric Vanden-Eijnden. Building normalizing flows with stochastic interpolants. In The Eleventh International Conference on Learning Representations, 2022. uarr31\uparrow 31
Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12 (3):313-326, 1982. uarr14\uparrow 14
Nicolas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramer, Borja Balle, Daphne Ippolito, and Eric Wallace. Extracting training data from diffusion models. In 32nd USENIX Security Symposium (USENIX Security 23), pages 5253-5270, 2023. uarr23\uparrow 23
Stanley H. Chan. Tutorial on diffusion models for imaging and vision, 2024. uarr36\uparrow 36
Hongrui Chen, Holden Lee, and Jianfeng Lu. Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions. In International Conference on Machine Learning, pages 4735-4763. PMLR, 2023. $\uparrow 23$
Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. In The Eleventh International Conference on Learning Representations, 2022. uarr23\uparrow 23
Sitan Chen, Sinho Chewi, Holden Lee, Yuanzhi Li, Jianfeng Lu, and Adil Salim. The probability flow ode is provably fast. Advances in Neural Information Processing Systems, 36, 2024a. uarr23\uparrow 23
Sitan Chen, Vasilis Kontonis, and Kulin Shah. Learning general gaussian mixtures with efficient score matching. arXiv preprint arXiv:2404.18893, 2024b. uarr23\uparrow 23
Valentin De Bortoli. Convergence of denoising diffusion models under the manifold hypothesis. arXiv preprint arXiv:2208.05314, 2022. uarr23\uparrow 23
Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34: 17695-17709, 2021. uarr23\uparrow 23
Ronen Eldan. Lecture notes - from stochastic calculus to geometric inequalities, 2024. URL https://www.wisdom.weizmann.ac.il/~ronene/GFANotes.pdf. $\uparrow 13$
Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. arXiv preprint arXiv:2403.03206, 2024. uarr25\uparrow 25
Lawrence C Evans. An introduction to stochastic differential equations, volume 82. American Mathematical Soc., 2012. uarr13\uparrow 13
Xiangming Gu, Chao Du, Tianyu Pang, Chongxuan Li, Min Lin, and Ye Wang. On memorization in diffusion models. arXiv preprint arXiv:2310.02664, 2023. uarr23\uparrow 23
Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. $\uparrow 3, \uparrow 8, \uparrow 32, \uparrow 33$
Zahra Kadkhodaie, Florentin Guth, Eero P Simoncelli, and Stéphane Mallat. Generalization in diffusion models arises from geometry-adaptive harmonic representations. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=ANvmVS2Yr0. $\uparrow 33$
Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models, 2022. uarr12,uarr14,uarr32,uarr36,uarr37\uparrow 12, \uparrow 14, \uparrow 32, \uparrow 36, \uparrow 37
Diederik P Kingma and Ruiqi Gao. Understanding diffusion objectives as the ELBO with simple data augmentation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=NnMEadcdyD. $\uparrow 33$
P.E. Kloeden and E. Platen. Numerical Solution of Stochastic Differential Equations. Stochastic Modelling and Applied Probability. Springer Berlin Heidelberg, 2011. ISBN 9783540540625. URL https://books.google.com/books?id=BCvtssom1CMC. $\uparrow 13$
Holden Lee, Jianfeng Lu, and Yixin Tan. Convergence of score-based generative modeling for general data distributions. In International Conference on Algorithmic Learning Theory, pages 946-985. PMLR, 2023. uarr23\uparrow 23
Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=PqvMRDCJT9t. uarr25,uarr28\uparrow 25, \uparrow 28
Xingchao Liu, Chengyue Gong, et al. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022a.
Xingchao Liu, Lemeng Wu, Mao Ye, and Qiang Liu. Let us build bridges: Understanding and extending diffusion generative models, 2022b. uarr25\uparrow 25
Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. Advances in Neural Information Processing Systems, 35:5775-5787, 2022a. $\uparrow 32$
Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models. arXiv preprint arXiv:2211.01095, 2022b. $\uparrow 32$
Calvin Luo. Understanding diffusion models: A unified perspective, 2022. uarr33,uarr36,uarr46\uparrow 33, \uparrow 36, \uparrow 46
David McAllester. On the mathematics of diffusion models, 2023. uarr36\uparrow 36
Andrea Montanari. Sampling, diffusions, and stochastic localization, 2023. uarr37\uparrow 37
Frank Permenter and Chenyang Yuan. Interpreting and improving diffusion models using the euclidean distance function. arXiv preprint arXiv:2306.04848, 2023. uarr36\uparrow 36
Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich, Brandon Amos, Yaron Lipman, and Ricky TQ Chen. Multisample flow matching: Straightening flows with minibatch couplings. In International Conference on Machine Learning, pages 28100-28127. PMLR, 2023. uarr31\uparrow 31
Fabio De Sousa Ribeiro and Ben Glocker. Demystifying variational diffusion models, 2024. uarr36\uparrow 36
Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512, 2022. uarr34\uparrow 34
Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas Blattmann, Patrick Esser, and Robin Rombach. Fast high-resolution image synthesis with latent adversarial diffusion distillation. arXiv preprint arXiv:2403.12015, 2024. uarr32\uparrow 32
Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. CoRR, abs/1503.03585, 2015. URL http://arxiv.org/abs/1503.03585. $\uparrow 4, \uparrow 8, \uparrow 10, \uparrow 33, \uparrow 36$
Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Diffusion art or digital forgery? investigating data replication in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6048-6058, 2023. uarr23\uparrow 23
Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=St1giarCHLP. $\uparrow 3, \uparrow 16, \uparrow 32$
Jiaming Song, Chenlin Meng, and Arash Vahdat. Cvpr 2023 tutorial: Denoising diffusion models: A generative learning big bang, 2023a. URL https://cvpr2023-tutorial-diffusion-models.github.io. uarr36\uparrow 36
Yang Song. Generative modeling by estimating gradients of the data distribution, 2021. URL https://yang-song.net/blog/2021/score/. $\uparrow 13$
Yang Song. Diffusion and score-based generative models, 2023. URL https://www.youtube.com/watch?v=wMmqCMwuM2Q. $\uparrow 36$
Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019. uarr32\uparrow 32
Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. URL https://arxiv.org/pdf/2011.13456.pdf. uarr14,uarr21,uarr32,uarr37,uarr40\uparrow 14, \uparrow 21, \uparrow 32, \uparrow 37, \uparrow 40
Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. arXiv preprint arXiv:2303.01469, 2023b. uarr32\uparrow 32
Hannes Stark, Bowen Jing, Chenyu Wang, Gabriele Corso, Bonnie Berger, Regina Barzilay, and Tommi Jaakkola. Dirichlet flow matching with applications to dna sequence design, 2024. uarr31\uparrow 31
Alexander Tong, Nikolay Malkin, Kilian Fatras, Lazar Atanackovic, Yanlei Zhang, Guillaume Huguet, Guy Wolf, and Yoshua Bengio. Simulation-free Schrödinger bridges via score and flow matching. arXiv preprint arXiv:2307.03672, 2023. $\uparrow 31$
Angus Turner. Diffusion models as a kind of vae, June 2021. URL https://angusturner.github.io/generative_models/2021/06/29/diffusion-probabilistic-models-I.html. $\uparrow 33$
Yanwu Xu, Yang Zhao, Zhisheng Xiao, and Tingbo Hou. Ufogen: You forward once large scale text-toimage generation via diffusion gans. arXiv preprint arXiv:2311.09257, 2023. uarr32\uparrow 32
Chenyang Yuan. Diffusion models from scratch, from a new theoretical perspective, 2024. URL https://www.chenyang.co/diffusion.html. $\uparrow 36$
Qinsheng Zhang and Yongxin Chen. Fast sampling of diffusion models with exponential integrator. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=Loek7hfb46P. $\uparrow 32$
^(1){ }^{1} These stand for Denoising Diffusion Probabilistic Models (DDPM) and Denoising Diffusion Implicit Models (DDIM), following Ho et al. [2020] and Song et al. [2021].
^(2){ }^{2} One benefit of using this particular forward process is computational: we can directly sample x_(t)x_{t} given x_(0)x_{0} in constant time.
^(3){ }^{3} Formally, p_(T)p_{T} is close in KL divergence to N(0,Tsigma^(2))\mathcal{N}\left(0, T \sigma^{2}\right), assuming p_(0)p_{0} has bounded moments.
^(12){ }^{12} This naturally suggests taking the continuous-time limit, which we discuss in Section 2.4, though it is not needed for most of our arguments.
^(14){ }^{14} In practice, it is common to share parameters when learning the different regression functions {mu_(t)}_(t)\left\{\mu_{t}\right\}_{t}, instead of learning a separate function for each timestep independently. This is usually implemented by training a model f_(theta)f_{\theta} that accepts the time tt as an additional argument, such that f_(theta)(x_(t),t)~~mu_(t)(x_(t))f_{\theta}\left(x_{t}, t\right) \approx \mu_{t}\left(x_{t}\right).
^(18){ }^{18} The chain rule for KL implies that we can add up these per-step errors: the approximation error for the final sample is bounded by the sum of all the per-step errors.
^(21){ }^{21} See Eldan [2024] for a high-level overview of Brownian motions and Itô's formula. See also Evans [2012] for a gentle introductory textbook, and Kloeden and Platen [2011] for numerical methods.
^(26){ }^{26} Because we can just "shift" our coordinates to make it so. Formally, our entire setup including Equation 35 is translation-symmetric.
^(30){ }^{30} See Claim 5 in Appendix B. 3 for an explicit statement.
^(35){ }^{35} We add conditioning x_(0)=ax_{0}=a, because we want to take expectations w.r.t the two-point mixture distribution, not the single-point distribution.
^{36} A proof sketch is in Appendix B.2. It involves rewriting the SDE noise term as the deterministic score (recall the connection between noise and score in Equation (18)). Although it is deterministic, the score is unknown since it depends on $p_{t}$.
^(37){ }^{37} To use a gas analogy: the SDE describes the (Brownian) motion of individual particles in a gas, while the PF-ODE describes the streamlines of the gas's velocity field. That is, the PF-ODE describes the motion of a "test particle" being transported by the gas- like a feather in the wind.
^(51){ }^{51} Diffusion provides one possible construction, as we will see later in Section 4.6.
^(55){ }^{55} See Appendix B. 6 for details on why (70) and (71) are equivalent along their trajectories.
^(57){ }^{57} In practice, linear flows are most often instantiated with the independent coupling, not the above "diffusion coupling." However, for large enough terminal variance sigma_(q)^(2)\sigma_{q}^{2}, the diffusion coupling is close to independent. Therefore, Claim 4 tells us that the common practice in flow matching (linear flows with a Gaussian terminal distribution and independent coupling) is nearly equivalent to standard DDIM, with a different time schedule. Finally, for the experts: this is a claim about the "variance exploding" version of DDIM, which is what we use throughout. Claim 4 is false for variance-preserving DDIM.