Were RNNs All We Needed?
Abstract
The scalability limitations of Transformers regarding sequence length have renewed interest in recurrent sequence models that are parallelizable during training. As a result, many novel recurrent architectures, such as S4, Mamba, and Aaren, have been proposed that achieve performance comparable to that of Transformers.
In this work, we revisit traditional recurrent neural networks (RNNs) from over a decade ago: LSTMs (1997) and GRUs (2014).
While these models were slow due to the need to backpropagate through time (BPTT), we show that by removing the hidden state dependencies from their input, forget, and update gates, LSTMs and GRUs no longer require BPTT and can be trained efficiently in parallel.
Building on this, we introduce minimal versions (minLSTMs and minGRUs) that (1) use significantly fewer parameters than their traditional counterparts and (2) are fully parallelizable during training (175× faster for a sequence of length 512). Lastly, we show that these stripped-down versions of decade-old RNNs match the empirical performance of recent sequence models.
1 Introduction
Over the past few years, Transformers (Vaswani et al., 2017) have been the dominant architecture in many areas, leading to advancements in tasks like machine translation (Devlin et al., 2019), text generation (Brown et al., 2020), and more.
However, Transformers have a quadratic computational complexity in the sequence length, making them prohibitively expensive for long sequences, especially in low-resource settings.
As such, numerous works have investigated the design of more efficient alternatives that achieve competitive performance with that of Transformers.
Recently, there has been a renewed interest in recurrent sequence models that can be trained efficiently by processing their context in parallel.
These models (1) during training require only linear memory in the sequence length and (2) at inference time are rolled out recurrently token-by-token, requiring only constant memory.
As a result, these models can scale to significantly longer sequences than Transformers.¹

¹ The title of this paper pays tribute to the original Transformers paper, “Attention is All You Need”.
A family of efficiently trainable recurrent sequence models that has recently gained much traction is that of state-space models, specifically the recently proposed Mamba (Gu & Dao, 2024). Mamba (S6) is a state-space model that differentiates itself from prior works by leveraging input-dependent transitions.
The recent success of Mamba and the proposal of many new state-space model variants have led to several survey papers (Wang et al., 2024; Patro & Agneeswaran, 2024; Qu et al., 2024).
Another extensively explored group of methods is based on attention.
Peng et al. (2023) proposed a linear attention model that can be written recurrently while being trained in parallel.
Feng et al. (2024) showed that softmax attention (and Transformers) can be viewed as a recurrent neural network (RNN).
Building on their RNN formulation of attention, they proposed Aaren, a softmax attention model, that can be computed in parallel for efficient training or unrolled sequentially as an RNN for efficient inference.
Although many recurrent models have been proposed with vastly different architectures, these recent state-of-the-art methods are all efficiently trainable using the same algorithm – the parallel prefix scan algorithm (Blelloch, 1990).
Inspired by the striking algorithmic similarities between the numerous recently proposed sequence models, we revisit LSTMs (Hochreiter & Schmidhuber, 1997) and GRUs (Cho et al., 2014) from a modern lens.
As traditional RNNs from over a decade ago, LSTMs and GRUs are only computable sequentially and require backpropagation through time (BPTT) during training.
As such, LSTMs and GRUs were far too slow to scale beyond a few hundred tokens, resulting in their deprecation.
Revisiting these models, we show that by removing hidden state dependencies from their input, forget, and update gates, LSTMs and GRUs no longer need to BPTT and can be trained efficiently using the parallel scan algorithm.
Building on this, we simplify LSTMs and GRUs further by removing their constraints on output range (i.e., their use of $\tanh$) and ensuring their output is time-independent in scale.
These steps result in minimal versions (minLSTMs and minGRUs) that (1) use significantly fewer parameters than their traditional counterparts and (2) are trainable in parallel (175× faster for a context length of 512).
Finally, we show that these stripped-down versions of decade-old RNNs match the empirical performance of recent sequence models.
2 Background
In this section, we review recurrent neural networks (RNNs). RNNs are recurrent sequence models that maintain a hidden state across time steps, capturing temporal dependencies. As such, RNNs are particularly suitable for sequence modelling settings such as those involving time series, natural language processing, and other sequential tasks where context from previous steps informs the current prediction. Vanilla RNNs (Elman, 1990), however, struggle with issues of vanishing and exploding gradients, limiting their ability to learn long-term dependencies.
2.1 LSTM
Addressing this limitation, Hochreiter & Schmidhuber (1997) introduced Long Short-Term Memory (LSTM) networks. LSTMs are enhanced RNNs designed to mitigate the vanishing gradient problem, allowing the model to learn long-term dependencies. LSTMs are computed as follows:
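$$
\begin{aligned}
f_t &= \sigma(\mathrm{Linear}_{d_h}([x_t, h_{t-1}])) \\
i_t &= \sigma(\mathrm{Linear}_{d_h}([x_t, h_{t-1}])) \\
o_t &= \sigma(\mathrm{Linear}_{d_h}([x_t, h_{t-1}])) \\
\tilde{c}_t &= \tanh(\mathrm{Linear}_{d_h}([x_t, h_{t-1}])) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$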
where $\odot$ represents an element-wise multiplication of vectors, $t$ is the current timestep, $h_t$ is the outputted hidden state, $[x_t, h_{t-1}]$ represents the concatenation of $x_t$ with $h_{t-1}$, $d_h$ is the size of the hidden state, $\mathrm{Linear}_{d_h}(\cdot)$ denotes a learned linear transformation with output size $d_h$, $c_t$ is a cell state that maintains information over the sequence, and $\tilde{c}_t$ is the candidate cell state to be added. $i_t$, $f_t$, and $o_t$ are gating mechanisms. The input gate $i_t$ controls how much new information from the candidate cell state is added. The forget gate $f_t$ determines the proportion of information in the cell state to discard. The output gate $o_t$ decides what information from the cell state should be outputted. The $\sigma$ and $\tanh$ are used for scaling to ensure that the output does not explode/vanish. An LSTM module maintains both a cell and a hidden state and, in total, contains $O(4 d_h (d_x + d_h))$ parameters.
2.2 GRU
Simplifying LSTM, Cho et al. (2014) introduced the Gated Recurrent Unit (GRU), which uses only two gates and a single state instead of LSTM’s three gates and two states (hidden and cell state). GRU’s reduced complexity leads to faster training and inference times while achieving competitive performance in many tasks. GRUs are computed as follows:
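$$
\begin{aligned}
z_t &= \sigma(\mathrm{Linear}_{d_h}([x_t, h_{t-1}])) \\
r_t &= \sigma(\mathrm{Linear}_{d_h}([x_t, h_{t-1}])) \\
\tilde{h}_t &= \tanh(\mathrm{Linear}_{d_h}([x_t, r_t \odot h_{t-1}])) \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{aligned}
$$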
where $\tilde{h}_t$ is the candidate hidden state that represents a potential new value for the hidden state. GRU combines LSTM’s forget and input gates into a single update gate $z_t \in (0, 1)$, which decides how much of the past information to carry forward (i.e., $1 - z_t$) and how much new information from the candidate hidden state to add (i.e., $z_t$). Additionally, LSTM’s output gate is removed and, instead, a reset gate $r_t \in (0, 1)$ is added that controls how much past information is used in computing the candidate hidden state. GRU reduces the total number of parameters and computations, requiring only $O(3 d_h (d_x + d_h))$ parameters. However, GRUs and LSTMs are only computable sequentially. As a result, during training they require backpropagating their gradients through time (BPTT), requiring linear training time and greatly limiting their ability to scale to long contexts.
2.3 Parallel Scan
Due to this limitation, Transformers replaced LSTMs and GRUs as the de facto sequence modelling method for years by leveraging parallelization during training. However, Transformers have a quadratic complexity in the sequence length, limiting their ability to scale to long contexts. Recently, many new recurrent models have been proposed as replacements for Transformers; they achieve comparable performance and are trainable in parallel, while avoiding the BPTT issue that traditional RNNs (e.g., LSTMs and GRUs) faced. Although many different architectures have been proposed, many of these models are efficiently trained using the parallel prefix scan algorithm (Blelloch, 1990).
The parallel scan algorithm is a parallel computation method for computing $N$ prefix computations from $N$ sequential data points via an associative operator $\oplus$ (e.g., $+$ and $\times$). The algorithm efficiently computes $\{\bigoplus_{i=1}^{k} x_i\}_{k=1}^{N}$ from $\{x_k\}_{k=1}^{N}$. In particular, we can apply the parallel scan method for efficiently computing a popular family of functions: $v_t = a_t v_{t-1} + b_t$, where $v_t, a_t, b_t \in \mathbb{R}$ (Heinsen, 2023). The method takes as input $a_{1:t}$ and $b_{1:t}$ and computes $v_{1:t}$ via parallel scans.
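As a toy illustration of this family (not the paper’s implementation), the sketch below computes $v_t = a_t v_{t-1} + b_t$ both with a sequential loop and with cumulative products/sums, the parallelizable primitives that a prefix scan provides; Appendix B describes the numerically stabler log-space variant used for training.

```python
import torch

def scan_sequential(a, b, v0=0.0):
    """Reference implementation: v_t = a_t * v_{t-1} + b_t, one step at a time."""
    v, out = v0, []
    for a_t, b_t in zip(a, b):
        v = a_t * v + b_t
        out.append(v)
    return torch.stack(out)

def scan_parallel(a, b, v0=0.0):
    """Same recurrence via prefix products/sums:
    v_t = A_t * (v_0 + sum_{k<=t} b_k / A_k), where A_t = prod_{i<=t} a_i.
    Linear-space version, for illustration only."""
    A = torch.cumprod(a, dim=0)
    return A * (v0 + torch.cumsum(b / A, dim=0))

a = torch.rand(8, dtype=torch.float64) * 0.5 + 0.5   # coefficients, e.g. gate values
b = torch.rand(8, dtype=torch.float64)               # inputs
assert torch.allclose(scan_sequential(a, b), scan_parallel(a, b))
```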
3 Methodology
Naturally, the aforementioned algorithm also extends to vectors: $v_t = a_t \odot v_{t-1} + b_t$, where $\odot$ is the element-wise multiplication. Interestingly, we can see that the GRU and LSTM state recurrences resemble this vector formulation. In this section, we show that GRUs and LSTMs are trainable via parallel scan by simplifying and removing several hidden state dependencies from their various gates. Building on this, we further simplify these RNNs by removing their constraints on output range (i.e., $\tanh$) and ensuring the outputs are time-independent in scale. Combining the steps, we describe minimal versions of GRUs and LSTMs (minGRUs and minLSTMs) that are trainable via parallel scan and perform comparably to Transformers and recently proposed sequence methods.
3.1 A Minimal GRU: minGRU
3.1.1 Step 1: Drop previous hidden state dependencies from gates
Revisiting GRU’s hidden state recurrence which works as follows:
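$$
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
$$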
We can observe that the recurrence resembles the aforementioned parallel scan’s formulation $v_t = a_t \odot v_{t-1} + b_t$, where $a_t = (1 - z_t)$, $b_t = z_t \odot \tilde{h}_t$, and $v_t = h_t$. However, $z_t$ and $\tilde{h}_t$ are dependent on the previous hidden state $h_{t-1}$, i.e., $z_t = \sigma(\mathrm{Linear}_{d_h}([x_t, h_{t-1}]))$ and $\tilde{h}_t = \tanh(\mathrm{Linear}_{d_h}([x_t, r_t \odot h_{t-1}]))$. As a result, it is not possible to apply the parallel scan as is, since the algorithm’s inputs $a_{1:t}$ and $b_{1:t}$ are conditional on already knowing its outputs $h_{1:t-1}$.
We can remedy this by simplifying GRUs, removing their previous hidden state (i.e., $h_{t-1}$) dependencies. Specifically, the changes are as follows:
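$$
\begin{aligned}
z_t &= \sigma(\mathrm{Linear}_{d_h}(x_t)) \\
\tilde{h}_t &= \tanh(\mathrm{Linear}_{d_h}(x_t))
\end{aligned}
$$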
By removing the dependence on $h_{t-1}$ from the candidate hidden state $\tilde{h}_t$, the reset gate $r_t$ that would control $h_{t-1}$’s weight is also no longer needed and is removed. Without the dependencies on previous hidden states, the inputs to the algorithm, $a_{1:t}$ and $b_{1:t}$, are all easily computed in parallel and can thus be used to compute $h_{1:t}$ efficiently via the parallel scan.
3.1.2 Step 2: Drop range restriction of candidate states
In GRU’s hidden state recurrence, the proportion carried over from the previous hidden state ($1 - z_t$) and the amount added for the new candidate hidden state ($z_t$) sum to $1$. As a result, the scale of GRU’s hidden state value is time-independent. Instead, the scale of its hidden state depends on that of its candidate hidden states $\tilde{h}_t$. The hyperbolic tangent function ($\tanh$) plays a crucial role in LSTMs and GRUs, restricting the range of (candidate) hidden states, i.e., $\tilde{h}_t, h_t \in (-1, 1)$. The $\tanh$ helps stabilize the training and mitigates vanishing gradients that result from applying sigmoid ($\sigma$) activations to linear transformations of the hidden state (e.g., $z_t = \sigma(\mathrm{Linear}_{d_h}([x_t, h_{t-1}]))$). In the previous step, these hidden state dependencies were removed. As such, we can simplify GRU further by removing the range restriction ($\tanh$) on the (candidate) hidden states as follows:
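$$
\tilde{h}_t = \mathrm{Linear}_{d_h}(x_t)
$$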
3.1.3 minGRU
Combining the two simplification steps results in a minimal version of GRU (minGRU):
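$$
\begin{aligned}
z_t &= \sigma(\mathrm{Linear}_{d_h}(x_t)) \\
\tilde{h}_t &= \mathrm{Linear}_{d_h}(x_t) \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{aligned}
$$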
The resulting model is significantly more efficient than the original GRU, (1) requiring only $O(2 d_h d_x)$ parameters instead of GRU’s $O(3 d_h (d_x + d_h))$ parameters, where $d_x$ and $d_h$ correspond to the sizes of $x_t$ and $h_t$ respectively. In terms of training, minGRU (2) can be trained in parallel using the parallel scan algorithm, speeding up training significantly. In Section 4.1, we show that this corresponded to a 175× speedup in training steps for a sequence length of 512 on a T4 GPU. The parameter efficiency gains are also significant. Typically, in RNNs, state expansion is performed (i.e., $d_h = \alpha d_x$ where $\alpha \geq 1$), allowing the models to more readily learn features from their inputs. Comparing the parameter counts, minGRU uses a fraction $\frac{2}{3(1+\alpha)}$ of GRU’s parameters, i.e., approximately 33%, 22%, 17%, or 13% when $\alpha = 1, 2, 3,$ or $4$ respectively.
3.2 A Minimal LSTM: minLSTM
3.2.1 Step 1: Drop previous hidden state dependencies from gates
Revisiting LSTMs, we focus on their cell state recurrence which works as follows:
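$$
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
$$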
Similar to GRU’s hidden state, we can see that LSTM’s cell state recurrence resembles the aforementioned parallel scan’s formulation $v_t = a_t \odot v_{t-1} + b_t$, where $a_t = f_t$, $b_t = i_t \odot \tilde{c}_t$, and $v_t = c_t$. However, $f_t$, $i_t$, and $\tilde{c}_t$ are dependent on the previous hidden state $h_{t-1}$. As such, LSTM’s cell state recurrence is unable to apply the parallel scan algorithm as is. We can address this in a similar fashion to GRU by removing their hidden state dependencies as follows:
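$$
\begin{aligned}
f_t &= \sigma(\mathrm{Linear}_{d_h}(x_t)) \\
i_t &= \sigma(\mathrm{Linear}_{d_h}(x_t)) \\
\tilde{c}_t &= \tanh(\mathrm{Linear}_{d_h}(x_t))
\end{aligned}
$$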
3.2.2 Step 2: Drop range restriction of candidate states
Similar to GRUs, LSTMs leverage the hyperbolic tangent function ($\tanh$) to restrict the range of their states to $(-1, 1)$. LSTMs apply the range restriction twice: once when computing the candidate cell state and once when computing the hidden state. In this step, we drop both as follows:
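$$
\begin{aligned}
\tilde{c}_t &= \mathrm{Linear}_{d_h}(x_t) \\
h_t &= o_t \odot c_t
\end{aligned}
$$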
3.2.3 Step 3: Ensure output is time-independent in scale
In many sequence modelling settings (e.g., text generation), the optimization objective/target is time-independent in scale. Recall LSTM’s cell state recurrence $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$, where $f_t, i_t \in (0, 1)$, and GRU’s hidden state recurrence² $h^{\mathrm{GRU}}_t = (1 - z_t) \odot h^{\mathrm{GRU}}_{t-1} + z_t \odot \tilde{h}^{\mathrm{GRU}}_t$, where $z_t \in (0, 1)$. GRUs retain $(1 - z_t)$ of the previous hidden state and add $z_t$ of the new candidate state. Since these proportions sum to $1$, the model ensures its outputs (i.e., hidden states) are time-independent in scale. In contrast, LSTM’s forget and input gates are computed independently of each other (e.g., nothing constrains $f_t + i_t$ to equal $1$), making its cell states time-dependent in scale³ and optimization more difficult. As such, we ensure LSTM’s output is time-independent in scale.

² A superscript is added to differentiate GRU’s hidden state from LSTM’s.

³ For example, when $f_t, i_t \approx 1$, the cell state $c_t \approx c_{t-1} + \tilde{c}_t$ grows in scale as the sequence length increases.
To do so, we can simply normalize the two gates, i.e., $f'_t = \frac{f_t}{f_t + i_t}$ and $i'_t = \frac{i_t}{f_t + i_t}$, ensuring that $f'_t + i'_t = 1$ and that the scale of LSTM’s cell state is time-independent. Having ensured that the hidden state is time-independent in scale, we also drop the output gate $o_t$, which scales the hidden state. Without the output gate, the normalized hidden state is equal to the cell state, i.e., $h_t = c_t$, making it unnecessary to keep both a hidden and a cell state. As such, we drop the cell state as well. In summary, the modifications are as follows:
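$$
\begin{aligned}
f'_t &= \frac{f_t}{f_t + i_t}, \qquad i'_t = \frac{i_t}{f_t + i_t} \\
h_t &= f'_t \odot h_{t-1} + i'_t \odot \tilde{h}_t
\end{aligned}
$$

where the candidate state $\tilde{c}_t$ is renamed $\tilde{h}_t$ since the cell state has been dropped.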
Notably, GRUs do not need this step as their outputs are already time-independent in scale.
3.2.4 minLSTM
Combining the three steps results in a minimal version of LSTM (minLSTM):
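$$
\begin{aligned}
f_t &= \sigma(\mathrm{Linear}_{d_h}(x_t)) \\
i_t &= \sigma(\mathrm{Linear}_{d_h}(x_t)) \\
\tilde{h}_t &= \mathrm{Linear}_{d_h}(x_t) \\
f'_t &= \frac{f_t}{f_t + i_t}, \qquad i'_t = \frac{i_t}{f_t + i_t} \\
h_t &= f'_t \odot h_{t-1} + i'_t \odot \tilde{h}_t
\end{aligned}
$$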
The minimal version (minLSTM) is significantly more efficient, (1) requiring only $O(3 d_h d_x)$ parameters compared to LSTM’s $O(4 d_h (d_x + d_h))$. Furthermore, minLSTM (2) can be trained in parallel using the parallel scan algorithm, speeding up training significantly. For example, in Section 4.1, we found that minLSTM corresponded to a 235× speedup for a sequence of length 512 compared to LSTM on a T4 GPU. In terms of parameter efficiency, minLSTM uses a fraction $\frac{3}{4(1+\alpha)}$ of LSTM’s parameters, i.e., approximately 38%, 25%, 19%, or 15% when $\alpha = 1, 2, 3,$ or $4$ respectively, where $d_h = \alpha d_x$.
4 Were RNNs All We Needed?
In this section, we compare the minimal versions (minLSTMs and minGRUs) with their traditional counterparts (LSTMs and GRUs) and modern sequence models. Pseudocode, PyTorch implementation, and detailed information regarding the experiment setup are available in the Appendix.
4.1 Minimal LSTMs and GRUs are very efficient
At test time, recurrent sequence models are rolled out sequentially, making inference efficient. The bottleneck of traditional RNNs is instead their training, which requires backpropagation through time (BPTT) and thus time linear in the sequence length; this ultimately led to their deprecation. The renewed interest in recurrent sequence models is due to many new architectures being efficiently trainable in parallel (Gu et al., 2021). In this section, we compare the resources required to train the traditional RNNs (LSTM and GRU), their minimal versions (minLSTM and minGRU), and a recent state-of-the-art sequence model. In particular, we focus on the comparison with Mamba (Gu & Dao, 2024), which has seen significant popularity recently. For these experiments, we fix the batch size and vary the sequence length. We measure the total runtime and memory required to perform a forward pass through the models, compute a loss, and compute gradients via a backward pass.
Runtime. In terms of runtime (see Figure 1 (left)), the simplified versions of LSTM and GRU (minLSTM and minGRU) and Mamba achieve similar runtimes: averaged over multiple runs, the measured runtimes (in milliseconds) of minLSTM, minGRU, and Mamba were nearly identical at the sequence lengths considered. In contrast, the traditional RNN counterparts (LSTMs and GRUs) required a runtime that scaled linearly with the sequence length. For a sequence length of 512, minGRUs and minLSTMs were 175× and 235× faster per training step (see Figure 1 (middle)) than GRUs and LSTMs on a T4 GPU, and the improvement grows even larger as sequences grow in length. As such, in a setting where minGRU would take a day to finish training for a fixed number of epochs, its traditional counterpart GRU could take years.
Memory. By leveraging the parallel scan algorithm to compute their outputs efficiently in parallel, minGRU, minLSTM, and Mamba create larger computational graphs and thus need more memory than traditional RNNs (see Figure 1 (right)). The minimal variants (minGRU and minLSTM) use more memory than their traditional counterparts, and Mamba uses more memory than minGRU. In practice, however, runtime is the bottleneck when training RNNs.
Effect of removing $h_{t-1}$. The original LSTM and GRU compute their various gates using their inputs $x_t$ and previous hidden states $h_{t-1}$. These models leverage their time-dependent gates to learn complex functions. However, minLSTM and minGRU’s training efficiencies are achieved by dropping their gates’ dependencies on the previous hidden state $h_{t-1}$. As a result, minLSTM and minGRU’s gates are dependent only on their inputs $x_t$, resulting in a simpler recurrent module. As such, the gates of a model consisting of a single layer of minLSTM or minGRU are time-independent due to being conditioned on time-independent inputs $x_{1:t}$.
| Model | # Layers | Accuracy |
| --- | --- | --- |
| minLSTM | 1 | 37.6 ± 2.0 |
| minLSTM | 2 | 85.7 ± 5.8 |
| minLSTM | 3 | 96.0 ± 2.8 |
| minGRU | 1 | 37.0 ± 2.3 |
| minGRU | 2 | 96.8 ± 3.2 |
| minGRU | 3 | 99.5 ± 0.2 |
However, in deep learning, models are constructed by stacking modules. Although the inputs to the first layer, $x_{1:t}$, are time-independent, its outputs, $h_{1:t}$, are time-dependent and are used as the inputs to the second layer, i.e., $x^{(2)}_t := h^{(1)}_t$. As such, beginning from the second layer onwards, minLSTM and minGRU’s gates will also be time-dependent, resulting in the modelling of more complex functions. In Table 1, we compare the performance of the models with varying numbers of layers on the Selective Copying Task from the Mamba paper (Gu & Dao, 2024). We can immediately see the impact of the time dependencies: increasing the number of layers to 2 or more drastically increases the model’s performance.
Training Stability. Another effect of the number of layers is increased stability with decreased variance in the accuracy as the number of layers increases (see Table 1). Furthermore, although minLSTM and minGRU both solve the Selective Copying task, we can see that minGRU is an empirically more stable method than minLSTM, solving the task with more consistency and lower variance. minLSTM discards old information and adds new information, controlling the ratio with two sets of parameters (forget and input gate). During training, the two sets of parameters are tuned in different directions, making the ratio harder to control and optimize. In contrast, minGRU’s discarding and adding of information is controlled by a single set of parameters (update gate), making it easier to optimize.
| Model | Layer | Accuracy |
| --- | --- | --- |
| H3 | Hyena | 30.1 |
| Mamba | Hyena | 28.4 |
| S4 | S4 | 18.3 |
| H3 | S4 | 57.0 |
| Mamba | S4 | 56.4 |
| S4 | S6 | 97.0 |
| H3 | S6 | 99.7 |
| Mamba | S6 | 99.8 |
| minGRU | minGRU | 99.5 ± 0.2 |
| minLSTM | minLSTM | 96.0 ± 2.8 |
4.2 Minimal LSTMs and GRUs perform well
In the previous section, we showed the significant efficiency gains achieved by simplifying traditional RNNs. Here, we explore the empirical performance aspect of these minimal versions of LSTMs and GRUs compared to several popular sequence models.
Selective Copy. We consider the long-range Selective Copying task from the Mamba paper (Gu & Dao, 2024). Unlike the original Copying task (Arjovsky et al., 2016), the Selective Copying task’s input elements are randomly spaced relative to their output, making the task harder. To solve the task, models are required to perform content-aware reasoning, memorizing relevant and filtering out irrelevant tokens.
In Table 2, we compare the simplified versions of LSTMs and GRUs (minLSTM and minGRU) against well-known recurrent sequence models that can be trained in parallel: S4 (Gu et al., 2021), H3 (Fu et al., 2023), Hyena (Poli et al., 2023), and Mamba (S6) (Gu & Dao, 2024). The results for these baselines are quoted from the Mamba paper. Out of these baselines, only the configurations using Mamba’s S6 layer are capable of solving this task. minGRU and minLSTM also solve the Selective Copying task, achieving performance comparable to S6 and outperforming all other baselines. LSTMs and GRUs leverage content-aware gating mechanisms, making these minimal versions sufficient for solving this task, which many popular sequence models fail to solve.
| Dataset | DT | DS4 | DAaren | DMamba | minLSTM | minGRU |
| --- | --- | --- | --- | --- | --- | --- |
| HalfCheetah-M | 42.6 | 42.5 | 42.2 | 42.8 | 42.7 ± 0.7 | 43.0 ± 0.4 |
| Hopper-M | 68.4 | 54.2 | 80.9 | 83.5 | 85.0 ± 4.4 | 79.4 ± 8.2 |
| Walker-M | 75.5 | 78.0 | 74.4 | 78.2 | 72.0 ± 7.5 | 73.3 ± 3.3 |
| HalfCheetah-M-R | 37.0 | 15.2 | 37.9 | 39.6 | 38.6 ± 1.1 | 38.5 ± 1.1 |
| Hopper-M-R | 85.6 | 49.6 | 77.9 | 82.6 | 88.5 ± 4.7 | 90.5 ± 0.9 |
| Walker-M-R | 71.2 | 69.0 | 71.4 | 70.9 | 69.7 ± 10.7 | 72.8 ± 8.9 |
| HalfCheetah-M-E | 88.8 | 92.7 | 75.7 | 91.9 | 85.4 ± 1.7 | 86.3 ± 0.5 |
| Hopper-M-E | 109.6 | 110.8 | 103.9 | 111.1 | 110.3 ± 1.6 | 109.7 ± 2.7 |
| Walker-M-E | 109.3 | 105.7 | 110.5 | 108.3 | 110.3 ± 0.5 | 110.3 ± 0.4 |
| Average | 76.4 | 68.6 | 75.0 | 78.8 | 78.1 | 78.2 |
Reinforcement Learning. Next, we consider the MuJoCo locomotion tasks from the D4RL benchmark (Fu et al., 2020). Specifically, we consider the three environments: HalfCheetah, Hopper, and Walker. For each environment, the models are trained on three datasets of varying data quality: Medium (M), Medium-Replay (M-R), and Medium-Expert (M-E).
In Table 3, we compare minLSTM and minGRU with various Decision Transformer variants, including the original Decision Transformer (DT) (Chen et al., 2021), Decision S4 (DS4) (David et al., 2023), Decision Mamba (Ota, 2024), and (Decision) Aaren (Feng et al., 2024). The baseline results are retrieved from the Decision Mamba and Aaren papers. minLSTM and minGRU outperform Decision S4 and achieve performance competitive with Decision Transformer, Aaren, and Mamba. Unlike the other recurrent methods, Decision S4 is a model whose recurrence transitions are not input-aware, which hurts its performance. In terms of average score across the datasets, minLSTM and minGRU outperform all baselines except Decision Mamba, where the difference is marginal.
Language Modelling. Finally, we consider a language modelling task. In this setting, we train a character-level GPT on the works of Shakespeare using the nanoGPT (Karpathy, 2022) framework. In Figure 2, we plot the learning curves (cross-entropy loss), comparing the proposed minimal LSTM and GRU (minLSTM and minGRU) with Mamba and Transformers. We found that minGRU, minLSTM, Mamba, and Transformers achieved comparable test losses. Mamba performed slightly worse than the other models but trained faster, particularly in the early stages, reaching its best performance in fewer training steps, whereas minGRU and minLSTM continued to improve with additional training. In contrast, the Transformer trained significantly slower, requiring substantially more training steps than minGRU to achieve comparable performance, making it significantly slower and more resource-intensive to train (quadratic complexity compared to minGRU, minLSTM, and Mamba’s linear complexity).
5 Related Work
In this section, we provide a discussion of the similarities and differences between existing recurrent sequence models and the simplified versions of LSTMs and GRUs (minLSTM and minGRU).
State-Space Models (SSMs). Although Mamba (Gu & Dao, 2024) and state-space models have gained significant popularity recently, the steps toward the recent success of Mamba began years ago. Gu et al. (2020) first proposed a discretized structured state-space model. Gu et al. (2021) scaled the idea up, introducing S4. The success of S4 became the basis for many follow-up works (Gu et al., 2022; Gupta et al., 2022; Hasani et al., 2023; Smith et al., 2023) and state-space model applications in language (Mehta et al., 2023), audio (Goel et al., 2022), and more. Recently, Mamba marked a significant breakthrough among SSMs, outperforming previous methods and garnering substantial attention. A major novelty in Mamba was the proposal of S6, a state-space model whose transition matrices are input-dependent (i.e., functions of the input $x_t$). In contrast, earlier state-space models’ transition matrices were input-independent, limiting their expressivity. The success of Mamba and state-space models led to the writing of several survey papers (Wang et al., 2024; Patro & Agneeswaran, 2024; Qu et al., 2024).
Recurrent Versions of Attention. Another direction that proposed efficient recurrent sequence models is that of attention. Building on variations of linear attention (Katharopoulos et al., 2020), several papers have introduced recurrent versions that can be computed in parallel. Notably, Sun et al. (2023) and Qin et al. (2023) introduced variants that use an input-independent gating mechanism (decay factor). More recently, Katsch (2023) and Yang et al. (2024) proposed linear attention variants that use input-dependent gating. Feng et al. (2024) showed softmax attention can be viewed as an RNN and proposed a recurrent model based on their RNN formulation.
Parallelizable RNNs. Alternatively, several papers have proposed RNNs that can be trained efficiently in parallel. Orvieto et al. (2023) proposed an RNN that leverages complex diagonal recurrences and an exponential parameterization. Beck et al. (2024) proposed various enhancements to LSTM such as exponential gating, covariance update rule, and a normalizer state.
Although these three directions of designing efficient recurrent sequence models have proposed vastly different architectures, the core recurrent component of these models is remarkably similar. For example, although state-space models are typically written as $h_t = A_t h_{t-1} + B_t x_t$ with matrix-valued transitions, in practice the transition matrices are typically diagonal for efficiency reasons. As such, Mamba’s S6 (Gu & Dao, 2024) can be viewed as $h_t = a_t \odot h_{t-1} + b_t \odot x_t$, where $a_t$ and $b_t$ are functions of $x_t$. In contrast, consider the minimal version of GRU, $h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$, and the minimal version of LSTM, $h_t = f'_t \odot h_{t-1} + i'_t \odot \tilde{h}_t$. The recurrences of these models are similar. The major difference between these minimal RNNs, Mamba’s S6, and other models is how their transitions (e.g., $a_t$ and $b_t$) are computed from the input token $x_t$.
Parallel Scan. Generalizing across these families of methods (including minLSTM and minGRU), these recent sequence models can be viewed as members of the same family of functions trainable via a parallel scan: $v_t = a_t \odot v_{t-1} + b_t$ (see Section 2.3), where $a_t$ and $b_t$ are functions of the input token $x_t$. Improving upon the parallel scan algorithm, several models (Yang et al., 2024; Gu & Dao, 2024), such as Mamba, have proposed specialized hardware-efficient methods that leverage the GPU’s memory hierarchy to reduce high I/O costs and speed up training. In our work, we implemented minLSTM and minGRU in plain PyTorch. However, due to the structural similarities in recurrences among the numerous methods that leverage parallel scan, many training speed-up techniques, such as chunking, that apply to one method can also apply to others such as minGRU and minLSTM.
Parameter Initializations. Unrolling the recurrences of these new recurrent sequence models over time often results in their outputs and gradients vanishing/exploding (Wang et al., 2024) due to time dependency in their output’s scale. To ensure model stability, the parameters of many models such as state-space models are initialized according to special distributions (Gu et al., 2020, 2022; Orvieto et al., 2023). In contrast, we found that minLSTM and minGRU are already stable using the default PyTorch initialization. Unlike SSMs, minLSTM and minGRU’s outputs are time-independent in scale, avoiding potential instabilities.
6 Conclusion
In this work, we revisited RNNs from over a decade ago: LSTMs and GRUs. We showed that these models become trainable via the parallel scan algorithm once the hidden state dependencies are removed from their gates. Simplifying the models further, we removed their constraints on output range and ensured their outputs are time-independent in scale. These steps result in their minimal versions (minLSTM and minGRU). Empirically, we showed that minLSTM and minGRU (1) address the computational limitations of their traditional counterparts, (2) are as computationally efficient as Mamba, a popular recent state-of-the-art recurrent sequence model, and (3) are competitive in performance with recent sequence models. Considering the strong empirical performance of these simplified RNNs and their fundamental similarities with many recently proposed recurrent sequence methods, we pose the question: “Were RNNs all we needed?”
Limitations
Our experiments were run on P100 (16 GBs) and T4 (16 GBs) GPUs. Due to computation limitations, our experiments are smaller in scale compared to works such as Mamba (Gu & Dao, 2024) which leveraged A100 80GB GPUs. To fit the selective copy task on the GPU, we leveraged gradient accumulation for training, splitting the standard batch size in half and slowing training significantly. Nonetheless, we hypothesize that these conclusions generalize to larger-scale settings due to the fundamental similarities between the minimal RNNs (minLSTM and minGRU) and many recent sequence methods.
References
- Arjovsky et al. (2016) Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. In International conference on machine learning, pp. 1120–1128. PMLR, 2016.
- Beck et al. (2024) Maximilian Beck, Korbinian Pöppel, Markus Spanring, Andreas Auer, Oleksandra Prudnikova, Michael Kopp, Günter Klambauer, Johannes Brandstetter, and Sepp Hochreiter. xlstm: Extended long short-term memory. arXiv preprint arXiv:2405.04517, 2024.
- Blelloch (1990) Guy E Blelloch. Prefix sums and their applications. Technical Report CMU-CS-90-190, School of Computer Science, Carnegie Mellon University, 1990.
- Brown et al. (2020) Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. ArXiv, abs/2005.14165, 2020.
- Chen et al. (2021) Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. Advances in neural information processing systems, 34:15084–15097, 2021.
- Cho et al. (2014) Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP, 2014.
- David et al. (2023) Shmuel Bar David, Itamar Zimerman, Eliya Nachmani, and Lior Wolf. Decision s4: Efficient sequence-based rl via state spaces layers. In The Eleventh International Conference on Learning Representations, 2023.
- Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics, 2019.
- Elman (1990) Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179–211, 1990. ISSN 0364-0213.
- Feng et al. (2024) Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Mohamed Osama Ahmed, Yoshua Bengio, and Greg Mori. Attention as an rnn. arXiv preprint arXiv:2405.13956, 2024.
- Fu et al. (2023) Daniel Y Fu, Tri Dao, Khaled Kamal Saab, Armin W Thomas, Atri Rudra, and Christopher Re. Hungry hungry hippos: Towards language modeling with state space models. In The Eleventh International Conference on Learning Representations, 2023.
- Fu et al. (2020) Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.
- Goel et al. (2022) Karan Goel, Albert Gu, Chris Donahue, and Christopher Ré. It’s raw! audio generation with state-space models. In International Conference on Machine Learning, pp. 7616–7633. PMLR, 2022.
- Gu & Dao (2024) Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2024.
- Gu et al. (2020) Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. Hippo: Recurrent memory with optimal polynomial projections. Advances in neural information processing systems, 33:1474–1487, 2020.
- Gu et al. (2021) Albert Gu, Karan Goel, and Christopher Re. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations, 2021.
- Gu et al. (2022) Albert Gu, Karan Goel, Ankit Gupta, and Christopher Ré. On the parameterization and initialization of diagonal state space models. Advances in Neural Information Processing Systems, 35:35971–35983, 2022.
- Gupta et al. (2022) Ankit Gupta, Albert Gu, and Jonathan Berant. Diagonal state spaces are as effective as structured state spaces. Advances in Neural Information Processing Systems, 35:22982–22994, 2022.
- Hasani et al. (2023) Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, and Daniela Rus. Liquid structural state-space models. In The Eleventh International Conference on Learning Representations, 2023.
- Heinsen (2023) Franz A Heinsen. Parallelization of an ubiquitous sequential computation. arXiv preprint arXiv:2311.06281, 2023.
- Hochreiter & Schmidhuber (1997) S Hochreiter and J Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
- Karpathy (2022) Andrej Karpathy. NanoGPT. https://github.com/karpathy/nanoGPT, 2022.
- Katharopoulos et al. (2020) Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In International conference on machine learning, pp. 5156–5165. PMLR, 2020.
- Katsch (2023) Tobias Katsch. Gateloop: Fully data-controlled linear recurrence for sequence modeling. arXiv preprint arXiv:2311.01927, 2023.
- Mehta et al. (2023) Harsh Mehta, Ankit Gupta, Ashok Cutkosky, and Behnam Neyshabur. Long range language modeling via gated state spaces. In The Eleventh International Conference on Learning Representations, 2023.
- Orvieto et al. (2023) Antonio Orvieto, Samuel L Smith, Albert Gu, Anushan Fernando, Caglar Gulcehre, Razvan Pascanu, and Soham De. Resurrecting recurrent neural networks for long sequences. In International Conference on Machine Learning, pp. 26670–26698. PMLR, 2023.
- Ota (2024) Toshihiro Ota. Decision mamba: Reinforcement learning via sequence modeling with selective state spaces. arXiv preprint arXiv:2403.19925, 2024.
- Patro & Agneeswaran (2024) Badri Narayana Patro and Vijay Srinivas Agneeswaran. Mamba-360: Survey of state space models as transformer alternative for long sequence modelling: Methods, applications, and challenges. arXiv preprint arXiv:2404.16112, 2024.
- Peng et al. (2023) Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Stella Biderman, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, et al. Rwkv: Reinventing rnns for the transformer era. arXiv preprint arXiv:2305.13048, 2023.
- Poli et al. (2023) Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. Hyena hierarchy: Towards larger convolutional language models. In International Conference on Machine Learning, pp. 28043–28078. PMLR, 2023.
- Qin et al. (2023) Zhen Qin, Dong Li, Weigao Sun, Weixuan Sun, Xuyang Shen, Xiaodong Han, Yunshen Wei, Baohong Lv, Fei Yuan, Xiao Luo, et al. Scaling transnormer to 175 billion parameters. arXiv preprint arXiv:2307.14995, 2023.
- Qu et al. (2024) Haohao Qu, Liangbo Ning, Rui An, Wenqi Fan, Tyler Derr, Xin Xu, and Qing Li. A survey of mamba. arXiv preprint arXiv:2408.01129, 2024.
- Smith et al. (2023) Jimmy TH Smith, Andrew Warrington, and Scott Linderman. Simplified state space layers for sequence modeling. In The Eleventh International Conference on Learning Representations, 2023.
- Sun et al. (2023) Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, and Furu Wei. Retentive network: A successor to transformer for large language models. arXiv preprint arXiv:2307.08621, 2023.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30(2017), 2017.
- Wang et al. (2024) Xiao Wang, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, Yao Rong, Weizhe Kong, Ju Huang, Shihao Li, Haoxiang Yang, et al. State space model for new-generation network alternative to transformers: A survey. arXiv preprint arXiv:2404.09516, 2024.
- Yang et al. (2024) Songlin Yang, Bailin Wang, Yikang Shen, Rameswar Panda, and Yoon Kim. Gated linear attention transformers with hardware-efficient training. In International Conference on Machine Learning, 2024.
Appendix A Implementation Details: Vanilla Version
In this section, we provide the pseudocode and equivalent PyTorch code for minGRU and minLSTM. When performing repeated multiplications such as in many recurrent sequence models, numerical instabilities are common, especially during training. As such, we trained using a log-space implementation (see Section B) for improved numerical stability.
A.1 Pseudocode: Vanilla Version
A.1.1 minGRU: A Minimal GRU
A.1.2 minLSTM: A Minimal LSTM
A.2 PyTorch Code: Vanilla Version
A.2.1 minGRU: A Minimal GRU
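The original listing is not reproduced here; the following is a minimal PyTorch sketch consistent with the minGRU equations of Section 3.1.3 (class and helper names are illustrative, not the paper’s code). The parallel mode uses a simple linear-space scan for readability; the log-space scan of Appendix B should be preferred in practice.

```python
import torch
import torch.nn as nn


def parallel_scan(a, b, h0):
    """Compute h_t = a_t * h_{t-1} + b_t for t = 1..T without a Python loop.
    a, b: (batch, seq_len, dim); h0: (batch, dim).
    Linear-space version for readability; see Appendix B for the log-space scan."""
    A = torch.cumprod(a, dim=1)                               # prod_{i<=t} a_i
    return A * (h0.unsqueeze(1) + torch.cumsum(b / A, dim=1))


class MinGRU(nn.Module):
    def __init__(self, dim_x, dim_h):
        super().__init__()
        self.linear_z = nn.Linear(dim_x, dim_h)               # update gate z_t
        self.linear_h = nn.Linear(dim_x, dim_h)               # candidate state

    def step(self, x_t, h_prev):
        """Sequential mode (inference): one token at a time."""
        z = torch.sigmoid(self.linear_z(x_t))
        h_tilde = self.linear_h(x_t)
        return (1 - z) * h_prev + z * h_tilde

    def forward(self, x, h0):
        """Parallel mode (training). x: (batch, seq_len, dim_x), h0: (batch, dim_h)."""
        z = torch.sigmoid(self.linear_z(x))
        h_tilde = self.linear_h(x)
        return parallel_scan(1 - z, z * h_tilde, h0)
```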
A.2.2 minLSTM: A Minimal LSTM
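Likewise, a minimal PyTorch sketch consistent with the minLSTM equations of Section 3.2.4 (names are illustrative; the linear-space scan is repeated here to keep the sketch self-contained).

```python
import torch
import torch.nn as nn


def parallel_scan(a, b, h0):
    """h_t = a_t * h_{t-1} + b_t for t = 1..T (linear-space; see Appendix B)."""
    A = torch.cumprod(a, dim=1)
    return A * (h0.unsqueeze(1) + torch.cumsum(b / A, dim=1))


class MinLSTM(nn.Module):
    def __init__(self, dim_x, dim_h):
        super().__init__()
        self.linear_f = nn.Linear(dim_x, dim_h)               # forget gate f_t
        self.linear_i = nn.Linear(dim_x, dim_h)               # input gate i_t
        self.linear_h = nn.Linear(dim_x, dim_h)               # candidate state

    def step(self, x_t, h_prev):
        """Sequential mode (inference): one token at a time."""
        f = torch.sigmoid(self.linear_f(x_t))
        i = torch.sigmoid(self.linear_i(x_t))
        h_tilde = self.linear_h(x_t)
        f_prime, i_prime = f / (f + i), i / (f + i)           # f' + i' = 1
        return f_prime * h_prev + i_prime * h_tilde

    def forward(self, x, h0):
        """Parallel mode (training). x: (batch, seq_len, dim_x), h0: (batch, dim_h)."""
        f = torch.sigmoid(self.linear_f(x))
        i = torch.sigmoid(self.linear_i(x))
        h_tilde = self.linear_h(x)
        denom = f + i
        return parallel_scan(f / denom, (i / denom) * h_tilde, h0)
```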
Appendix B Implementation Details: Log-Space Version (Additional Numerical Stability)
In this section, we detail the log-space version of minLSTM and minGRU for improved numerical stability. During training, the parallel modes are used to avoid backpropagation through time (BPTT), speeding up the training time significantly. At inference time, the sequential modes are used.
B.1 Parallel Scan: Log-Space Implementation
Recall that the parallel scan’s objective is to compute $h_{1:t}$, where $h_t = a_t \odot h_{t-1} + b_t$. In code, the vanilla parallel scan function takes as input the coefficients $a_{1:t}$ and the values $b_{1:t}$, and outputs $h_{1:t}$. For numerical stability, we consider a log-space implementation which instead takes as input $\log(a_{1:t})$ and $\log(b_{1:t})$ and outputs $h_{1:t}$. The code for the parallel scan in log-space is included below and is based on the code by Heinsen (2023).
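The original listing is not reproduced here; the sketch below follows Heinsen (2023) and matches the interface described above. Prepending $\log(h_0)$ as the first entry of the values is an implementation convention assumed in this sketch.

```python
import torch
import torch.nn.functional as F


def parallel_scan_log(log_coeffs, log_values):
    """Log-space scan (Heinsen, 2023): returns h_{1:T} for h_t = a_t * h_{t-1} + b_t.
    log_coeffs: log(a_{1:T}), shape (batch, T, dim).
    log_values: log of [h_0, b_1, ..., b_T], shape (batch, T + 1, dim).
    Assumes a_t, b_t, h_0 > 0 so that their logs are well-defined."""
    # a*_t = sum_{i<=t} log a_i, padded with a*_0 = 0 for the h_0 term.
    a_star = F.pad(torch.cumsum(log_coeffs, dim=1), (0, 0, 1, 0))
    # log h_t = a*_t + log( h_0 + sum_{1<=k<=t} exp(log b_k - a*_k) ), computed stably.
    log_h0_plus_b_star = torch.logcumsumexp(log_values - a_star, dim=1)
    log_h = a_star + log_h0_plus_b_star
    return torch.exp(log_h)[:, 1:]
```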
B.2 Pseudocode: Log-Space Version
For maximal numerical stability, we rewrite minGRU and minLSTM in log-space.
B.2.1 minGRU: A Minimal GRU
Recall minGRU’s recurrence: $h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$. In the parallel scan formulation, $a_t = 1 - z_t$ and $b_t = z_t \odot \tilde{h}_t$, where $z_t = \sigma(\mathrm{Linear}_{d_h}(x_t))$ and $\tilde{h}_t = \mathrm{Linear}_{d_h}(x_t)$. As a result, $\log(a_t) = \log(1 - z_t)$ and $\log(b_t) = \log(z_t) + \log(\tilde{h}_t)$. We can break these down as follows:
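$$
\begin{aligned}
\log(z_t) &= -\mathrm{softplus}(-\mathrm{Linear}_{d_h}(x_t)) \\
\log(1 - z_t) &= -\mathrm{softplus}(\mathrm{Linear}_{d_h}(x_t)) \\
\log(\tilde{h}_t) &= \log(\mathrm{Linear}_{d_h}(x_t))
\end{aligned}
$$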
where $\mathrm{softplus}(x) = \log(1 + e^{x})$. However, we need to compute $\log(\tilde{h}_t)$, which is inconvenient if $\tilde{h}_t$ has some negative values. We could use complex numbers and a complex-number version of the parallel scan, but this would increase the complexity of the parallel scan. Instead, we propose to ensure that $\tilde{h}_t > 0$. This can be done in a variety of ways. In our experiments, we added a continuous activation function $g$, replacing $\tilde{h}_t = \mathrm{Linear}_{d_h}(x_t)$ with $\tilde{h}_t = g(\mathrm{Linear}_{d_h}(x_t))$, where $g(x) = x + 0.5$ if $x \geq 0$ and $g(x) = \sigma(x)$ otherwise, and its log is $\log g(x) = \log(x + 0.5)$ if $x \geq 0$ and $-\mathrm{softplus}(-x)$ otherwise.
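A PyTorch sketch of this $g$ and its log (function names are illustrative; the relu inside log_g only keeps the unselected branch of torch.where free of invalid logs):

```python
import torch
import torch.nn.functional as F

def g(x):
    """Continuous activation ensuring the candidate state g(Linear(x_t)) is positive."""
    return torch.where(x >= 0, x + 0.5, torch.sigmoid(x))

def log_g(x):
    """log g(x): log(x + 0.5) for x >= 0, and log(sigmoid(x)) = -softplus(-x) otherwise."""
    return torch.where(x >= 0, (F.relu(x) + 0.5).log(), -F.softplus(-x))
```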
At inference time, the sequential mode (Algorithm 5) is used. During training, the parallel mode (Algorithm 6) is used.
B.2.2 minLSTM: A Minimal LSTM
We also derive minLSTM’s log-space formulation. Recall minLSTM’s recurrence: $h_t = f'_t \odot h_{t-1} + i'_t \odot \tilde{h}_t$. In the parallel scan formulation, $a_t = f'_t$ and $b_t = i'_t \odot \tilde{h}_t$. As a result, $\log(a_t) = \log(f'_t)$ and $\log(b_t) = \log(i'_t) + \log(\tilde{h}_t)$.
Recall that $f_t$ and $i_t$ are computed via sigmoids. In other words, $f_t = \sigma(k^{(f)}_t)$ and $i_t = \sigma(k^{(i)}_t)$, where $k^{(f)}_t = \mathrm{Linear}_{d_h}(x_t)$ and $k^{(i)}_t = \mathrm{Linear}_{d_h}(x_t)$ are separate linear transformations. Furthermore, recall from minGRU’s derivation that $\log \sigma(k) = -\mathrm{softplus}(-k)$. Using this, we can simplify the computation of $\log(f'_t)$ as follows:
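$$
\begin{aligned}
\log(f'_t) &= \log\!\left(\frac{f_t}{f_t + i_t}\right) = \log\!\left(\frac{1}{1 + i_t / f_t}\right) = \log \sigma\!\left(\log(f_t) - \log(i_t)\right) \\
&= -\mathrm{softplus}\!\left(\log(i_t) - \log(f_t)\right) \\
&= -\mathrm{softplus}\!\big(\mathrm{softplus}(-k^{(f)}_t) - \mathrm{softplus}(-k^{(i)}_t)\big)
\end{aligned}
$$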
Similarly, we also get that:
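$$
\log(i'_t) = -\mathrm{softplus}\big(\mathrm{softplus}(-k^{(i)}_t) - \mathrm{softplus}(-k^{(f)}_t)\big)
$$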
Combining these derivations, we get the parallel mode (Algorithm 8) for efficient training.
B.3 PyTorch Code: Log-Space Version
B.3.1 minGRU: A Minimal GRU
B.3.2 minLSTM: A Minimal LSTM
Appendix C Detailed Experiment Setup
In this section, we describe the experiment setup in detail.
C.1 Architecture
In all models, residual connections are added between layers and layer norms are applied before each layer.
Selective Copying. Each layer in the model consisted of (1) either a minLSTM or minGRU layer and (2) a linear layer.
Reinforcement Learning. In this work, we consider the Decision Transformer framework for (Offline) RL. Following prior works (Feng et al., 2024; Ota, 2024), we replace the Self-Attention module with our recurrent sequence modules: minLSTM and minGRU respectively.
Language Modelling. Prior works (Gu & Dao, 2024; Beck et al., 2024) apply a convolutional layer in addition to their recurrent sequence module. Following them, a layer of the model consists of an RNN (minLSTM and minGRU), a convolutional layer applied temporally with a kernel size of , and a two-layer MLP.
C.2 Hyperparameters and general experimental details
Selective Copying. Models are trained for steps with a batch size of , and using early stopping. The optimizer used is Adam with a learning rate of . Due to GPU memory limitations, gradient accumulation is performed during training. Gradients for two batches of size are accumulated for each gradient update and clipped to . Each model consists of layers, an input dimension of , and a dropout ratio of . minLSTM and minGRU have an expansion factor of . Results for the baselines are referenced from the Mamba paper.
Reinforcement Learning. We follow the hyperparameter settings outlined by Ota (2024). For Hopper (Medium) and Hopper (Medium-Replay), an embedding dimension of is used, while all other environments utilize an embedding dimension of . The learning rate is set to for Hopper (Medium), Hopper (Medium-Replay), and Walker (Medium). For all other environments and datasets, the learning rate is . The models are optimized using AdamW with a weight decay of and a linear warmup for steps. Each model consists of layers and has a dropout ratio of . The models are trained for steps with a batch size of . Gradients are clipped to . Results for the baselines are referenced from the Mamba and Aaren papers.
Language Modelling. The models are optimized using AdamW with a learning rate of . Each model consists of three layers, a dropout ratio of , and an embedding dimension of . Training is done with steps using a batch size of 64 and evaluated every steps. Gradients are clipped to . The Transformer is configured with heads. Mamba uses an SSM state expansion factor of and a block expansion factor of . Following Mamba, both minLSTM and minGRU utilize an expansion factor of as well.
C.3 Datasets
Selective Copying. In this task, the model learns to extract data tokens from a sequence while disregarding noise tokens. Following Gu & Dao (2024), we consider a vocabulary of and sequences of length . Each sequence includes randomly placed data tokens. The remainder of the tokens are noise.
Reinforcement Learning. In this setting, we consider continuous control tasks from the D4RL benchmark (Fu et al., 2020). These MuJoCo-based tasks comprise three environments with dense rewards: HalfCheetah, Hopper, and Walker. For each environment, three different datasets representing varying levels of data quality are considered:
-
•
Medium (M): One million timesteps generated by a policy scoring about one-third of an expert policy’s score.
-
•
Medium-Replay (M-R): A replay buffer from an agent trained to perform like the Medium policy.
-
•
Medium-Expert (M-E): One million timesteps from the Medium policy combined with one million from an expert policy.
Following Fu et al. (2020), reported scores are normalized such that 100 represents the performance of an expert policy.
Language Modelling. In this setting, we consider the Shakespeare dataset, comprising a collection of text data derived from the works of William Shakespeare. The training and testing data consists of and tokens respectively.