[2106.06981] Thinking Like Transformers

Gail Weiss, Yoav Goldberg, Eran Yahav


Abstract: What is the computational model behind a Transformer? Where recurrent neural
networks have direct parallels in finite state machines, allowing clear
discussion and thought around architecture variants or trained models,
Transformers have no such familiar parallel. In this paper we aim to change
that, proposing a computational model for the transformer-encoder in the form
of a programming language. We map the basic components of a transformer-encoder
— attention and feed-forward computation — into simple primitives, around
which we form a programming language: the Restricted Access Sequence Processing
Language (RASP). We show how RASP can be used to program solutions to tasks
that could conceivably be learned by a Transformer, and how a Transformer can
be trained to mimic a RASP solution. In particular, we provide RASP programs
for histograms, sorting, and Dyck-languages. We further use our model to relate
their difficulty in terms of the number of required layers and attention heads:
analyzing a RASP program implies a maximum number of heads and layers necessary
to encode a task in a transformer. Finally, we show how insights gained from our
abstraction might be used to explain phenomena seen in recent works.
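To make the abstract's mapping from attention to "simple primitives" concrete, the sketch below emulates RASP-style select, selector_width, and aggregate operations in Python and uses them to express the histogram task mentioned above (for each position, count how many tokens in the input equal the token at that position). The primitive names follow the paper's terminology, but this toy interpreter and the histogram function are an illustrative assumption, not the authors' reference implementation of RASP.

```python
def select(keys, queries, predicate):
    """Build a boolean selection matrix: entry [q][k] is True when
    predicate(keys[k], queries[q]) holds, mimicking an attention pattern."""
    return [[predicate(k, q) for k in keys] for q in queries]

def selector_width(selector):
    """For each query position, count how many key positions are selected."""
    return [sum(row) for row in selector]

def aggregate(selector, values):
    """Average the selected values at each query position
    (uniform attention over the selected keys)."""
    out = []
    for row in selector:
        chosen = [v for sel, v in zip(row, values) if sel]
        out.append(sum(chosen) / len(chosen) if chosen else 0)
    return out

def histogram(tokens):
    # In RASP-style notation: hist = selector_width(select(tokens, tokens, ==))
    same_token = select(tokens, tokens, lambda k, q: k == q)
    return selector_width(same_token)

if __name__ == "__main__":
    print(histogram(list("hello")))  # [1, 1, 2, 2, 1]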
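```

Because the whole histogram program is a single select followed by a width count, reading it off suggests one attention layer with one head suffices for this task, which is the kind of layer-and-head bound on a transformer encoder that the abstract describes deriving from a RASP program.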

Submission history

From: Gail Weiss
[v1] Sun, 13 Jun 2021 13:04:46 UTC (3,616 KB)
[v2] Mon, 19 Jul 2021 11:22:34 UTC (1,808 KB)
