May 24, 2024

#175 Collapsing Reality: How Large Language Models Make Choices Like Quantum Observers



Large Language Models (LLMs) have revolutionized natural language processing, enabling machines to generate and understand human-like text. These models achieve this feat through a series of interconnected layers, each making crucial choices that shape the final output. Just like an observer in quantum mechanics forcing a wave function to collapse, LLMs at various levels collapse reality to produce coherent and meaningful text. Let's delve into this fascinating process, starting from the very beginning.

Token Embeddings: Choosing the Essence

Before information even enters the first layer of neurons, there's a crucial step called tokenization. Here, the input text is broken down into subword units called tokens, using techniques like Byte-Pair Encoding (BPE). These tokens are then mapped to numerical vectors known as token embeddings, which capture the meaning and context of the tokens based on their relationships to other tokens in the training data.

One can think of an embedding as a way to try to represent the “essence” of something by an array of numbers—with the property that “nearby things” are represented by nearby numbers. - Stephen Wolfram
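To make this concrete, here is a minimal sketch of an embedding lookup. The whitespace tokenizer, the tiny vocabulary, and the embedding size are illustrative assumptions, not the internals of any particular model; a real LLM would use BPE token IDs and a learned table with thousands of rows.

```python
import numpy as np

# Toy vocabulary: a real LLM uses BPE subword token IDs instead of whole words.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
vocab_size, embed_dim = len(vocab), 8  # real models use far larger values

# The embedding table is learned during training; random here for illustration.
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(vocab_size, embed_dim))

def embed(text: str) -> np.ndarray:
    """Map whitespace-split tokens to their embedding vectors."""
    token_ids = [vocab[tok] for tok in text.split()]
    return embedding_table[token_ids]  # shape: (num_tokens, embed_dim)

print(embed("the cat sat on the mat").shape)  # (6, 8)
```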

Neurons and Putting a Thumb on the Scale

At the heart of LLMs lie artificial neurons, loosely inspired by biological neurons. Each neuron computes a weighted sum of its inputs from the previous layer and passes it through an activation function. This function acts like a decision maker, evaluating the input and determining whether the neuron should fire strongly or weakly. High activation indicates the presence of relevant features, while low activation filters out less important information. This initial decision point, akin to a quantum observation, collapses the wave of possibilities for that particular neuron.

One of the simplest activation functions is the sigmoid function, which is particularly useful in binary classification tasks. The sigmoid function outputs values between 0 and 1. Values below 0.5 can be interpreted as negative, while values above 0.5 can be seen as positive. This binary decision-making process illustrates how activation functions help neurons make critical choices in the information processing pipeline of LLMs.
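As a quick illustration, here is the sigmoid function and the thresholded reading described above. The inputs are made up for the example; the 0.5 cutoff is the standard binary-classification convention.

```python
import numpy as np

def sigmoid(x):
    """Squash any real input into the (0, 1) range."""
    return 1.0 / (1.0 + np.exp(-x))

scores = np.array([-2.0, -0.3, 0.0, 0.7, 3.0])
probs = sigmoid(scores)
decisions = probs > 0.5  # collapse each probability into a binary choice
print(probs.round(3))    # [0.119 0.426 0.5   0.668 0.953]
print(decisions)         # [False False False  True  True]
```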

In addition to the sigmoid function, modern LLMs typically employ other activation functions such as the Rectified Linear Unit (ReLU) or its smoother variants like GELU. ReLU is defined as f(x) = max(0, x): it outputs 0 for negative inputs and the input value itself for positive inputs. ReLU helps alleviate the vanishing gradient problem and allows for faster training of deep neural networks. The vanishing gradient problem, which can be loosely understood as the model not learning enough from its mistakes, occurs when gradients become too small during backpropagation, leading to insufficient weight updates in the early layers of the network.
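ReLU is simple enough to show in a line of code. The sketch below is illustrative only; the final comment points at one intuition for why ReLU suffers less from vanishing gradients than sigmoid.

```python
import numpy as np

def relu(x):
    """Pass positive inputs through unchanged; zero out negative ones."""
    return np.maximum(0, x)

x = np.array([-1.5, -0.2, 0.0, 0.8, 2.4])
print(relu(x))  # [0.  0.  0.  0.8 2.4]

# Unlike sigmoid, ReLU's gradient is exactly 1 for every positive input,
# so gradients flowing back through many layers are not repeatedly shrunk.
grad = (x > 0).astype(float)
print(grad)     # [0. 0. 0. 1. 1.]
```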

Gating Mechanisms: Focusing Attention

As information travels through the LLM, it encounters gating mechanisms, which are crucial for managing the flow of information and ensuring the model focuses on the most relevant parts of the input sequence. These mechanisms act like spotlights, directing the model's attention to specific segments of the data.

In Transformer models, this role is played by the self-attention and cross-attention mechanisms, which let each word or token in a sequence weigh the importance of every other token, helping the model capture relationships regardless of position. In recurrent architectures the gating is more literal: GRUs use update and reset gates, and LSTMs use input, forget, and output gates to dynamically adjust the flow of information, retaining or discarding it as necessary.
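As a rough sketch of the attention idea, here is scaled dot-product self-attention in NumPy. The tiny dimensions and random projection matrices are assumptions for illustration; real Transformer layers use learned projections, multiple heads, and causal masking.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token attends to every other
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.normal(size=(seq_len, d_model))      # 4 token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)   # (4, 8)
```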

The Softmax Layer: The Final Choice

The final stage of collapsing reality in LLMs occurs at the softmax layer. After the information has passed through all of the preceding layers of choices, the softmax layer takes the raw scores (logits) produced by the last layer and converts them into a probability distribution over the vocabulary, with each candidate token's probability shaped by the choices made earlier. A decoding step then selects the next token from that distribution. Ultimately, just like a quantum measurement forcing the collapse of a wave function into a single particle state, this step collapses the possibilities into a single output token.
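A minimal sketch of that last step, assuming a toy four-word vocabulary and made-up logits: softmax turns raw scores into probabilities, and a decoding rule (greedy here, with sampling shown as an alternative) picks the single next token.

```python
import numpy as np

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

vocab = ["mat", "dog", "moon", "sofa"]      # toy vocabulary (illustrative)
logits = np.array([3.1, 1.2, 0.4, 2.6])     # made-up scores from the final layer

probs = softmax(logits)
greedy_token = vocab[int(np.argmax(probs))]  # collapse to the single most probable token
sampled_token = vocab[np.random.default_rng(0).choice(len(vocab), p=probs)]

print(probs.round(3))  # approx. [0.548 0.082 0.037 0.333]
print(greedy_token)    # mat
```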

Conclusion

Though LLMs are fundamentally prediction systems, they must collapse reality from the sea of choices available at different points, much like humans do. Not every choice is perfect; this is why hallucinations occur. However, without making these hard choices, no progress can be made, similar to decision-making in human life. By understanding and refining how LLMs collapse possibilities, we can improve their ability to generate accurate and meaningful text, advancing their usefulness across various applications (especially in the open-weights era).