“Tokenization” Fuels Breakthroughs but Limits Potential

The rise of tokenization in AI models is limiting their potential, according to a recent article exploring how this text processing method creates biases and odd behaviors in today’s generative AI systems.

Key takeaways: Tokenization, the process of breaking down text into smaller pieces called tokens, enables transformer-based AI models to take in more information but also introduces problems:

  • Tokenizers can treat spacing, case, and individual characters differently, leading to strange model outputs that fail to capture the intended meaning.
  • Many tokenizers were designed with English in mind and struggle with languages that don’t use spaces between words or have more complex grammatical structures, resulting in inequities and higher costs for non-English language users.
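The sensitivity to spacing and case described above can be seen with a toy greedy tokenizer. This is an invented illustration, not a real production tokenizer: the vocabulary below is made up for demonstration, but the failure mode (the same word producing entirely different token sequences depending on its casing) mirrors how real BPE vocabularies behave.

```python
# Toy illustration (not a real tokenizer): greedy longest-match against
# a tiny invented vocabulary, showing how case changes the token sequence.
VOCAB = ["hello", "Hello", " world", "HE", "LLO",
         "h", "e", "l", "o", "w", "r", "d", " "]

def toy_tokenize(text):
    """Greedy longest-match tokenization against the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for size in range(len(text) - i, 0, -1):
            piece = text[i:i + size]
            if piece in VOCAB:
                tokens.append(piece)
                i += size
                break
        else:
            # Unknown character: emit it as its own token.
            tokens.append(text[i])
            i += 1
    return tokens

print(toy_tokenize("hello world"))  # ['hello', ' world']
print(toy_tokenize("Hello world"))  # ['Hello', ' world']
print(toy_tokenize("HELLO world"))  # ['HE', 'LLO', ' world']
```

Because `hello`, `Hello`, and `HE`/`LLO` are unrelated entries in the vocabulary, the model sees no connection between the three spellings of the same word.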

Understanding the technical details: Most of today’s leading AI models, from GPT-4 to smaller on-device systems, are built on the transformer architecture, which requires text to be tokenized rather than processed as raw data:

  • Transformer models work with text broken into tokens, which can be words, syllables, or even individual letters, in order to expand the amount of information they can take in.
  • However, tokenization is an imperfect process that can derail a model if token spacing is inconsistent or words get broken into odd chunks that lose the original semantic meaning.

Implications for non-English languages: Tokenizers built for English do a poor job handling many other languages, studies have found:

  • Logographic writing systems like Chinese are often tokenized character-by-character while agglutinative languages like Turkish get broken into many small word elements, greatly increasing token counts.
  • A 2023 analysis showed some languages needed up to 10 times more tokens than English to convey the same meaning, leading to worse performance and higher usage costs for those languages.
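The token-count gap between languages can be sketched with a toy counter. The vocabulary and phrases here are invented for demonstration: assume an English-centric vocabulary where common English words are single tokens, while text outside it falls back to one token per character, as often happens with logographic scripts.

```python
# Toy illustration of token-count inflation for non-English text.
# The English-centric vocabulary below is invented for demonstration.
ENGLISH_VOCAB = {"artificial", "intelligence"}

def count_tokens(text):
    """Whole-word tokens if in the vocabulary, else one token per character."""
    total = 0
    for word in text.split(" "):
        if word in ENGLISH_VOCAB:
            total += 1          # one token for the whole word
        else:
            total += len(word)  # character-by-character fallback
    return total

en = "artificial intelligence"  # both words in the vocabulary
zh = "人工智能"                  # the same phrase in Chinese

print(count_tokens(en))  # 2 tokens (word-level)
print(count_tokens(zh))  # 4 tokens (character-level)
```

Since API pricing and context limits are denominated in tokens, the same meaning costs the character-tokenized user twice as much in this toy example, and real-world gaps can be far larger.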

Impacts on model capabilities: Inconsistent tokenization helps explain some of the quirky behaviors and current limitations of generative AI:

  • Irregular number tokenization destroys the mathematical relationships between digits, causing models to incorrectly compare numbers, fail at arithmetic, and misunderstand formulas.
  • Models struggle with rearranging words, like in anagrams or reversed text, because the words have been tokenized and abstracted into opaque chunks, losing their internal structure.
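The digit problem above can be made concrete with a small sketch. The fixed three-digit chunking is invented for illustration; real BPE tokenizers split digit strings in similarly arbitrary ways, so the model never sees a consistent place-value structure.

```python
# Toy illustration: tokenizers often chunk digit strings into arbitrary
# multi-digit pieces. The 3-digit chunking here is invented for demonstration.
def chunk_number(digits, size=3):
    """Split a digit string into fixed-size chunks, as a tokenizer might."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

# The same trailing digits land in different chunks depending on the
# number's length, so the model sees unrelated tokens for related values.
print(chunk_number("380"))    # ['380']
print(chunk_number("1380"))   # ['138', '0']
print(chunk_number("21380"))  # ['213', '80']
```

Nothing in these token sequences encodes that 1380 is 380 plus 1000, which is one reason arithmetic is hard for models that only ever see such chunks.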

Looking ahead: New AI architectures that avoid tokenization entirely may be key to improving language model performance and capabilities:

  • “Byte-level” state space models like MambaByte, which ingest raw data rather than tokens, have proven competitive with transformers on language tasks while handling irregularities like strange spacing and inconsistent case better.
  • However, these new approaches are still in early research phases, so it seems tokenization hacks will be needed for the foreseeable future until more revolutionary token-free model designs emerge.
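What a byte-level model ingests can be shown in a couple of lines. This is a minimal sketch of the input representation only, not of MambaByte itself: raw UTF-8 bytes replace learned tokens, so no vocabulary is needed and irregular spacing or casing just shifts individual byte values rather than producing entirely different token IDs.

```python
# Minimal sketch of byte-level input: raw UTF-8 bytes instead of tokens.
text = "Hello  WORLD"  # irregular spacing and case
byte_ids = list(text.encode("utf-8"))

print(byte_ids)        # [72, 101, 108, 108, 111, 32, 32, 87, 79, 82, 76, 68]
print(len(byte_ids))   # 12 -- one "token" per byte
```

The trade-off is sequence length: one byte per position makes inputs far longer than token sequences, which is part of why state space models, which scale better with sequence length than transformers, are the architectures being tried here.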

Analyzing Deeper: While tokenization has enabled today’s AI models to achieve remarkable language feats, this article highlights how reliance on rigid token-based text processing is also holding back progress:

The inconsistent ways tokenizers handle even basic elements of text lead to perplexing model mistakes and limit their real-world usefulness. The bias toward English tokenization puts non-English speakers and lower-resource languages at a systematic disadvantage. Core AI weaknesses like poor math skills and inability to manipulate words stem from the token abstraction itself.

As demands grow for AI to master more complex analytics and serve global user bases equitably, the shortcomings of tokens will only become more apparent. Yet this key technical detail of AI architecture is often overlooked in the hype around new generative models.

While tokenization has taken AI remarkably far, entirely new approaches may be needed for the next leaps in language model performance. However, radically different architectures could take years to refine, scale, and make computationally practical. In the meantime, today’s AI developers must work around and mitigate the frustrating limitations of token-based models.

Tokens are a big reason today's generative AI falls short
