Microsoft’s Latest Small Model Phi-3 Has Big Potential

Tiny but Mighty: The Phi-3 Small Language Models with Big Potential

Sometimes the best solutions come from unexpected places. That’s the lesson Microsoft researchers learned when they developed a new class of small language models (SLMs) that pack a powerful punch.

The Case in Point: Large language models (LLMs) have opened up exciting new possibilities for AI, but their massive size means they require significant computing resources. Microsoft’s researchers set out to create SLMs that offer many of the same capabilities as LLMs, but in a much smaller and more accessible package.

  • The researchers trained the Phi-3 family of SLMs on carefully curated, high-quality datasets, allowing them to outperform models of similar size and even larger models across a variety of benchmarks.
  • The first Phi-3 model, Phi-3-mini, has 3.8 billion parameters and performs better than models twice its size.

Go Deeper: The key to the Phi-3 models’ success was the researchers’ innovative approach to data selection and curation. Inspired by how children learn language, they built datasets focused on high-quality, educational content rather than relying on raw internet data.

  • The “TinyStories” dataset, for example, was created by prompting a large language model to generate millions of short stories using a limited vocabulary.
  • The “CodeTextbook” dataset was built by carefully selecting and filtering publicly available information to capture a wide scope of high-quality, textbook-like content.
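The TinyStories approach can be illustrated with a minimal sketch: compose prompts that constrain a large model to a small vocabulary, then filter out generations that stray beyond it. The vocabulary, prompt wording, and filter below are hypothetical stand-ins for the actual pipeline, which restricted generation to roughly the basic words a young child would know.

```python
import random
import re

# Tiny illustrative vocabulary; the real TinyStories work used a much
# larger list of basic words a young child would understand.
VOCAB = {"the", "a", "cat", "dog", "ran", "sat", "saw", "big", "small",
         "and", "was", "happy", "sad", "then", "home", "went"}

def build_prompt(vocab, n_words=3):
    """Compose a generation prompt restricting the model to simple words."""
    required = random.sample(sorted(vocab), n_words)
    return ("Write a short story for young children using only very simple "
            f"words. The story must include the words: {', '.join(required)}.")

def uses_only_vocab(story, vocab):
    """Quality filter: keep a story only if every word is in the vocabulary."""
    words = re.findall(r"[a-z]+", story.lower())
    return all(w in vocab for w in words)

# In the real pipeline the prompt is sent to a large language model;
# here a fixed string stands in for the model's response.
story = "The big dog ran home and the cat was happy."
print(build_prompt(VOCAB))
print(uses_only_vocab(story, VOCAB))  # True: every word is in VOCAB
```

Millions of such prompt/filter rounds yield a synthetic corpus that is simple in vocabulary but rich in grammar and narrative structure, which is what lets a small model learn language efficiently from it.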

Why It Matters: SLMs like the Phi-3 models offer significant advantages over their larger counterparts. They can run on devices at the edge, minimizing latency and maximizing privacy, and are more accessible for organizations with limited resources.

  • SLMs are well-suited for simpler tasks that don’t require extensive reasoning, or for tasks that need a quick response, such as summarizing documents, generating marketing content, or powering customer support chatbots.
  • By keeping data processing local, SLMs can enable AI experiences in areas with limited connectivity, opening up new possibilities for applications like crop disease detection for farmers.

The Big Picture: While LLMs will remain the gold standard for complex tasks, Microsoft envisions a future where a portfolio of models, both large and small, work together to solve a wide range of problems.

  • SLMs and LLMs can complement each other, with LLMs acting as routers to direct certain queries to the more lightweight SLMs when appropriate.
  • This flexible approach allows organizations to choose the right-sized model for their specific needs and resources, unlocking the power of AI for a broader range of users and use cases.
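This routing idea can be sketched in a few lines: a lightweight classifier decides whether a query needs the large model, and everything else goes to the small one. The heuristic and the placeholder model callables below are illustrative assumptions, not Microsoft's actual routing logic; in practice the router would itself typically be a model.

```python
def classify_complexity(query: str) -> str:
    """Crude heuristic router: long or reasoning-heavy queries go to the LLM.
    (Illustrative only; a production router would be far more sophisticated.)"""
    reasoning_markers = ("why", "prove", "analyze", "compare", "derive")
    if len(query.split()) > 30 or any(m in query.lower() for m in reasoning_markers):
        return "llm"
    return "slm"

def route(query, slm, llm):
    """Dispatch the query to the right-sized model."""
    return (llm if classify_complexity(query) == "llm" else slm)(query)

# Placeholder callables standing in for real models (e.g. a local Phi-3
# for the SLM and a hosted frontier model for the LLM).
slm = lambda q: f"[SLM] {q}"
llm = lambda q: f"[LLM] {q}"

print(route("Summarize this memo.", slm, llm))                      # handled by SLM
print(route("Why does attention scale quadratically?", slm, llm))   # escalated to LLM
```

The appeal of this design is cost and latency: the cheap local model absorbs routine traffic, and only queries the router flags as complex pay the price of the larger model.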

The Bottom Line: By developing the Phi-3 family of small language models, Microsoft has demonstrated that size isn’t everything when it comes to AI. These innovative SLMs offer a glimpse into a future where the benefits of powerful language models are more accessible and widely applicable, empowering more people to harness the potential of AI.
