DeepSeek launches reasoning AI models with reinforcement learning breakthroughs

DeepSeek has released its first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1, along with six distilled variants, introducing a reinforcement learning-driven approach to building AI reasoning capabilities.

Key innovations: DeepSeek-R1-Zero represents a breakthrough in AI development by achieving strong reasoning capabilities through pure reinforcement learning, without requiring supervised fine-tuning.

  • The model demonstrates advanced capabilities including self-verification, reflection, and generating complex chains of thought
  • Despite its achievements, DeepSeek-R1-Zero faces challenges with repetition, readability, and language mixing
  • To address these limitations, researchers developed DeepSeek-R1, which adds a small amount of supervised "cold-start" training data before the reinforcement learning stage

Technical specifications: The DeepSeek-R1 series comprises multiple models with varying parameters and capabilities, built on different base architectures.

  • The flagship models, DeepSeek-R1-Zero and DeepSeek-R1, each feature 671B total parameters, of which 37B are activated per token (a mixture-of-experts design)
  • Both models support a context length of 128K tokens
  • The distilled variants range from 1.5B to 70B parameters, based on either Qwen or Llama architectures
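Assuming the checkpoint names DeepSeek published on HuggingFace (the repo ids below follow that naming convention and are not spelled out in this article), the distilled lineup can be sketched as a small lookup, for example to pick the largest variant that fits a parameter budget:

```python
# Sketch of the six distilled R1 variants (sizes in billions of parameters).
# Repo ids follow DeepSeek's HuggingFace naming and are assumptions here,
# not details stated in the article.
DISTILLED_VARIANTS = {
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B": (1.5, "Qwen"),
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B": (7.0, "Qwen"),
    "deepseek-ai/DeepSeek-R1-Distill-Llama-8B": (8.0, "Llama"),
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B": (14.0, "Qwen"),
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B": (32.0, "Qwen"),
    "deepseek-ai/DeepSeek-R1-Distill-Llama-70B": (70.0, "Llama"),
}

def largest_under_budget(max_params_b: float) -> str:
    """Return the repo id of the biggest distilled variant that fits
    within a parameter budget (in billions)."""
    fitting = {name: size for name, (size, _base) in DISTILLED_VARIANTS.items()
               if size <= max_params_b}
    if not fitting:
        raise ValueError("no distilled variant fits this budget")
    return max(fitting, key=fitting.get)

print(largest_under_budget(16))  # the 14B Qwen-based checkpoint fits a 16B budget
```

In practice the choice also depends on available VRAM and quantization, but the size tiers above are the axis the release is organized around.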

Practical applications: DeepSeek has made these models accessible through multiple channels for both research and commercial use.

  • Users can interact with DeepSeek-R1 through the official chat website (chat.deepseek.com) using the “DeepThink” feature
  • An OpenAI-compatible API is available through the DeepSeek Platform
  • Local deployment options are supported, with recommended temperature settings between 0.5 and 0.7 for optimal performance
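Because the API is OpenAI-compatible, requests use the standard chat-completions shape. A minimal sketch of building such a request, enforcing the recommended temperature range, might look like the following (the base URL and model id are assumptions drawn from DeepSeek's public platform docs, not details given in this article, and the request is constructed but not sent):

```python
# Sketch of an OpenAI-compatible chat request for DeepSeek-R1.
# API_BASE and the model id are assumptions, not stated in the article.
API_BASE = "https://api.deepseek.com"  # assumed endpoint

def build_request(prompt: str, temperature: float = 0.6) -> dict:
    """Build a chat-completions payload, enforcing the recommended
    0.5-0.7 temperature range for R1-series models."""
    if not 0.5 <= temperature <= 0.7:
        raise ValueError("recommended temperature for R1 models is 0.5-0.7")
    return {
        "model": "deepseek-reasoner",  # assumed model id for DeepSeek-R1
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = build_request("Prove that the square root of 2 is irrational.")
# With the official openai SDK, this payload could then be sent via:
#   client = OpenAI(base_url=API_BASE, api_key="...")
#   client.chat.completions.create(**payload)
```

The OpenAI-compatible shape is the main practical convenience here: existing client code can be pointed at the DeepSeek endpoint by changing only the base URL and model name.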

Accessibility and licensing: The models are released under permissive licensing terms that encourage both academic research and commercial applications.

  • All models are available through HuggingFace, with MIT License coverage for the core repository
  • The distilled variants inherit licensing terms from their base models (Apache 2.0 for Qwen-based models, the corresponding Llama community licenses for Llama-based versions)
  • Commercial use, modifications, and derivative works are explicitly permitted

Looking ahead: The success of DeepSeek’s pure reinforcement learning approach opens new possibilities for AI model development, while the effectiveness of model distillation demonstrates that smaller models can achieve impressive reasoning capabilities when trained on data from larger models. These developments could significantly influence future AI research and development strategies.

