DeepSeek R1 vs DeepSeek V3: Which is better at coding?

Testing methodology and scope: A journalist at ZDNET evaluated DeepSeek’s V3 and R1 models using four established coding challenges that have previously been used to benchmark other AI models.

  • The testing framework included writing a WordPress plugin, rewriting a string function, debugging code, and creating a complex automation script (an illustrative sketch of the string-function style of task follows this list)
  • Both V3 and R1 variants were evaluated against identical criteria to ensure consistent comparison
  • The assessment focused on code accuracy, functionality, and practical implementation
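For readers unfamiliar with this style of benchmark, the string-function challenge is the easiest to picture: the model is handed a small, self-contained string routine and asked to rewrite or extend it. The sketch below is a hypothetical Python example of that kind of task; it is not the actual ZDNET prompt, and the function name normalize_currency is invented purely for illustration.

```python
# Hypothetical illustration of a "rewrite a string function" style challenge.
# This is NOT the actual ZDNET test prompt.

def normalize_currency(raw: str) -> str:
    """Strip a leading dollar sign and thousands separators, then return
    the amount as a two-decimal string such as '1234.50'."""
    cleaned = raw.strip().lstrip("$").replace(",", "")
    value = float(cleaned)  # raises ValueError on malformed input
    return f"{value:.2f}"

if __name__ == "__main__":
    # Quick sanity checks for the sketch above.
    assert normalize_currency("$1,234.5") == "1234.50"
    assert normalize_currency("  42 ") == "42.00"
    print("normalize_currency examples pass")
```

A benchmark of this shape tests whether a model can preserve the existing behavior while handling the edge cases the original routine missed.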

Performance breakdown: DeepSeek V3 emerged as the stronger performer, completing three of the four challenges, while R1 completed only two.

  • Both models excelled at WordPress plugin development and bug identification tasks
  • V3 demonstrated superior capability in string function manipulation, while R1 showed potential stability issues
  • Neither model could successfully navigate the complex automation script involving AppleScript, Chrome, and Keyboard Maestro integration

Technical strengths and limitations: DeepSeek’s performance indicates capabilities comparable to GPT-3.5, while highlighting areas for potential improvement.

  • The code output tends toward verbosity, often including unnecessary complexity
  • Both models demonstrate strong fundamental programming knowledge
  • The platform shows impressive capabilities despite operating with reduced infrastructure compared to major competitors

Competitive positioning: DeepSeek compares favorably with established AI coding assistants, suggesting real potential in the development tools landscape.

  • The V3 model outperformed several major platforms including Gemini, Copilot, Claude, and Meta’s offerings
  • Its open-source nature provides transparency and potential for community-driven improvements
  • The platform achieves competitive results while maintaining a lighter infrastructure footprint

Future outlook: While DeepSeek shows promise as an emerging development tool, several factors will influence its adoption and evolution in the AI coding assistant space.

  • The platform’s ability to maintain competitive performance while using fewer resources suggests potential for efficient scaling
  • Continued development could address current limitations in handling complex, multi-platform automation tasks
  • The open-source approach may accelerate improvement through community contributions and transparency
Source: ZDNET, “I tested DeepSeek's R1 and V3 coding skills - and we're not all doomed (yet)”
