Does AI write better code if you keep asking it to do better?

A creative developer recently tested whether repeatedly asking AI to “write better code” leads to actual improvements in code quality and performance, using Claude 3.5 Sonnet to optimize a Python coding challenge.
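
The challenge in the original post asks, roughly: given a list of one million random integers between 1 and 100,000, find the difference between the smallest and the largest numbers whose digits sum to 30. A naive first-pass solution of the kind the optimized versions were measured against might look like the following sketch (illustrative code, not the experiment's actual output; the function names are chosen here):

```python
import random

def digit_sum(n: int) -> int:
    # Sum of the decimal digits of n, e.g. 3999 -> 3 + 9 + 9 + 9 = 30.
    return sum(int(d) for d in str(n))

def min_max_diff(numbers: list[int], target: int = 30) -> int:
    # Keep only the numbers whose digits sum to the target, then take max - min.
    matching = [n for n in numbers if digit_sum(n) == target]
    return max(matching) - min(matching) if matching else 0

# One million random integers between 1 and 100,000, as in the challenge.
numbers = [random.randint(1, 100_000) for _ in range(1_000_000)]
print(min_max_diff(numbers))
```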

Key findings and methodology: Through iterative prompting experiments, requesting “better code” did yield significant performance improvements, though with some notable drawbacks.

  • Repeated requests for “better code” eventually produced an implementation roughly 100x faster than the initial attempt (a sketch of this prompting loop follows the list)
  • The approach sometimes led to unnecessary complexity and enterprise-style features being added
  • More targeted optimization prompts from the start achieved a 59x speedup on the first attempt
  • Subsequent specific optimization requests reached a 95x performance improvement
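
The iterative setup itself is straightforward to reproduce. The sketch below shows one way to run such a loop with the Anthropic Python SDK, feeding each response back into the conversation and following up with the literal prompt "write better code"; the model ID, token limit, and task placeholder are assumptions, and the original author's actual harness may differ:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# The conversation starts with the coding task, then alternates model output and the
# same generic follow-up, mirroring the "keep asking for better code" setup.
messages = [{"role": "user", "content": "Write Python code to solve this task: <task description>"}]

for iteration in range(4):
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model ID for Claude 3.5 Sonnet
        max_tokens=4096,
        messages=messages,
    )
    reply = response.content[0].text
    print(f"--- iteration {iteration} ---\n{reply}\n")
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": "write better code"})
```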

Technical optimizations: The AI model demonstrated proficiency in implementing several advanced performance optimization techniques.

  • Successfully integrated numba for Just-In-Time (JIT) compilation, which converts Python code into optimized machine code at runtime (see the sketch after this list)
  • Employed vectorized numpy operations, allowing for faster processing of large arrays of data
  • Made use of efficient data structures and algorithmic improvements
  • Implemented parallel processing capabilities where appropriate
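
The snippet below is a minimal sketch of what the first two techniques look like when applied to a digit-sum task of this kind; it is illustrative rather than the model's actual output, and the function names are chosen here. (Numba also offers parallel execution via njit(parallel=True) and prange, which is the kind of parallelism the last bullet refers to.)

```python
import numpy as np
from numba import njit

@njit
def digit_sum(n):
    # Digit sum via integer arithmetic; compiled to machine code by numba's JIT.
    total = 0
    while n > 0:
        total += n % 10
        n //= 10
    return total

@njit
def min_max_diff_numba(numbers, target=30):
    # Single JIT-compiled pass over the array, tracking the matching min and max.
    smallest = 0
    largest = 0
    found = False
    for n in numbers:
        if digit_sum(n) == target:
            if not found:
                smallest = n
                largest = n
                found = True
            elif n < smallest:
                smallest = n
            elif n > largest:
                largest = n
    return largest - smallest if found else 0

def min_max_diff_vectorized(numbers, target=30):
    # Vectorized numpy version: peel off one decimal digit at a time across the whole array.
    sums = np.zeros_like(numbers)
    remaining = numbers.copy()
    while remaining.any():
        sums += remaining % 10
        remaining //= 10
    matching = numbers[sums == target]
    return int(matching.max() - matching.min()) if matching.size else 0

numbers = np.random.default_rng(0).integers(1, 100_000, size=1_000_000)
print(min_max_diff_numba(numbers), min_max_diff_vectorized(numbers))
```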

Limitations and challenges: Despite showing promise, the AI’s code optimization efforts revealed several important constraints.

  • Introduced incorrect bit manipulation operations that required human intervention to fix
  • Generated subtle bugs that needed manual debugging and correction (one way to catch these is sketched after this list)
  • Sometimes added unnecessary complexity that didn’t contribute to performance
  • Required human expertise to guide the optimization process effectively
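
One common way to catch this class of subtle bug is to check every optimized candidate against a slow but obviously correct reference implementation before trusting its speedup. A minimal sketch of such a check follows; the harness and names here are illustrative rather than taken from the original experiment:

```python
import numpy as np

def reference_diff(numbers, target=30):
    # Deliberately simple string-based reference, used only for correctness checking.
    matching = [n for n in numbers if sum(int(d) for d in str(n)) == target]
    return max(matching) - min(matching) if matching else 0

def check_optimized(optimized_fn, trials=20, size=10_000):
    # Run the optimized candidate against the reference on random inputs and flag mismatches.
    rng = np.random.default_rng(42)
    for _ in range(trials):
        numbers = rng.integers(1, 100_000, size=size)
        expected = reference_diff(numbers.tolist())
        actual = optimized_fn(numbers)
        if actual != expected:
            raise AssertionError(f"optimized result {actual} != reference {expected}")
    return True
```

With the illustrative functions from the earlier sketch in scope, check_optimized(min_max_diff_vectorized) either confirms that the optimized version still matches the naive answer or surfaces exactly the kind of mismatch that required manual debugging in the experiment.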

Process insights: The experiments revealed important lessons about working with AI for code optimization.

  • Specific, targeted prompts produced better results than general requests for improvement (contrasted in the example after this list)
  • Human oversight remained crucial for identifying and correcting errors
  • The AI demonstrated understanding of various optimization techniques but needed guidance in applying them appropriately
  • Iterative improvement showed diminishing returns after certain optimizations were implemented
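
For illustration, the contrast between the two prompting styles might look like this; "write better code" is the literal follow-up used in the iterative run, while the targeted prompt below is a hypothetical paraphrase rather than the experiment's exact wording:

```python
# Generic follow-up: leaves "better" undefined, which is where enterprise-style
# complexity tended to creep in during the iterative experiment.
generic_prompt = "write better code"

# Targeted prompt: names the goal (runtime) and the techniques to reach it, and
# explicitly discourages unrequested features.
targeted_prompt = (
    "Optimize this function for runtime performance on a list of one million integers. "
    "Prefer vectorized numpy operations, JIT-compile hot loops with numba, "
    "and do not add features that are not needed for correctness."
)
```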

Looking ahead: While AI shows promise in code optimization, the experiment highlights the importance of maintaining a balanced approach that combines AI capabilities with human expertise. Future developments may reduce the need for human intervention, but for now the most effective strategy appears to be using AI as a sophisticated tool within a human-guided optimization process.

Source: Can LLMs write better code if you keep asking them to “write better code”?
