Does AI write better code if you keep asking it to do better?

A creative developer recently tested whether repeatedly asking an AI to “write better code” leads to actual improvements in code quality and performance, using Claude 3.5 Sonnet to iteratively optimize a solution to a Python coding challenge.
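For context, the challenge in the original post asks for the difference between the smallest and largest numbers, out of a list of one million random integers, whose digits sum to 30. A minimal, unoptimized Python baseline in the spirit of a first LLM attempt might look like the sketch below; the function names and structure are illustrative rather than taken from the experiment.

```python
import random

def digit_sum(n: int) -> int:
    """Sum the decimal digits of a non-negative integer."""
    total = 0
    while n:
        total += n % 10
        n //= 10
    return total

def min_max_difference(numbers: list[int], target: int = 30) -> int:
    """Difference between the largest and smallest numbers whose digits sum to `target`."""
    qualifying = [n for n in numbers if digit_sum(n) == target]
    return max(qualifying) - min(qualifying)

if __name__ == "__main__":
    data = [random.randint(1, 100_000) for _ in range(1_000_000)]
    print(min_max_difference(data))
```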

Key findings and methodology: Through iterative prompting experiments, requesting “better code” did yield significant performance improvements, though with some notable drawbacks.

  • Iteratively asking for “better code” eventually produced an implementation roughly 100x faster than the first attempt
  • The approach sometimes led to unnecessary complexity and enterprise-style features being added
  • Starting instead with more targeted optimization prompts achieved a 59x speedup on the first attempt
  • Subsequent specific optimization requests pushed the improvement to 95x

Technical optimizations: The AI model demonstrated proficiency in implementing several advanced performance optimization techniques.

  • Successfully integrated numba for Just-In-Time (JIT) compilation, which converts Python code into optimized machine code at runtime
  • Employed vectorized numpy operations for faster processing of large arrays (both techniques are illustrated in the sketch after this list)
  • Made use of efficient data structures and algorithmic improvements
  • Implemented parallel processing capabilities where appropriate
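As a rough illustration of how these techniques combine (a sketch of the general approach, not the model’s actual output), the code below uses a numba-JIT-compiled, parallelized kernel to compute digit sums and vectorized numpy operations for the filtering and reduction; the function names and the digit-sum target of 30 follow the challenge described above.

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def digit_sums(numbers: np.ndarray) -> np.ndarray:
    """JIT-compiled, parallel loop that computes the decimal digit sum of every element."""
    out = np.empty(numbers.shape[0], dtype=np.int64)
    for i in prange(numbers.shape[0]):
        n = numbers[i]
        s = 0
        while n > 0:
            s += n % 10
            n //= 10
        out[i] = s  # each iteration writes only to its own slot, so the loop parallelizes safely
    return out

def min_max_difference(numbers: np.ndarray, target: int = 30) -> int:
    """Vectorized filter and reduction over the precomputed digit sums."""
    mask = digit_sums(numbers) == target         # boolean mask, no per-element Python loop
    matching = numbers[mask]                     # vectorized selection
    return int(matching.max() - matching.min())  # numpy reductions

if __name__ == "__main__":
    data = np.random.randint(1, 100_001, size=1_000_000, dtype=np.int64)
    print(min_max_difference(data))
```

Moving the digit-sum loop into compiled code removes per-element interpreter overhead, while the final filter and min/max run as single numpy calls over a contiguous array; note that the first call also pays a one-time JIT compilation cost.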

Limitations and challenges: Despite showing promise, the AI’s code optimization efforts revealed several important constraints.

  • Introduced incorrect bit manipulation operations that required human intervention to fix
  • Generated subtle bugs that needed manual debugging and correction
  • Sometimes added unnecessary complexity that didn’t contribute to performance
  • Required human expertise to guide the optimization process effectively

Process insights: The experiments revealed important lessons about working with AI for code optimization.

  • Specific, targeted prompts produced better results than general requests for improvement (compare the two prompt styles sketched after this list)
  • Human oversight remained crucial for identifying and correcting errors
  • The AI demonstrated understanding of various optimization techniques but needed guidance in applying them appropriately
  • Iterative improvement showed diminishing returns after certain optimizations were implemented
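To make the first insight concrete, the snippet below contrasts a generic follow-up with a more targeted optimization request, using the anthropic Python SDK; the prompt wording and model ID are illustrative assumptions, not the exact text used in the experiment.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Generic follow-up of the kind the experiment iterated on.
GENERIC_PROMPT = "write better code"

# Hypothetical targeted prompt: names the concrete optimizations to apply.
TARGETED_PROMPT = (
    "Optimize this function for speed: vectorize the loop with numpy, "
    "JIT-compile the hot path with numba, and avoid rebuilding intermediate lists."
)

def ask(prompt: str, code: str) -> str:
    """Send the code plus an optimization request and return the model's reply text."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model ID; adjust to what is available
        max_tokens=2048,
        messages=[{"role": "user", "content": f"{prompt}\n\n{code}"}],
    )
    return response.content[0].text
```

Per the findings above, the targeted style reached a 59x speedup on its first attempt, while the generic follow-up needed several rounds to reach its eventual 100x.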

Looking ahead: While AI shows promise in code optimization, the experiment highlights the importance of a balanced approach that combines AI capabilities with human expertise. Future models may reduce the need for human intervention, but for now the most effective strategy appears to be using AI as a sophisticated tool within a human-guided optimization process.

Source: Can LLMs write better code if you keep asking them to “write better code”?
