Advanced AI coding models from companies like OpenAI, Anthropic, and Google are fundamentally transforming software development, with some experts predicting AI could write 90% of all code within months. This shift toward “vibe coding”—where developers use natural language prompts to generate entire applications—is creating both unprecedented opportunities and deep concerns about the future of engineering careers.
The big picture: What started as simple code autocompletion in ChatGPT has evolved into AI systems capable of building complete apps, websites, and even multiplayer games through conversational prompts.
- Steve Yegge, a veteran engineer at Sourcegraph (a code search company), now codes on four different projects simultaneously using AI, describing the process as “burning tokens” while AI handles the actual programming.
- Andrej Karpathy, a prominent AI researcher, coined the term “vibe coding” in February to describe this text-to-software development approach.
- AI coding startups like Cursor and Windsurf have gained significant traction, with OpenAI reportedly in talks to acquire Windsurf.
What industry leaders are saying: Predictions about AI’s coding capabilities range from revolutionary to cautionary.
- “We are not far from a world—I think we’ll be there in three to six months—where AI is writing 90 percent of the code,” said Dario Amodei, CEO of Anthropic. “And then in 12 months, we may be in a world where AI is writing essentially all of the code.”
- “This is how all programming will be conducted by the end of this year,” Yegge predicts. “And if you’re not doing it, you’re just walking in a race.”
- Martin Casado from Andreessen Horowitz called it “the most dramatic shift in the art of computer science since assembly was supplanted by higher-level languages.”
The reality check: Despite the hype, significant limitations persist in AI-generated code.
- “AI [tools] will do everything for you—including fuck up,” Yegge warns. “You need to watch them carefully, like toddlers.”
- A WIRED survey found developers nearly evenly split, with 36% enthusiastic about AI coding tools and 38% skeptical.
- “The nondeterministic nature of AI is too risky, too dangerous,” explains Ken Thompson, VP of engineering at Anaconda (a company that provides open source code for software development), noting that AI output varies unpredictably even with identical prompts.
Key challenges emerging: Early adopters report numerous pitfalls with AI-generated code.
- Security vulnerabilities and features that only simulate real functionality are common issues.
- Developers often accumulate high bills from AI tool usage and struggle to debug broken code they didn’t write.
- “There are almost no applications in which ‘mostly works’ is good enough,” says MIT’s Daniel Jackson, emphasizing that serious software requires precision AI cannot yet guarantee.
Impact on the job market: The employment picture remains complex, with both displacement and new opportunities emerging.
- “If I’m building a product, I could have needed 50 engineers and now maybe I only need 20 or 30,” says Naveen Rao, VP of AI at Databricks (a company that helps large businesses build their own AI systems). “That is absolutely real.”
- However, Liad Elidan from Milestone (a company that helps firms measure AI project impact) notes: “We are not seeing less demand for developers. We are seeing less demand for average or low-performing developers.”
- MIT economist David Autor suggests the outcome depends on demand elasticity, comparing it to an “Uber effect on coding: more people writing more code at lower prices, and lower wages.”
Current limitations in practice: Even companies integrating AI coding tools report significant constraints.
- Christine Yen, CEO at Honeycomb (a company that provides technology for monitoring software system performance), says developers using AI have increased productivity by only about 50% and notes that “AI just frankly isn’t good enough yet” for performance-critical or sensitive systems.
- Complex software projects with interdependencies remain challenging for AI, as “large language models can’t reason their way around those kinds of dependencies,” according to Jackson.
- The technology works best for simple, formulaic tasks like building component libraries but struggles with projects requiring judgment and architectural understanding.
Looking ahead: Experts recommend adaptation rather than replacement strategies.
- Yegge and co-author Gene Kim advocate for new development approaches including modular codebases, constant testing, and extensive experimentation to work effectively with AI.
- Many see the shift as abstraction rather than replacement, similar to how Python built on lower-level languages to make programming more accessible.
- “It’s like saying ‘Don’t teach your kid to learn math,’” Rao argues, emphasizing that understanding how to leverage computers will remain valuable even as AI handles more routine coding tasks.