AI systems based on large neural networks present significant software engineering challenges that raise serious concerns about their reliability and responsible deployment, according to Professor Eerke Boiten of De Montfort University Leicester.

Core argument: Current AI systems, particularly those based on large neural networks, are fundamentally unmanageable from a software engineering perspective, making their use in critical applications irresponsible.

  • The primary challenge stems from the inability to apply traditional software engineering tools and principles to manage complexity and scale
  • These systems lack transparency and accountability, two essential elements for trustworthy software development
  • The development of AI has coincided with a concerning trend of diminished responsibility regarding data sources and algorithmic outcomes

Technical fundamentals: Large neural networks, which power most modern AI systems including generative AI and large language models (LLMs), operate through a complex web of interconnected nodes that process information in ways that are difficult to predict or control.

  • Neural networks contain millions of nodes, each processing multiple inputs through weighted connections and activation functions (sketched in code after this list)
  • Training these networks requires enormous computational resources, often costing millions of dollars
  • The training process is largely unsupervised, with minimal human input beyond potential post-training adjustments
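To make the node bullet concrete, here is a minimal sketch (ours, not from Boiten's article) of the computation a single node performs; the input values, weights, and the ReLU activation are illustrative assumptions:

```python
import numpy as np

def node_output(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    # Each node computes a weighted sum of its inputs plus a bias ...
    pre_activation = float(np.dot(inputs, weights)) + bias
    # ... and passes it through a nonlinear activation function (ReLU here).
    return max(0.0, pre_activation)

# Three inputs feeding one node; a large network chains millions of these,
# and its behavior lives entirely in the numeric weights.
print(node_output(np.array([0.5, -1.2, 3.0]), np.array([0.8, 0.1, -0.4]), bias=0.05))
```

Nothing in those weights is human-readable: the "program" is the full set of learned numbers, which is why the engineering problems below follow.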

Key limitations: The emergent behavior of neural networks fundamentally conflicts with traditional software engineering principles, particularly compositionality; the sketch after the list below shows the contrast.

  • Neural networks lack internal structure that meaningfully relates to their functionality
  • They cannot be developed or reused as components
  • These systems do not create explicit models of knowledge
  • The absence of intermediate models prevents stepwise development
  • Explaining system behavior is extremely difficult because the network holds no explicit representation of its reasoning
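For contrast, here is a toy illustration (our own, not from the article) of the compositionality conventional software relies on: each function has a contract that can be tested and reused in isolation, which is exactly the structure a trained network lacks:

```python
def parse_amount(text: str) -> int:
    # Contract: digits-only string in, non-negative integer out.
    return int(text)

def apply_discount(amount: int, percent: int) -> int:
    # Contract: price and percentage in, discounted price out.
    return amount * (100 - percent) // 100

# Each component is testable on its own ...
assert parse_amount("200") == 200
assert apply_discount(200, 10) == 180

# ... and the composite inherits guarantees from its parts.
assert apply_discount(parse_amount("200"), 10) == 180

# A trained network's "parts" are weight matrices: no individual weight
# carries a contract that could be tested, swapped, or reused this way.
```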

Verification challenges: Traditional software testing and verification methods prove inadequate for current AI systems.

  • Input and state spaces are too vast for exhaustive testing, as the back-of-the-envelope arithmetic after this list illustrates
  • Stochastic behavior means correct outputs in testing don’t guarantee consistent performance
  • Component-level testing is impossible
  • Meaningful test coverage metrics cannot be established
  • The only available verification method, whole-system testing, provides insufficient confidence
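A quick calculation (our illustration; the vocabulary and prompt sizes are assumed, typical-order figures) shows why exhaustive testing is out of reach:

```python
vocab_size = 50_000    # assumed order of magnitude for an LLM vocabulary
prompt_length = 100    # tokens in a fairly short prompt

distinct_prompts = vocab_size ** prompt_length
print(f"Distinct {prompt_length}-token prompts: about 10^{len(str(distinct_prompts)) - 1}")
# -> about 10^469; no conceivable test suite covers a meaningful fraction,
#    and stochastic decoding means even a tested prompt can answer differently.
```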

Fault management: The handling of errors and system improvements presents significant obstacles.

  • Error behavior is emergent and unpredictable
  • The sheer volume of unsupervised training dwarfs the human error correction applied afterward, creating inherent reliability issues
  • Error fixes through retraining can introduce new problems that are difficult to detect
  • Regression testing becomes effectively impossible, as the toy harness after this list illustrates
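The harness below (our sketch; `model` is a hypothetical stand-in, not a real API) shows the two ways naive regression testing breaks down against a stochastic, retrained system:

```python
import random

def model(prompt: str, seed: int | None = None) -> str:
    # Hypothetical stand-in for an LLM: same prompt, varying output.
    rng = random.Random(seed)
    return f"answer-{rng.randint(0, 9)}"

# Record "golden" outputs from the pre-retraining model on a fixed suite.
golden = {"What is 2+2?": model("What is 2+2?", seed=0)}

def regression_passes(new_model) -> bool:
    return all(new_model(prompt) == expected for prompt, expected in golden.items())

# Problem 1: without a pinned seed, even the unchanged model may "fail".
print(regression_passes(lambda p: model(p)))  # varies run to run

# Problem 2: retraining shifts weights globally, so passing a finite suite
# says nothing about regressions on the unbounded space of untested prompts.
```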

Looking ahead: While current AI architecture may represent a developmental dead end, alternative approaches could offer more promising paths forward.

  • Hybrid systems combining symbolic and intuition-based AI might provide better reliability (see the propose-and-verify sketch after this list)
  • AI systems could be valuable in limited contexts where errors can be detected and managed
  • Applications like weather prediction, where probabilistic outputs are expected, might be more suitable use cases
  • The development of compositional approaches to neural networks, though challenging, could address current limitations
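One way to read the hybrid bullet is as a propose-and-verify loop: a fallible intuition-based component generates candidates, and a symbolic component with hard guarantees filters them. This is our own minimal sketch of that pattern, with a deliberately toy "neural" proposer:

```python
import random

def neural_propose(expr: str) -> int:
    # Toy stand-in for an intuition-based component: fast but fallible.
    return eval(expr) + random.choice([0, 0, 0, 1])  # sometimes off by one

def symbolic_check(expr: str, answer: int) -> bool:
    # Symbolic component: independently recomputes and gives a hard verdict.
    return eval(expr, {"__builtins__": {}}) == answer

def hybrid_answer(expr: str, retries: int = 5) -> int:
    for _ in range(retries):
        candidate = neural_propose(expr)
        if symbolic_check(expr, candidate):
            return candidate  # only verified answers escape the loop
    raise RuntimeError("no verified answer within retry budget")

print(hybrid_answer("17 * 23"))  # errors are detected and managed, not shipped
```

This matches the "limited contexts where errors can be detected and managed" bullet: the neural part supplies speed and coverage, while the symbolic part supplies the accountability the article says pure networks lack.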
