Compal’s new server platforms, built on NVIDIA’s MGX architecture, are reshaping the enterprise AI and HPC landscape with substantial computational power and flexibility. Unveiled at GTC 2025, the three new server models represent a significant advancement in data center technology, offering tailored configurations for a range of high-performance computing needs while leveraging NVIDIA’s latest GPU innovations to address the growing demands of AI workloads and scientific computing applications.
The big picture: Compal Electronics has launched three new server platforms built on NVIDIA MGX architecture, designed specifically for enterprise-level AI, HPC, and high-load computing applications.
- The lineup includes the SX420-2A 4U AI Server, SX224-2A 2U AI Server, and SX220-1N 2U AI Server, each targeting different segments of the high-performance computing market.
- These servers incorporate NVIDIA’s latest GPU technology, including support for RTX PRO 6000 Blackwell GPUs and the NVIDIA GH200 Grace Hopper Superchip.
Key details: The flagship SX420-2A 4U AI Server features a flexible rack design supporting both industry-standard 19″ and ORV3 21″ configurations.
- It can be configured with up to 8 RTX PRO 6000 Blackwell GPUs, significantly enhancing data center compute performance and resource utilization.
- The server is specifically engineered for deep AI-HPC applications, providing the computational foundation for advanced artificial intelligence workloads.
Technical innovations: The SX224-2A 2U AI Server integrates NVIDIA MGX architecture with AMD x86 platform technology for versatile configuration options.
- The design is future-compatible and optimized for diverse computing workloads spanning AI-HPC and AI-Graphics applications.
- This platform enables tailored performance adjustments to meet specific computational requirements.
Advanced GPU capabilities: The RTX PRO 6000 Blackwell GPU included in these servers delivers exceptional performance with 96GB of GDDR7 memory.
- The passively cooled thermal design ensures stable operation even under extreme computational loads.
- These GPUs provide acceleration for both agentic and physical AI applications, as well as scientific computing, graphics, and video workloads.
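As a back-of-the-envelope illustration of the figures above, a fully populated SX420-2A with eight RTX PRO 6000 Blackwell GPUs at 96GB of GDDR7 each would expose an aggregate of 768GB of GPU memory. The sketch below shows the arithmetic only; it is not vendor-validated sizing guidance:

```python
# Rough aggregate GPU memory for a fully populated SX420-2A,
# using the per-GPU and per-server figures cited in this article.
GPUS_PER_SERVER = 8      # up to 8 RTX PRO 6000 Blackwell GPUs
MEMORY_PER_GPU_GB = 96   # 96 GB GDDR7 per GPU

total_gb = GPUS_PER_SERVER * MEMORY_PER_GPU_GB
print(f"Aggregate GPU memory: {total_gb} GB")  # → 768 GB
```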
Specialized computing solutions: The SX220-1N 2U AI Server is engineered specifically for large-scale AI and HPC applications requiring massive computational resources.
- It features the NVIDIA GH200 Grace Hopper Superchip and employs NVIDIA NVLink-C2C technology to deliver a coherent memory pool.
- This architecture enables faster memory speeds and exceptional bandwidth to handle large-scale computational tasks.
Why this matters: The comprehensive server lineup represents a significant advancement in enterprise computing technology, addressing the growing demands for specialized AI and HPC infrastructure.
- These platforms satisfy the rigorous requirements of diverse data center applications while offering the flexibility to adapt to evolving computational needs.
- As AI workloads continue to grow in complexity and scale, purpose-built hardware solutions like these become increasingly critical for organizational performance and efficiency.