
TITAN GM645R-G6
AI & GPU Server
- 4U NVIDIA MGX™
- Dual AMD EPYC™ 9005/9004 and 97x4 Series Processors
- Supports 8 dual-slot GPUs with 600W TDP each
- Up to 6TB DDR5-6400 MT/s memory
- 16 x Hot-swap E1.S (PCIe 5.0 x4) drive bays and 2 x M.2
- ASPEED AST2600 management
- 3200W (3+1) 80 PLUS Titanium Power Supply
- Perfect for AI Training, AI Inference, Deep Learning and more

Product Overview
The CIARA TITAN GM645R-G6 is a 4U NVIDIA MGX™ server built for large-scale AI training, high-volume inference, and clustered AI environments. Powered by dual AMD EPYC™ 9005/9004 and 97x4 Series processors (up to 500 W each), it delivers robust multi-core performance for data-intensive and GPU-driven workloads.
Designed on NVIDIA’s MGX™ modular architecture, the system supports up to eight NVIDIA GPUs, including the NVIDIA RTX™ 6000, H200 NVL, H100 NVL, and L40S, with optimized airflow, spacing, and power delivery for sustained GPU performance under continuous load.
With support for up to 6 TB of DDR5-6400 MT/s memory, eight 400 Gb/s QSFP112 ports, E1.S PCIe Gen5 storage, and extensive PCIe Gen5 expansion, the CIARA TITAN GM645R-G6 delivers the networking throughput, memory bandwidth, and scalability required for modern AI clusters and future high-power GPU deployments.


Key Features and Benefits
AMD EPYC Compute for Advanced AI Workloads
The CIARA TITAN GM645R-G6 features dual-socket AMD EPYC™ 9005/9004 and 97x4 Series processors (up to 500 W each), delivering powerful multi-core performance for AI training, simulation, and large-scale analytics. This compute foundation keeps data flowing to the GPUs, so multi-GPU workloads run smoothly during sustained, intensive operations.
MGX™ Architecture for GPU-Optimized Performance
Built on NVIDIA’s MGX™ modular platform, the system supports up to eight NVIDIA GPUs (600 W TDP each), including the RTX™ 6000, H200 NVL, H100 NVL, and L40S, with standardized airflow, power delivery, and slot spacing. This ensures consistent cooling, stable multi-GPU performance, and the flexibility to adopt next-generation GPUs.
High-Speed Networking for AI Clusters
The system includes eight 400Gb/s QSFP112 ports powered by NVIDIA ConnectX®-8, providing extremely high network bandwidth for distributed training, multi-node inference, and large-scale data transfer. This level of connectivity is essential for building and scaling high-performance AI clusters.
High-Bandwidth Memory and E1.S Storage Architecture
With up to 6 TB of DDR5-6400 MT/s memory and sixteen E1.S PCIe Gen5 drive bays, the system provides the speed and throughput required for large datasets and high-speed checkpointing. This storage architecture supports performance-critical AI workflows and fast-paced development environments.
Perfect For
AI Training
AI Inference
Deep Learning
Simulation
Multi-Node Workloads















