CIARA TITAN GM645R-G6 AI & GPU Server

TITAN GM645R-G6

AI & GPU Server

The CIARA TITAN GM645R-G6 is a 4U NVIDIA MGX™ server built for advanced AI and multi-node training workloads.
  • 4U NVIDIA MGX™
  • Dual AMD EPYC™ 9005/9004 and 97x4 Series Processors
  • Supports 8 dual-slot GPUs with 600W TDP each
  • Up to 6TB DDR5-6400 MT/s memory
  • 16 x Hot-swap E1.S (PCIe5.0 x4) drive bays and 2 x M.2
  • ASPEED AST2600 management
  • 3200W (3+1) 80 PLUS Titanium Power Supply
  • Perfect for AI Training, AI Inference, Deep Learning and more
TITAN GM645R-G6 Datasheet

Product Overview

The CIARA TITAN GM645R-G6 is a 4U NVIDIA MGX™ server built for large-scale AI training, high-volume inference, and clustered AI environments. Powered by dual AMD EPYC™ 9005/9004 and 97x4 Series processors (up to 500W each), it delivers robust multi-core performance for data-intensive and GPU-driven workloads.

Designed on NVIDIA’s MGX™ modular architecture, the system supports up to eight NVIDIA GPUs, including NVIDIA RTX™ 6000, H200 NVL, H100 NVL, and L40S, providing optimized airflow, spacing, and power delivery for sustained GPU performance under continuous load.

With support for up to 6 TB of DDR5 6400 MT/s memory, eight 400Gb/s QSFP112 ports, E1.S PCIe Gen5 storage, and robust PCIe Gen5 expansion, the CIARA TITAN GM645R-G6 delivers the networking throughput, memory bandwidth, and scalability required for modern AI clusters and future high-power GPU deployments.

Key Features and Benefits

AMD EPYC Compute for Advanced AI Workloads

MGX™ Architecture for GPU-Optimized Performance

High-Speed Networking for AI Clusters

High-Bandwidth Memory and E1.S Storage Architecture

Perfect For

AI Training

AI Inference

Deep Learning

Simulation

Multi-Node Workloads

Trademarks: Intel, the Intel logo, Intel Core, Intel Inside, the Intel Inside logo, Intel vPro and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. Other company, product or service names may be trademarks or service marks of others.
Trademarks: AMD, the AMD Arrow logo, AMD EPYC, Ryzen, Threadripper, and combinations thereof are trademarks of Advanced Micro Devices, Inc.

Tech Specs

Form Factor 4U NVIDIA® MGX™
GPU Supports 8 GPUs with 600W TDP each
Processor Dual AMD EPYC™ 9005/9004 and 97x4 Series Processors (TDP up to 500W)
Cooling System Passive air cooling
Memory
  • 12-channel memory architecture per processor
  • 12 + 12 DIMM slots (1DPC)
  • DDR5 6400 MT/s RDIMM / 3DS RDIMM
  • Up to 6TB
Network Controller
  • CX-8 version: 8 x QSFP112 (400Gb/s) ports via NVIDIA® ConnectX®-8, 2 x RJ45 (1GbE) by Intel i350, 1 x Realtek RTL8211F for dedicated management GLAN
  • Standard version: 2 x RJ45 (1GbE) by Intel i350, 1 x Realtek RTL8211F for dedicated management GLAN
Storage
  • 16 x hot-swap E1.S (PCIe 5.0 x4) drive bays
  • 2 x M.2 (PCIe 3.0 x4)
Expansion Slots
  • CX-8 version: 8 x PCIe 5.0 x16 (FHFL), 1 x PCIe 5.0 x16 (FHHL)
  • Standard version: 8 x dual-slot PCIe 5.0 x16 (FHFL), 5 x PCIe 5.0 x16 (FHHL), 1 x PCIe 5.0 x16 (HHHL)
Front I/O
  • 1 x Power button w/ LED
  • 1 x Reset button
  • 1 x NMI button
  • 1 x UID button
  • 1 x System fault LED
  • 1 x HDD activity LED
  • 1 x LAN1/LAN2 activity LED
  • 2 x Type-A (USB 3.0 Gen1)
Rear I/O
  • 1 x UID button
  • 1 x Reset button
  • 1 x Power button
  • 1 x Mini-DisplayPort
  • 1 x Type-A (USB 3.0 Gen1)
  • 1 x dedicated IPMI
  • 8 x QSFP112 (400Gb/s) ports via NVIDIA® ConnectX®-8
Management Aspeed® AST2600 Baseboard Management Controller
Power Supply 3+1 redundant, 80 PLUS Titanium, 3200W CRPS
Environment
  • Operating temperature: 10°C to 30°C (50°F to 86°F)
  • Non-operating temperature: -40°C to 70°C (-40°F to 158°F)
  • Non-operating humidity: 20% to 90% (non-condensing)
Dimensions (D x W x H) 1100mm x 860mm x 550mm (43.3" x 33.9" x 21.7")
Estimated Weight Net: 43 kg (95 lb), Gross: 79 kg (175 lb)
OS Support
  • Windows Server 2022
  • Windows Server 2025
  • Red Hat Enterprise Linux Server 8.10 x64
  • Red Hat Enterprise Linux Server 9.4 x64
  • Red Hat Enterprise Linux Server 10.0 x64
  • Ubuntu 22.04 LTS x64
  • Ubuntu 24.04 LTS x64

Resources

Product Downloads

  • TITAN GM645R-G6 Datasheet

Related Products

TITAN G620R-G6

AI & GPU Server
The CIARA TITAN G620R-G6 AI & GPU Server is built for AI, hyperscale, GPU rendering and scientific simulation workloads.
View Product
TITAN G525R-G6

AI & GPU Server
The CIARA TITAN G525R-G6 AI & GPU server delivers industry-leading performance for data center agility and significantly improves workload throughput.
View Product
TITAN G620R-G6-04D3L

AI & GPU Server
The CIARA TITAN G620R-G6-04D3L AI & GPU server is built for AI, machine learning, deep learning and scientific simulation workloads.
View Product
TITAN GS685R-G6 B200

AI & GPU Server
The CIARA TITAN GS685R-G6 B200 stands apart as a platform for large-scale AI and high-performance computing.
View Product
TITAN GS680R-G7 B300

AI & GPU Server
The CIARA TITAN GS680R-G7 B300 is an 8U GPU server built for AI, HPC, and simulation workloads.
View Product
TITAN GM640R-G7

AI & GPU Server
The CIARA TITAN GM640R-G7 is a 4U NVIDIA MGX™ server built for advanced AI training, inference, and distributed computing.
View Product

We Have Your Back

To protect your investment in our CPU and GPU computing servers, workstations, and laptops, we offer flexible limited warranty services with durations customizable to your needs. We are proud of the quality and reliability of our products, including our immersion-born servers. This dedication is demonstrated through our comprehensive warranty, which covers all essential components (CPU, GPU, RAM, SSD, NIC, and more). Our service options include return-to-depot, advance exchange, and on-site support.
View Support Options

Deploy AI & GPU Servers

Work with Hypertec to design and deploy AI and GPU platforms optimized for accelerated computing, scalability, and evolving AI workloads.