dg5R

Datacenter in a Box

Overview

Independent AI Infrastructure Managing 10 kW of Ultra-High Heat

dg5R is equipped with deepgadget's most advanced liquid cooling technology, enabling up to 10 AI chips to operate in room-temperature environments. With comprehensive cooling covering the CPU, memory, and NICs, performance is pushed to its limits, while latest-architecture support and remote monitoring maximize operational convenience. From challenging server rooms to data centers small and large, dg5R is the most practical AI infrastructure available today.

Recommended Workloads

What You Can Do Right Now with dg5R

AI Inference

Inference for Frontier-Scale Open LLMs

  • With 33,410 TFLOPS FP8 and 1.41TB of HBM3e memory, dg5R can accommodate frontier-scale open model inference on a single server, including Llama 3.1 405B, Qwen3-235B-A22B, and DeepSeek-class models. For 70B-class models, the system is well suited to higher concurrency and more stable production serving.
  • RAG-powered knowledge search, enterprise chatbots, and Copilot services
  • Document summarization, classification, Q&A, and report draft automation
  • High-throughput batch inference and concurrent user traffic handling
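As a back-of-the-envelope check on the memory claim above, the sketch below estimates FP8 weight footprints against the 1.41 TB HBM3e pool, assuming roughly 1 byte per parameter and ignoring KV cache, activations, and runtime overhead (model sizes are nominal parameter counts):

```python
# Back-of-the-envelope check: do FP8 weights of large open models fit
# in dg5R's ~1.41 TB (1,410 GB) aggregate HBM3e?
# Assumes ~1 byte per parameter (FP8); ignores KV cache, activations,
# and runtime overhead, so real headroom is smaller.

HBM_GB = 1410  # aggregate HBM3e across the GPU pool

models = {
    "Llama 3.1 405B": 405e9,   # nominal parameter counts
    "Qwen3-235B-A22B": 235e9,
    "70B-class": 70e9,
}

for name, params in models.items():
    weights_gb = params / 1e9          # FP8: ~1 byte per parameter
    print(f"{name}: ~{weights_gb:.0f} GB weights, "
          f"fits: {weights_gb < HBM_GB}")
```

Even the 405B-class weights (~405 GB at FP8) leave room for KV cache and concurrency in 1.41 TB of HBM.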


LLM Tuning

Domain-Specific Model Tuning and Validation

  • Backed by 8,350 TFLOPS BF16 performance and 1.41TB of HBM3e memory, dg5R enables LoRA- and SFT-based tuning and evaluation of 70B-class LLMs within a single server. Training, validation, inference, and deployment can be iterated in one place to shorten the experimentation cycle.
  • Domain-specific model adaptation with LoRA and SFT
  • Iterative experimentation with checkpoints, evaluation, and inference outputs
  • Private AI assistants tailored to internal policy, terminology, and tone
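To illustrate why LoRA tuning of 70B-class models fits comfortably in a single server, here is a rough sketch of the trainable-parameter footprint; the hidden size, layer count, rank, and target projections are illustrative assumptions, not a specific model's configuration:

```python
# Rough LoRA footprint sketch for a hypothetical 70B-class model.
# Assumed dimensions (illustrative only):
hidden = 8192      # model hidden size
layers = 80        # transformer layers
rank = 16          # LoRA rank
targets = 4        # q/k/v/o projections, treated as square for simplicity

# Each adapted projection adds A (hidden x rank) + B (rank x hidden).
lora_params = targets * layers * 2 * hidden * rank
base_params = 70e9

print(f"LoRA params: {lora_params / 1e6:.0f}M "
      f"({100 * lora_params / base_params:.2f}% of base)")
```

Under these assumptions only ~84M parameters (about 0.12% of the base model) are trainable, which is why adapter-based tuning iterates quickly on one box.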


Industrial & Financial High-Performance Computing

Accelerated Simulation for Industry and Finance

  • With 300 TFLOPS FP64 performance and 1.41TB of HBM3e memory, dg5R accelerates large-scale numerical workloads on a single server, including FEA, CFD, and Monte Carlo risk simulation.
  • Large-scale FEA structural analysis and CFD simulation
  • Parallel computation for Monte Carlo risk and quantitative finance models
  • Digital twin and infrastructure simulation workflows
  • Hybrid pipelines combining AI inference with numerical computing


Image Workflows

Workflows for Image Generation, Transformation, and Visualization

  • With 3,545 TFLOPS of RT Core performance, 9,360 TFLOPS FP16 throughput, and 960GB of GDDR7 memory, dg5R can accelerate large-scale image generation, image transformation, 3D rendering, and visualization workflows on a single server.
  • Batch inference and high-volume generation with large image models
  • Image transformation, upscaling, style transfer, and post-processing pipelines
  • 3D rendering, digital content creation, and visualization workloads
  • Extended visual AI workflows built on private brand or domain data


Highlights

AI Power in a Single Server

Max AI Processors

10 GPUs

LLM Inference

33.4 PFLOPS
Based on FP8

LLM Training

8.3 PFLOPS
Based on BF16

High-Speed Network

4 Tb/s
Up to 10x NIC configuration

dg5R features a total of 15 PCIe 5.0 x16 slots, with up to 10 available for AI processor expansion. LLM inference theoretical peak is 33.4 PFLOPS with 10 NVIDIA H200 NVL GPUs under FP8 Tensor and Structured Sparsity conditions; LLM training theoretical peak is 8.3 PFLOPS under BF16 Dense with the same configuration. For networking, multiple 400Gbps+ InfiniBand / RoCE NICs can be installed, supporting both InfiniBand and Ethernet protocols.
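The headline peaks can be reproduced from NVIDIA's per-GPU H200 NVL figures under the stated conditions (FP8 Tensor with structured sparsity, BF16 dense); a minimal check:

```python
# Reproduce the headline peaks from per-GPU H200 NVL figures
# (FP8 Tensor with structured sparsity; BF16 dense), times 10 GPUs.
gpus = 10
fp8_sparse_tflops = 3341   # H200 NVL, FP8 Tensor + structured sparsity
bf16_dense_tflops = 835    # H200 NVL, BF16 dense

inference_pflops = gpus * fp8_sparse_tflops / 1000
training_pflops = gpus * bf16_dense_tflops / 1000
print(inference_pflops, training_pflops)
```

This yields 33.41 and 8.35 PFLOPS, matching the quoted 33.4 / 8.3 PFLOPS peaks.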

High Energy Efficiency

~35°C

Maximum operable ambient temperature

Self-contained L2A · High-temp intake compatible · Low external HVAC dependency

Energy Efficiency

1.41 PUE
~23.8% improvement over air-cooled (1.85)

HVAC power savings: 52.4%
In-server cooling savings: 54.0%
Total power cost savings: 23.8%

Air-Cooled vs dg5R

                     Air-Cooled Server   dg5R
IT Power             672.0 kW            672.0 kW
In-Server Cooling    300.0 kW            132.8 kW
Central HVAC         224.0 kW            96.0 kW
UPS Loss             33.6 kW             33.6 kW
Other                15.0 kW             10.0 kW
Total Power          1,244.6 kW          944.4 kW
PUE                  1.85                1.41

* Based on 100-server datacenter (IT power 672 kW), same workload. Air-cooled servers require ~20°C intake for stable operation; figures calculated under that condition. dg5R (self-contained L2A) figures calculated at 35°C intake.
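The PUE figures follow directly from the table: total facility power divided by IT power. A quick sketch (the 23.8% figure is computed from the rounded PUE values):

```python
# PUE = total facility power / IT power, using the table's figures (kW).
it_kw = 672.0

air = {"cooling": 300.0, "hvac": 224.0, "ups": 33.6, "other": 15.0}
dg5r = {"cooling": 132.8, "hvac": 96.0, "ups": 33.6, "other": 10.0}

pue_air = round((it_kw + sum(air.values())) / it_kw, 2)   # 1244.6 / 672
pue_dg = round((it_kw + sum(dg5r.values())) / it_kw, 2)   # 944.4 / 672

improvement = (pue_air - pue_dg) / pue_air
print(f"PUE {pue_air} vs {pue_dg}, improvement {improvement:.1%}")
```

This reproduces 1.85 vs 1.41 and the ~23.8% improvement quoted above.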

Supports Up to 10 Latest AI Accelerators

Supports up to 10 of the latest NVIDIA GPUs as well as a wide range of next-generation AI accelerators from AMD, Tenstorrent, FuriosaAI, and more. Configure the perfect environment for your workloads.

H200 coldplate

High-Performance Enterprise

NVIDIA H200 NVL

Pro6000 coldplate

Large-Scale Inference

NVIDIA RTX PRO 6000

5090 coldplate

Best Value

NVIDIA RTX 5090

Technology

More Thorough and Reliable deepgadget Liquid Cooling

Scientific L2A (AALC) Cooling Flow with CFD Analysis

AALC (Air-Assisted Liquid Cooling) is an architecture in which liquid captures heat and air handles final heat rejection. Cold plates in direct contact with major heat sources such as CPUs, AI processors, and NICs absorb heat into the coolant, which then passes through built-in radiators to release that heat into the air. The discharged heat is handled by the data center's existing HVAC system, allowing flexible deployment within conventional air-cooled infrastructure without direct facility water connections. The entire cooling path is optimized through CFD-based design to deliver high cooling efficiency and stable operation.

CFD flow analysis results

Liquid Cooling for AI Chips, CPU, NIC, and Memory

Liquid cooling is applied not just to GPUs but to all major heat-generating components, including the CPU, memory, NICs, and chipset. Heat is managed efficiently across the entire system to maintain optimal performance.

Memory cold plate

Patented StackFlow Technology

deepgadget's patented StackFlow technology achieves maximum heat dissipation efficiency in limited space. Its stacked radiator structure, flow-path design, and parallel cooling flow enable stable temperature management.

• Cooling System for Server — Patent No. 10-2879706

StackFlow technology

In-house Cooling Plate Design Capability

We design cooling plates optimized for each chip including GPU, CPU, and NIC to maximize heat transfer efficiency. Patented design ensures uniform cooling performance.

• Water Cooling Device for Computer and Driving Method — Patent No. 10-2118786

Cold plate design

Powerful Flow Rate from 4 Pumps

Four pumps — two per loop — continuously push coolant through two independent cooling circuits. With up to 16 liters per minute circulating at full force, heat from AI processors and CPUs is absorbed and carried away the instant it's generated. And if one loop encounters an issue, the other keeps cooling without interruption, so the server never has to throttle down.
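As a sanity check on the flow rate, a simple heat-balance sketch (Q = ṁ · cp · ΔT) with assumed water-like coolant properties and an illustrative 10 K loop temperature rise shows that 16 L/min can carry on the order of the 10 kW headline load:

```python
# How much heat can 16 L/min of coolant carry? Q = m_dot * c_p * dT.
# Assumes water-like coolant (density ~1 kg/L, c_p ~4186 J/(kg·K))
# and an illustrative 10 K loop temperature rise.
flow_l_min = 16
density = 1.0        # kg/L (assumed)
cp = 4186            # J/(kg*K) (assumed, water-like)
delta_t = 10         # K, illustrative

m_dot = flow_l_min * density / 60      # kg/s
q_kw = m_dot * cp * delta_t / 1000
print(f"~{q_kw:.1f} kW of heat transport")
```

Roughly 11 kW under these assumptions, consistent with managing a 10 kW heat load.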

Series-parallel pump configuration

Certified Quality & Trust

Verified by domestic and international certification bodies — ISO 9001 quality management, direct in-house production, and venture company recognition for technology and growth potential.

ISO 9001 certification

Quality Management

ISO 9001

Direct Production Certification

Ministry of SMEs

Direct Production Cert.

Venture Company Certification

Ministry of SMEs

Venture Company Cert.

Configuration

Latest Architecture for Large-Scale Workloads

Latest AI Processor Support

  • NVIDIA: RTX PRO 6000 Blackwell · H200 NVL · RTX 5090
  • AMD: RX 9070 XT · AI PRO R9700
  • Tenstorrent: Blackhole p150a · Wormhole n300s
  • FuriosaAI: RNGD · WARBOY

Latest CPU Support (Dual)

  • Intel Xeon 6700 Series
  • Intel Xeon 6900 Series
  • AMD EPYC 9005 Series
  • Dual CPU Configuration Support

PCIe Gen 5 Slots

  • Up to 15 PCIe 5.0 x16 Slots
  • Up to 128 GB/s bidirectional bandwidth per slot

DDR5-6400 Memory Support

  • ECC RDIMM Support
  • Intel Xeon 6700 Series: up to 32 DIMMs (4TB)
  • Intel Xeon 6900 Series / AMD EPYC 9005 Series: up to 24 DIMMs (3TB)

High-Speed NIC Support

  • NVIDIA ConnectX-7 NDR / 400GbE
  • NVIDIA ConnectX-6 HDR / 200GbE
  • InfiniBand / Ethernet Dual-Mode
  • RDMA, RoCEv2 Support
IPMI

1GbE & IPMI Support

Comes standard with 2 x 1GbE Base-T ports and IPMI remote management. Monitor and manage your server reliably from anywhere.

Storage

Expandable Storage

Supports M.2 NVMe SSD (HW RAID) and up to 16 U.2 NVMe SSDs. Capacities range from 3.84TB to 61.44TB per drive; with 61.44TB drives, total storage reaches up to 983TB.
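The maximum capacity quoted above is simple arithmetic over the U.2 drive bays:

```python
# Maximum U.2 storage: 16 bays x 61.44 TB per drive.
drives = 16
tb_per_drive = 61.44

total_tb = drives * tb_per_drive
print(total_tb)  # 983.04 -> quoted as "up to 983 TB"
```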

Power

Enterprise 3200W Hot-Swap Redundant PSU

Equipped with four 3,200W 80 Plus Titanium certified PSUs. Supports both hot-swap and redundancy for stable power delivery during live server operation.

dg5R Supported AI Processor Performance Comparison

Select an AI Processor supported by dg5R to compare its performance against competing products. Highlighted items are models supported by dg5R.

dg5R Supported: NVIDIA RTX PRO 6000 Blackwell SE
vs. NVIDIA H100 NVL
(leading value in each row marked with *)

                     RTX PRO 6000 Blackwell SE   NVIDIA H100 NVL
Memory               96 GB GDDR7 *               94 GB HBM3
TF32 (TFLOPS)        234                         494 *
BF16 (TFLOPS)        468                         989 *
FP16 (TFLOPS)        936                         1,979 *
FP8 (TFLOPS)         3,784                       3,958 *
FP64 (TFLOPS)        1.88                        34 *
FP32 (TFLOPS)        234                         494 *
RT Core (TFLOPS)     354.5 *                     —

* Performance figures are based on manufacturer-disclosed specifications; some are estimated. Actual performance may vary depending on system configuration and environment.

Support

Convenient Monitoring, Proactive All-in-One Management.

Dashboard

gadgetini, our web-based monitoring software, covers the entire server across six dedicated panels: Overview, Cooling Health, Compute Health, Network Health, History, and Alerting. Color-coded Normal / Warning / Critical status gives you an instant read on server health, while real-time data spans per-loop coolant temperature, leakage, and level — alongside AI processor and CPU temperature, power, and utilization — plus NIC link status and InfiniBand chipset temps. A 24-hour history graph lets you trace anomalies back to their source, and threshold breaches trigger instant email and Slack alerts. Accessible from desktop or mobile, anywhere.
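Threshold-based status classification of this kind can be sketched as follows; the metric names and threshold values here are hypothetical, not gadgetini's actual configuration:

```python
# Minimal sketch of threshold-based Normal/Warning/Critical status,
# in the style described above. Metric names and limits are hypothetical.
THRESHOLDS = {
    "coolant_temp_c": {"warning": 45, "critical": 55},
    "gpu_temp_c": {"warning": 80, "critical": 90},
}

def status(metric: str, value: float) -> str:
    t = THRESHOLDS[metric]
    if value >= t["critical"]:
        return "Critical"   # e.g. trigger email / Slack alert
    if value >= t["warning"]:
        return "Warning"
    return "Normal"

print(status("coolant_temp_c", 48))  # Warning
print(status("gpu_temp_c", 72))      # Normal
```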

dg5R Dashboard - Overview

Overview


Server status is color-coded Normal / Warning / Critical at a glance. Cooling (Inlet/Outlet/ΔT, leakage, level), AI processors, CPUs, and network link status are all summarized on one panel.

Display

The built-in front LCD display lets you check server status directly. It shows all information available on the dashboard, and display modes can be freely selected via the gadgetini web UI.

CPU Temperature
GPU Temperature
Memory
Air
Q: What happens if there is a leak in the Liquid Cooling System?
A: deep gadget is engineered as a fully sealed structure, minimizing the risk of leakage under normal circumstances. However, we do have precautionary measures in place. Our system is equipped to shut down automatically in the event of a critical failure such as a leak caused by physical shock or other factors. Additionally, the amount of coolant that may leak due to our liquid cooling system design is minimal, typically not exceeding tens of milliliters. This level of leakage has negligible impact on other devices and minimal effect on deep gadget itself. In the event of any malfunction, our engineers will promptly address the issue, replacing the problematic part and reinstalling it as needed.

Instant deep learning.
Preloaded for AI. Ready to train from day one.

deep gadget comes preloaded with all the software needed for AI research and development—so you can start deep learning the moment you power it on. From the OS to the deep learning stack, everything is optimized by the deep gadget team. Designed for effortless compatibility, so you can stay focused on your research.





  • OS: Ubuntu · RHEL · Rocky Linux · Windows
  • Drivers: NVIDIA · AMD · Tenstorrent · InfiniBand / GbE
  • Runtime & Libraries: CUDA · cuDNN · NCCL · cuBLAS · TensorRT · ROCm
  • Deep Learning Frameworks & Solutions: PyTorch · TensorFlow · DeepSpeed · Ollama · Horovod · vLLM · 3rd-party solutions
  • Development Tools: MPI · Anaconda · Docker
Specifications

dg5R Specifications

    Noise
    30 ~ 62 dB (Full load)
    Form Factor
    447 mm × 911.5 mm × 266 mm (6U)
    CPU
    Intel Xeon 6700 Series Dual, up to 144 Cores/CPU
    6725P · 6737P · 6747P · 6761P · 6756E · 6780E
    Intel Xeon 6900 Series Dual, up to 128 Cores/CPU
    6952P · 6972P · 6960P · 6978P · 6980P
    AMD EPYC 9005 Series Dual, up to 128 Cores/CPU
    9135 · 9355 · 9455 · 9555 · 9755
    GPU
    Up to 10x
    NVIDIA RTX PRO 6000 Blackwell Server Edition
    NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
    NVIDIA H200 NVL Tensor Core
    NVIDIA RTX 5090
    AMD RX 9070 XT · AMD AI PRO R9700
    tenstorrent Wormhole n300s · tenstorrent Blackhole p150a
    FuriosaAI RNGD · FuriosaAI WARBOY
    Memory
    DDR5-6400 RDIMM (ECC)
    Intel 6700: up to 4,096 GB (32 × 128 GB)
    Intel 6900 · AMD: up to 3,072 GB (24 × 128 GB)
    Primary Storage
    4× M.2 NVMe SSD (1 / 2 / 4 TB) HW RAID Controller included
    Up to 16× U.2 NVMe SSD (up to 983 TB)
    PCIe
    Up to 15 Gen 5 x16 slots (up to 10 for AI processors)
    PSU
    4 × Hot-swappable redundant 3,200 W 80 PLUS Titanium
    Networking | Max Bandwidth
    2 x 1GbE Base-T, IPMI
    NVIDIA ConnectX-6 — InfiniBand EDR / HDR200 / Ethernet 200GbE
    NVIDIA ConnectX-7 — InfiniBand NDR / Ethernet 400GbE
    Supported OS
    Ubuntu · RHEL · Rocky Linux · Windows Server
    Warranty
    3-Year Warranty

    dg5W

    Rack / Tower type


    dg5R

    Rackmount

    Product image update coming soon

    dg-Tenstorrent
