
NVIDIA Mellanox MCX653105A-HDAT 200G QSFP56 VPI Adapter

[shortdesc] Single QSFP56 port, 200GbE and HDR InfiniBand, PCIe Gen4 x16, 19.3W typical power, 0–55°C operating, RoHS compliant [/shortdesc]

Accelerate 200G fabrics with ConnectX-6 VPI performance

The NVIDIA ConnectX-6 VPI MCX653105A-HDAT delivers high-throughput networking over a single QSFP56 port, with dual-protocol (VPI) flexibility between Ethernet and InfiniBand. It supports Ethernet at up to 200 Gb/s and InfiniBand at up to HDR/HDR100, enabling fast, low-latency links for modern clusters and storage fabrics. With a PCI Express Gen3/Gen4 x16 host interface, it integrates cleanly into performance-critical servers.

Key Benefits & Features

  • 200G dual-protocol: Operate as 200GbE or HDR/HDR100 InfiniBand on one QSFP56 port for maximum deployment flexibility.
  • Broad speed negotiation: Auto-negotiates IB SDR/DDR/QDR/FDR/FDR10/EDR/HDR100/HDR and Ethernet 1/10/25/40/50/100/200G for backward compatibility.
  • PCIe Gen4 x16 bandwidth: SerDes signaling at 8.0 GT/s (Gen3) or 16.0 GT/s (Gen4) provides host-interface headroom for high-throughput, latency-sensitive workloads.
  • Thermal-ready design: Defined airflow options (heatsink-to-port or port-to-heatsink) with LFM and temperature guidance for reliable operation.
  • Data center compliance: RoHS compliant with global safety and EMC certifications for smooth qualification.
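To put the "headroom" claim above in concrete terms, a PCIe Gen4 x16 link at 16 GT/s per lane with the standard 128b/130b line encoding delivers roughly 252 Gb/s of raw host bandwidth, comfortably above the adapter's 200 Gb/s line rate. A quick back-of-the-envelope check (standard PCIe arithmetic, not a figure from the datasheet):

```python
# Effective PCIe Gen4 x16 link bandwidth vs. the adapter's 200 Gb/s line rate.
LANES = 16
RATE_GTS = 16.0       # GT/s per lane (PCIe Gen4)
ENCODING = 128 / 130  # 128b/130b line-encoding efficiency

effective_gbps = RATE_GTS * LANES * ENCODING
print(f"Effective PCIe bandwidth: {effective_gbps:.1f} Gb/s")  # ~252.1 Gb/s
assert effective_gbps > 200  # headroom above the 200G port
```

Note this counts only line encoding; protocol overhead (TLP headers, flow control) reduces usable throughput somewhat, but the link still clears 200 Gb/s.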

Ideal Use Cases

  • High-performance computing clusters requiring HDR InfiniBand or 200GbE
  • AI and machine learning training or inference nodes
  • RDMA-enabled storage and NVMe-based fabric backbones
  • Hybrid IB/Ethernet data centers consolidating on QSFP56
  • 100/200G aggregation and server access upgrades

Technical Specifications

  • Connector: Single QSFP56 for InfiniBand and Ethernet (copper and optical)
  • Ethernet Standards: 200GBASE-CR4/KR4/SR4; 100GBASE-CR4/CR2/KR4/SR4; 50GBASE-R2/R4; 40GBASE-CR4/KR4/SR4/LR4/ER4/R2; 25GBASE-R; 20GBASE-KR2; 10GBASE-LR/ER/CX4/CR/KR/SR; SGMII; 1000BASE-CX/KX
  • InfiniBand: IBTA v1.4a; SDR/DDR/QDR/FDR10/FDR/EDR/HDR100/HDR
  • Data Rates: Ethernet 1/10/25/40/50/100/200 Gb/s; InfiniBand up to HDR
  • PCI Express: Gen3/4 x16 (compatible with 2.0 and 1.1)
  • Power: Typical 19.3 W with passive cables; QSFP56 port power available up to 5 W
  • Voltage/Current: 3.3 V AUX, maximum 100 mA
  • Airflow Guidance: Passive cables: 350 LFM at 55°C (heatsink to port) or 250 LFM at 35°C (port to heatsink); NVIDIA active 4.7 W cables: 500 LFM at 55°C (heatsink to port) or 250 LFM at 35°C (port to heatsink)
  • Environmental: Operating 0°C to 55°C; non-operating -40°C to 70°C; humidity 10–85% operating / 10–90% non-operating; altitude up to 3050 m
  • Regulatory: Safety CB/cTUVus/CE; EMC CE/FCC/VCCI/ICES/RCM/KC; RoHS compliant
  • Physical Size: 167.65 mm × 68.90 mm (6.6 in × 2.71 in)
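For quick thermal planning, the airflow guidance above can be expressed as a small lookup table. The values below are transcribed directly from the specification list; the function name and key scheme are illustrative, not part of any NVIDIA tooling:

```python
# Minimum airflow (LFM) by cable type, airflow direction, and ambient temperature,
# transcribed from the adapter's airflow guidance above.
AIRFLOW_LFM = {
    ("passive", "heatsink-to-port", 55): 350,
    ("passive", "port-to-heatsink", 35): 250,
    ("active-4.7W", "heatsink-to-port", 55): 500,
    ("active-4.7W", "port-to-heatsink", 35): 250,
}

def required_lfm(cable: str, direction: str, ambient_c: int) -> int:
    """Return the minimum airflow in LFM for a listed operating point."""
    return AIRFLOW_LFM[(cable, direction, ambient_c)]

print(required_lfm("passive", "heatsink-to-port", 55))  # 350
```

Operating points outside those listed (e.g. intermediate ambient temperatures) are not specified here and would need interpolation against the full thermal report.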

Power your 200G upgrade with confidence

Unify Ethernet and InfiniBand at 200G with a single QSFP56 adapter built for performance and reliability. Order today.

$1,045.00