ThinkSystem Mellanox ConnectX-6 HDR100 InfiniBand Adapters
Product Guide

Updated: 25 Sep 2019
Form Number: LP1170
PDF size: 10 pages, 215 KB

Abstract

The ThinkSystem Mellanox ConnectX-6 HDR100 InfiniBand Adapters offer 100 Gb/s InfiniBand connectivity for demanding HPC, cloud, storage, and machine learning applications.

This product guide provides essential presales information to understand the adapter and its key features, specifications, and compatibility. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about the ConnectX-6 HDR100 adapters and consider their use in IT solutions.

Change History

Changes in the 25 September 2019 update:

Introduction

The ThinkSystem Mellanox ConnectX-6 HDR100 InfiniBand Adapters offer 100 Gb/s InfiniBand connectivity for demanding HPC, cloud, storage, and machine learning applications. The adapter is also planned to support 100 Gb Ethernet through a future firmware upgrade.

The following figure shows the 2-port ConnectX-6 HDR100 InfiniBand adapter (the standard heat sink has been removed in this photo).

Figure 1. ThinkSystem Mellanox ConnectX-6 HDR100 QSFP56 2-port PCIe InfiniBand Adapter

Did you know?

Mellanox ConnectX-6 brings new acceleration engines for maximizing High Performance Computing, Machine Learning, Storage, Web 2.0, Cloud, Data Analytics, and Telecommunications platforms. ConnectX-6 HDR100 adapters support up to 100 Gb/s total bandwidth at sub-600 ns latency and offer NVMe over Fabrics offloads, providing a high-performance and flexible solution for the most demanding applications and markets. ThinkSystem servers with Mellanox adapters and switches deliver intelligent fabrics for High Performance Computing clusters.

Part number information

The following table shows the part numbers for the adapters.

Table 1. Ordering information
Part number Feature code Mellanox equivalent Description
4C57A14177 B4R9 MCX653105A-ECAT ThinkSystem Mellanox ConnectX-6 HDR100 QSFP56 1-port PCIe InfiniBand Adapter
4C57A14178 B4RA MCX653106A-ECAT ThinkSystem Mellanox ConnectX-6 HDR100 QSFP56 2-port PCIe InfiniBand Adapter

The part numbers include the following:

  • One Mellanox adapter
  • Low-profile (2U) and full-height (3U) adapter brackets
  • Documentation

Supported cables

The following table lists the supported active optical cables.

Table 2. Optical cables
Part number Feature code Description
QSFP56 HDR IB to 2x HDR100 Optical Splitter Cables
4Z57A14196 B4R4 3m Mellanox HDR IB to 2x HDR100 Splitter Optical QSFP56 Cable
4Z57A14197 B4R5 5m Mellanox HDR IB to 2x HDR100 Splitter Optical QSFP56 Cable
4Z57A14198 B4R6 10m Mellanox HDR IB to 2x HDR100 Splitter Optical QSFP56 Cable
4Z57A14199 B4R7 15m Mellanox HDR IB to 2x HDR100 Splitter Optical QSFP56 Cable
4Z57A14214 B4R8 20m Mellanox HDR IB to 2x HDR100 Splitter Optical QSFP56 Cable
4Z57A11490 B68K 30m Mellanox HDR IB to 2x HDR100 Splitter Optical QSFP56 Cable

The following table lists the supported direct-attach copper (DAC) cables.

Table 3. Copper cables
Part number Feature code Description
QSFP56 HDR IB Passive DAC Cables
4Z57A14182 B4QQ 0.5m Mellanox HDR IB Passive Copper QSFP56 Cable
4Z57A14183 B4QR 1m Mellanox HDR IB Passive Copper QSFP56 Cable
4Z57A14184 B4QS 1.5m Mellanox HDR IB Passive Copper QSFP56 Cable
4Z57A14185 B4QT 2m Mellanox HDR IB Passive Copper QSFP56 Cable
QSFP56 HDR IB to 2x HDR100 Passive DAC Splitter Cables
4Z57A14193 B4R1 1m Mellanox HDR IB to 2x HDR100 Splitter Passive Copper QSFP56 Cable
4Z57A14194 B4R2 1.5m Mellanox HDR IB to 2x HDR100 Splitter Passive Copper QSFP56 Cable
4Z57A11477 B68L 2m Mellanox HDR IB to 2x HDR100 Splitter Passive Copper QSFP56 Cable

Features

Machine learning and big data environments

Data analytics has become an essential function within many enterprise data centers, clouds and hyperscale platforms. Machine learning relies on especially high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. ConnectX-6 offers an excellent solution to provide machine learning applications with the levels of performance and scalability that they require.

ConnectX-6 uses RDMA technology to deliver low latency and high performance. ConnectX-6 further enhances RDMA network capabilities by delivering end-to-end packet-level flow control.
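
The verbs control path that such applications use looks roughly like the following. This is a minimal sketch in C against the standard libibverbs API; the device index, buffer size, and queue depths are illustrative, and a real application would also exchange queue pair numbers and memory keys out of band, and transition the QP through INIT/RTR/RTS, before posting RDMA work requests.

    /*
     * Minimal libibverbs control-path sketch. Build: gcc rdma_sketch.c -libverbs
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer so the adapter can DMA to/from it directly. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* Completion queue and a reliable-connected queue pair. */
        struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
        struct ibv_qp_init_attr attr = {
            .send_cq = cq,
            .recv_cq = cq,
            .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
            .qp_type = IBV_QPT_RC,
        };
        struct ibv_qp *qp = ibv_create_qp(pd, &attr);

        printf("QP 0x%x created; local lkey 0x%x\n", qp->qp_num, mr->lkey);
        /* ... connection setup and IBV_WR_RDMA_WRITE requests would follow ... */

        ibv_destroy_qp(qp);
        ibv_destroy_cq(cq);
        ibv_dereg_mr(mr);
        ibv_free_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }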

Security

ConnectX-6 block-level encryption offers a critical innovation in network security. As data is stored or retrieved, it undergoes encryption and decryption. The ConnectX-6 hardware offloads IEEE AES-XTS encryption and decryption from the CPU, reducing latency and CPU utilization. It also protects users sharing the same resources through the use of dedicated encryption keys.

By performing block-storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypting drives. This gives customers the freedom to choose their preferred storage devices, including those that traditionally do not provide encryption. ConnectX-6 can support Federal Information Processing Standards (FIPS) compliance.
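
For illustration, the following sketch shows the same XTS-AES transform in software using OpenSSL's EVP API. The 64-byte key (two concatenated AES-256 keys), the per-sector tweak, and the sector contents are placeholder values; on ConnectX-6 this work happens in the adapter hardware rather than on the CPU.

    /* Software equivalent of the offloaded XTS-AES transform.
     * Build: gcc xts_sketch.c -lcrypto
     */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>

    int main(void)
    {
        /* EVP_aes_256_xts takes a 64-byte key: two concatenated AES-256 keys. */
        unsigned char key[64] = { 0x01 };   /* placeholder key material */
        unsigned char tweak[16] = { 0 };    /* per-sector tweak, e.g. the LBA */
        unsigned char sector[512], out[512];
        int outl = 0, tmplen = 0;

        memset(sector, 0xAB, sizeof(sector));   /* fake sector contents */

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_256_xts(), NULL, key, tweak);
        EVP_EncryptUpdate(ctx, out, &outl, sector, sizeof(sector));
        EVP_EncryptFinal_ex(ctx, out + outl, &tmplen);
        EVP_CIPHER_CTX_free(ctx);

        printf("encrypted %d bytes (first byte: 0x%02x)\n", outl + tmplen, out[0]);
        return 0;
    }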

ConnectX-6 also includes a hardware Root-of-Trust (RoT), which uses an HMAC based on a device-unique key. This provides both secure boot and cloning protection. Delivering best-in-class device and firmware protection, ConnectX-6 also provides secure debugging capabilities without the need for physical access.

Storage environments

NVMe storage devices offer very fast access to storage media. The evolving NVMe over Fabric (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.

Cloud and Web 2.0 environments

Telco, Cloud and Web 2.0 customers developing their platforms on software-defined network (SDN) environments are leveraging the Virtual Switching capabilities of server operating systems to enable maximum flexibility in the management and routing protocols of their networks.

Open vSwitch (OVS) is an example of a virtual switch that allows virtual machines to communicate among themselves and with the outside world. Software-based virtual switches, traditionally residing in the hypervisor, are CPU intensive, affecting system performance and preventing full utilization of the available CPU for compute functions.

To address such performance issues, ConnectX-6 offers Mellanox Accelerated Switching and Packet Processing (ASAP2) Direct technology. ASAP2 offloads the vSwitch/vRouter by handling the data plane in the NIC hardware while leaving the control plane unmodified. As a result, significantly higher vSwitch/vRouter performance is achieved without the associated CPU load.

The vSwitch/vRouter offload functions supported by ConnectX-5 and ConnectX-6 include encapsulation and de-capsulation of overlay network headers, as well as stateless offloads of inner packets, packet headers re-write (enabling NAT functionality), hairpin, and more.

In addition, ConnectX-6 offers intelligent flexible pipeline capabilities, including programmable flexible parser and flexible match-action tables, which enable hardware offloads for future protocols.

Technical specifications

The adapters have the following technical specifications.

Form factor

  • Single-slot low-profile adapter

PCI Express Interface

  • PCIe 3.0 x16 host interface (a link-verification sketch follows this list)
  • Support for PCIe x1, x2, x4, x8, and x16 configurations
  • PCIe Atomic
  • TLP (Transaction Layer Packet) Processing Hints (TPH)
  • PCIe switch Downstream Port Containment (DPC) enablement for PCIe hot-plug
  • Advanced Error Reporting (AER)
  • Access Control Service (ACS) for peer-to-peer secure communication
  • Process Address Space ID (PASID) Address Translation Services (ATS)
  • IBM CAPIv2 (Coherent Accelerator Processor Interface)
  • Support for MSI/MSI-X mechanisms
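
As a quick sanity check on a deployed system, the negotiated PCIe link can be read from the kernel's standard sysfs attributes, as in the illustrative sketch below. The PCI address 0000:5e:00.0 is a placeholder for the adapter's actual bus/device/function, which can be found with lspci.

    /* Check that the adapter trained at the expected PCIe 3.0 x16 link. */
    #include <stdio.h>

    static void show(const char *attr)
    {
        char path[256], val[64] = "";
        /* 0000:5e:00.0 is an example address, not a fixed value. */
        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/0000:5e:00.0/%s", attr);
        FILE *f = fopen(path, "r");
        if (f && fgets(val, sizeof(val), f))
            printf("%-20s %s", attr, val);
        if (f)
            fclose(f);
    }

    int main(void)
    {
        show("current_link_speed");   /* expect 8 GT/s for PCIe 3.0 */
        show("current_link_width");   /* expect 16 */
        return 0;
    }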

Connectivity

  • One or two QSFP56 ports
  • Supports passive copper cables with ESD protection
  • Powered connectors for optical and active cable support

InfiniBand

  • Supports interoperability with InfiniBand switches (up to HDR100, as 2 lanes of 50Gb/s data rate)
  • Total connectivity is 100 Gb/s:
    • The one-port adapter supports a single 100 Gb/s link
    • The two-port adapter supports either two 50 Gb/s connections, or one active 100 Gb/s link with the other port as standby
  • HDR100 / EDR / FDR / QDR / DDR / SDR
  • IBTA Specification 1.3 compliant
  • RDMA, Send/Receive semantics
  • Hardware-based congestion control
  • Atomic operations
  • 16 million I/O channels
  • MTU from 256 bytes to 4 KB; message sizes up to 2 GB
  • 8 virtual lanes + VL15

Ethernet (support planned for a future date)

  • Supports interoperability with Ethernet switches (up to 100GbE, as 2 lanes of 50 Gb/s data rate)
  • Total connectivity is 100 Gb/s:
    • The one-port adapter supports a single 100 Gb/s link
    • The two-port adapter supports either two 50 Gb/s connections, or one active 100 Gb/s link with the other port as standby
  • Supports 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
  • IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
  • IEEE 802.3by and Ethernet Consortium 25/50 Gigabit Ethernet, supporting all FEC modes
  • IEEE 802.3ba 40 Gigabit Ethernet
  • IEEE 802.3ae 10 Gigabit Ethernet
  • IEEE 802.3az Energy Efficient Ethernet
  • IEEE 802.3ap based auto-negotiation and KR startup
  • IEEE 802.3ad, 802.1AX Link Aggregation
  • IEEE 802.1Q, 802.1P VLAN tags and priority
  • IEEE 802.1Qau (QCN) – Congestion Notification
  • IEEE 802.1Qaz (ETS)
  • IEEE 802.1Qbb (PFC)
  • IEEE 802.1Qbg
  • IEEE 1588v2
  • Jumbo frame support (9.6KB)

Enhanced Features

  • Hardware-based reliable transport
  • Collective operations offloads
  • Vector collective operations offloads
  • PeerDirect RDMA (GPUDirect) communication acceleration
  • 64b/66b encoding
  • Enhanced Atomic operations
  • Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
  • Extended Reliable Connected transport (XRC)
  • Dynamically Connected transport (DCT)
  • On demand paging (ODP)
  • MPI Tag Matching
  • Rendezvous protocol offload
  • Out-of-order RDMA supporting Adaptive Routing
  • Burst buffer offload
  • In-Network Memory registration-free RDMA memory access

CPU Offloads

  • RDMA over Converged Ethernet (RoCE)
  • TCP/UDP/IP stateless offload
  • LSO, LRO, checksum offload
  • RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, Receive flow steering
  • Data Plane Development Kit (DPDK) for kernel-bypass applications (see the sketch after this list)
  • Open vSwitch (OVS) offload using ASAP2
    • Flexible match-action flow tables
    • Tunneling encapsulation / de-capsulation
  • Intelligent interrupt coalescence
  • Header rewrite supporting hardware offload of NAT router
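
The following is a minimal sketch of the DPDK kernel-bypass pattern referenced above: initialize the EAL, set up one receive queue, and poll for bursts of packets. Port 0, the pool sizes, and the descriptor counts are illustrative, and a real application would add error checking and actual packet processing.

    /* Minimal DPDK receive loop. Build against DPDK (pkg-config libdpdk). */
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        struct rte_mempool *pool = rte_pktmbuf_pool_create(
            "mbufs", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

        struct rte_eth_conf conf = {0};
        uint16_t port = 0;                      /* first DPDK-bound port */
        rte_eth_dev_configure(port, 1, 1, &conf);
        rte_eth_rx_queue_setup(port, 0, 512, rte_eth_dev_socket_id(port),
                               NULL, pool);
        rte_eth_tx_queue_setup(port, 0, 512, rte_eth_dev_socket_id(port), NULL);
        rte_eth_dev_start(port);

        for (;;) {                              /* poll forever in this sketch */
            struct rte_mbuf *bufs[32];
            uint16_t n = rte_eth_rx_burst(port, 0, bufs, 32);
            for (uint16_t i = 0; i < n; i++)
                rte_pktmbuf_free(bufs[i]);      /* process, then release */
        }
        return 0;
    }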

Storage Offloads

  • Block-level encryption: XTS-AES 256/512 bit key
  • NVMe over Fabric offloads for target machine
  • Erasure Coding offload - offloading Reed-Solomon calculations
  • T10 DIF - signature handover operation at wire speed, for ingress and egress traffic
  • Storage Protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

Overlay Networks

  • RoCE over overlay networks
  • Stateless offloads for overlay network tunneling protocols
  • Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks
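
For reference, the VXLAN encapsulation that the adapter offloads wraps each frame in the following 8-byte header (RFC 7348 layout); the VNI value in this sketch is illustrative.

    /* The 8-byte VXLAN header whose encap/decap ConnectX-6 offloads. */
    #include <stdint.h>
    #include <stdio.h>
    #include <arpa/inet.h>

    struct vxlan_hdr {
        uint32_t flags;         /* bit 27 (I flag) set => VNI is valid */
        uint32_t vni_reserved;  /* 24-bit VNI in the upper three bytes */
    };

    int main(void)
    {
        struct vxlan_hdr h;
        uint32_t vni = 5001;                /* example virtual network ID */
        h.flags = htonl(1u << 27);          /* I flag: 0x08000000 */
        h.vni_reserved = htonl(vni << 8);
        printf("VXLAN header: %08x %08x\n",
               ntohl(h.flags), ntohl(h.vni_reserved));
        return 0;
    }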

Hardware-Based I/O Virtualization

  • Single Root IOV
  • Address translation and protection
  • VMware NetQueue support
  • SR-IOV: Up to 512 Virtual Functions (an enablement sketch follows this list)
  • SR-IOV: Up to 16 Physical Functions per host
  • Virtualization hierarchies (network partitioning, NPAR)
    • Virtualizing Physical Functions on a physical port
    • SR-IOV on every Physical Function
  • Configurable and user-programmable QoS
  • Guaranteed QoS for VMs
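
A minimal sketch of enabling virtual functions through the kernel's standard sriov_numvfs interface follows. The PCI address and the VF count are placeholders; the write requires root privileges, and sriov_totalvfs should be checked first.

    /* Request SR-IOV virtual functions via sysfs. */
    #include <stdio.h>

    int main(void)
    {
        /* 0000:5e:00.0 is an example address for the physical function. */
        const char *path = "/sys/bus/pci/devices/0000:5e:00.0/sriov_numvfs";
        FILE *f = fopen(path, "w");
        if (!f) {
            perror("open sriov_numvfs");
            return 1;
        }
        fprintf(f, "8");    /* request 8 VFs; the adapter allows up to 512 */
        fclose(f);
        return 0;
    }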

HPC Software Libraries

  • HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS and varied commercial packages
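
The minimal MPI ping-pong below illustrates the traffic pattern these libraries drive over the fabric; with the adapter's tag-matching and RDMA offloads, message matching and data movement are largely handled in the HCA rather than the CPU. The message size and tag values are illustrative.

    /* Two-rank MPI ping-pong.
     * Build and run: mpicc pingpong.c && mpirun -np 2 ./a.out
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char buf[4096] = {0};
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 42, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 43, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("ping-pong complete\n");
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 42, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 43, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }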

Management and Control

  • NC-SI, MCTP over SMBus and MCTP over PCIe - BMC interface
  • PLDM for Monitor and Control DSP0248
  • PLDM for Firmware Update DSP0267
  • SDN management interface for managing the eSwitch
  • I2C interface for device control and configuration
  • General Purpose I/O pins
  • SPI interface to Flash
  • JTAG IEEE 1149.1 and IEEE 1149.6

Remote Boot

  • Remote boot over InfiniBand
  • Remote boot over Ethernet
  • Remote boot over iSCSI
  • Unified Extensible Firmware Interface (UEFI)
  • Preboot Execution Environment (PXE)

Server support

The following table lists the ThinkSystem servers that are compatible.

Table 4. ThinkSystem server support (Y = supported, N = not supported)
Server                      1-port adapter, 4C57A14177    2-port adapter, 4C57A14178
Edge servers
SE350 (7Z46/7D1X)           N                             N
1S Intel servers
ST50 (7Y48/7Y50)            N                             N
ST250 (7Y45/7Y46)           N                             N
SR150 (7Y54)                N                             N
SR250 (7Y51/7Y52)           N                             N
2S Intel servers
ST550 (7X09/7X10)           N                             N
SR530 (7X07/7X08)           N                             N
SR550 (7X03/7X04)           N                             N
SR570 (7Y02/7Y03)           N                             N
SR590 (7X98/7X99)           N                             N
SR630 (7X01/7X02)           Y                             Y
SR650 (7X05/7X06)           Y                             Y
SR670 (7Y36/7Y37/7Y38)      N                             N
AMD servers
SR635 (7Y98/7Y99)           N                             N
SR655 (7Y00/7Z01)           N                             N
4S Intel servers
SR850 (7X18/7X19)           Y                             Y
SR860 (7X69/7X70)           Y                             Y
SR950 (7X11/7X12/7X13)      Y                             Y
Dense/Blade servers
SD530 (7X21)                Y                             Y
SD650 (7X58)                N                             N
SN550 (7X16)                N                             N
SN850 (7X15)                N                             N

Both adapters are supported on the same servers: SR630, SR650, SR850, SR860, SR950, and SD530.

Operating system support

The adapters support the operating systems listed in the following tables.

Tip: These tables are automatically generated based on data from Lenovo ServerProven.

Table 5. Operating system support for ThinkSystem Mellanox ConnectX-6 HDR100/100GbE QSFP56 1-port PCIe VPI Adapter, 4C57A14177
Operating systems                    SD530 (Gen 2)  SR630 (Gen 2)  SR650 (Gen 2)  SR850 (Gen 2)  SR860 (Gen 2)  SR950 (Gen 2)  SD530 (Gen 1)  SR630 (Gen 1)  SR650 (Gen 1)  SR850 (Gen 1)  SR860 (Gen 1)  SR950 (Gen 1)
Red Hat Enterprise Linux 7.5 N N N N N N Y Y N Y Y Y
Red Hat Enterprise Linux 7.6 Y Y Y Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 12 SP3 N N N N N N Y Y N Y Y Y
SUSE Linux Enterprise Server 12 SP4 Y Y Y Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 15 Y Y Y Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 15 SP1 Y Y Y Y Y Y Y Y Y Y Y Y
Table 6. Operating system support for ThinkSystem Mellanox ConnectX-6 HDR100/100GbE QSFP56 2-port PCIe VPI Adapter, 4C57A14178
Operating systems                    SD530 (Gen 2)  SR630 (Gen 2)  SR650 (Gen 2)  SR850 (Gen 2)  SR860 (Gen 2)  SR950 (Gen 2)  SD530 (Gen 1)  SR630 (Gen 1)  SR650 (Gen 1)  SR850 (Gen 1)  SR860 (Gen 1)  SR950 (Gen 1)
Red Hat Enterprise Linux 7.5 N N N N N N Y Y N Y Y Y
Red Hat Enterprise Linux 7.6 Y Y Y Y Y Y Y Y N Y Y Y
SUSE Linux Enterprise Server 12 SP3 N N N N N N Y Y N Y Y Y
SUSE Linux Enterprise Server 12 SP4 Y Y Y Y Y Y Y Y N Y Y Y
SUSE Linux Enterprise Server 15 Y Y Y Y Y Y Y Y Y Y Y Y
SUSE Linux Enterprise Server 15 SP1 Y Y Y Y Y Y Y Y Y Y Y Y

Regulatory approvals

The adapters have the following regulatory approvals:

  • Safety: CB / cTUVus / CE
  • EMC: CE / FCC / VCCI / ICES / RCM / KC
  • RoHS: Compliant

Operating environment

The adapters have the following operating characteristics:

  • Typical power consumption (passive cables): 15.6W
  • Maximum power available through QSFP56 port: 5W
  • Temperature
    • Operational: 0°C to 55°C
    • Non-operational: -40°C to 70°C
  • Humidity: up to 90% relative humidity

Warranty

One year limited warranty. When installed in a Lenovo server, the adapter assumes the server’s base warranty and any warranty upgrades.

Trademarks

Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.

The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
ServerProven®
ThinkSystem

The following terms are trademarks of other companies:

Intel® is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries.

Linux® is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.