Mellanox ConnectX-4 Adapters Product Guide

Author
Updated: 27 Sep 2017
Form Number: LP0098
PDF size: 19 pages, 573 KB

Abstract

ConnectX-4 from Mellanox is a family of high-performance and low-latency Ethernet and InfiniBand adapters. The ConnectX-4 Lx EN adapters are available in 40 Gb and 25 Gb Ethernet speeds and the ConnectX-4 Virtual Protocol Interconnect (VPI) adapter supports 100 Gb EDR InfiniBand and 100 Gb Ethernet.

This product guide provides essential presales information to understand the ConnectX-4 offerings and their key features, specifications, and compatibility. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about ConnectX-4 network adapters and consider their use in IT solutions.

Change History

Changes in the September 27 update:

Introduction

ConnectX-4 from Mellanox is a family of high-performance and low-latency Ethernet and InfiniBand adapters. The ConnectX-4 Lx EN adapters are available in 40 Gb and 25 Gb Ethernet speeds and the ConnectX-4 Virtual Protocol Interconnect (VPI) adapter supports 100 Gb EDR InfiniBand and 100 Gb Ethernet.

These adapters address virtualized infrastructure challenges, delivering best-in-class performance to demanding markets and applications. They provide true hardware-based I/O isolation with scalability and efficiency, making them a cost-effective and flexible solution for Web 2.0, cloud, data analytics, database, and storage platforms.

The following figure shows the Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter (the standard heat sink has been removed in this photo).

Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter
Figure 1. Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter (heatsink removed)

Did you know?

Virtual Protocol Interconnect (VPI) enables standard networking, clustering, storage, and management protocols to seamlessly operate over any converged network by leveraging a consolidated software stack. Each port can operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics, and supports Ethernet over InfiniBand (EoIB) as well as RDMA over Converged Ethernet (RoCE). VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

Part number information

The following table shows the part numbers for adapters for ThinkSystem, System x, and NeXtScale servers.

Table 1. Ordering information - ThinkSystem and System x
Part number Feature code Mellanox equivalent Description
ConnectX-4 Lx 25 Gb & 40 Gb Ethernet adapters - PCIe low-profile form factor
01GR250 AUAJ MCX4121A-ACAT Mellanox ConnectX-4 Lx 2x25GbE SFP28 Adapter
00MM950 ATRN MCX4131A-BCAT Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter
ConnectX-4 Lx 25 Gb Ethernet adapters - ML2 form factor
00MN990 ATZR MCX4111A-ACAT* Mellanox ConnectX-4 Lx ML2 1x25GbE SFP28 Adapter
7ZT7A00507 AUKU MCX4121A-ACAT* ThinkSystem Mellanox ConnectX-4 Lx ML2 25Gb 2-Port SFP28 Ethernet Adapter
ConnectX-4 FDR InfiniBand / 40 Gb Ethernet adapters
7ZT7A00500 AUVG MCX454A-FCAT ThinkSystem Mellanox ConnectX-4 PCIe FDR 2-Port QSFP VPI Adapter
ConnectX-4 EDR InfiniBand / 100 Gb Ethernet adapters
00MM960 ATRP MCX456A-ECAT Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter
00KH924 ASWQ MCX455A-ECAT Mellanox ConnectX-4 1x100GbE/EDR IB QSFP28 VPI Adapter

* MCX4111A-ACAT and MCX4121A-ACAT are the PCIe versions of these ML2 form-factor adapters.

The following table shows the part numbers for adapters supported on ThinkServer systems.

Table 2. Ordering part numbers - ThinkServer
Part number Mellanox equivalent Description
ConnectX-4 Lx Ethernet adapters for ThinkServer
4XC0G88861 MCX4121A-ACAT Lenovo ThinkServer ConnectX-4 Lx PCIe 25Gb 2 Port SFP28 Ethernet Adapter by Mellanox

The part numbers include the following:

  • One Mellanox adapter
  • Low-profile (2U) and full-height (3U) adapter brackets
  • Documentation

Supported cables and transceivers

This section lists the supported transceivers and direct-attach copper (DAC) cables.

InfiniBand adapters

The Mellanox ConnectX-4 100GbE/EDR InfiniBand adapters support the InfiniBand cables listed in the following table.

Table 3. InfiniBand cable support for Mellanox ConnectX-4 EDR InfiniBand adapters
Part number Feature code Description
QSFP28 Passive DAC cables
00MP516 ASQT 0.5m Mellanox EDR IB Passive Copper QSFP28 Cable
00MP524 ASQV 1m Mellanox EDR IB Passive Copper QSFP28 Cable
00MP536 ASQY 2m Mellanox EDR IB Passive Copper QSFP28 Cable
QSFP28 Optical cables
00MP544 ASR0 10m Mellanox EDR IB Optical QSFP28 Cable

The Mellanox ConnectX-4 FDR InfiniBand adapter supports the cables listed in the following table.

Table 4. Cables for Mellanox FDR InfiniBand QSFP adapters
Part number Feature code Description
QSFP to 10Gb Ethernet (SFP+) Conversion
00D9676 ARZH Mellanox QSFP to SFP+ adapter
Passive copper cables for Mellanox FDR InfiniBand QSFP adapters
00KF002 ARZB 0.75m Mellanox QSFP Passive DAC Cable
00KF003 ARZC 1m Mellanox QSFP Passive DAC Cable
00KF004 ARZD 1.25m Mellanox QSFP Passive DAC Cable
00KF005 ARZE 1.5m Mellanox QSFP Passive DAC Cable
00KF006 ARZF 3m Mellanox QSFP Passive DAC Cable
Active optical cables for Mellanox FDR InfiniBand QSFP adapters
00KF007 ARYC 3m Mellanox IB FDR Active Optical Fiber Cable
00KF008 ARYD 5m Mellanox IB FDR Active Optical Fiber Cable
00KF009 ARYE 10m Mellanox IB FDR Active Optical Fiber Cable
00KF010 ARYF 15m Mellanox IB FDR Active Optical Fiber Cable
00KF011 ARYG 20m Mellanox IB FDR Active Optical Fiber Cable
00KF012 ARYH 30m Mellanox IB FDR Active Optical Fiber Cable

100 Gb Ethernet adapters

The Mellanox ConnectX-4 100GbE/EDR IB Adapters also support the 100 Gb Ethernet QSFP28 optical transceivers and DAC cables listed in the following table.

Table 5. Supported optical transceivers and DAC cables - 100 Gb Ethernet
Part number Feature code Description
100 GbE QSFP28 transceivers
7G17A03539 AV1D Lenovo 100GBase-SR4 QSFP28 Transceiver
100 GbE QSFP28 Active Optical Cables
7Z57A03546 AV1L Lenovo 3m 100G QSFP28 Active Optical Cable
7Z57A03547 AV1M Lenovo 5m 100G QSFP28 Active Optical Cable
7Z57A03548 AV1N Lenovo 10m 100G QSFP28 Active Optical Cable
7Z57A03549 AV1P Lenovo 15m 100G QSFP28 Active Optical Cable
7Z57A03550 AV1Q Lenovo 20m 100G QSFP28 Active Optical Cable
100 GbE QSFP28 DAC cables
7Z57A03561 AV1Z Lenovo 1m Passive 100G QSFP28 DAC Cable
7Z57A03562 AV20 Lenovo 3m Passive 100G QSFP28 DAC Cable
7Z57A03563 AV21 Lenovo 5m Passive 100G QSFP28 DAC Cable

40 Gb Ethernet adapter

The Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter supports the 40Gb DAC cables, transceiver, and optical cables that are listed in the following table.

Table 6. 40Gb cable support for Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter
Part number Feature code Description
40Gb Ethernet (QSFP) - 40GbE copper uses the QSFP+ to QSFP+ cables directly
49Y7890 A1DP 1 m QSFP+ to QSFP+ Cable
49Y7891 A1DQ 3 m QSFP+ to QSFP+ Cable
00D5810 A2X8 5m QSFP-to-QSFP cable
00D5813 A2X9 7m QSFP-to-QSFP cable
40Gb Ethernet (QSFP) - 40GbE optical uses QSFP+ transceiver with MTP optical cables
49Y7884 A1DR QSFP+ 40GBASE-SR4 Transceiver
00VX003 AT2U Lenovo 10m QSFP+ MTP-MTP OM3 MMF Cable
00VX005 AT2V Lenovo 30m QSFP+ MTP-MTP OM3 MMF Cable

The Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter also supports the 40Gb-to-10Gb QSFP to SFP+ adapter and the 10Gb DAC cables and optics shown in the following table.

Table 7. 10Gb support for Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter when using option 00D9676
Part number Feature code Description
40Gb Ethernet (QSFP) to 10Gb Ethernet (SFP+) Conversion
00D9676 ARZH Mellanox QSFP to SFP+ adapter
10Gb SFP+ cables
00D6288 A3RG .5 m Passive DAC SFP+ Cable
90Y9427 A1PH 1 m Passive DAC SFP+ Cable
00AY764 A51N 1.5m Passive DAC SFP+ Cable
00AY765 A51P 2m Passive DAC SFP+ Cable
90Y9430 A1PJ 3m Passive DAC SFP+ Cable
90Y9433 A1PK 5m Passive DAC SFP+ Cable
00D6151 A3RH 7 m Passive DAC SFP+ Cable
10Gb SFP+ transceivers
49Y4216 0069 Brocade 10Gb SFP+ SR Optical Transceiver
46C3447 5053 SFP+ SR Transceiver (10Gb)
49Y4218 0064 QLogic 10Gb SFP+ SR Optical Transceiver

25 Gb Ethernet adapters

The following table lists the supported 25 GbE transceiver and DAC cables.

Table 8. Supported optical transceivers and DAC cables - 25 Gb Ethernet
Part number Feature code Description
25 GbE SFP28 transceiver
7G17A03537 AV1B Lenovo 25GBase-SR SFP28 Transceiver
25 GbE SFP28 DAC cables
7Z57A03557 AV1W Lenovo 1m Passive 25G SFP28 DAC Cable
7Z57A03558 AV1X Lenovo 3m Passive 25G SFP28 DAC Cable
7Z57A03559 AV1Y Lenovo 5m Passive 25G SFP28 DAC Cable

The 25Gb adapters also support the following 10 GbE transceivers and DAC cables.

Table 9. Supported optical transceivers and DAC cables - 10 Gb Ethernet
Part number Feature code Description
10Gb SFP+ transceivers
49Y4216 0069 Brocade 10Gb SFP+ SR Optical Transceiver
46C3447 5053 SFP+ SR Transceiver (10Gb)
49Y4218 0064 QLogic 10Gb SFP+ SR Optical Transceiver
10Gb SFP+ Passive DAC cables
00D6288 A3RG .5 m Passive DAC SFP+ Cable
90Y9427 A1PH 1 m Passive DAC SFP+ Cable
00AY764 A51N 1.5m Passive DAC SFP+ Cable
00AY765 A51P 2m Passive DAC SFP+ Cable
90Y9430 A1PJ 3m Passive DAC SFP+ Cable
90Y9433 A1PK 5m Passive DAC SFP+ Cable
00D6151 A3RH 7 m Passive DAC SFP+ Cable
10Gb SFP+ Active DAC cables
00VX111 AT2R Lenovo 1m Active DAC SFP+ Cables
00VX114 AT2S Lenovo 3m Active DAC SFP+ Cables
00VX117 AT2T Lenovo 5m Active DAC SFP+ Cables

The following figure shows the Mellanox ConnectX-4 Lx ML2 1x25GbE SFP28 Adapter.

Mellanox ConnectX-4 Lx ML2 1x25GbE SFP28 Adapter
Figure 2. Mellanox ConnectX-4 Lx ML2 1x25GbE SFP28 Adapter (heatsink removed)

Features

The ConnectX-4 family of adapters offers a number of performance features, including the following:

  • ConnectX-4 Lx Ethernet adapters

    The ConnectX-4 Lx adapters discussed in this product guide offer a high performance Ethernet adapter solution for Ethernet speeds up to 40 Gb/s, enabling seamless networking, clustering, or storage. The Lx adapters reduce application runtime, and offer the flexibility and scalability to make infrastructure run as efficiently and productively as possible.

  • ConnectX-4 100 Gb Ethernet / EDR InfiniBand

    ConnectX-4 with Virtual Protocol Interconnect (VPI) offers the highest throughput VPI adapter, supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet and enabling any standard networking, clustering, or storage to operate seamlessly over any converged network leveraging a consolidated software stack.

  • I/O Virtualization

    ConnectX-4 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 gives data center administrators better server utilization while reducing cost, power, and cable complexity, allowing more Virtual Machines and more tenants on the same hardware.

  • Overlay Networks

    In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 Lx effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol header as well as offloads TCP stateless activities on the encapsulated packet.

  • RDMA over Converged Ethernet (RoCE)

    ConnectX-4 adapters support the RoCE specifications, delivering low-latency and high-performance networking over Ethernet. The ConnectX-4 VPI adapter also supports IBTA RDMA (Remote Direct Memory Access) for InfiniBand network performance. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

  • Mellanox PeerDirect

    PeerDirect communication provides high efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
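As background to the overlay offload described above, the VXLAN encapsulation that the adapter encapsulates and de-capsulates in hardware is structurally simple: per RFC 7348, the VXLAN header is 8 bytes, a flags byte with the I bit (0x08) set followed by a 24-bit VXLAN Network Identifier (VNI), with the remaining bits reserved. The sketch below builds that header in Python; it illustrates the wire format only and is not adapter code.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags byte with the
    I bit set (0x08), 3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Two 32-bit big-endian words: flags<<24 | reserved, then vni<<8 | reserved
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5000)
print(len(hdr), hdr.hex())  # 8 0800000000138800
```

Because the header is this small and fixed, encapsulation is cheap for hardware but, without offload, every encapsulated packet hides the inner TCP header from the NIC, which is why the stateless offloads on the encapsulated packet matter.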

The following figure shows the Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter.

Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter
Figure 3. Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter (heatsink removed)

Technical specifications

PCIe 3.0 host interface:

  • ConnectX-4 Lx Ethernet adapters: PCIe 3.0 x8 interface
  • ConnectX-4 EDR InfiniBand / 100 Gb Ethernet adapter: PCIe 3.0 x16 interface
  • Support for MSI/MSI-X mechanisms

External connectors:

  • 25 Gb PCIe and ML2 adapters: SFP28
  • 40 Gb and 100 Gb adapters: QSFP28

Ethernet standards (all adapters, except where noted):

  • 25G Ethernet Consortium (25 Gb)
  • 25G Ethernet Consortium (50 Gb) (100Gb/EDR adapter only)
  • IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet (100Gb/EDR adapter only)
  • IEEE 802.3ba 40 Gigabit Ethernet (100Gb/EDR and 40Gb adapters only)
  • IEEE 802.3by 25 Gigabit Ethernet
  • IEEE 802.3ae 10 Gigabit Ethernet
  • IEEE 802.3az Energy Efficient Ethernet
  • IEEE 802.3ap based auto-negotiation and KR startup
  • Proprietary Ethernet protocols (20/40GBASE-R2) (40Gb adapter only)
  • IEEE 802.3ad, 802.1AX Link Aggregation
  • IEEE 802.1Q, 802.1P VLAN tags and priority
  • IEEE 802.1Qau (QCN) – Congestion Notification
  • IEEE 802.1Qaz (ETS)
  • IEEE 802.1Qbb (PFC)
  • IEEE 802.1Qbg
  • IEEE 1588v2
  • Jumbo frame support (9.6KB)
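The jumbo frame support listed above matters because the per-frame overhead on the wire is fixed, so larger frames carry proportionally more payload. A rough sketch of that arithmetic, assuming the standard Ethernet per-frame overhead of preamble+SFD (8 bytes), Ethernet header (14 bytes), FCS (4 bytes), and inter-packet gap (12 bytes); the figures are illustrative, not adapter specifications:

```python
def wire_efficiency(mtu: int) -> float:
    """Fraction of line rate carrying payload, assuming per-frame
    overhead of preamble+SFD (8) + Ethernet header (14) + FCS (4)
    + inter-packet gap (12) = 38 bytes."""
    overhead = 8 + 14 + 4 + 12
    return mtu / (mtu + overhead)

print(round(wire_efficiency(1500), 4))  # standard MTU: ~0.9753
print(round(wire_efficiency(9600), 4))  # 9.6KB jumbo frame: ~0.9961
```

Jumbo frames also reduce the per-packet interrupt and header-processing load on the host CPU, which compounds the raw efficiency gain at 25 Gb/s and above.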

InfiniBand protocols (VPI InfiniBand adapters only):

  • InfiniBand: IBTA v1.3 Auto-Negotiation
  • 1X/2X/4X SDR (2.5 Gb/s per lane)
  • DDR (5 Gb/s per lane)
  • QDR (10 Gb/s per lane)
  • FDR10 (10.3125 Gb/s per lane)
  • FDR (14.0625 Gb/s per lane)
  • EDR (25.78125 Gb/s per lane)
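The per-lane rates above combine with link width and line encoding to give the familiar port speeds: a 4X EDR link signals at 4 × 25.78125 = 103.125 Gb/s, which 64b/66b encoding reduces to exactly 100 Gb/s of data. A small sketch of that arithmetic, assuming the usual encodings (8b/10b for SDR through QDR, 64b/66b for FDR10 and later):

```python
# Per-lane signaling rates (Gb/s) from the list above.
LANE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0,
             "FDR10": 10.3125, "FDR": 14.0625, "EDR": 25.78125}

def link_rate(speed: str, width: int = 4) -> float:
    """Raw signaling rate of an InfiniBand link of the given lane width."""
    return LANE_GBPS[speed] * width

def data_rate(speed: str, width: int = 4) -> float:
    """Usable data rate after line encoding: 8b/10b for SDR-QDR,
    64b/66b for FDR10 and later."""
    enc = 8 / 10 if speed in ("SDR", "DDR", "QDR") else 64 / 66
    return link_rate(speed, width) * enc

print(data_rate("EDR"))  # 4X EDR: 103.125 * 64/66 = 100.0 Gb/s
```

The same arithmetic explains why QDR 4X is commonly quoted as 32 Gb/s of data despite 40 Gb/s signaling: 8b/10b encoding costs 20%.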

InfiniBand features (VPI InfiniBand adapters only)

  • RDMA, Send/Receive semantics
  • Hardware-based congestion control
  • Atomic operations
  • 16 million I/O channels
  • 256 to 4Kbyte MTU, 2Gbyte messages

Note: The feature of 8 virtual lanes with VL15 is currently not supported

Enhanced Features

  • Hardware-based reliable transport
  • Collective operations offloads
  • Vector collective operations offloads
  • PeerDirect RDMA (GPUDirect communication acceleration)
  • 64b/66b encoding
  • Extended Reliable Connected transport (XRC)
  • Dynamically Connected transport (DCT)
  • Enhanced Atomic operations
  • Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
  • On demand paging (ODP) – registration free RDMA memory access

Storage Offloads

  • RAID offload - erasure coding (Reed-Solomon) offload
  • T10 DIF - Signature handover operation at wire speed, for ingress and egress traffic (100Gb/EDR adapter only)

Overlay Networks

  • Stateless offloads for overlay networks and tunneling protocols
  • Hardware offload of encapsulation and decapsulation of NVGRE and VXLAN overlay networks

Hardware-Based I/O Virtualization

  • Single Root IOV (SR-IOV)
  • Multi-function per port
  • Address translation and protection
  • Multiple queues per virtual machine
  • Enhanced QoS for vNICs
  • VMware NetQueue support

Virtualization

  • SR-IOV: Up to 256 Virtual Functions, up to 16 Physical Functions per port
  • SR-IOV on every Physical Function
  • 1K ingress and egress QoS levels
  • Guaranteed QoS for VMs

Note: NPAR (NIC partitioning) is currently not supported.

CPU Offloads

  • RDMA over Converged Ethernet (RoCE)
  • TCP/UDP/IP stateless offload
  • LSO, LRO, checksum offload
  • RSS (can be done on encapsulated packet), TSS, HDS, VLAN insertion / stripping, Receive flow steering
  • Intelligent interrupt coalescence
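To make the checksum offload above concrete, the computation the adapter takes over from the host CPU is the RFC 1071 Internet checksum, a ones'-complement sum over 16-bit words that IPv4, TCP, and UDP all use. A minimal illustration in Python (not adapter code):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit words --
    the computation that checksum offload moves onto the adapter."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

print(hex(internet_checksum(b"\x45\x00\x00\x3c")))  # 0xbac3
```

A useful property for verification: appending the computed checksum to the data and summing again yields zero, which is exactly the check the receiving NIC performs in hardware.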

Remote Boot

  • Remote boot over InfiniBand (VPI InfiniBand adapters only)
  • Remote boot over Ethernet
  • Remote boot over iSCSI
  • PXE and UEFI

Protocol Support

  • OpenMPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI
  • Platform MPI, UPC, Open SHMEM
  • TCP/UDP, MPLS, VxLAN, NVGRE, GENEVE
  • EoIB, IPoIB, SDP, RDS (VPI InfiniBand adapters only)
  • iSER, NFS RDMA, SMB Direct
  • uDAPL

Management and Control Interfaces

  • NC-SI (25Gb ML2 adapter only)
  • PLDM over MCTP over PCIe
  • SDN management interface for managing the eSwitch

Server support - ThinkSystem

The following table lists the ThinkSystem servers that are compatible.

Table 10. ThinkSystem server support
Part number Description
Server columns, grouped as 2S Rack & Tower, 4S Rack, and Dense/Blade:
ST550 (7X09/7X10)
SR530 (7X07/7X08)
SR550 (7X03/7X04)
SR570 (7Y03/7Y04)
SR590 (7X98/7X99)
SR630 (7X01/7X02)
SR650 (7X05/7X06)
SR850 (7X18/7X19)
SR860 (7X69/7X70)
SR950 (7X11/12/13)
SD530 (7X21)
SN550 (7X16)
SN850 (7X15)
ConnectX-4 Lx Ethernet adapters
01GR250 Mellanox ConnectX-4 Lx 2x25GbE SFP28 Adapter N N N N N Y Y Y Y* Y Y N N
00MM950 Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter N N N N N Y Y Y Y* Y Y N N
00MN990 Mellanox ConnectX-4 Lx ML2 1x25GbE SFP28 Adapter N N N N N Y Y Y Y* Y N N N
7ZT7A00507 ThinkSystem Mellanox ConnectX-4 Lx ML2 25Gb 2-Port SFP28 Ethernet Adapter N N N N N Y Y Y Y* Y N N N
ConnectX-4 VPI InfiniBand adapters
7ZT7A00500 ThinkSystem Mellanox ConnectX-4 PCIe FDR 2-Port QSFP VPI Adapter N N N N N Y* Y* Y Y* Y Y N N
00MM960 Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter N N N N N Y Y Y Y* Y Y N N
00KH924 Mellanox ConnectX-4 1x100GbE/EDR IB QSFP28 VPI Adapter N N N N N Y Y Y* N N Y N N

* Support is planned for 4Q/2017

Server support - System x

The following tables list the System x and dense servers that are compatible.

Support for System x and dense servers with Xeon E5 v4 and E3 v5 processors

Table 11. Support for System x and dense servers with Xeon E5 v4 and E3 v5 processors
Part number Description
x3250 M6 (3943)
x3250 M6 (3633)
x3550 M5 (8869)
x3650 M5 (8871)
x3850 X6/x3950 X6 (6241, E7 v4)
nx360 M5 (5465, E5-2600 v4)
sd350 (5493)
nx360 M5 WCT (5467, v4)
ConnectX-4 Lx Ethernet adapters
01GR250 Mellanox ConnectX-4 Lx 2x25GbE SFP28 Adapter N N Y Y Y Y N N
00MM950 Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter N N Y Y Y Y N N
00MN990 Mellanox ConnectX-4 Lx ML2 1x25GbE SFP28 Adapter N N Y Y Y Y N N
7ZT7A00507 ThinkSystem Mellanox ConnectX-4 Lx ML2 25Gb 2-Port SFP28 Ethernet Adapter N N N N N N N N
ConnectX-4 VPI InfiniBand adapters
7ZT7A00500 ThinkSystem Mellanox ConnectX-4 PCIe FDR 2-Port QSFP VPI Adapter N N N N N N N N
00MM960 Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter N N Y Y Y Y N N
00KH924 Mellanox ConnectX-4 1x100GbE/EDR IB QSFP28 VPI Adapter N N Y Y Y Y Y N

Support for System x and dense servers with Intel E5 v3 and E3 v3 processors

Table 12. Support for servers with Intel Xeon v3 processors
Part number Description
x3100 M5 (5457)
x3250 M5 (5458)
x3500 M5 (5464)
x3550 M5 (5463)
x3650 M5 (5462)
x3850 X6/x3950 X6 (6241, E7 v3)
nx360 M5 (5465)
ConnectX-4 Lx Ethernet adapters
01GR250 Mellanox ConnectX-4 Lx 2x25GbE SFP28 Adapter N N N Y Y Y Y
00MM950 Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter N N N Y Y Y Y
00MN990 Mellanox ConnectX-4 Lx ML2 1x25GbE SFP28 Adapter N N N Y Y Y Y
7ZT7A00507 ThinkSystem Mellanox ConnectX-4 Lx ML2 25Gb 2-Port SFP28 Ethernet Adapter N N N N N N N
ConnectX-4 VPI InfiniBand adapters
7ZT7A00500 ThinkSystem Mellanox ConnectX-4 PCIe FDR 2-Port QSFP VPI Adapter N N N N N N N
00MM960 Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter N N N Y Y Y Y
00KH924 Mellanox ConnectX-4 1x100GbE/EDR IB QSFP28 VPI Adapter N N N Y Y Y Y

Support for System x servers with Intel Xeon v2 processors

Table 13. Support for servers with Intel Xeon v2 processors
Part number Description
x3300 M4 (7382)
x3500 M4 (7383, E5-2600 v2)
x3550 M4 (7914, E5-2600 v2)
x3630 M4 (7158, E5-2400 v2)
x3650 M4 (7915, E5-2600 v2)
x3650 M4 BD (5466)
x3750 M4 (8753)
x3850 X6/x3950 X6 (6241, E7 v2)
ConnectX-4 Lx Ethernet adapters
01GR250 Mellanox ConnectX-4 Lx 2x25GbE SFP28 Adapter N N N N N N N Y
00MM950 Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter N N N N N N N Y
00MN990 Mellanox ConnectX-4 Lx ML2 1x25GbE SFP28 Adapter N N N N N N N Y
7ZT7A00507 ThinkSystem Mellanox ConnectX-4 Lx ML2 25Gb 2-Port SFP28 Ethernet Adapter N N N N N N N N
ConnectX-4 VPI InfiniBand adapters
7ZT7A00500 ThinkSystem Mellanox ConnectX-4 PCIe FDR 2-Port QSFP VPI Adapter N N N N N N N N
00MM960 Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter N N N N N N N Y
00KH924 Mellanox ConnectX-4 1x100GbE/EDR IB QSFP28 VPI Adapter N N N N N N Y Y

Server support - ThinkServer

The following tables list the ThinkServer systems that are compatible.

Support for sd350: The ThinkServer sd350 is listed in Table 11.

Support for ThinkServer Generation 5 servers with E5 v4 and E3 v5/v6 processors

Table 14. Support for ThinkServer Generation 5 servers with E5 v4 and E3 v5/v6 processors
Part number Description
TS150
TS450
TS460
RS160
TD350
RD350 (70Qx)
RD450 (70Qx)
RD550 (70Rx/70Sx)
RD650 (70Rx)
4XC0G88861 Lenovo ThinkServer ConnectX-4 Lx PCIe 25Gb 2 Port SFP28 Ethernet Adapter by Mellanox N N N N N Y Y Y Y

Support for ThinkServer Generation 5 servers with E5 v3 and E3 v3 processors

Table 15. Support for ThinkServer Generation 5 servers with E5 v3 and E3 v3 processors
Part number Description
TS140
TS440
RS140
TD350
RD350 (70Dx)
RD450 (70Dx)
RD550 (70Cx)
RD650 (70Dx)
4XC0G88861 Lenovo ThinkServer ConnectX-4 Lx PCIe 25Gb 2 Port SFP28 Ethernet Adapter by Mellanox N N N N N N N N

The following figure shows the Mellanox ConnectX-4 Lx 2x25GbE SFP28 Adapter.

Mellanox ConnectX-4 Lx 2x25GbE SFP28 Adapter
Figure 4. Mellanox ConnectX-4 Lx 2x25GbE SFP28 Adapter (heatsink removed)

Operating system support

The Mellanox ConnectX-4 adapters support the following operating systems:

  • Microsoft Windows Server 2012 (no SR-IOV support)
  • Microsoft Windows Server 2012 R2
  • Microsoft Windows Server 2016
  • Red Hat Enterprise Linux 6 Server x64 Edition, U7
  • Red Hat Enterprise Linux 7, U2
  • SUSE LINUX Enterprise Server 11 for AMD64/EM64T, SP4**
  • SUSE LINUX Enterprise Server 12, SP1**
  • VMware ESXi 5.5, U3*
  • VMware ESXi 6.0, U2*

* InfiniBand mode not supported with VMware: With VMware, these adapters are supported only in Ethernet mode. InfiniBand is not supported.

** Xen Support: The Mellanox adapters do not support Xen

Regulatory approvals

The adapters meet the following regulatory standards:

  • Safety: CB, cTUVus, CE 
  • EMC: CE, FCC, VCCI, ICES, RCM 
  • RoHS: RoHS-R6

Operating environment

Power consumption:

Table 16. Power consumption
Adapter Typical power (passive cable) Maximum power (passive cable) Maximum power (active cable)
2x25 Gb adapter (01GR250) 9.47 W 10.69 W 14.03 W
1x25 Gb ML2 adapter (00MN990) 8.82 W 10.04 W 11.70 W
1x40 Gb adapter (00MM950) 10.16 W 11.38 W 13.05 W
2x100 Gb adapter (00MM960) 16.12 W 18.04 W 24.80 W

Maximum power through external connectors:

  • 25Gb adapters: 1.5 W
  • 40Gb adapter: 1.5 W
  • 100Gb adapter: 3.5 W

Temperature:

  • Operational: 0°C to 55°C
  • Non-operational: -40°C to 70°C

Humidity: 90% relative humidity

Warranty

One year limited warranty. When installed in a Lenovo server, these cards assume the server’s base warranty and any warranty upgrades.

Related product families

Product families related to this document are the following: