
Flex System x240 Compute Node (8737, E5-2600)

Product Guide (withdrawn product)


Abstract

The Flex System™ x240 Compute Node is a high-performance Intel Xeon processor-based server that offers outstanding performance for virtualization with new levels of CPU performance and memory capacity, and flexible configuration options. It is part of Flex System, a new category of computing that integrates multiple server architectures, networking, storage, and system management capability into a single system that is easy to deploy and manage. Flex System has full built-in virtualization support of servers, storage, and networking to speed provisioning and increase resiliency. In addition, it supports open industry standards, such as operating systems, networking and storage fabrics, virtualization, and system management protocols, to easily fit within existing and future data center environments. Flex System is scalable and extendable with multi-generation upgrades to protect and maximize IT investments.

Withdrawn from marketing: The models covered in this product guide are now withdrawn from marketing. The replacement system is the x240 M5 (E5-2600 v4) which is described in https://lenovopress.com/lp0093.

Note: There are three Product Guides for the x240 Compute Node, each covering a different set of models.

Introduction

The Flex System™ x240 Compute Node is a high-performance server that offers outstanding performance for virtualization with new levels of CPU performance and memory capacity, and flexible configuration options. The x240 Compute Node is an efficient server designed to run a broad range of workloads, with advanced management capabilities that allow you to manage your physical and virtual IT resources from a single pane of glass.

Suggested use: database, virtualization, enterprise applications, collaboration/email, streaming media, web, HPC, Microsoft RemoteFX, and cloud applications.

Note: This Product Guide describes the models of the x240 Compute Node with Intel Xeon E5-2600 processors. For models with the Intel Xeon E5-2600 v2 processors, see http://lenovopress.com/tips1090

Figure 1 shows the Flex System x240 Compute Node.
Figure 1. The Flex System x240 Compute Node

Did you know?

Flex System is a new category of computing that integrates multiple server architectures, networking, storage, and system management capability into a single system that is easy to deploy and manage. Flex System has full built-in virtualization support of servers, storage, and networking to speed provisioning and increase resiliency. In addition, it supports open industry standards, such as operating systems, networking and storage fabrics, virtualization, and system management protocols, to easily fit within existing and future data center environments. Flex System is scalable and extendable with multi-generation upgrades to protect and maximize IT investments.

Key features

The Flex System x240 Compute Node is a high-availability, scalable compute node optimized to support the next-generation microprocessor technology and is ideally suited for medium and large businesses. This section describes the key features of the server.


Scalability and performance

The x240 offers numerous features to boost performance, improve scalability, and reduce costs:

  • The Intel Xeon Processor E5-2600 product family improves productivity by offering superior system performance with up to 8-core processors and up to 3.3 GHz core speeds depending on the CPU's number of cores, up to 20 MB of L3 cache, and QPI interconnect links of up to 8 GT/s.
  • The Intel Xeon Processor E5-2600 provides up to an 80% performance boost over the previous generation, the Intel Xeon Processor 5600 (Westmere-EP).
  • Up to two processors, 16 cores, and 32 threads maximize the concurrent execution of multi-threaded applications.
  • Intelligent and adaptive system performance with Intel Turbo Boost Technology 2.0 allows CPU cores to run at maximum speeds during peak workloads by temporarily going beyond processor TDP.
  • Intel Hyper-Threading Technology boosts performance for multi-threaded applications by enabling simultaneous multi-threading within each processor core, up to two threads per core.
  • Intel Virtualization Technology integrates hardware-level virtualization hooks that allow operating system vendors to better utilize the hardware for virtualization workloads.
  • Intel Advanced Vector Extensions (AVX) significantly improve floating-point performance for compute-intensive technical and scientific applications compared with Intel Xeon 5600 series processors.
  • Up to 24 DDR3 ECC memory RDIMMs provide speeds up to 1600 MHz and a memory capacity of up to 384 GB. Load-reduced DIMMs (LRDIMMs) are supported with a maximum capacity of 768 GB.
  • The theoretical maximum memory bandwidth of the Intel Xeon processor E5 family is 51.2 GBps, which is 60% more than in the previous generation of Intel Xeon processors.
  • The use of solid-state drives (SSDs) instead of or along with traditional spinning drives (HDDs) can significantly improve I/O performance. An SSD can support up to 100 times more I/O operations per second (IOPS) than a typical HDD.
  • Supports the Storage Expansion Node, which provides an additional 12 hot-swap 2.5-inch drive bays for local storage.
  • Up to 32 virtual I/O ports per compute node with integrated 10 Gb Ethernet ports, offering the choice of Ethernet, iSCSI, or Fibre Channel over Ethernet (FCoE) connectivity.
  • The x240 offers PCI Express 3.0 I/O expansion capabilities that improve the theoretical maximum bandwidth by 60% (8 GT/s per link), compared with the previous generation of PCI Express 2.0.
  • With Intel Integrated I/O Technology, the PCI Express 3.0 controller is integrated into the Intel Xeon processor E5 family. This helps to dramatically reduce I/O latency and increase overall system performance.
  • Support for high-bandwidth I/O adapters, up to two in each x240 Compute Node. Support for 10 Gb Ethernet, 16 Gb Fibre Channel, and FDR InfiniBand.
  • Supports the PCIe Expansion Node, which adds support for up to six additional I/O adapters.
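The peak memory bandwidth of a four-channel DDR3-1600 platform can be derived from channel arithmetic. The following sketch is illustrative only; the constants are assumptions based on the E5-2600 platform this guide describes (four DDR3 channels per processor, a 64-bit data bus, DDR3-1600):

```python
# Illustrative sketch: peak per-processor memory bandwidth.
# Constants are assumptions: four DDR3 channels, a 64-bit (8-byte)
# data bus, and DDR3-1600 transfer rates.

CHANNELS = 4              # memory channels per processor
BYTES_PER_TRANSFER = 8    # 64-bit DDR3 data bus
TRANSFERS_PER_SEC = 1600  # mega-transfers per second (DDR3-1600)

bandwidth_gbps = CHANNELS * BYTES_PER_TRANSFER * TRANSFERS_PER_SEC / 1000
print(f"{bandwidth_gbps:.1f} GBps")  # 51.2 GBps
```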

Availability and serviceability

The x240 provides many features to simplify serviceability and increase system uptime:

  • Chipkill, memory mirroring, and memory rank sparing for redundancy in the event of an uncorrectable memory failure.
  • Tool-less cover removal provides easy access to upgrades and serviceable parts, such as CPU, memory, and adapter cards.
  • Hot-swap drives supporting integrated RAID 1 redundancy for data protection and greater system uptime.
  • A light path diagnostics panel and individual light path LEDs to quickly lead the technician to failed (or failing) components. This simplifies servicing, speeds up problem resolution, and helps improve system availability.
  • Predictive Failure Analysis (PFA), which detects when system components (such as processors, memory, and hard disk drives) operate outside of standard thresholds and generates pro-active alerts in advance of possible failure, therefore increasing uptime.
  • Solid-state drives (SSDs), which offer significantly better reliability than traditional mechanical HDDs for greater uptime.
  • Built-in Integrated Management Module II (IMM2) continuously monitors system parameters, triggers alerts, and performs recovery actions in case of failures to minimize downtime.
  • Built-in diagnostics using Dynamic Systems Analysis (DSA) Preboot speeds up troubleshooting tasks to reduce service time.
  • Three-year customer replaceable unit and onsite limited warranty, next business day 9x5. Optional service upgrades are available.

Manageability and security

Powerful systems management features simplify local and remote management of the x240:

  • The x240 includes an Integrated Management Module II (IMM2) to monitor server availability and perform remote management.
  • Integrated industry-standard Unified Extensible Firmware Interface (UEFI) enables improved setup, configuration, and updates, and simplifies error handling.
  • Integrated Trusted Platform Module (TPM) 1.2 support enables advanced cryptographic functionality, such as digital signatures and remote attestation.
  • Industry-standard AES NI support for faster, stronger encryption.
  • Integrates with the IBM® Flex System™ Manager for proactive systems management. It offers comprehensive systems management for the entire Flex System platform, helping to increase uptime, reduce costs, and improve productivity through advanced server management capabilities.
  • Lenovo Fabric Manager simplifies deployment of infrastructure connections by managing network and storage address assignments.
  • Intel Execute Disable Bit functionality can help prevent certain classes of malicious buffer overflow attacks when combined with a supporting operating system.
  • Intel Trusted Execution Technology provides enhanced security through hardware-based resistance to malicious software attacks, allowing an application to run in its own isolated space protected from all other software running on a system.

Energy efficiency

The x240 offers the following energy-efficiency features to save energy, reduce operational costs, increase energy availability, and contribute to a greener environment:

  • Component-sharing design of the Flex System chassis provides ultimate power and cooling savings.
  • The Intel Xeon processor E5-2600 product family offers significantly better performance over the previous generation while fitting into the same thermal design power (TDP) limits.
  • Intel Intelligent Power Capability powers individual processor elements on and off as needed, to reduce power draw.
  • Low-voltage Intel Xeon processors draw less energy to satisfy demands of power and thermally constrained data centers and telecommunication environments.
  • Low-voltage 1.35 V DDR3 memory RDIMMs consume 15% less energy than 1.5 V DDR3 RDIMMs.
  • Solid state drives (SSDs) consume as much as 80% less power than traditional spinning 2.5-inch HDDs.
  • The x240 uses hexagonal ventilation holes, a part of Calibrated Vectored Cooling™ technology. Hexagonal holes can be grouped more densely than round holes, providing more efficient airflow through the system.

Locations of key components and connectors

Figure 2 shows the front of the server.

Figure 2. Front view of the Flex System x240 Compute Node

Figure 3 shows the locations of key components inside the server.

Figure 3. Inside view of the Flex System x240 Compute Node

Standard specifications

The following table lists the standard specifications.

Table 1. Standard specifications

Components Specification
Models 8737-x1x and 8737-x2x (x-config)
8737-15X and 7863-10X (e-config)
Form factor Standard-width compute node.
Chassis support Flex System Enterprise Chassis.
Processor Up to two Intel Xeon Processor E5-2600 product family CPUs with eight cores (up to 2.9 GHz), six cores (up to 2.9 GHz), four cores (up to 3.3 GHz), or two cores (up to 3.0 GHz). Two QPI links, up to 8.0 GT/s each. Up to 1600 MHz memory speed. Up to 20 MB L3 cache.
Chipset Intel C600 series.
Memory Up to 24 DIMM sockets (12 DIMMs per processor) using Low Profile (LP) DDR3 DIMMs. RDIMMs, UDIMMs, and LRDIMMs supported. 1.5 V and low-voltage 1.35 V DIMMs supported. Support for up to 1600 MHz memory speed, depending on the processor. Four memory channels per processor (three DIMMs per channel). Supports two DIMMs per channel operating at 1600 MHz (2 DPC @ 1600 MHz) with single- and dual-rank RDIMMs. Supports three DIMMs per channel operating at 1066 MHz (3 DPC @ 1066 MHz) with single- and dual-rank RDIMMs.
Memory maximums With LRDIMMs: Up to 768 GB with 24x 32 GB LRDIMMs and two processors
With RDIMMs: Up to 384 GB with 24x 16 GB RDIMMs and two processors
With UDIMMs: Up to 64 GB with 16x 4 GB UDIMMs and two processors
Memory protection ECC, Chipkill (for x4-based memory DIMMs), memory mirroring, and memory rank sparing.
Disk drive bays Two 2.5" hot-swap SAS/SATA drive bays supporting SAS, SATA, and SSD drives. Optional support for up to eight 1.8” SSDs. Up to 12 additional 2.5-inch drive bays with the optional Storage Expansion Node.
Maximum internal storage With two 2.5” hot-swap drives: Up to 2 TB with 1 TB 2.5" NL SAS HDDs, or up to 2.4 TB with 1.2 TB 2.5" SAS HDDs, or up to 2 TB with 1 TB 2.5" SATA HDDs, or up to 3.2 TB with 1.6 TB 2.5" SATA SSDs. An intermix of SAS and SATA HDDs and SSDs is supported. Alternatively, with 1.8” SSDs and ServeRAID M5115 RAID adapter, up to 4 TB with eight 512 GB 1.8” SSDs. Additional storage available with an attached Flex System Storage Expansion Node.
RAID support RAID 0 and 1 with the integrated LSI SAS2004 controller. Optional ServeRAID M5115 RAID controller with RAID 0, 1, 10, 5, 50 support and 1 GB cache. Supports up to eight 1.8” SSDs with expansion kits. Optional flash backup for cache, RAID 6/60 upgrade, and SSD performance enabler.
Optical and tape bays No internal bays; use an external USB drive. See http://support.lenovo.com/en/documents/pd011281 for options.
Network interfaces x2x models: Two 10 Gb Ethernet ports with Embedded 10Gb Virtual Fabric Ethernet LAN-on-motherboard (LOM) controller; Emulex BE3 based.
x1x models: None standard; optional 1Gb or 10Gb Ethernet adapters.
PCI Expansion slots Two I/O connectors for adapters. PCI Express 3.0 x16 interface. Includes an Expansion Connector (PCIe 3.0 x16) to connect an expansion node such as the PCIe Expansion Node. PCIe Expansion Node supports two full-height PCIe adapters, two low-profile PCIe adapters and two Flex System I/O adapters.
Ports USB ports: One external. Two internal for embedded hypervisor with optional USB Enablement Kit. Console breakout cable port providing local KVM and serial ports (cable standard with chassis; additional cables optional).
Systems management UEFI, Integrated Management Module 2 (IMM2) with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, remote presence. Support for IBM Flex System Manager, IBM Systems Director and Active Energy Manager, and Lenovo ServerGuide.
Security features Power-on password, administrator's password, Trusted Platform Module 1.2.
Video Matrox G200eR2 video core with 16 MB video memory integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16 M colors.
Limited warranty 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Operating systems supported Microsoft Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, VMware ESXi. See the Operating system support section for specifics.
Service and support Optional service upgrades are available through ServicePacs®: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for Lenovo hardware and selected Lenovo and OEM software.
Dimensions Width: 215 mm (8.5”), height 51 mm (2.0”), depth 493 mm (19.4”).
Weight Maximum configuration: 6.98 kg (15.4 lb).

The x240 servers are shipped with the following items:

  • Statement of Limited Warranty
  • Important Notices
  • Documentation CD that contains the Installation and User's Guide

Standard models

Table 2 lists standard models.

Note: This Product Guide describes the models of the x240 Compute Node with Intel Xeon E5-2600 processors. For models with the Intel Xeon E5-2600 v2 processors, see http://lenovopress.com/tips1090

Table 2. Standard models

Model Intel Xeon Processor
(2 maximum)*
Memory
(max
768 GB)
Disk
adapter
Disk bays† Disks 10 GbE
Embedded
Virtual
Fabric‡
I/O slots
(used/max)
8737-A1x 1x E5-2630L 6C 2.0GHz
15MB 1333MHz 60W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open No 0 / 2
8737-D2x 1x E5-2609 4C 2.40GHz
10MB 1066MHz 80W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open Standard 1 / 2‡
8737-F2x 1x E5-2620 6C 2.0GHz
15MB 1333MHz 95W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open Standard 1 / 2‡
8737-G2x 1x E5-2630 6C 2.3GHz
15MB 1333MHz 95W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open Standard 1 / 2‡
8737-H1x 1x E5-2640 6C 2.5GHz
15MB 1333MHz 95W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open No 0 / 2
8737-H2x 1x E5-2640 6C 2.5GHz
15MB 1333MHz 95W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open Standard 1 / 2‡
8737-J1x 1x E5-2670 8C 2.6GHz
20MB 1600MHz 115W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open No 0 / 2
8737-L2x 1x E5-2660 8C 2.2GHz
20MB 1600MHz 95W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open Standard 1 / 2‡
8737-M1x 1x E5-2680 8C 2.7GHz
20MB 1600MHz 130W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open No 0 / 2
8737-M2x 1x E5-2680 8C 2.7GHz
20MB 1600MHz 130W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open Standard 1 / 2‡
8737-N2x 1x E5-2643 4C 3.3GHz
10MB 1600MHz 130W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open Standard 1 / 2‡
8737-Q2x 1x E5-2667 6C 2.9GHz
15MB 1600MHz 130W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open Standard 1 / 2‡
8737-R2x 1x E5-2690 8C 2.9GHz
20MB 1600MHz 135W
2x 4 GB LSI SAS2004 2x 2.5-inch
hot-swap
Open Standard 1 / 2‡
8737-HBx§ 2x E5-2640 6C 2.5GHz
15MB 1333MHz 95W
2x 8 GB LSI SAS2004 14x 2.5-inch
hot-swap with Storage
Expansion Node§
Open Standard 1 / 2‡

* Processor detail: Processor quantity, model, cores, core speed, L3 cache, memory speed, power TDP rating.
† The 2.5-inch drive bays can be replaced and expanded with 1.8" solid-state drive bays and a ServeRAID M5115 RAID controller to support up to eight 1.8-inch SSDs.
‡ The x2x and HBx models include an Embedded 10Gb Virtual Fabric Ethernet controller. Connections are routed using a Fabric Connector. The Fabric Connector precludes the use of an I/O adapter in I/O connector 1.
§ Model HBx is optimized as a Network Attached Storage (NAS) offering and includes the Flex System Storage Expansion Node (68Y8588) as standard

Model 8737-HBx is a Network Attached Storage (NAS) optimized model that includes the x240 Compute Node and the Flex System Storage Expansion Node connected together as a single unit at the factory. This single model number simplifies acquisition for both Business Partners and direct sales. The combination of the x240 and the storage node is certified with Windows Storage Server 2012, making this configuration an excellent foundation for a low-cost NAS solution. Windows Storage Server 2012 is available via the Reseller Option Kit (ROK) program using part number 00Y6302. Model HBx does not include drives, giving you maximum flexibility in selecting drives: SAS or SATA disk drives, or high-performance solid-state drives.

Chassis support

The x240 is supported in the Flex System Enterprise Chassis.

Up to 14 x240 Compute Nodes can be installed in the chassis; however, the actual number that can be installed depends on these factors:

  • The TDP power rating for the processors that are installed in the x240
  • The number of power supplies installed in the chassis
  • The capacity of the power supplies installed (2100 W or 2500 W)
  • The chassis power redundancy policy used (N+1 or N+N)

The following table provides guidelines about what number of x240 Compute Nodes can be installed. For more guidance, use the Power Configurator, found at the following website:
http://ibm.com/systems/bladecenter/resources/powerconfig.html

In the table:

  • Green = No restriction to the number of x240 Compute Nodes that are installable
  • Yellow = Some bays must be left empty in the chassis

Table 3. Maximum number of x240 Compute Nodes installable based on power supplies installed and power redundancy policy used

x240 TDP rating
2100 W power supplies installed
2500 W power supplies installed
N+1, N=5
6 power supplies
N+1, N=4
5 power supplies
N+1, N=3
4 power supplies
N+N, N=3
6 power supplies
N+1, N=5
6 power supplies
N+1, N=4
5 power supplies
N+1, N=3
4 power supplies
N+N, N=3
6 power supplies
60 W 14 14 14 14 14 14 14 14
70 W 14 14 13 14 14 14 14 14
80 W 14 14 13 13 14 14 14 14
95 W 14 14 12 12 14 14 14 14
115 W 14 14 11 12 14 14 14 14
130 W 14 14 11 11 14 14 13 14
135 W 14 14 10 11 14 14 13 14
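The table above can also be captured as a simple lookup. The helper below is a hypothetical sketch (the function name and data structure are illustrative, with only the two N+1, N=3 columns transcribed from the table):

```python
# Hypothetical lookup of Table 3: maximum installable x240 nodes by
# processor TDP, power supply capacity, and redundancy policy. Only a
# subset of the table's columns is transcribed here for illustration.

MAX_NODES = {
    # (PSU watts, policy): {TDP watts: max nodes}
    (2100, "N+1, N=3"): {60: 14, 70: 13, 80: 13, 95: 12, 115: 11, 130: 11, 135: 10},
    (2500, "N+1, N=3"): {60: 14, 70: 14, 80: 14, 95: 14, 115: 14, 130: 13, 135: 13},
}

def max_x240_nodes(psu_watts, policy, tdp_watts):
    """Return the maximum number of x240 nodes for a chassis power setup."""
    return MAX_NODES[(psu_watts, policy)][tdp_watts]

print(max_x240_nodes(2100, "N+1, N=3", 95))  # 12
```

For configurations not covered by the table, the Power Configurator tool mentioned above remains the authoritative source.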

Processor options

The x240 supports the processor options listed in the following table. The server supports one or two processors. The table also shows which server models have each processor standard. If no model is listed in the "Models where used" column for a particular processor, that processor is available only through Configure to Order (CTO).

Table 3. Processor options

Part
number
Feature code** Intel Xeon processor description Models
where used
81Y5180 A1CQ / A1D1 Intel Xeon E5-2603 4C 1.8GHz 10MB 1066MHz 80W -
81Y5182 A1CS / A1D3 Intel Xeon E5-2609 4C 2.40GHz 10MB 1066MHz 80W D2x
81Y5183 A1CT / A1D4 Intel Xeon E5-2620 6C 2.0GHz 15MB 1333MHz 95W F2x
81Y5184 A1CU / A1D5 Intel Xeon E5-2630 6C 2.3GHz 15MB 1333MHz 95W G2x
81Y5206 A1ER / A1DD Intel Xeon E5-2630L 6C 2.0GHz 15MB 1333MHz 60W A1x
49Y8125 A2EP / A2EQ Intel Xeon E5-2637 2C 3.0GHz 5MB 1600MHz 80W -
81Y5185 A1CV / A1D6 Intel Xeon E5-2640 6C 2.5GHz 15MB 1333MHz 95W H1x, H2x, HBx
81Y5190 A1CY / A1DA Intel Xeon E5-2643 4C 3.3GHz 10MB 1600MHz 130W N2x
95Y4670 A31A / A31C Intel Xeon E5-2648L 8C 1.8GHz 20MB 1600MHz 70W -
81Y5186 A1CW / A1D7 Intel Xeon E5-2650 8C 2.0GHz 20MB 1600MHz 95W -
81Y5179 A1ES / A1DE Intel Xeon E5-2650L 8C 1.8GHz 20MB 1600MHz 70W -
95Y4675 A319 / A31B Intel Xeon E5-2658 8C 2.1GHz 20MB 1600MHz 95W -
81Y5187 A1CX / A1D8 Intel Xeon E5-2660 8C 2.2GHz 20MB 1600MHz 95W L2x
49Y8144 A2ET / A2EU Intel Xeon E5-2665 8C 2.4GHz 20MB 1600MHz 115W -
81Y5189 A1CZ / A1DB Intel Xeon E5-2667 6C 2.9GHz 15MB 1600MHz 130W Q2x
81Y9418 A1SX / A1SY Intel Xeon E5-2670 8C 2.6GHz 20MB 1600MHz 115W J1x
81Y5188 A1BB / A1D9 Intel Xeon E5-2680 8C 2.7GHz 20MB 1600MHz 130W M1x, M2x
49Y8116 A2ER / A2ES Intel Xeon E5-2690 8C 2.9GHz 20MB 1600MHz 135W R2x

** The first feature code is for processor 1; the second feature code is for processor 2.

Memory options

DDR3 memory is compatibility tested and tuned for optimal performance and throughput. Memory specifications are integrated into the light path diagnostics for immediate system performance feedback and optimum system uptime. From a service and support standpoint, memory automatically assumes the Lenovo system warranty, and Lenovo provides service and support worldwide.

The following table lists memory options available for the x240 server. DIMMs can be installed one at a time, but for performance reasons, install them in sets of four (one for each of the four memory channels).

Table 4. Memory options for the x240

Part number Feature
code
Description Models
where used
Registered DIMM (RDIMM) modules - 1066 MHz and 1333 MHz
49Y1405 8940 2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM -
49Y1406 8941 4GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM H1x, H2x, G2x,
F2x, D2x, A1x
49Y1407 8942 4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9  ECC DDR3 1333MHz LP RDIMM -
49Y1397 8923 8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM HBx
49Y1563 A1QT 16GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM -
49Y1400 8939 16GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM -
Registered DIMM (RDIMM) modules - 1600 MHz
49Y1559 A28Z 4GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM R2x, Q2x, N2x, M2x,
M1x, L2x, J1x
90Y3178 A24L 4GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM -
90Y3109 A292 8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM -
00D4968 A2U5 16GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM -
46W0672 A3QM 16GB (1x16GB, 2Rx4, 1.35V) PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM -
Unbuffered DIMM (UDIMM) modules
49Y1403 A0QS 2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM -
49Y1404 8648 4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM -
Load-reduced (LRDIMM) modules
49Y1567 A290 16GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM -
90Y3105 A291 32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM -

The x240 supports Low Profile (LP) DDR3 memory RDIMMs, UDIMMs, and LRDIMMs. The server supports up to 12 DIMMs when one processor is installed and up to 24 DIMMs when two processors are installed. Each processor has four memory channels, and there are three DIMMs per channel.

The following rules apply when selecting the memory configuration:

  • Mixing 1.5 V and 1.35 V DIMMs in the same server is supported. In such a case, all DIMMs operate at 1.5 V.
  • Mixing different types of DIMMs (UDIMMs, RDIMMs, and LRDIMMs) in the same server is not supported.
  • The maximum number of ranks supported per channel is eight.
  • The maximum quantity of DIMMs that can be installed in the server depends on the number of CPUs, DIMM rank, and operating voltage, as shown in the "Max. qty supported" row in the following table. The shaded cells indicate that the DIMM type supports the maximum number of DIMMs (24 for the x240).
  • All DIMMs in all CPU memory channels operate at the same speed, which is determined as the lowest value of:
    • Memory speed supported by specific CPU
    • Lowest maximum operating speed for the selected memory configuration that depends on rated speed, as shown under the "Max. operating speed" section in the following table. The shaded cells indicate that the speed indicated is the maximum that the DIMM allows.
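The two-step speed rule above can be sketched as a simple minimum of the CPU's supported speed and the configuration's maximum operating speed. This is an illustrative sketch only; the function name is hypothetical and the lookup values are examples taken from the dual-rank 1.5 V RDIMM columns of the tables that follow:

```python
# Illustrative sketch of the memory speed rule: all DIMMs run at the
# lower of (a) the speed the installed CPU supports and (b) the maximum
# operating speed of the selected DIMM configuration.
# Lookup values are example rows for dual-rank RDIMMs at 1.5 V.

# (rated speed MHz, DIMMs per channel) -> max operating speed MHz
MAX_OPERATING_SPEED = {
    (1600, 1): 1600,
    (1600, 2): 1600,
    (1600, 3): 1066,
    (1333, 1): 1333,
    (1333, 2): 1333,
    (1333, 3): 1066,
}

def effective_memory_speed(cpu_max_mhz, dimm_rated_mhz, dimms_per_channel):
    config_max = MAX_OPERATING_SPEED[(dimm_rated_mhz, dimms_per_channel)]
    return min(cpu_max_mhz, config_max)

# An E5-2640 (1333 MHz memory) with 1600 MHz RDIMMs at 2 DPC
# still runs its memory at 1333 MHz:
print(effective_memory_speed(1333, 1600, 2))  # 1333
```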

The following tables (Part 1 and Part 2) show the maximum memory speeds that are achievable based on the installed DIMMs and the number of DIMMs per channel. The tables also show the maximum memory capacity at any speed supported by the DIMM and the maximum memory capacity at the rated DIMM speed. In the tables, cells highlighted with a grey background indicate when the specific combination of DIMM voltage and number of DIMMs per channel still allows the DIMMs to operate at rated speed.

Table 5. Maximum memory speeds (Part 1 - UDIMMs, LRDIMMs and Quad rank RDIMMs)

Specification
UDIMMs
LRDIMMs
RDIMMs
Rank Dual rank Quad rank Quad rank
Part numbers 49Y1404 (4 GB) 49Y1567 (16 GB)
90Y3105 (32 GB)
49Y1400 (16 GB)
Rated speed 1333 MHz 1333 MHz 1066 MHz
Rated voltage 1.35 V 1.35 V 1.35 V
Operating voltage 1.35 V 1.5 V 1.35 V 1.5 V 1.35 V 1.5 V
Maximum quantity* 16 16 24 24 8 16
Largest DIMM 4 GB 4 GB 32 GB 32 GB 16 GB 16 GB
Max memory capacity 64 GB 64 GB 768 GB 768 GB 128 GB 256 GB
Max memory at rated speed 64 GB 64 GB N/A 512 GB N/A 128 GB
Maximum operating speed (MHz)
1 DIMM per channel 1333 MHz 1333 MHz 1066 MHz 1333 MHz 800 MHz 1066 MHz
2 DIMMs per channel 1333 MHz 1333 MHz 1066 MHz 1333 MHz NS** 800 MHz
3 DIMMs per channel NS† NS† 1066 MHz 1066 MHz NS‡ NS‡

* The maximum quantity supported is shown for two processors installed. When one processor is installed, the maximum quantity supported is half of that shown.
** NS = Not supported at 1.35 V. Will operate at 1.5 V instead
† NS = Not supported. UDIMMs only support up to 2 DIMMs per channel.
‡ NS = Not supported. RDIMMs support up to 8 ranks per channel.

Table 6. Maximum memory speeds (Part 2 - Single and Dual rank RDIMMs)

Specification
RDIMMs
Rank Single rank Dual rank
Part numbers 49Y1405 (2GB)
49Y1406 (4GB)
49Y1559 (4GB) 49Y1407 (4GB)
49Y1397 (8GB)
49Y1563 (16GB)
90Y3178 (4GB)
90Y3109 (8GB)
00D4968 (16GB)
Rated speed 1333 MHz 1600 MHz 1333 MHz 1600 MHz
Rated voltage 1.35 V 1.5 V 1.35 V 1.5 V
Operating voltage 1.35 V 1.5 V 1.5 V 1.35 V 1.5 V 1.5 V
Max quantity* 16 24 24 16 24 24
Largest DIMM 4 GB 4 GB 4 GB 16 GB 16 GB 16 GB
Max memory capacity 64 GB 96 GB 96 GB 256 GB 384 GB 384 GB
Max memory at rated speed N/A 64 GB 64 GB N/A 256 GB 256 GB
Maximum operating speed (MHz)
1 DIMM per channel 1333 MHz 1333 MHz 1600 MHz 1333 MHz 1333 MHz 1600 MHz
2 DIMMs per channel 1333 MHz 1333 MHz 1600 MHz 1333 MHz 1333 MHz 1600 MHz
3 DIMMs per channel NS** 1066 MHz 1066 MHz NS** 1066 MHz 1066 MHz

* The maximum quantity that is supported is shown for two processors installed. When one processor is installed, the maximum quantity that is supported is half of that shown.
** NS = Not supported at 1.35 V. Will operate at 1.5 V instead

The following memory protection technologies are supported:

  • ECC
  • Chipkill (for x4-based memory DIMMs -- look for "x4" in the DIMM description)
  • Memory mirroring
  • Memory sparing

If memory mirroring is used, then DIMMs must be installed in pairs (minimum of one pair per CPU), and both DIMMs in a pair must be identical in type and size.

If memory rank sparing is used, then a minimum of one quad-rank DIMM or two single-rank or dual-rank DIMMs must be installed per populated channel (the DIMMs do not need to be identical). In rank sparing mode, one rank of a DIMM in each populated channel is reserved as spare memory. The size of a rank varies depending on the DIMMs installed.
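The capacity cost of the two protection modes above follows from simple arithmetic: mirroring halves usable capacity, and rank sparing reserves one rank per populated channel. The sketch below is illustrative only; the function names and the example layout are assumptions, not part of the product's tooling:

```python
# Illustrative capacity arithmetic for memory mirroring and rank sparing.
# Mirroring keeps a full copy of memory, so usable capacity is halved.
# Rank sparing reserves one rank of a DIMM in each populated channel.

def usable_with_mirroring(total_gb):
    """Usable capacity when memory mirroring is enabled."""
    return total_gb / 2

def usable_with_rank_sparing(dimm_gb, ranks_per_dimm, dimms_per_channel, channels):
    """Usable capacity when rank sparing reserves one rank per channel."""
    rank_gb = dimm_gb / ranks_per_dimm
    total_gb = dimm_gb * dimms_per_channel * channels
    return total_gb - rank_gb * channels  # one spare rank per populated channel

# Example: 8 GB dual-rank RDIMMs, 2 DIMMs per channel, 4 channels (one CPU)
print(usable_with_mirroring(128))            # 64.0
print(usable_with_rank_sparing(8, 2, 2, 4))  # 48.0
```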

Internal storage

The x240 server has two 2.5-inch hot-swap drive bays accessible from the front of the blade server (Figure 2). These bays connect to the integrated LSI SAS2004 6 Gbps SAS/SATA RAID-on-Chip (ROC) controller.

The integrated LSI SAS2004 ROC has the following features:

  • Four-port LSI SAS2004 controller with 6 Gbps throughput per port
  • PCIe x4 Gen 2 host interface
  • Two SAS ports routed internally to the two hot-swap drive bays
  • Supports RAID-0, RAID-1 and RAID-1E

The x240 also supports up to eight 1.8-inch drives with the addition of the ServeRAID M5115 controller and additional SSD tray hardware. These are described in the next section.

Supported drives are listed in the Internal drive options section.

ServeRAID M5115 SAS/SATA controller

The x240 supports up to eight 1.8-inch solid-state drives combined with a ServeRAID M5115 SAS/SATA controller (90Y4390). The M5115 attaches to the I/O adapter 1 connector and can be attached even if the Compute Node Fabric Connector is installed (used to route the Embedded 10Gb Virtual Fabric Adapter to bays 1 and 2, as discussed in "I/O expansion options"). The ServeRAID M5115 cannot be installed if an adapter is installed in I/O adapter slot 1.

The ServeRAID M5115 supports combinations of 2.5-inch drives and 1.8-inch solid state drives:

  • Up to two 2.5-inch drives only
  • Up to four 1.8-inch drives only
  • Up to two 2.5-inch drives, plus up to four 1.8-inch solid state drives
  • Up to eight 1.8-inch solid state drives

The ServeRAID M5115 SAS/SATA Controller (90Y4390) provides an advanced RAID controller supporting RAID 0, 1, 10, 5, 50, and optional 6 and 60. It includes 1 GB of cache, which can be backed up to Flash when attached to the supercapacitor included with the optional ServeRAID M5100 Series Enablement Kit (90Y4342).

At least one hardware kit is required with the ServeRAID M5115 controller, and there are three hardware kits that are supported that enable specific drive support.

Table 7. ServeRAID M5115 and hardware kits

Part number Feature
code
Description Maximum
supported
90Y4390 A2XW ServeRAID M5115 SAS/SATA Controller for Flex System 1
90Y4342 A2XX ServeRAID M5100 Series Enablement Kit for Flex System x240 1
90Y4341 A2XY ServeRAID M5100 Series Flex System Flash Kit for x240 1
47C8808 A47D ServeRAID M5100 Series Flex System Flash Kit v2 for x240 1
90Y4391 A2XZ ServeRAID M5100 Series SSD Expansion Kit for Flex System x240 1

The hardware kits have the following features:

  • ServeRAID M5100 Series Enablement Kit for Flex System x240 (90Y4342) enables support for up to two 2.5” HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection. This enablement kit replaces the standard two-bay backplane (which is attached via the planar to an onboard controller) with a new backplane that attaches via an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment point for the CacheVault unit.

    MegaRAID CacheVault flash cache protection uses NAND flash memory powered by a supercapacitor to protect data stored in the controller cache. This module eliminates the need for a lithium-ion battery commonly used to protect DRAM cache memory on PCI RAID controllers. To avoid the possibility of data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash using power from the supercapacitor. After the power is restored to the RAID controller, the saved data is transferred from the NAND flash back to the DRAM cache, which can then be flushed to disk.

    Tip: The Enablement Kit is only required if 2.5-inch drives are to be used. If you plan to install four or eight 1.8-inch SSDs only, then this kit is not required.

  • ServeRAID M5100 Series Flex System Flash Kit for x240 (90Y4341) enables support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay SSD backplane that attaches via an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, so this kit does not include a supercapacitor. The use of this kit limits which 1.8-inch solid-state drives can be used in the x240, as listed in Table 11; use Flash Kit v2 instead.
  • ServeRAID M5100 Series Flex System Flash Kit v2 for x240 (47C8808) enables support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay SSD backplane that attaches via an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, so this kit does not include a supercapacitor. This v2 kit provides support for the latest high-performance SSDs.
  • ServeRAID M5100 Series SSD Expansion Kit for Flex System x240 (90Y4391) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles (left and right), each providing attachment locations for two 1.8-inch SSDs, plus flex cables for connecting up to four 1.8-inch SSDs to the M5115 controller.

    Note: The SSD Expansion Kit cannot be installed if the USB Enablement Kit, 49Y8119, is already installed as these kits occupy the same location in the server.

The following table shows the kits required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, then you will need the M5115 controller, the Flash kit, and the SSD Expansion kit.

Table 8. ServeRAID M5115 hardware kits

Maximum 2.5" drives  Maximum 1.8" SSDs       Components required
2                    0                       ServeRAID M5115 (90Y4390) + Enablement Kit (90Y4342)
0                    4 (front)               ServeRAID M5115 (90Y4390) + Flash Kit (90Y4341 or 47C8808)
2                    4 (internal)            ServeRAID M5115 (90Y4390) + Enablement Kit (90Y4342) + SSD Expansion Kit (90Y4391)
0                    8 (front and internal)  ServeRAID M5115 (90Y4390) + Flash Kit (90Y4341 or 47C8808) + SSD Expansion Kit (90Y4391)

The following figure shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (row 1 of the preceding table).

The ServeRAID M5115 and the Enablement Kit installed
Figure 4. The ServeRAID M5115 and the Enablement Kit installed

The following figure shows how the ServeRAID M5115 and Flash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (row 4 of the preceding table).

ServeRAID M5115 with Flash and SSD Expansion Kits installed
Figure 5. ServeRAID M5115 with Flash and SSD Expansion Kits installed

The eight SSDs are installed in the following locations:

  • Four in the front of the system in place of the two 2.5-inch drive bays
  • Two in a tray above the memory banks for CPU 1
  • Two in a tray above the memory banks for CPU 2

The ServeRAID M5115 controller has the following specifications:

  • Eight internal 6 Gbps SAS/SATA ports
  • PCI Express 3.0 x8 host interface
  • 6 Gbps throughput per port
  • 800 MHz dual-core IBM PowerPC® processor with LSI SAS2208 6 Gbps RAID on Chip (ROC) controller
  • Support for RAID levels 0, 1, 10, 5, 50 standard; support for RAID 6 and 60 with optional upgrade using 90Y4411
  • Onboard 1 GB data cache (DDR3 running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342.
  • Support for SAS and SATA HDDs and SSDs
  • Support for intermixing SAS and SATA HDDs and SSDs; mixing different drive types in the same array (drive group) is not recommended
  • Support for self-encrypting drives (SEDs) with MegaRAID SafeStore
  • Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447)
  • Support for up to 64 virtual drives, up to 128 drive groups, up to 16 virtual drives per drive group, and up to 32 physical drives per drive group
  • Support for logical unit number (LUN) sizes up to 64 TB
  • Configurable stripe size up to 1 MB
  • Compliant with Disk Data Format (DDF) configuration on disk (COD)
  • S.M.A.R.T. support
  • MegaRAID Storage Manager management software
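
The RAID levels above follow standard capacity arithmetic. As a hedged illustration (a sketch only; real usable capacity also depends on controller metadata, drive capacity coercion, and stripe settings), usable capacity per drive group can be estimated as:

```python
# Sketch: approximate usable capacity for the RAID levels the M5115
# supports (0, 1, 10, 5, 50, and 6/60 with the optional upgrade).
# Standard RAID arithmetic only -- not controller-exact figures.

def usable_capacity_gb(level: int, drives: int, drive_gb: float,
                       span_size: int = 0) -> float:
    """Return the approximate usable capacity of one drive group."""
    if level == 0:
        return drives * drive_gb                  # striping, no redundancy
    if level == 1:
        if drives != 2:
            raise ValueError("RAID 1 uses exactly 2 drives")
        return drive_gb                           # mirrored pair
    if level == 10:
        if drives % 2:
            raise ValueError("RAID 10 needs an even drive count")
        return (drives // 2) * drive_gb           # striped mirrors
    if level == 5:
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * drive_gb            # one drive of parity
    if level == 6:
        if drives < 4:
            raise ValueError("RAID 6 needs at least 4 drives")
        return (drives - 2) * drive_gb            # two drives of parity
    if level in (50, 60):
        # Spanned arrays: each span carries its own parity overhead.
        base = 5 if level == 50 else 6
        if span_size == 0 or drives % span_size:
            raise ValueError("drives must divide evenly into spans")
        spans = drives // span_size
        return spans * usable_capacity_gb(base, span_size, drive_gb)
    raise ValueError(f"unsupported RAID level {level}")

# Example: eight 200 GB 1.8-inch SSDs in RAID 5 vs RAID 6
print(usable_capacity_gb(5, 8, 200.0))   # 1400.0
print(usable_capacity_gb(6, 8, 200.0))   # 1200.0
```

For the eight-SSD configuration in Figure 5, this shows the one-drive cost of moving from RAID 5 to RAID 6 protection.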

Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, an SSD performance accelerator, and an SSD caching enabler. The feature upgrades are listed in the following table. All are Feature on Demand (FoD) license upgrades.

Table 9. Supported upgrade features

Part number  Feature code  Description  Maximum supported
90Y4410  A2Y1  ServeRAID M5100 Series RAID 6 Upgrade for Flex System  1
90Y4412  A2Y2  ServeRAID M5100 Series Performance Upgrade for Flex System (MegaRAID FastPath)  1
90Y4447  A36G  ServeRAID M5100 Series SSD Caching Enabler for Flex System (MegaRAID CacheCade Pro 2.0)  1

These features are described as follows:

  • RAID 6 Upgrade (90Y4410)

    Adds support for RAID 6 and RAID 60. This is a Feature on Demand license.

  • Performance Upgrade (90Y4412)

    The Performance Upgrade for Flex System (implemented using the LSI MegaRAID FastPath software) provides high-performance I/O acceleration for SSD-based virtual drives by exploiting an extremely low-latency I/O path to increase the maximum I/O per second (IOPS) capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is a Feature on Demand license.

  • SSD Caching Enabler for traditional hard drives (90Y4447)

    The SSD Caching Enabler for Flex System (implemented using the LSI MegaRAID CacheCade Pro 2.0) is designed to accelerate the performance of hard disk drive (HDD) arrays with only an incremental investment in solid-state drive (SSD) technology. The feature enables the SSDs to be configured as a dedicated cache to help maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is a Feature on Demand license. This feature requires at least one SSD to be installed.

Internal drive options

The 2.5" drive bays support SAS or SATA hard disk drives (HDDs) or SATA solid state drives (SSDs). The following table lists the supported 2.5" drive options.

Table 10. 2.5-inch drive options for internal disk storage
Part number  Feature code  Description  Maximum supported
10K SAS hard disk drives
00AD075 A48S 1.2TB 10K 6Gbps SAS 2.5'' G2HS HDD 2
81Y9650 A282 900GB 10K 6Gbps SAS 2.5" SFF HS HDD 2
90Y8872 A2XD 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD 2
49Y2003 5433 600GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD 2
90Y8877 A2XC 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD 2
42D0637 5599 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD 2
15K SAS hard disk drives
00AJ300 A4VB 600GB 15K 6Gbps SAS 2.5'' G2HS HDD 2
81Y9670 A283 300GB 15K 6Gbps SAS 2.5" SFF HS HDD 2
42D0677 5536 146GB 15K 6Gbps SAS 2.5" SFF Slim-HS HDD 2
90Y8926 A2XB 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD 2
10K and 15K Self-encrypting drives (SED)
00AD085 A48T 1.2TB 10K 6Gbps SAS 2.5'' G2HS SED 2
81Y9662 A3EG 900GB 10K 6Gbps SAS 2.5" SFF G2HS SED 2
90Y8913 A2XF 300GB 10K 6Gbps SAS 2.5" SFF G2HS SED 2
90Y8944 A2ZK 146GB 15K 6Gbps SAS 2.5" SFF G2HS SED 2
NL SAS
81Y9690 A1P3 1TB 7.2K 6Gbps NL SAS 2.5" SFF HS HDD 2
90Y8953 A2XE 500GB 7.2K 6Gbps NL SAS 2.5" SFF G2HS HDD 2
42D0707 5409 500GB 7200 6Gbps NL SAS 2.5" SFF Slim-HS HDD 2
NL SATA
81Y9722 A1NX 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD 2
81Y9726 A1NZ 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD 2
81Y9730 A1AV 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD 2
10K and 15K SAS-SSD Hybrid drive
00AD102 A4G7 600GB 10K 6Gbps SAS 2.5'' G2HS Hybrid 2
Solid-state drives - Enterprise
41Y8341 A4FQ S3700 800GB SATA 2.5" MLC HS Enterprise SSD 2
41Y8336 A4FN S3700 400GB SATA 2.5" MLC HS Enterprise SSD 2
41Y8331 A4FL S3700 200GB SATA 2.5" MLC HS Enterprise SSD 2
49Y6195 A4GH 1.6TB SAS 2.5" MLC HS Enterprise SSD 2
49Y6139 A3F0 800GB SAS 2.5" MLC HS Enterprise SSD 2
49Y6134 A3EY 400GB SAS 2.5" MLC HS Enterprise SSD 2
00W1125 A3HR 100GB SATA 2.5" MLC HS Enterprise SSD 2
Solid-state drives - Enterprise Value
00AJ000 A4KM S3500 120GB SATA 2.5" MLC HS Enterprise Value SSD 2
00AJ005 A4KN S3500 240GB SATA 2.5" MLC HS Enterprise Value SSD 2
00AJ010 A4KP S3500 480GB SATA 2.5" MLC HS Enterprise Value SSD 2
00AJ015 A4KQ S3500 800GB SATA 2.5" MLC HS Enterprise Value SSD 2
00FN268 A5U4 S3500 1.6TB SATA 2.5" MLC HS Enterprise Value SSD 2
00AJ355 A56Z 120GB SATA 2.5" MLC HS Enterprise Value SSD 2
00AJ360 A570 240GB SATA 2.5" MLC HS Enterprise Value SSD 2
00AJ365 A571 480GB SATA 2.5" MLC HS Enterprise Value SSD 2
00AJ370 A572 800GB SATA 2.5" MLC HS Enterprise Value SSD 2

The 1.8-inch solid state drives supported are listed in the following table. The use of 1.8-inch drives requires the ServeRAID M5115 SAS/SATA controller as described in the next section.

Table 11. Supported 1.8-inch solid state drives

Part number  Feature code  Description  Maximum supported
Enterprise SSDs
49Y6119 A3AN 200GB SATA 1.8" MLC Enterprise SSD 8
49Y6124 A3AP 400GB SATA 1.8" MLC Enterprise SSD 8
00W1120 A3HQ 100GB SATA 1.8" MLC Enterprise SSD 8
41Y8366 A4FS S3700 200GB SATA 1.8" MLC Enterprise SSD 8
41Y8371 A4FT S3700 400GB SATA 1.8" MLC Enterprise SSD 8
Enterprise Value SSDs
00AJ040 A4KV S3500 80GB SATA 1.8" MLC Enterprise Value SSD 8
00AJ045 A4KW S3500 240GB SATA 1.8" MLC Enterprise Value SSD 8
00AJ050 A4KX S3500 400GB SATA 1.8" MLC Enterprise Value SSD 8
00AJ455 A58U S3500 800GB SATA 1.8" MLC Enterprise Value SSD 8
00AJ335 A56V 120GB SATA 1.8" MLC Enterprise Value SSD 8
00AJ340 A56W 240GB SATA 1.8" MLC Enterprise Value SSD 8
00AJ345 A56X 480GB SATA 1.8" MLC Enterprise Value SSD 8
00AJ350 A56Y 800GB SATA 1.8" MLC Enterprise Value SSD 8

Flex System Storage Expansion Node

The x240 supports the attachment of the Flex System Storage Expansion Node, which provides the ability to attach up to 12 additional hot-swap 2.5-inch HDDs or SSDs locally to the attached compute node. The Storage Expansion Node provides storage capacity for Network Attached Storage (NAS) workloads, offering flexible storage to match capacity, performance, and reliability needs.

Model 8737-HBx includes the Storage Expansion Node as standard as listed in Table 2. All other models support the SEN as an option.

The following figure shows the Flex System Storage Expansion Node attached to a compute node.

23.4CFE.jpg
Figure 6. Flex System Storage Expansion Node (right) attached to a compute node (left)

The ordering information for the Storage Expansion Node is shown in the following table.

Table 12. Ordering part number and feature code

Part number  Feature code*  Description  Maximum supported
68Y8588 A3JF Flex System Storage Expansion Node 1

* The feature is not available to be configured using model 7863-10X in e-config. Use model 8737-15X instead.

The Storage Expansion Node has the following features:

  • Connects directly to supported compute nodes via a PCIe 3.0 interface to the compute node's expansion connector (See Figure 3)
  • Support for 12 hot-swap 2.5-inch drives, accessible via a sliding tray
  • Support for 6 Gbps SAS and SATA drives, both HDDs and SSDs
  • Based on an LSI SAS2208 6 Gbps RAID on Chip (ROC) controller
  • Supports RAID 0, 1, 5, 10, and 50 as standard. JBOD also supported. Optional RAID 6 and 60 with a Features on Demand upgrade.
  • Optional 512 MB or 1 GB cache with cache-to-flash super capacitor offload

Note: The use of the Storage Expansion Node requires that the x240 Compute Node have both processors installed.

For more information, see the Product Guide on the Flex System Storage Expansion Node, http://lenovopress.com/tips0914

Internal tape drives

The server does not support an internal tape drive. However, it can be attached to external tape drives using Fibre Channel connectivity.

Optical drives

The server does not support an internal optical drive option, however, you can connect an external USB optical drive. See http://support.lenovo.com/en/documents/pd011281 for information about available external optical drives from Lenovo.

Note: The USB port on the compute nodes supply up to 0.5 A at 5 V. For devices that require more power, an additional power source will be required.

Embedded 10Gb Virtual Fabric Adapter

Some models of the x240 include an Embedded 10Gb Virtual Fabric Adapter (VFA, also known as LAN on Motherboard or LOM) built into the system board. Table 2 lists which models of the x240 include the Embedded 10Gb Virtual Fabric Adapter. Each x240 model that includes the embedded 10Gb VFA also has the Compute Node Fabric Connector installed in I/O connector 1 (and physically screwed onto the system board) to provide connectivity to the Enterprise Chassis midplane. Figure 3 shows the location of the Fabric Connector.

The Fabric Connector enables port 1 on the embedded 10Gb VFA to be routed to I/O module bay 1 and port 2 to be routed to I/O module bay 2. The Fabric Connector can be unscrewed and removed, if required, to allow the installation of an I/O adapter on I/O connector 1.

The Embedded 10Gb VFA is based on the Emulex BladeEngine 3 (BE3), which is a single-chip, dual-port 10 Gigabit Ethernet (10GbE) Ethernet Controller. These are some of the features of the Embedded 10Gb VFA:

  • PCI-Express Gen2 x8 host bus interface
  • Supports connection to 10 Gb and 1 Gb Flex System Ethernet switches
  • Supports multiple virtual NIC (vNIC) functions
  • TCP/IP Offload Engine (TOE enabled)
  • SRIOV capable
  • RDMA over TCP/IP capable
  • iSCSI and FCoE upgrade offering via FoD

The following table lists the ordering information for the IBM Virtual Fabric Advanced Software Upgrade (LOM), which enables the iSCSI and FCoE support on the Embedded 10Gb Virtual Fabric Adapter.

Table 13. Feature on Demand upgrade for FCoE and iSCSI support

Part number  Feature code  Description  Maximum supported
90Y9310 A2TD IBM Virtual Fabric Advanced Software Upgrade (LOM) 1

I/O expansion options

The x240 has two I/O expansion connectors for attaching I/O adapter cards. A third expansion connector is designed to connect an expansion node, such as the PCIe Expansion Node. Each I/O expansion connector is a high-density 216-pin PCIe connector. Installing I/O adapter cards allows the server to connect with switch modules in the Flex System Enterprise Chassis. Each slot has a PCI Express 3.0 x16 host interface, and both slots support the same form-factor adapters.

The following figure shows the location of the I/O expansion connectors.

Location of the I/O adapter slots in the NGP x240 Compute Node
Figure 7. Location of the I/O adapter slots in the Flex System x240 Compute Node

All I/O adapters are the same shape and can be used in any available slot. A compatible switch or pass-through module must be installed in the corresponding I/O bays in the chassis, as indicated in the following table. Installing two switches means that all ports of the adapter are enabled, which improves performance and network availability.

Table 14. Adapter to I/O bay correspondence

I/O adapter slot in the server  Port on the adapter       Corresponding I/O module bay in the chassis
Slot 1                          Port 1                    Module bay 1
Slot 1                          Port 2                    Module bay 2
Slot 1                          Port 3 (4-port adapters)  Module bay 1
Slot 1                          Port 4 (4-port adapters)  Module bay 2
Slot 2                          Port 1                    Module bay 3
Slot 2                          Port 2                    Module bay 4
Slot 2                          Port 3 (4-port adapters)  Module bay 3
Slot 2                          Port 4 (4-port adapters)  Module bay 4
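
The routing in Table 14 is regular: odd-numbered adapter ports go to the first bay of a pair and even-numbered ports to the second, with slot 1 wired to bays 1 and 2 and slot 2 wired to bays 3 and 4. A minimal sketch of that mapping:

```python
# Sketch of the slot/port-to-I/O-bay routing shown in Table 14.
# Odd ports land in the first bay of the pair, even ports in the second;
# slot 1 uses bays 1/2 and slot 2 uses bays 3/4.

def io_module_bay(slot: int, port: int) -> int:
    """Return the chassis I/O module bay that a given adapter port reaches."""
    if slot not in (1, 2) or port not in (1, 2, 3, 4):
        raise ValueError("slot must be 1-2 and port 1-4")
    base = 1 if slot == 1 else 3       # slot 1 -> bays 1/2, slot 2 -> bays 3/4
    return base + (port - 1) % 2       # odd ports -> first bay, even -> second

print(io_module_bay(1, 3))  # 1
print(io_module_bay(2, 4))  # 4
```

This is why ports 1 and 3 of a 4-port adapter share a switch: both pairs of ports terminate in the same I/O module bay.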

The following figure shows the location of the switch bays in the Flex System Enterprise Chassis.

Location of the switch bays in the NGP Enterprise Chassis
Figure 8. Location of the switch bays in the Flex System Enterprise Chassis

The following figure shows how two-port adapters are connected to switches installed in the chassis.

Logical layout of the interconnects between I/O adapters and I/O modules
Figure 9. Logical layout of the interconnects between I/O adapters and I/O modules

Flex System PCIe Expansion Node

The x240 supports the attachment of the Flex System PCIe Expansion Node. The Flex System PCIe Expansion Node provides the ability to attach additional PCI Express cards such as High IOPS SSD adapters, fabric mezzanine cards, and next-generation graphics processing units (GPU) to supported Flex System compute nodes. This capability is ideal for many applications that require high performance I/O, special telecommunications network interfaces, or hardware acceleration using a PCI Express card. The PCIe Expansion Node supports up to four PCIe 2.0 adapters and two additional Flex System expansion adapters.

The PCIe Expansion Node is attached to the x240 as shown in the following figure.

PCIe Expansion Node
Figure 10. PCIe Expansion Node

The ordering information for the PCIe Expansion Node is shown in the following table.

Table 15. Ordering part number and feature code

Part number  Feature code  Description  Maximum supported
81Y8983 A1BV* Flex System PCIe Expansion Node 1

* The feature is not available to be configured using model 7863-10X in e-config. Use model 8737-15X instead.

The PCIe Expansion Node has the following features:

  • Support for up to four standard PCIe 2.0 adapters:
    • Two PCIe 2.0 x16 slots that support full-length, full-height adapters
    • Two PCIe 2.0 x8 slots that support half-length, low-profile adapters
  • Support for PCIe 3.0 adapters by operating them in PCIe 2.0 mode
  • Support for one full-length, full-height double-wide adapter (consuming the space of the two full-length, full-height adapter slots)
  • Support for PCIe cards with higher power requirements: a single adapter of up to 225 W, or two adapters of up to 150 W each
  • Two Flex System I/O expansion connectors to further expand the I/O capability of the attached compute node

Note: The use of the PCIe Expansion Node requires that the x240 Compute Node have both processors installed.

For more information, see the Product Guide on the Flex System PCIe Expansion Node, http://lenovopress.com/tips0906

Network adapters

As described in "Embedded 10Gb Virtual Fabric Adapter," certain models (those with a model number of the form x2x) have a 10Gb Ethernet controller on the system board, and its ports are routed to the midplane and switches installed in the chassis via a Compute Node Fabric Connector that takes the place of an adapter in I/O slot 1.

Models without the Embedded 10Gb Virtual Fabric Adapter (those with a model number of the form x1x) do not include any other Ethernet connections to the Enterprise Chassis midplane as standard. Therefore, for those models, an I/O adapter must be installed in either I/O connector 1 or I/O connector 2 to provide network connectivity between the server and the chassis midplane and ultimately to the network switches.

The following table lists the supported network adapters and upgrades. Adapters can be installed in either slot. However, compatible switches must be installed in the corresponding bays of the chassis. All adapters can also be installed in the PCIe Expansion Node. The "Maximum supported" column indicates the number of adapters that can be installed in the server and in the PCIe Expansion Node (PEN).

Table 16. Network adapters

Part number  Feature code  Description  Number of ports  Maximum supported (x240* / PEN)
40 Gb Ethernet
90Y3482 A3HK Flex System EN6132 2-port 40Gb Ethernet Adapter 2 2 / None
10 Gb Ethernet
88Y5920 A4K3 Flex System CN4022 2-port 10Gb Converged Adapter 2 2 / 2
90Y3554 A1R1 Flex System CN4054 10 Gb Virtual Fabric Adapter 4 2 / 2
90Y3558 A1R0 Flex System CN4054 Virtual Fabric Adapter (SW Upgrade) (Feature on Demand to provide FCoE and iSCSI support; one license required per adapter) License 2 / 2
90Y3466 A1QY Flex System EN4132 2-port 10 Gb Ethernet Adapter 2 2 / 2
1 Gb Ethernet
49Y7900 A10Y Flex System EN2024 4-port 1 Gb Ethernet Adapter 4 2 / 2
InfiniBand
90Y3454 A1QZ Flex System IB6132 2-port FDR InfiniBand Adapter 2 2 / 2

* For x2x models with the Embedded 10Gb Virtual Fabric Adapter standard, the Compute Node Fabric Connector occupies the same space as an I/O adapter in I/O slot 1, so you have to remove the Fabric Connector if you plan to install an adapter in I/O slot 1.

For adapter-to-switch compatibility, see the Flex System Interoperability Guide: http://lenovopress.com/fsig

Storage host bus adapters

The following table lists storage HBAs supported by the x240 server, both internally in the compute node and in the PCIe Expansion Node.

Table 17. Storage adapters

Part number  Feature code  Description  Number of ports  Maximum supported (x240* / PEN)
Fibre Channel
88Y6370 A1BP Flex System FC5022 2-port 16Gb FC Adapter 2 2 / 2
95Y2386 A45R Flex System FC5052 2-port 16Gb FC Adapter 2 2 / 2
95Y2391 A45S Flex System FC5054 4-port 16Gb FC Adapter 4 2 / 2
69Y1942 A1BQ Flex System FC5172 2-port 16Gb FC Adapter 2 2 / 2
69Y1938 A1BM Flex System FC3172 2-port 8Gb FC Adapter 2 2 / 2
95Y2375 A2N5 Flex System FC3052 2-port 8Gb FC Adapter 2 2 / 2

* For x2x models with the Embedded 10Gb Virtual Fabric Adapter standard, the Compute Node Fabric Connector occupies the same space as an I/O adapter in I/O slot 1, so you have to remove the Fabric Connector if you plan to install an adapter in I/O slot 1.

For adapter-to-switch compatibility, see the Flex System Interoperability Guide:
http://lenovopress.com/fsig

PCIe SSD adapters

The compute node supports the High IOPS SSD adapters listed in the following table.

Note: These adapters are installed in an attached PCIe Expansion Node.

Table 18. SSD adapters

Part number  Feature code  Description  Maximum supported
46C9078 A3J3 365GB High IOPS MLC Mono Adapter (low-profile adapter) 4
46C9081 A3J4 785GB High IOPS MLC Mono Adapter (low-profile adapter) 4
81Y4519* 5985 640GB High IOPS MLC Duo Adapter (full-height adapter) 2
81Y4527* A1NB 1.28TB High IOPS MLC Duo Adapter (full-height adapter) 2
90Y4377 A3DY 1.2TB High IOPS MLC Mono Adapter (low-profile adapter) 2
90Y4397 A3DZ 2.4TB High IOPS MLC Duo Adapter (full-height adapter) 2
00AE983 ARYK IBM 1250GB Enterprise Value io3 Flash Adapter for System x 4
00AE986 ARYL IBM 1600GB Enterprise Value io3 Flash Adapter for System x 4
00AE989 ARYM IBM 3200GB Enterprise Value io3 Flash Adapter for System x 4
00AE992 ARYN IBM 6400GB Enterprise Value io3 Flash Adapter for System x 2
00AE995 ARYP IBM 1000GB Enterprise io3 Flash Adapter for System x 4
00AE998 ARYQ IBM 1300GB Enterprise io3 Flash Adapter for System x 4
00JY001 ARYR IBM 2600GB Enterprise io3 Flash Adapter for System x 4
00JY004 ARYS IBM 5200GB Enterprise io3 Flash Adapter for System x 2

* Withdrawn from marketing

GPU and Crypto adapters

The compute node supports the GPU and Crypto adapters listed in the following table.

Note: These adapters are installed in an attached PCIe Expansion Node.

Table 19. GPU and Crypto adapters

Part number  Feature code  Description  Maximum supported
94Y5960 A1R4 NVIDIA Tesla M2090 (full-height adapter) 1*
47C2120 A4F1 NVIDIA GRID K1 for Flex System PCIe Expansion Node 1†
47C2121 A4F2 NVIDIA GRID K2 for Flex System PCIe Expansion Node 1†
47C2119 A4F3 NVIDIA Tesla K20 for Flex System PCIe Expansion Node 1†
47C2122 A4F4 Intel Xeon Phi 5110P for Flex System PCIe Expansion Node 1†
None 4809** 4765 Crypto Card (full-height adapter) 2

* When this double-wide adapter is installed in the PCIe Expansion Node, it occupies both full-height slots. The low-profile slots and Flex System I/O expansion slots can still be used.
† If installed, only this adapter is supported in the system. No other PCIe adapters may be installed.
** Orderable as separate MTM 4765-001 feature 4809. Available via AAS (e-config) only.

Power supplies

Server power is derived from the power supplies installed in the chassis. There are no server options regarding power supplies.

Integrated virtualization

The x240 supports the ESXi hypervisor on a USB memory key via the x240 USB Enablement Kit. This kit offers two internal USB ports. The x240 USB Enablement Kit and the supported USB memory keys are listed in the following table.

There are two types of USB keys: preloaded keys and blank keys. Blank keys allow you to download a Lenovo-customized version of ESXi and load it onto the key; preloaded keys ship with a specific version of ESXi already loaded. The x240 supports one or two keys installed, but only in certain combinations:

Supported combinations:

  • One preload key
  • One blank key
  • One preload key and one blank key
  • Two blank keys

Unsupported combinations:

  • Two preload keys

Installing two preloaded keys prevents ESXi from booting, as described in http://kb.vmware.com/kb/1035107. Having two keys installed provides a backup boot device: both devices are listed in the boot menu, which allows you to boot from either device or to set one as a backup in case the first becomes corrupted.
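
The combination rules above reduce to a simple check: one or two keys installed, with at most one of them preloaded. A hedged sketch (the "preload"/"blank" labels are illustrative, not product terminology):

```python
# Sketch of the supported ESXi USB key combinations described above:
# one or two keys may be installed, but at most one may be preloaded.

def combination_supported(keys: list[str]) -> bool:
    """keys lists the installed keys, each entry 'preload' or 'blank'."""
    if not 1 <= len(keys) <= 2:
        return False                      # zero keys, or more than two
    return keys.count("preload") <= 1     # two preloaded keys prevent boot

print(combination_supported(["preload", "blank"]))   # True
print(combination_supported(["preload", "preload"])) # False
```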

Note: The x240 USB Enablement Kit and USB memory keys are not supported if the SSD Expansion Kit (90Y4391) is already installed, because these kits occupy the same location in the server.

Table 20. Virtualization options

Part number  Feature code  Description  Maximum supported
49Y8119 A3A3 x240 USB Enablement Kit 1
41Y8298 A2G0 Blank USB Memory Key for VMware ESXi Downloads 2
41Y8300 A2VC USB Memory Key for VMware ESXi 5.0 1
41Y8307 A383 USB Memory Key for VMware ESXi 5.0 Update 1 1
41Y8311 A2R3 USB Memory Key for VMware ESXi 5.1 1
41Y8382 A4WZ USB Memory Key for VMware ESXi 5.1 Update 1 1
41Y8385 A584 USB Memory Key for VMware ESXi 5.5 1

Light path diagnostics

For quick problem determination when located physically at the server, the x240 offers a three-step guided path:

  1. The Fault LED on the front panel
  2. The light path diagnostics panel, shown in the following figure.
  3. LEDs next to key components on the system board

The x240 light path diagnostics panel is visible when you remove the server from the chassis. The panel is located on the top right-hand side of the compute node, as shown in the following figure.

Location of x240 light path diagnostics panel
Figure 11. Location of x240 light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis.

The meanings of the LEDs in the light path diagnostics panel are listed in the following table.

Table 21. Light path diagnostic panel LEDs

LED Meaning
LP The light path diagnostics panel is operational.
S BRD A system board error is detected.
MIS A mismatch has occurred between the processors, DIMMs, or HDDs within the configuration as reported by POST.
NMI A non-maskable interrupt (NMI) has occurred.
TEMP An over-temperature condition occurred that was critical enough to shut down the server.
MEM A memory fault has occurred. The corresponding DIMM error LEDs on the system board are also lit.
ADJ A fault is detected in the adjacent expansion unit (if installed).

Remote management

The server contains a Lenovo Integrated Management Module II (IMM2), which interfaces with the advanced management module in the chassis. The combination of these provides advanced service-processor control, monitoring, and an alerting function. If an environmental condition exceeds a threshold or if a system component fails, LEDs on the system board are lit to help you diagnose the problem, the error is recorded in the event log, and you are alerted to the problem. A virtual presence capability comes standard for remote server management.

Remote server management is provided through industry-standard interfaces:

  • Intelligent Platform Management Interface (IPMI) Version 2.0
  • Simple Network Management Protocol (SNMP) Version 3
  • Common Information Model (CIM)
  • Web browser
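
As a hedged illustration of the IPMI 2.0 interface listed above, the following sketch builds a standard ipmitool command to read the chassis power state. The hostname and credentials are hypothetical placeholders; substitute your IMM2's address and account.

```python
# Sketch only: querying a service processor such as the IMM2 over IPMI 2.0
# using the standard ipmitool CLI. Host, user, and password below are
# hypothetical placeholders, not defaults documented in this guide.

def power_status_cmd(host: str, user: str, password: str) -> list[str]:
    """Build an ipmitool command that reads chassis power state over lanplus (IPMI 2.0)."""
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password,
            "chassis", "power", "status"]

cmd = power_status_cmd("imm2.example.com", "admin", "secret")
# On a network with access to the IMM2, run it with subprocess.run(cmd).
print(" ".join(cmd[:3]))  # ipmitool -I lanplus
```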

The server also supports virtual media and remote control features, which provide the following functions:

  • Remotely viewing video with graphics resolutions up to 1600x1200 at 75 Hz with up to 23 bits per pixel, regardless of the system state
  • Remotely accessing the server using the keyboard and mouse from a remote client
  • Mapping the CD or DVD drive, diskette drive, and USB flash drive on a remote client, and mapping ISO and diskette image files as virtual drives that are available for use by the server
  • Uploading a diskette image to the IMM2 memory and mapping it to the server as a virtual drive
  • Capturing blue-screen errors

Operating system support

The server supports the following operating systems:

  • Microsoft Windows HPC Server 2008 SP1
  • Microsoft Windows Server 2008 Datacenter x64 SP2
  • Microsoft Windows Server 2008 Enterprise x64 SP2
  • Microsoft Windows Server 2008 R2 SP1
  • Microsoft Windows Server 2008 Standard x64 SP2
  • Microsoft Windows Server 2008 Web x64 SP2
  • Microsoft Windows Server 2012
  • Microsoft Windows Server 2012 R2
  • Red Hat Enterprise Linux 5.10 Xen x64
  • Red Hat Enterprise Linux 5.10 x64
  • Red Hat Enterprise Linux 5.7 Xen x64
  • Red Hat Enterprise Linux 5.7 x64
  • Red Hat Enterprise Linux 5.8 Xen x64
  • Red Hat Enterprise Linux 5.8 x64
  • Red Hat Enterprise Linux 5.9 Xen x64
  • Red Hat Enterprise Linux 5.9 x64
  • Red Hat Enterprise Linux 6.2 32-bit
  • Red Hat Enterprise Linux 6.2 x64
  • Red Hat Enterprise Linux 6.4 x64
  • Red Hat Enterprise Linux 6.5 x64
  • Red Hat Enterprise Linux 6.6 x64
  • Red Hat Enterprise Linux 6.7 x64
  • Red Hat Enterprise Linux 6.8 x64
  • Red Hat Enterprise Linux 7.0
  • Red Hat Enterprise Linux 7.1
  • Red Hat Enterprise Linux 7.2
  • Red Hat Enterprise Linux 7.3
  • Red Hat Enterprise Linux 7.4
  • SUSE Linux Enterprise Server 10 x64 SP4
  • SUSE Linux Enterprise Server 11 Xen x64 SP1
  • SUSE Linux Enterprise Server 11 Xen x64 SP2
  • SUSE Linux Enterprise Server 11 Xen x64 SP3
  • SUSE Linux Enterprise Server 11 Xen x64 SP4
  • SUSE Linux Enterprise Server 11 x64 SP1
  • SUSE Linux Enterprise Server 11 x64 SP2
  • SUSE Linux Enterprise Server 11 x64 SP3
  • SUSE Linux Enterprise Server 11 x64 SP4
  • SUSE Linux Enterprise Server 12
  • SUSE Linux Enterprise Server 12 SP1
  • SUSE Linux Enterprise Server 12 SP2
  • SUSE Linux Enterprise Server 12 Xen
  • SUSE Linux Enterprise Server 12 Xen SP1
  • VMware ESX 4.1 U2
  • VMware ESX 4.1 U3
  • VMware ESXi 4.1 U1
  • VMware ESXi 4.1 U2
  • VMware ESXi 5.0
  • VMware ESXi 5.0 U1
  • VMware ESXi 5.0 U2
  • VMware ESXi 5.0 U3
  • VMware ESXi 5.1
  • VMware ESXi 5.1 U1
  • VMware ESXi 5.1 U2
  • VMware ESXi 5.1 U3
  • VMware ESXi 5.5
  • VMware ESXi 5.5 U1
  • VMware ESXi 5.5 U2
  • VMware ESXi 5.5 U3
  • VMware ESXi 6.0
  • VMware ESXi 6.0 U2

For a complete list of supported, certified and tested operating systems, plus additional details and links to relevant web sites, see the Operating System Interoperability Guide: https://lenovopress.com/osig#servers=x240-8737-e5-v1

Physical specifications

Dimensions and weight (approximate):

  • Height: 56 mm (2.2 in)
  • Depth: 492 mm (19.4 in)
  • Width: 217 mm (8.6 in)
  • Maximum weight: 7.1 kg (15.6 lb)

Shipping dimensions and weight (approximate):

  • Height: 197 mm (7.8 in)
  • Depth: 603 mm (23.7 in)
  • Width: 430 mm (16.9 in)
  • Weight: 8 kg (17.6 lb)

Supported environment

The Flex System x240 compute node complies with ASHRAE Class A3 specifications.

This is the supported operating environment:

Power on:

  • Temperature: 5 - 40 °C (41 - 104 °F)
  • Humidity, non-condensing: -12 °C dew point (10.4 °F) and 8 - 85% relative humidity
  • Maximum dew point: 24 °C (75 °F)
  • Maximum altitude: 3048 m (10,000 ft)
  • Maximum rate of temperature change: 5 °C/hr (9 °F/hr)

Power off:

  • Temperature: 5 - 45 °C (41 - 113 °F)
  • Relative humidity: 8 - 85%
  • Maximum dew point: 27 °C (80.6 °F)

Storage (non-operating):

  • Temperature: 1 - 60 °C (33.8 - 140 °F)
  • Altitude: 3050 m (10,006 ft)
  • Relative humidity: 5 - 80%
  • Maximum dew point: 29 °C (84.2°F)

Shipment (non-operating):

  • Temperature: -40 - 60 °C (-40 - 140 °F)
  • Altitude: 10,700 m (35,105 ft)
  • Relative humidity: 5 - 100%
  • Maximum dew point: 29 °C (84.2 °F)

Warranty options

The Flex System x240 Compute Node has a three-year on-site warranty with 9x5 next-business-day terms. Lenovo offers warranty service upgrades through ServicePac, described in this section. ServicePac is a series of prepackaged warranty maintenance upgrades and post-warranty maintenance agreements with a well-defined scope of services, including service hours, response time, term of service, and service agreement terms and conditions.

ServicePac offerings are country-specific. That is, each country might have its own service types, service levels, response times, and terms and conditions. Not all types of ServicePac coverage are available in every country. For more information about the Lenovo ServicePac offerings available in your country, see the ServicePac Product Selector at https://www-304.ibm.com/sales/gss/download/spst/servicepac.

The following table explains warranty service definitions in more detail.

Table 22. Warranty service definitions

On-site repair (OR): A service technician comes to the server's location to repair the equipment.

24x7x2 hour: A service technician is scheduled to arrive at your location within two hours after remote problem determination is completed. Service is provided 24 hours a day, every day, including Lenovo holidays.

24x7x4 hour: A service technician is scheduled to arrive at your location within four hours after remote problem determination is completed. Service is provided 24 hours a day, every day, including Lenovo holidays.

9x5x4 hour: A service technician is scheduled to arrive at your location within four business hours after remote problem determination is completed. Service is provided from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding Lenovo holidays. If the need for on-site service is determined after 1:00 p.m., the service technician arrives the morning of the following business day. For noncritical service requests, a service technician arrives by the end of the following business day.

9x5 next business day: A service technician is scheduled to arrive at your location on the business day after your call, following remote problem determination. Service is provided from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding Lenovo holidays.

In general, these are the types of ServicePacs:

  • Warranty and maintenance service upgrades
    • One, two, three, four, or five years of 9x5 or 24x7 service coverage
    • On-site repair with next-business-day, four-hour, or two-hour response
    • One or two years of warranty extension
  • Remote technical support services
    • One or three years with 24x7 coverage for severity 1 issues, or 9x5 next-business-day coverage for all severities
    • Installation and startup support for System x® servers
    • Remote technical support for System x servers
    • Software support - Support Line
      • Microsoft or Linux software
      • VMware
      • IBM Systems Director

Regulatory compliance

The server conforms to the following standards:

  • ASHRAE Class A3
  • FCC - Verified to comply with Part 15 of the FCC Rules Class A
  • Canada ICES-004, issue 3 Class A
  • UL/IEC 60950-1
  • CSA C22.2 No. 60950-1
  • NOM-019
  • Argentina IEC 60950-1
  • Japan VCCI, Class A
  • IEC 60950-1 (CB Certificate and CB Test Report)
  • China CCC (GB4943); (GB9254, Class A); (GB17625.1)
  • Taiwan BSMI CNS13438, Class A; CNS14336
  • Australia/New Zealand AS/NZS CISPR 22, Class A
  • Korea KN22, Class A, KN24
  • Russia/GOST ME01, IEC 60950-1, GOST R 51318.22, GOST R 51318.24, GOST R 51317.3.2, GOST R 51317.3.3
  • CE Mark (EN55022 Class A, EN60950-1, EN55024, EN61000-3-2, EN61000-3-3)
  • CISPR 22, Class A
  • TUV-GS (EN60950-1/IEC 60950-1, EK1-ITB2000)

Trademarks

Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.

The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
Flex System
ServeRAID
ServerGuide
ServerProven®
System x®

The following terms are trademarks of other companies:

Intel®, Intel Xeon Phi™, and Xeon® are trademarks of Intel Corporation or its subsidiaries.

Linux® is the trademark of Linus Torvalds in the U.S. and other countries.

Microsoft®, RemoteFX®, Windows Server®, and Windows® are trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.