Lenovo Validated Design for AI Infrastructure on ThinkSystem Servers
Reference Architecture
This document describes the reference architecture for a flexible and scalable Artificial Intelligence (AI) infrastructure on Lenovo ThinkSystem servers. It provides a predefined, optimized hardware infrastructure for data access, model training, and inference across a range of usage scenarios. The reference architecture provides planning guidance, design considerations, and best practices for implementing the AI infrastructure with Lenovo products.
The AI adoption journey involves the following key steps:
- Data access
- Model training
- Inference
Providing data access entails connecting to various data repositories. Typical models are based on deep neural networks (DNNs) and require a significant amount of computational resources to train. Hardware infrastructure designed as a scale-out cluster for such model training use cases is a key requirement for enabling deep learning (DL) adoption. The inference step deploys the trained model and uses it in the target application environment.
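To illustrate why scale-out clusters suit DNN training, the following is a minimal, framework-free sketch (all names are hypothetical, not from this document) of the data-parallel pattern such clusters exploit: each node computes gradients on its own shard of the data, and the gradients are averaged (as an all-reduce would do across nodes) before each weight update.

```python
def local_gradient(weight, shard):
    # Gradient of mean squared error for a 1-parameter linear model y = w*x,
    # computed on one node's data shard of (x, y) pairs.
    n = len(shard)
    return sum(2 * (weight * x - y) * x for x, y in shard) / n

def data_parallel_step(weight, shards, lr=0.01):
    # Each "node" handles one shard; in a real cluster an all-reduce would
    # average these gradients. Here the nodes are simulated in one process.
    grads = [local_gradient(weight, s) for s in shards]
    avg_grad = sum(grads) / len(grads)
    return weight - lr * avg_grad

# Data generated from y = 3*x, split across two simulated nodes.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 2))  # converges toward 3.0
```

Adding nodes adds data shards without growing any single node's per-step work, which is the property that makes scale-out designs attractive for DNN training.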
This reference architecture is intended for IT professionals, technical architects, sales engineers, and consultants who are planning, designing, and implementing advanced analytics solutions with Lenovo hardware.
Table of Contents
- Business problem and business value
- Architectural overview
- Component model
- Operational model
- Deployment considerations
- Appendix: Bill of Materials
- Appendix: Example Training Workload
Changes in the November 9 update (Version 2.0):
- Added inference
- Added big data storage
- Updated BOM tables to include configurations with 25Gb Ethernet switch
Note: The Chinese version of this document is back-level but will be updated to this latest version shortly.
Related product families
Product families related to this document are the following: