
Reference Architecture: Lenovo ThinkSystem Edge Compute and Storage Solution for AI Inference Workloads

Reference Architecture

Published: 9 Aug 2021
Form Number: LP1513
PDF size: 27 pages, 603 KB

Abstract

This Lenovo reference architecture describes an edge architecture using Lenovo ThinkSystem compute servers and ThinkSystem DM Series storage systems, optimized for GPU-accelerated Artificial Intelligence (AI) inference workloads. The architecture enables inference deployments at the edge with localized consolidation of data storage, optimizing data movement across the environment without impacting performance.

This document covers testing and validation of a compute/storage configuration consisting of two GPU-accelerated ThinkSystem SE350 servers and an entry-level, 10GbE-connected ThinkSystem DM Series storage system, providing an efficient and cost-effective solution for deploying AI inference applications that require the enterprise-grade capabilities of DM Series storage.

This document is intended for enterprise architects who design and deploy production solutions that use AI models and software outside the traditional datacenter, and for IT decision makers and business leaders who want to achieve the fastest possible time to market for AI initiatives.

Table of Contents

1 Introduction
2 Technology Overview
3 Test Overview
4 Test Configuration
5 Test Procedure
6 Test Results
7 Architecture Adjustments
8 Conclusion
9 Appendix: Lenovo Bill of Materials
Resources


Related product families

The following product families are related to this document: