Published: 21 Jun 2024
Form Number: LP1976
PDF size: 6 pages, 112 KB

Enabling AI at any scale
The evolution of artificial intelligence and machine learning (AI/ML) delivers new value streams across the enterprise and drives significant business impact. Growing adoption of generative AI in the enterprise is driving the need for more hardware and accelerators, which increases total cost of ownership (TCO). The Lenovo ThinkSystem V4 server portfolio powered by Intel® Xeon® 6 processors with E-cores enables right-sized AI compute with efficient, secure, workload-optimized solutions for classical machine learning and enterprise private AI use cases.
In September 2024, Lenovo will start shipping the ThinkSystem SD520 V4 1U half-width server, followed in November 2024 by the ThinkSystem SR630 V4 1U rack server, both equipped with the Intel Xeon 6 processor family with E-cores (codenamed Sierra Forest). These E-core processors have a scalable architecture with a higher number of cores to support AI/ML and enterprise workloads. The SD520 V4 server is built with Lenovo Neptune liquid cooling modules to provide efficient cooling of the CPUs.
Intel Xeon 6 processors with E-cores are enhanced to deliver density-optimized compute in the most power-efficient manner. Xeon processors with E-cores provide best-in-class power-performance density, offering distinct advantages for cloud-native and hyperscale workloads:
- 2.5x better rack density and 2.4x higher performance per watt.
- Support for 1S and 2S servers, with up to 144 cores per CPU and TDP as low as 200W.
- Modern instruction set with robust security, virtualization, and AVX with AI extensions.
Intel Xeon 6 processors with E-cores are designed to be more power efficient, consuming 30-40% less power than 5th Gen Intel Xeon processors when servers run at 40-60% utilization, which dramatically reduces power and cooling costs. They also deliver up to 20% more performance than the previous generation, increasing the consolidation ratio for any workload. Compute-intensive AI/ML workloads benefit greatly from the Xeon 6 architecture combined with Intel-optimized AI software libraries, and the solution reduces rack, power, and cooling costs to achieve a better return on investment.
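As a rough illustration of how the figures quoted above compound, the following sketch combines the stated power and performance deltas into an approximate performance-per-watt ratio. The percentages are the ones cited in this section; the normalized baseline values are hypothetical placeholders, not measured data.

```python
# Rough illustration of how the quoted deltas compound.
# The 20% performance gain and 30-40% power reduction are the figures
# cited above; the baseline values below are hypothetical placeholders.

baseline_perf = 1.00   # normalized throughput of the previous generation
baseline_power = 1.00  # normalized power draw at 40-60% utilization

xeon6_perf = baseline_perf * 1.20         # up to 20% more performance
xeon6_power_low = baseline_power * 0.60   # best case: 40% less power
xeon6_power_high = baseline_power * 0.70  # conservative case: 30% less power

perf_per_watt_low = xeon6_perf / xeon6_power_high
perf_per_watt_high = xeon6_perf / xeon6_power_low

print(f"Approximate perf/watt gain: {perf_per_watt_low:.2f}x to {perf_per_watt_high:.2f}x")
```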
Features to optimize AI/ML use cases
Lenovo ThinkSystem SR630 V4 and SD520 V4 servers have the following features to optimize AI/ML use cases:
- The Sub-NUMA Clustering (SNC) feature can provide improved performance for ResNet-50.
- E-core counts from 64 to 144 per CPU provide greater energy efficiency and are ideal for inference workloads and SMBs.
- Optimization support for AVX2-128 (VNNI/INT8 and BF16), Accelerator ISA (AiA), and 5G ISA (see the inference sketch after this list).
- Fast upconvert for FP16 and BF16.
- Memory support for DDR5-6400 MT/s.
- GPU support: SR630 V4 (up to three single-width 75W GPUs) and SD520 V4 (one single-width 75W GPU).
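As a hedged sketch of how the INT8/BF16 capabilities listed above are typically exercised in software, the example below runs a PyTorch model in bfloat16 on the CPU using Intel Extension for PyTorch. The torchvision ResNet-50 and the random input batch are stand-ins for a real inference workload, and the listed packages are assumed to be installed.

```python
# Minimal sketch: bfloat16 CPU inference with Intel Extension for PyTorch.
# Assumes torch, torchvision, and intel_extension_for_pytorch are installed;
# ResNet-50 and the random batch are placeholders for a real workload.
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights=None).eval()

# Apply IPEX operator optimizations and cast weights to bfloat16.
model = ipex.optimize(model, dtype=torch.bfloat16)

# Dummy input batch; replace with real preprocessed images.
x = torch.randn(8, 3, 224, 224)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(x)

print(output.shape)  # torch.Size([8, 1000])
```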
Intel® Optimized AI Libraries & Frameworks
Intel provides a comprehensive portfolio of AI development software covering data preparation, model development, training, inference, deployment, and scaling. Using optimized AI software and developer tools can significantly improve AI workload performance and developer productivity while reducing compute resource costs. Intel® oneAPI libraries enable the AI ecosystem with optimized software, libraries, and frameworks. Software optimizations include leveraging accelerators, parallelizing operations, and maximizing core usage.
Intel AI software and optimization libraries provide scalable performance on Intel CPUs and GPUs. Many of the libraries and framework extensions are designed to leverage the CPU to provide optimal performance for machine learning and inference workloads. Intel Xeon 6 processors with E-cores are compatible with many Intel-optimized AI libraries and tools and provide an ecosystem for model development and deployment for enterprise-wide use cases.
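One way to see what these CPU-optimized libraries look like in practice is Intel Extension for Scikit-learn, which patches standard scikit-learn estimators with oneAPI-accelerated implementations. The sketch below assumes the scikit-learn and scikit-learn-intelex packages are installed and uses a synthetic dataset purely for illustration.

```python
# Minimal sketch: accelerating classical ML with Intel Extension for Scikit-learn.
# Assumes scikit-learn and scikit-learn-intelex are installed; the dataset
# below is synthetic and used only for illustration.
from sklearnex import patch_sklearn
patch_sklearn()  # swap in oneDAL-backed implementations where available

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=100_000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```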
| Software / Solution | Details |
|---|---|
| Intel oneAPI Library | |
| MLOps | Cnvrg.io is a platform to build and deploy AI models at scale |
| AI Experimentation | SigOpt is a guided platform to design experiments, explore parameter space, and optimize hyperparameters and metrics |
| Intel AI Reference Models | Repository of pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open source machine learning models optimized to run on Intel hardware: https://github.com/intel/models |
| Intel Distribution for Python | |
| AI Model Optimization: Intel® Neural Compressor | Supports models created with PyTorch, TensorFlow, Open Neural Network Exchange (ONNX) Runtime, and Apache MXNet |
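To make the Intel Neural Compressor row above concrete, the sketch below shows post-training INT8 quantization of a PyTorch model. It assumes the neural-compressor 2.x Python API, and the torchvision ResNet-50 and random calibration data are placeholders for a real model and dataset.

```python
# Minimal sketch: post-training INT8 quantization with Intel Neural Compressor.
# Assumes the neural-compressor 2.x API; the model and random calibration
# data below are placeholders for a real model and dataset.
import torch
import torchvision.models as models
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

float_model = models.resnet50(weights=None).eval()

# A handful of random samples stands in for a real calibration set.
calib_data = TensorDataset(torch.randn(32, 3, 224, 224),
                           torch.zeros(32, dtype=torch.long))
calib_loader = DataLoader(calib_data, batch_size=8)

q_model = fit(
    model=float_model,
    conf=PostTrainingQuantConfig(),  # defaults to static INT8 quantization
    calib_dataloader=calib_loader,
)
q_model.save("./quantized_resnet50")
```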
Deploy and Scale Generative AI Workloads with ThinkSystem V4 Systems
Lenovo ThinkSystem V4 systems with Intel Xeon 6 processors with E-cores and entry-level GPU accelerators provide a cost-effective infrastructure solution to scale your AI deployments and generative AI use cases. With higher core counts, greater power efficiency, and optimized AI software, many classical AI/ML use cases and inference workloads can run seamlessly on the CPU without the need for expensive GPU accelerators.
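As an illustrative, hedged example of CPU-based generative AI inference, the sketch below loads a small Hugging Face text-generation model in bfloat16 on the CPU. The model name is only a placeholder assumption; any comparable open model can be substituted, and the transformers and torch packages are assumed to be installed.

```python
# Minimal sketch: small-scale generative AI inference on CPU in bfloat16.
# Assumes transformers and torch are installed; the model name is a
# placeholder and can be replaced with any comparable open model.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gpt2",                # placeholder model; swap in your own
    torch_dtype=torch.bfloat16,  # use the CPU's BF16 support
    device=-1,                   # -1 = run on CPU
)

result = generator("Enterprise AI on Xeon 6 E-cores can", max_new_tokens=40)
print(result[0]["generated_text"])
```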
XClarity One Powered by AIOps
Lenovo ThinkSystem V4 servers are supported by the XClarity One platform, a hybrid cloud-based unified systems management solution. XClarity One provides three predictive failure analytics engines to swiftly identify potential issues and minimize system downtime while increasing accuracy.
Why Lenovo
Lenovo is a US$70 billion revenue Fortune Global 500 company serving customers in 180 markets around the world. Focused on a bold vision to deliver smarter technology for all, we are developing world-changing technologies that power (through devices and infrastructure) and empower (through solutions, services and software) millions of customers every day.
For More Information
To learn more about this Lenovo solution, contact your Lenovo Business Partner or visit: https://www.lenovo.com/us/en/servers-storage/solutions/ai/
References:
Lenovo ThinkSystem SD520 V4: https://lenovopress.lenovo.com/ds0184
Lenovo ThinkSystem SR630 V4: https://lenovopress.lenovo.com/ds0185
Intel AI Development Software
Intel Unveils Future Generation Xeon Architecture: https://www.intel.com/content/www/us/en/newsroom/news/intel-unveils-future-generation-xeon.html
Trademarks
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
Neptune®
ThinkSystem®
XClarity®
The following terms are trademarks of other companies:
Intel® and Xeon® are trademarks of Intel Corporation or its subsidiaries.
Other company, product, or service names may be trademarks or service marks of others.