Lenovo ThinkAgile MX Series Performance Review

This page contains a video showing the performance of the Lenovo ThinkAgile MX Series. The ThinkAgile MX Certified Nodes are designed for deploying highly available, highly scalable hyper-converged infrastructure (HCI) and software-defined storage (SDS) from Microsoft on Lenovo enterprise platforms featuring the second generation of the Intel Xeon Processor Scalable Family.

The ThinkAgile MX Certified Nodes deliver fully validated and integrated Lenovo hardware and firmware that is certified for Microsoft Azure Stack HCI solutions.

ThinkAgile MX Series' Excellent Performance

The ThinkAgile cluster returns a total of 903,255 IOPS and a throughput of 3.5 GB/s. By comparison, HPE Nimble All-Flash Storage achieves a total of 22,500 IOPS. To learn more about the HPE Nimble results, you can download and read the three-page document where they are clearly explained.

Get To Know Your Storage Constraints: IOPS and Throughput

IOPS stands for input/output operations per second. It is generally a measurement of performance for hard drives (HDDs or SSDs) and storage area networks, representing how many read and write operations a given storage device or medium can complete each second.

Throughput measures how many units of data a system can process in a period of time. It can refer to the number of I/O operations per second, but is typically measured in bytes per second. On their own, neither IOPS nor throughput gives an accurate performance measurement; the two need to be read together.
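To see how the two metrics relate, here is a quick back-of-the-envelope check in PowerShell. The 4 KiB block size is an assumption for illustration, not a figure taken from the test:

```powershell
# Throughput and IOPS are linked by the I/O block size:
#   throughput (bytes/s) = IOPS x block size (bytes)
$iops      = 903255      # total IOPS reported for the ThinkAgile test above
$blockSize = 4KB         # assumed 4,096-byte block size (illustrative only)
$bytesPerSec = $iops * $blockSize
"{0:N1} GiB/s" -f ($bytesPerSec / 1GB)   # ~3.4 GiB/s, roughly in line with the ~3.5 GB/s figure
```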

The video above shows an IOPS test run with a tool called VMFleet on three ThinkAgile MX Series nodes, along with the results achieved.

What is VMFleet?

VMFleet is a storage load generator used to stress Storage Spaces Direct (S2D). It deliberately loads your disks, CPUs, and storage network to check whether the S2D deployment is stable, and it is also used to measure the performance of the S2D storage subsystem. More broadly, it lets an engineer analyze performance by simulating a real-world workload. VMFleet is a set of scripts, available on GitHub, that uses DiskSpd for testing and validating performance on HCI clusters. DiskSpd is a free, open-source storage benchmarking tool for Microsoft Windows Server environments.
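For a sense of what VMFleet drives inside each VM, here is a representative DiskSpd invocation. The parameter values and the test file path are illustrative, not the settings used in the video:

```powershell
# A typical random-I/O benchmark with DiskSpd:
#   -b4K   4 KiB block size        -d60  run for 60 seconds
#   -t8    8 threads               -o32  32 outstanding I/Os per thread
#   -r     random I/O              -w30  30% writes / 70% reads
#   -Sh    disable software and hardware caching
#   -L     capture latency statistics
#   -c10G  create a 10 GiB test file if it does not already exist
diskspd.exe -b4K -d60 -t8 -o32 -r -w30 -Sh -L -c10G C:\test.dat
```

Disabling caching (-Sh) matters here: it ensures the results reflect the storage subsystem rather than memory.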

Three Way Mirror Resilience

This falls under the topic of fault tolerance and storage efficiency in Storage Spaces Direct. Storage Spaces is about providing fault tolerance, often called ‘resiliency’, for your data. Its implementation is similar to RAID, except distributed across servers and implemented in software.

Mirroring provides fault tolerance by keeping multiple copies of all data. This most closely resembles RAID-1. How that data is striped and placed is non-trivial (see this blog to learn more), but it is absolutely true to say that any data stored using mirroring is written, in its entirety, multiple times. Each copy is written to different physical hardware (different drives in different servers) that are assumed to fail independently.

Three-way mirroring writes three copies of everything. Its storage efficiency is 33.3% – to write 1 TB of data, you need at least 3 TB of physical storage capacity. Likewise, you need at least three hardware fault domains – with Storage Spaces Direct, that means three servers.
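As a minimal sketch, this is how a three-way mirrored volume is created on an S2D cluster with PowerShell. On a pool spanning three or more servers, the Mirror resiliency setting defaults to a three-way mirror; the pool name, volume name, and size below are examples only:

```powershell
# Create a mirrored, cluster-shared ReFS volume on the S2D storage pool.
New-Volume -StoragePoolFriendlyName "S2D*" `
           -FriendlyName "Volume1" `
           -FileSystem CSVFS_ReFS `
           -ResiliencySettingName Mirror `
           -Size 1TB
```

Under three-way mirroring, this 1 TB volume consumes roughly 3 TB of the pool's physical capacity, which is the 33.3% efficiency described above.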

Three-way mirroring can safely tolerate at least two hardware problems (drive or server) at a time. For example, if you’re rebooting one server when suddenly another drive or server fails, all data remains safe and continuously accessible.
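You can verify the resiliency and health of existing volumes with a read-only query; a PhysicalDiskRedundancy of 2 indicates the data can survive two simultaneous failures:

```powershell
# Inspect resiliency settings and health of each virtual disk in the pool.
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName,
    PhysicalDiskRedundancy, HealthStatus, OperationalStatus
```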

Find out more on fault tolerance and storage efficiency in Storage Spaces Direct here…

