The first thing to say is that CPU performance is a very multi-faceted beast. There is no single magic measure or benchmark, which means you can make almost any cloud look like ‘the best’ in the right light. That’s why we don’t publish our own in-house benchmarks: they simply lack credibility with a discerning audience.
Holistically, CPU performance comes from the interplay of several factors, which you can think of as layers. The first is the raw potential of the underlying hardware: an older, slower processor clearly has a lower ceiling than a newer, faster one. The second is the hypervisor layer: how efficiently it translates guest work onto that hardware, and whether resources are over-allocated or contended. The third is the fit between how the CPU is exposed to the guest and what your application ideally wants to see: details like the number of CPU threads, available instruction sets, and NUMA topology, which can make a huge difference to real-world computing.
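To make that third layer concrete, here is a minimal sketch of how you might inspect what a guest actually sees. It assumes a Linux guest with a standard `/proc/cpuinfo`; the `visible_cpu_features` helper and the specific flags checked are illustrative, not part of any particular cloud's tooling.

```python
# Sketch: inspect the CPU features a Linux guest actually sees.
# Assumes a Linux /proc filesystem; degrades gracefully elsewhere.
import os

def visible_cpu_features(cpuinfo_path="/proc/cpuinfo"):
    """Return (logical_cpu_count, sorted instruction-set flags) as the guest sees them."""
    flags = set()
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
    except OSError:
        pass  # not Linux, or /proc unavailable
    return os.cpu_count(), sorted(flags)

cpus, flags = visible_cpu_features()
print(f"{cpus} logical CPUs visible to this guest")
for isa in ("avx2", "avx512f", "aes"):  # example features worth checking
    print(f"{isa}: {'yes' if isa in flags else 'no'}")
```

If a hypervisor masks CPU flags, this is where you would notice: the host may support AVX-512 while the guest reports nothing of the sort.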
Below, I outline how we deliver high-performance computing to customers in light of each of these factors.
Factor 1: Raw Underlying Hardware Potential
Factor 2: Hypervisor Efficiency & Resource Contention
Our proprietary cloud stack monitors load on each individual compute node to prevent contention between customers’ workloads. That way you always get the CPU throughput you’ve paid for, unlike platforms that slow down at busy times.
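You can check for hypervisor contention from inside a guest yourself. On Linux, the aggregate `cpu` line in `/proc/stat` includes a "steal" field (field 8, per proc(5)): time the hypervisor ran something else while this VM wanted the CPU. A minimal sketch, assuming that layout:

```python
# Sketch: estimate CPU steal time on a Linux guest over a short interval.
# Field order follows proc(5): user nice system idle iowait irq softirq steal ...
import time

def steal_fraction(interval=1.0, stat_path="/proc/stat"):
    """Fraction of CPU time stolen by the hypervisor over `interval` seconds."""
    def snapshot():
        with open(stat_path) as f:
            values = [int(v) for v in f.readline().split()[1:]]  # aggregate "cpu" line
        steal = values[7] if len(values) > 7 else 0  # older kernels lack the field
        return steal, sum(values)

    s0, t0 = snapshot()
    time.sleep(interval)
    s1, t1 = snapshot()
    return (s1 - s0) / (t1 - t0) if t1 > t0 else 0.0

print(f"steal: {steal_fraction() * 100:.2f}%")
```

Persistently non-trivial steal time is a strong hint that your node is over-subscribed, whatever the provider's marketing says.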
Factor 3: CPU Settings & Customization
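One setting worth verifying is the NUMA topology the guest is given, since a layout that doesn't match the host's can quietly cost memory bandwidth. A sketch of reading it from the standard Linux sysfs path (the `numa_nodes` helper is illustrative):

```python
# Sketch: list the NUMA layout a guest exposes, assuming Linux sysfs.
import glob
import os

def numa_nodes(sysfs_root="/sys/devices/system/node"):
    """Map NUMA node id to its list of CPU ids; {} if no topology is exposed."""
    nodes = {}
    for path in sorted(glob.glob(os.path.join(sysfs_root, "node[0-9]*"))):
        node_id = int(os.path.basename(path)[4:])
        with open(os.path.join(path, "cpulist")) as f:
            cpulist = f.read().strip()  # e.g. "0-3,8-11"
        cpus = []
        for part in cpulist.split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.extend(range(int(lo), int(hi) + 1))
            elif part:
                cpus.append(int(part))
        nodes[node_id] = cpus
    return nodes

for node, cpus in numa_nodes().items():
    print(f"node {node}: {len(cpus)} CPUs -> {cpus}")
```

A single flat node on a large instance often means the hypervisor is hiding the real topology, which NUMA-aware software (databases, JVMs) cannot then optimise for.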
Take a look at this comparison of cloud server performance across providers.