When AMD launched its Opteron processors nearly a decade and a half ago, it succeeded in stealing a march on rival Intel.
While Intel tried to position the ill-fated Itanium architecture as its path to 64-bit computing, AMD instead extended the x86 architecture to 64 bits, offering customers a seamless transition because Opteron was fully compatible with existing operating systems and applications.
On the back of that move, AMD secured a substantial share of the server processor market for several years, until Intel hastily added 64-bit support to its Xeon chips and AMD lost much of its initial momentum.
For CIOs and other executives with sign-off on corporate IT purchases, those early Opterons were something of a no-brainer.
Choosing AMD-based systems when refreshing server hardware meant an organisation could continue to run all of its existing software without modification, and upgrade to 64-bit versions as these became available.
With its new EPYC server processors, AMD now faces a different challenge. It does not necessarily have the compelling advantage it enjoyed with the Opteron, and firms have now grown used to buying Intel Xeon-based systems almost by reflex.
If it is to persuade customers to choose EPYC instead, AMD knows that it has to offer something different.
“We realised we had to focus on areas where Intel was either coming up short or wasn’t interested in delivering,” says Forrest Norrod, senior vice president and general manager of AMD’s Enterprise group.
An obvious area to focus on is performance, which EPYC achieves simply by having more processing cores than Intel’s chips offer, with the top SKUs packing 32 of its “Zen” cores.
This comparison applies to the existing Xeon line-up, however, and Intel may have a few surprises with its new chips based on the “Skylake” microarchitecture, which are due for release in the near future. Rumours suggest that it too may deliver a 32-core variant.
AMD has put a lot of effort into ensuring its EPYC chips deliver performance while minimising power consumption. Features such as workload-aware power management enable the processor to deliver its claimed performance gains while cutting power consumption by between 12% and 22% at the system level. Taken across an entire rack of systems, this could add up to a worthwhile saving on energy bills for customers.
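As a rough illustration of what a 12-22% system-level saving could mean across a rack, here is a back-of-the-envelope sketch. All of the inputs (server count, per-server power draw, electricity price) are hypothetical illustrative figures, not AMD's:

```python
# Back-of-the-envelope rack energy saving estimate.
# All inputs are hypothetical illustrative values, not AMD figures.

def annual_saving(servers, watts_per_server, saving_fraction, price_per_kwh):
    """Annual energy cost saved for a rack of servers running 24/7."""
    hours_per_year = 24 * 365
    baseline_kwh = servers * watts_per_server * hours_per_year / 1000
    return baseline_kwh * saving_fraction * price_per_kwh

# A rack of 40 servers drawing 500 W each, at the claimed 12% and 22%
# system-level savings, priced at an assumed $0.10 per kWh.
low = annual_saving(40, 500, 0.12, 0.10)
high = annual_saving(40, 500, 0.22, 0.10)
print(f"Annual saving per rack: ${low:,.0f} to ${high:,.0f}")
```

Even with these modest assumptions the saving runs to a few thousand dollars per rack per year, which is why the figure matters more at datacentre scale than for a single server.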
Assessing the AMD use cases
AMD has not simply focused on benchmark performance, but on differentiating the EPYC platform from Intel’s Xeon chips in key areas that should make AMD the better option for organisations interested in particular use cases.
For example, each EPYC chip has twice the number of memory channels of Intel’s current Xeon chips, making it theoretically possible to have up to 2TB of memory in a single-socket system and up to 4TB in a two-socket configuration. This could make EPYC servers an attractive option for running workloads such as in-memory databases and high performance computing (HPC).
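The 2TB figure follows directly from the channel and DIMM counts. A quick sketch of the arithmetic, where the per-DIMM capacity is an assumption on our part (the article does not specify module sizes):

```python
# How EPYC's theoretical memory ceiling is reached: 8 memory channels
# per socket, 2 DIMMs per channel, and (assumed here) 128 GB DIMMs.
channels_per_socket = 8
dimms_per_channel = 2
dimm_size_gb = 128  # assumed large-capacity modules

per_socket_gb = channels_per_socket * dimms_per_channel * dimm_size_gb
print(per_socket_gb, "GB per socket")          # 2 TB single-socket
print(2 * per_socket_gb, "GB in two sockets")  # 4 TB two-socket
```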
Each EPYC chip also features 128 lanes of PCIe for I/O, more than double that of rival Xeon chips. This allows a large number of devices, such as GPU accelerators or NVMe flash storage drives, to be connected directly to the processor for low-latency access.
AMD claims the ability to connect a large number of GPUs means EPYC systems will find favour with customers running machine learning and big data analytics workloads. Such systems often use PCIe switches to interconnect multiple GPU cards, an approach understood to add considerable cost.
Meanwhile, the ability to connect a large number of flash storage devices directly to the processor should also make EPYC systems, especially one-socket configurations, well suited to serving as a node for software-defined storage clusters.
In fact, a single-socket EPYC server should be capable of fulfilling many of the scenarios that would otherwise require a two-socket system, for less cost. AMD drives home this point by claiming that EPYC delivers “two-socket performance for a single-socket price”.
“The optimised one-socket story will appeal to a lot of people, because it should open up a new price point,” said John Abbot, research vice president at analyst firm 451 Research.
EPYC at work and in use
One supplier already moving to take advantage is HPE, which has announced an EPYC-based Cloudline CL3150 server targeting cloud and service providers. This exploits the EPYC’s large number of PCIe lanes to support up to 24 NVMe SSDs in a 1U rack-mount chassis, and is capable of delivering a throughput of 9.1 million IOPS, according to HPE.
Meanwhile, with security now a major concern for all organisations, the EPYC platform’s built-in security features should also prove a valuable feature for many customers. This starts with the AMD Secure Processor, which is a dedicated ARM-based microcontroller embedded into each EPYC chip.
This generates and manages keys for encryption functions, but also serves as a hardware root of trust to ensure the integrity of the system itself. The Secure Processor only allows low-level code with a valid digital signature to run, preventing the system from being compromised by attacks that inject malicious code at boot time, before the operating system loads.
Another feature, Secure Memory Encryption (SME), automatically encrypts data written to memory and decrypts it when it is read back into the processor. This is handled by hardware in each memory channel, so it is completely transparent to any application protected by SME.
This approach is taken a step further with Secure Encrypted Virtualisation (SEV), which enables each virtual machine running on a server to be protected using its own encryption key. This ensures virtual machines are protected against a compromised hypervisor, for example.
There is a caveat to this, of course: encrypting and decrypting all memory accesses on the fly carries a performance penalty, even with dedicated hardware doing the job.
AMD said this adds 7ns or 8ns to memory accesses, but claims it incurs only a 1.5% hit on performance. The feature is optional, however, and customers can choose whether to use it.
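On Linux, kernels that recognise these features report them as CPU flags (`sme` and `sev`) in /proc/cpuinfo. A minimal sketch that checks a cpuinfo-style flags line for them; the sample text below is illustrative rather than taken from a real machine:

```python
# Check a /proc/cpuinfo-style dump for AMD memory-encryption flags.
# Sample text is used here so the snippet is self-contained; on a
# real system you would read the file itself.

def memory_encryption_flags(cpuinfo_text):
    """Return which of the 'sme'/'sev' CPU flags appear in the text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return {f for f in ("sme", "sev") if f in flags}
    return set()

sample = "flags\t\t: fpu vme de sse2 ht syscall nx lm sme sev"
print(memory_encryption_flags(sample))

# On a real EPYC host:
# with open("/proc/cpuinfo") as f:
#     print(memory_encryption_flags(f.read()))
```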
Locking down colocation
For organisations that use a colocation provider to host their servers, the EPYC processor’s encrypted memory support and secure boot capability provide assurance that systems are protected against tampering, which could prove a compelling reason to choose EPYC servers in this scenario.
While SME and SEV are expected to have value for some enterprise customers, they are likely to prove more important for cloud service providers, who will be able to assure customers that workloads running inside virtual machines on the public cloud are better isolated and protected.
Even the service provider itself will not have access to the keys, which are protected by the Secure Processor inside the EPYC chip.
“In a multi-tenant environment, users of virtual machines can be sure that no one can compromise their data, even if there is a rogue administrator working at the datacentre,” claims Norrod.
Abbot agrees, and says, “AMD has done a lot here to add security to virtualised environments. This will be pretty useful for service providers that want to offer that extra level of security.”
Overall, AMD’s new EPYC platform seems impressive, and shows a clear focus on addressing some key customer requirements, as it needs to do if it is to win back some of its lost market share.
“It’s encouraging that a competitor is coming back into servers and addressing some underserved parts of the market,” says Abbot.
However, whether it is offering a compelling enough alternative to Intel-based systems to convince organisations to procure AMD once again remains to be seen.
“The platform needs to be flexible enough for them and their partners to find segments where they can differentiate. That’s essential when you come back in after so long,” Abbot adds.
The firm already seems to have strong support: Dell is using the chips in some of its upcoming 14G PowerEdge server line-up, while Supermicro has a whole portfolio of EPYC-based systems, including a version of its BigTwin high-density enclosure that fits four nodes into a 2U chassis.
At the EPYC launch event in mid-June 2017, AMD was also joined by VMware, which said EPYC should enable even greater consolidation of virtual machines, and Microsoft, which set out its plans to become the first global cloud provider to offer infrastructure services based on EPYC systems. But a lot may come down to price, and AMD has now revealed the starting prices for its EPYC chips.
The 32-core chips will start at $3,400, with 24-core chips coming in at $1,850 and the 16-core chips starting at $650. The entry-level 8-core chip, which AMD is supporting “because some software is licensed on core counts,” comes in at $475.
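Dividing those list prices by core count shows how the pricing scales across the range; a quick illustrative calculation:

```python
# List price per core across the quoted EPYC starting prices.
prices = {32: 3400, 24: 1850, 16: 650, 8: 475}

for cores, price in sorted(prices.items(), reverse=True):
    print(f"{cores}-core: ${price} -> ${price / cores:.2f} per core")
```

Notably, at these starting prices the 16-core part works out cheapest per core, while the 8-core chip costs more per core than the 16-core one, which fits AMD's framing of it as a part for software licensed by core count rather than a value play.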
“In terms of total cost of ownership, it is about performance per watt per dollar, the energy matters,” said AMD’s senior vice president and chief technology officer, Mark Papermaster.
“You can’t fool anybody when it comes to the datacentre. You run the applications, you look at what it costs to get that work done, and that drives your assessment of how successful your product will be.”