Silicon considerations for the new data center
April 13, 2015
Over the years, the data center has experienced tremendous computing growth, with servers now able to crunch more data while using less power. In the near future, the need for compute power will continue to grow, but the emphasis will shift to deploying silicon solutions throughout the data center that are both cost-effective and secure.
This is a significant challenge. Disruptive technologies such as software-defined networking (SDN) and network function virtualization (NFV) have given enterprises the means to offer more functionality than ever before. However, for SDN and NFV to be viable paradigms, enterprises need silicon specifically designed for power-efficient computing.
Leveraging new network “compute” architectures
Historically, companies employed relatively expensive, hard-to-program accelerators based on FPGAs or single-use ASICs in their network topology. That changes with SDN and NFV, which allow data center operators to leverage commodity hardware and increase resource utilization within the infrastructure. The goal is to lower capital and operational expenditures; at the same time, users are demanding higher levels of quality of service (QoS). Achieving both requires silicon that meets the needs of both businesses and users.
So what makes the “right silicon”? Ultimately, the best silicon is the one that delivers the best performance per dollar and per watt and is backed by a robust software ecosystem. Choosing it means matching the appropriate compute architectures to the workload mix within the data center.
Both SDN and NFV are vast, sprawling infrastructure paradigms, and relying on silicon from a single architecture and a single vendor may not provide the optimal solution from a cost-benefit standpoint. Some functions, such as network control plane operations, are well suited to high-performance x86 and ARM processor cores, while others, such as virtual switching, cryptography, packet classification, and deep packet inspection, are better suited to general-purpose GPU (GPGPU) architectures, which provide the necessary power-efficient performance. The critical step is to analyze the workload mix and choose the compute architecture best suited to each workload.
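The workload-to-architecture pairing described above can be sketched as a simple lookup table. The names and the mapping below are purely illustrative, following the examples in this article rather than any vendor guidance:

```python
# Illustrative mapping of data-center workloads to the compute
# architecture the article argues suits each one best.
WORKLOAD_FIT = {
    "network_control_plane":  "x86/ARM CPU cores",
    "virtual_switching":      "GPGPU",
    "cryptography":           "GPGPU",
    "packet_classification":  "GPGPU",
    "deep_packet_inspection": "GPGPU",
}

def pick_architecture(workload: str) -> str:
    """Return the suggested architecture for a workload.

    Unprofiled workloads fall back to general-purpose CPU cores,
    the safe default for control-plane-style work.
    """
    return WORKLOAD_FIT.get(workload, "x86/ARM CPU cores")
```

In practice this decision rests on profiling real traffic, not a static table, but the principle is the same: classify the workload first, then choose the silicon.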
But what does this mean at the operations level? It means being able to do more with each packet at wire-line speeds without diminishing QoS levels.
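To see how demanding "wire-line speeds" really are, a back-of-the-envelope calculation helps. Assuming a 10 Gb/s link and minimum-size 64-byte Ethernet frames (figures chosen for illustration; each frame also carries 20 bytes of preamble, start-of-frame delimiter, and inter-frame gap on the wire):

```python
# Per-packet compute budget at wire rate, worst case (smallest frames).
LINK_BPS = 10e9          # 10 Gb/s link
FRAME_BYTES = 64 + 20    # minimum frame plus on-the-wire overhead

packets_per_sec = LINK_BPS / (FRAME_BYTES * 8)
ns_per_packet = 1e9 / packets_per_sec

print(f"{packets_per_sec / 1e6:.2f} Mpps")    # ≈ 14.88 Mpps
print(f"{ns_per_packet:.1f} ns per packet")   # ≈ 67.2 ns
```

Roughly 67 nanoseconds per packet leaves room for only a few hundred CPU cycles of work, which is why offloading functions like classification and inspection to parallel silicon matters.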
The future of data storage
Networking isn’t the only area of the data center that is ripe for disruption. For the most part, storage has been just about the physical media, whether that means the capacity or performance of hard drives, solid-state drives (SSDs), or magnetic tape. Tomorrow’s storage area networks will incorporate more intelligence than ever before to make efficient use of data and allow for cost-effective scaling.
Data storage is now much more than reading and writing to media, and the compute requirement is growing. Users expect data to be encrypted, companies need redundancy so that failures do not affect operational status, and the dramatic growth in user-generated data has increased the need for functions such as deduplication and compression. What all of these functions have in common is the need for compute power within the storage infrastructure. By performing many operations in parallel, GPGPUs enable power- and cost-efficient acceleration of storage functions such as RAID and compression.
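As a minimal sketch of one such function, fixed-size-chunk deduplication can be shown in a few lines. This is a toy, single-threaded version; the point is that each chunk's hash is computed independently, which is exactly the kind of work a GPGPU or multi-core part can run in parallel:

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and store each unique chunk once.

    Returns (store, recipe): store maps chunk-hash -> chunk bytes, and
    recipe is the ordered list of hashes needed to rebuild the data.
    """
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        # Each digest is independent of every other chunk, so this
        # loop parallelizes trivially across cores or GPU threads.
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

# Example: 16 KiB of data containing repeated 4 KiB chunks.
data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
store, recipe = dedupe(data)
print(len(recipe), "chunks referenced,", len(store), "stored")
```

Production systems use content-defined (variable-size) chunking and stronger indexing, but the compute profile, hashing every byte that enters the system, is the same.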
The challenge of providing high I/O operations per second (IOPS) comes down to the silicon that interfaces with the storage; if it isn’t robust enough, it soon becomes the bottleneck in the infrastructure. Modern integrated SoCs, high-performance 64-bit CPUs, and GPGPUs together provide a compute solution that meets these ever more demanding data center needs.
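A quick way to see how silicon latency caps IOPS is Little's Law: sustainable IOPS is roughly the number of outstanding I/Os divided by per-I/O latency. The figures below are illustrative assumptions, not vendor numbers:

```python
# Rough IOPS budget via Little's Law: IOPS ≈ queue_depth / latency.
QUEUE_DEPTH = 32      # outstanding I/Os the controller keeps in flight
LATENCY_S = 100e-6    # assumed 100 µs per I/O through the silicon
IO_BYTES = 4096       # 4 KiB I/O size

iops = QUEUE_DEPTH / LATENCY_S
throughput_mb_s = iops * IO_BYTES / 1e6

print(f"{iops:,.0f} IOPS")            # ≈ 320,000 IOPS
print(f"{throughput_mb_s:.0f} MB/s")  # ≈ 1311 MB/s
```

The arithmetic cuts both ways: halving latency in the interface silicon doubles achievable IOPS at the same queue depth, which is why the controller, not the media, is often the real bottleneck.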
Expanding needs for secure data
In addition to the need for greater flexibility and more parallel processing, both storage and networking have created a further requirement: data security. Security has long been an issue for the data center, but never more so than now, and end users are increasingly concerned about the safety of their information. In response, companies are encrypting more data and hardening their networks against threats.
As data volumes and wire speeds continue to increase, encrypting data without creating a performance bottleneck is a significant compute challenge. In addition, security appliances within the data center must contend with ever more numerous and sophisticated threats while analyzing more packets than ever before.
Enterprises will have to calculate the processing needs of their security appliances for filtering at different layers of the network stack, determining whether a low-power, highly integrated SoC can handle the packet flows or whether a high-end x86 or ARM processor is better suited for high-bandwidth data plane and complex control plane operations.
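To make the filtering work concrete, here is a toy sketch of the layer-3/4 classification such an appliance performs per packet. The rule table and the `classify` helper are hypothetical, invented for illustration:

```python
from ipaddress import ip_address, ip_network

# Hypothetical rule table: (source network, destination port, action),
# evaluated in order with a default-deny fallback.
RULES = [
    (ip_network("10.0.0.0/8"), 22, "drop"),    # block SSH from internal range
    (ip_network("0.0.0.0/0"), 443, "allow"),   # allow HTTPS from anywhere
]

def classify(src_ip: str, dst_port: int) -> str:
    """Return the action for a packet's source IP and destination port."""
    src = ip_address(src_ip)
    for net, port, action in RULES:
        if src in net and dst_port == port:
            return action
    return "drop"  # default-deny when no rule matches
```

Even this trivial match must run once per packet; at the multi-million-packets-per-second rates discussed earlier, the rule lookup itself becomes a workload worth sizing silicon for.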
Compute reigns supreme
The same compute revolution that has for decades been driving servers to become more efficient, cost-effective, and powerful is going to revolutionize storage, security, and networking. Handling all of that data in a power-efficient and secure manner requires the right silicon choices. Those choices may span multiple architectures to provide the best performance per dollar and per watt while delivering the performance and functionality that SDN, NFV, and software-defined storage demand.