A considerable surge in data traffic is seen every single day, as many technologies have evolved and are producing more and more data.
Consequently, due to the increased data traffic, the market for data center workloads is anticipated to grow exponentially over the coming years.
As per one of the reports on verifiedmarketresearch.com, the Data Center Accelerator Market was valued at around $4 billion in 2020 and is projected to reach $122 billion by 2028, a CAGR of roughly 50%, which is foreseeable as the world is producing data at an exponentially increasing rate.
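As a quick sanity check on such projections, the growth rate implied by two valuations can be computed directly. The sketch below applies the standard CAGR formula to the report's figures; the helper function is illustrative, not from any library:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two valuations `years` apart."""
    return (end_value / start_value) ** (1 / years) - 1

# The report's figures: ~$4B in 2020 growing to $122B by 2028.
growth = cagr(4.0, 122.0, 2028 - 2020)
print(f"Implied CAGR: {growth:.1%}")  # about 53%, consistent with the quoted ~50%
```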
Also, according to assessments, a whopping 90% of all the world’s data has been created just over the most recent few years.
Moreover, the COVID-19 situation has sent many people to work from home, and as a result, data centers are overburdened.
The statistics in the figure below depict the various traffic changes during 2020 at multiple vantage points. In March 2020, the ISP and the three IXPs observed a traffic increase of about 15-20%.
Figure: Traffic changes from January 2020 until June 2020 at multiple vantage points
One Forbes article mentioned that the amount of data consumed has grown by almost 5,000%, from 1.2 trillion gigabytes to 59 trillion gigabytes, over the last decade. International Data Corporation (IDC) reports,
which measure the amount of data created and consumed in the world each year, predict that data growth will continue through 2024 at a five-year compound annual growth rate (CAGR) of 26%.
The rate is quite alarming, and it leaves us wondering what data will do in the coming decade.
The latest Cisco Visual Networking Index (2019) forecasts that global IP traffic “will grow at a CAGR of 26 percent from 2017 to 2022,” which would result in an annual IP traffic rate of 4.8 zettabytes per year (396 exabytes per month).
Also, with the advent of next-generation workloads such as Big Data, Machine Learning (ML), the Internet of Things (IoT), and Artificial Intelligence,
CPUs are seeing new data types, mixtures of file sizes, and new algorithms with varying processing requirements. You can take a closer look at data center evolution in one of my blog posts, “Focal Points of PCIe 5.0 for Data Center Performances”.
The increasing burden on data centers, mainly because of the work-from-home shift during COVID-19, has presented architects with a new challenge: processor speed vs. bandwidth.
You are likely aware of the famous Moore’s law, which holds that the processing power of silicon devices doubles every two years. Unfortunately, conventional processors have not kept developing at anything like that rate.
Many factors, including the breakdown of Dennard scaling and the leveling-off of von Neumann architecture progress, have conspired to slow the growth in performance.
However, network port speeds have been growing dramatically due to the increasing demand for internet services. Researchers estimate that server processors built on current silicon technology can no longer keep pace.
With port speeds rising exponentially, servers must be either upgraded or replaced to handle the growth.
The Way Out
Simply adding more servers is not a plausible or beneficial solution, as it only increases complexity and cost. Enterprises needed a way to slice and dice big data without adding servers.
Envision, for a moment, a server whose core hardware is configurable to help offload assignments from the CPU while providing high-speed transmission bandwidth.
This is very much possible and feasible by turning to accelerators to offload some of an application’s algorithms, either to perform the necessary calculations more rapidly or to accomplish more work with less power, easing the load on the data center’s electrical power and cooling.
One or both of these enhancements, performance and performance per watt, are essential for various applications.
New workloads targeted for acceleration include:
- Data storage and analytics
- Networking applications and cybersecurity
- Media transcoding
- Financial analysis
These workloads employ algorithms that can be sped up by specialized computational hardware, bringing about better data throughput and lower response latency. Hence, it is pretty clear that an accelerator can do the job just the way data centers are demanding.
FPGAs – A state-of-the-art contender in the acceleration arena
FPGAs, short for “Field Programmable Gate Arrays,” are superb contenders for the acceleration crown.
Although they have around 30 years of history in the electronics industry, their use as server accelerators in the data center is relatively new.
FPGAs have the capability to break the bottlenecks that hold back performance on analytical tasks, without breaking your power or cooling budgets.
FPGAs, which are integrated circuits like microprocessors, can be dynamically reprogrammed to match the exact computational requirements of a workload or algorithm.
This close match brings quicker computation and lower power and energy consumption.
When CPUs could not handle the compute complexities of many workloads, for a host of reasons,
FPGAs came forward to answer the “compute gap,” providing excellent performance and heterogeneous computing capabilities while maintaining low latency.
FPGAs allow you to build application-specific communication networks, with the ability to optimize both the compute and the data-movement aspects of an application.
FPGAs speed up the calculations at the core of many types of machine intelligence.
FPGAs’ re-programmability and adaptability to sparse data and variable-precision weights are critical tools for today’s and tomorrow’s artificial intelligence in both training and inference modes.
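To make the variable-precision point concrete, here is a minimal sketch of uniform fixed-point weight quantization, the kind of reduced-precision arithmetic an FPGA can implement with exactly as many bits as the model needs. The function names and example values are illustrative, not from any framework:

```python
def quantize(weights, bits):
    """Uniformly quantize weights to signed fixed-point codes of the given bit width."""
    levels = 2 ** (bits - 1) - 1               # e.g. 127 for 8-bit signed
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Map integer codes back to approximate floating-point values."""
    return [c * scale for c in codes]

weights = [0.91, -0.42, 0.07, -0.88]
q8, s8 = quantize(weights, bits=8)   # 8-bit codes: small rounding error
q4, s4 = quantize(weights, bits=4)   # 4-bit codes: larger error, far less hardware
err8 = max(abs(w - r) for w, r in zip(weights, dequantize(q8, s8)))
err4 = max(abs(w - r) for w, r in zip(weights, dequantize(q4, s4)))
```

On an FPGA, each multiplier can be sized to exactly the chosen bit width, so trading a little accuracy for 4-bit weights directly shrinks logic and power; a fixed-width CPU datapath cannot capture that saving.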
Machine-learning inference has a typical data flow that can be designed efficiently on an FPGA without incorporating a complicated switch-based architecture; the packets of the on-chip mesh network can be created to explicitly accommodate the problem at hand.
Analysts estimate that the FPGA market will grow at the highest CAGR of any technology competing for the acceleration crown, which is also evident from the rationale described above.
But let’s not rely on analysts’ opinions alone; some real-world moves support the statement.
Intel spent more than $16B a few years ago to acquire Altera, apparently because FPGAs would be a pretty serious deal in the data center.
Also, cloud service providers like Amazon, Tencent, Microsoft, Alibaba, and Baidu have adopted FPGAs as a reconfigurable heterogeneous processing resource.
Xilinx, for its part, positions its most recent FPGAs as a new class of device altogether: the “ACAP” (Adaptive Compute Acceleration Platform).
The FPGA value proposition is not just a one-liner about re-programmability and computability;
FPGAs also adapt to the datapath and memory architecture of complex workloads, with room to scale out without paying any significant penalties.
However, accessible and scalable deployment remains a considerable hurdle to making FPGAs the obvious and optimal choice.
A large part of the difficulty of programming FPGAs is the long compilation times, which keep deployment from being as accessible and scalable as it is for CPUs.
Researchers are working hard to overcome this hurdle, which could lead to a bloom in the FPGA market.
Logic Fruit Technologies, one of the contributing R&D houses in the FPGA market, offers innovative solutions to meet these workload demands and can support organizations of any size with end-to-end solutions.