Top 10 AI Hardware Companies

25 August 2022 | Basics

Training machine learning (ML) algorithms, especially artificial neural networks such as those used by large language models, is pushing current hardware ever closer to its limits. Specialised, powerful hardware is therefore essential for the widespread implementation and use of AI. Dedicated AI chips are becoming increasingly popular because they offer more efficient ways to parallelise computations. Start-ups as well as various tech giants are already developing state-of-the-art AI hardware, further driving the research and use of ML and especially deep learning. In this article, we present the top 10 AI hardware companies that are currently designing new AI chips with innovative approaches, laying the groundwork for the ML of the future.

10th place - Groq

Founded by former Google employees, the start-up Groq develops AI and high-performance processors for ML applications that require low latency. Built on a single-chip architecture designed from the ground up, Groq's processors offer low latency and near-linear scaling for real-time ML applications. Instead of trying to outperform the competition's architectures, Groq began by developing the software compiler. This has a decisive advantage: the chip's infrastructure can be used much more efficiently.

9th place - Hailo

Hailo's deep learning hardware technology is designed to bring the benefits of AI to a small scale. The Hailo-8 processor offers deep learning functionality for robots, autonomous vehicles and other edge devices - fast and with low energy requirements. The compiler developed by Hailo distributes the different layers of an artificial neural network across different areas of the chip, ensuring faster and more efficient data processing.

8th place - Graphcore

Graphcore takes a different approach: as Europe's most highly valued AI chip manufacturer, the start-up has specialised in the development of IPUs (Intelligence Processing Units). These are designed to significantly accelerate ML training through an architecture optimised for parallel processing. Instead of separating memory and compute capacity, the chips use a MIMD (multiple instruction, multiple data) architecture. This makes it possible to store small packets of data close to the respective core and to process them in parallel. Large data sets can thus be processed faster, reducing the time required to train an ML model. Compared with GPUs in a similar price range and with comparable power consumption, the training time of an ML model is significantly shorter.

7th place - Cerebras Systems

The company Cerebras Systems specialises in the development of particularly large deep learning processors. With a size of 462.25 cm², 850,000 processor cores and 40 GB of on-chip memory, Cerebras has developed the largest AI processor in the world - the Wafer-Scale Engine 2. The iPad-sized processor is designed for linear algebra computations, the basis of deep learning and neural networks. Available as a standalone system with cooling, power supply and various interfaces, the processor packs the power of an entire server room into the form factor of a mini fridge.
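Why linear algebra? Almost every layer of a neural network reduces to matrix operations, which is exactly the workload such chips parallelise. A minimal NumPy sketch of a fully connected layer (all shapes and names are illustrative):

```python
import numpy as np

# A fully connected layer is one matrix multiplication plus a bias,
# followed by a non-linearity - the core workload of AI accelerators.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 784))   # batch of 32 flattened inputs
W = rng.standard_normal((784, 256))  # weight matrix
b = np.zeros(256)                    # bias vector

h = np.maximum(x @ W + b, 0.0)       # ReLU(xW + b)
print(h.shape)                       # (32, 256)
```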

6th place - IBM

As one of the first AI hardware developers, IBM introduced its neuromorphic chip TrueNorth in 2014. The chip contains 5.4 billion transistors, one million neurons and 256 million synapses, and is capable of deep network inference and high-quality data interpretation. More recently, IBM launched the Telum processor, an AI chip developed for specific use cases. With this new chip, IBM is primarily targeting manufacturers as well as server and data centre operators.

5th place - Google

Tech giant Google is known for its cloud services and infrastructure as well as its eponymous search engine. However, Google has also developed AI hardware: with the Tensor Processing Unit (TPU), Google Cloud offers an ASIC (application-specific integrated circuit) AI accelerator developed for deep neural networks and "conventional" ML. The chip is mainly used by Google Cloud customers around the world to accelerate ML computations. The company has also developed the Edge TPU, a much smaller version of the TPU. It is available as an external AI accelerator for PCs as well as a solderable chip for ML boards and edge hardware.
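In practice, TPUs are addressed through frameworks such as TensorFlow or JAX rather than programmed directly. A minimal sketch with JAX on a Cloud TPU VM (assuming a TPU runtime is available; device counts vary by TPU type):

```python
import jax
import jax.numpy as jnp

# List the accelerators JAX can see; on a Cloud TPU VM this
# typically reports several TpuDevice entries.
print(jax.devices())

# XLA compiles this matrix multiplication for the TPU's matrix
# units when a TPU backend is present (otherwise CPU/GPU).
@jax.jit
def matmul(a, b):
    return jnp.dot(a, b)

a = jnp.ones((1024, 1024))
b = jnp.ones((1024, 1024))
print(matmul(a, b).block_until_ready().shape)
```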

4th place - SambaNova Systems

SambaNova Systems takes a completely different approach from the other candidates on this list: the start-up leases complete data centres - equipped with hardware developed by SambaNova - and thus offers both the infrastructure and a platform for developing ML projects. The unicorn start-up's software provides ML, NLP and computer vision services based on domain-specific data supplied by the customer. While other hardware developers usually specialise only in hardware and compilers, SambaNova delivers the complete infrastructure to make AI more accessible to businesses.

3rd place - AMD

Because graphics processors are currently the most popular and economical option for ML, the GPUs AMD develops for big data and AI play a major role. The American company, which mainly develops CPUs and GPUs for servers, data centres and home use, has also released HPC GPUs, suitable for building GPU clusters for machine learning. These AI accelerators are deployed in data centres all over the world and help accelerate big data processing and large-scale computation.
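On the software side, AMD GPUs are programmed through the ROCm stack; notably, ROCm builds of PyTorch reuse the torch.cuda namespace, so code written for NVIDIA hardware usually runs unchanged. A minimal sketch (assuming a ROCm-enabled PyTorch installation):

```python
import torch

# On a ROCm build of PyTorch, torch.cuda reports AMD GPUs, so the
# same "cuda" device string works on both NVIDIA and AMD hardware.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(4096, 4096, device=device)
y = torch.randn(4096, 4096, device=device)
z = x @ y  # matrix multiplication on the GPU when available
print(z.device)
```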

2nd place - Intel

Intel is one of the largest computer hardware manufacturers and has a long history of technology development. In 2017, the company broke the $1 billion revenue barrier for AI chips. Intel's Xeon processors, which are suitable for a wide range of tasks including data centre computing, have had a major impact on this commercial success. Gaudi is Intel's new training accelerator for neural networks, offering near-linear scalability with ever larger models and a relatively low total cost of ownership. For inference, Intel has developed Goya, which is optimised for throughput and latency. The Intel® Neural Compute Stick 2 (NCS2) is Intel's latest AI chip and was developed especially for deep learning. As you can see, Intel has some serious chips in its arsenal. However, the company is taking a different approach from the GPU developers, focusing mainly on CPU-supported ML.
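Devices such as the NCS2 are typically addressed through Intel's OpenVINO toolkit, where the stick appears as the "MYRIAD" device. A hedged sketch (the model path is an illustrative placeholder, and API details differ between OpenVINO releases):

```python
from openvino.runtime import Core  # OpenVINO 2022.x API

core = Core()
print(core.available_devices)  # an attached NCS2 shows up as "MYRIAD"

# Load a network in OpenVINO's IR format and compile it for the stick;
# "model.xml" is an illustrative placeholder path.
model = core.read_model("model.xml")
compiled = core.compile_model(model, "MYRIAD")
```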

1st place - NVIDIA

NVIDIA has been developing high-quality GPUs for gaming for a long time, and personal computers, consoles, data centres and servers around the world use NVIDIA GPUs for all kinds of computations. NVIDIA's chips remain the first choice for data centres that perform parallel computations such as machine learning and deep learning. NVIDIA offers new AI chips for HPC (high-performance computing), data centres, edge devices such as autonomous vehicles, mobile phones and more. The "Full Stack Platform for Accelerated Computing" makes it possible, with the help of various SDKs (software development kits), to combine software and hardware optimally and to develop individual applications for AI use cases in any application area. With CUDA, a proven platform and programming interface for parallel computing, hardware resources can be used to their full potential.
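CUDA's programming model - many lightweight threads, each handling one element of a problem - can be sketched in a few lines, here via Python's Numba bindings rather than native CUDA C++ (array sizes are illustrative; an NVIDIA GPU and the CUDA toolkit are assumed):

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)      # global index of this GPU thread
    if i < out.size:      # guard against threads past the array end
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

# Launch enough 256-thread blocks to cover all n elements; Numba
# copies the NumPy arrays to the GPU and back automatically.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)
print(np.allclose(out, a + b))  # True
```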

Author

Lukas Lux

Lukas Lux is a working student in the Customer & Strategy department at Alexander Thamm GmbH. In addition to his studies in Sales Engineering & Product Management with a focus on IT Engineering, he is concerned with the latest trends and technologies in the field of Data & AI and compiles them for you in cooperation with our [at]experts.
