Why are AI chips so hot? How has deep learning changed the processor field?

Released on: Jun 13, 2019

LONDON - In a sign of the times, half of the talks at this year's Hot Chips conference focus on artificial intelligence acceleration. In previous years, microprocessor designers devoted most of the discussion to CPUs for PCs and servers.

Startups Cerebras, Habana and UpMem will introduce new deep learning processors. Cerebras will present a device expected to use wafer-scale integration. Habana, which has already launched an inference chip, will demonstrate its follow-up training chip.

UpMem, based in Grenoble, will unveil a processing-in-memory device said to embed compute directly in DRAM for a variety of applications. Graphcore was also invited to participate, but is not yet ready to share more details of its chips.

These startups will compete with the likes of Intel, which is introducing Spring Hill and Spring Crest, its inference and training chips based on the Nervana architecture. In a rare appearance, Alibaba will also present an inference processor for embedded systems.

In addition, Huawei, MIPS, Nvidia and Xilinx will disclose new details about their existing deep learning chips. Members of the MLPerf group are expected to present their inference benchmarks for data centers and embedded systems, a follow-up to their earlier training benchmarks.

The organizers hope that Huawei's senior engineers can speak about the company's Ascend 310/910 AI chips. However, given that Huawei is caught in the crossfire of the US/China trade war, it is unclear whether its speakers will be able to obtain visas or will face other obstacles.

Nvidia dominates the AI training chip market with its V100. Given its comfortable lead, it chose not to introduce a new chip this year. Instead, it will present research on a multi-chip module for inference tasks that delivers 0.11 picojoules per operation across a range of 0.32 to 128 tera-operations per second.

In addition, the top three US cloud computing providers will give tutorials on their AI hardware. This is quite rare: they seldom speak on this topic even at the events they sponsor themselves, let alone at a venue shared with their competitors.

Google will detail its liquid-cooled third-generation TPU. Representatives of Microsoft Azure will discuss their next-generation FPGA hardware. Members of Amazon AWS will present their I/O and system acceleration hardware.

In addition, Facebook engineers will present Zion, the multi-processor training system unveiled at the Open Compute Project summit earlier this year. "Facebook and its Open Compute partners will increasingly set the standard for form factors and interconnects in data center servers," said the Hot Chips organizers.

"If Rip Van Winkle fell asleep in 1999 and woke up now, he would be shocked by all the news about machine learning and artificial intelligence. When he began his nap, it was little more than a research topic," said Nathan Brookwood, veteran microprocessor analyst at Insight64.

However, he added, "Rip would be comfortable with about half of the papers at this year's Hot Chips, because they are a fairly direct extension of past conferences. Rip already knows that Intel, AMD and IBM are still trying to squeeze more performance out of their architectures."

In fact, PCs and servers still get plenty of attention at the event. AMD will discuss Zen 2, its next-generation x86 core for client and server systems. IBM will showcase its next-generation server processor, understood to be Power10.

AMD CEO Lisa Su will deliver one of the two keynote speeches. The head of TSMC's research organization will deliver the other, offering insights into future process nodes.

The event will close with a series of other interesting talks. Tesla will provide detailed information about its recently disclosed self-driving chip. In a separate session, Intel will reveal more details about its Optane memory and its emerging packaging technologies.

In the same session, Hewlett Packard Enterprise will introduce the first Gen-Z chipset, an open interface for disaggregated memory and storage that is agnostic to the many emerging memory architectures. In addition, Ayar Labs will explain its TeraPHY high-speed optical interconnect technology.

Finally, AMD and Nvidia will wrap up the conference by discussing their latest GPUs for high-performance computing, a fairly new area for chips that once focused on gaming, Brookwood noted. The irony is that this closing slot is one Hot Chips traditionally reserved for server CPUs.