Nvidia’s Flagship AI Chip Is Reportedly 4.5X Faster Than the Previous Champion

Nvidia claims its upcoming “Hopper” GPU set new benchmark records in its MLPerf debut.

Nvidia said yesterday that its upcoming H100 “Hopper” Tensor Core GPU set new performance records in its debut on the industry-standard MLPerf benchmarks, with results up to 4.5 times faster than the A100, Nvidia’s current fastest production AI chip.

The MLPerf benchmarks (formally known as “MLPerf™ Inference 2.1”) evaluate “inference” workloads, which measure how effectively a chip applies a trained machine learning model to new data. The MLPerf benchmarks were created in 2018 by MLCommons, a consortium of businesses, to provide a uniform gauge for communicating machine learning performance to prospective clients.
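To make the training/inference distinction concrete, here is a minimal sketch in plain Python (this is purely illustrative, not MLPerf’s actual test harness, and the model and weights are hypothetical): inference means taking parameters that were fixed during an earlier training phase and applying them to new inputs, with no further weight updates.

```python
# Minimal sketch of an "inference" workload: apply an already-trained
# model to unseen data. Illustrative only -- not the MLPerf harness.

# Parameters assumed fixed by a (hypothetical) prior training run.
WEIGHTS = [0.8, -0.3]
BIAS = 0.5

def predict(features):
    """One forward pass of a tiny linear classifier: no learning,
    just applying the trained parameters to a new input."""
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1 if score > 0 else 0  # e.g. a binary label

# New, unseen inputs. Inference benchmarks measure how fast hardware
# can process a stream of requests like these at a target accuracy.
new_samples = [[1.0, 2.0], [0.1, 3.0]]
print([predict(s) for s in new_samples])  # -> [1, 0]
```

MLPerf does the same thing at scale, with full models such as BERT, and reports metrics like throughput (samples per second) and latency.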

The H100 performed exceptionally well on the BERT-Large benchmark, which evaluates systems’ abilities to interpret natural language data using Google’s BERT model.

Nvidia attributes this success specifically to the Transformer Engine in the Hopper architecture, which accelerates the training of transformer models. This suggests the H100 could speed up the development of natural language models like OpenAI’s GPT-3, which can write in various styles and carry on natural-sounding conversations.

Nvidia markets the H100 as a high-end data center GPU for artificial intelligence (AI) and supercomputer applications, including image recognition, large language models, image synthesis, and more.

Analysts expect it to succeed the A100 as Nvidia’s primary data center GPU, but it is still in development. Nvidia’s ability to deliver the H100 by the end of 2022 has been questioned because some of its development is taking place in China, where the US government imposed export restrictions on the chips last week.

Last week, Nvidia clarified in a second filing with the Securities and Exchange Commission that the US government will allow the H100 to be developed in China. Nvidia has promised that the H100 will be released “later this year.” If the A100’s success is any indicator, the H100 might drive a wide range of innovative AI applications in the years to come.