NVIDIA’s GPU Technology Conference (GTC) was, as expected, a showcase of new developments, as well as an opportunity for engineers and developers to learn, sharpen their skills, and discuss new ideas.
Hearing about the new developments and the accelerating expansion of AI into virtually every aspect of modern society gave attendees a better sense of just how much AI is changing, and will continue to change, their work and our world.
AI is driving new computing platforms built on advanced supercomputers with 10x the power of the most powerful machine available today, at a fraction of the cost. Consider that again: 10x the power at 10% of the cost of just a year or two ago. Autonomous transportation also seems to be accelerating again after a few years’ hiatus, with AI and new hardware development playing a huge part.
For more details and demonstrations, I suggest watching the keynote with NVIDIA CEO Jensen Huang.
Some of the most interesting announcements in the keynote focused on gaming GPUs, with demonstrations showing their capabilities, and there was much more. If you accept all that was announced, you can expect to see amazing advances in new computing platforms very soon. Beyond the supercomputer and truly available autonomous transportation, expect other great computing leaps forward: devices straight out of ’90s science fiction, such as Jarvis, a fully functional universal translator that handles five languages. Huang promoted Jarvis as a GPU-accelerated deep learning AI platform for speech recognition and generation, language understanding, and translation. “Jarvis interacts in about 100 milliseconds,” he said.
The conference also unveiled a new product for high-performance computing (HPC) clients: NVIDIA’s first-ever data center CPU, named “Grace” after the pioneering computer scientist Grace Hopper. “We are thrilled to announce the Swiss National Supercomputing Center will build a supercomputer powered by Grace and our next-generation GPU,” Huang said. Based on Arm architecture and focused on complex artificial intelligence and HPC workloads, Grace, NVIDIA states, delivers 10x better performance than the fastest servers on the market today. NVIDIA’s first data center CPU, however, is not intended to compete directly against Intel’s Xeon lineup or AMD’s EPYC processors; the company made a point that it continues to provide full support for all CPUs, including x86 and Arm architectures.
For now, Grace is designed specifically to be “tightly coupled” with NVIDIA’s GPUs to remove bottlenecks for complex, giant-model AI and HPC applications, compared with today’s high-end NVIDIA DGX-based systems, which run on Intel CPUs. Grace is built on a 5-nanometer manufacturing process, which I am sure will grab Intel’s attention. NVIDIA plans to make Grace available within two years.
Other announcements included:
- The BlueField-3 data processing unit, a powerful new data center infrastructure processor with 16 Arm Cortex-A78 cores and 22 billion transistors, enabling network processing and storage at 400 Gbps. Also discussed was the NVIDIA Triton Inference Server 2.9, which maximizes performance and simplifies production deployment at scale.
- TensorRT 8.0, the latest version of NVIDIA’s high-performance deep learning inference SDK. TensorRT includes a deep learning inference optimizer and runtime that deliver low latency and high throughput for deep learning applications. At its core is a C++ library that facilitates high-performance inference on NVIDIA GPUs. It is designed to complement training frameworks such as TensorFlow, Caffe, PyTorch, and MXNet, focusing specifically on running an existing trained network quickly and efficiently on a GPU to generate a result.
Huang spent the better part of an hour describing his vision of a near-term future filled with autonomous machines, super-powerful AI, fully computer-controlled robotic factories, and unlimited virtual worlds, spanning silicon to supercomputers to AI software in a single presentation. Grace and BlueField are key parts of a data center roadmap consisting of three chips: CPU, GPU, and DPU.
Huang said, “Each chip architecture has a two-year rhythm with likely a kicker in between. One year will focus on x86 platforms, the next on Arm platforms. Every year will see new exciting products from us. Three chips, yearly leaps, one architecture.”
Over the 25 years that I have covered NVIDIA, the company has grown from a modest computer GPU supplier into a multi-technology giant. It is still considered the leading GPU supplier globally, but it is now so much more, and this keynote did not hesitate to declare that the company feels the best is still to come.