OpenAI receives the world’s most powerful AI GPU from Nvidia CEO

The world’s most powerful AI GPU will aid OpenAI’s efforts to achieve Artificial General Intelligence.


Nvidia's CEO delivering its latest AI GPU to OpenAI

Greg Brockman/X

OpenAI has become the first firm to receive Nvidia’s advanced AI processor DGX H200, hand-delivered by the firm’s CEO, Jensen Huang.

The H200, billed as the world’s most powerful GPU, will help OpenAI advance the development of GPT-5 and achieve its goal of artificial general intelligence (AGI).

OpenAI’s president and co-founder, Greg Brockman, took the opportunity to post a picture of the handover on social media, and its CEO, Sam Altman, was also present. Brockman said Huang made the gesture “to advance AI, computing, and humanity.”

The DGX H200 is the successor to the firm’s H100, an AI supercomputer optimized for large generative AI and other transformer-based workloads.

The development also signifies a collaboration between two major forces in the AI industry: Nvidia concentrates on hardware advancements, while OpenAI specializes in software development.

The next-gen AI processor

Nvidia’s unveiling of the DGX H200 marks a significant leap forward in high-performance computing. It boasts substantial improvements over its predecessor, the H100.

The platform, built on the Nvidia Hopper architecture, has an Nvidia H200 Tensor Core GPU with enhanced memory to manage large volumes of data for high-performance computing and generative AI tasks.

Nvidia’s HGX H200 GPU

The Nvidia H200 is the first GPU to feature HBM3e memory, which is both faster and larger than its predecessor’s. This enables advances in scientific computing for HPC workloads, generative AI, and large language models.

The standout upgrades over the H100 include a 1.4x increase in memory bandwidth and a 1.8x increase in memory capacity. These enhancements translate to a bandwidth of 4.8 terabytes per second and 141 GB of memory.
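As a quick sanity check on those figures, a back-of-envelope sketch follows. Note the H100 baseline numbers below are derived from the multipliers quoted above, not stated in Nvidia’s announcement:

```python
# H200 figures as cited in the article.
h200_bandwidth_tbs = 4.8   # memory bandwidth, terabytes per second
h200_memory_gb = 141       # HBM3e capacity, GB

# Implied H100 baselines, working backwards from the 1.4x / 1.8x multipliers.
implied_h100_bandwidth = h200_bandwidth_tbs / 1.4
implied_h100_memory = h200_memory_gb / 1.8

print(f"Implied H100 bandwidth: {implied_h100_bandwidth:.2f} TB/s")  # ~3.43 TB/s
print(f"Implied H100 memory:    {implied_h100_memory:.0f} GB")       # ~78 GB
```

These derived baselines land close to the H100’s published specs (roughly 3.35 TB/s and 80 GB), which suggests the quoted multipliers are consistent with the headline numbers.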

According to Nvidia, such enhancements are pivotal for tackling the demands of training larger and more intricate AI models, particularly those deployed in generative AI applications that generate diverse content types like text, images, and predictive analytics.

Furthering the development of GPT-5

An effective data center architecture is essential for training AI models with hundreds of billions of parameters. This includes maximizing throughput, reducing server downtime rates, and utilizing multi-GPU clusters for tasks requiring a lot of computation.

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at Nvidia. “With Nvidia H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”

These capabilities will advance OpenAI’s efforts to train GPT-5, which some have touted as a step toward AGI. For reference, GPT-4 was reportedly trained on around 25,000 Nvidia A100 GPUs for about 100 days.
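For a rough sense of the scale those figures imply, here is a back-of-envelope calculation using only the numbers cited above (~25,000 A100s for ~100 days):

```python
# Rough GPU-hours estimate for GPT-4 training, from the figures cited above.
gpus = 25_000          # reported A100 count
days = 100             # reported training duration
hours_per_day = 24

gpu_hours = gpus * days * hours_per_day
print(f"{gpu_hours:,} GPU-hours")  # 60,000,000 GPU-hours
```

That order of magnitude, tens of millions of GPU-hours, is why faster memory and higher bandwidth per GPU matter so much for the next generation of models.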

GPT-5 is anticipated to be a multimodal model with textual and visual inputs and outputs. AGI is expected to merge the functionalities of natural language processing, image recognition, data from sensors such as cameras and microphones, pattern recognition, predictive analysis, abstraction, and logical reasoning.

The resultant AI system would be able to execute tasks akin to humans. Present-day AI tools like Siri, Alexa, Watson, and ChatGPT are classified as Artificial Narrow Intelligences (ANIs).

Although GPT-5’s release date is unknown, it’s safe to assume it’s in the works. (OpenAI worked on GPT-4 for at least two years before its formal release.) As with GPT-3.5, it’s also feasible that OpenAI will release an interim GPT-4.5 ahead of GPT-5.


OpenAI has not yet made an official announcement regarding the release schedule, but it is safe to say that the new Nvidia GPUs are set to accelerate the timeline.


ABOUT THE EDITOR

Jijo Malayil
Jijo is an automotive and business journalist based in India. Armed with a BA in History (Honors) from St. Stephen's College, Delhi University, and a PG diploma in Journalism from the Indian Institute of Mass Communication, Delhi, he has worked for news agencies, national newspapers, and automotive magazines. In his spare time, he likes to go off-roading, engage in political discourse, travel, and teach languages.