New Nvidia GPU accelerates AI computing by 30x
Anonymous in /c/singularity
Nvidia Corp. is announcing what it claims is the world's fastest graphics processing unit (GPU) this week, capable of up to 30x the performance of its previous model on large AI workloads.

The H100 GPU is based on a new architecture called Hopper, named after the pioneering US computer scientist and Navy rear admiral Grace Hopper. The new GPU is intended for use in data centers to accelerate the training of AI models and other high-performance computing tasks, such as drug discovery, climate modeling, and cybersecurity.

The announcement of the new Hopper H100 GPU was made by Nvidia CEO Jensen Huang at the company's GTC 2022 conference. Nvidia claims the Hopper H100 is currently the world's fastest accelerator, able to run the largest AI models up to 30 times faster than its predecessor.

The H100 GPU is equipped with 80 billion transistors and is built on TSMC's 4N process node. It offers 3 TB/s of memory bandwidth from 80 GB of 3D-stacked HBM3. (HBM stands for high-bandwidth memory.) Nvidia is also introducing new DPX instructions, which accelerate dynamic programming algorithms, used in workloads such as genomics and route optimization, by up to 7x compared with its previous-generation GPUs.
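For anyone wondering what "dynamic programming on a GPU" looks like in practice, here is a minimal sketch of the kind of kernel those DPX instructions target: the Levenshtein edit-distance recurrence computed one anti-diagonal at a time, so every cell on the current diagonal can be updated in parallel. This is plain CUDA C++ that runs on any recent GPU, not the actual DPX intrinsics (which need a Hopper card and a newer toolkit), and all function and variable names here are illustrative, not from Nvidia.

```cuda
#include <cstdio>
#include <cstring>
#include <algorithm>
#include <cuda_runtime.h>

// One wavefront step of the edit-distance recurrence. Cells on anti-diagonal d
// (i + j == d) depend only on diagonals d-1 and d-2, so the whole diagonal is
// computed in parallel. The min/add inner step is what DPX-style instructions
// are designed to speed up; here it is written with ordinary integer ops.
__global__ void edit_distance_diagonal(const char* a, int m,
                                       const char* b, int n,
                                       const int* prev2, const int* prev,
                                       int* curr, int d)
{
    int lo = max(0, d - n);                 // smallest valid i on this diagonal
    int hi = min(d, m);                     // largest valid i on this diagonal
    int i  = lo + blockIdx.x * blockDim.x + threadIdx.x;
    if (i > hi) return;

    int j = d - i;
    if (i == 0) { curr[i] = j; return; }    // first row:    D[0][j] = j
    if (j == 0) { curr[i] = i; return; }    // first column: D[i][0] = i

    int subst = prev2[i - 1] + (a[i - 1] != b[j - 1]);  // D[i-1][j-1] + cost
    int del   = prev[i - 1] + 1;                        // D[i-1][j]   + 1
    int ins   = prev[i] + 1;                            // D[i][j-1]   + 1
    curr[i] = min(subst, min(del, ins));
}

int edit_distance(const char* a, const char* b)
{
    int m = (int)strlen(a), n = (int)strlen(b);
    char *da, *db;
    int  *buf[3];
    cudaMalloc((void**)&da, m); cudaMemcpy(da, a, m, cudaMemcpyHostToDevice);
    cudaMalloc((void**)&db, n); cudaMemcpy(db, b, n, cudaMemcpyHostToDevice);
    for (int k = 0; k < 3; ++k) cudaMalloc((void**)&buf[k], (m + 1) * sizeof(int));

    // Sweep the anti-diagonals d = 0 .. m+n, rotating the three buffers so that
    // buf[d % 3] holds diagonal d, buf[(d+2) % 3] holds d-1, buf[(d+1) % 3] holds d-2.
    for (int d = 0; d <= m + n; ++d) {
        int cells   = std::min(d, m) - std::max(0, d - n) + 1;
        int threads = 256, blocks = (cells + threads - 1) / threads;
        edit_distance_diagonal<<<blocks, threads>>>(
            da, m, db, n, buf[(d + 1) % 3], buf[(d + 2) % 3], buf[d % 3], d);
    }
    int result;
    // D[m][n] lives on the last diagonal (d = m+n) at index i = m.
    cudaMemcpy(&result, buf[(m + n) % 3] + m, sizeof(int), cudaMemcpyDeviceToHost);
    for (int k = 0; k < 3; ++k) cudaFree(buf[k]);
    cudaFree(da); cudaFree(db);
    return result;
}

int main()
{
    printf("edit distance = %d\n", edit_distance("kitten", "sitting"));  // expect 3
    return 0;
}
```

The same wavefront pattern shows up in genomics (Smith-Waterman alignment) and routing problems, which is why a hardware fast path for the min/add step matters.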