Chipmaker Nvidia has quickly recognized that artificial intelligence is a growth market. The company's background in specialized high-speed graphics processors gives it an edge in supplying high-performance silicon for AI applications.

"Artificial intelligence is rapidly becoming a key application for supercomputing," said Nvidia VP Ian Buck. "NVIDIA's GPU computing platform merges AI with HPC, accelerating computation so that scientists and researchers can drive life-changing advances in such fields as healthcare, energy, and transportation."

Nvidia has announced that it will supply GPUs to power an ultra-fast AI supercomputer at the Tokyo Institute of Technology. The company's Tesla P100 GPUs, which are built on the Pascal architecture and run at three times the speed of their predecessors, will be central to the cluster-based system, code-named TSUBAME 3.0. The new machine, which succeeds TSUBAME 2.5, is expected to deliver twice its performance.

The institute will continue using TSUBAME 2.5, incorporating it into TSUBAME 3.0's 47-petaflop architecture for a combined processing throughput of 64.3 petaflops. That would place the combined system among the world's 10 fastest computing platforms and make it the highest-performing AI supercomputer in Japan.
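Reading those figures together (an inference from the numbers reported here, not an official breakdown), the gap between the combined throughput and TSUBAME 3.0's own capacity is the share attributable to the existing TSUBAME 2.5:

    64.3 petaflops (combined) - 47 petaflops (TSUBAME 3.0) = 17.3 petaflops (TSUBAME 2.5)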

Tokyo Institute of Technology expects TSUBAME 3.0 to be operational by summer. The institute will make the supercomputer available to its own staff researchers and to private-sector companies under contract.

According to Tokyo Tech's Satoshi Matsuoka, who is supervising construction of the new system, "NVIDIA's broad AI ecosystem, including thousands of deep learning and inference applications, will enable Tokyo Tech to begin training TSUBAME3.0 immediately to help us more quickly solve some of the world's once unsolvable problems."
