BIOS IT Blog
GTC Europe 2018 Announcements
The GPU Technology Conference (GTC) Europe 2018 keynote kicked off by showing the many ways AI is changing our lives. Here are the highlights from the keynote.
RAPIDS
RAPIDS, launched today at GTC Europe, gives data scientists for the first time a robust platform for GPU-accelerated data science: analytics, machine learning and, soon, data visualization. What's more, the libraries are open source, built with the support of open-source contributors, and available immediately at www.RAPIDS.ai.
Initial benchmarks show game-changing 50x speedups with RAPIDS running on the NVIDIA DGX-2 AI supercomputer compared with CPU-only systems, reducing experiment iteration time from hours to minutes.
With a suite of CUDA-integrated software tools, RAPIDS gives developers new plumbing under the foundations of their data science workflows.
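To give a flavour of what working with the RAPIDS libraries looks like, here is a minimal sketch using the cuDF dataframe library; the column names and data are our own illustrative placeholders, and it assumes a CUDA-capable GPU with the RAPIDS packages installed.

```python
import cudf

# Build a small GPU-resident dataframe (illustrative data)
df = cudf.DataFrame({
    "store": ["A", "B", "A", "B", "A"],
    "sales": [120.0, 95.5, 87.0, 140.2, 60.3],
})

# Familiar pandas-style operations, executed on the GPU
totals = df.groupby("store")["sales"].sum()
print(totals)
```

The point of the design is that the API mirrors pandas, so existing data science workflows can move to the GPU with minimal code changes.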
Contact us for more information
Real-Time Ray Tracing with Quadro GV100
Ray tracing calculates the color of pixels by tracing the path that light would take if it were to travel from the eye of the viewer through the virtual 3D scene. It can show light striking a surface, bouncing off that surface, and then striking additional surfaces. Recreating this means following billions of rays, which normally requires a supercomputer, and the more reflections and refractions in the scene, the harder the problem becomes.
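To make the idea concrete, the toy sketch below traces a single primary ray against one sphere, the basic intersection test that a ray tracer repeats billions of times per scene. This is our own illustration of the principle, not NVIDIA's RTX implementation, and all names in it are ours.

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    the quadratic at the heart of every ray tracer.
    """
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is unit-length, so a == 1
    if disc < 0:
        return None                 # the ray misses the sphere entirely
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

# One primary ray from the eye of the viewer into the scene
eye = np.array([0.0, 0.0, 0.0])
ray = np.array([0.0, 0.0, 1.0])     # already normalized
hit = intersect_sphere(eye, ray, np.array([0.0, 0.0, 5.0]), 1.0)
print(hit)  # 4.0 -> the ray strikes the sphere 4 units away
```

Scale this test up to millions of pixels, with each hit spawning further reflection and refraction rays, and the computational cost of real-time ray tracing becomes clear.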
During the keynote, NVIDIA CEO Jensen Huang demonstrated real-time ray tracing using NVIDIA RTX technology running on Quadro GV100 GPUs. The complete demo runs on just one DGX Station: not a supercomputer taking hours per frame, but a single $68K DGX Station with four Voltas, rendering in real time.
It's a big deal, he said, because we can now bring real-time ray tracing to market. The technology has been encapsulated into multiple layers, and you're also seeing deep learning in action: without it we couldn't trace all the rays, so the network predicts them instead.
The Quadro GV100 is the world's first workstation GPU based on the Volta architecture. It also has a new interconnect, NVLink 2, that extends the programming and memory model from one GPU to a second, so the two essentially function as one GPU. Combined, the pair delivers 10,000 CUDA cores, 236 teraflops of Tensor Core performance and 64GB of memory, all used to revolutionize modern computer graphics.
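Those combined figures are simply the per-card numbers doubled. A quick sanity check, using the published single-GV100 specifications (the per-card numbers below are our addition, not from the keynote):

```python
# Published per-card Quadro GV100 specs (our assumption for this check)
cuda_cores_per_gpu = 5120
tensor_tflops_per_gpu = 118.5    # Tensor Core peak, TFLOPS
memory_gb_per_gpu = 32           # HBM2

gpus = 2  # two GV100s bridged with NVLink 2
print(gpus * cuda_cores_per_gpu)     # 10240 CUDA cores (~"10,000")
print(gpus * tensor_tflops_per_gpu)  # 237 TFLOPS (the keynote's "236 teraflops")
print(gpus * memory_gb_per_gpu)      # 64 GB of combined memory
```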
Contact us for more information
NVIDIA Drive AGX
NVIDIA DRIVE AGX is a scalable, open autonomous vehicle computing platform that serves as the brain for autonomous vehicles. The only hardware platform of its kind, NVIDIA DRIVE AGX delivers high-performance, energy-efficient computing for functionally safe AI-powered self-driving.
“Safety is the single most important thing. It’s the hardest computing problem. With the fatal accident, we’re reminded that this work is vitally important. We need to solve this problem step by step by step because so much is at stake. We have the opportunity to save so many lives if we do it right,” said Huang.
NVIDIA DRIVE AGX incorporates the NVIDIA Xavier system-on-a-chip, the world’s first processor built for autonomous driving. Architected for safety, the Xavier SoC incorporates six different types of processors for redundant and diverse algorithms.
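The safety idea behind "redundant and diverse algorithms" is that several independent implementations compute the same quantity and a voter reconciles their outputs, so a fault in any one of them cannot silently corrupt the result. The sketch below is purely our illustration of that general pattern, not Xavier's actual safety logic.

```python
from statistics import median

def diverse_redundant_estimate(estimators, sensor_input):
    """Run several independent implementations of the same task and
    vote on the result, the basic pattern behind redundant, diverse
    safety architectures."""
    results = [estimate(sensor_input) for estimate in estimators]
    return median(results)  # median voting tolerates one bad estimator

# Three deliberately different ways to estimate the same quantity
estimators = [
    lambda x: sum(x) / len(x),          # algorithm 1: mean
    lambda x: sorted(x)[len(x) // 2],   # algorithm 2: median
    lambda x: (min(x) + max(x)) / 2,    # algorithm 3: mid-range
]
print(diverse_redundant_estimate(estimators, [9.8, 10.1, 10.0, 10.2]))
```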
Contact us for more information
TensorRT-4
NVIDIA TensorRT™ is a platform for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that deliver low latency and high throughput for deep learning inference applications.
TensorRT-4 can handle recurrent neural networks and is deeply integrated with the TensorFlow deep learning framework, with optimization across the wider software stack: TensorFlow, Kaldi, ONNX and WinML.
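In practice, the TensorFlow integration meant a trained graph could be handed to TensorRT for optimization with a couple of calls. Here is a minimal sketch against the TensorFlow 1.x contrib API of that era; the graph file, output node name and sizes are placeholders of ours.

```python
import tensorflow as tf
from tensorflow.contrib import tensorrt as trt

# Load a frozen TensorFlow graph (path and node names are placeholders)
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Ask TensorRT to optimize the supported subgraphs for inference
trt_graph = trt.create_inference_graph(
    input_graph_def=graph_def,
    outputs=["logits"],                # output node name(s)
    max_batch_size=8,
    max_workspace_size_bytes=1 << 30,  # 1 GB of TensorRT workspace
    precision_mode="FP16",             # use Tensor Cores where possible
)
```

The returned graph runs through the normal TensorFlow session API, with the TensorRT-optimized subgraphs executing in place of the originals.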
Images are accelerated 190x, natural language processing (NLP) 50x, recommender engines 45x, speech 36x and speech recognition 60x. In aggregate, TensorRT-4 can speed up hyperscale datacenters by 100x.
Contact us for more information