BIOS IT Blog
NVIDIA® Announces New Tesla® V100 32GB Configuration
A 2X memory boost for V100, the world's most advanced data center GPU
At this year's NVIDIA GTC event, CEO Jensen Huang introduced the new Tesla V100 32GB configuration during his keynote speech. We highlight the benefits of the revolutionary Volta GPU architecture.
Tesla® V100 is the most advanced data center GPU ever built to accelerate AI, HPC, and graphics. It's powered by the NVIDIA Volta architecture, available in 16GB and 32GB configurations, and offers the performance of 100 CPUs in a single GPU. This gives data scientists, researchers, and engineers the power to tackle challenges that were once thought impossible. The new configuration provides 2X the memory and improves deep learning training performance for next-gen AI models by up to 50%. Supporting larger data models also improves developer productivity, allowing AI developers to deliver more AI breakthroughs in less time.
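To see why doubling memory helps training, consider a back-of-the-envelope sizing exercise: the batch size you can fit is roughly the memory left over after the model's weights, gradients, and optimizer state are resident, divided by the per-sample activation footprint. The sketch below is purely illustrative; the 4GB model state and 100MB-per-sample figures are our assumptions, not NVIDIA-published numbers.

```python
# Hypothetical sizing sketch: how many training samples fit in GPU
# memory alongside the model. All figures are illustrative assumptions.

def max_batch_size(gpu_mem_gb, model_gb, per_sample_mb):
    """Samples that fit after reserving room for weights, gradients,
    and optimizer state (all folded into model_gb)."""
    free_mb = (gpu_mem_gb - model_gb) * 1024
    return int(free_mb // per_sample_mb)

# Assumed 4 GB of model state and 100 MB of activations per sample:
v100_16gb = max_batch_size(16, 4, 100)  # existing 16GB V100
v100_32gb = max_batch_size(32, 4, 100)  # new 32GB configuration

print(v100_16gb, v100_32gb)  # → 122 286
```

Because the model's fixed footprint is paid only once, doubling card memory can more than double the usable batch size, which is one reason larger memory translates into faster time-to-train.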
3 Reasons to Deploy V100 in Your Data Center:
Reason 1: Be Prepared for the AI Revolution
NVIDIA Tesla V100 is the computational engine driving the AI revolution and enabling HPC breakthroughs. For example, researchers at the University of Florida and the University of North Carolina leveraged GPU deep learning to develop ANAKIN-ME (ANI), which reproduces molecular energy surfaces at extremely high (DFT) accuracy at 1-10 millionths of the cost of current computational methods.
Reason 2: Top Applications are GPU-Accelerated
Over 550 HPC applications are already GPU-optimized in a wide range of areas including quantum chemistry, molecular dynamics, climate and weather, and more. In fact, an independent study by Intersect360 Research shows that 70% of the most popular HPC applications, including all of the top 10, have built-in support for GPUs.
Reason 3: Boost Data Center Productivity & Throughput
A single server node with V100 GPUs can replace over 60 CPU nodes. For example, for SPECFEM3D, a single node with four V100s will do the work of 53 dual-socket CPU nodes, while for NAMD a single V100 node can replace 13 CPU nodes. With lower networking, power, and rack space overheads, accelerated nodes provide higher application throughput at substantially reduced costs.
AI is Transforming HPC
Data Center GPUs from NVIDIA
NVIDIA TESLA V100 FOR NVLINK
Ultimate performance for deep learning.
Up to 3X faster time-to-solution over P100
Key Features:
- 125 TeraFLOPS of tensor operations for deep learning
- 15.7 TeraFLOPS of single-precision performance
- 7.8 TeraFLOPS of half-precision performance
- 300 GB/s NVIDIA NVLink Interconnect
- 900 GB/s memory bandwidth
- 16GB of HBM2 memory
NVIDIA TESLA V100 FOR PCIe
Highest versatility for all workloads.
Up to 4X higher throughput for mixed workloads
Key Features:
- 112 TeraFLOPS of tensor operations for deep learning
- 14 TeraFLOPS of single-precision performance
- 7 TeraFLOPS of half-precision performance
- 900 GB/s memory bandwidth
- 16GB of HBM2 memory
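One quick way to reason about these spec sheets is the roofline "balance point": peak FLOPS divided by memory bandwidth tells you how much arithmetic a kernel must do per byte moved before it becomes compute-bound rather than bandwidth-bound. The sketch below uses the published figures above; the roofline framing itself is our illustrative addition, not part of NVIDIA's datasheet.

```python
# Roofline balance point: the arithmetic intensity (FLOPs per byte) at
# which a kernel shifts from bandwidth-bound to compute-bound.
# Peak FLOPS and bandwidth figures are taken from the lists above.

def balance_point(peak_tflops, bandwidth_gbs):
    """FLOPs a kernel must perform per byte moved to reach peak compute."""
    return peak_tflops * 1e12 / (bandwidth_gbs * 1e9)

pcie_fp32 = balance_point(14, 900)      # V100 PCIe, single precision
nvlink_fp32 = balance_point(15.7, 900)  # V100 NVLink, single precision

print(round(pcie_fp32, 1), round(nvlink_fp32, 1))  # → 15.6 17.4
```

Kernels with lower arithmetic intensity than these values (roughly 15-17 FLOPs per byte in single precision) will be limited by the 900 GB/s HBM2 bandwidth, which is why that number matters as much as the headline TeraFLOPS for many HPC codes.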
Contact BIOS IT for pricing and availability and to schedule a test drive for your workloads.
Ask about our broad range of on-prem or cloud solutions with support for Tesla V100 GPUs.