Beijing Plink AI Technology Co., Ltd
Beijing Plink AI is an expert in one-stop, cloud-to-end solutions.
AI Data Center Tesla Tensor Core Nvidia GPU Server A100 80GB Video Graphic Card Computing


Brand Name : NVIDIA

Model Number : NVIDIA A100

Place of Origin : China

MOQ : 1 pc

Price : To be discussed

Payment Terms : L/C, D/A, D/P, T/T

Supply Ability : 20 pcs

Delivery Time : 15-30 work days

Packaging Details : 4.4” H x 7.9” L Single Slot

NAME : AI Data Center Tesla Tensor Core Nvidia GPU Server A100 80GB Video Graphic Card Computing

Model : NVIDIA A100

GPU Architecture : NVIDIA Ampere

Peak FP64 : 9.7 TF

Peak FP64 Tensor Core : 19.5 TF

Peak FP32 : 19.5 TF

Peak TF32 Tensor Core : 156 TF | 312 TF*

Peak BFLOAT16 Tensor Core : 312 TF | 624 TF*

Peak FP16 Tensor Core : 312 TF | 624 TF*

Peak INT8 Tensor Core : 624 TOPS | 1,248 TOPS*

Peak INT4 Tensor Core : 1,248 TOPS | 2,496 TOPS*

GPU memory : 40 GB

GPU memory bandwidth : 1,555 GB/s

Form Factor : PCIe


NVIDIA A100

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world’s toughest computing challenges.

Accelerating the Most Important Work of Our Time

As the engine of the NVIDIA data center platform, A100 can efficiently scale to thousands of GPUs or, with NVIDIA Multi-Instance GPU (MIG) technology, be partitioned into seven GPU instances to accelerate workloads of all sizes. Third-generation Tensor Cores accelerate every precision for diverse workloads, speeding time to insight and time to market.
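
To make the precision modes above concrete, here is a minimal sketch, assuming PyTorch built with CUDA on an Ampere-class GPU such as the A100 (the layer and batch sizes are arbitrary), that routes a matrix multiply through the Tensor Cores in TF32 and BF16:

```python
# Minimal sketch: exercising A100 Tensor Core precisions from PyTorch.
# Assumes PyTorch with CUDA and an Ampere-class GPU; sizes are arbitrary.
import torch

# On Ampere GPUs, FP32 matmuls can run as TF32 on the Tensor Cores;
# this flag makes that choice explicit.
torch.backends.cuda.matmul.allow_tf32 = True

model = torch.nn.Linear(4096, 4096).cuda()
x = torch.randn(64, 4096, device="cuda")

# autocast executes eligible ops in BF16 on the Tensor Cores while the
# parameters stay in FP32.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16
```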

The Most Powerful End-to-End AI and HPC Data Center Platform

A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC™. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale.

High-Performance Data Analytics

Data scientists need to be able to analyze, visualize, and turn massive datasets into insights. But scale-out solutions are often bogged down by datasets scattered across multiple servers.

Accelerated servers with A100 provide the needed compute power, along with massive memory, over 2 TB/sec of memory bandwidth, and scalability with NVIDIA® NVLink® and NVSwitch™, to tackle these workloads. Combined with InfiniBand, NVIDIA Magnum IO™, and the RAPIDS™ suite of open-source libraries, including the RAPIDS Accelerator for Apache Spark for GPU-accelerated data analytics, the NVIDIA data center platform accelerates these huge workloads at unprecedented levels of performance and efficiency.
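
As a sketch of how such a pipeline is wired up, the snippet below shows a PySpark session configured to load the RAPIDS Accelerator for Apache Spark; it assumes the accelerator jar is already on the Spark classpath, and the data path and resource settings are placeholders:

```python
# Minimal sketch: enabling the RAPIDS Accelerator for Apache Spark.
# Assumes pyspark, a CUDA-capable GPU, and the RAPIDS Accelerator jar on the classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-analytics")
    # Load the RAPIDS SQL plugin so supported operators run on the GPU.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    # One GPU per executor; adjust to the actual cluster layout.
    .config("spark.executor.resource.gpu.amount", "1")
    .getOrCreate()
)

# Ordinary DataFrame code; eligible scans, joins, and aggregations are
# executed on the GPU by the plugin. The input path is a placeholder.
df = spark.read.parquet("hdfs:///data/transactions")
df.groupBy("customer_id").sum("amount").show()
```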

On a big data analytics benchmark, A100 80GB delivered insights with 83X higher throughput than CPUs and a 2X increase over A100 40GB, making it ideally suited for emerging workloads with exploding dataset sizes.

NVIDIA A100 Technical Specifications

NVIDIA A100 for PCIe

GPU Architecture : NVIDIA Ampere
Peak FP64 : 9.7 TF
Peak FP64 Tensor Core : 19.5 TF
Peak FP32 : 19.5 TF
Peak TF32 Tensor Core : 156 TF | 312 TF*
Peak BFLOAT16 Tensor Core : 312 TF | 624 TF*
Peak FP16 Tensor Core : 312 TF | 624 TF*
Peak INT8 Tensor Core : 624 TOPS | 1,248 TOPS*
Peak INT4 Tensor Core : 1,248 TOPS | 2,496 TOPS*
GPU Memory : 40 GB
GPU Memory Bandwidth : 1,555 GB/s
Interconnect : PCIe Gen4, 64 GB/s
Multi-Instance GPU : Various instance sizes with up to 7 MIGs @ 5 GB
Form Factor : PCIe
Max TDP Power : 250 W
Delivered Performance of Top Apps : 90%

* With structural sparsity enabled
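
For reference, several of the figures in this table can be read back from a running card. Below is a minimal sketch using NVML through the nvidia-ml-py (pynvml) bindings, assuming an NVIDIA driver is installed, that reports the device name, total memory, and whether MIG mode is enabled:

```python
# Minimal sketch: querying an installed A100 with NVML (pynvml bindings).
# Assumes the nvidia-ml-py package and an NVIDIA driver are installed.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

name = pynvml.nvmlDeviceGetName(handle)           # e.g. an A100 PCIe board name
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)      # total/used/free in bytes
current_mig, pending_mig = pynvml.nvmlDeviceGetMigMode(handle)

print(name)
print(f"Total memory: {mem.total / 1024**3:.1f} GiB")
print(f"MIG mode enabled: {bool(current_mig)}")

pynvml.nvmlShutdown()
```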

NVIDIA A100

The flagship product of the NVIDIA data center platform for deep learning, HPC, and data analytics.

The platform accelerates over 700 HPC applications and every major deep learning framework. It’s available everywhere, from desktops to servers to cloud services, delivering both dramatic performance gains and cost-saving opportunities.


Product Tags: 80GB Nvidia GPU Server, A100 Nvidia GPU Server, A100 nvidia data center gpu