Nvidia is raising its game in data centers, extending its reach across different types of AI workloads with the Tesla T4 GPU. Based on the company's new Turing architecture, the T4 and its accompanying software are designed to accelerate applications for images, speech, translation, and recommendation systems.
The T4, a small-form-factor accelerator card, is the essential component in Nvidia's new TensorRT Hyperscale Inference Platform, and is expected to ship in data-center systems from major server makers in the fourth quarter.
The T4 features Turing Tensor Cores, which support different levels of compute precision for different AI applications, and the platform works with the major software frameworks for deep learning (machine learning involving multi-layered neural networks), including TensorFlow, PyTorch, MXNet, Chainer, and Caffe2.
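To see why multiple precision levels matter for inference, here is a minimal, illustrative sketch of symmetric 8-bit quantization, the kind of reduced-precision arithmetic inference accelerators exploit for speed. This is plain Python, not Nvidia's TensorRT API; the function names and values are hypothetical.

```python
# Illustrative sketch only -- not Nvidia's API. Shows how float weights
# can be mapped to int8, computed with integer arithmetic, and rescaled,
# trading a small accuracy loss for much cheaper math.

def quantize_int8(values):
    """Map floats to int8 with a symmetric per-tensor scale."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-128, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dot_int8(qa, qb, scale_a, scale_b):
    """Integer dot product, rescaled back to floating point."""
    acc = sum(x * y for x, y in zip(qa, qb))  # accumulate in wide integers
    return acc * scale_a * scale_b

# Hypothetical weights and activations for one neuron
weights = [0.5, -1.2, 0.8, 0.1]
inputs  = [1.0,  0.3, -0.7, 2.0]

qw, sw = quantize_int8(weights)
qx, sx = quantize_int8(inputs)

exact  = sum(w * x for w, x in zip(weights, inputs))  # full-precision result
approx = dot_int8(qw, qx, sw, sx)                     # int8 approximation
print(exact, approx)
```

The int8 result lands close to the full-precision one, which is why inference (unlike training) can often run at 8-bit or even 4-bit precision with little accuracy loss.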
About Network World
Network World provides deep domain expertise on the modern enterprise data center, including the latest networking, storage, servers, and virtualization technologies. Focused on the decisions faced by data center managers who must build out agile infrastructure and extend their networks to embrace the internet of things, Network World is an invaluable resource in helping enterprises meet strategic business goals.