What are Tensor Cores? Nvidia’s technology explained
Ever since Nvidia launched the GeForce RTX 20 Series range of graphics cards back in 2018, it has been equipping the vast majority of its new consumer graphics cards with Tensor Cores.
Tensor Cores should not be confused with CUDA Cores; the latter provide the general-purpose graphics firepower required to render complex images at high speed.
So if CUDA Cores are responsible for the main workload of a graphics card, then what are Tensor Cores needed for? Read on for a detailed explanation and full breakdown.
What are Tensor Cores?
Tensor Cores are dedicated AI accelerators found on modern Nvidia graphics cards. They handle a graphics card's AI workloads, allowing it to run various AI features locally without needing to rely on the cloud.
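At a hardware level, each Tensor Core accelerates mixed-precision matrix multiply-accumulate operations, the arithmetic that dominates neural network workloads. As a rough illustration only (not anything Nvidia ships in its own features), the sketch below uses CUDA's publicly documented WMMA (warp matrix multiply-accumulate) API to hand 16×16×16 tiles of a matrix multiplication to the Tensor Cores, with half-precision inputs and single-precision accumulation; the matrix layouts and kernel name are assumptions made for this example.

```cuda
#include <mma.h>
#include <cuda_fp16.h>

using namespace nvcuda;

// Illustrative kernel: C (MxN, row-major, float) = A (MxK, row-major, half)
// x B (KxN, column-major, half). Each warp computes one 16x16 tile of C,
// issuing Tensor Core matrix-multiply-accumulate instructions via WMMA.
// Assumes M, N and K are multiples of 16.
__global__ void tensor_core_gemm(const half *A, const half *B, float *C,
                                 int M, int N, int K) {
    // Identify which 16x16 output tile this warp is responsible for.
    int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    int warpN = blockIdx.y * blockDim.y + threadIdx.y;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> accFrag;
    wmma::fill_fragment(accFrag, 0.0f);

    int row = warpM * 16;
    int col = warpN * 16;
    if (row >= M || col >= N) return;

    // Walk along the shared K dimension, one 16x16x16 tile at a time.
    // Each wmma::mma_sync call runs on the warp's Tensor Cores.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(aFrag, A + row * K + k, K);
        wmma::load_matrix_sync(bFrag, B + k + col * K, K);
        wmma::mma_sync(accFrag, aFrag, bFrag, accFrag);
    }

    // Write the accumulated float results back to C.
    wmma::store_matrix_sync(C + row * N + col, accFrag, N, wmma::mem_row_major);
}
```

In practice developers rarely write this by hand; Nvidia libraries such as cuBLAS and cuDNN, and the AI frameworks built on top of them, issue these instructions automatically so applications can tap into the Tensor Cores.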
One of the most notable AI features found on Nvidia RTX graphics cards is DLSS (Deep Learning Super Sampling). This uses artificial intelligence to generate additional pixels, upping the resolution (and with it the picture quality) of a game without putting added pressure on the GPU.
This enables owners of an Nvidia RTX graphics card to push up the frame rate of a game at a high resolution. This is especially important for counteracting the performance loss seen when activating intensive features such as ray tracing.
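To put rough numbers on where that headroom comes from (assuming DLSS's Quality preset with a 4K output, where the game is rendered internally at 2560 × 1440 before the Tensor Cores upscale it to 3840 × 2160): the GPU shades about 3,686,400 pixels per frame instead of 8,294,400, around 2.25 times fewer, and that saved work is what shows up as extra frames per second.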
The Tensor Cores are required for DLSS to function, since Nvidia trains the artificial intelligence to fill in the correct pixels. Over time the AI has improved considerably, not only becoming more accurate with these added pixels but also generating more of them in real time to maximise the performance gains.
The latest version is DLSS 3.5, which introduces a feature called Ray Reconstruction that enhances the visual quality of ray tracing, the technique that allows games to make use of realistic lighting and reflections.
While DLSS has been available on Nvidia’s consumer graphics cards ever since the GeForce RTX 20 Series first introduced Tensor Cores back in 2018, and DLSS 3.5’s Ray Reconstruction works across the whole RTX range, the Frame Generation feature of DLSS 3 is only available on the latest RTX 40 Series line-up. This is because the latest crop of Nvidia’s GPUs makes use of the 4th Generation of Tensor Cores, whereas older graphics cards are limited to previous generations.
Nvidia is constantly improving its Tensor Cores with each graphics card generation, to the point where AI performance is arguably becoming more important than raw performance. While gains in raw graphics performance are starting to slow down between generations, the generational gains in AI performance are only speeding up. For the RTX 40 Series, there’s no doubt that the improvements made to DLSS were the biggest highlight.
Of course, DLSS isn’t the only AI feature that the Tensor Cores are responsible for. Chat with RTX is one of Nvidia’s newest AI features: it can answer queries much like ChatGPT, but it bases its answers on documents and videos that you choose, so you know the source of the data. It all runs on the AI performance of your graphics card rather than leveraging the power of the cloud.
Other Nvidia apps also make use of the Tensor Cores, such as Nvidia Broadcast, which can create a virtual background, eliminate background noise and even create the illusion that you’re maintaining eye contact with the camera.
The number of AI-powered features from Nvidia is only going to increase over time, and we’ll likely see further improvements made to the Tensor Cores with the next generation of GeForce RTX graphics cards.