2017 was the scene of a quiet revolution: Nvidia's Volta and Pascal architectures were the roots of the "mainstream" Turing and Ampere architectures that followed. Building on that technology, Nvidia now returns, via the metaverse, to the creation of innovative 3D universes that integrate machine learning.
From video games and the creation of synthetic images to AI and the metaverse
The GPU has gone from being a feature of high-end servers and workstations to a commonplace component found in any smartphone. To simplify, we have gone from the Silicon Graphics IRIS workstation to the Nvidia A8000.
Nvidia is an example for any high-tech company to follow, alongside Tesla of course. Nvidia started out as a competitor to 3dfx before becoming one to Silicon Graphics. Nvidia then created CUDA for hardware designed first and foremost for video games, but, like Python, which was initially used for the web, the platform evolved toward deep learning and data science.
Ecologically and economically viable
The GPU is extraordinarily flexible. Yes, we still need a CPU for data preparation, 3D modeling, even running the operating system, but the GPU does 99% of the work.
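As a minimal sketch of that division of labor (assuming a CUDA-capable card and the standard CUDA runtime API), the CPU below does nothing but allocate and fill the input; the GPU then performs the actual arithmetic, spread across roughly four thousand blocks of threads:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: each thread handles one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // one million elements
    float *x, *y;
    // Unified memory: reachable from both the CPU and the GPU.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));

    // CPU side: data preparation only.
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // GPU side: the bulk of the work, 256 threads per block.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```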
The GPU's decisive advantage is its sheer number of parallel processing cores. One use case is SQL data processing, via GPU-optimized DBMSs usable in the cloud. I will not dwell on the now traditional CUDA-accelerated Photoshop filters; that same horsepower has allowed the emergence of photogrammetry, making three-dimensional environments photorealistic and, by corollary, the metaverse viable.
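To make the "many cores" point concrete, here is a sketch of the one-thread-per-pixel pattern behind such image filters (my own illustrative example, not Adobe's code): a simple RGB-to-grayscale conversion where every pixel is processed by its own thread.

```cuda
#include <cuda_runtime.h>

// One thread per pixel: the classic pattern behind GPU image filters.
// Converts interleaved RGB (3 bytes per pixel) to a single-channel
// luminance image using the Rec. 601 weights.
__global__ void rgbToGray(const unsigned char *rgb, unsigned char *gray,
                          int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    const unsigned char *p = rgb + 3 * idx;
    gray[idx] = (unsigned char)(0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2]);
}

// Host-side launch: one 16x16 thread block per image tile, so a 4K frame
// is covered by tens of thousands of blocks running across all the cores.
void launchRgbToGray(const unsigned char *d_rgb, unsigned char *d_gray,
                     int width, int height) {
    dim3 block(16, 16);
    dim3 grid((width + 15) / 16, (height + 15) / 16);
    rgbToGray<<<grid, block>>>(d_rgb, d_gray, width, height);
}
```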
From an economic point of view, a GPU has more power, on specific operations, than a supercomputer of two decades ago. I successively dreamed of the Cray-2 and the T3E, never mind their computing environments, infrastructure, and the people around them… today that power draws 75 W and fits in a PC.
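For a rough order of magnitude (peak figures from public spec sheets, quoted here as assumptions rather than measurements): the Cray-2 of 1985 peaked near 1.9 GFLOPS, the Cray T3E was the first machine to sustain one TFLOPS on a real application in 1998, and a 75 W desktop card today delivers on the order of 3 FP32 TFLOPS:

```latex
\frac{3\times10^{12}\ \mathrm{FLOPS}}{1.9\times10^{9}\ \mathrm{FLOPS}}
\approx 1.6\times10^{3}
\quad\text{(one 75 W card vs. the Cray-2)}
```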
The GPU is the best of allies for the challenges ahead.