
Re: Sony May Be Exploring Its Own DLSS-Esque Graphics Technology

kratreus

@TooBarFoo It's possible that AMD will implement it in their desktop GPUs, but it's unlikely that Sony and Microsoft will on their consoles. Dedicating silicon purely to machine learning is a waste of resources and die space, and it contradicts the cost-efficiency goal of consoles when only a handful of devs will support it. Tensor cores and RT cores are the primary reason RTX cards cost double the launch price of the GTX cards they replaced. Microsoft's implementation is more cost-effective because it doesn't waste any die space while still accelerating integer operations, although there will be performance sacrifices. The cost-efficiency of that solution still outweighs the benefits of dedicated silicon.

Re: Sony May Be Exploring Its Own DLSS-Esque Graphics Technology

kratreus

@Gts AMD GPUs usually have 4 shader engines, each composed of multiple Compute Units (14 per engine in the Series X: 56 CUs in total, with 4 disabled), and each compute unit contains 64 stream processors or shader cores. Since the shader cores are no longer used exclusively for shaders, they're now called stream processors by AMD and CUDA cores by NVIDIA. Microsoft's implementation of INT8 and INT4 operations runs on the stream processors. However, these integer operations cannot be done in parallel with floating point operations. Because they require less precision than FP32, the hardware can do 4 INT8 or 8 INT4 operations in the time it takes to do 1 FP32 operation. They share processing resources with floating point work, so the GPU cannot deliver its full 12 TFLOPS if it also uses integer ops in mixed-precision processing, and it cannot hit 48.5 INT8 or 97 INT4 TOPS unless the stream processors are dedicated exclusively to that. That's why Microsoft demonstrated the technique in a current-generation game at 1080p: since it doesn't need the full capabilities of the GPU at that resolution, there are more resources left over for the lower-precision operations. For comparison, the RTX 2060 Super can do 115 INT8 and 230 INT4 TOPS in parallel with 7.2 TFLOPS because those operations are offloaded to the tensor cores. That's why DLSS delivers such extreme performance gains: it doesn't have to sacrifice anything.
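If it helps, the rate arithmetic above can be written out as a tiny Python sketch. The 12.15 TFLOPS figure and the 4x/8x rates come from the Digital Foundry breakdown linked below; the 20% split at the end is purely a number I made up to illustrate the sharing trade-off, not anything Microsoft has quoted.

# Back-of-envelope arithmetic: peak integer throughput derived from the FP32 rate.
SERIES_X_FP32_TFLOPS = 12.15   # quoted FP32 compute of the Series X GPU

INT8_OPS_PER_FP32 = 4          # 4 INT8 ops in the time of 1 FP32 op
INT4_OPS_PER_FP32 = 8          # 8 INT4 ops in the time of 1 FP32 op

def peak_tops(fp32_tflops, ops_per_fp32):
    # Peak integer throughput IF every stream processor does integer work.
    return fp32_tflops * ops_per_fp32

print(peak_tops(SERIES_X_FP32_TFLOPS, INT8_OPS_PER_FP32))   # ~48.6 INT8 TOPS
print(peak_tops(SERIES_X_FP32_TFLOPS, INT4_OPS_PER_FP32))   # ~97.2 INT4 TOPS

# The catch: these ops run on the same stream processors as FP32 shading, so
# whatever fraction of the GPU is spent on ML is subtracted from the 12 TFLOPS.
ml_share = 0.20  # hypothetical 20% of the GPU given over to INT8 inference
print((1 - ml_share) * SERIES_X_FP32_TFLOPS)                          # ~9.7 TFLOPS left for shading
print(peak_tops(ml_share * SERIES_X_FP32_TFLOPS, INT8_OPS_PER_FP32))  # ~9.7 INT8 TOPS for ML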

I doubt Sony's implementation (if there is one) will be different from Microsoft's, because dedicating silicon purely to machine learning is a waste of die space when only a handful of devs will support it. The cost-efficiency just isn't there, and that's the whole point of consoles: removing everything not needed for gaming to reduce hardware cost as much as possible for the best price-to-performance ratio. RTX cards are different because PCs do a lot more than gaming, and the dedicated silicon for ray tracing and tensor cores is the reason RTX cards cost double the launch price of the GTX cards they replaced. AMD's implementation of accelerated machine learning and ray tracing is smarter because it makes sure everything in the GPU is always being used.

https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

Some more things I'd like to clarify for you are DLSS and supersampling. Supersampling is a type of anti-aliasing, usually abbreviated SSAA; I won't go in depth about anti-aliasing here. Most PC games can do up to 4X SSAA at best. DLSS takes this to another level by leveraging machine learning to approximate the quality of 64X SSAA. SSAA uses GPU resources alone, while DLSS models are trained on supercomputers and then run on the tensor cores in RTX GPUs. In the example you've shown, SSAA is used more efficiently because the GPU is capable of doing more of the less-precise operations at the same time. Unlike DLSS, it is not trained on supercomputers beforehand; everything is done locally, so the resulting image will not be as good as DLSS and won't recreate the fine details of native 4K, although it will still be noticeably better than traditional bilinear upscaling and than the raw lower native resolution. This is why 4K via DLSS looks as good as or even better than native 4K: the network is trained on supercomputers beforehand to reconstruct the highest fidelity from a lower native resolution using extremely high-fidelity teacher data, and the resulting model is then run locally in real time on the tensor cores. Local implementations cannot use extremely high-fidelity teacher data (e.g. 16K resolution with 16K textures and extremely high-polygon 3D models) because the performance hit would be significant; they can only use a higher resolution as the reference while reusing the same in-game assets (the best-looking games only use 4K textures at most), so the accuracy will suffer.
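To make the supersampling idea concrete, here's a toy Python/numpy sketch of what SSAA boils down to: shade more samples per pixel than the target resolution, then average them down. It's purely conceptual; real SSAA (and certainly DLSS) pipelines are far more sophisticated than this.

# Toy sketch of supersampling: shade at k*k times the target resolution,
# then box-filter each k*k block of samples down to one output pixel.
import numpy as np

def supersample(shade, width, height, k=2):
    # shade(x, y) returns an RGB value; x and y are normalised to [0, 1).
    hi = np.zeros((height * k, width * k, 3))
    for j in range(height * k):
        for i in range(width * k):
            hi[j, i] = shade((i + 0.5) / (width * k), (j + 0.5) / (height * k))
    # Average the k*k sample blocks back down to the target resolution.
    return hi.reshape(height, k, width, k, 3).mean(axis=(1, 3))

# Example: a hard diagonal edge. With k=2 (4 samples per pixel, i.e. 4X SSAA)
# the stair-stepping along the edge is replaced by smooth intermediate values.
edge = lambda x, y: np.array([1.0, 1.0, 1.0]) if y > x else np.array([0.0, 0.0, 0.0])
img = supersample(edge, 64, 64, k=2)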

Re: Sony May Be Exploring Its Own DLSS-Esque Graphics Technology

kratreus

@BAMozzy DirectML isn't really DLSS. DirectML is just an API for programming those INT8/INT4 operations. I don't expect either console to have something even half as good as DLSS, when even the Xbox Series X only has up to 48.5 INT8 TOPS and 97 INT4 TOPS, while the RTX 2060 Super has 115 INT8 TOPS and 230 INT4 TOPS in its tensor cores alone, and that grows even more once you factor in its CUDA cores. One of the lowest-end RTX cards is more than twice as powerful at machine learning as the only console confirmed to support it.
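A quick sanity check on that "more than twice as powerful" claim, using the same peak figures quoted above (tensor-core numbers only for the 2060 Super):

# Ratio of the quoted peak ML throughput figures.
series_x = {"int8_tops": 48.5, "int4_tops": 97.0}
rtx_2060_super = {"int8_tops": 115.0, "int4_tops": 230.0}  # tensor cores only

print(rtx_2060_super["int8_tops"] / series_x["int8_tops"])  # ~2.37x
print(rtx_2060_super["int4_tops"] / series_x["int4_tops"])  # ~2.37x
# And the 2060 Super's ~7.2 FP32 TFLOPS of CUDA-core shading remains available
# in parallel, whereas the Series X figures consume the entire GPU.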

Re: PlayStation Fans Are Worried Microsoft Could Buy Warner Bros' Gaming Division

kratreus

Take-Two would be the safest destination for WB Games. EA and Activision are the two worst companies for consumers. The slight possibility that Microsoft may turn a multi-plat game into an exclusive is very worrying, just like how Ninja Theory's Hellblade went from being a multi-plat to an exclusive. If it becomes exclusive, Sony would respond by buying another big name, further dividing the gaming community. It would be a lose-lose situation for everyone. If Microsoft keeps it multi-plat, even if it's a timed exclusive for Xbox, that would be fine. If that's the case, I'd want Microsoft over Take-Two.