
Topic: PlayStation 5 --OT--

Posts 741 to 760 of 4,717

BAMozzy

What you also have to remember is that pure native pixel count won't necessarily matter too much at such high resolutions and from normal seating distances. With VRS and DLSS (or a version of that), you will be hard pressed to spot any differences - even in a side by side.

Both next gen consoles will have their strengths and weaknesses in some areas but, unlike the XB1 vs PS4, the competition should be more evenly matched. Remember the Xbox had a faster CPU but in almost every game, the PS4 matched or bettered it. Only in a few games where a CPU bottleneck mattered did you see Xbox getting a 'win' - games like Unity and Hitman with their large crowds gave Xbox a very minor frame rate advantage.

Certain tasks could be handled better by PlayStation - thanks to its faster GPU clock - whilst others could be better on Xbox due to its 'bigger' GPU. The CPU is faster too on Xbox, but only by a small margin in SMT mode - more if the game is built around single-thread performance. RAM is faster on Xbox - albeit only for 10GB, with the other 3.5GB available for devs (2.5GB is for system operations) being 'slower' - but not 'slow'. However, the PS5's is all unified, all the same speed, which may make it easier for devs - not having to be so 'careful' where they allocate the blocks.

Assuming the SSD was streaming 10GB of raw uncompressed data directly into the game, the PS5 would do this in around 2 seconds, whereas it would take around 4 seconds on Series X. What this could mean is that fast travel could be slightly faster on PS5 than Xbox. I say slightly because, compared to current gen, both will be incredibly fast. As for game design, the devs most likely to take advantage of that data transfer will be the first party studios, as third parties will also be looking at the Xbox and the bulk of PC gamers' hardware and designing a game around what they can logistically handle. It's possible that games may have higher quality textures and assets being used at a greater distance from the camera position on PS5 too, but then it still has to render those, which puts more workload on the GPU. As the CU cores will be used for ray tracing, the Xbox could have more rays - a higher ray tracing level - but if the CPU isn't being utilised heavily by PS5, they could use a CPU core, as PS5 will be using AMD's version of ray tracing compared to MS using their own DXR version.
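Those loading figures follow from simple bandwidth arithmetic — a quick sketch using the publicly quoted raw (uncompressed) SSD figures of roughly 5.5 GB/s for PS5 and 2.4 GB/s for Series X (the function name is just for illustration):

```python
# Back-of-the-envelope loading-time comparison using the raw
# (uncompressed) SSD bandwidth figures quoted for each console.
RAW_BANDWIDTH_GBPS = {
    "PS5": 5.5,       # GB/s, raw
    "Series X": 2.4,  # GB/s, raw
}

def transfer_time(data_gb: float, console: str) -> float:
    """Seconds to stream `data_gb` of uncompressed data."""
    return data_gb / RAW_BANDWIDTH_GBPS[console]

for console in RAW_BANDWIDTH_GBPS:
    print(f"{console}: 10 GB in {transfer_time(10, console):.1f} s")
# 10 GB works out to roughly 1.8 s on PS5 vs roughly 4.2 s on
# Series X - the "around 2 s vs around 4 s" figures above.
```

Compressed throughput would be higher on both machines, so real loads should be faster still.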

It's all speculation: one could be stronger here whilst the other is stronger there; one might look/run better because it has a faster CPU/bigger GPU, and the other may load quicker because of its faster SSD! Of course they could end up being closer than it seems on paper, or maybe one may have the advantage in the first few years whilst the other may well have an advantage after 3-4 years, when clock speed and data transfer become more important because more PCs offer that as well.

It's very difficult at the moment to predict and, just as some games are more CPU bottlenecked, others are more GPU bottlenecked - it really depends on the dev and their game. Maybe we will see data transfer as a bottleneck in gaming, but with advancements in compression, streamlining of the rendering process and machine learning upscaling, any advantages could be negated on either system. For GPU bottlenecks, you have DLSS, VRS etc; for data transfer, compression and again machine learning upscaling (using lower quality textures but making them look much higher quality); and as for CPU, both are very close in SMT mode anyway...

There's no point speculating where the advantages of either could lie as it really depends on the devs, their design and what their game needs to run at its best. If a game doesn't need to transfer more than 2.5GB/s for example, then that extra speed is just overkill on a PS5 and doesn't give it any advantage at all over Xbox in that area. Having watched Digital Foundry's analysis of Wolfenstein and its implementation of DLSS, 1080p looked as good as native 4K, so resolution could be no advantage to Xbox. It will be even more difficult to see with much higher resolutions and after post processing - some don't even notice when dynamic resolution kicks in now unless it's a drastic drop...

At the end of the day, if you are a Sony fan and have a PS4 back catalogue, you aren't going to jump to Xbox unless the PS5 is ridiculously under-powered, under-performing and expensive - at least not until the price is cut or Sony addresses that in a PS5 Pro. I could say the same about an Xbox fan too - they wouldn't jump ship either unless MS really screws up. Both represent a massive leap over the last gen with a different approach.

In any case, I am excited by the prospect and potential that both offer and intend to get both too. Both will offer a tangible upgrade over any console currently available.

A pessimist is just an optimist with experience!

Why can't life be like gaming? Why can't I restart from an earlier checkpoint??

Feel free to add me but please send a message so I know where you know me from...

PSN: TaimeDowne

JohnnyShoulder

@Jaz007 @KratosMD Oh right, I get where Kratos was coming from now. The console war between the console manufacturers was over pretty quickly this gen, but between fanboys I don't think it will ever stop lol.

Life is more fun when you help people succeed, instead of wishing them to fail.

Better to remain silent and be thought a fool than to speak and remove all doubt.

PSN: JohnnyShoulder

Octane

@nessisonett The SSD essentially functions as additional RAM. If that means you can render stuff on the fly as you turn around, there is less need to render additional stuff outside of the player's view, so you can cram more into the visible field, resulting in a higher graphical fidelity, theoretically.

Octane

JJ2

@nessisonett
Hey idk. I try to get informed.

The crowd, accepting this immediately, assumed the anti-Eurasian posters and banners everywhere were the result of acts of sabotage by agents of Goldstein and ripped them from the walls.

nessisonett

@JJ2 I understand what they’re saying but the onus lies on the developer since an SSD isn’t a magic wand that can be waved over graphics. So if you add an SSD to a PC, games will load faster but won’t look any better. However, with the PS5, Sony will make use of the whole SSD as RAM concept but I just don’t think many multiplats will since it would require drastically changing the way objects are rendered.

Plumbing’s just Lego innit. Water Lego.

Trans rights are human rights.

JJ2

@nessisonett
Well, the Godfall devs were already praising the PS5 SSD a few months ago. Obviously first party studios will use all its potential, but multiplats will also find it an advantage to be really fast. SSDs are the way to go from now on with new gen games.

Edited on by JJ2

The crowd, accepting this immediately, assumed the anti-Eurasian posters and banners everywhere were the result of acts of sabotage by agents of Goldstein and ripped them from the walls.

BAMozzy

RAM is the 'memory' - it holds the data for all the assets, all the textures etc. If you fast travel, for example, all the data for that area has to be found on the disc and loaded into RAM, which is then streamed into the GPU to construct the new setting. In essence, the SSD shortens that chain, so it should cut down on the delay - both the time to find and load the data into RAM and the time it takes for that amount of data to be pulled into the game - so you cut down a lot of the loading time.

What this could also allow is for you to be stood on top of a mountain looking out across a valley, with a LOT of high-level detail stretching off into the distance, then turn around and have a similar long-distance view with a completely different look - completely swapping out all the data needed for one scene and replacing it with all the data necessary for the other, and when you turn back around, swapping the data again very quickly. Fast travel could be virtually instantaneous as all the data necessary for the new environment is streamed straight into the GPU.

With data transfer that fast, you also don't need to have quiet, fairly restricted areas as you move from one 'block' or zone to another. Some games may have a narrow valley between two cliff faces that twist and turn so the view is quite 'plain' and with minimal draw distances to keep the load on the system very low whilst the data for the new area is swapped in - that doesn't have to happen with new games that utilise the SSD.

The Xbox 'could' still match it with various methods - like using machine learning to upscale textures to reduce the size of the data, or not having the highest level of detail for more distant objects - similar to methods used today but at much greater distances. A house a mile down the valley may not have the highest quality textures used to create the look of tiles or brickwork compared to what the PS5 could use, but by the time you get close enough, the difference is negligible (if any).

Likewise, the PS5 could use methods to make the visuals look equally high resolution to offset that difference in GPU. It will be interesting to see what devs create with the options the SSD offers - the future of gaming is exciting.

A pessimist is just an optimist with experience!

Why can't life be like gaming? Why can't I restart from an earlier checkpoint??

Feel free to add me but please send a message so I know where you know me from...

PSN: TaimeDowne

JJ2

@BAMozzy
Talking about RAM, PS5 went for a unified memory again which is a big advantage too. 👍

The crowd, accepting this immediately, assumed the anti-Eurasian posters and banners everywhere were the result of acts of sabotage by agents of Goldstein and ripped them from the walls.

JohnnyShoulder

BAM talking about RAM lol.

Life is more fun when you help people succeed, instead of wishing them to fail.

Better to remain silent and be thought a fool than to speak and remove all doubt.

PSN: JohnnyShoulder

BAMozzy

@JJ2 I know - I mentioned unified RAM in one of my posts, but it may not be a big advantage. It may be easier for devs, meaning that they don't have to worry about which pool of RAM they allocate data to, but Xbox still has the bandwidth advantage, and some data - like audio, even 3D audio - doesn't need that high a bandwidth...

That slower, but not slow 3.5GB for games is only a bit slower than the PS4's unified RAM and could feed the CPU certain data....

Edited on by BAMozzy

A pessimist is just an optimist with experience!

Why can't life be like gaming? Why can't I restart from an earlier checkpoint??

Feel free to add me but please send a message so I know where you know me from...

PSN: TaimeDowne

JJ2

@BAMozzy
Well, I don't pretend I know what I'm talking about haha. However, common sense tells me one pool of 16GB of fast memory is way better than 10GB fast + 6GB slow. Apparently the OS doesn't even need to sit in memory.

Edited on by JJ2

The crowd, accepting this immediately, assumed the anti-Eurasian posters and banners everywhere were the result of acts of sabotage by agents of Goldstein and ripped them from the walls.

BAMozzy

@JJ2 The 6GB of slower RAM is still faster than the Pro's and roughly the same bandwidth as the X's - and it's also GDDR6 (not GDDR5 like the X). The OS does need RAM as it too has assets, sounds etc. All the icons for the menus, for example, are 'assets'; your background and any game covers are all artwork that would be moved to RAM and used as needed. If you have 100 games and 20 backgrounds, for example, the games displayed and the background would need to be moved to RAM so they can be streamed in for the OS to display.

The actual OS itself doesn't just sit idle in RAM - it runs much like a game does, albeit on a part of the system that is not available to devs - one core of the CPU, for example. That part of the system is not available to devs at all.

On the PS4, you have an 8-core Jaguar CPU and 8GB of RAM, BUT around 3GB of that RAM and at least one core of the CPU are not available to devs. The Series X has 16GB of RAM but 2.5GB is not available to devs - it has 13.5GB for gaming, split between the 10GB 'optimal' pool and the 3.5GB of 'slower' memory. That 3.5GB is around 50% faster than the RAM in the Pro, for example, which was faster than the PS4's, which was a lot faster than the XB1's - it's not slow at all...

A pessimist is just an optimist with experience!

Why can't life be like gaming? Why can't I restart from an earlier checkpoint??

Feel free to add me but please send a message so I know where you know me from...

PSN: TaimeDowne

Ryall

@JJ2 @BAMozzy It’s important to note that the Xbox Series X RAM presents to developers as unified. The developer won’t tell the system which pool of RAM to use. When RAM usage exceeds 10GB, the lowest-priority tasks will be sent to the 3.5GB of slower RAM. Practically, I don’t see the split having a negative impact.

Edited on by Ryall

Ryall

BAMozzy

@Ryall I didn't know that about the Series X - I have seen speculation that devs can allocate certain tasks - like audio files, for example, as they don't need that high a bandwidth - to the slower section if necessary. The way MS described the 10GB as 'GPU optimal' memory with the 6GB as 'standard', it seemed like they were indicating that devs could allocate certain tasks to certain memory. Of course it still presents as 'unified' in that it's not 'split' like the PS3's was. It's not like they have one block of 10GB and another block of just 3.5GB - it's still 13.5GB of RAM.

There is a difference between unified and split - the PS3 had two separate, split blocks that could not 'overflow' into each other. It had 2 x 256MB whilst the Xbox 360 had 512MB of RAM - both equate to the same amount, but the 360's was not split. Unified just means that all 13.5GB available can be used as a single block, but whether devs can choose to allocate certain aspects to the 'lower' speed RAM - rather than, say, audio occupying some of the optimal RAM and taking up room that would be better used by other data - or whether that is 'automatically' handled by the hardware, I don't know. It's the difference between devs choosing to put certain files in certain areas of RAM, or letting the hardware decide which files are better suited to each block of memory. I know some devs can choose which cores do what when it comes to CPU usage - core 1 (or core 0 as it's more often displayed) for draw calls, the most intensive workload for the CPU. Often it's a single core of the CPU that gets overloaded and causes the bottleneck whilst the other cores may only be operating at 40-50%.

All unified means is that there isn't a 'distinct' split that has to be managed - i.e. up to 10GB allocated here and just 3.5GB allocated there - and all 13.5GB of game data, for example, could be used as a single block. In terms of optimising that data flow, though, it would make sense if developers have the option to ensure that data which doesn't need to be streamed in at the highest rate isn't in the optimal RAM - or, more importantly, that data which is better suited to the optimal pool isn't sat in the standard RAM. Again though, I don't know if that would be handled by hardware - the hardware itself deciding where the data ought to go - or whether the devs themselves can choose that certain data goes to standard whilst the rest goes to optimal. It makes sense to me that devs are given that choice to be able to optimise their game, but it could equally be handled by the hardware.

I don't see it being a negative either - not all data needs that high a speed/bandwidth, certainly not audio. The only point I was trying to make is that the 3.5GB is not as fast, and maybe doesn't need to be as fast, but is still as fast as the X's RAM, which was 50% faster than the Pro's (and the Pro had 5.5GB of RAM for games - just 2GB more than this block). When I talk about 'blocks', I refer more to that area of RAM - not that it is completely separate and has to be managed as 'split' RAM. It just makes sense that devs have the option, if needed/wanted, to ensure that the lowest-priority aspects, or those that don't need that speed, can be pushed to that area of the RAM, leaving the highest-speed RAM for the data that needs it most. At some point, the allocation of data will need to be optimised - whether that's at a system level, i.e. the console deciding that certain data ought to be held in standard RAM, or a choice given to developers - if the data exceeds 10GB of course. It's sub-optimal to just dump data into RAM when you have two different speeds. If you have 13.5GB of data, you don't want the lowest-priority data, or data that doesn't need that bandwidth, in the optimal memory, so somewhere in the chain it has to be decided where that data should be held. You can't wait until the game is 'streaming' it in, only to find that some files are held in standard RAM and then have to move the lowest-priority data out of optimal to make room for the files needed faster...

Maybe devs can mark data as lower priority so that, if necessary, it will be sent to the standard RAM by the hardware, or maybe the hardware will look for certain file types (like .wav or .jpg) and know where best to send that data, meaning devs don't need to worry about it at all - it's handled at a hardware level. You are correct though that it is a unified block of 13.5GB - not a block of 10GB and a block of 3.5GB that are completely split and separate.
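The "mark data as lower priority" idea above can be sketched in a few lines. This is purely hypothetical - Microsoft has not published how the allocator actually works - but it illustrates placing high-bandwidth assets in the 10GB optimal pool first and pushing low-bandwidth data (audio, etc.) to the 3.5GB standard pool:

```python
# Hypothetical priority-based placement across the Series X's two
# game-visible memory speeds (10 GB "GPU optimal" + 3.5 GB "standard").
# The real allocator is not public; this only illustrates the idea.
FAST_POOL_GB = 10.0
SLOW_POOL_GB = 3.5

def place(assets):
    """assets: list of (name, size_gb, needs_high_bandwidth)."""
    fast, slow = [], []
    fast_used = slow_used = 0.0
    # Sort so high-bandwidth assets come first and claim the fast pool.
    for name, size, hot in sorted(assets, key=lambda a: not a[2]):
        if hot and fast_used + size <= FAST_POOL_GB:
            fast.append(name)
            fast_used += size
        elif slow_used + size <= SLOW_POOL_GB:
            slow.append(name)
            slow_used += size
        else:
            # Overflow: standard pool full, spill into the fast pool.
            fast.append(name)
            fast_used += size
    return fast, slow

fast, slow = place([("textures", 8.0, True),
                    ("audio", 1.5, False),
                    ("geometry", 1.5, True)])
print(fast, slow)  # textures/geometry land in fast, audio in slow
```

Whether this decision sits with the developer or the hardware is exactly the open question in the post above.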

A pessimist is just an optimist with experience!

Why can't life be like gaming? Why can't I restart from an earlier checkpoint??

Feel free to add me but please send a message so I know where you know me from...

PSN: TaimeDowne

JJ2

So in Cerny's deep dive he said 36 RDNA2 CUs are equivalent to 58 of the PS4's CUs. So I counted that's 16+ TFLOPs on the old architecture. Did I get it right?
So people are complaining that's 'weak'. Jesus haha, the world's gone mad.

The crowd, accepting this immediately, assumed the anti-Eurasian posters and banners everywhere were the result of acts of sabotage by agents of Goldstein and ripped them from the walls.

Anti-Matter

I hope PS5 games will be supported by Limited Run Games / Strictly Limited Games / eastasiasoft / any publisher bringing digital games to physical. Some indie games are hidden gems for me.

Anti-Matter

Ryall

The PS5 supports 256-bit instructions and these use a lot of power. I wonder whether they'll be used less now - since they cause the CPU and GPU to downclock - than they would have been if Sony had instead increased the power budget.

Ryall

JJ2

@Ryall
Definitely, I think power is constantly monitored and frequencies depend on it, right? That's why they can target an efficient cooling system. (I think?)
Amusingly, a lot of insecure people try to downplay the PS5 and question what Cerny said, whereas it's the only info we have. So why question it?
I watched Brad Sams' video recently and that's embarrassing, especially the comments.

Edited on by JJ2

The crowd, accepting this immediately, assumed the anti-Eurasian posters and banners everywhere were the result of acts of sabotage by agents of Goldstein and ripped them from the walls.

BAMozzy

@JJ2 Basically he was saying that with the technology used in the PS4 Pro, they would need a much bigger GPU to match what the GPU in the PS5 is capable of delivering. The 'size' (as in the number of CUs) is only part of the equation when it comes to the TF figure. It's calculated using FP32, but if they used the smaller FP16 format - essentially doubling the number of instructions the GPU can do - the PS4 Pro would be an 8.4TF console. It can accept FP16, unlike the XB1X, which can only handle FP32 operations.

Flops are floating point operations per second, so the frequency - how many times it cycles per second - is important. An identical GPU, with the same number of CUs/shaders, running at 2GHz would have double the TF of one running at 1GHz because it has double the number of cycles. It's like a printing press that prints out 20 sheets a second: if you speed it up to print 40 sheets a second, it doubles the amount of work it can do per second. Another way, of course, would be to add a second machine so you have two machines both printing 20 sheets a second, giving you the same 40 sheets per second.
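The standard FP32 formula behind these figures is CUs × 64 shaders per CU × 2 operations per cycle (fused multiply-add) × clock speed. A quick sketch, plugging in the published console specs:

```python
# FP32 TFLOPS = CUs x 64 shaders/CU x 2 ops/cycle (FMA) x clock (GHz)
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"PS5:      {tflops(36, 2.23):.2f} TF")   # 36 CUs at 2.23 GHz
print(f"Series X: {tflops(52, 1.825):.2f} TF")  # 52 CUs at 1.825 GHz

# Doubling the clock doubles the TF - the printing press analogy:
print(tflops(36, 2.23) / tflops(36, 1.115))  # exactly 2.0
```

This is why both "print faster" (clock) and "add more presses" (CUs) raise the same headline number, even though they behave differently in practice.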

With modern GPUs, the pipeline is much more efficient and there is less lag between components, so the overall effect is that they don't need as many 'flops' to achieve the same results as older machines. They aren't waiting microseconds for instructions to be sent to the GPU, or waiting for certain operations to be carried out before the next instruction can start, so you get more work done. Of course we are seeing a much bigger increase in the GPUs too.

In a recent DF video, they were looking at RDNA1 and the 'rumoured' Project Lockhart Xbox. At face value it sounds 'weak' compared to the Pro and X with just 4TF (with the Pro at 4.2 and the X at 6TF), but RDNA1 in a PC at 4.2TF was delivering comparable performance to the X. RDNA2 is more efficient again, so it would offer improved performance on top of RDNA1. What that means is that at ~10TF, the PS5's RDNA2 GPU could give more than 2.5x the performance of the 6TF X and probably close to 4x the performance of the PS4 Pro, despite the TF ratings seeming much closer. According to the numbers alone, the PS5 isn't 2x the X or more than 2.5x the Pro...

I think that's what Mark Cerny was alluding to - that you would need a massive GPU if they used the same GCN architecture as the Pro in the PS5, and that it would need to be at least 16TF to deliver the same performance...

A pessimist is just an optimist with experience!

Why can't life be like gaming? Why can't I restart from an earlier checkpoint??

Feel free to add me but please send a message so I know where you know me from...

PSN: TaimeDowne

WebHead

WebHead

PSN: JTPrime93
