Made me smirk, because each time we get pretty console garbage, I tend to warn people about these very problems xD

Now for anecdotal humor:
Before DOS emulation became really popular for gaming on modern machines, the only way I could play Ultima 7 (at the time) was to buy an entire computer for it, then reprogram every resource the machine had to run one game. Skip ahead to 2024, long after any real DOS machine has been made and sold, and here we are again.

There is a difference, though. There used to be some headroom in computer development in regard to what components could achieve. Now that the wall has been hit, we have to go backwards and find ways to thin out or cut back bloat to move forward (again). CPU circuits are about as small as they can be and still carry electrons. Let's hope the fat gets trimmed faster in the coming years, rather than getting more bloated.
timppu: Maybe we should at least leave x86 behind and start using more energy-efficient ARM-based CPUs, like Snapdragon?
I completely agree that there should be a dramatic shift in favor of increasing the adoption (and availability) of desktop and laptop RISC processors.

As for ARM itself, over the past few years, my opinion has gradually soured, as Arm [Holdings PLC] has behaved in an extremely hostile manner towards other entities (Qualcomm being their most-recent 'target'). Having said that, of course, ARM devices have proven to be significantly more energy-efficient when compared to x86_64 counterparts.

However, with the advent of RISC-V, I have been slowly abandoning ARM, and, when responsive desktop-grade functionality is not required, I have been somewhat successful in my attempts to persuade others to transition to RISC-V devices. While still remaining in its infancy (so to speak), an open instruction set architecture, free of royalties (such as RISC-V), is, I believe, the way of the future.

Currently, in terms of performance, RISC-V devices are certainly not competitive, but, of the micro-controllers and single-board computers that I have used, the overall experience has been splendid. The StarFive VisionFive 2 has been very reliable, and, StarFive engineers have been making steady progress with adding mainline Linux support for various features. Within two years, some exciting offerings will (hopefully) be released (I am mostly looking forward to the Milk-V Oasis, which features the SiFive P670).
How come the CPU is the "new target" now? So far, AMD's CPUs are sufficiently energy-efficient and true all-rounders, which is important for the tasks they are asked to do.

The main focus for improvements in the next few years is clearly the GPU segment; AMD in particular will need to improve there a lot.

ARM might be a thing, but mainly in the portable market, where every saved watt counts... and compatibility is less of an issue.

Especially on powerful gaming hardware, we just need 8 (maybe up to 16 in the future) very powerful cores... not a lake of low-powered, ultra-high-efficiency "mini cores" (which is an ARM weakness as well)... because games have certain calculations that do not scale well with massive parallelization across CPU cores. Sure, some tasks are friendly to this, even in games, but a game still mainly needs very powerful cores, and fast cache is very useful as well, which is the reason why X3D is such a great thing.

One of the biggest things is indeed very quick cache, even on the GPU. This is a matter Mark Cerny brought up at some point: AI can produce a huge number of operations per second (basically a storm of operations, burst after burst), but without a cache architecture of sufficient speed it can barely become fully utilized. So, for gaming and AI use... memory can never be too fast; this is a special matter that is not sufficiently addressed by ARM. Although I guess Nvidia will have to work on it, in case they want to create their own ARM chips in combination with their "AI hardware".

The 5090 will provide nearly 2 TB/s of bandwidth, which looks crazy but is totally required for a GPU of this performance level. Still not fast enough for certain AI calculations, it seems... so it may need some specialized cache architecture.
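As a rough sanity check on that figure, here is a tiny back-of-the-envelope calculation in Python. The bus width and per-pin rate below are the commonly reported 5090 specs (a 512-bit GDDR7 bus at 28 Gbps), so treat them as assumptions rather than confirmed numbers:

# Peak memory bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte.
# Assumed figures for the RTX 5090: 512-bit bus, 28 Gbps GDDR7 (reported specs, not confirmed here).
bus_width_bits = 512
data_rate_gbps = 28

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"Peak bandwidth: {bandwidth_gb_s:.0f} GB/s (~{bandwidth_gb_s / 1000:.2f} TB/s)")
# Prints: Peak bandwidth: 1792 GB/s (~1.79 TB/s), i.e. "nearly 2 TB/s".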

This is not a GPU-only matter, as we already know that most games benefit from fast cache there too. There are simply different needs for gaming hardware; it is not all about efficiency and nothing else. As well... AI and gaming demands are pretty close... so gaming hardware is pretty much good AI hardware too.

timppu: Maybe we should at least leave x86 behind and start using more energy-efficient ARM-based CPUs, like Snapdragon?
Palestine: I completely agree that there should be a dramatic shift in favor of increasing the adoption (and availability) of desktop and laptop RISC processors.
x86 is an architecture able to support both CISC and RISC, as RISC is basically implemented inside the CISC architecture. However, x86, no matter how it is done, will always lose some efficiency in exchange for higher compatibility and a generally much more complex instruction set. Sure, if you use a car with an engine several times the size, so it can handle any demand in every situation, then it will never be able to reach the same efficiency as an engine several times smaller and specialized for a very specific task. That is not what x86 is here for; it simply needs high flexibility without too much loss of efficiency, which is a challenge AMD/Intel will have to tackle. I do not think that x86 is an "obsolescent" architecture, as it has a different focus... that focus is on high performance, even if it may cost some efficiency, and on a very wide range of resource management that is never "the most efficient thing" in general.

Sure, it is always possible to build something way more efficient... but it would mean losing a lot of flexibility with resources, and it would always mean using pretty specialized software solutions, something Apple has been working on for many years already. The compatibility of Apple hardware is in general very limited, but using this "specialized approach" Apple indeed was able to become very energy-efficient.
Post edited December 23, 2024 by Xeshra
P. Zimerickus: Still, after reviewing Cyberpunk again (got a nice screenie if you're interested), I feel mighty interested in what CDPR manages with Unreal Engine themselves
botan9386: Well, the question is can this same level of quality be maintained and improved in UE5 without being much more demanding.

I have also attached a screenshot of my own gameplay from today. I ran out of a Ripperdoc to my car and had to appreciate just how good this game can really look at times. The better part is the 70fps which is maintained throughout the city whilst looking this way.

This game is still incomplete in my opinion and lacking so many QOL updates (without mods) but one thing it managed was looking good and running well. If they can improve upon this with UE5 I'll be impressed.
Your picture looks amazingly nice! What kind of GPU do you use?

Okay, this is a bit off-topic, but I'll show you another one of mine too, with NVIDIA's OSD. I have my FPS capped at 60; with everything I have enabled, I fear going any higher... ray tracing and such...
The game's power draw averages 170W. Oh, by the way, the temps are from a just-booted PC.
I have limited the power draw on my GPU to half so I don't have too much excess going on, and that is where I think problems arise for other PC users.
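(For reference, if someone wants to do the same capping on an NVIDIA card from a script, here is a minimal sketch. It assumes the nvidia-smi tool is available and that the driver allows changing the limit; the 225 W value is only an example for roughly halving a 3090 Ti's ~450 W default.)

# Minimal sketch: cap the board power of an NVIDIA GPU via nvidia-smi.
# Assumes nvidia-smi is on the PATH and you have admin/root rights to change the limit.
import subprocess

def set_power_limit(watts: int, gpu_index: int = 0) -> None:
    """Set the board power limit (in watts) for the selected GPU."""
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
        check=True,
    )

# Example: roughly half of a 3090 Ti's ~450 W default limit.
set_power_limit(225)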

While testing games to find the right settings, I notice situations where the difference between a lined-on GPU (think of it as how you line a dog) and a free GPU can be as much as 150W. Still, the game or the GPU manages to downscale in limited situations and tries to create as good a picture as possible. 150W is a lot! I can't imagine this goes right all the time, especially if there are other 'consuming' factors at play: a lot of writing to memory and hard drives, a bit of a weird instance in the code, etc. It is quite amazing that you can play the same game at a 130W load or use 450W! Especially if the game was originally designed around 750W (just to name a figure).

Personally, I wouldn't dare complain about something not being good enough if I knew the whole design team behind it had made it their life's dream :p
Attachments:
Lined on GPU?

Aaahhh! It must be this "spec": https://www.youtube.com/watch?v=LYjUCKLyBLI&t=4s

Oh yeah... I mean, we all need coins, I guess... and why not play "in between".
Post edited December 23, 2024 by Xeshra
P. Zimerickus: Your picture looks amazingly nice! What kind of GPU do you use?

Okay, this is a bit off-topic, but I'll show you another one of mine too, with NVIDIA's OSD. I have my FPS capped at 60; with everything I have enabled, I fear going any higher... ray tracing and such...
The game's power draw averages 170W. Oh, by the way, the temps are from a just-booted PC.
I have limited the power draw on my GPU to half so I don't have too much excess going on, and that is where I think problems arise for other PC users.

While testing games to find the right settings, I notice situations where the difference between a lined-on GPU (think of it as how you line a dog) and a free GPU can be as much as 150W. Still, the game or the GPU manages to downscale in limited situations and tries to create as good a picture as possible. 150W is a lot! I can't imagine this goes right all the time, especially if there are other 'consuming' factors at play: a lot of writing to memory and hard drives, a bit of a weird instance in the code, etc. It is quite amazing that you can play the same game at a 130W load or use 450W! Especially if the game was originally designed around 750W (just to name a figure).

Personally, I wouldn't dare complain about something not being good enough if I knew the whole design team behind it had made it their life's dream :p
I'm using an RX 6800 (250 watt TDP). I notice that power draw difference too if I let it be "free". There's a huge gap from the default TDP where the GPU is capable of doing all the same things at a much lower power draw. There's also a point where FPS drops if you limit the GPU clock any further, but maybe the trade off is fine for less power used.

Otherwise, I notice significant differences in just adjusting some settings. Ray-tracing reflections and psycho lighting each tax me for about 40 watts. But using FSR or XESS negates the tax. Alternatively, I can reduce my GPU clock from 2600MHz to 2000MHz and lose 20-30FPS in exchange for using 100 watts less.
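To put that trade-off into rough numbers, here is a tiny frames-per-watt sketch. The absolute FPS and wattage values are only illustrative placeholders; the real figures from my testing are just the deltas (about 100 watts saved for 20-30 FPS lost):

# Rough frames-per-watt comparison for the clock reduction described above.
# Absolute numbers are illustrative; only the deltas (~100 W less, ~25 FPS less) follow the post.
high_clock = {"fps": 110, "watts": 250}  # ~2600 MHz (assumed baseline)
low_clock = {"fps": 85, "watts": 150}    # ~2000 MHz (~100 W less, ~25 FPS less)

for name, cfg in (("2600 MHz", high_clock), ("2000 MHz", low_clock)):
    print(f"{name}: {cfg['fps'] / cfg['watts']:.2f} FPS per watt")
# Despite the FPS loss, the lower clock ends up noticeably more efficient per frame.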

In the end, things are working as they should: if a scene is less demanding, reduce the power needed; if more power is needed but it can't be found, reduce the FPS.

I don't know if this is something the developers are optimising for but it's appreciated if so.

Xeshra: Aaahhh! It must be this "spec": https://www.youtube.com/watch?v=LYjUCKLyBLI&t=4s
That's got to be like 10,000+ watts (assuming these are just mid-range cards...) Jeez.
Post edited December 23, 2024 by botan9386
YouTube feeds me videos about the current 'crisis' in game engines. This one covers the upcoming DLSS 4 and what kind of decrease in graphical quality we can expect in the future.

He even reaches out to people to stand up against these insane development practices... the 'future-proof' ones, e.g. UE5, or at least according to some...

link: https://www.youtube.com/watch?v=CjbYwanN8os
DLSS depends on the quality mode. As I said, if it is used at native resolution + DLAA, the quality is good, but most gamers are not using it this way... because... obviously... it is very taxing.

I can just say that using a 3090 Ti at 1080p with DLAA + Epic, I can reach 60 FPS in Stalker 2, but it will produce around 35-70% load depending on the scenario = 150-350 W... so basically, at the most demanding settings, it is a 1080p or... maybe... 1440p card.

Simply use a weaker DLSS mode or native res without DLAA? Both will produce better FPS, but the issue is... I get bad graphical artifacts, or at least "grainy artifacts", on many textures. It is so ugly... I will instantly switch back to native + DLAA. Not sure if this is a bug, but for me the game only looks great using the native + DLAA combo, the most demanding setting.

4K may work using a 4090, but it would also come critically close to the maximum load (at these settings) and in many cases even dip below 60 FPS, in which case it would not entirely be able to hold the performance minimum. Perhaps the devs are able to optimize Stalker 2 even more, but I would not hold my breath... even just to fix all (well, most of) the bugs they may need a full year of work.

In this case... even better to use a 5090... but that one is... ugh... not at all affordable, and it is safe to say it will never become affordable, not even in several years... because Nvidia lacks competition.

No matter what... an almost fully capable 4K card, in this case, is at least a matter of 2000 coins, it seems. A bulletproof 4K card is even up to 3000+ coins. So, the current hardware demand for "proper graphics" combined with suitable performance is surely very high, very, very high.

It works for me using my current card, but only because I still do not use 4K... and it is clear to me: if I go 4K, I would need at least the performance of a 5090, else I will pass on it (it is not sufficient if a 4090 can only barely produce 60 FPS TODAY). At least I can still use my 1080p plasma for several more years to come... so no need to buy a new TV yet... the hardware simply cannot keep up (with the software demand) on PC.

I mean, I may not even care if the picture is missing some "oomph", maybe no good reflection on a puddle... but nope... the true issue is "artifacting", or grainy and in some cases way too soft textures (a visible lack of crispness, it simply is not sharp), and I am very sensitive to this, because I simply notice it... some people apparently don't (kinda lucky... as it will be cheaper for sure). And nope... no 4K TV is needed for detecting those issues... but a big quality screen definitely "helps" (well... not really... you know what I mean).

I would not say Stalker 2 is graphically weak, definitely not, it has very good graphics for sure... by current standards, but it needs a lot of help from the hardware, and this means it is very taxing. Same issue for a lot of games, especially UE5 games.
Post edited December 24, 2024 by Xeshra
we all need (sharp) edges!!

I have this guy working for me, well, I mean under me, who's able to create fantastic models, moulds, for different concrete shapes. He's really an expert at creating these amazingly sharp lines that really stand out once the concrete shape comes out of the mould. The effect is that the edge becomes a real selling point for that specific model, even when the job specifications do not require said effect.

Something to ponder, if you like.

For the rest, it was not for nothing that I decided earlier this year that I needed a good 1080p gaming monitor in my life, with all my worries 'bout 'the green thumb' etc.

Of course current-gen developments are worrying, especially if the 5090 manages to top the 4090 by yet another 200% (on paper). Still, I don't think we have ever had a problem of some titles not running on current-gen hardware, or that we ever will (exceptions further defining this now-established rule).
Especially because the gaming scene is 'protected' (I never imagined saying something like this) by the big corpos: Nintendo, Sony, etc.
Post edited December 24, 2024 by P. Zimerickus
"Not running" is a relative term: Guess even the Switch can "run" almost any game if optimized in a excessive way. However... the results is not what makes me smile and nothing i would ever call "next gen"... or simply immersive in a graphical way. If i just want to play games "with the good old graphics"... okay... but i already got hundreds of classic games handling it properly. I do expect more tech count from a "modern game", preferably without artifacting or many other issues of "graphical incompetence".

Sure, we are currently in a difficult "shift" between "normal rendering" based on rasterization... and "advanced rendering" including RT and AI. This shift toward the new technologies is a very tough one, as the changes are currently so big that even the most capable people and companies will need many years to make it happen in a way that leads to great results. So we are simply in a beta phase that will last for many years. At least over time the improvements will clearly become "visible", but until then... it is a rough time for hardware and even software.
Post edited December 24, 2024 by Xeshra
Xeshra: In this case... even better to use a 5090... but that one is... ugh... not at all affordable, and it is safe to say it will never become affordable, not even in several years... because Nvidia lacks competition.
To be honest, if we take a good look at the 7900 XTX against the 4090, it was only ever a 10-20% downgrade in average rasterization performance. To put this into numbers, if a 4090 is managing 200fps then a 7900 XTX is probably managing 170fps. The 7900 XTX was supposed to compete with the 4080 but it was priced so well that it managed to sway some potential 4090 buyers to the 7900 XTX instead.

Where the 4090 was truly valuable was in ray tracing, and with so many games releasing with heavy ray-tracing features, a 4090 was necessary to enjoy them at their highest settings. I wonder how things would have looked if the 7000 series had been only 25% behind in ray tracing as opposed to 50%. All of a sudden, the 4090 would not have been the only GPU where heavy ray tracing was possible.

In any case, we can expect the 5090 to sell out instantly and likely be resold at a markup for years. The 4090 never returned to MSRP after release unless you bought it used. Meanwhile, all AMD GPUs came down in price, including their high end. NVIDIA understands that they hold a lot of power in this space and exercise it accordingly.

Remember just how much the PC gaming space hated the 4060 cards? To this day, they are a horrible deal. Yet their worst model (the 4060 Ti) has sold more units than alternatives that are both cheaper and faster, like AMD's 7700 XT or RX 6800. That's NVIDIA's power.
Post edited December 24, 2024 by botan9386
Isn't UE owned by Tencent?
dangerous-boy: Isn't UE owned by Tencent?
Afaik, Epic owns UE and Tencent has shares in Epic but not enough to make decisions by themselves. It's unlikely that they control UE.
botan9386: I'm using an RX 6800 (250 watt TDP). I notice that power draw difference too if I let it be "free". There's a huge gap from the default TDP where the GPU is capable of doing all the same things at a much lower power draw. There's also a point where FPS drops if you limit the GPU clock any further, but maybe the trade off is fine for less power used.

Otherwise, I notice significant differences in just adjusting some settings. Ray-tracing reflections and psycho lighting each tax me for about 40 watts. But using FSR or XESS negates the tax. Alternatively, I can reduce my GPU clock from 2600MHz to 2000MHz and lose 20-30FPS in exchange for using 100 watts less.

In the end, things are working as they should: if a scene is less demanding, reduce the power needed; if more power is needed but it can't be found, reduce the FPS.

I don't know if this is something the developers are optimising for but it's appreciated if so.

Xeshra: Aaahhh! It must be this "spec": https://www.youtube.com/watch?v=LYjUCKLyBLI&t=4s
botan9386: That's got to be like 10,000+ watts (assuming these are just mid-range cards...) Jeez.
Just had another look under the hood, so to speak.

1080p, 90 FPS, all settings low, no ray tracing: around a 100W power draw on my 3090 Ti in your apartment. HDR enabled (Windows HDR, not the native one).

First 2 ray-tracing settings enabled: 80 FPS, sort of stable, 180W power draw.

All ray-tracing settings enabled, no path tracing: 78 FPS, 245W power draw.

With path tracing: 75 FPS, 350W power draw.

Ridiculous! Uhm, DLSS was enabled, quality: Balanced.

Cyberpunk 2077
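If anyone wants to play with those numbers, here is a quick sketch that turns the measurements above into FPS per watt; the FPS and wattage figures are taken straight from this post, nothing else is assumed:

# FPS per watt for the Cyberpunk 2077 measurements above (3090 Ti, 1080p, DLSS Balanced).
measurements = [
    ("All low, no RT", 90, 100),
    ("First 2 RT settings", 80, 180),
    ("All RT, no path tracing", 78, 245),
    ("Path tracing", 75, 350),
]

for label, fps, watts in measurements:
    print(f"{label:26s} {fps:3d} FPS @ {watts:3d} W -> {fps / watts:.2f} FPS/W")
# Path tracing costs about 3.5x the power of the low preset for roughly the same frame rate.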
Post edited December 24, 2024 by P. Zimerickus
P. Zimerickus: Just had another look under the hood, so to speak.

1080p, 90 FPS, all settings low, no ray tracing: around a 100W power draw on my 3090 Ti in your apartment. HDR enabled (Windows HDR, not the native one).

First 2 ray-tracing settings enabled: 80 FPS, sort of stable, 180W power draw.

All ray-tracing settings enabled, no path tracing: 78 FPS, 245W power draw.

With path tracing: 75 FPS, 350W power draw.

Ridiculous! Uhm, DLSS was enabled, quality: Balanced.

Cyberpunk 2077
100 watts to 350 watts for path tracing is rough. At that point I'd feel like the feature isn't worth more than 15 minutes just testing out the visuals. Needless to say, my GPU also reaches its power limit with path tracing but I'm only getting 30fps with path tracing + FSR balanced so it's even less worthwhile.