Could the PS6 be the last console generation with a significant improvement in compute and graphics? Miniaturization keeps delivering diminishing returns with each shrink, and prices of electronics are going up (even sans tariffs), led by the increase in the price of making chips. Alternate techniques have slowly been introduced to offset the compute deficit: first post-processing AA in the seventh generation, then "temporal everything" hacks (including TAA) in the previous generation, and finally minor usage of AI upscaling in the current generation and (projected) major usage of AI upscaling and frame-gen in the next one.
However, I'm pessimistic about how this can keep evolving. RT already takes a non-trivial amount of the transistor budget, and now those high-end AI solutions require another considerable chunk of it. If we are already reaching the limits of what non-generative AI upscaling and frame-gen can do, I can't see where a PS7 can go other than using generative AI to interpret a very crude low-detail frame and generate a highly detailed photorealistic scene from it, but that will, I think, require many times more transistor budget than what will likely ever be economically achievable for a whole PS7 system.
Will that be the end of consoles? Will everything move to the cloud, with a power-guzzling 4 kW machine taking care of rendering your PS7 game?
I really can only hope there is a breakthrough in miniaturization and we can go back to a pace of improvement that can actually give us a new generation of consoles (and computers) that makes the transition from an SNES to an N64 feel quaint.
My kids are playing Fortnite on a PS4. It works, they are happy, and I feel the rendering is really good (but I am an old guy); normally, the only problem while playing is the stability of the Internet connection.
We also have a lot of fun playing board games and card games, simple stuff design-wise; there, the gameplay is the fun factor. Yes, better hardware may bring more realism, more of x or y, but my feeling is that the real driver, long term, is the quality of the gameplay. Like the quality of the storytelling in a good movie.
Every generation thinks the current generation of graphics won't be topped, but I think you have no idea what putting realtime generative models into the rendering pipeline will do for realism. We will finally get rid of the uncanny valley effect with facial rendering, and the results will almost certainly be mindblowing.
I think the inevitable near future is that games are not just upscaled by AI, but they are entirely AI generated in realtime. I’m not technical enough to know what this means for future console requirements, but I imagine if they just have to run the generative model, it’s… less intense than how current games are rendered for equivalent results.
I don't think you grasp how many GPUs are used to run world simulation models. It is vastly more compute-intensive than the currently dominant paradigm of realtime rendering of rasterized triangles.
Yeah, which is pretty slow due to the need to autoregressively generate each image frame token in sequence. And leading diffusion models need to progressively denoise each frame. These are very expensive computationally. Generating the entire world using current techniques is incredibly expensive compared to rendering and rasterizing triangles, which is almost completely parallelized by comparison.
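To make the gap concrete, here's a rough back-of-the-envelope sketch; every number in it (token counts, per-token compute, denoising steps) is an illustrative assumption, not a measurement of any particular model or GPU:

```python
# Rough back-of-the-envelope sketch; all numbers are illustrative assumptions.
frame_tokens = 32 * 32          # assume a 32x32 latent grid of image tokens per frame
flops_per_token = 2e9           # assume ~2 GFLOPs of model compute per generated token
fps = 60

# Autoregressive generation: tokens are produced one after another,
# so per-frame work scales with token count and can't be trivially parallelized.
autoregressive_flops_per_s = frame_tokens * flops_per_token * fps

# Diffusion-style generation: tokens are produced in parallel,
# but the whole frame is re-processed once per denoising step.
denoise_steps = 20
diffusion_flops_per_s = frame_tokens * flops_per_token * denoise_steps * fps

print(f"autoregressive: {autoregressive_flops_per_s / 1e12:.0f} TFLOP/s")
print(f"diffusion:      {diffusion_flops_per_s / 1e12:.0f} TFLOP/s")
# Either way this lands in the hundreds to thousands of TFLOP/s per player,
# i.e. data-center-class GPUs, versus a rasterizer that fits in a console APU.
```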
I’m thinking more procedural generation of assets. If done efficiently enough, a game could generate its assets on the fly, and plan for future areas of exploration. It doesn’t have to be rerendered every time the player moves around. Just once, then it’s cached until it’s not needed anymore.
Unreal engine 1 looks good to me, so I am not a good judge.
I keep thinking there is going to be a video game crash soon, an oversaturation of samey games. But I'm probably wrong about that. I just think that's what Nintendo had right all along: if you commoditize games, they become worthless. We have an endless choice of crap now.
In 1994, at age 13, I stopped playing games altogether. Endless 2D fighters and 2D platformers were just boring. It took playing Wave Race and GoldenEye on the N64 to drag me back in. They were truly extraordinary and completely new experiences (me and my mates never liked Doom).
Anyway, I don't see this kind of shift ever happening again. In fact, talking to my 13-year-old nephew confirms what I (probably wrongly) believe: he's complaining there's nothing new. He's bored of Fortnite and Minecraft and whatever else. It's like he's experiencing what I experienced, but I doubt a new generation of hardware will change anything.
> Unreal engine 1 looks good to me, so I am not a good judge.
But we did hit a point where the games were good enough, and better hardware just meant more polygons, better textures, and more lighting. The issue with Unreal Engine 1 (or maybe just games of that era) was that the worlds were too sparse.
> oversaturation of samey games
So that's the thing. Are we at a point where graphics and gameplay in 10-year-old games is good enough?
> Are we at a point where graphics and gameplay in 10-year-old games is good enough?
Personally, there are enough good games from the 32bit generation of consoles, and before, to keep me from ever needing to buy a new console, and these are games from ~25 years ago. I can comfortably play them on a MiSTer (or whatever PC).
Nearly, if the graphics aren’t adding to the fun and freshness of the game. Rewatching old movies instead of seeing new ones is already a trend. Video games are a medium ripe for the same thing already.
Now I'm going to disagree with myself... there came a point where movies started innovating in storytelling rather than in the technical aspects (think Panavision). Anything SFX-driven is different, but the stories movies tell and how they tell them changed, even where the technology to tell them was already there.
Hmm, wrong? If everyone can make games, the floor rises, making the "industry standard" for a game really high.
While I agree with you that if everything is A then A doesn't mean anything, the problem is that A doesn't vanish; it just moves to another, higher tier.
That's the Nintendo way. Avoiding the photorealism war altogether by making things intentionally sparse and cartoony. Then you can sell cheap hardware, make things portable etc.
It sounds like even the PS6 isn’t going to have a significant improvement, and that the PS5 was the last such console. PS5 Pro was the first console focused on fake frame generation instead of real output resolution/frame rate improvements, and per the article the PS6 is continuing that trend.
In the past, a game console might launch at a high price point and then, after a few years, the price would go down and they could release a new console at a price close to where the last one started.
Blame crypto, AI, COVID, but there has been no price drop for the PS5, and if there were going to be a PS6 that was really better it would probably have to cost upwards of $1000, and you might as well get a PC. Sure, there are people who haven’t tried Steam + an Xbox controller and think PC gaming is all unfun and sweaty, but they will come around.
Inflation. The PS5 standard at $499 in 2019 is $632 in 2025 money, which is about the same as the 1995 PS1 when adjusted for inflation: $299 (1995) is $635 (2025). https://www.usinflationcalculator.com/
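A minimal sketch of that adjustment; the cumulative CPI factors are rounded approximations of what the linked calculator reports:

```python
# Minimal sketch of the inflation adjustment above; the CPI factors are
# rounded approximations of the linked calculator's output, not exact data.
def to_2025_dollars(price, cumulative_cpi_factor):
    return price * cumulative_cpi_factor

ps1_1995 = to_2025_dollars(299, 2.124)   # 1995 -> 2025
ps5_2019 = to_2025_dollars(499, 1.266)   # 2019 -> 2025

print(f"PS1 launch price in 2025 dollars: ${ps1_1995:.0f}")   # ~$635
print(f"PS5 launch price in 2025 dollars: ${ps5_2019:.0f}")   # ~$632
# Adjusted for inflation, the two launch prices land within a few dollars of each other.
```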
When I bought a PS1 around 1998-99 I paid $150, and I think that included a game or two. It's the later-in-the-lifecycle price that has really changed (didn't the last iteration of it get down to either $99 or $49?)
The main issue with inflation is that my salary is not inflation adjusted. Thus the relative price increase adjusted by inflation might be zero but the relative price increase adjusted by my salary is not.
The phrase “cost of living increase” is used to refer to an annual salary increase designed to keep up with inflation.
Typically, you should be receiving at least an annual cost of living increase each year. This is standard practice at every company I’ve ever worked for and it’s a common practice across the industry. A true raise is the amount above and beyond the annual cost of living increase.
If your company has been keeping your salary fixed during this time of inflation, then you are correct that you are losing earning power. I would strongly recommend you hit the job market if that’s the case because the rest of the world has moved on.
In some of the lower wage brackets (not us tech people) the increase in wages has actually outpaced inflation.
Typically "Cost Of Living" increases target roughly inflation. They don't really keep up though, due to taxes.
If you've got a decent tech job in Canada your marginal tax rate will be near 50%. Any new income is taxed at that rate, so that 3% COL raise is really a 1.5% raise in your purchasing power, which typically makes you worse off.
Until you're at a very comfortable salary, you're better off job hopping to boost your salary. I'm pretty sure all the financial people are well aware they're eroding their employees' salaries over time, and are hoping you are not aware.
Tax brackets also shift through time, though less frequently. So if you only get COL increases for 20 years you’re going to be reasonably close to the same after tax income barring significant changes to the tax code.
Thank you for your concern but I'm in Germany so the situation is a bit different and only very few companies have been able to keep up with inflation around here. I've seen at least a few adjustments but would not likely find a job that pays as well as mine does 100% remote. Making roughly 60K in Germany as a single in his 30s isn't exactly painful.
> but would not likely find a job that pays as well as mine does 100% remote.
That makes sense. The market for remote jobs has been shrinking while more people are competing for the smaller number of remote jobs. In office comes with a premium now and remote is a high competition space.
As long as I need a mouse and keyboard to install updates or to install/start my games from GOG, it's still going to be decidedly unfun, but hopefully Windows' upcoming built-in controller support will make it less unfun.
Today you can just buy an Xbox controller and pair it with your Windows computer and it just works, and it’s the same with the Mac.
You don’t have to install any drivers or anything and with the big screen mode in Steam it’s a lean back experience where you can pick out your games and start one up without using anything other than the controller.
I like big picture mode in Steam, but.... controller support is spotty across Steam games, and personally I think you need both a Steam controller and a DualSense or Xbox controller. Steam also updates itself by default every time you launch, and you have to deal with Windows updates and other irritations. Oh, here's another update for .net, wonderful. And a useless new AI agent. SteamOS and Linux/Proton may be better in some ways, but there are still compatibility and configuration headaches. And half my Steam library doesn't even work on macOS, even games that used to work (not to mention the issues with intel vs. Apple Silicon, etc.)
The "it just works" factor and not having to mess with drivers is a huge advantage of consoles.
Apple TV could almost be a decent game system if Apple ever decided to ship a controller in the box and stopped breaking App Store games every year (though live service games rot on the shelf anyway.)
DualShock 4 and DualSense support under Linux is rock-solid, wired or wireless. That's to be expected since the drivers are maintained by Sony[1]. I have no idea about the Xbox controller, but I know the DualSense works perfectly with Steam/Proton out of the box, with the vanilla Linux kernel.
I have clarified that I meant controller support in the Steam games themselves. Some of them work well, some of them not so well. Others need to be configured. Others only work with a Steam controller. I wish everything worked well with DualSense, especially since I really like its haptics, but it's basically on the many (many) game developers to provide the same kind of controller support that is standard on consoles.
Thanks for the clarification. I've run into that a couple of times - Steam's button remapping helps sometimes, but you'd have to remember which controller button the on-screen symbol maps to.
But when I have to install drivers, or install a non-Steam game, I can't do that with the controller yet. That's what I need for PC gaming to work in my living room.
Or you just need a Steam controller. They're discontinued now but work well as a mouse+keyboard for desktop usage. It got squished into the Steam Deck so hopefully there's a new version in the future.
But now you’re assuming the PC isn’t also getting more expensive.
If a console designed to break even is $1,000 then surely an equivalent PC hardware designed to be profitable without software sales revenue will be more expensive.
Lower latency between your input and its results appearing on the screen is exactly what a fundamental benefit is.
The resolution part is even sillier - you literally get more information per frame at higher resolutions.
Yes, the law of diminishing returns still applies, but 720p@60Hz is way below the optimum. I'd estimate 4K@120Hz as the low end of optimal, maybe? There's some variance w.r.t. the application, a first-person game is going to have different requirements from a movie, but either way 720p ain't it.
Screen size is pretty much irrelevant, as nobody is going to be watching it at nose-length distance to count the pixels. What matters is angular resolution: how much area does a pixel take up in your field of vision? Bigger screens are going to be further away, so they need the same resolution to provide the same quality as a smaller screen which is closer to the viewer.
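To put rough numbers on the angular-resolution point, here's a small sketch; the screen sizes and viewing distances are assumptions for a typical desk vs. couch setup:

```python
import math

# Rough sketch: what matters is pixels per degree of your field of view,
# not screen size on its own. Sizes/distances below are assumed examples.
def pixels_per_degree(horizontal_pixels, diagonal_in, distance_in, aspect=16 / 9):
    width_in = diagonal_in * aspect / math.hypot(aspect, 1)
    fov_deg = 2 * math.degrees(math.atan((width_in / 2) / distance_in))
    return horizontal_pixels / fov_deg

# 32" 4K monitor at ~24" (desk) vs. 65" 4K TV at ~96" (couch):
print(f"{pixels_per_degree(3840, 32, 24):.0f} px/deg")   # desk monitor, ~64
print(f"{pixels_per_degree(3840, 65, 96):.0f} px/deg")   # living-room TV, ~117
# The bigger-but-farther TV ends up with *more* pixels per degree here,
# which is why diagonal size alone tells you very little.
```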
Resolution-wise, it depends a lot on the kind of content you are viewing as well. If you're looking at a locally-rendered UI filled with sharp lines, 720p is going to look horrible compared to 4k. But when it comes to video you've got to take bitrate into account as well. If anything, a 4k movie with a bitrate of 3Mbps is going to look worse than a 720p movie with a bitrate of 3Mbps.
I definitely prefer 4k over 720p as well, and there's a reason my desktop setup has had a 32" 4k monitor for ages. But beyond that? I might be able to be convinced to spend a few bucks extra for 6k or 8k if my current setup dies, but anything more would be a complete waste of money - at reasonable viewing distances there's absolutely zero visual difference.
We're not going to see 10,000Hz 32k graphics in the future, simply because nobody will want to pay extra to upgrade from 7,500Hz 16k graphics. Even the "hardcore gamers" don't hate money that much.
There's a noticeable and obvious improvement from 720 to 1080p to 4k (depending on the screen size). While there are diminishing gains, up to at least 1440p there's still a very noticeable difference.
> Somewhere between 60 hz and 240hz, theres zero fundamental benefits. Same for resolution.
Also not true. While the difference between 40fps and 60fps is more noticeable than, say, the difference from 60 to 100fps, the latter is still noticeable enough. Add to that the reduction in latency, which is also very noticeable.
That would be a very obvious and immediately noticeable difference, but you need enough FPS rendered (natively, not with latency-increasing frame generation) and a display that can actually do 240Hz without becoming a smeary mess.
If you have this combination and play with it for an hour, you'll never want to go back to a locked 100Hz game. It's rather annoying in that regard, actually.
Even with frame generation it is incredibly obvious. The latency for sure is a downside, but 100 FPS vs 240 FPS is extremely evident to the human visual system.
Consoles are the perfect platform for a proper pure ray tracing revolution.
Ray tracing is the obvious path towards perfect photorealistic graphics. The problem is that ray tracing is really expensive, and you can't stuff enough ray tracing hardware into a GPU which can also run traditional graphics for older games. This means games are forced to take a hybrid approach, with ray tracing used to augment traditional graphics.
However, full-scene ray tracing has essentially a fixed cost: the hardware needed depends primarily on the resolution and framerate, not the complexity of the scene. Rendering a million photorealistic objects is not much more compute-intensive than rendering a hundred cartoon objects, and without all the complicated tricks needed to fake things in a traditional pipeline any indie dev could make games with AAA graphics. And if you have the hardware for proper full-scene raytracing, you no longer need the whole AI upscaling and framegen to fake it...
Ideally you'd want a GPU which is 100% focused on ray tracing and ditches the entire legacy triangle pipeline - but that's a very hard sell in the PC market. Consoles don't have that problem, because not providing perfect backwards compatibility for 20+ years of games isn't a dealbreaker there.
I believe with an existing BVH acceleration structure, the average case time complexity is O(log n) for n triangles. So not constant, but logarithmic. Though for animated geometry the BVH needs to be rebuilt for each frame, which might be significantly more expensive depending on the time complexity of BVH builds.
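A toy sketch of that traversal behaviour, assuming an already-built, reasonably balanced BVH (1D intervals stand in for bounding boxes, strings for triangles):

```python
# Toy sketch: with a prebuilt, balanced BVH the query only descends the nodes
# whose bounds actually contain it, so it touches ~log2(n) of them, not n.
class Node:
    def __init__(self, lo, hi, left=None, right=None, tri=None):
        self.lo, self.hi = lo, hi          # bounds of everything below this node
        self.left, self.right = left, right
        self.tri = tri                     # leaf payload (a "triangle")

def build(leaves):
    if len(leaves) == 1:
        lo, hi, tri = leaves[0]
        return Node(lo, hi, tri=tri)
    mid = len(leaves) // 2
    l, r = build(leaves[:mid]), build(leaves[mid:])
    return Node(min(l.lo, r.lo), max(l.hi, r.hi), left=l, right=r)

def traverse(node, x, visited):
    """Collect leaf payloads whose bounds contain x, counting visited nodes."""
    if node is None or not (node.lo <= x <= node.hi):
        return []
    visited[0] += 1
    if node.tri is not None:
        return [node.tri]
    return traverse(node.left, x, visited) + traverse(node.right, x, visited)

n = 1 << 16                                   # 65,536 disjoint "triangles"
bvh = build([(i, i + 0.5, f"tri{i}") for i in range(n)])
visited = [0]
hits = traverse(bvh, 12345.25, visited)
print(hits, visited[0], "of", 2 * n - 1)      # ['tri12345'] 17 of 131071 nodes
```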
What if we keep the number of triangles constant per pixel, independently of scene complexity, through something like virtualized geometry? Though this would then require rebuilding part of the BVH each frame, even for static scenes, which is probably not a constant-time operation.
> Rendering a million photorealistic objects is not much more compute-intensive than rendering a hundred cartoon objects
Surely ray/triangle intersection tests, brdf evaluation, acceleration structure rebuilds (when things move/animate) all would cost more in your photorealistic scenario than the cartoon scenario?
Combining both ray tracing (including path tracing, which is a form of ray tracing) and rasterization is the most effective approach. The way it is currently done is that primary visibility is calculated using triangle rasterization, which produces perfectly sharp and noise free textures, and then the ray traced lighting (slightly blurry due to low sample count and denoising) is layered on top.
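Schematically, the per-frame structure described above looks something like this; the stages are stubbed out with trivial placeholders, and the names are illustrative stand-ins rather than any particular engine's API:

```python
from dataclasses import dataclass

@dataclass
class GBuffer:
    albedo: float
    normal: tuple
    depth: float

def rasterize_gbuffer(scene, camera):
    # Primary visibility via rasterization: sharp, noise-free surface data.
    return GBuffer(albedo=0.8, normal=(0.0, 1.0, 0.0), depth=10.0)

def trace_lighting(scene, gbuffer, samples_per_pixel=1):
    # Low-sample-count ray traced lighting (shadows/reflections/GI), using the
    # rasterized G-buffer as the first hit. Noisy by construction.
    return 0.6

def denoise(noisy_light, gbuffer):
    # Spatial/temporal denoising makes 1-2 spp usable, at the cost of slight blur.
    return noisy_light

def composite(albedo, light):
    # Sharp rasterized detail modulated by the (slightly soft) lighting layer.
    return albedo * light

def render_frame(scene, camera):
    gbuffer = rasterize_gbuffer(scene, camera)
    light = denoise(trace_lighting(scene, gbuffer, samples_per_pixel=1), gbuffer)
    return composite(gbuffer.albedo, light)

print(render_frame(scene=None, camera=None))   # 0.48 for this single toy pixel
```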
> However, full-scene ray tracing has essentially a fixed cost: the hardware needed depends primarily on the resolution and framerate, not the complexity of the scene.
That's also true for modern rasterization with virtual geometry. Virtual geometry keeps the number of rendered triangles roughly proportional to the screen resolution, not to the scene complexity. Moreover, virtual textures also keep the amount of texture detail in memory roughly proportional to the screen resolution.
The real advantage of modern ray tracing (ReSTIR path tracing) is that it is independent of the number of light sources in the scene.
I know this isn't an original idea, but I wonder if this will be the trick for a step-level improvement in visuals: use traditional 3D models for the broad strokes and generative AI for texture and lighting details. We're at diminishing returns for adding polygons and better lighting, and generative AI seems to be better at improving from there—when it doesn't have to get the finger count right.
After raytracing, the next obvious massive improvement would be path tracing.
And while consoles usually lag behind the latest available graphics, I'd expect raytracing and even path tracing to become available to console graphics eventually.
One advantage of consoles is that they're a fixed hardware target, so games can test on the exact hardware and know exactly what performance they'll get, and whether they consider that performance an acceptable experience.
There is no real difference between "Ray Tracing" and "Path Tracing", or rather, the former is just the operation of intersecting a ray with a scene (and not a rendering technique), while the latter is a way to solve the integral that approximates the rendering equation (hence, it could be considered a rendering technique). Sure, you can go back to the terminology used by Kajiya in his earlier works etc., but that was only an "academic terminology game" which is worthless today. Today, the former has been accelerated by HW for around a decade (I am counting the PowerVR Wizard); the latter is how most non-realtime rendering renders frames.
You can not have "Path Tracing" in games, not according to what it is. And it also probably does not make sense, because the goal of real-time rendering is not to render the perfect frame at any time, but to produce the best reactive, coherent sequence of frames possible in response to simulation and player inputs. That being said, HW ray tracing is still somewhat game-changing because it shapes SIMT HW to be good at inherently divergent computation (e.g. traversing a graph of nodes representing a scene): following this direction, many more things will be unlocked in real-time simulation and rendering. But not 6k samples unidirectionally path-traced per pixel in a game.
> If you have some issue with that terminology, by all means raise that issue, but "You can not have" is just factually incorrect here.
It is not incorrect because, at least for now, all those "path tracing" modes do not compute multiple "paths" (each made of multiple cast rays) per pixel; they rasterize primary visibility and then either fire 1 (in rare occasions, 2) rays for such a pixel, or, more often, read a value from a special local cache called a "reservoir" or from a radiance cache - which is sometimes a neural network. All of this goes against the definition of path tracing that your first article itself gives :D
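A toy sketch of the contrast being drawn here, with made-up numbers and a fake one-line "scene": offline path tracing averages many full multi-bounce paths per pixel, while the realtime modes rasterize primaries and lean on roughly one ray per pixel plus a reservoir/cache that amortizes lighting over pixels and frames:

```python
import random

# Everything here is schematic: radiance_along() is a stand-in for shading one
# randomly sampled light path, not a real renderer.
def radiance_along(path):
    return random.random() / (len(path) + 1)

def offline_path_traced_pixel(spp=1024, max_bounces=8):
    # Offline path tracing: average many full multi-bounce paths per pixel.
    total = 0.0
    for _ in range(spp):
        path = []
        for _ in range(max_bounces):
            path.append("bounce")
            total += radiance_along(path)
            if random.random() < 0.3:      # Russian-roulette style termination
                break
    return total / spp                     # thousands of rays for ONE pixel

def realtime_rt_pixel(reservoir):
    # Current game "path tracing" modes: the primary hit comes from rasterization,
    # then ~1 new ray per pixel updates/reads a reservoir or radiance cache that
    # spreads the cost over neighbouring pixels and previous frames.
    new_sample = radiance_along(["bounce"])
    reservoir["value"] = 0.95 * reservoir["value"] + 0.05 * new_sample
    return reservoir["value"]

print("offline estimate :", offline_path_traced_pixel())
print("realtime estimate:", realtime_rt_pixel({"value": 0.5}))
```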
I don't have problems with many people calling it "path tracing" in the same way I don't have issues with many (more) people calling Chrome "Google" or any browser "the internet", but if one wants to talk about future trends in computing (or is posting on hacker news!) I believe it's better to indicate a browser as a browser, Google as a search engine, and Path Tracing as what it is.
There's likely still room to go super wide with CPU cores and much more RAM, but everyone is talking about neural nets, so that's what the press release is about.
Not all games need horsepower. We're now past the point of good enough to run a ton of them. Sure, tentpole attractions will warrant more and more, but we're turning back to mechanics, input methods, gameplay, storytelling. If you play 'old' games now, they're perfectly playable. Just like older movies are perfectly watchable. Not saying you should play those (you should), but there's not much of a leap needed to keep such ideas going strong and fresh.
This is my take as well. I haven’t felt that graphics improvement has “wowed” me since the PS3 era honestly.
I’m a huge fan of Final Fantasy games. Every mainline game (those with just a number; excluding 11 and 14 which are MMOs) pushes the graphical limits of the platforms at the time. The jump from 6 to 7 (from SNES to PS1); from 9 to 10 (PS1 to 2); and from 12 to 13 (PS3/X360) were all mind blowing. 15 (PS4) and 16 (PS5) were also major improvements in graphics quality, but the “oh wow” generational gap is gone.
And then I look at the gameplay of these games, and it’s generally regarded as going in the opposite direction - it’s all subjective of course, but 10 is generally regarded as the last “amazing” overall game, with opinions dropping off from there.
We’ve now reached the point where an engaging game with good mechanics is way more important than graphics: case in point being Nintendo Switch, which is cheaper and has much worse hardware, but competes with the PS5 and massively outsells Xbox by huge margins, because the games are fun.
It's not just technology that's eating away at console sales, it's also the fact that 1) nearly everything is available on PC these days (save Nintendo with its massive IP), 2) mobile gaming, and 3) there's a limitless amount of retro games and hacks or mods of retro games to play and dedicated retro handhelds are a rapidly growing market. Nothing will ever come close to PS2 level sales again. Will be interesting to see how the video game industry evolves over the next decade or two. I suspect subscriptions (sigh) will start to make up for lost console sales.
Doubtful; they say this with every generation of consoles and even gaming PC systems. When its popularity decreases, then profits decrease, and then maybe it will be "the last generation".
Gaming using weird tech is not a hardware manufacturer or availability issue. It is a game studio leadership problem.
Even in the latest versions of unreal and unity you will find the classic tools. They just won't be advertised and the engine vendor might even frown upon them during a tech demo to make their fancy new temporal slop solution seem superior.
The trick is to not get taken for a ride by the tools vendors. Real time lights, "free" anti aliasing, and sub-pixel triangles are the forbidden fruits of game dev. It's really easy to get caught up in the devil's bargain of trading unlimited art detail for unknowns at end customer time.
Hard assets and things with finite supply. Anything real. Gold, bitcoin, small cap value stocks, commodities, treasuries (if you think the government won't fail).
If the Internet goes away, Bitcoin goes away. That's a real threat in a bunch of conceivable societal failure scenarios. If you want something real, you want something that will survive the loss of the internet. Admittedly, what you probably want most in those scenarios is diesel, vehicles that run on diesel, and salt. But a pile of gold still could be traded for some of those.
Everyone always talks like societal collapse is global. Take a small pile of gold and use it to buy a plane ticket somewhere stable with internet, and your bitcoin will be there waiting for you.
Beyond the PS6, the answer is very clearly graphics generated in real time via a transformer model.
I’d be absolutely shocked if in 10 years, all AAA games aren’t being rendered by a transformer. Google’s veo 3 is already extremely impressive. No way games will be rendered through traditional shaders in 2035.
The future of gaming is the Grid-Independent Post-Silicon Chemo-Neural Convergence, the user will be injected with drugs designed by AI based on a loose prompt (AI generated as well, because humans have long lost the ability to formulate their intent) of the gameplay trip they must induce.
Now that will be peak power efficiency and a real solution for the world where all electricity and silicon are hogged by AI farms.
Baidu Apollo Go completes millions of rides a year as well, with expansions into Europe and the Middle East. In China they've been active for a long time - during COVID they were making autonomous deliveries.
It is odd how many people don't realize how developed self-driving taxis are.
It did flop, but still a hefty loaf of money was sliced off in the process.
Those with the real vested interest don't care if that flops, while zealous worshippers to the next brand new disruptive tech are just a free vehicle to that end.
Just because it's possible doesn't mean it is clearly the answer. Is a transformer model truly likely to require less compute than current methods? We can't even run models like Veo 3 on consumer hardware at their current level of quality.
Even in a future with generative UIs, those UIs will be composed from pre-created primitives just because it's faster and more consistent, there's literally no reason to re-create primitives every time.
This _might_ be true, but it's utterly absurd to claim this is a certainty.
The images rendered in a game need to accurately represent a very complex world state. Do we have any examples of Transformer based models doing something in this category? Can they do it in real-time?
I could absolutely see something like rendering a simplified and stylised version and getting Transformers to fill in details. That's kind of a direct evolution from the upscaling approach described here, but end to end rendering from game state is far less obvious.
> I could absolutely see something like rendering a simplified and stylised version and getting Transformers to fill in details. That's kind of a direct evolution from the upscaling approach described here, but end to end rendering from game state is far less obvious.
Sure. This could be a variation. You do a quick render that any GPU from 2025 can do and then make the frame hyper realistic through a transformer model. It's basically saying the same thing.
The main rendering would be done by the transformer.
Already in 2025, Google Veo 3 is generating pixels far more realistic than AAA games. I don't see why this wouldn't be the default rendering mode for AAA games in 2035. It's insanity to think it won't be.
> Google Veo 3 is generating pixels far more realistic than AAA games
That’s because games are "realtime", meaning with a tight frame-time budget. AI models are not (and are even running on multiple cards each costing 6 figures).
Well you missed the point. You could call it prompt adherence. I need veo to generate the next frame in a few milliseconds, and correctly represent the position of all the cars in the scene (reacting to player input) reliably to very high accuracy.
You conflate the challenge of generating realistic pixels with the challenge of generating realistic pixels that represent a highly detailed world state.
So I don't think your argument is convincing or complete.
> Already in 2025, Google Veo 3 is generating pixels far more realistic than AAA games.
Traditional rendering techniques can also easily exceed the quality of AAA games if you don't impose strict time or latency constraints on them. Wake me up when a version of Veo is generating HD frames in less than 16 milliseconds, on consumer hardware, without batching, and then we can talk about whether that inevitably much smaller model is good enough to be a competitive game renderer.
Doesn’t this imply that a transformer or NN could fill in details more efficiently than traditional techniques?
I’m really curious why this would be preferable for a AAA studio game outside of potential cost savings. Also imagine it’d come at the cost of deterministic output / consistency in visuals.
Genie 3 is already a frontier approach to interactive generative world views no?
It will be AI all the way down soon. The models internal world view could be multiple passes and multi layer with different strategies... In any case; safe to say more AI will be involved in more places ;)
I am super intrigued by such world models. But at the same time it's important to understand where they are at. They are celebrating the achievement of keeping the world mostly consistent for 60 seconds, and this is 720p at 24fps.
I think it's reasonable to assume we won't see this tech replace game engines without significant further breakthroughs...
For LLMs agentic workflows ended up being a big breakthrough to make them usable. Maybe these World Models will interact with a sort of game engine directly somehow to get the required consistency. But it's not evident that you can just scale your way from "visual memory extending up to one minute ago" to 70+ hour game experiences.
Neural net is already being used via DLSS. Neural rendering is the next step. And finally, a full transformer based rendering pipeline. My guess anyway.
That's just not efficient. AAA games will use AI to pre-render assets, and use AI shaders to make stuff pop more, but on the fly asset generation will still be slow and produce low quality compared to offline asset generation. We might have a ShadCN style asset library that people use AI to tweak to produce "realtime" assets, but there will always be an offline core of templates at the very least.
I was going to say "again?", but then I recalled DirectX 12 was released 10 years ago and now I feel old...
The main goal of Direct3D 12, and subsequently Vulkan, was to allow for better use of the underlying graphics hardware as it had changed more and more from its fixed-pipeline roots.
So maybe the time is ripe for a rethink, again.
In particular, the frame generation features, upscaling and frame interpolation, have promise but need to be integrated in a different way, I think, to really be of benefit.
The rethink is already taking place via mesh shaders and neural shaders.
You aren't seeing them adopted that much because the hardware still isn't deployed at a scale that games can count on them being available, which in turn means the developer experience of adopting them isn't improving either.
Yeah but that doesn't mean that much of Mantle is recognizeable in Vulkan, because Vulkan wanted to cover the entire range of GPU architectures (including outdated and mobile GPUs) with a single API, while Mantle was designed for modern (at the time) desktop GPUs (and specifically AMD GPUs). Vulkan basically took an elegant design and "ruined" it with too much real-word pragmatism ;)
The industry, and the gaming community at large, is just long past being interested in graphics advancement. AAA games are too complicated and expensive; the whole notion of ever more complex and grandiose experiences doesn't scale. Gamers are fractured along thousands of small niches, even in terms of timeline, with the 80s, 90s, and PS1 eras each having a small circle of businesses serving them.
The times of console giants, their fiefdoms and the big game studios is coming to an end.
I'll take the other side of this argument and state that people are interested in better graphics, BUT they expect to see an equally better simulation to go along with them. People aren't excited for GTA6 just because of the graphics, but because they know the simulation is going to be better than anything they've seen before. They need to go hand in hand.
That's totally where all this is going. More horsepower on a GPU doesn't necessarily mean it's all going towards pixels on the screen. People will get creative with it.
I disagree - current gen console aren't enough to deliver smooth immersive graphics - I played BG3 on PS first and then on PC and there's just no comparing the graphics. Cyberpunk same deal. I'll pay to upgrade to consistent 120/4k and better graphics, and I'll buy the games.
And there are AAA that make and will make good money with graphics being front and center.
>aren't enough to deliver smooth immersive graphics
I'm just not sold.
Do I really think that BG3 being slightly prettier than, say, Dragon Age / Skyrim / etc made it a more enticing game? Not to me certainly. Was cyberpunk prettier than Witcher 3? Did it need to be for me to play it?
My query isn't about whether you can get people to upgrade to play new stuff (always true). But whether they'd still upgrade if they could play on the old console with worse graphics.
I also don't think anyone is going to suddenly start playing video games because the graphics improve further.
> Do I really think that BG3 being slightly prettier than, say, Dragon Age / Skyrim / etc made it a more enticing game?
Absolutely - graphical improvements make the game more immersive for me, and I don't want to go back and replay the games I spent hundreds of hours in during the mid-2000s, like say NWN or Icewind Dale (never played BG 2). It's just not the same feeling now that I've played games with incomparable graphics, polished mechanics and movie-level voice acting/mocap cutscenes. I even picked up Mass Effect recently out of nostalgia but gave up fast because it just isn't as captivating as it was back when it was peak graphics.
> it’s odd how quickly people handwave away graphics in a visual medium.
There is a difference between graphics as in rendering (i.e. the technical side, how something gets rendered) and graphics as in aesthetics (i.e. visual styles, presentation, etc).
The latter is important for games because it can be used to evoke some feel to the player (e.g. cartoony Mario games or dreadful Silent Hill games). The former however is not important by itself, its importance only comes as means to achieve the latter. When people handwave away graphics in games they handwave the misplaced focus on graphics-as-in-tech, not on graphics-as-in-aesthetics.
I don't know what these words mean to you vs. what they mean to me. But whatever you call the visual quality that Baldur's Gate 3, Cyberpunk 2077, and most flagship AAA titles are chasing - the thing that makes them have "better graphics" and be "more immersive" - it is not the only way to paint the medium.
Very successful games are still being made that use sprites, low-res polygons, cel shading, etc. While these techniques still can run into hardware limits, they generally don't benefit from the sort of improvements (and that word is becoming ever more debatable with things like AI frame generation) that make for better looking [whatever that quality is called] games.
And not caring as much about those things doesn't mean I don't understand that video games are a visual medium.
This is just one type of graphics. And focusing too heavily on it is not going to be enough to keep the big players in the industry afloat for much longer. Some gamers care--apparently some care a lot--but that isn't translating into enough sales to overcome the bloated costs.
For me, the better the graphics, mocap, etc., the stronger the uncanny valley feeling - i.e. I stop perceiving it as a video game, but instead see it as an incredibly bad movie.
> I don't want to go back and replay the games I spent hundreds of hours in during the mid-2000s, like say NWN or Icewind Dale (never played BG 2). It's just not the same feeling now that I've played games with incomparable graphics, polished mechanics and movie-level voice acting/mocap cutscenes. I even picked up Mass Effect recently out of nostalgia but gave up fast because it just isn't as captivating as it was back when it was peak graphics.
And yet many more have no such issue doing exactly this. Despite having a machine capable of the best graphics at the best resolution, I have exactly zero issues going back and playing older games.
Just in the past month alone with some time off for surgery I played and completed Quake, Heretic and Blood. All easily as good, fun and as compelling as modern titles, if not in some ways better.
- How difficult it must be for the art/technical teams at game studios to figure out, for all the detail they are capable of putting on screen, how much of it will be appreciated by gamers. Essentially making sure that anything they're going to budget a significant amount of worker time to creating isn't something gamers will run right past and ignore, and that it contributes meaningfully to 'more than the sum of its parts'.
- As much as technology is an enabler for art, alongside the install-base issue: how well does pursuing new methods fit how their studio is used to working, and is the payoff there if they spend time adapting? A lot of the gaming business is about shipping product, and the studio's concern is primarily about getting content to gamers rather than chasing tech, as that is what lets their business continue; selling GPUs/consoles is another company's business.
Being an old dog that still cares about gaming, I would assert many games are also not taking advantage of current-gen hardware: coded in Unreal and Unity, a kind of Electron for games, in what concerns making use of the hardware that's there.
There is a reason there are so many complaints on social media about it being obvious to gamers which game engine a game was written in.
It used to be that game development quality was taken more seriously, when they were sold via storage media, and there was a deadline to burn those discs/cartridges.
Now they just ship whatever is done by the deadline, and updates will come later via a DLC, if at all.
It is pretty simple to bootstrap an engine. What isn’t simple is supporting asset production pipelines on which dozen/hundreds of people can work on simultaneously, and on which new hires/contractors can start contributing right away, which is what modern game businesses require and what unity/unreal provide.
Unreal and Unity would be less problematic if these engines were engineered to match the underlying reality of graphics APIs/drivers, but they're not. Neither of these can systematically fix the shader stuttering they are causing architecturally, and so essentially all games built on these platforms are sentenced to always stutter, regardless of hardware.
Both of these seem to suffer from incentive issues similar to enterprise software: They're not marketing and selling to either end users or professionals, but studio executives. So it's important to have - preferably a steady stream of - flashy headline features (e.g. nanite, lumen) instead of a product that actually works on the most basic level (consistently render frames). It doesn't really matter to Epic Games that UE4/5 RT is largely unplayable; even for game publishers, if you can pull nice-looking screenshots out of the engine or do good-looking 24p offline renders (and slap "in-game graphics" on them), that's good enough.
The shader stutter issues are non-existent on console, which is where most of their sales are. PC, as it has been for almost two decades, is an afterthought rather than a primary focus.
The shader stutter issues are non-existent on console because consoles have one architecture and you can ship shaders as compiled machine code.
For PC you don't know what architecture you will be targeting, so you ship some form of bytecode that needs to be compiled on the target machine.
Agreed. I didn't mean to say consoles' popularity is why they don't have shader stutter, but rather it's why implementing a fix on PC (e.g. precompilation at startup) isn't something most titles bother with.
It's not just popularity, Epic has been trying really hard to solve it in Unreal Engine.
The issue is that, because of monolithic pipelines, you have to provide the exact state the shaders will be used in. There's a lot of that, and a large part of it depends on user authored content, which makes it really hard to figure out in advance.
It's a fundamental design mistake in D3D12/Vulkan that is slowly being corrected, but it will take some time (and even more for game engines to catch up).
That's why I said "precompilation at startup". That has users compile for their precise hardware/driver combination prior to the game trying to use them for display.
Even this is just guesswork for the way these engines work, because they literally don't know what set of shaders to compile ahead of time. Arbitrary scripting can change that on a frame-by-frame basis, shader precompilation in these engines mostly relies on recording shader invocations during gameplay and shipping that list. [1]
Like, on the one hand, you have engines/games which always stutter, have more-or-less long "shader precompilation" splashscreens on every patch and still stutter anyway. The frametime graph of any UE title looks like a topographic cross-section of Verdun. On the other hand there are titles not using those engines where you wouldn't even notice there were any shaders to precompile which... just run.
> In a highly programmable real-time rendering environment such as Unreal Engine (UE), any application with a large amount of content has too many GPU state parameters that can change to make it practical to manually configure PSOs in advance. To work around this complication, UE can collect data about the GPU state from an application build at runtime, then use this cached data to generate new PSOs far in advance of when they are used. This narrows down the possible GPU states to only the ones used in the application. The PSO descriptions gathered from running the application are called PSO caches.
> The steps to collect PSOs in Unreal are:
> 1. Play the game.
> 2. Log what is actually drawn.
> 3. Include this information in the build.
> After that, on subsequent playthroughs the game can create the necessary GPU states earlier than they are needed by the rendering code.
Of course, if the playthrough used for generating the list of shaders doesn't hit X codepath ("oh, this particular spell was not cast while holding down shift"), a player hitting it will then get a 0.1s game pause when they invariably do.
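A minimal sketch of that flow and its failure mode, with illustrative names rather than Unreal's actual API:

```python
# Minimal sketch of the PSO-cache flow from the quoted docs, and of the failure
# mode above. Names and structure are illustrative, not any engine's real API.
recorded_pso_keys = set()
compiled = {}

def record_playthrough(states_drawn):
    # Steps 1-2: play the game, log which pipeline states actually got drawn.
    recorded_pso_keys.update(states_drawn)

def precompile_at_startup():
    # Step 3 + subsequent playthroughs: build every recorded PSO up front.
    for key in recorded_pso_keys:
        compiled[key] = f"machine code for {key}"

def draw(pso_key):
    # If the shipped cache missed this state (e.g. "spell X while holding shift"
    # never happened during the recorded run), we compile mid-frame: a hitch.
    if pso_key not in compiled:
        compiled[pso_key] = f"machine code for {pso_key}"   # ~0.1s pause here
        return "stutter"
    return "smooth"

record_playthrough({"opaque+shadow", "water+refraction"})
precompile_at_startup()
print(draw("opaque+shadow"))       # smooth
print(draw("fire_spell+shift"))    # stutter: state never seen during capture
```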
If anything I think PC has been a prototyping or proving grounds for technologies on the roadmap for consoles to adopt. It allows software and hardware iterations before it's relied upon in a platform that is required to be stable and mostly unchanging for around a decade from designing the platform through developers using it and recently major refreshes. For example from around 2009 there were a few cross platform games with the baseline being 32bit/DX9 capabilities, but optional 64bit/DX11 capabilities, and given the costs and teams involved in making the kind of games which stretch those capabilities I find it hard to believe it'd be one or a small group of engineers putting significant time into an optional modes that aren't critical to the game functioning and supporting them publicly. Then a few years later that's the basis of the next generation of consoles.
Long first runs seem like an unambiguous improvement over stutter to me. Unfortunately, you still get new big games like Borderlands 4 that don't fully precompile shaders.
Depending on the game and the circumstances, I'm getting some cases of 20-40 minutes to compile shaders. That's just obscene to me. I don't think stutter is better but neither situation is really acceptable. Even if it was on first install only it would be bad, but it happens on most updates to the game or the graphics drivers, both of which are getting updated more frequently than ever.
Imagine living in a reality where the studio exec picks the engine based on getting screenshots 3 years later when there's something interesting to show.
I mean, are you actually talking from experience at all here?
It's really more that engines are an insane expense in money and time and buying one gets your full team in engine far sooner. That's why they're popular.
A PC costs a lot and depreciates fast; by the end of a console lifecycle I can still count on developers targeting it - PC performance for 6+ year hardware is guaranteed to suck. And I'm not a heavy gamer - I'll spend ~100h on games per year, but so will my wife and my son - and a PC sucks for multiple people using it, while the PS is amazing. I know I could concoct some remote play setup via LAN on the TV to let my wife and kids play, but I just want something I spend a few hundred EUR on, plug into the TV, and it just works.
Honestly the only reason I caved with the GPU purchase (which cost the equivalent of a PS pro) was the local AI - but in retrospect that was useless as well.
> by the end of a console lifecycle I can still count on developers targeting it
And I can count on those games still being playable on my six year old hardware because they are in fact developed for 6 year old hardware.
> PC performance for 6+ year hardware is guaranteed to suck
For new titles at maximum graphics level sure. For new titles at the kind of fidelity six year old consoles are putting out? Nah. You just drop your settings from "ULTIMATE MAXIMUM HYPER FOR NEWEST GPUS ONLY" to "the same low to medium at best settings the consoles are running" and off you go.
Advancements in lighting can help all games, not just AAA ones.
For example, Tiny Glade and Teardown have ray traced global illumination, which makes them look great with their own art style, rather than expensive hyper-realism.
But currently this is technically hard to pull off, and works only within certain constrained environments.
Devs are also constrained by the need to support multiple generations of GPUs. That's great from perspective of preventing e-waste and making games more accessible. But technically it means that assets/levels still have to be built with workarounds for rasterized lights and inaccurate shadows. Simply plugging in better lighting makes things look worse by exposing the workarounds, while also lacking polish for the new lighting system. This is why optional ray tracing effects are underwhelming.
Nintendo dominated last generation with switch. The games were only HD and many at 30fps. Some AAA didn't even get ported to them. But they sold a ton of units and a ton of games and few complained because they were having fun which is what gaming is all about anyways.
> That is a different audience than people playing on pc/xbox/ps5.
Many PC users also own a Switch. It is in fact one of the most common pairings. There is very little from PS/Xbox that I can't get on PC, so there's very little point in owning one; I won't get any of the Nintendo titles on PC, so keeping a Switch around makes significantly more sense if I want to cover my bases for exclusives.
Have you played it? I haven't so I'm just basing my opinion on some YouTube footage I've seen.
BF1 is genuinely gorgeous, I can't lie. I think it's the photogrammetry. Do you think the lighting is better in BF1? I'm gonna go out on a limb and say that BF6's lighting is more dynamic.
Yes I played it on a 4090. The game is good but graphics are underwhelming.
To my eyes everything looked better in BF1.
Maybe it's trickery but it doesn't matter to me. BF6, new COD, and other games all look pretty bad. At least compared to what I would expect from games in 2025.
I don't see any real differences from similar games released 10 years ago.
Exploding production cost is pretty much the only reason (e.g. we hit diminishing returns in overall game asset quality vs production cost at least a decade ago), plus on the tech side a brain drain from rendering tech to AI tech (or whatever the current best-paid mega-hype is). Also, working in gamedev simply isn't "sexy" anymore since it has been industrialized into essentially assembly-line jobs.
It's not, though. The use of RT in games is generally limited to secondary rays; the primaries are still rasterized. (Though the rasterization is increasingly done in “software rendering”, aka compute shaders.)
As you can tell, I'm patient :) A very important quality for any ray tracing enthusiast lol
The ability to do irregular sampling, efficient shadow computation (every flavour of shadow mapping is terrible!) and global illumination is already making its way into games, and path tracing has been the algorithm of choice in offline rendering (my profession since 2010) for quite a while already.
Making a flexible rasterisation-based renderer is a huge engineering undertaking, see e.g. Unreal Engine. With the relentless march of processing power, and finally having hardware acceleration as rasterisation has enjoyed for decades, it's going to be possible for much smaller teams to deliver realistic and creative (see e.g. Dreams[0]) visuals with far less engineering effort. Some nice recent examples of this are Teardown[1] and Tiny Glade[2].
It's even more inevitable from today's point of view than it was back in the 90s :)
AFAICT it's not really different, they're just calling it something else for marketing reasons. The system described in the Sony patent (having a fixed-function unit traverse the BVH asynchronously from the shader cores) is more or less how Nvidia's RT cores worked from the beginning, as opposed to AMDs early attempts which accelerated certain intersection tests but still required the shader cores to drive the traversal loop.
My old ray tracer could do arbitrary quadric surfaces, toroids with 2 minor radii, and CSG of all those. Triangles too (no CSG). It was getting kind of fast 20 years ago - 10fps at 1024x768. Never had good shading though.
I should dig that up and add NURBS and see how it performs today.
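Not the parent's code, but for anyone curious, the core of ray/quadric intersection is just substituting the ray into the implicit surface and solving a quadratic; a small sketch (assuming a symmetric matrix A and skipping degenerate cases):

```python
import math

# Sketch of ray/quadric intersection: plug the ray o + t*d into the implicit
# quadric x^T A x + b.x + c = 0 and solve the resulting quadratic in t.
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def matvec(A, v): return tuple(dot(row, v) for row in A)

def intersect_quadric(o, d, A, b, c):
    """Return the nearest positive hit distance t, or None (A assumed symmetric)."""
    Ad, Ao = matvec(A, d), matvec(A, o)
    qa = dot(d, Ad)
    qb = 2 * dot(o, Ad) + dot(b, d)
    qc = dot(o, Ao) + dot(b, o) + c
    if abs(qa) < 1e-12:
        return None                        # degenerate case skipped in this sketch
    disc = qb * qb - 4 * qa * qc
    if disc < 0:
        return None                        # ray misses the surface
    ts = sorted(((-qb - math.sqrt(disc)) / (2 * qa),
                 (-qb + math.sqrt(disc)) / (2 * qa)))
    return next((t for t in ts if t > 1e-6), None)

# Unit sphere at the origin is the quadric x^2 + y^2 + z^2 - 1 = 0:
A = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
print(intersect_quadric(o=(0, 0, -3), d=(0, 0, 1), A=A, b=(0, 0, 0), c=-1))  # 2.0
```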
It feels like each time SCE makes a new console, it always comes with some novelty that's supposed to change the field forever, but after two years it always ends up just another console.
Games written for the PlayStation exclusively get to take advantage of everything, but there is nothing to compare the release to.
Alternatively, if a game is released cross-platform, there's little incentive to tune the performance past the benchmarks of comparable platforms. Why make the PlayStation game look better than the Xbox version if it involves rewriting engine-layer stuff to take advantage of the hardware, for one platform only?
Basically all of the most interesting utilization of the hardware comes at the very end of the consoles lifecycle. It’s been like that for decades.
I think apart from cross-platform woes (if you can call it that), it's also that the technology landscape would shift, two or few years after the console's release:
For the PS2, game consoles didn't become the centre of home computing; for the PS3, programming against the GPU, not some exotic processor, became the standard way of doing real-time graphics, plus home entertainment moved on to take other forms (like watching YouTube on an iPad instead of having a media centre set up around the TV); for the PS4, people didn't care if the console does social networking; PS5 has been practical, it's just the technology/approach ended up adopted by everyone, so it lost its novelty later on.
You've got a very "interesting" history there; it's certainly not particularly grounded in reality, however.
The PS3's edge was generally seen as the DVD player.
That's why Sony went with Blu-ray in the PS4, hoping to capitalize on the next medium, too.
While that bet didn't pay out, Xbox kinda self-destructed, consequently making them the dominant player anyway.
Finally:
> PS5 has been practical, it's just the technology/approach ended up adopted by everyone, so it lost its novelty later on.
PS5 did not have any novel approach that was consequently adopted by others. The only thing "novel" in the current generation is frame generation, and that was already being pushed for years by the time Sony jumped on that bandwagon.
You're right, I mixed up the version numbers from memory. I'd contest the statement "the history is wrong" though; that's an extremely minor point relative to what I was writing.
> PS5 did not have any novel approach that was consequently adopted by others
DualSense haptics are terrific, though the Switch kind of did them first with the Joy-Cons. I'd say haptics and adaptive triggers are two features that should become standard. Once you have them you never want to go back.
PS5's fast SSD was a bit of a game changer in terms of load time and texture streaming, and everyone except Nintendo has gone for fast m.2/nvme storage. PS5 also finally delivered the full remote play experience that PS3 and PS4 had teased but not completed. Original PS5 also had superior thermals vs. PS4 pro, while PS5 pro does solid 4K gaming while costing less than most game PCs (and is still quieter than PS4 pro.) Fast loading, solid remote play, solid 4K, low-ish noise are all things I don't want to give up in any console or game PC.
My favorite PS5 feature however is fast game updates (vs. PS4's interminable "copying" stage.) Switch and Switch 2 also seem to have fairly fast game updates, but slower flash storage.
That is very country-specific: in many countries home computers have dominated ever since the 8-bit days, whereas in others consoles have dominated since the Nintendo/SEGA days.
Also, tons of blue-collar people bought Chinese NES clones even in the mid-90's (at least in Spain), while some people with white-collar jobs bought their kids a PlayStation. And OFC the Brick Game Tetris console was everywhere.
By the late 90's, yes, most people could afford a PlayStation, but as for myself, I got a computer in the very early 00's and I would emulate the PSX and most N64 games just fine (my computer wasn't a high-end one, but the emulators were good enough to play the games at 640x480 with a bilinear filter).
Yet those companies don’t necessarily compete on performance and comparisons, but instead for their own profit. If Nintendo makes a profit from selling a device that runs a game at a lower spec than Sony's, they’re happy with it. Computing devices aren’t driven by performance only.
It’s also that way on the C64 - while it came out in 1982, people figured out how to get 8-bit sound and high-resolution color graphics with multiple sprites only after 2000…
Maybe I ate too much marketing but it does feel like having the PS5 support SSDs raised the bar for how fast games are expected to load, even across platforms.
Not just loading times, but I expect more games do more aggressive dynamic asset streaming. Hopefully we'll get less 'squeeze through this gap in the wall while we hide the loading of the next area of the map' in games.
Technically the PS4 supported 2.5" SATA or USB SSDs, but yeah PS5 is first gen that requires SSDs, and you cannot run PS5 games off USB anymore.
It does, but I don't think that's necessarily a bad thing; they at least are willing to take some calculated risks on architecture, since consoles have essentially collapsed into being PCs internally.
I don't think it's a bad thing either. Consoles are a curious breed in today's consumer electronics landscape, it's great that someone's still devoted to doing interesting experiments with it.
That was kind of true until the Xbox 360 and, later, Unity; those ended the era of consoles as machines made of quirks, and of game design as primarily a software-architecture problem. The definitive barrier to entry for indie gamedevs before Unity was the ability to write a toy OS, a rich 3D engine, and a GUI toolkit by themselves. Only a little storytelling skill was needed.
Consoles also partly had to be quirky dragsters because of Moore's Law - they had to be years ahead of the PC, because they had to be at least comparable to PC games at the end of their lifecycle, not utterly obsolete.
Funny, I thought the biggest improvement of the PS5 was actually the crazy fast storage. No loading screens is a real game changer. I would love to get Xbox-style instant resume on PlayStation.
The hardware 3D audio acceleration (basically fancy HRTFs) is also really cool, but almost no 3rd party games use it.
I've had issues with Xbox instant resume. Lots of "your save file has changed since the last time you played, so we have to close the game and relaunch" issues. Even when the game was suspended an hour earlier. I assume it's just cloud save time sync issues where the cloud save looks newer because it has a timestamp 2 seconds after the local one. Doesn't fill me with confidence, though.
Pretty sure they licensed a compression codec from RAD and implemented it in hardware, which is why storage is so fast on the PS5. Sounds like they're doing the same thing for GPU transfers now.
Storage on the PS5 isn't really fast. It's just not stupidly slow. At the time of release, the raw SSD speeds for the PS5 were comparable to the high-end consumer SSDs of the time, which Sony achieved by using a controller with more channels than usual so that they didn't have to source the latest NAND flash memory (and so that they could ship with only 0.75 TB capacity). The hardware compression support merely compensates for the PS5 having much less CPU power than a typical gaming desktop PC. For its price, the PS5 has better storage performance than you'd expect from a similarly-priced PC, but it's not particularly innovative and even gaming laptops have surpassed it.
The most important impact by far of the PS5 adopting this storage architecture (and the Xbox Series X doing something similar) is that it gave game developers permission to make games that require SSD performance.
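As a rough sketch of why the decompression block matters (the raw bandwidth figure is the commonly cited PS5 SSD spec; the compression ratio is an assumption for illustration only):

```python
# Back-of-the-envelope asset streaming throughput. The raw figure is the
# commonly cited PS5 SSD spec; the compression ratio is an illustrative
# assumption (real ratios vary a lot by asset type).
raw_bandwidth_gb_s = 5.5          # GB/s of raw sequential reads
assumed_compression_ratio = 1.7   # on-disk size -> decompressed size

effective_gb_s = raw_bandwidth_gb_s * assumed_compression_ratio
print(f"~{effective_gb_s:.1f} GB/s of decompressed assets delivered to the game")

# Doing this decompression in software at these rates would eat several CPU
# cores, which is exactly the gap the fixed-function decompressor covers.
```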
So, you're saying they built a novel storage architecture that competed with state-of-the-art consumer hardware at a lower price point, that five years later laptops are just catching up, and that at the same price point it's faster than what you'd expect from a PC.
The compression codec they licensed was built by some of the best programmers alive [0], and was later acquired by Epic [1]
I dunno how you put those together and come up with "isn't really fast" or "not particularly innovative".
Fast doesn't mean 'faster than anything else in existence'. Fast is relative to other existing solutions with similar resource constraints.
Their storage architecture was novel in that they made different tradeoffs than off-the-shelf SSDs for consumer PCs, but there's absolutely no innovation in copying and pasting four more NAND PHYs that are each individually running at outdated speeds for the time. Sony simply made a short-term decision to build a slightly more expensive SSD controller to enable significant cost savings on the NAND flash itself. That stopped mattering within a year of the PS5 launching, because off-the-shelf 8-channel drives with higher speeds were no longer in short supply.
"Five years later, laptops are just catching up" is a flat out lie.
"at the same price point, it's faster than what you'd expect from a PC" sounds impressive until you remember that the entire business model of Sony and Microsoft consoles is to sell the console at or below cost and make the real money on games, subscription services, and accessories.
The only interesting or at all innovative part of this story is the hardware decompression stuff (that's in the SoC rather than the SSD controller), but you're overselling it. Microsoft did pretty much the same thing with their console and a different compression codec. (Also, the fact that Kraken is a very good compression method for running on CPUs absolutely does not imply that it's the best choice for implementing in silicon. Sony's decision to implement it in hardware was likely mainly due to the fact that lots of PS4 games used it.) Your own source says that space savings for PS5 games were more due to the deduplication enabled by not having seek latency to worry about, than due to the Kraken compression.
This video is a direct continuation of the one where Cerny explains the logic behind the PlayStation 5 Pro design and says that the path forward for them is rendering a near-perfect low-res image, then upscaling it to 4K with neural networks.
How good will it be? Just look at current upscalers working on perfectly rendered images - photos. And they aren't even doing it in realtime. So errors, noise, and artefacts are all but inevitable. Those will be masked by post-processing techniques that will inevitably degrade image clarity.
It only takes a marketing psyop to alter the perception of the end user with the slogans along the lines of "Tired of pixel exactness, hurt by sharpness? Free YOUR imagination and embrace the future of ever-shifting vague forms and softness. Artifact stands for Art!"
I'm replaying CP2077 for the third time, and all the sarcastic marketing material and ads you find in the game don't seem so sarcastic after all when you really think about the present.
I don't know, I think it's conceivable that you could get much much better results from a custom upscale per game.
You can give much more input than a single low res frame. You could throw in motion vectors, scene depth, scene normals, unlit color, you could separately upscale opaque, transparent and post process effect... I feel like you could really do a lot more.
Plus, aren't cellphone camera upscalers pretty much realtime these days? I think you're comparing generating an image from scratch to what would actually be happening.
> I think it's conceivable that you could get much much better results from a custom upscale per game.
> You can give much more input than a single low res frame. You could throw in motion vectors, scene depth, scene normals, unlit color, you could separately upscale opaque, transparent and post process effect... I feel like you could really do a lot more.
NVIDIA has already been down that road. What you're describing is pretty much DLSS, at various points in its history. To the extent that those techniques were low-hanging fruit for improving upscaler quality, it's already been tried and adopted to the extent that it's practical. At this point, it's more reasonable to assume that there isn't much low-hanging fruit for further quality improvements in upscalers without significant hardware improvements, and that the remaining artifacts and other downsides are hard problems.
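For readers who haven't seen these pipelines, here is a purely hypothetical sketch of what such an input bundle looks like; the names are invented for illustration and are not any vendor's actual API, but DLSS/FSR/PSSR-style upscalers consume broadly similar data:

```python
# Hypothetical per-frame input bundle for a learned upscaler. Field names are
# invented for illustration; real systems define their own interfaces, but
# they consume broadly similar data alongside the low-res color frame.
from dataclasses import dataclass
from typing import Any  # stand-in for a GPU texture handle


@dataclass
class UpscalerInputs:
    low_res_color: Any       # the rendered frame at internal resolution
    motion_vectors: Any      # per-pixel screen-space motion, for temporal reuse
    depth: Any               # scene depth, helps disambiguate edges
    exposure: float          # keeps input values in a stable numeric range
    jitter_offset: tuple     # sub-pixel camera jitter applied this frame


def upscale(inputs: UpscalerInputs, target_resolution: tuple) -> Any:
    """Placeholder: a real implementation dispatches a trained network here."""
    raise NotImplementedError
```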
The amount of drama about AI based upscaling seems disproportionate. I know framing it in terms of AI and hallucinated pixels makes it sound unnatural, but graphics rendering works with so many hacks and approximations.
Even without modern deep-learning based "AI", it's not like the pixels you see with traditional rendering pipelines were all artisanal and curated.
> I am certainly not going to celebrate the reduction in image quality
What about perceived image quality? If you are just playing the game chances of you noticing anything (unless you crank up the upscaling to the maximum) are near zero.
> AI upscaling is equivalent to lowering bitrate of compressed video.
When I was a kid, people had dozens of CDs with movies, while pretty much nobody had DVDs. DVDs were simply too expensive, while Xvid allowed an entire movie to be compressed onto a CD while keeping good quality. Of course the original DVD release would've been better, but we were too poor, and watching ten movies at 80% quality was better than watching one movie at 100% quality.
DLSS effectively lets you quadruple FPS with minimal subjective quality impact. Of course a natively rendered image would've been better, but most people are simply too poor to buy a gaming rig that plays the newest games at 4K 120 FPS on maximum settings. You can keep arguing as much as you want that the natively rendered image is better, but unless you send me money to buy a new PC, I'll keep using DLSS.
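For context on where the headline multiplier comes from, a small back-of-the-envelope sketch (this is an idealized upper bound; real gains are lower once upscaler overhead and non-pixel-bound work are included):

```python
# Where the "4x" headline comes from: DLSS-style performance modes render at
# half the output resolution per axis, so the GPU shades a quarter of the
# pixels. Treat this as an upper bound, not a measured speedup.
native = (3840, 2160)
internal = (1920, 1080)

pixel_ratio = (native[0] * native[1]) / (internal[0] * internal[1])
print(f"Pixels shaded per frame drop by {pixel_ratio:.0f}x")  # -> 4x
```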
The contentious part, from what I get, is the overhead of hallucinating these pixels, on cards that also cost a lot more than the previous generation for otherwise minimal gains outside of DLSS.
Some [0] are seeing a 20 to 30% drop in actual rendered frames when activating DLSS frame generation, and that means correspondingly more latency as well.
There are still games where it should be a decent tradeoff (racing or flight simulators? Infinite Nikki?), but it's definitely not a no-brainer.
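To make that tradeoff concrete, a toy calculation using round numbers in the ballpark of the drop mentioned above (illustrative assumptions, not measurements):

```python
# Illustrative frame-generation tradeoff, using round numbers roughly in line
# with the 20-30% drop in rendered frames mentioned above. Not measurements.
base_fps = 60.0                  # native, frame generation off
rendered_fps_with_fg = 45.0      # assumed ~25% drop once frame gen is enabled
displayed_fps = rendered_fps_with_fg * 2   # each rendered frame gets a generated one

base_frame_ms = 1000 / base_fps              # ~16.7 ms between sampled inputs
fg_frame_ms = 1000 / rendered_fps_with_fg    # ~22.2 ms, before any extra
                                             # queueing the interpolation adds
print(f"Displayed: {displayed_fps:.0f} fps, but input is sampled every "
      f"~{fg_frame_ms:.1f} ms instead of ~{base_frame_ms:.1f} ms")
```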
I also find them completely useless for any games I want to play. I hope that AMD would release a card that just drops both of these but that's probably not realistic.
They will never drop ray tracing, some new games require ray tracing. The only case where I think it's not needed is some kind of specialized office prebuilt desktops or mini PCs.
There are a lot of theoretical arguments I could give you about how almost all cases where hardware BVH can be used, there are better and smarter algorithms to be using instead. Being proud of your hardware BVH implementation is kind of like being proud of your ultra-optimised hardware bubblesort implementation.
But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
A common argument is that we don't have fast enough hardware yet, or that developers haven't been able to use raytracing to its fullest yet, but it's been a pretty long damn time since this hardware became mainstream.
I think the most damning evidence of this is the just released Battlefield 6. This is a franchise that previously had raytracing as a top-level feature. This new release doesn't support it, doesn't intend to support it.
> But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
Pretty much this - even in games that have good ray tracing, I can't tell when it's off or on (except for the FPS hit) - I cared so little I bought a card not known to be good at it (7900XTX) because the two games I play the most don't support it anyway.
They oversold the technology/benefits and I wasn't buying it.
There were and always are people who swear to not see the difference with anything above 25hz, 30hz, 60hz, 120hz, HD, Full HD, 2K, 4K. Now it's ray-tracing, right.
I can see the difference in all of those. I can even see the difference between 120hz and 240hz, and now I play on 240hz.
Ray tracing looks almost indistinguishable from really good rasterized lighting in MOST conditions. In scenes with high amounts of gloss and reflections, it's a little more pronounced. A little.
From my perspective, you're getting, like, a 5% improvement in only one specific aspect of graphics in exchange for a 200% cost.
There’s an important distinction between being able to see the difference and caring about it. I can tell the difference between 30Hz and 60Hz but it makes no difference to my enjoyment of the game. (What can I say - I’m a 90s kid and 30fps was a luxury when I was growing up.) Similarly, I can tell the difference between ray traced reflections and screen space reflections because I know what to look for. But if I’m looking, that can only be because the game itself isn’t very engaging.
I think one of the challenges is that game designers have become so good at working within the non-RT constraints (and pushing those constraints back) that it's a tall order for RT's improvements to pay back the performance cost (and the new rendering quirks). There's also the fact that most companies don't want to cut off potential customers based on whether their hardware can do RT at all, or how well it performs when it does. The other big factor is whether they're trying to recreate a similar environment with RT, or taking advantage of what's only possible with the new technique, such as fully dynamic lighting, and whether that matters to the game they want to make.
To me, the appeal is that game environments can now be way more dynamic because we're not limited by prebaked lighting. The Finals does this, but doesn't require ray tracing, and it's pretty easy to tell when ray tracing is enabled: https://youtu.be/MxkRJ_7sg8Y
Because enabling raytracing means the game has to support non-raytracing too, which limits how much the game's design can take advantage of raytracing being realtime.
The only exception to this I've seen is The Finals: https://youtu.be/MxkRJ_7sg8Y . Made by ex-Battlefield devs, and the dynamic environments they shipped two years ago are on a whole other level even compared to Battlefield 6.
There's also Metro: Exodus, which the developers have re-made to only support RT lighting. DigitalFoundry made a nice video on it: https://www.youtube.com/watch?v=NbpZCSf4_Yk
naive q: could games detect when the user is "looking around" at breathtaking scenery and raytrace those? offer a button to "take picture" and let the user specify how long to raytrace? then for heavy action and motion, ditch the raytracing? even better, as the user passes through "scenic" areas, automatically take pictures in the background. Heck, this could be an upsell kind of like the RL pictures you get on the roller coaster... #donthate
Even without RT I think it'd be beneficial to tune graphics settings depending on context, if it's an action/combat scene there's likely aspects the player isn't paying attention to. I think the challenge is it's more developer work whether it's done by implementing some automatic detection or manually being set scene by scene during development (which studios probably do already where they can set up specific arenas). I'd guess an additional task is making sure there's no glaring difference between tuning levels, and setting a baseline you can't go beneath.
It will never be fast enough to work in real time without compromising some aspect of the player's experience.
Ray tracing is solving the light transport problem in the hardest way possible. Each additional bounce adds exponentially more computational complexity. The control flows are also very branchy when you start getting into the wild indirect lighting scenarios. GPUs prefer straight SIMD flows, not wild, hierarchical rabbit hole exploration. Disney still uses CPU based render farms. There's no way you are reasonably emulating that experience in <16ms.
The closest thing we have to functional ray tracing for gaming is light mapping. This is effectively just ray tracing done ahead of time, but the advantage is you can bake for hours to get insanely accurate light maps and then push 200+ fps on moderate hardware. It's almost like you are cheating the universe when this is done well.
The human brain has a built in TAA solution that excels as frame latencies drop into single digit milliseconds.
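For anyone curious what "ray tracing done ahead of time" looks like in practice, here is a minimal, toy baking sketch (ambient sky visibility only, brute-force hemisphere sampling, invented scene); the point is that an offline bake can afford thousands of rays per texel, which a realtime renderer never could:

```python
# Toy sketch of baking sky visibility (ambient occlusion) into a lightmap
# texel by casting many rays offline. Scene and sampling are deliberately
# simplistic; a real baker handles bounced light, materials, and much more.
import math, random

def ray_hits_sphere(origin, direction, center, radius):
    # Standard quadratic ray/sphere test; returns True on any forward hit.
    oc = [origin[i] - center[i] for i in range(3)]
    b = sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - c
    return disc > 0 and (-b - math.sqrt(disc)) > 1e-4

def bake_texel(position, normal, occluders, samples=1024):
    # Average visibility of the sky over the hemisphere around the normal.
    visible = 0
    for _ in range(samples):
        # Rejection-sample a unit direction, then flip it into the hemisphere.
        while True:
            d = [random.uniform(-1, 1) for _ in range(3)]
            if 0 < sum(x * x for x in d) <= 1:
                break
        length = math.sqrt(sum(x * x for x in d))
        d = [x / length for x in d]
        if sum(d[i] * normal[i] for i in range(3)) < 0:
            d = [-x for x in d]
        if not any(ray_hits_sphere(position, d, c, r) for c, r in occluders):
            visible += 1
    return visible / samples  # 1.0 = open sky, 0.0 = fully occluded

# Example: a texel on the ground underneath a unit sphere hovering above it.
print(bake_texel((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), [((0.0, 1.5, 0.0), 1.0)]))
```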
The problem is the demand for dynamic content in AAA games: large exterior and interior worlds with dynamic lights, day and night cycles, glass and translucent objects, mirrors, water, fog and smoke. Everything should be interactable and destructible. And everything should be easy for artists to set up.
I would say the closest we can get is workarounds like radiance cascades. But everything other than raytracing is just an ugly workaround which falls apart in dynamic scenarios. And don't forget that baking times, and the storage for the baked results (leading to massive game sizes), are a huge negative.
Funnily enough, raytracing is also just an approximation of the real world, but at least artists and devs can expect it to work everywhere without hacks (in theory).
Manually placed lights and baking not only take time away from iteration but also take a lot of disk space for the shadow maps. RT makes development faster for the artists; I think DF even mentioned that doing Doom Eternal without RT would take so much disk space it wouldn't be possible to ship it.
edit: not Doom Eternal, it's Doom: The Dark Ages, the latest one.
The quoted number was in the range of 70-100 GB if I recall correctly, which is not that significant for modern game sizes. I'm sure a lot of people would opt to use it as an option in exchange for a 2-3x higher framerate. I don't think anyone realistically complains about video game lighting looking too "gamey" in the middle of an intense combat sequence. Why optimize a Doom game, of all things, for standing still and side-by-side comparisons? I'm guessing NVIDIA paid good money for making RT tech mandatory.
And as for the shortened development cycle, perhaps it's cynical, but I find it difficult to sympathize when the resulting product is still sold for €80.
It's fast enough today. Metro Exodus, an RT-only game runs just fine at around 60 fps for me on a 3060 Ti. Looks gorgeous.
Light mapping is a cute trick and the reason why Mirror's Edge still looks so good after all these years, but it requires doing away with dynamic lighting, which is a non-starter for most games.
I want my true-to-life dynamic lighting in games thank you very much.
Much higher resource demands, which then require tricks like upscaling to compensate. Also, you get uneven competition between GPU vendors, because in practice it is not hardware ray tracing but Nvidia ray tracing.
On a more subjective note, you get less interesting art styles because studios somehow have to cram raytracing in as a value proposition.
1. People somehow think that just because today's hardware can't handle RT all that well it will never be able to. A laughable position of course.
2. People turn on RT in games not designed with it in mind and therefore observe only minor graphical improvements for vastly reduced performance. Simple chicken-and-egg problem, hardware improvements will fix it.
Not OP, but a lot of the current kvetching about hardware based ray tracing is that it’s basically an nvidia-exclusive party trick, similar to DLSS and physx. AMD has this inferiority complex where nvidia must not be allowed to innovate with a hardware+software solution, it must be pure hardware so AMD can compete on their terms.
The gimmicks aren't the product, and the customers of frontier technologies aren't the consumers. The gamers and redditors and smartphone fanatics, the fleets of people who dutifully buy, are the QA teams.
In accelerated compute, the largest areas of interest for advancement are 1) simulation and modeling and 2) learning and inference.
That's why this doesn't make sense to a lot of people. Sony and AMD aren't trying to extend current trends, they're leveraging their portfolios to make the advancements that will shape future markets 20-40 years out. It's really quite bold.
I disagree. From what I’ve read if the game can leverage RT the artists save a considerable amount of time when iterating the level designs. Before RT they had to place lights manually and any change to the level involved a lot of rework. This also saves storage since there’s no need to bake shadow maps.
So what stops the developers from iterating on a raytraced version of the game during development, and then executing a shadow precalcualtion step once the game is ready to be shipped? Make it an option to download, like the high resolution texture packs. They are offloading processing power and energy requirements to do so on consumer PCs, and do so in an very inefficient manner
And they're achieving "acceptable" frame rates and resolutions by sacrificing image quality in ways that aren't as easily quantified, so those downsides can be swept under the rug. Nobody's graphics benchmark emits metrics for how much ghosting is caused by the temporal antialiasing, or how much blurring the RT denoiser causes (or how much noise makes it past the denoiser). But they make for great static screenshots.
Nintendo is getting it right (maybe): focus on first-party exclusive games and, uh, a pile of indies and ports from the PS3 and PS4 eras.
Come to think of it, Sony is also stuck in the PS4 era since PS5 pro is basically a PS4 pro that plays most of the same games but at 4K/60. (Though it does add a fast SSD and nice haptics on the DualSense controller.) But it's really about the games, and we haven't seen a lot of system seller exclusives on the PS5 that aren't on PS4, PC, or other consoles. (Though I'm partial to Astro-bot and also enjoyed timed exclusives like FF16 and FF7 Rebirth.)
PS5 and Switch 2 are still great gaming consoles - PS5 is cheaper than many GPU cards, while Switch 2 competes favorably with Steam Deck as a handheld and hybrid game system.
So this is AMD catching up with Nvidia in the RT and AI upscaling/frame gen fields. Nothing wrong with it, and I am quite happy as an AMD GPU owner and Linux user.
But the way it is framed as a revolutionary step and as a Sony collab is a tad misleading. AMD is competent enough to do it by itself, and this will definitely show up in PC and the competing Xbox.
I think we don't have enough details to make statements like this yet. Sony have shown they are willing to make esoteric gaming hardware in the past (cell architecture) and maybe they'll do something unique again this time. Or, maybe it'll just use a moderately custom model. Or, maybe it's just going to use exactly what AMD have planned for the next few years anyway (as you say). Time will tell.
I'm rooting for something unique because I haven't owned a console for 20 years and I like interesting hardware. But hopefully they've learned a lesson about developer ergonomics this time around.
>Sony have shown they are willing to make esoteric gaming hardware in the past (cell architecture)
Just so we’re clear, you’re talking about a decision that didn’t really pan out made over 20 years ago.
PS6 will be an upgraded PS5 without question. You aren’t ever going to see a massive divergence away from the PC everyone took the last twenty years working towards.
The landscape favors Microsoft, but they’ll drop the ball, again.
> you’re talking about a decision that didn’t really pan out made over 20 years ago.
The PS3 sold 87m units, and more importantly, it sold more than the Xbox 360, so I think it panned out fine even if we shouldn't call it a roaring success.
It did sell less than the PS2 or PS4, but I don't think that had much to do with the Cell architecture.
Game developers hated it, but that's a different issue.
I do agree that a truly unusual architecture like this is very unlikely for the next gen though.
It sold well, but there are multiple popular games that were built for the PS3 that have not come to any other platform because porting them is exceptionally hard.
I really dislike the focus on graphics here, but I think a lot of people are missing the big chunk of the article that's focused on efficiency.
If we can get high-texture, high-throughput content like dual 4K streams but with 1080p bandwidth, we can get VR that isn't as janky. If we can get lower power consumption, we can get smaller (and cooler) form factors, which means we might see a future where the PlayStation Portal is the console itself. I'm about to get on a flight to Sweden, and I'd kill to have something like my Steam Deck but running way cooler, way more powerful, and less prone to render errors.
I get the feeling Sony will definitely focus on graphics, as that's been their play since the 90s, but my word, if we get a monumental form-factor shift and native VR support that feels closer to the promise on paper, that could be a game changer.
How about actually releasing games? GT7 and GoW Ragnarok are the only worthwhile exclusives of the current gen. This is hilariously bad for a 5-year-old console.
This. I would also add Returnal to this list, but otherwise I agree. It's hard to believe it's been almost 5 years since the release of the PS5 and there are still barely any games that look as good as The Last of Us Part II or Red Dead Redemption 2, which were released on PS4.
I would agree with this. A lot of PS5 games using UE5+ with all its features run at sub-1080p30 (sometimes sub-720p30) upscaled to 1440p/4K and still look and run way, way worse than TLOU2/RDR2/Death Stranding 1/Horizon 1 on the PS4. Death Stranding 2, Horizon 2, and the Demon's Souls remake look and run far, far better (on a purely technical level) than any other PS5 game, and they all use rasterized lighting.
So we're getting a new console just to play AI-upscaled PS4 and PS5 "remasters"... and I suspect it’ll probably come without any support for physical media. The PS5 will be my last console. There's no point anymore.
There sure is a lot of visionary(tm) thinking out there right now about the future of gaming. But what strikes me is how few of those visionaries(tm) have ever actually developed and taken a game to market.
Not entirely unlike how many AI academics who step-functioned their compensation a decade ago by pivoting to the tech industry had no experience bringing an AI product to market, yet certainly felt free to pontificate on how things are done.
I eagerly await the shakeout due from the weakly efficient market as the future of gaming ends up looking like nothing anyone imagineered.
Seems like the philosophy here is, if you're going to do AI-based rendering, might as well try it across different parts of the graphics pipeline and see if you can fine-tune it at the silicon level. Probably a microoptimization, but if it makes the PS6 look a tiny bit better than the Xbox, people will pay for that.
Yeah, I don’t recall a single original game from the PS5 exclusive lineup (that wasn’t available for PS4). We did get some remakes and sequels, but the PS5 lineup pales in comparison to the PS4 one.
Also, to my knowledge, the PS5 still lags behind the PS4 in terms of sales, despite the significant boost that COVID-19 provided.
The PS4 lineup pales in comparison to the PS3 lineup, which pales in comparison to the PS2 lineup, which pales in comparison to the PS1 lineup.
Each generation has around half the number of games as the previous. This does get a bit murky with the advent of shovelware in online stores, but my point remains.
I think all this proves is that games are now ridiculously expensive to create while meeting the expected quality standards. Maybe AI will improve this in the future. Take-Two has confirmed that GTA6's budget has exceeded US$1 billion, which is mind-blowing.
The most extreme example of this is that Naughty Dog, one of Sony's flagship first-party studios, has still yet to release a single original game for the PS5 after nearly five years. They've steadily been making fewer and fewer brand new games each generation and it's looking like they may only release one this time around. AAA development cycles are out of control.
There's simply no point in buying that console when it has, what, 7 exclusive titles that aren't shovelware? 7 titles after 5 years? And this number keeps going down because games are constantly being ported to other systems.
Yes, duh. It's a console, resolution scaling is the #1 foremost tool in their arsenal for stabilizing the framerate. I can't think of a console game made in the past decade that doesn't "fake frames" at some part of the pipeline.
I'll also go a step further - not every machine-learning pass is frame generation. Nvidia uses AI for DLAA, a form of DLSS that works with 100% input resolution as a denoiser/antialiasing combined pass. It's absolutely excellent if your GPU can keep up with the displayed content.
I can't help but think that Sony and AMD would be better off developing a GPU-style PCIe card module that has all their DRM and compute and storage on the board, and then selling consoles that are just normal gaming PCs in a conveniently sized branded case with a PS card installed. If the card were sold separately at $300-400, it would instantly take over a chunk of the PC gaming market, and upgrades would be easier.
"Project Amethyst is focused on going beyond traditional rasterization techniques that don't scale well when you try to "brute force that with raw power alone," Huynh said in the video. Instead, the new architecture is focused on more efficient running of the kinds of machine-learning-based neural networks behind AMD's FSR upscaling technology and Sony's similar PSSR system."
Graphics could stand to get toned down. It sucks to wait 7 years for a sequel to your favorite game. There was a time when sequels came out while the originals were still relevant. Now we get sequels 8 or more years apart, and for what? Better beard graphics? Beer bottles where the liquid reacts when you bump into them? Who cares!
| Game | Release Year |
|-------------------------------------------|--------------|
| GTA III | 2001 |
| GTA Vice City | 2002 |
| GTA San Andreas | 2004 |
| Sly Cooper and the Thievius Raccoonus | 2002 |
| Sly 2: Band of Thieves | 2004 |
| Sly 3: Honor Among Thieves | 2005 |
| Infamous | 2009 |
| Infamous 2 | 2011 |
We are 5 full years into the PS5's lifetime. These are the only games that are exclusive to the console.
| Game | Release Year |
|-------------------------------------------|--------------|
| Astro's Playroom | 2020 |
| Demon's Souls | 2020 |
| Destruction AllStars | 2021 |
| Gran Turismo 7 | 2022 |
| Horizon Call of the Mountain | 2023 |
| Firewall Ultra | 2023 |
| Astro Bot | 2024 |
| Death Stranding 2: On the Beach | 2025 |
| Ghost of Yōtei | 2025 |
I think this is probably on the docket. Epic seems to be in a push to offload a lot of animation work to more cores. The industry is going that way and that was a big topic at their last conference.
This reminds me of the PlayStation 2 developer manual which, when describing the complicated features of the system, said something like "there is no profit in making it easy to extract the most performance from the system."
Both raytracing and NPUs use a lot of bandwidth and that is scaling the least with time. Time will tell if just going for more programmable compute would be better
Seems they didn’t learn from the PS3, and that exotic architectures don't drive sales. Gamers don’t give a shit and devs won’t choose it unless they have a lucrative first party contract.
The entire Switch 1 game library is playable on emulators. They probably put in a custom accelerator to prevent reverse engineering, a consequence of using weaker-spec parts than their competitors.
The Switch 1 also had CUDA cores and other basic hardware accelerators. To my knowledge (and I could be wrong), none of the APIs that Nintendo exposed even gave access to those fancy features. It should just be calls to NVN, which can be compiled into Vulkan the same way DXVK translates DirectX calls.
TL;DW: it's not quite the full-fat CNN model, but it's also not a uselessly pared-back upscaler. It seems to handle antialiasing and simple upscaling well at super low TDPs (<10 W).
In this video, Alex goes in-depth on Switch 2 DLSS, confirming that there are actually two different forms of the technology available - the DLSS we know from PC gaming and a faster, far more simplified version.
I wonder how many variants of the PS6 they'll go through before they get a NIC that works right.
As someone working at an ISP, I am frustrated with how badly Sony has mangled the networking stack on these consoles. I thought BSD was supposed to be the best in breed for networking, but instead Sony has found all sorts of magical ways to make it Not Work.
From the PS5 variants that just hate 802.11ax to all the gamers making wild suggestions like changing MTU settings or DNS settings just to make your games work online... man, does Sony make it a pain for us to troubleshoot when they wreck it.
Bonus points that they took away the Web browser so we can't even try to do forward-facing troubleshooting without going through an obtuse process of the third-party-account-linking system to sneak out of the process to run a proper speedtest to Speedtest/Fast to show that "no, it's PSN being slow, not us".
No one is gonna give you some groundbreaking tech for your electronic gadget... as IBM showed when they created the Cell for Sony and then gave almost the same tech to Microsoft :D
I don’t think they ever claimed that. Every time Mark Cerny discusses PS hardware he always mentions that it’s a collaboration, so whatever works for AMD they can use on their own GPUs, even for other clients.
Maybe Sony should focus on getting a half-respectable library out on the PS5 before touting the theoretical merits of the PS6? It’s kind of wild how thin they are this go around. Their live service gambles clearly cost them this cycle and the PSVR2 landed with a thud.
Frankly after releasing the $700 pro and going “it’s basically the same specs but it can actually do 4K60 this time we promise” and given how many friends I have with the PS5 sitting around as an expensive paper weight, I can’t see a world where I get a PS6 despite decades of console gaming. The PS5 is an oversized final fantasy machine supported by remakes/remasters of all their hits from the PS3/PS4 era. It’s kind of striking when you look at the most popular games on the console.
It really doesn’t though. The library stacked against PS4’s doesn’t even compare unless you want to count cross platform and even then PS4 still smokes it. The fact that Helldivers 2 is one of the only breakout successes they’ve had (and it didn’t even come from one of their internal studios) says everything. And of course they let it go cross platform too so that edge is gone now.
All their best studios were tied up with live service games that have all been canceled. They wasted 5+ years and probably billions if we include the missed out sales. The PS4 was heavily driven by their close partner/internal teams and continue to carry a significant portion of the PS5’s playerbase.
If you don’t need Final Fantasy or to (re)play improved PS4 games, the PS5 is an expensive paperweight and you may as well just grab a series S or something for half the price, half the shelf space, and play 90% of the same games.
Let me ask you this: should we really be taking this console seriously if they’re about to go an entire cycle without naughty dog releasing a game?
We don’t need to flex about owning every console. I own basically every one as well except PS5. I kept waiting and waiting for a good sale and a good library just like PS4. The wait has not rewarded me lol
I get every console at launch, so I went from PS4 to Pro to PS5 to Pro.
At launch I really enjoyed Demon's Souls, which I never played on PS3; fantastic game. Then came Returnal, probably my favorite 1st-party game so far; I'm really looking forward to its sequel Saros next year.
I also played Ragnarok, GT7 (fantastic with PSVR2), and Horizon 2, and yes, all of these also came out for PS4, but they are undoubtedly better on the PS5. I'd get a PS5 just for the fast loading; it's awesome.
There's also Spider-Man 2, Ratchet, Death Stranding 2, Ghost of Yotei, and I'm probably leaving out others, but there are plenty of great 1st-party exclusives. There's also a bunch of great 3rd-party exclusives as well.
I don’t game on PC though, used to when I was younger but I prefer to play on consoles now and use the computers for work and other things.
All of these are available on PC and/or Xbox. Several are PS4. Not a single one is exclusive.
I understand these things don’t bother you but you can’t say it has plenty of exclusives when it literally does not. You just aren’t bothered by that fact and that is fine. But it makes me question what I would be buying when I have more affordable ways of playing all of these games since again, they have virtually no exclusives and their best studios have dropped little to nothing due to their failed gamble with live service.
The PS3/PS4 had several single player titles that you could only play on PlayStation and were made specifically for them. They weren’t resting on the laurels of previous releases and just giving them a new coat of paint. They had bigger, better, more exclusive libraries. The PS4 in particular had clear value. No one had to argue for it. The library is considered one of the best.
I am a big proponent of consoles believe it or not but frankly the PS5 is a head scratcher for me at the end of the day. Especially for the (now increased) price.
That's not correct. God of War Ragnarok and Ghost of Yotei are not on PC/Xbox. But they will probably eventually make it to PC.
Why do you think that releasing games on the PC (a year or two after the PlayStation release) is a bad thing? It means you don't need to buy a PlayStation to play their first-party titles, assuming you're a patient gamer. It also means Sony makes more money from the bigger PC market. Win-win
The majority of those games came out first on PS; they're releasing some of them later on PC, and that's fine.
Like I said, since I don't want to play on PC, the best option for me is to play them on the latest PS hardware; that a game also comes out elsewhere doesn't detract from my experience.
Again it’s not about your preference. My initial comment was “they don’t have exclusives,” which you contested, then shifted to “well it doesn’t bother me.”
I’m not debating preference. I’m saying they don’t have a robust library for the PS5 compared to previous hardware and they lack exclusives, yet here they are hyping the PS6. If you are happy with your PS5 then great! Many people are. But the library is thinner and depends on old titles. That is just reality.
Why should I expect the library to be better next iteration when they’ve farted their way through the last 5+ years and seem to have no interest doing otherwise?
I think the inevitable near future is that games are not just upscaled by AI, but they are entirely AI generated in realtime. I’m not technical enough to know what this means for future console requirements, but I imagine if they just have to run the generative model, it’s… less intense than how current games are rendered for equivalent results.
I don't think you grasp how many GPUs are used to run world simulation models. It is vastly more intensive in compute that the current dominant realtime rendering or rasterized triangles paradigm
I don't think you grasp what I'm saying? I'm talking about next token prediction to generate video frames.
Yeah, which is pretty slow due to the need to autoregressively generate each image frame token in sequence. And leading diffusion models need to progressively denoise each frame. These are very expensive computationally. Generating the entire world using current techniques is incredibly expensive compared to rendering and rasterizing triangles, which is almost completely parallelized by comparison.
I’m thinking more procedural generation of assets. If done efficiently enough, a game could generate its assets on the fly, and plan for future areas of exploration. It doesn’t have to be rerendered every time the player moves around. Just once, then it’s cached until it’s not needed anymore.
Unreal engine 1 looks good to me, so I am not a good judge.
I keep thinking there is going to be a video game crash soon, over saturation of samey games. But I'm probably wrong about that. I just think that's what Nintendo had right all along: if you commoditize games, they become worthless. We have endless choice of crap now.
In 1994, at age 13, I stopped playing games altogether. Endless 2D fighters and 2D platformers were just boring. It took playing Wave Race and GoldenEye on the N64 to drag me back in. They were truly extraordinary and completely new experiences (me and my mates never liked Doom). Anyway, I don't see this kind of shift ever happening again. In fact, talking to my 13-year-old nephew confirms what I (probably wrongly) believe: he's complaining there's nothing new. He's bored of Fortnite and Minecraft and whatever else. It's like he's experiencing what I experienced, but I doubt a new generation of hardware will change anything.
> Unreal engine 1 looks good to me, so I am not a good judge.
But we did hit a point where the games were good enough, and better hardware just meant more polygons, better textures, and more lighting. The issue with Unreal Engine 1 (or maybe just games of that era) was that the worlds were too sparse.
> over saturation of samey games
So that's the thing. Are we at a point where graphics and gameplay in 10-year-old games is good enough?
> Are we at a point where graphics and gameplay in 10-year-old games is good enough?
Personally, there are enough good games from the 32bit generation of consoles, and before, to keep me from ever needing to buy a new console, and these are games from ~25 years ago. I can comfortably play them on a MiSTer (or whatever PC).
If the graphics aren’t adding to the fun and freshness of the game, nearly. Rewatching old movies over seeing new ones is already a trend. Video games are a ripe genre for this already.
Now I'm going to disagree with myself... there came a point where movies started innovating in storytelling rather than in the technical aspects (think Panavision). Anything that was SFX-driven is different, but the stories movies tell, and how they tell them, kept changing even where the technology to tell them was already there.
"if you commoditize games, they become worthless"
???? hmm, wrong??? If everyone can make a game, the floor rises, making the "industry standard" for a game really high.
While I agree with you that if everything is A then A doesn't mean anything, the problem is that A doesn't vanish; it just moves to another, higher tier.
That's the Nintendo way. Avoiding the photorealism war altogether by making things intentionally sparse and cartoony. Then you can sell cheap hardware, make things portable etc.
I.e., the uncanny valley.
Cartoony isn’t the uncanny valley. Uncanny valley is attempted photorealism that misses the mark.
It sounds like even the PS6 isn’t going to have an expressive improvement, and that the PS5 was the last such console. PS5 Pro was the first console focused on fake frame generation instead of real output resolution/frame rate improvements, and per the article PS6 is continuing that trend.
What really matters is the cost.
In the past, a game console might launch at a high price point and then, after a few years, the price would go down and they could release a new console at a price close to where the last one started.
Blame crypto, AI, or COVID, but there has been no price drop for the PS5, and if there were going to be a PS6 that was really better, it would probably have to cost upwards of $1000, at which point you might as well get a PC. Sure, there are people who haven't tried Steam + an Xbox controller and think PC gaming is all unfun and sweaty, but they will come around.
Inflation. The PS5 standard launched at $499 in 2020, which is roughly $620 in 2025 money, about the same as the 1995 PS1 adjusted for inflation: $299 (1995) is around $635 (2025). https://www.usinflationcalculator.com/
Thus the PS6 should be around $699 at launch.
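A small sketch of that adjustment, using approximate cumulative CPI multipliers (ballpark values consistent with the calculator linked above, not exact figures):

```python
# Rough inflation adjustment of launch prices into 2025 dollars. The CPI
# multipliers are approximate, so treat the outputs as ballpark figures.
launch_prices = {
    "PS1 (1995)": (299, 2.12),   # ~2.12x cumulative US CPI, 1995 -> 2025 (approx.)
    "PS5 (2020)": (499, 1.24),   # ~1.24x cumulative US CPI, 2020 -> 2025 (approx.)
}

for console, (price, multiplier) in launch_prices.items():
    print(f"{console}: ${price} at launch is ~${price * multiplier:.0f} in 2025 dollars")
```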
When I bought a PS1 around 1998-99, I paid $150, and I think that included a game or two. It's the later-in-the-lifecycle price that has really changed (didn't the last iteration of it get down to either $99 or $49?)
The main issue with inflation is that my salary is not inflation-adjusted. The relative price increase adjusted for inflation might be zero, but the relative price increase adjusted for my salary is not.
The phrase “cost of living increase” is used to refer to an annual salary increase designed to keep up with inflation.
Typically, you should be receiving at least an annual cost of living increase each year. This is standard practice for every company I’ve ever worked for and it’s a common practice across the industry. Getting a true raise is the amount above and beyond the annual cost of living increase.
If your company has been keeping your salary fixed during this time of inflation, then you are correct that you are losing earning power. I would strongly recommend you hit the job market if that’s the case because the rest of the world has moved on.
In some of the lower wage brackets (not us tech people) the increase in wages has actually outpaced inflation.
Typically "Cost Of Living" increases target roughly inflation. They don't really keep up though, due to taxes.
If you've got a decent tech job in Canada your marginal tax rate will be near 50%. Any new income is taxed at that rate, so that 3% COL raise, is really a 1.5% raise in your purchasing power, which typically makes you worse off.
Until you're at a very comfortable salary, you're better off job hopping to boost your salary. I'm pretty sure all the financial people are well aware they're eroding their employees salaries over time, and are hoping you are not aware.
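A quick worked example of that arithmetic with made-up rates; the exact percentages depend on how your average rate compares to your marginal rate, but either way the raise trails 3% inflation:

```python
# Toy after-tax arithmetic for a cost-of-living raise. Rates are illustrative,
# not anyone's actual tax situation.
gross = 100_000.0
marginal_rate = 0.50   # rate applied to the raise itself
average_rate = 0.30    # effective rate on existing income
col_raise = 0.03       # 3% cost-of-living increase
inflation = 0.03

take_home_before = gross * (1 - average_rate)
take_home_after = take_home_before + gross * col_raise * (1 - marginal_rate)
nominal_gain = take_home_after / take_home_before - 1
real_gain = (1 + nominal_gain) / (1 + inflation) - 1

print(f"Take-home rises {nominal_gain:.1%} nominally, {real_gain:.1%} after inflation")
```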
Tax brackets also shift over time, though less frequently. So if you only get COL increases for 20 years, you're going to end up reasonably close to the same after-tax income, barring significant changes to the tax code.
In the US, the bottom bracket was 10% under $19,750 in 2020, with 12% for the next bracket; in 2025 it's 10% under $23,850, then 12% for the next bracket. https://taxfoundation.org/data/all/federal/historical-income...
And here I am in the UK, where the brackets have been frozen until 2028 (if they don't invent some reason to freeze further).
Freezing tax brackets is a somewhat stealthy way to shift the tax burden to lower income households as it’s less obviously a tax increase.
Thank you for your concern, but I'm in Germany, so the situation is a bit different; only very few companies have been able to keep up with inflation around here. I've seen at least a few adjustments, but I would not likely find a job that pays as well as mine does and is 100% remote. Making roughly 60K in Germany as a single guy in his 30s isn't exactly painful.
If you want to work 100% remote you could consider working for a US company as a consultant?
> but would not likely find a job that pays as well as mine does 100% remote.
That makes sense. The market for remote jobs has been shrinking while more people are competing for the smaller number of remote jobs. In office comes with a premium now and remote is a high competition space.
Those in charge of the fiat printing presses have run the largest theft of wealth in world history since 1971, when the dollar decoupled from gold.
Cash is a small fraction of overall US wealth, but inflation is a very useful tax on foreigners using USD thus subsidizing the US economy.
Is your salary the same as 10 years ago?
As long as I need a mouse and keyboard to install updates or to install/start my games from GOG, it's still going to be decidedly unfun, but hopefully Windows' upcoming built-in controller support will make it less unfun.
Today you can just buy an Xbox controller and pair it with your Windows computer and it just works, and it's the same with the Mac.
You don't have to install any drivers or anything, and with Big Picture mode in Steam it's a lean-back experience where you can pick out your games and start one up without using anything other than the controller.
I like big picture mode in Steam, but.... controller support is spotty across Steam games, and personally I think you need both a Steam controller and a DualSense or Xbox controller. Steam also updates itself by default every time you launch, and you have to deal with Windows updates and other irritations. Oh, here's another update for .net, wonderful. And a useless new AI agent. SteamOS and Linux/Proton may be better in some ways, but there are still compatibility and configuration headaches. And half my Steam library doesn't even work on macOS, even games that used to work (not to mention the issues with intel vs. Apple Silicon, etc.)
The "it just works" factor and not having to mess with drivers is a huge advantage of consoles.
Apple TV could almost be a decent game system if Apple ever decided to ship a controller in the box and stopped breaking App Store games every year (though live service games rot on the shelf anyway.)
> [...]controller support is spotty[...]
DualShock 4 and DualSense (PS4/PS5 controller) support under Linux is rock-solid, wired or wireless. That's to be expected, since the drivers are maintained by Sony[1]. I have no idea about the Xbox controller, but I know the DualSense works perfectly with Steam/Proton out of the box, with the vanilla Linux kernel.
1. https://www.phoronix.com/news/Sony-HID-PlayStation-PS5
I have clarified that I meant controller support in the Steam games themselves. Some of them work well, some of them not so well. Others need to be configured. Others only work with a Steam controller. I wish everything worked well with DualSense, especially since I really like its haptics, but it's basically on the many (many) game developers to provide the same kind of controller support that is standard on consoles.
Thanks for the clarification. I've run into that a couple of times - Steam's button remapping helps sometimes, but you'd have to remember which controller button the on-screen symbol maps to.
But when I have to install drivers, or install a non-Steam game, I can't do that with the controller yet. That's what I need for PC gaming to work in my living room.
Or you just need a Steam controller. They're discontinued now but work well as a mouse+keyboard for desktop usage. It got squished into the Steam Deck so hopefully there's a new version in the future.
If you have steam, ps4/ps5 controllers also work fine.
They do not work fine in every game. That is why I think you need a Steam controller as well.
They do but they cost a lot more.
My ps5 came with one for “free”
Plus add your GOG games as non-Steam games to Steam and launch them from big screen mode as well.
Launch Steam in big screen mode. Done.
I'm aware of Big Picture Mode, and it doesn't address either of the scenarios I cited specifically because they can't be done from Big Picture Mode.
How many grams of gold has each PlayStation cost at launch, using gold prices on launch day?
But now you’re assuming the PC isn’t also getting more expensive.
If a console designed to break even is $1,000 then surely an equivalent PC hardware designed to be profitable without software sales revenue will be more expensive.
You have to price it in equivalent grams of gold to see the real price trend.
PCs do get cheaper over time though, except if there is another crypto boom, then we are all doomed.
"PCs do get cheaper over time though"
PCs get cheaper, but GPUs don't.
I'm still watching 720p movies and playing 720p video games.
Somewhere between 60 Hz and 240 Hz, there are zero further fundamental benefits. Same for resolution.
It isn't just that hardware progress is a sigmoid; the experiential value we get from it is too.
The reality is that exponential improvement is not a fundamental force. It's always going to find some limit.
Lower latency between your input and its results appearing on the screen is exactly what a fundamental benefit is.
The resolution part is even sillier - you literally get more information per frame at higher resolutions.
Yes, the law of diminishing returns still applies, but 720p@60hz is way below the optimum. I'd estimate 4k@120hz as the low end of optimal maybe? There's some variance w.r.t the application, a first person game is going to have different requirements from a movie, but either way 720p ain't it.
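For reference, the frame-time arithmetic behind the diminishing returns both sides are pointing at:

```python
# Frame time (and hence the floor on delay added per displayed frame) at
# common refresh rates. The shrinking deltas are the diminishing returns.
for hz in (30, 60, 120, 144, 240):
    print(f"{hz:>3} Hz -> {1000 / hz:6.2f} ms per frame")
# 30 Hz -> 33.33 ms, 60 -> 16.67, 120 -> 8.33, 144 -> 6.94, 240 -> 4.17
```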
On my projector (120 inch) the difference between 720p and 4k is night and day.
Screen size is pretty much irrelevant, as nobody is going to be watching it at nose-length distance to count the pixels. What matters is angular resolution: how much area does a pixel take up in your field of vision? Bigger screens are going to be further away, so they need the same resolution to provide the same quality as a smaller screen which is closer to the viewer.
Resolution-wise, it depends a lot on the kind of content you are viewing as well. If you're looking at a locally-rendered UI filled with sharp lines, 720p is going to look horrible compared to 4k. But when it comes to video you've got to take bitrate into account as well. If anything, a 4k movie with a bitrate of 3Mbps is going to look worse than a 720p movie with a bitrate of 3Mbps.
I definitely prefer 4k over 720p as well, and there's a reason my desktop setup has had a 32" 4k monitor for ages. But beyond that? I might be able to be convinced to spend a few bucks extra for 6k or 8k if my current setup dies, but anything more would be a complete waste of money - at reasonable viewing distances there's absolutely zero visual difference.
We're not going to see 10,000 Hz 32K graphics in the future, simply because nobody will want to pay extra to upgrade from 7,500 Hz 16K graphics. Even the "hardcore gamers" don't hate money that much.
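A rough sketch of the angular-resolution math, using the 120-inch projector mentioned above and an assumed seating distance (the ~60 pixels-per-degree "retina" figure, i.e. roughly one arcminute per pixel, is a rule of thumb, not a hard limit):

```python
# Pixels per degree for a given screen width, resolution, and viewing distance.
# Screen size and seating distance below are assumptions for illustration.
import math

def pixels_per_degree(horizontal_pixels, screen_width_m, viewing_distance_m):
    fov_deg = math.degrees(2 * math.atan(screen_width_m / (2 * viewing_distance_m)))
    return horizontal_pixels / fov_deg

# A 120-inch 16:9 screen is ~2.66 m wide; assume a 3.5 m seating distance.
print(f"720p: {pixels_per_degree(1280, 2.66, 3.5):.0f} ppd")   # well under ~60 ppd
print(f"4K:   {pixels_per_degree(3840, 2.66, 3.5):.0f} ppd")   # comfortably above
```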
Does an increased pixel count make a bad movie better?
Does a decreased pixel count make a good movie better?
> I'm still watching 720p movies and playing 720p video games.
There's a noticeable and obvious improvement from 720p to 1080p to 4K (depending on the screen size). While there are diminishing gains, up to at least 1440p there's still a very noticeable difference.
> Somewhere between 60 Hz and 240 Hz, there are zero further fundamental benefits. Same for resolution.
Also not true. While the difference between 40fps and 60fps is more noticeable than, say, 60 to 100fps, the difference is still noticeable enough. Add the reduction in latency, which is also very noticeable.
Is the difference between 100fps and 240fps noticeable though? The OP said "somewhere between 60hz and 240hz" and I agree.
> Is the difference between 100fps and 240fps noticeable though?
Yes.
> The OP said "somewhere between 60hz and 240hz" and I agree.
Plenty of us don't. A 240Hz OLED still provides a significantly blurrier image in motion than my 20+ year old CRT.
Somewhere between a shoulder tap and a 30-06 there is a painful sensation.
The difference between 60 and 120Hz is huge to me. I haven't had a lot of experience above 140.
Likewise, 4k is a huge difference in font rendering, and 1080->1440 is big in gaming.
4K is big but certainly was not as big a leap forward as SD to HD
That would be a very obvious and immediately noticeable difference, but you need enough FPS rendered (natively, not with latency-increasing frame generation) and a display that can actually do 240Hz without becoming a smeary mess.
If you have this combination and you play with it for an hour, going back to a locked 100Hz game is something you'll never want to do. It's rather annoying in that regard, actually.
Even with frame generation it is incredibly obvious. The latency for sure is a downside, but 100 FPS vs 240 FPS is extremely evident to the human visual system.
Really strange that a huge pile of hacks, maths, and more hacks became the standard of "true" frames.
Consoles are the perfect platform for a proper pure ray tracing revolution.
Ray tracing is the obvious path towards perfect photorealistic graphics. The problem is that ray tracing is really expensive, and you can't stuff enough ray tracing hardware into a GPU which can also run traditional graphics for older games. This means games are forced to take a hybrid approach, with ray tracing used to augment traditional graphics.
However, full-scene ray tracing has essentially a fixed cost: the hardware needed depends primarily on the resolution and framerate, not the complexity of the scene. Rendering a million photorealistic objects is not much more compute-intensive than rendering a hundred cartoon objects, and without all the complicated tricks needed to fake things in a traditional pipeline any indie dev could make games with AAA graphics. And if you have the hardware for proper full-scene raytracing, you no longer need the whole AI upscaling and framegen to fake it...
Ideally you'd want a GPU which is 100% focused on ray tracing and ditches the entire legacy triangle pipeline - but that's a very hard sell in the PC market. Consoles don't have that problem, because not providing perfect backwards compatibility for 20+ years of games isn't a dealbreaker there.
> Rendering a million photorealistic objects is not much more compute-intensive than rendering a hundred cartoon objects
Increasing the object count by that many orders of magnitude is definitely much more compute intensive.
Only if you have more than 1 bounce. Otherwise it’s the same. You’ll cast a ray and get a result.
No, searching the set of triangles in the scene to find an intersection takes non-constant time.
I believe with an existing BVH acceleration structure, the average case time complexity is O(log n) for n triangles. So not constant, but logarithmic. Though for animated geometry the BVH needs to be rebuilt for each frame, which might be significantly more expensive depending on the time complexity of BVH builds.
Yeah, this search is O(log n) and can be hardware-accelerated, but there's no O(1) way to do this.
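A toy Python sketch of that logarithmic behaviour, using an idealized, perfectly balanced hierarchy over 1D "primitives" (real BVHs over 3D triangles are messier, but scale the same way):

    # Idealized balanced BVH over n primitives: a point query descends one
    # level per step, halving the candidate set each time -- O(log n) visits.
    def nodes_visited(n, x):
        lo, hi, visited = 0, n, 1
        while hi - lo > 1:                 # stop at a single-primitive leaf
            mid = (lo + hi) // 2
            lo, hi = (lo, mid) if x < mid else (mid, hi)
            visited += 1
        return visited

    for n in (100, 1_000_000):             # "scene complexity"
        print(f"{n:>9} primitives -> {nodes_visited(n, n // 2)} nodes visited")
    # ~7 visits for 100 primitives vs ~21 for 1,000,000: logarithmic, not constant.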
What if we keep the number of triangles constant per pixel, independently of scene complexity, through something like virtualized geometry? Though this would then require rebuilding part of the BVH each frame, even for static scenes, which is probably not a constant operation.
> Rendering a million photorealistic objects is not much more compute-intensive than rendering a hundred cartoon objects
Surely ray/triangle intersection tests, brdf evaluation, acceleration structure rebuilds (when things move/animate) all would cost more in your photorealistic scenario than the cartoon scenario?
Matrix multiplication is all that it is, and GPUs are really good at doing that in parallel already.
So I guess there is no need to change any of the hardware, then? I think it might be more complicated than waving your hands around linear algebra.
Yes there is, to improve ray tracing…
Combining both ray tracing (including path tracing, which is a form of ray tracing) and rasterization is the most effective approach. The way it is currently done is that primary visibility is calculated using triangle rasterization, which produces perfectly sharp and noise free textures, and then the ray traced lighting (slightly blurry due to low sample count and denoising) is layered on top.
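A toy numpy illustration of that layering (not a real renderer, just the structure: crisp rasterized surface colour, noisy low-sample lighting, a blur-based "denoiser", then the composite):

    import numpy as np

    H, W = 4, 8  # a tiny "frame", just to show the structure

    # 1. Primary visibility via rasterization: perfectly sharp, noise-free
    #    per-pixel surface colour (a stand-in for a real G-buffer).
    albedo = np.tile(np.linspace(0.2, 1.0, W), (H, 1))

    # 2. Ray traced lighting with very few samples per pixel: right on
    #    average, but noisy (modelled here as truth plus per-pixel noise).
    noisy_light = 0.8 + np.random.default_rng(0).normal(0, 0.3, (H, W))

    # 3. Denoise with a crude 3x3 box blur -- this is where the slight
    #    softness of the lighting layer comes from.
    pad = np.pad(noisy_light, 1, mode="edge")
    light = sum(pad[dy:dy + H, dx:dx + W] for dy in range(3) for dx in range(3)) / 9

    # 4. Composite: crisp rasterized colour, blurry-but-clean lighting on top.
    print((albedo * light).round(2))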
> However, full-scene ray tracing has essentially a fixed cost: the hardware needed depends primarily on the resolution and framerate, not the complexity of the scene.
That's also true for modern rasterization with virtual geometry. Virtual geometry keeps the number of rendered triangles roughly proportional to the screen resolution, not to the scene complexity. Moreover, virtual textures also keep the amount of texture detail in memory roughly proportional to the screen resolution.
The real advantage of modern ray tracing (ReSTIR path tracing) is that it is independent of the number of light sources in the scene.
So create a system with an RT-only GPU plus a legacy one, for the best of both worlds?
> non generative AI up-scaling
I know this isn't an original idea, but I wonder if this will be the trick for step-level improvement in visuals. Use traditional 3D models for the broad strokes and generative AI for texture and lighting details. We're at diminishing returns for adding polygons and better lighting, and generative AI seems to be better at improving from there—when it doesn't have to get the finger count right.
After raytracing, the next obvious massive improvement would be path tracing.
And while consoles usually lag behind the latest available graphics, I'd expect raytracing and even path tracing to become available to console graphics eventually.
One advantage of consoles is that they're a fixed hardware target, so games can test on the exact hardware and know exactly what performance they'll get, and whether they consider that performance an acceptable experience.
There is no real difference between "Ray Tracing" and "Path Tracing", or better, the former is just the operation of intersecting a ray with a scene (and not a rendering technique), while the latter is a way to solve the integral that approximates the rendering equation (hence, it could be considered a rendering technique). Sure, you can go back to the terminology used by Kajiya in his earlier works etc etc, but it was only an "academic terminology game" which is worthless today. Today, the former has been accelerated by HW for around a decade (I am counting the PowerVR Wizard). The latter is how most non-realtime rendering renders frames.
You can not have "Path Tracing" in games, not according to what it is. And it also probably does not make sense, because the goal of real-time rendering is not to render the perfect frame at any time, but to produce the best reactive, coherent sequence of frames possible in response to simulation and player inputs. This being said, HW ray tracing is still somehow game changing because it shapes a SIMT HW to make it good at inherently divergent computation (e.g. traversing a graph of nodes representing a scene): following this direction, many more things will be unlocked in real-time simulation and rendering. But not 6k samples unidirectionally path-traced per pixel in a game.
> You can not have "Path Tracing" in games
It seems like you're deliberately ignoring the terminology currently widely used in the gaming industry.
https://www.rockpapershotgun.com/should-you-bother-with-path...
https://gamingbolt.com/10-games-that-make-the-best-use-of-pa...
(And any number of other sources, those are just the first two I found.)
If you have some issue with that terminology, by all means raise that issue, but "You can not have" is just factually incorrect here.
> If you have some issue with that terminology, by all means raise that issue, but "You can not have" is just factually incorrect here.
It is not incorrect because, at least for now, all those "path tracing" modes do not compute multiple "paths" (each made of multiple rays cast) per pixel, but rasterize primary rays and then either fire 1 [on rare occasions, 2] rays for such a pixel, or, more often, read a value from a local special cache called a "reservoir" or from a radiance cache - which is sometimes a neural network. All of this even goes against the definition of path tracing that your first article itself gives :D
I don't have problems with many people calling it "path tracing" in the same way I don't have issues with many (more) people calling Chrome "Google" or any browser "the internet", but if one wants to talk about future trends in computing (or is posting on hacker news!) I believe it's better to indicate a browser as a browser, Google as a search engine, and Path Tracing as what it is.
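To make the gap concrete, some back-of-the-envelope ray counting in Python (the sample and bounce counts are illustrative assumptions, not anyone's published figures):

    pixels_4k = 3840 * 2160

    # Offline-style unidirectional path tracing: many paths per pixel,
    # each path made of several ray casts (bounces).
    samples_per_pixel, bounces = 1024, 4
    offline_rays = pixels_4k * samples_per_pixel * bounces

    # Typical game "path tracing" mode: primary visibility rasterized, then
    # roughly one ray per pixel feeding a reservoir / radiance cache.
    game_rays = pixels_4k * 1

    print(f"offline: ~{offline_rays / 1e9:.0f} billion rays per frame")
    print(f"game:    ~{game_rays / 1e6:.0f} million rays per frame")
    print(f"ratio:   ~{offline_rays // game_rays}x")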
There's likely still room to go super wide with CPU cores and much more RAM, but everyone is talking about neural nets, so that's what the press release is about.
Not all games need horsepower. We've now passed the point of good enough to run a ton of it. Sure, tentpole attractions will warrant more and more, but we're turning back to mechanics, input methods, gameplay, storytelling. If you play 'old' games now, they're perfectly playable. Just like older movies are perfectly watchable. Not saying you should play those (you should), but there's not much of a leap needed to keep such ideas going strong and fresh.
This is my take as well. I haven’t felt that graphics improvement has “wowed” me since the PS3 era honestly.
I’m a huge fan of Final Fantasy games. Every mainline game (those with just a number; excluding 11 and 14 which are MMOs) pushes the graphical limits of the platforms at the time. The jump from 6 to 7 (from SNES to PS1); from 9 to 10 (PS1 to 2); and from 12 to 13 (PS3/X360) were all mind blowing. 15 (PS4) and 16 (PS5) were also major improvements in graphics quality, but the “oh wow” generational gap is gone.
And then I look at the gameplay of these games, and it’s generally regarded as going in the opposite direction- it’s all subjective of course but 10 is generally regarded as the last “amazing” overall game, with opinions dropping off from there.
We’ve now reached the point where an engaging game with good mechanics is way more important than graphics: case in point being Nintendo Switch, which is cheaper and has much worse hardware, but competes with the PS5 and massively outsells Xbox by huge margins, because the games are fun.
FF12 and FF13 are terrific games that have stood the test of time.
And don't forget the series of MMOs:
FF11 merged Final Fantasy with old-school MMOs, notably Everquest, to great success.
FF14 2.0 was literally A Realm Reborn from the ashes of the failed 1.0, and was followed by the exceptional Heavensward expansion.
FF14 Shadowbringers was and is considered great.
It's not just technology that's eating away at console sales, it's also the fact that 1) nearly everything is available on PC these days (save Nintendo with its massive IP), 2) mobile gaming, and 3) there's a limitless amount of retro games and hacks or mods of retro games to play and dedicated retro handhelds are a rapidly growing market. Nothing will ever come close to PS2 level sales again. Will be interesting to see how the video game industry evolves over the next decade or two. I suspect subscriptions (sigh) will start to make up for lost console sales.
> Nothing will ever come close to PS2 level sales again.
The Switch literally has, and according to projections the Switch 1 will in fact have outsold the PS2 globally by the end of the year.
Doubtful; they say this with every generation of console and even gaming PC systems. When its popularity decreases, then profits decrease, and then maybe it will be "the last generation".
I'd hesitate to call the temporal hacks progress. I disable them every time.
Gaming using weird tech is not a hardware manufacturer or availability issue. It is a game studio leadership problem.
Even in the latest versions of unreal and unity you will find the classic tools. They just won't be advertised and the engine vendor might even frown upon them during a tech demo to make their fancy new temporal slop solution seem superior.
The trick is to not get taken for a ride by the tools vendors. Real time lights, "free" anti aliasing, and sub-pixel triangles are the forbidden fruits of game dev. It's really easy to get caught up in the devil's bargain of trading unlimited art detail for unknowns at end customer time.
they can't move everything to the cloud because of latency
Welcome to the Age of the Plateau. It will change everything we know. Invest accordingly.
And what do you think to invest in for such times?
Moats. Government relationships. Simple and unsexy. Hard assets.
Hard assets and things with finite supply. Anything real. Gold, bitcoin, small cap value stocks, commodities, treasuries (if you think the government won't fail).
https://portfoliocharts.com/2021/12/16/three-secret-ingredie...
> Anything real
> bitcoin
:D
Bitcoin hate is real, here. At least.
If it isn't real, I invite you to get some easily or print more.
https://www.investopedia.com/news/hyperinflation-produces-su...
https://decrypt.co/332083/billionaire-ray-dalio-urges-invest...
Is your argument that not being able to "get some easily" makes a thing more real?
If the Internet goes away, Bitcoin goes away. That's a real threat in a bunch of conceivable societal failure scenarios. If you want something real, you want something that will survive the loss of the internet. Admittedly, what you probably want most in those scenarios is diesel, vehicles that run on diesel, and salt. But a pile of gold still could be traded for some of those.
Everyone always talk like societal collapse is global. Take a small pile of gold and use it to buy a plane ticket somewhere stable with internet and your bitcoin will be there waiting for you.
Beyond the PS6, the answer is very clearly graphics generated in real time via a transformer model.
I’d be absolutely shocked if in 10 years, all AAA games aren’t being rendered by a transformer. Google’s veo 3 is already extremely impressive. No way games will be rendered through traditional shaders in 2035.
The future of gaming is the Grid-Independent Post-Silicon Chemo-Neural Convergence, the user will be injected with drugs designed by AI based on a loose prompt (AI generated as well, because humans have long lost the ability to formulate their intent) of the gameplay trip they must induce.
Now that will be peak power efficiency and a real solution for the world where all electricity and silicon are hogged by AI farms.
/s or not, you decide.
Stanislaw Lem’s “The Futurological Congress” predicted this in 1971.
FYI it's got an amazing film adaptation by Ari Folman in his 2013 "The Congress". The most emotionally striking film I've ever watched.
It's all about neural spores
https://youtu.be/NyvD_IC9QNw
There will be a war between these biogamers and smart consoles that can play themselves.
Is this before or after fully autonomous cars and agi? Both should be there in two years right?
10 years ago people were predicting VR would be everywhere, it flopped hard.
I've been riding Waymo for years in San Francisco.
10 years ago, people were predicting that deep learning will change everything. And it did.
Why just use one example (VR) and apply it to everything? Even then, a good portion of people did not think VR would be everywhere by now.
Baidu Apollo Go completes millions of rides a year as well, with expansions into Europe and the Middle East. In China they've been active for a long time - during COVID they were making autonomous deliveries.
It is odd how many people don't realize how developed self-driving taxis are.
The future isn't evenly distributed.
I think most people will consider self driving tech to be a thing when it's as widespread as TVs were, 20 years after their introduction.
And outside of a few major cities with relatively good weather, self driving is non existent
> I've been riding Waymo for years in San Francisco.
Fully autonomous in select defined cities owned by big corps is probably a reasonable expectation.
Fully autonomous in the hands of an owner applied to all driving conditions and working reliably is likely still a distant goal.
It did flop, but still a hefty loaf of money was sliced off in the process.
Those with the real vested interest don't care if that flops, while zealous worshippers to the next brand new disruptive tech are just a free vehicle to that end.
VR is great industrial tech and bad consumer tech. It’s too isolating for consumers.
Just because it's possible doesn't mean it is clearly the answer. Is a transformer model truly likely to require less compute than current methods? We can't even run models like Veo 3 on consumer hardware at their current level of quality.
How much money are you willing to bet?
All my money.
Even in a future with generative UIs, those UIs will be composed from pre-created primitives just because it's faster and more consistent, there's literally no reason to re-create primitives every time.
Go short Nintendo and Sony today. I'm the last one who's going to let my technical acumen get in the way of your mistake.
Why would gaming rendering using transformers lead to one shorting Nintendo and Sony?
This _might_ be true, but it's utterly absurd to claim this is a certainty.
The images rendered in a game need to accurately represent a very complex world state. Do we have any examples of Transformer based models doing something in this category? Can they do it in real-time?
I could absolutely see something like rendering a simplified and stylised version and getting Transformers to fill in details. That's kind of a direct evolution from the upscaling approach described here, but end to end rendering from game state is far less obvious.
The main rendering would be done by the transformer.
Already in 2025, Google Veo 3 is generating pixels far more realistic than AAA games. I don't see why this wouldn't be the default rendering mode for AAA games in 2035. It's insanity to think it won't be.
Veo3: https://aistudio.google.com/models/veo-3
> Google Veo 3 is generating pixels far more realistic than AAA games
That’s because games are "realtime", meaning with a tight frame-time budget. AI models are not (and are even running on multiple cards each costing 6 figures).
I mistook Veo 3 for the Genie model. Genie is the Google model I should have referenced. It is real time.
Well you missed the point. You could call it prompt adherence. I need veo to generate the next frame in a few milliseconds, and correctly represent the position of all the cars in the scene (reacting to player input) reliably to very high accuracy.
You conflate the challenge of generating realistic pixels with the challenge of generating realistic pixels that represent a highly detailed world state.
So I don't think your argument is convincing or complete.
> Already in 2025, Google Veo 3 is generating pixels far more realistic than AAA games.
Traditional rendering techniques can also easily exceed the quality of AAA games if you don't impose strict time or latency constraints on them. Wake me up when a version of Veo is generating HD frames in less than 16 milliseconds, on consumer hardware, without batching, and then we can talk about whether that inevitably much smaller model is good enough to be a competitive game renderer.
Doesn’t this imply that a transformer or NN could fill in details more efficiently than traditional techniques?
I’m really curious why this would be preferable for a AAA studio game outside of potential cost savings. Also imagine it’d come at the cost of deterministic output / consistency in visuals.
Genie 3 is already a frontier approach to interactive generative world views no?
It will be AI all the way down soon. The models internal world view could be multiple passes and multi layer with different strategies... In any case; safe to say more AI will be involved in more places ;)
I am super intrigued by such world models. But at the same time it's important to understand where they are at. They are celebrating the achievement of keeping the world mostly consistent for 60 seconds, and this is 720p at 24fps.
I think it's reasonable to assume we won't see this tech replace game engines without significant further breakthroughs...
For LLMs agentic workflows ended up being a big breakthrough to make them usable. Maybe these World Models will interact with a sort of game engine directly somehow to get the required consistency. But it's not evident that you can just scale your way from "visual memory extending up to one minute ago" to 70+ hour game experiences.
Transformer maybe not, but neural net yes. This is profoundly uncomfortable for a lot of people, but it's the very clear direction.
The other major success of recent years not discussed much so far is gaussian splats, which tear up the established production pipeline again.
Neural net is already being used via DLSS. Neural rendering is the next step. And finally, a full transformer based rendering pipeline. My guess anyway.
That's just not efficient. AAA games will use AI to pre-render assets, and use AI shaders to make stuff pop more, but on the fly asset generation will still be slow and produce low quality compared to offline asset generation. We might have a ShadCN style asset library that people use AI to tweak to produce "realtime" assets, but there will always be an offline core of templates at the very least.
Be prepared to be shocked. This industry moves extremely slow.
I was going to say "again?", but then I recalled DirectX 12 was released 10 years ago and now I feel old...
The main goal of Direct3D 12, and subsequently Vulkan, was to allow for better use of the underlying graphics hardware as it had changed more and more from its fixed-pipeline roots.
So maybe the time is ripe for a rethink, again.
Particularly the frame generation features, upscaling and frame interpolation, have promise but needs to be integrated in a different way I think to really be of benefit.
The rethink is already taking place via mesh shaders and neural shaders.
You aren't seeing them adopted that much because the hardware still isn't deployed at a scale where games can count on them being available, and so adoption can't yet feed back into improving the developer experience of using them.
Don't forget mantle.
Did not Mantle become Vulkan?
Yeah, but that doesn't mean that much of Mantle is recognizable in Vulkan, because Vulkan wanted to cover the entire range of GPU architectures (including outdated and mobile GPUs) with a single API, while Mantle was designed for modern (at the time) desktop GPUs (and specifically AMD GPUs). Vulkan basically took an elegant design and "ruined" it with too much real-world pragmatism ;)
While I didn't forget about it, I did misremember the timeline. So yea, Mantle should definitely be mentioned.
I remember reading about directx 1 in PC Gamer magazine
The industry, and at large the gaming community is just long past being interested in graphics advancement. AAA games are too complicated and expensive, the whole notion of ever more complex and grandiose experiences doesn't scale. Gamers are fractured along thousands of small niches, even in sense of timeline in terms of 80s, 90s, PS1 era each having a small circle of businesses serving them.
The times of console giants, their fiefdoms and the big game studios is coming to an end.
I'll take the other side of this argument and state that people are interested in higher graphics, BUT they expect to see an equally higher simulation to go along with it. People aren't excited for GTA6 just because of the graphics, but because they know the simulation is going to be better than anything they've seen before. They need to go hand in hand.
That's totally where all this is going. More horsepower on a GPU doesn't necessarily mean it's all going towards pixels on the screen. People will get creative with it.
I disagree - current gen console aren't enough to deliver smooth immersive graphics - I played BG3 on PS first and then on PC and there's just no comparing the graphics. Cyberpunk same deal. I'll pay to upgrade to consistent 120/4k and better graphics, and I'll buy the games.
And there are AAA that make and will make good money with graphics being front and center.
>aren't enough to deliver smooth immersive graphics
I'm just not sold.
Do I really think that BG3 being slightly prettier than, say, Dragon Age / Skyrim / etc made it a more enticing game? Not to me certainly. Was cyberpunk prettier than Witcher 3? Did it need to be for me to play it?
My query isn't about whether you can get people to upgrade to play new stuff (always true). But whether they'd still upgrade if they could play on the old console with worse graphics.
I also don't think anyone is going to suddenly start playing video games because the graphics improve further.
> Do I really think that BG3 being slightly prettier than, say, Dragon Age / Skyrim / etc made it a more enticing game?
Absolutely - graphical improvements make the game more immersive for me and I don't want to go back and replay the games I spent hundreds of hours in back in the mid two thousands, like say NWN or Icewind Dale (never played BG 2). It's just not the same feeling now that I've played games with incomparable graphics, polished mechanics and movie-level voice acting/mocap cutscenes. I even picked up Mass Effect recently out of nostalgia but gave up fast because it just isn't as captivating as it was back when it was peak graphics.
Well this goes to show that, as some other commenter said, the gamer community (whatever that is) is indeed very fragmented.
I routinely re-play games like Diablo 2 or BG1/2 and I couldn't care less about graphics, voice acting or motion capture.
> Absolutely - graphical improvements make the game more immersive for me
Exactly. Graphics are not the end all be all for assessing games, but it’s odd how quickly people handwave away graphics in a visual medium.
> it’s odd how quickly people handwave away graphics in a visual medium.
There is a difference between graphics as in rendering (i.e. the technical side, how something gets rendered) and graphics as in aesthetics (i.e. visual styles, presentation, etc).
The latter is important for games because it can be used to evoke some feel to the player (e.g. cartoony Mario games or dreadful Silent Hill games). The former however is not important by itself, its importance only comes as means to achieve the latter. When people handwave away graphics in games they handwave the misplaced focus on graphics-as-in-tech, not on graphics-as-in-aesthetics.
Maximal "realism" is neither the only nor even necessarily the best use of that medium.
When did I say anything like that? When did anybody in this thread?
I don't know what these words mean to you vs. what they mean to me. But whatever you call the visual quality that Baldur's Gate 3, CyberPunk 2077, and most flagship AAA titles, etc. are chasing after that makes them have "better graphics" and be "more immersive", whatever that is, is not the only way to paint the medium.
Very successful games are still being made that use sprites, low-res polygons, cel shading, etc. While these techniques still can run into hardware limits, they generally don't benefit from the sort of improvements (and that word is becoming ever more debatable with things like AI frame generation) that make for better looking [whatever that quality is called] games.
Wanting them to look good and saying they look way better on a PC does not mean what you described above.
And not caring as much about those things doesn't mean I don't understand that video games are a visual medium.
This is just one type of graphics. And focusing too heavily on it is not going to be enough to keep the big players in the industry afloat for much longer. Some gamers care--apparently some care a lot--but that isn't translating into enough sales to overcome the bloated costs.
We are really straying from the initial point here IMO
For me, the better the graphics, mocap etc., the stronger the uncanny valley feeling - i.e. I stop perceiving it as a video game, but instead see it as an incredibly bad movie.
> I don't want to go back and replay the games I spent hundreds of hours in back in the mid two thousands, like say NWN or Icewind Dale (never played BG 2). It's just not the same feeling now that I've played games with incomparable graphics, polished mechanics and movie-level voice acting/mocap cutscenes. I even picked up Mass Effect recently out of nostalgia but gave up fast because it just isn't as captivating as it was back when it was peak graphics.
And yet many more have no such issue doing exactly this. Despite having a machine capable of the best graphics at the best resolution, I have exactly zero issues going back and playing older games.
Just in the past month alone with some time off for surgery I played and completed Quake, Heretic and Blood. All easily as good, fun and as compelling as modern titles, if not in some ways better.
Two aspects I keep thinking about:
-How difficult it must be for the art/technical teams at game studios to figure out for all the detail they are capable of putting on screen how much of it will be appreciated by gamers. Essentially making sure that anything they're going to be budgeting significant amount of worker time to creating, gamers aren't going to run right past it and ignore or doesn't contribute meaningfully to 'more than the sum of its parts'.
-As much as technology is an enabler for art, alongside the install base issue how well does pursuing new methods fit how their studio is used to working, and is the payoff there if they spend time adapting. A lot of gaming business is about shipping product, and the studios concern is primarily about getting content to gamers than chasing tech as that is what lets their business continue, selling GPUs/consoles is another company's business.
Being an old dog that still cares about gaming, I would assert that many games are also not taking advantage of current-gen hardware: coded in Unreal and Unity, a kind of Electron for games, they leave much of what the hardware offers unused.
There is a reason there are so many complaints on social media about it being obvious to gamers which engine a game was written in.
It used to be that game development quality was taken more seriously, when they were sold via storage media, and there was a deadline to burn those discs/cartridges.
Now they just ship whatever is done by the deadline, and updates will come later via a DLC, if at all.
They're both great engines. They're popular and gamers will lash out at any popular target.
If it was so simple to bootstrap an engine no one would pay the percentage points to Unity and Epic.
The reality is the quality bar is insanely high.
It is pretty simple to bootstrap an engine. What isn’t simple is supporting asset production pipelines on which dozen/hundreds of people can work on simultaneously, and on which new hires/contractors can start contributing right away, which is what modern game businesses require and what unity/unreal provide.
Unreal and Unity would be less problematic if these engines were engineered to match the underlying reality of graphics APIs/drivers, but they're not. Neither of these can systematically fix the shader stuttering they are causing architecturally, and so essentially all games built on these platforms are sentenced to always stutter, regardless of hardware.
Both of these seem to suffer from incentive issues similar to enterprise software: They're not marketing and selling to either end users or professionals, but studio executives. So it's important to have - preferably a steady stream of - flashy headline features (e.g. nanite, lumen) instead of a product that actually works on the most basic level (consistently render frames). It doesn't really matter to Epic Games that UE4/5 RT is largely unplayable; even for game publishers, if you can pull nice-looking screenshots out of the engine or do good-looking 24p offline renders (and slap "in-game graphics" on them), that's good enough.
The shader stutter issues are non-existent on console, which is where most of their sales are. PC, as it has been for almost two decades, is an afterthought rather than a primary focus.
No, that's not the reason.
The shader stutter issues are non-existent on console because consoles have one architecture and you can ship shaders as compiled machine code. For PC you don't know what architecture you will be targeting, so you ship some form of bytecode that needs to be compiled on the target machine.
Agreed. I didn't mean to say consoles' popularity is why they don't have shader stutter, but rather it's why implementing a fix on PC (e.g. precompilation at startup) isn't something most titles bother with.
It's not just popularity, Epic has been trying really hard to solve it in Unreal Engine.
The issue is that, because of monolithic pipelines, you have to provide the exact state the shaders will be used in. There's a lot of that, and a large part of it depends on user authored content, which makes it really hard to figure out in advance.
It's a fundamental design mistake in D3D12/Vulkan that is slowly being corrected, but it will take some time (and even more for game engines to catch up).
You still don't get it. It's just not possible to ship a precompilation of every shader permutation for every supported hardware permutation.
That's why I said "precompilation at startup". That has users compile for their precise hardware/driver combination prior to the game trying to use them for display.
Even this is just guesswork for the way these engines work, because they literally don't know what set of shaders to compile ahead of time. Arbitrary scripting can change that on a frame-by-frame basis, shader precompilation in these engines mostly relies on recording shader invocations during gameplay and shipping that list. [1]
Like, on the one hand, you have engines/games which always stutter, have more-or-less long "shader precompilation" splashscreens on every patch and still stutter anyway. The frametime graph of any UE title looks like a topographic cross-section of Verdun. On the other hand there are titles not using those engines where you wouldn't even notice there were any shaders to precompile which... just run.
[1] https://dev.epicgames.com/documentation/en-us/unreal-engine/...
> In a highly programmable real-time rendering environment such as Unreal Engine (UE), any application with a large amount of content has too many GPU state parameters that can change to make it practical to manually configure PSOs in advance. To work around this complication, UE can collect data about the GPU state from an application build at runtime, then use this cached data to generate new PSOs far in advance of when they are used. This narrows down the possible GPU states to only the ones used in the application. The PSO descriptions gathered from running the application are called PSO caches.
> The steps to collect PSOs in Unreal are:
> 1. Play the game.
> 2. Log what is actually drawn.
> 3. Include this information in the build.
> After that, on subsequent playthroughs the game can create the necessary GPU states earlier than they are needed by the rendering code.
Of course, if the playthrough used for generating the list of shaders doesn't hit X codepath ("oh, this particular spell was not cast while holding down shift"), a player hitting it will then get a 0.1s game pause when they invariably do.
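The record-and-replay idea itself is simple; a minimal Python sketch of the flow (purely illustrative, not Unreal's or any graphics API's actual interface):

    import json

    # 1. During an instrumented playthrough, record every GPU state
    #    combination ("pipeline state") that actually gets drawn.
    recorded = set()

    def draw(shader, blend_mode, depth_test):
        recorded.add((shader, blend_mode, depth_test))
        # ... the real draw call would happen here ...

    draw("character_skin", "opaque", True)
    draw("particle_smoke", "additive", False)

    # 2. Ship the recorded list with the build.
    shipped_cache = json.dumps(sorted(recorded))

    # 3. On the player's machine, compile everything on the list at startup,
    #    so the expensive compile doesn't happen mid-gameplay.
    def compile_pipeline(state):        # stand-in for the real driver compile
        print("precompiling", state)

    for state in json.loads(shipped_cache):
        compile_pipeline(tuple(state))

    # The catch: any combination the recorded playthrough never hit isn't on
    # the list, and still compiles (with a visible hitch) the first time a
    # player triggers it.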
Any search on game console reviews on YouTube will show otherwise, even though it isn't as bad as PC.
Just a quick search,
https://www.gamesradar.com/games/the-ps5-stutter-issue-is-re...
https://forums.flightsimulator.com/t/stutters-on-xbox-series...
You're saying a periodic VRR stutter is a shader compiler issue?
If anything, I think PC has been a prototyping or proving ground for technologies on the roadmap for consoles to adopt. It allows software and hardware iterations before they're relied upon in a platform that is required to be stable and mostly unchanging for around a decade, from designing the platform through developers using it and, more recently, major refreshes. For example, from around 2009 there were a few cross-platform games with a 32-bit/DX9 baseline but optional 64-bit/DX11 capabilities, and given the costs and teams involved in making the kind of games which stretch those capabilities, I find it hard to believe it'd be one engineer or a small group putting significant time into optional modes that aren't critical to the game functioning, and supporting them publicly. Then a few years later that's the basis of the next generation of consoles.
You know the hardware for console so you can ship precompiled shaders.
Can't do that for PC so you either have long first runs or stutter for JIT shader compiles.
Long first runs seem like an unambiguous improvement over stutter to me. Unfortunately, you still get new big games like Borderlands 4 that don't fully precompile shaders.
Depending on the game and the circumstances, I'm getting some cases of 20-40 minutes to compile shaders. That's just obscene to me. I don't think stutter is better but neither situation is really acceptable. Even if it was on first install only it would be bad, but it happens on most updates to the game or the graphics drivers, both of which are getting updated more frequently than ever.
Imagine living in a reality where the studio exec picks the engine based on getting screenshots 3 years later when there's something interesting to show.
I mean, are you actually talking from experience at all here?
It's really more that engines are an insane expense in money and time and buying one gets your full team in engine far sooner. That's why they're popular.
Pretty much it.
Just get a PC then? ;) In the end, game consoles haven't been much more than "boring" subsidized low-end PCs for quite a while now.
A PC costs a lot and depreciates fast; by the end of a console's lifecycle I can still count on developers targeting it, whereas PC performance on 6+ year old hardware is guaranteed to suck. And I'm not a heavy gamer - I'll spend ~100h on games per year, but so will my wife and my son - a PC sucks for multiple people using it, while the PS is amazing for that. I know I could concoct some remote play setup via LAN on the TV to let my wife and kids play, but I just want something I spend a few hundred EUR on, plug into the TV, and it just works.
Honestly the only reason I caved with the GPU purchase (which cost the equivalent of a PS pro) was the local AI - but in retrospect that was useless as well.
> by the end of a console lifecycle I can still count on developers targeting it
And I can count on those games still being playable on my six year old hardware because they are in fact developed for 6 year old hardware.
> PC performance for 6+ year hardware is guaranteed to suck
For new titles at maximum graphics level sure. For new titles at the kind of fidelity six year old consoles are putting out? Nah. You just drop your settings from "ULTIMATE MAXIMUM HYPER FOR NEWEST GPUS ONLY" to "the same low to medium at best settings the consoles are running" and off you go.
Oh yeah it's great to play PS4 games while the thing runs with the noise of a vacuum cleaner.
> current gen console aren't enough to deliver smooth immersive graphics
They were enough since PS4 era to deliver smooth, immersive graphics.
Advancements in lighting can help all games, not just AAA ones.
For example, Tiny Glade and Teardown have ray traced global illumination, which makes them look great with their own art style, rather than expensive hyper-realism.
But currently this is technically hard to pull off, and works only within certain constrained environments.
Devs are also constrained by the need to support multiple generations of GPUs. That's great from perspective of preventing e-waste and making games more accessible. But technically it means that assets/levels still have to be built with workarounds for rasterized lights and inaccurate shadows. Simply plugging in better lighting makes things look worse by exposing the workarounds, while also lacking polish for the new lighting system. This is why optional ray tracing effects are underwhelming.
Nintendo dominated last generation with the Switch. The games were only HD and many ran at 30fps. Some AAA titles didn't even get ported to it. But they sold a ton of units and a ton of games, and few complained, because they were having fun, which is what gaming is all about anyway.
That is a different audience than people playing on pc/xbox/ps5. Although arguably each console has a different audience, so there is that.
> That is a different audience than people playing on pc/xbox/ps5.
Many PC users also own a Switch. It is in fact one of the most common pairings. There is very little I want from PS/Xbox that I can't get on PC, so there's very little point in owning one; I won't get any of the Nintendo titles anywhere else, so keeping one around makes significantly more sense if I want to cover my bases for exclusives.
idk, battlefield 6 came out today to very positive reviews and it's absolutely gorgeous.
It's fine, but definitely a downgrade compared to previous titles like Battlefield 1. At moments it looks pretty bad.
I'm curious why graphics are stagnating and even getting worse in many cases.
https://www.youtube.com/watch?v=gBzXLrJTX1M
Battlefield 6 vs Battlefield 1 - Direct Comparison! Attention to Detail & Graphics! PC 4K
The progress in 9 years does seem underwhelming.
Have you played it? I haven't so I'm just basing my opinion on some YouTube footage I've seen.
BF1 is genuinely gorgeous, I can't lie. I think it's the photogrammetry. Do you think the lighting is better in BF1? I'm gonna go out on a limb and say that BF6's lighting is more dynamic.
Yes I played it on a 4090. The game is good but graphics are underwhelming.
To my eyes everything looked better in BF1.
Maybe it's trickery but it doesn't matter to me. BF6, new COD, and other games all look pretty bad. At least compared to what I would expect from games in 2025.
I don't see any real differences from similar games released 10 years ago.
Exploding production cost is pretty much the only reason (eg we hit diminishing returns in overall game asset quality vs production cost at least a decade ago) plus on the tech side a brain drain from rendering tech to AI tech (or whatever the current best-paid mega-hype is). Also, working in gamedev simply isn't "sexy" anymore since it has been industrialized to essentially assembly line jobs.
It looks like Frostbite 4.0 is so much better than Unreal 5.x. I can't wait to see a comparison.
Teenage me from the 90s telling everyone that ray tracing will eventually take over all rendering and getting laughed at would be happy :)
It's not, though. The use of RT in games is generally limited to secondary rays; the primaries are still rasterized. (Though the rasterization is increasingly done in “software rendering”, aka compute shaders.)
As you can tell, I'm patient :) A very important quality for any ray tracing enthusiast lol
The ability to do irregular sampling, efficient shadow computation (every flavour of shadow mapping is terrible!) and global illumination is already making its way into games, and path tracing has been the algorithm of choice in offline rendering (my profession since 2010) for quite a while already.
Making a flexible rasterisation-based renderer is a huge engineering undertaking, see e.g. Unreal Engine. With the relentless march of processing power, and finally having hardware acceleration as rasterisation has enjoyed for decades, it's going to be possible for much smaller teams to deliver realistic and creative (see e.g. Dreams[0]) visuals with far less engineering effort. Some nice recent examples of this are Teardown[1] and Tiny Glade[2].
It's even more inevitable from today's point of view than it was back in the 90s :)
[0] Dreams: https://www.youtube.com/watch?v=u9KNtnCZDMI
[1] Teardown: https://teardowngame.com/
[2] Tiny Glade: https://www.youtube.com/watch?v=jusWW2pPnA0
Hi teenage you! You did well :)
The idea of the radiance cores is pretty neato
>radiance cores is pretty neato
I still don't understand how it is different from Nvidia's RT Cores.
AFAICT it's not really different, they're just calling it something else for marketing reasons. The system described in the Sony patent (having a fixed-function unit traverse the BVH asynchronously from the shader cores) is more or less how Nvidia's RT cores worked from the beginning, as opposed to AMDs early attempts which accelerated certain intersection tests but still required the shader cores to drive the traversal loop.
I wonder if we'll ever get truly round objects in my lifetime though
My old ray tracer could do arbitrary quadric surfaces, toroids with 2 minor radii, and CSG of all those. Triangles too (no CSG). It was getting kind of fast 20 years ago - 10fps at 1024x768. Never had good shading though.
I should dig that up and add NURBS and see how it performs today.
Dreams on PlayStation and Unbound on PC both use SDFs to allow users to make truly round objects for games
It feels like each time SCE makes a new console, it'd always come with some novelty that's supposed to change the field forever, but after two years they'd always end up just another console.
You end up with a weird phenomenon.
Games written for the PlayStation exclusively get to take advantage of everything, but there is nothing to compare the release to.
Alternatively, if a game is released cross-platform, there's little incentive to tune the performance past the benchmarks of comparable platforms. Why make the PlayStation game look better than the Xbox one if it involves rewriting engine-layer stuff to take advantage of the hardware, for one platform only?
Basically all of the most interesting utilization of the hardware comes at the very end of the consoles lifecycle. It’s been like that for decades.
I think apart from cross-platform woes (if you can call it that), it's also that the technology landscape would shift, two or few years after the console's release:
For PS2, game consoles didn't become the centre of home computing; for PS3, programming against the GPU became the standard of doing real time graphics, not some exotic processor, plus that home entertaining moved on to take other forms (like watching YouTube on an iPad instead of having a media centre set up around the TV); for PS4, people didn't care if the console does social networking; PS5 has been practical, it's just the technology/approach ended up adopted by everyone, so it lost its novelty later on.
You got a very "interesting" history there, it certainly not particularly grounded in reality however.
PS3s edge was generally seen as the DVD player.
That's why Sony went with Blu-ray in the PS4, hoping to capitalize on the next medium, too. While that bet didn't pay out, Xbox kinda self-destructed, consequently making them the dominant player anyway.
Finally:
> PS5 has been practical, it's just the technology/approach ended up adopted by everyone, so it lost its novelty later on.
PS5 did not have any novel approach that was consequently adopted by others. The only thing "novel" in the current generation is frame generation, and that was already being pushed for years by the time Sony jumped on that bandwagon.
You've got your history wrong too.
The PS2 was the DVD console. The PS3 was the Blu-ray console.
The PS4 and PS5 are also Blu-ray consoles, however Blu-rays are too slow now so they're just a medium for movies or to download the game from.
You're right, I mixed up the version numbers from memory. I'd contest the statement "the history is wrong" though, that's an extremely minor point to what I was writing.
> PS5 did not have any novel approach that was consequently adopted by others
DualSense haptics are terrific, though the Switch kind of did them first with the Joy-Cons. I'd say haptics and adaptive triggers are two features that should become standard. Once you have them you never want to go back.
PS5's fast SSD was a bit of a game changer in terms of load time and texture streaming, and everyone except Nintendo has gone for fast m.2/nvme storage. PS5 also finally delivered the full remote play experience that PS3 and PS4 had teased but not completed. Original PS5 also had superior thermals vs. PS4 pro, while PS5 pro does solid 4K gaming while costing less than most game PCs (and is still quieter than PS4 pro.) Fast loading, solid remote play, solid 4K, low-ish noise are all things I don't want to give up in any console or game PC.
My favorite PS5 feature however is fast game updates (vs. PS4's interminable "copying" stage.) Switch and Switch 2 also seem to have fairly fast game updates, but slower flash storage.
That is very country specific: in many countries home computers have always dominated since the 8-bit days, whereas in others consoles have always dominated since the Nintendo/SEGA days.
Also, tons of blue collar people bought Chinese NES clones even in the mid 90's (at least in Spain) while some other people with white collar jobs bought their kids a PlayStation. And of course the Brick Game Tetris console was everywhere. By the late 90's, yes, most people could afford a PlayStation, but as for myself I got a computer in the very early 00's and I would emulate the PSX and most N64 games just fine (my computer wasn't a high end one, but the emulators were good enough to play the games at 640x480 with a bilinear filter).
I suspect it won't be as much of an issue next gen, with Microsoft basically dropping out of the console market.
3rd party games will still want to launch on the Nintendo Switch 2, so it's still the same problem.
The Switch (even 2) is nowhere near the same class of performance as PlayStation or Xbox, games on them aren't comparable.
Yet those companies don't necessarily compete on performance and comparisons, but instead for their own profit. If Nintendo makes a profit from selling a device that runs a game at lower specs than Sony's, they're happy with it. Computing devices aren't driven by performance only.
Sure, but the point I was replying to was about the Switch 2 being able to make up for the loss of the Xbox as a PlayStation competitor. It can't.
They are definitely doing something but it seems it’s going to be more PC-like. Like even supporting 3rd party stores.
I’m intrigued.
It’s also that way on the C64 - while it came out in 1982, people figured out how to get 8-bit sound and high-resolution color graphics with multiple sprites only after 2000…
Maybe I ate too much marketing but it does feel like having the PS5 support SSDs raised the bar for how fast games are expected to load, even across platforms.
Not just loading times, but I expect more games do more aggressive dynamic asset streaming. Hopefully we'll get less 'squeeze through this gap in the wall while we hide the loading of the next area of the map' in games.
Technically the PS4 supported 2.5" SATA or USB SSDs, but yeah PS5 is first gen that requires SSDs, and you cannot run PS5 games off USB anymore.
It does, but I don't think that's necessarily a bad thing; they at least are willing to take some calculated risks with the architecture - since consoles have essentially collapsed into being a PC internally.
I don't think it's a bad thing either. Consoles are a curious breed in today's consumer electronics landscape, it's great that someone's still devoted to doing interesting experiments with it.
That was kind of true until the Xbox 360 and, later, Unity; those ended the era of consoles as machines made of quirks, and of game design as primarily a software architecture problem. The definitive barrier to entry for indie gamedevs before Unity was the ability to write a toy OS, a rich 3D engine, and a GUI toolkit by themselves. Only little storytelling skill was needed.
Consoles also partially had to be quirky dragsters because of Moore's Law - they had to be ahead of PCs by years, because they had to be at least comparable to PC games at the end of their lifecycle, not utterly obsolete.
But we've all moved on. IMO that is a good thing.
Funny, I think the biggest improvement of the PS5 is actually the crazy fast storage. No loading screens is a real gamechanger. I would love to get Xbox instant resume on PlayStation.
Graphics are nice but not number one.
The hardware 3D audio acceleration (basically fancy HRTFs) is also really cool, but almost no 3rd party games use it.
I've had issues with Xbox instant resume. Lots of "your save file has changed since the last time you played, so we have to close the game and relaunch" issues. Even when the game was suspended an hour earlier. I assume it's just cloud save time sync issues where the cloud save looks newer because it has a timestamp 2 seconds after the local one. Doesn't fill me with confidence, though.
Pretty sure they licensed a compression codec from RAD and implemented it in hardware, which is why storage is so fast on the PS5. Sounds like they're doing the same thing for GPU transfers now.
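The arithmetic behind that claim is straightforward (the numbers below are purely illustrative assumptions, not Sony's published figures):

    # Hardware decompression sits between the SSD and memory, so compressed
    # assets stream at the raw SSD rate and expand "for free" on the way in.
    raw_ssd_gb_s = 5.0        # assumed raw sequential read speed
    compression_ratio = 1.7   # assumed average ratio for game assets

    effective_gb_s = raw_ssd_gb_s * compression_ratio
    print(f"effective asset streaming rate: ~{effective_gb_s:.1f} GB/s")

    # Doing the same decompression on the CPU would eat cores the game needs
    # for simulation, which is why a fixed-function block is attractive.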
Storage on the PS5 isn't really fast. It's just not stupidly slow. At the time of release, the raw SSD speeds for the PS5 were comparable to the high-end consumer SSDs of the time, which Sony achieved by using a controller with more channels than usual so that they didn't have to source the latest NAND flash memory (and so that they could ship with only 0.75 TB capacity). The hardware compression support merely compensates for the PS5 having much less CPU power than a typical gaming desktop PC. For its price, the PS5 has better storage performance than you'd expect from a similarly-priced PC, but it's not particularly innovative and even gaming laptops have surpassed it.
The most important impact by far of the PS5 adopting this storage architecture (and the Xbox Series X doing something similar) is that it gave game developers permission to make games that require SSD performance.
So, you're saying they built a novel storage architecture that competed with state-of-the-art consumer hardware, at a lower price point. Five years later, laptops are just catching up, and that at the same price point, it's faster than what you'd expect from a PC.
The compression codec they licensed was built by some of the best programmers alive [0], and was later acquired by Epic [1]
I dunno how you put those together and come up with "isn't really fast" or "not particularly innovative".
Fast doesn't mean 'faster than anything else in existence'. Fast is relative to other existing solutions with similar resource constraints.
[0] https://fgiesen.wordpress.com/about/ [1] https://www.epicgames.com/site/en-US/news/epic-acquires-rad-...
Their storage architecture was novel in that they made different tradeoffs than off the shelf SSDs for consumer PCs, but there's absolutely no innovation aspect to copy and pasting four more NAND PHYs that are each individually running at outdated speeds for the time. Sony simply made a short-term decision to build a slightly more expensive SSD controller to enable significant cost savings on the NAND flash itself. That stopped mattering within a year of the PS5 launching, because off the shelf 8-channel drives with higher speeds were no longer in short supply.
"Five years later, laptops are just catching up" is a flat out lie.
"at the same price point, it's faster than what you'd expect from a PC" sounds impressive until you remember that the entire business model of Sony and Microsoft consoles is to sell the console at or below cost and make the real money on games, subscription services, and accessories.
The only interesting or at all innovative part of this story is the hardware decompression stuff (that's in the SoC rather than the SSD controller), but you're overselling it. Microsoft did pretty much the same thing with their console and a different compression codec. (Also, the fact that Kraken is a very good compression method for running on CPUs absolutely does not imply that it's the best choice for implementing in silicon. Sony's decision to implement it in hardware was likely mainly due to the fact that lots of PS4 games used it.) Your own source says that space savings for PS5 games were more due to the deduplication enabled by not having seek latency to worry about, than due to the Kraken compression.
This video is a direct continuation of the one where Cerny explains logic behind PlayStation 5 pro design and telling that the path forward for them goes into rendering near perfect low res image then upscaling it with neural networks to 4K.
How good will it be? Just look at current upscalers working on perfectly rendered images: photos. And they aren't even doing it in realtime. So errors, noise, and artefacts are all but inevitable. Those will be masked by post-processing techniques that will inevitably degrade image clarity.
It only takes a marketing psyop to alter the perception of the end user, with slogans along the lines of "Tired of pixel exactness, hurt by sharpness? Free YOUR imagination and embrace the future of ever-shifting vague forms and softness. Artifact stands for Art!"
I’m replaying CP2077 for the third time, and all the sarcastic marketing material and ads you find in the game don’t seem so sarcastic after all when you really think about the present.
If you think those are uncanny, wait until you hear the ads in GTAV.
Pepperidge Farm remembers the days of “Pißwasser, this is beer! Drive drunk, off a pier!”
And, luckily enough, craft beer in the US has only gotten better since then.
I don't know, I think it's conceivable that you could get much much better results from a custom upscale per game.
You can give much more input than a single low res frame. You could throw in motion vectors, scene depth, scene normals, unlit color; you could separately upscale opaque, transparent and post-process effects... I feel like you could really do a lot more.
Plus, aren't cellphone camera upscalers pretty much realtime these days? I think you're comparing full image generation to what would actually be happening here.
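For what it's worth, here's a minimal sketch (in PyTorch) of the kind of input stacking being described: the extra render targets are simply concatenated as additional channels before the upscaling network. Every layer size, channel count, and buffer choice here is invented for illustration; real upscalers like DLSS/FSR/PSSR are far more elaborate and also use temporal history.

    import torch
    import torch.nn as nn

    class GBufferUpscaler(nn.Module):
        """Toy 2x upscaler that conditions on auxiliary render targets, not just color."""
        def __init__(self):
            super().__init__()
            # color(3) + motion vectors(2) + depth(1) + normals(3) + unlit albedo(3) = 12 channels
            self.net = nn.Sequential(
                nn.Conv2d(12, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 3 * 4, kernel_size=3, padding=1),
                nn.PixelShuffle(2),  # rearranges channels into a 2x larger 3-channel image
            )

        def forward(self, color, motion, depth, normals, albedo):
            x = torch.cat([color, motion, depth, normals, albedo], dim=1)
            return self.net(x)

    # Low-res (1080p) dummy inputs -> 4K-ish output, batch of 1.
    up = GBufferUpscaler()
    h, w = 1080, 1920
    out = up(torch.randn(1, 3, h, w), torch.randn(1, 2, h, w),
             torch.randn(1, 1, h, w), torch.randn(1, 3, h, w), torch.randn(1, 3, h, w))
    print(out.shape)  # torch.Size([1, 3, 2160, 3840])

The point is just that the network sees much more than a finished low-res frame, which is exactly the per-game conditioning being argued for above.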
> I think it's conceivable that you could get much much better results from a custom upscale per game.
> You can give much more input than a single low res frame. You could throw in motion vectors, scene depth, scene normals, unlit color; you could separately upscale opaque, transparent and post-process effects... I feel like you could really do a lot more.
NVIDIA has already been down that road. What you're describing is pretty much DLSS, at various points in its history. To the extent that those techniques were low-hanging fruit for improving upscaler quality, it's already been tried and adopted to the extent that it's practical. At this point, it's more reasonable to assume that there isn't much low-hanging fruit for further quality improvements in upscalers without significant hardware improvements, and that the remaining artifacts and other downsides are hard problems.
I really hope that this doesn't come to pass. It's all in on the two worst trends in graphics right now. Hardware Raytracing and AI based upscaling.
The amount of drama about AI based upscaling seems disproportionate. I know framing it in terms of AI and hallucinated pixels makes it sound unnatural, but graphics rendering works with so many hacks and approximations.
Even without modern deep-learning based "AI", it's not like the pixels you see with traditional rendering pipelines were all artisanal and curated.
AI upscaling is equivalent to lowering bitrate of compressed video.
Given Netflix's popularity, most people obviously don’t value image quality as much as other factors.
And it’s even true for myself. For gaming, given the choice of 30fps at a higher bitrate, or 60fps at a lower one, I’ll take the 60fps.
But I want high bitrate and high fps. I am certainly not going to celebrate the reduction in image quality.
> I am certainly not going to celebrate the reduction in image quality
What about perceived image quality? If you are just playing the game, the chances of you noticing anything (unless you crank the upscaling up to the maximum) are near zero.
People have different sensitivities. For me personally, the reduction in image quality is very noticeable.
I am playing on a 55” TV at computer monitor distance, so the difference between a true 4K image and an upscaled one is very significant.
> AI upscaling is equivalent to lowering bitrate of compressed video.
When I was a kid people had dozens of CDs with movies, while pretty much nobody had DVDs. DVDs were simply too expensive, while Xvid let you compress an entire movie onto a CD while keeping good quality. Of course the original DVD release would've been better, but we were too poor, and watching ten movies at 80% quality was better than watching one movie at 100% quality.
DLSS can effectively quadruple FPS with minimal subjective quality impact. Of course a natively rendered image would be better, but most people are simply too poor to buy a gaming rig that plays the newest games at 4K 120 FPS on maximum settings. You can keep arguing as much as you want that a natively rendered image is better, but unless you send me money to buy a new PC, I'll keep using DLSS.
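For anyone wondering where that "quadruple" figure comes from, it's mostly just pixel counting. This assumes a DLSS-Performance-style 50%-per-axis internal resolution, that shading cost scales with rendered pixels, and it ignores the upscaler's own fixed overhead:

    # Rough pixel-count math behind "upscaling ~quadruples FPS".
    # Assumes shading cost scales with rendered pixels; ignores the upscaler's own cost.

    native = (3840, 2160)     # 4K output
    internal = (1920, 1080)   # "performance"-style 50%-per-axis internal resolution

    native_px = native[0] * native[1]
    internal_px = internal[0] * internal[1]

    print(f"native pixels:   {native_px:,}")
    print(f"internal pixels: {internal_px:,}")
    print(f"shading work ratio: {native_px / internal_px:.1f}x")   # -> 4.0x

In practice the real gain is lower than 4x because the upscaling pass itself costs time, which is part of what the replies below are arguing about.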
The contentious part from what I get is the overhead for hallucinating these pixels, on cards that also cost a lot more than the previous generation for otherwise minimal gains outside of DLSS.
Some [0] are seeing a 20 to 30% drop in actual rendered frames when activating DLSS, which means correspondingly more latency as well.
There are still games where it should be a decent tradeoff (racing or flight simulators? Infinite Nikki?), but it's definitely not a no-brainer.
[0] https://youtu.be/EiOVOnMY5jI
I also find them completely useless for any games I want to play. I hope that AMD would release a card that just drops both of these but that's probably not realistic.
They will never drop ray tracing; some new games require it. The only case where I think it's not needed is some kind of specialized office prebuilt desktop or mini PC.
What's wrong with hardware raytracing?
There are a lot of theoretical arguments I could give you about how almost all cases where hardware BVH can be used, there are better and smarter algorithms to be using instead. Being proud of your hardware BVH implementation is kind of like being proud of your ultra-optimised hardware bubblesort implementation.
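For context, the hardware BVH units being discussed accelerate roughly this kind of loop: walk a bounding-volume hierarchy, testing the ray against boxes until you reach leaf triangles. A toy CPU version follows, with an assumed node layout and the actual ray/triangle test left out; it's purely illustrative, not how any vendor's RT cores are built.

    from dataclasses import dataclass

    @dataclass
    class Node:
        box_min: tuple          # AABB corners
        box_max: tuple
        left: "Node" = None     # internal nodes have children...
        right: "Node" = None
        triangles: list = None  # ...leaves have geometry

    def hit_aabb(origin, direction, box_min, box_max):
        """Slab test: does the ray enter the box? (assumes no zero direction components)"""
        tmin, tmax = 0.0, float("inf")
        for a in range(3):
            t1 = (box_min[a] - origin[a]) / direction[a]
            t2 = (box_max[a] - origin[a]) / direction[a]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
        return tmin <= tmax

    def traverse(node, origin, direction, hits):
        """Recursive walk; this branchy, data-dependent traversal is what RT hardware accelerates."""
        if node is None or not hit_aabb(origin, direction, node.box_min, node.box_max):
            return
        if node.triangles is not None:
            hits.extend(node.triangles)   # real code would run ray/triangle intersection here
            return
        traverse(node.left, origin, direction, hits)
        traverse(node.right, origin, direction, hits)

Whether baking this particular traversal into silicon is the best use of die area is exactly the disagreement here.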
But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
A common argument is that we don't have fast enough hardware yet, or that developers haven't been able to use raytracing to its fullest yet, but it's been a pretty long damn time since this hardware became mainstream.
I think the most damning evidence of this is the just released Battlefield 6. This is a franchise that previously had raytracing as a top-level feature. This new release doesn't support it, doesn't intend to support it.
And in a world where basically every AAA release is panned for performance problems, BF6 has articles like this: https://www.pcgamer.com/hardware/battlefield-6-this-is-what-...
> But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
Pretty much this - even in games that have good ray tracing, I can't tell when it's off or on (except for the FPS hit) - I cared so little I bought a card not known to be good at it (7900XTX) because the two games I play the most don't support it anyway.
They oversold the technology/benefits and I wasn't buying it.
There always have been, and always will be, people who swear they can't see the difference with anything above 25hz, 30hz, 60hz, 120hz, HD, Full HD, 2K, 4K. Now it's ray-tracing, right.
I can see the difference in all of those. I can even see the difference between 120hz and 240hz, and now I play on 240hz.
Ray tracing looks almost indistinguishable from really good rasterized lighting in MOST conditions. In scenes with high amounts of gloss and reflections, it's a little more pronounced. A little.
From my perspective, you're getting, like, a 5% improvement in only one specific aspect of graphics in exchange for a 200% cost.
It's just not worth it.
Glad you intimately know how my perception of lighting in games works better than I do - though I'm curious how you do.
There’s an important distinction between being able to see the difference and caring about it. I can tell the difference between 30Hz and 60Hz but it makes no difference to my enjoyment of the game. (What can I say - I’m a 90s kid and 30fps was a luxury when I was growing up.) Similarly, I can tell the difference between ray traced reflections and screen space reflections because I know what to look for. But if I’m looking, that can only be because the game itself isn’t very engaging.
I think one of the challenges is that game designers have trained up so well at working within the non-RT constraints (and pushing back those constraints) that it's a tall order to make paying the performance cost (and new quirks of rendering) be paid back by RT improvements. There's also how a huge majority of companies wouldn't want to cut off potential customers in terms of whether their hardware can do RT at all or performance while doing so. The other big one is whether they're trying to recreate a similar environment with RT, or if they're taking advantage of what is only possible on the new technique, such as dynamic lighting and whether that's important to the game they want to make.
To me, the appeal is that game environments can now be way more dynamic because we're not limited by prebaked lighting. The Finals does this, but doesn't require ray tracing, and it's pretty easy to tell when ray tracing is enabled: https://youtu.be/MxkRJ_7sg8Y
But that's a game design change that takes longer
> Enabling raytracing in games tends to suck.
Because enabling raytracing means the game has to support non-raytracing too, which limits how the game's design can take advantage of raytracing being realtime.
The only exception to this I've seen is The Finals: https://youtu.be/MxkRJ_7sg8Y . Made by ex-Battlefield devs, the dynamic environment they shipped 2 years ago is on a whole other level even compared to Battlefield 6.
There's also Metro: Exodus, which the developers have re-made to only support RT lighting. DigitalFoundry made a nice video on it: https://www.youtube.com/watch?v=NbpZCSf4_Yk
naive q: could games detect when the user is "looking around" at breathtaking scenery and raytrace those? offer a button to "take picture" and let the user specify how long to raytrace? then for heavy action and motion, ditch the raytracing? even better, as the user passes through "scenic" areas, automatically take pictures in the background. Heck, this could be an upsell kind of like the RL pictures you get on the roller coaster... #donthate
(sorry if obvious / already done)
Even without RT I think it'd be beneficial to tune graphics settings depending on context: if it's an action/combat scene, there are likely aspects the player isn't paying attention to. I think the challenge is that it's more developer work, whether it's done by implementing some automatic detection or by manually setting it scene by scene during development (which studios probably already do where they can set up specific arenas). I'd guess an additional task is making sure there's no glaring difference between tuning levels, and setting a baseline you can't go beneath.
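As a sketch of what that kind of context-driven tuning could look like (every threshold, tier, and signal here is invented, purely to illustrate the idea):

    # Hypothetical heuristic for dialing expensive effects up or down based on context.
    # All names, thresholds, and tiers are made up for illustration.

    def choose_rt_quality(in_combat: bool, camera_speed_deg_per_s: float, is_cutscene: bool) -> str:
        if is_cutscene:
            return "high"        # slow, framed shots: spend the budget on ray-traced lighting
        if in_combat or camera_speed_deg_per_s > 180.0:
            return "off"         # fast motion: the player won't notice, protect the framerate
        if camera_speed_deg_per_s > 45.0:
            return "low"
        return "medium"          # lingering on scenery: raise quality (or offer a "photo mode")

    print(choose_rt_quality(in_combat=False, camera_speed_deg_per_s=10.0, is_cutscene=False))

The hard part, as noted above, is hiding the transitions between tiers so the switch itself isn't more distracting than the savings are worth.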
Not exactly the same but adaptive rendering based on viewer attention reminded me of this: https://en.wikipedia.org/wiki/Foveated_rendering
> But how about a practical argument instead.
With raytracing, lighting a scene goes from taking hours or days to just designating which objects emit light.
It will never be fast enough to work in real time without compromising some aspect of the player's experience.
Ray tracing is solving the light transport problem in the hardest way possible. Each additional bounce adds exponentially more computational complexity. The control flows are also very branchy when you start getting into the wild indirect lighting scenarios. GPUs prefer straight SIMD flows, not wild, hierarchical rabbit hole exploration. Disney still uses CPU based render farms. There's no way you are reasonably emulating that experience in <16ms.
The closest thing we have to functional ray tracing for gaming is light mapping. This is effectively just ray tracing done ahead of time, but the advantage is you can bake for hours to get insanely accurate light maps and then push 200+ fps on moderate hardware. It's almost like you are cheating the universe when this is done well.
The human brain has a built in TAA solution that excels as frame latencies drop into single digit milliseconds.
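To make "ray tracing done ahead of time" concrete, here's a toy sketch of baking a single lightmap texel offline with cosine-weighted hemisphere sampling. The `trace_radiance` callback stands in for a full offline ray tracer and is hypothetical; a real baker would also handle surface albedo and the constant radiometric factors properly.

    import math, random

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def cosine_weighted_direction(normal):
        """Sample a hemisphere direction around `normal` (unit vector), biased toward it."""
        u1, u2 = random.random(), random.random()
        r, phi = math.sqrt(u1), 2.0 * math.pi * u2
        x, y, z = r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1)
        # build an ad-hoc tangent frame around the normal
        axis = (1.0, 0.0, 0.0) if abs(normal[0]) < 0.9 else (0.0, 1.0, 0.0)
        b = normalize(cross(normal, axis))
        t = cross(b, normal)
        return tuple(x * t[i] + y * b[i] + z * normal[i] for i in range(3))

    def bake_texel(position, normal, trace_radiance, samples=1024):
        """Average incoming light over the hemisphere; offline, so sample counts can be huge."""
        total = 0.0
        for _ in range(samples):
            d = cosine_weighted_direction(normal)
            total += trace_radiance(position, d)   # hypothetical offline ray tracer call
        return total / samples                     # stored in the lightmap, sampled at runtime

Because all of this runs at build time, you can afford thousands of samples per texel, which is where the "cheating the universe" feeling comes from at runtime.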
The problem is the demand for dynamic content in AAA games. Large exterior and interior worlds with dynamic lights, day and night cycles, glass and translucent objects, mirrors, water, fog and smoke. Everything should be interactable and destructible. And everything should be easy for artists to set up.
I would say the closest we can get are workarounds like radiance cascades, but anything other than raytracing is just an ugly workaround that falls apart in dynamic scenarios. And don't forget that baking times, and storing those results, which leads to massive game sizes, are a huge negative.
Funnily enough raytracing is also just an approximation to the real world, but at least artists and devs can expect it to work everywhere without hacks (in theory).
Manually placed lights and baking not only take time away from iteration but also take a lot of disk space for the baked shadow maps. RT makes development faster for the artists; I think DF even mentioned that doing Doom Eternal without RT would take so much disk space it wouldn’t be possible to ship it.
edit: not Doom Eternal, it’s Doom: The Dark Ages, the latest one.
The quoted number was in the range of 70-100 GB if I recall correctly, which is not that significant for modern game sizes. I’m sure a lot of people would take that as an optional download in exchange for a 2-3x higher framerate. I don’t think anyone realistically complains about video game lighting looking too “gamey” in the middle of an intense combat sequence. Why optimize a Doom game of all things for standing still and side-by-side comparisons? I’m guessing Nvidia paid good money for making RT tech mandatory. And as for the shortened development cycle, perhaps it’s cynical, but I find it difficult to sympathize when the resulting product is still sold for €80.
Devs get paid either way, consumers just pay for more dev waiting instead of more game.
You still have to manually place lights. Where do you think the rays come from (or rather, go to).
It's fast enough today. Metro Exodus, an RT-only game, runs just fine at around 60 fps for me on a 3060 Ti. Looks gorgeous.
Light mapping is a cute trick and the reason why Mirror's Edge still looks so good after all these years, but it requires doing away with dynamic lighting, which is a non-starter for most games.
I want my true-to-life dynamic lighting in games thank you very much.
How is Metro Exodus Enhanced Edition (that is purely raytraced) compromised compared to regular version that uses traditional lighting?
> It will never be fast enough to work in real time ...
640Kb surely is enough!
Much higher resource demands, which then require tricks like upscaling to compensate. Also, you get uneven competition between GPU vendors, because in practice it is not generic hardware ray tracing but Nvidia raytracing.
On a more subjective note, you get less interesting art styles because studios somehow have to cram raytracing in there as a value proposition.
1. People somehow think that just because today's hardware can't handle RT all that well it will never be able to. A laughable position of course.
2. People turn on RT in games not designed with it in mind and therefore observe only minor graphical improvements for vastly reduced performance. Simple chicken-and-egg problem, hardware improvements will fix it.
Not OP, but a lot of the current kvetching about hardware based ray tracing is that it’s basically an Nvidia-exclusive party trick, similar to DLSS and PhysX. AMD has this inferiority complex where Nvidia must not be allowed to innovate with a hardware+software solution; it must be pure hardware so AMD can compete on their terms.
The gimmicks aren't the product, and the customers of frontier technologies aren't the consumers. The gamers and redditors and smartphone fanatics, the fleets of people who dutifully buy, are the QA teams.
In accelerated compute, the largest areas of interest for advancement are 1) simulation and modeling and 2) learning and inference.
That's why this doesn't make sense to a lot of people. Sony and AMD aren't trying to extend current trends, they're leveraging their portfolios to make the advancements that will shape future markets 20-40 years out. It's really quite bold.
I disagree. From what I’ve read if the game can leverage RT the artists save a considerable amount of time when iterating the level designs. Before RT they had to place lights manually and any change to the level involved a lot of rework. This also saves storage since there’s no need to bake shadow maps.
So what stops the developers from iterating on a raytraced version of the game during development, and then executing a shadow precalculation step once the game is ready to ship? Make it an optional download, like the high resolution texture packs. Otherwise they're offloading that processing and energy cost onto consumer PCs, and doing so in a very inefficient manner.
Looks different. But for quick previs before the bake, this is done.
So far the AI upscaling/interpolating has just been used to ship horribly optimized games with a somewhat acceptable framerate
And they're achieving "acceptable" frame rates and resolutions by sacrificing image quality in ways that aren't as easily quantified, so those downsides can be swept under the rug. Nobody's graphics benchmark emits metrics for how much ghosting is caused by the temporal antialiasing, or how much blurring the RT denoiser causes (or how much noise makes it past the denoiser). But they make for great static screenshots.
Nintendo is getting it right (maybe): focus on first-party exclusive games and, uh, a pile of indies and ports from the PS3 and PS4 eras.
Come to think of it, Sony is also stuck in the PS4 era since PS5 pro is basically a PS4 pro that plays most of the same games but at 4K/60. (Though it does add a fast SSD and nice haptics on the DualSense controller.) But it's really about the games, and we haven't seen a lot of system seller exclusives on the PS5 that aren't on PS4, PC, or other consoles. (Though I'm partial to Astro-bot and also enjoyed timed exclusives like FF16 and FF7 Rebirth.)
PS5 and Switch 2 are still great gaming consoles - PS5 is cheaper than many GPU cards, while Switch 2 competes favorably with Steam Deck as a handheld and hybrid game system.
So this is AMD catching up with Nvidia in the RT and AI upscaling/frame gen fields. Nothing wrong with it, and I am quite happy as an AMD GPU owner and Linux user.
But the way it is framed as a revolutionary step and as a Sony collab is a tad misleading. AMD is competent enough to do it by itself, and this will definitely show up in PC and the competing Xbox.
I think we don't have enough details to make statements like this yet. Sony have shown they are willing to make esoteric gaming hardware in the past (cell architecture) and maybe they'll do something unique again this time. Or, maybe it'll just use a moderately custom model. Or, maybe it's just going to use exactly what AMD have planned for the next few year anyway (as you say). Time will tell.
I'm rooting for something unique because I haven't owned a console for 20 years and I like interesting hardware. But hopefully they've learned a lesson about developer ergonomics this time around.
>Sony have shown they are willing to make esoteric gaming hardware in the past (cell architecture)
Just so we’re clear, you’re talking about a decision made over 20 years ago that didn’t really pan out.
PS6 will be an upgraded PS5 without question. You aren’t ever going to see a massive divergence away from the PC everyone took the last twenty years working towards.
The landscape favors Microsoft, but they’ll drop the ball, again.
> you’re talking about a decision that didn’t really pan out made over 20 years ago.
The PS3 sold 87m units, and more importantly, it sold more than the Xbox 360, so I think it panned out fine even if we shouldn't call it a roaring success.
It did sell less than the PS2 or PS4, but I don't think that had much to do with the Cell architecture.
Game developer hated it, but that's a different issue.
I do agree that a truly unusual architecture like this is very unlikely for the next gen though.
It sold well, but there are multiple popular games that were built for the PS3 that have not come to any other platform because porting them is exceptionally hard.
they were hoping Cell would get more widespread use though, which it did not
Digital Foundry just released a video discussing this:
https://youtu.be/Ru7dK_X5tnc
I really dislike the focus on graphics here, but I think a lot of people are missing a big chunk of the article that's focused on efficiency.
If we can get high-texture, high-throughput content like dual 4K streams but with 1080p bandwidth, we can get VR that isn't as janky. If we can get lower power consumption, we can get smaller (and cooler) form factors, which means we might see a future where the PlayStation Portal is the console itself. I'm about to get on a flight to Sweden, and I'd kill to have something like my Steam Deck but running way cooler, way more powerful, and less prone to render errors.
I get the feeling Sony will definitely focus on graphics as that's been their play since the 90s, but my word if we get a monumental form factor shift and native VR support that feels closer to the promise on paper, that could be a game changer.
How about actually releasing games? GT7 and GoW Ragnarok are the only worthwhile exclusives of the current gen. This is hilariously bad for a 5-year-old console.
This. I would also add Returnal to this list, but otherwise I agree. It's hard to believe it's been almost 5 years since the release of the PS5 and there are still barely any games that look as good as The Last of Us 2 or Red Dead Redemption 2, which were released on PS4.
I would agree with this. A lot of PS5 games using UE5+ with all its features run at sub-1080p30 (sometimes sub-720p30) upscaled to 1440p/4K and still look and run way, way worse than TLOU2/RDR2/Death Stranding 1/Horizon 1 on the PS4. Death Stranding 2, Horizon 2, and the Demon's Souls remake look and run far, far better (on a purely technical level) than any other PS5 game, and they all use rasterized lighting.
Ratchet and Clank is a good one too.
So we're getting a new console just to play AI-upscaled PS4 and PS5 "remasters"... and I suspect it’ll probably come without any support for physical media. The PS5 will be my last console. There's no point anymore.
There sure is a lot of visionary(tm) thinking out there right now about the future of gaming, but what strikes me is how few of those visionaries(tm) have ever actually developed and taken a game to market.
Not entirely unlike how many AI academics who step-functioned their compensation a decade ago by pivoting to the tech industry had no experience bringing an AI product to market, but certainly felt free to pontificate on how things are done.
I eagerly await the shakeout due from the weakly efficient market as the future of gaming ends up looking like nothing anyone imagineered.
Seems like the philosophy here is, if you're going to do AI-based rendering, might as well try it across different parts of the graphics pipeline and see if you can fine-tune it at the silicon level. Probably a microoptimization, but if it makes the PS6 look a tiny bit better than the Xbox, people will pay for that.
Hopefully their game lineup is not as underwhelming as the ps5 one.
underwhelming? what do you mean?
every year, Playstation ranks very high when it comes to GOTY nominations
just last year, Playstation had the most nominations for GOTY: https://x.com/thegameawards/status/1858558789320142971
not only that, but PS5 has more 1st party games than Microsoft's Xbox S|X
1053 vs 812 (that got inflated with recent Activision acquisition)
https://en.wikipedia.org/wiki/List_of_PlayStation_5_games
https://en.wikipedia.org/wiki/List_of_Xbox_Series_X_and_Seri...
It's important to check the facts before spreading random FUD
PS5 had the strongest lineup of games this generation, hence why they sold this many consoles
Still today, consumers are attracted to PS5's lineup, and this is corroborated by facts and data https://www.vgchartz.com/
In August for example, the ratio between PS5 and Xbox is 8:1; almost as good as the new Nintendo Switch 2, and the console is almost 5 years old!
You say "underwhelming", people are saying otherwise
Yeah, I don’t recall a single original game from the PS5 exclusive lineup (that wasn’t available for PS4). We did get some remakes and sequels, but the PS5 lineup pales in comparison to the PS4 one.
Also, to my knowledge, the PS5 still lags behind the PS4 in terms of sales, despite the significant boost that COVID-19 provided.
The PS4 lineup pales in comparison to the PS3 lineup, which pales in comparison to the PS2 lineup, which pales in comparison to the PS1 lineup.
Each generation has around half the number of games as the previous. This does get a bit murky with the advent of shovelware in online stores, but my point remains.
I think all this proves is that games are now ridiculously expensive to create while meeting the expected quality standards. Maybe AI will improve this in the future. Take-Two has confirmed that GTA6's budget has exceeded US$1 billion, which is mind-blowing.
The most extreme example of this is that Naughty Dog, one of Sony's flagship first-party studios, has still yet to release a single original game for the PS5 after nearly five years. They've steadily been making fewer and fewer brand new games each generation and it's looking like they may only release one this time around. AAA development cycles are out of control.
Returnal is probably one the best 1st party games available and it’s a PS5 exclusive.
Its sequel Saros is coming out next year too.
There’s also Spider-Man 2, Ratchet and Clank Rift Apart, Astro Bot, Death Stranding 2, Ghost of Yotei…
Their output hasn’t been worse than the PS4 at all imo.
There's simply no point in buying that console when it has like what, 7 exclusive titles that aren't shovelware? 7 titles after 5 years? And this number keeps going down because games are constantly being ported to other systems.
>constantly being ported to other systems.
And why wouldn’t they? In many cases they’re some compiler settings and a few drivers away from working.
That's not an argument in favor of PS5.
I don’t say it was. If anything it’s an argument in favor of Xbox with DirectX.
> the new architecture is focused on more efficient running of the kinds of machine-learning-based neural networks
so fake frame generation?
Yes, duh. It's a console, resolution scaling is the #1 foremost tool in their arsenal for stabilizing the framerate. I can't think of a console game made in the past decade that doesn't "fake frames" at some part of the pipeline.
I'll also go a step further - not every machine-learning pass is frame generation. Nvidia uses AI for DLAA, a form of DLSS that works with 100% input resolution as a denoiser/antialiasing combined pass. It's absolutely excellent if your GPU can keep up with the displayed content.
I can't help but think that Sony and AMD would be better off developing a GPU-style PCI-card module that has all their DRM and compute and storage on the board, and then selling consoles that are just normal gaming PCs in a conveniently-sized branded case with a PS card installed. If the card was sold separately at $300-400 it would instantly take over a chunk of the PC gaming market and upgrades would be easier.
"Uh oh, I don't like that sound of that..."
clicks article
"Project Amethyst is focused on going beyond traditional rasterization techniques that don't scale well when you try to "brute force that with raw power alone," Huynh said in the video. Instead, the new architecture is focused on more efficient running of the kinds of machine-learning-based neural networks behind AMD's FSR upscaling technology and Sony's similar PSSR system."
"Yep..."
Sigh.
Indeed. It is a "rethink" only for a very small value of /think/.
Graphics could stand to get toned down. It sucks to wait 7 years for a sequel to your favorite game. There was a time where sequels came out while the games were still relevant. We are getting sequels 8 years or more apart for what? Better beard graphics? Beer bottles where the liquid reacts when you bump into it? Who cares!
We are 5 full years into the PS5's lifetime, and these are the only games that are exclusive to the console. I see this as a test ground for the next thing on PC.
Why not also include a mini AMD EPYC CPU with 32 cores? That way games would start to get much better at using multiple cores.
I think this is probably on the docket. Epic seems to be in a push to offload a lot of animation work to more cores. The industry is going that way and that was a big topic at their last conference.
Soon real games will be 10 pixels, and everything else is upscaled
This reminds me of the PlayStation 2 developer manual which, when describing the complicated features of the system, said something like "there is no profit in making it easy to extract the most performance from the system."
Both raytracing and NPUs use a lot of memory bandwidth, and that's the resource scaling the least over time. Time will tell whether just going for more programmable compute would have been better.
A new PS console already?
PS5 will be remembered as the worst PS generation.
That would still be PS3 for me.
Cell processor 2: electric boogaloo
Seems they didn’t learn from the PS3, and that exotic architectures don't drive sales. Gamers don’t give a shit and devs won’t choose it unless they have a lucrative first party contract.
This isn’t exotic at all. This is the future roadmap of AMD even for their own PC GPUs.
Since Mark Cerny became the hardware architect of PS they have not made the mistakes of the PS3 generation at all.
Custom graphics architectures aren't always a disaster - the Switch 2 is putting up impressive results with their in-house DLSS acceleration.
Now, shackling yourself to AMD and expecting a miracle... that I cannot say is a good idea. Maybe Cerny has seen something we haven't, who knows.
The entire Switch 1 game library is free to play on emulators. They probably put in a custom accelerator to prevent reverse engineering. A consequence of using weaker-spec parts than their competitors.
The Switch 1 also had CUDA cores and other basic hardware accelerators. To my knowledge (and I could be wrong), none of the APIs that Nintendo exposed even gave access to those fancy features. It should just be calls to NVN, which can be compiled into Vulkan the same way DXVK translates DirectX calls.
What is "in-house dlss acceleration" in your context? What's in-house about it?
It's better off if I let Digital Foundry take it from here: https://youtu.be/BDvf1gsMgmY
TL:DW - it's not quite the full-fat CNN model but it's also not a uselessly pared-back upscaler. Seems to handle antialiasing and simple upscale well at super low TDPs (<10w).
Ok, but that's still nVidia DLSS tech from desktop, what's Nintendo in-house about it?
It's literally not. From the description of TFA:
Why? Hasn't it only been 5 years according to the public? Stop being greedy.
I wonder how many variants of the PS6 they'll go through before they get a NIC that works right.
As someone working at an ISP, I am frustrated with how bad Sony has mangled the networking stack on these consoles. I thought BSD was supposed to be the best in breed of networking but instead Sony has found all sorts of magical ways to make it Not Work.
From the PS5 variants that just hate 802.11ax to all the gamers making wild suggestions like changing MTU settings or DNS settings just to make your games work online... man, does Sony make it a pain for us to troubleshoot when they wreck it.
Bonus points that they took away the web browser, so we can't even do forward-facing troubleshooting without going through the obtuse third-party account-linking flow to sneak out into a browser and run a proper speed test against Speedtest/Fast to show that "no, it's PSN being slow, not us".
No one is gonna give you groundbreaking tech for your electronic gadget... as IBM showed when they created the Cell for Sony and then gave almost the same tech to Microsoft :D
I don’t think they ever claimed that. Every time Mark Cerny discusses PS hardware he always mentions that it’s a collaboration, so whatever works for AMD they can use on their own GPUs, even for other clients.
I'm just saying no sane company is gonna give you any edge in chip tech.
Maybe Sony should focus on getting a half-respectable library out on the PS5 before touting the theoretical merits of the PS6? It’s kind of wild how thin they are this go around. Their live service gambles clearly cost them this cycle and the PSVR2 landed with a thud.
Frankly after releasing the $700 pro and going “it’s basically the same specs but it can actually do 4K60 this time we promise” and given how many friends I have with the PS5 sitting around as an expensive paper weight, I can’t see a world where I get a PS6 despite decades of console gaming. The PS5 is an oversized final fantasy machine supported by remakes/remasters of all their hits from the PS3/PS4 era. It’s kind of striking when you look at the most popular games on the console.
Don’t even get me started on Xbox lol
It has plenty of games not including cross gen games and remasters. Compared to the PS4 the output has been completely fine.
But it’s a fact that development times continue to increase. That’s not a Sony thing, though; it’s happening to every publisher.
It really doesn’t though. The library stacked against PS4’s doesn’t even compare unless you want to count cross platform and even then PS4 still smokes it. The fact that Helldivers 2 is one of the only breakout successes they’ve had (and it didn’t even come from one of their internal studios) says everything. And of course they let it go cross platform too so that edge is gone now. All their best studios were tied up with live service games that have all been canceled. They wasted 5+ years and probably billions if we include the missed out sales. The PS4 was heavily driven by their close partner/internal teams and continue to carry a significant portion of the PS5’s playerbase.
If you don’t need Final Fantasy or to (re)play improved PS4 games, the PS5 is an expensive paperweight and you may as well just grab a series S or something for half the price, half the shelf space, and play 90% of the same games.
Let me ask you this: should we really be taking this console seriously if they’re about to go an entire cycle without naughty dog releasing a game?
I disagree, it has plenty of great 1st party exclusives and even 3rd party.
And I own every console.
What are the exclusives?
We don’t need to flex about owning every console. I own basically every one as well except PS5. I kept waiting and waiting for a good sale and a good library just like PS4. The wait has not rewarded me lol
It’s not a flex at all, just for reference.
I get every console at launch, so I went from PS4 to Pro to PS5 to Pro.
At launch I really enjoyed Demon’s Souls, which I never played on PS3; fantastic game. Then Returnal came out, probably my favorite 1st party game so far; really looking forward to its sequel Saros next year.
I also played Ragnarok, GT7 (which with PSVR2 is fantastic), and Horizon 2, and yes, all of these also came out for PS4 but are undoubtedly better on the PS5. I’d get a PS5 just for the fast loading; it’s awesome.
There’s also Spider-Man 2, Ratchet, Death Stranding 2, Ghost of Yotei, and I’m probably leaving out others, but there are plenty of great 1st party exclusives. There’s also a bunch of great 3rd party exclusives as well.
I don’t game on PC though, used to when I was younger but I prefer to play on consoles now and use the computers for work and other things.
All of these are available on PC and/or Xbox. Several are PS4. Not a single one is exclusive.
I understand these things don’t bother you but you can’t say it has plenty of exclusives when it literally does not. You just aren’t bothered by that fact and that is fine. But it makes me question what I would be buying when I have more affordable ways of playing all of these games since again, they have virtually no exclusives and their best studios have dropped little to nothing due to their failed gamble with live service.
The PS3/PS4 had several single player titles that you could only play on PlayStation and were made specifically for them. They weren’t resting on the laurels of previous releases and just giving them a new coat of paint. They had bigger, better, more exclusive libraries. The PS4 in particular had clear value. No one had to argue for it. The library is considered one of the best.
I am a big proponent of consoles believe it or not but frankly the PS5 is a head scratcher for me at the end of the day. Especially for the (now increased) price.
> All of these are available on PC and/or Xbox.
That's not correct. God of War Ragnarok and Ghost of Yotei are not on PC / Xbox. But they will probably eventually make it to PC.
Why do you think that releasing games on the PC (a year or two after the PlayStation release) is a bad thing? It means you don't need to buy a PlayStation to play their first-party titles, assuming you're a patient gamer. It also means Sony makes more money from the bigger PC market. Win-win
Ragnarok is definitely on PC. I saw it on Steam again like a week ago.
You’re correct about Yotei, so yes 1 (likely timed) exclusive 5 years in. I think my overall point clearly still stands.
The majority of those games came out first on PS; some of them get released later on PC, and that’s fine.
Like I said, since I don’t want to play on PC the best option for me it’s to play them on the latest PS hardware, that a game also comes out elsewhere doesn’t detriment my experience.
Again it’s not about your preference. My initial comment was “they don’t have exclusives,” which you contested, then shifted to “well it doesn’t bother me.”
I’m not debating preference. I’m saying they don’t have a robust library for the PS5 compared to previous hardware and they lack exclusives, yet here they are hyping the PS6. If you are happy with your PS5 then great! Many people are. But the library is thinner and depends on old titles. That is just reality.
Why should I expect the library to be better next iteration when they’ve farted their way through the last 5+ years and seem to have no interest doing otherwise?