clarry: What OS? How does it limit it?

I'm not familiar with any OS that peeks into the instruction stream of a running program in order to make scheduling decisions (or to measure CPU usage, for that matter). A program is either running or it's not. If it never yields and isn't forced to stop, it'll use up 100% of the CPU.
ColJohnMatrix: Oh boy... <facepalm>

There's CPU priority, as per the multi-tasking portion of the OS. It doesn't need to "peek into the instruction stream". About the rest, you're right. The OS can control (to a point) how many resources go to a task. This is a pointless, hair-splitting side aspect of this topic.
I don't think it's pointless. Priorities don't solve high CPU usage or the stutter caused by random task scheduling; they're just a way to choose who wins when you have multiple hogs (or one hog and one important task) and the scheduler needs to make a decision now. Priorities matter less and less when every program is well behaved and yields when it has nothing to do. Otherwise, priorities are a good way to prevent the system from becoming unresponsive when a poorly behaved process spins all the time. And the scheduler does that by stopping the spinner in order to run the more important task; when that happens at the wrong time, you get a missed frame. Yey, stutter.

A low priority task can and will still burn 100% CPU if there's nothing else to run... yey, hot CPU and loud fans and low battery life :)

I think it's important to provide a timing mechanism that isn't from the DOS era. Of course, I don't mind having busylooping as an option for systems that do not provide a good alternative.
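For illustration, here's a minimal sketch of such a mechanism on a POSIX system (the function names and the 2 ms margin are my own choices, not a standard API): sleep most of the way to the frame deadline, and busyloop only the last stretch to sidestep coarse sleep resolution.

#include <time.h>

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

void wait_for_next_frame(long long deadline_ns)
{
    const long long margin_ns = 2000000LL;  /* busy-wait only the last 2 ms */
    long long t = now_ns();
    if (deadline_ns - t > margin_ns) {
        long long sleep_ns = deadline_ns - t - margin_ns;
        struct timespec req;
        req.tv_sec  = sleep_ns / 1000000000LL;
        req.tv_nsec = sleep_ns % 1000000000LL;
        nanosleep(&req, NULL);              /* coarse wait: yields the CPU */
    }
    while (now_ns() < deadline_ns)
        ;                                   /* fine wait: brief busyloop */
}

Each frame you'd advance deadline_ns by, say, 16666667 ns for a 60 Hz target; the busyloop tail is the DOS-era trick, just confined to the last couple of milliseconds.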
Post edited September 18, 2019 by clarry
clarry: I think it's important to provide a timing mechanism that isn't from the DOS era. Of course, I don't mind having busylooping as an option for systems that do not provide a good alternative.
Am I making a game for you? No? Then it's pointless. I was merely illustrating the approach to the OP. I've acknowledged that there are more up-to-date ways to handle this problem. This isn't a game development forum, and we're not discussing best programming practices here in a modern context. I think you'd be very surprised at what's commonly done in game development outside your idealistic confirmation bias of "how it should be done". This exact approach is in far more games than I think you'd be able to handle without a meltdown. Are you so insecure that this is the only way you can feel good about yourself? By attempting to strawman someone making a simple point with excessive, reaching, off-topic nitpicking? F*** OFF!
Post edited September 18, 2019 by ColJohnMatrix
dtgreene: My question is, why doesn't the modern game run too fast?
Because they aren't tied to the core CPU clock speed, like very old DOS games.
dtgreene: Furthermore, if writing a game, what would be the best way (on a modern computer) to control the speed of the game so that it doesn't run too fast? How is this done in modern games?
At the most simplistic level: modern games use something other than the core CPU clock for their timing.

How this is done, and how it should be done correctly, is easily a topic that could become a doctoral dissertation. At an extremely high level, this means: break your game into threads (at the very least: graphics, AI, input and potentially network), then pick an appropriate timing method for each thread, as well as having a master scheduler as part of the core game logic which ensures all the threads stay in step with each other. There are a wide variety of timing events that you can cue off, and knowing when and how to use each -- and how to keep different methods relatively in sync with each other -- is what separates a true Developer from someone who downloaded an SDK, middleware package and set of prefab assets, and glued them all together to make an "Early Access Game".
ColJohnMatrix: off-topic nitpicking? F*** OFF!
dtgreene is a technical person and so am I. The devil is in the details, and I believe we both find it interesting to discuss these details. I'm sorry that you don't.

And yeah, I know plenty of game developers ship subpar code. The results speak for themselves. Of course, it helps that the mass market seems to be oblivious to stutter, latency, and other performance issues that can be traced to poor design. And people who complain are shut up by telling them to buy a better computer. "Sorry if that doesn't work out for you."

Years later technical people who understand and care about the details will fix things with source ports and mods if possible.
Post edited September 18, 2019 by clarry
Ryan333: At an extremely high level, this means: break your game into threads (at the very least: graphics, AI, input and potentially network)
At this point, you open yourself to all the complexities of multi-threaded programming, including all the bugs that can result; for a game that does not realistically need multiple CPU cores, it may be best to try to avoid doing this.

Threads are like pointers; they're powerful, but they're also a source of many hard-to-find bugs.
Ryan333: At an extremely high level, this means: break your game into threads (at the very least: graphics, AI, input and potentially network)
dtgreene: At this point, you open yourself to all the complexities of multi-threaded programming, including all the bugs that can result; for a game that does not realistically need multiple CPU cores, it may be best to try to avoid doing this.

Threads are like pointers; they're powerful, but they're also a source of many hard-to-find bugs.
Yep. The stuff at work right now...

Someone wanted to make a performant design and thought to use threads. Sadly, they didn't think long and hard enough about the design, so now it's a nightmare of races, locking, potential deadlocks, and poor performance, and it's generally just painful to work with.

Anyway, the old-fashioned game loop does not need threads to avoid issues with CPUs that are too fast. It really just boils down to choosing between a fixed time step (requires interpolation or extrapolation, and might benefit from moving some things like the mouse cursor / camera rotation out of the fixed domain) or a dynamic time step (cap the framerate to avoid problems with very high FPS; run multiple ticks per frame on lag spikes). For limiting framerate, it's vsync or sleep or busyloop, all with their own caveats (latency, sleep resolution, CPU hogging & scheduler woes).
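Here's the fixed-time-step variant in miniature (a sketch only; now_ms(), running(), update(), render() and the 16 ms tick are placeholders, not any particular engine's API):

extern long long now_ms(void);      /* monotonic milliseconds, assumed given */
extern int  running(void);
extern void update(void);           /* advances the simulation one fixed tick */
extern void render(double alpha);   /* draws, blending states by alpha in [0,1) */

#define TICK_MS 16                  /* fixed simulation step */
#define MAX_TICKS_PER_FRAME 5       /* cap catch-up work on lag spikes */

void game_loop(void)
{
    long long prev = now_ms();
    long long acc  = 0;
    while (running()) {
        long long t = now_ms();
        acc += t - prev;            /* accumulate real time... */
        prev = t;

        int ticks = 0;
        while (acc >= TICK_MS && ticks++ < MAX_TICKS_PER_FRAME) {
            update();               /* ...and consume it in fixed steps */
            acc -= TICK_MS;
        }
        if (acc >= TICK_MS)
            acc = TICK_MS - 1;      /* hit the cap: drop the backlog */

        render((double)acc / TICK_MS);  /* draw between the last two ticks */
    }
}

The leftover fraction in acc is exactly why interpolation (or extrapolation) is needed: rendering happens between simulation steps, not on them.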

Imo performance (with multiple cores) is the only good reason to introduce threads. The other reason people use threads is that they rely on blocking APIs with no non-blocking alternative (hearsay suggests this is common in the Windows world; I don't know if it's true, but I've run into more than a few Windows developers who think threads are the only way to do things we'd do with a poll(2) loop in the Unix world); in such a case, they'd just employ a thread to perform the blocking operation while letting the rest of the code run in another thread. Also, if you need low latency for some part of the program (e.g. audio), then a separate thread for that might help. Then again, simple games that stream music and pre-recorded samples (possibly with simple effects slapped on) should have no trouble buffering 20 ms (or more) of audio, which guarantees no drop-outs at >50 fps. It's not unusual for older games to have buffers anywhere from 50 to 100 ms.
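To make the poll(2) loop concrete (a sketch; sock_fd and the ~16 ms timeout are stand-ins I picked, not anything from a real project):

#include <poll.h>
#include <unistd.h>

/* One thread, no blocking: wait on stdin and a socket at once, with a
   timeout that doubles as a coarse tick timer. */
void io_loop(int sock_fd)
{
    struct pollfd fds[2];
    fds[0].fd = STDIN_FILENO; fds[0].events = POLLIN;
    fds[1].fd = sock_fd;      fds[1].events = POLLIN;

    for (;;) {
        int n = poll(fds, 2, 16);     /* block at most ~16 ms */
        if (n < 0)
            break;                    /* real code would handle EINTR etc. */
        if (fds[0].revents & POLLIN) {
            /* stdin is readable; a read() here won't block */
        }
        if (fds[1].revents & POLLIN) {
            /* socket is readable; a recv() here won't block */
        }
        /* n == 0: the timeout fired, a natural place to run a tick */
    }
}

And to put a number on the buffering: at a 44.1 kHz sample rate (my assumption), 20 ms of audio is 0.020 × 44100 = 882 sample frames, and at 50+ fps you refill at least once every 20 ms, so the buffer can only run dry if a frame takes longer than that.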
Post edited September 19, 2019 by clarry
clarry: dtgreene is a technical person and so am I. The devil is in the details, and I believe we both find it interesting to discuss these details. I'm sorry that you don't.
lol Excuse me, what? I already did and added actual code that provides a working example. Your response stepped beyond that into the realm of criticism, obvious condescension, and strawman, and frankly I've seen nothing from you giving you the credentials to do so. You're some unbelievable piece of work...

But, by all means, show your code sample that's a vast improvement... I'm waiting.

clarry: And yeah, I know plenty of game developers ship subpar code. The results speak for themselves. Of course, it helps that the mass market seems to be oblivious to stutter, latency, and other performance issues that can be traced to poor design. And people who complain are shut up by telling them to buy a better computer. "Sorry if that doesn't work out for you."
You're conflating multiple issues, somehow attributing them to this approach, and then by proxy deflecting them onto me for demonstrating it... What you seem to fail to grasp is that these things aren't the issues you think they are. Not unless you're working on a much larger game design or something for modern consoles, where it becomes critical. A vast number of games use the same technique listed above with minor tweaks, generally adding delta physics, and the things you seem to think are issues just aren't. In short, your thoughts on this are vain nonsense that sounds great on paper but doesn't hold up to obdurate practical reality. You think you're "being technical", but you're just being pathetically pedantic without cause.
Post edited September 19, 2019 by ColJohnMatrix
dtgreene: At this point, you open yourself to all the complexities of multi-threaded programming, including all the bugs that can result; for a game that does not realistically need multiple CPU cores, it may be best to try to avoid doing this.

Threads are like pointers; they're powerful, but they're also a source of many hard-to-find bugs.
Exactly. Multi-threaded programming opens Pandora's box on a whole host of additional problems which aren't worth solving until it becomes a necessity. Books like "The Art of Multiprocessor Programming" detail the problems exhaustively.

When the difference in a component goes from something that needs a small code snippet to explain to an entire book outlining the theory and practice of the thing, you know you've gone from practical and reasonable to the law of diminishing returns. That's why most developers don't deal with this unless they're on large-scale AAA projects, or something approaching that, which probably uses an engine where this aspect is for the most part already solved.

I'd make the additional argument that threads are a LOT more advanced than pointers in the programming techniques they call for. Pointers are easy. Pretty much all games use pointers. They don't require theory. They're a known domain, and they're necessary for allocating the regions of memory that game data gets copied into, among a number of other things. Even languages without explicit pointers will still have implied ones and use arrays of data in the same fashion. You can make a game without threading, but it's extremely unlikely that you'll make a game without pointers.
Post edited September 19, 2019 by ColJohnMatrix
ColJohnMatrix: it's extremely unlikely that you'll make a game without pointers.
I've done that, albeit a long time ago; the game was in BASIC, which does not have pointers (unless you count PEEK and POKE, but those weren't practical for accessing structured data).

A better comparison might be dynamic memory allocation, which uses pointers and has its own share of bug classes (use after free, double free, memory leaks, etc.). Some games need it (Civilization and Morrowind come to mind), while others could easily do without (something like Shovel Knight could probably be implemented without it, for example).
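In miniature (the Enemy struct is made up, purely to name those bug classes concretely):

#include <stdlib.h>

typedef struct { int hp; } Enemy;   /* illustrative only */

void bug_zoo(void)
{
    Enemy *e = malloc(sizeof *e);
    if (!e) return;
    free(e);
    e->hp = 0;                      /* use after free: e dangles */
    free(e);                        /* double free: undefined behavior */
    Enemy *leak = malloc(sizeof *leak);
    (void)leak;                     /* memory leak: never freed */
}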
Ryan333: There are a wide variety of timing events that you can cue off, and knowing when and how to use each -- and how to keep different methods relatively in sync with each other -- is what separates a true Developer from someone who downloaded an SDK, middleware package and set of prefab assets, and glued them all together to make an "Early Access Game".
This is a "No True Scotsman" fallacy. By this logic, guys like John Carmack wouldn't be "true Developers". A developer is someone who develops and (hopefully) ships products. Quality can of course vary wildly, and I'd blame prefab engines like Unity before any SDK, but, to employ an aphorism: "Not every road calls for a Lamborghini".
Post edited September 19, 2019 by ColJohnMatrix
clarry: Imo performance (with multiple cores) is the only good reason to introduce threads. The other reason people use threads is that they rely on blocking APIs with no non-blocking alternative (hearsay suggests this is common in the Windows world; I don't know if it's true, but I've run into more than a few Windows developers who think threads are the only way to do things we'd do with a poll(2) loop in the Unix world); in such a case, they'd just employ a thread to perform the blocking operation while letting the rest of the code run in another thread. Also, if you need low latency for some part of the program (e.g. audio), then a separate thread for that might help. Then again, simple games that stream music and pre-recorded samples (possibly with simple effects slapped on) should have no trouble buffering 20 ms (or more) of audio, which guarantees no drop-outs at >50 fps. It's not unusual for older games to have buffers anywhere from 50 to 100 ms.
A third possible reason: You're running code on bare metal, and the computer system uses interrupts as the primary or sole means of interacting with certain hardware. In this case, it isn't technically threads, but you still have to worry about the same sort of issues (plus you might need to remember to save and restore the contents of registers, otherwise bad things can happen).

By the way, could you explain how I would choose between select(2) and poll(2)?
dtgreene: I've done that, albeit a long time ago; the game was in BASIC, which does not have pointers (unless you count PEEK and POKE, but those weren't practical for accessing structured data).

A better comparison might be dynamic memory allocation, which uses pointers and has its own share of bug classes (use after free, double free, memory leaks, etc.). Some games need it (Civilization and Morrowind come to mind), while others could easily do without (something like Shovel Knight could probably be implemented without it, for example).
I was talking about games intended to be sold commercially, not merely some text game or some introductory programming you did. I think you have some misguided ideas about pointers.

The example holds because even without dynamic memory allocation, you still access pointers, e.g. the mode 13h screen buffer in DOS at segment A000h (the far pointer 0xA000:0000, which 16-bit compilers spell as the long constant 0xA0000000L). You'll need to access that via pointers in some fashion to do anything with it. Dynamic memory is merely another level of it.

In my experience, most of those bug classes, while it's true they exist, shouldn't be the issues they're made out to be with a healthy dose of defensive programming. I don't think Shovel Knight could be implemented without them, unless you're going to make a horrendous design with everything static, doing all manner of things you simply should not do. It's not a portion of the language you can just selectively decide not to use. It's a specific tool that really needs to be employed on the sorts of problems that come up commonly in games and system software. You'd have to go to a language blatantly without them to avoid them in the sphere of game programming.
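To make that concrete (a sketch assuming a 16-bit real-mode DOS compiler such as Borland/Turbo C, whose dos.h provides MK_FP; the plot routine is illustrative):

#include <dos.h>    /* MK_FP builds a far pointer from segment:offset */

/* Mode 13h is 320x200 with one byte per pixel; the framebuffer sits
   at segment A000h. */
void plot(int x, int y, unsigned char color)
{
    unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
    vga[y * 320 + x] = color;
}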
Post edited September 19, 2019 by ColJohnMatrix
dtgreene: I've done that, albeit a long time ago; the game was in BASIC, which does not have pointers (unless you count PEEK and POKE, but those weren't practical for accessing structured data).

A better comparison might be dynamic memory allocation, which uses pointers and has its own share of bug classes (use after free, double free, memory leaks, etc.). Some games need it (Civilization and Morrowind come to mind), while others could easily do without (something like Shovel Knight could probably be implemented without it, for example).
ColJohnMatrix: I was talking about games intended to be sold commercially, not merely some text game or some introductory programming you did. I think you have some misguided ideas about pointers.

The example holds because even without dynamic memory allocation, you still access pointers, e.g. the mode 13h screen buffer in DOS at segment A000h (the far pointer 0xA000:0000, which 16-bit compilers spell as the long constant 0xA0000000L). You'll need to access that via pointers in some fashion to do anything with it. Dynamic memory is merely another level of it.

In my experience, most of those bug classes, while it's true they exist, shouldn't be the issues they're made out to be with a healthy dose of defensive programming. I don't think Shovel Knight could be implemented without them, unless you're going to make a horrendous design with everything static, doing all manner of things you simply should not do. It's not a portion of the language you can just selectively decide not to use. It's a specific tool that really needs to be employed on the sorts of problems that come up commonly in games and system software. You'd have to go to a language blatantly without them to avoid them in the sphere of game programming.
BASIC abstracts all of this away; to write to the screen, you use commands dedicated to that purpose, and the interpreter checks that you don't overstep the screen bounds.

Also, Akalabeth and the original version of Ultima 1 were originally written in BASIC.

The thing is, in a game like Shovel Knight, about the only thing I can think of that might need dynamic allocation is sprites (including things like enemies and bullets); if you limit the number of sprites on screen at once, you only need a fixed amount of memory to store them. (With that said, it *is* possible to get bugs as a result of how such sprite slots are allocated; Super Mario World has bugs of this type, like one that makes it possible to eat a Chuck, causing strange things to happen, up to and including arbitrary code execution.)
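In miniature, something like this (a sketch; MAX_SPRITES and the fields are placeholders, not Shovel Knight's actual code):

#define MAX_SPRITES 64

typedef struct { int x, y, type; int active; } Sprite;
static Sprite pool[MAX_SPRITES];    /* fixed storage: no malloc/free at all */

Sprite *spawn_sprite(void)
{
    int i;
    for (i = 0; i < MAX_SPRITES; i++) {
        if (!pool[i].active) {
            pool[i].active = 1;     /* claim a free slot */
            return &pool[i];
        }
    }
    return 0;                       /* pool exhausted: drop the spawn */
}

void despawn_sprite(Sprite *s)
{
    s->active = 0;                  /* slot becomes reusable */
}

The Super Mario World bugs mentioned above are essentially slot-recycling bugs in a scheme like this: code keeps referring to a slot after it has been handed to a different object.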

Remember that old consoles have very limited amounts of RAM; the NES has only 2 kilobytes of RAM, and the Atari 2600 only 128 *bytes*. When you have so little RAM, you can't just allocate as much memory as you like; you need to be careful about memory usage. Garbage collection isn't particularly feasible at that point, either. Also, remember that you don't have a BIOS or C library (so you don't have anything like malloc() unless you implement it yourself).

Early video game consoles are, in effect, embedded systems.
Thanks, I didn't know any of that.

I'm becoming convinced you guys have Asperger's.
Post edited September 19, 2019 by ColJohnMatrix
Making a few assumptions here.

I believe it mainly had to do with how programming languages functioned.
Back in the day there weren't a lot of different processors, and they were relatively slow, so it was important to squeeze every bit of performance out of them. So programming languages were made to use up all the processing power (CPU cycles) available.
When the next generation of CPUs came out, say Intel's 80386, you already saw that turbo buttons had to be put on PCs because of this.
Why newer programs don't have this issue anymore (though sometimes they still do, when they're made dependent on framerate) is because programming languages have been changed to remove the dependency on CPU cycles.
I don't even think game programmers could do a lot about it back in the day, except by completely changing the programming language or going out of their way to write software to solve this issue, but that would have been very hard to do if you were already working on the fastest CPUs available at the time and didn't have access to future CPUs.