20 years of Haiku

HollyB

@jockm

People barely get portability. God knows I’ve written about portability layers enough times for people to get it. There isn’t a single framework, from SDL to wxWindows, which gets it right. A portability layer is a very thin layer that simply deals with versioning and quirks across compilers, operating systems, SDK versions, and word size, i.e. 32-bit versus 64-bit. One #include and you can work with multiple compilers and operating systems. You also need an OpenGL portability file to cope with extensions and with static versus dynamic loading, and a third file to provide function wrappers. All of this sits barely above the compiler itself, which is where you maximise portability.
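To make that concrete, here is a minimal sketch of the kind of thin portability header I mean. All the macro names are made up for illustration; they aren’t from any real framework:

```cpp
/* portability.h -- hypothetical sketch of a thin portability layer.
   Detect compiler, OS, and word size once, so application code only
   ever tests these macros. */
#ifndef PORTABILITY_H
#define PORTABILITY_H

/* Compiler detection */
#if defined(_MSC_VER)
  #define PORT_COMPILER_MSVC 1
#elif defined(__BORLANDC__)
  #define PORT_COMPILER_BORLAND 1
#elif defined(__GNUC__)
  #define PORT_COMPILER_GCC 1
#endif

/* OS detection */
#if defined(_WIN32)
  #define PORT_OS_WINDOWS 1
#elif defined(__HAIKU__)
  #define PORT_OS_HAIKU 1
#elif defined(__linux__)
  #define PORT_OS_LINUX 1
#endif

/* Word size: 32 vs 64 bit */
#if defined(_WIN64) || defined(__x86_64__) || defined(__aarch64__)
  #define PORT_BITS 64
#else
  #define PORT_BITS 32
#endif

/* Quirk smoothing: give differently-spelled functions one name,
   e.g. old MSVC shipped _snprintf rather than snprintf. */
#if PORT_COMPILER_MSVC && _MSC_VER < 1900
  #define port_snprintf _snprintf
#else
  #define port_snprintf snprintf
#endif

#endif /* PORTABILITY_H */
```

Application code then tests PORT_OS_* and PORT_BITS and never touches a raw compiler macro again, which is the whole point of keeping the layer thin.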

I based my framework on Borland VCL and pretty much avoided the standard library where I could. (Back then you’d probably use Boost, as the STL was generally flaky.) Handling memory-safe allocation, threads, exception handling, and garbage collection tuned for performance was pretty easy. A few years later, things like memory allocation and garbage collection became more of a concern at the compiler and general toolkit level, but back then, if you wanted high-performance, real-time work that wouldn’t blow up in your face, you had to do it yourself. Personally I think you still do, but I don’t know the state of compilers and toolkits today.
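For a flavour of the do-it-yourself allocation I mean, here is a toy fixed-block pool allocator. It is a sketch, not production code, and it assumes the block size is at least a pointer wide and pointer-aligned:

```cpp
// Toy fixed-block pool: O(1) allocate/release, no system calls after
// construction, which is what real-time code wants. Illustrative only.
#include <cstddef>
#include <vector>

class FixedPool {
public:
    // blockSize must be >= sizeof(void*) and a multiple of alignof(void*).
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : storage_(blockSize * blockCount), freeList_(nullptr) {
        // Thread every block onto an intrusive free list.
        for (std::size_t i = 0; i < blockCount; ++i) {
            void* block = &storage_[i * blockSize];
            *static_cast<void**>(block) = freeList_;
            freeList_ = block;
        }
    }
    void* allocate() {                 // pop head of free list
        if (!freeList_) return nullptr;
        void* block = freeList_;
        freeList_ = *static_cast<void**>(block);
        return block;
    }
    void release(void* block) {        // push back onto free list
        *static_cast<void**>(block) = freeList_;
        freeList_ = block;
    }
private:
    std::vector<char> storage_;        // one contiguous slab
    void* freeList_;                   // head of intrusive free list
};
```

The attraction is predictability: every allocation costs the same few instructions, with no hidden heap walk to blow up your frame time.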

Given the performance of CPUs over the past two or three generations, the majority of everyday applications are GPU-limited rather than CPU-limited for most users.

Since I no longer code I’ll confess I haven’t thought too deeply about how to implement parallelism, but it definitely can provide a benefit in making the best use of the CPU, whether locally or remotely. With games, a fair number of tasks can be broken down to operate in parallel, and this can be done gracefully, but ideally you need to think about it before you begin. I don’t know about your typical business-class application, as I don’t spend any time thinking about it, but I don’t see why a similar approach cannot be taken.
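As a rough illustration of breaking a game frame into parallel tasks, here is a fork/join sketch using standard C++ threads. The job names are invented for the example:

```cpp
// Fork/join sketch: independent per-frame jobs run in parallel, then
// the frame joins before the single-threaded render stage.
#include <future>

void updatePhysics() { /* integrate rigid bodies */ }
void updateAI()      { /* run behaviour trees   */ }
void mixAudio()      { /* fill the audio buffer */ }
void render()        { /* draw the frame        */ }

void frame() {
    auto physics = std::async(std::launch::async, updatePhysics);
    auto ai      = std::async(std::launch::async, updateAI);
    auto audio   = std::async(std::launch::async, mixAudio);
    physics.get();   // join: rendering needs all three finished
    ai.get();
    audio.get();
    render();        // runs alone, sees a consistent world state
}

int main() { frame(); }
```

This is the “think about it before you begin” part: the jobs only parallelise gracefully because they were designed not to share mutable state within a frame.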

As for GPUs, they’re not really parallel in common use on the developer side of the API. You typically have a single pipeline you bang stuff through, and stuff has to be done in a certain order to prevent pipeline stalls. From that point on it’s a question of the GPU breaking everything down and parcelling it out, so the more GPU streams you have relative to the number of pixels and operations per pixel, the faster it goes. Back when I was doing high-performance real-time graphics you could only work with one thread attached to a graphics surface, as things tended not to work if you used more than one. Could you use more than one thread if it was supported? Probably. It’s something you need to look into more deeply, because you have to balance keeping the GPU pipeline full against CPU work and the cost of thread switches. There’s no reason why the code abstraction around this cannot be a common component.
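Here is a sketch of what such a common component might look like, assuming the old constraint that only one thread may touch the graphics context: a thread-safe queue that worker threads submit commands to and the context-owning thread drains. The class and method names are my own invention:

```cpp
// One thread owns the graphics context; everyone else submits closures
// through a thread-safe queue, keeping the pipeline fed without ever
// touching the context from two threads.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

class RenderQueue {
public:
    // Callable from any thread.
    void submit(std::function<void()> cmd) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(cmd));
        }
        cv_.notify_one();
    }
    // Called only from the thread that owns the GL/GPU context;
    // blocks until a command is available, then runs it.
    void drainOne() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        auto cmd = std::move(q_.front());
        q_.pop();
        lk.unlock();
        cmd();   // the context is touched here, and only here
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
};
```

The balance I mentioned shows up in how much you batch per submitted command: too little and the thread switching eats you, too much and the GPU pipeline drains while the CPU prepares the next batch.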

It’s really old now, but the curious may like to flip through “Michael Abrash’s Graphics Programming Black Book, Special Edition.” The book is available as a free download. I’ve never read it myself, as I didn’t need to, but the basics still make sense today.

https://www.gamasutra.com/view/news/91373/Abrashs_Graphics_Programming_Black_Book_Available_As_A_Free_Download.php
http://floppsie.comp.glam.ac.uk/download/pdf/abrash-black-book.pdf

With respect to portability, for a glimpse into Microsoft’s “Embrace, extend, extinguish” monoculture attitude, you may wish to compare the independent id Software’s attitude and approach to Quake then versus the Microsoft-owned attitude and approach to Quake today.

I don’t personally agree with the editorial line of DF Retro, but that’s another topic!

DF Retro: Quake – The Game, The Technology, The Ports, The Legacy
https://www.youtube.com/watch?v=0KxRXZhQuY8

Quake – Official Trailer (2021)
https://www.youtube.com/watch?v=vi-bdUd9J3E
