adruab.net Deep internal/external searching… for… stuff

9 Jan 2011

C++ modules?!?

Through a random grapevine, which ended up including twitter (shudder), I ran across this proposal for C++ modules:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2073.pdf

This would be absolutely amazing to have.  Let's count some ways:

  • Better compile times (more scalable at least).
  • Sensible semi-enforced code organization.
  • Less redundancy in typed code.
  • Prevent time-eating bizarro crashes when saving a file mid-compile.

And the list could go on.  Sure there are other ways to achieve these things, but not to the degree modules would.  I'm crossing my fingers that the standards committee can get this in.  So much awesome!

If they do, maybe I'll be able to post more than once a year....

Filed under: Programming
18 Apr 2009

Now Loading…

I HATE seeing that screen in games. Best, or worst I suppose, recent example: Resident Evil 5.

For example, when you or your partner dies, you're treated to a 5+ second loading screen. Just long enough to be disoriented when the animated "You're Dead" screen pops up. Yes, I knew that, thank you; you didn't need to take 15 seconds total to tell me this. Then if you select Continue, you get another loading screen to get back into the level. Keep in mind, all this comes after a 5+ minute install. Worst experience in a while.

Combine this with very small levels (frequent loading), canned animation (ever heard of blending?), weird visual freak-outs when you go through a door and wait for your AI partner to come through 0.5 seconds later, and the complete inability to move while swinging the knife or shooting the gun, and you get a REALLY negative single-player experience. Yes, I know the latter behavior has been in RE forever; that doesn't mean it's good.

It's not complete crap. It's got some nice visuals and I'll give it another go. But at this point, I regret buying the game. It's really depressing after RE4 was such a solid experience. What happened?

11 Apr 2009

Stenciling for “deferred” lighting

I've seen many papers describing using stencil culling to speed up rendering of a light in a deferred (or semi-deferred) manner, for instance an Insomniac paper. It confuses me a bit.

The approach is normally: clear stencil, render the front/back faces of the light's bounding volume to stencil, then render a screen-space quad with stencil culling. The reasoning: the screen-space quad gets the best usage out of pixel groups, and the stencil test rejects the expensively shaded pixels.
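
For reference, that pass structure looks roughly like this in desktop OpenGL terms. This is a sketch of one common variant (count depth-fail hits of the back/front faces, then shade where the counts don't cancel), not necessarily the exact recipe from the Insomniac paper; the drawLightBoundingVolume / drawScreenSpaceQuadWithLightShader helpers are hypothetical, and it assumes a GL 2.0+ context with the usual render state already bound.

    // Pass 1: mark pixels where scene geometry falls inside the light volume.
    glEnable(GL_STENCIL_TEST);
    glClear(GL_STENCIL_BUFFER_BIT);                      // the stencil clear called out as non-free below
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color writes
    glDepthMask(GL_FALSE);                               // no depth writes
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);                             // draw front and back faces in one go
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP); // +1 where the back face is occluded by scene geometry
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP); // -1 where the front face is occluded too
    drawLightBoundingVolume();                           // hypothetical helper

    // Pass 2: shade only the marked pixels with the screen-space quad.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDisable(GL_DEPTH_TEST);
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);                 // counts cancel to 0 for geometry outside the volume
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawScreenSpaceQuadWithLightShader();                // hypothetical helper
    glDisable(GL_STENCIL_TEST);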

Pros: stencil will reject all pixels not in the bounds. There are many costs to this approach, though: the stencil clear is not cheap (it's actually more expensive than a combined depth+stencil clear, IIRC), stencil culling is done in big blocks so it won't actually reject as many pixels at the edges of intersections with the geometry, you render 2x the pixels for the stencil operations (though you'll probably get the fast depth-only path too), and changing render targets to point at the stencil buffer can stall the pipeline.

Alternate approach: render the bounding light volume using front OR back faces (depth pass or depth fail, respectively) and turn on depth bounds clamping.

Pros: depth bounds clamping rejects many pixels outside your bounds early (performance suggests this, at least), especially for small lights, and there's no stencil clearing tax. Cons: it may not early-reject some blocks of pixels because depth bounds is not as fine-grained as stencil, and pixel quads may not be optimally filled due to the triangulated nature of the bounding volume.
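
And a sketch of the alternate path, again in desktop OpenGL terms. It assumes the EXT_depth_bounds_test extension is what's providing the "depth bounds clamping", uses the back-face / inverted-depth-test flavor, and the lightMinZ/lightMaxZ values and the draw helper are made up for illustration.

    // Reject pixels whose stored depth is outside the light's window-space z range.
    glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
    glDepthBoundsEXT(lightMinZ, lightMaxZ);

    // Draw the back faces of the light volume, shading directly (no stencil pass).
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);                      // keep only back faces
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_GEQUAL);                    // pass where scene geometry is in front of the volume's far side
    glDepthMask(GL_FALSE);
    drawLightBoundingVolumeWithLightShader();  // hypothetical helper

    // Restore state.
    glDepthFunc(GL_LESS);
    glDisable(GL_DEPTH_BOUNDS_TEST_EXT);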

In ALL situations we've tried at work, these trade-offs have favored the latter approach, even for relatively large lights. Some issues which might favor the first approach: switching render targets to render shadows anyway, a big light with a really expensive shader, adding non-spatial rejection bits into the stencil operations, or something else I'm completely not considering for some reason (?).

We'll have to try this again to make sure we're not missing something. I could imagine depth bounds clamping failing if your early z gets borked for some reason. I have to believe that others have tried our approach as well, and it's somewhat surprising that no one ever mentions it (too obvious?). But for now I'll call it a competitive advantage, I guess :P.

Filed under: Programming
11 Jan 2009

Parallelismorama

First, some comments. I've been reading some posts on the Sweng Gamedev mailing list. I'm a bit surprised that people are so hooked on super general lock-free structures (e.g. a doubly linked list). They are complex to reason about and correspondingly hard to write. I've used big arrays with simple atomic indices to great effect thus far and haven't needed such complicated mechanisms. Perhaps it's console vs PC all over again, but I generally consider simplicity to be one of the most important aspects of the parallel code I write.
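
To make that concrete, here's a minimal sketch of the "big array + simple atomic index" pattern, written with C++11 std::atomic for brevity (at the time this would have been platform interlocked intrinsics); the type name and the computeSomething usage are made up for illustration.

    #include <atomic>
    #include <cstddef>

    // Fixed-capacity append-only array: producers reserve a slot with one
    // atomic fetch_add and write into it. No locks, no linked-list node juggling.
    template <typename T, std::size_t Capacity>
    struct AtomicAppendArray {
        T items[Capacity];
        std::atomic<std::size_t> count{0};

        // Reserve a slot; returns nullptr if the array is full.
        T* push() {
            std::size_t index = count.fetch_add(1, std::memory_order_relaxed);
            if (index >= Capacity) {
                count.fetch_sub(1, std::memory_order_relaxed);  // back out the overshoot
                return nullptr;
            }
            return &items[index];
        }

        std::size_t size() const { return count.load(std::memory_order_relaxed); }
    };

    // Usage from any number of worker threads:
    //   AtomicAppendArray<int, 1024> results;
    //   if (int* slot = results.push()) { *slot = computeSomething(); }
    // After the workers are joined (the sync point publishes the writes),
    // results.items[0 .. results.size()) are safe to read from one thread.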

One of the things that I've always disliked about CSP and task-based systems is how insanely broken readability gets. Adding task indirection wraps everything in goo. With lambdas/delegates in C# you've still got Task-this, Future-that, but that's astronomically more readable than the C++ equivalent. I tried watching the PDC08 presentations and my eyes glazed over. Such simple concepts, and ALL these crazy hoops to jump through. Now, there's definitely an argument for explicitness when overhead is involved. And sure, the C++ version is probably faster, but that much? Really?

I read Expert F# recently, a decent book on a cool language (functional with Haskell headaches). It has cool syntactic sugar called workflow builders, which are wrappers around continuations. They allow you to write very straightforward, simple code with asynchronous breaks in the middle. I haven't figured out yet if it's A) not explicit enough, B) too slow because of compiler optimization deficiencies, or C) super awesome.

Additionally, I'm not sure I buy the each-system-runs-asynchronously technique Intel's Smoke uses. Either I haven't read it closely enough, or there is a direct competition between evil latency and the number of asynchronous stages. Let's assume everything uses deferred message passing: you've got input/player AI + physics + render + GPU + scan-out = a lot of latency at 30 or even 60 Hz. I imagine pipelining some part of drawing in immediate mode would have zero or net-good effect on latency. Breaking up the rest, though, would be hard. There are certainly obvious choices for deferred computation (e.g. effects, most AI). But player + physics is still a biggish serial bowling ball to chew on. Maybe staging physics so each stage only cares about objects that can influence that stage, and interleaving the rest with initial render submission? I'll keep that in the back of my brain, so I don't go berserk trying to figure out how to limit the serializing effect script callbacks can have.

11 Aug 2008

Ding dong the API is… well it’s pissed people off

They just released OpenGL 3.0, apparently better named OpenGL 2.2. After waiting two years (?), this completely quashes my interest in writing anything else in OpenGL. I'll probably revert to .NET and DX as a backup. Oh well.

Filed under: Programming
3 Apr 2008

Complete Trees

Complete trees are cool. Best part is the closed form for parent/child indices. I had to rederive these equations for a complete quad tree today. It didn't take as long as I was expecting. Here they are if you care...

parent(i) = (i - 1) / 4 (integer division)

child_j of i = (i * 4) + j + 1, for j in 0..3

It turns out the starting index of a level at a specific depth has a closed form too. In binary the starts are 0b, 1b, 101b, 10101b and so on.

level 0 start = 0

level n > 0 start = ((1 << ((n - 1) * 2 + 1)) - 1) & 0x55555555

Essentially the shift gives you 2 * 4^(n-1): n = 1, 2, 3, 4 map to 2^1, 2^3, 2^5, 2^7. Subtracting 1 gives you 1b, 111b, 11111b, 1111111b. Then ANDing with 0x55555555 gives you the 101010101 pattern. The main gotcha here is that access to a single group of siblings is swizzled from normal xy access. Once that's done though, you're golden.
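
A quick sanity-check version of the same math in C++ (0-based indices, children of node i at 4*i + 1 through 4*i + 4); this is just a transcription of the formulas above into code.

    #include <cassert>

    inline unsigned parent(unsigned i)            { return (i - 1) / 4; }    // undefined for the root (i == 0)
    inline unsigned child(unsigned i, unsigned j) { return i * 4 + j + 1; }  // j in [0, 3]

    // Starting index of level n in a complete quad tree: 0, 1, 5, 21, 85, ...
    inline unsigned levelStart(unsigned n)
    {
        if (n == 0) return 0;
        return ((1u << ((n - 1) * 2 + 1)) - 1) & 0x55555555u;  // 1b, 111b, ... masked down to 1b, 101b, ...
    }

    int main()
    {
        assert(parent(child(7, 2)) == 7);
        assert(levelStart(1) == 1 && levelStart(2) == 5 && levelStart(3) == 21);
        return 0;
    }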

Filed under: Programming
5 Nov 2007

And the engine goes…

A few recent things have been getting me interested in working on a new project. I might take C# for a spin, since it's a larger project. Some potentially interesting ideas to investigate:

Art pipeline. I've never written one from scratch before. Probably start with hard-coded data and move to a COLLADA subset. Would like to make the pipeline serializable between any major operation (parallelization and quicker turnaround).

OpenGL 3.0. Whenever it finally comes out. Maybe an argument to keep to C++?

Thread separable operation. Especially for the graphics engine. Seems like you could have some handy simplifications if you assume that most of your objects won't move every frame.

Components and/or data driven. I've used generally data driven paths with good success in the past, but haven't toyed too much with more heavy component architectures. Explicit or implicit communication?

TDD. I still haven't used it for a large project. I did get a simplified NUnit working with C#, but in a command line mode more similar to UnitTest++ (WAY faster turn around).

In-place Loading. Only really interesting in C++. The simple setup I started working with was kind of a pain. The holy grail here would be to get garbage collection working so you could mark all objects to output and then do so as a sweep process (serialize binary + links). There are other interesting questions to deal with (versioning, virtual classes, etc.).

Class Introspection. Useful for in-place loading and any other form of code generation. Was playing with ctags for a bit, but had some issues with scoping. Perhaps I'll push that through and see what I can get working.

Btw, I hate WordPress's list editing. It sucks (hence the above bold + paragraphs). Maybe I just need to upgrade.

Filed under: Programming
13 Jun 2007

Holy Jeepers Batman!

So, long due post. Things that have happened since last one, in no particular order:

  • Vacation in France for two weeks. Uncle's wedding + stuff. Food is good, but the portions are nowhere near as small as people would lead you to believe.
  • Movies, some good some bad. Was very disappointed with Spiderman 3 for numerous reasons, enjoyed Pirates 3 vastly more than I was expecting, Shrek 3 was ok... etc.
  • Unleashed concert. We did some fun skits, probably got a little too crazy singing the pirates theme song and shot ourselves in the foot sound quality-wise.
  • Back to karaoke at Skylark. It has changed hosts twice; I didn't like the first change, but the second is back to a better pace I think, although with a significantly reduced, albeit legal, songlist. Quirky skippy McCDplayer threatens my happiness!
  • Work. Good stuff, still on graphics. Working hard on performance now, and in two words "it's tough" (tm). When adding extra work makes shaders go faster, my head esplode.
  • Role playing. Been very slow getting into the Warcraft d20 game. I wrote a huge 15-ish page back story for my character, so I am dismayed to say the least. Starting a new Forgotten Realms Wizards game. The world looks cool and deep, but I can't say I'm looking forward to the spell-preplanning system DnD uses for wizards (my head esplode x2).
  • Games, some good some bad. Mario Party is fun, but hasn't really changed in the billion versions they've released. Super Paper Mario was a huge letdown. The super simple, completely gamey flip-to-3D mechanic interrupts the normal pace of action jumping (as do the tons of dialog boxes). Need to play God of War (and so many other games). Warcraft 3 is an excellent game. Super great value even at $40 for the battle chest... still!
  • Lost my phone in Cinerama after the Pirates opening show. Bought an EnV. It is, as I thought, huge. Though not needing hours to enter new phone book entries or the occasional text message is nice.
  • Board game night at work is fun. We've played all sorts of interesting stuff that I haven't really seen before. And I thought I used to play a lot of board games :).
  • My sister went skydiving which was entertaining and made me want to go.
  • OpenGL is heading down an amazingly promising track. Cleaning up the API, adding static typing through struct tags (still C compatible), and all the other stuff they're adding sounds awesome. I will have to build a new machine with a DX10 card to try it out (yes, I know the irony of getting a "DX10" card for OpenGL).
  • Allergies suck; since right before going to France I've been plagued by congestion and such. Medication only defers symptoms a few hours (did I mention I hate medication?). I think I'm going to do a cleaning spree and regularly vacuum the house or something *shudder*.
  • I may have gone to GDC and such since I last posted. That was fun, although I didn't get to see as much of people as I'd have liked. And now I've lost most of their numbers since my phone is gone; let the long march begin.
  • Going to start attending Thursday night East/West side game developer get togethers. They're fun, and I don't have a choir conflict during the summer.
  • Been reading up on C++ template stuff for fun. I'm looking to create some sort of reflection system. However, I think writing a new language may be just as easy as implementing reflection for every relevant C++ class I would want to use. Binary dumping/loading is my goal. Wouldn't-it-be-nice land. I'm starting to think that, if using C++, just writing the serialization to the binary format may not be a bad plan, purely because it reduces the transformation work you have to do on the runtime side. Not to mention the pain of dealing with cross-platform binary incompatibilities. Still, if you had reflection this would be easy. Why the C++ standards board does not put that in, I will never know.
17 Jan 2007

Game wackiness

A few things...

I have a pretty high respect for Valve and the general quality of their work.  I know many people that work there.  Thus, I find it very alarming that their CEO is going senile.  Um, wow.  I doubt highly that he's personally done any work on the PS3.  He could be echoing the mentality of his development staff, so there's that.  However, I think his comment is crap.  Sony certainly hasn't done everything right in releasing the PS3.  The development process is tough, and there is some hand-holding you have to do to get the processor jumping through its hoops.

I would venture to guess that this just does not fit the general Valve MO of coding (which comes heavily from the PC).  In fact, I would even posit that this is a symptom of the thing I hate most about their game: loading screens.  To get the PS3 to work well, you have to design your SPU data in a way that can be shipped to and fro.  A similar step is required if you want to cleanly stream your game, which they don't have.  I think they are still stuck in the PC mental model of "load a bunch of data and it will work."

On another note, Guitar Hero development has moved to Neversoft!!!!!!  Ok, am I the only one seeing doom on the horizon?  They've certainly done a decent job with Tony Hawk.  Spiderman also had its good points, but was buggy as all get out.  I will never forget the image of Blackcat randomly flipping 90 degrees while she jumps off the edge of a building during a CUTSCENE.  They also developed GUN, which I have never played, but I've heard bad things.  I pray for the franchise considering its bright prospective future.

The last bit concerns the PS3.  Apparently it is readily available pretty much anywhere.  This, when analysts were expecting shortages until March or even July.  Don't let Gabe Newell beat you down, Sony!  I hope that Sony can pull themselves out of this pit.  Maybe even... you know... lower the price! :)

11 Jun 2006

MAA

So while I don't really like Microsoft's past business practices, they've been up in my book for the last little bit. I appreciate their current attitude adjustment of trying to out-do the competition. Also, C# rocks my socks. It's super great. Once I figured out how to use Windows Forms, it's actually really good too (wx.NET has issues). It's a very nicely objectified wrapper around Windows. I'm really looking forward to working with C# 3.0 and the newly redubbed .NET Framework 3. I will say that their sluggish support of C++ is as disheartening as ever.

I wish other people would make consistent, good tools. I'd really like to embrace open-source software, but it feels like they're falling so far behind. I mean, Microsoft can learn and adapt, why can't everyone else!

Filed under: Programming