I've mostly played games on consoles for a while now. No particular reason; I just like the experience better, with some exceptions.
I've been playing Deus Ex: Human Revolution on the PC and I'm torn. The artistic design and the overall mood are great and occasionally amazing. The gameplay ranges from okay to great and has awesome variation. The cutscenes are mostly well done and varied. The voice acting is great for primary characters and occasionally weak on extras. Overall, it reminds me of a much more polished and enjoyable, though limited, version of Fallout 3. It's a bit too linear in some areas, but generally does a great job of giving you multiple angles. It has WAY too many loading screens for my taste. The character quality is hugely variable, with animation ranging from okay to strange (especially facial lip flap, yuck :/). I understand the production costs involved in making good facial animation, but this game mostly looks like crap on that front.
My biggest complaint so far is that the game is super unstable. It has crashed in 5 out of the 7 sessions I've played. It even failed to load a cutscene once and jumped me past some portions of gameplay. It feels like a combination of driver problems (ATI :/) and insufficient testing. I bought the PC version on the recommendation of an arstechnica.com review. I'd probably enjoy it more on PS3 (my 360 is toast).
Hopefully it smooths out, because I really want to love this game.
Through a random grapevine, which ended up including twitter (shudder), I ran across this proposal for C++ modules:
This would be absolutely amazing to have. Let's count some ways:
- Better compile times (more scalable at least).
- Sensible semi-enforced code organization.
- Less redundancy in typed code.
- Fewer time-eating bizarro crashes when saving a file mid-compile.
And the list could go on. Sure, there are other ways to achieve these things, but none to the degree modules would. I'm crossing my fingers that the standards committee can get this in. So much awesome!
If they do, maybe I'll be able to post more than once a year....
I started playing with getting a Direct3D 11 pipeline up and running recently (e.g. draw a screen space triangle). It's absolutely crazy how many more problems I've run into than I did with Direct3D 9.
First the good:
- Core design feels cleaner. There are fewer functional groupings to worry about, and those that are left seem grouped fairly logically.
- The effects framework has been released as source. It also looks like it has a cleaner reflection interface than before. Don't look at the code though, it's scary.
Frustrations in random order:
- You have to turn on debug mode to get ANY kind of error reporting out of the immediate device context.
- Shader compilation and DXGI seem to mostly talk to DX10 interfaces. Wtf?
- The documentation is very sad. They refer you to the DX10 docs for everything that's not new to DX11.
- Where is the effects framework? And where do the docs tell you? At the bottom of the "differences from DX10 effects" page (not the reference).
- Documentation errors! D3DX10async.h does not exist. D3DX11CreateEffectFromMemory: "Use ???.lib". Sad monkeys.
- PIX is useless for debugging. Only timing and replay are supported thus far.
- Certain errors seem to hang and crash the graphics driver. Thankfully the OS survives.
- The DXUT source sucks. Try searching for the frame buffer setup functions in their source, I dare you.
- State block interfaces. Why do you need interfaces when you have user-mode drivers!? Instead of just saying SetBlendState(&blendstate), you have to create an interface, set that up, and then use the interface pointer whenever you want to set something. Do you really gain that much from the extra layer of indirection to make it worth it? Now simple operations like toggling back-face culling on and off require many hoops. Maybe that's the point?
This is the most frustrated I've been with a Microsoft release in a long time. Here's hoping the next release will fill all the gaps. On the plus side, I finally have a triangle drawing.
I just finished reading the Sandman comic series. It was great! Pretty much all the stories were deep and interesting in some manner or other. The various tidbits of mythology and the way everything fits together are absolutely awesome. I've also read Good Omens (gatewayed in through Terry Pratchett), American Gods, Stardust, and Neverwhere. While they're all somewhat similar in setting style (real + fantastic, often with myth/fairy elements), they're all really quite different in overall feel. Super great.
Next up a review of SIGGRAPH, assuming I can survive the humidity.
Infamous is done, and has been for a bit now. It'll be on sale Tuesday (26th) in the US and Friday (29th) in Europe. Color me excited :).
It's not perfect, but I played through it a bit ago and it's damn good. Also, there are few loading screens. Maybe that's why I'm so biased :P.
I HATE seeing that screen in games. Best, or worst I suppose, recent example: Resident Evil 5.
For example, when you or your partner dies, you're treated to a 5+ second loading screen, just long enough to leave you disoriented when the animated "You're Dead" screen pops up. Yes, I knew that; thank you, but you didn't need to take 15 seconds total to tell me. Then if you select Continue, you get another loading screen to get back into the level. Keep in mind, all of this comes after a 5+ minute install. Worst experience in a while.
Combine this with very small levels (frequent loading), canned animation (ever heard of blending?), weird visual freak-outs when you go through a door and wait for your AI partner to come through 0.5 seconds later, and the complete inability to move while swinging the knife or shooting the gun, and you get a REALLY negative single-player experience. Yes, I know the latter behavior has been in RE forever; that doesn't mean it's good.
It's not complete crap. It's got some nice visuals and I'll give it another go. But at this point, I regret buying the game. It's really depressing after RE4 was such a solid experience. What happened?
I've seen many papers describe using stencil culling to speed up rendering of a light in a deferred (or semi-deferred) manner, for instance an Insomniac paper. It confuses me a bit.
The approach is normally: clear stencil, render the front/back faces of the bounding volume to stencil, then render a screen-space quad with stencil culling. The reasoning: the screen-space quad gets the best usage out of pixel groups, and the stencil settings reject expensively shaded pixels.
Pros: stencil will reject all pixels not in the bounds. But there are many costs to this approach: the stencil clear is not cheap (it's actually more expensive than a depth+stencil clear, IIRC), stencil culling is done in big blocks so it won't actually reject as many pixels on the edges of intersections with the geometry, you have 2x the pixels to render for the stencil operations (though you'll probably get the fast depth-only path too), and changing render targets to point to the stencil buffer can stall the pipeline.
Alternate approach: render bounding light volume using front OR back faces (depth pass or fail respectively) and turn on depth bounds clamping.
Pros: depth bounds clamping early-rejects many pixels outside your bounds (performance suggests this, at least), especially for small lights, and there's no stencil-clearing tax. Cons: it may not early-reject some blocks of pixels because depth bounds is not as fine-grained as stencil, and pixel quads may not be optimally filled due to the triangulated nature of the bounding volume.
In ALL situations we've tried at work, these trade-offs have favored the latter approach, even for relatively large lights. Some issues that might favor the first approach: switching render targets to render shadows anyway, a big light with a really expensive shader, adding non-spatial rejection bits into the stencil operations, or something else I'm completely not considering for some reason (?).
We'll have to try this again to make sure we're not missing something. I could imagine depth bounds clamping failing if your early-z gets borked for some reason. I have to believe that others have tried our approach as well, and it's somewhat surprising that no one ever mentions it (too obvious?). But for now I'll call it a competitive advantage, I guess :P.
First, some comments. I've been reading some posts on the Sweng-Gamedev mailing list. I'm a bit surprised that people are so hooked on super-general lock-free structures (e.g. a doubly linked list). They are complex to reason about and correspondingly hard to write. I've used big arrays with simple atomic indices to great effect thus far and haven't needed such complicated mechanisms. Perhaps it's console vs. PC all over again, but I generally consider simplicity to be one of the most important aspects of the parallel code I write.
One of the things that I've always disliked about CSP and task-based systems is how insanely broken readability gets. Adding task indirection wraps everything in goo. With lambdas/delegates in C#, you've still got Task this, Future that, but it's astronomically more readable than the C++ equivalent. I tried watching the PDC08 presentations and my eyes glazed over. Such simple concepts, and ALL these crazy hoops to jump through. Now, there's definitely an argument for explicitness when overhead is involved. And sure, the C++ version is probably faster, but that much? Really?
I read Expert F# recently, a decent book on a cool language (functional with Haskell headaches). It has cool syntactic sugar called workflow builders, which are wrappers around continuations. They let you write very straightforward, simple code with asynchronous breaks in the middle. I haven't figured out yet whether it's A) not explicit enough, B) too slow because of compiler optimization deficiencies, or C) super awesome.
Additionally, I'm not sure I buy the every-system-runs-asynchronously technique Intel's Smoke uses. Either I haven't read it closely enough, or there's a direct competition between evil latency and the number of asynchronous stages. Let's assume everything uses deferred message passing: you've got input/player AI + physics + render + GPU + scan-out = a lot of latency at 30 or even 60 Hz. I imagine pipelining some part of drawing in immediate mode would have zero or net-good effect on latency. Breaking up the rest, though, would be hard. There are certainly obvious choices for deferred computation (e.g. effects, most AI). But player + physics is still a biggish serial bowling ball to chew on. Maybe staging physics so each stage only cares about objects that can influence that stage, and interleaving the rest with initial render submission? I'll keep that in the back of my brain so I don't go berserk trying to figure out how to limit the serializing effect script callbacks can have.
That was a funny sketch I saw some comic do with a guitar online. But yeah, what this post is really about is that I'm now engaged. Huzzah and all that. Also, bring on the ninjas!
They just released OpenGL 3.0, apparently better named OpenGL 2.2. After a 2-year wait (?), this completely quashes my interest in writing anything else in OpenGL. I'll probably revert to .NET and DX as a backup. Oh well.