Tech Focus: Killzone 3
Digital Foundry goes head-to-head with Guerrilla Games on the technical make-up of its latest shooter
In this extensive interview, Digital Foundry talks with Guerrilla Games' technical director Michiel van der Leeuw about the development of Sony's latest first-person shooter, Killzone 3.
Covering topics as diverse as improvements to the core rendering tech, upgrades to AI and physics systems, advanced SPU utilisation, stereoscopic 3D, PlayStation Move and the choice of codec for video playback, this piece is a fascinating insight into the creation of one of the most technologically advanced games on console.
The interview kicks off with insights into Guerrilla's approach to the post-mortem process, its impact on Killzone 3, and the internal systems established at Sony that allow its many studios to share tech and general development know-how.
We certainly did get a lot of good feedback from the audience, and it was epic in a lot of regards - but not all. We take a very good look at the feedback we get from reviews, forums, fans and other studios. We always do a pretty thorough analysis and then see what people comment on a lot, to see if there are recurring patterns. When that's done, we filter the feedback and bring it back in as input points for the next game. Some of the issues that we wanted to tackle were improving controls, getting more variety in the game (gameplay, environments, characters etc), fixing any hiccups/glitches, and improving storytelling.
"Some of the issues that we wanted to tackle were improving controls, getting more variety in the game... and improving storytelling."
Yes, definitely. There's a pretty good positive competition and sense of learning from each other between the Sony studios. We did a studio tour right after we finished Killzone 2 to get inspiration from other studios, and we often found people there tearing apart a preview copy of Killzone 2 to see how we did things. Those were great opportunities for other studios to feed off us, but also for them to give feedback.
We also get to see what other studios are working on and we have meetings to share experiences. We've been on-site with Naughty Dog and Sony Santa Monica before their games were released to exchange experiences, and we regularly have conference calls on specific subjects like animation, scale, storytelling, rendering and so on.
We also have a yearly Sony conference which all the studios attend, and there are regular meetings between various disciplines - the technical directors have their annual meeting and so on. So there's no single "Studio A judges Studio B's game" but there's a lot of communication back and forth.
Maybe not so much save, but we wanted to bite off what we could chew. I think the exclusion of a few things like a complete co-op campaign, big vehicles, on-rails sections and a few others were things we felt would stretch us too thin. They'd been on our wish-list to master for a long time and the time just wasn't there yet for Killzone 2. But no, there wasn't a single moment where we went "oh no, don't stick this in, we'll save this for Killzone 3!"
We have a pretty open dialogue with all of our sister studios worldwide. Apart from regular, organised meetings we're also pretty quick to pick up the phone if we want to run something by somebody. We also have something called SHIP ("SHared Information Portal") within Sony, which is a platform for sharing ideas, technology, etc. It's a combination of a wiki, bug tracker, file manager, and so on.
It allows any studio or person within the organisation to start a project and contribute things for sharing across all studios. Not only does it spawn a lot of useful projects, it also makes it easier for people to approach one another on chat programmes or by phone - it's easier to know who your colleagues are.
I think that the basic technology is still the same, but we re-organised the g-buffers slightly so we had more dynamic range for colour and lighting. We ended up getting more bang for every bit in the buffer, but it stayed as fat as ever.
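To illustrate the general idea of getting more range out of the same storage - not Guerrilla's actual layout, which isn't detailed here - one common approach is to divide light values by a global scale factor when writing to a fixed-width buffer and multiply it back when reading, trading a little precision in the darks for headroom in the brights. The names and scale factor below are invented for the sketch.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical headroom factor: values up to 4x "white" survive in 8 bits.
constexpr float kLightBufferScale = 4.0f;

// Encode a linear HDR intensity into an 8-bit channel.
uint8_t encode_light(float hdr)
{
    float scaled = std::clamp(hdr / kLightBufferScale, 0.0f, 1.0f); // compress range
    return static_cast<uint8_t>(scaled * 255.0f + 0.5f);
}

// Decode back to linear HDR before tone mapping / colour correction.
float decode_light(uint8_t stored)
{
    return (stored / 255.0f) * kLightBufferScale; // restore range
}
```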
We did experiment with it, but found it unsuitable. Neither blending nor interpolation comes naturally to LogLuv, and fixing those issues cost more cycles than it was worth. We do deal with colour slightly differently (by pre-multiplying light intensity with exposure before it is colour corrected), but we're still in RGB colour space.
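A minimal sketch of that pre-multiplication, with invented names and no real grading step: the accumulated light is scaled by the camera exposure first, so the values colour correction sees are already in a sensible range, while everything stays in plain RGB.

```cpp
struct RGB { float r, g, b; };

// Pre-multiply intensity by exposure before any grading happens.
RGB apply_exposure(RGB light, float exposure)
{
    return { light.r * exposure, light.g * exposure, light.b * exposure };
}

// Placeholder for whatever colour correction the pipeline applies.
RGB colour_correct(RGB c)
{
    return c;
}

RGB shade_pixel(RGB accumulated_light, float exposure)
{
    return colour_correct(apply_exposure(accumulated_light, exposure));
}
```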
We may experiment with Luv space again, though probably not to compress dynamic range into fewer bits; rather, chroma-based colour spaces have properties which resemble the way the human eye works. This makes them interesting for things like colour correction, texture sampling and compression as well.
We've been doing a lot more stuff on SPUs this time around. One of the things we're very proud of is something that nobody sees: we're doing a full depth rasterisation of tens of thousands of triangles in a software rasteriser on SPUs so we can do occlusion culling against it. This allows us to do much more aggressive culling, which in the end lets us handle more complex scenes and longer draw distances. The workflow is also much nicer than what we had before, and it reduces the frame-rate spikes that were associated with the portal/occluder system we previously had.
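The test against such a buffer can be sketched roughly as follows - the structures, resolution and depth convention (smaller values are closer) are assumptions for the example, not Guerrilla's SPU code. Once occluder triangles have been rasterised into a small depth buffer, each object's screen-space bounds are compared against it, and the object is culled only if every covered pixel is already closer than the object could possibly be.

```cpp
#include <vector>

// Small software depth buffer filled by rasterising occluder triangles.
// Convention here: smaller depth values are closer to the camera.
struct DepthBuffer {
    int width = 0, height = 0;
    std::vector<float> depth;                       // one value per pixel

    float at(int x, int y) const { return depth[y * width + x]; }
};

// Screen-space bounds of an object's bounding volume.
struct ScreenRect {
    int min_x, min_y, max_x, max_y;                 // assumed clipped to the buffer
    float nearest_z;                                // closest depth the object can reach
};

// True if occluders are closer than the object everywhere it could appear,
// i.e. the object cannot be visible and is safe to cull.
bool is_occluded(const DepthBuffer& db, const ScreenRect& r)
{
    for (int y = r.min_y; y <= r.max_y; ++y)
        for (int x = r.min_x; x <= r.max_x; ++x)
            if (db.at(x, y) >= r.nearest_z)         // occluder not in front here
                return false;                       // conservatively keep the object
    return true;
}
```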
Besides this sort of heavy work, more AI code moved to SPUs, MLAA was added, and we now do more characters, more physics objects, rendering and streaming book-keeping, and a few more game code and physics systems. By the end of Killzone 2 we had a lot of people on the team who were pretty comfortable with writing SPU code, so it was just a natural progression for us to continue using it. We got to the point where the SPUs were all full though, so we had to optimise our SPU code quite a bit to fit it all in at the end.