First came Connor, then Markus. Now, with Quantic Dream finally revealing the third of its Detroit leads, Kara, at last month’s Paris Games Week, we thought it the perfect time to look back on the tech demo that sparked the android revolution. Five years after its creation, we asked David Cage to revisit the studio’s PS3 tech demo Kara, offering his commentary on it and, in this exclusive piece written for PlayStation Blog, his insight into the evolving technology that has helped define Quantic Dream’s games.
Quantic Dream is one of the few studios in the world to develop a new engine for each game. The objective of this ambitious endeavor is to push the envelope (and the hardware) as far as we can and give our fans the best-looking game possible. We also try to improve the quality of acting performances game after game, which is strongly related to the quality of our technology; our evolution from Kara to Detroit illustrates the progress we’ve made in these areas.
The challenges that arose from Heavy Rain’s motion capture
One of the objectives in making the Kara short was to create the entire sequence with performance capture – that is, to record body, face and voice simultaneously. By way of comparison, the shoot for our PS3 title Heavy Rain, released in 2010, was via ‘body’ motion capture, which meant facial movements and voice overs were shot separately. First we filmed all body animations, then we recorded voice and facial animations in a sound booth, hoping everything would sync together.
All our actors did an amazing job, but it meant their performances were captured in two parts. As a result, the performances were disjointed – the eyes wouldn’t look in the right direction. It was very challenging (for both us and the actors) to get the level of performance we were looking for. So for the Kara short, we upgraded our motion capture system to be capable of recording body, face and voice all at once – what we call performance capture.
We really wanted a setup that wasn’t intrusive, which meant no helmet, no face camera, no backpack and no wires. We wanted the setup to be as invisible and light as possible, so our actors could quickly forget it.
So we fitted the actors with wireless mics and developed a system for tracking markers without a helmet or a projector, one precise enough to track eye movements at the same time. Last but not least, we needed to capture data good enough to minimize the need for post-animation work. We shot massive volumes of dialogue, so we needed a system that could make this process as efficient as possible while keeping the final result looking fantastic.
In short, we wanted high quality data captured with a very light setup, which was a very interesting challenge…
How Kara helped refine the capture system for Beyond and Detroit: Become Human
The Kara short is the result of this first iteration. When I saw the first captures, I realized that there was no going back. After Kara, we kept improving the precision of our capture system. We also greatly increased the area in which we could capture: working on Kara, we were capable of shooting one actor in performance capture in an area of two metres square; on Beyond it was four actors in nine metres square. On Detroit we were able to shoot six actors in sixteen metres square.
The precision of the data we are able to capture has also improved dramatically. On Detroit, we now capture details that before we could only see on set.
We have also continued to improve all the technologies linked to acting performances. So, we’ve developed a muscle simulation system, a wrinkle simulation, a shot-by-shot lighting rig that allows for soft and detailed shadows, and real-time translucency (picture how your ears glow red when there is a light behind you). These join many other technologies you may not notice, but which all play an important part in what you see on screen.
Since Kara’s 2012 debut, our rendering technology has also been through many iterations. The engine used for Kara was an evolution of Heavy Rain’s engine and the first version of Beyond: Two Souls’ engine. You see, after Heavy Rain, we wanted to improve the rendering of skin and eyes to allow for more subtle lighting and shadow on faces; we also worked on some improvements with our image rendering, especially in improving depth of field.
We were satisfied with our progress. That said, I remember we all feared that the Kara demo would over-promise what we could deliver visually for our next game, 2013’s Beyond: Two Souls. Working on a short demo is always different to a full game! So we had many discussions as to whether it would be fair to show this short. In the end, we decided to present it because we were confident that Beyond would look at least as good as Kara, if not better.
From Beyond to the Dark Sorcerer to Detroit: the evolution of Quantic Dream’s engines
Beyond: Two Souls used another iteration of the same engine, which improved every single aspect of the tech. To my mind, the game looks considerably better than the Kara short. Then the Dark Sorcerer, our 2013 tech demo, was a major step forward for the studio, as it was our very first PS4 engine. It remains, for me, one of the best-looking demos we have created.
For Detroit: Become Human, we used a brand new engine again. We invested a lot of time in having optics that are physically correct. Virtual cameras have no limitations and so can emulate optics that cannot exist, sometimes resulting in visuals that are not very convincing to the human eye.
For Detroit, we worked on making sure to use rules that are commonly accepted by our audience. This little change had a massive impact on the visual quality of the game. We added many new features: bokeh, advanced lens flares, improved lighting, real-time motion blur, volumetric lights, higher resolution on PS4 Pro, and many others.
This new engine, combined with our progress in performance capture, makes Detroit: Become Human the most advanced title ever produced by my studio. From Heavy Rain to Detroit, Quantic Dream continues to seek new ways and create new technologies to better capture and inspire emotion; they open new possibilities and give creators access to nuances and subtleties that were impossible before.