Modern video games are at a crossroads.
Programmers have been improving 3D game engines at a breakneck pace, but if you ask me, there are still a few areas that are seriously lagging behind. I’ve got many critiques, but today I’m going to focus on just one: characters. Though I have seen major improvements from some developers, like DICE and *shudder*, Crytek… the vast majority of game characters are absolutely horrific, lagging far behind the beautiful worlds they often inhabit. One of the worst offenders is Ubisoft and their creepy-eyed, blow-up-doll-esque Assassin’s Creed characters.
We blame YOU for our mysterious glowing eyes and crisp wooden faces!
Surely we can do better than this, right? Well, remember that crossroads I mentioned? It seems we may not be so far off from doing something a little more like, say, this…
Or better yet, something like this…
Those are 3D scans, by the way. Both of them were captured using Infinite Realities’ scanning technology… both the geometry and the textures. Perhaps now you can see why I think we’re at a crossroads, but if you’re still not convinced, then check out this short video of the scanned characters in a 3D space. Just a heads up, there’s some mild nudity…
Non-shitty Characters are here… Almost!
I’m sure I don’t really need to explain this, but as you watch the video, remember: these are all 3D scans of real people. They weren’t created by hand in a computer; they were simply scanned.
http://vimeo.com/73422331
Not bad, right? Well, it turns out that this technology is already being used for quite a few video game trailers. Not the games themselves, but the CG trailers that are used to promote them. If you can remember back to that awesome Cyberpunk 2077 trailer that I posted a while back, well… yeah, the studio behind it was one of Infinite Realities’ clients.
The only question is, when are we going to see this tech used in-game? I’m hoping soon, and I have a feeling that I won’t be disappointed. Given the quantum leap in horsepower with the next generation of game consoles, I think we’re finally, finally going to see some characters that don’t look like shit.
For more information on Infinite Realities, the incredible technology they’re developing, and numerous videos of the work they’ve done over the years, head over to their official website and have a look around. You may, or may not, be surprised to see just how many clients they’ve taken on in the last few years alone.
So when are we going to be able to use these characters for more carnal-knowledge activities? Yep, I want to really control the fake people.
This post was clearly written by someone who has little understanding of how video games work behind the scenes.
Every point can be countered:
– The many areas in which video games are lagging: yes, you’re right that they seem to be lagging behind. But the reality is that they lag because of the technology, not because developers don’t want to do better. With the specs of the new-gen consoles (PS4 and Xbox One), things will change, but there is still the financial cost that comes with upgrading a product as large as a video game project.
– Assassin’s Creed’s “creepy-eyed, blow-up-doll-esque characters” come down to what we call game design and style, as well as technical difficulties. The glowing eyes are a deliberate shader tweak (the shader is the code that renders everything: the core that translates what’s in the engine into something the hardware can draw). Without it, the eyes would always sit in shadow, and in games as in film that’s a huge no-no: the eyes carry something like 30% to 40% of a character’s emotion. Bury them in shadow and you lose that much of the emotion from the character. (A rough sketch of the idea follows this point.)
The “blow-up-doll-esque” look is a style choice. Not every game is meant to be ultra-realistic in its proportions and colors. Even when a game is based on “real” events, nobody is forced to make things look ultra-realistic and less doll-esque. There is not a SINGLE Assassin’s Creed with realistically proportioned characters; they are all deformed in some way. It’s part of the game (like how WoW exaggerates weapons, breasts, hands, shoulder armor and mounts).
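To make the eye-lighting point concrete, here is a minimal toy sketch of clamping eye brightness to a floor so the eyes never go fully dark. This is only an illustration of the general idea, not Ubisoft’s actual shader; every name and value in it is a made-up assumption.

```python
# Toy illustration of keeping eyes out of full shadow.
# All names and values are hypothetical, not taken from any real engine.

def lambert(normal, light_dir):
    """Basic N·L diffuse term, clamped to [0, 1]."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, n_dot_l))

def shade(normal, light_dir, is_eye_material, eye_floor=0.25):
    """Ordinary surfaces can go fully dark; eye materials are clamped to a
    minimum brightness so they always read on screen."""
    diffuse = lambert(normal, light_dir)
    if is_eye_material:
        diffuse = max(diffuse, eye_floor)  # never let the eyes go black
    return diffuse

# A surface facing away from the light: skin goes dark, the eye keeps a glint.
print(shade((0.0, 0.0, 1.0), (0.0, 0.0, -1.0), is_eye_material=False))  # 0.0
print(shade((0.0, 0.0, 1.0), (0.0, 0.0, -1.0), is_eye_material=True))   # 0.25
```

The side effect of a floor like this is exactly what the post complains about: eyes that stay slightly lit even when the rest of the face is in shadow, which can read as “glowing.”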
– The pictures you showed are unpolished 3D scans. To give you an idea, the man’s face alone is close to 5 million polygons, and the woman’s is close to 15 million. Those numbers are impossible for video games: a game character, even on the newer consoles, should not exceed around 200K polygons (about 50K for the main character on the older generation, PS3 and Xbox 360).
What you are mixing up is the conflict between real-time rendering and CGI. Real-time rendering is what video games use, and it requires rendering the scene every frame, with at most about 0.03 seconds per frame. CGI (movies) is made by rendering a still picture of the scene, which can take minutes or even hours per frame on lower-end rigs. A production can take 5 days to render 30 seconds of footage (900 frames) and that’s not a problem, because afterwards it plays back like a movie: no real-time 3D with lights, shaders, occlusion, etc., just 2D pictures shown one after the other. Even the company with the “best” CGI equipment in the world needs at least around 10 seconds per frame, and that frame only has enough pixels for a computer screen (not a theater screen). (A quick back-of-the-envelope comparison of those budgets follows below.)
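To put rough numbers on that gap, here is a quick calculation using the ballpark figures from this comment itself (30 fps, the 5-days-for-900-frames example, a 200K per-character budget), so treat the output as an order-of-magnitude illustration rather than a measurement:

```python
# Back-of-the-envelope budgets, using the comment's own ballpark figures.

realtime_budget = 1.0 / 30.0     # ~0.033 s per frame at 30 fps
scan_polys = 15_000_000          # the scanned woman's head, per the comment
game_budget_polys = 200_000      # rough per-character budget on new-gen consoles

offline_seconds = 5 * 24 * 3600  # "5 days" of render time
offline_frames = 900             # "30 seconds" of footage at 30 fps
offline_per_frame = offline_seconds / offline_frames

print(f"Real-time budget per frame: {realtime_budget:.3f} s")
print(f"Offline time per frame:     {offline_per_frame:.0f} s (~{offline_per_frame / 60:.0f} min)")
print(f"Offline is ~{offline_per_frame / realtime_budget:,.0f}x slower per frame")
print(f"Scan is ~{scan_polys / game_budget_polys:.0f}x over the per-character poly budget")
```

That works out to roughly 8 minutes per offline frame versus 33 milliseconds in real time, and a scan that is about 75 times over a game character’s polygon budget.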
Another point: the pictures shown use 4K textures, while video games use lower-resolution textures:
– The PS3 and Xbox 360 are limited to only a few 2048×2048 textures for a whole rendered scene; it’s preferable to keep them at 1024×1024.
– The PS4 and Xbox One can go up to 4096×4096, but they use it much the way the previous generation used 2048×2048. Why, even with 16x more memory? Because the PS3/Xbox 360 were limited to roughly a maximum of 6 texture layers per asset (as in Beyond: Two Souls), while the new generation handles close to 14 different layers. So instead of the “old” approach with 6 files at 4096×4096, developers mostly make use of the new shader capacity, but with 2048×2048 textures and around 14 different layer types per material. That’s half of what’s shown in those pictures and, again, it costs 3x more time to produce. (A rough memory comparison is sketched below.)
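For a rough sense of why resolution and layer count trade off against each other, here is a quick calculation of raw, uncompressed RGBA8 texture sizes. The layer combinations are illustrative (taken from the numbers above), and real engines use block compression and mipmaps, so actual footprints differ; this only shows the scale of the problem:

```python
# Raw, uncompressed RGBA8 texture sizes (4 bytes per texel), no mipmaps,
# no block compression; real engines compress, so these are upper bounds.

def texture_mb(resolution, layers=1, bytes_per_texel=4):
    return resolution * resolution * bytes_per_texel * layers / (1024 ** 2)

# Old-gen style: 6 layers at 2048x2048 per material
old_gen = texture_mb(2048, layers=6)
# New-gen style (per the comment): 14 layers at 2048x2048 per material
new_gen = texture_mb(2048, layers=14)
# The brute-force alternative: 14 layers at 4096x4096
brute_force = texture_mb(4096, layers=14)

print(f"6 x 2048^2:  {old_gen:.0f} MB per material")   # 96 MB
print(f"14 x 2048^2: {new_gen:.0f} MB per material")   # 224 MB
print(f"14 x 4096^2: {brute_force:.0f} MB per material")  # 896 MB
```

Even before compression, pushing every layer to 4096×4096 quadruples the memory per material, which is why the extra console memory goes into more layers rather than uniformly bigger textures.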
3D scanning is ONLY used in games for 3 things:
– Making an ultra-high-poly (and textured) version of an asset, whose detail is then “projected” onto a lower-poly asset (with some loss of texture detail). This is how the different texture types are produced: bump/normal, diffuse and even displacement maps. The opacity, gloss, specular, etc. layers still have to be managed manually (and trust me, manually authoring textures is a pain even if it can be fun; there are many things to take into consideration… like the UVs). A toy sketch of this baking idea follows after this list.
– Producing better reference data for building the low-res version. It’s always better to have a high-res form you can look around and work “over” (like a molded paper mask) than to work from 2D reference pictures and imagine the “filling” between the views.
– Expressions. A 3D scan is an awesome reference here too: it’s much easier to determine the placement of the armature, for the best representation of emotion, when you have a good view of the vertices on the face. Morph targets (as used in movies) are still almost never used, since they take close to 24x more memory to read and render.
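As a toy illustration of the first point (baking high-resolution surface detail down into a texture a low-poly model can use), here is a sketch that stands in a simple heightfield for the “high-poly” detail and derives a tangent-space normal map from its gradients. Real bakers ray-cast from the low-poly surface onto the high-poly mesh; this deliberately simplified version only shows the flavor of the idea, and every name in it is made up for the example:

```python
import numpy as np

# Toy normal-map "bake": the high-poly detail is stood in by a heightfield,
# and a tangent-space normal map is derived from its gradients.
# Real bakers ray-cast from the low-poly surface onto the high-poly mesh;
# this is only the flavor of the idea, not a production tool.

def bake_normal_map(height, strength=1.0):
    """height: 2D array of surface heights. Returns an (H, W, 3) array of
    tangent-space normals packed into the usual [0, 1] range."""
    dz_dy, dz_dx = np.gradient(height.astype(np.float64))
    # The normal of the surface z = h(x, y) is proportional to (-dh/dx, -dh/dy, 1).
    normals = np.dstack([
        -dz_dx * strength,
        -dz_dy * strength,
        np.ones_like(height, dtype=np.float64),
    ])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals * 0.5 + 0.5  # pack [-1, 1] into [0, 1] for storage as RGB

# Fake "high-poly" detail: a small bump in the middle of a flat patch.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
bump = np.exp(-(x**2 + y**2) * 8.0)

normal_map = bake_normal_map(bump, strength=4.0)
print(normal_map.shape)    # (64, 64, 3)
print(normal_map[32, 32])  # flat top of the bump -> roughly (0.5, 0.5, 1.0)
```

The resulting map is what the low-poly asset reads at render time to fake all that sculpted or scanned detail without carrying the millions of polygons themselves.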