Games Look Bad, Part 1: HDR and Tone Mapping

This is Part 1 of a series examining techniques used in game graphics and how those techniques fail to deliver a visually appealing end result. See Part 0 for a more thorough explanation of the idea behind it.

High dynamic range. First experienced by most consumers in late 2005, with Valve’s Half-Life 2: Lost Coast demo. Largely faked at the time due to technical limitations, but it laid the groundwork for something we take for granted in nearly every blockbuster title. The contemporaneous reviews were nothing short of gushing. We’ve been busy making a complete, god-awful mess of it ever since.

Let’s review, very quickly. In the real world, the total contrast ratio between the brightest highlights and darkest shadows during a sunny day is on the order of 1,000,000:1. We would need 20 bits of just luminance to represent those illumination ranges, before even including color in the mix. A typical DSLR can record 12-14 bits (16,000:1 in ideal conditions). A typical screen can show 8 (curved to 600:1 or so). Your eyes… well, it’s complicated. Wikipedia claims 6.5 (100:1) static. Others disagree.
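
To put those numbers in perspective: bits of linear luminance relate to contrast ratio as a base-2 logarithm. A quick back-of-the-envelope check of the figures above (the 8-bit screen is left out because its bits are gamma-curved rather than linear):

    import math

    # One bit of linear luminance per doubling of intensity: bits = log2(contrast ratio).
    for label, ratio in [("sunny day", 1_000_000),
                         ("typical DSLR", 16_000),
                         ("static eye (Wikipedia)", 100)]:
        print(f"{label:>22}: {ratio:>9,}:1  ->  {math.log2(ratio):4.1f} bits")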

Graphics programmers came up with HDR and tone mapping to solve the problem. Both film and digital cameras have this same issue, after all. They have to take enormous contrast ratios at the input, and generate sensible images at the output. So we use HDR to store the giant range for lighting computations, and tone maps to collapse the range to screen. The tone map acts as our virtual “film”, and our virtual camera is loaded with virtual film to make our virtual image. Oh, and we also throw in some eye-related effects that make no sense in cameras and don’t appear in film for good measure. Of course we do.
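
In code terms, the virtual-film idea is tiny. Here is a minimal sketch of the whole HDR-to-screen step, using the classic Reinhard operator as the stand-in “film” (the exposure value and the flat 2.2 gamma are simplifications for illustration):

    import numpy as np

    def reinhard(x):
        # Classic Reinhard tone map: squashes [0, infinity) of linear radiance into [0, 1).
        return x / (1.0 + x)

    def tonemap_to_screen(hdr_rgb, exposure=1.0):
        # hdr_rgb: linear scene-referred radiance, any range (the "HDR" buffer).
        exposed = hdr_rgb * exposure                  # virtual camera exposure
        ldr = reinhard(exposed)                       # virtual "film" response
        return np.round(ldr ** (1.0 / 2.2) * 255)     # gamma-encode for an 8-bit display

    # A highlight 50x brighter than mid-grey still lands on screen instead of clipping:
    print(tonemap_to_screen(np.array([50.0, 4.0, 0.18])))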

And now, let’s marvel at the ways it goes spectacularly wrong.

[Images: Battlefield 1, Uncharted: Lost Legacy, Call of Duty: Infinite Warfare, and Horizon Zero Dawn screenshots]

In order: Battlefield 1, Uncharted: Lost Legacy, Call of Duty: Infinite Warfare, and Horizon Zero Dawn. HZD is a particular offender in the “terrible tone map” category and it’s one I could point to all day long. And so we run head first into the problem that plagues games today and will drive this series throughout: at first glance, these are all very pretty 2017 games and there is nothing obviously wrong with the screenshots. But all of them feel videogamey and none of them would pass for a film or a photograph. Or even a reasonably good offline render. Or a painting. They are instantly recognizable as video games, because only video games try to pass off these trashy contrast curves as aesthetically pleasing. These images look like a kid was playing around in Photoshop and maxed the Contrast slider. Or maybe that kid was just dragging the Curves control around at random.

The funny thing is, this actually has happened to movies before.

[Image: Smaug, from The Hobbit]

Hahaha. Look at that Smaug. He looks terrible. Not terrifying. This could be an in-game screenshot any day. Is it easy to pick on Peter Jackson’s The Hobbit? Yes, it absolutely is. But I think it serves to highlight that while technical limitations are something we absolutely struggle with in games, there is a fundamental artistic component here that is actually not that easy to get right even for film industry professionals with nearly unlimited budgets.

Allow me an aside here into the world of film production. In 2006, the founder of Oakley sunglasses decided the movie world was being disingenuous in its claims about what digital cameras could and could not do, and set out to produce a new class of cinema camera with higher resolution, higher dynamic range, higher everything than the industry had, one that would exceed the technical capabilities of film in every regard. The RED One 4K was born, largely accomplishing its stated goals and being adopted almost immediately by one Peter Jackson. Meanwhile, a cine supply company founded in 1917 called Arri decided it didn’t give a damn about resolution, and shipped the 2K Arri Alexa camera in 2010. How did it go? At the 2015 Oscars, four of the five nominees in the cinematography category were photographed using the ARRI Alexa. Happy belated 100th birthday, Arri.

So what gives? Well, in the days of film there was a lot of energy expended on developing the look of a particular film stock. It’s not just chemistry; color science and artistic qualities played heavily into designing film stocks, and good directors/cinematographers would (and still do) choose particular films to get the right feel for their productions. RED focused on exceeding the technical capabilities of film, leaving the actual color rendering largely in the hands of the studio. But Arri? Arri focused on achieving the distinctive feel and visual appeal of high quality films. They better understood that even in the big budget world of motion pictures, color rendering and luminance curves are extraordinarily difficult to nail. They perfected that piece of the puzzle and it paid off for them.

Let’s bring it back to games. The reality is, the tone maps we use in games are janky, partly due to technical limitations. We’re limited to a 1D luminance response where real film produces both hue and saturation shifts. The RGB color space is a bad choice to be doing this in the first place. And because nobody in the game industry has an understanding of film chemistry, we’ve all largely settled on blindly using the same function that somebody somewhere came up with. It was Reinhard in years past, then it was Hable, now it’s ACES RRT. And it’s stop #1 on the train of Why does every game this year look exactly the goddamn same?
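
For the curious, here are those three curves as they usually circulate in shader code, ported to Python for comparison. The Hable constants are the widely copied “Uncharted 2” values and the ACES curve is Narkowicz’s sRGB-targeted fit, so treat these as the popular approximations rather than anything official:

    import numpy as np

    def reinhard(x):
        return x / (1.0 + x)

    def hable(x):
        # John Hable's filmic curve with the commonly shared "Uncharted 2" constants.
        A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
        return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

    def hable_tonemap(x, white_point=11.2):
        return hable(x) / hable(white_point)

    def aces_fit(x):
        # Krzysztof Narkowicz's fit of the ACES RRT+ODT for an sRGB display.
        return np.clip((x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14), 0.0, 1.0)

    x = np.array([0.05, 0.18, 1.0, 4.0, 16.0])   # linear scene values, mid-grey at 0.18
    for name, curve in [("Reinhard", reinhard), ("Hable", hable_tonemap), ("ACES fit", aces_fit)]:
        print(f"{name:<9}", np.round(curve(x), 3))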

The craziest part is we’re now at the point of real HDR televisions showing game renders with wider input ranges. Take this NVIDIA article, which sees the real problem and walks right past it. The ACES tone map is destructive to chroma. Then they post a Nikon DSLR photo of a TV in HDR mode as a proxy for how much true HDR improves the viewing experience. Which is absolutely true – but then why does the LDR photo of your TV look so much better than the LDR tone map image? There’s another tone map in this chain which nobody thought to examine: Nikon’s. They have decades of expertise in doing this. Lo and behold, their curve makes a mockery of the ACES curve used in the reference render. Wanna know why that is? It’s because the ACES RRT was never designed to be an output curve in the first place. Its primary design goal is to massage differences between cameras and lenses used on set so they match better. You’re not supposed to send it to screen! It’s a preview/baseline curve which is supposed to receive a film LUT and color grading over top of it.
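
Back up to that chroma point for a second, because it is easy to demonstrate. Run a saturated, over-range color through the fitted curve per RGB channel, then compare against tone mapping the luminance only and rescaling; the numbers are purely illustrative, and the luminance-only route has problems of its own (it clips), so take this as a demonstration of the effect rather than a recommendation:

    import numpy as np

    def aces_fit(x):
        return np.clip((x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14), 0.0, 1.0)

    def luminance(rgb):
        return float(rgb @ np.array([0.2126, 0.7152, 0.0722]))   # Rec.709 weights

    bright_red = np.array([8.0, 0.5, 0.5])   # a saturated, brighter-than-white highlight

    per_channel = aces_fit(bright_red)                             # what games typically do
    y = luminance(bright_red)
    lum_only = np.clip(bright_red * (aces_fit(y) / y), 0.0, 1.0)   # keep RGB ratios, allow clipping

    print("per-channel:", np.round(per_channel, 3))   # ~[1.0, 0.62, 0.62]: the red has washed out
    print("lum-only   :", np.round(lum_only, 3))      # ~[1.0, 0.22, 0.22]: hue/sat kept, but clipped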

“Oh, but real games do use a post process LUT color grade!” Yeah, and we screwed that up too. We don’t have the technical capability to run real film industry LUTs in the correct color spaces, we don’t have good tools to tune ours, they’re stuck doing double duty for both “filmic look” as well as color grading, the person doing it doesn’t have the training background, and it’s extraordinary what an actual trained human can do after the fact to fix these garbage colors. Is he cheating by doing per-shot color tuning that a dynamic scene can’t possibly accomplish? Yes, obviously. But are you really going to tell me that any of these scenes from any of these games look like they are well balanced in color, contrast, and overall feel?
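
For readers who have never touched this part of the pipeline: the post process “color grade” is just a 3D lookup table applied to the tone-mapped image. A minimal sketch of the mechanism (identity table, nearest-neighbour lookup for brevity; real engines use trilinear or tetrahedral filtering and load the table from an authored file such as a .cube):

    import numpy as np

    LUT_SIZE = 17   # points per axis; film-industry LUTs are more commonly 33 or 65

    # Identity grading LUT: lut[r_index, g_index, b_index] -> graded RGB.
    grid = np.linspace(0.0, 1.0, LUT_SIZE)
    lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)

    def apply_lut(ldr_rgb, lut):
        # ldr_rgb: (..., 3) values in [0, 1], i.e. the *output* of the tone map.
        size = lut.shape[0]
        idx = np.clip(np.rint(ldr_rgb * (size - 1)).astype(int), 0, size - 1)
        return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

    pixel = np.array([0.25, 0.50, 0.75])
    print(apply_lut(pixel, lut))   # an identity table hands back (roughly) what it was given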

Of course while we’re all running left, Nintendo has always had a fascinating habit of running right. I can show any number of their games for this, but Zelda: Breath of the Wild probably exemplifies it best when it comes to HDR.

[Image: The Legend of Zelda: Breath of the Wild screenshot]

No HDR. No tone map. The bloom and volumetrics are being done entirely in LDR space. (Or possibly in 10 bit. Not sure.) Because in Nintendo’s eyes, if you can’t control the final outputs of the tone mapped render in the first place, why bother? There’s none of that awful heavy handed contrast. No crushed blacks. No randomly saturated whites in the sunset, and saturation overall stays where it belongs across the luminance range. The game doesn’t do that dynamic exposure adjustment effect that nobody actually likes. Does stylized rendering help? Sure. But you know what? Somebody would paint this. It’s artistic. It’s aesthetically pleasing. It’s balanced in its transition from light to dark tones, and the over-brightness is used tastefully without annihilating half the sky in the process.

Now I don’t think that everybody should walk away from HDR entirely. (Probably.) There’s too much other stuff we’ve committed to which requires it. But for god’s sake, we need to fix our tone maps. We need to find curves that are not so aggressively desaturating. We need curves that transition contrast better from crushed blacks to mid-tones to blown highlights. LUTs are garbage in, garbage out and they cannot be used to fix bad tone maps. We also need to switch to industry standard tools for authoring and using LUTs, so that artists have better control over what’s going on and can verify those LUTs outside of the rendering engine.
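
One concrete way to act on that is to stop treating the curve as frozen magic numbers and expose its shape to the art team. Here is a sketch of the idea using Hable’s parameterized filmic function, where the toe and shoulder become tuning knobs; the defaults below are the familiar “Uncharted 2” constants, and the adjusted toe is only an example, not a recommended setting:

    import numpy as np

    def filmic(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30, W=11.2):
        # Hable's curve with its knobs exposed: A shoulder strength, B linear strength,
        # C linear angle, D toe strength, E/F toe shape, W linear white point.
        def h(v):
            return (v * (A * v + C * B) + D * E) / (v * (A * v + B) + D * F) - E / F
        return h(x) / h(W)

    shadows = np.array([0.01, 0.02, 0.05])
    print("default toe:", np.round(filmic(shadows), 4))
    print("weaker toe :", np.round(filmic(shadows, D=0.10), 4))   # lifted shadows, less black crush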

In the meantime, the industry’s heavy hitters are just going to keep releasing this kind of over-contrasty garbage.

[Image: screenshot]

Before I finish up, I do want to take a moment to highlight some games that I think actually handle HDR very well. First up is Resident Evil 7, which benefits from a heavily stylized look that over-emphasizes contrast by design.

[Image: Resident Evil 7 screenshot]

That’s far too much contrast for any normal image, but because we’re dealing with a horror game it’s effective in giving the whole thing an unsettling feel that fits the setting wonderfully. The player should be uncomfortable with how the light and shadows collide. This particular scene places the jarring transition right in your face, and it’s powerful.

Next, at risk of seeming hypocritical I’m going to say Deus Ex: Mankind Divided (as well as its predecessor).

[Image: Deus Ex: Mankind Divided screenshot]

The big caveat with DX is that some scenes work really well. The daytime outdoors scenes do not. The night time or indoor scenes that fully embrace the surrealistic feeling of the world, though, are just fantastic. Somehow the weird mix of harsh blacks and glowing highlights serves to reinforce the differences between the bright and dark spots that the game is playing with thematically throughout. It’s not a coincidence that Blade Runner 2049 has many similarities. Still too much contrast though.

Lastly, I’m going to give props to Forza Horizon 3.

[Image: Forza Horizon 3 screenshot]

Let’s be honest: cars are “easy mode” for HDR. They love it. But there is a specific reason this image works so well. It is low contrast. Nearly all of it lives in the mid-tones, with only a few places wandering into deep shadow (notably the trees) and almost nothing in the bright highlights. But the image is low contrast because cars themselves tend to use a lot of black accents and dark regions which are simply not visible when you crush the blacks as we’ve seen in other games. Thus the toe section of the curve is lifted much more than we normally see. Similarly, overblown highlights mean whiting out the car in the specular reflections, which are big and pretty much always image based lighting for cars. It does no good to lose all of that detail, but the entire scene benefits from the requisite decrease in contrast. The exposure level is also noticeably lower, which actually leaves room for better mid-tone saturation. (This is also a trick used by Canon cameras, whose images you see every single day.) The whole image ends up with a much softer and more pleasant look that doesn’t carry the inherent stress we find in the images I criticized at the top. If we’re looking for an exemplar for how to HDR correctly in a non-stylized context, this is the model to go by.

Where does all this leave us? With a bunch of terrible-looking games, mostly. There are a few technical changes we need to make right up front, from basic decreases in contrast to simple tweaks to the tone map to improved tools for LUT authoring. But as the Zelda and Forza screenshots demonstrate, and as the Hobbit screenshot warns us, this is not just a technical problem. Bad aesthetic choices are being made in the output stages of the engine that are then forced on the rest of the creative process. Engine devs are telling art directors that their choice of tone map is one of three options, two of which are legacy. Is it bad art direction or bad graphics engineering? It’s both, and I suspect both departments are blaming the other for it. The tone map may be at the end of the graphics pipeline, but in film production it’s the first choice you make. You can’t make a movie without loading film stock in the camera, and you only get to make that choice once (digital notwithstanding). Don’t treat your tone map as something to tweak around the edges when balancing the final output LUT. Don’t just take someone else’s conveniently packaged function. The tone map’s role exists at the beginning of the visual development process and it should be treated as part of the foundation for how the game will look and feel. Pay attention to the aesthetics and visual quality of the map upfront. In today’s games these qualities are an afterthought, and it shows.

UPDATE: User “vinistois” on HackerNews shared a screenshot from GTA 5 and I looked up a few others. It’s very nicely done tone mapping. Good use of mid-tones and contrast throughout with great transitions into both extremes. You won’t quite mistake it for film, I don’t think, but it’s excellent for something that is barely even a current gen product. This is proof that we can do much better from an aesthetic perspective within current technical and stylistic constraints. Heck, this screenshot isn’t even from a PC – it’s the PS4 version.

[Image: GTA 5 screenshot (PS4)]

19 thoughts on “Games Look Bad, Part 1: HDR and Tone Mapping”

  1. As replied on Twitter, I think the post is almost entirely subjective. I will not debate that; I’m actually part of the group of people who think the ACES RRT has too much contrast and should be more neutral. However, you have to keep in mind that a lot of people don’t mind it and a lot of others like it. My point being that you can’t assume everybody thinks the same as you, and I know a lot of people who do like the visuals of the games you referenced.

    What brought me here is that your article is riddled with errors and incorrect assumptions; I’ll address some of them:

    > We’re limited to a 1D luminance response where real film produces both hue and saturation shifts.

    This is incorrect; you can run almost any type of colour processing function in real time (even complex Colour Appearance Models), and if they are too expensive you can collapse them to 3D LUTs, as e.g. Unreal Engine or Unity do.

    > The RGB color space is a bad choice to be doing this in the first place.

    Would you mind expanding on this sentence? It does not make any sense without context. As a matter of fact, most VFX compositing and colour grading applications adopt an RGB working space.

    > The ACES tone map is destructive to chroma.

    That is typical of sigmoid functions applied to RGB components; nothing new here. Thousands of films and billions of photographs have been produced for years without it being a problem, quite the contrary. It is actually a subjectively pleasing side effect that a lot of people enjoy. You are of course free to tonemap Luminance only 🙂

    > Its primary design goal is to massage differences between cameras and lenses used on set so they match better.

    This is incorrect; this is not the role of the ACES RRT but of the ACES IDTs: they ensure that any input devices will output consistent and matching scene referred colours within the ACES 2065-1 colourspace. The purpose of the ACES RRT is to convert the ACES 2065-1 scene referred colours to ACES OCES display referred colours for a theoretical display with a dynamic range of [0.0001, 10000] cd/m².

    > You’re not supposed to send it to screen!

    If you have the theoretical display you absolutely can and should! This is also the reason the ACES ODTs exist: they convert the ACES OCES display referred colours for your actual display device. It is quite clear that you did not understand this when referencing Narkowicz’s (2016) fitted function; Krzysztof’s function is fitted for a typical sRGB display: “but we can just sample ODT( RRT( x ) )”. The ODT component of that expression is critical here.

    > It’s a preview/baseline curve which is supposed to receive a film LUT and color grading over top of it.

    Incorrect; colour grading/look modifications in the ACES system are technically meant to be performed using ACES LMTs, and thus UNDER the ACES RRT, not on TOP of it.

    > We don’t have the technical capability to run real film industry LUTs in the correct color spaces

    Can you expand on that? Again, I have trouble seeing where the issue is.

    Cheers,

    1. As far as subjectivity goes… well, yeah. Obviously.

      > We’re limited to a 1D luminance response where real film produces both hue and saturation shifts.
      This is mainly a product of performance limitations, maybe with some challenges mixed in for tooling and artist training. Hue/sat shifts are expensive if you have to change color space to do them. As for LUTs, that’s pretty much what we end up doing instead. It’s okay, but they’re not very big LUTs. I’m not sure of performance budgets offhand in current titles, but it’s probably well under half a millisecond of GPU time for a 1080p tonemap/LUT/color correct. I wouldn’t be surprised to see it under a quarter.

      > as a matter of fact most of the VFX compositing and colour grading applications adopt an RGB working space.
      I was under the impression these were typically implemented in a luma/chroma space under the hood, but perhaps I’m mistaken. Working in that space enables tuning the luminance response without the unintended desaturation that results from applying sigmoids to RGB, as you pointed out. Maybe it just doesn’t matter in film since there’s going to be another color correct pass anyway.

      RE: ACES RRT – I may have misunderstood/misread a few things here. I need to review before commenting further. I think it’s not crucial and I probably overemphasized it in this write-up in the first place.

      > We don’t have the technical capability to run real film industry LUTs in the correct color spaces
      Again I think this is a product of performance limitations. Amongst other things, I want a world where I can go to Film Looks and just drop it onto my game, like I’d do to S-Log footage.

      1. +1 to nearly everything Colour_Science said. The only part of the film pipeline that uses other colorspaces is compressing gamut from Rec2020/ACES/P3 to Rec709. But that’s easily handled and included in a proper Rec2020 HDR -> Rec709 SDR 3D LUT. A modern game engine should have zero trouble running a 33-point 3D LUT. Alternatively you can run a 1D LUT on saturation to softly convert from a large color gamut to a narrower one.

  2. In my opinion, when grading, you should work in HDR, but set the display to clip at 100 nits, and achieve the best picture you can in that space, and then simply use HDR as an “unclipping” of your SDR grade. Far too many people think you need to include every bit of detail, even in SDR. Clipping is absolutely fine. What’s the point of HDR if it’s not expanding above what you already display in your SDR image? Just make sure the camera is not actually doing the clipping at the source, otherwise you’re limiting what you can do in HDR.

    I watched a clip of Rise of the Tomb Raider in HDR and in the clip, Lara is running toward the exit of a tomb. The opening is shrouded in blinding white light. In SDR, this would look great, because the visuals outside that cave should absolutely be overexposed when the camera is inside, but in HDR, it shouldn’t look like that. You should see the details and color in the sky that were previously clipped to white, but now simply displayed very brightly, because now your tv has an expanded dynamic range above what was previously visible. In games this is so simple to do, as long as you use lighting that goes outside a 0-100% range. Games have been using HDR compatible lighting for about a decade now, and you could theoretically apply HDR display updates to older games because of that. But only if you treat it as a raising of the luminosity ceiling, rather than simply remapping the same range of colors you have to a brighter range. That’s not what HDR is about.

    I was watching some Star Trek Discovery, and I noticed the edges around objects all had that typical old-school HDR-to-SDR tonemapping ringing/halo effect. I saw the same thing in a few episodes of The Leftovers. DON’T DO THAT! It’s okay if your background is brighter or darker than your characters. Let it be brighter or darker. If it’s so dark that you can’t see detail you should be able to see, then the set was probably lit poorly. If it’s so much brighter than the character that it gets clipped and looks overexposed, leave it that way. It will actually look bright, as it should. If there’s important detail there, improve your set lighting so that the character isn’t too much darker than that backdrop, except obviously in cases where the character being a silhouette is fine.

    Heck, even on HDR TVs, I’d rather they not tonemap highlights that go beyond the TV’s capabilities down. TV goes to 650 nits? Clip above 650. If it’s an OLED, maybe also provide an option for pitch-black room viewing (so your eyes can adjust) where you linearly compress a brighter range into a darker one by simulating a lower exposure. A movie is graded for 4000 nits? You can display that at 16.25% exposure and let your eyes adjust in a black room, and after a couple of minutes of adjusting you’ll have a similar experience to someone with a 4000-nit OLED (from the future).

    1. Unfortunately HDR output requires significant pipeline changes* in games that most teams are not willing to invest in. I know for a fact that as a result, several commercial games that were updated for HDR TVs did no such thing. What they did was add a range-expander post tone map that artificially boosts levels into “HDR” ranges. I suspect Tomb Raider is included in that category.

      It’s basically the same thing that TVs are doing with gamut expansions and range expansions on LDR content.

      * For info on why pipeline changes are required and what is involved, see this presentation: https://www.ea.com/frostbite/news/high-dynamic-range-color-grading-and-display-in-frostbite

      1. Yeah, that sounds like what Tomb Raider is doing, but a large portion of games over the last decade are internally rendering their lighting to higher levels than are displayed on screen. Pretty much any game where you see the exposure adapt to the image on screen is doing this, aside from some very early games. For example, even the original Uncharted game was done in a way where all they would essentially have to do is, pre-gamma, adjust the exposure to 1% of what it was originally and then simply apply the PQ EOTF so that the 100% level represents 10,000 nits (roughly as sketched below).

        It’s an older game and they won’t bother updating it, but most games that include adaptive exposure and/or bloom effects on “brighter-than-white” lighting most likely are internally rendering light above the 100% level, and likely in floating point, and so a native expansion into the HDR range would be theoretically possible, if the developers cared to implement it.
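
        For reference, the PQ encode step mentioned above is just a fixed curve over absolute luminance. A rough sketch in Python (taking ~100 nits as the old SDR “100%” level is my assumption, and a real HDR output path would also convert primaries to Rec2020):

            import numpy as np

            def pq_inverse_eotf(nits):
                # SMPTE ST 2084 (PQ) encode: absolute luminance in cd/m^2 -> [0, 1] signal.
                m1, m2 = 2610 / 16384, 2523 / 4096 * 128
                c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
                y = np.clip(nits / 10000.0, 0.0, 1.0) ** m1
                return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

            # A pixel the SDR pipeline showed at "100%" (taken as ~100 nits here) lands around
            # half of the PQ signal range, leaving the upper half for genuine highlights.
            print(pq_inverse_eotf(np.array([0.0, 100.0, 1000.0, 10000.0])))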

        1. Right, but the problem is there are still steps in the render pipeline that are being applied after the image is collapsed to 8 bit LDR space and those steps are not easily revised for an HDR input. The Frostbite slides are really worth a read, as they provide good insight as to why something that should be easy, isn’t.

    2. > In my opinion, when grading, you should work in HDR, but set the display to clip at 100 nits, and achieve the best picture you can in that space, and then simply use HDR as an “unclipping” of your SDR grade. Far too many people think you need to include every bit of detail, even in SDR. Clipping is absolutely fine.

      While not every bit of detail has to be included in SDR, and it’s OK for some things to sometimes clip in SDR, clipping is usually still undesirable. Grading everything in HDR, clipping off everything above 100 nits, and calling that an SDR grade is insane. That’s simply not how grading and tone mapping work at all. I have DaVinci Resolve Studio, along with a DeckLink and an HDR monitor, on which I do HDR grading, so I speak from experience. In short, SDR and HDR require separate grades; any process that automatically derives one from the other will be a compromise that will sometimes produce bad results. I suggest reading the 5-part blog post series by Mysterybox on HDR workflow and grading if you’re interested in the details: https://www.mysterybox.us/blog/2016/10/18/hdr-video-part-1-what-is-hdr-video
      (part 3 might be most relevant to you, but it’s all a great read)

      If we focus on HDR in games rather than in video, note that in Unreal Engine, for example, the workflow of grading everything in HDR and then clipping that to 100 nits and using it as the SDR output is not even possible without a significant rewrite of the engine code, and this is 5 years in the future.

      1. If you make an HDR grade and clip it to 100 nits and the highlights look crazy blown out, that’s an indication that your HDR is graded way too brightly. I didn’t mean to do that and call it a day (although in some cases that will absolutely be fine, especially games) but HDR should feel like a natural expansion above SDR, not simply a remapping of the same visual information to match different display capabilities.

        The black point of HDR and SDR is also the same, so when displayed on a display that can show true blacks, there’s no real reason for shadows to look considerably different in SDR either. Overall shadows midtones and contrast should look very similar in HDR, with only the highlights and colors expanding beyond what SDR can do, when appropriate. A highlight that reached white in SDR doesn’t necessarily mean a highlight that would be super bright in HDR. How bright it gets should use a natural level of dynamic range relative to the overall contrast of the image. If you’re purposefully lowering contrast in SDR, you’re harming the look of the image for no reason.

        So basically, start with HDR. Grade the HDR so that it still looks great if you clip it to 100 nits, with only minimal clipping in the brightest of highlights, like a SDR camera would do when an image is properly exposed. Yes, from there, then you can further tweak the SDR to look better when needed, but if you’ve done the HDR part right, you shouldn’t have a lot of work to do. Far too many people grade HDR way too brightly, or exaggerate contrast in ways that don’t make sense, or inversely, make SDR look like crap by lowering contrast, raising blacks/shadows, and trying to preserve way more highlight information than they need to. If done right, there shouldn’t be much difference between the SDR and HDR image aside from the brightest highlights and the most saturated colors. Most of it should look very similar.

  3. Super interesting post! As someone who is new to more complicated graphics programming, I’d love to know what kind of tools you’d suggest for better tone maps 🙂

  4. It’s interesting that most of the “manually graded” Uncharted scenes referred to actually increased contrast, yet this article claims that the problem is “too much contrast in games.”

  5. As someone who works on high-end commercial live-action productions and has worked a great deal as a lighter for CG, I’ve often thought long and hard about why games look bad. A lot of what makes a film look “filmic” isn’t the camera or the tonemap or the grade; it’s the lighting. If you are trying to fix the lighting through a LUT or a grade you’re ultimately going to fail. The reason that shot of Smaug looks bad is that the cinematography is bad. The same software and compositors who worked on Planet of the Apes also rendered and composited that shot of Smaug. The difference is that the cinematographer for Planet of the Apes set a lighting template that looks incredible. Smaug looks like that not because they don’t have a nice tone curve but because the cinematography has two unnaturally colored and unmotivated lighting sources. Even if it wasn’t CG it would look fakey no matter what camera/tone curve it’s run through.

    The next problem is shading. I’ve run game scenes with game textures through an offline renderer and they look 1,000 times more photographic instantly. It has nothing to do with a good tone mapper or proper gamut mapping and everything to do with the lighting. Games look flat and weird because the lighting *is* flat and weird due to technical limitations and for creative reasons. Tonemapping helps but what helps more is global illumination and ambient occlusion and soft shadowing.

    And what’s really insurmountable is that the world looks pretty bad even with a well graded Alexa or RED. When you’re shooting a film you almost never have direct sun on an actor. We’ll throw up a silk or some diffusion to soften the light source on their face. And it’ll be juuuust out of frame. And juuust out of frame to the right? A large bounce source following the actor. You can’t throw up a silk on a game character. You can’t perfectly place a rim light (although Gears of War tried with its random edge light added to all characters). If a AAA game team could come up with an AI that dynamically lit an 8 hour game in real-time to look filmic that would be an incredible achievement. Films can take an hour just to turn a camera around 180 degrees and shoot the reverse angle. A game has to look good from 360 degrees. And film grades are a painstaking process that is finessed for every framing. Again the film industry would love to get their hands on an AI that could learn to properly grade 6 hours of a single ‘take’ without any cuts.

    The good news is that, as far as what can be done, games are on the right path by starting to adopt ACES. Dislike or love the ACES RRT, the film industry is wrestling with how to make ACES look great, and if games implement an ACES workflow they’ll be able to steal a lot of that work and run their games through the exact same pipeline. Just don’t be surprised when cheap rasterized hacked-lighting tricks don’t look photographic.

  6. Not seeing the issue with the CoD picture; the contrast and saturation look pretty appropriate for the context of the scene and don’t seem heavy-handed to me. Also, those manually graded UC4 images look worse to me: the black levels look better, but the color temperature has shifted from a natural-looking warm to an off-looking cool.

  7. Great post, never really thought about why games always feel “game-ish”. I’m developing a game and I loved the insights; looking forward to the next posts 🙂

  8. I work as an environment and lighting artist but my background is in lighting for film and tv. I’m long enough in the tooth to remember how important the choice of film stock is/was to the final grade. Also in my experience lighting is often a neglected aspect of the game production pipeline. Fantastically interesting article – thanks.

  9. Thanks for writing this, there are some interesting ideas in here, although I disagree with many points. I have a question about how you took the screengrabs. Several of these look like the image is Rec709, but you are judging it on an sRGB device. If you correct the Horizon images for this, you will see the dark areas get lifted quite a bit. It’s also hard to know if these images are legal or full range.

    There are a lot of variables when judging color, and knowing how the images were acquired helps. As a side note, running a 3D cube LUT generated in Resolve is no problem for current games.

    Cheers,
    Koen

    PS: I don’t know how to attach an image or I would have attached a converted Horizon screen grab, but if you have access to something like Nuke, it’s not hard to do.

  10. I’m confused – is this an article about the best ways to look at flattened grabs from HDR games on non-HDR screens?

    1. It’s about HDR rendering, not HDR TVs specifically. For the last decade games have been rendering their images internally in HDR formats (often 16 bit / EXR), and then reducing them to SDR (e.g. 8 bit Rec709) for display.
      These days, with HDR TVs, games reduce from their internal HDR (e.g. 16 bit) to “HDR” (e.g. 10 bit Rec2020)… but that’s not really important to the article. The process is the same. Whether the user has an SDR or HDR display, there’s still a tonemapping algorithm in use by the game, which is somewhat equivalent to the choice of film stock or colour grading in Hollywood.

  11. This is a topic that’s pretty near and dear to my heart, in the sense that I’ve been trying to fight this battle ever since I started working in the industry; we as game artists have a bunch of tools at the tips of our fingers, but we constantly either use them the “wrong” way or we simply overuse them just because they’re there and they have a GPU cost, so we might as well crank it up to 11 and make the juice worth the squeeze…
    This is obviously a highly subjective topic; what looks good to my eyes might not look good to another person. But there are commonalities in what can be considered pretty and readable, and overuse of the “HDR Look” definitely doesn’t fall within that spectrum for me.

    Getting into the ins and outs of photography, and what the dynamic range of a lens ACTUALLY is in real life, really helped me get a better grasp of how we fake all of that in video games with LUTs and other post-processing tools. One of the most invaluable tools we as artists can use is the histogram: that graph will usually tell you if you have a very busy/noisy/contrasty picture, or simply if it’s too dark or too bright. Every env artist and lighting artist should know how to use it and try to work it into their art; it’ll pay off (there’s a quick sketch at the end of this comment).

    As an example, Horizon did feel overly contrasty, and it has a certain busy read to it. I almost feel that it could have been a push from Sony to sell more HDR/4K TVs, because if the pixels in the final picture go from bright to dark often and quickly, the more “layman” eye usually perceives it as more detailed. That actually explains why so many games look so noisy: they want to make the final picture look more detailed, and tbh the faster we get out of that mindset the better. This is the new trend the industry is currently going through, and we will look back at it with a certain disgust in the future, just as we now look at the desaturated “next gen” look from the early previous generation.

    Do what’s best for your art direction vision and for your game, and leave the preconceived notions of detail at the door. As a basic guideline, heavy contrast is usually very unappealing to the eye; use other tools like color balance and composition instead.
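
    For anyone who wants to try this on screenshots, the luminance histogram itself is only a few lines. A rough sketch (Rec.709 luma weights; the random array is just standing in for a captured frame):

        import numpy as np

        def luminance_histogram(ldr_rgb, bins=32):
            # ldr_rgb: (H, W, 3) display-referred image with values in [0, 1].
            y = ldr_rgb @ np.array([0.2126, 0.7152, 0.0722])   # Rec.709 luma weights
            hist, edges = np.histogram(y, bins=bins, range=(0.0, 1.0))
            return hist / hist.sum(), edges

        frame = np.random.rand(720, 1280, 3)   # stand-in for a real screenshot
        hist, _ = luminance_histogram(frame)
        print("share of pixels in the darkest 1/16 of the range :", round(float(hist[:2].sum()), 3))
        print("share of pixels in the brightest 1/16 of the range:", round(float(hist[-2:].sum()), 3))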
