Promit's Ventspace

September 11, 2012

Cinematic Color

Filed under: Graphics,Photography — Promit @ 3:32 pm

I chose not to go to SIGGRAPH 2012, and I’m starting to wish I had. Via Julien Guertault, I found the course on Cinematic Color.

I’ve mentioned this in the past: I believe that as a graphics programmer, a thorough understanding of photography and cinematography through the entire production pipeline is necessary. Apparently I am not alone in this regard. Interesting corollary: should cinematographers understand computer graphics? Hmm.

July 16, 2012

Review: Olympus OM-D E-M5

Filed under: Photography — Promit @ 12:42 am

I’ve mentioned in the past that as an extension of my game development work, I began to explore photography. I’m a big fan of the Micro Four Thirds mirrorless cameras, and I recently purchased the newest iteration in the line: the Olympus OM-D E-M5. I thought I’d go ahead and do a review, since a number of people have asked me about the camera. To make a long story short, Olympus has finally gotten serious and this camera is a force to be reckoned with. Much more importantly, the E-M5 is a lot of fun to shoot with. I enjoy photography much more with it than anything I’ve ever used, and for an enthusiast that’s crucial.

This review is not meant to be all encompassing; see DPReview for that. Rather, I want to focus on the things that I feel are often lost in normal reviews, and provide some introduction to these cameras in general.


I’ll start with a quick prelude for those of you who aren’t in the know, since this isn’t really a photography blog (not yet, anyway). Until recently, there were two kinds of cameras that the mainstream public knew and cared about: digital compacts and digital SLRs. A compact is an integrated package with the sensor and lens all together. They’re typically priced anywhere from $50 to $500, and they’re a one-shot purchase: camera, done. They almost always use small, low-quality image sensors, on the scale of 5-8mm diagonal. This helps keep the size of the optics down and the overall package small. They also tend to have low-end processing hardware and limited control over the result. Compacts also eschew viewfinders, running their sensor in video mode to display an image preview on the LCD. Most people take this functionality for granted. Some have electronic viewfinders: tiny LCD screens of varying quality and size, magnified through an eyepiece lens.

On the other end, we have digital single lens reflex (DSLR) cameras. The SLR design became big in the sixties as a compact film camera that allowed a photographer to see precisely what the film was going to see via a mirror/prism arrangement, and set exposure parameters based on that information. Modern “pro” cameras are identical in most ways to the film cameras of the late nineties, with the film replaced by a digital sensor and guts. DSLRs use large sensors (21mm-55mm diagonal), and feature large interchangeable lenses. They also have high end processors on board, lots of memory, and sophisticated controls. Until a few years ago, DSLRs could not run their sensors in video mode; they were unable to record videos and unable to display a live feed on the LCD. This was a limitation of the sensor hardware, and using the optical mirrored viewfinder was the only way to preview your shot. Although modern DSLRs have overcome this limitation and now support “Live View”, they are not well suited to this mode of operation and it’s generally not how you’ll want to use the camera.

Mirrorless cameras split the difference. By designing a compact-type camera with a video-compatible sensor and interchangeable lenses, these cameras aim for a middle ground: far less bulk than a DSLR, while boasting far more powerful processing and far better images than any compact camera. The idea was really pioneered by a cooperative effort between Olympus and Panasonic called Micro Four Thirds. This shared format came to fruition in 2008, and sent a shockwave through the industry. Sony, Samsung, Fuji, Nikon, Pentax, and Canon have all stepped into the arena with their own competitors in this new class.

Micro Four Thirds

When the transition from film to digital happened, the vast majority of consumer equipment was designed for the 135 (35mm film) standard. Companies ran up against a problem: nobody knew how to create a digital sensor quite that large (“full frame”). Canon managed to produce one in 2002, the 1Ds, for $7,999. Nikon would not release one until the D3 in 2007, for $4,999. It was necessary to experiment with smaller sensor standards to produce consumer-priced digital cameras, and most settled on the APS-C size with a diagonal of about 28mm on a 3:2 aspect ratio, versus full frame’s 43mm. APS-C sensor cameras only see the middle of the image projected by a 35mm lens, cropping the image off into a narrower field of view. APS-C has a “crop factor” of around 1.5, meaning that film lenses are effectively 1.5x narrower than they would be on a full frame camera. Despite the common sensor and film formats, each manufacturer makes its own lens system, and those systems are for the most part incompatible with one another.
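The crop-factor arithmetic is simple enough to sketch in a few lines of Python. This is just the ratio described above; the full-frame diagonal is taken as 43.3mm (the precise figure for a 36×24mm frame, which the text rounds to 43mm):

```python
# Crop factor: ratio of the full-frame diagonal to the smaller sensor's diagonal.
FULL_FRAME_DIAGONAL_MM = 43.3  # 36x24mm frame

def crop_factor(sensor_diagonal_mm: float) -> float:
    return FULL_FRAME_DIAGONAL_MM / sensor_diagonal_mm

def equivalent_focal_length(focal_mm: float, sensor_diagonal_mm: float) -> float:
    """Full-frame focal length giving the same field of view on the smaller sensor."""
    return focal_mm * crop_factor(sensor_diagonal_mm)

print(round(crop_factor(28.0), 2))               # APS-C, ~28mm diagonal: prints 1.55
print(round(equivalent_focal_length(50, 28.0)))  # a 50mm film lens acts like ~77mm
```

Plugging in the Four Thirds diagonal of 21.6mm from the next paragraph gives the 2x crop factor quoted there.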

Olympus, meanwhile, decided to go with a smaller sensor format called Four Thirds, with a 4:3 aspect ratio and an image diagonal of about 21.6mm and a crop factor of 2x. (A 25mm lens is considered “normal”.) They did this to try and produce more compact DSLR cameras, similar to their old OM film SLRs. They tried to share this standard with several other manufacturers of cameras and lenses, but it never really caught on as a mainstream lineup. The Four Thirds options lagged their bigger competitors in performance and nobody really wanted a fairly big camera with fairly mediocre performance.

Micro Four Thirds (m4/3) was announced by Olympus and Panasonic in 2008 as a mirrorless interchangeable lens camera (MILC) line. The m4/3 cameras used the same sensor, but a mirrorless design to cut back dramatically on overall size. They leveraged tricks like digital image correction to reduce size further, and boasted the promise of high-quality video support. The first cameras were the Panasonic G1 and the Olympus E-P1. The bad news is that pricing was not particularly different from full-blown DSLRs, and the cameras were small but not pocket-small. Combined with a wide range of technical and performance limitations, the cameras basically sucked in terms of bang for the buck. The value was in the flexibility of size and interchangeable lenses, supposedly. I’ve been very fond of these cameras for a long time, but that was due to personal quirks: I hate viewfinders, and DSLRs are fairly awful at video.

Olympus OM-D E-M5

Never mind the ridiculous name; the Olympus OM-D is the real deal at long last. This is my fourth Olympus and my sixth m4/3 body. Olympus’ previous m4/3 cameras, the PEN series, were designed as compact cameras on steroids: plastic build, simplified interfaces, mediocre sensor performance, and in many cases mediocre autofocus. This new camera is the genesis of a semi-pro lineup and it has the spec sheet to match: magnesium build with full dust and splash proofing; an integrated high resolution electronic viewfinder (EVF); physical control dials; an accessory battery grip; and most welcome of all, a brand new 16 MP image sensor by Sony that is now able to compete with the DSLRs on even footing. Olympus has finally given us something that isn’t a toy, for $999 body-only.

The body’s available in silver or black. Olympus is going for a retro throwback here, and I find the silver to be a beautiful look that stands out from the crowd in a good way. The viewfinder hump is a bit goofy thanks to the physical size of the stabilization system and accessory port, but the body is extremely compact overall. The handling is good, but not great. Olympus continues an unfortunate affectation of minimal grips on their cameras. Handling with larger lenses is compromised as a result, and large hands won’t appreciate the form in general. The battery grip supposedly makes a dramatic difference, but at USD $300 it’s a steep price to pay. The strap lugs are also the stupid compact style, requiring keychain-type D-rings to actually attach a strap. Pointless inconvenience. The front of the viewfinder hump holds stereo mics with better-than-usual separation, and there’s a normal hotshoe on top. Olympus has included their accessory port here, which makes the viewfinder hump comically oversized for something that shouldn’t be necessary. But since there’s no mic input and no built-in flash, you’ll need the accessory port often to drive those accessories.

The buttons are tiny, because the screen takes up most of the tiny body’s space. They’re also squishy thanks to the weather sealing. Olympus’ buttons have continuously shrunk over the years, and the OM-D is really starting to test the limits. I don’t find it to be a problem, but this is getting ridiculous even for my small hands. The dials are very nice, and something about Olympus shutter buttons is just so much nicer than other cameras I’ve tried. So is the actual shutter noise, a nice subtle click that doesn’t carry. The camera has a trio of customizable function buttons, and completely arbitrary restrictions on which button can be set to which functions (underwater mode yes, bracketing mode no). Haphazard, half-baked customization is a theme that continues throughout the camera; the customization menu contains 87 different settings, some of which branch further off into sub-settings. For the most part it’s possible to set things up exactly as you want, and equally easy to screw them up in weird ways. Olympus offers a “Myset” system to save camera settings, but I don’t find it to be useful since the only way to get to them quickly is to assign a function button. I’d rather set the button to something useful, thanks.

The viewfinder is a large and beautiful 800×600 RGB LCD panel. It isn’t a color-sequential display like some manufacturers use (*ahem* Panasonic), and the contrast and brightness are way punchier and more pleasant than some competitors’ (*ahem* Panasonic). It’s also not quite up to the spec of the Sony NEX-7, sadly, but it is wonderful to use. There’s a built-in proximity detector to activate the EVF, and it works very smoothly. There’s no sensitivity adjustment, which can mean a lot of accidental switching, but as a nice extra touch you can toggle between the sensor and the active screen simply by holding down a button on the side of the viewfinder. The rear screen is a large tilting OLED panel with excellent color and brightness, albeit at a lower resolution than the EVF. Again not at the standard of Sony, but Panasonic should be taking notes. A flip-out swivel screen would’ve been nice, though.


Let’s start with that new sensor: it’s fantastic. The dated Panasonic chip used in previous Olympus bodies has been replaced with a state-of-the-art Sony unit. It is able to keep pace with the NEX-5N and 7, considered the standard-bearers of APS-C quality. ISO 3200 is clean enough to print, and I’m getting perfectly decent screen-resolution shots at ISO 12800 with RAW processing in Lightroom. Stunning. The Olympus JPEG engine has always been stellar, but it visibly disintegrates at 6400 and above — process RAW files yourself in high ISO situations. Dynamic range is traditionally a severe limitation of the m4/3 cameras, and the new Sony-supplied sensor seems to do a fantastic job. There’s also something about the Olympus color rendition, even in RAW, that I find extremely pleasant and far nicer than any manufacturer out there except maybe Fuji.

While working with the Olympus ISO 12800 files, I’m finding something unexpected: I don’t mind the noise. Don’t get me wrong, the shots have plenty of noise to go around and we’re still not getting quite as clear results as the best of the new APS-C sensors (though it is better than any APS-C sensor from a year or two ago). No, it’s not the amount of noise at play but the pattern, which has a very natural smooth feel to it after just a kiss of chroma noise reduction — Lightroom’s 25 default does just fine. It doesn’t interfere badly with the image at normal magnifications, and for many purposes I’m finding that I’m happy without going through the careful NR-sharpening balancing act that high-ISO shots typically require. This is something you won’t get from the test charts or DXO numbers. Camera sensors show noise in different patterns and types; many degrade into a color-splotched mess particularly in the shadows. This sensor degrades cleanly and elegantly into a film-like look that is easy to correct in post and easy to live with.

Here’s a secret that people don’t often mention: Olympus and Panasonic have stellar autofocus systems, better than what you get out of a typical midrange DSLR and kit lens. The other mirrorless systems, like Sony NEX, cannot compete. The basic entry level m4/3 kit lenses are basically able to match expensive supersonic drive DSLR lenses in speed, with dead silent video compatible internal focus mechanisms. Focus is also dead accurate, since it’s driven by the sensor and tolerances don’t matter. The only downside is that Olympus doesn’t offer resizable focus zones. The default is large enough to pick the wrong object to focus on, which can be an unpleasant surprise. The real bad news comes in with continuous or tracking autofocus, which basically don’t work even in 120hz high speed mode. In reality, single acquisition is so fast that you can often use it to replace continuous mode in DSLRs. For photos, the OM-D won’t be able to match the speed of a Sony SLT or pro DSLR with high end lenses. Kit lens users with DSLRs will discover that they were lied to about what m4/3 focus performance is like, but sports shooters relying on AF-C will be sorely disappointed. Caveat emptor.
For those interested, Roger Cicala wrote about DSLR AF accuracy.

Did I mention it can fire at 9 fps? Because it can, with a fat buffer that will go for 14 JPEG+RAW shots and full stabilization. I’ve seen it take 20 JPEGs on a fast card before running out of steam. (Compare to the GH2 at 5fps and a buffer of about 7 shots.) Continuous AF only becomes available at 4 fps, although it probably won’t actually work at that speed. Want to shoot fast action? Dial in your focus and wait for the target to come to you. At 9fps, your odds are very good, about as good as it gets for a consumer level camera. Only the Sony SLT line will go ever so slightly quicker, if you really need even more.

Olympus offers a sensor-shift stabilization system integrated into the body. The stabilizer can stabilize any lens, including adapted lenses from other systems. Sony Alpha and Pentax DSLRs offer similar systems. These systems traditionally suffer several flaws: they can only correct for translational motion, correction is not always as good as lens-based optical systems, and they have a tendency to overheat, which makes them useless for long exposures or video use. Olympus has conquered all of these problems with a new 5-axis system that electromagnetically floats the sensor full time, compensating for rotational and translational motion during video recording as well as in the viewfinder. The sheer ability of the 5-axis system to lock the sensor down defies belief. Hand-held long exposures are possible, and video gains a silky smoothness that can trick a viewer into thinking a rig was used. Here’s a video from Engadget showing the stabilizer demo unit:

Of course Olympus giveth and Olympus taketh away; the camera refuses to stabilize non-electronic lenses in video, for no readily apparent reason. Let this be your first hint that Olympus does not understand high end video. UPDATE: Olympus has added stabilization for legacy/adapted lenses in firmware 1.5.

Battery life, however, is not good. Buy a spare battery or two…or more, if you’re a heavy shooter. Chinese generics from eBay work just fine although they seem to have slightly shorter lifespans than the OEM version. In general I find it’s best to keep at least two batteries for a high end camera, but this thing works through them fairly quickly. It’s not so bad if you only use the viewfinder and leave the main LCD off, but your battery times are going to be much closer to a compact than an SLR. Olympus ships a dedicated charger with an awkward cord; no USB charging here, so don’t lose the charger or forget it on a trip. You won’t find a spare. Might want to pick one up with those extra batteries.

M.Zuiko 12-50mm Kit Lens

The OM-D is available with a new weather-sealed kit lens, the 12-50mm. It’s also available with the old 14-42mm kit lens, but that lens isn’t sealed and doesn’t have the range. It IS a lot smaller, which brings us to the real problem with the 12-50mm: it should never have been made. It’s not a bad lens: sharp, weather sealed, wide range (24-100mm equivalent), mechanical AND power zoom modes for photo and video, plus a macro mode that gets to about 0.75x. The trouble is that it’s unreasonably large, unreasonably slow (f/6.3 at the long end), unreasonably expensive ($300 in kit, $500 standalone), and solves problems no one ever asked to be solved. It was a total waste of Olympus engineering time. A $1,500 kit with a new native conversion of the 12-60mm or 14-54mm would have been absolutely incredible to offer. As it is, the kit is useful but mediocre.

Taking the lens on its own merits, there are some positives. Optically it is excellent for a kit-zoom-type lens, with the ultra-sharp look that is standard for Zuikos, even wide open. The lens is not only internal focus but also internal zoom, which brings a welcome subtlety to candid photography work versus the telescoping monstrosities most people are used to. The zoom ring slides forwards and backwards to toggle lens modes; it can be bumped accidentally, but works well overall. The electronic zoom is legitimately useful for video. The mechanical zoom is a bit odd though, as you can hear and feel the internal zoom motor being dragged along, and there’s a weak hard stop that allows the ring to continue spinning. A macro button allows the lens to be locked to its maximum magnification and a limited focus range. Macro mode is very sharp and gets in very close.

If it was really necessary to produce a slow consumer kit lens, I would’ve preferred that Olympus spend the time on a more compact weather-sealed zoom. But a semi-pro camera deserves a semi-pro lens, and this isn’t it. The range is useful, but the lens is slower across the range than the normal Panasonic and Olympus 14-42mm lenses by about a third of a stop. What’s the point of having a wonderful new sensor if the image is sunk into noise because the lens is wide open at f/6.3? That’s a cruel joke. A 12-60 f/2.8-4 kit could’ve shaken up the entire industry.

System Lenses

The Nikon F mount for their SLR cameras was introduced to the world in 1959, and for the most part you have always been able to mount your Nikon lenses on newer cameras. The Canon EF mount was introduced in 1987 and again, lenses from then on have always been fully functional. Buying into a popular SLR system has always meant an enormous range of available lenses created over the course of decades. Even now, most of the Nikon and Canon lenses (including the entire L series) are designed for full-frame rather than APS-C formats and are often awkward on crop formats. Micro Four Thirds was created in 2008, and the other mirrorless systems were introduced even later: Sony’s and Samsung’s entries appeared in 2010, Nikon’s in late 2011, and Pentax’s, Fuji’s, and Canon’s in 2012. Lens choice and pricing are a soft spot for all of these lines. (Most can adapt SLR lenses, with varying degrees of success.) Micro Four Thirds has a few unique advantages over the other manufacturers, though.

Not only is m4/3 the oldest system, but it also has two manufacturers committed to producing both bodies and lenses that are all mostly cross compatible. By building largely complementary sets of lenses, the system has gained a large set of lenses in a very short time. The system isn’t “complete” yet, in that there are still major holes which need to be filled for general purpose use. It is however far, far ahead of the competitors. They’ve also focused on producing very good quality lenses at consumer prices; if you want something dirt cheap or ultra high end, you’re likely to be disappointed. (Panasonic is just rolling out their first constant aperture pro zooms this year, and there are barely any sub-$300 lenses.) On the other hand, it’s almost impossible to make a bad choice with the lenses that are available. All of them are optically stellar, even the relatively poor and very dated Olympus 17mm pancake. Lenses like the Panasonic 20mm f/1.7 pancake, Panasonic-Leica 25mm f/1.4, and Olympus 45mm f/1.8 are considered practically classic.

One of the stated goals of mirrorless was to decrease the size not only of the camera bodies, but also of the lenses. m4/3 accomplishes this in three ways. First, the very short flange distance (the distance between the sensor and the lens mount) allows lenses to be designed more simply and mounted much closer. Second, the smaller and closer-to-square Four Thirds format sensor allows for smaller image circles that are used more efficiently than the traditional 3:2 film format. Third, m4/3 relies heavily on digital corrections of lens issues like distortion and chromatic aberration, which would previously have required heavy and expensive glass elements to fix. The 20mm pancake is actually one of the sharpest lenses for the system, an inch deep and under $360.

All together, there are around 25 current electronically enabled lenses for the system, with a handful of manual-focus native lenses as well. Ultra wide angle? The 7-14mm or 9-18mm. Wide angle? There’s the 12/2.0 or 14/2.5. Fisheye? Two of them. Normal lenses? Pick from the 17/2.8, 19/2.8, 20/1.7, and 25/1.4. If raw aperture is your thing and price is no object, Voigtlander will sell you a 17/0.95 and a 25/0.95. Macro comes from the 12-50, the 45/2.8, or the upcoming 60mm. The 45/1.8 and 75/1.8 fulfill portrait needs. The 14-150 and 14-140 OIS are fantastic all-in-one superzooms. And I’m not even going to start naming all the telephoto options, but the 100-300 gives most people as much reach as their heart desires.


With the advent of digital photography, camera companies (Nikon, Olympus, Fuji) collided with consumer electronics companies (Sony, Panasonic, Canon) in producing cameras. The consumer electronics guys make a variety of fantastic video cameras. The camera companies still seem somewhat baffled about what exactly video is for and what video people want. Fuji in particular makes the best film lenses on the planet, but cannot understand what to do with video recording. The previous m4/3 flagship camera was the Panasonic GH2, and it’s such an eminently capable camera that in the latest Zacuto shootout, it was frequently mistaken for the RED Epic and has proven to be one of the most popular cameras in that blind test. The OM-D will not be showing up in any such tests.

Let’s start with features: it writes h.264 files in a QuickTime MOV container to the same directory as photos. Most cameras emulate a very confusing Blu-ray disc file structure on the card so that you can burn your card directly to a Blu-ray once you’re done filming. This is exactly the sort of moronic “feature” a consumer electronics company would come up with, and which four people in the world have ever actually used. Olympus’ version is a welcome change. It will AF during video, and rolling shutter is decently well controlled. Olympus also offers the amazing 5-axis stabilization with electronic lenses, and the value of that stabilization really cannot be overstated. It is absolutely stellar. Of course, you can’t stabilize the lenses you’d actually want to use for filming, like the Voigtlander f/0.95 primes.

Trouble is, that’s where the features end. Output is 1080i/60 or 720p/60, both of which are derived from a 30hz sensor readout. Bit-rates suck (20 Mbps max). No 24p, no 25p, no 50/60p. It does 30hz. The camera’s h.264 codec isn’t particularly good, as it tends to degrade into macroblocking when pushed too hard. Unlike the beautiful highlight roll-off in still photos, videos get a nasty burnt look on anything that gets too bright. You can buy an accessory for a microphone input, but there’s really no point since the camera doesn’t have volume control. The microphone has been improved significantly over the old PENs at least, which used to clip quickly. The new mic is completely deaf to bass though. Single clips are limited to 29:59 thanks to stupid laws in the EU, so long-form interview/lecture recordings are out. You can set aperture/shutter/ISO/exposure manually for video, but only before you start recording. I suspect that this is strictly a set of software problems, as the underlying hardware is extremely capable. I’m not the only one who thinks a lot more is possible on the OM-D platform. More than that though, Olympus just doesn’t understand what people are looking for from video.

On second thought, allow me to rephrase that a bit: Olympus doesn’t understand professional video. The OM-D is an extraordinary camera for amateur/home video thanks to the excellent stabilizer. It’s miles ahead of any DSLR for that kind of video work, including the inexplicably popular Canons. Bolt on the Olympus 14-150mm ($350 refurb) for a video-friendly 11x zoom, and for casual use the OM-D delivers very good results. Film buffs will need to look elsewhere, probably to the very competent (and now much cheaper) Panasonic GH2.


There are a lot of good mirrorless and DSLR cameras out there. Sony NEX will take absolutely fantastic images with the right lenses. The high-end Panasonic G cameras share many of the same advantages at significantly better price points. A high end consumer DSLR (D7000, A65, 60D, K-5) can be had for the same money, with much wider choices in lenses across the range. So why are photographers like Damian McGillicuddy, Steve Huff, and Andy Hendriksen going crazy over this new camera?

Shooting with the OM-D is effortless. It’s compact enough to carry comfortably; not pocketable, but much more convenient than a DSLR. Use the viewfinder or LCD as you like, hit the button, and the sensor is able to handle almost anything you throw at it. Twin control dials make it easy to tweak settings quickly. The JPEG processing is good enough that I almost never sit down with the RAW files from everyday shooting situations. The stabilizer solves most shutter speed problems and gives video a professional feel. The tough build and weather sealing inspire confidence, while still being lightweight. It’s expensive, but there’s so much to like in this package and so little to complain about that it is worth it.

The bottom line? This camera is much more fun than its DSLR or mirrorless competitors.

June 4, 2012

Digital Color Part 1

Filed under: Graphics,Photography — Promit @ 3:54 pm

What do you know about how computers read, store, process, and display colors? If your answer is R, G, and B color channels in the range of [0, 255], go hang your head in shame — and no credit for alpha/transparency channels. If you said spanning [0, 1], that’s very slightly better. More points if you mentioned HSV, YUV, YCbCr, etc. Fewer points if you didn’t mention sRGB. Extra credit if you thought about gamma curves or color temperatures, and a highly approving nod if the word “gamut” crossed your mind. Yes, today we’re going to be talking about color spaces, gamuts, color management, bit depth, and all the fun stuff that defines digital color. I’ve been doing a lot of photography work recently, and it’s brought a number of things to the forefront which I did not previously understand and of which I haven’t seen concise, centralized discussion. Most of this applies equally well to digital image capture (cameras, scanners), digital image rendering (offline or real time), and digital image reproduction (monitors, printers). Understand in advance that I’m trying to distill a book’s worth of theory into a blog post. This will be fast, loose, and cursory at best, but hopefully it’s enough of a glimpse to be enlightening. I’m multi-targeting this post at software engineers (game developers mainly), artists, and photographers. We’ll see how that goes.

I decided to write this post after an incident where a self-portrait turned my skin distinctly pink when I uploaded it to Facebook. (I’m brown-skinned.) I went into significant depth trying to understand why, and found out that there is a lot of complexity in digital color that is not well explained in a unified format. This was originally intended to be a single monster post, but it’s just too much material to pack into one blog entry. I think I could write a small eBook if I really went into detail. In this first part, I’ll just be talking about representation of color tones, independent of brightness. Part 2 will talk about brightness, luminance, gamma, dynamic range, and so on. Part 3 will discuss the details of how devices reproduce colors and how we can work with multiple devices correctly.

Colors in Real Life

I do not want to talk about real-life colors. That is an enormously complicated (and fairly awesome) topic that integrates physics, optics, biology, neurology, and cognitive studies. I just want to cover enough of the basics to allow us to discuss digital colors. Real life has a lot of colors. An infinite number of colors. Humans can perceive a limited subset of these colors, those which exist in the visible spectrum. Each color represents a spectrum of light, which our eyes receive as an R/G/B triplet of information (for the purposes of this discussion). There are many distinct spectra that we cannot differentiate, and these appear as the same color even though they’re not. This ambiguity shows up very strongly in people suffering from any type of color blindness. For a normal person, we can describe the total range of colors perceived and graph it on what’s known as a chromaticity diagram:
CIE 1931 color diagram
This is called the CIE 1931 color space. The full range of color perception forms a 3D volume: the X and Y axes shown above describe the chromaticity of a color, and the Z axis is brightness. This diagram represents a single 2D slice through that volume at 50% brightness. As an added bonus, this diagram will look different in different browsers if you have any type of calibrated or high-end monitor. That’s a hint about the rabbit hole we’re about to enter. The colors on the diagram are merely an illustrative aid, not actual colors. We’ll talk more about brightness in Part 2. For now, just assume that the graph describes the total range of colors we can perceive, and that values outside the colored area may as well not exist. This chart will be our basis for the discussion of digital color.
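To make the xy axes concrete, here’s a small Python sketch that maps a linear-light sRGB triplet onto the CIE 1931 xy plane. The matrix is the published sRGB/D65 to XYZ conversion from the sRGB standard, not something derived in this post, and note that it expects linear values, not the gamma-encoded ones we’ll meet in Part 2:

```python
def srgb_linear_to_xy(r: float, g: float, b: float) -> tuple:
    """Map a linear-light sRGB triplet to CIE 1931 xy chromaticity."""
    # Linear sRGB (D65) to CIE XYZ, standard IEC 61966-2-1 matrix.
    x_big = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y_big = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z_big = 0.0193 * r + 0.1192 * g + 0.9505 * b
    total = x_big + y_big + z_big
    # Chromaticity discards brightness: normalize so the triplet sums to 1,
    # then only x and y are needed (z = 1 - x - y).
    return (x_big / total, y_big / total)

# Pure sRGB red lands on the red corner of the sRGB gamut triangle,
# at roughly (0.64, 0.33) on the diagram above.
print(srgb_linear_to_xy(1.0, 0.0, 0.0))
```

Plotting the three pure primaries this way traces out the triangle of colors an sRGB device can reproduce, which is a strict subset of the diagram’s full horseshoe.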

Colors on Digital Devices

You might know that most computer monitors are able to express about 16.8 million discrete color values in 24 bits per pixel. That sounds like a lot of colors, but it isn’t. It translates to 256 discrete values from 8 bits for each of the red, green, and blue color channels, and 256 total levels of luminance covering 8 stops (one stop represents a doubling of light intensity). That means that for any given luminance level, you can describe 65536 different colors. So already a lot of our possible values have been spent just describing luminance, leaving us very few to encode the actual shade of color. It’s not even adequate to express real luminance values; a typical high-end digital camera can capture 12-14 stops in a single scene, and human perception can span 20+ stops. In short, we’d need nearly all of our 24 bits just to express all of the levels of luminance that humans can perceive, let alone the color range.
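The arithmetic above can be sketched directly (following the post’s simplifying assumptions: one luminance level per discrete value, and the remaining bits spent on the shade):

```python
import math

# 24-bit color: 8 bits per channel for R, G, and B.
bits_per_channel = 8
levels_per_channel = 2 ** bits_per_channel           # 256 values per channel
total_colors = levels_per_channel ** 3               # 16,777,216: "16.8 million"

# 256 luminance levels span 8 stops, since each stop doubles intensity.
stops = math.log2(levels_per_channel)                # 8.0

# Values left to describe the shade at any single luminance level.
shades_per_luminance = total_colors // levels_per_channel  # 65536

print(total_colors, stops, shades_per_luminance)
```

Run the same math backwards and the scale of the problem is clear: covering 20+ stops at one value per doubling would already consume 20+ bits of a linear encoding before any chromaticity is stored, which is the squeeze the paragraph describes.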

Because we’re talking about digital color, we have another problem — the media we work with cannot hope to cover the totality of human vision. Cameras, scanners, monitors, and printers have limitations in the colors they can understand and reproduce. More infuriatingly, each device has its own independent gamut of colors that does not match up with any other device. You’ve probably seen a printer vomit out colors that don’t match your screen. In many cases, the printer can never match your screen. When you take a photo and import it to a computer, you get two color shifts, first in the camera’s capture and processing and second in your computer’s processing and display. Send the image to someone else and it shifts again. Is the color on screen really related to the color in real life anymore? Take a photo of a leaf, then bring it inside and put it up against your computer screen. Go ahead and scan it too. Odds are the scan, photo, and leaf are all radically different colors when you see them side by side.

In Part 3 of this series, I’ll go into detail about how the different categories of digital devices detect or reproduce colors at a hardware level. These engineering details have a very real effect on our color workflow, and will be important in understanding how to compromise effectively across different hardware. There’s no point getting an image perfect on a computer screen if its final destination is print. Reconciling the differences in hardware and producing your desired colors anywhere will be our overarching goal for this series.

One last footnote: most digital cameras output JPEG images by default. These images are not relevant to a serious discussion of color, as they tend to interpret the digital sensor’s data rather creatively. Instead we will be talking about the RAW format data that higher quality digital cameras can optionally produce directly from the image sensor. These files are usually proprietary but there are many common software packages that can read them, and produce more even-handed representations of the sensor data. These color-accurate conversions will be the focus for the photography aspects of this discussion. The same applies to video data, with the caveat that consumers don’t have cameras that can produce RAW video data files at all.


With that series of observations, it’s time to get a little bit more formal and look at what is going on. Let’s start with some proper definitions of terms I’ve been throwing around:

  • Color space: This is a mathematically defined subset of colors. It’s a theoretical construct, independent of any digital device or bit depth.
  • Color gamut: This is a description of the colors a given device can actually produce, and will vary per device. The gamut can be adjusted on many devices.
  • Color calibration: The process of matching a device’s gamut to a desired color space.
  • Luminance: An absolute measure of the intensity of light, independent of any device or perception.
  • Brightness: The subjective perception of luminance.
  • Stop: A difference of one exposure value. This is a measure of light, and an increase of one stop represents a doubling of the photon flux (roughly, density).
  • Chromaticity: An objective, physical description of a color independent of any device or perception.
  • Bit depth/bits per pixel: The number of bits we use to describe the value of an individual image pixel. With n bits, we can express 2^n different values.
  • RGB: The general idea of encoding a pixel value using a set of three values that describe the strengths of red, green, and blue to be combined. This is the native operating mode of nearly all digital devices.
  • White point/balance: The physical definition of the color considered “white”, independent of luminance.
  • Color temperature: A thermodynamics-derived value expressing a pure color of light that would be emitted by a body at that temperature. These are the colors we associate with stars. Trust me, you don’t want more detail.

That’s just to start us off; we’ll meet more terms along the way.

Spaces and Gamuts

Let’s start with computer monitors, as they are probably the easiest to understand. Above, I showed you the CIE 1931 color space describing the totality of human color perception. Monitors cannot express anything close to that range. Instead, monitors have traditionally tried to match an alternate, much smaller space known as sRGB. If you graph sRGB, it forms a triangle on top of CIE 1931 like this:

sRGB was created in 1996 by Microsoft and HP to match the capabilities of existing CRT displays while giving them a formal mathematical structure. When people talk about RGB colors, they are almost certainly referring to values within the sRGB color space, where the three values act as weights in a weighted average of the three corner points of the sRGB triangle. Thus for any RGB value, you can pinpoint one location on the graph which is the desired color. A perfectly calibrated monitor will generate exactly that color when given the matching RGB value. In reality, the monitor’s triangle tends to be slightly misaligned. In any case, this is the color range that nearly the entire world uses for nearly everything. Most of the software on your computer doesn’t understand that anything else exists.
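
To make the weighted-average idea concrete, here’s a quick sketch (my own illustration in Python, using the standard published sRGB-to-XYZ matrix for a D65 white point) that maps an 8-bit sRGB triplet onto the chromaticity diagram:

```python
def srgb_to_xy(r8, g8, b8):
    """Map an 8-bit sRGB triplet to its CIE 1931 xy chromaticity."""
    # Undo the sRGB transfer curve to get back to linear light.
    def linearize(c8):
        c = c8 / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = linearize(r8), linearize(g8), linearize(b8)

    # Standard sRGB (D65) RGB -> XYZ matrix.
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    s = X + Y + Z
    return (X / s, Y / s)  # chromaticity only; brightness is discarded

# Pure white lands on the D65 white point, roughly (0.3127, 0.3290);
# pure red lands on the sRGB red primary, roughly (0.64, 0.33).
print(srgb_to_xy(255, 255, 255))
print(srgb_to_xy(255, 0, 0))
```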

It should be blatantly obvious at this point that sRGB is very small. Compared to our full perceptual range, it misses an awful lot and you can begin to see why our leaf doesn’t match anything the monitor can display. There are a number of other color spaces out there:

AdobeRGB in particular has gained significant popularity, as a number of cameras and monitors support it and it is nearly identical to the NTSC color space standard for televisions. When we’re talking about monitors, we typically express the gamut as a percentage coverage of the NTSC space. The sRGB space represents a 70% gamut; a modern high end Dell UltraSharp will do about 110%. These monitors still take those same 24 color bits, but spread them over a wider area (actually, volume). These high end monitors are called wide gamut displays and they come with a very nasty catch.

Color values appear completely different on wide gamut displays. Those [0, 255] values for each channel represent different points in the color spectrum, spread farther apart. A wide gamut is a double-edged sword because it represents a larger, more saturated space with less detail within the space. sRGB can describe smaller changes in color than AdobeRGB, but AdobeRGB can express more extreme colors than sRGB. This leads to nasty, unpleasant accidents if you’re not careful. Here’s a screenshot of two applications displaying exactly the same image:

Notice the massive shift in the red? The application on the left is MS Paint; the application on the right is Adobe Lightroom. Lightroom is a photo post-processing tool which is fully color-aware. The pixels of this image are stored in the sRGB color space, but my monitor is not in sRGB. Windows knows the model of my monitor and has downloaded a color profile, which tells it the attributes of my monitor’s color rendition. Lightroom knows this, and alters the image using the color profile to look correct on my monitor. Paint, however, has no clue about color profiles, and simply forwards the pixel data blindly to the monitor. The monitor’s wider color space causes a massive boost in saturation, changing my neighbor’s tastefully red house into an eye-searing abomination.

This can happen in reverse too. If you’ve got a nice image in AdobeRGB, it will look washed out and generally bad in sRGB mode. It will look even worse if you don’t print it correctly. Even if you do interpret it correctly, there are problems. AdobeRGB is a larger space than sRGB, so colors you can see on a wide gamut monitor simply won’t exist for an sRGB monitor and color saturation will get squished. Because so few people have wide gamut monitors, and because print gamuts are so much smaller, working on a wide gamut AdobeRGB display can be a dicey proposition. Making use of those extra colors may not pay dividends, and you may wind up with an image that cannot even be displayed correctly for your intended audience. As a result, it’s extremely important to understand which applications are color managed, which color space you’re working in, and what will happen when you produce the final image for other people to view. I call applications that use color management profiles color-aware, and others color-stupid (not a technical term).
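
Here’s a toy sketch (my own, not any application’s actual code) of what a color-stupid application effectively does: the pixel values don’t change, but the monitor’s primaries do. Feeding “pure green” through the standard published primaries matrices for sRGB and Adobe RGB (1998) shows the chromaticity shift; gamma differences are ignored for simplicity:

```python
# Standard RGB -> XYZ (D65) matrices for each space.
SRGB = [(0.4124, 0.3576, 0.1805),
        (0.2126, 0.7152, 0.0722),
        (0.0193, 0.1192, 0.9505)]

ADOBE = [(0.5767, 0.1856, 0.1882),
         (0.2973, 0.6274, 0.0753),
         (0.0270, 0.0707, 0.9911)]

def xy(matrix, r, g, b):
    """Chromaticity of a linear RGB triplet under the given primaries."""
    X, Y, Z = [row[0] * r + row[1] * g + row[2] * b for row in matrix]
    s = X + Y + Z
    return (round(X / s, 3), round(Y / s, 3))

print(xy(SRGB, 0, 1, 0))   # intended green: about (0.30, 0.60)
print(xy(ADOBE, 0, 1, 0))  # misdisplayed green: about (0.21, 0.71), oversaturated
```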

Color Aware Software (on Windows 7)

Mac is traditionally much better about color management than Windows, due to its long graphic design history. Windows 7 does have full color management support, but following the tradition of Windows, most applications blithely ignore it. The first step is making sure you have a full color profile for your monitor. I won’t provide instructions on that here; it is usually automatic or derived from color calibration, which we’ll discuss shortly. The second, somewhat more difficult step is making sure that your applications are all color-aware. On Windows 7, this is the situation:

  • Windows Explorer: Fully color aware. Your thumbnails are correct.
  • Windows Photo Viewer: Fully color aware. This is the default image preview tool, so when you’re previewing images all is well.
  • MS Office 2010: Fully color-aware, pleasantly enough.
  • MS Paint: Completely color-stupid.
  • Internet Explorer 9: Aware of image profiles, but ignores monitor profiles and blindly outputs everything as sRGB. Your IE colors are all rendered wrong on a wide gamut display. This despite the fact that IE specifically advertises color-awareness.
  • Mozilla Firefox: Fully color aware, but images that don’t specify a profile explicitly are assumed to match the monitor. You probably want FF to assume they’re sRGB, which is a hidden setting in about:config. Change gfx.color_management.mode to 2.
  • Google Chrome: Completely color-stupid.
  • Google Picasa: Fully color-aware, but not by default. Enable it in the View menu of Picasa and both the organizer tool and the preview tool become fully aware. You want to do this.
  • Adobe Anything: Fully color-aware, and pretty much the standard for color management — EXCEPT PREMIERE.
  • Corel Paintshop Pro: Fully color-aware, but glitchy for no apparent reason in the usual Corel way.
  • Blender: Almost completely color-stupid.
  • The GIMP: Fully color-aware, but ignores the system settings by default. Go into Edit->Preferences, Color Management tab, and check “Try to use the system monitor profile”. This is an important step if you’re using GIMP on a wide gamut or calibrated monitor.
  • Visual Studio: Color-stupid, which is disappointing but not surprising.
  • Video players: Blatant color-stupidity across the board. WMP, Quicktime, VLC, and MPC all failed my test.
  • Video games/3D rendering: Hah, not a chance. All color-stupid. Don’t count on 3D modeling tools to be color-aware in 3D mode either. The entire Autodesk suite of tools (3DS Max, Maya, Softimage, Mudbox) is incorrect in this respect.

Given Mac’s longer legacy in graphic design, it is probably safe to assume that all image applications are color-aware. I know Chrome and Safari both handle colors correctly on Mac, for example. I have not yet tested video or 3D on Mac with a wide gamut display, but I suspect that they will not handle colors correctly.

Bit Depth

We’ve covered the idea that colors are expressed in a color space. Mathematically, a color space is a polygon on the chromaticity diagram; we’ll assume it’s always a triangle. A color represents a point inside this triangle, and any color can be represented as a weighted average of the three points of the triangle. These weighted averages form what we commonly refer to as the RGB value. In its purest form, the RGB value is a vector of three weights in the interval [0, 1]. We traditionally also have an intensity value which gives the overall color luminance. In practical terms, we store the colors differently; the RGB value that most people are familiar with expresses the intensity of each of the red, green, and blue color values (called channels). These are ratios between 0 (black) and 1 (fully saturated single color), and together they describe both a color and an intensity for that color. This is not the only representation; many televisions use YCbCr, which stores the luminance Y and two of the three color weights, Cb and Cr. You can compute the third color weight quite easily, and so these different representations (and many others, like HSV) are all basically equivalent. Hardware devices natively work with RGB intensities though, so that’s the color representation we will stick to.
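
As a sketch of how the third weight comes back for free, here are the BT.601 full-range YCbCr equations (one common variant; the coefficients differ between standards):

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range YCbCr from RGB values in [0, 1]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (luma)
    cb = 0.564 * (b - y)                     # blue-difference chroma
    cr = 0.713 * (r - y)                     # red-difference chroma
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Recover RGB; green falls out of the luma equation."""
    r = y + cr / 0.713
    b = y + cb / 0.564
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # the "third weight" for free
    return r, g, b
```

The round trip is lossless (up to floating point error), which is why these representations are interchangeable.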

Because computers don’t really like working with decimal numbers, we usually transform the [0, 1] range for the RGB channels into a range that we can describe using only integers. Most people are familiar with the [0, 255] range seen in many art programs such as Photoshop. This representation assigns an 8 bit integer to each color channel, which can store up to 256 values. With three channels of 256 values each, we have a total of 256³ colors, 16,777,216 in all. Computers have used this specification for many years, calling it TrueColor, 24 bit color, millions of colors, or something along those lines. I’ve already mentioned that this is a very limiting space of colors in many ways. It’s often adequate for our final images, but we really want much better color fidelity, especially in the process of editing or rendering our images. Otherwise, every image adjustment will introduce rounding errors that make our colors subtly drift, eventually causing significant damage to color accuracy.
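
A quick illustration of that drift (my own sketch, not modeled on any particular editing tool): apply a gamma adjustment and then undo it, once through an 8-bit intermediate and once in floating point. The 8-bit path permanently merges many dark values together.

```python
def roundtrip_8bit(v):
    """Darken with gamma 2.2, quantize to 8 bits, then undo it."""
    dark = round(((v / 255) ** 2.2) * 255)           # adjust, then quantize
    return round(((dark / 255) ** (1 / 2.2)) * 255)  # undo, quantize again

def roundtrip_float(v):
    """Same adjustment, but the intermediate stays in floating point."""
    dark = (v / 255) ** 2.2
    return round((dark ** (1 / 2.2)) * 255)

survivors_8bit = len({roundtrip_8bit(v) for v in range(256)})
survivors_float = len({roundtrip_float(v) for v in range(256)})
print(survivors_8bit, survivors_float)  # the 8-bit path loses values; float keeps all 256
```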

If you’ve worked with digital camera data, you probably know that most cameras do not use only 8 bits per color channel. It’s typical for high end cameras to use 12 or even 14 bits for their internal data, yielding raw image files of 36 or 42 bits per pixel. Modern computer graphics applications and games use 16 or even 32 bits per color channel, totaling 48 or 96 bits of color information per pixel. Even though that level of detail is not necessary for the final image, it is important that we store the images as accurately as possible while working on them to avoid losing data before we are ready.

This problem extends to monitors, too. The vast majority of LCD monitors on the market only have 6 — that’s six — bits per color channel, and use various tricks to display the missing colors. (Yes, even many high end IPS type screens.) For many years, this meant that doing serious imaging work on LCDs was out of the question; you either used an older CRT or a very expensive design grade LCD. Nowadays, the color reproduction on quality 6 bit monitors like my Dell UltraSharp U2311H is excellent, and I don’t have any qualms in recommending one of these monitors for serious graphics work. I’ve compared the output side by side to my real 8 bit monitors and there is a difference, but it is minute and only visible in a direct comparison or test charts.

However, there is another consideration. I hinted earlier that wide gamut can hurt color accuracy. When using a wide gamut monitor, those color bits are stretched over a wider range of colors than normal. Because the bit depth hasn’t changed, we can no longer represent as many colors within the smaller sRGB triangle, and sRGB images will have some of their colors “crushed” when processed by the monitor’s color profile in a color-aware application. In order to combat this, high end modern monitors like the Dell U2711H actually process and display colors at 10 bits per channel, 30 bits total. 30, 36, and 48 bit color representations are known as Deep Color and they allow the monitor to be significantly more precise in its color rendition, even if the physical panel is still limited to 8 bits per color. It also allows more precise color calibration. If your monitor and graphics card support it, applications like Photoshop can take advantage of deep color to display extremely accurate wide gamut colors. And that brings me to an unfortunate caveat.

UPDATE: I previously claimed that Radeons could output 30 bit Deep Color. This appears not to be the case; more to come. The paragraph below has been revised.
Only AMD FirePro and NVIDIA Quadro chips support deep color, and only under Windows. Intel chips do not have deep color support at all. NVIDIA GeForce and AMD Radeon chips have the necessary hardware for 30 bit output, but the drivers do not support it. Mac OSX, up to and including 10.7 Lion, cannot do 30 bit under any situation no matter what hardware you have. This is despite the fact that both AMD and NVIDIA explicitly advertise 30-bit support in a number of these cases.

Color Calibration

Color calibration is the process of aligning the gamut of a display to a desired color space. This applies both to devices that capture images (cameras, scanners) and devices that generate them (monitors, printers). These devices frequently have adjustable settings that will alter their color gamut, but those controls are not usually adequate to match a color space. In the case of computer monitors, calibration actually refers to two discrete steps. The first step, calibration, corrects the monitor’s basic settings (brightness, contrast, hardware color settings) and graphics card settings to optimal values. The second step, profiling, measures the error between the gamut and the space, and encodes how to convert different color spaces to the actual gamut of the display as a software calibration profile. Manufacturers provide a default profile that describes the monitor, but calibration corrects the settings for your specific environment and screen. Monitor gamuts can shift over time, and calibration depends on the ambient conditions as well. Thus for truly accurate work, it is necessary to calibrate the monitor periodically, rather than set-and-forget.

Within the graphics card or monitor, there is a look-up table (LUT) that is used to pick the hardware output for a certain input. The LUT is used to provide basic calibration control for colors. For example, the red channel on your monitor may be too strong, so a calibrator could set the LUT entries for red 251-255 to output a red value of 250. In this case our color gamut has been corrected, but we’ve also lost color accuracy, since 256 input colors are now mapped to only 251 output colors. Depending on the hardware, this correction can happen at 6, 8, or 10 bit precision. 10 bit allows much more color detail, and so even in 8 bit mode, a 10 bit monitor’s expanded LUT makes it much more capable of responding to color calibration accurately. The LUT is a global hardware setting that lives outside of any particular software, and so it will provide calibration to all programs regardless of whether they are color aware. However, the LUT only operates within the native gamut of the monitor. That is, it can correct an AdobeRGB image to display correctly on an AdobeRGB monitor, but it cannot convert between sRGB and AdobeRGB.
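
Sketched in code, using the hypothetical red-channel correction above, a 1D LUT is nothing more than a 256-entry table applied per channel:

```python
# A per-channel 1D calibration LUT, as used in the graphics card or
# monitor. Hypothetical correction: the red channel is too strong, so
# inputs 251-255 are all clamped down to output 250.
red_lut = [min(v, 250) for v in range(256)]

def apply_lut(lut, value):
    return lut[value]

print(apply_lut(red_lut, 255))  # 250: corrected
print(len(set(red_lut)))        # 251: 256 inputs now map to only 251 outputs
```

This is exactly the accuracy trade-off described above: the gamut is corrected, but distinct input values have been merged.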

Color conversion and correction is handled by an ICC profile, sometimes called an ICM profile on Windows. The monitor typically has a default profile, and the profiling step of a color calibrator can create one customized to your display and environment. The profile describes how to convert from various color spaces to the gamut of the monitor correctly. On a perfectly calibrated monitor, we would expect the ICC profile to have no effect on colors that are in the same color space as the monitor. In reality the monitor’s gamut never matches the color space perfectly, so the ICC profile may specify corrections even within the same color space. Its primary purpose, however, is to describe how to convert various color spaces to the monitor’s color gamut. We can never display an AdobeRGB image correctly on an sRGB monitor, because the space is too wide. Instead the display must decide how to convert colors that are out of gamut. One option is to simply force out of gamut colors to the nearest edge of the target gamut. This preserves color accuracy within the smaller space, but destroys detail in the out of gamut areas entirely. Alternately we can scale the entire space, which will lead to inaccurate colors everywhere but better preservation of color detail. I’ll look at this dilemma in more detail in a future post.
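
Both strategies can be sketched with a toy model (my own, not an actual ICC rendering-intent implementation), treating an out-of-gamut color as one whose channels land outside [0, 1] after conversion to the destination space:

```python
def clip_to_gamut(rgb):
    """Colorimetric-style: clamp each channel. Accurate in gamut, but
    all out-of-gamut colors pile up on the gamut boundary."""
    return tuple(min(max(c, 0.0), 1.0) for c in rgb)

def scale_to_gamut(rgb):
    """Perceptual-style (toy version): scale the whole color down so
    the largest channel fits. Detail survives, but every channel shifts."""
    peak = max(rgb)
    return tuple(c / peak for c in rgb) if peak > 1.0 else rgb

too_red = (1.4, 0.5, 0.2)  # out of the destination gamut
print(clip_to_gamut(too_red))   # (1.0, 0.5, 0.2)
print(scale_to_gamut(too_red))  # (1.0, 0.357..., 0.142...)
```

Real perceptual rendering intents rescale the entire image or space, not one pixel at a time, but the trade-off is the same.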

The ICC profile is necessary to reproduce accurate colors on the monitor. This brings up an important point: Hardware color calibration is ineffective for color-stupid applications when image and monitor color spaces do not match. Consider the case of an sRGB image on an AdobeRGB monitor. A color stupid application will tell the monitor that these are sRGB colors. The LUT only specifies how to correct AdobeRGB colors, so for sRGB it simply changes one incorrect color to another. No amount of expense on calibration hardware will fix this problem.

In the case of a digital camera, it is not possible to alter the color response of the internal sensor. Instead, the sensor needs to be measured and corrected according to a profile. Tools like Adobe Camera Raw ship a set of default camera profiles containing this data. Unfortunately the correct calibration varies based on lighting conditions, camera settings like ISO, the lens in use, etc., in unpredictable ways. For highly color-critical work (e.g. studio photography), it’s common to use a product like the X-Rite Color Checker to acquire the correct colors for the shooting conditions. Either way, the calibration data is used in RAW conversion to determine final colors (along with other settings like white balance). The details of this process are at the discretion of the RAW conversion software. Adobe uses the profile (whether it’s the built-in default or an X-Rite calibrated alternative) to move everything to the enormous ProPhotoRGB color space at 16 bits per color channel, 48 bits per pixel. This gives them the widest possible flexibility in editing and color outputs, but it is critical to understand what will happen when the data is baked into a more common output format. We’ll see more of that in Part 3.

White Balance

What color is white? It’s a tricky question, because it depends on lighting conditions and to some extent is a subjective choice. Day to day, the brain automatically corrects our perception of white for the environment. Digital devices have to pick a specific rendition of white, based on their hardware and processing algorithms. Mathematically, white is the dead center of our color space, the point where R, G, and B all balance perfectly. But that point itself is adjustable, controlled by a value we call the white balance. White balance is a range of tones that encompass “white”, and it is defined primarily by a color temperature. You were probably told at some point that “white” light contains every color. Although it’s true, the balance of those colors varies. The color temperature is actually a value from thermodynamic physics, and it describes a particular color spectrum emitted by any “black body” at a particular temperature in Kelvins. We’ll ignore the apparent contradiction of terms and the physics in general. In short, physically cooler temperatures, 4000K and below, tend towards orange and red. Physically hotter temperatures, 6000K and above, are blue and eventually violet. 5000K is generally considered to be an even medium white and matches the average color of daylight (not to be confused with the color of the sky or sun). We can graph color temperatures on a chromaticity diagram:

The white point of a color space is the color temperature that it expects to correspond to mathematical white. In the case of sRGB and most digital devices, the white point is a particular illuminant known as D65, a theoretical white value roughly equivalent to a color temperature of 6504K. There’s no point agonizing about the details here; simply remember that the standard white is 6500K.

All digital devices have a native white point, derived from their physical parameters. In an LCD monitor, it comes from the color of the backlight. This color is usually close to, but not exactly 6500K. Correcting the white balance of the monitor is one of the biggest benefits of calibration, especially in multi-monitor situations. Because the hardware white point cannot be changed, these adjustments operate by correcting the individual channel intensities downwards, which reduces the color gamut. Thus a wider gamut display is more tolerant of color calibration, because it has more flexibility to compensate for shifts in white balance. Similarly, digital cameras capture all colors relative to a physical white point and white balance adjustments to photos will shift the entire gamut.

White balance is probably the most common color adjustment on photographs. As I said earlier, our brains constantly color correct our environments. Cameras don’t have that luxury, and have to make a best guess about what to do with the incoming light. That guess is stored along with the RAW data, but it can be changed later in processing for the final output. Most cameras also allow the user to specify a particular white balance at capture time. Either way, white balance adjustment typically happens along two axes: color temperature (which we’ve already discussed) and green-magenta hue. The hue adjustment moves the colors perpendicular to the color temperature, functioning as a separate independent axis. You can see this most clearly on the diagram above where 6000K is marked, but depending on the color temperature in use, the hue shift will not always be between green and magenta. For example, at 1500K it appears to be between orange and green instead. If you skip back up, the chart of the sRGB space has its central D65 point marked. You can imagine that point shifting, with the whole triangle pinned to it, as we change the white balance. All of the points of the triangle will move in color space to center around the white point.
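
A simple manual white balance correction can be sketched as per-channel gains computed from a patch that should be neutral (a von Kries style scaling; real RAW converters do this in a sensor or XYZ space rather than output RGB, so this is only illustrative):

```python
def wb_gains_from_neutral(r, g, b):
    """Given the RGB of a patch that should be neutral gray, compute
    per-channel gains that make it neutral, using green as the anchor."""
    return (g / r, 1.0, g / b)

def apply_wb(gains, rgb):
    return tuple(gain * c for gain, c in zip(gains, rgb))

# Hypothetical gray card photographed under warm light, coming out orange:
gray_card = (0.30, 0.24, 0.15)
gains = wb_gains_from_neutral(*gray_card)
print(apply_wb(gains, gray_card))  # all three channels now equal (~0.24)
```

Applying the same gains to every pixel in the image shifts the whole gamut around the new white point, which is exactly the triangle-pinned-to-the-white-point picture described above.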

Be careful in how you use “warmer” and “cooler” in describing white balance, because it can get confusing quickly. If you’ve done photo work, you might notice that the chart displays the temperatures reversed from what you expect. The colors of a photo shift in the opposite direction of the white point, which leads to our common description of warm and cool colors. If you set the white point to a very cold value, neutral white is now considered to be a very yellow color, and all the colors of the photo are pushed into blue territory. If you pick a very warm color, blue is considered neutral and all our colors shift towards yellow. This is because the rest of our colors are effectively described relative to the white point, and the color temperature of the photograph is the physical temperature that gets mapped to RGB (1, 1, 1).


In this post, I talked about the basics of digital color representation. We looked at color spaces, the mathematical ranges of color, and gamuts, the actual ranges that devices can work in. We talked about the implications of images and devices in different spaces, and the importance of color-aware applications. Next I explained bit depth and color calibration, and closed with an overview of white point and white balance.

What’s more interesting is what we did not cover yet. The discussion covered color, but not brightness. We know how to express various shades and tints, but not how to describe how bright they are (or the differences between brightness, luminance, and saturation). We also don’t know how to put any of this knowledge into practical use in our graphics work. Those will be the subjects of Parts 2 and 3, respectively.

May 26, 2012

New Theme, New Posts Soon, New Happenings

Filed under: Non-technical — Promit @ 12:30 am

I wanted a new coat of paint around here. We’re going to try this one on for size. It may or may not stick, we’ll see. I’m going to try and revive blogging here, as there are a number of things I’ve been meaning to write for many months. Many of those things are about photography, some of them are about games, and not a lot are about SlimDX or SlimTune.

Don’t hold your breath on the Slim* stuff — I just don’t know what is going to happen in the coming months. I’ve decided to pursue a Master’s Degree in Computer Science at my alma mater, Johns Hopkins University. I hate school, but given other events in my life this was an important step to take. It does cut into my time quite severely, so I’m basically stepping out of the consulting business and maintaining a blog during school is daunting to say the least.

I am also working on game development for the Department of Neurology at the Johns Hopkins Hospital. That is an extremely interesting effort which I will attempt to discuss as much as I can, though a lot of it is not public and will not be for some time. That’s the nature of the beast, unfortunately. I will say right now that it should be obvious that psychology and neurology play an important role in game design. It turns out that game design plays an important role in psychology and neurology too, and research has only just started to explore the implications of that crossover. There is a lot of potential.

Lastly, I’ve found myself very heavily invested in artistic pursuits, primarily photography. I think it’s important for any game developer (or any entertainment industry professional at all) to nurture their creative/artistic side as much as possible. You don’t have to be good at it, but you can’t neglect it. I picked photography because I’m terrible at drawing, and because I hoped it would clarify a lot of things I’ve never understood in graphics engineering. (It did.) It’s now a pursuit of mine in its own right, and I intend to be writing a lot about it.

Finally, I want to thank all half dozen people who are actually reading this post. You guys are nuts and I’ve wasted your time, but I promise better things are coming down the pipeline. I am working to finish an epic post detailing the basics of digital color representation. I can almost guarantee you’ll learn something.

December 25, 2011

SlimDX Status Report

Filed under: SlimDX — Promit @ 3:17 pm

Alright, we’ve talked DirectX and XNA already so let’s move on to the subject of SlimDX.

First off, there’s a release coming any day now. A number of things were screwed up with the September 2011 release, mostly my fault, and I’ve been busy patching them up. So there’s a new December 2011 release around the corner, and 4.0 runtimes will be available right at the start. I do want to point out, though, that the runtimes are strictly for end-users (non-developers) who are consuming SlimDX apps. You don’t need them to develop, and for that matter you probably don’t need them at all if you’re at a game development company. They install the DX runtimes, VC runtimes, and SlimDX itself. Given that both runtimes are now well over a year old, odds are you already have this stuff. While that doesn’t excuse my personal failures in getting this stuff out in a timely fashion, there is almost certainly no need to worry over it for 90% of you.

We’ve been promising a SlimDX 2.0 release for some time now, with substantially revised architecture. The redesign is based around many of the same concepts driving another wrapper library called SharpDX by Alexandre Mutel. Alex was working with us for a while but we split up over some mutual differences and went our separate ways. I’ve decided to withhold any comments on his work one way or the other. As far as our work… we need help. The three of us (Josh Petrie, Mike Popoloski, and myself) have been working on the library for something like five years, and things are pretty stable at this point. Sure there are bug fixes that we’re shipping out, but especially now that the DirectX SDK updates have stopped, the current codebase is largely good to go. The new codebase for 2.0 is really a prototype, and the simple fact is that it needs a lot of work and none of us has the time anymore.

I repeat: We need new people to help develop SlimDX. If that doesn’t happen then we’re likely stuck in place, which might not be that big of a problem except for one thing: Windows 8. SlimDX 2.0 is based on a code generation system that should allow us to target C++/CLI as well as the new C++/CX language. With CX support we get not only .NET but also JavaScript and native code support to interop with Metro apps. Not only that, but it also means ARM support and SlimDX on tablets in the coming years. I think that’s a big deal, if we can pull it off.

When I first wrote SlimDX in 2006, I believed that automated codegen like SWIG was not well suited to creating a simple, usable wrapper. SlimDX was hand written from the ground up to make using DirectX as painless as possible, and also to reshape the DirectX API into something that made sense as a .NET API. That was directly in the footsteps of Managed DirectX, which Tom Miller had created, though we took the model a lot farther down that path. Alex came to us with an approach to code-gen that we felt had real potential, but there are still a lot of rough edges and a lot of work left in getting it to the standard we want it at.
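
To illustrate what signature-driven code generation looks like in the abstract, here's a toy sketch. All the names here (the `Native` class, the type map, the methods) are hypothetical; this is not SlimDX's or SharpDX's actual generator, just the general shape of the idea: parse a native signature, emit the text of a managed wrapper.

```python
# Toy illustration of signature-driven wrapper generation, loosely in the
# spirit of wrapping a native API for .NET. All names are hypothetical.

# Map native parameter/return types to their managed equivalents.
NATIVE_TO_MANAGED = {
    "UINT": "int",
    "const char*": "string",
    "void": "void",
}

def emit_wrapper(method_name, return_type, params):
    """Emit the source text of a managed wrapper method for a native call.

    params is a list of (native_type, name) tuples.
    """
    managed_params = ", ".join(
        f"{NATIVE_TO_MANAGED[t]} {name}" for t, name in params
    )
    arg_list = ", ".join(name for _, name in params)
    ret = NATIVE_TO_MANAGED[return_type]
    if ret == "void":
        body = f"Native.{method_name}({arg_list});"
    else:
        body = f"return Native.{method_name}({arg_list});"
    return (
        f"public {ret} {method_name}({managed_params})\n"
        f"{{\n    {body}\n}}"
    )

print(emit_wrapper("SetWidth", "void", [("UINT", "width")]))
```

The appeal is obvious once you see it: the tedious, repetitive 90% of a wrapper can be stamped out mechanically, and the hand-tuning effort goes only into the cases that genuinely need reshaping.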

So, who can help? You’ll need to have a working familiarity with C++ and C#, and DirectX of course. It is not, contrary to popular belief, necessary to really deeply understand any of these things. Working on SlimDX is an adventure in quirks and details of interop that I can almost guarantee you have not seen. Don’t worry about experience if you’re looking to help out. There will be a lot to learn of course, and you’re going to need a lot of free time to commit to this, but we’ve spent a long time building SlimDX and have a pretty solid handle on what’s going on. The only other requirement is the understanding that what you get out of this is experience, an excellent resume item, and skills that are fairly rare. Money is very unlikely to appear directly unless donations take a serious uptick.

If you’re interested in helping out, please post here, or ping us via Twitter or IRC or e-mail or GameDev or whatever. I really do need one or two people to join as regular developers, otherwise DirectX and Windows may well move forward without a SlimDX to help glue the bits together.

December 24, 2011

Advocacy Won’t Save the Internet

Filed under: Non-technical — Promit @ 8:31 pm

There’s been a lot of rage across the internet and related companies about a US bill called the Stop Online Piracy Act, abbreviated as SOPA. You can look to Wikipedia for what the whole thing is about and why people are upset. In short, it sharply curtails internet freedom and may inflict damage on the Internet’s core structure. That is not the part I am writing about. If anything, it’s amazing that things took so long to get to this point. We’re seeing the beginning of a war that was always inevitable, and I fear that if we continue to try to solve it at a policy level, freedom will lose as it always does.

Money and power are and always have been centered around a singular point: control. In order to protect an oppressive government, or an oppressive business model, you must control the basic pathways and communication channels. The methods have changed over the course of centuries but the ideas have not. The Internet and the Web represent largely uncontrolled systems of communication. As a result, it’s been a continued thorn in the side of governments and corporations for many years. From Napster to PirateBay and WikiLeaks, and far more reprehensible things (e.g. child porn), there’s been a constant struggle between freedom and control. That struggle has been largely random and without direction, because nobody really knew how to police the internet. The system was designed to be resilient, and there are many, many ways in which blocks by oppressive regimes have proven ineffective.

Now we’re seeing the next phase, which is to target the gate-keepers. The internet is resilient, but it is not resilient enough. Search engines and link accumulators were targeted first. Coupled with DMCA provisions, sites are vanished from Google and Bing and once that happens the site may as well not exist. Discovery becomes nearly impossible. This has been done to protect “copyright holders” and “intellectual property”, but that is merely a proxy for ANY information that any party or any government (primarily the US) does not want in the wild. You only need to observe Universal’s assault on the MegaUpload video to understand that. Making somebody invisible, even temporarily, is an enormously powerful ability.

The next target, possibly the crucial one, is the Domain Name System (DNS). DNS is responsible for translating a human-readable domain name into the numeric IP address that machines actually connect to. The US Department of Homeland Security has gleefully pursued sites by revoking their domain names without anything resembling due process, without available recourse, and without actual authority, for that matter. The results were predictable: a technical workaround that got the government mad, and a bogus seizure that made the whole program look corrupt, which it is.
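
The translation step itself can be sketched at the wire level. Here's a minimal, standard-library-only illustration of how a resolver encodes a domain name into DNS wire format (length-prefixed labels, per RFC 1035) and decodes it back; "example.org" is just a placeholder name, and no query is actually sent:

```python
import struct

def encode_qname(name):
    """Encode a domain name in DNS wire format: length-prefixed labels."""
    out = b""
    for label in name.split("."):
        out += struct.pack("B", len(label)) + label.encode("ascii")
    return out + b"\x00"  # the zero-length root label terminates the name

def decode_qname(data):
    """Decode a wire-format name back into dotted form."""
    labels, i = [], 0
    while data[i] != 0:
        length = data[i]
        labels.append(data[i + 1:i + 1 + length].decode("ascii"))
        i += 1 + length
    return ".".join(labels)

wire = encode_qname("example.org")
print(wire)                # b'\x07example\x03org\x00'
print(decode_qname(wire))  # example.org
```

The point of showing this is how little is in the pipe: nothing in the protocol itself says who is allowed to answer, which is exactly why control over the servers that do answer is such a powerful lever.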

The last gatekeeper is the ISP, the guys who hold the actual physical connection between us and the internet. They are under assault too. It’s the same story over and over again, but in the end the ISPs will cave because it will be difficult or illegal for them to hold out.

SOPA might be the greatest ever attack on Internet freedom, but it’s also a dead-on logical expansion of a war that has unfolded continuously over the past decade or more. It’s possible that this particular measure will be defeated. The trouble is that it doesn’t matter. There is far, far too much at stake for the corporations and governments to let this go so easily. They will learn from their mistakes here, tweak and tune the language and the pitch, and come back with armies of lobbyists time and again until the chaotic political winds line up in their favor. That WILL happen, and things will start to crumble for those who value freedom.

Ultimately Hollywood wants the same thing that the government wants: the ability to control and restrict what happens on the Internet and how. They are on the same side, and all the calls in the world to your Representative will only delay what’s coming. It’s useful to buy time, but at the end of it all there is only one choice that will work: the Internet and the World Wide Web must be made entirely immune to censorship at a fundamental technical level. It must be redesigned so that no amount of legal threat is capable of affecting it at all.

From a technical point of view, that means a few things. First, the DNS system must be secured against the whims of any government. There are two options for doing that. One is to secure the DNS system so that every country controls its own TLDs and cannot affect any others. I believe this is doable with a widespread rollout of DNSSEC. The US could still revoke domains, but only those hosted as COM/ORG/NET/US/etc which are ostensibly subject to their legal control anyway. Just pick a country where whatever you’re doing is legal and sign up with them. The other option is rather extreme, and involves replacing the DNS system entirely with a new naming system that is not under anybody’s control at all. There is work along these lines, but it’s difficult to see potential for mainstream adoption. (On the other hand, it could thrive in environments like P2P networks if the tech details are hacked out.)

Then there are the ISPs. There’s no point locking the overall system down if your personal uplink still says “hey, no PirateBay for you no matter how you’re trying to get there.” That requires end-to-end encryption of your sensitive traffic. We have a system for that called Tor, but it’s arguably overkill. The ability to perform encrypted DNS queries locally (this is different from DNSSEC), plus secure HTTPS connections, achieves nearly everything we need. The latter has already become commonplace on major sites, which leaves only encrypted DNS queries to solve. Luckily we’ve got that too.
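
As a small illustration of the client side of that, here's a sketch using Python's standard ssl module (no network connection is made): a default TLS client context enforces certificate verification and hostname checking, which is the property that keeps an on-path ISP or middlebox from silently impersonating a site.

```python
import ssl

# A default client context enables certificate verification and hostname
# checking. An on-path party can still block the connection outright, but
# it cannot transparently read or rewrite the traffic without triggering
# a verification failure on the client.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

That asymmetry is the whole game: encryption converts "quietly tamper" into "visibly block," and visible blocking carries a political cost that quiet tampering does not.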

That leaves us with the visibility problem in search engines, social networks, and similar services controlled by a single entity. I’m less concerned about this, because the steps I’ve discussed so far open the door for somebody in a more open country to build systems that are not subject to government or corporate whims. There is work on a decentralized search engine that isn’t subject to any control at all, but it’s unclear whether such a system is actually workable. Similar efforts are underway to replace centralized services such as Facebook, Twitter, and even semi-centralized mechanisms like OpenID. There is a core belief here that any system that is centralized is necessarily a threat, and cannot be trusted. I don’t know if that’s the case, but the more research we have in building completely distributed tools the better.

To try and win true freedom for the Internet on political and policy grounds is an eternal battle which we will likely lose. There is too much at stake for the power players to give up what we are asking of them. If we’re lucky, Google and all the other internet companies will remember to sink millions of dollars into R&D into making the Internet unbreakable, instead of simply lobbying the government not to break it. Once we make it indestructible on a technical level, governments and corporations will be forced to adapt to the new order, instead of trying to stop it. That’s our only chance to preserve what we’ve built and earned in the last forty-odd years: a completely free communication system that is equally open to everyone.

December 23, 2011

Moving Away From Godaddy

Filed under: SlimDX,SlimTune,Software Engineering — Promit @ 8:53 pm

Just a quick update here: My domains, this one primarily but a few others as well, are currently hosted by GoDaddy. Now GoDaddy is a company with a long, messy history of being a third-tier sleaze-bag registrar, but I stuck with them because of pricing. However their recent support of SOPA, and their pathetic recantation, pushed me over the edge.

Effective immediately, I am shifting all domains away from GoDaddy. Because I’m rather new to this process, I don’t know what will happen to DNS and email during the transition. The SlimDX site or associated e-mail addresses may become inaccessible for a short period while I sort things out. Please bear with me.

If you are interested in moving your own domains, I found out that NameCheap is running a promotion with code “SOPAsucks”. Their standard pricing is not quite as aggressive as GoDaddy’s, but they do run very competitive specials ($2 domains) nonetheless. Transfers with the code cost $6.99 per domain, which includes a year renewal of the domain. I am sure there are other anti-SOPA registrars but this one is mine.

December 15, 2011

DirectX and XNA Follow-up

Filed under: Software Engineering — Promit @ 5:28 pm

I wanted to clarify and respond to a few things regarding my previous post about DirectX and XNA. First a quick note: the very long-standing DIRECTXDEV mailing list is being shut down. Microsoft is encouraging a move to their forums, but in case you’re fond of the mailing list format, there’s a Google Group that everyone is shifting over to. If you’re reading this blog post, you’re probably interested in the subject matter, so I highly encourage everyone to join. MS Connect’s entry for DirectX is being discontinued as well, so I’m not sure how you report bugs now.

Something on the personal front: I got a few comments, directly and indirectly, about being an MS or DirectX “hater”. Good lord no! I adore DirectX, XNA, Windows, and Microsoft. I criticize because I want to see these technologies thrive and succeed. I’ve been doing a lot of iOS/Mac/OpenGL work lately and from a technical standpoint it’s absolutely miserable. I miss the wonderful Microsoft world of development. But a lot of what I’m hearing in public and private worries me. My alarmist approach is designed to bring attention to these things, because oftentimes the development teams live in a bubble separated from their users. (Hell, I can barely get in touch with people using SlimDX.) The XNA team, for example, are terrible at communication. It’s exasperating, because the direction, plans, and schedules are completely opaque — even to those of us who have signed an MS NDA and are ostensibly supposed to see this information early. (We don’t see jack shit, by the way. Exasperating.)

Second, no, I do not think DirectX is dead. No, I do not think everybody should switch to OpenGL. From a platform and technical standpoint, Microsoft’s commitment is probably as strong as or stronger than it’s been in the past. What’s bothering me is the pathetic way that community, documentation, etc. are being handled. Look at “Where is the DirectX SDK?”: I don’t know when that page was published, but it appears that nobody noticed it until very recently. How would we? I don’t wander MSDN online at random. And it’s been placed right next to this comically worthless page. This is the kind of developer support I’m complaining about. Nobody outside MS understands what is going on in Bellevue, and I’m getting this worrisome feeling that nobody inside understands either. The DirectX SDK wasn’t just a way to deliver support for a core platform technology, which is what seems to be driving the current decision making process. It represented half a gigabyte of commitment to the developers who arguably make the entire Windows ecosystem compelling to a consumer. “Developers developers developers!” “Yeah, what’s up?” “Uhhh, hi?” That’s what it feels like. MS wants developers around, but they forgot to put any thought into why. (Here’s a hint guys, ask DevDiv. They seem to still have a clue, C++ team aside.)

And in the other corner, we’ve got XNA. Whoo boy. There’s no point to sugarcoating this, although I’ll probably ruffle some feathers: XNA 4.0 is garbage. It exists to support two things: XBLIG and WP7. XBLIG is a joke, so really the only productive arena remaining is WP7. It’s sad because XNA 3.x was actually an excellent way to do managed development on PC. 4.0 introduces a profiles mechanism that is focused specifically around the Xbox and WP7, and produces a stunningly foolish situation on PC where DX10+ hardware is required but none of the new features are supported. Worse still, that was a few years ago. Now we’ve got DirectX 11 with Metro coming down the pipeline and XNA staring blankly back like the whole thing is a complete surprise. It tends to raise some questions like, is XNA 5 coming? Will there be DX11 or Metro or Windows 8 support? Is anybody even listening? And the answer we got was, and I quote: “We’re definitely sensitive to this uncertainty, but unfortunately have nothing we can announce at this time.”

My judgement on that message is that XNA does not exist on PC. I’ll say the same thing I said in 2006 when XNA was first revealed. Treat it as an API that happens to run on Windows as a development convenience, not as something it’s actually meant to do. The computing world has moved on, and if Microsoft can’t be bothered to bring XNA along then that’s just something we have to work with. If and when XNA 5 is announced, then we can go back and take a look at the new landscape.

November 28, 2011

DirectX and XNA Status Report

Filed under: Graphics,Software Engineering — Promit @ 9:05 pm

A few interesting things have been happening in the DirectX and XNA world, and I think people haven’t really noticed yet. It’s been done quietly, not because Microsoft is trying to hide anything but because they’ve always been big believers in the “fade into the night” approach to canceling projects. Or their communication ability sucks. Cancel may be the wrong word here, but the DirectX developer experience is going to be quite a bit different moving forward.

Let’s start with the DirectX SDK, which you may have noticed was last updated in June of 2010. That’s about a year and a half now, which is a bit of a lag for a product which has — sorry, had — scheduled quarterly releases. Unless of course that product is canceled, and it is. You heard me right: there is no more DirectX SDK. Its various useful components have been spun out into a hodge-podge of other places, and some pieces are simply discontinued. Everything outside DirectX Graphics is of course gone, and has been for several years now. That should not be a surprise. The graphics pieces and documentation, though, are being folded into the Windows SDK. D3DX is entirely gone. The math library was released as XNA Math (essentially a port from Xbox), then renamed to DirectXMath. It was a separate download for a while but I think it might be part of Windows SDK from Windows 8 also. I haven’t checked. The FX compiler has been spun off/abandoned as an open source block of code that is in the June 2010 SDK. There are no official patches for a wide range of known bugs, and I’m not aware of a central location for indie patches. Most of the remaining bits and pieces live on Chuck Walbourn’s blog. Yeah, I know.

In case it’s not obvious, this means that the DirectX release schedule is now the same as the Windows SDK, which always corresponds with major OS updates (service packs and full new versions). Don’t hold your breath on bug fixes. Last I heard, there’s only one person still working on the HLSL compiler. Maybe they’ve hired someone, or I assume they have a job opening on that ‘team’ at least. What I do know is that for all practical purposes, DirectX has been demoted to a standard, uninteresting Windows API just like all the others. I imagine there won’t be a lot more samples coming from Microsoft, especially big cool ones like the SDK used to have. Probably have to rely on AMD and NVIDIA for that stuff moving forward.

That covers the native side. What about managed? Well the Windows API Code Pack hasn’t been updated in a year and a half so we won’t worry about that. On the XNA front, two things are becoming very clear:
* XNA is not invited to Windows 8.
* XBLIG is not a serious effort.
The point about XBLIG has been known by most of us MVP guys for a while now. Microsoft promised a lot of interesting news out of this past //BUILD/ conference, which I suppose was true. However you may have noticed that XNA was not mentioned at any point. That’s because XNA isn’t invited. All of that fancy new Metro stuff? None of it will work with XNA, at all, in any fashion. (Win8 will run XNA just like any other ‘classic’ app.) That also implies pretty minimal involvement with the Windows app store. Combined with the fact that XBLIG has never been a serious effort to begin with, I’m dubious about tablet support for managed games. XNA does work on WinPhone7 and Win8 does support Win7 apps, so it ought to work in principle. Maybe. Given the niche status of Windows Phone 7 at the moment, and major losses of tech like UDK and Unity from that ecosystem, I’m also expecting WinPhone8 to be much more friendly to native code. (If not, I expect that platform to fail entirely, and take Nokia with it.)

I’m also looking at this e-mail right now in my box, which starts as follows:

As you know, the 2012 Microsoft MVP Summit is Feb 28-Mar 2, 2012. We wanted to inform you that DirectX and XNA technologies will not be hosting sessions at the Summit. As MVPs, you are still encouraged to attend and be a part of the global MVP community, and you’ll have the ability to attend technical sessions offered by other product groups.

Really? No DX/XNA sessions at all? I don’t think DirectX is fading into the sunset, because it’s a core technology. XNA, though, will most likely disappear. For some reason, Microsoft is getting out of the business of helping developers produce games and other 3D applications at the exact same time that they’re adding core support for it. Yes, Visual Studio 11 has model and texture visualizing and editing. It even has a visually based shader editor. I’ll let you take a wild guess on how well that will work with XNA code.

I don’t know exactly what Microsoft is going for here, but every couple of days somebody asks me when the next DirectX SDK is going to be released and I think I’m just explicitly stating what Microsoft has vaguely hinted for a while now. There is no new DirectX SDK, soon or ever. And I’m not holding my breath for any XNA updates either. I am told there is an XNA 5, but they won’t be at the summit apparently. If there is some actual future in XNA, feel free to make yourselves heard on the mailing lists because right now it’s really difficult to understand why anyone would bother.

Important clarification: I do NOT think DirectX is being deprecated or vanished or even dramatically changed at the core. (I’m less confident in XNA.) The SDK is being vanished, subsumed into Windows SDK as a component just like all the rest of Windows. So coding DX is no more special than coding GDI or Winsock. There will still be 11.1 and 12 and all that, probably delivered with OS releases and service packs. “DirectX 11.1” will actually be synonymous with “Win7 SP2”. That has been the case for a while, actually. What’s going away is the wonderful developer support we’ve enjoyed as long as I can remember. Compare the D3DX libraries across 9, 10, and 11. Is this really a surprise? It’s been slowly happening for years.

November 3, 2011

Purchasing Glasses: Online or Offline?

Filed under: Non-technical — Promit @ 10:55 pm

I pride myself on being an informed consumer, to the point of obsession and beyond. It came time to buy a new pair of glasses, and I realized that I know nothing about them. I’ve had these things on my face every minute of every day for most of my life, and yet I had no clue what I was doing. After asking around a bit, I realize that essentially all of my friends and acquaintances were exactly the same way. Those are nearly all computer people, and depend on their glasses for daily life. How much money did you lay out for your glasses? I’m betting it wasn’t trivial, but do you know anything about them? Do you even know what brand you’re wearing?

This post is the first of a couple, dealing with overall eyeglass selection and particularly the question of whether to buy in-store or online. Later on, I’ll discuss lenses, coatings, etc. But for now, let’s just tackle the overall problem of where to get your glasses.

Online or brick and mortar store? In most cases, the person doing your eye exam rents space from or is outright affiliated with a retailer. Once your prescription’s been established, they hand it straight off to the shop, and you pick out your frames. Nice, easy, and totally ignorant. That’s not to say you’re getting a bad product, but you are probably paying a lot for the convenience. How much is a lot? The typical margin on frames varies from 100% to 1000%. Yes, that’s 10x in pure profit. Those $300 designer frames cost probably $30 to manufacture, and they’re really not precision equipment. I don’t have good numbers on lens markup, but it’s not subtle either. Cursory exploration suggests it’s at least 100%. For your money, you get free adjustments and maybe repairs from the people who sold you the glasses. Odds are you’ll walk out with a pair of comfy glasses that look pretty decent on you and assurance that if something goes wrong, you’ll have someone to yell at.
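
To make those markup numbers concrete, here's the arithmetic as a quick sketch. (Strictly speaking, selling a $30 frame for $300 is a 900% markup on cost, which is roughly the "10x" described above; the example prices are the article's own.)

```python
def markup_percent(cost, price):
    """Markup as a percentage of the seller's cost: (price - cost) / cost * 100."""
    return (price - cost) / cost * 100

# The $300 designer frame that costs roughly $30 to manufacture:
print(markup_percent(30, 300))   # 900.0 -- near the top of the quoted range
# The low end of the quoted 100%-1000% range:
print(markup_percent(150, 300))  # 100.0 -- i.e., the price doubles the cost
```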

So where do you go in the brick and mortar world? Walmart and Target are, well, pretty much what you’d expect. Sort of vaguely competent but minimal at best. Big name chains like Lenscrafters are popular and prevalent in malls all over, but it’s turned out time and again that those guys over-charge and under-deliver. Shoddy lenses, shoddy coatings, even shoddy frames sometimes. Consumer Reports tells me that Costco is a stand-out in quality and price, and as a matter of fact that’s where I’ve gotten my glasses many times. If I were looking for a quality pair of reasonably priced glasses to try on before I buy, that’s where I would go.

That was the easy part — now for online retailers. The first challenge is even knowing what to buy. You’ll need your measurements, all of them. What size frames are good for you, your prescription, and also an obnoxious number called the pupillary distance (PD). PD is the straight line separation between your pupils when looking straight forward, and is used to place the optical centers of the lenses correctly. It’s also not officially part of your prescription, and difficult to measure reliably on your own. All opticians are equipped to take this measurement in order to sell you glasses. The only reason you could possibly need this number yourself is to order online, and that is why pretty much no optician anywhere will give you that number. Many online retailers tell you how to measure the PD by yourself. Don’t do this. I got mine by asking Costco to give me the stats on the last frames I bought from them. I’m due for a new prescription any day now, and I intend to find an optometrist (not an optician) who will give me the number properly as part of the exam. You’ll also need to know the ballpark for what frames fit you, best established by finding frames that already fit you and getting the measurements off them. Whether it’s ethical to do that by trying glasses in a store you won’t buy from, I leave for you to decide.

So where do you buy from? Pretty much everything you need to know is at GlassyEyes. I believe Zenni Optical is the largest and most popular of the online retailers, and I’ve been fairly happy with them myself. Just keep in mind that you’re getting made-to-a-price Chinese frames with Chinese lenses and Chinese coatings shipped from China, with everything that implies. Anti-reflective coatings carry a substantial mark-up at retail, but Zenni charges $5, so corners are getting cut somewhere. Support and returns are also about as easy as you’d expect with any of these companies, which is to say dismal. If you have a bad experience, you’re not likely to find a good resolution. But for the price of a reasonably decent Costco pair, I can easily order four or five pairs from Zenni with all of the relevant coatings. The Costco pair will be better, though, so it’s still not clear if things stack up in your favor. At the very least, online is a great way to get backup pairs, prescription sunglasses, or costume glasses. That’s assuming you’re not in a hurry, because these things take 2-4 weeks and there are horror stories out there about things going completely awry.

So that’s the run-down. Retailers = reliable and safe but horrifically expensive. Online = enormous cash savings and super sketchy. Actually fairly typical. So which one to pick? If this is for a first time purchase, go to a retailer. Seriously. First-time wearers have absolutely no business buying online. If you have a very complex prescription or health problems with the eyes, you’re probably better off with the retailers. The online places are just too likely to screw it up, and you need someone who is equipped to CHECK the lenses you get back from the lab. (Some optometrists will do this for ordered glasses though.) If you’re poor/broke or just looking for backup pairs, online is a great way to get them. I have a couple lying around from online and they do the job. The Zenni coating is clearly trash, though, and I suspect the coatings on some of these lenses from other online shops will be gone in 6-12 months. But at $20 per, it’s difficult to care.

If you’re looking for a primary pair though, especially if you’re not quite certain about the necessary measurements, then it gets tricky. I ordered four pairs recently to try out. One is great, one is workable, one got returned due to incredibly poor fit, and one is being donated because it looks laughably terrible on me and returning it would net me nothing after shipping. That’s a risk I took, and I’m a little irritated about the losses on the bad ones, but my new sunglasses from Zenni are really nice, and this rimless pair from Goggles4U is adequate too. I don’t love it, though; it fits great and looks nice, but it’s poorly made and a bit of a hassle. The people over at Optiboard would gloat; that’s a forum specifically for people in the optical industry, and their attitude about online purchasing is exactly what you’d expect, turned up to 11 in some cases. (By the way guys, online optical shops may be a lot of negative things but ‘criminal’ isn’t among them.) Remember that online optical, though in its infancy, represents an existential threat to these people’s careers.

All the same, I still need a primary pair of glasses. I used a pair from Zenni as my primary for about a year but they were never quite right and the coatings just plain rubbed off. They’re scratched up badly. So I thought to myself, this one time I’m going to do it right. I’m going to visit an actual store, pay for real frames and real lenses, and have a real professional set them up just so. Pretensions being what they are, I wasn’t willing to buy ‘good’ frames unless they were Oakley, and the local Oakley-distributing optician was incredibly nice and helpful. I was pretty much ready to buy from him until I got the price tag: $550. Ouch. I know there’s about $200 in markup on the lenses alone there. I can get a Leica 14 element or Zeiss 7 element camera lens for not much more money, and corrective lenses are not in the same league of quality as those beauties.

Right now, I’m researching some alternatives on how to get real brand name, good quality glasses. The Internet is here, after all, and I’m all for ruthless global competition. (That’s a nod to you libertarians, as long as we don’t have VAT in the US.) One of the big problems with online is the lack of professional adjustments, but it’s not like you always have to take your glasses to the person who sold them to you. Of course a glasses store is hoping to get your business with services like free adjustments, but I don’t mind adding an extra fifteen bucks on top for fifteen minutes of that person’s time, and that’s more than I’m paid. I’m also going to find an optometrist who doesn’t have skin in the retail game and is willing to help me see better, regardless of who I pay for the glasses themselves.
