Oddly Elaborate Apple Error Message

I just wanted to share this. Popped up today while initializing an NSDateComponents object.

components:fromDate:toDate:options:]: fromDate cannot be nil
I mean really, what do you think that operation is supposed to mean with a nil fromDate?
An exception has been avoided for now.
A few of these errors are going to be reported with this complaint, then further violations will simply silently do whatever random thing results from the nil.
Here is the backtrace where this occurred this time (some frames may be missing due to compiler optimizations):

So that was unexpected.

The Scandalous Yetizen Costume

There’s been a lot of chatter on the various blogs and news sites about the IGDA and Yetizen party incident. I’m not going to rehash that. See these articles if you’re not up to date on the whole controversy:
http://www.joystiq.com/2013/03/28/igda-party-features-dancers-prompts-controversy-resignations/
http://www.joystiq.com/2013/04/09/igda-defines-new-rules-for-future-industry-parties-after-gdc-mi/
http://yetizen.com/2013/03/30/official-statement-by-the-yetizen-ceo-on-the-yetizen-igda-gdc-party/2/
I will comment that I thought the controversy was a wholly pointless, manufactured thing, and Brenda Romero’s resignation did not help anybody. That said, I was a little surprised to discover that the scandalous, allegedly inappropriate outfits that created all this trouble aren’t actually shown anywhere in any of the news about the incident. At all. Not on Joystiq, not on the Gawker-owned Kotaku, nowhere. I thought that was strange. Luckily I have photos of the Yetizen models from the previous year, so… here it is. This is the outfit that forced two IGDA members to resign.
Yetizen Outfits
Now you know.

A Glimpse of What I’m Working On

I’ve decided to focus a little less on complaining and a little more on the actual work I do. Here’s a teaser:
Monitor array
I had a substantial amount of help with the over-water environmental rendering (not pictured) from a friend of mine, Nauful Shaikh. See his site for some great graphics work.

This wall of monitors was graciously made available to us by the Computer Science department for a presentation to the President of the University, as well as a healthy mix of department chairs from Neuroscience, Neurology, the Brain Sciences Institute, Computer Science, and Electrical/Computer Engineering at Johns Hopkins. I’m driving it at 60fps off a single 7970 in Eyefinity 6. It was supposed to be Crossfire but somebody’s driver is broken *cough cough* so I had to gut the render pipeline somewhat. Total resolution is 5760×2160 plus some margins for bezel compensation. The actual app is Kinect and PS Move enabled, and maybe I can share more about it this summer. The focus is a dolphin, which we’ve developed with significant help and guidance from the National Aquarium in Baltimore, who let us work directly with their dolphins to better understand the animals, how they move and think, and so on.
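
For the curious, the resolution arithmetic: Eyefinity 6 here is presumably a 3×2 grid of 1080p panels, so 3 × 1920 = 5760 wide and 2 × 1080 = 2160 tall, before the extra margins for bezel compensation.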

We’re planning to launch an iPad version this year on the iTunes App Store, and create a large scale interactive installation version for aquariums, hospitals, museums and similar at 4K resolution in stereoscopic 3D.

Follow-up on DirectX/XNA

Received today, and hopefully the “you can quote me” part means this is an exception to NDA because it’s important:

The message said “DirectX is no longer evolving as a technology.” That is definitely not true in any way, shape or form. Microsoft is actively investing in DirectX as the unified graphics foundation for our key platforms, including Xbox 360, Windows Phone and Windows. DirectX is evolving and will continue to evolve. For instance, right now we’re investing in some very cool graphics code authorizing [sic] technology in Visual Studio. We have absolutely no intention of stopping innovation with DirectX, and you can quote me on that. 🙂

My intent was not to start a firestorm of questioning on DirectX’s future viability, and I said up-front that I felt that communication was poorly worded with regards to intent. My frustrations were also apparently poorly worded. Since I accidentally launched this, let’s clear up a few things.

Number One: In the absolute (and implausible) worst case scenario that MS really scales back their Direct3D support to a minimum, that situation is still better than OpenGL. The Direct3D system is a technically superior piece of technology, and support for working with it is still better than OpenGL whether you’re a hobbyist or a pro. I cannot emphasize this point enough, so for the love of god stop bringing up OpenGL. It’s a badly designed API and has been since I started doing this in 2000.
Number Two: A new picture is coming into focus that shifts a lot of the DirectX SDK’s burden onto VS. This hasn’t been made previously clear to us on the MVP side. As I’ve begun to explore the tools already inside VS 2012, I like what I’m seeing. It’ll take some time to see how it all plays out, but in a very real way having Direct3D integrated into core VS development is a serious promotion.
Number Three: There’s more content in today’s email regarding XNA which I don’t care to share, thanks to a stern NDA reminder. (Ironically, when MS finally gives us what they should be saying to the public all along, I can’t share it.) But this is very much a case of “put up or shut up” and defending XNA’s status as a serious technology seems patently ridiculous to me right now. The community, whether it’s my work or someone else’s, has stepped in to integrate .NET and DirectX for many wonderful use cases. But there are things we can’t do (like Xbox) and it’s clear that matters to a lot of people. It’s not clear that it matters to Microsoft.

That said, I am not walking back my actual complaints about how DirectX and XNA are being handled. I like the work that’s been done in integrating VS and DirectX, which is arguably many years overdue. That doesn’t make everything else okay. The fact that we’re having this discussion, the fact that my dashed off blog post exploded on Twitter, the fact that clarification had to be written up behind the scenes — this is a problem. Which brings me at long last to the actual point I was trying to make yesterday:

As developers, we need Microsoft to communicate clearly with us, in public. As MVPs we were asked to act as community representatives, to guide everyone interested in the tech and have an open line on future development. Apparently that means we get half-hearted, vague emails from time to time that dodge our serious questions and cast further doubt on the status of the technology and teams, all covered by NDA. And then, shockingly enough, people get the wrong idea. We’re sitting on the outside, trying to play this stupid guessing game of “which Microsoft technology is alive?” XNA doesn’t support DirectX 10+ or Windows 8, but it’s still a “supported product”, as if that means anything in the real world. Windows XP is still a “supported product” too.

It shouldn’t take a leaked email to force a straight answer.

DirectX/XNA Phase Out Continues


Please read the follow up post.

This email was sent out to DirectX/XNA MVPs today:

The XNA/DirectX expertise was created to recognize community leaders who focused on XNA Game Studio and/or DirectX development. Presently the XNA Game Studio is not in active development and DirectX is no longer evolving as a technology. Given the status within each technology, further value and engagement cannot be offered to the MVP community. As a result, effective April 1, 2014 XNA/DirectX will be fully retired from the MVP Award Program.

There’s actually a fair bit of information packed in there, and I think some of it is poorly worded. The most stunning part of it was this: “DirectX is no longer evolving as a technology.” That is a phrase I did not expect to hear from Microsoft. Before going to “the sky is falling” proclamations, I don’t think this is a death sentence for DirectX, per se. It conveys two things. Number one, DirectX outside of Direct3D is completely dead. I hope this is not a shock to you. Number two, it’s a reminder that Direct3D has been absorbed into Windows core, and thus is no more a “technology” than GDI or Winsock.

Like I said, poorly worded.

There are a few other things packed in there. XNA Game Studio is finished. That situation has been obvious for years now, so it should not really come as a surprise either. And finally the critical point for me: our “MVP” role as community representatives and assistants is appreciated but no longer necessary. On this point, the writing has been on the wall for some time, so I should not be surprised. But I am. Maybe dismayed is a better word.

As I’ve said previously, I don’t feel that the way DirectX has been handled in recent years has been a positive thing. A number of unfortunate technical decisions were made, and then a number of business and marketing decisions compounded the problem. Many of the component technologies (DirectInput, DirectSound, DirectShow) have been left to splinter into a mess of intersecting fragments intended to replace them. The amount of developer support for Direct3D from Microsoft has been unsatisfactory, and anecdotal reports of internal team status have not been promising. Somebody told me a year or two back that the HLSL compiler team was one person. True or not, that’s not something you want to hear. Worst of all, though, was the communication. That’s the part that bugs me.

When you are in charge of a platform, whatever that platform may be, developers invest in your platform tech. That’s time and money spent, and opportunity costs lost elsewhere. This is an expected aspect of software development. As developers and managers, we want as much information as possible in order to make the best short and long term decisions on what to invest in. We don’t want to rewrite our systems from scratch every few years. We don’t want to fall behind competitors due to platform limitations. Navigating these pitfalls is crucial to survival for us. Microsoft has a vested interest in some level of non-disclosure and secrecy about what they’re doing. All companies do. I understand that. But some back and forth is necessary in order for the relationship to be productive.

Look at XNA — there have been a variety of questions surrounding it for years, about the extent to which the technology and its associated marketplace were going to be taken seriously and forward into the future. It is clear at this juncture that there was no future and the tech was being phased out. Direct3D 10 was launched in late 2006, a bit over six years ago, yet XNA was apparently never going to be brought along with the major improvements in DWM and Direct3D. How long was it known internally at Microsoft that XNA was a dead-end? How many people would’ve passed over XNA if MS had admitted circa 2008 (or even 2010, when 4.0 was released) that there was no future for the tech? The official response, of course, was always something vague and generic: “XNA is a supported technology.” That means nothing in Microsoft world, because “it will continue to work in its current state for a while” is not a viable way for developers to stay current with their competition.

Just to be clear, I don’t attribute any of this fumbling to malice or bad faith. There’s a lot of evidence that this type of behavior is merely a delayed reflection of internal forces at Microsoft which are wreaking havoc on the company’s ability to compete in any space. But the simple ground truth is that we’re entering an era where Windows’ domination is openly in question, and a lot of us have the flexibility and inclination to choose between a range of platforms, whether those platforms are personal computers, game consoles, or mobile devices. Microsoft’s offer in that world is lock-in to Windows, in exchange for powerful integrated platforms like .NET which are far more capable than their competitors (e.g. Java, which is just pathetic). That was an excellent trade-off for many years. Looking back now, though? The Windows tech hegemony is a graveyard. XNA. Silverlight. WPF. DirectX. Managed C++. C++/CLI. Managed DirectX. Visual Basic. So when you guys come knocking and ask us to commit to Metro — sorry, the Windows 8 User Experience — and its associated tech?

You’ll understand if I am not in a hurry to start coding for your newest framework.

Before things get out of hand: No, you should not switch to OpenGL. I get to use it professionally every day and it sucks. Direct3D 11 with the Win8 SDK is a perfectly viable choice, much more so than OpenGL for high end development. Nothing in my frequent complaints should be taken to imply in any way that OpenGL is a good thing.

The Promise of Motion Control

I saw a blog post on IGN today: 4 reasons why the Nintendo Wii U will fail by Ian Fisch. I won’t comment on the WiiU, because I was one of the people who said the Wii was going to flop and man oh man was I ever off the mark on that one. But I did want to highlight a particular chunk of his post:

When people think of the massive success of the Nintendo Wii, they usually think of middle-aged moms playing Wii Fit, and senior citizens playing Wii Sports bowling at the retirement home. Indeed, the success of the Wii, much like the success of the Nintendo DS was due, in a large part, to casual gamers. We tend to forget that, originally, the excitement for the Wii was at a fever pitch among hardcore gamers. If you were a hardcore gamer then, you might remember sharing Eric Cartman’s excitement over the potential of Wii’s “motion control controls.”

It was hardcore gamers that gave the Wii its terrific launch. For about a year and a half, hardcore gamers were as enthusiastic about the Wii as their out-of-shape mothers soon would be. Of course, once hardcore gamers discovered the severe limitations of the Wii’s motion controls, the system became little more than a dust collector. The Wii U will not get this initial surge of excitement from hardcore gamers. The original Wii tantalized the hardcore set with the (false) promise of a new level of immersion – a step toward virtual reality.

I currently work for the BLAM Lab at Johns Hopkins University, which is part of the Department of Neurology. I helped found a group here called Kata. The Kata Project exists for a lot of reasons, but this idea is really our heart and soul:

In Japanese language, kata (though written as 方) is a frequently-used suffix meaning “way of doing,” with emphasis on the form and order of the process. Other meanings are “training method” and “formal exercise.” The goal of a painter’s practicing, for example, is to merge his consciousness with his brush; the potter’s with his clay; the garden designer’s with the materials of the garden. Once such mastery is achieved, the theory goes, the doing of a thing perfectly is as easy as thinking it.

I’m doing a rich mix of work here, centered around game development not only for medical and scientific research purposes but also commercial production. The key point, though, is that everything we do is centered around the study of biological motion and what it means for games. We’ve got touch, Wii, PS Move, Kinect, Leap, and whatever else is coming down the pipeline, and I don’t feel that the potential of any of those devices has really been explored properly. The Wii implied something that it turned out not to be, sadly. Motion control itself, combined with game design that really focuses on using it in new and interesting ways, has a very distinct future separate from what we’ve got today. Fruit Ninja is an early expression of it, I think. Of course I believe that we’ll be the ones to crack the code, but no matter how it happens I find it extremely interesting to observe what people are doing with the rich data we can get out of motion control systems. So far Kinect and most iPad games seem to be an expression of how much data we can throw away, instead. That needs to change.

Gamma FAQ

I am working on Part 2 of my Digital Color posts, but it won’t be ready for a while yet. The goal of that post is to talk all about luminance, brightness, gamma, and the various other attributes and properties of how light a color is, rather than what shade it is.

In the meantime, please accept my apology and consider reading this page I found: the Gamma FAQ by Charles Poynton.
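
As a tiny taste of what that post will cover, here is a minimal sketch of the sRGB transfer function, the near-power-law “gamma” curve most images are encoded with. The piecewise form below follows the IEC 61966-2-1 specification:

#include <cmath>

//Convert a linear-light value in [0,1] to its sRGB-encoded ("gamma corrected") form.
float LinearToSrgb(float linear)
{
	if(linear <= 0.0031308f)
		return 12.92f * linear;
	return 1.055f * std::pow(linear, 1.0f / 2.4f) - 0.055f;
}

//Invert the encoding: recover linear light from an sRGB-encoded value.
float SrgbToLinear(float encoded)
{
	if(encoded <= 0.04045f)
		return encoded / 12.92f;
	return std::pow((encoded + 0.055f) / 1.055f, 2.4f);
}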

C++-JSON Serialization

I’ve decided to share some code today, just because I’m such a nice guy. Those of you who enjoy the more perverse ways to apply C++ tricks will enjoy this. Those who prefer simpler, more primitive approaches (that’s not a bad thing) may not appreciate this creation as much. What I’ve got here is a utility class that makes it fairly straightforward to serialize C++ objects to and from JSON using the generally decent JsonCpp library. Hierarchies are properly saved and loaded with no real effort. It works well for us, and probably has plenty of limitations too. Maybe some of you out there will find it useful. It seems to be difficult to find decent serialization code that isn’t also somehow awful to use.

This lives in a single file, but the bad news is that it takes a boost dependency in order to get type traits. I think everything I’m using from boost was added to the C++ core as of TR1, but I haven’t checked. It also depends on JsonCpp, but changing it over to use other JSON, XML, binary, etc. libraries shouldn’t be terribly difficult. I don’t know how this compares to other serialization libraries, but boost::serialization sounded like a train-wreck so I wrote my own.
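
For what it’s worth, the boost pieces involved map directly onto the C++11 <type_traits> equivalents, so dropping the dependency should mostly be mechanical. Here’s a sketch of the correspondence; I haven’t actually converted this exact file, so treat it as a starting point:

#include <type_traits>

//boost::enable_if<Cond>    ->  std::enable_if<Cond::value>
//boost::disable_if<Cond>   ->  std::enable_if<!Cond::value>
//boost::is_class<T>        ->  std::is_class<T>
//boost::is_fundamental<T>  ->  std::is_fundamental<T>
//boost::is_enum<T> and boost::is_arithmetic<T> likewise
//
//e.g. the class-type overload's signature would become:
//	template<typename TKey, typename TValue>
//	void Serialize(TKey key, TValue& value,
//		typename std::enable_if<std::is_class<TValue>::value>::type* dummy = 0);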

Let’s cover usage first. Generally speaking, you’ll simply add a member function to a structure that declares the members to be serialized (free functions are allowed too). Each declaration is a string name for the value, and the variable to be serialized under that value. There are a few macros to combine those via the preprocessor. Values can also be read-only or write-only serialized. The serializer is able to traverse vectors and structures, and will produce nicely structured JSON. Here’s a sample:

void Serialize(Vector3D& vec, JsonSerializer& s)
{
	//each Vector3D is written as an array
	s.Serialize(0, vec.x);
	s.Serialize(1, vec.y);
	s.Serialize(2, vec.z);
}

struct PathSave {

	bool LeftWall;
	bool RightWall;
	float LeftWallHeight;
	float RightWallHeight;

	vector<Vector3D>	c_Center,
				c_Left,
				c_Right;

	vector<int> hitPillarSingle_type;
	vector<struct PCC> hitPillarSingle_pcc;
	vector<Vector3D> hitPillarSingle_hh;

	int pathPointDensity;
	int pillarNum;
	
	void Serialize(JsonSerializer& s)
	{
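		//note: s.SerializeNVP(x) expands to s.Serialize("x", x) via the NVP macros at the bottom of the header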
		s.SerializeNVP(LeftWall);
		s.SerializeNVP(RightWall);
		s.SerializeNVP(LeftWallHeight);
		s.SerializeNVP(RightWallHeight);
		
		s.Serialize("Center", c_Center);
		s.Serialize("Left", c_Left);
		s.Serialize("Right", c_Right);
		
		s.SerializeNVP(hitPillarSingle_pcc);
		s.SerializeNVP(hitPillarSingle_hh);
		s.SerializeNVP(hitPillarSingle_type);
		
		s.WriteOnly(NVP(pathPointDensity));
		s.ReadOnly(NVP(pillarNum));
	}
};

void SaveToFile(PathSave& path)
{
	JsonSerializer s(true);
	path.Serialize(s);
	std::string styled = s.JsonValue.toStyledString();
	printf("Saved data:\n%s\n", styled.c_str());
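	//(a real version would write 'styled' out to disk here instead of just printing it)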
}

bool LoadFromFile(const char* filename, PathSave& path)
{
	std::string levelJson;
	bool result = PlatformHelp::ReadDocument(filename, levelJson);
	if(!result)
		return false;

	JsonSerializer s(false);
	Json::Reader jsonReader;
	bool parse = jsonReader.parse(levelJson, s.JsonValue);
	if(!parse)
		return false;
	
	path.Serialize(s);
	return true;
}

And that will generally produce something that looks like this:

{
    "LeftWall" : true,
    "LeftWallHeight" : 4.50,
    "RightWall" : true,
    "RightWallHeight" : 4.50,
    "Center" : [
    [ 0.05266714096069336, 0.0, -15.13085746765137 ],
    [ 0.1941599696874619, 0.0, 1.553306341171265 ],
    [ 0.5984783172607422, 0.0, 50.54330444335938 ]
    ],
    "Left" : [
    [ -10.44694328308105, 0.0, -15.04044914245605 ],
    [ -15.55506420135498, 0.0, 1.709598302841187 ],
    [ -8.466680526733398, 2.896430828513985e-07, 42.68054962158203 ]
    ],
    "Right" : [
    [ 10.55227851867676, 0.0, -15.22126579284668 ],
    [ 15.94338321685791, 0.0, 1.397014379501343 ],
    [ 9.663637161254883, -1.829234150818593e-07, 58.40605926513672 ]
    ],
    "hitPillarSingle_hh" : null,
    "hitPillarSingle_pcc" : null,
    "hitPillarSingle_type" : null,
    "pathPointDensity" : 24,
    "pillarNum" : 0,
}

Now I happen to think that’s fairly tidy, as far as C++ serialization goes. Symmetry is maintained between read and write steps, and there’s very little in the way of syntax magic. I do have a few macros in there (the stuff that says NVP), but they’re optional and I find that they clean things up. Now shield your eyes, because here is the actual implementation.

/*
 * Copyright (c) 2011-2012 Promit Roy
 * 
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 * 
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 * 
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

#ifndef JSONSERIALIZER_H
#define JSONSERIALIZER_H

#include <json/json.h>
#include <boost/utility.hpp>
#include <boost/type_traits.hpp>
#include <string>

class JsonSerializer
{
private:
	//SFINAE garbage to detect whether a type has a Serialize member
	typedef char SerializeNotFound;
	struct SerializeFound { char x[2]; };
	struct SerializeFoundStatic { char x[3]; };
	
	template<typename T, void (T::*)(JsonSerializer&)>
	struct SerializeTester { };
	template<typename T, void(*)(JsonSerializer&)>
	struct SerializeTesterStatic { };
	template<typename T>
	static SerializeFound SerializeTest(SerializeTester<T, &T::Serialize>*);
	template<typename T>
	static SerializeFoundStatic SerializeTest(SerializeTesterStatic<T, &T::Serialize>*);
	template<typename T>
	static SerializeNotFound SerializeTest(...);
	
	template<typename T>
	struct HasSerialize
	{
		//accept an instance member or a static member Serialize; either can be invoked as value.Serialize(*this)
		static const bool value = (sizeof(SerializeTest<T>(0)) == sizeof(SerializeFound))
			|| (sizeof(SerializeTest<T>(0)) == sizeof(SerializeFoundStatic));
	};
	
	//Serialize using a free function defined for the type (default fallback)
	template<typename TValue>
	void SerializeImpl(TValue& value,
						typename boost::disable_if<HasSerialize<TValue> >::type* dummy = 0)
	{
		//prototype for the serialize free function, so we will get a link error if it's missing
		//this way we don't need a header with all the serialize functions for misc types (eg math)
		void Serialize(TValue&, JsonSerializer&);
		
		Serialize(value, *this);
	}

	//Serialize using a member function Serialize(JsonSerializer&)
	template<typename TValue>
	void SerializeImpl(TValue& value, typename boost::enable_if<HasSerialize<TValue> >::type* dummy = 0)
	{
		value.Serialize(*this);
	}
	
public:
	JsonSerializer(bool isWriter)
	: IsWriter(isWriter)
	{ }
	
	template<typename TKey, typename TValue>
	void Serialize(TKey key, TValue& value, typename boost::enable_if<boost::is_class<TValue> >::type* dummy = 0)
	{
		JsonSerializer subVal(IsWriter);
		if(!IsWriter)
			subVal.JsonValue = JsonValue[key];
		
		subVal.SerializeImpl(value);
		
		if(IsWriter)
			JsonValue[key] = subVal.JsonValue;
	}
		
	//Serialize a string value
	template<typename TKey>
	void Serialize(TKey key, std::string& value)
	{
		if(IsWriter)
			Write(key, value);
		else
			Read(key, value);
	}
	
	//Serialize a non class type directly using JsonCpp
	template<typename TKey, typename TValue>
	void Serialize(TKey key, TValue& value, typename boost::enable_if<boost::is_fundamental<TValue> >::type* dummy = 0)
	{
		if(IsWriter)
			Write(key, value);
		else
			Read(key, value);
	}
	
	//Serialize an enum type to JsonCpp 
	template<typename TKey, typename TEnum>
	void Serialize(TKey key, TEnum& value, typename boost::enable_if<boost::is_enum<TEnum> >::type* dummy = 0)
	{
		int ival = (int) value;
		if(IsWriter)
		{
			Write(key, ival);
		}
		else
		{
			Read(key, ival);
			value = (TEnum) ival;
		}
	}
	
	//Serialize only when writing (saving), useful for r-values
	template<typename TKey, typename TValue>
	void WriteOnly(TKey key, TValue value, typename boost::enable_if<boost::is_fundamental<TValue> >::type* dummy = 0)
	{
		if(IsWriter)
			Write(key, value);
	}
	
	//Serialize a series of items by start and end iterators
	template<typename TKey, typename TItor>
	void WriteOnly(TKey key, TItor first, TItor last)
	{
		if(!IsWriter)
			return;
		
		JsonSerializer subVal(IsWriter);
		int index = 0;
		for(TItor it = first; it != last; ++it)
		{
			subVal.Serialize(index, *it);
			++index;
		}
		JsonValue[key] = subVal.JsonValue;
	}
	
	template<typename TKey, typename TValue>
	void ReadOnly(TKey key, TValue& value, typename boost::enable_if<boost::is_fundamental<TValue> >::type* dummy = 0)
	{
		if(!IsWriter)
			Read(key, value);
	}

	template<typename TValue>
	void ReadOnly(std::vector<TValue>& vec)
	{
		if(IsWriter)
			return;
		if(!JsonValue.isArray())
			return;
		
		vec.clear();
		vec.reserve(JsonValue.size());
		for(Json::ArrayIndex i = 0; i < JsonValue.size(); ++i)
		{
			TValue val;
			Serialize(i, val);
			vec.push_back(val);
		}
	}
	
	template<typename TKey, typename TValue>
	void Serialize(TKey key, std::vector<TValue>& vec)
	{
		if(IsWriter)
		{
			WriteOnly(key, vec.begin(), vec.end());
		}
		else
		{
			JsonSerializer subVal(IsWriter);
			subVal.JsonValue = JsonValue[key];
			subVal.ReadOnly(vec);
		}
	}
	
	//Append a Json::Value directly
	template<typename TKey>
	void WriteOnly(TKey key, const Json::Value& value)
	{
		Write(key, value);
	}
	
	//Forward a pointer
	template<typename TKey, typename TValue>
	void Serialize(TKey key, TValue* value, typename boost::disable_if<boost::is_fundamental<TValue> >::type* dummy = 0)
	{
		Serialize(key, *value);
	}
	
	template<typename TKey, typename TValue>
	void WriteOnly(TKey key, TValue* value, typename boost::disable_if<boost::is_fundamental<TValue> >::type* dummy = 0)
	{
		Serialize(key, *value);
	}
	
	template<typename TKey, typename TValue>
	void ReadOnly(TKey key, TValue* value, typename boost::disable_if<boost::is_fundamental<TValue> >::type* dummy = 0)
	{
		ReadOnly(key, *value);
	}
	
	//Shorthand operator to serialize
	template<typename TKey, typename TValue>
	void operator()(TKey key, TValue& value)
	{
		Serialize(key, value);
	}
	
	Json::Value JsonValue;
	bool IsWriter;
	
private:
	template<typename TKey, typename TValue>
	void Write(TKey key, TValue value)
	{
		JsonValue[key] = value;
	}
				  
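	//Fallback for other arithmetic types (short, char, etc); round-trips through int.
	//The exact overloads below (bool/int/uint/float/double) are preferred when they match.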
	template<typename TKey, typename TValue>
	void Read(TKey key, TValue& value, typename boost::enable_if<boost::is_arithmetic<TValue> >::type* dummy = 0)
	{
		int ival = JsonValue[key].asInt();
		value = (TValue) ival;
	}
	
	template<typename TKey>
	void Read(TKey key, bool& value)
	{
		value = JsonValue[key].asBool();
	}
	
	template<typename TKey>
	void Read(TKey key, int& value)
	{
		value = JsonValue[key].asInt();
	}
	
	template<typename TKey>
	void Read(TKey key, unsigned int& value)
	{
		value = JsonValue[key].asUInt();
	}
	
	template<typename TKey>
	void Read(TKey key, float& value)
	{
		value = JsonValue[key].asFloat();
	}
	
	template<typename TKey>
	void Read(TKey key, double& value)
	{
		value = JsonValue[key].asDouble();
	}
	
	template<typename TKey>
	void Read(TKey key, std::string& value)
	{
		value = JsonValue[key].asString();
	}
};

//"name value pair", derived from boost::serialization terminology
#define NVP(name) #name, name
#define SerializeNVP(name) Serialize(NVP(name))

#endif

Now that’s not so bad, is it? A bit under three hundred lines of type traits and template games and we’re ready to get on with our lives. A lot of the code is just fussing about what type it’s being applied to and drilling down to the correct read or write function. The SFINAE based block at the top of the class is used to locate the correct Serialize function for any given type, which can be an instance member function, static member function, or free function.
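
If you want to play with the detection idiom in isolation, here is a stripped-down sketch of the same sizeof/SFINAE trick, using hypothetical Foo and Bar types (C++03-compatible, like the class above):

#include <cstdio>

struct Foo { void Serialize(int&) { } };   //has the member
struct Bar { };                            //does not

typedef char NotFound;                     //sizeof == 1
struct Found { char x[2]; };               //sizeof == 2

template<typename T, void (T::*)(int&)>
struct Tester { };

template<typename T>
Found Test(Tester<T, &T::Serialize>*);     //viable only if T::Serialize exists
template<typename T>
NotFound Test(...);                        //fallback otherwise

template<typename T>
struct HasSerialize
{
	static const bool value = sizeof(Test<T>(0)) == sizeof(Found);
};

int main()
{
	printf("Foo: %d, Bar: %d\n", (int)HasSerialize<Foo>::value, (int)HasSerialize<Bar>::value);
	return 0;
}

Run it and it prints “Foo: 1, Bar: 0”: taking &Bar::Serialize fails template substitution, so overload resolution silently falls back to the ellipsis version.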

There is your free C++ to JSON serializer utility class for the day, complete with ultra permissive license. Enjoy.

Cinematic Color

I chose not to go to SIGGRAPH 2012, and I’m starting to wish I had. Via Julien Guertault, I found the course on Cinematic Color.

I’ve mentioned this in the past: I believe that as a graphics programmer, a thorough understanding of photography and cinematography through the entire production pipeline is necessary. Apparently I am not alone in this regard. Interesting corollary: should cinematographers understand computer graphics? Hmm.

Review: Olympus OM-D E-M5


I’ve mentioned in the past that as an extension of my game development work, I began to explore photography. I’m a big fan of the Micro Four Thirds mirrorless cameras, and I recently purchased the newest iteration in the line: the Olympus OM-D E-M5. I thought I’d go ahead and do a review, since a number of people have asked me about the camera. To make a long story short, Olympus has finally gotten serious and this camera is a force to be reckoned with. Much more importantly, the E-M5 is a lot of fun to shoot with. I enjoy photography much more with it than anything I’ve ever used, and for an enthusiast that’s crucial.

This review is not meant to be all encompassing; see DPReview for that. Rather, I want to focus on the things that I feel are often lost in normal reviews, and provide some introduction to these cameras in general.

Mirrorless?

I’ll start with a quick prelude for those of you who aren’t in the know, since this isn’t really a photography blog (not yet, anyway). Until recently, there were two kinds of cameras that the mainstream public knew and cared about: digital compacts and digital SLRs. A compact is an integrated package with a sensor and lens all together. They’re typically priced anywhere from $50 to $500, and they’re a one shot purchase: camera, done. They almost always use small, low quality image sensors, on the scale of 5-8mm diagonal. This helps keep the size of the optics down and the overall package small. They also tend to have low end processing hardware and limited control over the result. Compacts also eschew viewfinders, running their sensor in video mode to the LCD to display an image preview. Most people take this functionality for granted. Some have electronic viewfinders, which are just small lenses over tiny LCD screens of varying quality and size.

On the other end, we have digital single lens reflex (DSLR) cameras. The SLR design became big in the sixties as a compact film camera that allowed a photographer to see precisely what the film was going to see via a mirror/prism arrangement, and set exposure parameters based on that information. Modern “pro” cameras are identical in most ways to the film cameras of the late nineties, with the film replaced by a digital sensor and guts. DSLRs use large sensors (21mm-55mm diagonal), and feature large interchangeable lenses. They also have high end processors on board, lots of memory, and sophisticated controls. Until a few years ago, DSLRs could not run their sensors in video mode; they were unable to record videos and unable to display a live feed on the LCD. This was a limitation of the sensor hardware, and using the optical mirrored viewfinder was the only way to preview your shot. Although modern DSLRs have overcome this limitation and now support “Live View”, they are not well suited to this mode of operation and it’s generally not how you’ll want to use the camera.

Mirrorless cameras split the difference. By designing a compact-type camera with a video-compatible sensor and interchangeable lenses, these cameras shed most of the bulk and limitations of a DSLR while boasting far more powerful processing and far better images than any compact camera. The idea was really pioneered by a cooperation between Olympus and Panasonic called Micro Four Thirds. This shared format came to fruition in 2008 and sent a shockwave through the industry. Sony, Samsung, Fuji, Nikon, Pentax, and Canon have all stepped into the arena with their own competitors in this new class.

Micro Four Thirds

When the transition from film to digital happened, the vast majority of consumer equipment was designed for the 135 (35mm film) standard. Companies ran up against a problem: nobody knew how to create a digital sensor quite that large (“full frame”). Canon managed to produce one in 2002, the 1Ds, for $7,999. Nikon would not release one until the D3 in 2007, for $4,999. It was necessary to experiment with smaller sensor standards to produce consumer priced digital cameras, and most settled on the APS-C size with a diagonal of about 28mm on a 3:2 aspect ratio, versus full frame’s 43mm. APS-C sensor cameras only see the middle of the image projected by a 35mm lens, cropping the image down into a narrower field of view. APS-C has a “crop factor” of around 1.5, meaning that film lenses are effectively 1.5x narrower than they would be on a full frame camera. Despite the common sensor and film formats, each manufacturer makes its own lens system, and those systems are for the most part mutually incompatible.
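
To put numbers on that: the crop factor is just the ratio of the diagonals, so for APS-C it works out to roughly 43mm ÷ 28mm ≈ 1.5. A 50mm film-era lens mounted on an APS-C body therefore frames like a 50 × 1.5 = 75mm lens would on full frame.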

Olympus, meanwhile, decided to go with a smaller sensor format called Four Thirds, with a 4:3 aspect ratio and an image diagonal of about 21.6mm and a crop factor of 2x. (A 25mm lens is considered “normal”.) They did this to try and produce more compact DSLR cameras, similar to their old OM film SLRs. They tried to share this standard with several other manufacturers of cameras and lenses, but it never really caught on as a mainstream lineup. The Four Thirds options lagged their bigger competitors in performance and nobody really wanted a fairly big camera with fairly mediocre performance.

Micro Four Thirds (m4/3) was announced by Olympus and Panasonic in 2008 as a mirrorless interchangeable lens camera (MILC) line. The m4/3 cameras used the same sensor, but a mirrorless design to cut back dramatically on overall size. They leveraged tricks like digital image correction to reduce size further, and boasted the promise of high quality video support. The first cameras were the Panasonic G1 and the Olympus E-P1. The bad news is that the price was not particularly different from full blown DSLRs, and the cameras were small but not pocket-small. Combined with a wide range of technical and performance limitations, the cameras basically sucked in terms of bang for the buck. The value was in the flexibility of size and interchangeable lenses, supposedly. I’ve been very fond of these cameras for a long time, but that was due to personal quirks: I hate viewfinders, and DSLRs are fairly awful at video.

Olympus OM-D E-M5

Never mind the ridiculous name; the Olympus OM-D is the real deal at long last. This is my fourth Olympus and my sixth m4/3 body. Olympus’ previous m4/3 cameras, the PEN series, were designed as compact cameras on steroids: plastic build, simplified interfaces, mediocre sensor performance, and in many cases mediocre autofocus. This new camera is the genesis of a semi-pro lineup and it has the spec sheet to match. Magnesium build with full dust and splash proofing. An integrated high resolution electronic viewfinder (EVF), physical control dials, an accessory battery grip, and most welcome of all: a brand new 16 MP image sensor by Sony that is now able to compete with the DSLRs on even footing. Olympus has finally given us something that isn’t a toy, for $999 body-only.

The body’s available in silver or black. Olympus is going for a retro throwback here, and I find the silver to be a beautiful look that stands out from the crowd in a good way. The viewfinder hump is a bit goofy thanks to the physical size of the stabilization system and accessory port, but the body is extremely compact overall. The handling is good, but not great. Olympus continues an unfortunate affectation of minimal grips on their cameras. Handling with larger lenses is compromised as a result, and large hands won’t appreciate the form in general. The battery grip supposedly makes a dramatic difference, but at USD $300 it’s a steep price to pay. The strap lugs are also the stupid compact style, with keychain-type D-rings needed to actually get a strap on. Pointless inconvenience. The front of the viewfinder hump holds stereo mics with better than usual separation, and there’s a normal hotshoe on top. Olympus has included their accessory port here, which makes the viewfinder hump comically oversized for something that shouldn’t be necessary. But since there’s no mic input and no built-in flash, you’ll need the accessory port often to drive those accessories.

The buttons are tiny, because the screen takes up most of the tiny body’s space. They’re also squishy thanks to the weather sealing. Olympus’ buttons have continuously shrunk over the years, and the OM-D is really starting to test the limits. I don’t find it to be a problem, but this is getting ridiculous even for my Asian hands. The dials are very nice, and something about Olympus shutter buttons is just so much nicer than other cameras I’ve tried. So is the actual shutter noise, a nice subtle click that doesn’t carry. The camera has a trio of customizable function buttons, and completely arbitrary restrictions on which button can be set to which functions (underwater mode yes, bracketing mode no). Haphazard, half-baked customization is a theme that continues throughout the camera; the customization menu contains 87 different settings, some of which branch further off into sub-settings. For the most part it’s possible to set things up exactly as you want, and equally as easy to screw them up in weird ways. Olympus offers a “Myset” system to save camera settings, but I don’t find it to be useful since the only way to get to them quickly is to assign a function button. I’d rather set the button to something useful, thanks.

The viewfinder is a large and beautiful 800×600 RGB LCD panel. It isn’t a color-sequential display like some manufacturers use (*ahem* Panasonic), and the contrast and brightness are way punchier and more pleasant than some competitors’ (*ahem* Panasonic). It’s not quite up to the spec of the Sony NEX-7, sadly, but it is wonderful to use. There’s a built in proximity detector to activate the EVF, and it works very smoothly. There’s no sensitivity adjustment, which can mean a lot of accidental switching, but as a nice extra touch you can toggle the sensor and the active screen simply by holding down a button on the side of the viewfinder. The rear screen is a large tilting OLED panel with excellent color and brightness, albeit at a lower resolution than the EVF. Again not up to Sony’s standard, but Panasonic should be taking notes. A flip out swivel screen would’ve been nice, though.

Performance

Let’s start with that new sensor: it’s fantastic. The dated Panasonic chip used in previous Olympus bodies has been replaced with a state of the art Sony unit. It is able to keep pace with the NEX-5N and 7, considered the standard-bearers of APS-C quality. ISO 3200 is clean enough to print, and I’m getting perfectly decent screen-resolution shots at ISO 12800 with RAW processing in Lightroom. Stunning. The Olympus JPEG engine has always been stellar, but it visibly disintegrates at 6400 and above — process RAW files yourself in high ISO situations. Dynamic range is traditionally a severe limitation of the m4/3 cameras, and the new Sony supplied sensor seems to do a fantastic job. There’s also something about the Olympus color rendition even in RAW that I find extremely pleasant and far nicer than any manufacturer out there except maybe Fuji.

While working with the Olympus ISO 12800 files, I’m finding something unexpected: I don’t mind the noise. Don’t get me wrong, the shots have plenty of noise to go around and we’re still not getting quite as clear results as the best of the new APS-C sensors (though it is better than any APS-C sensor from a year or two ago.) No, it’s not the amount of noise at play but the pattern, which has a very natural smooth feel to it after just a kiss of chroma noise reduction — Lightroom’s 25 default does just fine. It doesn’t interfere badly with the image at normal magnifications, and for many purposes I’m finding that I’m happy without going through the careful NR-sharpening balancing act that high-ISO shots typically require. This is something you won’t get from the test charts or DXO numbers. Camera sensors show noise in different patterns and types; many degrade into a color-splotched mess particularly in the shadows. This sensor degrades cleanly and elegantly into a film-like look that is easy to correct in post and easy to live with.

Here’s a secret that people don’t often mention: Olympus and Panasonic have stellar autofocus systems, better than what you get out of a typical midrange DSLR and kit lens. The other mirrorless systems, like Sony NEX, cannot compete. The basic entry level m4/3 kit lenses are able to match expensive supersonic drive DSLR lenses in speed, with dead silent, video compatible internal focus mechanisms. Focus is also dead accurate, since it’s driven by the sensor and tolerances don’t matter. The only downside is that Olympus doesn’t offer resizable focus zones. The default is large enough to pick the wrong object to focus on, which can be an unpleasant surprise. The real bad news comes in with continuous or tracking autofocus, which basically doesn’t work even in 120Hz high speed mode. In practice, single acquisition is so fast that you can often use it to replace continuous mode in DSLRs. For photos, the OM-D won’t be able to match the speed of a Sony SLT or pro DSLR with high end lenses. DSLR kit lens users will discover that they were lied to about what m4/3 focus performance is like, but sports shooters relying on AF-C will be sorely disappointed. Caveat emptor.
For those interested, Roger Cicala wrote about DSLR AF accuracy.

Did I mention it can fire at 9 fps? Because it can, with a fat buffer that will go for 14 JPEG+RAW shots and full stabilization. I’ve seen it take 20 JPEGs on a fast card before running out of steam. (Compare to the GH2 at 5fps and a buffer of about 7 shots.) Continuous AF only becomes available at 4 fps, although it probably won’t actually work at that speed. Want to shoot fast action? Dial in your focus and wait for the target to come to you. At 9fps, your odds are very good, about as good as it gets for a consumer level camera. Only the Sony SLT line will go ever so slightly quicker, if you really need even more.

Olympus offers a sensor-shift stabilization system integrated into the body. The stabilizer can stabilize any lens, including adapted lenses from other systems. Sony Alpha and Pentax DSLRs offer similar systems. These systems traditionally suffer several flaws: they can only correct for translational motion, the correction is not always as good as lens-based optical systems, and they have a tendency to overheat, which makes them useless for long exposures or video use. Olympus has conquered all of these problems with a new 5-axis system that electromagnetically floats the sensor full time, compensating for rotational and translational motion even during video recording as well as in the viewfinder. The sheer ability of the 5-axis to lock the sensor down defies belief. Hand-held long exposures are possible, and video gains a silky smoothness that can trick a viewer into thinking a rig was used. Here’s a video from Engadget showing the stabilizer demo unit:

Of course Olympus giveth and Olympus taketh away; the camera refuses to stabilize non-electronic lenses in video, for no readily apparent reason. Let this be your first hint that Olympus does not understand high end video. UPDATE: Olympus has added stabilization for legacy/adapted lenses in firmware 1.5.

Battery life, however, is not good. Buy a spare battery or two…or more, if you’re a heavy shooter. Chinese generics from eBay work just fine although they seem to have slightly shorter lifespans than the OEM version. In general I find it’s best to keep at least two batteries for a high end camera, but this thing works through them fairly quickly. It’s not so bad if you only use the viewfinder and leave the main LCD off, but your battery times are going to be much closer to a compact than an SLR. Olympus ships a dedicated charger with an awkward cord; no USB charging here, so don’t lose the charger or forget it on a trip. You won’t find a spare. Might want to pick one up with those extra batteries.

M.Zuiko 12-50mm Kit Lens

The OM-D is available with a new weather sealed kit lens, the 12-50mm. It’s also available with the old 14-42mm kit lens, but that lens isn’t sealed and doesn’t have the range. It IS a lot smaller, which brings us to the real problem with the 12-50mm: it should never have been made. It’s not a bad lens; it’s sharp, weather sealed, wide ranged (24-100mm equivalent), with mechanical AND power zoom modes for photo and video, plus a macro mode that gets to about 0.75x. The trouble is that it’s unreasonably large, unreasonably slow (f/6.3 at the long end), unreasonably expensive ($300 in kit, $500 standalone), and solves problems no one ever asked to be solved. It was a total waste of Olympus engineering time. A $1,500 kit with a new native conversion of the 12-60mm or 14-54mm would have been an absolutely incredible kit to offer. As it is, the kit is useful but mediocre.

Taking the lens on its own merits, there are some positives. Optically it is excellent for a kit zoom type lens with the ultra-sharp look that is standard for Zuikos, even wide open. The lens is not only internal focus but also internal zoom, which brings a welcome subtlety to candid photography work versus the telescoping monstrosities most people are used to. The zoom ring slides forwards and backwards to toggle lens modes, and can be bumped easily but works well overall. The electronic zoom is legitimately useful for video. The mechanical zoom is a bit odd though, as you can hear and feel the internal zoom motor being dragged along and there’s a weak hard stop that allows the ring to continue spinning. A macro button allows the lens to be locked to its maximum magnification and a limited focus range. Macro mode is very sharp and gets in very close.

If it were really necessary to produce a slow consumer kit lens, I would’ve preferred that Olympus spend the time producing a more compact weather sealed zoom. But a semi-pro camera deserves a semi-pro lens, and this isn’t it. The range is useful, but the lens is slower across the range than the normal Panasonic and Olympus 14-42mm lenses by about a third of a stop. What’s the point of having a wonderful new sensor sunk into noise because the lens is wide open at f/6.3? That’s a cruel joke. A 12-60 f/2.8-4 kit could’ve shaken up the entire industry.

System Lenses

The Nikon F mount for their SLR cameras was introduced to the world in 1959, and for the most part you have always been able to mount your Nikon lenses on newer cameras. The Canon EF mount was introduced in 1987 and again, lenses from then on have always been fully functional. Buying into a popular SLR system has always meant an enormous range of available lenses created over the course of decades. Even now, most of the Nikon and Canon lenses (including the entire L series) are designed for full-frame rather than APS-C formats and are often awkward on crop formats. Micro Four Thirds was created in 2008, and other mirrorless systems were introduced even later. Sony and Samsung’s entries appeared in 2010, Nikon’s in late 2011, and Pentax/Fuji/Canon’s in 2012. Lens choice and pricing are a soft spot for all of these lines. (Most can adapt SLR lenses, with varying degrees of success.) Micro Four Thirds has a few unique advantages over the other manufacturers, though.

Not only is m4/3 the oldest system, but it also has two manufacturers committed to producing both bodies and lenses that are mostly cross compatible. By building largely complementary sets of lenses, the two companies have given the system a broad lens lineup in a very short time. The system isn’t “complete” yet, in that there are still major holes which need to be filled for general purpose use. It is, however, far, far ahead of the competitors. They’ve also focused on producing very good quality lenses at consumer prices; if you want something dirt cheap or ultra high end, you’re likely to be disappointed. (Panasonic is just rolling out their first constant aperture pro zooms this year, and there are barely any sub-$300 lenses.) On the other hand, it’s almost impossible to make a bad choice with the lenses that are available. All of them are optically stellar, even the relatively poor and very dated Olympus 17mm pancake. Lenses like the Panasonic 20mm f/1.7 pancake, Panasonic-Leica 25mm f/1.4, and Olympus 45mm f/1.8 are considered practically classic.

One of the stated goals of mirrorless was to decrease the size not only of the camera bodies, but also of the lenses. m4/3 accomplishes this in three ways. First, the very short flange distance (the distance between the sensor and the lens mount) allows lenses to be designed more simply and mounted much closer. Second, the smaller and closer-to-square Four Thirds sensor allows for smaller image circles that are used more efficiently than the traditional 3:2 film format. Third, m4/3 relies heavily on digital correction of lens issues like distortion and chromatic aberration, which would previously have required heavy and expensive glass elements to fix. The 20mm pancake is actually one of the sharpest lenses for the system, an inch deep and under $360.

All together, there are around 25 current electronically enabled lenses for the system, with a handful of manual focus native lenses as well. Ultra wide angle: the 7-14mm or 9-18mm. Wide angle: the 12/2.0 or 14/2.5. Fisheye? Two of them. Normal lenses: pick from the 17/2.8, 19/2.8, 20/1.7, and 25/1.4. If raw aperture is your thing and price is no object, Voigtlander will sell you a 17/0.95 and a 25/0.95. Macro comes from the 12-50, the 45/2.8, or the upcoming 60mm. The 45/1.8 and 75/1.8 fulfill portrait needs. The 14-150 and 14-140 OIS are fantastic all-in-one superzooms. And I’m not going to even start naming all the telephoto options, but the 100-300 gives most people as much reach as their heart desires.

Video

With the advent of digital photography, camera companies (Nikon, Olympus, Fuji) collided with consumer electronics companies (Sony, Panasonic, Canon) in producing cameras. The consumer electronics guys make a variety of fantastic video cameras. The camera companies still seem somewhat baffled about what exactly video is for and what video people want. Fuji in particular makes the best film lenses on the planet, but cannot understand what to do with video recording. The previous m4/3 flagship camera was the Panasonic GH2, and it’s such an eminently capable camera that in the latest Zacuto shootout, it was frequently mistaken for the RED Epic and has proven to be one of the most popular cameras in that blind test. The OM-D will not be showing up in any such tests.

Let’s start with features: it writes h.264 files in a QuickTime MOV container to the same directory as photos. Most cameras emulate a very confusing Blu-ray disc file structure on the card so that you can directly burn your card to a Blu-ray once you’re done filming. This is exactly the sort of moronic “feature” a consumer electronics company would come up with, and one that maybe four people in the world have ever actually used. Olympus’ version is a welcome change. It will AF during video, and rolling shutter is decently well controlled. Olympus also offers the amazing 5-axis stabilization with electronic lenses, and the results of that stabilization really cannot be overstated. It is absolutely stellar. Of course you can’t stabilize the lenses you’d actually want to use for filming, like the Voigtlander f/0.95 primes.

Trouble is, that’s where the features end. Output is 1080i/60 or 720p/60, both of which are derived from a 30Hz sensor readout. Bit-rates suck (20 Mbps max). No 24p, no 25p, no 50/60p. It does 30Hz. The camera’s h.264 codec isn’t particularly good, as it tends to degrade into macroblocking when pushed too hard. Unlike the beautiful highlight roll-off in still photos, videos get a nasty burnt look on anything that gets too bright. You can buy an accessory for a microphone input, but there’s really no point since the camera doesn’t have volume control. The microphone has been improved significantly over the old PENs at least, which used to clip quickly. The new mic is completely deaf to bass, though. Single clips are limited to 29:59 thanks to stupid laws in the EU, so long-form interview/lecture recordings are out. You can set aperture/shutter/ISO/exposure manually for video, but only before you start recording. I suspect that this is strictly a set of software problems, as the underlying hardware is extremely capable. I’m not the only one who thinks a lot more is possible on the OM-D platform. More than that though, Olympus just doesn’t understand what people are looking for from video.

On second thought, allow me to rephrase that a bit: Olympus doesn’t understand professional video. Thanks to the excellent stabilizer, the OM-D is an extraordinary camera for amateur/home video. It’s miles ahead of any DSLR for video work, including the inexplicably popular Canons. Bolt on the Olympus 14-150mm ($350 refurb) for a video friendly 11x zoom, and for casual use the OM-D delivers very good results. Film buffs will need to look elsewhere, probably to the very competent (and now much cheaper) Panasonic GH2.

Verdict

There are a lot of good mirrorless and DSLR cameras out there. Sony NEX will take absolutely fantastic images with the right lenses. The high-end Panasonic G cameras share many of the same advantages at significantly better price points. A high end consumer DSLR (D7000, A65, 60D, K-5) can be had for the same money, with much wider choices in lenses across the range. So why are photographers like Damian McGillicuddy, Steve Huff, and Andy Hendriksen going crazy over this new camera?

Shooting with the OM-D is effortless. It’s compact enough to carry comfortably; not pocketable, but much more convenient than a DSLR. Use the viewfinder or LCD as you like, hit the button, and the sensor is able to handle almost anything you throw at it. Twin control dials make it easy to tweak settings quickly. The JPEG processing is good enough that I almost never sit down with the RAW files from everyday shooting situations. The stabilizer solves most shutter speed problems and gives video a professional feel. The tough build and weather sealing inspire confidence, while still being lightweight. It’s expensive, but there’s so much to like in this package and so little to complain about that it is worth it.

The bottom line? This camera is much more fun than its DSLR or mirrorless competitors.