Moving Away From GoDaddy

Just a quick update here: My domains, primarily SlimDX.org and SlimTune.com but a few others as well, are currently registered with GoDaddy. Now GoDaddy is a company with a long, messy history of being a third-tier sleaze-bag registrar, but I stuck with them because of pricing. However, their recent support of SOPA, and their pathetic recantation of it, pushed me over the edge.

Effective immediately, I am shifting all domains away from GoDaddy. Because I’m rather new to this process, I don’t know what will happen to DNS and email during the transition. The SlimDX site or @slimdx.org email addresses may become inaccessible for a short period while I sort things out. Please bear with me.

If you are interested in moving your own domains, I found out that NameCheap is running a promotion with the code “SOPAsucks”. Their regular pricing is not quite as aggressive as GoDaddy’s, but they do offer very competitive $2 specials nonetheless. Transfers with the code cost $6.99 per domain, which includes a year’s renewal of the domain. I am sure there are other anti-SOPA registrars, but this one is mine.

DirectX and XNA Follow-up

I wanted to clarify and respond to a few things regarding my previous post about DirectX and XNA. First, a quick note: the long-standing DIRECTXDEV mailing list is being shut down. Microsoft is encouraging a move to their forums, but in case you’re fond of the mailing list format, there’s a Google Group that everyone is shifting over to. If you’re reading this blog post, you’re probably interested in the subject matter, so I highly encourage you to join. MS Connect’s entry for DirectX is being discontinued as well, so I’m not sure how you report bugs now.

Something on the personal front: I got a few comments, directly and indirectly, about being an MS or DirectX “hater”. Good lord, no! I adore DirectX, XNA, Windows, and Microsoft. I criticize because I want to see these technologies thrive and succeed. I’ve been doing a lot of iOS/Mac/OpenGL work lately, and from a technical standpoint it’s absolutely miserable. I miss the wonderful Microsoft world of development. But a lot of what I’m hearing in public and private worries me. My alarmist approach is designed to bring attention to these things, because oftentimes the development teams live in a bubble, separated from their users. (Hell, I can barely get in touch with people using SlimDX.) The XNA team, for example, is terrible at communication. It’s exasperating, because the direction, plans, and schedules are completely opaque — even to those of us who have signed an MS NDA and are ostensibly supposed to see this information early. (We don’t see jack shit, by the way. Exasperating.)

Second, no, I do not think DirectX is dead. No, I do not think everybody should switch to OpenGL. From a platform and technical standpoint, there’s probably as much or even more commitment to it than in the past. What’s bothering me is the pathetic way that community, documentation, etc. are being handled. Look: Where is the DirectX SDK? I don’t know when that was published, but it appears that nobody noticed it until very recently. How would we? I don’t wander MSDN online at random. And it’s been placed right next to this comically worthless page. This is the kind of developer support I’m complaining about. Nobody outside MS understands what is going on in Bellevue, and I’m getting this worrisome feeling that nobody inside understands either. The DirectX SDK wasn’t just a divergent way to deliver support for a core platform technology, which is what seems to be driving the current decision-making process. It represented half a gigabyte of commitment to the developers who arguably make the entire Windows ecosystem compelling to a consumer. “Developers developers developers!” “Yeah, what’s up?” “Uhhh, hi?” That’s what it feels like. MS wants developers around, but they forgot to put any thought into why. (Here’s a hint, guys: ask DevDiv. They seem to still have a clue, C++ team aside.)

And in the other corner, we’ve got XNA. Whoo boy. There’s no point in sugarcoating this, although I’ll probably ruffle some feathers: XNA 4.0 is garbage. It exists to support two things: XBLIG and WP7. XBLIG is a joke, so really the only productive arena remaining is WP7. It’s sad, because XNA 3.x was actually an excellent way to do managed development on PC. 4.0 introduces a profiles mechanism that is focused specifically on the Xbox and WP7, and it produces a stunningly foolish situation on PC where DX10+ hardware is required but none of the new features are supported. Worse still, that was a few years ago. Now we’ve got DirectX 11 with Metro coming down the pipeline and XNA staring blankly back like the whole thing is a complete surprise. It tends to raise some questions, like: is XNA 5 coming? Will there be DX11 or Metro or Windows 8 support? Is anybody even listening? And the answer we got was, and I quote: “We’re definitely sensitive to this uncertainty, but unfortunately have nothing we can announce at this time.”

My judgement on that message is that XNA does not exist on PC. I’ll say the same thing I said in 2006 when XNA was first revealed. Treat it as an API that happens to run on Windows as a development convenience, not as something it’s actually meant to do. The computing world has moved on, and if Microsoft can’t be bothered to bring XNA along then that’s just something we have to work with. If and when XNA 5 is announced, then we can go back and take a look at the new landscape.

DirectX and XNA Status Report

A few interesting things have been happening in the DirectX and XNA world, and I think people haven’t really noticed yet. It’s been done quietly, not because Microsoft is trying to hide anything but because they’ve always been big believers in the “fade into the night” approach to canceling projects. Or their communication ability sucks. Cancel may be the wrong word here, but the DirectX developer experience is going to be quite a bit different moving forward.

Let’s start with the DirectX SDK, which you may have noticed was last updated in June of 2010. That’s about a year and a half now, which is a bit of a lag for a product which has — sorry, had — scheduled quarterly releases. Unless of course that product is canceled, and it is. You heard me right: there is no more DirectX SDK. Its various useful components have been spun out into a hodge-podge of other places, and some pieces are simply discontinued. Everything outside DirectX Graphics is of course gone, and has been for several years now. That should not be a surprise. The graphics pieces and documentation, though, are being folded into the Windows SDK. D3DX is entirely gone. The math library was released as XNA Math (essentially a port from the Xbox), then renamed to DirectXMath. It was a separate download for a while, but I think it may also be part of the Windows SDK as of Windows 8. I haven’t checked. The FX compiler has been spun off/abandoned as an open-source block of code included in the June 2010 SDK. There are no official patches for a wide range of known bugs, and I’m not aware of a central location for indie patches. Most of the remaining bits and pieces live on Chuck Walbourn’s blog. Yeah, I know.

In case it’s not obvious, this means that the DirectX release schedule is now the same as the Windows SDK’s, which always corresponds with major OS updates (service packs and full new versions). Don’t hold your breath on bug fixes. Last I heard, there’s only one person still working on the HLSL compiler. Maybe they’ve hired someone by now; I assume there’s at least a job opening on that ‘team’. What I do know is that, for all practical purposes, DirectX has been demoted to a standard, uninteresting Windows API just like all the others. I imagine there won’t be a lot more samples coming from Microsoft, especially big cool ones like the SDK used to have. We’ll probably have to rely on AMD and NVIDIA for that stuff moving forward.

That covers the native side. What about managed? Well the Windows API Code Pack hasn’t been updated in a year and a half so we won’t worry about that. On the XNA front, two things are becoming very clear:
* XNA is not invited to Windows 8.
* XBLIG is not a serious effort.
The point about XBLIG has been known by most of us MVP guys for a while now. Microsoft promised a lot of interesting news out of this past //BUILD/ conference, which I suppose was true. However you may have noticed that XNA was not mentioned at any point. That’s because XNA isn’t invited. All of that fancy new Metro stuff? None of it will work with XNA, at all, in any fashion. (Win8 will run XNA just like any other ‘classic’ app.) That also implies pretty minimal involvement with the Windows app store. Combined with the fact that XBLIG has never been a serious effort to begin with, I’m dubious about tablet support for managed games. XNA does work on WinPhone7 and Win8 does support Win7 apps, so it ought to work in principle. Maybe. Given the niche status of Windows Phone 7 at the moment, and major losses of tech like UDK and Unity from that ecosystem, I’m also expecting WinPhone8 to be much more friendly to native code. (If not, I expect that platform to fail entirely, and take Nokia with it.)

I’m also looking at this e-mail right now in my box, which starts as follows:

As you know, the 2012 Microsoft MVP Summit is Feb 28-Mar 2, 2012. We wanted to inform you that DirectX and XNA technologies will not be hosting sessions at the Summit. As MVPs, you are still encouraged to attend and be a part of the global MVP community, and you’ll have the ability to attend technical sessions offered by other product groups.

Really? No DX/XNA sessions at all? I don’t think DirectX is fading into the sunset, because it’s a core technology. XNA will most likely disappear. For some reason, Microsoft is getting out of the business of helping developers produce games and other 3D applications at the exact same time that they’re adding core support for it. Yes, Visual Studio 11 has model and texture visualization and editing. It even has a visual shader editor. I’ll let you take a wild guess at how well that will work with XNA code.

I don’t know exactly what Microsoft is going for here, but every couple of days somebody asks me when the next DirectX SDK is going to be released, and I think I’m just explicitly stating what Microsoft has vaguely hinted at for a while now: there is no new DirectX SDK, soon or ever. And I’m not holding my breath for any XNA updates either. I am told there is an XNA 5, but apparently they won’t be at the summit. If there is some actual future in XNA, feel free to make yourselves heard on the mailing lists, because right now it’s really difficult to understand why anyone would bother.

Important clarification: I do NOT think DirectX is being deprecated or vanished or even dramatically changed at the core. (I’m less confident about XNA.) The SDK is being vanished, subsumed into the Windows SDK as a component just like all the rest of Windows. So coding DX is no more special than coding GDI or Winsock. There will still be 11.1 and 12 and all that, probably delivered with OS releases and service packs. “DirectX 11.1” will actually be synonymous with “Win7 SP2”. That has been the case for a while, actually. What’s going away is the wonderful developer support we’ve enjoyed for as long as I can remember. Compare the D3DX libraries across 9, 10, and 11. Is this really a surprise? It’s been slowly happening for years.

Neuroscience Meets Games

It’s been a long time since I’ve written anything, so I thought I’d drop off a quick update. I was in San Francisco last week for a very interesting and unusual conference: ESCoNS. It’s the first meeting of the Entertainment Software and Cognitive Neurotherapy Society. Talk about a mouthful! The attendees were mostly doctors and research lab staff, though there were also people from Activision, Valve, and a couple of other industry representatives. The basic idea is that games can have a big impact on cognitive science and neuroscience, particularly as it applies to therapy. This conference was meant to bring together people who are interested in this work, and at over 200 people, attendance was fairly substantial for what seems like a rather niche pursuit. For comparison’s sake, GDC attendance is generally in the vicinity of 20,000 people.

The seminal work driving this effort is really the findings of Daphne Bavelier at the University of Rochester. All of the papers are available for download as PDFs, if you are so inclined. I imagine some background in psychology, cognitive science, or neurology is helpful to follow everything that’s going on. The basic take-away, though, is that video games can have dramatic and long-lasting positive effects on our cognitive and perceptual abilities. Here’s an NPR article that is probably easier to follow as a layperson with no background. One highlight:

Bavelier recruited non-gamers and trained them for a few weeks to play action video games. […] Bavelier found that their vision remained improved, even without further practice on action video games. “We looked at the effect of playing action games on this visual skill of contrast sensitivity, and we’ve seen effects that last up to two years.”

Another rather interesting bit:

Brain researcher Jay Pratt, professor of psychology at the University of Toronto, has studied the differences between men and women in their ability to mentally manipulate 3-D figures. This skill is called spatial cognition, and it’s an essential mental skill for math and engineering. Typically, Pratt says, women test significantly worse than men on tests of spatial cognition.

But Pratt found in his studies that when women who’d had little gaming experience were trained on action video games, the gender difference nearly disappeared.

As it happens, I’ve wound up involved in this field as well. I had the good fortune to meet a doctor at the Johns Hopkins medical center/hospital who is interested in doing similar research. The existing work in the field is largely focused on cognition and perception; we’ll be studying motor skills. That probably means lots of work with iPads, Kinect, Wii, PS Move, and maybe more exotic control devices as well. There are a lot of potential applications, but one early angle will be helping stroke patients recover basic motor ability more quickly and more permanently.

There’s an added component as to why we’re doing this research. My team believes that by studying the underlying neurology and psychology that drives (and is driven by) video games, we can actually turn the research around and use it to produce games that are more engaging, more interactive, more addictive, and just more fun. That’s our big gambit, and if it pans out we’ll be able to apply a more scientific and precise eye to the largely intuitive problem of making a good game. Of course the research is important for its own sake and will hopefully lead to a lot of good results, but I went into games and not neurology for a reason 😉

Understanding Subversion’s Problems

I’ve used Subversion for a long time, and even CVS before that. Recently there’s been a lot of momentum behind moving away from Subversion to a distributed system, like git or Mercurial. I myself wrote a series of posts on the subject, but I skipped over the reasons WHY you might want to switch away from Subversion. This post is motivated in part by Richard Fine’s post, but it’s a response to a general trend and not his entry specifically.

SVN is a long-time stalwart as version control systems go, created to patch up the idiocies of CVS. It’s a mature, well understood system that has been and continues to be used in a vast variety of production projects, open and closed source, across widely divergent team sizes and workflows. Never mind the hyperbole: SVN is good by practically any real-world measure. And like any real-world production system, it has a lot of flaws in nearly every respect. A perfect product is a product no one uses, after all. It’s important to understand what the flaws are, and in particular I want to discuss them without advocating for any alternative. I don’t want to compare to git or explain why it fixes the problems, because that would distort the actual problems and imply that distributed version control is the solution. It can be a solution, but the problems reach a bit beyond that.

Committing vs publishing
Fundamentally, a commit creates a revision, and a revision is something we want as part of the permanent record of a file. However, a lot of those revisions are not really meant for public consumption. When I’m working on something complex, there are a lot of points where I want to freeze-frame without actually telling the world about my work. Subversion understands this perfectly well, and the mechanism for doing so is branches. The caveat is that this always requires server round trips, which is okay as long as you’re in a high-availability environment with a fast server. That’s fine while you’re in the office, but it breaks down the moment you’re traveling or your connection to the server drops for whatever reason. Subversion cannot queue up revisions locally. It has exactly two reference points: the revision you started with and the working copy.

In general, though, we are working in high-availability environments, and making a round trip to the server is not a big deal. Private branches are supposed to be the solution to this problem of work-in-progress revisions. Do everything you need, with as many revisions as you want, and then merge to trunk. Simple as that! If only merges actually worked.
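
For reference, the private-branch dance looks something like this from the command line (the repository URL and branch name are just illustrative):

```
# create the branch on the server (a round trip, like everything else)
svn copy https://svn.example.com/repo/trunk \
         https://svn.example.com/repo/branches/my-feature \
         -m "Private branch for feature work"

# point the working copy at the branch and pile up work-in-progress commits
svn switch https://svn.example.com/repo/branches/my-feature
svn commit -m "WIP: halfway through the refactor"

# when it's ready, fold it back into trunk
svn switch https://svn.example.com/repo/trunk
svn merge --reintegrate https://svn.example.com/repo/branches/my-feature
svn commit -m "Merge my-feature back into trunk"
```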

SVN merges are broken
Yes, they’re broken. Everybody knows merges are broken in Subversion and that they work great in distributed systems. What tends to happen is that people gloss over why they’re broken. There are essentially two problems with merges: the actual merge process, and the metadata about the merge. Neither works in SVN. The fatal mistake in the merge process is one I didn’t fully understand until reading HgInit (several times). Subversion’s world revolves around revisions, which are snapshots of the whole project. Merges basically take diffs from the common root and smash the results together. But the merged files didn’t magically drop from the sky — we made a whole series of changes to get them where they are. There’s a lot of contextual information in those changes which SVN has completely and utterly forgotten. Not only that, but the new revision it spits out necessarily has to jam a potentially complicated history into a property field, and naturally that doesn’t work.
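
You can see that property field for yourself after a merge; the branch’s entire history gets squashed down to a revision range in svn:mergeinfo (the branch name and numbers below are made up):

```
svn propget svn:mergeinfo .
# /branches/my-feature:3340-3358
```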

For added impact, this context problem shows up without branches if two people happen to make more-than-trivial unrelated changes to the same trunk file. So not only does the branch approach not work, you get hit by the same bug even if you eschew it entirely! And invariably the reason this shows up is that you don’t want to make small changes to trunk. Damned if you do, damned if you don’t.

Newer version control systems are typically designed around changes rather than revisions. (Critically, this has nothing at all to do with decentralization.) By defining a particular ‘version’ of a file as a directed graph of changes resulting in a particular result, there’s a ton of context about where things came from and how they got there. Unfortunately the complex history tends to make assignment of revision numbers complicated (and in fact impossible in distributed systems), so you are no longer able to point people to r3359 for their bug fix. Instead it’s a graph node, probably assigned some arcane unique identifier like a GUID or hash.

File system headaches
.svn. This stupid little folder is the cause of so many headaches. Essentially it contains all of the metadata from the repository about whatever you synced, including the undamaged versions of files. But if you forget to copy it (because it’s hidden), Subversion suddenly forgets all about what you were doing. You just lost its tracking information, after all. Now you get to do a clean update and a hand merge. Overwrite it by accident, and Subversion will get confused. And here’s the one that gets me every time with externals like boost — copy folders from a different repository, and all of a sudden Subversion sees folders from something else entirely and will refuse to touch them at all until you go through and nuke the folders by hand. Nope, you were supposed to svn export it, never mind that the offending files are marked hidden.
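
The blessed way to pull files out of another repository without dragging the metadata along is svn export, which writes a clean tree with no .svn folders (the URL and path are just examples):

```
# a plain copy of a checkout brings .svn with it; export does not
svn export https://svn.example.com/boost/tags/1.44.0 external/boost
```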

And of course, because there’s no understanding of the native file system, move/copy/delete operations are all deeply confusing to Subversion unless it’s the one handling those changes. If you’re working with an IDE that isn’t integrated into source control, you have another headache coming, because IDEs are usually built for rearranging files. (In fact I think this is probably the only good reason to install IDE integration.)

It’s not clear to me if there’s any productive way to handle this particular issue, especially cross platform. I can imagine a particular set of rules — copying or moving files within a working copy does the same to the version control, moving them out is equivalent to delete. (What happens if they come back?) This tends to suggest integration at the filesystem layer, and our best bet for that is probably a FUSE implementation for the client. FUSE isn’t available on Windows, though apparently a similar tool called Dokan is. Its maturity level is unclear.

Changelists are missing
Okay, this one is straight out of Perforce. There’s a client side and a server side to this, and I actually have the client side via my favorite client, SmartSVN. The idea on the client is that you group changed files together into changelists and send them off all at once. It’s basically a queued commit you can use for staging. Perforce adds a server side, where pending changelists actually exist on the server, so you can see what people are working on (and a description of what they’re doing!), and so forth. Subversion has no idea of anything except when files differ from their copies living in the .svn shadow directory, and that’s only on the client. If you have a couple of different live streams of work, separating them out is a bit of a hassle. Branches are no solution at all, since it isn’t always clear upfront what goes in which branch. Changelists are much more flexible.
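
For what it’s worth, the stock command-line client did eventually grow a bare-bones version of the client half (in 1.5, if I remember right), though it’s purely local and invisible to everyone else; the file names here are made up:

```
# tag a group of changed files with a name, then commit just that group
svn changelist ui-fixes src/MainWindow.cpp src/Toolbar.cpp
svn commit --changelist ui-fixes -m "Toolbar layout fixes"
```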

Locking is useless
The point of a lock in version control systems is to signal that it’s not safe to change a file. The most common use is for binary files that can’t be merged, but there are other useful situations too. Here’s the catch: Subversion checks locks when you attempt to commit. That’s how it has to work. In other words, by the time you find out there’s a lock on a file, you’ve already gone and started working on it, unless you obsessively check repository status for files. There’s also no way to know if you’re putting a lock on a file somebody has pending edits to.
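
The mechanics themselves are simple; the problem is that nothing tells you about a lock until you go asking the server (the file name is illustrative):

```
# take a lock, with a comment others will only see if they look for it
svn lock assets/logo.psd -m "Reworking the logo"

# checking the server is the only way to spot other people's locks before you start editing
svn status -u    # an 'O' in the lock column means somebody else holds it
```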

The long and short of it is that if you’re going to use a server, really use it. Perforce does. There’s no need to have the drawbacks of both centralized and distributed systems at once.

I think that’s everything that bothers me about Subversion. What about you?

Windows Installer: Worse Than I Thought

I hate Windows Installer. I really hate it. So you have to understand, I was a little surprised to discover that it is actually worse than I thought.

I do not, generally speaking, have much of a problem with Microsoft or Windows. They’ve done a lot for me, I’ve done a few things for them, and it’s been good. However, my feelings about Windows are continuing to degrade from “generally decent” to “least unpleasant”. But that is a digression from the point at hand, which is a specific infuriating problem. Let me explain my machine’s hard drive setup:
* 60 GB SSD. This is for OS and ‘core’ software.
* 400 GB magnetic drive for ‘non-core’ software.
* 1 TB drive for personal data.
I figured that 60 GB would surely be plenty for the OS and small programs. Yet the Windows folder now accounts for a mind-boggling 25.7 GB of usage on that drive. I was wondering why.

Our culprits are apparently Installer and WinSxS. I’ll leave the latter for another time and dig into why Installer is so big.

As I’ve discussed in the past, Installer files (msi, msp) are essentially embedded database files that store everything required to install or uninstall a program. Notice the flip side of the coin here — the only way to uninstall (or repair, for that matter) is to save the installer package. And these packages always get saved in the same spot, C:\Windows\Installer, regardless of where you actually installed the program.

The short version is that if you have a small C drive but run lots of software, you will eventually run out of space even if all the software lives elsewhere. At this point I’m left to throw up my hands and ask the same thing I’ve asked many times before. What idiot was in charge of designing Windows Installer? The engineering here is slipshod and frankly embarrassing.

On the bright side, the solution to this problem is actually relatively straightforward. There’s nothing stopping you from moving the contents of Installer to another drive and creating a junction to fill in for it. Just like that, I can cross off 8 GB that definitely don’t need to be on my two-dollar-per-gigabyte SSD.
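
Here’s a rough sketch of what I mean, from an elevated command prompt. The target drive and folder are mine, and obviously relocating anything under C:\Windows deserves some care (and a backup) first:

```
rem copy the cached packages to the big drive, then swap in a junction
robocopy C:\Windows\Installer D:\Packages\Installer /E /COPYALL
rmdir /S /Q C:\Windows\Installer
mklink /J C:\Windows\Installer D:\Packages\Installer
```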

Evaluation: Git

Last time I talked about Mercurial, and was generally disappointed with it. I also evaluated Git, another major distributed version control system (DVCS).

Short Review: Quirky, but a promising winner.

Git, like Mercurial, was spawned as a result of the Linux-BitKeeper feud. It was written initially by Linus Torvalds, apparently during quite a lull in Linux development. It is, for obvious reasons, a very Linux-focused tool, and I’d heard that performance is poor on Windows. I was not optimistic about it being usable on Windows.

Installation actually went very smoothly. Git for Windows is basically powered by MSYS, the same Unix tools port that accompanies the Windows GCC port called MinGW. The installer is neat and sets up everything for you. It even offers a set of shell extensions that provide a graphical interface. Note that I opted not to install this interface, and I have no idea what it’s like. A friend tells me it’s awful.

Once the installer is done, git is ready to go. It’s added to PATH and you can start cloning things right off the bat. Command-line usage is simple and straightforward, and there’s even a ‘config’ subcommand that lets you set things up nicely without having to figure out which config file you want and where it lives. It’s still a bit annoying, but I like it a lot better than Mercurial. I’ve heard some people complain about git being composed of dozens of binaries, but I haven’t seen this on either my Windows or Linux boxes. I suspect this is a complaint about old versions, where each git command was its own binary (git-commit, git-clone, git-svn, etc.), but that’s long since been retired. Most of the installed binaries are just the MSYS ports of core Unix programs like ls.
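
As an example, first-run setup is a couple of commands rather than a hunt for a config file (the name and email are placeholders):

```
git config --global user.name "Your Name"
git config --global user.email you@example.com
git config --global core.autocrlf true   # line-ending handling on Windows
```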

I was also thrilled with the git-svn integration. Unlike Mercurial, the support is built in and flat out works with no drama whatsoever. I didn’t try committing back into the Subversion repository from git, but apparently there is fabulous two-way support. It was simple enough to create a git repository, but it can be time-consuming, since git replays every single check-in from Subversion to itself. I tested on a small repository with only about 120 revisions, which took maybe two minutes.
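
The import itself is a one-liner, assuming the usual trunk/branches/tags layout (the URL is made up):

```
# -s (--stdlayout) tells git-svn to expect trunk, branches, and tags
git svn clone -s https://svn.example.com/myproject myproject
```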

This is where I have to admit I have another motive for choosing Git. My favorite VCS frontend comes in a version called SmartGit. It’s a stand-alone (not shell-integrated) client that is free for non-commercial use and works really well. It even handled SSH beautifully, which I’m thankful for. It’s still technically in beta, but I haven’t noticed any problems.

Now the rough stuff. I already mentioned that Git for Windows comes with a GUI that is apparently not good. What I discovered is that getting git to authenticate from Windows is fairly awful. In Subversion, you actually configure users and passwords explicitly in a plain-text file. Git doesn’t support anything of the sort; its ‘git-daemon’ server allows fully anonymous pulls and can be configured for anonymous push only. Authentication is entirely dependent on the filesystem permissions and the users configured on the server (barring workarounds), which means that most of the time, authenticated Git transactions happen inside SSH sessions. If you want to do something else, it’s complicated at best. Oh, and good luck with HTTP integration if you chose a web server other than Apache. I have to imagine running a Windows-based git server is difficult.
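
To give a flavor of what ‘configuration’ means here: git-daemon has no concept of users at all, just whether pushing is enabled (the path is illustrative):

```
# read-only anonymous access by default; --enable=receive-pack allows anonymous pushes
git daemon --base-path=/srv/git --export-all --enable=receive-pack
```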

Let me tell you about SSH on Windows. It can be unpleasant. Most people use PuTTY (which is very nice), and if you use a server with public-key authentication, you’ll end up using a program called Pageant that provides that service to various applications. Pageant doesn’t use OpenSSH-compatible keys, so you have to convert the keys over, and watch out because the current stable version of Pageant won’t do RSA keys. Git in turn depends on a third program called Plink, which exists to help programs talk to Pageant, and it finds that program via the GIT_SSH environment variable. The long and short of it is that getting Git to log into a public-key SSH server is quite painful. I discovered that SmartGit simply reads OpenSSH keys and connects without any complications, so I rely on it for transactions with our main server.
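
For the record, the PuTTY route ends up looking roughly like this, with Pageant running and your key already converted through PuTTYgen (the paths and server are placeholders):

```
rem tell git to use plink instead of its bundled ssh
set GIT_SSH=C:\Program Files (x86)\PuTTY\plink.exe
git clone ssh://git@server.example.com/srv/git/project.git
```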

I am planning to transition over to git soon, because I think the workflow of a DVCS is better overall. It’s really clear, though, that these are raw tools compared to the much more established and stable Subversion. It’s also a little more complicated to understand; whether you’re using git, Mercurial, or something else, it’s valuable to read the free ebooks that explain how to work with them. There are all kinds of quirks in these tools. Git, for example, uses a ‘staging area’ that snapshots your files for commit, and if you’re not careful you can wind up committing older versions of your files than what’s on disk. I don’t know why — it seems like the opposite extreme from Mercurial.
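
The staging gotcha in concrete terms (the file name is made up):

```
git add Renderer.cpp           # stages a snapshot of the file as it is right now
# ...edit Renderer.cpp some more...
git commit -m "Fix culling"    # commits the staged snapshot, not what's on disk
git commit -am "Fix culling"   # -a re-stages modified tracked files before committing
```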

It’s because of these types of issues that I favor choosing the version control system with the most momentum behind it. Git and Mercurial aren’t the only DVCSes out there; Bazaar, Monotone, and many more are available. But these tools already have rough (and sharp!) edges, and by sticking to the popular ones you are likely to get the most community support. Both Git and Mercurial have full-blown books dedicated to them that are available electronically for free. My advice is that you read them.

Evaluation: Mercurial

I’ve been a long-time Subversion user, and I’m very comfortable with its quirks and limitations. It’s an example of a centralized version control system (CVCS), which is very easy to understand. However, there’s been a lot of talk lately about distributed version control systems (DVCS), of which there are two well-known examples: git and Mercurial. I’ve spent a moderate amount of time evaluating both, and I decided to post my thoughts. This entry is about Mercurial.

Short review: A half-baked, annoying system.

I started with Mercurial, because I’d heard anecdotally that it’s more Windows friendly and generally nicer to work with than git. I was additionally spurred by reading the first chapter of HgInit, an e-book by Joel Spolsky of ‘Joel on Software’ fame. Say what you will about Joel — it’s a concise and coherent explanation of why distributed version control is, in a general sense, preferable to centralized. Armed with that knowledge, I began looking at what’s involved in transitioning from Subversion to Mercurial.

Installation was smooth. Mercurial’s site has a Windows installer ready to go that sets everything up beautifully. Configuration, however, was unpleasant. The Mercurial guide starts with this as your very first step:

As first step, you should teach Mercurial your name. For that you open the file ~/.hgrc with a text-editor and add the ui section (user interaction) with your username:

Yes, because what I’ve always wanted from my VCS is for it to be a hassle every time I move to a new machine. Setting up extensions is similarly a pain in the neck. More on that in a moment. Basically, Mercurial’s configuration is a headache.
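
For the record, the whole exercise amounts to adding a couple of lines like these to ~/.hgrc (or Mercurial.ini on Windows):

```
[ui]
username = Your Name <you@example.com>
```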

Then there’s the actual VCS. You see, I have one gigantic problem with Mercurial, and it’s summed up by Joel:

Whereas, in Mercurial, all commands always apply to the entire tree. If your code is in c:\code, when you issue the hg commit command, you can be in c:\code or in any subdirectory and it has the same effect.

This is an incredibly awkward design decision. The basic idea, I guess, is that somebody got really frustrated about forgetting to check in changes and decided this was the solution. My take is that this is a stupid restriction that makes development unpleasant.

When I’m working on something, I usually have several related projects in a repository. (Mercurial fans freely admit this is a bad way to work with it.) Within each project, I usually wind up making a few sets of parallel changes. These changes are independent and shouldn’t be part of the same check-in. The idea with Mercurial is, I think, that you simply produce new branches every time you do something like this, and then merge back together. Should be no problem, since branching is such a trivial operation in Mercurial.
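
As I understand it, the suggested workflow looks something like this every time a second strand of work appears (the branch name is illustrative):

```
hg branch fix-texture-loader     # start a named branch for this strand of changes
hg commit -m "Work on the texture loader"
hg update default                # hop back to the main line for the other strand
hg merge fix-texture-loader      # later, fold the branch back in
hg commit -m "Merge texture loader work"
```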

So now I have to stop and think about whether I should be branching every time I make a tweak somewhere?

Oh, but wait: how about the extension mechanism? I should be able to patch in whatever behavior I need, and surely this is something that bothers other people! As it turns out, that’s definitely the case. Apart from the branching suggestions, there are not one but half a dozen extensions to handle this problem, all of which have their own quirks and pretty much all of which involve jumping back into the VCS frequently. This is apparently a problem the Mercurial developers are still puzzling over.

Actually, there is one tool that’s solved this the way you would expect: TortoiseHg. Which is great, save for two problems. Number one, I want my VCS features to be available from both the command line and a front-end. Two, I really dislike Tortoise. Alternative Mercurial frontends are both trash and an unbelievable pain to set up. If you’re working with Mercurial, TortoiseHg and the command line are really your only sane options.

It comes down to one thing: workflow. With Mercurial, I have to be constantly conscious about whether I’m in the right branch, doing the right thing. Should I be shelving these changes? Do they go together or not? How many branches should I maintain privately? Ugh.

Apart from all that, I ran into one serious show stopper. Part of this test includes migrating my existing Subversion repository, and Mercurial includes a convenient extension for it. Wait, did I say convenient? I meant borderline functional:

Subversion’s Python bindings are a prerequisite. The bindings (generated with SWIG) are installed separately on Windows, and can be found on http://subversion.tigris.org/ . Note that you can’t do this with the Win32 Mercurial binaries — there’s no way to install the Subversion bindings into its built-in Python library. So you’ll need to use a Mercurial installed on top of a stand-alone Python, and you may also need to do something like “set HG=python c:\Python25\Scripts\hg” to override the default Win32 binaries if you have those installed also. For Mac OS X, the easiest way is to install the CollabNet Subversion build, and then copy the content of /opt/subversion/lib/svn-python to the site-package directory of the python installation.

The silver lining is that there are apparently third-party tools to handle this that are far better, but at this point Mercurial has tallied up a lot of irritations and I’m ready to move on.

Spoiler: I’m transitioning to git. I’ll go into all the gory details in my next post, but I found git to be vastly better to work with.

Lua and LuaBind Binaries for VS 2010

I’ve been working with Lua recently, and apparently neither Lua nor LuaBind comes in a binary distribution. They’re also kind of a pain to build, so I thought I’d go ahead and post fully prepped binaries, built by and for VC 2010.
Lua 5.1
LuaBind 0.9

A few things to be aware of:

  • These are just 32-bit binaries and nothing else. You will still need the project sources for the headers.
  • Lua is built as a release mode only DLL.
  • LuaBind comes in debug and release variants as a static library.
  • These are built using the Multithreaded DLL and Multithreaded Debug DLL runtimes. You’ll get link errors if your project statically links to the CRT.

PhysX and Hardware Acceleration

A more accurate title would be “PhysX and its lack of hardware acceleration.” In retrospect, this is largely my fault for believing marketing hype without looking into the details, but I figured I’d discuss it. Finding resources on the subject was difficult, so I think a lot of people may be suffering from the same delusion I was.

Super short version: PhysX computes its rigid body simulation strictly in software. The GPU never gets involved.

Let’s step back for a second and look at the history. It starts with a library called NovodeX, which was a popular physics package around the same time Half-Life 2 launched with Havok. A semiconductor company called Ageia acquired it to build Physics Processing Unit (PPU) cards, an idea that got a lot of attention at the time. The hardware was overpriced, mismanaged, and probably doomed in the end regardless. The effort started sinking pretty quickly until NVIDIA swept in and bought the whole thing. NV then announced that the PPU project would be scrapped and that hardware acceleration of physics would be done on the GPU.

The current state of PhysX is somewhat confusing. There are plenty of PhysX accelerated games. Benchmarks of these games show dramatic performance gains when running with hardware supported physics. There have been recent allegations that PhysX’s CPU performance is unreasonably poor, possibly to strengthen the case for hardware acceleration. I don’t have any comment on that mess, but it’s part of the evidence suggesting that hardware is the real focus of PhysX and NVIDIA. That’s not a surprise.

What is a surprise is that the rigid body simulation — the important part of the physics — is not hardware accelerated. Apparently it was when the PPU rolled out, but the GPU-based acceleration does not support it. Look at any game that advertises PhysX and you’ll notice gobs of destruction, cloth, fluids, etc. That’s because those are the only GPU-accelerated effects PhysX supports (plus a few miscellaneous things like soft bodies). Probably the big tip-off is that none of these effects require forces to be imparted back to the main physics scene in any way. This is strictly eye candy.

So the interactive rigid body simulation, the part that actually affects gameplay, is completely in software. And if you believe the claims, it’s not even done well in software. All these problems will apparently be fixed in a magical 3.0 release, coming at some vague point in the future. Why? My best guess is that no one has paid any attention to the core PC code in six years. I’d wager that everyone’s been so obsessed with hardware acceleration, and that the basic problem of writing a rigid body solver is considered so stupidly easy, that we’re simply coasting on the same 2004 NovodeX-era code that made the library popular in the first place. Version 3.0 is probably a ground-up rewrite.

Don’t get me wrong. PhysX is not bad. It is simply stagnant. Take a recent game and strip away the GPU-driven effects candy. What exactly is left in the interactive part of the simulation that wasn’t already in Half-Life 2? That was also a 2004 release. NVIDIA did what they do best: visual effects. Marketing also did what they do best: letting everybody assume something untrue without actually ever saying it.

Anyway, now you know where hype ends and fact begins. Rigid body game physics is not hardware accelerated; only special effects that fall pretty loosely into the category of “physics” are. Maybe that’s common knowledge, but it was news to me.

Update: This presentation about Bullet’s GPU acceleration is a good read.
Update 2: I’m wrong on one technical point — PhysX’s hardware accelerated systems can impart forces back to the main scene, and their cloth shows off this capability in the samples.