Promit's Ventspace

October 15, 2014

Time Capsule Draft: “Speculating About Xbox Next”

Filed under: Non-technical — Promit @ 11:50 am

I was digging through my Ventspace post drafts and found this writeup that I apparently decided not to post. It was written in March of 2012, a full year and a half before the Xbox One arrived on the market. In retrospect, I’m apparently awesome. On the one hand, I wish I’d posted this at the time, because it’s eerily accurate. On the other hand, the guesses are accurate enough that this might have looked to Microsoft like a leak rather than speculation. Oh well. Here it is for your amusement. I haven’t touched a thing.


I’ve been hearing a lot of rumors, though the credibility of any given piece of information is always suspect. I have some supposed info about the specs on the next Xbox, but I’m not drawing on any of it here. I’m dubious about at least some of the things I heard, and it’s not good to spill that kind of info if you’re trying to maintain a vaguely positive relationship with a company anyway. So what I’m presenting here is strictly speculation, extrapolated from what we’ve seen in the past and from overall industry and Microsoft trends. I’m also assuming that MS is fairly easy to read and unlikely to come out of left field here.

  • 8 GB shared memory. The original Xbox had 64 MB of shared memory. The Xbox 360 has 512 MB, a jump of 8x. This generation is dragging along a little longer, and memory prices have dropped violently in the last year or so. I would actually like to see 16 GB, but the consoles always screw us on memory and I just don’t think we’ll be that lucky. 4 GB is clearly too low; they’d be insane to ship a console with that now. As for the memory type, we’re probably talking simple (G)DDR3 shared modules. The Xboxes have always used shared memory and there’s no reason for them to change that now. Expect some weird addressing limitations on the GPU side.
  • Windows 8 kernel. All indications are that the WinCE embedded kernel is being retired over the next two years (at least for internal use). There’s a substantial tech investment in Windows 8, and I think we’re going to see the desktop kernel roll out across all three screens. (HINT HINT.) iOS and Android are both running stripped desktop kernels, and the resources in current mobile platforms make WinXP’s minimum hardware requirements look comically low. There is no reason to carry the embedded kernel along any longer. I wouldn’t want to be a CE licensee right now.
  • x86-64, 8×2 threads, out-of-order CPU. There are three plausible CPU architectures to choose from: x86, ARM, and PowerPC. Remember what I said about the Windows 8 kernel? There’s no Windows 8 PPC build, and we’re not going to see PowerPC again here. ARM is of course a big focus right now, but the design parameters of the current chips simply won’t accommodate a console. They’re not fast enough, and that can’t be easily revised. That pretty much leaves us with x86. The only extant in-order x86 architecture is Intel Atom, which sucks. I think they’ll get out-of-order execution for free from the existing designs. As for core count, eight is essentially the top of the market right now, and I’m assuming they’ll hyperthread it. They’ll probably reserve a core for the OS, and I wouldn’t be surprised if they disable another core for yield purposes. That means six HT cores available to games, a simple doubling of the current Xbox. I have a rumored clock speed, but have decided not to share. Think lower rather than higher.
  • DirectX 11 GPU — AMD? DX11 class should be blatantly obvious. I have reason to believe that AMD is the supplier, and I did hear a specific arch but I don’t believe it. There’s no word in NVIDIA land about a potential contract, either. No idea if they’re giving the design ownership to MS again or anything like that, all I know is the arrows are all pointed the same way. There are some implications for the CPU here.
  • Wi-Fi N and gigabit Ethernet. This is boring, standard consumer networking hardware. No surprises here.
  • Optical drive? — I don’t think they want to have one. I do think they have to have one, though you can definitely expect a stronger push towards digital distribution than ever. There’s no choice but to support Blu-ray at this point; top-tier games simply need the space. I suspect we’ll see a very large (laptop-grade) hard drive included in at least some models. Half a terabyte, with larger sizes later in the lifecycle. That is purely a guess, though.
  • AMD Fusion APU? — I’m going to outlandishly suggest that a Fusion APU could be the heart of this console. With an x86 CPU and a mainstream Radeon core in about the right generation, the existing Fusion product could be retooled for use in a console. Why not? It already has the basic properties you want in a console chip. The big sticking points are performance and heat. It’s easy to solve either one but not both at once, and we all know what happened last time Microsoft pushed the heat envelope too far. If it is Fusion architecture, I would be shocked if they were to actually integrate the CPU and GPU dies.
  • Kinect. — Here’s another outlandish one: every Xbox Next will include a Kinect (2?) in the box. Kinect has been an enormous winner for Microsoft so far on every single front, and this is where they’re going to draw the battle lines against Nintendo and Sony. Nintendo’s control scheme is now boring to the general public, with the Wii U being introduced to a resounding “meh”. PS Move faded into irrelevance the day it was launched. For the first time in many years, the Xbox is becoming the casual gamers’ console, and they’re going to hammer that advantage relentlessly. Microsoft is also pushing use of secondary features (e.g. the microphone) in hardcore games — see Mass Effect 3.
  • $500. Yes, it’s high, although not very high once you adjust for inflation. The Xbox 360 is an extremely capable device, especially for the not-so-serious crowd. It’s also pure profit for Microsoft, and really hitting its stride now as the general public’s long-tail console. There’s no need to price its successor aggressively, and the stuff I just described is rather expensive besides. A $600 package option at launch would not be surprising.
  • November 2013. As with the last two Xboxes, it will be launched for the holiday season. Some people were saying it would be announced this year, but the more I think about it, the less sense that makes. There’s no way it’s launching this year, and they’re not going to announce it a year and some ahead of time. E3 2013 will probably be the real fun.

There are some problems with the specs I’ve listed so far. AMD doesn’t produce the CPU I described; the rumors don’t match any other known CPU either, though Intel’s designs are closer. I don’t think one of the Phenom X6 designs is a credible choice. Then again, the Xbox 360 CPU didn’t match any existing chips either, so this may not really be a problem. The total package price would also have to be quite high with a Kinect 2 included. The Xbox 360 may function as a useful buffer against being priced out of the market.

October 14, 2014

Quick tip: Retina mode in iOS OpenGL rendering is not all-or-nothing

Filed under: Graphics — Promit @ 3:50 pm

Some of you are probably working on Retina support and performance for your OpenGL based game for iOS devices. If you’re like us, you’re probably finding that a few of the devices (*cough* iPad 3) don’t quiiite have the GPU horsepower to drive your fancy graphics at retina resolutions. So now you’re stuck dropping to 1x with 4x MSAA, which performs decently well but frankly looks kind of bad. It’s a drastic step down in visual fidelity, especially with all the alpha-blended stuff that doesn’t antialias. (Text!) Well, it turns out you don’t have to make such a drastic choice. Here’s the typical enable-retina code you’ll find on StackOverflow or whatever:

if([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2)
{
    self.contentScaleFactor = 2.0;
    eaglLayer.contentsScale = 2.0;
}

//some GL setup stuff
...

//get the correct backing framebuffer size
GLint fbWidth, fbHeight;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &fbWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &fbHeight);

The respondsToSelector bit is pretty token nowadays – what was that, iOS 3? But there’s not much to it. Is the screen a 2x scaled screen? Great, set our view to 2x scale also. Boom, retina. Then we ask the GL runtime what resolution we’re actually running at, and set everything up from there. The trouble is that it’s a very drastic increase in resolution (4x the pixels of 1x), and many of the early retina devices don’t have the GPU horsepower to really do nice rendering at that size. The pleasant surprise is, the scale doesn’t have to be 2.0. Running just a tiny bit short on fill?

if([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2)
{
    self.contentScaleFactor = 1.8;
    eaglLayer.contentsScale = 1.8;
}

Now once you create the render buffers for your game, they’ll appear at 1.8x resolution in each direction, which is very slightly softer than 2.0 but much, much crisper than 1.0. I waited until I Am Dolphin cleared the Apple App Store approval process to make sure that they wouldn’t red-flag this usage. Now that it’s out, I feel fairly comfortable sharing it. This can also be layered with multisampling (which I’m also doing) to fine-tune the look of poly edges that would otherwise give away the trick; see the sketch below. I use this technique to get high resolution, high quality sharp rendering at 60 fps across the entire range of Apple devices, from the lowly iPhone 4S, iPod 5, and iPad 3 on up.
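For the curious, here’s roughly what the combined fractional-scale plus MSAA setup looks like. This is a minimal sketch rather than our production code: it assumes an OpenGL ES 2.0 EAGLContext named context, a CAEAGLLayer named eaglLayer, and the GL_APPLE_framebuffer_multisample extension (available on the devices mentioned); the buffer names are mine, and the depth attachment and error checking are omitted for brevity.

GLuint resolveFramebuffer, resolveColorbuffer;
GLuint msaaFramebuffer, msaaColorbuffer;

// Fractional retina scale: the layer's backing store becomes 1.8x the
// view's logical size in each direction.
self.contentScaleFactor = 1.8;
eaglLayer.contentsScale = 1.8;

// Resolve framebuffer, with color storage allocated from the layer itself.
glGenFramebuffers(1, &resolveFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, resolveFramebuffer);
glGenRenderbuffers(1, &resolveColorbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, resolveColorbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, resolveColorbuffer);

// Ask GL what backing size the 1.8 scale actually produced.
GLint fbWidth = 0, fbHeight = 0;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &fbWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &fbHeight);

// Multisampled framebuffer that the scene actually renders into.
glGenFramebuffers(1, &msaaFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);
glGenRenderbuffers(1, &msaaColorbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, msaaColorbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES,
                                      fbWidth, fbHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msaaColorbuffer);

// Per frame: render to msaaFramebuffer, then resolve into the drawable
// and present it.
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolveFramebuffer);
glResolveMultisampleFramebufferAPPLE();
glBindRenderbuffer(GL_RENDERBUFFER, resolveColorbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];

The nice part is that the fractional scale never shows up again after those first two lines: renderbufferStorage:fromDrawable: allocates whatever the layer’s scaled size works out to, and the MSAA buffers simply copy that size.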

October 9, 2014

I Am Dolphin – Kinect Prototype

Filed under: Non-technical — Promit @ 5:29 pm

I’d hoped to write up a nice post for this, but unfortunately I haven’t had much time lately. Releasing a game, it turns out, is not at all relaxing. Work doesn’t end when you hit that submit button to Apple.

In the meantime, I happened to put together a video showing a prototype of the game running off Kinect control. I thought you all might find it interesting, as it’s a somewhat different control scheme from the touch screen. Personally I think it’s the best version of the experience we’ve made, and we’ve built several (touch screen, mouse, PS Move, Leap, etc.). Unlike the touch screen version, you get full 3D directional control. We don’t have to infer your motion intention. This makes a big difference in the feeling of total immersion.

October 7, 2014

Our New Game: I Am Dolphin

Filed under: Games — Promit @ 9:21 pm

After an incredibly long time of quiet development, our new game, I Am Dolphin, will be available this Thursday, October 9th, on the Apple/iOS App Store. This post will be discussing the background and the game itself; I’m planning to post more technical information about the game and development in the future. This depends somewhat on people reading and commenting – tell me what you want to know about the work and I’m happy to answer as much as I can.

For those of you who may not have followed my career path over time: A close friend and I have spent quite a few years doing R&D with purely physically driven animation. There’s plenty of work out there on the subject; ours is not based on any of it and takes a completely different approach. About three years ago, we met a neurologist at the Johns Hopkins Hospital who helped us set up a small research group at Hopkins to study biological motion and create a completely new simulation system from the ground up, based around neurological principles and hands-on study of dolphins at the National Aquarium in Baltimore. Unlike many other physical animation systems, including our own previous work, the new work allows the physical simulation to be controlled as a player character. We also developed a new custom in-house framework, called the Kata Engine, to make the simulation work possible.

One of the goals in developing this controllable simulation was to learn more about human motor control, and specifically to investigate how to apply this technology to recovery from motor impairments such as stroke. National Geographic was kind enough to write some great articles on our motivations and approach:

Virtual Dolphin On A Mission

John Krakauer’s Stroke of Genius

Although the primary application of our work is medical and scientific, we’ve also spent our spare time creating a game company, Max And Haley LLC, and a purely entertainment-focused version of the game. This is the version that will be publicly available in a scant few days.

Here is a review of the game by AppUnwrapper.

“I got my hands on the beta version of the game, and it’s incredibly impressive and addictive. I spent two hours playing right off the bat without even realizing it, and have put in quite a few more hours since. I just keep wanting to come back to it. iPhones and iPads are the perfect platform for the game, because they allow for close and personal, tactile controls via simple swipes across the screen.”

I now have three shipped titles to my name; I’d say this is the first one I’m really personally proud of. It’s my firm belief that we’ve created something that is completely unique in the gaming world, without being a gimmick. Every creature is a complete physical simulation. The dolphins you control respond to your swipes, not by playing pre-computed animation sequences but by actually incorporating your inputs into the drive parameters of the underlying simulation. The end result is a game that represents actual motion control, not gesture-recognition based selection of pre-existing motions.

As I said at the beginning of the post, this is mostly a promotional announcement. However, this is meant to be a technical blog, not my promotional mouthpiece, and I want to dig into the actual development and technical aspects of this game quite a bit. There’s a lot to talk about in the course of developing a game with a three-person (two coders, one artist) team, building a complete cross-platform engine from nothing, all in the backdrop of an academic research hospital environment. Then there’s the actual development of the simulation, which included a lot of interaction with the dolphins, the trainers, and the Aquarium staff. We did a lot of filming (but no motion capture!) in the course of the development as well; I’m hoping to share some of that footage moving forward.

Here’s a slightly older trailer – excuse the wrong launch date on this version. We decided to slip the release by two months after this was created – that’s worth a story in itself. It is not fully representative of the final product, but our final media isn’t quite ready.

September 16, 2014

Game Code Build Times: RAID 0, SSD, or both for the ultimate in speed?

Filed under: Non-technical — Promit @ 7:08 pm

I’ve been in the process of building and testing a new machine using Intel’s new X99 platform. This platform, combined with the new Haswell-E series of CPUs, is the new high end of what Intel is offering in the consumer space. One of the pain points for developers is build time. For our part, we’re building in the general vicinity of 400K LOC of C++ code, some of which is fairly complex — it uses standard library and Boost headers, as well as some custom template stuff that is not simple to compile. The worst case is my five-year-old home machine, an i5-750 compiling to a single magnetic drive, which turns in a six-minute full rebuild time. Certainly not the biggest project ever, but a pretty good testbed and real production code.

I wanted to find out what storage system layout would provide the best results. Traditionally game developers used RAID 0 magnetic arrays for development, but large capacity SSDs have now become common and inexpensive enough to entertain seriously for development use. I tested builds on three different volumes:

  • A single Samsung 850 Pro 512 GB (boot)
  • A RAID 0 of two Crucial MX100 512 GB
  • A RAID 0 of three WD Black 4 TB (7200 rpm)

Both RAID setups were blank. The CPU is an i7-5930k hex-core (12 threads) and I’ve got 32 GB of memory on board. Current pricing for all of these storage configurations is broadly similar. Now then, the results. Will the Samsung drive justify its high price tag? Will the massive bandwidth of two striped SSDs scream past the competitors? Can the huge magnetic drives really compete with the pinnacle of solid state technology? Who will win?

Drumroll…

They’re all the same.

All three configurations run my test build in roughly 45 seconds, the differences between them being largely negligible. In fact it’s the WD Blacks that posted the fastest time, at 42 s. The obvious takeaway is that all of these setups are past the threshold where something else is the bottleneck. That something in this case is the CPU, and more specifically the overall hardware thread count: overclocking the CPU from 3.5 GHz to 4.5 GHz did nothing to help. I’ve heard of some studios outfitting their engineers with dual Xeon setups, and it’s not looking so crazy to do so when employee time is on the line. (The potential downside is that the machine starts to stray significantly from what the game will actually run on.) Given the results, and the sizes of modern game projects, I’d recommend using an inexpensive 500 GB SSD for a boot drive (Crucial MX100, SanDisk Ultra II, 840 EVO) and stocking up on the WD Blacks for data. Case closed.

But… as long as we’re here, why don’t we take a look at what these drives are benchmarking at? The 850 Pro is a monster of a drive. Those striped MX100s might be the real heroes, though; ATTO shows them flirting with a full gigabyte per second of sequential transfer. Here are the raw CrystalDiskMark numbers for all three:

Samsung 850 Pro:

Sequential Read : 520.557 MB/s
Sequential Write : 489.836 MB/s
Random Read 512KB : 407.993 MB/s
Random Write 512KB : 465.648 MB/s
Random Read 4KB (QD=1) : 24.216 MB/s [ 5912.1 IOPS]
Random Write 4KB (QD=1) : 71.216 MB/s [ 17386.7 IOPS]
Random Read 4KB (QD=32) : 398.378 MB/s [ 97260.3 IOPS]
Random Write 4KB (QD=32) : 331.571 MB/s [ 80950.0 IOPS]

2x Crucial MX100 in RAID 0:

Sequential Read : 898.908 MB/s
Sequential Write : 905.506 MB/s
Random Read 512KB : 695.787 MB/s
Random Write 512KB : 854.666 MB/s
Random Read 4KB (QD=1) : 26.271 MB/s [ 6413.8 IOPS]
Random Write 4KB (QD=1) : 110.554 MB/s [ 26990.8 IOPS]
Random Read 4KB (QD=32) : 430.077 MB/s [104999.3 IOPS]
Random Write 4KB (QD=32) : 413.606 MB/s [100978.0 IOPS]

3x WD Black 4TB in RAID 0:

Sequential Read : 530.522 MB/s
Sequential Write : 494.534 MB/s
Random Read 512KB : 61.752 MB/s
Random Write 512KB : 162.619 MB/s
Random Read 4KB (QD=1) : 0.724 MB/s [ 176.7 IOPS]
Random Write 4KB (QD=1) : 4.461 MB/s [ 1089.1 IOPS]
Random Read 4KB (QD=32) : 5.090 MB/s [ 1242.8 IOPS]
Random Write 4KB (QD=32) : 5.307 MB/s [ 1295.6 IOPS]

I don’t claim that these numbers are reliable or representative; I’m only posting them to give a general sense of the performance characteristics of each choice. The SSDs decimate the magnetic drive setup for random ops, though the 512 KB values are respectable. I had expected the 4K random reads, for which SSDs are known, to have a significant impact on build time, but that clearly isn’t the case. The WDs can only dispatch 177 of those per second, 33x slower than the 850 Pro, yet that is still faster than the compiler can consume them. Even in the best case scenarios, a C++ compiler won’t be able to clear out more than a couple dozen files a second.
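If you want to convince yourself that small-file reads aren’t the limiter on your own machine, here’s the sort of quick-and-dirty harness I’d start with. To be clear, this is a hypothetical sketch, not the tool behind the numbers above; it assumes a POSIX-ish environment for gettimeofday(), and the results only mean anything on a cold file cache. Pass it a list of source files:

/* smallread.c: count how many buffered ~4 KB reads per second one thread
   can pull off while opening files one at a time, compiler-style. */
#include <stdio.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    char buffer[4096];
    struct timeval start, end;
    long files = 0, reads = 0;

    gettimeofday(&start, NULL);
    for (int i = 1; i < argc; ++i) {
        FILE *f = fopen(argv[i], "rb");
        if (!f)
            continue;
        while (fread(buffer, 1, sizeof buffer, f) > 0)
            ++reads;    /* one buffered ~4 KB read per iteration */
        fclose(f);
        ++files;
    }
    gettimeofday(&end, NULL);

    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_usec - start.tv_usec) / 1e6;
    if (secs < 1e-6)
        secs = 1e-6;    /* avoid dividing by zero on tiny inputs */
    printf("%ld files, %ld reads in %.2fs: %.0f files/s, %.0f reads/s\n",
           files, reads, secs, files / secs, reads / secs);
    return 0;
}

Even the WD array should report throughput well beyond a couple dozen files a second here, which is exactly why the storage stopped mattering.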

May 14, 2013

A Pixel Is NOT A Little Square

Filed under: Non-technical — Promit @ 12:57 pm

It’s good to review the fundamentals sometimes. Written in 1995 and often forgotten: A Pixel Is Not A Little Square.

April 29, 2013

Oddly Elaborate Apple Error Message

Filed under: Non-technical — Promit @ 3:51 pm

I just wanted to share this. Popped up today while initializing an NSDateComponents object.

    components:fromDate:toDate:options:]: fromDate cannot be nil
    I mean really, what do you think that operation is supposed to mean with a nil fromDate?
    An exception has been avoided for now.
    A few of these errors are going to be reported with this complaint, then further violations will simply silently do whatever random thing results from the nil.
    Here is the backtrace where this occurred this time (some frames may be missing due to compiler optimizations):

So that was unexpected.
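For context, the call that sets this off is shaped something like the following. This is a hypothetical reconstruction (the variable names are mine), assuming the complaint comes out of NSCalendar’s components:fromDate:toDate:options: as the message suggests:

// A nil date sneaks into a calendar query, e.g. from an upstream
// parse that silently failed.
NSCalendar *calendar = [NSCalendar currentCalendar];
NSDate *startDate = nil;
NSDateComponents *elapsed =
    [calendar components:NSDayCalendarUnit
                fromDate:startDate
                  toDate:[NSDate date]
                 options:0];

Pre-nil-check code like this is everywhere, which is presumably why Apple went to the trouble of writing such a chatty assertion.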

April 11, 2013

The Scandalous Yetizen Costume

Filed under: Games,Non-technical — Promit @ 12:18 am

There’s been a lot of chatter on the various blogs and news sites about the IGDA and Yetizen party incident. I’m not going to rehash that. See these articles if you’re not up to date on the whole controversy:

http://www.joystiq.com/2013/03/28/igda-party-features-dancers-prompts-controversy-resignations/

http://www.joystiq.com/2013/04/09/igda-defines-new-rules-for-future-industry-parties-after-gdc-mi/

http://yetizen.com/2013/03/30/official-statement-by-the-yetizen-ceo-on-the-yetizen-igda-gdc-party/2/

I will say that I thought the controversy was a wholly pointless, manufactured thing, and that Brenda Romero’s resignation did not help anybody. That said, I was a little surprised to discover that the scandalous, allegedly inappropriate outfits that created all this trouble aren’t actually shown anywhere in any of the news about the incident. At all. Not on Joystiq, not on the Gawker-owned Kotaku, nowhere. I thought that was strange. Luckily I have photos of the Yetizen models from the previous year, so… here it is. This is the outfit that forced two IGDA members to resign.
[Photo: the Yetizen outfits]
Now you know.

April 2, 2013

A Glimpse of What I’m Working On

Filed under: Graphics — Promit @ 12:32 am

I’ve decided to focus a little less on complaining and a little more on the actual work I do. Here’s a teaser:
[Photo: the monitor array]
I had a substantial amount of help with the over-water environmental rendering (not pictured) from a friend of mine, Nauful Shaikh. See his site for some great graphics work.

This wall of monitors was graciously made available to us by the Computer Science department for a presentation to the President of the University, as well as a healthy mix of department chairs from Neuroscience, Neurology, the Brain Sciences Institute, Computer Science, and Electrical/Computer Engineering at Johns Hopkins. I’m driving it at 60 fps off a single 7970 in Eyefinity 6. It was supposed to be CrossFire, but somebody’s driver is broken *cough cough* so I had to gut the render pipeline somewhat. Total resolution is 5760×2160 plus some margin for bezel compensation. The actual app is Kinect and PS Move enabled, and maybe I can share more about it this summer. The focus is a dolphin, which we’ve developed with significant help and guidance from the National Aquarium in Baltimore, who let us work directly with their dolphins to better understand the animals and how they move and think.

We’re planning to launch an iPad version this year on the iTunes App Store, and create a large scale interactive installation version for aquariums, hospitals, museums and similar at 4K resolution in stereoscopic 3D.

January 31, 2013

Follow-up on DirectX/XNA

Filed under: Graphics,SlimDX — Promit @ 5:45 pm

Received today, and hopefully the “you can quote me” part means this is an exception to NDA because it’s important:

The message said “DirectX is no longer evolving as a technology.” That is definitely not true in any way, shape or form. Microsoft is actively investing in DirectX as the unified graphics foundation for our key platforms, including Xbox 360, Windows Phone and Windows. DirectX is evolving and will continue to evolve. For instance, right now we’re investing in some very cool graphics code authorizing [sic] technology in Visual Studio. We have absolutely no intention of stopping innovation with DirectX, and you can quote me on that. :)

My intent was not to start a firestorm of questioning over DirectX’s future viability, and I said up front that I felt the communication was poorly worded with regards to intent. My frustrations were apparently poorly worded as well. Since I accidentally launched this, let’s clear up a few things.

Number One: In the absolute (and implausible) worst case scenario that MS really scales back their Direct3D support to a minimum, that situation is still better than OpenGL. The Direct3D system is a technically superior piece of technology, and support for working with it is still better than OpenGL whether you’re a hobbyist or a pro. I cannot emphasize this point enough, so for the love of god stop bringing up OpenGL. It’s a badly designed API and has been since I started doing this in 2000.
Number Two: A new picture is coming into focus that shifts a lot of the DirectX SDK’s burden onto VS. This hasn’t been made previously clear to us on the MVP side. As I’ve begun to explore the tools already inside VS 2012, I like what I’m seeing. It’ll take some time to see how it all plays out, but in a very real way having Direct3D integrated into core VS development is a serious promotion.
Number Three: There’s more content in today’s email regarding XNA which I don’t care to share, thanks to a stern NDA reminder. (Ironically, when MS finally gives us what they should be saying to the public all along, I can’t share it.) But this is very much a case of “put up or shut up” and defending XNA’s status as a serious technology seems patently ridiculous to me right now. The community, whether it’s my work or someone else’s, has stepped in to integrate .NET and DirectX for many wonderful use cases. But there are things we can’t do (like Xbox) and it’s clear that matters to a lot of people. It’s not clear that it matters to Microsoft.

That said, I am not walking back my actual complaints about how DirectX and XNA are being handled. I like the work that’s been done in integrating VS and DirectX, which is arguably many years overdue. That doesn’t make everything else okay. The fact that we’re having this discussion, the fact that my dashed-off blog post exploded on Twitter, the fact that clarification had to be written up behind the scenes — this is a problem. Which brings me at long last to the actual point I was trying to make yesterday:

As developers, we need Microsoft to communicate clearly with us, in public. As MVPs we were asked to act as community representatives, to guide everyone interested in the tech and have an open line on future development. Apparently that means we get half-hearted, vague emails from time to time that dodge our serious questions and cast further doubt on the status of the technology and teams, all covered by an NDA. And then, shockingly enough, people get the wrong idea. We’re sitting on the outside, trying to play this stupid guessing game of “which Microsoft technology is alive?” XNA doesn’t support DirectX 10+ or Windows 8, but it’s still a “supported product”, as if that means anything in the real world. Windows XP is still a “supported product” too.

It shouldn’t take a leaked email to force a straight answer.
