Ghost Binaural Audio Video

The embedded video refuses to default to HD, and HD is important for this one. Watch it in HD.

Did you forget we have an audio technology too? Grab some headphones (they’re required) and listen to this video. And here’s the blurb:

Ghost Binaural Audio is a real-time 3D audio SDK for video games and virtual reality applications. It’s extremely easy to integrate into your source code. Simply point to your mono source audio files, update each with a 3D positional vector every frame, and you’re good to go. Optional control parameters also give you detailed control over things like air absorption, volume, pitch, looping, etc.

Ghost Binaural Audio runs with a high level of efficiency, allowing you to have 5, 10, or even 20+ audio sources playing simultaneously with realistic positional sound (depending on your hardware).

Be warned: Ghost will completely change the way you design audio for games, and you will never go back to the boring, flat audio you’ve used before. There’s just no comparison, period. Take your games to the next level of audio immersion and realism!

The Ghost Binaural Audio SDK will be available shortly. Stay tuned or contact us for more details.

We will have a LIVE DEMO of the Ghost technology at GDC as well!


Selling Middleware

So a few days ago, we published a video demo of our BioReplicant technology. In particular, we published it without saying much: no explanation of how it works, what problems it solves, or how it could be used. That was a very important and carefully calculated decision. I felt it was critical that people be allowed to see our technology without any tinting or leading on our part. Some of the feedback was very positive, some very negative, and a whole lot in between. I’m sure we’ll get an immense amount more at GDC, but this initial experience has been critical in understanding what people want and what they think we’re offering.

To a large extent, people’s expectations do not align with what BioReplicants actually does. Our eventual goal is to meet those expectations, but in the meantime there is a very tricky problem of explaining what our system actually does for them. I think that will continue to be a problem, exacerbated by the fact that on the surface, we seem to be competing with NaturalMotion’s Euphoria product, and in fact we’ve encouraged that misconception.

In truth, it’s not the case. We aren’t doing anything like what NM does internally, and all we’re really doing is trying to solve the same problem every game has to solve. Everybody wants realistic, varied, complex, and reactive animations for their game. Everybody! And frankly, they don’t need Euphoria or BioReplicants to do it. There are at least three GDC talks this year on the subject. That’s why it’s important to step back and look at why middleware even exists.

The very first thing to realize is that games are hard to make. There is a wide array of complex intersecting problems that go into the production of every last title you see on the shelf, in Steam, or anywhere else.

  1. Engineering a game’s underlying systems is complicated in the best of situations, and every title evolves dramatically, throwing half your previous work out the door every six months. Developers do not want anything that makes their lives harder. Developers love almost anything that makes their lives easier!
  2. Art production for a game is incredibly time consuming. Time is money. Streamlined production is worth a lot of money.
  3. Games are expected to run on very limited hardware. How much RAM do you guys have in your laptops? A PlayStation 3 has 256 MB of system RAM. Two hundred fifty-six.
  4. Designing a fun game that isn’t a rehash of everything before it is very, very tricky. Companies go to great lengths to make a game not look like a rehash. Here’s a hint: developer diaries are rarely produced for any other reason. Nobody would ever have noticed NM Euphoria in Force Unleashed otherwise.
  5. If there’s one thing harder than making a game, it’s selling it. Each console gets a couple hundred new game releases every year, and most of them represent a substantial loss. Publishers are willing to do an awful lot to avoid that loss. A new gimmick might flop, but when you’re dropping $30M to ship a game, spending $32M instead is probably worth the odds.

The point of middleware is simply to alleviate one or more of these problems. That’s all. Selling middleware, then, is mainly a matter of convincing people in games that you can tackle these problems in a net positive way. Most game developers are also gamers, and our instinct is to focus on 5 and to a lesser extent 4 (the two are somewhat intertwined). Middleware developers usually try to convince game developers that their games will be more fun with the middleware product. We’ve been doing the same. It’s very possible that it’s true, but it’s just one piece of the puzzle.

So in moving from a concept to a product, it’s critical to start with a very solid understanding of these points, and to pick exactly which ones the product attacks. Nobody hits all five. With Force Unleashed, LucasArts designed a game based almost entirely around Euphoria, and you know what? It’s a terrible game. Once you get past the vaguely clever physics, there is no substance there. Asking people to design games around BioReplicants is a tall order. NM has been forced into the position of doing it for themselves.

“Every tackle is different.” As if EA et al. haven’t thought of this already. A big failing of NaturalMotion’s Euphoria is that it is an epic amount of work to integrate. It’s very easy to spot a single animation being overused in a game. But throw in five animations and be vaguely smart about which one you pick and when, and suddenly no one can tell the difference. Put in some parametric control over the details of the animation, and you can probably turn those five animations into twenty. All it cost you was extra staff to do the animations and a programmer to figure out how to squeeze them into memory. Kind of expensive, but a hell of a lot easier than a Euphoria integration. We’re not competing against NaturalMotion. The two of us are competing against the status quo.

A shooter now will have half a dozen knock-back animations for hits to different body parts, another half dozen for falls, and so on. Somebody sits down and animates them. Can we make that animator’s job more efficient? I bet we can. How much memory do you suppose those animations eat? If we can do them on the fly and open up that memory for other things, engineering staff will trip over themselves to thank us. I bet a BioReplicant walk takes up a lot less memory than a keyframed walk, and I bet we can replace a dozen hand-keyed animations with two physically driven ones.

Revolutionizing how people play games is glamorous and tempting. It’s an important goal. Full-blown rigid-body physics made that jump somewhere between five and ten years ago. But that’s just not how you sell a middleware product. On the other hand, if you can deliver a moderately sized change in game development, developers will happily pay you. After that, you can start changing the actual games.

BioReplicant Keeps Walking


Click for High Def version.

This is what we’ve been working on for the last several months at AR Labs.

Forget falls.
Forget tackles.
BioReplicant keeps walking.
info@actionreactionlabs.com for more information.
———————–
BioReplicants is a completely reactive procedural animation system for use in video games. No keyframing, motion capture, or precomputed animations were used. Everything you see here was generated in real time, reacting to human input. Oh, and it’s efficient enough to run on an iPhone.

We know he looks crazy. Sure, we could’ve made it realistic, but it’s just not that interesting to watch. BioReplicant can keep going even through bone-crushing impacts, and we think that’s pretty cool.

We’ll be showing off the LIVE DEMO at GDC. Catch up with us to try it out!

Preparing for GDC

It’s amazing how fast this snuck up on us. This time next week, I will be getting ready for my first day of my first-ever GDC. I’m getting ready for the conference on several fronts. One of those fronts is figuring out who I need to meet. On the off chance somebody wants to talk to me, they’re welcome to do that too.

Step 1 is communication. I’ll be getting the phone numbers of friends (you people know who you are). For overall communication and simply keeping track of what the hell’s going on, I am going to try to use this Twitter thing. I don’t like Twitter, but I guess this might be something it’s good at. My Twitter account is d3dhaxxor, so if you’ll be at GDC, let me know.

After that, I have no idea. I’m trying to figure out which parties I have to be at, like GameDev.Net’s mixer for example. If there’s somewhere else I ought to be, let me know!