Promit's Ventspace

November 5, 2014

Sony A77 Mark II: EVF Lag and Blackout Test

Filed under: Non-technical — Promit @ 4:05 pm

I’m planning to review this camera properly at some point, but for the time being, I wanted to do a simple test of what the parameters of EVF lag and blackout are.

Let’s talk about lag first. What do we mean? The A77 II uses an electronic viewfinder, which means that the viewfinder is a tiny LCD panel showing a feed of what the imaging sensor currently sees. This view takes camera exposure and white balance into account, allowing you to get a feel for what the camera is actually going to record when the shutter fires. However, downloading and processing the sensor data, and then showing it on the LCD, takes time. Your shutter timing needs to compensate for this lag; if you hit the shutter at the exact moment an event occurs on screen, the lag is how late you will actually fire the shutter as a result.

How do we test the lag? Well, the A77 II’s rear screen shows exactly the same display as the viewfinder, presumably with very similar lag. So all we have to do is point the camera at an external timer, and photograph both the camera and the timer simultaneously. And so that’s exactly what I did.
[Image: P1030514-screen]
Note that I didn’t test whether any particular camera settings affected the results. The settings are pretty close to defaults. “Live View Display” is set to “Setting Effect ON”. These are the values I got, across 6 shots, in milliseconds:
32, 16, 17, 34, 17, 33 = 24.8 ms average
I discarded a few values due to an illegible screen (mid transition), but you get the picture. The rear LCD and my monitor are both running at a 60 Hz refresh rate, which means that a new value appears on screen every ~16.67 ms. The lag wobbles between one and two frames, but this is mostly due to the desynchronization of the two screens’ refresh intervals. It’s not possible to measure any finer using this method, unfortunately. However, the average gives us a good ballpark figure of roughly 25 ms. Consider that a typical computer LCD is already going to be in the 16 ms range for lag, and TVs frequently run in excess of 50 ms. This is skirting the bottom of what the fastest humans (pro gamers, etc.) can detect. Sony’s done a very admirable job of getting the lag under control here.
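
To make the arithmetic explicit, here is a minimal sketch (Python, with the readings above hard-coded) of how the average works out and how coarse the measurement is given the 60 Hz refresh of both screens:

    # Minimal sketch: average the measured EVF lag readings and show the
    # quantization limit imposed by the 60 Hz refresh of both screens.
    readings_ms = [32, 16, 17, 34, 17, 33]  # values read off the photos

    frame_ms = 1000.0 / 60.0                # ~16.67 ms per refresh

    average = sum(readings_ms) / len(readings_ms)
    print(f"average lag: {average:.1f} ms")            # ~24.8 ms
    print(f"refresh quantum: {frame_ms:.2f} ms")       # each reading is only good to about +/-1 frame
    print(f"lag in frames: {average / frame_ms:.2f}")  # ~1.5 frames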

Next up: EVF blackout. What is it? Running the viewfinder is essentially a continuous video processing job for the camera, using the sensor feed. In order to take a photo, the video feed needs to be stopped, the sensor needs to be blanked, the exposure needs to be taken, the shutter needs to be closed, the image downloaded off the sensor into memory, and then the shutter must open again and the video feed must be resumed. The camera’s view goes black during this entire process, which can take quite a long time. To test this, I simply took a video of the camera while clicking off a few shots (1/60 shutter). Here’s a GIFed version at 20 fps:
[Animation: P1030523]
By stepping through the video, I can see how long the screen is black. These are the numbers I got, counted in 60 Hz video frames:
17, 16, 16, 17, 16, 16 frames = 272 ms average
The results here are very consistent; we’ll call it a 0.27 second blackout time. For comparison, Canon claims that the mirror blackout on the Canon 7D is 0.055 seconds, so this represents a substantial difference between the two cameras. It also seems to be somewhat worse than my Panasonic GH4, another EVF-based camera, although I haven’t measured it. I think this is an area where Sony needs to do a bit more work, and I would love to see a firmware update that tries to get this down to at least under 200 ms.
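
For reference, the conversion from counted video frames to blackout time is just the frame count times the 60 Hz frame period. A quick sketch, using the counts above and Canon’s published 7D figure for comparison:

    # Minimal sketch: convert blackout durations counted in 60 Hz video frames
    # to milliseconds and compare against Canon's claimed 7D mirror blackout.
    frame_ms = 1000.0 / 60.0                 # ~16.67 ms per video frame
    blackout_frames = [17, 16, 16, 17, 16, 16]

    avg_ms = sum(blackout_frames) / len(blackout_frames) * frame_ms
    print(f"A77 II average blackout: {avg_ms:.0f} ms")   # ~272 ms

    canon_7d_ms = 55.0                       # Canon's claimed 0.055 s mirror blackout
    print(f"ratio vs. 7D: {avg_ms / canon_7d_ms:.1f}x")  # roughly 5x longer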

October 15, 2014

Time Capsule Draft: “Speculating About Xbox Next”

Filed under: Non-technical — Promit @ 11:50 am

I was digging through my Ventspace post drafts, and I found this writeup that I apparently decided not to post. It was written in March of 2012, a full year and a half before the Xbox One arrived on the market. In retrospect, I’m apparently awesome. On the one hand, I wish I’d posted this at the time, because it’s eerily accurate. On the other hand, the guesses are actually accurate enough that this might have looked to Microsoft like a leak rather than speculation. Oh well. Here it is for your amusement. I haven’t touched a thing.


I’ve been hearing a lot of rumors, though the credibility of any given information is always suspect. I have some supposed info about the specs on the next Xbox, but I’m not drawing on any of that info here. I’m dubious about at least some of the things I heard, and it’s not good to spill that kind of info if you’re trying to maintain a vaguely positive relationship with a company anyway. So what I’m presenting here is strictly speculation based on extrapolation of what we’ve seen in the past and overall industry and Microsoft trends. I’m also assuming that MS is fairly easy to read and that they’re unlikely to come out of left field here.

  • 8 GB shared memory. The original Xbox had 64 MB of shared memory. The Xbox 360 has 512 MB, a jump of 8x. This generation is dragging along a little longer, and memory prices have dropped violently in the last year or so. I would like to see 16 GB actually, but the consoles always screw us on memory and I just don’t think we’ll be that lucky. 4 GB is clearly too low; they’d be insane to ship a console with that now. As for the memory type, we’re probably talking simple (G)DDR3 shared modules. The Xboxes have always been shared memory and there’s no reason for them to change that now. Expect some weird addressing limitations on the GPU side.
  • Windows 8 kernel. All indications are that the WinCE embedded kernel is being retired over the next two years (at least for internal use). There’s a substantial tech investment in Windows 8, and I think we’re going to see the desktop kernel roll out across all three screens. (HINT HINT.) iOS and Android are both running stripped desktop kernels, and the resources in current mobile platforms make WinXP’s minimum hardware requirements look comically low. There is no reason to carry the embedded kernel along any longer. I wouldn’t want to be a CE licensee right now.
  • x86-64, 8×2 threads, out of order CPU. There are three plausible CPU architectures to choose from: x86, ARM, and PowerPC. Remember what I said about the Windows 8 kernel? There’s no Windows 8 PPC build, and we’re not going to see PowerPC again here. ARM is of course a big focus right now, but the design parameters of the current chips simply won’t accommodate a console. They’re not fast enough and that can’t be easily revised. That pretty much leaves us with x86. The only extant in-order x86 architecture is Intel Atom, which sucks. I think they’ll get out of order for free from the existing architectures. As far as the CPU, 8 core is essentially the top of the market right now, and I’m assuming they’ll hyperthread it. They’ll probably steal a core away from the OS, and I wouldn’t be surprised if they disable another core for yield purposes. That means six HT cores, which is a simple doubling of the current Xbox. I have a rumored clock-speed, but have decided not to share. Think lower rather than higher.
  • DirectX 11 GPU — AMD? DX11 class should be blatantly obvious. I have reason to believe that AMD is the supplier, and I did hear a specific arch but I don’t believe it. There’s no word in NVIDIA land about a potential contract, either. No idea if they’re giving the design ownership to MS again or anything like that, all I know is the arrows are all pointed the same way. There are some implications for the CPU here.
  • Wifi N and Gigabit ethernet. This is boring standard consumer networking hardware. No surprises here.
  • Optical drive? — I don’t think they want to have one. I do think they have to have one, though you can definitely expect a stronger push towards digital distribution than ever. There’s no choice but to support Blu-ray at this point. Top tier games simply need the space. I suspect that we’ll see a very large (laptop grade) hard drive included in at least some models. Half a terabyte, with larger sizes later in the lifecycle. That is purely a guess, though.
  • AMD Fusion APU? — I’m going to outlandishly suggest that a Fusion APU could be the heart of this console. With an x86 CPU and a mainstream Radeon core in about the right generation, the existing Fusion product could be retooled for use in a console. Why not? It already has the basic properties you want in a console chip. The big sticking points are performance and heat. It’s easy to solve either one but not both at once, and we all know what happened last time Microsoft pushed the heat envelope too far. If it is Fusion architecture, I would be shocked if they were to actually integrate the CPU and GPU dies.
  • Kinect. — Here’s another outlandish one: Every Xbox Next will include a Kinect (2?), in the box. Kinect has been an enormous winner for Microsoft so far on every single front, and this is where they’re going to draw the battle lines against Nintendo and Sony. Nintendo’s control scheme is now boring to the general public, with the Wii U being introduced to a resounding “meh”. PS Move faded into irrelevance the day it was launched. For the first time in many years, the Xbox is becoming the casual gamers’ console and they’re going to hammer that advantage relentlessly. Microsoft is also pushing use of secondary features (eg microphone) for hardcore games — see Mass Effect 3.
  • $500. Yes, it’s high, although not very high once you adjust for inflation. The Xbox 360 is an extremely capable device, especially for the not-so-serious crowd. It’s also pure profit for Microsoft, and really hitting its stride now as the general public’s long tail console. There’s no need to price its successor aggressively, and the stuff I just described is rather expensive besides. A $600 package option at launch would not be surprising.
  • November 2013. As with the last two Xboxes, it will be launched for the holiday season. Some people were saying it would be announced this year but the more I think about it, the less it makes sense to do so. There’s no way it’s launching this year, and they’re not going to announce it a year and some ahead of time. E3 2013 will probably be the real fun.

There are some problems with the specs I’ve listed so far. AMD doesn’t produce the CPU I described. Not that the rumors match any other known CPU, but Intel is closer. I don’t think one of the Phenom X6 designs is a credible choice. The Xbox 360 CPU didn’t match any existing chips either, so this may not really be a problem. The total package price would have to be quite high with a Kinect 2 included. The Xbox 360 may function as a useful buffer against being priced out of the market.

October 9, 2014

I Am Dolphin – Kinect Prototype

Filed under: Non-technical — Promit @ 5:29 pm

I’d hoped to write up a nice post for this, but unfortunately I haven’t had much time lately. Releasing a game, it turns out, is not at all relaxing. Work doesn’t end when you hit that submit button to Apple.

In the meantime, I happened to put together a video showing a prototype of the game, running off Kinect control. I thought you all might find it interesting, as it’s a somewhat different control scheme than the touch screen. Personally I think it’s the best version of the experience we’ve made, and we’ve had several (touch screen, mouse, PS Move, Leap, etc.). Unlike the touch screen version, you get full 3D directional control. We don’t have to infer your motion intention. This makes a big difference in the feeling of total immersion.

September 16, 2014

Game Code Build Times: RAID 0, SSD, or both for the ultimate in speed?

Filed under: Non-technical — Promit @ 7:08 pm

I’ve been in the process of building and testing a new machine using Intel’s new X99 platform. This platform, combined with the new Haswell-E series of CPUs, is the new high end of what Intel is offering in the consumer space. One of the pain points for developers is build time. For our part, we’re building in the general vicinity of 400K LOC of C++ code, some of which is fairly complex — it uses standard library and boost headers, as well as some custom template stuff that is not simple to compile. The worst case is my five-year-old home machine, an i5-750 compiling to a single magnetic drive, which turns in a six-minute full rebuild time. Certainly not the biggest project ever, but a pretty good testbed and real production code.

I wanted to find out what storage system layout would provide the best results. Traditionally game developers used RAID 0 magnetic arrays for development, but large capacity SSDs have now become common and inexpensive enough to entertain seriously for development use. I tested builds on three different volumes:

  • A single Samsung 850 Pro 512 GB (boot)
  • A RAID 0 of two Crucial MX100 512 GB
  • A RAID 0 of three WD Black 4 TB (7200 rpm)

Both RAID setups were blank. The CPU is an i7-5930k hex-core (12 threads) and I’ve got 32 GB of memory on board. Current pricing for all of these storage configurations is broadly similar. Now then, the results. Will the Samsung drive justify its high price tag? Will the massive bandwidth of two striped SSDs scream past the competitors? Can the huge magnetic drives really compete with the pinnacle of solid state technology? Who will win?

Drumroll…

They’re all the same.

All three configurations run my test build in roughly 45 seconds, the differences between them being largely negligible. In fact it’s the WD Blacks that posted the fastest time at 42 s. The obvious takeaway is that all of these setups are past the threshold where something else is the bottleneck. That something in this case is the CPU, and more specifically the overall hardware thread count. Overclocking the CPU from 3.5 to 4.5 GHz did nothing to help. I’ve heard of some studios outfitting their engineers with dual Xeon setups, and it doesn’t look so crazy to do so when employee time is on the line. (The potential downside is that the machine starts to stray significantly from what the game will actually run on.) Given the results, and the sizes of modern game projects, I’d recommend using an inexpensive 500 GB SSD for a boot drive (Crucial MX100, Sandisk Ultra II, 840 EVO), and stocking up on the WD Blacks for data. Case closed.
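
For anyone who wants to repeat this informally, the measurement itself is nothing fancy: put a copy of the project on each volume, do a clean full rebuild, and time it. A rough sketch follows; the paths and the build command are placeholders for whatever your project and build system actually use (MSBuild here, but make or anything else works the same way).

    # Rough sketch of the timing harness: run a full rebuild of a project copy
    # on each volume under test and report wall-clock time.
    # The paths and build command below are placeholders, not my actual setup.
    import subprocess
    import time

    volumes = {
        "850 Pro":       r"C:\dev\project",
        "2x MX100 RAID": r"D:\dev\project",
        "3x WD Black":   r"E:\dev\project",
    }

    BUILD_CMD = ["msbuild", "Project.sln", "/t:Rebuild", "/m"]  # placeholder command

    for name, path in volumes.items():
        start = time.perf_counter()
        subprocess.run(BUILD_CMD, cwd=path, check=True)
        elapsed = time.perf_counter() - start
        print(f"{name}: {elapsed:.1f} s")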

But… as long as we’re here, why don’t we take a look at what these drives are benchmarking at? The 850 Pro is a monster of a drive. Those striped MX100s might be the real heroes, though; ATTO shows them flirting with a full gigabyte per second of sequential transfer. Here are the raw CrystalDiskMark numbers for all three:

Samsung 850 Pro:

Sequential Read : 520.557 MB/s
Sequential Write : 489.836 MB/s
Random Read 512KB : 407.993 MB/s
Random Write 512KB : 465.648 MB/s
Random Read 4KB (QD=1) : 24.216 MB/s [ 5912.1 IOPS]
Random Write 4KB (QD=1) : 71.216 MB/s [ 17386.7 IOPS]
Random Read 4KB (QD=32) : 398.378 MB/s [ 97260.3 IOPS]
Random Write 4KB (QD=32) : 331.571 MB/s [ 80950.0 IOPS]

2x Crucial MX100 in RAID 0:

Sequential Read : 898.908 MB/s
Sequential Write : 905.506 MB/s
Random Read 512KB : 695.787 MB/s
Random Write 512KB : 854.666 MB/s
Random Read 4KB (QD=1) : 26.271 MB/s [ 6413.8 IOPS]
Random Write 4KB (QD=1) : 110.554 MB/s [ 26990.8 IOPS]
Random Read 4KB (QD=32) : 430.077 MB/s [104999.3 IOPS]
Random Write 4KB (QD=32) : 413.606 MB/s [100978.0 IOPS]

3x WD Black 4TB in RAID 0:

Sequential Read : 530.522 MB/s
Sequential Write : 494.534 MB/s
Random Read 512KB : 61.752 MB/s
Random Write 512KB : 162.619 MB/s
Random Read 4KB (QD=1) : 0.724 MB/s [ 176.7 IOPS]
Random Write 4KB (QD=1) : 4.461 MB/s [ 1089.1 IOPS]
Random Read 4KB (QD=32) : 5.090 MB/s [ 1242.8 IOPS]
Random Write 4KB (QD=32) : 5.307 MB/s [ 1295.6 IOPS]

I don’t claim that these numbers are reliable or representative. I am only posting them to provide a general sense of the performance characteristics involved in each choice. The SSDs decimate the magnetic drive setup for random ops, though the 512 KB values are respectable. I had expected the 4K random read, for which SSDs are known, to have a significant impact on build time, but that clearly isn’t the case. The WDs are able to dispatch 177 of those reads per second; despite being 33x slower than the 850 Pro, that is still far more than the compiler can consume. Even in the best case scenarios, a C++ compiler won’t be able to clear out more than a couple dozen files a second.
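
To put that comparison in rough numbers, here is a back-of-envelope sketch. The compiler throughput figure is just the estimate above (“a couple dozen” files per second), and this deliberately ignores the fact that each translation unit touches many header files:

    # Back-of-envelope: QD1 4K random read IOPS per setup vs. an assumed
    # compiler throughput of about two dozen files per second.
    qd1_read_iops = {
        "Samsung 850 Pro":    5912.1,
        "2x MX100 RAID 0":    6413.8,
        "3x WD Black RAID 0":  176.7,
    }
    compiler_files_per_sec = 24  # assumption, not a measurement

    for name, iops in qd1_read_iops.items():
        print(f"{name}: {iops / compiler_files_per_sec:.0f}x headroom over the compiler")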

May 14, 2013

A Pixel Is NOT A Little Square

Filed under: Non-technical — Promit @ 12:57 pm

It’s good to review the fundamentals sometimes. Written in 1995 and often forgotten: A Pixel Is Not A Little Square.

April 29, 2013

Oddly Elaborate Apple Error Message

Filed under: Non-technical — Promit @ 3:51 pm

I just wanted to share this. Popped up today while initializing an NSDateComponents object.

    components:fromDate:toDate:options:]: fromDate cannot be nil
    I mean really, what do you think that operation is supposed to mean with a nil fromDate?
    An exception has been avoided for now.
    A few of these errors are going to be reported with this complaint, then further violations will simply silently do whatever random thing results from the nil.
    Here is the backtrace where this occurred this time (some frames may be missing due to compiler optimizations):

So that was unexpected.

April 11, 2013

The Scandalous Yetizen Costume

Filed under: Games, Non-technical — Promit @ 12:18 am

There’s been a lot of chatter on the various blogs and news sites about the IGDA and Yetizen party incident. I’m not going to rehash that. See these articles if you’re not up to date on the whole controversy:

http://www.joystiq.com/2013/03/28/igda-party-features-dancers-prompts-controversy-resignations/

http://www.joystiq.com/2013/04/09/igda-defines-new-rules-for-future-industry-parties-after-gdc-mi/

http://yetizen.com/2013/03/30/official-statement-by-the-yetizen-ceo-on-the-yetizen-igda-gdc-party/2/

I will comment that I thought the controversy was a wholly pointless, manufactured thing, and that Brenda Romero’s resignation did not help anybody. That said, I was a little surprised to discover that the scandalous, allegedly inappropriate outfits that created all this trouble aren’t actually shown anywhere in any of the news about the incident. At all. Not on Joystiq, not on the Gawker-owned Kotaku, nowhere. I thought that was strange. Luckily I have photos of the Yetizen models from the previous year, so… here it is. This is the outfit that forced two IGDA members to resign.
[Photo: Yetizen Outfits]
Now you know.

November 14, 2012

The Promise of Motion Control

Filed under: Non-technical — Promit @ 4:16 pm

I saw a blog post on IGN today: 4 reasons why the Nintendo Wii U will fail by Ian Fisch. I won’t comment on the Wii U, because I was one of the people who said the Wii was going to flop, and man oh man was I ever off the mark on that one. But I did want to highlight a particular chunk of his post:

When people think of the massive success of the Nintendo Wii, they usually think of middle-aged moms playing Wii Fit, and senior citizens playing Wii Sports bowling at the retirement home. Indeed, the success of the Wii, much like the success of the Nintendo DS was due, in a large part, to casual gamers. We tend to forget that, originally, the excitement for the Wii was at a fever pitch among hardcore gamers. If you were a hardcore gamer then, you might remember sharing Eric Cartman’s excitement over the potential of Wii’s “motion control controls.”

It was hardcore gamers that gave the Wii its terrific launch. For about a year and a half, hardcore gamers were as enthusiastic about the Wii as their out-of-shape mothers soon would be. Of course, once hardcore gamers discovered the severe limitations of the Wii’s motion controls, the system became little more than a dust collector. The Wii U will not get this initial surge of excitement from hardcore gamers. The original Wii tantalized the hardcore set with the (false) promise of a new level of immersion – a step toward virtual reality.

I currently work for the BLAM Lab at Johns Hopkins University, which is part of the Department of Neurology. I helped found a group here called Kata. The Kata Project exists for a lot of reasons, but this idea is really our heart and soul:

In the Japanese language, kata (though written as 方) is a frequently used suffix meaning “way of doing,” with emphasis on the form and order of the process. Other meanings are “training method” and “formal exercise.” The goal of a painter’s practicing, for example, is to merge his consciousness with his brush; the potter’s with his clay; the garden designer’s with the materials of the garden. Once such mastery is achieved, the theory goes, the doing of a thing perfectly is as easy as thinking it.

I’m doing a rich mix of work here, centered around game development not only for medical and scientific research purposes but also for commercial production. The key point, though, is that everything we do is centered around the study of biological motion and what it means for games. We’ve got touch, Wii, PS Move, Kinect, Leap, and whatever else is coming down the pipeline, and I don’t feel that the potential of any of those devices has really been explored properly. The Wii implied something that it turned out not to be, sadly. Motion control itself, combined with game design that really focuses on using it in new and interesting ways, has a very distinct future separate from what we’ve got today. Fruit Ninja is an early expression of it, I think. Of course I believe that we’ll be the ones to crack the code, but no matter how it happens, I find it extremely interesting to observe what people are doing with the rich data we can get out of motion control systems. So far Kinect and most iPad games seem to be an expression of how much data we can throw away instead. That needs to change.

May 26, 2012

New Theme, New Posts Soon, New Happenings

Filed under: Non-technical — Promit @ 12:30 am

I wanted a new coat of paint around here. We’re going to try this one on for size. It may or may not stick, we’ll see. I’m going to try and revive blogging here, as there are a number of things I’ve been meaning to write for many months. Many of those things are about photography, some of them are about games, and not a lot are about SlimDX or SlimTune.

Don’t hold your breath on the Slim* stuff — I just don’t know what is going to happen in the coming months. I’ve decided to pursue a Master’s Degree in Computer Science at my alma mater, Johns Hopkins University. I hate school, but given other events in my life this was an important step to take. It does cut into my time quite severely, so I’m basically stepping out of the consulting business and maintaining a blog during school is daunting to say the least.

I am also working on game development for the Department of Neurology at the Johns Hopkins Hospital. That is an extremely interesting effort which I will attempt to discuss as much as I can, though a lot of it is not public and will not be for some time. That’s the nature of the beast, unfortunately. I will say right now that it should be obvious that psychology and neurology play an important role in game design. It turns out that game design plays an important role in psychology and neurology too, and research has only just started to explore the implications of that crossover. There is a lot of potential.

Lastly, I’ve found myself very heavily invested in artistic pursuits, primarily photography. I think it’s important for any game developer (or any entertainment industry professional at all) to nurture their creative/artistic side as much as possible. You don’t have to be good at it, but you can’t neglect it. I picked photography because I’m terrible at drawing, and because I hoped it would clarify a lot of things I’ve never understood in graphics engineering. (It did.) It’s now a pursuit of mine in its own right, and I intend to be writing a lot about it.

Finally, I want to thank all half-dozen people who are actually reading this post. You guys are nuts and I’ve wasted your time, but I promise better things are coming down the pipeline. I am working to finish an epic post detailing the basics of digital color representation. I can almost guarantee you’ll learn something.

December 24, 2011

Advocacy Won’t Save the Internet

Filed under: Non-technical — Promit @ 8:31 pm

There’s been a lot of rage across the internet and related companies about a US bill called the Stop Online Piracy Act, abbreviated as SOPA. You can look to Wikipedia for what the whole thing is about and why people are upset. In short, it greatly curtails internet freedom and may inflict damage on the core structure of the network. That is not the part I am writing about. If anything, it’s amazing that things took so long to get to this point. We’re seeing the beginning of a war that was always inevitable, and I fear that if we continue to try to solve it at a policy level, freedom will lose as it always does.

Money and power are and always have been centered around a singular point: control. In order to protect an oppressive government, or an oppressive business model, you must control the basic pathways and communication channels. The methods have changed over the course of centuries but the ideas have not. The Internet and the Web represent largely uncontrolled systems of communication. As a result, they’ve been a continual thorn in the side of governments and corporations for many years. From Napster to PirateBay and WikiLeaks, and far more reprehensible things (eg child porn), there’s been a constant struggle between freedom and control. That struggle has been largely random and without direction, because nobody really knew how to police the internet. The system was designed to be resilient, and there are many, many ways in which blocks by oppressive regimes have proven ineffective.

Now we’re seeing the next phase, which is to target the gatekeepers. The internet is resilient, but it is not resilient enough. Search engines and link aggregators were targeted first. Coupled with DMCA provisions, sites are made to vanish from Google and Bing, and once that happens the site may as well not exist. Discovery becomes nearly impossible. This has been done to protect “copyright holders” and “intellectual property”, but that is merely a proxy for ANY information that any party or any government (primarily the US) does not want in the wild. You only need to observe Universal’s assault on the MegaUpload video to understand that. Making somebody invisible, even temporarily, is an enormously powerful ability.

The next target, possibly the crucial one, is the Domain Name System (DNS). DNS is responsible for translating a domain name like “google.com” into an IP address like “10.11.12.13”. The US Department of Homeland Security has gleefully pursued sites by revoking their domain names without anything resembling due process and without available recourse. And without actual authority, for that matter. The results were predictable: a technical workaround which got the government mad, and a bogus seizure that made the whole program look corrupt, which it is.

The last gatekeeper is the ISP, the guys who hold the actual physical connection between us and the internet. They are under assault too. It’s the same story over and over again, but in the end the ISPs will cave because it will be difficult or illegal for them to hold out.

SOPA might be the greatest ever attack on Internet freedom, but it’s also a dead-on logical expansion of a war that has unfolded continuously over the past decade or more. It’s possible that this particular measure will be defeated. The trouble is that it doesn’t matter. There is far, far too much at stake for the corporations and governments to let this go so easily. They will learn from their mistakes here, tweak and tune the language and the pitch, and come back with armies of lobbyists time and again until the chaotic political winds line up in their favor. That WILL happen, and things will start to crumble for those who value freedom.

Ultimately Hollywood wants the same thing that the government wants: the ability to control and restrict what happens on the Internet and how. They are on the same side, and all the calls in the world to your Representative will only delay what’s coming. It’s useful to buy time, but at the end of it all there is only one choice that will work: the Internet and the World Wide Web must be made entirely immune to censorship at a fundamental technical level. It must be redesigned so that no amount of legal threat is capable of affecting it at all.

From a technical point of view, that means a few things. First, the DNS system must be secured against the whims of any government. There are two options for doing that. One is to secure the DNS system so that every country controls its own TLDs and cannot affect any others. I believe this is doable with a widespread rollout of DNSSEC. The US could still revoke domains, but only those hosted as COM/ORG/NET/US/etc which are ostensibly subject to their legal control anyway. Just pick a country where whatever you’re doing is legal and sign up with them. The other option is rather extreme, and involves replacing the DNS system entirely with a new naming system that is not under anybody’s control at all. There is work along these lines, but it’s difficult to see potential for mainstream adoption. (On the other hand, it could thrive in environments like P2P networks if the tech details are hacked out.)

Then there are the ISPs. There’s no point locking the overall system down if your personal uplink still says “hey, no PirateBay for you no matter how you’re trying to get there.” That requires end-to-end encryption of your sensitive traffic. We have a system for that called Tor, but it’s possibly extreme. The ability to perform encrypted DNS queries locally (this is different from DNSSEC), plus secure HTTPS connections, achieves nearly everything we need. The latter has already become commonplace on major sites, which only leaves us to solve encrypted DNS queries. Luckily we’ve got that too.

That leaves us with the visibility problem in search engines, social networks, and similar services controlled by a single entity. I’m less concerned about this, because the steps I’ve discussed so far open the door for somebody in a more open country to build systems that are not subject to government or corporate whims. There is work on a decentralized search engine that isn’t subject to any control at all, but it’s unclear whether such a system is actually workable. Similar efforts are underway to replace centralized services such as Facebook, Twitter, and even semi-centralized mechanisms like OpenID. There is a core belief here that any system that is centralized is necessarily a threat, and cannot be trusted. I don’t know if that’s the case, but the more research we have in building completely distributed tools the better.

To try and win true freedom for the Internet on political and policy grounds is an eternal battle which we will likely lose. There is too much at stake for the power players to give up what we are asking of them. If we’re lucky, Google and all the other internet companies will remember to sink millions of dollars of R&D into making the Internet unbreakable, instead of simply lobbying the government not to break it. Once we make it indestructible on a technical level, governments and corporations will be forced to adapt to the new order, instead of trying to stop it. That’s our only chance to preserve what we’ve built and earned in the last forty-odd years: a completely free communication system that is equal for everyone.
