DirectX/XNA Phase Out Continues


Please read the follow up post.

This email was sent out to DirectX/XNA MVPs today:

The XNA/DirectX expertise was created to recognize community leaders who focused on XNA Game Studio and/or DirectX development. Presently the XNA Game Studio is not in active development and DirectX is no longer evolving as a technology. Given the status within each technology, further value and engagement cannot be offered to the MVP community. As a result, effective April 1, 2014 XNA/DirectX will be fully retired from the MVP Award Program.

There’s actually a fair bit of information packed in there, and I think some of it is poorly worded. The most stunning part of it was this: “DirectX is no longer evolving as a technology.” That is a phrase I did not expect to hear from Microsoft. Before going to “the sky is falling” proclamations, I don’t think this is a death sentence for DirectX, per se. It conveys two things. Number one, DirectX outside of Direct3D is completely dead. I hope this is not a shock to you. Number two, it’s a reminder that Direct3D has been absorbed into Windows core, and thus is no more a “technology” than GDI or Winsock.

Like I said, poorly worded.

There are a few other things packed in there. XNA Game Studio is finished. That situation has been obvious for years now, so it also should not really come as a surprise either. And finally the critical point for me: our “MVP” role as community representatives and assistants is appreciated but no longer necessary. On this point, the writing has been on the wall for some time and so I should not be surprised. But I am. Maybe dismayed is a better word.

As I’ve said previously, I don’t feel that the way DirectX has been handled in recent years has been a positive thing. A number of technical decisions were made that were unfortunate, and then a number of business and marketing type decisions were made that compounded the problem. Many of the technologies (DirectInput, DirectSound, DirectShow) have splayed into a mess of intersecting fragments intended to replace them. The amount of developer support for Direct3D from Microsoft has been unsatisfactory, and anecdotal reports of internal team status have not been promising. Somebody told me a year or two back that the HLSL compiler team was one person. That’s not something you want to hear, true or not. Worst of all, though, was the communication. That’s the part that bugs me.

When you are in charge of a platform, whatever that platform may be, developers invest in your platform tech. That’s time and money spent, and opportunity costs lost elsewhere. This is an expected aspect of software development. As developers and managers, we want as much information as possible in order to make the best short and long term decisions on what to invest in. We don’t want to rewrite our systems from scratch every few years. We don’t want to fall behind competitors due to platform limitations. Navigating these pitfalls is crucial to survival for us. Microsoft has a vested interest in some level of non-disclosure and secrecy about what they’re doing. All companies do. I understand that. But some back and forth is necessary in order for the relationship to be productive.

Look at XNA — there have been a variety of questions surrounding it for years, about the extent to which the technology and its associated marketplace were going to be taken seriously and carried forward into the future. It is clear at this juncture that there was no future and the tech was being phased out. Direct3D 10 was launched in late 2006, a bit over six years ago, yet XNA was apparently never going to be brought along with the major improvements in DWM and Direct3D. How long was it known internally at Microsoft that XNA was a dead-end? How many people would’ve passed over XNA if MS had admitted circa 2008 (or even 2010, when 4.0 was released) that there was no future for the tech? The official response, of course, was always something vague and generic: “XNA is a supported technology.” That means nothing in Microsoft world, because “it will continue to work in its current state for a while” is not a viable way for developers to stay current with their competition.

Just to be clear, I don’t attribute any of this fumbling to malice or bad faith. There’s a lot of evidence that this type of behavior is merely a delayed reflection of internal forces at Microsoft which are wreaking havoc on the company’s ability to compete in any space. But the simple ground truth is that we’re entering an era where Windows’ domination is openly in question, and a lot of us have the flexibility and inclination to choose between a range of platforms, whether those platforms are personal computers, game consoles, or mobile devices. Microsoft’s offer in that world is lock-in to Windows, in exchange for powerful integrated platforms like .NET which are far more capable than their competitors (eg Java, which is just pathetic). That was an excellent trade-off for many years. Looking back now, though? The Windows tech hegemony is a graveyard. XNA. Silverlight. WPF. DirectX. Managed C++. C++/CLI. Managed DirectX. Visual Basic. So when you guys come knocking and ask us to commit to Metro — sorry, the Windows 8 User Experience — and its associated tech?

You’ll understand if I am not in a hurry to start coding for your newest framework.

Before things get out of hand: No, you should not switch to OpenGL. I get to use it professionally every day and it sucks. Direct3D 11 with the Win8 SDK is a perfectly viable choice, much more so than OpenGL for high end development. None of the contents of my frequent complaints should imply in any way that OpenGL is a good thing.


101 thoughts on “DirectX/XNA Phase Out Continues”

  1. This is exactly why I don’t want to bother learning any new technology to make Metro applications, and that’s why I used MonoGame to port my XNA games to Windows 8 Metro.

  2. Chalk another one up for not supporting metro style apps. I have no desire to support a platform that I don’t even like using (for the most part). They’ve shot themselves in the foot, and I hope it hurts them.

    I’m still bitter over the lack of support for XNA. They had a real good thing going there – a perfect platform for people to start with for independent game development (especially for someone who was doing .Net business apps for a living). They even had the potential distribution channels set up on Xbox Live Indie Games and Games for Windows, but once things were up and running they never gave it a second thought. There seemed to be no support. I can promise you that my future time will be spent chasing different platforms, rather than being continually burned on one. Xamarin, MonoGame and a ton of other projects out there are looking really tempting.

  3. That’s sad. Like really sad. I’ve been learning DirectX for a while thinking that I could make something with it. But if support is stopping for DirectX, where should I go now?

    Regardless, I hope that something comes soon to replace it in some way.

    Sad to hear.

  4. > Microsoft’s offer in that world is lock-in to Windows, in exchange for powerful integrated platforms like .NET which are far more capable than their competitors (eg Java, which is just pathetic).

    This really resonates with me: this is exactly the pact I made with XNA, and the current situation is really troubling. I have no problem with moving on to a new .NET-based game framework, even if they don’t call it XNA. But I don’t see a solution like that in the current or upcoming landscape. I’m still using XNA for my next game, but I will definitely be on the prowl for a replacement framework. Given XNA’s popularity, my hope is that the developer community (e.g. MonoGame) creates something soon.

      1. SunBurn does look pretty good, but sometimes it is nice to be able to tinker with all the nuts and bolts yourself. If everyone ends up going the route of using third party engines, that still leaves us hobbyists dangling.

  5. “The Windows tech hegemony is a graveyard. XNA. Silverlight. WPF. DirectX. Managed C++. C++/CLI. Managed DirectX. Visual Basic. So when you guys come knocking and ask us to commit to Metro — sorry, the Windows 8 User Experience — and its associated tech? You’ll understand if I am not in a hurry to start coding for your newest framework.”

    The money quote right here.

  6. To be fair, I would argue that Apple killed Silverlight rather than Microsoft. I’ve been a liaison with MS for my company for a few years, and internally they were supporting the hell out of SL up until the point Apple made it clear that SL/Flash would never run on their mobile devices. After that, what choice did they have? The entire point of SL was write-once-run-anywhere, and Apple kneecapped them so that their primary mission statement could never be fulfilled. I will never forgive Apple for that nonsense.

    That being said, I don’t really consider WPF/SL to be a “graveyard” per se, because the technologies evolved with a very easy migration path; Everything you learned about XAML still works on the WP7/Metro/WP8 platforms, just have to change a few namespace declarations around here and there.

    I’m still annoyed about Managed DirectX though. I wrote my 3rd book based on that technology only to have Microsoft abandon it almost immediately after publication and go with XNA. I stopped writing windows games at that point because I was too annoyed, but I always kind of assumed that MS would see that a C# game platform was the future of PC gaming. I don’t understand why they don’t treat it like they should.

    1. Ron, I completely agree with what you’re saying on the migration path from WPF/SL to the newer XAML-based technologies. But the problem is that’s never how it was communicated. The message has always been 100% bungled, so that most people truly thought their investment in that tech was a waste … and let’s be honest, it’s only in theory that it’s just a few minor changes. You can’t just take a WPF app, change a few namespaces, and boom, you’ve got a Metro app. Their switch to the WinRT APIs, while technically sound, has been a disaster for the migration path … you’ve got to rewrite most of your library and UI code.

      1. I suppose. Granted I’ve never ported a large enough app over to see for myself; everything I’ve done was relatively simple. So I admit that the migration path may not be as clean as I initially surmised.

    2. If you were Apple, though, would you have chosen to adopt Silverlight? I don’t claim to know a whole lot about it, but regardless of its quality, it didn’t exactly seem like an open, community/multi-vendor driven technology.

      1. Apple didn’t have to support or adopt anything; they specifically altered their web browser so that nobody could make addons, and explicitly locked down their devices so that nobody could make competing browsers which did support addons. They essentially said “nobody with an iDevice will ever be able to view the web in any way other than the way we dictate”. This killed Flash as well, don’t forget.

        1. I read that Adobe was aiming to create a cross-platform app ecosystem with Flash and Apple had to lock down the App Store as the only source of mobile software (and revenue). Silverlight was collateral damage.

        2. Being able to use Silverlight on the iPhone would’ve created direct competition in terms of the ecosystem you can use to deliver applications to iPhone users — it would’ve taken away a good chunk of control from Apple, and put that control into the hands of Microsoft (and Microsoft alone, presumably, with nobody else having any real influence on it). I don’t have a hard time imagining that this doesn’t fit very well with Apple’s idea of how to control their ecosystem.

          Consider also that Apple is extremely draconian in terms of controlling the end-user experience that apps are supposed to give the user, which again would be a huge strike against any such external platform (but Silverlight in particular). I don’t think anything other than an “either we don’t do it, or we completely adopt it” choice was ever in question for them.

          On the other hand, I don’t know whether they would’ve adopted it even if it were a completely open platform, so I guess there was probably nothing much Microsoft could’ve done to sway Apple’s opinion on the matter anyway (although they seem to be embracing HTML5 to at least some degree). But it probably would’ve found a much bigger audience amongst other groups, such as desktop users and possibly Android developers.

  7. “That being said, I don’t really consider WPF/SL to be a “graveyard” per se, because the technologies evolved with a very easy migration path; Everything you learned about XAML still works on the WP7/Metro/WP8 platforms, just have to change a few namespace declarations around here and there.”

    Agreed, same here. Silverlight and the new UI for Win8 are very, very similar.

  8. Ron/Silverlight – clearly shows Microsoft does not listen to their customers. Silverlight on the web may not have the strongest future, but it’s an excellent choice for internal LoB apps and a nice lightweight option over ClickOnce. Not to mention it still runs on Macs, the OOB support goes really far to make it feel like a real desktop app, and it was a potential method to distribute XNA games.

    I have no reason to believe that the Win8/Metro API will exist in a few years if it doesn’t gain market share. It will just be another tech tossed into the Microsoft graveyard. Microsoft is also the only platform that segregates AAA from Indie with limited API access, feature limitations, and separate marketing that goes so far as to brand a class of games beneath the rest in the consumer’s mind. That none of those issues are addressed in Metro… well, Microsoft has made it easy to look to other options outside of them.

    The best hope now is releasing XNA as open source so the MonoGame crew can add parts to their system and then evolve, in my opinion, one of the best game frameworks out there.

    1. Perhaps they listened to different customers. I’m in the education industry so all of our customers are schools. We developed some substantial LOB apps using Silverlight, and everything was going great until the iPad came out. Now we’ve got customers demanding that we support their shiny new toys or they’ll find a solution elsewhere. So naturally, my company chose to begin transitioning to HTML5 instead of Silverlight because it was (in our opinion) not cost-effective to maintain two codebases for the same product (which incidentally was the reason we chose silverlight in the first place). I am not happy with the prospect of taking our clean OO C# code and mangling it into Javascript, but the world being what it is, we have no choice. I’m sure there are many other ISV’s in the same boat. We can’t be the only ones who suddenly told Microsoft that we’re abandoning ship, and they must have seen the writing on the wall as well.

      I’m on the fence about Metro. When SL came out years ago I transitioned all my personal projects to web-only, thinking that desktop apps are soon to be a thing of the past. I still believe this is likely, which makes it harder to swallow the MS line of “come back to the desktop”. IMO they need to lead the industry and push for a true standardized sandboxed bytecode language for the web so that we can compile C# or whatever language you may prefer instead of dealing with JS any longer. It’s kind of ridiculous how web development works now. People are getting sick of JS and taking up new languages like CoffeeScript or TypeScript, which compile down to Javascript, which (most) browsers then take and compile down to bytecode. Skip the middle-men!

      /rant

      1. I don’t see a replacement for the JVM coming any time soon, but there are languages that now run on it (e.g. Ruby) which means you don’t have to do this stuff in JS.

        This is a time of flux and because of the ubiquity of the JVM it is very hard for anything else but JS to gain traction. It *will* happen though. I would be surprised if someone isn’t already writing C# for the JVM.

  9. Exactly why I got out of XNA while the getting out was good. And I’m so glad I found Unity3d.com. Unity has C#, Boo, & JavaScript language support, and multiplatform publishing (web player, Flash, Linux, Android, & iOS, as well as Xbox, Wii & PS3). It has a built-in asset store, and the editor runs on Mac and Windows (probably Linux eventually, I’m guessing).

    It has everything I could ever want and more, and oh yeah, it’s FREE. There is a pro version, but you can do so much with the free version that I’ll never need pro. Also, porting my XNA code over to Unity was trivial. And there is a massive amount of Unity-related content on the you tubes.

    Have only been using Unity for a little over a year now and have nothing but good things to say about it.

    1. Unity is an interesting product.

      Unity is sort of the “WordPress” of game engines… with all the negatives and positives that go with it.

      On the plus side, you can get something up and going that looks fairly decent with minimal effort and time.

      But as a consequence, it’s typically very easy to spot a Unity game because many have a very similar vibe to them. From my observation, unless developers take a lot of care in the way the game is developed and the way assets are created and handled within it, games developed with Unity tend to look and feel like they’re descended from the same thing. (Which, duh! It’s a game engine, so that makes sense.) But many games come off feeling more like a “mod” than a game. I don’t know… maybe it’s just me with that observation/complaint.

      Once you want to target all of the platforms you’ve cited, Unity isn’t really “free” anymore. The standard version is comparable to Xamarin’s Mono licensing when hitting multiple platforms (and you’re locked into their scripting framework, which means it’s not terribly suited to make a non-game app with it). Things get quite a bit more costly if you’re targeting multiple platforms with Unity Pro. If your company happens to make $100,000 a year in some capacity, you’re disqualified from the non-pro license path entirely. Even if Unity isn’t the reason you’re that successful. (Maybe if you’re that successful, the licensing difference isn’t that important to you.. but I’ve never been a fan of the “How much is this? Depends… How much do you make annually?” approach to licensing)

      So I’d say Unity is fine for hobbyist programmers. But on the Pro side, I’d be curious as to how many who invest in Pro licenses make their money back (who don’t own a company that rhymes with Rovio. 🙂 ) I’d guess that’s why so many projects that try to get funding via Kickstarter using Unity seem to be costly.

      I’ve followed Unity for a long time, and considered it an option (still do). But for now, I’m going with Monogame. It’s less restrictive in terms of licensing. There are plenty of free platforms you can target (desktops, PS Vita, Windows 8). And any licensing you have to buy from Xamarin doesn’t lock you into a “game-only” application. Plus Tom, Dean, Steve and the rest of the core developers there are mighty responsive and show a passion for the project greater than it seems Microsoft ever did with XNA. That instills a lot of confidence in its future for me.

      1. “So I’d say Unity is fine for hobbyist programmers.” As a hobbyist I would say it takes all the fun out of development. A purely commercial project for someone else would be a different story.

      2. I’d say this is also true of Unreal games though, even without the logo you can spot the engine quirks.
        I will have to check out monogame though, I’m growing to like Unity more and more but I’ve heard enough knowledgeable people hiss at it to at least keep my options open. I don’t want to go back to coding ‘engines’ I want to code games 😀

        1. Yeah.. I’ve noticed that with Unreal as well. Sort of the price you pay for investing in an engine. On the other hand, a decent engine can democratize the ability to have something that looks fairly good up and running with very little effort. In a professional environment, I could see it as a great prototyping tool to sell concepts, even if the final product isn’t actually created with Unity.

          I hope my original comment about Unity wasn’t construed as overly critical in the regard of being able to spot a Unity game. The probable culprit in those situations is when one relies too heavily on stock content and the common scripted special effects they sell/provide. But I suppose this is a problem any ecosystem is susceptible to. The fact that so many Unity games look alike probably isn’t the fault of Unity itself. Then again, maybe it is, to the degree that they promote the sale of their content packs.

          MonoGame, like XNA, is a framework rather than an “engine”. You probably won’t get a whiz-bang massively 3D demo running in a few lines of code (it’ll probably take twice the code you’d need for Unity), but those wanting greater control over the “game loop” pattern without having to directly write procedural OpenGL/DirectX code will find it pretty elegant. Or at least I do. Its level of abstraction is about perfect for most game application cases out there and the learning curve is very low.

    1. That’s what I’ve been wondering for a while, ever since XNA started getting the red-headed stepchild treatment. Part of the problem is there’s no unity between teams at MS, they’re all struggling for power in their own little kingdoms. It strikes me as a completely asinine way to run a company, but what do I know. 😦

      1. Jim’s right, the environment seems kinda cannibalistic; techs and products come and go on a trial-and-error basis as if there were no long-term vision for the company. Fortunately, SharpDX and MonoGame are there (both open source) for those who loved MDX and XNA.

  10. How could they make these remarks without commenting on any new form of game development MVP status? Sure, they promote HTML/JS for games and then promote DirectX for high performance? Just saying “well now DirectX is PART of the CORE OS” is great and all, but there is a key difference between game development (or high performance graphics) versus event driven and highly productive frameworks. Even if they want to start changing our attitudes about DirectX being an add-on or core, they shouldn’t cut this cord without giving their current MVPs something to strive for to stay in the program.

    1. “…promote HTML/JS for games…” Lol, they paid to have Cut the Rope ported, but where is Pudding Monsters in HTML/JS? Surprise, surprise!

  11. “DirectX is no longer evolving as a technology.” – more MS madness? Can they halt the hara-kiri for a moment?
    “Direct3D has been absorbed into Windows core” – that’s a wishful-thinking interpretation – they didn’t say that. After Silverlight, can you really give MS the benefit of the doubt here?

    Have I wasted years of effort of esoteric research in easily abandoned MS technologies?
    DOA: Oslo, Software Factories, MBF (Project Green), Whitehorse (ACD), StreamInsight, WPF, Silverlight, SketchFlow, XNA, Spec#, Accelerator (.NET GPGPU), IronRuby, RIA Services, English Query, Solver Foundation, DryadLINQ, XSG (Fahrenheit), MLNet
    Quietly forgotten: SQL Server Data Mining, Robot Toolkit, MDX
    Coma of limited serious adoption: HPC, WF, F#, Azure, LightSwitch, EF, MDX, WinRT, now DirectX???
    Terminal: ASP.NET, IIS
    What’s left: WCF, LINQ, TPL, SQL, Visual Studio, TFS

    1. Spec# was an MSR project; were you really expecting it to go live?
      It became the Code Contracts library, which has not gained huge traction but is being used/improved/supported now.

  12. Pingback: Prog.Hu
  13. This worries me! 😦

    I’m a young game developer, still learning game development.

    Can you suggest the best thing to start with (OpenGL | XNA | DirectX) to make good indie games like Limbo or World of Goo? (I’m good at coding 😉 )

    And what do big companies use to make those huge awesome games?

    Thank you,
    Prajwal

  14. You have mirrored my sentiments exactly. After so much time, effort, and capital invested in Microsoft technologies and platforms, I am probably going the Apple route, or Mono. If we treated our customers the way Microsoft has treated us, we would be spending our time on the interstate exit ramp with a cardboard sign begging for money.

  15. Microsoft seems to excel at creating bad (and often pointless) technology, proclaiming it to be the future, then letting it die. I’ve learnt the safest thing is to stick with
    standards outside Microsoft whenever possible, i.e. native C++ and OpenGL; it’s actually easier to use than Direct3D anyway and can go with you when you finally abandon Windows!

  16. I recall you saying this last year at GDC, Promit. :\
    While I’d already kinda abandoned XNA as a dead end (grudgingly, I’d really love to play MY game on a console), it’s still sad to see it go.

    I’m surprised MS’s DirectX commitment also seems to be wavering.

  17. Great post and sum-up. I have been feeling this way for a long time as well. Even back in 2006, when I started as a DirectX/XNA MVP, it was clearly visible there was no focus on professional developers (XNA Pro never saw the light of day). I got bored a few years later (anyone remember Zune?) and now it is really sad to see it all die over the past two years.

  18. It’s extremely bad with 2D graphics/UI stuff – they introduced GDI+, WPF, Aero-specific additions, Direct2D, additions for Win7, … yet if you want to get deeper you still end up with the good old GDI from the early 90s.
    Or just look at multi-screen support or transparent windows: expect each Windows version to have some specialties which need extra care and make testing a tedious task.

    DirectX began well, then they overdid it (DirectMusic, DirectShow), it started getting weird with versions 8 and 9 (making D3D really nice, but starting to drop the rest), and now there’s D3D 10 and 11 with a dropped D3DX DLL and a bunch of ugly helper functionality for connecting D3D surfaces with the rest of the world. Doesn’t look like a solid development roadmap to me…

      1. The amount of code involved in implementing what D2D and DWrite do is certainly not trivial. They’re not really wrappers or APIs but rather fairly substantial libraries.

  19. “DirectX is no longer evolving as a technology” — Well… for the most part toilet paper is no longer evolving [much] as a technology, but we all know that’s not going away anytime soon either. Ignoring those three seashells anyway. 😉

    “No, you should not switch to OpenGL” — What if I don’t want to be locked into the proprietary MS way? Are there widely available [freely usable] implementations of Direct3D for non MS platforms (e.g. Android, iOS, Linux, OSX, BSD, etc..) that are stable (at least as stable as their OpenGL ones)? With MS’s erratic behavior lately, trying to quickly “reinvent” themselves, I can see the potential of even more users finally taking the plunge to drop their dependency on the OS due to being fed up. I haven’t personally depended on MS for any of their applications for many years, and only reluctantly use their OS on some systems mainly due to gaming requirements (as most game companies apparently can’t be bothered to use portable code and APIs). OpenGL may not have the greatest API (the C interface was mehh in some parts last time I looked), but I think it beats being a slave to MS’s whims.

    1. When you type “M$” it makes you sound like a pathetic, whiny child. Or a slashdot poster. I’ve fixed and approved this time, but don’t bring that bullshit back.

      1. Agree, but a bit harsh on Chadf!!

        His analogy about toilet paper is a good one. MS had (and still has) the equivalent of this: an operating system that does what people want, and continuous change is not what most users/developers desire anyway. Microsoft honestly believes it has to change, but I suspect it’s just this change that will be its undoing!

        Engineering-wise, we just ignore the Microsoft fads and happily continue to work with C++ and OpenGL etc. The trouble is the Microsoft silliness has now crept into their developer tools; VS2012 is virtually unusable!

        1. My primary targets these days are Windows and iOS (same engine) and it looks like OSX is about to be promoted to a primary platform as well. I’ve gotten a fairly wide view of platforms lately, and I still find OpenGL quite tiresome and Xcode quite painful. I’m actually fond of VS 2012, more so than 2010. Engineering-wise, I don’t get to pick based on what I like. I pick based on what’s necessary or the best technical choice, which means OpenGL sometimes and Direct3D sometimes. IMO MS has the best technology, whether you’re looking at the kernel (yeah I said it), the intermediate system APIs, .NET, Direct3D, etc etc. Technical superiority isn’t the only piece of the puzzle though, and the poor, opaque, and frequently infuriating communication or lack thereof is becoming a real problem.

      2. Here is my question to everyone who says OpenGL sucks, or even just that Direct3D is better: are you comparing apples to apples? Are you using modern OpenGL or are you using old fixed function/deprecated stuff? If you want a fair comparison you should compare Direct3D 10 with OpenGL 3.3 core profile or Direct3D 11 with OpenGL 4.3. It seems that every developer I’ve discussed this with hears OpenGL and thinks glBegin/glEnd OpenGL 1.x or, at best, 2.0 technology. The graphics class I took in college in Fall 2010 was (and probably still is) teaching from the 1.1 Red Book. I refused to learn that and my professor let me implement the projects in OpenGL 3.3, which had come out that March. The modern API is wonderful, clean, powerful and flexible. You do have to use something like GLEW (http://glew.sourceforge.net/) to get access to it, but that’s dead simple, just a single init function. Qt is even going to provide that built in as of 5.1: http://www.kdab.com/opengl-in-qt-5-1-part-1/?utm_source=rss&utm_medium=rss&utm_campaign=opengl-in-qt-5-1-part-1

        At my last job I converted a project to using modern OpenGL, which I was fortunately able to do because it was a small company and a relatively small project where I was the only developer, but no one there, including the people who worked on the application before me, knew about modern OpenGL.

        My current job is another example of this ignorance of modern OpenGL. They’re using roughly OpenGL 2, and all the developers have a very negative, uninformed attitude about OpenGL, thinking that fixed function is the only way to use OpenGL on Windows, etc. I hope to convert them and change the code (probably slightly more likely than getting them to adopt a cross-platform GUI like Qt or GTK+ 3 instead of using Windows Forms and Mac-specific stuff… also I wish they’d switch from svn to git).

        Anyway, check out a good book (I originally learned from this http://www.amazon.com/OpenGL-SuperBible-Comprehensive-Tutorial-Reference/dp/0321712617 and this is one I bought later http://www.amazon.com/OpenGL-4-0-Shading-Language-Cookbook/dp/1849514763/ref=pd_sim_b_7) or you can use this site to get an overview of modern OpenGL:
        http://www.opengl-tutorial.org/

        Other good articles and references:
        http://blogs.valvesoftware.com/linux/faster-zombies/
        http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-and-not-DirectX
        http://rastergrid.com/blog/2011/10/opengl-vs-directx-the-war-is-far-from-over/

        1. Oh, we’re gonna do this huh? Okay then.

          While I can’t speak for anyone else, I want to emphasize very strongly that I am speaking about modern OpenGL, specifically 4.2 core on mostly Windows. Everything except compute, really. In fact, I will suggest that the people who are happy with GL are probably the ones using the outdated versions. I need to write a full post detailing the misery of my trenches experience putting together an OpenGL system, but the highlights are:
          * Stupid shader compiler bugs that are inconsistent between vendors. NVIDIA still accepts a lot of invalid code, five years after the fact. AMD’s error reporting tends to break.
          * There’s something severely wrong with how AMD handles MapBufferRange in general, notably in Crossfire. Gonna report this bug to them next week.
          * Uniform buffers don’t work on AMD right now. Haven’t had the patience or time to debug this one, so fell back to the ES code for binding individual uniforms for now.
          * Tooling issues; both AMD and NV’s analysis tools have varying levels of failure (total, in AMD’s case). Some other tools like gDEBugger don’t seem to be up to date, which means they either crash or give useless results when using 4.x GL code.
          * Lack of samples and info about fast paths. I did find some detailed slides about OpenGL on NVIDIA hardware ([1], [2]) and hilariously enough their primary advice is: don’t use OpenGL core profile because it doesn’t reflect the reality of the hardware, causes CPU overhead that may result in slower code, etc. AMD? Very little out there and a rapidly shifting somewhat buggy driver.
          * AMD hasn’t done 4.3 yet. Come on guys.
          * Generalized variance in how different drivers and hardware react to different OpenGL calls. I found a very good book ([3]) that covers a lot of this.
          * GLSL is essentially a giant list of stupid decisions.
          * Mac. Hoooooly shit. g-truc [4] has the messy details if you really need them.
          * Intel, see [4]. Thankfully this doesn’t affect me.
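          The GLSL bullet above doesn’t say which decisions it means, so as a purely illustrative aside (mine, not the author’s): one wart people often point to is the keyword churn between versions, where the same trivial shader is written differently depending on the #version line:

```glsl
// GLSL 1.20 style (deprecated in 1.30+):
//   attribute vec3 pos;  varying vec4 color;  gl_FragColor = ...;
// GLSL 3.30 core style: user-declared in/out variables instead,
// and gl_FragColor replaced by a user-declared output.
#version 330 core
in vec4 color;
out vec4 fragColor;
void main() { fragColor = color; }
```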

          To be quite candid, my ported ES render pipeline was far, far more stable on PC. The 4.x pipeline has been one long series of disasters.

          [1] http://www.slideshare.net/Mark_Kilgard/gtc-2010-opengl
          [2] http://www.slideshare.net/Mark_Kilgard/siggraph-2012-nvidia-opengl-for-2012
          [3] http://openglinsights.com/
          [4] http://www.g-truc.net

      3. There’s no reply link next to your response to me, so I’ll just put it here.

        First of all, I’m not trying to start an argument; you clearly know your stuff. But most people don’t have a clue about OpenGL, hence my original question about what you were comparing.

        However, I do have a few comments/responses to your statements. I looked at all your references (although could you link me to the specific post about Mac problems on g-truc?).

        First, with the exception of your vague comment about GLSL, none of the things you listed are problems with the API itself. Saying modern OpenGL is a bad API because some vendors’ implementations have problems (or tooling issues) is like saying English is a bad language because some people don’t speak/read/understand it very well. As for your guess that the people who are happy are the ones using only the old immediate-mode 1.x versions, I think you’re probably partially right: they’re happy if that’s all they know and that’s all they use, so it’s consistent. What exactly don’t you like about GLSL? I’ve never used Direct3D, but I’ve seen HLSL and it looks pretty much the same except float4 instead of vec4. I think I like GLSL better, but what do you think is more than a cosmetic difference between them (see my caveats at the bottom)?
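        (A sketch of the float4/vec4 point above, with the Direct3D/HLSL names in the comments written from memory rather than taken from this thread, so treat them as approximate:)

```glsl
#version 330 core

uniform sampler2D tex;   // HLSL: Texture2D + SamplerState pair
in vec2 uv;              // HLSL: float2 interpolant
out vec4 fragColor;      // HLSL: float4 output bound to SV_Target

void main()
{
    vec4 a = texture(tex, uv);   // HLSL: tex.Sample(samp, uv)
    vec4 b = vec4(1.0);          // HLSL: float4(1, 1, 1, 1)
    fragColor = mix(a, b, 0.5);  // HLSL: lerp(a, b, 0.5)
}
```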

        “don’t use OpenGL core profile because it doesn’t reflect the reality of the hardware, causes CPU overhead that may result in slower code, etc.”

        I read both PowerPoints, and nowhere do they say that modern OpenGL doesn’t reflect the reality of the hardware. What they say is that deprecated functionality is still hardware accelerated, so there’s no reason not to use it performance-wise. About the core profile it says “It *may* go slower because of extra … checks”. It was my understanding that, except for a few legacy parameters like texture border in the teximage functions, if you select a core profile in GLEW it won’t even hook up the function pointers for older deprecated functionality (although maybe the 1.x functions are always available, since you don’t need GLEW for those anyway). In addition, the complexity of creating a context should hardly matter, since it’s a one-time thing at startup. I would think the overhead of all the immediate-mode function calls would more than outweigh any benefit from the lack of checks.

        I don’t know how display lists work, but I’d assume under the hood they’re no different from special buffers and uniforms, except the GL is handling it for you, which seems like it’d be more overhead/slower. Also, things like line width > 1 are apparently deprecated but still in the core context (I checked the man pages and the glspec PDF); they’re still there as of 4.3.

        I think using just the core 3.3 features (i.e. everything listed here: http://www.opengl.org/sdk/docs/man3/, whether or not you actually request a core profile/context) is easier and cleaner, and there’s no reason not to use it unless you have to target computers more than 5 years old. I especially don’t see any reason not to stick to modern GL for a new project. Also, there’s https://github.com/p3/regal, and Mesa3D should be getting 3.3 finished up this year.

        I notice that in the first PowerPoint they’re using old OpenGL extensively in the shaders: the old matrix stack and builtins. This is what I hate. If someone is just using OpenGL 1.x and nothing modern, okay, at least it’s consistent. Imo it sucks comparatively, but it’s consistent. It is when they start to mix them that I am repulsed. It’s like reading a beautiful C program and suddenly there’s Perl or Java code in the middle of it. Why not just use glm (or your own math library) and uniforms instead of the glMatrix (and glu) functions? If you have a large legacy codebase, fine, but if you’re adding shaders or any modern stuff, especially 3.x features, just convert the whole codebase to be consistent. Side note: another ugly thing is them premultiplying vectors. If they were using uniforms (or, I admit, column-major matrices in the client code), they wouldn’t even have to deal with that (see the post by Erin Catto and the long response comment by Robert Winkler (yes, that’s me) addressing/clarifying the issue).

        A few notes/caveats about my experience: I have pretty much exclusively used OpenGL 3.3, though I’ve read about OpenGL 4.x features; I like my programs to be able to run on both my desktop and my laptop, which only supports 3.3. While my programs for my graphics class were developed on Windows (required for the class), since then I’ve done all my development on Linux (though I compiled one program on Windows just to test). Another thing to note is that both my computers have Nvidia cards, and I’ve never had to deal with other cards. I’ve also never used any debugging or profiling tools. I have never explicitly requested a core context, just init glewInit’ed and my shaders just say #version 330, so I’m not sure if that defaults to compatibility or core. I have never run into driver problems or Nvidia accepting things it shouldn’t (as far as I know). I apologize for the long response and that my questions are spread all over it. I really would like to see you write a post about OpenGL; I think that’d be interesting and informative, not to mention more visible to Google and future readers.

        1. I’ll just address some specific points of interest. As I said before, this really merits a full post with more details.
          * Here is the specific g-truc post I want to highlight: http://www.g-truc.net/post-0546.html Note that only 3.2 is supported.
          * “Saying modern OpenGL is a bad API because some vendors’ implementations have problems” This is a common criticism that I don’t buy. We live in the real world, where APIs don’t exist in some magical fairytale land without implementations. If a major implementation is bad, the API is bad when it comes to the business of making products people can use. I can go on and on about why I’m upset at AMD, but that doesn’t help me or my userbase. Switching to Direct3D DOES help.
          * Slower core contexts — what NV is saying is that the GL spec requires the core implementation to actively check if you’re using compatibility features and generate errors. These features are not necessarily limited to functions but include particular states and combinations of states. What GLEW does or doesn’t hook up makes no difference. The driver needs to test validity and that takes CPU time.
          * Display lists on NV are probably implemented in terms of the native underlying hardware command buffers. Common technique on consoles.
          * Linux+NVIDIA+GL is one of the easier platforms to work with. Mac or AMD based anything are pretty bad. Crossfire is really bad.
          * “just init glewInit’ed and my shaders just say #version 330 so I’m not sure if that defaults to compatibility or core” Compatibility. Try enabling core and see what happens.
          * “I have never run into driver problems or Nvidia accepting things it shouldn’t (as far as I know).” You /don’t/ know, not until you run a couple different hardware vendors and drivers. This is part of the problem.
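          One footnote on the #version point above (my reading of the GLSL spec, worth double-checking): the shader directive’s profile defaults to core when omitted, but that is independent of which profile the context itself was created with:

```glsl
// A bare "#version 330" is treated as "#version 330 core" by the
// GLSL spec; a compatibility shader must ask for it explicitly:
//   #version 330 compatibility
// Either way, the *context* profile is chosen at context-creation
// time by SFML/SDL/WGL-side code, not by this directive.
#version 330 core
in vec4 color;
out vec4 fragColor;
void main() { fragColor = color; }
```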

      4. Sorry, I forgot one minor thing. You mentioned uniform buffers. I spent a frustrated hour or two one day trying to use them, following my OpenGL 4.0 Shading Language Cookbook. It wasn’t working right, and I finally remembered that there was a section in my SuperBible about them. Turns out the Cookbook describes them incorrectly; the SuperBible’s info was accurate and worked. I wonder how many other people have run into that and just thought the driver was broken and/or OpenGL was stupid (not saying that’s what’s happening in your case with AMD, but bad resources/misinformation probably does cause many problems). I think NeHe’s tutorials should be taken down, or have a big giant link on every page saying “This is ancient technology. If you’re really a beginner, please go learn the newest version,” with a link to opengl.org and a site like that tutorial site I linked in my first post. I don’t actually use uniform buffers; I was just trying them out that day.

      5. I’m so sorry for the triple post, but I think I found what you wanted me to find. Is this it:
        http://www.g-truc.net/doc/OpenGL%20status%202013-03.pdf

        After finding that, googling got me this:
        http://renderingpipeline.com/2012/04/sad-state-of-opengl-on-macos-x/

        I had no idea. One more reason for me to hate Apple and never, ever buy a Mac. I’ve always thought I’d probably release any software for Linux and Windows and (if it’s open source) let people compile a Mac build themselves if they want. This just cements it; until they improve OpenGL support I’m not even going to consider it.

      6. Forgot to actually give you this link in my second response:
        https://plus.google.com/114825651948330685771/posts

        So it looks like shaders default to core (see here: http://www.opengl.org/wiki/Core_Language_%28GLSL%29#Version), but I’m not sure if that does anything if the context isn’t explicitly core. There is apparently no way to load just the core functions with GLEW, so I’d have to switch to something else if I cared. As far as actual context creation goes, I have always used (and am currently using) SFML 1.6 for my programs, and that has no way of specifying a profile; it defaults to a 2.0 context or something like that (SFML 2.0 does support creating a context for a specific OpenGL profile, but I plan on switching to SDL 1.2 and eventually SDL 2.0, which will also support that, iirc).

        As I said before, I’m fairly certain that I don’t use anything outside the core, since the resources I learned from, and the only resources I’ve ever looked at, are core resources. Looking at these PDFs (http://www.opengl.org/documentation/glsl/) (I’m not sure why the 3.3 link says 3.2/1.50), I can’t see anything in blue (the removed stuff) that I’ve ever used.

        Also, I don’t really care whether the core is all I actually have access to; I just don’t want to use the old stuff because it’s ugly, less intuitive, less flexible, and harder to use. So even if the core were a little slower (I still don’t believe the checking would make any significant difference, especially for anything I’ll ever develop), it wouldn’t affect me, because I’d be using just the core, but in a compatibility profile/context.

        1. “I just don’t want to use the old stuff because it’s ugly, less intuitive, less flexible, and harder to use.”

          That’s an accurate description of the entire API, as far as I’m concerned.

  20. Unless OpenGL has done a lot more in the revision a few years back than I knew, it’s still uuuugly. But I’m getting kinda concerned about the confusion out of Redmond these days, and I tack closer to ‘MS fanboi’ than not.
    This is why I’ve been making the conscious decision to use an engine rather than code low level, although that brings its own frustrations. It seems you are pretty much ALWAYS going to have some technology tie-in; you are ALWAYS going to have a fairly nasty cost to switching technologies. Do I tie myself to Unity, Unreal, DirectX, OpenGL? I guess OpenGL can be considered the most ‘free’, but let’s not pretend that’s suddenly easy mode either and it’s all good from there.

    I’m just disappointed, since XNA was a glimpse into true console development for the indie, even as badly mishandled as it was (and the developers have some call in this too; there were a LOT of BS games on there). Sony has their new project, I guess; is it going to be any better?

    1. Also, I think it’s wrong to say MS doesn’t need to innovate. On the consumer side they really do: Apple has been making steady gains into their territory, and the tablet side is bleeding users from the mainline PC segment. Whether the choices they’ve taken are the right ones is highly debatable (some are good, some not so much). They need to be wary of changing just to change. I don’t think Ballmer is/was doing a good job with MS in general, although a lot of the changes in the marketplace may just be hard for them to handle.
      They’re probably best off dividing up the biz side again; THAT is the side resistant to change.
