SlimTune Testers Needed

I’m currently working towards the release of SlimTune 0.2.1, which will hopefully happen before GDC. If you’ve been paying attention, that might seem a bit odd to you — 0.2.0 isn’t out yet. That’s because 0.2.0 is the pre-release version, and I need testers for it.

The idea is basically to poke around the new code, let me know about glaring omissions or failures, etc. This is a short process, so if you want to do it I need you to have sufficient time this week. I want to put the final touches in this weekend, if possible. The pre-release will install normally and won’t interfere with an upgrade to the proper release version later on.

I’m particularly interested in verification that ASP.NET is working correctly, and in simple sanity checks that everything is as it should be. UI feedback is always welcome, though it may not make it into 0.2.1. In any case, if you’re interested you should email me. The address, as usual, is promit dot roy, gmail.


9 thoughts on “SlimTune Testers Needed”

  1. I added a couple of issues to the Google code page.

    Plus, with regards to performance while profiling, the average performance is better than before (with the in-memory option). In fact, my frame time is identical to running without SlimTune.

    However, the perceived performance is quite a bit worse, i.e. every second or so there is a large performance spike/hitch. Makes controlling the player very disconcerting!

    It would be much better if there was a smaller constant performance hit from profiling. Perhaps an option to control data communication/database buffer size?

    1. It can, although I’m inclined to suspect the backend hook, which has not actually been profiled at all. (Is that ironic? I’m not sure.) Dunno if I can fix this for 0.2.1, but I’ll back off the frequency at which the backend sends data (which is currently set too high anyway) and see what happens.

  2. It seems like the inverse problem, i.e. the backend is buffering up too much data, then periodically processing it and causing a stall. So increasing the frequency at which it transmits data would even out the latency, at the cost of some throughput.

    But maybe the way things are happening is not as simple as that.

    1. It’s run through ASIO, so the flush means I queue up a request and it happens at some unspecified point in the future. The request just gives the send a bounded block of memory, and the OS should handle the rest.

      Are you set up to recompile the backend on your own?

  3. Not if it still uses ATL…

    It probably wouldn’t be a major job to remove that, though, and make some Express solution files. If you wanted, I could do that if it can be committed to SVN.

    Perhaps it is some sort of thread pool thread priority thing or the socket buffering up data.
