I’ve mentioned before that most of SlimTune’s core functionality is pluggable. This actually includes the underlying data storage system. The app works through a fairly simple interface, and even SQL is only used by the visualizers, not the core program. To date, the engine in use has been Microsoft’s SQL Server Compact Edition (SQLCE). With the next release, I’m introducing support for SQLite as well.
Let’s recap. Every other profiler I’m aware of works in more or less the same way. While the application is running, data is written to a file. Once the run is complete, the frontend steps in to parse the data and visualize it one way or another. SlimTune on the other hand allows (encourages, in fact) live visualization while the application is running. It also supports very different visualizations that slice the data in entirely different ways. The enabling technology for these features is the use of an embedded database. I’m not sure why no one else has taken this approach, but my theory at the time was that it was a simple matter of performance. Databases have to be manipulated with SQL, have to store everything in tables, etc. I suspected that updating a database so often was a problem. My goal was to write a blindingly fast database backend that would be capable of handling large amounts of data efficiently.
I was very, very successful. There are a number of application-side tricks to batch writes together, and I modify the database tables directly instead of issuing queries. The code is quite complex and annoying to maintain, but the results are nothing short of fabulous. With the standard sampling rate, the database update takes 3-10 ms every second or two, and the frontend process accumulates about one second of CPU time for every thirty on a target single-threaded process eating 100% CPU. Live queries don’t really make a dent at all, since they’re so infrequent. Overall, I’ve been thrilled with the performance I’ve gotten out of SQLCE.
I decided to add SQLite support for a few reasons. First of all, it’s cross-platform, and I’m looking to enable Mac OS X support (and potentially Linux) in this release series. Second, it doesn’t require installation, so distribution could potentially be simplified somewhat. Third, SQLite supports in-memory databases, which SQLCE does not. Some people have complained about the need to create a file every time they run a profile, and that will no longer be necessary.
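For anyone unfamiliar with the feature, an in-memory SQLite database behaves exactly like a file-backed one but lives entirely in RAM. Here’s a minimal sketch using Python’s stdlib sqlite3 module for brevity (my actual code is C#, and the table below is a made-up example, not SlimTune’s real schema):

```python
import sqlite3

# ":memory:" creates a database that lives only in RAM -- no file on
# disk, which removes the need to create a file per profiling run.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (function_id INTEGER, hit_count INTEGER)")
conn.execute("INSERT INTO samples VALUES (?, ?)", (1, 42))

# Queries work exactly as they would against a file-backed database.
row = conn.execute("SELECT hit_count FROM samples WHERE function_id = 1").fetchone()
```

The database vanishes when the connection closes, which is exactly the behavior people asking for file-free profiling runs want.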
There was one more reason, though: I was honestly curious how the performance of SQLite compares. I started by deciding I didn’t like any of the existing C# wrappers, so I wrote my own. (I wasn’t interested in ADO.NET support.) It’s a simple P/Invoke deal; it took me an hour to build the support I needed. The SQLite implementation is also much, much simpler than my SQLCE code. As I said before, I work directly with the tables in CE, which is fairly annoying to code. There’s no support for that sort of thing in SQLite, so I simply issue prepared statements. All the application-side caching tricks are still there, but writes go through normal SQL, one entry at a time. (No batched inserts!)
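My wrapper is C#, but the write path is easy to sketch with any SQLite binding. Roughly, the frontend still caches samples application-side, and a cache flush becomes a run of parameterized inserts. In Python’s stdlib sqlite3 (the table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE samples (thread_id INTEGER, function_id INTEGER, hit_count INTEGER)"
)

# Samples accumulated application-side between flushes.
cached_samples = [(1, 10, 5), (1, 11, 3), (2, 10, 7)]

# One parameterized INSERT per entry -- the statement is prepared once
# and re-executed with new bindings, so the SQL is only parsed once.
insert = "INSERT INTO samples (thread_id, function_id, hit_count) VALUES (?, ?, ?)"
for sample in cached_samples:
    conn.execute(insert, sample)  # one entry at a time, no batched insert
conn.commit()

total = conn.execute("SELECT SUM(hit_count) FROM samples").fetchone()[0]
```

The prepared statement is what makes this viable: binding new parameters to a compiled statement skips the SQL parse entirely, so the per-entry cost is just the table write.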
And how is performance? Equivalent to SQLCE, actually, with far less code and effort. Figures, right? It took a little legwork to get there, but nothing compared to what I spent on making the SQLCE implementation fast. When I started, the amount of time spent in the database was catastrophically long, and I thought maybe I’d wasted the effort. SQLite has a few options which are important to look at in order to get the best possible performance out of it. These options are called pragmas, and they turned extremely poor initial performance into an implementation that is good enough that I’ve now marked the SQLCE code obsolete.
I changed two pragmas in order to get the performance I wanted. Remember that the way I’ve written the code, every single data point (several thousand a second) is a separate transaction. I tried to combine them into one transaction but that failed miserably. I ended up specifying two pragmas:
m_database.Execute("PRAGMA synchronous=OFF");
m_database.Execute("PRAGMA journal_mode=MEMORY");
The first setting had a particularly dramatic effect, about 2000x in fact. It turns out that SQLite’s default behavior is to force a flush of the database file to disk every time a transaction commits, which is hideously slow. (I’m told that on some systems, it forces a flush of the ENTIRE filesystem’s pending writes.) Setting synchronous to off disables those flushes and relies on the OS to get things to disk eventually. The second setting moves the transaction journal into memory instead of using a file; with thousands of transactions, this again avoids a lot of file traffic. Unfortunately, keeping the journal in memory does mean a high likelihood of file corruption if the app dies mid-transaction, but .NET’s ability to fail fairly gracefully and still run finalizers offers a lot of protection against that.
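In any SQLite binding, the setup looks like the two Execute calls above. A sketch in Python’s stdlib sqlite3 (the file path is a made-up example), including a read-back of both pragmas to confirm the settings took:

```python
import os
import sqlite3
import tempfile

# Hypothetical profiling database file, just for demonstration.
path = os.path.join(tempfile.mkdtemp(), "profile.db")
conn = sqlite3.connect(path)

# Don't fsync after every transaction; the OS still writes the data
# out, but SQLite no longer blocks waiting on the disk.
conn.execute("PRAGMA synchronous=OFF")

# Keep the rollback journal in RAM instead of a -journal file on disk.
conn.execute("PRAGMA journal_mode=MEMORY")

# Pragmas can be queried back: synchronous=OFF reads as 0, and
# journal_mode reads as the string "memory".
sync = conn.execute("PRAGMA synchronous").fetchone()[0]
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
```

Both pragmas are per-connection (journal_mode=MEMORY being the exception that sticks to the database while the connection holds it), so they need to be issued right after opening the database, before any profiling writes start.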
I’m still planning to make a few more database engines available. I had kicked around the idea last year of being able to profile to a remote database instead of being restricted to a local filesystem, and I’m eager to see how a full-blown MySQL or SQL Server instance handles the data. I’m worried about the amount of data moving through the TCP/IP stack on a single system, though. I guess we’ll see what happens. I’m considering enabling plugins for that too, but right now I’m still fighting with how to expose data engine selection in the UI. I haven’t figured out a way I like yet.