2010 – The Year of In-Memory Databases?

First of all I have to apologize to my loyal readers for the long absence from my blog. In simple terms, “I got busy”. But in these economic times, I guess that is a good thing. I don’t want to make a promise I can’t keep, but I will do my best to keep this blog up-to-date.

The past few years were dominated by all major database vendors introducing and improving their database cluster products. There is the breed of shared-nothing clusters like Microsoft SQL Server 2008, and there are the shared-everything clusters like Oracle and Sybase. You can read all about this in my previous post “Grid Databases – The Future of Database Technology?”.

It is amazing how far these technologies have come and how much we have gotten used to “always available” databases. You know what’s coming next. Now that we have uninterrupted access to data, it would be great if we could get the data faster. Well, the database vendors have an answer for that as well.

It was about seven years ago that I was first introduced to the concept of in-memory databases. At the time it was a lesser-known database vendor called TimesTen that offered an in-memory database with blazing performance metrics, hence the name “times ten”. It was the perfect answer to solid-state disk drives, which could drain an IT budget in a hurry.

Apparently this technology was so intriguing that Oracle decided to buy TimesTen and make it Oracle’s in-memory database. The only downside is that it is not an Oracle database in memory; it is the TimesTen engine running in memory. This creates administrative headaches: DBAs need a separate skill set to manage the TimesTen engine in addition to the Oracle server, and developers need different techniques for both systems. Performance gains outweigh manageability concerns, I guess?

Just recently Sybase announced that its ASE server, in version 15.5, will include an in-memory option that provides the same functionality and manageability as the standard Sybase ASE server. This is a remarkable step, because the performance gains are transparent to client applications and the database engine will not force DBAs to learn new skills. To me this is a win-win situation.

Microsoft is still in the planning and rumor phase of providing an in-memory database for its next version of SQL Server. The code name for the next SQL Server upgrade is Kilimanjaro, and that is the name to use when searching for upgrade information. It is not clear when the new SQL Server release will be available, nor whether it will be named SQL Server 2010; that depends on whether it ships this year or not.

IBM has its own in-memory database for DB2, and I believe it is a Java-based engine with Java support. I have to admit that I’m not as fluent with DB2 as I wish to be, so please add your comments to this post if you’re a DB2 expert.

Having listed all the in-memory contenders, the question pops up: “What about Sybase IQ?” The same goes for any other data warehouse database, Teradata and Netezza for example.

The answer lies in the architecture of in-memory databases. They are designed to increase transaction processing volume, the classic OLTP workload. Data warehouses would not see any benefit from them. In-memory databases deliver extremely high-speed transaction processing because they do not have to confirm that each write reached disk. Traditional databases, to ensure data integrity, must wait for the disk I/O subsystem to confirm every write. Database vendors have come up with very complex and sophisticated caching techniques to ease this performance penalty, but they cannot ignore the fundamental requirement.
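
To make that difference concrete, here is a minimal Python sketch of my own, not taken from any vendor’s product, that contrasts the two write paths: a durable write that waits for os.fsync() to confirm the data reached disk, and a pure in-memory append that never touches the disk at all.

```python
# Contrast a durable, disk-confirmed write path with a pure in-memory one.
# This is an illustration only; real engines do far more (logging, locking,
# recovery), but the fundamental wait-for-disk cost is the same.
import os
import time


def durable_write(path, records):
    """Append each record to a file and wait for the disk to acknowledge it."""
    start = time.perf_counter()
    with open(path, "ab") as log:
        for record in records:
            log.write(record)
            log.flush()
            os.fsync(log.fileno())  # block until the OS confirms the write hit disk
    return time.perf_counter() - start


def in_memory_write(table, records):
    """Append each record to an in-memory structure; no disk confirmation."""
    start = time.perf_counter()
    for record in records:
        table.append(record)
    return time.perf_counter() - start


if __name__ == "__main__":
    records = [b"order-%06d\n" % i for i in range(5_000)]
    print(f"durable (fsync per write): {durable_write('tx.log', records):.3f}s")
    print(f"in-memory append:          {in_memory_write([], records):.3f}s")
```

Run it yourself; the gap between the two timings is exactly the cost that in-memory engines eliminate.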

In-memory databases bypass this disk-writing requirement, and that is what delivers the speed. Designed for high-volume transaction systems, like e-commerce shopping carts, in-memory databases are unbeatable when it comes to writing transaction data. And this is fundamentally different from the data caching of traditional database engines: data caching improves read performance, but does nothing to improve write performance.
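
To show why caching does not close that gap, here is a toy read-through cache, again just my own sketch rather than any vendor’s implementation: repeated reads are served from memory, but every write still has to wait for the slow backing store before it can be acknowledged.

```python
# Toy read-through cache: fast repeated reads, writes still pay the full
# cost of the slow backing store.
import time


class SlowStore:
    """Stand-in for a disk-backed table: every access pays a fixed latency."""

    def __init__(self, latency=0.005):
        self.latency = latency
        self.rows = {}

    def read(self, key):
        time.sleep(self.latency)  # simulated disk read
        return self.rows.get(key)

    def write(self, key, value):
        time.sleep(self.latency)  # simulated disk write + confirmation
        self.rows[key] = value


class CachedStore:
    """Read-through cache on top of SlowStore: faster reads, unchanged writes."""

    def __init__(self, backend):
        self.backend = backend
        self.cache = {}

    def read(self, key):
        if key in self.cache:            # cache hit: served from memory
            return self.cache[key]
        value = self.backend.read(key)   # cache miss: pay the disk cost once
        if value is not None:
            self.cache[key] = value
        return value

    def write(self, key, value):
        self.backend.write(key, value)   # still waits on the slow store
        self.cache[key] = value


if __name__ == "__main__":
    store = CachedStore(SlowStore())
    store.write("cart:42", "3 items")  # as slow as the backing store
    store.read("cart:42")              # repeated reads are memory-speed
```

The cache makes repeated reads essentially free, but the write is exactly as slow as before; that is the part an in-memory engine takes out of the picture.
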
There is a downside to these databases as well: they offer a shortcut around performance problems in poorly written applications. Like powerful hardware, in-memory databases have the potential to mask poor application development. We might see an explosion of in-memory database implementations for that reason alone.

Bottom line: this is cutting-edge technology that gives database architects another tool in the toolbox for designing the most effective database environment. Do yourself a favor and try to get your hands on a test environment to experience this technology first-hand. Yes, 2010 could be the year of in-memory databases.

Thanks for listening,
Peter
