In-Memory Databases vs Intel NVMe NUMA

For the past few years, the hot topic for database vendors has been the in-memory database. The general belief was, and maybe still is, that server memory is getting cheaper and density will continuously improve. For the most part this has been the case, though not as dramatically as predicted. However, the data storage needs of databases grew at a much faster pace over the same period, which created another problem.

Going beyond 1TB of memory in a single server quickly becomes very expensive. A 1TB database used to be called very large; that label is now applied to 5-10 TB databases. In-memory databases are needed where read and write performance are equally important. Traditional databases improve read performance by caching data in memory, but caching does little for write performance. To be ACID (Atomic, Consistent, Isolated and Durable) compliant, data must be written to persistent storage. An in-memory database achieves this by writing every transaction in parallel to high-performance storage, in many cases a flash drive on the PCIe bus or an SSD. Either way, every write must be persisted outside of volatile memory (RAM) to remain ACID compliant.
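To make that durability requirement concrete, here is a minimal sketch of the pattern (not any vendor's implementation; the class, file name, and record format are made up for illustration): the table lives in RAM, but every write is appended to a commit log and fsync'ed before it is acknowledged, so an acknowledged transaction survives a crash.

```python
import json, os

# Hedged sketch of write-ahead durability for an in-memory store.
# All names here are illustrative, not a real product's API.
class TinyInMemoryDB:
    def __init__(self, log_path="tinydb.log"):
        self.data = {}
        # Replay the log on startup to rebuild the in-memory state.
        if os.path.exists(log_path):
            with open(log_path) as f:
                for line in f:
                    rec = json.loads(line)
                    self.data[rec["key"]] = rec["value"]
        self.log = open(log_path, "a")

    def put(self, key, value):
        self.log.write(json.dumps({"key": key, "value": value}) + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())   # force the record to stable media
        self.data[key] = value        # apply in RAM only once it is durable

db = TinyInMemoryDB()
db.put("order:42", {"qty": 3})        # durable by the time put() returns
```

The fsync on every commit is exactly why the log needs to sit on very fast storage: the write latency of that device sets a floor on transaction latency.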

There are many case studies available that demonstrate the superior performance of in-memory databases, and this article is not disputing that. But moving to an in-memory database can be expensive, especially if you want to move a 10TB database into memory. Yes, there are in-memory databases with techniques that keep only the most important data in memory and transparently and seamlessly store the rest on standard disk arrays (sketched below). Or you can opt to keep all the data in memory; in that case, the only solution is a server cluster that pools all that RAM. AWS is working on a cluster offering 16TB or even more to support the in-memory database SAP HANA. Don’t be fooled: this is a costly infrastructure, and unless you are running a 3rd party application or starting a new application development project, moving to an in-memory database like SAP HANA always requires code changes. Changing application code can be as expensive as the infrastructure to support in-memory databases, not to mention the software license cost of in-memory databases like SAP HANA.
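As a rough illustration of that hot/cold technique, here is a hedged sketch in which an LRU-bounded dictionary plays the RAM tier and a plain dictionary stands in for the disk array; the class and method names are invented for the example, not taken from any product.

```python
from collections import OrderedDict

# Illustrative two-tier store: hottest rows stay in RAM, the rest are
# demoted to a slower tier. A dict stands in for the disk array here.
class TieredStore:
    def __init__(self, hot_capacity=1000):
        self.hot_capacity = hot_capacity
        self.hot = OrderedDict()   # in-memory tier, kept in LRU order
        self.cold = {}             # stand-in for the on-disk tier

    def put(self, key, value):
        self.cold.pop(key, None)
        self._admit(key, value)

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)        # refresh recency
            return self.hot[key]
        value = self.cold.pop(key)           # "read from disk"
        self._admit(key, value)              # promote the row to RAM
        return value

    def _admit(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        while len(self.hot) > self.hot_capacity:
            old_key, old_value = self.hot.popitem(last=False)
            self.cold[old_key] = old_value   # demote the coldest row
```

The application only ever calls get() and put(); the tiering happens underneath, which is what "transparently and seamlessly" means in practice.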

Let’s say you run an application with a database server that has been around for a while. It runs very well, but the demand on the database is constantly increasing: there are more transactions to process, more data is added every day, and the database maintenance jobs run longer and longer. Plus, the backups are now dangerously close to not completing within the allocated backup window. This can lead to serious performance issues when application tasks collide with backup jobs.

What is your next move? Find faster hardware, move the storage from traditional spinning disks to flash drives, or add more memory to grow the cache and serve more queries out of RAM? Or do you radically change everything and enlist the help of an in-memory database? Any one of these moves is expensive and time consuming to implement, especially the move to an in-memory database.

So, what is the alternative? As soon as Intel announced its NVMe™ NUMA storage, it was clear that this technology could breathe new life into older database servers and applications without tearing the house down. How so? This technology provides almost 1 million IOPS per NVMe™ NUMA SSD drive. SuperMicro created a server with 10 such SSD drives that is capable of 6-7 million IOPS in a RAID6 configuration. Best of all, this setup is not expensive at all.
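Numbers like these are measured with purpose-built tools (fio is the usual choice) driving many parallel jobs at high queue depth. Purely to illustrate what is being counted, here is a minimal Python sketch of a random-read loop; the device path is an assumption, and a single synchronous thread without O_DIRECT will report far lower, cache-skewed numbers than a real benchmark.

```python
import os, random, time

# Rough single-threaded random-read sketch, NOT a real benchmark:
# the path below is an assumption, and the OS page cache will skew
# the result. Vendors quote figures like "1M IOPS" from tools such
# as fio running deep queues across many parallel workers.
PATH = "/dev/nvme0n1"   # assumption: point at your own test file/device
BLOCK = 4096            # 4 KiB, the block size IOPS is usually quoted at
DURATION = 10           # seconds to run

fd = os.open(PATH, os.O_RDONLY)
blocks = os.lseek(fd, 0, os.SEEK_END) // BLOCK

ops = 0
deadline = time.monotonic() + DURATION
while time.monotonic() < deadline:
    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
    ops += 1
os.close(fd)

print(f"~{ops / DURATION:,.0f} reads/sec from a single thread")
```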

Because the database engine does not change, no application changes are required, which means that a move from your current infrastructure to the Intel NVMe™ NUMA technology is transparent and non-intrusive to the applications and the business. Whenever a technology improvement can be made without impacting the business while still greatly benefiting it, that is a win-win situation.

I highly recommend reading our latest case study, “Case Study – Get SAP HANA Performance Out of Your SYBASE Database for 1/10th the Cost”, regardless of which database platform you are currently running. It shows how we achieved mind-blowing performance improvements for an old Sybase ASE database server, and you can readily carry these gains over to other databases like Oracle, SQL Server, or even MySQL.

As always, not every database infrastructure is the same, and not every database benefits from this technology. Keep in mind, though, that it is inexpensive enough that you can safely try it out. I’m looking forward to hearing your feedback.
