Saturday, October 5, 2013

In Memory...

One of the most frequent questions I have been getting of late in places like Chicago, Milan and Stockholm involves memory.

Specifically: "when will DB2 for i have in memory capabilities like the other database systems"?

This question is somewhat understandable given all the recent hype from SAP, IBM Software Group and now Oracle about their "in memory" database capability.  At the same time, I am a little disappointed when I get this question (and the accompanying concern) from long-time users and supporters of IBM i.  You see, the ability to take advantage of all the memory available in the system has been a hallmark of DB2 for i for... well, 25 years.  If you count the System/38 before it, you can add another decade.

Yep, the notion of in memory and nothing but memory for 35 years. Just saying.

Why All the Fuss?


To explain both the hype and the high expectations of "in memory" capabilities, let's look at the concepts of data access and data processing in a very simple way.

The general idea revolves around locating and accessing data on or in a storage device, retrieving the data into memory, and pulling the data from memory into the processing unit to crunch the numbers. Obviously the entire thing takes time. How much time is a function of distance and duration. Specifically, how far away is the data from the CPU, and how long will it take to get the data from its resting point into that CPU?

As we all know, the relative distance and duration of accessing data from a hard disk drive is much greater than accessing data from memory.


To appreciate this latency (i.e. the period of time that one component in a system is spinning its wheels waiting for another component), imagine wait time as being the time it takes the data to move from the hard disk drive (HDD) to memory, plus the time it takes the data to move from memory into the CPU.

To reduce the waiting time, faster storage devices have been developed, with the current state of the art being solid state drives (SSD). The relative distance and duration data must travel from SSD to memory can be much less than from HDD, especially for random reads.
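
To put very rough, order-of-magnitude numbers on it (generic ballpark figures, not measurements from any particular system): a random read from a spinning disk is typically measured in milliseconds, a read from an SSD in tens to hundreds of microseconds, and a fetch from main memory in nanoseconds. Seen from the CPU, a single disk read can represent millions of processor cycles spent doing nothing but waiting.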


Can We Do Better?


If the latency is to drop even further, both distance and duration have to be eliminated, not just reduced. So why not keep the data in memory? In other words, remove the relatively long path of moving data from HDD or SSD to memory.



Sounds good, but there must be some issues with only having data in memory and not on disk.

The biggest issue is data longevity. Memory is volatile, and the data is only available while the memory is powered on. If the electricity fails, the representation of the data no longer exists, and thus the traditional need for other types of "permanent" storage.

Another issue is cost. Up until recently, good and reliable memory modules were very expensive as compared to other storage media.

And finally, for many years only a very small number of special computer systems could recognize and use large memory subsystems. This is the main point behind the advent of 32-bit and then 64-bit operating systems and applications.  Even if you could buy the memory, you needed to be able to address it.
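
For a concrete sense of scale: a 32-bit address space can reference at most 2^32 bytes, or roughly 4 GB, no matter how much physical memory is installed, while a 64-bit address space raises that ceiling to 2^64 bytes.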

A pop quiz!  Back in the late 1970s and early 1980s, IBM's System/38 coming out of Rochester, Minnesota, supported how many bits of addressing?

Answer: 48!  Yep, 48-bit hardware, 48-bit OS and 48-bit DBMS.  Hmmm... I wonder why?

Bonus info: if my memory serves correctly, a 48-bit address space contains 2^48, or 281,474,976,710,656 possible unique addresses.

At the end of the day, commercially available computer systems represent a grand compromise: a compromise between what is possible in computer science, the effectiveness of the architecture, and the cost of bringing the product to fruition.  I am sure you appreciate the following law of economics:

Good, Fast, Cheap - Pick any two.


Other Solutions to Minimize Distance and Duration


Given all the inherent compromises, there are other solutions available to reduce latency and increase the throughput of data processing.

If the data can be brought into memory before the processing must occur, there is effectively no waiting. Think of reading ahead and having the data ready for the next operation. This is called asynchronous I/O, as opposed to synchronous I/O, where the processing must wait (in line) for the data to be read or written. Synchronous I/O effectively means more latency, whereby the fastest part of the system (the CPU) waits for the slowest part (the HDD).

A large memory space, an independent I/O subsystem and an intelligent database management system allow asynchronous I/O to occur, whereby the data is read from disk and pulled into memory at the correct time and correct rate, all to minimize or eliminate latency. Hmmm... DB2 for i can do that.

I/O parallelism is another effective and elegant technique to get the data into memory and next to the processor(s) as fast as possible. The ability to not only read ahead, but to fill the available memory with data as fast as possible via independent parallel tasks or threads is a hallmark of effective query machines. It is also a hard requirement for very large database environments. Hmmm... DB2 for i can do that.

To process the vast amounts of data brought into a large memory space, CPU parallelism can also be employed. This is known as symmetric multiprocessing (SMP). Hmmm... DB2 for i can do that.
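
As a rough illustration of how a developer or DBA can invite this behavior (a sketch only - the exact options and values depend on your release and on whether the DB2 Symmetric Multiprocessing feature is installed, and the table name is hypothetical), the degree of I/O and SMP parallelism can be influenced per job or per connection, for example through the CHGQRYA command or the CURRENT DEGREE special register:

    -- Let the optimizer consider the maximum degree of I/O and SMP parallelism
    -- for this connection; the CL equivalent is roughly CHGQRYA DEGREE(*MAX).
    SET CURRENT DEGREE = 'MAX';

    -- A large scan and aggregation that can now be spread across parallel tasks,
    -- pre-filling memory ahead of the CPU work (MYLIB.SALES_HISTORY is hypothetical).
    SELECT sale_year, region, SUM(amount) AS total_sales
      FROM mylib.sales_history
      GROUP BY sale_year, region;

    -- Return to the job's default behavior when finished.
    SET CURRENT DEGREE = 'NONE';

The system-wide default comes from the QQRYDEGREE system value, so per-job overrides like this are usually reserved for work that is known to benefit from parallel scans and index builds.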



As mentioned previously, incorporating different levels of storage, each with its own attributes, is commonplace today. Using spinning HDDs, fast SSDs, and read/write cache can be a wonderful way to meet business requirements and avoid unnecessary compromises. Hmmm... DB2 for i can do that.



A good memory... imagine remembering the query, remembering the answer and providing this answer back, without doing the work to compute the answer again and again. This is a form of results caching. Hmmm... DB2 for i can do that.
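
One user-directed flavor of this idea that SQL developers may recognize is the materialized query table (MQT), where a query's result set is stored in a table the optimizer can consider instead of recomputing the answer. The sketch below uses hypothetical names and abbreviated syntax (check the SQL Reference for your release), and it illustrates the concept rather than the internal mechanism DB2 for i uses to remember answers:

    -- Hypothetical summary table whose stored result the optimizer may reuse.
    CREATE TABLE mylib.sales_summary AS
      (SELECT sale_year, region, SUM(amount) AS total_sales
         FROM mylib.sales_history
         GROUP BY sale_year, region)
      DATA INITIALLY DEFERRED
      REFRESH DEFERRED
      MAINTAINED BY USER
      ENABLE QUERY OPTIMIZATION;

    -- User-maintained, so you populate and refresh it on your own schedule.
    INSERT INTO mylib.sales_summary
      SELECT sale_year, region, SUM(amount)
        FROM mylib.sales_history
        GROUP BY sale_year, region;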

OK, Does IBM i Support In Memory Database or Not?


Of course it does!

From day one, the capability of putting and keeping objects in memory has been part of IBM i and DB2 for i. As alluded to earlier, the entire system was built from the ground up to support ALL objects in memory. To refresh YOUR memory, see Single Level Storage.  Furthermore, some folks still refer to memory as "main storage" and disk as "auxiliary storage". By definition, IBM i believes everything is residing in main storage (i.e., memory). Why else have a very large unique address space and the ability to support a very large physical memory configuration? Why have the ability to perform both logical and physical I/O?

DB2 for i has the ability to manage what data comes into memory, when it comes into memory, and how long it stays in memory. No special versions of software, no buffer pools or special database subsystems, and no worries about what workloads can truly make use of such an environment. Online transaction processing and analytical queries work just fine. And of course, data persistence, data integrity and data recoverability are there too - compliments of your friend DB2 for i.

Time and Space


If you want to put your entire database (and application) into memory, no problem, just provide the memory - IBM i will take it from there. My AS/400 Teraplex Center team routinely demonstrated this back in the 1990s! Having the largest systems at our disposal, we regularly preloaded an entire database into memory and ran queries without any physical I/O. This sounds like "in memory" to me!

With IBM i 7.1 there is even more flexibility, including a keep in memory attribute for tables and indexes. This tells the SQL query optimizer and database engine to bring the specific object into memory at first touch, and to keep this object in memory for as long as possible (i.e. this object has higher priority in terms of staying in memory when objects are paged in and out).
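
As a rough sketch of what this looks like in practice (the object names are hypothetical, and the exact clause and command syntax should be confirmed in the documentation for your release), the attribute can be set with SQL on a table or an index, or with the KEEPINMEM parameter of the CHGPF and CHGLF CL commands:

    -- Ask DB2 for i to favor keeping this hypothetical table in memory.
    ALTER TABLE mylib.sales_history
      KEEP IN MEMORY YES;

    -- The same preference can be given to an index when it is created.
    CREATE INDEX mylib.sales_history_ix1
      ON mylib.sales_history (sale_year, region)
      KEEP IN MEMORY YES;

    -- The CL route is roughly CHGPF FILE(MYLIB/SALES_HISTORY) KEEPINMEM(*YES),
    -- and CHGLF for a keyed logical file.

Keep in mind this is a preference, not a pin: as described above, the object simply gets higher priority when pages compete for memory.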

At the end of the day, IBM i is much more sophisticated than most people believe. When it comes to using and managing memory, DB2 for i can employ proactive, reactive and highly adaptive techniques without user intervention. And yes, this is the very same (overworked and underappreciated) system you own today.

The Lesson


Seek not the answer, but to understand the question.

The constant questioning about "in memory" capabilities leads me to believe we have lost our own memory of the unique and effective computing architecture available with IBM i running on Power Systems.

Maybe it's time you refreshed your memory.  If you need assistance with that, don't forget, we are here to help.

7 comments:

  1. Hi Mike,
    it's been a while since you wrote this but a question for you anyway... despite working on the 'i' for 2 decades, I still like database tech and recently investigated an in-memory database for features (VoltDB in this case).

    Apart from (1) declaring the demise of the dinosaur databases (Oracle & DB2 were in that list - no mention of DB24i) and (2) having a product to give the hard sell, there were some interesting aspects of the database that could contextually apply on the 'i' which you haven't addressed here. Being in-memory isn't the end of the equation.

    In particular is the concept of machine sympathy - the idea that the code is written in such a way as to maximize the potential of a CPU. A simple example is ensuring that data for a file is always stored in such a way that retrieval of the data is possible with one operation. For an in-memory DB this literally requires that the data to access is contiguous in memory, no fragmentation.

    With the i, all I know of is memory paging to/from disk, and fragmentation boosting disk I/O across more disk arms, but disk has always been in the equation. So, if there were a theoretical IBM i setup with pure memory (is that even possible?), does the OS consider contiguous arrangements or is it still page-size blocks? Does the fact that 'i' is a virtual OS hinder or benefit this equation?

  2. Excellent question, Darron. In general, IBM i is designed and implemented to handle the dispatching of work to the processors as well as managing the I/O requests to minimize waiting and maximize throughput. When it comes to DB2 for i, using SQL and good data-centric programming techniques (set-at-a-time, for example) is a critical success factor in allowing the system to optimize the query plan and execution. This "optimization" allows for CPU-bound plans and smooth I/O, whether this includes in-memory techniques or not. Simply put, the DB2 for i Query Optimizer does what you are describing: maximize the potential of the computing and I/O resources, while being a good neighbor.

    1. Hi Mike,
      Excellent article.
      If an object is in memory from an interactive job (*INTERACT pool) and a batch job needs the same object, but batch runs in *SHRPOOL1, does the object have to be brought into memory again because it's in a different pool?
      With the large amount of memory now available, even more on a new P9, I was planning on using SETOBJACC to keep some objects in memory, hoping to improve some batch jobs' performance.
      SETOBJACC needs to point to a storage pool.
      The objects being considered for SETOBJACC might be used by both interactive (*INTERACT) and batch (*SHRPOOL1) jobs.
      My thought was to run everything out of *BASE, and have SETOBJACC point to *BASE.
      I'm looking for some confirmation on the use of the multiple memory pools along with SETOBJACC.

  3. Still a great article half a dozen years later. Just had to point a manager to this link again today. Thanks Mike!

