You may be familiar with Parkinson’s Law: ‘Work expands so as to fill the time available for its completion’. There are lots of variations, including ‘If you wait until the last minute, it only takes a minute to do’ and ‘Data expands to fill the space available for storage’, sometimes referred to as Parkinson’s Law of Data.
This last version will be familiar to you if you have ever bought a computer, confident it had enough storage, only to find the hard disk bursting at the seams within six months. In fact, Parkinson’s Law, although it was originally derived from Cyril Parkinson’s observations of bureaucracy in the Civil Service, crops up all the time in different guises when you are dealing with technology.
For example, if I ask someone to complete a task by the end of the week, I don’t expect to get it back before Friday afternoon. I suspect the ubiquity of Parkinson’s Law comes down to the fact that it is fundamentally about resource management, and project management is all about juggling resources that are valuable, scarce or both: time, equipment, people.
So Parkinson’s Law is a familiar concept, but I wonder what happens when something comes along that disrupts the economics to the extent that SAP HANA does? What happens when, as is the case at the extreme end of the HANA effect, your systems are running 100,000 times faster than they were before?
How an organisation makes use of the potential of in-memory computing depends on what its business is. For applications that run at the transactional level, this kind of speed-up is fantastic news. SAP’s Business Suite has just been announced on HANA, which will allow users to make use of the transactional data in ECC in ways that were simply not possible before.
Previously, all sorts of data transfer work, duplication of data and multiple tools were required to link the transactional and analytic viewpoints, but having HANA holding the landscape together not only makes things faster, but also simplifies the overall architecture.
Furthermore, there are use cases that could be imagined and desired, but not implemented without the super-fast solution that HANA provides. For instance, one of SAP’s new use cases proposes a retail company that can interact with customers in real time, as they enter (or presumably pass close by) one of its stores, rather than, say, in a monthly mailshot that might not be timely enough.
However, this scenario doesn’t apply so clearly in Enterprise Performance Management (EPM). The audience you are trying to reach isn’t a customer base of millions; it is rarely even in the hundreds. So what happens when we bring superfast accounting solutions to bear in EPM implementations?
We can quickly deal with much larger data volumes, but, as with any technical advancement, you need to keep an eye on the side effects. The Swedish statistician Hans Rosling referred to the domestic washing machine as ‘the greatest invention of the industrial revolution’, because it did so much to reduce domestic drudgery.
However, the ease with which the washing could be done led to clothes being laundered more often, which led to more work in drying, folding and ironing. Parkinson’s Law strikes again. And so it is, if you’re not careful, with other technological quantum leaps, such as HANA. The ability to interact with an enormous data set in real time is a tremendous thing. You can now ask 100,000 questions in the time that it previously took to ask one.
But here is the problem. It still takes a human being to decide which of these questions are relevant, and we aren’t going to make much of a dent in that massive list of questions because, whatever these in-memory database systems are capable of, they cannot create more working hours.
Chris Anderson of Wired magazine wrote an insightful article about what happens when technical resources become abundant rather than scarce, and how abundance encourages users to adopt a scattergun approach to querying a dataset, or what Cory Doctorow terms a dandelion strategy.
Interestingly though, this assumes that what we want is to use our available time to do more of the same thing, rather than to free up time that allows us to do something different and smarter. What is required is not necessarily the ability to ask thousands of questions, but the capability to ask a question you couldn’t ask before, or for the system to be clever enough to ask the questions for you.
If we look beyond transactional applications of in-memory database technology, EPM is an ideal application of the technology because it recognises both of these requirements. The speed of in-memory computing allows the business to run more planning cycles within the available monthly timeframe and to try out scenarios that would previously have been ignored due to a lack of time.
EPM applications can also guide users towards the areas of interest. Dashboards, alerts, exception reporting and infographics are great tools for helping users sift through large data sets quickly, which is important if the metaphorical ironing isn’t going to absorb all the time you have saved on the metaphorical washing.
While you can make strides simply by implementing in-memory computing, the bigger benefits come from using the technology alongside some common sense and an understanding of human nature, which I find rather encouraging.