Today we face ever more complexity in our IT landscapes. We are forever bolting on newer, faster, cheaper solutions to deliver business services. And, as if that weren't enough of a recipe for night terrors, we keep layering on top of existing systems, whether back-end transactional systems, data repositories or utility services, so that we can reuse as much as possible.
The challenge is that when the next major project comes along, we face a dilemma: how do we test the new solution in pre-production? Worse, we don't always control the dependent systems we need to test against.
What is even more worrying is that we don't want to fire test transactions at live systems for fear of losing data integrity. Or we hit the inevitable "there is no one left who knows how it works, DON'T TOUCH IT!" And trust me, having worked in the IT industry for over 15 years, I have seen this!
But this reminds me of a comment I heard once about a change to a network component in a development environment by the head of networks at the time, who said “by default, the network is a production system”.
What he meant was that changing a device in a development environment could still affect production, because the two are connected. Thankfully we separate production and development networks rather better now, so this is less of an issue.
However, today, we face increased complexity and these delineations are not always as clear. So what can be done?
What we need is a simple, scientific approach: isolate the variable being tested and keep everything else constant. Easier said than done, right? Further, we need to repeat tests, run them earlier in the cycle, clean out environments to guarantee a known starting point, and ensure there is no data corruption. All of this adds cost and makes maintaining a real replica environment less feasible.
The thing is, we NEED to test. It is not optional; just look at the recent headlines and the number of outages that might have been mitigated by more thorough testing. The point I'm making here is that we need to be able to develop and test end-to-end, early and often.
The earlier we can test, the sooner we identify defects, and the more money we save in error prevention. The benefits are clear: some studies cite a 1:10:100 multiplier for the cost of fixing a defect in design, coding and production respectively.
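To make the multiplier concrete, here is a tiny worked calculation. The base cost is an assumed figure for illustration only; the studies quoted above report the ratio, not absolute amounts.

```python
# Illustrative arithmetic for the 1:10:100 rule: a defect that costs one
# unit to fix at design time costs roughly ten units during coding and a
# hundred once it reaches production.
base_cost = 500  # hypothetical cost (in GBP) of fixing a defect at design time
cost_in_coding = base_cost * 10       # ten times dearer during coding
cost_in_production = base_cost * 100  # a hundred times dearer in production
print(cost_in_coding, cost_in_production)  # 5000 50000
```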
Going back to the system under test, we have options for handling dependent systems. We can stub out external integrations so they always return a predicted response, but this doesn't really test the integration code and, let's face it, we all dread standing in front of the boss and explaining how a stubbed routine made it into production.
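A minimal sketch of the problem, with all names (`PaymentGatewayStub`, `authorise`, `checkout`) invented for illustration: the stub short-circuits the integration entirely, so none of the real call path is exercised.

```python
# Hypothetical stub for an external payment gateway. Because it always
# approves, the real network call, timeouts, serialisation and error
# handling in the integration code are never tested.
class PaymentGatewayStub:
    def authorise(self, amount):
        return {"status": "APPROVED", "auth_code": "STUB-0001"}


def checkout(gateway, amount):
    """Code under test: accept the order if the gateway authorises it."""
    response = gateway.authorise(amount)
    return response["status"] == "APPROVED"


# The stub approves even an amount a real gateway would reject --
# exactly the false confidence described above.
print(checkout(PaymentGatewayStub(), -50.0))  # True
```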
So if we need to test, but don’t want to risk stubbing something and looking daft, what can we do? We virtualise.
Service virtualisation is a software solution that emulates a system we interface with, without the need to maintain it or clean it out after each test run. This removes the cost of additional (sometimes archaic but always expensive) hardware and the risks of stubbing; it also removes the delays of cleaning and preparation before reuse.
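To show the difference from a plain stub, here is a minimal sketch of a virtual service using only the Python standard library. The endpoint and payload are illustrative assumptions; the point is that the client code under test speaks real HTTP, so the integration path (connection handling, parsing, error responses) is actually exercised.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class VirtualAccountService(BaseHTTPRequestHandler):
    """Emulates a dependent system's API with canned, protocol-real responses."""

    def do_GET(self):
        if self.path == "/accounts/42":
            body = json.dumps({"id": 42, "balance": 100.0}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)  # unknown paths fail like the real system
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet


# Start the virtual service on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), VirtualAccountService)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The client code under test talks HTTP exactly as it would in production.
with urlopen(f"http://127.0.0.1:{port}/accounts/42") as resp:
    account = json.loads(resp.read())
print(account["balance"])  # 100.0
server.shutdown()
```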
Increasing the frequency of testing and testing earlier has been shown to reduce the number of major bugs significantly. It does come with some overhead, however: test scripts driven at the front end need to be synchronised with test responses at the back end. Orchestrating this can get complex, though no more complex than maintaining a mirror environment.
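One way to keep the two sides in step is to drive both the front-end script and the virtualised back-end from a single shared scenario table, so a change to a scenario cannot leave them out of sync. The names and data below are illustrative assumptions, not from any particular tool.

```python
# Shared scenario table: the single source of truth for both the
# virtualised back end and the front-end test scripts.
SCENARIOS = {
    "order_shipped": {"path": "/orders/1", "response": {"status": "SHIPPED"}},
    "order_missing": {"path": "/orders/999", "response": {"status": "NOT_FOUND"}},
}


def virtual_backend(path):
    """The virtualised service answers from the shared scenario table."""
    for scenario in SCENARIOS.values():
        if scenario["path"] == path:
            return scenario["response"]
    return {"status": "NOT_FOUND"}


def run_front_end_script(name):
    """A front-end script drives the request and checks the same scenario."""
    scenario = SCENARIOS[name]
    return virtual_backend(scenario["path"]) == scenario["response"]


print(all(run_front_end_script(name) for name in SCENARIOS))  # True
```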
So, do you virtualise or not, and can we use virtualisation in other areas? It has many potential uses. For example, we could use it as a honeypot in security testing: we give an attacker the impression that they have breached our network, when really we are merely containing them in a secure area where they can exhaust themselves trying to breach something that is pretending to be what it is not.