Back in the days of the mainframe, application performance scaled with the size of the server machine at your disposal. Everything ran on these systems, and resources were allocated as you needed them. The age of client-server and multi-tiered applications then led to a proliferation of mid-range and smaller Linux/Unix boxes, each dedicated to a separate task.
Now, the rise of service-oriented architecture and web services has seen applications change again. Each application can work across multiple tiers of server infrastructure, pulling information from various sources inside and outside the business to deliver what the user wants.
In a consumer environment, examples of services built on these principles include cost comparison websites and e-commerce sites. Finance, insurance and banking companies are adopting the same approach.
This shift in how applications are put together offers great opportunities to deliver services that customers want, but the back-end infrastructure to supply them has become increasingly complex to manage. With so many different moving parts to consider, tracking issues is much more difficult – never mind rolling out new features.
The other major development is the role of the browser. Instead of being a passive receiver of data from back-end services, the browser is increasingly taking on more of the application logic and execution. As web 2.0 applications have grown in popularity, applications in general have evolved to deliver a richer, more interactive experience to the user.
This requirement for richer services relies on the browser understanding what is going on and providing the local power to deliver the service that the organisation had in mind. Frameworks like jQuery, Dojo, JSF and other Ajax tools or libraries increase the level of activity at the browser, while limiting the visibility of that activity at the back-end or to traditional performance monitoring solutions.
Testing functionality and user experience for web-based services has to catch up with this change in application delivery, particularly as the browser has a greater level of responsibility for user experience than in the past. The growth of browser functionality and integration of third party services are affecting requirements for testing and finding performance issues.
Instead of looking just at the back-end servers to understand how the application is performing, knowing what end-user experience is being delivered is critical. According to Amazon, a 100 millisecond improvement in website performance can equate to a one per cent increase in revenue generated via the site.
Conversely, degradation in performance leads to abandoned shopping carts and customers leaving to seek alternative services. Aberdeen Group surveyed over 160 organisations in 2008 and found that a one second delay in page load times led to 11 per cent fewer pages being viewed, and a 16 per cent decrease in customer satisfaction.
Finding where a performance issue exists, or where there are opportunities to scale services further, can directly lead to better overall business results. Getting this full overview of application performance therefore mandates looking at all the parts of the chain that provide a service to the user.
This involves looking at the back-end infrastructure, the network between the company and the user, and the user’s own browser experience as well. Without a complete overview of these systems, problems will either be missed or you won’t get a true picture of what end-users see.
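One way to capture that browser-side view is the W3C Navigation Timing API, which exposes the timestamps the browser records while loading a page. The sketch below derives a few user-perceived metrics from a timing entry; in a real page the entry would come from `performance.getEntriesByType('navigation')[0]`, and the sample values here are purely illustrative, not real measurements.

```javascript
// Sketch: deriving user-perceived load metrics from a Navigation Timing
// entry. In a browser this entry would come from
// performance.getEntriesByType('navigation')[0]; the sample values
// below are hypothetical.
function loadMetrics(entry) {
  return {
    timeToFirstByte: entry.responseStart - entry.startTime,        // network + server time
    domReady: entry.domContentLoadedEventEnd - entry.startTime,    // DOM parsed, scripts run
    fullLoad: entry.loadEventEnd - entry.startTime,                // all resources loaded
  };
}

const sampleEntry = { // hypothetical timings, in milliseconds
  startTime: 0,
  responseStart: 120,
  domContentLoadedEventEnd: 850,
  loadEventEnd: 2100,
};

console.log(loadMetrics(sampleEntry));
// → { timeToFirstByte: 120, domReady: 850, fullLoad: 2100 }
```

In this illustrative case, a back-end monitor would only ever see the work behind the first 120 milliseconds; the remaining two seconds of parsing, scripting and rendering happen entirely in the browser, invisible to server-side tools.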
This full picture of performance against expectations can then be used to show where problems are occurring, and then how to fix them. Surprisingly simple things can have a large impact.
According to Forrester, users expect a site to load in two seconds, and quit after four if they don’t get what they need. In other words, performance hits immediately harm your business. However, without an understanding of the end-user experience and what is taking place within the browser, companies can be unaware of these problems because the back-end infrastructure is performing well, and that is where their monitoring stops.
This full overview should also cover any third-party services that are being used to deliver the full web application to the user. In these circumstances, data on the back-end systems is not available to you, but the performance of these services can still have an impact on the wider application. Again, without analysis of what the end-user perceives, you can think things are going well when, in fact, user frustration is high.
In this age of increasingly complex applications, being able to see how applications are performing is important both for the IT team managing production environments and for the business. Lacking a comprehensive overview of application dependencies, network performance and browser performance translates into missed opportunities and lost revenue. In an age of competitive online businesses and easy churn for end-users, getting the full picture is a must.