The CIO’s job has changed considerably in the last few years. While the essentials remain the same – to deliver highly available, secure, interconnected systems and acceptable end-user performance – the ways to go about them have changed radically. It was hard enough back in the days when all apps and systems ran under the same roof, and CIOs used to know where their end-users were – in the office.
If anyone wanted access to the network on the road, IT would issue them with locked-down devices that required a traverse through slow and cumbersome virtual private network (VPN) connections. The majority of resources were under direct control and expectations were not set by consumer market capabilities. For a CIO today looking back, this must seem very old fashioned, because nowadays the average consumer is almost always online, uses interchangeable devices and expects videoconferencing capabilities in their pocket, among other applications.
Currently, companies operate with one foot on-premises and one in the cloud. This combination of private and public assets delivering essential business services is known as the ‘hybrid enterprise’ and it’s the norm. While this model can reduce costs and improve employee productivity, it can also be a nightmare for IT to manage. It is not enough to provision cloud apps and move on.
CIOs need visibility, optimisation and control across hybrid clouds and networks to ensure that all applications perform, no matter where they are hosted or managed. After all, consider the uproar that would sweep through your organisation if email went down or users could not access important files stored in the cloud.
The one decision a CIO cannot afford to make is to try to keep everything on-premises. That would put the company at a severe competitive disadvantage. According to Gartner, 75 percent of enterprises expect to have hybrid cloud deployments by the end of this year, and that number is only going to increase. As the cloud grows, on-premises computing continues as well, and will do so for the foreseeable future. To quote James Staten, former Forrester analyst and now Chief Strategist, Cloud and Enterprise Division at Microsoft:
“We know that cloud services and cloud platforms are here to stay and should be considered part of your overall IT portfolio but how much of that portfolio will these services occupy in your future? For most companies – and probably all enterprises – your future won’t be 100% cloud. And your business units and line employees have already ensured that it won’t be 0% cloud. So what’s the right number?”
The hybrid enterprise model makes perfect sense, but it also poses a number of critical decisions when planning your IT architecture. Is there a SaaS alternative, or do I need to continue to run a commercial software package? Should I build a private cloud, share a public cloud, or both? What do I host internally?
What's the best way to provide access to all applications for the workforce, whether people are at major sites, small branches or on the move? Do I provide access via more expensive but highly reliable private networks, or build secure paths across the Internet? Deploying applications in the cloud via SaaS options and providing access via inexpensive public networks can shorten time to provisioning and, on the surface, promise less IT workload to manage.
Along with these questions come the hidden costs of the hybrid enterprise. Having multiple cloud and SaaS providers leads to different, and often incompatible, architectural roadmaps across the entire technology stack – an IT management nightmare. So when a business process or workflow moves across the hybrid landscape, it is not fair to expect IT teams to quickly troubleshoot and solve problems within such a diverse architecture, much of which falls outside their purview. The responsibility really lies with the service provider or SaaS vendor.
The key to success in this new reality is ‘end-to-end visibility.’ It sounds like an empty buzz term, but it refers to a critical capability IT must have today: using a single management console to see all activities in real-time all the way from the remote end-user’s device, through any network they may use, to the app running in the data centre or the cloud.
Consider a network administrator at a multinational company who has implemented a centralised management console to monitor all apps, users and sites holistically across the entire global network. A real-time alert indicates that SharePoint has an issue, so the admin drills down to view all apps and confirms that all are nominal except SharePoint.
The admin examines each site running SharePoint, and sees all are fine except for one remote office. The current path selection policy directs SharePoint traffic over MPLS, and the console shows the MPLS pipe is almost at capacity. So the admin modifies the path selection policy for collaboration technologies like SharePoint to route over public Internet when MPLS is not available. The problem has been quickly identified and solved.
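The admin's fix above amounts to a simple policy rule: prefer MPLS for collaboration traffic, but fail over to the public Internet when the MPLS pipe is near capacity. The following Python sketch is purely illustrative – the `Link` structure, `select_path` helper and the 90 percent threshold are assumptions for this example, not the API of any real SD-WAN controller.

```python
# Illustrative sketch of app-aware path selection with an MPLS-capacity
# fallback. All names here (Link, select_path) are made up for this example.

from dataclasses import dataclass

@dataclass
class Link:
    name: str            # e.g. "mpls" or "internet"
    capacity_mbps: float # provisioned bandwidth
    load_mbps: float     # current measured load

    @property
    def utilisation(self) -> float:
        return self.load_mbps / self.capacity_mbps

def select_path(app: str, mpls: Link, internet: Link,
                threshold: float = 0.9) -> str:
    """Prefer MPLS for collaboration apps, but route over the public
    Internet when the MPLS pipe is above the utilisation threshold."""
    collaboration_apps = {"sharepoint", "teams"}
    if app.lower() in collaboration_apps and mpls.utilisation >= threshold:
        return internet.name
    return mpls.name

mpls = Link("mpls", capacity_mbps=100, load_mbps=97)      # almost full
internet = Link("internet", capacity_mbps=500, load_mbps=50)
print(select_path("SharePoint", mpls, internet))  # → internet
```

In a real deployment this decision lives in the controller's path selection policy rather than application code; the point is only that the rule itself is small once end-to-end telemetry is available.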
Achieving this level of end-to-end visibility requires five key building blocks:
- Network and application-aware path selection capability: Directs traffic on the appropriate network. A typical branch configuration has three paths, one based on MPLS, a second based on an Internet link combined with a secured overlay connecting the branch back to the data centre using Internet protocol security (IPSec), and a third path exiting directly to the Internet.
- Dynamic tunnelling capability with a central control plane: Provides for secure backhauling of branch traffic to the corporate data centre across the Internet using IPSec.
- Simple interface to cloud-based security service providers: Embracing local Internet breakouts, companies must also strengthen their security environments within the branches themselves. To do so, enterprises typically implement secure Web gateways (SWGs) that analyse traffic on specific ports, such as hypertext transfer protocol (HTTP) and hypertext transfer protocol secure (HTTPS), and often combine SWGs with advanced threat detection (ATD) to detect more sophisticated attacks. These capabilities are now being made available as a cloud service. Interfacing with a security service provider enables local Internet breakouts without requiring further investment in on-premises Internet security appliances.
- Inbound QoS: Manages local Internet breakouts and protects business Internet traffic against surges in recreational Internet traffic. For example, the finite bandwidth of the local Internet pipe can fill up as branch users consume a variety of business-critical SaaS applications alongside less critical, bandwidth-heavy applications like YouTube. A QoS capability that manages traffic from the destination, rather than from the source as traditional QoS does, effectively slows down less critical inbound traffic to make room for more business-critical flows, thus protecting experience and productivity for users of those applications.
- Unified management plane: Provides administrators with an intuitive interface and management plane based on high-level abstractions such as applications, sites, uplinks, or networks that match the way they see their IT environment.
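The inbound QoS building block above can be reduced to a toy allocation rule: business-critical classes get their full demand first, and recreational classes share whatever bandwidth remains. This Python sketch is an assumption-laden illustration – the class names, numbers and `allocate_inbound` helper are invented for the example, and real shapers work on packets, not spreadsheets of demands.

```python
# Toy sketch of inbound QoS on a fixed-size Internet pipe: critical
# traffic classes are served in full first, and less critical classes
# split the leftover capacity. All names and figures are illustrative.

def allocate_inbound(pipe_mbps, demands, critical):
    """Give critical classes their full demand (up to pipe capacity),
    then share the remainder among non-critical classes pro rata."""
    alloc = {}
    remaining = pipe_mbps
    for cls in sorted(critical):          # deterministic order
        alloc[cls] = min(demands.get(cls, 0.0), remaining)
        remaining -= alloc[cls]
    noncritical = [c for c in demands if c not in critical]
    total_nc = sum(demands[c] for c in noncritical)
    for cls in noncritical:
        share = demands[cls] / total_nc if total_nc else 0.0
        alloc[cls] = share * remaining
    return alloc

demands = {"salesforce": 20, "office365": 30, "youtube": 80}
alloc = allocate_inbound(100, demands, critical={"salesforce", "office365"})
# The critical apps get their full 50 Mb/s; YouTube is squeezed into
# the remaining 50 Mb/s despite demanding 80.
```

The design choice worth noting is that the throttling decision is made at the receiving end of the pipe, which is exactly what distinguishes inbound QoS from traditional source-side shaping.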
Sometimes what goes wrong is outside your company's control, so what good does it do to pinpoint another vendor's problem? Consider that the sooner a problem is found, or even anticipated before it hits, the sooner somebody can address it, even if that somebody is not your IT team.
That point leads into the struggle to achieve control now that IT typically controls just a fraction of the hybrid environment. Here are some approaches that work well in this very common scenario.
- Pull together. You cannot fight these battles alone. Pull in your critical service providers and your trusted peer network to help shed light on what is happening ‘out there’. This means reaching out to other IT executives and coming together to address issues like compromised server farms.
- Monitor the traffic. Find out where the bottlenecks and blockages are in the network. That will help you pinpoint where the problem is coming from. It’s essentially an issues heat map.
- Watch out for trends in your data. Did this problem come on quickly or slowly? Is there a trend that is leading toward an even bigger disaster six hours from now? Be predictive in your thinking to avoid getting stuck in reaction mode.
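The "be predictive" advice above can be made concrete with a small calculation: fit a linear trend to recent link-utilisation samples and project when the link will saturate. This Python sketch is a minimal illustration under assumed numbers; real monitoring tools use richer models, but the idea is the same.

```python
# Sketch of trend-based prediction: least-squares slope over evenly
# spaced utilisation samples, projected forward to a saturation limit.
# The sample values and one-hour interval are illustrative assumptions.

def hours_until_saturation(samples, interval_hours=1.0, limit=100.0):
    """Return the projected hours until utilisation reaches `limit`,
    or None if the trend is flat or falling."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var          # utilisation change per sample
    if slope <= 0:
        return None            # no upward trend to worry about
    steps = (limit - samples[-1]) / slope
    return steps * interval_hours

# Utilisation creeping up five points per hour from 70 percent:
print(hours_until_saturation([70, 75, 80, 85]))  # → 3.0
```

A flat history returns None, which is the "no disaster brewing" case; a positive answer measured in hours is exactly the early warning that keeps a team out of reaction mode.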
Once you achieve the necessary levels of visibility and optimisation, the maths becomes as simple as 1+1=2. In this case, it's 'Visibility + Optimisation = Control'. This may seem simplistic, but the equation refers to IT's ability to keep systems running at optimum levels and ensure information is available to all employees, whether they are based at headquarters, in remote offices, or on the road.
The point to remember is that 'visibility' refers not just to visibility of a particular device, network or app, but to end-to-end visibility that lets the IT team detect and fix problems before the end-user even notices. Optimisation means not just tuning something as specific as discrete infrastructure, but optimising how your entire business runs. And control applies not just at layer 4, but all the way up to the application layer. The 'Visibility + Optimisation = Control' equation is a universal one, and is now the go-to formula for ensuring that IT meets business objectives.