I first met the word “broadband” in the mid 1980s when it was used to describe analogue RF signals passing down a coaxial cable. Then it re-emerged about ten years later as a buzz-word for what was then considered to be very high speed Ethernet in the LAN. So I was surprised to hear a NetEvents keynote speaker from BT Labs claiming that the trouble with broadband in the LAN was that it was far too slow. Had he forgotten the days when you had to wait 4 or 5 minutes for the network printer to cough up a page?

His argument was this: in the days when it took five minutes to print, you just saved up one or two print jobs then launched them at your convenience; you walked down the corridor to the printer, making yourself a cup of coffee on the way, and arrived as the pages were coming out – no problem. Nowadays you have a laser printer beside your desk and, having hit the print button, nothing to do but sit and glare at the ruddy thing while you wait for 30 interminable seconds. Five minutes, no problem. Thirty seconds, a real pain – because it does not fit human needs.

That story sums up the network managers’ basic dilemma. Every time there is a significant upgrade in networking speeds, you enter a whole new level of user expectations and demand. A naive observer might think that moving from 10G to 40G Ethernet would make everyone four times as happy, as the goods are delivered in a quarter of the time – but life is not that kind, because high among today’s drivers for faster broadband is video – no longer just for entertainment but a serious business tool from videoconferencing to webcasting.

Video presents a huge problem, because humans are a highly visual species. Lacking the superior hearing and smell capabilities of many mammals, we have evolved extremely critical vision. One dud pixel on an LCD display of a million pixels, and the eye goes straight to it. Why? Because for our ancestors in the savannah that tiny flicker could signal a venomous spider, a stealthy tree snake disturbing a leaf, or a lion lurking in the grass far ahead.

So the clearer the picture, the more critical the viewer. When the BBC delivered a flickering 405-line image to people’s homes in the 1950s it was hailed as a miracle. But when today’s videoconference delivers a good picture and clear audio, but the odd frame gets lost here and there, it will be called “rubbish”.

Video is just a prime example. The fact is that faster networking is driven by more complex applications across the board, and user experience is becoming ever more critical as a result.

So it’s not just data moving at 40Gbps. It’s the network manager’s, and ultimately the company’s, reputation that’s on the move. And that puts a whole new emphasis on the need for pre-testing and monitoring.

How serious is the 40G Ethernet market?

Figure 1, from Ovum, does not present a very exciting picture of optical 40G Ethernet uptake over the last four years. Perhaps not surprising when you realise that a 40G line card costs five or six times as much as a 10G equivalent, and the pundits suggest the premium should fall to two and a half times to make an upgrade really attractive. But there are circumstances, such as existing buried or undersea backbones, where it is still far cheaper to upgrade to 40G rather than lay three more 10G cables.

Figure 2 shows actual revenue from these backbone deployments, plus a much smaller contribution from metro deployments in blue. It also shows a tiny contribution from 100G coming in after 2012. In these terms 40G does not look to be much of an issue, but on the other hand there is a potential market waiting for standardisation and, as always, once the uptake reaches critical mass the resulting drop in prices will make 40G a “must-have”.

The market pressure comes from a variety of directions. Storage networks – and disaster recovery server mirroring – continue to grow, but above all is the move to data center virtualization. This represents a significant increase in inter-server traffic as processes are spread across multiple physical machines, or when virtual machines migrate.

On a humbler level, even some workstations are now equipped with 10 Gbps interfaces, which will have to be aggregated to a higher bandwidth to accommodate the traffic across the backbone. Then you have specific vertical markets like the equities trading industry where the need for billions of transactions and terabytes of information at extremely low latencies means that much surplus bandwidth is needed for survival; and, in the medical field, an MRI scan can generate 500 Mbytes of critical data per hour.

These – together with the demands of video-intensive business services – are all highly critical applications and the user is not going to give you any credit for a move to higher network speeds unless the performance, reliability and quality of experience delivered is of the highest order.

Such quality can only be achieved by thoroughly pre-testing the network in every conceivable realistic traffic condition, and it can only be maintained at that level with a well conceived and ongoing monitor and measurement programme.

The testing challenges of 40G Ethernet

Isn’t 40G simply faster Ethernet? In some ways it’s more of the same, in others the new standards change everything.

At the upper layers, each component or step must do its job in a fraction of the time. A router, for example, strips lower-layer information from an incoming packet, queues it, performs a route lookup, and sends it to the proper outbound queue to be packetized – while simultaneously filtering, SLA monitoring and policing, and applying CoS/QoS prioritization to the data.

The router also sets up and tears down VPN connections, builds multicast routing trees, performs routing table updates for multiple protocols, maintains statistics and performance, alarm, event and failure logs, and performs firewall and security functions, such as key exchanges, attack detection and prevention – let alone the demanding task of encryption/decryption. A router with 40G interfaces must do all this at four times current maximum speeds without dropping packets, introducing excessive jitter, compromising VPN boundaries, or reordering packets – errors which would be especially disruptive for storage networks and high-bandwidth video.
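To get a rough feel for how little time a router has for all of that work, consider the per-packet budget at line rate with minimum-size frames. The sketch below is back-of-the-envelope arithmetic, not a figure from any vendor datasheet; it uses the standard Ethernet wire overhead of 20 bytes per frame (preamble, start delimiter and inter-frame gap).

```python
# Per-packet processing budget at line rate with minimum-size (64-byte) frames.
# Each frame occupies 84 bytes on the wire once the 7-byte preamble,
# 1-byte start-of-frame delimiter and 12-byte inter-frame gap are added.

def line_rate_mpps(rate_gbps, frame_bytes=64):
    """Maximum frame rate, in millions of packets per second."""
    wire_bits = (frame_bytes + 20) * 8
    return rate_gbps * 1e9 / wire_bits / 1e6

for rate in (10, 40):
    mpps = line_rate_mpps(rate)
    budget_ns = 1e3 / mpps  # nanoseconds available per packet
    print(f"{rate}G: {mpps:.2f} Mpps worst case, {budget_ns:.1f} ns per packet")
```

At 10G the worst case is already around 14.9 million packets per second; at 40G it rises to roughly 59.5 Mpps, leaving under 17 ns of processing budget per packet for everything listed above.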

Testing such devices starts with validating the transport and the ability of the system to pass line-rate traffic, but also includes testing the functionality, performance, scalability and QoE of the upper-layer engines that deliver services. All this has major implications for the test lab budget, the development cycle, and the possible limitations of the test platform itself.

A simple example of the challenge to the test system is given by the accuracy required of the internal clock used to measure latency and jitter. A 20 nanosecond clock resolution works fine for 10G Ethernet, but at 40G a 64-byte frame – 84 bytes on the wire once preamble and inter-frame gap are included – takes just 16.8 ns to transmit, and frames will begin overlapping a single tick of the timestamp clock.

This means that latency and jitter measurements begin to break down, and there might even be issues with counting packets. (The problem is even worse for 100G Ethernet, when every tick of the clock spans three frames and measurement becomes simply impossible.) So it is essential to have a test facility not just with 40G connectors, but one designed from the ground up to handle the higher speed.
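The arithmetic behind those figures is straightforward to verify. The sketch below assumes a 20 ns timestamp-clock tick, as in the example above, and computes how many minimum-size frames fit into one tick at each line rate.

```python
# Serialization time of a minimum-size Ethernet frame at various line rates,
# compared with an assumed 20 ns timestamp-clock resolution.
# On the wire each 64-byte frame carries 20 bytes of overhead:
# 7-byte preamble + 1-byte start delimiter + 12-byte inter-frame gap.

WIRE_BITS = (64 + 20) * 8   # 672 bits per minimum-size frame
CLOCK_TICK_NS = 20.0        # assumed test-system clock resolution

def frame_time_ns(rate_gbps):
    """Time to transmit one minimum-size frame, in nanoseconds."""
    return WIRE_BITS / rate_gbps  # bits / (Gbit/s) comes out in ns

for rate in (10, 40, 100):
    t = frame_time_ns(rate)
    print(f"{rate:3d}G: {t:5.2f} ns/frame, "
          f"{CLOCK_TICK_NS / t:.1f} frames per 20 ns tick")
```

At 10G a frame spans more than three clock ticks, so every frame gets a distinct timestamp; at 40G the 16.8 ns frame time dips below one tick, and at 100G roughly three frames share a single tick – exactly the breakdown described above.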

Another vital factor is test automation, particularly with the greater test demands imposed by higher speed Ethernet. Firstly you need test case automation to quickly configure and run a test without intervention from the lab engineer, who remains free to focus on other tasks essential for completing the test cycle.

However, in real test conditions, some manual testing is often needed to investigate anomalies. Without integration between the API and the user interface, valuable time will be wasted in error-prone manual configuration for these manual interventions. Some test platforms do support a drag-and-drop interface that allows tests to be automated without the overhead of a scripting language learning curve.

The next step beyond test case automation is test lab automation. This can be as simple as a scheduling application that manages and executes suites of test cases, perhaps with results management and the ability to do conditional testing based on the outcome of a test. Or it can go beyond managing test cases and suites to full configuration of test topologies, including automated cabling, DUT configuration, and power management.
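The conditional-testing idea mentioned above can be sketched in a few lines. This is a minimal illustration, not any particular test platform's API – all the names and test cases here are hypothetical. Each case is a callable returning pass/fail, and a case can be gated on the outcome of an earlier one.

```python
# Minimal sketch of suite scheduling with conditional execution:
# a case runs only if its prerequisite (if any) passed.
# All names are hypothetical, for illustration only.

def run_suite(cases):
    """cases: list of (name, func, depends_on) tuples.
    depends_on names a case that must have passed, or is None.
    Returns {name: True/False, or None if skipped}."""
    results = {}
    for name, func, depends_on in cases:
        if depends_on is not None and not results.get(depends_on, False):
            results[name] = None  # skipped: prerequisite failed or was skipped
            continue
        results[name] = func()
    return results

# Example: run latency/jitter tests only if line-rate traffic passed first.
suite = [
    ("line_rate",  lambda: True,  None),
    ("latency",    lambda: True,  "line_rate"),
    ("jitter_40g", lambda: False, "line_rate"),
    ("recovery",   lambda: True,  "jitter_40g"),  # skipped: jitter_40g failed
]
print(run_suite(suite))
```

A real lab-automation layer would add results management, topology configuration and DUT control on top of a scheduler like this, but the pass/fail gating above is the core of conditional testing.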

Pre-testing is often ascribed a secondary role, and seen as merely a tool to validate the real job of development. There are, however, few processes where increasing efficiency can have such a positive effect as in shrinking the development/deployment cycle. Smart pre-test strategies boost top-line sales by shortening time-to-revenue, and they reduce OPEX to increase bottom-line profits on the already increased top line. What is more, when you consistently release or deploy products faster and with better quality, you go beyond immediate profitability to improving customer confidence and boosting your brand credibility.

The same applies to the in-house network manager delivering services to company users: although they seldom thank you for faultless service, and are quick to criticise anything less, a reliable, high-performance network is the greatest asset you can provide for the entire organization.

To optimise this efficiency in the move to 40G networking you must look for a test platform designed specifically for the new standards, and one that supports code-free automation with 100% integration between the GUI and the API. Finally, it should ideally incorporate end-to-end automation – not just test case automation – for fullest benefit.