The landscape of network analysis

Network monitoring and analysis has never been more important. New opportunities and demands are driving innovation, making network monitoring and analysis a crucial part of any company’s IT strategy, according to Dan Joe Barry

The data provided by robust network monitoring and analysis not only supports business strategy, but can often provide the basis for sounder decisions in the strategy process. In short, network monitoring and analysis is no longer a “nice-to-have”, but a “have-to-have” in IT networks.

The initial driver for monitoring and analysis
Network monitoring and analysis was initially employed to provide insight into network behaviour (i.e. who is sending what to whom). This was necessary because Ethernet/IP protocols do not provide embedded network management information, as was the practice in telecommunication network protocols such as SONET/SDH. In addition, Ethernet/IP is typically deployed as a connection-less network in which all users share the available bandwidth and data finds its own way through the network. This is in contrast to telecommunication protocols, which typically reserve bandwidth in a connection-oriented manner: think of a telephone call, where the connection between the two callers is reserved for the duration of the call.

Network monitoring and analysis tools, such as network probes, are therefore necessary to understand where data is being routed in the network and how heavily individual connections are being used. Network probes are hardware devices that capture and analyse copies of the network data in real-time. They also make it possible to determine the sender and receiver of the data and the application used to send it (if it is a well-known service with a defined TCP/UDP port).
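
As a rough illustration of what a probe does with each captured frame, the sketch below parses raw Ethernet/IPv4 frames to recover sender, receiver and, for well-known ports, the application. It assumes a Linux host with root privileges and uses a small, illustrative port table; real probes perform this inspection in dedicated hardware at line rate.

```python
# Minimal sketch of per-frame inspection in a software probe.
# Assumptions: Linux (AF_PACKET raw sockets), root privileges, plain IPv4.
import socket
import struct

WELL_KNOWN = {25: "SMTP", 53: "DNS", 80: "HTTP", 443: "HTTPS"}  # illustrative subset

def classify(frame: bytes) -> str:
    eth_type = struct.unpack("!H", frame[12:14])[0]
    if eth_type != 0x0800:                       # not IPv4
        return "non-IP"
    ihl = (frame[14] & 0x0F) * 4                 # IP header length in bytes
    proto = frame[23]                            # 6 = TCP, 17 = UDP
    src = socket.inet_ntoa(frame[26:30])
    dst = socket.inet_ntoa(frame[30:34])
    if proto in (6, 17):
        sport, dport = struct.unpack("!HH", frame[14 + ihl:14 + ihl + 4])
        app = WELL_KNOWN.get(dport, WELL_KNOWN.get(sport, "unknown"))
        return f"{src} -> {dst} ({app})"
    return f"{src} -> {dst} (IP proto {proto})"

if __name__ == "__main__":
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
    while True:
        frame, _ = s.recvfrom(65535)
        print(classify(frame))
```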

New opportunities and new demands driving new solutions
It did not take long for innovative thinkers to identify opportunities to use the information provided by network monitoring and analysis for new applications. For example:
– Network testing, troubleshooting and forensics: If the network is not performing as expected or there is a fault, network monitoring and analysis probes can be used to determine where there is an issue.
– Enterprise network security: Network data can be monitored to detect anomalies in network behaviour and known malware.
– Cybersecurity: Networks can also be monitored to detect illegal activity by criminals, terrorists or foreign intelligence services.
– Policy enforcement: Services provided by networks can be managed using network policies based on real-time data collected by network monitoring and analysis probes.
– Financial trading networks: Financial institutions need to react quickly to market dynamics and therefore require the lowest possible latency to execute trades; that latency is measured by network monitoring and analysis probes, as sketched below.
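
For the trading example, the measurement itself is conceptually simple once the probe has timestamped the same packet at two capture points. The sketch below is a toy model (the packet identifiers and nanosecond timestamps are invented for illustration); real probes timestamp in hardware at line rate.

```python
# Toy model of latency measurement: match packets seen at two capture
# points and compare their timestamps. Values are invented for illustration.
from statistics import median

ingress = {"order-1": 1_000_000, "order-2": 1_000_050}   # packet id -> ns
egress  = {"order-1": 1_004_200, "order-2": 1_003_950}   # same packets, later

latencies_ns = [egress[pid] - ingress[pid] for pid in ingress if pid in egress]
print(f"median: {median(latencies_ns)} ns, worst: {max(latencies_ns)} ns")
```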

The examples above are but a taste of the various opportunities emerging based on the availability of real-time network data on crucial connections in the network.

At the same time, the demands on the OEM vendors addressing these opportunities are increasing. ABI Research recently forecast that the volume of global annual data traffic will exceed 60,000 petabytes in 2016, more than seven times the 8,000 petabytes expected in 2011. It also estimates that the fastest year-on-year growth will occur in 2012, at 58 percent. This means that the network monitoring and analysis systems being developed by OEM vendors need to handle the large increase in data volume expected over the coming years.
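
A quick back-of-the-envelope check shows what such a forecast implies as a sustained growth rate (the calculation is ours, not an ABI Research figure):

```python
# Implied compound annual growth rate of the cited traffic forecast.
volume_2011_pb = 8_000     # petabytes, expected 2011
volume_2016_pb = 60_000    # petabytes, forecast 2016
years = 5

cagr = (volume_2016_pb / volume_2011_pb) ** (1 / years) - 1
print(f"{volume_2016_pb / volume_2011_pb:.1f}x over {years} years "
      f"~ {cagr:.0%} compound annual growth")   # 7.5x ~ 50% per year
```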

Why is this important? Surely the whole idea of Ethernet/IP networks is that if data is not received, it is simply re-sent? This is true for communication, but not the case for network monitoring and analysis. In communication, you are only interested in the data that is addressed to you. In network monitoring and analysis, all data needs to be examined. It’s the difference between receiving a letter as an individual and working in the postal sorting service. How big should the letterbox be in each case?

Many communication services do not require real-time data (e.g. email), but there are two issues from a network monitoring and analysis point of view:
– More and more services are real-time or near real-time, for example Voice-over-IP, streaming video and hosted or cloud services.
– To perform a useful analysis, you cannot afford to lose any data, as the part you are missing could be the crucial piece of information that solves the puzzle.

So, not only do OEM vendors of network monitoring and analysis systems need to handle a large amount of data, but they cannot afford to lose any of this data in order to provide a useful analysis. In addition, they have to do all of this in real-time to properly support all types of services.

For this very reason, OEM vendors are considering new approaches to developing network monitoring and analysis systems so that they are not only able to meet current demands, but can keep up with the demands of the future.

Re-thinking network monitoring and analysis system design
The initial approach to developing network monitoring and analysis systems took one of two paths:
– Proprietary hardware and/or chip development to ensure high performance (still practised by many enterprise network security vendors today).
– System development based on standard “off-the-shelf” hardware, such as standard servers and standard Network Interface Cards (NICs).

Proprietary hardware and chip development (even if the chip is a programmable FPGA or NPU) is a costly, high-risk approach that requires constant investment and support to keep up with customer demands and growth in bandwidth. As an OEM vendor, one needs to be certain that the expected volumes can justify the investment.

The opposite approach of using “off-the-shelf” hardware is low-risk, avoids long development cycles and thus provides a fast time-to-market. The processing power and memory provided by modern servers and CPU chipsets are more than adequate for even the most demanding applications. Many OEM vendors have therefore adopted this approach to avoid the cost and risk of developing proprietary hardware, especially as many of the competitively distinctive features of their solutions lie in the analysis of data in application software rather than in hardware. In addition, it is much easier to program in standard CPU environments than in proprietary FPGA or NPU environments.

The Achilles heel of the standard “off-the-shelf” hardware approach is the standard NIC, i.e. the data input/output. As mentioned, standard NICs are built for communication, not for network monitoring and analysis. They are standard letterboxes for individual households rather than the docking bay needed to receive all the post that must be sorted centrally.

To successfully exploit the standard “off-the-shelf” approach, dedicated network monitoring and analysis network adapters are required that can keep up with data loads and ensure that no data is lost. These types of network adapter have been available for a number of years and are being used in all the applications listed earlier, helping OEM vendors to continuously increase performance and keep up with demands. The combination of standard servers and these specialised network adapters allows OEM vendors to quickly and easily increase hardware performance as new standard servers with more powerful CPU chipsets become available. The only demand on the network adapter is that it can provide full line-rate data capture and transfer to the application software without losing any data. At the time of writing, there are network adapters capable of handling 40 Gbps of Ethernet data as either 1×40 Gbps port or 4×10 Gbps ports.
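
To see what “full line-rate without loss” demands of an adapter, consider the worst case of minimum-size Ethernet frames. This is a standard back-of-the-envelope calculation, not a vendor specification:

```python
# Worst-case packet rate at 40 Gbps with minimum-size Ethernet frames.
# On the wire, each 64-byte frame also costs a 7-byte preamble, a 1-byte
# start-of-frame delimiter and a 12-byte inter-frame gap: 84 bytes total.
LINE_RATE_BPS = 40e9
BYTES_ON_WIRE = 64 + 7 + 1 + 12   # = 84

pps = LINE_RATE_BPS / (BYTES_ON_WIRE * 8)
print(f"{pps / 1e6:.1f} million packets per second")   # ~59.5 Mpps
```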

Intelligent network adapters provide this type of functionality, but also offer additional features to off-load compute-intensive data management tasks, such as flow identification, filtering and distribution, from the application. This frees up more CPU processing power for the application, allowing it to run faster and keep up with data traffic growth.
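
Flow distribution is a good example of such an off-load. The adapter hashes each packet’s 5-tuple so that every packet of a flow is steered to the same CPU core, keeping per-flow state local to that core. The sketch below is a simplified software model of what the adapter does in hardware (the hash choice and core count are illustrative):

```python
# Simplified software model of hardware flow distribution: hash the
# 5-tuple so all packets of a flow land on the same CPU core / queue.
import zlib

NUM_CORES = 8  # illustrative

def steer(src_ip: str, dst_ip: str, src_port: int, dst_port: int, proto: int) -> int:
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % NUM_CORES            # stable per-flow core choice

print(steer("10.0.0.1", "10.0.0.2", 40000, 80, 6))   # same flow -> same core
```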

The advent of virtualisation and cloud services
Thus, the technology is now in place to support high-performance network monitoring and analysis for a number of applications using standard “off-the-shelf” hardware. OEM vendors can now keep up with data growth and support the port speeds required without losing any data. The next challenge is supporting customers in their transition to virtual environments and cloud services.

Demonstrations have already shown how multiple physical network monitoring and analysis systems can be consolidated onto a single physical server as virtual machines. This provides new possibilities for upgrading legacy systems to meet new data growth and port speed demands in a fast and efficient way. It also allows multiple virtual systems to analyse the same data at the same time.

These solutions thus help OEM vendors of network monitoring and analysis systems provide the same level of performance and support in virtualised environments as more and more companies outsource their operations to cloud service providers.

Keeping up with change
The technology pieces are in place to help network monitoring and analysis vendors to continuously evolve and keep up with market dynamics, be they new applications, data growth and line speeds or the onset of virtualisation and cloud services. This allows OEM vendors to focus on these needs and adapt accordingly without the distraction of re-inventing the “hardware wheel”.