IPS performance is something you can't afford to get wrong. Unfortunately, performance is difficult to test, and the results are almost as hard to describe. As IPS moves further up the network stack, performance becomes highly data-dependent. This is different from what we're used to seeing in the world of switches and routers, where performance is easy enough to describe. Even for firewalls (at least firewalls without UTM features), performance is easy to measure because metrics such as connection rate, maximum simultaneous connection count, and throughput are commonly understood and universally accepted.
IPS devices are much harder to characterize. The greatest differentiator in performance is not the IPS itself, but how it is configured. For many signature-based IPS products, the performance of the product varies hugely based on the number of signatures and protocol decoders enabled for detection. For example, an IPS may have hundreds of signatures covering HTTP. If half of those signatures are disabled (perhaps because they are IIS signatures and Apache is being used), then the performance of the IPS on HTTP traffic can be quite different. Similarly, many IPS vendors classify signatures by severity. If only high-priority signatures are enabled, the IPS will pass traffic more quickly than if all priority types are enabled.
Your traffic may also cause variations in performance. For example, an IPS may be able to pass clean HTTP traffic at 2Gbps rates--unless the traffic is in Japanese, at which point the rate can drop to 1.75Gbps. Why? Asian languages use multi-byte characters, and the HTTP processors inside the IPS have to do much more work on multi-byte content. More commonly, an IPS will show dramatic performance differences based on the protocol used to pass the traffic. For example, moving files around a network with Windows file sharing might not slow down the IPS very much, because there aren't many IPS signatures for Windows file traffic. If you move the exact same files using a protocol that has more signatures and requires more work to decode and normalize, such as SMTP, you will see very different performance characteristics.
Additionally, IPSes behave differently depending on the mix of attack traffic and benign traffic.
In our testing, we found that attack traffic has a disproportionate impact on IPS performance compared to "clean" traffic. Because an attack is an exception that must be logged, generates an alert, and generally requires much more processing than non-attack traffic, even small amounts of attack traffic can sharply reduce an IPS's ability to pass traffic.
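The disproportionate effect of attack traffic can be sketched with a simple blended-rate model: if attack-path packets are processed at a much lower rate than clean-path packets, the aggregate throughput follows a weighted harmonic mean, so a small attack fraction drags the total down sharply. The 2Gbps clean rate and 0.2Gbps attack-path rate below are illustrative assumptions, not measurements of any particular product.

```python
def blended_throughput(clean_gbps, attack_gbps, attack_fraction):
    """Aggregate forwarding rate when a fraction of the traffic takes the
    expensive attack path (logging, alerting, extra analysis).  Rates
    combine as a weighted harmonic mean because the slow path consumes a
    disproportionate share of inspection time per byte."""
    return 1.0 / ((1.0 - attack_fraction) / clean_gbps
                  + attack_fraction / attack_gbps)

# Illustrative (assumed) numbers: 2 Gbps clean path, 0.2 Gbps attack path.
for frac in (0.0, 0.01, 0.05, 0.10):
    rate = blended_throughput(2.0, 0.2, frac)
    print(f"{frac:4.0%} attack traffic -> {rate:.2f} Gbps")
```

Under these assumed numbers, even 5% attack traffic costs roughly a third of the clean-traffic throughput, which matches the qualitative behavior we observed in testing.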
If you intend to put an IPS out near the perimeter of your network, you will see more attacks--and thus greater variation in system performance. The worst performance case would be to put an IPS outside the network firewall, fully exposed to the Internet. This has the advantage of providing the curious security staffer hours of amusement and gigabytes of interesting data. It also has the downside of slower and generally unpredictable performance because of the variability in type and volume of Internet-sourced attacks.
As an IPS moves closer to the core of the network, the ratio of attack traffic to normal traffic changes, so observed performance becomes much more consistent. While an IPS protecting internal systems does have to handle a very high transaction rate, much higher than one at the network perimeter, it will also see a smaller proportion of attack traffic.
A critical step before adding any IPS to your network is validating the vendor's performance claims by testing in your own network, using live traffic and your selected signature set. In published benchmarks, traffic may have been hand-selected to be "low impact" on the IPS, with only a minimal set of signatures and decoders enabled. This may make good marketing literature, but it is a dangerous way to specify the performance of IPS devices.
To determine the real performance in your network, make sure that the protocols you use and the signatures you care about are all enabled. This may require some amount of tuning on your part, but it's better to discover performance limitations before committing to a full IPS deployment.
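A simple way to make this comparison concrete is to measure goodput through the inline IPS twice: once with a minimal signature set and once with your full production policy. The sketch below is a minimal harness; the URL is a hypothetical test server placed behind the IPS, not a real endpoint.

```python
import time
import urllib.request

def mbps(num_bytes, elapsed_s):
    """Convert a byte count and wall-clock time to megabits per second."""
    return num_bytes * 8 / elapsed_s / 1e6

def measure(url, repeats=5):
    """Fetch `url` through the inline IPS several times and return the
    average goodput in Mbps.  Run once with a minimal signature set,
    then again with the full production policy, and compare."""
    total = 0
    start = time.monotonic()
    for _ in range(repeats):
        with urllib.request.urlopen(url) as resp:
            total += len(resp.read())
    return mbps(total, time.monotonic() - start)

# Usage (hypothetical file server behind the IPS):
#   print(f"{measure('http://testserver.example/100MB.bin'):.1f} Mbps")
```

Repeating the measurement with SMTP or Windows file-sharing transfers of the same payload would expose the per-protocol differences described above.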
A second aspect of IPS performance lies within the management system. If your goals for implementing an IPS call for forensics functionality or alerting and reporting, testing the performance of the management system should be part of your evaluation process. Our IPS testing has shown that many IPS management systems slow to unacceptable performance levels when more than a small number of events arrive in a short period of time, or when a significant number of events have accumulated in the management system database. With IPS devices being pushed as part of regulatory compliance, where years of record keeping are generally required, performance of the management system with millions or tens of millions of events requires some validation.
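One way to exercise the management side is to feed it synthetic events at a controlled rate and watch console and query responsiveness as the database grows. The sketch below emits syslog-style messages over UDP; the message format, host, and port are assumptions to adapt to whatever your management system actually ingests.

```python
import socket
import time

def flood_syslog(host, port, events_per_sec, duration_s):
    """Send synthetic alert events at a fixed rate so you can observe how
    the management console behaves under event load and as its database
    fills.  Returns the number of events sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / events_per_sec
    sent = 0
    deadline = time.monotonic() + duration_s
    try:
        while time.monotonic() < deadline:
            # Placeholder message format -- an assumption, not any
            # vendor's actual event schema.
            msg = f"<134>ips-test: synthetic alert seq={sent}"
            sock.sendto(msg.encode(), (host, port))
            sent += 1
            time.sleep(interval)
    finally:
        sock.close()
    return sent

# e.g. flood_syslog("collector.example", 514, events_per_sec=200, duration_s=3600)
```

Ramping `events_per_sec` up while timing routine console queries will show where the management system starts to fall behind; repeating the test after millions of events have accumulated shows the database-size effect separately.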
Joel Snyder is a senior partner at Opus One, an IT consulting firm specializing in security and messaging.