Cisconinja’s Blog

Policing Tests

Posted by Andy on January 31, 2009

This post will take a look at various tests related to policing. I will be using the same UDP flood script that I used for WFQ and CBWFQ/LLQ. R1 will be configured to police traffic, and one or more UDP packet streams will be generated, depending on what is being tested. R2 will be configured to measure incoming traffic after it has been policed. The network topology and initial configurations are shown below:

[Image: policing-topology]

R1:
interface FastEthernet0/0
 ip address 10.1.1.1 255.255.255.0
 load-interval 30
 speed 100
 full-duplex
 no keepalive
 no mop enabled
!
interface Serial0/0
 ip address 10.1.12.1 255.255.255.0
 load-interval 30
 no keepalive
!
no cdp run

R2:
ip access-list extended DHCP
 permit udp any any eq bootps
ip access-list extended DNS
 permit udp any any eq domain
ip access-list extended TFTP
 permit udp any any eq tftp
!
class-map match-all TFTP
 match access-group name TFTP
class-map match-all DHCP
 match access-group name DHCP
class-map match-all DNS
 match access-group name DNS
!
policy-map Traffic-Meter
 class TFTP
 class DHCP
 class DNS
!
interface Serial0/0
 ip address 10.1.12.2 255.255.255.0
 load-interval 30
 no keepalive
 service-policy input Traffic-Meter
!
no cdp run

For the first few tests, we will be looking at how the size of the token bucket(s) can affect how the policer behaves.  Consider the following configuration:

R1:
policy-map policer
 class class-default
  police 600000 1499
   conform-action transmit
   exceed-action drop
!
interface Serial0/0
 service-policy output policer

Using this configuration, R1 polices all traffic outbound on S0/0 at a rate of 600 kbps. The policer uses a single token bucket of 1499 bytes, which takes approximately 20 ms to fill (1499 * 8 / 600,000). Now we will use the PC to send UDP packets every 125 ms with a layer-3 size of 1496 bytes:

flood.pl --port=53 --size=1496 --delay=125 10.1.12.2 

This will result in 1500-byte frames on R1’s S0/0. The total bandwidth used by this flow is approximately 96 kbps (1500 bytes * 8 bits/byte * 8 packets/second), which is far below the policing rate. The results on R1 are shown below:

[Image: 1-r1-pmap]

All packets are dropped despite the fact that the offered rate is well below the policing rate. Because the token bucket has a maximum size of 1499 bytes, it can never hold enough tokens to allow a 1500-byte packet. The output also confirms that the policer sees the packets as 1500 bytes (5,434,500 / 3,623).
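To make the token-bucket arithmetic easier to follow, here is a minimal single-bucket policer simulation in Python. It is only a sketch of the generic algorithm (the function name, parameters, and idealized timing are mine, not Cisco's implementation), but it reproduces the result above:

# Minimal single-rate, single-bucket policer sketch (not IOS internals).
# Tokens are counted in bytes; the bucket refills continuously at the CIR.
def police(cir_bps, bc_bytes, packets):
    """packets: list of (arrival_time_seconds, size_bytes).
    Returns (conformed, exceeded) packet counts."""
    tokens = bc_bytes              # the bucket starts full
    last = 0.0
    conformed = exceeded = 0
    for arrival, size in packets:
        # add tokens for the time elapsed since the last packet, capped at Bc
        tokens = min(bc_bytes, tokens + (arrival - last) * cir_bps / 8)
        last = arrival
        if tokens >= size:
            tokens -= size         # conform: consume tokens and transmit
            conformed += 1
        else:
            exceeded += 1          # exceed: drop, tokens unchanged
    return conformed, exceeded

# 1500-byte packets every 125 ms, policed at 600 kbps with Bc = 1499 bytes:
flow = [(i * 0.125, 1500) for i in range(1000)]
print(police(600000, 1499, flow))  # -> (0, 1000): Bc can never hold 1500 tokens

Now let’s change Bc to 1500 bytes: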

R1:
policy-map policer
 class class-default
  police 600000 1500

We will use the same parameters for the UDP traffic:

flood.pl --port=53 --size=1496 --delay=125 10.1.12.2

The results on R1 are shown below:

[Image: 2-r1-pmap]

This time not a single packet has been policed. The size of the token bucket is 1500 bytes, exactly the same as the packet size, and it takes 20 ms to fill. With a packet arriving every 125 ms, the bucket is always full when a packet arrives. R2 also confirms that all 96 kbps of traffic is being sent across the serial link:

[Image: 2-r2-pmap]
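Re-running the sketch above with Bc raised to 1500 bytes reproduces this result as well: the bucket refills completely in 20 ms, long before the next packet arrives.

print(police(600000, 1500, flow))  # -> (1000, 0): the bucket is always full on arrival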

 

Next let’s see what can happen if the token bucket is configured to a small size (but larger than the maximum packet size to prevent all packets from being policed).  We will use the following configuration:

R1:
policy-map policer
 class class-default
  police cir 95000 bc 1500
   conform-action transmit
   exceed-action drop

We will also use the same parameters for generating traffic:

flood.pl --port=53 --size=1496 --delay=125 10.1.12.2

Take a look at the policing statistics on R1:

[Image: 3-r1-pmap]

Even though the bucket is as big as the packet size and we are policing at 95 kbps, only approximately 49 kbps is being allowed. The reason is that with the values chosen for CIR and Bc, it takes approximately 126 ms to completely fill the token bucket to 1500 bytes (1500 * 8 / 95,000), while a 1500-byte packet arrives approximately every 125 ms. The first packet is allowed and the token bucket is decremented to 0 bytes. The second packet arrives 125 ms later, by which time roughly 1484 tokens have been placed into the bucket (0.125 * 95,000 / 8). Since the bucket does not have enough tokens, the packet is policed. The token bucket reaches its maximum size about 1 ms later, but no packets arrive to use the tokens for another 124 ms. When the third packet arrives, the token bucket is decremented to 0 bytes and the cycle starts over, with every other packet being policed. The fact that slightly more packets conformed than exceeded is probably due to slight variation in interpacket delay; if two consecutive packets arrive 126 ms or more apart, they will both conform.

This example shows the worst possible scenario (other than Bc being smaller than the packet size): the token bucket reaches its maximum size just after the first interval and spends nearly the entire second interval wasting tokens. With a small Bc, the actual sending rate can be as low as half of the configured CIR.
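The same sketch reproduces the roughly half-rate behavior. Note that it assumes perfectly even 125 ms spacing, so it shows exactly every other packet being policed:

conf, exc = police(95000, 1500, flow)
print(conf, exc)                   # -> 500 500: every other packet is dropped
print(conf * 1500 * 8 / 125)       # -> 48000.0: about half of the 95 kbps CIR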

 

Next we will keep the same configuration but add a second token bucket to the policer.  We will use a size of 3000 bytes for the second bucket (Be).  The new configuration is:

R1:
policy-map policer
 class class-default
  police cir 95000 bc 1500 be 3000
   conform-action transmit
   exceed-action transmit
   violate-action drop

The policing statistics on R1 and traffic metering on R2 are shown below:

[Image: 4-r1-pmap]

[Image: 4-r2-pmap]

This time, the extra tokens spill into the second bucket, and no traffic is dropped.  It probably would have made more sense just to increase Bc, but this shows that excess burst can be useful even in the middle of a sustained flow if the Bc bucket is configured to a small size. 
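A second bucket can be added to the sketch to model this behavior. In the single-rate, two-bucket scheme, tokens that would overflow Bc spill into Be; this is my idealized model of the single-rate policer, not a reference implementation. Note that with mathematically exact 125 ms spacing, the 96 kbps offered rate slightly exceeds the 95 kbps CIR, so Be slowly drains and occasional violations would appear on a long enough run; the live test's interpacket gaps are evidently just long enough to avoid this.

# Single-rate, two-bucket policer sketch (illustrative, not IOS internals).
# Tokens that would overflow the Bc bucket spill into the Be bucket.
def police2(cir_bps, bc, be, packets):
    tokens_c, tokens_e = bc, be    # both buckets start full
    last = 0.0
    conformed = exceeded = violated = 0
    for arrival, size in packets:
        new_tokens = (arrival - last) * cir_bps / 8
        last = arrival
        spill = max(0, tokens_c + new_tokens - bc)   # overflow beyond Bc
        tokens_c = min(bc, tokens_c + new_tokens)
        tokens_e = min(be, tokens_e + spill)
        if tokens_c >= size:                         # conform-action transmit
            tokens_c -= size
            conformed += 1
        elif tokens_e >= size:                       # exceed-action transmit
            tokens_e -= size
            exceeded += 1
        else:                                        # violate-action drop
            violated += 1
    return conformed, exceeded, violated

# Reusing flow from the first sketch: cir 95000, bc 1500, be 3000.
print(police2(95000, 1500, 3000, flow[:96]))  # -> (48, 48, 0): nothing dropped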

 

Next let’s look at what happens with more typical values for Bc and Be with an offered rate that exceeds the CIR.  We will configure Bc and Be each as 500 ms of CIR.  The configuration is:

R1:
policy-map policer
 class class-default
  police cir 96000 bc 6000 be 6000
   conform-action transmit
   exceed-action transmit
   violate-action drop

Instead of 8 packets/second, we will generate 16 packets/second for an offered rate of 192 kbps:

flood.pl --port=53 --size=1496 --delay=62 10.1.12.2

The policing statistics on R1 are shown below.  The first show command output was taken a little under 1 second after beginning the traffic flow, and the second several minutes later:

[Image: 5-r1-pmap1]

[Image: 5-r1-pmap21]

This shows the more typical behavior of a single-rate, two-bucket policer with a fairly large Bc. Bc and Be both start full at 6000 tokens each. The offered rate is considerably higher than the CIR, so the Bc bucket is soon used up and the router begins using tokens out of Be to transmit packets. After a little under 1 second, we can see that Be has been used to transmit 4 packets, which completely empties it (4 * 1500 = 6000). Several minutes later, we can see that the counters for conformed and violated packets have increased considerably, but exceeded remains at 4. This is because with single-rate policing, Be is only refilled when Bc is full. With Bc configured to a large enough value that it does not spend any time in a full state, Be will never have a chance to accumulate tokens unless the offered rate falls below the CIR.
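The two-bucket sketch reproduces this as well (idealized timing again):

# cir 96000, bc 6000, be 6000; 1500-byte packets every 62.5 ms (~192 kbps):
flow16 = [(i / 16, 1500) for i in range(16000)]   # about 1000 seconds of traffic
print(police2(96000, 6000, 6000, flow16))
# exceeded stops at 4: once Be empties, it is only refilled when Bc overflows,
# which never happens while the offered rate stays above the CIR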

 

Next we will look at how a policer functions on two flows with very different packet sizes. We will send DNS traffic with 1250-byte packets at 8 packets/second and DHCP traffic with 156-byte packets at 64 packets/second so that each flow sends approximately 80 kbps of traffic. We will police all traffic to 64 kbps, with Bc set to 4000 bytes. The config is:

R1:
policy-map policer
 class class-default
  police cir 64000 bc 4000
   conform-action transmit
   exceed-action drop

Start the traffic flows on PC:

flood.pl --port=53 --size=1246 --delay=125 10.1.12.2
flood.pl --port=67 --size=152 --delay=15 10.1.12.2

The policing results on R1 and traffic measurements on R2 are:

[Image: 6-r1-pmap1]

[Image: 6-r2-pmap1]

DNS has not been allowed to send a single packet! The two flows combined generate 160 kbps of traffic, well above the 64 kbps CIR, so Bc quickly empties. As it refills, DHCP is able to send a packet any time there are 156 tokens in the bucket, while DNS must wait for 1250. The best situation DNS could hope for is that a DHCP packet arrives while there are 155 tokens in the bucket and is policed. In that case another DHCP packet will arrive 1/64 of a second later, 125 tokens will be added to the bucket ((64,000 / 8) / 64) bringing the total to 280, the DHCP packet will be allowed, and the bucket will be decremented to 124 (280 – 156). Therefore, as long as the DHCP flow is sustained, the token bucket will never contain more than 280 tokens and a DNS packet will never be allowed.
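The starvation effect is easy to reproduce with the single-bucket sketch by interleaving the two packet streams. The timing here is idealized, and the simulated bucket starts full, so a couple of DNS packets slip through at the very beginning before the bucket drains:

# 156-byte DHCP packets at 64 pps interleaved with 1250-byte DNS packets at 8 pps,
# policed at cir 64000 with bc 4000; count conforming packets per flow.
dhcp = [(i / 64, 156, 'DHCP') for i in range(6400)]
dns = [(i / 8, 1250, 'DNS') for i in range(800)]
tokens, last = 4000.0, 0.0
sent = {'DHCP': 0, 'DNS': 0}
for arrival, size, name in sorted(dhcp + dns):
    tokens = min(4000.0, tokens + (arrival - last) * 64000 / 8)
    last = arrival
    if tokens >= size:
        tokens -= size
        sent[name] += 1
print(sent)   # DHCP conforms steadily; DNS gets almost nothing once the bucket drains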

 

Let’s say we’ve been given the requirement to limit all traffic being sent out an interface to 96 kbps.  DNS should be allowed to send up to 80 kbps, DHCP 24 kbps, and TFTP 24 kbps, as long as the overall CIR of 96 kbps has not been exceeded.  We’ve also been given the requirement to do this using only policing.  First let’s try this using class-based policing.  The configuration is:

R1:
ip access-list extended DHCP
 permit udp any any eq bootps
ip access-list extended DNS
 permit udp any any eq domain
ip access-list extended TFTP
 permit udp any any eq tftp
!
class-map match-all TFTP
 match access-group name TFTP
class-map match-all DHCP
 match access-group name DHCP
class-map match-all DNS
 match access-group name DNS
!
policy-map child-policer
 class DNS
  police cir 80000 bc 5000
   conform-action transmit
   exceed-action drop
 class DHCP
  police cir 24000 bc 1500
   conform-action transmit
   exceed-action drop
 class TFTP
  police cir 24000 bc 1500
   conform-action transmit
   exceed-action drop
policy-map policer
 class class-default
  police cir 96000 bc 6000
   conform-action transmit
   exceed-action drop
 service-policy child-policer

Now we will start the three different traffic streams. DNS and DHCP will each send 1500-byte packets with an interpacket delay of 125 ms for an offered rate of approximately 96,000 bps each (1500 * 8 * 8). TFTP will send 500-byte packets with an interpacket delay of 1/64 seconds for an offered rate of approximately 256,000 bps (500 * 64 * 8):

flood.pl --port=53 --size=1496 --delay=125 10.1.12.2
flood.pl --port=67 --size=1496 --delay=125 10.1.12.2
flood.pl --port=69 --size=496 --delay=15 10.1.12.2

First, look at the input and output rate on each interface:

[Image: 7-r1-f01]

[Image: 7-r1-s01]

[Image: 7-r2-s0]

R1 shows an input rate of 434,000 bps, which is close to the expected value of 454,400 bps (510 * 64 * 8 + 1510 * 8 * 8 + 1510 * 8 * 8).  However, after policing takes place only 24,000 bps is being sent out of S0/0.  The policing statistics shown on R1 are:

[Image: 7-r1-pmap2]

First look at the class-default statistics on the parent policy map. Policing on the parent policy takes place first, and the results are offered to the child policy. We can see that 432,000 bps of traffic has been offered to the parent policy, with 96,000 conforming and 336,000 exceeding. Next look at the statistics on the child policy map. The measurements of the 30-second offered rates on the child policy apparently take place before policing on the parent, since the offered rates match the actual sending rate for each type of traffic. The policing statistics, however, are based on the offered traffic after it has been policed by the parent and give some insight into what is causing the problem.

We can see that not a single packet has conformed to or exceeded the child policer on the DNS and DHCP classes – in other words, every DNS and DHCP packet was policed by the parent policy first. TFTP, on the other hand, shows 24,000 bps of conforming traffic and 72,000 bps of exceeding traffic. The combined conforming and exceeding rates on the TFTP class match the CIR on the parent policer, and the conforming and exceeding packet counters on TFTP exactly match the conforming packet counter on the parent policy – so we know that the parent policy is admitting TFTP packets only. TFTP packets are allowed by the parent policy at a rate of 96,000 bps only to have most of them dropped by the child policy, and the bandwidth that the other classes could be using goes to waste. The problem here is the order in which the policing occurs and the fact that, as shown earlier, a policer will give preference to flows with smaller packet sizes when the CIR is exceeded for a sustained period.
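The ordering problem can be modeled by chaining two instances of the single-bucket sketch: the parent polices the aggregate first, and only the surviving packets would reach the per-class child policers. This is an idealized model of the behavior the test shows, not of IOS internals:

# Filter a packet list through a policer, returning the conforming packets.
def police_filter(cir_bps, bc_bytes, packets):
    tokens, last, out = bc_bytes, 0.0, []
    for pkt in packets:
        arrival, size = pkt[0], pkt[1]
        tokens = min(bc_bytes, tokens + (arrival - last) * cir_bps / 8)
        last = arrival
        if tokens >= size:
            tokens -= size
            out.append(pkt)
    return out

dns = [(i / 8, 1500, 'DNS') for i in range(800)]
dhcp = [(i / 8, 1500, 'DHCP') for i in range(800)]
tftp = [(i / 64, 500, 'TFTP') for i in range(6400)]
admitted = police_filter(96000, 6000, sorted(dns + dhcp + tftp))
print({n: sum(1 for p in admitted if p[2] == n) for n in ('DNS', 'DHCP', 'TFTP')})
# -> nearly all survivors are TFTP; the child policers then drop most of them,
#    wasting bandwidth that the other classes could have used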

 

Using Committed Access Rate (CAR) can partially overcome this problem.  CAR does not use MQC and does not allow named access lists, so we will have to create new ones.  We will use the same CIR and Bc sizes for each policer.  The configuration for this is:

R1:
access-list 100 permit udp any any eq domain
access-list 101 permit udp any any eq bootps
access-list 102 permit udp any any eq tftp
!
interface Serial0/0
 no service-policy output policer
 rate-limit output access-group 100 80000 5000 5000 conform-action continue exceed-action drop
 rate-limit output access-group 101 24000 2000 2000 conform-action continue exceed-action drop
 rate-limit output access-group 102 24000 2000 2000 conform-action continue exceed-action drop
 rate-limit output 96000 6000 6000 conform-action transmit exceed-action drop

The continue action tells CAR to keep evaluating rate-limit statements until it finds another match, rather than acting on the first match the way an ACL or MQC class would. Start the traffic flows on PC again using the same parameters:

flood.pl --port=53 --size=1496 --delay=125 10.1.12.2
flood.pl --port=67 --size=1496 --delay=125 10.1.12.2
flood.pl --port=69 --size=496 --delay=15 10.1.12.2

The CAR statistics on R1 are:

[Image: 8-r1-car]

This time, R1 polices subsets of traffic before policing the combined traffic. We can see that 79,000 bps of DNS, 23,000 bps of DHCP, and 24,000 bps of TFTP have conformed, which approximately matches the CIR configured for each type of traffic. The results of the subset policing are then offered to the ‘all traffic policer’. The output shows that 126,000 bps of traffic has been offered to the ‘all traffic policer’, which matches the combined CIRs of the individual subset policers. Of this, 95,000 has conformed and been sent, while 31,000 has exceeded and been dropped. R2 verifies that the entire CIR is now being used and shows how much of each traffic type is being received:

[Image: 8-r2-s0]

[Image: 8-r2-pmap]

The problem of inefficient link usage has been solved; however, the problem of flows with smaller packet sizes receiving preference from the policer still remains. All 24,000 bps of TFTP traffic that the subset policer offered to the ‘all traffic policer’ has been sent, due to its much smaller packet sizes, just as the earlier tests showed. A better solution (if we removed the requirement that only policing is allowed) would probably be to shape traffic outbound on S0/0 and use the queueing strategy on the shaping queues to control which packets are sent, rather than relying on the random nature of packet size. Policing could still be used either inbound or outbound if the rate of certain types of traffic should be limited before being placed into the shaping queue system.
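For comparison, CAR's evaluation order can be sketched the same way: each packet is tested against its per-class policer first, and a conforming packet 'continues' to the aggregate policer, which makes the final transmit-or-drop decision. Again this is only an illustration, using the CIR and bucket sizes from the rate-limit commands above:

# CAR-style cascade sketch: per-class policers whose conform action is 'continue',
# followed by an aggregate policer that transmits or drops (illustrative only).
class Bucket:
    def __init__(self, cir_bps, bc_bytes):
        self.cir, self.bc = cir_bps, bc_bytes
        self.tokens, self.last = float(bc_bytes), 0.0
    def conforms(self, arrival, size):
        self.tokens = min(self.bc, self.tokens + (arrival - self.last) * self.cir / 8)
        self.last = arrival
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

per_class = {'DNS': Bucket(80000, 5000), 'DHCP': Bucket(24000, 2000),
             'TFTP': Bucket(24000, 2000)}
aggregate = Bucket(96000, 6000)
dns = [(i / 8, 1500, 'DNS') for i in range(800)]
dhcp = [(i / 8, 1500, 'DHCP') for i in range(800)]
tftp = [(i / 64, 500, 'TFTP') for i in range(6400)]
sent = {'DNS': 0, 'DHCP': 0, 'TFTP': 0}
for arrival, size, name in sorted(dns + dhcp + tftp):
    # exceed-action drop at the class level; conform-action continue falls
    # through to the aggregate rate-limit statement
    if per_class[name].conforms(arrival, size) and aggregate.conforms(arrival, size):
        sent[name] += 1
print(sent)   # each class is pre-limited, but TFTP's small packets still fare
              # best at the aggregate stage, as the R2 measurements show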
