Cisconinja’s Blog

Class-Based Weighted Fair Queueing and Low Latency Queueing Tests

Posted by Andy on January 22, 2009

This post will be about testing class-based weighted fair queueing (CBWFQ).  The same UDP flood script and topology that were used for testing WFQ will be used here.  The topology and initial configurations of each router are shown below:

cbwfq-topology1

R1:
interface FastEthernet0/0
 ip address 10.1.1.1 255.255.255.0
 load-interval 30
 speed 100
 full-duplex
 no keepalive
 no mop enabled
!
interface Serial1/0
 ip address 10.1.12.1 255.255.255.0
 load-interval 30
 no keepalive
!
no cdp run

R2:
interface Serial1/0
 ip address 10.1.12.2 255.255.255.0
 load-interval 30
 no keepalive
!
no cdp run

R1 and R2 are dynamips routers, and PC is a loopback interface connected to R1 in dynamips.  PC will generate traffic destined for R2’s S1/0 interface, which will allow queueing to be tested outbound on R1’s S1/0 interface.  R2 will drop the traffic because it does not have a route back to PC (this is intentional so that return traffic does not unnecessarily consume CPU).

First we will do a simple test to verify the operation of CBWFQ.  We will have 3 separate UDP traffic streams sending traffic from PC to R2 on ports 53 (DNS), 67 (DHCP), and 69 (TFTP).  Each traffic flow will send 1500-byte packets every 125 ms.  The rate sent by each flow will be approximately 96 kbps, resulting in approximately 288 kbps of offered traffic.  We will shape traffic outbound on R1 S1/0 to a rate of 96 kbps to simulate a clock rate of 96 kbps.  CBWFQ will be applied to the shaping queues to examine how its scheduler allocates bandwidth to each flow.  Since class-based shaping supports CBWFQ on shaping queues, we will use that for our shaping policy.  For the first test, we will create a policy-map on R1 with 3 classes (one to match each type of traffic) and assign the entire 96k of interface bandwidth to the classes.  We will assign 48k to DNS, 32k to DHCP, and 16k to TFTP.  This will require that the max-reserved-bandwidth be increased to 100.  A policy-map will also be created on R2 and applied inbound on S1/0 to measure the amount of traffic of each type that makes it to R2.  The configuration for this is:
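As a quick sanity check of that per-flow rate: 1500 bytes * 8 bits/byte / 0.125 sec = 96,000 bps per stream (slightly more on the wire, since HDLC adds 4 bytes per packet).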

R1:
ip access-list extended DHCP
 permit udp any any eq bootps
ip access-list extended DNS
 permit udp any any eq domain
ip access-list extended TFTP
 permit udp any any eq tftp
!
class-map match-all TFTP
 match access-group name TFTP
class-map match-all DHCP
 match access-group name DHCP
class-map match-all DNS
 match access-group name DNS
!
policy-map CBWFQ
 class DNS
  bandwidth 48
 class DHCP
  bandwidth 32
 class TFTP
  bandwidth 16
policy-map Shaper
 class class-default
  shape average 96000
  service-policy CBWFQ
!
interface Serial1/0
 bandwidth 96
 max-reserved-bandwidth 100
 service-policy output Shaper

R2:
ip access-list extended DHCP
 permit udp any any eq bootps
ip access-list extended DNS
 permit udp any any eq domain
ip access-list extended TFTP
 permit udp any any eq tftp
!
class-map match-all TFTP
 match access-group name TFTP
class-map match-all DHCP
 match access-group name DHCP
class-map match-all DNS
 match access-group name DNS
!
policy-map Traffic-Meter
 class TFTP
 class DHCP
 class DNS
!
interface Serial1/0
 service-policy input Traffic-Meter

Now we’re ready to start the 3 traffic streams:

flood.pl --port=53 --size=1500 --delay=125 10.1.12.2
flood.pl --port=67 --size=1500 --delay=125 10.1.12.2
flood.pl --port=69 --size=1500 --delay=125 10.1.12.2

A show policy-map interface on R1 verifies that approximately 96k of each class of traffic is being received from PC:

cbwfq-1-r1pmap

 

On R2 we can see the actual amount of each type of traffic that is being sent across the link:

cbwfq-1-r2pmap

The 30 second offered rates almost exactly match the bandwidth that we allocated to each class in the CBWFQ policy.  The packet counters also match our policy: DHCP was given twice as much bandwidth as TFTP and has sent exactly twice as many packets, and DNS was given 3 times as much bandwidth as TFTP and is 1 packet away from sending 3 times as many packets (most likely the next packet sent will be DNS).

Next, let’s allocate bandwidth in the same proportions, but only allocate a small amount of our total bandwidth.  In the last example we gave DHCP twice as much as TFTP and DNS three times as much as TFTP, so we will keep that ratio by giving TFTP 1% of the total bandwidth, DHCP 2%, and DNS 3%.  The bandwidth statements must all be removed from the classes first since only consistent units are allowed.  The configuration is:

R1:
policy-map CBWFQ
 class DNS
  bandwidth percent 3
 class DHCP
  bandwidth percent 2
 class TFTP
  bandwidth percent 1

Now look at the traffic coming in on R2:

cbwfq-2-r2pmap

The ratio is still exactly 1:2:3, even though 94% of the bandwidth was not allocated to any class.  The reason for this has to do with how the CBWFQ scheduling algorithm works.  CBWFQ assigns a sequence number to each packet just like WFQ.  CBWFQ is essentially a combination of dynamic conversations and user defined conversations (that’s what Internetwork Expert calls them in this article on CBWFQ; I think these names are somewhat misleading, which I’ll get to later, but for now we’ll stick with them).  The weights of dynamic conversations are calculated the same as for WFQ conversations, 32384 / (IPP + 1).  The weights of user defined conversations are calculated as:

Weight = Constant * InterfaceBandwidth / ClassBandwidth

 if a flat bandwidth value is used, or:

Weight = Constant * 100 / BandwidthPercent

if bandwidth percent or bandwidth percent remaining is used.  The constant used in the formula depends on the number of dynamic flows in the WFQ system.  The following table shows the constant that is used in the weight calculation for each number of WFQ flows:

WFQ flows    Constant
16           64
32           64
64           57
128          30
256          16
512          8
1024         4
2048         2
4096         1
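To make these formulas easier to experiment with, here is a minimal Python sketch of the weight calculation (the function names and dictionary are mine, not IOS internals; the constants come from the table above):

# Sketch of the CBWFQ weight formulas described above (illustrative only)
CONSTANT_BY_WFQ_FLOWS = {
    16: 64, 32: 64, 64: 57, 128: 30,
    256: 16, 512: 8, 1024: 4, 2048: 2, 4096: 1,
}

def dynamic_weight(ipp):
    """Weight of a dynamic (WFQ-style) conversation: 32384 / (IPP + 1)."""
    return 32384 / (ipp + 1)

def user_defined_weight(wfq_flows, bandwidth_percent=None,
                        class_bandwidth=None, interface_bandwidth=None):
    """Weight of a user defined conversation (a class with a bandwidth guarantee)."""
    constant = CONSTANT_BY_WFQ_FLOWS[wfq_flows]
    if bandwidth_percent is not None:
        return constant * 100 / bandwidth_percent
    return constant * interface_bandwidth / class_bandwidth

# The 3%/2%/1% test (whose queueing output is examined below) uses 32 dynamic queues:
for name, pct in (("DNS", 3), ("DHCP", 2), ("TFTP", 1)):
    print(name, round(user_defined_weight(32, bandwidth_percent=pct), 2))
# DNS 2133.33, DHCP 3200.0, TFTP 6400.0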

Like WFQ, CBWFQ also assigns a conversation number to each conversation.  Dynamic conversations work just like conversations in WFQ.  Based on a hash of header fields, they are classified into a conversation number between 0 and N-1, where N is the number of dynamic queues in the WFQ system.  Conversations N through N+7 are reserved for link queues.  Conversation N+8 is the priority queue (if LLQ is added to CBWFQ).  Conversations N+9 and above are used for user defined conversations.  Going back to the last example with 3% given to DNS, 2% to DHCP, and 1% to TFTP, we can see the conversation number and weight values assigned to each conversation on R1:
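A small sketch of that conversation-number layout, with N being the number of dynamic queues (illustrative only, not IOS internals):

def conversation_layout(n):
    """Conversation-number ranges for a WFQ system with N dynamic queues."""
    return {
        "dynamic":      (0, n - 1),          # hashed flows
        "link queues":  (n, n + 7),          # reserved
        "priority":     n + 8,               # the LLQ, if configured
        "user defined": str(n + 9) + " and up",
    }

print(conversation_layout(32))   # user defined conversations start at 41
print(conversation_layout(16))   # user defined start at 25, the LLQ is 24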

cbwfq-2-r1queue1

We can see that the WFQ system on the shaping queues is using 32 dynamic queues.  The user defined conversations for our flows are 41, 42, and 43, which is consistent with the formula for conversation numbers.  Using the weight formula for classes configured with bandwidth percent, we get:

DNS = 64 * 100 / 3 = 2133.33

DHCP = 64 * 100 / 2 = 3200

TFTP = 64 * 100 / 1 = 6400

which matches each of the weights shown in the output.  In the first example, when we allocated all of the interface bandwidth to the 3 classes, the weights would have been much lower, but still in the same proportion to one another.  Therefore, if all traffic is accounted for in user classes, it makes no difference how much bandwidth is allocated to each class – only the proportion allocated to each class relative to the others.
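Since the scheduler gives each backlogged conversation bandwidth in inverse proportion to its weight, the expected split can be sketched with a few lines of Python (a rough model, not IOS code):

weights = {"DNS": 2133.33, "DHCP": 3200, "TFTP": 6400}
inverse = {name: 1.0 / w for name, w in weights.items()}
total = sum(inverse.values())
for name, inv in inverse.items():
    print(name, round(96 * inv / total, 1), "kbps")
# DNS 48.0, DHCP 32.0, TFTP 16.0 -- the same 3:2:1 split no matter how much
# of the interface bandwidth the percentages add up to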

 

Now let’s look at the criteria for being put into a dynamic conversation vs. a user defined conversation.  Consider the following configuration:

R1:
class-map match-any DNS-DHCP
 match access-group name DNS
 match access-group name DHCP
!
policy-map CBWFQ
 class DNS-DHCP
 class class-default
  bandwidth percent 10
policy-map Shaper
 class class-default
  shape average 96000
  service-policy CBWFQ
!
interface Serial1/0
 service-policy output Shaper

DNS and DHCP are both being classified into the user defined class ‘DNS-DHCP’, which does not have a bandwidth guarantee.  TFTP will be classified into class-default, which has been configured with a 10% bandwidth guarantee.  You might expect that DNS and DHCP will both be put into the same user defined conversation, and that TFTP will be put into a dynamic conversation.  The queueing information on R1 is shown below:

cbwfq-3-r1queue1

We can see that there are 16 dynamic queues in the WFQ system.  This means that user defined conversations will use a constant of 64 in the weight calculation and will start at conversation #25.  We can see that DNS and DHCP have each been given a separate queue, and the conversation numbers (0 and 13) fall within the dynamic queue range even though they were classified into a single user defined class.  The weight has also been calculated according to the weight formula for dynamic conversations, 32384 / (0 + 1), giving them each a weight of 32,384.  Also, TFTP has been given a user defined conversation number even though it was classified into class-default.  The weight has been calculated according to the weight formula for user defined conversations with bandwidth percent configured, 64 * 100 / 10, resulting in a weight of 640.  Therefore, the type of conversation depends not on the class that the traffic is classified into, but on whether or not that class has a bandwidth guarantee.  The calculated weights shown in the output also point out another very surprising characteristic of CBWFQ – classes with a bandwidth guarantee generally have a much lower scheduling weight than classes without, even if the guarantee is very small.  Take a look at the traffic measurement information on R2:

cbwfq-3-r2pmap

Even though TFTP was only given bandwidth percent 10 in class-default, it has consumed almost the entire available bandwidth on the link.  Looking back at the weights on R1, we can see that DNS and DHCP have (32,384 / 640), or 50.6 times the scheduling weight of TFTP, allowing TFTP to send 50.6 times as many bytes.  The packet counters for packets received by R2 confirm this almost exactly (956 / 50.6 = 18.89).  Even if we had marked DNS or DHCP with IPP 7, the calculated weight would have been 4,048 (32,384 / (7 + 1)), which still would have allowed TFTP to consume the majority of the link bandwidth.
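The same rough inverse-weight model predicts the split seen here (weights taken from the R1 queueing output above):

weights = {"DNS": 32384, "DHCP": 32384, "TFTP": 640}
inverse = {name: 1.0 / w for name, w in weights.items()}
total = sum(inverse.values())
for name, inv in inverse.items():
    print(name, round(96 * inv / total, 1), "kbps")
# DNS 1.8, DHCP 1.8, TFTP 92.3 -- TFTP gets about 50.6 times as much as
# either class that has a bandwidth guarantee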

 

Now let’s add a priority guarantee to one of the classes, turning our CBWFQ policy into an LLQ policy.  We will give DNS a priority guarantee of 64 kbps, DHCP 2% of the bandwidth, and TFTP 1% of the bandwidth.  The new policy map configuration on R1 is shown below:

R1:
policy-map LLQ
 class DNS
  priority 64 8000
 class DHCP
  bandwidth percent 2
 class TFTP
  bandwidth percent 1
policy-map Shaper
 class class-default
  shape average 96000
  service-policy LLQ
!
interface Serial1/0
 service-policy output Shaper

Notice that the policer on the priority class has also been configured with a Bc of 8000 bytes.  We’ll look at why this was necessary in a minute, but for now we will ignore it.  Each traffic flow will be sent with the same parameters (1500-byte packets + L2 header and 125 ms interpacket delay for approximately 96 kbps per flow).  The queueing information from the shaping queues on R1 is shown below:

llq-1-r1queue

With WFQ configured to use 16 dynamic queues, the LLQ conversation number should be 24 and we can see that it is.  Notice that the weight for the LLQ conversation is 0 – this explains why packets in the LLQ are always scheduled first if the LLQ has not exceeded the policing rate.  R2 shows how much of each type of traffic is being sent across the serial link:

llq-1-r2pmap

As expected, DNS consumes all of its 64 kbps of priority bandwidth and DHCP and TFTP share the remaining bandwidth in a 2:1 ratio.

 

Next let’s see what happens if we don’t adjust Bc on the policer of the LLQ.  The configuration remains the same, other than accepting the default Bc generated by IOS for the LLQ class:

R1:
policy-map LLQ
 class DNS
  priority 64

After starting the 3 traffic streams again, take a look at the traffic arriving at R2:

llq-2-r2pmap

Even though we’ve given it 64 kbps of priority bandwidth, DNS is only sending 48 kbps across the serial link.  The reason has to do with how the LLQ is being policed.  The default Bc for the policer on an LLQ class is 200 ms (20% of a second) worth of the policed rate.  With a policed rate of 64 kbps, Bc defaults to:

Bc = (64,000 bits/sec) * (.2 sec) * (1 byte / 8 bits) = 1600 bytes

This is verified on R1:

llq-2-r1pmap

The DNS traffic stream is sending a 1500-byte packet (1504 with HDLC) every 125 ms.  Once congestion starts to occur, the policer will function as follows:

1. The policer starts with a full bucket of 1600 tokens.

2. DNS packet arrives at time T.  Packet has size 1504 and bucket has 1600 tokens, so the packet is allowed to be sent and bucket is decremented to 96 tokens.

3. DNS packet arrives at time T + .125.  The bucket is replenished with a pro-rated number of tokens based on the InterpacketArrivalTime * PolicerRate (in bytes).  In this case, the new number of tokens in the bucket is: 96 + .125 * 8000 = 1096.  Packet size (1504) > tokens in bucket (1096), so the packet is policed.

4. DNS packet arrives at time T + .250.  The bucket is replenished with a pro-rated number of tokens based on the InterpacketArrivalTime * PolicerRate (in bytes).  In this case, the new number of tokens in the bucket is: 1096 + .125 * 8000 = 2096.  However, since the bucket has a maximum size of 1600, the extra tokens spill out of the bucket and the number of tokens is set to 1600.  Packet size (1504) < tokens in bucket (1600), so the packet is sent and the bucket is decremented to 96 tokens.

As you can see, this cycle will continue forever.  The bucket refills to its maximum size roughly 190 ms after the first packet, but another packet does not arrive until about 60 ms later, and those 60 ms worth of tokens are essentially wasted.  The net result is that every other packet is sent.  This explains why the amount of DNS traffic arriving at R2 (48 kbps) is half of the total amount of DNS traffic being sent (96 kbps).
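The cycle is easy to reproduce with a small simulation of the policer as described in the steps above (a sketch of a single token bucket that is refilled at each packet arrival and capped at Bc; not actual IOS code):

def police(bc_bytes, packets=8, pkt_bytes=1504, interval=0.125, cir_bps=64000):
    """Simulate the LLQ policer described above."""
    rate = cir_bps / 8                 # 64,000 bps -> 8,000 bytes/sec
    tokens = bc_bytes                  # the policer starts with a full bucket
    results = []
    for _ in range(packets):
        if pkt_bytes <= tokens:
            tokens -= pkt_bytes
            results.append("send")
        else:
            results.append("drop")
        # refill just before the next arrival, capped at the bucket size
        tokens = min(bc_bytes, tokens + interval * rate)
    return results

print(police(1600))   # send, drop, send, drop, ... -> only 48 kbps gets through
print(police(8000))   # all sent at first; drops only begin once the bucket drains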

 

Next let’s look at how the LLQ class behaves when there is no congestion on the interface.  The configuration remains the same, but only the DNS traffic flow will be sent.  Let’s see how much traffic R2 is receiving:

llq-3-r2pmap2

All of the traffic makes it to R2.  This verifies that the priority queue of LLQ is only policed when there is congestion.

 

For the last test, we will look at how CBWFQ behaves in IOS version 12.4(20)T.  We will create a CBWFQ policy and first test it in 12.4(18), which is what all the previous tests were done in, and then test the same CBWFQ policy in 12.4(20)T.  Consider the following configuration:

R1:
policy-map CBWFQ
 class DNS
  bandwidth percent 2
 class class-default
  fair-queue 128
policy-map Shaper
 class class-default
  shape average 96000
  service-policy CBWFQ

Using the same 3 traffic streams, DNS will be classified into the DNS class and DHCP and TFTP will be classified into class-default.  Based on the results of our previous tests, we can expect that DNS will be assigned conversation # 137 (128+9) and given a scheduling weight of 1500 (30 * 100 / 2).  DHCP and TFTP should each be given separate conversation numbers between 0 and 127 and given a scheduling weight of 32,384.  Therefore, DNS should get roughly 21.5 times as much bandwidth as DHCP and TFTP.  This is confirmed by the output on R1 and R2:

cbwfq-5-r1queue

cbwfq-5-r2pmap1
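As a quick arithmetic check of the expectations above (128 dynamic queues, so the constant from the table is 30; this is just the formulas again, not IOS output):

dns_weight  = 30 * 100 / 2      # user defined class, bandwidth percent 2 -> 1500
flow_weight = 32384 / (0 + 1)   # dynamic conversations at IPP 0 -> 32384
print(flow_weight / dns_weight) # ~21.6, so DNS should get roughly 21.5x each other flow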

Now let’s put the exact same CBWFQ configuration into 12.4(20)T.  The show traffic-shaping queue command does not seem to work anymore in 12.4(20)T so it is difficult to determine exactly how the CBWFQ algorithm schedules packets.  However, take a look at the net result by looking at the incoming traffic on R2:

cbwfq-6-r2pmap

DNS has only been given approximately 2% of the total bandwidth, which was the minimum that we reserved for it with the bandwidth percent command.  All of the remaining bandwidth has been divided among the flows that did not have a bandwidth reservation.  As you can see, the same configuration has a very different end result in 12.4(20)T.  DNS has gone from receiving nearly all of the bandwidth to receiving only 2% of it during congestion.


4 Responses to “Class-Based Weighted Fair Queueing and Low Latency Queueing Tests”

  1. Joyce said

    Hello Andy,

    Thank you for sharing the simulation results to better explain how CBWFQ/LLQ works. I can’t find any good documentation from Cisco that explains any of these formulas.

    I have two quick questions:

    1. For the ‘priority 64’ example with the default 1600 bytes, your formula shows:

    ….”3. DNS packet arrives at time T + .125. Bucket is replenished with a pro-rated number of tokens based on the InterpacketArrivalTime * PolicerRate (in bytes). In this case, the new number of tokens in the bucket is: 96 + .125 * 8000 = 1096….”

    My question: this is the example where you don’t explicitly adjust the burst, so the burst is the default 1600 bytes – so why does the formula use 8000 in 96 + .125 * 8000? This is where I was a little confused, since I thought we were using the default 1600, so I’m not sure where the 8000 comes from.

    Can you please help to clarify?

    2. Can you show a little example, like the one above with the formula, of how the packets won’t get dropped when you adjust the burst size to 8000 bytes?

    3. About IOS 12.4(20)T – this is dramatically different from the older 12.4 IOS. Do you know what happened? It seems to toss out the entire calculation of the weight and share. What should I do when I need to assign bandwidth to various classes – which way should I do it?

    4. I would like to do simulations like you did. I have dynamips with GNS3 on my Windows XP machine. How do I set up the PC with the UDP flood script? I clicked on the script link and it says to copy the source file to flood.pl. What does that mean? Sorry, I guess this is a pretty trivial question, but I am not familiar with how to set it up. Thanks for your help.

    Joyce

  2. Andy said

    Hi Joyce,

    The 8000 comes from the policed rate (CIR) in bytes. To find the number of tokens in the bucket, the previous number of tokens in the bucket (96) is added to the time since the last packet arrival (.125 seconds) times the CIR in bytes (8000). It could also be thought of as:

    96 bytes + (.125 seconds) * (64,000 bps) * (1 byte / 8 bits) = 1096 bytes

    which may have made more sense, but I removed the bits to bytes conversion and just wrote the CIR in bytes.

    When Bc is 8000 bytes, the cycle will go like this:

    1. Policer starts with a full 8000 token bucket.

    2. DNS packet arrives at time T. Packet has size 1504 and bucket has 8000 tokens, so the packet is allowed to be sent and bucket is decremented to 6496 tokens.

    3. DNS packet arrives at time T + .125. Bucket is replenished with a pro-rated number of tokens based on the InterpacketArrivalTime * PolicerRate (in bytes). In this case, the new number of tokens in the bucket is: 6496 + .125 * 8000 = 7496. Packet size (1504) < tokens in bucket (7496), so the packet is allowed to be sent and the bucket is decremented to 5992.

    4. DNS packet arrives at time T + .250. Bucket is replenished with a pro-rated number of tokens based on the InterpacketArrivalTime * PolicerRate (in bytes). In this case, the new number of tokens in the bucket is: 5992 + .125 * 8000 = 6992. Packet size (1504) < tokens in bucket (6992), so the packet is allowed to be sent and the bucket is decremented to 5488.

    Eventually the bucket will empty since 1504 tokens are being used up for every 1000 that are put back in, but the important point is that the bucket is big enough that it never fills up in between intervals, so no tokens end up being wasted and the full 64,000 bps CIR can be sent.

    I’m not sure why CBWFQ behaves so much differently in 12.4(20)T and I was not able to find a whole lot of documentation on it. Here is one article about it from Cisco, but it does not really get into specifics:
    http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6558/white_paper_c11-481499.html
    For assigning bandwidths, the important thing to keep in mind is it seems that during congestion, classes that are configured with a guaranteed bandwidth are only allowed to use up that much bandwidth – which actually seems closer to how Cisco documentation claimed CBWFQ worked in the past.

    To use the flood script, you’ll need to download PERL from:
    http://www.activestate.com/activeperl/
    After that, copy the script to a text document and save it as flood.pl, then open a command prompt and change to the directory where you saved the script and run it by typing the filename followed by the parameters (size, port number, etc).

  3. Joyce said

    Hi Andy,

    Ah, now I understand where the 8000 comes from – it doesn’t matter what the burst size is, the replenish rate is 8000 bytes/sec based on 64 kbps.

    So I did a spreadsheet, subtracting 1504 and adding 1000, and after about 13 packets the bucket is empty; from then on it drops 1 packet, lets 2 packets go, drops 1 packet, and so on, because we only replenish 1000 tokens while each packet needs 1504. So it is still dropping at about a 33% rate instead of every other packet (50%) as with the default 1600 setting. Even if we set the burst to 3000 instead of 8000, it will still end up at roughly a 33% drop rate. The only thing that will avoid dropping packets entirely is if the police rate is high enough that the replenishment per interval is bigger than 1504 – and in that case the default burst size of 0.2 sec would probably be just fine.

    Just a thought.

    Thanks for sharing the Cisco link. I am particularly interested in the section about the QoS behavior change for the default class. If we configure fair-queue, it no longer schedules by weight based on IP precedence but just gives “equal” shares, so the whole weight calculation for the default class under WFQ no longer holds true. But it doesn’t say how the default class “compares” with the user-defined classes. In other words, in the old days we had the formula for user-defined classes and the 32384 formula for unclassified WFQ traffic, so user-defined traffic would almost always have a much better weight and get a bigger share than any unclassified default traffic (unless the user-defined class was assigned a very small bandwidth percentage). So in the old days we had two formulas that clearly defined the relationship between those two types of traffic. But now, according to Cisco, the new IOS doesn’t use “weight” anymore for the default class and each flow gets an equal share, so I don’t know whether a similar comparison with the user-defined traffic in terms of “weight” and “share” still holds.

    It is also interesting to see that max-reserved-bandwidth is no longer needed, since we can assign up to 99% of the bandwidth to user-defined classes without adding this command to change the default 75%.

    And the dramatically different behavior you observed in your simulation with the newer IOS was not addressed at all in this Cisco link. I would think this is one of the most important behaviors they need to address in the new IOS release, since it is the “opposite” of the older version’s behavior. I can open a TAC case to ask about it and confirm, but I would need all of the simulation data to include in the case to prove the result so Cisco can trace back the code for the different behavior.

    Finally, I still have a hard time really understanding how the weight relates to the bandwidth for each class. I understand that each “flow” has a weight, whether it is user-defined or unclassified. The weight for each flow is calculated from the bandwidth assignment, so the higher the bandwidth value, the lower the weight and the higher the share for that particular flow.

    But each flow gets its “share” of the entire available interface bandwidth from that calculation, and each class can have many, many flows in it. So take a simple case where we have 2 user-defined classes, each configured with 50% bandwidth, on a 1000 kbps pipe. If I have only one flow per class, then Flow 1 and Flow 2 will each get 500 kbps, because they have equal weights from the same bandwidth value – a 1:1 ratio.

    Now what if I have 9 flows in Class 1 while Class 2 still has 1 flow? The weight should still be 1:1 between each flow in Class 1 and the flow in Class 2, but now I have 9 of those flows in Class 1. Does that mean I now have a total of 10 flows at 1:1 each, so 900 kbps goes to Class 1 as a whole (100 kbps for each of the 9 flows) and the 1 flow in Class 2 also gets 100 kbps? If that is correct, then from a class-bandwidth perspective I have 9:1 instead of 50/50 at 500 kbps each.

    That is what I get if I look at it from a per-flow calculation point of view. I don’t know if it is correct or not.

    Or will the 9 flows in total get 500 kbps, so each flow gets 500/9, and Flow 10 in Class 2 gets 500 kbps all by itself? This is all for discussion based on the older IOS (prior to 12.4(20)T, since it seems the whole calculation of weight, flow, and share no longer applies there), where we see 87 kbps for DNS in the prior code and all of a sudden it gets 2 kbps in the new code with the exact same config. The 87 kbps result matches the weight/flow/share calculation, while the 2 kbps result throws that calculation off.

    thanks again for all your help.

    Joyce

  4. Andy said

    Hi Joyce,

    In the example you gave, each of the flows in Class 1 would get 500/9, or about 55 kbps each, and the single flow in Class 2 would get 500 kbps. Each class with a bandwidth reservation on it is treated as a single FIFO queue and all flows that are classified into that class share the single queue. Here is a quick example in 12.4(18) showing this using the same topology. There are 4 flows to ports 1001-1004. The first 3 flows are classified into Class1 and the 4th flow is classified into Class2. Both classes are configured with ‘bandwidth percent 50’.

    R1:
    ip access-list extended Class1
    permit udp any any eq 1001
    permit udp any any eq 1002
    permit udp any any eq 1003
    ip access-list extended Class2
    permit udp any any eq 1004
    !
    class-map match-all Class1
    match access-group name Class1
    class-map match-all Class2
    match access-group name Class2
    !
    policy-map CBWFQ
    class Class1
    bandwidth percent 50
    class Class2
    bandwidth percent 50
    policy-map Shaper
    class class-default
    shape average 96000
    service-policy CBWFQ

    R2:
    access-list 101 permit udp any any eq 1001
    access-list 102 permit udp any any eq 1002
    access-list 103 permit udp any any eq 1003
    access-list 104 permit udp any any eq 1004
    !
    class-map match-all 1001
    match access-group 101
    class-map match-all 1002
    match access-group 102
    class-map match-all 1003
    match access-group 103
    class-map match-all 1004
    match access-group 104
    !
    policy-map Traffic-Meter
    class 1001
    class 1002
    class 1003
    class 1004

    Traffic meter statistics on R2:
    R2#sh policy-map int
    Serial0/0

    Service-policy input: Traffic-Meter

    Class-map: 1001 (match-all)
    101 packets, 151500 bytes
    30 second offered rate 14000 bps
    Match: access-group 101

    Class-map: 1002 (match-all)
    211 packets, 316500 bytes
    30 second offered rate 30000 bps
    Match: access-group 102

    Class-map: 1003 (match-all)
    37 packets, 55500 bytes
    30 second offered rate 3000 bps
    Match: access-group 103

    Class-map: 1004 (match-all)
    349 packets, 523500 bytes
    30 second offered rate 48000 bps
    Match: access-group 104

    Class-map: class-default (match-any)
    0 packets, 0 bytes
    30 second offered rate 0 bps, drop rate 0 bps
    Match: any

    The combined packet count for flows 1, 2, and 3 is 349 (101 + 211 + 37), which is exactly equal to the packet count for flow #4. Also, the 3 different flows in Class1 do not send equal amounts because there is no preference for which type of packets are tail dropped when the queue is full. Dynamic conversations, on the other hand, each receive their own FIFO queue as long as the total number of WFQ conversations is not exceeded, so although they typically have much higher weights they are not affected in the same way as the number of flows increases.
