Cisconinja’s Blog

Archive for November, 2008

QoS Pre-classify in GRE over IPsec VPNs

Posted by Andy on November 29, 2008

When a packet is encapsulated and/or encrypted, the ToS byte is by default copied to the new IP header; however, the other header fields are no longer available for classification and QoS actions.  QoS pre-classify allows IOS to create a temporary copy of a packet in memory to be used for classification, so that QoS actions can be performed on the final packet after encapsulation and/or encryption.  This example will take a look at the three different ways QoS preclassification can be configured in a GRE over IPsec VPN, what the results of each are, and why each behaves the way that it does.  The following diagram shows the topology for this example:

 

qos-preclassify-topology

 

R1 has a GRE over IPsec tunnel to R3 which will be used to encrypt traffic between each of their LAN subnets.  R1 is connected to ‘Host’, which will be used to simulate a host on the LAN for traffic generation to test our QoS preclassify configuration.  R3 is using a loopback to simulate its LAN.  The relevant portions of each router’s initial configuration are shown below:

qos-preclassify-host-initialconfig

 

qos-preclassify-r1-initialconfig

 

qos-preclassify-r2-initialconfig

 

qos-preclassify-r3-initialconfig

 

Since we will be testing preclassification outbound on R1’s S0/0 interface, we do not want any traffic besides the traffic that we generate being sent out that interface, in order to make the results easier to interpret.  To accomplish this, we will create static routes on R1 and R3 so that no routing protocol traffic is needed, and disable unnecessary services on R1 that create traffic.  R2 does not need any static routes for full reachability because all traffic between R1’s and R3’s LAN subnets will be sent through the VPN, destined to R1’s or R3’s S0/0 interface, both of which are directly connected to R2.  The config for R1 and R3 is as follows:

 

qos-preclassify-r1-staticroutes

 

qos-preclassify-r3-staticroutes
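The static route screenshots are not reproduced here, but based on the prefixes mentioned later in the post (3.3.3.0/24 behind R3’s loopback, and 10.1.23.3 as the tunnel destination on R3’s S0/0), the routing probably looks something like the sketch below.  The Host LAN prefix and the exact services disabled on R1 are assumptions:

! R1 (sketch)
! - R3's LAN is reached via the tunnel; everything else, including the
!   tunnel/crypto peer address, follows the default route out S0/0
ip route 3.3.3.0 255.255.255.0 Tunnel0
ip route 0.0.0.0 0.0.0.0 Serial0/0
!
! one example of silencing background traffic on the link (assumption)
interface Serial0/0
 no cdp enable
!
! R3 (sketch) - the Host LAN prefix is assumed
ip route 192.168.1.0 255.255.255.0 Tunnel0
ip route 0.0.0.0 0.0.0.0 Serial0/0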

 

Next, let’s create a policy map to use for testing how our traffic is classified.  When we test it out, we will generate pings from ‘Host’ to R3’s loopback.  Depending on when classification is performed, the traffic could be classified as either ICMP, GRE, or ESP traffic, so we will create a class to match each and add them to a policy map.  Finally, we will enable the policy map outbound on R1’s S0/0.

 

qos-preclassify-r1-policymap1
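The policy map itself was shown in a screenshot; the sketch below is one way to build the three classes described above.  The ACL numbers, class-map names, and policy-map name are my own (NBAR ‘match protocol’ statements would work just as well), and the classes are left without actions since we only care about the match counters:

access-list 101 permit icmp any any
access-list 102 permit gre any any
access-list 103 permit esp any any
!
class-map match-all ICMP
 match access-group 101
class-map match-all GRE
 match access-group 102
class-map match-all ESP
 match access-group 103
!
policy-map TEST
 class ICMP
 class GRE
 class ESP
!
interface Serial0/0
 service-policy output TEST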

 

Now we’re ready to look at how traffic is classified in each of the three scenarios.  The first scenario is with no preclassification configured – in other words, the default behavior.  We ping from ‘Host’ to R3’s loopback and examine the classification results in our policy map on R1:

 

qos-preclassify-host-ping

 

qos-preclassify-r1-showpolicymap-1

 

The packets have been classified as ESP traffic, and the QoS actions (if we had configured any) for that class would be performed on the final outgoing packet.  This generally isn’t very useful in a real network, since we don’t know what type of traffic is inside the ESP packet.  That’s where QoS preclassification comes in.

 

The second scenario we will look at is with qos pre-classify configured on the crypto map.  We configure this on R1 and clear the counters to remove the traffic statistics from our previous example:

 

qos-preclassify-r1-qospre-cryptomap
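The crypto map name and sequence number in this sketch are assumptions; the relevant command is qos pre-classify under the crypto map entry, and clearing the interface counters resets the show policy-map interface statistics:

R1(config)# crypto map VPN 10 ipsec-isakmp
R1(config-crypto-map)# qos pre-classify
R1(config-crypto-map)# end
R1# clear counters Serial0/0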

 

Then we initiate another ping from ‘Host’ to R3’s loopback and view the policy map statistics on R1 again:

 

qos-preclassify-host-ping1

 

qos-preclassify-r1-showpolicymap-2

 

This time the packets have been classified as GRE traffic.  Again, this is probably not very useful because we do not know what is encapsulated within the GRE packet.

 

For the third scenario, we will configure qos pre-classify on the Tunnel 0 interface.  First remove the qos pre-classify from the crypto map in the previous scenario, then configure it on the tunnel interface and clear the counters:

 

qos-preclassify-r1-qospre-tunnel
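Again assuming the crypto map is named VPN with sequence number 10, the change would look roughly like this:

R1(config)# crypto map VPN 10 ipsec-isakmp
R1(config-crypto-map)# no qos pre-classify
R1(config-crypto-map)# interface Tunnel0
R1(config-if)# qos pre-classify
R1(config-if)# end
R1# clear counters Serial0/0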

 

Then initiate a ping from ‘Host’ to R3’s loopback again and view the policy map statistics:

 

qos-preclassify-r1-showpolicymap-3

 

This time the traffic has been classified as ICMP, and if we had configured any QoS actions for ICMP traffic, they would be performed on the final ESP packet when it leaves the router.

 

Why does the router classify the traffic like this in each scenario?  It’s probably easiest to start with the third scenario, followed by the second, and finally the first.

Scenario #3 – Preclassify on the tunnel interface

1. R1 receives an ICMP packet from ‘Host’ to R3’s loopback.

2. R1 performs a routing table lookup on the packet and finds 3.3.3.0/24 out interface Tunnel 0 as the best match, which we configured statically.

3. R1 ‘sends’ the packet to Tunnel 0 and finds the qos pre-classify command configured.  A temporary copy of the ICMP packet is created at this point to be used for classification, as shown below:

qos-preclassify-icmp-packet

 

Scenario #2 – Preclassify on the crypto map

1. R1 receives an ICMP packet from ‘Host’ to R3’s loopback.

2. R1 performs a routing table lookup on the packet and finds 3.3.3.0/24 out interface Tunnel 0 as the best match, which we configured statically.

3. R1 ‘sends’ the packet to Tunnel 0.  The tunnel mode, which was left at default, is gre ip.  R1 adds a GRE header and a new IP header outside the GRE header, using the tunnel source and tunnel destination addresses that are configured on the tunnel interface as the source and destination for the new IP header.

4. R1 performs a routing table lookup on the new destination address, 10.1.23.3, and finds the default route out interface S0/0 as the best match (this was configured statically as well).

5. R1 finds that there is a crypto map configured on S0/0 and finds the qos pre-classify command in the crypto map.  A temporary copy of the packet is created at this point to be used for classification, as shown below:

qos-preclassify-gre-packet1

 

Scenario #1 – No preclassification configured

1. R1 receives an ICMP packet from ‘Host’ to R3’s loopback.

2. R1 performs a routing table lookup on the packet and finds 3.3.3.0/24 out interface Tunnel 0 as the best match, which we configured statically.

3. R1 ‘sends’ the packet to Tunnel 0.  The tunnel mode, which was left at default, is gre ip.  R1 adds a GRE header and a new IP header outside the GRE header, using the tunnel source and tunnel destination addresses that are configured on the tunnel interface as the source and destination for the new IP header.

4. R1 performs a routing table lookup on the new destination address, 10.1.23.3, and finds the default route out interface S0/0 as the best match (this was configured statically as well).

5. R1 finds that there is a crypto map configured on S0/0 and that the GRE packet matches the ACL in the crypto map.  R1 adds an ESP header and a new IP header outside the ESP header, as specified by the crypto map and IPsec transform set.

6. R1 performs classification on the final ESP packet and sends the packet out S0/0, as shown below:

qos-preclassify-esp-packet1
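Putting the three scenarios side by side makes the behavior easier to see.  The packet diagrams are not reproduced here, but the header stack that classification is performed against in each case is roughly:

Scenario #3 (qos pre-classify on Tunnel 0) - copy taken before any encapsulation:
    IP (Host -> R3 loopback) | ICMP                       --> matches class ICMP

Scenario #2 (qos pre-classify on the crypto map) - copy taken after GRE encapsulation:
    IP (tunnel source -> 10.1.23.3) | GRE | IP | ICMP     --> matches class GRE

Scenario #1 (no preclassification) - classification performed on the final packet:
    IP | ESP | (encrypted GRE packet)                     --> matches class ESP

In all three cases any QoS actions are applied to the final ESP packet as it leaves S0/0; only the headers available at classification time differ.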

 

One final scenario to look at is configuring the service policy on the tunnel interface rather than on the physical interface, without using preclassification:

 

qos-preclassify-policyontunnel
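Assuming the policy map from the earlier sketch is named TEST, the change would look something like this (the qos pre-classify command from the previous scenario is removed as well):

R1(config)# interface Tunnel0
R1(config-if)# no qos pre-classify
R1(config-if)# service-policy output TEST
R1(config-if)# interface Serial0/0
R1(config-if)# no service-policy output TEST
R1(config-if)# end
R1# clear counters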

 

Then we initiate a ping from ‘Host’ to R3’s loopback again and view the statistics:

 

qos-preclassify-host-ping2

 

qos-preclassify-r1-showpolicymap-4

 

Just like the third scenario, the traffic is classified as ICMP, and no preclassification was even needed.  However, this service policy will only be applied to traffic exiting Tunnel 0, whereas the policy in the first three scenarios applies to all traffic exiting S0/0, including traffic from Tunnel 0 and from any other tunnel interfaces configured to use S0/0.  Ultimately, the choice of where to apply the service policy, and where (or whether) to apply QoS preclassification, depends on what you are trying to accomplish.

Posted in QoS, VPN | 5 Comments »

Marking DSCP values with Policy-Based Routing

Posted by Andy on November 26, 2008

I recently read Wendell Odom’s QoS Exam Certification Guide.  One of the appendices, which covers legacy methods for marking traffic, describes how policy-based routing can be used as a marking tool.  According to the book, PBR can be used to mark IP precedence but not DSCP – possibly because class-based marking is the most common type of marking now used, so it may not have been considered necessary to update PBR in IOS to follow the DiffServ standard.  The following diagram shows the two different ways that the IP header Type of Service (ToS) byte has been defined:

 

tos-diagram
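The diagram is not reproduced here, but the two definitions it compares are:

Pre-DiffServ (RFC 791/1349):  bits 0-2 = IP precedence, bits 3-6 = ToS field (delay, throughput, reliability, monetary cost), bit 7 = must be zero
DiffServ (RFC 2474):          bits 0-5 = DSCP, bits 6-7 = ECN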

 

I recalled that route maps (which are what PBR uses to perform marking) allow the delay, throughput, reliability, and monetary cost bits (collectively known as the ToS field in the Pre-DiffServ standard) to be set, which correspond to the last 3 bits of the DSCP field and 1st bit of the ECN field in the DiffServ standard.  This made me wonder: Will IOS allow both the IP precedence and the ToS field to be marked with PBR?  Will it be interpreted by DiffServ capable devices as the correct DSCP value?  To find out, I set up a simple lab scenario to test it out:

 

pbr-marking-topology

 

R2 will be using PBR to mark traffic that is sourced from R1.  We will attempt to mark ICMP traffic with a DSCP value of Expedited Forwarding (EF) and then ping from R1 to R3 to test it out.  IOS displays the following options for setting the ToS field within a route map:

 

pbr-marking-tos-field-options

 

To set any one of the four bits that make up this field, we can enter the keyword of the bit we want to set.  To set multiple bits, we must enter the decimal value of the combined bits we want to set.  To obtain the EF DSCP value (101110), we will need to set the IP precedence to 5 (101) and the ToS to 12 (1100).  Note that the last bit of the ToS field is not a part of the DSCP field in the DiffServ standard, so we are leaving it set to 0.  Our PBR configuration on R2 is:

 

pbr-marking-config
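The configuration screenshot is not reproduced here.  Writing the target ToS byte out bit by bit shows where the two set values come from, and below that is a minimal sketch of what the PBR config on R2 likely looks like.  The route-map name, ACL number, and ingress interface (assumed to be F0/0, facing R1) are my own, and the ACL here matches all ICMP; the original may also have matched on R1’s source address:

Target ToS byte:   1 0 1 1 1 0 0 0
  IP precedence =  1 0 1            (bits 0-2) = 5
  ToS field     =        1 1 0 0    (bits 3-6) = 12
  DSCP          =  1 0 1 1 1 0      (bits 0-5) = 46 = EF

access-list 100 permit icmp any any
!
route-map MARK-EF permit 10
 match ip address 100
 set ip precedence 5
 set ip tos 12
!
interface FastEthernet0/0
 ip policy route-map MARK-EF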

 

To verify, we will ping from R1 to R3.  The ICMP traffic should match route map clause 10 and have the IP precedence set to 5 and ToS set to 12.  A capture taken on R2’s F0/1 interface is shown below in Wireshark:

 

pbr-marking-wireshark1

 

Wireshark correctly interprets it as DSCP EF, just as we hoped.

Posted in PBR, QoS | Leave a Comment »

Using ACLs with discontiguous wildcard bitmasks – an example

Posted by Andy on November 21, 2008

Suppose we’ve been given the requirement to permit the following 3 (randomly chosen) addresses:

 

#1 – 131.47.104.88

#2 – 203.42.155.67

#3 – 90.143.32.224

 

and to deny the following 3 (also randomly chosen) addresses:

 

#4 – 152.209.18.22

#5 – 8.25.99.150

#6 – 79.242.201.57

 

using only a single line access list.  Conventional access list knowledge would say that this is impossible, because even the very first bit differs between the 3 addresses that we must permit.  However, the truth is that each bit of an ACL is treated individually, and wildcard masks do not need to be contiguous.  In order to find the most specific address and wildcard mask that will match the first 3 addresses, we first convert each of them to binary:

 

#1 – 131.47.104.88  =  10000011 . 00101111 . 01101000 . 01011000

#2 – 203.42.155.67  =  11001011 . 00101010 . 10011011 . 01000011

#3 – 90.143.32.224  =  01011010 . 10001111 . 00100000 . 11100000

 

Next, we perform a logical AND on these 3 addresses to obtain the address portion of our ACL statement:

 

  10000011 . 00101111 . 01101000 . 01011000

  11001011 . 00101010 . 10011011 . 01000011

AND    01011010 . 10001111 . 00100000 . 11100000

————————————————————————————

   00000010 . 00001010 . 00000000 . 01000000  =  2.10.0.64

 

Next, we mark every bit position in which these 3 addresses do not all agree to obtain the wildcard mask portion of our ACL statement.  (With only two addresses this is a simple XOR; with three or more, the differing bits are found by XORing the OR of the addresses with their AND.)

 

  10000011 . 00101111 . 01101000 . 01011000

  11001011 . 00101010 . 10011011 . 01000011

DIFF   01011010 . 10001111 . 00100000 . 11100000

————————————————————————————

  11011001 . 10100101 . 11111011 . 10111011  =  217.165.251.187

 

Why AND the addresses to find the address portion and take the differing bits to find the wildcard mask?  The logic works as follows:

1.  If all addresses have the same value in a given bit position, use that value in the address portion of the ACL entry.  This bit will be ‘checked’ (Step #3), so the value of that bit in the address portion must match the value of that bit in all the addresses we want to permit.

2. If not all addresses have the same value in a given bit position, use zero as the value for that bit.  This bit will not be ‘checked’ anyway (Step #4) because doing so would only allow a subset of the addresses that we want to match.  We could, in fact, use either a zero or one here without affecting the matching logic since it is not checked anyway, but standard practice is to use a zero.

3. If all addresses have the same value in a given bit position, place a zero in that position in the wildcard mask portion.  This means that we will ‘check’ this bit, meaning we will require it to match the value that we specified for this bit position in the address portion.  This should always result in a match, because in Step #1 we placed the value that all of these addresses have in common in that position.

4. If not all addresses have the same value in a given bit position, place a one in that position in the wildcard mask portion.  This is considered a ‘don’t care’ bit, and means that we do not require it to match.

 

Ok, so the address and wildcard mask that we came up with meet our requirement of permitting the first 3 addresses, but what about denying the next 3?  Start with the wildcard mask and look at all bit positions that contain a zero.  We’ve specified that these bits must match our address portion of the ACL, so allow the address bits to ‘pass through’ the wildcard bits where the wildcard bit is a zero and cross out all other bit positions that we are not checking:

 

ACL Address    00000010 . 00001010 . 00000000 . 01000000

ACL Wildcard   11011001 . 10100101 . 11111011 . 10111011

————————————————————————————————-

 Filter              xx0xx01x . x0x01x1x . xxxxx0xx . x1xxx0xx 

 

Next, compare each of the addresses that we want to deny against this filter.  If any bit that isn’t crossed out differs, the address will be denied.

 

#4 – 152.209.18.22   =  10011000 . 11010001 . 00010010 . 00010110

#5 – 8.25.99.150       =  00001000 . 00011001 . 01100011 . 10010110

#6 – 79.242.201.57  =   01001111 . 11110010 . 11001001 . 00111001

 

Addresses #4 and #5 will be denied by the 3rd bit that we are checking (7th bit, 1st octet).  Address #6 will be denied by the 2nd bit that we are checking (6th bit, 1st octet).  It appears that our address and wildcard will accomplish what we need.  Our ACL statement becomes:

 

access-list 1 permit 2.10.0.64 217.165.251.187

 

However, note that the usefulness of an ACL like this is very limited, because it permits far more than we need it to.  Because there are 22 ‘don’t care’ bits in the wildcard mask, this ACL will permit a total of 2^22 (4,194,304) different addresses.  A randomly chosen address will have a 1 in 2^10 (1/1024) chance of being permitted by our single line ACL.

Finally, let’s test it out and see if it works correctly.  We will use a simple 2 router topology as shown below:

acl-lab1

R1 will be using 6 loopbacks to simulate our addresses and sourcing pings from each of them to R2.  R2 will use the ACL we created inbound on F0/0.  The relevant portions of each router’s configuration are shown below: 

 

acl-lab-r1-config1

 

acl-lab-r2-config2
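The configuration screenshots are not reproduced here; a rough sketch of the relevant pieces is below.  Interface numbering, the loopback masks, and how R2 routes the echo replies back to R1 are assumptions:

! R1 - one loopback per test address
interface Loopback1
 ip address 131.47.104.88 255.255.255.255
interface Loopback2
 ip address 203.42.155.67 255.255.255.255
interface Loopback3
 ip address 90.143.32.224 255.255.255.255
interface Loopback4
 ip address 152.209.18.22 255.255.255.255
interface Loopback5
 ip address 8.25.99.150 255.255.255.255
interface Loopback6
 ip address 79.242.201.57 255.255.255.255
!
! R2 - the single-line ACL applied inbound on the link to R1
access-list 1 permit 2.10.0.64 217.165.251.187
interface FastEthernet0/0
 ip access-group 1 in
! R2 also needs routes back to the loopback addresses so that
! the permitted pings can be answered (not shown)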

 

 

 

Now we can initiate pings on R1 to R2 sourced from each of our 6 addresses:

 

acl-lab-results

The first 3 addresses are permitted and second 3 are denied, just as we expected.

 

One final note on using discontiguous wildcard masks is that not all IOS features will allow them.  Attempting to use the same address and wildcard mask to enable OSPF on the first 3 loopback interfaces of R1 gives the following result:

acl-lab-ospf
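For reference, the attempt shown above would have been along the lines of the following (process ID and area are assumptions); the IOS response captured in the screenshot is what shows that the OSPF network statement will not take the discontiguous wildcard the way the ACL did:

R1(config)# router ospf 1
R1(config-router)# network 2.10.0.64 217.165.251.187 area 0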

Posted in ACL | 2 Comments »