Application Delivery (ADX)

Transparent Cache Switching with Brocade ServerIron and Blue Coat Proxy SG Cache Servers

by ssteve on 06-15-2013 04:45 PM - edited on 10-30-2013 05:48 PM by bcm1


 

Introduction

This document guides you through the steps required to successfully deploy Blue Coat Proxy SG cache servers with the ServerIron. The first part of the document describes how to create a simple working environment. The second part discusses some caveats which need to be considered when deploying the solution.

Pre-requisites

The following prerequisites are required to deploy the solution:

•       Working knowledge of web caching

•       Working knowledge of Transparent Cache Switching

•       Experience with basic networking and troubleshooting

•       Experience installing and configuring the Brocade ServerIron products

•       Working knowledge of ServerIron’s CLI

 

Network Topology

 

In Figure 1, an active-active pair of ServerIrons is used to load balance traffic to be cached and inspected by Blue Coat Proxy SG and Proxy AV devices.  Incoming client traffic is directed to a shared VRRP address owned by the ServerIrons, which load balance those sessions to SG devices.  Redundant switches are used between the ServerIrons and Blue Coats.  The last feature of this configuration is the ability to handle traffic routed asymmetrically to other sites.

 


 

 

 

 Figure 1: Network Topology

 


    

Required ServerIron configuration

The minimum required steps on the ServerIron to enable Transparent Cache Switching (TCS) are the following.

1. Configure access via SSH

2. Configure Interfaces and Trunks

3. Create VLANs and their associated Virtual Ethernet

4. Create Cache Servers and Cache Groups

5. Create Active-Active Redundancy

6. Complete Configuration

7. Verify operation

1. Configure access via SSH

The following commands are required to configure SSH access (the SSH public key is not shown):

                     interface ethernet 3/1

                        port-name Mgmt

                        ip address 10.9.71.240 255.255.255.0

                        aaa authentication web-server default local

                        aaa authentication enable default local                    

                        aaa authentication login default local

                        aaa authentication login privilege-mode

                        enable telnet authentication

                        enable aaa console

                        hostname SI-1

                        no telnet server

                        username admin password .....

                        web-management https

                        ip route 0.0.0.0 0.0.0.0 10.97.0.1


2. Configure Interfaces and Trunks

Configure the trunk between the ServerIron and ISG (Firewall)

                      trunk server ethe 2/3 to 2/4

                      port-name "To_ISG1" ethernet 2/3

                      trunk server ethe 3/3 to 3/4

                      port-name "To_ISG2" ethernet 3/3

 

Configure the interfaces between the ServerIron and FLS switches (Layer 2 Access)

 

                        interface ethernet 2/5

                         port-name To_LS-top

                         link-aggregate configure key 10500

                         link-aggregate active

                        interface ethernet 2/6

                         link-aggregate configure key 10500

                         link-aggregate active

                        interface ethernet 3/5

                         port-name To_LS-bottom

                         link-aggregate configure key 11500

                         link-aggregate active

                        interface ethernet 3/6

                         link-aggregate configure key 11500                        

                         link-aggregate active

 

3. Create VLANs and associated Virtual Ethernet Interfaces

 

VLAN 10 is the Virtual LAN between the ServerIron and the FLS (Layer 2 access).

                        vlan 10 by port

                        untagged ethe 2/5 to 2/6 ethe 3/5 to 3/6                  

                        router-interface ve 10

                        spanning-tree 802-1w

                        spanning-tree 802-1w priority 7000

 

The Virtual Ethernet (VE) is configured to support redundancy by monitoring (tracking) the downstream ports and sharing the same VRRP virtual IP address (found in the backup section of the ve configuration).

 

                     interface ve 10

                      port-name To_FLSs

                      ip address 10.98.1.252 255.255.255.0

                      ip vrrp-extended vrid 2

                      backup priority 150

                      ip-address 10.98.1.1

                      track-port e 2/3 priority 30

                      track-trunk-port e 2/3

                      track-port e 3/3 priority 30                             

                      track-trunk-port e 3/3

                      enable

 

VLAN 100 is the Virtual LAN between the ServerIron and the Firewalls.

 

                      vlan 100 by port

                      untagged ethe 2/3 to 2/4 ethe 3/3 to 3/4

                      router-interface ve 100

                      spanning-tree 802-1w

                      spanning-tree 802-1w priority 7000

 

The Virtual Ethernet (VE) is configured to support redundancy by monitoring (tracking) the upstream ports and sharing the same VRRP virtual IP address (found in the backup section of the ve configuration).

 

                      interface ve 100

                       port-name To_ISGs

                       ip address 10.97.0.252 255.255.255.0

                       ip vrrp-extended vrid 1

                        backup priority 150

                        ip-address 10.97.0.254

                        track-port e 2/5 priority 30

                        track-port e 3/5 priority 30

                        enable

VLAN 200 is the Virtual LAN that supports session synchronization between the ServerIrons in the Active-Active configuration.

 

                      vlan 200 by port

                       untagged ethe 3/11

                       static-mac-address 0012.f2a7.bd4a ethernet 3/11

4. Create Cache Servers, Cache Groups, and the Caching Policy

The next step is to configure the cache servers, which are the ServerIron's references to the actual caching devices. In this scenario we assume that the cache servers themselves are already configured.

server cache-name SG2 10.98.1.3

   port http

   port http url "HEAD /"

   port http l4-check-only

   port ssl

   port ssl l4-check-only

server cache-name SG3 10.98.1.4

   port http

   port http url "HEAD /"

   port http l4-check-only

   port ssl

   port ssl l4-check-only

server cache-name SG4 10.98.1.5

   port http

   port http url "HEAD /"                                    

   port http l4-check-only

   port ssl

   port ssl l4-check-only

server cache-name SG5 10.98.1.6

   port http

   port http url "HEAD /"

   port http l4-check-only

   port ssl

   port ssl l4-check-only

server cache-name SG6 10.98.1.7

   port http

   port http url "HEAD /"

   port http l4-check-only

   port ssl

   port ssl l4-check-only

server cache-name SG7 10.98.1.8

   port http

   port http url "HEAD /"

   port http l4-check-only

   port ssl                                                  

   port ssl l4-check-only

The cache servers are placed into the cache-group.  The cache-group contains the hash-mask and the group ACLs.  The hash-mask is the policy used to balance the traffic to the cache servers.

      server cache-group 1

                            hash-mask 255.255.255.255 0.0.0.255

                            filter-acl 101

                            cache-name SG2                                            

                            cache-name SG3

                            cache-name SG4

                            cache-name SG5

                            cache-name SG6

                            cache-name SG7
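To make the hash-mask policy concrete, here is a minimal Python sketch of masked-address bucket selection. The actual ServerIron hash function is internal and not documented here, so the XOR-and-modulo below is only a stand-in, and the addresses are hypothetical.

```python
import ipaddress

def mask_ip(ip, mask):
    """Apply a dotted-quad mask to an address, as the hash-mask does."""
    return int(ipaddress.ip_address(ip)) & int(ipaddress.ip_address(mask))

def pick_cache(dst, src, caches,
               dst_mask="255.255.255.255", src_mask="0.0.0.255"):
    # Combine the masked destination and source into a bucket index;
    # XOR/modulo stands in for the ServerIron's internal hash function.
    bucket = (mask_ip(dst, dst_mask) ^ mask_ip(src, src_mask)) % len(caches)
    return caches[bucket]

caches = ["SG2", "SG3", "SG4", "SG5", "SG6", "SG7"]
# With src_mask 0.0.0.255 only the client's last octet matters, so two
# clients ending in .7 fetching the same destination land on the same cache.
a = pick_cache("93.184.216.34", "10.95.100.7", caches)
b = pick_cache("93.184.216.34", "10.10.10.7", caches)
assert a == b and a in caches
```

The key property illustrated is determinism: a given (masked destination, masked source) pair always maps to the same cache server, which is what gives each cache locality over a slice of the content.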

 

Next, turn on TCS globally on the ServerIron by defining the IP cache policy as a global policy:

 

                         ip l4-policy 1 cache tcp http global

 

To ensure that traffic is balanced to a new cache server when it is added to the cache-group, the following command must be entered:

 

                        server force-cache-rehash

5. Creating Active-Active Redundancy

To ensure that if one of the ServerIrons fails the other will take over, you must configure the following.

The session sync-update command synchronizes session state between the two ServerIrons:

 

                  session sync-update

 

The server active-active-port command identifies the port that connects the ServerIron to its active-active partner. Note that the port must be in its own VLAN.

 

                            server active-active-port ethe 3/11 vlan-id 200

 

The following commands enable session synchronization on the ports where the active-active SLB feature is used. This is required both to ensure continued service following a failover and to enable each ServerIron to send server replies back to the clients, regardless of which ServerIron load balanced the requests.

 

                              server port 80

                              session-sync

                              tcp

 

6. Complete Configuration

ServerIron#show running-config
!Building configuration...
!Current configuration : 1843 bytes
!
ver 10.2.01eTG4
!
module 1 bi-0-port-wsm7-management-module
module 2 bi-jc-16-port-gig-copper-module
module 3 bi-jc-16-port-gig-copper-module
!
global-stp
global-protocol-vlan
!
trunk server ethe 2/3 to 2/4
 port-name "To_ISG1" ethernet 2/3
trunk server ethe 3/3 to 3/4
 port-name "To_ISG2" ethernet 3/3
!
session sync-update
!
server active-active-port ethe 3/11 vlan-id 200
!
server force-cache-rehash
!
server port 80
 session-sync
 tcp
!
context default
!
server cache-name SG2 10.98.1.3
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG3 10.98.1.4
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG4 10.98.1.5
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG5 10.98.1.6
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG6 10.98.1.7
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG7 10.98.1.8
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG9 10.98.1.10
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG10 10.98.1.11
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG11 10.98.1.12
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG12 10.98.1.13
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG13 10.98.1.14
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG14 10.98.1.15
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG-VIP 10.98.1.100
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG1 10.98.1.2
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-name SG8 10.98.1.9
 port http
 port http url "HEAD /"
 port http l4-check-only
 port ssl
 port ssl l4-check-only
!
server cache-group 1
 hash-mask 255.255.255.255 0.0.0.255
 filter-acl 101
 cache-name SG2
 cache-name SG3
 cache-name SG4
 cache-name SG5
 cache-name SG6
 cache-name SG7
 cache-name SG9
 cache-name SG10
 cache-name SG11
 cache-name SG12
 cache-name SG13
 cache-name SG14
server cache-group 2
 hash-mask 255.255.255.255 0.0.0.255
 filter-acl 102
 cache-name SG1
 spoof-support
vlan 1 name DEFAULT-VLAN by port
!
vlan 10 by port
 untagged ethe 2/5 to 2/6 ethe 3/5 to 3/6
 router-interface ve 10
 spanning-tree 802-1w
 spanning-tree 802-1w priority 7000
!
vlan 100 by port
 untagged ethe 2/3 to 2/4 ethe 3/3 to 3/4
 router-interface ve 100
 spanning-tree 802-1w
 spanning-tree 802-1w priority 7000
!
vlan 200 by port
 untagged ethe 3/11
 static-mac-address 0012.f2a7.bd4a ethernet 3/11
!
vlan 99 by port
 untagged ethe 3/13
 router-interface ve 99
 spanning-tree 802-1w
 spanning-tree 802-1w priority 7000
!
default-mtu 9000
aaa authentication web-server default local
aaa authentication enable default local
aaa authentication login default local
aaa authentication login privilege-mode
enable telnet authentication
enable aaa console
hostname SI-1
ip acl-permit-udp-1024
ip l4-policy 1 cache tcp http global
ip route 0.0.0.0 0.0.0.0 10.97.0.1
!
no telnet server
username admin password .....
router vrrp-extended
snmp-server
snmp-server community ..... ro
snmp-server host 10.9.71.90 .....
no web-management http
web-management https
!
interface ethernet 2/3
 port-name To_ISG1
!
interface ethernet 2/5
 port-name To_LS-top
 link-aggregate configure key 10500
 link-aggregate active
!
interface ethernet 2/6
 link-aggregate configure key 10500
 link-aggregate active
!
interface ethernet 3/1
 port-name Mgmt
 ip address 10.9.71.240 255.255.255.0
!
interface ethernet 3/3
 port-name To_ISG2
!
interface ethernet 3/5
 port-name To_LS-bottom
 link-aggregate configure key 11500
 link-aggregate active
!
interface ethernet 3/6
 link-aggregate configure key 11500
 link-aggregate active
!
interface ethernet 3/13
 disable
!
interface ethernet 3/14
 disable
!
interface ethernet 3/15
 disable
!
interface ethernet 3/16
 disable
!
interface ve 10
 port-name To_SGs
 ip address 10.98.1.254 255.255.255.0
 ip vrrp-extended vrid 2
  backup priority 150
  ip-address 10.98.1.1
  track-port e 2/3 priority 30
  track-trunk-port e 2/3
  track-port e 3/3 priority 30
  track-trunk-port e 3/3
  enable
!
interface ve 99
 port-name To_ISG2
 ip address 10.99.0.252 255.255.255.0
 ip vrrp-extended vrid 3
  backup
  ip-address 10.99.0.254
  track-port e 2/5 priority 30
  track-port e 3/5 priority 30
  disable
!
interface ve 100
 port-name To_ISGs
 ip address 10.97.0.252 255.255.255.0
 ip vrrp-extended vrid 1
  backup priority 150
  ip-address 10.97.0.254
  track-port e 2/5 priority 30
  track-port e 3/5 priority 30
  enable
!
access-list 101 deny tcp 10.95.100.0 0.0.0.255 any eq http
access-list 101 deny tcp 10.98.100.0 0.0.0.255 any
access-list 101 permit tcp any any
!
access-list 102 permit tcp 10.95.100.0 0.0.0.255 any eq http
access-list 102 permit tcp 10.95.100.0 0.0.0.255 any eq ssl

7. Verifying Operation

Verify that the cache servers are up and check their states:


                              show cache-group

 

 

 

 

Proxy Flow

 

This solution relies on a policy route on the external router to direct client HTTP traffic to an address shared by the ServerIrons, which run VRRP-Extended.  The traffic is distributed to the Proxy SG farm based on a hash of the destination address and the last octet of the client address.  This hash combination was used to ensure a wider spread of the limited streams in use for testing.  In real-life applications, a hash of the destination network address would minimize duplicate caching of content on multiple Proxy SGs.  Once traffic reaches a Proxy SG, the TCP connection is terminated, the HTTP is evaluated, and a new connection carrying the HTTP request is originated from the Proxy SG to the destination server.  This second connection is sourced from the Proxy SG's own address, which ensures that the return packets are forwarded to the same proxy.  Once the proxy receives the return packets from the destination server, the content is evaluated by the Proxy SG and forwarded to the client over the original client-to-proxy connection.

Client Spoofing

 

 

In cases where the client IP address needs to be preserved when traffic reaches the server, the Proxy SG must be configured to reflect the client IP address.  Additionally, the ServerIron must be set to support client spoofing.  This feature allows the ServerIron to create a session table entry for the connection from the proxy to the destination server even though the source IP address is that of the client.  This ensures that the return packet from the server is directed back to the correct proxy.

To support the case where the client address is preserved when it is sent to the server, the router on the left side of the diagram must have a policy route forcing the traffic returning from the server to the ServerIron shared address.  Otherwise, the return packets will be routed directly back to the client, creating TCP errors on the proxy.
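As a toy model of the two cases (all addresses hypothetical, routing reduced to a single decision), the following sketch shows why the policy route is needed only when client spoofing is in use:

```python
# Hypothetical addresses used only for illustration.
CLIENT = "10.95.100.7"
PROXY = "10.98.1.3"       # Proxy SG that owns the client's session
SI_SHARED = "10.98.1.1"   # ServerIron shared VRRP address (policy-route target)

def reply_destination(request_source, policy_route=False):
    """Where the server's reply is delivered.

    Without spoofing, the proxy sources the second connection from its own
    address, so replies naturally return to it.  With spoofing, the source
    is the client's address, and only a policy route on the upstream router
    can force the reply back through the ServerIrons toward the proxy.
    """
    if request_source == PROXY:
        return PROXY
    return SI_SHARED if policy_route else request_source

assert reply_destination(PROXY) == PROXY                        # no spoofing
assert reply_destination(CLIENT, policy_route=True) == SI_SHARED
assert reply_destination(CLIENT, policy_route=False) == CLIENT  # bypasses proxy
```

The last assertion is the failure mode described above: without the policy route, replies go straight to the client and the proxy sees TCP errors.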

 

Caveats

Don’t Proxy Twice

 

Initially, we tested with the Proxy SG devices connected directly to the ServerIrons with a cross-connect trunk between the ServerIrons.  It turns out that, when the cache policy is applied globally, the ServerIron will attempt to proxy any traffic matching the cache policy.  When proxying http, this could mean that any client traffic or even health checks that go from one ServerIron through the other to reach a Proxy SG device will become subject to the cache policy.  Because the setup described in this document is essentially a one-armed arrangement, caching policy could not be configured on a local basis.  To resolve this issue, we added the switches between the ServerIrons and the Proxy SG devices.

Eliminating the cross-connect trunk wasn’t the only step needed to avoid a potential double proxy.  The VRRP setup uses track ports to ensure that, if the interfaces on one side of the primary ServerIron go down, traffic isn’t forwarded from SI-2 to SI-1 due to SI-1 being the VRRP master on the proxy side.  The ServerIron configuration used in this test set the VRRP master to priority 150 with track priority 30.  That allows the primary ServerIron to continue forwarding traffic if only one of the two interfaces on the left side of the diagram goes down.  If both interfaces go down, SI-2 becomes master for the VRRP instances on both sides of the diagram.
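The failover arithmetic can be checked directly. This sketch assumes that each failed tracked port subtracts its track priority from the VRID priority and that the standby ServerIron runs at the common VRRP default priority of 100; both are assumptions inferred from the behavior described above, not quoted from the configuration.

```python
MASTER_BASE = 150    # "backup priority 150" configured on the primary
BACKUP_BASE = 100    # assumed default priority on the standby ServerIron
TRACK_PENALTY = 30   # "track-port ... priority 30" per tracked interface

def effective_priority(base, failed_track_ports):
    # Assumption: each failed tracked port subtracts its track priority.
    return base - TRACK_PENALTY * failed_track_ports

one_down = effective_priority(MASTER_BASE, 1)   # 150 - 30 = 120
both_down = effective_priority(MASTER_BASE, 2)  # 150 - 60 = 90

assert one_down > BACKUP_BASE    # primary stays master with one port down
assert both_down < BACKUP_BASE   # standby takes over with both ports down
```

This is why priority 150 with penalty 30 tolerates a single tracked-port failure but yields mastership on a double failure.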

The last step taken to avoid a double proxy of data was to give the LS switches the top bridge priorities (in 802.1w) on the right side of the diagram.  If this step weren’t taken, failed links could cause traffic to transit both ServerIrons, which would cause both to attempt to proxy the traffic.

 

Balancing Cache Distribution

 

During testing, a limited number of clients and servers were used in the test profile.  To ensure that traffic was widely distributed across the candidate caching devices, we changed the hash mask from the default 255.255.255.0 0.0.0.0 to 255.255.255.255 0.0.0.255.  This caused the ServerIron to assign each destination server and client host (actually, all hosts sharing the same last octet) its own hash value.  Since all traffic for a given hash value is balanced to the same cache server, we wanted to avoid having too few hash values, which would cause unequal load distribution.
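The effect of widening the hash mask can be counted with a small sketch. The destination and client addresses below are made up for illustration, and masking is the only part of the ServerIron's hashing modeled here:

```python
import ipaddress
import itertools

def bucket_key(dst, src, dst_mask, src_mask):
    """The (masked destination, masked source) pair that feeds the hash."""
    m = lambda ip, mk: int(ipaddress.ip_address(ip)) & int(ipaddress.ip_address(mk))
    return (m(dst, dst_mask), m(src, src_mask))

# Hypothetical small test profile: 2 destination servers, 4 client hosts.
dsts = ["93.184.216.34", "93.184.216.35"]
srcs = ["10.95.100.%d" % i for i in range(1, 5)]

default = {bucket_key(d, s, "255.255.255.0", "0.0.0.0")
           for d, s in itertools.product(dsts, srcs)}
widened = {bucket_key(d, s, "255.255.255.255", "0.0.0.255")
           for d, s in itertools.product(dsts, srcs)}

assert len(default) == 1   # servers share a /24, clients ignored: one value
assert len(widened) == 8   # 2 destinations x 4 client last octets
```

With only one distinct hash value, every session would land on a single cache server; widening the mask multiplies the values available for load distribution.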

When testing is done in short runs with frequent changes, it often makes sense to clear the hash buckets so that traffic can be redistributed evenly.  The ServerIron does not have a “clear hash” type of command.  The only way we found to clear the hash bucket mappings is to change the hash mask: changing it to another value and then back to the desired value allowed us to start clean from a traffic distribution perspective.

 

Trunking Protocol

 

The generic firewall shown in the initial diagram supports aggregate interfaces but does not support LACP (802.3ad).  As a result, LACP aggregation does not work correctly: the trunk appears to be up, but traffic is seen on only one link.  (It could be that error messages were going back to a single host; we didn’t take the time to fully diagnose what was happening.)  The solution was to use the older “trunk server” syntax associated with the non-standard trunking supported by IronWare.
