Combining FCoE SAN A/B and IRF with BridgeAggregation

FCoE and BridgeAggregation

Storage networks are traditionally built as two fully separate fabrics, SAN A and SAN B, isolated by a so-called "air-gap".

When using FCoE, this air-gap is nearly impossible to maintain (unless the CNA adapters would carry storage traffic only), because the 2 CNA cards also need to be connected to the Ethernet/IP network, and there is only 1 Ethernet/IP network.

Cisco has the concept of vPC for this, which is nice, but it already accepts the fact that there is a cable between the devices of SAN-A and SAN-B (used for Ethernet MAC synchronization). There is the concept of the isolated control plane, but again, through the vPC control packets, these control planes do communicate.

HP has IRF, which synchronizes not only the L2 MAC information between the switches, but all the other control plane protocols as well (that is the point of a stack, obviously). At the conceptual level, however, each member runs its own processes, and these are simply synchronized over an Ethernet link (the IRF port) using IRF control plane packets.

So how different is this from the vPC model (separate processes on each side, with control packets synchronizing only the L2 tables)? It is the same idea in a more complete form: separate processes on each member, with IRF control packets synchronizing all tables.

Accepting the logical airgap

You can have long discussions on this topic, but I believe the future will be focused on keeping SAN A/B as isolated as possible, without the physical air-gap, which is simply impossible in a truly converged network.

Single IRF with SAN A/B

IRF has been available for years; it is field-proven and very easy to use. Now we need to configure it in such a way that SAN A/B can be transported through a single IRF system, with as much isolation as possible.

This is achieved by defining 2 VSANs (e.g. VSAN100 and VSAN200), and making sure VSAN100 is hosted only on unit1, while VSAN200 is hosted only on unit2.

Each VSAN is internally mapped to a VLAN, for example:

  • VSAN100 could be hosted on VLAN4001
  • VSAN200 could be hosted on VLAN4002

When the admin makes sure that VLAN4001 has member ports only on unit1, and VLAN4002 has member ports only on unit2, the 2 VSANs are isolated at the data plane.
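As a minimal Comware 7 sketch of this mapping (the port numbers here are hypothetical examples, and the exact FCoE mode/VSAN commands can vary per platform and software release):

vsan 100
vsan 200

vlan 4001
 fcoe enable vsan 100

vlan 4002
 fcoe enable vsan 200

# SAN-A: member ports on IRF unit1 only
interface ten 1/0/10
 port link-type trunk
 port trunk permit vlan 4001

# SAN-B: member ports on IRF unit2 only
interface ten 2/0/10
 port link-type trunk
 port trunk permit vlan 4002

The isolation is purely administrative here: nothing stops VLAN4001 from being permitted on a unit2 port, so this placement discipline must be enforced by the admin.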

How about link aggregation ?

When a server is connected to a 2-switch server access IRF system, it is possible to bundle the 2 switch ports in a BridgeAggregation group (link aggregation).

For a lot of deployments, this is not really required, since e.g. VMware will pin a VM to a physical server NIC by default. Only when that NIC fails will the VM traffic be sent over the other NIC (so there is a MAC address move on the network only on a link failure, not a continuous MAC flap).

Now if you want the server to be active/active (through LACP or manual link aggregation), you will need a Bridge Aggregation on the switch side.

If the server is using 2 CNA adapters, these could be used for storage and data of course.

So how can the data traffic use active/active setup, while the storage CNA function needs to be connected to SAN-A (on IRF unit1) and SAN-B (hosted by IRF unit2)?

BridgeAggregation : ignore vlan

This is done with a very simple new feature: the ignore-vlan configuration on the BAGG.

A classic BAGG setup is like:

interface BridgeAggregation 3
 port link-type trunk
 port trunk permit vlan 100 to 200

interface ten 1/0/3
 port link-type trunk
 port trunk permit vlan 100 to 200
 port link-aggregation group 3

interface ten 2/0/3
 port link-type trunk
 port trunk permit vlan 100 to 200
 port link-aggregation group 3

If you are familiar with Comware, you will know that the physical member ports of a BridgeAggregation group must have the same port link type and permitted VLANs as the parent BAGG interface.

The new feature lets you make exceptions to this rule, which is very useful for the 2 FCoE VLANs: you can now allow VSAN100 (VLAN4001) on interface 1, and VSAN200 (VLAN4002) on interface 2. This follows the "isolation" principle covered earlier.

At the same time, all other data vlans will be enabled on both interfaces and the BAGG parent interface.

Sample new configuration:

interface BridgeAggregation 3
 link-aggregation ignore vlan 4001 to 4002
 port link-type trunk
 port trunk permit vlan 100 to 200

interface ten 1/0/3
 port link-type trunk
 port trunk permit vlan 100 to 200 4001
 port link-aggregation group 3

interface ten 2/0/3
 port link-type trunk
 port trunk permit vlan 100 to 200 4002
 port link-aggregation group 3


As you can see, the parent BAGG interface does not permit VLANs 4001 and 4002; these are permitted only on the physical member ports. Normally this would lead to "unselected" ports, but the ignore-vlan feature ensures these VLANs are not checked against, and do not need to match, the parent BAGG interface.

Now each physical interface can have its own Virtual Fibre Channel (VFC) interface bound to it, which in turn is bound to VSAN100 or VSAN200, while the data VLANs can still use active/active load balancing.
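A sketch of that final step, continuing the example above (the VFC interface numbers and the F-mode setting are assumptions; check the FCoE configuration guide for your platform):

interface Vfc100
 bind interface ten 1/0/3
 fc mode f
 port trunk vsan 100

interface Vfc200
 bind interface ten 2/0/3
 fc mode f
 port trunk vsan 200

Because each VFC is bound to a physical member port rather than to the BridgeAggregation, the FCoE traffic of each fabric stays on its own IRF member, completing the logical SAN A/B separation.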

