HP has released version E0321 for the Virtual Services Router (VSR) platform. This is a major new release, which includes some very cool new features, such as IRF for the virtual platform.
Download here:
In upcoming posts I will cover some of the new features; this post covers the differences compared to IRF on the physical switches.
IRF for 2 nodes
Up to 2 VSR nodes can be combined into 1 logical router, based on the well-known IRF technology. Since we are creating an IRF system out of 2 VMs, some differences apply compared to IRF on the physical switches.
Initial Configuration
The VSR is by default a standalone router, with interfaces such as GigabitEthernet1/0, GigabitEthernet2/0, etc. To enable IRF mode, you will need to use the command
chassis convert mode irf
The admin can then also assign the IRF member number, priority, etc. After the reboot, the interfaces are re-labeled to e.g. GigabitEthernet1/1/0, GigabitEthernet1/2/0 (member/slot/port) when this is member ID 1.
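After the conversion and reboot, the member ID and priority can be adjusted in system view. As a minimal sketch (the member ID and priority values below are arbitrary examples, and the exact syntax may vary per release):
irf member 1 renumber 2
irf member 2 priority 32
Note that a renumbered member ID typically only takes effect after another reboot.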
IRF Ports
During testing, it appeared that 1 or more dedicated interfaces (but remember these are virtual NICs) are required for the IRF ports, just like with physical IRF.
When multiple interfaces are used, the administrator can configure which interface handles the control and/or data plane traffic. Since each vNIC can be mapped to a dedicated physical NIC through the vSwitch, IRF control and data plane traffic can be sent over separate physical links of the server.
Traditional IRF port configuration, which handles both control and data plane:
irf-port 1
 port group interface GigabitEthernet1/2/0
Option for dedicated control and data plane interfaces on the IRF port:
irf-port 1
 port group interface GigabitEthernet1/5/0 type control
 port group interface GigabitEthernet1/6/0 type data
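Once both members are configured and the IRF links are connected, the usual Comware IRF display commands should apply to verify the setup, for example:
display irf
display irf configuration
display irf link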
Active/Standby
While the physical switches support e.g. active/active Layer 2 link aggregation across IRF members (MLAG, or whatever you want to call it), this is technically not possible here, since 1 VSR VM could run on hypervisor A and the other VSR IRF member could run on hypervisor B. This means that an active/standby model is used: 1 node will handle traffic x, and when that active node fails, the other node will resume operation.
Redundant Ethernet (Reth)
In order to set up this active/standby connection, a new concept is introduced: the Redundant Ethernet (Reth) interface.
This Reth interface is a new logical interface (just like the BridgeAggregation is the logical interface for active/active link aggregation) which provides the active/standby feature for the IRF system of the VSR.
Just like the BridgeAggregation interface, the Redundant Ethernet interface groups 2 interfaces (1 from each member VSR) into a new logical interface. The new Reth interface is a routed port (Layer 3); it cannot be configured as a switch port. The Reth interface object is configured with all the Layer 3 parameters (IP, routing, etc.).
These settings are then active on 1 member interface; when it fails, they become active on the standby interface (same MAC/IP), with a gratuitous ARP to notify the network that the MAC/IP has changed location. During my tests, when I powered off the active node VM, a sub-second failover was achieved and traffic resumed on the other node VM.
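As a rough sketch of what such a Reth configuration could look like (the member/priority syntax is an assumption based on the Comware Reth implementation, and the interface names, priorities and IP address are purely illustrative):
interface Reth1
 member interface GigabitEthernet1/5/0 priority 100
 member interface GigabitEthernet2/5/0 priority 50
 ip address 10.0.0.1 255.255.255.0
The member interface with the higher priority would normally be the active one, and the IP/MAC follow the active member on a failover.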
While the Hypervisor gives nice options with vMotion for planned maintenance, it will not help when the physical server is powered off for whatever reason. This is a simple way to have the network service (NFV) active on 2 VMs in the network (and still have 1 management interface).
Redundancy Groups
While the Redundant Ethernet connection provides link-level failover, a router will typically route from interface A to interface B. With any type of stack, you would want interface A and interface B to be active on the same stack member node.
Applied to IRF on the VSR, this means that the node which is active for Reth A (e.g. the downlink to the users) should also be the active node for Reth B (e.g. the uplink to the WAN).
This can be easily achieved with the basic Reth interface priority settings.
However, when e.g. the WAN uplink goes down on the active node, the standby node will activate the WAN interface for its Reth B member. While this seems fine at first, Reth A (user facing) would still be active on the original node. This means that user traffic would arrive on node 1, travel through the IRF port to reach node 2, and only then be sent over the WAN link.
With Redundancy Groups, it is possible to link the active/standby state of multiple Reth interfaces (or other functions), so when 1 of these interfaces or services fails and moves to the other node, all services in the group move together. This ensures that both the uplink and downlink routed ports are handled by the same VM.
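Purely to illustrate the concept (this outline is an assumption based on the Comware redundancy group feature; the group name, node numbers and priorities are hypothetical and the exact command syntax should be checked against the VSR configuration guide):
redundancy group RG1
 member interface Reth1
 member interface Reth2
 node 1
  bind chassis 1
  priority 100
 node 2
  bind chassis 2
  priority 50
With both Reth interfaces in the same group, a failure that moves one of them to the other member moves the other one as well, so the uplink and downlink stay on the same node.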
Summary
When you want NFV and use the VSR, the NFV function should be active on 2 VMs for fast redundancy; a host failure followed by a VM restart on another physical host simply takes too much time.
Having 2 VMs with 2 independent VSRs would also mean setting up VRRP for gateway redundancy, maintaining 2 configuration points, etc.
With IRF, the NFV network function gets the full redundancy (active/standby), but you maintain a single configuration and management point (e.g. SNMP interface statistics are always coming from the same node IP, independent of the active/standby state).
This is a very easy way to achieve redundant NFV functionality in the network.
Combined with the firewall function and the stateful synchronization of firewall sessions inside the IRF system, you get a very cheap and convenient stateful redundant firewall with the VSR+IRF combination.