Tuesday, January 15, 2008

Complex networking using Linux on Power blades

Blades are an excellent choice for many applications and services, especially in the telecommunications service provider industry. But these provider networks often demand complex configurations that require up-front planning to meet their stringent functional requirements. In this article, learn how to plan and set up the necessary network configurations for a POWER6™ JS22 blade deployment.

Blade-based operational models are of tremendous value in the wired and wireless telecommunications domain for several reasons:

  1. Small footprints mean cost-effective use of data center space.
  2. Deployment meets NEBS requirements for distributed deployments. (NEBS, or Network Equipment-Building System, is a set of safety, spatial, and environmental criteria that equipment must meet before it can be deployed in a carrier's network.)
  3. Cost-effective massive horizontal scalability lowers deployment costs for the telecommunications service provider.
  4. Centralized management support provides better OAM&P support for in-network deployment in service provider networks. (OAM&P stands for "operations, administration, maintenance, and provisioning." The term describes this collection of disciplines and specific software used to track it.)
  5. Built-in support for continuous-availability-based operational models—including upgrades and maintenance activities—avoids service downtime from a subscriber perspective.

These additional considerations are key in a telecommunications service provider environment, especially one with complex configurations:

  • Multiple VLANs. These are used for CDN (Customer Data Network) and Management (OAM&P) traffic. Considering them separately ensures that subscriber QoS (Quality of Service) is effectively maintained across multiple LPARs (logical partitions).
  • Micro-partitioning and virtualization. These strategies help maximize the capacity utilization and TCO (Total Cost Of Ownership).
  • Existing network complexities. Existing networks can have a higher degree of load variability, requiring resource load balancing among multiple client LPARs.

In this article, we describe an implementation of a multiple-VLAN configuration of a blade chassis using Cisco switches paired in an active/passive configuration. In our example, we configured networks to connect multiple VLANs on a BladeCenter® JS22 using Linux® on Power. The architecture consists of six Cisco Catalyst switch modules, each with fourteen internal ports and four external 1 Gb ports.

To properly leverage all six switches in the chassis, the blade requires six Ethernet interfaces. Ethernet interface ent0 on the blade maps to the first switch in the blade chassis; each consecutive Ethernet interface maps to the next available switch. This fixed mapping is a limitation, because administrators cannot choose which switch a given physical adapter on the blade maps to.

When creating the network architecture for your blades, you must have one physical interface for each Cisco switch you want to use in the chassis. If a blade has fewer adapters than the chassis has switches, the switches without an associated physical adapter on that blade cannot be used by the blade.

It is important to understand how the Ethernet interfaces on the blade pair with the switches within the chassis. The first switch in the chassis is normally located in the upper left of the blade chassis, just below the power plug; it maps to ent0, the first interface on the blade. Figure 1 shows the numbering of the switches in our configuration.


Figure 1. Physical adapter switches in the configuration we're using

Determining switch pairing is extremely important for high availability. In a typical configuration, one power distribution unit (PDU) supplies power to the top half of the blade chassis while another PDU provides power to the bottom half of the chassis. If you are creating a redundant solution, it's important to split the primary and secondary switch between the upper half and lower half of the chassis.

In our case, we created a solution with the following switch pairs: (1, 9) (2, 3) (4, 7). Because adapter pairs (ent0, ent1), (ent2, ent3), and (ent4, ent5) are on the same physical I/O card, we also needed to make sure that the network traffic for our target VLANs did not travel across the same I/O card. Our configuration splits traffic across multiple PDUs and across multiple interfaces.

Even though the pairing of the adapters and switches may seem simple, there are multiple steps to configure the IVM (Integrated Virtualization Manager), switches, and LPARs to take advantage of this architecture. Figure 2 represents the setup of one of our blades with the associated switch, trunking, and VLAN tagging. This configuration allows multiple VLANs to communicate over the same physical adapters to multiple switches.


Figure 2. One of the blades with the associated switch, trunking, and VLAN tagging

In this example, each LPAR has two Ethernet adapters that connect to one Virtual Ethernet Adapter (VEA) on the IVM. Notice that the VEA on the IVM has multiple VLANs associated with it. Traffic for each VLAN is separated on the LPARs by their adapters. The VEA trunks the VLANs together and sends them over the Shared Ethernet Adapter (SEA) through a Link Aggregation Device and out to the network via one of the chassis switches. The switches route the VLAN traffic to the appropriate network through the use of VLAN tagging.

Five steps to configuring VLANs

There are five main steps (and one optional step) for configuring VLANs for client LPARs on the IVM:

  1. Configure Cisco switches. (An optional step at this point may be to create a link aggregation device, commonly referred to as the LNAGG device.)
  2. Create IEEE 802.1Q VLAN-aware interfaces on the IVM to support the VLANs you want to use on each interface. It is very important to design the VLANs before doing any work, because the VLAN-aware interfaces cannot be modified after they are created; you would have to delete and re-create them, which wastes significant time.
  3. Assign the Virtual Ethernet Adapter to the physical adapter (LNAGG) in the Virtual Ethernet Menu on the IVM.
  4. Modify the LPARs' properties to map the new virtual adapters to LPAR adapters. Make sure the LPARs are inactive before changing the network device properties.
  5. Boot each LPAR and configure the new interfaces.

The example in this article is intended for a fresh install of a blade.


Step 1: Configure the Cisco switches

You may skip this step if the switches are already properly configured with the VLANs that you want to use. The example does not demonstrate how to configure spanning tree or the spanning-tree switch priorities that may be required. If you want to follow the example, configure your switches to match the configuration that follows.

Log in to the switch

Type the following commands in this order:

  • enable
  • config
  • interface GigabitEthernet0/1
  • description blade1
  • switchport trunk native vlan 383
  • switchport trunk allowed vlan 373,383
  • switchport mode trunk

These commands configure port 1 on the switch, the internal port that connects to blade 1, for trunking. If traffic arrives at the switch from the blade, or from the external ports, without a VLAN tag, the switch tags it with the native VLAN, 383. The port allows only traffic from VLANs 373 and 383, in this case, to route to and from the IVM's VEAs. To change which VLANs can access the port from the IVM, simply change the list of trunk-allowed VLANs.
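
For example, to later grant one more VLAN access through this port, you can extend the allowed list from configuration mode rather than retyping it. The VLAN number here is hypothetical:

  • interface GigabitEthernet0/1
  • switchport trunk allowed vlan add 390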

Splitting the traffic

Once the traffic is trunked and sent to the switch, the switch will determine which external port routes the VLAN traffic. In the example, we are sending external VLAN 373 traffic across port 17, and VLAN 383 over port 20.

To configure VLAN 373 traffic over port 17, type the following commands on the Cisco switch:

  • interface GigabitEthernet0/17
  • description extern1
  • switchport access vlan 373
  • switchport mode access

To configure VLAN 383 traffic over port 20, type the following commands on the Cisco switch:

  • interface GigabitEthernet0/20
  • description extern4
  • switchport access vlan 383
  • switchport mode access

After setting the external port configuration, type exit twice to back out of the configuration mode of the command line. Then run the show run command to see your configuration. The show run command displays the actively running switch configuration. The configuration has not yet been written to the switch's memory, but you can see the changes that are currently running on the switch. If you look in the configuration, you can see the changes we made in the steps above. Look for the Ethernet port for blade 1 as we configured it:


Listing 1. Displaying the configuration of switch port 1
interface GigabitEthernet0/1
       description blade1
       switchport trunk native vlan 383
       switchport trunk allowed vlan 373,383
       switchport mode trunk


If you issue a show config command, you will see the previous stored configuration of the interface, not the one we just created. If the switch reboots in this state, the current running configuration will be lost. To write it to memory, type write on the command line of the switch. If you run show config again, the configuration stored on the switch for our interfaces will match the running configuration.
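
To recap, the save-and-verify sequence, run from the switch's enable prompt, is:

  • show run (display the active running configuration)
  • write (commit the running configuration to the switch's stored memory)
  • show config (confirm the stored configuration now matches the running one)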


Optional step: Create a Link Aggregation Adapter

A link aggregation device can be used to connect two physical adapters together to look like one adapter. This is useful for creating an active/passive configuration for failover.

In our example, we wanted to create an active/passive configuration on the blade linking adapters ent0 and ent5 together. Issue the following command from the IVM's command line to create a link aggregation device (LNAGG) with ent5 as the backup adapter:

$ mkvdev -lnagg ent0 -attr backup_adapter=ent5

Use the lsdev command to verify the creation of the LNAGG device.


Listing 2. Verifying the creation of the LNAGG device
$lsdev |grep ^ent
ent0             Available   Logical Host Ethernet Port (lp-hea)
ent1             Available   Logical Host Ethernet Port (lp-hea)
ent2             Available   Gigabit Ethernet-SX Adapter (e414a816)
ent3             Available   Gigabit Ethernet-SX Adapter (e414a816)
ent4             Available   Gigabit Ethernet-SX PCI-X Adapter (14106703)
ent5             Available   Gigabit Ethernet-SX PCI-X Adapter (14106703)
ent6             Available   Virtual I/O Ethernet Adapter (l-lan)
ent7             Available   Virtual I/O Ethernet Adapter (l-lan)
ent8             Available   Virtual I/O Ethernet Adapter (l-lan)
ent9             Available   Virtual I/O Ethernet Adapter (l-lan)
ent10            Available   EtherChannel/IEEE 802.3ad Link Aggregation
ent11            Available   EtherChannel/IEEE 802.3ad Link Aggregation
ent12            Available   EtherChannel/IEEE 802.3ad Link Aggregation
ent13            Available   Virtual I/O Ethernet Adapter (l-lan)
ent14            Available   Virtual I/O Ethernet Adapter (l-lan)
ent15            Available   Virtual I/O Ethernet Adapter (l-lan)
ent16            Available   Shared Ethernet Adapter
ent17            Available   Shared Ethernet Adapter
ent18            Available   Shared Ethernet Adapter


In the output from our lsdev command, you can see that the link aggregation devices (ent10 - ent12) look like physical adapters. This allows you to map a link aggregation device to a virtual device (ent13 - ent15) via a Shared Ethernet Adapter (ent16 - ent18); the SEA treats the LNAGG device as though it were a physical adapter. The Virtual Ethernet Adapters (VEAs) ent6 - ent9 are created by default and are not VLAN-aware devices, nor can you modify them to become VLAN aware. ent0 through ent5 are the physical adapters on the blade server.
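
If you want to confirm these relationships rather than inferring them from lsdev output, the VIOS lsmap command lists each SEA together with its backing (physical or LNAGG) device and its virtual adapter. This quick check is optional:

$ lsmap -all -net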

Use the -attr flag on the lsdev command to see how the LNAGG device maps to its physical adapters: $lsdev -dev ent10 -attr produces the mapping shown in Table 1.


Table 1. LNAGG to PA mapping with $lsdev -dev ent10 -attr

Attribute        Value           Description                                   User-settable?
adapter_names    ent0            EtherChannel Adapters                         True  <- Primary
alt_addr         0x000000000000  Alternate EtherChannel Addr                   True
auto_recovery    no              Enable automatic recovery after failover      True
backup_adapter   ent5            Adapter used when whole channel fails         True  <- Backup
hash_mode        default         Determines how outgoing adapter is chosen     True
mode             standard        EtherChannel mode of operation                True
netaddr                          Address to ping                               True
noloss_failover  yes             Enable lossless failover after ping failure   True
num_retries      8               Times to retry ping before failing            True
retry_time       1               Wait time (in seconds) between pings          True
use_alt_addr     no              Enable Alternate EtherChannel Address         True
use_jumbo_frame  no              Enable Gigabit Ethernet Jumbo Frames          True
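
Because these attributes are user-settable, you can also tune them from the IVM command line with the chdev command. As a sketch, assuming ent10 is your LNAGG device and it is not currently in use, enabling automatic recovery after failover would look like this:

$ chdev -dev ent10 -attr auto_recovery=yes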


Step 2: Create virtual adapters on the IVM

In this example, we're going to configure network traffic to flow through the Virtual Ethernet Adapter for VLANs 373 and 383. The first step in configuring the solution is to create the virtual adapter on the IVM that is needed to transport traffic to and from the client LPARs. Let's create a virtual adapter with a primary VLAN ID of 373 and an additional VLAN ID of 383.

To create the VLAN-aware interface on the IVM:

  1. Using your favorite telnet/SSH client (such as PuTTY), open a session to the IVM and log in as padmin (the default password is "passw0rd", with a zero instead of an "O").
  2. Use the lshwres -r virtualio --rsubtype eth --level lpar command to list the Ethernet resources:

    Listing 3. Listing the Ethernet resources
    $lshwres -r virtualio --rsubtype eth --level lpar
    lpar_name=IVM_01,lpar_id=1,slot_num=3,state=1,ieee_virtual_eth=0,port_vlan_id=1,
     addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=463337C4B503
    lpar_name=IVM_01,lpar_id=1,slot_num=4,state=1,ieee_virtual_eth=0,port_vlan_id=2,
     addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=463337C4B504
    lpar_name=IVM_01,lpar_id=1,slot_num=5,state=1,ieee_virtual_eth=0,port_vlan_id=3,
     addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=463337C4B505
    lpar_name=IVM_01,lpar_id=1,slot_num=6,state=1,ieee_virtual_eth=0,port_vlan_id=4,
     addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=463337C4B506

  3. Use the chhwres command to create a virtual adapter on the IVM that supports IEEE VLAN awareness and the additional VLANs you want on that interface. To do this, issue the following command from the command line of the IVM: $ chhwres -p IVM_01 -o a -r virtualio --rsubtype eth -s 15 -a '"ieee_virtual_eth=1","port_vlan_id=373","addl_vlan_ids=383","is_trunk=1","trunk_priority=1"'. The chhwres command tells the IVM how to construct a new VLAN-aware Virtual Ethernet Adapter.

    There are some important features of the command that you need to know to create multiple virtual adapters on the IVM:
    • -p partition: This flag tells chhwres which partition to change; in our command, we are changing the IVM partition itself.
    • -s nn: This tells the IVM to use a particular slot number. If this parameter is not specified, the IVM uses the next available slot. The slot number is required when a device is removed from the IVM (see the removal sketch after this list).
    • ieee_virtual_eth: A value of 1 informs the IVM that this adapter supports IEEE 802.1Q. This must be set to 1 if additional VLANs are required.
    • port_vlan_id: This is the primary VLAN for the virtual adapter.
    • addl_vlan_ids: If trunking is enabled, this parameter accepts the additional VLANs.
    • is_trunk: This attribute must also be set to 1 if you are passing multiple VLANs.
    • trunk_priority: When trunking, the priority of the adapter must be set between 1 and 15.
  4. Ensure the creation is complete by re-running the lshwres command and looking for the new devices.

    Listing 4. Displaying the new devices
    $lshwres -r virtualio --rsubtype eth --level lpar
    lpar_name=IVM_01,lpar_id=1,slot_num=3,state=1,ieee_virtual_eth=0,port_vlan_id=1,
     addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=463337C4B503
    lpar_name=IVM_01,lpar_id=1,slot_num=4,state=1,ieee_virtual_eth=0,port_vlan_id=2,
     addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=463337C4B504
    lpar_name=IVM_01,lpar_id=1,slot_num=5,state=1,ieee_virtual_eth=0,port_vlan_id=3,
     addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=463337C4B505
    lpar_name=IVM_01,lpar_id=1,slot_num=6,state=1,ieee_virtual_eth=0,port_vlan_id=4,
     addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=463337C4B506
    lpar_name=IVM_01,lpar_id=1,slot_num=15,state=1,ieee_virtual_eth=1,port_vlan_id=373,
     addl_vlan_ids=383,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=463337C4B50F
    lpar_name=IVM_01,lpar_id=1,slot_num=16,state=1,ieee_virtual_eth=1,port_vlan_id=6,
     "addl_vlan_ids=22,23",is_trunk=1,trunk_priority=1,is_required=0,
     mac_addr=463337C4B510
    lpar_name=IVM_01,lpar_id=1,slot_num=17,state=1,ieee_virtual_eth=1,port_vlan_id=7,
     "addl_vlan_ids=565,566,567,568",is_trunk=1,trunk_priority=1,is_required=0,
     mac_addr=463337C4B511

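As noted in the flag descriptions above, the slot number is what identifies an adapter when you remove it, which is why we specify it explicitly. A sketch of undoing this step by removing the adapter we created in slot 15:

$ chhwres -p IVM_01 -o r -r virtualio --rsubtype eth -s 15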

Step 3: Assign the virtual Ethernet ID to a physical adapter

Once the link aggregation device has been created, it needs to be mapped to a virtual adapter. This is easily accomplished via the IVM GUI as shown in Figure 3.


Figure 3. Mapping the LNAGG to the virtual adapter via the IVM GUI

After logging in to the GUI, select "View/Modify Virtual Ethernet" from the left-side navigation, then choose the "Virtual Ethernet Bridge" tab. From this menu you can see the virtual Ethernet adapter we created previously, with 373 as the primary VLAN and 383 as the additional VLAN. From the drop-down box, select the link aggregation device we created in the previous step. Once the new device has been chosen, click Apply. This creates a Shared Ethernet Adapter (SEA) within the IVM.

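If you prefer the IVM command line to the GUI, clicking Apply corresponds roughly to creating the SEA with the mkvdev command. The following is a sketch only, assuming the LNAGG device is ent10 and the VLAN-aware virtual adapter is ent13 (substitute the names from your own lsdev output); -defaultid matches the primary VLAN ID of the default virtual adapter:

$ mkvdev -sea ent10 -vadapter ent13 -default ent13 -defaultid 373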

Step 4: Modify the LPAR's properties

Once a physical adapter or LNAGG device has been mapped to a Virtual Ethernet ID on the IVM, a virtual adapter needs to be created for each logical partition. The first step is to log in to the GUI on the IVM (see Figure 4).


Figure 4. Creating the virtual adapter for each LPAR

After logging in, select View/Modify Partition in the upper left corner. Once the page refreshes, choose the LPAR you are going to modify.

Select the client LPAR in the menu, making sure that your browser allows pop-up windows. In the pop-up window, select the "Ethernet" tab (Figure 5).


Figure 5. Remember to power off the LPAR before modifying its properties

In Figure 5, you can see that the Virtual Ethernet pull-downs are grayed out; that is because the LPAR was running when the screenshot was taken. Make sure the client LPAR is powered off or inactive before modifying the properties.

On this screen, use the pull-downs to map the VLAN-aware VEAs to the client LPAR's adapters. Notice that each virtual adapter is associated with one VLAN; this allows the IVM to attach VLAN tags to traffic as it comes in from the operating system and to send it out the appropriate LNAGG device. If more adapters are needed, click Create Adapter.


Step 5: Configure Linux to use the VLAN-aware interfaces

Once the IVM and Cisco switches have been configured, you may need to do one additional step if the configuration requires static IP addresses for your Linux partitions. From the IVM GUI, activate the LPAR.

  1. Log in to the box and change your user to root.
  2. Type the following: cd /etc/sysconfig/network-scripts.

If you run the ifconfig command, you will see the virtual adapters, each mapping an individual VLAN to the LPAR. With your favorite editor, change the interface parameters to meet your configuration requirements. The following is an example of an interface file, ifcfg-eth0, with a static IP address:


Listing 5. An interface with a static IP address
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:1A:64:8C:B2:32
IPADDR=192.168.1.44
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Ethernet
GATEWAY=192.168.1.1


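Each LPAR in this design has two adapters, one per VLAN, so the second interface gets its own file, such as ifcfg-eth1. Here is a sketch with placeholder addresses and hardware address; substitute the values for your second network:

DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:1A:64:8C:B2:33
IPADDR=192.168.2.44
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Ethernet
GATEWAY=192.168.2.1
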
Restart the interfaces using /etc/init.d/network restart.


Conclusion

As with any deployment, planning is crucial to success. To avoid costly rework, it's important to lay out your network design before implementing a complex network within the blade chassis. From our experience, we can tell you that reconfiguring a complex IVM-supported network requires considerable effort; in fact, the administrator usually must remove the previous configuration before reconfiguring.

Planning is also critical to the install because you cannot add new VLANs to the virtual adapters on the fly in the IVM. Since you can have only one IVM on the JS22, you cannot use SEA failover as you can in a traditional VIOS installation. Link aggregation provides a mechanism for routing traffic across multiple switches in the event that a switch fails. When considering redundancy in the blade, remember that the top half of the blade chassis is powered by one PDU, while the bottom half is powered by the other.

All these considerations add up to a relatively complex network implementation, so our best advice is to plan before you act.
