The most popular innovation of IBM® AIX® 6.1 is arguably
workload partitioning, which lets you run fewer operating
system images on your managed system by virtualizing
operating system resources. Live Application Mobility is an
important component of workload partitioning and provides
increased availability for workload partitions (WPARs).
Simply put, it allows you to move WPARs from one logical
partition (LPAR) to another while the WPARs are up and
running, and, through the WPAR manager, it supports
policy-based relocation of workloads between systems that
use WPARs. This article explores how and when to use Live
Application Mobility and how to configure a system and its
applications to run it.
WPARs, LPARs, and
Live Application Mobility
Among other benefits, WPARs allow you to deploy
applications more quickly and require far fewer dedicated
hardware resources. In fact, unlike a logical partition
(LPAR), no physical resources are actually required to build
a WPAR. As most system administrators have come to realize,
the biggest disadvantage of LPARs is maintaining multiple
OS images, along with the risk of over-committing expensive
hardware resources, such as CPU and RAM. In other words,
while partitioning helps you consolidate and virtualize
hardware within a single box, operating system
virtualization through WPAR technology goes one step further
and allows an even more granular approach to resource
management. By sharing a single OS image, it makes more
efficient use of CPU, RAM, and I/O resources.
Rather than a replacement for LPARs, WPARs are a
complement to them, allowing you to further virtualize
application workloads through operating system
virtualization. WPARs also work very well with Role-Based
Access Control (RBAC), another important innovation in
AIX 6.1. So what does this have to do with Live
Application Mobility?
Live Application Mobility allows you to relocate running
WPARs from one LPAR to another. While Sun has a similar
concept with its zone-based strategy, it does not provide
for hot migration of running applications. Of all the
UNIX®-based systems, only the IBM AIX OS has this important
innovation. It works by using checkpointing to move the
actual running partitions: the checkpoint saves and
validates the state of the current application, and the
application is then started back up on the other LPAR from
this saved state. Do you still need High Availability
solutions such as HACMP if you will be using this feature?
Absolutely. It's important to make the distinction that
Live Application Mobility provides increased availability
during scheduled outages, not unscheduled outages. One needs
to actively use the WPAR manager or the command-line
interface to initiate the movement of the WPARs; it is not
automatic. Live Application Mobility is actually an optional
feature that is enabled within the WPAR manager component.
What's the difference between partition mobility and Live
Application Mobility? Partition mobility is a feature of
POWER6™ that allows you to migrate entire AIX or Linux®
LPARs from one physical server to another. It does not
require AIX 6.1 or WPARs. This feature helps when
scheduling downtime for entire frames.
If you need to take an entire managed system offline,
you can move its partitions to another server. It also
allows you to balance workloads and resources by moving
LPARs to different physical servers. Live Application
Mobility is an innovation of AIX 6.1 alone and is a
component of its WPAR strategy, allowing you to move
workloads rather than entire partitions. The target of a
relocation can be a different server, though it doesn't have
to be. It is more flexible, as you can use it in
environments with a mixed physical architecture: POWER5 and
POWER6. It moves applications away from systems that require
scheduled downtime for maintenance. It can also be used to
improve performance by moving workloads from overtaxed
servers to less busy ones. Further, it can help provide
energy savings by consolidating workloads in such a way
as to allow a physical server to sleep during non-peak
periods. The table below shows some basic differences
between Live Application Mobility and Partition Mobility.
Table 1. Differences between Live
Application Mobility and Partition Mobility
TYPE        | Live Application Mobility                | Live Partition Mobility
OS          | AIX 6.1                                  | Linux, AIX 5.3, AIX 6.1
Hardware    | PowerPC® 970, POWER4™, POWER5™, POWER6™  | POWER6™
Granularity | WPARs                                    | LPARs
Live Application Mobility --
Configuration challenges
This section looks at how to configure Live Application
Mobility.
There are two ways to configure Live Application
Mobility: using the WPAR manager (part of the IBM Systems
Director family) or using the command line. IBM strongly
suggests you use the WPAR manager, and having used both, I
would say the manager is simpler and much more powerful.
Furthermore, it performs certain compatibility tests between
the source and target environments that are simply not
provided from the command line. IBM goes as far as to say:
"Using the WPAR manager is therefore the only recommended
way to perform a WPAR relocation" (see
Resources for
a link). That was enough for me not to even consider
demonstrating moving WPARs around from the command line;
however, if you are so inclined to experiment, the
command-line tools include the following commands (a usage
sketch follows the list):
chkptwpar -- This is the command that
creates a snapshot of all the tasks in a WPAR.
killwpar -- This command kills all
tasks belonging to a paused WPAR.
restartwpar -- This command restarts the
workload partition from a saved checkpoint state.
resumewpar -- This command resumes
execution of a paused or frozen WPAR.
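For those who do experiment, here is a minimal sketch of
what a manual relocation might look like with these tools.
The WPAR name, state directory, and log paths are
illustrative, and the exact flags should be verified against
the man pages on your release:

# On the source LPAR: checkpoint the running WPAR to a state directory
# on NFS that the target LPAR can also reach, killing the source copy
chkptwpar -k -d /wparstate/app20 -o /tmp/app20_chkpt.log app20

# On the target LPAR: restart the WPAR from the saved state
restartwpar -d /wparstate/app20 -o /tmp/app20_restart.log app20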
The WPAR manager uses a browser-based interface that
allows you to manage WPARs from virtually any platform. It
is designed to manage WPARs and provides the enablement for
Live Application Mobility. It is also a licensed product,
which means it costs money. It includes a centralized DB2®
database, along with an agent that needs to be installed on
each managed system. This is very helpful because you do not
need to log into the LPAR itself to create, configure, and
enable Live Application Mobility. It also provides for
policy-based mobility, which can significantly reduce your
overall workload.
When planning for Live Application Mobility, it's
important to start by defining your reason for relocating
the WPARs. Once you understand the why behind your decision,
you'll also need to take into account the workloads on both
the source and target systems. It's one thing to use Live
Application Mobility to move an application off an LPAR
whose physical server needs maintenance for a couple of
hours; it's another issue entirely to use it to improve
application performance. When using it for the latter
purpose, you'll need to look for frames that have more CPU
and memory headroom than the source systems. If your goal
for using Live Application Mobility is energy management or
server consolidation, then resource utilization is not as
important.
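If performance is your motivation, a quick look at the CPU
entitlement and memory headroom on a candidate target LPAR
can inform the decision. A minimal sketch using standard AIX
tools (the grep pattern matches typical lparstat -i field
names):

# Check the candidate target's CPU entitlement and memory configuration
lparstat -i | grep -E "Entitled Capacity|Online Memory"

# Sample the run queue and free memory: 5 samples, 2 seconds apart
vmstat 2 5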
WPAR manager and Live Application
Mobility demo
This section shows how to install WPAR manager and
relocate a running WPAR.
First, install the WPAR manager. There are two parts to
the process:
- Installation of the WPAR manager
- Installation of the agent
The agent goes on every system that will be managed by
the WPAR manager, while the WPAR manager and centralized
database are installed on only one system. Start by
installing the filesets for the WPAR manager (see Listing
1).
Listing 1. Installing the filesets for
the WPAR manager
# installp -acqgYXd . wparmgt.mgr
Name Level Part Event Result
-------------------------------------------------------------------------------
wparmgt.mgr.rte 1.1.1.0 USR APPLY SUCCESS
wparmgt.mgr.rte 1.1.1.0 ROOT APPLY SUCCESS
wparmgt.cas.agentmgr 1.3.2.18 USR APPLY SUCCESS
wparmgt.cas.agentmgr 1.3.2.18 ROOT APPLY SUCCESS
tivoli.tivguid 1.3.0.0 USR APPLY SUCCESS
tivoli.tivguid 1.3.0.0 ROOT APPLY SUCCESS
# installp -acqgYXd . wparmgt.db
+-----------------------------------------------------------------------------+
Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...
SUCCESSES
---------
Filesets listed in this section passed pre-installation verification
and will be installed.
Selected Filesets
-----------------
wparmgt.db.db2 1.1.1.0 # Workload Partitions Manager ...
Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
wparmgt.db.db2 1.1.1.0 USR APPLY SUCCESS
wparmgt.db.db2 1.1.1.0 ROOT APPLY SUCCESS
When the filesets are installed, you need to configure the database:
# /opt/IBM/WPAR/manager/db/bin/DBInstall.sh -dbinstallerdir /db2 -dbpassword my_password
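As an optional sanity check (not part of the installer
output), you can confirm that the DB2 engine set up by
DBInstall.sh is actually running before continuing:

# The db2sysc engine process should be running under the DB2 instance owner
ps -ef | grep [d]b2sysc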
When this is completed, you'll need to configure the
database connection between the WPAR manager and the
database, and define the WPAR agent registration password.
While there are three supported modes for running the WPAR
manager configurator, we used the console mode -- the
non-GUI version -- for our install. We found this to be the
simplest method.
Listing 2. Console method for running
the WPAR manager configurator
lpar55p682e_pub[/tmp/wparmgr] > /opt/IBM/WPAR/manager/bin/WPMConfig.sh -i console
Preparing CONSOLE Mode Installation...
===============================================================================
Choose Locale...
----------------
1- Catala
2- Deutsch
->3- English
4- Espanol
5- Francais
6- Italiano
7- Portugues (Brasil)
===============================================================================
WPAR Manager Configuration Summary
----------------------------------
Click Next to configure WPAR Manager with the following values.
Click Cancel to terminate.
WPAR Manager Access:
Public Port: 14080
Secure Port: 14443
Database Access:
Hostname: lpar55p682e_pub
Username: db2wmgt
Password: ********
Service Port: 50000
Name: WPARMGT
Agent Manager, configure local:
Base Port: 9511
Public Port: 9513
Secure Port: 9512
Registration password: ********
->1- Next
2- Cancel
ENTER THE NUMBER OF THE DESIRED CHOICE, OR PRESS <ENTER> TO ACCEPT THE
DEFAULT:
1
===============================================================================
WPAR Configuration Complete
---------------------------
The configuration of IBM Workload Partitions Manager for AIX has completed
successfully.
PRESS <ENTER> TO EXIT THE INSTALLER: lpar55p682e_pub[/tmp/wparmgr] >
The final piece is installing and configuring the agent, which must be
done on each partition that you want managed by the WPAR
manager:
# installp -acqgYXd <MOUNT_POINT> wparmgt.agent
When the filesets are installed, this command will
configure the agent for you:
# /opt/IBM/WPAR/agent/bin/configure-agent -yourhostmachine
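Before registering, it's worth confirming that each managed
LPAR can actually reach the agent manager's registration
port (9511 in the configurator summary above). A minimal
connectivity check, assuming the default port:

# From a managed LPAR, test TCP connectivity to the WPAR manager host
telnet lpar55p682e_pub 9511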
When completed, we're ready to point our browser at the
IP address where the WPAR manager is configured. I was able
to do this from a VNC client after configuring a VNC server
on the host console, or from a standard Firefox browser,
even from a PC running Windows®. The latter works only if
you are on the same network as the WPAR manager.
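Based on the ports in the configurator summary (public
14080, secure 14443), a quick reachability check from any
client with curl installed might look like the following;
the hostname is from our install, so substitute your own:

# Expect an HTTP status code back if the manager's web interface is up
curl -sk -o /dev/null -w "%{http_code}\n" https://lpar55p682e_pub:14443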
Figure 1. Pointing the browser to the IP
address where the WPAR manager is configured
After logging in with your root password, you can validate the managed
systems. We had two systems, as illustrated in Figure 2.
Figure 2. Validating the managed systems
From here, you add the WPARs and configure them to be
relocatable. You do this by making sure that the filesystems
used are on NFS and by checking the option labeled
"checkpointable." The steps themselves are fairly clear:
- Create a working NFS environment on a defined server
(a sketch of the exports follows this list)
- Create a WPAR using NFS directories, with the -c
flag to make it checkpointable, which is required for
Live Application Mobility
- Relocate WPARs.
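Here is a minimal sketch of the first step, run on the NFS
server (lpar32p682e_pub in Listing 3 below). The directory
names match Listing 3; the export options and the target
LPAR name are assumptions, and both the source and target
LPARs need root access to the shares:

# Create the directories that will back the WPAR's filesystems
mkdir -p /scratch/app20root /scratch/app20home /scratch/app20tmp /scratch/app20var

# Export each directory read-write with root access for the source and
# target LPARs ("targetlpar" is a placeholder for your target system)
for d in app20root app20home app20tmp app20var
do
    mknfsexp -d /scratch/$d -t rw -r lpar55p682e_pub,targetlpar
done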
There are several ways to create WPARs. In Figure 3 we
used the WPAR manager.
Figure 3. Create WPARs using WPAR
manager
In this example (see Listing 3), we'll create the WPAR from
the command line using mkwpar. If you prefer to work from a
specification file, there is a sample in /usr/samples/wpars
-- called sample.spec -- that you can use as a template.
Listing 3. Use mkwpar to create a
specification file
# mkwpar -n app20 -h app20 -N interface=en0 netmask=255.255.192.0 \
    address=172.29.140.243 -r -c \
    -M directory=/ vfs=nfs host=lpar32p682e_pub dev=/scratch/app20root \
    -M directory=/home vfs=nfs host=lpar32p682e_pub dev=/scratch/app20home \
    -M directory=/tmp vfs=nfs host=lpar32p682e_pub dev=/scratch/app20tmp \
    -M directory=/var vfs=nfs host=lpar32p682e_pub dev=/scratch/app20var
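Before relocating from the WPAR manager, you can confirm
from the command line that the new WPAR really is
checkpointable and bring it up; the exact field names in
lswpar output vary slightly by release:

# Verify the checkpointable attribute on the new WPAR
lswpar -L app20 | grep -i checkpoint

# Start the WPAR so there is a running workload to relocate
startwpar app20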
Now, let's return to the WPAR manager. Regardless of where you create
your WPAR, you can relocate your partitions from here as
well (see Figure 4).
Figure 4. Start the process of
relocating your partitions
Figure 5 shows that the relocation process is in progress.
Figure 5. The relocation process in
progress
It's important to note that even with the GUI, there will be
some challenges in getting everything operational,
particularly when you first start using the system. We found
one interesting bug: we had used an underscore (_) in the
hostname, which kept the WPAR manager from being able to
locate one of our managed systems. After removing the
underscore, everything worked well, as you've seen in this
article.
Summary
In this article, we discussed Live Application Mobility
in the context of WPARs and how best to use the system. We
defined the differences between Live Application Mobility
and Live Partition Mobility. We installed WPAR Manager to
help us manage our WPAR environment, created WPARs for the
express purpose of using Live Application Mobility, and
illustrated how to create and move running WPARs from one
system to another. It's important to understand that Live
Application Mobility does not replace High Availability.
While it has several purposes, it is best used for scheduled
outages, which allows system maintenance to occur with no
downtime. This is a very real and important innovation of
AIX 6.1, which neither Solaris nor HP-UX has at this time.