Friday, February 15, 2008

Encrypt Your FreeBSD Home Partition

Sometimes you need to encrypt your home (and maybe swap) partition so it is not available until you supply a password and/or a key file. For example, your company may have valuable data or documents that must be protected from thieves. Another common case is laptops and notebooks, which are often lost or stolen.

The downside is a decrease in system performance.

This tutorial is about encrypting your home partition of a FreeBSD server or desktop, using GELI.

Warning! Before trying this tutorial, back up your data. We are not responsible for any data you lose.

Note: if you are serious about security, you should also consider encrypting the swap partition.
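If you do decide to encrypt swap as well, GELI can attach it with a one-time random key at every boot, so no key file or passphrase is needed. A minimal sketch, assuming the ad0s1b swap device from the layout below and that GELI support is compiled in or loaded:

```
# /etc/fstab: the .eli suffix makes the system attach swap
# through GELI with a throwaway random key at each boot
/dev/ad0s1b.eli         none            swap    sw      0       0
```

Because the key is regenerated on every boot, swap contents are unrecoverable after a power-off, which is exactly what you want.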


Step 1. Install FreeBSD, creating a dedicated home partition
------------------------------------------------------------

Install FreeBSD the usual way, but also create a dedicated partition for /home.
You will then have, for example:

/dev/ad0s1a    /        (root partition)
/dev/ad0s1b    swap     (swap partition)
/dev/ad0s1d    /var     (var partition)
/dev/ad0s1e    /tmp     (tmp partition)
/dev/ad0s1f    /home    (home partition)
/dev/ad0s1g    /usr     (usr partition)

Note that ad0s1f is the home partition we will encrypt. If your system is already installed without a dedicated home partition, you can still create one if you have enough free space on the hard drive, or you can use a second hard drive for /home. In either case, if you reuse an existing home partition, back up its data first, because it will be lost.
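The backup this step insists on can be as simple as a tar archive. A minimal sketch, shown here against a scratch directory so it is safe to try anywhere; in practice point SRC at /home and DEST at your backup medium:

```shell
# Back up a directory tree to a compressed tar archive, then list it back.
# SRC stands in for /home and DEST for your backup target in this sketch.
SRC=$(mktemp -d)
DEST=$(mktemp -u).tgz
echo "important data" > "$SRC/file.txt"
tar -czf "$DEST" -C "$(dirname "$SRC")" "$(basename "$SRC")"
tar -tzf "$DEST"
```

Restoring later is the reverse: tar -xzf with -C pointing at the parent of the new /home.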


Step 2. Compile FreeBSD kernel with GELI support
--------------------------------------------------

Go to your kernel configuration directory, copy the GENERIC configuration, and edit it to add GELI support:

cd /usr/src/sys/i386/conf/
cp GENERIC SERVER
edit SERVER

and add the following lines:

options GEOM_ELI
device crypto

After that, rebuild and install the kernel:

cd /usr/src
make -j4 buildkernel KERNCONF=SERVER
make installkernel KERNCONF=SERVER


At this point the kernel is compiled and installed with GELI support. We will not reboot the machine yet; the next steps also require a reboot, so we will do it once at the end.

If you do not want to recompile the kernel, you can instead load the GELI module at boot by adding the following line to /boot/loader.conf:

geom_eli_load="YES"


Step 3. Create a key for your home partition
--------------------------------------------

We will create a directory /etc/geli where we will store our key. Then we will create a random key that will be used for encryption using /dev/random.

mkdir /etc/geli
dd if=/dev/random of=/etc/geli/server.key bs=64 count=1
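The key is nothing more than 64 bytes of randomness, so it is easy to sanity-check. A small sketch; KEY stands in for /etc/geli/server.key, and /dev/urandom is used so the snippet also runs on Linux (on modern FreeBSD, /dev/random does not block either):

```shell
# Generate a 64-byte random key file and confirm its size.
KEY=$(mktemp)
dd if=/dev/urandom of="$KEY" bs=64 count=1 2>/dev/null
wc -c < "$KEY"
```

Keep the real key file readable by root only (chmod 600), and keep a copy somewhere safe: losing it means losing the partition.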



Step 4. Encrypt partition and create filesystem for it
----------------------------------------------------

Now, after backing up the data in /home, unmount the /home partition:

umount /dev/ad0s1f

If you get a busy error message, use:

umount -f /dev/ad0s1f

Next we initialize the partition for GELI encryption and then attach it, using the server.key file from the /etc/geli directory.

You will be prompted to set a passphrase; enter it there:

geli init -l 256 -K /etc/geli/server.key /dev/ad0s1f
(the -l 256 option sets a 256-bit key length)

geli attach -k /etc/geli/server.key /dev/ad0s1f
(you will be asked for the passphrase you set during geli init)

After this process you now have an encrypted partition.

Optionally, if you want to wipe all previous data before creating the file system on the encrypted partition with newfs, you can use the following command:

dd if=/dev/random of=/dev/ad0s1f.eli bs=1m
(Note that wiping all the data takes a long time. If you do not need to wipe the previous data, skip this step.)

We will now create a FreeBSD file system for our newly encrypted partition:

newfs /dev/ad0s1f.eli
(After attaching the encrypted partition, you can check that the process went OK by looking for the .eli device node: ls -la /dev/ad0s1f*.)

Now we can mount our newly created partition:

mount /dev/ad0s1f.eli /home

After successfully creating and mounting the encrypted /home partition, restore the data by copying all files and directories from the backup to the new /home.


Step 5. Setup /boot/loader.conf parameters for boot time encryption setup
--------------------------------------------------------------------------------------
Edit /boot/loader.conf file:

edit /boot/loader.conf

and add the following lines:

geli_ad0s1f_keyfile0_load="YES"
geli_ad0s1f_keyfile0_type="ad0s1f:geli_keyfile0"
geli_ad0s1f_keyfile0_name="/etc/geli/server.key"

And save file loader.conf.


Step 6. Setup /etc/rc.conf GELI parameters
--------------------------------------------------

Edit /etc/rc.conf and add the following lines (edit /etc/rc.conf):

geli_devices="ad0s1f"
geli_ad0s1f_flags="-k /etc/geli/server.key"

These variables tell the geli rc script which providers to attach at startup and which flags to pass to geli attach.



Step 7. Add a /etc/fstab entry for your encrypted partition
----------------------------------------------------------
Edit /etc/fstab file (edit /etc/fstab) and add the following line:

/dev/ad0s1f.eli         /home           ufs     rw              2       2

Also, if you have an existing line that mounts /home, remove it.


Step 8. Reboot your machine and test the setup
------------------------------------------------

During the boot process, after the FreeBSD kernel comes up, you will be prompted for a passphrase. Enter the one you set when you initialized the /home partition; if everything is set up correctly, the boot will finish by mounting all partitions, including the encrypted /home.

Concepts of WPARs and configuring DB2

WPAR (Workload Partition) is a licensed program product shipped with IBM® AIX® 6.1. This article teaches WPAR concepts and configuration. By following the examples in this article, you will be able to install and configure DB2® on a system and an application WPAR.

Overview

WPAR is an isolated execution environment with its own init processes inside standard AIX machines. To the end user, WPAR appears as a standalone AIX machine with its own set of processes similar to standard AIX machines. This article concentrates on the following concepts, which are useful in setting up a DB2 environment inside a WPAR:

  • Types of WPARs (system and application WPARs)
  • Creating system WPARs
  • Installing and configuring DB2 on a WPAR

 

The instructions and tips provided in this article can help users of WPARs with installation and configuration of various products, such as DB2, Oracle, IDS, WebSphere® Application Server, and SAP.

Types of WPARs

WPARs can be categorized as system and application WPARs.

  • A system WPAR is a process inside a standard AIX machine (which I will call "Global") with its own execution environment. It provides all the standard file systems (/, /usr, /opt, /var, /etc, /tmp, and /home) to the end user. A system WPAR can share /usr and /opt from Global in read-only mode or can have its own /usr and /opt file systems. File systems in system WPARs can be configured in three ways: shared /usr and /opt (the namefs type), private /usr and /opt, or so-called "remote", where the root file systems are NFS mounts (NFSv3 or NFSv4) from an NFS server.
  • An application WPAR is also a process inside the Global machine with its own execution environment and uses all file systems from the global environment. Application WPARs start with a startup script and end whenever the startup script completes its execution. For instance:
      
    wparexec -n "application wpar name" "absolute path of the script to be 
    executed with arguments if any"
    

     

     

       
    # wparexec -n appwpar /usr/bin/sleep 10
    Starting workload partition appwpar.
    Mounting all workload partition file systems.
    Loading workload partition.
    [ ----Script will start execution here---- ]
    Shutting down all workload partition processes.
    

      

    The appwpar WPAR starts and executes the given script, in this case sleep 10, and terminates itself after sleeping for ten seconds.

How do you create system WPARs?

There are three ways to create system WPARs based on how you allocate file systems to system WPARs:

  • System WPAR with shared /usr
  • System WPAR with private /usr
  • System WPAR with remote (NFS-exported) file systems

 

System WPAR with shared /usr

In this WPAR, /usr and /opt of Global are shared by the system WPAR. The following command creates the shared system WPAR:

mkwpar -n shared_wpar

  

The following figure shows the file system mapping of Global and the shared system WPAR.


Figure 1. Shared WPAR in Global environment
 


File systems in shared WPARs
 

                    

# lsfs|grep shared_wpar
/dev/fslv00  --         /wpars/shared_wpar       jfs2   
/dev/fslv01  --         /wpars/shared_wpar/home   jfs2   
/opt   --         /wpars/shared_wpar/opt   namefs  
/proc   --         /wpars/shared_wpar/proc   namefs  
/dev/fslv02  --         /wpars/shared_wpar/tmp   jfs2   
/usr   --         /wpars/shared_wpar/usr   namefs  
/dev/fslv03  --         /wpars/shared_wpar/var   jfs2  

  

System WPARs with private /usr

In this WPAR, /usr and /opt are created separately for the system WPAR. Global provides the logical volumes required to create the /usr and /opt file systems (see Figure 2). The following command creates the private system WPAR:

mkwpar -l -N interface=en0 address="IP" netmask=255.255.255.192 
broadcast=9.2.60.255 -n "wpar name having DNS entry"

  

An IP address and DNS name are needed for system WPARs because DB2 probes for them while creating the DB2 instance. The following figure shows the file system mapping of Global and the private system WPAR.


Figure 2. Private WPAR in the Global environment
 


File systems in a private WPAR
 

                    

# lsfs|grep private_wpar
/dev/fslv04 --         /wpars/private_wpar      jfs2   
/dev/fslv05 --         /wpars/private_wpar/home   jfs2   
/dev/fslv06      --         /wpars/private_wpar/opt   jfs2   
/proc            --         /wpars/private_wpar/proc   namefs  
/dev/fslv07      --         /wpars/private_wpar/tmp   jfs2   
/dev/fslv08      --         /wpars/private_wpar/usr   jfs2   
/dev/fslv09     --         /wpars/private_wpar/var   jfs2   

  

System WPAR with remote (NFS-exported) file systems

In this WPAR, all the file systems come from an NFS server, which exports the file systems using the mknfsexp command. The following figure shows the file system mapping of Global and the remote system WPAR.


Figure 3. Remote WPAR in Global environment
 


File systems in remote WPAR
 

                    
# lsfs|grep remote_wpar
/remote_wpar janet01    /wpars/remote_wpar      nfs    
/remote_wpar/opt  janet01    /wpars/remote_wpar/opt  nfs    
/proc  --   /wpars/remote_wpar/proc  namefs  
/remote_wpar/tmp  janet01    /wpars/remote_wpar/tmp  nfs    
/remote_wpar/usr  janet01    /wpars/remote_wpar/usr  nfs    
/remote_wpar/var  janet01    /wpars/remote_wpar/var  nfs    

  

The following command creates remote WPARs:

/usr/sbin/mkwpar -A -F -s -r -n remote_wpar -f remote_wpar.cf

 

remote_wpar.cf is the specification file used to create remote_wpar WPAR.

Entries in remote_wpar.cf are:

#cat remote_wpar.cf

network:
       interface = en0
       netmask = 255.255.255.192
       address = 9.2.65.91

general:
       privateusr=yes

mount:
        dev = /remote_wpar
        directory = /
        vfs = nfs
        host = janet01
  
mount:
        dev = /remote_wpar/usr
        directory = /usr
        vfs = nfs
        host = janet01

mount:
        dev = /remote_wpar/opt
        directory = /opt
        vfs = nfs
        host = janet01

mount:
        dev = /remote_wpar/var
        directory = /var
        vfs = nfs
        host = janet01

mount:
        dev = /remote_wpar/home
        directory = /home
        vfs = nfs
        host = janet01

mount:
        dev = /remote_wpar/tmp
        directory = /tmp
        vfs = nfs
        host = janet01

 

Here janet01 is the NFS server holding the file systems required for the remote system WPAR.

The following shows how to create and export /remote_wpar/* file systems on an NFS server:


Creation of file systems
 
                
crfs -v jfs2 -g ${VG} -m /remote_wpar -A yes -a size=${SZ}
crfs -v jfs2 -g ${VG} -m /remote_wpar/usr -A yes -a size=${SZ}
crfs -v jfs2 -g ${VG} -m /remote_wpar/opt -A yes -a size=${SZ}
crfs -v jfs2 -g ${VG} -m /remote_wpar/var -A yes -a size=${SZ}
crfs -v jfs2 -g ${VG} -m /remote_wpar/home -A yes -a size=${SZ}
crfs -v jfs2 -g ${VG} -m /remote_wpar/tmp -A yes -a size=${SZ}

 

where VG is the volume group and SZ is the size of each file system.
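The six crfs invocations above differ only in the mount point, so a loop avoids repetition. This sketch only prints the commands (the VG and SZ values are placeholders), so they can be reviewed before being run on a real AIX host:

```shell
# Build the crfs command for each remote-WPAR file system in a loop.
# VG and SZ are placeholder example values.
VG=rootvg
SZ=2G
CMDS=$(for fs in "" /usr /opt /var /home /tmp; do
  echo "crfs -v jfs2 -g $VG -m /remote_wpar$fs -A yes -a size=$SZ"
done)
printf '%s\n' "$CMDS"
```

Dropping the echo (or piping the output to sh) would execute the commands; the same pattern works for the mknfsexp exports below.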

This example shows how to export file systems:

mknfsexp -d /remote_wpar -B -a 0 -v 3 -t rw -r *.ibm.com
mknfsexp -d /remote_wpar/usr -B -a 0 -v 3 -t rw -r *.ibm.com
mknfsexp -d /remote_wpar/opt -B -a 0 -v 3 -t rw -r *.ibm.com
mknfsexp -d /remote_wpar/var -B -a 0 -v 3 -t rw -r *.ibm.com
mknfsexp -d /remote_wpar/home -B -a 0 -v 3 -t rw -r *.ibm.com
mknfsexp -d /remote_wpar/tmp -B -a 0 -v 3 -t rw -r *.ibm.com

 

The previous steps are required to set up various types of system WPARs. To create a DB2 environment, you can use any of the three types. Here are the possible configurations:

  • To install DB2 in the default directory (/opt is the default), you need either a private or a remote system WPAR.
  • To install DB2 in a non-default location, you can use any of the above-mentioned system WPAR types.

 

DB2 installation and setup on a system WPAR

The following shows how to install and configure DB2 inside of a system WPAR:

  1. Create a system WPAR (private or remote). DB2 needs large /usr, /opt, and /home directories; you might need to increase their sizes if the installation fails.
  2. Start the system WPAR:
     startwpar remote_wpar

     
  3. Log in to remote_wpar using either clogin or telnet.
  4. Install DB2. This is similar to how you do installation on normal AIX machines (Global).
    1. Copy the DB2 images or mount a directory where DB2 images are available.
                  
                  On your system WPAR:
                  mount "ImageServer":/DB2Images /mnt

       
      cd /mnt/"DB2 path"

       
    2. Run db2_install (here we need to select the DB2 install path. You can change the path if you want to go for a non-default location.)
  5. List all DB2 filesets:
    lslpp -l | grep db2

     
  6. Create a DB2 instance just to reconfirm the installation.
    mkuser test
    mkuser testfc
    /DB2 Installation Dir/instance/db2icrt -a SERVER -s ESE -w 64 -u testfc test
    

     
  7. DB2 is now ready on system WPAR and can be used to run DB2 applications.

This section described how to set up DB2 on a system WPAR. The following section describes how to set up DB2 on an application WPAR.

DB2 installation and setup on an application WPAR

As stated earlier, application WPARs use all the file systems from Global only.

  1. Install DB2 on Global. There is no need to create DB2 instance at this stage.
  2. Create the application WPAR using the wparexec command and pass the createdb2instance script as the startup script to wparexec command:
    wparexec -N interface=en0 address="IP" netmask=255.255.255.192 broadcast=9.2.60.255 
    -n "App WPAR name having DNS entry" /"absolutePath"/createdb2instance
    

     

    createdb2instance is a script that creates the DB2 instance. We need to create the instance this way so that DB2 takes the IP and hostname from the current execution environment, which is the application WPAR environment.

    #cat  createdb2instance:
    mkuser test
    mkuser testfc
    /"DB2 Installation Dir"/instance/db2icrt -a SERVER -s ese -w 64 -u testfc test

     
  3. After createdb2instance completes, we can infer:
    • Application WPAR exits.
    • DB2 instance (test in this case) exists on global file systems.
    • DB2 instance (test) internally contains application WPAR IP and DNS names.
  4. At this stage we have DB2 on Global and a DB2 instance with application WPAR references.
  5. Now we can start our application on the application WPAR and its DB2 instance.
    wparexec -N interface=en0 address="IP" netmask=255.255.255.192 broadcast=9.2.60.255 
    -n "App WPAR having DNS entry" /"absolutePath"/somedb2application
     

     

    Here somedb2application is a DB2 application that uses the DB2 instance created inside the application WPAR.

Conclusion

This article discussed how to create various types of WPARs and install DB2 inside of a WPAR. The instructions and tips provided here can help users of WPARs with installation and configuration of various products, such as DB2, Oracle, IDS, WebSphere Application Server, and SAP.

Here are the important points to note with respect to WPARs and DB2:

  • The WPAR should have an IP address and an associated DNS entry.
  • An /etc/hosts entry with the WPAR IP and DNS name is required.
  • Select the type of system WPAR that suits your DB2 installation.
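The /etc/hosts entry mentioned above is a single line per WPAR. A sketch using the 9.2.65.91 address from the remote_wpar example earlier; the fully qualified hostname shown is hypothetical:

```
# /etc/hosts on the Global system (hostname is an example)
9.2.65.91   remote_wpar.ibm.com   remote_wpar
```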

Managing the PlayStation 3 Wi-Fi network

Terra Soft Solutions IT Manager Aaron Johnson shows you, step-by-step, how to configure and encrypt the built-in Wi-Fi network that comes with the Cell Broadband Engine™-based Sony PlayStation 3. And, as a little bonus, get 16 quick steps that explain how to switch from a wireless network back to a wired network on the PS3.

Introduction

In this article, you will learn:

  • The four major steps to configuring the built-in PS3 Wi-Fi network, including the Wi-Fi encryption options that are available to you, the two quick steps to enable encryption, and how to upgrade your Linux kernel.
  • The 16 steps needed to make the complex task of switching between wired and wireless networks less arduous.

Configuring the PS3 Wi-Fi

There are four major steps to configuring the Wi-Fi that is built into the PlayStation 3:

Update the GameOS firmware

Before the Wi-Fi can work under Linux, you need to run GameOS firmware version 1.6 or later. Complete these steps in GameOS to ensure you are running the latest firmware.

  1. Reboot into GameOS by doing the following:
    1. Log out of Linux (Yellow Dog symbol on the task bar).
    2. Select Applications > Boot Game OS.
    3. Click Shutdown from the login menu.
    4. Once the PS3 is off, push and hold the power button until you hear the second beep (about five seconds).
  2. Go to System > System update.
  3. Select Update Via Internet.
  4. Follow the prompts to complete the update.
  5. Reset the default OS by going to Settings > System Settings > Default System.
  6. Select OtherOS.
  7. Click X.
  8. When prompted with Start the other system now?, select Yes.

If your kernel is already upgraded, skip to Activate the Wi-Fi on the PS3 under Linux. Otherwise, upgrade your kernel by continuing to the next section.

Upgrade the Linux kernel

This kernel upgrade information applies only if you are using YDL 5.0.x; YDL 6 doesn't need a kernel upgrade. If you don't need a kernel upgrade, skip to Activate the Wi-Fi on the PS3 under Linux.

This article describes three methods to upgrade your Linux kernel:

  • A semi-automated kernel upgrade that assumes your PS3 has a LAN (Ethernet) connection to the Internet under YDL
  • The use of an additional personal computer and USB key or CD-R because your PS3 under Linux does not have an Internet connection
  • A manual upgrade option for the Linux savvy with adequate Linux command-line experience.
Semi-automated method with LAN connection
 

Two cautions before you attempt this:

  • This kernel upgrade is beta software and is not recommended in a production environment.
  • This script rewrites your kboot.conf, which could render your system useless.

Do the following to upgrade the kernel using the semi-automated method:

  1. Open a terminal window by clicking on the Yellow Dog icon on the task bar.
  2. Select Applications > Accessories > Gnome Terminal.
  3. To gain root access, type su - [ENTER].
  4. When prompted, enter the root password.
  5. Download the auto updater script by entering:
    wget http://www.terrasoftsolutions.com/support/solutions/ydl_5.0/ConfigWifiKernel.sh

    You see a progress bar and confirmation of download with 'ConfigWifiKernel.sh' saved [1089/1089]
  6. Enter chmod 700 ConfigWifiKernel.sh
  7. Run the script by entering ./ConfigWifiKernel.sh
  8. Reboot the computer by entering reboot
Additional computer method without LAN connection
 

Now go through the manual process of upgrading the YDL kernel and kboot bootloader without an Internet connection to your PS3. You'll need some Linux command-line experience to complete this.

  • An Internet connection is required on an assisting computer.
  • This kernel upgrade is beta software, and it is not recommended in a production environment.

This method is broken into three subtasks: download, transfer, and activate.

Download
Download the new kernel, and transfer it to a USB key. The kernel that supports Wi-Fi on the PS3 under YDL is currently beta code, so it must be downloaded outside of regular yum updates. Because your PS3 does not have an Internet connection, you are going to use a USB key or CD to transfer the new kernel from your personal computer to the PS3. Do the following:

  1. Download the following items (right-click Save Target As or Save Link As):
  2. Move these items onto a USB key or burn them to CD.
  3. Insert the USB key or CD into the PS3.

Transfer
Now transfer the new kernel from the USB key to the PS3. Do the following:

  1. Select YDL Menu > Applications > Accessories > Gnome Terminal.
  2. At the command prompt, enter the following commands:
    su - [ENTER]
    cd /path/to/CDorUSBkey/ [ENTER]
    rpm -ivh kernel-2.6.23-9.ydl5.1.ppc64.rpm [ENTER] 
    

    The path to the CD is usually /media/cdrom, and the USB key /media/{Name of your usb key}.
  3. Remain in the terminal as root for the Activate instructions.

Activate
To activate the new kernel, you must modify a file called kboot, which is located at /etc/kboot.conf. The following script automates this process for you. Warning: Use this script with caution, because it does completely rewrite your kboot.conf.

  1. Make a backup copy of your current kboot.conf by entering cp /etc/kboot.conf /etc/kboot.conf.org
  2. Run the script to build a new kboot.conf by entering
    chmod 700 buildkboot.sh [ENTER]
     ./buildkboot.sh [ENTER]

     
  3. Exit the terminal.
  4. Reboot the PS3, and you should be good to go.
Manual upgrade method
 

Now go through the manual process of upgrading the YDL kernel and kboot bootloader. Again, this instruction is intended for intermediate-to-advanced users with solid Linux command-line experience. This kernel upgrade is beta software, and it is not recommended in a production environment.

This method is broken into three subtasks: update, install, and activate.

Update
Update the YDL system. The following process should be done regularly in order to maintain the YDL system:

  1. Select YDL Menu > Applications > System Tools > Software Management > Software Updater.
  2. All updates should already be selected. If not, select all updates.
  3. Click Apply Updates.
  4. After all updates are installed, select Reboot Later from Software Updater.

Install
Install the new kernel. The kernel that supports Wi-Fi on the PS3 under YDL is currently in beta status, so it must be downloaded separately from regular updates.

  1. Select YDL Menu > Applications > Accessories > Gnome Terminal.
  2. At the command prompt run the following commands:
    su - [ENTER]
    wget ftp://ftp.yellowdoglinux.com/pub/yellowdog/betas/kernel/kernel-2.6.23-9.ydl5.1.ppc64.rpm [ENTER]
    rpm -ivh kernel-2.6.23-9.ydl5.1.ppc64.rpm [ENTER] 
    

     
  3. Remain in the terminal as root for the Activate instructions.

Activate
To activate the new kernel, you must modify a file called kboot, which is located at /etc/kboot.conf. You can activate manually using a command-line editor (such as vi or nano), or you can use this script to build a new kboot.conf. This script is built based on existing installed kernels. Warning: Use this script with caution, because it does completely rewrite your kboot.conf.

  1. Make a backup copy of your current kboot.conf by entering cp /etc/kboot.conf /etc/kboot.conf.org
  2. Edit kboot.conf using your favorite editor. For example, you can use nano /etc/kboot.conf [ENTER]
  3. Exit the terminal.
  4. Reboot the PS3, and you're ready to go.
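Step 2 above edits kboot.conf by hand; for orientation, an entry has roughly the shape shown below. This is only an illustration: the label, kernel and initrd file names, and the root device are hypothetical and vary per install, so copy the values from your existing kboot.conf rather than from here.

```
# /etc/kboot.conf (illustrative only; names and devices are hypothetical)
default=ydl
timeout=10
ydl='/dev/ps3da1:/vmlinux-2.6.23-9.ydl5.1 initrd=/dev/ps3da1:/initrd-2.6.23-9.ydl5.1 root=/dev/ps3da1'
```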

Activate the Wi-Fi on the PS3 under Linux

Getting Wi-Fi working on the PS3 under Linux takes a little work: the wireless interface appears as eth0 (the Linux ID for the networking device). Unfortunately, the PS3 allows either a wired or a wireless connection, but not both. This is because of Hypervisor limitations that all other operating systems (in this case, Yellow Dog Linux) have to go through to reach the hardware.

To activate the Wi-Fi, do the following:

  1. Log into YDL as a user.
  2. Unplug the wired network cable.
  3. Open the Network configuration menu by clicking on the Yellow Dog logo (Menu Button) from the main shelf, and then select Applications > Applications > System Tools > System Config > Network.
  4. Enter the root password when prompted (for security purposes).
  5. Click on the Hardware tab.
  6. Select Sony PS3 Ethernet Dev.
  7. Click Delete.
  8. Confirm deletion by clicking Yes.
  9. Confirm again by clicking Yes.

Set up or change your wireless connection

To change your wireless settings, you should have already configured wireless using the following steps or have installed a version of YDL 5 that already supports wireless. To set up or change your wireless connection, do the following:

  1. Click on the Devices tab.
  2. Click Deactivate.
  3. Click New.
  4. Select Wireless Connection and click Forward.
  5. Select Sony PS3 Ethernet Device (eth0) and click Forward.
  6. Set the mode to Auto.
  7. Select Specified: for Network name: (SSID).
  8. In the box provided, enter the SSID of your access point. You can find this SSID in the management settings for your access point.
  9. Select the channel that your access point is using. You can find this in the management settings for your access point. Note: Only channels 1-11 are legal for use in the United States. Check local laws for other restrictions.
  10. Set the Transmit Rate to Auto.
  11. If your access point does not use encryption, make sure the box labeled Key is empty and skip to Step 15. If your access point does use encryption, continue with the next steps to see the available encryption options and the simple steps to enable encryption.
  12. To allow access to an access point that has encryption enabled (such as WEP or WPA), get the encryption key from the management settings of your access point.
    • A WEP 64-bit key looks like this: 4a 9f 1f 98 f3.
    • A WEP 128-bit key looks like this: 4b bc 8e 20 e7 1d 24 e4 7f 5d 88 d0 2e.
    • WPA-PSK and WPA2 keys are user-selected and vary depending on the model.
    At the time this article was published, WPA-PSK and WPA2 were not supported on the PS3, but support was reportedly in the works.
  13. For Key, enter 0x and your hex key.
    • For 64-bit, it looks like 0x4a9f1f98f3.
    • For 128-bit, it looks like 0x4bbc8e20e71d24e47f5d88d02e.
    • For 256-bit, it's as yet unknown.
  14. Click OK.
  15. Click Forward.
  16. If your network is using DHCP to hand out IP addresses or if you are unsure, select Automatically obtain IP address settings with DHCP.
  17. Click Forward > Forward > Apply > Activate > Yes > OK.
  18. Test that your connection is active.


 

Switching between wired and wireless

Considering the Hypervisor limitations and the challenge of controlling which network connection is active, it is not easy to switch between wireless and wired connections. And it's impossible to have both online at the same time. This procedure is more technical than configuring wireless settings. It requires some technical knowledge of how Linux works and how to use the command line. If you are unsure of any steps, get help from an experienced Linux user. (Editor: Or you can explore the developerWorks Linux zone's articles and expert forums.)

To switch from the wireless to the wired connection, do the following:

  1. Open the Network configuration menu by clicking on the Yellow Dog logo (Menu Button) from the main shelf, and then select Applications > Applications > System Tools > System Config > Network.
  2. Delete Sony PS3 Ethernet Dev under the Hardware tab.
  3. Close the network configuration manager. Confirm with Yes and OK.
  4. Open a terminal window.
  5. Type su - [ENTER].
  6. Enter the root password when prompted.
  7. Enter rm /etc/sysconfig/network-scripts/ifcfg-eth0
  8. Enter rm /etc/sysconfig/networking/devices/ifcfg-eth0
  9. Restart networking by entering service network restart
  10. Enter killall dhclient
  11. Rebuild /etc/sysconfig/network-scripts/ifcfg-eth0 from scratch. If in doubt, enter nano /etc/sysconfig/network-scripts/ifcfg-eth0 to edit the file, and use these defaults:
    DEVICE=eth0
    BOOTPROTO=dhcp
    ONBOOT=yes
    

     
  12. Press and hold the Ctrl key, then press the X key.
  13. Press the Y key to save changes to the file.
  14. Press the Enter key to confirm to save to that filename.
  15. Enter cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/networking/devices/ifcfg-eth0
  16. Enter service network restart

Yellow Dog Linux should now be running on the wired network once again.



 

Conclusion

This article described how to configure and encrypt the built-in Wi-Fi network that comes with the Cell Broadband Engine(TM)-based Sony PlayStation 3. It also described 16 quick steps for how to switch between wired and wireless networks on the PS3.

Making an Admin’s job easier with the AIX DS CLI

Assigning disks from storage systems (IBM® System Storage™ DS8000™/DS6000™) to an IBM AIX® host using the GUI is easy but time consuming. This article explains an easier and faster way of assigning disks. You can use this procedure to automate the disk-assigning process.

Introduction

There are two ways to communicate with the storage:

  • Storage Manager Graphical User Interface (GUI)
  • DS Command Line Interface

 

The simplest way of assigning disks from storage (for instance, IBM System Storage DS6000/DS8000) to an AIX host is to use the Storage Manager GUI. It is very user friendly: all information is populated automatically to make the user's job easier. But when users access the GUI from a remote site, performance degrades. This article looks at a procedure for assigning disks from storage to a host using the DS Command Line Interface, with improved performance.

Please note that the information in this article applies only to storage of type DS8000 (2107) and DS6000 (1750). You can also assign disks to a host from an ESS800 using the ESS command line interface (esscli), but that is beyond the scope of this article.

Before starting the disk-assigning procedure, here are some assumptions that I have made in this article:

  • The zone in the switch is configured properly and has at least two ports, one connected to the host and the other connected to storage. Figure 1 shows a sample zone setup.
  • DSCLI is installed on your host.
  • The default directory is /opt/ibm/dscli
  • The system administrator knows the IP, username, and password of the Storage/Hardware Management Console (SMC/HMC) and knows which storage image to use. Assume the required values as follows:
    • SMC IP: 198.162.1.2
    • Username: admin
    • Password: article123
    • Storage Image ID: IBM.2107-7516231 (see the lssi example below for how to get the Storage Image ID)

 


Figure 1. Sample Zone setup
Sample Zone setup
 

The DSCLI command syntax is:

dscli -user <username> -passwd <password> -hmc1 <SMCIP>  <command>

 

For example:

# /opt/ibm/dscli/dscli -user admin -passwd article123 -hmc1 198.162.1.2 lssi
Date/Time: May 15, 2008 4:50:04 AM CDT IBM DSCLI Version: 5.2.400.426
Name ID               Storage Unit     Model WWNN             State  ESSNet
============================================================================
-    IBM.2107-7516231 IBM.2107-7516230 922   5005076303FFC150 Online Enabled

 

The second column shows all of the Storage Images managed by the HMC 198.162.1.2. "IBM.2107-7516231" is known as the Storage Image ID. Throughout this article, $DSCLIcmd is used in place of the lengthy command.

# $DSCLIcmd <command>

where DSCLIcmd="/opt/ibm/dscli/dscli -user admin -passwd article123 -hmc1 198.162.1.2"
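In practice the shorthand is just a quoted shell variable; quoting keeps the options together when it is expanded. A minimal sketch using the sample credentials from above:

```shell
# Store the lengthy dscli invocation once; credentials are the sample values
# used throughout this article
DSCLIcmd="/opt/ibm/dscli/dscli -user admin -passwd article123 -hmc1 198.162.1.2"

# Any dscli command can then be run as:  $DSCLIcmd <command>
# for example:  $DSCLIcmd lssi
echo "$DSCLIcmd"
```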

There are two possibilities while assigning disks to the host:

  • The first time adding disks to host
  • Adding additional disks to host

 

The first time adding disks to the host

These are the simple steps to assign disks to the new host for the first time.

  1. Identify the Fibre Channel adapter's WWNN (World Wide Node Name) on the AIX host.
    1. Search for the FC adapters available in the host using the lsdev command:
      # lsdev -Cc adapter | grep fc
      fcs0    Available 1Z-08    FC Adapter
              

       

      The FC adapter device name starts with fcs. Here only one FC adapter is available. Your host may have many FC adapters.

    2. Use the lscfg command to get the WWNN for the fcs0 adapter:
      # lscfg -vl fcs0 | grep -w "Network Address"
              Network Address.............10000000C9427D30
      

       
  2. Create the host connection:
    1. Use the mkhostconnect dscli command to add the host definition to the storage image.
      # $DSCLIcmd mkhostconnect -dev  IBM.2107-7516231  -wwname 10000000C9427D30 
                                    -profile \"IBM pSeries - AIX\"   Node1
      Date/Time: May 13, 2008 1:23:14 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
               IBM.2107-7516231
      CMUC00012I mkhostconnect: Host connection 0060 successfully created.
      

       
    2. Using the lshostconnect command, you can see the list of connected hosts.

      To verify after adding the host definition, run the following dscli command.

      # $DSCLIcmd  lshostconnect -dev IBM.2107-7516231 | grep Node1
      Node1      0060 10000000C9427D30 -    IBM pSeries - AIX      0 -     all
      

       

      Record the hostid for future reference, which is 0060 in this case.

  3. Use the mkvolgrp dscli command to create a volume group:
    # $DSCLIcmd mkvolgrp -dev  IBM.2107-7516231 Node1_vg
    Date/Time: May 13, 2008 1:40:12 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
             IBM.2107-7516231
    CMUC00030I mkvolgrp: Volume group V77 successfully created.
    

     

    Record Volume Group ID V77 in this example for future reference.

  4. Create volumes:
    1. Identify Logical Sub System (LSS):

      Before creating volumes, select the LSS from which you want to create volumes. The lslss dscli command lists the available LSSs in the storage image.

      # $DSCLIcmd lslss -dev IBM.2107-7516231
      Date/Time: May 13, 2008 1:43:50 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
                IBM.2107-7516231
      ID Group addrgrp stgtype confgvols
      ==================================
      08     0       0 fb              2
      09     1       0 fb             33
      0A     0       0 fb             20
      0D     1       0 fb              8
      10     0       1 fb             51
      11     1       1 fb            189
      

       

      Select one LSS from the list. Assume that you have selected LSS 05.
    2. Identify maximum volume ID:

      Use the lsfbvol command to get the list of volumes belonging to LSS 05 and identify the highest volume ID.

      # $DSCLIcmd lsfbvol -dev IBM.2107-7516231 -lss 05
      Date/Time: May 13, 2008 1:46:23 AM CDT IBM DSCLI Version: 5.2.400.426 DS:
              IBM.2107-7516231
      Name  ID  accstate datastate configstate deviceMTM datatype extpool cap (2^30B) 
      cap (10^9B) cap (blocks)
      =============================================================================
      PPRC0003  050D Online   Normal  Normal   2107-900  FB 512   P1  - 1.0  1953152
      PPRC0004  050E Online   Normal   Normal   2107-900  FB 512   P1  - 1.0  1953152
      PPRC0005  050F Online   Normal   Normal   2107-900  FB 512   P1  - 1.0  1953152
      PPRC0006  0510 Online   Normal   Normal   2107-900  FB 512   P1  - 1.0  1953152
      PPRC0007  0511 Online   Normal   Normal   2107-900  FB 512   P1  - 1.0  1953152
      PPRC0008  0512 Online   Normal   Normal   2107-900  FB 512   P1  - 1.0  1953152
      PPRC0009  0513 Online   Normal   Normal   2107-900  FB 512   P1  - 1.0  1953152
      

       

      Record the highest volume ID (second column), for instance 0513, and the Extentpool (eighth column) it belongs to; in this case, LSS 05 belongs to extentpool P1.

    3. Create a new volume:

      Use the mkfbvol dscli command to create new volumes. This command creates one disk of 10GB size from LSS 05.

      # $DSCLIcmd mkfbvol -dev IBM.2107-7516231 -extpool  P1 -type ds -cap  10 0514
      Date/Time: May 13, 2008 1:59:24 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
            IBM.2107-7516231
      CMUC00025I mkfbvol: FB volume 0514 successfully created.
      

       

      The type attribute can be "DS" or "ESS." If you want to use these disks for PPRC, the type attribute depends on the type of the target disk: if the target disk type is either 2107 or 1750, the "-type" attribute should be "DS"; if the target disk type is 2105, it should be "ESS."

      The previous command can create only one volume. You can specify a range to create multiple volumes, as follows:

      # $DSCLIcmd mkfbvol -dev IBM.2107-7516231 -extpool P1 -type ds -cap 10 0515-0518
      Date/Time: May 13, 2008 2:10:55 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
           IBM.2107-7516231
      CMUC00025I mkfbvol: FB volume 0515 successfully created.
      CMUC00025I mkfbvol: FB volume 0516 successfully created.
      CMUC00025I mkfbvol: FB volume 0517 successfully created.
      CMUC00025I mkfbvol: FB volume 0518 successfully created.
      

       

      Record the range of volume IDs created, which you will need later.

      Note that volume IDs are in hexadecimal format.

  5. Add the created volumes to a volume group, so that the host can access all volumes in the VG. This can be done using the chvolgrp command:

    #  $DSCLIcmd chvolgrp -action add -volume 0514-0518 IBM.2107-7516231/V77
    Date/Time: May 13, 2008 2:44:08 AM CDT IBM DSCLI Version: 5.2.400.426
    CMUC00031I chvolgrp: Volume group V77 successfully modified.
    

     
  6. Use the chhostconnect dscli command to add the created volume group to the new host.
    # $DSCLIcmd chhostconnect -dev IBM.2107-7516231 -volgrp  V77 0060
    Date/Time: May 13, 2008 2:46:35 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
         IBM.2107-7516231
    CMUC00013I chhostconnect: Host connection 0060 successfully modified.
            

     
  7. Run the cfgmgr command on the host to configure the added disks from storage. Run the following command to view the disks:
    # lsdev -Cc disk | grep -e 2107 -e 1750
    hdisk2  Available 1Z-08-02     IBM MPIO FC 2107
    hdisk3  Available 1Z-08-02     IBM MPIO FC 2107
    hdisk4  Available 1Z-08-02     IBM MPIO FC 2107
    hdisk5  Available 1Z-08-02     IBM MPIO FC 2107
    

     

    2107 here indicates that the disks belong to DS8000 and 1750 indicates that the disks belong to the DS6000 type.
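The WWNN lookup in step 1 can be scripted when a host has several FC adapters. Since lsdev and lscfg exist only on AIX, this sketch inlines their sample output from above; on a real host you would call the live commands instead:

```shell
# Sample output of:  lsdev -Cc adapter | grep fc   (one adapter here)
lsdev_out='fcs0    Available 1Z-08    FC Adapter'
# Sample output of:  lscfg -vl fcs0 | grep -w "Network Address"
lscfg_out='        Network Address.............10000000C9427D30'

# Loop over every fcs device and strip the dotted padding to get the WWNN
for fcs in $(printf '%s\n' "$lsdev_out" | awk '/^fcs/ { print $1 }'); do
  wwnn=$(printf '%s\n' "$lscfg_out" | sed 's/.*\.//')
  printf '%s %s\n' "$fcs" "$wwnn"
done
```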



 

Adding additional disks to host

This is the second case: the host definition already exists in the storage image, and the user wants to add more disks to the host from storage. The procedure for assigning disks varies slightly and is very simple.

  1. Identify the Fibre Channel adapter's WWNN (World Wide Node Name). Search for the FC adapters available in the host using the lsdev command.
    # lsdev -Cc adapter | grep fc
    fcs0    Available 1Z-08    FC Adapter
    

     

    The FC adapter device name starts with "fcs." Here only one FC adapter is available. Your host may have many FC adapters.

    Use the lscfg command to get the WWNN for the fcs0 adapter.



    # lscfg -vl fcs0 | grep -w "Network Address"
            Network Address.............10000000C9427D30
    

     
  2. Search for the host and Identify Volume Group.

    Use the lshostconnect dscli command to verify whether the host is already defined to a storage image.

    # $DSCLIcmd  lshostconnect -dev IBM.2107-7516231 $Delim  | grep -w 10000000C9427D30
    Node1:0060:10000000C9427D30:-:IBM pSeries - AIX:0:V77:all
    

     

    If the host WWNN exists, lshostconnect lists the host definition as shown above; the $Delim variable (defined in the combined example below) holds the -fmt delim options that produce the colon-separated output. The seventh field shows the Volume Group ID assigned to this host, so in this case the VGID is V77.

    If the host is not defined already, the output is empty, so the user needs to follow the First time adding disks to the host procedure.

    Once you have identified the host and the VG, record these values and follow the steps in the First time adding disks to the host procedure, starting with Step 4, Create volumes.

    The following example shows all of the commands together:

    # DSCLIcmd="/opt/ibm/dscli/dscli -user admin -passwd article123 -hmc1 198.162.1.2"
    # Delim='-fmt delim -delim : -bnr off -hdr off'
    
    # $DSCLIcmd mkhostconnect -dev  IBM.2107-7516231  -wwname 10000000C9427D30 \
                                           -profile \"IBM pSeries - AIX\"   Node1
    Date/Time: May 13, 2008 1:23:14 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
          IBM.2107-7516231
    CMUC00012I mkhostconnect: Host connection 0060 successfully created.
    
    # $DSCLIcmd  lshostconnect -dev IBM.2107-7516231 | grep Node1
    Node1     0060 10000000C9427D30 -     IBM pSeries - AIX         0 -      all
    
    # $DSCLIcmd mkvolgrp -dev  IBM.2107-7516231 Node1_vg
    Date/Time: May 13, 2008 1:40:12 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
         IBM.2107-7516231
    CMUC00030I mkvolgrp: Volume group V77 successfully created.
    
    # $DSCLIcmd lslss -dev IBM.2107-7516231
    Date/Time: May 13, 2008 1:43:50 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
         IBM.2107-7516231
    ID Group addrgrp stgtype confgvols
    ==================================
    08     0       0 fb              2
    09     1       0 fb             33
    0A     0       0 fb             20
    0D     1       0 fb              8
    10     0       1 fb             51
    11     1       1 fb            189
    13     1       1 fb             31
    
    # $DSCLIcmd lsfbvol -dev IBM.2107-7516231 -lss 05 $Delim \
        | awk -F: '{ print $2" "$8 }' | sort -rn | head -1
    0513 P1
    
    Note: In the above output 0513 is the maximum volume ID and P1 is the extentpool 
    to which LSS 05 belongs.
    
    # $DSCLIcmd mkfbvol -dev IBM.2107-7516231 -extpool  P1 -type ds -cap  10 0514
    Date/Time: May 13, 2008 1:59:24 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
          IBM.2107-7516231
    CMUC00025I mkfbvol: FB volume 0514 successfully created.
    
    # $DSCLIcmd mkfbvol -dev IBM.2107-7516231 -extpool  P1 -type ds -cap  10 0515-0518
    Date/Time: May 13, 2008 2:10:55 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
         IBM.2107-7516231
    CMUC00025I mkfbvol: FB volume 0515 successfully created.
    CMUC00025I mkfbvol: FB volume 0516 successfully created.
    CMUC00025I mkfbvol: FB volume 0517 successfully created.
    CMUC00025I mkfbvol: FB volume 0518 successfully created.
    
    
    #  $DSCLIcmd chvolgrp -action add -volume 0514-0518 IBM.2107-7516231/V77
    Date/Time: May 13, 2008 2:44:08 AM CDT IBM DSCLI Version: 5.2.400.426
    CMUC00031I chvolgrp: Volume group V77 successfully modified.
    
    # $DSCLIcmd chhostconnect -dev IBM.2107-7516231 -volgrp  V77 0060
    Date/Time: May 13, 2008 2:46:35 AM CDT IBM DSCLI Version: 5.2.400.426 DS: 
         IBM.2107-7516231
    CMUC00013I chhostconnect: Host connection 0060 successfully modified.
    
    # cfgmgr
    
    # lsdev -Cc disk | grep -e 2107 -e 1750
    hdisk2  Available 1Z-08-02     IBM MPIO FC 2107
    hdisk3  Available 1Z-08-02     IBM MPIO FC 2107
    hdisk4  Available 1Z-08-02     IBM MPIO FC 2107
    hdisk5  Available 1Z-08-02     IBM MPIO FC 2107
    hdisk6  Available 1Z-08-02     IBM MPIO FC 2107
    
    

     


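One step that is easy to get wrong when scripting this procedure is the hexadecimal volume-ID arithmetic. The next free ID after the recorded maximum can be computed with shell arithmetic; in this sketch, 0513 is the maximum from the sample lsfbvol output above:

```shell
# Hex arithmetic: 16#$max tells the shell to read the ID in base 16
max=0513
next=$(printf '%04X' $(( 16#$max + 1 )))
echo "$next"    # prints 0514
```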
 

Summary

You can achieve high performance using the dscli command: simple commands create volumes and add them to an AIX host. One thing the system administrator has to take care of is that some commands need arguments that come from the output of preceding commands. For example, to create volumes, the mkfbvol dscli command needs the LSS and extent pool, which can be taken from the lsfbvol output. You can combine all of the commands and write a script that automates the entire disk-assigning process.

A beginner’s guide to Korn shell scripting

Korn shell scripting is something all UNIX® users should learn how to use. Shell scripting provides you with the ability to automate many tasks and can save you a great deal of time. It may seem daunting at first, but with the right instruction you can become highly skilled in it. This article will teach you to write your own Korn shell scripts.

What is a shell?

The IBM® AIX® operating system and other UNIX-like operating systems need a way to communicate with the kernel. This is done through the use of a shell. There are a few different shells that you can use, but this article focuses on the Korn shell. The Korn shell is the default shell used with AIX.

When you log into AIX, you are started at a prompt in a certain directory. The default directory is usually your home directory. The reason it's called a home directory is because the directory structure is usually something like this:

$/home/jthomas: 

 

When you log in, you are said to be at the command line or command prompt. This is where you enter UNIX commands. You enter shell commands that interact with the UNIX kernel. These commands can be as simple as one line to check the date or multiple lines long, depending on what you're doing. Listing 1 provides some sample commands.


Listing 1. Sample commands
 
                
$date
Fri May  1 22:59:28 EDT 2008
$uptime
10:59PM   up 259 days,   9:44,  5 users,  load average: 3.81, 14.27, 13.71
$hostname
gonzo

 

The great thing about shell commands is that you can combine them in a file called a script that allows you to run multiple commands one after another. This is great when you have to repeat the same commands over and over again. Instead of repeatedly typing these commands, you can put them in a Korn shell script.

Writing your first Korn shell script

The first line in a Korn shell script is the shell itself. It is denoted as follows:

#!/bin/ksh

 

To write Korn shell scripts on AIX, you need to use a text editor. The most widely used and most readily available is vi. It can be a little intimidating at first, but the more you use vi, the more comfortable you will become. People have written whole books just on how to use the vi text editor.

To begin writing your first Korn shell script, you need to open the vi editor and add the shell name as the first line. After that, you need to build some type of script header telling users who wrote the script, what the script does, and when it was written. You can name your script anything you want, but you usually use the extension .ksh to refer to a Korn shell script. You do not have to do this, but it's good practice. The pound symbol (#) is used to comment with scripts, as shown in Listing 2.


Listing 2. Example of a script header
 
                
$vi my_first_script.ksh
#!/bin/ksh
###################################################
# Written By: Jason Thomas
# Purpose: This script was written to show users how to develop their first script
# May 1, 2008
###################################################

 

This script header is pretty basic, but it does the trick.


 


 

Variables

Setting variables within a script is fairly simple. I usually capitalize all variables within my scripts, as shown in Listing 3, but you don’t have to.


Listing 3. Example of variables
 
                
#Define Variables
HOME="/home/jthomas" #Simple home directory 
DATE=$(date) # Set DATE equal to the output of running the shell command date
HOSTNAME=$(hostname) # Set HOSTNAME equal to the output of the hostname command
PASSWORD_FILE="/etc/passwd" # Set AIX password file path


 


 

Korn shell nuts and bolts

So far, you’ve learned how to start writing a Korn shell script by writing a basic script header and defining some variables. Now it's time to start writing some Korn shell code.

Start by reading some lines from a file. In this case, use the /etc/passwd file you already defined in your script, and print just the usernames, as shown in Listing 4.


Listing 4. The for loop
 
                
$vi my_first_script.ksh
#!/bin/ksh
###################################################
# Written By: Jason Thomas
# Purpose: This script was written to show users how to develop their first script
# May 1, 2008
###################################################

#Define Variables
HOME="/home/jthomas" #Simple home directory 
DATE=$(date) # Set DATE equal to the output of running the shell command date
HOSTNAME=$(hostname) # Set HOSTNAME equal to the output of the hostname command
PASSWORD_FILE="/etc/passwd" # Set AIX password file path

#Begin Code

for username in $(cat $PASSWORD_FILE | cut -f1 -d:)
do

       print $username

done

 

This syntax is called a for loop. It allows you to open the /etc/passwd file, read each line one at a time, cut out just the first field, and then print it. Notice the special character |; this is called a pipe. Pipes allow you to redirect the output of one command into another.

After you save the script, you run it as shown in Listing 5.


Listing 5. Running the script
 
                
$./my_first_script.ksh
root
daemon
bin
sys
adm
uucp
nobody
lpd

 

The script begins to print the output to the screen. Alternatively, you can just print the output to a file by doing the following:

print $username >> /tmp/usernames

 

The >> tells the print command to append each username, one after another, to the file only. By doing this, you never see the text displayed on your terminal. You can also print the output to both the screen and a file by using this command:

print $username | tee -a /tmp/usernames

 

The tee command allows you to redirect output to your terminal and to a file at the exact same time.
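A quick way to see the difference in action; the file name here is illustrative:

```shell
# tee -a appends to the file AND echoes to the terminal at the same time
f=/tmp/usernames.$$
echo "jthomas" | tee -a "$f"    # prints jthomas and writes it to the file
cat "$f"                        # prints jthomas again, read back from the file
rm -f "$f"                      # clean up the demo file
```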

You just learned how to read from a file with a for loop and how to cut out just a username and redirect the output to a file or to your terminal.


 


 

Error checking

What happens if the /etc/passwd file didn't exist to begin with? The short answer is the script would fail. Listing 6 shows the syntax that checks to see if the files exist.


Listing 6. Syntax for checking to see if a file exists
 
                
#Begin Code
if [[ -e $PASSWORD_FILE ]]; then #Check to see if the file exists and if so then continue

     for username in $(cat $PASSWORD_FILE | cut -f1 -d:)
     do

         print $username

     done
else
  
         print "$PASSWORD_FILE was not found"
         exit
fi

 

This little piece of code shows the conditional if statement. If the /etc/passwd file exists, then the script continues. If the file doesn’t exist, then the script prints "/etc/passwd file was not found" to the terminal screen and then exits. Conditional if statements start with if and end with the letters reversed (fi).


 


 

The dollar question mark ($?)

Every time you run a command in AIX, the system sets a variable that is often referred to as dollar question. AIX sets this to 0 for success or non-zero for failure. This is excellent for Korn shell scripting. Listing 7 shows how $? is set when you run valid and invalid AIX commands.


Listing 7. How the $? is set for valid and invalid AIX commands
 
                
$date  
Sat May 10 00:02:31 EDT 2008
$echo $?
0
$uptime
  12:02AM   up 259 days,  10:47,  5 users,  load average: 4.71, 10.44, 12.62
$echo $?
0
$IBM
ksh: IBM:  not found.
$echo $?
127
$aix
ksh: aix:  not found.
$echo $?
127
$ls -l /etc/password
ls: 0653-341 The file /etc/password does not exist.
$echo $?
2

 

This is helpful when writing Korn shell scripts because it gives you another way to check for errors. Here's a different way to see if the /etc/passwd file exists:

#Begin Code
PASSWORD_FILE="/etc/passwd"

ls -l $PASSWORD_FILE > /dev/null 2>&1

 

This command lists the file. However, you don’t really care whether the file is there or not; the important thing is to get the return code of the command. The greater-than sign (>) allows you to redirect output from the command. You will learn more about redirecting output later in this article.

Listing 8 shows how to use $? in a script.


Listing 8. Using $? in a script
 
                
#Begin Code
PASSWORD_FILE="/etc/passwd"

ls -l $PASSWORD_FILE > /dev/null 2>&1 
if [[ $? != 0 ]]; then

       print "$PASSWORD_FILE was not found"
       exit

else
   
   for username in $(cat $PASSWORD_FILE | cut -f1 -d:)
   do

       print $username

   done          
fi

 

Instead of actually checking to see if the file exists, I tried to list the file. If you can list the file, then the file exists; if you can't list it, then it doesn't exist. You list a file in AIX by using the ls -l filename command. This gives you a way to test whether your AIX command was successful by checking the $? variable.


 


 

Standard in, out, and error

You really need to understand these. You basically have three sources of input and output. In AIX, they are referred to as STDIN, STDOUT, and STDERR. STDIN refers to the input you might get from a keyboard. STDOUT is the output that prints to the screen when a command works. STDERR prints to the screen when a command fails. The STDIN, STDOUT, and STDERR file descriptors map to the numbers 0, 1, and 2, respectively.
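The three descriptors can be redirected independently. A small self-contained sketch, capturing each stream separately:

```shell
# Write one line to fd 1 (STDOUT) and one to fd 2 (STDERR), then capture
# each stream on its own: out gets only stdout, err gets only stderr
out=$( { echo "to stdout"; echo "to stderr" >&2; } 2>/dev/null )
err=$( { echo "to stdout"; echo "to stderr" >&2; } 2>&1 1>/dev/null )
echo "captured stdout: $out"
echo "captured stderr: $err"
```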

If you want to check to see if a command was a success or a failure, you do something like Listing 9.


Listing 9. Redirecting output to STDOUT and STDERR
 
                
date > /dev/null 2>&1  # Any output from this command should never be seen

if [[ $? = 0 ]]; then
      print "The date command was successful"
else
      print "The date command failed"
fi

 

This code runs the AIX date command. You should never see any output from STDOUT (file descriptor 1) or STDERR (file descriptor 2). Then you use the conditional if statement to check the return code of the command. As you learned previously, if the command returns a zero, then it was successful; if it returns a non-zero, then it failed.


 


 

Functions

In Korn shell scripting, the word function is a reserved word. Functions are a way to divide the script into segments. You only run these segments when you call the function. Create an error-check function based on the code you've already written, as shown in Listing 10.


Listing 10. Error-check function
 
                
##################
function if_error
##################
{
rc=$?  # save the return code before the test overwrites it
if [[ $rc -ne 0 ]]; then # check return code passed to function
    print "$1" # if rc > 0 then print error msg and quit
    exit $rc
fi
}

 

If I want to run a simple command from inside a script, I can easily write some code similar to the $? error checking above. I can also just call the if_error function every time I want to check whether something failed, as shown in Listing 11.


Listing 11. Calling the if_error function
 
                
rm -rf /tmp/file #Delete file
if_error "Error: Failed removing file /tmp/file"

mkdir /tmp/test #Create the directory test
if_error "Error: Failed trying to create directory /tmp/test"

 

Each time one of the above commands is run, a call is made to the if_error function. The message you want to display for that particular error check is passed to the if_error function. This is great because it allows you to write the shell script code one time, but you get to leverage it over and over. This makes writing shell scripts quicker and easier.
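The pattern can be demonstrated end to end. This sketch saves the return code before the test itself overwrites it, and uses echo in place of ksh's print so it also runs under other shells; the directory name is illustrative:

```shell
# Error-check function: capture $? first, since the [[ ]] test resets it
function if_error
{
  rc=$?
  if (( rc != 0 )); then
    echo "$1"       # print the caller-supplied error message
    exit $rc        # exit with the original failing return code
  fi
}

mkdir /tmp/demo_dir.$$                  # illustrative directory; should succeed
if_error "Error: Failed creating /tmp/demo_dir.$$"
echo "directory created"
rmdir /tmp/demo_dir.$$                  # clean up the demo directory
```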


 


 

The case statement

The case statement is another conditional statement that you can use in place of using an if statement. The case statement begins with the word case and ends with the reverse (esac). A case statement allows you to quickly build upon it as your script evolves and needs to perform different tasks. Listing 12 provides an example.


Listing 12. The case statement
 
                
case value in
"Mypattern")
    # commands to execute when value matches Mypattern
    ;;
esac

 

So, say that you want to delete a file at different times of the day. You can create a variable that checks to see what time it is:

TIME=$(date +%H%M)

 

The code shown in Listing 13 deletes a file at 10:00 p.m. and 11:00 p.m. Each time this section of code is executed, $TIME is checked against the patterns of the case statement; if one matches, the corresponding code is executed.


Listing 13. case statement to check time
 
                
case $TIME in
                 "2200") #This means 10:00 p.m.
                  rm -rf /tmp/file1
                        ;;
                  "2300") #This means 11:00 p.m.
                  rm -rf /tmp/file1
                        ;;
                      *)
                        echo "Do nothing" > /dev/null
                        ;;

 esac


 


 

Putting a whole script together

So far, you've created a script header and some simple variables and added a function, as shown in Listing 14.


Listing 14. Example Korn shell script
 
                
$vi my_second_script.ksh
#!/bin/ksh
###################################################
# Written By: Jason Thomas
# Purpose: This script was written to show users how to develop their first script
# May 1, 2008
###################################################

#Define Variables
HOME="/home/jthomas" #Simple home directory 
TIME=$(date +%H%M) # Set TIME to the current hour and minute (HHMM)
HOSTNAME=$(hostname) # Set HOSTNAME equal to the output of the hostname command


##################
function if_error
##################
{
rc=$?  # save the return code before the test overwrites it
if [[ $rc -ne 0 ]]; then # check return code passed to function
    print "$1" # if rc > 0 then print error msg and quit
    exit $rc
fi
}

if [[ -e /tmp/file ]]; then  #Check to see if the file exists first
   rm -rf /tmp/file #Delete file
   if_error "Error: Failed removing file /tmp/file"
else
   print "/tmp/file doesn't exist"
fi

if [[ ! -e /tmp/test ]]; then #Only create the directory if it doesn't already exist
     mkdir /tmp/test #Create the directory test
     if_error "Error: Failed trying to create directory /tmp/test"
else
     print "Directory exists, no need to create directory"
fi

case $TIME in
                 "2200")
                  rm -rf /tmp/file1
                        ;;
                  "2300")
                  rm -rf /tmp/file1
                        ;;
esac
#End Script

 

To run the script, you simply type ./scriptname.ksh, like this:

$./my_second_script.ksh


 


 

Feeding input into a script from the command line

You can create scripts that allow you to feed input into the script. Look at Listing 15.


Listing 15. Feeding input into a script
 
                
#!/bin/ksh

OPTION=$1

print "I love $OPTION"

$./scriptname milk
I love milk
$./scriptname tea
I love tea
$./scriptname "peanut butter"
I love peanut butter

 

Any time you feed something into a script, the first option after the script name is called $1. The second option after the script name is called $2, and so on. This is a great way to write a script so that it is more like a UNIX command with switches or options.
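Beyond $1 and $2, the whole argument list can be walked with shift. In this sketch, set -- stands in for arguments typed on the command line:

```shell
# Simulate:  ./scriptname milk tea "peanut butter"
set -- milk tea "peanut butter"

# $# is the number of remaining arguments; shift discards $1 and renumbers
while [ $# -gt 0 ]; do
  printf 'I love %s\n' "$1"
  shift
done
```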


 


 

E-mail from a script

You can use your scripts to generate some type of report. For instance, maybe a script was written to keep track of new users added to the system daily. This script could write the output to a file, and then you could send it to yourself. This way, you could get a copy of all the new users added to the system every day. To do this, run the following command:

$REPORT="/tmp/users"

cat $REPORT | mailx -s "User admin report from server XYZ" Jason_Thomas@kitzune

 

This sends you an e-mail of the contents of the $REPORT file. The -s option sets the subject of the e-mail. This can come in really handy.


 


 

Conclusion

Korn shell scripting can save you a lot of time and make your job so much easier. It can seem intimidating at first, but remember to always start out simple and build upon each and every script. Always follow the same steps: build your script header, define your variables, and error check your work. You just might find yourself trying to write a script for everything you do.

Running Oracle on AIX

A systems administrator always needs to be cognizant of system performance. Performance tuning on IBM® AIX® has changed considerably in recent years due to changes that have been made in AIX and its hardware platform, System p™. If you were to read an AIX-specific performance tuning document from two years ago and applied the same strategies today, not only might you not be improving performance, but in some cases you would be making things worse. As an administrator, while at times you may find that changing some parameters on the fly might increase performance dramatically and fairly quickly, performance tuning in a database environment like Oracle is a marathon, not a sprint. This article drills down into the many aspects of tuning AIX to run Oracle. We'll look at the Virtual Memory Manager (VMM), CPU, Memory, and I/O (disk and network). We'll examine some of the tools that you can use to analyze bottlenecks, while also making some changes to the system. Finally, we'll also review some Oracle tools you can use to help with your performance tuning.

Introduction

As a systems administrator, you should already know some of the basics of memory, CPU, and Disk I/O (see the Resources section for articles on these subjects). What you may not fully understand is how the VMM works in AIX and what that means to Oracle. You will also find that because many of the AIX tuning commands and parameters have changed in recent years, Oracle has changed also, and there are changes to utilities such as the Oracle Enterprise Manager, which is an important utility you should definitely take the time to learn and add to your repertoire.

This article discusses in detail the AIX VMM and the tuning commands that you will be using to tune memory. It also introduces some of the monitoring tools that you will be using, which will help put you in a position to tune.

Before we get started, it is important to note that you must have an overall approach to what you are doing. Make sure you use proper change control processes; only make one change at a time and monitor that change very carefully before introducing that change into other environments, particularly production. Performance tuning really is an iterative, ongoing process and you'll oftentimes find that by fixing one bottleneck you will create another, which is okay as long as you continuously look to improve the health of your systems. Make sure that you start monitoring your system at the beginning, well before your users are screaming about slow performance. How can you know what a poorly performing system is like unless you know what a healthy system looks like? A proper baseline is key. The system we'll be looking at is running Oracle 10g -- 10.1.0.2.0 and AIX 5.3 TL7 on a POWER5™ LPAR with one CPU and 4GB of RAM.


 


 

Memory

In this section, we'll review memory as it relates to AIX and Oracle. We'll discuss how AIX uses virtual memory and how this relates to Oracle. We'll also analyze the data and tune our subsystems.

Let's start with the VMM. It's important to understand that the VMM services all memory requests from the system, not just those for virtual memory. When RAM is accessed, the VMM must allocate space even when there is plenty of physical memory left on the box. This is what confuses both DBAs and systems administrators at times. It does this through a process called early allocation of paging space, partitioning segments into pages. These pages can reside either in RAM or in paging space (virtual memory stored on disk). At the same time, the VMM maintains a free list of unallocated page frames, which is used to satisfy page faults. The VMM's page-replacement algorithm assigns the page frames and determines exactly which virtual-memory pages currently in RAM will have their page frames returned to the free list.

Furthermore, the AIX operating system uses all available memory, except that which is configured to remain unallocated and is known as the free list. Obviously, administrators prefer to use physical memory rather than paging space when physical memory is available. The VMM classifies memory segments into two categories: persistent segments, which hold file memory, and working segments, which hold computational memory. What does this mean to us? It's computational memory that is used while your SQL queries are accessing the database. These working segments are temporary: they have no permanent location on disk and go away when the process completes. File memory, on the other hand, lives in persistent segments that do have permanent locations on disk; those pages usually remain in memory until they are stolen or the database is recycled. Again, you want file memory paged to disk, not computational memory.

How do we tune our systems? One critical structure worth discussing is the Translation Lookaside Buffer (TLB). Applications like Oracle exploit a tremendous amount of virtual memory, so using large pages can increase performance substantially. Increasing the reach of this buffer allows the system to map more virtual memory, which results in a lower TLB miss rate for applications, such as Oracle, that use a lot of virtual memory. This applies to both OLTP and data warehouse workloads. Oracle uses large pages for its SGA, because it is the SGA that really dominates virtual memory. On AIX 5.2 and later, we use vmo; prior to that, we used vmtune.

Let's look at the parameters, using vmo, as shown in Listing 1.


Listing 1. Parameters using vmo
 
                 
root@lpar21ml16ed_pub[/] > vmo -L lgpg_size
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
lgpg_size                 0      0      0      0      16M    bytes             D
     lgpg_regions


root@lpar21ml16ed_pub[/] > vmo -L lgpg_regions
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
lgpg_regions              0      0      0      0                               D
     lgpg_size

 

Using the following command, we'll set the large-page size to 16MB (16777216 bytes) and reserve 256 large pages:

# vmo -r -o lgpg_size=16777216 lgpg_regions=256
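Before settling on a region count, it helps to sanity-check the arithmetic: lgpg_regions should be large enough that the SGA fits entirely in 16MB large pages. A minimal sketch, assuming a hypothetical 3GB SGA (substitute your own SGA_TARGET):

```shell
# Size lgpg_regions so the whole SGA fits in 16MB large pages.
# The 3072MB SGA below is a hypothetical value, not from this system.
sga_mb=3072
lgpg_mb=16
# Round up, so a partial page at the end still gets its own region
regions=$(( (sga_mb + lgpg_mb - 1) / lgpg_mb ))
echo "lgpg_regions=$regions"
```

With a 3GB SGA this yields 192 regions; the 256 used above simply leaves headroom for SGA growth.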

At the same time, with Oracle Database 10g, make sure that the LOCK_SGA Oracle initialization parameter is set to TRUE, so that Oracle requests large pages when allocating shared memory. By far, the two most important vmo settings are minperm and maxperm. We use these parameters to determine whether our system favors computational memory or file memory. The first thing we do here is make certain that our lru_file_repage parameter = 0. This parameter was introduced in ML1 of AIX 5.3 and determines whether the VMM's repage counts are considered when it decides which type of memory to steal (see Listing 2).


Listing 2. The lru_file_repage parameter
 
                
root@lpar21ml16ed_pub[/] > vmo -L lru_file_repage
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
lru_file_repage           1      1      1      0      1      boolean           D
--------------------------------------------------------------------------------
root@lpar21ml16ed_pub[/] >

 

As shown in Listing 2, the default is 1, so we'll need to change this using vmo (see Listing 3).


Listing 3. Changing the default setting for the lru_file_repage parameter using vmo
 
                
root@lpar21ml16ed_pub[/] > vmo -o lru_file_repage=0
Setting lru_file_repage to 0
root@lpar21ml16ed_pub[/] >

 

Setting this to 0 tells the VMM that you want it to steal only file pages, not computational pages. Because this behavior reverts if numperm falls below minperm or rises above maxperm, we make maxperm high and minperm very low. Years ago, before the lru_file_repage parameter was introduced, we used to make maxperm low. If we did this now, we would prevent the system from caching the files of applications that are currently running.
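You can watch where numperm actually sits relative to those thresholds before and after the change. A sketch of the AIX commands involved (field labels vary slightly by TL level, so treat the grep pattern as an assumption):

```shell
# Where does file memory sit relative to the thresholds?
vmstat -v | grep -i numperm     # current file-memory percentage
vmo -L minperm%                 # lower threshold
vmo -L maxperm%                 # upper threshold
```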

Listing 4 shows how we'll set these parameters:


Listing 4. Setting the minperm, maxperm and maxclient parameters
 
                
vmo -p -o minperm%=5
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90

 

We also want to take a look at minfree and maxfree. When the number of pages on our free list falls below minfree, the VMM starts to steal pages, and it keeps stealing them until the free list has been replenished up to maxfree. The values should be similar to the ones shown in Listing 5.


Listing 5. Setting the minfree and maxfree parameters
 
                
vmo -p -o minfree=960
vmo -p -o maxfree=1088


 


 

CPU

In this section, we'll discuss CPU as it relates to AIX and Oracle. We'll discuss how we can tune our CPU subsystems and take advantage of recent System p innovations to increase Oracle performance.

Let's start with SMT (simultaneous multithreading). This important POWER5 innovation allows a single physical processor to dispatch instructions from several hardware threads concurrently. In AIX 5L Version 5.3, a dedicated partition created with one physical processor is configured as a logical two-way by turning on SMT, which allows two hardware threads to run on one physical processor at the same time. You should always leave SMT on with Oracle (see Listing 6).


Listing 6. Leaving SMT on with Oracle
 
                
root@lpar21ml16ed_pub[/home/u0004773] > smtctl

This system is SMT capable.

SMT is currently enabled.

SMT boot mode is not set.
SMT threads are bound to the same virtual processor.

proc0 has 2 SMT threads.
Bind processor 0 is bound with proc0
Bind processor 1 is bound with proc0

root@lpar21ml16ed_pub[/home/u0004773] >

 

Let's run a performance-monitoring utility, mpstat (see Listing 7).


Listing 7. Running the mpstat utility
 
                
root@lpar21ml16ed_pub[/] > mpstat 1 5

System configuration: lcpu=2 ent=0.2 mode=Uncapped

cpu  min  maj  mpc  int   cs  ics   rq  mig lpa sysc us sy wa id   pc  %ec  lcs
  0    0    0    0  557  274  128    1    1 100  682 26 51  0 22 0.02  9.9  769
  1    0    0    0  289    2    2    1    1 100    0  0 27  0 73 0.01  4.1  772
  U    -    -    -    -    -    -    -    -   -    -  -  -  0 86 0.22 86.1    -
ALL    0    0    0  846  276  130    2    2 100  682  3  6  0 91 0.03 13.9 1541

 

Though our system has only one physical CPU, we can see both logical CPUs when analyzing our system.

Another important utility worth mentioning is nmon, which has been my favorite monitoring utility for years now (see Figure 1).


Figure 1. nmon output
nmon analyzer
 

Although nmon shows activity by CPU, you can use different flags to show the amount of activity that the Oracle processes are using. Furthermore, using the nmon analyzer, you can download information into spreadsheets and compile nice-looking charts that senior management likes to see.

There are some other important things you can do with CPU:

  • Processor affinity -- This binds processes to specific processors, so related work stays on the same CPU and benefits from a warm cache.
  • Nice and renice -- These change the priority of running processes. It is not recommended to renice Oracle processes.
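A quick sketch of what those two knobs look like on AIX; the PID here is purely illustrative:

```shell
# Pin a (hypothetical) batch job, PID 245764, to logical processor 0
bindprocessor 245764 0
# Lower its priority -- but never renice the Oracle processes themselves
renice -n 5 -p 245764
# List the logical processors available for binding
bindprocessor -q
```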

Another utility that is important with monitoring CPU is vmstat, which will also quickly let you know where a bottleneck resides.


 


 

Disk I/O

In this section, we'll discuss the disk I/O subsystem as it relates to AIX and Oracle. We'll review how we can monitor and tune our I/O subsystems and also discuss some important subsystems that relate to I/O.

When our system is slow, most inexperienced administrators will usually look at CPU. It is, however, the disk I/O subsystem that can cause the most problems. We'll examine the ever-important asynchronous I/O and concurrent I/O in this section, as well.

Asynchronous I/O (AIO) servers

AIO allows the system to continue processing while I/O completes in the background, rather than forcing Oracle to wait for each I/O to finish before starting new work. This improves performance significantly because processing and I/O run at the same time. If asynchronous I/O is not tuned properly, it can significantly hurt the overall write performance of the I/O subsystem. We can monitor the AIO subsystem by using either iostat or nmon (see Listing 8).


Listing 8. Monitoring the AIO subsystem using iostat
 
                
root@lpar21ml16ed_pub[/home/u0004773] > iostat -A 1 5

System configuration: lcpu=2 drives=2 ent=0.25 paths=2 vdisks=2

aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
        0    0  312    0 4096             3.1   7.1   89.8      0.0   0.0   16.7

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           0.0       0.0       0.0          0         0
hdisk0           0.0       0.0       0.0          0         0

 

The following list is a description of parameters used to monitor the AIO subsystem.

  • avgc: This reports the average global asynchronous I/O requests per second over the interval you specified.
  • avfc: This reports the average fastpath request count per second over the interval.
  • maxgc: This reports the maximum global asynchronous I/O request count since the last time this value was fetched.
  • maxfc: This reports the maximum fastpath request count since the last time this value was fetched.
  • maxreqs: This is the maximum number of asynchronous I/O requests allowed.

In our case, AIO servers are not a system bottleneck.
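If avgc ever did press up against the configured server count, the legacy AIO device on AIX 5.3 is the place to adjust it. A sketch, with illustrative values rather than recommendations:

```shell
# Inspect the current AIO limits (minservers, maxservers, maxreqs)
lsattr -El aio0
# Raise maxservers; -P defers the change until the next reboot
chdev -l aio0 -a maxservers=256 -P
```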

Concurrent I/O (CIO)

CIO, introduced in AIX Version 5.2, is an extremely important feature that you should use in your Oracle environment. Like its predecessor, direct I/O, it allows filesystem I/O to bypass the VMM and transfer data directly to disk from the user's buffer. In addition, the way CIO is implemented in JFS2 allows multiple threads to read and write data concurrently to the same file. To turn it on, mount your filesystems with the cio option: # mount -o cio /orafilesystem.
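To make the cio option survive a reboot, it belongs in the filesystem's stanza in /etc/filesystems. A sketch, with hypothetical mount-point and logical-volume names:

```
/oradata:
        dev       = /dev/oradatalv
        vfs       = jfs2
        log       = /dev/loglv00
        mount     = true
        options   = rw,cio
```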

These elements are important to consider with CIO:

  • Raw devices -- While some Oracle DBAs like to create raw logical volumes for their data, and there is little argument about the performance benefit, in most cases it is too difficult to administer and usually I've found that the UNIX® administrators can talk the Oracle DBAs out of this one. With the advent of CIO, I would not use raw logical volumes unless performance is the driving factor of everything you are doing and you have the staff that can maintain the complexities inherent in this type of environment.
  • Spreading the wealth -- The more spindles you have, the more you should spread your I/O around. More adapters also mean more bandwidth and better performance. You should also try to keep indexes and redo logs off the same volumes as your data.
  • SAN -- Make sure you spend time looking at your SAN; optimizing the hardware will help you more than anything you can do at the OS level.

 


 

Oracle tools

In this section, we'll look at Oracle-specific tools that can help you with your AIX administration.

Statspack

This is an Oracle performance diagnosis tool, and I highly recommend that UNIX administrators learn to use it. It's really not that hard once you have it set up and configured, which is done from SQL*Plus after Oracle is installed. There are really two types of collection options: level and threshold. The level parameter controls the type of data collected from Oracle, while the threshold parameter acts as a filter for which SQL statements are collected into the summary tables.

Here's how to install it. After logging on to the systems as Oracle, start up sqlplus and then just follow the steps as instructed (see Listing 9).


Listing 9. Starting up sqlplus to install Statspack
 
                

SQL*Plus: Release 10.1.0.2.0 - Production on Sun May 18 19:21:21 2008

Copyright (c) 1982, 2004, Oracle.  All rights reserved.

Enter user-name: system as sysdba
Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL> @?/rdbms/admin/spcreate

Choose the PERFSTAT user's password
-----------------------------------
Not specifying a password will result in the installation FAILING

choose the Temporary tablespace for the PERFSTAT user
-----------------------------------------------------
Below is the list of online tablespaces in this database which can
store temporary data (e.g. for sort workareas).  Specifying the SYSTEM
tablespace for the user's temporary tablespace will result in the
installation FAILING, as using SYSTEM for workareas is not supported.

Choose the PERFSTAT user's Temporary tablespace.

 

Oracle Enterprise Manager

The Oracle Enterprise Manager is a tool that I've used for years. In order to turn it on, you'll need to make sure you first allow it to run when installing Oracle or creating a database using the Oracle dbca utility. After the database is created, you'll need to turn OEM on using: $ emctl start dbconsole.
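Two companion emctl commands are worth knowing as well; run them as the Oracle software owner:

```shell
emctl status dbconsole   # check whether the console is running
emctl stop dbconsole     # stop it cleanly, for example before maintenance
```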

This is what you'll put in your browser: http://lpar21ml16ed_pub:5505/em.

After logging in, you'll see something like Figure 2.


Figure 2. The Oracle Enterprise Manager
The Oracle Enterprise Manager
 

There is so much you can monitor and tune within OEM that there are actually books on this utility. If you are working in an Oracle environment, this is a must-use system.


 


 

Summary

In this article, we introduced the concepts of performance tuning as it relates to Oracle. We looked at the memory, CPU, and I/O subsystems as we analyzed and tuned our systems. We captured data and analyzed the results of our changes. We discussed important systems such as concurrent I/O and why implementing these systems will help our systems perform better. We also discussed some important kernel parameters, what they do, and how to tune them. At the same time, we made note of some important changes through the years and our approach to certain parameters. We also looked at some Oracle-specific utilities and how they could help us as AIX systems administrators.