Archive for October, 2009

Use rsync, ssh, and cron to synchronize files between AIX hosts

October 30, 2009

Recently I had to synchronize files between AIX hosts. There are a variety of methods available; the one I chose uses rsync, OpenSSH, and cron. I chose rsync because of its speed and its versatility in handling file attributes.

The first step was to install rsync from the AIX Toolbox for Linux Applications CD (OpenSSH and cron were already installed on these boxes). Rsync has a dependency on popt. Both RPMs can be found on the CD in RPMS/ppc. Here are the commands I ran to install them:

 $ rpm -Uhv popt-1.7-2.aix5.1.ppc.rpm
popt                        ##################################################

$ rpm -Uhv rsync-2.6.2-1.aix5.1.ppc.rpm
rsync                       ##################################################

The next step is to generate a key for SSH to use. The key is generated with no passphrase. There are security ramifications to using a passphrase-less key; study up on SSH if you need further information.

root@testhost# ssh-keygen -t dsa -b 1024 -f /home/admin/bin/testhost-rsync-key
Generating public/private dsa key pair.
Enter passphrase (empty for no passphrase):  [I pressed enter]
Enter same passphrase again:   [I pressed enter]
Your identification has been saved in /home/admin/bin/testhost-rsync-key.
Your public key has been saved in /home/admin/bin/testhost-rsync-key.pub.
The key fingerprint is:
f9:8c:4b:5f:08:cb:4d:47:9c:d7:43:81:6e:4a:33:9e root@testhost

The .pub file must now be copied to the remote host. I did this as follows:

scp /home/admin/bin/testhost-rsync-key.pub remotehostuser@remotehost:/home/remotehostuser/

I then logged into the remote host. In that user's home directory I did the following:

$ if [ ! -d .ssh ]; then mkdir .ssh ; chmod 700 .ssh ; fi
$ mv testhost-rsync-key.pub .ssh/
$ cd .ssh/
$ if [ ! -f authorized_keys ]; then touch authorized_keys ; chmod 600 authorized_keys ; fi
$ cat testhost-rsync-key.pub >> authorized_keys

At this point you should be able to use rsync and SSH to synchronize files between the hosts from a cron job. Please study SSH to determine how to harden the security.
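Before wiring anything into cron, it is worth a quick one-off test over the new key. A minimal check, reusing the host and key names from the walkthrough above:

$ ssh -i /home/admin/bin/testhost-rsync-key remotehostuser@remotehost 'echo key works'
key works
$ rsync -nv -e "ssh -i /home/admin/bin/testhost-rsync-key" /etc/motd remotehostuser@remotehost:/tmp/

The -n flag makes rsync do a dry run, so nothing is actually copied; if neither command prompts for a password, the key setup is good.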

An example of doing this is the user-synchronization script I am using. The script is written from the point of view of the remote host; it pulls the files from the main host.

#!/usr/bin/ksh
# Get new /etc/passwd & /etc/group files from mainhost
# Overwrite existing files
rsync -goptvz -e "ssh -i /home/admin/bin/remotehost-rsync-key" mainhost:/etc/passwd /etc/passwd
rsync -goptvz -e "ssh -i /home/admin/bin/remotehost-rsync-key" mainhost:/etc/group /etc/group
# Get new files from /etc/security from mainhost
# Overwrite existing files
rsync -goptvz -e "ssh -i /home/admin/bin/remotehost-rsync-key" mainhost:/etc/security/passwd /etc/security/passwd
rsync -goptvz -e "ssh -i /home/admin/bin/remotehost-rsync-key" mainhost:/etc/security/group /etc/security/group

In crontab I set this script to run every 10 minutes and log the output.

0,10,20,30,40,50 * * * * /home/admin/bin/usersync.ksh >> /home/admin/logs/usersync.out 2>&1

There are more ‘elegant’ ways to script this, but I’m an administrator not a programmer. I want simple and easy to understand.

The rsync options chosen are:

-g = preserve group

-o = preserve owner

-p = preserve permissions

-t = preserve modification times

-v = verbose (for the log)

-z = compress files during transfer

As you can see, the options ensure the passwd and group files on the remote host keep the same attributes as on the originating host. This is a primary reason I chose the rsync method.

Categories: UNIX

Rocket Software bought the Universe from IBM

October 22, 2009

Updated At End

I realized earlier today that I referred to Universe as IBM Universe. This is more out of habit than anything. As of October 1, 2009, Rocket Software finished its acquisition of IBM's Universe. I learned this only a week ago myself.

So far I am not impressed with Rocket Software. I have twice sent requests for a support account, and both times I have gotten no response.

Anyone going to the IBM U2 wiki will find their browser redirected to the Rocket Software support site. They do have a letter on their site asking customers for patience during the conversion of Universe from IBM to Rocket Software. I will give them the benefit of the doubt on this one; migrating development and support for a major product has to be a logistical nightmare. And since I have no emergency right now, I'm a little more patient.

Updated!!!

I received a call from the Director of Support for U2 on Monday. He was able to pinpoint my problem: the route I had taken into their website. And just as I figured, I'm supposed to go through my reseller for access. As a test of whether Rocket is committed to good support, in my opinion they passed with flying colors. Once they found there was a problem, they worked to find a resolution right away.

Categories: Universe

Universe 10.2 does not like to move

October 22, 2009

Recently I upgraded IBM Universe from 10.0 to 10.2 on an AIX box. The upgrade went fairly smoothly; the only annoying part was the licensing. Once the upgrade was done, things ran smoothly. That is, until I had to move things around on the SAN for performance and DR purposes.

Before the move, the /adv partition Universe was installed on sat in its own volume/LUN on the NetApp filer, and the two databases were also each in their own volume/LUN. This was problematic for multiple reasons. First, it provided only one r/w path for each database. Second, my snapshots were all separate on the filer, which could create a situation where the two databases were not perfectly in sync (perfect sync is a requirement of the ERP that utilizes these databases). Below is an example of the previous layout.
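A rough ASCII sketch of that layout:

    NetApp filer
    +------------+   +------------+   +------------+
    | volume/LUN |   | volume/LUN |   | volume/LUN |
    |    /adv    |   |    DB1     |   |    DB2     |
    +------------+   +------------+   +------------+
    (separate snapshots; one r/w path each)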

To overcome this, one larger volume was created on the SAN, with four equal-sized LUNs inside. These LUNs were presented to the AIX box, where the 4 physical volumes were used to create one large Volume Group. Inside this volume group I created 3 Logical Volumes to house /adv and the two databases. Below is a simple diagram of this:
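Again as a rough ASCII sketch:

    NetApp filer
    +--------------------------------------+
    | one volume:  LUN1  LUN2  LUN3  LUN4  |
    +--------------------------------------+
          |      |      |      |
        hdisk  hdisk  hdisk  hdisk    (AIX physical volumes)
          \      |      |      /
           one Volume Group (VG)
          /          |         \
      LV /adv     LV DB1     LV DB2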

As you can see, all of the data is now in one volume on the SAN. This allows for a snapshot that guarantees the state of all 3 LVs is captured at the same instant. Each LV now also has 4 potential r/w paths to the SAN, which is important because DB1 gets used mostly during the day and DB2 gets used at night. There has been a noticeable IO performance improvement since this change was made.

So if that went smoothly, what went wrong with Universe? I used tar to migrate data from the 3 old LVs to the new LVs. When the data was migrated I simply moved the mount points to where they needed to be permanently and started Universe using uv.rc start.
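For reference, the classic tar pipe for this kind of copy looks something like the following (the mount points here are illustrative, not my actual ones); repeat it for each of the three filesystems:

cd /old_adv
tar cf - . | (cd /new_adv && tar xpf -)

The p flag preserves permissions and (when run as root) ownership on extraction.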

At that point I got the error:

                Invalid .uvconfig

Before moving the data, Universe had worked fine, and nothing had gone wrong during the upgrade. After a call to our ERP vendor, they came up with an answer: IBM, in their infinite wisdom, decided to change how Universe 10.2 is licensed. Here is a snippet from the IBM article (I cannot find it online again, so I'm just posting it):

'Invalid .uvconfig'
This error occurs on UniVerse 10.2 because the authorization routine retrieves some inodes from the system, and uses them in the configuration and authorization keys. The inodes may change if UniVerse is copied to another system or an OS upgrade occurs. The list of inodes used is not available publicly. To resolve this error, UniVerse needs to be unauthorized and then reauthorized. The steps with uvregen would be:

1.) run 'bin/uvregen -u #_users+1' (changing a parameter, in this case user count, forces UV to become unauthorized)
2.) run 'bin/uvregen -u correct_#_of_users' (change parameter to correct value)
3.) Take configuration code in output and generate authorization code from website
4.) run 'bin/uvregen -A auth_key'
5.) stop universe, 'bin/uv -admin -stop' (uv segment may have been created in shared memory)
6.) start universe, 'bin/uv -admin -start'

Alternatively, you can reauthorize UniVerse using one of the other methods, i.e., Control Panel, UniAdmin, or Sys Admin menu.
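Put together as one sequence, and assuming for illustration a 32-user license (the counts here are placeholders, not my real ones), the reauthorization looks roughly like this:

bin/uvregen -u 33          # bump the user count to force UV to become unauthorized
bin/uvregen -u 32          # set it back to the licensed count; note the configuration code in the output
bin/uvregen -A auth_key    # apply the authorization code generated from the website
bin/uv -admin -stop        # clear any uv segment left in shared memory
bin/uv -admin -start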

Since I had used tar to move Universe from one LUN to another, the inodes had obviously changed, and this caused Universe to become "unauthorized". I followed the procedure to reauthorize Universe, and after that it came up with no problems. As a side note, be sure you know how many users you are licensed for; it's required for this procedure.

This has impacts beyond the simple migration I did here. It is possible that any future AIX or Universe upgrade will cause this to happen again. It is also likely that a DR scenario in which we run Universe from our backup SAN will require reauthorization.

Categories: Universe, UNIX

Get ready for next Tuesday, Network Admins

October 9, 2009

Next Tuesday MS is set to release a record 13 patches. ComputerWorld has a good article summarizing them. I haven't had time to fully read through the advance notification yet (it's on the long list of things to do today). Of these 13 patches, 8 are critical. Together they address 34 vulnerabilities, and rebooting is required. The patches cover all supported versions of Windows (including Windows 7), and there are IE and SQL patches as well. Looks like I'll be having fun next week (thank goodness for patch deployment technologies!)

Categories: Microsoft, Security

Network Security Admins: Don’t fight Web2.0!

October 9, 2009

Like many network security professionals, I have always taken the "lock it down, then give specific access" approach. This approach worked well throughout the '90s and early 2000s. However, with the rise of Web2.0, it just isn't possible anymore.

Web2.0 is all about collaboration and interaction. Marketing departments are using Facebook, Twitter, MySpace, Flickr, YouTube and a multitude of other interactive technologies to gain the attention of current and prospective customers. HR departments are using the same technologies to find and retain employees. There is a definite business case for Web2.0 technologies to be used in a corporate environment.

The legacy Web1.0 applications were easier to deal with. Firewalls and content-filtering proxy servers could be used to block and/or restrict traffic to web mail, IM, dating websites, and other interactive sites. Web2.0 has changed this. Perimeter security vendors have been slow to deal with Web2.0 management, other than blocking it outright. In my research I've also had almost every security vendor say "We have something coming soon!" FaceTime has some promising offerings for social-networking content control.

If the security technology is not quite there, then Web2.0 must be blocked, right? Wrong! From experience I can say that users wishing to use Web2.0 will find a way even if it is blocked. No matter how secure your network is, there are always new online proxy services providing a hole through your firewall. Not to mention that Web2.0 developers are building their applications to get into the corporate network in ways that make them hard to spot.

Instead, as IT professionals, we must work with the users to find out what they need out of Web2.0 technologies. Rather than saying "No, you can't have a Facebook site," we must research the technology and determine the best way for users to utilize it. I am currently doing this myself: I have researched how marketing can use Facebook, Flickr, YouTube, and Twitter together to provide meaningful interaction with customers, and at the same time I have researched the security best practices for each of those technologies to reduce business risk.

While doing this, the company policy also has to be updated. Most companies have an outdated corporate internet policy which does not take Web2.0 into account; worse, some companies have no internet usage policy at all. A new policy has to be written and put forth letting users know what they can and can't do on the internet while at work. Ensure the policy explains the dangers of interactive content, but also allows enough leeway that users can get their jobs done. I really hope the days of network security professionals acting as Big Brother are done.

Just remember: don't forbid users from using Web2.0! Despite your best intentions, it will cause a divide between IT and end users. No matter how secure you think your perimeter is, there are a multitude of ways for users to bypass that security and use Web2.0 technologies anyhow. Be sure to work with them to reduce the risk associated with online social-networking technologies.

Categories: Networking, Security

Make local backup of MySQL DB

October 8, 2009

To make a local backup of a MySQL DB, use the mysqldump command. This information is available all over the internet, but here it is again, in case I once again forget and have to search for it (that's the problem with things you rarely have to do).

Here is the most common usage; read the man page for further needs:

root@TEST# mysqldump --user %U --password=%P --all-databases > mysql.dump

mysqldump: the utility that backs up the DB
--user: in place of %U, enter the username of a MySQL account with access
--password=: in place of %P, enter the password of that account; make sure there is no space and the = is there
--all-databases: creates a dump of all databases; see the man page for individual databases or tables
>: redirect; in this case we are redirecting to a file called mysql.dump in the current directory
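For completeness, a restore is just the reverse: feed the dump file back into the mysql client (same %U and %P placeholders as above):

root@TEST# mysql --user %U --password=%P < mysql.dump

This replays every statement in the dump, recreating the databases and their tables.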

Categories: UNIX

Determine LUN ID in AIX

October 8, 2009

Recently I had to present some new LUNs from a NetApp to my AIX box. Here is the procedure I used to determine which hdisk was which.

First I had my storage administrator create a volume and LUNs on the filer and present them to this system over Fibre Channel. The LUN IDs presented were 19 and 23.

On the AIX box I ran cfgmgr -v to discover and configure the new disks. The -v is not necessary, but I like verbose mode so I can see what it is actually doing.

Then I ran lsdev to see which hdisks were present. As you can see below, I have 2 disks from the NetApp and 2 local SCSI disks, so my new LUNs were presented to the AIX box as hdisk0 and hdisk1.

root@TEST# lsdev | grep hdisk
hdisk0      Available 06-08-02      MPIO NetApp FCP Default PCM Disk
hdisk1      Available 06-08-02      MPIO NetApp FCP Default PCM Disk
hdisk2      Available 09-08-00-3,0  16 Bit LVD SCSI Disk Drive
hdisk3      Available 09-08-00-4,0  16 Bit LVD SCSI Disk Drive

To determine the LUN ID for each hdisk I then ran lsattr, the AIX command for displaying the attributes of a device. -E displays the effective values, and -l followed by the device name specifies the hdisk in question.

root@TEST# lsattr -El hdisk0 | grep lun_id
lun_id          0x13000000000000                 Logical Unit Number ID           False
root@TEST# lsattr -El hdisk1 | grep lun_id
lun_id          0x17000000000000                 Logical Unit Number ID           False

Notice the LUN IDs are reported here in hex. If you are unable to do hex-to-decimal conversion in your head, handy online converters are easy to find.

                HEX 0x13 = Decimal 19

                HEX 0x17 = Decimal 23
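You can also let the shell do the conversion; the POSIX printf utility (including the ksh builtin) accepts C-style hex constants:

root@TEST# printf "%d\n" 0x13
19
root@TEST# printf "%d\n" 0x17
23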

With the above information I now know the LUN presented with ID 19 shows up in AIX as hdisk0 and the LUN presented with ID 23 shows up in AIX as hdisk1.
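If you have more than a couple of disks to map, a small loop saves some typing. A quick sketch using standard AIX flags (-a selects the lun_id attribute, -F value prints just the value; local SCSI disks with no lun_id simply print blank):

#!/usr/bin/ksh
# Print each hdisk and its LUN ID
for d in $(lsdev -Cc disk -F name); do
    echo "$d: $(lsattr -El $d -a lun_id -F value 2>/dev/null)"
done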

With this information I then used SMIT to create the volume group, logical volumes, filesystems, and mount points.

Categories: UNIX