Archive for July, 2009

SMIT is an AIX administrator's friend

Every UNIX flavor has something (or many things) that differentiates it from the others, and AIX is no exception thanks to the System Management Interface Tool (SMIT). As a former UNIX admin I couldn’t understand why anyone would want or need a GUI for UNIX. However, I have learned not only to understand SMIT but to utilize it more every day.

The tip that has helped me the most is to show what commands SMIT is actually doing. Before you run a command in SMIT, hit F6 and the command will be displayed for you. This is helpful for commands that differ from standard UNIX commands. Here is a sample command from the SMIT option “List All Supported Printers/Plotters”:

                 lsdev -P -c printer -F "type subclass description" | sort -u

It is true that lsdev is not specific to AIX. However, for admins who spend very little time in UNIX, it is useful to see how SMIT performs a task if you want to replicate it at the command line. To continue with the printer example, here is the command behind SMIT’s “List All Print Queues”:

                function is_ext {
                    if [[ -x /usr/lib/lpd/pio/etc/piolsvp ]]
                    then
                        /usr/lib/lpd/pio/etc/piolsvp -p 2>/dev/null
                    else
                        /usr/bin/lsallq
                    fi
                }; is_ext

As you can see, it is much easier to run this from SMIT than from the command line. True, you could write a script to do the same. But how often are all of the functions available in SMIT actually performed? SMIT was created for easy administration. Being a UNIX guru is not necessary for basic system administration.

Another example is LPP, the packaging system used by AIX. It is likely you will have to work with LPP at least once, and installation of packages is pretty straightforward. For example, the command behind “Show Software Installation History” is:

                lslpp -h all

However, SMIT offers some other potentially useful options. One of them is “List Applied but Not Committed Software Updates”:

                installp -s 1>/tmp/_$$.out 2>/tmp/_$$.err ; cat /tmp/_$$.err /tmp/_$$.out ; rm -rf /tmp/_$$.out /tmp/_$$.err
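The redirect-then-show pattern SMIT generates here (stderr printed before stdout) is generic enough to wrap in a small POSIX shell helper. The sketch below is my own illustration, not something SMIT produces; the helper name `run_and_show` is hypothetical:

```shell
# Illustrative sketch of the pattern in the SMIT command above: run any
# command, capture stdout and stderr in temp files, then print stderr
# first followed by stdout, and clean up. Not part of SMIT or AIX.
run_and_show() {
    out=$(mktemp) err=$(mktemp)
    "$@" 1>"$out" 2>"$err"
    status=$?
    cat "$err" "$out"     # errors shown first, as SMIT's generated command does
    rm -f "$out" "$err"
    return $status
}
```

Something like `run_and_show installp -s` would then reproduce the SMIT command without leaving `_$$.out` and `_$$.err` files behind in /tmp if the command is interrupted mid-way.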

I have used the above command, and it helped me discover a problem I was having.

When I first started using SMIT, I would go in just to grab the F6 command for what I wanted, then head to the command line and do what I needed. I kept a nice little list of commands to reference. Then it occurred to me: why am I doing this? The whole point of SMIT is that administrators do not have to remember all of these commands.

Does SMIT replace all functions of UNIX administration? Not even close. It does, however, speed up the administration process for those who are not full-time UNIX administrators. It can also be essential for UNIX administrators who think they know how to do things. There are certain commands that, if run at the CLI, will not update the Object Data Manager (ODM). The ODM is where AIX keeps system and device configurations, similar to the Windows registry.

Categories: UNIX

CTRL+ALT+DEL is not just for rebooting

While I was speaking with a colleague earlier this week, the question of logon security came up. Specifically, he asked whether the classic CTRL+ALT+DEL (control, alt, delete) was still necessary, and beyond that, why it should or shouldn’t be used during the logon process.

The simple answer to his question was “Yes, keep using CTRL+ALT+DEL” for domain logons. When a user presses CTRL+ALT+DEL, Windows invokes the Graphical Identification and Authentication (GINA) module. Most administrators never deal with GINA directly; however, if you implement security devices as I have, it may be necessary to customize GINA.

GINA takes the user’s credentials and passes them securely on to their destination. Pressing CTRL+ALT+DEL assures users they are actually at the proper logon screen, because the operating system does not allow ordinary applications to intercept that key combination. If a hacker or an unwanted software package were ‘pretending’ to be the logon screen, usernames and passwords could be stolen. The false logon program will be exposed when the user hits CTRL+ALT+DEL and the Task Manager or Windows Security screen comes up instead (depending upon the version of Windows).
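As an aside, whether CTRL+ALT+DEL is required at logon is itself policy-controlled through the registry. The fragment below is a sketch of the relevant value as I understand it (a value of 0 keeps the secure attention sequence required); verify against your Windows version before relying on it:

```
Windows Registry Editor Version 5.00

; DisableCAD = 0 keeps the CTRL+ALT+DEL requirement at logon;
; 1 would allow logon without the secure attention sequence.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"DisableCAD"=dword:00000000
```

In a domain this is normally set centrally through Group Policy rather than by editing the registry directly.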

Some administrators will say CTRL+ALT+DEL is nothing but a legacy left over from the days when it was required to do a ‘soft reboot’ or end a hung process. This is true, but it is no longer the whole story. The origins of CTRL+ALT+DEL do in fact go back to that need. From a security standpoint, however, CTRL+ALT+DEL is important to ensure GINA is not imitated.

Categories: Microsoft, Security

A syslog server just isn’t good enough anymore

Like many network professionals, I have been tasked to provide some sort of log management system. Back in the ’90s I would have just thrown in a syslog server and called it good. Today, however, there is a need to analyze and react to logs from multiple devices. Traditional syslog servers simply collected logs; there were few, if any, analytical capabilities to correlate data between devices. There is also a plethora of regulations that network administrators must adhere to, from the Sarbanes-Oxley Act of 2002 (SOX) to the Payment Card Industry Data Security Standard (PCI).

An option called SIEM, which stands for Security Information and Event Management, came out some years ago to fill this void. SIEM has also been expanded as Security Incident and Event Manager. I prefer the first expansion of the acronym, though both are accurate descriptions, and the vendors I’m working with seem to prefer the first as well. These products were good at sorting through firewall and IDP logs. However, from personal experience I can attest to the fact that some of them were a nightmare to deploy and configure. The cost of these systems also put them at an almost unattainable level for SMBs.

Now there is a new breed of SIEM products which fill the gaps in log management and security management. Newer SIEM products take the features of a traditional syslog server and incorporate them into a more robust event manager. The first big change from traditional SIEM products is that they are no longer focused solely on security devices. SIEMs now collect data from all network and system devices. This includes (but is not limited to) routers, switches, firewalls, IDPs, SSL-VPNs, servers, applications, and more.

Just like a traditional syslog server, the SIEM will collect and archive all of the raw logs. This alone is good enough for some regulatory compliance issues, but simply complying with regulations is not enough in the modern IT world. Administrators must be able to find actual problems. To answer this, SIEMs analyze all of the log data and provide feedback to administrators, so suspicious or out-of-place events can be alerted on and reacted to. For instance, if multiple devices inside a network log a lot of SSH authentication failures, it could indicate that someone inside the network is trying to access devices they should not be on. The SIEM can then alert the security administrator to the event.
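The SSH-failure scenario above can be sketched as a tiny correlation rule. This is a toy illustration of the kind of analysis a SIEM performs internally, not a feature of any product; the log format, function name, and threshold are all assumptions:

```shell
# Toy correlation rule: count SSH authentication failures per source
# host in a syslog-style file and flag any host over a threshold.
# The log format and threshold are illustrative assumptions.
ssh_failure_alerts() {
    # $1 = log file, $2 = alert threshold
    grep 'sshd.*Failed password' "$1" |
        awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1) }' |
        sort | uniq -c |
        awk -v t="$2" '$1 >= t { print "ALERT:", $2, "had", $1, "failures" }'
}
```

Running something like `ssh_failure_alerts /var/log/auth.log 5` would print one alert line per source host with five or more failures. A real SIEM does this across every device's logs at once, in near real time, which is exactly the correlation a lone syslog server never gave us.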

My process of finding a SIEM for my corporation has just started. So far the products I have looked at are light years beyond the syslog servers and first-generation SIEMs I’ve worked with in the past. Some of the solutions come as an appliance or virtual machine, making deployment quick and easy. The key when evaluating these products is that they incorporate the best of a syslog server and a traditional SIEM into the same interface. As I get further in the process I will update my thoughts on the products tested.

Categories: Security

Get rid of bottlenecks

July 15, 2009

For anyone looking for a good book to read, I recommend The Goal by Eliyahu Goldratt and Jeff Cox. If you have gone to college for your BBA or MBA, it is quite likely you have already read this masterpiece. However, if like most IT professionals you have not gone for a business degree, I recommend picking it up immediately. The book is a work of fiction, but it is written in a way that teaches readers how to become better operations managers. There is also enough suspense to make it a very compelling ‘page-turner’.

I have always believed IT professionals should understand operations almost as well as actual operations managers do. Understanding how to support your customers means knowing their business. The main focus of an IT department is supporting operations; therefore IT professionals must understand how operations run, and how they should be running. Many operations problems can be solved with technology. If operations and IT managers can work together and speak the same language, there are few boundaries which cannot be overcome.

Another reason for IT professionals to read this book is its presentation of Goldratt’s Theory of Constraints. For those new to IT, this theory summarizes what network administrators and engineers do on a daily basis: look for and remove bottlenecks. Goldratt was speaking of business processes and operations when laying the foundations for the theory, but it is very applicable to networking. With the theory of constraints, the goal is to find the step in operations which goes slower than the steps before and/or after it. This is known as the ‘bottleneck’. In IT networking we use the same term for the exact same scenario.

For instance, suppose there are two locations which each have gigabit internal networks, connected by a 1.5 Mbps T1. The T1 would be considered the bottleneck. Having gigabit networks on each side is irrelevant for inter-site traffic, because that traffic is constrained to 1.5 Mbps. This is a simple example of a network bottleneck.
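To put numbers on it, a rough transfer-time calculation makes the constraint obvious. The helper below is a hypothetical back-of-the-envelope sketch; the sizes and speeds are illustrative and it ignores protocol overhead:

```shell
# Rough transfer time: size in MB * 8 bits/byte, divided by link speed
# in Mb/s. Illustrative only; real throughput is lower due to overhead.
transfer_seconds() {
    # $1 = payload size in MB, $2 = link speed in Mb/s
    awk -v mb="$1" -v mbps="$2" 'BEGIN { printf "%.0f\n", (mb * 8) / mbps }'
}
```

By this estimate a 1 GB backup (`transfer_seconds 1024 1.5`) takes roughly 5,461 seconds, about an hour and a half, over the T1, versus on the order of 8 seconds on the gigabit LAN.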

There are many ways to compensate for this bottleneck. Here are some possible solutions:

  • It may be determined this bottleneck does not affect production, in which case nothing will be done.
  • If the bottleneck is affecting production, it may be a simple matter of adding a second T1. This would double the bandwidth to 3 Mbps. Chances are the routers being used have a T1 port available, and the telco will waive the installation of the second T1 if you re-sign your contract for another 18 to 36 months.
  • If there is non-critical data being exchanged between sites restrict it to off-peak times. Data being backed up over the network can over-consume WAN links. Schedule the backups to occur at night when the network would normally be sitting idle.
  • Implement QoS. Ensure that important traffic always gets through. Users may be consuming bandwidth with YouTube videos and the like; make sure your network recognizes and prioritizes important traffic.
  • Upgrade to metro Ethernet. Depending upon where you live, metro Ethernet may be available. With metro Ethernet you can get the required amount of bandwidth and easily add more as the company grows.

The above solutions are not all-inclusive; there are other possibilities. The key here is to find the bottlenecks in your network, determine if they affect operations, and implement a plan. Bottlenecks are not always in network gear and circuits. The bottleneck for a slow application may be a server with insufficient memory or processor power. IT staff with insufficient training in a product can be a support bottleneck. Lack of documentation for a user to utilize software can be a bottleneck.

By reading The Goal, I believe IT professionals can look at operations in a whole new light. The key is to take that new understanding and incorporate it into daily IT operations. IT professionals who do so will find their careers soaring higher than they ever expected.

Categories: Networking

The firewall is NOT dead!

Finally, someone makes some sense when talking about firewalls. ScottL over at Juniper has an excellent post about why firewalls are still around. Even though this is a sneaky way to talk about their new SRX Series gear, he highlights a topic which I have debated recently with security vendors: the firewall is not dead!

As Scott pointed out, the traditional “firewall” no longer exists (or at least shouldn’t). Instead, modern firewalls fulfill many purposes, including intrusion detection, routing, anti-virus, anti-spam, anti-malware, VPN, remediation, and more. In fact I can’t recall working on a firewall recently that simply separated the ‘outside’ from the ‘inside’. The current firewalls I manage have multiple zones with complex routing and security rules between them.
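To illustrate the zone idea, here is a minimal sketch of what a zone-to-zone policy looks like in Junos-style configuration. The zone names, interfaces, and policy name are made up, and this is nowhere near a complete or working SRX configuration:

```
security {
    zones {
        security-zone trust   { interfaces { ge-0/0/0.0; } }
        security-zone untrust { interfaces { ge-0/0/1.0; } }
    }
    policies {
        from-zone trust to-zone untrust {
            policy allow-web {
                match {
                    source-address any;
                    destination-address any;
                    application junos-http;
                }
                then { permit; }
            }
        }
    }
}
```

The point is that traffic is permitted between named zones by explicit policy; a real deployment would have many zones and far more granular match conditions.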

However, no matter how much is done with them, one fact remains: the firewall acts as a barrier between zones. True, many of these zones no longer exist in the physical world but are instead virtual (I resist using the term ‘cloud’ here). At its most basic, though, the firewall is little different from what existed back in the ’90s.

For anyone out there looking to secure their perimeter, I recommend continuing the ‘old’ methodology of using the firewall as the foundation. Yes, it is true that determining what ports to open for new applications and services can be a pain. Just remember, a slight pain in deployment is much better than an agonizing death caused by a preventable perimeter breach.