Monday, August 18, 2008

How to remove a dynamically allocated i/o slot in a DLPAR in IBM AIX

To remove a dynamically allocated I/O slot (it must be configured as a desired, not required, component) from a partition on a pSeries IBM server:

1) Find the slot you wish to remove from the partition:

# lsslot -c slot
# Slot Description Device(s)
U1.5-P2/Z2 Logical I/O Slot pci15 scsi2
U1.9-P1-I8 Logical I/O Slot pci13 ent0
U1.9-P1-I10 Logical I/O Slot pci14 scsi0 scsi1

In our case, it is pci14.
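The slot that holds a given device can also be picked out of the lsslot output programmatically. A small sketch over the sample output above (hypothetical values; the awk field positions assume the three-word "Logical I/O Slot" description, so device names begin at field 5):

```shell
# Sample `lsslot -c slot` output, as shown above
lsslot_output='U1.5-P2/Z2 Logical I/O Slot pci15 scsi2
U1.9-P1-I8 Logical I/O Slot pci13 ent0
U1.9-P1-I10 Logical I/O Slot pci14 scsi0 scsi1'

# Print the slot and its PCI parent for a given device (scsi0 here);
# devices start at field 5, after the "Logical I/O Slot" description
slot_info=$(printf '%s\n' "$lsslot_output" | awk -v dev=scsi0 '
  { for (i = 5; i <= NF; i++) if ($i == dev) print $1, $5 }')
echo "$slot_info"    # U1.9-P1-I10 pci14
```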

2) Delete the PCI adapter and all of its children in AIX before removal:

# rmdev -l pci14 -d -R
cd0 deleted
rmt0 deleted
scsi0 deleted
scsi1 deleted
pci14 deleted

3) Now, you can remove the PCI I/O slot device using the HMC:

a) Log in to the HMC

b) Select "Server and Partition", and then "Server Management"

c) Select the appropriate server and then the appropriate partition

d) Right click on the partition name, and then on "Dynamic Logical Partitioning"

e) In the menu, select "Adapters"

f) In the newly created popup, select the task "Remove resource from this partition"

g) Select the appropriate adapter from the list (only adapters configured as desired will appear)

h) Select the "OK" button

i) A popup window will report whether the operation was successful.


lsslot -c slot; rmdev -l pci14 -d -R

How to recover failed MPIO paths from an IBM VIO server on an AIX LPAR

If you have set up disks from 2 VIO servers using MPIO to an AIX LPAR, then you need to make some changes to your hdisks.

You must make sure the hcheck_interval and hcheck_mode are set correctly:

Example for default hdisk0 settings:

# lsattr -El hdisk0
PCM PCM/friend/vscsi Path Control Module False
algorithm fail_over Algorithm True
hcheck_cmd test_unit_rdy Health Check Command True
hcheck_interval 0 Health Check Interval True
hcheck_mode nonactive Health Check Mode True
max_transfer 0x40000 Maximum TRANSFER Size True
pvid 00cd1e7cb226343b0000000000000000 Physical volume identifier False
queue_depth 3 Queue DEPTH True
reserve_policy no_reserve Reserve Policy True

IBM recommends a value of 60 for hcheck_interval, and hcheck_mode should be set to "nonactive".

To change these values (if necessary):

# chdev -l hdisk0 -a hcheck_interval=60 -P

# chdev -l hdisk0 -a hcheck_mode=nonactive -P

Now, you would need to reboot for automatic path recovery to take effect.

If you did not set hcheck_interval and hcheck_mode as described above, or did not reboot, then after a path failure you will see the following even after the path is back online:

# lspath
Enabled hdisk0 vscsi0
Failed hdisk0 vscsi1

To fix this, you would need to execute the following commands:

# chpath -l hdisk0 -p vscsi1 -s disable

# chpath -l hdisk0 -p vscsi1 -s enable

Now, check the status again:

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
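Cycling every failed path can be scripted by parsing the lspath output. A dry-run sketch (it only echoes the chpath commands; on a real system you would pipe the result to sh):

```shell
# Sample `lspath` output with one failed path, as above
lspath_output='Enabled hdisk0 vscsi0
Failed hdisk0 vscsi1'

# Build a disable/enable cycle for each Failed path (dry run: echoed only)
recovery=$(printf '%s\n' "$lspath_output" | awk '$1 == "Failed" {
  printf "chpath -l %s -p %s -s disable\n", $2, $3
  printf "chpath -l %s -p %s -s enable\n", $2, $3
}')
echo "$recovery"
```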

chpath -l hdisk0 -p vscsi1 -s disable; chpath -l hdisk0 -p vscsi1 -s enable

How to mount an ISO file as a filesystem in AIX

In AIX you "dd" the ISO file into a raw LV, then mount the LV as a filesystem.

Here are the steps for copying the ISO named "image.iso" into "/cd1iso", a JFS filesystem:

1. Create a filesystem with size slightly bigger than the size of the ISO image. Do NOT mount the filesystem:
# /usr/sbin/crfs -v jfs -g rootvg -a size=800M -m /cd1iso -Ano -pro -tno -a frag=4096 -a nbpi=4096 -a ag=8

2. Get the logical volume name associated with the new filesystem:
# lsfs | grep cd1iso (assume it is /dev/lv00)

3. dd the ISO image into rlv00 (raw lv00):
# dd if=image.iso of=/dev/rlv00 bs=10M

4. Edit the /cd1iso stanza in /etc/filesystems so that vfs = cdrfs and options = ro (read-only):

dev = /dev/lv00
vfs = cdrfs
log = /dev/loglv00
mount = false
options = ro
account = false

5. Mount the file system :
# mount /cd1iso

6. When finished, remove the filesystem:
# rmfs /cd1iso


/usr/sbin/crfs -v jfs -g rootvg -a size=800M ...

How to mount a cd manually in AIX

To manually mount a cd in IBM AIX:

# mount -V cdrfs -o ro /dev/cd0 /cdrom


mount -V cdrfs -o ro /dev/cd0 /cdrom

How to find the world-wide name (WWN) of a fibre-channel card in IBM AIX

To find the world-wide name (WWN) or network address of a fibre-channel (FC) card in IBM AIX:

First find the name of your fibre-channel cards:

# lsdev -Cc adapter | grep fcs

Then get the WWN (for fcs0 in this example):

# lscfg -vp -l fcs0 | grep "Network Address"


lscfg -vp -l fcs0 | grep "Network Address"

How to find what level your IBM VIO Server is running at

To determine which level of Virtual I/O Server (VIOS) you're running:

1) Log in to the VIO partition as the user "padmin"

2) Issue the ioslevel command:

# ioslevel


# ioslevel

How to find the values for asynchronous i/o in IBM AIX

To find the values for asynchronous I/O in IBM AIX:

# lsattr -El aio0


lsattr -El aio0

How to find the number of asynchronous i/o servers running in IBM AIX

To determine how many POSIX AIO servers (aios) are currently running, as root:

# pstat -a | grep -c posix_aioserver

To determine how many legacy AIO servers (aios) are currently running, as root:

# pstat -a | grep -c aioserver


pstat -a | grep -c aioserver

How to find the maximum supported logical track group (LTG) size of a disk in AIX

To find the maximum supported logical track group (LTG) size of a disk in IBM AIX, you can use the lquerypv command with the -M flag. The output gives the LTG size in KB.

# /usr/sbin/lquerypv -M hdisk#


/usr/sbin/lquerypv -M hdisk0

How to find all the rpm packages installed on a particular date

To find all the RPM packages which were installed on a particular date:

# rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH}) INSTALLED: %{INSTALLTIME:date}\n" | grep my_date


rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH}) INSTALLED: %{INSTALLTIME:date}\n" | grep "29 Sep 2006"

To find the install date and time of an RPM package:

# rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH}) INSTALLED: %{INSTALLTIME:date}\n" | grep rpm_package_name

If you want the epoch time rather than human readable date:

# rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH}) INSTALLED: %{INSTALLTIME}\n" | grep rpm_package_name


rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH}) INSTALLED: %{INSTALLTIME:date}\n" | grep libaio

How to enable full core dumps in IBM AIX

To enable full core dumps in IBM AIX:

# chdev -l sys0 -a fullcore='true'


chdev -l sys0 -a fullcore='true'

How to extract an individual file from an AIX mksysb on tape

To extract an individual file from a mksysb on tape:

# restore -s4 -xqvf /dev/rmt0.1 /my/filename


restore -s4 -xqvf /dev/rmt0.1 ./etc/passwd

How to enable extended netstat statistics in IBM AIX

To enable extended netstat statistics in IBM AIX:

# /usr/sbin/no -o extendednetstats=1

To disable extended netstat statistics in IBM AIX (default in AIX):

# /usr/sbin/no -o extendednetstats=0

Note: a reboot is required for the change to take effect.


/usr/sbin/no -o extendednetstats=1

How to enable extended history in AIX 5.3

In AIX 5.3, you have the capability to have a time-stamped history. To enable it, just set the following variable (for example in /etc/environment):

EXTENDED_HISTORY=ON


how to enable automatic notification of hardware errors in IBM AIX 5L

AIX 5L can email you when it detects a hardware error. To configure email notification, use the "diag" command.

# diag
=> Task Selection
=> Automatic Error Log Analysis and Notification
=> Add to the error notification mailing list


how to determine which MPIO path is associated to a vscsi adapter in AIX 5L

To determine which MPIO path is associated to a vscsi adapter in AIX 5L:

# lspath -F "name path_id parent connection status"

The output returns something similar to:

hdisk0 0 vscsi0 810000000000 Enabled
hdisk0 1 vscsi1 810000000000 Enabled
hdisk1 0 vscsi0 820000000000 Enabled
hdisk1 1 vscsi1 820000000000 Enabled

The first field is the hdisk.
The second field is the "Path" number (seen with 'lspath')
The third field is the parent (vscsi adapter)
The fourth field is the connection
The fifth field is the status (Enabled, Disabled, Missing, etc)
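Because the -F output is strictly field-based, it is easy to post-process. For example, a sketch that summarises which vscsi adapters serve each hdisk, using the sample output above:

```shell
# Sample `lspath -F "name path_id parent connection status"` output
paths='hdisk0 0 vscsi0 810000000000 Enabled
hdisk0 1 vscsi1 810000000000 Enabled
hdisk1 0 vscsi0 820000000000 Enabled
hdisk1 1 vscsi1 820000000000 Enabled'

# For each hdisk, list the vscsi adapters (field 3) that provide a path
summary=$(printf '%s\n' "$paths" | awk '
  { if ($1 in adapters) adapters[$1] = adapters[$1] " " $3
    else adapters[$1] = $3 }
  END { for (d in adapters) print d ": " adapters[d] }' | sort)
echo "$summary"
```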


lspath -F "name path_id parent connection status"

how to determine which eFixes are installed on your VIO Server

To determine which emergency fixes are installed on your Virtual I/O Server (VIOS):

1) Login to the VIO server as the user padmin

2) Set up the root environment by issuing:

$ oem_setup_env

3) List installed efixes by label:

# emgr -P

4) To remove a specific efix by label (IY71303 in my example):

# emgr -r -L IY71303

5) Go back to the padmin restricted shell

# exit


emgr -P

how to determine which application created the OS core file in AIX

To determine which application created the OS core file in AIX:

# /usr/sbin/lquerypv -h /path/to/core 6b0 64

The output of this command is neat, clean and easy to read. Here is an example:

# lquerypv -h core 6b0 64
000006B0 7FFFFFFF FFFFFFFF 7FFFFFFF FFFFFFFF |................|
000006C0 00000000 000007D0 7FFFFFFF FFFFFFFF |................|
000006D0 00120000 1312C9C0 00000000 00000017 |................|
000006E0 6E657473 63617065 5F616978 34000000 |netscape_aix4...|
000006F0 00000000 00000000 00000000 00000000 |................|
00000700 00000000 00000000 00000000 00000ADB |................|
00000710 00000000 000008BF 00000000 00000A1E |................|

The executable is located between the pipes on the right hand side of the output. In this case, the core was generated by Netscape.
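Since the program name is simply a printable string at a fixed offset inside the core, the same information can be dug out with standard tools. A sketch against a synthetic file (the offset and the "netscape_aix4" string mirror the example above; real cores vary by AIX release, so treat the numbers as illustrative):

```shell
# Build a tiny synthetic "core" with a program name at offset 0x6E0
core=$(mktemp)
dd if=/dev/zero of="$core" bs=1 count=2048 2>/dev/null
printf 'netscape_aix4' | dd of="$core" bs=1 seek=$((0x6E0)) conv=notrunc 2>/dev/null

# Read 16 bytes at that offset and keep only printable characters --
# the same text lquerypv -h shows between the pipes
name=$(dd if="$core" bs=1 skip=$((0x6E0)) count=16 2>/dev/null | tr -cd '[:print:]')
echo "$name"    # netscape_aix4
rm -f "$core"
```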


/usr/sbin/lquerypv -h /tmp/core 6b0 64

how to disable a single path through one HBA to a disk in IBM AIX

This disables a single path through one HBA to a disk in IBM AIX. This instructs the system to start using an alternative path to the disk.

# chpath -l hdisk# -p fscsi# -s disable


chpath -l hdisk# -p fscsi# -s disable

how to display the process hierarchy / tree in AIX 5.3

New in AIX 5.3, to display the hierarchy of a process:

# ps -T 1
1 - 0:00 init
4258 - 0:00 |\--errdemon
4728 - 0:00 |\--hdlm_log_push_process
5458 - 0:00 |\--hdlm_link_proc
6488 - 0:00 |\--aioserver
7300 - 0:02 |\--syncd
8092 - 0:00 |\--httpd
9912 - 0:00 | |\--httpd
13458 - 0:00 | |\--httpd
17580 - 0:00 | |\--httpd
17828 - 0:00 | |\--httpd
20642 - 0:00 | \--httpd
8514 - 0:00 |\--shlap
9496 - 0:00 |\--nfsd
11206 - 0:00 |\--random
12984 - 0:00 |\--rpc.lockd
13220 - 0:00 |\--srcmstr
6904 - 0:00 | |\--portmap
7562 - 0:00 | |\--sendmail
10404 - 0:00 | |\--IBM.LPRMd

ps -T 1

how to determine if simultaneous multi-threading (SMT) is enabled, and how to disable it, in AIX

Your system is capable of SMT if it's a POWER5-based system running AIX 5L Version 5.3.

To determine if it is enabled:

# smtctl



How to disable simultaneous multi-threading (SMT) in AIX:


To disable smt:

# smtctl -m off [ -w boot | -w now ]

Note: If neither the -w boot nor the -w now option is specified, the mode change is made immediately. It persists across subsequent reboots if you run the bosboot command before the next system reboot.


smtctl -m off

how to detach a network device and update the ODM in IBM AIX

If "ifconfig" is used to detach a network device, the ODM is NOT updated, and when the system is rebooted, the adapter will try to bring up its network interface because that is what the ODM instructs it to do.

To detach a network device and update the ODM, the following command needs to be used:

# chdev -l en0 -a state='detach'

This keeps the network device detached even across reboots.


chdev -l en0 -a state='detach'

how to clone (make a copy of ) the rootvg in AIX

To clone rootvg in IBM AIX, you can run the alt_disk_copy command to copy the current rootvg to an alternate disk.

The following example shows how to clone the rootvg to hdisk1.

# alt_disk_copy -d hdisk1


# alt_disk_copy -d hdisk1

how to check if any mksysb resources are allocated for use in IBM AIX

To see if any mksysb resources are allocated for use, the following command can be run:

# lsnim -a mksysb

With a remote hostname of "darkstar" and a mksysb resource with the name "darkstar_mksysb", the output will be similar to this:
# lsnim -a mksysb
mksysb = darkstar_mksysb


lsnim -a mksysb

how to change the reserve_policy so that multiple IBM VIO servers can access the same disk

In a p5 environment, for disk redundancy through two VIO servers, the default reserve policy on the disk needs to be changed to "no_reserve". This allows multiple VIO servers to see the disk and provide redundancy to LPARs. Here is the command:

# chdev -l hdisk# -a reserve_policy=no_reserve -P

This requires that the disk be removed from the system and brought back in. This can be accomplished with a reboot, or the following commands:

1. Unmount the filesystems and vary off the VG that the disk is part of.
2. rmdev -l hdisk#
3. mkdev -l hdisk#

NOTE: The "-P" flag makes the change in the ODM only. Without it, the command fails because it tries to update a disk that is already configured in the system, and an error is reported. That is why the removal and restoration of the disk is required.


chdev -l hdisk3 -a reserve_policy=no_reserve -P

genkld - list of shared objects in AIX

The genkld command extracts the list of shared objects currently loaded onto the system and displays the address, size, and path name for each object on the list.


# genkld

fwtmp - how to manipulate connect-time accounting records

fwtmp manipulates connect-time accounting records by reading binary records in wtmp format from standard input and writing them to standard output as formatted ASCII records. The ASCII version is useful when it is necessary to edit bad records.

# fwtmp [-ic]

where :

-ic : denotes that input is in ASCII form, and output is to be written in binary form.

Example: to convert binary records in wtmp format to an ASCII file called dummy.file, enter:

# fwtmp < /var/adm/wtmp > dummy.file

Example: to convert the ASCII dummy.file back to a binary file in wtmp format called /var/adm/wtmp, enter the fwtmp command with the -ic switch:

# fwtmp -ic < dummy.file > /var/adm/wtmp

Note: Depending on your flavour of Unix, the file may be called wtmpx or wtmp.


fwtmp < /var/adm/wtmp > dummy.file

To get a list of all the failed logins in IBM AIX:

To read the file /etc/security/failedlogin, you need to use fwtmp:

# /usr/sbin/acct/fwtmp < /etc/security/failedlogin

To get information for a particular user:

# /usr/sbin/acct/fwtmp < /etc/security/failedlogin | grep username


/usr/sbin/acct/fwtmp < /etc/security/failedlogin

Commands to find memory utilisation of processes

To find memory usage, try these commands (may vary with version of UNIX):

# svmon -u | more
# svmon -P | more
# ps aux | more
# ipcs -ma | more

Command to find process which uses the most memory in AIX:

# svmon -P -t 1 (aix 4.3.3)
# svmon -Pau 1 (aix 4.3.2)

To find the memory utilisation of a certain process:

# ps auwww [PID]

PID = process id


svmon -u ; svmon -P ; ps aux ; ipcs -ma

chvg - notify a VG of the increase of a disk in IBM AIX

To examine all the disks in the volume group to see if they have grown in size, in IBM AIX version 5.2 and onward:

# chvg -g

From the man page:

1. The user might be required to execute varyoffvg and then varyonvg on the
volume group for LVM to see the size change on the disks.
2. There is no support for re-sizing while the volume group is activated in
classic or enhanced concurrent mode.
3. There is no support for re-sizing for the rootvg.


chvg -g datavg1

chhmc - to change the HMC network configuration or to enable and disable remote command execution

The chhmc command is used to change the HMC network configuration or to enable and disable remote command execution:

chhmc -c [network | ssh] -s [enable | disable | add | modify | remove]
[ -i eth0 | eth1 [ -a ip-address ] [ -nm network-mask ]]
[ -d network-domain-name ] [ -h host-name ]
[ -g gateway ] [ -ns DNS-server ] [ -ds domain-suffix ]
[ --help ]

• -c – the type of configuration to modify. Valid values are ssh and network.
• -s – the new state value of the configuration. When the configuration type is ssh, the valid values are enable and disable. When the configuration type is network, the valid values are add, modify, and remove. Add and remove are valid only when specifying -ns or -ds.
• -i – the interface to configure. Valid values are eth0 and eth1. This parameter can be used only with -s modify.
• -a – the new network IP address. This parameter can be used only with the -i parameter.
• -nm – the new network mask. This parameter can be used only with the -i parameter.
• -d – the new network domain name. This parameter can be used only with -s modify.
• -h – the new host name. This parameter can be used only with -s modify.
• -g – the new gateway address. This parameter can be used only with -s modify.
• -ns – the DNS server to add or remove. This parameter can be used only with -s add or -s remove.
• -ds – the domain suffix to add or remove. This parameter can be used only with -s add or -s remove.
• --help – prints a help message.

The following are examples of the usage of this command.

To add and remove another domain suffix search other than mycompany.corp:

[hmcusr@hmcproj hmcusr]$ lshmc -n
Network Configuration:
Host Name: hmcproj
TCP/IP Interface 0 Address:
TCP/IP Interface 0 Network Mask: 255.255.255.0
Default Gateway:
Domain Name:
DNS Server Search Order:
Domain Suffix Search Order: mycompany.corp

[hmcusr@hmcproj hmcusr]$ chhmc -c network -s remove -ds mycompany.corp

[hmcusr@hmcproj hmcusr]$ lshmc -n
Network Configuration:
Host Name: hmcproj
TCP/IP Interface 0 Address:
TCP/IP Interface 0 Network Mask: 255.255.255.0
Default Gateway:
Domain Name:
DNS Server Search Order:
Domain Suffix Search Order:



chpv - how to clear the boot record of a physical volume in IBM AIX

To clear the boot record of a physical volume (disk) in IBM AIX:

# chpv -c hdisk#


chpv -c hdisk1

chsec - reset the failed login count for a user in IBM AIX

To reset the "unsuccessful_login_count" variable of a user in IBM AIX:

# chsec -f /etc/security/lastlog -a "unsuccessful_login_count=0" -s user


chsec -f /etc/security/lastlog -a "unsuccessful_login_count=0" -s user

chps - how to change your paging space (swap) in IBM AIX

To change your paging space (swap) in IBM AIX:

# chps [ -s LogicalPartitions | -d LogicalPartitions ] [ -a { y | n } ] PagingSpace


-s : specifies the number of logical partitions to add.

-a : specifies to use a paging space at the next system restart. Valid values are "y" or "n".

-d : specifies the number of logical partitions to subtract.

Example to add 16 logical partitions to paging00:

# chps -s'16' paging00

Example to remove 16 logical partitions from paging00:

# chps -d'16' paging00

Example to add 16 logical partitions to paging00 and make the change effective through reboots:

# chps -s'16' -a'y' paging00


chps -s'16' -a'y' paging00

chdev - how to change the media speed of a network interface in IBM AIX

To change the media speed of a network interface in IBM AIX:

# chdev -l 'ent0' -a media_speed='100_Full_Duplex' '-P'

Possible values for media_speed typically include:

10_Half_Duplex, 10_Full_Duplex, 100_Half_Duplex, 100_Full_Duplex, and Auto_Negotiation

Note: The "-P" flag makes this a permanent change, recorded in the ODM and applied at the next reboot.


chdev -l 'ent0' -a media_speed='100_Full_Duplex' '-P'

bootlist - how to view/modify the boot list in IBM AIX

To modify the boot list in IBM AIX:

# bootlist -m normal hdisk# hdisk#


bootlist -m normal hdisk0 hdisk1

To view the boot list in IBM AIX:

# bootlist -m normal -o


bootlist -m normal -o

bosboot - make a disk bootable in IBM AIX

To make a disk bootable on the default boot logical volume on the fixed disk from which the system is booted in IBM AIX:

# bosboot -a

To create a bootable image called /tmp/tape.bootimage for a tape device:

# bosboot -ad /dev/rmt0 -b /tmp/tape.bootimage

To create a boot image file for an Ethernet boot:

# bosboot -ad /dev/ent0


bosboot -a

Friday, May 30, 2008

How to Use Google to Search Within a Single Web Site?

Ever want to use Google to search a single Web site?

You can use Google's site: syntax to restrict your search to a single Web site. Make sure there's no space between site: and your Web site. Follow with a space and then your search terms. You don't need to use the "http://" portion of your URL.


This same search can be widened to include all the Web sites within a domain.

site:edu text buyback dates

Google's site: syntax can be mixed with other syntax, such as AND and OR searches.

Thursday, April 24, 2008

rsync examples from command line.

To copy all the files in /home/lokams/* of remote host to local directory /backup/home/lokams/
# rsync -e ssh -avz --delete --exclude dir/* --delete-excluded --stats user@remotehost:/home/lokams/ /backup/home/lokams/

To copy the directory /home/lokams of remote host to local directory /backup/home. Directory "lokams" will get created in /backup/home.
# rsync -e ssh -avz --delete --exclude-from=~/.rsync/excludeFile --delete-excluded --stats user@remotehost:/home/lokams /backup/home/

To ignore permission errors while copying to remote Windows/Mac systems, use the "-rlt" options in place of "-a" (which also tries to preserve permissions and ownership):
# rsync -e ssh -rltvz --delete --exclude dir/* --delete-excluded --stats user@remotehost:/home/lokams/ /backup/home/lokams/

rsync behaves differently if the source directory has a trailing slash. Study and learn the difference between the following two commands before moving on. Use the -n option to rsync when testing to preview what would happen.

$ rsync -n -av /tmp .
$ rsync -n -av /tmp/ .

More rsync examples:

# rsync -crogpvltz -e ssh --exclude "bin" --exclude "ifany" --delete --delete-after --bwlimit 20 root@ /apps/uae > /apps/uae/logs/sync.log

# rsync -avz -e ssh root@remotehost:/backup/reptest /backtmp/reptest/

# rsync -avz -e ssh /logstage/archive/DXBSEC/NODE0000/ db2inst1@

# rsync -e ssh -avz --timeout=999 --delete --exclude dir-or-file-to-exclude --delete-excluded --stats -rlt user@remotehost:/home/lokams/

rsync useful options:

-a, --archive archive mode, equivalent to -rlptgoD
-n, --dry-run show what would have been transferred ( Preview mode )

-c - always checksum
-r - recursive into directories
-o - preserve owner
-g - preserve group
-p - preserve permissions
-t - preserve times
-v - verbose
-l - copy symlinks as symlinks
-z - compress file data
-P - show progress during transfer
-q - quiet (decrease verbosity)
-e - specify the remote shell
-b - make backup
-R - Use relative path names
-u - skip files that are newer on the receiver

--stats give some file-transfer stats
--timeout=TIME set I/O timeout in seconds

--backup-dir - make backups into this directory
--bwlimit=KBPS - limit I/O bandwidth, KBytes per second
--delete - delete files that don't exist on the sending side
--delete-after - receiver deletes after transferring, not before
--daemon run as an rsync daemon
--address=ADDRESS bind to the specified address
--exclude=PATTERN exclude files matching PATTERN
--exclude-from=FILE exclude patterns listed in FILE
--include=PATTERN don't exclude files matching PATTERN
--include-from=FILE don't exclude patterns listed in FILE
--min-size=SIZE don't transfer any file smaller than SIZE
--max-size=SIZE don't transfer any file larger than SIZE

# --numeric-ids:
Tells rsync not to map user and group ID numbers to local user and group names
# --delete:
Makes the receiving side an exact copy of the source by removing any files that no longer exist on the sending side

Saturday, April 19, 2008

Creating large empty files in Linux / UNIX

To create large empty files in Linux or UNIX:

# dd if=/dev/zero of=filename bs=1024 count=desired

Example to create a 1GB file:

dd if=/dev/zero of=file_1GB bs=1024 count=1000000
dd if=/dev/zero of=file_1GB bs=4096 count=250000
dd if=/dev/zero of=file_1GB bs=2048 count=500000

Example to create a 2GB file:

dd if=/dev/zero of=file_2GB bs=2048 count=1000000
dd if=/dev/zero of=file_2GB bs=1024 count=2000000

Example to create a 512MB file:

dd if=/dev/zero of=file_512MB bs=1024 count=500000
dd if=/dev/zero of=file_512MB bs=512 count=1000000

Alternatively, on systems that provide it (such as Solaris), use:

# mkfile size myfile

where size can be given in KB, MB or GB using a k, m or g suffix. To create a 10 GB file, use:

# mkfile 10240m myfile

If you run "# mkfile -n 10240m myfile", the file is created but the disk blocks are not allocated until data is written into the file.
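The arithmetic is just bs times count, in bytes. A quick self-check with a 1 MB file (small enough to run instantly; scale bs and count up for the multi-GB cases above):

```shell
# 1024 * 1024 = 1048576 bytes = 1 MB
dd if=/dev/zero of=file_1MB bs=1024 count=1024 2>/dev/null
size=$(wc -c < file_1MB)
echo "$size"    # 1048576
rm -f file_1MB
```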

burn_cd - command to burn a cd

Command to burn a CD in AIX and Linux:

# burn_cd -d cd0 iso_image_file
Running readcd ...
Capacity: 2236704 Blocks = 4473408 kBytes = 4368 MBytes = 4580 prMB
Sectorsize: 2048 Bytes
burn_cd was successful.


# burn_cd -d cd0 iso_image_file

Zettabyte File System (ZFS) in Solaris 10

ZFS is an advanced modern filesystem from Sun Microsystems, originally designed for Solaris/OpenSolaris.

ZFS is a new file system in Solaris 10 OS which provides excellent data integrity and performance compared to other file systems (considering the enterprise storage scenario). Unlike previous file systems, it's a 128-bit file system, which means it can scale up to accommodate very large data. It is perhaps the world's first 128-bit file system. But why do we need so much scalability? The reason is simple. In an enterprise, data is continuously stored on servers and it keeps on increasing. Enterprises want to keep as much of this data live as possible, so that it can be quickly retrieved when required.

ZFS has many features which can benefit all kinds of users - from the simple end-user to the biggest enterprise systems:
  • Provable integrity - it checksums all data (and metadata), which makes it possible to detect hardware errors (hard disk corruption, flaky IDE cables, etc...)
  • Atomic updates - means that the on-disk state is consistent at all times, there's no need to perform a lengthy filesystem check after forced reboots or power failures
  • Instantaneous snapshots and clones - it makes it possible to have hourly, daily and weekly backups efficiently, as well as experiment with new system configurations without any risks
  • Built-in (optional) compression
  • Highly scalable
  • Pooled storage model - creating filesystems is as easy as creating a new directory. You can efficiently have thousands of filesystems, each with its own quotas and reservations, and different properties (compression algorithm, checksum algorithm, etc...)
  • Built-in stripes (RAID-0), mirrors (RAID-1) and RAID-Z (it's like software RAID-5, but more efficient due to ZFS's copy-on-write transactional model).
  • Many others (variable sector sizes, adaptive endianness, ...)

In traditional file systems, data is stored on a single disk or on a large volume consisting of multiple disks. In ZFS, a pooled storage model is used: every storage device is part of a single expandable storage pool, irrespective of where the data is being written. Each storage device inside the pool can hold different file systems, which helps administrators scale the system easily and efficiently; you no longer need to manage the file system itself, you just add a storage device to the pool. With this architecture, each file system under the pool can share the same space and I/O resources as the pool itself.

ZFS also detects and corrects data corruption. For example, after an I/O operation the disk may return an error message, say, 'Can't read the specified block.' The second case is silent data corruption, where an I/O operation simply returns corrupted results. ZFS identifies and, if possible, even corrects these corruptions, something which existing file systems can't do.

Managing existing file systems is also difficult. For example, you upgrade your system, after which you find that the file system doesn't support the machine and you have to copy all the data. This would consume a lot of time, but ZFS helps alleviate it. Moreover, existing file systems have limitations in terms of volumes, file size, etc.

ZFS definitely looks like a great engineering achievement and its makers have all rights to be proud of it. In their own words, they've blown away 20 years of obsolete assumptions and now they refer to ZFS as the last word in filesystems.

When ZFS was first announced, I'm sure many Linux hackers thought it would be a great idea to port such a great filesystem to Linux. Unfortunately, ZFS source is distributed under Sun's CDDL license, which is (some say deliberately) incompatible with the GPL license that the Linux kernel uses. So it looks like there will be no native port of ZFS to Linux in the foreseeable future. What a pity.

Wednesday, April 16, 2008

Bash Shortcuts and Tips

Repeating an argument
You can repeat the last argument of the previous command in multiple ways. Have a look at this example:

$ mkdir /path/to/dir
$ cd !$

The second command might look a little strange, but it will just cd to /path/to/dir.

Some keyboard shortcuts for editing

There are some pretty useful keyboard shortcuts for editing in bash. They might appear familiar to Emacs users:

• Ctrl + a => Return to the start of the command you're typing
• Ctrl + e => Go to the end of the command you're typing
• Ctrl + u => Cut everything before the cursor to a special clipboard
• Ctrl + k => Cut everything after the cursor to a special clipboard
• Ctrl + y => Paste from the special clipboard that Ctrl + u and Ctrl + k save their data to
• Ctrl + t => Swap the two characters before the cursor (you can actually use this to transport a character from the left to the right, try it!)
• Ctrl + w => Delete the word / argument left of the cursor
• Ctrl + l => Clear the screen

Redirecting both Standard Output and Standard Error:

# ls -ltR > /tmp/temp.txt 2>&1

Note the order: "2>&1" must come after the file redirection. "ls -ltR 2>&1 > /tmp/temp.txt" would still send stderr to the terminal.

Specify the following settings in your ~/.bashrc:

Make Bash append rather than overwrite the history on disk:

# shopt -s histappend

Whenever displaying the prompt, write the previous line to disk:

# export PROMPT_COMMAND='history -a'

To erase duplicate entries in history, or to ignore both duplicates and space-prefixed commands (the second export overrides the first):

# export HISTCONTROL=erasedups
# export HISTCONTROL=ignoreboth

To see the history with timestamps

# export HISTTIMEFORMAT="%d/%m/%Y-%H:%M:%S "

To set the size of the history, use HISTSIZE: the number of commands to remember in the command history. The default value is 500.

# export HISTSIZE=500
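Collected together, the history tips above make a compact ~/.bashrc fragment (note that HISTCONTROL takes a colon-separated list, so both values can be combined):

```shell
# ~/.bashrc -- shell history settings
shopt -s histappend                          # append to the history file, don't overwrite it
export PROMPT_COMMAND='history -a'           # flush each command to disk as the prompt is drawn
export HISTCONTROL=erasedups:ignoreboth      # drop duplicates and space-prefixed commands
export HISTTIMEFORMAT="%d/%m/%Y-%H:%M:%S "   # timestamp each entry
export HISTSIZE=500                          # number of commands to remember
```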

Searching the Past
  • Ctrl+R searches previous lines
This will put bash in history mode, allowing you to type a part of the command you're looking for. Meanwhile, it will show the most recent occasion where the string you're typing was used. If it shows a command that is too recent, you can go further back in history by pressing Ctrl + r again and again. Once you've found the command you were looking for, press Enter to run it.

  • !tcp will execute the previous command which starts with "tcp"

Tuesday, April 15, 2008

Simple usage of "tcpdump"

Tcpdump is a really great tool for network security analysts; you can dump packets that flow within your network to a file for further analysis. With some filters you can capture only the packets you are interested in, which reduces the size of the saved dump and the loading and processing time of the packet analysis.

Let's start with capturing packets based on network interface, ports and protocols. Assume we want to capture TCP packets flowing over eth1, port 6881. The dump will be saved to test.pcap.

tcpdump -w test.pcap -i eth1 tcp port 6881

Simple, right? What if, at the same time, we are also interested in UDP ports 33210 and 33220?

tcpdump -w test.pcap -i eth1 tcp port 6881 or udp port 33210 or udp port 33220

Logical OR implies PLUS (+): in plain text, capture TCP packets flowing over port 6881 plus UDP packets on ports 33210 and 33220. Be careful with 'and' in tcpdump filter expressions; it means intersection, which is why 'or' joins the two UDP ports. (Parentheses can group sub-expressions, but they must be escaped as '\(' and '\)' so the shell does not interpret them.) The usage of 'and' in tcpdump will be illustrated later.

Ok, how about reading pcap that I saved previously?

tcpdump -nnr test.pcap

The -nn tells tcpdump not to resolve IP addresses and port numbers to names, and -r means read from a file.

Adding -tttt makes the timestamp appear in a more readable format.

tcpdump -ttttnnr test.pcap

How about capturing based on IP? You need to tell tcpdump which IP you are interested in: destination IP or source IP? Say I want to sniff on a destination IP, tcp port 22 (the destination address is shown below as the placeholder <dst-ip>).

How should I write it?

tcpdump -w test.pcap dst host <dst-ip> and tcp port 22

So the 'and' makes the intersection of the destination IP and the port.

By default tcpdump captures only the first 96 bytes of each packet (the snap length); you can override that size by specifying a larger one with -s.

tcpdump -w test.pcap -s 1550 dst host <dst-ip> and tcp port 22

Some versions of tcpdump allow you to define a port range. You can capture packets on a range of tcp ports as below.

tcpdump tcp portrange 20-24

Bear in mind that in the line above I didn't specify -w, so it won't write to a file; it will just print the captured packets on the screen.

Sunday, March 23, 2008

find command real time examples

$ find . -name '*.gif' -exec ls {} \;

The -exec parameter holds the real power. When a file is found that matches the search criteria, the -exec parameter defines what to do with the file. This example tells the computer to:

1. Search from the current directory on down, using the dot (.) just after find.
2. Locate all files that have a name ending in .gif (graphic files).
3. List all found files, using the ls command.

The -exec parameter requires further scrutiny. When a filename is found that matches the search criteria, the find command executes the ls {} string, substituting the filename and path for the {} text. If saturn.gif was found in the search, find would execute this command:

$ ls ./gif_files/space/solar_system/saturn.gif

An important alternative to the -exec parameter is -ok; it behaves the same as -exec, but it prompts you to see if you want to run the command on that file. Suppose you want to remove most of the .txt files in your home directory, but you wish to do it on a file-by-file basis. Delete operations like the UNIX rm command are dangerous, because it's possible to inadvertently delete files that are important when they're found by an automated process like find; you might want to scrutinize all the files the system finds before removing them.

The following command lists all the .txt files in your home directory. To delete the files, you must enter Y or y when the find command prompts you for action by listing the filename:

$ find $HOME/. -name '*.txt' -ok rm {} \;

Each file found is listed, and the system pauses for you to enter Y or y. If you press the Enter key, the system won't delete the file.

If too many files are involved for you to spend time with the -ok parameter, a good rule of thumb is to run the find command with -exec to list the files that would be deleted; then, after examining the list to be sure no important files will be deleted, run the command again, replacing ls with rm.
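The "list first, then delete" workflow described above can be sketched in a scratch directory (the /tmp/find_demo path and file names are assumptions for illustration only):

```shell
# Scratch walkthrough of the "review with ls, then replace ls with rm" advice.
rm -rf /tmp/find_demo && mkdir -p /tmp/find_demo && cd /tmp/find_demo
touch a.txt b.txt keep.log

# Step 1: run find with ls to review what would be removed
find . -name '*.txt' -exec ls {} \;

# Step 2: satisfied with the list, run the same command with rm
find . -name '*.txt' -exec rm {} \;

ls -A    # only keep.log remains
```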

Both -exec and -ok are useful, and you must decide which works best for you in your current situation. Remember, safety first!


  • To remove all temp, swap and core files in the current directory.
$ find . \( -name '*.tmp*' -o -name '*.swp' -o -name 'core' \) -exec rm {} \;

  • To copy the entire contents of a directory while preserving the permissions, times, and ownership of every file and subdirectory
$ cd /path/to/source/dir
$ find . | cpio -vdump /path/to/destination/dir

  • To list the first line in every text file in your home directory
$ find $HOME/. -name '*.txt' -exec head -n 1 -v {} \; > report.txt
$ less report.txt

  • To maintain LOG and TMP file storage space for applications that generate a lot of these files, you can put the following commands into a cron job that runs daily:
  • The first command finds all the directories (-type d) under $LOGDIR whose contents were last modified more than 24 hours ago (-mtime +0) and compresses them (compress -r {}) to save disk space. The second command deletes them (rm -f {}) if they are more than a work-week old (-mtime +5), to increase the free space on the disk. In this way, the cron job automatically keeps the directories for a window of time that you specify.
$ find $LOGDIR -type d -mtime +0 -exec compress -r {} \;
$ find $LOGDIR -type d -mtime +5 -exec rm -f {} \;

  • To find links that point to nothing
$ find / -type l -print | perl -nle '-e || print';

  • To list zero-length files
$ find . -empty -exec ls {} \;

  • To delete all *.tmp files in the home directory
$ find ~/. -name "*.tmp" | xargs rm
$ find ~/. -name "*.tmp" -exec rm {} \;

  • To see what hidden files in your home directory changed in the last 5 days:
$ find ~ -mtime -5 -name \.\*

  • If you know something has changed much more recently than that, say in the last 14 minutes, and want to know what it was, there's the -mmin argument (note that -mmin 14 would mean exactly 14 minutes ago; -mmin -14 means within the last 14 minutes):
$ find ~ -mmin -14 -name \.\*

  • To locate files that have been modified since some arbitrary date use this little trick:
$ touch -d "13 may 2001 17:54:19" date_marker
$ find . -newer date_marker

  • To find files created before that date, use the cnewer and negation conditions:
$ find . \! -cnewer date_marker

  • To find files containing between 600 to 700 characters, inclusive.
$ find . -size +599c -and -size -701c

Thus we can use find to list files of a certain size:

$ find /usr/bin -size 48k

  • To find empty files
$ find . -size 0c

  • Using the -empty argument is more efficient. To delete empty files in the current directory:
$ find . -maxdepth 1 -empty -exec rm {} \;

  • To locate files belonging to a certain user:
# find /etc -type f \! -user root -exec ls -l {} \;

  • To search for files by the numerical group ID use the -gid argument:
$ find -gid 100

  • To find directories with '_of_' in their name we'd use:
$ find . -type d -name '*_of_*'

  • To redirect the error messages to /dev/null
$ find / -name foo 2>/dev/null

  • To remove all files named core from your system.
# find / -name core | xargs /bin/rm -f
# find / -name core -exec /bin/rm -f '{}' \; # same thing
# find / -name core -delete # same if using Gnu find
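The plain `find | xargs` pipeline above breaks on filenames that contain spaces or newlines; with GNU find and xargs, the -print0/-0 pair passes NUL-separated names safely. A minimal sketch (the /tmp/find0_demo scratch path is an assumption):

```shell
# Filenames with spaces survive a NUL-separated pipe intact (GNU find/xargs).
rm -rf /tmp/find0_demo && mkdir -p /tmp/find0_demo && cd /tmp/find0_demo
touch core "my core file"

find . -type f -name '*core*' -print0 | xargs -0 rm -f

ls -A    # nothing left
```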

  • To find files modified less than 10 minutes ago. I use this right after using some system administration tool, to learn which files got changed by that tool:

# find / -mmin -10

  • When specifying time with find options such as -mmin (minutes) or -mtime (24 hour periods, starting from now), you can specify a number n to mean exactly n, -n to mean less than n, and +n to mean more than n. For example:

# find . -mtime 0 # find files modified within the past 24 hours
# find . -mtime -1 # find files modified within the past 24 hours
# find . -mtime 1 # find files modified between 24 and 48 hours ago
# find . -mtime +1 # find files modified more than 48 hours ago
# find . -mmin +5 -mmin -10 # find files modified between 6 and 9 minutes ago

  • To find all files containing "house" in the name that are newer than two days and larger than 10K, try this (GNU find understands the k suffix; the default unit is 512-byte blocks):

# find . -name '*house*' -size +10k -mtime -2

  • The -xdev option prevents the scan from descending into another disk volume (it refuses to cross mount points). Thus, you can look for all regular directories on the current disk from a starting point like this:

# find /var/tmp -xdev -type d -print

  • To find world-writable files on your system (note that -perm 777 and -perm 666 match those exact permission sets):

# find / -perm 777 | xargs ls -ld | grep -v ^l | grep -v ^s
# find / -perm 666 | xargs ls -ld | grep -v ^l | grep -v ^s

To match any file with the world-write bit set, regardless of the other bits:

# find . -perm -o=w -exec ls -ld {} \; | grep -v ^l | grep -v ^s | grep -v ^c
# find . -perm -o=w | xargs ls -ld | grep -v ^l | grep -v ^s | grep -v ^c

  • To find the orphan files and directories in your system:

# find / -nouser -nogroup | xargs ls -ld

  • To find the files changed in last 5 mins and move them to a different folder.

# find /tmp -mmin -5 -type f -exec mv {} /home/lokams \;
# mv `find . -mmin -5 -type f` /tmp/
# find . -mmin -10 -type f | xargs -t -I {} mv {} /tmp/

  • To search on multiple directories

$ find /var /etc /usr -type f -user root -perm 644 -name '*ssh*'

  • To find and list all regular files set SUID to root (or anyone else with UID 0 ;)
# find / -type f -user 0 -perm -4000

  • To find all regular files that are world-writable and removes world-writability:
# find / -type f -perm -2 -exec chmod o-w {} \;

  • To find all files owned by no one in particular and give them to root:
# find / -nouser -exec chown root {} \;

  • To find all files without group ownership and give them to the system group:
# find / -nogroup -exec chgrp system {} \;

  • To find and gzip regular files in current directory that do not end in .gz
$ gzip `find . -type f \! -name '*.gz' -print`

  • To find all empty files in my home directory and delete after being prompted:
$ find $HOME -size 0 -ok rm -f {} \;

  • To find all files or symlinks in /usr not named fred:
$ find /usr \( -type f -o -type l \) \! -name fred

  • If you have a file with spaces, control characters, leading hyphens or other nastiness and you want to delete it, here's how find can help. Forget the filename; use the inode number instead. Say we have a file named "-rf .", spaces and all. We wouldn't dare attempt removing it the normal way for fear of effectively issuing 'rm -rf .' at the command line.

$ echo jhjhg > "-rf ."
$ ls -la
total 4
-rw-r----- 1 mongoose staff 6 Nov 07 15:57 -rf .
drwxr-x--- 2 mongoose staff 512 Nov 07 15:57 .
drwxr-xr-x 3 mongoose bin 1024 Nov 07 15:53 ..

Find the I-number of the file using the -i flag in ls:

$ ls -lai .
total 4
18731 -rw-r----- 1 mongoose staff 6 Nov 07 15:57 -rf .
18730 drwxr-x--- 2 mongoose staff 512 Nov 07 15:57 .
1135 drwxr-xr-x 3 mongoose bin 1024 Nov 07 15:53 ..
There it is: I-node 18731. Now plug it into find and make find delete it:

$ find . -inum 18731 -ok rm {} \;
rm ./-rf . (?) y
$ ls -la
total 3
drwxr-x--- 2 mongoose staff 512 Nov 07 16:03 .
drwxr-xr-x 3 mongoose bin 1024 Nov 07 15:53 ..
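An alternative to the inode trick: since the real problem is the leading hyphen, prefixing the name with ./ (or using rm --) also works. A quick sketch in a scratch directory (the /tmp/badname_demo path is an assumption):

```shell
# Deleting an awkwardly named file without resorting to its inode number.
rm -rf /tmp/badname_demo && mkdir -p /tmp/badname_demo && cd /tmp/badname_demo
echo junk > "-rf ."          # the awkwardly named file

rm "./-rf ."                 # the ./ prefix stops rm parsing the name as options
# equivalently: rm -- "-rf ."

ls -A                        # directory is empty again
```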

Simple Port Forwarding using IPTABLES

# IP Forwarding
echo "1" > /proc/sys/net/ipv4/ip_forward

# Policy
/sbin/iptables -P INPUT ACCEPT
/sbin/iptables -P OUTPUT ACCEPT
/sbin/iptables -P FORWARD ACCEPT

# IP Masquerade
/sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward
/sbin/iptables -A FORWARD -i eth0 -j ACCEPT

# Port forwarding from port 8888 to <internal-ip>:80 (the internal address is a placeholder)
/sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 8888 -j DNAT --to <internal-ip>:80
/sbin/iptables -A FORWARD -p tcp -i eth0 -d <internal-ip> --dport 80 -j ACCEPT

Saturday, March 22, 2008

Using Hamachi on Linux

LogMeIn Hamachi is a VPN service that easily sets up in 10 minutes, and enables secure remote access to your business network, anywhere there’s an Internet connection.
It works with your existing firewall, and requires no additional configuration. Hamachi is the first networking application to deliver an unprecedented level of direct peer-to-peer connectivity. It is simple, secure, and cost-effective.

Hamachi is a simple way of making a VPN between different computers. It can be used for anything from printing to your office printer from home to sharing heavily encrypted files with your friends over the internet.

Installing Hamachi

The first thing you need to do is download Hamachi from the LogMeIn web site.

Extract the archive and run make as root:

# tar zxvf hamachi-<version>.tar.gz

# cd hamachi-<version>

# make install

Now you need to start the tuncfg daemon as root

# /sbin/tuncfg


To begin using Hamachi, you first must initialize it, running as your own user.

$ hamachi-init

When you have completed the initialization you can start Hamachi by simply typing

$ hamachi start

Since you are starting hamachi for the first time you need to tell the client to go online

$ hamachi login

You may want to change your nickname with

$ hamachi change-nick

Once logged in you can create a network with

$ hamachi create networkname

You can then give the network name and password to your friends and ask them to join you. Once you have friends in your network you can list their IP-addresses with

$ hamachi list

You can then use the listed IP addresses to connect to your friends just as if they were on your LAN.

For more commands and options check the Hamachi Readme.

Automatically starting Hamachi

To get Hamachi to automatically start when you start your computer you need to create a startup script for it.

### This is a startup script for hamachi ###

USER=yourusername

case "$1" in
start)
/bin/su - $USER -c "hamachi start"
;;
stop)
/bin/su - $USER -c "hamachi stop"
;;
restart)
/bin/su - $USER -c "hamachi stop"
/bin/su - $USER -c "hamachi start"
;;
*)
echo "Usage: $0 {start|stop|restart}"
exit 1
;;
esac

exit 0

################ End of Hamachi Startup Script ##################

Change the USER variable to your username in the script. Then make the script executable and move it to /etc/init.d/

# chmod +x hamachi

# mv hamachi /etc/init.d

You then need to link the script to the appropriate runlevel

# ln -s /etc/init.d/hamachi /etc/rc3.d/S99hamachi

# ln -s /etc/init.d/hamachi /etc/rc3.d/K99hamachi

Where 3 is the runlevel for which you want Hamachi to start.

Thursday, March 20, 2008

Hardware and System information tools in Linux

Hardware Lister (lshw) -

lshw (Hardware Lister) is a small tool to provide detailed information on the hardware configuration of the machine. It can report exact memory configuration, firmware version, mainboard configuration, CPU version and speed, cache configuration, bus speed, etc.


lshw [format] [options... ] where format can be

-X to launch the GUI (if available)
-html to activate HTML mode
-xml to activate XML mode
-short to print hardware paths
-businfo to print bus information


dmidecode:

Dmidecode reports information about your system's hardware as described in your system BIOS. This command gives you the vendor name, model name, serial number, BIOS version and asset tag, as well as a lot of other details. This includes usage status for the CPU sockets, expansion slots (e.g. AGP, PCI, ISA) and memory module slots, and the list of I/O ports (e.g. serial, parallel, USB). A pretty useful command for sysadmins to prepare their system inventory.


cfg2html:

Cfg2html generates an HTML or plain ASCII report of your Linux/AIX/HP-UX system. It includes configuration information about the kernel, filesystems, security, etc., and might be useful for system documentation.

AIX version of cfg2html can be found here:

Linux and HP-UX version of cfg2html can be found here:

sosreport: (son of sysreport)

The command sosreport is a tool that collects information about a system, such as what kernel is running, what drivers are loaded, and various configuration files for common services. It also does some simple diagnostics against known problematic patterns.

Ethernet bonding in Linux

Bonding is the creation of a single bonded interface by combining two or more ethernet interfaces. This helps with high availability and improves performance.

Steps for bonding in Fedora Core and Redhat Linux

Step 1.

Create the file ifcfg-bond0 with the IP address, netmask and gateway. Shown below is my test bonding config file.

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0

IPADDR=192.168.10.100
NETMASK=255.255.255.0
GATEWAY=192.168.10.1

Step 2.

Modify eth0, eth1 and eth2 configuration as shown below. Comment out, or remove the ip address, netmask, gateway and hardware address from each one of these files, since settings should only come from the ifcfg-bond0 file above.

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0


$ cat /etc/sysconfig/network-scripts/ifcfg-eth1


$ cat /etc/sysconfig/network-scripts/ifcfg-eth2


Step 3.

Set the parameters for the bond0 bonding kernel module. Add the following lines to /etc/modprobe.conf

# bonding commands
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

Note: Here we configured the bonding mode as "balance-alb". All the available modes are given at the end and you should choose appropriate mode specific to your requirement.

Step 4.

Load the bond driver module from the command prompt.

$ modprobe bonding

Step 5.

Restart the network, or restart the computer.

$ service network restart # Or restart computer

When the machine boots up check the proc settings.

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:13:72:80:62:f0

Look at ifconfig -a and check that your bond0 interface is active. You are done!

RHEL bonding supports 7 possible "modes" for bonded interfaces. These modes determine the way in which traffic sent out of the bonded interface is actually dispersed over the real interfaces. Modes 0, 1, and 2 are by far the most commonly used among them.

* Mode 0 (balance-rr)
This mode transmits packets in a sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets arrive destined out of the bonded interface the first will be transmitted on the first slave and the second frame will be transmitted on the second slave. The third packet will be sent on the first and so on. This provides load balancing and fault tolerance.

* Mode 1 (active-backup)
This mode places one of the interfaces into a backup state and will only make it active if the link is lost by the active interface. Only one slave in the bond is active at an instance of time. A different slave becomes active only when the active slave fails. This mode provides fault tolerance.

* Mode 2 (balance-xor)
Transmits based on an XOR formula: (source MAC address XOR'd with destination MAC address) modulo slave count. This selects the same slave for each destination MAC address and provides load balancing and fault tolerance.

* Mode 3 (broadcast)
This mode transmits everything on all slave interfaces. This mode is least used (only for specific purpose) and provides only fault tolerance.

* Mode 4 (802.3ad)
This mode is known as Dynamic Link Aggregation mode. It creates aggregation groups that share the same speed and duplex settings. This mode requires a switch that supports IEEE 802.3ad Dynamic link.

* Mode 5 (balance-tlb)
This is called as Adaptive transmit load balancing. The outgoing traffic is distributed according to the current load and queue on each slave interface. Incoming traffic is received by the current slave.

* Mode 6 (balance-alb)
This is Adaptive load balancing mode. This includes balance-tlb + receive load balancing (rlb) for IPV4 traffic. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the server on their way out and overwrites the src hw address with the unique hw address of one of the slaves in the bond such that different clients use different hw addresses for the server.

Wednesday, March 19, 2008

About SUID, SGID and Sticky bit

Set user ID, set group ID, sticky bit

In addition to the basic permissions discussed above, there are also three bits of information defined for files in Linux:

* SUID or setuid: change user ID on execution. If the setuid bit is set, when the file is executed by a user, the process has the same rights as the owner of the file being executed.
* SGID or setgid: change group ID on execution. Same as above, but the process inherits the rights of the group of the file's owner on execution. For directories it also may mean that a new file created in the directory inherits the group of the directory (and not that of the user who created the file).
* Sticky bit: it was used to make a process "stick" in memory after it finished; that usage is now obsolete. Currently its use is system dependent, and it is mostly used to prevent deletion of files belonging to other users in a directory where you have "write" access.

Numeric representation

Octal digit Binary value Meaning
0 000 setuid, setgid, sticky bits are cleared
1 001 sticky bit is set
2 010 setgid bit is set
3 011 setgid and sticky bits are set
4 100 setuid bit is set
5 101 setuid and sticky bits are set
6 110 setuid and setgid bits are set
7 111 setuid, setgid, sticky bits are set

Textual representation

SUID, If set, then replaces "x" in the owner permissions to "s", if owner has execute permissions, or to "S" otherwise.

-rws------ both owner execute and SUID are set
-r-S------ SUID is set, but owner execute is not set

SGID, If set, then replaces "x" in the group permissions to "s", if group has execute permissions, or to "S" otherwise.

-rwxrws--- both group execute and SGID are set
-rwxr-S--- SGID is set, but group execute is not set

Sticky, If set, then replaces "x" in the others permissions to "t", if others have execute permissions, or to "T" otherwise.

-rwxrwxrwt both others execute and sticky bit are set
-rwxrwxr-T sticky bit is set, but others execute is not set

Setting the sticky bit on a directory : chmod +t

If you have a look at the /tmp permissions, in most GNU/Linux distributions, you'll see the following:

lokams@tempsrv# ls -l | grep tmp
drwxrwxrwt 10 root root 4096 2006-03-10 12:40 tmp

The "t" in the end of the permissions is called the "sticky bit". It replaces the "x" and indicates that in this directory, files can only be deleted by their owners, the owner of the directory or the root superuser. This way, it is not enough for a user to have write permission on /tmp, he also needs to be the owner of the file to be able to delete it.

In order to set or to remove the sticky bit, use the following commands:

# chmod +t tmp
# chmod -t tmp
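The effect is easy to verify in a scratch directory (the /tmp/sticky_demo path is an assumption; stat -c is GNU coreutils): after chmod +t, the permission string ends in t.

```shell
# Set the sticky bit on a world-writable scratch directory and inspect it.
rm -rf /tmp/sticky_demo && mkdir /tmp/sticky_demo
chmod 777 /tmp/sticky_demo
chmod +t /tmp/sticky_demo

stat -c '%A' /tmp/sticky_demo    # drwxrwxrwt : the trailing x became t
```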

Setting the SGID attribute on a directory : chmod g+s

If the SGID (Set Group Identification) attribute is set on a directory, files created in that directory inherit its group ownership. If the SGID is not set the file's group ownership corresponds to the user's default group.

In order to set the SGID on a directory or to remove it, use the following commands:

# chmod g+s directory
# chmod g-s directory

When set, the SGID attribute is represented by the letter "s" which replaces the "x" in the group permissions:

# ls -l directory
drwxrwsr-x 10 george administrators 4096 2006-03-10 12:50 directory

Setting SUID and SGID attributes on executable files : chmod u+s, chmod g+s

By default, when a user executes a file, the resulting process has the same permissions as the user. In fact, the process inherits the user's default group and user identification.

If you set the SUID attribute on an executable file, the process resulting in its execution doesn't use the user's identification but the user identification of the file owner.

For instance, consider the script myscript, which tries to write into mylog.log:

# ls -l
-rwxrwxrwx 10 george administrators 4096 2006-03-10 12:50 myscript
-rwxrwx--- 10 george administrators 4096 2006-03-10 12:50 mylog.log

As you can see in this example, George gave full permissions to everybody on myscript, but he forgot to do so on mylog.log. When Robert executes myscript, the process runs using Robert's user identification and Robert's default group (robert:senioradmin). As a consequence, myscript fails and reports that it can't write to mylog.log.

In order to fix this problem George could simply give full permissions to everybody on mylog.log. But this would make it possible for anybody to write to mylog.log, and George only wants this file to be updated by his program. For this he sets the SUID bit on myscript:

# chmod u+s myscript

As a consequence, when a user executes the script, the resulting process uses George's user identification rather than the user's. If set on an executable file, the SUID makes the process inherit the owner's user identification rather than that of the user who executed it. This fixes the problem: even though nobody but George can write directly to mylog.log, anybody can execute myscript, which updates the file's content.

Similarly, it is possible to set the SGID attribute on an executable file. This makes the process use the owner's default group instead of the user's. This is done by:

# chmod g+s myscript

By setting SUID and SGID attributes the owner makes it possible for other users to execute the file as if they were him or members of his default group.

The SUID and SGID are represented by an "s" which replaces the "x" character in the user and group permissions respectively:

# chmod u+s myscript
# ls -l myscript
-rwsrwxrwx 10 george administrators 4096 2006-03-10 12:50 myscript
# chmod u-s myscript
# chmod g+s myscript
# ls -l myscript
-rwxrwsrwx 10 george administrators 4096 2006-03-10 12:50 myscript
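The lowercase/uppercase "s" distinction from the textual-representation section can be reproduced on a scratch file (the /tmp/suid_demo path is an assumption; stat -c is GNU coreutils):

```shell
# Show how SUID renders as 's' with owner execute and 'S' without it.
rm -f /tmp/suid_demo && touch /tmp/suid_demo

chmod 755 /tmp/suid_demo && chmod u+s /tmp/suid_demo
stat -c '%A' /tmp/suid_demo    # -rwsr-xr-x : owner execute + SUID -> lowercase s

chmod 655 /tmp/suid_demo && chmod u+s /tmp/suid_demo
stat -c '%A' /tmp/suid_demo    # -rwSr-xr-x : SUID without owner execute -> uppercase S
```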

About umask

The umask command is used to set the default file creation permissions.

When a file is created, its permissions are set by default depending on the umask setting. This value is usually set for all users in /etc/profile and can be obtained by typing:

# umask

The default umask value is usually 022. It is an octal number which indicates what rights will be removed by default from all new files. For instance, 022 indicates that write permissions will not be given to group and other.

By default, and with a umask of 000, files get mode 666 and directories get mode 777. As a result, with a default umask value of 022, newly created files get a default mode 644 (666 - 022 = 644) and directories get a default mode 755 (777 - 022 = 755).

In order to change the umask value, simply use the umask command and give it an octal number. For instance, if you want all new directories to get permissions rwxr-x--- and files to get permissions rw-r----- by default (modes 750 and 640), you'll need a umask value which removes all rights from other, and write permission from the group: 027. The command to use is:

# umask 027
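The arithmetic above can be checked directly (the scratch paths are assumptions; stat -c is GNU coreutils):

```shell
# Create a directory and a file under umask 027 and inspect the resulting modes.
rm -rf /tmp/umask_demo_dir /tmp/umask_demo_file
(
  umask 027                      # subshell, so the umask doesn't leak
  mkdir /tmp/umask_demo_dir      # 777 - 027 -> 750
  touch /tmp/umask_demo_file     # 666 - 027 -> 640
)
stat -c '%a %n' /tmp/umask_demo_dir /tmp/umask_demo_file
# 750 /tmp/umask_demo_dir
# 640 /tmp/umask_demo_file
```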

RPM Packages installation and usage


  • Normal querying does not require a root login, but to install or uninstall a package you need to be logged in as root.
  • We can also use regular expressions or wildcards with the rpm command.


# installing a rpm package with hash printing and in verbose mode

rpm -ivh foobar-1.0-i386.rpm

# to install a package ignoring any dependencies

rpm -ivh --nodeps foobar-1.0-i386.rpm

# upgrading a package with hash printing and in verbose mode

rpm -Uvh foobar-1.1-i386.rpm

# Upgrade only those which are already installed from an RPM repository

rpm -Fvh *.rpm

# uninstall a package

rpm -e foobar

# uninstall ignoring the dependencies

rpm -e --nodeps foobar

# to force install /uninstall

rpm -ivh --force foobar-1.0-i386.rpm


# find all those packages which are installed on your system

rpm -qa | sort | less

rpm -qa | sort > rpmlist

# find out all the files which are installed by an rpm package

rpm -ql foobar

rpm -qpl foobar-1.0-i386.rpm

# search for an installed package

rpm -qa | grep foobar

# search for a specific file in a rpm repository

for i in *.rpm ; do rpm -qpl $i | grep filename && echo $i ; done

# find out which package a directory or file (say /etc/skel) belongs to

rpm -qf /etc/skel

rpm -q --whatprovides /etc/skel

# to see what config files are installed by a package

rpm -qc foobar


# To do a test walk-through of a package installation, use

rpm -ivh --test foobar-1.1-i386.rpm

10. Similarly, to uninstall a package without considering dependencies, use

# rpm -e --nodeps foobar

11. To force install a package (the same as using "--replacefiles" and "--replacepkgs" together).

It is like installing a package with no questions asked :) Use it with caution; this option can make some of your existing software unusable or unstable.

# rpm -i --force foobar-1.0-i386.rpm

12. To exclude the documentation of a package while installing, useful in case of a minimal, stripped-down installation

# rpm -i --excludedocs foobar-1.0-i386.rpm

13. To include documentation while installing (this option is enabled by default); it is useful only if documentation has been set to be excluded in "/etc/rpmrc", "~/.rpmrc" or "/usr/lib/rpm/rpmrc"

# rpm -ivh --includedocs foobar-1.0-i386.rpm

14. To display the debug info while installing, use

When using this option it is not necessary to specify the "-v" verbose option, as the debug information provided by the rpm command is verbose by default.

# rpm -ih --test -vv foobar-1.0-i386.rpm

As already discussed, the combined "-ih" options tell rpm to do the installation with hash printing; "--test" tells rpm to only do a walkthrough of the installation and not the actual installation; "-vv" asks rpm to also print debug information.

15. To upgrade a package (i.e. uninstall the previous version and install a newer version), use

# rpm -Uvh foobar-1.1-i386.rpm

16. To permit upgrading to an older package version (i.e. a downgrade), use

# rpm -Uvh --oldpackage foobar-1.0-i386.rpm

17. To list all the rpm(s) installed on your system, use

$ rpm -qa

One can pipe the output of the above command to another shell command, e.g.

$ rpm -qa | less

$ rpm -qa | grep "foobar"

$ rpm -qa > installed_rpm.lst

  • Use your imagination for more combinations; you may even use wildcards.

How to forcefully unmount a Linux / AIX /Solaris disk partition?

Linux / UNIX will not allow you to unmount a device that is busy. There are many reasons for this (such as a program accessing the partition or an open file), but the most important one is to prevent data loss.

To find out the processes which are active on the partition.

[root@tempsrv ~]# lsof | grep "/mnt"
ssh 22883 lokams cwd DIR 253,1 4096 193537 /mnt
vi 22909 root cwd DIR 253,1 4096 193537 /mnt

/** or **/

[root@tempsrv ~]# fuser -mu /mnt
/mnt: 22883c(lokams) 22909c(root)
[root@tempsrv ~]#

The above output tells us that users "lokams" and "root" have "ssh" and "vi" processes running that are using /mnt. All you have to do is stop those processes and run umount again. As soon as each program terminates its task, the device will no longer be busy and you can unmount it with the following command:

# umount /mnt

To unmount /mnt forcefully with out checking which processes are active currently:
# fuser -km /mnt

-k : Kill processes accessing the file.
-m : Name specifies a file on a mounted file system or a block device that is mounted. In the above example that is /mnt.

You can also try the umount command with the -l option:
# umount -l /mnt

-l : Also known as lazy unmount. Detach the filesystem from the filesystem hierarchy now, and clean up all references to the filesystem as soon as it is not busy anymore. This option works only with kernel version 2.4.11 and above.

To unmount a NFS mount point:

# umount -f /mnt

-f: Force unmount in case of an unreachable NFS system

The above can be accomplished with the below command in AIX:
# fuser -kxuc /mnt

The above can be accomplished with the below command in Solaris:
# fuser -ck /mnt

To list the process numbers and user login names of processes using the /etc/passwd file in AIX / Solaris, enter:
# fuser -cu /etc/passwd

List folders / directories by size in Linux / AIX / Windows

To list the directory sizes in kilo bytes and largest at the top
du -sk * | sort +0nr
du -sk * | sort -nr

To list the directory sizes in Mega bytes and largest at the top
du -sm * | sort +0nr
du -sm * | sort -nr

To list the directory sizes in kilo bytes and largest at the bottom.
du -sk * | sort +0n
du -sk * | sort -n

To list the directory sizes in Mega bytes and largest at the bottom.
du -sm * | sort +0n
du -sm * | sort -n

To list the directory sizes in human readable format (Mix of kilo, Mega and Giga bytes) and largest at the bottom
du -s *|sort -n|cut -f 2-|while read a;do du -hs $a;done

To list the size of hidden directories

du -sk .[a-z]* | sort +0nr

To list the size of all the files and directories including hidden files and directories
du -sk .[a-z]* * | sort +0n
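On systems with GNU coreutils (sort 7.5 and later, an assumption), sort's -h option understands human-readable sizes directly, replacing the while-read loop above:

```shell
# Sort du's human-readable output directly; scratch directory is an assumption.
rm -rf /tmp/du_demo && mkdir -p /tmp/du_demo/big /tmp/du_demo/small
cd /tmp/du_demo

du -sh * | sort -h      # largest at the bottom
du -sh * | sort -hr     # largest at the top

# the -h ordering on its own:
printf '10M\n1G\n512K\n' | sort -h
# 512K
# 10M
# 1G
```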

Windows explorer Folder size extension

Download the package from the above URL and install it.

· After the installation Folder Size column is available to Explorer, but Explorer isn't displaying it yet. Open an Explorer window in Details view.

· Right click on the column headers to see a list of columns you can add. Choose Folder Size.

· Now we can replace the existing Size column with the new Folder Size column. Right click on the column headers and uncheck the Size column. Drag the Folder Size column header to where Size used to be.

· Make this the default view for all folders. Go to Folder Options from the Tools menu. In the View tab, click Apply to All Folders

Tuesday, March 18, 2008

tar and gzip/bzip on a single command on AIX

This command is helpful when your tar does not support the "z" option. In Linux you can specify the "z" option directly in the tar command, but in AIX you cannot.

To compress:

"tar cvf - abc | gzip > abc.tar.gz"

"tar cvf - abc | bzip2 > abc.tar.bz2"

To uncompress:

"gunzip < abc.tar.gz | tar xvf -"

"bzip2 < abc.tar.bz2 | tar xvf -"
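A quick end-to-end sketch of the gzip round trip, using a throwaway directory under /tmp (the paths and file names here are illustrative, not from the original post):

```shell
# Create a small directory, archive it through a pipe, then
# extract it elsewhere and verify the contents survive the round trip.
mkdir -p /tmp/tardemo/abc /tmp/tardemo/out
echo "hello" > /tmp/tardemo/abc/file.txt
cd /tmp/tardemo
tar cf - abc | gzip > abc.tar.gz      # compress
cd out
gunzip < ../abc.tar.gz | tar xf -     # uncompress
cat abc/file.txt
```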

Monday, March 17, 2008

Search and replace recursively on a directory in Linux

Here is a small bash shell script to make life simpler. This script searches all regular files in a directory tree for a string and replaces it with a new string.

#!/bin/bash
# This script will search all regular files under the current
# directory for a string supplied by the user and replace it
# with another string.

function usage {
    echo ""
    echo "Search/replace script"
    echo "Usage: $0 searchstring replacestring"
    echo "Remember to escape any special characters in the searchstring or the replacestring"
    echo ""
}

# check for required parameters
if [ ${#1} -gt 0 ] && [ ${#2} -gt 0 ]; then
    for f in `find . -type f`; do
        grep -q "$1" "$f"
        if [ $? = 0 ]; then
            cp "$f" "$f.1bak"
            echo "The string $1 will be replaced with $2 in $f"
            sed "s/$1/$2/g" < "$f.1bak" > "$f"
            rm "$f.1bak"
        fi
    done
else
    # print usage information
    usage
fi
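If GNU sed is available (as on most Linux systems), the same job can be done in one line with find and sed's in-place flag. The directory and strings below are hypothetical demo values:

```shell
# Replace "oldstring" with "newstring" in every regular file
# under /tmp/srdemo (demo data created first so this is safe to run).
mkdir -p /tmp/srdemo
echo "oldstring here" > /tmp/srdemo/a.txt
echo "nothing to do" > /tmp/srdemo/b.txt
find /tmp/srdemo -type f -exec sed -i 's/oldstring/newstring/g' {} \;
cat /tmp/srdemo/a.txt
```

Note that `sed -i` is a GNU extension; on AIX or Solaris the backup-file approach in the script above is the portable option.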

How to make LiveCD detect and mount LVM partition?

Reestablish Volume Group

To tap into the volume group you wish to work with, make sure /etc/lvm/lvm.conf filters are able to see the /dev/md? devices, and execute the following:

[tempsrv] # vgscan

Reading all physical volumes. This may take a while...

Found volume group "rootvg" using metadata type lvm2

which should display the volume group (here it is "rootvg") associated with an md device you enabled. Then, to make the logical volumes (LVs) available for mounting, execute the following:

[tempsrv] # vgchange -ay rootvg

Now Mount the Logical Volumes:

Now all you need to do is mount the reestablished LVs. This is an excellent time to make use of a Bash loop:

[tempsrv] # for i in `ls /dev/rootvg`; do mount /dev/rootvg/$i /mnt/$i; done;

Of course, you need to create the destination mount points before running the loop.
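A minimal sketch of that preparation step, run against a dummy directory instead of /dev/rootvg so it can be tried safely without root or LVM (the real mount command is left commented out; the LV names are made up):

```shell
# Simulate /dev/rootvg with a dummy directory holding two fake LV names.
mkdir -p /tmp/rootvg
touch /tmp/rootvg/lv01 /tmp/rootvg/lv02
for i in `ls /tmp/rootvg`; do
    mkdir -p /tmp/mnt/$i             # create the destination mount point
    # mount /dev/rootvg/$i /mnt/$i   # the real command; needs root and LVM
done
ls /tmp/mnt
```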

Why is there a difference between du and df output?

# df -h /apps

/dev/mapper/datavg-lv01 70G 60G 10G /apps

# du -sh /apps

50G /apps

If files are deleted (with the rm command) while they are still open or in use by a Linux program or process, the space is not actually freed: the kernel keeps the data on disk for as long as any process holds an open file descriptor to the deleted file. du no longer sees the files, but df still counts the space, so the two disagree.

In order to resolve the fake "disk space full" problem, i.e. to reclaim the "used" disk space, you need to kill or restart the processes that are still holding the deleted files open.

Once those processes terminate (or close the files), the open file descriptors are released, and both the du and df commands will again agree on the real used and free disk space.

How do you find and terminate the processes holding deleted files open, in order to resolve the difference in used disk space reported by du and df?

For this particular scenario, the lsof command (list open files) is great to shed light:

lsof | grep "deleted" or
lsof | grep "/apps" (rather long and messy)

Look for the Linux process ID in the second column of the lsof output. The seventh column is the size of the file that was "deleted" but is still held open by a process.

How to recreate the "open file descriptor" problem that causes the difference in used disk space reported by df and du:

  1. Create one 500MB file in the /home file system:

dd if=/dev/zero of=/home/lokams bs=1024 count=500000

  2. Run an md5 checksum against the 500MB file with the md5sum command:

md5sum /home/lokams

  3. Now, open another session and remove the /home/lokams file while md5sum is still computing its checksum:

rm /home/lokams

  4. Now, both the Linux df and du commands will report different used or free disk space, caused by the "open file descriptor" problem:

df -h; du -h --max-depth=1 /home
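On Linux the same effect can be reproduced non-interactively (no second session or md5sum needed) by holding a deleted file open with a background process and inspecting /proc. The paths are throwaway demo values:

```shell
# Create a file, hold it open with a background tail, delete it,
# then show the kernel still tracks the deleted-but-open descriptor.
dd if=/dev/zero of=/tmp/heldfile bs=1024 count=1024 2>/dev/null
tail -f /tmp/heldfile > /dev/null &
TAILPID=$!
sleep 1                               # give tail time to open the file
rm /tmp/heldfile
HELD=$(ls -l /proc/$TAILPID/fd 2>/dev/null | grep -c deleted)
echo "deleted-but-open descriptors: $HELD"
kill $TAILPID                         # space is reclaimed once the holder exits
```

Until the `kill`, df counts the 1MB of /tmp/heldfile while du does not; afterwards they agree again.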