Thursday, April 24, 2008

rsync examples from the command line.

To copy everything under /home/lokams/ on a remote host into the local directory /backup/home/lokams/:
# rsync -e ssh -avz --delete --exclude "dir/*" --delete-excluded --stats user@remotehost:/home/lokams/ /backup/home/lokams/

To copy the directory /home/lokams from a remote host into the local directory /backup/home. The directory "lokams" itself will be created under /backup/home.
# rsync -e ssh -avz --delete --exclude-from=~/.rsync/excludeFile --delete-excluded --stats user@remotehost:/home/lokams /backup/home/
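
An exclude file is just a list of patterns, one per line. As a minimal sketch, ~/.rsync/excludeFile might contain something like this (the patterns here are only placeholders):

*.tmp
*.cache
logs/
core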

To avoid permission errors when copying to remote Windows/Mac systems, use "-rlt" in place of "-a" ("-a" implies "-p", which tries to preserve permissions the target filesystem may not support).
# rsync -e ssh -rltvz --delete --exclude "dir/*" --delete-excluded --stats user@remotehost:/home/lokams/ /backup/home/lokams/

rsync behaves differently if the source directory has a trailing slash. Study and learn the difference between the following two commands before moving on. Use the -n option to rsync when testing to preview what would happen.

$ rsync -n -av /tmp .
$ rsync -n -av /tmp/ .
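
The difference, illustrated with made-up file names: without the trailing slash the directory itself is created inside the destination; with it, only the contents are copied.

$ rsync -av /tmp .     (creates ./tmp/file1, ./tmp/file2, ...)
$ rsync -av /tmp/ .    (creates ./file1, ./file2, ...)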

More rsync examples:
--------------------------


# rsync -crogpvltz -e ssh --exclude "bin" --exclude "ifany" --delete --delete-after --bwlimit 20 root@10.144.17.1:/apps/uae/ /apps/uae > /apps/uae/logs/sync.log

# rsync -avz -e ssh root@remotehost:/backup/reptest /backtmp/reptest/

# rsync -avz -e ssh /logstage/archive/DXBSEC/NODE0000/ db2inst1@10.20.202.236:/db2/archivelog/security/logstage/retrieve/DXBSEC/NODE0000/

# rsync -e ssh -rltvz --timeout=999 --delete --exclude dir-or-file-to-exclude --delete-excluded --stats user@remotehost:/home/lokams/ /backup/home/lokams/

rsync useful options:
-------------------------


-a, --archive        archive mode; equivalent to -rlptgoD
-n, --dry-run        show what would have been transferred (preview mode)

-c                   always checksum (compare by checksum, not mod-time & size)
-r                   recurse into directories
-o                   preserve owner
-g                   preserve group
-p                   preserve permissions
-t                   preserve modification times
-v                   verbose
-l                   copy symlinks as symlinks
-z                   compress file data during the transfer
-P                   show progress during transfer (and keep partial files)
-q                   quiet (decrease verbosity)
-e                   specify the remote shell to use
-b                   make backups of changed or deleted files
-R                   use relative path names
-u                   skip files that are newer on the receiver

--stats              give some file-transfer stats
--timeout=TIME       set I/O timeout in seconds

--backup-dir=DIR     make backups into this directory
--bwlimit=KBPS       limit I/O bandwidth, KBytes per second
--delete             delete files from the destination that don't exist on the sending side
--delete-after       receiver deletes after transferring, not before
--daemon             run as an rsync daemon
--address=ADDRESS    bind to the specified address
--exclude=PATTERN    exclude files matching PATTERN
--exclude-from=FILE  exclude patterns listed in FILE
--include=PATTERN    don't exclude files matching PATTERN
--include-from=FILE  don't exclude patterns listed in FILE
--min-size=SIZE      don't transfer any file smaller than SIZE
--max-size=SIZE      don't transfer any file larger than SIZE


# --numeric-ids:
Tells rsync not to map user and group ID numbers to local user and group names; the numeric IDs are transferred as-is.
# --delete:
Makes the destination an exact copy of the source by removing any files that no longer exist on the sending side.
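
Putting several of these options together, a sketch of a nightly pull (the host and paths are placeholders): --numeric-ids keeps the original UID/GID numbers, --delete keeps the copy exact, and -b with --backup-dir tucks changed or deleted files away instead of losing them.

# rsync -avz -e ssh --numeric-ids --delete -b --backup-dir=/backup/changed-$(date +%F) user@remotehost:/home/ /backup/home/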

Saturday, April 19, 2008

Creating large empty files in Linux / UNIX

To create large empty files in Linux or UNIX (the resulting size is bs x count bytes):

# dd if=/dev/zero of=filename bs=1024 count=desired

Example to create a 1GB file:

dd if=/dev/zero of=file_1GB bs=1M count=1024
/or/
dd if=/dev/zero of=file_1GB bs=4096 count=262144
/or/
dd if=/dev/zero of=file_1GB bs=1024 count=1048576

Example to create a 2GB file:

dd if=/dev/zero of=file_2GB bs=1M count=2048
/or/
dd if=/dev/zero of=file_2GB bs=2048 count=1048576

Example to create a 512MB file:

dd if=/dev/zero of=file_512MB bs=1M count=512
/or/
dd if=/dev/zero of=file_512MB bs=512 count=1048576

(On systems whose dd lacks the M suffix, write bs=1M as bs=1024k.)


Alternatively, on Solaris (and Mac OS X) you can use mkfile:

# mkfile size myfile

where size can be given in KB, MB or GB using a k, m or g suffix. To create a 10 GB file, use:

# mkfile 10240m myfile

If you run "# mkfile -n 10240m myfile", the file will be created but the disk blocks will not be allocated until data is actually written into the file (a sparse file).
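
On Linux, which has no mkfile, the same sparse-file effect can be had with dd by seeking past the blocks instead of writing them; this sketch creates a 10 GB file whose blocks are allocated only as data is written:

# dd if=/dev/zero of=myfile bs=1M count=0 seek=10240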

burn_cd - command to burn a cd

Command to burn a CD in AIX:

# burn_cd -d cd0 iso_image_file
Running readcd ...
Capacity: 2236704 Blocks = 4473408 kBytes = 4368 MBytes = 4580 prMB
Sectorsize: 2048 Bytes
burn_cd was successful.
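
On Linux boxes without burn_cd, the classic equivalent is cdrecord (or its fork wodim); the device address below is an assumption, so check yours with -scanbus first:

# cdrecord -scanbus
# cdrecord -v speed=4 dev=0,0,0 iso_image_file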


Zettabyte File System (ZFS) in Solaris 10

ZFS is an advanced modern filesystem from Sun Microsystems, originally designed for Solaris/OpenSolaris.

ZFS is a new file system in Solaris 10 OS which provides excellent data integrity and performance compared to other file systems (considering the enterprise storage scenario). Unlike previous file systems, it is a 128-bit file system, which means it can scale to accommodate enormous amounts of data; it is perhaps the world's first 128-bit file system. But why do we need so much scalability? The reason is simple: in an enterprise, data is continuously stored on servers and keeps growing, and enterprises want to keep as much of this data live as possible so that it can be quickly retrieved when required.

ZFS has many features which can benefit all kinds of users - from the simple end-user to the biggest enterprise systems:
  • Provable integrity - it checksums all data (and metadata), which makes it possible to detect hardware errors (hard disk corruption, flaky IDE cables, etc...)
  • Atomic updates - means that the on-disk state is consistent at all times, there's no need to perform a lengthy filesystem check after forced reboots or power failures
  • Instantaneous snapshots and clones - makes it possible to keep hourly, daily and weekly backups efficiently, as well as to experiment with new system configurations without any risk (see the sketch after this list)
  • Built-in (optional) compression
  • Highly scalable
  • Pooled storage model - creating filesystems is as easy as creating a new directory. You can efficiently have thousands of filesystems, each with its own quotas and reservations, and different properties (compression algorithm, checksum algorithm, etc...)
  • Built-in stripes (RAID-0), mirrors (RAID-1) and RAID-Z (it's like software RAID-5, but more efficient due to ZFS's copy-on-write transactional model).
  • Many others (variable sector sizes, adaptive endianness, ...)
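
As a quick sketch of the snapshot/clone feature mentioned above (assuming a pool named tank with a filesystem tank/home already exists): take a snapshot, list snapshots, roll the filesystem back to one, and clone one into a writable filesystem for experiments.

# zfs snapshot tank/home@friday
# zfs list -t snapshot
# zfs rollback tank/home@friday
# zfs clone tank/home@friday tank/home-test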

In traditional file systems, data is stored on a single disk or on a large volume consisting of multiple disks. ZFS instead uses a pooled storage model: every storage device is part of a single expandable pool, irrespective of where the data is written. Each filesystem residing in the pool can draw on the full capacity and I/O resources of the pool itself, which lets administrators scale the system in an easy and efficient manner; you no longer need to size the file system, just add a storage device to the pool.

ZFS also addresses both noisy and silent data corruption. In the first case, you do an I/O operation and the disk returns an explicit error, say "can't read the specified block". In the second, silent case, you do an I/O operation and the system returns corrupted results. ZFS identifies and, where a redundant copy is available, even corrects these data corruptions, something existing file systems can't do.

Managing existing file systems is also difficult. For example, you upgrade your system and then find that the file system no longer suits the machine, so you have to copy all the data over, which consumes a lot of time; ZFS helps alleviate this. Moreover, existing file systems have limitations in terms of volumes, file size, and so on.
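
A minimal sketch of the pooled model on Solaris 10 (the disk names c1t0d0/c1t1d0 are assumptions; substitute your own): create a mirrored pool, carve out a filesystem as casually as a directory, set per-filesystem properties, and grow the pool on the fly.

# zpool create tank mirror c1t0d0 c1t1d0
# zfs create tank/home
# zfs set quota=10G tank/home
# zfs set compression=on tank/home
# zpool add tank mirror c2t0d0 c2t1d0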

ZFS definitely looks like a great engineering achievement, and its makers have every right to be proud of it. In their own words, they've blown away 20 years of obsolete assumptions, and they now refer to ZFS as "the last word in filesystems".

When ZFS was first announced, I'm sure many Linux hackers thought what a great idea it would be to port such a great filesystem to Linux. Unfortunately, the ZFS source is distributed under Sun's CDDL license, which is (some say deliberately) incompatible with the GPL license that the Linux kernel uses. So it looks like there will be no native port of ZFS to Linux in the foreseeable future. What a pity.

Wednesday, April 16, 2008

Bash Shortcuts and Tips

Repeating an argument
You can repeat the last argument of the previous command in multiple ways. Have a look at this example:

$ mkdir /path/to/dir
$ cd !$

The second command might look a little strange, but it will just cd to /path/to/dir.
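
Two related tricks: $_ also expands to the last argument of the previous command, and pressing Esc followed by . (or Alt + .) inserts it right at the cursor as you type.

$ mkdir /path/to/dir
$ cd $_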

Some keyboard shortcuts for editing

There are some pretty useful keyboard shortcuts for editing in bash. They might appear familiar to Emacs users:

• Ctrl + a => Return to the start of the command you're typing
• Ctrl + e => Go to the end of the command you're typing
• Ctrl + u => Cut everything before the cursor to a special clipboard
• Ctrl + k => Cut everything after the cursor to a special clipboard
• Ctrl + y => Paste from the special clipboard that Ctrl + u and Ctrl + k save their data to
• Ctrl + t => Swap the two characters before the cursor (you can actually use this to transport a character from the left to the right, try it!)
• Ctrl + w => Delete the word / argument left of the cursor
• Ctrl + l => Clear the screen

Redirecting both Standard Output and Standard Error:

# ls -ltR > /tmp/temp.txt 2>&1
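
Note that the order of the redirections matters: 2>&1 duplicates whatever stdout points at the moment it is evaluated, so putting it first sends stderr to the terminal instead of the file.

# ls -ltR 2>&1 > /tmp/temp.txt
(wrong order: stderr still goes to the terminal)

# ls -ltR >> /tmp/temp.txt 2>&1
(append to the file instead of overwriting it)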

The history settings below can be put in your ~/.bashrc to make them permanent.

Make Bash append rather than overwrite the history on disk:

# shopt -s histappend

Whenever displaying the prompt, write the previous line to disk:

# export PROMPT_COMMAND='history -a'

To erase duplicate entries in history:

# export HISTCONTROL=erasedups
(or)
# export HISTCONTROL=ignoreboth
(ignoreboth also skips commands that begin with a space)

To see the history with timestamps

# export HISTTIMEFORMAT="%d/%m/%Y-%H:%M:%S "

To set the size of the history. HISTSIZE is the number of commands to remember in the command history; the default value is 500.

# export HISTSIZE=500

Searching the Past
  • Ctrl+R searches previous lines
This puts bash in history-search mode, allowing you to type part of the command you're looking for; as you type, it shows the most recent command containing the string so far. If the match is too recent, press Ctrl + r again and again to go further back in history. Once you've found the command you were looking for, press Enter to run it.

  • !tcp will execute the most recent command that starts with "tcp"

Tuesday, April 15, 2008

Simple usage of "tcpdump"

Tcpdump is a really great tool for network security analysts; you can dump packets that flow within your network into a file for further analysis. With some filters you can capture only the packets you are interested in, which reduces the size of the saved dump and, in turn, the loading and processing time of the analysis.

Let's start with capturing packets based on network interface, ports and protocols. Assume I want to capture tcp packets flowing over eth1, port 6881. The dump will be saved as test.pcap.

tcpdump -w test.pcap -i eth1 tcp port 6881

Simple, right? What if, at the same time, I am also interested in packets on udp ports 33210 and 33220?

tcpdump -w test.pcap -i eth1 tcp port 6881 or \( udp port 33210 or udp port 33220 \)

'\' escapes '(' and ')' from the shell. The logical OR acts like a plus: in plain words, I want to capture tcp packets flowing over port 6881 plus udp packets on ports 33210 and 33220. Be careful with 'and' in tcpdump filter expressions; it means intersection, which is why I put 'or' rather than 'and' between the two udp ports. The usage of 'and' will be illustrated later.

Ok, how about reading pcap that I saved previously?

tcpdump -nnr test.pcap

The -nn tells tcpdump not to resolve IP addresses and port numbers to names, and -r means read from the file.

Adding -tttt makes the timestamps appear in a more readable format.

tcpdump -ttttnnr test.pcap

How about capturing based on IP? You need to tell tcpdump which IP you are interested in: destination or source. Say I want to sniff on destination IP 10.168.28.22, tcp port 22; how should I write that?

tcpdump -w test.pcap dst 10.168.28.22 and tcp port 22

So 'and' makes the intersection of the destination IP and the port.
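
Intersections can be stacked. A sketch combining a source host, a destination host and a port (the addresses here are made up):

tcpdump -w test.pcap src 10.168.28.5 and dst 10.168.28.22 and tcp port 22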

By default tcpdump captures only the first 96 bytes of each packet (the snap length); you can override that by specifying a larger size with -s.

tcpdump -w test.pcap -s 1550 dst 10.168.28.22 and tcp port 22

Some versions of tcpdump allow you to define a port range. You can do as below to capture packets on a range of tcp ports.

tcpdump tcp portrange 20-24

Bear in mind that in the line above I didn't specify -w, so it won't write to a file; it will just print the captured packets on the screen.
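
Tying it together, a possible session (the interface and file names are assumptions): capture an ftp-ish port range to a file, then read it back with readable timestamps and no name resolution.

tcpdump -w ftp.pcap -i eth1 tcp portrange 20-24
tcpdump -ttttnnr ftp.pcap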