Category Archives: Ubuntu

find files with size greater than 1 GB on linux

To find files larger than 1 GB on Linux:

find / -type f -size +1G -exec ls -lh {} \;
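If you also want the biggest files listed first, a variant such as the following works (assuming GNU find, du and sort, as found on most Linux distributions):

find / -type f -size +1G -exec du -h {} + | sort -rh | head -20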

find files with name starting with “WAS_V8”

find / -type f -name "WAS_V8*" -exec ls -lh {} \;

Rsync

rsync -avz -e ssh --exclude-from 'exclude-list.txt' source_dir/ user@hostname.com:/opt

-v : verbose (try -vv for more detailed information)
-a : archive mode
-z : compress file data during transfer
-e ssh : use ssh as the remote shell

[root@sms opt]# cat exclude-list.txt
img
logs
src
bin
web
deploy
dist
staging
backup
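
Before running the transfer for real, rsync's --dry-run (-n) flag is a handy sanity check; it lists what would be transferred without copying anything:

rsync -avzn -e ssh --exclude-from 'exclude-list.txt' source_dir/ user@hostname.com:/opt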

Running rsync in background:

ctrl + z

When you press ctrl + z, the running process is suspended (stopped) and control returns to the shell.

^Z
[1]+  Stopped                 rsync -avz -e ssh --exclude-from 'exclude-list.txt' source_dir/ user@hostname.com:/opt

Now type bg to resume the stopped job in the background.

[1]+ rsync -avz -e ssh --exclude-from 'exclude-list.txt' source_dir/ user@hostname.com:/opt

Type jobs to confirm the job is running:

[1]+ Running rsync -avz -e ssh --exclude-from 'exclude-list.txt' source_dir/ user@hostname.com:/opt

To bring it back to the foreground, type fg 1, where 1 is the job number.
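
Note that a job resumed with bg may still be tied to your login session and can be killed when you log out. If the transfer needs to survive closing the terminal, two standard options are nohup and bash's disown built-in (a sketch; adjust the rsync arguments to your own transfer):

nohup rsync -avz -e ssh --exclude-from 'exclude-list.txt' source_dir/ user@hostname.com:/opt &

# or, for a job that is already running in the background as job 1:
disown -h %1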

AMQ8101: WebSphere MQ error (893) has occurred

Problem:

You experience the following error while creating, starting, stopping, or deleting a queue manager:

AMQ8101: WebSphere MQ error (893) has occurred

and the queue manager error logs show the following message:

AMQ6119: An internal WebSphere MQ error has occurred ('28 - No space left on device' from semget.)

Cause:

The maximum number of semaphore arrays (SEMMNI, the fourth value below) is lower than what the queue manager requires.

root@mq:~# cat /proc/sys/kernel/sem
250     32000   32      128

Solution:

Increase the maximum number of semaphore arrays (SEMMNI):

root@mq:~# echo "250 32000 32  256" > /proc/sys/kernel/sem
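
Writing to /proc only changes the value until the next reboot. To make the new limit persistent, put the same values in /etc/sysctl.conf and reload:

echo "kernel.sem = 250 32000 32 256" >> /etc/sysctl.conf
sysctl -p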

Clearing WebSphere MQ shared memory resources

If it is necessary to remove IPC resources owned by the mqm user, follow these instructions.

WebSphere MQ provides a utility to release the residual IPC resources allocated by a queue manager. This utility clears the internal queue manager state at the same time as it removes the corresponding IPC resource. Thus, this utility ensures that the queue manager state and IPC resource allocation are kept in step. To free residual IPC resources, follow these steps:

    1. End the queue manager and all connecting applications.
    2. Log on as user mqm.
    3. Type the following (on Solaris, HP-UX, and Linux):

       /opt/mqm/bin/amqiclen -x -m QMGR

       This command does not report any status. However, if some WebSphere MQ-allocated resources could not be freed, the return code is nonzero.

    4. Explicitly remove any remaining IPC resources that were created by user mqm.

If you want to remove all shared memory resources and semaphores that belong to user mqm:

ipcs -m | awk '$3 == "mqm" {print $2}' | while read i; do ipcrm -m $i; done
ipcs -s | awk '$3 == "mqm" {print $2}' | while read i; do ipcrm -s $i; done

Note: If you do this when you’ve got more than 1 QM, or things like client trigger monitors running on the same box, you’ll be in big trouble. Run this only if you want to clear everything!
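
To see what the loops above would remove, list the shared memory segments and semaphores owned by mqm first:

ipcs -m | grep mqm
ipcs -s | grep mqm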

OpenSSL

Introduction

Secure Sockets Layer (SSL) is an application-level protocol developed by the Netscape Corporation for transmitting sensitive information, such as credit card details, over the Internet. SSL encrypts the data transferred over an SSL-enabled connection, maintaining the confidentiality of the information. The most popular use of SSL is in conjunction with web browsing (the HTTP protocol), but many network applications can benefit from it. By convention, URLs that require an SSL connection start with https: instead of http:.

Basic OpenSSL Commands

Generate a new private key and Certificate Signing Request
openssl req -out techish.com.csr -pubkey -new -keyout techish.com.key

Generate a self-signed certificate
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout techish.com.key -out techish.com.crt

Generate a certificate signing request (CSR) for an existing private key
openssl req -out techish.com.csr -key techish.com.key -new

Generate a certificate signing request based on an existing certificate
openssl x509 -x509toreq -in techish.com.crt -out techish.com.csr -signkey techish.com.key

Remove a passphrase from a private key
openssl rsa -in techish.com.pem -out techish.com2.pem

Verify that a CSR, private key, and certificate belong together (the three MD5 hashes should be identical)
openssl x509 -noout -modulus -in techish.com.crt | openssl md5
openssl rsa -noout -modulus -in techish.com.key | openssl md5
openssl req -noout -modulus -in techish.com.csr | openssl md5
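
A related check that is often useful: inspect the certificate a running server is actually presenting (assuming HTTPS on port 443; replace techish.com with your own host):

openssl s_client -connect techish.com:443 -servername techish.com < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates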

MySQL – High Availability on Linux using DRBD and Heartbeat

Coming soon!

WebSphere MQ – High Availability on Linux using DRBD and Heartbeat

Overview

High availability

High availability refers to the ability of a system or component to be operational and accessible when required, for a specified period of time. The system or component handles faults during an unplanned outage gracefully and continues to provide the intended functionality.

What is DRBD

DRBD (Distributed Replicated Block Device) is a block device designed for building high-availability clusters. It does this by mirroring a whole block device over a (dedicated) network.

What is heartbeat

Heartbeat is a daemon that provides cluster infrastructure services. It allows clients to know about the presence of peer processes on other machines and to easily exchange messages with them. The heartbeat daemon needs to be combined with a cluster resource manager, which has the task of starting and stopping the services (e.g. IP addresses, WebSphere MQ, etc.) that the cluster will make highly available.

Installation (HAMQ)

In this tutorial we will set up a highly available server providing WebSphere MQ services to clients. Should a server become unavailable, WMQ will continue to be available to users. However, WMQ clients that communicate with a queue manager subject to a restart or takeover should be written to tolerate a broken connection and to repeatedly attempt to reconnect.

WMQ server1: techish-mq-a IP address: 10.10.1.21
WMQ server2: techish-mq-b IP address: 10.10.1.22
WMQ Server Virtual IP address 10.10.1.20
We will use the /drbd directory to hold the highly available queue managers.

To begin, set up two Ubuntu 10.04 LTS servers. You can use RAID, LVM, etc. as per your requirements. I'll assume you are using LVM to manage your disks; whether you do is up to you (disk setup is beyond the scope of this tutorial).

Install/Configure DRBD

The following partition scheme will be used for the DRBD data:

/dev/data/meta-disk -- 1  GB DRBD meta data
/dev/data/drbdlv    -- 20 GB unmounted DRBD device
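
If you are using LVM as assumed here, the two logical volumes can be created roughly as follows (a sketch assuming a volume group named data already exists; adjust names and sizes to your layout):

lvcreate --name drbdlv --size 20G data
lvcreate --name meta-disk --size 1G data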

Sample output from lvdisplay:

--- Logical volume ---
LV Name                /dev/data/drbdlv
VG Name                data
LV UUID                GCJWiy-0eGD-S5ti-19yy-9QAN-E6tJ-j3mnce
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                20.00 GiB
Current LE             78336
Segments               2
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:0

--- Logical volume ---
LV Name                /dev/data/meta-disk
VG Name                data
LV UUID                XaDR2x-cNhV-Sxgb-auKi-YNPZ-HxLf-GYZxoE
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                1.00 GiB
Current LE             256
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:1

The isolated network between the two servers will be:

WMQ server1:     mq-a-private IP address: 192.168.0.21
WMQ server2:     mq-b-private IP address: 192.168.0.22

Ensure that /etc/hosts contains the names and IP addresses of the two servers.

Sample /etc/hosts:

127.0.0.1         localhost
10.10.1.21        techish-mq-a    node1
10.10.1.22        techish-mq-b    node2
192.168.0.21      mq-a-private
192.168.0.22      mq-b-private

Install NTP to ensure both servers have the same time.

apt-get install ntp

You can verify that the time is in sync with the date command. Then install drbd and heartbeat:

apt-get install drbd8-utils heartbeat

Now create a resource configuration file named mq.res and place it in /etc/drbd.d/.

An example /etc/drbd.d/mq.res looks as follows:

resource mq {

protocol C;

handlers {
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
}

startup {
degr-wfc-timeout 120;
}

disk {
on-io-error detach;
}

net {
cram-hmac-alg sha1;
shared-secret "password";
after-sb-0pri disconnect;
after-sb-1pri disconnect;
after-sb-2pri disconnect;
rr-conflict disconnect;
}

syncer {
rate 100M;
verify-alg sha1;
al-extents 257;
}

on node1 {
device /dev/drbd0;
disk /dev/data/drbdlv;
address 192.168.0.21:7788;
meta-disk /dev/data/meta-disk[0];
}

on node2 {
device /dev/drbd0;
disk /dev/data/drbdlv;
address 192.168.0.22:7788;
meta-disk /dev/data/meta-disk[0];
}

}

Duplicate the DRBD configuration to the other server.

scp /etc/drbd.conf root@10.10.1.22:/etc/
scp /etc/drbd.d/mq.res root@10.10.1.22:/etc/drbd.d/

As we are using heartbeat with drbd, we need to change ownership and permissions on several DRBD-related files on both servers:

chgrp haclient /sbin/drbdsetup
chmod o-x /sbin/drbdsetup
chmod u+s /sbin/drbdsetup
chgrp haclient /sbin/drbdmeta
chmod o-x /sbin/drbdmeta
chmod u+s /sbin/drbdmeta

Initialize the meta-data disk on both servers.

drbdadm create-md mq
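
Depending on your distribution, the drbd init script may already bring the resource up at boot; if not, attach and connect it on both servers before choosing a primary:

drbdadm up mq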

Decide which server will act as the primary for the DRBD device (e.g. node1) and initiate the first full sync between the two servers by executing the following on node1:

drbdadm -- --overwrite-data-of-peer primary mq

You can view the current status of DRBD with:

cat /proc/drbd  

version: 8.3.7 (api:88/proto:86-91)
GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by mq@techish, 2011-03-14 20:36:57
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
    ns:1687824 nr:168 dw:1655320 dr:42512 al:572 bm:205 lo:1 pe:17 ua:2032 ap:0 ep:1 wo:b oos:1015968
    [>....................] sync'ed:  3.6% (1015968/1048576)K
    finish: 0:01:32 speed: 10,868 (10,868) K/sec

I prefer to wait for the initial sync to complete. Once completed, format /dev/drbd0 and mount it on node1:

mkfs.ext3 /dev/drbd0
mkdir -p /drbd
mount /dev/drbd0 /drbd

To ensure replication is working correctly, create test data on node1 and then switch node2 to be primary.

Create data:

dd if=/dev/zero of=/drbd/test.techish bs=1M count=100

Switch to node2 and make it the Primary DRBD device:

On node1:
[node1]umount /drbd
[node1]drbdadm secondary mq
On node2:
[node2]mkdir -p /drbd
[node2]drbdadm primary mq
[node2]mount /dev/drbd0 /drbd

You should now see the 100MB file in /drbd on node2. Now delete this file and make node1 the primary DRBD server to ensure replication is working in both directions.

On node2:
[node2]rm /drbd/test.techish
[node2]umount /drbd
[node2]drbdadm secondary mq
On node1:
[node1]drbdadm primary mq
[node1]mount /dev/drbd0 /drbd

Performing an ls -lh /drbd on node1 will verify the file is now removed and synchronization successfully occurred in both directions.

We have now configured the DRBD HA cluster location /drbd and verified that replication works in both directions.

Install/Configure WSMQ

Install WSMQ on node1 and node2. Please refer to the blog post Install IBM WebSphere MQ on Ubuntu.

Note: remove any runlevel init scripts for wsmq on node1 and node2.

Relocate the queue managers' data directory and configuration to our DRBD device.

On node1:
[node1]mount /dev/drbd0 /drbd
[node1]mv /var/mqm/ /drbd/
[node1]ln -s /drbd/mqm/ /var/mqm
On node2:
[node2]rm -rf /var/mqm
[node2]ln -s /drbd/mqm/ /var/mqm

Configure Heartbeat

Configure heartbeat to control a virtual IP address and fail over WSMQ in the case of a node failure.

On node1, define the cluster within /etc/heartbeat/ha.cf. Example /etc/heartbeat/ha.cf:

logfacility     local0
keepalive 2
deadtime 30
warntime 10
initdead 120
bcast eth0
bcast eth1
node node1
node node2

On node1, define the authentication mechanism the cluster will use within /etc/heartbeat/authkeys. Example /etc/heartbeat/authkeys:

auth 3
3 md5 password

Change the permissions of /etc/heartbeat/authkeys.

chmod 600 /etc/heartbeat/authkeys

On node1, define the resources that will run on the cluster within /etc/heartbeat/haresources. We will define the master node for the resource, the virtual IP address, the file systems used, and the service (wsmq) to start. Example /etc/heartbeat/haresources:

node1 IPaddr::10.10.1.20/24/eth1 drbddisk::mq Filesystem::/dev/drbd0::/drbd::ext3 wsmq

Copy the cluster configuration files from node1 to node2.

[node1]scp /etc/heartbeat/ha.cf root@10.10.1.22:/etc/heartbeat/
[node1]scp /etc/heartbeat/authkeys root@10.10.1.22:/etc/heartbeat/
[node1]scp /etc/heartbeat/haresources root@10.10.1.22:/etc/heartbeat/

Reboot both servers.
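
After the reboot, it is worth testing a failover by hand. A minimal check, assuming the configuration above (the virtual IP and /drbd mount should move to node2 within the configured deadtime):

[node1]/etc/init.d/heartbeat stop
[node2]ip addr show eth1
[node2]df -h /drbd
[node2]service wsmq status

Start heartbeat again on node1 when you are done.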

Install IBM WebSphere MQ on Ubuntu

Installation

Install the following packages, if not already installed.

sudo apt-get install rpm
sudo apt-get install ia32-libs
sudo apt-get install libgtk2.0-dev

I have noticed some shell-related warnings like:

[: 120: MQSeriesTXClient: unexpected operator
[: 134: MQSeriesClient: unexpected operator

These warnings do not affect the WSMQ installation, but to avoid them make sure the default shell is bash, not dash:

unlink /bin/sh
ln -s /bin/bash /bin/sh
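
On Ubuntu, the same result can be achieved with the supported mechanism, answering "No" when asked whether dash should be the default /bin/sh:

sudo dpkg-reconfigure dash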

Download the tar ball into a new directory, then extract it (use the appropriate name of the tar ball):

tar -xzvf WMQSOURCE.tar.gz

Accept the license agreement

sudo ./mqlicense.sh -accept

Note: Run the above command from a non-X11 console/terminal.

Install the rpm packages

rpm --nodeps --force-debian -ivh MQSeriesRuntime-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesSamples-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesSDK-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesServer-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesMan-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesJava-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesTXClient-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesClient-6.0.1-0.x86_64.rpm

Note: the --force-debian option is new with Ubuntu 9.10 and can be left out with prior releases of Ubuntu.
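
To verify the installation, you can create and start a throwaway queue manager (run these as a user in the mqm group; TEST.QM is just an example name):

su - mqm
crtmqm TEST.QM
strmqm TEST.QM
dspmq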

Automatic Startup

Please download the MSL1 SupportPac from the IBM website.

This SupportPac provides an RPM package containing an init script to start and stop IBM WebSphere MQ Queue Managers.

Installation Instructions

Download the tar ball into a new directory, then extract

tar xzf msl1.tar.gz

Install the rpm packages

cd MSL1
rpm -ivh --nodeps --force-debian MSL1-1.0.1-1.noarch.rpm

Rename the installed init script for ease of use:

mv /etc/init.d/ibm.com-WebSphere_MQ /etc/init.d/wsmq

You can manage your queue manager using the service command, e.g.

service wsmq status
service wsmq stop
service wsmq start

If you have difficulties installing the init script, you can download this service.tar here and follow the instructions below:

tar -xvf service.tar
mkdir /etc/conf.d
cp ibm.com-WebSphere_MQ /etc/conf.d
cp wsmq /etc/init.d/

service wsmq status

Uninstallation

To uninstall all MQSeries RPM packages

sudo rpm -qa | grep MQSeries | xargs sudo rpm -e --noscripts --force-debian --nodeps
sudo rpm -qa | grep gsk7bas  | xargs sudo rpm -e --noscripts --force-debian --nodeps

or

sudo rpm -e --nodeps --noscripts --force-debian `rpm -qa | grep MQS`

Also delete WSMQ data

rm -rf /var/mqm/*
rm -rf /opt/mqm

Setting Up A Highly Available NFS Server

Coming soon!
