Category Archives: IBM WebSphere MQ

Error provisioning VM with Vagrant

Problem:

Today I got the following error while trying to provision a VM with Vagrant to test Chef cookbooks:

ishtiaqmedsmbp2:wmq Ishtiaq$ vagrant provision
[default] The cookbook path '/Users/Ishtiaq/chef-repo/cookbooks/wmq/cookbooks' doesn't exist. Ignoring...
[default] The cookbook path '/Users/Ishtiaq/chef-repo/cookbooks/wmq/roles' doesn't exist. Ignoring...
[default] The cookbook path '/Users/Ishtiaq/chef-repo/cookbooks/wmq/data_bags' doesn't exist. Ignoring...
[default] Running provisioner: chef_solo...
The chef binary (either `chef-solo` or `chef-client`) was not found on
the VM and is required for chef provisioning. Please verify that chef
is installed and that the binary is available on the PATH.

Fix:

Use the vagrant-omnibus plugin, which downloads and installs Chef on the guest before the chef_solo provisioner runs:

vagrant plugin install vagrant-omnibus
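
The plugin is driven by an omnibus section in the Vagrantfile. A minimal sketch of what that looks like (your actual Vagrantfile will differ; the Chef version here just matches the one installed in the log below, and the recipe name comes from the run list later in this post):

# Vagrantfile (sketch) -- assumes the box definition already exists
Vagrant.configure("2") do |config|
  config.omnibus.chef_version = "11.14.2"   # setting provided by the vagrant-omnibus plugin
  config.vm.provision :chef_solo do |chef|
    chef.add_recipe "wmq"                    # matches the run list shown below
  end
end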

Running vagrant provision with the omnibus section in the Vagrantfile but without the plugin installed resulted in another error:

ishtiaqmedsmbp2:wmq Ishtiaq$ vagrant provision
There are errors in the configuration of this machine. Please fix
the following errors and try again:

Vagrant:
* Unknown configuration section 'omnibus'.

ishtiaqmedsmbp2:wmq Ishtiaq$

Install the vagrant-omnibus plugin:

ishtiaqmedsmbp2:wmq Ishtiaq$ vagrant plugin install vagrant-omnibus
Installing the 'vagrant-omnibus' plugin. This can take a few minutes...
Installed the plugin 'vagrant-omnibus (1.4.1)'!

Run vagrant provision again

ishtiaqmedsmbp2:wmq Ishtiaq$ vagrant provision
[default] The cookbook path '/Users/Ishtiaq/chef-repo/cookbooks/wmq/cookbooks' doesn't exist. Ignoring...
[default] The cookbook path '/Users/Ishtiaq/chef-repo/cookbooks/wmq/roles' doesn't exist. Ignoring...
[default] The cookbook path '/Users/Ishtiaq/chef-repo/cookbooks/wmq/data_bags' doesn't exist. Ignoring...
[default] Installing Chef 11.14.2 Omnibus package...
[default] Downloading Chef 11.14.2 for el...
[default] downloading https://www.getchef.com/chef/metadata?v=11.14.2&prerelease=false&nightlies=false&p=el&pv=6&m=x86_64
[default]   to file /tmp/install.sh.2012/metadata.txt
[default] trying curl...
[default] url	https://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-11.14.2-1.el6.x86_64.rpm
md5	ffeffb67c310e6f76bb5d90ee7e30a3f
sha256	840946bc5f7931346131c0c77f2f5bfd1328245189fc6c8173b01eb040ffb58b
[default] downloaded metadata file looks valid...
[default] downloading https://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-11.14.2-1.el6.x86_64.rpm
  to file /tmp/install.sh.2012/chef-11.14.2-1.el6.x86_64.rpm
trying curl...
[default] Comparing checksum with sha256sum...
[default] Installing Chef 11.14.2
[default] installing with rpm...
[default] warning:
[default] /tmp/install.sh.2012/chef-11.14.2-1.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
[default] Preparing...                ##################################################
[default] chef                        ##################################################
[default] Thank you for installing Chef!
[default] Running provisioner: chef_solo...
Generating chef JSON and uploading...
Running chef-solo...
[2014-08-17T01:53:04+00:00] INFO: Forking chef instance to converge...
[2014-08-17T01:53:04+00:00] WARN:
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
SSL validation of HTTPS requests is disabled. HTTPS connections are still
encrypted, but chef is not able to detect forged replies or man in the middle
attacks.

To fix this issue add an entry like this to your configuration file:

```
  # Verify all HTTPS connections (recommended)
  ssl_verify_mode :verify_peer

  # OR, Verify only connections to chef-server
  verify_api_cert true
```

To check your SSL configuration, or troubleshoot errors, you can use the
`knife ssl check` command like so:

```
  knife ssl check -c /tmp/vagrant-chef-1/solo.rb
```

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

[2014-08-17T01:53:04+00:00] INFO: *** Chef 11.14.2 ***
[2014-08-17T01:53:04+00:00] INFO: Chef-client pid: 2220
[2014-08-17T01:53:16+00:00] INFO: Setting the run_list to ["recipe[wmq]"] from CLI options
[2014-08-17T01:53:16+00:00] INFO: Run List is [recipe[wmq]]
[2014-08-17T01:53:16+00:00] INFO: Run List expands to [wmq]
[2014-08-17T01:53:16+00:00] INFO: Starting Chef Run for techish-starter

Chef cookbook for WebSphere MQ

The wmq cookbook installs and configures WebSphere MQ. The current version of the cookbook installs the MQ binaries, creates a queue manager, and creates a listener.

GitHub link is https://github.com/techish1/wmq/
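
Outside Vagrant, a plain Chef Solo run only needs the cookbook on the cookbook path and a run list entry; a minimal node JSON (passed to chef-solo with -j) would look like this, with any attribute overrides omitted:

{
  "run_list": [ "recipe[wmq]" ]
}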

Note: This cookbook was tested only on CentOS release 6.4

CONNX CompCode:2 Reason:2035 Text:Not authorized for access

If you experience this error while connecting to the queue manager, check the MCAUSER property of the server-connection channel:

DIS chl(APPS.CHL) MCAUSER
2 : DIS chl(APPS.CHL) MCAUSER
AMQ8414: Display Channel details.
CHANNEL(APPS.CHL)                       CHLTYPE(SVRCONN)
MCAUSER()

If MCAUSER is empty, set it to 'mqm':

ALTER CHANNEL(APPS.CHL) CHLTYPE(SVRCONN) MCAUSER('mqm')
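
Both the display and the alter are MQSC commands, issued inside runmqsc against the queue manager that owns the channel (the queue manager name below is a placeholder):

runmqsc QM1
DIS CHL(APPS.CHL) MCAUSER
ALTER CHANNEL(APPS.CHL) CHLTYPE(SVRCONN) MCAUSER('mqm')
END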

You should be able to connect without any errors now.

DIS chl(APPS.CHL) MCAUSER
2 : DIS chl(APPS.CHL) MCAUSER
AMQ8414: Display Channel details.
CHANNEL(APPS.CHL)                       CHLTYPE(SVRCONN)
MCAUSER(mqm)

AMQ8101: WebSphere MQ error (893) has occurred

Problem:

You experience the following error while creating, starting, stopping, or deleting a queue manager:

AMQ8101: WebSphere MQ error (893) has occurred

The queue manager error logs show the following message:

AMQ6119: An internal WebSphere MQ error has occurred ('28 - No space left on device' from semget.)

Cause:

The maximum number of semaphore arrays (SEMMNI, the fourth value of kernel.sem) is lower than what the queue manager requires.

root@mq:~# cat /proc/sys/kernel/sem
250     32000   32      128

Solution:

Increase the limit on the maximum number of semaphore arrays:

root@mq:~# echo "250 32000 32  256" > /proc/sys/kernel/sem
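
Writing to /proc only lasts until the next reboot. To make the change persistent, add the equivalent sysctl setting (same values as above) and reload:

echo "kernel.sem = 250 32000 32 256" >> /etc/sysctl.conf
sysctl -p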

Clearing WebSphere MQ shared memory resources

If it is necessary to remove IPC resources owned by mqm, follow these instructions.

WebSphere MQ provides a utility to release the residual IPC resources allocated by a queue manager. This utility clears the internal queue manager state at the same time as it removes the corresponding IPC resource. Thus, this utility ensures that the queue manager state and IPC resource allocation are kept in step. To free residual IPC resources, follow these steps:

    1. End the queue manager and all connecting applications.
    2. Log on as user mqm.
    3. Type the following on Solaris, HP-UX, and Linux:

       /opt/mqm/bin/amqiclen -x -m QMGR

       This command does not report any status. However, if some WebSphere MQ-allocated resources could not be freed, the return code is nonzero.

    4. Explicitly remove any remaining IPC resources that were created by user mqm.

If you want to remove all shared memory resources and semaphores that belong to user mqm:

ipcs -m | awk '$3 == "mqm" {print $2}' | while read i; do ipcrm -m $i; done
ipcs -s | awk '$3 == "mqm" {print $2}' | while read i; do ipcrm -s $i; done

Note: If you do this when you have more than one queue manager, or things like client trigger monitors, running on the same box, you'll be in big trouble. Run this only if you want to clear everything!
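
To confirm that nothing owned by mqm is left behind, a quick listing (plain ipcs, nothing MQ-specific) can be run afterwards:

ipcs -m | grep mqm
ipcs -s | grep mqm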

How to connect WebSphere MQ from PHP

Install WebSphere MQ Client

Download the WebSphere MQ V6.0 Clients SupportPac (MQC6) for your platform.


tar -zxvf mqc6_6.0.2.11_linuxx86.tar.gz
./mqlicense.sh -accept
rpm --nodeps -ivh MQSeries*

Install mqseries php extension

Install the following dependencies first:

yum install php-devel
yum install gcc
yum install re2c

Download the latest stable version of the PHP 'mqseries' extension.


tar -zxvf mqseries-0.13.0.tgz
cd mqseries-0.13.0

Build the mqseries module by running the following commands:

phpize
./configure
(or ./configure --with-libdir=lib64 for 64-bit)
make

Copy the mqseries extension module into the PHP modules directory:

cp modules/mqseries.so /usr/lib/php/modules/
(or cp modules/mqseries.so /usr/lib64/php/modules/ for 64-bit)

Create the file /etc/php.d/mqseries.ini and add the following configuration to it:

; Enable mqseries extension module
extension=mqseries.so
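
With the ini file in place, you can confirm that the extension loads (standard PHP CLI checks, nothing specific to this setup):

php -m | grep mqseries
php -r 'var_dump(extension_loaded("mqseries"));'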

Sample test PHP programs can be found under the examples/ directory of the extension source.

Documentation here.

WebSphere MQ installations on RedHat Linux 64-bit systems

Error description

WebSphere MQ installations on Linux 64-bit systems are failing. Also, the correct directory structure under /var/mqm is not present after the install, with only a tivoli directory found under /var/mqm.

If an attempt is made to install a fix pack after the initial install, then this also fails with the following error:

This package is not applicable to the version of WebSphere MQ that is currently installed on this machine.

Installation will not continue.

Local fix

Install the 32-bit glibc package and retry the MQ install:
yum install glibc.i686

WebSphere MQ – Save Queue Manager object definitions

MS03 SupportPac interrogates the attributes of all the objects defined to a queue manager (either local or remote) and saves them to a file. The format of this file is suitable for use with runmqsc. It is therefore possible to use this SupportPac to save the definitions of objects known to a queue manager and subsequently recreate that queue manager.
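
Typical usage of the saveqmgr program shipped with MS03 looks roughly like the following (the queue manager and file names are placeholders, and the flags are from memory, so check the SupportPac README for the exact option list):

saveqmgr -m QM1 -f QM1.mqsc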

WebSphere MQ – High Availability on Linux using DRBD and Heartbeat

Overview

High availability

High availability refers to the ability of a system or component to be operational and accessible when required for use over a specified period of time. Such a system or component is equipped to handle faults and unplanned outages gracefully so that it can continue to provide the intended functionality.

What is DRBD

DRBD is a replicated block device designed for building high-availability clusters. It mirrors a whole block device between hosts over a (preferably dedicated) network.

What is heartbeat

Heartbeat is a daemon that provides cluster infrastructure services. It allows clients to know about the presence of peer processes on other machines and to easily exchange messages with them. The heartbeat daemon needs to be combined with a cluster resource manager, which has the task of starting and stopping the services (e.g. IP addresses, WebSphere MQ, etc.) that the cluster will make highly available.

Installation (HAMQ)

In this tutorial we will set up a highly available server providing WebSphere MQ services to clients. Should a server become unavailable, WMQ will continue to be available to users. However, WMQ Clients which are communicating with a queue manager that may be subject to a restart or takeover should be written to tolerate a broken connection and should repeatedly attempt to reconnect.

WMQ server1: techish-mq-a IP address: 10.10.1.21
WMQ server2: techish-mq-b IP address: 10.10.1.22
WMQ Server Virtual IP address 10.10.1.20
We will use the /drbd directory to hold the highly available queue manager data.

To begin, set up two Ubuntu 10.04 LTS servers. You can use RAID, LVM, etc. as required. I'll assume you are using LVM to manage your disks; setting up LVM itself is beyond the scope of this tutorial.

Install/Configure DRBD

The following partition scheme will be used for the DRBD data:

/dev/data/meta-disk -- 1  GB DRBD meta data
/dev/data/drbdlv    -- 20 GB unmounted DRBD device
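
If you are following along with LVM, the two logical volumes can be created roughly like this (volume group name data, matching the lvdisplay output below):

lvcreate -L 1G  -n meta-disk data
lvcreate -L 20G -n drbdlv    data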

Sample output from lvdisplay:

--- Logical volume ---
LV Name                /dev/data/drbdlv
VG Name                data
LV UUID                GCJWiy-0eGD-S5ti-19yy-9QAN-E6tJ-j3mnce
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                20.00 GiB
Current LE             78336
Segments               2
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:0

--- Logical volume ---
LV Name                /dev/data/meta-disk
VG Name                data
LV UUID                XaDR2x-cNhV-Sxgb-auKi-YNPZ-HxLf-GYZxoE
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                1.00 GiB
Current LE             256
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:1

The isolated network between the two servers will be:

WMQ server1:     mq-a-private IP address: 192.168.0.21
WMQ server2:     mq-b-private IP address: 192.168.0.22

Ensure that /etc/hosts contains the names and IP addresses of the two servers.

Sample /etc/hosts:

127.0.0.1         localhost
10.10.1.21        techish-mq-a    node1
10.10.1.22        techish-mq-b    node2
192.168.0.21      mq-a-private
192.168.0.22      mq-b-private

Install NTP to ensure both servers have the same time.

apt-get install ntp

You can verify the time is in sync with the date command. Install drbd and heartbeat.

apt-get install drbd8-utils heartbeat

Now create a resource configuration file mq.res and place it in /etc/drbd.d/

An example /etc/drbd.d/mq.res looks as follows:

resource mq {

protocol C;

handlers {
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
}

startup {
degr-wfc-timeout 120;
}

disk {
on-io-error detach;
}

net {
cram-hmac-alg sha1;
shared-secret "password";
after-sb-0pri disconnect;
after-sb-1pri disconnect;
after-sb-2pri disconnect;
rr-conflict disconnect;
}

syncer {
rate 100M;
verify-alg sha1;
al-extents 257;
}

on node1 {
device /dev/drbd0;
disk /dev/data/drbdlv;
address 192.168.0.21:7788;
meta-disk /dev/data/meta-disk[0];
}

on node2 {
device /dev/drbd0;
disk /dev/data/drbdlv;
address 192.168.0.22:7788;
meta-disk /dev/data/meta-disk[0];
}

}

Duplicate the DRBD configuration to the other server.

scp /etc/drbd.d/mq.res root@10.10.1.22:/etc/drbd.d/

As we are using heartbeat with drbd, we need to change ownership and permissions on several DRBD related files on both servers:

chgrp haclient /sbin/drbdsetup
chmod o-x /sbin/drbdsetup
chmod u+s /sbin/drbdsetup
chgrp haclient /sbin/drbdmeta
chmod o-x /sbin/drbdmeta
chmod u+s /sbin/drbdmeta

Initialize the meta-data disk on both servers.

drbdadm create-md mq
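
If the resource is not already attached and connected (it will be after a restart of the drbd service), bring it up on both servers before promoting either side:

drbdadm up mq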

Decide which server will act as primary for the DRBD device and initiate the first full sync between the two servers. Here that is node1, so execute the following on node1:

drbdadm -- --overwrite-data-of-peer primary mq

You can view the current status of DRBD with:

cat /proc/drbd  

version: 8.3.7 (api:88/proto:86-91)
GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by mq@techish, 2011-03-14 20:36:57
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
    ns:1687824 nr:168 dw:1655320 dr:42512 al:572 bm:205 lo:1 pe:17 ua:2032 ap:0 ep:1 wo:b oos:1015968
    [>....................] sync'ed:  3.6% (1015968/1048576)K
    finish: 0:01:32 speed: 10,868 (10,868) K/sec

I prefer to wait for the initial sync to complete. Once completed, format /dev/drbd0 and mount it on node1:

mkfs.ext3 /dev/drbd0
mkdir -p /drbd
mount /dev/drbd0 /drbd

To ensure replication is working correctly, create test data on node1 and then switch node2 to be primary.

Create data:

dd if=/dev/zero of=/drbd/test.techish bs=1M count=100

Switch to node2 and make it the Primary DRBD device:

On node1:
[node1]umount /drbd
[node1]drbdadm secondary mq
On node2:
[node2]mkdir -p /drbd
[node2]drbdadm primary mq
[node2]mount /dev/drbd0 /drbd

You should now see the 100MB file in /drbd on node2. Now delete this file and make node1 the primary DRBD server to ensure replication is working in both directions.

On node2:
[node2]rm /drbd/test.techish
[node2]umount /drbd
[node2]drbdadm secondary mq
On node1:
[node1]drbdadm primary mq
[node1]mount /dev/drbd0 /drbd

Performing an ls -lh /drbd on node1 will verify the file is now removed and synchronization successfully occurred in both directions.

By now we have configured the DRBD-backed HA location /drbd and tested that replication works in both directions.

Install/Configure WSMQ

Install WSMQ on node1 and node2. Please refer to blog post Install IBM WebSphere MQ on Ubuntu

Note: remove any runlevel init scripts for wsmq on node1 and node2, since heartbeat will be responsible for starting and stopping the queue manager (see the command below).
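
On Ubuntu this can be done with update-rc.d (assuming the init script is named wsmq, as in the Install IBM WebSphere MQ on Ubuntu post further down this page):

update-rc.d -f wsmq remove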

Relocate the queue manager data directory and configuration to our DRBD device.

On node1:
[node1]mount /dev/drbd0 /drbd
[node1]mv /var/mqm/ /drbd/
[node1]ln -s /drbd/mqm/ /var/mqm
On node2:
[node2]rm -rf /var/mqm
[node2]ln -s /drbd/mqm/ /var/mqm

Configure Heartbeat

Configure heartbeat to control a Virtual IP address and fail-over WSMQ in the case of a node failure.

On node1, define the cluster within /etc/heartbeat/ha.cf. Example /etc/heartbeat/ha.cf:

logfacility     local0
keepalive 2
deadtime 30
warntime 10
initdead 120
bcast eth0
bcast eth1
node node1
node node2

On node1, define the authentication mechanism the cluster will use within /etc/heartbeat/authkeys. Example /etc/heartbeat/authkeys:

auth 3
3 md5 password

Change the permissions of /etc/heartbeat/authkeys.

chmod 600 /etc/heartbeat/authkeys

On node1, define the resources that will run on the cluster within /etc/heartbeat/haresources. We will define the master node for the resource, the virtual IP address, the file system used, and the service (wsmq) to start. Example /etc/heartbeat/haresources:

node1 IPaddr::10.10.1.20/24/eth1 drbddisk::mq Filesystem::/dev/drbd0::/drbd::ext3 wsmq

Copy the cluster configuration files from node1 to node2.

[node1]scp /etc/heartbeat/ha.cf root@10.10.1.22:/etc/heartbeat/
[node1]scp /etc/heartbeat/authkeys root@10.10.1.22:/etc/heartbeat/
[node1]scp /etc/heartbeat/haresources root@10.10.1.22:/etc/heartbeat/

Reboot both servers.
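
Once both nodes are back up, a quick check on the active node confirms that heartbeat has brought up the virtual IP and the queue manager (generic commands; adjust the interface name to your setup):

ip addr show eth1 | grep 10.10.1.20
su - mqm -c "dspmq"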

Install IBM WebSphere MQ on Ubuntu

Installation

Install the following packages, if not already installed:

sudo apt-get install rpm
sudo apt-get install ia32-libs
sudo apt-get install libgtk2.0-dev

I have noticed some shell-related warnings like:

[: 120: MQSeriesTXClient: unexpected operator
[: 134: MQSeriesClient: unexpected operator

These don't affect the WSMQ installation, but to avoid them make sure the default shell is bash, not dash:

unlink /bin/sh
ln -s /bin/bash /bin/sh

Download the tarball into a new directory, then extract it (use the appropriate tarball name):

tar -xzvf WMQSOURCE.tar.gz

Accept the license agreement

sudo ./mqlicense.sh -accept

Note: Do the above command from a non-X11 console/terminal

Install the rpm packages

rpm --nodeps --force-debian -ivh MQSeriesRuntime-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesSamples-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesSDK-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesServer-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesMan-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesJava-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesTXClient-6.0.1-0.x86_64.rpm
rpm --nodeps --force-debian -ivh MQSeriesClient-6.0.1-0.x86_64.rpm

Note: the --force-debian option is new with Ubuntu 9.10 and can be left out on prior releases of Ubuntu.
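
To verify the installation, you can check the version and then create and start a test queue manager as the mqm user (QM1 is just an example name):

su - mqm
dspmqver
crtmqm QM1
strmqm QM1
dspmq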

Automatic Startup

Download the MSL1 SupportPac from the IBM website.

This SupportPac provides an RPM package containing an init script to start and stop IBM WebSphere MQ Queue Managers.

Installation Instructions

Download the tar ball into a new directory, then extract

tar xzf msl1.tar.gz

Install the rpm packages

cd MSL1
rpm -ivh --nodeps --force-debian MSL1-1.0.1-1.noarch.rpm

Rename the installed init script for ease of use:

mv /etc/init.d/ibm.com-WebSphere_MQ /etc/init.d/wsmq

You can manage your queue manager using the service command, e.g.

service wsmq status
service wsmq stop
service wsmq start
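
To have the queue manager start automatically at boot, register the renamed script in the default runlevels (standard Debian/Ubuntu tooling, not part of the SupportPac):

update-rc.d wsmq defaults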

If you have difficulties installing the init script, you can download service.tar here and follow the instructions below:

tar -xvf service.tar
mkdir /etc/conf.d
cp ibm.com-WebSphere_MQ /etc/conf.d
cp wsmq /etc/init.d/

service wsmq status

Uninstallation

To uninstall all MQSeries RPM packages

sudo rpm -qa | grep MQSeries | xargs sudo rpm -e --noscripts --force-debian --nodeps
sudo rpm -qa | grep gsk7bas  | xargs sudo rpm -e --noscripts --force-debian --nodeps

or

sudo rpm -e --nodeps --noscripts --force-debian `rpm -qa | grep MQS`

Also delete the WSMQ data:

rm -rf /var/mqm/*
rm -rf /opt/mqm