Clone validation

From Unix Wiki

Snapshot based clones

Data collection

SAN

1. Determine Devices for SAN Volumes

Log in to the non-prod database server, run df -h, and check /etc/fstab for mountpoint information.

Make a note of the devices the archive log and datafile filesystems are mounted from, e.g. /dev/xvdb1, etc. for OVM-hosted VMs or /dev/mapper/mpath1p1, etc. for physical machines. Alternatively, for physical machines a UUID could be used, e.g. 350002ac00****bc6p1, etc., or a friendly name such as prdArchp1, etc. (if a multipath alias is defined in /etc/multipath.conf).

[root@abcdb01 ~]# df -h /abc/dev/arch
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/xvdb1            10.0G 2.0G 8.0G  20% /abc/dev/arch
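As a sketch, collecting the backing device for each relevant mountpoint can be scripted (the mountpoints below are placeholders; substitute the archive log and datafile filesystems found on your server):

```shell
#!/bin/sh
# Print the backing device for each clone-related mountpoint, per df.
# Mountpoints here are examples; substitute your own.
device_for_mount() {
    # -P gives POSIX (one-line-per-filesystem) output; field 1 is the device
    df -P "$1" 2>/dev/null | awk 'NR==2 {print $1}'
}

for mp in /abc/dev/arch /abc/dev/data; do
    dev=$(device_for_mount "$mp")
    if [ -n "$dev" ]; then
        echo "$mp -> $dev"
    fi
done
```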

2. Determine WWNs for SAN Volumes

This is simplest if a UUID is in use, as the WWN is the UUID without the leading 3, e.g. 50002ac00****bc6. Next simplest is a friendly name: running multipath -ll devArch will return the UUID (drop the leading 3), size, and path information.

If the device starts with /dev/xvdb, etc., the WWN will have to be determined from the contents of /OVS/running_pool/abcdb01/vm.cfg on the OVM host, e.g. 'phy:/dev/mapper/350002ac00****bc6,xvdb,w' (drop the leading 3). The OVM host can be confirmed by searching for the VM at the OVM Util site found at http://di-scripttools-01/ovmutil. Make note of the WWNs for the archive logs and datafiles LUNs.

[root@abcdb01 ~]# multipath -ll devArch
350002ac00****bc6 dm-16 3PARdata,VV
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
 \_ 3:0:0:24  sdaa 66:176  [active][ready]
 \_ 4:0:0:24  sdab 67:96   [active][ready]
 \_ 3:0:1:24  sdac 132:160 [active][ready]
 \_ 4:0:1:24  sdad 132:208 [active][ready]
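The UUID-to-WWN conversion (dropping the leading 3) can be sketched as a one-line helper; the full UUID in the example is made up for illustration:

```shell
#!/bin/sh
# Derive the WWN from a multipath UUID by dropping the leading "3"
# (NAA-type UUIDs reported by multipath prefix the 16-hex-digit WWN with a 3).
uuid_to_wwn() {
    echo "$1" | sed 's/^3//'
}

uuid_to_wwn 350002ac000120bc6   # -> 50002ac000120bc6
```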

For Solaris, log in to the global zone server and run the following:

root@abcgzdb01:/ #mpathadm show lu /dev/rdsk/c0t500********E0BC6d0s2
Logical Unit:  /dev/rdsk/c0t5000********E0BC6d0s2
       mpath-support:  libmpscsi_vhci.so
       Vendor:  3PARdata
       Product:  VV
       Revision:  0000
       Name Type:  unknown type
       Name:  50002******e0bc6
       Asymmetric:  no
       Current Load Balance:  round-robin
       Logical Unit Group ID:  NA
       Auto Failback:  on
       Auto Probing:  NA

The Name string is the WWN of the SAN device. The /dev/rdsk/c0t500********E0BC6d0s2 device path for the mountpoint in question can be taken from /etc/vfstab or df output (change dsk to rdsk in the path).
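The block-to-raw device path conversion is a simple substitution; as a sketch (the device name below is hypothetical):

```shell
#!/bin/sh
# Convert a Solaris block device path (as seen in df or /etc/vfstab)
# to the raw device path expected by `mpathadm show lu`.
to_raw_device() {
    echo "$1" | sed 's|/dev/dsk/|/dev/rdsk/|'
}

to_raw_device /dev/dsk/c0t5000AAAABBBBCCCCd0s2
# -> /dev/rdsk/c0t5000AAAABBBBCCCCd0s2
```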

3. Determine 3PAR Array Hostname

Determine which 3PAR array the LUNs are served from. To do this, match the last 4 characters of the WWN:

mg-di3par-01 ends in 0689
mg-dit400-01 ends in 0BC6
mg-div400-01 ends in 4F43
mg-di-7450-01 ends in C230
mg-djo-7400c-01 ends in 76C7
slc-dif400-01 ends in 14B5
slc-di-7400c-01 ends in 235E

Make note of the 3PAR hostname.
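The suffix-to-array lookup above can be sketched as a small shell helper (hostnames are taken from the table; the sample WWN in the test usage is hypothetical):

```shell
#!/bin/sh
# Map the last 4 hex characters of a WWN to the 3PAR array hostname,
# per the suffix table in this article.
array_for_wwn() {
    # Uppercase the WWN, then take the last 4 characters
    suffix=$(echo "$1" | tr 'a-z' 'A-Z' | tail -c 5)
    case "$suffix" in
        0689) echo mg-di3par-01 ;;
        0BC6) echo mg-dit400-01 ;;
        4F43) echo mg-div400-01 ;;
        C230) echo mg-di-7450-01 ;;
        76C7) echo mg-djo-7400c-01 ;;
        14B5) echo slc-dif400-01 ;;
        235E) echo slc-di-7400c-01 ;;
        *)    echo "unknown array for suffix $suffix" >&2; return 1 ;;
    esac
}

array_for_wwn 50002ac000120bc6   # -> mg-dit400-01
```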

4. Determine Clone Volume Names

SSH to the 3PAR identified above from a Linux server and grep the volume list for the WWNs, e.g. ssh 3paradm@mg-dit400-01 showvv -showcols Name,VV_WWN | grep 50002AC00****BC6; this returns the volume name and WWN. The volume name returned is the read-write volume exported to the server, e.g. abcdev_arch. The volumes we need to check are the read-only volumes, which are the top-tier snapshots from the parent volume. That volume name is formed by appending .ro to the volume name returned above, e.g. abcdev_arch.ro. Make note of the read-only volume names for archive logs and datafiles.

[root@abcdb01 ~]# ssh 3paradm@mg-dit400-01 showvv -showcols Name,VV_WWN | grep -i 50002AC00****BC6
abcdev_arch 50002AC00****BC6
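As a sketch, the read-only snapshot name can be derived from showvv output; the showvv output and WWNs below are simulated, and on a real run you would pipe in `ssh 3paradm@<array> showvv -showcols Name,VV_WWN` instead:

```shell
#!/bin/sh
# Given showvv output on stdin (Name VV_WWN per line), print the
# read-only snapshot name (volume name + ".ro") for a given WWN.
ro_volume_for_wwn() {
    awk -v wwn="$1" 'toupper($2) == toupper(wwn) {print $1 ".ro"}'
}

# Simulated showvv output with a made-up WWN:
printf 'abcdev_arch 50002AC000120BC6\nabcdev_data 50002AC000130BC6\n' |
    ro_volume_for_wwn 50002ac000120bc6
# -> abcdev_arch.ro
```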

NFS

1. Get Filer and Volume Information for NFS Volumes

Log in to the non-prod database and mid-tier servers, run df -h, and check /etc/fstab for mountpoint information. Make note of the NetApp filer and volume name for the database and application binaries, e.g. mg-di3240-03:/vol/abcdev_db and mg-di3240-03:/vol/abcdev_appl.

Filesystem            Size  Used Avail Use% Mounted on
mg-di3240-03:/vol/abcdev_appl
                     10G   2G   8G  20% /abc/dev/app
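Splitting the NFS source shown by df into filer and volume name is a simple string operation; as a sketch:

```shell
#!/bin/sh
# Split an NFS mount source (filer:/vol/volname, as shown by df)
# into the NetApp filer hostname and the bare volume name.
nfs_filer()  { echo "$1" | cut -d: -f1; }
nfs_volume() { echo "$1" | cut -d: -f2 | sed 's|^/vol/||'; }

nfs_filer  mg-di3240-03:/vol/abcdev_appl   # -> mg-di3240-03
nfs_volume mg-di3240-03:/vol/abcdev_appl   # -> abcdev_appl
```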

Snap Clone Validation

To ensure the clone refresh will complete successfully, the following checks should be done before beginning the clone. Use the data collected in the previous steps to check the environment; if any of the following checks fails, the refresh will likely not complete.

1. Confirm SAN Parent and Clone Volume Size are Equal

Log in to the 3PAR as 3paradm and run showvv -showcols Name,VSize_MB abcprd_arch and showvv -showcols Name,VSize_MB abcdev_arch (the sizes returned are in MB). If the parent volume has been resized, the clone will fail, and new clone volumes will need to be provisioned by the Storage Team. Perform this step for both the archive log and datafile volumes.

mg-diT400-01 cli% showvv -showcols Name,VSize_MB abcprd_arch
Name            VSize_MB
abcprd_arch       102400
------------------------
total             102400
mg-diT400-01 cli% showvv -showcols Name,VSize_MB abcdev_arch
Name            VSize_MB
abcdev_arch       102400
------------------------
total             102400
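The size comparison can be sketched as a small script; the showvv output is simulated here, and on a real run you would pipe in `ssh 3paradm@<array> showvv -showcols Name,VSize_MB <volume>` instead:

```shell
#!/bin/sh
# Extract the VSize_MB column for a named volume from showvv-style
# output on stdin, then compare parent and clone sizes.
vsize_mb() {
    awk -v v="$1" '$1 == v {print $2}'
}

# Simulated showvv output for parent and clone:
prd=$(printf 'abcprd_arch 102400\n' | vsize_mb abcprd_arch)
dev=$(printf 'abcdev_arch 102400\n' | vsize_mb abcdev_arch)

if [ "$prd" = "$dev" ]; then
    echo "sizes match: ${prd} MB"
else
    echo "SIZE MISMATCH: parent=${prd} MB clone=${dev} MB - new clone volumes needed"
fi
```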

2. Confirm SAN Volumes are Clones and Identify their Parent/Source Volumes

Log in to the 3PAR as 3paradm and run showvv -showcols Name,Prov,CopyOf abcdev_arch.ro; a parent volume such as abcprd_arch should be returned. If the .ro volume is not found, it is likely the read-write volume is not a clone. Under the Prov column, full and tpvv indicate physical volumes, while snp indicates clone volumes. Perform this step for both the archive log and datafile volumes.

mg-diT400-01 cli% showvv -showcols Name,Prov,CopyOf abcdev_arch.ro
Name              Prov CopyOf
abcdev_arch.ro    snp  abcprd_arch
--------------------------------------
total
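The Prov check can be sketched as a helper over showvv-style output (simulated below; on a real run, pipe in `ssh 3paradm@<array> showvv -showcols Name,Prov,CopyOf <volume>.ro`):

```shell
#!/bin/sh
# Return success if the named volume appears with Prov "snp" (a clone)
# in showvv Name,Prov,CopyOf output on stdin; failure otherwise
# (full/tpvv indicate physical volumes).
is_clone() {
    awk -v v="$1" 'BEGIN{rc=1} $1 == v && $2 == "snp" {rc=0} END{exit rc}'
}

if printf 'abcdev_arch.ro snp abcprd_arch\n' | is_clone abcdev_arch.ro; then
    echo "abcdev_arch.ro is a clone"
fi
```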

3. Confirm NFS Volumes are Clones and Confirm their Parent/Source Volumes

Log in to the NetApp as root and run vol status abcdev_appl; this should return the parent volume and snapshot name. If no clone volume is listed, the volume is not a clone. Perform this step for both the database and application binaries volumes.

mg-di3240-03*> vol status abcdev_db
         Volume State           Status            Options
       abcdev_db online         raid0, flex       nosnap=on, nosnapdir=on,
                                   sis            guarantee=none
                Clone, backed by volume 'abcprd_db', snapshot 'clone_abcprd_abcdev'
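As a sketch, the parent volume can be pulled out of the vol status output; the output line below is simulated from the example above:

```shell
#!/bin/sh
# Extract the parent volume name from the "Clone, backed by volume" line
# of NetApp `vol status` output on stdin; prints nothing if the volume
# is not a clone.
clone_parent() {
    sed -n "s/.*Clone, backed by volume '\([^']*\)'.*/\1/p"
}

printf "Clone, backed by volume 'abcprd_db', snapshot 'clone_abcprd_abcdev'\n" |
    clone_parent
# -> abcprd_db
```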

4. Confirm Key-Based Logins to 3PAR and NetApp are Configured

Log in to the customer's production database server (e.g. abcdb11) as root and run ssh abcdba@mg-dit400-01 showvv abc* and ssh abcdba@mg-di3240-03 vol status | grep abc. You should not be prompted for a password, and volume info should be returned; if this is not the case, the Storage Team will need to configure access.

[root@abcdb11 ~]# ssh abcdba@mg-dit400-01 showvv -showcols Name abc*
Name
abcdev_arch
abcdev_data
abcprd_arch
abcprd_data
[root@abcdb11 ~]# ssh abcdba@mg-di3240-03 vol status | grep abc
abcdev_app online          raid0, flex       nosnap=on, nosnapdir=on,
abcdev_db  online          raid0, flex       nosnap=on, nosnapdir=on,
abcprd_app online          raid0, flex       nosnap=on, nosnapdir=on,
abcprd_db  online          raid0, flex       nosnap=on, nosnapdir=on,
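The key-based login check can be sketched as a helper; BatchMode=yes makes ssh fail immediately instead of prompting for a password, so a failure here means keys are not configured (the hosts and commands in the usage comment follow the examples above):

```shell
#!/bin/sh
# Verify passwordless (key-based) SSH to a storage target.
# BatchMode=yes disables password prompts, so the command fails
# rather than hanging if key auth is not set up.
check_keyauth() {
    target=$1; shift
    if ssh -o BatchMode=yes -o ConnectTimeout=10 "$target" "$@" >/dev/null 2>&1; then
        echo "key-based login to $target OK"
    else
        echo "key-based login to $target FAILED - ask Storage Team to configure access"
        return 1
    fi
}

# Usage (run as root on the prod DB server, e.g. abcdb11):
#   check_keyauth abcdba@mg-dit400-01 showvv -showcols Name 'abc*'
#   check_keyauth abcdba@mg-di3240-03 vol status
```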


RMAN Based/Manual Clones

Data Collection

To confirm the clone will be successful, we first need to collect data from the SAN and NAS environments. The following 2 steps are required for validation.

SAN

1. Determine Size of SAN Volumes

Log in to the non-prod database server and run df -h. Make note of the sizes of the archive log and datafile filesystems; do the same for prod and confirm both are the same size.

Filesystem            Size  Used Avail Use% Mounted on
/dev/xvdb
                      10G   2G   8G  20% /abc/dev/arch
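The prod/non-prod size comparison can be sketched by parsing df output from each server; the df outputs below are simulated (df -P reports sizes in 1024-byte blocks, avoiding the rounded human-readable units of -h):

```shell
#!/bin/sh
# Extract the Size column (KB, with df -P) from df output on stdin,
# then compare prod and non-prod filesystem sizes.
fs_size() {
    awk 'NR==2 {print $2}'
}

# Simulated `df -P` output from the prod and non-prod servers:
prd=$(printf 'Filesystem 1024-blocks Used Available Capacity Mounted on\n/dev/xvdb 10485760 2097152 8388608 20%% /abc/prd/arch\n' | fs_size)
dev=$(printf 'Filesystem 1024-blocks Used Available Capacity Mounted on\n/dev/xvdb 10485760 2097152 8388608 20%% /abc/dev/arch\n' | fs_size)

if [ "$prd" = "$dev" ]; then
    echo "sizes match: ${prd} KB"
else
    echo "SIZE MISMATCH: prod=${prd} KB non-prod=${dev} KB"
fi
```

The same check applies to the NFS application and database binaries filesystems in the next section.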

NFS

1. Determine Size of NFS Volumes

Log in to the non-prod database and middle-tier servers and run df -h. Make note of the sizes of the application and database binaries filesystems; do the same for prod and confirm both are the same size.

mg-di3240-03:/vol/abcdev_appl
                      10G   2G   8G  20% /abc/dev/app