Wednesday 24 December 2014

RAC Installation and Database creation on Oracle Linux 6.4 on Virtual Box

Here we will see the steps for creating a two-node RAC on Oracle VirtualBox. We will create separate OS users for the Grid Infrastructure owner (grid) and the Oracle RDBMS software owner (oracle). 


ORACLE_BASE for Grid : /u01/app/grid
ORACLE_HOME for Grid : /u01/app/11.2.0/grid
ORACLE_BASE for RDBMS : /u01/app/oracle
ORACLE_HOME for RDBMS : /u01/app/oracle/product/11.2.0/db_1

Virtual Box Version : 4.3.20

Oracle Linux Version : 6.5
Oracle Software version : 11.2.0.3.0


Create the Virtual Machine

------------------------------
Total Hard Disk Attached for Linux file system : 30 GB
Total Hard Disk attached for ASM File System : 4 Disks of 5 GB each
RAM - 4096 MB
Enable two network adapters. Attach Adapter 1 to a Bridged Adapter and Adapter 2 to an Internal Network.

Hostname1 -  rac1.localdomain

Hostname2 -  rac2.localdomain

Linux Installation Details


Mount Point Space Distribution

/ - 10 GB
Swap – 4200 MB
/u01 – 15 GB
Space required for Linux Installation - 3.8 GB
Space required for Grid Installation – 5.5 GB
Space Required for Database Installation – 4.7 GB

Network Adapter 1 -  Attached to Bridged Adapter

Network Adapter 2 -  Attached to Internal Network


Steps


1. Setup Virtual Machines

Create HDD of 30 GB
RAM - 4096 MB
Network Adapter 1 -  Attached to Bridged Adapter
Network Adapter 2 -  Attached to Internal Network
hostname : rac1.localdomain

2. Install Linux Software.


Give the hostname as  rac1.localdomain


Choose the Database Server installation type and select High Availability as an additional repository.


3. Install Guest Additions and restart the machine. Login as root user.


4. Configure Network. 


hostname: rac1.localdomain


eth0 - Public 

IP 192.168.37.211
Netmask 255.255.255.0
Gateway 192.168.37.1


eth1 - Private

IP 192.168.100.111
Netmask 255.255.255.0
Gateway 0.0.0.0

Make sure to create a wired connection for internet access.
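If you prefer to configure the interfaces by editing the files directly rather than through the GUI, the settings above correspond roughly to the following sketch (the HWADDR/UUID values generated on your own machine should be kept as they are):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- public interface (sketch)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.37.211
NETMASK=255.255.255.0
GATEWAY=192.168.37.1

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- private interconnect (sketch)
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.100.111
NETMASK=255.255.255.0
```

Restart the network service (service network restart) after editing these files.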


5. Update the /etc/hosts file with the details below.


#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   localhost.localdomain localhost

#Public

192.168.37.211 rac1.localdomain rac1
192.168.37.212 rac2.localdomain rac2

#Private

192.168.100.111 rac1-priv.localdomain rac1-priv
192.168.100.112  rac2-priv.localdomain rac2-priv

#Virtual

192.168.37.213 rac1-vip.localdomain rac1-vip
192.168.37.214 rac2-vip.localdomain rac2-vip

#SCAN

192.168.37.215 rac-scan.localdomain rac-scan
192.168.37.216 rac-scan.localdomain rac-scan
192.168.37.217 rac-scan.localdomain rac-scan

Note that resolving the SCAN through /etc/hosts is a workaround for a lab environment: only the first entry is ever returned, and the installer will raise a warning about it. In production the three SCAN addresses should be resolved round-robin by DNS.

6. Create the new groups and users, then set passwords for both users with passwd (the useradd -p option expects an already-encrypted password, so it is not used here).

groupadd -g 503 dba
groupadd -g 504 oinstall
groupadd -g 505 oper
groupadd -g 506 asmdba
groupadd -g 507 asmoper
groupadd -g 508 asmadmin
useradd -u 501 -g oinstall -G dba,oper,asmdba,vboxsf oracle
useradd -u 502 -g oinstall -G asmoper,asmdba,asmadmin,vboxsf grid
passwd oracle
passwd grid

If any of the users or groups already exist (for example, created by the preinstall package in the next step), adjust them instead:

groupmod -g 503 dba
groupmod -g 504 oinstall
usermod -u 501 -g oinstall -G dba,oper,asmdba,vboxsf oracle
usermod -u 502 -g oinstall -G asmoper,asmdba,asmadmin,vboxsf grid

7. Install Oracle Installation Prerequisites as root user.

yum install oracle-rdbms-server-11gR2-preinstall

yum install oracleasm
yum install oracleasm-support

8. Restart the machine. Log in as the root user and check that the Guest Additions are working correctly; if not, reinstall them.



9. Update the /etc/security/limits.d/90-nproc.conf file as described below


# Change this

*          soft    nproc    1024

# To this

* - nproc 16384

10. Change the setting of SELinux to permissive by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows.


SELINUX=permissive
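The change can also be made from the shell as root (a sketch: setenforce only changes the running mode, while the config file edit makes it persistent across reboots):

```shell
# Switch the running system to permissive mode immediately (no reboot needed)
setenforce 0

# Make the change persistent across reboots
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

# Verify the current mode: should report "Permissive"
getenforce
```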


11. Disable the Linux firewall.
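On Oracle Linux 6 this can be done as root with the service and chkconfig commands, for example:

```shell
# Stop the firewall now and prevent it from starting on boot
service iptables stop
chkconfig iptables off

# If IPv6 is enabled, do the same for ip6tables
service ip6tables stop
chkconfig ip6tables off
```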


12. Deconfigure NTP using the commands below. The Oracle Cluster Time Synchronization Service (ctssd) will then keep the time synchronized across the nodes.


# service ntpd stop

Shutting down ntpd:                                        [  OK  ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid

13. Create the directories in which the Oracle software will be installed.

mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01/app
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

14. Shut down the machine and create a shared folder pointing to the folder containing the Oracle software.


15. Installing the cvuqdisk Package for Linux


Login as root user


Without cvuqdisk, Cluster Verification Utility is unable to discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.

Install it from the grid/rpm directory as the root user:

cd /media/sf_Oracle_11g_sw/grid/rpm

rpm -Uvh cvuqdisk*


16. Update /etc/profile file with the below details


if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

umask 022
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

17. Edit /etc/security/limits.conf


oracle soft nofile 131072

oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock 3500000
oracle hard memlock 3500000
grid soft nofile 131072
grid hard nofile 131072
grid soft nproc 131072
grid hard nproc 131072
grid soft core unlimited
grid hard core unlimited
grid soft memlock 3500000
grid hard memlock 3500000


 18. Update /home/oracle/.bash_profile of oracle user as below


export ORACLE_HOSTNAME=rac1.localdomain

export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export DB_HOME=$ORACLE_BASE/product/11.2.0/db_1
export GRID_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=orcl1
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export ORACLE_TERM=xterm
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
ulimit -u 16384 -n 65536



19. Update /home/grid/.bash_profile of grid user as below


export ORACLE_HOSTNAME=rac1.localdomain

export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export ORACLE_TERM=xterm
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
ulimit -u 16384 -n 65536


20. Shut down the machine and clone the virtual machine.


21. After cloning, generate a new random MAC address for Adapters 1 and 2 by clicking the refresh icon to the left of the MAC Address field. Note down both MAC addresses; they are required to update the network settings of the clone. Make sure the Cable Connected check box is checked.



22. Log in to the rac2.localdomain virtual machine as root user


23. Amend the hostname in the /etc/sysconfig/network file.


NETWORKING=yes

HOSTNAME=rac2.localdomain

24. Reconfigure the network settings.


Update the new MAC addresses and IP addresses in the files below. On the clone the new interfaces may initially be registered as eth2 and eth3 (run ifconfig -a to check); edit or delete the /etc/udev/rules.d/70-persistent-net.rules file so that the new MAC addresses map back to eth0 and eth1, and reboot if necessary.


Edit the "/etc/sysconfig/network-scripts/ifcfg-eth0" file, amending only the IPADDR and HWADDR settings as follows, and delete the UUID entry.


HWADDR=08:00:27:95:ED:33  <---------- your Adapter 1 MAC address from step 21

IPADDR=192.168.37.212

Edit the "/etc/sysconfig/network-scripts/ifcfg-eth1" file in the same way, again deleting the UUID entry.


HWADDR=08:00:27:E3:DA:B6   <---------- your Adapter 2 MAC address from step 21

IPADDR=192.168.100.112

25. Update the parameters in the .bash_profile file of the oracle user as below:


export ORACLE_HOSTNAME=rac2.localdomain
export ORACLE_SID=orcl2

Update the parameter in the .bash_profile file of the grid user as below:


export ORACLE_HOSTNAME=rac2.localdomain


26. Go to the network settings and make sure the IP address and MAC address are updated there also.

eth0 - Public 

IP 192.168.37.212
Netmask 255.255.255.0
Gateway 192.168.37.1


eth1 - Private

IP 192.168.100.112
Netmask 255.255.255.0
Gateway 0.0.0.0


27. Shut down and restart the rac2.localdomain virtual machine, and start the rac1.localdomain virtual machine. When both nodes have started, check that each can ping all the public and private IP addresses using the following commands.


ping -c 3 rac1

ping -c 3 rac1-priv
ping -c 3 rac2
ping -c 3 rac2-priv


28. Shut down both machines and create four shared storage disks of 5 GB each. 


ASM_DISK01

ASM_DISK02
ASM_DISK03
ASM_DISK04

Mark each disk as shareable in the Virtual Media Manager and attach it to both machines.
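The same disks can be created and attached from the host command line with VBoxManage. This is a sketch: the file paths, the controller name "SATA" and the port numbers are assumptions that must match your own VM configuration.

```shell
# Create four fixed-size 5 GB disks (fixed-size is required for shareable disks)
for i in 1 2 3 4; do
  VBoxManage createhd --filename ASM_DISK0$i.vdi --size 5120 --format VDI --variant Fixed
done

# Mark the disks as shareable
for i in 1 2 3 4; do
  VBoxManage modifyhd ASM_DISK0$i.vdi --type shareable
done

# Attach each disk to both virtual machines (adjust VM names, controller and ports)
for vm in rac1 rac2; do
  for i in 1 2 3 4; do
    VBoxManage storageattach $vm --storagectl "SATA" --port $i --device 0 \
      --type hdd --medium ASM_DISK0$i.vdi
  done
done
```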


29. Start up the RAC1 machine and log in as the root user. Configure oracleasm using the oracleasm configure command. The owner should be grid and the group should be asmadmin. Make sure that the driver loads and scans disks on boot.

# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the
driver is loaded on boot and what permissions it will have. The current
values will be shown in brackets ('[]'). Hitting <ENTER> without
typing an answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

Start up the RAC2 machine and configure oracleasm in the same way as the root user:

# oracleasm configure -i


30. Shut down all nodes except the first node (RAC1).


31. Determine the partitions available. The following command shows all the partitions known to the OS.


cat /proc/partitions


or use


ls -lrt /dev/sd*


32. Partition the disks using fdisk, creating a single primary partition spanning each disk.

fdisk /dev/sdb
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde

Use cat /proc/partitions or ls -lrt /dev/sd* to check that the new partitions (sdb1 to sde1) have been created.
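fdisk is interactive; for each disk the key sequence is n (new partition), p (primary), 1 (partition number), accept the default first and last sectors, then w (write). The same can be scripted with a here-document, as in this sketch (assumes empty disks; run as root):

```shell
# Create one primary partition spanning each disk non-interactively.
# The two blank lines accept the default first and last sectors.
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
fdisk $disk <<EOF
n
p
1


w
EOF
done
```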

33. Initialize ASMLib with the oracleasm init command.


# oracleasm init



34. Use the oracleasm createdisk command to create the ASM disk label for each disk.


oracleasm createdisk <diskname> <device_name>


oracleasm createdisk ASMDISK01 /dev/sdb1

oracleasm createdisk ASMDISK02 /dev/sdc1
oracleasm createdisk ASMDISK03 /dev/sdd1
oracleasm createdisk ASMDISK04 /dev/sde1

35. Run the "scandisks" command to refresh the ASMLib disk configuration.


# oracleasm scandisks


36. Check the disk as below


# oracleasm listdisks


37. Check that the disks are mounted in the oracleasm filesystem with the command


ls -l /dev/oracleasm/disks


38. Start up the other node and check that the new ASM disks are accessible from it.

oracleasm scandisks

oracleasm listdisks

39. Log in as the grid user to install Grid Infrastructure on RAC1. Make sure that the ORACLE_BASE and ORACLE_HOME environment variables are set to /u01/app/grid and /u01/app/11.2.0/grid respectively.

echo $ORACLE_BASE

echo $ORACLE_HOME

40. Go to the location where you have the unzipped grid software.


cd /media/sf_Oracle_11g_sw/grid/

./runInstaller

41. On the Download Software Updates page, select Skip Software Updates and click Next.




42. On the Select Installation Option page, select the “Install and Configure Grid Infrastructure for a Cluster” option and click Next.




43. On the Select Installation Type page, select Advanced Installation and click Next.




44. On the Select Product Languages page, select all languages and click Next.




45. The “Grid Plug and Play Information” page appears next. Enter the details carefully:

Scan Name : rac-scan.localdomain
Uncheck Configure GNS




46. On the Cluster Node Information page, add your second node. Click the Add button, enter the fully qualified name of your second node in the box, and click OK.


Public : rac2.localdomain
Virtual : rac2-vip.localdomain




Your second node should appear in the window under your first node. Click the SSH Connectivity button. Enter the grid password. Click the Setup button. 




A dialog box stating that you have successfully established passwordless SSH connectivity appears. Click OK to close the dialog box. Click Next to continue.





47. On the Specify Network Interface Usage page, configure the correct interface type for each listed network interface:

eth0 192.168.37.0  Public
eth1 192.168.100.0 Private


48. On the Storage Option Information page, select Automatic Storage Management (ASM) and click Next.

49. On the Create ASM Disk Group page, make sure that the Disk Group Name is DATA and Redundancy is Normal. In the Add Disks region, if the ASM disks are not displayed, click Change Discovery Path.




Provide the Disk Discovery Path as /dev/oracleasm/disks. Click OK




Select ASMDISK01, ASMDISK02 and ASMDISK03. Click Next.




50. On the ASM Passwords page, click the “Use same passwords for these accounts” button. In the Specify Password field, enter oracle_4U and confirm it in the Confirm Password field. Click Next to continue.




51. Select the “Do not use Intelligent Platform Management Interface (IPMI)” option on the Failure Isolation Support page and click Next to continue.




52. On the Privileged Operating System Groups page, select asmdba for the ASM Database Administrator (OSDBA) group, asmoper for the ASM Instance Administration Operator (OSOPER) group, and asmadmin for the ASM Instance Administrator (OSASM) group. Click Next to continue.



53. On the Specify Installation Location page, make sure that Oracle Base is /u01/app/grid and Software Location is /u01/app/11.2.0/grid. Click Next.



54. On the Create Inventory page, the Inventory Directory should be /u01/app/oraInventory and the oraInventory Group Name should be oinstall. Click Next.




55. On the Perform System Prerequisite Checks page, the installer checks whether all the systems involved in the installation meet the minimum requirements for the platform. If the checks succeed, click Next. If any fixable deficiencies are found, click the “Fix & Check Again” button; the Execute Fixup Scripts dialog box appears.

The remaining warnings can be ignored in this lab setup. Click Ignore All and then click Next.




Grid Installation starts.




56. Run the scripts as the root user. Run both scripts, one after the other, on the local node first. After they complete successfully, you can run them in parallel on all the other nodes.


/u01/app/oraInventory/orainstRoot.sh
/u01/app/11.2.0/grid/root.sh



57. Once the installation has finished, run asmca as the grid user to create the FRA disk group. Click Create to create the disk group.





Give the Disk Group Name as FRA. Select External redundancy and select the remaining disk (ASMDISK04). Click OK.




After successful creation, the disk group will be displayed on the Disk Groups page. Click Exit to leave ASMCA.
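The same disk group can also be created without the GUI by running asmca in silent mode as the grid user. This is a sketch; the disk path is an assumption based on the discovery string used earlier, so adjust it to match your environment.

```shell
# Create the FRA disk group from the command line instead of the ASMCA GUI
asmca -silent -createDiskGroup \
  -diskGroupName FRA \
  -disk '/dev/oracleasm/disks/ASMDISK04' \
  -redundancy EXTERNAL
```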



Switch to the oracle user and start the database software installation:

./runInstaller

Uncheck the option to receive security updates from Oracle Support. Click Next.



Select Skip Software Updates. Click Next.




Select the “Install database software only” option. Click Next.



On the Grid Installation Options page, select Oracle Real Application Clusters database installation. Select both nodes and click SSH Connectivity.



Enter the password for the oracle user and click Setup.



Once SSH connectivity has been set up successfully, a confirmation message is displayed. Click OK and then Next.




Select the Languages.



Select the Enterprise Edition.



Specify the installation location as below.
Oracle Base : /u01/app/oracle
Software location (ORACLE_HOME) : /u01/app/oracle/product/11.2.0/db_1
Click Next


Provide the appropriate OS groups here: select dba for the OSDBA group and oper for the OSOPER group. Click Next.



The prerequisite checks are performed.



The warnings below are expected. Click Ignore All and then click Next to start the installation.




Run the root.sh script as the root user. After it completes successfully on each node, click OK. 



 The installation of Oracle Database was successful.





Now create the database. Start the Database Configuration Assistant by running dbca as the oracle user.





Select the option to create the RAC Database. Click Next



Select Create Database. Click Next.


Select the appropriate database template here: General Purpose or Transaction Processing. Click Next.



Select Admin-Managed as the configuration type. 
Give the Global Database Name and SID Prefix as orcl.
Select the nodes on which you want to create the cluster database: select rac1 and rac2. Click Next.



Check Configure Enterprise Manager. Click Next.



Provide passwords for the administrative accounts. Click Next.


Select ASM as the storage type. Select +DATA as the database area where the database files will be created.



Provide the ASM credentials: enter the same password you gave during the grid installation. Click OK.



Specify FRA as the Fast Recovery Area disk group and provide its size. Click Next.



Select Sample Schemas. Click Next



Provide memory Size, Character sets and Connection Mode and click Next

Database storage details will be displayed; if you want to edit them, you can do it here. Click Next.



Make sure Create Database is checked and click Finish.



The database summary will be displayed. Click on OK and the database creation will start.



Database creation in Progress.



After the successful creation of the database, the details will be displayed. Note down the Database Control URL used to connect to Enterprise Manager.


After database creation, update the Oracle SID for the database instance in the /etc/oratab file on both nodes: orcl1 on node 1 and orcl2 on node 2.
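The /etc/oratab entries follow the format SID:ORACLE_HOME:STARTUP_FLAG. For this setup they would look like the lines below; the flag is N because instance startup is handled by the clusterware rather than by the dbstart script.

```shell
# /etc/oratab on rac1
orcl1:/u01/app/oracle/product/11.2.0/db_1:N

# /etc/oratab on rac2
orcl2:/u01/app/oracle/product/11.2.0/db_1:N
```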


Log in to Enterprise Manager.











Check the status of RAC


As the grid user, run:

srvctl config database -d orcl


srvctl status database -d orcl


crsctl status resource -t


Clean Shutdown of RAC


1. Shut down the database. As the oracle user, execute on any node:
$ . oraenv
ORACLE_SID = [oracle] ? orcl1
srvctl stop database -d orcl

2. Stop ASM on the first node. As the grid user, execute:

$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
srvctl stop asm -f 

3. Stop ASM on the second node. As the grid user, execute:

$ . oraenv
ORACLE_SID = [oracle] ? +ASM2
srvctl stop asm -f 

Shutdown both virtual machines. Wait until all VM windows are closed.

Make sure that ASM, the database and the listeners are down using:

ps -ef|grep pmon

ps -ef|grep tns

In case any listener is still running, use lsnrctl to stop it.


Startup RAC



Start the VMs. The clusterware should start automatically, but you will need to bring up the database. Log in as the oracle user and execute:


$ . oraenv

ORACLE_SID = [oracle] ? orcl1
The Oracle base has been set to /u01/app/oracle

$ srvctl start database -d orcl
