Oracle Database 12c Release 1 (12.1) RAC On Oracle Linux 6 Using VirtualBox
This article describes the installation of Oracle Database 12c release 1 (12.1 64-bit) RAC on Linux (Oracle Linux 6.5 64-bit) using VirtualBox (4.3.16) with no additional shared disk devices.
- Introduction
- Download Software
- VirtualBox Installation
- VirtualBox Network Setup
- Virtual Machine Setup
- Guest Operating System Installation
- Oracle Installation Prerequisites
- Install Guest Additions
- Create Shared Disks
- Clone the Virtual Machine
- Install the Grid Infrastructure
- Install the Database Software
- Create a Database
- Check the Status of the RAC
Introduction
One of the biggest obstacles preventing people from setting up test RAC environments is the requirement for shared storage. In a production environment, shared storage is often provided by a SAN or high-end NAS device, but both of these options are very expensive when all you want to do is get some experience installing and using RAC. A cheaper alternative is to use a FireWire disk enclosure to allow two machines to access the same disk(s), but that still costs money and requires two servers. A third option is to use virtualization to fake the shared storage.
Using VirtualBox you can run multiple Virtual Machines (VMs) on a single server, allowing you to run both RAC nodes on a single machine. In addition, it allows you to set up shared virtual disks, overcoming the obstacle of expensive shared storage.
Before you launch into this installation, here are a few things to consider.
- The finished system includes the host operating system, two guest operating systems, two sets of Oracle Grid Infrastructure (Clusterware + ASM) and two Database instances all on a single server. As you can imagine, this requires a significant amount of disk space, CPU and memory.
- Following on from the last point, the VMs will each need at least 4G of RAM, preferably more if you don't want the VMs to swap like crazy. Don't assume you will be able to run this on a small PC or laptop. You won't.
- This procedure provides a bare bones installation to get the RAC working. There is no redundancy in the Grid Infrastructure installation or the ASM installation. To add this, simply create double the amount of shared disks and select the "Normal" redundancy option when it is offered. Of course, this will take more disk space.
- During the virtual disk creation, I always choose not to preallocate the disk space. This makes virtual disk access slower during the installation, but saves on wasted disk space. The shared disks must have their space preallocated.
- This is not, and should not be considered, a production-ready system. It's simply to allow you to get used to installing and using RAC.
- The Single Client Access Name (SCAN) should be defined in the DNS or GNS and round robin between three addresses, which are on the same subnet as the public and virtual IPs. Prior to 11.2.0.2 it could be defined as a single IP address in the "/etc/hosts" file, which is wrong and will cause the cluster verification to fail, but it allowed you to complete the install without the presence of a DNS. This does not seem to work for 11.2.0.2 onward.
- The virtual machines can be limited to 2G of swap, which causes a prerequisite check failure, but doesn't prevent the installation from working. If you want to avoid this, define 3G or more of swap.
- This article uses the 64-bit versions of Oracle Linux and Oracle 12c Release 1.
- When doing this installation on my server, I split the virtual disks on to different physical disks ("/u02", "/u03", "/u04"). This is not necessary, but makes things run a bit faster.
Download Software
Download the following software.
- Oracle Linux 6 (Use the latest spin, e.g. 6.5)
- VirtualBox (4.3.16)
- Oracle 12c Release 1 (12.1.0.2) Software (64 bit)
This article has been updated for the 12.1.0.2 release, but the installation is essentially unchanged since 12.1.0.1. Any variations specific for 12.1.0.1 will be noted.
Depending on your version of VirtualBox and Oracle Linux, there may be some slight variation in how the screen shots look.
VirtualBox Installation
First, install the VirtualBox software. On RHEL and its clones you do this with the following type of command as the root user.
# rpm -Uvh VirtualBox-4.3-4.3.16_95972_el6-1.x86_64.rpm
The package name will vary depending on the host distribution you are using. Once complete, VirtualBox is started from the menu.
VirtualBox Network Setup
We need to make sure a host-only network is configured and check/modify the IP range for that network. This will be the public network for our RAC installation.
Start VirtualBox from the menu.
Select the "File > Preferences" menu option.
Click "Network" in the left pane and click the "Host-only Networks" tab.
Click the "Add host-only network (Ins)" button on the right side of the screen. A network called "vboxnet0" will be created.
Click the "Edit host-only network (Space)" button on the right side of the screen.
If you want to use a different subnet for your public addresses you can change the network details here. Just make sure the subnet you choose doesn't match any real subnets on your network. I've decided to stick with the default, which for me is "192.168.56.X".
- Use the "OK" buttons to exit out of this screen and the previous one.
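The GUI steps above can also be scripted. The following is a sketch using the VBoxManage command-line tool, assuming the default "vboxnet0" interface name and the default 192.168.56.x subnet; it is guarded so it does nothing on a machine without VirtualBox installed.

```shell
# Create and configure a host-only network from the command line.
# Assumes the default interface name "vboxnet0" and 192.168.56.x subnet.
HOSTONLY_IP="192.168.56.1"
HOSTONLY_MASK="255.255.255.0"

# Guard: only run the VirtualBox commands if VBoxManage is available.
if command -v VBoxManage > /dev/null 2>&1; then
  VBoxManage hostonlyif create
  VBoxManage hostonlyif ipconfig vboxnet0 --ip "$HOSTONLY_IP" --netmask "$HOSTONLY_MASK"
  VBoxManage list hostonlyifs
fi
```

Either route leaves you with the same "vboxnet0" network, so use whichever you prefer.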
Virtual Machine Setup
Now we must define the two virtual RAC nodes. We can save time by defining one VM, then cloning it when it is installed.
Start VirtualBox and click the "New" button on the toolbar. Enter the name "ol6-121-rac1", OS "Linux" and Version "Oracle (64 bit)", then click the "Next" button.
Enter "4096" as the base memory size, then click the "Next" button. Use more memory if you have enough physical memory on your machine as it will make the process much quicker!
Accept the default option to create a new virtual hard disk by clicking the "Create" button.
Accept the default hard drive file type by clicking the "Next" button.
Accept the "Dynamically allocated" option by clicking the "Next" button.
Accept the default location and set the size to "50G", then click the "Create" button. If you can spread the virtual disks onto different physical disks, that will improve performance.
The "ol6-121-rac1" VM will appear on the left hand pane. Scroll down the details on the right and click on the "Network" link.
Make sure "Adapter 1" is enabled, set to "NAT", then click on the "Adapter 2" tab.
Make sure "Adapter 2" is enabled, set to "Host-only Adapter", then click on the "Adapter 3" tab.
Make sure "Adapter 3" is enabled, set to "Internal Network", then click on the "System" section.
Move "Hard Disk" to the top of the boot order and uncheck the "Floppy" option, then click the "OK" button.
The virtual machine is now configured so we can start the guest operating system installation.
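For reference, the VM definition above maps roughly onto VBoxManage commands as follows. This is only a sketch of the GUI walkthrough, not a substitute for it; the VM name and 50G disk size match the article, while the controller name and adapter options are assumptions about sensible defaults. It is guarded so it does nothing without VirtualBox installed.

```shell
# Rough command-line equivalent of the GUI VM setup (a sketch only).
VM_NAME="ol6-121-rac1"
DISK_SIZE_MB=51200   # 50G system disk

if command -v VBoxManage > /dev/null 2>&1; then
  VBoxManage createvm --name "$VM_NAME" --ostype Oracle_64 --register
  # 4G RAM; NAT, host-only and internal network adapters; boot from disk first.
  VBoxManage modifyvm "$VM_NAME" --memory 4096 \
    --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0 \
    --nic3 intnet --boot1 disk --boot2 dvd
  VBoxManage createhd --filename "$VM_NAME.vdi" --size "$DISK_SIZE_MB"
  VBoxManage storagectl "$VM_NAME" --name "SATA" --add sata
  VBoxManage storageattach "$VM_NAME" --storagectl "SATA" --port 0 \
    --device 0 --type hdd --medium "$VM_NAME.vdi"
fi
```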
Guest Operating System Installation
With the new VM highlighted, click the "Start" button on the toolbar. On the "Select start-up disk" screen, choose the relevant Oracle Linux ISO image and click the "Start" button.
The resulting console window will contain the Oracle Linux boot screen.
Continue through the Oracle Linux 6 installation as you would for a basic server. A general pictorial guide to the installation can be found here. More specifically, it should be a server installation with at least 4G of swap, firewall disabled, SELinux set to permissive and the following package groups installed:
- Base System > Base
- Base System > Compatibility libraries
- Base System > Hardware monitoring utilities
- Base System > Large Systems Performance
- Base System > Network file system client
- Base System > Performance Tools
- Base System > Perl Support
- Servers > Server Platform
- Servers > System administration tools
- Desktops > Desktop
- Desktops > Desktop Platform
- Desktops > Fonts
- Desktops > General Purpose Desktop
- Desktops > Graphical Administration Tools
- Desktops > Input Methods
- Desktops > X Window System
- Applications > Internet Browser
- Development > Additional Development
- Development > Development Tools
To be consistent with the rest of the article, the following information should be set during the installation:
- hostname: ol6-121-rac1.localdomain
- eth0: DHCP (Connect Automatically)
- eth1: IP=192.168.56.101, Subnet=255.255.255.0, Gateway=192.168.56.1, DNS=192.168.56.1, Search=localdomain (Connect Automatically)
- eth2: IP=192.168.1.101, Subnet=255.255.255.0, Gateway=<blank>, DNS=<blank>, Search=<blank> (Connect Automatically)
You are free to change the IP addresses to suit your network, but remember to stay consistent with those adjustments throughout the rest of the article.
Oracle Installation Prerequisites
Perform either the Automatic Setup or the Manual Setup to complete the basic prerequisites. The Additional Setup is required for all installations.
Automatic Setup
If you plan to use the "oracle-rdbms-server-12cR1-preinstall" package to perform all your prerequisite setup, issue the following command.
# yum install oracle-rdbms-server-12cR1-preinstall -y
Earlier versions of Oracle Linux required manual setup of the Yum repository by following the instructions at http://public-yum.oracle.com.
It is probably worth doing a full update as well, but this is not strictly speaking necessary.
# yum update -y
Manual Setup
If you have not used the "oracle-rdbms-server-12cR1-preinstall" package to perform all prerequisites, you will need to manually perform the following setup tasks.
Add or amend the following lines to the "/etc/sysctl.conf" file.
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Run the following command to change the current kernel parameters.
/sbin/sysctl -p
Add the following lines to the "/etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf" file.
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc     2047
oracle   hard   nproc     16384
oracle   soft   stack     10240
oracle   hard   stack     32768
In addition to the basic OS installation, the following packages must be installed whilst logged in as the root user. This includes the 64-bit and 32-bit versions of some packages.
# From Public Yum or ULN
yum install binutils -y
yum install compat-libcap1 -y
yum install compat-libstdc++-33 -y
yum install compat-libstdc++-33.i686 -y
yum install gcc -y
yum install gcc-c++ -y
yum install glibc -y
yum install glibc.i686 -y
yum install glibc-devel -y
yum install glibc-devel.i686 -y
yum install ksh -y
yum install libgcc -y
yum install libgcc.i686 -y
yum install libstdc++ -y
yum install libstdc++.i686 -y
yum install libstdc++-devel -y
yum install libstdc++-devel.i686 -y
yum install libaio -y
yum install libaio.i686 -y
yum install libaio-devel -y
yum install libaio-devel.i686 -y
yum install libXext -y
yum install libXext.i686 -y
yum install libXtst -y
yum install libXtst.i686 -y
yum install libX11 -y
yum install libX11.i686 -y
yum install libXau -y
yum install libXau.i686 -y
yum install libxcb -y
yum install libxcb.i686 -y
yum install libXi -y
yum install libXi.i686 -y
yum install make -y
yum install sysstat -y
yum install unixODBC -y
yum install unixODBC-devel -y
Create the new groups and users.
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
#groupadd -g 54324 backupdba
#groupadd -g 54325 dgdba
#groupadd -g 54326 kmdba
#groupadd -g 54327 asmdba
#groupadd -g 54328 asmoper
#groupadd -g 54329 asmadmin

useradd -u 54321 -g oinstall -G dba,oper oracle
Uncomment the extra groups you require.
Additional Setup
The following steps must be performed, whether you did the manual or automatic setup.
Perform the following steps whilst logged into the "ol6-121-rac1" virtual machine as the root user.
Set the password for the "oracle" user.
passwd oracle
Apart from the localhost address, the "/etc/hosts" file can be left blank, but I prefer to put the addresses in for reference.
127.0.0.1        localhost.localdomain  localhost

# Public
192.168.56.101   ol6-121-rac1.localdomain        ol6-121-rac1
192.168.56.102   ol6-121-rac2.localdomain        ol6-121-rac2

# Private
192.168.1.101    ol6-121-rac1-priv.localdomain   ol6-121-rac1-priv
192.168.1.102    ol6-121-rac2-priv.localdomain   ol6-121-rac2-priv

# Virtual
192.168.56.103   ol6-121-rac1-vip.localdomain    ol6-121-rac1-vip
192.168.56.104   ol6-121-rac2-vip.localdomain    ol6-121-rac2-vip

# SCAN
#192.168.56.105   ol6-121-scan.localdomain        ol6-121-scan
#192.168.56.106   ol6-121-scan.localdomain        ol6-121-scan
#192.168.56.107   ol6-121-scan.localdomain        ol6-121-scan
The SCAN address is commented out of the hosts file because it must be resolved using a DNS, so it can round robin between 3 addresses on the same subnet as the public IPs. The DNS can be configured on the host machine using BIND or Dnsmasq, which is much simpler. If you are using Dnsmasq, put the RAC-specific entries in the host machine's "/etc/hosts" file, with the SCAN entries uncommented, and restart Dnsmasq.
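For the Dnsmasq route, the host-side setup can be as simple as the following sketch. The SCAN entries mirror the "/etc/hosts" contents above; the guard variable is an invention here so the snippet does nothing unless you explicitly confirm it.

```shell
# Example only: on the HOST machine, append the SCAN entries to /etc/hosts
# and restart Dnsmasq so the VMs can resolve them. Guarded by an explicit
# confirmation variable so it is a safe no-op by default.
if [ "${CONFIGURE_DNSMASQ:-no}" = "yes" ]; then
  cat >> /etc/hosts <<'EOF'
# SCAN
192.168.56.105   ol6-121-scan.localdomain   ol6-121-scan
192.168.56.106   ol6-121-scan.localdomain   ol6-121-scan
192.168.56.107   ol6-121-scan.localdomain   ol6-121-scan
EOF
  service dnsmasq restart
fi
```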
Make sure the "/etc/resolv.conf" file includes a nameserver entry that points to the correct nameserver. Also, if the "domain" and "search" entries are both present, comment out one of them. For this installation my "/etc/resolv.conf" looked like this.
#domain localdomain
search localdomain
nameserver 192.168.56.1
The changes to the "resolv.conf" will be overwritten by the network manager, due to the presence of the NAT interface. For this reason, this interface should now be disabled on startup. You can enable it manually if you need to access the internet from the VMs. Edit the "/etc/sysconfig/network-scripts/ifcfg-eth0" file, making the following change. This will take effect after the next restart.
ONBOOT=no
There is no need to do the restart now. You can just run the following command.
# ifdown eth0
At this point, the networking for the first node should look something like the following. Notice that eth0 has no associated IP address because it is disabled.
# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:60:E9:90
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:63 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1180 (1.1 KiB)  TX bytes:12925 (12.6 KiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:73:0F:6D
          inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe73:f6d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4610 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5904 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:915043 (893.5 KiB)  TX bytes:2528208 (2.4 MiB)

eth2      Link encap:Ethernet  HWaddr 08:00:27:91:5F:CD
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe91:5fcd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:97605 errors:0 dropped:0 overruns:0 frame:0
          TX packets:52470 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:91468262 (87.2 MiB)  TX bytes:29058220 (27.7 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:9879 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9879 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6823490 (6.5 MiB)  TX bytes:6823490 (6.5 MiB)

#
With this in place and the DNS configured, the SCAN address resolves to all three IP addresses.
# nslookup ol6-121-scan
Server:         192.168.56.1
Address:        192.168.56.1#53

Name:   ol6-121-scan.localdomain
Address: 192.168.56.105
Name:   ol6-121-scan.localdomain
Address: 192.168.56.106
Name:   ol6-121-scan.localdomain
Address: 192.168.56.107

#
Change the setting of SELinux to permissive by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
SELINUX=permissive
If you have the Linux firewall enabled, you will need to disable or configure it, as shown here or here. The following is an example of disabling the firewall.
# service iptables stop
# chkconfig iptables off
Either configure NTP, or make sure it is not configured so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. If you want to deconfigure NTP do the following, which is what I did for this installation.
# service ntpd stop
Shutting down ntpd:                                        [  OK  ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid
If your RAC is going to be permanently connected to your main network and you want to use NTP, you must add the "-x" option into the following line in the "/etc/sysconfig/ntpd" file.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
Then restart NTP.
# service ntpd restart
Create the directories in which the Oracle software will be installed.
mkdir -p /u01/app/12.1.0.2/grid
mkdir -p /u01/app/oracle/product/12.1.0.2/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01/
Log in as the "oracle" user and add the following lines at the end of the "/home/oracle/.bash_profile" file.
# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_HOSTNAME=ol6-121-rac1.localdomain
export ORACLE_UNQNAME=CDBRAC
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/12.1.0.2/grid
export DB_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=cdbrac1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'
Create a file called "/home/oracle/grid_env" with the following contents.
export ORACLE_SID=+ASM1
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
Create a file called "/home/oracle/db_env" with the following contents.
export ORACLE_SID=cdbrac1
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
Once the "/home/oracle/.bash_profile" has been run, you will be able to switch between environments as follows.
$ grid_env
$ echo $ORACLE_HOME
/u01/app/12.1.0.2/grid
$ db_env
$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0.2/db_1
$
We've made a lot of changes, so it's worth doing a reboot of the VM at this point to make sure all the changes have taken effect.
# shutdown -r now
Install Guest Additions
Click on the "Devices > Install Guest Additions" menu option at the top of the VM screen. If you get the option to auto-run take it. If not, then run the following commands.
cd /media/VBOXADDITIONS_4.3.16_95972
sh ./VBoxLinuxAdditions.run
Add the "oracle" user into the "vboxsf" group so it has access to shared drives.
# usermod -G vboxsf,dba oracle

# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(vboxsf)
#
Unzip the grid and database software on the host machine.
unzip linuxamd64_12102_grid_1of2.zip
unzip linuxamd64_12102_grid_2of2.zip
unzip linuxamd64_12102_database_1of2.zip
unzip linuxamd64_12102_database_2of2.zip
Create a shared folder (Devices > Shared Folders) on the virtual machine, pointing to the directory on the host where the Oracle software was unzipped. Check the "Auto-mount" and "Make Permanent" options before clicking the "OK" button.
The VM will need to be restarted for the guest additions to be used properly. The next section requires a shutdown so no additional restart is needed at this time. Once the VM is restarted, the shared folder called "/media/sf_12.1.0.2" will be accessible by the "oracle" user.
Create Shared Disks
Shut down the "ol6-121-rac1" virtual machine using the following command.
# shutdown -h now
On the host server, create 4 sharable virtual disks and associate them as virtual media using the following commands. You can pick a different location, but make sure they are outside the existing VM directory.
$ mkdir -p /u04/VirtualBox/ol6-121-rac
$ cd /u04/VirtualBox/ol6-121-rac
$
$ # Create the disks and associate them with VirtualBox as virtual media.
$ VBoxManage createhd --filename asm1.vdi --size 5120 --format VDI --variant Fixed
$ VBoxManage createhd --filename asm2.vdi --size 5120 --format VDI --variant Fixed
$ VBoxManage createhd --filename asm3.vdi --size 5120 --format VDI --variant Fixed
$ VBoxManage createhd --filename asm4.vdi --size 5120 --format VDI --variant Fixed
$
$ # Connect them to the VM.
$ VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 1 --device 0 --type hdd \
    --medium asm1.vdi --mtype shareable
$ VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 2 --device 0 --type hdd \
    --medium asm2.vdi --mtype shareable
$ VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 3 --device 0 --type hdd \
    --medium asm3.vdi --mtype shareable
$ VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 4 --device 0 --type hdd \
    --medium asm4.vdi --mtype shareable
$
$ # Make shareable.
$ VBoxManage modifyhd asm1.vdi --type shareable
$ VBoxManage modifyhd asm2.vdi --type shareable
$ VBoxManage modifyhd asm3.vdi --type shareable
$ VBoxManage modifyhd asm4.vdi --type shareable
Start the "ol6-121-rac1" virtual machine by clicking the "Start" button on the toolbar. When the server has started, log in as the root user so you can configure the shared disks. The current disks can be seen by issuing the following commands.
# cd /dev
# ls sd*
sda  sda1  sda2  sdb  sdc  sdd  sde
#
Use the "fdisk" command to partition the disks sdb to sde. The following output shows the expected fdisk output for the sdb disk.
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x62be91cf.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-652, default 652):
Using default value 652

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
#
In each case, the sequence of answers is "n", "p", "1", "Return", "Return" and "w".
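That answer sequence can also be fed to fdisk non-interactively, which saves typing when partitioning all four disks. This is a sketch, assuming the same device names as above; the confirmation variable is an invention here so the loop cannot run by accident, since it would destroy any existing partition table.

```shell
# Non-interactive equivalent of the fdisk session above: one primary
# partition spanning each whole shared disk. The answers match the
# interactive sequence: n, p, 1, <Return>, <Return>, w.
FDISK_ANSWERS='n
p
1


w
'

# Guarded: only partitions disks when explicitly confirmed, and only
# devices that actually exist. DESTRUCTIVE when enabled.
if [ "${CONFIRM_PARTITION:-no}" = "yes" ]; then
  for disk in sdb sdc sdd sde; do
    if [ -b "/dev/$disk" ]; then
      printf '%s' "$FDISK_ANSWERS" | fdisk "/dev/$disk"
    fi
  done
fi
```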
Once all the disks are partitioned, the results can be seen by repeating the previous "ls" command.
# cd /dev
# ls sd*
sda  sda1  sda2  sdb  sdb1  sdc  sdc1  sdd  sdd1  sde  sde1
#
Configure your UDEV rules, as shown here.
Add the following to the "/etc/scsi_id.config" file to configure SCSI devices as trusted. Create the file if it doesn't already exist.
options=-g
The SCSI IDs of my disks are displayed below.
# /sbin/scsi_id -g -u -d /dev/sdb
1ATA_VBOX_HARDDISK_VB1bb0c812-29a5f87c
# /sbin/scsi_id -g -u -d /dev/sdc
1ATA_VBOX_HARDDISK_VB48611c62-fb44446d
# /sbin/scsi_id -g -u -d /dev/sdd
1ATA_VBOX_HARDDISK_VB86ad2f7a-0104fd50
# /sbin/scsi_id -g -u -d /dev/sde
1ATA_VBOX_HARDDISK_VB61da7d52-a5a283e4
#
Using these values, edit the "/etc/udev/rules.d/99-oracle-asmdevices.rules" file adding the following 4 entries. All parameters for a single entry must be on the same line.
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB1bb0c812-29a5f87c", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB48611c62-fb44446d", NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB86ad2f7a-0104fd50", NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB61da7d52-a5a283e4", NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
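Rather than copying each SCSI ID by hand, the four rule entries can be generated from the scsi_id output. This is a sketch; "make_asm_rule" is a helper name invented here, not part of any Oracle or UDEV tooling, and the generation step is guarded so it only runs when explicitly enabled on a node with the shared disks attached.

```shell
# Build one UDEV rule line from a SCSI ID and an ASM device name.
# make_asm_rule is a hypothetical helper defined for this sketch.
make_asm_rule() {
  # $1 = scsi_id result, $2 = ASM device name.
  # Single quotes keep the literal $parent in the rule text.
  printf 'KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="%s", NAME="%s", OWNER="oracle", GROUP="dba", MODE="0660"\n' "$1" "$2"
}

# Guarded: generate the rules file on the RAC node (run as root).
if [ "${GENERATE_RULES:-no}" = "yes" ] && [ -b /dev/sdb ]; then
  i=1
  for disk in sdb sdc sdd sde; do
    id=$(/sbin/scsi_id -g -u -d "/dev/$disk")
    make_asm_rule "$id" "asm-disk$i"
    i=$((i+1))
  done > /etc/udev/rules.d/99-oracle-asmdevices.rules
fi
```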
Load updated block device partition tables.
# /sbin/partprobe /dev/sdb1
# /sbin/partprobe /dev/sdc1
# /sbin/partprobe /dev/sdd1
# /sbin/partprobe /dev/sde1
Test the rules are working as expected.
# /sbin/udevadm test /block/sdb/sdb1
Reload the UDEV rules and start UDEV.
# /sbin/udevadm control --reload-rules
# /sbin/start_udev
The disks should now be visible and have the correct ownership using the following command. If they are not visible, your UDEV configuration is incorrect and must be fixed before you proceed.
# ls -al /dev/asm*
brw-rw---- 1 oracle dba 8, 17 Oct 12 14:39 /dev/asm-disk1
brw-rw---- 1 oracle dba 8, 33 Oct 12 14:38 /dev/asm-disk2
brw-rw---- 1 oracle dba 8, 49 Oct 12 14:39 /dev/asm-disk3
brw-rw---- 1 oracle dba 8, 65 Oct 12 14:39 /dev/asm-disk4
#
The shared disks are now configured for the grid infrastructure.
Clone the Virtual Machine
Later versions of VirtualBox allow you to clone VMs, but these also attempt to clone the shared disks, which is not what we want. Instead we must manually clone the VM.
Shut down the "ol6-121-rac1" virtual machine using the following command.
# shutdown -h now
You may get errors if you create the virtual disk in the default location VirtualBox will use to create the VM. If that happens, rename the folder holding the new virtual disk and go through the creation process of the new VM again.
Manually clone the "ol6-121-rac1.vdi" disk using the following commands on the host server.
$ mkdir -p /u03/VirtualBox/ol6-121-rac2
$ VBoxManage clonehd /u01/VirtualBox/ol6-121-rac1/ol6-121-rac1.vdi /u03/VirtualBox/ol6-121-rac2/ol6-121-rac2.vdi
Create the "ol6-121-rac2" virtual machine in VirtualBox in the same way as you did for "ol6-121-rac1", with the exception of using an existing "ol6-121-rac2.vdi" virtual hard drive.
Remember to add the three network adapters as you did on the "ol6-121-rac1" VM. When the VM is created, attach the shared disks to this VM.
$ cd /u04/VirtualBox/ol6-121-rac
$
$ VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 1 --device 0 --type hdd \
    --medium asm1.vdi --mtype shareable
$ VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 2 --device 0 --type hdd \
    --medium asm2.vdi --mtype shareable
$ VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 3 --device 0 --type hdd \
    --medium asm3.vdi --mtype shareable
$ VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 4 --device 0 --type hdd \
    --medium asm4.vdi --mtype shareable
Start the "ol6-121-rac2" virtual machine by clicking the "Start" button on the toolbar. Ignore any network errors during the startup.
Log in to the "ol6-121-rac2" virtual machine as the "root" user so we can reconfigure the network settings to match the following.
- hostname: ol6-121-rac2.localdomain
- eth0: DHCP (*Not* Connect Automatically)
- eth1: IP=192.168.56.102, Subnet=255.255.255.0, Gateway=192.168.56.1, DNS=192.168.56.1, Search=localdomain (Connect Automatically)
- eth2: IP=192.168.1.102, Subnet=255.255.255.0, Gateway=<blank>, DNS=<blank>, Search=<blank> (Connect Automatically)
Amend the hostname in the "/etc/sysconfig/network" file.
NETWORKING=yes
HOSTNAME=ol6-121-rac2.localdomain
Check the MAC address of each of the available network connections. Don't worry that they are listed as "eth3" to "eth5". These are dynamically created connections because the MAC addresses of the "eth0" to "eth2" connections are incorrect.
# ifconfig -a | grep eth
eth3      Link encap:Ethernet  HWaddr 08:00:27:43:41:74
eth4      Link encap:Ethernet  HWaddr 08:00:27:4B:4F:0F
eth5      Link encap:Ethernet  HWaddr 08:00:27:E8:70:17
#
Edit the "/etc/sysconfig/network-scripts/ifcfg-eth0", amending only the HWADDR setting as follows and deleting the UUID entry. Note, the HWADDR value comes from the "eth3" interface displayed above.
HWADDR=08:00:27:43:41:74
Edit the "/etc/sysconfig/network-scripts/ifcfg-eth1", amending only the IPADDR and HWADDR settings as follows and deleting the UUID entry. Note, the HWADDR value comes from the "eth4" interface displayed above.
HWADDR=08:00:27:4B:4F:0F
IPADDR=192.168.56.102
Edit the "/etc/sysconfig/network-scripts/ifcfg-eth2", amending only the IPADDR and HWADDR settings as follows and deleting the UUID entry. Note, the HWADDR value comes from the "eth5" interface displayed above.
HWADDR=08:00:27:E8:70:17
IPADDR=192.168.1.102
Restart the virtual machine.
# shutdown -r now
If the adapter names do not reset properly, check the HWADDR settings in the "/etc/udev/rules.d/70-persistent-net.rules" file. If they are incorrect, amend them to match the settings described above and restart the VM.
At this point, the networking for the second node should look something like the following. Notice that eth0 has no associated IP address because it is disabled.
# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:80:14:C5
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:62 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1180 (1.1 KiB)  TX bytes:12838 (12.5 KiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:CE:D9:84
          inet addr:192.168.56.102  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fece:d984/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5918 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4467 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2512951 (2.3 MiB)  TX bytes:921096 (899.5 KiB)

eth2      Link encap:Ethernet  HWaddr 08:00:27:62:8C:96
          inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe62:8c96/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:51556 errors:0 dropped:0 overruns:0 frame:0
          TX packets:96842 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:28449256 (27.1 MiB)  TX bytes:91040172 (86.8 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:12571 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12571 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8117490 (7.7 MiB)  TX bytes:8117490 (7.7 MiB)

#
Edit the "/home/oracle/.bash_profile" file on the "ol6-121-rac2" node to correct the ORACLE_SID and ORACLE_HOSTNAME values.
export ORACLE_SID=cdbrac2
export ORACLE_HOSTNAME=ol6-121-rac2.localdomain
Also, amend the ORACLE_SID setting in the "/home/oracle/db_env" and "/home/oracle/grid_env" files.
Restart the "ol6-121-rac2" virtual machine and start the "ol6-121-rac1" virtual machine. When both nodes have started, check they can both ping all the public and private IP addresses using the following commands.
ping -c 3 ol6-121-rac1
ping -c 3 ol6-121-rac1-priv
ping -c 3 ol6-121-rac2
ping -c 3 ol6-121-rac2-priv
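Since the same checks are run on both nodes, a small loop makes them easier to repeat. This is just a convenience wrapper around the pings above; the hostnames only resolve inside the RAC environment.

```shell
# Ping every public and private hostname and report a per-host status.
# Hosts that don't resolve or don't respond are reported as FAILED.
HOSTS="ol6-121-rac1 ol6-121-rac1-priv ol6-121-rac2 ol6-121-rac2-priv"

for host in $HOSTS; do
  if ping -c 3 "$host" > /dev/null 2>&1; then
    echo "$host: OK"
  else
    echo "$host: FAILED"
  fi
done
```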
Check the SCAN address is still being resolved properly on both nodes.
# nslookup ol6-121-scan
Server:         192.168.56.1
Address:        192.168.56.1#53

Name:   ol6-121-scan.localdomain
Address: 192.168.56.105
Name:   ol6-121-scan.localdomain
Address: 192.168.56.106
Name:   ol6-121-scan.localdomain
Address: 192.168.56.107

#
At this point the virtual IP addresses defined in the "/etc/hosts" file will not work, so don't bother testing them.
Check the UDEV rules are working on both machines. In previous versions of OL6 the "/etc/udev/rules.d/99-oracle-asmdevices.rules" file copied between servers during the clone without any issues. For some reason, this doesn't seem to happen on my OL6.3 installations, so you may need to repeat the UDEV configuration on the second node if the output of the following command is not consistent on both nodes.
# ls -al /dev/asm*
brw-rw----. 1 oracle dba 8, 17 Jan 12 20:16 /dev/asm-disk1
brw-rw----. 1 oracle dba 8, 33 Jan 12 20:16 /dev/asm-disk2
brw-rw----. 1 oracle dba 8, 49 Jan 12 20:16 /dev/asm-disk3
brw-rw----. 1 oracle dba 8, 65 Jan 12 20:16 /dev/asm-disk4
#
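If the rules file did not survive the clone, one option is to copy it across from the first node and reload UDEV, rather than repeating the whole configuration by hand. The commands below are a sketch, run as root on "ol6-121-rac2", and assume the disk device names are the same on both nodes.

```
# Copy the rules file from node 1 (assumes root SSH access between nodes).
scp root@ol6-121-rac1:/etc/udev/rules.d/99-oracle-asmdevices.rules \
    /etc/udev/rules.d/99-oracle-asmdevices.rules

# Reload the rules and re-trigger the block devices.
udevadm control --reload-rules
udevadm trigger --subsystem-match=block

# Confirm the ASM devices now exist with the correct ownership.
ls -al /dev/asm*
```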
Prior to 11gR2 we would probably have used the "runcluvfy.sh" utility in the clusterware root directory to check that the prerequisites had been met. If you are intending to configure SSH connectivity using the installer, this check should be omitted as it will always fail. If you want to set up SSH connectivity manually, then once it is done you can run "runcluvfy.sh" with the following command.
/mountpoint/clusterware/runcluvfy.sh stage -pre crsinst -n ol6-121-rac1,ol6-121-rac2 -verbose
If you get any failures be sure to correct them before proceeding.
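For reference, manual SSH equivalence can be set up along these lines. This is a sketch, run as the "oracle" user, using this article's node names; the installer's "SSH Connectivity..." button does the same job, so only do this if you want to run "runcluvfy.sh" before starting the installer.

```
# On ol6-121-rac1: generate a key pair with no passphrase, if one doesn't exist.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to both nodes (including the local one), then repeat
# the key generation and copy from ol6-121-rac2.
ssh-copy-id oracle@ol6-121-rac1
ssh-copy-id oracle@ol6-121-rac2

# Test that the connection works without a password prompt.
ssh oracle@ol6-121-rac2 hostname
```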
The virtual machine setup is now complete.
Before moving forward you should probably shut down your VMs and take snapshots of them. If any failures happen beyond this point it is probably better to switch back to those snapshots, clean up the shared drives and start the grid installation again. An alternative to cleaning up the shared disks is to back them up now using zip and just replace them in the event of a failure.
$ cd /u04/VirtualBox/ol6-121-rac
$ zip PreGrid.zip *.vdi
Install the Grid Infrastructure
Make sure both virtual machines are started. Install the following package from the Oracle grid media as the root user.
# cd /media/sf_12.1.0.2/grid/rpm
# rpm -Uvh cvuqdisk*
Log in to "ol6-121-rac1" as the "oracle" user and start the Oracle installer.
$ cd /media/sf_12.1.0.2/grid
$ ./runInstaller
Select the "Install and Configure Oracle Grid Infrastructure for a Cluster" option, then click the "Next" button.
Accept the "Configure a Standard cluster" option by clicking the "Next" button.
Select the "Typical Installation" option, then click the "Next" button.
On the "Specify Cluster Configuration" screen, enter the correct SCAN Name and click the "Add" button.
Enter the details of the second node in the cluster, then click the "OK" button.
Click the "SSH Connectivity..." button and enter the password for the "oracle" user. Click the "Setup" button to configure SSH connectivity, and the "Test" button to test it once it is complete. Once the test is complete, click the "Next" button.
If you are doing a 12.1.0.1 installation, you will have to click the "Identify network interfaces" button, but in 12.1.0.2 this is on the following screen.
Check the public and private networks are specified correctly. If the NAT interface is displayed, remember to mark it as "Do Not Use". Click the "Next" button.
Enter "/u01/app/12.1.0.2/grid" as the software location and "Automatic Storage Management" as the cluster registry storage type. Enter the ASM password, select "dba" as the group and click the "Next" button.
Set the redundancy to "External", click the "Change Discovery Path" button and set the path to "/dev/asm*". Return to the main screen, select all 4 disks and click the "Next" button.
Accept the default inventory directory by clicking the "Next" button.
If you want the root scripts to run automatically, enter the relevant credentials. I prefer to run them manually. Click the "Next" button.
Wait while the prerequisite checks complete. If you have any issues use the "Fix & Check Again" button. Once possible fixes are complete, check the "Ignore All" checkbox and click the "Next" button. It is likely the "Physical Memory" and "Device Checks for ASM" tests will fail for this type of installation. This is OK.
If you are happy with the summary information, click the "Install" button.
Wait while the installation takes place.
When prompted, run the configuration scripts on each node.
The output from the "orainstRoot.sh" file should look something like that listed below.
# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
#
The output of the "root.sh" will vary a little depending on the node it is run on. Example output can be seen here (Node1, Node2).
Once the scripts have completed, return to the "Execute Configuration Scripts" screen on "ol6-121-rac1" and click the "OK" button.
Wait for the configuration assistants to complete.
If any of the configuration steps fail, check the specified log to see if the error is a show-stopper or not. If you are not using DNS to resolve the SCAN, you can expect the verification phase to fail with an error like the following.
INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "rac-scan.localdomain"...
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.2.201) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: Verification of SCAN VIP and Listener setup failed
Provided this is the only error, it is safe to ignore this and continue by clicking the "Next" button.
Click the "Close" button to exit the installer.
The grid infrastructure installation is now complete. We can check the status of the installation using the following commands.
$ grid_env
$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.asm
               ONLINE  ONLINE       ol6-121-rac1             Started,STABLE
               ONLINE  ONLINE       ol6-121-rac2             Started,STABLE
ora.net1.network
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.ons
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       ol6-121-rac1             169.254.241.156 192.
                                                             168.1.101,STABLE
ora.cvu
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       ol6-121-rac1             Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.ol6-121-rac1.vip
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.ol6-121-rac2.vip
      1        ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
--------------------------------------------------------------------------------
$
At this point it is probably a good idea to shut down both VMs and take snapshots. Remember to make a fresh zip of the ASM disks on the host machine, which you will need to restore if you revert to the post-grid snapshots.
$ cd /u04/VirtualBox/ol6-121-rac
$ zip PostGrid.zip *.vdi
Install the Database Software
Make sure the "ol6-121-rac1" and "ol6-121-rac2" virtual machines are started, then log in to "ol6-121-rac1" as the "oracle" user and start the Oracle installer. Check that all services are up using "crsctl stat res -t", as described previously.
$ cd /media/sf_12.1.0.2/database
$ ./runInstaller
Uncheck the security updates checkbox and click the "Next" button and "Yes" on the subsequent warning dialog.
Select the "Install database software only" option, then click the "Next" button.
Accept the "Oracle Real Application Clusters database installation" option by clicking the "Next" button.
Make sure both nodes are selected, then click the "Next" button.
Select the required languages, then click the "Next" button.
Select the "Enterprise Edition" option, then click the "Next" button.
Enter "/u01/app/oracle" as the Oracle base and "/u01/app/oracle/product/12.1.0.2/db_1" as the software location, then click the "Next" button.
Select the desired operating system groups, then click the "Next" button.
Wait for the prerequisite check to complete. If there are any problems either click the "Fix & Check Again" button, or check the "Ignore All" checkbox and click the "Next" button.
If you are happy with the summary information, click the "Install" button.
Wait while the installation takes place.
When prompted, run the configuration script on each node. When the scripts have been run on each node, click the "OK" button.
Click the "Close" button to exit the installer.
Shut down both VMs and take snapshots. Remember to make a fresh zip of the ASM disks on the host machine, which you will need to restore if you revert to the post-db snapshots.
$ cd /u04/VirtualBox/ol6-121-rac
$ zip PostDB.zip *.vdi
Create a Database
Make sure the "ol6-121-rac1" and "ol6-121-rac2" virtual machines are started, then log in to "ol6-121-rac1" as the "oracle" user and start the Database Configuration Assistant (DBCA).
$ dbca
Select the "Create Database" option and click the "Next" button.
Select the "Create a database with default configuration" option. Enter the container database name (cdbrac), pluggable database name (pdb1) and administrator password. Click the "Next" button.
Wait for the prerequisite checks to complete. If there are any problems either fix them, or check the "Ignore All" checkbox and click the "Next" button.
If you are happy with the summary information, click the "Finish" button.
Wait while the database creation takes place.
If you want to modify passwords, click the "Password Management" button. When finished, click the "Close" button.
The RAC database creation is now complete.
Check the Status of the RAC
There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.
$ srvctl config database -d cdbrac
Database unique name: cdbrac
Database name: cdbrac
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA/CDBRAC/PARAMETERFILE/spfile.296.860703391
Password file: +DATA/CDBRAC/PASSWORD/pwdcdbrac.276.860702185
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: cdbrac1,cdbrac2
Configured nodes: ol6-121-rac1,ol6-121-rac2
Database is administrator managed
$
$ srvctl status database -d cdbrac
Instance cdbrac1 is running on node ol6-121-rac1
Instance cdbrac2 is running on node ol6-121-rac2
$
The V$ACTIVE_INSTANCES view can also display the current status of the instances.
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Oct 11 20:27:34 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP, Advanced Analytics and Real Application Testing options

SQL> SELECT inst_name FROM v$active_instances;

INST_NAME
--------------------------------------------------------------------------------
ol6-121-rac1.localdomain:cdbrac1
ol6-121-rac2.localdomain:cdbrac2

SQL>
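The GV$INSTANCE view is another option, returning one row per running instance across the cluster. The query below is a sketch of the idea rather than output captured from this build.

```
SQL> SELECT inst_id, instance_name, host_name, status FROM gv$instance;
```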
For more information see:
- Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux
- Oracle Real Application Clusters Installation Guide 12c Release 1 (12.1) for Linux and UNIX
Hope this helps. Regards Tim...