Configuring ASM on RHEL 6.9 using raw devices
From 11gR2 onwards, Oracle has introduced the Grid Infrastructure software, which contains both the Clusterware and ASM. The same software can also be used to install Oracle RAC.
Pre-checks:
1-> Download the Oracle grid software from OTN.
2-> I am using Oracle VM VirtualBox, so go to Settings -> Storage and then select Create New Virtual Hard Disk with fixed-size storage.
3-> Now start the RHEL 6 VM and create partitions on the new disk using the fdisk utility.
4-> Now we have to create raw devices using the raw utility, but we have to make sure the raw device bindings are restored on every reboot of the system.
note:
===
Raw disk definition: the term raw disk refers to accessing the data on a hard disk drive (HDD) or other disk storage device directly at the individual byte level, instead of through its file system as is usually done.
5-> Now check the network connectivity and configure the required network settings.
6-> Now copy the downloaded software from the local file system to the RHEL 6 VM using WinSCP (or scp, as sketched below).
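The copy in pre-check 6 can also be done from the command line; a small sketch using scp (the zip name is the 11gR2 grid download from OTN; the host name test2 and the staging path are just examples from this setup):

    scp linux.x64_11gR2_grid.zip oracle@test2:/u02/stage/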
Let's see the installation logs below.
1-> Create a virtual hard disk.
Disk /dev/sdc: 42.7 GB, 42747527168 bytes
255 heads, 63 sectors/track, 5197 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
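The same fixed-size disk can also be created and attached from the host command line; a minimal sketch, assuming Oracle VM VirtualBox (the VM name "rhel6" and the controller name "SATA" are illustrative):

    # create a 40 GB fixed-size disk (on VirtualBox releases before 5.1 the verb is "createhd")
    VBoxManage createmedium disk --filename asm_disk.vdi --size 40960 --variant Fixed
    # attach it to the VM on a free SATA port
    VBoxManage storageattach rhel6 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm_disk.vdi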
2-> Create the partitions using fdisk.
Disk /dev/sdc: 42.7 GB, 42747527168 bytes
255 heads, 63 sectors/track, 5197 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4967274b
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         654     5253223+  83  Linux
/dev/sdc2             655        1308     5253255   83  Linux
/dev/sdc3            1309        1962     5253255   83  Linux
/dev/sdc4            1963        5197    25985137+   5  Extended
/dev/sdc5            1963        2616     5253223+  83  Linux
/dev/sdc6            2617        3270     5253223+  83  Linux
/dev/sdc7            3271        3924     5253223+  83  Linux
/dev/sdc8            3925        4578     5253223+  83  Linux
/dev/sdc9            4579        5197     4972086   83  Linux
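fdisk is interactive; if you ever need to repeat a layout like this, the answers can be piped in. A rough sketch of ours for a single 5 GB primary partition (use with care, and run partprobe or reboot so the kernel re-reads the table):

    fdisk /dev/sdc <<EOF
    n
    p
    1

    +5G
    w
    EOF
    partprobe /dev/sdc   # re-read the partition table without rebooting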
This command will show the partition information:
[root@test2 ~]# cat /proc/partitions|grep sdc*
   8        0   20971520 sda
   8        1     512000 sda1
   8        2   20458496 sda2
   8       16   20971520 sdb
   8       17   20964793 sdb1
   8       32   41745632 sdc
   8       33    5253223 sdc1
   8       34    5253255 sdc2
   8       35    5253255 sdc3
   8       36          1 sdc4
   8       37    5253223 sdc5
   8       38    5253223 sdc6
   8       39    5253223 sdc7
   8       40    5253223 sdc8
   8       41    4972086 sdc9
[root@test2 ~]#
Verify that every partition you created shows up in this list.
3-> From RHEL 6 onwards there is no longer a rawdevices service, so we have to use the udev utility instead. Open /etc/udev/rules.d/60-raw.rules and put these lines there for the 8 partitions:
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdc3", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdc5", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdc6", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdc7", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdc8", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="sdc9", RUN+="/bin/raw /dev/raw/raw8 %N"
# set permissions on these raw devices for the Oracle owner user
# (in udev rules "==" only matches, "=" assigns, so the ownership keys must use "=")
ACTION=="add", KERNEL=="raw*", OWNER="oracle", GROUP="oinstall", MODE="0660"
In the above file we have defined all the raw device bindings, so on every boot the OS will read this file and create the raw devices automatically.
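If you'd rather not type each rule by hand, a small shell sketch of ours can generate the same bindings (the partition list matches our layout; adjust it for yours, and run it only once or you will append duplicate rules):

    i=1
    for part in sdc1 sdc2 sdc3 sdc5 sdc6 sdc7 sdc8 sdc9; do
        # emit one binding rule per partition, numbering raw1..raw8
        echo "ACTION==\"add\", KERNEL==\"$part\", RUN+=\"/bin/raw /dev/raw/raw$i %N\""
        i=$((i+1))
    done >> /etc/udev/rules.d/60-raw.rules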
step 4-> Now create the raw devices using the raw command:
[root@test2 ~]# vi /etc/udev/rules.d/60-raw.rules
[root@test2 ~]# raw /dev/raw/raw1 /dev/sdc1
/dev/raw/raw1:  bound to major 8, minor 33
[root@test2 ~]# raw /dev/raw/raw2 /dev/sdc2
/dev/raw/raw2:  bound to major 8, minor 34
[root@test2 ~]# raw /dev/raw/raw3 /dev/sdc3
/dev/raw/raw3:  bound to major 8, minor 35
[root@test2 ~]# raw /dev/raw/raw4 /dev/sdc5
/dev/raw/raw4:  bound to major 8, minor 37
[root@test2 ~]# raw /dev/raw/raw5 /dev/sdc6
/dev/raw/raw5:  bound to major 8, minor 38
[root@test2 ~]# raw /dev/raw/raw6 /dev/sdc7
/dev/raw/raw6:  bound to major 8, minor 39
[root@test2 ~]# raw /dev/raw/raw7 /dev/sdc8
/dev/raw/raw7:  bound to major 8, minor 40
[root@test2 ~]# raw /dev/raw/raw8 /dev/sdc9
/dev/raw/raw8:  bound to major 8, minor 41
step 5-> Now start the udev service:
[root@test2 block]# start_udev
Starting udev: [  OK  ]
[root@test2 block]#
step 6-> Now restart the machine and execute the below command; it shows the raw devices created from the entries in /etc/udev/rules.d/60-raw.rules:
[root@test2 ~]# raw -qa
/dev/raw/raw1:  bound to major 8, minor 33
/dev/raw/raw2:  bound to major 8, minor 34
/dev/raw/raw3:  bound to major 8, minor 35
/dev/raw/raw4:  bound to major 8, minor 37
/dev/raw/raw5:  bound to major 8, minor 38
/dev/raw/raw6:  bound to major 8, minor 39
/dev/raw/raw7:  bound to major 8, minor 40
/dev/raw/raw8:  bound to major 8, minor 41
[root@test2 ~]#
[root@test2 ~]# ls -lrt /dev/raw/raw*
crw-rw---- 1 oracle oinstall 162, 0 Mar 29 23:17 /dev/raw/rawctl
crw-rw---- 1 oracle oinstall 162, 7 Mar 29 23:17 /dev/raw/raw7
crw-rw---- 1 oracle oinstall 162, 6 Mar 29 23:17 /dev/raw/raw6
crw-rw---- 1 oracle oinstall 162, 1 Mar 29 23:17 /dev/raw/raw1
crw-rw---- 1 oracle oinstall 162, 2 Mar 29 23:17 /dev/raw/raw2
crw-rw---- 1 oracle oinstall 162, 4 Mar 29 23:17 /dev/raw/raw4
crw-rw---- 1 oracle oinstall 162, 8 Mar 29 23:17 /dev/raw/raw8
crw-rw---- 1 oracle oinstall 162, 3 Mar 29 23:17 /dev/raw/raw3
crw-rw---- 1 oracle oinstall 162, 5 Mar 29 23:17 /dev/raw/raw5
[root@test2 ~]#
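As a quick sanity check (helper commands of ours, not part of the original install flow), you can query each binding and its permissions in one loop:

    for n in 1 2 3 4 5 6 7 8; do
        raw -q /dev/raw/raw$n                    # prints the major/minor binding
        stat -c '%n %U:%G %a' /dev/raw/raw$n     # expect oracle:oinstall 660
    done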
Now the raw devices are ready and we can proceed with the grid installation.
Log in to the server as the oracle user and execute runInstaller; this will bring up the OUI screens. Walk through them screen by screen.
Now we have to log in as the root user and execute the root.sh script. Unfortunately it did not work: we are unable to start the OHASD service, and it fails with the below error.
[root@test1 ~]# /u02/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u02/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2018-03-30 02:44:52: Checking for super user privileges
2018-03-30 02:44:52: User has super user privileges
CRS-4124: Oracle High Availability Services startup failed
ohasd failed to start: Inappropriate ioctl for device
Here the installation fails because this 11gR2 grid release does not support RHEL 6.9 out of the box: RHEL 6 boots with Upstart, so the /etc/inittab entry that is supposed to respawn OHASD is never processed. Execute the below steps as a workaround.
Step 1
Log in to test1 and open the file $GRID_HOME/crs/install/s_crsconfig_lib.pm.
Add the following code just before the "# Start OHASD" comment:
my $UPSTART_OHASD_SERVICE = "oracle-ohasd";   # Upstart job created in Step 2
my $INITCTL = "/sbin/initctl";

# start OHASD through Upstart instead of relying on the inittab entry
($status, @output) = system_cmd_capture ("$INITCTL start $UPSTART_OHASD_SERVICE");
if (0 != $status)
{
    error ("Failed to start $UPSTART_OHASD_SERVICE, error: $!");
    return $FAILED;
}
Step 2
Create a file /etc/init/oracle-ohasd.conf with the below content:
# Oracle OHASD startup
start on runlevel [35]
stop on runlevel [!35]
respawn
exec /etc/init.d/init.ohasd run >/dev/null 2>&1
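Optionally, you can confirm that Upstart picks up the new job before re-running root.sh (a quick check of ours; init.ohasd itself was laid down by the earlier root.sh attempt):

    initctl reload-configuration     # make Upstart re-read /etc/init
    initctl start oracle-ohasd
    initctl status oracle-ohasd      # should report "start/running"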
Step 3
Roll back the root.sh changes on test1:
cd $GRID_HOME/crs/install
[root@test1 ~]# /u02/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
2018-03-30 02:51:19: Checking for super user privileges
2018-03-30 02:51:19: User has super user privileges
2018-03-30 02:51:19: Parsing the host name
Using configuration parameter file: /u02/app/11.2.0/grid/crs/install/crsconfig_params
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
/u02/app/11.2.0/grid/bin/acfsdriverstate: line 51: /lib/acfstoolsdriver.sh: No such file or directory
/u02/app/11.2.0/grid/bin/acfsdriverstate: line 51: exec: /lib/acfstoolsdriver.sh: cannot execute: No such file or directory
Successfully deconfigured Oracle Restart stack
[root@test1 ~]#
Now run the root.sh script again:
[root@test1 ~]# /u02/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u02/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2018-03-30 02:57:24: Checking for super user privileges
2018-03-30 02:57:24: User has super user privileges
2018-03-30 02:57:24: Parsing the host name
Using configuration parameter file: /u02/app/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node test1 successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-6Server-9.0.3.x86_64
test1     2018/03/30 02:57:57     /u02/app/11.2.0/grid/cdata/test1/backup_20180330_025757.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3057 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@test1 ~]#
It works now.
How to add swap space if it is less than required:
[root@test1 ~]# fallocate -l 1G /u02/swapfile
[root@test1 ~]# ls -lh /u02/swapfile
-rw-r--r-- 1 root root 1.0G Mar 30 02:36 /u02/swapfile
[root@test1 ~]# chmod 600 /u02/swapfile
[root@test1 ~]# mkswap /u02/swapfile
mkswap: /u02/swapfile: warning: don't erase bootbits sectors on whole disk. Use -f to force.
Setting up swapspace version 1, size = 1048572 KiB
no label, UUID=6dfb481c-e88c-4af2-892f-9c6c8895a73f
[root@test1 ~]# swapon /u02/swapfile
To make the swap file persistent across reboots, add the below entry to /etc/fstab:
[root@test1 ~]# vi /etc/fstab
/u02/swapfile   swap   swap   defaults   0 0
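To confirm the new swap is active (our own quick check, not part of the original logs):

    swapon -s     # lists active swap; /u02/swapfile should appear here
    free -m       # the swap total should have grown by about 1024 MB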
Thanks,
Satya