Wednesday, October 19, 2016

CREATE DATABASE LINK


Private database link - belongs to a specific schema of a database. Only the owner of a private database link can use it.
Public database link - all users in the database can use it.
Global database link - defined in Oracle Internet Directory (OID) or an Oracle Names server. Anyone on the network can use it.

Use CREATE DATABASE LINK statement to create a database link. A database link is a schema object in one database that enables you to access objects on another database.

Once you have created a database link, you can view tables on the other database by appending @dblink to the table name in your SQL statements. You can also modify remote tables with INSERT, UPDATE, and DELETE statements.
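
For example, DML through the link looks the same as local DML with @dblink appended (a quick sketch; the table name emp and the link name demo_db are only assumptions for illustration):

SQL> insert into emp@demo_db (empno, ename) values (101, 'SCOTT');
SQL> update emp@demo_db set ename = 'SMITH' where empno = 101;
SQL> delete from emp@demo_db where empno = 101;
SQL> commit;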

Create database link syntax:

create public database link <db_link_name>
connect to <remote_username>
identified by <password>
using '<tns_service_name>';

ex 1 : create database link msql connect to user_sqlserver identified by password using 'MSQL';

ex 2 : create public database link link_db connect to murthy identified by murthy123 using 'local';

ex 3 : create public database link demo_db             -- db link name
            connect to murty identified by murthy123   -- remote db username/password
            using '192.168.6.201:1521/orcl';           -- remote database ip:port/service_name



You can check the other schema's tables by using the following command:

SQL> select * from murthy_demo@demo_db;
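
To see which links already exist, or to remove one you no longer need, the data dictionary views and the DROP statement can be used (a small sketch; the link names are just the ones created above):

SQL> select db_link, username, host from user_db_links;
SQL> select owner, db_link from dba_db_links;
SQL> drop database link msql;              -- private link, run as its owner
SQL> drop public database link link_db;    -- public link, needs the DROP PUBLIC DATABASE LINK privilege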

Sunday, October 2, 2016

Failed to create Oracle Cluster Registry configuration, rc 255

During Oracle RAC Grid Infrastructure installation, when you run the root.sh script, you may get the following errors:
ORA-27091: unable to queue I/O, ORA-15081: failed to submit an I/O operation to a disk.

[root@ocluster01 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'ocluster01'
CRS-2676: Start of 'ora.mdnsd' on 'ocluster01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ocluster01'
CRS-2676: Start of 'ora.gpnpd' on 'ocluster01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ocluster01'
CRS-2672: Attempting to start 'ora.gipcd' on 'ocluster01'
CRS-2676: Start of 'ora.cssdmonitor' on 'ocluster01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'ocluster01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ocluster01'
CRS-2672: Attempting to start 'ora.diskmon' on 'ocluster01'
CRS-2676: Start of 'ora.diskmon' on 'ocluster01' succeeded
CRS-2676: Start of 'ora.cssd' on 'ocluster01' succeeded

ASM created and started successfully.

Disk Group DATA created successfully.

Errors in file :
ORA-27091: unable to queue I/O
ORA-15081: failed to submit an I/O operation to a disk
ORA-06512: at line 4
Errors in file :
ORA-27091: unable to queue I/O
ORA-15081: failed to submit an I/O operation to a disk
ORA-06512: at line 4
Failed to create Oracle Cluster Registry configuration, rc 255
Oracle Grid Infrastructure Repository configuration failed at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6919.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
[root@ocluster01 ~]#

There are two reasons why you might get this error:
1. There might be an issue with the permissions on the Grid base directory and the ASM disks.
2. There might be an issue with the configuration of oracleasm.

First you need to deconfigure what root.sh has done, using the following commands:

1. As root, run "$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force" on all nodes except the last one.
2. On the last node, as root, run "$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode"

[root@ocluster01 oracle]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ocluster01'
CRS-2673: Attempting to stop 'ora.ctssd' on 'ocluster01'
CRS-2673: Attempting to stop 'ora.asm' on 'ocluster01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'ocluster01'
CRS-2677: Stop of 'ora.mdnsd' on 'ocluster01' succeeded
CRS-2677: Stop of 'ora.asm' on 'ocluster01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'ocluster01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'ocluster01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'ocluster01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'ocluster01'
CRS-2677: Stop of 'ora.cssd' on 'ocluster01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'ocluster01'
CRS-2677: Stop of 'ora.gipcd' on 'ocluster01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'ocluster01'
CRS-2677: Stop of 'ora.gpnpd' on 'ocluster01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ocluster01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
[root@ocluster01 oracle]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Adding Clusterware entries to upstart
crsexcl failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
[client(9216)]CRS-1006:The OCR location +DATA is inaccessible. Details in /u01/app/11.2.0/grid/log/ocluster01/client/ocrconfig_9216.log.
2016-10-02 12:49:30.114:
[client(9216)]CRS-1001:The OCR was formatted using version 3.
[client(12853)]CRS-10001:02-Oct-16 13:06 ACFS-9459: ADVM/ACFS is not supported on this OS version: 'centos-release-6-8.el6.centos.12.3.x86_64
'
[client(12855)]CRS-10001:02-Oct-16 13:06 ACFS-9201: Not Supported
2016-10-02 13:06:31.010:
[ctssd(9018)]CRS-2405:The Cluster Time Synchronization Service on host ocluster01 is shutdown by user
2016-10-02 13:06:31.017:
[mdnsd(8889)]CRS-5602:mDNS service stopping by request.
2016-10-02 13:06:43.359:
[cssd(8957)]CRS-1603:CSSD on node ocluster01 shutdown by user.
2016-10-02 13:06:43.855:
[ohasd(8496)]CRS-2767:Resource state recovery not attempted for 'ora.cssdmonitor' as its target state is OFFLINE
2016-10-02 13:06:43.856:
[ohasd(8496)]CRS-2769:Unable to failover resource 'ora.cssdmonitor'.
2016-10-02 13:06:44.771:
[ohasd(8496)]CRS-2769:Unable to failover resource 'ora.cssd'.
2016-10-02 13:06:48.722:
[gpnpd(8900)]CRS-2329:GPNPD on node ocluster01 shutdown.

****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
[root@ocluster01 oracle]# 


Then remove the existing ASM disks:

[root@ocluster01 ~]# /etc/init.d/oracleasm deletedisk ASMDISK1
Removing ASM disk "ASMDISK1":                              [  OK  ]
[root@ocluster01 ~]# /etc/init.d/oracleasm deletedisk ASMDISK2
Removing ASM disk "ASMDISK2":                              [  OK  ]
[root@ocluster01 ~]# /etc/init.d/oracleasm deletedisk ASMDISK3
Removing ASM disk "ASMDISK3":   
[root@ocluster01 ~]#

After that, configure oracleasm again and re-create the disks, as follows:
[root@ocluster01 ~]# /etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [grid]: grid
Default group to own the driver interface [dba]: asmadmin

Start Oracle ASM library driver on boot (y/n) [y]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]
 

[root@ocluster01 ~]# /etc/init.d/oracleasm init
Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}
[root@ocluster01 ~]# /etc/init.d/oracleasm createdisk ASMDISK1 /dev/sdb1
Marking disk "ASMDISK1" as an ASM disk:                    [  OK  ]
[root@ocluster01 ~]# /etc/init.d/oracleasm createdisk ASMDISK2 /dev/sdc1
Marking disk "ASMDISK2" as an ASM disk:                    [  OK  ]
[root@ocluster01 ~]# /etc/init.d/oracleasm createdisk ASMDISK3 /dev/sdd1
Marking disk "ASMDISK3" as an ASM disk:                    [  OK  ]
[root@ocluster01 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@ocluster01 ~]# /etc/init.d/oracleasm listdisks
ASMDISK1
ASMDISK2
ASMDISK3
[root@ocluster01 ~]#

After the ASM disks have been created successfully, re-run root.sh:

[root@ocluster01 ~]# /u01/app/11.2.0/grid/root.sh

Hope this works for you. All the best!

Another possible reason is an issue with permissions.

Check the Grid base and Grid home directories.
The Grid_base and Grid_home directories should be owned by grid:oinstall.
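
A quick way to verify and, if necessary, fix the ownership is shown below (the Grid home path is the one used in this installation; the Grid base path /u01/app/grid is an assumption, adjust it to your environment):

[root@ocluster01 ~]# ls -ld /u01/app/grid /u01/app/11.2.0/grid       # check current owner:group
[root@ocluster01 ~]# chown -R grid:oinstall /u01/app/grid            # Grid base (path assumed)
[root@ocluster01 ~]# chown -R grid:oinstall /u01/app/11.2.0/grid     # Grid home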

Disk Permissions.

[root@ocluster01 ~]# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 grid asmadmin 8, 17 Oct  2 12:45 ASMDISK1
brw-rw---- 1 grid asmadmin 8, 33 Oct  2 12:45 ASMDISK2
brw-rw---- 1 grid asmadmin 8, 49 Oct  2 12:45 ASMDISK3
[root@ocluster01 ~]# ll /dev/sd*
brw-rw---- 1 root disk 8,  0 Oct  2 01:30 /dev/sda
brw-rw---- 1 root disk 8,  1 Oct  1 15:25 /dev/sda1
brw-rw---- 1 root disk 8,  2 Oct  1 15:25 /dev/sda2
brw-rw---- 1 root disk 8, 16 Oct  2 12:45 /dev/sdb
brw-rw---- 1 root disk 8, 17 Oct  2 12:45 /dev/sdb1
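
If the ASMLib disk devices are not owned by grid:asmadmin as shown above, restarting oracleasm normally reapplies the owner and group chosen during "oracleasm configure"; as a rough sketch, the ownership can also be corrected by hand:

[root@ocluster01 ~]# /etc/init.d/oracleasm restart                        # reapplies the configured owner/group
[root@ocluster01 ~]# chown grid:asmadmin /dev/oracleasm/disks/ASMDISK*    # manual fix if still wrong
[root@ocluster01 ~]# chmod 660 /dev/oracleasm/disks/ASMDISK*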