Database patching interview questions

I have installed a PSU in an Oracle database, but I ignored the precheck errors and continued to patch. After the patch completed, I see a lot of issues in the database. How do I fix them?

Check the alert log and trace files for errors.

You can also query DBA_ERRORS for object compilation errors, and verify the applied patch level with select * from dba_registry_sqlpatch; (12c and later) or opatch lsinventory.

After determining that the errors are due to the improper patch installation, it is better to roll back the patch using

opatch rollback -id <PSU patch ID>


Now fix the precheck issues and rerun the patch; if that fails, you can always restore the database to the guaranteed restore point (GRP).
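
A minimal command sketch of that rollback flow (the patch ID 29834717 is only an example; datapatch lives in $ORACLE_HOME/OPatch and backs out the SQL-level changes):

$ cd $ORACLE_HOME/OPatch
$ ./opatch rollback -id 29834717
$ sqlplus / as sysdba
SQL> STARTUP
SQL> EXIT
$ ./datapatch -verbose

or, to fall back to the guaranteed restore point instead:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO RESTORE POINT <GRP name>;
SQL> ALTER DATABASE OPEN RESETLOGS;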


********************************************************


Step-by-step database upgrade from 12c to 19c for a CDB database


Prepare the source by taking a valid database backup.

Install the 19c binaries.

Copy the preupgrade.jar file from the target (19c) home and run it against the 12c DB home.

Validate and fix the preupgrade actions.

Start the upgrade activity by creating a GRP, followed by copying the SPFILE, password file, TDE wallet files, and listener files to the new home.

Shut down the database (with cluster_database=false in RAC) and run STARTUP UPGRADE.

Now connect to the PDBs and open them in upgrade mode as well.

Now run the dbupgrade utility.

Run utlusts.sql and utlrp.sql.

Run postupgrade_fixups.sql to check the database consistency state.

Run the time zone (TZ) upgrade scripts.

Finally, change the database COMPATIBLE parameter after thorough application testing.
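
A condensed command sketch of the flow above (the restore point name and the parallelism value are assumptions, not fixed values):

SQL> CREATE RESTORE POINT before_19c_upg GUARANTEE FLASHBACK DATABASE;
SQL> SHUTDOWN IMMEDIATE
-- switch the environment to the new 19c home, then:
SQL> STARTUP UPGRADE
SQL> ALTER PLUGGABLE DATABASE ALL OPEN UPGRADE;
$ $ORACLE_HOME/bin/dbupgrade -n 4
SQL> @?/rdbms/admin/utlusts.sql
SQL> @?/rdbms/admin/utlrp.sql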



What does the dbupgrade utility do?

We have to run dbupgrade after STARTUP UPGRADE, as it performs important actions such as

validating the prechecks and running the catctl.pl/catupgrd.sql upgrade scripts

creating new database components and tablespaces, adding or updating database objects, and gathering database statistics
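
A typical invocation, assuming the environment already points at the new home (the -n parallel-processes and -l log-directory values here are just examples):

$ export ORACLE_SID=orcl
$ $ORACLE_HOME/bin/dbupgrade -n 4 -l /u01/upgrade_logs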


What are the preupgrade scripts in pre-12c databases?

preupgrd.sql, utluppkg.sql, and utlu122s.sql


What are the major differences in the database upgrade flow between post-12c and pre-12c Oracle Database releases?

The main differences in the database upgrade flow between pre-12c and post-12c releases are the introduction of the multitenant architecture, the use of PDBs, and the use of the preupgrade.jar utility for identifying potential issues.
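
For reference, preupgrade.jar is executed with the JDK shipped in the new home while the environment still points at the source database; a typical invocation (both home paths are assumptions):

$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
$ /u01/app/oracle/product/19.0.0/dbhome_1/jdk/bin/java -jar /u01/app/oracle/product/19.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT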


Give some information about Oracle patches.


Oracle releases quarterly patches, in January, April, July, and October.

We cannot install a patch set using OPatch; patch sets are installed through the installer (OUI).

We cannot apply a patch without downtime, except in RAC (rolling patching).

Use this command to apply a single patch from a bundle: "opatch util napply <patch_location> -id 2 -skip_subset -skip_duplicate"


What is a rolling upgrade?

In a rolling upgrade, the patch is applied to one RAC node at a time while the remaining nodes continue to service the database, so the database remains available and the patch is applied without any downtime. The process can be time-consuming and complex, so it is important to follow Oracle's instructions carefully and to have a backup plan in case of any issues during the patching process.
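
A minimal sketch of a rolling patch on a two-node RAC (the database and instance names orcl, orcl1, orcl2 are assumptions):

$ srvctl stop instance -d orcl -i orcl1
$ opatch apply -local
$ srvctl start instance -d orcl -i orcl1

Repeat the same on node 2 with orcl2, then run datapatch once for the whole database.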


How to upgrade pre-12c databases?

Back up the database.

Install the new DB home.

Set the environment to the new DB home.

Run utlu112i.sql (the pre-upgrade information tool).

Connect to the database and execute STARTUP UPGRADE.

Run catupgrd.sql.

Run utlrp.sql.

Run utlu112s.sql (the post-upgrade status tool).
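
The same flow as commands, once the environment points at the new home:

SQL> SPOOL /tmp/preupgrade_info.log
SQL> @?/rdbms/admin/utlu112i.sql
SQL> SPOOL OFF
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/catupgrd.sql
SQL> @?/rdbms/admin/utlrp.sql
SQL> @?/rdbms/admin/utlu112s.sql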


What is the difference between STARTUP UPGRADE and STARTUP MIGRATE?


STARTUP MIGRATE was used for upgrades up to 9i.

STARTUP UPGRADE is used from 10g onward; it starts the database in restricted mode for running the upgrade scripts.
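
Side by side, only one of the two is valid for a given release:

SQL> STARTUP MIGRATE    -- 9i and earlier
SQL> STARTUP UPGRADE    -- 10g and later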


Cross-platform migration matrix


Source OS                             Target OS                             Endian      Conversion  RMAN  Transportable
Linux x86-64                          Linux x86-64                          same        NO          YES   YES
Linux x86-64                          Microsoft Windows x86-64              same        NO          YES   YES
Linux x86-64                          IBM AIX on POWER Systems (64-bit)     different   YES         YES   YES
Linux x86-64                          Oracle Solaris on SPARC (64-bit)      different   YES         YES   YES
Microsoft Windows x86-64              Linux x86-64                          same        NO          YES   YES
Microsoft Windows x86-64              Microsoft Windows x86-64              same        NO          YES   YES
IBM AIX on POWER Systems (64-bit)     Linux x86-64                          different   YES         YES   YES
Oracle Solaris on SPARC (64-bit)      Linux x86-64                          different   YES         YES   YES
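
The endian format of every supported platform can be confirmed from the database itself:

SQL> SELECT platform_id, platform_name, endian_format
     FROM v$transportable_platform
     ORDER BY platform_name;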



What are the two main approaches to migrate Oracle databases in an on-premises environment?

The two approaches for migrating an Oracle database from one platform to another are Transportable Tablespaces (TTS) and RMAN backup and restore.


Give me the steps for the transportable method.

Prepare the source database:
a. Ensure that the tablespaces to be migrated are self-contained and do not have any dependencies on other objects outside of the tablespaces.
b. Verify that the source database is running on a compatible platform and Oracle version with the target database.
c. Convert tablespaces to read-only mode using the ALTER TABLESPACE ... READ ONLY command.
d. Ensure that the character set of the source database is compatible with the target database.
e. Generate the Transportable Tablespace set (TTS) metadata using the EXPDP utility.
$ expdp user@src directory=DP dumpfile=tablespace_metadata.dmp logfile=tablespace_metadata.log transport_tablespaces=tablespace_name
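
For step (a) above, the self-contained check can be run with the DBMS_TTS package before the export (tablespace_name is the same placeholder as in the expdp command):

SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK('tablespace_name', TRUE);
SQL> SELECT * FROM transport_set_violations;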


Prepare the target database:
a. Create an empty database on the target platform with the same Oracle version and compatible character set as the source database.
b. Ensure that a tablespace with the same name does not already exist in the target database; the transport operation plugs the tablespace in rather than importing into a pre-created one.
c. Copy the TTS metadata file and the data files to the target platform.

Perform the Transportable Tablespace (TTS) migration:
a. Import the TTS metadata file into the target database using the IMPDP utility.
$ impdp user@trg directory=DATA_PUMP_DIR dumpfile=tablespace_metadata.dmp logfile=tablespace_metadata.log transport_datafiles='/path/to/datafile.dbf'
b. Convert the tablespaces from read-only to read-write mode using the ALTER TABLESPACE ... READ WRITE command.
c. Ensure that all required objects are available and accessible in the target database.
d. Update the dependent objects in the target database to point to the new tablespace.

Perform post-migration tasks:
a. Update statistics on the migrated objects.
b. Test the migrated database thoroughly to ensure that all functionality is working correctly.



Give me the steps for the endian issue.
Here are the steps to perform a cross-platform TTS migration with endian conversion:

Prepare the source database:
a. Convert tablespaces to read-only mode: SQL> ALTER TABLESPACE tablespace_name READ ONLY;
b. Generate the TTS metadata file using EXPDP (Data Pump has no platform parameter; the endian conversion itself is done later with RMAN CONVERT):
$ expdp srcusr directory=DP dumpfile=tbs_metadata.dmp logfile=tbs_metadata.log transport_tablespaces=tbs_name


Prepare the target database:
a. Create an empty database with the same Oracle version and compatible character set as the source database.
b. Ensure that a tablespace with the same name does not already exist in the target database.
c. Copy TTS metadata file to the target platform.
d. Convert the data files to the target platform's endianness using the CONVERT command of the RMAN utility; when the copied files are converted on the target host, the FROM PLATFORM clause names the source platform:
$ rman target /
RMAN> CONVERT DATAFILE '/path/to/copied_datafile.dbf' FROM PLATFORM 'source_platform_name' FORMAT '/path/to/target_datafile.dbf';
Repeat this command for each data file.

Perform the TTS migration:
a. Import TTS metadata file using IMPDP:
$ impdp trguser directory=DP dumpfile=tbs_metadata.dmp logfile=tbs_metadata.log transport_datafiles='/path/to/target_datafile.dbf'
b. Convert tablespaces from read-only to read-write mode:
SQL> ALTER TABLESPACE tablespace_name READ WRITE;
c. Update dependent objects in the target database to point to the new tablespace.

Perform post-migration tasks:
a. Update statistics on the migrated objects.
b. Test the migrated database thoroughly to ensure that all functionality is working correctly.

What are the endian format certifications?

Same Endian: If both the source and target platforms have the same endian format, then cross-platform migration is supported for the same Oracle Database version.

Different Endian: If the source and target platforms have different endian formats, then the following rules apply:

a. Cross-platform migration is supported between platforms with different endian formats for Oracle Database versions 11.2.0.3 and later.

b. Cross-platform migration is not supported between platforms with different endian formats for Oracle Database versions earlier than 11.2.0.3.

c. Cross-platform migration is not supported between platforms with different endian formats for Oracle RAC databases.

d. Cross-platform migration is not supported between platforms with different endian formats for Oracle ASM.


What are the high-level steps to perform a cross-platform migration with ASM involved?

Convert ASM disk groups to non-ASM disk groups using the RMAN CONVERT command.

Perform the cross-platform migration of the database using either the Transportable Tablespaces (TTS) method or the RMAN method.

After the migration is complete, convert the non-ASM disk groups back to ASM disk groups using the RMAN CONVERT command.


Steps to move data files from ASM disk groups to non-ASM (file system) storage using the RMAN CONVERT command:

Create a backup of the database using RMAN.

Stop the database and ASM instances; create new non-ASM directories, a new pfile, and a new control file.

Stop the non-ASM instance and set the ORACLE_SID and ORACLE_HOME environment variables to the ASM instance.

Use the RMAN CONVERT command to convert the ASM disk groups to non-ASM disk groups. Here is an example command:

RMAN> CONVERT DATAFILE '+DATA/dbname/datafile.001' FORMAT '/u01/app/oracle/oradata/dbname/datafile.001';

Open the database with the RESETLOGS option using the SQL statement "ALTER DATABASE OPEN RESETLOGS;".

How to use the NID (DBNEWID) utility? It is used, for example, when a failed or manual duplicate leaves the copy with the source database's DBID and the clone needs a new identity:

$ sqlplus / as sysdba
SQL> STARTUP MOUNT
SQL> EXIT
$ nid TARGET=SYS DBNAME=newdb
(update db_name in the parameter file and recreate the password file if the name changed)
$ sqlplus / as sysdba
SQL> STARTUP MOUNT
SQL> ALTER DATABASE OPEN RESETLOGS;




In RAC there are two ways to patch: opatchauto, and OPatch napply, which is the traditional method.

Old method:
==========
Download and install the latest version of OPatch, do the cluster health check, and validate patch dependencies.

Stop the DB on node 1.
Unlock CRS on node 1 to apply the patch: $GRID_HOME/crs/install/rootcrs.pl -prepatch
Apply the OCW patch on the Grid home.
Apply the DB PSU patch on the Grid home.
Apply the OCW PSU patch on the RDBMS home.
Apply the DB PSU patch on the RDBMS home and run 'postpatch': $GRID_HOME/crs/install/rootcrs.pl -postpatch

Now do the same on node 2.
Run datapatch.


Opatchauto:
=======
Install the latest version of OPatch and perform a complete cluster health check using TFA/orachk.
Verify and resolve patch conflicts: opatch prereq CheckConflictAgainstOHWithDetail -ph /u01/app/backup/29834717
Relocate services to the other nodes; to patch a home manually instead, stop the DB and run: opatch apply -oh /u01/app/oracle/product/19.0.0/dbhome_1 -local
Analyze first:
# opatchauto apply /u01/stage/29698592 -analyze
Run opatchauto to patch node 1:
# export ORACLE_HOME=/u01/grid/product/12.1.0.2/gi
# export PATH=$PATH:$ORACLE_HOME/OPatch:$ORACLE_HOME/bin
# export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
# opatchauto apply /u01/stage/29698592
Repeat the same on node 2.
Verify the patch installation.
Run datapatch for the SQL changes and check DBA_REGISTRY_SQLPATCH.
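
To verify, check both the binary inventory and the SQL patch registry (29698592 is the example patch number from above):

$ opatch lsinventory | grep 29698592
SQL> SELECT patch_id, action, status FROM dba_registry_sqlpatch;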

Relinking the Oracle binaries during RAC patching is an important step to ensure compatibility, incorporate patch changes, improve performance, and keep the RAC cluster consistent.

How to replace a hard drive in an Exadata storage server (predictive failure), with steps:


Step 1: Verify the predictive failure of the drive
cellcli -e "LIST PHYSICALDISK WHERE PREDICTIVEFAILURESTATE = 'YES'"
Step 2: Confirm the drive replacement plan with Oracle Support
Step 3: Identify the replacement hard drive
Step 4: Verify the status of the replacement hard drive
cellcli -e "LIST PHYSICALDISK WHERE STATUS != 'normal'"
Step 5: Put the storage cell in maintenance mode
Find the cell name: dcli -l root -g cell_group cellcli -e "list cell detail"
cellcli -e "ALTER CELL <cell name> SHUTDOWN SERVICES"
cellcli -e "ALTER CELL <cell name> ENTER MAINTENANCE MODE"
Step 6: Remove the failed hard drive
cellcli -e "ALTER PHYSICALDISK <disk name> OFFLINE"
Step 7: Replace the failed hard drive with the replacement drive
Step 8: Power on the replacement hard drive
cellcli -e "ALTER PHYSICALDISK <disk name> ONLINE"
Step 9: Verify the replacement hard drive
cellcli -e "LIST PHYSICALDISK WHERE NAME = '<disk name>'"
Step 10: Exit maintenance mode and start the storage server services
cellcli -e "ALTER CELL <cell name> EXIT MAINTENANCE MODE"
cellcli -e "ALTER CELL <cell name> START SERVICES"
Replace <cell name> with the name of the affected storage cell.
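
Before physically pulling a drive, it is also common to confirm that ASM can tolerate the corresponding grid disks going offline:

cellcli -e "LIST GRIDDISK ATTRIBUTES name, asmmodestatus, asmdeactivationoutcome"

Proceed only when asmdeactivationoutcome shows Yes for the affected grid disks.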





