Thursday 8 October 2015

Oracle Golden Gate (Oracle GG)

   
Oracle Golden Gate (Oracle GG)
=========================
********* RAFFEEK **********
*******************************

1. GoldenGate is used to extract and replicate data.

2. GoldenGate is similar to Oracle Data Guard, but it replicates only particular schemas or tables, whereas Data Guard replicates the full database.

3. It supports a variety of topologies such as unidirectional, bi-directional, peer-to-peer, broadcast, consolidation and cascading.

4. It supports the exchange and manipulation of data at the transactional level between database platforms like Oracle, DB2, SQL Server, MySQL, etc.


Advantages of GG
===============
Business Continuity
High Availability
Data Warehousing and Reporting
Data migrations and upgrades
Data integration and consolidation

Basic Understanding of GoldenGate
==========================

Source Database --> MANAGER-->EXTRACT-->Local trail file ***||*** Remote trail file --> REPLICAT--> MANAGER --> Target Database


Background Processes:
=================

Manager

The Manager process must be running on both the source and target systems before the Extract or Replicat processes can be started. It performs a number of functions, including monitoring and starting other GoldenGate processes, managing the trail files and reporting.
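
As a rough illustration (the port number, AUTOSTART setting and purge rule below are assumptions, not taken from these notes), a minimal Manager parameter file (edit params mgr in GGSCI) could look like this:

PORT 7809                                                   --> TCP/IP port the Manager listens on
AUTOSTART ER *                                              --> start all Extract/Replicat groups when the Manager starts
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3  --> purge trail files that are already processed, keeping at least 3 days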

Extract

The Extract process runs on the source system and is the data capture mechanism of GoldenGate. It can be configured both for initial loading of the source data and for synchronizing changed data on the source with the target. It can also be configured to propagate any DDL changes on those databases where DDL change support is available.
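
As a minimal sketch (the group name EXTR, the ggadmin user and the scott.emp table are assumptions for illustration), an Extract parameter file might contain:

EXTRACT EXTR
USERID ggadmin, PASSWORD ggadmin      --> database login used to capture changes
EXTTRAIL ./dirdat/lt                  --> local trail the captured changes are written to
TABLE scott.emp;                      --> table(s) to capture; wildcards such as scott.* are also allowed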

Replicat

The Replicat process runs on the target system, reads transactional data changes as well as DDL changes, and replicates them to the target database. Like the Extract process, the Replicat process can also be configured for Initial Load as well as Change Synchronization.
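
The corresponding Replicat parameter file, again as an assumed illustration (group name REP, ggadmin user, scott.emp table), might look like:

REPLICAT REP
USERID ggadmin, PASSWORD ggadmin      --> database login on the target
ASSUMETARGETDEFS                      --> source and target table definitions are identical
MAP scott.emp, TARGET scott.emp;      --> source-to-target table mapping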

Collector

The Collector is a background process which runs on the target system and is started automatically by the Manager (Dynamic Collector), or it can be configured to start manually (Static Collector). It receives extracted data changes that are sent via TCP/IP and writes them to the trail files, from where they are processed by the Replicat process.

Trails

Trails are a series of files that GoldenGate temporarily stores on disk; these files are written to and read from by the Extract and Replicat processes as the case may be. Depending on the configuration chosen, trail files can exist on the source as well as on the target system. A trail on the local system is known as an Extract Trail (local trail), and a trail on the target system is known as a Remote Trail.

Data Pumps

Data Pumps are secondary Extract mechanisms which exist in the source configuration. This is an optional component; if a Data Pump is not used, the Extract sends data via TCP/IP to the remote trail on the target. When a Data Pump is configured, the primary Extract process writes to the local trail, the Data Pump reads this trail, and the data is sent over the network to remote trails on the target system.

In the absence of a Data Pump, the data that the Extract process extracts resides in memory alone and is not stored anywhere on the source system, so a network or target failure could cause the primary Extract process to abort or abend. A Data Pump is also useful when we are doing complex filtering and transformation of data, or when we are consolidating data from many sources to a central target.
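
As a hedged sketch (the group names DPUMP/EXTR, the trail prefixes and the target host are assumptions), a Data Pump is registered from GGSCI and pointed at the target roughly like this:

ggsci> ADD EXTRACT DPUMP, EXTTRAILSOURCE ./dirdat/lt   --> secondary Extract that reads the local trail
ggsci> ADD RMTTRAIL ./dirdat/rt, EXTRACT DPUMP         --> remote trail on the target that it writes to

Its parameter file then names the target system, for example:

EXTRACT DPUMP
PASSTHRU                              --> no filtering/transformation, so no database login is needed
RMTHOST targethost, MGRPORT 7809      --> target server and its Manager port (illustrative values)
RMTTRAIL ./dirdat/rt
TABLE scott.*;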

Data source

When processing transactional data changes, the Extract process can obtain data directly from the database transaction logs (Oracle, DB2, SQL Server, MySQL etc) or from a GoldenGate Vendor Access Module (VAM) where the database vendor (for example Teradata) will provide the required components that will be used by Extract to extract the data changes.

Groups

To differentiate between the number of different Extract and Replicat groups which can potentially co-exist on a system, we can define processing groups. For instance, if we want to replicate different sets of data in parallel, we can create two Replicat groups.

A processing group consists of a process (either an Extract or a Replicat process), a corresponding parameter file, a checkpoint file or checkpoint table (for Replicat), and any other files associated with the process.
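
As a minimal sketch of how such groups are registered (the group names EXTR/REP, the trail prefixes and the checkpoint table ggadmin.chkpt are assumptions; a DBLOGIN is required before adding the checkpoint table):

Source:
ggsci> ADD EXTRACT EXTR, TRANLOG, BEGIN NOW            --> capture from the database transaction logs
ggsci> ADD EXTTRAIL ./dirdat/lt, EXTRACT EXTR          --> local trail written by EXTR

Target:
ggsci> DBLOGIN USERID ggadmin, PASSWORD ggadmin
ggsci> ADD CHECKPOINTTABLE ggadmin.chkpt               --> checkpoint table used by Replicat
ggsci> ADD REPLICAT REP, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ggadmin.chkpt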



How to check the status of Oracle Golden Gate:
=================================

Unidirectional:
===========

Source DATABASE:
./ggsci
 ggsci> info all
Program     Status      Group       Lag       Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EXTR      00:00:00      00:00:00


Target DATABASE:

./ggsci
 ggsci> info all
Program     Status      Group       Lag       Time Since Chkpt
MANAGER     RUNNING
REPLICAT    RUNNING     REP       00:00:00      00:00:00

Note: The status should always be RUNNING and the Lag should be 00:00:00; this means there is no gap and GoldenGate is working fine.

Bi-directional:
===============
Source DATABASE:
./ggsci
 ggsci> info all
Program     Status      Group       Lag       Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EXTR      00:00:00      00:00:00
REPLICAT    RUNNING     REP       00:00:00      00:00:00


Target DATABASE:
./ggsci
 ggsci> info all
Program     Status      Group       Lag       Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EXTR      00:00:00      00:00:00
REPLICAT    RUNNING     REP       00:00:00      00:00:00


Note: The status should always be RUNNING and the Lag should be 00:00:00; this means there is no gap and GoldenGate is working fine.


How to start and stop golden gate services
===========================================

./ggsci
 ggsci> start mgr   --> starts the Manager; the EXTRACT and REPLICAT groups start automatically if AUTOSTART is set in the Manager parameters

 ggsci> stop er *   --> stops all EXTRACT and REPLICAT groups
 ggsci> stop mgr    --> stops the Manager (it does not stop the EXTRACT/REPLICAT groups by itself)


How to start a particular EXTRACT process
=====================================================
./ggsci
 ggsci> info all
Program     Status      Group       Lag       Time Since Chkpt
MANAGER     RUNNING
EXTRACT     ABENDED     EXTR      00:02:55      00:00:50
REPLICAT    RUNNING     REP       00:00:00      00:00:01

ggsci>start EXTR   --> mention the group name of the Extract to start



ggsci>view report EXTR   --> to check for errors, similar to the Oracle alert log.
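
A few other GGSCI commands that help when a process abends (the group name EXTR is assumed):

ggsci> info extract EXTR, detail    --> detailed status, checkpoints and trail positions
ggsci> lag extract EXTR             --> current lag of the Extract group
ggsci> stats extract EXTR           --> per-table operation counts
ggsci> view ggsevt                  --> GoldenGate event/error log (ggserr.log)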




GGSCI > help

GGSCI Command    Summary
SUBDIRS          CREATE SUBDIRS
ER               INFO ER, KILL ER, LAG ER, SEND ER, STATUS ER,
                 START ER, STATS ER, STOP ER
EXTRACT          ADD, ALTER, CLEANUP, DELETE, INFO, KILL,
                 LAG, SEND, START, STATS, STATUS, STOP
EXTTRAIL         ADD, ALTER, DELETE, INFO
GGSEVT           VIEW
MANAGER          INFO, REFRESH, SEND, START, STOP, STATUS
MARKER           INFO
PARAMS           EDIT, VIEW
REPLICAT         ADD, ALTER, CLEANUP, DELETE, INFO, KILL,
                 LAG, SEND, START, STATS, STATUS, STOP
REPORT           VIEW
RMTTRAIL         ADD, ALTER, DELETE, INFO
TRACETABLE       ADD, DELETE, INFO
TRANDATA         ADD, DELETE, INFO
Database         DBLOGIN, LIST TABLES,
                 ENCRYPT PASSWORD
DDL              DUMPDDL
CHECKPOINTTABLE  ADD CHECKPOINTTABLE, DELETE CHECKPOINTTABLE,
                 CLEANUP CHECKPOINTTABLE, INFO CHECKPOINTTABLE
Miscellaneous    FC, HELP, HISTORY, INFO ALL, INFO MARKER, OBEY,
                 SET, SHELL, SHOW, VERSIONS

Sunday 17 August 2014

Oracle Dataguard Switchover

Oracle Dataguard Switchover steps

1. Initiate the switchover on the primary database PRIM:
SQL>connect / as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role.
Open another prompt and connect to SQLPLUS:
SQL>connect / as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing command in step 2, shut down and restart the former primary instance PRIM:
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g Release 1, you will have to shut down and restart the new primary database STAN.
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new Primary database STAN:
SQL>ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM.
SQL>ALTER SYSTEM SWITCH LOGFILE;
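
As a quick sanity check, the roles can be verified on both sides (standard v$database columns):

SQL> select name, database_role, switchover_status from v$database;

On STAN this should now report PRIMARY, and on PRIM it should report PHYSICAL STANDBY.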

ORACLE DATA GUARD (Disaster Recovery)

                               
                        ********************************
                                   DATA GUARD
                        ********************************                                 
                                

DATA GUARD:
Data Guard is the name of Oracle's standby database solution, used for disaster recovery and high availability. A standby database is essentially a consistent copy of the primary database that is kept synchronized with it.

Types of standby databases:
a. Physical Standby Database
b. Logical Standby Database

A.PHYSICAL STANDBY DATABASE CREATION
====================================
Primary database : " prim1 "
Standby database : " stan1 "

Step  1: Make sure that the OS and architecture on both the primary and standby servers are the same.

Step  2: Install the same Oracle software version on both the primary and standby servers.

Step  3: Check whether the primary database is in archivelog mode (mandatory).

        To Check : archive log list;
    If it is not enabled, then:
     >shut immediate
     >startup mount
     >alter database archivelog;
     >alter database open;
      
  
Step  4: Enable force logging on the database:

           > alter database force logging;

Step  5: Check whether the database has a password file:
        ( If it does not exist, create a password file; the SYS password must be the same on both sides )

          > select * from v$pwfile_users;
    
Step  6: Create three standby redo log files on the primary side:
         
          >alter database add standby logfile group 4 '/u01/app/oracle/oradata/prim1/redo04.log' size 50m;
          >alter database add standby logfile group 5 '/u01/app/oracle/oradata/prim1/redo05.log' size 50m;
          >alter database add standby logfile group 6 '/u01/app/oracle/oradata/prim1/redo06.log' size 50m;
      
        Check the standby logfile : select * from v$standby_log;

Step  7: create pfile:
      
        >create pfile from spfile;
       
        >shut immediate;

    >exit

Step  8: Edit the pfile and add parameters on the primary:

         cd /u01/app/oracle/product/10.2.0/db_1/dbs

    ls -lrt

    vi initprim1.ora
      
        Add the following parameters at the end of the pfile (primary):
      
     db_unique_name=prim1
         LOG_ARCHIVE_CONFIG='DG_CONFIG=(prim1,stan1)'
            LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prim1'
     LOG_ARCHIVE_DEST_2='SERVICE=stan1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stan1'
     LOG_ARCHIVE_DEST_STATE_1=ENABLE
     LOG_ARCHIVE_DEST_STATE_2=ENABLE
     LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
     LOG_ARCHIVE_MAX_PROCESSES=30
     remote_login_passwordfile='EXCLUSIVE'
     FAL_SERVER=stan1
     FAL_CLIENT=prim1
     STANDBY_FILE_MANAGEMENT=AUTO
     DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/stan1','/u01/app/oracle/oradata/prim1'
     LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/stan1','/u01/app/oracle/oradata/prim1'

Step  9: Start the primary database in nomount mode using the pfile.
        
         >sqlplus "/as sysdba"

         >startup nomount pfile='/u01/app/oracle/product/10.2.0/db_1/dbs/initprim1.ora';


Step 10: Create the spfile from the pfile and shut down the primary database:
       
      >create spfile from pfile='/u01/app/oracle/product/10.2.0/db_1/dbs/initprim1.ora';

     >Shut immediate;

Step 11: Create the necessary directories for the standby database:
       
         cd /u01/app/oracle/admin   
     mkdir stan1
         cd /u01/app/oracle/admin/stan1
     mkdir adump bdump cdump udump

         cd /u01/app/oracle/oradata  
     mkdir stan1


Step 12: Copy the datafiles and redo log files from the primary to the standby location:

        cd /u01/app/oracle/oradata/prim1
       
         $cp *.dbf *.log /u01/app/oracle/oradata/stan1


Step 13: Start the primary database and create the standby controlfile:
   
    >sqlplus "/as sysdba"
   
    > Startup mount
      
        >alter database create standby controlfile as '/u01/app/oracle/oradata/stan1/control01.ctl';

        > alter database open;

Step 14: Create Password file for standby database:
       
         $cd /u01/app/oracle/product/10.2.0/db_1/dbs
  
         cp orapwprim1 orapwstan1
   
Step 15: Check the Listener

    ps -ef|grep tns
       (or)
    $lsnrctl status

    If it is down, start the listener:

    $lsnrctl start
       
Step 16: Check the TNS entries

    tnsping <Database name>

    If it is not working, edit tnsnames.ora and add the TNS entries for both databases:

         $vi /u01/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora


Step 17: Copy the pfile from the primary to the standby database:

         $cd  /u01/app/oracle/product/10.2.0/db_1/dbs
         $cp initprim1.ora initstan1.ora
       
       Edit the pfile for the standby: replace prim1 with stan1 (in vi, :%s/prim1/stan1/g), except for the db_name parameter, because db_name must be the same on the primary and the standby. Then add the parameters below without any change.

vi initstan1.ora

    db_name='prim1'
    db_unique_name=stan1
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(prim1,stan1)'
    LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stan1'
    LOG_ARCHIVE_DEST_2='SERVICE=prim1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prim1'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
    LOG_ARCHIVE_MAX_PROCESSES=30
    FAL_SERVER=prim1
    FAL_CLIENT=stan1
    remote_login_passwordfile='EXCLUSIVE'
    DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/prim1','/u01/app/oracle/oradata/stan1'
    LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/prim1','/u01/app/oracle/oradata/stan1'
    STANDBY_FILE_MANAGEMENT=AUTO

     
  NOTE: Set the correct standby controlfile location in the control_files parameter (since only one controlfile was created for the standby, remove the second and third controlfile entries from the standby pfile).

Step 18: Edit oratab, start the standby database in nomount mode and create the spfile:
     

       edit the oratab for standby
       vi /etc/oratab
          
      . oraenv
       stan1  
    sqlplus "/as sysdba"
 
         startup nomount pfile='/u01/app/oracle/product/10.2.0/db_1/dbs/initstan1.ora';
      
         >create spfile from pfile='/u01/app/oracle/product/10.2.0/db_1/dbs/initstan1.ora';

         > shut immediate

         > startup mount

Step 19: Start MRP process in Standby database Side:
 
        stan1:
    ======
         >alter database recover managed standby database disconnect from session;

      
Step 20: Check that the archive log sequences are applied on the standby database:

          prim1:
         =======
         > archive log list
         >alter system switch logfile;   - run it a few times

        stan1:
    ======

         >select sequence#,applied from v$archived_log order by sequence#;
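
Two other standard views that are commonly checked on the standby at this point:

         >select process, status, sequence# from v$managed_standby;   --> MRP0 typically shows APPLYING_LOG or WAIT_FOR_LOG
         >select * from v$archive_gap;                                --> should return no rows when there is no gap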


**************************** END**************************        

Tuesday 17 December 2013

RMAN Cloning - only 4 steps


Database name: cts
Clone database name: clone

1. Connect to the cts database and take an RMAN full backup (the database must be open).

. oraenv
cts
rman
rman>connect target /

rman>backup database plus archivelog;

exit

2. Create directory structure for clone database.

2.1 cd /u01/app/oracle/admin
   
mkdir clone
cd clone

mkdir adump bdump cdump udump pfile


2.2.cd /u01/app/oracle/oradata

mkdir clone

2.3. pfile creation steps

 cd /u01/app/oracle/admin

ls -lrt

cd cts/pfile

ls -lrt

Note: if the init.ora file name is different, just rename the file (mv oldname init.ora)

cp init.ora /u01/app/oracle/admin/clone/pfile

cd /u01/app/oracle/admin/clone/pfile

ls -lrt

vi init.ora

press "i" to edit

change the database name, the audit/background/core/user dump locations, the controlfile locations and the dispatchers name

add below parameters:
=====================

DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/cts/','/u01/app/oracle/oradata/clone/'
LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/cts/','/u01/app/oracle/oradata/clone/'


esc+shift+:+wq  --> to save
or esc+shift+:%s/cts/clone  --> to replace all occurrences of cts with clone


2.4. password file creation steps

cd /u01/app/oracle/product/10.2.0/db_1/dbs

pwd

orapwd file='/u01/app/oracle/product/10.2.0/db_1/dbs/orapwclone' password=sys entries=5

ls -lrt

Note: Make sure that the CTS database also has a password file.


2.5. edit the oratab entry

 vi /etc/oratab

Press "i" to edit

clone:/u01/app/oracle/product/10.2.0/db_1:N

esc+shift+:+wq  --> to save


2.6. . oraenv  --> Set the oracle environment

? clone

  (or)

export ORACLE_SID=clone
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1

echo $ORACLE_SID   --> just to check the database name
echo $ORACLE_HOME  --> just to check the oracle home path

sqlplus "/as sysdba"

2.7.SQL> startup nomount pfile='/u01/app/oracle/admin/clone/pfile/init.ora';

SQL>create spfile from pfile='/u01/app/oracle/admin/clone/pfile/init.ora';

SQL>shut immediate

SQL>startup nomount

3. Start the listener and check the TNS entries

To check the listener status:

ps -ef|grep tns

lsnrctl status LISTENER

If it is down, start the listener:

lsnrctl start 

to check TNS entries

tnsping cts
tnsping clone

if not pingable

cd /u01/app/oracle/product/10.2.0/db_1/network/admin/

ls -lrt
vi tnsnames.ora

Put the entries for both the cts and clone databases.


4. create clone database.

. oraenv
clone

rman
rman>connect target sys/sys@cts
rman>connect auxiliary /
rman>duplicate target database to clone;
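
A quick check once the duplicate completes (illustrative):

sqlplus "/as sysdba"
SQL> select name, open_mode from v$database;   --> should show CLONE and READ WRITE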

*********** end ***********





Thursday 29 August 2013

RMAN (Recovery Manager) by Raffeek

 ===============================
            ORACLE DATABASE BACKUPS
        ===============================
        *********** RAFFEEK ***********
        ===============================
     

1. Logical backup (exp/imp --> 9i , expdp/impdp --> 10g)
2. Physical backup

2.A) User Managed backup (Cold and Hot backup)
2.B) Server Managed backup (RMAN)

2.B) RMAN - Recovery Manager (RMAN) is a command-line utility used to perform backup and recovery. RMAN provides greater ease of management and administration of backup and recovery operations.


* V$RMAN_OUTPUT - An in-memory view of the messages reported by RMAN, holding a maximum of 32767 rows. Since this information is not recorded in the controlfile, it is lost on instance restart.

* V$RMAN_STATUS - Displays progress and status information for in-progress and completed RMAN jobs. The information for in-progress jobs is in memory only, while the completed job information comes from the controlfile.

* V$BACKUP_FILES - Displays information about RMAN image copies, backup sets and archived logs, similar to the information listed by the RMAN commands LIST BACKUP and LIST COPY. This view relies on the DBMS_RCVMAN.SETDATABASE procedure being run to set the database.


Note: Database must be open.

RMAN will take online Archive log as one Backupset(BS)

Then it will take the actual datafiles

Then it will take controlfile and spfile

Then it will take actual archive log files


* Set oracle environment

. oraenv
<cts>

rman

RMAN>connect target /

RMAN> show all   --> displays the RMAN configuration parameters (defaults are marked with # default)

RMAN configuration parameters are:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default 
--> only the last (one) backup is considered needed since redundancy is 1; older backups become obsolete.

CONFIGURE BACKUP OPTIMIZATION OFF; # default
--> if it is ON, a datafile is not backed up again when there are no changes in its blocks since the last backup.

CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default 
--> backups will go to disk (TAPE can also be specified)

CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default 
--> if it is ON, the controlfile is backed up automatically after every backup, even if we back up only a single datafile.

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default  -->controlfile backup will go to disk.

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
--> we can increase the parallelism, but the server should have enough CPUs to support it.

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
--> only one copy of each datafile backup is produced when we take a database backup.

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
--> only one copy of each archivelog backup is produced when we back up the archivelogs.

CONFIGURE MAXSETSIZE TO UNLIMITED; # default 
--> limits the maximum size of a backup set.

CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default

CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default 
--> controls when archivelogs become eligible for deletion; archivelogs can also be removed with DELETE INPUT or until time 'sysdate-1'.

CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/10.2.0/db_1/dbs/snapcf_citi.f'; # default




> How to take full backup?


RMAN>BACKUP DATABASE PLUS ARCHIVELOG; 

* It will take four backup sets (current archivelog file, datafiles, controlfile and spfile, actual archivelog files); each backup set contains one or more backup pieces.

> How to take incremental backup?

RMAN>BACKUP INCREMENTAL LEVEL 1 DATABASE;


> How to check the backup details?

RMAN>list backup;

RMAN>list backup summary;

>How to validate a backup?

RMAN>validate backupset 2;
    (or)
RMAN>validate backupset 2,3,4;


> How to check backup of datafiles?

RMAN>list backup of database;

> How to check backup of archivelog?

RMAN> list backup of archivelog all;

>How to check backup of controlfiles?

RMAN> list backup of controlfile;


>What is meant by obsolete and expired?

* Obsolete -> a backup that is no longer needed according to the retention policy
* Expired -> a backup piece that is not found at the OS level (physically deleted)

RMAN>report obsolete;
RMAN>list expired backup;
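
To actually clean these up (standard RMAN commands; run them only when the backups are genuinely not needed):

RMAN>delete obsolete;          --> removes backups no longer needed per the retention policy
RMAN>delete expired backup;    --> removes the records of backup pieces already missing at the OS level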

>Types of RMAN backup?

* L0 (level 0) backup or full backup
* L1 (level 1) or incremental backup (it backs up only the blocks changed since the last backup) - see the sketch below
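
As a small sketch of the two types (standard RMAN syntax):

RMAN>BACKUP INCREMENTAL LEVEL 0 DATABASE;   --> base backup that level 1 backups build on
RMAN>BACKUP INCREMENTAL LEVEL 1 DATABASE;   --> backs up only the blocks changed since the last level 0/1 backup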

>Can we give the database commands in RMAN prompt?

sample commands:

RMAN>startup
RMAN>startup mount
RMAN>shutdown immediate
RMAN>alter database open;

>What is cross check?

 If any archivelog file has been physically deleted, the archivelog backup will fail; running the command below marks the missing files as expired so the backup can proceed.

RMAN>crosscheck archivelog all;

>Full database backup command?

Example for DISK BACKUP:
=======================

. oraenv
<DB NAME>

rman target /
configure default device type to disk;
configure controlfile autobackup on;
configure channel device type disk format '<Disk_Location>/df_%t_%s_%p';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '<Disk_Location>/snapcf_oid.f';
run
{
backup current controlfile;
backup database plus archivelog;
}


Example for TAPE BACKUP:
========================
Before taking a backup to tape, we need to configure the Tivoli Storage Manager (TSM) client or some other media management software to connect to the tape library.

. oraenv
<DB NAME>
rman
RMAN>connect target;

run {
      allocate channel t1 type 'sbt_tape'
      parms 'ENV=(TDPO_OPTFILE=/u001/app/TDP/dbname/tdpo.opt)'
      format '%d_inc0_%U.rbck';
      backup incremental level 0 database tag OPEN_INC0_DAY;
      release channel t1;
}

sql 'alter system archive log current';
change archivelog all crosscheck;

run {
      allocate channel a1 type 'sbt_tape'
      parms 'ENV=(TDPO_OPTFILE=/u001/app/TDP/dbname/tdpo.opt)'
      format '%d_arch_%U.rbck';
      backup archivelog all not backed up 3 times tag ARCHIVE_Backup ;
      delete archivelog until time = 'sysdate-7';
      release channel a1;
    }

run {
      allocate channel c1 type 'sbt_tape'
      parms 'ENV=(TDPO_OPTFILE=/u001/app/TDP/dbname/tdpo.opt)'
      format '%d_controlfile_%U.rbck';
      backup  current controlfile tag CONTROLFILE_DAY;
      release channel c1;
    }

exit


Friday 3 May 2013

Oracle Datapump ( expdp ) / Import ( impdp ) With Full DB and Schema Refresh Steps

 ORACLE DATABASE BACKUPS:

      Backup and recovery is one of the most important aspects of a DBA's job. If you lose your company's data, you could very well lose your job. Hardware and software can always be replaced, but your data may be irreplaceable!

1. Logical backup (exp/imp --> 9i , expdp/impdp --> 10g)
2. Physical backup

2.A) User Managed backup (Cold and Hot backup)
2.B) Server Managed backup (RMAN)

1. Logical backup (Data Pump expdp/impdp)
We can take a Data Pump export (expdp) at four levels.

1A. Full database Level expdp/impdp
1B. Schema(User) Level expdp/impdp
1C. Table Level expdp/impdp
1D. Tablespace Level expdp/impdp

Note: Do the export in the source (ctsp) database and the import in the target (ctsd) database.

      The database must be open, and the directory object must be created in both the source and target databases.

Source Database:

SQL> create directory expdp_dir1 as '/u01/app/oracle/';
SQL> grant read,write on directory expdp_dir1 to system;

SQL> select * from dba_directories;

Target Database:

SQL> create directory expdp_dir1 as '/u01/app/oracle/';

SQL> grant read,write on directory expdp_dir1 to system;

SQL> select * from dba_directories;


1A. Full database Level expdp/impdp
=====================================

expdp system/manager directory=expdp_dir1 dumpfile=ctsfull.dmp logfile=ctsfull.log full=y

impdp system/manager directory=expdp_dir1 dumpfile=ctsfull.dmp logfile=ctsfull_impdp.log full=y


1B. Schema(User) Level expdp/impdp
====================================
expdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=ctsuser.log schemas=raja

impdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=ctsuser_impdp.log remap_schema=raja:raja


Note: RAJA is a user in the database

1C. Table Level expdp/impdp
=============================
expdp system/manager directory=expdp_dir1 dumpfile=ctstab.dmp logfile=ctstab.log tables=raja.emp


impdp system/manager directory=expdp_dir1 dumpfile=ctstab.dmp logfile=ctstab_impdp.log remap_schema=raja:raja tables=emp
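
1D. Tablespace Level expdp/impdp
================================
As an illustrative sketch (the USERS tablespace and the dump/log file names are assumptions):

expdp system/manager directory=expdp_dir1 dumpfile=ctstbs.dmp logfile=ctstbs.log tablespaces=users

impdp system/manager directory=expdp_dir1 dumpfile=ctstbs.dmp logfile=ctstbs_impdp.log full=y

Note: full=y on the import simply loads everything contained in the tablespace-mode dump file.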


DB Refresh steps:
=================

1. take export(expdp) of source database(ctsp).

expdp system/manager directory=expdp_dir1 dumpfile=ctsfull.dmp logfile=ctsfull.log full=y


2. Move the dumpfile to the target database (ctsd) server using scp. If both databases are running on the same server, this step is not required.

scp ctsfull.dmp oracle@<Target server IP>:/u02/app/oracle


3. Drop the application users, but do not drop the database default users like sys, system, dbsnmp, xdb.

If you run the command below it will show the username and created date; the database default users are usually all created on the same day.

select username,created from dba_users;

drop user raja cascade;


4. Before doing the import (impdp), check the used space in the source database and the free space in the target database. The tablespace names should be the same in the source and target databases, and each target tablespace should be at least as large as the corresponding source tablespace.

Source:

select tablespace_name,sum(bytes/1024/1024) from dba_segments group by tablespace_name;

Target:

select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by tablespace_name;

5. Start the import (impdp) in the target database.


impdp system/manager directory=expdp_dir1 dumpfile=ctsfull.dmp logfile=ctsfull_impdp.log full=y


6. Once completed, compare the object counts between the source and target databases.

select object_type,count(*) from dba_objects where status='VALID' group by object_type;

=============================== end ===========================



Schema Refresh steps:
=====================

1. Take an export (expdp) of the schema in the source database (ctsp).

expdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=ctsuser.log schemas=raja

Note: Don't put a semicolon (;) at the end of the above command.

2. Move the dumpfile to the target database (ctsd) server using scp. If both databases are running on the same server, this step is not required.

scp ctsuser.dmp oracle@<Target server IP>:/u02/app/oracle


3. Create the new user in the target database (if it already exists, drop and recreate it).

select username from dba_users;

drop user raja cascade;

create user raja identified by raja;


4. Before doing the import (impdp), check the used space of the schema (user) in the source database and the free space in the target database; the target tablespaces should have at least as much free space as the schema uses in the source.

Source:

select tablespace_name,sum(bytes/1024/1024) from dba_segments where owner='RAJA' group by tablespace_name;

Target:

select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by tablespace_name;


5. Start the import (impdp) in the target database.


impdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=ctsuser_impdp.log remap_schema=raja:raja


6. once completed compare the object counts between source and target databases.

select object_type,count(*) from dba_objects where owner='RAJA' and status='VALID' group by object_type;

Thursday 2 May 2013

Oracle Export ( exp ) / Import ( imp ) With Full DB and Schema Refresh Steps

    
ORACLE DATABASE BACKUPS:

Backup and recovery is one of the most important aspects of a DBA's job. If you lose your company's data, you could very well lose your job. Hardware and software can always be replaced, but your data may be irreplaceable!


Types of Backup:

1. Logical backup (exp/imp --> 9i , expdp/impdp --> 10g)
2. Physical backup

2.A) User Managed backup (Cold and Hot backup)
2.B) Server Managed backup (RMAN - Recovery Manager)


1. Logical backup (exp/imp)
====================

We can take an export (exp) at four levels.

1A. Full database Level export/import
1B. Schema(User) Level export/import
1C. Table Level export/import
1D. Tablespace Level export/import


Note: Do the export in Source(ctsp) database and import in target(ctsd) database.

      The database must be open.


1A. Full database Level export/import
=====================================

exp system/manager file='/u01/app/oracle/ctsfull.dmp' log='/u01/app/oracle/ctsfull.log' full=y

imp system/manager file='/u01/app/oracle/ctsfull.dmp' log='/u01/app/oracle/ctsfull_imp.log' full=y



1B. Schema(User) Level export/import
====================================
exp system/manager file='/u01/app/oracle/ctsuser.dmp' log='/u01/app/oracle/ctsuser.log' owner=raja

imp system/manager file='/u01/app/oracle/ctsuser.dmp' log='/u01/app/oracle/ctsuser_imp.log' fromuser=raja touser=raja

Note: RAJA is a user in the database


1C. Table Level export/import
=============================


exp system/manager file='/u01/app/oracle/ctstab.dmp' log='/u01/app/oracle/ctstab.log' tables=raja.emp


imp system/manager file='/u01/app/oracle/ctstab.dmp' log='/u01/app/oracle/ctstab_imp.log' fromuser=raja touser=raja tables=emp
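
1D. Tablespace Level export/import
==================================
As an illustrative sketch (the USERS tablespace and the dump/log file names are assumptions):

exp system/manager file='/u01/app/oracle/ctstbs.dmp' log='/u01/app/oracle/ctstbs.log' tablespaces=users

imp system/manager file='/u01/app/oracle/ctstbs.dmp' log='/u01/app/oracle/ctstbs_imp.log' full=y

Note: full=y on the import simply loads everything contained in the tablespace-mode dump file.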


DB Refresh steps:
=================

1. take export of source database(ctsp).

exp system/manager file='/u01/app/oracle/ctsfull.dmp' log='/u01/app/oracle/ctsfull.log' full=y


2. Move the dumpfile to the target database (ctsd) server using scp. If both databases are running on the same server, this step is not required.

scp /u01/app/oracle/ctsfull.dmp oracle@<Target server IP>:/u02/app/oracle


3. Drop the application users, but do not drop the database default users like sys, system, dbsnmp, xdb.

If you run the command below it will show the username and created date; the database default users are usually all created on the same day.

select username,created from dba_users;

drop user raja cascade;


4. Before doing the import, check the used space in the source database and the free space in the target database. The tablespace names should be the same in the source and target databases, and each target tablespace should be at least as large as the corresponding source tablespace.

Source:

select tablespace_name,sum(bytes/1024/1024) from dba_segments group by tablespace_name;

Target:

select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by tablespace_name;

5. Start the import in the target database.


imp system/manager file='/u01/app/oracle/ctsfull.dmp' log='/u01/app/oracle/ctsfull_imp.log' full=y


6. Once completed, compare the object counts between the source and target databases.

select object_type,count(*) from dba_objects where status='VALID' group by object_type;

=============================== end ===========================



Schema Refresh steps:
=====================

1. take export of a schema in source database(ctsp).

exp system/manager file='/u01/app/oracle/ctsuser.dmp' log='/u01/app/oracle/ctsuser.log' owner=raja

Note: Don't put a semicolon (;) at the end of the above command.

2. Move the dumpfile to the target database (ctsd) server using scp. If both databases are running on the same server, this step is not required.

scp /u01/app/oracle/ctsuser.dmp oracle@<Target server IP>:/u02/app/oracle


3. Create the new user in the target database (if it already exists, drop and recreate it).

select username from dba_users;

drop user raja cascade;

create user raja identified by raja;


4. Before doing the import, check the used space of the schema (user) in the source database and the free space in the target database; the target tablespaces should have at least as much free space as the schema uses in the source.

Source:

select tablespace_name,sum(bytes/1024/1024) from dba_segments where owner='RAJA' group by tablespace_name;

Target:

select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by tablespace_name;


5. Start the import in the target database.


imp system/manager file='/u01/app/oracle/ctsuser.dmp' log='/u01/app/oracle/ctsuser_imp.log' fromuser=raja touser=raja


6. Once completed, compare the object counts between the source and target databases.

select object_type,count(*) from dba_objects where owner='RAJA' and status='VALID' group by object_type;