This third instalment covering the flashback features in Oracle 10g looks at combining flashback with a physical standby. I will cover two usage scenarios that I think could offer significant benefit to anyone using Oracle Database Flashback with a physical standby. This post covers using the physical standby as a read/write copy of a production database.
Using a physical standby as part of a Disaster Recovery solution can offer a comprehensive level of protection for a number of failures.
Implementing a physical standby can, however, require significant investment in hardware and software licenses. Using the standby database to support non-production development and testing can therefore provide a greater return on that investment.
When reviewing Oracle Database physical standby options, a common question is whether the infrastructure hosting the standby database can be “used”. By combining Oracle Database Flashback and Oracle Data Guard, a very robust solution can be provisioned that enables exactly that.
Enabling the Physical Standby Oracle Database
The steps to do this are as follows (for those using the Data Guard Broker the steps differ slightly; I will cover them in a later post):
Prepare the physical standby
- Cancel the media recovery process.
- Create a restore point at the point in time the standby needs to be restored to.
Prepare the primary
- Archive the current redo log.
- Disable redo transport.
Activate the physical standby
- Activate the standby database, converting it to a primary. This allows the physical standby to actually be opened for read/write access.
- Mount the database.
- Set the protection level to maximum performance.
- Open the database.
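For reference, the whole activation sequence can be collected into a single pass. This is a sketch only: it assumes the standby instance is already mounted, and that LOG_ARCHIVE_DEST_1 on the primary is the destination shipping redo to this standby (adjust to match your configuration).

```sql
-- On the physical standby: stop media recovery and create a
-- guaranteed restore point to flash back to later.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
CREATE RESTORE POINT before_ps_open GUARANTEE FLASHBACK DATABASE;

-- On the primary: archive the current redo log and stop transport.
-- LOG_ARCHIVE_DEST_1 is assumed to point at this standby.
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=DEFER;

-- Back on the standby: activate, remount and open read/write.
ALTER DATABASE ACTIVATE STANDBY DATABASE;
STARTUP MOUNT FORCE;
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
ALTER DATABASE OPEN;
```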
Running through the above, this is what we see on the physical standby (assuming the physical standby is already running):
Physical Standby Database Preparation Tasks
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Database altered.

SQL> create restore point before_ps_open guarantee flashback database;

Restore point created.
The database restore point is created and changes are no longer being applied to the standby.
Primary Database Preparation Tasks
SQL> alter system archive log current;

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=DEFER;

System altered.

SQL>
Physical Standby Open Tasks
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;

Database altered.

SQL> STARTUP MOUNT FORCE;
ORACLE instance started.

Total System Global Area  482344960 bytes
Fixed Size                  2073664 bytes
Variable Size             339741632 bytes
Database Buffers          134217728 bytes
Redo Buffers                6311936 bytes
Database mounted.
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

Database altered.

SQL> ALTER DATABASE OPEN;

Database altered.

SQL> select open_mode, database_role from v$database;

OPEN_MODE  DATABASE_ROLE
---------- ----------------
READ WRITE PRIMARY
Once the database is open it will be current as of the time the last changes were applied. Running the following query against the physical standby will show how current the database is:
SQL> select scn_to_timestamp(STANDBY_BECAME_PRIMARY_SCN) from v$database;

SCN_TO_TIMESTAMP(STANDBY_BECAME_PRIMARY_SCN)
---------------------------------------------------------------------------
23/JAN/10 10:21:37.000000000 PM
Physical Standby Database Use Cases
The following types of processes could make very good use of this “copy” of the production database and remove expensive workload from the production infrastructure:
Database Statistics Collection
If the workload on the primary database leaves insufficient resources for statistics collection, the physical standby could be used to run the dbms_stats (fnd_stats for those in an E-Business Suite environment) programs required to collect statistics. The only workload then required on the primary database would be the dbms_stats calls to import the updated statistics.
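As a sketch of how the statistics round trip might work (the schema, owner and statistics table names here are purely illustrative, and a data pump or database link step is assumed to move the statistics table between databases):

```sql
-- On the opened standby: gather statistics, then export them
-- to a user statistics table rather than leaving them only in
-- the standby's dictionary.
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APPS');
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'STANDBY_STATS');
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APPS', stattab => 'STANDBY_STATS', statown => 'SCOTT');

-- Move SCOTT.STANDBY_STATS to the primary (e.g. via data pump),
-- then on the primary import the statistics into the dictionary.
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APPS', stattab => 'STANDBY_STATS', statown => 'SCOTT');
```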
ETL and Interface Processing

Large interface jobs and ETL processing can consume a significant amount of resources on a database. Using the physical standby to process ETL and/or interface jobs could move a significant amount of workload from the primary database to the physical standby infrastructure. Using a combination of partitioning, transportable tablespaces and data pump, the “processed” data could easily be moved back to the primary database and made available to the application with very little overhead.
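The move back to the primary via transportable tablespaces might look roughly like this. A sketch only: the tablespace, directory and file names are illustrative, and both databases are assumed to share the same platform endianness (otherwise an RMAN conversion step is needed).

```sql
-- On the standby, after the ETL load: make the tablespace
-- read only and export its metadata with data pump.
ALTER TABLESPACE etl_data READ ONLY;
-- $ expdp system directory=dp_dir dumpfile=etl_data.dmp \
--       transport_tablespaces=ETL_DATA

-- Copy the datafile(s) and dump file to the primary host,
-- then plug the tablespace into the primary.
-- $ impdp system directory=dp_dir dumpfile=etl_data.dmp \
--       transport_datafiles='/oradata/PROD/etl_data01.dbf'
ALTER TABLESPACE etl_data READ WRITE;
```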
Reporting

Application reports that view data over a historical window ending before the standby was activated could be run against the physical standby.
Performance Tuning

Having an exact copy of the production database for tuning efforts can be of immense value. This particularly applies where it is not feasible to take a clone of the production database quickly.
Disaster Recovery Testing

This technique allows the database to be opened for DR testing and then reverted to its original state on completion of testing. For DR tests that involve “dummy” transactions this is an ideal way to thoroughly test the DR process without having to fail over to the DR site.
Physical Standby Database Rollback
Once the jobs that use the physical standby have completed, the standby needs to be re-synchronised with the production database:
- Startup mount (this puts the database in a position to allow the flashback to begin).
- Flashback to restore point created prior to the opening of the standby database.
- Convert the physical standby control file from a primary mount back to a standby mount.
- Startup mount the physical standby.
- Begin log apply to start the re-synchronisation.
- Enable log shipping on the primary.
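The rollback steps can again be collected into one pass. The restore point name matches the one created before the standby was opened, and LOG_ARCHIVE_DEST_1 on the primary is again assumed to be the destination for this standby.

```sql
-- On the used standby: mount, flash back to the restore point,
-- convert back to a physical standby, remount with the standby
-- control file and resume managed recovery.
STARTUP MOUNT FORCE;
FLASHBACK DATABASE TO RESTORE POINT before_ps_open;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
STARTUP MOUNT FORCE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

-- On the primary: re-enable redo transport and switch a log
-- file to kick off shipping.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE;
ALTER SYSTEM SWITCH LOGFILE;
```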
Verify that the physical standby database has been used since it was opened:
SQL> select current_scn, standby_became_primary_scn from v$database;

CURRENT_SCN STANDBY_BECAME_PRIMARY_SCN
----------- --------------------------
     704158                     703033
- Startup mount the database to enable the database to be flashed back to the restore point:
SQL> STARTUP MOUNT FORCE;
ORACLE instance started.

Total System Global Area  482344960 bytes
Fixed Size                  2073664 bytes
Variable Size             339741632 bytes
Database Buffers          134217728 bytes
Redo Buffers                6311936 bytes
Database mounted.
SQL>
- Check that the restore point is available and perform the flashback:
SQL> select name, time, scn from v$restore_point;

NAME                 TIME                                            SCN
-------------------- ---------------------------------------- ----------
BEFORE_PS_OPEN       23/AUG/07 10:30:41.000000000 PM              703034

SQL> flashback database to restore point before_ps_open;

Flashback complete.
- Convert the physical standby control file from primary mount to standby mount and restart the database with the standby control file mounted:
SQL> alter database convert to physical standby;

Database altered.

SQL> startup mount force;
ORACLE instance started.

Total System Global Area  482344960 bytes
Fixed Size                  2073664 bytes
Variable Size             339741632 bytes
Database Buffers          134217728 bytes
Redo Buffers                6311936 bytes
Database mounted.
SQL>
- Enable log apply on the physical standby:
SQL> alter database recover managed standby database disconnect;

Database altered.

SQL>
Alert log from the physical standby:
Physical Standby Database mounted.
Completed: ALTER DATABASE MOUNT
Thu Aug 23 22:59:34 2007
alter database recover managed standby database disconnect
Thu Aug 23 22:59:34 2007
Attempt to start background Managed Standby Recovery process (DEV)
MRP0 started with pid=20, OS id=6036
Thu Aug 23 22:59:34 2007
MRP0: Background Managed Standby Recovery process started (DEV)
Managed Standby Recovery not using Real Time Apply
parallel recovery started with 2 processes
Clearing online redo logfile 1 /oradata/DEV/DEVDR/onlinelog/o1_mf_1_3dqv92kb_.log
Clearing online log 1 of thread 1 sequence number 1
Deleted Oracle managed file /oradata/DEV/DEVDR/onlinelog/o1_mf_1_3dqv92kb_.log
Deleted Oracle managed file /oradata/flash_recovery_area/DEVDR/onlinelog/o1_mf_1_3dqv92ps_.log
db_recovery_file_dest_size of 2048 MB is 75.39% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Thu Aug 23 22:59:40 2007
Completed: alter database recover managed standby database disconnect
- Enable log transport on the primary database:
SQL> alter system set log_archive_dest_state_1=enable;

System altered.

SQL> alter system switch logfile;

System altered.
Alert log from the primary database:
ALTER SYSTEM SET log_archive_dest_state_1='ENABLE' SCOPE=BOTH;
Thu Aug 23 23:02:36 2007
Destination LOG_ARCHIVE_DEST_1 is UNSYNCHRONIZED
******************************************************************
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_1
******************************************************************
LNSa started with pid=16, OS id=6206
Thu Aug 23 23:02:39 2007
Destination LOG_ARCHIVE_DEST_1 is SYNCHRONIZED
LGWR: Standby redo logfile selected to archive thread 1 sequence 34
LGWR: Standby redo logfile selected for thread 1 sequence 34 for destination LOG_ARCHIVE_DEST_1
Thread 1 advanced to log sequence 34
  Current log# 3 seq# 34 mem# 0: /oradata/DEV/DEV/onlinelog/o1_mf_3_3dh6r6cl_.log
  Current log# 3 seq# 34 mem# 1: /oradata/flash_recovery_area/DEV/onlinelog/o1_mf_3_3dh6rc6y_.log
Thu Aug 23 23:02:49 2007
ARC1: LGWR is actively archiving destination LOG_ARCHIVE_DEST_1
ARC1: Standby redo logfile selected for thread 1 sequence 33 for destination LOG_ARCHIVE_DEST_1
Taking a look at the status of log apply on the physical standby we see:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS from v$managed_standby;

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CONNECTED             0          0          0          0
ARCH      CLOSING               1         33       6145       1398
MRP0      WAIT_FOR_LOG          1         34          0          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                  1         34        281          5
RFS       IDLE                  0          0          0          0

6 rows selected.
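To confirm the standby has fully caught up again, a common check is to compare the highest archived log sequence received against the highest applied, per thread (a sketch, run on the standby):

```sql
-- Highest log sequence received per thread.
SELECT thread#, MAX(sequence#) AS last_received
FROM   v$archived_log
GROUP  BY thread#;

-- Highest log sequence applied per thread; when the two match,
-- the standby is re-synchronised with the primary.
SELECT thread#, MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES'
GROUP  BY thread#;
```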
This exercise demonstrates that it is relatively straightforward to use a physical standby for a number of useful activities to reduce workload on the primary server.
This example is a very simple case, and it must be stated that whilst the physical standby is being used in this way it is not immediately available for switchover or failover should a disaster occur at the primary site.