The next requirement for which I found a very good use for Oracle Flashback Query came about as part of a tuning exercise.
I was working through an issue where an application used a table to temporarily store data before presenting it to the client program. It was a classic case of incorrect statistics on the table segment producing poor SQL execution plans. Even with the automatic statistics collection features in Oracle Database 10g the problem would not go away. Essentially the application performed the following across multiple sessions:
- Populate table with data.
- Query data to present to user.
- Delete data from table.
The problem stemmed from the scheduled automatic DBMS_STATS job running at 10pm, when there was no user activity on the system. As a result, statistics were gathered while the table contained no rows. This became a problem during periods of user activity, as the optimiser favoured full table scans when it should have been using an index. The fix was to gather statistics on the table and indexes while the application was in use.
The only difficulty was that, at the point the issue needed to be resolved, it was not possible to reproduce the user activity to populate the table again. Any statistics gathered without that activity would be worthless, as the table would contain no data.
With Oracle Flashback Query, however, we could view the data in the table as it was at the time the user activity was occurring. I knew that at a certain point in time we had an accurate picture of the normal content of the table – we just had to produce a copy of the table as of that point in time.
Using Oracle Flashback Query to Resolve SQL Execution Plan Problems
This is how we solved this problem:
- Created a table as a copy of the table we needed to collect stats on, as of the point in time we needed:
CREATE TABLE EMP_FB AS SELECT * FROM EMP AS OF TIMESTAMP TO_TIMESTAMP('02-AUG-2007 09:30:00', 'DD-MON-RRRR HH24:MI:SS');
- Created the same indexes on the copied table as exist on the base table:
Use DBMS_METADATA.GET_DDL to obtain the index definitions from the source table.
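A query along these lines (a sketch, using the SCOTT/EMP example names from this article) will list the DDL for every index on the source table:

```sql
-- Retrieve the CREATE INDEX statement for each index on SCOTT.EMP,
-- so the same indexes can be created against the flashback copy.
SELECT DBMS_METADATA.GET_DDL('INDEX', index_name, 'SCOTT')
FROM   dba_indexes
WHERE  owner      = 'SCOTT'
AND    table_name = 'EMP';
```

Edit the returned statements to point at the copied table (EMP_FB) and, ideally, give the new indexes distinct names before running them.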
- Gather stats on the table and indexes using DBMS_STATS:
dbms_stats.gather_table_stats( ownname => 'SCOTT', tabname => 'EMP_FB', estimate_percent => 10, cascade => TRUE);
- Export the stats to a stat table from the table that contains the flashback data:
dbms_stats.create_stat_table( ownname => 'SCOTT', stattab => 'FB_TAB_STATS');
dbms_stats.export_table_stats( ownname => 'SCOTT', tabname => 'EMP_FB', stattab => 'FB_TAB_STATS', statid => 'EXP_FB_TAB', cascade => TRUE);
- Update the table and index names in the stat table to match the objects we want to import the stats for, as the names of the objects I created differed from the targets. The C1 column holds the object name, so restrict the update to the table's rows, and apply the same treatment to the rows for any indexes whose names differ:
update FB_TAB_STATS set C1 = 'EMP' where statid = 'EXP_FB_TAB' and C1 = 'EMP_FB';
- Import the collected stats against the table and indexes that we want to set the stats for:
dbms_stats.import_table_stats( ownname => 'SCOTT', tabname => 'EMP', stattab => 'FB_TAB_STATS', statid => 'EXP_FB_TAB', cascade => TRUE);
- Check that the stats imported match what is expected.
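As a quick sanity check (a sketch using standard dictionary views; the figures you expect depend on your own data), the imported statistics on the target can be compared side by side with those gathered on the flashback copy:

```sql
-- Table-level stats: the EMP row should now mirror the EMP_FB row.
SELECT table_name, num_rows, blocks, last_analyzed
FROM   dba_tables
WHERE  owner = 'SCOTT'
AND    table_name IN ('EMP', 'EMP_FB');

-- Index-level stats for both tables, for the same comparison.
SELECT index_name, table_name, num_rows, distinct_keys, last_analyzed
FROM   dba_indexes
WHERE  owner = 'SCOTT'
AND    table_name IN ('EMP', 'EMP_FB');
```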
Once this was completed I re-ran my tests, and the previously inefficient execution plan was corrected to one that was workable given the application design.
The technique worked exactly as expected – another example of the practical benefits of the Oracle Flashback Query feature.