Tuesday, May 24, 2016

Create database baseline using DBMS_WORKLOAD_REPOSITORY

Run the following statement to create a baseline between 2 snapshots.

EXECUTE DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (start_snap_id => <start_snap_id>, end_snap_id => <end_snap_id>, baseline_name => 'oltp_peakload_bl');
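To find the snapshot IDs to pass in, DBA_HIST_SNAPSHOT can be queried for the time range of interest. This is a generic sketch; the date range below is only an example and should be adjusted to your peak period:

```sql
-- List the snapshots covering the peak window; use the boundary snap_ids
select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
where begin_interval_time between timestamp '2016-05-23 07:00:00'
                              and timestamp '2016-05-23 12:00:00'
order by snap_id;
```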

Monday, May 23, 2016

Query to view plan hash value over time for a sql id

This query uses the AWR history tables, so it requires the Diagnostics Pack license.

select snap_id,begin_interval_time,end_interval_time,sql_id,plan_hash_value,parsing_schema_name
from dba_hist_sqlstat
join dba_hist_snapshot
using (snap_id,instance_number)
where sql_id='&sql_id'
and parsing_schema_name='&schema'
order by snap_id desc;

It will return something like this:



SNAP_ID BEGIN_INTERVAL_TIME            END_INTERVAL_TIME              SQL_ID        PLAN_HASH_VALUE SCHEMA   
------- ------------------------------ ------------------------------ ------------- --------------- ----------
   6219 5/19/2016 7:30:05.359 AM       5/19/2016 8:00:10.533 AM       cu2zgt871kzsv      2820428760 NICO     
   6219 5/19/2016 7:30:05.359 AM       5/19/2016 8:00:10.533 AM       cu2zgt871kzsv      4084779347 NICO     
   6218 5/19/2016 7:00:00.935 AM       5/19/2016 7:30:05.359 AM       cu2zgt871kzsv      4084779347 NICO     
   6183 5/18/2016 1:30:20.839 PM       5/18/2016 2:00:25.344 PM       cu2zgt871kzsv      4084779347 NICO     
   6182 5/18/2016 1:00:16.638 PM       5/18/2016 1:30:20.839 PM       cu2zgt871kzsv      4084779347 NICO     
   6181 5/18/2016 12:30:12.766 PM      5/18/2016 1:00:16.638 PM       cu2zgt871kzsv      4084779347 NICO     
   5669 5/7/2016 8:30:06.532 PM        5/7/2016 9:00:21.666 PM        cu2zgt871kzsv      2820428760 NICO     
   5329 4/30/2016 6:30:27.380 PM       4/30/2016 7:00:32.868 PM       cu2zgt871kzsv      2820428760 NICO     
   5284 4/29/2016 8:00:34.962 PM       4/29/2016 8:30:41.139 PM       cu2zgt871kzsv      2820428760 NICO     
   5258 4/29/2016 7:00:28.038 AM       4/29/2016 7:30:34.104 AM       cu2zgt871kzsv      2820428760 NICO     
   4996 4/23/2016 8:00:34.871 PM       4/23/2016 8:30:42.130 PM       cu2zgt871kzsv      2820428760 NICO     
   2382 2/29/2016 8:00:19.203 AM       2/29/2016 8:30:25.207 AM       cu2zgt871kzsv      1221363908 NICO     
   2381 2/29/2016 7:30:15.311 AM       2/29/2016 8:00:19.203 AM       cu2zgt871kzsv      1221363908 NICO     
   2380 2/29/2016 7:00:10.910 AM       2/29/2016 7:30:15.311 AM       cu2zgt871kzsv      1221363908 NICO     
   2199 2/25/2016 12:30:23.410 PM      2/25/2016 1:00:26.904 PM       cu2zgt871kzsv       661146938 NICO     
   1912 2/19/2016 12:54:50.237 PM      2/19/2016 1:30:16.031 PM       cu2zgt871kzsv      2820428760 NICO

So now I can see that the plan has changed in the last few days.
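To compare the plans themselves, each stored plan for the sql_id can be pulled out of AWR with DBMS_XPLAN.DISPLAY_AWR (a sketch using the sql_id from the output above; this also requires the Diagnostics Pack license):

```sql
-- Show every execution plan captured in AWR for this sql_id
select plan_table_output
from table(dbms_xplan.display_awr('cu2zgt871kzsv'));
```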

Friday, May 20, 2016

Cluster wait events "gc buffer busy release", "gc buffer busy acquire"

Whenever we had a large batch insert or a web service hitting our database, we would see a very high level of cluster waits.

That nice grey section in the performance chart is the cluster waits.
Drilling down further, I can see that they are the "gc buffer busy release" and "gc buffer busy acquire" wait events.

Let's have a look at the prime contender for this wait event:
This query was an insert into a table called audit_row.
Being an audit table, almost every table has a before-insert and a before-update trigger inserting into it. The trigger uses a sequence to populate the primary key.

The hot object however was an index on that table.
For every new row in the table, the sequence provides the next value, always incremented by 1, so the insert always lands in the last block on the right-hand side of the index.
When that block is full, the index splits it into two: the previous block keeps all rows minus one, and the new last block holds that single row with the rest of the block empty.

In RAC, with 3 instances all trying to write to the same block, each instance will acquire a lock on the block, perform its operation, then release it, allowing the other instances to perform their inserts.

But why is this happening?
Sequences have a default cache value of 20. In a single instance that's not too big a deal, although a larger cache can improve performance a bit.

In RAC, each instance grabs the next 20 values and caches them, but they all still insert into the same index block.

Increasing the cache is useless if the option "ORDER" is used for the sequence.

Solution:
Ensure that any sequence used to generate a primary key has a large cache value, for example CACHE 10000. Also ensure that NOORDER, the default, is used.
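A minimal sketch of the fix (the sequence name is made up for illustration):

```sql
-- Enlarge the cache and make sure NOORDER is in effect
alter sequence audit_row_seq cache 10000 noorder;
```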


Thursday, May 19, 2016

How to prevent a PL/SQL cursor loop from failing?

The PL/SQL language uses the following format:

DECLARE
<variable and cursor declaration>
BEGIN
<Write code here>
EXCEPTION
<optional>

END;
/

But what happens if we use a cursor loop and one of the records causes an iteration to fail?

Exception handlers live inside a BEGIN ... END block, so we add an inner block to the loop.

DECLARE

vMyVarchar varchar2(100);
vMynumber number(2) := 10;

cursor cMyCursor is
 select somecolumn
 from sometables
 where someconditions = 'something';

BEGIN
   OPEN cMyCursor;
   LOOP
      FETCH cMyCursor into vMyVarchar;
      EXIT WHEN cMyCursor%NOTFOUND;

      BEGIN
         --Let's pretend that vMyVarchar is an executable statement
         execute immediate vMyVarchar;
      EXCEPTION WHEN OTHERS THEN
         --Log the failing statement and carry on with the next record
         dbms_output.put_line(vMyVarchar);
      END;
   END LOOP;
   CLOSE cMyCursor;
END;
/

Wednesday, May 18, 2016

How do I change the start time and duration for Automatic Optimizer Statistics Collection

Last time we saw how long the Automatic Optimizer Statistics Collection job was running.
Now let's change it to match our environment better.

The default job starts at 10pm and ends at 2am. A lot of shops have reports starting at midnight, so this is not the most idle time to run statistics.

In our environment we have identified a period from 6pm to midnight as the best period to run stats. Bonus: We have gained an extra 2 hours for it.

The default window to run Automatic Optimizer Statistics Collection is from 10pm until 2am during week days and 20 hours starting at 6am Saturdays and Sundays.

If the task takes the whole default 4 hours and its status is marked "STOPPED", it may not have had enough time to gather all the necessary statistics.


--This will set the time at 6pm (18:00) with "byhour=18"
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('MONDAY_WINDOW',   'repeat_interval','freq=daily;byday=MON;byhour=18;byminute=0; bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('TUESDAY_WINDOW',  'repeat_interval','freq=daily;byday=TUE;byhour=18;byminute=0; bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('WEDNESDAY_WINDOW','repeat_interval','freq=daily;byday=WED;byhour=18;byminute=0; bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('THURSDAY_WINDOW', 'repeat_interval','freq=daily;byday=THU;byhour=18;byminute=0; bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('FRIDAY_WINDOW',   'repeat_interval','freq=daily;byday=FRI;byhour=18;byminute=0; bysecond=0');
--This will change the duration of the job to 6 hours long.
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('MONDAY_WINDOW',   'duration',interval '6' hour);
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('TUESDAY_WINDOW',  'duration',interval '6' hour);
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('WEDNESDAY_WINDOW','duration',interval '6' hour);
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('THURSDAY_WINDOW', 'duration',interval '6' hour);
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('FRIDAY_WINDOW',   'duration',interval '6' hour);
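To confirm the change, the window definitions can be checked afterwards:

```sql
-- Verify the start time and duration of the weekday maintenance windows
select window_name, repeat_interval, duration
from dba_scheduler_windows
where window_name like '%DAY_WINDOW';
```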

Tuesday, May 17, 2016

How long does my auto optimizer stats collection job take to run?

This query helps identify how long the automatic stats collection job takes to run.

The default maintenance window is from 10pm until 2am.

col client_name for a40
col job_name for a30
col job_status for a10
col job_duration for a50
set linesize 200
select client_name,JOB_NAME,JOB_STATUS,JOB_START_TIME,JOB_DURATION 
from DBA_AUTOTASK_JOB_HISTORY 
where JOB_START_TIME >systimestamp -7 and client_name='auto optimizer stats collection';

CLIENT_NAME                              JOB_NAME                       JOB_STATUS JOB_START_TIME                      JOB_DURATION                                      
---------------------------------------- ------------------------------ ---------- ----------------------------------- --------------------------------------------------
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2522          STOPPED    5/12/2016 10:00:11.870810 PM -07:00 +00 03:59:51.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2502          STOPPED    5/11/2016 10:00:12.471104 PM -07:00 +00 03:59:50.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2482          STOPPED    5/10/2016 10:00:12.256956 PM -07:00 +00 03:59:49.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2462          STOPPED    5/9/2016 10:00:12.331670 PM -07:00  +00 04:00:03.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2442          STOPPED    5/8/2016 6:07:27.878272 PM -07:00   +00 07:52:40.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2423          SUCCEEDED  5/8/2016 7:10:14.887914 AM -07:00   +00 06:58:59.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2422          STOPPED    5/7/2016 10:08:38.445532 PM -07:00  +00 03:51:30.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2405          SUCCEEDED  5/7/2016 6:00:43.423975 AM -07:00   +00 12:44:10.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2402          STOPPED    5/6/2016 10:00:08.874159 PM -07:00  +00 03:59:59.000000

Monday, May 16, 2016

How to run SQL Tune for a poorly performing SQL query

Disclaimer

It's always best to work on the underlying query rather than apply a profile to it. A profile affects the same query across all schemas and may reduce performance for another customer.
A profile, when applied, should be considered a short-term solution to an immediate customer problem.
A CR must be opened requesting changes to the query that required the profile, after the profile is applied, even if the problem seems to be resolved.

Obtain the query's sql_id

select sql_id, sql_text
from v$sql
where sql_text like 'select * from sometable%';

Run a tuning task

Replace <sql_id> with the sql_id obtained from the previous query.
A tuning task ID will be obtained from this step.

set serveroutput on
declare
stmt_task VARCHAR2(40);
begin
stmt_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '<sql_id>');
DBMS_OUTPUT.put_line('task_id: ' || stmt_task );
end;
/

Execute the tuning task 

Replace <tuning_task_id> with the task id obtained from the previous step.

begin
DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => '<tuning_task_id>');
end;
/
This step takes some time, depending on the query's complexity and the number of objects to analyze.
It's possible to verify that the task is running with the following query, in another session.
Replace <task_id> with the task id used in the previous step:

SELECT TASK_NAME, STATUS FROM DBA_ADVISOR_LOG WHERE TASK_NAME = '<task_id>';

Read the recommendations

set pages 0
set linesize 1000
SELECT DBMS_SQLTUNE.report_tuning_task('TASK_107340') AS recommendations FROM dual;
The tuning task may provide a new execution plan. Compare the current and the new plan. The report also shows the estimated improvement from switching to the new profile. If the benefit is large and can alleviate a current problem for a customer, apply the profile.

Applying a profile

begin
DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'TASK_69287', task_owner => 'SYSTEM', replace => TRUE);
end;
/
That command is provided inside the recommendation report.
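If the profile later needs review or removal, it can be listed and dropped. This is a sketch; replace <profile_name> with the name shown in DBA_SQL_PROFILES:

```sql
-- List accepted SQL profiles
select name, status, created from dba_sql_profiles;

-- Remove a profile that is no longer wanted
exec dbms_sqltune.drop_sql_profile(name => '<profile_name>');
```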

Friday, May 13, 2016

How to obtain all logs within a period of time from the RAC environment

While investigating an event, or to provide information to Oracle Support, a number of logs must be gathered.

Oracle made this easy with tfactl. Say an ORA-4031 occurred at 10pm and an investigation is performed with Oracle Support's help the next day.

Run the following command with a range of one hour before and after the problem occurred:
export ORACLE_HOME=/u01/app/12.1.0/grid
export ORACLE_SID=+ASM1
cd $ORACLE_HOME/bin
tfactl diagcollect -from "May/11/2016 21:00:00" -to "May/11/2016 23:00:00"
This will collect all the cluster logs, alert logs, database logs, listener logs, SCAN listener logs and so on, and pack them nicely in a zip file. All that's left is to have a look. I have solved many issues just by running this command and looking at the content produced.

Saturday, May 7, 2016

What is the dual table?

In Oracle, it's not possible to type a select statement without a from clause, so Oracle came up with the "dual" table, which is an actual table.

For instance, getting the system date with the function "sysdate" will fail as follows:
SQL> select sysdate;
select sysdate
             *
ERROR at line 1:
ORA-00923: FROM keyword not found where expected

Typing desc dual will show the following:
SQL> desc dual;
Name         Null?    Type
----------- -------- --------------------
DUMMY                 VARCHAR2(1)

It has precisely one column and one row with a value of 'X':

SQL> col dummy for a5
SQL> select * from dual;

DUMMY
-----
X

With the dual table as a placeholder, we can obtain our sysdate:

SQL> select sysdate from dual;
SYSDATE
---------
06-MAY-16

Friday, May 6, 2016

Insert into a table if not exists

In order to make a SQL statement repeatable, you may want to avoid inserting the same value twice and prevent a primary key constraint error.

The MERGE INTO command, which allows inserts and updates, can also be used for insert only.

merge into table1 t1
    using (select 'value1' as value1, 'value2' as value2
           from dual) t2
    on (t1.value1 = t2.value1)
when not matched then
   insert values (t2.value1, t2.value2);

Thursday, May 5, 2016

Gathering Statistics In Oracle - Part 1

If a system is completely new, these are the statistics which should be gathered:

  • Dictionary Statistics / SYS schema Statistics - They are the same thing.
    exec dbms_stats.gather_dictionary_stats;
  • Fixed Objects Statistics
    exec dbms_stats.gather_fixed_objects_stats;
  • System Statistics - Do not mix this up with SYS schema statistics
    • First run an IO calibration. CALIBRATE_IO has OUT parameters, so it must be called from a PL/SQL block:
      declare
        l_max_iops pls_integer;
        l_max_mbps pls_integer;
        l_actual_latency pls_integer;
      begin
        dbms_resource_manager.calibrate_io(
          num_physical_disks => 2,   -- adjust to your storage
          max_latency        => 20,
          max_iops           => l_max_iops,
          max_mbps           => l_max_mbps,
          actual_latency     => l_actual_latency);
      end;
      /
      • You will find the results in the following views:
        • V$IO_CALIBRATION_STATUS
        • DBA_RSRC_IO_CALIBRATE
    • Then run the system statistics:
      exec dbms_stats.gather_system_stats(gathering_mode=>'interval', interval=>60);
      • The interval is in minutes. The stored procedure will run in the background for 60 minutes.
It's important to run all three when there's a realistic load on the system. However, system statistics have a system-wide impact, so if the system is already in production, a test should be performed in an identical staging environment first. If a staging environment is not available, back up the system stats first and get very familiar with how to restore them.
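A sketch of backing up and restoring system statistics (the statistics table name SYSSTATS_BACKUP is made up for illustration):

```sql
-- Create a stats table and export the current system statistics into it
exec dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'SYSSTATS_BACKUP');
exec dbms_stats.export_system_stats(stattab => 'SYSSTATS_BACKUP', statown => 'SYSTEM');

-- If things go wrong, import them back
exec dbms_stats.import_system_stats(stattab => 'SYSSTATS_BACKUP', statown => 'SYSTEM');

-- Alternatively, restore from history to a point in time
exec dbms_stats.restore_system_stats(as_of_timestamp => systimestamp - 1);
```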

Some storage hardware vendors with storage tiering and dynamic movement of blocks between tiers tend to throw off the IO calibration. I have experienced terrible IO calibration and system statistics with EMC VMAX FAST technology, and I haven't found a good way to make Oracle and EMC agree on disk IO.

Wednesday, May 4, 2016

In Oracle, how to obtain statistics preferences for all tables

The Oracle package dbms_stats provides a get_prefs function to retrieve statistics preferences, for one given parameter at a time:

  1. At DB level: 
    select dbms_stats.get_prefs('METHOD_OPT', null,null)  from dual;
  2. At schema level: 
    select dbms_stats.get_prefs('METHOD_OPT', 'MYSCHEMA',null)  from dual;
  3. At table level: 
    select dbms_stats.get_prefs('METHOD_OPT', 'MYSCHEMA','MYTABLE')  from dual;
Naturally, that's a very cumbersome method if a parameter needs to be verified for every table.

The following PL/SQL will display the statistics preferences for all tables in the database by default, or for a specific table if one is given. It's easy to modify it to insert the results into a table instead, or to turn it into a full-blown stored procedure.

DBA privilege is required.

declare
 p_owner     varchar2(30) := 'SOMESCHEMA';
 p_tablename varchar2(30) := null;
 type t_pref_list is table of varchar2(30);
 v_prefs t_pref_list := t_pref_list('METHOD_OPT','ESTIMATE_PERCENT','DEGREE','CASCADE','NO_INVALIDATE');
 v_result varchar2(100);
begin
    dbms_output.enable(null);  -- unlimited output buffer
    for t in (select table_name
              from dba_tables
              where owner = p_owner
              and (p_tablename is null or table_name = p_tablename))
    loop
        for i in 1 .. v_prefs.count loop
            v_result := dbms_stats.get_prefs(v_prefs(i), p_owner, t.table_name);
            dbms_output.put_line(p_owner||' - '||t.table_name||' - '||v_prefs(i)||' - '||v_result);
        end loop;
    end loop;
end;
/
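Once an outlier preference is found, it can be changed at table level with dbms_stats.set_table_prefs (the schema, table, and value below are made-up examples):

```sql
-- Override METHOD_OPT for a single table
exec dbms_stats.set_table_prefs('SOMESCHEMA', 'SOMETABLE', 'METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO');
```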