Tuesday, August 16, 2016

PL/SQL to generate passwords

set serveroutput on
DECLARE
   TYPE tCharList IS VARRAY(26) OF CHAR(1);
   TYPE tNumberList IS VARRAY(10) OF CHAR(1);
   vCharLower tCharList;
   vCharUpper tCharList;
   vCharNumber tNumberList;
   vRandomNumber INTEGER;
   vRandomList INTEGER;
   vPassword VARCHAR2(16);
BEGIN
   vCharLower := tCharList('a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z');
   vCharUpper := tCharList('A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z');
   vCharNumber := tNumberList('0','1','2','3','4','5','6','7','8','9');

   --Generate 10 passwords of 16 characters each.
   FOR j IN 1 .. 10 LOOP
      --Start each password with a lowercase letter.
      --trunc(dbms_random.value(1,27)) yields an integer from 1 to 26.
      vRandomNumber := TRUNC(dbms_random.value(1, 27));
      vPassword := vCharLower(vRandomNumber);
      FOR i IN 1 .. 15 LOOP
         --Pick which character list to draw from: 1, 2 or 3.
         vRandomList := TRUNC(dbms_random.value(1, 4));
         CASE vRandomList
         WHEN 1 THEN
            vRandomNumber := TRUNC(dbms_random.value(1, 27));
            vPassword := vPassword || vCharLower(vRandomNumber);
         WHEN 2 THEN
            vRandomNumber := TRUNC(dbms_random.value(1, 27));
            vPassword := vPassword || vCharUpper(vRandomNumber);
         WHEN 3 THEN
            vRandomNumber := TRUNC(dbms_random.value(1, 11));
            vPassword := vPassword || vCharNumber(vRandomNumber);
         END CASE;
      END LOOP;
      dbms_output.put_line(vPassword);
   END LOOP;
END;
/

Tuesday, June 7, 2016

Never,ever use LDAP for the RAC install user and sysdba user

Oracle's recommendation is to have local dba and oinstall user groups.
If there are cluster admins and DBAs, then using a cluster user for the cluster installation is best.
Otherwise, it's possible to use one unique user (generally oracle) and one group (generally DBA) for the cluster and database installation.

In any case, always make sure that the users and groups used for the installation are local.
Check /etc/group: the groups should be listed there.

Failing this, it may be possible to install the RAC cluster and the database the first time, but when the time comes to patch the system and the cluster is restarted, the ASM user may not be able to authenticate and the patch will fail.

I found myself in this very situation in a pre-prod environment because our SAs did not follow our recommendations, and it ended in a 13-hour marathon with a complete reinstall of the clusterware.

The good news is, there was no data loss. The ASM disks were still available and were found when the cluster was back online. We had to add the diskgroups back, which was quite a bit of work.

Just use a local user and groups.

How to read an AWR report - How to generate an AWR report with sqlplus?

Generate AWR report with sqlplus:

Connect to sqlplus and run the following command:
sqlplus  @$ORACLE_HOME/rdbms/admin/awrrpt.sql

Some information about the system will be displayed.

AWR report generation works with snapshot IDs, so the script first prompts for a number of days of snapshots to list. If there was a problem yesterday, enter 2 days to list more snap IDs:

Specify the number of days of snapshots to choose from
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.

Enter value for num_days: 1

Listing the last day's Completed Snapshots

Instance     DB Name        Snap Id    Snap Started    Level
------------ ------------ --------- ------------------ -----
dw02         DW02             29150 03 Jun 2016 00:00      1
                              29151 03 Jun 2016 00:30      1
                              29152 03 Jun 2016 01:00      1
                              29153 03 Jun 2016 01:30      1
                              29154 03 Jun 2016 02:00      1
                              29155 03 Jun 2016 02:30      1
                              29156 03 Jun 2016 03:00      1
                              29157 03 Jun 2016 03:30      1
                              29158 03 Jun 2016 04:00      1
                              29159 03 Jun 2016 04:30      1
                              29160 03 Jun 2016 05:00      1
                              29161 03 Jun 2016 05:30      1
                              29162 03 Jun 2016 06:00      1
                              29163 03 Jun 2016 06:30      1
                              29164 03 Jun 2016 07:00      1
                              29165 03 Jun 2016 07:30      1
                              29166 03 Jun 2016 08:00      1
                              29167 03 Jun 2016 08:30      1
                              29168 03 Jun 2016 09:00      1

Specify the Begin and End Snapshot Ids
Enter value for begin_snap: 29167
Begin Snapshot Id specified: 29167

Enter value for end_snap: 29168
End   Snapshot Id specified: 29168

Finally, the script prompts for the report name. Do specify a full path, otherwise the report is generated in the current working directory, which will be $ORACLE_HOME/rdbms/admin. Additionally, snap IDs quickly become meaningless if there's a need to search for that report again a few days later, so I prefer a file name with a full date, start time and end time.

Specify the Report Name
The default report file name is awrrpt_1_29167_29168.html.  To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name: /home/oracle/awrrpt_20160603_0830_0900.html

Following this, the report is generated and we can look at it.

Monday, June 6, 2016

How to read an AWR report

I want to showcase how to read an AWR report using real-world database issues.
The AWR reports I will display come from a production environment. I can show them because they contain no secret information, with the exception of the host names, which I will blur, and the SQL statements, which I will not show.

Because an AWR report is really long and complete, I will break it down into different posts showcasing:

  • How to generate an AWR report?
    • Using sqlplus
    • Using OEM
  • A first glance at AWR report - What to look for?
  • Identify bottlenecks
  • Memory issues
  • IO issues
  • CPU issues
All problems have many facets. For instance, if a server appears to be suffering from too much IO, is it because the SAN storage has reached its capacity, or because a new workload has caused a significant increase in IO? Or is it because the system is CPU bound?

I will show how to use an AWR report to identify these issues.

Wednesday, June 1, 2016

How to pin a plan to a sql_id with dbms_spm

Important: this package is not RAC aware, so running LOAD_PLANS_FROM_CURSOR_CACHE on the wrong node means the plan won't load. Use gv$sql to identify which node the SQL is running on (column inst_id), and log on to that specific node to complete the procedure below.

This procedure is very useful because it allows associating a better plan with a SQL statement that no longer performs as well in a new version as it did before. The better plan may have been obtained using a hint or, typically after migrating from one version to another, by using alter session set optimizer_features_enable=''.

Identify the sql_id and the bad plan

Every SQL statement is assigned a sql_id, computed with a hash function over the body of the SQL text. This includes formatting and comments.
So two SQL statements can perform the exact same task, and a developer may call them the "same" query, but if one is written with an extra carriage return, it has a different sql_id.

So make sure that the sql_id you are looking for is the one coming from the application.
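One way to see this is through the exact_matching_signature column of v$sql, which is computed after normalizing case and whitespace. A sketch (substitute a real sql_id for &sql_id):

select sql_id, exact_matching_signature, sql_text
from v$sql
where exact_matching_signature in (
      select exact_matching_signature
      from v$sql
      where sql_id = '&sql_id');

If several sql_ids come back with the same signature, the "same" query is being sent to the database with different formatting.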

These views will help with the task:
  • v$sql and gv$sql in RAC.
  • dba_hist_sqltext - if the query can't be found in v$sql.
  • dba_hist_sqlstat - to have some history of executions and plans over time.
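A sketch of such a query against dba_hist_sqlstat (join to dba_hist_snapshot if the interval timestamps are needed):

select snap_id, sql_id, plan_hash_value,
       executions_delta, elapsed_time_delta
from dba_hist_sqlstat
where sql_id = '&sql_id'
order by snap_id desc;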

Load the bad plan

DECLARE
  l_pls PLS_INTEGER;
BEGIN
  l_pls := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
             sql_id          => '',
             plan_hash_value => ,
             fixed           => 'NO',
             enabled         => 'NO');
END;
/
Once loaded, obtain the sql_handle from dba_sql_plan_baselines.
select *
from dba_sql_plan_baselines
order by created desc;

Now associate the good plan (plan_hash_value) with the sql_id/sql_handle for the sql_id obtained above. 

The good plan generally comes from a different source: possibly a developer working on that SQL with hints or alter session statements, or the plan simply changed for the worse and a better plan exists in the AWR views, for example one found by running the query with explain plan.

DECLARE
  l_pls PLS_INTEGER;
BEGIN
  l_pls := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
             sql_id          => '',
             plan_hash_value => 1509555798,
             sql_handle      => '');
END;
/

Tuesday, May 24, 2016

Create database baseline using DBMS_WORKLOAD_REPOSITORY

Run the following statement to create a baseline between 2 snapshots.

EXECUTE DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (start_snap_id => , end_snap_id => , baseline_name => 'oltp_peakload_bl');
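To verify the baseline afterwards, the AWR baseline view can be queried, for example:

select baseline_name, start_snap_id, end_snap_id
from dba_hist_baseline;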

Monday, May 23, 2016

Query to view plan hash value over time for a sql id

This query uses the AWR history tables, so the Diagnostics Pack license is required to use it.

It will return something like this:

select snap_id,begin_interval_time,end_interval_time,sql_id,plan_hash_value,parsing_schema_name
from dba_hist_sqlstat
join dba_hist_snapshot
using (snap_id,instance_number)
where sql_id='&sql_id'
and parsing_schema_name='&schema'
order by snap_id desc;

SNAP_ID BEGIN_INTERVAL_TIME            END_INTERVAL_TIME              SQL_ID        PLAN_HASH_VALUE PARSING_SC
------- ------------------------------ ------------------------------ ------------- --------------- ----------
   6219 5/19/2016 7:30:05.359 AM       5/19/2016 8:00:10.533 AM       cu2zgt871kzsv      2820428760 NICO     
   6219 5/19/2016 7:30:05.359 AM       5/19/2016 8:00:10.533 AM       cu2zgt871kzsv      4084779347 NICO     
   6218 5/19/2016 7:00:00.935 AM       5/19/2016 7:30:05.359 AM       cu2zgt871kzsv      4084779347 NICO     
   6183 5/18/2016 1:30:20.839 PM       5/18/2016 2:00:25.344 PM       cu2zgt871kzsv      4084779347 NICO     
   6182 5/18/2016 1:00:16.638 PM       5/18/2016 1:30:20.839 PM       cu2zgt871kzsv      4084779347 NICO     
   6181 5/18/2016 12:30:12.766 PM      5/18/2016 1:00:16.638 PM       cu2zgt871kzsv      4084779347 NICO     
   5669 5/7/2016 8:30:06.532 PM        5/7/2016 9:00:21.666 PM        cu2zgt871kzsv      2820428760 NICO     
   5329 4/30/2016 6:30:27.380 PM       4/30/2016 7:00:32.868 PM       cu2zgt871kzsv      2820428760 NICO     
   5284 4/29/2016 8:00:34.962 PM       4/29/2016 8:30:41.139 PM       cu2zgt871kzsv      2820428760 NICO     
   5258 4/29/2016 7:00:28.038 AM       4/29/2016 7:30:34.104 AM       cu2zgt871kzsv      2820428760 NICO     
   4996 4/23/2016 8:00:34.871 PM       4/23/2016 8:30:42.130 PM       cu2zgt871kzsv      2820428760 NICO     
   2382 2/29/2016 8:00:19.203 AM       2/29/2016 8:30:25.207 AM       cu2zgt871kzsv      1221363908 NICO     
   2381 2/29/2016 7:30:15.311 AM       2/29/2016 8:00:19.203 AM       cu2zgt871kzsv      1221363908 NICO     
   2380 2/29/2016 7:00:10.910 AM       2/29/2016 7:30:15.311 AM       cu2zgt871kzsv      1221363908 NICO     
   2199 2/25/2016 12:30:23.410 PM      2/25/2016 1:00:26.904 PM       cu2zgt871kzsv       661146938 NICO     
   1912 2/19/2016 12:54:50.237 PM      2/19/2016 1:30:16.031 PM       cu2zgt871kzsv      2820428760 NICO

So now I can see that the plan has changed in the last few days.

Friday, May 20, 2016

Cluster wait events "gc buffer busy release", "gc buffer busy acquire"

Whenever we had a large batch insert or a web service hitting our database, we would experience a very high level of cluster waits.

In the OEM performance chart, cluster waits are shown in grey.

That nice grey section is the cluster waits.
Drilling down further, I can see that they are "gc buffer busy release" and "gc buffer busy acquire" wait events.
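One way to identify the hot object behind these waits (a sketch, not the exact method used here) is the per-segment statistics view:

--Segments with the most "gc buffer busy" activity since instance startup
select *
from (select owner, object_name, object_type, value
      from v$segment_statistics
      where statistic_name = 'gc buffer busy'
      order by value desc)
where rownum <= 10;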

Let's have a look at the prime contender for this wait event:
This query was an insert into a table called audit_row.
Being an audit table, almost every table had a before-insert and before-update trigger inserting into it. The trigger uses a sequence to populate the primary key.

The hot object however was an index on that table.
For every new row in the table, the sequence provides the next value, always incremented by 1, and this inserts into the last block on the right-hand side of the index.
When that block is full, the index splits it into 2 blocks, with the previous block keeping all rows minus one and the new block holding one row, the rest of it empty.

In RAC, with 3 instances all trying to write in the same block, they will acquire a lock on the block, perform their operation then release it, allowing other instances to perform their insert.

But why is this happening?
The sequences have a default cache value of 20. In a single instance, it's not too big a deal, although a larger cache value can improve performance a bit.

In RAC, each instance grabs the next 20 values and caches them. But all instances will still insert into the same block of the index.

Increasing the cache is useless if the option "ORDER" is used for the sequence.

If a sequence is used to generate the primary key, ensure it has a large cache value, for example cache 10000. Ensure that NOORDER, which is the default, is used.
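A sketch of such a sequence definition (the sequence name is hypothetical):

--Large cache and NOORDER (the default): each RAC instance caches its
--own range of values instead of fighting over the same ones.
create sequence audit_row_seq
  start with 1
  increment by 1
  cache 10000
  noorder;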

Thursday, May 19, 2016

How to prevent a PL/SQL cursor loop from failing?

The PL/SQL language uses the following format:

DECLARE
   <variable and cursor declarations>
BEGIN
   <code>
END;


But what happens if we use a cursor in a loop and one of the records causes the loop to fail?

Exceptions work together with a BEGIN ... END; block.
So we add one inside the loop.


DECLARE
   vMyVarchar varchar2(100);
   vMynumber number(2) := 10;

   cursor cMyCursor is
    select somecolumn
    from sometables
    where someconditions = 'something';
BEGIN
   OPEN cMyCursor;
   LOOP
      FETCH cMyCursor INTO vMyVarchar;
      EXIT WHEN cMyCursor%NOTFOUND;

      --Let's pretend that vMyVarchar is an executable statement.
      --The inner BEGIN ... EXCEPTION ... END block catches a failure
      --for one record without stopping the whole loop.
      BEGIN
         EXECUTE IMMEDIATE vMyVarchar;
      EXCEPTION
         WHEN OTHERS THEN
            dbms_output.put_line('Record failed: ' || sqlerrm);
      END;
   END LOOP;
   CLOSE cMyCursor;
END;
/

Wednesday, May 18, 2016

How do I change the start time and duration for Automatic Optimizer Statistics Collection

Last time we saw how long the Automatic Optimizer Statistics Collection job was running.
Now let's change it to match our environment better.

The normal job starts at 10pm and ends at 2am. A lot of shops have reports starting at midnight, so this is not the most idle time to run statistics.

In our environment, we identified the period from 6pm to midnight as the best time to run stats. Bonus: we gained an extra 2 hours for it.

The default window to run Automatic Optimizer Statistics Collection is from 10pm until 2am during weekdays, and 20 hours starting at 6am on Saturdays and Sundays.

If the task appears to take the whole default 4 hours and its status is marked "STOPPED", it may not have had enough time to gather all necessary statistics.

--This will set the time at 6pm (18:00) with "byhour=18"
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('MONDAY_WINDOW',   'repeat_interval','freq=daily;byday=MON;byhour=18;byminute=0; bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('TUESDAY_WINDOW',  'repeat_interval','freq=daily;byday=TUE;byhour=18;byminute=0; bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('WEDNESDAY_WINDOW','repeat_interval','freq=daily;byday=WED;byhour=18;byminute=0; bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('THURSDAY_WINDOW', 'repeat_interval','freq=daily;byday=THU;byhour=18;byminute=0; bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('FRIDAY_WINDOW',   'repeat_interval','freq=daily;byday=FRI;byhour=18;byminute=0; bysecond=0');
--This will change the duration of each window to 6 hours.
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('MONDAY_WINDOW',   'duration', numtodsinterval(6, 'hour'));
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('TUESDAY_WINDOW',  'duration', numtodsinterval(6, 'hour'));
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('WEDNESDAY_WINDOW','duration', numtodsinterval(6, 'hour'));
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('THURSDAY_WINDOW', 'duration', numtodsinterval(6, 'hour'));
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('FRIDAY_WINDOW',   'duration', numtodsinterval(6, 'hour'));

Tuesday, May 17, 2016

How long does my auto optimizer stats collection job take to run?

This query helps identify how long the automatic stats collection job takes to run.

The default maintenance window is from 10pm until 2am.

col client_name for a40
col job_name for a30
col job_status for a10
col job_duration for a50
set linesize 200
select client_name, job_name, job_status, job_start_time, job_duration
from dba_autotask_job_history
where job_start_time > systimestamp - 7
and client_name = 'auto optimizer stats collection';

CLIENT_NAME                              JOB_NAME                       JOB_STATUS JOB_START_TIME                      JOB_DURATION                                      
---------------------------------------- ------------------------------ ---------- ----------------------------------- --------------------------------------------------
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2522          STOPPED    5/12/2016 10:00:11.870810 PM -07:00 +00 03:59:51.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2502          STOPPED    5/11/2016 10:00:12.471104 PM -07:00 +00 03:59:50.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2482          STOPPED    5/10/2016 10:00:12.256956 PM -07:00 +00 03:59:49.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2462          STOPPED    5/9/2016 10:00:12.331670 PM -07:00  +00 04:00:03.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2442          STOPPED    5/8/2016 6:07:27.878272 PM -07:00   +00 07:52:40.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2423          SUCCEEDED  5/8/2016 7:10:14.887914 AM -07:00   +00 06:58:59.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2422          STOPPED    5/7/2016 10:08:38.445532 PM -07:00  +00 03:51:30.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2405          SUCCEEDED  5/7/2016 6:00:43.423975 AM -07:00   +00 12:44:10.000000                               
auto optimizer stats collection          ORA$AT_OS_OPT_SY_2402          STOPPED    5/6/2016 10:00:08.874159 PM -07:00  +00 03:59:59.000000

Monday, May 16, 2016

How to run SQL Tune for a poorly performing SQL query


It's always best to work on the underlying query rather than apply a profile to it. A profile affects the same query across all schemas and may reduce performance for another customer.
A profile, when applied, should be considered a short-term solution to an immediate customer problem.
A CR must be opened requesting changes to the query which required a profile, even if the problem seems to be resolved after the profile is applied.

Obtain the query's sql_id

select sql_id, sql_text
from v$sql
where sql_text like 'select * from sometable%';

Run a tuning task

Replace <sql_id> with the sql_id obtained from the previous query.
A tuning task name will be returned by this step.

set serveroutput on
DECLARE
  stmt_task VARCHAR2(64);
BEGIN
  stmt_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '<sql_id>');
  DBMS_OUTPUT.put_line('task_id: ' || stmt_task);
END;
/

Execute the tuning task 

Replace <task name> with the task name obtained from the previous step.

EXECUTE DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => '<task name>');

This step takes some time depending on the query's complexity and the amount of objects to analyze.
It's possible to verify that the task is running with the following query, in another session.
Replace <task name> with the task name used in the previous step:

select task_name, status
from dba_advisor_log
where task_name = '<task name>';

Read the recommendations

set pages 0
set linesize 1000
SELECT DBMS_SQLTUNE.report_tuning_task('TASK_107340') AS recommendations FROM dual;
The tuning task may provide a new execution plan. Compare the current and previous plans. The report also shows the improvement expected from the new profile. If the benefit is great and can alleviate a current problem for a customer, apply the profile.

Applying a profile

EXECUTE DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'TASK_69287', task_owner => 'SYSTEM', replace => TRUE);
That exact command is provided inside the recommendations.

Friday, May 13, 2016

How to obtain all logs within a period of time from the RAC environment

While investigating an event or providing information to Oracle Support, a number of logs must be gathered.

Oracle made this easy with tfactl. Say an event such as ORA-4031 occurred at 10pm, and an investigation is performed with Oracle Support's help the next day.

Run the following command with a range of one hour before and after the problem occurred:

export ORACLE_HOME=/u01/app/12.1.0/grid
export ORACLE_SID=+ASM1
cd $ORACLE_HOME/bin
tfactl diagcollect -from "May/11/2016 21:00:00" -to "May/11/2016 23:00:00"
This collects all cluster logs, alert logs, database logs, listener logs, scan listener logs and so on, and packs them nicely in a zip file. All that's left is to have a look. I have solved many issues just by running this command and looking at the content produced.

Saturday, May 7, 2016

What is the dual table?

In Oracle, it's not possible to type a select statement without a from clause. So Oracle came up with the "dual" table, which is an actual table.

For instance, getting the system date with the function "sysdate" will fail as follows:
SQL> select sysdate;
select sysdate
ERROR at line 1:
ORA-00923: FROM keyword not found where expected

Typing desc dual shows the following:
SQL> desc dual;
Name         Null?    Type
----------- -------- --------------------
DUMMY                 VARCHAR2(1)

It has precisely one column and one row with a value of 'X':

SQL> col dummy for a5
SQL> select * from dual;

DUMMY
-----
X

With the dual table as a placeholder, we can obtain our sysdate:

SQL> select sysdate from dual;

Friday, May 6, 2016

Insert into a table if not exists

In order to make a SQL statement repeatable, you may want to avoid inserting the same value twice and prevent a primary key constraint error.

The MERGE INTO command, which allows inserts and updates, can also be used for inserts only.

merge into table1 t1
    using (select 'value1' as value1, 'value2' as value2
           from dual) t2
    on (t1.value1 = t2.value1)
when not matched then
   insert values (t2.value1, t2.value2);

Thursday, May 5, 2016

Gathering Statistics In Oracle - Part 1

If a system is completely new, these are the statistics which should be gathered:

  • Dictionary Statistics / SYS schema Statistics - They are the same thing.
    exec dbms_stats.gather_dictionary_stats;
  • Fixed Objects Statistics
    exec dbms_stats.gather_fixed_objects_stats;
  • System Statistics - Do not mix this up with SYS schema statistics
    • First run an IO calibration with dbms_resource_manager.calibrate_io:
      • You will find the results in the following views: V$IO_CALIBRATION_STATUS and DBA_RSRC_IO_CALIBRATE.
    • Then run the system statistics:
      exec dbms_stats.gather_system_stats(interval=>60);
      • The interval is in minutes. The stored procedure will execute and run in the background for 60 minutes.
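A minimal sketch of the IO calibration call; the disk count and latency values below are assumptions to adjust for your storage:

set serveroutput on
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  --num_physical_disks and max_latency (ms) are example values.
  --The calibration run itself takes several minutes.
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 10,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  dbms_output.put_line('max_iops=' || l_max_iops || ' max_mbps=' || l_max_mbps);
END;
/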
It's important to run all 3 when there's a realistic load on the system. However, system statistics have a system-wide impact. So if the system is already in production, a test should be performed in an identical staging environment first. If a staging environment is not available, back up the system stats first and get very familiar with how to restore them.
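A sketch of backing up the system statistics with dbms_stats before gathering new ones (the stat table name is hypothetical):

--Create a statistics staging table and export the current system stats.
exec dbms_stats.create_stat_table(ownname => user, stattab => 'SYSSTATS_BKP');
exec dbms_stats.export_system_stats(stattab => 'SYSSTATS_BKP', statown => user);

--If the new system stats turn out worse, restore the old ones:
exec dbms_stats.import_system_stats(stattab => 'SYSSTATS_BKP', statown => user);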

Some storage hardware vendors with storage tiering and dynamic movement of blocks between tiers tend to mess up the IO calibration. I have experienced terrible IO calibration and system statistics with EMC VMAX FAST technology, and I haven't found a good way to make Oracle and the storage agree on disk IO.

Wednesday, May 4, 2016

In oracle, How to obtain statistics preferences for all tables

The oracle package dbms_stats provides a get_prefs function to retrieve statistics preferences, one parameter at a time:

  1. At DB level: 
    select dbms_stats.get_prefs('METHOD_OPT', null,null)  from dual;
  2. At schema level: 
    select dbms_stats.get_prefs('METHOD_OPT', 'MYSCHEMA',null)  from dual;
  3. At table level: 
    select dbms_stats.get_prefs('METHOD_OPT', 'MYSCHEMA','MYTABLE')  from dual;
Naturally, that's a very cumbersome method if a parameter needs to be verified for every table.

The following PL/SQL will display the statistics preferences for all tables in the database by default, or for a specific table if one is mentioned. It's easy to modify it to insert the result in a table instead, or to change it into a full stored procedure.

DBA privilege is required.

set serveroutput on
declare
 p_owner varchar2(30) := 'SOMESCHEMA';
 p_tablename varchar2(30) := null;
 v_result varchar2(100);
 v_tablename dba_tables.table_name%type;
 cursor tablelist is
            select table_name
            from dba_tables
            where owner=p_owner;
begin
    if p_tablename is null then
         open tablelist;
         loop
            fetch tablelist into v_tablename;
            exit when tablelist%notfound;
            select dbms_stats.get_prefs('METHOD_OPT', p_owner,v_tablename) into v_result from dual;
            dbms_output.put_line(p_owner||' - '||v_tablename||' - METHOD_OPT - '||v_result);
            select dbms_stats.get_prefs('ESTIMATE_PERCENT', p_owner,v_tablename) into v_result from dual;
            dbms_output.put_line(p_owner||' - '||v_tablename||' - ESTIMATE_PERCENT - '||v_result);
            select dbms_stats.get_prefs('DEGREE', p_owner,v_tablename) into v_result from dual;
            dbms_output.put_line(p_owner||' - '||v_tablename||' - DEGREE - '||v_result);
            select dbms_stats.get_prefs('CASCADE', p_owner,v_tablename) into v_result from dual;
            dbms_output.put_line(p_owner||' - '||v_tablename||' - CASCADE - '||v_result);
            select dbms_stats.get_prefs('NO_INVALIDATE', p_owner,v_tablename) into v_result from dual;
            dbms_output.put_line(p_owner||' - '||v_tablename||' - NO_INVALIDATE - '||v_result);
         end loop;
         close tablelist;
    else
        select dbms_stats.get_prefs('METHOD_OPT', p_owner,p_tablename) into v_result from dual;
        dbms_output.put_line(p_owner||' - '||p_tablename||' - METHOD_OPT - '||v_result);
        select dbms_stats.get_prefs('ESTIMATE_PERCENT', p_owner,p_tablename) into v_result from dual;
        dbms_output.put_line(p_owner||' - '||p_tablename||' - ESTIMATE_PERCENT - '||v_result);
        select dbms_stats.get_prefs('DEGREE', p_owner,p_tablename) into v_result from dual;
        dbms_output.put_line(p_owner||' - '||p_tablename||' - DEGREE - '||v_result);
        select dbms_stats.get_prefs('CASCADE', p_owner,p_tablename) into v_result from dual;
        dbms_output.put_line(p_owner||' - '||p_tablename||' - CASCADE - '||v_result);
        select dbms_stats.get_prefs('NO_INVALIDATE', p_owner,p_tablename) into v_result from dual;
        dbms_output.put_line(p_owner||' - '||p_tablename||' - NO_INVALIDATE - '||v_result);
    end if;
end;
/