Tuesday 25 July 2023

SQL, Explain-Plan, Show me the Data, automated

TL;DR: Finding relevant info about the data behind an SQL statement is easy, and can be automated.


OK, so you found the SQL-statement that seems to cause problems: from Statspack or AWR, or just from v$sqlarea. You have the statement, the sql_id, and probably the explain-plan(s) from dbms_xplan. In earlier blogs I also showed you how to check the history of an sql_id (from dba_hist_sqlstat) and how to find the bind-variables (from v$sql_bind_capture).


How about we also do some inspection of the data in the tables and indexes? And how about we automate that job?


Take this (simple) SQL stmnt as an example: 

select e.ename, e.hiredate, d.dname, d.loc 
from emp e, dept d 
where 1=1 
and e.deptno = d.deptno 
and e.ename = :vc_ename 
order by e.ename, d.dname; 

In practice, problem-queries may fetch data from 10+ tables, but we'll keep this example small.


We can use dbms_xplan to show the explain:

-----------------------------------------------------------------
| Id  | Operation                      | Name      | Cost (%CPU)|
-----------------------------------------------------------------
|   0 | SELECT STATEMENT               |           |            |
|   1 |  SORT ORDER BY                 |           |     0   (0)|
|   2 |   NESTED LOOPS                 |           |            |
|   3 |    NESTED LOOPS                |           |            |
|   4 |     TABLE ACCESS BY INDEX ROWID| EMP       |            |
|*  5 |      INDEX RANGE SCAN          | EMP_ENAME |            |
|*  6 |     INDEX UNIQUE SCAN          | PK_DEPT   |            |
|   7 |    TABLE ACCESS BY INDEX ROWID | DEPT      |            |
-----------------------------------------------------------------


But we can take it a few steps further: we know the plan is stored in v$sql_plan. So let's combine v$sql_plan with the information in dba_tables and dba_indexes:

column obj_own_name     format A30
column last_analyzed    format A20
column est_rows         format 999,999
column est_keys         format 999,999

select p.object_owner || '.' || p.object_name || ' (T)' as obj_own_name
, to_char ( t.last_analyzed, 'YYYY-MON-DD HH24:MI:SS' ) as last_analyzed
, t.num_rows est_rows
from v$sql_plan p
   , dba_tables t
where 1=1
and p.object_owner = t.owner
and p.object_name = t.table_name
and sql_id = '&1'
and p.object_type like 'TAB%'
order by p.position ;

select p.object_owner || '.' || p.object_name
|| case i.uniqueness
   when 'UNIQUE' then
        ' (UNQ)'
   else ' (IDX)'
end  as obj_own_name
, to_char ( i.last_analyzed, 'YYYY-MON-DD HH24:MI:SS' ) as last_analyzed
, i.distinct_keys est_keys
from v$sql_plan p
   , dba_indexes i
where 1=1
and p.object_owner = i.owner
and p.object_name = i.index_name
and sql_id = '&1'
and p.object_type like 'IND%'
order by p.position ;


These queries will display the relevant statistics (number of rows and number of distinct keys). Note that the statistics can be out of date, hence we also generate count-queries:


OBJ_OWN_NAME		       LAST_ANALYZED	    EST_ROWS
------------------------------ -------------------- --------
SCOTT.EMP (T)		       2023-JUL-23 06:38:30	  14
SCOTT.DEPT (T)		       2023-JUL-23 06:38:30	   4

OBJ_OWN_NAME		       LAST_ANALYZED	    EST_KEYS
------------------------------ -------------------- --------
SCOTT.EMP_ENAME (IDX)	       2023-JUL-23 06:38:30	  14
SCOTT.PK_DEPT (UNQ)	       2023-JUL-23 06:38:30	   4

-- Generated count-queries ready to copy/paste
select ' SCOTT.DEPT  :' || to_char ( count ('x' ), '9,999,999,999' ) from SCOTT.DEPT;
select ' SCOTT.EMP   :' || to_char ( count ('x' ), '9,999,999,999' ) from SCOTT.EMP;
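
The statement that generates those count-queries is not shown above; a minimal sketch, using the same v$sql_plan filter as the statistics-query, could look like this:

-- sketch: generate one count-query per table found in the plan
select 'select '' ' || p.object_owner || '.' || p.object_name
    || '  :'' || to_char ( count (''x'' ), ''9,999,999,999'' ) from '
    || p.object_owner || '.' || p.object_name || ';' as count_qry
from v$sql_plan p
where p.sql_id = '&1'
and p.object_type like 'TAB%'
group by p.object_owner, p.object_name
order by p.object_owner, p.object_name ;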


This will give you a quick insight into the actual size of the objects involved.

Note that dbms_xplan with the ADVANCED option will provide most of the same information, and more. But being paranoid, I can never resist verifying for myself. Sometimes the "statistics" are way off the mark, the last-analyzed date may be outdated, you may have overlooked a particularly large table in the from-clause, or some tables may be hidden behind a view.


Warning: Careful with count-queries.

The Count-queries are deliberately Not Executed Automatically: on some systems a thoughtless count can take minutes or even hours. But if you need them: copy/paste and run .... 



In summary: 

We combine the information in v$sql_plan with the available statistics, and we generate count-queries in case you need them. By automating this in a script, we save the copy/paste/typing effort, and we obtain a more or less standardized format of data. 


And all of this with fairly simple SQL statements...


Remember, in a good RDBMS:

Everything is SQL, and SQL is everything.


Happy Selecting !


Saturday 22 July 2023

Career Advice: Learn SQL.

TL;DR: Knowledge of SQL is Mandatory. If you get to deal with data, you will benefit from knowing SQL.

Note: geeky SQL-based tips and tricks to manage your Oracle Database will resume shortly. This SQL-advertisement just happened to come along on twitter.



Before you continue, read this twitter-thread:

https://twitter.com/parmardarshil07/status/1681570471475834882



This tweet underscores my favourite statement: 


Everything is SQL and SQL is Everything.



Why?  Because IT is always about Data. 

And the most natural way to process data is SQL.

Data is best found, kept, and manipulated with SQL. 

Even the noSQL systems are implementing SQL access-layers to facilitate retrieving content.



In a bit more detail…


First of all, SQL will help you find and retrieve data: SELECT from table(s), with Joins on key-columns if needed, with Where-clause filters to obtain just the data you need, and with Order-By to put the data in a sensible sequence, if only to put the most relevant data at the most visible end.


But if you choose to do so (and you should), SQL will also help you define your data-structures: Create-Table with relevant fields, some mandatory, and typed to ensure that a hire-date is indeed a valid date.


Furthermore, SQL will help you define and enforce relations between tables: an employee should be attached to a valid, existing, department.


And SQL will allow you to define Constraints to make sure the data is sensible: A price (generally) can not be negative, assignment-dates can not overlap.

These conditions, or rules, you can define (declare!) as constraints, and no data-manipulation can (or should be able to) overrule them.
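
A minimal sketch of such declarative definitions (the table- and column-names are only examples):

create table dept
( deptno   number        primary key
, dname    varchar2(30)  not null
);

create table emp
( empno    number        primary key
, ename    varchar2(30)  not null
, hiredate date          not null                              -- typed: must be a valid date
, salary   number        check ( salary >= 0 )                 -- sensible data, declared
, deptno   number        not null references dept ( deptno )   -- must be an existing department
);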


And this is only the short summary of the benefits of SQL…


My whole blog is 90% based on SQL, and how to use SQL to control your data and your Database System.



Friday 14 July 2023

Find and use Bind Variables from SQL statements.

TL;DR: You can see what the content of a bind-variable was in v$sql_bind_capture. You can then use that to generate re-play-scripts for queries. There are some caveats though.



Have a look at the "ADVANCED" option of dbms_xplan:


select * from table ( dbms_xplan.display_cursor ('sql_id', 0, 'ADVANCED'));


Somewhere in the 100+ lines of output you will find the list of bind-vars and their (peeked) values. You could use those values to re-run the query with the same, similar, or different values. Ideal for testing.


Intrigued as I was, I reasoned there had to be some place this information was stored, and I stumbled across v$sql_bind_capture. After a bit of experimenting, I ended up with two queries I could use in scripting:


I can now define the variables for use in an SQL script:


-- define variables
select 'Variable ' || replace ( bvc.name, ':', 'b' )
    || ' ' || bvc.datatype_string || ';'
from v$sql_bind_capture bvc
where 1=1
and sql_id = '&sql_id'
and child_number = 0
order by position, child_number, name;


And I can generate code to set the variables to the values stored:


select
--bvc.*
-- ANYDATA.AccessTimestamp(bvc.value_anydata)
-- ANYDATA.Accessdate(bvc.value_anydata)
  '  ' ||  replace ( bvc.name, ':', ':b' ) ||  ' := '
  || decode ( substr ( bvc.datatype_string, 1, 3)
            ,  'NUM' , nvl ( bvc.value_string, '''''' )
            ,  'VAR' , '''' || bvc.value_string || ''''
            ,  'DAT' , '      to_date ( ' || '''' || ANYDATA.Accessdate(bvc.value_anydata) || ''', ''YYYY-MM-DD HH24:MI:SSXFF'' ) '
            ,  'TIM' , ' to_timestamp ( ' || '''' || ANYDATA.AccessTimestamp(bvc.value_anydata) || ''', ''YYYY-MM-DD HH24:MI:SSXFF'' ) '
            ,  bvc.value_string
            ) || '  ;'
from v$sql_bind_capture bvc
where 1=1
-- and child_number = 1
and child_number = 0
and sql_id = '&sql_id'
order by position, name, child_number;



Notice the trickery needed to extract the date-values. Beware of your NLS settings here.
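
To keep those implicit date-conversions consistent, I align the session NLS formats first. A minimal sketch of what such a helper (like the @set_nls call in the script further down) might contain:

-- hypothetical content of a set_nls helper: make the session formats match
-- the format-masks used in the generated to_date / to_timestamp calls
alter session set nls_date_format      = 'YYYY-MM-DD HH24:MI:SS';
alter session set nls_timestamp_format = 'YYYY-MM-DD HH24:MI:SSXFF';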


With those possibilities, I am very close to re-running the query with either the original values or with values that I can edit into a script. This will allow me to replay queries and to test fixes.


In short, once I know the sql_id, here is what I can now do (and largely automate!):

 - find the offending SQL from v$sqlarea or v$sql.

 - use the sql_id to find the types and values of the bind-variables.

 - semi-automate the definition and the setting of the bind-vars.

 - paste in the SQL and run it, and repeat-run it with various values for bind-vars or with edited varieties of the SQL.


My script to do this is called mk_rerun.sql. I'll include the script below, and you can test it for yourself using sqlcl or sql*plus. The script takes 1 argument, the sql_id:


SQL> @mk_rerun <sql_id> 


It will generate a script called rerun_<sql_id>.lst that you can edit and use for the re-run. With this trick, it now takes me less than 1 minute to find and test the re-running of a problem-SQL.


There are some caveats, and I want to mention two warnings:


Firstly, bind-variables are often shown as placeholders like :1, :2 etc., and those are not valid variable-names in an sql-script. Therefore you need to edit the script to make the variable-names start with a letter. Hence I tend to stick a "b" in front of the variable-name. This changes :1 (invalid variable name) into :b1 (perfectly valid).


Secondly, Date-variables don't work in sql-scripts: the VARIABLE command cannot declare a bind-variable of type DATE.

Hence you need to cast the captured values into dates: to_date ( value, format ).


There are a few other quirks you might encounter when you try to automatically re-run SQL statements, but you are well on your way now.


Happy Scripting! 



In case someone wants to reuse my script: mk_rerun.sql:


set ver off
set feedb off
set head off

@set_nls

spool rerun_&1


-- try picking bind-vars from memory: define + assign

-- define variables

select 'Variable ' || replace ( bvc.name, ':', 'b' ) ||  ' ' || bvc.datatype_string ||';'
from v$sql_bind_capture bvc
where 1=1
and sql_id = '&1'
and child_number = 0
order by name, child_number;

prompt  /*  */
prompt  /* -- now all variables declared, will try to assign from cursor-0 */
prompt  /*  */

prompt BEGIN

select  --bvc.sql_id
-- , bvc.child_number     chld
--, bvc.name             bind_variable
--, bvc.datatype_string  datatype
--, bvc.value_string     bind_value
--bvc.*
--ANYDATA.AccessTimestamp(bvc.value_anydata)
-- ANYDATA.Accessdate(bvc.value_anydata)
  '  ' ||  replace ( bvc.name, ':', ':b' ) ||  ' := '
  || decode ( substr ( bvc.datatype_string, 1, 3)
            ,  'NUM' , nvl ( bvc.value_string, '''''' )
            ,  'VAR' , '''' || bvc.value_string || ''''
            ,  'DAT' , '      to_date ( ' || '''' || ANYDATA.Accessdate(bvc.value_anydata) || ''', ''YYYY-MM-DD HH24:MI:SSXFF'' ) '
            ,  'TIM' , ' to_timestamp ( ' || '''' || ANYDATA.AccessTimestamp(bvc.value_anydata) || ''', ''YYYY-MM-DD HH24:MI:SSXFF'' ) '
            ,  bvc.value_string
            ) || '  ;'  from v$sql_bind_capture bvc
where 1=1
-- and child_number = 1
and child_number = 0
and sql_id = '&1'
order by name, child_number;


select 'END;' from dual ;
select '/'   from dual ;


prompt set autotrace off


prompt /* --                                                          */
prompt /* -- Paste formatted statement here, followed by semicolon... */
prompt /* --                                                          */
prompt /* --   SQL goes HERE, with semicolon added!                   */
prompt /* --   Will include explain to catch stmnt.                   */
prompt /* --                                                          */
prompt /* -- use this file to edit + run stmnt with variables         */
prompt /* -- SQL> ed rerun_&1..lst                                    */
prompt /* -- SQL> @rerun_&1..lst                                      */
prompt /* --                                                          */


prompt /*  first execute goes here, can use output from xplan to get SQL */
prompt /*  please remove plan...                                     */
SELECT plan_table_output FROM table(DBMS_XPLAN.DISPLAY_CURSOR('', 0, 'BASIC' ));


-- and repeat with autotrace on

prompt
prompt
prompt set autotrace on
prompt set timing on
prompt set linesize 150
prompt set feedback on
prompt
prompt /* -- and 2nd run with autotrace on */

prompt /

prompt set autotrace off
prompt set timing off

prompt /* --                                                          */
prompt /* -- use this file to edit and run stmnt with variables       */
prompt /* -- SQL> ed rerun_&1..lst                                      */
prompt /* -- SQL> @rerun_&1..lst                                      */
prompt /* --                                                          */

spool off
set feedb on



End of script mk_rerun.sql


Wednesday 12 July 2023

DBaaS is Fine in Most Cases.

TL;DR: For 99% of use-cases a DBaaS (autonomous...), an RDS- or an MPV-service is sufficient. But in case I want to investigate further, I am still very grateful to Oracle for exposing the V$ and X$ views. They allow me to inspect, understand, and troubleshoot my system at the next level.



This blog post is partly in reaction to...

https://twitter.com/samokhvalov/status/1678786986763755521

Firstly, I agree with Nikolay.
And when you need "backend access" or "root-level-control" on the system where your database is running: please build+manage your own servers. I understand the desire for control and access; it is the only way a real techie will "feel safe". But realise that when you have all that access, with great power comes great responsibility: you will have to secure, manage and maintain your platform yourself. Not all organisations are willing to do so, and even fewer are capable of attracting + retaining staff with the required level of competence.
(and did I mention "Documentation" ??)


But the world is moving on....
And for the rest of us, there are the "managed services" like RDS and other MPVs. And we have to accept that those come with some restrictions.

For me, it took a while to get used to it, but now I'm ok with the RDS "level of service". For the following (nitpickity) reasons: 

1. My customers often don't want to be bothered with the details of DBA-work. And they definitely don't want to deal with Oracle-sales ever again. RDS is the best example I know of a service that "takes away the worries" of mundane DBA-tasks and liberates us from the sales-vampires.

2. A good RDBMS (Oracle, and to some extent Postgres) will expose enough of its functioning via the SQL-interface so that I can diagnose problems.

3. As an application-developer or software provider, I will also try to limit my software and its complexity, to avoid getting into situations where I "need backend access".

4. A good provider (Oracle, RDS) will provide assistance if needed. (This is not the place for another long rant about support-staff who are driven by the need to Close Tickets rather than to solve problems, but you know who you are...)

To the RDBMS-vendors, if they care to listen, I would say: please expose your system, all of it, via the SQL-interface. The rules of Codd still apply, and Rule 4 is Especially applicable here.

SQL is Everything, and Everything is SQL...

And please provide the data via SQL. That way your customers / developers / users can find their data, and combine it in any way they see fit. Running graphs and dashboards with metrics-over-time are very useful, but at some point some geeky customer will want to query for himself and combine data in ways your dashboard didn't think of yet... Please give them that SQL.


To the providers of managed services (RDS, other MPVs), I would say: Expose! Provide as many dashboards and metrics as the RDBMS can expose. And provide the data via SQL: that way the consumer is free to select/slice/dice/present the data in any way they think they need.


Tuesday 11 July 2023

The History of a Query

 TL;DR: The dba_hist_sqlstat view can tell you what has changed in the frequency or in the execution plan of a given query.



We have this dashboard-query that suddenly became very slow. The query is a sum ( something ) to indicate "how much work was done". Normally it uses an index and is lightning fast. This morning it seems to have suddenly slowed down.


Let's dig in and use this as an example on "How to Investigate".


We know what the query looks like, and using the v$sqlarea, we can find the SQL_ID: 7292jtjypdyvt.
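
Finding that sql_id can be as simple as a filter on the statement-text; a sketch (the like-pattern is only an example, not the actual dashboard-query):

select s.sql_id
     , s.executions
     , s.last_active_time
     , substr ( s.sql_text, 1, 60 ) as txt
from v$sqlarea s
where upper ( s.sql_text ) like '%SUM%WORK_DONE%'   -- hypothetical filter
order by s.last_active_time desc ;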


Next is DBA_HIST_SQLSTAT: this can tell you how an SQL performed over the snapshot-intervals.


If you are using "statspack", you will find similar information in STATS$SQL_SUMMARY, but you will need to subtract the preceding value to find the difference since the previous snapshot.
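
For statspack, that subtraction can be done with an analytic function; a sketch (column availability may differ per version):

-- per-interval deltas from statspack, computed with lag() over snap_id
select snap_id
     , executions  - lag ( executions  ) over ( order by snap_id ) as execs_delta
     , buffer_gets - lag ( buffer_gets ) over ( order by snap_id ) as gets_delta
from stats$sql_summary
where sql_id = '7292jtjypdyvt'
order by snap_id ;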


We start with this:


select * from dba_hist_sqlstat  sq
where sq.sql_id = '7292jtjypdyvt'
order by sq.snap_id ;


Since v12, the DBA_HIST...  views contain "delta" columns which save you the effort of subtracting.  


And we need to join to the snapshot view to find at what time each snapshot was taken. The query evolves into something like this:


select to_char ( sn.end_interval_time , 'DDMON HH24:MI') as Time
, sq.executions_delta     execs
, sq.buffer_gets_delta    buff_gets
, rows_processed_delta    nr_rows
-- , sq.*
from dba_hist_sqlstat  sq
   , dba_hist_snapshot sn
where sq.sql_id = '7292jtjypdyvt'
  and sn.snap_id = sq.snap_id
  and sn.dbid = sq.dbid
  and sn.instance_number = sq.instance_number
order by sn.dbid, sn.snap_id, sn.instance_number ;


To place it all in perspective, I also tend to look at the elapsed-time per execution, and the buffer_gets per execution:


elapsed_time_delta / executions_delta  as  sec_p_exe


But here I need to be careful: some decode is needed to avoid divide-by-zero.
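
One way to write that per-execute column (the decode-trick and the rounding are my own choices):

-- sec_p_exe without divide-by-zero: decode() turns 0 executions into 1
-- (elapsed_time_delta is in microseconds, hence the 1e6)
select sq.snap_id
     , sq.executions_delta as execs
     , round ( sq.elapsed_time_delta
               / ( decode ( sq.executions_delta, 0, 1, sq.executions_delta ) * 1e6 ), 2 ) as sec_p_exe
from dba_hist_sqlstat sq
where sq.sql_id = '7292jtjypdyvt'
order by sq.snap_id ;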


And another interesting item is the plan_hash_value: if the plan has changed over time, it means the optimiser has chosen another plan, which may or may not be advantageous.


In my case, the inspection-query has turned into a larger statement, which I run from a script that I have called sql_freq:

SQL> @sql_freq <sql_id> <enter>
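
The actual sql_freq.sql is not listed here, but a sketch of roughly what it selects, using the column-aliases discussed below (the exact expressions in my script may differ), looks like this:

select to_char ( sn.end_interval_time, 'DDMON HH24:MI' )  as time
     , sq.executions_delta                                 as execs
     , sq.buffer_gets_delta                                as buff_gets
       -- gets per execute; decode() avoids divide-by-zero
     , round ( sq.buffer_gets_delta
               / decode ( sq.executions_delta, 0, 1, sq.executions_delta ) )           as get_px
       -- seconds per execute (elapsed_time_delta is in microseconds)
     , round ( sq.elapsed_time_delta
               / ( decode ( sq.executions_delta, 0, 1, sq.executions_delta ) * 1e6 ) ) as sec_px
     , sq.rows_processed_delta                             as nr_rows
       -- gets per row returned
     , round ( sq.buffer_gets_delta
               / decode ( sq.rows_processed_delta, 0, 1, sq.rows_processed_delta ) )   as g_pr
     , sq.plan_hash_value                                  as pln_hv
from dba_hist_sqlstat  sq
   , dba_hist_snapshot sn
where sq.sql_id = '&1'
  and sn.snap_id = sq.snap_id
  and sn.dbid = sq.dbid
  and sn.instance_number = sq.instance_number
order by sn.snap_id, sn.instance_number ;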




Let me discuss the relevant columns here:


TIME: Notice that my awr-snapshots are only 10min apart. I tend to set that interval narrow on critical systems or on systems under investigation.


EXECS: notice that this SQL is fired 10404 times between 07:30 and 07:40. That is quite a lot of executes in a 10min interval. Busy time-window?


BUFF_GETS and GET_PX (gets per execute) : At 07:50, the nr of buffer-gets suddenly goes up. And the nr of gets-per-execute goes from 3 (very efficient) to 7000 (not so efficient). 


SEC_PX: the nr of seconds per execute is still way below 0.5 sec. When rounded to whole seconds, it ends up as zero seconds. In some versions of the script, I display milliseconds, but when dealing with big/slow SQL, those numbers tend to become impractically wide.


NR_ROWS: Because this is a sum-query, each execute returns only 1 row.


G_PR: the nr of block-gets per row-returned. For some queries, I want to know the nr of blocks processed (gets) in relation to the nr of rows returned. In an OLTP system, you want the nr of blocks processed to be low. But if for some reason the nr of rows is high, I can tolerate the processing of more blocks (e.g. bulk-shipments with a lot of items...).


PLN_HV: the Plan-Hash-Value. If this value changes, the execution-plan has changed over time. In this case, we notice 3 intervals where the plan was indeed different. From the nr of Buffer-gets and the gets-per-row, we deduce that the "other plan" is probably a lot less efficient, because it requires 7000 gets to return the result, whereas the "efficient plan" only needed 3 gets (a typical index-lookup).


To further inspect the SQL, and display the execution-plans used, you can use something like:


select plan_table_output from table (dbms_xplan.display_awr('<sql_id>'));


And there is a lot more to discover, for example:

The DBA_HIST_SQL_PLAN contains details about the execution-plans, and DBA_HIST_SQLBIND can be used to hunt down the values of bind-variables from queries.


It is all there for you to explore ... 


In this particular case, the re-calculation of statistics for the relevant index was sufficient to "fix" this problem. At 09:00 the SQL had returned to its normal, efficient, PHV and 3 GET_PX.
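
For reference, regathering the index-statistics can be as simple as this (a sketch; the owner- and index-name are just examples, not the actual objects from this case):

-- refresh optimizer statistics on one index
begin
  dbms_stats.gather_index_stats ( ownname => 'SCOTT', indname => 'EMP_ENAME' );
end;
/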


Friday 7 July 2023

Who uses this Index ?

 TL;DR: One more way to "monitor an index". Checking v$sql_plan can get you to the relevant SQL, tell you what component runs the query, and give you a lot of details. Read, experiment, and choose the way you want Your monitoring done.



There are multiple ways to find out if an index is useful or not, and they all have their pros and cons.


Firstly, a very blunt way is to make the index invisible, or even drop it, and just see what happens. This method has a downside: it can impact users hard+fast. 
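
If you go the blunt route, the invisible-option at least makes it reversible; a minimal sketch (the index-name is just an example):

-- hide the index from the optimizer, watch the workload, restore if needed
alter index scott.emp_ename invisible;
-- ... observe for a while, then:
alter index scott.emp_ename visible;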


Secondly, the official documentation will provide you with DBA_INDEX_USAGE; a very good article on this by Tim Hall (oracle-base dot com) is found Here (Do Read It!). This second method is less intrusive and more subtle, but it still gives a more or less Yes/No answer to the question. It does not give you an indication of how many sessions/queries/users are hitting this index, or how often.
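
For reference, a quick look at that view can be as simple as this (a sketch; available from 12.2 onwards, and the owner-filter is just an example):

select u.owner
     , u.name
     , u.total_access_count
     , u.total_exec_count
     , u.last_used
from dba_index_usage u
where u.owner = 'SCOTT'
order by u.last_used nulls first ;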


We are going to add yet another method: also not perfect, but maybe more suitable for some use-cases. Read on if you like to Dig In Deep....


Take a look at v$sql_plan and at the AWR and Statspack equivalents: DBA_HIST_SQL_PLAN and STATS$SQL_PLAN. If you filter by object_owner, object_name and object_type, you can see which SQL-statements have used your index in their plan, both in the sql-cache and in AWR/Statspack.


This view stores the "explained plan" of the SQL that resides in the shared_pool, and is also used by dbms_xplan.display_cursor and by (auto-)explain to display the explain-plans.


At this moment, I am interested in any SQL that uses my index. And I can obtain the list of SQL_IDs like this: 


select p.sql_id
from v$sql_plan p
where  p.object_owner = 'SCOTT'
and    p.object_type  = 'INDEX'
and    p.object_name  like 'EMP%'  /* idx name(s) here */
;


From the nr of sql_ids we already get an idea of how many statements use this index, and how often.


To make sure you don't miss occurrences, you should also check the DBA_HIST and/or the STATS$ equivalent, because not all SQL-statements will remain in the sql-cache and be visible via the v$ views.
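
The same check against the AWR-history could look like this (a sketch, re-using the filter from the query above):

select distinct p.sql_id
from dba_hist_sql_plan p
where p.object_owner = 'SCOTT'
and   p.object_type  = 'INDEX'
and   p.object_name  like 'EMP%' ;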


Once you have the set of SQL_IDs, you can link them to v$sql or v$sqlarea to find out how often each of them is used, and by which users or programs. I tend to use something like this:


select a.sql_id
     , a.executions
     , a.parsing_schema_name
     , substr ( a.sql_text, 1, 25 ) as sqltxt
from v$sql_plan p
   , v$sqlarea a
where p.object_owner = 'SCOTT'
and   p.object_type  = 'INDEX'
and   p.object_name  like 'EMP%'  /* name(s) here...*/
and   a.sql_id = p.sql_id;


For even more detail, you can also check whether the SQL_IDs occur in the v$active_session_history views: those will tell you which sessions, e.g. which programs/users/jobs, have been running the queries that used your index.
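
A sketch of such a check against ASH, re-using the index-filter from above (the column-selection is my own):

-- who actually ran the statements that used the index
select h.sql_id
     , h.session_id
     , h.user_id
     , h.program
     , count(*) as samples
from v$active_session_history h
where h.sql_id in ( select p.sql_id
                    from   v$sql_plan p
                    where  p.object_owner = 'SCOTT'
                    and    p.object_type  = 'INDEX'
                    and    p.object_name  like 'EMP%' )
group by h.sql_id, h.session_id, h.user_id, h.program
order by samples desc ;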


By doing this little extra digging, you may obtain a much better view of who + when + what is using your index(es).



I would encourage you to use each of the methods described, and to "do your own digging", and then to make an informed decision on whether an index can be dropped or not.


As I've often done, I'll make the point again: All this information is available to you in Views and Tables, and can be viewed with just SQL, and you are free to use the tool of your choice: SQLDeveloper, SQLcl, SQL-Plus, or any other adhoc-query tool at your disposal. 


Everything is SQL, and SQL is Everything.


Go Experiment, Go Learn and Enjoy !



Tuesday 4 July 2023

Use SQL to find problem-statements

 TL;DR: There is a lot of interesting information in v$sqlarea, but beware of jumping to conclusions. 



On a troubled system, one of the first things I look at is v$sqlarea.


Have a look: 


Select * from v$sqlarea s order by buffer_gets ;


This view has grown over time to include a lot of columns but I recommend you start by looking at the ones you (think you...) understand: executions, time, disk_reads.


Depending on what you (think you) know to be your bottleneck, you can order by buffer_gets, disk_reads, or simply by elapsed time. 


My starting point for investigating a system is often:


select sql_id
, executions
, elapsed_time as microsec
, elapsed_time / ( (executions +1) * 1000 ) msec_p_exec
, buffer_gets
, disk_reads
, substr ( sql_text, 1, 60) txt
from v$sqlarea s
where 1=1
-- and parsing_schema_name like 'APP_OWN%'
order by 4 desc /* heaviest on top */
, elapsed_time desc
, buffer_gets desc ;



When run in SQLDeveloper, this puts the "slowest-per-execute" on top of the list. But by editing the query, I can quickly look for the most-buffer-gets, or the most-disk-reads as well. 




In the screenshot above, it is clear that the first 3 lines, a PL/SQL block and two select-statements, are consuming the majority of the elapsed-time. Those are probably the pieces we should investigate.


Notice several things about this way of "finding heavies":


Depending on what I want to look for, I can add or remove columns in the select. Some lines are ready-to-uncomment, in case I need them. In reality, the statement I have saved in a ready-to-use text-file is much larger, but it would look really messy to put that up in a blog.


The +1 is my dirty trick to avoid div-by-zero. Notice how several statements have Zero executes: those were still running while we took the sample.


When run in sqlcl or sql*plus, I remove the DESC from the order-by, to make sure the most relevant statements appear at the bottom (e.g. I don't have to scroll back).




Some warnings are due: 


There are a lot of caveats to this quick-and-dirty method.


It will only find statements that are currently in the shared_pool, e.g. recent or often-used statements. 


The numbers are "averages", which may or may not be impacted by outliers. One slow stmnt, an accidental lock, or one user looking for "Everyone called Garcia" may impact the results. Hence in interpreting the results: also check against your AWR or statspack, and check against common sense.


For example, if you know or suspect your database has an IO bottleneck, you should also order by disk-reads rather than by buffer_gets or by elapsed.  And if the slowest SQL does not concur with the most disk-reads,  you should question whether you really have an IO bottleneck.


Once you have identified a few relevant or "heavy" statements, also go and verify that those statements actually are involved in relevant activity. Sometimes, the "load" is caused by something you didn't expect. (I have a funny story about an uptime-measuring script that impacted a running system, causing a hiccup every 15 minutes.)


And next, once you think you have the SQL_ID of a problem-statement, you can use that sql_id to do further inspection.


For example, you can dig into v$sql for child-cursors, or into v$sql_plan and v$sql_bind_capture. Lots of possibilities there, and there is always more to discover.
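
For example, a quick look at the child-cursors of one sql_id could be (a sketch; the column-selection is my own):

select child_number
     , plan_hash_value
     , executions
     , buffer_gets
     , is_bind_sensitive
from v$sql
where sql_id = '&sql_id'
order by child_number ;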


At this point, I strongly encourage you to experiment and do some further research by yourself. You are welcome to Copy/Paste my code, but you learn much more from discovering things by yourself.


Happy Searching !