Saturday, 2 May 2026

Use X-windows on your Oracle containers, Simple...

TL;DR: To run some of the GUI tools that come with Oracle 26ai, you need X-windows. Let me show you how I did that on my containers.

I needed this "solution" several times in the last few years, and I got tired of searching for it every time,  hence this post is also a "reminder to self".

And this how-to-X11 post is hopefully the setup for a series where I want to use the output from the Oracle DataBase Configuration Assistant (DBCA) to Create a Database. To use that assistant, I first had to set up X-windows (or X11, as it is often called).

Note that this X11 pre-dates the MS-windows that most of you know + use...

Image: the Oracle11 installer. I couldn't find an older one, but this installer goes back to v7 or maybe even further.


Background: I want to run those tools, hence I need X11

For some of my hobby / experiments / research, I am using container-images with Oracle (database) software on them.

One source is the official Oracle container-registry (link). I took mine following the links to "Database" -> "Enterprise".

Another very convenient source is the collection of "FREE" images maintained by Gerald Venzl (link). The latter are hassle-free, license-free, Very Easy to pull and run containers you can use for Dev, Testing, or just playing around with an otherwise Enterprise-Grade database (ok, some limitations apply... rtfm ;-)  )

In both cases, I also wanted to run the "assistants" that come with the Oracle Database. I can normally manage quite well without those, and I would most of the time recommend you do your Research and use proper scripts to do your work. But in some cases, I also recommend using those assistants to "ask Oracle for a second opinion".

In my case, I notably have this hobby-project of Creating a Database from Scripts rather than from recovering an existing database or a pre-built template. More on this in some older posts (link to 2023) and maybe in an upcoming blog-series (future links here...).

You can find more on X-windows, or X11 as it is often called, here (link)


Installing x11, using yum, should be simple, right?

(I ended up spending a lot of time on this, several times over, and it isn't perfect yet...)

So I started my container, as usual, with this (simplified) command:
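Something along these lines (a sketch: the registry-path, port-mapping and password are assumptions here; my complete script, mk_cont_x11.sh, is linked at the bottom):

# where to pull from, and what to call the container
SRC_IMAGE=container-registry.oracle.com/database/enterprise:latest
CONT=ora26
YUM=yum     # on the FREE images: YUM=microdnf

docker pull $SRC_IMAGE

docker run -d --name $CONT --hostname $CONT \
  -p 1521:1521 \
  -e ORACLE_PWD=MySecret01 \
  $SRC_IMAGE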

That is (simplified) how my container normally runs. And once the initial pull is done, this is a Fast and convenient way to get a database up.

I have defined variables for :

SRC_IMAGE : where to find the image to pull

CONT : the name and hostname of the new container.

YUM : because not all containers have yum, sometimes YUM=microdnf.

And then I start it up...  

Note that you can do many more sophisticated things on startup: rtfm on the registry-docu and on the page describing the free-images - there are many possibilities! I can notably recommend you map some volumes for data- and diag-data. But that is outside of the scope for this item.

These container images do not yet contain X11, so I had to get that myself. From ancient memory and some googling-around I knew that it could be yum-installed.. (yum works on the enterprise-image, on the FREE images you can use microdnf).

Well, this time yum didn't work so easily:

It didn't find xorg-x11-apps ...

Yum isn't quite my expertise, but after a lot of searching and a tip from Kamil (the axeman himself, no less), I found I had to zap (empty out) the file /etc/yum/vars/ociregion. Here is the relevant part of my script, the link below contains the complete script:
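In sketch-form (the backup-name is my own choice; the full script is linked below):

# move the ociregion file aside, and replace it with an empty one;
# with an empty $ociregion, yum falls back to the public yum.oracle.com mirrors
docker exec $CONT sh -c 'mv /etc/yum/vars/ociregion /etc/yum/vars/ociregion.old \
                      && touch /etc/yum/vars/ociregion'

# after that, the X11 packages install without complaints
docker exec $CONT $YUM install -y xorg-x11-apps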

Those docker (or podman) commands made it work. 

I used docker-exec to move the ociregion file and create an empty one. From that point on, yum (or dnf, or microdnf) was able to install X11. 

And at every new container, I am now happy to see those eyes appear:

Success !  (for now). Proven by the eyes... 


About the X-server...

Note that when you start an X11 app anywhere, you need to make sure your laptop runs the X-Server to receive and display the resulting windows. That is why everyone will "test" with xeyes or xterm. In my case, I use XQuartz on the macbook. On windows you might use MobaXterm or Xming. Make sure your X-Server is running and reachable, otherwise you will see "cannot open display" or a similar error when starting any X11 applications.

Notice how I told xeyes where to send its output with the -display option. Once you log on, you need to set+export the DISPLAY value in your shell-session. That DISPLAY needs to point to your X11 server (yes, it is called a server...)
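For example (the IP-address of the laptop running the X-Server is an assumption, fill in your own):

# inside the container, as user oracle:
export DISPLAY=192.168.1.10:0
xeyes &                               # picks up $DISPLAY
xeyes -display $DISPLAY -fg violet &  # or point it explicitly, in violet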

To prove it all worked in my case: I displayed "the eyes", in "violet". I also used to put a "green" set of eyes at the end of the final script to signal "complete". Check the link to the final script at the bottom of the post.

Personal note: My Kids loved those eyes back in the '00s, and I often produced them just for fun. The eyes follow the mouse-pointer, really nice.

That was not all that difficult... however... we are not done yet.


The dbca (not) working ?? (not on 1 laptop, not on the newest...)

Once I had X11 on there, I could normally logon as oracle (the dflt user), export my display, test it with xeyes (looked Good), and then start up the DBCA...

Not in this case. The xeyes worked fine. But not the DBCA.


The DBCA does not display in readable colors. At least not on this screen. But it did always work fine on older laptops. And still does even with the latest container-images. I've asked about this on the Oracle forum but as of 02-May-2026, no solution yet.

My (failing) platform was: 

X-server is XQuartz, running on a macbook M5-pro. 

My screen is the nano-texture variety for macbook (expensive, but Good even in sunlight)

The "linux server" is the container pulled from Oracle, image id 224c51ce1555

At the time of writing, I have not solved this. There is a thread on the oracle forum about it (link).

My workaround was to use another, older laptop where the display of the X11 window was as-expected. That dbca feels a bit old, but also comfortable: it was there already when I started my DBA-career back in the 90s...

Using the older laptop, I was able to generate all the "create scripts" that I wanted, run them, and examine them at my leisure. But that is for another topic.


Summary : Simple, but necessary. X11 install Done.

This was a fairly simple fix, but the time went into "Researching". Mostly because I had never mastered the install of X11, and partly because I had to tweak the yum-variables provided on the (oracle-) container-image.

All is (mostly) Fixed and successfully worked-around.

All well. 

I was ready to generate "The Scripts" 


-- -- -- -- -- End of this how-to blogpost -- -- -- -- --

Appendix: the script

mk_cont_x11.sh : the shell script to create the container.

Do try. And let me know how it works for you. You can of course use this script to build your own running containers with Oracle 26ai in them. Have Fun.

-- -- -- -- -- End of this blogpost, for Real -- -- -- -- --


Monday, 27 April 2026

Oracle 26ai - The Con_ID of CDB$ROOT...

TL;DR: When your query joins V$PDBS to get the name of the PDB, you might miss the contents of CDB$ROOT. I fixed that in various ways for my scripts.

(note: this was a classic case of at least two errors on my part: not reading the RTFM on "containers" carefully enough, and a case of an INNER JOIN being too strict)

Image generated in style of X-files: Never Trust your own Scripts...


Background: Do I miss some data ??

When you are "administrating" (monitoring) your Container DataBases from CDB$ROOT, you want to see Everything. All PDBs, all datafiles, all services, all session and all processes. And you want to know in which PDB each item resides. Not just the con_id, but preferably the actual name of the PDB.

But when you join to V$PDBS to get a list of the pluggable databases in your container, there is no entry in that list for CDB$ROOT. 

In some of my scripts, I ended up with discrepancies. As an example, I'll take my daily "storage-check". But I had similar "issues" with v$services and v$session.

The sum of all filesizes from V$DATAFILE was (example) 999 GB. But when I tried to group the data per PDB, the total of that list was clearly less than 999 GB. Huh?

The difference was that the datafiles from the CDB$ROOT were not taken into account, because my (inner) join with V$PDBS automatically excluded the files where CON_ID=1, i.e. the files belonging to the CDB$ROOT.

It got even stranger: when you select CON_ID and NAME from V$DATABASE, it yields Con_id=0, and name=FREE (i.e. the db_name).

So it seems that CON_ID = 0 and CON_ID = 1 are the exception-cases, and are both referring to CDB$ROOT, or to DB_NAME, depending on where you look ?

Not a huge problem, and there are several simple solutions. But still enough of an anomaly for me to take a closer look (and fix my queries...).


Note: I'll do the demo using the "Free" container from Gerald Venzl (link). If you ever quickly need a demo- or test-database check this out. Some of our devs just love them for their ease-of-use. No license-accept, no login, just pullable-images. I got into the habit of using these containers for just about any "proof of concept" (or any blog-article). You'll have an Instant Database available on any laptop where docker or podman is installed. I might do a blog on "how to" sometime, but in general: Just read the page and get running...  Most DEVOPS-  and DBA folks figure it out in no-time - And then they suddenly "run a local database"...


The Example: I missed GBs of diskspace from the sum of my datafiles.

For demo purposes of this blog, I started one of those containers with FREE and FREEPDB1, then added several PDBs, and then checked for used disk-space (simplified version of my check-script in link below):

Notice how the list of PDBs adds up to 5 x 790 = 3950 MB, but the total reported is 5673 MB. There is some 1700 MB missing in the first list, even though both queries report from v$datafile.

From looking at the SQL, or by listing just the v$datafiles, it is +/- evident that the first query missed the files where CON_ID = 1, i.e. it missed the files from the CDB$ROOT:

The two files at the top hold the 1700 MB that seemed missing. And the cause is classic: the simple, old-fashioned join with V$PDBS in the previous query had left out Con_ID=1.

Aha. A quick fix with LEFT JOIN should do the job. Or.... Even better, after poking around I also found V$CONTAINERS, and that seemed like the Best Fix (for this case):
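The fixed query, in sketch-form (my real check-script is linked below). V$CONTAINERS, unlike V$PDBS, does contain a row for CDB$ROOT:

select c.con_id, c.name
     , round ( sum ( f.bytes ) / 1024 / 1024 ) as size_mb
from v$datafile f
join v$containers c on c.con_id = f.con_id
group by c.con_id, c.name
order by c.con_id;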

So Far so Good. This script is now working as intended again.


But What About that CON_ID for root ??

On further inspection, I noticed that my session-counts (checking, +/-, how busy the system is...) were also off by a good margin when compared to, say:

select count(*) from v$session;

or even 

ps -ef | grep $ORACLE_SID

Same problem of course: When I had done my quick+dirty equal-joins to V$PDBS on CON_ID to pick up PDB-NAME, I suddenly had eliminated all of the sessions for non-PDB connections.

But it got stranger: Here is my check on GV$SESSION, after putting in the "fix" with GV$CONTAINERS to pick up the pdb_names and CDB$ROOT...

The total shows a count of 86 sessions, but the various containers only hold ... 3+3+1 = Seven ?? How can I still miss 79 sessions ?

This is where I had to go RTFM, and the actual information was buried in an older version (v12, link) of the doc:

The values for con_id are explained as follows:

  • Con_ID = 0: Pertains to the Entire CDB (example: V$DATABASE contains the value 0, and this is probably why many "background" sessions have Con_ID=0)
  • Con_ID = 1: Pertains to the CDB$ROOT (for example in V$CONTAINERS and V$DATAFILE, you will find the value of Con_id=1, as I found with my storage-check)
  • Con_ID = 2: Pertains to the PDB$SEED.

Other, higher values of Con_ID correspond to proper PDBs that have been created, cloned, or plugged into the CDB.

And because there is no Con_ID = 0 in the V$CONTAINERS, I missed some 79 sessions in my uninformed count...

Note that when you check the same RTFM-page for 26ai (click on "view latest" in the previous link) there is no clear mention of this anymore...? This information has either moved to other parts of the documentation or was considered overly confusing (I'd say: Not Knowing this is even more confusing, but maybe I dig too deep...).

In this case, a little more fixing was needed, and the fix did end up as a LEFT-JOIN:
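In sketch-form (the hardcoded '-root-' label is my own improvisation, more on that below):

select s.con_id
     , nvl ( c.name, '-root-' ) as container
     , count(*) as sessions
from gv$session s
left join gv$containers c
       on c.con_id = s.con_id and c.inst_id = s.inst_id
group by s.con_id, nvl ( c.name, '-root-' )
order by s.con_id;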

And QED: we have the sessions, counted per container, and we did include the sessions where Con_ID=0. The sum of the parts is equal to the whole again. All is well.

For a more elaborate fix, I even experimented with a view that I named All_Con_IDs: a Union of V$PDBS and V$DATABASE. The result was something along these lines: Here is my check-count for "processes" using an inline-view version:
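A minimal sketch of that idea (I union V$DATABASE with V$CONTAINERS here, so that Con_ID=1 is covered as well):

with all_con_ids ( con_id, name ) as (
  select con_id, name from v$database     -- con_id=0, named after the db_name
  union all
  select con_id, name from v$containers   -- con_id=1..n: CDB$ROOT, seed, and PDBs
)
select p.con_id, a.name, count(*) as processes
from v$process p
left join all_con_ids a on a.con_id = p.con_id
group by p.con_id, a.name
order by p.con_id;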

Using this CTE (or inline view), I got the satisfaction of seeing the proper name of the (container-)database displayed with Con_ID=0, instead of some self-designated hardcoded "-root-" value from an improvised-NVL.

Needless to say: I didn't seriously implement that "fix" in a self-made view (yet), but it was an interesting idea for a moment: unite all the Con_IDs in a single view/place, to join to various sources where and when needed...

Maybe someone out there knows of a similar view that already exists ?


Summary: Check your scripts when using Con_ID (and Trus7No!)

From spotting an anomaly in my scripts, I ended up digging into CON_ID values. I discovered that the values 0 (zero) and 1 (one) for CON_ID mean different things in the various gv$ views, and I ended up with better scripts, and a slightly better understanding of my systems.

For the innocent reader: Never fully Trust your own Scripts.. In my case, I notably "missed" some GB sizes of datafiles in the CDB$ROOT...  Not a huge thing, but still indicative of what can go wrong.

For Oracle folks (if you guys are reading), I would suggest a few tidbits:

 - Document the values for Con_Id=0 and Con_id=1 a little better. I found the relevant info in v12c, but not in the "latest version" page for v26ai. I suspect my google-search skipped the relevant pages for v26ai?

- Consider some extra view that unites and explains all the values for CON_ID in one place,  similar to my inline-view experiment: all_con_ids ?

And yes, my pwd used to be similar to that of agent Mulder: Trus7No!. Until they started demanding a minimum of 42-characters...


-- -- -- -- -- End of this Blogpost - for now -- -- -- -- --

Appendix: Script with demos.


demo_con_ids.sql : run this in some CDB$ROOT (with a lot of busy PDBs) to see for yourself.

-- -- -- -- -- End of this Blogpost - For Real -- -- -- -- --

Sunday, 8 March 2026

Oracle 26ai with Assertions - a quick look under the cover...

 TL;DR: Some Very Simple tests of Assertions in Oracle 26ai. I want to know if this feature has any impact, and I want to look a little bit closer, or even "under the hood".

This SQL-Assertion feature is still quite fresh, and I feel a bit like that boy looking at an item in development...


<smallprint>

Warning: Do Not Trust This Information Yet, and Do Not Rely on it.

My tests here are very simple, blunt. And I would not claim any "solid finding" here yet. I also suspect several things will happen that make the blog below go "out of date". Please note the caveats...

1- Oracle will further develop the feature and change some of the items I describe here below. For example, I expect new wait-events to appear that can pinpoint SQL-Assertions.

2- The real geeks (ChristofL, KamilS,  StefanK) will soon start using oracle-tracefiles on this feature, and learn more about the "inside" (the famous 10046 and 10053 traces, google them if you are interested). I will sit back and wait for others to do the Real Work, while I just stick to SQL.

Also Please note that this sort of "digging inside" is generally only useful if you need to go troubleshooting, or if you are just plain-Curious (like me). 

Hence, what I found below is just "scratching the surface" and is mainly written for my own curiosity. And Maybe for use when I need to know if an assertion is causing problems in a real system. But that day is still somewhere in the future...


Just SQL...

As I am a fairly Simple DBA, I try to do all my work with just SQL (SQL*Plus, SQLcl, SQLDeveloper, T.o.a.d., etc...) and I have tried, for all my years as a DBA, to refrain (mostly) from diving into trace files or stack-traces or linux-level debugging. This blog will do "SQL only", which might keep it more accessible for the normal ppl out there... 

</smallprint>


Background: SQL-Assertions, what can I see... ?

The previous blog described my first-use of SQL-Assertions (link). After testing a very simple use-case for the new feature (link), and sort-of demonstrating that it works as advertised, I tried to take a closer look. 

Firstly, I wanted to see if+how the Assertions created "extra effort", extra load on my system, and how I can determine that any extra load is coming from "SQL assertions". And I think I found a way to show that extra effort.

Secondly, I found some statements on the "new objects" in my schema. It is probably all related, so I tried to examine some of it. Here is what I found so far.

(and as my notes.txt grows,  and my experimental scripts increase, there may be material for more than 1 blog, but not sure how useful this digging into a fresh feature really is...)


My Research Question: Is there extra Load?

First thing I noticed was there were some tables in my schema that I had not created myself, and they seemed related to the SQL-Assertions. Check for yourself with  USER_TABLES... But since I could not read or manipulate the data in those tables, I left them alone at first. I suspect  that, if they remain visible in future versions, there will be blogs + traces about those tables in the future.

Because Assertions will have to check conditions "on the fly" during DML, I expect there to be some extra effort, extra load, on my database. Hence I set out to measure the "work done by the database".

For my "research", I the started with two things:

1. Session-statistics and the "time model" to have my session report on "effort done"

2. Checking the SQL done by the session from V$SQL, and seeing if I can spot a difference with and without Assertions.

In the course of looking for those SQL stmnts, I also found some of the "under the covers" SQL caused by SQL-Assertions.


Experimental setup: Just the Assertion, then run with and without.

From the original test-script in previous blog (see link above), I created a script that would insert significant data to trigger the assertion.

I saved the data from the table A_FND, originally 19 records, into a helper-table, and multiplied it to 608 records. This data I could use to "fire the assertion" many times.

I then proceeded to insert data into A_FND, and tried to measure the effort. To keep the action repeatable, I used "rollback" to make sure the table didn't grow larger during the experiments.

Then it was just a matter of running the insert several times, with and without the Assertion created. I tried to run that on a database that was as "clean as possible", notably by a flush of the shared_pool. And each run used a "fresh connection" so the v$stats of the session would reflect only the inserts I did. 

Remember that on tests like this you have to take into account the "startup-effect" as there is the overhead of parsing a stmnt for the first time, hence every run was repeated 3 times and every stmnt was done 5x inside every run (scripts in links below)
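One such run, in sketch-form (the helper-table name and connect-string are my assumptions; the real scripts, tst_as3b.sql and friends, are linked below):

-- as a privileged user, start from a database that is as clean as possible:
alter system flush shared_pool;

-- fresh session, so v$mystat reflects only this run:
connect scott/tiger@freepdb1

insert into a_fnd select * from a_fnd_helper ;   -- 608 rows, checked by the assertion
rollback ;                                       -- keep A_FND at its original size

-- repeat the insert+rollback 5x, then report the session-statistics:
@mystat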


Run it.. (get the scripts, and try them for yourself!)

After several attempts to put a good, scripted, test together, I ended up with the scripts linked below. You should be able to try these for yourself. I am curious to see if the outcome differs much from what I found.

The scripts should be run in the order presented. And you can repeat the last two scripts, tst_as3b.sql and tst_as3c.sql, as often as you like.

When running them, you will see that the "session statistics" (using mystat.sql) from insert-statements done "With" the Assertion switched on, are measurably higher.

Note Again, there was a startup-effect: the very first time running the whole set of scripts on a fresh database, the numbers tended to be higher, but after running tst_as3b several times, they are repeatable, and I would consider them reliable.

I also tested against both an Oracle-provided container-image of v23.26.1 and the image of gvenzl/oracle-free:slim, also v23.26.1. Both yield similar numbers that differ only by single-digit percentages.

There is Extra effort going on "With" the SQL-Assertion, and it is Detectable, Measurable. I consider that a good sign, because it means we found a way to  "attribute" effort to the assertion.

The outcome that I found most interesting is Here: 

Notice the difference in effort between "with" and "without" the SQL-Assertion in place. The numbers are "per execution" (e.g. ela_p_x = elapsed time per execution).

When The Assertion is defined (With..), the insert of 608 rows takes about 3x longer in CPU-time and in ELApsed-time in microseconds. And "with" uses 2606 buffer-gets instead of 142. There is a clear indication of extra effort, and it is visible "on the statement" in V$SQL, not hidden in some background-process.

In total time+effort: both statements are still well inside a single Second, and inside what I would call "acceptably efficient" for processing 608 records. Hence on a normal system, a user would probably not notice any difference. But on a busy system, the overhead for SQL-Assertions might be noticeable in AWR or in db-time consumed, e.g. capacity used. 

I want to put at least two comments on this:

 - The extra effort would likely be A Lot More if we implemented this "constraint" in PL/SQL or in Triggers instead of in an SQL-Assertion.

 - This feature is brand-new: Expect changes and improvements in the future.


Diving Deeper? More SQL-stmnts.

I could not resist examining the "new tables" that I got from creating the assertion. And for a quick search on their names in V$SQL, I used the script tst_as3c.sql.

It so happens that, in my testcase, there are some 23-26 additional SQL statements generated when we do inserts on tables involved in an Assertion. Some consume very little, some a bit more. Check for yourself.
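The core of that search looks like this (a sketch; I assume the internal objects keep the ORA$SA$ name-prefix that shows up in USER_TABLES):

select sql_id, executions, buffer_gets, elapsed_time
     , substr ( sql_text, 1, 60 ) as sql_text
from v$sql
where upper ( sql_text ) like '%ORA$SA$%'
and sql_text not like '%v$sql%'    -- exclude this search itself
order by elapsed_time desc;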

Here is a sample:


Interesting to see those stmnts...

In future tests, I would be interested to see if those stmnts "cover" the extra effort found when using the Assertion. And it will be interesting to see how those may one day show up in an AWR or ASH.

For now I am just content to "see them". In future I will probably be able to more or less guesstimate the impact of SQL-Assertions on systems, and to put together unit-tests to spot possible problems. 

And I hope to spot them "in the wild" if they ever cause Real Trouble.


Summary: Found it. And Happy so far.

I found the Extra Effort caused by SQL-Assertions.

With these tests, I was able to spot the extra effort generated by SQL-Assertions on a simple test-system. And if these items (temp-tables, SQL-stmnts) behave similarly in future implementations, I will be able to spot them, and to "test them". And I will probably be able to help fix any problems they cause.

I should also add that I think these SQL-Assertions are a Real Cool Feature, which can potentially replace many triggers and "after-the-fact" check programs. 

This can make The Database even Smarter.

#SmartDB.


-- -- -- -- -- End of this blog, for Now -- -- -- -- -- 


There are of course some appendices, notes, remarks. So here come the Appendices....


-- -- -- -- -- Appendix 1 : Links to Scripts -- -- -- -- -- 

This is what I used for this blog (and for the previous one). If you run them in this order, you should get comparable results:

demobld.sql    : Build the demo tables EMP and DEPT.

tst_ass.sql        : Script from previous blog, use this to prime the demo-test. 

mystat.sql         : Report statistics from v$mystat. Called repeatedly.

connscott.sql    : Freshly connect scott/tiger. Called repeatedly.

tst_as3b.sql      : The test-script for this blog: do tests, show results.

tst_as3c.sql        : List the other SQL...

Note: if you run on Windows, the host-read command to pause the script will not work. Easiest is to just remove it or out-comment it and examine the spool-file, tst_as3b.lst, afterwards.

The containers/images I used:

With an Honorable Mention to Gerald Venzl for providing the super-dev-Friendly images via Docker (link)

(notice that those images, both from Oracle and from GVenzl, get updated over time, generally every few months - YVMBN: your version may be newer...)


-- -- -- -- -- Appendix 2: more future items -- -- -- -- --

There is always more to investigate...

- Split up the assertion in two separate ones to avoid the OR-conditions and see if that is more efficient.. (Interesting one!!)

- What is the impact on Delete- or Update stmnts ? (on Delete, for this particular assertion, there may not be an impact at all.. )

- Examine the additional SQL in detail, see if the effort for those (background-) stmnts adds up to the difference in time+effort of the original inserts.

- Run larger tests, and examine the effects in AWR reports: Is this the good way to spot "expensive assertions", possibly find problematic ones ?

 - We could examine the ORA$SA$ tables in more details. But Is that Relevant ??

- .. You can probably think of more..

And remember, we are only geeking out.. and this is a very new Feature.. Expect more development some time soon.

Maybe Later ! 

-- -- -- -- -- End of this Blog, for Real -- -- -- -- -- 

Thursday, 19 February 2026

Assertions used in arc-relationship

 TL;DR: A simple use-case for SQL-Assertions: Adding a data-related constraint to an arc-relation. Demo using the Emp-Dept schema.


Summary up front: 

Use SQL-Assertions to add a data-related constraint covering multiple tables.

Curious? The link to the Source ( tst_ass.sql ). You can Immediately run this file, but don't forget to first run demobld.sql or utlsampl.sql to get your emp-dept tables.


-- -- -- Background: I wanted to try this... -- -- --

Oracle 26ai (link): as we write, in Feb-2026, it is at release 26.1. And it has this new feature: SQL-Assertions ( link ).

I'll put some more links to "SQL Assertions" at the bottom. The Actual Documentation and several Other sources can explain it all better than I can. 

As the feature became GA-available, I got to read about it, and I kept viewing presentations about it... It got to the front of my mind. I Wanted to Try it..

And when walking around with a Neat Solution looking for an interesting Problem, you inevitably find one...


-- -- Use-case: store audit-findings at the level of Emp, Dept, Total, or Any. -- --

Imagine you want to store+report audit-findings in a table. Those findings can be at detail-level (say, for a single Emp), or at a higher level (e.g. at Dept level), or even for the whole Enterprise (e.g. Total). But you don't want to allow the storing of a finding at a non-suitable level.

Then you have an auditor who wants to record items (automatically) and check for auditable-conditions. Those checks result in "Findings", and we want to store them in tables.

Like, for example:

 - Max-commission: applies to Emp (and can not apply to other levels).

- Min and max-headcount: applies to Dept (and not to any other levels)

 - Total headcount: employees in the organisation: applies to Total.

 - Max reports to a manager: how many Employees under a manager.

- ... anyone with more examples, let me know ... 

Our real-world case is a little more complicated, but I hope you get the idea: Definitions determine at what level a finding applies, and should be stored. When creating the Definition-record to define an audit-item, you can determine whether that definition can apply to a certain level: Emp, Dept, All, or Any.


-- -- -- Entities: Definition and Finding -- -- -- 

The problem description leads to two entities: DEFinition and FiNDing.

Audit_Definition (A_DEF):

A description of a Finding. In our case we will even try to store an SQL-statement to generate a finding so it can be run at regular intervals...

Audit_Finding (A_FND):

The result of a check or sample, stored as a record. Such a finding should be linked to the Definition, its parent-entity. But it might also be linked to some other entity to which it relates: an Emp, a Dept, or the Totality of the Enterprise, or maybe to "nothing" or some as-yet undefined entity...

The other two tables are the classic Emp and Dept from the oracle-demo schema.

Those entities have the following Relations:

  • Emp Must Have a Dept (FK, parent) - this is the classic one.
  • Finding Must Have a Definition (FK, parent).
  • Finding Can Have an Emp (FK, parent), if the finding is at Emp-level.
  • Finding Can Have a Dept (FK, parent), if the finding is at Dept Level.
  • Finding Can Not Have Both Emp and Dept, an Xor-Arc relationship
That summarizes the translation of Requirements into a Datamodel. 
For now. Before v26ai with Assertions...  

But there is one check we have not yet covered: When the Definition of a check says it applies at Dept level, it should not be stored as a Finding at Emp-level. And vice versa.

Such a check will have to compare the Definition, where we have the property "level" and the Finding itself, where it will be linked to Emp or Dept or none. Hence that check will cover two tables.

We'll try to use an Assertion to prevent invalid combinations from going into the Findings-table.


-- -- -- The data-model, a simple ERD with one arc-relation. -- -- --

To illustrate, I'll present the simplified data-model as an ERD (and drawing with whatever tools always takes more time than you think...).

My datamodel looks like this:

The ERD: 4 entities, and an X-Arc relation from A_FND to Emp or Dept.

I know there exist other modelling methods as well, but this is what I mostly use. Tables, with relations defined by lines with Os, and the crow-foot. And with the Arc denoting 1 of several... The exact drawing-conventions are something for another blog/discussion.

(<smallprint... yes, modern systems would probably use some JSON-string to record this kind of data, and probably would not build in this kind of complicated constraint into an assertion.. but IMHO they miss an opportunity for Data-Integrity /smallprint>)


-- -- The Tables, the DDL -- --

The two tables Emp and Dept are the well-known ones from the oracle-demo schema:

The Audit data and the Assertion will use a table for Definitions and a table for Findings: A_def and A_fnd.

This table holds the definitions... (there is room for some refinement, such as storing an SQL statement or an API call in the record)
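In sketch-form (column-names assumed; the real DDL is in tst_ass.sql):

create table a_def
( def_id    number        generated always as identity primary key
, def_level varchar2(10)  not null check ( def_level in ( 'EMP', 'DEPT', 'TOTAL', 'ANY' ) )
, def_descr varchar2(200) not null
);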

Next is the table for Findings:
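Again a sketch, with the same caveats:

create table a_fnd
( fnd_id   number generated always as identity primary key
, def_id   number not null references a_def   -- Finding Must Have a Definition
, empno    number          references emp     -- Finding Can Have an Emp...
, deptno   number          references dept    -- ...or Can Have a Dept
, fnd_text varchar2(200)
, constraint a_fnd_xarc check ( empno is null or deptno is null )   -- not Both: the X-arc
);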


Notice the constraints on the findings-table: we can sort-of specify the x-arc, but not quite the additional constraint we need...

Finally, the Assertion, it looks like this:
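In sketch-form, written in SQL-92 style (the assertion-name, and the exact 26ai syntax, are my assumptions here; check tst_ass.sql for the real one):

create assertion a_fnd_level_check check
( not exists
  ( select 1
    from a_fnd f
    join a_def d on d.def_id = f.def_id
    where ( d.def_level = 'DEPT' and f.empno  is not null )
    or    ( d.def_level = 'EMP'  and f.deptno is not null )
  )
);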


A careful observer would say: The OR-condition in the assertion could be used to create Two separate assertions. That is something I want to examine and play with later...


-- -- -- Now Insert some data (and generate the errors...) -- -- --

The demo file contains several error-free inserts, and 2 stmnts that trigger the Assertion... 

Here is the first error: we try to insert a Finding that is supposed to be at Dept level, but we try to attribute it to an empno:

There is the Assertion-in-action.

In trying to provoke the error, I also discovered that the FK constraint seemed to fire before the Assertion. Which, I think, makes sense.

Now try to insert an emp-level finding with a deptno:

And voila: The Assertion again prevents the insert. 

The demo-file ( tst_ass.sql ) contains a few more statements: Do Try it.

And if you want to test further: run the demo without creating the assertion, and see the (unwanted) data go in.

Notice that the error-message is the same, regardless of which part of the OR triggers the error. First remark from an Oracle-friend: If you create two separate Assertions, you get the benefit of more Detailed Error Messages. Something to remember.

And there may also be performance considerations for choosing larger or smaller assertions. Maybe Later...


-- -- -- Conclusion: I found "a case" -- -- --

I think I found a very Simple case for using SQL-Assertions. This case could have been done with Triggers, but that would have resulted in much more complicated code, divided over several triggers on the tables involved.

The Assertion (or two assertions, if you prefer that) does this more elegantly, and probably more efficiently.

And in case you missed it above: the link to sourcefile tst_ass.sql 

-- -- -- -- -- End of this blogpost, for now -- -- -- -- -- 

That is not all...

More than with other blogs I've done, there is a surprising amount of notes and links that deserve keeping + Sharing, hence some appendices...


-- -- -- -- -- Appendix 1 : Possible Future items, Several -- -- -- -- -- 

 - Refine the assertion into two (or more) and use assertion-names to better "Clarify" the error when the assertion pops (First suggestion from a proofreader)

- Highlight the fact that FKs and Check-constraints are verified before Assertions ? (is that correct? I think it is...).

- Show that data can not be modified or deleted to invalidate an assertion.

 - Investigate (verify) the dependencies between Assertions and Tables.

 - Show the internal tables used to trace and check assertions (and why are those visible in v23.26.1 ?)

 - Examine user_assertions and other related views, such as the "dependencies".

- Operation 0 succeeded (older SQL*Plus versions).

- Try testing a ridiculous check-condition... or some heavy SQL.. what happens?


-- -- -- -- -- -- Appendix 2: Links to SQL-Assertions -- -- -- -- -- 

Despite all the saved notes... I still feel like I don't have a good intro on Assertions.

When trying to find non-Oracle sources, it was surprisingly difficult to find good text on SQL-Assertions. The Assertion is part of SQL since the SQL92 standard. Yes, the concept is That Old. But the implementations are lagging. And so is the set of documentation and examples... 


To get the official ISO or ANSI standard text on SQL92, you need to purchase it.. (Huh??). But I found the almost-official text, an old page from Carnegie-Mellon University:

https://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt

(check section 4.10.4 on Assertions, a very brief text...)


This link to a book-chapter on Assertions:

https://csci3030u.science.ontariotechu.ca/chapter_7/chapter_7_section_4.html

And this (recommended reading?) on Constraints and Assertions:

https://sql-99.readthedocs.io/en/latest/chapters/20.html


And a CS course from U of Virginia, this slide-deck about  SQL-Assertions

https://www.cs.virginia.edu/~up3f/cs4750/slides/4750meet19-SQL-assertion.pdf


But to get Assertions better ingrained in the design and build of database-systems, I think I need to...

Call on all Bloggers and other CCs: Collect + Write more on Assertions ??


-- -- -- -- -- -- Appendix 3 :  Remarks on demo-scripts...  -- -- -- -- -- 

Allow me some Friendly Remarks to the folks at Oracle, not related to the assertion-demo:

 - Script demobld.sql is missing in oracle 26ai ?? It used to be under $ORACLE_HOME/sqlplus/admin/demobld.sql ? The alternative script is now located at $ORACLE_HOME/rdbms/admin/utlsampl.sql... took me some searching..

- The utlsampl.sql does a connect as scott/tiger (hardcoded password, and no SID or service specified ??) In my case, the data once accidentally ended up under the SYS-user in the CDB$ROOT (my bad for not paying attention...)

Luckily, I keep a version of demobld.sql in my own toolkit-collection of scripts-to-carry. Both on github and on my trusty USB-stick (26yrs ago that was a floppy-disk, some scripts have survived the teeth of time).


-- -- -- -- -- more notes -- -- -- -- -- 

To get an ERD, I tried submitting text and code  to chatgpt and asked it to draw me an ERD (four tables, five relations, one of which is an X-Arc, how hard can it be...).

First results were abysmal, but chatgpt asked me if I wanted x, y or z as improvement. None of the improvements were any good. And all the followup questions seemed geared towards making me spoil my remaining free questions... I won't blame openAI for trying to make a buck, but I am Not Impressed.

An ancient TOAD version did a much better job.

My Final ERDs for this blog: I used the online version of Draw.io (link). And I am Quite Happy with that. I will probably use more of that tool.

-- -- -- -- -- End of this Blogpost, for real -- -- -- -- -- 


Saturday, 8 November 2025

Oracle 26ai and UUIDs - Playing with uuid-v7

TL;DR: The UUID capability of Oracle 26ai prompted me to play with some UUID data. It also gave me ideas for some PL/SQL code. 

But first: Three Caveats:

1) Do Not use this on your Production systems (unless you Really Know what you are doing, and even then...).

2) A lot of this blogpost is trickery, or plain Bad-Practice: You Should Not put relevant Information in a technical Key. And do not depend on info that might be available in the key.

3) The Jury is still Out whether UUIDs, in an Oracle RDBMS, really are better than plain old, smaller, readable, Sequences or (Integer-) ID-columns.

The reason I still coded this, and wrote this text, is that I got inspired (got jealous?) by the python-module for UUID (link) and by some other systems. I tried to copy some of its capabilities. Additionally, Maybe some ppl out there can benefit from some of this code. And lastly, I wouldn't mind having some discussion on the use of UUIDs.


Image: UUID, clock and universe, generated by some AI tool (it is called Oracle 26ai, Twenty-Six-eeh-aai, after all...)



Background: UUIDs are coming, like it or not..

The use of UUIDs ... Will Just Happen. 
In practice, I expect a lot of new (distributed) systems will be designed to generate and process UUIDs. For "distributed systems" those keys are a necessity. Hence your data will contain UUIDs soon.

The jury is out whether UUIDs (stored as RAW, Varchar, or otherwise) are All that useful or efficient in an Oracle RDBMS.  But Given that their use is growing, Expect them to invade your RDBMS soon (if not already).

Allow me a few comments, personal opinions...

Opinion nr 1: In a lot of cases, plain-old Integer-IDs, when used sensibly, may still be usable, and perform even better... YMMV.

Opinion nr 2: For UUID, V4 and V7 are probably equally useful, and equally performant, unless you have some Extreme Use Case (or when you plan to use the hidden time-info in a UUID-V7...).

Since Oracle introduced the SYS_GUID (back in v8i already, I think), I have not seen a lot of usage of that function. Possibly because most Oracle systems are much older and will stick to Sequences.

Add to that: Experienced Oracle folks (designers, dbas, gurus..) have learned to work around the (perceived!) downsides of sequences. 
Notably:
1. The correct Caching of Sequences alleviates a lot of contention.
2. Partitioning (for time) is Conceptually Better on a true Date- or Timestamp- column than on (tweaked) UUID-V7 values.

But enough Caveats and Opinions now.



Let's Tweak Some... Start with Version.

So, after all these Cautious words, I still could not get the documentation  of the python module (link) out of my head, and decided to do some coding anyway. From the functionality included in the python module, I started with the Easy Ones...

The easiest thing to extract from a UUID is the visible-properties: Version. A function to do that would look like this:
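A minimal sketch (my full version, in the linked script, also traps NIL-, MAX- and SYS_GUID-style input with negative return-codes): in the 32-digit hex representation, the version is simply the 13th hex digit, the high nibble of octet 6.

create or replace function uuid_version ( p_uuid in raw ) return number
is
begin
  if p_uuid is null then
    return null;
  end if;
  return to_number ( substr ( rawtohex ( p_uuid ), 13, 1 ), 'X' );
end uuid_version;
/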


Easy to define, but there are some items to "catch" around the values for MIN/MAX, and how to trap invalid input in general. After a bit of coding and testing, output was generally OK. Result is intended to always be an Integer (number), or NULL, if null is given as input. 


Notice the results for min, max and the negative values that signal "invalid". A SYS_GUID() will result in a return of -2. A lot more sophistication is possible here, but since I don't have a pressing need or urgent use-case, I didn't go deeper, yet.

Note: a few days after writing, I considered the NIL-UUID should have version 0 rather than the negative -255 ??

One of the Use-Cases for UUID_VERSION could be to check or to report on "incoming data" when ingesting from other systems or sources.

And speaking of Ingesting data....



Conversions from/to Varchar (lowercase and hyphens...)

For use with ingest- and display software, it will be good to have conversions to/from Varchar2. Trivial, but useful. Hence..
 
UUID_TO_VC2 ( RAW ) => lowercase 8-4-4-4-12. This is covered mostly by the function FMT_UUID ( RAW ), already described in the previous blog-post (link). Hence a "wrapper" will do for now. Others (or myself) can improve on it later if needed.

VC2_TO_UUID ( vc2, upper/lower, hyphens, dots... ) => RAW(16). This will come in useful when ingesting data from text-files or other sources that deliver the UUID as a string.
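The string-to-raw direction, in sketch-form (a simplified signature; the linked script has the real one, with more separator-handling):

create or replace function vc2_to_uuid ( p_str in varchar2 ) return raw
is
  l_hex varchar2(64) := replace ( p_str, '-' );   -- strip hyphens, if any
begin
  if length ( l_hex ) != 32 then
    raise_application_error ( -20001, 'vc2_to_uuid: expected 32 hex digits.' );
  end if;
  return hextoraw ( l_hex );   -- hextoraw raises the error on any non-hex input
end vc2_to_uuid;
/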

Those two "converters" are fairly trivial functions, but these will notably allow easy conversion of ingested data. And especially for the "to_uuid", I want an error Raised when input is dubious or invalid. I might put a little more Effort into that later. For now using the same dataset as above:


Currently, the only check is on "length" and valid-hex. Hyphens, if any, will simply be removed from the string, and then it will go HEXTORAW. Any non-HEX will result in Error-Raised, any wrong length will also result in Raise of error...

At this moment, the errors are rather un-elegant. But I would prefer to throw an error than to ingest "corrupt" data. There is room for improvement here, such as removing "spaces" or other separators like dots, underscores etc. Maybe Later.



Time from a UUID7 (which you Should Never Use...!)

Given a RAW UUID-V7, I wanted to extract the Timestamp, either as epoch-value, as oracle-DATE or TIMESTAMP. That should not be too difficult.

These time-extractions can be used to sneakily select ranges with given date-time boundaries. They can also be used to determine the "create-time" of a given UUID-V7 (which may give an indication of the origin, or the time when a certain record was created, should you need that...)

It starts with a function to extract the epoch:
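In sketch-form (function-name assumed, full version in the linked script): per the standard, the first 48 bits (12 hex digits) of a UUID-V7 hold the Unix epoch in milliseconds.

create or replace function uuid7_to_epoch ( p_uuid in raw ) return number
is
begin
  -- 12 hex digits of milliseconds, divided by 1000: epoch-seconds with 3 decimals
  return to_number ( substr ( rawtohex ( p_uuid ), 1, 12 ), 'XXXXXXXXXXXX' ) / 1000;
end uuid7_to_epoch;
/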


Once we can extract the Epoch from a UUID-V7, we can build on that. From the epoch (with approx 3 decimals, hence milliseconds) we can convert to a Timestamp or to a Date. Hence two more functions:

UUID7_TO_TIMESTAMP  ( UUID7 RAW ) => return a Timestamp (without tz)
UUID7_TO_DATE ( UUID7 RAW ) => return a Date (precision on whole seconds)

There is a bonus function included, one that I needed anyway:
EPOCH_TO_TIMESTAMP ( TS NUMBER) => return a Timestamp (without tz).

Demo (link to file at bottom): Select some values of UUID7, including the NIL and the max-Epoch, and show the resulting Timestamps and Date-values:


There we are. And if anyone needs it: extracting the time-values from UUID-V1 and -V6 should not be too difficult. But I didn't have the need yet, and I was too lazy...

Also notice the max-epoch value resulting from UUID-MAX. Remember that number and remember that Date...



And to Abuse the time-info....

Now that we have this information, we can do something naughty, like counting records per Hour, based on just the UUID...
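Something like this (table- and column-name are hypothetical; it uses the UUID7_TO_DATE function from above):

select to_char ( uuid7_to_date ( id ), 'YYYY-MM-DD HH24' ) as day_hr
     , count(*) as num_records
from my_uuid_table
group by to_char ( uuid7_to_date ( id ), 'YYYY-MM-DD HH24' )
order by day_hr;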


We see an example of a count of records per day-hr, based on the information hidden in an ID column with keys that conform to UUID-V7.
And as you can verify, no DATE or Timestamp column was used in this demo...

I can not repeat enough how this is ABUSE of the UUID(V7)... But it might happen.

Another way to use those implicit time-values is to check incoming UUID7 keys: if the resulting timestamp is before, say, 2020, the data is probably "generated" and not "genuine", because the Standard for UUID-V7 was only designed after 2020. Just saying.



Summary and closing remarks...

Regardless of the fact that UUIDs were Firstly meant to be Unique + Random, there can be some "information" hidden in them. Other systems, notably the Python module for UUID, can extract that information. So I also built a few primitive PL/SQL functions, just to play.

We can use those functions to extract version and date/time values from UUIDs as a "forensics tool". And we can do some conversion (and formatting) between Varchar2 and Raw for use in interfaces.

I had also thought about constructing UUID-NIL and UUID-MAX, but those are so trivial that I will leave that to the next hobby-programmer.

For Practical use? ...  I would recommend to be Very Careful to Use those in Prod. A true Key-field should Never be abused for "information". Period.

Question for Oracle: Will they succumb to temptation and include extractor-functions? Or will they stand on the opinion that Key Data should be Meaningless ?
Note: The UUID-documentation is Evolving as we Write. For example, this page on function IS_UUID appeared recently in my searches... Good Sign.

Oh, and the elephant in the room: Should I demo the use of UUID-V7 for use in partitioning or other constructs with time-restricted data ?  We all know it can be done. But doing it will set a Bad Example. 

My aversion to use Key-values for anything other than Key-Reference is still strong...

-- -- -- -- -- -- End of this blogpost, for now -- -- -- -- -- --

Appendix A: links to scripts

mk_uuid7.sql :  Original from previous blogpost, with some corrections.

mk_uuid7_p2.sql : The additional functions.

-- -- -- -- -- -- End of this blogpost, For Real -- -- -- -- -- --

Saturday, 1 November 2025

Oracle 26ai - UUID v4 available, and looking at v7, Interesting!

TL;DR: in Oracle 26ai, there is the new function UUID(), which implements the UUID-V4. Someone also mentioned UUID-V7. I started experimenting with the Possibilities.

And of course I expect (hope) Oracle implements other versions of UUID quite soon...

(For the purists on Standards and Database-IDs, keys: Several Caveats included. Don't do this in Production unless You Know What You are Doing...)


Image: some Universe with the UUID.



Background: UUID are everywhere (but I avoided them)

So far, I have not encountered UUIDs much in Oracle. Some other databases use UUID, and to me it was mostly a funny-looking random-string. I was aware of the Benefits of UUIDs: Uniqueness + Randomness, even when generated at different places and times. And in Oracle there was the SYS_GUID() function, should we need it. But I don't think I ever used it.

That May Change: Because in Oracle 26ai, there is a function to generate UUID-V4 keys. 

Here is the link to the Standard Paper (which I didn't Study in Detail myself...)

Jonathan Lewis experimented some; he also dug into the error-messages, and pointed out that Oracle will probably also implement the versions 1, 2, 3, 7, and 8 (link to forum).

In the Database-world, the growing consensus seems to be that UUID-V7 is Better. Notably, this article has an interesting example of someone who tried to benchmark, and concluded that "in his case" the UUID-V7 was better in a number of scenarios.

We also had at least 1 Oracle-DBA who was Actively Asking for UUID-V7. And as Jonathan Lewis showed, the indications are that this will be implemented at some point in the (near) future.



Let's write some code...

DBA's being Data-Geeks: Shortly after the intro of UUID-V4 in Oracle came the discussion whether UUID-V4 was good, was usable, or was just plain bad for use (as a key/reference/pointer) in an Oracle RDBMS.

Geeks have to Geek, hence... Jasmin Fluri wrote a PL/SQL function to generate UUID-V7 keys and blogged her findings here (link). I could not resist, and took that code, tweaked it a bit, and started experimenting as well. As a result, I ended up with a few functions that I can share, and that I might use in further experiments. 

Link(s) to script(s) are at bottom of blog, in case someone wants to mess with it.



Raw, but (not very) Readable...

The Basis of the UUIDs is a 128-bit number, and this wiki-page on UUIDs is quite extensive.

Given the definition of the standard, and the nature of the documentation found so far, the consensus between myself and a small group was: Store this as RAW(16) for the moment (we might re-consider that later, and of course ymmv...).

With a copy of the code from Jasmin, I was quickly able to create UUIDs of V7.

For the complete source: link at bottom.

I was now able to create UUID-V7 and experiment with them.

From working with "the other database", I was used to see UUIDs in a certain format. And I sometimes had data-sets delivered to me with (quoted) GUID-keys in them. They are generally displayed as 8-4-4-4-12, as per Standard. Both the Standard and the Wiki-page also states that the Lowercase is preferred. 

At this point I regretted that Oracle did not have (yet) a data-type for UUID. In the other database, I can define a column as a UUID, and get the display-formatting automatically on selecting the column.

Oracle provides the function RAW_TO_UUID, but at this moment, Oct 2025, with v23.26ai, that only works on UUID-V4 keys.

So out of habit, I wrote a formatting-function to facilitate display of generic UUIDs. The result was :
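A minimal sketch of that formatter (the linked script holds my real version):

create or replace function fmt_uuid ( p_uuid in raw ) return varchar2
is
  l_hex varchar2(32) := lower ( rawtohex ( p_uuid ) );
begin
  -- canonical, lowercase, 8-4-4-4-12
  return substr ( l_hex,  1, 8 ) || '-'
      || substr ( l_hex,  9, 4 ) || '-'
      || substr ( l_hex, 13, 4 ) || '-'
      || substr ( l_hex, 17, 4 ) || '-'
      || substr ( l_hex, 21    );
end fmt_uuid;
/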

As we can see, the function is able to format most forms of UUID, including the old SYS_GUID() from Oracle v12.x. Can you spot the V4 and V7 UUIDs?



Time-ordered keys, hence I want Boundaries, Ranges.

The UUID-V7 is specifically time-ordered. That means that if you use them in an ORDER-BY, the values come out sorted by "created-date".

Thus, if I wanted to select all the records with UUID-V7 keys generated between T1 and T2, I needed to be able to specify T1 and T2, Precisely, and in a RAW, UUID usable type.

This called for a function that would generate the UUID7 for a Given Timestamp. The result was :
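A sketch of that idea (it assumes execute-rights on DBMS_CRYPTO for the random tail; my real version, linked below, instead stores readable time-info in that tail, and skips the variant-bits):

create or replace function uuid7_ts ( p_ts in timestamp default systimestamp ) return raw
is
  l_intv interval day(9) to second(3);
  l_ms   number;
  l_rand varchar2(18);
begin
  -- whole milliseconds since the unix-epoch, taken from the given timestamp (in UTC)
  l_intv := sys_extract_utc ( p_ts ) - timestamp '1970-01-01 00:00:00';
  l_ms   := round ( ( extract ( day    from l_intv ) * 86400
                    + extract ( hour   from l_intv ) * 3600
                    + extract ( minute from l_intv ) * 60
                    + extract ( second from l_intv ) ) * 1000 );
  l_rand := rawtohex ( dbms_crypto.randombytes ( 9 ) );
  return hextoraw (
           lpad ( to_char ( l_ms, 'fmXXXXXXXXXXXX' ), 12, '0' )   -- 48 bits of epoch-millis
        || '7' || substr ( l_rand, 1, 3 )     -- version-nibble 7, plus 12 random bits
        || '8' || substr ( l_rand, 4, 15 ) ); -- variant '10xx' (fixed to 8 here), random tail
end uuid7_ts;
/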

By specifying a timestamp, I obtain the corresponding UUID. And I placed some irrelevant (?), but informative data in the "random bytes" of the last two groups.

Caveat - Note: I ignored the "variant" information from the Standard-spec. Something to think about later.

The results, output, can look like this :


The first two lines in the result are the UUIDs generated with explicitly-specified SYSDATE and SYSTIMESTAMP. And the time is "readably" stored in the last two groups of 4 + 12 bytes. You can recognise the ID generated from SYSDATE in the first record by the last two bytes: Because the DATE is only accurate to the Second, the Centi-Seconds are "00".

You can experiment for yourself with the peculiar difference between :

Select uuid7_ts ( systimestamp) from dual connect by level < 3;

Select uuid7_ts ( )             from dual connect by level < 3;

There might be discussion on the implementation-possibilities there... Later.

Main point is: I can now filter the UUIDs generated before, after or between timestamps. 

Caveat! ... This is Not Best Practice (abuse of "information" in key-fields).. But it can be Very Useful when analyzing data per hour / day / week.


Closing Summary (hmmm, not yet closing...).

It looks like we can now generate UUID-V7 values. And we can even generate them with "given timestamps". 

We have the following functions:

- FMT_UUID ( UUID ) : return a varchar2 with uuid-format 8-4-4-4-12 for "display" usage. For formatting the other UUIDs, since Oracle chose to accept only UUID-V4 for their RAW_TO_UUID (), at this moment (Oct 2025).

- UUID7 () : return a RAW(16) with the UUID of V7

- UUID7_TS ( Some_TS ) : return a RAW(16) with a UUID-V7, but with the time-part set to the given parameter. 

I look forward to playing with those ...


Because there is More...

Now, Once we have the ability to generate "precisely timed values", we can use those to allow partitioning on Time-ranges. Something for another session. 

Then there is the question: Do I want to Know the Time ? The UUIDs of type V1, V6 and V7 contain timestamp-bits. I can potentially extract the time-of-generation from those. Would this be useful? 

For a future version, I would also consider creating a package to bundle all my UUID-related functions. A package will also allow "overloading" the two functions UUID7() and UUID7_TS() into a single "name". Something to think about.

Give me some time to think and experiment...


Or.. You can experiment Yourself. 

The best learning is from the things You Do Yourself - so Go tweak this code.

Enjoy !

-- -- -- -- -- -- End of blog text, for now -- -- -- -- -- --

Appendix A : links to Scripts..

mk_uuid7.sql : create the functions, and some test code to verify.

demo_uuid_fmt : demo the formatting.

-- -- -- -- -- -- end of this blog-episode, for real -- -- -- -- -- -- 

 

Sunday, 28 September 2025

Oracle23ai, python, and v$history for your developers...

TL;DR: We got the v$sql_history feature configured for use in development. Python programs can now report on the SQL statements done Exactly by Them.
Will this replace sql_trace one day ? 

Image: (copied from some "History" page...) Keeping History is Important. 


Background: What SQL you did spent your time on...?

After tackling most of the latency-problems, the "suspicion of slow SQL" came back. Not in the least because most python programs started reporting their "App-time", "Network-time" and "DB-time" components. So now we need to find the slow-SQL again...

From AWR and ASH, we have a fair idea what the most over-all time-consuming SQL stmnts are. And they are "known good": they need to be running, and they have little room left for improvement.

But what else is there? Which stmnts remain "under the radar"? Which New stmnts need attention, and which ones might be performing erratically?

Hence the wish: Let Each program report on "What it Did in the Database", and notably, what were the most time-consuming SQL-stmts. 

This is what v$sql_history looked Ideal for...


Earlier attempts... so-so.

In a previous blog (link), I described how I hit the limits of sql_history: only on 23ai, and only when the parameter was set. I made a feeble attempt (used an old trick) to report on SQL done by specific programs or modules, based on "module" in the session-properties. This worked to a point, but it wasn't what I wanted. Not after I had seen the possibilities of v$sql_history.

Additional note + disclaimer: I've tweaked a lot of Oracle databases, and I have also recently experimented with the active_session_history that was built into Yugabyte. I guess I am in hobby-obsession-territory here (link to sibling blog).

Oracle-wise, I would expect a (hidden) parameter to overcome that limit of 60? But searching the view x$ksppi didn't seem to give me any candidates, so I started asking around...


Friends (from oracle) to the rescue...

Martin Berger, from Oracle RWP-Group, pointed me to his blog (link) : Yes, there was a hidden parameter to help increase the nr of stmnts kept in v$sql_history (and it turned out I had queried for hidden parameters against a v19 database... sigh)

And a week later, Connor also "answered the question" in his office-hours session of 24 Sept 2025. Thx Connor! (link).

Of course you need the, non-default, enabler:

Alter system set sql_history_enabled=True;

And then this (hidden) parameter to allow more history than the dflt 50:

Alter system set "_sql_history_buffers"=10000 ;

Setting those parameters had my problem fixed, to some point. Development happens mostly on laptops, the Dev's very own machines, using the 23ai containers from Gerald Venzl (link). Hence, I could have the feature enabled and buffers set for developers fairly easily (those containers have a very easy provision to allow scripts at db-create-time and/or at db-startup time. Great tools for simple-devs!).


The (python) code.

Firstly, I experimented with SQL*Plus, and the result was the script mysqlhist.sql (link below). I'll probably improve that over time as it gets used. Notice that I report "aggregated" values per SQL. In some cases you might want to report the complete history, all statements in order of execution. That list can get Very Long Very Quickly... I was more interested in which of the statements took most time overall, i.e. accumulated.
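The core of mysqlhist.sql, in sketch-form (I assume the column-names here from the output shown further below; elapsed_time is in microseconds):

select sql_id
     , count(*)             as execs
     , sum ( elapsed_time ) as ela_time_mus
     , min ( substr ( sql_text, 1, 60 ) ) as sql_text
from v$sql_history
where sid = sys_context ( 'USERENV', 'SID' )
group by sql_id
order by ela_time_mus desc
fetch first 10 rows only;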

But my original search into history was for the python programs...
 
A function got included into my generic py-file with oracle-stuff, ora_login.py. This import-able file began life with the login-function, and gradually grew to contain my oracle-toolkit. Link below.

I included the checks on version and parameters, just in case, so when someone accidentally runs this code against the wrong database, nothing bad happens:
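A simplified sketch of that function (the real one lives in ora_login.py; the view- and column-names are assumed, matching the SQL*Plus script above):

def ora_sess_hist(ora_conn, max_rows=100):
    """List this session's v$sql_history, aggregated per sql_id.

    ora_conn is an open python-oracledb connection."""
    cur = ora_conn.cursor()
    # guard: the parameter only exists on 23ai+, and the feature must be enabled
    cur.execute("select value from v$parameter where name = 'sql_history_enabled'")
    row = cur.fetchone()
    if row is None or str(row[0]).upper() != 'TRUE':
        print('-- v$sql_history not available on this database')
        print('-- (needs 23ai or higher, and sql_history_enabled=true).')
        print('-- No history report.')
        return
    cur.execute("""
        select sql_id, count(*), sum(elapsed_time)
             , min(substr(sql_text, 1, 60))
        from v$sql_history
        where sid = sys_context('USERENV', 'SID')
        group by sql_id
        order by 3 desc
        fetch first :n rows only""", n=max_rows)
    print('sql_id         execs  ela_time_mus  sql_text')
    for sql_id, execs, ela, txt in cur:
        print(f'{sql_id:<13} {execs:6} {ela:13.0f}  {txt}')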


Notice: this test was against v19, and it only produces the friendly 3-line message. The same message will appear if history-enabled is not set.

But when a call to the function ora_sess_hist ( ora_conn ) is done at the end of a python program that is connected to an "enabled" database, it will use the ora_conn to query v$sql_history. And the output will look like this:


Notice how, at the end of the run, this program called two functions: first to report history, listing the SQL (limited to 100 lines, to keep it readable). We notice 8 different SQLs, one of which was executed 1000 times, totalling some 236495 microsec. But the "biggest SQL" in this run was the one that was executed 5 times, totalling 5.6 seconds.

After listing the history, this program also calls the "ora time spent" function to report how much time it has spent in the various components: in App, in DB, and (a guesstimate) in Network doing RoundTrips. You can combine this time-reporting with the linux time command to double-check (see previous blogs as well).

Notice that the DB-time, listed as 6.612 sec, corresponds roughly with the sum of the elapsed-time values (which are in microseconds!). Consider this a useful double-check.

This example of course contains just some bogus SQL for demo-reasons. The self-test of ora_login.py will yield similar limited results.
But please feel free to test this yourself and let me know. 


What we don't quite know yet...

The main uncertainty is the behaviour of sessions with the new parameters. This is probably why the feature is not enabled by default (yet), and why the value of "buffers" is (IMHO) rather low at 50 or 60.

On our current dev-container-version, v23.9.0.25.07, the default value for sql_history_enabled is False, and when enabled, the dflt buffers seem set to 50. If this parameter ever hits "Production" with high values, what will be the effect on PGA and behaviour ? Where are the practical limits, and what do they depend on....?

There are a few other anomalies to note, possibly to investigate and keep an eye on in future releases:

Buffers? 
The actual nr of records in the view that came out was mostly just a few more than the setting of "buffers", say 8-10 lines more. Not sure why/how.
(future test: various buffer-settings, and the actual impact on PGA with/out history)

Session-history private?
Sessions by the same user could see each others records. This may be why the SID is part of the view. Not sure if this is intentional or not. I'm curious to see how this feature will end up in Prod.
(future check: can system or sysdba also see all history from every session ?  )

Double counting?
Rows 2 and 6 from sql_hist in the screen-print above: The pl/sql block from row 6 contains the create-stmnt from row 2. I suspect there is a double count in there. Not a big problem, but I'd like to verify that some time. There is probably some way to avoid that using top_level_sql_id... (Later!)


Summary: Got it Working.

This now works for Dev, and can work in production if/when we ever get this feature in a Prod database. We can call a python function to list the v$sql_history, or at least the last 10k statements from that program.
With some checks, that function-code can now also be run without causing errors in Prod, on v19 or another database, or on a system with the history parameters "disabled", as they are by default.

I would still be careful to use this feature "in anger" as the effect on PGA-memory is not really known or tested yet.

And if needed, we can always revert to searching statements from v$sqlarea, as demonstrated in previous blog (link).

In future, I might even enhance the python-function to save the history to a table before exiting the program. Since history is currently only taken in DEV, on machines and databases that are very volatile (many re-starts, re-creates...), saving that data is not practical or useful, yet. 

Once history is also available in pre-prod and Production systems, I might want to persist history before exiting the program. Until then, Prod will have to rely on AWR and ASH to find problematic SQL.

And I can see how this feature may one day replace sql_trace...


Wishlist... Yes!
(Someone at Oracle is probably also thinking about this ...) 

Session: I would like to be able to set the parameters on a Session basis and only consume resource for specific sessions that Need/Want to keep their History. A bit like sql-trace, but without the file-interface.

AWR/ASH: I expect the v$sql_history to be available in combination with AWR or ASH in the near future.


-- -- -- -- -- End of this blogpost, for now -- -- -- -- --

Appendix: links to source-files.

mysqlhist.sql :  for use in SQL*Plus or SQLcl

tst_hist.py : short demo program.

ora_login.py : contains the actual function ora_sess_hist ( conn )

.env : used by dotenv, edit to add your own credentials.

Also needed are these python components:

pip install oracledb : the oracle driver

pip install python-dotenv : dotenv (used to store credentials)

Additionally: the grants...

To get to your v$ views, ... If you can get it past your DBA and your security-folks,... I can recommend these two:

grant ADVISOR to scott ; 

grant SELECT ANY DICTIONARY to scott ; 

Your account still won't be able to see data in other schemas, but you will see the v$stuff, all the relevant views...

-- -- -- -- -- End of this blogpost, For Real -- -- -- -- --