Saturday, 2 May 2026

Use X-windows on your Oracle containers, Simple...

TL;DR: To run some of the GUI tools that come with Oracle 26ai, you need X-windows. Let me show you how I did that on my containers.

I needed this "solution" several times in the last few years, and I got tired of searching for it every time; hence this post is also a "reminder to self".

And this how-to-X11 post is hopefully the setup to a series where I want to use the output from the Oracle Database Configuration Assistant (DBCA) to Create a Database. To use that assistant, I first had to set up X-windows (or X11, as it is often called).

Note that this X11 pre-dates the MS-windows that most of you know + use...

Image: the Oracle11 installer. I couldn't find an older one, but this installer goes back to v7 or maybe even further.


Background: I want to run those tools, hence I need X11

For some of my hobby / experiments / research, I am using container-images with Oracle (database) software on them.

One source is the official Oracle container-registry (link). I took mine following the links to "Database" -> "Enterprise".

Another very convenient source is the collection of "FREE" images maintained by Gerald Venzl (link). The latter are hassle-free, license-free, Very Easy to pull and run containers you can use for Dev, Testing, or just playing around with an otherwise Enterprise-Grade database (ok, some limitations apply... rtfm ;-)  )

In both cases, I also wanted to run the "assistants" that come with the Oracle Database. I can normally manage quite well without those, and I would most of the time recommend you do your Research and use proper scripts to do your work. But in some cases, I also recommend using those assistants to "ask Oracle for a second opinion". 

In my case, I notably have this hobby-project of Creating a Database from Scripts rather than from recovering an existing database or a pre-built template. More on this in some older posts (link to 2023) and maybe in an upcoming blog-series (future links here...).

You can find more on X-windows, or X11 as it is often called, here (link)


Installing X11, using yum, should be simple, right?

(I ended up spending a lot of time on this, several times over, and it isn't perfect yet...)

So I started my container, as usual, with this (simplified) command:

That is (simplified) how my container normally runs. And once the initial pull is done, this is a Fast and convenient way to get a database up.

I have defined variables for:

SRC_IMAGE : where to find the image to pull

CONT : the name and hostname of the new container.

YUM : because not all containers have yum, sometimes YUM=microdnf.

And then I start it up...  
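In sketch-form, and hedged: the image name, port-mapping and container-name below are placeholders of my own, not the exact values from my script (the real one, mk_cont_x11.sh, is linked at the bottom). The sketch defines the variables and then echoes the run-command as a dry-run, so you can inspect the command before removing the `echo` and actually launching:

```shell
# Hypothetical values -- adjust to your registry, image and naming.
SRC_IMAGE=container-registry.oracle.com/database/enterprise:latest
CONT=ora26cont
YUM=yum            # on the FREE images: YUM=microdnf

# Dry-run: echo the command first; remove 'echo' to really start the container.
echo docker run -d --name $CONT --hostname $CONT -p 1521:1521 $SRC_IMAGE
```

On my setup podman works identically; just replace docker with podman.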

Note that you can do many more sophisticated things on startup: rtfm on the registry-docu and on the page describing the free-images - there are many possibilities! I can notably recommend you map some volumes for data- and diag-data. But that is outside of the scope for this item.

These container images do not yet contain X11, so I had to get that myself. From ancient memory and some googling-around, I knew that it could be yum-installed (yum works on the enterprise-image; on the FREE images you can use microdnf).

Well, this time yum didn't work so easily:

It didn't find xorg-x11-apps...

Yum isn't quite my expertise, but after a lot of searching and a tip from Kamil (the axeman himself, no less), I found I had to zap the file /etc/yum/vars/ociregion: move it aside and replace it with an empty one. Here is the relevant part of my script; the link below contains the complete script:
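A sketch of that fix. To keep it runnable anywhere, I demonstrate the file-shuffle on a scratch copy of the yum-vars directory; inside a real container you would prefix each command with `docker exec $CONT` and then finish with `$YUM install -y xorg-x11-apps` (the "-ams" region-value below is just an example of what broke my repo-URLs):

```shell
# Scratch copy of /etc/yum/vars -- in a real container, use docker exec instead.
VARS=$(mktemp -d)/etc/yum/vars
mkdir -p "$VARS"
printf '%s\n' "-ams" > "$VARS/ociregion"     # example region-value that broke the repo-URLs

mv "$VARS/ociregion" "$VARS/ociregion.bak"   # keep the original aside
: > "$VARS/ociregion"                        # create a new, empty ociregion
ls -l "$VARS"
```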

Those docker (or podman) commands made it work. 

I used docker-exec to move the ociregion file and create an empty one. From that point on, yum (or dnf, or microdnf) was able to install X11. 

And at every new container, I am now happy to see those eyes appear:

Success !  (for now). Proven by the eyes... 


About the X-server...

Note that when you start an X11 app anywhere, you need to make sure your laptop runs the X-Server to receive and display the resulting windows. That is why everyone will "test" with xeyes or xterm. In my case, I use XQuartz on the macbook. On windows you might use MobaXterm or Xming. Make sure your X-Server is running and reachable, otherwise you will see "cannot open display" or a similar error when starting any X11 applications.

Notice how I told xeyes where to send its output with the -display option. Once you log on, you need to set+export the DISPLAY value in your shell-session. That DISPLAY needs to point to your X11 server (yes, it is called a server...)
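A minimal sketch of that session-setup. The address host.docker.internal is an assumption that works on Docker Desktop; on other setups, substitute the IP-address of the machine running your X-server:

```shell
# Point DISPLAY at the machine that runs the X-server.
# host.docker.internal resolves to the host on Docker Desktop (assumption);
# otherwise use your laptop's IP-address, e.g. 192.168.1.10:0
export DISPLAY=host.docker.internal:0
echo "DISPLAY=$DISPLAY"
# then test it, for example with:  xeyes -fg violet &
```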

To prove it all worked in my case: I displayed "the eyes", in "violet". I also used to put a "green" set of eyes at the end of the final script to signal "complete". Check the link to the final script at the bottom of the post.

Personal note: My Kids loved those eyes back in the 00s, and I often produced them just for fun. The eyes follow the mouse-pointer, really nice.

That was not all that difficult... however... we are not done yet.


The dbca (not) working ?? (not on 1 laptop, not on the newest...)

Once I had X11 on there, I could normally log on as oracle (the default user), export my display, test it with xeyes (looked Good), and then start up the DBCA...

Not in this case. The xeyes worked fine. But not the DBCA.


The DBCA does not display in readable colors. At least not on this screen. But it always worked fine on older laptops. And still does, even with the latest container-images. I've asked about this on the Oracle forum, but as of 02-May-2026, no solution yet.

My (failing) platform was: 

X-server is XQuartz, running on a macbook M5-pro. 

My screen is the nano-texture variety for macbook (expensive, but Good even in sunlight)

The "linux server" is the container pulled from Oracle, image id 224c51ce1555

At the time of writing, I have not solved this. There is a thread on the oracle forum about it (link).

My workaround was to use another, older laptop where the display of the X11 window was as-expected. That dbca feels a bit old, but also comfortable: it was there already when I started my DBA-career back in the 90s...

Using the older laptop, I was able to generate all the "create scripts" that I wanted, run them, and examine them at my leisure. But that is for another topic.


Summary : Simple, but necessary. X11 install Done.

This was a fairly simple fix, but the time went into "Researching". Mostly because I had never mastered the install of X11, and partly because I had to tweak the yum-variables provided on the (oracle-) container-image.

All is (mostly) Fixed and successfully worked-around.

All well. 

I was ready to generate "The Scripts" 


-- -- -- -- -- End of this how-to blogpost -- -- -- -- --

Appendix: the script

mk_cont_x11.sh : the shell script to create the container.

Do try. And let me know how it works for you. You can of course use this script to build your own running containers with Oracle 26ai in them. Have Fun.

-- -- -- -- -- End of this blogpost, for Real -- -- -- -- --


Monday, 27 April 2026

Oracle 26ai - The Con_ID of CDB$ROOT...

TL;DR: When your query joins V$PDBS to get the name of the PDB, you might miss the contents of CDB$ROOT. I fixed that in various ways for my scripts.

(note: this was a classic case of at least two errors on my part: not reading the RTFM on "containers" carefully enough, and a case of an INNER JOIN being too strict)

Image generated in style of X-files: Never Trust your own Scripts...


Background: Do I miss some data ??

When you are "administrating" (monitoring) your Container DataBases from CDB$ROOT, you want to see Everything. All PDBs, all datafiles, all services, all sessions and all processes. And you want to know in which PDB each item resides. Not just the con_id, but preferably the actual name of the PDB.

But when you join to V$PDBS to get a list of the pluggable databases in your container, there is no entry in that list for CDB$ROOT. 

In some of my scripts, I ended up with discrepancies. As an example, I'll take my daily "storage-check". But I had similar "issues" with v$services and v$session.

The sum of all filesizes from V$DATAFILE was (example) 999 GB. But when I tried to group the data per PDB, the total of that list was clearly less than 999 GB. Huh?

The difference was that the datafiles from the CDB$ROOT were not taken into account, because my (inner) join with V$PDBS automatically excluded the files where CON_ID=1, i.e. the files from the CDB$ROOT.

It got even stranger: when you select con_id and name from v$database, it yields Con_id=0 and name=FREE (i.e. the db_name).

So it seems that CON_ID = 0 and CON_ID = 1 are the exception-cases, and are both referring to CDB$ROOT, or to DB_NAME, depending on where you look ?

Not a huge problem, and there are several simple solutions. But still enough of an anomaly for me to take a closer look (and fix my queries...).


Note: I'll do the demo using the "Free" container from Gerald Venzl (link). If you ever quickly need a demo- or test-database, check this out. Some of our devs just love them for their ease-of-use. No license-accept, no login, just pullable-images. I got into the habit of using these containers for just about any "proof of concept" (or any blog-article). You'll have an Instant Database available on any laptop where docker or podman is installed. I might do a blog on "how to" sometime, but in general: Just read the page and get running... Most DevOps- and DBA-folks figure it out in no-time - And then they suddenly "run a local database"...


The Example: I missed GBs of diskspace from the sum of my datafiles.

For demo purposes of this blog, I started one of those containers with FREE and FREEPDB1, I then added several PDBs and then I checked for used disk-space (simplified version of my check-script in link below):
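The check was roughly along these lines; a hedged reconstruction, since the original script is behind the link (the rounding and the column aliases are my own):

```sql
-- Per-PDB totals: the INNER JOIN to v$pdbs silently drops the
-- files of CDB$ROOT, because con_id = 1 has no row in v$pdbs.
select p.name, round( sum(d.bytes)/1024/1024 ) mb
from   v$datafile d
join   v$pdbs p on p.con_id = d.con_id
group  by p.name;

-- Grand total over all files, root included:
select round( sum(bytes)/1024/1024 ) total_mb
from   v$datafile;
```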

Notice how the list of PDBs adds up to 5 x 790 = 3950 MB, but the total reported is 5673 MB. Some 1700 MB is missing from the first list, even though both queries report from v$datafile.

From looking at the SQL, or by listing just the v$datafiles, it is +/- evident that the first query missed the files where CON_ID = 1, i.e. it missed the files from the CDB$ROOT:

The two files at the top hold the 1700MB that seemed missing. And the cause is classic: the simple, old-fashioned join with V$PDBS in the previous query had left out Con_ID=1. 

Aha. A quick fix with LEFT JOIN should do the job. Or.... Even better, after poking around I also found V$CONTAINERS, and that seemed like the Best Fix (for this case):
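Something along these lines (hedged sketch, aliases my own):

```sql
-- Same check via v$containers, which does have a row for CDB$ROOT:
select c.name, round( sum(d.bytes)/1024/1024 ) mb
from   v$datafile d
join   v$containers c on c.con_id = d.con_id
group  by c.name
order  by c.name;
```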

So Far so Good. This script is now working as intended again.


But What About that CON_ID for root ??

On further inspection, I noticed that my session-counts (checking +/- how busy the system is...) were also off by a good margin when compared to, say:

select count(*) from v$session;

or even 

ps -ef | grep $ORACLE_SID

Same problem of course: when I had done my quick+dirty equal-joins to V$PDBS on CON_ID to pick up the PDB-NAME, I had suddenly eliminated all of the sessions for non-PDB connections.

But it got stranger: Here is my check on GV$SESSION, after putting in the "fix" with GV$CONTAINERS to pick up the pdb_names and CDB$ROOT.. 

The total shows a count of 86 sessions, but the various containers only hold ... 3+3+1 = Seven ?? How can I still miss 79 sessions ?

This is where I had to go RTFM, and the actual information was buried in an older version (v12, link) of the doc:

The values for con_id are explained as follows:

  • Con_ID = 0: Pertains to the Entire CDB (example: V$DATABASE contains the value 0, and this is probably why many "background" sessions have Con_ID=0)
  • Con_ID = 1: Pertains to the CDB$ROOT (for example in V$CONTAINERS and V$DATAFILE, you will find the value of Con_id=1, as I found with my storage-check)
  • Con_ID = 2: Pertains to the PDB$SEED.

Other, higher values of Con_ID correspond to proper PDBs that have been created, cloned, or plugged into the CDB.

And because there is no Con_ID = 0 in the V$CONTAINERS, I missed some 79 sessions in my uninformed count...

Note that when you check the same RTFM-page for 26ai (click on "view latest" in the previous link) there is no clear mention of this anymore...? This information has either moved to other parts of the documentation or was considered overly confusing (I'd say: Not Knowing this is even more confusing, but maybe I dig too deep...).

In this case, a little more fixing was needed, and the fix did end up as a LEFT-JOIN:
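In sketch-form (hedged; the '-root-' label is an improvised NVL-value of my own for the rows without a match):

```sql
-- LEFT JOIN keeps the con_id = 0 sessions, which have no
-- matching row in v$containers; NVL gives them a label.
select s.con_id,
       nvl( c.name, '-root-' ) container,
       count(*)                nr_sessions
from   v$session s
left   join v$containers c on c.con_id = s.con_id
group  by s.con_id, c.name
order  by s.con_id;
```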

And QED: we have the sessions, counted per container, and we did include the sessions where Con_ID=0. The sum of the parts is equal to the whole again. All is well.

For a more elaborate fix, I even experimented with a view that I named All_Con_IDs: a Union of V$PDBS and V$DATABASE. The result was something along these lines. Here is my check-count for "processes" using an inline-view version:
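A sketch of that inline-view idea. Note: I write it here with V$CONTAINERS instead of V$PDBS, so that the CDB$ROOT row (Con_ID=1) is covered too; the process-count is just the example at hand:

```sql
-- Hedged sketch: unite all con_id values in one inline view.
with all_con_ids as (
  select con_id, name from v$containers   -- con_id = 1 and the PDBs
  union all
  select con_id, name from v$database     -- con_id = 0, name = db_name
)
select a.con_id, a.name, count(*) nr_processes
from   v$process p
join   all_con_ids a on a.con_id = p.con_id
group  by a.con_id, a.name
order  by a.con_id;
```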

Using this CTE (or inline view), I got the satisfaction of seeing the proper name of the (container-)database displayed with Con_ID=0, instead of some self-designated hardcoded "-root-" value from an improvised-NVL.

Needless to say: I didn't seriously implement that "fix" in a self-made view (yet), but it was an interesting idea for a moment: unite all the Con_IDs in a single view/place, to join to various sources where and when needed...

Maybe someone out there knows of a similar view that already exists ?


Summary: Check your scripts when using Con_ID (and Trus7No!)

From spotting an anomaly in my scripts, I ended up digging into CON_ID values. I discovered that the values 0 (zero) and 1 (one) for CON_ID mean different things in the various gv$ views, and I ended up with better scripts and a slightly better understanding of my systems.

For the innocent reader: Never fully Trust your own Scripts.. In my case, I notably "missed" some GB sizes of datafiles in the CDB$ROOT...  Not a huge thing, but still indicative of what can go wrong.

For Oracle folks (if you guys are reading), I would suggest a few tidbits:

 - Document the values for Con_Id=0 and Con_id=1 a little better. I found the relevant info in v12c, but not in the "latest version" page for v26ai. I suspect my google-search skipped the relevant pages for v26ai?

- Consider some extra view that unites and explains all the values for CON_ID in one place,  similar to my inline-view experiment: all_con_ids ?

And yes, my pwd used to be similar to that of agent Mulder: Trus7No!. Until they started demanding a minimum of 42-characters...


-- -- -- -- -- End of this Blogpost - for now -- -- -- -- --

Appendix: Script with demos.


demo_con_ids.sql : run this in some CDB$ROOT (with a lot of busy PDBs) to see for yourself.

-- -- -- -- -- End of this Blogpost - For Real -- -- -- -- --

Sunday, 8 March 2026

Oracle 26ai with Assertions - a quick look under the cover...

 TL;DR: Some Very Simple tests of Assertions in Oracle 26ai. I want to know if this feature has any impact, and I want to look a little bit closer or even "under the hood".

This SQL-Assertion feature is still quite fresh, and I feel a bit like that boy looking at an item in development...


<smallprint>

Warning: Do Not Trust This Information Yet, and Do Not Rely on it.

My tests here are very simple and blunt, and I would not claim any "solid finding" here yet. I also suspect several things will happen that make the blog below go "out of date". Please note the caveats...

1- Oracle will further develop the feature and change some of the items I describe here below. For example, I expect new wait-events to appear that can pinpoint SQL-Assertions.

2- The real geeks (ChristofL, KamilS,  StefanK) will soon start using oracle-tracefiles on this feature, and learn more about the "inside" (the famous 10046 and 10053 traces, google them if you are interested). I will sit back and wait for others to do the Real Work, while I just stick to SQL.

Also Please note that this sort of "digging inside" is generally only useful if you need to go troubleshooting, or if you are just plain-Curious (like me). 

Hence, what I found below is just "scratching the surface" and is mainly written for my own curiosity. And Maybe for use when I need to know if an assertion is causing problems in a real system. But that day is still somewhere in the future...


Just SQL...

As I am a fairly Simple DBA, I try to do all my work with just SQL (SQL*Plus, SQLcl, SQLDeveloper, T.o.a.d., etc...) and I have tried, for all my years as a DBA, to refrain (mostly) from diving into trace files or stack-traces or linux-level debugging. This blog will do "SQL only", which might keep it more accessible for the normal ppl out there... 

</smallprint>


Background: SQL-Assertions, what can I see... ?

The previous blog described my first-use of SQL-Assertions (link). After testing a very simple use-case for the new feature (link), and sort-of demonstrating that it works as advertised, I tried to take a closer look. 

Firstly, I wanted to see if+how the Assertions created "extra effort", extra load on my system, and possibly how I can determine whether any extra load is coming from "SQL assertions". And I think I found a way to show that extra effort.

Secondly, I found some statements on the "new objects" in my schema. It is probably all related, so I tried to examine some of it. Here is what I found so far. 

(and as my notes.txt grows, and my experimental scripts increase, there may be material for more than 1 blog, but I am not sure how useful this digging into a fresh feature really is...)


My Research Question: Is there extra Load?

The first thing I noticed was that there were some tables in my schema that I had not created myself, and they seemed related to the SQL-Assertions. Check for yourself with USER_TABLES... But since I could not read or manipulate the data in those tables, I left them alone at first. I suspect that, if they remain visible in future versions, there will be blogs + traces about those tables in the future.

Because Assertions will have to check conditions "on the fly" during DML, I expect there to be some extra effort, extra load on my database. Hence I set out to measure the "work done by the database".

For my "research", I then started with two things:

1. Session-statistics and the "time model" to have my session report on "effort done"

2. Checking the SQL done by the session from V$SQL, to see if I can spot a difference with and without Assertions.

In the course of looking for those SQL stmnts, I also found some of the "under the covers" SQL caused by SQL-Assertions.


Experimental setup: Just the Assertion, then run with and without.

From the original test-script in previous blog (see link above), I created a script that would insert significant data to trigger the assertion.

I saved the data inserted from the table A_FND, originally 19 records, in a helper-table, generating 608 records. This data I could use to "fire the assertion" many times.

I then proceeded to insert data into A_FND, and tried to measure the effort. To keep the action repeatable, I used "rollback" to make sure the table didn't grow larger during the experiments. 

Then it was just a matter of running the insert several times, with and without the Assertion created. I tried to run that on a database that was as "clean as possible", notably by a flush of the shared_pool. And each run used a "fresh connection" so the v$stats of the session would reflect only the inserts I did. 

Remember that on tests like this, you have to take into account the "startup-effect", as there is the overhead of parsing a stmnt for the first time; hence every run was repeated 3 times and every stmnt was done 5x inside every run (scripts in links below).
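The core of each run can be sketched like this. Note that a_fnd_help is my assumed name for the helper-table; the real names are in the linked scripts:

```sql
-- One measured run, done with and without the assertion in place.
-- a_fnd_help is a hypothetical name for the 608-row helper copy.
insert into a_fnd select * from a_fnd_help;
rollback;   -- keep a_fnd at its original size, so every run is identical
```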


Run it.. (get the scripts, and try them for yourself!)

After several attempts to put a good, scripted test together, I ended up with the scripts linked below. You should be able to try these for yourself. I am curious to see if the outcome differs much from what I found. 

The scripts should be run in the order presented. And you can repeat the last two scripts, tst_as3b.sql and tst_as3c.sql, as often as you like.

When running them, you will see that the "session statistics" (using mystat.sql) from insert-statements done "With" the Assertion switched on are measurably higher.

Note Again, there was a startup-effect: the very first time running the whole set of scripts on a fresh database, the numbers tended to be higher, but after running tst_as3b several times, they are repeatable, and I would consider them reliable.

I also tested against both an Oracle-provided container-image of v23.26.1 and against the image gvenzl/oracle-free:slim, also v23.26.1. Both yield similar resulting numbers that differ only by single-digit percentages.

There is Extra effort going on "With" the SQL-Assertion, and it is Detectable, Measurable. I consider that a good sign, because it means we found a way to "attribute" effort to the assertion.

The outcome that I found most interesting is Here: 
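The screenshot behind this is not in the text; here is a sketch of the kind of v$sql query that produces those per-execution numbers (the alias ela_p_x is from the post, the rest of the query is my assumption):

```sql
-- Per-execution elapsed, cpu and buffer-gets for the test-inserts.
select sql_id,
       executions,
       round( elapsed_time / executions ) ela_p_x,
       round( cpu_time     / executions ) cpu_p_x,
       round( buffer_gets  / executions ) gets_p_x,
       substr( sql_text, 1, 40 )          sql_text
from   v$sql
where  executions > 0
and    sql_text like 'insert into a_fnd%'
order  by last_active_time;
```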

Notice the difference in effort between "with" and "without" the SQL-Assertion in place. The numbers are "per execution" (e.g. ela_p_x = elapsed time per execution).

When The Assertion is defined (With..), the insert of 608 rows takes about 3x longer in CPU-time and in ELApsed-time in microseconds. And "with" uses 2606 buffer-gets instead of 142. There is a clear indication of extra effort, and it is visible "on the statement" in V$SQL, not hidden in some background-process.

In total time+effort: both statements are still well inside a single Second, and inside what I would call "acceptably efficient" for processing 608 records. Hence on a normal system, a user would probably not notice any difference. But on a busy system, the overhead for SQL-Assertions might be noticeable in AWR or in db-time consumed, i.e. capacity used. 

I want to put at least two comments on this:

 - The extra effort would likely be A Lot More if we implemented this "constraint" in PL/SQL or in Triggers instead of in an SQL-Assertion.

 - This feature is brand-new: Expect changes and improvements in the future.


Diving Deeper? More SQL-stmnts.

I could not resist examining the "new tables" that I got from creating the assertion. And for a quick search on their names in V$SQL, I used the script tst_as3c.sql.

It so happens that, in my testcase, there are some 23-26 additional SQL statements generated when we do inserts on tables involved in an Assertion. Some consume very little, some a bit more. Check for yourself.

Here is a sample:
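A guess at the shape of that search (the ORA$SA$ prefix is the one I saw on the generated tables; the real query is in tst_as3c.sql):

```sql
-- Recursive SQL touching the assertion-generated objects.
select executions,
       buffer_gets,
       substr( sql_text, 1, 60 ) sql_text
from   v$sql
where  upper(sql_text) like '%ORA$SA$%'
order  by buffer_gets desc;
```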


Interesting to see those stmnts...

In future tests, I would be interested to see if those stmnts "cover" the extra effort found when using the Assertion. And it will be interesting to see how those may one day show up in an AWR or ASH report.

For now I am just content to "see them". In future I will probably be able to more or less guesstimate the impact of SQL-Assertions on systems, and to put together unit-tests to spot possible problems. 

And I hope to spot them "in the wild" if they ever cause Real Trouble.


Summary: Found it. And Happy so far.

I found the Extra Effort caused by SQL-Assertions.

With these tests, I was able to spot the extra effort generated by SQL-Assertions on a simple test-system. And if these items (temp-tables, SQL-stmnts) behave similarly in future implementations, I will be able to spot them, and to "test them". And I will probably be able to help fix any problems they cause.

I should also add that I think these SQL-Assertions are a Real Cool Feature, which can potentially replace many triggers and "after-the-fact" check programs. 

This can make The Database even Smarter.

#SmartDB.


-- -- -- -- -- End of this blog, for Now -- -- -- -- -- 


There are of course some appendices, notes, remarks. So here come the Appendices....


-- -- -- -- -- Appendix 1 : Links to Scripts -- -- -- -- -- 

This is what I used for this blog (and for the previous one). If you run them in this order, you should get comparable results:

demobld.sql : Build the demo tables EMP and DEPT.

tst_ass.sql : Script from previous blog, use this to prime the demo-test. 

mystat.sql : Report statistics from v$mystat. Called repeatedly.

connscott.sql : Freshly connect scott/tiger. Called repeatedly.

tst_as3b.sql : The test-script for this blog: do tests, show results.

tst_as3c.sql : List the other SQL...

Note: if you run on Windows, the host-read command to pause the script will not work. Easiest is to just remove it or comment it out, and examine the spool-file, tst_as3b.lst, afterwards.

The containers/images I used:

With an Honorable Mention to Gerald Venzl for providing the super-dev-Friendly images via Docker (link)

(notice that those images, both from Oracle and from GVenzl, get updated over time, generally every few months - YVMBN: your version may be newer...)


-- -- -- -- -- Appendix 2: more future items -- -- -- -- --

There is always more to investigate...

- Split up the assertion in two separate ones to avoid the OR-conditions and see if that is more efficient.. (Interesting one!!)

- What is the impact on Delete- or Update stmnts ? (on Delete, for this particular assertion, there may not be an impact at all.. )

- Examine the additional SQL in detail, see if the effort for those (background-) stmnts adds up to the difference in time+effort of the original inserts.

- Run larger tests, and examine the effects in AWR reports: Is this the good way to spot "expensive assertions", possibly find problematic ones ?

 - We could examine the ORA$SA$ tables in more detail. But Is that Relevant ??

- .. You can probably think of more..

And remember, we are only geeking out.. and this is a very new Feature.. Expect more development some time soon.

Maybe Later ! 

-- -- -- -- -- End of this Blog, for Real -- -- -- -- -- 

Thursday, 19 February 2026

Assertions used in arc-relationship

 TL;DR: A simple use-case for SQL-Assertions: Adding a data-related constraint to an arc-relation. Demo using the Emp-Dept schema.


Summary up front: 

Use SQL-Assertions to add a data-related constraint covering multiple tables.

Curious?  The link to the Source ( tst_ass.sql ). You can Immediately run this file, but don't forget to first run demobld.sql or utlsampl.sql to get your emp-dept tables.


-- -- -- Background: I wanted to try this... -- -- --

Oracle 26ai (link): as we write in Feb-2026, it is at release 26.1. And it has this new feature: SQL-Assertions ( link ).

I'll put some more links to "SQL Assertions" at the bottom. The Actual Documentation and several Other sources can explain it all better than I can. 

As the feature became GA-available, and I got to read about it, and I kept viewing presentations about it.. It got to the front of my mind. I Wanted to Try it..

And when walking around with a Neat Solution looking for an interesting Problem, you inevitably find one...  


-- -- Use-case: store audit-findings at the level of Emp, Dept, Total, or Any. -- --

Imagine you want to store+report audit-findings in a table. Those findings can be at detail-level (say, for a single Emp), or at a higher level (e.g. at Dept level), or even for the whole Enterprise (e.g. Total). But you don't want to allow the storing of a finding at a non-suitable level.

Then you have an auditor wants to record items (automatically) and check for auditable-conditions. Those checks result in "Findings", and we want to store them in tables.

Like, for example:

- Max-commission: applies to Emp (and can not apply to other levels).

- Min and max-headcount: applies to Dept (and not to any other levels).

- Total headcount: employees in the organisation: applies to Total.

- Max reports to a manager: how many Employees under a manager.

- ... anyone with more examples, let me know ... 

Our real-world case is a little more complicated, but I hope you get the idea: Definitions determine at what level a finding applies, and should be stored. When creating the Definition-record to define an audit-item, you can determine whether that definition applies at a certain level: Emp, Dept, Total, or Any.


-- -- -- Entities: Definition and Finding -- -- -- 

The problem description leads to two entities: DEFinition and FiNDing.

Audit_Definition (A_DEF):

A description of a Finding. In our case we will even try to store an SQL-statement to generate a finding so it can be run at regular intervals...

Audit_Finding (A_FND):

The result of a check or sample, stored as a record. Such a finding should be linked to the Definition, its parent-entity. But it might also be linked to some other entity to which it relates: an Emp, a Dept, or the Totality of the Enterprise, or maybe to "nothing" or some as-yet undefined entity...

The other two tables are the classic Emp and Dept from the oracle-demo schema.

Those entities have the following Relations:

  • Emp Must Have a Dept (FK, parent) - this is the classic one.
  • Finding Must Have a Definition (FK, parent).
  • Finding Can Have an Emp (FK, parent), if the finding is at Emp-level.
  • Finding Can Have a Dept (FK, parent), if the finding is at Dept Level.
  • Finding Can Not Have Both Emp and Dept, an Xor-Arc relationship.
That summarizes the translation of Requirements into a Datamodel. 
For now. Before v26ai with Assertions...  

But there is one check we have not yet covered: When the Definition of a check says it applies at Dept level, it should not be stored as a Finding at Emp-level. And vice versa.

Such a check will have to compare the Definition, where we have the property "level" and the Finding itself, where it will be linked to Emp or Dept or none. Hence that check will cover two tables.

We'll try to use an Assertion to prevent invalid combinations from going into the Findings-table.


-- -- -- The data-model, a simple ERD with one arc-relation. -- -- --

To illustrate, I'll present the simplified data-model as an ERD (and drawing with whatever tools always takes more time than you think...).

My datamodel looks like this:

The ERD: 4 entities, and an X-Arc relation from A_FND to Emp or Dept.

I know there exist other modelling methods as well, but this is what I mostly use. Tables, with relations defined by lines with Os, and the crow-foot. And with the Arc denoting 1 of several... The exact drawing-conventions are something for another blog/discussion.

(<smallprint... yes, modern systems would probably use some JSON-string to record this kind of data, and probably would not build in this kind of complicated constraint into an assertion.. but IMHO they miss an opportunity for Data-Integrity /smallprint>)


-- -- The Tables, the DDL -- --

The two tables Emp and Dept are the well-known ones from the demo-schema:

The Audit data and the Assertion will use a table for Definitions and a table for Findings: A_def and A_fnd.

This table holds the definitions... (there is room for some refinement, such as storing an SQL statement or an API call in the record)
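The DDL itself is in the linked script; here is a hedged reconstruction of A_DEF, with column names guessed from the description above:

```sql
-- Hypothetical reconstruction; the real DDL is in tst_ass.sql.
create table a_def
( def_id    number        primary key
, def_name  varchar2(40)  not null
, def_level varchar2(4)   not null
    check ( def_level in ( 'EMP', 'DEPT', 'TOT', 'ANY' ) )
, def_sql   varchar2(4000)   -- room for the check-statement itself
);
```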

Next is the table for Findings:
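Again a hedged reconstruction, column names guessed (and assuming PK-constraints exist on emp and dept; classic demobld.sql does not create them, so add them if needed):

```sql
-- Hypothetical reconstruction; the real DDL is in tst_ass.sql.
create table a_fnd
( fnd_id   number  primary key
, def_id   number  not null references a_def  -- Must Have a Definition
, empno    number  references emp             -- Can Have an Emp
, deptno   number  references dept            -- Can Have a Dept
, fnd_text varchar2(2000)
, constraint fnd_xor_arc                      -- the x-arc: not both
    check ( empno is null or deptno is null )
);
```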


Notice the constraints on the findings-table: we can sort-of specify the x-arc, but not quite the additional constraint we need...

Finally, the Assertion, it looks like this:
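The exact DDL is in tst_ass.sql; here is a hedged sketch, with the assertion-name and the precise syntax as my assumptions (along the lines of the SQL-standard create assertion):

```sql
-- Sketch: a finding may not sit at a level that its definition forbids.
create assertion a_fnd_level_chk check
( not exists
  ( select 1
    from   a_fnd f
    join   a_def d on d.def_id = f.def_id
    where  ( d.def_level = 'DEPT' and f.empno  is not null )
        or ( d.def_level = 'EMP'  and f.deptno is not null ) ) );
```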


A careful observer would say: The OR-condition in the assertion could be used to create Two separate assertions. That is something I want to examine and play with later...


-- -- -- Now Insert some data (and generate the errors...) -- -- --

The demo file contains several error-free inserts, and 2 stmnts that trigger the Assertion... 

Here is the first error: we try to insert a Finding that is supposed to be at Dept level, but we try to attribute it to an empno:
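The statement behind that error could look like this (hypothetical values: def_id 2 is assumed to be a Dept-level definition, and empno 7839 is KING from the demo-schema; the real inserts are in tst_ass.sql):

```sql
-- A Dept-level definition, wrongly attributed to an empno:
insert into a_fnd ( fnd_id, def_id, empno, fnd_text )
values ( 101, 2, 7839, 'max-headcount exceeded' );
-- expected: the insert is rejected with an ORA-error from the assertion
```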

There is the Assertion-in-action.

In trying to provoke the error, I also discovered that the FK constraint seemed to fire before the Assertion. Which, I think, makes sense.

Now try to insert an emp-level finding with a deptno:

And voila: The Assertion again prevents the insert. 

The demo-file ( tst_ass.sql ) contains a few more statements: Do Try it.

And if you want to test further: run the demo without creating the assertion, and see the (unwanted) data go in.

Notice that the error-message is the same, regardless of which part of the OR triggers the error. First remark from an Oracle-friend: If you create two separate Assertions, you get the benefit of more Detailed Error Messages. Something to remember.

And there may also be performance considerations for choosing larger or smaller assertions. Maybe Later...


-- -- -- Conclusion: I found "a case" -- -- --

I think I found a very Simple case for using SQL-Assertions. This case could have been done with Triggers, but that would have resulted in much more complicated code, divided over several triggers on the tables involved.

The Assertion (or two assertions, if you prefer that) does this more elegantly, and probably more efficiently.

And in case you missed it above: the link to sourcefile tst_ass.sql 

-- -- -- -- -- End of this blogpost, for now -- -- -- -- -- 

That is not all...

More than with other blogs I've done, there is a surprising amount of notes and links that deserve keeping + Sharing, hence some appendices...


-- -- -- -- -- Appendix 1 : Possible Future items, Several -- -- -- -- -- 

 - Refine the assertion into two (or more) and use assertion-names to better "Clarify" the error when the assertion pops (First suggestion from a proofreader)

- Highlight the fact that FKs and Check-constraints are verified before Assertions ? (is that correct? I think it is...).

- Show that data cannot be modified or deleted to invalidate an assertion.

 - Investigate (verify) the dependencies between Assertions and Tables.

 - Show the internal tables used to trace and check assertions (and why are those visible in v23.26.1 ?)

 - Examine user_assertions and other related views, such as the "dependencies".

- Operation 0 succeeded (older SQL*Plus versions).

- Try testing a ridiculous check-condition... or some heavy SQL.. what happens?


-- -- -- -- -- -- Appendix 2: Links to SQL-Assertions -- -- -- -- -- 

Despite all the saved notes... I still feel like I don't have a good intro on Assertions.

When trying to find non-Oracle sources, it was surprisingly difficult to find good texts on SQL-Assertions. The Assertion has been part of SQL since the SQL92 standard. Yes, the concept is That Old. But the implementations are lagging. And so is the set of documentation and examples...


To get the official ISO or ANSI standard text on SQL92, you need to purchase it.. (Huh??). But I found the almost-official text, an old page from Carnegie-Mellon University:

https://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt

(check section 4.10.4 on Assertions, a very brief text...)


Here is a link to a book-chapter on Assertions:

https://csci3030u.science.ontariotechu.ca/chapter_7/chapter_7_section_4.html

And this (recommended reading?) on Constraints and Assertions:

https://sql-99.readthedocs.io/en/latest/chapters/20.html


And from a CS course at U of Virginia, this slide-deck about SQL-Assertions:

https://www.cs.virginia.edu/~up3f/cs4750/slides/4750meet19-SQL-assertion.pdf


But to get Assertions better ingrained in the design and build of database-systems, I think I need to...

Call on all Bloggers and other CCs: Collect + Write more on Assertions ??


-- -- -- -- -- -- Appendix 3 :  Remarks on demo-scripts...  -- -- -- -- -- 

Allow me some Friendly Remarks to the folks at Oracle, not related to the assertion-demo:

 - Script demobld.sql is missing in oracle 26ai ?? It used to be under $ORACLE_HOME/sqlplus/admin/demobld ? The alternative script is now located at $ORACLE_HOME/rdbms/admin/utlsampl.sql... took me some searching..

- The utlsampl.sql does a connect as scott/tiger (hardcoded password, and no SID or service specified ??) In my case, the data once accidentally ended up under the SYS-user in the CDB$ROOT (my bad for not paying attention...)

Luckily, I keep a version of demobld.sql in my own toolkit-collection of scripts-to-carry. Both on github and on my trusty USB-stick (26yrs ago that was a floppy-disk, some scripts have survived the test of time).


-- -- -- -- -- more notes -- -- -- -- -- 

To get an ERD, I tried submitting text and code to chatgpt, asking it to draw me an ERD (four tables, five relations, one of which is an X-Arc, how hard can it be...).

 First results were abysmal, but chatgpt asked me if I wanted x, y or z as improvement. None of the improvements were any good. And all the follow-up questions seemed geared towards making me spoil my remaining free questions... I won't blame openAI for trying to make a buck, but I am Not Impressed.

An ancient TOAD version did a much better job.

My Final ERDs for this blog: I used the online version of Draw.io (link). And I am Quite Happy with that. I will probably use more of that tool.

-- -- -- -- -- End of this Blogpost, for real -- -- -- -- --