Tuesday 15 October 2024

Serverless, Docker, Podman, k8s #JoelKallmanDay

 TL;DR: Soon you will run Production databases from a "serverless" platform. 


For several years now, I have run most of my tests and demos from Docker or Podman, and it is just Convenient. It is even easier than using VBox, although I still keep my VBoxes around to give me more of a "sysadmin" experience when I want that.


But should you run "Production" from containers?

Usual answer: It Depends.

Practical Answer: Soon you will. 

So Get Ready to run your (Mission Critical) Databases from K8s or similar container environment.




What I didn't like about Containers, at first:

  • Layers: Another abstraction layer to "cloud my view"; things get hidden deeper and deeper.
  • Ephemeral: Data wants to be "kept", not lost on the first restart-glitch. Mapped volumes can fix that, but they are just another item to worry about.

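A minimal sketch of that mapped-volume fix: a named volume keeps the datafiles outside the (ephemeral) container. The image name and the /opt/oracle/oradata path are assumptions for a typical Oracle "free" image; adjust to whatever you actually pull.

```shell
# Hedged sketch: persist datafiles in a named volume, outside the container.
# Image name and path are assumptions; adjust to your registry and layout.
docker volume create oradata
docker run -d --name db23 \
  -v oradata:/opt/oracle/oradata \
  container-registry.oracle.com/database/free:latest

# The container can now be stopped and removed; the datafiles survive
# in the "oradata" volume, and a fresh container picks them up again:
docker rm -f db23
docker run -d --name db23 \
  -v oradata:/opt/oracle/oradata \
  container-registry.oracle.com/database/free:latest
```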

What there is to like about containers:

  • Abstraction: Easier to run your software/system/database on many platforms.
  • Flexibility: Easier to move around and Scale.

I'm sure you can see the humour of the arguments. And the lists can be even longer, much longer. I might dedicate a blog to "scale out" of Databases, and the problems that arise, especially on shared-nothing clusters. Some Other Time.


Why I think you will (soon) be running on containers: Because the rest of the IT world, i.e. most developers and administrators, will be running "Containerised".

Some (hip, trendy) organisations already dictate the use of some K8s or other container-platform, and their sys-admins are not able or willing to facilitate anything else anymore.


Also, the next generation of databases is under development, and most of them are compatible with mainstream Open Source systems (either forks or wire-compatible, which also merits a whole separate blog-discussion). Their deployment, their use-case, is more and more Natively-Containerised. Their whole setup is geared towards container-deployment.

Of course, performance-geeks (me too..) will state that more layers and more components will Complicate things, and that nothing beats deployment on Iron. This is True. But I suspect that only the extreme cases (high-volume OLTP) will stick with "raw iron". All other usages might well end up on containers by approx 2030 (6yrs from now?).



Also allow me a look-back: Around 2004, we didn't think VMs (virtual machines) were solid enough to run "real databases". By 2014, everything was on VMs, simply because "that is what we standardized on". And we take the advantages of VMs (isolation, layering, flexibility) for granted; we could not live without them anymore.

And by now, Oct 2024, I feel confident enough to run an Oracle Database via Podman, Docker or K8s. As long as I can "control" the volume mapping and locate my files in a safe storage area somewhere. I suspect this technology is now "Database-Ready".

Anecdote: I recall a customer where the statement "we don't want servers anymore" was in vogue; this was 2018, some 6yrs ago. They went to Large-Vendor-DBaaS for databases, and some of their "architects" went to great pains to eliminate the last few servers (batch-processing systems). They even bullied a few of their grey-bearded developers out of the company to forcibly "kill those batches". It wasn't an efficient move, and it wasn't pretty to see.

Luckily, you can start a container (I think we even used an oracle-container image) to act as a linux-server. And the command:

docker exec -it node42 bash -c "yourcommand --options and args"

works Fine in 99% of cases. This is one way to hide your server and keep your legacy (unix, linux, even solaris) software running. It may not be "The Container Way", and you might get some critical remarks from the zealots. But it Works Fine.

Using these workarounds, you won't need a server anymore. On which some will say: You just run on someone else's server (oh, Hi Kamil!)

But there you have it: You can Run Serverless or Containerized.



Sunday 13 August 2023

Adding PDBs is Easy also in v23c.

TL;DR: Success! I've got several PDBs plugged in a CDB, all created "just from scripts", no "template-recovery". From here on it is all Dead-Simple. Or is it?


Spoiler: Finished. Success. But there is Always More ToDo.

And... Need I state it: Do Not Try This at Work. (but if you Do, let me know how it went. I'm sure you can improve on my attempts...).

Update: Oracle has, on 02-May-2024, announced the renaming of version 23c to 23ai. This series of blogs was researched and written with the early releases of v23, when the suffix was still "c" (for cloud), but I am too lazy to go and modify all the references to the new 23ai (artificial intelligence).
As soon as time permits, I will repeat my "testing" on the new versions, but on first impression, it all works identically on 23ai.



Background:

In the first three "episodes", I created the CDB + Seed, added the "catalog", and added the more or less mandatory components. Now it is time to add (and maybe delete) some PDBs and see what is happening.

To avoid the use of "templates-to-recover" (which, btw, is Highly Recommended, and much more efficient), I've been creating a database manually, from scratch, just using scripts. In all cases, I have tried to choose "Minimal Effort" and "Minimal Configuration Work". Minimal RTFM even (which is kinda risky... You Should Always RTFM, Always).

My list of parameters is kept minimal, thus using defaults wherever possible, my typing effort was minimal, and the management of files (creation, placement, naming, deletion) is totally left to Oracle (using defaults and OMF wherever possible). 

As software I'm using the "Free Developer Release v23c". But since that is an early (preview) release, not all components are "in", and the option to "generate scripts" wasn't included in the DBCA from v23c yet. Hence I used scripts generated from v21c as examples and edited them where needed.

Because I'm still working with the list of generated scripts from DBCA-v21c (not the same version! hence use at your own risk), I had to edit. In part-3, I eliminated a few obsolete items (interMedia was deprecated in 12c, and "removed" in 19c, how many years ago...). Then I consolidated everything up-to-but-not-including any PDB-creation into one Simplified script, script nr3. That worked well, resulting in 0 (zero) invalid objects. Check previous blogs for the details.

Note: I Hate long command-lines and complicated scripts, and I especially Hate hardcoded things in scripts (like 10s of repetitions of a file-path with a version-number in it). We have $ORACLE_HOME and $ORACLE_BASE and $ORACLE_SID, and we have them For a Reason.
On top of that, editing is Risky Business: Typos! - Apologies if I repeat that too often. 



Finally, Creating the PDBs:

Most demos and examples show how to create the CDB and then add 1, single, PDB to it. I'm going to be a little more ambitious: I want 3 PDBs, and by using OMF (Oracle Managed Files) and by avoiding FILE_NAME_CONVERT parameters, I don't want to worry about Anything on the file-system.

Now from the "generated" create-scripts from DBCA v21c, there are three scripts remaining. Time for another Eyeball Mark-1 Inspection, and let's see what we can do and maybe improve.

The v21c generated PDBCreation: 


Strangely, this one contains few relevant items. Fine, let's move on.

Next would be the generated plug_freepdb1: 


This actually creates a PDB (yesss). 
With the name "hardcoded" in the script, on no less than 6 lines, including in the file-name-convert. 
The spool-command seems to be a duplicate (fixable?). 
The select shows data- and temp-files (can go?).

And then, Voila, we get to the Create, and Open, and Register of the first real Pluggable Database. That looks deliciously Simple, and it even announced itself to the listener straight away. That is the good part.

Hmm, yeah. I can work with that. Will still try to improve on the script though.


Last script from the generated set was postPDBCreation_freepdb1: 


This one is funny. It tries to add a USERS-tablespace that already exists in the PDB$SEED (hence error!), with a hardcoded datafile-path *), hm.  
That USERS-tablespace is set as default for users (Good Idea). 
It then does a call to OPatch/datapatch (hardcoded 21c path and hardcoded names! what about using $ORACLE_HOME and $ORACLE_SID?). 
And it does a number of reconnects and selects/checks to report things to screen or spoolfile.
It has _only_ 11 lines with the PDB-name hardcoded in them.
OK. All Seems "Fixable" to me. 

Some "good news" as well: apart from the OPatch call, these scripts are "self contained". There is no call to perl-catcon or other callout. 
All I would do is to simplify a little, for Readability + Portability. Maybe, use a DEFINE or some form of arg1 (or $1 or &1) to minimise use of hardcoded names and make the scripts more portable (I'm not obsessed,  not obsessed, ...)

After some more looking (on a larger screen), and some coffee, I concatenated the first two scripts and edited that into crdb4_pdb.sql:

(pre and code for crdb4_pdb) 
/* -------------------------
 crdb4_pdb.sql: create the first PDB(s), with minimal code + config..

notes:
 - in future: define the pdb-name(s) once or pass as args. 
  --------------------------- 
*/

-- get the passwords 
@accpws

SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on

spool crdb4_pdb.log append

-- found out this "bootstrap" was needed. MOS note: 2538591.1
host $ORACLE_HOME/OPatch/datapatch --verbose
 
-- let's do multiple plugs straight away.
-- future: create 42 plugs in a pl/sql loop? would that work?

CREATE PLUGGABLE DATABASE FREEPDB1 ADMIN USER PDBADMIN IDENTIFIED BY "&&pdbAdminPassword" ;
CREATE PLUGGABLE DATABASE FREEPDB2 ADMIN USER PDBADMIN IDENTIFIED BY "&&pdbAdminPassword" ;
CREATE PLUGGABLE DATABASE ORCL     ADMIN USER PDBADMIN IDENTIFIED BY "&&pdbAdminPassword" ;

-- only see 2 of the 3 new ones, wonder why ...
select name from v$containers where upper(name) like 'FREEPDB%';

alter pluggable database FREEPDB1 open;
alter pluggable database FREEPDB2 open;
alter pluggable database ORCL     open;

show pdbs                

alter system register;   

prompt ---- End of crdb4_pdbs. Please check, properties, files, listener etc... -----

spool off

Here is what I hope to achieve:
There are no file-paths at all; I'm not going to mess with files if I can help it.
The remaining commands are readable (even on an 80x25 terminal) and straightforward to understand, I hope.
On testing, I (re-)found that OPatch/datapatch needs a "bootstrap" (MOS note: 2538591.1, known issue...). I'm doing that before any further PDB-creations.
I then create and open the PDBs.. no big deal actually, and I register them with the listener. The select-checks and the rest can wait (for the next script).

In my current script I am left with just 3 PDB names, which are "hardcoded" 2x each. That still makes 6 items to correct later. ;-(.

That user-tablespace, the selects and checks from the postPDBCreation, I'll isolate in a separate check-script, to be run per PDB. One day, I'm hoping for a generic, portable script to "add 1 PDB" to a CDB, in the simplest fashion. For now, I'm in a hurry: I want my PDBs, to observe and play with...

Bonus "feature": I _could_ try to run this last script even from just-a-client when connected to the CDB$ROOT, but it would mean ignoring the datapatch and editing every "connect". Too Lazy at the moment. But good to know it can be done (yep, I checked later, works fine, but it ignores datapatch, so maybe not recommended).


I'm impatient, Let's Do This:

If you are like me, you might check disk-usage on "the server" and check the listener status, before and after. But I was in a hurry...

The datapatch part went down fine:



And the creation + opening:


You can see the creation of 3 PDBs, with no drama at all. One of them has the "classic" name ORCL, just because I can. I do now wonder if I can't find a way to leave out the admin-user part as well. I also left in the original "select" statement from the generated script (hint: something about a name "hardcoded" maybe).

We then witness the OPEN-ing of each PDB and if you run the script for yourself, the command "show pdbs" will display them, all of them this time. In the end there is also a "register" to let the listener know what was added.

If you go back and check your filesystem and listener: Creating will have taken up about 3G of diskspace, and the listener will show the new PDBs as services. 
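The before/after checks I mean are nothing fancy; something like this (paths and layout assumed for a default FREE install, adjust to yours):

```shell
# Hedged sketch of the before/after checks; paths assume a default FREE install.
du -sh /opt/oracle/oradata/FREE        # datafile footprint (roughly 3G more after)
lsnrctl status | grep -i service       # the new PDBs should show up as services
```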

Checking from a "client machine" to connect over the listener:


Of course, from a client machine, I can connect to any of the open PDBs.  Just to show how Easy.
 

Almost Done... 

We now have 3 PDBs and I can play with them. Nice. I got more or less what I wanted. We are Nearly There.

Just a little bit of housekeeping to do: remember that postPDBCreation.sql? It did set the default to the USERS-tablespace, did OPatch and did a number of selects. I am going to respect the same commands and same order of events, more or less. But I have edited that to become a script to be run per-PDB, with the PDB-name as argument (the &1 in the oracle sql-script world):

The script crdb4_postcheck.sql:

/* ---------------  

  crdb4_postcheck.sql pdb_name: set user-ts, do OPatch and run some selects

    arg1 : PDBNAME to check, e.g. FREEPDB1 or ORCLPDB2
   
notes: 
  - adapted from postPDBCreate_freepdb1.sql, as generated by dbca-v21c.

   -----------------
*/

-- pick arg from cmdline.
define targetPDB=&1;

-- get the pwds, we need them
@accpws

-- stmnt and their sequence-order copied from generated postPDBCreate

SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA

spool crdb4_postchecks.log append

alter session set container="&&targetPDB";

prompt Adding + setting dflt tablespace USERS, If needed...

set echo on

CREATE SMALLFILE TABLESPACE USERS LOGGING  
DATAFILE  '/opt/oracle/oradata/FREE/freepdb1/users01.dbf' 
SIZE 25M REUSE AUTOEXTEND ON NEXT  20M MAXSIZE 1G   ;

ALTER DATABASE DEFAULT TABLESPACE USERS;

-- next, the call to OPatch, added OH and SID

host $ORACLE_HOME/OPatch/datapatch -skip_upgrade_check -db $ORACLE_SID -pdbs &&targetPDB 

-- queries, copied straight from generated v21c original. 
-- not sure if essential

connect "SYS"/"&&sysPassword" as SYSDBA

-- expect TRUE
select property_value 
from database_properties 
where property_name='LOCAL_UNDO_ENABLED';

connect "SYS"/"&&sysPassword" as SYSDBA

alter session set container="&&targetPDB";

select TABLESPACE_NAME 
from cdb_tablespaces a,dba_pdbs b 
where a.con_id=b.con_id and UPPER(b.pdb_name)=UPPER('&&targetPDB');

connect "SYS"/"&&sysPassword" as SYSDBA

alter session set container="&&targetPDB";

set echo on

Select count(*)  nr_valid_DV
from dba_registry 
where comp_id = 'DV' 
and status='VALID';

-- go back to root.. why?, does exit straight after.. 
alter session set container=CDB$ROOT;

exit
And ... running that script, 3 times, Went Fine: the USERS-tablespace came from the PDB$SEED, it was already there. If it is ever missing, the error will notify us. Setting the default tablespace is no problem, and running OPatch/datapatch works Fine. The last select, checking for DV (DataVault, installable per catols.sql), isn't present here because it wasn't included in the "generated scripts" for v21c. Something I could correct, but was too Lazy...

Plus, I can now Rapidly provide dev/test systems (mostly containers) and customise them to my heart's content...

Also: (in scope, but this blog is too long already): Dropping a PDB including datafiles, gives you back the disk-space immediately, which I think is quite nice.
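For completeness, the drop that gives the space back is a two-liner, run as SYSDBA in the root. FREEPDB2 is just an example victim here; this is destructive, so definitely Not At Work.

```shell
# Hedged sketch: drop a PDB and reclaim its OMF datafiles in one go.
# FREEPDB2 is an example name; destructive command, handle with care.
sqlplus / as sysdba <<'EOF'
alter pluggable database FREEPDB2 close immediate;
drop pluggable database FREEPDB2 including datafiles;
EOF
```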

I've run most of my ppt-demo SQL scripts, and they work fine against 23c! (I had not expected anything else). I can use this container in my next presentations and be one of the Uber-Hip crowd using v23c. How Cool is That?


I would say: This is Success !

So in summary, I was able to create several PDBs, with minimal configuration and minimal typing-effort. Mostly due to the OMF parameters in my spfile and using as many defaults as I can.

My main "obstacle" was actually the "typing effort" in crdb3_com.sql: installing all the various components. And the testing and removal of typos to make the whole exercise look even more Simple.

I will now sagely, as a well behaved Oracle-DBA, wait for the actual GA (General Availability) of v23c.  A GA which hopefully still comes with a DBCA that can generate scripts that I can read and alter. 



Warning: writing this up took me much more time than anticipated. Actually creating the system was easy. But tweaking the scripts to be 0-error, and Portable, and chasing down some of the nitty-gritty details took waay too much time... Anyway, it was interesting enough to do. Maybe it will be useful knowledge one day.

The List of "Always More ToDo"...

I wanted a quick-create, .. reading + investigating turned into 4 waaay too long blogposts... And ... There is always more. The list got so long it would merit a separate blog+discussion.. Here are some of the items you can think about, or maybe I'll do some writeups later.

Experiment with Read-Only Oracle-Home. Ensure maximum integrity of the software-tree. Separate Software from Data was always a good concept.
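If memory serves, the switch for that experiment is roohctl (introduced around 18c; verify against the docs for your release before relying on it):

```shell
# Hedged sketch: enable Read-Only Oracle-Home (verify on your release).
# Writable files (dbs, network/admin) move under ORACLE_BASE_HOME;
# the software tree itself stays read-only: Software separated from Data.
$ORACLE_HOME/bin/roohctl -enable
$ORACLE_HOME/bin/orabasehome    # shows where the writable files now live
```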

Experiment with mounted volume and then store the databases, and the admin- and the admin-output (diag!) on a mounted-volume. That could make the container much smaller still. (This would be my Prio if I was deploying via Containers...?)

Completeness.... I'd really like to see the scripts generated by v23c to make sure I got all the components and activities Correct: complete and in the right order etc... Up to now, this is just a hobby-project with a lot of Guesstimation involved.

Directory-names: I don't like the overly-long generated names for PDB-subdirs. I'll have to investigate the "simplest possible" way to use FILE_NAME_CONVERT...

The UNDO=Local setting merits some investigation. Can that save me more effort+space? What is best/easiest, and when? For the moment, I'll just point you to Tim "Oracle-base" Hall for reading up on the topic.

Housekeeping, files: My "disk system" got full a few times during these experiments. There are MAX_SIZE clauses for data, audit and diag, and those Will be Useful. Alternatively: Check your subdirectories under the "diag" tree, and the output-log-dir for catcon. Especially when providing "dev-/test-containers" to developers, you don't want your system to run into "mysterious errors" due to disk-full problems.

Housekeeping, drop PDB: Demonstrate how dropping a PDB frees up disk-space, Automatically. Lazy-me is Really Happy with this (OMF-)feature. But dropping a PDB leaves an empty subdirectory. Not a big problem, but I'd rather have that cleaned anyway. Requires another trick (script).
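That "trick" could be as small as a guarded find. The default ORADATA path below is an assumption for a FREE install; override it for your own layout.

```shell
# Hedged sketch: sweep away empty PDB-subdirectories left behind after a
# DROP PLUGGABLE DATABASE ... INCLUDING DATAFILES.
# ORADATA default is an assumption; override for your layout.
ORADATA="${ORADATA:-/opt/oracle/oradata/FREE}"
if [ -d "$ORADATA" ]; then
  # depth-first delete of empty directories only; non-empty dirs are untouched
  find "$ORADATA" -mindepth 1 -type d -empty -delete
fi
```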

Script-beautifying: Evidently. Also, the original (generated) scripts do a lot of (re)connects and spools.. I can still make my scripts cleaner by removing seemingly useless "connect" and housekeeping stmnts. (I like small, clean scripts...).

LockAccount: The crdb3_com.sql now contains a copy of the generated code. On close inspection: this is identical, repeating code. Make it a call to a script.

Make a single, better, crdb4-script: a Simple script, taking the PDB-Name as input, should be able to add (or later remove..) a PDB, including the OPatch and some of the checks.

The OPatch needed a bootstrap. Doesn't seem like a critical problem to me, but I would not discard it. Easily Fixable, MOS note: 2538591.1. Best add that to crdb3_com at some point (after CDB, but before 1st PDB).

Test a patch or an upgrade: This will Inevitably need to be done sometime...

On perl: I'm still not very happy with the perl-stuff as it is yet another component, and yet another dependency. The scripts have a hardcoded path to the oracle-provided perl-executable in them. And another, different version of perl is found in /usr/bin, which apparently the scripts don't trust or don't want you to use. It makes sense, but it is a bit cumbersome.


But in the end: Happy! 

At least I got my kick of creating a Database from Scratch using scripts. I got the learnings, now documented in a set of scripts. And my scripts seem Portable enough to be able to create a v23c DB on just about any (linux) platform (VMs, containers), even without the templates.

If anyone wants a copy: copy/paste from the blog seems to work, or drop me a message for the repository.

------------ end of part 4 -------------

Disclaimer etc. ... nah. 
OK. Find disclaimer at the bottom of the starting-post.

And remember what I wrote earlier: Don't Do This At Work (or... not yet).


*) Footnote: 
On the use of hardcoded file-path-names for database-files (do I have to say this..?): How often did you end up with one or two files (crucial, needed datafiles) in a rogue location because of forgotten or badly-replaced filenames in re-used scripts? Folks have overwritten files because of this; folks have ended up with "windows file-path-names" in the DBS-directory of a linux-system because of this. (do I have to do a blog on this with bad-examples...)
Minimise hardcoded paths, please.


-------- the Real End of Part 4 ------- Maybe parts 5, 6 etc., but Later --------

Thursday 10 August 2023

Create a Database v23c - Installing components Manually

TL;DR: I simplified the assistant-generated scripts from v21c, I learned a few things, and it all still worked. We are slowly Getting There in v23c.

Spoiler: still a long-ish script, but it worked Fine.

Note: Aug 2023, still using the Free Developer Release of 23c.

Update: Oracle has, on 02-May-2024, announced the renaming of version 23c to 23ai. This series of blogs was researched and written with the early releases of v23, when the suffix was still "c" (for cloud), but I am too lazy to go and modify all the references to the new 23ai (artificial intelligence).
As soon as time permits, I will repeat my "testing" on the new versions, but on first impression, it all works identically on 23ai.

Background:

Still on my quest of trying to create a database, manually-scripted, without using (recovering) "the template". But with minimal effort, minimal scripting, minimal typing (even minimal RTFM, which is never a safe idea...), as covered in parts 1 and 2. Earlier I explained how+why I am using a set of scripts generated by the previous version, v21c.

It was more work than I expected. In the next stage I am going to get angry with "too many, too complicated and too-hardcoded scripts". But it seems the old-ish code Works Fine against the New Version. Legacy ;-)

Again: Keep in Mind that I am using the "Free Developer Release v23c". No guarantee this will work correctly, and no guarantee this will still work when v23c comes out for General Availability.

I can't wait to get a G.A version and check if the new DBCA still agrees with my approach.

But... So far it is going Well.. I got to the stage where "Catalog" is created in the CDB and in the SEED with few problems. Here is how:


The components (list taken from v21c)

When I check the generated script for creation of a v21c database, it shows the following:


Mind the long-ish list of items after "CreateDBCatalog" (covered in part-2).  Those scripts install the "additional components" that the DBCA wanted me to install. That list looks familiar: that same list has been there since waay back. I guess I'd better run (most of) those scripts on v23c as well...

The first script, JServer.sql, looks like:


As you can see, similarly long commands as we have seen in the "CreateDBCatalog" in part-2, except that I already put in $ORACLE_HOME to simplify. And by opening the scripts one after another, they all look more or less similar (and equally hard to read on first attempt). Note: I Hate long command-lines and complicated scripts - apologies if I repeat that too often.

On first attempt, they ran fine after just editing the file-paths to adjust to 23c. Good sign. But I wanted to do a little more Inspection using Eyeball Mark-I on a large screen (i.e. have a good look at the subject...). So I concatenated them all into one big script and viewed them in another editor, to better horizontally compare the various lines.

The following of the generated scripts were very similar: 
JServer.sql 
context.sql 
ordinst.sql and interMedia.sql (huh?)
cwmlite.sql (olap), 
spatial.sql 
... are all very much alike (but not quite). They contain a lot of repeating, identical-looking calls with generally only two understandable differences: the -b <logname> differs, and at the end of the line the "script + options" differ.

That makes sense, in multi-tenant, those components are installed via catcon.pl in both CDB$ROOT and PDB$SEED.


Some remarks on the scripts:

ordinst and interMedia: Deprecated, checked them anyway. Those are now just stubs, and can probably be removed from the list of scripts to run. So I deleted them from the calling-script.

CreateClusterDBViews: Doesn't use the -c (or -C) option, but is otherwise identical. Apparently, leaving out -c and -C runs the scripts against both CDB$ROOT and PDB$SEED. It runs both catclust and catfinal (which is effectively empty), thereby sort-of signalling the "End of Catalog stuff".

LockAccount: Does a stop/start of PDB$SEED and then runs a block of PL/SQL in both CDB$ROOT and PDB$SEED to presumably lock-down default accounts. OK. We can keep that code as is for the moment (would v23c contain more dflt-accounts, more items to lock or protect...?).

PostDBCreation: Generates the spfile and does the classic utlrp. OK, that should fix the "invalid objects" we had after doing the catalog in part-2.


Then a bit of a surprise, good and bad: The last three scripts are: 

PDBCreation: Only contains a spool-append command to a logfile.

plug_freepdb1: Creates an Actual PDB (yesss, finally...), with the name "hardcoded" in the script, and  file-name-convert directives included. I'd like to tweak this stmnt to at least get rid of hardcoded names + paths.

postPDBCreation_freepdb1: Adds a user-tablespace with hardcoded datafile-path (hm), it does an OPatch, and does a number of selects to report things to spoolfile.

I am going to keep the PDB-related scripts (the last 3) for later, and first run the scripts to add all the (mandatory or recommended?) components to CDB and SEED. 


To Simplify...

I already had them all concatenated into one, so I decided to consolidate the scripts up-to, but not including, any Create-PDB work. I'll keep everything in one single script, call it crdb3_comp(onents).sql. And I cannot resist simplifying it just a little.

I am also going to keep the spfile-creation under my own (manual, but scripted) control, putting a "comment" in front of that command, for the moment (yes, I understand the risk.. some script may want to set parameters to spfile/scope-both).

The idea for crdb3_comp.sql is born.. now I just had to do a bit of editing+testing.

The main changes I made are reduction of hardcoded file-paths and smaller commands over-all, for readability and editability. I then removed a lot of lines that did just spool-append. I'm keeping everything in one main logfile. There will also be the logfiles from catcon.pl, which I directed to /tmp.

The resulting script, crdb3_comp.sql:

 /* -----------------
 
crdb3_comp.sql : install basic components before creating more pdbs

notes:
 - wasn't happy with the long commandlines, still not happy
 - after utlrp: 0 invalids. Good Sign.
 - why repeated connect? to verify un/pw ?, or Need for fresh session??
 - option -a is undocumented (Martin berx Berger traced: probably GUI/windows related)

  --------------- */ 

-- get passwords
@accpws

-- one main logfile for this script...
spool crdb3_comp 

-- Wanted shorter commands to Run CatCon.. Hence define rcc
DEFINE rcc="$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -l /tmp -v "

prompt first JServer

SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on

host &&rcc -b initjvm  -c 'PDB$SEED CDB$ROOT' -U "SYS"/"&&sysPassword" $ORACLE_HOME/javavm/install/initjvm.sql;
host &&rcc -b initxml  -c 'PDB$SEED CDB$ROOT' -U "SYS"/"&&sysPassword" $ORACLE_HOME/xdk/admin/initxml.sql;
host &&rcc -b xmlja    -c 'PDB$SEED CDB$ROOT' -U "SYS"/"&&sysPassword" $ORACLE_HOME/xdk/admin/xmlja.sql;
host &&rcc -b catjava  -c 'PDB$SEED CDB$ROOT' -U "SYS"/"&&sysPassword" $ORACLE_HOME/rdbms/admin/catjava.sql;

prompt next is context

-- keeping the connect, as each "script" originally did a fresh connection
connect "SYS"/"&&sysPassword" as SYSDBA

set echo on

host &&rcc -b catctx   -c 'PDB$SEED CDB$ROOT' -U "SYS"/"&&sysPassword" -a 1  $ORACLE_HOME/ctx/admin/catctx.sql 1Xbkfsdcdf1ggh_123 1SYSAUX 1TEMP 1LOCK;
host &&rcc -b dr0defin -c 'PDB$SEED CDB$ROOT' -U "SYS"/"&&sysPassword" -a 1  $ORACLE_HOME/ctx/admin/defaults/dr0defin.sql 1\"AMERICAN\";
host &&rcc -b dbmsxdbt -c 'PDB$SEED CDB$ROOT' -U "SYS"/"&&sysPassword" $ORACLE_HOME/rdbms/admin/dbmsxdbt.sql;

prompt ordinst and interMedia: stubs removed, product no longer available.
prompt next is cwmlite.sql

SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on

host &&rcc   -b olap -c  'PDB$SEED CDB$ROOT'      -U "SYS"/"&&sysPassword" -a 1  $ORACLE_HOME/olap/admin/olap.sql 1SYSAUX 1TEMP;

prompt next is spatial.sql

SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on

host &&rcc -b mdinst -c  'PDB$SEED CDB$ROOT'   -U "SYS"/"&&sysPassword" $ORACLE_HOME/md/admin/mdinst.sql;

prompt next is CreateClustDBViews.sql

SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on

host &&rcc -b catclust  -U "SYS"/"&&sysPassword" $ORACLE_HOME/rdbms/admin/catclust.sql;
host &&rcc -b catfinal  -U "SYS"/"&&sysPassword" $ORACLE_HOME/rdbms/admin/catfinal.sql;

connect "SYS"/"&&sysPassword" as SYSDBA
set echo on

prompt next is lockAccount.sql

SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA

set echo on

alter session set "_oracle_script"=true;
alter pluggable database pdb$seed close;
alter pluggable database pdb$seed open;

BEGIN 
 FOR item IN ( SELECT USERNAME, AUTHENTICATION_TYPE FROM DBA_USERS WHERE ACCOUNT_STATUS IN ('OPEN', 'LOCKED', 'EXPIRED') AND USERNAME NOT IN ( 
'SYS','SYSTEM','SYSRAC','XS$NULL') ) 
 LOOP 
IF item.AUTHENTICATION_TYPE='PASSWORD' THEN
  dbms_output.put_line('Locking and Expiring: ' || item.USERNAME); 
  execute immediate 'alter user ' ||
 	 sys.dbms_assert.enquote_name(
 	 sys.dbms_assert.schema_name(
 	 item.USERNAME),false) || ' password expire account lock' ;
 ELSE 
  dbms_output.put_line('Locking: ' || item.USERNAME); 
  execute immediate 'alter user ' ||
 	 sys.dbms_assert.enquote_name(
 	 sys.dbms_assert.schema_name(
 	 item.USERNAME),false) || ' account lock' ;
 END IF;
 END LOOP;
END;
/

alter session set container=pdb$seed;

BEGIN 
 FOR item IN ( SELECT USERNAME, AUTHENTICATION_TYPE FROM DBA_USERS WHERE ACCOUNT_STATUS IN ('OPEN', 'LOCKED', 'EXPIRED') AND USERNAME NOT IN ( 
'SYS','SYSTEM','SYSRAC','XS$NULL') ) 
 LOOP 
IF item.AUTHENTICATION_TYPE='PASSWORD' THEN
  dbms_output.put_line('Locking and Expiring: ' || item.USERNAME); 
  execute immediate 'alter user ' ||
 	 sys.dbms_assert.enquote_name(
 	 sys.dbms_assert.schema_name(
 	 item.USERNAME),false) || ' password expire account lock' ;
 ELSE 
  dbms_output.put_line('Locking: ' || item.USERNAME); 
  execute immediate 'alter user ' ||
 	 sys.dbms_assert.enquote_name(
 	 sys.dbms_assert.schema_name(
 	 item.USERNAME),false) || ' account lock' ;
 END IF;
 END LOOP;
END;
/

alter session set container=cdb$root;

prompt next is postDBCreation.sql

SET VERIFY OFF

host $ORACLE_HOME/OPatch/datapatch -skip_upgrade_check -db FREE;

connect "SYS"/"&&sysPassword" as SYSDBA

prompt note on spfile: I prefer to do my spfile manually, 
prompt and at the moment in dflt location
prompt I might  consider moving it out of OH later.
--  create spfile='/opt/oracle/dbs/spfilefree.ora' FROM pfile='/opt/oracle/admin/free/scripts/init.ora';

connect "SYS"/"&&sysPassword" as SYSDBA

host &&rcc -b utlrp  -U "SYS"/"&&sysPassword" $ORACLE_HOME/rdbms/admin/utlrp.sql;

select comp_id, status from dba_registry;

shutdown immediate;

connect "SYS"/"&&sysPassword" as SYSDBA

startup ;

spool off

prompt Done crdb3_comp. Please Verify: errors, invalids objs etc...
  
That is the script I actually ran with.
(feel free to copy, I checked: Chrome, Firefox and Opera allow copy-paste. One day I'll get around to using proper code-windows in blogger..)

Now... Guess how many invalid-objects? .... None in the CDB.

And in the PDB$SEED, where we still had 600+ Invalid objects after stage 2... :


Really Happy so far. Of course, I have no idea whether v23c has introduced, or will introduce, additional components or scripts to run at Create-Time. I haven't even checked the fairly long RTFM-page in all its details on this yet....

First, I want my Databases! 
With Maximum-Laziness-Default, minimal effort and Minimal-scripting (but Readable and Portable scripts, Please...).


So far So Good.

All in all, it looks like the generated scripts from v21c are still working. This combination of CDB + Seed should now be ready for PDB-creation. I will of course try to minimise and simplify that create-process, just a little.

Next blog will describe the creation + running of crdb4_pdb.sql in which I finally manage to create 2 PDBs. It worked. But I wasn't quite happy with the result, yet.

------- End of Part 3 ------

Disclaimer: Don't Try this at Work.

But if you seriously plan to try this at work: Read The Disclaimer underneath Part 1.

PS: Special Thanks to Martin "berx" Berger for digging up the catcon.pl -a option.

----- Real End of Part 3 ----

Tuesday 8 August 2023

Minimalistic creation of a Database v23c - adding catalog.

TL;DR: Creating the "catalog", using a script from v21c as example. Beware: v23c Can Differ !

Spoiler: It ends with Optimism.

Update: Oracle announced, on 02-May-2024, that version 23c is renamed to 23ai. This series of blogs was researched and written with the early releases of v23, when the suffix was still "c" (for cloud), but I am too lazy to go and modify all the references to the new 23ai (artificial intelligence).
As soon as time permits, I will repeat my "testing" on the new versions, but on first impression, it all worked identically on 23ai.

Background:

Still on my quest of trying to create a database, scripted, but with minimal effort, minimal scripting, minimal typing. Because the create-assistant from "Free Developer Release 23c" doesn't provide an option to generate scripts yet, I took the set of scripts generated by the DBCA from a v21c as example.

Warning: This is part 2 of what could now become a 3 or 4-part series... (here is Part 1)


Next: Catalog.sql

Next step is to run the "catalog" scripts. From the generated scripts (which are from version v21c!!), I see that the first script to run after the basic "create database" is the CreateDBCatalog.sql.

This is a script that uses catcon.pl to run four components:

catpcat.sql - a fancy utility to run the old catalog and catproc

owminst.plb - known as workflow, apparently wm goes into CDB

pupbld.sql - known pub stuff for SQL*Plus, CLI tools

pupdel.sql - didn't know this one yet, but OK, seems a logical addition

The Challenge here, even in very old versions of DBCA, was always the adaptation of the many hardcoded file-paths. The DBCA would assume you only wanted to run the scripts on the one server where you generated them. Porting those scripts to another server was a recipe for problems...

The original CreateDBCatalog, generated from DBCA v21c looked like this:


Good luck deciphering that on an 80x25 terminal, but ok...

I could not resist trying to simplify a little: I added $ORACLE_HOME where appropriate, and I redirected the logs to /tmp (good riddance, and Yes, I still read logfiles. Sometimes..). This made the scripts a _little_ simpler and more portable, but I still think they should be made more readable, Simpler. Might do Later.

 After some editing, my version looked like this (copy-able code, I hope...):

-- crdb2_cat.sql : call scripts to generate "catalog" etc..
-- adapted from CreateDBCatalog.sql, as generated by dbca-v21c.

-- get the pwds, we need them
@accpws

set echo on
set verify off

connect "SYS"/"&&sysPassword" as SYSDBA

spool crdb2_cat.log append

alter session set "_oracle_script"=true;

alter pluggable database pdb$seed close;
alter pluggable database pdb$seed open;

-- oracle-home missing for catpcat? supposedly catctl.pl catches that ?
host $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl  -u "SYS"/"&&sysPassword" -n 2 -icatpcat -c 'CDB$ROOT PDB$SEED' -a  -d $ORACLE_HOME/rdbms/admin -l /tmp rdbms/admin/catpcat.sql;

host $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -l /tmp -v  -b owminst  -U "SYS"/"&&sysPassword" $ORACLE_HOME/rdbms/admin/owminst.plb;

-- make sure system pwd is correct
alter user system identified by "&&systemPassword"  ;

host $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -l /tmp -v  -b pupbld -u SYSTEM/&&systemPassword  -U "SYS"/"&&sysPassword" $ORACLE_HOME/sqlplus/admin/pupbld.sql;

host $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -l /tmp -v  -b pupdel -u SYS/&&sysPassword  -U "SYS"/"&&sysPassword" $ORACLE_HOME/sqlplus/admin/pupdel.sql;

-- again, make sure system pwd is correct, got error here ?
alter user system identified by "&&systemPassword"  ;

connect "SYSTEM"/"&&systemPassword"

set echo on

spool /opt/oracle/admin/free/scripts/sqlPlusHelp.log append
host $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -l /tmp -v  -b hlpbld -u SYSTEM/&&systemPassword  -U "SYS"/"&&sysPassword" -a 1  $ORACLE_HOME/sqlplus/admin/help/hlpbld.sql 1helpus.sql;

spool off

Running this script took roughly 2hrs on my 5yr-old, trusty MB-Air, but only just over 1hr on the heavy, 10yr-old MB-Pro (which is still faster overall, and has more CPUs). I always end up doing a lot of testing on that old thing.

But the result was OK.  No alarming warnings in the *.log and *.lst files.

----- intermezzo -----

Checking for compile errors... 

The CDB$ROOT had 1 invalid object, a trigger on the sqlplus-help system. I can live with that "error" for the moment. The PDB$SEED has 600+ INVALID objects. That may need investigation. But I know there are some calls to utlrp in the next set of scripts. And Don't Forget: this is a generated script taken from v21c, and ported to a "Free Developer Release v23c". Which means we are effectively testing the Big Red Flagship product, for Free.
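The check itself is a one-liner; a minimal sketch (the exact query I ran may differ slightly, this assumes you are SYS in CDB$ROOT with the seed still open from the earlier steps):

```sql
-- count INVALID objects per container (con_id 1 = CDB$ROOT, 2 = PDB$SEED)
-- note: the seed only shows up here while it is open
select con_id, count(*) as invalid_count
from   cdb_objects
where  status = 'INVALID'
group  by con_id
order  by con_id;
```
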

----- end of intermezzo -----


Did it Work: Yes (but...?)

What worries me a little is that during the fairly long runtime of these scripts, I did some RTFM on Create-Database v23c (remember, folks: Always RTFM). I stumbled across the need to run catcdb.sql and catcdb.pl. I checked: that script isn't called anywhere in the generated scripts. But a quick glance at catcdb.sql and catcdb.pl shows that catcdb.sql does _approximately_ (my interpretation!) the same as the generated CreateDBCatalog: it invokes catalog, catproc, owminst, and pupbld. Also, catcdb.sql seems intended for interactive usage (it asks for passwords at the command prompt!)

For the moment I'm going to assume that CreateDBCatalog (intended for automatic running) and catcdb.sql (plus catcdb.pl) are equivalent, where the latter is meant purely for interactive usage. (This needs more RTFM-ing, or maybe some reader already Knows..)

In future, I hope the DBCA for v23c will allow me to "generate scripts" again. That way I should be able to get a more Quality-Assured version of these scripts.


File Sizes, possibly important:

Also, while running the "catalog" script(s), I noticed the datafiles growing bit by bit, so after several tests, I decided again to help a little by resizing them proactively. (I already knew the resize-command works straight after the create-database, see earlier blog). I begin to appreciate the construction whereby you can specify sizes at "create time": the SEED is stated to inherit from the CDB, and the PDBs then inherit from the SEED. It is just that I don't like to add too many items to my "minimalistic" create scripts yet. If I hardcode a 1G size now, that may bite me in the tail later...
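The proactive pre-grow itself is plain SQL; a sketch (the file numbers below are hypothetical examples, check v$datafile on your own system first):

```sql
-- see which files exist, and how big they are
select file#, name, round ( bytes/1024/1024 ) as mb from v$datafile;

-- then pre-grow the files that the catalog-scripts will hit anyway
-- (file numbers and sizes here are examples, not my exact values)
alter database datafile 1 resize 1g;   -- SYSTEM
alter database datafile 3 resize 800m; -- SYSAUX
```
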

If you consider creating manually (which I probably would not recommend) and if you also want to limit size (for example to keep your container-footprint small), then study this part, check/search for the "tablespace_datafile" clauses and their cascading effect on ROOT and SEED. And try to aim for small datafiles (even if I would prefer tablespaces of type "bigfile", but that is another discussion).

Additionally, I hate it when an UNDO tablespace grows to several 100s of MB, but cannot quickly be shrunk down due to the infamous ORA-03297 (grey-bearded DBAs will add this number to their hate-list, next to 1555 and 12154 etc..).

Note to Oracle: Can we fix that? Can I request EASY UNDO-RESIZING, for the next release (v25d or whatever) ? 

In the real world, there is always someone who will (more or less correctly) say: But Disk is Cheap Now... Yes, that is true. But I still don't like to waste resources.


So Far So Good.

I seem to have survived up to this stage. 

Next, I will go over the various components that the DBCA (when it was v21c and earlier) would normally install in the CDB, followed by some post-creation steps (stage 3, let's call it crdb3_..), and then I will hopefully, finally, add 1 or 2 PDBs (crdb4 ?). All strictly "scripted" of course, and from scripts that I hope are small, Simple, Readable and Portable (to linux/unix systems).


------- End of Part 2 ------

Disclaimer: Don't Try this at Work.

But if you seriously plan to try this at work: Read The Disclaimer underneath Part 1.

------ Real End of Part 2 ------



Monday 7 August 2023

Minimalistic Creation of an Oracle Database, v23c

  TL;DR: To create a v23c (now 23ai) database "manually", with Minimal effort and minimalistic scripts, here is how I did it.

Spoiler: It Worked, in the end.

Note: Aug 2023, still using the Free Developer Release of 23c.

Update: Oracle announced, on 02-May-2024, that version 23c is renamed to 23ai. This series of blogs was researched and written with the early releases of v23, when the suffix was still "c" (for cloud), but I am too lazy to go and modify all the references to the new 23ai (artificial intelligence).
As soon as time permits, I will repeat my "testing" on the new versions, but on first impression, it all works identically on 23ai.

Goal: 

The aim is to be Lazy, to type minimal code, and to keep scripts as simple as possible, but still create a complete database: a Container + at least 2 PDBs. And do this for version 23c, for which the Free Developer Release 23c is available for download since early 2023.

I would like to use "simple commands" that can be easily and flexibly used by Developers, Dev-supporting DBAs and SysAdmins.

My resulting scripts should be usable to create a database quickly on Any (linux) platform where Oracle can be installed.


Background: 

When I tried to create a database with the developer-release of version 23c, I first noticed that the trusty "db create assistant", the DBCA, would only allow me to use a template. The Assistant didn't offer the option to "generate scripts" anymore. I liked those scripts, both as examples and for inspiration/research (what is in there...? what happens? is sql.bsq still used...?). As v23c comes together for General Availability (GA), the DBCA-assistant will be made "complete" again (or so Gerald and Loïc more or less promised). Not to be stopped, I went back to my notes from versions 11.2 and 19.3, from when I created databases all by myself, and I tried to apply them to 23c. Here is "The Report".


More on how I approached it: I took some old examples.

As a starting point, I used the scripts generated by a v21c assistant, where the assistant still allowed me to "generate scripts". These scripts were my examples of how to install and complete all the (required and more) components. I would definitely recommend simplifying those scripts a little, if that is possible (Oracle?).

But my first obstacle was, don't laugh:

First I had to change all occurrences of ORCL, ORCLCDB and ORCLPDBn (the defaults from the iron-age) to use the names FREE and FREEPDBn, because "FREE" is the only name the early developer-release for v23c will accept as database name. No big deal. It is a Brave New World... feeling FREE (but yeah, limited to that one name...).

Note: This demonstrates, again, the need for Simplicity: the less text, the less (hardcoded) code,  and the less editing you have to do, the better. You become more agile, more portable, less prone to typos, and more Robust.

To make a database useful, you need a minimum of additional scripts (think catalog, catproc, etc) and at least 1 PDB, but preferably more PDBs. Hence some more scripting and experimenting to do.


So Let's Start this...

First, I composed a minimal init.ora file. After some trial and error, it contained this:

# the minimalistic init.ora file

# name is fixed for the 23c-FREE edition...
db_name='FREE'

# file-parameters, minimum of information on where to place files.
control_files               = '/opt/oracle/oradata/FREE/control01.ctl'
db_create_file_dest         = '/opt/oracle/oradata/'
db_create_online_log_dest_1 = '/opt/oracle/oradata/'
db_create_online_log_dest_2 = '/opt/oracle/oraflra/'

diagnostic_dest             ='/opt/oracle'

# needed
undo_tablespace             = 'UNDOTBS1'

# for PDB, we need some filename-convert ? (apparently not)

# use the default! (experiment later, at own risk!)
db_block_size		    = 8192

# set these out of habit..
audit_trail  ='db'
compatible   ='23.0.0'

# isn't this the dflt yet in 23 ? yep, it is.
# enable_pluggable_database=true

# some memory params, all goes faster if you can give space, if you have.. 
# you can actually leave this out, dflts may also work?? needs test.

open_cursors            = 300
processes               = 300

pga_aggregate_target    = 512m
sga_target              = 1536m


Notice there are very few references to file-paths, meaning very little to edit if you move to another box or system. And there are, at the moment, minimal memory settings, partly because the Developer Release 23c has some built-in limitations.

With that minimum of parameters, with some experience from earlier versions+attempts, and by re-reading the Doc on Create-Database (always RTFM folks, Always RTFM), I was able to use the following, simple, minimalistic create-statement to create my first v23c database:

spool crdb1

-- pwds get "accepted" in separate file, file and pwds are reused.

@accpws

-- we need the pw file, in the dflt location.
host $ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=&&sysPassword force=y format=12

-- we need at least some directories to place the data.
host mkdir -p /opt/oracle/oradata/FREE
host mkdir -p /opt/oracle/oraflra/FREE
set echo on

connect / as sysdba

-- there must be an init or spfile, but mine is minimal...
startup nomount ;

-- minimal create stmnt
CREATE DATABASE FREE
  EXTENT MANAGEMENT LOCAL
  DEFAULT TABLESPACE users
  DEFAULT TEMPORARY TABLESPACE temp
  UNDO TABLESPACE undotbs1
  ENABLE PLUGGABLE DATABASE
    SEED
    SYSTEM DATAFILES SIZE 125M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
    SYSAUX DATAFILES SIZE 100M;

-- add a few checks..
select name, open_mode from v$database ;

show pdbs

prompt Create Database Done. Please Verify.

That was my script crdb1.sql: create the database.. from minimal init-content and a minimal create command. There are a lot of caveats and Warnings needed here (Don't Try This At Work ;-) ...), but it worked. A Good Start:


Result: The CDB Container and the PDB$SEED are created. Process-wise, a ps -ef | grep $ORACLE_SID shows 59 processes already. 

Me Curious, so immediately checking: lsnrctl status shows:



...And because of the LISTENER, we can already connect from a client-stack with SQLcl or SQL*Plus. Aha, that needs testing:

Yes. From the MB-air, I can already connect, just like that. And now double-check, connect from the "server": 



And indeed from the actual "server" (in reality a bash-shell inside a container), we get the same results: Same DBID. We are looking at a (minimalistic, not finished yet) CDB. Created very "easily" with minimal editing and scripting. And I can already connect over TNS.


Looking around:

Inside the database there are no USER-views or DBA-views yet, only V$ views to "inspect the system".

The First Check:


But wait until you see the filenames:

As was expected, due to the minimalistic (lazy) setting of parameters and the minimal typing effort, I have OMF files, and the Pluggable, the PDB$SEED, has gotten this horrendous subdirectory-name which makes memorising the file-paths impossible. I Really Like OMF, and I don't mind the crazy but readable filenames.dbf. I don't like the long subdir-names, but those are what you get if you, lazily, don't want to specify file_name_convert or other directives for the files of the PDBs (there is a reason, but that is out of scope for now).

I also tried to increase the undo-tbs, just to see if I can... The resize command worked fine. Because I know some of the next operations will make my datafiles grow, I can already pre-grow the files: pre-allocate space instead of growing incrementally, to make the work slightly more efficient. And thanks to OMF, I can also add bigger logfile-groups with Very Simple commands:

Bigger logfiles and pre-grown datafiles might help the subsequent actions (catalog, components) finish a little faster. (And if you are re-playing this: don't forget to switch logfile, do a checkpoint, and drop the small-ish logfile group 2 as well...).
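Those "Very Simple commands" look roughly like this; a sketch, where the sizes and group-numbers are assumptions (thanks to OMF, no filenames needed):

```sql
-- add two bigger logfile groups; OMF picks the names and locations
alter database add logfile group 4 size 250m;
alter database add logfile group 5 size 250m;

-- cycle away from the small original groups before dropping one
alter system switch logfile;
alter system checkpoint;
alter database drop logfile group 2;
```
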


Success ? Not complete yet, but Getting There!

So.. basic operations already work nicely: resizing of datafiles, and adding of logfiles. I can even do that work over a connection from a client, which allows me to run some of my trusty old maintenance-scripts right from my laptop-client machine.

But since we don't have a PDB yet, we are not done, not by a long way. It looks like I will have to do a little more work: running the "catalog" scripts, adding the "required" components, and then finally creating one or more PDBs. 


More to come:

I will need to create + run at least three more scripts : 

crdb2_cat.sql: to add the catalog (this was actually the slowest part)

crdb3_comp.sql: to add a number of essential "mandatory" components.

crdb4_pdb.sql: to finally add some PDBs and show it All Really Works.


Spoiler: It is going to Work. But some of it is for the next Blogs.


---- End of Part 1 ---- 


Some Disclaimers, as people have asked (written 06 Aug 2023)


My Server: 

As platform, I used the container image provided by Oracle, which can be found here: container-registry.oracle.com/database/free, image id: 7c64410c08d5afc711a9da0cc7cfa6e4e66bc7ec4456c3ba5867823c8e40ef57

And I've run this image via docker-desktop on both an MBP and an MBAir, more or less as if it were a little Virtual Machine. Most of the actual work for this blog was done on a humble MBAir from 2018. Kinda shows you really need Big Iron and Large Servers to run Oracle, right?


The Scripts:

The original "generated scripts" were generated on a VM with Oracle v21 installed. I copied them off and edited them to fit the (often hardcoded) version/name 23c. By editing and simplifying the generated scripts, I got to the results above. You may see more of the generated scripts and their 21c origin in the next episodes.


Warnings, just in case: 

Don't Do This At Work.

and Use This at Your Own Risk.

The work, scripts and results in the above blog are all just for research purposes, and because I am a curious, slightly weird person. I am not suggesting you should do this for any Serious system, and Definitely Not with the pre-/developer release of 23c.

---- finally, the real end of part 1 ---


Saturday 5 August 2023

Oracle SQLcl Still Surprises... even when finishing work.

 TL;DR You can use \q in SQLcl. That is Cool! 

As stated before, The Mantra is: 

Everything is SQL, and SQL is everything.

And you should collect sql-scripts and test them, use them...hence you need SQLcl

I Strongly Recommend that you "store" your SQL Knowledge in (runnable) scripts. That way you don't have to go back and google (or AI-chat) to find how a certain query is done, or how to find the size of a set of objects. (On my laptop, typing SQL > @segsize abc% will find all segments by that name and list them with sizes; I've scripted that since the dawn of time, to never have to type too much....)
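The segsize script itself is not shown here, but a minimal sketch of what such a script could contain looks like this (the name segsize.sql and the exact columns are assumptions; the real one has grown a few more columns over the years):

```sql
-- segsize.sql : list segments matching &1 (e.g. @segsize abc%) with sizes
col owner        format a20
col segment_name format a30

select owner, segment_name, segment_type
,      round ( bytes / 1024 / 1024 ) as size_mb
from   dba_segments
where  segment_name like upper ( '&1' )
order  by bytes desc;
```
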

But that is not Today's little TIL.

So, I tend to have SQLcl installed on my client-machine (and SQLDeveloper, and even artifacts like SQL*Plus and sqlldr, exp, imp). And on my often-used jump-boxes, and basically everywhere I can. You Need your SQL tools...

And TIL (Today I learned), by accident, this :


For those who now blink: The image above shows that the SQLcl tool will accept the command "\q"  (BackSlash and lowercase Kjuu).

And the \q will tell it to gracefully disconnect and exit.

For some of us, this is Very Convenient, as it is the same command used by a few other (DBA, geeky) tools that we use. I realise I have probably used \q several times in SQLcl without paying attention, as I tend to keep scripts "generic" as much as possible, and have several versions of "the same script" for various purposes.

Having \q is just ... Convenient! 

And who knows what else is in there. I already found that \d also works.

Hence: Tip of the Hat to Jeff, Layla, Kris, and the entire team behind SQLcl.

If you got to this point and are still clueless why \q or \d is convenient, no worries. You miss very little...

Tuesday 25 July 2023

SQL, Explain-Plan, Show me the Data, automated

 TL;DR: Find relevant info about the data behind an SQL: easy, and automated.


OK, so you found the SQL-statement that seems to cause problems. From Statspack or AWR, or just from v$sqlarea. You got the stmnt, the sql_id, and probably the explain-plan(s) using dbms_xplan. In earlier blogs I also showed you how to check on the history of an sql_id (from dba_hist_sqlstat) and how to find the bind-variables (from v$sql_bind_capture).
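As a reminder, fetching those captured binds for a given sql_id can be done along these lines (a sketch, using the same &1 convention as the scripts below):

```sql
-- show the captured bind values for sql_id &1
select name, position, datatype_string, value_string
,      to_char ( last_captured, 'YYYY-MON-DD HH24:MI:SS' ) as last_captured
from   v$sql_bind_capture
where  sql_id = '&1'
order  by position;
```
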


How about we also do some inspection of the data in the tables and indexes? And how about we automate that job ? 


Take this (simple) SQL stmnt as an example: 

select e.ename, e.hiredate, d.dname, d.loc 
from emp e, dept d 
where 1=1 
and e.deptno = d.deptno 
and e.ename = :vc_ename 
order by e.ename, d.dname; 

In practice, problem-queries may fetch data from 10+ tables, but we'll keep this example small.


We can use dbms_xplan to show the explain:

-----------------------------------------------------------------
| Id  | Operation                      | Name      | Cost (%CPU)|
-----------------------------------------------------------------
|   0 | SELECT STATEMENT               |           |            |
|   1 |  SORT ORDER BY                 |           |     0   (0)|
|   2 |   NESTED LOOPS                 |           |            |
|   3 |    NESTED LOOPS                |           |            |
|   4 |     TABLE ACCESS BY INDEX ROWID| EMP       |            |
|*  5 |      INDEX RANGE SCAN          | EMP_ENAME |            |
|*  6 |     INDEX UNIQUE SCAN          | PK_DEPT   |            |
|   7 |    TABLE ACCESS BY INDEX ROWID | DEPT      |            |
-----------------------------------------------------------------


But we can take it a few steps further: we know the plan is stored in v$sql_plan. So let's combine v$sql_plan with the information in dba_tables and dba_indexes:

column obj_own_name     format A30
column last_analyzed    format A20
column est_rows         format 999,999
column est_keys         format 999,999

select p.object_owner || '.' || p.object_name || ' (T)' as obj_own_name
, to_char ( t.last_analyzed, 'YYYY-MON-DD HH24:MI:SS' ) as last_analyzed
, t.num_rows est_rows
from v$sql_plan p
   , dba_tables t
where 1=1
and p.object_owner = t.owner
and p.object_name = t.table_name
and sql_id = '&1'
and p.object_type like 'TAB%'
order by p.position ;

select p.object_owner || '.' || p.object_name
|| case i.uniqueness
   when 'UNIQUE' then
        ' (UNQ)'
   else ' (IDX)'
end  as obj_own_name
, to_char ( i.last_analyzed, 'YYYY-MON-DD HH24:MI:SS' ) as last_analyzed
, i.distinct_keys est_keys
from v$sql_plan p
   , dba_indexes i
where 1=1
and p.object_owner = i.owner
and p.object_name = i.index_name
and sql_id = '&1'
and p.object_type like 'IND%'
order by p.position ;


These queries will display the relevant statistics (number of rows and number of keys). Note that the statistics can be out of date, hence we also generated the count-queries:


OBJ_OWN_NAME		       LAST_ANALYZED	    EST_ROWS
------------------------------ -------------------- --------
SCOTT.EMP (T)		       2023-JUL-23 06:38:30	  14
SCOTT.DEPT (T)		       2023-JUL-23 06:38:30	   4

OBJ_OWN_NAME		       LAST_ANALYZED	    EST_KEYS
------------------------------ -------------------- --------
SCOTT.EMP_ENAME (IDX)	       2023-JUL-23 06:38:30	  14
SCOTT.PK_DEPT (UNQ)	       2023-JUL-23 06:38:30	   4

-- Generated count-queries ready to copy/paste
select ' SCOTT.DEPT  :' || to_char ( count ('x' ), '9,999,999,999' ) from SCOTT.DEPT;
select ' SCOTT.EMP   :' || to_char ( count ('x' ), '9,999,999,999' ) from SCOTT.EMP;
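For completeness: the count-queries above can be generated from v$sql_plan as well; a minimal sketch of such a generator (my actual script differs, the exact quoting and spacing here are assumptions):

```sql
-- generate one count-query per table found in the plan of sql_id &1
select distinct
       'select '' ' || p.object_owner || '.' || p.object_name
    || '  :'' || to_char ( count (''x'' ), ''9,999,999,999'' ) from '
    || p.object_owner || '.' || p.object_name || ';' as count_qry
from   v$sql_plan p
where  p.sql_id = '&1'
and    p.object_type like 'TAB%' ;
```
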


This will give you a quick insight.

Note that dbms_xplan with the ADVANCED format will provide most of the same information, and more. But being paranoid, I can never resist verifying for myself. Sometimes the "statistics" are way off the mark, the last-analyzed may be outdated, or you may have overlooked a particularly large table in the from-clause, or some tables were possibly hidden by a view.


Warning: Careful with count-queries.

The Count-queries are deliberately Not Executed Automatically: on some systems a thoughtless count can take minutes or even hours. But if you need them: copy/paste and run...



In summary: 

We combine the information in v$sql_plan with the available statistics, and we generate count-queries in case you need them. By automating this in a script, we save the copy/paste/typing effort, and we obtain a more or less standardized format of data. 


And all of this with fairly simple SQL statements...


Remember, in a good RDBMS:

Everything is SQL, and SQL is everything.


Happy Selecting !