Wednesday, 17 September 2025
Oracle23ai, and a new kind of history.
Thursday, 14 August 2025
Oracle23ai and python - Measuring (Network-)Time.
TL;DR: I wanted a utility-function in python to report the "time spent", notably looking for "Network-Time". It evolved during coding into a time-reporting-function. It Worked. But I learned something new again.
Image: waiting for a bus, you never know when...
- App-time: measured by time.process_time()
- DB-time: from the DB
- Network-time: pingtime x nr RoundTrips
- Idle time: calculated.
- Total time: measured from time.perf_counter()
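For illustration, a bare-bones sketch of such a report-function (this is not the real utility from duration.py; the statistic-names come from v$sess_time_model and v$mystat, and I assume the account is allowed to query those views; avg_ping_sec comes from a few pings, see the next post):

import time

t0_total = time.perf_counter()      # wall-clock at program start
t0_app   = time.process_time()      # CPU used by this python process so far

def report_time_spent(con, avg_ping_sec):
    """Rough breakdown of where the time went (sketch, not the real utility)."""
    total_time = time.perf_counter() - t0_total
    app_time   = time.process_time() - t0_app
    cur = con.cursor()
    # DB-time of this session; v$sess_time_model reports microseconds
    cur.execute("""select value from v$sess_time_model
                    where stat_name = 'DB time'
                      and sid = sys_context('userenv','sid')""")
    db_time = cur.fetchone()[0] / 1_000_000
    # nr of RoundTrips so far (these two queries add a few RTs themselves)
    cur.execute("""select m.value from v$mystat m
                     join v$statname n on n.statistic# = m.statistic#
                    where n.name = 'SQL*Net roundtrips to/from client'""")
    nw_time   = cur.fetchone()[0] * avg_ping_sec
    idle_time = total_time - app_time - db_time - nw_time
    print(f"app: {app_time:6.3f}s  db: {db_time:6.3f}s  network: {nw_time:6.3f}s"
          f"  idle: {idle_time:6.3f}s  total: {total_time:6.3f}s")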
The first time I tested this, I was surprised by the large (5 sec) Idle-time in a program that I thought did very little. Then I realised: the loop for the 5 pings has a 1-sec sleep in it, and that generates the (approx) 5 sec of Idle-time.
As a separate check, I've used the linux time command to time the run of the program, and the values concur, even for a very trivial, short-running program:
- Real: the elapsed time, close to the 8 sec reported by python.
- User and sys: together they are +/- close to the App-time reported by python.
And the data from time correspond, with a margin of error, to the data reported by my python contraption.
So I now have a function I can call at the finish of a program that will report how time was spent. And a verification via linux time looks acceptable.
So far it looks promising, usable (but you can feel a problem coming...)
Tuesday, 12 August 2025
Oracle23ai and python - Pingtime and RoundTrips
os.system('ping -c10 www.oracle.com')
that code: Take it!
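And if you want that average ping-time as a number inside python (instead of eyeballing the stdout of ping), a minimal sketch could look like this. Not part of the original code, and it assumes linux (iputils) ping and its usual summary line:

import subprocess

def avg_ping_ms(host="www.oracle.com", count=5):
    """Average round-trip-time in milliseconds, parsed from linux ping output."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    # the summary line looks like: rtt min/avg/max/mdev = 11.2/12.3/13.4/0.5 ms
    summary = [line for line in out.splitlines() if "min/avg/max" in line][0]
    return float(summary.split("=")[1].split("/")[1])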
Feel free to copy or plagiarize (do you really need a large disclaimer..?). Just Copy What You think is Useful.
If you have suggestions for improvements: Let me know. I'm always interested in discussing these things if time+place allow. Preferably in the margin of a Good Conference, or on one of the social channels (bsky, slack, email...).
Tuesday, 5 August 2025
Oracle23ai and python - too many RoundTrips, Fixed.
TL;DR: Time got lost due to leftover code. In our case, some well-intended "defensive coding" caused additional RoundTrips and some commit-wait time.
Friday, 1 August 2025
Oracle23ai and python - How to fix (avoid) a MERGE statement.
TL;DR: Still using Oracle and python, and loving the combination. But some Annoying High-Frequency MERGE statements from several python programs caused too many RoundTrips... Trying to fix that.
And can I just say: RoundTrips are Evil !
Image: Vaguely related, we have lots of Traffic, and we are trying to MERGE...
Background: need to Eliminate RoundTrips.
For a number of python programs we have logic that checks on "existing data". One example is to find a certain SOURCE. The table holding the sources looks like this (simplified):
If a source-record, by whatever SRC_NAME, exists, we want to get the ID, and if the source does not exist, we need to create it and return its newly assigned ID.
Perfect case for a MERGE-statement, right?
Except that we do many source-checks per second, and each of those stmnts becomes a RoundTrip. These merge-stmnts were 2nd on our high-frequency-list. And since we eliminated the single-record-inserts, they are now Top-of-Problem-List.
The Problem (simplified):
The function will check for existence of a source-record. The MERGE statement was the most obvious to use here. Our MERGE statement looked like this (somewhat simplified):
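(The original statement was shown as an image; below is my reconstruction from the description. Table- and column-names are assumptions, and because MERGE has, as far as I know, no RETURNING clause, I wrap it in a small anonymous PL/SQL block with an OUT bind so the whole check stays one single RoundTrip.)

# Sketch / reconstruction, not the original code. Table name t_sources is assumed.
src_merge_sql = """
begin
  merge /* src_merge */ into t_sources s
  using ( select :src_name as src_name from dual ) n
     on ( s.src_name = n.src_name )
   when not matched then
        insert ( src_name ) values ( n.src_name );
  -- the row now exists either way; return its ID via the OUT bind
  select id into :src_id from t_sources where src_name = :src_name;
end;"""

# global cache: pairs of SRC_NAME -> ID, filled as we find them
src_dict = {}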
The MERGE stmnt is quite standard: check for existence of a given name, and if it does not exist: create it. Then return the ID.
Also notice the (global) variable: src_dict = {}, it will hold the pairs of SRC_NAME and ID as we find them.
The original function (simplified) looks like this:
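(Again a reconstruction as a sketch; the function- and variable-names are my assumptions.)

def f_get_src_id_db(con, src_name):
    """Original style: one call to the RDBMS per check, returns the ID."""
    cur = con.cursor()
    src_id = cur.var(int)                 # OUT bind to catch the returned ID
    cur.execute(src_merge_sql, src_name=src_name, src_id=src_id)
    return src_id.getvalue()              # the ID, existing or newly claimed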
Quite straightforward: create a cursor, assign the bind-variables, execute the statement, and catch the returned ID. Job Done. Except that this would sometimes run at 100s of executions per second and show millions of executes per hour in AWR.
Note that we effectively check for parent-records or LOV-records before an insert of a detail-record. I can imagine other constructs, such as triggers or a PL/SQL function to contain this logic. But here we are...
Note Also: On the topic of MERGE-stmnt, allow me a sidestep to this blog by Oren Nakdimon about concurrency-issue with MERGE (link), but that is out of scope for me at this point. Maybe Later.
The Possible Solution:
Thinking back to my previous "Fix for Roundtrips" (link), some form of local buffering or a "local cache" would seem appropriate. But there were some issues:
- Uniqueness: Any new record, e.g. newly-found SRC_NAME, should Ideally be "claimed" with a new ID into the RDBMS Immediately to prevent other systems from assigning different IDs to the same source-name.
- Timeliness: A local buffer would _always_ be out of date, especially when multiple running jobs were likely to discover the same or similar sources in the same timespan. Ideally, the local buffer would always have to be up-to-date, or kept in sync, with the Database.
In short: The Truth is In the Database, the Single Point of Truth (SPOT, classic problem of copy-to-cache...).
- And preferably no "Slurp" of all data: A local buffer could potentially be large, but not every program-run needs all the records. Most programs would only need a small set of the data, typically 10-20 source-records (but they do millions of unnecessary merge-check-retrieve for that small set of SRC_NAMEs). A pro-active "Slurp" of a large set of LOV-data would not be desirable.
One of the "lucky" aspects of our merge-problem was that the SOURCE-data, for this process, was insert/lookup-only. Any updates (comments, modifications, or even combining of sources) would happen elsewhere. The "worker programs" just needed to fetch an ID, or create an ID where none existed yet.
But any new ID would have to be "immediately" stored into the RDBMS to have it available for others.
The concept-solution.
With some thought, the following pseudo-code came to mind (some architects will call this a "design pattern"):
The comments speak for themselves. I chose a python structure called a DICT, which I think is similar to an associative-array in PL/SQL.
Note that at this point of writing, I do not yet know if that choice was "optimal", but it seemed to work just fine in our cases. Again something to investigate Later...
Let's put it to Code.
This idea was relatively easy to code. A link to complete and hopefully runnable setup- and test-scripts is at the end of the blog.
The new function First checks if the SRC_NAME is present in a DICT, and if not, then calls the "old function" to check against the RDBMS. Then returns the ID.
It looks like this:
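(A minimal sketch of that wrapper, reconstructed from the description above; the original code was shown as an image and contains a bit more detail.)

def f_get_src_id(con, src_name):
    """Check the local DICT first; only call the RDBMS on a cache-miss."""
    if src_name not in src_dict:
        # first time we see this name in this process: claim/fetch the ID
        src_dict[src_name] = f_get_src_id_db(con, src_name)
    return src_dict[src_name]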
That was Easy enough (In practice there is a bit more, but out of scope). And it also seemed to pass all of my testing.
But was it really Better...?
Now Race it for Speed....
To find out if something is "faster", I tend to just call the thing 100s or even millions of times, and compare timings.
Our test-case is best described as:
- The potential names are "src_123", using numbers from 1-1000.
- At the start, the table contains 501 records with SRC_NAMEs and IDs (the odd numbers). Setup is via the file tst_merge_src.sql (links to all files below).
- Program will generate names randomly of format "src_123". Then check the name, and add a new record if needed.
- We run this for 100-random-names and report timing + statistics... We check the timings, and repeat for another 100-random-names. Until Control-C.
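Stripped of the timing- and statistics-reporting, the heart of such a test-pass is roughly this (sketch; the complete program is in tst_merge_src.py, linked below):

import random

def test_pass(con, n_names=100):
    """One pass: check n_names random names of the form src_1 .. src_1000."""
    for _ in range(n_names):
        name = f"src_{random.randint(1, 1000)}"
        f_get_src_id(con, name)     # or f_get_src_id_db(con, name) for the old version
    con.commit()                     # make newly claimed IDs visible to other programs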
The Original Function with Merge: 200 records/sec, steady.
Here is the first run of 100 checks, using the old function.
The top-right terminal shows stdout of the test-program:
It did 106 RoundTrips to test 100 records (100 merges plus 6 to connect and some overhead). It managed to do this at a rate of 212 records/sec. Check also the time-difference of the two lines at start (blue mark): the test took about 0.5 of a sec for 100 records, which confirms: 200/sec.
To the Left, we see a terminal with the count of the records: At the start of the test, there were only the 500 + 1 existing old sources. After the first pass of 100 records, the random-mechanism found 51 (even numbered) new names and merged them into the table. Those newly-found names are immediately "claimed" with an ID. Any other program finding the same names would be able to pick them up and use the Correct ID.
Let's hit enter a few times and do some tests. After 6 runs:
Now the program did 621 RTs (600 records checked, and some overhead for commit and statistics). But the speed is still about 180/sec. In fact, it varied a little, but the rate was stable around 200 records/sec.
Meanwhile, the record-count now shows 223 new records added (e.g. 223 new names found + merged). And this program will keep processing at this rate.
The Speed is 200 records/sec, no matter how long we run the test.
Time to hit Control-C and start testing the new function...
The New Function: using a DICT with cached values..
On the first run with the new function we found the same speed:
No improvement yet. It added 100 RTs and was still only processing at 200/sec.
But the DICT now contains 97 elements, this is the start of our "cache"...
The record-count showed the total of NEW records at 257: some 34 records were added in this round.
Let's now hit enter a few more times and get to 10-runs with the new function, and as we run more records, the DICT fills up and the cache-effect starts to work:
Now we have a speed of 500 records/sec and the DICT now contains 637 records (out of a potential maximum of 1000). And the number of RTs per run-of-100 is down to about 30 per testloop.
As we run more+more test-loops of 100 records, most of the 1000 potential names end up in the DICT, the number of RTs needed decreases, and the speed in nr of records/sec goes up steadily...
At the 25th test-loop, the DICT holds 902 values, and per test-of-100 we are down to about 10 RTs. The measured speed has gone up to 1797 records/sec: that is more than 8x faster than the original function.
This Cache-Mechanism Works !
Reflexions...
The local cache will Not Know about Deletes or Updates. But for most LOV- or parent-table records, deletes are un-likely. And from the nature of this data: the SRC_NAME is not likely to change over time (it is effectively the Alternate-Key).
As always, I do not like to add additional code to a system: There is enough "maintenance" already without me adding smart-tricks... Weigh the benefits carefully in your own situations.
I do not know (yet) how efficient the DICT-lookups are on large(r) sets. But I assume a local operation (inside 1 process) is generally much more efficient than a call-out to a (remote) RDBMS. Still something to keep in mind, maybe check in future tests.
I don't rule out that certain DataFrames also solve this problem. I hope they Do. But I'm not yet sufficiently fluent in DataFrames to comment much. Maybe Later.
Alternatives.... Use a Local Database or a file? Someone suggested to copy relevant LOV-data into a local store, either a SQLite, or some copy-file type of cache. I would hesitate to do this, but it may be an option. This is not uncommon in systems that deploy Microservices.
Summary, Wrap-up.
The caching mechanism worked.
And again, the (evil) impact of RoundTrips is demonstrated.
(In This Particular system, YMMV!)
By eliminating calls to and from the database, we reduce the workload on the python program and on the database.
The python-program does not have to call / fill / run statements and wait for the returns..
This frees up time and resources for other work. => Win.
The RDBMS no longer gets 1000s of "identical and near-useless" calls, and it does not have to use its precious processing power to serve those merges anymore.
This frees up resources at the RDBMS side for other work too. => Win.
Needless to say, if you have your software distributed around servers or over datacentres at distance, the impact of Latency and RoundTrips is Even Bigger.
I said it before (+/- 1998): Chattiness is the next Challenge in IT.
-- -- -- -- -- End of this blog, for Now -- -- -- -- --
Appendix A: links to sourcefiles.
tst_merge_src.sql : the test table (from tst_item_labels.sql from earlier blog)
tst_merge_src.py : the testdemo python code. (it needs some imports!)
The program imports the following: os, sys, array, random, time, datetime and dotenv.
And of course the oracle-driver: python-oracledb.
All of which are either standard in your python-installation, or can be installed with pip3.
And I use some helper-files, you'll need those for import:
prefix.py : function pp(*argv), prefix the stdout lines with file + timestamp
duration.py : the stopwatch-utility I use.
ora_logon.py : functions to logon to database, and to report data from v$mystat
.env : edit this to include your scott/tiger@orcl, dotenv will read it.
-- -- -- -- -- End of this blog, for Real -- -- -- -- --
Wednesday, 30 July 2025
oracle23ai and python - eliminate RoundTrips.
TL;DR: Searching for RoundTrips to eliminate between python and Oracle. In this case, we "collect" the individual inserts to do them in a few large "batches". The result was Surprising, in a Good Way.
And then... Some pieces of program I would rather not have to write (re-constructing the wheel...). But Hey, it wasn't too hard, and it Really Helped.
Old Lesson (re)Learned: Row-by-Row == Slow-by-Slow.
Image: Several Cities have an efficient metro-train running next to an ever-jammed highway. I remember notably Chicago. But the view from the train from AMS airport to Utrecht is often very similar.
Background: Processing data (outside of the RDBMS), and TAPIs
Some systems want their data "processed" by python or other tools that are not part of the Oracle RDBMS (yet). Even if I think that taking data to and from the Database is generally not #SmartDB (link to asktom), sometimes this needs to be done.
But when every individual record (ins/up) becomes a RoundTrip, you will notice.
Luckily, a lot of our tables already have rudimentary TAPIs (TAPI = Table Application Program Interface). And some of these TAPI-functions caused a lot of those infamous Round-Trips.
TAPI - a Good Concept - until it is not...
The concept might be rather old (80s, 90s?), but it still serves. You'll find similar concepts in ORM-frameworks like JOOQ and Hibernate (links)
In our case, our python code will generally handle the creation (insert/update) of a record in a separate function (f_ins_item... ). These TAPIs will typically handle things like: 1) ensure there is a parent-object, or maybe create one, 2) and handle MERGE-functionality when required to prevent insertion of duplicates. 3) verify (or create) the necessary metadata or lookup-data.
This is a Good Idea, as it centralises the logic for tables in a few, easy to find, functions.
Most of these TAPI functions do their Good Work quietly in the background. In our cases, the "create/update of a record" is not the most time-consuming activity of a program, but rather the outcome of a much longer process. Not a noticeable bottleneck. Mostly.
But all of these TAPIs are single-record functions: they act on 1 record at a time. And when processing large numbers of records, that TAPI-function and the round trip(s) it does can become a time-consuming activity.
And "network" is a funny resource-consumer: you end up with both Application and Database doing seemingly "nothing" until you know where to look (In our case: AWR and application-logs-to-stdout, but this story is not about the "diagnose" it is about the "Fix").
TAPIs causing round-trips - Too Many RoundTrips.
As it was, the most executed statements on a particular system were the Inserts of "Details" in the datamodel: Records at the fringes of the ERD-diagrams that would receive 100s or even 1000s of records as+when details become known about an item (e.g. the "generic datamodel strikes again", different topic...).
The nature of those TAPIs is 1-record-at-a-time. And sometimes that hurts. From the application-log (lines with epoch-time printed to stdout) we could learn that the insert-function was often called, and was a time-consumer. The RDBMS had some notable INSERTS as "High Frequency" (top of list for "ordered by executions"), but not as notably inefficient or resource-consuming statements.
The whole picture of a slow-program, a relatively quiet RDBMS, and the AWR-numbers about executions and RoundTrips, was enough to warrant a little test.
What if we could "batch" those inserts and prove that 1-row-at-a-time was really the in-efficient part of the system ?
Test: Catching the insert of a record...
For test-purposes, I simplified the table like this (will put link to script below):
The ID and CREATED_DT get generated by the RDBMS on insert. The FKs are self-explanatory. The Real-World case is a record of some 20 columns with a few more constraints (think: optional columns for dates, times, intervals, validity, valid lat/long, various field- and record-level constraints that can take some DB-CPU to validate, but never much....). And the (average) size of the records varies between 500 bytes and 2000 bytes, depending on the item, the source and the label.
The Original insert looked (simplified) like this:
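(The original snippet was an image; this is my reconstruction from the description. The column-names are assumptions; the /* t1 indiv */ watermark and the table src_item_label are taken from later in this post.)

ins_sql_indiv = """insert /* t1 indiv */ into src_item_label
                   ( item_id, src_id, label_text )
                   values ( :item_id, :src_id, :label_text )"""

def f_ins_item_label(con, item_id, src_id, label_text):
    """Insert one label-record; no commit here, TX-logic is done elsewhere."""
    cur = con.cursor()
    cur.execute(ins_sql_indiv,
                item_id=item_id, src_id=src_id, label_text=label_text)
    return cur.rowcount                 # 1 when the insert succeeded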
We have a function that can be called whenever a "label" is detected for an Item. The properties of the label need to be stored, with a few FK-references and various detail-properties that go in the columns or (free-format-ish) fields of the record.
Needless to say the FKs must exist for this to work. Cases where the FK-parents may have to be inserted are more complicated. And checking of various "validations" for columns can also take code + time. For the sake of demo, this is a much-simplified example.
Notice this SQL-statement is watermarked as /* t1 ...*/ for easy spotting in tests. Watermarking can also be Very Useful in deployment. Just saying.
Also notice: this function does Not Commit. The TX-logic is done elsewhere.
In short, several INSERT statements of this nature were The Most Executed stmnts from our problem-programs...
Buffering in a list, the Simple Concept.
The fact that all(?) inserts in the original version go via a single function is a Great Start. All we have to do is "catch" the inserts, collect a sufficient number of them, and then send those to the RDBMS as a single statement using something like cursor.executemany (see this good example in the docu..)
In pseudo code:
That pseudo-code speaks for itself: store new records in a list (of records), and insert them when you have a significant collection. The Inspiration came partly from what I knew about RoundTrips and previous programming effort. And from a Very Readable example that can be found in the python-oracledb doc on "Batch Execution" (link).
Two main things to check in this concept: 1) Do Not Forget to check and insert any leftover items in the list before program commits or exits. 2) Avoid errors with an Empty-list, e.g. when no records at all are in the list, stop the function from throwing an error.
Other than that: Piece of Cake, right ?
Note: Python-adepts may recognise this as "using dataframes". Very Similar. Except that at this point, I use self-coded lists for simplicity and demo. It is quite possible that our dev-team will, over time, adopt some form of data-frames (Pandas, Apache-PySpark) in future. You can lead a horse to water.... but Maybe Later.
Late-Edit: As I am writing this blog, Christopher Jones is writing about DataFrame support in the latest python-oracledb release. Check this!
Let's put it to Code:
Note: the complete, hopefully runnable, program and scripts are linked at the bottom of the blog...
We start by defining some necessary (global-)variables and constants:
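(A sketch of those globals, reconstructed from the text; the variable-names are the ones mentioned below, the column-names are assumptions.)

itl_list = []                 # the buffer: one tuple per record to insert
itl_list_max_len = 1000       # flush to the RDBMS when the list reaches this size
itl_list_sql_ins = """insert /* t2 list */ into src_item_label
                      ( item_id, src_id, label_text )
                      values ( :1, :2, :3 )"""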
The implementation will need a (global-) list to add records: itl_list
It needs a length at which to do the inserts and re-initialize the list: itl_list_max_len
And we have the SQL to do the work: itl_list_sql_ins
The SQL-statement does not have to be defined global, but putting it here cleans up the def-function code. In practice, having the SQL inside or near the function can help with coding and code-reading. You choose whatever is convenient for you.
With this in place, we can re-write the insert-functions, in two parts: First the function to catch the records:
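(My reconstruction of that function, as a sketch; the real one, as noted below, has more validity-checks.)

def add2list(con, item_id, src_id, label_text):
    """Catch the record: append to the (global) list, flush when the list is full."""
    itl_list.append((item_id, src_id, label_text))
    if len(itl_list) >= itl_list_max_len:
        return f_ins_itl_list(con)      # insert the whole batch and reset the list
    return 0                            # nothing sent to the RDBMS (yet)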
This "insert" function no longer interacts with the RDBMS, instead it appends the records to the (global) list.
In practice, there might be additional logic to check the "validity" of the record before adding it to the list. I've left that out in this example for simplicity.
But the add2list Does check for the size of the list. And when itl_list_max_len is reached: it calls the function to insert the records from the list and to reset the list.
The function that does the actual insert looks like this:
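(Again a sketch/reconstruction, with assumed names.)

def f_ins_itl_list(con):
    """Insert whatever is in the list with one executemany() call, then reset it."""
    global itl_list
    if not itl_list:                    # empty list: nothing to do, avoid errors
        return 0
    cur = con.cursor()
    cur.executemany(itl_list_sql_ins, itl_list)
    n_rows = cur.rowcount               # nr of rows processed by the cursor
    itl_list = []                       # reset the buffer for the next batch
    return n_rows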
If there is data in the list: then insert it.
Note that in this example, we do not (yet) check/re-check the validity of the data before handing it to the cursor. Any serious data-anomaly could throw a nasty error.
As return-value the function reports the actual number of rows processed by the cursor, assuming that it was the nr of inserted records.
This code "compiled and ran" and all seemed Well..
So Far So Good. But did it Help ?
Let's Race It....
To compare, I pasted together a program that will do two loops of n_sec. One loop of original, individual inserts. And another loop of n_sec of list-buffered-inserts. Let's see what comes out faster...
Note: On early testing, I started with n_sec = 120sec of inserts. The Array-insert was so fast it threw an error: ORA-01653. Good Start. I adjusted the timings downwards a bit...
So I have two while-loops that each try to insert "records" as fast as they can for _only_ 10 seconds.
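(Boiled down, the race-harness looks roughly like this; the real tst_ins_label.py does the stats- and time-reporting as well.)

import time

def timed_loop(con, n_sec, ins_func):
    """Call ins_func as fast as possible for n_sec seconds, return the nr of calls."""
    n, t_end = 0, time.perf_counter() + n_sec
    while time.perf_counter() < t_end:
        ins_func(con, 1, 1, f"label nr {n}")   # dummy data; the FK-parents must exist
        n += 1
    f_ins_itl_list(con)       # do not forget: flush any leftover buffered records
    con.commit()
    return n

# n_indiv = timed_loop(con, 10, f_ins_item_label)   # loop 1: row-by-row
# n_list  = timed_loop(con, 10, add2list)           # loop 2: list-buffered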
The first loop does 10 seconds of "individual inserts", it uses the original call for each individual record. The results to stdout looked like this:
Notice the number of loops (records): 2109, or 210.9 records per sec. Then notice: 2113 RoundTrips (minus the 4 from program-startup). Yep, 1 RT per record. Reporting the stats and the commit will add 2 more RTs, and bring the total to 2115 RTs before the next test starts.
(Also notice, I sneakily eliminated hard-parsing to beautify my results a little... )
The second loop does 10 seconds of append-to-list, with the occasional insert-into-table when the list gets to 1000 records. Stdout report looks like this:
Wow... That 10sec loop inserted 339.339 records....? A factor of 150x more. No wonder the first test hit my tablespace-size-limit.
First verification: count (*) in the database, Yep: over 600.000 records (there was a 1st run to eliminate the overhead of hard-parses...). Looks correct.
Second verification : the nr of RoundTrips. Those 339.339 new records, at 1000 records per execute, would have caused 340 RTs.. The reported nr of RTs is 2455. And minus the previous 2115 RTs, that is ... 340 RTs. That seems to concur Very Well.
Final check: V$SQLAREA (and counts) after two(!) runs of the program:
The Statements show up in the Shared_pool, and look at those numbers:
The individual-inserts /* t1 indiv */ have consumed about 273 microseconds per execute, for 1 row per execute, or 273 microseconds of DB-time Per Row.
The list-insert, marked /* t2 list */, with 1000 rows per execute, has consumed 12,854 microseconds per execute, but with 1000 rows per execute, that is Only about 13 microseconds of precious DB-time consumed per Row.
This Thing Rocks!
Some sobering thoughts...
This is essentially an Old Lesson (re)Learned: Row-by-Row = Slow-by-Slow. We Knew this since, ah.. 1995.
The First Thing we missed with the new function was the RETURNING-ID. All of our TAPI functions so far return the ID (primary key) of newly inserted or merged record. In the case of bulk-detail-records that is not a problem. But for inserting new meta-data, new lookup-data or otherwise data that is needed for further processing, this can be an obstacle. In our case, we will only build list-inserts for records where we do not need that return-value. Minor problem so far...
Validation of records by the RDBMS, e.g. Constraints inside the Database, can be more complicated on bulk-inserts. Bulk-error processing is possible, but not always simple. With individual records, it is easier to catch errors with try ... except blocks. In our case, there are records where we don't want bulk (yet) for this reason. You decide how important this is to you.
Extra code means additional dependencies and additional (future-)maintenance. Especially difficult to argue when written to "work around a problem" rather than to add something (functionally-)useful to a system. In this case, I had to write two functions to replace the original (TAPI-)insert. And future programmers/users need to take into account that the leftover-data in the list needs 1-more-insert to clear out.
For this example: Someone will forget to purge the array at some point, and (inexplicably) lose the contents of the last batch...
I would recommend to only apply this trick when you Know it is going to make a Big Difference, and when your team is capable of understanding the additional (brain-)work.
Alternatively, you can search for existing solutions. Certain DataFrames for python may already solve this problem for you. The pandas-dataframe (link) looked promising, but on first-search it did not provide exactly what we were looking for.
Further items to explore..
Record-types. The equivalent of what PL/SQL has with EMP%ROWTYPE could help in defining a structure to hold data. It could make the lists easier to manage and it can do some data-checks before adding data to the list. It would reduce the potential for errors on the actual insert. Maybe Later.
Data-Frames or similar toolkits might "do the work for us". For java there are JOOQ and Hibernate. Python has several DataFrame options, such as Pandas and Apache-pySpark and some of those may have potential. Maybe Later.
Geeky: How big can the array be (in nr-records and/or in memory-footprint) before it shows signs of deterioration? For the moment, any number above, say, 100 will clearly benefit the system by reducing RoundTrips and overhead. But is there an optimum or some upper-limit? Maybe Later.
Summary: Batch-Processing (array-processing) Works !
From this (over-simplified) testrun, I want to point out the Two Main Benefits of Array-Processing:
1. The program managed to insert 150x more records in the same 10sec interval. That is a clear increase in capacity for the app-program. The biggest benefit is in reducing the overhead, the call- and roundtrip-time per record.
2. The consumption of DB-resources on the RDBMS-side is much more efficient as well. Because the RDBMS can now handle "bigger chunks" per call, it spends less time on the same amount of "ingest data". This Benefits the RDBMS as well.
The numbers on this test Really Surprised me. Again. Despite using systems with only single-digit-millisec latency. And I knew RTs were costly, I've seen + fixed this kind of problem before. But I didn't expect the difference would be This Big.
This problem is as old as the app-rdbms dichotomy. And yet we don't seem to learn.
This test in particular also illustrates how un-necessary RoundTrips can slow down Both the App and the RDBMS: not just by losing time waiting for latency, but also from the incurred additional processing.
RoundTrips are the Next Important Challenge in IT...
This test demonstrated: Both components, the Application and the RDBMS, Gain from batch- or array-processing and reduced RoundTrips.
-- -- -- -- -- End of this blogpost, for Now -- -- -- -- --
Appendix 1: links to Scripts.
tst_ins_labels.sql : set up the datamodel.
tst_ins_label.py : the demo program with two while loops
You need the following "importable files" to run the tst_ins_label.py program:
ora_login.py : the login utility (uses dotenv) and the session-info.
prefix.py : contains pp ( *argv ) to print to stdout with timing info.
duration.py : my stopwatch, time-measurement for python programs.
.env : used by dotenv to store + get credentials and other info.
And to verify the results, you can use:
tst_2arr.sql : view SQL in shared_pool, and count records in src_item_label.
Feel free to copy or even re-type.
Re-use of code is often a myth, unless you have typed or at least modified the code yourself.
-- -- -- -- -- End of this blogpost, for Real -- -- -- -- --