Monday, 6 March 2023

A few tips, as you dive into SQL

In previous posts, I proclaimed "Everything is SQL" (link).
And I recalled which tools you can use to explore your database with SQL: SQL Developer and SQLcl (or whichever other SQL client you prefer). (links...)
Now for some tips...


Tip 1: Stay Current, and skilled.
Get familiar with your tools and stay current: check the download sites and keep up with the latest versions. You are going to live in the SQL environment, so you may as well stay up to date.
For SQL Developer, you should probably explore the Web version as well.
For SQLcl, the most important item for me is to keep my $SQLPATH correct, or to modify it for the task or client at hand. When I do demos or presentations, I often use a special, Clean, version of my sql-scripts.


Tip 2: Collect and Save your scripts.
Collect and manage (github) your own set of tool-scripts and familiarise yourself with them. In my case, I have my own scripts for day-to-day examinations and for troubleshooting. I can quickly clone my private set from github onto just about any machine I get to work on (if not, I have a zipfile, and at times I use old-fashioned uuencode to circumvent filters...)


Tip 3: Explore scripts from others.
There is a lot of useful stuff out there, and you should read some of it just for Inspiration. In the end, you will suffer from the "not invented by me" syndrome, but it really helps to see what others did. I use the available tools from Oracle and others.
Oracle provides AWR and ASH (or Statspack for Standard Edition).
Carlos Sierra maintains sqldb360
And for the high-tech folks, there is also the toolset from Tanel Poder
This topic merits a whole article by itself, but you can start by exploring the material from Tanel Poder and Carlos Sierra. Links...


Tip 4: Always establish "Where You Are".
Make sure that you are "looking at the problem" and not at some random database(-clone) running in a container on some developer box. Many people seem to log on to the wrong DB (e.g. pre-prod instead of prod). Make sure you are looking at the correct database or instance.
Sounds trivial, but it is one of the most frequent "mistakes" I come across. You have to check, and Double Check, that you are Actually looking at the right database, and that the supposed problem you see really is the problem the customer is having issues with.
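
A quick first check could be something along these lines (a minimal sketch; con_name needs 12c or later):

select sys_context('userenv','session_user') as user_name,
       sys_context('userenv','con_name')     as pdb_name,
       sys_context('userenv','db_name')      as db_name,
       sys_context('userenv','server_host')  as host_name
from   dual;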

Similarly, when you get mailed a Statspack or AWR report: double and triple check that it was from the correct database, and that the problem you are after really did occur in the time-interval of the report.

The other classic is to run some disastrous statement in Production while assuming you were just connected to some Dev-copy.
Sorry to hammer on about this, but it has happened too often...
For this reason, my most used and most valuable scripts are the ones called pr.sql (to set the prompt), and a number of varieties on it. I tend to customise it for the environment I am looking at. It possibly merits a separate blogpost (future link).


So far the first set of Tips...
The next post will probably be about pr.sql, or about some of the ready-to-use scripts out there.


Tuesday, 28 February 2023

Your Tools: SQLDeveloper and SQLcl


In the previous post (link), I insisted: Everything is SQL, and SQL is Everything.


So if all the information you need is presented in the form of tables and views, and everything is SELECT-able, you should get familiar with SQL and with the tools to run SQL.


Your tools of choice should be SQLcl or SQL-Developer, or the SQL-Developer web variety, if you want to be totally hip  (downloads and information here).

You may also know that SQL*Plus is still available on Every Platform where the RDBMS itself is deployed.


And there are others... if you are used to other SQL-tools, you will find that most of them can connect to an Oracle database and work just fine. For example, TOAD or DBeaver will also do the job (some documentation here).


Running a query from SQL Developer mostly looks like this:

(screenshot: a query in the SQL Developer worksheet, with the results in the grid below)

In there you see I am using SQL to join two views: I am looking for statements executed by SCOTT, with the heaviest IO-statements listed on top. This information, and much more valuable data, is available to query using just the tools and the SQL that you already know.
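
Something along these lines would do the job (a sketch, not necessarily the exact query in the screenshot, and you need select privileges on these views of course):

-- heaviest-IO statements parsed by SCOTT, from v$sql joined to dba_users
select s.sql_id, s.disk_reads, s.buffer_gets, s.executions, s.sql_text
from   v$sql s
join   dba_users u on u.user_id = s.parsing_user_id
where  u.username = 'SCOTT'
order  by s.disk_reads desc;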


But as a DBA, I have a strong preference for using a CLI (Command Line Interface) next to my GUIs or other tools. The big advantage of a CLI is the capability to "script" your commands. Scripts will "store your knowledge" and make it repeatable, re-usable.


In my case, I would use SQL Developer for ad-hoc inspections and queries. But once a query or a report needs to be re-run more than once, or when I think it will be useful in the future, I will create a script. From that point on, the query can be run by calling its filename.sql from the CLI.


An example is the script "pr.sql", a very small script that I use everywhere and all the time: it sets the SQL prompt to tell me which user, which database and which server I am connected to, just to make sure I am always typing at the correct prompt...


SQL> connect scott/tiger@orclpdb1
Connected.
SQL> @pr
set sqlprompt "SCOTT @ ORCLCDB @ oracle-21c-vagrant > "
SCOTT @ ORCLCDB @ oracle-21c-vagrant >



Because "Everything is SQL", that script actually goes out and uses SQL to find the user, the database (or PDB) and the host this database runs on, and then sets the prompt accordingly. 


If you make a habit of creating and using scripts then, over time, you will collect a nice set of them.


The first good reason for using the CLI tools, SQL*Plus or SQLcl, is that they can run scripts quickly using the start command or the @ operator. If you stick with the GUIs, you will at some point get tired of copy-pasting your snippets of SQL commands into the GUI. Running a script from the command prompt takes much less mouse-and-type effort.


And once you are using the CLI, you should investigate the environment variable $SQLPATH (Windows: %SQLPATH%). This variable can contain one or more directories where you store your scripts, so you _always_ have them at your fingertips. Not to mention the useful scripts you might copy from others who have already been there and done that.
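
On Linux, that could be something like the line below in your profile (the paths are just an example; on Unix the list is colon-separated):

export SQLPATH=$HOME/sql:$HOME/sql/demo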


In Summary:

 - Download and get to know the tools.

 - Explore the data, find your information.

 - Consider creating your own set of scripts.

 - Use environment-variables to always have the scripts available.


Next blogs...: 

Tips to avoid the mistakes I made, and still see others making

And links to where you can find good re-usable scripts already built by others. 

(insert future links here..)


And... I told you: "Everything is SQL, and SQL is Everything".




Wednesday, 22 February 2023

With Oracle RDBMS: Everything is SQL

Everything is SQL, and SQL is Everything.

One of the things I really like about the Oracle RDBMS is that it follows the "12 rules of Codd" very well.

And what I specifically like: all of the RDBMS information is accessible via SQL.

In Oracle, “Everything is a Table or View”. 
And thus, everything can be queried by SQL. 

If you need to observe or “diagnose” your Oracle Database, you can (and should) do so via SQL. And you will also learn a few things every time you delve into something. 

Disclaimer: I'm writing this because I am trying to convince some newbies and DevOps folks to learn just a little more about Oracle...

This blogpost is to show you how you can Get to Know Everything about your Oracle database by just using the SQL interface. For example: I can ask which tables are in my current schema with a query on a view called "USER_TABLES":

SCOTT @ ORCLPDB1 > select table_name from user_tables order by 1 ;
TABLE_NAME 
-----------------------
DEPT
EMP
LOG_STATS
PT
PT_C
PT_CC
PT_CCC
T
T_C
T_CC
T_CCC

11 rows selected. 

SCOTT @ ORCLPDB1 > 

And if you want to dig deeper: you can ask the database where it stores the actual datafiles, for example from v$datafile: 

SCOTT @ ORCLPDB1 > select name from v$datafile ; 
NAME 
------------------------------------------------------------- 
/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf

SCOTT @ ORCLPDB1  >

And in that spirit, notably following rule 4 of Codd, all the data and metadata of your Oracle database can be found via SQL.

I strongly encourage any DBA (and Developer, DevOps, etc..) to explore the available documentation on the DBA_% and V$ views. Everything you need to know about your database is There. And much more. All reachable with SELECT statements.
Link to the official doc here ....
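
In fact, the dictionary even describes itself: the DICTIONARY view (synonym DICT) lists the dictionary views, each with a one-line comment. For example, to go looking for session-related views (just an illustration):

select table_name, comments
from   dictionary
where  table_name like '%SESSION%'
order  by 1;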

note: Compare Oracle to unix/linux. Those of you who were brought up with unix/linux will recognise this. In unix, "Everything is a File".

My fingers are itching to show more SQL, and to show how you can inspect the darker corners of your database with some nifty SQL statements.

Also, I want to make some points about tooling, and issue some tips and some Warnings. With powerful tools comes some responsibility to use them correctly.
The more I think (write) about it, the more there is to elaborate upon.
Guess I have to do some more blogs...

But my main message would be: Check the 12 rules of Codd, and explore some of the documentation about the internal views. There is your starting point.

For now, just remember: Everything is SQL ! 

Tuesday, 11 October 2022

You Should Speak at Conferences. #JoelKallmanDay

 Conferences are Great - you should go (and Speak)

Instead of some geek-content, this is a call-to-speakers. The trigger for this post is two-fold: 1) the Joel-Kallman-Day initiative from Tim Hall, and 2) the fact that some IT conferences are still searching for relevant, new/local speakers.


So..

Conferences are Great, and You Should go + Speak! 



Conferences are for getting out of the (home)office, to discuss topics, meet people, and  learn new things, ideas, concepts, trends. And for the food+drink, although that varies from place to place.

And the best way to get to a conference is ... to do a Presentation.

Especially when I had to "ask my boss", it would always Greatly help if I could say: "I'm Speaking at XYZ, can you please budget the trip+time" (maybe more on that later).


So... What to Speak about?

Here is one of my best 3-step tricks to find a topic:

1. Find the biggest Obstacle you had at work in the last 12 months (e.g. learning python, creating K8s pods, Designing your Datamodel, arguing with your architect..). That can be your Topic.

2. Now write down what you want to Tell / Warn / Laugh to others about (The Quirks of Python, Yaml/Ansible-syntax, how to manipulate the architect). This will be your "Message".

3. Turn that into slides notably with a clear Conclusion. Give your listeners a "Take Home Lesson" at the end. (I do max 25 slides, but your style may vary).

There you are!


My reasoning behind this is threefold:

First, you will be a better speaker if you have a grain of Passion, a Mission. Something you Really Want to Communicate. And even better: something you have Experienced Yourself. 

Secondly, you probably were not the only one with whatever challenge you had. Others will have been in the same situation and will recognise it. Those will be your Audience, and they will spark the discussion afterwards.

Thirdly: Because you are "On a Mission" to convey your learnings, you will be better motivated, and you will more easily overcome any stage-fright you may have.


There is more to speaking, but  you will learn in the process of Doing! 

A good source of information is also the MASH program (link). And there are the practicalities, things like: 

Avoid boring slides (important, but less important than Your Mission)

How to work towards the conclusion (important, but less important than your intrinsic Motivation)

Use of Clip-art and moderate humour (important, not Essential to your Message)

How to determine your Tempo, your Timing. Your first presentation will run over; that is normal, and not a problem: organizers will keep time, or not. And running late into other talks is (partly) the problem of the audience. Truly Motivated listeners may even stay behind and harass you with Questions. The secret to not running over is: a) do the presentation a few times, and b) remove irrelevant content (this can be hard, especially if you want to tell a "whole story").

Note: running over is Totally Impolite, because it creates problems for the audience, for other speakers, and for the organizers. If an experienced speaker goes over time, you can tell him off. But every beginner-speaker should be allowed to run over (once, just once :-) ). If a "sales-pitch-speaker" runs over time: Ban Him (that is, in 99% of cases, a him) and Shame his Product.


Oh, and about that food+drink: after the event, tell the organizers how good or bad their catering was. Some will learn, some won't.

Now go out and Enjoy!



footnote: Another reason to Re-Shout this message is that, notably in the Ora-sphere and in some of the dev-rel oriented conferences, the number of speakers seems to decrease, and I see a lot of "habitués" and company-sponsored professional Dev-Rel people (pre-sales in disguise) repeating their messages. Some are Great People, and I love to discuss with them, but some others are ...

foot-footnote: (deleted, too Rant-ty, NSFW)





Thursday, 10 October 2019

OGB Appreciation Day: Efficiency with Partitions.

Partitioning is Very Useful - If ... you take the effort to Design for it properly.

What I like best about Partitioning is the "life cycle feature". You can remove large amounts of (old) data with a single statement in the blink of an eye, and with very little Redo incurred.

If you want, you can also Add or Move data around in the blink of an eye.

Google for :
Alter table Drop Partition
Alter table Add Partition


Use Partitioning for Fast and Efficient Data Manipulation (notably Deletes)

The main "successful" use I've seen from partitioning is this: (Re)Moving large amounts of data in the blink of an eye, without incurring Undo+Redo for the total amount of data, and without locking.

Very little Redo (it is a DDL operation, not DML)
No Locking.
No Index Rebuilding (provided you use only local indexes).
Fast!
Efficient!


Your Mission, should you choose to accept it: 
Verify this for yourself!

Here is your homework (a condensed sketch of it follows after the lists).

If you are serious about trying / using this feature:
 - Create a partitioned table with at least 2 partitions.
 - Insert data into both partitions, at least 100K records (and commit)
 - generate statistics, verify the approx row-counts in each partition.
 - set Autotrace on
 - set timing on
 - Delete all the data in 1 partition with a "Delete from ... Where ..."
 - Note the time and the amount of Redo involved.
 - commit (just don't forget...)
 - Re-generate stats and verify the row-counts: is your data deleted?

Now for the Partition-operation...
 - Re-create the same table, same partitions, with same data (commit!)
 - generate + verify stats...
 - Set autotrace + timing again
 - Drop a partition: Alter table ... drop partition ;
 - note: the implicit commit... this is a DDL operation.
 - Note the time and the amount of redo.
- Verify the data is gone.. (trust-no-1....)
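
Putting that homework together, a minimal sketch could look like this (the table name, row counts and dates are just examples, autotrace assumes the usual plustrace setup, and my real script, linked below, differs a bit):

-- create a small range-partitioned table
create table part_demo (
  id       number,
  dt       date,
  payload  varchar2(100)
)
partition by range (dt) (
  partition p_2022 values less than (date '2023-01-01'),
  partition p_2023 values less than (date '2024-01-01')
);

-- load +/- 200K rows, spread over both partitions
insert into part_demo
select rownum, date '2022-06-01' + mod(rownum, 300), rpad('x', 100, 'x')
from   dual connect by level <= 200000;
commit;

exec dbms_stats.gather_table_stats(user, 'PART_DEMO')

set autotrace on statistics
set timing on

-- the DML way: delete row by row, note the elapsed time and the redo size
delete from part_demo where dt < date '2023-01-01';
commit;

-- ... re-create and re-load the table, then the DDL way:
alter table part_demo drop partition p_2022;
-- note: DDL commits implicitly; now compare time and redo with the delete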


QED.

Now how cool is that?
And so Simple...


Go ahead, do it, and let us know the differences in Time and Redo in the comments...

My (not quite conforming) example script is here

And if You can Do this:
Congratulations! You are on your way to Master the use of Partitioning.



Now, there is a lot more to Partitioning, of course.

My opinion, in short, rather simplified statements:
 - You need to "design" partitioning in from the start. A "bolt-on" to an existing data-model will fail 99 times out of 100.
 - Use (only) automatic interval partitioning, to spare yourself from having to pre-create partitions regularly (you will forget...) - see the sketch after this list.
 - Every SQL statement that will Ever Access the table needs a partition-key in the where-clause.
 - Avoid global indexes if possible.
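
For that interval-partitioning point, the table definition could look something like this (a sketch; available from 11g onwards):

create table sales_demo (
  id  number,
  dt  date
)
partition by range (dt)
  interval (numtoyminterval(1, 'MONTH'))
  ( partition p_start values less than (date '2023-01-01') );

New monthly partitions then appear automatically as the data arrives.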

NB: I know the "maintain global indexes" option was created to prevent and fix some of my pet-problems-with-partitions, but I am still skeptical about using it in live (OLTP) systems.
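
For reference, that maintenance is the update-indexes clause on the partition DDL, something like (hypothetical names):

alter table part_demo drop partition p_2022 update global indexes;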

NB2: Kudos to Hermann (dot) Baer (at oracle dot com), a.k.a. @sdjh2000, for constantly improving the capabilities of the partitioning option. Good Work on a Great Feature.



That's all for now folks.

With a Large Kudos to Tim Hall (@oraclebase) for the yearly OGB-appreciation day (link). Makes me blog at least once per year...

OGB-appreciation day was f.k.a. the OTN-Appreciation-Day,
re-branded as the ODC-Appreciation day,
re-branded as the OGB-Appreciation Day.
Who knows what next year will bring - Oracle Cloud Dev-Ops (OCD) Appreciation day?


And if you got this far, a word of warning:
Do Not Ever Make Fun of Oracle Buzzwords or Hashtags.
Humour and Oracle only work if it is Oracle-Approved humour.
(mandatory buzzword compliant content...)
Remember: We all work for Larry, only some of us don't know it yet.

CU at some event, at some webinar or next year on this blog-event.

Hashtag : #ThanksOGB

Wednesday, 10 October 2018

ODC Appreciation Day..

It is that time of year: Write about your favorite feature and say "Thank You Oracle"..

This year, I'll pick Oracle Golden Gate

Short story: if you need to replicate data, GG is one of your easy choices.

Slightly Longer story:

I was lucky: a customer chose GG to replicate data from one country to another in a major move of IT systems and data (yes, to the cloud, to those large Remote Database Systems in AWSome land...).

GG helped us run in sync for a number of weeks, and the final cutover was a breeze.
GG also (sort of) kept a replica open back to our original datacenter, so management felt they could potentially chicken out and move back to the old provider.

The whole process was easier than I expected,
and a Large Tip of the Hat to Golden Gate is appropriate.


(the issues, the problems, the quirks.. Later)

The reason for this post: Tim Hall
https://oracle-base.com/blog/2018/10/11/odc-appreciation-day-effective-evangelism-staying-positive/

The link to Golden Gate (Hi Bobby!)
https://www.oracle.com/middleware/data-integration/goldengate/



Friday, 5 January 2018

What is SmartDB

What is #SmartDB ?

This question came up in the twitter discussion about #SmartDB, and all the advantages it brings (link to twitter).

Over the last year or so (and way before that, with the Helsinki declaration in 2008), Toon Koppelaars has given us the reasons and guidelines for #SmartDB, and it boils down to "do the work in the database" (correct?).
So the shortest description, IMHO, would be : 

SmartDB: 
Any IT system using a database should do as much as possible of its processing inside the Database, and as little as possible of its processing in other layers.

(Agreed?)

TL;DR ? Read no further.  ;-)


The background of this approach is that it leads to the least complexity, the fewest components, the fewest round-trips, the least overhead, and the least complicated troubleshooting (only one component to examine and fix...).

Also, my too-short definition doesn't (yet) include the need to apply sound database practices. Good IT systems start with good (system) design, based on requirements. It also includes things like 3NF, ACID, notably resilience, and adequate security based on minimum privileges and a minimum exposed surface. Then there are "scalability", "upgrades" and "monitoring", to allow the system to remain in action over longer periods of time and under various loads. Sustainability, if you like.

To me, all the above still makes sense. And I feel comfortable to design/build an IT system given those guidelines. 
Of course, some are not content with an extremely short definition, and others demand a more elaborate description or a how-to cookbook-guide. All of that, Toon and Bryn are trying to provide in various presentations, videos and white papers on the subject.

I'm going to add a suggestion.

To better define and describe SmartDB, I suggest to follow these steps:
- Requirements, 
- Reasoning, 
- Recommendations.
Those three steps should lead us to a better description and thus to a better way to “evangelise” the SmartDB concept. 

To Elaborate each step:

The (list of) Requirements should state why SmartDB is needed, and which problems the concept tries to solve.

By following logical Reasoning, with the knowledge and technology available, we can explain how each requirement is addressed in what is the most efficient way known to us.

To finalize, we create a list of Recommendations (if not: Directives!) on how to implement SmartDB. The recommendations should, at first, not be tied to any particular database or programming language. Those details can be filled in later. Each "vendor" should do this for its own product, and hopefully stay within the concept of SmartDB.

The result of this exercise should be (yet another) white paper, 10 pages max, and some presentation material that we could throw at Developers, Architects, Managers and even Dev-Ops and UX wizards, to explain and convince them.

Some history:
A long time ago, in a cubicle-space far far away, the OFA standard was defined via a similar process, in the early 90s.
I am that old. I recall how this Optimal Flexible Architecture was created and "explained" (OFA was the mother of all Oracle standards - link needed).
OFA was created roughly in those three steps: Requirements, Reasoning and Recommendations (Directives!). I have not checked the original paper for years, but this is what I recall...

This process of problem identification and reasoned solutions was clear, and could be explained to most users (sysadmins, DBAs) at the time. The standard was widely adopted and cited in the Oracle-DBA world.


So, to recapitulate, I think we should follow a similar path to define the SmartDB concept: Requirements, Reasoning, Recommendations. Those three steps should lead us to a better description and thus to a better way to explain, promote, and verify the SmartDB concept. 

(I know, I know, Bryn is going to say that all of the above was covered 11.2 years ago with EBR, and that we only need 4 schemas... but still, it won't hurt to re-assert some items...)


Additional notes:

note1: I have barely mentioned Oracle (or PostgreSQL, or MySQL, sukkel-Srvr, or even sp-hna). The SmartDB concept should not be locked into a particular product.
note2: I have not mentioned any theoretical or academic knowledge, but I would assume IT people to be familiar with handling requirements, IS-design, ACID, Codd's 0+12 rules, UX, some OOP, various methods of testing, etc...
note3: I have not mentioned any procedural language, but SmartDB does imply the use of some stored-procedure dialect.

note4:… there is much more, but it needs to be discussed.