DBA Blogs

How to delete data from a table

Tom Kyte - 1 hour 33 min ago
Will a normal delete statement such as <code>delete from table where condition='a';</code> work for a table whose data goes back to 2014? I want to delete all data starting from 2014 up to now.
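Assuming the table has a date column recording when each row was created (the table and column names below are hypothetical), a date-range delete is one possible shape; for very large volumes, batching the deletes or a partition-based approach may be preferable:

<code>
-- a minimal sketch; my_table and created_date are hypothetical names
delete from my_table
 where created_date >= date '2014-01-01';

commit;
</code>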
Categories: DBA Blogs

Column default value as another column from same table

Tom Kyte - 1 hour 33 min ago
Hello, We have a requirement to add a new column to a table, with a default value of column1 || column2. For various reasons the application code cannot be changed to populate this new column, hence the default value. I thought of two approaches. One is a trigger that updates the new column whenever column1 or column2 changes: the new column is backfilled once, and the trigger then handles any future changes. The other approach is a virtual column. It now seems that direct inserts or updates of the new column may be required, which rules out the virtual column. As for triggers, the web is full of articles saying they are problematic, and I am having a tough time arguing that for a low-volume table (in both record count and transaction rate) a trigger may not be the worst idea, though I understand the maintenance headaches and side effects. Is there any other approach? Also, why does Oracle not support defaulting a column to the value of another column? Thank you, Priyank
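For reference, the trigger approach described above might look like this sketch (table and column names are hypothetical):

<code>
alter table t add (column3 varchar2(200));

-- one-time backfill
update t set column3 = column1 || column2;

-- keep the new column in sync on future changes
create or replace trigger t_column3_trg
before insert or update of column1, column2 on t
for each row
begin
  :new.column3 := :new.column1 || :new.column2;
end;
/
</code>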
Categories: DBA Blogs

Single row cursor for short text string from dual produces CHAR(32767)

Tom Kyte - 1 hour 33 min ago
Hi, I have tried 19.9 - 19.11 and have noticed some suspicious behaviour regarding dual. Here is an example. Note that mytab returns only a single row; I then dump the datatype to output. <code>
SQL> set serveroutput on size unlimited;
declare
  a clob;
  l_msg_content_begin CLOB := EMPTY_CLOB();
  CURSOR cur IS
    with mytab as (
      select 'SOMERANDOMTABLE' as main_table from dual
      --union select 'ALSOSOMERANDOMTABLE' as main_table from dual
    )
    select main_table, lower_main_table
    from (
      select main_table, lower(main_table) as lower_main_table
      from mytab
    )
    order by 1 desc;
  rec cur%rowtype;
BEGIN
  FOR rec IN cur LOOP
    dbms_output.put_line(rec.main_table);
    select dump(rec.lower_main_table) into a from dual;
    dbms_output.put_line(a);
    -- ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    -- With only one row from dual, you get that error if you uncomment the
    -- "l_msg_content_begin := ..." line below; with 2 or more rows, all is good.
    --l_msg_content_begin := 'blabla '||rec.lower_main_table||' blablabla '||rec.lower_main_table||'bla'||UTL_TCP.CRLF;
  END LOOP;
  --dbms_output.put_line(substr(l_msg_content_begin, 1, 2000) || 'AA');
END;
/
</code> And here you can see the datatype is CHAR (Typ=96); check the length: the whole string is padded with spaces (ASCII 32 is a space). <code>
SOMERANDOMTABLE
Typ=96 Len=32767: 115,111,109,101,114,97,110,100,111,109,116,97,98,108,101,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,....................
</code> It seems the <b>lower()</b> function somehow produces this strange behaviour. Is this normal? Also, when I dump rec.main_table instead (so not the lower() output), <code>select dump(rec.main_table) into a from dual;</code> then I get type CHAR and the actual length, as expected. By contrast, when I also uncomment the second line, <code>--union select 'ALSOSOMERANDOMTABLE' as main_table from dual</code>, the result is as expected: <code>
SOMERANDOMTABLE
Typ=1 Len=15: 115,111,109,101,114,97,110,100,111,109,116,97,98,108,101
ALSOSOMERANDOMTABLE
Typ=1 Len=19: 97,108,115,111,115,111,109,101,114,97,110,100,111,109,116,97,98,108,101
</code> The type is VARCHAR and the length is the actual length. Regards Raul
Categories: DBA Blogs

The most peculiar Oracle situation in my career: Oracle changes how it records a block read, from direct read to not recording it in any I/O wait event at all

Tom Kyte - 1 hour 33 min ago
Greetings, I have an extremely perplexing situation where Oracle changes how it records a block read. Last week it wasn't counting block reads in any I/O wait event at all; this week it started adding them to the "direct read" wait event. This is occurring in our production environment; however, I was able to reproduce the situation in our test environment with test data. I used the dba_source view to create two test tables, 1.2 million rows for table1 and 4 million rows for table2: <code>
-- Table1 (1.2 Mil records)
create table table1 as select * from dba_source where rownum <= 1200000;
-- Table2 (4 Mil records)
create table table2 as select * from dba_source;

create index t1_pk on table1(owner);
create index t2_pk on table2(owner, line);
exec dbms_stats.gather_schema_stats('JOHN');
</code> Then I ran this select statement 120 times: <code>select count(*) from Table1 where line=1 and owner in (select Table2.owner from Table2 where Table2.owner=Table1.owner) order by owner;</code> In some cases Oracle 19c records the I/O in "direct path read" wait events, and in other cases it doesn't seem to report it in any I/O wait event. That is so odd. TEST CASE 1: the IOStat summary doesn't record the I/O, nor does any wait event: <code>
Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                          Total Wait       Avg   % DB  Wait
Event                          Waits      Time (sec)      Wait   time  Class
------------------------------ ---------- ----------  --------  -----  --------
DB CPU                                          20.2             99.6
PGA memory operation                2,524        .1    20.27us     .3  Other
Disk file operations I/O              520         0    59.49us     .2  User I/O
db file sequential read               211         0    12.33us     .0  User I/O
Parameter File I/O                      8         0   257.00us     .0  User I/O
enq: RO - fast object reuse             2         0   784.50us     .0  Applicat
control file sequential read          209         0     5.32us     .0  System I
log file sync                           1         0      .95ms     .0  Commit
SQL*Net message to client             546         0     1.53us     .0  Network
SQL*Net more data to client            22         0    33.77us     .0  Network

SQL ordered by Gets               DB/Inst: ORACLE/stbyoracle  Snaps: 2727-2728
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU   - CPU Time as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 3,399,948
-> Captured SQL account for 98.1% of Total

                              Gets             Elapsed
 Buffer Gets  Executions  per Exec  %Total  Time (s)  %CPU   %IO  SQL Id
------------  ----------  --------  ------  --------  ----  ----  -------------
   3,241,728         120  27,014.4    95.3      14.4  99.5     0  82mps751cqh84
Module: SQL*Plus
select count(*) from Table1 where line=1 and owner in (select Table2.owner
from Table2 where Table2.owner=Table1.owner) order by owner

IOStat by Function summary        DB/Inst: ORACLE/stbyoracle  Snaps: 2727-2728
-> 'Data' columns suffixed with M,G,T,P are in multiples of 1024,
   other columns suffixed with K,M,G,T,P are in multiples of 1000
-> ordered by (Data Read + Write) desc

                Reads:  Reqs     Data    Writes:  Reqs     Data    Waits:   Avg
Function Name   Data    per sec  per sec Data     per sec  per sec Count    Time
--------------- ------- -------  ------- -------  -------  ------- -------  --------
LGWR            3M          1.5    .022M 10M          3.6    .075M     678  368.73us
Others          7M          2...
</code>
Categories: DBA Blogs

Method to measure performance gain of clustered table vs non-clustered tables

Tom Kyte - 1 hour 33 min ago
I have 2 pairs of parent and child tables; 1 pair is stored in a cluster and the other is non-clustered. The primary key of the parent table (which is the foreign key in the child table) is the cluster key, and an index on the cluster has also been created. The structures of the 2 parent tables are identical, the structures of the 2 child tables are identical, and the records in the 2 pairs are identical. I want to measure the performance gain of clustered vs non-clustered tables for a SELECT statement. I am using SET TIMING ON and printing the elapsed time after the SELECT is executed; the SELECT statement is also identical for both. I was expecting the elapsed time for the clustered tables to be consistently less than for the non-clustered tables, but it is not. Can you please explain this? Also, is there another way to measure clustered vs non-clustered performance, using autotrace or explain plan?
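Since elapsed time is noisy (caching, parsing, and background activity vary between runs), logical I/O is usually a steadier basis for this comparison. A sketch of how autotrace could be used (table and column names are hypothetical):

<code>
set autotrace traceonly statistics

-- run the identical query against each pair and compare
-- "consistent gets" and "physical reads" instead of elapsed time
select c.*
from   parent_clu  p join child_clu  c on c.parent_id = p.parent_id
where  p.parent_id = 42;

select c.*
from   parent_heap p join child_heap c on c.parent_id = p.parent_id
where  p.parent_id = 42;

set autotrace off
</code>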
Categories: DBA Blogs

Tracking the Standby Lag from the Primary

Hemant K Chitale - Sun, 2021-05-09 10:38

 Here is a quick way of tracking the Standby Lag from the Primary.

This relies on the information in V$ARCHIVE_DEST on the Primary.

Note that this query will not work if the lag is so great that the SCN_TO_TIMESTAMP mapping fails (because the underlying table holds only a limited number of records) OR if the Standby instance is shutdown and the Primary cannot communicate with it.


Note : The lag based on "SCN_TO_TIMESTAMP" is always an approximation.  

SQL> l
1 select scn_to_timestamp(current_scn) - scn_to_timestamp(applied_scn) Time_Diff
2 from v$database d,
3* (select applied_scn from v$archive_dest a where target = 'STANDBY')
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 00:41:09.000000000

SQL>
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:07:22.000000000

SQL>
SQL> l
1 select scn_to_timestamp(current_scn) - scn_to_timestamp(applied_scn) Time_Diff
2 from v$database d,
3* (select applied_scn from v$archive_dest a where target = 'STANDBY')
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:07:58.000000000

SQL>
SQL> l
1 select scn_to_timestamp(current_scn) - scn_to_timestamp(applied_scn) Time_Diff
2 from v$database d,
3* (select applied_scn from v$archive_dest a where target = 'STANDBY')
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:13:16.000000000

SQL>
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:13:37.000000000

SQL>
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000000 00:00:00.000000000

SQL>


Here, the lag was 4 days and it took some time for the Standby to catch up with the Primary.
(This is my lab environment, not a real production environment at my workplace, so don't ask how I managed to create a lag of 4 days or how long it took for the Standby to catch up with the Primary.)

Note : If the Standby database is down and/or the lag is very high, you will get this error :
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1

for the "applied_scn" from v$archive_dest.  (If the Standby is down, the value for "applied_scn" in v$archive_dest on the Primary is "0".)


If you have access to the Standby you can run this query :

select name, value from v$dataguard_stats where name like '%lag'


The demo above is only a quick way of checking from the Primary, without accessing the Standby.
Categories: DBA Blogs

Zip a .csv file present at DB directory using PL/SQL

Tom Kyte - Fri, 2021-05-07 13:46
We have a requirement where we generate .csv files from the database and place them in a DB directory. We want to zip these .csv files so that their size is reduced. Could you please suggest a way to achieve this using PL/SQL?
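If APEX is installed in the database, one possible approach (a sketch only; the directory and file names are hypothetical) is to load the .csv as a BLOB, zip it with APEX_ZIP, and write the result back with UTL_FILE:

<code>
declare
  l_csv   blob;
  l_zip   blob;
  l_bfile bfile := bfilename('MY_DIR', 'report.csv');  -- hypothetical directory/file
  l_dest  integer := 1;
  l_src   integer := 1;
  l_out   utl_file.file_type;
  l_pos   pls_integer := 1;
  l_len   pls_integer;
begin
  -- read the csv from the DB directory into a temporary BLOB
  dbms_lob.createtemporary(l_csv, true);
  dbms_lob.open(l_bfile, dbms_lob.lob_readonly);
  dbms_lob.loadblobfromfile(l_csv, l_bfile, dbms_lob.lobmaxsize, l_dest, l_src);
  dbms_lob.close(l_bfile);

  -- zip it
  apex_zip.add_file(p_zipped_blob => l_zip, p_file_name => 'report.csv', p_content => l_csv);
  apex_zip.finish(p_zipped_blob => l_zip);

  -- write the zip back to the same directory
  l_len := dbms_lob.getlength(l_zip);
  l_out := utl_file.fopen('MY_DIR', 'report.zip', 'wb', 32767);
  while l_pos <= l_len loop
    utl_file.put_raw(l_out, dbms_lob.substr(l_zip, 32767, l_pos), true);
    l_pos := l_pos + 32767;
  end loop;
  utl_file.fclose(l_out);
end;
/
</code>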
Categories: DBA Blogs

Is it safe to re-sequence table columns using invisible columns ?

Tom Kyte - Fri, 2021-05-07 13:46
Hello Team, First of all, thanks for all the good work you are doing. I'd like your help with a query related to re-sequencing table columns using invisible columns. Is it safe to change the order of columns in a production environment, following the method described in the following link? https://connor-mcdonald.com/2013/07/22/12c-invisible-columns/ We tested it and were not able to find anything unusual; however, are there any particular "gotchas" we should look out for? I know that ideally the order of table columns should not matter. However, in our situation the codebase can have legacy code that doesn't use column names in insert statements. Pasted below is the detailed scenario of how and why we are planning to use this. Thanks, A ---------------------------------------------- Our requirement is to encrypt a column in existing tables in the PROD environment. These tables can have hundreds of millions of rows, and the task has to be done during a downtime window that is not large enough. To achieve this, we are trying to do as much work as possible outside the downtime window. Our plan is to add an invisible column to each table; data from the original column will be encrypted and stored in this invisible column. This can be done outside the downtime window and will not affect day-to-day operations, and we also have a mechanism to identify and handle deltas in the original column. The only task left for the downtime window is to move values from the invisible column to the original column: we will make the invisible column visible and swap its name with the original column, after which the redundant original column can be dropped. This approach works fine except that the order of the columns changes; the encrypted column now appears as the last column in the table. Ideally the order should not matter, but these tables are used by some applications with legacy code that inserts without specifying column names. We are exploring whether we can add the new column at the position of the original column.
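For reference, the trick in the linked post relies on the fact that an invisible column moves to the end of the column order when it is made visible again; making every column after the desired position invisible and then visible again effectively moves the new column into place. A minimal sketch on a hypothetical table:

<code>
create table t (a number, b number, c number);

-- goal: have c presented first
alter table t modify (a invisible);
alter table t modify (b invisible);
alter table t modify (a visible);
alter table t modify (b visible);

-- DESC t now lists the columns as c, a, b
</code>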
Categories: DBA Blogs

Shuffle quantities between buckets

Tom Kyte - Thu, 2021-05-06 19:26
Hi Tom, I am given the "current" allocation of items to eight buckets, and I want to make it more efficient by filling bucket A as much as possible, then bucket B, then bucket C (as indicated by the "priority"). This is done by taking as many P1 items as possible from bucket H and reassigning them to bucket A, then to bucket B, and so on, until all of them are allocated; then the next lowest-priority bucket is taken and the process repeats. How much of P1 and P2 each bucket can hold is defined in the volumes table, i.e. each of the buckets A through H can hold quantities in multiples of eight (P1) and seven (P2) in the sample data. I also want round-up and round-down logic to distribute quantities nearly evenly across buckets when one bucket holds too big a quantity. The items P1 and P2 are completely independent, and one's result should not impact the other. Height, weight, and width don't matter here, so they are not present in the sample data. I started with the code below but couldn't make the round-up and round-down cases work. Also, when quantity is moved into two or more buckets from one bucket, or moved out of two buckets into one, can we show a single comma-separated row instead of multiple step-by-step rows? And when one row has too much quantity, can we implement round-up and round-down logic to distribute the quantities nearly equally across buckets, in multiples of the quantities in the values table? E.g. if we set the quantity to seventy-two in bucket H for part P1, the current result gives five rows for bucket H; can we round buckets A-H to sixteen each and then leave the remainder in bucket H?
Categories: DBA Blogs

APEX Message box label

Tom Kyte - Thu, 2021-05-06 01:06
Is it possible to change the labels of the confirm dialog buttons from "Cancel/Ok" to "No/Yes" in APEX?
Categories: DBA Blogs

Is there an Oracle document that has a checklist to be able to answer whether a database server will handle "peak" load

Tom Kyte - Thu, 2021-05-06 01:06
Greetings, A question that comes up from clients every few years is to predict whether the Oracle database server will be able to handle a new application's peak load. Instead of trying to think of everything that needs to be considered on the fly, it would be great if there were an Oracle document with a checklist of all the questions we must answer, so that we can give the client a definite answer: yes, we can predict it if x, y, and z are provided. I know that in most cases this will be nearly impossible, as it will take too much time and we can't control the variables for other apps that share the same resources (database, network, SAN, etc.). For instance, the network and SAN are usually shared with the database server, so we would need the peak loads of all the other applications plus the expected maximum throughput of the network and SAN. Thanks for your help, John
Categories: DBA Blogs

RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece

Hemant K Chitale - Wed, 2021-05-05 09:40

 You are attempting to restore a database to another server.  

So, you have verified that you have controlfile and datafile backups on the source server  :



RMAN> list backup of controlfile;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
51 Full 11.52M DISK 00:00:01 20-FEB-21
BP Key: 51 Status: AVAILABLE Compressed: NO Tag: TAG20210220T114245
Piece Name: /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
Control File Included: Ckp SCN: 1093419 Ckp time: 20-FEB-21

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
55 Full 11.52M DISK 00:00:02 04-MAY-21
BP Key: 55 Status: AVAILABLE Compressed: NO Tag: TAG20210504T232054
Piece Name: /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
Control File Included: Ckp SCN: 1126526 Ckp time: 04-MAY-21

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
56 Full 11.48M DISK 00:00:01 04-MAY-21
BP Key: 56 Status: AVAILABLE Compressed: NO Tag: TAG20210504T232851
Piece Name: /home/oracle/controlfile.bak
Control File Included: Ckp SCN: 1126757 Ckp time: 04-MAY-21

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
57 Full 11.52M DISK 00:00:02 04-MAY-21
BP Key: 57 Status: AVAILABLE Compressed: NO Tag: TAG20210504T232853
Piece Name: /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
Control File Included: Ckp SCN: 1126766 Ckp time: 04-MAY-21

RMAN>


You have copied the backups to the new target server and attempt to restore :

oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed May 5 22:27:26 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database (not started)

RMAN> startup nomount;

Oracle instance started

Total System Global Area 1207958960 bytes

Fixed Size 8895920 bytes
Variable Size 318767104 bytes
Database Buffers 872415232 bytes
Redo Buffers 7880704 bytes

RMAN> restore controlfile from '/home/oracle/controlfile.bak';

Starting restore at 05-MAY-21
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 05/05/2021 22:27:47
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece

RMAN>
RMAN> quit


Recovery Manager complete.
oracle19c>ls /home/oracle/controlfile.bak
/home/oracle/controlfile.bak
oracle19c>ls /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
/opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
oracle19c>ls /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
/opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
oracle19c>ls /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
/opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
oracle19c>


So, why do you get the RMAN-06172 error ?  All the controlfile backups, including the manual backup to /home/oracle/controlfile.bak and the three autobackups (one from February 2021 and two from 04-May-2021), are available.

oracle19c>oerr rman 6172
6172, 1, "no AUTOBACKUP found or specified handle is not a valid copy or piece"
// *Cause: A restore could not proceed because no AUTOBACKUP was found or
// specified handle is not a valid copy or backup piece.
// In case of restore from AUTOBACKUP, it may be the case that a
// backup exists, but it does not satisfy the criteria specified in
// the user's restore operands.
// In case of restore from handle, it may be the handle is not a
// backup piece or control file copy. In may be that it does not
// exist.
// *Action: Modify AUTOBACKUP search criteria or verify the handle.
oracle19c>
oracle19c>ls -l /home/oracle/controlfile.bak
-rw-r-----. 1 root root 12058624 May 4 23:28 /home/oracle/controlfile.bak
oracle19c>ls -l /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
-rw-r-----. 1 root root 12091392 Feb 20 11:42 /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
oracle19c>ls -l /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
-rw-r-----. 1 root root 12091392 May 4 23:20 /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
oracle19c>ls -l /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
-rw-r-----. 1 root root 12091392 May 4 23:28 /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
oracle19c>


You get the "error" message that there are no AUTOBACKUPs because the "oracle19c" account is unable to actually *read* those pieces.  It can list them using "ls" because it has permission to read the OS folders containing them, but it does no have permission to read the files owned by root without having granted read permission.

So, before you start wondering about your AUTOBACKUP configuration or search criteria specification like "RESTORE CONTROLFILE FROM AUTOBACKUP MAXDAYS 30",  check if the backup pieces are readable.


Categories: DBA Blogs

Avoiding overlap values...

Tom Kyte - Tue, 2021-05-04 12:26
Hi Mr Tom, You said you have a "trick" for the following problem; it would be nice if you could tell me about it. Thanks. A form (Forms 5.0) with tabular style is displayed like below:

to   from  discount
--   ----  --------
10   40    1.5
50   65    2.5
70   90    1.2
. . . .
60   99    ----> should not be allowed.
65   80    ----> should not be allowed.

I would like to stop OVERLAPPING ranges like the ones shown with the arrow marks. How can I do it? Thanks once again, rgs priya
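A common way to detect such overlaps (a sketch; the table and column names are hypothetical) uses the fact that two ranges overlap exactly when each one starts before the other ends:

<code>
-- returns every pair of rows whose ranges overlap
select a.from_val, a.to_val, b.from_val, b.to_val
from   discounts a
join   discounts b
  on   a.rowid < b.rowid          -- report each pair once
 and   a.from_val <= b.to_val
 and   b.from_val <= a.to_val;
</code>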
Categories: DBA Blogs

Fluctuating counts on ROWID splits using DIY parallelism.

Tom Kyte - Mon, 2021-05-03 18:26
Hi Tom, Chris, Connor and all, I've been a user of your DIY Parallel solution for many years as a way to pull data out of large unpartitioned tables in parallel, to send to other non-Oracle databases or for file-system archiving. I've run into a situation, first at my last company and now at my new company, where the solution is acting differently. I first noticed the change at my old company when the data warehouse I was supporting was moved to an Exadata system in OCI. The database version stayed the same, 11.2.0.4, making the hardware/datacenter move the only change. What happened was that during a row-count validation of a table export based on rowid splits, the counts didn't match what was exported. Upon further investigation I found that the row count for a given rowid split was fluctuating: one count would return a value, and the value would change on subsequent counts. The count didn't just go up; it bounced up and down between a set of three or four different values, making an accurate count impossible. The SQL I used was of these forms: <code>SELECT COUNT(*) FROM X.XXX WHERE ROWID BETWEEN 'AAC2GBAFRAAD+c4AAP' AND 'AAC2GBAGIAAJ97wABX';</code> or <code>SELECT COUNT(*) FROM X.XXX WHERE ROWID > 'AAC2GBAFRAAD+c4AAP' AND ROWID <= 'AAC2GBAGIAAJ97wABX';</code> I can see how the counts could increment if data were being added to the table, but these were static tables, and the count bounced back and forth between a few different sets of numbers. I'm now seeing this happen on other databases at my new job and I'm not sure what the cause is. I can't pin it down to a type of table or a version, or whether it's Exadata-related or perhaps related to background work ASM is doing. I searched to see if anyone else has had this happen, without any luck: I see lots of folks have implemented the technique, but none where the row counts for a given split fluctuate. Do you have any idea what could be causing this and how to make it stop? It doesn't happen on all rowid splits for a table and it doesn't happen for all tables in a given database; it appears to be very random. Thanks, Russ
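For comparison, the built-in DBMS_PARALLEL_EXECUTE package generates rowid chunks from the extent map and may be a useful cross-check against hand-rolled splits (a sketch; the owner and table names are hypothetical):

<code>
begin
  dbms_parallel_execute.create_task(task_name => 'T1_SPLIT');
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'T1_SPLIT',
    table_owner => 'X',
    table_name  => 'XXX',
    by_row      => true,       -- chunk_size is a row count, not a block count
    chunk_size  => 100000);
end;
/

select chunk_id, start_rowid, end_rowid
from   user_parallel_execute_chunks
where  task_name = 'T1_SPLIT'
order  by chunk_id;
</code>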
Categories: DBA Blogs

My Posts on RMAN

Hemant K Chitale - Sat, 2021-05-01 23:05

 My series of posts on RMAN :

1. 1 : Backup Job Details

2. 2 : ArchiveLog Deletion Policy

3. 3 : The DB_UNIQUE_NAME in Backups to the FRA

4. 4 : Recovering from an Incomplete Restore

5. 4b : Recovering from an Incomplete Restore with OMF Files

6. 5 : Useful KEYWORDs and SubClauses

7. 5b : (More) Useful KEYWORDs and SubClauses

8. 5c : (Some More) Useful KEYWORDs and SubClauses

9. 6 : RETENTION POLICY and CONTROL_FILE_RECORD_KEEP_TIME

10. 7 : Recovery Through RESETLOGS -- how are the ArchiveLogs identified ?

11. 8 : Using a Recovery Catalog Schema

12. 9 : Querying the RMAN Views / Catalog

13. 10 : VALIDATE


An older series of "tips" :

14. Tips -- 1

15. Tips -- 2

16. Tips -- 3

17. Tips -- 4


Other RMAN posts not in the above series (not in any particular order) :

18. RMAN's CATALOG command

19. RESTORE and RECOVER a NOARCHIVELOG Database, with Incremental Backups

20. RESTORE and RECOVER a NOARCHIVELOG Database, with Incremental Backups -- 2nd Post

21. Primary and Standby in the same RMAN Catalog

22. Understanding Obsolescence of RMAN Backups

23. "SET TIME ON" in RMAN

24. RMAN Backup of a Standby Database

25. RMAN Image Copy File Names

26. Verifying an RMAN Backup

27. Verifying an RMAN Backup - Part 2

28. Misinterpreting RESTORE DATABASE VALIDATE

29. RMAN Backup and Recovery for Loss of ALL Files

30. CONTROLFILE AUTOBACKUPs are OBSOLETE[d]

31. RMAN Consistent ("COLD" ?) Backup and Restore

32. Archive Log Deletion Policy with a Standby Database

33. Datafiles not Restored -- using V$DATAFILE and V$DATAFILE_HEADER

34. Read Only Tablespaces and BACKUP OPTIMIZATION


Categories: DBA Blogs

Pro*C in Oracle

Hemant K Chitale - Sat, 2021-05-01 05:48

Oracle also ships a Pro*C Precompiler that converts a Pro*C source file to a C source file, which can then be compiled using a C compiler (e.g. "gcc").  Of course, you need the Pro*C Developer Licence to use this product.

Here is a quick demo with the command line display and then the actual code below.



oracle19c>ls -ltr
total 12
-rw-r--r--. 1 oracle oinstall 2255 May 1 18:07 instancedbinfo.pc
-rwxr--r--. 1 oracle oinstall 786 May 1 18:14 Compile_my_ProC.SH
-rwxr--r--. 1 oracle oinstall 356 May 1 18:15 Run_my_ProC.SH
oracle19c>./Compile_my_ProC.SH
*****Set LD_LIBRARY_PATH
*****Set C_INCLUDE_PATH
*****PreCompile Pro*C program file

Pro*C/C++: Release 19.0.0.0.0 - Production on Sat May 1 18:15:17 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

System default option values taken from: /opt/oracle/product/19c/dbhome_1/precomp/admin/pcscfg.cfg

*****Compile using C Compiler and specifying Oracle Client library file libclntsh.so
*****Compiled files:
-rw-r--r--. 1 oracle oinstall 2255 May 1 18:07 instancedbinfo.pc
-rw-r--r--. 1 oracle oinstall 0 May 1 18:15 instancedbinfo.lis
-rw-r--r--. 1 oracle oinstall 11875 May 1 18:15 instancedbinfo.c
-rwxr-xr-x. 1 oracle oinstall 14424 May 1 18:15 instancedbinfo
oracle19c>
oracle19c>
oracle19c>
oracle19c>./Run_my_ProC.SH
*****Set LD_LIBRARY_PATH
*****Set Connection String
*****Execute the program
Connected to ORACLE
At ORCLCDB which is on oracle-19c-vagrant running 19.0.0.0.0 and is OPEN, started at 01-MAY-21 17:54:52
This is ORCLPDB1 database running in READ WRITE mode since 01-MAY-21 05.55.21.573 PM +08:00

oracle19c>


The file "instancedbinfo.pc" is my Pro*C source code.
I Precompile it using the "proc" precompiler into "instancedbinfo.c".  Any compilation errors would have been logged into "instancedbinfo.lis"
Then, the same script "Compile_my_ProC.SH" compiles the C program source code into an executable "instancedbinfo" using "gcc"

Finally, I use "Run_my_ProC.SH" to execute the file "instancedbinfo"  (which is now an executable) and the execution displays information about the Pluggable database it is connected to.


Here is the code for the two shell scripts :


oracle19c>cat Compile_my_ProC.SH

echo "*****Set LD_LIBRARY_PATH"
LD_LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.2/include:/usr/include/linux:/opt/oracle/product/19c/dbhome_1/precom/lib:/opt/oracle/product/19c/dbhome_1/lib
export LD_LIBRARY_PATH


echo "*****Set C_INCLUDE_PATH"
C_INCLUDE_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.2/include:/usr/include/linux:/opt/oracle/product/19c/dbhome_1/precom/lib:/opt/oracle/product/19c/dbhome_1/lib:/opt/oracle/product/19c/dbhome_1/precomp/public
export C_INCLUDE_PATH

echo "*****PreCompile Pro*C program file"
proc instancedbinfo.pc

echo "*****Compile using C Compiler and specifying Oracle Client library file libclntsh.so"
gcc instancedbinfo.c -o instancedbinfo -L /opt/oracle/product/19c/dbhome_1/lib -l clntsh

echo "*****Compiled files:"
ls -ltr instancedbinfo*
oracle19c>


oracle19c>cat Run_my_ProC.SH

echo "*****Set LD_LIBRARY_PATH"
LD_LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.2/include:/usr/include/linux:/opt/oracle/product/19c/dbhome_1/precom/lib:/opt/oracle/product/19c/dbhome_1/lib
export LD_LIBRARY_PATH

echo "*****Set Connection String"
CNCTSTRING=hemant/hemant@orclpdb1
export CNCTSTRING

echo "*****Execute the program"
./instancedbinfo
oracle19c>


The Compilation script specifies the LD_LIBRARY_PATH and the Paths to the Include (.h Header) files.  
It then executes "proc"  (which is in $ORACLE_HOME/bin) to precompile the "instancedbinfo.pc" source file.
Finally, it calls "gcc" to compile the c-language source code file (generated by the Precomipler), also specifiying the client shared library file libclntsh.so  in $ORACLE_HOME/lib  (only "-l clntsh" is sufficient to identify the file name).  The compiled executable is called "instancedbinfo" with Execute Permission.

The Run script specifies the Connect-String that the executable will be reading from the environment and executes it.


Here is the code of the source Pro*C file :


oracle19c>cat instancedbinfo.pc

/* standard C includes */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>



/* Oracle Pro*C includes from $ORACLE_HOME/precomp/public */
#include <sqlca.h>
#include <sqlda.h>
#include <sqlcpr.h>




/* my variables */
varchar MYinstanceName[16];
varchar MYhostName[64];
varchar MYversion[17];
varchar MYstatus[12];
varchar MYinstanceStartupTime[18];
varchar MYdbName[128];
varchar MYdbOpenMode[10];
varchar MYdbOpenTime[32];



/* function for error handling */
void sql_error(msg)
char msg[200];
{
char err_msg[128];
size_t buf_len, msg_len;

EXEC SQL WHENEVER SQLERROR CONTINUE;

printf("\n%s\n", msg);
buf_len = sizeof (err_msg);
sqlglm(err_msg, &buf_len, &msg_len);
printf("%.*s\n", msg_len, err_msg);

EXEC SQL ROLLBACK RELEASE;
exit(EXIT_FAILURE);
}


/* MAIN program */
int main(argc,argv)
int argc;
char *argv[];
{

/* read Connection String from environment -- or, it could have been hardcoded here */
const char *conn = getenv("CNCTSTRING");
if (!conn) {
printf("! require CNCTSTRING env variable\n");
return (1);
}

EXEC SQL WHENEVER SQLERROR DO sql_error("ORACLE error--\n");

/* connect to target database */
EXEC SQL CONNECT :conn ;
printf("Connected to ORACLE \n");


/* execute query and populate variables */
/* NOTE : This expects to connect to a PDB ! */
/* If the target is a Non-PDB, change references from v$pdbs to V$database */
EXEC SQL SELECT instance_name,host_name, version,
to_char(startup_time,'DD-MON-RR HH24:MI:SS'), status,
name, open_mode, to_char(open_time)
INTO :MYinstanceName, :MYhostName, :MYversion,
:MYinstanceStartupTime, :MYstatus,
:MYdbName, :MYdbOpenMode, :MYdbOpenTime
FROM v$instance, v$pdbs ;


/* display query results */
printf("At %s which is on %s running %s and is %s, started at %s \n",
MYinstanceName.arr, MYhostName.arr, MYversion.arr, MYstatus.arr, MYinstanceStartupTime.arr);
printf("This is %s database running in %s mode since %s \n",
MYdbName.arr, MYdbOpenMode.arr, MYdbOpenTime.arr);
printf("\n");

/* end of MAIN */
}
oracle19c>


Pro*C allows embedding of SQL calls into a C program by including the Pro*C header files and then running the source code through the precompiler.
My Pro*C source code file is 2,255 bytes; the generated C source code is 11,875 bytes.

Note that the variables defined as varchar in my Pro*C source file actually become C structures :

/* my variables */
/* varchar MYinstanceName[16]; */
struct { unsigned short len; unsigned char arr[16]; } MYinstanceName;

/* varchar MYhostName[64]; */
struct { unsigned short len; unsigned char arr[64]; } MYhostName;

/* varchar MYversion[17]; */
struct { unsigned short len; unsigned char arr[17]; } MYversion;

/* varchar MYstatus[12]; */
struct { unsigned short len; unsigned char arr[12]; } MYstatus;

/* varchar MYinstanceStartupTime[18]; */
struct { unsigned short len; unsigned char arr[18]; } MYinstanceStartupTime;

/* varchar MYdbName[128]; */
struct { unsigned short len; unsigned char arr[128]; } MYdbName;

/* varchar MYdbOpenMode[10]; */
struct { unsigned short len; unsigned char arr[10]; } MYdbOpenMode;

/* varchar MYdbOpenTime[32]; */
struct { unsigned short len; unsigned char arr[32]; } MYdbOpenTime;


Similarly, my EXEC SQL query also gets re-written :
{
struct sqlexd sqlstm;
sqlstm.sqlvsn = 13;
sqlstm.arrsiz = 8;
sqlstm.sqladtp = &sqladt;
sqlstm.sqltdsp = &sqltds;
sqlstm.stmt = "select instance_name ,host_name ,version ,to_char(startup\
_time,'DD-MON-RR HH24:MI:SS') ,status ,name ,open_mode ,to_char(open_time) int\
o :b0,:b1,:b2,:b3,:b4,:b5,:b6,:b7 from v$instance ,v$pdbs ";
sqlstm.iters = (unsigned int )1;
sqlstm.offset = (unsigned int )51;
sqlstm.selerr = (unsigned short)1;
sqlstm.sqlpfmem = (unsigned int )0;
sqlstm.cud = sqlcud0;
sqlstm.sqlest = (unsigned char *)&sqlca;
sqlstm.sqlety = (unsigned short)4352;
sqlstm.occurs = (unsigned int )0;
sqlstm.sqhstv[0] = (unsigned char *)&MYinstanceName;
sqlstm.sqhstl[0] = (unsigned long )18;
sqlstm.sqhsts[0] = ( int )0;
sqlstm.sqindv[0] = ( short *)0;
sqlstm.sqinds[0] = ( int )0;
sqlstm.sqharm[0] = (unsigned long )0;
sqlstm.sqadto[0] = (unsigned short )0;
sqlstm.sqtdso[0] = (unsigned short )0;
sqlstm.sqhstv[1] = (unsigned char *)&MYhostName;
and so on .....


Pro*C is a very good way of combining C programming with SQL, producing a compiled executable binary instead of an interpreted program (like a Java or Python program running outside the database).



Categories: DBA Blogs

Regarding On Update Cascade

Tom Kyte - Fri, 2021-04-30 17:06
Dear Tom, We know that when we delete a parent record, the child records are automatically deleted as well if we used "on delete cascade". Is it possible to automatically update child records when we update the parent record? (Is there an "On Update Cascade" option, or anything like it?)
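For reference, the delete behaviour the question describes is declarative (a minimal sketch on hypothetical tables); Oracle has no equivalent ON UPDATE CASCADE clause:

<code>
create table parent (
  id number primary key
);

create table child (
  id        number primary key,
  parent_id number references parent (id) on delete cascade
);

-- deleting a parent row now deletes its child rows automatically
delete from parent where id = 1;
</code>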
Categories: DBA Blogs

How to send a sql query result as an attachment in Oracle apex_mail.send procedure.

Tom Kyte - Fri, 2021-04-30 17:06
I have an Oracle SQL query which needs to be run, and I need to send the data returned by the query as an attachment to a mail. Could you please guide me on how to do this using the apex_mail.send procedure? I am calling apex_mail from the database, and I have already configured APEX mail. I can call apex_mail.send to send the mail, but I am not sure how to attach the result returned by my SQL query using apex_mail.add_attachment.
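One possible shape (a sketch; the addresses and the query are hypothetical, and when calling APEX_MAIL from the database you may first need to establish a workspace context, e.g. with apex_util.set_workspace in recent APEX versions):

<code>
declare
  l_id  number;
  l_csv clob;
begin
  -- build the query result as CSV (hypothetical query)
  l_csv := 'ENAME,SAL' || chr(10);
  for r in (select ename, sal from emp) loop
    l_csv := l_csv || r.ename || ',' || r.sal || chr(10);
  end loop;

  l_id := apex_mail.send(
            p_to   => 'someone@example.com',
            p_from => 'app@example.com',
            p_subj => 'Query result',
            p_body => 'Result attached as CSV.');

  apex_mail.add_attachment(
    p_mail_id    => l_id,
    p_attachment => l_csv,          -- CLOB overload; a BLOB overload also exists
    p_filename   => 'result.csv',
    p_mime_type  => 'text/csv');

  apex_mail.push_queue;
end;
/
</code>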
Categories: DBA Blogs

Timestamp with time zone comparison Issues

Tom Kyte - Thu, 2021-04-29 22:46
Hi, I am facing an issue while comparing TIMESTAMP WITH TIME ZONE data against SYSTIMESTAMP in 11gR2. My DB server is in the US/Central zone. I have a table with a TIMESTAMP WITH TIME ZONE column, into which I inserted a future timestamp for the same zone (US/Central, or UTC-5). Selecting from the table returns the same data. I also have an anonymous block which checks whether the timestamp in the table has passed SYSTIMESTAMP or not. Before the daylight-saving change in March, this process worked correctly: both methods returned the correct output when SYSTIMESTAMP was greater than the TIMESTAMP WITH TIME ZONE column. After the daylight-saving change, however, a record inserted with the time zone given in US/Central format returns the correct output only one hour after the actual time. I have put a sample in LiveSQL which I hope helps explain the issue I am facing. Is there any specific reason for this behavior? Thanks in advance for your help. Thanks, Manoj
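The classic cause of a one-hour discrepancy is the difference between a region-named zone, which follows DST, and a fixed offset, which does not. A minimal illustration (dates chosen arbitrarily):

<code>
-- In June, US/Central observes DST (CDT = -05:00), so the first pair is equal;
-- in January (CST = -06:00) the same pair differs by one hour.
select timestamp '2021-06-01 12:00:00 US/Central'
       - timestamp '2021-06-01 12:00:00 -05:00' as diff_june,
       timestamp '2021-01-01 12:00:00 US/Central'
       - timestamp '2021-01-01 12:00:00 -05:00' as diff_january
from dual;
</code>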
Categories: DBA Blogs

CTE fails when used with a db link

Tom Kyte - Thu, 2021-04-29 22:46
I am trying to use a CTE in a query that copies data from one database to another. The CTE is used because I am unable to handle a cycle with CONNECT BY. In this simple illustration, the db link used by the insert causes an error (ORA-00942: table or view does not exist). <code>
insert into don.T2@gstest (RELATE_STRING)
with cte (LVL, I, PARENT_I, RELATE) as (
  select 1 as LVL, I, PARENT_I, '+' || RELATE as RELATE
  from don.T1@gsdev
  where PARENT_I is null
  union all
  select c.LVL + 1, t.I, t.PARENT_I, c.RELATE || '+' || t.RELATE
  from cte c
  join T1 t on t.PARENT_I = c.I)
select RELATE from cte
order by LVL, I;
</code> The illustration doesn't have a cycle issue, so CONNECT BY could be used to demonstrate. If I ensure that the code is executed from the target database and I remove the db link, the code works: <code>insert into don.T2 (RELATE_STRING)</code>... I was unable to figure out how to create a db link in LiveSQL.
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator - DBA Blogs