Tom Kyte

These are the most recently asked questions on Ask Tom

Method to measure performance gain of clustered table vs non-clustered tables

Mon, 2021-05-10 15:06
I have 2 pairs of parent and child tables; 1 pair is stored in a clustered object and the other is non-clustered. The primary key of the master table (which is the foreign key in the child table) is the cluster key, and an index on the cluster has also been created. The structure of the 2 parent tables is identical and the structure of the 2 child tables is also identical. Records in the 2 pairs are also identical. I want to measure the performance gain of clustered tables vs non-clustered tables for a SELECT statement. I am using SET TIMING ON and printing the elapsed time after the SELECT is executed against the tables. The SELECT statement is also identical. I was expecting the elapsed time of the clustered tables to be consistently less than the non-clustered tables, but it is not. Can you please explain this? Also, is there another way to measure the performance of non-clustered vs clustered tables, using autotrace or explain plan?
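A minimal sketch of one way to compare the two pairs with SQL*Plus autotrace, using hypothetical table and column names (PARENT_HEAP/CHILD_HEAP, PARENT_CLU/CHILD_CLU, PARENT_ID); logical I/O is usually a more repeatable yardstick than elapsed time:
<code>
-- requires the PLUSTRACE role; compare the "consistent gets" of the two runs
set autotrace traceonly statistics

select p.*, c.*
from   parent_heap p
join   child_heap  c on c.parent_id = p.parent_id
where  p.parent_id = 42;

select p.*, c.*
from   parent_clu p
join   child_clu  c on c.parent_id = p.parent_id
where  p.parent_id = 42;

set autotrace off
-- The clustered pair should need fewer consistent gets because parent and
-- child rows for the same key share blocks.
</code>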
Categories: DBA Blogs

how to delete data from table

Mon, 2021-05-10 15:06
Will a normal DELETE such as <code>delete from table where condition = 'a';</code> work when the table holds data going back to 2014? I want to delete all data starting from 2014 up to now.
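A minimal sketch, assuming a hypothetical table MY_TAB with a DATE column CREATED_DT that records when each row was added:
<code>
delete from my_tab
where  created_dt >= date '2014-01-01';   -- everything from 1 Jan 2014 up to now

commit;
</code>
If almost every row in the table qualifies, creating a new table with only the rows to keep and swapping it in (or a TRUNCATE, if truly everything goes) is usually far faster than one huge DELETE.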
Categories: DBA Blogs

The most peculiar Oracle situation in my career: Oracle changes how it records a block read from direct read to not recording it in an I/O wait event at all

Mon, 2021-05-10 15:06
Greetings, I have this extremely perplexing situation where Oracle changes how it records a block read. Last week it wasn't counting block reads at all in an I/O wait event; this week it started to add them to the "direct path read" wait event. This is occurring in our production environment; however, I was able to reproduce the situation in our test environment with test data. I used the all_source view to create two test tables until I reached 1.2 million rows for table 1 and 4 million rows for table 2: Table1 (1.2 Mil records) create table table1 as select * from dba_source where rownum; Table2 (4 Mil records) create table table2 as select * from dba_source; create index t1_pk on table1(owner); create index t2_pk on table2(owner, line); exec dbms_stats.gather_schema_stats('JOHN'); Then I ran this select statement 120 times: <code>select count(*) from Table1 where line=1 and owner in (select Table2.owner from Table2 where Table2.owner=Table1.owner) order by owner;</code> In some cases Oracle 19c records the I/O in "direct path read" wait events and in other cases it doesn't seem to report it in any I/O wait event. That is very odd. TEST CASE 1: the IOStat summary doesn't record the I/O, nor does any wait event: <code>
Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                            Total Wait       Avg   % DB Wait
Event                                Waits  Time (sec)      Wait   time Class
------------------------------ ----------- ---------- --------- ------ --------
DB CPU                                            20.2             99.6
PGA memory operation                 2,524          .1   20.27us     .3 Other
Disk file operations I/O               520           0   59.49us     .2 User I/O
db file sequential read                211           0   12.33us     .0 User I/O
Parameter File I/O                       8           0  257.00us     .0 User I/O
enq: RO - fast object reuse              2           0  784.50us     .0 Applicat
control file sequential read           209           0    5.32us     .0 System I
log file sync                            1           0     .95ms     .0 Commit
SQL*Net message to client              546           0    1.53us     .0 Network
SQL*Net more data to client             22           0   33.77us     .0 Network

SQL ordered by Gets                     DB/Inst: ORACLE/stbyoracle  Snaps: 2727-2728
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU   - CPU Time as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 3,399,948
-> Captured SQL account for 98.1% of Total Buffer Gets

                                                        Elapsed
       Gets  Executions  Gets per Exec  %Total  Time (s)  %CPU   %IO  SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
  3,241,728         120     27,014.4   95.3       14.4  99.5     0 82mps751cqh84
Module: SQL*Plus
select count(*) from Table1 where line=1 and owner in (select Table2.owner
from Table2 where Table2.owner=Table1.owner) order by owner

IOStat by Function summary              DB/Inst: ORACLE/stbyoracle  Snaps: 2727-2728
-> 'Data' columns suffixed with M,G,T,P are in multiples of 1024
   other columns suffixed with K,M,G,T,P are in multiples of 1000
-> ordered by (Data Read + Write) desc

                Reads:    Reqs    Data  Writes:    Reqs    Data   Waits:     Avg
Function Name     Data per sec per sec    Data per sec per sec   Count     Time
--------------- ------- ------- ------- ------- ------- ------- ------- --------
LGWR                  3M     1.5   .022M     10M     3.6   .075M     678 368.73us
Others                7M     2...
</code>
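As a cross-check (not an explanation of the behaviour), a minimal sketch that shows, for the current session, whether the reads for the test query went through the buffer cache or were done as direct path reads; the statistic names below are standard V$STATNAME entries:
<code>
-- run the test query in this session first, then:
select n.name, s.value
from   v$mystat   s
join   v$statname n on n.statistic# = s.statistic#
where  n.name in ('physical reads',
                  'physical reads direct',
                  'table scans (direct read)');
</code>
If "physical reads direct" stays at zero while "physical reads" climbs, the reads were buffered and should surface under the db file wait events rather than "direct path read".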
Categories: DBA Blogs

Single row cursor for short text string from dual produces CHAR(32767)

Mon, 2021-05-10 15:06
Hi, I have tried 19.9 - 19.11. I have noticed some suspicious behaviour regarding dual. I will create an example. At the moment, here you can see that only a single row comes from mytab. Then I dump the datatype to the output. <code>
SQL> set serveroutput on size unlimited;
declare
  a clob;
  l_msg_content_begin CLOB := EMPTY_CLOB();
  CURSOR cur IS
    with mytab as (
      select 'SOMERANDOMTABLE' as main_table from dual
      --union select 'ALSOSOMERANDOMTABLE' as main_table from dual
    )
    select main_table, lower_main_table
    from (
      select main_table, lower(main_table) as lower_main_table from mytab
    )
    order by 1 desc;
  rec cur%rowtype;
BEGIN
  FOR rec IN cur LOOP
    dbms_output.put_line(rec.main_table);
    select dump(rec.lower_main_table) into a from dual;
    dbms_output.put_line(a);
    -- ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    -- If you have only one row from dual, then you get the error if you uncomment this: "l_msg_content_begin := ..."
    -- With 2 or more rows from dual, all good
    --l_msg_content_begin := 'blabla '||rec.lower_main_table||' blablabla '||rec.lower_main_table||'bla'||UTL_TCP.CRLF;
  END LOOP;
  --dbms_output.put_line(substr(l_msg_content_begin, 1, 2000) || 'AA');
END;
/
</code> And here you can see the datatype is CHAR (Typ=96); check the length (so the whole string is padded with spaces, "ascii 32 == space"): <code> SOMERANDOMTABLE Typ=96 Len=32767: 115,111,109,101,114,97,110,100,111,109,116,97,98,108,101,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,.................... </code> It seems like the <b>lower()</b> function somehow produces this strange behaviour. Is this normal? Also, when I dump rec.main_table instead (so not the lower() function output) <code>select dump(rec.main_table) into a from dual;</code> then I get type CHAR and the actual length, so that is as expected. In contrast, when I also uncomment the second line <code>--union select 'ALSOSOMERANDOMTABLE' as main_table from dual</code> then it is as expected: <code> SOMERANDOMTABLE Typ=1 Len=15: 115,111,109,101,114,97,110,100,111,109,116,97,98,108,101 ALSOSOMERANDOMTABLE Typ=1 Len=19: 97,108,115,111,115,111,109,101,114,97,110,100,111,109,116,97,98,108,101 </code> The type is VARCHAR and the length is the actual length. Regards, Raul
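Not an explanation of the 32767-byte describe, but a minimal sketch of a workaround worth trying, using the same example names: an explicit CAST keeps the derived column from being described as a huge CHAR:
<code>
with mytab as (
  select 'SOMERANDOMTABLE' as main_table from dual
)
select main_table,
       cast(lower(main_table) as varchar2(128)) as lower_main_table
from   mytab
order  by 1 desc;
</code>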
Categories: DBA Blogs

Column default value as another column from same table

Mon, 2021-05-10 15:06
Hello, We have a requirement to add a new column to a table which needs a default value of column1 || column2. For various reasons the application code cannot be changed to populate this new column, hence the need for a default. I thought of two approaches that could solve this: use a trigger to update the new column whenever column1 or column2 changes (the new column can be backfilled once and the trigger then handles any future changes), or use a virtual column. It now seems that direct inserts into or updates of the new column might be required, which rules out the virtual column. As for the trigger, the web is full of articles saying triggers are problematic, and I am having a tough time convincing people that for a low-volume table (in both number of records and number of transactions) a trigger may not be the worst idea, though I understand the maintenance headaches and side effects. Is there any other approach? Also, why does Oracle not support defaulting a column to the value of another column? Thank you, Priyank
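A minimal sketch of the trigger approach described above, using a hypothetical table T with columns COL1, COL2 and the new column COL3:
<code>
create or replace trigger t_col3_maintain
before insert or update of col1, col2 on t
for each row
begin
  :new.col3 := :new.col1 || :new.col2;   -- keep the derived column in step
end;
/

-- one-off backfill of existing rows
update t set col3 = col1 || col2;
commit;
</code>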
Categories: DBA Blogs

Is it safe to re-sequence table columns using invisible columns?

Fri, 2021-05-07 13:46
Hello Team, First of all, thanks for all the good work you are doing. Request your help with a query related to re-sequencing of table columns using invisible columns. Is it safe to change the order of columns in a production environment, following the method described in the following link? https://connor-mcdonald.com/2013/07/22/12c-invisible-columns/ We tested it and are not able to find anything unusual. However, are there any particular "gotchas" we should look out for? I know that ideally the order of table columns should not matter. However, in our situation, the codebase can have legacy code that doesn't use column names in insert statements. Pasted below is the detailed scenario of how/why we are planning to use this. Thanks, A ---------------------------------------------- Our requirement is to encrypt a column in existing tables in the production environment. These tables can have hundreds of millions of rows. This task has to be done during a downtime window that is not large enough. In order to achieve this, we are trying to do as much work as possible outside the downtime window. Our plan is to add an invisible column to the tables. Data from the original column will be encrypted and stored in these invisible columns. This can be done outside the downtime window and will not affect day-to-day operations. We also have a mechanism to identify and handle the delta in the original column. The only task left for the downtime window will be to move values from the invisible column to the original column. To complete it in the short downtime window, we will make the invisible column visible and swap its name with the original column. The redundant original column can then be dropped. This approach works fine except that the order of the columns changes: the encrypted column now appears as the last column in the table. Ideally, the order should not matter. However, these tables are used by some applications that have legacy code that inserts without specifying the column names. We are exploring whether we can add the new column at the position of the original column.
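A minimal sketch of the add/encrypt/swap sequence described above, with a hypothetical table T, column SECRET_COL, a hypothetical encryption function MY_ENCRYPT, and hypothetical trailing columns COL_AFTER1/COL_AFTER2 (the delta-handling step is left out):
<code>
-- outside the downtime window
alter table t add (secret_col_enc varchar2(400) invisible);
update t set secret_col_enc = my_encrypt(secret_col);   -- my_encrypt is hypothetical
commit;

-- inside the downtime window: swap the columns
alter table t rename column secret_col to secret_col_old;
alter table t rename column secret_col_enc to secret_col;
alter table t modify (secret_col visible);
alter table t set unused column secret_col_old;

-- the trick in the linked article: to push SECRET_COL back to its original position,
-- flip the columns that should follow it to invisible and back to visible
alter table t modify (col_after1 invisible, col_after2 invisible);
alter table t modify (col_after1 visible, col_after2 visible);
</code>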
Categories: DBA Blogs

Zip a .csv file present at DB directory using PL/SQL

Fri, 2021-05-07 13:46
We have a requirement where we generate .csv files from the database and place them in a database directory. We want to zip these .csv files to reduce their size. Could you please suggest a way to achieve this using PL/SQL?
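A minimal sketch, assuming APEX is installed (for the APEX_ZIP package) and a hypothetical directory object CSV_DIR containing a hypothetical file report.csv:
<code>
declare
  l_bfile bfile := bfilename('CSV_DIR', 'report.csv');
  l_csv   blob;
  l_zip   blob;
  l_dest  integer := 1;
  l_src   integer := 1;
  l_out   utl_file.file_type;
begin
  -- 1) read the CSV file from the directory into a BLOB
  dbms_lob.createtemporary(l_csv, true);
  dbms_lob.open(l_bfile, dbms_lob.lob_readonly);
  dbms_lob.loadblobfromfile(l_csv, l_bfile, dbms_lob.lobmaxsize, l_dest, l_src);
  dbms_lob.close(l_bfile);

  -- 2) add it to a zip archive held in a BLOB
  apex_zip.add_file(p_zipped_blob => l_zip,
                    p_file_name   => 'report.csv',
                    p_content     => l_csv);
  apex_zip.finish(p_zipped_blob => l_zip);

  -- 3) write the zip back to the same directory, 32k at a time
  l_out := utl_file.fopen('CSV_DIR', 'report.zip', 'wb', 32767);
  for i in 0 .. trunc((dbms_lob.getlength(l_zip) - 1) / 32767) loop
    utl_file.put_raw(l_out, dbms_lob.substr(l_zip, 32767, i * 32767 + 1), true);
  end loop;
  utl_file.fclose(l_out);
end;
/
</code>
Without APEX, UTL_COMPRESS can produce gzip (.gz) output instead of a real .zip, or a Java stored procedure can build the archive.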
Categories: DBA Blogs

Shuffle quantities between buckets

Thu, 2021-05-06 19:26
Hi Tom, I am given the "current" allocation of items to eight buckets, and I want to make it more efficient by filling bucket A as much as possible, then bucket B, then bucket C (as indicated by the "priority"). Items are moved between buckets by taking as many P1 items as possible from bucket H and reassigning them to bucket A, then to bucket B, and so on, until all of them are allocated; then the next lowest-priority bucket is taken and the process repeats. How much of P1 and P2 to fill is defined in the volumes table, i.e. each of the buckets A-H can hold quantities in multiples of eight and seven for P1 and P2 respectively in the sample data. I also want to include round-up and round-down logic to distribute the quantities near-equally across buckets if one bucket has too big a quantity. The items P1 and P2 are completely independent and one's result shouldn't impact the other. Height, weight and width don't matter here, so they are not present in any sample data. I have made a start below but couldn't make the round-up and round-down cases work. Also, in the cases when a quantity is moved into two or more buckets from one bucket, or moved out of two buckets into one bucket, can we show a single comma-separated row instead of multiple step-by-step rows? And when one row has too large a quantity, can we implement round-up and round-down logic to distribute the quantities near-equally across buckets, in multiples of the quantities in the values table? E.g. if we update the quantity to seventy-two in bucket H of part P1, the current result gives five rows for bucket H; can we round buckets A-H to sixteen each and then put the remaining ones in bucket H?
Categories: DBA Blogs

Is there an Oracle document that has a checklist to be able to answer whether a database server will handle "peak" load?

Thu, 2021-05-06 01:06
Greetings, A question from clients that comes up every few years is to predict whether the Oracle database server will be able to handle a new application's peak load. Instead of trying to think of everything that needs to be considered on the fly, it would be great if there were an Oracle document with a checklist of all the questions we must answer, so that we can give the client a definite answer: yes, we can predict it if x, y and z are performed. I know that in most cases this will be nearly impossible, as it will take too much time and we can't control the variables for other apps that share the same resources (database, network, SAN, etc.). For instance, usually the network and SAN are shared with the database server, so we would need the peak loads of all the other applications plus the expected maximum throughput of the network and SAN. Thanks for your help, John
Categories: DBA Blogs

APEX Message box label

Thu, 2021-05-06 01:06
Is it possible to change the labels of the confirm dialog buttons from "Cancel/Ok" to "No/Yes" in APEX?
Categories: DBA Blogs

Avoiding overlapping values...

Tue, 2021-05-04 12:26
Hi Mr Tom, you said you have a 'trick' for the following problem; it would be nice if you could share it. Thanks. A form (Forms 5.0) with tabular style is displayed like below: <code>
to   from   discount
--   ----   --------
10    40      1.5
50    65      2.5
70    90      1.2
 .     .
 .     .
60    99     ----> should not be allowed.
65    80     ----> should not be allowed.
</code> Is there a way to stop OVERLAPPING ranges like the ones shown above with the arrow marks? How can I do it? Thanks once again, regards, Priya
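A minimal sketch, on the database side, of a check that flags a proposed range overlapping an existing one; the table and column names (DISCOUNTS, FROM_QTY, TO_QTY) and the binds are hypothetical:
<code>
select count(*)
from   discounts d
where  d.from_qty <= :new_to
and    d.to_qty   >= :new_from;
-- a non-zero count means the proposed range (:new_from .. :new_to)
-- overlaps an existing row and should be rejected
</code>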
Categories: DBA Blogs

Fluctuating counts on ROWID splits using DIY parallelism.

Mon, 2021-05-03 18:26
Hi Tom, Chris, Connor and all, I've been a user of your DIY Parallel solution for many years now as a way to pull data out of large unpartitioned tables in parallel to send to other non-Oracle databases or for file system archiving. I've run into a situation recently, at my last company and now at my new company, where the solution is acting differently. I first noticed the change at my old company when the data warehouse I was supporting was moved to an Exadata system in OCI. The version of the database stayed the same, 11.2.0.4, so the only change was the hardware/datacenter move. What happened was that during a row-count validation of a table export based upon rowid splits, the counts didn't match what was exported. Upon further investigation I found that the row count for a given rowid split was fluctuating. One count would return a value and then the value would change on subsequent counts. The count didn't just go up, it would go up and down between a set of three or four different values, making an accurate count impossible. The SQL I used to do this was of these formats: <code>SELECT COUNT(*) FROM X.XXX WHERE ROWID BETWEEN 'AAC2GBAFRAAD+c4AAP' AND 'AAC2GBAGIAAJ97wABX';</code> or <code>SELECT COUNT(*) FROM X.XXX WHERE ROWID > 'AAC2GBAFRAAD+c4AAP' AND ROWID <= 'AAC2GBAGIAAJ97wABX';</code> I can see how the counts could increment if data were being added to the table, but these were static tables and the count bounced back and forth between a few different sets of numbers. I'm now seeing this happen on other databases at my new job and I'm not sure what the cause is. I can't pin it down to a type of table or version, or whether it's Exadata related or maybe something related to background work ASM is doing. I did a search to see if anyone else has had this occur, without any luck. I can see that lots of folks have implemented it, but not that the row counts for a given split fluctuate. Do you have any idea what could be causing this and how to make it stop? It doesn't happen on all rowid splits for a table and it doesn't happen for all tables in a given database; it appears to be very random. Thanks, Russ
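Not a diagnosis, but one hedged alternative worth comparing against: let DBMS_PARALLEL_EXECUTE generate the rowid chunks rather than comparing hand-built rowid strings. A minimal sketch, re-using the placeholder owner/table names from the question:
<code>
begin
  dbms_parallel_execute.create_task('count_chunks');
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'count_chunks',
    table_owner => 'X',
    table_name  => 'XXX',
    by_row      => false,
    chunk_size  => 10000);
end;
/

-- the generated, non-overlapping rowid ranges to count against
select chunk_id, start_rowid, end_rowid
from   user_parallel_execute_chunks
where  task_name = 'count_chunks'
order  by chunk_id;

-- exec dbms_parallel_execute.drop_task('count_chunks');  -- when finished
</code>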
Categories: DBA Blogs

How to send a sql query result as an attachment in Oracle apex_mail.send procedure.

Fri, 2021-04-30 17:06
I have an Oracle SQL query which needs to be run, and I need to send the data returned by the query as an attachment to an email. Could you please guide me on how to do this using the apex_mail.send procedure? I am calling apex_mail from the database. I have already configured APEX mail and I can call apex_mail.send to send the mail, but I am not sure how to attach the result returned by my SQL query using apex_mail.add_attachment.
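A minimal sketch, assuming APEX mail is already configured and callable from your database session; the query (EMP), addresses and file name are hypothetical. The idea is to build the query result into a CSV CLOB, send the mail, attach the CLOB with APEX_MAIL.ADD_ATTACHMENT, then push the queue:
<code>
declare
  l_id  number;
  l_csv clob;
begin
  -- build the query result as CSV (hypothetical query)
  l_csv := 'EMPNO,ENAME' || chr(10);
  for r in (select empno, ename from emp) loop
    l_csv := l_csv || r.empno || ',' || r.ename || chr(10);
  end loop;

  l_id := apex_mail.send(
            p_to   => 'someone@example.com',
            p_from => 'noreply@example.com',
            p_subj => 'Query result',
            p_body => 'Result attached as CSV.');

  apex_mail.add_attachment(
    p_mail_id    => l_id,
    p_attachment => l_csv,
    p_filename   => 'result.csv',
    p_mime_type  => 'text/csv');

  apex_mail.push_queue;
end;
/
</code>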
Categories: DBA Blogs

Regarding On Update Cascade

Fri, 2021-04-30 17:06
Dear Tom, We know that when we delete a parent record, the child records are automatically deleted as well if we used "on delete cascade". Is it possible to automatically update the child records when we update the parent record? (Is there an "on update cascade" option, or anything similar?)
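Oracle has no built-in "on update cascade". A minimal sketch of one common workaround, using hypothetical PARENT/CHILD tables: make the foreign key deferrable so both sides can be updated inside a single transaction:
<code>
alter table child add constraint child_fk
  foreign key (parent_id) references parent (parent_id)
  deferrable initially deferred;

-- "cascade" the key change by hand, in one transaction
update parent set parent_id = 200 where parent_id = 100;
update child  set parent_id = 200 where parent_id = 100;
commit;   -- the FK is only checked here, after both updates
</code>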
Categories: DBA Blogs

CTE fails when used with a db link

Thu, 2021-04-29 22:46
I am trying to use a cte in a query that copies data from one database to another. The cte is used because I am unable to handle a cycle with connect by. In this simple illustration the db link used by the insert causes an error (ORA-00942: table or view does not exist). <code>insert into don.T2@gstest (RELATE_STRING) with cte (LVL, I, PARENT_I, RELATE) as ( select 1 as LVL, I, PARENT_I, '+' || RELATE as RELATE from don.T1@gsdev where PARENT_I is null union all select c.LVL + 1, t.I, t.PARENT_I, c.RELATE || '+' || t.RELATE from cte c join T1 t on t.PARENT_I = c.I) select RELATE from cte order by LVL, I; </code> The illustration doesn't have a cycle issue, so connect by can be used to demonstrate. If I ensure that the code is executed from the target database and I remove the db link, the code works. <code>insert into don.T2 (RELATE_STRING)</code>... I was unable to figure out how to make a db link in liveSql.
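Not a confirmed fix, but a minimal sketch of one workaround to try, assuming a reverse database link (GSDEV_BACK is a hypothetical name) can be created in the target database pointing back at the source: the insert then targets a local table and only the CTE's source tables are remote:
<code>
insert into don.t2 (relate_string)
with cte (lvl, i, parent_i, relate) as (
  select 1, i, parent_i, '+' || relate
  from   don.t1@gsdev_back
  where  parent_i is null
  union all
  select c.lvl + 1, t.i, t.parent_i, c.relate || '+' || t.relate
  from   cte c
  join   don.t1@gsdev_back t on t.parent_i = c.i
)
select relate from cte order by lvl, i;
</code>
Running the statement from the target side (which you already found works without a link) via a scheduler job is another option if the reverse link is not acceptable.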
Categories: DBA Blogs

Timestamp with time zone comparison Issues

Thu, 2021-04-29 22:46
Hi, I am facing an issue while validating timestamp with time zone data against systimestamp in 11g R2. My DB server is in the US/Central zone. I have a table with a timestamp with time zone column and I have inserted a future timestamp for the same time zone (US/Central, or UTC-5). When selecting data from the table, we get the same data back. I also have an anonymous block which verifies whether the timestamp in the table has crossed systimestamp or not. Before the daylight saving change in March, this process was working correctly: both methods returned the correct output when systimestamp was greater than the timestamp with time zone column. However, after the daylight saving change, a record which was inserted giving the time zone in US/Central format returns the correct output only 1 hour after the actual time. I have given a sample in LiveSQL; I hope it helps to explain the issue I am facing. Is there any specific reason for this behavior? Thanks in advance for your help. Thanks, Manoj
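A minimal sketch of a cross-check, using a hypothetical table T with a TIMESTAMP WITH TIME ZONE column TS: comparing both sides after normalising to UTC removes any doubt about region-name versus fixed-offset handling around the DST switch:
<code>
select ts,
       sys_extract_utc(ts)           as ts_utc,
       sys_extract_utc(systimestamp) as now_utc
from   t
where  sys_extract_utc(ts) <= sys_extract_utc(systimestamp);
</code>
If the UTC comparison behaves correctly while the original comparison is off by an hour, the problem is in how the offset/region was attached at insert time rather than in the comparison itself.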
Categories: DBA Blogs

Script too slow - how to optimize

Wed, 2021-04-28 10:06
Can you advise me on how to optimize this batch script: <code>
CREATE OR REPLACE PACKAGE BODY ADMIN_WAFA.PCK_Trait_Pre IS
--------------------------------------------------------------------------------
FUNCTION Fun_Traiter_Prelevement(P_CONTROL_ANO VARCHAR2) RETURN VARCHAR2 IS
-------------------------------------------------------------------------------------------------------------------------
  ---prelevement cursor-----
  CURSOR Prov_Cursor IS SELECT * FROM Prov_Prelevement;
  ---------------------------
  Code_Tier NUMBER;
  v_sql     VARCHAR2(1000);
  --------------- ?????????????????????????????? ------------------------------------
BEGIN
  /*********************** Gather statistics ************************/
  BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(OWNNAME          => 'ADMIN_WAFA',
                                  TABNAME          => 'PROV_PRELEVEMENT',
                                  DEGREE           => 8,
                                  CASCADE          => TRUE,
                                  METHOD_OPT       => 'FOR COLUMNS SIZE AUTO',
                                  ESTIMATE_PERCENT => 20);
  END;
  /************************ Work around the performance problem ************************/
  -- alter session set optimizer_mode=RULE;
  EXECUTE IMMEDIATE ('alter session set optimizer_mode=RULE');
  --Start FZ HANOUNOU
  IF (P_CONTROL_ANO = 'O') THEN
    Proc_Test_Anomalie;
  END IF;
  --End FZ HANOUNOU
  ---------------------------------------------
  --Modified by Karim EL HALOUI
  --on 15/12/2010
  --to refresh the materialized view
  ---------------------------------------------
  -- drop
  BEGIN
    EXECUTE IMMEDIATE ('DROP MATERIALIZED VIEW vue_anomalie_ie_aff');
  EXCEPTION
    WHEN OTHERS THEN NULL;
  END;
  -- create
  v_sql := 'CREATE MATERIALIZED VIEW vue_anomalie_ie_aff AS SELECT a.ano_num_aff,a.ano_ide_reg,a.ano_dat_ech FROM anomalie a where a.ano_pret_a>45';
  execute immediate v_sql;
  ------------------ ?????? -------------------
  v_sql := 'alter FUNCTION F_ECH_ANOM compile';
  execute immediate v_sql;
  v_sql := 'alter package PKG_CTRL_PROMESSE_REG compile body';
  execute immediate v_sql;
  ---------------------------------------------
  v_nom_fonction := 'Proc_Traiter_Prelevement';
  open Prov_cursor;
  LOOP
    FETCH Prov_cursor INTO Prov_record;
    EXIT WHEN Prov_cursor%NOTFOUND OR Prov_cursor%NOTFOUND is NULL;
    ----------------------------------
    -- look up the code_societe
    ----------------------------------
    select s.code_societe
      into v_code_societe
      from societes s
     where s.lib_societe = prov_record.prov_societe;
    ---------------------------------------------
    IF not Fun_Test_New_aff THEN
      IF Fun_Test_New_tier THEN
        Code_Tier := Fun_Ajout_Tier;
      ELSE
        Code_Tier := Fun_Cle_Tier;
      END IF;
    ELSE
      Code_Tier := Fun_Ajout_Tier;
    END IF;
    Proc_Ajout_Prelevement(Code_Tier);
  END LOOP;
  execute immediate ('truncate table prov_prelevement');
  v_nom_fonction := 'proc_generation_interne';
  pck_ban.proc_generation_interne;
  --------------------------------------------------------------------
  v_nom_fonction := 'proc_saisie_imp_bqe_interne';
  proc_saisie_imp_bqe_interne;
  v_nom_fonction := 'pck_ban.proc_generation_doti';
  pck_ban.proc_generation_doti;
  update /*+choose*/ prelevement p
     set p.pre_dat_ech = to_char(sysdate, 'DD/MM/YYYY')
   where exists (select null from vacation v where v.pre_ide_reg = p.pre_ide_reg)
     and p.pre_ide_reg < 0
     and exists (select null from tiers t
                  where p.tier_cle = t.tier_cle
                    and t.tier_code_societe = 51
                    and t.tier_nom_agence = 'REPRCTX')
     and p.pre_dat_ech > to_char(sysdate, 'DD/MM/YYYY'...
</code>
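The row-by-row loop over PROV_PRELEVEMENT is usually the first thing worth attacking. A minimal sketch, assuming the per-row logic has to stay procedural for now: switching the single-row FETCH to BULK COLLECT with a LIMIT cuts the fetch overhead and is a small, low-risk change:
<code>
declare
  cursor prov_cursor is select * from prov_prelevement;
  type prov_tab_t is table of prov_prelevement%rowtype;
  l_rows prov_tab_t;
begin
  open prov_cursor;
  loop
    fetch prov_cursor bulk collect into l_rows limit 500;
    for i in 1 .. l_rows.count loop
      null;  -- the per-row processing from the original loop goes here
    end loop;
    exit when prov_cursor%notfound;
  end loop;
  close prov_cursor;
end;
/
</code>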
Categories: DBA Blogs

editing large clobs

Wed, 2021-04-28 10:06
Hi, I've developed an application in APEX which provides a feature to maintain files stored in a CLOB (well, actually it's a BLOB, but I know how to handle that). This works very well as long as the size of the content doesn't exceed the 32k limit. If a file is bigger than this limit, the error 'ORA-06502: PL/SQL: numeric or value error' is raised. Is there any solution I can use to handle files with more than 32k characters in an APEX app?
Categories: DBA Blogs

How can we query just one database and get all databases' cell or IO usage?

Wed, 2021-04-28 10:06
Hi Ask Tom Team, We had an issue in all of our production databases on an Exadata cluster: one RAC database's IO problem caused all databases on the cluster to slow down, user transactions took much more time, and batch jobs hung. We need to run some scripts to continuously monitor the cluster besides OEM, so that we can identify issues more quickly and take action sooner. OEM provides good information for our needs; we need your help to get related SQL scripts that generate the same or similar information. In Oracle AWR reports, we can see a section that is specific to Exadata. It appears in every AWR report no matter in which database the report is run (it is global data): "Exadata Top Database Consumers". The information includes the cell usage and IO of all databases even though the AWR report is run in only one database. Can you please help with the SQL that generates that part of the AWR report? For example, how can we run a query to get "Top Databases by Requests" and so on by querying only one of the databases on the cluster? We checked all of the dictionary tables/views but failed to find any that provide the cluster-wide information shown in the AWR reports. We need to run the scripts (or a stored procedure) periodically and have the system send us an email if any specific database's IO requests surpass a threshold we set. We can query live tables (GV$xxxxx) or DBA_HIST_XXXXX. Please advise. Thanks!
Categories: DBA Blogs

"java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver"

Wed, 2021-04-28 10:06
May I ask about the "java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver" error? Per the Java documentation, the PATH and CLASSPATH should be configured in the Windows 10 environment variables. Below is information regarding the system/database version and what I did. The Java version is as follows: C:\>java -version java version "1.8.0_281" Java(TM) SE Runtime Environment (build 1.8.0_281-b09) Java HotSpot(TM) 64-Bit Server VM (build 25.281-b09, mixed mode) The JAVA_HOME is C:\Program Files\Java\jre\1.8.0_281 The CLASSPATH has C:\Program Files\Java\jdk\1.8.0_281 C:\Program Files\Java\jdk\1.8.0_281\bin C:\Program Files\Java\jre\1.8.0_281 C:\Program Files\Java\jre\1.8.0_281\bin C:\Oracle_RDBS_installation\product\12.2.0\client_1\bin C:\Oracle_RDBS_installation\product\12.2.0\dbhome_1\bin The compilation of the Java program was successful using "javac", but the class-not-found error message was returned at run time. I have no problem accessing the Oracle database from SQL*Plus at all. I created two users and a simple table for the sake of testing; everything worked fine. I have no problem with the listener or TNSNAMES.ORA. Below is the information about the database: SQL*Plus: Release 12.2.0.1.0 Production on Wed Apr 21 09:08:05 2021 Copyright (c) 1982, 2016, Oracle. All rights reserved. Enter user-name: c##_chiara Enter password: Last Successful login time: Sat Apr 17 2021 09:46:21 -04:00 Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production Below is the Java code: <code>
import java.sql.*;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Properties;

public class Thirdj3 {
  public static void main(String[] args){
    System.out.println("Beginning of the Thirdj3 program.");
    try{
      //step1 load the driver class
      Class.forName("oracle.jdbc.driver.OracleDriver");
      //step2 create the connection object
      Connection con=DriverManager.getConnection(
        "jdbc:oracle:thin:@DESKTOP-JBRID90.home:1521:orcl","c##_chiara","chiara");
      //step3 create the statement object
      Statement stmt=con.createStatement();
      //step4 execute query
      //*ResultSet rs=stmt.executeQuery("select * from emp");
      //*while(rs.next())
      //*System.out.println(rs.getInt(1)+" "+rs.getString(2)+" "+rs.getString(3));
      //step5 close the connection object
      con.close();
    } catch(Exception e){ System.out.println(e);}
  }//End of main
}//End of Thirdj3 Class
</code>
Categories: DBA Blogs
