Issuing frequent commits is bad, bad, BAD! It's the worst thing you can do... just don't do it! In the following example I will create roughly 7.8 million rows and then attempt to update a portion of them serially, issuing a commit every thousandth row.
Example 1.1: Creating a somewhat large table
SQL> create table big_employee_table
  2  as
  3  select rownum as eid
  4       , e.*
  5  from   hr.employees e
  6       , dba_objects do;

Table created.

Elapsed: 00:00:12.23

SQL> select count(*)
  2  from big_employee_table;

  COUNT(*)
----------
   7838713

Elapsed: 00:00:08.11
Before I go on, notice that Oracle's "Create Table As Select" (CTAS) method blazed through table creation: 7.84 million rows in 12.23 seconds. Sometimes this is the very best method of updating large data sets. The following block updates 100,000 rows serially, committing every 1,000 rows:
Example 1.2: Updating serially
SQL> declare
  2    cursor c is
  3      select *
  4      from   big_employee_table
  5      where  rownum <= 100000;
  6  begin
  7    for r in c loop
  8      update big_employee_table
  9      set    salary = salary * 1.03
 10      where  eid = r.eid;
 11
 12      if mod ( r.eid, 1000 ) = 0 then
 13        commit;
 14      end if;
 15    end loop;
 16  end;
 17  /
Observe that the update took more time than I have patience for ;) At the 20-minute mark I killed the session. It is painfully slow and should never be done this way. Moreover, it chewed up an entire CPU core for the duration. If you're only updating a few rows, why do it in PL/SQL at all? I like Tom Kyte's approach (paraphrasing):
1. Do it in SQL.
2. If SQL can't do it, do it in PL/SQL.
3. If PL/SQL can't do it, do it in Java.
4. If Java can't do it, ask yourself if it needs to be done at all.
The following block does the same work in bulk:
Example 1.3: Updating in bulk and committing at the end
SQL> declare
  2    type obj_rowid is table of rowid
  3      index by pls_integer;
  4
  5    lr_rowid  obj_rowid;
  6    lr_salary dbms_sql.number_table;
  7
  8    cursor c is
  9      select rowid rid
 10           , salary
 11      from   big_employee_table
 12      where  rownum <= 100000;
 13  begin
 14    open c;
 15    loop
 16      fetch c bulk collect
 17      into  lr_rowid
 18          , lr_salary
 19      limit 500;
 20
 21      for a in 1 .. lr_rowid.count loop
 22        lr_salary ( a ) := lr_salary ( a ) * 1.03;
 23      end loop;
 24
 25      forall b in 1 .. lr_rowid.count
 26        update big_employee_table
 27        set    salary = lr_salary ( b )
 28        where  rowid in ( lr_rowid ( b ));
 29
 30      exit when c%notfound;
 31    end loop;
 32    close c;
 33    commit; -- there! not in the loop
 34  exception
 35    when others then
 36      rollback;
 37      dbms_output.put_line ( sqlerrm );
 38  end;
 39  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.11

SQL>
Notice that the update completed in about 2 seconds! I've seen faster, but my two-gerbil sandbox machine doesn't have the power that our newer servers do. The point is that the update was incredibly fast and chewed up only 10% of one core. So, in answer to the question "how often should I commit?" I say: don't, until you absolutely have to.
I've never read so much bull in all my life!
i). You only COMMIT when it's logically correct to do so. Transactions are atomic. COMMITting part-way through prevents you from rolling back if you need to, leaving your transaction in an unknown state. Your only criterion for a COMMIT is the end of a transaction. Got it?
ii). That UPDATE example you see up there is much more easily written as just this:
UPDATE BIG_EMPLOYEE_TABLE SET SALARY = SALARY * 1.03;
That's it! No PL/SQL even needed.
Whoever wrote that junk, please keep off the Net in future.
PL/SQL respects object privileges granted directly to the user, but does not observe privileges granted through roles. The consequence is that a SQL statement can work in SQL*Plus, but will give an error in PL/SQL. Choose one of the following solutions:
GRANT select ON scott.emp TO my_user;
create or replace procedure proc1
   authid current_user
is
begin
   ...
"Mutating" means "changing". A mutating table is a table that is currently being modified by an update, delete, or insert statement. When a trigger tries to reference a table that is in state of flux (being changed), it is considered "mutating" and raises an error since Oracle should not return data that has not yet reached its final state.
Another way this error can occur is if the trigger has statements to change the primary, foreign or unique key columns of the table off which it fires. If you must have triggers on tables that have referential constraints, the workaround is to enforce the referential integrity through triggers as well.
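As an illustrative sketch (the table and trigger names here are hypothetical, not from the examples above), a row-level trigger that queries the very table it fires on will raise the mutating-table error:

```sql
CREATE TABLE emp_demo (empno NUMBER, sal NUMBER);

INSERT INTO emp_demo VALUES (1, 1000);

CREATE OR REPLACE TRIGGER emp_demo_trig
   AFTER UPDATE ON emp_demo
   FOR EACH ROW
DECLARE
   l_avg NUMBER;
BEGIN
   -- selecting from the table this trigger fires on: not allowed
   -- in a row-level trigger while the table is mutating
   SELECT AVG(sal) INTO l_avg FROM emp_demo;
END;
/

UPDATE emp_demo SET sal = sal * 1.1;
-- raises ORA-04091: table EMP_DEMO is mutating,
-- trigger/function may not see it
```

A statement-level trigger (without FOR EACH ROW) could run the same query without error, because it fires after the statement completes.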
There are several restrictions in Oracle regarding triggers. For example, a trigger cannot issue transaction control statements such as COMMIT or ROLLBACK (unless it is autonomous), and a row-level trigger cannot query or modify the mutating table it fires on.
The only way to reference an object type between databases is via a database link. Note that it is not enough to just use "similar" type definitions. Look at this example:
-- Database A: receives a PL/SQL table from database B
CREATE OR REPLACE PROCEDURE pcalled(TabX DBMS_SQL.VARCHAR2S) IS
BEGIN
   -- do something with TabX from database B
   null;
END;
/
-- Database B: sends a PL/SQL table to database A
CREATE OR REPLACE PROCEDURE pcalling IS
   TabX DBMS_SQL.VARCHAR2S@DBLINK2;
BEGIN
   pcalled@DBLINK2(TabX);
END;
/
No. Unlike Oracle Forms, SQL*Plus does not have an embedded PL/SQL engine. Thus, all your PL/SQL code is sent directly to the database engine for execution as a single unit. This makes it much more efficient, as individual SQL statements are not stripped out and sent to the database one by one.
Yes. The maximum size is not an explicit byte limit; it is related to the size of the parse tree that is created when you compile the code. You can run the following select statement to query the size of an existing package or procedure (note that data dictionary object names are stored in uppercase):
SQL> select * from dba_object_size where name = 'procedure_name';
The following limits apply:
Level of Block Nesting: 255
Level of Record Nesting: 32
Level of Subquery Nesting: 254
Level of Label Nesting: 98
A commit inside a trigger would defeat the basic definition of an atomic transaction (see ACID). Trigger logic is by definition an extension of the original DML operation. Changes made within triggers should thus be committed or rolled back as part of the transaction in which they execute. For this reason, triggers are NOT allowed to execute COMMIT or ROLLBACK statements (with the exception of autonomous triggers). Here is an example of what will happen when they do:
SQL> CREATE TABLE tab1 (col1 NUMBER);

Table created.

SQL> CREATE TABLE log (timestamp DATE, operation VARCHAR2(2000));

Table created.

SQL> CREATE TRIGGER tab1_trig
  2    AFTER insert ON tab1
  3  BEGIN
  4    INSERT INTO log VALUES (SYSDATE, 'Insert on TAB1');
  5    COMMIT;
  6  END;
  7  /

Trigger created.

SQL> INSERT INTO tab1 VALUES (1);
INSERT INTO tab1 VALUES (1)
            *
ERROR at line 1:
ORA-04092: cannot COMMIT in a trigger
ORA-06512: at "SCOTT.TAB1_TRIG", line 3
ORA-04088: error during execution of trigger 'SCOTT.TAB1_TRIG'
Autonomous transactions:
As a workaround, one can use autonomous transactions. Autonomous transactions execute separately from the current transaction.
Unlike regular triggers, autonomous triggers can contain COMMIT and ROLLBACK statements. Example:
SQL> CREATE OR REPLACE TRIGGER tab1_trig
  2    AFTER insert ON tab1
  3  DECLARE
  4    PRAGMA AUTONOMOUS_TRANSACTION;
  5  BEGIN
  6    INSERT INTO log VALUES (SYSDATE, 'Insert on TAB1');
  7    COMMIT; -- only allowed in autonomous triggers
  8  END;
  9  /

Trigger created.

SQL> INSERT INTO tab1 VALUES (1);

1 row created.
Note that the above example will insert and commit log entries even if the main transaction is rolled back!
Remember that an AUTONOMOUS_TRANSACTION procedure, function, or trigger is a whole transaction in itself, and so it must end with a commit or a rollback statement.
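As a sketch of the same idea in a procedure (the procedure name write_log is hypothetical; the log table is the one created in the trigger example above), an autonomous logging routine commits its own insert without disturbing the caller's transaction:

```sql
CREATE OR REPLACE PROCEDURE write_log (p_msg VARCHAR2)
IS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO log (timestamp, operation)
   VALUES (SYSDATE, p_msg);
   COMMIT; -- required: the autonomous transaction must end itself
END;
/
```

A caller can invoke write_log in the middle of its own transaction and later roll back; the log entry survives because it was committed in a separate transaction.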