How to monitor whether a DROP USER statement is hung or running?

Hi All,

We were doing a refresh activity in one of our data warehouse environments: we had to drop the existing schema in that database, recreate the user, and load it with the latest dump.

So, as usual, I issued the DROP USER command, and it was still running after 30-40 minutes. I got worried and tried to check whether it was actually running, hung, or hitting some other anomaly…
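For context, since the schema owned objects, the statement has to be the cascading form (the owner name is a placeholder, as in the rest of this post):

DROP USER <OWNER> CASCADE;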

I checked v$session_longops, but there was no record corresponding to this drop operation.

Then I used another method. Of course it doesn't look elegant, but I liked it very much.

I opened another session and simply kept issuing

select count(0) from dba_objects where owner = '<OWNER>';

over and over… see the results below.

10:20:16 SYS@CLDTRNG2 > select count(0) from dba_objects where owner = '<OWNER>';

  COUNT(0)
----------
    129656

10:20:50 SYS@CLDTRNG2 > /

  COUNT(0)
----------
    129647

10:20:53 SYS@CLDTRNG2 > /

  COUNT(0)
----------
    129643

10:20:55 SYS@CLDTRNG2 > /

  COUNT(0)
----------
    126252

10:27:15 SYS@CLDTRNG2 > /

  COUNT(0)
----------
    103607

10:33:11 SYS@CLDTRNG2 > /

  COUNT(0)
----------
    103607

10:33:13 SYS@CLDTRNG2 > /

  COUNT(0)
----------
     95615

The count keeps decreasing, so the drop is clearly making progress. When it reaches 0, the user is dropped, and the session running the DROP USER finally reports "User dropped."
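As a complementary check, you can confirm the dropping session is actively working rather than sitting on a blocking wait (a minimal sketch; replace <SID> with the SID of the session running the DROP USER):

select sid, status, event, state, seconds_in_wait
from v$session
where sid = <SID>;

An ACTIVE status with short, changing waits points at normal progress; a long wait on an enqueue would suggest the drop is blocked.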

This simple check helped us realize that the DROP USER command was working perfectly in the background, and it saves us from standing on our toes :)

Hope this helps.

Your feedback and suggestions are welcome.

CSM

Backup optimization in RMAN

I came across the BACKUP OPTIMIZATION parameter in RMAN when I listed all the configuration parameters in my environment with the SHOW ALL command.

The documentation just says that setting it to ON enables backup optimization and setting it to OFF disables it. A very unsatisfying answer :)

So I researched a bit more to find out what it really is. Some people said it's the same as BCT (I have already written some good articles about BCT: Article 1, Article 2 and Article 3). I said "I don't think so!!", and thanks to Google/search engines (these days the two words are interchangeable) I could check for myself; otherwise we would be forced to listen to every misconception out there, since getting proof against a false hypothesis would be very difficult. NO OFFENSE :)

So here is the answer:

Backup Optimization Algorithm

Datafile

With a recovery window-based retention policy:

For backups to tape, RMAN takes another backup of a file, even if a backup of an identical file exists, if the most recent backup is older than the configured recovery window. This is done to allow media to be recycled after the media expires.

For backups to disk, RMAN skips taking the backup if an identical file is available from a backup on disk, even if that backup is older than the beginning of the recovery window. The retention policy causes RMAN to retain the old backup for as long as it is needed.

With a redundancy-based retention policy:

RMAN sets n=r+1 by default and searches for values of n in this order of precedence (that is, values higher on the list override values lower on the list):

  1. BACKUP ... COPIES n

  2. SET BACKUP COPIES n

  3. CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE ... TO n

  4. If CONFIGURE RETENTION POLICY TO REDUNDANCY r is enabled, then RMAN only skips datafiles when n=r+1 backups exist.

RMAN skips backup only if at least n backups of an identical file exist on the specified device. If RMAN does not skip the backup, then it makes the backup exactly as specified.
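As an illustration of the datafile rule above (a hedged sketch; the redundancy value is just an example), with REDUNDANCY 2 an unchanged datafile is skipped only once three identical backups already exist on the device:

CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION ON;
# n = r + 1 = 3: RMAN skips an unchanged datafile only after 3 identical backups exist
BACKUP DATABASE;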

Archived log

By default, n=1. RMAN searches for values of n in this order of precedence (that is, values higher on the list override values lower on the list):

  1. BACKUP ... COPIES n

  2. SET BACKUP COPIES n

  3. CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE ... TO n

RMAN skips backup only if at least n backups of an identical file exist on the specified device. If RMAN does not skip the backup, then it makes the backup exactly as specified.

Backup set

By default, n=1. RMAN searches for other values of n in this order of precedence (that is, values higher on the list override values lower on the list):

  1. BACKUP ... COPIES n

  2. SET BACKUP COPIES n

RMAN skips backup only if at least n backups of an identical file exist on the specified device. If RMAN does not skip the backup, then it makes the backup exactly as specified.

How to enable and disable backup optimization??

By default, backup optimization is configured to OFF. To enable backup optimization, run the following command:

CONFIGURE BACKUP OPTIMIZATION ON;

To disable backup optimization, run the following command:

CONFIGURE BACKUP OPTIMIZATION OFF;

To clear the current backup optimization setting, that is, return backup optimization to its default setting of OFF, run this command:

CONFIGURE BACKUP OPTIMIZATION CLEAR;
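One more practical note: even when optimization is ON, the FORCE option on the BACKUP command overrides it for that one job. For example:

# Back up everything, ignoring backup optimization for this one command
BACKUP DATABASE FORCE;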

How to check the current status?

You can use SHOW BACKUP OPTIMIZATION to view the current backup optimization setting as configured with the CONFIGURE BACKUP OPTIMIZATION command. After connecting to the target database and recovery catalog (if you use one), issue the SHOW BACKUP OPTIMIZATION command. For example, enter:

SHOW BACKUP OPTIMIZATION;

Sample output for SHOW BACKUP OPTIMIZATION follows:

RMAN configuration parameters are:
CONFIGURE BACKUP OPTIMIZATION ON;

How to grant privileges on V$ tables??

Hi Data lovers,

I was testing something in my 11.2.0.3 database today, and as part of that I created a user who needs SELECT on some of the V$ views.

I tried granting the privilege directly with the command below:

SQL> grant select on v$archived_log to csm;
grant select on v$archived_log to csm
*
ERROR at line 1:
ORA-02030: can only select from fixed tables/views

The error message looked really weird to me. Are the "V$" objects not views, then?

After going through the documentation, I found that although we call them views when referring to them, they are actually synonyms. So the error message is legitimate, not weird as I had thought.

So I accomplished my purpose by doing this:

SQL> grant select on v_$archived_log to csm;

Grant succeeded.

What's this "V_$" thing? The actual views have names like V_$ARCHIVED_LOG, V_$SESSION, etc.; each V$ name is just a synonym for the original view.

So granting on the original view solved the problem.
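You can see the mapping for yourself in the data dictionary. For example:

select owner, synonym_name, table_owner, table_name
from dba_synonyms
where synonym_name = 'V$ARCHIVED_LOG';

This shows the public synonym V$ARCHIVED_LOG pointing at SYS.V_$ARCHIVED_LOG.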

p.s. I'm still not sure why Oracle created synonyms with these names instead of just leaving the view names intact and publishing them to the world… then we would be calling them V_$ views instead of V$ views. Some historical reason, probably :)

CSM

Can we start the Database with both the spfile and pfile lost??

Hi Data lovers,

Hope you guys are having a good time with your databases.

A scenario I came across this week while playing with one of my test databases: I lost both my spfile and my pfile.

So what do we do in this case to bring the database up? I already have a blog post here on recovering at least some of the parameters in such scenarios.

Now I found another option to start my database: through RMAN.

Yes, RMAN can start an Oracle instance when all the parameter files are lost. It uses default parameters to start the instance.

Please see the exercise below.

[DB11203-11.2.0 @ mymachine : /home/oracle] sql

SQL*Plus: Release 11.2.0.3.0 Production on Wed Jun 12 19:27:13 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> sho parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /u01/app/oracle/product/11.2.0
                                                 /11.2.0.3/dbs/spfileDB11203.or
                                                 a
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Now we’re renaming the spfile and trying to start the database.

[DB11203-11.2.0 @ mymachine : /home/oracle] home
[DB11203-11.2.0 @ mymachine : /u01/app/oracle/product/11.2.0/11.2.0.3] cd dbs
[DB11203-11.2.0 @ mymachine : /u01/app/oracle/product/11.2.0/11.2.0.3/dbs] ls
hc_DB11203.dat  init.ora  lkDB11203  orapwDB11203  spfileDB11203.ora
[DB11203-11.2.0 @ mymachine : /u01/app/oracle/product/11.2.0/11.2.0.3/dbs] mv spfileDB11203.ora spfileDB11203.ora_

[DB11203-11.2.0 @ mymachine : /u01/app/oracle/product/11.2.0/11.2.0.3/dbs] sql

SQL*Plus: Release 11.2.0.3.0 Production on Wed Jun 12 19:27:55 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shut abort
ORACLE instance shut down.
SQL> startup
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/11.2.0/11.2.0.3/dbs/initDB11203.ora'
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Now try with RMAN

[DB11203-11.2.0 @ mymachine : /u01/app/oracle/product/11.2.0/11.2.0.3/dbs] rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Wed Jun 12 19:28:20 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/11.2.0/11.2.0.3/dbs/initDB11203.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of startup command at 06/12/2013 19:29:46
ORA-00205: error in identifying control file, check alert log for more info

This error occurs because, with the default settings, the Oracle instance expects the control files to be under the $ORACLE_HOME/dbs location. So I identified the location of the control files and copied them there.
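(Before copying anything, you can check exactly where the dummy instance is looking by running the following in SQL*Plus; the output is omitted here since the values are environment-specific.)

SQL> show parameter control_files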

[DB11203-11.2.0 @ mymachine : /u01/app/oracle/product/11.2.0/11.2.0.3/dbs] cd /u01/app/oracle/database/DB11203/
[DB11203-11.2.0 @ mymachine : /u01/app/oracle/database/DB11203]
[DB11203-11.2.0 @ mymachine : /u01/app/oracle/database/DB11203] ls
control01.ctl  control02.ctl  example01.dbf  redo01.log  redo02.log  redo03.log  sysaux01.dbf  system01.dbf  temp01.dbf  undotbs01.dbf  users01.dbf
[DB11203-11.2.0 @ mymachine : /u01/app/oracle/database/DB11203] cp control01.ctl control02.ctl /u01/app/oracle/product/11.2.0/11.2.0.3/dbs
[DB11203-11.2.0 @ mymachine : /u01/app/oracle/database/DB11203] sql

SQL*Plus: Release 11.2.0.3.0 Production on Wed Jun 12 19:33:00 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shut abort
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Now try it again with RMAN

[DB11203-11.2.0 @ mymachine : /u01/app/oracle/database/DB11203] rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Wed Jun 12 19:33:09 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup

Oracle instance started
database mounted
database opened

Total System Global Area     939495424 bytes

Fixed Size                     2233960 bytes
Variable Size                268437912 bytes
Database Buffers             662700032 bytes
Redo Buffers                   6123520 bytes

RMAN> exit
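A note on where to go from here: running on RMAN's default parameters is only a stopgap. If controlfile autobackups were enabled, a cleaner follow-up (a sketch, assuming an autobackup exists in the default location; the DBID is a placeholder) is to restore the real spfile and restart with it:

RMAN> startup force nomount;               # start again with the dummy parameter file
RMAN> set dbid <your DBID>;                # may be needed so RMAN can locate the autobackup
RMAN> restore spfile from autobackup;
RMAN> startup force;                       # restart using the restored spfile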

Please feel free to comment/provide your feedback.

CSM

Optimizer generating wrong execution plans!!

Hi Data Lovers,

Recently I came across a weird situation caused by the Oracle optimizer's selection of a wrong execution plan.

To explain clearly: I got a complaint from the application teams saying they were noticing a database error in their log files. It's a 10.2.0.4 database, and the error was a simple TEMP space exhaustion message.

I identified the SQL causing the issue, asked them to run it again, and monitored it closely.

It was a simple SQL statement with 2-3 nested joins on 5 tables, and none of those tables contains more than 400 rows. But it was consuming 17 GB of temp space; yes, SEVENTEEN, absolutely :)

I tried analyzing different things, and in the process of investigation I generated the execution plan too.

Looking at the execution plan, I found all sorts of weird predicates. Why weird? Because I was seeing column and table names that are not used in any part of the original SQL statement.

Then I looked at the cost, the number of rows processed, and so on; they were in the millions. That's when I realized something was wrong.

So I flushed the shared pool and issued the statement again; Eureka, it worked perfectly as expected. My expected execution plan and Oracle's execution plan were almost identical now.

The query execution finished in 20-30 seconds instead of taking 30 minutes.

Apparently, Oracle's optimizer had associated a wrong execution plan with this statement's hash value.

There is a downside to flushing the shared pool: it leads to hard parses for every statement immediately afterwards. In our case we could afford that, as it was a non-production environment and the number of queries fired against the database was very limited.
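For reference, the flush itself is a one-liner:

SQL> alter system flush shared_pool;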

If you want to purge just a selected plan instead, go through this: https://blogs.oracle.com/mandalika/entry/oracle_rdbms_flushing_a_single
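In short, the approach there looks like this (a sketch: the LIKE pattern is a placeholder for your statement's text, and note that on some 10.2.0.4 patch levels DBMS_SHARED_POOL.PURGE needs an extra event/patch before it takes effect):

-- find the address and hash value of the offending cursor
select address, hash_value from v$sqlarea where sql_text like '<your statement text>%';

-- purge just that cursor ('C' means cursor) instead of the whole shared pool
exec sys.dbms_shared_pool.purge('<address>,<hash_value>', 'C');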

The main takeaway here: if your queries suddenly start behaving weirdly, consider the possibility that Oracle's optimizer is in a hangover mood sometimes ;)

p.s. The query I used to determine the TEMP usage is:

-- Temp segment usage per session --

SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module,
       P.program, SUM(T.blocks) * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
       COUNT(*) statements
FROM   v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
WHERE  T.session_addr = S.saddr
AND    S.paddr = P.addr
AND    T.tablespace = TBS.tablespace_name
GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
         P.program, TBS.block_size, T.tablespace
ORDER BY sid_serial;

Feel free to comment or provide your feedback.

 

Have a good day

CSM.

How reliable is it to use DBMS_STATS.AUTO_SAMPLE_SIZE for gathering table statistics??

Statistics collection in Oracle is managed through the PL/SQL package DBMS_STATS. It provides several procedures to gather statistics for a table, a schema, or a whole database. These procedures take an estimate_percent parameter, which specifies the sampling percentage, in other words how much of the object's data Oracle must scan to arrive at its estimates of the different statistics. You can specify any number between 0 and 100 for this parameter.
For example, suppose you have a table ABCD and want a 10% sampling percentage; you can do this:

exec dbms_stats.gather_table_stats(null, 'ABCD', estimate_percent => 10);

For more details on this procedure: http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_stats.htm

It is not always easy to pick an appropriate sampling percentage. If the specified percentage is too high, gathering statistics can take a long time (I've seen hours on some really big tables). On the contrary, if the data is skewed and the specified percentage is too low, the resulting statistics can be inaccurate, which is an even bigger problem.

For this reason, Oracle introduced the AUTO value for the estimate_percent parameter.

For example, you can gather statistics on ABCD as follows:

exec dbms_stats.gather_table_stats(null, 'ABCD', estimate_percent => dbms_stats.auto_sample_size);

There are two primary advantages to using AUTO sample size over a fixed number:

  1. When AUTO sample size is specified, the system automatically determines the appropriate sampling percentage.
  2. AUTO sample size is more flexible than a fixed sampling percentage. A fixed percentage that was good at some point in time may no longer be appropriate after the data distribution in the table changes. With AUTO, Oracle adjusts the sample size as the data distribution changes.

When AUTO is used, Oracle picks a sample size at which the statistics quality is good enough. However, it does not work very well in cases of extreme skew in the data. To illustrate the accuracy of AUTO sample size, I compared fixed sampling percentages with AUTO sample size in an Oracle 11g database.

There's a table called CUSTOMERSCSM, which contains 23 columns and 6,216,000 rows. It's a table I built from Oracle's sample tables in the SH schema (the tables that come with the installation).

SQL> exec dbms_stats.gather_table_stats(null, 'CUSTOMERSCSM', estimate_percent => 1);

PL/SQL procedure successfully completed.

Elapsed: 00:00:11.25

and

SQL> exec dbms_stats.gather_table_stats(null, 'CUSTOMERSCSM', estimate_percent => 100);

PL/SQL procedure successfully completed.

Elapsed: 00:02:52.37

and

SQL> exec dbms_stats.gather_table_stats(null, 'CUSTOMERSCSM', estimate_percent => dbms_stats.auto_sample_size);

PL/SQL procedure successfully completed.

Elapsed: 00:00:29.47
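As a quick sanity check, you can also see how many rows each run actually sampled, straight from the data dictionary (SAMPLE_SIZE is the row count used for the estimate):

SQL> select sample_size, num_rows, last_analyzed from dba_tables where table_name = 'CUSTOMERSCSM';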

 

You can see there's a huge difference between the 1% and 100% samples, whereas the 1% and AUTO runs finished in almost the same time (an 18-second difference, to be accurate).

Since we get significant time gains with AUTO sample size, let's now look at accuracy. Among all the statistics of a column, the number of distinct values (NDV) is the one whose accuracy has traditionally been an issue.

The actual number of distinct values in the column CUST_MAIN_PHONE_NUMBER of the CUSTOMERSCSM table is:

SQL> select count(distinct(CUST_MAIN_PHONE_NUMBER)) from CUSTOMERSCSM;

COUNT(DISTINCT(CUST_MAIN_PHONE_NUMBER))
---------------------------------------
                                  51000

 

Case 1: With 1% sample size

SQL> select column_name, NUM_DISTINCT from DBA_TAB_COL_STATISTICS where TABLE_NAME='CUSTOMERSCSM' and COLUMN_NAME='CUST_MAIN_PHONE_NUMBER';

COLUMN_NAME                    NUM_DISTINCT
------------------------------ ------------
CUST_MAIN_PHONE_NUMBER                49362

Case 2: With 100% sample size

SQL> select column_name, NUM_DISTINCT from DBA_TAB_COL_STATISTICS where TABLE_NAME='CUSTOMERSCSM' and COLUMN_NAME='CUST_MAIN_PHONE_NUMBER';

COLUMN_NAME                    NUM_DISTINCT
------------------------------ ------------
CUST_MAIN_PHONE_NUMBER                51000

Case 3: With AUTO sample size

SQL> select column_name, NUM_DISTINCT from DBA_TAB_COL_STATISTICS where TABLE_NAME='CUSTOMERSCSM' and COLUMN_NAME='CUST_MAIN_PHONE_NUMBER';

COLUMN_NAME                    NUM_DISTINCT
------------------------------ ------------
CUST_MAIN_PHONE_NUMBER                51344

 

Estimate Percentage    Actual value    Value from stats    % deviation
-------------------    ------------    ----------------    -----------
1%                     51000           49362               3.21%
100%                   51000           51000               0%
AUTO                   51000           51344               0.67%

So the percentage deviation is very small with AUTO sample size compared to the 1% sample.

Now we'll do the same exercise on another column, CUST_ID.

SQL> select count(distinct(CUST_ID)) from CUSTOMERSCSM;

COUNT(DISTINCT(CUST_ID))
------------------------
                   55500

Case 1: With 1% sample size

SQL> select column_name, NUM_DISTINCT from DBA_TAB_COL_STATISTICS where TABLE_NAME='CUSTOMERSCSM' and COLUMN_NAME='CUST_ID';

COLUMN_NAME                    NUM_DISTINCT
------------------------------ ------------
CUST_ID                               55704

Case 2: With 100% sample size

SQL> select column_name, NUM_DISTINCT from DBA_TAB_COL_STATISTICS where TABLE_NAME='CUSTOMERSCSM' and COLUMN_NAME='CUST_ID';

COLUMN_NAME                    NUM_DISTINCT
------------------------------ ------------
CUST_ID                               55500

 

Case 3: With AUTO sample size

SQL> select column_name, NUM_DISTINCT from DBA_TAB_COL_STATISTICS where TABLE_NAME='CUSTOMERSCSM' and COLUMN_NAME='CUST_ID';

COLUMN_NAME                    NUM_DISTINCT
------------------------------ ------------
CUST_ID                               55992

Estimate Percentage    Actual value    Value from stats    % deviation
-------------------    ------------    ----------------    -----------
1%                     55500           55704               0.37%
100%                   55500           55500               0%
AUTO                   55500           55992               0.89%

Here, though, the AUTO sample size deviated more than the 1% sample did.

So we've now seen two different outcomes with the 1% and AUTO sample sizes. Is it reliable, then, to depend on AUTO sampling??

I'm still trying to find the answer and will post my observations soon in another article. I'll be happy if anyone can clear the clouds even before I do it for myself :)

Have a good day, guys. Keep contemplating.

 

regards,

CSM

BCT–Block Change Tracking Explained

Hi,

I came across an excellent article on BCT (Block Change Tracking) written by one of my all-time favorite authors, Alexander Gorbachev.

Please read it here: Alex's article on BCT, and say goodbye to all the myths about BCT.

Thank you, Alex, for such a wonderful article.

CSM
