Enable BCT (Block Change Tracking)

Hi,

In my previous post (https://querycsm.wordpress.com/2013/01/02/why-rman-daily-incremental-backups-are-taking-so-long/) I talked about investigating the potential problems and said that one of the improvements we could make is enabling BCT.

Now we'll see how to do that, and whether there is anything else we need to do after enabling it.

First, I recommend going through the paper by my favorite author, Alexander Gorbachev, to get an idea of what BCT is and how it works (at least read the first few pages).

To enable BCT we have two ways.

Specifying the BCT file location manually, like this:

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/dbfile/bct_file.ora';

Or the other way:

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

In this case you need to set db_create_file_dest to the location where you want the BCT file to be created before enabling it:

SQL> alter system set db_create_file_dest='location' SCOPE=BOTH;
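For instance, a quick sketch with a hypothetical destination path (substitute a directory of your own with room for the tracking file):

SQL> -- '/u01/app/oracle/oradata' is just an example location
SQL> alter system set db_create_file_dest='/u01/app/oracle/oradata' scope=both;
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;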

To check whether BCT is enabled, fire the query below:

SQL> select * from v$block_change_tracking;

STATUS     FILENAME                                          BYTES
---------- ---------------------------------------------- --------
ENABLED    /ora11dbdata2/oracle/NIKU/blk_chg_trckng.dbf    11599872
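As an aside, if you ever need to turn BCT off (for example, to move the tracking file and re-enable it with the USING FILE syntax shown above), the command is:

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;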

I did these steps but was not seeing any improvement in PCT_READ (percentage of blocks read) from the query below.

SQL> select file#, avg(datafile_blocks) blocks,
            avg(blocks_read) blocks_read,
            avg(blocks_read/datafile_blocks)*100 pct_read,
            avg(blocks) blocks_backed_up,
            (avg(blocks)/avg(datafile_blocks))*100 pct_backed_up
     from v$backup_datafile
     where incremental_level = 1
     group by file#
     order by file#;

So after a lot of research, I found that we need to take a level 0 (baseline full) backup of the database after enabling BCT to leverage its benefits. We should do this because that backup gives the change tracking bitmap its baseline; from there, the bitmap is updated as blocks change, and subsequent level 1 backups can use it.
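A minimal sketch of the whole sequence (the verification at the end uses the USED_CHANGE_TRACKING column of v$backup_datafile, which tells you whether RMAN actually used the tracking file):

RMAN> -- the first level 0 backup after enabling BCT establishes the bitmap baseline
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> -- subsequent level 1 backups can now read only the changed blocks
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

Then, from SQL*Plus:

SQL> -- YES in used_change_tracking means the incremental leveraged the BCT file
SQL> select file#, incremental_level, used_change_tracking
     from v$backup_datafile
     where incremental_level = 1;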

I've since observed that a lot of people see similar statistics, i.e. no improvement after enabling BCT, and this is the reason.

BTW, the performance improvement after enabling BCT in our production database is humongous: we now see only 2%-3% of the blocks being read by RMAN. Thanks to BCT!!!

You're always welcome with any queries and questions.

Regards,

CSM


Why RMAN daily incremental backups are taking so long???

Hi,

Have you ever come across the weird problem of RMAN taking too long even for daily level 1 incremental backups, and backing up a huge amount of data?

I've seen such behavior with some of our production databases: even a daily incremental backup was more than 80% of the size of the full backup and took longer than the full backup itself.

We'll now break down the problem and see what the reasons could be and where to start the investigation.

1. As we all know, incremental backups back up the blocks that have changed since the last successful incremental backup. If an incremental is huge, a large number of blocks are being changed daily and Oracle has no option but to back those blocks up.

So, where do we start investigating which objects are generating these 'huge deltas' and being changed so much?

Use the query below to see the number of blocks being changed, file by file.

SQL> select file#, avg(datafile_blocks) blocks,
            avg(blocks_read) blocks_read,
            avg(blocks_read/datafile_blocks)*100 pct_read,
            avg(blocks) blocks_backed_up,
            (avg(blocks)/avg(datafile_blocks))*100 pct_backed_up
     from v$backup_datafile
     where incremental_level = 1
     group by file#
     order by file#;

From the above query you'll get the percentage of blocks being backed up (pct_backed_up) for each file, and from there you can drill down into which objects reside in that file via the tablespace name, dba_extents, etc. (a sketch follows below).
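A minimal sketch of such a drill-down (the file number 5 is hypothetical; plug in a file# flagged by the query above):

SQL> -- largest segments residing in file 5
SQL> select owner, segment_name, segment_type, sum(blocks) blocks
     from dba_extents
     where file_id = 5
     group by owner, segment_name, segment_type
     order by blocks desc;

To see which of those segments are actually being modified, v$segment_statistics with statistic_name = 'db block changes' is another good place to look.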

2. What if it's taking too long to complete the backup operation? What should we suspect?

There are a lot of areas of concern: the tape drive's performance; if the backup goes to disk, the disk controller's performance; OS-level issues, since RMAN is ultimately an OS-level process; and so on.

But from the database point of view we can use the above query, and if pct_read is 100% for all the files, we can confirm that the database is reading each and every block and wasting a lot of time.

First of all, why does it do that? Because, without change tracking, RMAN has to visit each block, check whether it has been modified, and back it up if it has. That's the way the algorithm has been devised (a linear scan of every block).

So is there any way we could make it more efficient?

Yes, absolutely yes!!! Oracle has gifted us a feature called BCT (Block Change Tracking). Google the term and you'll find very good articles and information about it.

Please enable BCT and you'll be amazed by the results, as I was!!!

We'll discuss how to enable BCT, and some hidden facts about it, in my next article…

Keep reading!!!

 

Regards,

CSM

ORA-39142 “incompatible version”

Hi,

Many people might have seen the error "ORA-39142: incompatible version" while importing a Data Pump dump file generated from a higher-version database into a lower-version database.

Many people suggested I upgrade the target to the source version, and some said it's not possible at all to import a higher-version dump into a lower-version database.

Thanks to the Data Pump team at Oracle, they've given us an excellent option for the expdp command called VERSION (an example follows the matrix below).

Below is the compatibility matrix for expdp.

Data Pump dumpfile compatibility: the Export Data Pump parameter VERSION=... to use when a dumpfile from a source database (rows, by the source's COMPATIBLE setting) needs to be imported into a target database with a given compatibility level (columns, the value of the init.ora/spfile parameter COMPATIBLE):

    Source
  Database
COMPATIBLE     9.2.0.x.0    10.1.0.x.0    10.2.0.x.0    11.1.0.x.0    11.2.0.x.0
---------- ------------- ------------- ------------- ------------- -------------
10.1.0.x.0   VERSION=9.2             -             -             -             -
---------- ------------- ------------- ------------- ------------- -------------
10.2.0.x.0   VERSION=9.2  VERSION=10.1             -             -             -
---------- ------------- ------------- ------------- ------------- -------------
11.1.0.x.0   VERSION=9.2  VERSION=10.1  VERSION=10.2             -             -
---------- ------------- ------------- ------------- ------------- -------------
11.2.0.x.0   VERSION=9.2  VERSION=10.1  VERSION=10.2  VERSION=11.1             -
---------- ------------- ------------- ------------- ------------- -------------
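A minimal sketch, assuming an 11.2 source and a 10.2 target (the directory object, schema, and file names here are hypothetical):

$ expdp system directory=dp_dir dumpfile=scott_v102.dmp logfile=scott_v102.log schemas=scott version=10.2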

So what is not possible then?

Connecting to a lower-version database with a higher-version client (expdp/impdp binary). Going the other way, a lower-version client connecting to a higher-version database, is supported. See the compatibility matrix below:

Data Pump client compatibility.
===============================

   expdp and  Connecting to Database version
impdp client     10gR1      10gR2      11gR1      11gR2
     version  10.1.0.x   10.2.0.x   11.1.0.x   11.2.0.x
----------- ---------- ---------- ---------- ----------
   10.1.0.x  supported  supported  supported  supported
   10.2.0.x         no  supported  supported  supported
   11.1.0.x         no         no  supported  supported
   11.2.0.x         no         no         no  supported
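Continuing the sketch above: the VERSION=10.2 dump can then be imported on the 10.2 target using the 10.2 impdp client (names again hypothetical):

$ impdp system directory=dp_dir dumpfile=scott_v102.dmp logfile=imp_scott.log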

 

Hope this helps.

Regards,

CSM