BCT–Block Change Tracking Explained

Hi,

I came across an excellent article on BCT (Block Change Tracking) written by one of my all-time favorite authors, Alexander Gorbachev.

Please read it here: Alex's article on BCT, and say goodbye to all the myths about BCT.

Thank you, Alex Sir, for such a wonderful article.

CSM


DBA_DATA_FILES, DBA_FREE_SPACE and DBA_SEGMENTS–A Myth Buster

Hi Friends,

I've been encountering a lot of questions related to free space and used space details in an Oracle database in various forums and discussions.

Is it really difficult to understand the Oracle space management system? No, it's not. I'm listing my observations here, which could help you discern between the different data dictionary views related to space management and bust the myths around them.

The three main data dictionary views frequently used by many people are DBA_DATA_FILES, DBA_SEGMENTS and DBA_FREE_SPACE.

A lot of queries available on the internet for checking space utilization details are based on DBA_DATA_FILES & DBA_FREE_SPACE, and some on DBA_DATA_FILES & DBA_SEGMENTS.

Some free space reporting queries use DBA_DATA_FILES - DBA_SEGMENTS as the free space, and some directly use DBA_FREE_SPACE; which one should you use?

See the results below:

SQL> select sum(bytes)/1024/1024 from dba_data_files where tablespace_name='USERS_SMALL';

SUM(BYTES)/1024/1024
--------------------
266016

SQL> select sum(bytes)/1024/1024 from dba_segments where tablespace_name='USERS_SMALL';

SUM(BYTES)/1024/1024
--------------------
241157

SQL> select sum(bytes)/1024/1024 from dba_free_space where tablespace_name='USERS_SMALL';

SUM(BYTES)/1024/1024
--------------------
24856

So DBA_DATA_FILES - DBA_SEGMENTS = 266016 - 241157 = 24859; that's a 3 MB difference from the DBA_FREE_SPACE result, and in some cases it's even more. So which one to believe? What's the actual free space I can use for my objects?

The answer is DBA_FREE_SPACE, which contains the free space that is actually available to the objects we're going to create.

Then why am I saying that DBA_DATA_FILES - DBA_SEGMENTS is incorrect? In theory it's perfectly correct, but in reality the difference is the LMT headers/space management blocks, plus the last "few" blocks at the end of the file which aren't enough to make up a whole extent according to the extent allocation and segment space definitions of the tablespace. Thanks to Jonathan Lewis Sir for pointing that out. Please go through Jonathan's blog http://jonathanlewis.wordpress.com/2012/05/31/lmt-headers/ to get a clear idea of what these LMT headers are.

So, DBA_DATA_FILES = DBA_SEGMENTS + DBA_FREE_SPACE + X

where X = the first few blocks of the file that are taken up by space management blocks, plus the last "few" blocks at the end of the file which aren't enough to make up a whole extent according to the extent allocation and segment space definitions of the tablespace.
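
A quick way to see X for the tablespace above (this relies on the USER_BYTES column of DBA_DATA_FILES, which is covered in a moment):

SQL> select sum(bytes - user_bytes)/1024/1024 from dba_data_files where tablespace_name='USERS_SMALL';

From the numbers in this example it should come out to 266016 - 266013 = 3 MB, which is exactly the 3 MB gap we saw above.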

How could I get the 'actual' free space details without using the DBA_FREE_SPACE view? Is that the only way, or does an alternative exist?

There is an alternative through which you can get the actual free space information using just DBA_DATA_FILES and DBA_SEGMENTS/DBA_EXTENTS.

See the query below

SQL> select sum(user_bytes)/1024/1024 from dba_data_files where tablespace_name='USERS_SMALL';

SUM(USER_BYTES)/1024/1024
-------------------------
266013

There's a column called USER_BYTES in DBA_DATA_FILES which contains the actual usable space, excluding the LMT headers and the leftover blocks mentioned earlier.

So DBA_DATA_FILES[USER_BYTES] - DBA_SEGMENTS = 266013 - 241157 = 24856, which is exactly the same value we're seeing in DBA_FREE_SPACE.

Don't stay calm and quiet if someone argues that DBA_DATA_FILES[BYTES] - DBA_SEGMENTS is the actual free space available in the database. Just ask him to read this, or, even better, ask him to fill the database with data equivalent to what he's claiming as free space ;)
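
If you want to run this check across all tablespaces in one go, here's a minimal sketch that computes the overhead X per tablespace and compares the USER_BYTES-derived free space with DBA_FREE_SPACE (the column aliases are my own, not from any standard script):

select df.tablespace_name,
       round(df.mb - df.user_mb, 2)         as overhead_x_mb,
       round(df.user_mb - nvl(sg.mb, 0), 2) as derived_free_mb,
       round(nvl(fs.mb, 0), 2)              as dba_free_space_mb
from  (select tablespace_name,
              sum(bytes)/1024/1024      as mb,
              sum(user_bytes)/1024/1024 as user_mb
       from   dba_data_files
       group  by tablespace_name) df,
      (select tablespace_name, sum(bytes)/1024/1024 as mb
       from   dba_segments
       group  by tablespace_name) sg,
      (select tablespace_name, sum(bytes)/1024/1024 as mb
       from   dba_free_space
       group  by tablespace_name) fs
where  df.tablespace_name = sg.tablespace_name(+)
and    df.tablespace_name = fs.tablespace_name(+)
order  by df.tablespace_name;

The derived_free_mb and dba_free_space_mb columns should match for every tablespace, which is exactly the point of this post.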

CSM

Undocumented DATAPUMP parameters

To all the DATAPUMP supporters: I'm trying to uncover some of the powerful, undocumented parameters as support for your argument :)

Below is the list of those parameters; I've already written two articles about one of them, i.e. TRACE.

The hidden weapons I’m talking about are

  • METRICS
  • ACCESS_METHOD
  • TRACE
  • KEEP_MASTER

Let's see each one of them in detail.

1. METRICS

You can use the parameter METRICS=Y to include additional logging information about the number of objects and the time it took to process them in the log file.

Note that the METRICS parameter does not create any additional trace files. Instead, the extra details are logged in the logfile of the Export Data Pump or Import Data Pump job.

Here is an example:

expdp \'/ as sysdba\' DUMPFILE=dump_dir:CSM_%U.dmp SCHEMAS=CSMDBA JOB_NAME=CSM_dump METRICS=Y

Export: Release 11.2.0.3.0 - Production on Tue Dec 20 12:26:08 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

Starting "SYS"."CSM_DUMP":  "/******** AS SYSDBA" DUMPFILE=dump_dir:CSM_%U.dmp SCHEMAS=CSMDBA JOB_NAME=CSM_dump METRICS=Y

Estimate in progress using BLOCKS method...

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 320 KB

Processing object type SCHEMA_EXPORT/USER

Completed 1 USER objects in 0 seconds

...

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Completed 1 DEFAULT_ROLE objects in 2 seconds

Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA

Completed 1 TABLESPACE_QUOTA objects in 8 seconds

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Completed 1 PROCACT_SCHEMA objects in 27 seconds

Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE

...

. . exported "CSMDBA"."PATCH_RUN"                        0 KB       0 rows

Master table "SYS"."CSM_DUMP" successfully loaded/unloaded

******************************************************************************

Dump file set for SYS.CSM_DUMP is:

/u01/app/oracle/admin/dpdump/CSM_01.dat

Job "SYS"."CSM_DUMP" successfully completed at 10:49:19

2. ACCESS_METHOD

In some situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data.

If the parameter is not specified, then Data Pump will automatically choose the best method to load or unload the data.

The parameter can only be specified when the Data Pump job is initially started; i.e., the parameter cannot be specified when the job is restarted.

If the parameter is specified, the method of loading or unloading the data is enforced on all tables that need to be loaded or unloaded with the job.

Enforcing a specific method may result in slower overall performance of the Data Pump job.

Here is an example:

expdp \'/ as sysdba\'  ... ACCESS_METHOD=DIRECT_PATH

expdp \'/ as sysdba\'  ... ACCESS_METHOD=EXTERNAL_TABLE

impdp \'/ as sysdba\'  ... ACCESS_METHOD=DIRECT_PATH

impdp \'/ as sysdba\'  ... ACCESS_METHOD=EXTERNAL_TABLE

3. TRACE

Please refer to my articles TRACE1 & TRACE2 to get ample information about tracing.

4. KEEP_MASTER

If the undocumented Data Pump parameter KEEP_MASTER=Y is used, then the master table is not deleted after the job completes. Dropping the master table does not lead to any data dictionary corruption, and if you keep the master table after the job completes, dropping it afterwards will not cause any corruption either.

You might use this parameter if you create transportable tablespaces where the source and the destination are both ASM based.

Here is an example of the usage of this feature:

expdp \'/ as sysdba\' directory=DUMPDIR DUMPFILE=keep_master.dmp LOGFILE=keep_master.log FULL=Y KEEP_MASTER=Y
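
Once the job finishes, the retained master table can be inspected and dropped like any other table. It has the same name as the job, so for the full export above (run as SYSDBA with no explicit JOB_NAME, hence the default name SYS_EXPORT_FULL_01; adjust the owner/name to your own job):

-- Retained jobs show up here with state NOT RUNNING
select owner_name, job_name, state from dba_datapump_jobs;

-- The master table is an ordinary table named after the job
select count(*) from sys.sys_export_full_01;

-- Drop it once you're done; as noted above, this causes no corruption
drop table sys.sys_export_full_01;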

 

I welcome your suggestions & feedback

 

Thanks,

CSM

Enabling tracing on a running DATAPUMP job

Hi guys,

In the last article I mentioned how to enable tracing on a Data Pump (export/import) job.

What if I have already triggered my Data Pump job and it's in the middle of doing its task? Shall I kill the entire job midway and restart it with tracing enabled? That's really a bad idea if we're dealing with GBs/TBs of data.

Thanks to the Data Pump team at Oracle; they made it much simpler for the users.

Please follow the procedure below to enable tracing on an existing Data Pump job.

% expdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_f.dmp \
LOGFILE=expdp_f.log FULL=Y
Export: Release 10.2.0.3.0 - Production on Thursday, 18 October, 2007 17:11:08
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** DIRECTORY=my_dir 
DUMPFILE=expdp_f.dmp LOGFILE=expdp_f.log FULL=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
--

Press Control-C to go to the Interactive Command mode, and temporarily stop the job with the STOP_JOB command:

Export> stop
Are you sure you wish to stop this job ([yes]/no): yes
-- use the system generated job-name SYS_EXPORT_FULL_01 to re-attach to the job
-- and specify the TRACE parameter with a tracing level:


% expdp system/manager ATTACH=sys_export_full_01 TRACE=480300
Export: Release 10.2.0.3.0 - Production on Thursday, 18 October, 2007 17:23:48
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
Job: SYS_EXPORT_FULL_01
  Owner: SYSTEM
  Operation: EXPORT
  Creator Privs: FALSE
  GUID: F4E6BF997DFA46378D543F998E82653E
  Start Time: Thursday, 18 October, 2007 17:23:49 
  Mode: FULL
  Instance: m10203wa
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/******** DIRECTORY=my_dir DUMPFILE=expdp_f.dmp 
LOGFILE=expdp_f.log FULL=y
  State: IDLING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /usr/DataPump/expdp_f.dmp
    bytes written: 4,096
Worker 1 Status:
  State: UNDEFINED
-- restart the job and change back from Interactive Command mode to Logging mode
-- with CONTINUE_CLIENT (note that tracing with level 480300 is now active):
Export> cont
Restarting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** DIRECTORY=my_dir 
DUMPFILE=expdp_f.dmp LOGFILE=expdp_s.log FULL=y
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
...


Use a database init.ora/spfile event to trace Data Pump (not in the case of a job that's already running).

Data Pump tracing can also be started by setting event 39089 in the initialization parameter file. This method should only be used to trace the Data Pump calls at an early stage, e.g. if details are needed about the DBMS_DATAPUMP.OPEN API call. Trace level 0x300 will trace all Data Pump client processes. Note that this event cannot be enabled dynamically through an ALTER SYSTEM command; only a restart of the database will enable the event.
Example when using init.ora initialization parameter file:

- add the following line to init.ora parameter file: 
EVENT="39089 trace name context forever,level 0x300" 
- Restart the database. 
- Start the Export Data Pump or Import Data Pump job.

Example when using spfile initialization parameter file:

-- when using spfile parameter file:
CONNECT / as sysdba 
SHOW PARAMETER event
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
event                                string
ALTER SYSTEM SET EVENT = '39089 trace name context forever, level 0x300'
SCOPE = spfile;
SHUTDOWN immediate
STARTUP
SHOW PARAMETER event
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
event                                string      39089 trace name context forev
                                                 er, level 0x300
- Start the Export Data Pump or Import Data Pump job.
--
to remove the event(s) again:
ALTER SYSTEM RESET EVENT SCOPE = spfile SID='*';
  
SHUTDOWN immediate
STARTUP

Source: Oracle Metalink.

Tracing DATAPUMP jobs

Hi All,

Nowadays Data Pump has become like the 'select * from ...' for DBAs, as we come across many situations where we need to move data here and there, including among heterogeneous environments. So, having good knowledge of the Data Pump utility and its features is quite commendable.

At times we all come across some weird problems using Data Pump. I'm one of the millions who have thought 'God!! How do I trace my impdp/expdp?', so that I can get information about the internal processing and debug issues quickly.

God answered my prayers, and I came to know about one undocumented parameter called 'TRACE' which helps us generate detailed, system-level traces for Data Pump operations.

Let’s see how it works

This parameter, TRACE, takes a 7-digit hexadecimal number as its value, and specifying the value follows some rules:

– The first 3 digits enable tracing for a specific Data Pump component, and the last 4 digits are usually 0300.

– Specifying more than 7 hexadecimal digits is not allowed. Doing so will result in UDE-00014: invalid value for parameter, 'trace'.

– Leading zeros can be omitted, so the value may be fewer than 7 digits, and the value is not case sensitive.

Before starting the tracing, make sure an appropriate value is set for the MAX_DUMP_FILE_SIZE initialization parameter, because this parameter decides how big a trace file can grow. The default value is UNLIMITED, and it's fine to leave it at that.

SQL> show parameter max_dump_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      UNLIMITED

If it's not UNLIMITED, set it with the following command:

alter system set max_dump_file_size=UNLIMITED;

The majority of errors that occur during a Data Pump job can be diagnosed by creating a trace file for the Master Control Process (MCP) and the Worker process(es) only.

In case of standard tracing, trace files are generated in BACKGROUND_DUMP_DEST:

– Master Process trace file format:  <SID>_dm<number>_<process_id>.trc

– Worker Process trace file format: <SID>_dw<number>_<process_id>.trc

In case of full tracing, the same two trace files as with standard tracing are generated in BACKGROUND_DUMP_DEST, and an additional trace file is generated in USER_DUMP_DEST:

– Shadow Process trace file format: <SID>_ora_<process_id>.trc

In databases where the ADR is configured, the files will be located under $ADR_HOME/trace:

– Data Pump Master Control Process trace files have format: <SID>_dm<number>_<process_id>.trc
– Data Pump Worker Process trace files have format: <SID>_dw<number>_<process_id>.trc
– Data Pump Shadow Process trace files have format: <SID>_ora_<process_id>.trc
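
By the way, on 11g you don't have to hunt for these directories manually; the database can tell you where the trace files go (a quick sketch):

-- ADR trace directory and the trace file of your own (shadow) session
select name, value
from   v$diag_info
where  name in ('Diag Trace', 'Default Trace File');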

The list of trace levels in Data Pump is shown below.

  Trace   DM   DW  ORA
  level  trc  trc  trc
  (hex) file file file                                          Purpose
------- ---- ---- ---- -----------------------------------------------
  10300    x    x    x  SHDW: To trace the Shadow process (API) (expdp/impdp)
  20300    x    x    x  KUPV: To trace Fixed table
  40300    x    x    x  'div' To trace Process services
  80300    x            KUPM: To trace Master Control Process (MCP)      (DM)
 100300    x    x       KUPF: To trace File Manager
 200300    x    x    x  KUPC: To trace Queue services
 400300         x       KUPW: To trace Worker process(es)                (DW)
 800300         x       KUPD: To trace Data Package
1000300         x       META: To trace Metadata Package
------- ---- ---- ---- ----------------------------------------- (sum)
1FF0300    x    x    x  'all' To trace all components          (full tracing)

You can use the individual values listed in the table above, or a combination of them. If you sum all the individual values you get 1FF0300, which is full tracing.

The Data Pump command for full tracing during export is:
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300

The Data Pump command for full tracing during import is:
impdp DUMPFILE=expdp.dmp LOGFILE=impdp.log TRACE=1FF0300

In most scenarios full tracing is not required, as you'll be bombarded with a wealth of information that takes significant effort/time to analyze. As trace 400300 traces the Worker process(es) and trace 80300 traces the Master Control Process (MCP), combining them we can use trace 480300 to trace both the Master Control Process (MCP) and the Worker process(es). Note that we add only the leading component digits and keep the trailing 0300 untouched, so we get 480300 instead of 480600.
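
If you want to double-check the combination arithmetic, here's a quick SQL sketch (the hex format masks are only for display; any hex calculator works just as well):

-- 400300 = component 040 + suffix 0300; 80300 = component 008 + suffix 0300.
-- Add the components in hex (040 + 008 = 048) and re-append the 0300 suffix.
select to_char(to_number('040', 'XXX') + to_number('008', 'XXX'), 'FMXXX')
       || '0300' as combined_trace
from   dual;   -- gives 480300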

Hope this helps. In the next articles of this series we'll see how to enable tracing on an already running Data Pump process (I'm not kidding) and how to analyze the trace files generated through the above method.