Monday, December 2, 2013

The Secrets of Oracle Row Chaining and Migration

Overview

If you notice poor performance in your Oracle database, row chaining and migration may be among the causes. Both are problems that can be prevented, or at least diagnosed and corrected, by properly designing and maintaining the database. The main considerations are:
  • What are Row Migration & Row Chaining?
  • How to identify Row Migration & Row Chaining?
  • How to avoid Row Migration & Row Chaining?
Migrated rows affect OLTP systems that use indexed reads to fetch single rows. In the worst case, you add an extra I/O to every such read, which is really bad. Truly chained rows affect index reads and full table scans.

Oracle Block

The operating system block size is the minimum unit of I/O (read/write) for the OS and is a property of the OS file system. When creating an Oracle database we have to choose the database block size as a multiple of the operating system block size. The minimum unit of I/O for the Oracle database is then this Oracle block, not the OS block. Once set, the database block size cannot be changed during the life of the database (although from Oracle 9i onwards you can create additional tablespaces with non-default block sizes). To decide on a suitable block size for the database, we take into consideration factors such as the size of the database and the expected number of concurrent transactions.
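Since 9i the block size can vary per tablespace, so a quick way to see what is actually in use is a query along these lines (a simple sketch, assuming you can read DBA_TABLESPACES):
SELECT tablespace_name, block_size
  FROM dba_tablespaces;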
The database block has the following structure (within the whole database structure):
  • Header - Contains general information about the block, i.e. the block address and the type of segment (table, index, etc.). It also contains the table directory and the row directory (the addresses of the actual rows that hold the data).
  • Free Space - Space reserved for future update/insert operations, generally governed by the values of the PCTFREE and PCTUSED parameters.
  • Data - The actual row data.
FREELIST, PCTFREE and PCTUSED
When creating or altering any table or index, Oracle uses the following storage parameters and structures for space control.
  • PCTFREE - The percentage of space reserved for future update of existing data.
     
  • PCTUSED - The percentage of minimum space used for insertion of new row data.
    This value determines when the block gets back into the FREELISTS structure.
     
  • FREELIST - Structure where Oracle maintains a list of all free available blocks.
For an insert, Oracle first searches the FREELIST for a free block and then inserts the data into that block. Whether a block appears in the FREELIST is decided by the PCTFREE value. Initially an empty block is listed in the FREELIST structure, and it continues to remain there until its free space shrinks to the PCTFREE value.
When the free space reaches the PCTFREE value the block is removed from the FREELIST, and it is re-listed in the FREELIST structure when the volume of data in the block drops below the PCTUSED value.
Oracle uses the FREELIST to improve performance: for every insert operation, Oracle only needs to search the FREELIST structure for free blocks instead of searching all blocks.
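As an illustration, these parameters can be set at table creation time. The table name and values below are hypothetical, and note that PCTUSED is ignored in tablespaces using Automatic Segment Space Management:
-- reserve 20% of each block for future updates; re-list a block on the
-- freelist once it falls below 40% used (values are illustrative only)
CREATE TABLE orders_demo (
  order_id   NUMBER PRIMARY KEY,
  order_note VARCHAR2(2000)
) PCTFREE 20 PCTUSED 40;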

Row Migration

Oracle migrates a row when an update to that row would cause it to no longer fit on its block (together with all of the other data that currently exists there). A migration means that the entire row moves and we just leave behind the «forwarding address»: the original block keeps only the rowid of the new block, and the entire row is moved there.

Full Table Scans are not affected by migrated rows
The forwarding addresses are ignored. We know that as we continue the full scan, we'll eventually get to that row so we can ignore the forwarding address and just process the row when we get there.  Hence, in a full scan migrated rows don't cause us to really do any extra work -- they are meaningless.
Index reads will cause additional I/Os on migrated rows
When we index-read into a table, a migrated row will cause an additional I/O. That is because the index tells us «goto file X, block Y, slot Z to find this row». But when we get there we find a message that says «well, really goto file A, block B, slot C to find this row». We have to do another I/O (logical or physical) to find the row.

Row Chaining

A row is chained when it is too large to fit into a single database block. For example, if you use a 4KB block size for your database and you need to insert an 8KB row into it, Oracle will use three blocks and store the row in pieces. Conditions that cause row chaining include: tables whose row size exceeds the block size; tables with LONG and LONG RAW columns, which are prone to having chained rows; and tables with more than 255 columns, since Oracle breaks such wide rows into pieces. So, instead of just having a forwarding address on one block and the data on another, we have data on two or more blocks.

Chained rows affect us differently. Here, it depends on the data we need. If we had a row with two columns that was spread over two blocks, the query:
SELECT column1 FROM table
where column1 is in Block 1, would not cause any «table fetch continued row». It would not actually have to get column2, it would not follow the chained row all of the way out. On the other hand, if we ask for:
SELECT column2 FROM table
and column2 is in Block 2 due to row chaining, then you would in fact see a «table fetch continued row»
Example
The following example was published by Tom Kyte; it shows both row migration and chaining. We are using a 4KB block size:
SELECT name,value
  FROM v$parameter
 WHERE name = 'db_block_size';

NAME                 VALUE
--------------      ------
db_block_size         4096
Create the following table with CHAR fixed columns:
CREATE TABLE row_mig_chain_demo (
  x int
PRIMARY KEY,
  a CHAR(1000),
  b CHAR(1000),
  c CHAR(1000),
  d CHAR(1000),
  e CHAR(1000)
);
That is our table. The CHAR(1000) columns will let us easily cause rows to migrate or chain. We use five columns a, b, c, d, e so that the total row size can grow to about 5KB, bigger than one block, ensuring we can truly chain a row.
INSERT INTO row_mig_chain_demo (x) VALUES (1);
INSERT INTO row_mig_chain_demo (x) VALUES (2);
INSERT INTO row_mig_chain_demo (x) VALUES (3);
COMMIT;
We are not interested in seeing a, b, c, d, e - just in fetching them. They are really wide, so we'll suppress their display.
column a noprint
column b noprint
column c noprint
column d noprint
column e noprint
SELECT * FROM row_mig_chain_demo;
         X
----------
         1
         2
         3
Check for chained rows:
SELECT a.name, b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) = 'table fetch continued row';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 0
Now that is to be expected: the rows came out in the order we put them in (Oracle full scanned this query, processing the data as it found it). Also as expected, «table fetch continued row» is zero. This data is so small right now that we know all three rows fit on a single block. No chaining.
Demonstration of the Row Migration
Now, lets do some updates in a specific way. We want to demonstrate the row migration issue and how it affects the full scan:
UPDATE row_mig_chain_demo SET a = 'z1', b = 'z2', c = 'z3' WHERE x = 3;
COMMIT;
UPDATE row_mig_chain_demo SET a = 'y1', b = 'y2', c = 'y3' WHERE x = 2;
COMMIT;
UPDATE row_mig_chain_demo SET a = 'w1', b = 'w2', c = 'w3' WHERE x = 1;
COMMIT;
Note the order of updates, we did last row first, first row last.
SELECT * FROM row_mig_chain_demo;
         X
----------
         3
         2
         1
SELECT a.name, b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) = 'table fetch continued row';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 0
Interesting, the rows came out «backwards» now. That is because we updated row 3 first. It did not have to migrate, but it filled up block 1. We then updated row 2. It migrated to block 2 with row 3 hogging all of the space, it had to. We then updated row 1, it migrated to block 3. We migrated rows 2 and 1, leaving 3 where it started.
So, when Oracle full scanned the table, it found row 3 on block 1 first, row 2 on block 2 second and row 1 on block 3 third. It ignored the head rowid piece on block 1 for rows 1 and 2 and just found the rows as it scanned the table. That is why the table fetch continued row is still zero. No chaining.

So, lets see a migrated row affecting the «table fetch continued row»:
SELECT * FROM row_mig_chain_demo WHERE x = 3;

         X
----------
         3
SELECT a.name, b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) = 'table fetch continued row';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 0
This was an index range scan / table access by rowid using the primary key.  We didn't increment the «table fetch continued row» yet since row 3 isn't migrated.
SELECT * FROM row_mig_chain_demo WHERE x = 1;

 
        X
----------
         1
SELECT a.name, b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) = 'table fetch continued row';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 1
Row 1 is migrated, using the primary key index, we forced a «table fetch continued row».
Demonstration of the Row Chaining
UPDATE row_mig_chain_demo SET d = 'z4', e = 'z5' WHERE x = 3;
COMMIT;
Row 3 no longer fits on block 1. With d and e set, the rowsize is about 5k, it is truly chained.
SELECT x,a FROM row_mig_chain_demo WHERE x = 3;
         X
----------
         3
SELECT a.name, b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) = 'table fetch continued row';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 1
We fetched columns «x» and «a» of row 3, which are located in the «head» of the row; this does not cause a «table fetch continued row». No extra I/O to get them.

SELECT x,d,e FROM row_mig_chain_demo WHERE x = 3;
SELECT a.name, b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) = 'table fetch continued row';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 2
Now we fetch from the «tail» of the row via the primary key index. This increments the «table fetch continued row» by one to put the row back together from its head to its tail to get that data.
Now let's see a full table scan - it is affected as well:
SELECT * FROM row_mig_chain_demo;
         X
----------
         3
         2
         1
SELECT a.name, b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) = 'table fetch continued row';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 3
The «table fetch continued row» was incremented here because of Row 3, we had to assemble it to get the trailing columns.  Rows 1 and 2, even though they are migrated don't increment the «table fetch continued row» since we full scanned.
SELECT x,a FROM row_mig_chain_demo;
         X
----------
         3
         2
         1
SELECT a.name, b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) = 'table fetch continued row';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 3
No «table fetch continued row» since we didn't have to assemble Row 3, we just needed the first two columns.
SELECT x,e FROM row_mig_chain_demo;
         X
----------
         3
         2
         1
SELECT a.name, b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) = 'table fetch continued row';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 4
But by fetching column e, we incremented the «table fetch continued row». Note that when the columns you select are at the front of the row, you cannot tell from this statistic whether the rows are merely migrated or truly chained.
So, how can you decide whether you have migrated or truly chained rows?
Count the last column in that table. That forces Oracle to construct the entire row.
SELECT count(e) FROM row_mig_chain_demo;
  COUNT(E)
----------
         1
SELECT a.name, b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) = 'table fetch continued row';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 5
Analyze the table to verify its chain count:
ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;

SELECT chain_cnt
  FROM user_tables
 WHERE table_name = 'ROW_MIG_CHAIN_DEMO';
 CHAIN_CNT
----------
         3
Three rows that are chained. Apparently, 2 of them are migrated (Rows 1 and 2) and one is truly chained (Row 3).

Total Number of «table fetch continued row» since instance startup?

The V$SYSSTAT view tells you how many times, since the instance was started, a «table fetch continued row» occurred over all tables.
sqlplus system/<password>

SELECT 'Chained or Migrated Rows = '||value
  FROM v$sysstat
 WHERE name = 'table fetch continued row';
Chained or Migrated Rows = 31637
You could have one table with one chained row that was fetched 31,637 times. You could have 31,637 tables, each with a chained row, each of which was fetched once. Or any combination in between.
Also, 31,637 - maybe that's good, maybe that's bad. It is a function of:
  • how long the database has been up
  • how many rows this is as a percentage of total fetched rows.
    For example, if only 0.001% of your fetches are «table fetch continued row», who cares!
Therefore, always compare the continued-row fetches against the total fetched rows.
SELECT name,value FROM v$sysstat WHERE name like '%table%';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table scans (short tables)                                           124338
table scans (long tables)                                              1485
table scans (rowid ranges)                                                0
table scans (cache partitions)                                           10
table scans (direct read)                                                 0
table scan rows gotten                                             20164484
table scan blocks gotten                                            1658293
table fetch by rowid                                                1883112
table fetch continued row                                             31637
table lookup prefetch client count                                        0
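To put the counter in perspective, you can compute continued-row fetches as a percentage of all rows fetched. A rough sketch of that comparison, joining V$SYSSTAT to itself on the statistic names shown above:
SELECT round(100 * cont.value / NULLIF(by_rowid.value + scan_rows.value, 0), 4)
       AS pct_continued_row_fetches
  FROM v$sysstat cont, v$sysstat by_rowid, v$sysstat scan_rows
 WHERE cont.name      = 'table fetch continued row'
   AND by_rowid.name  = 'table fetch by rowid'
   AND scan_rows.name = 'table scan rows gotten';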

How many Rows in a Table are chained?

The CHAIN_CNT column of USER_TABLES tells you, immediately after an ANALYZE (it is NULL otherwise), how many rows in the table are chained.
ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;
SELECT chain_cnt,
       round(chain_cnt/num_rows*100,2) pct_chained,
       avg_row_len, pct_free , pct_used
  FROM user_tables
WHERE table_name = 'ROW_MIG_CHAIN_DEMO';

 CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
---------- ----------- ----------- ---------- ----------
         3         100        3691         10         40
PCT_CHAINED shows 100% which means all rows are chained or migrated.

List Chained Rows

You can look at the chained and migrated rows of a table using the ANALYZE statement with the LIST CHAINED ROWS clause. The results of this statement are stored in a specified table created explicitly to accept the information returned by the LIST CHAINED ROWS clause. These results are useful in determining whether you have enough room for updates to rows.
Creating a CHAINED_ROWS Table
To create the table to accept data returned by an ANALYZE ... LIST CHAINED ROWS statement, execute the UTLCHAIN.SQL or UTLCHN1.SQL script in $ORACLE_HOME/rdbms/admin. These scripts are provided by the database. They create a table named CHAINED_ROWS in the schema of the user submitting the script.
create table CHAINED_ROWS (
  owner_name         varchar2(30),
  table_name         varchar2(30),
  cluster_name       varchar2(30),
  partition_name     varchar2(30),
  subpartition_name  varchar2(30),
  head_rowid         rowid,
  analyze_timestamp  date
);
After a CHAINED_ROWS table is created, you specify it in the INTO clause of the ANALYZE statement.
ANALYZE TABLE row_mig_chain_demo LIST CHAINED ROWS;
SELECT owner_name,
       table_name,
       head_rowid
 FROM chained_rows;
OWNER_NAME                     TABLE_NAME                     HEAD_ROWID
------------------------------ ------------------------------ ------------------
SCOTT                          ROW_MIG_CHAIN_DEMO             AAAPVIAAFAAAAkiAAA
SCOTT                          ROW_MIG_CHAIN_DEMO             AAAPVIAAFAAAAkiAAB

How to avoid Chained and Migrated Rows?

Increasing PCTFREE can help to avoid migrated rows. If you leave more free space available in the block, then the row has room to grow. You can also reorganize or re-create tables and indexes that have high deletion rates. If tables frequently have rows deleted, then data blocks can have partially free space in them. If rows are inserted and later expanded, then the inserted rows might land in blocks with deleted rows but still not have enough room to expand. Reorganizing the table ensures that the main free space consists of totally empty blocks.
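For example, a quick way to raise PCTFREE on the demo table (a sketch; this only affects blocks formatted from now on, so rows that are already migrated still need a reorganization as described next):
ALTER TABLE row_mig_chain_demo PCTFREE 30;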
The ALTER TABLE ... MOVE statement enables you to relocate data of a nonpartitioned table or of a partition of a partitioned table into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE ... MOVE statement with the COMPRESS keyword to store the new segment using table compression.
  1. ALTER TABLE MOVE
    First count the number of Rows per Block before the ALTER TABLE MOVE

    SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
      FROM row_mig_chain_demo
    GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;

     Block-Nr        Rows
    ---------- ----------
          2066          3
    Now, de-chain the table, the ALTER TABLE MOVE rebuilds the row_mig_chain_demo table in a new segment, specifying new storage parameters:

    ALTER TABLE row_mig_chain_demo MOVE
       PCTFREE 20
       PCTUSED 40
       STORAGE (INITIAL 20K
                NEXT 40K
                MINEXTENTS 2
                MAXEXTENTS 20
                PCTINCREASE 0);

    Table altered.
    Again count the number of Rows per Block after the ALTER TABLE MOVE

    SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
      FROM row_mig_chain_demo
    GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;
     Block-Nr        Rows
    ---------- ----------
          2322          1
          2324          1
          2325          1

     
  2. Rebuild the Indexes for the Table
    Moving a table changes the rowids of the rows in the table. This causes indexes on the table to be marked UNUSABLE, and DML accessing the table using these indexes will receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt. Likewise, any statistics for the table become invalid and new statistics should be collected after moving the table.

    ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;

    ERROR at line 1:
    ORA-01502: index 'SCOTT.SYS_C003228' or partition of such index is in unusable
    state
    This is the primary key of the table which must be rebuilt.
    ALTER INDEX SYS_C003228 REBUILD;

    Index altered.

    ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;

    Table analyzed.
    SELECT chain_cnt,
           round(chain_cnt/num_rows*100,2) pct_chained,
           avg_row_len, pct_free , pct_used
      FROM user_tables
     WHERE table_name = 'ROW_MIG_CHAIN_DEMO';
     CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
    ---------- ----------- ----------- ---------- ----------
             1       33.33        3687         20         40
    If the table includes LOB column(s), this statement can be used to move the table along with LOB data and LOB index segments (associated with this table) which the user explicitly specifies. If not specified, the default is to not move the LOB data and LOB index segments.
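    To find every index on your tables that was marked UNUSABLE by such a move, a simple check against USER_INDEXES (a sketch) can be run before rebuilding:

    SELECT index_name, status
      FROM user_indexes
     WHERE status = 'UNUSABLE';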

Detect all Tables with Chained and Migrated Rows

Using the CHAINED_ROWS table, you can find out the tables with chained or migrated rows.
  1. Create the CHAINED_ROWS table

    cd $ORACLE_HOME/rdbms/admin
    sqlplus scott/tiger
    @utlchain.sql
     
  2. Analyze all or only your tables

    SELECT 'ANALYZE TABLE '||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;'
      FROM user_tables
    /


    ANALYZE TABLE ROW_MIG_CHAIN_DEMO LIST CHAINED ROWS INTO CHAINED_ROWS;
    ANALYZE TABLE DEPT LIST CHAINED ROWS INTO CHAINED_ROWS;
    ANALYZE TABLE EMP LIST CHAINED ROWS INTO CHAINED_ROWS;
    ANALYZE TABLE BONUS LIST CHAINED ROWS INTO CHAINED_ROWS;
    ANALYZE TABLE SALGRADE LIST CHAINED ROWS INTO CHAINED_ROWS;
    ANALYZE TABLE DUMMY LIST CHAINED ROWS INTO CHAINED_ROWS;

    Table analyzed.
     
  3. Show the RowIDs for all chained rows

    This will allow you to quickly see how much of a problem chaining is in each table. If chaining is prevalent in a table, then that table should be rebuilt with a higher value for PCTFREE.

    SELECT owner_name,
           table_name,
           count(head_rowid) row_count
      FROM chained_rows
    GROUP BY owner_name,table_name
    /


    OWNER_NAME                     TABLE_NAME                      ROW_COUNT
    ------------------------------ ------------------------------ ----------
    SCOTT                          ROW_MIG_CHAIN_DEMO                      1

Conclusion

Migrated rows affect OLTP systems that use indexed reads to fetch single rows. In the worst case, you add an extra I/O to every such read, which is really bad. Truly chained rows affect index reads and full table scans.
  • Row migration is typically caused by UPDATE operations.
  • Row chaining is typically caused by INSERT operations.
  • SQL statements that create or query chained/migrated rows degrade performance due to the extra I/O work.
  • To diagnose chained/migrated rows, use the ANALYZE command and query the V$SYSSTAT view.
  • To remove chained/migrated rows, use a higher PCTFREE value, e.g. via ALTER TABLE MOVE.

Saturday, November 23, 2013

PGA, SGA and Memory Advisors

Memory Advisor
One of the questions a DBA faces is whether the database memory is set correctly or needs to be increased.
Oracle 11g has a neat solution.
Querying the view V$MEMORY_TARGET_ADVICE tells you whether increasing the memory size would provide any advantage and, if so, by how much you should increase it.
Here is an example:
[Screenshot: V$MEMORY_TARGET_ADVICE output, memory sizes from 2048 MB upwards with ESTD_DB_TIME_FACTOR values]
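The query behind that screenshot would be along these lines (a sketch using the documented columns of V$MEMORY_TARGET_ADVICE):
SELECT memory_size, memory_size_factor, estd_db_time, estd_db_time_factor
  FROM v$memory_target_advice
 ORDER BY memory_size_factor;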

Review the results in the screenshot. The database currently has a memory target of 2048 MB (the first line shows this); it corresponds to the initialization parameter MEMORY_TARGET.
Line 2 shows that increasing the memory size to 2560 MB (25% more) adds no further benefit, as seen in the ESTD_DB_TIME_FACTOR column value of 1. However, line 3 shows that with a memory size of 3072 MB (50% more) the estimated time factor is 0.9602, meaning work that takes 1 second now would take 0.9602 seconds - an improvement.
Increasing memory by 75% brings no additional benefit beyond that, and doubling it brings only a very minor one.
In this scenario, increasing memory by 50% benefits the database.


PGA Advisor
SQL> select pga_target_for_estimate, pga_target_factor, estd_extra_bytes_rw
       from v$pga_target_advice;

In the output, look at the 5th row: the value 1358954496 is the current orcldb.__pga_aggregate_target value (set in initorcldb.ora).
Subtract estd_extra_bytes_rw from pga_target_for_estimate for that row: 1358954496 - 954314752 = 404639744 bytes, i.e. about 386 MB.
The conclusion is that adding an extra 386 MB would improve performance (less I/O would be needed).
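A slightly friendlier version of the query, converting to MB and adding the estimated cache hit percentage (a sketch using documented V$PGA_TARGET_ADVICE columns):
SELECT round(pga_target_for_estimate / 1024 / 1024) AS target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_extra_bytes_rw
  FROM v$pga_target_advice
 ORDER BY pga_target_factor;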
SGA Advisor
SQL> select sga_size, sga_size_factor, estd_db_time
       from v$sga_target_advice;

DB_TIME is an overall figure for the amount of time spent within the database executing SQL; minimizing DB_TIME is the overall objective of all tuning.
In the output, look at the 3rd row, value 1952 MB: this is the value of the parameter orcldb.__sga_target set in initorcldb.ora.
It can be seen that if the SGA is raised from its current value of 1952 MB to 3904 MB, then DB_TIME would be reduced.
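As with the other advisors, ordering by the size factor and adding the estimated physical reads makes the trend easier to read (a sketch using documented V$SGA_TARGET_ADVICE columns):
SELECT sga_size, sga_size_factor, estd_db_time, estd_physical_reads
  FROM v$sga_target_advice
 ORDER BY sga_size_factor;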

Oracle Database Switchover and Failover steps

Switchover Steps
1) There are a few checks to perform before a switchover in a Data Guard environment:
a) If you are using OEM, enable a blackout for the primary database in Enterprise Manager.
b) Disable the crontab jobs for the primary database.
c) Check for running backups/exports and cancel them.
d) Identify any archive log gap and, if present, resolve it (see the queries below).
e) On both the primary and standby databases, verify which log was last applied.
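For steps d) and e), queries along these lines can help (a sketch; V$ARCHIVE_GAP is queried on the standby):

-- on the standby: any gap in received archive logs?
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

-- on the standby: last log sequence applied
SELECT max(sequence#) FROM v$archived_log WHERE applied = 'YES';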


2) Verify that it is possible to perform a switchover operation.
On the primary database, check the SWITCHOVER_STATUS column of V$DATABASE to see whether a switchover is possible:
SQL> select switchover_status from v$database;
SWITCHOVER_STATUS
------------------
TO STANDBY
-In order to perform a switchover, all sessions to the database need to be disconnected. This process has been automated with the “with session shutdown” clause that has been added to the alter database commit to switchover command. If SWITCHOVER_STATUS shows SESSIONS ACTIVE, you should either disconnect all sessions manually or append the “with session shutdown” clause when performing step 3. For example:
SQL> alter database commit to switchover to standby with session shutdown;
(Note that the clause "with session shutdown" also works with the switchover to primary command,
e.g. alter database commit to switchover to primary with session shutdown;)
The possible values of SWITCHOVER_STATUS are:
NOT ALLOWED - Either this is a standby database and the primary database has not been switched first, or this is a primary database and there are no standby databases.
SESSIONS ACTIVE - There are active SQL sessions attached to the primary or standby database that need to be disconnected before the switchover operation is permitted.
SWITCHOVER PENDING - This is a standby database and the primary database switchover request has been received but not processed.
SWITCHOVER LATENT - The switchover was in pending mode, but did not complete and went back to the primary database.
TO PRIMARY - This is a standby database, with no active sessions, that is allowed to switch over to a primary database.
TO STANDBY - This is a primary database, with no active sessions, that is allowed to switch over to a standby database.
RECOVERY NEEDED - This is a standby database that has not received the switchover request.
During normal operations it is acceptable to see SESSIONS ACTIVE or TO STANDBY on the primary database, and NOT ALLOWED or SESSIONS ACTIVE on the standby database.
3. Convert the primary database to the new standby:
SQL> alter database commit to switchover to physical standby;
Database altered.
4. Shutdown the former primary and mount as a standby database:
SQL> shutdown immediate
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.
Total System Global Area  818401280 bytes
Fixed Size                  2217792 bytes
Variable Size             528484544 bytes
Database Buffers          285212672 bytes
Redo Buffers                2486272 bytes

SQL> alter database mount standby database;
Database altered.
5. Defer the remote archive destination on the old primary:
SQL> alter system set log_archive_dest_state_3=defer;
6. Verify that the physical standby can be converted to the new primary:
SQL> select switchover_status from v$database;
SWITCHOVER_STATUS
------------------
SWITCHOVER PENDING
(Note that if the status returns SESSIONS ACTIVE, then you should append the with session shutdown clause to the command in step 7.)
7. Convert the physical standby to the new primary:
SQL> alter database commit to switchover to primary;
Database altered.
8. Shutdown and startup the new primary:
SQL> shutdown immediate
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  818401280 bytes
Fixed Size                  2217792 bytes
Variable Size             482347200 bytes
Database Buffers          331350016 bytes
Redo Buffers                2486272 bytes
Database mounted.
Database opened.

9. Enable remote archiving on the new primary to the new standby:
SQL> alter system set log_archive_dest_state_2=enable;
10. Start managed recover on the new standby database:
SQL> recover managed standby database disconnect;
Media recovery complete.
11) There are a few checks to perform after the switchover in a Data Guard environment:
a) If you are using OEM, check the status of the (new) primary database in Enterprise Manager.
b) Re-enable the crontab jobs for the primary database.
c) Restart backups/exports as per schedule.
d) Identify any archive log gap and, if present, resolve it.
e) On both the primary and standby databases, verify which log was last applied.

FAILOVER Steps:
1. Initiate the failover on the standby database STAN:
SQL>connect /@STAN as sysdba
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
2. Immediately after issuing the commands in step 1, shut down and restart the standby instance STAN:
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP;
STAN has now transitioned to the primary database role. Your physical standby database has become the primary. To verify this change, query the DATABASE_ROLE column of V$DATABASE again; it will now return “PRIMARY”.
SQL> Select Database_role from v$Database;

Friday, November 22, 2013

Enable Oracle database to send emails via SMTP server

Often, in a complex enterprise infrastructure, Oracle DBAs face issues enabling SMTP mail traffic from their databases through corporate email gateway servers. Imagine you have to give your database applications the ability to send emails via the Simple Mail Transfer Protocol (SMTP) from an Oracle database. Below is a detailed action plan to accomplish this. My test example includes an Oracle Database 11gR2 running on Red Hat Linux 6 and a corporate Microsoft Exchange server.

Oracle packages SYS.UTL_SMTP and SYS.UTL_TCP

Check that the Oracle packages SYS.UTL_SMTP and SYS.UTL_TCP are available in the Oracle database and that you have EXECUTE grants on them.
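A quick sanity check along these lines can confirm the grants (a sketch against ALL_TAB_PRIVS; EXECUTE on these packages is commonly granted via PUBLIC, which this query also shows):
SELECT grantee, table_name, privilege
  FROM all_tab_privs
 WHERE table_name IN ('UTL_SMTP', 'UTL_TCP');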

Check SMTP access of database Linux server on mail server

- Check whether you are able to contact the email gateway server via SMTP from the database Linux box:
$ telnet smtp_server 25
# If you see blank screen or “telnet: Unable to connect to remote host: Connection refused”,
# your DB server is not recognized by the SMTP server. Otherwise type the following commands:



helo
mail from: my_email@my_company.com # you should see "Sender OK'
rcpt to: my_email@my_company.com   # you should see "Recipient OK"
data                               # Start mail input
test email via SMTP and orcl DB
[Enter]
.                                  # mail should be sent
[Enter]
quit

Apply for mail SMTP access

Contact your mail (exchange) admins and apply for SMTP access on your corporate smtp mail gateway server. Below is an example:
Please allow SMTP traffic at smtpgate.mycompany.com for the following new server:
Hostname: ..com
IP: 99.999.99.99
Function: Oracle production DB …
Outgoing email: my_app@my_company.com
Amount emails per month: around 50-100
After that, your mail admins will add the sender IP address to the corporate SMTP mail server, allowing the sender Linux server to access it. This enables the SMTP server to send emails on behalf of the sender (your Oracle database Linux box).
Note that the mail server can check if the sender email exists. If not, you can send an email from a dummy email account.

Sending email from Oracle database via SMTP

   
create or replace procedure testmail
  (fromm varchar2, too varchar2, sub varchar2, body varchar2, port number)
is
  objConnection utl_smtp.connection;
  vrData        varchar2(32000);
BEGIN
  objConnection := UTL_smtp.open_connection('smtp_server', port);
  UTL_smtp.helo(objConnection, 'smtp_server');
  UTL_smtp.mail(objConnection, fromm);
  UTL_smtp.rcpt(objConnection, too);
  UTL_smtp.open_data(objConnection);
  /* Send the header information */
  UTL_smtp.write_data(objConnection, 'From: ' || fromm || UTL_tcp.CRLF);
  UTL_smtp.write_data(objConnection, 'To: ' || too || UTL_tcp.CRLF);
  UTL_smtp.write_data(objConnection, 'Subject: ' || sub || UTL_tcp.CRLF);
  UTL_smtp.write_data(objConnection, 'MIME-Version: ' || '1.0' || UTL_tcp.CRLF);
  UTL_smtp.write_data(objConnection, 'Content-Type: ' || 'text/html;');
  UTL_smtp.write_data(objConnection, 'Content-Transfer-Encoding: ' || '"8Bit"' || UTL_tcp.CRLF);
  UTL_smtp.write_data(objConnection, UTL_tcp.CRLF);
  /* Wrap the body in a minimal HTML document (the Content-Type above is
     text/html); the tags here are an assumed reconstruction, as they were
     swallowed by the blog formatting */
  UTL_smtp.write_data(objConnection, UTL_tcp.CRLF || '<html>');
  UTL_smtp.write_data(objConnection, UTL_tcp.CRLF || '<body>');
  UTL_smtp.write_data(objConnection, UTL_tcp.CRLF || '<span style="color: red; font-family: Courier New;">' || body || '</span>');
  UTL_smtp.write_data(objConnection, UTL_tcp.CRLF || '</body>');
  UTL_smtp.write_data(objConnection, UTL_tcp.CRLF || '</html>');
  UTL_smtp.close_data(objConnection);
  UTL_smtp.quit(objConnection);
EXCEPTION
  WHEN UTL_smtp.transient_error OR UTL_smtp.permanent_error THEN
    UTL_smtp.quit(objConnection);
    dbms_output.put_line(sqlerrm);
  WHEN OTHERS THEN
    UTL_smtp.quit(objConnection);
    dbms_output.put_line(sqlerrm);
END testmail;
/
-- this should send an email from a dummy account to your email address
execute testmail ('dummy@my_company.com','my_email@my_company.com','test Subject','Test Body Text',25);
- If you get the following ORA- errors, go back to steps 2 and 3:
ORA-29278: SMTP transient error: 421 Service not available
ORA-06512: at "SYS.UTL_SMTP", line 21
ORA-06512: at "SYS.UTL_SMTP", line 97
ORA-06512: at "SYS.UTL_SMTP", line 139
The "ORA-29278: SMTP transient error: 421 Service not available" error indicates that the problem is not with the UTL_SMTP package, but that your database server's network configuration does not allow it to contact an external SMTP server.

- If you get the following Oracle error, go to the next step and enable an ACL in the Oracle 11g database:
ERROR at line 1:
ORA-29278: SMTP transient error: 421 Service not available
ORA-06512: at “SYS.UTL_SMTP”, line 54
ORA-06512: at “SYS.UTL_SMTP”, line 138
ORA-06512: at “SYS.UTL_SMTP”, line 699
ORA-06512: at “me.TESTMAIL”, line 35
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at line 1

Enable access control list (ACL) for your network hosts

The network utility family of PL/SQL packages, such as UTL_TCP, UTL_INADDR, UTL_HTTP, UTL_SMTP, and UTL_MAIL, allows Oracle users to make network callouts from the database using raw TCP or higher-level protocols built on top of it. As of Oracle 11g, the new package DBMS_NETWORK_ACL_ADMIN allows fine-grained control over network host access using access control lists (ACL) implemented by XML DB. Below I quickly show the two major steps required to enable an ACL for a user. Read the Oracle documentation for more details.

Create an ACL with one database user in Oracle database

        
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl         => 'smtp-gate-permissions.xml',
    description => 'Permissions for smtp gate',
    principal   => 'SCOTT',
    is_grant    => TRUE,
    privilege   => 'connect');
  COMMIT;
END;
/

Assign an ACL to one or more network hosts

        
BEGIN
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl        => 'smtp-gate-permissions.xml',
    host       => '<smtp_server>',
    lower_port => 25,
    upper_port => null);
  COMMIT;
END;
/
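To verify what was created, the ACL data dictionary views can be queried (a sketch, assuming DBA privileges):

SELECT host, lower_port, upper_port, acl
  FROM dba_network_acls;

SELECT acl, principal, privilege
  FROM dba_network_acl_privileges;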
That was my action plan for giving your applications the ability to send email via the SMTP protocol from an Oracle database Linux server, using a real example with the UTL_SMTP package, including ORA- error handling and ACL creation on Oracle 11g.

Automating ADDM report Generation

Sometimes we may have a requirement to generate an ADDM report for peak hours every day. Below are the steps by which we can automate ADDM report generation.

1) Create a task in DB.

begin
  DBMS_ADVISOR.create_task (
    advisor_name => 'ADDM',
    task_name    => 'ADDM1',  -- must match the task name used in the steps below
    task_desc    => 'Addm report generation schedule');
end;
/

2) Create this procedure in the database. It runs the ADDM task for the snapshots between 10:00 AM and 2:00 PM (14:00).

create or replace procedure auto_addm as
  start_snap number;
  end_snap   number;
BEGIN
  DBMS_ADVISOR.RESET_TASK(task_name => 'ADDM1');
  -- pick the latest snapshots whose interval begins at 10:00 and at 14:00
  select (MAX(SNAP_ID) - 1) into start_snap
    from dba_hist_snapshot
   where to_char(BEGIN_INTERVAL_TIME, 'HH24') = '10';
  select (MAX(SNAP_ID) - 1) into end_snap
    from dba_hist_snapshot
   where to_char(BEGIN_INTERVAL_TIME, 'HH24') = '14';
  DBMS_ADVISOR.set_task_parameter (
    task_name => 'ADDM1',
    parameter => 'START_SNAPSHOT',
    value     => start_snap);
  DBMS_ADVISOR.set_task_parameter (
    task_name => 'ADDM1',
    parameter => 'END_SNAPSHOT',
    value     => end_snap);
  -- Execute the task.
  DBMS_ADVISOR.execute_task(task_name => 'ADDM1');
END auto_addm;
/

3) Schedule the below script in crontab.

sqlplus -s '/as sysdba' << EOF
SET LONG 100000
SET PAGESIZE 50000
exec auto_addm;
spool addmrpt.txt
SELECT DBMS_ADVISOR.get_task_report('ADDM1') AS report FROM dual;
spool off;

exit;

EOF
cat addmrpt.txt |mailx -s "ADDMREPORT For TODAY" "MAIL-ID"

Automating AWR report Generation

=========AWR REPORT AUTOMATION===========


CREATE OR REPLACE PROCEDURE CreateAwrReports
  (begin_snap number, end_snap number, directory varchar2) as
  v_Instance_number v$instance.instance_number%TYPE;
  v_Instance_name   v$instance.instance_name%TYPE;
  v_dbid            v$database.dbid%TYPE;
  v_file            UTL_FILE.file_type;
BEGIN
  SELECT instance_number, instance_name
    INTO v_Instance_number, v_Instance_name
    FROM v$instance;
  SELECT dbid INTO v_dbid FROM v$database;
  EXECUTE IMMEDIATE('CREATE OR REPLACE DIRECTORY AWR_DIR AS ''' || directory || '''');
  BEGIN
    v_file := UTL_FILE.fopen('AWR_DIR',
      'awr_' || v_Instance_name || '_' || v_Instance_number || '_' ||
      begin_snap || '_' || (end_snap - 1) || '.txt', 'w', 32767);
    -- write each line of the text AWR report to the file
    FOR c_AWRReport IN (
      SELECT output
        FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
               v_dbid, v_Instance_number, begin_snap, end_snap - 1))
    ) LOOP
      UTL_FILE.PUT_LINE(v_file, c_AWRReport.output);
    END LOOP;
    UTL_FILE.fclose(v_file);
  END;
  EXECUTE IMMEDIATE('DROP DIRECTORY AWR_DIR');
END;
/


create or replace procedure generate_awrreport as
  begin_snap number;
  end_snap   number;
begin
  select (MAX(SNAP_ID) - 1) into begin_snap
    from dba_hist_snapshot
   where to_char(BEGIN_INTERVAL_TIME, 'HH24') = '10';
  select (MAX(SNAP_ID) - 1) into end_snap
    from dba_hist_snapshot
   where to_char(BEGIN_INTERVAL_TIME, 'HH24') = '14';
  CreateAwrReports(begin_snap, end_snap, '/export/home/oracle/');
end;
/

How to schedule AWR report(awrrpt.sql) via crontab?

Hi All,


I came across a requirement to schedule an AWR snapshot report between 9 AM and 6 PM on certain days.


The aim was to automatically identify the snap ids for 9 AM and 6 PM, run the report in the background, and email the result.


Following is a sample script. In my test instance the select statement returns only one snap id for the begin snap, so I am good :)
If you get multiple snap ids from the given select statements, use min(snap_id) for the begin snap and max(snap_id) for the end snap respectively. I have commented the script to aid understanding.


Save the following as any shell script and schedule in cron.


myAWR.sh


## START


# Following mechanism will record begin and end snap_id by given time 9 and 18 hour of the day
# It also generate define statements which are required to run awrrpti.sql from background


sqlplus -s '/as sysdba' << EOF
set echo off
set head off
set feed off
spool /tmp/tmpScr.sql
select 'Define begin_snap= '|| snap_id from dba_hist_snapshot where 
EXTRACT(HOUR FROM BEGIN_INTERVAL_TIME)=9 and
EXTRACT(year FROM BEGIN_INTERVAL_TIME) = TO_CHAR(sysdate,'YYYY') and 
EXTRACT(month FROM BEGIN_INTERVAL_TIME) = TO_CHAR(sysdate,'MM') and
EXTRACT(day FROM BEGIN_INTERVAL_TIME) = TO_CHAR(sysdate,'DD');


SELECT 'Define end_snap= '|| snap_id from dba_hist_snapshot where 
EXTRACT(HOUR FROM BEGIN_INTERVAL_TIME)=18 and
EXTRACT(year FROM BEGIN_INTERVAL_TIME) = TO_CHAR(sysdate,'YYYY') and 
EXTRACT(month FROM BEGIN_INTERVAL_TIME) = TO_CHAR(sysdate,'MM') and
EXTRACT(day FROM BEGIN_INTERVAL_TIME) = TO_CHAR(sysdate,'DD');


select 'define  report_type  = ' || '''text''' from dual;
select 'define  report_name  = /tmp/awrrep.txt' from dual;
select 'define  inst_name    = ' || INSTANCE_NAME from v\$instance;
select 'define  db_name    = ' || name from v\$database;
select 'define  dbid = ' || dbid from v\$database;
select 'define inst_num = ' || INSTANCE_NUMBER from v\$instance;
select 'define  num_days     = 3' from dual;
select '@$ORACLE_HOME/rdbms/admin/awrrpti.sql' from dual;
exit
EOF


# Following part runs the script composed earlier with snap ids and define statements
sqlplus -s '/as sysdba' << EOF
@/tmp/tmpScr.sql
exit
EOF


## END


Crontab Example (Linux)

## Run the AWR report every day at 7:00 PM
00 19 * * * /njadav/myAWR.sh | mailx -s "AWR Today" myemail@mydomain.com 2>&1

Note:
To further customize, please check parameters of $ORACLE_HOME/rdbms/admin/awrrpti.sql by opening it.

Here is a sample:

define  inst_num     = 1;
define  num_days     = 3;
define  inst_name    = 'Instance';
define  db_name      = 'Database';
define  dbid         = 4;
define  begin_snap   = 10;
define  end_snap     = 11;
define  report_type  = 'text';
define  report_name  = /tmp/swrf_report_10_11.txt