Monday, January 25, 2016

How to convert a 2K DB block size database to 8K block size with minimal downtime

7:13 AM Posted by Dilli Raj Maharjan
The DB_BLOCK_SIZE parameter defines the default data block size and is fixed when the database is created; there is no ALTER DATABASE command to change it afterwards. From Oracle 10g, however, we can create tablespaces with a non-default block size, and by combining several Oracle features the conversion can be done with very little downtime. In my case, our production database (orcl2k) is running with a 2K DB block size. Because of the avoidable performance problems this causes, I need to convert it to an 8K block size. A full export and import is not an option for me because the downtime and effort would be too high.
We need to create a separate database instance with an 8K db_block_size. We can use the same server with a different instance name, or a different server with the same instance name. If a separate server and SAN storage are available, a new database instance on a different server with the same instance name is recommended. In my test environment, I created the new database on the same server with the SID orcl8k and an 8K DB block size.
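For reference, the new 8K instance can be created with DBCA in silent mode. The command below is only a sketch of this step; the template name, passwords and other options are my assumptions, not values taken from the original environment.

dbca -silent -createDatabase \
  -templateName New_Database.dbt \
  -gdbName orcl8k -sid orcl8k \
  -sysPassword change_me -systemPassword change_me \
  -characterSet AL32UTF8 \
  -initParams db_block_size=8192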
Following are the features and steps used to achieve the conversion.


Features Used:

  • RMAN image copy: restores and recovers the database quickly in case of failure.
  • Online table redefinition: moves tables from the old tablespace (2K block size) to a new tablespace (8K block size) without downtime.
  • Transportable tablespaces: transport the tablespace metadata from the source database (2K default block size) to the destination database (8K default block size).
  • Oracle services: the same service name can be published by different instances. In my case I use the service orcl2k to connect to orcl2k on the source and to orcl8k on the destination, so no configuration change is needed on client machines or middleware (see the tnsnames.ora sketch after this list).
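Because clients resolve the database through a service name rather than an instance name, an existing tnsnames.ora entry like the sketch below keeps working unchanged after the cutover. The host name here is a placeholder, not the real server name.

ORCL2K =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl2k)
    )
  )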

Connect to the database as SYS and check the current db_block_size with the SHOW PARAMETER command.

conn / as sysdba
show parameter db_block_size;







Make sure the database is fully recoverable to the point of failure.
Verify that the database is in archivelog mode. If it is not, follow the procedure below to switch it to archivelog mode.

  • Shut down the database.
  • Start the database in mount mode.
  • Alter the database to archivelog mode.
  • Open the database.

archive log list;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
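A quick confirmation that the change took effect (my own check, not part of the original listing):

select log_mode from v$database;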













Set the db_8k_cache_size parameter before creating a tablespace with an 8K block size. To create a tablespace with a non-default block size, the corresponding db_nK_cache_size parameter must be set.

alter system set db_8k_cache_size=10m;
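To confirm that the 8K buffer cache has actually been allocated, a check along these lines can be used; the exact component name is an assumption based on my test systems:

show parameter db_8k_cache_size

select component, current_size
from v$sga_dynamic_components
where component = 'DEFAULT 8K buffer cache';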








Create the new tablespace. In my case dilli is the existing tablespace with a 2K block size, and I created dilli01 with an 8K block size.

Create tablespace dilli01
datafile '/u02/oradata/dilli01.dbf'
size 100m
autoextend on
next 100m
blocksize 8K;













Let's rename the existing tablespace to dilli_stg and dilli01 to dilli.

alter tablespace dilli rename to dilli_stg;
alter tablespace dilli01 rename to dilli;
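To double-check the rename, the block size of each tablespace can be read from the data dictionary (a verification step I added):

select tablespace_name, block_size
from dba_tablespaces
where tablespace_name like 'DILLI%';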












Now dilli_stg is the tablespace with the 2K block size and dilli is the tablespace with the 8K block size.

Verify that all database objects are still in the dilli_stg tablespace:

select segment_name, tablespace_name
from dba_segments
where tablespace_name like 'DILLI%';















Take a full online backup of the database with RMAN and make sure it completes successfully without errors.

Back up the database with the image copy option:
run {
backup as copy database;
backup as copy current controlfile;
backup spfile;
}
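The image copies are the safety net: if anything goes wrong later, the database can be restored quickly. A minimal fallback sketch (not part of the original run, shown only for illustration) would be:

rman target /
startup mount;
restore database;
recover database;
alter database open;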


Start the online table redefinition. Online table redefinition is a mechanism to modify a table's structure, or move it to another tablespace, without affecting its availability.

Grant the required privileges:

grant execute on DBMS_REDEFINITION to dilli;
grant ALTER ANY TABLE, CREATE ANY TABLE,
DROP ANY TABLE, LOCK ANY TABLE, SELECT ANY TABLE,
CREATE ANY INDEX, CREATE ANY SEQUENCE,
CREATE ANY TRIGGER to dilli;














Verify whether each table can be redefined. In our case we have three tables: LOGIN, DEPOSIT and USER_DETAIL.
If a table cannot be redefined online, an error message is raised.


EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('DILLI', 'LOGIN',DBMS_REDEFINITION.CONS_USE_ROWID);
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('DILLI', 'DEPOSIT',DBMS_REDEFINITION.CONS_USE_ROWID);
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('DILLI', 'USER_DETAIL',DBMS_REDEFINITION.CONS_USE_ROWID);
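The redefinition here is keyed on ROWIDs. If the tables had usable primary keys, the primary-key method would normally be preferred; a hypothetical variant for LOGIN, assuming it has a primary key, would be:

EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('DILLI', 'LOGIN', DBMS_REDEFINITION.CONS_USE_PK);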













Connect as the user whose objects need to be redefined online. In my case we are redefining the dilli schema's objects, so let's connect as user dilli.

conn dilli







Create the interim tables with the new structure (if required) in the new tablespace. Oracle repopulates these tables during the redefinition in the next step, so do not copy any data into them now.

Remember that the current objects are in the dilli_stg tablespace, and the new tablespace with the 8K block size has been renamed to dilli.
create table LOGIN1 tablespace dilli
as select * from login where 1=2;
create table DEPOSIT1 tablespace dilli
as select * from DEPOSIT where 1=2;
create table USER_DETAIL1 tablespace dilli
as select * from USER_DETAIL where 1=2;
















Repeat the process below for each table.

Start the redefinition process. Oracle copies (and transforms) the data from the production table to the interim table.

EXEC DBMS_REDEFINITION.start_redef_table('DILLI', 'LOGIN', 'LOGIN1',options_flag=> DBMS_REDEFINITION.CONS_USE_ROWID);







Synchronise intermediate changes to the interim table (optional):

EXEC dbms_redefinition.sync_interim_table('DILLI','LOGIN', 'LOGIN1');








Copy the dependent objects (indexes, triggers, constraints, privileges):


SET SERVEROUTPUT ON
DECLARE
  l_num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.copy_table_dependents(
    uname               => 'DILLI',
    orig_table          => 'LOGIN',
    int_table           => 'LOGIN1',
    copy_indexes        => 1,             -- Default
    copy_triggers       => TRUE,          -- Default
    copy_constraints    => TRUE,          -- Default
    copy_privileges     => TRUE,          -- Default
    ignore_errors       => FALSE,         -- Default
    num_errors          => l_num_errors,
    copy_statistics     => FALSE,         -- Default
    copy_mvlog          => FALSE);        -- Default
 
  DBMS_OUTPUT.put_line('num_errors=' || l_num_errors);
END;
/
























Complete the redefinition process.

exec dbms_redefinition.finish_redef_table('DILLI', 'LOGIN', 'LOGIN1');
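As a quick sanity check (my addition), the data dictionary should now show LOGIN in the new dilli tablespace and the interim table LOGIN1 in dilli_stg:

select table_name, tablespace_name
from dba_tables
where owner = 'DILLI'
and table_name in ('LOGIN', 'LOGIN1');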



Redefinition process for the DEPOSIT table:

EXEC DBMS_REDEFINITION.start_redef_table('DILLI', 'DEPOSIT','DEPOSIT1',options_flag=> DBMS_REDEFINITION.CONS_USE_ROWID);
EXEC dbms_redefinition.sync_interim_table('DILLI','DEPOSIT', 'DEPOSIT1');

SET SERVEROUTPUT ON
DECLARE
  l_num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.copy_table_dependents(
    uname               => 'DILLI',
    orig_table          => 'DEPOSIT',
    int_table           => 'DEPOSIT1',
    copy_indexes        => 1,             -- Default
    copy_triggers       => TRUE,          -- Default
    copy_constraints    => TRUE,          -- Default
    copy_privileges     => TRUE,          -- Default
    ignore_errors       => FALSE,         -- Default
    num_errors          => l_num_errors,
    copy_statistics     => FALSE,         -- Default
    copy_mvlog          => FALSE);        -- Default
 
  DBMS_OUTPUT.put_line('num_errors=' || l_num_errors);
END;
/

exec dbms_redefinition.finish_redef_table('DILLI', 'DEPOSIT', 'DEPOSIT1');


Redefinition process for the USER_DETAIL table:

EXEC DBMS_REDEFINITION.start_redef_table('DILLI', 'USER_DETAIL', 'USER_DETAIL1',options_flag=> DBMS_REDEFINITION.CONS_USE_ROWID);
EXEC dbms_redefinition.sync_interim_table('DILLI','USER_DETAIL', 'USER_DETAIL1');

SET SERVEROUTPUT ON
DECLARE
  l_num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.copy_table_dependents(
    uname               => 'DILLI',
    orig_table          => 'USER_DETAIL',
    int_table           => 'USER_DETAIL1',
    copy_indexes        => 1,             -- Default
    copy_triggers       => TRUE,          -- Default
    copy_constraints    => TRUE,          -- Default
    copy_privileges     => TRUE,          -- Default
    ignore_errors       => TRUE,          -- NOT the default (FALSE); lets the copy skip errors such as ORA-01442 below
    num_errors          => l_num_errors,
    copy_statistics     => FALSE,         -- Default
    copy_mvlog          => FALSE);        -- Default
 
  DBMS_OUTPUT.put_line('num_errors=' || l_num_errors);
END;
/

exec dbms_redefinition.finish_redef_table('DILLI', 'USER_DETAIL', 'USER_DETAIL1');

An error like the one below can be safely ignored:
DECLARE
*
ERROR at line 1:
ORA-01442: column to be modified to NOT NULL is already NOT NULL
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1015
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1907
ORA-06512: at line 4

Verify that all database objects have been successfully redefined into the dilli tablespace:

column segment_name format a30
select segment_name, tablespace_name
from user_segments
where tablespace_name like 'DILLI%'
order by 1;




















We can now drop all the interim objects:

alter table deposit
drop constraint TMP$$_FK_DEP_UNAME0;
alter table withdraw
drop constraint TMP$$_FK_WIT_UNAME0;
alter table LOGIN1
drop constraint TMP$$_PK_LOGIN_USERNAME0;
drop table DEPOSIT1 purge;
drop table USER_DETAIL1 purge;
drop table LOGIN1 purge;




Create a directory object on both the orcl2k and orcl8k instances. Since we are using a single machine, the mkdir command needs to be executed only once.

Note: for convenience, the prompts are written as USER@INSTANCE:SQL. For example, SYS@orcl2k:SQL means we are connected to the orcl2k instance as user SYS.
mkdir /u02/dumps
create directory dump_dir
as '/u02/dumps';
grant read, write on directory dump_dir to public;
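To verify the directory object in each instance (a check I added, not in the original post):

select directory_name, directory_path
from dba_directories
where directory_name = 'DUMP_DIR';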















Change the service name of the orcl2k database instance to orcl2k_old, and the service name of the orcl8k database instance to orcl2k.

-- On the orcl2k instance:
alter system set service_names=orcl2k_old;

-- On the orcl8k instance:
alter system set service_names=orcl2k;
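To confirm the switch, run a check like this on each instance (my own verification step):

show parameter service_names
select name from v$active_services;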









On the orcl8k database, create the user with exactly the same password:
create user dilli
identified by pwd;
grant connect, resource to dilli;













Downtime begins:
alter tablespace dilli read only;
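Confirm the tablespace status really is READ ONLY before moving on (my own check):

select tablespace_name, status
from dba_tablespaces
where tablespace_name = 'DILLI';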








Now perform the transportable tablespace steps to move the tablespace from the orcl2k database to the orcl8k database. We reuse the same datafile in both instances to avoid the time it would take to copy the datafile from one location to another.
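Before the export it is also worth confirming that the tablespace is self-contained. This check is my addition, not part of the original steps:

EXEC DBMS_TTS.TRANSPORT_SET_CHECK('DILLI', TRUE);
select * from transport_set_violations;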

expdp system@ORCL2K directory=dump_dir dumpfile=tts_dilli_$(date +%Y_%m_%d).dmp logfile=exp_tts_dilli_$(date +%Y_%m_%d).log TRANSPORT_TABLESPACES=DILLI

Once the export completes, take the tablespace offline in orcl2k so that the same datafile can be plugged into orcl8k:

alter tablespace dilli offline immediate;





















impdp system@ORCL8K directory=dump_dir dumpfile=tts_dilli_$(date +%Y_%m_%d).dmp logfile=imp_tts_dilli_$(date +%Y_%m_%d).log TRANSPORT_DATAFILES=/u02/oradata/dilli01.dbf











Drop the tablespace from the orcl2k database. Note: do NOT use INCLUDING CONTENTS AND DATAFILES, because the datafile is now in use by orcl8k.

Drop tablespace dilli including contents cascade constraints;









Connect to orcl8k as SYSTEM or SYS, set the default tablespace for the schema, and make the tablespace read write:

alter user dilli default tablespace dilli;
alter tablespace dilli read write;
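And confirm that both changes took effect (my own check):

select default_tablespace from dba_users where username = 'DILLI';
select tablespace_name, status from dba_tablespaces where tablespace_name = 'DILLI';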










Downtime Ends:

Connect as the application user dilli on orcl8k and verify the contents:

conn dilli
select * from tab;
select count(*) from deposit;
select count(*) from login;
select count(*) from user_detail;




















Drop orcl2k database.

shutdown immediate;
startup mount exclusive restrict;
drop database;



















Clients and middleware servers can continue connecting with the orcl2k service without any problem. We can connect to the same instance using either the orcl8k or the orcl2k service name, as the listener status output below shows.

Service "orcl2k" has 1 instance(s).
  Instance "orcl8k", status READY, has 1 handler(s) for this service...





















sqlplus sys@orcl2k as sysdba
select instance_name from v$instance;



















sqlplus sys@orcl8k as sysdba
select instance_name from v$instance;

















