Saturday, July 22, 2017

Oracle tablespace point in time recovery(TSPITR)

9:16 AM Posted by Dilli Maharjan


RMAN TSPITR (Recovery Manager Tablespace Point-In-Time Recovery) enables quick recovery of one or more tablespaces in a database to an earlier time without affecting the rest of the tablespaces and objects in the database.

RMAN TSPITR is useful in the following scenarios:
  • Recover from data loss after DDL operations such as TRUNCATE TABLE or DROP TABLE ... PURGE.
  • Recover from logical corruption of a table.
  • Recover from a dropped tablespace.
  • Recover from an incorrect batch job run against tables that belong to a single tablespace.

Modes:
  • Fully Automated 
  • Automated: RMAN-Managed Auxiliary Instance with User Settings
  • Non-Automated: TSPITR and User-Managed Auxiliary Instance

Scenario: A batch operation inserts and updates data in a table. It is later noticed that all data inserted after a certain time is incorrect and must be reverted. The time up to which the data needs to be recovered is termed the target time. We are using fully automated TSPITR to recover tablespace tbs01 to that target time.


Create a tablespace for demo purposes.
create tablespace tbs01
datafile '/u01/app/oracle/oradata/orcl/tbs01.dbf'
size 10m
autoextend on;



Create the user that will own the demo table.
Create user demo 
identified by oracle
default tablespace tbs01
quota unlimited on tbs01;



Grant required privileges to the user.
grant connect,resource to demo;
grant execute on dbms_lock to demo;



Connect as user demo and create the table that will be used in the recovery.
conn demo/oracle;
create table tbl_demo(
sn number,
data varchar2(20)
);



Create a PL/SQL block to insert data into the demo table.
begin
  for a in 1..50 loop
    insert into tbl_demo values(a, to_char(sysdate,'YYYY-MM-DD HH24:MI:SS'));
    commit;
    dbms_lock.sleep(5); -- sleep for 5 seconds
  end loop;
end;
/



Wait a few minutes for this PL/SQL block to complete.
At this point there are 50 rows in the table tbl_demo. We will now perform tablespace point-in-time recovery up to the 46th row, whose data value is 2017-07-18 21:16:04.
That means we are recovering until date and time 2017-07-18 21:16:05 (the target time).
select * from tbl_demo;

..............
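Before starting the recovery, it can also help to note the total row count and the timestamp stored in the 46th row. This is an extra sanity check added here; tbl_demo and its columns come from the demo above.
select count(*) as total_rows,
       max(case when sn = 46 then data end) as row_46_time
from tbl_demo;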


Run DBMS_TTS.TRANSPORT_SET_CHECK for the subset of tablespaces and check for any violations.
begin
dbms_tts.transport_set_check('TBS01',TRUE,TRUE);
end;
/

select * from transport_set_violations;



Query the TS_PITR_OBJECTS_TO_BE_DROPPED data dictionary view on the primary database to list all database objects that will be dropped once you perform TSPITR. We search with reference to the chosen point in time; in this example the query below filters on the creation time 2017-07-19 08:00:01.

select owner, name, tablespace_name, to_char(creation_time,'YY-MM-DD:HH24:MI:SS')
from ts_pitr_objects_to_be_dropped
where tablespace_name='TBS01'
and creation_time > to_date('2017-07-19 08:00:01','YYYY-MM-DD HH24:MI:SS')
order by tablespace_name,creation_time;
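Optionally, the SYS.TS_PITR_CHECK view can be queried as well to catch objects whose relationships cross the recovery-set boundary (for example, a table in TBS01 with an index in another tablespace). The pattern below follows the query shape documented for this view; any rows returned should be resolved before running TSPITR.
select *
from sys.ts_pitr_check
where (ts1_name = 'TBS01' and ts2_name != 'TBS01')
   or (ts1_name != 'TBS01' and ts2_name = 'TBS01');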



The AUXILIARY DESTINATION parameter is used to give RMAN a location for the auxiliary set data files. Create the auxiliary destination directory for the tablespace point-in-time recovery.
mkdir /tmp/auxdest

Start RMAN, connect to the target database, and execute the following command.
rman target /
run
{
recover tablespace tbs01
until time "to_date('2017-07-18 21:16:05','YYYY-MM-DD HH24:MI:SS')"
auxiliary destination '/tmp/auxdest';
}

..........


Once the recovery is complete, the tablespace (tbs01 in our case) stays offline. We need to bring it online before accessing it.
alter tablespace tbs01 online;
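It is also a good idea to take a fresh backup of the recovered tablespace right after bringing it online, since backups taken before the TSPITR may no longer be usable for recovering it later. A minimal RMAN sketch, run from the same RMAN session connected to the target:
backup tablespace tbs01;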



Now connect as user demo and query the table tbl_demo to verify that the table was recovered as expected.
select * from tbl_demo;

.........
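If the recovery met the target time, a simple count should now return 46 rows (only the data committed before 2017-07-18 21:16:05):
select count(*) from tbl_demo;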


Following is the output of the RMAN command with a detailed explanation. The text with a shaded background is the RMAN output.

Starting Tablespace PITR.
Starting recover at 18-JUL-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=27 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDO01

Creating auxiliary instance with random SID

Creating automatic instance, with SID='btar'

initialization parameters used for automatic instance:
db_name=ORCL
db_unique_name=btar_tspitr_ORCL
compatible=11.2.0
db_block_size=8192
db_files=200
sga_target=280M
processes=50
db_create_file_dest=/tmp/auxdest
log_archive_dest_1='location=/tmp/auxdest'
#No auxiliary parameter file used


starting up automatic instance ORCL

Oracle instance started

Total System Global Area     292933632 bytes

Fixed Size                     1336092 bytes
Variable Size                100666596 bytes
Database Buffers             184549376 bytes
Redo Buffers                   6381568 bytes
Automatic instance created
Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully

RMAN starts restoring the database on the auxiliary instance with the following memory script.

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('2017-07-18 21:16:05','YYYY-MM-DD HH24:MI:SS')";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log 
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
}
executing Memory Script

executing command: SET until clause

Starting restore at 18-JUL-17
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=20 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/ORCL/autobackup/2017_07_18/o1_mf_s_949698787_dpwc4c64_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/ORCL/autobackup/2017_07_18/o1_mf_s_949698787_dpwc4c64_.bkp tag=TAG20170718T211307
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/tmp/auxdest/ORCL/controlfile/o1_mf_dpwcskct_.ctl
Finished restore at 18-JUL-17

sql statement: alter database mount clone database

sql statement: alter system archive log current

sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('2017-07-18 21:16:05','YYYY-MM-DD HH24:MI:SS')";
plsql <<<-- tspitr_2
declare
  sqlstatement       varchar2(512);
  offline_not_needed exception;
  pragma exception_init(offline_not_needed, -01539);
begin
  sqlstatement := 'alter tablespace '||  'TBS01' ||' offline immediate';
  krmicd.writeMsg(6162, sqlstatement);
  krmicd.execSql(sqlstatement);
exception
  when offline_not_needed then
    null;
end; >>>;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
set newname for datafile  5 to 
 "/u01/app/oracle/oradata/orcl/tbs01.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2, 5;
switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

sql statement: alter tablespace TBS01 offline immediate

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /tmp/auxdest/ORCL/datafile/o1_mf_temp01_%u_.tmp in control file

Starting restore at 18-JUL-17
using channel ORA_AUX_DISK_1

creating datafile file number=5 name=/u01/app/oracle/oradata/orcl/tbs01.dbf
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /tmp/auxdest/ORCL/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /tmp/auxdest/ORCL/datafile/o1_mf_undo01_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /tmp/auxdest/ORCL/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/ORCL/backupset/2017_07_17/o1_mf_nnndf_FULL_DB_BACKUP_ON_ST_dpsokmr7_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2017_07_17/o1_mf_nnndf_FULL_DB_BACKUP_ON_ST_dpsokmr7_.bkp tag=FULL_DB_BACKUP_ON_START
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 18-JUL-17

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=949699496 file name=/tmp/auxdest/ORCL/datafile/o1_mf_system_dpwcsq07_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=949699496 file name=/tmp/auxdest/ORCL/datafile/o1_mf_undo01_dpwcsq0y_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=949699496 file name=/tmp/auxdest/ORCL/datafile/o1_mf_sysaux_dpwcsq0t_.dbf

RMAN then recovers the auxiliary database instance until the specified time.

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('2017-07-18 21:16:05','YYYY-MM-DD HH24:MI:SS')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
sql clone "alter database datafile  5 online";
# recover and open resetlogs
recover clone database tablespace  "TBS01", "SYSTEM", "UNDO01", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  2 online

sql statement: alter database datafile  5 online

Starting recover at 18-JUL-17
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 14 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_17/o1_mf_1_14_dpsol2ry_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_15_dpty3y5f_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_16_dpty41xy_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_17_dpv1y2s4_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_18_dpv22p7t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_19_dpv2c55w_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_20_dpw8rzoy_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_21_dpwcspjx_.arc
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_17/o1_mf_1_14_dpsol2ry_.arc thread=1 sequence=14
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_15_dpty3y5f_.arc thread=1 sequence=15
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_16_dpty41xy_.arc thread=1 sequence=16
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_17_dpv1y2s4_.arc thread=1 sequence=17
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_18_dpv22p7t_.arc thread=1 sequence=18
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_19_dpv2c55w_.arc thread=1 sequence=19
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_20_dpw8rzoy_.arc thread=1 sequence=20
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2017_07_18/o1_mf_1_21_dpwcspjx_.arc thread=1 sequence=21
media recovery complete, elapsed time: 00:00:03
Finished recover at 18-JUL-17

database opened

The target tablespace is made read only on the auxiliary instance, an Oracle directory is created, and the transportable tablespace export is performed.

contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'alter tablespace  TBS01 read only';
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxdest''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxdest''";
}
executing Memory Script

sql statement: alter tablespace  TBS01 read only

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxdest''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxdest''

Performing export of metadata...
   EXPDP> Starting "SYS"."TSPITR_EXP_btar":  
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TSPITR_EXP_btar" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_btar is:
   EXPDP>   /tmp/auxdest/tspitr_btar_43317.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace TBS01:
   EXPDP>   /u01/app/oracle/oradata/orcl/tbs01.dbf
   EXPDP> Job "SYS"."TSPITR_EXP_btar" successfully completed at 21:25:12
Export completed


Dropping the target tablespace on the main database and importing the transportable tablespace exported earlier.

contents of Memory Script:
{
# shutdown clone before import
shutdown clone immediate
# drop target tablespaces before importing them back
sql 'drop tablespace  TBS01 including contents keep datafiles';
}
executing Memory Script

database closed
database dismounted
Oracle instance shut down

sql statement: drop tablespace  TBS01 including contents keep datafiles

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_btar" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_btar":  
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> Job "SYS"."TSPITR_IMP_btar" successfully completed at 21:25:21
Import completed

The auxiliary instance is removed and cleaned up, leaving the recovered tablespace offline.

contents of Memory Script:
{
# make read write and offline the imported tablespaces
sql 'alter tablespace  TBS01 read write';
sql 'alter tablespace  TBS01 offline';
# enable autobackups after TSPITR is finished
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
}
executing Memory Script

sql statement: alter tablespace  TBS01 read write

sql statement: alter tablespace  TBS01 offline

sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;

Removing automatic instance
Automatic instance removed
auxiliary instance file /tmp/auxdest/ORCL/datafile/o1_mf_temp01_dpwctmy8_.tmp deleted
auxiliary instance file /tmp/auxdest/ORCL/onlinelog/o1_mf_3_dpwctmn8_.log deleted
auxiliary instance file /tmp/auxdest/ORCL/onlinelog/o1_mf_2_dpwctmgt_.log deleted
auxiliary instance file /tmp/auxdest/ORCL/onlinelog/o1_mf_1_dpwctm49_.log deleted
auxiliary instance file /tmp/auxdest/ORCL/datafile/o1_mf_sysaux_dpwcsq0t_.dbf deleted
auxiliary instance file /tmp/auxdest/ORCL/datafile/o1_mf_undo01_dpwcsq0y_.dbf deleted
auxiliary instance file /tmp/auxdest/ORCL/datafile/o1_mf_system_dpwcsq07_.dbf deleted
auxiliary instance file /tmp/auxdest/ORCL/controlfile/o1_mf_dpwcskct_.ctl deleted
Finished recover at 18-JUL-17



RMAN> 

Wednesday, June 7, 2017

Install Zabbix proxy on CentOS 7

11:47 AM Posted by Dilli Maharjan
A Zabbix proxy collects performance and availability data on behalf of the Zabbix server.  This way, a proxy can take on itself some of the load of collecting data and offload the Zabbix server. Using a proxy is the easiest way of implementing centralized and distributed monitoring, when all agents and proxies report to one Zabbix server and all data is collected centrally.

A Zabbix proxy can be used to:
  1. Monitor remote locations
  2. Monitor locations having unreliable communications
  3. Offload the Zabbix server when monitoring thousands of devices
  4. Simplify the maintenance of distributed monitoring
In this post we monitor hosts behind NAT using a Zabbix proxy.



Install zabbix-proxy on one of the hosts inside the NAT. Make sure you have enabled port forwarding so that the Zabbix server can reach the proxy host, and, if required, configure the firewall to allow traffic between the proxy and the Zabbix server (see the firewalld sketch below).
rpm -ivh zabbix-proxy-sqlite3-3.2.6-1.el7.x86_64.rpm 
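If firewalld is running on the proxy host, the Zabbix trapper port (10051/tcp by default) has to be opened so that the server and proxy can communicate; a minimal sketch assuming the default port:
firewall-cmd --permanent --add-port=10051/tcp
firewall-cmd --reload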


Extract the gzipped file containing the SQL schema that has to be loaded into the proxy database. Since SQLite is used here, change to the doc directory of the zabbix-proxy-sqlite3 package.
cd /usr/share/doc/zabbix-proxy-sqlite3-3.2.6/
gunzip schema.sql.gz



Create the SQLite database and load the required schema objects.
mkdir /var/lib/sqlite
sqlite3 /var/lib/sqlite/zabbix.db < schema.sql
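To confirm the schema objects were created, the tables in the new database can be listed with the sqlite3 client, for example:
sqlite3 /var/lib/sqlite/zabbix.db ".tables"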



Change ownership of the sqlite database file.
chown -R zabbix:zabbix /var/lib/sqlite


Modify the zabbix-proxy configuration file and change the value of Server. We need to specify the IP address of the Zabbix server under the 'Passive checks related' and 'Active checks related' headings.
vi /etc/zabbix/zabbix_proxy.conf
Server=x.x.y.z
DBHost=localhost
DBName=/var/lib/sqlite/zabbix.db
DBUser=
DBPassword=
DBPort=

Note: Since we are using an SQLite database, DBName must be set to the full path of the database file.





Now we can start the zabbix-proxy service.
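On CentOS 7 the proxy runs under systemd; assuming the unit name shipped with the zabbix-proxy package:
systemctl enable zabbix-proxy
systemctl start zabbix-proxy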


Add the proxy host in the Zabbix server web interface. Click on Configuration >> Hosts.

Provide details of the host to be added. We need to provide the public IP address so that the machine is reachable from the Zabbix server.


Once the host has been added, add the proxy. Click on Administration >> Proxies.


Provide the name of the proxy, select the required hosts, and click on Add.




Now we can add the additional host that is inside the NAT, using the private IP address of the host.


Select the name of the proxy from the 'Monitored by proxy' drop-down list.



The Zabbix proxy cannot be started if the SELinux mode is set to Enforcing. The following error message is shown while starting.



Following is the log message that is displayed.
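If auditd is running, the underlying SELinux denial can also be pulled from the audit log, for example:
ausearch -m avc -ts recent | grep zabbix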



Use the semodule -l command to list all SELinux modules.
semodule -l




Set the SELinux domains for the Zabbix agent and Zabbix proxy to permissive mode.
semanage permissive -a zabbix_t
semanage permissive -a zabbix_agent_t





Now the Zabbix agent and Zabbix proxy can be started.
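Assuming the standard systemd unit names, restart both services and verify that the proxy stays up:
systemctl restart zabbix-agent zabbix-proxy
systemctl status zabbix-proxy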



Friday, May 12, 2017

Oracle Database Sharding

10:57 PM Posted by Dilli Maharjan

Sharding is a data tier architecture in which data is horizontally partitioned across independent databases. Each such database, called a shard, is hosted on a dedicated server with its own local resources - CPU, memory, flash, or disk. All of the shards together make up a single logical database, referred to as a sharded database (SDB). Horizontal partitioning involves splitting a database table across shards so that each shard contains the table with the same columns but a different subset of rows. A table split up in this manner is also known as a sharded table.

Database sharding was introduced in Oracle Database 12c Release 2.

Note: In Oracle Database 12c Release 2 (12.2.0.1), all shards and the shard catalog must be non-CDB databases.

In Oracle Database 12c Release 2, a single table can be partitioned across up to 1,000 shards, and up to 5 shard directors can be deployed in a given region. Sharding provides the following benefits:
  • Linear scalability
  • Workload distribution
  • User isolation

Environment:

3 Virtual machines with OS Oracle Enterprise Linux 6.9 installed.

SHARD-DIRECTOR
Host: sharddirector
IP: 192.168.100.20
Role: Shard Director
SID: SHARDDIR
Software: Oracle database 12.2.0.1, Oracle Database 12c Release 2 Global Service Manager

SHARD-NODES:
HOST      IP               ROLE        SID
shard1    192.168.100.21   1st Shard   sh1
shard2    192.168.100.22   2nd Shard   sh2

Software: Oracle database 12.2.0.1

Download Oracle database 12.2.0.1 and Oracle Database 12.2.0.1 GSM software from URL below:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle12c-linux-12201-3608234.html

Configure host name resolution in /etc/hosts on all the machines (director and 2 shard nodes).
192.168.100.20 sharddirector.localdomain sharddirector
192.168.100.21 shard1.localdomain shard1
192.168.100.22 shard2.localdomain shard2





Install the Oracle Database software using the software-only option on all the hosts, whereas the GSM software is required on the shard director only.

Install Oracle Database on shard director.

Execute dbca.

Select Create a database, and click Next.

Select Advanced configuration, and click Next.

Select the Oracle Single Instance database, the General Purpose or Transaction Processing template, and click Next.

Enter the Global Database name and SID for the shard catalog. Uncheck the Create as Container Database option, and click Next.

On the Storage Option page, select the Use following for the database storage attributes option, select File System, select the Use Oracle-Managed Files (OMF) option, and click Next.

Select Specify Fast Recovery Area, select Enable archiving, and click Next.

Select Create a new listener, set the listener name and port number, and click Next.

Simply click on Next on the Data Vault Config Option page.

On Memory tab, select Use Automatic Shared Memory Management.

On Character sets tab, select Use Unicode (AL32UTF8), and click Next

Uncheck the Configure Enterprise Manager (EM) database express option, and click Next.

Enter the passwords, and click Next.

Select Create database, and click Next.

Click Finish.

Database creation in progress ##########.

Click on Close once database creation completes.


Log in to the shard director server and execute the following SQL commands.
sqlplus /nolog
conn / as sysdba



Modify following initialization parameters.
alter system set db_create_file_dest='/u01/app/oracle/oradata' scope=both;
alter system set open_links=16 scope=spfile;
alter system set open_links_per_instance=16 scope=spfile;
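Since open_links and open_links_per_instance are set with scope=spfile, they take effect only after the catalog database is restarted; a minimal sketch from the same SQL*Plus session connected as sysdba:
shutdown immediate
startup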


Unlock gsmcatuser, create gsm_admin user and grant required privileges.
alter user gsmcatuser identified by oracle_4U account unlock;
create user gsm_admin identified by oracle_4U;
grant connect,create session,gsmadmin_role to gsm_admin;
grant inherit privileges on user SYS to GSMADMIN_INTERNAL;



The GSM software is already installed on the shard director. Create a gsm environment file to access GSM.
cat > gsm.env <<'EOF'
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=${ORACLE_BASE}/product/12.2.0/gsm
export PATH=${ORACLE_BASE}/product/12.2.0/gsm/bin:$PATH
EOF
chmod 755 gsm.env 



Source the gsm.env file to set the GSM environment variables.
. gsm.env


Execute gdsctl to start the GSM command-line tool.
gdsctl


Create the shard catalog in the database on the Oracle shard director.
create shardcatalog -database sharddirector:1521/sharddir -region dc1 -user gsm_admin/oracle_4U -agent_port 7777 -agent_password oracle_4U -chunks 120 -force


Create and start Oracle Shard Director
add gsm -gsm shdd -catalog sharddirector:1521/sharddir -region dc1 -pwd oracle_4U


start gsm -gsm shdd


Create credential for the OS username on all the shard hosts.
add credential -credential mycred -osaccount oracle -ospassword oracle



Create the oradata and fast_recovery_area directories on all the shard nodes.
mkdir /u01/app/oracle/oradata
mkdir /u01/app/oracle/fast_recovery_area




Add the following lines of configuration to the glogin.sql file.
cd $ORACLE_HOME/sqlplus/admin
cat >> glogin.sql <<'EOF'
set sqlprompt "_user '@' _connect_identifier > "
define _editor=vi
EOF





Register the scheduler agent on both shard hosts. Make sure that the shard director is reachable before configuring the scheduler agent.
schagent -registerdatabase sharddirector 7777




Verify that the scheduler agent is started; if not, start it.
schagent -status
schagent -start




Create a shard group on the shard director and add the invited nodes.
add shardgroup -shardgroup sgroup1 -region dc1 -deploy_as primary
add invitednode sharddirector
add invitednode shard1
add invitednode shard2



Create shard metadata.
create shard -shardgroup sgroup1 -destination shard1 -credential mycred -dbtemplatefile /u01/app/oracle/product/12.2.0/dbhome_1/assistants/dbca/templates/General_Purpose.dbc
create shard -shardgroup sgroup1 -destination shard2 -credential mycred -dbtemplatefile /u01/app/oracle/product/12.2.0/dbhome_1/assistants/dbca/templates/General_Purpose.dbc


The database names will be sh1, sh2, and so on (shn for the n-th node).

Execute deploy to create the databases via the scheduler agents.
deploy;



Once deploy is completed it looks as below:


Execute the command below in GDSCTL to view the configuration.
config



Create and start services.
add service -service srvshard -role primary;
start service -service srvshard
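Back in GDSCTL, the running services can also be checked with the services command to confirm that srvshard is up on both shards:
services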



Verify the service is started on shard nodes.
lsnrctl status





Now create a sharded tablespace set and a normal tablespace from the shard catalog database.
sqlplus /nolog
conn / as sysdba



Enable shard ddl for the session.
alter session enable shard ddl;


Create the sharded tablespace set.
create tablespace set tbs_shard in shardspace shardspaceora using template 
(datafile size 128m extent management local segment space management auto);



Create a normal (non-sharded) tablespace.
create tablespace tbs_nshard 
datafile 
size 128m 
autoextend on 
next 100m
maxsize 512M;



Verify that the tablespaces have been created on all the hosts with the SQL command below.
select tablespace_name from dba_tablespaces where tablespace_name in ('TBS_SHARD','TBS_NSHARD');





Create user.
create user usr_shard
identified by oracle_4U;



Grant required privileges.
grant connect, resource to usr_shard;
grant all privileges to usr_shard;
grant gsmadmin_role to usr_shard;
grant select_catalog_role to usr_shard;
grant connect, resource to usr_shard;
grant dba to usr_shard;
grant execute on dbms_crypto to usr_shard;



Connect to the shard user.
connect usr_shard/oracle_4U

Enable shard ddl for the session.
alter session enable shard ddl;


Create a sharded table. A sharded table distributes its data among the shard databases.
CREATE SHARDED TABLE Customers
  (
    CustId      VARCHAR2(60) NOT NULL,
    FirstName   VARCHAR2(60),
    LastName    VARCHAR2(60),
    Class       VARCHAR2(10),
    Geo         VARCHAR2(8),
    CustProfile VARCHAR2(4000),
    CONSTRAINT pk_customers PRIMARY KEY (CustId),
    CONSTRAINT json_customers CHECK (CustProfile IS JSON)
  ) TABLESPACE SET tbs_shard
  PARTITION BY CONSISTENT HASH (CustId) PARTITIONS AUTO;



Create a duplicated table. A duplicated table is replicated on all the shard databases.
CREATE DUPLICATED TABLE Products
  (
    ProductId  INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    Name       VARCHAR2(128),
    DescrUri   VARCHAR2(128),
    LastPrice  NUMBER(19,4)
  ) TABLESPACE TBS_NSHARD;




Login to the shard director database and insert data.
sqlplus usr_shard



insert into customers values(1,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(2,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(3,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(4,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(5,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(6,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(7,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(8,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(9,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(10,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(11,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');



insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 



Verify the number of rows on the shard director, shard1, and shard2 respectively. Notice that the sharded table has partitioned its rows among the shard nodes, whereas the duplicated table is copied to all shard nodes.

Execute following SQL on all the hosts.
select count(*) from customers;
select count(*) from products;






Additional commands for administration

To list all the DDLs executed
show ddl;



To view the shard configuration
config shard
config shard -shard sh1
config shard -shard sh2




List all the objects on user schema
col object_name format a30
select object_name, object_type from user_objects; 


Connect to the shard catalog and verify that the chunks have been equally distributed.
set echo off
SELECT a.name Shard, COUNT(b.chunk_number) Number_of_Chunks
FROM gsmadmin_internal.database a, gsmadmin_internal.chunk_loc b
WHERE a.database_num=b.database_num
GROUP BY a.name
ORDER BY a.name;



List all the destination objects in the database pointing to remote agents
select DESTINATION_NAME from ALL_SCHEDULER_EXTERNAL_DESTS;



Recover a shard node if the shard runs into an issue.
recover shard -shard sh1
recover shard -shard sh2