Wednesday, June 7, 2017

Install Zabbix proxy on CentOS 7

11:47 AM Posted by Dilli Maharjan
A Zabbix proxy collects performance and availability data on behalf of the Zabbix server. This offloads part of the data-collection work from the server itself. Using a proxy is the easiest way of implementing centralized and distributed monitoring: all agents and proxies report to one Zabbix server, and all data is collected centrally.

A Zabbix proxy can be used to:
  1. Monitor remote locations
  2. Monitor locations having unreliable communications
  3. Offload the Zabbix server when monitoring thousands of devices
  4. Simplify the maintenance of distributed monitoring
Monitoring hosts inside a NAT using a Zabbix proxy.



Install zabbix-proxy on one of the hosts inside the NAT. Make sure port forwarding is enabled so that the Zabbix server is able to reach the proxy host, and, if required, configure the firewall to allow traffic between the Zabbix server and the proxy.
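If firewalld is in use on the proxy host, the required ports can be opened as sketched below. This is a hedged example: 10051/tcp is the default Zabbix server/proxy port and 10050/tcp the default agent port; adjust the zone and ports to your environment.

```shell
# Hypothetical firewalld rules for the proxy host; 10051/tcp is the default
# Zabbix server/proxy port, 10050/tcp the default agent port.
firewall-cmd --permanent --add-port=10051/tcp
firewall-cmd --permanent --add-port=10050/tcp
firewall-cmd --reload
```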
rpm -ivh zabbix-proxy-sqlite3-3.2.6-1.el7.x86_64.rpm 


Extract the gzipped file containing the SQL schema to be executed against the proxy database. SQLite is used here, so change to the documentation directory of the SQLite package.
cd /usr/share/doc/zabbix-proxy-sqlite3-3.2.6/
gunzip schema.sql.gz



Create the SQLite database and load the required schema objects.
mkdir /var/lib/sqlite
sqlite3 /var/lib/sqlite/zabbix.db < schema.sql



Change ownership of the sqlite database file.
chown -R zabbix:zabbix /var/lib/sqlite
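The import pattern above can be exercised with a throwaway database as a sanity check. The file names and the one-table stand-in schema below are illustrative only, not the real Zabbix schema; only the `sqlite3 dbfile < schema.sql` pattern is the same.

```shell
# Self-contained sketch of the "sqlite3 dbfile < schema.sql" import step,
# using a temporary database and a minimal stand-in schema.
tmpdb=$(mktemp)
printf 'CREATE TABLE hosts (hostid INTEGER PRIMARY KEY, host TEXT);\n' > /tmp/mini_schema.sql
sqlite3 "$tmpdb" < /tmp/mini_schema.sql
# List the tables to confirm the import worked.
tables=$(sqlite3 "$tmpdb" "SELECT name FROM sqlite_master WHERE type='table';")
echo "$tables"
```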


Modify the zabbix-proxy configuration file and change the value of Server. We need to specify the IP address of the Zabbix server under both the Passive checks related and Active checks related headings.
vi /etc/zabbix/zabbix_proxy.conf
Server=x.x.y.z
DBHost=localhost
DBName=/var/lib/sqlite/zabbix.db
DBUser=
DBPassword=
DBPort=

Note: Since we are using a SQLite database, DBName must be set to the full path of the database file, and DBUser, DBPassword and DBPort can be left empty.





Now we can start the zabbix-proxy service and enable it at boot.
systemctl start zabbix-proxy
systemctl enable zabbix-proxy


Add the proxy machine as a host in the Zabbix server web frontend. Click on Configuration >> Hosts.

Provide the details of the host to be added. We need to provide the public IP address so that the machine is reachable from the Zabbix server.
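Before going further, it may help to confirm that the forwarded port is actually reachable from the Zabbix server. This is a hypothetical check: x.x.y.z stands in for the proxy's public address, and 10051 is the default proxy port.

```shell
# From the Zabbix server: verify the proxy's forwarded port is open
# (x.x.y.z is a placeholder for the proxy's public IP).
nc -zv x.x.y.z 10051
```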


Once the host has been added, add the proxy itself. Click on Administration >> Proxies.


Provide the name of the proxy, select the required hosts, and click on Add.




Now we can add the additional hosts that are inside the NAT, using their private IP addresses.


Select the name of the proxy in the Monitored by proxy drop-down list.


Friday, May 12, 2017

Oracle Database Sharding

10:57 PM Posted by Dilli Maharjan

Sharding is a data tier architecture in which data is horizontally partitioned across independent databases. Each database in such a configuration is called a shard, and each shard is hosted on a dedicated server with its own local resources: CPU, memory, flash, or disk. All of the shards together make up a single logical database, which is referred to as a sharded database (SDB). Horizontal partitioning involves splitting a database table across shards so that each shard contains the table with the same columns but a different subset of rows. A table split up in this manner is also known as a sharded table.
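The routing idea behind horizontal partitioning can be sketched with a toy example. This is not Oracle's consistent-hash algorithm, just an illustration of how hashing the key sends each row to exactly one shard.

```shell
# Toy illustration of horizontal partitioning: hash each key and route the
# row to one of NSHARDS shards, so every shard holds the same columns but a
# disjoint subset of rows. (Not Oracle's actual consistent-hash scheme.)
NSHARDS=2
route() {
  # Print the shard number (1..NSHARDS) for a given key.
  echo $(( $(printf '%s' "$1" | cksum | cut -d' ' -f1) % NSHARDS + 1 ))
}
for id in 101 102 103 104; do
  echo "CustId $id -> shard $(route "$id")"
done
```

Because the shard is derived purely from the key, the same key always routes to the same shard, which is what lets queries by key be sent directly to a single shard.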

Database sharding was introduced in Oracle Database 12c Release 2.

Note: In Oracle Database 12c Release 2 (12.2.0.1), all shards and the shard catalog must be non-CDB databases.

In Oracle Database 12c Release 2, a single table can be partitioned across up to 1000 shards, and up to 5 shard directors can be deployed in a given region, with the following benefits:
  • Linear scalability
  • Workload distribution
  • Fault isolation

Environment:

3 Virtual machines with OS Oracle Enterprise Linux 6.9 installed.

SHARD-DIRECTOR
Host: sharddirector
IP: 192.168.100.20
Role: Shard Director
SID: SHARDDIR
Software: Oracle database 12.2.0.1, Oracle Database 12c Release 2 Global Service Manager

SHARD-NODES:
Host: shard1
IP: 192.168.100.21
Role: 1st Shard
SID: sh1

Host: shard2
IP: 192.168.100.22
Role: 2nd Shard
SID: sh2

Software: Oracle database 12.2.0.1

Download Oracle database 12.2.0.1 and Oracle Database 12.2.0.1 GSM software from URL below:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle12c-linux-12201-3608234.html

Configure host resolution in /etc/hosts on all the machines (the director and the 2 shard nodes).
192.168.100.20 sharddirector.localdomain sharddirector
192.168.100.21 shard1.localdomain shard1
192.168.100.22 shard2.localdomain shard2





Install the Oracle Database software with the software-only option on all the hosts, whereas the GSM software is required on the shard director only.

Install Oracle Database on shard director.

Execute dbca.

Select Create a database, and click Next.

Select Advanced configuration, and click Next.

Select the Oracle Single Instance database, the General Purpose or Transaction Processing template, and click Next.

Enter the Global Database name and SID for the shard catalog. Uncheck the Create as Container Database option, and click Next.

On the Storage Option page, select the Use following for the database storage attributes option, select File System, select the Use Oracle-Managed Files (OMF) option, and click Next.

Select Specify Fast Recovery Area, select Enable archiving, and click Next.

Select Create a new listener, set the listener name and port number, and click Next.

Simply click on Next on the Data Vault Config Option page.

On the Memory tab, select Use Automatic Shared Memory Management.

On the Character sets tab, select Use Unicode (AL32UTF8), and click Next.

Uncheck the Configure Enterprise Manager (EM) database express option, and click Next.

Enter the passwords, and click Next.

Select Create database, and click Next.

Click Finish.

Wait while database creation is in progress.

Click on Close once database creation completes.


Login to shard director server and execute following SQL commands.
sqlplus /nolog
conn / as sysdba



Modify the following initialization parameters.
alter system set db_create_file_dest='/u01/app/oracle/oradata' scope=both;
alter system set open_links=16 scope=spfile;
alter system set open_links_per_instance=16 scope=spfile;


Unlock gsmcatuser, create gsm_admin user and grant required privileges.
alter user gsmcatuser identified by oracle_4U account unlock;
create user gsm_admin identified by oracle_4U;
grant connect,create session,gsmadmin_role to gsm_admin;
grant inherit privileges on user SYS to GSMADMIN_INTERNAL;



The GSM software is already installed on the shard director. Create a gsm.env file to set up the GSM environment.
cat > gsm.env <<EOF
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=${ORACLE_BASE}/product/12.2.0/gsm
export PATH=${ORACLE_BASE}/product/12.2.0/gsm/bin:$PATH
EOF
chmod 755 gsm.env 
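The create-then-source pattern used for gsm.env can be sketched in a self-contained way; the file name /tmp/demo.env and the variable DEMO_HOME below are illustrative only.

```shell
# Write an environment file with a heredoc, then source it into the current
# shell. The quoted 'EOF' prevents variable expansion while writing the file.
cat > /tmp/demo.env <<'EOF'
export DEMO_HOME=/opt/demo
EOF
. /tmp/demo.env
echo "$DEMO_HOME"
```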



Source the gsm.env file to set the GSM environment variables.
. gsm.env


Execute gdsctl to get the gsm command line tool
gdsctl


Create shard catalog database on Oracle Shard Director.
create shardcatalog -database sharddirector:1521/sharddir -region dc1 -user gsm_admin/oracle_4U -agent_port 7777 -agent_password oracle_4U -chunks 120 -force


Create and start Oracle Shard Director
add gsm -gsm shdd -catalog sharddirector:1521/sharddir -region dc1 -pwd oracle_4U


start gsm -gsm shdd


Create credential for the OS username on all the shard hosts.
add credential -credential mycred -osaccount oracle -ospassword oracle



Create oradata and fast_recovery_area on all the shard nodes
mkdir /u01/app/oracle/oradata
mkdir /u01/app/oracle/fast_recovery_area




Add the following lines of configuration to the glogin.sql file.
cd $ORACLE_HOME/sqlplus/admin
cat >> glogin.sql
set sqlprompt "_user '@' _connect_identifier > "
define _editor=vi





Register the scheduler agent on both shard hosts. Make sure the shard director is reachable before configuring the scheduler agent.
schagent -registerdatabase sharddirector 7777




Verify that the scheduler agent is started; if not, start it.
schagent -status
schagent -start




Create a shard group on the shard director and register the invited nodes.
add shardgroup -shardgroup sgroup1 -region dc1 -deploy_as primary
add invitednode sharddirector
add invitednode shard1
add invitednode shard2



Create shard metadata.
create shard -shardgroup sgroup1 -destination shard1 -credential mycred -dbtemplatefile /u01/app/oracle/product/12.2.0/dbhome_1/assistants/dbca/templates/General_Purpose.dbc
create shard -shardgroup sgroup1 -destination shard2 -credential mycred -dbtemplatefile /u01/app/oracle/product/12.2.0/dbhome_1/assistants/dbca/templates/General_Purpose.dbc


The database names will be sh1, sh2, ..., shn for n nodes.

Execute deploy to create the databases via the scheduler agents.
deploy;



Once the deploy has completed, the output looks as below:


Execute the command below in gdsctl to view the configuration.
config



Create and start services.
add service -service srvshard -role primary;
start service -service srvshard



Verify the service is started on shard nodes.
lsnrctl status





Now create a sharded tablespace set and a normal tablespace from the shard catalog.
sqlplus /nolog
conn / as sysdba



Enable shard ddl for the session.
alter session enable shard ddl;


Create the sharded tablespace set.
create tablespace set tbs_shard in shardspace shardspaceora using template 
(datafile size 128m extent management local segment space management auto);



Create a normal, non-sharded tablespace.
create tablespace tbs_nshard 
datafile 
size 128m 
autoextend on 
next 100m
maxsize 512M;



Verify the tablespaces have been created on all the hosts with the SQL command below.
select tablespace_name from dba_tablespaces where tablespace_name in ('TBS_SHARD','TBS_NSHARD');





Create user.
create user usr_shard
identified by oracle_4U;



Grant required privileges.
grant connect, resource to usr_shard;
grant all privileges to usr_shard;
grant gsmadmin_role to usr_shard;
grant select_catalog_role to usr_shard;
grant connect, resource to usr_shard;
grant dba to usr_shard;
grant execute on dbms_crypto to usr_shard;



Connect to the shard user.
connect usr_shard/oracle_4U

Enable shard ddl for the session.
alter session enable shard ddl;


Create a sharded table. A sharded table distributes its rows among the shard databases.
CREATE SHARDED TABLE Customers
  (
    CustId      VARCHAR2(60) NOT NULL,
    FirstName   VARCHAR2(60),
    LastName    VARCHAR2(60),
    Class       VARCHAR2(10),
    Geo         VARCHAR2(8),
    CustProfile VARCHAR2(4000),
    CONSTRAINT pk_customers PRIMARY KEY (CustId),
    CONSTRAINT json_customers CHECK (CustProfile IS JSON)
  ) TABLESPACE SET tbs_shard
  PARTITION BY CONSISTENT HASH (CustId) PARTITIONS AUTO;



Create a duplicated table. A duplicated table is replicated to all the shard databases.
CREATE DUPLICATED TABLE Products
  (
    ProductId  INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    Name       VARCHAR2(128),
    DescrUri   VARCHAR2(128),
    LastPrice  NUMBER(19,4)
  ) TABLESPACE TBS_NSHARD;




Login to the shard director database and insert data.
sqlplus usr_shard



insert into customers values(1,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(2,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(3,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(4,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(5,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(6,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(7,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(8,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(9,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(10,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');
insert into customers values(11,'6sh60jxm','6sh60jxm','6sh60jxm','6sh60j','{ "name":"6sh60jxm" }');



insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 
insert into products(name, DescrUri,lastPrice) values('A','B',1); 



Verify the number of rows on the shard director and on shard1 and shard2 respectively. Notice that the sharded table's rows are partitioned among the shard nodes, whereas the duplicated table is copied to each shard node.

Execute following SQL on all the hosts.
select count(*) from customers;
select count(*) from products;






Additional commands for administration

To list all the DDLs executed
show ddl;



To view the shard configuration
config shard
config shard -shard sh1
config shard -shard sh2




List all the objects on user schema
col object_name format a30
select object_name, object_type from user_objects; 


Connect to the shard catalog and verify that the chunks have been equally distributed.
set echo off
SELECT a.name Shard, COUNT(b.chunk_number) Number_of_Chunks
FROM gsmadmin_internal.database a, gsmadmin_internal.chunk_loc b
WHERE a.database_num=b.database_num
GROUP BY a.name
ORDER BY a.name;



List all the destination objects in the database pointing to remote agents
select DESTINATION_NAME from ALL_SCHEDULER_EXTERNAL_DESTS;



To recover a shard node after it runs into an issue:
recover shard -shard sh1
recover shard -shard sh2