Set the Archive Destination and Archive to Multiple Destinations in Oracle

Method | Initialization Parameter                          | Host            | Example
-------|---------------------------------------------------|-----------------|--------------------------------------------
1      | LOG_ARCHIVE_DEST_n (n is an integer from 1 to 10) | Local or remote | LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc'
       |                                                   |                 | LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1'
2      | LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST      | Local only      | LOG_ARCHIVE_DEST = '/disk1/arc'
       |                                                   |                 | LOG_ARCHIVE_DUPLEX_DEST = '/disk2/arc'




Method 1
Perform the following steps to set the destination for archived redo logs using the LOG_ARCHIVE_DEST_n initialization parameter:

(1) Use SQL*Plus to shut down the database with the NORMAL or IMMEDIATE option (but not ABORT).
SHUTDOWN

(2) Edit the LOG_ARCHIVE_DEST_n parameter to specify from one to ten archiving locations. The LOCATION keyword specifies an operating-system-specific path name.
For example, enter:
LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive'

If you are archiving to a standby database, use the SERVICE keyword to specify a valid net service name from the tnsnames.ora file. For example, enter:

LOG_ARCHIVE_DEST_4 = 'SERVICE = standby1'

(3) Edit the LOG_ARCHIVE_FORMAT initialization parameter, using %s to include the log sequence number as part of the file name and %t to include the thread number. Use capital letters (%S and %T) to pad the file name to the left with zeroes. For example, enter:
LOG_ARCHIVE_FORMAT = arch%s.arc

These settings will generate archived logs as follows for log sequence
numbers 100, 101, and 102:

/disk1/archive/arch100.arc, 
/disk1/archive/arch101.arc,
/disk1/archive/arch102.arc

/disk2/archive/arch100.arc, 
/disk2/archive/arch101.arc,
/disk2/archive/arch102.arc

/disk3/archive/arch100.arc, 
/disk3/archive/arch101.arc,
/disk3/archive/arch102.arc
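The naming scheme above can be sketched in plain Python. This is an illustration of the placeholder substitution, not Oracle's actual implementation; in particular, the zero-pad width used for %S/%T varies by platform, so the width below is purely illustrative.

```python
# Sketch of how LOG_ARCHIVE_FORMAT placeholders expand into file names.
# %s = log sequence number, %t = thread number; the uppercase variants
# %S and %T zero-fill the value (pad width is platform specific;
# 10 digits here is only an example).

def expand_format(fmt, thread, sequence, pad_width=10):
    """Substitute LOG_ARCHIVE_FORMAT placeholders for one archived log."""
    return (fmt
            .replace("%S", str(sequence).zfill(pad_width))   # padded sequence
            .replace("%T", str(thread).zfill(pad_width))     # padded thread
            .replace("%s", str(sequence))                    # plain sequence
            .replace("%t", str(thread)))                     # plain thread

# LOG_ARCHIVE_FORMAT = arch%s.arc, with the three destinations above:
for dest in ("/disk1/archive", "/disk2/archive", "/disk3/archive"):
    for seq in (100, 101, 102):
        print(f"{dest}/{expand_format('arch%s.arc', 1, seq)}")
```

Running this prints exactly the nine file names listed above, and the same helper reproduces the Method 2 format `arch_%t_%s.arc` (e.g. `arch_1_100.arc` for thread 1, sequence 100).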

Method 2
The second method, which allows you to specify a maximum of two locations, is to use the LOG_ARCHIVE_DEST parameter to specify a primary archive destination and the LOG_ARCHIVE_DUPLEX_DEST parameter to specify an optional secondary archive destination. Whenever Oracle archives a redo log, it archives it to every destination specified by either set of parameters.

Perform the following steps to use method 2:

(1) Use SQL*Plus to shut down the database.
SHUTDOWN

(2) Specify destinations for the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters (you can also specify LOG_ARCHIVE_DUPLEX_DEST dynamically using the ALTER SYSTEM statement).

For example, enter:

LOG_ARCHIVE_DEST = '/disk1/archive'
LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive'

(3) Edit the LOG_ARCHIVE_FORMAT parameter, using %s to include the log sequence number as part of the file name and %t to include the thread number. Use capital letters (%S and %T) to pad the file name to the left with zeroes.

For example, enter:
LOG_ARCHIVE_FORMAT = arch_%t_%s.arc

These settings generate archived logs as follows for log sequence numbers 100 and 101 in thread 1:

/disk1/archive/arch_1_100.arc, /disk1/archive/arch_1_101.arc
/disk2/archive/arch_1_100.arc, /disk2/archive/arch_1_101.arc

Moving Datafiles With or Without Shutting Down the Database – Oracle

Moving datafiles with the database shut down
$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> !mv /u03/oradata/PROD/devl_PROD_01.dbf /u04/oradata/PROD
SQL> startup mount
SQL> alter database rename file '/u03/oradata/PROD/devl_PROD_01.dbf'
            to '/u04/oradata/PROD/devl_PROD_01.dbf';
SQL> alter database open;
SQL> select * from v$datafile;


Moving datafiles without shutting down the database
$ sqlplus "/ as sysdba"
SQL> alter tablespace development offline;
SQL> !mv /u03/oradata/PROD/devl_PROD_01.dbf /u04/oradata/PROD
SQL> alter database rename file '/u03/oradata/PROD/devl_PROD_01.dbf'
            to '/u04/oradata/PROD/devl_PROD_01.dbf';
SQL> alter tablespace development online;
SQL> select * from v$datafile;

Control File Multiplexing Using Pfile – Oracle

The control file is the heart of every Oracle database. If the control files are lost or corrupted through physical damage to storage, the database must be shut down. Every database should therefore have more than one control file, each on different physical storage. Control files can be multiplexed up to eight times. There are different ways to multiplex the control file.

Step-1: Shutdown the database as normal
SQL> shutdown normal;

Step-2: Using OS command copy the existing control file to a new name and location
$ cp /u01/app/oracle/oradata/orcl/control01.ctl  /u01/app/oracle/oradata/orcl/control02.ctl

Step-3: Add the new control file name to PFILE
CONTROL_FILES = (/u01/app/oracle/oradata/orcl/control01.ctl, /u01/app/oracle/oradata/orcl/control02.ctl)

Step-4: Start the database
SQL> startup;

Multiplexing Control Files Using Spfile – Oracle

Oracle consists of three major types of physical files:
Controlfiles
Datafiles
Online Redo log files

Among them, the control files are the most important. A controlfile contains the database name, database creation date, tablespace names, physical locations of datafiles, and recovery information.
With a default installation, Oracle places 3 control files in the same physical location. For database availability, it is safer to place the 3 controlfiles on separate disks.

To see the current physical location of the control files:
SQL> select name from v$controlfile;
NAME
———————————————————
/ua1/control01.ctl
/ua1/control02.ctl
/ua1/control03.ctl

Suppose we have two hard disks mounted as /ua1 and /ua2. We need to move at least one controlfile to /ua2, so we move the 3rd controlfile. There are several ways to do control file multiplexing:

1. Using SPFILE:
The steps to multiplex control files using an SPFILE are described below:

Login as SYSDBA
1. Alter the SPFILE: Using the ALTER SYSTEM SET command, alter the SPFILE to include a list of all control files to be used.
SQL> ALTER SYSTEM SET control_files='/ua1/control01.ctl',
'/ua1/control02.ctl', '/ua2/control03.ctl' scope=spfile;

2. Shut down the database: Shut down the database in order to create or relocate the additional control files on the operating system.
SQL> SHUTDOWN IMMEDIATE;

3. Create additional control files: Using operating system commands, create or move the additional control files as required and verify that the files are in the appropriate directories.
mv /ua1/control03.ctl /ua2

4. Start the database: When the database is started, the SPFILE is read and the Oracle server maintains all the control files listed in the CONTROL_FILES parameter.
SQL> STARTUP;

To see the changed physical location of the control files:
SQL> select name from v$controlfile;
NAME
———————————————————
/ua1/control01.ctl
/ua1/control02.ctl
/ua2/control03.ctl

Setup Streaming Replication Step by Step

Though many know about streaming replication, I would like to start with the prerequisites and a short introduction to this replication method, as this blog is for beginners :-).
Prerequisites:
1. The same version of PostgreSQL must be installed on both servers.
2. Configure password-less ssh authentication for the "postgres" user.
3. The production server must operate with WAL archiving enabled, by setting archive_mode and archive_command in the postgresql.conf file.
4. The production and standby servers should always have connectivity between them to transfer the archived WAL files from production.
5. Set up your standby host's environment and directory structure exactly the same as your primary's.
Introduction:
It is an asynchronous mechanism; the standby server lags behind the master. But unlike other replication methods, this lag is very short and can be as little as a single transaction, depending on network speed, database activity, and streaming replication settings. Also, the load on the master for each slave is minimal, allowing a single master to support dozens of slaves. The feature was introduced in PostgreSQL 9.0: a second database instance (normally on a separate server) replays the primary's binary log, while the standby server can accept read-only queries.
Here are the practical steps with necessary commands:
1. Connect to Master and create a “replication” user with replication privilege.
$ psql
Password for user postgres:
psql.bin (9.2.1) Type "help" for help.
postgres=# create user replication with replication password '';
2. We need to change some parameters in postgresql.conf and authentication in pg_hba.conf, which are located at /opt/PostgreSQL92/data/ on the Master. Set up connections and authentication so that the standby server can successfully connect to the replication pseudo-database on the primary.
$ $EDITOR postgresql.conf
listen_addresses = '*'
$ $EDITOR pg_hba.conf
# The standby server must have superuser access privileges.
host replication replication 10.176.0.0/16 md5
3. Set up the streaming-replication-related parameters on the primary server.
$EDITOR postgresql.conf
# To enable read-only queries on a standby server, wal_level must be set to "hot_standby". But you can choose "archive" if you never connect to the server in standby mode.
wal_level = hot_standby
# Set the maximum number of concurrent connections from the standby servers.
max_wal_senders = 5
# To prevent the primary server from removing the WAL segments required for the standby server before shipping them, set the minimum number of segments retained in the pg_xlog directory.
wal_keep_segments = 32
# Enable WAL archiving on the primary to an archive directory accessible from the standby. If wal_keep_segments is a high enough number to retain the WAL segments required for the standby server, this may not be necessary.
archive_mode    = on
archive_command = 'cp %p %f && scp %p [email protected]:/%f'
Note: Restart the cluster after modifying the above parameters in the postgresql.conf file.
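The %p and %f placeholders in archive_command are worth spelling out: PostgreSQL replaces %p with the path of the WAL segment to archive (relative to the data directory) and %f with the file name alone. A minimal Python sketch of that substitution (an illustration only, not the server's actual implementation; the segment name below is a made-up example):

```python
# Sketch of archive_command placeholder expansion:
#   %p -> path of the WAL file to archive (relative to the data dir)
#   %f -> the bare file name

def expand_archive_command(command, wal_path, wal_name):
    """Return the shell command PostgreSQL would run for one WAL segment."""
    return command.replace("%p", wal_path).replace("%f", wal_name)

cmd = expand_archive_command(
    "cp %p /var/PG92_Archives/%f",            # an archive_command setting
    "pg_xlog/00000001000000000000001D",       # example %p value
    "00000001000000000000001D",               # example %f value
)
print(cmd)
# cp pg_xlog/00000001000000000000001D /var/PG92_Archives/00000001000000000000001D
```

Because %p already names the source file, an archive_command must copy it to a full destination path; that is why the command must embed the archive directory before %f.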
4. Restart postgres on the primary server and check that the parameters took effect.
postgres=# show archive_command ;
       archive_command
-----------------------------
 cp %p /var/PG92_Archives/%f
(1 row)
postgres=# show archive_mode ;
 archive_mode
--------------
 on
(1 row)
postgres=# show wal_level ;
  wal_level
-------------
 hot_standby
(1 row)
postgres=# show max_wal_senders ;
 max_wal_senders
-----------------
 5
(1 row)
postgres=# show wal_keep_segments ;
 wal_keep_segments
-------------------
 32
(1 row)
5. Make a base backup of the Master server's data directory.
$ psql -c "SELECT pg_start_backup('label', true)"
$ cp -r /opt/PostgreSQL92/data/* backup/
$ psql -c "SELECT pg_stop_backup()"
-- tar the backup directory and move it to the standby location.
$ tar -cvzf backup.tar backup/
$ scp backup.tar [email protected]:/opt/PostgreSQL92/
6. Move the slave data directory contents to any other location, untar the backup file, and copy its contents into the slave data directory.
7. Set up replication-related parameters, connections, and authentication on the standby server as on the primary, so that the standby can work as a primary after failover.
8. Enable read-only queries on the standby server. But if wal_level is archive on the primary, leave hot_standby unchanged (i.e., off).
$ $EDITOR postgresql.conf
hot_standby = on
9. Create a recovery command file in the standby server; the following parameters are required for streaming replication.
$ $EDITOR recovery.conf
# Specifies whether to start the server as a standby. In streaming
# replication, this parameter must to be set to on.
standby_mode          = ‘on’
# Specifies a connection string which is used for the standby server to
# connect with the primary.
primary_conninfo      = ‘host=10.176.112.188 port=5432 user=replication
password= application=’
# Specifies a trigger file whose presence should cause streaming
# replication to end (i.e., failover).
trigger_file = ” ===> Do not create the file. You have to
create the file when failover.
# Specifies a command to load archive segments from the WAL archive. If
# # wal_keep_segments is a high enough number to retain the WAL segments
# # required for the standby server, this may not be necessary. But
# a large workload can cause segments to be recycled before the standby
# is fully synchronized, requiring you to start again from a new base
# backup.
restore_command = ‘cp %f “%p”‘
10. Start postgres in the standby server. It will start streaming replication and you will see log messages like below:
LOG:  entering standby mode
LOG:  consistent recovery state reached at 0/1D000078
LOG:  record with zero length at 0/1D000078
LOG:  streaming replication successfully connected to primary
11. You can calculate the replication lag by comparing the current WAL write location on the primary with the last WAL location received/replayed by the standby. These can be retrieved using the pg_current_xlog_location function on the primary side and the pg_last_xlog_receive_location or pg_last_xlog_replay_location function on the standby, respectively.
$ psql -c "SELECT pg_current_xlog_location()" -h 192.168.0.10
(primary host)
 pg_current_xlog_location
--------------------------
 0/2000000
(1 row)
$ psql -c "select pg_last_xlog_receive_location()" -h 192.168.0.20
(standby host)
 pg_last_xlog_receive_location
-------------------------------
 0/2000000
(1 row)
$ psql -c "select pg_last_xlog_replay_location()" -h 192.168.0.20
(standby host)
 pg_last_xlog_replay_location
------------------------------
 0/2000000
(1 row)
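The lag between two such locations can be expressed in bytes. A small Python sketch of the arithmetic (assuming the 9.x convention that an "X/Y" location is X hex xlog ids of 0x100000000 bytes each plus a hex offset Y; the LSN values are the example ones above):

```python
# Sketch: byte lag between two WAL locations of the form "X/Y" (hex),
# e.g. pg_current_xlog_location() on the primary vs
# pg_last_xlog_receive_location() on the standby.

def lsn_to_bytes(lsn):
    """Convert an 'xlogid/offset' WAL location to an absolute byte position."""
    xlogid, offset = lsn.split("/")
    return int(xlogid, 16) * 0x100000000 + int(offset, 16)

def replication_lag_bytes(primary_lsn, standby_lsn):
    """Bytes the standby is behind the primary."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(standby_lsn)

print(replication_lag_bytes("0/2000000", "0/2000000"))  # 0 -> fully caught up
```

In the output above, both sides report 0/2000000, so the lag is zero; a standby at 0/1F00000 would be 0x100000 bytes (1 MB) behind.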
12. Other ways to check streaming replication: the easiest is "select now()-pg_last_xact_replay_timestamp();" on the slave side. The pg_last_xact_replay_timestamp() function gives the timestamp of the last transaction replayed during recovery, i.e. the time at which the commit or abort WAL record for that transaction was generated on the primary. If no transactions have been replayed during recovery, this function returns NULL. Otherwise, while recovery is still in progress, the value increases monotonically. If recovery has completed, the value remains static at the last transaction applied during that recovery. When the server has been started normally without recovery, the function returns NULL. Try some operation on the Master and then check the function's output.
If you want to check the delay manually, follow the steps below:
Step 1: Create a table on the Primary using the commands below.
create table stream_delay (tstamp timestamp without time zone);
insert into stream_delay select now();
Step 2: Schedule the below command on the primary to execute every minute from cron.
update stream_delay set tstamp = now();
Step 3: Verify the delay on the slave by selecting from the stream_delay table.
It should show the last time the row was updated on the primary. The difference between this timestamp and the current timestamp of the slave server shows the time delay between Primary and slave.
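The arithmetic behind that manual check is just a timestamp difference. A minimal sketch in Python (the timestamp values are made-up examples, not output from a real cluster):

```python
from datetime import datetime

# Sketch of the manual lag check: the cron job stamps stream_delay on the
# primary every minute; on the slave, lag = slave's now() minus the
# replayed tstamp value seen in the table.

def stream_delay_seconds(slave_now, replayed_tstamp):
    """Seconds between the slave's clock and the last replayed update."""
    fmt = "%Y-%m-%d %H:%M:%S"
    return (datetime.strptime(slave_now, fmt)
            - datetime.strptime(replayed_tstamp, fmt)).total_seconds()

print(stream_delay_seconds("2015-01-12 09:45:10", "2015-01-12 09:44:55"))  # 15.0
```

With a one-minute cron interval, a healthy standby should show a value bounded by roughly 60 seconds plus the replication lag itself.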
You can also check the progress of streaming replication by using the ps command. The displayed LSNs indicate the byte position in the xlogs that the standby server has written up to.
[primary] $ ps -ef | grep sender
postgres  6879  6831  0 10:31 ?        00:00:00 postgres: wal sender
process postgres 127.0.0.1(44663) streaming 0/2000000
[standby] $ ps -ef | grep receiver
postgres  6878  6872  1 10:31 ?        00:00:00 postgres: wal
receiver process   streaming 0/2000000
Thanks all. Kindly let me know if I missed anything.

Automatic failover using Replication Manager 2.0 on PostgreSQL 9.3.5


In PostgreSQL high availability (streaming replication/hot standby), one thing still requires human intervention and has no automation: in the event of a master database crash, initiating the failover procedure (trigger-file creation or the pg_ctl promote command) on the standby. The core of PostgreSQL has no built-in functionality to identify a failure of the master and notify the standby. Thus, we need an automated tool to take over the manual failover work, or we have to dive into scripting land and write our own script for it.

Today, we have very good external tools to handle automatic failover, like Replication Manager (repmgr), EDB Failover Manager (EFM), pgHA and HandyRep. Thanks to all of them for filling the gap of automatic failover in PostgreSQL.

In this post, I am demonstrating Replication Manager on a single node (localhost) on RHEL 6.5 with PostgreSQL 9.3.5. For an easy and clear understanding of the concept, I have compiled repmgr against the EnterpriseDB One Click Installer (a pre-built binary package) instead of PG source.

To compile repmgr, we need to install a few mandatory dependency packages (gcc, postgresql-devel, libxslt-devel, pam-devel, libopenssl-devel, krb5-devel and libedit-devel) using yum or rpm. After installing the dependencies, download repmgr 2.0 from here, put pg_config in your PATH, and start compiling.

[postgres@localhost:/opt/PostgreSQL/9.3~]$ type pg_config
pg_config is hashed (/opt/PostgreSQL/9.3/bin/pg_config)

export PATH=/opt/PostgreSQL/9.3/bin:$PATH
export LD_LIBRARY_PATH=/opt/PostgreSQL/9.3/lib:$LD_LIBRARY_PATH

tar xvzf repmgr-2.0.tar.gz
cd repmgr
make USE_PGXS=1
make USE_PGXS=1 install

Mostly, repmgr compiles smoothly without any hiccups if all dependency packages are installed. Since I am compiling against PG pre-built binaries, there may be diverging variants of libraries between the pre-built bundle and the rpm packages, which can throw compilation errors. Like the one you see here:

/lib64/libldap_r-2.4.so.2: undefined reference to `ber_sockbuf_io_udp'
collect2: ld returned 1 exit status
make: *** [repmgrd] Error 1

To fix it, examine the library in question, /lib64/libldap_r-2.4.so.2.
[root@localhost repmgr-2.0]# cd /lib64/
[root@localhost lib64]# ls -l libldap*

lrwxrwxrwx. 1 root root     20 Dec  8 09:23 libldap-2.4.so.2 -> libldap-2.4.so.2.5.6
-rwxr-xr-x. 1 root root 317200 Apr 29  2013 libldap-2.4.so.2.5.6
lrwxrwxrwx. 1 root root     22 Dec  8 09:23 libldap_r-2.4.so.2 -> libldap_r-2.4.so.2.5.6
-rwxr-xr-x. 1 root root 335264 Apr 29  2013 libldap_r-2.4.so.2.5.6

Ok, there are two copies; retain one and unlink the other.
[root@localhost lib64]# unlink libldap_r-2.4.so.2

Now clean the previous compilation with "make USE_PGXS=1 clean", proceed again from step 1, and it will compile without any issues. Similarly, you may need to fix other library-related errors. After installation you can find the binaries, libraries and .sql files related to repmgr in the PostgreSQL locations.

[postgres@localhost:/opt/PostgreSQL/9.3/bin~]$ ls repmgr*     (Two utility commands)
repmgr  repmgrd

[postgres@localhost:/opt/PostgreSQL/9.3/lib/postgresql~]$ ls rep*  
repmgr_funcs.so

[postgres@localhost:/opt/PostgreSQL/9.3/share/postgresql/contrib~]$ ls
repmgr_funcs.sql  repmgr.sql  uninstall_repmgr_funcs.sql  uninstall_repmgr.sql

We are all set to set up automatic failover with the super-simple Replication Manager. As a first step we need streaming replication configured (refer to the wiki), which I have already done on my localhost between ports 5432 (Master) and 5433 (Standby); let's use them. You can also try building the standby with the repmgr STANDBY CLONE command. Refer to the repmgr documentation for more details.

Step 1. Enable the repmgr libraries on both PostgreSQL instances; they are required for repmgr's backend functions.
Master Data Directory : /opt/PostgreSQL/9.3/data
Standby Data Directory: /opt/PostgreSQL/9.3/data_slave

Edit $PGDATA/postgresql.conf
shared_preload_libraries = 'repmgr_funcs'

[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data start

[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -c "show shared_preload_libraries;"
 shared_preload_libraries
————————–
 repmgr_funcs
(1 row)

Step 2. Repmgr needs a repmgr.conf file for each node; since we are working on localhost, we keep each node's repmgr.conf in a separate directory.

[postgres@localhost:/opt/PostgreSQL/9.3~]$ pwd
/opt/PostgreSQL/9.3
[postgres@localhost:/opt/PostgreSQL/9.3~]$ mkdir -p repmgr/master repmgr/slave

Step 3. Create repmgr.conf file for Master(5432) and Standby(5433) in the directories we created in Step 2.

[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/master~]$ pwd
/opt/PostgreSQL/9.3/repmgr/master
[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/master~]$ more repmgr.conf
cluster=test
node=1
node_name=master
conninfo='host=127.0.0.1 port=5432 dbname=postgres'
pg_bindir=/opt/PostgreSQL/9.3/bin
master_response_timeout=60
reconnect_attempts=6
reconnect_interval=10
failover=automatic
promote_command='/opt/PostgreSQL/9.3/repmgr/auto_failover.sh'


[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/slave~]$ pwd
/opt/PostgreSQL/9.3/repmgr/slave
[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/slave~]$ more repmgr.conf
cluster=test
node=2
node_name=slave
conninfo='host=127.0.0.1 port=5433 dbname=postgres'
pg_bindir=/opt/PostgreSQL/9.3/bin
master_response_timeout=60
reconnect_attempts=6
reconnect_interval=10
failover=automatic
loglevel=DEBUG
promote_command='/opt/PostgreSQL/9.3/repmgr/auto_failover.sh'

What you mainly have to observe in repmgr.conf is "master_response_timeout", the total wait in seconds before declaring that the master has disappeared. Within that duration, 6 reconnect attempts are made at 10-second intervals. If there is no response from the master within "master_response_timeout", automatic failover takes place via the promote_command script. The script consists of the standby promotion steps; the one I created for this setup is shared below.

[postgres@localhost:/opt/PostgreSQL/9.3~]$ more repmgr/auto_failover.sh
#!/bin/bash
echo "Promoting Standby at `date '+%Y-%m-%d %H:%M:%S'`" >> /tmp/repsetup.log
/opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave promote >> /tmp/repsetup.log
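The reconnect settings above imply a simple decision loop: probe the master up to reconnect_attempts times, reconnect_interval seconds apart, then fail over. A minimal Python sketch of that logic (an illustration only, not repmgrd's real code; check_master stands in for its connectivity test):

```python
import time

# Sketch of the failover decision implied by master_response_timeout=60,
# reconnect_attempts=6, reconnect_interval=10: six probes, ten seconds
# apart, and only after all fail does promote_command run.

def should_failover(check_master, reconnect_attempts=6, reconnect_interval=10,
                    sleep=time.sleep):
    """Return True when every reconnect attempt to the master fails."""
    for attempt in range(reconnect_attempts):
        if check_master():
            return False                       # master answered: no failover
        if attempt < reconnect_attempts - 1:
            sleep(reconnect_interval)          # wait before the next probe
    return True                                # all attempts failed: promote

# Simulation with a master that never answers (and no real sleeping):
print(should_failover(lambda: False, sleep=lambda s: None))  # True
```

Injecting the sleep function keeps the sketch testable; in the real daemon the 6 x 10-second schedule is what makes the 60-second master_response_timeout add up.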

Step 4. Register the Master and Standby nodes with repmgr using the "repmgr" utility.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -f repmgr/master/repmgr.conf --verbose master register >/tmp/repsetup.log 2>&1

Master register logs (/tmp/repsetup.log):

[2015-01-12 01:28:55] [INFO] repmgr connecting to master database
[2015-01-12 01:28:55] [INFO] repmgr connected to master, checking its state
[2015-01-12 01:28:55] [INFO] master register: creating database objects inside the repmgr_test schema
[2015-01-12 01:28:55] [DEBUG] master register: CREATE SCHEMA repmgr_test
[2015-01-12 01:28:55] [DEBUG] master register: CREATE TABLE repmgr_test.repl_nodes (
[2015-01-12 01:28:55] [DEBUG] master register: CREATE TABLE repmgr_test.repl_monitor (
[2015-01-12 01:28:55] [DEBUG] master register: CREATE VIEW repmgr_test.repl_status AS
[2015-01-12 01:28:55] [DEBUG] master register: CREATE INDEX idx_repl_status_sort ON repmgr_test.repl_monitor (last_monitor_time, standby_node)
[2015-01-12 01:28:55] [DEBUG] master register: INSERT INTO repmgr_test.repl_nodes (id, cluster, name, conninfo, priority) VALUES (1, 'test', 'master', 'host=127.0.0.1 port=5432 dbname=postgres', 0)
[2015-01-12 01:28:55] [NOTICE] Master node correctly registered for cluster test with id 1 (conninfo: host=127.0.0.1 port=5432 dbname=postgres)
Opening configuration file: repmgr/master/repmgr.conf

Standby:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -f repmgr/slave/repmgr.conf --verbose standby register >>/tmp/repsetup.log 2>&1

Standby register logs (/tmp/repsetup.log):

[2015-01-12 01:30:37] [INFO] repmgr connecting to standby database
[2015-01-12 01:30:37] [INFO] repmgr connected to standby, checking its state
[2015-01-12 01:30:37] [DEBUG] standby register: SELECT 1 FROM pg_namespace WHERE nspname = 'repmgr_test'
[2015-01-12 01:30:37] [INFO] repmgr connecting to master database
[2015-01-12 01:30:37] [INFO] finding node list for cluster 'test'
[2015-01-12 01:30:37] [INFO] checking role of cluster node 'host=127.0.0.1 port=5432 dbname=postgres'
[2015-01-12 01:30:37] [INFO] repmgr connected to master, checking its state
[2015-01-12 01:30:37] [INFO] repmgr registering the standby
[2015-01-12 01:30:37] [DEBUG] standby register: INSERT INTO repmgr_test.repl_nodes(id, cluster, name, conninfo, priority) VALUES (2, 'test', 'slave', 'host=127.0.0.1 port=5433 dbname=postgres', 0)
[2015-01-12 01:30:37] [INFO] repmgr registering the standby complete
[2015-01-12 01:30:37] [NOTICE] Standby node correctly registered for cluster test with id 2 (conninfo: host=127.0.0.1 port=5433 dbname=postgres)
Opening configuration file: repmgr/slave/repmgr.conf

Looking at the logs you can easily see that repmgr creates its own schema in the database, named "repmgr_$CLUSTER", with some tables/views/functions in it, and inserts a row per replication-lag sample into the repl_monitor table. Repmgr ships a utility called "repmgrd", a daemon that monitors master availability from the standby node. It is also a management and monitoring daemon that watches the cluster status and can trigger standby promotion. We need to start "repmgrd" after registering the master/standby nodes with repmgr.

Step 5. Let's start the repmgr daemon process to watch the master. In our case we are not running the standby on a separate node, so we start the daemon with the repmgr/slave/repmgr.conf file.
repmgrd -f repmgr/slave/repmgr.conf --verbose --monitoring-history >>/tmp/repsetup.log 2>&1 &

Logs (/tmp/repsetup.log)

[2015-01-12 01:42:13] [INFO] repmgrd Connecting to database 'host=127.0.0.1 port=5433 dbname=postgres'
[2015-01-12 01:42:13] [INFO] repmgrd Connected to database, checking its state
[2015-01-12 01:42:13] [INFO] repmgrd Connecting to primary for cluster 'test'
[2015-01-12 01:42:13] [INFO] finding node list for cluster 'test'
[2015-01-12 01:42:13] [INFO] checking role of cluster node 'host=127.0.0.1 port=5432 dbname=postgres'
[2015-01-12 01:42:13] [INFO] repmgrd Checking cluster configuration with schema 'repmgr_test'
[2015-01-12 01:42:13] [INFO] repmgrd Checking node 2 in cluster 'test'
[2015-01-12 01:42:13] [INFO] Reloading configuration file and updating repmgr tables
[2015-01-12 01:42:13] [INFO] repmgrd Starting continuous standby node monitoring
[2015-01-12 01:42:14] [DEBUG] standby_monitor: INSERT INTO repmgr_test.repl_monitor VALUES(1, 2, '2015-01-12 09:42:14.457287+00'::timestamp with time zone, '2015-01-12 09:42:13.950172+00'::timestamp with time zone, '2/C84DAB08', '2/C84DAB08', 0, 0)

Step 6. Monitor nodes registered with repmgr and running daemon process.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -d postgres -xc "SELECT * FROM repmgr_test.repl_status"

-[ RECORD 1 ]————-+——————————
primary_node              | 1
standby_node              | 2
standby_name              | slave
last_monitor_time         | 2015-01-12 09:42:29.514056+00
last_wal_primary_location | 2/C84DB7A8
last_wal_standby_location | 2/C84DB7A8
replication_lag           | 0 bytes
replication_time_lag      | 00:04:01.960772
apply_lag                 | 0 bytes
communication_time_lag    | 00:03:59.45349

[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -f repmgr/master/repmgr.conf cluster show
[2015-01-12 01:47:00] [INFO] repmgr connecting to database
Role      | Connection String
* master  | host=127.0.0.1 port=5432 dbname=postgres
  standby | host=127.0.0.1 port=5433 dbname=postgres

Step 7. Let's simulate a failover scenario by taking the master down. I am doing it a crude way, by killing the PID recorded in postmaster.pid. Please don't do the same in production if you want to stay in the same company for a long time 🙂

[postgres@localhost:/opt/PostgreSQL/9.3~]$ kill `head -n1 data/postmaster.pid`
Or, you can take the master down safely:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ pg_ctl -D /opt/PostgreSQL/9.3/data stop -mf

Now check the logs to see how repmgr promoted the standby:
[2015-01-12 02:14:11] [WARNING] Can't stop current query: PQcancel() -- connect() failed: Connection refused
[2015-01-12 02:14:11] [WARNING] repmgrd: Connection to master has been lost, trying to recover... 10 seconds before failover decision

[2015-01-12 02:14:21] [ERROR] repmgrd: We couldn't reconnect for long enough, exiting...
[2015-01-12 02:14:21] [DEBUG] repmgrd: there are 2 nodes registered
[2015-01-12 02:14:21] [DEBUG] repmgrd: node=1 conninfo="host=127.0.0.1 port=5432 dbname=postgres" witness=false
[2015-01-12 02:14:21] [ERROR] Connection to database failed: could not connect to server: Connection refused
        Is the server running on host "127.0.0.1" and accepting
        TCP/IP connections on port 5432?
[2015-01-12 02:14:21] [DEBUG] repmgrd: node=2 conninfo="host=127.0.0.1 port=5433 dbname=postgres" witness=false
[2015-01-12 02:14:21] [DEBUG] Total nodes counted: registered=2, visible=1
[2015-01-12 02:14:21] [DEBUG] XLog position of node 2: log id=2 (2), offset=3360733048 (C850B778)
[2015-01-12 02:14:21] [DEBUG] Last XLog position of node 2: log id=2 (2), offset=3360733048 (C850B778)
[2015-01-12 02:14:26] [INFO] repmgrd: This node is the best candidate to be the new primary, promoting...
[2015-01-12 02:14:26] [DEBUG] promote command is: "/opt/PostgreSQL/9.3/repmgr/auto_failover.sh"
Promoting Standby at 2015-01-12 02:14:26
server promoting
[2015-01-12 02:14:29] [INFO] repmgrd Checking cluster configuration with schema ‘repmgr_test’
[2015-01-12 02:14:29] [INFO] repmgrd Checking node 2 in cluster ‘test’
[2015-01-12 02:14:29] [INFO] Reloading configuration file and updating repmgr tables
[2015-01-12 02:14:29] [INFO] repmgrd Starting continuous primary connection check

Perfect. The repmgr daemon recognised the master failure, and before promoting the standby via the "auto_failover.sh" script it properly verified the WAL locations as well. Let's verify whether the standby was actually promoted:

[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5433 -c "select pg_is_in_recovery();"
 pg_is_in_recovery
——————-
 f
(1 row)

Incredible: without any human intervention, the manual failover procedure was taken care of by repmgr. You can rebuild the dead master as a standby either with repmgr itself or by following the switchback procedure shown in my previous post. The same way, you can set this up across two nodes by executing the standby steps on the standby node instead of localhost.

pg_terminate_backend() to Kill a Connection in PostgreSQL

You can use pg_terminate_backend() to kill a connection. You have to be a superuser to use this function. It works the same on all operating systems.

SELECT
    pg_terminate_backend(pid)
FROM
    pg_stat_activity
WHERE
    -- don't kill my own connection!
    pid <> pg_backend_pid()
    -- don't kill the connections to other databases
    AND datname = 'database_name'
    ;

Before executing this query, REVOKE the CONNECT privilege to prevent new connections:
REVOKE CONNECT ON DATABASE dbname FROM PUBLIC, username;

If you're using Postgres 8.4-9.1, use procpid instead of pid:
SELECT
    pg_terminate_backend(procpid)
FROM
    pg_stat_activity
WHERE
    -- don't kill my own connection!
    procpid <> pg_backend_pid()
    -- don't kill the connections to other databases
    AND datname = 'database_name'
    ;

Calculating PostgreSQL Database Object Sizes: Examples

HOW TO FIND THE LARGEST TABLE IN A POSTGRESQL DATABASE
To get the largest table we need to query the pg_class table. This is a catalog containing pretty much anything that has columns or is similar to a table. We can get the ten largest tables with the following query:
SELECT relname, relpages FROM pg_class WHERE relkind IN ('r', 't', 'f') ORDER BY relpages DESC LIMIT 10;

Once we execute the query we get our results:
postgres=# SELECT relname, relpages FROM pg_class WHERE relkind IN ('r', 't', 'f') ORDER BY relpages DESC LIMIT 10;
         relname               | relpages
————————————-+———-
 accounts                            |   373017
 mailing_list                        |   353234
 pg_toast_272815                     |   348236
 order_history                       |   291386
 login_failures                      |   284682
 blog                                |   279218
 blog_data                           |   262035
 pg_toast_356234                     |   226826
 name_list                           |   194564
 categories                          |   188161
(10 rows)



Defining the above SELECT:
relname is the name of the object. This could be a table, index, or view.
relpages is the size of the given object in pages. This is only an estimate and is updated periodically.
relkind is the type of object, defined as follows: r = ordinary table, i = index, S = sequence, v = view, m = materialized view, c = composite type, t = TOAST table, f = foreign table.
A note on pg_toast tables. TOAST (The Oversized-Attribute Storage Technique) tables are created when a table's values get too big. By default PostgreSQL uses an 8 kB page size, and a tuple is not allowed to span multiple pages. To work around this, large values are compressed and/or split up into multiple physical rows stored in an associated TOAST table.
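To find out which ordinary table a given pg_toast relation belongs to, you can join pg_class back on reltoastrelid. A sketch (the pg_toast name below is taken from the example output above):

```sql
SELECT c.relname AS table_name,
       t.relname AS toast_name
FROM pg_class c
JOIN pg_class t ON c.reltoastrelid = t.oid
WHERE t.relname = 'pg_toast_272815';
```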

HOW TO FIND THE LARGEST INDEX, SEQUENCE, VIEW, MATERIALIZED VIEW, OR COMPOSITE TYPE IN A POSTGRESQL DATABASE
SELECT relname, relpages FROM pg_class WHERE relkind IN ('i', 'S', 'v', 'm', 'c') ORDER BY relpages DESC LIMIT 10;

Similar to our query for finding the largest table, however now we look for a different type of relkind value.
postgres=# SELECT relname, relpages FROM pg_class WHERE relkind IN ('i', 'S', 'v', 'm', 'c') ORDER BY relpages DESC LIMIT 10;
                       relname                        | relpages
------------------------------------------------------+----------
 accounts_index                                       |   138742
 mailing_list_index                                   |   104217
 address_list_index                                   |    76413
 order_history_index                                  |    66447
 login_failures_index                                 |    57619
 blog_index                                           |    57243
 blog_data_index                                      |    52554
 mailing_list_index                                   |    51961
 name_list_index                                      |    48066
 categories_index                                     |    48066
(10 rows)


HOW TO CALCULATE THE DISK SPACE USED BY A POSTGRESQL DATABASE
The pg_database_size() function will return the file size in bytes of a given database. It only accepts one parameter which is the database name.
postgres=# SELECT pg_database_size('my_database_name');
 pg_database_size
------------------
     120749803704
(1 row)

The output is returned in bytes. To print the size in a more user-friendly format we can call the same pg_database_size() function, but this time pass its result to pg_size_pretty().
postgres=# SELECT pg_size_pretty(pg_database_size('my_database_name'));
 pg_size_pretty
----------------
 112 GB
(1 row)


HOW TO CALCULATE THE DISK SPACE USED BY A POSTGRESQL TABLE
There are two functions for retrieving the disk usage of a table. The first, pg_relation_size(), returns the size of the table's main data, excluding indexes and TOAST.
postgres=# SELECT pg_size_pretty(pg_relation_size('accounts'));
 pg_size_pretty
----------------
 2914 MB
(1 row)

The second, pg_total_relation_size(), includes all indexes and TOAST data.
postgres=# SELECT pg_size_pretty(pg_total_relation_size('accounts'));
 pg_size_pretty
----------------
 5783 MB
(1 row)

HOW TO CALCULATE THE INDEX SIZE OF A TABLE
pg_indexes_size() returns the physical disk usage of only indexes for a particular table.
postgres=# SELECT pg_size_pretty(pg_indexes_size('accounts'));
 pg_size_pretty
----------------
 2868 MB
(1 row)

HOW TO CALCULATE THE SIZE OF A POSTGRESQL TABLESPACE
Lastly, we calculate the size of a tablespace with the pg_tablespace_size() function.
postgres=# SELECT pg_size_pretty(pg_tablespace_size('my_tablespace'));
 pg_size_pretty
----------------
 118 MB
(1 row)
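To size every tablespace at once rather than naming them one by one, you can iterate over the pg_tablespace catalog. A sketch:

```sql
-- spcname is the tablespace name column in pg_tablespace
SELECT spcname,
       pg_size_pretty(pg_tablespace_size(spcname)) AS size
FROM pg_tablespace
ORDER BY pg_tablespace_size(spcname) DESC;
```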

find out table size, database size, tablespace size

Database Object Size Functions:-
The following functions calculate the disk space usage of database objects.
pg_column_size(any):- Number of bytes used to store a particular value (possibly compressed)
pg_database_size(oid):- Disk space used by the database with the specified OID
pg_database_size(name):- Disk space used by the database with the specified name
pg_indexes_size(regclass):- Total disk space used by indexes attached to the specified table
pg_relation_size(relation regclass, fork text):- Disk space used by the specified fork ('main', 'fsm', 'vm', or 'init') of the specified table or index
pg_relation_size(relation regclass):- Shorthand for pg_relation_size(..., 'main')
pg_size_pretty(bigint):- Converts a size in bytes expressed as a 64-bit integer into a human-readable format with size units
pg_size_pretty(numeric):- Converts a size in bytes expressed as a numeric value into a human-readable format with size units
pg_table_size(regclass):- Disk space used by the specified table, excluding indexes (but including TOAST, free space map, and visibility map)
pg_tablespace_size(oid):- Disk space used by the tablespace with the specified OID
pg_tablespace_size(name):- Disk space used by the tablespace with the specified name
pg_total_relation_size(regclass):- Total disk space used by the specified table, including all indexes and TOAST data
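Several of these functions combine naturally into a single per-table size report. A sketch (tables in the current database, largest first; oid casts implicitly to regclass):

```sql
SELECT relname,
       pg_size_pretty(pg_table_size(oid))          AS table_size,   -- heap + TOAST + maps
       pg_size_pretty(pg_indexes_size(oid))        AS index_size,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```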

The storage space (in bytes) for one specific table:
ads=# select pg_relation_size('size_test');
pg_relation_size
------------------
5668864
(1 row)

Same query, but with the result in a human-readable format:
ads=# select pg_size_pretty(pg_relation_size('size_test'));
pg_size_pretty
----------------
5536 kB
(1 row)

Remember that this is only the size of the table itself, not including indexes or additional data such as TOAST.
The size including all indexes can be found with:
ads=# select pg_size_pretty(pg_total_relation_size('size_test'));
pg_size_pretty
----------------
7656 kB
(1 row)

The size of a complete database:
ads=# select pg_size_pretty(pg_database_size('ads'));
pg_size_pretty
----------------
11 MB
(1 row)

You can even find out how much space a specific value needs:
ads=# select pg_column_size(5::smallint);
pg_column_size
----------------
2
(1 row)


ads=# select pg_column_size(5::int);
pg_column_size
----------------
4
(1 row)


ads=# select pg_column_size(5::bigint);
pg_column_size
----------------
8
(1 row)


ads=# select pg_column_size('This is a string'::varchar);
pg_column_size
----------------
20
(1 row)


ads=# select length('This is a string'::varchar);
length
--------
16
(1 row)

Viewing the path of a relation
SELECT pg_relation_filepath('tablename');

size of a postgresql tablespace
postgres=# SELECT pg_size_pretty(pg_tablespace_size('my_tablespace'));
 pg_size_pretty
----------------
 118 MB
(1 row)

backup and recovery functions in postgresql

Backup Control Functions:-
These functions cannot be executed during recovery (except pg_is_in_backup, pg_backup_start_time and pg_xlog_location_diff).
pg_create_restore_point(name text):- Create a named point for performing restore (restricted to superusers)
pg_current_xlog_insert_location():- Get current transaction log insert location
pg_current_xlog_location():- Get current transaction log write location
pg_start_backup(label text [, fast boolean ]):- Prepare for performing on-line backup (restricted to superusers or replication roles)
pg_stop_backup():- Finish performing on-line backup (restricted to superusers or replication roles)
pg_is_in_backup():- True if an on-line exclusive backup is still in progress.
pg_backup_start_time():- Get start time of an on-line exclusive backup in progress.
pg_switch_xlog():- Force switch to a new transaction log file (restricted to superusers)
pg_xlogfile_name(location text):- Convert transaction log location string to file name
pg_xlogfile_name_offset(location text):- Convert transaction log location string to file name and decimal byte offset within file
pg_xlog_location_diff(location text, location text):- Calculate the difference between two transaction log locations
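On the versions these notes cover (pre-9.6 exclusive backups), a minimal file-system-level backup sequence using these functions might look like this (a sketch; the label is arbitrary):

```sql
-- start an exclusive on-line backup (superuser or replication role);
-- the second argument requests an immediate checkpoint
SELECT pg_start_backup('nightly_base_backup', true);

-- ... copy the data directory with an OS tool (tar, rsync, etc.) ...

-- finish the backup and archive the final WAL segment
SELECT pg_stop_backup();
```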

Recovery Control Functions:-
These functions provide information about the current status of the standby. They may be executed both during recovery and in normal running.
pg_is_xlog_replay_paused():- True if recovery is paused.
pg_xlog_replay_pause():- Pauses recovery immediately.
pg_xlog_replay_resume():- Restarts recovery if it was paused.
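For example, to temporarily halt WAL replay on a standby while running queries against a frozen snapshot, and then let it catch up again (a sketch, executed on the standby):

```sql
SELECT pg_xlog_replay_pause();    -- pg_is_xlog_replay_paused() now returns true

-- ... run read-only reporting queries against the paused standby ...

SELECT pg_xlog_replay_resume();   -- replay resumes and catches up
```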