Show blocking Postgres processes and kill them in PostgreSQL

Have you ever had to kill your Postgres cluster because one hanging client starved other processes until most clients became unresponsive, all blocked on that one pesky process?
There is a very nice way to show currently blocked queries and the processes they are blocking on, slightly adapted from a query posted on the Postgres mailing list. I suggest putting it into a view so you can easily access it when you need it. (Note that it uses the pre-9.2 column names procpid and current_query; on PostgreSQL 9.2 and later these are called pid and query.)

CREATE VIEW blocking_procs AS
SELECT 
    kl.pid as blocking_pid,
    ka.usename as blocking_user,
    ka.current_query as blocking_query,
    bl.pid as blocked_pid,
    a.usename as blocked_user, 
    a.current_query as blocked_query, 
    to_char(age(now(), a.query_start),'HH24h:MIm:SSs') as age
FROM pg_catalog.pg_locks bl
    JOIN pg_catalog.pg_stat_activity a 
        ON bl.pid = a.procpid
    JOIN pg_catalog.pg_locks kl 
        ON bl.locktype = kl.locktype
        and bl.database is not distinct from kl.database
        and bl.relation is not distinct from kl.relation
        and bl.page is not distinct from kl.page
        and bl.tuple is not distinct from kl.tuple
        and bl.virtualxid is not distinct from kl.virtualxid
        and bl.transactionid is not distinct from kl.transactionid
        and bl.classid is not distinct from kl.classid
        and bl.objid is not distinct from kl.objid
        and bl.objsubid is not distinct from kl.objsubid
        and bl.pid != kl.pid 
    JOIN pg_catalog.pg_stat_activity ka 
        ON kl.pid = ka.procpid
WHERE kl.granted and not bl.granted
ORDER BY a.query_start;

How to test the query on a testing server (not your production DB server):
Connect to your database, open a transaction, and manually lock a table:

BEGIN;
LOCK your_table;
Leave the transaction and connection open.
Open another client that accesses that data:
# SELECT count(*) from your_table;

It should now be blocked.
View the currently held locks with a third client:
# SELECT * FROM blocking_procs;
blocking_pid   | 25842
blocking_user  | postgres
blocking_query | in transaction
blocked_pid    | 25844
blocked_user   | postgres
blocked_query  | SELECT COUNT(*) FROM "your_table"
age            | 00h:00m:23s
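Since the view's age column is produced by to_char with the 'HH24h:MIm:SSs' pattern, it is easy to post-process on the client side, for instance to pick out sessions that have been blocked longer than some threshold. A minimal sketch (the helper name is mine, not part of the view):

```python
def age_to_seconds(age):
    """Convert the view's age column ('HH24h:MIm:SSs') to seconds."""
    # Each colon-separated part carries a one-letter unit suffix (h/m/s).
    hours, minutes, seconds = (int(part[:-1]) for part in age.split(":"))
    return hours * 3600 + minutes * 60 + seconds

print(age_to_seconds("00h:00m:23s"))  # -> 23
```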

It’s now possible to kill the offending process holding the lock using:
# SELECT pg_terminate_backend(25842);

This will kill the connection where you’ve set the lock, and the open transaction is rolled back, but it seems to leave everything else intact. The second client should now get its response from the server.

Monitoring and maintenance of a PostgreSQL server

1. ps:
          With the ps command, which most Linux/Unix systems include, you can get information about CPU usage, RAM, the PostgreSQL processes, the client connections to the server and their activity, among many other things.

For example, with a command like the one below:
ps aux | grep "postgres" | grep -v "grep"

We would get this information:

postgres  1289  0.0  0.3  51676  7900 ?   S    11:31   0:01 /usr/lib/postgresql/9.1/bin/postgres -D /var/lib/postgresql/9.1/main -c config_file=/etc/postgresql/9.1/main/postgresql.conf

postgres  1303  0.0  0.0  21828  1148 ?   Ss   11:31   0:00 postgres: logger process

postgres  1305  0.0  0.0  51676  1568 ?   Ss   11:31   0:02 postgres: writer process

postgres  1306  0.0  0.0  51676  1324 ?   Ss   11:31   0:02 postgres: wal writer process

postgres  1307  0.0  0.1  52228  2452 ?   Ss   11:31   0:00 postgres: autovacuum launcher process

postgres  1308  0.0  0.0  22016  1432 ?   Ss   11:31   0:00 postgres: stats collector process

The third and fourth columns show the percentage of CPU and RAM usage, respectively. Since very high CPU or RAM consumption can cause slowness and server crashes, it is important to pay attention to these two values.
Note that if this command produces any output at all, the PostgreSQL server is active.
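To keep an eye on those two columns without reading the listing by hand, the output can be summed per process owner. A small sketch (sample data is hard-coded below; in practice you would feed it the actual output of ps aux):

```python
# Hypothetical ps aux output, trimmed; third column is %CPU, fourth is %MEM.
SAMPLE_PS = """\
postgres  1289  0.0  0.3  51676  7900 ?  S   11:31  0:01 /usr/lib/postgresql/9.1/bin/postgres
postgres  1305  0.2  0.1  51676  1568 ?  Ss  11:31  0:02 postgres: writer process
root       999  1.5  2.0  90000  4000 ?  S   11:31  0:10 some_other_daemon
"""

def postgres_usage(ps_output):
    """Sum the %CPU and %MEM columns of all postgres-owned processes."""
    cpu = mem = 0.0
    for line in ps_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "postgres":
            cpu += float(fields[2])  # third column: %CPU
            mem += float(fields[3])  # fourth column: %MEM
    return cpu, mem

cpu, mem = postgres_usage(SAMPLE_PS)
print("postgres total: %.1f%% CPU, %.1f%% RAM" % (cpu, mem))
```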



2.vmstat:
             With this command (included in most Linux/Unix installations) you can obtain data about main memory, swap activity (in and out), I/O, the system and the CPU. The first line always shows an average since the last reboot.
To get a view of the current data over an interval of time with a defined number of repetitions, run it as below, where the first argument is the number of seconds each line measures and the second is the number of lines that will appear:

vmstat 1 5

It will return information like this:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----

r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa

0  0      0 188972 179496 669392    0    0    24    27  525  461  1  1 96  1

0  0      0 188972 179496 669392    0    0     0     0 1449 1689  2  2 97  0

0  0      0 188972 179496 669392    0    0     0     0 2403 1605  2  2 97  0

2  0      0 188972 179500 669392    0    0     0    16 2041 2502  2  3 94  2

0  0      0 188972 179500 669392    0    0     0     0 1599 1868  1  1 98  0
   The most useful information for the topic that concerns us is in the last column, "wa". It shows the time spent waiting for input/output operations. Since the system could collapse due to excessive disk access, it is very important to monitor it so it does not exceed a value of 15-20.
Also check whether the values of the "si" and "so" columns (swap in and swap out) are 0. Non-zero values mean the server's RAM is saturated and the system is swapping.
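Those two rules of thumb can be checked automatically on each vmstat data line. A sketch (the column order follows the header shown above; the thresholds are the article's suggested values):

```python
def check_vmstat_line(line, wa_limit=15):
    """Flag I/O wait and swap activity in one vmstat data line.

    Column order follows the vmstat header:
    r b swpd free buff cache si so bi bo in cs us sy id wa
    """
    f = [int(x) for x in line.split()]
    si, so, wa = f[6], f[7], f[15]
    warnings = []
    if wa > wa_limit:
        warnings.append("high I/O wait (wa=%d)" % wa)
    if si > 0 or so > 0:
        warnings.append("swapping (si=%d, so=%d): RAM may be saturated" % (si, so))
    return warnings

print(check_vmstat_line("0 0 0 188972 179496 669392 0 0 24 27 525 461 1 1 96 1"))
# -> []
```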


3.netstat:
               This command displays information about the status of network connections, so you can check the number of connections in TIME_WAIT state on the computer hosting PostgreSQL with a command like the one below.

netstat -ntu | grep "TIME_WAIT" | wc -l

A high number of connections in TIME_WAIT state may indicate problems with the server response as a result of other problems.
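The same count can be computed by parsing the state column directly rather than grepping whole lines. A minimal sketch (sample output hard-coded; in practice feed it the output of netstat -ntu):

```python
def count_time_wait(netstat_output):
    """Count lines of `netstat -ntu` output whose state column is TIME_WAIT."""
    return sum(1 for line in netstat_output.splitlines()
               if line.split() and line.split()[-1] == "TIME_WAIT")

# Hypothetical sample output.
SAMPLE_NETSTAT = """\
tcp 0 0 127.0.0.1:5432 127.0.0.1:50001 TIME_WAIT
tcp 0 0 127.0.0.1:5432 127.0.0.1:50002 ESTABLISHED
tcp 0 0 127.0.0.1:5432 127.0.0.1:50003 TIME_WAIT
"""
print(count_time_wait(SAMPLE_NETSTAT))  # -> 2
```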


4.Views and internal tables of PostgreSQL:
                        PostgreSQL has a subsystem called the Statistics Collector, responsible for collecting and reporting information on the activity of the server. To deactivate the Statistics Collector and avoid its performance cost, modify the parameters 'track_counts', 'track_functions' and 'track_activities' in the file 'postgresql.conf'. However, before deactivating it, assess whether it is worthwhile, as this removes the possibility of obtaining very useful data.
To access these data, you run SQL queries against some predefined views. Some of the most important ones are:


a.pg_database
            It stores information about the available databases, one database per row.
This table is very interesting because it allows you to find out the size (in bytes) of all the databases. The pg_database_size(oid) function returns the size of the database whose OID is passed as an argument. Summing it over this table gives the total size of all databases, so you can check whether there is enough space left on disk:
SELECT SUM(pg_database_size(oid)) FROM pg_database;
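The result comes back as a raw byte count. Recent Postgres versions can format it server-side with pg_size_pretty(), or you can format it on the client. A sketch of client-side formatting (the helper name is mine):

```python
def pretty_bytes(n):
    """Format a raw byte count (e.g. from pg_database_size) for humans."""
    size = float(n)
    for unit in ("bytes", "kB", "MB", "GB", "TB"):
        if size < 1024 or unit == "TB":
            return "%.1f %s" % (size, unit)
        size /= 1024  # step up to the next unit

print(pretty_bytes(7340032))  # -> 7.0 MB
```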

b.pg_locks
       pg_locks displays information on the active locks in the server. It has one row for each lock, showing the type of lock, where it has occurred, and so on.
Since the DBMS allows multiple users to access the database at the same time, some of them may want to change the same item that another user is using. To avoid this situation, the elements in use by a transaction must be locked. To find out the number of locks held, perform the following query:

SELECT COUNT(*) FROM pg_locks;

A high number of locks could decrease the server performance, because two or more processes could try to access the same resource and would have to wait for the resource to be unlocked.

c.pg_stat_activity
                 It shows information about the server processes, one row per process.
From this view you can get the number of connected users with this query:

SELECT COUNT(usename) FROM pg_stat_activity;

The number of concurrent users the system can manage depends on the hardware and the connection of the machine where the server is located. It is convenient to perform stress tests to find out what that limit is and be alerted if the system is approaching it.

d.pg_stat_database
             It shows information about the use of the databases. It has one row for each database, showing the number of connected processes, commits and rollbacks. Furthermore, it shows information on the blocks read from disk and from the cache, and the number and type of operations performed on rows of each one.

With this view you can find out, for example, whether the cache memory is working properly. With the query below, you can get the ratio of cache hits to disk block reads:

SELECT SUM(blks_hit) / SUM(blks_read) FROM pg_stat_database;

The higher the ratio, the more often data for the tables is served from memory rather than disk.
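Note that the query above divides blks_hit by blks_read, which fails with a division-by-zero error on a freshly started server that has not read anything from disk yet. A more common formulation of the metric is hit / (hit + read); a client-side sketch of that calculation:

```python
def cache_hit_ratio(blks_hit, blks_read):
    """Fraction of block requests served from the shared buffer cache."""
    total = blks_hit + blks_read
    # None signals "no traffic yet" instead of dividing by zero.
    return blks_hit / float(total) if total else None

print(cache_hit_ratio(99000, 1000))  # -> 0.99
```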


e.pg_stat_bgwriter
        It only has one row and shows data related to checkpoints and buffers from the background writer.
An advanced user able to interpret the data can take advantage of this information by changing certain properties to improve performance. For example, you can perform this query:

SELECT maxwritten_clean FROM pg_stat_bgwriter;

This query returns the number of times the background writer stopped a cleaning scan because it had written the maximum number of buffers allowed in that round. A low number is fine, but if the values are too high, you should increase the value of 'bgwriter_lru_maxpages' (default 100) in 'postgresql.conf' or on the command line to improve performance.
You can also check how often backend processes were forced to write buffers themselves because the background writer did not keep enough buffers clean.

Use this query to check it:
     SELECT buffers_backend FROM pg_stat_bgwriter;

To have the background writer clean buffers ahead of the backends' needs, you can increase the values of 'bgwriter_lru_maxpages' and 'bgwriter_lru_multiplier', and reduce 'bgwriter_delay'.

5.How to monitor PostgreSQL with Pandora FMS:
                  Some users may prefer to perform checks manually to ensure the proper operation of their PostgreSQL server. However, for those who need to maintain tight control of the operation and evolution of the system, a monitoring system is the perfect solution to extend the functionality and possibilities of PostgreSQL.
The market offers a wide range of monitoring systems. As the manufacturers of and experts in Pandora FMS, we strongly recommend Pandora FMS and its PostgreSQL plugin to monitor PostgreSQL.

The first step to deploy the monitoring is to download and install Pandora FMS. Please find the ISO image below:
http://pandorafms.com/Community/download/

Once installed, install an agent on the machine where the PostgreSQL server is running. You can find agents for many different Linux distributions at the following link:
http://sourceforge.net/projects/pandora/files/Pandora%20FMS%204.0.3/

Since you can find all the information on how to install every process in the official documentation, we won’t explain it in this article. For further information on this process, click on this link:
http://wiki.pandorafms.com/index.php?title=Pandora:Documentation_en:Installing

Click on the link below to download the plugin to monitor PostgreSQL.
http://pandorafms.com/index.php?sec=Library&sec2=repository&lng=es&action=view_PUI&id_PUI=553

To install the plugin, just unzip the downloaded file and move the Pandora_PostgreSQL.conf and Pandora_PostgreSQL.pl files to the plugin folder of the agent installed on the PostgreSQL server you want to monitor. You can use the following commands:
# unzip Pandora_PostgreSQL.zip
# cp Pandora_PostgreSQL.* /etc/pandora/plugins

Once the plugin is in the right location, we have to configure it. Enter the user name and password of a user with permission to log in from the terminal. You will also have to add the host address and specify a directory for temporary data. Once done, the plugin will be fully functional, as it includes a lot of checks by default.

You can add or delete checks from the configuration, or even customize the existing ones.
There are three types of checks: system checks, general queries about PostgreSQL information, and free queries. To remove one in particular, delete it or comment it out by putting a # at the beginning of each of its lines.

To add or customize a check, you first need to understand how the checks are structured:

* System checks:
check_begin
check_pgsql_service
check_end

These checks are predefined; new ones cannot be added, but you can delete or customize them as explained later.

* General queries:
check_begin
check_name (name of the check)
pgsql_column (column to extract information from)
pgsql_table (table to extract information from)
check_end

These checks only work with numerical data; they return the sum of all the values found in the specified table column.

* Free queries:
check_begin
check_name (name of the check)
pgsql_sql (SQL query to execute)
check_end

These checks work like general queries when retrieving numerical data. If the information you wish to gather is text, they return the first text value found.

Each check must sit between two lines, "check_begin" and "check_end", containing the required data indicated above plus any of these optional customizations:

– post_condition:
It compares the result obtained when executing the module with the condition indicated here. If it is met, 'post_status', 'post_execution' or both (if defined) will be applied.
E.g. post_condition > 10

– post_status:
It switches the status of the check to the indicated one when the 'post_condition' is met. E.g. post_status WARNING

– post_execution:
Runs the indicated shell command, substituting the check's result for '_data_', when the 'post_condition' is met.
E.g. post_execution snmptrap -v 1 -c public 192.168.5.2 1.3.6.1.2.1.2 192.168.50.124 6 666 1233433 .1.3.6.1.2.1.2.2.1.1.6 i _data_

– data_delta: With this property, the value obtained by the check is treated as a "delta", i.e. it reports the increase since the previous reading, so you can observe the rate of change.

– module_type: it changes the data type returned by the check.
E.g. module_type async_string

Once the setup is complete, add the plugin to the configuration file of the Pandora FMS agent, which is by default at /etc/pandora/pandora_agent.conf, so it runs periodically. If the plugin files have already been copied to the plugins folder, you just have to add this line:

module_plugin /etc/pandora/plugins/Pandora_PostgreSQL.pl /etc/pandora/plugins/Pandora_PostgreSQL.conf

After completing this process, Pandora FMS will be monitoring your systems, and you can be sure you will be informed of any failures before they become a bigger problem for your environment.

Check if your current database connection is using PgBouncer or not.

                        Determining if your current database connection is using PgBouncer, or going directly to Postgres itself, can be challenging, as PgBouncer is a very low-level, transparent interface. It is possible, though, and here are some detection methods you can use.

This was inspired by someone asking on the Perl DBD IRC channel if it was possible to easily tell if your current database handle (usually “$dbh”) is connected to PgBouncer or not. Since I’ve seen this question asked in other venues, I decided to take a crack at it.

There are actually two questions to be answered: (1) are we connected to PgBouncer, and if so, (2) what pool_mode is being run? The quickest and easiest way I found to answer the first question is to try and connect to a non-existent database. Normally, this is a FATAL message, as seen here:

$ psql testdb -p 5432
testdb=# \c ghostdb
FATAL:  database "ghostdb" does not exist
Previous connection kept

testdb=# 
However, a slightly different ERROR message is returned if the same thing is attempted while connected to PgBouncer:
$ psql testdb -p 6432
testdb=# \c ghostdb
ERROR:  No such database: ghostdb
Previous connection kept

Thus, an ERROR will always indicate that you are connected to PgBouncer and not directly to Postgres, which will always issue a FATAL.
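That distinction is easy to script: capture the server's response to connecting to a bogus database and classify it by severity. A heuristic sketch (the function name is mine, and the message texts may vary between versions):

```python
def looks_like_pgbouncer(message):
    """Classify the response to connecting to a non-existent database.

    PgBouncer answers 'ERROR:  No such database: ...', while Postgres
    itself answers 'FATAL:  database "..." does not exist'.
    Returns True/False, or None if the message matches neither pattern.
    """
    if message.startswith("ERROR:") and "No such database" in message:
        return True   # PgBouncer
    if message.startswith("FATAL:") and "does not exist" in message:
        return False  # direct Postgres
    return None       # inconclusive

print(looks_like_pgbouncer("ERROR:  No such database: ghostdb"))  # -> True
```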
In the future, there will be an even simpler method. As of this writing, pgBouncer 1.6 has not been released, but it will have the ability to customize the application_name. This is a configurable session-level variable that is fairly new in Postgres. Andrew Dunstan wrote a patch which enables adding this to your pgbouncer.ini file:
application_name_add_host = 1
This will make PgBouncer modify the application_name to append some information to it such as the remote host, the remote port, and the local port. This is a feature many PgBouncer users will appreciate, as it offers an escape from the black hole of connection information that PgBouncer suffers from. Here is what it looks like on both a normal Postgres connection, and a PgBouncer connection. As you can see, this is an easier check than the “invalid database connection” check above:

 Postgres:
$ psql testdb -p 5432 -c 'show application_name'
 application_name 
------------------
 psql

 PgBouncer:
$ psql testdb -p 6432 -c 'show application_name'
        application_name        
--------------------------------
 psql - unix([email protected]):6432

DBD::Pg connections to PgBouncer get a very similar change:
$ perl testme.tmp.pl --port 6432
app - unix([email protected]):6432

Now we have answered the question "are we connected to PgBouncer or not?". The next question is which pool mode we are in. There are three pool modes you can set for PgBouncer, which control when your particular connection is returned to "the pool". For "session" mode, you keep the same Postgres backend the entire time you are connected. For "transaction", you keep the same Postgres backend until the end of a transaction. For "statement", you may get a new Postgres backend after each statement.
First, we can check if we are connected to PgBouncer in a statement level pool mode by taking advantage of the fact that multi-statement transactions are prohibited. PgBouncer enforces this by intercepting any attempts to enter a transaction (e.g. by issuing a BEGIN command). A very PgBouncer specific error about “Long transactions not allowed” is issued back to the client like so:

$ psql testdb -p 6432
testdb=# begin;
ERROR:  Long transactions not allowed

So, that takes care of detecting a pool_mode set to ‘statement’. The other two modes, transaction and session, will *not* give the same error. Thus, seeing that error indicates you are using a statement-level PgBouncer connection.

The next pool mode is "transaction", which means that the server connection is released back to the pool at the end of a transaction. To figure out if we are in this mode, we take advantage of the fact that PgBouncer can be set to clean up the connection at the end of each transaction by issuing a specific command. By default, the command set by server_reset_query is DISCARD ALL, which invalidates any prepared statements, temporary tables, and other transaction-spanning, session-level items. Thus, our test will see if these session-level artifacts get discarded or not:

 Direct Postgres:
$ psql testdb -p 5432
testdb=# prepare abc(int) as select $1::text;
PREPARE
testdb=# execute abc(1);
 text
——
 1

 PgBouncer:
$ psql testdb -p 6432
testdb=# prepare abc(int) as select $1::text;
PREPARE
testdb=# execute abc(1);
ERROR:  prepared statement “abc” does not exist

Keep in mind that there are no true “transactionless” commands in Postgres. Even though we did not use a BEGIN in the psql prompt above, each command is treated as its own mini-transaction. In the case of the PgBouncer connection, the prepare is immediately followed with a DISCARD ALL, which means that our prepared statement no longer exists. Hence, we have determined that we are using a transaction-level PgBouncer connection.
Unfortunately, not getting an error does not necessarily mean your PgBouncer is NOT in transaction mode! It could be that server_reset_query is empty, meaning that temporary artifacts are not discarded at the end of the transaction. In such a case, we can take advantage of the fact that PgBouncer will allow other clients to share in our current connection, and thus be able to see the temporary items. If we create a temporary table in one pgbouncer connection, then connect again as a new client, the temporary table will only show up if we are sharing sessions but not transactions. Easier shown than explained, I suspect:

Regular Postgres gets a fresh session:
$ psql test1 -p 5432
test1=# create temp table abc(a int);
CREATE TABLE
test1=# select * from abc;
(No rows)
test1=# ^Z ## (we suspend with CTRL-Z)
[2]+  Stopped                 psql test1 -p 5432

$ psql test1 -p 5432
test1=# select * from abc;
ERROR:  relation “abc” does not exist

 PgBouncer will re-use the same session:
$ psql test1 -p 6432
test1=# create temp table abc(a int);
CREATE TABLE
test1=# select * from abc;
(No rows)
test1=# ^Z
[2]+  Stopped                 psql test1 -p 6432

$ psql test1 -p 6432
test1=# select * from abc;
(No rows)

The final PgBouncer pool mode is "session", which basically means the only advantage over a normal Postgres connection is avoiding the overhead of starting up and connecting to a new Postgres backend. The PgBouncer connections are only returned to the pool upon disconnection. The only way to tell if you are in this mode is by determining that you are *not* in the other two modes. 🙂
So, although PgBouncer is extremely transparent, there are some tricks to determine if you are connected to it, and at what pool_mode. If you can think of other (SQL-level!) ways to check, please let me know in the comments section.
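The detection logic above can be folded into a single helper. A sketch (the inputs are the observed outcomes of the probes described in this article; the function name is mine):

```python
def infer_pool_mode(begin_rejected, prepared_lost, temp_table_shared):
    """Infer PgBouncer's pool_mode from the three probes described above.

    begin_rejected:    BEGIN failed with "Long transactions not allowed"
    prepared_lost:     a prepared statement vanished right after creation
    temp_table_shared: a temp table from one client is visible to a new client
    """
    if begin_rejected:
        return "statement"
    if prepared_lost or temp_table_shared:
        return "transaction"
    return "session"  # by elimination, as the text notes

print(infer_pool_mode(False, True, False))  # -> transaction
```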

Drop all connections in PostgreSQL without restarting the server

I want to drop all connections (sessions) that are currently opened to a specific PostgreSQL database but without restarting the server or disconnecting connections to other databases.


Method 1.
Depending on your version of PostgreSQL you might run into a bug that makes pg_stat_activity omit active connections from dropped users. These connections are also not shown inside pgAdmin III.
If you are doing automatic testing (in which you also create users) this might be a probable scenario.

In this case you need to revert to queries like:
 SELECT pg_terminate_backend(pg_stat_activity.procpid)
 FROM pg_stat_get_activity(NULL::integer)
 WHERE datid=(SELECT oid FROM pg_database WHERE datname = 'your_database');


Method 2.
A query like this should help (assuming the database is named 'db'):
select pg_terminate_backend(pid) from pg_stat_activity where datname='db';

pid used to be called procpid, so if you’re using a version of postgres older than 9.2 you could try the following:
select pg_terminate_backend(procpid) from pg_stat_activity where datname='db';

However you have to be a superuser to disconnect other users.
It might also be useful to REVOKE CONNECT ON DATABASE FROM PUBLIC or something similar, and then GRANT it afterward.

Method 3.
This can be used to “free” a database from client connections, so that you for example can rename it:
SELECT pg_terminate_backend(procpid) FROM pg_stat_activity WHERE datname='current_db';
ALTER DATABASE current_db RENAME TO old_db;
ALTER DATABASE new_db RENAME TO current_db;

Be aware that this might cause problematic behaviour in your client apps. Data should not actually be corrupted, though, thanks to transactions.



Method 4.
Find the backend's PID with ps:

$ ps aux | grep DATABASE_NAME
postgres  51191  ...  postgres: user DATABASE_NAME [local] idle

Then kill the process:

$ kill -9 51191

Be careful with kill -9 (SIGKILL): the postmaster treats a killed backend as a crash and restarts all other backends, so prefer pg_terminate_backend() or a plain kill (SIGTERM) where possible.

pg_cancel_backend vs pg_terminate_backend in PostgreSQL

1.To test this functionality, we will take a look at the two sessions below (one idle and one running):

gpadmin=# select datname,usename,procpid,sess_id,current_query from pg_stat_activity ;


 datname |    usename    | procpid | sess_id |                                current_query
---------+---------------+---------+---------+------------------------------------------------------------------------------
 gpadmin | running_user1 |   24174 |      26 | insert into test_table values (generate_series(1,100000000000));
 gpadmin | idle_user1    |   24285 |      27 |
 gpadmin | gpadmin       |   23494 |      21 | select datname,usename,procpid,sess_id,current_query from pg_stat_activity ;
(3 rows)


2.pg_cancel_backend()
pg_cancel_backend (procpid from pg_stat_activity) should be used when a query is running; the function has no effect when the session is idle.

gpadmin=# select pg_cancel_backend(24174) ;
 pg_cancel_backend
-------------------
 t
(1 row)

Note: cancelling the query may take some time, depending on the cleanup/rollback of the transaction.


3.pg_terminate_backend()
pg_terminate_backend (procpid from pg_stat_activity) should be used for terminating idle sessions; avoid using it on an active query where pg_cancel_backend() would suffice.

gpadmin=# select pg_terminate_backend(24285) ;
 pg_terminate_backend
----------------------
 t
(1 row)



4.State of pg_stat_activity after running the above two commands:
gpadmin=# select datname,usename,procpid,sess_id,current_query from pg_stat_activity ;

 datname |    usename    | procpid | sess_id | current_query
---------+---------------+---------+---------+------------------------------------------------------------------------------
 gpadmin | running_user1 |   24174 |      26 |
 gpadmin | gpadmin       |   23494 |      21 | select datname,usename,procpid,sess_id,current_query from pg_stat_activity ;
(2 rows)


     The idle session for running_user1 seen above is the result of pg_cancel_backend: the query has been cancelled by the function, but the user's session is still connected.


pg_stat_activity view explanation in PostgreSQL

Each column is listed below as name (type): description.

datid (oid): OID of the database this backend is connected to
datname (name): Name of the database this backend is connected to
pid (integer): Process ID of this backend
usesysid (oid): OID of the user logged into this backend
usename (name): Name of the user logged into this backend
application_name (text): Name of the application that is connected to this backend
client_addr (inet): IP address of the client connected to this backend. If this field is null, it indicates either that the client is connected via a Unix socket on the server machine or that this is an internal process such as autovacuum.
client_hostname (text): Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when log_hostname is enabled.
client_port (integer): TCP port number that the client is using for communication with this backend, or -1 if a Unix socket is used
backend_start (timestamp with time zone): Time when this process was started, i.e., when the client connected to the server
xact_start (timestamp with time zone): Time when this process's current transaction was started, or null if no transaction is active. If the current query is the first of its transaction, this column is equal to the query_start column.
query_start (timestamp with time zone): Time when the currently active query was started, or, if state is not active, when the last query was started
state_change (timestamp with time zone): Time when the state was last changed
wait_event_type (text): The type of event for which the backend is waiting, if any; otherwise NULL. Possible values are:
  • LWLockNamed: The backend is waiting for a specific named lightweight lock. Each such lock protects a particular data structure in shared memory. wait_event will contain the name of the lightweight lock.
  • LWLockTranche: The backend is waiting for one of a group of related lightweight locks. All locks in the group perform a similar function; wait_event will identify the general purpose of locks in that group.
  • Lock: The backend is waiting for a heavyweight lock. Heavyweight locks, also known as lock manager locks or simply locks, primarily protect SQL-visible objects such as tables. However, they are also used to ensure mutual exclusion for certain internal operations such as relation extension. wait_event will identify the type of lock awaited.
  • BufferPin: The server process is waiting to access a data buffer during a period when no other process can be examining that buffer. Buffer pin waits can be protracted if another process holds an open cursor which last read data from the buffer in question.
wait_event (text): Wait event name if the backend is currently waiting, otherwise NULL. See Table 28-4 for details.
state (text): Current overall state of this backend. Possible values are:
  • active: The backend is executing a query.
  • idle: The backend is waiting for a new client command.
  • idle in transaction: The backend is in a transaction, but is not currently executing a query.
  • idle in transaction (aborted): This state is similar to idle in transaction, except one of the statements in the transaction caused an error.
  • fastpath function call: The backend is executing a fast-path function.
  • disabled: This state is reported if track_activities is disabled in this backend.
backend_xid (xid): Top-level transaction identifier of this backend, if any.
backend_xmin (xid): The current backend's xmin horizon.
query (text): Text of this backend's most recent query. If state is active, this field shows the currently executing query. In all other states, it shows the last query that was executed.

Examples:
Find out the currently running queries:

database1=# SELECT count(*) as cnt, usename, current_query FROM pg_stat_activity GROUP BY usename,current_query ORDER BY cnt DESC;

 cnt |    usename    | current_query
-----+---------------+---------------
   7 | freddykrueger |
   3 | freddykrueger | SELECT name FROM users WHERE id=50;
   1 | postgres      |
(3 rows)

Views in PostgreSQL (dynamic and collected statistics views)



Dynamic Statistics Views:-

Each view is listed as name: description.

pg_stat_activity: One row per server process, showing information related to the current activity of that process, such as state and current query. See pg_stat_activity for details.
pg_stat_replication: One row per WAL sender process, showing statistics about replication to that sender's connected standby server. See pg_stat_replication for details.
pg_stat_wal_receiver: Only one row, showing statistics about the WAL receiver from that receiver's connected server. See pg_stat_wal_receiver for details.
pg_stat_ssl: One row per connection (regular and replication), showing information about SSL used on this connection. See pg_stat_ssl for details.


Collected Statistics Views:

pg_stat_archiver: One row only, showing statistics about the WAL archiver process's activity. See pg_stat_archiver for details.
pg_stat_bgwriter: One row only, showing statistics about the background writer process's activity. See pg_stat_bgwriter for details.
pg_stat_database: One row per database, showing database-wide statistics. See pg_stat_database for details.
pg_stat_database_conflicts: One row per database, showing database-wide statistics about query cancels due to conflict with recovery on standby servers. See pg_stat_database_conflicts for details.
pg_stat_all_tables: One row for each table in the current database, showing statistics about accesses to that specific table. See pg_stat_all_tables for details.
pg_stat_sys_tables: Same as pg_stat_all_tables, except that only system tables are shown.
pg_stat_user_tables: Same as pg_stat_all_tables, except that only user tables are shown.
pg_stat_xact_all_tables: Similar to pg_stat_all_tables, but counts actions taken so far within the current transaction (which are not yet included in pg_stat_all_tables and related views). The columns for numbers of live and dead rows and vacuum and analyze actions are not present in this view.
pg_stat_xact_sys_tables: Same as pg_stat_xact_all_tables, except that only system tables are shown.
pg_stat_xact_user_tables: Same as pg_stat_xact_all_tables, except that only user tables are shown.
pg_stat_all_indexes: One row for each index in the current database, showing statistics about accesses to that specific index. See pg_stat_all_indexes for details.
pg_stat_sys_indexes: Same as pg_stat_all_indexes, except that only indexes on system tables are shown.
pg_stat_user_indexes: Same as pg_stat_all_indexes, except that only indexes on user tables are shown.
pg_statio_all_tables: One row for each table in the current database, showing statistics about I/O on that specific table. See pg_statio_all_tables for details.
pg_statio_sys_tables: Same as pg_statio_all_tables, except that only system tables are shown.
pg_statio_user_tables: Same as pg_statio_all_tables, except that only user tables are shown.
pg_statio_all_indexes: One row for each index in the current database, showing statistics about I/O on that specific index. See pg_statio_all_indexes for details.
pg_statio_sys_indexes: Same as pg_statio_all_indexes, except that only indexes on system tables are shown.
pg_statio_user_indexes: Same as pg_statio_all_indexes, except that only indexes on user tables are shown.
pg_statio_all_sequences: One row for each sequence in the current database, showing statistics about I/O on that specific sequence. See pg_statio_all_sequences for details.
pg_statio_sys_sequences: Same as pg_statio_all_sequences, except that only system sequences are shown. (Presently, no system sequences are defined, so this view is always empty.)
pg_statio_user_sequences: Same as pg_statio_all_sequences, except that only user sequences are shown.
pg_stat_user_functions: One row for each tracked function, showing statistics about executions of that function. See pg_stat_user_functions for details.
pg_stat_xact_user_functions: Similar to pg_stat_user_functions, but counts only calls during the current transaction (which are not yet included in pg_stat_user_functions).
pg_stat_progress_vacuum: One row for each backend (including autovacuum worker processes) running VACUUM, showing current progress. See Section 28.4.1.
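As an illustration, pg_stat_user_tables can hint at missing indexes: tables that are read mostly by sequential scans are candidates for closer inspection. A sketch:

```sql
-- Tables ordered by sequential-scan count; a high seq_scan combined with
-- a low idx_scan on a table with many live rows often indicates a
-- missing index:
SELECT relname, seq_scan, idx_scan, n_live_tup
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 10;
```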


Statistics Collector in PostgreSQL

Statistics collection is configured in postgresql.conf; the system can be told to collect or skip each kind of information:

The parameter track_counts controls whether statistics are collected about table and index accesses.
The parameter track_functions enables tracking of usage of user-defined functions.
The parameter track_io_timing enables monitoring of block read and write times.
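The current values can be inspected with SHOW and, on PostgreSQL 9.4 and later, changed cluster-wide with ALTER SYSTEM instead of editing postgresql.conf by hand; a sketch:

```sql
-- Inspect the current settings:
SHOW track_counts;
SHOW track_functions;
SHOW track_io_timing;

-- Enable I/O timing (requires superuser; takes effect after a
-- configuration reload):
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();
```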


Viewing statistics:
The statistics collector transmits the collected information to other PostgreSQL processes through temporary files. These files are stored in the directory named by the stats_temp_directory parameter, pg_stat_tmp by default. For better performance, stats_temp_directory can be pointed at a RAM-based file system, decreasing physical I/O requirements. When the server shuts down cleanly, a permanent copy of the statistics data is stored in the pg_stat subdirectory, so that statistics can be retained across server restarts. When recovery is performed at server start (e.g. after immediate shutdown, server crash, and point-in-time recovery), all statistics counters are reset.
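As a sketch, stats_temp_directory could be pointed at a RAM-based (tmpfs) mount in postgresql.conf; the path /var/run/pg_stat_tmp below is an assumption, and the directory must exist and be writable by the postgres user:

```
# postgresql.conf -- keep statistics temp files off physical disk
stats_temp_directory = '/var/run/pg_stat_tmp'
```

Changing this setting only requires a configuration reload, not a restart.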

When using the statistics to monitor collected data, it is important to realize that the information does not update instantaneously. Each individual server process transmits new statistical counts to the collector just before going idle; so a query or transaction still in progress does not affect the displayed totals. Also, the collector itself emits a new report at most once per PGSTAT_STAT_INTERVAL milliseconds (500 ms unless altered while building the server). So the displayed information lags behind actual activity. However, current-query information collected by track_activities is always up-to-date.

Another important point is that when a server process is asked to display any of these statistics, it first fetches the most recent report emitted by the collector process and then continues to use this snapshot for all statistical views and functions until the end of its current transaction. So the statistics will show static information as long as you continue the current transaction. Similarly, information about the current queries of all sessions is collected when any such information is first requested within a transaction, and the same information will be displayed throughout the transaction. This is a feature, not a bug, because it allows you to perform several queries on the statistics and correlate the results without worrying that the numbers are changing underneath you. But if you want to see new results with each query, be sure to do the queries outside any transaction block. Alternatively, you can invoke pg_stat_clear_snapshot(), which will discard the current transaction’s statistics snapshot (if any). The next use of statistical information will cause a new snapshot to be fetched.
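The snapshot behaviour is easy to observe; a sketch reusing the your_table example from above:

```sql
BEGIN;
-- The first access fetches a snapshot of the statistics:
SELECT seq_scan FROM pg_stat_user_tables WHERE relname = 'your_table';
-- Repeating that query returns the same number, no matter how much
-- activity happens elsewhere, until the snapshot is discarded:
SELECT pg_stat_clear_snapshot();
-- The next access fetches a fresh snapshot:
SELECT seq_scan FROM pg_stat_user_tables WHERE relname = 'your_table';
COMMIT;
```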

A transaction can also see its own statistics (as yet untransmitted to the collector) in the views pg_stat_xact_all_tables, pg_stat_xact_sys_tables, pg_stat_xact_user_tables, and pg_stat_xact_user_functions. These numbers do not act as stated above; instead they update continuously throughout the transaction.
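A sketch of this, again assuming a table called your_table that already contains at least one row:

```sql
BEGIN;
INSERT INTO your_table SELECT * FROM your_table LIMIT 1;
-- The insert above is already counted here, even before COMMIT:
SELECT n_tup_ins FROM pg_stat_xact_user_tables WHERE relname = 'your_table';
-- ...whereas pg_stat_user_tables will only reflect it after the
-- transaction ends and the backend reports to the collector:
SELECT n_tup_ins FROM pg_stat_user_tables WHERE relname = 'your_table';
ROLLBACK;
```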