It would be nice to also alter this to support replication slots.
-#PgBouncer Slony-I replication set follower
+# PgBouncer Slony-I replication set & WAL replication follower
This script is a command-line utility to monitor the nodes in a Slony-I cluster
and reconfigure pgbouncer to follow an origin or subscriber of replication sets.
+Additionally, the script can be used to follow WAL streaming primary/secondary
+databases.
The idea is pretty simple; periodically poll the slony schema to identify origin
and subscriber nodes and repoint an existing pgbouncer instance to follow either
-the origin for read-write or a subscriber for read-only operation. The script
-takes a template pgbouncer.ini file containing all the desired settings except
+the origin for read-write or a subscriber for read-only operation. For binary WAL
+replication we instead periodically poll status functions on all servers to build a
+picture of replication state (pg_is_in_recovery(), pg_last_xlog_receive_location(),
+pg_current_xlog_location() and pg_last_xact_replay_timestamp()).
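Concretely, the per-server status check boils down to a single query of this shape (this mirrors the query the script issues; the column aliases are illustrative only):

```sql
-- One row per server: recovery flag, current xlog position and apply lag.
-- On the primary pg_last_xact_replay_timestamp() is NULL, so lag coalesces to 0.
SELECT pg_is_in_recovery()                             AS is_secondary,
       CASE WHEN pg_is_in_recovery() THEN pg_last_xlog_receive_location()
            ELSE pg_current_xlog_location()
       END                                             AS xlog_location,
       COALESCE(extract(epoch FROM current_timestamp
                        - pg_last_xact_replay_timestamp()), 0) AS apply_lag_secs;
```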
+
+Side note:
+>Support for binary/WAL replication is an afterthought, so binary replicas are
+>considered followers of the primary if they're on the same timeline. In future
+>it should be possible to improve this for replicas using replication slots
+>and pull all information required from the primary by looking at pg_replication_slots/
+>pg_stat_replication. It would also hopefully be possible to support following
+>logical replication origins similar to how we follow slony origins.
+
+The script takes a template pgbouncer.ini file containing all the desired settings except
for the target database and will write a copy with the target database and reload
-pgbouncer. In 'ro' mode the script will try and choose a subscriber node closest
-to pgbouncer host if IPv4 addressing has been used for the slony conninfos.
+pgbouncer. In 'ro' mode for slony replicas (not currently WAL replicas) the script
+will try to choose the subscriber node closest to the pgbouncer host, provided IPv4
+addressing has been used for the slony conninfos.
## Example usage
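For example, the script might be invoked as below. Only the `-f` and `-D` switches come from the script's usage string; the config path is illustrative, and treating the non-daemon form as a one-shot reconfigure is an assumption based on the daemon-mode options:

```
# One-shot: read the config, repoint pgbouncer if needed, then exit (assumed behaviour)
./pgbouncer_follower.pl -f /etc/pgbouncer/pgbouncer_follower_ro.conf

# Daemon mode: poll every follower_poll_interval milliseconds
./pgbouncer_follower.pl -f /etc/pgbouncer/pgbouncer_follower_ro.conf -D
```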
## Configuration options
-| Section | Parameter | Type | Default | Comment
-|:----------|:-----------------------|:--------|:--------------------------------------------|:-----------------------------------
-| Slony | slony_user | text | *'slony'* | Username used to connect to PostgreSQL and select from slony schema tables
-| Slony | slony_pass | text | *''* | Recommended to leave blank and use .pgpass file
-| Slony | slony_cluster_name | text | *'replication'* | Name of slony cluster (without leading underscore of schema name)
-| Slony | server_conninfo | text | *null* | Conninfo string for slony a node, can be specified multiple times
-| pgBouncer | debug | boolean | *false* | Churn out debugging info to log file / stdout
-| pgBouncer | follower_poll_interval | integer | *1000* | Interval to poll slony cluster state when in daemon mode
-| pgBouncer | sets_to_follow | text | *1* | Comma separated list of sets to follow or 'all' to follow all sets
-| pgBouncer | pool_mode | 'ro/rw' | *'rw'* | Select a read-only subscriber or the origin for read-write
-| pgBouncer | pool_all_databases | boolean | *'false'* | If true uses wildcard for database name in pgbouncer.ini, false uses slony database
-| pgBouncer | auth_user | text | *''* | If set auth_user will be appended to conninfo written in [databases] section
-| pgBouncer | only_follow_origins | boolean | *'false'* | If true pgbouncer will only be reconfigured and reloaded when sets move origin
-| pgBouncer | status_file | text | *'/tmp/pgbouncer_follower_%mode.status'* | File used to store a hash depicting the state of the cluster
-| pgBouncer | log_file | text | *'/tmp/pgbouncer_follower_%mode.log'* | Log file for the script
-| pgBouncer | pid_file | text | *'/tmp/pgbouncer_follower_%mode.log'* | PID file for the script when run as a daemon
-| pgBouncer | pool_conf_template | text | *'/etc/pgbouncer/pgbouncer_%mode.template'* | Template pgbouncer.ini file with your settings and a blank [databases] section
-| pgBouncer | pool_conf_target | text | *'/etc/pgbouncer/pgbouncer_%mode.ini'* | Target pgbouncer.ini file to write a copy of pool_conf_template with a [databases] section to
-| pgBouncer | pool_reload_command | text | *'/etc/init.d/pgbouncer_%mode reload"'* | System command to execute to reload pgbouncer instance
-| pgBouncer | max_ro_lag | integer | *0* | Maximum lag in seconds allowed for subscriber nodes when running in ro mode. 0 = don't monitor lag.
+| Section | Parameter | Type | Default | Comment
+|:------------|:-----------------------|:--------|:--------------------------------------------|:-----------------------------------
+| Replication | replication_user       | text    | *'slony'*                                   | Username used to connect to PostgreSQL to read slony schema tables or streaming replication state
+| Replication | replication_pass | text | *''* | Recommended to leave blank and use .pgpass file
+| Replication | replication_method | text | *'slony'* | Specifies replication method in use, possible values 'slony' or 'wal'
+| Slony | slony_cluster_name | text | *'replication'* | Name of slony cluster (without leading underscore of schema name)
+| Server      | server_conninfo        | text    | *null*                                      | Conninfo string for a server; can be specified multiple times. For slony only one conninfo is required (listing all nodes is recommended), for WAL replication all servers are required
+| pgBouncer | debug | boolean | *false* | Churn out debugging info to log file / stdout
+| pgBouncer   | follower_poll_interval | integer | *1000*                                      | Interval in milliseconds to poll cluster state when in daemon mode
+| pgBouncer   | slony_sets_to_follow   | text    | *1*                                         | Comma separated list of sets to follow or 'all' to follow all sets (slony only)
+| pgBouncer | pool_mode | 'ro/rw' | *'rw'* | Select a read-only subscriber or the origin for read-write
+| pgBouncer | pool_all_databases | boolean | *'false'* | If true uses wildcard for database name in pgbouncer.ini, false uses slony database
+| pgBouncer | auth_user | text | *''* | If set auth_user will be appended to conninfo written in [databases] section
+| pgBouncer | only_follow_origins | boolean | *'false'* | If true pgbouncer will only be reconfigured and reloaded when sets move origin
+| pgBouncer | status_file | text | *'/tmp/pgbouncer_follower_%mode.status'* | File used to store a hash depicting the state of the cluster
+| pgBouncer | log_file | text | *'/tmp/pgbouncer_follower_%mode.log'* | Log file for the script
+| pgBouncer   | pid_file               | text    | *'/tmp/pgbouncer_follower_%mode.pid'*       | PID file for the script when run as a daemon
+| pgBouncer | pool_conf_template | text | *'/etc/pgbouncer/pgbouncer_%mode.template'* | Template pgbouncer.ini file with your settings and a blank [databases] section
+| pgBouncer | pool_conf_target | text | *'/etc/pgbouncer/pgbouncer_%mode.ini'* | Target pgbouncer.ini file to write a copy of pool_conf_template with a [databases] section to
+| pgBouncer   | pool_reload_command    | text    | *'/etc/init.d/pgbouncer_%mode reload'*      | System command to execute to reload pgbouncer instance
+| pgBouncer | max_ro_lag | integer | *0* | Maximum lag in seconds allowed for subscriber nodes when running in ro mode. 0 = don't monitor lag.
The status_file, log_file, pid_file, pool_conf_template, pool_conf_target and
pool_reload_command parameters can contain the following special values:
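For instance, with pool_mode=ro the %mode token expands as follows (the values shown are purely illustrative):

```
log_file=/tmp/pgbouncer_follower_%mode.log          # becomes /tmp/pgbouncer_follower_ro.log
pool_conf_target=/etc/pgbouncer/pgbouncer_%mode.ini # becomes /etc/pgbouncer/pgbouncer_ro.ini
```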
# Script: pgbouncer_follower.pl
# Copyright: 22/04/2012: v1.0.1 Glyn Astill <glyn@8kb.co.uk>
-# Requires: Perl 5.10.1+, PostgreSQL 9.0+ Slony-I 2.0+
+# Requires: Perl 5.10.1+, PostgreSQL 9.0+, Slony-I 2.0+ or streaming replication
#
# This script is a command-line utility to monitor Slony-I or streaming
# replication clusters and reconfigure pgbouncer to follow replication.
use constant true => 1;
my $g_usage = 'Pass configuration file: pgbouncer_follower.pl -f <configuration_path> [-D]';
+my $g_method = "slony";
my $g_debug = false;
my $g_pidfile = "/tmp/pgbouncer_follower_%mode.pid";
my $g_logfile = "/tmp/pgbouncer_follower_%mode.log";
if ($g_debug) {
    printLogLn($g_logfile, "DEBUG: Logging to '$g_logfile'");
- printLogLn($g_logfile, "\t Watching sets $g_clsets in Slony-I cluster '$g_clname' polling every ${g_poll_interval}ms");
- printLogLn($g_logfile, "\t Following " . ($g_all_databases ? "all databases" : "replicated database only") . " on an '$g_mode' node for the above replicated sets");
+ if ($g_method eq 'slony') {
+ printLogLn($g_logfile, "\t Watching sets $g_clsets in Slony-I cluster '$g_clname' polling every ${g_poll_interval}ms");
+ printLogLn($g_logfile, "\t Following " . ($g_all_databases ? "all databases" : "replicated database only") . " on an '$g_mode' node for the above replicated sets");
+ }
+ elsif ($g_method eq 'wal') {
+ printLogLn($g_logfile, "\t Watching streaming replication lag polling every ${g_poll_interval}ms");
+ }
+ else {
+ printLogLn($g_logfile, "\t ERROR: Unknown replication method: '$g_method'");
+ exit(1);
+ }
printLogLn($g_logfile, "\t Template config '$g_conf_template' Target config '$g_conf_target'");
printLogLn($g_logfile, "\t Reload command is '$g_reload_command'");
printLogLn($g_logfile, "\t Status stored in '$g_status_file'");
foreach my $conninfo (@g_conninfos) {
$conninfo_read++;
eval {
- @g_cluster = loadCluster($g_clname, $conninfo, $g_user, $g_pass, $g_addr, $g_clsets);
+ if ($g_method eq 'slony') {
+ @g_cluster = loadCluster($g_clname, $conninfo, $g_user, $g_pass, $g_addr, $g_clsets);
+ }
+ elsif ($g_method eq 'wal') {
+ @g_cluster = loadBinCluster($g_user, $g_pass);
+ }
if ($g_debug) {
- printLogLn($g_logfile, "DEBUG: ($conninfo_read) Cluster with " . scalar(@g_cluster) . " nodes read from conninfo: $conninfo");
+ printLogLn($g_logfile, "DEBUG: ($conninfo_read) " . (($g_method eq 'slony')?'Slony':'Streaming replication') . " cluster with " . scalar(@g_cluster) . " nodes " . (($g_method eq 'slony')?"read from conninfo: $conninfo":"provided via config conninfos"));
foreach (@g_cluster) {
printLogLn($g_logfile, "DEBUG: Node #" . @$_[0] . " DETAIL: " . @$_[1] . " " . @$_[2] . " " . (@$_[3] // "<NONE>") . " " . @$_[4] . " " . (@$_[5] // "<NONE>") . " " . @$_[6] . " " . @$_[7] . " " . @$_[8] . " " . @$_[9] . " " . @$_[10] . " " . @$_[11]);
}
my $target_auth = "";
if ($g_debug) {
- printLogLn($g_logfile, "DEBUG: All databases = " . ($g_all_databases ? 'true' : 'false'));
+ printLogLn($g_logfile, "DEBUG: All databases = " . ($all_databases ? 'true' : 'false'));
}
if ($g_auth_user ne "") {
if ($g_debug) {
printLogLn ($g_logfile, "DEBUG: Configuration for " . ($target_is_origin ? "origin" : "subscriber") . " of sets $target_sets node #$target_node_id $target_host:$target_port");
}
- if ($all_databases) {
+ if ($all_databases || $target_db eq '*') {
$_ =~ s/(\[databases\])/$1\n\* = host=$target_host port=$target_port$target_auth/;
}
else {
return @cluster;
}
+sub loadBinCluster {
+ my $dbuser = shift;
+ my $dbpass = shift;
+
+ my $dsn;
+ my $dbh;
+ my $sth;
+ my $query;
+ my $recovery;
+ my $xlog_location;
+ my $apply_lag;
+ my $primaries = 0;
+
+ my @parts;
+ my $timeline;
+ my $location;
+ my $primary_location;
+
+ my $hostname;
+ my $port;
+ my $database;
+ my @tmp_cluster;
+ my @cluster;
+ my $node_id = 1;
+ $g_best_config = true;
+
+ foreach my $conninfo (@g_conninfos) {
+        $dsn = "DBI:Pg:$conninfo;";
+
+ eval {
+ $dbh = DBI->connect($dsn, $dbuser, $dbpass, {RaiseError => 1});
+ # Check to see if the server is a secondary, and also pull the current xlog
+ # location and the apply lag using pg_last_xact_replay_timestamp to get the
+ # last commit timestamp from the primary applied on the secondary.
+ # We will need to compare the current receive location to the primary xlog
+ # location, if they differ we can then use the apply_lag value; we'll have
+ # to post-process this.
+            # In 9.6+ we might want to pull the system identifier via pg_control_system() too
+ $query = "SELECT pg_is_in_recovery(),
+ CASE
+ WHEN pg_is_in_recovery() THEN pg_last_xlog_receive_location()
+ ELSE pg_current_xlog_location()
+ END,
+ COALESCE(extract(epoch from current_timestamp-pg_last_xact_replay_timestamp()),0)";
+ $sth = $dbh->prepare($query);
+ $sth->execute();
+ ($recovery, $xlog_location, $apply_lag) = $sth->fetchrow;
+ $sth->finish;
+
+ ($port) = $conninfo =~ m/port=(.+?)(\s|$)/g;
+ ($hostname) = $conninfo =~ m/host=(.+?)(\s|$)/g;
+ ($database) = $conninfo =~ m/dbname=(.+?)(\s|$)/g;
+ @parts = split('/', $xlog_location, 2);
+ $timeline = qtrim(trim($parts[0]));
+ $location = hex(qtrim(trim($parts[1])));
+
+ if ($g_debug) {
+ printLogLn($g_logfile, "DEBUG: Server: $hostname:$port " . ($recovery ? 'secondary' : 'primary') . " at $xlog_location ($timeline/$location)");
+ }
+
+ # For WAL replication we assume if we can contact a server then it is active,
+ # which isn't strictly true, but nodes that have fallen behind can be excluded
+ # via the max_ro_lag setting. We also substitute timeline+1 for the slony
+ # replication set.
+ if (!$recovery) {
+ $primaries++;
+ $primary_location = $xlog_location;
+ my @node=(1,'Primary',0,$timeline,"ACTIVE",($timeline+1),$conninfo,$database,$hostname,$port,1,$xlog_location,$apply_lag);
+ push(@tmp_cluster, \@node);
+ }
+ else {
+ $node_id++;
+ my @node=($node_id,"Secondary".($node_id-1),1,undef,"ACTIVE",($timeline+1),$conninfo,$database,$hostname,$port,$node_id,$xlog_location,$apply_lag);
+ push(@tmp_cluster, \@node);
+ }
+ };
+ if ($@) {
+ printLogLn($g_logfile, "ERROR: Could not connect to server with conninfo: $conninfo DETAIL: $@");
+ }
+ }
+
+ # Error if more than one primary discovered.
+ if ($primaries != 1) {
+ printLogLn($g_logfile, "ERROR: Invalid quantity of primaries: $primaries");
+        die "invalid number of primaries";
+ }
+    # Do the post processing mentioned above: if the xlog locations differ
+    # then the apply lag is genuine, otherwise there has simply been no
+    # activity on the primary.
+ else {
+ foreach (@tmp_cluster) {
+            $apply_lag = ((@$_[12] > 0 && @$_[11] ne $primary_location) ? @$_[12] : 0);
+ my @node=(@$_[0],@$_[1],@$_[2],@$_[3],@$_[4],@$_[5],@$_[6],@$_[7],@$_[8],@$_[9],@$_[10],$apply_lag);
+ push(@cluster, \@node);
+ }
+ }
+
+ return @cluster;
+}
+
sub getConfig {
my @fields;
my $success = false;
@fields = split('=', $_, 2);
$value = qtrim(trim($fields[1]));
given(lc($fields[0])) {
+ when(/\breplication_method\b/i) {
+ $g_method = $value;
+ }
when(/\bdebug\b/i) {
$g_debug = checkBoolean($value);
}
when(/\blog_file\b/i) {
$g_logfile = $value;
}
- when(/\bslony_user\b/i) {
+            when(/\bslony_user\b/i) { # Deprecated; use replication_user
+ $g_user = $value;
+ }
+            when(/\bslony_pass\b/i) { # Deprecated; use replication_pass
+ $g_pass = $value;
+ }
+ when(/\breplication_user\b/i) {
$g_user = $value;
}
- when(/\bslony_pass\b/i) {
+ when(/\breplication_pass\b/i) {
$g_pass = $value;
}
when(/\bslony_cluster_name\b/i) {
#------------------------------------------------------------------------------
-# Slony cluster information
+# Replication cluster information
#------------------------------------------------------------------------------
-slony_user=slony # Username to use when connecting to PostgreSQL server to
- # retrieve slony cluster configuration
+replication_method=slony # Possible values 'slony' or 'wal'
-slony_pass= # Password for above, leave blank to use .pgpass
+replication_user=repuser # Username to use when connecting to PostgreSQL server to
+ # retrieve slony cluster configuration / streaming rep state
+
+replication_pass= # Password for above, leave blank to use .pgpass
slony_cluster_name=test_replication # Slony-I replication cluster name
-server_conninfo=dbname=TEST host=localhost port=5432 # Conninfo of server to to retrieve slony cluster configuration
-server_conninfo=dbname=TEST host=localhost port=5433 # from, accepts multiple servers and will try in order given.
-server_conninfo=dbname=TEST host=localhost port=5434
+#------------------------------------------------------------------------------
+# Server information
+#------------------------------------------------------------------------------
+server_conninfo=dbname=TEST host=localhost port=5432 # Conninfo of servers. For slony this list is used to pull cluster
+server_conninfo=dbname=TEST host=localhost port=5433 # configuration from the first contactable node, so not all nodes
+server_conninfo=dbname=TEST host=localhost port=5434 # are necessarily required. For WAL replication all servers are required.
server_conninfo=dbname=TEST host=localhost port=5435
#------------------------------------------------------------------------------
slony_sets_to_follow=all # Comma separated list of set IDs to follow, e.g. "1,2,3,4" or "all"
only_follow_origins=false # If true pgbouncer will only be reconfigured and reloaded
- # when the origin of a set moves. If false any changes to origins,
- # providers or subscribers will initiate a reconfigure and reload.
+ # when the origin of a set / primary moves. If false any
+ # changes to origins, providers or subscribers will initiate
+ # a reconfigure and reload.
pool_mode=ro # Values are rw/ro. "rw" follows the origin node
# whereas "ro" will follow a subscriber of the sets
pool_all_databases=false # If true uses a wildcard in the conninfo for the pooler
# thus hosting all databases on the node. If false
- # only the replicated database is hosted.
+                           # only the database in the target server's conninfo is hosted.
-auth_user= # If set auth_user will be appended to conninfo written
+auth_user= # If set auth_user will be appended to conninfo written
# in [databases] section.
-status_file=/var/slony/pgbouncer_follower/pgbouncer_follower_%mode.status # File used to store tate of the cluster
-log_file=/var/slony/pgbouncer_follower/pgbouncer_follower_%mode.log # Log for pool_follower script
+status_file=/var/run/pgbouncer_follower_%mode.status # File used to store state of the cluster
+log_file=/var/log/pgbouncer_follower_%mode.log # Log for pgbouncer_follower script
pid_file=/var/run/pgbouncer_follower_%mode.pid # Pidfile for poll mode
pool_conf_template=/etc/pgbouncer/pgbouncer_%mode.template # Template configuration file
pool_conf_target=/etc/pgbouncer/pgbouncer_%mode.ini # Target configuration file to be modified
-pool_reload_command=/etc/init.d/pgbouncer_%mode reload # Command used to action configuration changes
+pool_reload_command=/etc/init.d/pgbouncer_%mode restart # Command used to action configuration changes
                                                        # Special values for status_file, log_file, pool_conf_template
                                                        # pool_conf_target and pool_reload_command parameters:
                                                        # %mode - Pool Mode
                                                        # %clname - Slony cluster name
-max_ro_lag=1200 # Maximum lag in seconds allowed for subscriber nodes when
+max_ro_lag=1200 # Maximum lag in seconds allowed for subscriber nodes when
# running in ro mode. 0 = don't monitor lag.
#------------------------------------------------------------------------------
-# Slony cluster information
+# Replication cluster information
#------------------------------------------------------------------------------
-slony_user=slony # Username to use when connecting to PostgreSQL server to
- # retrieve slony cluster configuration
+replication_method=slony # Possible values 'slony' or 'wal'
-slony_pass= # Password for above, leave blank to use .pgpass
+replication_user=repuser # Username to use when connecting to PostgreSQL server to
+ # retrieve slony cluster configuration / streaming rep state
+
+replication_pass= # Password for above, leave blank to use .pgpass
slony_cluster_name=test_replication # Slony-I replication cluster name
-server_conninfo=dbname=TEST host=localhost port=5432 # Conninfo of server to to retrieve slony cluster configuration
-server_conninfo=dbname=TEST host=localhost port=5433 # from, accepts multiple servers and will try in order given.
-server_conninfo=dbname=TEST host=localhost port=5434
+#------------------------------------------------------------------------------
+# Server information
+#------------------------------------------------------------------------------
+server_conninfo=dbname=TEST host=localhost port=5432 # Conninfo of servers. For slony this list is used to pull cluster
+server_conninfo=dbname=TEST host=localhost port=5433 # configuration from the first contactable node, so not all nodes
+server_conninfo=dbname=TEST host=localhost port=5434 # are necessarily required. For WAL replication all servers are required.
server_conninfo=dbname=TEST host=localhost port=5435
#------------------------------------------------------------------------------
slony_sets_to_follow=all # Comma separated list of set IDs to follow, e.g. "1,2,3,4" or "all"
only_follow_origins=true # If true pgbouncer will only be reconfigured and reloaded
- # when the origin of a set moves. If false any changes to origins,
- # providers or subscribers will initiate a reconfigure and reload.
+ # when the origin of a set / primary moves. If false any
+ # changes to origins, providers or subscribers will initiate
+ # a reconfigure and reload.
pool_mode=rw # Values are rw/ro. "rw" follows the origin node
# whereas "ro" will follow a subscriber of the sets
pool_all_databases=false # If true uses a wildcard in the conninfo for the pooler
# thus hosting all databases on the node. If false
- # only the replicated database is hosted.
+                           # only the database in the target server's conninfo is hosted.
auth_user= # If set auth_user will be appended to conninfo written
# in [databases] section.
-status_file=/var/slony/pgbouncer_follower/pgbouncer_follower_%mode.status # File used to store tate of the cluster
-log_file=/var/slony/pgbouncer_follower/pgbouncer_follower_%mode.log # Log for pool_follower script
+status_file=/var/run/pgbouncer_follower_%mode.status # File used to store state of the cluster
+log_file=/var/log/pgbouncer_follower_%mode.log # Log for pgbouncer_follower script
pid_file=/var/run/pgbouncer_follower_%mode.pid # Pidfile for poll mode
pool_conf_template=/etc/pgbouncer/pgbouncer_%mode.template # Template configuration file
pool_conf_target=/etc/pgbouncer/pgbouncer_%mode.ini # Target configuration file to be modified
# pool_conf_target and pool_reload_command parameters:
# %mode - Pool Mode
# %clname - Slony cluster name
-max_ro_lag=0 # Maximum lag in seconds allowed for subscriber nodes when
+max_ro_lag=0 # Maximum lag in seconds allowed for subscriber nodes when
# running in ro mode. 0 = don't monitor lag.
--- /dev/null
+#------------------------------------------------------------------------------
+# Replication cluster information
+#------------------------------------------------------------------------------
+replication_method=wal # Possible values 'slony' or 'wal'
+
+replication_user=repuser # Username to use when connecting to PostgreSQL server to
+ # retrieve slony cluster configuration / streaming rep state
+
+replication_pass= # Password for above, leave blank to use .pgpass
+
+slony_cluster_name=test_replication # Slony-I replication cluster name
+
+#------------------------------------------------------------------------------
+# Server information
+#------------------------------------------------------------------------------
+server_conninfo=dbname=test host=localhost port=5432 # Conninfo of servers. For slony this list is used to pull cluster
+server_conninfo=dbname=test host=localhost port=5433 # configuration from the first contactable node, so not all nodes
+server_conninfo=dbname=test host=localhost port=5434 # are necessarily required. For WAL replication all servers are required.
+
+#------------------------------------------------------------------------------
+# PgBouncer follower information
+#------------------------------------------------------------------------------
+debug=false
+follower_poll_interval=5000 # Cluster state poll interval (milliseconds)
+
+slony_sets_to_follow=all # Comma separated list of set IDs to follow, e.g. "1,2,3,4" or "all"
+
+only_follow_origins=false # If true pgbouncer will only be reconfigured and reloaded
+ # when the origin of a set / primary moves. If false any
+ # changes to origins, providers or subscribers will initiate
+ # a reconfigure and reload.
+
+pool_mode=ro # Values are rw/ro. "rw" follows the origin node
+ # whereas "ro" will follow a subscriber of the sets
+ # specified in slony_sets_to_follow
+
+pool_all_databases=true # If true uses a wildcard in the conninfo for the pooler
+ # thus hosting all databases on the node. If false
+                           # only the database in the target server's conninfo is hosted.
+
+auth_user= # If set auth_user will be appended to conninfo written
+ # in [databases] section.
+
+status_file=/var/run/pgbouncer_follower.status # File used to store state of the cluster
+log_file=/var/log/pgbouncer_follower.log # Log for pgbouncer_follower script
+pid_file=/var/run/pgbouncer_follower.pid # Pidfile for poll mode
+pool_conf_template=/etc/pgbouncer/pgbouncer.template # Template configuration file
+pool_conf_target=/etc/pgbouncer/pgbouncer.ini # Target configuration file to be modified
+pool_reload_command=/etc/init.d/pgbouncer restart # Command used to action configuration changes
+                                                # Special values for status_file, log_file, pool_conf_template
+ # pool_conf_target and pool_reload_command parameters:
+ # %mode - Pool Mode
+ # %clname - Slony cluster name
+max_ro_lag=1200 # Maximum lag in seconds allowed for subscriber nodes when
+ # running in ro mode. 0 = don't monitor lag.
--- /dev/null
+#------------------------------------------------------------------------------
+# Replication cluster information
+#------------------------------------------------------------------------------
+replication_method=wal # Possible values 'slony' or 'wal'
+
+replication_user=repuser # Username to use when connecting to PostgreSQL server to
+ # retrieve slony cluster configuration / streaming rep state
+
+replication_pass= # Password for above, leave blank to use .pgpass
+
+slony_cluster_name=test_replication # Slony-I replication cluster name
+
+#------------------------------------------------------------------------------
+# Server information
+#------------------------------------------------------------------------------
+server_conninfo=dbname=test host=localhost port=5432 # Conninfo of servers. For slony this list is used to pull cluster
+server_conninfo=dbname=test host=localhost port=5433 # configuration from the first contactable node, so not all nodes
+server_conninfo=dbname=test host=localhost port=5434 # are necessarily required. For WAL replication all servers are required.
+
+#------------------------------------------------------------------------------
+# PgBouncer follower information
+#------------------------------------------------------------------------------
+debug=false
+follower_poll_interval=5000 # Cluster state poll interval (milliseconds)
+
+slony_sets_to_follow=all # Comma separated list of set IDs to follow, e.g. "1,2,3,4" or "all"
+
+only_follow_origins=true # If true pgbouncer will only be reconfigured and reloaded
+ # when the origin of a set / primary moves. If false any
+ # changes to origins, providers or subscribers will initiate
+ # a reconfigure and reload.
+
+pool_mode=rw # Values are rw/ro. "rw" follows the origin node
+ # whereas "ro" will follow a subscriber of the sets
+ # specified in slony_sets_to_follow
+
+pool_all_databases=true # If true uses a wildcard in the conninfo for the pooler
+ # thus hosting all databases on the node. If false
+                           # only the database in the target server's conninfo is hosted.
+
+auth_user= # If set auth_user will be appended to conninfo written
+ # in [databases] section.
+
+status_file=/var/run/pgbouncer_follower.status # File used to store state of the cluster
+log_file=/var/log/pgbouncer_follower.log # Log for pgbouncer_follower script
+pid_file=/var/run/pgbouncer_follower.pid # Pidfile for poll mode
+pool_conf_template=/etc/pgbouncer/pgbouncer.template # Template configuration file
+pool_conf_target=/etc/pgbouncer/pgbouncer.ini # Target configuration file to be modified
+pool_reload_command=/etc/init.d/pgbouncer restart # Command used to action configuration changes
+                                                # Special values for status_file, log_file, pool_conf_template
+ # pool_conf_target and pool_reload_command parameters:
+ # %mode - Pool Mode
+ # %clname - Slony cluster name
+max_ro_lag=0 # Maximum lag in seconds allowed for subscriber nodes when
+ # running in ro mode. 0 = don't monitor lag.