1.) Install Google Authenticator on your Android or iOS device.
2.) Install the Google Authenticator PAM module on the machine where your PostgreSQL server runs, as in Steps 1 - 4 of this guide.
3.) Link your device to the account on that machine.
4.) Create a login role for the database, but without a password. Since roles without a password can never log in regularly (see the PASSWORD remarks here), this restricts the role to PAM logins.
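Such a role can be created like this (the role name alice is just a placeholder):

```sql
-- No PASSWORD clause: regular password-based authentication (e.g. md5)
-- will always fail for this role, so it can only get in via PAM.
CREATE ROLE alice LOGIN;
```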
5.) Configure a PAM service for PostgreSQL. E.g. create a file named postgresql in the directory where your PAM configs live (on Ubuntu this is /etc/pam.d/). The file should look like this:
auth sufficient pam_google_authenticator.so
6.) Configure PostgreSQL to use that PAM service. E.g. a line in pg_hba.conf could look like this:
hostssl all all 127.0.0.1/32 pam pamservice=postgresql
And that's basically it. The next time you log in, PostgreSQL will ask you for a password that is generated individually on your device.
Of course, you can use all kinds of PAM modules with PostgreSQL like this.
Unfortunately, I also found a few caveats along the way. :-(
First, PostgreSQL clients will ask for only one password, regardless of whether you chain n PAM modules for n-factor authentication.
So if you, e.g., chain a PAM against LDAP with Google Authenticator as the second factor, this won't work. This seems to be a shortcoming of the PAM implementation in PostgreSQL, which does not expect multiple password prompts. N-factor authentication is still possible, but only one PAM module can prompt for a password. If the other factors are hardware devices like a fingerprint scanner that does not prompt for a password, you are fine.
Alternatively, you can provide your own PAM module that takes all passwords in one prompt and handles them internally.
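As a sketch, such a two-factor stack in /etc/pam.d/postgresql might look like this; pam_fprintd.so stands in here for a hardware-backed module that does not prompt for a password (the module choice is an assumption for illustration):

```
auth required pam_fprintd.so
auth required pam_google_authenticator.so
```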
Second, PAM requires PostgreSQL clients to send the password in plaintext. So now is the time to switch on TLS and make it mandatory (noticed the hostssl keyword above?).
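On the server side, TLS is switched on in postgresql.conf; the certificate and key file names below are the defaults, adjust them to your installation:

```
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
```

Making TLS mandatory is then a matter of using only hostssl lines (never plain host) in pg_hba.conf.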
Third, some clients like pgAdmin3 break with one-time passwords, because they apparently open new connections without prompting for a password again, re-using the initial one until you disconnect. This obviously cannot work with passwords that are valid for only one login attempt.
In the previous post, I suggested that, for example, contrib/tcn could be used to selectively update caches.
OK, here is my try:
By touching the cache only when the database signals a change to the base table, and using the passed information to touch only the affected rows, this should be much more efficient than other strategies like routinely invalidating the cache every n minutes or so.
Tcn provides a native trigger function that sends notifications about INSERTs, UPDATEs and DELETEs on any table it is attached to, to subscribers of an arbitrarily named channel, via PostgreSQL's NOTIFY/LISTEN mechanism. This is useful, e.g., for refreshing caches automagically.
First, tcn must be installed in the database:
CREATE EXTENSION tcn;
Now, a test table is created:

CREATE TABLE public.cats
(
  id serial NOT NULL,
  face_name text NOT NULL,
  CONSTRAINT cats_pkey PRIMARY KEY (face_name)
)
WITH (
  OIDS=FALSE
);

And the trigger:

CREATE TRIGGER cats_tcn_trigger
AFTER INSERT OR UPDATE OR DELETE
ON cats
FOR EACH ROW
EXECUTE PROCEDURE triggered_change_notification('cats');

Here, the channel is named 'cats'. If the parameter is omitted, the channel is named 'tcn' by default.

It's all set; now we can subscribe to the channel 'cats':

LISTEN cats;

When we INSERT some data, the following happens:

INSERT INTO cats (face_name) VALUES ('Tailchaser');

Query returned successfully: one row affected, 13 msec execution time.

Asynchronous notification of 'cats' received from backend pid 5457
Data: "cats",I,"face_name"='Tailchaser'

Multiple operations generate multiple notifications:

INSERT INTO cats (face_name) VALUES ('Hushpad'),('Pouncequick');

Query returned successfully: 2 rows affected, 11 msec execution time.

Asynchronous notification of 'cats' received from backend pid 5457
Data: "cats",I,"face_name"='Hushpad'
Asynchronous notification of 'cats' received from backend pid 5457
Data: "cats",I,"face_name"='Pouncequick'

But what is this good for? Well, some PostgreSQL drivers support asynchronous notification, e.g. libpq for C, psycopg2 for Python, or JDBC for Java. So this can be used to notify all clients when the content of a table has changed. The client can then react accordingly, e.g. refresh its cache of that table, display the changes in a (near) real-time information system, or take whatever action the application requires when a change occurs. The following Python code shows an example matching the 'cats' table from above.
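A notify.py along these lines could look like the following sketch, built on psycopg2's asynchronous-notification support; the parse_tcn_payload helper and the connection handling are my own assumptions for illustration, not part of tcn:

```python
# notify.py -- sketch of a listener for the 'cats' channel via psycopg2.
import select


def parse_tcn_payload(payload):
    """Split a tcn payload like "cats",I,"face_name"='Tailchaser'
    into (table, operation, key columns). Added for illustration;
    tcn itself only delivers the raw string."""
    table, operation, keys = payload.split(",", 2)
    return table.strip('"'), operation, keys


def listen_for_cats(conn):
    """Subscribe to channel 'cats' and react to each notification."""
    conn.autocommit = True  # LISTEN must not sit inside an open transaction
    cur = conn.cursor()
    cur.execute("LISTEN cats;")
    print("Waiting for notifications on channel 'cats'")
    while True:
        # Block until the connection's socket becomes readable (5 s timeout).
        if select.select([conn], [], [], 5) == ([], [], []):
            continue  # timeout, keep waiting
        conn.poll()  # fetch pending notifications from the server
        while conn.notifies:
            notify = conn.notifies.pop(0)
            print("Got NOTIFY:", notify.pid, notify.channel, notify.payload)
            table, operation, keys = parse_tcn_payload(notify.payload)
            print("Must refresh cache")  # e.g. invalidate the row given by keys
```

Run it with a connection of your own, e.g. listen_for_cats(psycopg2.connect("dbname=test")); the connection string is a placeholder.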
Start with python notify.py:
Waiting for notifications on channel 'cats'
Another INSERT

INSERT INTO cats (face_name) VALUES ('Roofshadow');

yields

Got NOTIFY: 5457 cats "cats",I,"face_name"='Roofshadow'
Must refresh cache