Monday, November 7, 2016

Locking PostgreSQL shared memory to physical RAM

As an anonymous reader of my last post remarked, instance-level encryption might leak decrypted data to disk.

It looks like there is a solution to this problem. If you change just one line in src/include/portability/mem.h from:

#define PG_MMAP_FLAGS                   (MAP_SHARED|MAP_ANONYMOUS|MAP_HASSEMAPHORE)

to:

#define PG_MMAP_FLAGS                   (MAP_LOCKED|MAP_SHARED|MAP_ANONYMOUS|MAP_HASSEMAPHORE)

PostgreSQL shared memory should be locked into physical RAM and never be swapped/paged out to disk.
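To illustrate what the extra flag does, here is a minimal standalone sketch (my own illustration, not PostgreSQL code) that maps an anonymous shared region with MAP_LOCKED on Linux. MAP_HASSEMAPHORE is left out because it is a BSD flag that Linux does not provide. If the mapping succeeds, the kernel has pinned those pages in RAM, which is roughly the same as calling mlock() on the region right after mmap():

/* maplocked.c - sketch: map an anonymous shared region with MAP_LOCKED,
 * so the kernel pins it in physical RAM. Build with: cc -o maplocked maplocked.c */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>

int main(void)
{
    size_t size = 16 * 1024 * 1024;   /* 16 MB, standing in for the shared memory segment */
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);

    if (p == MAP_FAILED)
    {
        /* This is the spot where PostgreSQL would complain:
         * "could not map anonymous shared memory: ..." */
        fprintf(stderr, "mmap failed: %s\n", strerror(errno));
        return 1;
    }
    printf("mapped and locked %zu bytes\n", size);
    munmap(p, size);
    return 0;
}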

For this to work you obviously need enough physical RAM, and the user PostgreSQL runs as needs permission to lock that much memory. So you'd better check the ulimit:

ulimit -l

unlimited (or at least big enough)
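By the way, ulimit -l reports the limit in kilobytes; the underlying resource limit is RLIMIT_MEMLOCK, which you can also query from code (again just a little sketch of mine, nothing PostgreSQL-specific):

/* memlocklimit.c - sketch: print the RLIMIT_MEMLOCK soft limit that
 * MAP_LOCKED allocations are checked against (ulimit -l shows it in kB). */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0)
    {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("RLIMIT_MEMLOCK: unlimited\n");
    else
        printf("RLIMIT_MEMLOCK: %llu bytes\n", (unsigned long long) rl.rlim_cur);
    return 0;
}

To raise the limit for the postgres user permanently, the usual places are the memlock entry in /etc/security/limits.conf or LimitMEMLOCK= in the systemd service file.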

Otherwise you get this rather cryptic error at startup:

"could not map anonymous shared memory: resource temporarily unavailable"

If you are not root, it will say "resource temporarily unavailable" (EAGAIN) and hide the real cause. And since PostgreSQL refuses to run as root, you'll never see the real error: "cannot allocate memory" (ENOMEM).
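You can reproduce that errno outside of PostgreSQL by lowering RLIMIT_MEMLOCK for a non-root process and then asking for a MAP_LOCKED mapping that is bigger than the limit (once more just a sketch of mine; run it as a normal user):

/* provoke_eagain.c - sketch: exceed RLIMIT_MEMLOCK with MAP_LOCKED and
 * watch mmap() fail with "Resource temporarily unavailable". */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl = { 64 * 1024, 64 * 1024 };   /* 64 kB memlock limit */
    size_t size = 8 * 1024 * 1024;                 /* ask for 8 MB locked */
    void *p;

    /* Lowering one's own limit is allowed even without privileges. */
    if (setrlimit(RLIMIT_MEMLOCK, &rl) != 0)
        perror("setrlimit");

    p = mmap(NULL, size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
    if (p == MAP_FAILED)
        fprintf(stderr, "mmap: %s\n", strerror(errno));   /* EAGAIN here */
    else
        munmap(p, size);
    return 0;
}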

Well, I'm no PostgreSQL hacker. It seems to be working, but is it really that easy to fix?


4 comments:

  1. Use hugepages, they are never swapped out.

  2. Thanks, I didn't know that yet, but it seems to be true:

    "Huge pages cannot be swapped out under memory pressure."

    https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt

    But what if huge pages are not available? According to the documentation, they are not always used:

    "The default behavior for huge pages in PostgreSQL is to use them when possible and to fall back to normal pages when failing. To enforce the use of huge pages, you can set huge_pages to on in postgresql.conf. Note that with this setting PostgreSQL will fail to start if not enough huge pages are available."

    https://www.postgresql.org/docs/9.6/static/kernel-resources.html

    I'd say that basing security solely on a side effect of using huge pages would be a bit obscure and would invite misconfiguration by people unaware of the implications of running instance-level encryption without huge pages.

  3. https://www.kernel.org/doc/Documentation/vm/transhuge.txt

    Sorry, there are two implementations of huge pages in the Linux kernel. The Transparent Huge Pages version will use kswapd to swap itself out under memory pressure, while the hugetlbpage version does not. Of course, to the application using huge pages, both report mostly the same things.

    So "huge pages will never be swapped out" is wrong: it is only true if the kernel uses hugetlbpage, and not true in all cases if the kernel uses "Transparent Hugepage Support".

    If you want to be sure huge pages really stay in memory, MAP_LOCKED pins them like any other page, so nothing breaks just because a different implementation of huge pages is in use. So, ergo, you were on the right path. Alexey Lesovsky, I think you'd better take another look at where you are using huge pages, because if there is some security reason why you need that data locked in RAM, you need to change to MAP_LOCKED.

    You also have to remember that if pages cannot be swapped out, this can result in out-of-memory situations, setting off the very evil OOM Killer.

    Depending on a side effect of a particular implementation of something is a path to getting burnt.

  4. Using hugepages to prevent unencrypted user data from being swapped out is a bad idea all around. One has to lock the backends' heap and stack too, not just shared memory. And then there's Windows...

    If swap is a concern in the data encryption context, encrypting swap using OS facilities might be a more viable solution.
