5 Replies Latest reply: Dec 15, 2011 10:05 AM by sfatula
Impossible Eagle Level 1 (0 points)



Yesterday I had a major hiccup while trying to configure some OD stuff. Since I was still setting up Lion Server, I just decided to go back and restore the system using a Time Machine backup from a few hours earlier. That went well... except that anything to do with postgres was not working anymore: web pages, wiki, iCal, etc. Postgres would not start anymore. I could not find anything on the internet, and after a few hours of trying I gave up and just did a fresh, clean install. Fortunately there were not many user files on it yet, but I did lose everything I noted in the wiki since I migrated the old Snow Leopard content.


So, here comes the question: is there a fool-proof way of backing up all Lion Server postgres databases and, more importantly, of actually getting the thing back up after a Time Machine restore? I know I can dump all databases using pg_dumpall, but what would be the steps to get this back up and running should I need to? I have restored many MySQL databases in the past, but they were never directly linked to so many OS functionalities.
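From the MySQL side I know the dump half; sketched out, I'd guess it looks something like this (the `_postgres` account name and the backup paths are just my guesses for a default Lion Server setup, so adjust to taste):

```shell
# Dump every database plus roles into one plain-SQL file
sudo -u _postgres pg_dumpall > /Volumes/Backup/pg/all-databases.sql

# After a restore, if postgres itself starts but the data is damaged
# or missing, reload everything through psql:
sudo -u _postgres psql -d postgres -f /Volumes/Backup/pg/all-databases.sql
```

Presumably, if postgres won't start at all, you'd first need a working (empty) data directory before the reload will connect, which is exactly the part I can't find documented.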


The "Advanced Administration" document of course is not advanced enough for this. Is there a real guide anywhere on "How to restore and get your Lion Server working again after a major disaster", like a hard disk failure? It would be very helpful in case it's needed, but also for peace of mind.


Thanks a lot! Appreciate any input!




Mac mini, Mac OS X (10.7), Server version, hence no DVD drive
  • haykong Level 1 (115 points)

    Have you tried phpPgAdmin?






    Also have you considered doing a RAID 1? At least this can protect you from hard drive failure.

  • haykong Level 1 (115 points)

    Or an automated backup script for postgresql?

  • Impossible Eagle Level 1 (0 points)



    I've looked at RAID, which so far I've only used for my backup drives (where Time Machine goes), since last year I had a fatal hardware crash on the backup drive while trying to reload the data. I lost everything since bringing the last disk offsite (a few weeks of work)...


    My issue is that Lion doesn't seem stable enough for the machine to actually come back up after a Time Machine restore. In the scenario I described at the beginning of this post, I could not get postgres to even start again, so reloading data with phpPgAdmin wouldn't have worked out of the box. Well, except after a clean install and then reloading data from a pg_dump.


    I'm wondering whether creating a clean system volume clone would work: create one every few weeks, and then only reload mail (easy, as this is just a bunch of folders easily copied from Time Machine) and the postgres databases (e.g. from an hourly pg_dumpall).
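    The hourly pg_dumpall could presumably be as simple as a root crontab entry like this (the `_postgres` role and the backup path are placeholders for my own setup; note the `%` has to be escaped inside a crontab):

```
# m h dom mon dow  command -- hourly pg_dumpall, one file per hour of the day
0 * * * *  /usr/bin/pg_dumpall -U _postgres > /Volumes/Backup/pg/hourly-$(date +\%H).sql
```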


    Any system databases I'd be missing? User IDs in Open Directory, of course, if they change.


    Seems a lot of effort, though, just to get postgres to work again after a restore. In any case, Lion forces me to seriously rethink my disaster recovery plans.




  • Archetrix Level 1 (0 points)

    Take my little script here and adjust it to your requirements. I got the base of this script somewhere else and enhanced it myself. Read some explanations below:


    #!/bin/bash

    # Adjust these paths and options to your own setup
    PG_BIN=/usr/bin
    PG_USER=_postgres
    PG_CON="-U $PG_USER"
    BACKUP_DIR=/var/backups/postgres
    LOGFILE=/var/log/pg_backup.log
    VACUUM_OPTIONS="-z $PG_CON"
    BACKUP_OPTIONS="-F c $PG_CON"

    if [ "$(whoami)" != "root" ]; then
            echo "You need to be root to start this"
            exit 1
    fi

    ACT_DATE=$(date '+%y-%m-%d')
    EXP_DATE=$(date -v -2w '+%y-%m-%d')

    echo "***** DB_BACKUP $ACT_DATE *****" >>$LOGFILE

    # list all databases (psql -lt prints one row per database)
    for db in $($PG_BIN/psql $PG_CON -lt | grep -v = | awk '{print $1}')
    do
        # skip empty entries and the template database we cannot connect to
        if [ "X" = "X$db" ] || [ "$db" = "template0" ]; then
            continue
        fi

        # vacuum
        echo "$(date '+%c') -- vacuuming database $db" >>$LOGFILE
        if $PG_BIN/vacuumdb $VACUUM_OPTIONS $db; then
            sleep 1
        else
            echo "No vacuum in database $db!" >>$LOGFILE
        fi

        # backup
        echo "$(date '+%c') -- backing up database $db" >>$LOGFILE
        if $PG_BIN/pg_dump $BACKUP_OPTIONS -f $BACKUP_DIR/$db-$ACT_DATE.pgdump $db; then
            # purge the backup from two weeks ago, if one exists
            if [ -e $BACKUP_DIR/$db-$EXP_DATE.pgdump ]; then
                echo "$(date '+%c') -- deleting old backup" >>$LOGFILE
                rm $BACKUP_DIR/$db-$EXP_DATE.pgdump
            fi
        else
            echo "FAIL Database $db backup!" >>$LOGFILE
        fi
    done

    This script dumps all databases in postgres individually.

    Databases are vacuumed and then dumped.

    Dump-Format is currently set as compressed pgsql dump (See "-F c" in $BACKUP_OPTIONS)

    File names consist of "databasename"-"backupdate".pgdump

    I have programmed it to automatically purge backups older than two weeks (see " -2w " in $EXP_DATE)

    If you drop a database you will have to remove backups for that database manually.


    I have enabled a daily run of this script by placing the executable script somewhere and adding it to root's crontab: sudo crontab -e

    Enter the following line and your backups start every day at 6:00. Adjust paths and scriptname according to your own setup.


    0 6 * * *       ~/.bin/postgres_dump.sh > /dev/null 2>&1

  • sfatula Level 2 (150 points)

    I did something dumb 2 weeks ago and reloaded from Time Machine, and it worked fine for me. In fact, it was ONLY postgres. Wiki, etc. all continued to function just fine.


    RAID is fine, we use that too, unless someone breaks in and steals both drives, or the building burns down. So, offsite is good also.


    So, not sure why you did not have success. We have never had any issues with Time Machine. And it's better than a system volume copy as it's automatic, and you can go back to various points in time.


    Perhaps you excluded some key folder somewhere? Some of the wiki, etc. data is also stored in the filesystem; it's not 100% postgres.