Potential for backing up a corrupted database when carrying out incremental backups

Hi everyone,

I'm not sure if the title makes sense, but let me try to explain. Off the back of trying to understand how incremental backups are stored and the space they occupy, a subsequent question has arisen.

A listing of my backup directory shows that there are four backup subdirectories. The one labelled "data_services_backup" is the most recent backup, and all those with "_archiveYYYYMMDD" represent the increments (abbreviated and detailed below):

    [root@elinkvertica01 v_data_services_node0001]# ls -lrt
    total 16
    drwxrwx---. 3 ... Aug 5 11:18 data_services_backup_archive20130805_111547
    drwxrwx---. 3 ... Aug 6 10:00 data_services_backup_archive20130806_095739
    drwxrwx---. 3 ... Aug 8 09:19 data_services_backup_archive20130808_091631
    drwxrwx---. 3 ... 4096 Aug 8 09:22 data_services_backup

The du for the same directories is as follows:

    [root@elinkvertica01 v_data_services_node0001]# du -sh *
    123G  data_services_backup
    2.0G  data_services_backup_archive20130805_111547
    279M  data_services_backup_archive20130806_095739
    281M  data_services_backup_archive20130808_091631

My understanding of database backup (from SQL Server and Oracle) is that normally a full backup is taken, and then subsequent increments are taken going forwards until the next full backup. However, if I am reading all of the above correctly, Vertica takes a full backup and retains increments going backwards. It then appears to follow that to restore to an older archive, rather than the most recent backup, you have to restore the recent full backup and then subtract the previous increments. In contrast, for SQL Server, one would restore the full backup and then add the subsequent increments.

This raises the question of what happens if the database corrupts and you then take a full backup, which presumably would incorporate the corruption. How would you then restore to the last good copy of the database? Again, if I am reading this correctly, it presumes that you can restore the corrupted database and then reverse the corruption by applying the backwards increments.

Please forgive anything I've misunderstood here; I'm still trying to get my head around Vertica, and generally I've been away from databases for a good number of years.

Thanks
Ben

Comments

  • Hi Ben,

    Just out of curiosity, could you try the following command?

        [root@elinkvertica01 v_data_services_node0001]# du -sh data_services_backup_archive20130808_091631 data_services_backup_archive20130806_095739 data_services_backup_archive20130805_111547 data_services_backup

    (EDIT: That's all one line; it's just word-wrapping in this comment field.)

    (This is simply telling 'du' to process the directories in a particular order, rather than in whatever order '*' expands in.)

    Thanks,
    Adam
  • Hi Adam,

    The output of the command you requested is as follows:

        [root@elinkvertica01 v_data_services_node0001]# du -sh data_services_backup_archive20130808_091631 data_services_backup_archive20130806_095739 data_services_backup_archive20130805_111547 data_services_backup
        123G  data_services_backup_archive20130808_091631
        399M  data_services_backup_archive20130806_095739
        1.8G  data_services_backup_archive20130805_111547
        34G   data_services_backup

    However, since I wrote the post, I ran another local backup, so a revised command in the spirit of what you suggested, and its output, are as follows:

        [root@elinkvertica01 v_data_services_node0001]# du -sh data_services_backup data_services_backup_archive20130805_111547 data_services_backup_archive20130806_095739 data_services_backup_archive20130808_091631 data_services_backup_archive20130808_092223
        123G  data_services_backup
        35G   data_services_backup_archive20130805_111547
        1.3G  data_services_backup_archive20130806_095739
        398M  data_services_backup_archive20130808_091631
        281M  data_services_backup_archive20130808_092223

    The revised command seems to portray the same information, that the largest directory is the one called data_services_backup. However, executing exactly what you suggested seems to muddy the waters further, although I think this is because of the way in which sub-directories are linked across each of the backups. I don't really know Linux all that well, but it seems like there may be a question over which directories are "original" and which are linked (see the inode check sketched below the thread). I'll discuss this further with our Linux support contractor.

    Thanks,
    Ben
  • Hi Ben,

    Take a closer look -- the largest folder isn't necessarily "data_services_backup"; it's the one that "du" scans first. If you switch up the order, as with my suggested command, you'll always find that the first directory that you list appears to be the full backup and the rest appear to be incremental. (There's a small reproduction of this sketched below the thread.)

    The technology that you should ask about is called "hard links". (It's not Linux-specific; Windows has them too, though they're used less.) Specifically: a file can be stored anywhere on disk regardless of which directory it is in. In fact, files don't have to be in only one directory. A directory in Linux is just a list of [name, file-identifier] (aka [name, inode]) pairs, and multiple directories can point to the same file. Every file is linked at least once, but it can be linked multiple times into multiple directories. (This is why the Linux 'rm' command is also called 'unlink' -- it removes the link; the file data goes away automatically when no links are left.)

    So, each backup directory here contains all of the files needed for the backup. However, if a file is the same between two incremental backups, we don't copy it; we just link to it a second time. (If "du" sees the same file multiple times, it only counts the file the first time it sees it. That's why the first listed directory always looks big. After all, if a file can be in multiple directories, what does it mean to compute the size of those directories?)

    This means that "incremental" backups look basically just like full backups, so the performance cost of restoring all the incremental backups isn't really there any more. (This only works well because Vertica seldom modifies its internal data files; if you look at our "mergeout" process, we prefer to write new files and combine them later. So consecutive backups share a lot of identical files.)

    The disadvantage is that there's only one copy of the actual file data. What if something happens and that copy gets corrupted? That's why you might take multiple full backups -- that way you're actually getting multiple copies of the file data.

    Adam
  • Hi Adam,

    Thanks for the detailed response. I am starting to understand a bit more now about the listing on Linux. I also forwarded your response to our backup vaulting company, who noted that the last backup was seemingly quite small, pointing towards only the new inodes being added to the vault. So, the understanding is coming along...!

    Thanks again,
    Ben
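
The order-dependent 'du' behaviour that Adam describes is easy to reproduce on any Linux box. Below is a minimal shell sketch (the "demo" directory and file names are made up purely for illustration) that creates two directories sharing one large file via a hard link:

    # Create two directories that share one 100 MB file via a hard link.
    mkdir -p demo/full demo/incremental
    dd if=/dev/zero of=demo/full/data.bin bs=1M count=100 2>/dev/null
    ln demo/full/data.bin demo/incremental/data.bin   # a second link, not a copy

    # 'ls -li' prints the inode number in the first column; both names
    # show the same inode, so neither is more "original" than the other.
    ls -li demo/full demo/incremental

    # 'du' counts each inode only once per invocation, so whichever
    # directory it scans first is charged for the shared file.
    du -sh demo/full demo/incremental     # demo/full looks big
    du -sh demo/incremental demo/full     # now demo/incremental looks big

Reversing the argument order swaps which directory appears large, which is exactly the effect seen in the listings above.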
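On Ben's question of which directories are "original" and which are linked: with hard links there is no original; every link is equal. A quick way to check a real backup area (the directory names here are taken from the listings above, so adjust them to your own layout) is to look for files whose link count is greater than one:

    # List files that are hard-linked into more than one place; the link
    # count also appears as the second column of 'ls -l' for each file.
    find data_services_backup -type f -links +1 | head

    # Count how many files in the newest archive are shared links rather
    # than private copies.
    find data_services_backup_archive20130808_092223 -type f -links +1 | wc -l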
