Restoring to an old snapshot takes more storage
Hi,
Recently, my node's disks filled up, and to deal with that I needed to optimize a couple of fact table superprojections.
The problem is that creating the new projections means duplicating the data, and since I didn't have enough space left, I first needed to free some.
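For context, the rebuild I had in mind is the usual create/refresh/drop sequence, roughly like the sketch below (the table, projection, and column names are placeholders for my real schema):

    # Rough sketch of the projection rebuild; fact_sales, fact_sales_super_old,
    # fact_sales_super_new and the column names are placeholders.
    # The new projection is populated while the old one still exists,
    # which is why the data is temporarily duplicated on disk.
    vsql -c "CREATE PROJECTION fact_sales_super_new AS
             SELECT * FROM fact_sales
             ORDER BY sale_date, customer_id
             SEGMENTED BY HASH(sale_id) ALL NODES;"
    vsql -c "SELECT REFRESH('fact_sales');"    # populate the new projection
    vsql -c "SELECT MAKE_AHM_NOW();"           # advance the AHM so old storage can be reclaimed
    vsql -c "DROP PROJECTION fact_sales_super_old;"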
To free space, I decided to restore the DB to its state from two weeks ago, when it was less full, then replace the projections and reload the last two weeks of data. The backup is located on a remote server (mounted as a local directory) and is taken incrementally every day.
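The restore itself was run with vbr against that mounted backup location, roughly like this (the config file name and the archive timestamp are placeholders, and using --archive to pick the two-week-old restore point is my assumption about how that point is selected):

    # Rough sketch of the restore command; backup_snapshot.ini and the archive
    # timestamp are placeholders, and --archive is assumed to select the older
    # restore point. The database has to be down for a full restore.
    /opt/vertica/bin/vbr -t restore -c backup_snapshot.ini --archive=20240301_120000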
To my surprise, instead of deleting the newest files and freeing space immediately, the file system just got more congested, reached 100% full, and the restore failed.
How is that possible? Am I doing something wrong?
Thanks,
Amos.
Comments
Of course, the act of dropping the database would delete the files first.
We leave that for the user to do manually, if there are special circumstances like disks with little or no remaining space.
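In practice, doing it manually would mean stopping and dropping the database yourself before running the restore, something along these lines (the database name is a placeholder):

    # Sketch of freeing the space manually before a restore; "mydb" is a placeholder.
    # Dropping the database removes its data files, which is what actually frees the disk.
    /opt/vertica/bin/admintools -t stop_db -d mydb
    /opt/vertica/bin/admintools -t drop_db -d mydb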
Thank you skeswani for the detailed explanation.