Restoring to an old snapshot takes more storage
Recently, my node's disks filled up. To handle that, I needed to optimize a couple of fact-table superprojections.
The problem is that creating a new projection requires duplicating the data, and I didn't have enough space left, so I first needed to free some.
To do that, I decided to restore the DB to its state from two weeks ago, when the disks were less full. Then I would replace the projections and reload the last two weeks of data. The backup lives on a remote server (mounted as a local directory) and is taken incrementally every day.
To my surprise, instead of deleting the newest files and freeing space immediately, the restore filled the file system even further: it hit 100% and failed.
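For anyone hitting the same wall, here is my current guess at the mechanism, as a toy simulation (all file names and sizes below are invented, and I'm assuming a "copy first, delete later" restore order, not quoting any vendor's documented behavior): if the restore copies the snapshot's files into the data directory *before* removing the files that aren't in the snapshot, peak usage is roughly old data plus restored data, so a nearly full disk overflows even when the target state is smaller.

```python
# Toy model: restore copies snapshot files in before deleting newer files.
# Sizes are in GB; every name and number here is made up for illustration.
current = {"new1.fact": 60, "new2.fact": 45}    # live data, 105 GB total
snapshot = {"old1.fact": 50, "old2.fact": 30}   # 2-week-old state, 80 GB total

def restore_peak_usage(current, snapshot):
    """Return (peak, final) disk usage when snapshot files are copied
    in before files absent from the snapshot are removed."""
    usage = sum(current.values())
    peak = usage
    # Phase 1: copy in every snapshot file not already present.
    for name, size in snapshot.items():
        if name not in current:
            usage += size           # restored copy coexists with live data
            peak = max(peak, usage)
    # Phase 2: only now delete files that are not in the snapshot.
    for name, size in current.items():
        if name not in snapshot:
            usage -= size
    return peak, usage

peak, final = restore_peak_usage(current, snapshot)
print(peak, final)  # peak exceeds both the old and the new totals
```

Under this model, restoring 80 GB over 105 GB of live data transiently needs 185 GB, which would explain the disk going to 100% mid-restore even though the end state is smaller than the start.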
How is that possible? Am I doing something wrong?