Closed sjanefullerton closed 6 months ago
I am still struggling to run my indexes without hitting a client_loop: send disconnect: Broken pipe message.
I tried running the following to remove the pg_denormalized data:
$ docker stop $(docker ps -q)
$ docker rm $(docker ps -qa)
$ docker volume prune --all
$ docker-compose exec pg_denormalized bash -c 'rm -rf $PGDATA'
However, after doing this, I checked the disk usage and it still shows 783G for pg_denormalized. Is there a different way to remove/reset the data and reclaim that space?
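For anyone who lands here later: a quick way to see where the space actually lives is to ask Docker directly. A minimal sketch, assuming the data sits in a named volume declared in docker-compose.yml (the volume name below is hypothetical); if $PGDATA is instead a bind mount to a host path, pruning volumes will never free it:
$ docker system df -v                               # per-volume disk usage as Docker sees it
$ docker-compose down                               # stop and remove the containers first
$ docker volume ls                                  # find the volume backing pg_denormalized
$ docker volume rm myproject_pg_denormalized_data   # hypothetical volume name, check docker-compose.yml
$ docker-compose up -d                              # recreates an empty volume on startup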
Hi @sjanefullerton, I believe you may want to try deleting the data while the Docker containers are running, since the deletion command execs into the container to remove the files; if the containers aren't up, that can't happen. I could be wrong about this, so if others want to chime in feel free. I did receive a message after executing the deletion command:
rm: cannot remove '/var/lib/postgresql/data': Device or resource busy
but after taking the containers down and bringing them back up, the data was essentially gone.
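For completeness, the sequence that ended up working looks roughly like this, assuming the service is called pg_denormalized as above and runs the official postgres image (which re-initializes an empty data directory on startup):
$ docker-compose up -d pg_denormalized    # containers must be running for exec to work
$ docker-compose exec pg_denormalized bash -c 'rm -rf $PGDATA'
# the mount point itself can't be removed ("Device or resource busy"),
# but everything inside it is deleted
$ docker-compose down
$ docker-compose up -d                    # a fresh initdb runs against the now-empty data dir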
That worked!! Thank you!
Hello! Running my pg_denormalized indexes is taking several hours per index (I haven't been able to get it to finish the first index I run before the server times out). I checked the disk usage of pg_denormalized and it is 783G.
Could this be why it is taking so long? Would it be useful to reset the pg_denormalized container so the indexes run more quickly?
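Before resetting, one sanity check is to look at relation sizes inside Postgres; if the 783G really is tables and indexes, builds will be slow no matter what. A sketch, assuming psql is available in the container and the default postgres superuser (user and database names are assumptions; add -d <database> if the data lives in a non-default database):
$ docker-compose exec pg_denormalized psql -U postgres -c "
    SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relkind IN ('r', 'i')
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 10;"
# lists the ten largest tables and indexes in the connected database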