soapiestwaffles / s3-nuke

Nuke all the files and their versions from an S3 Bucket 💣🪣
MIT License

"queue count does not match actual deleted count" error, is there anyway to fix it? #10

Closed luiscosio closed 2 years ago

luiscosio commented 2 years ago

Running it as:

go run . --concurrency 80

Here is the error; I've just removed the secret phrase.

🌎 bucket located in us-east-1

Bucket object count: 111,465,121
(object count metric last updated 1 day ago @ 2022-02-09 18:00:00 -0600 CST)

⚠️   !!! WARNING !!!  ⚠️
This will destroy all versions of all objects in the selected bucket
...                                                                                                                                                              

⠙ deleting objects... (26000/-, 1948 it/s) error: queue count does not match actual deleted count
exit status 1

It worked flawlessly with buckets of 1M–5M objects, but I'm not sure why it is failing with this one.

I've tried a million other ways to delete this bucket before, so I'm not sure if that is somehow interfering with this script.

soapiestwaffles commented 2 years ago

hmmm, let me see if I can reproduce! I wonder if you may be starting to hit the AWS API rate limit. I do have a couple of large buckets left that I need to destroy, so I'll give it a try with the same settings and see if I can reproduce.

Just as a test, would you mind running it with lower concurrency, maybe like... 5 or 10, and seeing if it still happens?

As a side note: I've been trying to get enough tests done so I can figure out a reasonable default for concurrency (see #3), but generating large test buckets takes a long time 😁

luiscosio commented 2 years ago

Thank you so much. I've tried with lower concurrency and it is definitely not a concurrency issue.

I'll try to reproduce it with a couple of other buckets I need to nuke. Thank you so much for your tool!

soapiestwaffles commented 2 years ago

hmm, maybe I just need to get rid of that check. Though, it seems like if it doesn't delete everything it was supposed to in that call, you would be left with some keys in the bucket when it finishes. Let me experiment and read more on that bulk call I use.
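For context (a minimal stdlib-only sketch, not s3-nuke's actual code): the S3 batch `DeleteObjects` call can partially succeed, returning per-key failures in an `Errors` list alongside the `Deleted` list, so the number of deleted objects can legitimately come back smaller than the number queued. The struct shapes below loosely mirror the AWS SDK's response types but are defined locally for illustration, and `reconcile` is a hypothetical helper:

```go
package main

import "fmt"

// Local stand-ins for the AWS SDK's DeleteObjects response types.
type DeletedObject struct{ Key string }
type DeleteError struct{ Key, Code, Message string }
type DeleteObjectsOutput struct {
	Deleted []DeletedObject
	Errors  []DeleteError
}

// reconcile compares what was queued against what S3 reports back,
// returning the per-key failures instead of aborting on a bare
// count mismatch.
func reconcile(queued int, out DeleteObjectsOutput) []DeleteError {
	if len(out.Deleted)+len(out.Errors) != queued {
		fmt.Printf("warning: %d queued, %d accounted for\n",
			queued, len(out.Deleted)+len(out.Errors))
	}
	return out.Errors
}

func main() {
	// One of three queued keys fails; the response still accounts for all of them.
	out := DeleteObjectsOutput{
		Deleted: []DeletedObject{{Key: "a"}, {Key: "b"}},
		Errors:  []DeleteError{{Key: "c", Code: "InternalError", Message: "please retry"}},
	}
	for _, e := range reconcile(3, out) {
		fmt.Printf("failed to delete %s: %s (%s)\n", e.Key, e.Code, e.Message)
	}
}
```

With this shape, a simple `len(Deleted) == queued` check trips on any partial failure even though the response tells you exactly which keys survived.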

and... Thank you for using my tool! <3 I appreciate the bug report, too! This one isn't as easy to test out; things like localstack just aren't the same.

soapiestwaffles commented 2 years ago

just an update: I should be able to look at this more today. I'm sorry I haven't been able to look into it sooner; the last few days have been extremely busy for me.

soapiestwaffles commented 2 years ago

@luiscosio

If you still have some large buckets to test on, you can check out the issue-10 branch and run that version. I've now made it log the files that AWS reported it didn't delete, but it should continue on. I'm curious whether your bucket will still end up empty afterwards or whether there will be files left.
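The log-and-continue approach described above could look something like this (a hedged sketch, not the branch's actual diff; `DeleteError` is a local stand-in for the SDK's per-key failure entry and `logFailures` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"log"
)

// DeleteError mirrors the per-key failure entry S3 returns from a batch delete.
type DeleteError struct{ Key, Code, Message string }

// logFailures records keys AWS reported as not deleted and lets the run
// continue, so a partially failing batch no longer aborts the whole nuke.
func logFailures(failed *[]string, errs []DeleteError) {
	for _, e := range errs {
		log.Printf("not deleted: %s (%s: %s)", e.Key, e.Code, e.Message)
		*failed = append(*failed, e.Key)
	}
}

func main() {
	var failed []string
	// Simulate one batch reporting a single undeleted key.
	logFailures(&failed, []DeleteError{
		{Key: "obj1", Code: "InternalError", Message: "please retry"},
	})
	// A final summary makes it easy to see whether the bucket ended up empty.
	fmt.Printf("%d object(s) left behind: %v\n", len(failed), failed)
}
```

Keeping the failed keys around also leaves the door open for a retry pass at the end of the run.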

lucazz commented 2 years ago

Hello there @soapiestwaffles, I was getting hit by that bug as well, and I was indeed able to nuke a bucket using this branch:

s3-nuke  issue-10| ❯❯❯ go run . --concurrency 80

        'OOOO`
      'OOOOOOOO`
     'OOOOOOOOOO`
          ||
   ,.--'--++--'--.,
   (\'-.,_____,.-'/)
    \\-.,_____,.-//
    ;\\         //|  ▄▀▀ ▀██    █▄ █ █ █ █▄▀ ██▀
    | \\  ___  // |  ▄██ ▄▄█ ▀▀ █ ▀█ ▀▄█ █ █ █▄▄
    |  '-[___]-'  |
    `'-.,_____,.-''

⠂ fetching bucket list...

🪣  random.bucket.name.us-east-1

🌎 bucket located in us-east-1

Bucket object count: 14,015,845
(object count metric last updated 3 days ago @ 2022-03-13 21:00:00 -0300 -03)

⚠️   !!! WARNING !!!  ⚠️
This will destroy all versions of all objects in the selected bucket

Please enter the following phrase to continue: montne fal acar iriess
Enter phrase: montne fal acar iriess
[bucket: random.bucket.name.us-east-1] Are you sure, this operation cannot be undone: y

⠴ deleting objects... (346995/-, 2201 it/s)

and it's still going (20 mins or so; there's A LOT of object versions in this bucket lol)

soapiestwaffles commented 2 years ago

Hi @lucazz ! Thank you so much for testing! Alright, I'll merge this branch then and call it good :) 👍🏼