Closed — alexeyzimarev closed this issue 5 years ago
@alexeyzimarev can you please check whether there is sufficient disk space for the container? Around 60 GB of free space is required for the 3-node cluster to come up healthy. If that is already present, can you please share the traces from the dev cluster folder?
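For anyone else hitting this, a minimal sketch of that disk-space check, run inside the container (the ~60 GB threshold is taken from the comment above; adjust as needed):

```shell
# Warn when free space on / is below the ~60 GB the 3-node cluster needs.
free_kb=$(df -Pk / | awk 'NR==2 {print $4}')
need_kb=$((60 * 1024 * 1024))   # 60 GB expressed in KB
if [ "$free_kb" -lt "$need_kb" ]; then
  echo "Only $((free_kb / 1024 / 1024)) GB free; the cluster may not come up healthy"
else
  echo "Disk space OK: $((free_kb / 1024 / 1024)) GB free"
fi
```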
I have the same issue. Disk space was the problem on a different Mac; here I have given Docker enough disk space, yet it still appears to be stuck.
Would be nice to have some documentation on where to find the logs of what's actually going on when the cluster is starting (`docker logs` shows exactly the same output as above).
It might be the size of the Docker disk in the Docker for Mac settings. The default there is 60 GiB.
That's not the case for me; I've verified, and set it to 150 GB. I've also exec'ed into the Docker container at this point, and there's plenty of free space.
It was the issue on my first attempt (on a different machine); solving it there got the cluster working.
@suhuruli would be nice to get some advice on this.
This is blocking further adoption of Service Fabric within my company, as some members of my team are running Macs.
Hi @tastyeggs, can you please exec into your container, run `df -h`, and share the output? I suspect the Docker overlay is eating up the space on one of your impacted machines. Even if the Docker setting allows a maximum disk size of 150 GB for the overlay, that doesn't mean 150 GB of free disk space is actually available on the disk.

The Docker settings show a `Docker.raw` disk image location; what is the size of that image? Can you run `df -h` for the root of the path where `Docker.raw` resides? If the Docker overlay itself is consuming a lot of space, then there isn't enough free space at the `Docker.raw` image location, and the container will not work properly. This could be the issue in that case: https://github.com/docker/for-mac/issues/2297.

A quick mitigation would be to reset Docker to the factory settings. You may also look at this for other ways of reducing disk usage: https://djs55.github.io/jekyll/update/2017/11/27/docker-for-mac-disk-space.html. Let us know.
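A rough way to check the `Docker.raw` situation from the host (the default path below is an assumption for typical Docker for Mac installs; set `DOCKER_RAW` if yours differs):

```shell
# Locate Docker.raw and report free space on the volume holding it.
# The default path is an assumption for Docker for Mac; override via DOCKER_RAW.
RAW="${DOCKER_RAW:-$HOME/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw}"
if [ -f "$RAW" ]; then
  ls -lh "$RAW"                # apparent size of the disk image
  df -h "$(dirname "$RAW")"    # free space where the image lives
else
  echo "Docker.raw not found at $RAW; check the disk image location in Docker preferences"
fi
```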
@tastyeggs why are you asking me? I have the same issue.
@alexeyzimarev that was accidental tagging, sorry!
@anantshankar17, I'm sadly not able to reproduce this anymore after rebooting my Mac. I'm going to repeat the process on a fresh Mac and will report back if I hit the same issue. Thanks for your guidance.
I have followed the instructions here https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started-mac
When I start the container that I have built, I get the following in the logs:
Visiting http://localhost:19080/ gives an empty response.
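For completeness, one way to capture the container output for attaching to the issue (the container name `sftestcluster` is an assumption; check `docker ps` for the actual name on your machine):

```shell
# Save the container's stdout/stderr to a file for sharing.
if command -v docker >/dev/null 2>&1; then
  docker logs sftestcluster > sf-docker.log 2>&1 || true
  status="logs saved to sf-docker.log"
else
  status="docker CLI not found on this machine"
fi
echo "$status"
```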