digininja opened this issue 11 months ago
Hi, thanks for reporting this.
I haven't tried the DO guide in a while; I'll recheck when I get a chance. It's possible that the default VM sizes it now uses are no longer big enough.
You can try adding more nodes, or adding a second node pool with bigger VMs (more CPU / memory). You should be able to do so via the DO web UI.
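As a sketch, the same thing can also be done from the CLI instead of the web UI, assuming `doctl` is authenticated against your account; the cluster name, pool name, and size slug below are placeholders:

```shell
# Add a second node pool with bigger VMs to an existing DOKS cluster.
# CLUSTER_NAME, the pool name, and the size slug are placeholders -- adjust to taste.
doctl kubernetes cluster node-pool create CLUSTER_NAME \
  --name bigger-pool \
  --size s-4vcpu-8gb \
  --count 2

# List the node pools to confirm the new pool is present.
doctl kubernetes cluster node-pool list CLUSTER_NAME
```

Pending pods should get scheduled onto the new nodes automatically once they join the cluster.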
I had a chat with the DO support team and they told me to do basically the same as you suggest, and it worked, so your prediction is probably right.
I wish I understood the setup well enough to be more useful with reporting or a fix, but it was a rush job to get it all set up for a training day, and now I'm tearing it all back down to stop them charging me.
I'm happy to run through the scripts again later if you want and provide feedback as a novice user. There are definitely bits that could be expanded on, such as getting a certificate, that would help but aren't really in your scope, so I understand why you wouldn't want to cover them.
In my experience, a machine size of `s-2vcpu-4gb` is sufficient for up to 8 participants:

```shell
doctl kubernetes cluster create --region=REGION CLUSTER_NAME --size=s-2vcpu-4gb
```
I've run mine on a single node (`--count 1`) of `s-2vcpu-4gb`, but even then, the second team's pod didn't have enough memory. Since this is for an event lasting an afternoon, I've decided to use one of the bigger sizes, to be sure not to run into such problems during the event. Even if I leave it running until the next day, it will cost less than $10.

```shell
doctl kubernetes cluster create --size s-8vcpu-32gb --count 1 juicy-k8s
```
It would be cool to have the DO guide updated with a machine type that works better than the default one.
I think then we should be able to close this issue.
That sounds good to me. I wish I could help, but I just guessed till it worked.
I mean if it works, it should probably also work for others 😅
So I guess it can't be worse than the default one, if the default just doesn't cut it resource-wise.
Unfortunately I deleted it straight after the class.
I've just checked the invoice and it only shows the name, not the spec. Might be able to reverse-engineer it: 80 hours cost $6.12.
I'm following the setup instructions for DigitalOcean. I got to step 2 and ran `get pods`. The juice-balancer pod is stuck in the Pending state. When I describe the pod I get this:
I know nothing about kubernetes or DO setup so I'm stalled here.
How do I allocate the extra resources so the provisioning can go ahead? I'll probably be hosting about 12 users with light load, in case that makes a difference.
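For anyone hitting the same wall, a rough way to confirm it's a resource problem before resizing anything (the pod name below is a placeholder; take it from your own `get pods` output):

```shell
# Replace the placeholder with the actual pod name from "kubectl get pods".
kubectl get pods
kubectl describe pod <juice-balancer-pod-name>
# The Events section at the bottom usually explains the Pending state,
# e.g. a FailedScheduling event citing insufficient CPU or memory.

# Check how much CPU/memory each node still has left to allocate.
kubectl describe nodes | grep -A 7 "Allocated resources"
```

If the events mention insufficient CPU or memory, adding nodes or a bigger node pool, as discussed above, is the fix.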