Chia-Network / chia-docker


VDF-Client build interest? #131

Closed Scribbd closed 2 years ago

Scribbd commented 2 years ago

Since #130 got merged, it is easier to include a VDF build without inflating the container image by too much.

I already did some work to make the VDF client build in the container: https://github.com/Scribbd/chia-docker-slim/tree/bluebox

This adds about 4 MB to the total payload and would allow users with Windows PCs (like mine) with spare compute power to run VDF clients in Docker.
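For reference, a rough sketch of the extra build step inside the image (a sketch only; it assumes the image carries a chia-blockchain checkout with its venv set up, and the exact paths in this repo's image may differ):

```sh
# Sketch: build the timelord/VDF binaries inside the container image.
# Assumes a chia-blockchain checkout at /chia-blockchain with a venv already
# created; adjust paths to match the actual image layout.
cd /chia-blockchain
. ./activate                 # activate the chia venv
sh install-timelord.sh       # chia-blockchain's script that builds vdf_client via chiavdf
```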

There are some issues:

I want to gauge interest in this option. I will continue working on it in my fork. However, for it to become a PR, I estimate the following has to change:

Let me know.

cmmarslender commented 2 years ago

I would love to see support for this in a container. Does your fork of this actually work now to where I could test the container in a few environments to see if it works, or is more work required?

I also like the change proposed in #128; I just need that PR updated so there aren't any conflicts, so I can run the CI and do a bit of testing.

Scribbd commented 2 years ago

Does your fork of this actually work now to where I could test the container in a few environments to see if it works, or is more work required?

It builds. And vdf_bench runs. I haven't thoroughly tested it myself yet with a proper connection to a node.
EDIT: If that is enough for you to test it, you are free to do so. I haven't added the required changes to the chia config file to run a bluebox (also not via entrypoint.sh), so a little heads-up on that.
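If you do give it a spin, a quick sanity check is the benchmark that gets built alongside it, something like the following (the binary's location inside the image is an assumption; adjust to wherever it actually lands):

```sh
# Sketch: rough sanity check of the freshly built VDF binaries.
# square_asm 400000 is the usual quick benchmark from the chiavdf README;
# it prints an iterations-per-second figure for this CPU.
./vdf_bench square_asm 400000
```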

Before I got to testing, I started focusing on #130. I will also look into it.

xearl4 commented 2 years ago

Being able to run timelord-launcher-only was part of the reason for #128 :) So yes, we'd definitely be interested in that too. Currently, we are running custom Docker images for pure bluebox vdf_client workers.

One issue with building vdf_client at Docker build-time is that more often than not, you want vdf_client built for the actual target CPU it will be running on. If, for example, you build on a non-AVX512 host but then run the container on an AVX512 machine, you'll be missing out on AVX512-specific performance optimisations. The other way round is even worse, of course: a vdf_client built on an AVX512 CPU won't run, or will crash, on a non-AVX512 CPU.
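As an illustration, a container entrypoint can check at start-up which build flavour is safe on the host it actually landed on; a minimal sketch, assuming a Linux host where /proc/cpuinfo is visible to the container:

```sh
# Sketch: decide on a build flavour based on the CPU the container runs on.
# avx512f is the baseline AVX-512 feature flag reported in /proc/cpuinfo.
if grep -qw avx512f /proc/cpuinfo; then
    echo "AVX-512 present: an AVX-512-optimised vdf_client is safe here"
else
    echo "no AVX-512: use a generic build to avoid illegal-instruction crashes"
fi
```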

Thus, what we do for our custom containers is to allow building vdf_client either at container build-time or at container run-time. You can then use the build-time install-timelord.sh when you want to pre-build for well-known target hosts, while instructing unknown targets to run their workers with the run-time install-timelord.sh. I'm not sure if it's worth pre-building images with a vdf_client built for a "lowest common denominator" CPU; I think for public use of pre-built images, you want to build vdf_client on the actual target CPU, thus at container run-time.
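To make that concrete, the run-time half boils down to an entrypoint guard along these lines (a sketch only; the vdf_client location is a placeholder and would need to match the real image layout):

```sh
# Sketch: only build vdf_client at container start if the image doesn't already
# ship one, so the same entrypoint serves pre-built and build-on-first-start images.
VDF_CLIENT="${VDF_CLIENT:-/chia-blockchain/vdf_client}"   # placeholder path
if [ ! -x "$VDF_CLIENT" ]; then
    echo "vdf_client not found, building for this host's CPU..."
    cd /chia-blockchain && . ./activate && sh install-timelord.sh
fi
```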

If there's interest in this two-pronged buildtime/runtime vdf_client build approach, I should be able to quickly port it from our custom image to instead run on top of chia-docker's main branch.

Scribbd commented 2 years ago

@xchdata1 You have clearly done more research into this than I have. I hadn't even thought about the implications of different instruction sets beyond the ARM64, ARMv7, AMD64 triad, let alone their extensions. Are there more extensions that have specific optimizations? And are these optimizations crucial if you want to run a bluebox, which is my intention?

For me it was more about providing an easy way for people to contribute to the compacting process: letting them just pull an image and run it. By its nature that doesn't have to be the fastest, most optimized process. Or that is what I think; I could be wrong here.

With automation we could easily provide tag-based images.
Or, if that is too much of a hassle, because it is not just AVX512 but a whole slew of instruction extensions that needs to be accounted for, or because it isn't easily done with something like buildx, we could provide a docker-compose.yaml file that builds the image locally and instruct users to run that instead, reducing the steps to git clone and docker-compose up, as sketched below.
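From the user's side that flow would look roughly like this (the --build flag simply forces a local, CPU-targeted image build; the compose file itself is still hypothetical):

```sh
# Sketch: build the image locally so vdf_client is compiled for this machine's
# CPU, then start it via the (proposed) docker-compose.yaml.
git clone https://github.com/Chia-Network/chia-docker.git
cd chia-docker
docker-compose up -d --build
```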

Scribbd commented 2 years ago

I am now looking at arewecompactifiedyet.live. It might be that we don't need more raw CPU time anymore, and the focus should be on optimally CPU-targeted builds.

Scribbd commented 2 years ago

@xchdata1 I am interested in seeing it. What is your opinion on what I proposed? (docker-compose with a build step for CPU-optimized images built on the machine, plus a general lowest-common-denominator image for general use.)

github-actions[bot] commented 2 years ago

This issue has been flagged as stale as there has been no activity on it in 14 days. If this issue is still affecting you and in need of review, please update it to keep it open.

github-actions[bot] commented 2 years ago

This issue was automatically closed because it has been flagged as stale and subsequently passed 7 days with no further activity.