Closed: 0xsamgreen closed this issue 4 years ago
Thanks for the feedback @sg2. To keep the barrier to entry low on all platforms (Windows, Mac, Linux), we recommend using virtual envs. Depending on the community's need for Docker, we will consider adding Docker as a first-class route to install ml-agents.
Ok, but why not leave the document and mark it as "deprecated", like the cloud training?
https://github.com/Unity-Technologies/ml-agents/tree/master/docs#cloud-training-deprecated
@sg2 @albertoxamin We are working on publicly releasing a Docker image with mlagents and all its dependencies pre-installed, to support our Docker users and take away the installation overhead completely. Stay tuned.
This issue has been automatically marked as stale because it has not had activity in the last 14 days. It will be closed in the next 14 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had activity in the last 28 days. If this issue is still valid, please ping a maintainer. Thank you for your contributions.
@anupambhatnagar Hello, that sounds great. Thanks for the great work. Any ETA on this? The current installation process without a GUI seems very convoluted and under-documented.
In addition, it would be great if those Docker containers came with pre-built binaries of all the example envs. I would like to run a simple benchmark using my own RL algorithms, but following the instructions from https://github.com/Unity-Technologies/ml-agents/issues/2684 to build every env seems like an over-complicated process.
It would be very desirable to run the Docker container with a script like the following:
from gym_unity.envs import UnityEnv
# pre-built binaries
env_name = "Project/Prebuilt/GridWorld"
env = UnityEnv(env_name, worker_id=0, use_visual=True)
print(str(env))
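For completeness, here is a rough sketch of what a short random rollout against such a pre-built binary might look like. The path is only a placeholder, and the loop just uses the standard gym API that UnityEnv exposes:
from gym_unity.envs import UnityEnv

# Placeholder path to a pre-built environment binary
env = UnityEnv("Project/Prebuilt/GridWorld", worker_id=0, use_visual=True)

obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()            # random policy as a stand-in for a real agent
    obs, reward, done, info = env.step(action)    # standard gym step tuple
    if done:
        obs = env.reset()
env.close()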
But perhaps it's not a good idea to check binaries directly into the GitHub repo. Maybe you could consider releasing the pre-built binaries on the GitHub Releases page of your repo? Or provide a simple script that builds all example envs, or downloads pre-built envs from somewhere.
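To illustrate the "download pre-built envs" idea, a download script could be as small as the sketch below. The URL is purely a placeholder, since no such archives are published today:
import io
import os
import urllib.request
import zipfile

# Hypothetical download location -- ml-agents does not currently publish
# pre-built example binaries, so this URL is only a placeholder.
URL = "https://example.com/ml-agents/prebuilt/GridWorld_linux.zip"
DEST = os.path.expanduser("~/mlagents-prebuilt")

os.makedirs(DEST, exist_ok=True)
with urllib.request.urlopen(URL) as response:
    archive = zipfile.ZipFile(io.BytesIO(response.read()))
    archive.extractall(DEST)  # unpack the environment binary
    # Note: zipfile does not preserve the executable bit, so on Linux/Mac
    # you may still need to chmod +x the extracted binary.

print("Extracted to", DEST)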
@vwxyzjn we expect to have something ready by the end of the month. If it's not a Docker image, then there will be a Dockerfile with instructions on how to build it locally.
Regarding the binaries - at present we don't have any plans to make the pre-built binaries of the example envs available to the user. One way to generate the executable would be to delete all but one scene and run the build from within the editor. It's a bit of work but certainly doable.
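If you would rather script that step than click through the editor, Unity's documented command-line batch mode can drive the same build. This is only a sketch: it assumes a Unity install at UNITY_PATH and a small C# editor method (hypothetically named BuildScript.BuildGridWorld) that calls BuildPipeline.BuildPlayer for the scene you kept:
import subprocess

# Sketch only: UNITY_PATH and PROJECT_PATH must be adjusted for your machine,
# and BuildScript.BuildGridWorld is a hypothetical static C# editor method
# you would add to the project yourself.
UNITY_PATH = "/opt/Unity/Editor/Unity"
PROJECT_PATH = "/path/to/ml-agents/Project"

subprocess.run([
    UNITY_PATH,
    "-quit", "-batchmode",                            # run the editor headlessly and exit when done
    "-projectPath", PROJECT_PATH,
    "-executeMethod", "BuildScript.BuildGridWorld",   # hypothetical build entry point
    "-logFile", "build.log",
], check=True)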
@anupambhatnagar Thanks for the prompt reply. I will look forward to that.
When you said “within the editor”, the editor requires a desktop environment, right?
Another question: if we have the pre-built binaries, do we even still need to install unity?
@vwxyzjn Yes, the editor does require a desktop environment. You do not need to install Unity if you have a pre-built binary.
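For example, with a pre-built binary you can connect through the low-level Python API alone, roughly as in the sketch below. The path is a placeholder, and the exact constructor arguments can differ between releases:
from mlagents_envs.environment import UnityEnvironment

# Placeholder path to a pre-built binary; only the executable is needed,
# not the Unity editor.
env = UnityEnvironment(file_name="Project/Prebuilt/GridWorld",
                       worker_id=0,
                       no_graphics=True)  # skip rendering, e.g. on a headless server
env.reset()
env.close()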
I see. Then I think it makes more sense to have pre-built binaries, especially for those who just want to test their algorithms on a gym environment instead of building one. I personally work on a gym environment that uses Java as its backend engine. To make it convenient, I use a continuous integration service to pre-build binaries like http://microrts.s3-website-us-east-1.amazonaws.com/microrts/artifacts/.
As a result, the instructions for a hello world are very simple (without downloading something like Eclipse to build everything):
$ rm ~/microrts -fR && mkdir ~/microrts && \
wget -O ~/microrts/microrts.zip http://microrts.s3.amazonaws.com/microrts/artifacts/202002051504.microrts.zip && \
unzip ~/microrts/microrts.zip -d ~/microrts/ && \
rm ~/microrts/microrts.zip
$ git clone https://github.com/vwxyzjn/gym-microrts.git && \
cd gym-microrts && \
pip install dacite && \
pip install -e .
$ python3 hello_world.py
Perhaps this is something that you guys should consider as well. :)
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
There is a Dockerfile available in the repo, and there are references to Docker in the documentation, but it's not clear from the documentation how to set up and use ML-Agents with Docker. (I'm able to build the Docker image, and I'm now trying to hack around to make it run, but it isn't working. And even if it did run, I would need to run it headless and/or through xvfb on a remote server.)
I'd like to use Docker because I work with many machines and collaborators, and Docker makes it easier to collaborate. I know that documentation for virtual environments seems to be the priority at the moment, but I think the virtual env route could easily be integrated into the Dockerfile to minimize maintenance.
So, I'd love to see:
Note: this is a follow-up from a discussion by @xiaomaogy and @mikeruddy here: https://github.com/Unity-Technologies/ml-agents/issues/2715#issuecomment-551965518