Blizzard / s2client-proto

StarCraft II Client - protocol definitions used to communicate with StarCraft II.
MIT License

GLIBC 2.18 Requirement for SCII Linux Package #84

Open xinghai-sun opened 6 years ago

xinghai-sun commented 6 years ago

My Linux environment only has GLIBC 2.17, which is not supported by the StarCraft II Linux package. When running, the following error appears: GLIBC 2.18 not found.
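
For reference, this is how one can check which glibc a system provides (standard commands, nothing SC2-specific):

```bash
# Print the version of the installed C library.
ldd --version

# List the GLIBC version symbols the system libc exports.
strings /lib64/libc.so.6 | grep '^GLIBC_'
```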

Unfortunately, for reasons outside my control I cannot update my GLIBC to 2.18. Is there a version that supports GLIBC 2.17? If not, how can I solve this problem? Thank you!

jejay commented 6 years ago

This is a big issue that we also ran into. Red Hat Enterprise Linux (RHEL), CentOS, and Scientific Linux all only support glibc 2.17 in their newest releases.

This is especially unfortunate because those distributions are commonly used in organisations where stability is a concern and hacking in an update is not possible. My university uses Scientific Linux (as many others do), and we are currently trying to make use of our 200-GPU cluster...

maym2104 commented 6 years ago

Same here (I get the error /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found, which I presume is related). Our research team uses resources on which we are not sudo users, and support does updates about once a year if we are lucky.

Tymyan1 commented 6 years ago

Same issue. I tried compiling my own glibc 2.18 and setting LD_LIBRARY_PATH to point at it, only to get python: relocation error: /lib64/libpthread.so.0: symbol __getrlimit, version GLIBC_PRIVATE not defined in file libc.so.6 with link time reference, and I can't quite find a workaround for this.
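
For context, roughly what I tried looks like this (a sketch only; the download URL, install prefix, and -j value are just examples):

```bash
# Build glibc 2.18 from source into a local prefix (no root needed).
wget https://ftp.gnu.org/gnu/glibc/glibc-2.18.tar.gz
tar xzf glibc-2.18.tar.gz
mkdir glibc-2.18/build && cd glibc-2.18/build
../configure --prefix=$HOME/opt/glibc-2.18
make -j4 && make install

# Point the dynamic loader at the locally built libraries --
# this is the step that triggers the GLIBC_PRIVATE relocation error above.
export LD_LIBRARY_PATH=$HOME/opt/glibc-2.18/lib:$LD_LIBRARY_PATH
```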

Anyone got any success/tips?

islamelnabarawy commented 6 years ago

+1, same issue here. We currently can't run SC2 on our university's HPC cluster because of it.

CrossR commented 6 years ago

I encountered this issue, but luckily managed to get some great support from my uni and the HPC team.

The end result was using Singularity to run SC2 inside a container, then giving that container access to the GPU.

The Singularity file I used can be found here: https://gist.github.com/CrossR/a6b71f8b86ce3ea74fd99366af0452ae

You just build an image on any machine you have root access on (for me, a home machine) with sudo singularity build starcraft.simg Singularity. Upload that image to the HPC, then either get shell access with singularity shell --nv starcraft.simg or run it via a script so it can be used from a scheduler. The commands are summarised below.
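
The workflow as commands (taken directly from the description above; starcraft.simg is just the image name used here):

```bash
# On a machine where you have root (e.g. a home machine):
# build the image from the Singularity definition file linked above.
sudo singularity build starcraft.simg Singularity

# Copy starcraft.simg to the HPC, then open an interactive shell inside
# the container with GPU access (--nv passes the NVIDIA driver through).
singularity shell --nv starcraft.simg
```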

Here is an example script for my uni's scheduler: https://gist.github.com/CrossR/cf53d240b0fa50bb4967275fa753a51a
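
The gist above has the actual script; purely for illustration, a rough sketch of what a submission script for a SLURM-based scheduler might look like (the resource requests and the train.py entry point are placeholders, not taken from the gist):

```bash
#!/bin/bash
#SBATCH --job-name=sc2-train     # hypothetical job name
#SBATCH --gres=gpu:1             # request one GPU
#SBATCH --time=24:00:00          # hypothetical wall-time limit

# Run the training script inside the Singularity image with GPU access.
singularity exec --nv starcraft.simg python train.py
```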

If your HPC has Singularity or Docker, that seems the easiest way to go for now, at least until a version is released that works easily on the prominent academic Linux distributions. There is an official Docker build here as well: https://github.com/Blizzard/s2client-docker, which should have more support than my homemade scripts.

nicoladainese96 commented 4 years ago

> (quoting @CrossR's Singularity solution above)

Hi, I have a follow-up on this. I had the same problem and found a similar solution that seems simpler to implement. Basically there are two differences: 1) build the Singularity image from NVIDIA's Docker releases of either PyTorch or TensorFlow (e.g. https://ngc.nvidia.com/catalog/containers/nvidia:pytorch), as sketched below; 2) unzip the SC2 engine as usual into some directory that is visible from inside the container (on my cluster that could be /scratch, $HOME, or /ProjectAppl, for instance, so I use the latter).
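
For example, building such an image might look like this (a sketch only; the nvcr.io registry path and the 20.03-py3 tag are guesses based on the NGC catalog page and the module name mentioned below):

```bash
# Build a Singularity image straight from NVIDIA's NGC PyTorch container.
# Run this on a machine where you have root access.
sudo singularity build pytorch-20.03.simg docker://nvcr.io/nvidia/pytorch:20.03-py3
```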

I won't post a full solution because I got lucky: a module that loads that same PyTorch Singularity image was already available on the cluster, so I didn't have to build the image from the Docker release myself, but it should be fairly simple. In the end my solution is just module load pytorch/nvidia-20.03-py3 followed by singularity_wrapper exec python run.py.
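
So the job script essentially reduces to:

```bash
# Load the cluster module that provides the NGC PyTorch Singularity image.
module load pytorch/nvidia-20.03-py3

# Run the script through the cluster's Singularity wrapper; the unzipped SC2
# engine only needs to sit in a directory that is visible inside the container.
singularity_wrapper exec python run.py
```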

The cool thing is that there is no need to have the SC2 engine inside the Singularity image: as long as it sits in a directory that is visible from inside the container, it will be found anyway, and the GLIBC shipped with that module is 2.27, which works fine.

paopjian commented 3 years ago

With Singularity you do not need to install the SC2 game into the image: just build an Ubuntu image, use apt to install the system packages, and use pip to install pysc2 or whatever other libraries your code needs. The pygame dependencies are mostly the troublesome part. Try: apt install python-dev libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev libsdl1.2-dev libsmpeg-dev python-numpy subversion libportmidi-dev ffmpeg libswscale-dev libavformat-dev libavcodec-dev (a sketch of how this might look inside a Singularity build is below).
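
Purely as illustration, a minimal sketch of the steps one might put in the %post section of a Singularity definition built from an Ubuntu base image; the addition of python-pip and the plain pip install pysc2 line are assumptions on top of the package list above:

```bash
# These commands would go in the %post section of a Singularity definition
# file based on an Ubuntu image; python-pip is added so pip is available.
apt-get update
apt-get install -y python-dev python-pip python-numpy subversion \
    libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev libsdl1.2-dev \
    libsmpeg-dev libportmidi-dev ffmpeg libswscale-dev libavformat-dev libavcodec-dev

# Python packages: pysc2 plus whatever else your own code needs.
pip install pysc2
```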