To all those who have purchased the book マテリアルズインフォマティクス published by KYORITSU SHUPPAN: the link to the exercises has changed to https://github.com/yoshida-lab/XenonPy/tree/master/mi_book. Please follow the new link to access the exercises.
Our XenonPy.MDL is under indefinite technical maintenance due to security issues found during a server upgrade. Our current plan is to completely restructure the model library server, but the completion date is unclear. In the meantime, if you would like access to the pretrained models, please contact us directly with your purpose for using the models and your affiliation. We will try to provide access to part of the model library based on specific needs. Sorry for all the inconvenience. We will make a further announcement here when a more concrete recovery schedule is available.
We apologize for the inconvenience 🥺🙏🙇
XenonPy is a Python library that implements a comprehensive set of machine learning tools for materials informatics. Its functionality partially depends on PyTorch and R. The current release provides a limited set of modules:
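One of the things libraries like XenonPy do is turn a chemical composition into numeric features by aggregating elemental property data. The following is only a conceptual sketch of that idea in plain Python, not XenonPy's actual API; the function name `weighted_mean` is illustrative, and only the two atomic weights shown are real data:

```python
# Conceptual sketch of a composition-weighted descriptor.
# XenonPy ships curated elemental property tables; here we use
# just two real atomic weights as a stand-in.
atomic_weight = {"H": 1.008, "O": 15.999}

def weighted_mean(composition, prop):
    """Composition-weighted mean of an elemental property.

    composition: dict mapping element symbol -> amount, e.g. {"H": 2, "O": 1}
    prop: dict mapping element symbol -> property value
    """
    total = sum(composition.values())
    return sum(amount * prop[el] for el, amount in composition.items()) / total

# H2O: (2 * 1.008 + 1 * 15.999) / 3 ≈ 6.005
print(weighted_mean({"H": 2, "O": 1}, atomic_weight))
```

Real descriptor calculators apply many such aggregations (mean, variance, max, min, ...) over dozens of elemental properties at once.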
XenonPy is inspired by matminer: https://hackingmaterials.github.io/matminer/.
XenonPy is an open-source project: https://github.com/yoshida-lab/XenonPy.
See our documentation for details: http://xenonpy.readthedocs.io
Docker has introduced a new Subscription Service Agreement which requires organizations with more than 250 employees or more than $10 million in revenue to buy a paid subscription. Since Docker has shifted its policy toward a business-first model, we have decided to drop the prebuilt Docker image service.
The XenonPy base images bundle many packages that are useful for materials informatics. The following table lists some of the core packages in the XenonPy images.
Package | Version |
---|---|
PyTorch | 1.7.1 |
tensorly | 0.5.0 |
pymatgen | 2021.2.16 |
matminer | 0.6.2 |
mordred | 1.2.0 |
scipy | 1.6.0 |
scikit-learn | 0.24.1 |
xgboost | 1.3.0 |
ngboost | 0.3.7 |
fastcluster | 1.1.26 |
pandas | 1.2.2 |
rdkit | 2020.09.4 |
jupyter | 1.0.0 |
seaborn | 0.11.1 |
matplotlib | 3.3.4 |
OpenNMT-py | 1.2.0 |
Optuna | 2.3.0 |
plotly | 4.11.0 |
ipympl | 0.5.8 |
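Inside a running container you can check which of these packages are actually installed, and at what version, using only the standard library (note that the PyPI distribution name for PyTorch is `torch`; the helper name `installed_version` below is our own):

```python
# Query installed package versions with the standard library (Python 3.8+).
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(dist)
    except PackageNotFoundError:
        return None

for dist in ("torch", "pymatgen", "matminer", "scikit-learn", "rdkit"):
    print(f"{dist}: {installed_version(dist) or 'not installed'}")
```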
In order to use this image you must have Docker Engine installed. Instructions for setting up Docker Engine are available on the Docker website.
If you have a CUDA-compatible NVIDIA graphics card, you can use a CUDA-enabled version of the PyTorch image to enable hardware acceleration. This is only supported on Ubuntu Linux.
First, ensure that you have installed the appropriate NVIDIA drivers and libraries. If you are running Ubuntu, you can install the proprietary NVIDIA drivers from the PPA and CUDA from the NVIDIA website.
You will also need to install nvidia-docker2 to enable GPU device access within Docker containers. This can be found at NVIDIA/nvidia-docker.
Pre-built XenonPy images are available on Docker Hub under the name yoshidalab/xenonpy. For example, you can pull the CUDA 10.2 version with:
```shell
docker pull yoshidalab/xenonpy:cuda10
```
The table below lists the software versions for each of the currently supported Docker image tags.
Image tag | CUDA | PyTorch |
---|---|---|
latest | 11.0 | 1.7.1 |
cpu | None | 1.7.1 |
cuda11 | 11.0 | 1.7.1 |
cuda10 | 10.2 | 1.7.1 |
cuda9 | 9.2 | 1.7.1 |
It is possible to run XenonPy inside a container. Using XenonPy with Jupyter is very easy; you can run it with the following command:
```shell
docker run --rm -it \
  --runtime=nvidia \
  --ipc=host \
  --publish="8888:8888" \
  --volume=$HOME/.xenonpy:/home/user/.xenonpy \
  --volume=<path/to/your/workspace>:/workspace \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  yoshidalab/xenonpy
```
Here's a description of the Docker command-line options shown above:

- `--runtime=nvidia`: Passes the graphics card from the host to the container. Required if using CUDA, optional otherwise.
- `--ipc=host`: Required if using multiprocessing, as explained at https://github.com/pytorch/pytorch#docker-image. Optional otherwise.
- `--publish="8888:8888"`: Publishes the container's port 8888 to the host. Needed to reach Jupyter.
- `--volume=$HOME/.xenonpy:/home/user/.xenonpy`: Mounts the XenonPy root directory into the container. Optional, but highly recommended.
- `--volume=<path/to/your/workspace>:/workspace`: Mounts your working directory into the container. Optional, but highly recommended.
- `-e NVIDIA_VISIBLE_DEVICES=0`: Sets an environment variable that restricts which graphics cards are seen by programs running inside the container. Set it to `all` to enable all cards. Optional; defaults to `all`.

You may wish to consider using Docker Compose to make running containers with many options easier. At the time of writing, only version 2.3 of Docker Compose configuration files supports the `runtime` option.
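As a sketch, the long `docker run` command above could be expressed in a version 2.3 Compose file roughly like this (the service name `xenonpy` is our own choice, and `<path/to/your/workspace>` is a placeholder you must fill in, as in the command above):

```yaml
version: "2.3"
services:
  xenonpy:
    image: yoshidalab/xenonpy
    runtime: nvidia            # requires nvidia-docker2; CUDA images only
    ipc: host
    ports:
      - "8888:8888"
    volumes:
      - $HOME/.xenonpy:/home/user/.xenonpy
      - <path/to/your/workspace>:/workspace
    environment:
      - NVIDIA_VISIBLE_DEVICES=0
```

With this file saved as `docker-compose.yml`, `docker-compose up` starts the container with the same options.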
© Copyright 2021, The XenonPy project. All rights reserved.
Released under the BSD 3-Clause license.