Resources to learn about bio_embeddings:
examples of pipeline configurations and notebooks.

Project aims:
The project includes a pipeline that embeds protein sequences from a FASTA file, driven by a configuration file, and general purpose embedder objects that can be used directly from Python.
You can install bio_embeddings via pip or use it via docker. Mind the additional dependencies for align.
Install the pipeline and all extras like so:
pip install bio-embeddings[all]
To install the unstable version, please install the pipeline like so:
pip install -U "bio-embeddings[all] @ git+https://github.com/sacdallago/bio_embeddings.git"
If you only need to run a specific model (e.g. an ESM or ProtTrans model) you can install bio-embeddings without dependencies and then install the model-specific dependency, e.g.:
pip install bio-embeddings
pip install bio-embeddings[prottrans]
The extras correspond to the model-specific dependencies, e.g. prottrans as used above.
We provide a docker image at ghcr.io/bioembeddings/bio_embeddings. Simple usage example:
docker run --rm --gpus all \
-v "$(pwd)/examples/docker":/mnt \
-v bio_embeddings_weights_cache:/root/.cache/bio_embeddings \
-u $(id -u ${USER}):$(id -g ${USER}) \
ghcr.io/bioembeddings/bio_embeddings:v0.1.6 /mnt/config.yml
See the docker example in the examples folder for instructions. You can also use ghcr.io/bioembeddings/bio_embeddings:latest, which is built from the latest commit.
To use the mmseqs_search protocol, or the mmseqs2 functions in align, you additionally need to have mmseqs2 in your path.
bio_embeddings was developed for unix machines with GPU capabilities and CUDA installed. If your setup diverges from this, you may encounter some inconsistencies (e.g. speed is significantly affected by the absence of a GPU and CUDA). For Windows users, we strongly recommend the use of Windows Subsystem for Linux.
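If no GPU is available, the embedders fall back to the (much slower) CPU. As a minimal sketch, not part of the original README, and assuming the embedder constructors accept a `device` option as in recent releases, you can check for CUDA before embedding:

import torch
from bio_embeddings.embed import SeqVecEmbedder

# Assumption: embedders accept a `device` option; without CUDA, embedding
# runs on the much slower CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
embedder = SeqVecEmbedder(device=device)
embedding = embedder.embed("SEQVENCE")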
Each model has its strengths and weaknesses (speed, specificity, memory footprint, ...). There isn't a "one-fits-all", and we encourage you to try at least two different models when starting a new exploratory project.
The models prottrans_t5_xl_u50, esm1b, esm, prottrans_bert_bfd, prottrans_albert_bfd, seqvec and prottrans_xlnet_uniref100 were all trained with the goal of systematic predictions. From this pool, we believe the optimal model to be prottrans_t5_xl_u50, followed by esm1b.
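As a quick illustration (not from the original text), the recommended model can be loaded through the same embedder interface shown further below; the class name ProtTransT5XLU50Embedder is assumed from the package's embed module, and the first use downloads large model weights:

# Assumption: the class name follows the naming in bio_embeddings.embed;
# instantiating it downloads large model weights on first use.
from bio_embeddings.embed import ProtTransT5XLU50Embedder

embedder = ProtTransT5XLU50Embedder()
embedding = embedder.embed("SEQVENCE")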
We highly recommend checking out the examples folder for pipeline examples, and the notebooks folder for post-processing pipeline runs and general purpose use of the embedders.
After having installed the package, you can:
Use the pipeline like:
bio_embeddings config.yml
A blueprint of the configuration file and an example setup can be found in the examples directory of this repository.
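For orientation only, a minimal configuration could look like the sketch below; this is an assumption-laden outline rather than the authoritative blueprint (stage names are free-form, and the available options depend on the chosen protocol):

# Illustrative sketch; consult the blueprint in the examples directory for
# the authoritative set of options.
global:
  sequences_file: sequences.fasta   # input FASTA file
  prefix: my_run                    # directory in which results are stored

seqvec_embeddings:                  # arbitrary stage name
  type: embed
  protocol: seqvec
  reduce: True                      # additionally store per-protein embeddings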
Use the general purpose embedder objects via python, e.g.:
from bio_embeddings.embed import SeqVecEmbedder
embedder = SeqVecEmbedder()
embedding = embedder.embed("SEQVENCE")
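The embedder objects also offer batch embedding and per-protein reduction; the sketch below assumes the `embed_many` and `reduce_per_protein` helpers behave as in recent releases:

from bio_embeddings.embed import SeqVecEmbedder

embedder = SeqVecEmbedder()

# Embed several sequences at once; embed_many yields one per-residue
# embedding per input sequence.
per_residue = list(embedder.embed_many(["SEQVENCE", "PRTEIN"]))

# Collapse a per-residue embedding into a fixed-length per-protein vector.
per_protein = embedder.reduce_per_protein(per_residue[0])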
More examples can be found in the notebooks folder of this repository.
If you use bio_embeddings for your research, we would appreciate it if you could cite the following paper:
Dallago, C., Schütze, K., Heinzinger, M., Olenyi, T., Littmann, M., Lu, A. X., Yang, K. K., Min, S., Yoon, S., Morton, J. T., & Rost, B. (2021). Learned embeddings from deep learning to visualize and predict protein sets. Current Protocols, 1, e113. doi: 10.1002/cpz1.113
The corresponding bibtex:
@article{https://doi.org/10.1002/cpz1.113,
author = {Dallago, Christian and Schütze, Konstantin and Heinzinger, Michael and Olenyi, Tobias and Littmann, Maria and Lu, Amy X. and Yang, Kevin K. and Min, Seonwoo and Yoon, Sungroh and Morton, James T. and Rost, Burkhard},
title = {Learned Embeddings from Deep Learning to Visualize and Predict Protein Sets},
journal = {Current Protocols},
volume = {1},
number = {5},
pages = {e113},
keywords = {deep learning embeddings, machine learning, protein annotation pipeline, protein representations, protein visualization},
doi = {https://doi.org/10.1002/cpz1.113},
url = {https://currentprotocols.onlinelibrary.wiley.com/doi/abs/10.1002/cpz1.113},
eprint = {https://currentprotocols.onlinelibrary.wiley.com/doi/pdf/10.1002/cpz1.113},
year = {2021}
}
Additionally, we invite you to cite the work from others that was collected in `bio_embeddings` (see section _"Tools by category"_ below). We are working on an enhanced user guide which will include proper references to all citable work collected in `bio_embeddings`.
Want to add your own model? See contributing for instructions.
- prottrans_t5_xl_u50 residue and sequence embeddings of the Human proteome at full precision + secondary structure predictions + sub-cellular localisation predictions
- prottrans_t5_xl_u50 residue and sequence embeddings of the Fly proteome at full precision + secondary structure predictions + sub-cellular localisation predictions + conservation prediction + variation prediction