A4Bio / ProteinInvBench

The official implementation of the NeurIPS'23 paper ProteinInvBench: Benchmarking Protein Design on Diverse Tasks, Models, and Metrics

You can use the Colab notebook to evaluate our latest models.

ProteinInvBench: Benchmarking Protein Design on Diverse Tasks, Models, and Metrics

Model zoo: https://zenodo.org/record/8031783

📘Documentation | 🛠️Installation | 🚀Model Zoo | 🆕News

This repository is an open-source project for benchmarking structure-based protein design methods. It provides a variety of collated datasets, reproduced methods, novel evaluation metrics, and fine-tuned models, all integrated into one unified framework. It also contains the implementation code for the paper:

ProteinInvBench: Benchmarking Protein Design on Diverse Tasks, Models, and Metrics

Zhangyang Gao, Cheng Tan, Yijie Zhang, Xingran Chen, Stan Z. Li.

Introduction

ProteinInvBench is the first comprehensive benchmark for protein design. The main contributions of our paper can be summarized in the four points below:


(back to top)

Install via pip

pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.4.0+cu121.html
pip install -r requirements.txt
pip install PInvBench==0.1.0
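
If the wheels above installed cleanly, a quick import check is enough to confirm the PyG stack matches your torch build. This is a minimal sketch; it assumes torch_geometric is pulled in by requirements.txt:

# Sanity check: the PyG extension wheels must match the installed torch/CUDA build
import torch
import torch_geometric

print("torch:", torch.__version__, "CUDA:", torch.version.cuda)
print("torch_geometric:", torch_geometric.__version__)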

Overview

Major Features

- **Unified Code Framework**: ProteinInvBench integrates the protein design pipeline into a unified framework. From data preprocessing to model training, evaluation, and result recording, all the methods collected in this paper can be run in the same way, which simplifies further analysis of the problem. In detail, ProteinInvBench decomposes computational protein design algorithms into `methods` (training and prediction), `models` (network architectures), and `modules` (network layers). Users can develop their own algorithms with flexible training strategies and networks for different protein design tasks.
- **Comprehensive Model Implementation**: ProteinInvBench collects a wide range of recent impressive models together with the datasets and reproduces all the methods on each dataset in a consistent, controlled manner.
- **Standard Benchmarks**: ProteinInvBench supports standard benchmarking of computational protein design algorithms with various evaluation metrics, including novel ones such as confidence, diversity, and sc-TM. This wide range of evaluation metrics gives a more comprehensive understanding of different protein design algorithms.
Code Structures

- `run/` contains the experiment runner and dataset configurations.
- `configs/` contains the model configurations.
- `opencpd/core` contains core training plugins and metrics.
- `opencpd/datasets` contains datasets and dataloaders.
- `opencpd/methods/` contains collected models for various protein design methods.
- `opencpd/models/` contains the main network architectures of various protein design methods.
- `opencpd/modules/` contains network modules and layers.
- `opencpd/utils/` contains implementation details for each model.
- `tools/` contains executable Python files and scripts to prepare the dataset and manage model checkpoints.
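
As a purely illustrative, self-contained sketch of the method/model/module split described above (none of these class names come from the opencpd package; see `opencpd/methods`, `opencpd/models`, and `opencpd/modules` for the real implementations):

# Illustrative toy only: shows the method / model / module decomposition.
import torch
import torch.nn as nn

class ToyGraphLayer(nn.Module):            # "module": a reusable network layer
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.proj(x))

class ToyDesignModel(nn.Module):           # "model": the full network architecture
    def __init__(self, dim=128, n_layers=3, n_tokens=20):
        super().__init__()
        self.layers = nn.ModuleList([ToyGraphLayer(dim) for _ in range(n_layers)])
        self.head = nn.Linear(dim, n_tokens)

    def forward(self, node_feats):
        for layer in self.layers:
            node_feats = layer(node_feats)
        return self.head(node_feats)       # per-residue amino-acid logits

class ToyDesignMethod:                      # "method": training and prediction logic
    def __init__(self, model, lr=1e-3):
        self.model = model
        self.opt = torch.optim.Adam(model.parameters(), lr=lr)
        self.loss_fn = nn.CrossEntropyLoss()

    def train_step(self, node_feats, target_seq):
        logits = self.model(node_feats)
        loss = self.loss_fn(logits.flatten(0, 1), target_seq.flatten())
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()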
Demo Results

The results of the methods collected on the CATH dataset are listed as follows:


(back to top)

News and Updates

[2023-06-16] ProteinInvBench v0.1.0 is released.

Installation

This project provides a conda environment file; users can easily reproduce the environment with the following commands:

git clone https://github.com/A4Bio/OpenCPD.git
cd OpenCPD
conda env create -f environment.yml
conda activate opencpd
python setup.py develop
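
After `python setup.py develop`, you can confirm the package is importable. This assumes it is exposed as `opencpd`, matching the directory layout above:

python -c "import opencpd; print(opencpd.__file__)"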

Getting Started

Obtaining Dataset

The processed datasets can be found in the releases.

Note that, due to its large file size, the ProteinMPNN dataset was uploaded as a separate file named mpnn.tar.gz; the other datasets can be found in data.tar.gz.
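
Assuming the archives have been downloaded from the release page into the repository root (the target directory expected by the dataloaders is an assumption; adjust as needed), they can be unpacked with:

tar -xzvf data.tar.gz
tar -xzvf mpnn.tar.gz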

Model Training

python main.py --method {method} 
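
For example, to train one of the collected methods (the exact identifier strings accepted by `--method` are an assumption here; PiFold is one of the models benchmarked in the paper):

python main.py --method PiFold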

(back to top)

Overview of Supported Models, Datasets, and Evaluation Metrics

We support various protein design methods and will provide benchmarks on various protein datasets. We are working on adding new methods and collecting experimental results.

(back to top)

License

This project is released under the Apache 2.0 license. See LICENSE for more information.

Acknowledgement

ProteinInvBench is an open-source project for structure-based protein design methods created by researchers in CAIRI AI Lab. We encourage researchers interested in protein design and other related fields to contribute to this project!

Citation

@inproceedings{
gao2023proteininvbench,
title={ProteinInvBench: Benchmarking Protein Inverse Folding on Diverse Tasks, Models, and Metrics},
author={Zhangyang Gao and Cheng Tan and Yijie Zhang and Xingran Chen and Lirong Wu and Stan Z. Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=bqXduvuW5E}
}

Contribution and Contact

To add new features, ask for help, or report bugs associated with ProteinInvBench, please open a GitHub issue or pull request with the tag "new features", "help wanted", or "enhancement". Feel free to contact us by email if you have any questions.

(back to top)

TODO

  1. Switch code to torch_lightning framework (Done)
  2. Deploy code to public server
  3. Support pip installation

InverseFolding as an Evaluation Tool

export PYTHONPATH=path/ProteinInvBench
python PInvBench/evaluation_tools/InverseFolding.py --pdb_path test_pdbs --sv_fasta_path test.fasta --model UniIF --topk 5 --temp 1.0

For each PDB file in test_pdbs, we use UniIF to design the corresponding top-k sequences and save the results in test.fasta. The sampling temperature is 1.0.
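
To inspect the designed sequences afterwards, a minimal FASTA reader is enough. This is a plain-Python sketch; the exact header format written by InverseFolding.py is an assumption:

# Minimal FASTA reader for inspecting the sequences written to test.fasta
from pathlib import Path

def read_fasta(path):
    records = {}
    name = None
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            name = line[1:]          # header format assumed; adjust if needed
            records[name] = []
        else:
            records[name].append(line)
    return {k: "".join(v) for k, v in records.items()}

sequences = read_fasta("test.fasta")
for name, seq in sequences.items():
    print(name, len(seq))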