NUS-HPC-AI-Lab / Neural-Network-Parameter-Diffusion

We introduce a novel approach for parameter generation, named neural network parameter diffusion (p-diff), which employs a standard latent diffusion model to synthesize new sets of parameters.

Neural Network Parameter Diffusion

Paper | Project Page | Hugging Face

Motivation of p-diff

This repository contains the code and implementation details for the research paper titled "Neural Network Diffusion." The paper explores novel paradigms in deep learning, specifically focusing on diffusion models for generating high-performing neural network parameters.

Authors

Overview

Figure: Illustration of the proposed p-diff framework. Our approach consists of two processes: parameter autoencoding and generation. The parameter autoencoder extracts latent representations from which its decoder can reconstruct high-performing model parameters. The extracted representations are used to train a standard latent diffusion model (LDM). During inference, we freeze the autoencoder's decoder; generated parameters are obtained by feeding random noise through the LDM and the trained decoder.

Abstract: Diffusion models have achieved remarkable success in image and video generation. In this work, we demonstrate that diffusion models can also generate high-performing neural network parameters. Our approach is simple, utilizing an autoencoder and a standard latent diffusion model. The autoencoder extracts latent representations of the trained network parameters. A diffusion model is then trained to synthesize these latent parameter representations from random noise. It then generates new representations that are passed through the autoencoder's decoder, whose outputs are ready to use as new sets of network parameters. Across various architectures and datasets, our diffusion process consistently generates models of comparable or improved performance over SGD-trained models, with minimal additional cost. Our results encourage more exploration of the versatile use of diffusion models.
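As a rough illustration of the inference path described above, here is a toy NumPy sketch. The dimensions, the random linear decoder, and the `ldm_denoise` stand-in are all hypothetical placeholders, not the repository's API: random noise is denoised in latent space and then decoded into a flat parameter vector.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, PARAM_DIM = 16, 128          # illustrative sizes only

# Frozen decoder of the parameter autoencoder (here: a random linear map).
W_dec = rng.standard_normal((LATENT_DIM, PARAM_DIM)) * 0.1

def ldm_denoise(z, steps=10):
    """Stand-in for the LDM's reverse diffusion process: each step
    nudges the latent toward a clean sample (illustrative only)."""
    for _ in range(steps):
        z = 0.9 * z
    return z

# 1) Sample random noise in latent space.
z = rng.standard_normal(LATENT_DIM)
# 2) Run the (stand-in) reverse diffusion process.
z = ldm_denoise(z)
# 3) Decode the latent into a flat parameter vector, ready to be
#    reshaped and loaded into the target network.
params = z @ W_dec
print(params.shape)  # (128,)
```

In the real pipeline, the decoded vector would be split and reshaped to match the target network's `state_dict` before evaluation.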

Installation

  1. Clone the repository:
git clone https://github.com/NUS-HPC-AI-Lab/Neural-Network-Diffusion.git
  2. Create a new Conda environment and activate it:
conda env create -f environment.yml
conda activate pdiff

or install the necessary packages with:

pip install -r requirements.txt

Baseline

For CIFAR100 Resnet18 parameter generation, you can run the script:

bash ./cifar100_resnet18_k200.sh

The script runs in two steps: the first collects the trained parameters for the task, and the second trains and evaluates the generative model on those parameters.
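The two-step structure can be sketched as follows. This is a hypothetical outline, not the repository's code: `train_task_model` stands in for the first step, and `K` mirrors the `k200` in the script name (the number of parameter checkpoints collected).

```python
import random

def train_task_model(seed):
    """Stand-in for step 1: train/fine-tune the target network and
    return the parameters to be collected (here: a tiny random vector)."""
    rng = random.Random(seed)
    return [rng.gauss(0, 1) for _ in range(8)]

K = 200  # number of parameter checkpoints to collect ("k200" in the script name)
checkpoints = [train_task_model(seed) for seed in range(K)]

# Step 2 would then train the parameter autoencoder and LDM on
# `checkpoints`, and evaluate the generated parameters on the task.
print(len(checkpoints), len(checkpoints[0]))  # 200 8
```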

Ablation

We use the Hydra package for configuration. You can modify the configuration by editing the config files or by overriding values on the command line.

For example, to switch from CIFAR100 to CIFAR10 with Resnet18, change the config file configs/task/cifar100.yaml from:

data:
  root: cifar100 path
  dataset: cifar100
  batch_size: 64
  num_workers: 1

to:

data:
  root: cifar10 path
  dataset: cifar10
  batch_size: 64
  num_workers: 1

or you can change the amount of neural network training data (K) via the command line:

python train_p_diff.py task.param.k=xxx
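Hydra resolves dotted command-line overrides like the one above into the nested YAML config. A minimal stdlib sketch of that mapping (illustrative only; real Hydra additionally handles typing, defaults, and config composition):

```python
def apply_override(cfg: dict, override: str) -> dict:
    """Apply a Hydra-style dotted override such as 'task.param.k=50'
    to a nested dict config (toy re-implementation for illustration)."""
    keypath, value = override.split("=", 1)
    keys = keypath.split(".")
    node = cfg
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    # Naive type handling: integers stay integers, everything else is a string.
    node[keys[-1]] = int(value) if value.isdigit() else value
    return cfg

cfg = {"task": {"param": {"k": 200}}, "data": {"dataset": "cifar100"}}
apply_override(cfg, "task.param.k=50")
apply_override(cfg, "data.dataset=cifar10")
print(cfg["task"]["param"]["k"], cfg["data"]["dataset"])  # 50 cifar10
```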

Citation

If you find our work useful, please consider citing us:

@misc{wang2024neural,
      title={Neural Network Diffusion}, 
      author={Kai Wang and Zhaopan Xu and Yukun Zhou and Zelin Zang and Trevor Darrell and Zhuang Liu and Yang You},
      year={2024},
      eprint={2402.13144},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Acknowledgments

We thank Kaiming He, Dianbo Liu, Mingjia Shi, Zheng Zhu, Bo Zhao, Jiawei Liu, Yong Liu, Ziheng Qin, Zangwei Zheng, Yifan Zhang, Xiangyu Peng, Hongyan Chang, Zirui Zhu, David Yin, Dave Zhenyu Chen, Ahmad Sajedi and George Cazenavette for valuable discussions and feedback.