ArthurLeoM / peft-givens

source code of (quasi-)Givens Orthogonal Fine Tuning, integrated into the peft lib

How do I run the code #1

Open Ramyyang opened 2 months ago

Ramyyang commented 2 months ago

Thanks for sharing your work on quasi-Givens Orthogonal Fine Tuning! I'm excited to try it out but couldn't find instructions on how to use the code. Could you please provide some guidance on:

  1. Installation: What dependencies do I need and how do I install them?
  2. Usage: How do I run the code? Any examples would be helpful.
  3. Directory Structure: A brief explanation of the main files/folders.
ArthurLeoM commented 2 months ago

Hi! Thank you for your interest in our work. This is actually a modified version of the peft lib repo. We have not yet provided an installation script for this lib, but you can simply download it to your code directory and import it in your Python code.

  1. Dependencies: see the peft lib (https://github.com/huggingface/peft/tree/main); we are also working on integrating our method into that lib.
  2. Usage: you can `from peft import PeftModel, TaskType, GivensConfig, get_peft_model`, and define your Givens OFT configuration:

    givens_config = GivensConfig(
        task_type=TaskType.CAUSAL_LM,
        inference_mode=False,
        strict_oft=True,  # True for strict OFT, False for the quasi-Givens variant
        no_scaling=True,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # list of target module names to apply Givens OFT, e.g. for LLaMA
    )

    Then you can wrap your transformers model so the Givens trainable modules are applied:

    model = get_peft_model(model, givens_config)
  3. The main structure of this lib is inherited from the peft lib (v0.6.2); we only added the ./tuners/givens subdirectory to implement our method.
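Putting the steps above together, here is a minimal end-to-end sketch. It assumes the downloaded repo directory is on your PYTHONPATH so that `import peft` resolves to this modified lib (not a pip-installed peft), and the base model name is purely illustrative:

```python
# Assumes this repo's modified peft package is importable from the
# downloaded directory, shadowing any pip-installed peft.
from transformers import AutoModelForCausalLM
from peft import GivensConfig, TaskType, get_peft_model

# Load any causal LM from transformers (model name is an example only).
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Configure (quasi-)Givens OFT as described in the reply above.
givens_config = GivensConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    strict_oft=False,  # False -> quasi-Givens variant
    no_scaling=True,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Wrap the model so only the Givens parameters are trainable.
model = get_peft_model(model, givens_config)
model.print_trainable_parameters()
```

After wrapping, the model can be trained with a standard transformers Trainer loop; only the Givens OFT parameters receive gradients.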