westfish opened this issue 1 year ago
In my case, it takes about 20GB with xformers enabled. What kind of settings are you using?
I'm using a V100 32GB. You're right, I can't run it without xformers. After installing xformers, memory consumption is about 23GB.
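In case it helps anyone hitting the same memory issue, this is roughly how xformers attention is enabled in a diffusers pipeline (a minimal sketch; the model id and fp16 loading are assumptions, since the thread doesn't say which checkpoint or script is being used):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load in fp16 to roughly halve memory compared to fp32 weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model id
    torch_dtype=torch.float16,
).to("cuda")

# Swap the default attention for xformers' memory-efficient kernels.
pipe.enable_xformers_memory_efficient_attention()
```

In the training scripts the same method is usually called on the UNet directly (e.g. `unet.enable_xformers_memory_efficient_attention()`).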
Any idea why I get this?
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
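For context, that error usually means the inputs were cast to float16 while part of the model is still holding float32 weights. A tiny reproduction (not the actual training code) looks like this:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3).cuda()          # weights are float32
x = torch.randn(1, 3, 64, 64, device="cuda").half()   # input is float16

# conv(x)  # raises: Input type (torch.cuda.HalfTensor) and weight type
#          # (torch.cuda.FloatTensor) should be the same

out = conv(x.float())   # fix 1: keep the input in float32
out = conv.half()(x)    # fix 2: cast the module to float16 as well
```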
@codeaudit Probably related to your accelerate settings. This is my accelerate config:
```yaml
command_file: null
commands: null
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: 'NO'
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
gpu_ids: all
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
megatron_lm_config: {}
mixed_precision: fp16
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_name: null
tpu_zone: null
use_cpu: false
```
To configure accelerate, run `accelerate config` and set the options, or you can just copy and paste the config into ~/.cache/huggingface/accelerate/default_config.yaml.
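For what it's worth, here is a rough sketch of how that mixed_precision: fp16 setting is meant to be used in the training loop (the toy model and shapes are placeholders, not the real script); with autocast handling the casting, manual .half() calls on the inputs aren't needed:

```python
import torch
from accelerate import Accelerator

# Picks up mixed_precision from the config (or pass it explicitly as here).
accelerator = Accelerator(mixed_precision="fp16")

model = torch.nn.Linear(16, 16)                           # placeholder for the real UNet
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)   # moves the model to the GPU

x = torch.randn(4, 16, device=accelerator.device)          # inputs stay float32
with accelerator.autocast():
    loss = model(x).mean()        # autocast runs the matmuls in fp16
accelerator.backward(loss)        # handles loss scaling for fp16 training
optimizer.step()
optimizer.zero_grad()
```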
It seems that a 32GB GPU is not enough. How much GPU memory is needed for normal operation?