h2oai / h2o-llmstudio

H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/
https://h2o.ai
Apache License 2.0

[BUG] Pushing int8 model to Huggingface has error: AttributeError: 'str' object has no attribute 'dtype' #213

Closed Glavin001 closed 1 year ago

Glavin001 commented 1 year ago

How to Reproduce

  1. Train
  2. Push to Huggingface
  3. Error 😢

Config:

architecture:
    backbone_dtype: int8
    force_embedding_gradients: false
    gradient_checkpointing: true
    intermediate_dropout: 0.0
    pretrained: true
    pretrained_weights: ''
augmentation:
    random_parent_probability: 0.0
    skip_parent_probability: 0.0
    token_mask_probability: 0.0
dataset:
    add_eos_token_to_answer: true
    add_eos_token_to_prompt: true
    answer_column: output
    chatbot_author: H2O.ai
    chatbot_name: h2oGPT
    data_sample: 1.0
    data_sample_choice:
    - Train
    - Validation
    limit_chained_samples: false
    mask_prompt_labels: true
    parent_id_column: None
    personalize: false
    prompt_column:
    - instruction
    text_answer_separator: <|answer|>
    text_prompt_start: <|prompt|>
    train_dataframe: data/user/startup-interviews/json-train.pq
    validation_dataframe: None
    validation_size: 0.01
    validation_strategy: automatic
environment:
    compile_model: false
    find_unused_parameters: false
    gpus:
    - '0'
    huggingface_branch: main
    mixed_precision: true
    number_of_workers: 8
    seed: -1
    trust_remote_code: true
    use_fsdp: false
experiment_name: startup-interviews
llm_backbone: huggyllama/llama-13b
logging:
    logger: Neptune
    neptune_project: glavin001/startup-interviews
    number_of_texts: 10
output_directory: output/user/startup-interviews/
prediction:
    batch_size_inference: 0
    do_sample: false
    max_length_inference: 256
    metric: BLEU
    min_length_inference: 2
    num_beams: 1
    num_history: 2
    repetition_penalty: 1.2
    stop_tokens: ''
    temperature: 0.3
    top_k: 0
    top_p: 1.0
tokenizer:
    add_prefix_space: false
    add_prompt_answer_tokens: false
    max_length: 2048
    max_length_answer: 1808
    max_length_prompt: 192
    padding_quantile: 1.0
    use_fast: false
training:
    adaptive_kl_control: true
    advantages_gamma: 0.99
    advantages_lambda: 0.95
    batch_size: 3
    differential_learning_rate: 1.0e-05
    differential_learning_rate_layers: []
    drop_last_batch: true
    epochs: 1
    evaluate_before_training: true
    evaluation_epochs: 0.1
    grad_accumulation: 1
    gradient_clip: 0.0
    initial_kl_coefficient: 0.2
    kl_horizon: 10000
    kl_target: 6.0
    learning_rate: 0.0001
    lora: true
    lora_alpha: 16
    lora_dropout: 0.05
    lora_r: 4
    lora_target_modules: ''
    loss_function: TokenAveragedCrossEntropy
    offload_reward_model: false
    optimizer: AdamW8bit
    ppo_batch_size: 1
    ppo_clip_policy: 0.2
    ppo_clip_value: 0.2
    ppo_epochs: 4
    ppo_generate_temperature: 1.0
    reward_model: OpenAssistant/reward-model-deberta-v3-large-v2
    save_best_checkpoint: false
    scaling_factor_value_loss: 0.1
    schedule: Cosine
    train_validation_data: false
    use_rlhf: false
    warmup_epochs: 0.0
    weight_decay: 0.0

H2O report

q.app:

script_sources: ['/_f/b269447c-9681-4109-b180-7273d353629f/tmp7cgpehyt.min.js']
initialized: True
wave_utils_stack_trace_str: ### stacktrace
Traceback (most recent call last):

  File "/workspace/./app_utils/handlers.py", line 307, in handle
    await experiment_push_to_huggingface_dialog(q)

  File "/workspace/./app_utils/sections/experiment.py", line 1744, in experiment_push_to_huggingface_dialog
    cfg, model, tokenizer = load_cfg_model_tokenizer(

  File "/workspace/./app_utils/sections/experiment.py", line 1908, in load_cfg_model_tokenizer
    load_checkpoint(cfg, model, strict=False)

  File "/workspace/./llm_studio/src/utils/modeling_utils.py", line 127, in load_checkpoint
    model = load_model_weights(model, model_weights, strict, cfg)

  File "/workspace/./llm_studio/src/utils/modeling_utils.py", line 69, in load_model_weights
    model_weights = {

  File "/workspace/./llm_studio/src/utils/modeling_utils.py", line 72, in <dictcomp>
    v.dtype is torch.int8

AttributeError: 'str' object has no attribute 'dtype'
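The last two frames show the failure mode: the dict comprehension in `load_model_weights` reads `.dtype` on every checkpoint value, but the int8 checkpoint evidently contains at least one non-tensor (string) entry. A minimal stand-in sketch of the pattern and a guarded variant (not the actual LLM Studio code; `FakeTensor` and the dict keys are illustrative, and no torch dependency is used):

```python
from dataclasses import dataclass

@dataclass
class FakeTensor:
    # stands in for torch.Tensor; only the .dtype attribute matters here
    dtype: str

# A checkpoint-like dict mixing tensor entries with a stray string entry
weights = {
    "layer.0.weight": FakeTensor(dtype="int8"),
    "metadata": "int8",  # hypothetical non-tensor value, as in the bug
}

def filter_int8_unsafe(w):
    # mirrors the original comprehension: crashes on the str value
    return {k: v for k, v in w.items() if v.dtype == "int8"}

def filter_int8_guarded(w):
    # guard: only inspect .dtype on objects that actually carry one
    return {k: v for k, v in w.items()
            if isinstance(v, FakeTensor) and v.dtype == "int8"}

try:
    filter_int8_unsafe(weights)
except AttributeError as exc:
    print(exc)  # 'str' object has no attribute 'dtype'

print(list(filter_int8_guarded(weights)))  # ['layer.0.weight']
```

The guarded version simply skips non-tensor entries instead of assuming every value in the state dict is a tensor.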

q.client:

app_db: <app_utils.db.Database object at 0x7fc008725840>
client_initialized: True
mode_curr: error
theme_dark: True
default_aws_bucket_name: bucket_name
default_kaggle_username: 
set_max_epochs: 50
set_max_batch_size: 256
set_max_gradient_clip: 10
default_number_of_workers: 8
default_logger: Neptune
default_neptune_project: glavin001/startup-interviews
default_openai_azure: False
default_openai_api_base: https://example-endpoint.openai.azure.com
default_openai_api_deployment_id: deployment-name
default_openai_api_version: 2023-05-15
default_gpt_eval_max: 100
delete_dialogs: True
chart_plot_max_points: 1000
init_interface: True
notification_bar: None
nav/active: experiment/list
experiment/list/mode: train
dataset/list/df_datasets:    id                name  ... validation rows  labels
1   2  startup-interviews  ...            None  output
0   1               oasst  ...            None  output

[2 rows x 10 columns]
experiment/list/df_experiments:    id                name   mode  ... progress    status               info
0   1  startup-interviews  train  ...      1.0  finished  Runtime: 00:28:06

[1 rows x 16 columns]
expander: True
dataset/list: False
dataset/list/table: []
experiment/list: True
experiment/list/table: ['0']
dataset/import: False
dataset/import/source: Upload
dataset/import/id: None
dataset/import/cfg_file: text_causal_language_modeling_config
dataset/import/local_upload: ['/_f/dc662178-96c4-4ae1-8358-4aa6d339954f/json-train.pq']
dataset/import/local_path: /workspace/data/user/json-train.pq
dataset/import/path: data/user/tmp
dataset/import/name: startup-interviews
dataset/import/edit: False
dataset/import/cfg_category: text
dataset/import/cfg: ConfigProblemBase(output_directory='output/text_causal_language_modeling_config', experiment_name='hallowed-sunfish', _parent_experiment='', llm_backbone='EleutherAI/pythia-2.8b-deduped', dataset=ConfigNLPCausalLMDataset(dataset_class=<class 'llm_studio.src.datasets.text_causal_language_modeling_ds.CustomDataset'>, personalize=False, chatbot_name='h2oGPT', chatbot_author='H2O.ai', train_dataframe='data/user/startup-interviews/json-train.pq', validation_strategy='automatic', validation_dataframe='None', validation_size=0.01, data_sample=1.0, data_sample_choice=('Train', 'Validation'), prompt_column=('instruction',), answer_column='output', parent_id_column='None', text_prompt_start='<|prompt|>', text_answer_separator='<|answer|>', limit_chained_samples=False, add_eos_token_to_prompt=True, add_eos_token_to_answer=True, mask_prompt_labels=True, _allowed_file_extensions=('csv', 'pq')), tokenizer=ConfigNLPCausalLMTokenizer(max_length_prompt=256, max_length_answer=256, max_length=512, add_prompt_answer_tokens=False, padding_quantile=1.0, use_fast=True, add_prefix_space=False), architecture=ConfigNLPCausalLMArchitecture(model_class=<class 'llm_studio.src.models.text_causal_language_modeling_model.Model'>, reward_model_class=<class 'llm_studio.src.models.text_reward_model.RewardModel'>, pretrained=True, backbone_dtype='float16', gradient_checkpointing=False, force_embedding_gradients=False, intermediate_dropout=0, pretrained_weights=''), training=ConfigNLPCausalLMTraining(loss_class=<class 'llm_studio.src.losses.text_causal_language_modeling_losses.Losses'>, loss_function='TokenAveragedCrossEntropy', optimizer='AdamW', learning_rate=0.0001, differential_learning_rate_layers=(), differential_learning_rate=1e-05, batch_size=3, drop_last_batch=True, epochs=1, schedule='Cosine', warmup_epochs=0.0, weight_decay=0.0, gradient_clip=0.0, grad_accumulation=1, lora=True, lora_r=4, lora_alpha=16, lora_dropout=0.05, lora_target_modules='', 
save_best_checkpoint=False, evaluation_epochs=1.0, evaluate_before_training=False, train_validation_data=False, use_rlhf=False, reward_model='OpenAssistant/reward-model-deberta-v3-large-v2', adaptive_kl_control=True, initial_kl_coefficient=0.2, kl_target=6.0, kl_horizon=10000, advantages_gamma=0.99, advantages_lambda=0.95, ppo_clip_policy=0.2, ppo_clip_value=0.2, scaling_factor_value_loss=0.1, ppo_epochs=4, ppo_batch_size=1, ppo_generate_temperature=1.0, offload_reward_model=False), augmentation=ConfigNLPAugmentation(nlp_augmentations_class=<class 'llm_studio.src.augmentations.nlp_aug.BaseNLPAug'>, token_mask_probability=0, skip_parent_probability=0, random_parent_probability=0), prediction=ConfigNLPCausalLMPrediction(metric_class=<class 'llm_studio.src.metrics.text_causal_language_modeling_metrics.Metrics'>, metric='GPT3.5', min_length_inference=2, max_length_inference=256, batch_size_inference=0, do_sample=False, num_beams=1, temperature=0.3, repetition_penalty=1.2, stop_tokens='', top_k=0, top_p=1.0, num_history=2), environment=ConfigNLPCausalLMEnvironment(gpus=('0',), mixed_precision=True, compile_model=False, use_fsdp=False, find_unused_parameters=False, trust_remote_code=True, huggingface_branch='main', number_of_workers=4, seed=-1, _seed=0, _distributed=False, _distributed_inference=True, _local_rank=0, _world_size=1, _curr_step=0, _curr_val_step=0, _rank=0, _device='cuda', _cpu_comm=None), logging=ConfigNLPCausalLMLogging(logger='None', neptune_project='', _neptune_debug=False, plots_class=<class 'llm_studio.src.plots.text_causal_language_modeling_plots.Plots'>, number_of_texts=10, _logger=None))
dataset/import/cfg/dataframe:                                            instruction  ...                                              start
0    What are the two stages of a startup's journey...  ...  Post product market fit and pre product market...
1    How can startups improve their product and lea...  ...  Faster that you iterate, the faster you learn ...
2    How do YC startups compete against larger incu...  ...  Interesting things that we've learned is that ...
3    What is one aspect of startup advice that is o...  ...  Smart you are. And there's almost no talk abou...
4    What factors do you think contribute to the su...  ...  Look at your peer group in university or in hi...
..                                                 ...  ...                                                ...
549  What is the economic argument for doing a star...  ...  Number 4, the economic argument. Doing a start...
550  Can you please provide an overview of the agen...  ...  So agenda, I'll go over logistics real quick, ...
551  What is your opinion on the effectiveness of s...  ...  Your opinion like what makes an effective soci...
552  What is the typical range of investment for se...  ...  Seed funds are typically funds that are betwee...
553  What advice would you give to someone who is t...  ...  The thing I would say is when you grow up, you...

[554 rows x 7 columns]
dataset/import/cfg/train_dataframe: data/user/tmp/json-train.pq
dataset/import/cfg/validation_dataframe: None
dataset/import/cfg/prompt_column: ['instruction']
dataset/import/cfg/answer_column: output
dataset/import/cfg/parent_id_column: None
dataset/import/4: True
dataset/merge: False
dataset/import/6: True
dataset/import/3/edit: False
dataset/list/delete: False
dataset/newexperiment: False
dataset/edit: False
dataset/delete/dialog/single: False
experiment/start: True
experiment/start/cfg_category: text
experiment/start/cfg_file: text_causal_language_modeling_config
experiment/start/cfg_experiment_prev: None
experiment/start/cfg_file_prev: text_causal_language_modeling_config
experiment/start/prev_dataset: 2
experiment/start/cfg_sub: 
experiment/start/dataset: 2
experiment/start/cfg_mode/mode: train
experiment/start/cfg_mode/from_dataset: False
experiment/start/cfg_mode/from_cfg: False
experiment/start/cfg_mode/from_default: False
experiment/start/cfg_mode/from_dataset_args: True
experiment/start/cfg: ConfigProblemBase(output_directory='output/text_causal_language_modeling_config', experiment_name='startup-interviews', _parent_experiment='', llm_backbone='huggyllama/llama-13b', dataset=ConfigNLPCausalLMDataset(dataset_class=<class 'llm_studio.src.datasets.text_causal_language_modeling_ds.CustomDataset'>, personalize=False, chatbot_name='h2oGPT', chatbot_author='H2O.ai', train_dataframe='data/user/startup-interviews/json-train.pq', validation_strategy='automatic', validation_dataframe='None', validation_size=0.01, data_sample=1, data_sample_choice=('Train', 'Validation'), prompt_column=('instruction',), answer_column='output', parent_id_column='None', text_prompt_start='<|prompt|>', text_answer_separator='<|answer|>', limit_chained_samples=False, add_eos_token_to_prompt=True, add_eos_token_to_answer=True, mask_prompt_labels=True, _allowed_file_extensions=('csv', 'pq')), tokenizer=ConfigNLPCausalLMTokenizer(max_length_prompt=192, max_length_answer=1808, max_length=2048, add_prompt_answer_tokens=False, padding_quantile=1, use_fast=True, add_prefix_space=False), architecture=ConfigNLPCausalLMArchitecture(model_class=<class 'llm_studio.src.models.text_causal_language_modeling_model.Model'>, reward_model_class=<class 'llm_studio.src.models.text_reward_model.RewardModel'>, pretrained=True, backbone_dtype='int8', gradient_checkpointing=True, force_embedding_gradients=False, intermediate_dropout=0, pretrained_weights=''), training=ConfigNLPCausalLMTraining(loss_class=<class 'llm_studio.src.losses.text_causal_language_modeling_losses.Losses'>, loss_function='TokenAveragedCrossEntropy', optimizer='AdamW8bit', learning_rate=0.0001, differential_learning_rate_layers=(), differential_learning_rate=1e-05, batch_size=3, drop_last_batch=True, epochs=1, schedule='Cosine', warmup_epochs=0, weight_decay=0, gradient_clip=0, grad_accumulation=1, lora=True, lora_r=4, lora_alpha=16, lora_dropout=0.05, lora_target_modules='', save_best_checkpoint=False, 
evaluation_epochs=0.1, evaluate_before_training=True, train_validation_data=False, use_rlhf=False, reward_model='OpenAssistant/reward-model-deberta-v3-large-v2', adaptive_kl_control=True, initial_kl_coefficient=0.2, kl_target=6.0, kl_horizon=10000, advantages_gamma=0.99, advantages_lambda=0.95, ppo_clip_policy=0.2, ppo_clip_value=0.2, scaling_factor_value_loss=0.1, ppo_epochs=4, ppo_batch_size=1, ppo_generate_temperature=1.0, offload_reward_model=False), augmentation=ConfigNLPAugmentation(nlp_augmentations_class=<class 'llm_studio.src.augmentations.nlp_aug.BaseNLPAug'>, token_mask_probability=0, skip_parent_probability=0, random_parent_probability=0), prediction=ConfigNLPCausalLMPrediction(metric_class=<class 'llm_studio.src.metrics.text_causal_language_modeling_metrics.Metrics'>, metric='BLEU', min_length_inference=2, max_length_inference=256, batch_size_inference=0, do_sample=False, num_beams=1, temperature=0.3, repetition_penalty=1.2, stop_tokens='', top_k=0, top_p=1, num_history=2), environment=ConfigNLPCausalLMEnvironment(gpus=('0',), mixed_precision=True, compile_model=False, use_fsdp=False, find_unused_parameters=False, trust_remote_code=True, huggingface_branch='main', number_of_workers=8, seed=-1, _seed=0, _distributed=False, _distributed_inference=True, _local_rank=0, _world_size=1, _curr_step=0, _curr_val_step=0, _rank=0, _device='cuda', _cpu_comm=None), logging=ConfigNLPCausalLMLogging(logger='Neptune', neptune_project='glavin001/startup-interviews', _neptune_debug=False, plots_class=<class 'llm_studio.src.plots.text_causal_language_modeling_plots.Plots'>, number_of_texts=10, _logger=None))
experiment/start/dataset_prev: 2
experiment/start/cfg/output_directory: output/text_causal_language_modeling_config
experiment/start/cfg/experiment_name: startup-interviews
experiment/start/trigger_ks: ['train_dataframe', 'logger']
experiment/start/cfg/_parent_experiment: 
experiment/start/cfg/llm_backbone: huggyllama/llama-13b
experiment/start/cfg/dataset: ConfigNLPCausalLMDataset(dataset_class=<class 'llm_studio.src.datasets.text_causal_language_modeling_ds.CustomDataset'>, personalize=False, chatbot_name='h2oGPT', chatbot_author='H2O.ai', train_dataframe='/path/to/train.csv', validation_strategy='automatic', validation_dataframe='', validation_size=0.01, data_sample=1.0, data_sample_choice=('Train', 'Validation'), prompt_column=('instruction', 'input'), answer_column='output', parent_id_column='None', text_prompt_start='<|prompt|>', text_answer_separator='<|answer|>', limit_chained_samples=False, add_eos_token_to_prompt=True, add_eos_token_to_answer=True, mask_prompt_labels=True, _allowed_file_extensions=('csv', 'pq'))
experiment/start/cfg/dataset_class: <class 'llm_studio.src.datasets.text_causal_language_modeling_ds.CustomDataset'>
experiment/start/cfg/personalize: False
experiment/start/cfg/chatbot_name: h2oGPT
experiment/start/cfg/chatbot_author: H2O.ai
experiment/start/cfg/dataframe: Empty DataFrame
Columns: [instruction, input, output, instruction_length, output_length, title, start]
Index: []
experiment/start/cfg/train_dataframe: data/user/startup-interviews/json-train.pq
experiment/start/cfg/validation_strategy: automatic
experiment/start/cfg/validation_dataframe: None
experiment/start/cfg/validation_size: 0.01
experiment/start/cfg/data_sample: 1
experiment/start/cfg/data_sample_choice: ['Train', 'Validation']
experiment/start/cfg/prompt_column: ['instruction']
experiment/start/cfg/answer_column: output
experiment/start/cfg/parent_id_column: None
experiment/start/cfg/text_prompt_start: <|prompt|>
experiment/start/cfg/text_answer_separator: <|answer|>
experiment/start/cfg/limit_chained_samples: False
experiment/start/cfg/mask_prompt_labels: True
experiment/start/cfg/_allowed_file_extensions: ('csv', 'pq')
experiment/start/cfg/_nesting: <llm_studio.src.nesting.Nesting object at 0x7fbf8ee2b250>
experiment/start/cfg/_order: <llm_studio.src.order.Order object at 0x7fbf8ee2b640>
experiment/start/cfg/_possible_values: {'output_directory': None, 'experiment_name': None, '_parent_experiment': None, 'llm_backbone': String(values=('h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2', 'h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b', 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v1', 'h2oai/h2ogpt-oig-oasst1-512-6.9b', 'h2oai/h2ogpt-oasst1-512-20b', 'EleutherAI/gpt-neo-1.3B', 'EleutherAI/gpt-j-6B', 'EleutherAI/gpt-neox-20b', 'facebook/opt-125m', 'facebook/opt-2.7b', 'facebook/opt-6.7b', 'facebook/opt-13b', 'EleutherAI/pythia-1b-deduped', 'EleutherAI/pythia-2.8b-deduped', 'EleutherAI/pythia-6.9b-deduped', 'EleutherAI/pythia-12b-deduped', 'cerebras/Cerebras-GPT-13B', 'stabilityai/stablelm-base-alpha-7b', 'togethercomputer/GPT-NeoXT-Chat-Base-20B'), allow_custom=True, placeholder=None), 'dataset': None, 'tokenizer': None, 'architecture': None, 'training': None, 'augmentation': None, 'prediction': None, 'environment': None, 'logging': None}
experiment/start/cfg/_visibility: {'output_directory': -1, 'experiment_name': 0, '_parent_experiment': 0, 'llm_backbone': 0, 'dataset': 0, 'tokenizer': 0, 'architecture': 0, 'training': 0, 'augmentation': 0, 'prediction': 0, 'environment': 0, 'logging': 0, '_possible_values': 0}
experiment/start/cfg/tokenizer: ConfigNLPCausalLMTokenizer(max_length_prompt=256, max_length_answer=256, max_length=512, add_prompt_answer_tokens=False, padding_quantile=1.0, use_fast=True, add_prefix_space=False)
experiment/start/cfg/max_length_prompt: 192
experiment/start/cfg/max_length_answer: 1808
experiment/start/cfg/max_length: 2048
experiment/start/cfg/padding_quantile: 1
experiment/start/cfg/use_fast: True
experiment/start/cfg/add_prefix_space: False
experiment/start/cfg/_padding_side: left
experiment/start/cfg/architecture: ConfigNLPCausalLMArchitecture(model_class=<class 'llm_studio.src.models.text_causal_language_modeling_model.Model'>, reward_model_class=<class 'llm_studio.src.models.text_reward_model.RewardModel'>, pretrained=True, backbone_dtype='float16', gradient_checkpointing=False, force_embedding_gradients=False, intermediate_dropout=0, pretrained_weights='')
experiment/start/cfg/model_class: <class 'llm_studio.src.models.text_causal_language_modeling_model.Model'>
experiment/start/cfg/reward_model_class: <class 'llm_studio.src.models.text_reward_model.RewardModel'>
experiment/start/cfg/pretrained: True
experiment/start/cfg/backbone_dtype: int8
experiment/start/cfg/gradient_checkpointing: True
experiment/start/cfg/force_embedding_gradients: False
experiment/start/cfg/intermediate_dropout: 0
experiment/start/cfg/pretrained_weights: 
experiment/start/cfg/training: ConfigNLPCausalLMTraining(loss_class=<class 'llm_studio.src.losses.text_causal_language_modeling_losses.Losses'>, loss_function='TokenAveragedCrossEntropy', optimizer='AdamW', learning_rate=0.0001, differential_learning_rate_layers=(), differential_learning_rate=1e-05, batch_size=3, drop_last_batch=True, epochs=1, schedule='Cosine', warmup_epochs=0.0, weight_decay=0.0, gradient_clip=0.0, grad_accumulation=1, lora=True, lora_r=4, lora_alpha=16, lora_dropout=0.05, lora_target_modules='', save_best_checkpoint=False, evaluation_epochs=1.0, evaluate_before_training=False, train_validation_data=False, use_rlhf=False, reward_model='OpenAssistant/reward-model-deberta-v3-large-v2', adaptive_kl_control=True, initial_kl_coefficient=0.2, kl_target=6.0, kl_horizon=10000, advantages_gamma=0.99, advantages_lambda=0.95, ppo_clip_policy=0.2, ppo_clip_value=0.2, scaling_factor_value_loss=0.1, ppo_epochs=4, ppo_batch_size=1, ppo_generate_temperature=1.0, offload_reward_model=False)
experiment/start/cfg/loss_class: <class 'llm_studio.src.losses.text_causal_language_modeling_losses.Losses'>
experiment/start/cfg/loss_function: TokenAveragedCrossEntropy
experiment/start/cfg/optimizer: AdamW8bit
experiment/start/cfg/learning_rate: 0.0001
experiment/start/cfg/differential_learning_rate_layers: []
experiment/start/cfg/differential_learning_rate: 1e-05
experiment/start/cfg/batch_size: 3
experiment/start/cfg/drop_last_batch: True
experiment/start/cfg/epochs: 1
experiment/start/cfg/schedule: Cosine
experiment/start/cfg/warmup_epochs: 0
experiment/start/cfg/weight_decay: 0
experiment/start/cfg/gradient_clip: 0
experiment/start/cfg/grad_accumulation: 1
experiment/start/cfg/lora: True
experiment/start/cfg/lora_r: 4
experiment/start/cfg/lora_alpha: 16
experiment/start/cfg/lora_dropout: 0.05
experiment/start/cfg/lora_target_modules: 
experiment/start/cfg/save_best_checkpoint: False
experiment/start/cfg/evaluation_epochs: 0.1
experiment/start/cfg/evaluate_before_training: True
experiment/start/cfg/train_validation_data: False
experiment/start/cfg/use_rlhf: False
experiment/start/cfg/reward_model: OpenAssistant/reward-model-deberta-v3-large-v2
experiment/start/cfg/adaptive_kl_control: True
experiment/start/cfg/initial_kl_coefficient: 0.2
experiment/start/cfg/kl_target: 6.0
experiment/start/cfg/kl_horizon: 10000
experiment/start/cfg/advantages_gamma: 0.99
experiment/start/cfg/advantages_lambda: 0.95
experiment/start/cfg/ppo_clip_policy: 0.2
experiment/start/cfg/ppo_clip_value: 0.2
experiment/start/cfg/scaling_factor_value_loss: 0.1
experiment/start/cfg/ppo_epochs: 4
experiment/start/cfg/ppo_batch_size: 1
experiment/start/cfg/ppo_generate_temperature: 1.0
experiment/start/cfg/offload_reward_model: False
experiment/start/cfg/augmentation: ConfigNLPAugmentation(nlp_augmentations_class=<class 'llm_studio.src.augmentations.nlp_aug.BaseNLPAug'>, token_mask_probability=0, skip_parent_probability=0, random_parent_probability=0)
experiment/start/cfg/nlp_augmentations_class: <class 'llm_studio.src.augmentations.nlp_aug.BaseNLPAug'>
experiment/start/cfg/token_mask_probability: 0
experiment/start/cfg/skip_parent_probability: 0
experiment/start/cfg/random_parent_probability: 0
experiment/start/cfg/prediction: ConfigNLPCausalLMPrediction(metric_class=<class 'llm_studio.src.metrics.text_causal_language_modeling_metrics.Metrics'>, metric='GPT3.5', min_length_inference=2, max_length_inference=256, batch_size_inference=0, do_sample=False, num_beams=1, temperature=0.3, repetition_penalty=1.2, stop_tokens='', top_k=0, top_p=1.0, num_history=2)
experiment/start/cfg/metric_class: <class 'llm_studio.src.metrics.text_causal_language_modeling_metrics.Metrics'>
experiment/start/cfg/metric: BLEU
experiment/start/cfg/min_length_inference: 2
experiment/start/cfg/max_length_inference: 256
experiment/start/cfg/batch_size_inference: 0
experiment/start/cfg/do_sample: False
experiment/start/cfg/num_beams: 1
experiment/start/cfg/temperature: 0.3
experiment/start/cfg/repetition_penalty: 1.2
experiment/start/cfg/top_k: 0
experiment/start/cfg/top_p: 1
experiment/start/cfg/num_history: 2
experiment/start/cfg/environment: ConfigNLPCausalLMEnvironment(gpus=('0',), mixed_precision=True, compile_model=False, use_fsdp=False, find_unused_parameters=False, trust_remote_code=True, huggingface_branch='main', number_of_workers=4, seed=-1, _seed=0, _distributed=False, _distributed_inference=True, _local_rank=0, _world_size=1, _curr_step=0, _curr_val_step=0, _rank=0, _device='cuda', _cpu_comm=None)
experiment/start/cfg/gpus: ['0']
experiment/start/cfg/mixed_precision: True
experiment/start/cfg/compile_model: False
experiment/start/cfg/use_fsdp: False
experiment/start/cfg/find_unused_parameters: False
experiment/start/cfg/trust_remote_code: True
experiment/start/cfg/huggingface_branch: main
experiment/start/cfg/number_of_workers: 8
experiment/start/cfg/seed: -1
experiment/start/cfg/_seed: 0
experiment/start/cfg/_distributed: False
experiment/start/cfg/_distributed_inference: True
experiment/start/cfg/_local_rank: 0
experiment/start/cfg/_world_size: 1
experiment/start/cfg/_curr_step: 0
experiment/start/cfg/_curr_val_step: 0
experiment/start/cfg/_rank: 0
experiment/start/cfg/_device: cuda
experiment/start/cfg/_cpu_comm: None
experiment/start/cfg/logging: ConfigNLPCausalLMLogging(logger='None', neptune_project='', _neptune_debug=False, plots_class=<class 'llm_studio.src.plots.text_causal_language_modeling_plots.Plots'>, number_of_texts=10, _logger=None)
experiment/start/cfg/logger: Neptune
experiment/start/cfg/neptune_project: glavin001/startup-interviews
experiment/start/cfg/_neptune_debug: False
experiment/start/cfg/plots_class: <class 'llm_studio.src.plots.text_causal_language_modeling_plots.Plots'>
experiment/start/cfg/number_of_texts: 10
experiment/start/cfg/_logger: None
experiment/start/from_yaml: False
experiment/start/run: True
experiment/list/refresh: False
experiment/list/compare: False
experiment/list/stop: False
experiment/list/delete: False
experiment/list/new: False
experiment/list/rename: False
experiment/list/stop/table: False
experiment/list/delete/table/dialog: False
settings: True
save_settings: False
load_settings: False
restore_default_settings: False
dataset/import/s3_bucket: bucket_name
experiment/display/id: 0
experiment/display/logs_path: None
experiment/display/preds_path: None
experiment/display/tab: experiment/display/charts
experiment/display/experiment_id: 1
experiment/display/experiment: <app_utils.db.Experiment object at 0x7fbf8eed48b0>
experiment/display/experiment_path: output/user/startup-interviews/
...
4.9792972446479605e-08, 7.448797957526621e-09, 0.0]}}, 'internal': {'total_training_steps': {'steps': [0], 'values': [546.0]}, 'total_validation_steps': {'steps': [0], 'values': [60.0]}, 'global_start_time': {'steps': [0], 'values': [1688225282.4341586]}, 'current_val_step': {'steps': [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 51, 54, 57, 60, 63, 66], 'values': [3.0, 6.0, 9.0, 12.0, 15.0, 18.0, 21.0, 24.0, 27.0, 30.0, 33.0, 36.0, 39.0, 42.0, 45.0, 48.0, 51.0, 54.0, 57.0, 60.0, 63.0, 66.0]}, 'current_step': {'steps': [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 51, 54, 57, 60, 63, 66, 69, 72, 75, 78, 81, 84, 87, 90, 93, 96, 99, 102, 105, 108, 111, 114, 117, 120, 123, 126, 129, 132, 135, 138, 141, 144, 147, 150, 153, 156, 159, 162, 165, 168, 171, 174, 177, 180, 183, 186, 189, 192, 195, 198, 201, 204, 207, 210, 213, 216, 219, 222, 225, 228, 231, 234, 237, 240, 243, 246, 249, 252, 255, 258, 261, 264, 267, 270, 273, 276, 279, 282, 285, 288, 291, 294, 297, 300, 303, 306, 309, 312, 315, 318, 321, 324, 327, 330, 333, 336, 339, 342, 345, 348, 351, 354, 357, 360, 363, 366, 369, 372, 375, 378, 381, 384, 387, 390, 393, 396, 399, 402, 405, 408, 411, 414, 417, 420, 423, 426, 429, 432, 435, 438, 441, 444, 447, 450, 453, 456, 459, 462, 465, 468, 471, 474, 477, 480, 483, 486, 489, 492, 495, 498, 501, 504, 507, 510, 513, 516, 519, 522, 525, 528, 531, 534, 537, 540, 543, 546], 'values': [3.0, 6.0, 9.0, 12.0, 15.0, 18.0, 21.0, 24.0, 27.0, 30.0, 33.0, 36.0, 39.0, 42.0, 45.0, 48.0, 51.0, 54.0, 57.0, 60.0, 63.0, 66.0, 69.0, 72.0, 75.0, 78.0, 81.0, 84.0, 87.0, 90.0, 93.0, 96.0, 99.0, 102.0, 105.0, 108.0, 111.0, 114.0, 117.0, 120.0, 123.0, 126.0, 129.0, 132.0, 135.0, 138.0, 141.0, 144.0, 147.0, 150.0, 153.0, 156.0, 159.0, 162.0, 165.0, 168.0, 171.0, 174.0, 177.0, 180.0, 183.0, 186.0, 189.0, 192.0, 195.0, 198.0, 201.0, 204.0, 207.0, 210.0, 213.0, 216.0, 219.0, 222.0, 225.0, 228.0, 231.0, 234.0, 237.0, 240.0, 243.0, 246.0, 249.0, 252.0, 255.0, 258.0, 
261.0, 264.0, 267.0, 270.0, 273.0, 276.0, 279.0, 282.0, 285.0, 288.0, 291.0, 294.0, 297.0, 300.0, 303.0, 306.0, 309.0, 312.0, 315.0, 318.0, 321.0, 324.0, 327.0, 330.0, 333.0, 336.0, 339.0, 342.0, 345.0, 348.0, 351.0, 354.0, 357.0, 360.0, 363.0, 366.0, 369.0, 372.0, 375.0, 378.0, 381.0, 384.0, 387.0, 390.0, 393.0, 396.0, 399.0, 402.0, 405.0, 408.0, 411.0, 414.0, 417.0, 420.0, 423.0, 426.0, 429.0, 432.0, 435.0, 438.0, 441.0, 444.0, 447.0, 450.0, 453.0, 456.0, 459.0, 462.0, 465.0, 468.0, 471.0, 474.0, 477.0, 480.0, 483.0, 486.0, 489.0, 492.0, 495.0, 498.0, 501.0, 504.0, 507.0, 510.0, 513.0, 516.0, 519.0, 522.0, 525.0, 528.0, 531.0, 534.0, 537.0, 540.0, 543.0, 546.0]}, 'epoch': {'steps': [546], 'values': [1.0]}}}
experiment/display/refresh: False
experiment/display/download_logs: False
experiment/list/current: False
experiment/display/logs: True
experiment/display/validation_prediction_insights: True
experiment/display/summary: True
home: False
experiment/display/train_data_insights: True
experiment/display/config: True
experiment/display/download_predictions: False
experiment/display/download_model: False
experiment/display/push_to_huggingface: True
keep_meta: False
experiment/display/push_to_huggingface/account_name: Glavin001
experiment/display/push_to_huggingface/model_name: Glavin001/llama-13b-startup-interviews-v1
experiment/display/push_to_huggingface/device: cuda:0
experiment/display/push_to_huggingface_submit: True
cancel: False
report_error: True

stacktrace:

Traceback (most recent call last):

  File "/workspace/./app_utils/handlers.py", line 307, in handle
    await experiment_push_to_huggingface_dialog(q)

  File "/workspace/./app_utils/sections/experiment.py", line 1744, in experiment_push_to_huggingface_dialog
    cfg, model, tokenizer = load_cfg_model_tokenizer(

  File "/workspace/./app_utils/sections/experiment.py", line 1908, in load_cfg_model_tokenizer
    load_checkpoint(cfg, model, strict=False)

  File "/workspace/./llm_studio/src/utils/modeling_utils.py", line 127, in load_checkpoint
    model = load_model_weights(model, model_weights, strict, cfg)

  File "/workspace/./llm_studio/src/utils/modeling_utils.py", line 69, in load_model_weights
    model_weights = {

  File "/workspace/./llm_studio/src/utils/modeling_utils.py", line 72, in <dictcomp>
    v.dtype is torch.int8

AttributeError: 'str' object has no attribute 'dtype'

Error:

None
psinger commented 1 year ago

Thanks for reporting. Will take a look.

For now I would suggest using int4; int8 also has several issues in bitsandbytes.
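For reference, the dtype is selected via the `architecture` section of the experiment config, so switching the config quoted above to 4-bit would look like the following (assuming an `int4` option is exposed in your LLM Studio build):

```yaml
architecture:
    backbone_dtype: int4   # instead of int8
```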

Glavin001 commented 1 year ago

Will try int4 next. Float16 worked. Thanks!

Glavin001 commented 1 year ago

int4 also worked.