huggingface / autotrain-advanced

🤗 AutoTrain Advanced
https://huggingface.co/autotrain
Apache License 2.0

[BUG] AutoTrain using Kaggle UI #625

Closed by avcode-exe 5 months ago

avcode-exe commented 5 months ago

Prerequisites

Backend: Other cloud providers

Interface Used: UI

CLI Command: No response

UI Screenshots & Parameters

(screenshot of the UI parameters attached)

Error Logs

Log from Kaggle: (screenshot attached). Log from the AutoTrain UI (connected to Kaggle), newest entries first:

Device 0: Tesla P100-PCIE-16GB - 109.9MiB/16384MiB

-----------------

INFO | 2024-05-07 07:04:24 | autotrain.app:handle_form:463 - hardware: Local

INFO | 2024-05-07 07:04:04 | autotrain.app:handle_form:463 - hardware: Local

INFO | 2024-05-07 07:02:50 | autotrain.app:handle_form:463 - hardware: Local

INFO | 2024-05-07 07:01:09 | autotrain.app:fetch_params:214 - Task: llm:dpo

INFO | 2024-05-07 07:00:45 | autotrain.app:fetch_params:214 - Task: llm:dpo

INFO | 2024-05-07 07:00:41 | autotrain.app:fetch_params:214 - Task: llm:sft

INFO | 2024-05-07 07:00:02 | autotrain.app:<module>:156 - AutoTrain started successfully

WARNING | 2024-05-07 06:59:59 | autotrain.trainers.common:__init__:174 - Parameters not supplied by user and set to default: logging_steps, batch_size, train_split, weight_decay, warmup_ratio, auto_find_batch_size, text_column, epochs, seed, push_to_hub, valid_split, scheduler, token, model, project_name, optimizer, target_column, username, lr, max_grad_norm, data_path, save_total_limit, evaluation_strategy, max_seq_length, gradient_accumulation

WARNING | 2024-05-07 06:59:59 | autotrain.trainers.common:__init__:174 - Parameters not supplied by user and set to default: logging_steps, batch_size, train_split, weight_decay, warmup_ratio, auto_find_batch_size, epochs, seed, push_to_hub, valid_split, tags_column, scheduler, token, tokens_column, model, project_name, optimizer, username, lr, max_grad_norm, data_path, save_total_limit, evaluation_strategy, max_seq_length, gradient_accumulation

WARNING | 2024-05-07 06:59:59 | autotrain.trainers.common:__init__:174 - Parameters not supplied by user and set to default: allow_tf32, epochs, seed, push_to_hub, pre_compute_text_embeddings, sample_batch_size, scheduler, validation_epochs, class_labels_conditioning, token, checkpointing_steps, username, validation_prompt, xl, tokenizer_max_length, num_cycles, validation_images, text_encoder_use_attention_mask, num_validation_images, dataloader_num_workers, adam_beta1, num_class_images, lr_power, adam_epsilon, revision, resume_from_checkpoint, adam_beta2, local_rank, logging, rank, warmup_steps, prior_preservation, model, tokenizer, prior_loss_weight, project_name, class_image_path, center_crop, adam_weight_decay, checkpoints_total_limit, scale_lr, max_grad_norm, class_prompt, prior_generation_precision, image_path

WARNING | 2024-05-07 06:59:59 | autotrain.trainers.common:__init__:174 - Parameters not supplied by user and set to default: num_trials, train_split, seed, push_to_hub, valid_split, time_limit, token, model, target_columns, project_name, username, categorical_columns, data_path, id_column, numerical_columns, task

WARNING | 2024-05-07 06:59:59 | autotrain.trainers.common:__init__:174 - Parameters not supplied by user and set to default: lora_r, max_target_length, weight_decay, warmup_ratio, auto_find_batch_size, text_column, epochs, seed, push_to_hub, scheduler, token, target_column, username, optimizer, lr, data_path, lora_alpha, evaluation_strategy, max_seq_length, logging_steps, batch_size, train_split, valid_split, peft, quantization, model, lora_dropout, project_name, max_grad_norm, save_total_limit, gradient_accumulation

WARNING | 2024-05-07 06:59:59 | autotrain.trainers.common:__init__:174 - Parameters not supplied by user and set to default: logging_steps, batch_size, train_split, weight_decay, warmup_ratio, auto_find_batch_size, image_column, epochs, seed, push_to_hub, valid_split, scheduler, token, model, project_name, optimizer, username, target_column, lr, max_grad_norm, data_path, save_total_limit, evaluation_strategy, gradient_accumulation

WARNING | 2024-05-07 06:59:59 | autotrain.trainers.common:__init__:174 - Parameters not supplied by user and set to default: logging_steps, batch_size, train_split, weight_decay, warmup_ratio, auto_find_batch_size, text_column, epochs, seed, push_to_hub, valid_split, scheduler, token, model, project_name, optimizer, target_column, username, lr, max_grad_norm, data_path, save_total_limit, evaluation_strategy, max_seq_length, gradient_accumulation

WARNING | 2024-05-07 06:59:59 | autotrain.trainers.common:__init__:174 - Parameters not supplied by user and set to default: merge_adapter, add_eos_token, lora_r, max_prompt_length, weight_decay, warmup_ratio, auto_find_batch_size, text_column, seed, rejected_text_column, push_to_hub, scheduler, prompt_text_column, token, optimizer, username, lr, data_path, model_ref, lora_alpha, evaluation_strategy, logging_steps, dpo_beta, batch_size, train_split, trainer, valid_split, use_flash_attention_2, disable_gradient_checkpointing, model, lora_dropout, project_name, max_grad_norm, save_total_limit, model_max_length, gradient_accumulation

INFO | 2024-05-07 06:59:59 | autotrain.app:<module>:32 - Starting AutoTrain...

Additional Information

Got HTTP error code 409 (Conflict).
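
For context, 409 is the Hub's Conflict status, returned when you try to create a repo whose name is already taken. A minimal sketch of how the conflict surfaces via huggingface_hub (hypothetical repo name; assumes the library is installed and you are logged in with `huggingface-cli login`):

```python
# Minimal sketch of the 409 Conflict: create_repo with exist_ok=False
# (the default) fails if a repo with that name already exists.
from huggingface_hub import create_repo
from huggingface_hub.utils import HfHubHTTPError

try:
    create_repo("test", repo_type="model")  # fails if "test" already exists
except HfHubHTTPError as err:
    print(err)  # e.g. "409 Client Error: Conflict for url: ..."
```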

abhishekkrthakur commented 5 months ago

You probably have a repo called "test" in your HF account. Please use a unique project name, or keep the one that's generated by default and rename it later.
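
As a workaround, you could check that the name is free before starting a job. A minimal sketch, not part of AutoTrain itself, assuming huggingface_hub with `HfApi.repo_exists` and an `HF_TOKEN` environment variable:

```python
# Pre-flight check: verify the project name is available on the Hub
# before launching AutoTrain, so the 409 Conflict never happens.
import os
from huggingface_hub import HfApi

api = HfApi(token=os.environ.get("HF_TOKEN"))
username = api.whoami()["name"]
project_name = "test"  # the project name entered in the AutoTrain UI

if api.repo_exists(f"{username}/{project_name}"):
    print(f"'{project_name}' is taken -- choose a unique project name.")
else:
    print(f"'{project_name}' is available.")
```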

avcode-exe commented 5 months ago

Yeah, just realized. Thx!