Run log:
/mnt/workspace/demos/PaddleNLP-develop/applications/text_classification/hierarchical> python train.py \
--dataset_dir "data" \
--device "gpu" \
--max_seq_length 128 \
--model_name "ernie-3.0-medium-zh" \
--batch_size 32 \
--early_stop \
--epochs 100
/home/pai/lib/python3.11/site-packages/_distutils_hack/init.py:33: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")
[2024-04-18 16:47:31,683] [ INFO] - We are using (<class 'paddlenlp.transformers.ernie.tokenizer.ErnieTokenizer'>, False) to load 'ernie-3.0-medium-zh'.
[2024-04-18 16:47:31,683] [ INFO] - Already cached /root/.paddlenlp/models/ernie-3.0-medium-zh/ernie_3.0_medium_zh_vocab.txt
[2024-04-18 16:47:31,706] [ INFO] - tokenizer config file saved in /root/.paddlenlp/models/ernie-3.0-medium-zh/tokenizer_config.json
[2024-04-18 16:47:31,706] [ INFO] - Special tokens file saved in /root/.paddlenlp/models/ernie-3.0-medium-zh/special_tokens_map.json
[2024-04-18 16:47:31,707] [ INFO] - We are using <class 'paddlenlp.transformers.ernie.modeling.ErnieForSequenceClassification'> to load 'ernie-3.0-medium-zh'.
[2024-04-18 16:47:31,708] [ INFO] - Already cached /root/.paddlenlp/models/ernie-3.0-medium-zh/model_state.pdparams
[2024-04-18 16:47:31,708] [ INFO] - Loading weights file model_state.pdparams from cache at /root/.paddlenlp/models/ernie-3.0-medium-zh/model_state.pdparams
[2024-04-18 16:47:32,212] [ INFO] - Loaded weights file from disk, setting weights to model.
W0418 16:47:32.216168 1276 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.8, Runtime API Version: 11.8
W0418 16:47:32.217109 1276 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
[2024-04-18 16:47:34,664] [ WARNING] - Some weights of the model checkpoint at ernie-3.0-medium-zh were not used when initializing ErnieForSequenceClassification: ['ernie.encoder.layers.6.linear1.bias', 'ernie.encoder.layers.6.linear1.weight', 'ernie.encoder.layers.6.linear2.bias', 'ernie.encoder.layers.6.linear2.weight', 'ernie.encoder.layers.6.norm1.bias', 'ernie.encoder.layers.6.norm1.weight', 'ernie.encoder.layers.6.norm2.bias', 'ernie.encoder.layers.6.norm2.weight', 'ernie.encoder.layers.6.self_attn.k_proj.bias', 'ernie.encoder.layers.6.self_attn.k_proj.weight', 'ernie.encoder.layers.6.self_attn.out_proj.bias', 'ernie.encoder.layers.6.self_attn.out_proj.weight', 'ernie.encoder.layers.6.self_attn.q_proj.bias', 'ernie.encoder.layers.6.self_attn.q_proj.weight', 'ernie.encoder.layers.6.self_attn.v_proj.bias', 'ernie.encoder.layers.6.self_attn.v_proj.weight']
This IS expected if you are initializing ErnieForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing ErnieForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2024-04-18 16:47:34,665] [ WARNING] - Some weights of ErnieForSequenceClassification were not initialized from the model checkpoint at ernie-3.0-medium-zh and are newly initialized: ['classifier.weight', 'ernie.pooler.dense.bias', 'ernie.pooler.dense.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/home/pai/lib/python3.11/site-packages/paddle/distributed/parallel.py:410: UserWarning: The program will return to single-card operation. Please check 1, whether you use spawn or fleetrun to start the program. 2, Whether it is a multi-card program. 3, Is the current environment multi-card.
warnings.warn(
/home/pai/lib/python3.11/site-packages/paddlenlp/transformers/tokenizer_utils_base.py:2353: FutureWarning: The max_seq_len argument is deprecated and will be removed in a future version, please use max_length instead.
warnings.warn(
/home/pai/lib/python3.11/site-packages/paddlenlp/transformers/tokenizer_utils_base.py:1925: UserWarning: Truncation was not explicitly activated but max_length is provided a specific value, please use truncation=True to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to truncation.
warnings.warn(
/home/pai/lib/python3.11/site-packages/sklearn/metrics/_classification.py:1509: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true nor predicted samples. Use zero_division parameter to control this behavior.
/mnt/workspace/demos/PaddleNLP-develop/applications/text_classification/hierarchical> python train.py --dataset_dir "data" --device "gpu" --max_seq_length 128 --model_name "ernie-3.0-medium-zh" --batch_size 32 --early_stop --epochs 100
/home/pai/lib/python3.11/site-packages/_distutils_hack/init.py:33: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")
[2024-04-18 16:57:45,986] [ INFO] - We are using (<class 'paddlenlp.transformers.ernie.tokenizer.ErnieTokenizer'>, False) to load 'ernie-3.0-medium-zh'.
[2024-04-18 16:57:45,986] [ INFO] - Already cached /root/.paddlenlp/models/ernie-3.0-medium-zh/ernie_3.0_medium_zh_vocab.txt
[2024-04-18 16:57:46,009] [ INFO] - tokenizer config file saved in /root/.paddlenlp/models/ernie-3.0-medium-zh/tokenizer_config.json
[2024-04-18 16:57:46,009] [ INFO] - Special tokens file saved in /root/.paddlenlp/models/ernie-3.0-medium-zh/special_tokens_map.json
[2024-04-18 16:57:46,010] [ INFO] - We are using <class 'paddlenlp.transformers.ernie.modeling.ErnieForSequenceClassification'> to load 'ernie-3.0-medium-zh'.
[2024-04-18 16:57:46,011] [ INFO] - Already cached /root/.paddlenlp/models/ernie-3.0-medium-zh/model_state.pdparams
[2024-04-18 16:57:46,011] [ INFO] - Loading weights file model_state.pdparams from cache at /root/.paddlenlp/models/ernie-3.0-medium-zh/model_state.pdparams
[2024-04-18 16:57:46,514] [ INFO] - Loaded weights file from disk, setting weights to model.
W0418 16:57:46.518276 1341 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.8, Runtime API Version: 11.8
W0418 16:57:46.519228 1341 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
[2024-04-18 16:57:48,974] [ WARNING] - Some weights of the model checkpoint at ernie-3.0-medium-zh were not used when initializing ErnieForSequenceClassification: ['ernie.encoder.layers.6.linear1.bias', 'ernie.encoder.layers.6.linear1.weight', 'ernie.encoder.layers.6.linear2.bias', 'ernie.encoder.layers.6.linear2.weight', 'ernie.encoder.layers.6.norm1.bias', 'ernie.encoder.layers.6.norm1.weight', 'ernie.encoder.layers.6.norm2.bias', 'ernie.encoder.layers.6.norm2.weight', 'ernie.encoder.layers.6.self_attn.k_proj.bias', 'ernie.encoder.layers.6.self_attn.k_proj.weight', 'ernie.encoder.layers.6.self_attn.out_proj.bias', 'ernie.encoder.layers.6.self_attn.out_proj.weight', 'ernie.encoder.layers.6.self_attn.q_proj.bias', 'ernie.encoder.layers.6.self_attn.q_proj.weight', 'ernie.encoder.layers.6.self_attn.v_proj.bias', 'ernie.encoder.layers.6.self_attn.v_proj.weight']
This IS expected if you are initializing ErnieForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing ErnieForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2024-04-18 16:57:48,975] [ WARNING] - Some weights of ErnieForSequenceClassification were not initialized from the model checkpoint at ernie-3.0-medium-zh and are newly initialized: ['ernie.pooler.dense.bias', 'classifier.bias', 'classifier.weight', 'ernie.pooler.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/home/pai/lib/python3.11/site-packages/paddle/distributed/parallel.py:410: UserWarning: The program will return to single-card operation. Please check 1, whether you use spawn or fleetrun to start the program. 2, Whether it is a multi-card program. 3, Is the current environment multi-card.
warnings.warn(
/home/pai/lib/python3.11/site-packages/paddlenlp/transformers/tokenizer_utils_base.py:2353: FutureWarning: The max_seq_len argument is deprecated and will be removed in a future version, please use max_length instead.
warnings.warn(
/home/pai/lib/python3.11/site-packages/paddlenlp/transformers/tokenizer_utils_base.py:1925: UserWarning: Truncation was not explicitly activated but max_length is provided a specific value, please use truncation=True to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to truncation.
warnings.warn(
/home/pai/lib/python3.11/site-packages/sklearn/metrics/_classification.py:1509: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true nor predicted samples. Use zero_division parameter to control this behavior.
_warn_prf(average, modifier, f"{metric.capitalize()} is", len(result))
/home/pai/lib/python3.11/site-packages/sklearn/metrics/_classification.py:1509: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true nor predicted samples. Use zero_division parameter to control this behavior.
_warn_prf(average, modifier, f"{metric.capitalize()} is", len(result))
[2024-04-18 16:57:49,804] [ INFO] - eval loss: 0.54177, micro f1 score: 0.00000, macro f1 score: 0.00000
[2024-04-18 16:57:49,805] [ INFO] - Current best macro f1 score: 0.00000
/home/pai/lib/python3.11/site-packages/paddlenlp/transformers/tokenizer_utils_base.py:2353: FutureWarning: The max_seq_len argument is deprecated and will be removed in a future version, please use max_length instead.
warnings.warn(
/home/pai/lib/python3.11/site-packages/sklearn/metrics/_classification.py:1509: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true nor predicted samples. Use zero_division parameter to control this behavior.
_warn_prf(average, modifier, f"{metric.capitalize()} is", len(result))
/home/pai/lib/python3.11/site-packages/sklearn/metrics/_classification.py:1509: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true nor predicted samples. Use zero_division parameter to control this behavior.
_warn_prf(average, modifier, f"{metric.capitalize()} is", len(result))
[2024-04-18 16:57:49,881] [ INFO] - eval loss: 0.46559, micro f1 score: 0.00000, macro f1 score: 0.00000
[2024-04-18 16:57:49,882] [ INFO] - Current best macro f1 score: 0.00000
/home/pai/lib/python3.11/site-packages/paddlenlp/transformers/tokenizer_utils_base.py:2353: FutureWarning: The max_seq_len argument is deprecated and will be removed in a future version, please use max_length instead.
warnings.warn(
/home/pai/lib/python3.11/site-packages/sklearn/metrics/_classification.py:1509: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true nor predicted samples. Use zero_division parameter to control this behavior.
_warn_prf(average, modifier, f"{metric.capitalize()} is", len(result))
/home/pai/lib/python3.11/site-packages/sklearn/metrics/_classification.py:1509: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true nor predicted samples. Use zero_division parameter to control this behavior.
_warn_prf(average, modifier, f"{metric.capitalize()} is", len(result))
[2024-04-18 16:57:49,947] [ INFO] - eval loss: 0.40730, micro f1 score: 0.00000, macro f1 score: 0.00000
[2024-04-18 16:57:49,948] [ INFO] - Current best macro f1 score: 0.00000
[2024-04-18 16:57:49,948] [ INFO] - Early stop!
[2024-04-18 16:57:49,948] [ INFO] - Final best macro f1 score: 0.00000
[2024-04-18 16:57:49,948] [ INFO] - Save best macro f1 text classification model in ./checkpoint
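The `UndefinedMetricWarning` lines and the constant `micro f1 score: 0.00000, macro f1 score: 0.00000` are consistent with a freshly initialized classifier head that produces no positive predictions at all: both precision and recall then hit a 0/0 case, which sklearn resolves to 0.0 unless `zero_division` is set. A minimal pure-Python sketch (with hypothetical data, not taken from this run) of why both averages collapse to zero:

```python
def f1(tp, fp, fn):
    """F1 from raw counts; 0.0 when the 0/0 (undefined) case occurs."""
    denom = 2 * tp + fp + fn
    return 0.0 if denom == 0 else 2 * tp / denom

# Hypothetical multi-label matrices: rows = samples, columns = labels.
y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[0, 0, 0], [0, 0, 0]]  # untrained head: no positive predictions

# Micro F1 pools TP/FP/FN over every (sample, label) cell.
cells = [(t, p) for rt, rp in zip(y_true, y_pred) for t, p in zip(rt, rp)]
tp = sum(1 for t, p in cells if t and p)
fp = sum(1 for t, p in cells if not t and p)
fn = sum(1 for t, p in cells if t and not p)
micro = f1(tp, fp, fn)

# Macro F1 averages the per-label F1 scores.
def label_counts(j):
    col = [(t[j], p[j]) for t, p in zip(y_true, y_pred)]
    return (sum(1 for t, p in col if t and p),
            sum(1 for t, p in col if not t and p),
            sum(1 for t, p in col if t and not p))

macro = sum(f1(*label_counts(j)) for j in range(3)) / 3

print(micro, macro)  # 0.0 0.0 — every TP count is zero
```

With all-negative predictions every true-positive count is zero, so both averages are exactly 0.0; sklearn's `f1_score(..., zero_division=0)` silences the warning but reports the same value. The scores staying at zero across evaluations is what triggers the early stop here, so the checkpoint is written after almost no training.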
train.txt
Please describe your question
Symptom: after pretraining, the generated checkpoint folder does not contain the complete set of files.
Screenshot: