middle-plat-ai opened this issue 6 years ago
You need to modify `run_classifier.py`, as described in this issue: #74
@Colanim Can you share what you did with `run_classifier.py` for sentiment data? Any chance you tried the IMDB data?
I didn't modify it for sentiment data, but for another dataset (STS-B). Basically, what I did is:

First, in `model_fn_builder`, change the `metric_fn` to fit your data. In my case I couldn't use accuracy; I had to use Pearson and Spearman correlation. Here is my `metric_fn`:
```python
def metric_fn(per_example_loss, label_ids, logits):
  # Compute Pearson correlation
  pearson = tf.contrib.metrics.streaming_pearson_correlation(
      logits, label_ids)

  # Compute MSE
  mse = tf.metrics.mean_squared_error(label_ids, logits)

  # Compute Spearman correlation
  size = tf.size(logits)
  indice_of_ranks_pred = tf.nn.top_k(logits, k=size)[1]
  indice_of_ranks_label = tf.nn.top_k(label_ids, k=size)[1]
  rank_pred = tf.nn.top_k(-indice_of_ranks_pred, k=size)[1]
  rank_label = tf.nn.top_k(-indice_of_ranks_label, k=size)[1]
  rank_pred = tf.to_float(rank_pred)
  rank_label = tf.to_float(rank_label)
  spearman = tf.contrib.metrics.streaming_pearson_correlation(
      rank_pred, rank_label)

  return {'pearson': pearson, 'spearman': spearman, 'MSE': mse}
```
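As an aside, since the double `top_k` looks cryptic: the first `top_k` gives the indices that sort the values in descending order, and applying `top_k` again to the negated index vector recovers each element's rank. A tiny self-contained demonstration with made-up values (TF 1.x graph mode, so it needs a session):

```python
import tensorflow as tf

# Double top_k rank trick on toy values.
x = tf.constant([0.3, 0.9, 0.5])
size = tf.size(x)
order = tf.nn.top_k(x, k=size)[1]       # [1, 2, 0]: indices sorting x descending
ranks = tf.nn.top_k(-order, k=size)[1]  # [2, 0, 1]: descending rank of each element

with tf.Session() as sess:
  print(sess.run(ranks))  # [2 0 1] -> 0.9 has rank 0, 0.5 rank 1, 0.3 rank 2
```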
Second, in `input_fn`, change the label type to match your data. In my case the label was a score, so I used a float:

```python
"label_ids":
    tf.constant(all_label_ids, shape=[num_examples], dtype=tf.float32),
```
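For context, this line lives in the feature dictionary that the non-file-based `input_fn_builder` of `run_classifier.py` passes to `tf.data.Dataset.from_tensor_slices`; the only change from the original is the dtype of `label_ids` (int32 to float32). Roughly (a sketch of the surrounding code, not copied verbatim):

```python
# Inside input_fn_builder's input_fn; only label_ids' dtype changes.
d = tf.data.Dataset.from_tensor_slices({
    "input_ids":
        tf.constant(all_input_ids, shape=[num_examples, seq_length],
                    dtype=tf.int32),
    "input_mask":
        tf.constant(all_input_mask, shape=[num_examples, seq_length],
                    dtype=tf.int32),
    "segment_ids":
        tf.constant(all_segment_ids, shape=[num_examples, seq_length],
                    dtype=tf.int32),
    "label_ids":
        tf.constant(all_label_ids, shape=[num_examples], dtype=tf.float32),
})
```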
Finally, in `main`, I had to be careful: since I'm not using a classifier anymore but a scorer, I had to remove every occurrence of the class list. There are several places to change this, but I'm not sure they are all relevant to you. For example, instead of `label_list = processor.get_labels()`, I used `label_list = None` and adjusted the dependent code accordingly.

You can find the official advice of Jacob Devlin in this issue: #74
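P.S. On the model side, the conceptual change is the same: the head becomes a single output unit trained with a mean-squared-error loss instead of a softmax over classes. A rough sketch of what that looks like inside `create_model` (the idea, not my exact code; `output_layer` and the float `labels` are as in `run_classifier.py`):

```python
# Regression head replacing the classification head in create_model.
hidden_size = output_layer.shape[-1].value

output_weights = tf.get_variable(
    "output_weights", [1, hidden_size],
    initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
    "output_bias", [1], initializer=tf.zeros_initializer())

logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
logits = tf.squeeze(logits, [-1])  # [batch_size] predicted scores

# Mean-squared-error loss instead of softmax cross-entropy.
per_example_loss = tf.square(logits - labels)
loss = tf.reduce_mean(per_example_loss)
```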
@Colanim thanks for sharing!
@Colanim

> You can find the official advice of Jacob Devlin in this issue: #74

May I know what changes you made to the `main` function? I'm also fine-tuning on the STS-B dataset. So far I have: 1. added the `StsProcessor`; 2. changed the `metric_fn` in `model_fn_builder`; 3. changed the data type of `label_ids`. But I don't know how to change the `main` function. Would you mind sharing your source code? Thanks.
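For reference, my `StsProcessor` follows the pattern of the other processors in `run_classifier.py`; roughly (a sketch, with `DataProcessor` and `_read_tsv` being the existing helpers there):

```python
# Rough sketch of my StsProcessor; _create_examples parses the STS-B
# TSV columns (sentence pair plus a float similarity score).
class StsProcessor(DataProcessor):
  """Processor for the STS-B data set (GLUE version)."""

  def get_train_examples(self, data_dir):
    """See base class."""
    return self._create_examples(
        self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

  def get_dev_examples(self, data_dir):
    """See base class."""
    return self._create_examples(
        self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

  def get_labels(self):
    """Regression task: there is no class list."""
    return None
```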
Here is my `main()`:
```python
def main(_):
  tf.logging.set_verbosity(tf.logging.INFO)

  processors = {
      "sick": SickProcessor,
      "sts": StsProcessor
  }

  if not FLAGS.do_train and not FLAGS.do_eval:
    raise ValueError("At least one of `do_train` or `do_eval` must be True.")

  bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)

  if FLAGS.max_seq_length > bert_config.max_position_embeddings:
    raise ValueError(
        "Cannot use sequence length %d because the BERT model "
        "was only trained up to sequence length %d" %
        (FLAGS.max_seq_length, bert_config.max_position_embeddings))

  tf.gfile.MakeDirs(FLAGS.output_dir)

  task_name = FLAGS.task_name.lower()

  if task_name not in processors:
    raise ValueError("Task not found: %s" % (task_name))

  processor = processors[task_name]()

  # label_list = processor.get_labels()
  label_list = None

  tokenizer = tokenization.FullTokenizer(
      vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)

  tpu_cluster_resolver = None
  if FLAGS.use_tpu and FLAGS.tpu_name:
    tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
        FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)

  is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
  run_config = tf.contrib.tpu.RunConfig(
      cluster=tpu_cluster_resolver,
      master=FLAGS.master,
      model_dir=FLAGS.output_dir,
      save_checkpoints_steps=FLAGS.save_checkpoints_steps,
      tpu_config=tf.contrib.tpu.TPUConfig(
          iterations_per_loop=FLAGS.iterations_per_loop,
          num_shards=FLAGS.num_tpu_cores,
          per_host_input_for_training=is_per_host))

  train_examples = None
  num_train_steps = None
  num_warmup_steps = None
  if FLAGS.do_train:
    train_examples = processor.get_train_examples(FLAGS.data_dir)
    num_train_steps = int(
        len(train_examples) / FLAGS.train_batch_size * FLAGS.num_train_epochs)
    num_warmup_steps = int(num_train_steps * FLAGS.warmup_proportion)

  model_fn = model_fn_builder(
      bert_config=bert_config,
      init_checkpoint=FLAGS.init_checkpoint,
      learning_rate=FLAGS.learning_rate,
      num_train_steps=num_train_steps,
      num_warmup_steps=num_warmup_steps,
      use_tpu=FLAGS.use_tpu,
      use_one_hot_embeddings=FLAGS.use_tpu)

  # If TPU is not available, this will fall back to normal Estimator on CPU
  # or GPU.
  estimator = tf.contrib.tpu.TPUEstimator(
      use_tpu=FLAGS.use_tpu,
      model_fn=model_fn,
      config=run_config,
      train_batch_size=FLAGS.train_batch_size,
      eval_batch_size=FLAGS.eval_batch_size)

  if FLAGS.do_train:
    import time
    train_t0 = time.time()
    train_features = convert_examples_to_features(
        train_examples, label_list, FLAGS.max_seq_length, tokenizer)
    tf.logging.info("***** Running training *****")
    tf.logging.info("  Num examples = %d", len(train_examples))
    tf.logging.info("  Batch size = %d", FLAGS.train_batch_size)
    tf.logging.info("  Num steps = %d", num_train_steps)
    train_input_fn = input_fn_builder(
        features=train_features,
        seq_length=FLAGS.max_seq_length,
        is_training=True,
        drop_remainder=True)
    estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
    train_t1 = time.time()

  if FLAGS.do_eval:
    eval_examples = processor.get_dev_examples(FLAGS.data_dir)
    eval_features = convert_examples_to_features(
        eval_examples, label_list, FLAGS.max_seq_length, tokenizer)

    tf.logging.info("***** Running evaluation *****")
    tf.logging.info("  Num examples = %d", len(eval_examples))
    tf.logging.info("  Batch size = %d", FLAGS.eval_batch_size)

    # This tells the estimator to run through the entire set.
    eval_steps = None
    # However, if running eval on the TPU, you will need to specify the
    # number of steps.
    if FLAGS.use_tpu:
      # Eval will be slightly WRONG on the TPU because it will truncate
      # the last batch.
      eval_steps = int(len(eval_examples) / FLAGS.eval_batch_size)

    eval_drop_remainder = True if FLAGS.use_tpu else False
    eval_input_fn = input_fn_builder(
        features=eval_features,
        seq_length=FLAGS.max_seq_length,
        is_training=False,
        drop_remainder=eval_drop_remainder)

    result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)

    output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt")
    with tf.gfile.GFile(output_eval_file, "w") as writer:
      tf.logging.info("***** Eval results *****")
      for key in sorted(result.keys()):
        tf.logging.info("  %s = %s", key, str(result[key]))
        writer.write("%s = %s\n" % (key, str(result[key])))
```
Basically, what you have to change is that you don't have classes anymore, so `label_list` no longer makes sense. I just set it to `None` and removed it wherever it was used in the other functions.
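Concretely, the label handling in `convert_examples_to_features` also changes, since there is no label map to look up anymore. A rough sketch of that part (from memory, not the exact code):

```python
# Label handling in convert_examples_to_features when label_list is
# None (regression): keep the raw score instead of mapping a class name.
if label_list is None:
  label_id = float(example.label)  # the label is already a score
else:
  label_map = {label: i for (i, label) in enumerate(label_list)}
  label_id = label_map[example.label]
```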
Let me know if it works 👍
@Colanim I followed what you showed above and changed the `main()` function. I also realized that the original `main()` uses the file-based way to convert examples to features and to build the input function, whereas you use the non-file-based one.

But an error occurred when I ran the changed file: `TypeError: model_fn_builder() missing 1 required positional argument: 'num_labels'`. So I gather you also changed `model_fn_builder()` and removed the `num_labels` argument. You said that `label_list` doesn't make sense anymore; is that why you removed `num_labels`?

Would you mind uploading your whole `run_classifier.py`, or the whole BERT program, to GitHub, so that I can follow along and see what changes I still need to make to train on the STS dataset?

Thanks!
Here you go: run_scorer.py
@Colanim Thanks! I tried fine-tuning using your `run_scorer.py`. During training and eval it performs well:

```
MSE = 0.48805913
global_step = 1796
label_ids = [5. 4.75 5. ... 2. 0. 0.]
loss = 0.4898354
pearson = 0.8921575
pred = [5.055186 4.7891555 5.0168333 ... 2.493906 0.8447667 1.1127251]
spearman = 0.78399885
```
However, during test, the results are bad:

```
MSE = 0.16579048
global_step = 0
label_ids = [0. 0. 0. ... 0. 0. 0.]
loss = 0.16522574
pearson = nan
pred = [ 0.34180358  0.4761568  0.30145267 ... 0.10529003 -0.12108919 -0.05159474]
spearman = -4.0138337e-05
```
I don't know the reason. Maybe I need to change `estimator.evaluate` to `estimator.predict`?
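Something like this is what I have in mind (a sketch; it assumes the `model_fn` handles `tf.estimator.ModeKeys.PREDICT` and returns the predicted scores, and it runs on CPU/GPU, since on TPU the estimator would also need `predict_batch_size`):

```python
# Get one score per test example with predict() instead of evaluate().
predict_input_fn = input_fn_builder(
    features=eval_features,
    seq_length=FLAGS.max_seq_length,
    is_training=False,
    drop_remainder=False)

for i, pred in enumerate(estimator.predict(input_fn=predict_input_fn)):
  tf.logging.info("example %d: score = %s", i, str(pred))
```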
And I also changed `_create_examples` in my `StsProcessor` to handle the test set:

```python
def _create_examples(self, lines, set_type):
  """Creates examples for the training and dev sets."""
  examples = []
  for (i, line) in enumerate(lines):
    if i == 0:
      continue
    guid = "%s-%s" % (set_type, tokenization.convert_to_unicode(line[0]))
    text_a = tokenization.convert_to_unicode(line[-3])
    text_b = tokenization.convert_to_unicode(line[-2])
    if set_type == "test":
      label = 0.0
    else:
      label = float(line[-1])
    examples.append(
        InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
  return examples
```
STS-B doesn't have labels for the test set. Check the file `test.tsv`, you will understand ^^

`test.tsv` is used for the GLUE benchmark, therefore you cannot use it to evaluate your model (because you don't know the labels).
I got it. Because there are no labels in test.tsv, the model cannot calculate Pearson's r and MSE. Thanks for your help!
@Colanim I am not getting predictions for all examples in test.tsv; I only get predictions for some examples, like [ 0.34180358 0.4761568 0.30145267 ... 0.10529003 -0.12108919 -0.05159474]. Is it because of `streaming_concat`? How do I modify it to get all the prediction values?

Thanks.
How do I run the Stanford Sentiment Treebank (SST-2) task with BERT?