yangheng95 / PyABSA

Sentiment Analysis, Text Classification, Text Augmentation, Text Adversarial defense, etc.;
https://pyabsa.readthedocs.io
MIT License

ATEPC Training with valid_dataset #213

Closed DorisFangWork closed 2 years ago

DorisFangWork commented 2 years ago

Hi, the latest release solved the valid dataset_file loading issue, but APC_Trainer() never uses the 'valid' dataset_file. The val_set is a subset of the train_set data; I'm not sure if my understanding is correct.



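            # Quoted excerpt of PyABSA's dataset-file discovery logic: filenames matching
            # 'valid' or 'dev' (together with the task) are collected into dataset_file['valid'],
            # separately from the 'train' and 'test' entries.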
            if load_aug:
                dataset_file['train'] += find_files(search_path, [d, 'train', task], exclude_key=['.inference', 'test.', 'valid.'] + filter_key_words)
                dataset_file['test'] += find_files(search_path, [d, 'test', task], exclude_key=['.inference', 'train.', 'valid.'] + filter_key_words)
                dataset_file['valid'] += find_files(search_path, [d, 'valid', task], exclude_key=['.inference', 'train.', 'test.'] + filter_key_words)
                dataset_file['valid'] += find_files(search_path, [d, 'dev', task], exclude_key=['.inference', 'train.', 'test.'] + filter_key_words)
            else:
                dataset_file['train'] += find_files(search_path, [d, 'train', task], exclude_key=['.inference', 'test.', 'valid.'] + filter_key_words + ['.ignore'])
                dataset_file['test'] += find_files(search_path, [d, 'test', task], exclude_key=['.inference', 'train.', 'valid.'] + filter_key_words + ['.ignore'])
                dataset_file['valid'] += find_files(search_path, [d, 'valid', task], exclude_key=['.inference', 'train.', 'test.'] + filter_key_words + ['.ignore'])
                dataset_file['valid'] += find_files(search_path, [d, 'dev', task], exclude_key=['.inference', 'train.', 'test.'] + filter_key_words + ['.ignore'])

        else:
            print('Try to load {} dataset from local'.format(dataset_path))
            if load_aug:
                dataset_file['train'] += find_files(d, ['train', task], exclude_key=['.inference', 'test.', 'valid.'] + filter_key_words)
                dataset_file['test'] += find_files(d, ['test', task], exclude_key=['.inference', 'train.', 'valid.'] + filter_key_words)
                dataset_file['valid'] += find_files(d, ['valid', task], exclude_key=['.inference', 'train.'] + filter_key_words)
                dataset_file['valid'] += find_files(d, ['dev', task], exclude_key=['.inference', 'train.'] + filter_key_words)
            else:
                dataset_file['train'] += find_files(d, ['train', task], exclude_key=['.inference', 'test.', 'valid.'] + filter_key_words + ['.ignore'])
                dataset_file['test'] += find_files(d, ['test', task], exclude_key=['.inference', 'train.', 'valid.'] + filter_key_words + ['.ignore'])
                dataset_file['valid'] += find_files(d, ['valid', task], exclude_key=['.inference', 'train.', 'test.'] + filter_key_words + ['.ignore'])
                dataset_file['valid'] += find_files(d, ['dev', task], exclude_key=['.inference', 'train.', 'test.'] + filter_key_words + ['.ignore'])

Otherwise, can you explain the problem more specifically? Thanks and have a nice day!

Originally posted by @yangheng95 in https://github.com/yangheng95/PyABSA/issues/212#issuecomment-1177788297
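For readers hitting the same question: the quoted code implies a simple filename convention. Within a dataset folder, files whose names contain the task plus 'train', 'test', or 'valid'/'dev' are routed into the corresponding entry of dataset_file, while names containing '.inference' (and '.ignore', unless augmentation files are loaded) are skipped. Below is a minimal, self-contained sketch of that matching idea; the helper and the example filenames are hypothetical, and this is not PyABSA's actual find_files implementation.

    def matches(name, include_keys, exclude_keys):
        # Simplified stand-in for the key matching: every include key must occur
        # in the filename and no exclude key may occur.
        lower = name.lower()
        return (all(k.lower() in lower for k in include_keys)
                and not any(k.lower() in lower for k in exclude_keys))

    # Hypothetical local ATEPC dataset folder contents.
    files = [
        'laptop14.train.txt.atepc',
        'laptop14.test.txt.atepc',
        'laptop14.valid.txt.atepc',              # routed to the 'valid' split
        'laptop14.train.txt.atepc.inference',    # excluded everywhere
    ]

    dataset_file = {'train': [], 'test': [], 'valid': []}
    for f in files:
        if matches(f, ['train', 'atepc'], ['.inference', 'test.', 'valid.']):
            dataset_file['train'].append(f)
        if matches(f, ['test', 'atepc'], ['.inference', 'train.', 'valid.']):
            dataset_file['test'].append(f)
        if (matches(f, ['valid', 'atepc'], ['.inference', 'train.', 'test.'])
                or matches(f, ['dev', 'atepc'], ['.inference', 'train.', 'test.'])):
            dataset_file['valid'].append(f)

    print(dataset_file)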

DorisFangWork commented 2 years ago

Another question: I didn't find a built-in function for splitting the train / valid / test dataset in ATEPC training. Does that mean the model trains on the 'train' dataset_file and evaluates its performance on the 'test' dataset_file? In that case the reported result is really a validation result, not a hold-out test result.

yangheng95 commented 2 years ago

The built-in split function is for cross-validation, which uses a split of the train set as the valid set. The valid set can be used in the next update, but I am afraid that the evaluation of the ATEPC results will be pending for a longer time.
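To make the above concrete, here is a minimal illustrative sketch of the cross-validation idea: the train set is sliced into folds, each fold takes a turn as the valid set, and the separate 'test' file is never touched. This is a generic illustration, not PyABSA's internal implementation; the example data and fold count are made up.

    import random

    # Stand-in for an ATEPC train set (hypothetical examples).
    train_examples = [f'sentence_{i}' for i in range(100)]
    random.Random(42).shuffle(train_examples)

    n_folds = 5
    fold_size = len(train_examples) // n_folds
    for fold in range(n_folds):
        lo, hi = fold * fold_size, (fold + 1) * fold_size
        valid_fold = train_examples[lo:hi]                       # held-out slice of the train set
        train_fold = train_examples[:lo] + train_examples[hi:]   # remaining slices
        # Train on train_fold, evaluate on valid_fold; the 'test' dataset_file stays untouched.
        print(f'fold {fold}: {len(train_fold)} train / {len(valid_fold)} valid examples')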

yangheng95 commented 2 years ago

@DorisFangWork V1.16.4

DorisFangWork commented 2 years ago

@DorisFangWork V1.16.4

Thank you