It was working correctly for many months, but since this morning I have suddenly started encountering this error. I have tried multiple datasets and get the same failure. Does anyone have any idea what is going on?
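This is roughly the code I am running (essentially the standard REaLTabFormer tabular example; the CSV path is just a placeholder for my own datasets):

```python
import pandas as pd
from realtabformer import REaLTabFormer

# Load the training data (placeholder path; I use my own datasets here).
df = pd.read_csv("my_dataset.csv")

# Non-relational (tabular) model with default settings.
rtf_model = REaLTabFormer(model_type="tabular")

# Additional parameters can be
# passed to the `.fit` method.
rtf_model.fit(df)

# Save the model to the current directory.
rtf_model.save("rtf_model/")
```

The output and traceback below come from that `.fit(df)` call: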
/usr/local/lib/python3.10/dist-packages/realtabformer/realtabformer.py:570: UserWarning: Duplicate rate (0.0) in the data is zero. The `qt_interval` will be set to qt_interval_unique=100.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/realtabformer/realtabformer.py:597: UserWarning: qt_interval adjusted from 100 to 2...
warnings.warn(
Bootstrap round: 100%
500/500 [01:01<00:00, 16.43it/s]
Sensitivity threshold summary:
count 500.000000
mean 0.304600
std 0.382382
min -0.025000
25% -0.025000
50% -0.025000
75% 0.575000
max 0.975000
dtype: float64
Sensitivity threshold: 0.975 qt_max: 0.05
Map: 100%
10/10 [00:00<00:00, 187.56 examples/s]
/usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1474: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead
warnings.warn(
[5/5 00:15, Epoch 5/5]
Step Training Loss
1 2.790300
2 2.240800
3 1.765800
4 1.308200
5 1.025400
128/? [00:02<00:00, 43.53it/s]
Generated 0 invalid samples out of total 128 samples generated. Sampling efficiency is: 100.0000%
/usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1474: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead
warnings.warn(
There were missing keys in the checkpoint model loaded: ['lm_head.weight'].
Critic round: 5, sensitivity_threshold: 0.975, val_sensitivity: 0.37499999999999994, val_sensitivities: [-0.025, -0.025, -0.025, -0.025, -0.025, 0.375, -0.025, 0.975, 0.375, 0.375, -0.025, 0.775, 0.975, 0.975, 0.975]
[10/10 00:45, Epoch 10/10]
Step Training Loss
6 0.835700
7 0.697500
8 0.680100
9 0.552300
10 0.580400
128/? [00:03<00:00, 34.60it/s]
Generated 0 invalid samples out of total 128 samples generated. Sampling efficiency is: 100.0000%
Critic round: 10, sensitivity_threshold: 0.975, val_sensitivity: 0.34833333333333333, val_sensitivities: [-0.025, -0.025, -0.025, -0.025, -0.025, -0.025, -0.025, 0.775, 0.375, 0.575, -0.025, 0.775, 0.975, 0.975, 0.975]
/usr/local/lib/python3.10/dist-packages/realtabformer/realtabformer.py:834: UserWarning: No best model was saved. Loading the closest model to the sensitivity_threshold.
warnings.warn(
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-5-962080ccc07b> in <cell line: 20>()
18 # Additional parameters can be
19 # passed to the `.fit` method.
---> 20 rtf_model.fit(df)
21
22 # Save the model to the current directory.
6 frames
/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_gated_repo, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs)
368 if not os.path.isfile(resolved_file):
369 if _raise_exceptions_for_missing_entries:
--> 370 raise EnvironmentError(
371 f"{path_or_repo_id} does not appear to have a file named {full_filename}. Checkout "
372 f"'https://huggingface.co/{path_or_repo_id}/tree/{revision}' for available files."
OSError: rtf_checkpoints/not-best-disc-model does not appear to have a file named config.json. Checkout 'https://huggingface.co/rtf_checkpoints/not-best-disc-model/tree/main' for available files.