ludwig-ai / ludwig

Low-code framework for building custom LLMs, neural networks, and other AI models
http://ludwig.ai
Apache License 2.0

KeyError in compare_performance visualization because "loss" not in metric_names (visualize.py line 1497) #3777

Open DavidFarago opened 11 months ago

DavidFarago commented 11 months ago

Describe the bug: Having trained two (Mistral) models (with model_type either llm or ecd), I wanted to create a compare_performance visualization via ludwig visualize --visualization compare_performance --test_statistics dir1/training_statistics.json dir2/training_statistics.json, but I get:

Traceback (most recent call last):
  File "/workspace/anaconda3/envs/ludwig310/bin/ludwig", line 8, in <module>
    sys.exit(main())
  File "/workspace/anaconda3/envs/ludwig310/lib/python3.10/site-packages/ludwig/cli.py", line 191, in main
    CLI()
  File "/workspace/anaconda3/envs/ludwig310/lib/python3.10/site-packages/ludwig/cli.py", line 71, in __init__
    getattr(self, args.command)()
  File "/workspace/anaconda3/envs/ludwig310/lib/python3.10/site-packages/ludwig/cli.py", line 116, in visualize
    visualize.cli(sys.argv[2:])
  File "/workspace/anaconda3/envs/ludwig310/lib/python3.10/site-packages/ludwig/visualize.py", line 4174, in cli
    vis_func(**vars(args))
  File "/workspace/anaconda3/envs/ludwig310/lib/python3.10/site-packages/ludwig/visualize.py", line 355, in compare_performance_cli
    compare_performance(test_stats_per_model, **kwargs)
  File "/workspace/anaconda3/envs/ludwig310/lib/python3.10/site-packages/ludwig/visualize.py", line 1497, in compare_performance
    metric_names.remove(LOSS)
KeyError: 'loss'
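
The failing statement is an ordinary Python set operation: set.remove raises KeyError when the element is absent, whereas set.discard does not. A minimal standalone sketch (the set contents are illustrative, not the actual values computed in visualize.py):

metric_names = {"output", "combined"}
metric_names.discard("loss")  # no error, set unchanged
metric_names.remove("loss")   # KeyError: 'loss' -- the failure in the traceback above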

To Reproduce: Create two training_statistics.json files in the style of

{
    "evaluation_frequency": {
        "frequency": 1,
        "period": "epoch"
    },
    "test": {
        "combined": {
            "loss": [
                6.1890668869018555
            ]
        },
        "output": {
            "accuracy": [
                0.780337929725647
            ],
            "accuracy_micro": [
                0.9599999785423279
            ],
            "loss": [
                6.1890668869018555
            ],
            "roc_auc": [
                0.8079876899719238
            ]
        }
    },
    "training": {
        "combined": {
            "loss": [
                24.924232482910156
            ]
        },
        "output": {
            "accuracy": [
                0.7352941036224365
            ],
            "accuracy_micro": [
                0.9324324131011963
            ],
            "loss": [
                24.924232482910156
            ],
            "roc_auc": [
                0.7261029481887817
            ]
        }
    },
    "validation": {}

Then the visualization call above will lead to metric_names == {"frequency", "period"} on line 1497 of visualize.py, thus causing a KeyError for metric_names.remove(LOSS).
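
A simplified reconstruction of how the derived sets end up without "loss" (this only mimics the dictionary traversal; the actual visualize.py code differs in detail, and the file path is the one from the reproduction above):

import json

with open("dir1/training_statistics.json") as f:
    stats = json.load(f)

# compare_performance expects output feature names at the top level and metric
# names one level below; in training statistics those levels hold other keys.
top_level_keys = set(stats.keys())
# {'evaluation_frequency', 'test', 'training', 'validation'}
second_level_keys = set()
for key in top_level_keys:
    second_level_keys |= set(stats[key].keys())
# {'frequency', 'period', 'combined', 'output'} -- no 'loss' at this level
second_level_keys.remove("loss")  # KeyError: 'loss'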

Details, including a YAML config that led to the above training_statistics.json, can be found at https://ludwig-ai.slack.com/archives/C01PN6M2RSM/p1699559557928909?thread_ts=1698743347.409099&cid=C01PN6M2RSM

Expected behavior: No error; instead, a PDF with visualizations of the model comparison should be created.

Environment (please complete the following information):

Additional context: The environment is a RunPod instance.

DavidFarago commented 11 months ago

self-assign

DavidFarago commented 10 months ago

I believe the above training_statistics.json, produced by training models with model_type ecd, is not meant for compare_performance() in visualize.py:

1) The argument test_stats_per_model becomes
[{'evaluation_frequency': {'frequency': 1, 'period': 'epoch'}, 'test': {'combined': {'loss': [6.1890668869018555]}, 'output': {'accuracy': [0.780337929725647], 'accuracy_micro': [0.9599999785423279], 'loss': [6.1890668869018555], 'roc_auc': [0.8079876899719238]}}, 'training': {'combined': {'loss': [24.924232482910156]}, 'output': {'accuracy': [0.7352941036224365], 'accuracy_micro': [0.9324324131011963], 'loss': [24.924232482910156], 'roc_auc': [0.7261029481887817]}}, 'validation': {}}, {'evaluation_frequency': {'frequency': 1, 'period': 'epoch'}, 'test': {'combined': {'loss': [6.1890668869018555]}, 'output': {'accuracy': [0.780337929725647], 'accuracy_micro': [0.9599999785423279], 'loss': [6.1890668869018555], 'roc_auc': [0.8079876899719238]}}, 'training': {'combined': {'loss': [24.924232482910156]}, 'output': {'accuracy': [0.7352941036224365], 'accuracy_micro': [0.9324324131011963], 'loss': [24.924232482910156], 'roc_auc': [0.7261029481887817]}}, 'validation': {}}]
2) convert_to_list(test_stats_per_model) yields the same structure, so test_stats_per_model_list == test_stats_per_model.
3) Thus output_feature_names becomes {'training', 'evaluation_frequency', 'test', 'validation'}.
4) metric_names becomes {'output', 'combined'}.
5) Thus metric_names.remove(LOSS) on line 1497 causes KeyError: 'loss'.

If you instead do metric_names.discard(LOSS):

6) metrics_dict will not map metric names to lists of metric values, but to lists of dicts containing loss and possibly further keys:
{'output': [{'accuracy': [0.7352941036224365], 'accuracy_micro': [0.9324324131011963], 'loss': [24.924232482910156], 'roc_auc': [0.7261029481887817]}, {'accuracy': [0.7352941036224365], 'accuracy_micro': [0.9324324131011963], 'loss': [24.924232482910156], 'roc_auc': [0.7261029481887817]}], 'combined': [{'loss': [24.924232482910156]}, {'loss': [24.924232482910156]}]}
7) Taking the minimum over such a list of dicts on line 1517 causes TypeError: '<' not supported between instances of 'dict' and 'dict' (see the snippet after this list).
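
To make point 7 concrete, a minimal standalone illustration (values copied from the dump above; this is not the actual visualize.py code):

# After discard, metrics_dict['combined'] holds one dict per model, and dicts do not support '<':
combined = [{"loss": [24.924232482910156]}, {"loss": [24.924232482910156]}]
min(combined)  # TypeError: '<' not supported between instances of 'dict' and 'dict'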

Before fixing line 1497 and line 1517, it would be very helpful to have a JSON schema for the files passed as --test_statistics arguments, or a definition of the structure of compare_performance()'s first argument, test_stats_per_model, in visualize.py.
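
Based on the analysis above and the reply below, compare_performance appears to expect each --test_statistics file to map output feature names directly to metric dictionaries, roughly like this (illustrative values reused from the numbers above; not an official schema):

{
    "output": {
        "accuracy": 0.780337929725647,
        "accuracy_micro": 0.9599999785423279,
        "loss": 6.1890668869018555,
        "roc_auc": 0.8079876899719238
    },
    "combined": {
        "loss": 6.1890668869018555
    }
}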

Infernaught commented 10 months ago

Right. Instead of training_statistics.json, test_statistics.json should be passed in. This file is generated by either model.evaluate or model.experiment. This should fix the KeyError.
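
For completeness, one possible workflow (dataset names and paths are illustrative, and this assumes the default behavior where ludwig evaluate writes test_statistics.json into its output directory):

ludwig evaluate --model_path dir1/model --dataset test.csv --output_directory eval1
ludwig evaluate --model_path dir2/model --dataset test.csv --output_directory eval2
ludwig visualize --visualization compare_performance \
    --test_statistics eval1/test_statistics.json eval2/test_statistics.json \
    --output_directory viz --file_format pdf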