NielsRogge / Transformers-Tutorials

This repository contains demos I made with the Transformers library by HuggingFace.

Output for the segformer inference is incorrect #278

Open aquib1011 opened 1 year ago

aquib1011 commented 1 year ago

Hi @NielsRogge @FrancescoSaverioZuppichini, while running the code in Fine_tune_SegFormer_on_custom_dataset_[RUGD].ipynb I hit two issues. The first is `TypeError: _compute() missing 2 required positional arguments: 'predictions' and 'references'`, which I solved by removing the underscore from `_compute` in this part of the code:

```python
metrics = metric._compute(
    num_labels=len(id2label),
    ignore_index=255,
    reduce_labels=False,  # we've already reduced the labels before
)
```
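For anyone hitting the same `TypeError`, here is a minimal sketch of the evaluation call, assuming the metric is `evaluate.load("mean_iou")` and that predicted and ground-truth label maps are accumulated with `add_batch` during the evaluation loop. The arrays and the `id2label` placeholder below are illustrative only, not the notebook's actual variables:

```python
import evaluate
import numpy as np

metric = evaluate.load("mean_iou")

# placeholder (H, W) label maps, just to illustrate the call signature;
# in the notebook these come from the upsampled argmax of the logits
# and from the ground-truth segmentation maps
pred_labels = np.zeros((4, 4), dtype=np.int64)
labels = np.zeros((4, 4), dtype=np.int64)
id2label = {0: "dirt"}  # placeholder; the real mapping covers all 24 RUGD classes

# accumulate one batch of predictions/references (normally inside the eval loop)
metric.add_batch(predictions=[pred_labels], references=[labels])

# the public compute() consumes the accumulated batches, so predictions and
# references do not need to be passed again here, which is exactly what the
# _compute() TypeError was complaining about
metrics = metric.compute(
    num_labels=len(id2label),
    ignore_index=255,
    reduce_labels=False,  # labels were already reduced beforehand
)
print(metrics["mean_iou"], metrics["mean_accuracy"])
```

Calling the internal `_compute()` directly bypasses the accumulated batches, which is why it demands `predictions` and `references` explicitly; the public `compute()` (no underscore) avoids that.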

But even after training, the inference gives the wrong result and the color coding is different (screenshot attached). Please try to re-run the code to check for the error. The RUGD colormap is attached below; a sketch of how it can be mapped onto the predictions follows after it.

RUGD_annotation-colormap.txt 

1 dirt 108 64 20
2 sand 255 229 204
3 grass 0 102 0
4 tree 0 255 0
5 pole 0 153 153
6 water 0 128 255
7 sky 0 0 0
8 vehicle 255 255 0
9 container/generic-object 255 0 127
10 asphalt 64 64 64
11 gravel 255 128 0
12 building 255 0 0
13 mulch 153 76 0
14 rock-bed 102 102 0
15 log 102 0 0
16 bicycle 0 255 128
17 person 204 153 255
18 fence 102 0 204
19 bush 255 153 204
20 sign 0 102 102
21 rock 153 204 255
22 bridge 102 255 255
23 concrete 101 101 11
24 picnic-table 114 85 47

I have taken the dataset from the official RUGD website.
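One thing worth checking when the colors look shifted is whether the palette used for visualization lines up with the label ids the model actually predicts; an off-by-one between 0-based model outputs and the 1-based ids in the colormap above would recolor every class. Below is a minimal sketch of applying that colormap (only a few entries are spelled out; the rest come from the file above):

```python
import numpy as np

# palette taken from RUGD_annotation-colormap.txt; key = class id, value = RGB
rugd_palette = {
    1: (108, 64, 20),     # dirt
    2: (255, 229, 204),   # sand
    3: (0, 102, 0),       # grass
    # ... remaining entries from the colormap above ...
    24: (114, 85, 47),    # picnic-table
}

def colorize(pred_seg: np.ndarray) -> np.ndarray:
    """Map a (H, W) array of class ids onto a (H, W, 3) uint8 RGB image."""
    color_seg = np.zeros((*pred_seg.shape, 3), dtype=np.uint8)
    for class_id, rgb in rugd_palette.items():
        color_seg[pred_seg == class_id] = rgb
    return color_seg

# ids not present in the palette (e.g. a 0-based prediction) stay black,
# which makes an id/palette mismatch easy to spot
print(colorize(np.array([[1, 3], [24, 0]])).shape)  # (2, 2, 3)
```

If the model was trained with 0-based ids (or with `reduce_labels=True` somewhere in the pipeline), the palette keys need to be shifted accordingly before visualizing.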

aquib1011 commented 1 year ago

Following up, @NielsRogge.

yashmewada9618 commented 1 year ago

I am facing the same issue. The loss and mean accuracy values are not overshooting, so maybe my model is being over-trained. I tried nvidia/mit-b2 and the issue remains the same: everything is being inferred as something else.

In my case the road is showing up as the container/generic-object class.
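For reference, here is a minimal inference sketch, assuming a locally saved fine-tuned checkpoint and the standard SegFormer preprocessing (the paths are placeholders, and `SegformerImageProcessor` is the recent-transformers name for what older versions call `SegformerFeatureExtractor`). Printing the unique predicted ids helps tell a genuinely wrong model apart from a merely wrong color mapping:

```python
import numpy as np
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# placeholders: point these at the fine-tuned checkpoint and a test image
processor = SegformerImageProcessor(do_reduce_labels=False)
model = SegformerForSemanticSegmentation.from_pretrained("path/to/finetuned-checkpoint")
model.eval()

image = Image.open("path/to/test_image.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# upsample to the original image size before taking the argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_seg = upsampled.argmax(dim=1)[0].cpu().numpy()  # (H, W) class ids

print(np.unique(pred_seg))  # which class ids the model actually predicts
```

If the predicted ids look sensible but the rendered colors do not, the problem is most likely in the id-to-color mapping rather than in training itself.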