trinity-xai / Trinity

Explainable AI analysis tool and 3D visualization
Apache License 2.0
105 stars 13 forks

Distance object Color not updating on change #45

Open Birdasaur opened 1 year ago

Birdasaur commented 1 year ago

Related to #35

After a Distance object has been created, the color of the 3D line that represents it can be changed using a color picker. However, when the color value in the control is altered, the Distance object's color is not immediately updated. Only when a different Distance object is selected in the list view is the previously changed color rendered. This implies that either the distance-change event is not being fired, the listeners are not hooked up, or the global map of Distance objects is not being updated in a timely manner relative to the event handling.

This behavior has been observed in the Projections 3D pane but may also be present in the Hyperspace3D pane.
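The symptom above (the new color appearing only after re-selecting) is the classic sign of the renderer being refreshed on list-selection changes rather than on the color property itself. As a rough plain-Java sketch of the listener pattern — all names here are hypothetical, and Trinity's actual code (JavaFX `ObjectProperty`/`ChangeListener`) is not shown in this thread:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal stand-in for an observable color property (hypothetical;
// Trinity would use JavaFX's ObjectProperty<Color> instead).
class ColorProperty {
    private String color;
    private final List<Consumer<String>> listeners = new ArrayList<>();

    ColorProperty(String initial) { this.color = initial; }

    void addListener(Consumer<String> l) { listeners.add(l); }

    // Setting the value notifies listeners immediately. If nothing is
    // registered here, the 3D line only refreshes on the next selection
    // change, which matches the behavior described in the issue.
    void set(String newColor) {
        if (!newColor.equals(color)) {
            color = newColor;
            listeners.forEach(l -> l.accept(newColor));
        }
    }

    String get() { return color; }
}

public class DistanceColorDemo {
    public static void main(String[] args) {
        ColorProperty picker = new ColorProperty("RED");
        StringBuilder renderedColor = new StringBuilder("RED");

        // Hook the "renderer" directly to the picker's value,
        // not to the list view's selection model:
        picker.addListener(c ->
            renderedColor.replace(0, renderedColor.length(), c));

        picker.set("BLUE");
        System.out.println(renderedColor); // BLUE, without re-selecting
    }
}
```

The point of the sketch is only where the listener is registered; the actual fix would hang off the color picker's value property (or the Distance object's color property) so the 3D line material updates in the same event cycle.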

czp13 commented 1 year ago

Could you share a file and/or steps to reproduce the issue, @Birdasaur? It would help me a lot to fix this faster without having to dig out all the context from scratch. Appreciated if you can help! 🙇

Birdasaur commented 1 year ago

Ah, this brings up a great point... we don't have any files for testing! We need to upload a few different files to facilitate that. I will do so, but I will need a little time to determine which data files are good representative examples yet are not sensitive (so I can share them).

czp13 commented 1 year ago

Oh, I see! Thank you for your response. Yes, indeed it is much easier to fix a bug if you can actually see/reproduce it :D. And I fully understand the concern and the problem with the data. Take your time; I also have busy days/weeks behind me with Devoxx and family/work stuff. I will have a second look at this issue once we have some files/data to reproduce it with.

Birdasaur commented 1 year ago

@samypr100 Does it make sense here to create a branch for this issue and add a "sample" folder or something like that?

samypr100 commented 1 year ago

Agreed

samypr100 commented 1 year ago

I'd name it example_data

Birdasaur commented 1 year ago

https://github.com/Birdasaur/Trinity/tree/45-distance-object-color-not-updating-on-change

Added two data files (human vs chatgpt) and a label coloration configuration file.

Birdasaur commented 1 year ago

Adding these files, while just JSON, adds over 50 MB to the repo. I didn't stop to think about that. Is this the right thing to do?

czp13 commented 1 year ago

@Birdasaur: Thank you very much for working on this! Much appreciated!

I'm with you on this. I believe GitHub has a 100 MB limit for individual files; beyond that you need to use Git LFS. But 20-50 MB files are also not nice to add. Usually, from what I have seen and how we've resolved this at different companies, there are a couple of options:

- Use a Google Drive link (and add the dataset link to the README.md).

- Narrow down the dataset to include only what's relevant to a specific problem. However, in that case, we might need to do that for each bug.

On the other hand, we can't write unit tests without a test dataset to verify that the problem is fixed and stays fixed in the long run.

So yeah, many thoughts :D, but in short: I am fine if you add it to Google Drive if possible, or anywhere else but GitHub. And it would still be nice to have it covered with a unit test, I feel.

samypr100 commented 1 year ago

It might be easier to provide instructions on how to generate the data instead. Although if you've already committed and pushed, the size increase is permanent in the history now, I think.

czp13 commented 1 year ago

@samypr100 I do not see any recent push or PR containing the big files.

@Birdasaur and @samypr100:

I think 50 MB is okay, and GitHub should be able to handle it nicely; the downsides I can see now are:

Also, if you reset and fully remove the commit from the git history, and then run some kind of GitHub magic (or garbage collection eventually kicks in), the files should be removed completely and the space reclaimed. More info about how to completely remove data from Git:
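As an aside, the usual first step is to confirm what is actually weighing down the history by listing the largest blobs Git still knows about. A throwaway sketch (the demo repo and file name below are made up for illustration):

```shell
set -e
# Build a throwaway repo containing one ~1 MB file, so there is history to inspect
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"
head -c 1048576 /dev/zero > example_data.json
git add example_data.json
git -c user.email=demo@example.com -c user.name=demo commit -q -m "add example_data.json"

# List every blob in history with its size in bytes, largest last.
# This also reveals files that were deleted but still live in old commits.
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '$1 == "blob" {print $3, $4}' \
  | sort -n | tail -3
```

Actually rewriting a file out of history would then use a dedicated tool such as `git filter-repo --invert-paths --path example_data.json`, followed by a force push; the deprecated built-in alternative is `git filter-branch`.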

All in all, committing 50 MB is not the end of the world, but in the long run, for long-lived/frequently used open-source libs/projects, it is not optimal. So I suggest the previous options (Google Drive or other storage; generating the data also seems like a good idea, but generating data that covers the issues above seems a bit more complex to me...)