steveharoz / open-access-vis

A collection of open access material presented at the VIS conference
http://oavis.steveharoz.com
48 stars · 11 forks

Clique Community Persistence: A Topological Visual Analysis Approach for Complex Networks #38

Closed by Pseudomanifold 6 years ago

Pseudomanifold commented 7 years ago

Dear Steve,

thank you so much for this service! I have finally added all information about this project on OSF.io. Would you kindly include the relevant URIs?

Hope that I am using this service correctly; this is my first project on OSF, inspired by your initiative. Thanks for your efforts!

Best, Bastian

steveharoz commented 7 years ago

Thanks Bastian. The paper is updated, and I've listed the OSF repository as the materials. The data category is for experiment results, such as subject responses or algorithm runtime measurements, and I'm not sure the data folder here fits that. Feel free to comment in this issue if you think anything should change.

Pseudomanifold commented 7 years ago

Hi Steve! I realize that I should have named the folder differently, but it does in fact contain the results of algorithmic runs, along with scripts to recreate them yourself (so you can compare them with the ones we reported in the paper). I'll also add a link from the GitHub repository to the OSF one so that more people are able to find it.

Pseudomanifold commented 7 years ago

Hi Steve, sorry to bother you again, but I wanted to ask whether you could add the OSF repository as a data repository of our paper, as well. I have uploaded the raw input files as well as the results of our analysis in order to make everything reproducible.

steveharoz commented 7 years ago

I took a look at the data folder, and I'm still not sure if it fits the criteria of experiment data rather than materials.

I don't see:

  1. A data dictionary (how do I read the data?)
  2. An explanation in the paper about how the data is used for an analysis (e.g., mean & SD of algorithm run times) or comparison (algorithm A vs algorithm B; simulation vs ground truth; algorithm vs human estimation; etc.). I see the conclusion mentions a contrast with existing methods, but I don't see a discussion of the comparison using the data.

I just want to make sure that I understand what's posted, as it seems to fall outside the typical umbrella of experiment results.

Pseudomanifold commented 7 years ago

OK, I understand that I was mistaken about the nature of these experimental data. I have added an updated README to the Data folder to explain what is in there. In short, I added:

I also added detailed instructions and automated scripts for reproducing every figure and every table in the paper. The results are included so that the correctness of the calculations can be checked.

So, I was hoping that this (along with the supplied code) should make it possible to reproduce just about everything in the paper, and to try it out with new data as well.

Sorry for taking up so much time; I have to admit that Open Science is something new for me—but I'm very excited to try it out and see the benefits for other scientists.