vacp2p / wakurtosis


Accumulated metrics into analysis image #115

Closed: AlbertoSoutullo closed this 1 year ago

AlbertoSoutullo commented 1 year ago

Changed the analysis so that you can now select accumulated values for specific metrics; you can find an example in config.json.

by_node metrics are gathered and plotted per Waku node.

by_simulation means the metric is gathered from every node at the same timestamp and accumulated into a single value. For example, in the case of bandwidth, the first value of the plot is the sum of the bandwidth metrics of all nodes at the first timestamp, and so on.
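To make the two modes concrete, here is a minimal sketch of the difference (not the actual analysis code; the node names and metric values are made up):

```python
import pandas as pd

# Hypothetical per-node samples: one row per (node, timestamp) pair.
samples = pd.DataFrame({
    "node":      ["node0", "node1", "node0", "node1"],
    "timestamp": [0, 0, 1, 1],
    "bandwidth": [10.0, 12.0, 11.0, 9.0],
})

# by_node: one series per node, each plotted individually.
by_node = {node: df.set_index("timestamp")["bandwidth"]
           for node, df in samples.groupby("node")}

# by_simulation: sum the metric across all nodes at each timestamp,
# yielding a single accumulated series for the whole simulation.
by_simulation = samples.groupby("timestamp")["bandwidth"].sum()

print(by_simulation)
# timestamp
# 0    22.0
# 1    20.0
```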

The analysis now runs automatically if the plotting keyword is present in config.json. Right now it calls src/main.py, but we could make the entry point selectable in config.json, for example, once all branches are merged.
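Roughly, the behaviour is the following (a sketch, not the actual code; the analysis_script key is hypothetical):

```python
import json
import subprocess

# Run the analysis step only when a "plotting" section exists in config.json.
with open("config.json") as f:
    config = json.load(f)

if "plotting" in config:
    # Hard-coded entry point for now; a hypothetical "analysis_script" key
    # could make this selectable in config.json later.
    script = config.get("analysis_script", "src/main.py")
    subprocess.run(["python3", script], check=True)
```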

Also, added tests for new functions.

Closes #116, closes #117

Daimakaimura commented 1 year ago

@AlbertoSoutullo this runs fine on my machine, but I cannot find the .pdf output file. Isn't it supposed to be under ./wakurtosis_logs?

AlbertoSoutullo commented 1 year ago

I changed the config.json file as we discussed. It is now much simpler, but we no longer plot several metrics in the same subplot.

We had two options here: either config.json is more verbose but simpler to code and customize, or the plot configuration is "hidden", which to be honest is a complete mess right now. I am not entirely happy with this, as a more verbose option would have been acceptable to me, but it is what we decided, and I can understand the choice.

I could refactor everything to allow plotting different metrics in the same subplot, but at this point I don't want to spend more days on it, since a good plotting configuration is not easy to design. And yes, it is helpful to have two metrics in the same subplot for comparison, but we cannot have everything right now. A sketch of the trade-off follows below.
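For illustration, the two shapes side by side (shown as Python dicts; both are hypothetical, not the actual wakurtosis config.json schema):

```python
# Simpler, less verbose: one subplot per metric, nothing to lay out by hand.
simple_config = {
    "plotting": {
        "by_node": ["bandwidth"],
        "by_simulation": ["bandwidth", "cpu"],
    }
}

# More verbose: subplots are laid out explicitly, which allows several
# metrics per subplot but gives the user more ways to get the file wrong.
verbose_config = {
    "plotting": {
        "subplots": [
            {"title": "traffic", "metrics": ["bandwidth_in", "bandwidth_out"]},
            {"title": "cpu", "metrics": ["cpu"]},
        ]
    }
}
```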

Regards.

Daimakaimura commented 1 year ago

Agreed, we shouldn't spend more time on this right now. I also expect to get some feedback once someone actually uses it, and then we can proceed accordingly. All good, but I'm not going to approve this until we merge PR #111, because I've already been having issues after other merges.

AlbertoSoutullo commented 1 year ago

> Agreed, we shouldn't spend more time on this right now. I also expect to get some feedback once someone actually uses it, and then we can proceed accordingly. All good, but I'm not going to approve this until we merge PR #111, because I've already been having issues after other merges.

I am sorry, but I do not agree with this choice.

I already provided feedback on your PR, and apart from some minor stuff, the workflow is currently broken because of the venv, so I cannot fully test it.

On the other hand, this PR is fully ready, all tests are passing, and the workflow is working from end to end.

I do not see any reason not to approve this if everything is OK, as you mentioned. If it needs changes, that is what reviews are for. If not, then it should be approved and merged.

Can you elaborate more on your decision, so I can try to understand your POV?

Daimakaimura commented 1 year ago

I just want to merge #111 first, to avoid potential conflicts in the merge (which I already fixed once before).

AlbertoSoutullo commented 1 year ago

> I just want to merge #111 first, to avoid potential conflicts in the merge (which I already fixed once before).

This feels a bit weird. If I understand correctly, you want to hold up my merge, which is finished, until yours is merged, to avoid conflicts.

Conflicts are sometimes inevitable. I have had to deal with conflicts as well, and so have you, and Ganesh. Sometimes they are harder, sometimes easier. But I can't agree with a situation like this, because you are essentially moving the possible conflicts from your unfinished branch onto my finished one, so I will have to deal with them.

I don't mind dealing with conflicts, because they occur naturally. But this is a deliberate action, and it doesn't feel like the way things should be.

I will check your PR now, and if it works, I will approve it, of course. But if it doesn't, or if I think it needs changes, I will request changes. If you still don't want to approve mine, do as you please. I just wanted to let you know my honest opinion.

Regards.

Daimakaimura commented 1 year ago

I didn't mean to make things harder for you - just trying to get this thing over the finish line as fast as we can, you know?

I'll go ahead and approve your PR now. If any merge issues pop up later, we can tackle them together. Thanks for speaking up about this, let's just keep moving forward.

AlbertoSoutullo commented 1 year ago

> I didn't mean to make things harder for you - just trying to get this thing over the finish line as fast as we can, you know?
>
> I'll go ahead and approve your PR now. If any merge issues pop up later, we can tackle them together. Thanks for speaking up about this, let's just keep moving forward.

I understand. Of course I didn't mean that you wanted to make things harder for me; don't even worry about that.

I am always open to helping with any issues, whether tests, conflicts, or anything else.

0xFugue commented 1 year ago

The config.json is much cleaner, cheers. The idea is to keep the 'declarative' feel of config.json and let wakurtosis enforce the requirements specified there transparently. I understand you wanting the plots to be 'fully' configurable, but since we do not verbosely report where and how config.json parsing fails inside wakurtosis, specifying everything in config.json would make it error-prone and hard to debug.

One minor issue with run.sh: we need to make the Prometheus/Grafana configs explicitly readable (`chmod a+r monitoring/prometheus.yml monitoring/configuration/config/grafana.ini ./monitoring/configuration/config/provisioning/dashboards/dashboard.yaml`). This might be why Jordi's runs were failing to produce the analysis.pdf: on my machine I definitely needed the chmod to get Prometheus off the ground. I already added it to run.sh in my branch, but feel free to update it if you'd like.

I second the idea of each of us invoking different Python scripts inside the analysis Docker image, depending on the metrics tag selected (cadvisor/dstats/host-proc/container-proc). That's cleaner.
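Something along these lines, for example (a sketch only; the script paths and the config key are assumptions, not the actual wakurtosis layout):

```python
import json
import subprocess

# Map each metrics infrastructure tag to its own analysis script
# (hypothetical paths).
SCRIPTS = {
    "cadvisor": "src/analysis_cadvisor.py",
    "dstats": "src/analysis_dstats.py",
    "host-proc": "src/analysis_host_proc.py",
    "container-proc": "src/analysis_container_proc.py",
}

with open("config.json") as f:
    tag = json.load(f)["metrics"]  # hypothetical key holding the tag

subprocess.run(["python3", SCRIPTS[tag]], check=True)
```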

I have a few minor suggestions; let's thrash them out in Monday's call. LGTM otherwise, and nice work!