Open pdehaye opened 1 year ago
(for the prerequisite for this)
Looks like it is possible to use JavaScript client-side libraries for visualisations in Jupyter. Jupyter needs to be installed with both the Python and JS kernels.
See https://towardsdatascience.com/javascript-charts-on-jupyter-notebooks-dd25f794cf6a
General approach seems to be:
We need to provide specs to deploy experiences in Notebooks, if it is even possible.
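As a minimal sketch of the client-side approach described above (not tied to any particular chart library, and independent of the JS kernel): the Python kernel can emit an HTML/JS snippet and hand it to the browser for rendering. The helper below builds such a snippet with the standard library only; in a notebook you would wrap the result with `IPython.display.HTML`. The element id and the bar-drawing logic are illustrative assumptions, not part of any existing experience.

```python
import json

def chart_html(data, element_id="chart"):
    """Build a self-contained HTML/JS snippet that renders a tiny bar
    chart with plain DOM calls (no chart library assumed).
    In a notebook, display it via IPython.display.HTML(chart_html(...))."""
    payload = json.dumps(data)  # safe bridge from Python data to a JS literal
    return f'''
<div id="{element_id}"></div>
<script>
  const data = {payload};
  const root = document.getElementById("{element_id}");
  for (const [label, value] of Object.entries(data)) {{
    const bar = document.createElement("div");
    bar.style.width = (value * 10) + "px";
    bar.style.background = "steelblue";
    bar.textContent = label + ": " + value;
    root.appendChild(bar);
  }}
</script>
'''

snippet = chart_html({"apples": 3, "pears": 7})
```

The same pattern would let an experience's bundled JS (once modularisation exposes it as a library) be injected from a plain Python notebook, without requiring the JS kernel at all.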
@valentinoli @pdehaye we said this should move to Specs Needed column, right?
Yes
Waiting for modularisation to be completed #896
@valentinoli was asking about the value of this as something separate from experiences.
@pdehaye explains that we sometimes need to work on data in unknown formats, e.g. from audit files / @HaixinShi's auditing. We don't want to program an experience for every unique file format.
@pdehaye points out that the dev team needs to see more of what sorts of data formats come out of @HaixinShi's auditing tool in order to better understand the need here. Presentation proposed?
@HaixinShi @pdehaye and others to discuss on Monday.
@Caochy suggests we could do some work on this independently of modularisation.
@pdehaye says we as a team need to get more familiar with this audit data.
All agreed we want to avoid duplicate work.
Awaiting discussion next Monday.
@emmanuel-hestia will talk to @pdehaye to clarify what to do with this ticket / this work. @alexbfree proposes to see this as work following on from hestiaAI/clients/issues/35.
Transferred for information at this stage to @streitlua
In the past I have suggested using experiences in Jupyter, but that discussion didn't go very far. I am starting this ticket:
The latter ticket's first comment concerns running the data processing pipeline as a Node program invoked from Jupyter (if I got that right).
As I understand it, the two approaches are different (one uses the Jupyter JS kernel, the other one doesn't), with different outcomes (one runs the data processing pipelines, the other also runs the visuals).
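For concreteness, the second approach would drive the pipeline from the Python kernel as a child process, roughly as below. This is a sketch only: `pipeline.js` and its flags are hypothetical placeholders for whatever CLI the real Node pipeline exposes.

```python
import shutil
import subprocess

def build_command(script, *args):
    """Assemble the Node invocation. `script` is the (hypothetical)
    pipeline entry point, e.g. "pipeline.js"; adjust to the real CLI."""
    return ["node", script, *args]

def run_node_pipeline(script, *args):
    """Run the pipeline as a child process and return its stdout.
    Raises if Node.js is not on PATH or the script exits non-zero."""
    if shutil.which("node") is None:
        raise RuntimeError("Node.js not found on PATH")
    result = subprocess.run(
        build_command(script, *args),
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# In a notebook cell (assuming the pipeline emits JSON on stdout):
#   output = run_node_pipeline("pipeline.js", "--input", "audit.json")
```

Note that this subprocess route only produces data; rendering the visuals would still need something like the client-side embedding discussed above.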
Is there something to be gained in synthesising the two issues, once modularisation is done? Does this indicate there is a clear point where we should "cleave" experiences in two? Is there still a point to a separate Node program, if we can easily embed (parts of) experiences through libraries?
Please comment on this ticket, which, depending on the responses, might evolve into an actual plan to migrate (some) experiences into Jupyter as well.