WenheLI opened 4 years ago
@WenheLI our current evaluation result is a JSON string, which depends on the model-evaluate plugin. Before we start working on Pipboard, is it possible to have a protocol design for `evaluateMaps`?
@yorkie - If we render those maps in the pipcook process, we can directly write images to disk and refer to them by URI. We would just need to modify `pipboard` to support image display. However, this solution compromises the interactivity.
Another solution is to render the data in the frontend; in that case, we need a data structure like:
```ts
{
  numericalData: Number,
  renderedData: ArrayLike<Number>,
  type: 'ConfusionMatrix' | 'Image' | ...
}
```
This solution adds complexity but gives flexibility when exploring data and models.
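To make the shape concrete, here is a minimal sketch of that structure in TypeScript. The names `EvaluationMap` and `EvaluationMapType` are assumptions for illustration, not part of the current pipcook API:

```typescript
// Hypothetical names; only the field shapes come from the proposal above.
type EvaluationMapType = 'ConfusionMatrix' | 'Image';

interface EvaluationMap {
  numericalData: number;           // scalar summary, e.g. overall accuracy
  renderedData: ArrayLike<number>; // flattened payload for the frontend to render
  type: EvaluationMapType;         // tells pipboard which renderer to use
}

// Example: a 2x2 confusion matrix flattened row-major.
const confusion: EvaluationMap = {
  numericalData: 0.95,
  renderedData: [40, 2, 3, 55],
  type: 'ConfusionMatrix',
};
```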
> If we render those maps in the pipcook process, we can directly write images into the disk and refer them using URI. We just need to modify the pipboard to support the image display. However, this solution compromises the interactivity.
How does Pipboard know how to render the data? Does that mean we will hard-code reading these URLs for display? By the way, I like the 2nd one despite its implementation complexity.
We need to tell `pipboard` how to render the different types of maps, using something like:
```ts
switch (type) {
  case 'ConfusionMatrix':
    renderCM(renderedData);
    break;
  case 'Image':
    renderImage(renderedData);
    break;
  // ...
}
```
I see. Does this add a new protocol to tell the model-evaluate plugin how to write `evaluationMaps`?
This is the previous design:
```ts
export interface EvaluateResult {
  pass?: boolean;
  [key: string]: any;
}
```
Maybe we can change it into:
```ts
export interface EvaluateResult {
  pass?: boolean;
  requireRender: boolean;
  renderData?: ArrayLike<Number>;
  [key: string]: any;
}
```
This definition makes an interface in `pipcook-core` Pipboard-oriented, which is not great for a plugin developer who has to learn how to render in Pipboard. How about putting these types into the `EvaluateResult`?
And I think `requireRender` is not needed; the renderer can detect by itself whether the data can be rendered :)
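A minimal sketch of that self-detection idea, assuming the check is simply whether `renderData` is present and non-empty (the function name is hypothetical):

```typescript
// Instead of a requireRender flag, pipboard inspects the result itself.
function isRenderable(result: { renderData?: ArrayLike<number> }): boolean {
  return result.renderData != null && result.renderData.length > 0;
}
```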
BTW it's useful to add accuracy, recall rate, and losses, too :p
Currently, `pipboard` can display numerical data (accuracy, recall rate, and losses). However, most model evaluation processes require images (e.g. a generated image from a GAN model) or tables (e.g. a confusion matrix in a classification task) for better parameter tuning. Therefore, I think it is necessary to support image/table display in both `pipboard` and the eval plugin protocol. Discussions on how to implement it and on the protocol design are very welcome.