I had some discussions about this recently, e.g. with @guijacquemet; @tischi is also interested in this. I see a couple of options for supporting users in maintaining good scientific practice when using bob:
We could add a prompt to bob internally that makes bob suggest steps for testing code. E.g. if you ask for a segmentation algorithm, bob could automatically say something like "It is good scientific practice to compare automatic segmentation results to manual annotations. Let me know if you want to create code for such a comparison." This might make particular sense when writing notebooks, because there is space for it.
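To make the suggestion concrete, here is a minimal sketch of the kind of comparison code bob might offer, assuming both the automatic segmentation and the manual annotation are available as binary numpy masks of the same shape; the function name and the toy masks are illustrative, not part of bob:

```python
import numpy as np

def dice_score(segmentation, annotation):
    """Dice coefficient between a binary automatic segmentation
    and a binary manual annotation (1.0 = perfect overlap)."""
    seg = segmentation.astype(bool)
    ann = annotation.astype(bool)
    intersection = np.logical_and(seg, ann).sum()
    total = seg.sum() + ann.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# toy example: two 4x4 masks that partially overlap
auto = np.zeros((4, 4), dtype=np.uint8)
auto[1:3, 1:3] = 1       # 4 foreground pixels
manual = np.zeros((4, 4), dtype=np.uint8)
manual[1:3, 1:4] = 1     # 6 foreground pixels, 4 shared with auto

print(dice_score(auto, manual))  # 0.8
```

In a notebook, bob could append such a cell right after the segmentation step, so the user sees the agreement with their annotation before trusting the result.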
We could also add such comparison code immediately, without asking. I would find this a bit annoying, but if the majority thinks it makes sense, I will implement it.
The least annoying option might be to point users to online resources on this topic, such as videos or blog posts. We may have to create those resources ourselves to cover good scientific practice in the LLM context.