Open glopesdev opened 1 year ago
Comments from #308 (closed as duplicate):
from @RoboDoig
aeon_experiments should have a standard structure for tracking benchmarking and calibration experiments related to the main aeon experiments.
The proposal is to have a branch off main called 'rig-qc'. For each specific benchmarking experiment, a new branch is created off rig-qc. The folder structure for benchmark workflows/analysis/data is:
workflows
- tests
  - Readme (general readme for how to set up the benchmark folder)
  - bonsai (contains the env for the benchmark)

analysis
- qc-workflows
  - Readme (purpose of the benchmark, data location on ceph, etc.)
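The layout above could be scaffolded with a short shell sketch. Note this is only an illustration of the proposal: the `feeder-benchmark` project name and the exact nesting of the readmes under `tests` and `qc-workflows` are assumptions, not part of the proposal text.

```shell
# Hypothetical scaffold for one benchmark project following the proposed layout.
# "feeder-benchmark" and the nesting shown here are illustrative assumptions.
mkdir -p feeder-benchmark/workflows/tests/bonsai
mkdir -p feeder-benchmark/analysis/qc-workflows

# Per-folder readmes: setup instructions for the benchmark folder, and the
# benchmark's purpose / data location on ceph, as described in the proposal.
touch feeder-benchmark/workflows/tests/Readme.md
touch feeder-benchmark/analysis/qc-workflows/Readme.md
```

Each new benchmark experiment branch off rig-qc would then carry one such self-contained folder.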
from @jkbhagatio
Actually, if we're not enforcing QC via the LogController, and if we end up having and keeping many branches off 'rig-qc' that never get merged back in, I may be inclined to go back to having all QC stuff in a separate repo. Let's discuss this further.
Every QC project / folder should have, at a minimum:
This would roughly follow the structure in the root of the 'aeon_experiments' repo:
The growing number and complexity of foraging arena components requires the development of self-contained hardware and system integration tests to ensure both the correct functioning and calibration of the rig elements, e.g. feeder delivery and torque measurements, RFID testing, camera calibration, etc.
These tests cut across the different experimental protocols, since they are associated with quality-control of the entire arena structure. As such, it makes sense to have some kind of reusable and pluggable module that is available to include in protocols.
Considerations:
Proposals:
New experiments branch (`tests`)
Pros:
Cons:
New aeon package (`Aeon.Testing`)
Pros:
Cons:
New aeon repository (`aeon_tests`)
Pros:
Cons:
Include tests together with hardware module repositories
Pros:
Cons: