akim1 opened 9 years ago
I think in most cases, as you said, ground truth is determined by a team manually mapping a system.
This is why mappings like the C. elegans connectome and Bock's are so useful: they let us compare against some notion of ground truth, rather than just evaluating against other models.
I guess people do the wet-lab work when it's feasible, but what about when the number of neurons is too large for that, as in humans? Not to mention the biological heterogeneity across the population. I think this will remain a big open problem, without a good, solid answer for a while, for anyone who wants to work in this field.
We now have a way to get a computer to generate a graph from neural scans (the i2g paper). Could we not extend this work to include spatial location? Then the computer could tell us what truth is, with some degree of accuracy. Maybe?
The catch is that there doesn't seem to be a way to know whether the graphs generated from the neural scans are themselves accurate.
How is truth generated in most of these studies? Does someone sit down and actually trace it out by hand, or do we just assume some particular pattern that keeps coming up?