crossley opened 3 months ago
And what is the meaning/significance of time and learning in artificial and natural contexts (especially with federated, offline, high-throughput learning)?
what's federated offline high throughput learning?
Can we create universal modifiers that work the same way in several (3 or more) cognitive substrates?
How important are models of the environment, including self, in improving learning rate?
Yes!!!!!! And in animals, what are those models like, where do they come from, and how do they develop and evolve? Is "model" even the right metaphor for an animal, or would Barbara Webb take exception?
A fundamental flaw in artificial intelligences is that they have no motivation that we haven't given them. All living organisms are motivated, in part, by ensuring survival - how can we give this kind of pull to artificial intelligences? (We have all seen that movie.) There is also the issue of what happens at failure - is there a mechanism of self-punishment? Can you have any intelligence without a body?
Can we push funding sources in the opposite direction to fear-based projects (e.g. existential risk) at least part of the time?
TABLE mid: summaries of past table discussions:
Is navigation the first sign of intelligence - directed movement with some goal? Is this how we could identify intelligence?
Requires a body, not just a mind.
Knowledge gap: How multiple systems / modules are coordinated and controlled in learning and behaviour.
yes! brains have that network-of-networks feature. How do we wrestle with that?
What is intelligence in the first place? We need a shared language that spans our lightly connected fields. This is a huge gap that seems quite essential to address. This might be the semantic hygiene idea.
If different fields of research use different language and semantics, are we able to communicate and collaborate effectively? Can we solve this with an LLM?
Terminology commonality and understanding are emerging again as a key issue, especially in diverse networks.
@ABBarron federated high-throughput learning is millions of robots in kitchens all around the world sharing what they learn, and also running 100x real-time simulations in parallel.
e.g. "I accidentally stabbed my master Andrew - he appears to be unhealthy - suggest other robots not do this"
A limitation is the small population of researchers working in both neuroscience (biological) and AI/engineering. A dual background is needed to overcome current obstacles in the field -> we need funding for these programs (e.g. dual Bachelor's/Master's degrees); there is less funding on the neuroscience side.
Challenge of bridging different disciplines, given the complexity of the problems/frameworks within separate fields, distinct methodologies, and the opaqueness of central concepts associated with intelligence.
Perceived risks versus real risks of AI - it is not going to destroy the world. Yet we already fail to mitigate the risks around data management, AI in health management, etc.
> A limitation is the small population of researchers working in both neuroscience (biological) and AI/engineering. A dual background is needed to overcome current obstacles in the field -> we need funding for these programs (e.g. dual Bachelor's/Master's degrees); there is less funding on the neuroscience side.
Funding for skill sharing between research groups would also be great further up the chain
For neuromorphic AI we need to understand more of the neuroscience - how the different structures communicate with each other - rather than making wild assumptions about blobs (cortex...).
yes absolutely! A brain is not an undifferentiated network! Far from it! And it's not made of blobs either!
Here we go.... what is consciousness, why did it evolve, and what does it add, if anything, to the system? Do we need and want our artificial systems to have it? Whoo.
There’s a lot of knowledge across various neuroscience-related topics; however, there is still not a lot of collaboration between these fields. The experts usually have different goals, and what makes collaboration more effective is being able to ask the kinds of questions, and have the kinds of discussions, that result in finding new research gaps. Sometimes the right literature is hard to find because the same concepts are phrased and termed very differently - one answer might be to use AI-based tools to help.
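To make that concrete, here is one hedged sketch of the "AI-based tools" idea: embed terms from different fields and rank them by similarity, so differently named concepts surface together. The model name and example terms are illustrative assumptions, not a recommendation:

```python
# Minimal sketch linking differently termed concepts across literatures.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical terms from different fields that may describe related ideas.
terms = [
    "reinforcement learning",    # machine learning
    "operant conditioning",      # behavioural psychology
    "reward prediction error",   # computational neuroscience
    "path integration",          # insect navigation
    "dead reckoning",            # robotics
]

emb = model.encode(terms, convert_to_tensor=True)
sims = util.cos_sim(emb, emb)

# Print the closest cross-field term for each entry.
for i, term in enumerate(terms):
    sims[i, i] = -1  # ignore self-similarity
    j = int(sims[i].argmax())
    print(f"{term!r} is closest to {terms[j]!r} ({float(sims[i, j]):.2f})")
```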
Ethics around AI decision making?
Biggest-impact questions utilising this network: 1/ a platform for experiments across different species (e.g. https://www.phylopsy.org/ or mimosa?) to better understand cognition and build a basis for consciousness.
Sharing and co-reviewing research questions and methodologies in diverse cognition research through platforms similar to Phylopsy, Mimosa, etc.
> Here we go.... what is consciousness, why did it evolve, and what does it add, if anything, to the system? Do we need and want our artificial systems to have it? Whoo.
Whoo indeed!
Cultural effects on perceived risks, as well as on what tasks need to be solved. At the moment the field appears very US-centric, but a different cultural lens could have a strong effect on the questions asked.
Cancer to the Cambrian explosion - what can we learn?
How do we create a big enough question to cover the whole network? What about creating sub-networks between people interested in similar topics (e.g. active vision: Karin/James/Marie/Andrew; communication: Richard/Olaf… etc.)? Sub-question: how does active learning work in biological/artificial systems?
There may be some important conversations / gaps to address surrounding various ethical issues of AI development and placement in human societies.
Stanford IQ test - what can we learn?
The sustainability of the field: insects are able to perform amazingly with very limited resources, but robotics and AI do not really appear to take this into account in their development.
How does the brain simplify perception?
> Biggest-impact questions utilising this network: 1/ a platform for experiments across different species (e.g. https://www.phylopsy.org/ or mimosa?) to better understand cognition and build a basis for consciousness.
Sharing methodology!
How do we move beyond capitalist considerations?
> A limitation is the small population of researchers working in both neuroscience (biological) and AI/engineering. A dual background is needed to overcome current obstacles in the field -> we need funding for these programs (e.g. dual Bachelor's/Master's degrees); there is less funding on the neuroscience side.
Summer schools with hands-on workshops where neuroscientists learn some computing skills and vice versa.
Skill uplift needed, need training in both machine learning and neuroscience/behavior.
yes! a skills-based summer school? Not just for grads - for ECRs and oldies like me!
what learning theory do we have that's animal-like, but more sophisticated than Rescorla-Wagner?
Cool question. The contextual/Bayesian inference models from Sam Gershman and Yael Niv come to mind as the next generation of associative learning mathematical models. Anne Collins also has some nice work where she places the Bayesian mathematical models into the basal ganglia circuit. I, Greg Ashby (also in attendance), Michael Frank, Kevin Gurney, and others have done a lot of work extending models of action selection in basal ganglia circuits, to the extent that I think they are importantly different from RW math models.
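For readers outside learning theory, here is a minimal sketch of the trial-level Rescorla-Wagner rule this exchange is comparing against; the parameter values and the blocking demo are illustrative:

```python
import numpy as np

def rescorla_wagner(stimuli, rewards, alpha=0.2, beta=1.0):
    """Trial-level Rescorla-Wagner updating of associative strengths.

    stimuli: (n_trials, n_cues) 0/1 array, 1 = cue present on that trial
    rewards: (n_trials,) array of outcome magnitudes (lambda per trial)
    """
    n_trials, n_cues = stimuli.shape
    V = np.zeros(n_cues)                    # associative strengths
    history = np.zeros((n_trials, n_cues))
    for t in range(n_trials):
        present = stimuli[t].astype(bool)
        prediction = V[present].sum()       # summed strength of present cues
        error = rewards[t] - prediction     # prediction error: lambda - sum(V)
        V[present] += alpha * beta * error  # update only the cues present
        history[t] = V
    return history

# Blocking demo: cue A is trained alone, then in compound with cue B;
# because A already predicts the reward, B acquires little strength.
trials = np.array([[1, 0]] * 20 + [[1, 1]] * 20)
rewards = np.ones(40)
hist = rescorla_wagner(trials, rewards)
print("final strengths (A, B):", hist[-1].round(2))
```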
> Skill uplift needed, need training in both machine learning and neuroscience/behavior.

> yes! a skills-based summer school? Not just for grads - for ECRs and oldies like me!
Summer schools are a great idea.
"what does everyone in the room need to do the things they really want to do that they can't currently do?"
Can AIs effectively learn appropriate ethics through reinforcement, or do they need rules?
A central database for measures of cognitive domains and intelligence is required. For instance, Nora Newcombe and colleagues have recently pointed out that spatial cognition measures have a number of issues, and that a central database for assessing and refining these is required. A similar point could be made for many cognitive constructs, and this will be a major gap in trying to refine and test artificial intelligence.
A gap in methodology is the measurement problem - how do we operationalise concepts such as memory capacity, attention, intuitive physics, and intelligence, and then build instruments to test them? How can we improve our measurement tools? This seems crucial if we want to compare cognition across diverse species and AI (and appraise the capabilities, and consequently the safety, of AI systems).
One thing that is limiting progress in AI is that it is not surprised by its mistakes, like animals are. It is not embodied, and doesn’t have a dopaminergic system.
“Introspection” doesn’t map well to cogsci. On the other hand, “metacognition” can be used widely and is sometimes itself contentious. What does it mean to “stand outside oneself” in practical contexts? (I’d lean in to the difference between asking it this way and getting involved in the endless semantic bickering that arises in-field over metacognition.)
Please share your feedback on this group discussion by replying in this thread. Your contributions are important as we aim to identify gaps in knowledge, pinpoint collaborators who can help us bridge these gaps, and explore how sustainable collaboration within the ARCANI network can address these issues. We will incorporate the thoughts and opinions expressed here into a white paper focused on these critical questions.