crossley / ARCANI


WHAT discussion: What are the knowledge gaps #2

Open crossley opened 3 months ago

crossley commented 3 months ago

Please share your feedback on this group discussion by replying in this thread. Your contributions are important as we aim to identify gaps in knowledge, pinpoint collaborators who can help us bridge these gaps, and explore how sustainable collaboration within the ARCANI network can address these issues. We will incorporate the thoughts and opinions expressed here into a white paper focused on these critical questions.

ABBarron commented 3 months ago

> and what is the meaning / significance of time and learning in artificial and natural contexts (especially with federated, offline, high-throughput learning)?

What's federated, offline, high-throughput learning?

ookwrd commented 3 months ago

Can we create universal modifiers that work the same way in several (3 or more) cognitive substrates?

jarmarshall commented 3 months ago

How important are models of the environment, including self, in improving learning rate?

ABBarron commented 3 months ago

> How important are models of the environment, including self, in improving learning rate?

Yes! And in animals, what are those models like? Where do they come from, and how do they develop and evolve? Is "model" even the right metaphor for an animal, or would Barbara Webb take exception?

stenti commented 3 months ago

A fundamental flaw in artificial intelligences is that they have no motivation that we haven't given them. All living organisms are motivated, at least in part, by ensuring survival; how can we give artificial intelligences this kind of pull? (We have all seen that movie.) There is also the issue of what happens at failure: is there a mechanism of self-punishment? Can you have any intelligence without a body?

ookwrd commented 3 months ago

Can we push funding sources in the opposite direction from fear-based projects (e.g. existential risk, etc.), at least part of the time?

dracothraxus commented 3 months ago

TABLE mid-point summaries of past table discussions:

stenti commented 3 months ago

Is navigation the first sign of intelligence: directed movement with some goal? Is this how we could identify intelligence?

stenti commented 3 months ago

> Is navigation the first sign of intelligence: directed movement with some goal? Is this how we could identify intelligence?

Requires a body, not just a mind.

ABBarron commented 3 months ago

> Knowledge gap: How multiple systems / modules are coordinated and controlled in learning and behaviour.

Yes! Brains have that network-of-networks feature. How do we wrestle with that?

crossley commented 3 months ago

What is intelligence in the first place? We need a shared language that spans our lightly connected fields. This is a huge gap that seems quite essential to address. This might be the semantic hygiene idea.

stenti commented 3 months ago

If different fields of research use different language and semantics, are we able to communicate and collaborate effectively? Can we solve this with an LLM?

dracothraxus commented 3 months ago

Terminology commonality and understanding is emerging again as a key issue, especially in diverse networks.

dracothraxus commented 3 months ago

@ABBarron federated high-throughput learning is millions of robots in kitchens all around the world sharing their learnings, and also running simulations at 100x real-time speed in parallel.

e.g. "I accidentally stabbed my master Andrew - he appears to be unhealthy - suggest other robots not do this"
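The pooling step dracothraxus describes is essentially federated averaging: each agent learns locally on its own experience, and only model parameters (never raw data) are shared and averaged. A minimal sketch, with purely illustrative names and not tied to any specific framework:

```python
def local_update(params, gradient, lr=0.1):
    """One local learning step on a single agent's copy of the model."""
    return [p - lr * g for p, g in zip(params, gradient)]

def federated_average(all_params):
    """Server step: average parameters across agents; no raw data is shared."""
    n_agents = len(all_params)
    return [sum(p[d] for p in all_params) / n_agents
            for d in range(len(all_params[0]))]

# Three agents start from a shared model, learn locally, then pool.
shared = [0.0, 0.0]
local_grads = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
agents = [local_update(shared, g) for g in local_grads]
shared = federated_average(agents)
```

Real systems (e.g. FedAvg-style training) weight the average by local data size and add privacy machinery, but the core loop is this local-update-then-average cycle.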

MGGuiraud commented 3 months ago

A limitation is the small population of researchers working in both neuroscience (biological) and AI/engineering. A dual background is needed to overcome current obstacles in the field -> we need funding for these programs (e.g. dual Bachelor's/Master's degrees); there is less funding on the neuroscience side.

patrickmcgivern commented 3 months ago

Challenge of bridging between different disciplines given the complexity of the problems/frameworks within separate fields, distinct methodologies, and the opaqueness of central concepts associated with intelligence

MGGuiraud commented 3 months ago

Perceived risks versus real risks of AI: it is not going to destroy the world. Yet we already fail to mitigate all the risks around data management, AI related to health management, etc.

stenti commented 3 months ago

> A limitation is the small population of researchers working in both neuroscience (biological) and AI/engineering. A dual background is needed to overcome current obstacles in the field -> we need funding for these programs (e.g. dual Bachelor's/Master's degrees); there is less funding on the neuroscience side.

Funding for skill sharing between research groups would also be great further up the chain.

MGGuiraud commented 3 months ago

For neuromorphic AI we need to understand more about the neuroscience: how the different structures communicate with each other, rather than making wild assumptions about blobs (cortex...).

ABBarron commented 3 months ago

> For neuromorphic AI we need to understand more about the neuroscience: how the different structures communicate with each other, rather than making wild assumptions about blobs (cortex...).

Yes, absolutely! A brain is not an undifferentiated network - far from it! And it's not made of blobs either!

crossley commented 3 months ago

Here we go.... what is consciousness, why did it evolve, and what does it add, if anything, to the system? Do we need and want our artificial systems to have it? Whoo.

Somayeh-h commented 3 months ago

There's a lot of knowledge across various neuroscience-related topics; however, there is still not a lot of collaboration between these fields. The experts in these fields usually have different goals, and what makes collaboration more effective is being able to ask the kinds of questions, and have the kinds of discussions, that lead to finding new research gaps. Sometimes the right literature is hard to find because the same concepts are phrased and termed very differently; an answer might be to use AI-based tools to help.

dracothraxus commented 3 months ago

Ethics around AI decision-making?

MGGuiraud commented 3 months ago

Biggest-impact questions utilising this network: 1/ a platform for experiments across different species (e.g. https://www.phylopsy.org/ or mimosa?) to better understand cognition and build a basis for consciousness.

ookwrd commented 3 months ago

Sharing / co-reviewing research questions and methodologies in diverse cognition research through platforms similar to Phylopsy, Mimosa, etc.

ABBarron commented 3 months ago

> Here we go.... what is consciousness, why did it evolve, and what does it add, if anything, to the system? Do we need and want our artificial systems to have it? Whoo.

Whoo indeed!

HoverflyLab commented 3 months ago

Cultural effects on perceived risks, as well as on which tasks need to be solved. At the moment the field appears very US-centric, but a different cultural lens could have a strong effect on the questions asked.

dracothraxus commented 3 months ago

Cancer to the Cambrian explosion - what can we learn?

MGGuiraud commented 3 months ago

How do we create a big enough question to cover the whole network? What about creating sub-networks between people interested in similar topics (e.g. active vision: Karin/James/Marie/Andrew; communication: Richard/Olaf... etc.)? Sub-question: how does active learning work in biological/artificial systems?

crossley commented 3 months ago

There may be some important conversations / gaps to address surrounding various ethical issues of AI development and placement in human societies.

dracothraxus commented 3 months ago

The Stanford IQ test - what can we learn?

HoverflyLab commented 3 months ago

The sustainability of the field: insects are able to perform amazingly with very limited resources, but robotics and AI do not really appear to take this into account in their development.

MGGuiraud commented 3 months ago

How does the brain simplify perception?

MGGuiraud commented 3 months ago

> Biggest-impact questions utilising this network: 1/ a platform for experiments across different species (e.g. https://www.phylopsy.org/ or mimosa?) to better understand cognition and build a basis for consciousness.

Sharing methodology!

dracothraxus commented 3 months ago

How do we move beyond capitalist considerations?

MGGuiraud commented 3 months ago

> A limitation is the small population of researchers working in both neuroscience (biological) and AI/engineering. A dual background is needed to overcome current obstacles in the field -> we need funding for these programs (e.g. dual Bachelor's/Master's degrees); there is less funding on the neuroscience side.

Summer schools with hands-on workshops where neuroscientists learn some computing skills, and vice versa.

HoverflyLab commented 3 months ago

Skill uplift needed: training in both machine learning and neuroscience/behavior.

ABBarron commented 3 months ago

> Skill uplift needed: training in both machine learning and neuroscience/behavior.

Yes! A skills-based summer school? Not just for grads - for ECRs and oldies like me!

crossley commented 3 months ago

> What learning theory do we have that's animal-like, but more sophisticated than Rescorla-Wagner?

Cool question. The contextual / Bayesian inference models from Sam Gershman and Yael Niv come to mind as the next generation of associative-learning mathematical models. Anne Collins also has some nice work placing the Bayesian mathematical models into the basal ganglia circuit. Greg Ashby (also in attendance), Michael Frank, Kevin Gurney, and I have done a lot of work extending models of action selection in basal ganglia circuits, to the extent that I think they are importantly different from RW math models.
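For reference, the Rescorla-Wagner baseline being improved upon is a one-line update: associative strength V moves toward the outcome lambda in proportion to the prediction error. A minimal sketch (parameter values are illustrative):

```python
def rw_update(V, lam, alpha=0.3, beta=1.0):
    """One trial of Rescorla-Wagner: V moves toward the outcome lam by the
    prediction error (lam - V), scaled by stimulus salience (alpha) and
    learning rate (beta)."""
    return V + alpha * beta * (lam - V)

V = 0.0                      # associative strength starts at zero
for _ in range(10):          # ten reinforced trials (outcome lambda = 1)
    V = rw_update(V, lam=1.0)
# V follows a negatively accelerated curve toward the asymptote lambda = 1
```

The contextual/Bayesian successors mentioned above replace this single scalar error with inference over latent causes, which is exactly where they depart from the RW form.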

HoverflyLab commented 3 months ago

> Skill uplift needed: training in both machine learning and neuroscience/behavior.

> Yes! A skills-based summer school? Not just for grads - for ECRs and oldies like me!

Summer schools are a great idea.

dracothraxus commented 3 months ago

"what does everyone in the room need to do the things they really want to do that they can't currently do?"

gmfricke commented 3 months ago

Can AIs effectively learn appropriate ethics through reinforcement, or do they need rules?

alexjgillett1984 commented 3 months ago

A central database of measures for cognitive domains and intelligence is required. For instance, Nora Newcombe and colleagues have recently pointed out that spatial cognition measures have a number of issues, and that a central database for assessing and refining these is required. A similar point could be made for many cognitive constructs, and this will be a major gap in trying to refine and test artificial intelligence.

kozzy97 commented 3 months ago

A gap in methodology is the measurement problem: how do we operationalise concepts such as memory capacity, attention, intuitive physics, and intelligence, and then build instruments to test them? How can we improve our measurement tools? This seems crucial if we want to compare cognition across diverse species and AI on a common scale (and appraise the capabilities, and consequently the safety, of AI systems).

philosophyalex commented 3 months ago

One thing limiting progress in AI is that it is not surprised by its mistakes, the way animals are. It is not embodied, and doesn't have a dopaminergic system.

Daspraelon commented 3 months ago

"Introspection" doesn't map well onto cognitive science. "Metacognition", on the other hand, is widely used but is sometimes itself contentious. What does it mean to "stand outside oneself" in practical contexts? (I'd lean into the difference between asking it this way and getting drawn into the endless semantic bickering that arises in-field over metacognition.)