crossley / ARCANI

WHAT discussion: What are the knowledge gaps #2

Open crossley opened 3 months ago

crossley commented 3 months ago

Please share your feedback on this group discussion by replying in this thread. Your contributions are important as we aim to identify gaps in knowledge, pinpoint collaborators who can help us bridge these gaps, and explore how sustainable collaboration within the ARCANI network can address these issues. We will incorporate the thoughts and opinions expressed here into a white paper focused on these critical questions.

gmfricke commented 3 months ago

How do we remove hallucinations? Is removing all hallucinations desirable? (cf. Bruno's talk)

jarmarshall commented 3 months ago

How do we reduce training and compute requirements for AI models to be more comparable to animal levels of experience?

ABBarron commented 3 months ago

Animal brains don't start de novo. How do instincts evolve, and how do instincts scaffold learning?

jarmarshall commented 3 months ago

What learning theory do we have that's animal-like, but more sophisticated than Rescorla-Wagner?
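
For reference, a minimal sketch of the Rescorla-Wagner baseline mentioned here (illustrative only; the function name and parameter values are my own choices):

```python
# Rescorla-Wagner: each present cue's associative strength V updates
# by a *shared* prediction error: dV_i = alpha_i * beta * (lambda - sum(V)).

def rescorla_wagner_step(V, cues_present, lam, alpha=0.3, beta=1.0):
    """One trial. V maps cue -> associative strength; lam is the
    outcome magnitude (e.g. 1 if the US is present, 0 if absent)."""
    prediction = sum(V[c] for c in cues_present)   # summed prediction
    error = lam - prediction                       # shared prediction error
    for c in cues_present:                         # only present cues update
        V[c] += alpha * beta * error
    return V

# Usage: simple acquisition of a tone -> food association
V = {"tone": 0.0, "light": 0.0}
for _ in range(20):
    V = rescorla_wagner_step(V, ["tone"], lam=1.0)
# V["tone"] approaches 1.0; effects like blocking fall out of the summed error
```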

ABBarron commented 3 months ago

Backpropagation through time is biologically impossible. And recurrent ANNs are really hard to train. But animal brains seem to solve this. And I don't know how. It keeps me awake at night.
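
One family of proposed workarounds: local three-factor rules with per-synapse eligibility traces, in the spirit of e-prop (Bellec et al., 2020). The sketch below is my own toy illustration (the learning signal is a placeholder), not anything from this thread:

```python
import numpy as np

# Each synapse keeps an eligibility trace built only from locally
# available activity; a learning signal then gates the weight update.
# No unrolling through time, unlike backpropagation through time.
rng = np.random.default_rng(0)
n_in, n_rec = 3, 5
w = rng.normal(scale=0.1, size=(n_rec, n_in))  # input -> recurrent weights
alpha = 0.9                                    # leak of the recurrent units
h = np.zeros(n_rec)                            # hidden state
trace = np.zeros_like(w)                       # per-synapse eligibility traces
lr = 1e-2

for t in range(100):
    x = rng.normal(size=n_in)                  # input at time t
    h = alpha * h + np.tanh(w @ x)             # leaky recurrent update
    trace = alpha * trace + x[None, :]         # low-pass filtered presynaptic activity
    L = rng.normal(size=(n_rec, 1))            # placeholder; in practice derived
                                               # from task error or reward
    w += lr * L * trace                        # three-factor local update
```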

jarmarshall commented 3 months ago

Do we even need a brain to learn, e.g. slime mould?

dracothraxus commented 3 months ago

- Terminology and semantics are fuzzy and vary across the fields
- Robotics is undergoing a transformation, driven by modern AI
- Embodiment affects cognition
- Mechanical bio-inspiration
- How do we assess progress in both fields? Generality versus specificity? Task-specific versus general?

ookwrd commented 3 months ago

How do models and representations constrain cognition in humans, animals, or other diverse intelligences?

dracothraxus commented 3 months ago

Do modern AI + robotics developments need an understanding of causality, and if so, to what extent?

ABBarron commented 3 months ago

> What learning theory do we have that's animal-like, but more sophisticated than Rescorla-Wagner?

Yeah - and it worries me that all nodes in ANNs are identical and all run this loosely biological learning rule. Neurons are not identical!

beebee44 commented 3 months ago

How does cognition intervene to generate behavioural control?

jarmarshall commented 3 months ago

Brains evolved to solve movement - cognition scaffolds on top of this, re-using the same circuits?

dracothraxus commented 3 months ago

What is the benefit of biomorphic design?

RMenary commented 3 months ago

> Brains evolved to solve movement - cognition scaffolds on top of this, re-using the same circuits?

Agreed, the recruitment of sensory-motor capacities for symbolic cognition is not well understood.

dracothraxus commented 3 months ago

It's impossible to be impossible @ABBarron

ookwrd commented 3 months ago

The Universal Cognitive Translator: do we have the tools to reverse-engineer cognitive representations across species, to recreate the cognition (e.g. intuitive physics) or phenomenology (e.g. what it feels like to be a fruit fly) of one species in another? For example, can we recreate the sense of physics that a (particular) chimp has and create a virtual simulation of it?

dracothraxus commented 3 months ago

How do we collect evidence that the systems we create or study have the capabilities that we ascribe to them?

And what is the burden of evidence in different fields and contexts?

dracothraxus commented 3 months ago

Is it a conscious decision or a reflex?

ookwrd commented 3 months ago

Can we go beyond teaching human language to animals, and create interfaces for animals to speak in their own « language » while another species listens in and hears it in its own « language »? (Note: « language » scare-quoted here.)

gmfricke commented 3 months ago

There is a serious gap in goal alignment: machine learning engineers generally develop algorithms to solve very narrow problems. There is largely no awareness of neuroscience or cognitive science in the machine learning algorithm literature. A basic education in neuroscience among ML developers and ML among cognitive scientists would perhaps be beneficial - and addressable through collaborative networking. ML engineers are largely focused on how to speed up Amazon transactions, not on implications for general AI.

jarmarshall commented 3 months ago

How do we find new learning algorithms from the brain, rather than imposing our current algorithms on our current understanding of the brain, e.g. seeing support vector machines in the insect antennal lobe?

RMenary commented 3 months ago

Gap: The link between sensory-motor intelligence and symbolic - verbal and quantitative - thinking.

crossley commented 3 months ago

To what extent are biological systems capable of interleaving new behaviours with old without suffering catastrophic interference, and can these mechanisms imbue artificial systems with similar traits? This would seem to become even more essential as artificial systems are placed in long-term continual learning roles.

Some biological systems are very good at processing and making decisions from visual information from complex scenes with cheap sensors. How? And can these mechanisms be further mapped to artificial systems for cheaper, more effective systems?

A big question / cornerstone is making good artificial systems that are also efficient.
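
On the catastrophic-interference point, a minimal sketch of one existing engineering answer, Elastic Weight Consolidation (Kirkpatrick et al., 2017). All numbers below are toy values of my own, illustrative only:

```python
import numpy as np

# EWC: after learning task A, store the weights and a per-weight
# importance estimate (the Fisher information); while learning task B,
# penalise moving the weights that mattered for task A.
theta_star = np.array([1.0, -0.5])   # weights after learning task A
fisher = np.array([5.0, 1e-4])       # importance of each weight for task A
lam = 100.0                          # strength of the consolidation penalty

def task_b_grad(theta):
    """Placeholder gradient of a toy task-B loss pulling theta to a target."""
    return theta - np.array([0.0, 2.0])

theta = theta_star.copy()
for _ in range(5000):
    grad = task_b_grad(theta) + lam * fisher * (theta - theta_star)
    theta -= 1e-3 * grad

# Weight 0 (important for task A) barely moves; weight 1 adapts to task B.
print(theta)   # approximately [1.0, 1.97]
```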

fgashby commented 3 months ago

Humans have multiple functionally and anatomically distinct networks that can learn in qualitatively different ways. This gives humans a flexibility advantage over many AI systems. One important knowledge gap about human learning is how these various learning systems coordinate, how the knowledge they acquire is integrated, and how control is passed from one system to another.

The human brain does not rely on backpropagation. One way it overcomes this limitation is to use a variety of qualitatively different learning algorithms that are each ideally suited for a different type of learning. For example, a prominent proposal that was popularized by Kenji Doya is that synaptic plasticity follows Hebbian learning rules in cortex, reinforcement learning rules in the basal ganglia, and supervised learning rules in the cerebellum.
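
For concreteness, the three rule families in that proposal, written as toy updates for a single linear unit y = w · x (my illustration, not from the comment):

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.01):
    """Cortex: unsupervised, correlation-driven (fire together, wire together)."""
    return w + lr * y * x

def td_update(w, x, r, v, v_next, lr=0.01, gamma=0.9):
    """Basal ganglia: reinforcement learning from a scalar TD error."""
    delta = r + gamma * v_next - v          # reward prediction error
    return w + lr * delta * x

def supervised_update(w, x, y, target, lr=0.01):
    """Cerebellum: error-driven learning against an explicit teaching signal."""
    return w + lr * (target - y) * x

# The knowledge gap flagged above is not these rules themselves, but how
# the systems using them coordinate and trade control.
w = np.zeros(3); x = np.array([1.0, 0.0, 1.0])
w = supervised_update(w, x, y=w @ x, target=1.0)
```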

dracothraxus commented 3 months ago

@gmfricke there are initiatives like ELLIS in Europe that try to remedy this - we could potentially model part of ARCANI on some of that

ABBarron commented 3 months ago

> There is a serious gap in goal alignment: machine learning engineers generally develop algorithms to solve very narrow problems. There is largely no awareness of neuroscience or cognitive science in the machine learning algorithm literature. A basic education in neuroscience among ML developers and ML among cognitive scientists would perhaps be beneficial - and addressable through collaborative networking.

Totally! And on the flipside I'm embarrassed how little I know about and understand LLMs.

ookwrd commented 3 months ago

Do we have an agreement on a framework for models that extract and map goals from various species (natural or artificial)?

dracothraxus commented 3 months ago

Accessibility to understanding behaviour, e.g. hallucinations in artificial and natural systems.

dracothraxus commented 3 months ago

Do we hold AI to higher standards (risk versus reward)? How do we approach that?

crossley commented 3 months ago

Biological systems are a better source of inspiration for ensembles of experts / AI as layers of systems. Similar idea to Greg's comment above about multiple learning and memory systems.

dracothraxus commented 3 months ago

Transparency and new tools and standards are likely needed for the "new" AI - can the network help with this?

stenti commented 3 months ago

Thoughts from our table:

- How do we get from basal cognition to higher-order cognition? Is navigation the first sign of basal intelligence?
- There are so many definitions of intelligence. How can we pull these concepts of different types of intelligence across to artificial intelligence, e.g. creative intelligence?
- Generalised testing doesn't work for humans and can't compare intelligence across cultures, let alone animal intelligence; why would it work for AIs?
- How are we defining intelligence across fields? Should we expand the definition of intelligence beyond general intelligence?
- Frustration: expecting animals to learn using human language or human-centric tasks enforces non-natural learning; we are limiting ourselves this way.
- Artificial intelligence models can act as if they were a specific species, speeding up the exploration of data we already have and bypassing some ethical issues. However, AI used this way cannot adapt to changes; is the gap in knowledge the ability to learn novel things without specific training?
- Deep learning models are clearly not how animals are solving the problem; are we being limited in our exploration by the tools we already have?

crossley commented 3 months ago

Knowledge gap: How multiple systems / modules are coordinated and controlled in learning and behaviour.

dracothraxus commented 3 months ago

Sustainability perspective - insects?

ookwrd commented 3 months ago

Is the Internet of Animals back yet? Can we make it VR-augmented this time? (DI had a nice grant project on this.)

ABBarron commented 3 months ago

> To what extent are biological systems capable of interleaving new behaviours with old without suffering catastrophic interference, and can these mechanisms imbue artificial systems with similar traits? This would seem to become even more essential as artificial systems are placed in long-term continual learning roles.
>
> Some biological systems are very good at processing and making decisions from visual information from complex scenes with cheap sensors. How? And can these mechanisms be further mapped to artificial systems for cheaper, more effective systems?
>
> A big question / cornerstone is making good artificial systems that are also efficient.

How is it that animals can avoid catastrophic forgetting?

ookwrd commented 3 months ago

> How do we remove hallucinations? Is removing all hallucinations desirable? (cf. Bruno's talk)

I love this, especially in the light of Sam’s talk on what I’d call « cognitive noise-canceling ». Let’s call them confabulations though, please? Hallucinations would be something else :)

stenti commented 3 months ago

Thoughts from our table 2:

- Difficulty in implementing the models we build in existing hardware: are we being limited by the technology we have? Neuromorphic tech may be a solution, but it is still very far removed from any real brains.
- Brains, bodies, and behaviours have coevolved within an environment; it is important to look at intelligence within all of these.
- Deep-learning black-box AI can do amazing things but is unintelligible; biomimetic AI is more of an open box but currently less capable.

dracothraxus commented 3 months ago

Don't be too functionally focused: modelling animals / biology for serendipitous, unpredictable discoveries (ducks and hides from some risk-averse grant funding agencies).

jarmarshall commented 3 months ago

Embodied cognition - don't try to solve everything with the brain; solve with the body as well.

jarmarshall commented 3 months ago

☝️ Is it even possible to be intelligent if you don't have a body?

ABBarron commented 3 months ago

> How do we remove hallucinations? Is removing all hallucinations desirable? (cf. Bruno's talk)

> I love this, especially in the light of Sam’s talk on what I’d call « cognitive noise-canceling ». Let’s call them confabulations though, please? Hallucinations would be something else :)

Agreed! But it's interesting that we call a weird thing an LLM does a "hallucination". That says something about how we perceive them.

crossley commented 3 months ago

Hierarchical control systems engineering, or other fields, as inspiration for research programs that might shed light on the multiple-system learning and control problem.

dracothraxus commented 3 months ago

Geographical and cultural diversity in the network of people shaping this - ARCANI to play a role in diversifying?

beebee44 commented 3 months ago

AI commonly focuses on cognition/intelligence but the selection problem suggests that important constraints are generated by motivational/emotional concerns. Some consideration of this issue and involvement of groups interested in these functions would be important.

ABBarron commented 3 months ago

> What is the benefit of biomorphic design?

Good question! Learning speed? Operating costs? We think of biological brains as metabolically expensive, but they are cheap and efficient compared to an LLM.
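
A rough back-of-envelope to make that concrete; every number below is an order-of-magnitude assumption of mine, not a measured figure:

```python
brain_w = 20          # commonly cited power draw of a human brain, in watts
gpu_w = 400           # one datacentre GPU under load (assumed)
n_gpus = 1000         # assumed modest training cluster
days = 30             # assumed training-run length

brain_kwh = brain_w * 24 * days / 1000           # ~14 kWh
cluster_kwh = gpu_w * n_gpus * 24 * days / 1000  # ~288,000 kWh

print(cluster_kwh / brain_kwh)  # ~2e4: one run vs. ~20,000 brain-months
```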

stenti commented 3 months ago

Thoughts from our table 3:

- Hacking the mind: parasitizing a system in order to change its behaviour towards the parasite's own goal. Fungi do this to ants, and it is a risk as AI develops. Can you hypnotize an ant? Can you hypnotize a machine? The barrier is that we don't understand how to hijack cognitive processes.
- Creatively breaking a system in a consistent way can give you insight into that system.
- Generally encoded knowledge: we need a framework to extract and compare mathematics across representations, not just human ones. (needs revision)

crossley commented 3 months ago

> AI commonly focuses on cognition/intelligence but the selection problem suggests that important constraints are generated by motivational/emotional concerns. Some consideration of this issue and involvement of groups interested in these functions would be important.

Spot on!

jarmarshall commented 3 months ago

Chris Reid: many mammals can learn to walk within a few hours of birth, others take years. What is it about the brains/innate structures in these mammals, and can we take principles from these to enhance learning speed in artificial models/bodies?

dracothraxus commented 3 months ago

And what is the meaning / significance of time and learning in artificial and natural contexts (especially with federated, offline, high-throughput learning)?