neurodream / IDEAS: container for misc (open source) ideas
Adversarial attack on a natural neural network to make it temporarily misclassify its current state (awake vs. asleep)
#6 · Open · opened 1 year ago by neurodream

neurodream commented 1 year ago
via transcranial or sensory stimulation
might help lucid dream research, but bypasses the need for veridical insight
to identify the system to stimulate, one needs to contrast awake/asleep x classify_awake/classify_asleep (also: subjective confidence levels)
the highest probability of obtaining this contrast might be in a sleep lab, focusing on (false) awakenings:
incorporations of the sleep lab environment make false awakenings more likely
subjects can be trained to focus on awakenings and to question their awake state --> classifying self as "asleep" more likely to obtain
dream recall might be better when memory can be focused on "did I dream about waking up?"
note: awake x classify_asleep may not be attained naturally - but maybe with very low confidence levels
the number of LRs could indicate confidence (e.g. up to 5 LRs)
how to completely ensure that the misclassification stays temporary?
what could possibly go wrong...?
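The contrast above can be laid out as a 2x2 design (actual state x self-classification). A throwaway sketch; the "attainable" flags just restate the note's expectation that awake x classify_asleep may not occur naturally, and are assumptions, not data:

```python
from itertools import product

# 2x2 contrast of actual state vs. self-classification.
states = ["awake", "asleep"]
cells = {}
for actual, classified in product(states, states):
    cells[(actual, classified)] = {
        "label": f"{actual} x classify_{classified}",
        # awake x classify_asleep may not be attained naturally,
        # or only at very low confidence (few LR signals):
        "attainable": not (actual == "awake" and classified == "asleep"),
    }

for cell in cells.values():
    print(cell["label"], "- attainable:", cell["attainable"])
```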
illustration of adversarial attacks in AI: [image]
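For the AI analogy: a minimal sketch of the Fast Gradient Sign Method (FGSM), the standard adversarial-attack construction, on a toy logistic-regression "awake vs. asleep" classifier. All weights, inputs, and the epsilon here are made up for illustration; this shows only the AI-side idea, not any stimulation protocol.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy classifier: p(awake | x) = sigmoid(w . x + b); weights are arbitrary.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def classify_awake(x):
    return sigmoid(w @ x + b)

# For a linear model the gradient of the "awake" logit w.r.t. the input
# is just w, so an FGSM-style perturbation that pushes the output toward
# "asleep" subtracts eps * sign(w) from the input.
def fgsm_toward_asleep(x, eps=1.5):
    return x - eps * np.sign(w)

x = np.array([1.0, -1.0, 0.5])   # a state the toy model calls "awake"
x_adv = fgsm_toward_asleep(x)

print(classify_awake(x))      # high p(awake) before the perturbation
print(classify_awake(x_adv))  # low p(awake) after the perturbation
```

Note the attack is temporary by construction: it perturbs the input, not the model, so classification returns to normal once the perturbation is removed.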