Some speculations related to machine learning #7

Open ShuyanHuang opened 5 years ago

ShuyanHuang commented 5 years ago

Excellent work! I am not familiar with this field, so I just want to share something I find interesting. I don't know how neuroscientists interpret the temporal fMRI signals while someone is trying to learn something, but from my perspective this looks like an artificial neural network tuning its parameters when given a batch of training data. In this paper you give a measure of cognitive effort, but in machine learning theory we don't measure how much effort the computer puts into the task. Instead, we assume the computer will exhaust all of its computing resources, so as the task becomes more complex, only the runtime and sample complexity increase. By that standard the computer's way seems more efficient, since the human brain idles some of its resources when dealing with easy tasks. Yet we usually regard the human brain as "super advanced" in terms of its learning algorithm, so there must be something machine learning theorists can learn from human brains about how to model and tune "learning effort".

bermanm commented 5 years ago

This is interesting. I was chatting with my graduate student, Omid, about your question. These are just half-baked ideas, so take what I say with a grain of salt. For the computer there is no future, only the present. The human brain, in contrast, needs to consider the future and save resources for potential future tasks. If computers had some built-in mechanism that accounted for the lifespan of their microprocessors, maybe they would also idle to extend their lives rather than go full tilt on every task. It seems adaptive for humans to save energy when possible, because who knows what will happen next.

AlexanderTyan commented 5 years ago

Wow, this could be a game-changer: an ML training algorithm (or any computer algorithm, for that matter) that is anticipatory, strategically saving resources when it predicts it will max out its available computing power at some point in the future. Whoever figures this out is going to be legendary. If anyone has a theoretical CS background, I'd love to hear whether anything has been done in this area.

It almost seems like there would have to be (at least) two modeling processes running concurrently. First, the training of the ML model as we know it. Second, a predictive model of anticipated resource use that feeds inputs into the "virtual environment" in which the ML training happens (i.e., it limits the allocation of system resources to the training), and that also consumes outputs from resource-usage monitoring of the training to adjust its own predictions. I imagine this parallel predictive model would itself have to be relatively cheap to run to justify this kind of computation paradigm and to keep the setup costs low.
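For concreteness, here is a toy sketch in Python of what that two-process setup might look like. Everything in it (the `ResourceForecaster` class, the load numbers, the throttling rule) is invented for illustration, not a real API: the forecaster is deliberately cheap (an exponential moving average), and the training loop shrinks its own batch size whenever the forecast approaches a fixed budget.

```python
import random
import time

# Toy sketch only: all names and numbers here are hypothetical,
# invented to illustrate the "two concurrent models" idea above.

class ResourceForecaster:
    """Cheap one-step-ahead forecast of resource demand, implemented
    as an exponential moving average of observed load."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.forecast = 0.0

    def update(self, observed_load):
        # Blend the newest observation into the running forecast.
        self.forecast = (self.alpha * observed_load
                         + (1 - self.alpha) * self.forecast)
        return self.forecast


def train_step(batch_size):
    """Stand-in for one ML training step; returns a fake load reading
    proportional to the batch size."""
    time.sleep(0.01)  # pretend to do some work
    return batch_size * random.uniform(0.8, 1.2)


def anticipatory_training(steps=100, max_batch=64, load_budget=50.0):
    forecaster = ResourceForecaster()
    batch_size = max_batch
    for _ in range(steps):
        observed = train_step(batch_size)          # process 1: training
        predicted = forecaster.update(observed)    # process 2: forecasting
        if predicted > load_budget:
            # Anticipated overload: "idle" by halving the next batch.
            batch_size = max(1, batch_size // 2)
        else:
            # Headroom available: ramp back up toward full tilt.
            batch_size = min(max_batch, batch_size + 4)
    return batch_size
```

The reason to pick something as simple as a moving average is the last constraint above: the side model's own cost has to stay negligible next to the training it is regulating, otherwise the paradigm doesn't pay for itself.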

Y'all, this issue needs more upvotes, not least because the very topic of human design taking inspiration from nature is fascinating.