A hunch: most advanced robots are made/expected to navigate "autonomously" within the operating environment. This is a valid problem and feature, but it's hard (I guess).
Todo:
Instead we should mark the operating environment with small infra-red strobes (a grid of dots on the walls/ceilings/ground?), controlled by a small computer that the robot may or may not access. This computer controls the pattern of strobes, and therefore the movement of the robot. Development cost, runtime resource consumption, and reliability can all improve superlinearly this way, I think. Essentially we create an addressable space, which I think is the reason computers have been able to scale, standardize, and become reliable. Navigation becomes just like a filesystem.
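To make the "navigation becomes a filesystem" idea concrete, here's a minimal sketch (all names hypothetical, not a real API) assuming a fixed grid where each strobe has a `(row, col)` address. The controller lights a path of addresses and the robot just follows the next lit strobe, so navigation reduces to address lookup rather than free-form perception:

```python
# Hypothetical sketch: an addressable strobe grid. Each strobe has a
# (row, col) address; the controller computes a route as a sequence of
# addresses to strobe, much like resolving a path in a filesystem.
from dataclasses import dataclass


@dataclass(frozen=True)
class StrobeAddress:
    row: int
    col: int


class StrobeGrid:
    def __init__(self, rows: int, cols: int):
        self.rows, self.cols = rows, cols

    def path(self, src: StrobeAddress, dst: StrobeAddress) -> list[StrobeAddress]:
        """Simple Manhattan walk: step along rows first, then columns."""
        steps = [src]
        r, c = src.row, src.col
        while r != dst.row:
            r += 1 if dst.row > r else -1
            steps.append(StrobeAddress(r, c))
        while c != dst.col:
            c += 1 if dst.col > c else -1
            steps.append(StrobeAddress(r, c))
        return steps


grid = StrobeGrid(8, 8)
route = grid.path(StrobeAddress(0, 0), StrobeAddress(2, 3))
# The controller would now strobe these addresses in sequence
# and the robot simply follows the lit dots.
```

The robot needs only an IR detector and "go toward the blinking dot" logic; all planning lives in the cheap controller.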
The autonomous navigation part may be developed earlier this way - we'll use the strobes for navigation, but also record the environment. Using this, we can create a relation between the real env <--> actuation/sensing weights in a very reliable manner. Basically, we're creating reliable data that'll be used to train the autonomous-nav models.
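The data-collection side could be as simple as logging tuples of (strobe address as ground-truth position, raw sensor frame, actuation command). A sketch, with all names illustrative:

```python
# Hypothetical sketch: while navigating by strobes, log the ground-truth
# position (known from the strobe address) alongside raw sensor data and
# the actuation command. The result is reliably labeled data for training
# an autonomous-navigation model later.
import json
from dataclasses import dataclass, asdict


@dataclass
class NavRecord:
    strobe_address: tuple[int, int]  # ground-truth location label
    sensor_frame: list[float]        # e.g. raw range/camera features
    actuation: dict                  # command the robot executed


class NavLogger:
    def __init__(self):
        self.records: list[NavRecord] = []

    def log(self, address, frame, command):
        self.records.append(NavRecord(address, frame, command))

    def dump(self) -> str:
        """Serialize the labeled dataset for offline model training."""
        return json.dumps([asdict(r) for r in self.records])


logger = NavLogger()
logger.log((2, 3), [0.4, 1.2, 0.9], {"v": 0.5, "omega": 0.0})
```

Because the label comes from the strobe grid rather than from another model, the supervision signal stays clean.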
Pros:
Inexpensive - since training compute and sophisticated sensors are optional.
Helps us towards the goal (of autonomous nav)
Easy to commoditize, solve daily human problems
Approachable to people who are not so familiar with CV, ML etc - existing software engineers can do it
Best - it'll help in transforming the home in a way PCs did as opposed to mainframes.
Cons:
This may (or may not) hinder CV research and generally "real time" perception research.
It's known that explicitly solving part of a problem that can be solved through deep learning actually prevents creation of good models in the long run.
Data produced may be too much of a "happy path". Then again, one could just turn off usage of the strobes during collection.