The LMP now provides sufficient functionality so that we can get a better version of guided tours on the road. Maybe even one that deserves the name.
"All" we need to do is have the front-end iteratively call these two LMP APIs:
1. "Given I eventually want to learn about X, what do I need to learn about now?" (Leaf Concepts, docs)
2. "Given I need to learn about this concept, which learning objects are suitable for it?" (LO recommendation, docs)
You call the first API, pick some or all of the returned leaf concepts, feed them to the second API, and have the learner work through the recommended learning objects. Repeat until the target concept itself becomes learnable. You can even visualise the dependency tree if you want. Since both APIs already take the current state of the learner model into account, this would be a huge improvement over the guided tours we have today.
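To make the loop concrete, here is a minimal sketch of the front-end logic. Everything in it is an assumption: the function names `leaf_concepts` and `recommend_los` are hypothetical stand-ins for the two LMP endpoints, and the toy prerequisite graph and learner-model state replace real HTTP calls.

```python
# Hypothetical sketch -- names, response shapes, and data are assumptions,
# not the actual LMP API. Stubs stand in for the real HTTP calls.

PREREQS = {  # toy concept graph: concept -> its prerequisites
    "calculus": ["limits", "functions"],
    "limits": ["functions"],
    "functions": [],
}

LEARNING_OBJECTS = {  # toy LO catalogue per concept
    "functions": ["video:intro-to-functions"],
    "limits": ["quiz:limits-basics"],
    "calculus": ["course:calculus-1"],
}

MASTERED = {"functions"}  # stand-in for the learner model's state


def leaf_concepts(goal: str) -> list[str]:
    """Stub for the 'Leaf Concepts' call: concepts the learner can
    tackle now, i.e. those whose prerequisites are all mastered."""
    leaves: list[str] = []

    def walk(concept: str) -> None:
        if concept in MASTERED:
            return
        unmet = [p for p in PREREQS.get(concept, []) if p not in MASTERED]
        if unmet:
            for p in unmet:
                walk(p)
        elif concept not in leaves:
            leaves.append(concept)

    walk(goal)
    return leaves


def recommend_los(concept: str) -> list[str]:
    """Stub for the 'LO recommendation' call."""
    return LEARNING_OBJECTS.get(concept, [])


def guided_tour(goal: str) -> list[str]:
    """The iterative front-end loop: fetch leaf concepts, collect their
    recommended LOs, mark them mastered, repeat until the goal is reached."""
    tour: list[str] = []
    while goal not in MASTERED:
        for concept in leaf_concepts(goal):
            tour.extend(recommend_los(concept))
            MASTERED.add(concept)  # pretend the learner worked through them
    return tour
```

With the toy data above, `guided_tour("calculus")` first surfaces "limits" (since "functions" is already mastered), then "calculus" itself, yielding the LO sequence `["quiz:limits-basics", "course:calculus-1"]`. In a real front-end, the `MASTERED.add(...)` step would instead be the learner actually completing the LOs and the learner model updating itself.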