I think that the UX should center around the most interactive parts of listening to speech.
Where music playback might focus on songs, albums, and playlists, we can focus on the semantically richer structures of language rather than attempting to break the audio up into seconds (at least for navigation).
Each potential action is only possible in specific contexts, and so any visual design should reflect those changing contexts while maintaining a sense of continuity.
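To make this concrete, here is a minimal sketch of what context-dependent actions over language units could look like. It assumes a Kotlin-style implementation, and every name in it (`PlaybackContext`, `Action`, `availableActions`, and the specific contexts and actions) is illustrative rather than anything from WikiPod itself.

```kotlin
// Illustrative sketch only: the contexts and actions below are assumptions about what
// "semantically rich structures" and context-dependent actions could look like; none
// of the names come from WikiPod.

/** Where playback currently is within the document. */
sealed interface PlaybackContext {
    object InsideSentence : PlaybackContext
    data class OnHyperlink(val target: String) : PlaybackContext
    object EndOfSection : PlaybackContext
}

/** Navigation is expressed in language units (sentences, sections), not seconds. */
enum class Action {
    PREVIOUS_SENTENCE, NEXT_SENTENCE, FOLLOW_LINK, NEXT_SECTION
}

/** The visual design reflects the changing context by surfacing only the actions that apply. */
fun availableActions(context: PlaybackContext): Set<Action> = when (context) {
    is PlaybackContext.InsideSentence ->
        setOf(Action.PREVIOUS_SENTENCE, Action.NEXT_SENTENCE)
    is PlaybackContext.OnHyperlink ->
        setOf(Action.PREVIOUS_SENTENCE, Action.NEXT_SENTENCE, Action.FOLLOW_LINK)
    is PlaybackContext.EndOfSection ->
        setOf(Action.PREVIOUS_SENTENCE, Action.NEXT_SECTION)
}
```

The point is only that the set of visible actions is a pure function of the playback context, which is what lets the UI change with context while keeping a continuous overall layout.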
More advanced actions are also possible, for example revisiting recently heard hyperlinks. These could be added to a stack that is visible to the user, but maintaining a clear separation between the current position within a document and recently heard items is very important.
So, rather than a static stack, perhaps it is a "sentence" stack: all hyperlinks read in the last x sentences are pushed onto it, with the most recent at the top. This shouldn't be displayed to the user as an unbounded list, though. Instead, the display should be restricted to two items, with a clear indication to the user that the underlying list is not limited to those two, and the interface element for this stack should not change size over time.
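A rough sketch of such a sentence stack, again assuming Kotlin; the names (`SentenceLinkStack`, `HeardLink`) and the five-sentence window are placeholders for whatever x ends up being, not anything from the actual codebase.

```kotlin
// Sketch of the "sentence" stack described above. All names and defaults are illustrative.

/** A hyperlink heard during playback, tagged with the sentence it occurred in. */
data class HeardLink(val title: String, val target: String, val sentenceIndex: Int)

/**
 * Keeps every hyperlink heard in the last [windowSentences] sentences, most recent
 * first, but only exposes [displayLimit] items so the UI element never changes size.
 */
class SentenceLinkStack(
    private val windowSentences: Int = 5,
    private val displayLimit: Int = 2,
) {
    private val links = ArrayDeque<HeardLink>()   // newest at the front
    private var currentSentence = 0

    /** Call whenever playback crosses a sentence boundary. */
    fun onSentenceFinished() {
        currentSentence++
        // Drop links that have fallen out of the sentence window.
        links.removeAll { currentSentence - it.sentenceIndex >= windowSentences }
    }

    /** Call whenever a hyperlink is read aloud. */
    fun onLinkHeard(title: String, target: String) {
        links.addFirst(HeardLink(title, target, currentSentence))
    }

    /** What the UI renders: a fixed-size slice, newest first. */
    fun visibleLinks(): List<HeardLink> = links.take(displayLimit)

    /** Whether the UI should hint that more items exist than are shown. */
    fun hasHiddenItems(): Boolean = links.size > displayLimit
}
```

The fixed `visibleLinks()` slice is what keeps the interface element a constant size, while `hasHiddenItems()` gives the UI a hook for indicating that the underlying stack holds more than the two visible entries.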
The current goal of WikiPod is to be an effective and minimal hypertext reader.
Proposed solutions:
5. All "waving" features stripped out
Open questions: