tomtetobe / remuco

Automatically exported from code.google.com/p/remuco

Add 'capabilities' conversation between client and adapter #69

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
Remuco is extensible, and things like an Okular adapter and a video-player adapter have different semantics, which right now we get around by decreeing that, e.g., the "mute" key is assigned to a particular use case.
It would be possible to add early messages between the client and the adapter, via the server, to publish the client's capabilities and assign them to the commands in the adapter.
If we could use the client to probe the phone for joysticks and other buttons outside [0-9#*], things like DVD navigation (see comment in issue 47) would be fairly straightforward to add.

This idea still needs some SERIOUS fine-tuning.
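As a starting point for that fine-tuning, here is a rough sketch of what such an early capability exchange could look like. All message fields and the negotiate() helper are hypothetical illustrations, not part of the actual Remuco protocol:

```python
# Hypothetical capability handshake (all names are illustrative and not
# part of the actual Remuco protocol).

# Client -> server -> adapter: what input hardware the client offers.
client_caps = {
    "keys": ["0-9", "#", "*", "CLEAR"],
    "joystick": True,  # 4-way navigation pad available
}

# Adapter -> server -> client: which abstract commands it understands.
adapter_caps = {
    "commands": ["play", "pause", "seek",
                 "nav-up", "nav-down", "nav-select", "fullscreen"],
}

def negotiate(client, adapter):
    """Suggest a default binding of adapter commands to client inputs."""
    binding = {}
    # Prefer the joystick for navigation commands, if the client has one.
    if client.get("joystick"):
        for cmd in ("nav-up", "nav-down", "nav-select"):
            if cmd in adapter["commands"]:
                binding[cmd] = "joystick:" + cmd.split("-", 1)[1]
    # Everything else falls back to the ordinary keys, first come first served.
    free_keys = (k for k in client["keys"])
    for cmd in adapter["commands"]:
        if cmd not in binding:
            binding[cmd] = next(free_keys, None)
    return binding
```

Users would of course still be able to override whatever default binding the negotiation produces.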

Original issue reported on code.google.com by igor.con...@gmail.com on 27 Nov 2009 at 2:21

GoogleCodeExporter commented 9 years ago
You're right, navigation-like control currently needs overloading of existing controls - which is bad.

I also thought about this a while ago and came up with the idea of adding another method to player adapters, something like navigate(x), where x is UP, DOWN, SELECT, ENTER etc. Actually the client does not really need to be probed for capabilities, as users are free to bind whatever keys are available on their devices. So we would just need to add some more actions on the client, and users could bind them to existing keys as they like (though there could be a default binding to joystick events, when available) - but I don't see a reason why client and server should exchange client capabilities here.
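A minimal sketch of how such a navigate(x) method could look on the adapter side. The class layout, constants, and page-turning behavior are illustrative only, not Remuco's actual adapter API:

```python
# Illustrative sketch of a navigate() method on player adapters.
# Names and structure are hypothetical, not the real Remuco API.

UP, DOWN, LEFT, RIGHT, SELECT, ENTER = range(6)

class PlayerAdapter:
    """Base class; adapters that support navigation override navigate()."""

    def navigate(self, direction):
        raise NotImplementedError

class OkularAdapter(PlayerAdapter):
    """Document viewer: up/down navigation naturally maps to page turning."""

    def __init__(self):
        self.page = 1

    def navigate(self, direction):
        if direction == DOWN:
            self.page += 1
        elif direction == UP and self.page > 1:
            self.page -= 1
        return self.page
```

The point is that the same abstract directions mean "page turning" in Okular but could mean "DVD menu navigation" in a video player, without either meaning leaking into the client.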

For touch screen devices, I can think of a special screen users can activate. This screen shows N-E-S-W arrow buttons etc. and users can push these buttons to issue navigation commands.

What do you think?

Original comment by obensonne@googlemail.com on 2 Dec 2009 at 8:49

GoogleCodeExporter commented 9 years ago
The probing thing is because it seems the buttons are hard-coded in the client (like 3 is toggle fullscreen, 0 is mute, etc.), and it doesn't make sense to "mute" the Okular adapter, nor does it make sense to "rate" a movie (at least for my mplayer needs thus far), or even to fullscreen a music player (does anyone actually use those spectrogram animations? to me they're annoying as hell :P ).

Anyway, what I mean is that this initial conversation would say, for instance, (1) the client has something really close to WASD controls which can be assigned to navigation, (2) there is an extra 'clear' button that can be assigned to "mute" or "pause" or "blank screen" etc., and when it comes to (3) the numeric keypad, the adapter's capabilities could provide some kind of mask ("I can do fast-forwards/rewinds, which mean horizontal motion, page up/downs for vertical motion, yadda yadda yadda"), which the client would assign to available keys in a best-fit way, if there is such a thing.
This beats the scenario where the client is flooded with capabilities from the adapter and the user is supposed to answer a bunch of "Which key should trigger function f()?" questions.
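To make the "mask plus best-fit" idea concrete, here is a toy sketch, assuming (hypothetically) that the adapter advertises semantic axes and the client ranks the physical inputs that can express each axis. All names here are made up for illustration:

```python
# Toy best-fit assignment of an adapter's semantic "mask" onto client
# inputs. All identifiers are hypothetical, not Remuco protocol names.

# Adapter: which function each semantic axis drives.
adapter_mask = {
    "horizontal": "seek",  # fast-forward / rewind
    "vertical": "page",    # page up / page down
}

# Client: physical inputs able to express each axis, best candidate first.
client_inputs = {
    "horizontal": ["joystick-x", "keys-4-6"],
    "vertical": ["joystick-y", "keys-2-8"],
}

def best_fit(mask, inputs):
    """Give each advertised axis its most-preferred unused input."""
    assignment = {}
    used = set()
    for axis, function in mask.items():
        for candidate in inputs.get(axis, []):
            if candidate not in used:
                assignment[function] = candidate
                used.add(candidate)
                break
    return assignment
```

The user never sees a questionnaire; at most they tweak the result afterwards.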
The "conversation phase" also makes the adapter/client interface more robust, in the sense that more functionality means extra messages being exchanged instead of extra code being written in the server and the client, if I understand remuco's inner workings correctly :)

I am aware that these changes could shift remuco's initial focus on media players to something broader like LIRC, and that the Okular presenter is really a special case among the adapters, so feel free to limit what should be discussed for now and I'll play along.

Also, the touchscreen idea seems nice.

Original comment by igor.con...@gmail.com on 3 Dec 2009 at 1:16

GoogleCodeExporter commented 9 years ago
There is already an initial kind of capabilities exchange between server and client at the beginning of a session. The server tells the client which information may be shown (e.g. repeat, shuffle, volume, ...) and which things may be controlled. The client sends information about screen size and some other options.
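As an illustration of that session-start exchange, the two messages might carry data along these lines (field names and values are made up for the example; the real protocol may differ):

```python
# Illustrative session-start messages; field names are hypothetical.

# Server -> client: what may be shown and what may be controlled.
server_hello = {
    "show": ["repeat", "shuffle", "volume"],
    "control": ["play", "pause", "volume"],
}

# Client -> server: display constraints and options.
client_hello = {
    "screen": (176, 208),  # width x height in pixels
    "options": {"thumbnails": True},
}
```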

Buttons on the client are not hard-coded. On the protocol level, there are just message IDs (e.g. ctrl-volume, request-playlist). On clients there is a default binding of keys to actions (which cause certain messages to be sent), but these relations may be adjusted freely by users. For instance, I use the hardware volume keys on my device for volume control and the camera button for toggling fullscreen mode.

I think the server side really should not bother with mappings of functions to keys (or types of keys, e.g. joystick). There should be an abstract description of what may be controlled on the client (e.g. regular play, pause, volume, ... or menu navigation), and the client should suggest an according default key setup.

But finally you're right, the current design and its initial focus on audio players is not easily adaptable for controlling other types of applications, as seen with the TVtime and Okular adapters.

As said, to some extent there already is a capabilities conversation. OTOH it could be more flexible. I think it's best I set up a wiki page for some brainstorming and drafting (I also have some other ideas for bigger changes I would like to discuss).

Original comment by obensonne@googlemail.com on 6 Dec 2009 at 10:19