Currently SimpleWebRTC seems to require a video stream before it can do screen sharing. Is this intentional? Screen sharing should be possible without a webcam installed. The trivial part is making sure localStream exists before using it (which also requires a change to Peer in webrtc.js).
The more difficult part is how to emit readyToCall properly. My current method is an ugly hack: if no local stream is found, SimpleWebRTC emits 'noLocalStream' instead of 'localStream' in startLocalVideo. Instead of listening for readyToCall from testReadiness, my app listens for the {no}localStream emit and connectionReady; once it has received both, that is equivalent to receiving readyToCall. The problem is that I have to store the connectionReady/{no}localStream state to know when both have been received, which probably goes against the philosophy of using emits.
I think this would be a nice feature to have; if anybody has guidelines on a proper implementation I'd love to hear them.