Describe this pull request. Link to relevant GitHub issues, if any.

Joint with ada_feeding#154.

In service of #112, this PR implements a WebRTC connection for streaming video from the robot to the client, following "Option A" from #112. Specifically, it launches a headless browser on lovelace to serve as one end of the P2P connection. Although this should not be necessary (see Option B in #112), it was the most straightforward option to implement, given that it is the approach stretch_teleop_interface used.
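Although the actual contents of start_robot_browser.js are not reproduced here, a minimal sketch of the idea, assuming Playwright (consistent with the `npx playwright install` setup step below) and using an illustrative page URL and `ROBOT_PAGE_URL` placeholder, could look like this:

```js
// Hypothetical sketch of a robot-side headless browser launcher (not the
// actual start_robot_browser.js): open the web app's robot-facing page in
// headless Chromium so that page can act as one end of the WebRTC connection.
const { chromium } = require('playwright')

async function main() {
  // Launch headless Chromium with a flag that auto-grants camera/microphone
  // permissions, so getUserMedia succeeds without a human clicking "Allow".
  const browser = await chromium.launch({
    headless: true,
    args: ['--use-fake-ui-for-media-stream']
  })
  const page = await browser.newPage()
  // ROBOT_PAGE_URL is an illustrative placeholder for wherever the robot-side
  // page of the web app is served (e.g., the dev server from `npm run start`).
  await page.goto(process.env.ROBOT_PAGE_URL ?? 'http://localhost:3000')
  // Keep the process (and therefore the page and its peer connection) alive.
  await new Promise(() => {})
}

main().catch((err) => {
  console.error(err)
  process.exit(1)
})
```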
As such, this PR changes the web app's launch procedure on lovelace. Whereas previously one only had to launch the web app and the web_video_server, one now has to run:

1. The web app: `npm run start` (or `cd build; python3 -m http.server 3000` if you have already run `npm run build`).
2. The WebRTC signaling server: `node --env-file=.env server.js` (a rough sketch of such a server is shown after this list).
3. The robot browser: `node --env-file=.env start_robot_browser.js`

(NOTE: The robot browser must be the last of the web app commands to run!)
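For context, here is a minimal sketch of a signaling server along these lines, assuming a socket.io-based relay of offers, answers, and ICE candidates (the event names, room payload, and `PORT` variable are illustrative assumptions, not necessarily what server.js actually uses):

```js
// Hypothetical signaling-server sketch (not the actual server.js): it only
// relays WebRTC session descriptions and ICE candidates between the robot
// browser and the operator's browser; the video itself flows peer-to-peer.
const http = require('http')
const { Server } = require('socket.io')

const httpServer = http.createServer()
const io = new Server(httpServer, { cors: { origin: '*' } })

io.on('connection', (socket) => {
  // 'join' and 'signal' are illustrative event names.
  socket.on('join', (room) => socket.join(room))
  // Forward offers/answers/ICE candidates to everyone else in the room.
  socket.on('signal', ({ room, data }) => {
    socket.to(room).emit('signal', data)
  })
})

// With `node --env-file=.env server.js` (Node 20+), PORT can be set in .env.
httpServer.listen(process.env.PORT ?? 8080)
```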
Explain how this pull request was tested, including but not limited to the below checkmarks.

Setup:
[x] ada_feeding#154
[x] `cd` into `feedingwebapp`
[x] `npm install --legacy-peer-deps`
[x] `npx playwright install` (Note: you may have to run `node --env-file=.env start_robot_browser.js` first)
Testing:
The below tests were run with the real web app running (see the launch procedure above) and the real perception launchfile, but with the dummy motion action servers.
[x] Load the app on a smartphone (Amal's iPhone). For every screen with a video (i.e., BiteSelection, DetectingFace, VideoModal), move your hand/face in front of the camera and verify there is minimal lag.
[x] Go through 3 consecutive bites end-to-end and verify that the video stays up-to-date.
[x] Importantly, verify that BiteSelection works and the app receives the response from the action server.
[x] Load the app on a desktop computer (weebo) at the same time. Verify that there is no noticeable lag on the smartphone compared to weebo.
Before creating a pull request
[x] Format React code with `npm run format`
[N/A] Format Python code by running `python3 -m black .` in the top-level of this repository
[x] Thoroughly test your code's functionality, including unintended uses.
[x] Fully test the responsiveness of the feature as documented in the Responsiveness Testing Guidelines. If you deviate from those guidelines, document above why you deviated and what you did instead.
[N/A] Consider the user flow between states that this feature introduces, consider different situations that might occur for the user, and ensure that there is no way for the user to get stuck in a loop.
Before merging a pull request
[x] Squash all your commits into one (or Squash and Merge)