EnCiv / undebate

Not debates, but recorded online video Q&A with candidates so voters can quickly get to know them, for every candidate, for every election, across the US.

Display Transcription #218

Closed epg323 closed 4 years ago

epg323 commented 4 years ago

This ticket covers front-end development for displaying the transcription of recorded candidate-conversation content to viewers. The initial idea is to have Agenda and Transcript as two tabs adjacent to the recorded video.

ddfridley commented 4 years ago

Here are the UI designs from @tianchili11

https://xd.adobe.com/view/31159d3e-8ed2-4966-416b-b234103484d0-4724/

https://xd.adobe.com/view/628227f4-05fd-46f1-548c-83a661b5e7e3-14b3/

poornaraob commented 4 years ago

06/24: Working with Luis. Need help from the team. @djbowers @ddfridley @MrNanosh @luiscmartinez, Thong Pham to get together on 06/25 at 2:00 p.m. to code. Branch name and directories to be provided before this call. Meeting link: https://meet.google.com/hpe-jxin-kww

ddfridley commented 4 years ago

@epg323 @MrNanosh @luiscmartinez Esaul mentioned the problem of when to start playing the transcription. It does sound tricky, but it's interesting, so here's an idea (today's idea).

The Agenda component could expose two methods to the parent (CandidateConversation): play(speaker, round) and pause(), where speaker is 'moderator', 'audience1', 'audience2', ..., 'human'.

pause() just pauses everything.

The Agenda methods need props.participants and this.participants from CandidateConversation; this.participants[speaker].element.current is the DOM video element.

In CandidateConversation, every time it calls play() we'll have it call Agenda's play(), and the same for pause().

Then in Agenda, when playback starts, look at element.current.currentTime, figure out which word you should be highlighting, and set a timer to come back when it's time to highlight the next word. When the timer fires, look at currentTime again and recompute which word to highlight. You need to re-check currentTime because the video might not start playing right away, it might stall while playing, and maybe one day we'll implement rewind-10-seconds or something like that.
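A minimal sketch of that timer loop (my own hypothetical helper names; the `{ word, startTime, endTime }` shape of the timings is an assumption, not the repo's actual transcription schema):

```javascript
// Hypothetical sketch of the currentTime-driven highlighter described above.
// words: [{ word, startTime, endTime }] sorted by startTime.
function startWordHighlighting(videoElement, words, setHighlightedIndex) {
  let timer = null
  const tick = () => {
    // always re-read currentTime: the video may stall, pause, or be rewound
    const t = videoElement.currentTime
    const index = words.findIndex(w => t >= w.startTime && t < w.endTime)
    setHighlightedIndex(index) // -1 means no word is active right now
    const next = words.find(w => w.startTime > t)
    if (next) timer = setTimeout(tick, (next.startTime - t) * 1000)
  }
  tick()
  return () => clearTimeout(timer) // call this on pause/ended
}
```

The returned cancel function is what the pause/ended handlers would call to stop pending timeouts.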

I haven't exposed methods from functional components, but I've done it a lot with class components. I'm hoping there's a way to do it.
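For what it's worth, React does support this in functional components: useImperativeHandle together with forwardRef lets a function component expose methods on a ref. The method surface itself can be a plain object, sketched below with hypothetical names (`highlighter` stands in for whatever starts and stops the word timer):

```javascript
// Hypothetical sketch of the play/pause surface Agenda could expose.
// In a functional Agenda component this object would be returned from
// React's useImperativeHandle(ref, () => makeAgendaController(...)).
function makeAgendaController(participants, highlighter) {
  return {
    // speaker: 'moderator', 'audience1', ..., 'human'; round: agenda round index
    play(speaker, round) {
      const videoElement = participants[speaker].element.current
      highlighter.start(videoElement, speaker, round)
    },
    pause() {
      highlighter.stop() // pause just pauses all
    },
  }
}
```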

ddfridley commented 4 years ago

Here's a great start on how to expose a method to a parent from a child: https://stackoverflow.com/questions/37949981/call-child-method-from-parent

Their mindset, that exposing a method from a React component is a sign there is a better way, is a view-only mindset. The HTML5 video element is an example of something that exposes methods used by the parent. It makes perfect sense for the Agenda component to work in a similar fashion, IMHO. And they offer a great example of how to do it.

ddfridley commented 4 years ago

@epg323 @MrNanosh @luiscmartinez Here is idea #2. It does not require methods to be exposed; instead it uses events from the video element, though we might miss a few edge conditions at the start.

This is the spec for the video element: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/video - scroll down for the events.

Pass thisParticipants={this.participants} to Agenda.


For each video DOM element in thisParticipants[participant].element.current, skipping 'human' since there's nothing to do for that one, attach listeners along these lines (showTranscription, stopTranscription, clearPendingTimeout, and unhighlightAllWords are placeholders):

    Object.keys(thisParticipants).forEach(participant => {
      if (participant === 'human') return // nothing to transcribe for the local user
      const videoElement = thisParticipants[participant].element.current

      const startTranscription = e => { // e is the event, but so far not using it
        // muted means this element is in listening mode: don't play transcriptions
        if (videoElement.muted) return stopTranscription()
        let round = -1
        if (videoElement.src)
          round = thisParticipants[participant].speaking.indexOf(videoElement.src)
        else if (videoElement.srcObject)
          round = thisParticipants[participant].speakingObjectURLs.indexOf(videoElement.srcObject)
        // else something is wrong - it's not playing anything
        if (round < 0) return // what's being played doesn't match what's expected
        showTranscription(participant, round)
        // as above: read videoElement.currentTime, figure out which word to
        // highlight, then setTimeout to come back when it's time for the next word
      }

      videoElement.addEventListener('play', startTranscription)
      videoElement.addEventListener('playing', e => { clearPendingTimeout(); startTranscription(e) })
      videoElement.addEventListener('pause', e => clearPendingTimeout())
      videoElement.addEventListener('ended', e => { clearPendingTimeout(); unhighlightAllWords() })
    })
I'm happy to talk this through - I've done a lot of event work and find it interesting.

poornaraob commented 4 years ago

07/01: Backend work is complete for this task. Make a list of must-haves and nice-to-haves on 07/02 during the hackathon @epg323 @MrNanosh @ddfridley

ddfridley commented 4 years ago

@MrNanosh

In agenda/index.js I see:

      <TabbedContainer
        tabs={[
          {
            name: 'Agenda',
            contents: <AgendaItem round={round} prevSection={prevSection} nextSection={nextSection} agenda={agenda} />,
          },
          { name: 'Transcript', contents: <Transcription transcriptionJson={participants.moderator} /> },
        ]}
      />

And I want to clarify the use case: for each question, which is a panel of the agenda, there are several speakers, and there will be several transcriptions. The speakers are Object.keys(participants), and each participant may have a transcription. Depending on who's speaking, you should render the transcription for that speaker.

How do you know who's speaking? That is where this.participants[participant].element.current comes in - see idea #2 above. The 'play' event handler needs to set who the current speaker is, as well as start the timer that highlights the words. We can talk this through if you want.

Also, I think a layered approach is good: one layer renders the words for a speaker, and a parent component handles the addEventListener stuff and whatever else. But if you just want to get started with the rendering part, participants.audience1 is going to be the one with a transcription from the new Iota that Esaul just made.

MrNanosh commented 4 years ago

Yeah, I was semi-aware of this. Here is my expectation of the order of things:

  1. the participants prop changes in candidate-conversations
  2. a hook is triggered in agenda from the prop change.
  3. the hook changes the transcriptionJson prop by taking participants[participant].element.current and using it to find the corresponding transcription.
  4. the prop changes in Transcription (component) and this results in the contents of the transcription tab changing

Let me know if this is more or less correct.
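Step 3's lookup could be as simple as scanning for the element that is actually playing. A hypothetical sketch (the `transcription` field name is an assumption about where the transcription lives on each participant):

```javascript
// Hypothetical sketch of step 3: find the participant whose video element
// is currently playing and return that participant's transcription.
function currentTranscription(participants) {
  for (const name of Object.keys(participants)) {
    const el = participants[name].element && participants[name].element.current
    // a playing, unmuted element marks the current speaker
    if (el && !el.paused && !el.muted) return participants[name].transcription
  }
  return null // nobody is speaking right now
}
```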

poornaraob commented 4 years ago

07/08: Work in progress. Dana needs an example of the end-result transcription to work on - @epg323 @luiscmartinez

tianchili11 commented 4 years ago

https://xd.adobe.com/view/10d56feb-0e4d-49df-772d-f0f2dc06d4c3-66e4/ Please refer to this link for typography information. Let me know if @MrNanosh needs more specific information. Thanks!

poornaraob commented 4 years ago

07/15: David to work on this task this week @ddfridley. A task list explaining the changes to transcript display on candidate conversation is to be added and updated.

poornaraob commented 4 years ago

Per team feedback, the transcription should be highlighted by sentence instead of by word.
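One way to get sentence-level highlighting from word timings is to merge the word spans into sentence spans and feed those to the same currentTime-based timer. A hypothetical sketch, assuming each word carries its end-of-sentence punctuation:

```javascript
// Hypothetical sketch: merge word timings into sentence timings so the
// existing highlighter can run at sentence granularity.
function groupIntoSentences(words) {
  // words: [{ word, startTime, endTime }] in playback order
  const sentences = []
  let current = null
  for (const w of words) {
    if (!current) current = { text: '', startTime: w.startTime, endTime: w.endTime }
    current.text += (current.text ? ' ' : '') + w.word
    current.endTime = w.endTime
    if (/[.?!]$/.test(w.word)) { // sentence ends at ., ?, or !
      sentences.push(current)
      current = null
    }
  }
  if (current) sentences.push(current) // trailing words without punctuation
  return sentences
}
```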