Hubs-Foundation / hubs

Duck-themed multi-user virtual spaces in WebVR. Built with A-Frame.
https://hubsfoundation.org
Mozilla Public License 2.0

Encourage avatars to stand closer together #3132

Open johnshaughnessy opened 4 years ago

johnshaughnessy commented 4 years ago

Observation: People tend to stand farther apart in Hubs than they do in real life (before social distancing).

Significance: The fact that people stand so far apart is related to the difficulty of finding an appropriate audio falloff curve for a group or a space. The farther apart people tend to stand in groups, the harder it is to create curves that allow break-out sessions without requiring people to move hundreds of meters away, and the less intuitive the audio behavior in a space will be.

Why I think people do this: People want to see everyone in the group. Right now, the easiest way to do this is to stand farther back.

Alternatives:

Suggestion: I like the idea of incorporating a third person or wide angle mode in some way.

Perhaps an "idle camera mode" could work like the lobby camera, positioned behind your avatar's head. It could pan and rotate to let you passively get a better view of your surroundings, and it would also make it more likely that you'll stand closer to others.

Perhaps when you are close to other people, your view automatically transitions in some way that makes it easier to see the people around you.

Perhaps when people are close to you but just slightly out of view, they show up in little "waypoint markers" on the sides of your screen, so that you can still "see" them even though you aren't looking directly at them.
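The waypoint-marker idea boils down to projecting a neighbor's position into screen space and clamping it to the viewport edge when it falls outside the view. A minimal, library-free sketch (function name is hypothetical, not Hubs API; in practice the input would come from three.js's `Vector3.project(camera)`):

```javascript
// Sketch: clamp an off-screen avatar's projected position to the viewport edge.
// `ndc` is the avatar position in normalized device coordinates (-1..1 on both
// axes after projection); `behind` is true when the point is behind the camera.
function edgeMarkerPosition(ndc, behind) {
  let { x, y } = ndc;
  if (behind) {
    // Points behind the camera project mirrored; flip so the marker
    // lands on the side the avatar is actually on.
    x = -x;
    y = -y;
  }
  const onScreen = !behind && Math.abs(x) <= 1 && Math.abs(y) <= 1;
  if (onScreen) return { visible: false, x, y };
  // Scale the vector so its larger component touches the viewport edge.
  const scale = 1 / Math.max(Math.abs(x), Math.abs(y));
  return { visible: true, x: x * scale, y: y * scale };
}
```

The returned NDC coordinates would then be mapped to pixels, e.g. `(x * 0.5 + 0.5) * canvasWidth`, to place the marker element.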


camelgod commented 4 years ago

I've tried increasing the FoV, but we found the result very fisheye-y: it doesn't do the 3D models, avatars, and art in our rooms justice, and it would potentially be terrible in VR.

Just some notes from my experience in Second Life: movement is discouraged by really clunky controls, but fast travel to groups of others is encouraged by the radar map's click-to-teleport, etc.

People don't often move around then, but rather stay in a group and use the additional camera controls to move the camera separately from the avatar. This keeps voice volume and distance consistent while letting you focus and orbit the camera on specific people (or on the environment, if the people are boring). I think that is important.

In SL you have a "focus camera" that works pretty much like focusing objects in three.js, coupled with a third-person mode.

Expanding the VERY experimental third-person mode would be very welcome for my use cases, and it's definitely something I want to explore myself regardless of Hubs' direction, because I think it's very useful for exactly this type of socialisation. If, in addition, you could have a camera cursor to manually focus and orbit on whatever you click, or on the nearby active speaker (I guess that could be done automatically, but it might switch back and forth too fast), it would be really neat!

We already got the "Focus avatar" functionality just a couple of weeks ago, so this should not be difficult to implement, and it might be something I explore myself (I'll post here again if anything useful comes of it). A dedicated cursor/mode that lets you focus on any surface or object could be a very interesting use of that functionality, especially coupled with third person so it doesn't feel like you are "separating" the camera too much: moving from third person to focusing an object is, in my opinion, less dramatic than moving from first person to focusing an object, because focusing an object already feels somewhat like third person.
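For reference, the core of an SL-style focus camera is just spherical coordinates around the focused target, which is also how three.js's OrbitControls places its camera before calling `lookAt`. A rough sketch (the function name is mine, not Hubs or three.js API):

```javascript
// Sketch: place a camera on a sphere around a focus target, orbit-camera style.
// `yaw` and `pitch` are in radians, `radius` in meters; the camera would then
// lookAt(target) to face the focused object or avatar.
function orbitCameraPosition(target, yaw, pitch, radius) {
  return {
    x: target.x + radius * Math.cos(pitch) * Math.sin(yaw),
    y: target.y + radius * Math.sin(pitch),
    z: target.z + radius * Math.cos(pitch) * Math.cos(yaw),
  };
}
```

Dragging the mouse would adjust `yaw`/`pitch`, and the scroll wheel `radius`; swapping `target` between an object and your own avatar is what makes the third-person-to-focus transition feel mild.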

johnshaughnessy commented 4 years ago

@camelgod I agree - A third-person mode (that actually works) would be really nice.

I would want a third person mode to communicate a person's interest. It is convenient that, when most people are in first person most of the time, it is obvious where everyone is directing their attention: wherever their heads are pointed. This does not hold when the browser is not in focus (the user has alt-tabbed) or during our current "inspect"/"focus" mechanic for media and avatars, and I'd love for us to do something for those cases too. This also matters for setting correct expectations in the room about who is within earshot or in view at a given time.

I would like a third person mode to allow for "sims-like" navigation. We already have a nav-mesh so implementation shouldn't be too difficult, and I think many users would feel more comfortable with a point-and-click (or touch) interface for directing the avatar rather than having to operate WASD, on-screen joysticks, or pinch gestures.
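The movement half of point-and-click navigation can be as simple as a per-frame step toward the clicked point (path-finding across the nav-mesh aside). A hypothetical sketch, assuming the click has already been raycast onto the nav-mesh to get a world-space destination:

```javascript
// Sketch: advance the avatar toward a clicked nav-mesh destination each frame.
// `pos` and `dest` are world-space positions on the nav-mesh (y omitted, since
// the nav-mesh constrains height); `speed` is m/s, `dt` the frame delta in s.
function stepToward(pos, dest, speed, dt) {
  const dx = dest.x - pos.x;
  const dz = dest.z - pos.z;
  const dist = Math.hypot(dx, dz);
  const step = speed * dt;
  // Snap to the destination when the remaining distance fits in one step.
  if (dist <= step) return { x: dest.x, z: dest.z, arrived: true };
  return {
    x: pos.x + (dx / dist) * step,
    z: pos.z + (dz / dist) * step,
    arrived: false,
  };
}
```

A real implementation would clamp each intermediate position back onto the nav-mesh (as the existing teleport code does) rather than move in a straight line through obstacles.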

Camera rules for third person are often made scene-aware in important ways (either cutting out geometry that would occlude your character, or restricting motion in ways that flatter the level design). I doubt we'd want to go deep on this aspect, because in-room content is so dynamic and customizable.

I would like to experiment with a "streamer-focused" viewing mode that allows full-screen, UI-less capture in one window and a control panel in a second window. This could allow setting up multiple camera angles/shots, tracking, etc. I could see this being a value-add for a Twitch streamer who wants to interact with a few audience members in Hubs while a moderator operates the "Hubs streamer mode" UI to decide what is shown on the Twitch stream.

camelgod commented 4 years ago

@johnshaughnessy I never even considered that click-to-move / point-and-click functionality could be an option. That would also make things much more user-friendly, considering our users still need guidance using arrow keys and swipe motions (we have a very diverse user base). We have also gotten questions about why users cannot see themselves in the world. That would be absolutely awesome.

I am a little bit sceptical about directing attention outside of the user's control, such as automatically turning toward speakers, but there are some interesting approaches there as well. Building something like this into a streamer mode could be a nice in-between way to test things out.

A World of Warcraft-style third-person "chase camera" would be neat (rotate your character and orbit the camera by right-click dragging, orbit the camera alone by left-click dragging), coupled with a "focus" button to lookAt other avatars.

If I somehow get time, I want to try to make a basic prototype that scraps the first-person viewing rig in favour of a basic model like the networked avatars have, then uses the "lookAt" functionality as a kind of third-person mode. If you click on another avatar, the camera does lookAt(otherAvatar) instead and your camera target moves (with orbit controls?). Once you start moving or hit escape, the camera snaps (or smoothly transitions, like in World of Warcraft) back to your avatar, into the standard lookAt(ownAvatar) third-person perspective. You should also be able to lookAt yourself with orbit controls, to see the front of your avatar without rotating it.
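The "smoothly transitions back" part could be a framerate-independent exponential ease of the camera's lookAt target between the focused avatar and your own. A sketch with made-up names (not actual Hubs code):

```javascript
// Sketch of the WoW-style snap-back: while focused, the camera target is the
// other avatar; once you move or hit escape, the goal becomes your own avatar
// and the target eases toward it. `k` controls snap speed (larger = faster).
function updateCameraTarget(current, goal, k, dt) {
  // 1 - e^(-k*dt) gives the same convergence rate regardless of framerate.
  const t = 1 - Math.exp(-k * dt);
  return {
    x: current.x + (goal.x - current.x) * t,
    y: current.y + (goal.y - current.y) * t,
    z: current.z + (goal.z - current.z) * t,
  };
}
```

Calling this every frame with `goal` switched between `otherAvatar` and `ownAvatar` produces the smooth transition in both directions; an instant snap is just the limit of large `k`.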

The only thing I am wondering is how this will impact creation and collaboration tools, such as moving objects, since there will be a lot going on in the same "frame of focus".

djay commented 3 years ago

I wonder, if you have eye-gaze tracking via webcam (https://github.com/mozilla/hubs/issues/3689), whether you could use that to look around when on a laptop. It might be too disorientating, but it would be an interesting experiment: i.e. look to the left of your laptop screen and your head turns left one click.

misslivirose commented 3 years ago

https://arxiv.org/pdf/2101.05300.pdf - this study explored interpersonal proximity in virtual environments. It's relevant to this topic, but it might also contradict the assumption that people in Hubs stand too far away from one another.

johnshaughnessy commented 3 years ago

@misslivirose Good call out -- My initial assumptions may be wrong. Still, it's hard for me to reconcile the results of that study with the impression I get from my own (limited) experience in Hubs rooms.

(This is obviously not quantitative evidence, but here's my anecdotal screenshot of a group of ~10 people: [image]

It's not uncommon for me to see a group this size spread out even more.)

djay commented 3 years ago

I had another discussion with some members of the meetup I arranged, and they brought up the same point: having no peripheral vision makes it hard to hold a conversation in a group unless it is one-to-one, directly in front of you, and it feels uncomfortable to have no awareness of who is behind you. They specifically mentioned that a third-person mode would be better. I personally think that might make you miss non-verbal cues from headset users who can look directly at you. Perhaps you could instead have a radar (top-down) or angled view in a small HUD that gives you extra information about what's happening around you? Or perhaps some "side mirrors" that give you smaller, wide-angle views on the left and right?
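A top-down radar HUD only needs each avatar's offset rotated into your own local frame, so dots stay "up = ahead of you" as you turn. A small sketch (names and conventions are mine, not Hubs code):

```javascript
// Sketch: express another avatar's position in your avatar's local frame for
// a top-down radar. `heading` is your yaw in radians; the returned x is your
// left/right axis and z your forward/back axis.
function radarPosition(self, other, heading) {
  const dx = other.x - self.x;
  const dz = other.z - self.z;
  // Rotate the world-space offset by -heading to undo your own rotation.
  return {
    x: dx * Math.cos(heading) - dz * Math.sin(heading),
    z: dx * Math.sin(heading) + dz * Math.cos(heading),
  };
}
```

The radar widget would then scale these coordinates to its pixel radius and clamp distant avatars to the rim, much like the Second Life minimap.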

djay commented 3 years ago

coupled with a "focus" button to lookAt other avatars.

One idea would be to map the 1, 2, 3, 4, 5 keys to turn your head towards the people standing near you: i.e. 1 turns your head to the closest person on your left, 5 to the closest on your right, and 3 to the closest in front of you. Perhaps if there are more than 5 people it could be a little smart and pick the person in that direction who spoke last? Maybe it's a press-and-hold thing, so that your head snaps back to forward on release and there is no confusion about which way you are going to move.
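The key mapping described above amounts to sorting nearby avatars left-to-right by signed bearing relative to your forward direction. A hypothetical sketch (not Hubs code; `bearing` would come from something like `Math.atan2(localX, localZ)` on the avatar's position in your local frame):

```javascript
// Sketch: assign keys 1..5 to nearby avatars ordered left-to-right.
// Each neighbor carries a signed `bearing` in radians relative to your
// forward direction (negative = to your left, positive = to your right).
function assignKeys(neighbors) {
  return neighbors
    .slice() // don't mutate the caller's array
    .sort((a, b) => a.bearing - b.bearing) // leftmost first
    .slice(0, 5); // index 0 -> key "1" (leftmost) ... index 4 -> key "5"
}
```

With this ordering, key 3 naturally falls on whoever is nearest to straight ahead; breaking ties by "spoke last" would just be a secondary sort key.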