digital-dream-labs / vector-animations-raw

Sound... #9

Open moribundant opened 3 years ago

moribundant commented 3 years ago

I have Maya 2019 running the animations, but there is no sound. For example, after loading _anim_attentionlookatdevice.ma, Vector shows up, wiggles back and forth while tapping his lift and blinking his eyes--but no sound. Is there supposed to be sound? Where are the sounds (Wwise)? Were they included in the repository, or are they embedded in the other files?

randym32 commented 3 years ago

Where are the sounds (Wwise)? Were they included in the repository, or are they embedded in the other files?

A few different things here. First, the sound banks are already loaded onto the robot, but I am not sure why playing the animation from Maya did not trigger them.

Second, the original raw sounds and Wwise project -- a direct analog of the vector-animations-raw repo and the Maya project -- are being prepared now for public release. Maya does not use this Wwise project directly.

Third, there is also a repo of the Wwise project results, which is like the vector-animations-build repo; those results are used directly on the robot, placed in the assets folder (as part of the build process or with a script). This repo is also being prepared for public release. Unlike vector-animations-build, where you could hand-edit the parts if you wanted to, you can't realistically edit the sound banks. Both of these should also come with some helper tools and docs.
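
To make the "placed in the assets folder" step concrete, here is a minimal sketch of what such a deployment script might look like. The directory names and the .bnk filter are assumptions for illustration, not the actual DDL build layout.

```python
# Hypothetical sketch: copy built Wwise sound banks into an assets folder.
# The source/destination paths below are placeholders, not the real repo layout.
import shutil
from pathlib import Path

SOUNDBANK_SRC = Path("vector-audio-build/soundbanks")  # assumed location of built banks
ASSETS_DST = Path("assets/sound")                      # assumed assets folder on the target

def deploy_soundbanks(src=SOUNDBANK_SRC, dst=ASSETS_DST):
    """Copy every .bnk file from the build output into the assets folder."""
    dst.mkdir(parents=True, exist_ok=True)
    for bank in src.glob("*.bnk"):
        shutil.copy2(bank, dst / bank.name)
        print("copied", bank.name)

if __name__ == "__main__":
    deploy_soundbanks()
```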

In the original Anki asset pipeline, the sounds and music would start in the Wwise project and get packaged into sound banks in another repo (this is vector-audio-build). These sound banks could then optionally be used by the Maya project to help animators refine new animations, presumably so they could work without waiting for a build. The Anki animators also used a proprietary (i.e. Audiokinetic) plugin with Maya to play these sounds on the computer (not the robot).

The audio project engineers also took the Maya animation results and cross-checked the events from the animations to ensure that the audio sound banks provided for all of them.
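
As a rough illustration of that cross-check (the file names and the one-event-per-line format below are assumptions, not the Anki tooling), the comparison boils down to a set difference between the events the animations trigger and the events the banks define:

```python
# Hypothetical sketch: verify that every audio event referenced by animations
# exists in the sound banks. File names and formats are assumed for illustration.
from pathlib import Path

def load_events(path):
    """Read one event name per line, ignoring blanks and comments."""
    return {
        line.strip()
        for line in path.read_text().splitlines()
        if line.strip() and not line.startswith("#")
    }

anim_events = load_events(Path("animation_audio_events.txt"))  # events triggered by animations
bank_events = load_events(Path("soundbank_events.txt"))        # events the banks provide

missing = anim_events - bank_events
if missing:
    print("Events with no sound bank entry:", sorted(missing))
else:
    print("All animation audio events are covered by the sound banks.")
```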

Randy — Sent unencumbered by the thought process

moribundant commented 3 years ago

I haven't played any Maya scenes on the robot yet; too many other issues... what a mess.

Anyway, this explains why I get an error when clicking the 'refresh audio data' button on the VictorAudio shelf. It goes looking for the 'AnkiMayaWWisePlugIn', which I could not find in the repo. It also explains why the sounds are not embedded within the animation files, and why there are two development shelves.
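
For anyone hitting the same error: a quick way to check whether that plug-in is even available is Maya's standard pluginInfo/loadPlugin commands. The guarded load below is just a sketch for the script editor, not part of the released shelf (and written to also run on the Python 2.7 that ships with Maya 2019):

```python
# Sketch: check for the AnkiMayaWWisePlugIn from Maya's script editor
# before using the VictorAudio shelf's audio features.
import maya.cmds as cmds

PLUGIN = "AnkiMayaWWisePlugIn"

def ensure_wwise_plugin():
    """Return True if the plug-in is loaded, trying to load it if necessary."""
    if cmds.pluginInfo(PLUGIN, query=True, loaded=True):
        return True
    try:
        cmds.loadPlugin(PLUGIN)
        return True
    except RuntimeError:
        cmds.warning(PLUGIN + " is not available; audio playback will be skipped.")
        return False
```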

I played around a bit with Audiokinetic's Audio Lab, which is quite interesting but only really useful in game development with stereo or multichannel sound--hence the company name. I cannot get Wwise (2017.2.10) to load, but apparently that is a widespread problem because of Big Sur. Apple has locked it down so tight that barely anything will run, especially not legacy code.

Thanks!

Harley

randym32 commented 3 years ago

Harley, regarding the Maya shelf: internal tools (at any company) are always a bit rough. There's even a saying for it: why are the cobbler's children barefoot?

In this case, I (along with many others) advocated for releasing the Maya project as-is and letting the passionate hobbyists clean it up, rather than having DDL create a fully polished, complete toolset. Internally, DDL doesn't have Maya experts, and I didn't have Maya to test the scripts on before release. I crudely chopped out the bits that had links to defunct SVN repos and such, but couldn't do a dry run. I'm chuffed that DDL was kind enough to make this available. Since Anki copy-pasted the Cozmo Maya project, upgraded it from 2016 to 2018, and made a few other changes, there are probably a lot of Cozmo-isms. (They initially thought of Vector as Cozmo 2.0.)

Audiokinetic... I'm no expert on getting the tool to run; I got it to work on Windows. It's really overcomplicated. There are a couple of reasons why Audiokinetic is used. First, Vector and Cozmo were created by game designers, and this tool fit with where they initially thought they would go. Second, they wanted plenty of capacity for crafting the sounds on the tool side, plus dynamic, nearly scriptable sound effects that respond to conditions and mental state (just like the animations), and this tool has a lot of features for that. Vector has a single speaker, but the Wwise features used include blending multiple audio channels together, complex sequences with randomization, equalization, parametric sound generation, MIDI, sound effects (such as Vector's speech), and a few others. I think the developer bots also have the Wwise remote debugging feature. That's a fraction of what Wwise can do, but it amounts to a lot more than simple audio file playback.
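
To make the "dynamic, state-driven" idea concrete, here is a plain-Python analogy (deliberately not the Wwise API; the event name and the "stimulation" parameter are made up): an event picks one of several randomized variants, and a game parameter scales playback, which is the shape of control that Wwise events and game parameters give the animators.

```python
# Plain-Python analogy of event + game-parameter driven audio (not the Wwise API).
# Event names and the "stimulation" parameter are illustrative only.
import random

class AudioEventTable:
    def __init__(self):
        # Each event maps to several randomized variants, like a Wwise random container.
        self.events = {
            "Play__Lift_Up": ["lift_up_01.wav", "lift_up_02.wav", "lift_up_03.wav"],
        }
        self.parameters = {"stimulation": 0.5}  # 0.0 (calm) .. 1.0 (excited)

    def set_parameter(self, name, value):
        """Clamp and store a game parameter, analogous to an RTPC."""
        self.parameters[name] = max(0.0, min(1.0, value))

    def post_event(self, event):
        """Pick a variant and derive a volume from the current 'mental state'."""
        variant = random.choice(self.events[event])
        volume = 0.4 + 0.6 * self.parameters["stimulation"]
        return variant, volume

# Usage: a more excited robot plays the same event a bit louder.
table = AudioEventTable()
table.set_parameter("stimulation", 0.9)
print(table.post_event("Play__Lift_Up"))
```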

Randy — Sent unencumbered by the thought process

moribundant commented 3 years ago

Randy,

I have the fortune (some would read this as misfortune!) of unrestricted access to all Autodesk products, and honestly this is just play for me, because what I really need to do is control Vector via the EP with no new behaviors--what it does now is more than adequate. When I saw the Maya stuff released, I had thoughts of doing what I need with it, but that would be colossal overkill with a huge learning curve. As it stands, playing with it has helped me understand how Vector works, and that is not wasted time at all. I also clearly see why Anki used these tools--as you said, they were game designers, and what they made was a 'for real' game actor. I find it fascinating.

Harley