microbit-carlos closed this issue 3 years ago.
For the sound effects (not to be implemented yet) we were thinking something along these lines:
audio.play(Sound.HAPPY)
print(type(Sound.HAPPY))  # output: <class 'MicroBitSound'>
print(Sound.HAPPY)        # output: Sound('happy')

new_sound = Sound(["010230660078300445876...", "010230660078300445876..."])
print(new_sound)  # output: Sound(["010230660078300445876...", "010230660078300445876..."])

new_sound = Sound([
    SoundEffect(length=600, startFrequency=100, endFrequency=500, interpolation=Sound.Interpolation.LINEAR, steps=12),
    SoundEffect(startFrequency=100, endFrequency=500, interpolation=Sound.Interpolation.LINEAR, steps=12),
])
print(new_sound)  # output: Sound(["010230660078300445876...", "010230660078300445876..."])
audio.play(new_sound)

try:
    new_sound = Sound("01023066007830044004400888102301280")
except ValueError as e:
    print("The Sound string constructor only works with built-ins like 'happy'. For a sound with a single SoundEffect, use a list of one item.")

new_sound = Sound([radio_recvd_msg])

new_sound_effect = SoundEffect("01023066007830044004400888102301280")
print(new_sound_effect)  # output: SoundEffect("01023066007830044004400888102301280")

new_sound_effect = SoundEffect(length=600, startFrequency=100, endFrequency=500, interpolation=Sound.Interpolation.LINEAR, steps=12)
print(new_sound_effect)  # output: SoundEffect("01023066007830044004400888102301280")
audio.play(new_sound_effect)
This is just a sketch of what the API would look like for Sound(); all the SoundEffect() stuff is still open.
As the sound expressions are very "microbitish" (and depend on CODAL) we wanted Sound and SoundEffect to be part of the microbit module, so you'd have microbit.Sound.HAPPY. However, having to do an external import (import audio) to play them feels a bit odd:
from microbit import *
import audio
audio.play(Sound.HAPPY)

import microbit
import audio
audio.play(microbit.Sound.HAPPY)
Adding a play function to the microbit module can be quite confusing as well, especially since most users do from microbit import *. A possible option would be to bring the audio module inside microbit:
from microbit import *
audio.play(Sound.HAPPY)

import microbit
microbit.audio.play(microbit.Sound.HAPPY)

from microbit import audio as microbit_audio
import audio
audio == microbit_audio  # True
@dpgeorge what do you think?
@dpgeorge I'm sorry, I missed this one in my earlier tagging. I think this should be in the milestone if at all possible, because it has a big impact on the reading of scripts that use the sounds.
Yes, I can implement this Sound class as part of the milestone. But (although it's easy to implement) I'm not sure about bringing the audio module inside the microbit module... there's still going to be speech and music as separate modules, yet they are also "audio" related.
Okay, in that case let's place Sound (and in the future SoundEffect) inside the audio module.
> in that case let's place Sound (and in the future SoundEffect) inside the audio module.

Except that would clash a bit with SoundEvent being in the microbit module...
On second thoughts it might make sense to have a microbit.audio entity (it'd be a "nested module", but that's just an implementation detail). It mirrors microbit.display (and accelerometer/compass) in that audio is built-in functionality, not an add-on. Then speech and music modules are add-ons that use microbit.audio to produce particular kinds of sounds.

OTOH, radio is a separate module even though it's a built-in feature. And audio has the class audio.AudioFrame... would that be moved to microbit.AudioFrame?

There are arguments for and against having microbit.audio vs a top-level audio module (like it is currently). I'd lean towards the latter because 1) it's already like that; 2) it cleanly separates sound output to a separate module (but sound input is via microbit.microphone and microbit.SoundEvent).
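For illustration, here is the parallel being drawn, written out as usage code; the microbit.audio nesting and the microbit.Sound class are still hypothetical at this point in the discussion:

import microbit

microbit.display.show(microbit.Image.HAPPY)  # display is built-in functionality
microbit.audio.play(microbit.Sound.HAPPY)    # audio would sit alongside it

# speech and music would remain separate add-on modules that use the
# built-in audio output under the hood:
import speech
speech.say("hello")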
> in that case let's place Sound (and in the future SoundEffect) inside the audio module.
>
> Except that would clash a bit with SoundEvent being in the microbit module...

Right, but SoundEvent is for the microphone, which is inside microbit.
> It mirrors microbit.display (and accelerometer/compass) in that audio is built-in functionality, not an add-on. Then speech and music modules are add-ons that use microbit.audio to produce particular kinds of sounds.

This was my reasoning: with v2, audio is now "built-in". The other advantage is keeping the sounds that represent the "microbit personality" inside the microbit module (microbit.Sound.HAPPY), in the same module that can play them.
> OTOH, radio is a separate module even though it's a built-in feature.

Yeah, this is one of the things I would change if we started from zero, but it's a bit late to change that.

> And audio has the class audio.AudioFrame... would that be moved to microbit.AudioFrame?

I would leave that inside the audio module; we still need to keep it available at the top level as audio.AudioFrame, and audio == microbit.audio would not hold if microbit.audio.AudioFrame wasn't available.
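To spell out that last point: if audio and microbit.audio are the same module object, AudioFrame can stay where it is and is automatically reachable through both names. A small check, assuming that aliasing:

import audio
import microbit

print(audio is microbit.audio)                        # True if they are one module object
print(audio.AudioFrame is microbit.audio.AudioFrame)  # then this follows for free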
> There are arguments for and against having microbit.audio vs a top-level audio module (like it is currently). I'd lean towards the latter because 1) it's already like that; 2) it cleanly separates sound output to a separate module (but sound input is via microbit.microphone and microbit.SoundEvent).

So to clarify, you would prefer to keep audio outside of the microbit module, and add Sound and SoundEffect to the top-level audio module?
> This was my reasoning: with v2, audio is now "built-in". The other advantage is keeping the sounds that represent the "microbit personality" inside the microbit module (microbit.Sound.HAPPY), in the same module that can play them.

Yes, these are good reasons to make it microbit.audio.
> OTOH, radio is a separate module even though it's a built-in feature.
>
> Yeah, this is one of the things I would change if we started from zero, but it's a bit late to change that.

Well... it could be changed... easy enough to make it backwards compatible by aliasing (same as audio). So if that's a possible change in the future then it would be better to make it microbit.audio now.
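A rough sketch of what that aliasing means in plain Python terms (illustrative only; the firmware implements this natively, but the observable effect is the same): the one module object is simply registered under both names, so existing imports keep working.

import sys
import types

radio = types.ModuleType("radio")    # stand-in for the existing radio module
microbit = types.ModuleType("microbit")
microbit.radio = radio               # expose the nested name: microbit.radio
sys.modules["radio"] = radio         # keep the top-level import working
sys.modules["microbit"] = microbit

import radio as top_level
from microbit import radio as nested
print(top_level is nested)           # True: one module, two names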
> So to clarify, you would prefer to keep audio outside of the microbit module, and add Sound and SoundEffect to the top-level audio module?

Yes, that is what I meant in my previous comment, but I honestly don't mind how it's done.

I feel that having it as microbit.audio is a little easier to teach: you can do microbit.audio.play(microbit.Sound.HAPPY) without any knowledge of imports, and then learn about the music and speech modules as advanced features.
Also, since set_volume() lives in microbit it does feel a bit strange to have audio as a separate module...
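For example, with the wildcard import most lessons use, volume control and playback would then sit naturally in the same namespace (this assumes the proposed microbit.audio and microbit.Sound, which were still under discussion here):

from microbit import *

set_volume(128)          # volume control already lives in the microbit module
audio.play(Sound.HAPPY)  # playback from the same namespace, no extra import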
I like and now prefer microbit.audio after reading this :)
In practice I don't believe most students will see much difference, except in what most people consider 'boilerplate'; the code will read audio.play(Sound.HAPPY) whether we put audio in a separate module or inside microbit. The errors will be different if people forget to import stuff, and I think that's yet another reason to put audio in microbit: less likelihood of confusion over a missing module.
Yes. Also, taking into consideration that a lot of the new content will be using audio now, it saves users from constantly having to add a second import.
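To make the "different errors" point concrete (exact wording varies between MicroPython builds): with a top-level audio module, forgetting the extra import only fails when the call is reached; with audio inside microbit, there is no second import to forget.

# Option A: audio as a separate top-level module.
from microbit import *
audio.play(Sound.HAPPY)  # NameError at runtime if "import audio" was forgotten

# Option B: audio inside microbit, so the single wildcard import is enough.
from microbit import *
audio.play(Sound.HAPPY)  # works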
Let's put audio inside microbit (audio == microbit.audio), and Sound inside microbit as well (microbit.Sound.HAPPY).
Ok, great decision!
The microbit.Sound class was added in 4971558442428b7a87aae3559ff4527e084c1168, and the audio module was aliased as microbit.audio in 523ec02f16c8533d7312ae9e09c33a6a9f56ee59.
One can now do:
from microbit import *
display.show(Image.HAPPY)
audio.play(Sound.HAPPY)
Right now we are sending strings; we can wrap them in a new class that we can then expand, once we add SoundEffect, so users can make their own sounds.
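As a rough illustration of that wrapping idea only (hypothetical code, not the implementation that was eventually written): a thin class around the existing expression strings, whose constructor can later also accept SoundEffect objects.

# Hypothetical sketch: wrap the current sound-expression strings in a class
# so the constructor can later grow to accept SoundEffect objects too.
class Sound:
    def __init__(self, expressions):
        # today: a list of raw expression strings;
        # later: SoundEffect objects could be converted to strings here as well
        self._expressions = list(expressions)

    def __repr__(self):
        return "Sound({!r})".format(self._expressions)

happy = Sound(["010230660078300445876..."])
print(happy)  # Sound(['010230660078300445876...'])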