Canardoux / flutter_sound

Flutter plugin for sound. Audio recorder and player.
Mozilla Public License 2.0
876 stars 570 forks

Flutter Sound Version 4.0.0 #288

Closed Larpoux closed 4 years ago

Larpoux commented 4 years ago

Hi, guys and girls,

It is time to think about the future Flutter Sound version 4.0.0

My thinking is that the major development we must undertake is to support, on Android, a way to record without using the awkward MediaRecorder. If we use the low-level Android AudioRecord API instead of MediaRecorder, it will open the door to many good improvements:

The last point, in particular, would make Flutter Sound usable for recording to a Speech-to-Text server.

I looked quickly into this development, and it seems that some features already supported by Flutter Sound will not be easy to implement with Android AudioRecord, for example the DbPeakController feature. But supporting DbPeakController is mandatory for some of us.

I see two possibilities if we want to use the new features allowed by Android AudioRecord.

  1. Develop a new Flutter Sound module for recording. This new module would exist side by side with the current FlutterSoundRecorder module. The code would be very clean, because the current FlutterSoundRecorder module and the new module would be completely separated: no flags to handle at run time, no hacks, no patches.

With this option, we would then have two modules for play-back:

  • FlutterSoundPlayer
  • TrackPlayer

And two modules for recording:

  • FlutterSoundRecorder
  • GreatNewModuleForRecording

The drawback of this option is that it would not be very clear to developers which module they should use.

  2. Keep just one module for recording: the current FlutterSoundRecorder. When the developer wants to use a feature not supported by MediaRecorder (for example, recording with a codec like OGG/OPUS, or a new API verb such as startRecordToStream()), FlutterSoundRecorder would delegate the task to another, private module.

Probably better for the developer, but I worry that it would rapidly become a mess when someone wants to use, at the same time, a feature supported natively by FlutterSoundRecorder (DbPeakController) and a feature supported by the new private module.
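A rough sketch of what option 2 could look like. Everything here is illustrative only: the _AudioRecordBackend class does not exist, and the startRecordToStream signature is just one possibility.

```dart
import 'dart:async';
import 'dart:typed_data';

// Hypothetical private module wrapping the Android AudioRecord API.
class _AudioRecordBackend {
  Future<void> start(StreamSink<Uint8List> sink) async {
    // ... platform-channel calls would go here ...
  }
}

class FlutterSoundRecorder {
  _AudioRecordBackend _backend;

  // New verb: delegate to the private module. Existing verbs keep
  // using MediaRecorder unchanged.
  Future<void> startRecordToStream(StreamSink<Uint8List> sink) {
    _backend ??= _AudioRecordBackend();
    return _backend.start(sink);
  }
}
```

The public class stays single, and the delegation is invisible to the caller; the mess Larpoux worries about would appear when one recording needs both backends at once.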

OK, I hope that I am understandable, because my English is poor.

I would be glad to have your point of view for this development. If you think that there are other priorities for the V4.0.0, it is time to vote.

Thank you for reading me.

Larpoux commented 4 years ago

Other developments that we may consider for the V4.0.0 are:

  • Rewrite the documentation. With good documentation, developers will save time on their own code, and we will save time on Flutter Sound support too.
  • Rewrite the example app. Currently, in V3.1.x, the example app is not really a good example for developers to follow. A good example will help developers, and we will spare time on developer support. Note: @bsutton has already begun to rewrite this example; it is even possible that this task is already finished (I have not yet looked at it).
  • Support a way to record to a stream on iOS (for example, for streaming to a Speech-to-Text server).
  • Probably other points that you think are very important to support.

Unfortunately, we are not many developers, and we must vote on the priorities.

Larpoux commented 4 years ago

Other tasks we may consider :

If there are developers who have spare time and want to contribute to this great product (Flutter Sound ;-) ), now is a good moment to raise your hand, so that we can dispatch all those tasks to our army of developers ;-)

bsutton commented 4 years ago

1) Doco - I would view this as the most critical. There is no point having a package if it's hard to use. We need to provide doco on every public method and ensure that we don't expose any methods or fields which should not be public. Essentially this is about making the public API as simple as possible.

We should also review the API for any inconsistencies. One that stands out is that the player has a PlayStatus class but the recorder just passes a double. The recorder should probably pass a RecorderStatus. If we go down that path, the RecorderStatus could also be used to pass both the dB level and the duration, rather than having two separate APIs. The naming convention on the subscriptions is also not clear.

I always put bug fixes and documentation before features. It's the best way to get a project accepted by the community.

2) As I've suggested elsewhere, I would probably leave the example alone and create separate simple examples that show basic features. I have a couple of controls that I can donate, but they will need a bit of work to get them to the point of being publishable.

3) Nice, but 1) and 2) should be the priority.

So these are my votes :)

On Wed, 15 Apr 2020 at 06:41, Larpoux notifications@github.com wrote:

Other developments that we may consider for the V4.0.0 are :

  • Rewrite the documentation. With a good documentation, the developers will save time for their own code, and we will save time too on the Flutter Sound support.
  • Rewrite the example app. Currently, on V3.1.x, the example app is not really a good example for developers to follow. A good example will help the developers and we will spare time on developer support. Note: @bsutton https://github.com/bsutton has already begun to rewrite this example. It is even possible that this task is already done and finished (I have not yet looked at it).
  • Support a way to record to a stream on iOS. (For example for streaming to a Speech-to-Text server.)
  • Probably other points that you think are very important to support.

Unfortunately, we are not many developers and we must vote on what are the priorities.


Larpoux commented 4 years ago

I understand your point of view. Your two points will reduce the time spent on developer support; then we will have more time to develop new functionality.

But I also have the feeling that many developers are waiting for the possibility of recording to a stream, so that they can use Flutter Sound for things like Speech-to-Text.

I remember that you are ready to re-read a draft that I would have written, to correct my poor English. Of course, if there are other volunteers ...

bsutton commented 4 years ago

The speech-to-text engines I've looked at all require you to send a recording. I've not seen any that will take a continuous stream.

Send me a link to the draft.

bsutton commented 4 years ago

With this option, we would then have two modules for play-back:

  • FlutterSoundPlayer
  • TrackPlayer

And two modules for recording:

  • FlutterSoundRecorder
  • GreatNewModuleForRecording

The drawback of this option is that it would not be very clear to developers which module they should use.

I have to say I'm not in love with having two modules. I don't really understand why we even have two player modules.

From a user's perspective, I don't want to have to deal with the internals of the API. I just want two simple-to-use modules: AudioRecorder and AudioPlayback. These names are concise and clearly express the intent.

The use of the FlutterSound prefix on the names is redundant. We also shouldn't be concerned with name conflicts with other packages, as users can apply a Dart import prefix if this is an issue, e.g. import 'flutter_sound.dart' as fs;
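A quick sketch of that prefix trick. Note that AudioRecorder here is only the class name proposed in this thread, and the second package is invented for the example:

```dart
// Hypothetical: both packages export a class named AudioRecorder,
// so one of them is imported under a prefix to avoid the clash.
import 'package:flutter_sound/flutter_sound.dart' as fs;
import 'package:some_other_audio_package/audio.dart';

void main() {
  final recorder = fs.AudioRecorder(); // unambiguous despite the clash
}
```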

I would prefer to see a single underlying recording subsystem. It sounds like AudioRecord might be the better solution. If it doesn't support decibels, then I would propose that we implement the dB calculations ourselves. There are certainly plenty of examples for at least some of the formats, and I would guess that we can find the rest. It's probably a bit of work to do the initial dB calcs for each format, but once it's done, it's done.

If we go down the path of using multiple Android recorders, then we are going to have to maintain both forever, and we are likely to see inconsistencies at the user level, which will cause more support headaches.
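For raw PCM, at least, the dB calculation is straightforward. A minimal sketch, assuming 16-bit little-endian samples as Android AudioRecord delivers by default:

```dart
import 'dart:math';
import 'dart:typed_data';

/// Peak level in dBFS for a buffer of 16-bit PCM samples.
/// 0 dBFS is full scale; silence returns negative infinity.
double peakDbfs(Uint8List pcmBytes) {
  final samples = pcmBytes.buffer
      .asInt16List(pcmBytes.offsetInBytes, pcmBytes.lengthInBytes ~/ 2);
  var peak = 0;
  for (final s in samples) {
    final a = s.abs();
    if (a > peak) peak = a;
  }
  if (peak == 0) return double.negativeInfinity;
  return 20 * log(peak / 32768) / ln10; // log10(x) == log(x) / ln10
}
```

Compressed formats (OGG/OPUS and friends) would need decoding first, which is where the per-format work comes in.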

bsutton commented 4 years ago

I would like to suggest changes to the subscription apis.

For the recorder we currently have a number of methods for setting up a subscription:

  • setSubscriptionDuration
  • setDbPeakLevelUpdate
  • setDbLevelEnabled
  • onRecorderStateChanged
  • onRecorderDbPeakChanged

I would like to suggest that we merge these down to a single API call:

Stream<RecordingDisposition> disposition({Duration interval})
{
    _setSubscriptionDuration(interval);
    _setDbLevelEnabled(true);
    _setDbPeakLevelUpdate(interval.inMilliseconds / 1000.0);
    return _dispositionController.stream;
}

class RecordingDisposition
{
     Duration duration;
     double decibels;
}

I don't think there is any need to cancel this call. Whilst there is a performance overhead when this runs, I don't think it will be that significant. If it turns out that it is, then we can add a method: void stopDispositionStream();
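Usage of the proposed call might then look like this. FakeRecorder is a self-contained stand-in for the reworked recorder, since nothing here exists yet:

```dart
import 'dart:async';

class RecordingDisposition {
  final Duration duration;
  final double decibels;
  RecordingDisposition(this.duration, this.decibels);
}

// Stand-in for the reworked recorder: emits one disposition per interval.
class FakeRecorder {
  Stream<RecordingDisposition> disposition({Duration interval}) {
    var elapsed = Duration.zero;
    return Stream.periodic(interval, (_) {
      elapsed += interval;
      return RecordingDisposition(elapsed, -20.0); // dummy dB value
    });
  }
}

void main() {
  FakeRecorder()
      .disposition(interval: Duration(milliseconds: 100))
      .listen((d) => print('${d.duration.inMilliseconds} ms  ${d.decibels} dB'));
}
```

One subscription delivers both duration and dB, which is the whole point of the amalgamation.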

As to the player, I would suggest the same type of amalgamation. We currently have: setSubscriptionDuration and onPlayerStateChanged.

These would be reduced to

Stream<PlaybackDisposition> disposition({Duration interval})
{
    _setSubscriptionDuration(interval.inMilliseconds / 1000.0);
    return _dispositionController.stream;
}

class PlaybackDisposition
{
    Duration duration;
    Duration position;
}

The above changes provide a cleaner and easier-to-understand API and remove inconsistencies between the two modules.

Larpoux commented 4 years ago

Yes, my friend. I totally agree. We can and must change that in V4.x.x.

bsutton commented 4 years ago

I'm working through the TrackPlayer to see if I can combine it with the SoundPlayer so we have a single simple interface.

I'm a little uncertain how the TrackPlayer treats a Track. Is Track the correct terminology, or is this really an Album? If it's just a Track, how are 'skip forward' and 'skip backward' meant to work? i.e. if it's just a track, then you can't skip forward to the next track.

If it is an Album should we not be calling it an Album? If it is an Album how does the user select a single track to play?

bsutton commented 4 years ago

I noticed that we have code in both Dart and Java/Swift that converts a data buffer into a file. I've not changed the Java/Swift code, but I'm no longer passing a data buffer to the plugin; rather, I am always converting it to a file first. The idea is that we should keep as much of the logic in Dart as possible, as it is easier to maintain. Is there any reason that I've missed as to why it would need to be done in the Java/Swift code?
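The Dart side of that conversion is small. A sketch of the idea (the function name and extension parameter are illustrative, not the actual plugin code):

```dart
import 'dart:io';
import 'dart:typed_data';

/// Materialise a data buffer as a temporary file so the platform
/// code only ever deals with file paths.
Future<String> bufferToTempFile(Uint8List buffer, String extension) async {
  final dir = Directory.systemTemp.createTempSync('flutter_sound');
  final file = File('${dir.path}/sound.$extension');
  await file.writeAsBytes(buffer);
  return file.path;
}
```

Keeping this in Dart means one implementation instead of one per platform.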

Larpoux commented 4 years ago

Yes, it is very good to pass a file instead of a buffer. I agree that we should put the logic in Dart when possible, and it is probably more efficient not to transmit a large buffer to the native code. I did the conversion on the native side because I hoped to find a way to play the buffer directly, without putting the data into a file (as is done on iOS). But we will change the Dart code if one day we can play the buffer directly. Do this only for Android; iOS is currently fine.

Larpoux commented 4 years ago

I'm a little uncertain how the TrackPlayer treats a Track. Is Track the correct terminology, or is this really an Album? If it's just a Track, how are 'skip forward' and 'skip backward' meant to work? i.e. if it's just a track, then you can't skip forward to the next track.

If it is an Album should we not be calling it an Album? If it is an Album how does the user select a single track to play?

No, it is not an album. It is really a track.

'skip forward' and 'skip backward' are callbacks which are called when the user hits the buttons on the lock screen. Then the app can do what it wants, probably play another track.

bsutton commented 4 years ago

OK, that makes sense. I will update the doco.


Larpoux commented 4 years ago

I worked very hard to separate TrackPlayer and FlutterSoundPlayer.

TrackPlayer is a subclass of FlutterSoundPlayer, but those two modules do not use the same OS audio tools.

I am not sure that it would be good to recombine the two modules. Please tell me what your position is.

bsutton commented 4 years ago

From a user's perspective, the underlying implementation is a detail they shouldn't have to worry about.

Even when working on the example, having the two classes made it a pain to work with.

I've reworked the example and the code is cleaner because of the single class.


bsutton commented 4 years ago

I had some other thoughts about the api and some possible changes.

Currently we promote re-use of a SoundPlayer/SoundRecorder instance.

I'm wondering if it's worth considering moving to a single-use scenario.

So currently we have:

var player = SoundPlayer();
player.startPlayer(path);
player.startPlayerFromBuffer(buffer);
player.startPlayerFromTrack();

What if we moved to:

SoundPlayer.fromPath(path).play();
SoundPlayer.fromBuffer(buffer).play();
SoundPlayer.fromTrack(track).play();

I'm also wondering about the need for the Track class and the `startPlayerFromTrack` method.

The Track class seems to essentially bundle up some additional information that the OS player is able to use.

This additional information could just as easily be passed as named arguments, and if we build our own player (or a third party does), this information could just as easily be passed to those players via some sort of callback/notification mechanism.

So we could use the cascade operator to do something like:

var player = SoundPlayer.fromPath(path)
  ..title = 'xxx'
  ..author = 'xxx'
  ..artUri = 'xxx'
  ..artAsset = 'xxx'
  ..onStarted = (track) => doSomething()
  ..onStopped = () => doSomething()
  ..onPause = () => doSomething()
  ..onResume = () => doSomething()
  ..onSkipForward = (track) => doSomething()
  ..onSkipBackward = (track) => doSomething()
  ..seekTo(duration) // can be called before or after play
  ..play();

player.pause();
player.stop();

or

SoundPlayer.fromPath(path,
    title: 'xxx',
    author: 'xxx',
    artUri: 'xxx',
    artAsset: 'xxx')
  .play();

or maybe we keep the track but just to pass the info.

class Track
{
    String title;
    String author;
    String artUrl;
    String artAsset; // can the asset be described as a file uri?
}

SoundPlayer.fromPath(path)
  ..track = Track(...)
  ..play();

So now we only need two constructors:

SoundPlayer.fromPath(path);
SoundPlayer.fromBuffer(buffer);

Of the above I prefer the use of the cascade operator along with the 'Track' class. I think it gives the cleanest api and we can use the Track class to pass 'track' details back to the onSkipForward etc.

So my preferred option looks like:

var player = SoundPlayer.fromPath(path)
  ..track = Track(title: 'xxx', author: 'xxx', artUri: 'xxx', artAsset: 'xxx')
  ..onStarted = (track) => doSomething()
  ..onStopped = () => doSomething()
  ..onPause = () => doSomething()
  ..onResume = () => doSomething()
  ..onSkipForward = (track) => doSomething()
  ..onSkipBackward = (track) => doSomething()
  ..seekTo(duration) // can be called before or after play
  ..play();
player.pause();
player.resume();
player.stop();

So how does the user have the player use the OS track UI?

SoundPlayer.fromPath(path)
  ..showPlatformUI = true
  ..play();

(I'm not certain about the name showPlatformUI but I think you get the idea).

So how does the user use the player with a widget? If we have a widget called Playbar, which is an HTML5-style player, it might be nice to have something like:

Widget build(BuildContext context)
{
    return Playbar(source: SoundPlayer.fromPath(path));
}

This syntax allows the SoundPlayer to be configured in all the above ways, so the Playbar only needs to know about actions such as .play(), .stop() ...

Larpoux commented 4 years ago

Currently we promote re-use of a SoundPlayer/SoundRecorder instance.

I'm wondering if it's worth considering moving to a single-use scenario.

So currently we have:

var player = SoundPlayer();
player.startPlayer(path);
player.startPlayerFromBuffer(buffer);
player.startPlayerFromTrack();

What if we moved to:

SoundPlayer.fromPath(path).play();
SoundPlayer.fromBuffer(buffer).play();
SoundPlayer.fromTrack(track).play();

👎 I think it is important to keep a concept of session, especially if the user wants to control its audio focus. And there is no real benefit to doing everything in startPlayer(). I also think that initialize() and release() are a good way to start an audio session and terminate it. Flutter Sound can now handle several audio sessions simultaneously; this has been requested by some developers.

I'm also wondering about the need for the Track class and the `startPlayerFromTrack' method.

The Track class seems to essentially bundle up some additional information that the OS player is able to use.

This additional information could just as easily be passed as named arguments and if we build our own player (or a third party does) this information could just as easily be passed to those players via some sort of call back/notification mechanism.

Personally, I do not like very much the way Salvatore uses a special Track class. I would prefer to pass the information as arguments. But I think this is a detail, and I did not want to fight with Salvatore (I had already too many relationship difficulties with him).

So we could use the cascade operator to do something like:

var player = SoundPlayer.fromPath(path)
  ..title = 'xxx'
  ..author = 'xxx'
  ..artUri = 'xxx'
  ..artAsset = 'xxx'
  ..onStarted = (track) => doSomething()
  ..onStopped = () => doSomething()
  ..onPause = () => doSomething()
  ..onResume = () => doSomething()
  ..onSkipForward = (track) => doSomething()
  ..onSkipBackward = (track) => doSomething()
  ..seekTo(duration) // can be called before or after play
  ..play();

player.pause();
player.stop();

or
SoundPlayer.fromPath(path,
    title: 'xxx',
    author: 'xxx',
    artUri: 'xxx',
    artAsset: 'xxx')
  .play();

I prefer :

SoundPlayer player = SoundPlayer();
await player.initialize();
player.startPlayer(...);
player.pause();
player.resume();
player.stop();
player.release();

Personally, I do not like this new fashion for cascading statements very much. It has no real benefit. But I will not fight with you on this point. Actually, our developers do not seem to have any problem instantiating their player before using it. And now, Flutter Sound does the initialize() itself if the app forgot to do it, so it is very simple. The only drawback is that the app may also forget to do the release(), because it will not have done the initialize itself.

bsutton commented 4 years ago

Can you explain what you mean by session? With the existing startPlayer method we still only play a single audio stream at a time, so I'm a little confused about why we need to retain it to support sessions.

I would suggest that my method has a couple of advantages:

1) you can create a stream and start/stop it multiple times.

With the current system, if you want to restart the stream you have to call seekTo, then resume. So:

var player = SoundPlayer();
player.startPlayer(path);
player.stop();
player.seekTo(Duration.zero);
player.resume();

With the new proposal this becomes:

var player = SoundPlayer.fromPath(path);
player.start();
player.stop();
player.start();

I would suggest that this is a cleaner approach. Having to seek and then resume feels odd.

2) If we go down your path of dumping Track (which I have no problem with), then the startPlayer signature gets messy.

You end up with 11 named arguments, which makes auto-completion in an IDE harder to read.

Note: I would like to separate out the whenPaused into two separate methods (onPaused, onResumed).

e.g.

Future<void> startPlayer({
    @required String path,
    Codec codec = Codec.defaultCodec,
    void Function() onFinished,
    void Function() onPaused,
    void Function() onResumed,
    bool showTrack = false,
    TonSkip onSkipForward,
    TonSkip onSkipBackward,
    String trackTitle,
    String trackAuthor,
    String albumArtUrl,
    String albumArtAsset,
});

Using the cascade operator leaves the startPlayer signature clean and easy to use whilst still giving the option to set the arguments.

In terms of focus I'm not really sure I understand the difference:

We currently have setActive which sets the focus.

If a user has multiple SoundPlayer instances then they can just make calls to setActive to move the focus between players.

var player = SoundPlayer.fromPath(path);
var player2 = SoundPlayer.fromPath(path);
player.start();
player2.start();
player.setActive(true);
// wait for a bit
player2.setActive(true);

If we use the suggested approach, a user can start and control multiple streams and move focus between the streams by calling player.setActive. The two methods seem to be equivalent here.

As to the initialise: with the new proposal, the code also looks cleaner, as the constructor can now deal with the initialise.

This will also clean our code up, as we won't have to call initialise on every method call. This has a secondary advantage: having to initialise on every call has forced all of our entry points to be Futures. If we initialise automatically in the constructor, then we don't need to worry about the user forgetting to call initialise. So the full code looks like:

Current system:

var player = SoundPlayer();
player.initialize();
player.startPlayer(xxx,
    whenFinished: () => player.release());

vs

var player = SoundPlayer.fromPath(xxx);
player.whenFinished = () => player.release();
player.start();

We have now made it impossible for the user to forget to initialise, and we don't have the burden of checking it everywhere in the API.
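A minimal sketch of that constructor-driven initialisation (all names here are illustrative, not the actual plugin code):

```dart
class SoundPlayer {
  final String path;
  Future<void> _initialised;

  SoundPlayer.fromPath(this.path) {
    // Kick off the one-time setup here, so the user can never forget it.
    _initialised = _initialise();
  }

  Future<void> _initialise() async {
    // ... open the platform audio session ...
  }

  Future<void> play() async {
    await _initialised; // every entry point awaits the same one-time setup
    // ... start playback of `path` ...
  }
}
```

Storing the Future and awaiting it in each entry point means initialisation runs exactly once, and late callers simply wait for it to finish.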

bsutton commented 4 years ago

So I was looking into @Larpoux's comments on the need for a session, now that I understand what a session is about. I've now made a class called AudioSession. You now have the ability to pass an AudioSession into a player, and the player attaches to that session. The Album now has an AudioSession, but you can use an audio session without an album. I think this creates a balance between the requirements I'm hearing. It allows me to keep the SoundPlayer API clean and simple whilst providing an explicit means of controlling the session.

It works rather nicely for the skip back/forward notifications, as it makes a logical place for these to live. I will hopefully push another pull request later today with the code changes.

Larpoux commented 4 years ago

Too complicated. 👎 Your branch is getting more and more complicated. Flutter Sound must be light and simple, for the user and for maintenance.

I do not approve of your proposition.

bsutton commented 4 years ago

Let's wait until I'm finished with it, and then review where things stand.

Code should be as simple as possible but no simpler :)


bsutton commented 4 years ago

So, gentlemen, I think I've been guilty of hubris, and for that I apologise. My wife tells me it's one of my less endearing qualities.

After having a chance to sit back and review the API, I realise that my AudioSession is essentially a merge of the TrackPlayer and SoundPlayer APIs.

As such, calling it AudioSession had more to do with my fixation on sessions (once I rather belatedly discovered that they existed) than with any improvement to the API.

As such, I'm going to rename AudioSession back to SoundPlayer. This should make the upgrade path much simpler for existing users, and it reflects what you guys have been trying to tell me from the start :)

The class that I have called SoundPlayer will be renamed QuickPlay as that reflects its intent.

I also had a second epiphany last night after doing some reading on Dart. I had a number of facades in the system (Album/AlbumImpl) in order to hide our internal API from users. I've now been able to collapse all of these facades into single classes (with a subsequent reduction in complexity, which should make @Larpoux happier) while still hiding the internal implementation.

So apologies for the merry-go-round.

I will push the changes later today for your review. Brett.

Larpoux commented 4 years ago

Great 👍 Perhaps we will end up agreeing on 4.0.0. 😄 (For a few days, I have worried that this dev branch was definitely not what I would like Flutter Sound to be.) Thank you for your work.