VictorRancesCode / flutter_dialogflow

Flutter package that makes it easy to integrate Dialogflow, with support for Dialogflow v2
Apache License 2.0
215 stars 73 forks

outputAudio #25

Open alanstyong71 opened 4 years ago

alanstyong71 commented 4 years ago

Hi,

Not sure who's maintaining the repo on this. I'm sure this is a pretty popular request.

But one suggestion is to include the outputAudio in the AIResponse class:

class OutputAudio {
  Uint8List _outputAudio; // base64.decode returns Uint8List
...

}

class AIResponse {
  String _responseId;
  QueryResult _queryResult;
  num _intentDetectionConfidence;
  String _languageCode;
  DiagnosticInfo _diagnosticInfo;
  WebhookStatus _webhookStatus;
  OutputAudio _outputAudio; // define and add OutputAudio class

  AIResponse({Map body}) {
   // add this (wrap the base64 audio string in the new class)
   _outputAudio = body['outputAudio'] != null
       ? OutputAudio(body['outputAudio'])
       : null;

    _responseId = body['responseId'];
    _intentDetectionConfidence = body['intentDetectionConfidence'];
    _queryResult = new QueryResult(body['queryResult']);
    _languageCode = body['languageCode'];
    _diagnosticInfo = (body['diagnosticInfo'] != null
        ? new DiagnosticInfo(body['diagnosticInfo'])
        : null);
    _webhookStatus = body['webhookStatus'] != null
        ? new WebhookStatus(body['webhookStatus'])
        : null;
  }

  // and finally, add one getter:
  OutputAudio get outputAudio {
    return _outputAudio;
  }
}

You can then choose to use any audio player on the output audio.

Just a suggestion.

stetanjung commented 4 years ago

Hi,

I am implementing your suggestion. However, I don't know how to define the OutputAudio class. Can you help me define it?

alanstyong71 commented 4 years ago

Something like this:

import 'dart:convert';
import 'dart:typed_data';

class OutputAudio {
  Uint8List _outputAudio; // base64.decode returns Uint8List, not ByteData

  OutputAudio(String audio) {
    // decode the base64 string into raw bytes
    _outputAudio = base64.decode(audio);
  }

  Uint8List get outputAudio {
    return _outputAudio;
  }
}

However, when you wish to play the audio, you will need to write it to the local file system before any audio player can play it back.

Good luck, Alan

stetanjung commented 4 years ago

Thanks Alan for your help.

I tried another option: I made "_outputAudio" a dynamic variable and removed the class you suggested. However, I added your code that decodes the base64 audio to my main code.

OutputAudio(String audio) {

   // decode the base64 string into raw bytes
    _outputAudio = base64.decode(audio);
  }

so my final dialogflow_v2.dart ended up like this:

class AIResponse {
  ....
  dynamic _outputAudio;

  AIResponse({Map body}) {
   ....
    _outputAudio = body['outputAudio'];
  }

  dynamic get outputAudio {
    return _outputAudio;
  }
}
alanstyong71 commented 4 years ago

Yes, that will work too! There is one more thing to watch out for if you're playing around with Dialogflow. If you're planning to use your own fulfillment webhook with Node.js and the actions-on-google library, you might have to modify the class when you use 'Suggestions' to comply with Google Actions: output audio will continue to work, but webhook messages have moved to another object. No one is actively maintaining this library anymore, so you will need to make the changes yourself. Take care and stay safe.

stetanjung commented 4 years ago

Thanks for the warning.

I am planning to use the inline editor instead of my own webhook. Will I encounter the same problem? Since I'm going to use Firebase as the DB for this project, I will use the inline editor instead of making my own webhook.

alanstyong71 commented 4 years ago

I started with the inline editor early in development to prove the concept. It deploys to Google Cloud Functions in your GCP account. But the npm library 'dialogflow-fulfillment' used in the inline version is itself no longer supported by Google (last update was version 0.6.1, Oct 2018). Some functions may still work, but Google has advised moving to the newer library 'actions-on-google'. You can rewrite the inline index.js to use 'actions-on-google', which is a better supported and documented library. You'll also have to update package.json to include the correct packages. But the challenge is debugging from the inline editor.

Or, you can write your own Node.js application using 'actions-on-google', deploy it to your own server, and update the Webhook URL. Personally, I found it easier to develop in our own server environment - it's easier to troubleshoot, etc. You can then use express, body-parser, mysql, and all of those awesome Node.js libraries to connect to your backend.

Hope this helps.

alanstyong71 commented 4 years ago

Sorry, forgot to add: if you decide to develop your own Node.js application, you can connect to Firebase by installing the 'firebase' npm package and using the same functions.

stetanjung commented 4 years ago

Thanks for your advice and warning, Alan.

I will try to create my own webhook if I have more time, since I'm quite new to Node.js and I don't have enough time to finish this project on schedule if I also have to learn to build my own Node.js server for Dialogflow. I'll try my best to work with the inline editor first.

Once again, thank you for your help. I will try to create my own Node.js application.

Shajeel-Afzal commented 4 years ago

@stetanjung any luck completing this? Is outputAudio supported now?

ElZombieIsra commented 3 years ago

Hello, everyone.

Since this package is abandoned, I created another package that solves the problems that this one has. You can check it out here: DialogFlowtter

The issue that you have is addressed in that package.

Feedback is always welcome.