Closed by gabog 6 days ago
Just for reference, we could approach this as writing an extension for BF CLI specific to VA based on: https://github.com/microsoft/botframework-cli/blob/master/CONTRIBUTING.md#steps-to-create-a-new-plugin
Hi @scheyal and @clearab, do we still need this feature in the CLI? PVA knows how to register skills and Composer has a way too. The only thing that doesn't is VA, which would map to our "coded approach" for implementing consumers. Do we have customers asking for this, or can we close it?
Thanks
We need some of this functionality for Orchestrator. Need to discuss in detail.
Automate the process of adding, removing, and updating skill configuration for a Bot Framework Bot.
Core Skill Registration Steps (Action and Utterance Invocation)
Validate Skill Manifest. Specifically, ensure Id, AppId, SkillEndpoint, etc. are valid. The current SDK approach requires developers to create the manifest manually, which leads to many errors.
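A minimal sketch of what that validation could look like. The `SkillManifest` shape and `validateManifest` helper are hypothetical, not part of the BF CLI; the rules shown (GUID `appId`, absolute HTTPS `skillEndpoint`) are assumptions based on the fields described above.

```typescript
// Hypothetical sketch: validate the required fields of a skill manifest.
interface SkillManifest {
  id?: string;
  appId?: string;
  skillEndpoint?: string;
  name?: string;
  description?: string;
}

// Returns a list of validation errors; an empty list means the manifest passes.
function validateManifest(manifest: SkillManifest): string[] {
  const errors: string[] = [];
  if (!manifest.id) errors.push("Manifest is missing 'id'.");
  // AppId is part of the secure skill communication mechanism, so it must be a GUID.
  const guid = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
  if (!manifest.appId || !guid.test(manifest.appId)) {
    errors.push("'appId' must be a valid GUID.");
  }
  // The endpoint must be an absolute HTTPS URL.
  try {
    const url = new URL(manifest.skillEndpoint ?? "");
    if (url.protocol !== "https:") errors.push("'skillEndpoint' must use HTTPS.");
  } catch {
    errors.push("'skillEndpoint' must be an absolute URL.");
  }
  return errors;
}
```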
Extract core manifest information and update the Bot configuration. Id, AppId, Endpoint, Name and Description are required on the Bot side. AppId is used as part of the secure skill communication mechanism.
```json
"botFrameworkSkills": [
  {
    "id": "",
    "appId": "",
    "skillEndpoint": "",
    "name": "",
    "description": ""
  }
]
```
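The update step could be sketched as follows. The `registerSkill` function and `BotConfig` shape are illustrative stand-ins, assuming `botFrameworkSkills` is a list keyed by `id` and that re-registering a skill should replace its existing entry.

```typescript
// Hypothetical sketch: merge a validated manifest entry into the bot's
// skill configuration, replacing any existing entry with the same id.
interface SkillEntry {
  id: string;
  appId: string;
  skillEndpoint: string;
  name: string;
  description: string;
}

interface BotConfig {
  botFrameworkSkills: SkillEntry[];
}

function registerSkill(config: BotConfig, entry: SkillEntry): BotConfig {
  const others = config.botFrameworkSkills.filter((s) => s.id !== entry.id);
  return { botFrameworkSkills: [...others, entry] };
}
```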
Populate the skillHostEndpoint URI. This setting is part of the bot's configuration file and provides the endpoint that a Skill calls back on with response activities. Not setting it leads to obscure HTTP 500 errors, which is highly confusing to developers. As developers move from development environments through to production, this needs to be set as part of CI/CD. It can also be looked up from other information in the configuration.
```json
"skillHostEndpoint": ""
```
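The "looked up based on other information" idea could be as simple as deriving the value from the host bot's base URL. This is a hypothetical helper; the `/api/skills` route is an assumption based on common SDK samples and should be verified against the actual host bot.

```typescript
// Hypothetical sketch: derive skillHostEndpoint from the host bot's base URL,
// assuming the host exposes its skill callback at "/api/skills".
function deriveSkillHostEndpoint(botBaseUrl: string): string {
  const url = new URL(botBaseUrl);
  url.pathname = "/api/skills";
  return url.toString();
}
```

In a CI/CD pipeline the base URL would come from the deployment environment, so the derived value changes automatically between dev, staging and production.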
Additional Skill Registration Steps required for Utterance Invocation scenarios
Retrieve LU training data. The manifest points to LU source(s), one per supported locale. These need to be resolved and their contents downloaded. Sources can be a local file, a remote file, or the LUIS model directly, the latter for Microsoft-centric and enterprise scenarios where a separate static file is not required.
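Resolving a source starts by classifying which of the three kinds it is. The classifier below is a hypothetical sketch; in particular, treating a bare GUID as a LUIS application id is an assumption about how a manifest might reference a model directly.

```typescript
// Hypothetical sketch: classify each LU source in a manifest so the right
// retrieval strategy (file read, HTTP download, LUIS export) can be chosen.
type LuSourceKind = "localFile" | "remoteFile" | "luisModel";

function classifyLuSource(source: string): LuSourceKind {
  // Assumption: a LUIS application is referenced as a bare GUID.
  const guid = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
  if (guid.test(source)) return "luisModel";
  if (/^https?:\/\//i.test(source)) return "remoteFile";
  return "localFile";
}
```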
Filter out unneeded intents. The manifest optionally marks certain intents as "exposed" (made public), so the LU data may be filtered down to just the intents the manifest specifies. This step may not be needed: if no intents are specified, all intents are included by default.
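The filtering rule is simple enough to sketch directly. The `LuUtterance` shape is a simplification of parsed LU content introduced for this example.

```typescript
// Hypothetical sketch: keep only the utterances whose intent the manifest
// exposes. An empty exposed list means "keep everything" (the default above).
interface LuUtterance {
  intent: string;
  text: string;
}

function filterByExposedIntents(utterances: LuUtterance[], exposed: string[]): LuUtterance[] {
  if (exposed.length === 0) return utterances;
  const allow = new Set(exposed);
  return utterances.filter((u) => allow.has(u.intent));
}
```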
Train the Dispatcher. A new "label" is added to Dispatch/Orchestrator and the training utterances from the previous step are added under it. This label is mapped to the Skill Name, enabling the Skill to be identified at runtime.
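Conceptually this is a labelling step before retraining. The `DispatchExample` shape and `addSkillLabel` helper are illustrative only, not the Dispatch/Orchestrator API.

```typescript
// Hypothetical sketch: add a skill as a new dispatch label, mapping each of
// its training utterances to the skill name for runtime identification.
interface DispatchExample {
  label: string;
  text: string;
}

function addSkillLabel(model: DispatchExample[], skillName: string, utterances: string[]): DispatchExample[] {
  const examples = utterances.map((text) => ({ label: skillName, text }));
  return [...model, ...examples];
}
```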
Multi-Locale. The above steps are repeated for each locale; each locale has its own dispatch model.
Strongly typed class. Generate an updated Dispatch "model" class so the (now extended) Dispatch result can be deserialized by the assistant. This avoids any code changes when adding a Skill.
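One way to picture the code generation: emit the intent list as a typed enum that the assistant compiles against. This is a toy sketch with an invented `DispatchIntent` name, not the actual generator.

```typescript
// Hypothetical sketch: regenerate a strongly typed intent enum from the
// (extended) dispatch label list, so adding a skill needs no hand edits.
function generateIntentEnum(intents: string[]): string {
  const members = intents.map((i) => `  ${i} = "${i}",`).join("\n");
  return `export enum DispatchIntent {\n${members}\n}`;
}
```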
Additional Considerations for Utterance Invocation Scenarios
Utterance Invocation in multi-locale scenarios requires a consistent way of modelling the cognitive models in your configuration (e.g. each locale will have a Dispatcher, N LUIS models and N QnA Maker knowledge bases). These need to be reliably retrieved at runtime, both by the bot and by the CLIs that refresh the dispatcher. This is problematic for an SDK, which shouldn't impose too much structure/regiment on developers; it is easier for an assistant platform layered on top of the SDK.
When registering a series of Skills, the dispatch retraining cycle between each skill is less than ideal, especially in CI/CD pipelines. We have implemented a "noRefresh" flag that lets you skip the dispatch refresh until, say, the last skill registration in a batch, meaning you refresh dispatch only once.
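The batching pattern described above can be sketched as follows. `registerSkillConfig` and `refreshDispatch` are stand-ins for the real operations; the sketch only shows how a noRefresh-style flag defers the expensive retrain to the final skill in the batch.

```typescript
// Hypothetical sketch of batched registration: skip the expensive dispatch
// refresh for every skill except the last one in the batch.
function registerBatch(
  skillIds: string[],
  registerSkillConfig: (id: string) => void,
  refreshDispatch: () => void
): void {
  skillIds.forEach((id, i) => {
    registerSkillConfig(id);
    const noRefresh = i < skillIds.length - 1; // equivalent to passing --noRefresh
    if (!noRefresh) refreshDispatch();
  });
}
```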