IBM / clai

Command Line Artificial Intelligence, or CLAI, is an open-source project from IBM Research that aims to bring the power of AI to the command line interface.
https://clai-home.mybluemix.net/
MIT License

tellina integration #34

Closed TathagataChakraborti closed 4 years ago

TathagataChakraborti commented 4 years ago

Having a local nlc2cmd skill based on the Tellina deployed model would be great.

See here.

ktalamad commented 4 years ago

It seems more prudent to try the Tellina local install first. We could have different versions of this skill depending on user appetite.

ktalamad commented 4 years ago

Other considerations that have come up in my exploration of Tellina for integration with CLAI:

  1. The easiest approach might be to scrape the results returned by Tellina; however, since the content is rendered dynamically, simply using the requests library does not work. We might have to use something like Selenium (with the ensuing complexity) to scrape, for example, the content of the button click that gets sent to ExplainShell (see the sketch after this list).

  2. List of commands: We can return either the first one, or an array of all possible commands listed by Tellina. The web interface does not expose any criteria for ranking, which brings us to:

  3. Confidences: Currently there is no way of obtaining a confidence score, but we could write a module that computes one.
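A minimal sketch of the Selenium route mentioned in point 1, assuming we scrape the deployed site directly; the CSS selectors below are placeholders and would have to be replaced with the actual element names found by inspecting the results page:

```python
# Hypothetical sketch: scrape tellina.rocks with Selenium instead of requests,
# since the results are rendered dynamically after the page loads.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def translate_with_selenium(utterance: str) -> list:
    driver = webdriver.Chrome()  # requires a matching chromedriver binary
    try:
        driver.get("http://tellina.rocks/")
        # Placeholder selectors -- the real input box / results container
        # must be identified from the page source.
        box = driver.find_element(By.CSS_SELECTOR, "input[type=text]")
        box.send_keys(utterance)
        box.submit()
        # Wait for the dynamically rendered results; this is exactly what a
        # plain requests.get() of the page cannot see.
        WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, ".translation"))
        )
        return [el.text for el in driver.find_elements(By.CSS_SELECTOR, ".translation")]
    finally:
        driver.quit()
```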

We need to reach out to Victoria and others on the Tellina team to see if there is a queryable web API (which would mitigate a lot of the above) that can be used instead of having to install Tellina (and hence TensorFlow) locally.

ktalamad commented 4 years ago

Current update:

First, I tried to see whether there was a way to just use the results from the deployed website (tellina.rocks). This is hard because the results page is generated dynamically based on the response from the server running TF, so it is difficult to scrape.

Second, I tried to get a local install of Tellina up and running. For this, you need TF and a number of other dependencies installed. An immediate problem is that even though the installation instructions mandate TF v2, the code still has checks and invocations that are specific to TF v1.x. I changed a few of these to get it working. I also independently verified that TF (v1.15) was running on my machine. However, calling it through Tellina threw an RNN cell error (I think on the shape/dimensions), which is not something that can be fixed easily without changing a lot of TF-internal code.
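For context, the kind of v1/v2 mismatch described above is usually bridged with the standard compat shim; this is an illustrative sketch, not the actual patch applied to the Tellina codebase:

```python
# Illustrative only: run TF 1.x-style graph/session code under a TF 2 install.
import tensorflow as tf

if tf.__version__.startswith("2."):
    # Rebind to the v1 compatibility module and switch off v2 behavior.
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

# TF 1.x-style graph code (placeholders + sessions), which the codebase still uses:
x = tf.placeholder(tf.float32, shape=[None, 1], name="x")
y = x * 2.0
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0], [2.0]]}))
```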

Third, I tried to install the nl2bash code and get it running locally. This runs fine with venv (the downside is that CLAI might have issues with using venv when we try to wrap skills up, but we'll get to that later). However, the nl2bash project is currently set up to evaluate and report numbers at scale for different models. I am trying to isolate the functionality we can use to just pass in a natural language utterance and return the relevant command (maybe based on the AST that is produced, etc.).
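Roughly, the interface being isolated would look like the sketch below; `decode_utterance` is a placeholder for whatever single-query entry point gets factored out of the batch evaluation code, not an existing nl2bash API:

```python
# Hypothetical shape of the wrapper around nl2bash.
from typing import List, Tuple

def decode_utterance(utterance: str) -> List[Tuple[str, float]]:
    """Placeholder: load the trained model once and return (command, score) pairs."""
    raise NotImplementedError

def nl2cmd(utterance: str) -> str:
    """Return the single highest-scoring bash command for a natural language query."""
    candidates = decode_utterance(utterance)
    if not candidates:
        return ""
    best_cmd, _score = max(candidates, key=lambda pair: pair[1])
    return best_cmd
```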

Talking later today to @madhavanpallan about this.

TathagataChakraborti commented 4 years ago

@madhavanpallan seems to have Tellina set up. We can use that as a web service, for example. But @ktalamad, we need nl2bash to run with confidence scores before we can use it in CLAI.
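A rough sketch of what calling such a web service from the CLAI side could look like; the endpoint URL and response fields are placeholders, not the actual deployment:

```python
# Hypothetical client for a remote Tellina translation service.
import requests

TELLINA_ENDPOINT = "http://<tellina-host>/translate"  # placeholder URL

def query_tellina(utterance: str, timeout: float = 5.0):
    resp = requests.post(TELLINA_ENDPOINT, json={"query": utterance}, timeout=timeout)
    resp.raise_for_status()
    data = resp.json()
    # Assumed response shape: {"translations": [{"command": ..., "confidence": ...}, ...]}
    return [(item["command"], item.get("confidence", 0.0))
            for item in data.get("translations", [])]
```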

ktalamad commented 4 years ago

Follow tellina skill development in Madhavan's repo. Current status: the Tellina service is running on IBM Cloud, but we are facing some errors in getting the CLAI skill to respond correctly.