SEPIA-Framework / sepia-docs

Documentation and Wiki for SEPIA. Please post your questions and bug-reports here in the issues section! Thank you :-)
https://sepia-framework.github.io/

own wolframalpha skill / smart-services #150

Closed royrogermcfreely closed 1 year ago

royrogermcfreely commented 2 years ago

hello,

I know SEPIA asks Wikipedia if I ask something like "who is Elon Musk" or "what is inflation", and I get a response.

For other questions, SEPIA just opens a search website.

I wrote a small script in Python to ask a question via the Wolfram API. My native language is German, so I translate my question to English first (Google Translator is pretty good for stuff like this) and then pass it to the Wolfram API.

Now I can ask questions about calculations, currency exchange, the distance between cities and much, much more.

I want to do that via voice commands in SEPIA too.

First I install:

pip install wolframalpha
pip install -U deep_translator

Then my script looks like this:

from deep_translator import GoogleTranslator
import wolframalpha

# read the question (renamed so it doesn't shadow the built-in 'input')
question = input("question: ")

# translate the question to English first
translated = GoogleTranslator(source='auto', target='en').translate(question)

app_id = "Wolfram APP-ID HERE"
client = wolframalpha.Client(app_id)
res = client.query(translated)
answer = next(res.results).text
print(answer)

And that's the result when I call it:

python3 wolfram.py
question: Was ist die Hauptstadt von Indien

New Delhi, Delhi, India

So I want to make a skill where I can say "question", which starts the Wolfram skill; SEPIA then asks me for the question and answers it with the given result.

Is that something easy to make possible with the Python-Bridge, or would it have to be written in Java?

/roy

fquirin commented 2 years ago

That looks very interesting! :sunglasses: and it's certainly a perfect use-case for a custom service :smiley: .

The Python-Bridge is a good place to start as well; actually, I think you can even strip it down a bit. I recently used it as a template to build a YouTube Music search micro-service, maybe that is an even better example: https://github.com/fquirin/python-ytmusicapi-server/blob/main/main.py

Let me know if you need any help. I hope I have a bit more time for custom services soon :-)

royrogermcfreely commented 2 years ago

I tried to upload the Python-Bridge smart-service from the online repository but I get {"result":"fail","error":"401 not authorized"}.

I tried it with the user "uid1007", which has the "user, developer, smarthomeadmin" roles, and SDK is switched on.

With "uid1003" I get the same error.

Do you know how I can solve this?

fquirin commented 2 years ago

Your setup looks correct :thinking: . Can you try once more following these steps, just to be sure:

I just double-checked to make sure it actually still works and there was no regression or something ^^.

royrogermcfreely commented 2 years ago

Oh man, I feel so dumb ^^ I always went from the SEPIA web client menu to "Control HUB" -> not authorized. When I go directly to the Control HUB, it's working :)

Now I can test the Python-Bridge. SEPIA asks me about the code word and accepts the word "friend" :)

How do I have to adapt the main.py so I get a given result as a spoken answer?

fquirin commented 2 years ago

> i went allways from the sepia webclient menu to "control-hub" -> not authorized when i go direct to the control hub its working :)

Hmm :thinking: and you were logged in with the uid1007 user inside the app? It should transfer the login over to the Control HUB actually. The "account" button will show you the active login.

> sepia asks me about the code word and accept the word "friend" :)

:grin:

> where is the main.py file located or where do i have to put my python file?

I'd put it next to your SEPIA installation, but in general you can put it anywhere you want. The Python-Bridge itself is basically just a micro server that makes your code accessible via HTTP calls. It is optimized to integrate into the SEPIA server NLU chain, which will redirect user input to the bridge automatically without the need for a custom service. That's why the main.py has the endpoint /nlu/get_nlu_result, for example, but this would require you to write some extensive Python logic to identify the intent in the first place. Since you already have the custom service, I'd simply add a specific WolframAlpha endpoint to the main.py.

You can edit the main.py and add your own endpoint. Something like this should work:

# add this to the Python-Bridge main.py (FastAPI, BaseModel and
# HTTPException are already imported there)

class WolframAlphaResponse(BaseModel):
    """Response object for WA request"""
    answer: str = None

@api.get("/wolframalpha")
def search(q: str = None):
    if q is not None:
        # do your stuff with the question
        answer = "what wolfram says"
        return WolframAlphaResponse(answer = answer)
    else:
        raise HTTPException(status_code=400, detail="Missing query parameter 'q'")

And then you can run the server: uvicorn main:api --host 0.0.0.0 --port 20731 --log-level info --reload
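The "do your stuff with the question" part can reuse roy's script almost verbatim. Here is a minimal sketch of such a helper, assuming `deep_translator` and `wolframalpha` are installed and a valid APP-ID is set; the `translate` and `query` parameters are only there for illustration, so the logic can be tried out without the real services:

```python
def answer_question(question, translate=None, query=None):
    """Translate a question to English, then ask WolframAlpha.

    'translate' and 'query' default to the real services from roy's
    script above; they can be swapped out, e.g. for testing.
    """
    if translate is None:
        # lazy import so the module still loads without the package
        from deep_translator import GoogleTranslator
        translate = GoogleTranslator(source="auto", target="en").translate
    if query is None:
        import wolframalpha
        client = wolframalpha.Client("Wolfram APP-ID HERE")
        query = lambda q: next(client.query(q).results).text
    return query(translate(question))

# inside the /wolframalpha endpoint you would then call:
#   answer = answer_question(q)
```

With fake services plugged in, `answer_question("Was ist 2 + 2", translate=lambda q: "what is 2 + 2", query=lambda q: "4")` returns "4", which shows the translate-then-query chain without any network calls.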

Inside your custom service you can take the input of the user and send it to your Python-Bridge like this:

String searchTerm = "... the question ...";
//NOTE: the first URL parameter starts with '?', not '&'
String url = "http://127.0.0.1:20731/wolframalpha" + "?q=" + URLEncoder.encode(searchTerm, "UTF-8");
JSONObject res = Connectors.httpGET(url, null, headers);
//Get the answer field
if (Connectors.httpSuccess(res)){
    String answer = JSON.getString(res, "answer");
}
...

NOTE: This is basically untested code so there might be some typos and stuff :sweat_smile:

royrogermcfreely commented 2 years ago

hey,

sad to say, I have no idea what I'm doing ^^

I played around a bit with the Python-Bridge over the last few days.

I tried to make a test.py file where SEPIA just speaks my input text, but I have no clue how to start or get this working.

I have to say, my Wolfram Python script was just copy-paste ;)

Maybe you or someone else can upload an example for simple use, so I / others can get started with simple tasks.

But I know you're pretty busy finishing the next release, and you've got a life too (hopefully ;) ).

I'll try my luck with it from time to time, or I need to start learning Python :D

/roy

btw. is it possible to publish a new alarm / list entry via MQTT?

fquirin commented 2 years ago

Hey roy,

If you post your custom smart-service and Python "drafts", I might be able to give you some hints on how to proceed. I'll try to put together a little "custom service + Python" example in the coming days as well. Since Python is so popular these days, I guess it can't hurt ;-)

fquirin commented 2 years ago

I've added a new SDK demo service called DynamicQuestionAnswering. If you check the 'getResult' method, you will find some lines that are commented out. You should be able to use those to build your WolframAlpha skill :slightly_smiling_face: . The "only" thing you have to do is set up the right endpoint in your Python-Bridge that can return your WA answer.

[EDIT] I've updated the demo and the Python-Bridge again. In the Python-Bridge you will now find this entry:

# ---- Custom Endpoints:

# -- Example GET with 'q' as URL parameter
@api.get("/my-service")
def my_service(q: str = None):
    if q is not None:
        # implement your logic here
        return {"myReply": "to be implemented"}
    else:
        raise HTTPException(status_code=400, detail="Missing query parameter 'q'")

And in the SDK demo, the corresponding call:

String qUrlParam = "?q=" + URLEncoder.encode("my question", "UTF-8");
//NOTE: no trailing slash before the query string
JSONObject response = Connectors.simpleJsonGet("http://localhost:20731/my-service" + qUrlParam);
if (response != null && response.containsKey("myReply")){
    myReply = JSON.getString(response, "myReply");
    ...
}

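For a quick check of the bridge endpoint before touching the Java side, the same request can be built and parsed with plain Python. This is just a sketch using only the standard library; the URL and the "myReply" key match the snippet above:

```python
import json
from urllib.parse import quote

def build_request_url(base_url, question):
    # URL-encode the question into the 'q' parameter,
    # like URLEncoder.encode does on the Java side
    return base_url + "?q=" + quote(question, safe="")

def extract_reply(response_body):
    # pull 'myReply' out of the JSON the bridge returns
    return json.loads(response_body).get("myReply")

print(build_request_url("http://localhost:20731/my-service", "my question"))
# http://localhost:20731/my-service?q=my%20question
print(extract_reply('{"myReply": "to be implemented"}'))
# to be implemented
```

Point `build_request_url` at your running bridge and fetch the URL with any HTTP client (curl, requests, a browser) to confirm the endpoint answers before wiring up the custom service.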
royrogermcfreely commented 1 year ago

Finally got it working :)

Here is the link to the WolframAlpha Smart-Service

/roy