m0ngr31 / kanzi

Alexa skill for controlling Kodi
https://lexigr.am
MIT License
427 stars 149 forks

FYI: Successful Docker Installation #147

Closed: michaelrcarroll closed this 4 years ago

michaelrcarroll commented 7 years ago

I was able to successfully get this working with Docker yesterday. A few setup details:

Dockerfile

FROM python:2.7-alpine
MAINTAINER Michael Carroll <me@michaelrcarroll.com>

#install necessary dependencies
RUN apk update && apk add git build-base libffi-dev openssl-dev

ENV INSTALL_PATH /kodi-alexa
RUN mkdir -p $INSTALL_PATH

WORKDIR $INSTALL_PATH

#get latest
RUN git clone https://github.com/m0ngr31/kodi-alexa.git .

#install requirements
RUN pip install -r requirements.txt

#faster fuzzy matching
RUN pip install python-Levenshtein

CMD gunicorn --certfile /config/$CERTFILE --keyfile /config/$KEYFILE -b 0.0.0.0:8000 alexa:app

Build Command

docker build -t kodi-alexa .

docker-compose.yml

version: "2"
services:

  kodi-alexa:
    container_name: kodi-alexa
    image: kodi-alexa
    ports:
      - 8443:8000
    environment:
      KODI_ADDRESS: htpc
      KODI_PORT: 8080
      KODI_USERNAME: kodi
      KODI_PASSWORD: kodi
      TZ: $TZ
      SKILL_TZ: $TZ
      CERTFILE: fullchain.pem
      KEYFILE: privkey.pem
    volumes:
      - $CONFIG/kodi-alexa:/config

Where $TZ and $CONFIG are defined in a .env file:

TZ=America/Los_Angeles
CONFIG=/volume1/docker

And /volume1/docker/kodi-alexa has the fullchain.pem & privkey.pem from Let's Encrypt.

Docker Compose Command

docker-compose up -d

Runs in detached mode and creates the container with the proper configuration.

Conclusion

I hope these details help someone as a starting point. It proves that it is possible to run this through Docker. I do not plan on creating a pull-request for this, as I just wanted to try it out. In my testing, I found it to be too slow to respond for my usage (and the girlfriend approval factor).

m0ngr31 commented 7 years ago

Thanks for doing that. Is your Synology an ARM box? I wonder if that's why it was so slow?

michaelrcarroll commented 7 years ago

The DS411+ii has a dual-core Intel Atom D525 processor (http://ark.intel.com/products/49490/Intel-Atom-Processor-D525-1M-Cache-1_80-GHz). Fairly old, but not too slow. I didn't do much investigating into why it was slow, but I've got a gigabit internet connection, so the network connectivity part shouldn't have been it. I saw the requests (and responses) come through almost immediately on the Docker console, but Alexa spent some time "spinning" before processing the command.

jingai commented 7 years ago

If the request and the response showed up quickly in the log, it's odd that it took that long for the device to respond. The final response packet to the Echo is small.

jingai commented 7 years ago

Anyone up for actually documenting this and creating a PR?

Ltek commented 7 years ago

I'm willing to try this on my Syno if someone can help me thru it... I'm not sure what I need to do.

Do I need to create/get a cert?

Do I need to edit docker-compose.yml after the install?

... etc

mboeru commented 7 years ago

Hello all,

I've tested the above and it works well for me. It's actually faster than the lambda function. I am running it on a J3160 CPU, alongside other dockerised apps. I've started to do some work on integrating it into the repo here: https://github.com/mboeru/kodi-alexa. And a built image is available here, if anyone wants to test it: https://hub.docker.com/r/mboeru/kodi-alexa/

It can be run locally like this:

docker run --name=kodi-alexa -d -v ~/ka-config:/config -p 8000:8000 -e "KODI_ADDRESS=192.168.54.14" -e "KODI_PORT=8080" -e "GUNICORN_LOGLEVEL=debug" mboeru/kodi-alexa 

Or via docker-compose like this:

version: '2'
services:
  kodi-alexa:
    container_name: kodi-alexa
    image: mboeru/kodi-alexa:latest
    network_mode: bridge
    restart: always
    ports:
      - 8000:8000/tcp
    environment:
      - TZ=Europe/Bucharest
      - KODI_ADDRESS=192.168.54.14
      - KODI_PORT=8080
      - GUNICORN_LOGLEVEL=debug
      - SKILL_APPID="amzn1.ask.skill.XXXXXX"
      - MAX_UNWATCHED_SHOWS=15
      - MAX_UNWATCHED_EPISODES=15
      - MAX_UNWATCHED_MOVIES=15

By default I did not enable https in gunicorn, as most folks probably tend to have a reverse proxy set up for other apps as well. I expose it via nginx like this:

location /bobby {
    rewrite            ^/bobby/(.*)  /$1  break;
    proxy_pass         http://192.168.54.220:8000;
    proxy_redirect     http://192.168.54.220:8000 /bobby;
    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Server $host;
    proxy_set_header   X-Forwarded-Host $server_name;
}

in a standard nginx server block.

If it looks good, I will send a PR request after I create some proper documentation on this.

jingai commented 7 years ago

Thanks for having a look at it. Assuming it works, a PR would be welcome :)

My only comment is about all of the print statements you added.. if anything, they should be centralized somewhere like Kodi.SendCommand(), and they definitely should not be enabled by default.

We can talk about adding a debug option, though I don't believe it belongs in this particular PR.

jingai commented 7 years ago

Also, in your docker-compose example, why are you setting environment variables for configuration rather than reading kodi.config? The environment variables are really there for Heroku deployments, simply because apparently it won't pick up anything out of repo.

mboeru commented 7 years ago

You are right about the prints. I am bad at programming :) as I am a systems guy. I tried to find a centralised way, but could not find one, so I did all the prints. I will remove them anyway, as they don't seem to go to stdout, and docker logs isn't picking them up.

In Docker, I think it's easier to have the same setup as in Heroku, with env variables defined, and in my opinion that should be the way to go. And if a kodi.config file is available in the /config path, then that should take priority over the env variables.

For starting out it's definitely easier with env vars, rather than wading through the complicated vars in the config, as most people probably use it with one Alexa device and one Kodi instance.

Let me know what you think.

jingai commented 7 years ago

You don't get support for multiple Kodi instances though if you use the environment vars. Honestly I wish we could dump them entirely because right now we are maintaining configuration defaults in 3 places.

Unless it simply can't be done with docker, I think it should read kodi.config
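For reference, a minimal kodi.config for a single-instance setup might look something like this (a hypothetical sketch with field names inferred from the env vars used earlier in this thread; consult kodi.config.example in the repo for the actual template):

```ini
# Hypothetical sketch -- the real template in the repo is authoritative
[DEFAULT]
address  = htpc
port     = 8080
username = kodi
password = kodi
```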

mboeru commented 7 years ago

Well, I can create an entrypoint.sh that creates the configuration file with its defaults, or we can just map it as a volume. In any case it would be another file to manage, external to Docker in a way. I will think of a way and test it in the next couple of days. Still, since these vars are used by Heroku, I would keep them and use them as a simple example in the docs, then provide the advanced way of setting it up from the config file.

For me docker should be a simpler way of setting kodi-alexa, than using a lambda function and doing everything in aws. Right now I have two kodi-alexa instances, one in aws and one in docker and testing them both.

jingai commented 7 years ago

I'll look at it in a bit, but it seems odd to me that you can't just do it the same way as for AWS: point the user at the template, have them edit it, and place it in the same directory.

I wasn't trying to say that we needed to generate the configuration with a script or anything.

mboeru commented 7 years ago

If the image comes with an explicit version of the code embedded in it, we would need to symlink the config somewhere outside the image and add it as a volume. As an example, in the app directory there would be a symlink kodi.config -> /config/kodi.config, and /config would be a volume mounted in the container.
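A sketch of that layout (paths as described above; the image name is just an example):

```shell
# Build-time (in the Dockerfile): leave a dangling symlink in the app dir
ln -s /config/kodi.config /kodi-alexa/kodi.config

# Run-time: mount a host directory over /config so the link resolves
docker run -d --name=kodi-alexa -v ~/ka-config:/config -p 8000:8000 kodi-alexa
```

The symlink dangles inside the bare image, but resolves to the host's kodi.config as soon as the volume is mounted, so the config survives container upgrades.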

Another way would be if we could specify where the config file is located, which complicates things. I think we come from two different places of doing things :), but that is typically how docker works, because you want that config to be persistent.

This way, you could just do a docker pull kodi-alexa, remove the old container, create a new one with the same config file, and you are now upgraded.

Docker hub even builds the images automatically based on pushes to branches, and new tags.

jingai commented 7 years ago

You are correct that I don't have much experience with Docker. I'm out at the moment, but I'll do a bit of reading when I get home.

I'd rather not eliminate a feature (Device ID mapping) for Docker users though unless it's absolutely necessary.

We already have an easier-than-AWS deployment method: Heroku. Docker without Device ID mapping would still have the advantage of running locally (and thus faster), but it's hard to top Heroku as the 'easy' option.

I don't think we should forgo feature parity with AWS just because it's more difficult, especially since it'd ultimately be the best option (in terms of features and speed) of the three if it did support the configuration file.

jingai commented 7 years ago

FWIW, env vars are already supported and override stuff in the configuration file, so you can have it the way you want it anyway. I'm speaking more about what I'd like to document for other users.

mboeru commented 7 years ago

I fully understand, and you are right, AWS is fully featured. In terms of speed I would say that's not really the case, though: depending on a lot of factors, it can be faster to have it locally, as in my case (I am in Eastern Europe, and the kodi-alexa lambda is set up in eu-west-1). Having a Docker option, and an easier version, might bring more traction to the project, albeit a big "might". :) I did not check for variable precedence, but thought that might be the case; I just used the env vars I found in app.json. I have a dedicated mini PC for this kind of stuff, and for home automation, so it makes sense for me to keep kodi-alexa local. My point being that I am sure there are others running a similar scenario. A lot of people in the Home Assistant community, for instance.

Let me know what you think is best in terms of Docker image, and I will help however I can.

jingai commented 7 years ago

I host locally as well just for speed too, so I understand. I'm old fashioned though and just use Apache in a chroot jail :)

Docker would be viable for others to self-host, but I think we can make it fully-featured too.

I've been wanting to look at Docker more anyway.. now I have an excuse.

mboeru commented 7 years ago

That's good to hear :) let me know your thoughts.

jingai commented 7 years ago

Ok after a little bit of reading, I get how to handle the config file and that part wouldn't be terribly hard to document. We can push the symlink up to the repo and leave it dangling until someone runs this in a Docker container.

The bigger problem (documentation-wise) will be that Alexa requires an HTTPS endpoint. If we make the assumption that users know how to set up a proxy and deal with certs, we can probably also assume that they know how to get this running in Docker or whatever on their own too.

Ultimately, that's what this Issue is: documentation. If you feel like writing up how to set up certs and an HTTPS proxy (or however else you can make it work in a Docker container), we'd definitely accept that PR. But without that piece of the puzzle documented, we'll just be inundated with support requests since everyone will certainly want to do this for the reduced latency.

mboeru commented 7 years ago

The easy way would be using self-signed certs, which we could maybe create on the fly on the first docker run. We could have a flag, maybe SSL_ENABLE, enabled by default, so that the image is self-contained. Then if anyone wants to use Let's Encrypt or something else, they can disable it via the flag and have http only, exposing kodi-alexa via their reverse proxy.
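As a sketch, that first-run generation could be a single openssl call (file names chosen here to match the CERTFILE/KEYFILE values used earlier in this thread):

```shell
# Create a self-signed cert/key pair if one doesn't exist yet
if [ ! -f /config/fullchain.pem ] || [ ! -f /config/privkey.pem ]; then
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout /config/privkey.pem -out /config/fullchain.pem \
        -subj "/CN=kodi-alexa"
fi
```

gunicorn can then pick the pair up via --certfile/--keyfile as in the Dockerfile at the top of this thread.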

I can try this on my fork if it sounds good to you.

jingai commented 7 years ago

Generating self-signed certs for a default installation is fine. There's a specific option though they'll need to tick in the skill builder, which should be documented.

The flag wouldn't be disabled for Let's Encrypt (or whatever other CA) -- they'd just swap out the key and cert. I realize you probably know this and just worded it a bit wrong, but I think there should be a blurb in the docs about it.

BTW, if you'd prefer, it might be easier to chat about this on Slack.

mboeru commented 7 years ago

Sure, I can join slack. It seems I need an invite. My e-mail address is my user at gmail.

mboeru commented 7 years ago

Created a custom entrypoint that handles the SSL certificate check and creation, and also makes sure the kodi.config itself and the symlink exist. It's still rough around the edges; I will work on polishing it a bit more next week.

To build locally, from my fork

docker build -t kodi-alexa .

To run from locally created image:

docker run --name=kodi-alexa -d -v ~/ka-config:/config -p 8000:8000 -e "KODI_ADDRESS=192.168.54.14" -e "KODI_PORT=8080" -e "GUNICORN_LOGLEVEL=debug" kodi-alexa

To run directly from Docker Hub:

docker pull mboeru/kodi-alexa
docker run --name=kodi-alexa -d -v ~/ka-config:/config -p 8000:8000 -e "KODI_ADDRESS=192.168.54.14" -e "KODI_PORT=8080" -e "GUNICORN_LOGLEVEL=debug" mboeru/kodi-alexa

Where ~/ka-config is a directory on the host that will hold the SSL cert files and kodi.config.

Did not get a chance to fully test (with Alexa), but that part should still work. Let me know how it looks so far.

mboeru commented 7 years ago

I've stumbled upon an issue: it seems that if the config file exists, the env variables are ignored (https://github.com/m0ngr31/kodi-voice/blob/master/kodi_voice/kodi.py#L243).

So the variable precedence is: config file first, and only if it does not exist are the env vars used.
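In entrypoint terms, that precedence amounts to something like this (a sketch; CONFIG_DIR is just a stand-in for the /config volume, not a variable the app actually reads):

```shell
# Config file wins outright; env vars are only consulted when it's absent
CONFIG_DIR="${CONFIG_DIR:-/config}"
if [ -f "$CONFIG_DIR/kodi.config" ]; then
    echo "using $CONFIG_DIR/kodi.config (env vars ignored)"
else
    echo "using KODI_ADDRESS/KODI_PORT/... from the environment"
fi
```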

Maybe we can add a switch (meaning an env variable for Docker), with the default being to use the config file?

jingai commented 7 years ago

Yes, it's supposed to be one or the other, not a combination of the two.

I'm not sure I see why this is a problem, though?

jingai commented 7 years ago

I haven't really had time to think about this much yet, but the idea was not to automatically create kodi.config -- it was to just commit a dangling symlink to it. The user would be responsible for actually creating the file, same as we do for AWS.

mboeru commented 7 years ago

Oh, ok, I can remove that part if necessary. I thought it would be easier for someone if the config was autocreated, then adjusted, and then they just restart the container.

The issue is that if the config is autocreated, then the env would no longer be used at all, even if specified.

jingai commented 7 years ago

Yeah, just don't create it and then you'll be able to use env vars.

The creation of the configuration file will be a documentation issue as it is for AWS.

Kodi-Voice already reads the example file to seed defaults. From there you (as a user) can override with either env vars or your own config file, but not both.

I'm setting things up here to play with this, but I don't know how far I'll get as we are also preparing for the hurricane.

mboeru commented 7 years ago

Sorry to hear about the hurricane. Ok, now I get it with the kodi.config. I'll remove that part and start documenting the rest, inside the README file.

jingai commented 7 years ago

FWIW, I've got it up and running here. In entrypoint.sh I just changed it to test for the presence of the symlink; if it doesn't exist, it creates it.
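That check might look roughly like this in entrypoint.sh (a sketch of the described change, not the actual code):

```shell
# Only (re)create the symlink when it's missing; never touch a real file
if [ ! -L /kodi-alexa/kodi.config ] && [ ! -e /kodi-alexa/kodi.config ]; then
    ln -s /config/kodi.config /kodi-alexa/kodi.config
fi
```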

One thing worth mentioning is that you could add pip install python-Levenshtein to the Dockerfile to speed up the fuzzy matches some. We can't just add this to requirements.txt, as it breaks AWS and Heroku deployments.

mboeru commented 7 years ago

Did the required modifications, added some docs, and created pull request #224.

If all is ok, are you willing to add an automated build on Docker Hub, so users can just get the images directly instead of building them?

jingai commented 7 years ago

Sure, provided it's documented well in the README.

While I have a really good understanding of the underlying stuff that Docker uses, I don't have the same understanding of their wrappers for it all.

I'll read up on it a bit more when I get a chance, but I've never really looked at dockerhub. Who would be updating the images and how would they be updated? Also, remember that it requires that the users update their interaction models too, so it can't be an automated update, or the model will go out of sync with the skill code.

mboeru commented 7 years ago

If you enable automated builds, GitHub will push a build request to Docker Hub. So every time you push a change to the GitHub repo, or create a tag, it will start building a new image. This is customisable based on tags and branches, so you can enable automatic builds for only a certain branch.

jingai commented 7 years ago

That sounds fine to me.

mboeru commented 7 years ago

This is what the build settings look like:

[screenshot of the Docker Hub automated build settings]

One of the project collaborators or the owner will need to set this up.

jingai commented 7 years ago

Looks easy enough :)

We might want to document both building and using the pre-built images. @m0ngr31 thoughts?

rdfedor commented 6 years ago

@mboeru, how do I disable the SSL_ENABLE flag for my installation? I'd like to drop this Docker project behind a reverse proxy which already handles all SSL encryption, but can't, because the connection is already encrypted.

Thanks

mboeru commented 6 years ago

You cannot, but you can point your reverse proxy at it over https. This is what I have in my nginx config, for instance:

location /kodi-alexa {
    rewrite            ^/kodi-alexa/(.*)  /$1  break;
    proxy_pass         https://192.168.54.XXX:8000;
    proxy_redirect     https://192.168.54.XXX:8000 /kodi-alexa;
    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Server $host;
    proxy_set_header   X-Forwarded-Host $server_name;
}

So use https:// instead of http:// when declaring proxy_pass.

That works great for me, and I am behind nginx + Let's Encrypt.

rdfedor commented 6 years ago

Actually, that kind of helped me. I maintain my entire infrastructure using compose + git, and use nginx-gen with letsencrypt-nginx-proxy-companion to handle the automatic generation and maintenance of my SSL certs and virtual hosts. All I had to do was add VIRTUAL_PROTO=https to the environment variables.

Literally all I need to do to bring up a new service within my infrastructure is add the following snippet to my docker-compose.web.yml and add a DNS entry pointing to the host; it handles the rest.

kodi-alexa:
    image: mboeru/kodi-alexa:latest
    restart: always
    environment:
      - VIRTUAL_HOST=kodi-alexa.example.com
      - VIRTUAL_PORT=8080
      - VIRTUAL_PROTO=https
      - VIRTUAL_NETWORK=nginx-proxy
      - LETSENCRYPT_HOST=kodi-alexa.example.com
      - LETSENCRYPT_EMAIL=whome@example.com
    volumes:
      - "./kodi-alexa/kodi.config:/config/kodi.config:ro"
    networks:
      - proxy-tier

Thanks

islipfd19 commented 6 years ago

Has there been any headway with this? I've been trying to follow along, trying different combinations, but have yet to get a successful Docker image up and running.

jingai commented 6 years ago

I think the main thing we're waiting on is documentation, but it's been a while since I've looked at it.

ShadakScartan commented 6 years ago

Are there any further updates on this? I've set everything up: the Alexa skill sends via HTTPS and DDNS to my nginx, which redirects into mboeru's Docker container, and I can see the container receiving POST requests, but nothing is then sent to my Kodi installation and the Alexa skill gets a null response.

ShadakScartan commented 6 years ago

Never mind, solved it myself. Working now.

amonsosanz commented 5 years ago

I just wanted to comment that I used this image and it worked very well on my Pi 3:

https://github.com/linuxserver/docker-kanzi