Closed WeeSee closed 1 year ago
While it's not exactly what you're asking for, this repo contains an actix actor that spins up a REST API which can serve as a decent starting point.
Your actual idea, about building a containerized REST API that fully wraps the capabilities of the library is brilliant and I will gladly support whoever decides to build it.
Hey everyone. Someday when I get free time, I would love to learn Rust, but right now, it's hard for me...
But I'm proficient with Docker. Would that help?
Hi,
I'm going to receive a pair of L530E in the coming week. I've made a lot of Rust & Docker-related projects; I can quickly build a simple REST API if you want :)
One way this could be implemented would be to have one GET route per main feature (e.g. /on, /off, /get-device-info, etc.), and one for the set() API (when available).
Is there any specific feature you'd like to see in the API?
Also @mihai-dinculescu how would you like this to be integrated? In another repo, in the same repo as your library (as a Cargo workspace for instance)?
It makes the most sense to be a separate repo. Let me know the URL so I can add it to the README.md.
It will be interesting to see what you can develop because there are so many ways to go about it and lots of decisions about ergonomics vs breadth of options.
It might be worth experimenting with reusable handlers under different routes, e.g. an on handler used for both POST /l530/on and POST /p110/on.
Another interesting decision will be where the credentials are stored. Are they part of the API secrets, or should they be part of the payload?
And what about the session? Should there be an endpoint that logs in on a particular device and returns the session to be used for subsequent calls?
I think secrets should be put in environment variables, automatically loaded from a .env file. Sessions could work with a bearer token: you log in with the credentials, the API answers with a token, and you reuse it for future calls. The session would be stored in memory.
The Tapo account e-mail and password should go into the service as env vars, indeed. The device IP and type should probably be stored on the caller side and passed to the API.
There are two sessions to worry about:
The Tapo session. It is per device. Here are two options:
Some changes will need to be made to the Tapo client to accommodate this flexibility around the Tapo session. Still, it all sounds like valuable features to have anyway, so I will go ahead and add them in the coming days.
I'm personally not fond of JWT for multiple reasons; I think a bearer token would be a better approach.
For Tapo sessions, I imagine a simple mechanism: storing the Tapo session for each device inside the server session.
English is not my primary language, so I'm not sure if what I'm trying to say is clear. But basically you would have the server session (the one for which the user gets a bearer token to provide in the Authorization header) as well as sessions for the different Tapo devices, which would be linked to this server session.
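To make the two-level idea concrete, here is a minimal Rust sketch (all names are illustrative, not from any real codebase): one server session identified by a bearer token, which owns lazily-created per-device Tapo sessions.

```rust
use std::collections::HashMap;

/// Placeholder for whatever state a real Tapo device session carries.
struct TapoSession {
    device_name: String,
}

struct ServerSession {
    bearer_token: String,
    // One Tapo session per device, created lazily on first use.
    device_sessions: HashMap<String, TapoSession>,
}

impl ServerSession {
    fn new(bearer_token: &str) -> Self {
        Self {
            bearer_token: bearer_token.to_string(),
            device_sessions: HashMap::new(),
        }
    }

    /// Get or lazily create the Tapo session for a given device.
    fn session_for(&mut self, device: &str) -> &TapoSession {
        self.device_sessions
            .entry(device.to_string())
            .or_insert_with(|| TapoSession {
                device_name: device.to_string(),
            })
    }
}

fn main() {
    let mut session = ServerSession::new("token-123");
    session.session_for("living-room-bulb");
    session.session_for("living-room-bulb"); // reuses the existing entry
    println!(
        "{} device session(s) under token {}",
        session.device_sessions.len(),
        session.bearer_token
    );
}
```

The point of the sketch is only the ownership shape: a device session lives exactly as long as the server session it belongs to.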
That can also work, yes. It will be interesting to see what you come up with.
I'll work on the changes to help with the reusability of the RSA key pair and the session extraction these days.
Ok so I've made a little server, which can be found in the following repository: https://github.com/ClementNerma/tapo-rest. Here is the approach I chose:
Firstly you create a JSON config file (anywhere) which has this structure:
{
"account": {
"username": "<your tapo account's email>",
"password": "<your tapo account's password>"
},
"devices": [
{
"name": "living-room-bulb",
"device_type": "L530",
"ip_addr": "<ip address of the device>"
},
{
"name": "kitchen-bulb",
"device_type": "L530",
"ip_addr": "<ip address of the device>"
}
]
}
This allows storing credentials without them appearing anywhere else: not in environment variables (which can be inspected through the process list), and not on the command line itself. It also allows registering multiple devices simultaneously, and the format leaves room for new features later on (e.g. zone management, etc.)
You then run the server with:
cargo run -- --devices-config-path <path to your json file> --port 8000 --auth-password 'potatoes'
This will run the server on 0.0.0.0:8000 (you can choose any port you like) and will require clients to use the potatoes password to log in.
Please note though that the server does not use SSL certificates (only plain HTTP/1 and HTTP/2), so you absolutely need to put a reverse proxy (such as Caddy) in front of it if you don't want this secret password to travel in plain text on your network.
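For reference, a minimal Caddy setup that terminates TLS in front of the server could look like this; the domain and port are placeholders, assuming the API listens on localhost:8000 and you control the domain (Caddy then provisions the certificate automatically):

```caddyfile
tapo.example.com {
    reverse_proxy localhost:8000
}
```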
Before exposing the REST API, the server starts by connecting to all the devices specified in your config file, to ensure they are reachable and to cache the authentication results.
Clients call the POST /login route with a body of { "password": "potatoes" }. This returns a raw string, which is the session ID.
curl -i -X POST -H 'Content-Type: application/json' --data '{ "password": "potatoes" }' http://localhost:8000/login
All subsequent calls to the API must include an Authorization header containing the session ID (Authorization: Bearer <session ID>). Note that when the server exits, all sessions are currently destroyed immediately. I think a basic JSON file to store the sessions would be a good idea; I don't think we need a full-blown database for that. EDIT: I made it so the application stores a simple JSON file in dirs::data_local_dir().join(env!("CARGO_PKG_NAME")) with the session IDs and content. This way they can be reused after a server restart.
You can then access all other API routes, which are located under /actions, to use your device. Each route takes a ?device=<name> query parameter to know which device you are trying to interact with. The <name> is the same as the one you provided in your config file.
curl -i -X GET -H 'Authorization: Bearer <your session ID>' 'http://localhost:8000/on?device=living-room-bulb'
Current routes (I just started with two routes) are /actions/on and /actions/off. They work perfectly fine at my home :)
EDIT: I'm adding new ones right now; I just did /actions/set-brightness with query param level=<int> and /actions/set-color with query param name=<color>.
For later routes such as changing the color of a light bulb, if you specify a device that's either not a light bulb or one that does not have the color control feature, the server will return an HTTP error describing what happened.
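That capability check could look roughly like this; the device list and capability table below are a made-up illustration, not the ones the server actually uses.

```rust
// Hypothetical sketch: reject actions that the target device type does not
// support, producing an (HTTP status, message) pair the route handler can
// turn into an error response.

enum DeviceType {
    L530, // color bulb
    P110, // smart plug, no color control
}

/// Illustrative capability table: only the L530 supports color here.
fn supports_color(device_type: &DeviceType) -> bool {
    matches!(device_type, DeviceType::L530)
}

/// Returns Ok(()) if set-color is allowed for this device type,
/// or an HTTP status code plus a human-readable message otherwise.
fn check_set_color(device_type: &DeviceType) -> Result<(), (u16, String)> {
    if supports_color(device_type) {
        Ok(())
    } else {
        Err((400, "this device does not support color control".to_string()))
    }
}

fn main() {
    assert!(check_set_color(&DeviceType::L530).is_ok());
    let err = check_set_color(&DeviceType::P110).unwrap_err();
    println!("plug rejected with HTTP {}: {}", err.0, err.1);
}
```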
This server can very easily be bootstrapped into a Dockerfile (I'll do it as soon as I have a few more routes to play with). It is extremely lightweight; the current version only weighs a few megabytes stripped. I also managed to vendor the openssl crate to avoid any prerequisites, and I built an ARM64 standalone executable successfully (I didn't test it though).
What do you think of this system? Please tell me if you think some things should be added / modified / removed ;)
(By the way your library is really amazing, I've rarely seen an API interface so intuitive and easy-to-use in my career)
Wow, that was quick!
Here are some thoughts:
- An X-API-KEY header instead of a body object.
- /l530/on and /p110/on. The handlers can be reused.

I don't think the header should be used, as it's not a key we use on every request; it's a password used to authenticate and create a session ID. The password itself is only used on the login route.
I didn't know about config-rs; it seems nice! It'll save me some time instead of reinventing the wheel each time ^^
For endpoints I think it's a good idea, I'll see how I can implement that properly :)
I tried rewriting the whole API routing layer, and ended up with a huge macro that allows writing a pretty elegant route system:
routes! {
L530 {
async fn on(state: #State, client: #Client) -> () {
client.on().await.map_err(tapo_api_err)
}
async fn off(state: #State, client: #Client) -> () {
client.off().await.map_err(tapo_api_err)
}
async fn set_brightness(state: #State, client: #Client, level: u8) -> () {
client.set_brightness(level).await.map_err(tapo_api_err)
}
async fn set_color(state: #State, client: #Client, color: tapo::requests::Color) -> () {
client.set_color(color).await.map_err(tapo_api_err)
}
}
}
This compiles to a module exposing a make_router() -> Router<State> function populated with all routes from all devices, organized hierarchically.
State management, query parameters deserialization and session handling are automatically performed by the macro.
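As a rough illustration of the idea (not the actual tapo-rest macro, which also handles state, extraction, and sessions), a tiny macro_rules! sketch can collect handler names under a device prefix and turn snake_case names into kebab-case paths:

```rust
// Toy version of a routes!-style macro: it only generates the
// name-to-path mapping, nothing else.
macro_rules! routes {
    ($device:ident { $($name:ident),* $(,)? }) => {
        fn route_paths() -> Vec<String> {
            vec![$(format!(
                "/{}/{}",
                stringify!($device).to_lowercase(),
                stringify!($name).replace('_', "-")
            )),*]
        }
    };
}

routes! { L530 { on, off, set_brightness, set_color } }

fn main() {
    // The generated function lists every route for the device block above.
    println!("{:?}", route_paths());
}
```

A real implementation would emit handler functions and register them on a Router instead of just returning path strings, but the compile-time expansion principle is the same.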
I don't think the header should be used, as it's not a key we use on every request; it's a password used to authenticate and create a session ID. The password itself is only used on the login route.
Yeah, when you put it like that, it does make sense :)
I just tried the API, and I love where it's going!
Would it be possible to make the routes dynamic based on the configured devices instead of the device type? E.g. /devices/living-room-bulb/on instead of /l530/on?device=living-room-bulb
The server already knows the name and the type of the device. It would be easier for the client to provide only the device name instead of both name and type.
I already thought about that, but the problem is that the API would have dynamic routes, which isn't a good thing.
With the current API schema, we can generate e.g. a Swagger definition and expose all existing routes, which 1) lets you see all available routes at once and 2) shows which routes are available for which device type.
Even though having the device name as a query parameter is not the most elegant thing, I think it's the best approach, especially given that action parameters will also be in the query params.
I see what you mean. Swagger could be generated for the dynamic routes, but it will probably become quite involved and messy. 👍
I added support for new device types: L510, L610, L630, L900, L920 and L930.
Action routes are now nested under the /actions prefix.
I've added support for all remaining device types: P100, P105, P110, P115. All methods for all devices are now implemented. After thinking about it, I don't think a Dockerfile would be very useful. The API can be built into a standalone executable; you just have to download the binary and run it, and there's no external dependency.
I've added a README, @WeeSee could you tell me if this would fill your needs?
Brilliant. I'll have a look one of these days once I'm done with the super secret work of adding support for H100 and its sensors :)
I think the Dockerfile is helpful for people who want to write the config file and docker run the API on an RPi to be interacted with by HA, Node-RED, microcontrollers, etc.
Release v0.7.1 brings clone to the ApiClient.
Great repo!
It really would be helpful to have a Docker image and a REST API for easy usage from different environments such as Node-RED and others.
Can we expect such features in this repo, or are there repos out there with these features that could be mentioned in the README?