jasonacox / Powerwall-Dashboard

Grafana Monitoring Dashboard for Tesla Solar and Powerwall Systems

Time of Use Solar -> Grid (over production) and From Grid (consumption) Metrics #444

Open KevM opened 4 months ago

KevM commented 4 months ago

First, a compliment: I found this project recently and was impressed with how easy it was to get running and how feature-filled it is. Well done!

Problem: We have solar and a single Powerwall. My electrical service provider supports time-of-use billing. Sadly, they do not carry credits over between months. They also do not credit 1:1; it is more like 1/2 of the off-peak and 1/4 of the on-peak kWh charge. Because of our Powerwall we very rarely use on-peak grid power.

My problem is that we mostly (5 days out of 7) only get credit for on-peak solar production, but we typically charge our EV off-peak overnight (EV charging is our biggest energy use). This means we are not balancing our credits against our consumption: we end up being charged for a lot of off-peak usage and credited little for on-peak over-production.

The goal: move our EV charging to on-peak, but without exceeding our solar over-production for the month, to avoid the very expensive on-peak energy cost. This would save us approximately 5 cents per kWh for all of the energy we use to charge our EV.

Enhancement: I would like to create metrics to help monitor the four different "buckets" of our energy consumption and production:

These could be daily with a monthly rollup as well.

Visualization

I would like to create panels to show the difference between solar over-production and grid consumption for the current month. It "would be nice" to have a comparison with the previous year's month.

Time of Use Schedule

Making this general-purpose would be the hardest part. My use case is simple in that on-peak is Monday-Friday, 8am-8pm, year round. For general use this could get tricky: looking at the Tesla app's time-of-use scheduling UI, it can get pretty complicated.

How?

Maybe a telegraf transform could work against the Powerwall site meter API to populate a daily metric that references the time-of-use schedule and measures how much production and consumption fall into each "bucket". Another transform could then roll the daily metric up into a monthly one.

Additional context: I am a software developer, but I have limited experience with Grafana and very little experience with InfluxDB and telegraf. I'd be happy to play around working on this idea, but I would likely need some hand-holding along the way.

KevM commented 4 months ago

I didn't know about pwdusage. So I will look into that for now. 👯

youzer-name commented 4 months ago

If that doesn't work for you, I think you can get everything you need using the approach I took for time of use calculations.

TLDR: Copy some data from InfluxDB into MySQL and then you will have no problem writing queries to pull the usage/production for just the days and times you want. The data can easily be visualized in Grafana.

As described in https://github.com/jasonacox/Powerwall-Dashboard/discussions/82, I set up a separate MySQL/MariaDB container and copy data from InfluxDB to that database since MySQL offers greater ability (or really any ability when compared to InfluxDB 1.x) to work with dates and times.

To very briefly summarize what's in the linked discussion:

I extended my setup beyond what's in the linked post because I added a second solar array that I own in addition to my original array which is on a PPA (power purchase agreement). Since the powerwall 'solar' metric includes production from both systems, I have a separate table in MariaDB where I track only the production from the PPA and its costs and use that for my solar cost.

I think you're more interested in the kWh totals at different times rather than the dollar cost, correct? If so, simply getting the data into MySQL should be enough, without worrying about the TOU cost calculations. Once you have the InfluxDB kWh data in MySQL, you can easily write queries that use MySQL's date and time functions to get the information you need (https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html).

For example, in MySQL you could get the sum of usage "where week(datetime) <=12 and hour(datetime) >=4 and hour(datetime) <11 and dayofweek(datetime) = 2" which would give you only usage for Mondays from 04:00:00 to 10:59:59 for the first 12 weeks of the year.
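Spelled out as a full query, that would be something like this (just a sketch - I'm using datetime_local per the note at the end of this comment, and counting "usage" as home plus Powerwall charging, as in the queries below):

select sum(home) + sum(to_pw) as usage_kwh
from kwh
where 
week(datetime_local) <= 12
and 
hour(datetime_local) >= 4 and hour(datetime_local) < 11
and 
dayofweek(datetime_local) = 2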

Since I think you only want solar over-production, the to_grid value should be what you're looking for. For the examples below, I'm going to assume peak is Monday through Friday from noon to 8 PM, because that is my utility's peak period. You could easily extend these queries to cover multiple peak periods per day or different start/end times on different days:

Monthly On-Peak Solar Over Production

(using my data for this year so far)

select month(datetime_local), sum(to_grid)
from kwh
where 
dayofweek(datetime_local) >= 2  and dayofweek(datetime_local) <= 6
and 
( hour(datetime_local) >= 12 and hour(datetime_local) < 20 )
and 
datetime_local >= '2024-01-01'
group by month(datetime_local)

image

Monthly On-Peak Home Usage

Same query, changing the sum to "home" plus "to_pw" to capture all power going to the house or to charge the Powerwalls.

select month(datetime_local), sum(home) + sum(to_pw)
from kwh
where 
dayofweek(datetime_local) >= 2  and dayofweek(datetime_local) <= 6
and 
( hour(datetime_local) >= 12 and hour(datetime_local) < 20 )
and 
datetime_local >= '2024-01-01'
group by month(datetime_local)

image

Monthly Off-Peak Solar Over Production

(I'd need to double-check some of my parentheses here, but I think this is right)

select month(datetime_local), sum(to_grid)
from kwh
where 
(
dayofweek(datetime_local) in ('1','7')
or
(
dayofweek(datetime_local) >= 2  and dayofweek(datetime_local) <= 6
and 
( hour(datetime_local) < 12 or hour(datetime_local) >= 20 )
)
)
and 
datetime_local >= '2024-01-01'
group by month(datetime_local)

image

Monthly Off-Peak Home Usage

select month(datetime_local), sum(home) + sum(to_pw)
from kwh
where 
(
dayofweek(datetime_local) in ('1','7')
or
(
dayofweek(datetime_local) >= 2  and dayofweek(datetime_local) <= 6
and 
( hour(datetime_local) < 12 or hour(datetime_local) >= 20 )
)
)
and 
datetime_local >= '2024-01-01'
group by month(datetime_local)

image

You mentioned a daily summary... easy enough:

Solar peak-only over-production by day (February only)

select month(datetime_local), day(datetime_local), sum(to_grid)
from kwh
where 
dayofweek(datetime_local) >= 2  and dayofweek(datetime_local) <= 6
and 
( hour(datetime_local) >= 12 and hour(datetime_local) < 20 )
and 
datetime_local >= '2024-02-01' and datetime_local < '2024-03-01'
group by month(datetime_local), day(datetime_local)

image

EDIT: I originally wrote the queries above using the "datetime" field and not the "datetime_local" field. I've updated them since we're interested in local times for peak/off-peak.

KevM commented 4 months ago

This is immensely helpful, thank you.

BuongiornoTexas commented 4 months ago

I didn't know about pwdusage. So I will look into that for now. 👯

Let me know if you go down this path - I'm happy to answer questions and/or help develop an agent if your energy plan doesn't match up with the default simple agent. (If you can link to your utility's plan, I can get an idea pretty quickly.)

BuongiornoTexas commented 4 months ago

Looking a bit more closely, I think pwdusage should do a lot of this out of the box.

Problem We have Solar and a single Powerwall. My electrical service provider supports time of use.

Can I check that your plan is more or less this:

If so, pwdusage can generate the raw bucket data for the import and export in these periods, and you can get running credit/debit values with a couple of grafana transforms.

Enhancement I would like to create metrics to help monitor the 4 different "buckets" of our energy consumption and production:

This should be straightforward. We do a rolling monthly panel that tracks:

  • value of solar export.
  • value of solar power used directly.
  • value of solar power used indirectly via the powerwall.

You could do metrics of either kWh or cost on:

These could be daily with a monthly rollup as well.

Probably workable. pwdusage is not efficient because it doesn't store query results (a deliberate decision - I'm looking to minimise the impact on the dashboard stack). You could modify pwdusage to write query results back to InfluxDB or a database of your choice.

Visualization

I would like to create panels to show the difference between solar over-production and grid consumption for the current month. It "would be nice" to have a comparison with the previous year's month.

For this to work acceptably fast, you'd need to write query results back to a database. We find a simple monthly panel gives us everything we need.

KevM commented 4 months ago

My utility plan details:

Our net metering is reconciled at month end by subtracting over-production from consumption for each period.

Example: I consume 200 peak kWh and over-produce 190 kWh. I owe (200 - 190) * $0.205.
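As a sketch of how that reconciliation might look against the kwh table from the MySQL approach above (my 8am-8pm Monday-Friday peak window and the $0.205 rate are hard-coded here, and the column names follow youzer-name's queries):

select month(datetime_local) as month,
sum(home) + sum(to_pw) as peak_usage_kwh,
sum(to_grid) as peak_overproduction_kwh,
greatest( (sum(home) + sum(to_pw)) - sum(to_grid), 0 ) * 0.205 as net_peak_charge
from kwh
where 
dayofweek(datetime_local) >= 2 and dayofweek(datetime_local) <= 6
and 
( hour(datetime_local) >= 8 and hour(datetime_local) < 20 )
and 
datetime_local >= '2024-01-01'
group by month(datetime_local)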

pwdusage

I'm working through the readme now and getting it up and running for testing.

Alternatives

I do not understand well enough how the pipeline of telemetry flows from pyPowerwall to InfluxDB to Grafana. I was hoping I could maybe write a Go telemetry aggregator to capture a metric accumulating pyPowerwall values from /api/meters/aggregates - in particular, the difference between site.energy_imported and site.energy_exported scoped to the off-peak and on-peak periods. Or I could be completely wrong.

youzer-name commented 4 months ago

@KevM - I took a look at the impressive work @BuongiornoTexas has done on the pwdusage tool, although quite a bit of it went over my head as I'm crap with Python and JSON. So to me that looks like a black box with a rather intimidating setup process, whereas I write SQL queries in my sleep so using MySQL seems much easier to me. YMMV depending on your experience with SQL.

After reading the pwdusage page, I think I see a couple of other distinctions between the two approaches. Adding on MySQL or MariaDB, a cron job to refresh data, and writing queries may be the 'clunkier' of the two approaches with more moving parts. However once the kwh data is in MySQL, the queries against it are very fast. I run a separate dashboard where I only have historical data and generally look at periods from 90 days to one year at a time, and when I look at a year's data, most of the panels are done loading in 1 second (running Grafana 10.x on a Raspberry Pi 4B with a bunch of other services running).

If you go with the MySQL approach, I would create views on the database for each of the query types to make using them in Grafana easier: e.g. I would save the first query from above as "view_peak_to_grid" and the next one as "view_peak_use", etc. so you can write simple Grafana queries like "select sum(to_grid) from view_peak_to_grid where $__timeFilter(datetime_local)".
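For example, the first view might look something like this (a sketch only - I'd leave the rows un-aggregated and drop the date filter so Grafana's time filter and aggregation can do the work):

create or replace view view_peak_to_grid as
select datetime_local, to_grid
from kwh
where 
dayofweek(datetime_local) >= 2 and dayofweek(datetime_local) <= 6
and 
( hour(datetime_local) >= 12 and hour(datetime_local) < 20 );

and the Grafana panel query becomes just:

select sum(to_grid) from view_peak_to_grid where $__timeFilter(datetime_local)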

Just for fun (I really need some new hobbies 😄 ) I ran a combined peak overage/peak use query for 2023:

image

The fine print there says it took 0.061 seconds.

So I ran the daily one (for peak only)...

image

Then I ran the off-peak for 2023 (combining to_grid and home + to_pw):

image

Ok, so still needing a new hobby, I wrote the one query to rule them all. This one returns peak and off-peak values for every day in 2023:

The query:

select t1.datestamp, t1.month, t1.day,
t1.to_grid_off_peak, t1.usage_off_peak,
t2.to_grid_peak, t2.usage_peak
from (
    select date(datetime_local) as datestamp,
        month(datetime_local) as month,
        day(datetime_local) as day,
        sum(to_grid) as to_grid_off_peak,
        sum(home) + sum(to_pw) as usage_off_peak
    from kwh
    where dayofweek(datetime_local) in ('1','7')
        or (
            dayofweek(datetime_local) >= 2 and dayofweek(datetime_local) <= 6
            and ( hour(datetime_local) < 12 or hour(datetime_local) >= 20 )
        )
    group by date(datetime_local)
) t1
left join (
    select date(datetime_local) as datestamp,
        month(datetime_local) as month,
        day(datetime_local) as day,
        sum(to_grid) as to_grid_peak,
        sum(home) + sum(to_pw) as usage_peak
    from kwh
    where dayofweek(datetime_local) >= 2 and dayofweek(datetime_local) <= 6
        and ( hour(datetime_local) >= 12 and hour(datetime_local) < 20 )
    group by month(datetime_local), day(datetime_local)
) t2 on t1.datestamp = t2.datestamp
where t1.datestamp >= '2023-01-01' and t1.datestamp < '2024-01-01'
order by t1.datestamp

Revising what I said above, this one query could be saved to MySQL as a view with the final "where" statement removed (where I limited it to 2023) and then all the Grafana queries could be run against this one view. If it were named, for example, 'view_tou_usage', the panel source would look like "select to_grid_peak from view_tou_usage where $__timeFilter"
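A panel query against that view might then look something like this (a sketch; the "time" alias is just the usual Grafana convention for the time column):

select datestamp as time, to_grid_peak, usage_peak, to_grid_off_peak, usage_off_peak
from view_tou_usage
where $__timeFilter(datestamp)
order by datestamp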

A few rows of the mega-query look like:

image

That query took:

image

to pull one year's data from MySQL.

BuongiornoTexas commented 4 months ago

Example: I consume 200 peak kWh and over-produce 190 kWh. I owe (200 - 190) * $0.205.

At first glance (clarifying questions below), I don't think the cost/credit data from the simple agent will work for your case.

However, you can definitely get monthly consumption and over production kWh values from pwdusage, and you can calculate your net credits/costs via grafana transforms - but this will typically only work for one tariff at a time (history probably won't work).

It should be possible to create a modified agent that does a net consumption calculation using a repeating monthly schedule, but this may be overkill for your use case - and may not be worth the implementation effort - it depends how frequently your tariffs change and how much you value your historical data for this stuff. (We've found we mainly use the monthly report, and then drop summary results into a manual tracking spreadsheet for summary history and economic analysis.)

Can I check that your $0.59/$0.49 solar credit values are correct? Not $0.059/$0.049?

Also, can I check how net overproduction works: e.g. If I consume 220 peak and over produce 250, do I get a payment of (250-220) * $0.59?

I do not understand well enough how the pipeline of telemetry flows from pyPowerwall ...

pypowerwall provides a set of REST APIs for Powerwall data, with some processing along the way - it sounds like your aggregator would do similar things. Telegraf grabs the data (with optional filtering) and writes it to InfluxDB. Grafana is the visualisation layer, but you can do some basic transforms along the way.

BuongiornoTexas commented 4 months ago

as I'm crap with Python and JSON.

No one likes JSON. I'm regretting running with it, and may convert to TOML at some point.

a rather intimidating setup process,

That's a bit harsh - the basic setup is pretty straightforward now that Jason is hosting the docker image. Update the yml file, open the firewall port and configure your setup and usage plan in json (yeah, that part does suck!).

The biggest criticism of pwdusage is that it only captures one usage plan type at the moment (but I'm also pretty confident it will cover an awful lot of ToU plans in Australia), and that it is fiddly to add new families of plans. But the upside for the plans supported is that the user doesn't need to know their way around SQL or InfluxQL - they only need to understand their plan.

I'd like to add more usage agents for other families of usage plans - but I can't do much about this without feedback from users/potential users (limited feedback is not entirely surprising, given that it is a niche add on for a niche tool).

If I knew more people were using it, I'd spend a bit of time profiling and improving efficiency (pretty sure I know where the problems are), but with crickets for feedback, there's not a lot of incentive ...

BJReplay commented 4 months ago

If I knew more people were using it, I'd spend a bit of time profiling and improving efficiency (pretty sure I know where the problems are), but with crickets for feedback, there's not a lot of incentive ...

I tried to get this going a while ago, but couldn't. It's that long ago that I don't recall where I failed. I should try again (but then again, I've got MySQL working well, so there wasn't the incentive to get it going).

However, I've got a stepped rate FIT (first 10kWh at a higher rate than the residual) so your approach seemed worth trying.

I should try again, when I get a round TUIT.

youzer-name commented 4 months ago

as I'm crap with Python and JSON.

No one likes JSON. I'm regretting running with it, and may convert to TOML at some point.

Everyone likes Jason! Oh... not @jasonacox... JSON 😄

a rather intimidating setup process,

That's a bit harsh - the basic setup is pretty straightforward now that Jason is hosting the docker image. Update the yml file, open the firewall port and configure your setup and usage plan in json (yeah, that part does suck!).

I should have said "intimidating to me"...

Are you familiar with Paul Hibbert who does home automation videos on YouTube? Because I can't help hearing everything starting at "update the yml" in his Home Assistant boffin voice. If you don't know his channel, trust me, that's a hilarious reference. 🥲

youzer-name commented 4 months ago

If I knew more people were using it, I'd spend a bit of time profiling and improving efficiency (pretty sure I know where the problems are), but with crickets for feedback, there's not a lot of incentive ...

I tried to get this going a while ago, but couldn't. It's that long ago that I don't recall where I failed. I should try again (but then again, I've got MySQL working well, so there wasn't the incentive to get it going).

However, I've got a stepped rate FIT (first 10kWh at a higher rate than the residual) so your approach seemed worth trying.

I should try again, when I get a round TUIT.

Here you go:

image

Did you get the first 10 kWh thing working in SQL? Because it seems like it should be solvable in a query
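Something along these lines ought to work (a rough sketch against the kwh table; the 10 kWh/day tier boundary and the 0.10/0.05 rates are placeholders):

select date(datetime_local) as day,
sum(to_grid) as export_kwh,
least(sum(to_grid), 10) as tier1_kwh,
greatest(sum(to_grid) - 10, 0) as tier2_kwh,
least(sum(to_grid), 10) * 0.10 + greatest(sum(to_grid) - 10, 0) * 0.05 as fit_credit
from kwh
group by date(datetime_local)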

BuongiornoTexas commented 4 months ago

I should try again, when I get a round TUIT. ... I've got MySQL working well,

A working solution is worth a lot more than the time spent getting pwdusage up and running.

However, I've got a stepped rate FIT (first 10kWh at a higher rate than the residual)

I assume this is the first 10kWh of the day?

This won't work out of the box for pwdusage - at the very least it requires a tweaked usage agent, and it raises issues about reporting part days - e.g. how to handle a view starting from, say, 11:00am after part of the day's generation has already happened.

But as it seems to be a common aspect of Australian plans, I'll have a think about it! I can already see a path to incorporating it, but generalising the combo of ToU and tiered tariffs may make the JSON explode - which we'd all rather avoid.

Are you familiar with Paul Hibbert

Not seen/heard this!

BJReplay commented 4 months ago

Did you get the first 10 kWh thing working in SQL? Because it seems like it should be solvable in a query

Agreed.

I can't even remember (and I'm a bit under the weather at the moment - well enough to accept your TUIT and reply here, but not well enough to fire up the tools and look).

youzer-name commented 4 months ago

Are you familiar with Paul Hibbert

Not seen/heard this!

This isn't the exact scene I was looking for, but this is the general idea:

https://youtu.be/glqS5kEiJoQ?si=4zbM9TmIz5OpkVTs&t=751

or this:

https://youtu.be/LI3lhgOiZ-8?si=08REST2KRhVp-i0b&t=796

BuongiornoTexas commented 4 months ago

This isn't the exact scene I was looking for, but this is the general idea:

Nice!

Also, @youzer-name, I'm going to kind of hate both you and @BJReplay a little bit for the rest of today. I can now see a way to do combined tiered and ToU plans, but it'll probably involve a rewrite of the core pwdusage engine (especially since it will need to capture non-ToU tiers and even worse, tiers that cross ToU boundaries). Back to the python mines at some point then.

BJReplay commented 4 months ago

Please don't do it for me (I'm blissfully ignorant at this point in time as to what I did when I got my tiered FIT, other than I remember doing something); that said, solving a challenge is sometimes an itch you have to scratch.

BuongiornoTexas commented 4 months ago

solving a challenge is sometimes an itch you have to scratch.

And that's the problem - I'm mostly hating myself! That said, I've just started playing Elden Ring, and the way I'm having my backside handed to me, python coding will probably seem like a welcome and trivial thing when I get back to it.

KevM commented 4 months ago

Hmm, I'm trying to understand the data flow from pypowerwall into InfluxDB. I'm querying the InfluxDB time series and only see the http and weather measurements. Isn't there more detail being displayed on the dashboards?

Here is the gist of my http measurement:

Query:

select home, from_grid, to_grid, to_pw, from_pw, solar, percentage from http WHERE time >= now() - 1h ORDER By time DESC

screenshot of influxdb query

On the dashboard I see detailed information about battery reserve settings, etc.

Looking at telegraf.conf, it seems like a lot more data should be in the InfluxDB fields, but I don't see where the fields I'm seeing in InfluxDB are being selected out of the JSON.

youzer-name commented 4 months ago

Hmm I'm trying to understand the data flow of pypowerwall http to influxdb. I'm querying the influxdb timeseries and only see http and weather measurements. Isn't there more details being displayed on the dashboards?

You have to query against the specific retention policy. If none is specified, it uses the default, which is autogen.

So your query is identical to:

select home, from_grid, to_grid, to_pw, from_pw, solar, percentage from autogen.http WHERE time >= now() - 1h ORDER By time DESC

you can use "show retention policies" to see the other ones.

> show retention policies
name    duration shardGroupDuration replicaN default
----    -------- ------------------ -------- -------
autogen 0s       168h0m0s           1        true
raw     72h0m0s  24h0m0s            1        false
kwh     0s       168h0m0s           1        false
daily   0s       168h0m0s           1        false
monthly 0s       168h0m0s           1        false
pwtemps 0s       168h0m0s           1        false
vitals  0s       168h0m0s           1        false
grid    0s       168h0m0s           1        false
pod     0s       168h0m0s           1        false
alerts  0s       168h0m0s           1        false
strings 0s       168h0m0s           1        false
> select * from pod.http order by time desc  limit 1
name: http
time                 PW1_POD_nom_energy_remaining PW1_POD_nom_full_pack_energy PW2_POD_nom_energy_remaining PW2_POD_nom_full_pack_energy PW3_POD_nom_energy_remaining PW3_POD_nom_full_pack_energy backup_reserve_percent host month nominal_energy_remaining nominal_full_pack_energy source url year
----                 ---------------------------- ---------------------------- ---------------------------- ---------------------------- ---------------------------- ---------------------------- ---------------------- ---- ----- ------------------------ ------------------------ ------ --- ----
2024-03-05T15:57:00Z 13580                        14768                        13442                        14593                        13342                        14484                        90                          Mar   40364                    43845                               2024
> select * from vitals.http order by time desc limit 1
name: http
time                 ISLAND_FreqL1_Load ISLAND_FreqL1_Main ISLAND_FreqL2_Load ISLAND_FreqL2_Main ISLAND_FreqL3_Load ISLAND_FreqL3_Main ISLAND_VL1N_Load ISLAND_VL1N_Main ISLAND_VL2N_Load ISLAND_VL2N_Main ISLAND_VL3N_Load ISLAND_VL3N_Main METER_X_VL1N METER_X_VL2N METER_X_VL3N PW1_PINV_Fout      PW1_PINV_VSplit1 PW1_PINV_VSplit2 PW1_f_out         PW1_i_out          PW1_p_out           PW1_q_out PW1_v_out         PW2_PINV_Fout     PW2_PINV_VSplit1 PW2_PINV_VSplit2 PW2_f_out          PW2_i_out          PW2_p_out           PW2_q_out           PW2_v_out         PW3_PINV_Fout     PW3_PINV_VSplit1 PW3_PINV_VSplit2 PW3_f_out         PW3_i_out          PW3_p_out          PW3_q_out          PW3_v_out          host month source url year
----                 ------------------ ------------------ ------------------ ------------------ ------------------ ------------------ ---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ------------ ------------ ------------ -------------      ---------------- ---------------- ---------         ---------          ---------           --------- ---------         -------------     ---------------- ---------------- ---------          ---------          ---------           ---------           ---------         -------------     ---------------- ---------------- ---------         ---------          ---------          ---------          ---------          ---- ----- ------ --- ----
2024-03-05T15:58:00Z                                                                                                                                                                                                                                                                60.010727272727266                                   60.01077272727273 2.6863636363636365 -145.45454545454547 -10       246.6045454545455 60.01118181818181                                   60.011181818181825 2.5272727272727273 -135.45454545454547 -14.545454545454545 246.7181818181818 60.01018181818181                                   60.01022727272729 2.5954545454545457 -129.0909090909091 13.181818181818182 246.60909090909092      Mar              2024

BuongiornoTexas commented 4 months ago

Looking at the telegraf.conf it seems like a lot more data should be in the influx fields but I don't see where the json fields I am seeing in influxdb are getting selected out of the json.

Supporting the response from @youzer-name - grab Chronograf as well - you can use it as an interactive browser and query builder for InfluxDB. It will list the RPs, data buckets and variables available from InfluxDB. I used it to get a handle on both InfluxQL and Flux (Flux is a) horrible and b) deprecated - not worth chasing).

BuongiornoTexas commented 4 months ago

This is as close as I can get with the simple pwdusage agent:

image

Unfortunately, I can't see a way to apply costs to the kWh net usage/over-production values due to a combination of limitations between the JSON import plugin and Grafana's relatively inflexible transforms (very hard to do an if statement!). With a better choice of data API for the usage data, I suspect this could be easily fixed - paying the price for my laziness here!

KevM commented 4 months ago

I'm currently diving into learning how telegraf slurps pypowerwall data into InfluxDB and seeing if that's a good place for an aggregator. It seems like your projects could benefit from that as well. My goal is to collect two measurement aggregations: consumption (grid to home) and over-production (solar to grid) over time. Then maybe figure out how rollups work (daily, weekly, monthly, etc.).

I'm really not looking for costs; I am more interested in usage in kWh for on-peak and off-peak. The cost calculation is more of a nice-to-have. That said, I think costs could be derived from the measurements I'm looking to collect, which may obviate the need for a side database. Could the cost calculator be a telegraf processor that emits a new measurement?

I'd love someone to chime in with how the rollups work. I am mostly ignorant of what exactly powerwall dashboard is doing behind the scenes.

There looks to be some drama regarding the use of InfluxDB version 1.8 when the world seems to be moving towards v3 at the moment, but again I'm ignorant of that too.

youzer-name commented 4 months ago

I'm going to plug my method again.

The fundamental problem with InfluxDB 1.x is the lack of date/time manipulation functions. You'll need to anticipate every query you would ever want to use and tag the data when storing it. If you want to query by the day of the week, you'd have to tag each entry with a DOW value separate from the datetime. Do you want to find everything on weekdays between certain hours? You'll have to tag each value when it is stored to indicate which time window it is in. You could probably tag each row with a simple 'peak' or 'off peak' tag, but you'll want to get it right the first time. Going back to change the tags, while possible, is a PITA.

You also have to deal with the fact that a tag is not the same as a field, and that adding tags with many discrete values will bloat the memory utilization of the database. If you want to be able to query "where X = Y", X must be a tag, not a field. If there are many different values for Y, memory utilization will skyrocket.

Once you have a MySQL instance up and running, you have complete flexibility to manipulate the data. I was able, in a few minutes, to write queries that got the results you need from SQL even though I didn't create the database with those specific parameters in mind. SQL is simply a superior query language for extracting the information you need from the data.

InfluxDB with complex date/time queries is very much a square peg/round hole situation. You might be able to make it fit, but it's going to take a lot of forcing.

BuongiornoTexas commented 4 months ago

@youzer-name is spot on with his comments about issues with influxDB. It's not great to work with if you want to get clever about time series manipulation.

I'm really not looking for costs; I am more interested in usage in kWh for on-peak and off-peak

Um. This was the data I provided in my example plot. Usage is Import (from grid), and Over production is Export (to grid). The net values are the differences between usage and over production.

image

You can do more granular output if you want. E.g. our last 7 days of usage and over production by day (please excuse the American style month/day notation - I almost never use this, so haven't localised it to sensible day/month).

image

I'm currently diving into learning how telegraf ... It seems like your projects could benefit from that as well

That'll be a big Australian "Yeah, nah!" from me on that. It may be possible to do simple ToU cases with telegraf, but from my understanding it will be hard to manage complex cases in a general way (I'll be very happy to be proved wrong, though!). To give some idea of what a usage plan might need to handle, here are just some of the issues in the Australian market:

I'll be ecstatic if you can find a path that makes these things easy for general usage. But I won't be surprised if you end up like @youzer-name and I - concluding there are things that the InfluxData stack does well, but it's a bloody blackberry thicket if you want to do anything outside that narrow path.