martinmeinke / rlog

Raspberry Pi based logger for KACO Powador inverters

Move ticks to backup table at night #15

Closed · martinmeinke closed this 11 years ago

martinmeinke commented 11 years ago

to keep the working table slim & fast

stylpen commented 11 years ago

And move minutes to a backup minutes table every week: there are over 9000 (over 12000, to be accurate) rows per week. That grows too fast, considering the hour trigger wants to insert here every 10 seconds. (But I still hope that inserting doesn't depend on table size.)
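Roughly what I mean, as a cron job; all the table and column names here (rlog_tick, rlog_minute, their _backup twins, the time column) are made up, and I'm assuming the SQLite backend:

```python
# Sketch of a cron-driven archive job. Table and column names are
# placeholders, not the real rlog schema; the backup table is assumed
# to have the identical column layout as the working table.
import sqlite3


def archive(db_path, table, backup_table, cutoff):
    """Move rows older than `cutoff` (an SQLite datetime modifier such as
    '-1 day' or '-7 days') from `table` into `backup_table`."""
    con = sqlite3.connect(db_path)
    try:
        with con:  # one transaction: copy and delete succeed or fail together
            # identifiers can't be bound as parameters, hence the % formatting
            con.execute(
                "INSERT INTO %s SELECT * FROM %s WHERE time < datetime('now', ?)"
                % (backup_table, table),
                (cutoff,),
            )
            con.execute(
                "DELETE FROM %s WHERE time < datetime('now', ?)" % table,
                (cutoff,),
            )
    finally:
        con.close()


# nightly: archive("rlog.sqlite", "rlog_tick", "rlog_tick_backup", "-1 day")
# weekly:  archive("rlog.sqlite", "rlog_minute", "rlog_minute_backup", "-7 days")
```

Doing the copy and the delete in one transaction means a crash halfway can't lose rows.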

stylpen commented 11 years ago

My first idea is to let Django know about that distribution between old and new data, but to handle the copy and delete not in Django but in the DB: https://groups.google.com/forum/?fromgroups=#!topic/django-users/TteIxe8hfCg

If the requested timeframe is not covered by the working table, Django knows that it has the backup table and selects from there as well. I'd suggest not sorting the responses: merge everything by device id and return it as JSON. Sorting is easy in JavaScript, and outsourcing it reduces the load on the Pi. What's your opinion?
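Django-side, something like this sketch; Minute/MinuteBackup, the field names, and the seven-day working window are all invented placeholders, not our actual code:

```python
# Read-path sketch. Minute, MinuteBackup, the field names, and the
# working-table window are hypothetical stand-ins for the real schema.
from datetime import timedelta
from itertools import chain

from django.http import JsonResponse
from django.utils import timezone

from .models import Minute, MinuteBackup  # assumed models

WORKING_WINDOW = timedelta(days=7)  # how much the working table keeps


def minute_series(request, start, end):
    chunks = [Minute.objects.filter(time__range=(start, end))
                            .values("device_id", "time", "power")]
    if start < timezone.now() - WORKING_WINDOW:
        # requested range reaches past the working table,
        # so also select from the backup table
        chunks.append(MinuteBackup.objects.filter(time__range=(start, end))
                                          .values("device_id", "time", "power"))
    series = {}
    for row in chain(*chunks):
        # merge by device id, deliberately unsorted: the client sorts in JS
        series.setdefault(str(row["device_id"]), []).append(
            [row["time"].isoformat(), row["power"]])
    return JsonResponse(series)
```

The response stays unsorted on purpose: one array per device id, and the browser does the sorting.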

martinmeinke commented 11 years ago

You're talking about the minute table, right? So the only use case where the web app would have to query the backup table would be a request for a timeframe some way in the past, with a minute-wise period, right?

On a rough estimate I come up with ~5kk tuples for a setup comprising 3 devices and one year of minute data. Is that already too much to handle?
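(For scale: 60 × 24 × 365 = 525,600 minutes a year, so 3 devices produce ≈1.6M rows at one row per device and minute; the ~5kk allows for a few values per device and minute.)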

I'd propose to first take some measurements of the actual performance of a crowded minute table.
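Something along these lines should give us numbers; the schema is a stand-in, and the absolute timings only mean something when run on the Pi itself:

```python
# Crude benchmark: does insert time depend on the size of the minute table?
# The schema below is a stand-in for the real one.
import sqlite3
import time

con = sqlite3.connect("bench.sqlite")
con.execute("CREATE TABLE IF NOT EXISTS minute"
            " (device_id INTEGER, time TEXT, power REAL)")

size = 0
for target in (10000, 100000, 1000000):
    # grow the table to `target` rows in one bulk transaction
    con.executemany(
        "INSERT INTO minute VALUES (?, datetime('now'), ?)",
        ((i % 3, 42.0) for i in range(target - size)),
    )
    con.commit()
    size = target

    # now time 100 single-row inserts, committed one by one like the logger
    t0 = time.perf_counter()
    for _ in range(100):
        con.execute("INSERT INTO minute VALUES (1, datetime('now'), 1.0)")
        con.commit()
    print("%8d rows: 100 single inserts in %.3fs"
          % (size, time.perf_counter() - t0))
```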

stylpen commented 11 years ago

> You're talking about the minute table, right? So the only use case where the web app would have to query the backup table would be a request for a timeframe some way in the past, with a minute-wise period, right?

Yes.

> On a rough estimate I come up with ~5kk tuples for a setup comprising 3 devices and one year of minute data. Is that already too much to handle?

> I'd propose to first take some measurements of the actual performance of a crowded minute table.

Not sure whether it is too much... does insert time depend on table size? Maybe it's slow because of the logging of SQL statements and everything.
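Plain appends to an unindexed SQLite table shouldn't get slower as the table grows (index maintenance would add roughly logarithmic cost), so the logging overhead is my main suspect. One thing worth ruling out: if the logger process runs with DEBUG = True, Django keeps every executed statement in memory:

```python
# Only populated when settings.DEBUG is True: Django appends every executed
# SQL statement to connection.queries, costing time per query and unbounded
# memory in a long-running logger process.
from django.db import connection, reset_queries

print(len(connection.queries), "statements held in memory")

# either run the logger with DEBUG = False, or clear the list periodically:
reset_queries()
```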