spectraphilic / wsn_server

Software and Django applications for WSN and IoT setups
GNU General Public License v3.0

PostgreSQL tuning #3

Closed jdavid closed 5 years ago

jdavid commented 5 years ago

PostgreSQL is becoming slow as it grows.

I've made these changes to the configuration:

With these changes the following query (found in the logs) goes from 4 minutes to 3 seconds:

SELECT time, bat as battery
FROM wsn_frame JOIN wsn_metadata ON metadata_id = wsn_metadata.id
WHERE
          wsn_metadata.name IN('pf-Juvvass-4')
          AND bat IS NOT NULL
          AND time >= 1537286218 AND time <= 1552924618
ORDER BY time;
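For this access pattern (filter by metadata, then a time range) the textbook fix is a composite index covering both the join key and the time column. The index below is a sketch of that idea, not something taken from the issue; the name is made up:

```sql
-- Hypothetical composite index: lets Postgres resolve both the
-- metadata_id join and the time-range predicate from the index,
-- instead of scanning and filtering heap rows.
CREATE INDEX CONCURRENTLY wsn_frame_metadata_time_idx
    ON wsn_frame (metadata_id, "time");
```

CONCURRENTLY avoids locking writes on wsn_frame while the index builds, at the cost of a slower build.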

Regarding pf-Juvvass-4, I've found that it's slower than querying other motes. The reason is that there are many metadata rows with that name.

The core idea here is to reduce the number of metadata rows when possible.
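The duplication can be measured with a simple GROUP BY over wsn_metadata. Below is a minimal, self-contained sketch of that check using an in-memory SQLite stand-in (the table and column names mirror the issue; the rows are made up):

```python
import sqlite3

# In-memory stand-in for the real wsn_metadata table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wsn_metadata (id INTEGER PRIMARY KEY, name TEXT)")

# Fabricated data: one name repeated many times, others appearing once.
rows = [("pf-Juvvass-4",)] * 16 + [("mobileflux1",), ("pf-Finse-1",)]
conn.executemany("INSERT INTO wsn_metadata (name) VALUES (?)", rows)

# Same shape of query one would run on Postgres to find the worst offenders.
dupes = conn.execute(
    "SELECT name, COUNT(*) AS n FROM wsn_metadata "
    "GROUP BY name HAVING COUNT(*) > 1 ORDER BY n DESC"
).fetchall()
print(dupes)  # [('pf-Juvvass-4', 16)]
```

On the real database the same query would show which mote names have accumulated redundant metadata rows and are therefore worth deduplicating first.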

However, other queries are still slow.

jdavid commented 5 years ago

This is another slow query; increasing work_mem has not helped.

EXPLAIN ANALYZE SELECT
          time,
          wsn_metadata.name as metric,
          (-"PowLicor"::float)*100. as "test"
        FROM
          wsn_frame JOIN wsn_metadata ON metadata_id = wsn_metadata.id
        WHERE
          wsn_metadata.name IN('mobileflux1')
          AND wsn_frame."PowLicor" IS NOT NULL
          AND time >= 1521363540 AND time <= 1552899540
        GROUP BY wsn_metadata.name, time, "test"
        ORDER BY time;

                                                                                   QUERY PLAN                                                                                   
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Group  (cost=450189.94..450269.33 rows=4537 width=22) (actual time=12314.791..14117.677 rows=5673233 loops=1)
   Group Key: wsn_frame."time", wsn_metadata.name, (((- (wsn_frame."PowLicor")::double precision) * '100'::double precision))
   ->  Sort  (cost=450189.94..450201.28 rows=4537 width=22) (actual time=12314.787..12882.946 rows=5673233 loops=1)
         Sort Key: wsn_frame."time", (((- (wsn_frame."PowLicor")::double precision) * '100'::double precision))
         Sort Method: external merge  Disk: 188752kB
         ->  Nested Loop  (cost=0.72..449914.37 rows=4537 width=22) (actual time=4417.936..9884.012 rows=5673233 loops=1)
               ->  Index Scan using wsn_metadata_name_tags_19cf3337_uniq on wsn_metadata  (cost=0.28..4.33 rows=3 width=14) (actual time=0.142..0.192 rows=16 loops=1)
                     Index Cond: ((name)::text = 'mobileflux1'::text)
               ->  Index Scan using wsn_frame_metadata_id_018a8e95 on wsn_frame  (cost=0.44..149203.03 rows=75564 width=10) (actual time=281.145..573.650 rows=354577 loops=16)
                     Index Cond: (metadata_id = wsn_metadata.id)
                     Filter: (("PowLicor" IS NOT NULL) AND ("time" >= 1521363540) AND ("time" <= 1552899540))
                     Rows Removed by Filter: 471623
 Planning time: 1.487 ms
 Execution time: 14382.759 ms
(14 rows)
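One thing the plan suggests: the Group node emits exactly the same 5,673,233 rows the Sort feeds it, so the GROUP BY performs no actual aggregation here and only adds sort work (the Sort spills 188 MB to disk, well past any realistic work_mem). A sketch of the same query with the GROUP BY dropped, untested against the real schema:

```sql
SELECT
    time,
    wsn_metadata.name AS metric,
    (-"PowLicor"::float) * 100. AS "test"
FROM wsn_frame JOIN wsn_metadata ON metadata_id = wsn_metadata.id
WHERE
    wsn_metadata.name = 'mobileflux1'
    AND wsn_frame."PowLicor" IS NOT NULL
    AND time >= 1521363540 AND time <= 1552899540
ORDER BY time;
```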
jdavid commented 5 years ago

I've reduced the memory usage of the Django servers by reducing the number of processes to 2 and increasing the number of threads to 4. This leaves a bit more memory for PostgreSQL.
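For reference, with uWSGI that change is a two-line config edit; the app server actually used is not named in the issue, so this fragment is an assumption:

```ini
; Hypothetical uWSGI settings matching the change described:
; 2 worker processes, 4 threads each, freeing RAM for PostgreSQL.
processes = 2
threads = 4
```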

Then I've tuned the PostgreSQL config a bit more:

shared_buffers = 2GB
work_mem = 64MB # was 8MB
maintenance_work_mem = 256MB # was 128MB
effective_cache_size = 5GB # was 4GB

But the query above is difficult to optimize, because it's just returning too many rows (5.7 million).

jdavid commented 5 years ago

Closing, there's little more we can do with this approach. See issue #6 for a follow-up.