Postgres 9.0 vs Postgres 10 & 11 Performance



























We want to move our databases from a webhosting provider (PostgreSQL 9.0) to a server on our local network (we have tried both PostgreSQL 10 and the latest 11).
The machine is a Windows server with a fast Xeon CPU and 16 GB of RAM, dedicated to the database.



But even after raising default_statistics_target = 4000 and running ANALYZE, many views that used to run very fast now give us trouble.
It seems the webhosting provider's server was fine-tuned, and our execution plans are probably odd for some reason.
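For reference, the change above amounts to something like the following (a minimal sketch, assuming a superuser session; the target value is the one mentioned above):

```sql
-- Raise the global statistics target and refresh statistics.
ALTER SYSTEM SET default_statistics_target = 4000;
SELECT pg_reload_conf();  -- make the running server pick up the new setting
ANALYZE;                  -- rebuild planner statistics for all tables
```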



Our Postgres uses the stock install configuration.



A simplified example query is below.
The biggest table is "dale", which holds several million records (it is a binding table of foreign keys);
the other tables are much smaller, tens of thousands of records.
(The system is a fresh install and has been vacuumed and analyzed.)



EXPLAIN ANALYZE
SELECT REKLAMACNY_LIST.ID REKLAMACNY_LIST_ID
FROM REKLAMACNY_LIST
WHERE REKLAMACNY_LIST.VALIDTO IS NULL
AND REKLAMACNY_LIST.ID > 0
AND
( SELECT NOT tr.typ_odstupenia::boolean
AND sr.konecny_stav::boolean
FROM dale d1
CROSS JOIN typ_reklamacie tr
CROSS JOIN dale d2
CROSS JOIN stav_reklamacie sr
WHERE TRUE
AND d1.fk7 = reklamacny_list.id
AND d2.fk7 = reklamacny_list.id
AND d1.fk1 = tr.id
AND d2.fk3 = sr.id
AND sr.validto IS NULL
AND tr.validto IS NULL
AND d1.validto IS NULL
AND d2.validto IS NULL )
ORDER BY reklamacny_list_id DESC
LIMIT 100


EXPLAIN ANALYZE output from our test PostgreSQL 11 server:



"Limit  (cost=0.29..18432.84 rows=100 width=4) (actual time=11804.484..331036.595 rows=91 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..2578713.84 rows=13990 width=4) (actual time=11804.482..331036.524 rows=91 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 29199"
" SubPlan 1"
" -> Hash Join (cost=5.30..87.57 rows=250 width=1) (actual time=5.246..11.824 rows=1 loops=27981)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Merge Join (cost=1.85..80.76 rows=324 width=9) (actual time=5.222..11.806 rows=6 loops=27981)"
" Merge Cond: (d1.fk1 = tr.id)"
" -> Nested Loop (cost=0.71..25556.34 rows=324 width=8) (actual time=5.211..11.794 rows=6 loops=27981)"
" -> Index Scan using dale_idx_fk1 on dale d1 (cost=0.29..25510.95 rows=18 width=4) (actual time=5.195..11.772 rows=1 loops=27981)"
" Filter: (fk7 = reklamacny_list.id)"
" Rows Removed by Filter: 28432"
" -> Materialize (cost=0.42..41.38 rows=18 width=4) (actual time=0.011..0.015 rows=6 loops=27890)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..41.29 rows=18 width=4) (actual time=0.007..0.010 rows=6 loops=27890)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Sort (cost=1.14..1.15 rows=6 width=9) (actual time=0.009..0.009 rows=1 loops=27890)"
" Sort Key: tr.id"
" Sort Method: quicksort Memory: 25kB"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=6 width=9) (actual time=0.002..0.004 rows=6 loops=27890)"
" Filter: (validto IS NULL)"
" -> Hash (cost=2.74..2.74 rows=57 width=9) (actual time=0.046..0.047 rows=57 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.74 rows=57 width=9) (actual time=0.012..0.030 rows=57 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
"Planning Time: 10.174 ms"
"Execution Time: 331036.893 ms"


Execution takes forever, versus a nearly immediate result on our test PostgreSQL 10 server:



"Limit  (cost=0.29..163635.75 rows=100 width=4) (actual time=24.199..925.691 rows=70 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..21326610.37 rows=13033 width=4) (actual time=24.195..925.660 rows=70 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 27218"
" SubPlan 1"
" -> Nested Loop (cost=4.22..781.03 rows=1293 width=1) (actual time=0.018..0.034 rows=1 loops=26066)"
" -> Hash Join (cost=3.80..377.12 rows=76 width=5) (actual time=0.005..0.005 rows=1 loops=26066)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.004 rows=5 loops=26066)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Hash (cost=2.71..2.71 rows=54 width=9) (actual time=0.049..0.049 rows=54 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.71 rows=54 width=9) (actual time=0.016..0.032 rows=54 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
" -> Materialize (cost=0.42..374.87 rows=17 width=24) (actual time=0.013..0.027 rows=1 loops=25987)"
" -> Nested Loop (cost=0.42..374.78 rows=17 width=24) (actual time=0.010..0.024 rows=1 loops=25987)"
" Join Filter: (d1.fk1 = tr.id)"
" Rows Removed by Join Filter: 32"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=1 width=28) (actual time=0.001..0.002 rows=6 loops=25987)"
" Filter: (validto IS NULL)"
" -> Index Scan using dale_fk7_idx on dale d1 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.003 rows=5 loops=155922)"
" Index Cond: (fk7 = reklamacny_list.id)"
"Planning time: 8.460 ms"
"Execution time: 925.870 ms"


This is just one example, but almost every query is simply many times slower on 11, and even queries that take forever on 10 are returned immediately by the web provider's PostgreSQL 9.0 (which also hosts hundreds of other databases).



Do you have any pointers to what might be worth investigating?



Could adjusting some memory parameters help? (The server has 16 GB just for Postgres and the OS, and there will be around 50 users connecting.)
Raising default_statistics_target = 10000 actually helped a lot, but problems remain even then.
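Since the server runs a stock configuration, the memory-related settings are at their defaults. As a starting point, something like the following postgresql.conf fragment is often suggested for a machine of this size (these values are assumptions for illustration, not taken from the attached configuration; they need measuring and tuning):

```
# Hypothetical starting values for a 16 GB dedicated Windows server.
shared_buffers = 2GB          # Windows builds often see little gain beyond 512MB-2GB
effective_cache_size = 12GB   # planner hint: RAM likely available for OS-level caching
work_mem = 32MB               # per sort/hash node, per query -- keep modest with ~50 users
maintenance_work_mem = 512MB  # speeds up VACUUM and CREATE INDEX
```

Note that work_mem is allocated per sort or hash operation, so with ~50 concurrent users a large value can multiply quickly.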



A different version of the query using COALESCE, otherwise the same:



EXPLAIN ANALYZE
SELECT REKLAMACNY_LIST.ID REKLAMACNY_LIST_ID
FROM REKLAMACNY_LIST
WHERE REKLAMACNY_LIST.VALIDTO IS NULL
AND REKLAMACNY_LIST.ID > 0
AND ((
( SELECT (NOT COALESCE(tr.typ_odstupenia, 'False')::boolean)
AND COALESCE(sr.konecny_stav, 'False')::boolean
FROM dale d1
CROSS JOIN typ_reklamacie tr
CROSS JOIN dale d2
CROSS JOIN stav_reklamacie sr
WHERE TRUE
AND d1.fk7 = reklamacny_list.id
AND d2.fk7 = reklamacny_list.id
AND d1.fk1 = tr.id
AND d2.fk3 = sr.id
AND sr.validto IS NULL
AND tr.validto IS NULL
AND d1.validto IS NULL
AND d2.validto IS NULL )))
ORDER BY reklamacny_list_id DESC
LIMIT 100


On PostgreSQL 11 this version takes about 10 seconds (a great difference from the previous version of the query without COALESCE):



"Limit  (cost=0.29..18432.84 rows=100 width=4) (actual time=447.853..10695.583 rows=100 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..2578713.84 rows=13990 width=4) (actual time=447.851..10695.495 rows=100 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 687"
" SubPlan 1"
" -> Hash Join (cost=5.30..87.57 rows=250 width=1) (actual time=11.436..14.102 rows=1 loops=758)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Merge Join (cost=1.85..80.76 rows=324 width=9) (actual time=11.407..14.076 rows=5 loops=758)"
" Merge Cond: (d1.fk1 = tr.id)"
" -> Nested Loop (cost=0.71..25556.34 rows=324 width=8) (actual time=11.389..14.056 rows=5 loops=758)"
" -> Index Scan using dale_idx_fk1 on dale d1 (cost=0.29..25510.95 rows=18 width=4) (actual time=11.361..14.023 rows=1 loops=758)"
" Filter: (fk7 = reklamacny_list.id)"
" Rows Removed by Filter: 28432"
" -> Materialize (cost=0.42..41.38 rows=18 width=4) (actual time=0.017..0.021 rows=5 loops=754)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..41.29 rows=18 width=4) (actual time=0.009..0.012 rows=5 loops=754)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Sort (cost=1.14..1.15 rows=6 width=9) (actual time=0.015..0.015 rows=2 loops=754)"
" Sort Key: tr.id"
" Sort Method: quicksort Memory: 25kB"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=6 width=9) (actual time=0.003..0.006 rows=6 loops=754)"
" Filter: (validto IS NULL)"
" -> Hash (cost=2.74..2.74 rows=57 width=9) (actual time=0.092..0.092 rows=57 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.74 rows=57 width=9) (actual time=0.032..0.068 rows=57 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
"Planning Time: 1.556 ms"
"Execution Time: 10695.752 ms"


On PostgreSQL 10:



"Limit  (cost=0.29..163635.75 rows=100 width=4) (actual time=1.958..20.024 rows=100 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..21326610.37 rows=13033 width=4) (actual time=1.957..20.011 rows=100 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 572"
" SubPlan 1"
" -> Nested Loop (cost=4.22..781.03 rows=1293 width=1) (actual time=0.017..0.031 rows=1 loops=609)"
" -> Hash Join (cost=3.80..377.12 rows=76 width=5) (actual time=0.004..0.004 rows=1 loops=609)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.003 rows=5 loops=609)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Hash (cost=2.71..2.71 rows=54 width=9) (actual time=0.037..0.037 rows=54 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.71 rows=54 width=9) (actual time=0.009..0.023 rows=54 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
" -> Materialize (cost=0.42..374.87 rows=17 width=24) (actual time=0.013..0.025 rows=1 loops=604)"
" -> Nested Loop (cost=0.42..374.78 rows=17 width=24) (actual time=0.010..0.022 rows=1 loops=604)"
" Join Filter: (d1.fk1 = tr.id)"
" Rows Removed by Join Filter: 31"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=1 width=28) (actual time=0.001..0.002 rows=6 loops=604)"
" Filter: (validto IS NULL)"
" -> Index Scan using dale_fk7_idx on dale d1 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.003 rows=5 loops=3624)"
" Index Cond: (fk7 = reklamacny_list.id)"
"Planning time: 1.418 ms"
"Execution time: 20.193 ms"


I am attaching the full logs in a zip file (including the postgresql.conf configuration).
It seems raising default_statistics_target helps, but only at very high values.
https://www.dropbox.com/s/7m3wy9nkapqitca/SpeedTest.zip?dl=0
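Since only very high global values seem to help, a narrower alternative we could try is raising the target only on the columns the planner misestimates, instead of globally (a sketch using the join columns from the query above):

```sql
-- Raise statistics only on the correlated join columns of the big table,
-- rather than a huge global default_statistics_target.
ALTER TABLE dale ALTER COLUMN fk7 SET STATISTICS 4000;
ALTER TABLE dale ALTER COLUMN fk1 SET STATISTICS 4000;
ALTER TABLE dale ALTER COLUMN fk3 SET STATISTICS 4000;
ANALYZE dale;  -- refresh statistics for just this table
```

This keeps ANALYZE and planning cheap for every other table while giving the planner finer histograms where the bad row estimates occur.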



































  • Please edit the question and add the execution plans generated using explain (analyze, buffers) to see the I/O performance.

    – a_horse_with_no_name
    Nov 4 '18 at 17:46











  • Thanks very much for the reply. I've uploaded the configuration files and the results with buffers: dropbox.com/s/7m3wy9nkapqitca/SpeedTest.zip?dl=0

    – Martin Ritchie
    Nov 7 '18 at 9:43


















3















We want to move our databases from webhosting provider (postgres 9.0) to our local network server (tried both postgres 10 and latest 11)
Our machine is windows server, fast XEON machine with 16gb ram, just for the database.



But even after raising default_statistics_targer = 4000 and analyzing for statistics we have trouble running a lot of views which worked very fast before.
Seems the webhosting provider server was fine tuned and our execution plans are probably odd for some reason.



Our Postgres is stock install config.



Simplified example query is bellow
(biggest table is "dale table which is several million records big (it's binding table with foreign keys)
other tables are much smaller, tenth thousands of records
(System is vacuum analyzed and it's fresh install)



EXPLAIN ANALYZE
SELECT REKLAMACNY_LIST.ID REKLAMACNY_LIST_ID
FROM REKLAMACNY_LIST
WHERE REKLAMACNY_LIST.VALIDTO IS NULL
AND
( SELECT NOT tr.typ_odstupenia::boolean
AND sr.konecny_stav::boolean
FROM dale d1
CROSS JOIN typ_reklamacie tr
CROSS JOIN dale d2
CROSS JOIN stav_reklamacie sr
WHERE TRUE
AND d1.fk7 = reklamacny_list.id
AND d2.fk7 = reklamacny_list.id
AND d1.fk1 = tr.id
AND d2.fk3 = sr.id
AND sr.validto IS NULL
AND tr.validto IS NULL
AND d1.validto IS NULL
AND d2.validto IS NULL )
ORDER BY reklamacny_list_id DESC
LIMIT 100


Explain analyze for our testing postgres 11 server



"Limit  (cost=0.29..18432.84 rows=100 width=4) (actual time=11804.484..331036.595 rows=91 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..2578713.84 rows=13990 width=4) (actual time=11804.482..331036.524 rows=91 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 29199"
" SubPlan 1"
" -> Hash Join (cost=5.30..87.57 rows=250 width=1) (actual time=5.246..11.824 rows=1 loops=27981)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Merge Join (cost=1.85..80.76 rows=324 width=9) (actual time=5.222..11.806 rows=6 loops=27981)"
" Merge Cond: (d1.fk1 = tr.id)"
" -> Nested Loop (cost=0.71..25556.34 rows=324 width=8) (actual time=5.211..11.794 rows=6 loops=27981)"
" -> Index Scan using dale_idx_fk1 on dale d1 (cost=0.29..25510.95 rows=18 width=4) (actual time=5.195..11.772 rows=1 loops=27981)"
" Filter: (fk7 = reklamacny_list.id)"
" Rows Removed by Filter: 28432"
" -> Materialize (cost=0.42..41.38 rows=18 width=4) (actual time=0.011..0.015 rows=6 loops=27890)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..41.29 rows=18 width=4) (actual time=0.007..0.010 rows=6 loops=27890)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Sort (cost=1.14..1.15 rows=6 width=9) (actual time=0.009..0.009 rows=1 loops=27890)"
" Sort Key: tr.id"
" Sort Method: quicksort Memory: 25kB"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=6 width=9) (actual time=0.002..0.004 rows=6 loops=27890)"
" Filter: (validto IS NULL)"
" -> Hash (cost=2.74..2.74 rows=57 width=9) (actual time=0.046..0.047 rows=57 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.74 rows=57 width=9) (actual time=0.012..0.030 rows=57 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
"Planning Time: 10.174 ms"
"Execution Time: 331036.893 ms"


executions takes forever versus immediate result on our testing postgres 10.0



"Limit  (cost=0.29..163635.75 rows=100 width=4) (actual time=24.199..925.691 rows=70 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..21326610.37 rows=13033 width=4) (actual time=24.195..925.660 rows=70 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 27218"
" SubPlan 1"
" -> Nested Loop (cost=4.22..781.03 rows=1293 width=1) (actual time=0.018..0.034 rows=1 loops=26066)"
" -> Hash Join (cost=3.80..377.12 rows=76 width=5) (actual time=0.005..0.005 rows=1 loops=26066)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.004 rows=5 loops=26066)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Hash (cost=2.71..2.71 rows=54 width=9) (actual time=0.049..0.049 rows=54 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.71 rows=54 width=9) (actual time=0.016..0.032 rows=54 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
" -> Materialize (cost=0.42..374.87 rows=17 width=24) (actual time=0.013..0.027 rows=1 loops=25987)"
" -> Nested Loop (cost=0.42..374.78 rows=17 width=24) (actual time=0.010..0.024 rows=1 loops=25987)"
" Join Filter: (d1.fk1 = tr.id)"
" Rows Removed by Join Filter: 32"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=1 width=28) (actual time=0.001..0.002 rows=6 loops=25987)"
" Filter: (validto IS NULL)"
" -> Index Scan using dale_fk7_idx on dale d1 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.003 rows=5 loops=155922)"
" Index Cond: (fk7 = reklamacny_list.id)"
"Planning time: 8.460 ms"
"Execution time: 925.870 ms"


This is just one example. But almost every query is simply many times slower on 11 and even things which takes forever also on 10, are immediately returned by web providers postgres 9.0 (which also hosts hundreds of different databases)



Do you have some point what might be the issue worth investigating?



Does adjusting some memory parameters can help? (Server has 16gb just for postgres and OS, there will be around 50 users connecting)
Actually raising default_statisticts_target=10000 helped a lot but even then.



A different version of the request with coalesce otherwise the same



EXPLAIN ANALYZE
SELECT REKLAMACNY_LIST.ID REKLAMACNY_LIST_ID
FROM REKLAMACNY_LIST
WHERE REKLAMACNY_LIST.VALIDTO IS NULL
AND REKLAMACNY_LIST.ID > 0
AND ((
( SELECT (NOT COALESCE(tr.typ_odstupenia, 'False')::boolean)
AND COALESCE(sr.konecny_stav, 'False'):: boolean
FROM dale d1
CROSS JOIN typ_reklamacie tr
CROSS JOIN dale d2
CROSS JOIN stav_reklamacie sr
WHERE TRUE
AND d1.fk7 = reklamacny_list.id
AND d2.fk7 = reklamacny_list.id
AND d1.fk1 = tr.id
AND d2.fk3 = sr.id
AND sr.validto IS NULL
AND tr.validto IS NULL
AND d1.validto IS NULL
AND d2.validto IS NULL )))
ORDER BY reklamacny_list_id DESC
LIMIT 100


On Postgres 11 it jumps to 10 seconds (great difference from the previous version of request without coalesce)



"Limit  (cost=0.29..18432.84 rows=100 width=4) (actual time=447.853..10695.583 rows=100 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..2578713.84 rows=13990 width=4) (actual time=447.851..10695.495 rows=100 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 687"
" SubPlan 1"
" -> Hash Join (cost=5.30..87.57 rows=250 width=1) (actual time=11.436..14.102 rows=1 loops=758)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Merge Join (cost=1.85..80.76 rows=324 width=9) (actual time=11.407..14.076 rows=5 loops=758)"
" Merge Cond: (d1.fk1 = tr.id)"
" -> Nested Loop (cost=0.71..25556.34 rows=324 width=8) (actual time=11.389..14.056 rows=5 loops=758)"
" -> Index Scan using dale_idx_fk1 on dale d1 (cost=0.29..25510.95 rows=18 width=4) (actual time=11.361..14.023 rows=1 loops=758)"
" Filter: (fk7 = reklamacny_list.id)"
" Rows Removed by Filter: 28432"
" -> Materialize (cost=0.42..41.38 rows=18 width=4) (actual time=0.017..0.021 rows=5 loops=754)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..41.29 rows=18 width=4) (actual time=0.009..0.012 rows=5 loops=754)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Sort (cost=1.14..1.15 rows=6 width=9) (actual time=0.015..0.015 rows=2 loops=754)"
" Sort Key: tr.id"
" Sort Method: quicksort Memory: 25kB"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=6 width=9) (actual time=0.003..0.006 rows=6 loops=754)"
" Filter: (validto IS NULL)"
" -> Hash (cost=2.74..2.74 rows=57 width=9) (actual time=0.092..0.092 rows=57 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.74 rows=57 width=9) (actual time=0.032..0.068 rows=57 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
"Planning Time: 1.556 ms"
"Execution Time: 10695.752 ms"


on



postgres 10



"Limit  (cost=0.29..163635.75 rows=100 width=4) (actual time=1.958..20.024 rows=100 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..21326610.37 rows=13033 width=4) (actual time=1.957..20.011 rows=100 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 572"
" SubPlan 1"
" -> Nested Loop (cost=4.22..781.03 rows=1293 width=1) (actual time=0.017..0.031 rows=1 loops=609)"
" -> Hash Join (cost=3.80..377.12 rows=76 width=5) (actual time=0.004..0.004 rows=1 loops=609)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.003 rows=5 loops=609)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Hash (cost=2.71..2.71 rows=54 width=9) (actual time=0.037..0.037 rows=54 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.71 rows=54 width=9) (actual time=0.009..0.023 rows=54 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
" -> Materialize (cost=0.42..374.87 rows=17 width=24) (actual time=0.013..0.025 rows=1 loops=604)"
" -> Nested Loop (cost=0.42..374.78 rows=17 width=24) (actual time=0.010..0.022 rows=1 loops=604)"
" Join Filter: (d1.fk1 = tr.id)"
" Rows Removed by Join Filter: 31"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=1 width=28) (actual time=0.001..0.002 rows=6 loops=604)"
" Filter: (validto IS NULL)"
" -> Index Scan using dale_fk7_idx on dale d1 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.003 rows=5 loops=3624)"
" Index Cond: (fk7 = reklamacny_list.id)"
"Planning time: 1.418 ms"
"Execution time: 20.193 ms"


I am attaching full logs in zip file (including configuration postgres.conf).
It seems like raising default statistic target helps but only with very high values.
https://www.dropbox.com/s/7m3wy9nkapqitca/SpeedTest.zip?dl=0










share|improve this question

























  • Please edit the question and add the execution plans generated using explain (analyze, buffers) to see the I/O performance.

    – a_horse_with_no_name
    Nov 4 '18 at 17:46











  • Thanks very much for reply, I've uploaded configuration files and results with buffers dropbox.com/s/7m3wy9nkapqitca/SpeedTest.zip?dl=0

    – Martin Ritchie
    Nov 7 '18 at 9:43
















3












3








3








We want to move our databases from webhosting provider (postgres 9.0) to our local network server (tried both postgres 10 and latest 11)
Our machine is windows server, fast XEON machine with 16gb ram, just for the database.



But even after raising default_statistics_targer = 4000 and analyzing for statistics we have trouble running a lot of views which worked very fast before.
Seems the webhosting provider server was fine tuned and our execution plans are probably odd for some reason.



Our Postgres is stock install config.



Simplified example query is bellow
(biggest table is "dale table which is several million records big (it's binding table with foreign keys)
other tables are much smaller, tenth thousands of records
(System is vacuum analyzed and it's fresh install)



EXPLAIN ANALYZE
SELECT REKLAMACNY_LIST.ID REKLAMACNY_LIST_ID
FROM REKLAMACNY_LIST
WHERE REKLAMACNY_LIST.VALIDTO IS NULL
AND
( SELECT NOT tr.typ_odstupenia::boolean
AND sr.konecny_stav::boolean
FROM dale d1
CROSS JOIN typ_reklamacie tr
CROSS JOIN dale d2
CROSS JOIN stav_reklamacie sr
WHERE TRUE
AND d1.fk7 = reklamacny_list.id
AND d2.fk7 = reklamacny_list.id
AND d1.fk1 = tr.id
AND d2.fk3 = sr.id
AND sr.validto IS NULL
AND tr.validto IS NULL
AND d1.validto IS NULL
AND d2.validto IS NULL )
ORDER BY reklamacny_list_id DESC
LIMIT 100


Explain analyze for our testing postgres 11 server



"Limit  (cost=0.29..18432.84 rows=100 width=4) (actual time=11804.484..331036.595 rows=91 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..2578713.84 rows=13990 width=4) (actual time=11804.482..331036.524 rows=91 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 29199"
" SubPlan 1"
" -> Hash Join (cost=5.30..87.57 rows=250 width=1) (actual time=5.246..11.824 rows=1 loops=27981)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Merge Join (cost=1.85..80.76 rows=324 width=9) (actual time=5.222..11.806 rows=6 loops=27981)"
" Merge Cond: (d1.fk1 = tr.id)"
" -> Nested Loop (cost=0.71..25556.34 rows=324 width=8) (actual time=5.211..11.794 rows=6 loops=27981)"
" -> Index Scan using dale_idx_fk1 on dale d1 (cost=0.29..25510.95 rows=18 width=4) (actual time=5.195..11.772 rows=1 loops=27981)"
" Filter: (fk7 = reklamacny_list.id)"
" Rows Removed by Filter: 28432"
" -> Materialize (cost=0.42..41.38 rows=18 width=4) (actual time=0.011..0.015 rows=6 loops=27890)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..41.29 rows=18 width=4) (actual time=0.007..0.010 rows=6 loops=27890)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Sort (cost=1.14..1.15 rows=6 width=9) (actual time=0.009..0.009 rows=1 loops=27890)"
" Sort Key: tr.id"
" Sort Method: quicksort Memory: 25kB"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=6 width=9) (actual time=0.002..0.004 rows=6 loops=27890)"
" Filter: (validto IS NULL)"
" -> Hash (cost=2.74..2.74 rows=57 width=9) (actual time=0.046..0.047 rows=57 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.74 rows=57 width=9) (actual time=0.012..0.030 rows=57 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
"Planning Time: 10.174 ms"
"Execution Time: 331036.893 ms"


executions takes forever versus immediate result on our testing postgres 10.0



"Limit  (cost=0.29..163635.75 rows=100 width=4) (actual time=24.199..925.691 rows=70 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..21326610.37 rows=13033 width=4) (actual time=24.195..925.660 rows=70 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 27218"
" SubPlan 1"
" -> Nested Loop (cost=4.22..781.03 rows=1293 width=1) (actual time=0.018..0.034 rows=1 loops=26066)"
" -> Hash Join (cost=3.80..377.12 rows=76 width=5) (actual time=0.005..0.005 rows=1 loops=26066)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.004 rows=5 loops=26066)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Hash (cost=2.71..2.71 rows=54 width=9) (actual time=0.049..0.049 rows=54 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.71 rows=54 width=9) (actual time=0.016..0.032 rows=54 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
" -> Materialize (cost=0.42..374.87 rows=17 width=24) (actual time=0.013..0.027 rows=1 loops=25987)"
" -> Nested Loop (cost=0.42..374.78 rows=17 width=24) (actual time=0.010..0.024 rows=1 loops=25987)"
" Join Filter: (d1.fk1 = tr.id)"
" Rows Removed by Join Filter: 32"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=1 width=28) (actual time=0.001..0.002 rows=6 loops=25987)"
" Filter: (validto IS NULL)"
" -> Index Scan using dale_fk7_idx on dale d1 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.003 rows=5 loops=155922)"
" Index Cond: (fk7 = reklamacny_list.id)"
"Planning time: 8.460 ms"
"Execution time: 925.870 ms"


This is just one example. But almost every query is simply many times slower on 11 and even things which takes forever also on 10, are immediately returned by web providers postgres 9.0 (which also hosts hundreds of different databases)



Do you have some point what might be the issue worth investigating?



Does adjusting some memory parameters can help? (Server has 16gb just for postgres and OS, there will be around 50 users connecting)
Actually raising default_statisticts_target=10000 helped a lot but even then.



A different version of the request with coalesce otherwise the same



EXPLAIN ANALYZE
SELECT REKLAMACNY_LIST.ID REKLAMACNY_LIST_ID
FROM REKLAMACNY_LIST
WHERE REKLAMACNY_LIST.VALIDTO IS NULL
AND REKLAMACNY_LIST.ID > 0
AND ((
( SELECT (NOT COALESCE(tr.typ_odstupenia, 'False')::boolean)
AND COALESCE(sr.konecny_stav, 'False'):: boolean
FROM dale d1
CROSS JOIN typ_reklamacie tr
CROSS JOIN dale d2
CROSS JOIN stav_reklamacie sr
WHERE TRUE
AND d1.fk7 = reklamacny_list.id
AND d2.fk7 = reklamacny_list.id
AND d1.fk1 = tr.id
AND d2.fk3 = sr.id
AND sr.validto IS NULL
AND tr.validto IS NULL
AND d1.validto IS NULL
AND d2.validto IS NULL )))
ORDER BY reklamacny_list_id DESC
LIMIT 100


On Postgres 11 it jumps to 10 seconds (great difference from the previous version of request without coalesce)



"Limit  (cost=0.29..18432.84 rows=100 width=4) (actual time=447.853..10695.583 rows=100 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..2578713.84 rows=13990 width=4) (actual time=447.851..10695.495 rows=100 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 687"
" SubPlan 1"
" -> Hash Join (cost=5.30..87.57 rows=250 width=1) (actual time=11.436..14.102 rows=1 loops=758)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Merge Join (cost=1.85..80.76 rows=324 width=9) (actual time=11.407..14.076 rows=5 loops=758)"
" Merge Cond: (d1.fk1 = tr.id)"
" -> Nested Loop (cost=0.71..25556.34 rows=324 width=8) (actual time=11.389..14.056 rows=5 loops=758)"
" -> Index Scan using dale_idx_fk1 on dale d1 (cost=0.29..25510.95 rows=18 width=4) (actual time=11.361..14.023 rows=1 loops=758)"
" Filter: (fk7 = reklamacny_list.id)"
" Rows Removed by Filter: 28432"
" -> Materialize (cost=0.42..41.38 rows=18 width=4) (actual time=0.017..0.021 rows=5 loops=754)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..41.29 rows=18 width=4) (actual time=0.009..0.012 rows=5 loops=754)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Sort (cost=1.14..1.15 rows=6 width=9) (actual time=0.015..0.015 rows=2 loops=754)"
" Sort Key: tr.id"
" Sort Method: quicksort Memory: 25kB"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=6 width=9) (actual time=0.003..0.006 rows=6 loops=754)"
" Filter: (validto IS NULL)"
" -> Hash (cost=2.74..2.74 rows=57 width=9) (actual time=0.092..0.092 rows=57 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.74 rows=57 width=9) (actual time=0.032..0.068 rows=57 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
"Planning Time: 1.556 ms"
"Execution Time: 10695.752 ms"


on



postgres 10



"Limit  (cost=0.29..163635.75 rows=100 width=4) (actual time=1.958..20.024 rows=100 loops=1)"
" -> Index Scan Backward using reklamacny_list_pk on reklamacny_list (cost=0.29..21326610.37 rows=13033 width=4) (actual time=1.957..20.011 rows=100 loops=1)"
" Index Cond: (id > 0)"
" Filter: ((validto IS NULL) AND (SubPlan 1))"
" Rows Removed by Filter: 572"
" SubPlan 1"
" -> Nested Loop (cost=4.22..781.03 rows=1293 width=1) (actual time=0.017..0.031 rows=1 loops=609)"
" -> Hash Join (cost=3.80..377.12 rows=76 width=5) (actual time=0.004..0.004 rows=1 loops=609)"
" Hash Cond: (d2.fk3 = sr.id)"
" -> Index Scan using dale_fk7_idx on dale d2 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.003 rows=5 loops=609)"
" Index Cond: (fk7 = reklamacny_list.id)"
" -> Hash (cost=2.71..2.71 rows=54 width=9) (actual time=0.037..0.037 rows=54 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 11kB"
" -> Seq Scan on stav_reklamacie sr (cost=0.00..2.71 rows=54 width=9) (actual time=0.009..0.023 rows=54 loops=1)"
" Filter: (validto IS NULL)"
" Rows Removed by Filter: 17"
" -> Materialize (cost=0.42..374.87 rows=17 width=24) (actual time=0.013..0.025 rows=1 loops=604)"
" -> Nested Loop (cost=0.42..374.78 rows=17 width=24) (actual time=0.010..0.022 rows=1 loops=604)"
" Join Filter: (d1.fk1 = tr.id)"
" Rows Removed by Join Filter: 31"
" -> Seq Scan on typ_reklamacie tr (cost=0.00..1.06 rows=1 width=28) (actual time=0.001..0.002 rows=6 loops=604)"
" Filter: (validto IS NULL)"
" -> Index Scan using dale_fk7_idx on dale d1 (cost=0.42..372.47 rows=100 width=4) (actual time=0.002..0.003 rows=5 loops=3624)"
" Index Cond: (fk7 = reklamacny_list.id)"
"Planning time: 1.418 ms"
"Execution time: 20.193 ms"


I am attaching full logs in zip file (including configuration postgres.conf).
It seems like raising default statistic target helps but only with very high values.
https://www.dropbox.com/s/7m3wy9nkapqitca/SpeedTest.zip?dl=0










share|improve this question
















postgresql postgresql-performance

asked Nov 4 '18 at 16:56 by Martin Ritchie · edited Nov 7 '18 at 10:46 by Andriy M
  • Please edit the question and add the execution plans generated using explain (analyze, buffers) to see the I/O performance. – a_horse_with_no_name, Nov 4 '18 at 17:46

  • Thanks very much for the reply; I've uploaded the configuration files and the results with buffers: dropbox.com/s/7m3wy9nkapqitca/SpeedTest.zip?dl=0 – Martin Ritchie, Nov 7 '18 at 9:43
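For reference, the option syntax requested in the comment looks like this (shown here on a shortened form of the question's outer query):

```sql
-- EXPLAIN options go in parentheses; BUFFERS requires ANALYZE.
EXPLAIN (ANALYZE, BUFFERS)
SELECT reklamacny_list.id
FROM reklamacny_list
WHERE reklamacny_list.validto IS NULL
ORDER BY id DESC
LIMIT 100;
```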





















2 Answers
psql 10: Buffers: shared hit=4,439,193

psql 9.0: Buffers: shared hit=7,493 read=686



https://www.postgresql.org/docs/11/sql-explain.html




Include information on buffer usage. Specifically, include the number of shared blocks hit, read, dirtied, and written, the number of
local blocks hit, read, dirtied, and written, and the number of temp
blocks read and written. A hit means that a read was avoided because
the block was found already in cache when needed. Shared blocks
contain data from regular tables and indexes; local blocks contain
data from temporary tables and indexes; while temp blocks contain
short-term working data used in sorts, hashes, Materialize plan nodes,
and similar cases. The number of blocks dirtied indicates the number
of previously unmodified blocks that were changed by this query; while
the number of blocks written indicates the number of
previously-dirtied blocks evicted from cache by this backend during
query processing. The number of blocks shown for an upper-level node
includes those used by all its child nodes. In text format, only
non-zero values are printed. This parameter may only be used when
ANALYZE is also enabled. It defaults to FALSE.




Something is wrong with the database:
on the 9.0 server there are roughly 8,000 buffer involvements;
on the 10 server there are about 4 million buffer hits.



If that is on the same dataset, there must be a lot of empty/deleted rows in the database.



If that is the case, then VACUUM should help.



The execution plans are also quite different; the estimated number of rows involved differs, so updated statistics on specific columns may help:



https://www.citusdata.com/blog/2018/03/06/postgres-planner-and-its-usage-of-statistics/



The 9.0 plan does not have a separate sort step. Maybe the indexes are not the same; 9.0 may already be getting the data in the right order.

answered Nov 21 '18 at 15:43 by user1817599 · edited Nov 22 '18 at 7:09
    The default postgresql.conf is tuned for a small footprint; it will be slow if your database is large and has complicated joins.

    I saw from your PostgreSQL 10 config file that your shared buffers are only 128MB (many other settings are also very small). You need to tune them.

    But tuning PostgreSQL is a big topic, different hardware needs different settings, and it comes with trial and error to improve continuously.

    I can't cover all of that here, so I will just provide the tuned settings I have used. Targets:
    - Use no more than 4GB of memory on my server (as my server is not dedicated to running the Postgres db)
    - Server has > 8 cores

    max_connections : 800
    shared_buffers : 1536MB
    work_mem : 24MB
    maintenance_work_mem : 480MB
    vacuum_cost_delay : 20ms
    synchronous_commit : local
    wal_buffers : 8MB
    max_wal_size : 1536MB
    checkpoint_completion_target : 0.9
    effective_cache_size : 4GB
    deadlock_timeout : 3s
    log_min_duration_statement : 5000
    log_error_verbosity : verbose
    log_autovacuum_min_duration : 10000
    log_lock_waits : on
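For the 16GB dedicated machine in the question (rather than this answer's 4GB shared box), a rough starting point could be applied with `ALTER SYSTEM` instead of hand-editing postgresql.conf. The specific values below are illustrative guesses for that hardware, not settings from the answer above, and should be adjusted after measuring:

```sql
-- Rough starting values for a 16GB dedicated server with ~50 users;
-- note that changing shared_buffers requires a server restart.
ALTER SYSTEM SET shared_buffers = '4GB';
ALTER SYSTEM SET effective_cache_size = '12GB';
ALTER SYSTEM SET work_mem = '32MB';
ALTER SYSTEM SET maintenance_work_mem = '512MB';

-- Reloads the settings that do not need a restart.
SELECT pg_reload_conf();
```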




    answered 5 mins ago by user170807

    user170807 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
    Check out our Code of Conduct.
      Your Answer








      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "182"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














      draft saved

      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdba.stackexchange.com%2fquestions%2f221755%2fpostgres-9-0-vs-postgres-10-11-performance%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown

























      2 Answers
      2






      active

      oldest

      votes








      2 Answers
      2






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      0














      psql 10: Buffers: shared hit=4.439.193

      psql 09: Buffers: shared hit=____7.493 read=686



      https://www.postgresql.org/docs/11/sql-explain.html




      Include information on buffer usage. Specifically, include the number of shared blocks hit, read, dirtied, and written, the number of
      local blocks hit, read, dirtied, and written, and the number of temp
      blocks read and written. A hit means that a read was avoided because
      the block was found already in cache when needed. Shared blocks
      contain data from regular tables and indexes; local blocks contain
      data from temporary tables and indexes; while temp blocks contain
      short-term working data used in sorts, hashes, Materialize plan nodes,
      and similar cases. The number of blocks dirtied indicates the number
      of previously unmodified blocks that were changed by this query; while
      the number of blocks written indicates the number of
      previously-dirtied blocks evicted from cache by this backend during
      query processing. The number of blocks shown for an upper-level node
      includes those used by all its child nodes. In text format, only
      non-zero values are printed. This parameter may only be used when
      ANALYZE is also enabled. It defaults to FALSE.




      something is wrong with db
      on the p09 there are ~8000 buffer involvements
      on the p10 there are 4M buffer hits



      if that is on the same dataset there must be a lot of empty/deleted row in the databes.



      if that is the case then vacuum should help.





      the execuption plans are quite different, the estimation of naumber of rows involved is different
      updated statisc on specific columns may help



      https://www.citusdata.com/blog/2018/03/06/postgres-planner-and-its-usage-of-statistics/





      psql 09 plan does not have a separate sorting part.
      maybe the indexes are not the same. psql 09 may already be getting the date in the right order ....






      share|improve this answer






























        0














        psql 10: Buffers: shared hit=4.439.193

        psql 09: Buffers: shared hit=____7.493 read=686



        https://www.postgresql.org/docs/11/sql-explain.html




        Include information on buffer usage. Specifically, include the number of shared blocks hit, read, dirtied, and written, the number of
        local blocks hit, read, dirtied, and written, and the number of temp
        blocks read and written. A hit means that a read was avoided because
        the block was found already in cache when needed. Shared blocks
        contain data from regular tables and indexes; local blocks contain
        data from temporary tables and indexes; while temp blocks contain
        short-term working data used in sorts, hashes, Materialize plan nodes,
        and similar cases. The number of blocks dirtied indicates the number
        of previously unmodified blocks that were changed by this query; while
        the number of blocks written indicates the number of
        previously-dirtied blocks evicted from cache by this backend during
        query processing. The number of blocks shown for an upper-level node
        includes those used by all its child nodes. In text format, only
        non-zero values are printed. This parameter may only be used when
        ANALYZE is also enabled. It defaults to FALSE.




        something is wrong with db
        on the p09 there are ~8000 buffer involvements
        on the p10 there are 4M buffer hits



        if that is on the same dataset there must be a lot of empty/deleted row in the databes.



        if that is the case then vacuum should help.





        the execuption plans are quite different, the estimation of naumber of rows involved is different
        updated statisc on specific columns may help



        https://www.citusdata.com/blog/2018/03/06/postgres-planner-and-its-usage-of-statistics/





        psql 09 plan does not have a separate sorting part.
        maybe the indexes are not the same. psql 09 may already be getting the date in the right order ....






        share|improve this answer




























          0












          0








          0







          psql 10: Buffers: shared hit=4.439.193

          psql 09: Buffers: shared hit=____7.493 read=686



          https://www.postgresql.org/docs/11/sql-explain.html




          Include information on buffer usage. Specifically, include the number of shared blocks hit, read, dirtied, and written, the number of
          local blocks hit, read, dirtied, and written, and the number of temp
          blocks read and written. A hit means that a read was avoided because
          the block was found already in cache when needed. Shared blocks
          contain data from regular tables and indexes; local blocks contain
          data from temporary tables and indexes; while temp blocks contain
          short-term working data used in sorts, hashes, Materialize plan nodes,
          and similar cases. The number of blocks dirtied indicates the number
          of previously unmodified blocks that were changed by this query; while
          the number of blocks written indicates the number of
          previously-dirtied blocks evicted from cache by this backend during
          query processing. The number of blocks shown for an upper-level node
          includes those used by all its child nodes. In text format, only
          non-zero values are printed. This parameter may only be used when
          ANALYZE is also enabled. It defaults to FALSE.




          something is wrong with db
          on the p09 there are ~8000 buffer involvements
          on the p10 there are 4M buffer hits



          if that is on the same dataset there must be a lot of empty/deleted row in the databes.



          if that is the case then vacuum should help.





          the execuption plans are quite different, the estimation of naumber of rows involved is different
          updated statisc on specific columns may help



          https://www.citusdata.com/blog/2018/03/06/postgres-planner-and-its-usage-of-statistics/





          psql 09 plan does not have a separate sorting part.
          maybe the indexes are not the same. psql 09 may already be getting the date in the right order ....






          share|improve this answer















          psql 10: Buffers: shared hit=4.439.193

          psql 09: Buffers: shared hit=____7.493 read=686



          https://www.postgresql.org/docs/11/sql-explain.html




          Include information on buffer usage. Specifically, include the number of shared blocks hit, read, dirtied, and written, the number of
          local blocks hit, read, dirtied, and written, and the number of temp
          blocks read and written. A hit means that a read was avoided because
          the block was found already in cache when needed. Shared blocks
          contain data from regular tables and indexes; local blocks contain
          data from temporary tables and indexes; while temp blocks contain
          short-term working data used in sorts, hashes, Materialize plan nodes,
          and similar cases. The number of blocks dirtied indicates the number
          of previously unmodified blocks that were changed by this query; while
          the number of blocks written indicates the number of
          previously-dirtied blocks evicted from cache by this backend during
          query processing. The number of blocks shown for an upper-level node
          includes those used by all its child nodes. In text format, only
          non-zero values are printed. This parameter may only be used when
          ANALYZE is also enabled. It defaults to FALSE.




          something is wrong with db
          on the p09 there are ~8000 buffer involvements
          on the p10 there are 4M buffer hits



          if that is on the same dataset there must be a lot of empty/deleted row in the databes.



          if that is the case then vacuum should help.





          the execuption plans are quite different, the estimation of naumber of rows involved is different
          updated statisc on specific columns may help



          https://www.citusdata.com/blog/2018/03/06/postgres-planner-and-its-usage-of-statistics/





          psql 09 plan does not have a separate sorting part.
          maybe the indexes are not the same. psql 09 may already be getting the date in the right order ....







          share|improve this answer














          share|improve this answer



          share|improve this answer








          edited Nov 22 '18 at 7:09

























          answered Nov 21 '18 at 15:43









          user1817599user1817599

          11




          11

























              0














              The default postgresql.conf configure is just for small footprint, it will be slow if your database is large and with complicated joins.



              Saw from your postgresql10 configure file that your shared memory is only "128MB" (also many other settings are very small). You need to configure it.



              Bug tuning the Postgresql is a big topic and different kind of hardwares would also need different settings, which also comes with trial and error to continuously improve it.



              Not able to discuss here, I am just able to provide the "tuned setting" I used to use.
              Target:
              - Use not more than 4GB memory in my server (as my server is not dedicated to run Postgres db)
              - Server have > 8 cores



              max_connections : 800
              shared_buffers : 1536MB
              work_mem : 24MB
              maintenance_work_mem : 480MB
              vacuum_cost_delay : 20ms
              synchronous_commit : local
              wal_buffers : 8MB
              max_wal_size : 1536GB
              checkpoint_completion_target : 0.9
              effective_cache_size : 4GB
              deadlock_timeout : 3s
              log_min_duration_statement : 5000
              log_error_verbosity : verbose
              log_autovacuum_min_duration : 10000
              log_lock_waits : on




              share








              New contributor




              user170807 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
              Check out our Code of Conduct.

























                0














                The default postgresql.conf configure is just for small footprint, it will be slow if your database is large and with complicated joins.



                Saw from your postgresql10 configure file that your shared memory is only "128MB" (also many other settings are very small). You need to configure it.



                Bug tuning the Postgresql is a big topic and different kind of hardwares would also need different settings, which also comes with trial and error to continuously improve it.



                Not able to discuss here, I am just able to provide the "tuned setting" I used to use.
                Target:
                - Use not more than 4GB memory in my server (as my server is not dedicated to run Postgres db)
                - Server have > 8 cores



                max_connections : 800
                shared_buffers : 1536MB
                work_mem : 24MB
                maintenance_work_mem : 480MB
                vacuum_cost_delay : 20ms
                synchronous_commit : local
                wal_buffers : 8MB
                max_wal_size : 1536GB
                checkpoint_completion_target : 0.9
                effective_cache_size : 4GB
                deadlock_timeout : 3s
                log_min_duration_statement : 5000
                log_error_verbosity : verbose
                log_autovacuum_min_duration : 10000
                log_lock_waits : on




                share








                New contributor




                user170807 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                Check out our Code of Conduct.























                  0












                  0








                  0







                  The default postgresql.conf configure is just for small footprint, it will be slow if your database is large and with complicated joins.



                  Saw from your postgresql10 configure file that your shared memory is only "128MB" (also many other settings are very small). You need to configure it.



                  Bug tuning the Postgresql is a big topic and different kind of hardwares would also need different settings, which also comes with trial and error to continuously improve it.



                  Not able to discuss here, I am just able to provide the "tuned setting" I used to use.
                  Target:
                  - Use not more than 4GB memory in my server (as my server is not dedicated to run Postgres db)
                  - Server have > 8 cores



                  max_connections : 800
                  shared_buffers : 1536MB
                  work_mem : 24MB
                  maintenance_work_mem : 480MB
                  vacuum_cost_delay : 20ms
                  synchronous_commit : local
                  wal_buffers : 8MB
                  max_wal_size : 1536GB
                  checkpoint_completion_target : 0.9
                  effective_cache_size : 4GB
                  deadlock_timeout : 3s
                  log_min_duration_statement : 5000
                  log_error_verbosity : verbose
                  log_autovacuum_min_duration : 10000
                  log_lock_waits : on




                  share








                  New contributor




                  user170807 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.










                  The default postgresql.conf configure is just for small footprint, it will be slow if your database is large and with complicated joins.



                  Saw from your postgresql10 configure file that your shared memory is only "128MB" (also many other settings are very small). You need to configure it.



But tuning PostgreSQL is a big topic: different hardware needs different settings, and it takes trial and error to keep improving them.

I can't cover all of that here, so I'll just share the tuned settings I have used.
Targets:
- Use no more than 4GB of memory on my server (it is not dedicated to running the Postgres db)
- The server has more than 8 cores



max_connections = 800
shared_buffers = 1536MB
work_mem = 24MB
maintenance_work_mem = 480MB
vacuum_cost_delay = 20ms
synchronous_commit = local
wal_buffers = 8MB
max_wal_size = 1536MB
checkpoint_completion_target = 0.9
effective_cache_size = 4GB
deadlock_timeout = 3s
log_min_duration_statement = 5000
log_error_verbosity = verbose
log_autovacuum_min_duration = 10000
log_lock_waits = on
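One caveat worth checking against these numbers: work_mem is a per-sort/per-hash limit, not a per-connection total, so the theoretical worst case grows with max_connections. A minimal sketch (the helper below is hypothetical, using the common ~25%-of-RAM rule of thumb for shared_buffers):

```python
def pg_memory_budget(total_ram_gb, max_connections, work_mem_mb):
    """Hypothetical helper: rough Postgres memory bounds, in MB.

    shared_buffers is a single fixed allocation; work_mem can be used
    once per sort/hash node in every active query, so the worst case
    scales with the number of busy connections.
    """
    shared_buffers_mb = total_ram_gb * 1024 // 4   # ~25% rule of thumb
    worst_case_mb = shared_buffers_mb + max_connections * work_mem_mb
    return shared_buffers_mb, worst_case_mb

# With the settings above: a 4GB budget, 800 connections, 24MB work_mem.
shared, worst = pg_memory_budget(4, 800, 24)
# worst is far above 4096MB, so the budget relies on most connections staying idle.
```

If 800 connections can actually be busy at once on a 4GB budget, a connection pooler such as PgBouncer usually lets max_connections stay far lower.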





answered 5 mins ago by user170807































                      Thanks for contributing an answer to Database Administrators Stack Exchange!

