Optimizing count(distinct xx)
Last updated: 2025-10-11 18:10:00
postgres=# CREATE TABLE t1(f1 serial not null unique,f2 text,f3 text,f4 text,f5 text,f6 text,f7 text,f8 text,f9 text,f10 text,f11 text,f12 text) distribute by shard(f1);
postgres=# insert into t1 select t,md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text) from generate_series(1,1000000) as t;
postgres=# analyze t1;
ANALYZE
postgres=# explain (verbose) select count(distinct f2) from t1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=103320.00..103320.01 rows=1 width=8)
Output: count(DISTINCT f2)
-> Remote Subquery Scan on all (dn01,dn02,dn03,dn04,dn05,dn06,dn07,dn08,dn09,dn10) (cost=100.00..100820.00 rows=1000000 width=33)
Output: f2
-> Seq Scan on public.t1 (cost=0.00..62720.00 rows=1000000 width=33)
Output: f2
(6 rows)
Time: 0.748 ms
postgres=# select count(distinct f2) from t1;
count
---------
1000000
(1 row)
Time: 6274.684 ms (00:06.275)
postgres=# select count(distinct f2) from t1 where f1 <10;
count
-------
9
(1 row)
Time: 19.261 ms
The plan above shows that count(distinct f2) is evaluated on the CN node. For TP-style workloads that touch only a small amount of data, this overhead is acceptable, and is often even lower than pushing the aggregation down. But for AP-style workloads that process a large volume of data in one query, network transfer becomes the bottleneck. Let's look at the execution plan after rewriting the query.
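The rewrite used below relies on a standard SQL equivalence: count(DISTINCT col) equals counting the rows of a GROUP BY col subquery. A minimal sketch using SQLite (a local stand-in, not the distributed cluster above; table shape is simplified) to confirm the two forms return the same result:

```python
import sqlite3

# In-memory SQLite stand-in for the distributed t1 table above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (f1 INTEGER PRIMARY KEY, f2 TEXT)")

# Insert rows with duplicate f2 values so DISTINCT actually matters.
rows = [(i, f"val{i % 5}") for i in range(1, 101)]
conn.executemany("INSERT INTO t1 VALUES (?, ?)", rows)

# Original form: a single count(DISTINCT ...) aggregate.
(direct,) = conn.execute("SELECT count(DISTINCT f2) FROM t1").fetchone()

# Rewritten form: count the rows of a GROUP BY subquery.
(rewritten,) = conn.execute(
    "SELECT count(1) FROM (SELECT f2 FROM t1 GROUP BY f2) AS t"
).fetchone()

print(direct, rewritten)  # both 5
```

The equivalence holds because GROUP BY f2 emits exactly one row per distinct f2 value; on a distributed system the GROUP BY form additionally gives the planner the chance to do partial grouping on each DN before the final count.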
postgres=# explain (verbose) select count(1) from (select f2 from t1 group by f2) as t ;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=355600.70..355600.71 rows=1 width=8)
Output: count(1)
-> Remote Subquery Scan on all (dn01,dn02,dn03,dn04,dn05,dn06,dn07,dn08,dn09,dn10) (cost=355600.69..355600.70 rows=1 width=0)
Output: PARTIAL count(1)
-> Partial Aggregate (cost=355500.69..355500.70 rows=1 width=8)
Output: PARTIAL count(1)
-> Group (cost=340500.69..345500.69 rows=1000000 width=33)
Output: t1.f2
Group Key: t1.f2
-> Sort (cost=340500.69..343000.69 rows=1000000 width=0)
Output: t1.f2
Sort Key: t1.f2
-> Remote Subquery Scan on all (dn01,dn02,dn03,dn04,dn05,dn06,dn07,dn08,dn09,dn10) (cost=216192.84..226192.84 rows=1000000 width=0)
Output: t1.f2
Distribute results by S: f2
-> Group (cost=216092.84..221092.84 rows=1000000 width=33)
Output: t1.f2
Group Key: t1.f2
-> Sort (cost=216092.84..218592.84 rows=1000000 width=33)
Output: t1.f2
Sort Key: t1.f2
-> Seq Scan on public.t1 (cost=0.00..62720.00 rows=1000000 width=33)
Output: t1.f2
(23 rows)
After the rewrite, the grouping work is pushed down to the DN nodes and executed in parallel. Now let's look at the actual execution times.
postgres=# select count(1) from (select f2 from t1 group by f2) as t ;
count
---------
1000000
(1 row)
Time: 1328.431 ms (00:01.328)
postgres=# select count(1) from (select f2 from t1 where f1<10 group by f2) as t ;
count
-------
9
(1 row)
Time: 24.991 ms
As we can see, for AP-style workloads that compute over large amounts of data, performance improved nearly 5x (6274 ms down to 1328 ms).
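The speedup figure follows directly from the two full-table timings in the transcripts above:

```python
# Timings taken from the transcripts above (milliseconds).
before_ms = 6274.684  # count(distinct f2), aggregated on the CN
after_ms = 1328.431   # rewritten query, grouping pushed down to the DNs

speedup = before_ms / after_ms
print(f"{speedup:.2f}x")  # ≈ 4.72x
```

Note that for the small TP-style query (f1 < 10), the rewrite was actually slightly slower (24.991 ms vs 19.261 ms), so this rewrite pays off only when the distinct-count input is large.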