Hacker News

Most time-series queries (almost all of them) are aggregation queries. Why not leverage or build a top-notch columnar store for them?

Everything seems to be there, so why is there no first-class product like ClickHouse on Postgres?



The gold standard for this is Druid at very large scale, or ClickHouse. ClickHouse has a lot of problems with modifying/scaling shards after the fact, while Druid handles this with ease (at the penalty of not being able to update data after the fact).


Doris?


Citus, Percona, TimescaleDB?


That was very "Klaatu, Barada, Nikto".


VictoriaMetrics as well; they say it's based on data structures similar to those used in ClickHouse.


Looking at the ClickBench comparison, they are almost pathetic in terms of performance. They can't even handle sub-second aggregation queries over 10M records. Compare that to even DuckDB reading from Parquet files.


Postgres is missing a proper columnstore implementation. It's a big gap and it's not easy to build.

One solution could be integrating DuckDB in a similar way to pgvector. You would need to map DuckDB storage onto Postgres storage and reuse the DuckDB query processor. I believe that's the fastest way to get Postgres a competitive columnstore.


Olo, CEO of https://www.tablespace.io here. We've built a columnstore extension for Postgres that is faster than ClickHouse for real-time analytics in their own ClickBench benchmarks. Feel free to check it out - https://www.tablespace.io/blog/postgres-columnstore-index-vs...


This sounds interesting. I don't see DuckDB as a supported extension or mentioned anywhere in your code yet ;)

Is this foreshadowing?


Hydra?



