Frugal Prometheus with ClickHouse + URLEngine


Today we're going to build a frugal serverless Prometheus metrics store and publisher for our ClickHouse fleet using our free service.

Our experiment will have the following characteristics:

  • no coding, no logins - all batteries included! πŸ”‹

  • available from any clickhouse instance

  • custom HTTP scraping endpoint

  • powered by Cloudflare, with persistent data courtesy of Deta cloud

Let's get started! Our backend is a free serverless pastie designed for the ClickHouse URL engine and accessible from anywhere, including local, cloud, or even storage-less instances.

We're going to use it to power our distributed metrics table, starting with a simple schema to hold our Prometheus-like metrics:

  • key: metric name

  • help: metric help text

  • value: metric value

  • tags: optional JSON tags

  • __expires: optional expiration Unix timestamp

A few notes on our serverless backend:

  • INSERTs for the same key are considered UPDATEs in URL tables

  • JSON tags must be present, or set to an empty string

  • Metrics can auto-expire using the __expires column and a future Unix timestamp
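To make these semantics concrete, here's a minimal in-memory sketch in Python (the function names and structure are ours, purely illustrative): an INSERT with an existing key overwrites the previous row, and rows whose __expires timestamp has passed are gone on read.

```python
import time

# Illustrative in-memory model of the serverless URL table semantics.
rows = {}

def insert(key, help_text, value, tags="", expires=None):
    # INSERTs for an existing key behave as UPDATEs
    rows[key] = {"key": key, "help": help_text, "value": value,
                 "tags": tags, "__expires": expires}

def select_all(now=None):
    now = time.time() if now is None else now
    # Rows whose __expires timestamp has passed are dropped
    return [r for r in rows.values()
            if r["__expires"] is None or r["__expires"] > now]

insert("mygauge", "my little metric", 100, '{"le":"0.05"}',
       expires=time.time() + 300)
insert("mygauge", "my little metric", 200, '{"le":"0.05"}',
       expires=time.time() + 300)  # same key: the value is now 200
```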


Let's INSERT our first little metric with some value and some tags:

INSERT INTO FUNCTION url('', JSONEachRow, 'key String, help String, value Float64, __expires UInt64, tags String') VALUES ('mygauge', 'my little metric', 100, toUnixTimestamp(now()+300), '{"le":"0.05"}')
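Under the hood, JSONEachRow is just one JSON object per line. A quick sketch of the row the INSERT above produces (field names follow the schema; the endpoint URL stays omitted, as elsewhere in this article):

```python
import json
import time

# One metric row, shaped like the JSONEachRow schema above
row = {
    "key": "mygauge",
    "help": "my little metric",
    "value": 100,
    "__expires": int(time.time()) + 300,  # auto-expire in 5 minutes
    "tags": json.dumps({"le": "0.05"}),   # tags travel as a JSON string
}
payload = json.dumps(row)  # one object per line = JSONEachRow
```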

That's easy. Let's take a closer look at our serverless URL table:

SELECT * FROM url('', JSONEachRow)

β”‚ 1659015366 β”‚ my little metric      β”‚ mygauge  β”‚ {"le":"0.05"} β”‚ 100   β”‚

Looking good! Our distributed metrics are ready to be used and auto-expired.

INSERTs to our store will be generated by queries, materialized views, etc. ... but the returned data is still in ClickHouse format and can't be scraped by Prometheus, so it's time to change the output!

FORMAT Prometheus

ClickHouse versions 22.5 and higher feature built-in support for the Prometheus output format - it's a little strict, but it works, so let's use it!

SELECT ifNull(key, 'undefined') as name, ifNull(toFloat64(value),0) as value, help as help, CAST(JSONExtractKeysAndValues(replaceAll(ifNull(tags,''), '\'','\"'), 'String'), 'Map(String, String)') as labels FROM url('', JSONEachRow) FORMAT Prometheus
# HELP mygauge my little metric
mygauge{le="0.05"} 100
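The transformation the query performs - JSON tags become a label map, then each row is rendered in the Prometheus text exposition format - can be sketched in plain Python (illustrative only, not part of the pipeline):

```python
import json

def to_prometheus(row):
    """Render one metric row in the Prometheus text exposition format."""
    labels = json.loads(row["tags"]) if row["tags"] else {}
    label_str = ",".join(f'{k}="{v}"' for k, v in labels.items())
    lines = [f'# HELP {row["key"]} {row["help"]}']
    # name{label="value"} metric_value
    lines.append(f'{row["key"]}{{{label_str}}} {row["value"]:g}'
                 if label_str else f'{row["key"]} {row["value"]:g}')
    return "\n".join(lines)

row = {"key": "mygauge", "help": "my little metric",
       "value": 100, "tags": '{"le":"0.05"}'}
print(to_prometheus(row))
# # HELP mygauge my little metric
# mygauge{le="0.05"} 100
```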

Our metrics look perfect - all we need to complete our project is an HTTP endpoint we can scrape them from and luckily, ClickHouse does that too!

Metrics HTTP Handler

Let's use all the ClickHouse features we can, shall we? Create a custom handler configuration file to handle requests for our metrics publisher:

Example Path: /etc/clickhouse-server/config.d/metric_export.xml

<yandex>
  <http_handlers>
    <rule>
      <url>/metrix</url>
      <methods>GET</methods>
      <handler>
        <type>predefined_query_handler</type>
        <query>SELECT ifNull(key, 'undefined') as name, ifNull(toFloat64(value),0) as value, help, CAST(JSONExtractKeysAndValues(replaceAll(ifNull(tags,''), '\'','\"'), 'String'), 'Map(String, String)') as labels FROM url('', JSONEachRow) FORMAT Prometheus</query>
      </handler>
    </rule>
    <defaults/>
  </http_handlers>
</yandex>

Scrape it and Shake it!

Our custom endpoint is ready to be scraped. Let's curl a final test:

curl 'http://default:password@localhost:8123/metrix'
# TYPE mygauge gauge
mygauge{le="0.05"} 100
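If you want to sanity-check what a scrape returns, a small parser for simple exposition lines might look like this (a sketch for labeled metrics only; a real consumer would use a Prometheus client library):

```python
import re

# Matches simple 'name{labels} value' exposition lines
LINE = re.compile(r'^(\w+)\{([^}]*)\}\s+(\S+)$')

def parse_metrics(text):
    """Parse 'name{labels} value' lines, skipping # comment lines."""
    metrics = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        m = LINE.match(line)
        if m:
            name, raw_labels, value = m.groups()
            labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels))
            metrics.append((name, labels, float(value)))
    return metrics

scraped = '# TYPE mygauge gauge\nmygauge{le="0.05"} 100\n'
print(parse_metrics(scraped))
# [('mygauge', {'le': '0.05'}, 100.0)]
```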

Bonus Steps

Going to use your URL engine store daily? Extend the schema and set up a persistent URL Engine-powered table for quicker access:

CREATE TABLE default.prometheus_exporter
(
 `key` String,
 `value` Float64,
 `help` String,
 `type` String,
 `tags` String,
 `__expires` UInt64 DEFAULT toUnixTimestamp(now()+300)
)
ENGINE = URL('', 'JSONEachRow')

INSERT INTO default.prometheus_exporter VALUES ('mygauge', 100, 'my little metric', 'gauge', '{"le":"0.05"}', toUnixTimestamp(now()+300))




ClickHouse and our serverless URL store are super flexible and can be used to abstract and distribute low-pressure storage between swarms of servers unaware of each other.

This guide is intended as an example - go crazy adding precise timestamping, output formats and anything else you can dream up!

Did you find this article valuable?

Support qryn by becoming a sponsor. Any amount is appreciated!