Performance Benchmarks

Benchmark results comparing PMTiles, MBTiles, PostgreSQL, and COG (raster) performance in tileserver-rs

tileserver-rs is designed for high-performance tile serving. This page documents benchmark results for PMTiles, MBTiles, PostgreSQL, and Cloud Optimized GeoTIFF (COG) sources.

Test Environment

  • Hardware: Apple Silicon (M-series) MacBook
  • Runtime: All servers running in Docker containers (ARM64 native)
  • Test Tool: autocannon (Node.js HTTP benchmarking)
  • Configuration: 100 concurrent connections, 10 seconds per endpoint (see the equivalent invocation below)
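
For reference, a single endpoint run with this configuration corresponds roughly to the direct autocannon invocation below. The host, port, and tile path are placeholders; the actual endpoints are driven by run-benchmarks.js.

# Roughly equivalent single-endpoint run (replace host, port, and tile path with a real endpoint)
npx autocannon --connections 100 --duration 10 \
  "http://localhost:8080/<tileset>/12/2180/1500.pbf"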

Test Data

| Source | File | Area | Zoom Range | Size |
| --- | --- | --- | --- | --- |
| PMTiles | protomaps-sample.pmtiles | Florence, Italy | 0-15 | 6.3 MB |
| MBTiles | zurich_switzerland.mbtiles | Zurich, Switzerland | 0-14 | 34 MB |
| PostgreSQL | benchmark_points table | Zurich, Switzerland | 0-14 | 50,000 points |
| COG | benchmark-rgb.cog.tif | World (Web Mercator) | 0-22 | 90 MB |

Summary Results

| Source | Avg Requests/sec | Avg Throughput | Avg Latency |
| --- | --- | --- | --- |
| PMTiles | 1,047 req/s | 93.18 MB/s | 171ms |
| MBTiles | 1,133 req/s | 92.62 MB/s | 181ms |

Both formats deliver ~93 MB/s throughput with 1,000+ requests/second under heavy load (100 concurrent connections).

Detailed Results by Zoom Level

PMTiles (Florence, Italy)

| Zoom | Location | Requests/sec | Throughput | Avg Latency | P99 Latency |
| --- | --- | --- | --- | --- | --- |
| z0 | World | 236 | 97.88 MB/s | 461ms | 1,190ms |
| z4 | Europe | 403 | 95.51 MB/s | 264ms | 607ms |
| z8 | Italy | 1,071 | 91.72 MB/s | 99ms | 191ms |
| z10 | Tuscany | 1,290 | 89.81 MB/s | 81ms | 158ms |
| z12 | Florence | 1,675 | 93.62 MB/s | 62ms | 119ms |
| z14 | City Center | 1,605 | 90.54 MB/s | 63ms | 121ms |

MBTiles (Zurich, Switzerland)

| Zoom | Location | Requests/sec | Throughput | Avg Latency | P99 Latency |
| --- | --- | --- | --- | --- | --- |
| z0 | World | 3,441 | 89.82 MB/s | 29ms | 55ms |
| z4 | Europe | 990 | 89.84 MB/s | 104ms | 207ms |
| z8 | Switzerland | 426 | 92.90 MB/s | 252ms | 669ms |
| z10 | Zurich Region | 590 | 92.33 MB/s | 180ms | 361ms |
| z12 | Zurich City | 1,088 | 91.47 MB/s | 97ms | 191ms |
| z14 | City Center | 266 | 99.37 MB/s | 425ms | 1,166ms |

Analysis

Key Insights

  • Throughput is consistent at ~90-100 MB/s regardless of zoom level
  • Latency scales with tile size - low-zoom tiles cover more area, are larger, and take longer to serve
  • Mid-to-high zoom requests are fastest for PMTiles (1,200-1,700 req/s at z10-z14), while MBTiles peaks at z0 and z12

PMTiles Performance

  • Consistent performance across zoom levels
  • Best at city zoom (z12-z14): 1,600+ req/s with 62ms latency
  • Memory-mapped file access provides predictable performance

MBTiles Performance

  • Fastest at z0: 3,441 req/s with only 29ms latency
  • SQLite overhead is more visible on high-detail tiles (z14: 425ms average latency)
  • Good for local development and smaller datasets

Format Comparison

| Aspect | PMTiles | MBTiles |
| --- | --- | --- |
| Best for | Production, CDN, cloud | Development, local |
| Consistency | More predictable | Variable by tile size |
| High-zoom performance | Excellent | Good |
| Low-zoom performance | Good | Excellent |

Running Benchmarks

To reproduce these benchmarks with a fair Docker-to-Docker comparison:

# Build tileserver-rs Docker image for your platform
docker build -t tileserver-rs:local .

# Update benchmarks/docker-compose.yml to use local image
# Then start all servers
docker compose -f benchmarks/docker-compose.yml up -d

# Run benchmarks
cd benchmarks
bun install
node run-benchmarks.js --duration 10 --connections 100

Benchmark Options

# Test only PMTiles
node run-benchmarks.js --format pmtiles

# Test only MBTiles  
node run-benchmarks.js --format mbtiles

# Test PostgreSQL table sources
node run-benchmarks.js --format postgres

# Test PostgreSQL function sources
node run-benchmarks.js --format postgres_function

# Test COG/raster sources
node run-benchmarks.js --format cog --connections 10

# Test specific server
node run-benchmarks.js --server tileserver-rs

# Longer test with more connections
node run-benchmarks.js --duration 30 --connections 200

# Generate markdown report
node run-benchmarks.js --markdown

Server Comparison

We benchmarked tileserver-rs against martin and tileserver-gl using the same test data. All servers ran in Docker containers on ARM64 for a fair apples-to-apples comparison.

PMTiles Performance (Florence, Italy)

| Server | Avg Req/sec | Avg Latency | Throughput |
| --- | --- | --- | --- |
| tileserver-rs | 1,372 | 80ms | 90 MB/s |
| tileserver-gl | 1,148 | 93ms | 75 MB/s |
| martin | 78 | 1,252ms | 9 MB/s |

tileserver-rs is ~20% faster than tileserver-gl and ~17x faster than martin for PMTiles serving.

MBTiles Performance (Zurich, Switzerland)

| Server | Avg Req/sec | Avg Latency | Throughput |
| --- | --- | --- | --- |
| tileserver-rs | 751 | 188ms | 90 MB/s |
| martin | 722 | 154ms | 155 MB/s |
| tileserver-gl | 712 | 194ms | 85 MB/s |

All three servers perform similarly for MBTiles (within ~5% of each other).

PostgreSQL Performance (Zurich Points)

Benchmarks run in isolation (fresh DB for each server) to ensure fair comparison.

| Server | Avg Req/sec | Avg Latency | Throughput |
| --- | --- | --- | --- |
| tileserver-rs | 3,337 | 316ms | 273 MB/s |
| martin | 3,149 | 212ms | 269 MB/s |

tileserver-rs matches martin's PostgreSQL performance with connection pool pre-warming and prepared statement caching. Both servers show similar throughput at ~270 MB/s.

PostgreSQL by Zoom Level (tileserver-rs vs martin)

| Zoom | tileserver-rs | martin | Winner |
| --- | --- | --- | --- |
| z10 | 58 req/s | 104 req/s | martin |
| z11 | 120 req/s | 124 req/s | ~tie |
| z12 | 617 req/s | 422 req/s | tileserver-rs (+46%) |
| z13 | 2,796 req/s | 2,867 req/s | ~tie |
| z14 | 13,097 req/s | 12,230 req/s | tileserver-rs (+7%) |

At high zoom levels (z12-z14), tileserver-rs outperforms martin by 7-46%. Both servers hit the same PostgreSQL bottleneck at low zoom levels, where queries process more geometry.

PostgreSQL Optimizations

tileserver-rs includes several PostgreSQL performance optimizations:

  • Connection pool pre-warming - All connections established at startup
  • Prepared statement caching - Tile queries pre-prepared on all connections
  • ST_TileEnvelope margin - PostGIS 3.1+ margin parameter for better tile edge clipping
  • SRID-aware envelope transformation - Transform the tile envelope to the table SRID once instead of transforming every geometry (see the query sketch after this list)
  • Moka-based tile cache - LRU cache with configurable size and TTL
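
For illustration, the envelope-based query pattern behind these optimizations looks roughly like the sketch below. The table name matches the benchmark data, but the geometry column, its SRID (assumed 4326 here), the layer name, and the tile coordinates are assumptions - this is not the SQL tileserver-rs actually generates.

# Illustrative only: approximates the tile query shape, not tileserver-rs's generated SQL
# Assumes benchmark_points.geom is stored in EPSG:4326 and DATABASE_URL points at the benchmark DB
psql "$DATABASE_URL" <<'SQL'
PREPARE tile_query (int, int, int) AS
SELECT ST_AsMVT(mvt, 'benchmark_points')
FROM (
  SELECT ST_AsMVTGeom(
           ST_Transform(geom, 3857),      -- geometry into the tile CRS
           ST_TileEnvelope($1, $2, $3),   -- tile bounds in EPSG:3857
           4096, 64, true) AS geom
  FROM benchmark_points
  -- Transform the envelope (with the PostGIS 3.1+ margin) to the table SRID once,
  -- instead of transforming every geometry in the filter
  WHERE geom && ST_Transform(ST_TileEnvelope($1, $2, $3, margin => 0.015625), 4326)
) AS mvt;

EXECUTE tile_query(14, 8580, 5738);  -- a z14 tile over Zurich (approximate x/y)
SQL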

Feature Comparison

| Feature | tileserver-rs | tileserver-gl | martin |
| --- | --- | --- | --- |
| Language | Rust | Node.js | Rust |
| PMTiles | ✅ | ✅ | ✅ |
| MBTiles | ✅ | ✅ | ✅ |
| PostGIS | ✅ | ❌ | ✅ |
| Raster Rendering | ✅ Native | ✅ Node | ❌ |
| Static Images | ✅ | ✅ | ❌ |
| PMTiles Req/sec | 1,372 | 1,148 | 78 |
| MBTiles Req/sec | 751 | 712 | 722 |
| PostGIS Req/sec | 3,337 | - | 3,149 |

tileserver-rs provides the best overall balance: the fastest PMTiles performance (~20% faster than tileserver-gl, 17x faster than martin), PostgreSQL performance that matches martin, native MapLibre rendering for raster tiles, and static image generation, all in a single binary.

COG/Raster Performance

tileserver-rs supports Cloud Optimized GeoTIFF (COG) serving with on-the-fly reprojection and PNG encoding via GDAL.

Test Configuration

  • COG File: 4096x4096 RGB image in Web Mercator (EPSG:3857)
  • Connections: 10 concurrent (COG processing is CPU-intensive)
  • Output: 256x256 PNG tiles

COG Benchmark Results (tileserver-rs)

| Zoom | Requests/sec | Throughput | Avg Latency | P99 Latency |
| --- | --- | --- | --- | --- |
| z0 | 2 req/s | 276 KB/s | 2,568ms | 4,343ms |
| z1 | 7 req/s | 1 MB/s | 1,349ms | 2,261ms |
| z2 | 20 req/s | 3.4 MB/s | 478ms | 1,177ms |
| z3 | 38 req/s | 7.6 MB/s | 258ms | 595ms |

Key Observations

  • Latency scales with tile complexity - Lower zoom levels read more data from the COG
  • Higher zoom = faster - z3+ tiles achieve 38+ req/s with sub-300ms latency
  • CPU-bound - COG processing (GDAL read + PNG encode) limits throughput
  • LZW compression - The benchmark COG uses LZW compression; uncompressed COGs are faster (a creation sketch follows this list)
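
A COG with this layout can be produced with GDAL's COG driver. The command below is a sketch: the input filename is a placeholder, and this is not necessarily how the benchmark file was generated.

# Sketch: build a Web Mercator COG with internal tiling, overviews, and LZW compression
# "source-rgb.tif" is a placeholder input file
gdal_translate source-rgb.tif benchmark-rgb.cog.tif \
  -of COG \
  -co TILING_SCHEME=GoogleMapsCompatible \
  -co COMPRESS=LZW

# Confirm block size and overview levels
gdalinfo benchmark-rgb.cog.tif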

Running COG Benchmarks

# Test COG performance only
node run-benchmarks.js --format cog --connections 10

# Compare with TiTiler (Python COG server)
docker compose -f benchmarks/docker-compose.yml up -d titiler tileserver-rs
node run-benchmarks.js --format cog --server tileserver-rs
node run-benchmarks.js --format cog --server titiler

TiTiler Comparison

The benchmark suite includes TiTiler for COG comparison:

| Server | Avg Req/sec | Avg Latency | Notes |
| --- | --- | --- | --- |
| tileserver-rs | 15 req/s | 1,613ms | GDAL-based, Rust |
| TiTiler | TBD | TBD | Rio-tiler/GDAL, Python |

Run docker compose -f benchmarks/docker-compose.yml up -d to start both servers for comparison.

Optimization Tips

  1. Use release builds - cargo build --release is 10-50x faster than debug
  2. Use PMTiles for production - cloud-native, HTTP range request friendly
  3. Use MBTiles for development - easy to generate and inspect with SQLite tools
  4. Enable compression - gzipped tiles reduce bandwidth significantly (see the header check after this list)
  5. Use CDN caching - tiles are immutable, cache with long TTLs
  6. Match connections to cores - more connections than CPU cores adds overhead
  7. For COG serving - use internal tiling and overviews for faster access
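
To spot-check tips 4 and 5 against a running instance, inspect the tile response headers directly. The port and tile path below are placeholders, and which headers appear depends on your configuration.

# Check compression and cache headers on a tile (replace host, port, and tile path)
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' \
  "http://localhost:8080/<tileset>/12/2180/1500.pbf" \
  | grep -iE 'content-encoding|cache-control|etag'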