tileserver-rs is designed for high-performance tile serving. This page documents benchmark results for PMTiles, MBTiles, PostgreSQL, and Cloud Optimized GeoTIFF (COG) sources.
Test Environment
- Hardware: Apple Silicon (M-series) MacBook
- Runtime: All servers running in Docker containers (ARM64 native)
- Test Tool: autocannon (Node.js HTTP benchmarking)
- Configuration: 100 concurrent connections, 10 seconds per endpoint
Test Data
| Source | File | Area | Zoom Range | Size |
|---|---|---|---|---|
| PMTiles | protomaps-sample.pmtiles | Florence, Italy | 0-15 | 6.3 MB |
| MBTiles | zurich_switzerland.mbtiles | Zurich, Switzerland | 0-14 | 34 MB |
| PostgreSQL | benchmark_points table | Zurich, Switzerland | 0-14 | 50,000 points |
| COG | benchmark-rgb.cog.tif | World (Web Mercator) | 0-22 | 90 MB |
Summary Results
| Source | Avg Requests/sec | Avg Throughput | Avg Latency |
|---|---|---|---|
| PMTiles | 1,047 req/s | 93.18 MB/s | 171ms |
| MBTiles | 1,133 req/s | 92.62 MB/s | 181ms |
Both formats deliver ~93 MB/s throughput with 1,000+ requests/second under heavy load (100 concurrent connections).
Detailed Results by Zoom Level
PMTiles (Florence, Italy)
| Zoom | Location | Requests/sec | Throughput | Avg Latency | P99 Latency |
|---|---|---|---|---|---|
| z0 | World | 236 | 97.88 MB/s | 461ms | 1,190ms |
| z4 | Europe | 403 | 95.51 MB/s | 264ms | 607ms |
| z8 | Italy | 1,071 | 91.72 MB/s | 99ms | 191ms |
| z10 | Tuscany | 1,290 | 89.81 MB/s | 81ms | 158ms |
| z12 | Florence | 1,675 | 93.62 MB/s | 62ms | 119ms |
| z14 | City Center | 1,605 | 90.54 MB/s | 63ms | 121ms |
MBTiles (Zurich, Switzerland)
| Zoom | Location | Requests/sec | Throughput | Avg Latency | P99 Latency |
|---|---|---|---|---|---|
| z0 | World | 3,441 | 89.82 MB/s | 29ms | 55ms |
| z4 | Europe | 990 | 89.84 MB/s | 104ms | 207ms |
| z8 | Switzerland | 426 | 92.90 MB/s | 252ms | 669ms |
| z10 | Zurich Region | 590 | 92.33 MB/s | 180ms | 361ms |
| z12 | Zurich City | 1,088 | 91.47 MB/s | 97ms | 191ms |
| z14 | City Center | 266 | 99.37 MB/s | 425ms | 1,166ms |
Analysis
Key Insights
- Throughput holds steady at ~90-100 MB/s regardless of zoom level
- Latency tracks tile size - for PMTiles, low-zoom tiles are large and slow (z0: 461ms avg) while city-zoom tiles are small and fast (z12: 62ms)
- Peak request rates differ by format - PMTiles is fastest at city zooms (z12: 1,675 req/s), while MBTiles peaks at z0 (3,441 req/s) and is slowest at z14 (266 req/s)
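The tile-size effect is easy to see by dividing throughput by request rate to get the average payload per tile. A quick calculation using figures from the PMTiles table above (a rough estimate that ignores HTTP header overhead):

```python
# Average bytes served per tile = throughput / request rate.
# (requests/sec, MB/sec) pairs taken from the PMTiles results table.
rows = {
    "z0":  (236, 97.88),
    "z4":  (403, 95.51),
    "z12": (1675, 93.62),
}
for zoom, (rps, mbps) in rows.items():
    kb_per_tile = mbps * 1024 / rps
    print(f"{zoom}: ~{kb_per_tile:.0f} KB/tile")
# z0 tiles are ~425 KB; z12 tiles are ~57 KB - roughly 7x smaller,
# which is why latency drops from 461ms to 62ms.
```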
PMTiles Performance
- Consistent performance across zoom levels
- Best at city zoom (z12-z14): 1,600+ req/s with 62ms latency
- Memory-mapped file access provides predictable performance
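Memory-mapped access means a tile fetch is essentially a byte-range slice of the archive, which is why latency stays predictable. A minimal Python sketch of the idea (not tileserver-rs's actual code; in a real PMTiles reader the offset and length come from the archive's directory entries, here they are hard-coded):

```python
import mmap
import os
import tempfile

# Build a throwaway "archive" file so the sketch is self-contained:
# an 8-byte header, an 8-byte tile, and a trailer.
path = os.path.join(tempfile.mkdtemp(), "archive.bin")
with open(path, "wb") as f:
    f.write(b"HEADERxx" + b"TILEDATA" + b"trailer")

def read_tile(path, offset, length):
    """Return `length` bytes at `offset` via a memory map."""
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        return bytes(mm[offset:offset + length])

print(read_tile(path, 8, 8))  # b'TILEDATA'
```

No parsing, no query planning: serving a tile is one mapped-memory slice, so response time is dominated by tile size alone.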
MBTiles Performance
- Fastest at z0: 3,441 req/s with only 29ms latency
- SQLite overhead more visible at high-detail tiles (z14: 425ms)
- Good for local development and smaller datasets
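MBTiles is just a SQLite database with a `tiles` table, which is what makes it convenient to inspect and generate locally. A minimal sketch of a tile lookup against the standard schema (the z/x/y values are hypothetical; note MBTiles stores rows in TMS order, so the y coordinate is flipped relative to XYZ):

```python
import sqlite3

# In-memory stand-in for an .mbtiles file, using the standard schema.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE tiles (zoom_level INT, tile_column INT, tile_row INT, tile_data BLOB)"
)
db.execute("INSERT INTO tiles VALUES (14, 8593, 10605, ?)", (b"\x1f\x8b...",))

def get_tile(db, z, x, y):
    tms_y = (1 << z) - 1 - y  # XYZ -> TMS row flip
    row = db.execute(
        "SELECT tile_data FROM tiles WHERE zoom_level=? AND tile_column=? AND tile_row=?",
        (z, x, tms_y),
    ).fetchone()
    return row[0] if row else None

print(get_tile(db, 14, 8593, 5778))  # the stored (gzipped) tile blob
```

Every request is a SQLite point query, so per-request overhead is low but grows with B-tree depth and row size, consistent with the slower z14 numbers above.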
Format Comparison
| Aspect | PMTiles | MBTiles |
|---|---|---|
| Best for | Production, CDN, cloud | Development, local |
| Consistency | More predictable | Variable by tile size |
| High-zoom perf | Excellent | Good |
| Low-zoom perf | Good | Excellent |
Running Benchmarks
To reproduce these benchmarks with fair Docker-to-Docker comparison:
```shell
# Build tileserver-rs Docker image for your platform
docker build -t tileserver-rs:local .

# Update benchmarks/docker-compose.yml to use local image
# Then start all servers
docker compose -f benchmarks/docker-compose.yml up -d

# Run benchmarks
cd benchmarks
bun install
node run-benchmarks.js --duration 10 --connections 100
```
Benchmark Options
```shell
# Test only PMTiles
node run-benchmarks.js --format pmtiles

# Test only MBTiles
node run-benchmarks.js --format mbtiles

# Test PostgreSQL table sources
node run-benchmarks.js --format postgres

# Test PostgreSQL function sources
node run-benchmarks.js --format postgres_function

# Test COG/raster sources
node run-benchmarks.js --format cog --connections 10

# Test a specific server
node run-benchmarks.js --server tileserver-rs

# Longer test with more connections
node run-benchmarks.js --duration 30 --connections 200

# Generate a markdown report
node run-benchmarks.js --markdown
```
Server Comparison
We benchmarked tileserver-rs against martin and tileserver-gl using the same test data. All servers ran in Docker containers on ARM64 for a fair apples-to-apples comparison.
PMTiles Performance (Florence, Italy)
| Server | Avg Req/sec | Avg Latency | Throughput |
|---|---|---|---|
| tileserver-rs | 1,409 | 79ms | 92 MB/s |
| tileserver-gl | 1,274 | 91ms | 81 MB/s |
| martin | 53 | 1,783ms | 6 MB/s |
tileserver-rs is ~10% faster than tileserver-gl and ~26x faster than martin for PMTiles serving.
MBTiles Performance (Zurich, Switzerland)
| Server | Avg Req/sec | Avg Latency | Throughput |
|---|---|---|---|
| martin | 876 | 128ms | 179 MB/s |
| tileserver-gl | 756 | 198ms | 89 MB/s |
| tileserver-rs | 736 | 188ms | 90 MB/s |
All three servers perform competitively for MBTiles. Martin leads on throughput due to in-memory caching; tileserver-rs and tileserver-gl are within ~3% of each other.
PostgreSQL Performance (Zurich Points)
Benchmarks were run against native ARM64 PostgreSQL 16.13 + PostGIS 3.4.4 (compiled from source for a fair comparison).
| Server | Avg Req/sec | Avg Latency | Throughput |
|---|---|---|---|
| tileserver-rs | 3,596 | 209ms | 422 MB/s |
| martin | 3,674 | 209ms | 429 MB/s |
Both servers are effectively PostGIS-bound — performance is nearly identical. The PostgreSQL query and ST_AsMVT encoding dominate the response time, not the tile server overhead.
PostgreSQL by Zoom Level (tileserver-rs vs martin)
| Zoom | tileserver-rs | martin | Winner |
|---|---|---|---|
| z10 | 189 req/s | 192 req/s | ~tie |
| z11 | 338 req/s | 346 req/s | ~tie |
| z12 | 866 req/s | 874 req/s | ~tie |
| z13 | 3,442 req/s | 3,532 req/s | ~tie |
| z14 | 13,144 req/s | 13,423 req/s | ~tie |
Both servers hit the same PostgreSQL bottleneck — performance is virtually identical across all zoom levels, confirming the database is the limiting factor, not the tile server.
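The steep zoom-over-zoom scaling is roughly what a point table predicts: each additional zoom level quarters a tile's ground area, so ST_AsMVT encodes about a quarter as many points per tile. The ratios from the table above approach that 4x at the point-dense zooms (the flatter low-zoom ratios suggest fixed per-query overhead dominates there):

```python
# Zoom-over-zoom speedup for tileserver-rs, from the table above.
rps = {10: 189, 11: 338, 12: 866, 13: 3442, 14: 13144}
for z in range(11, 15):
    print(f"z{z-1} -> z{z}: {rps[z] / rps[z-1]:.1f}x")
# Ratios climb toward ~4x as tiles hold fewer points.
```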
PostgreSQL Optimizations
tileserver-rs includes several PostgreSQL performance optimizations:
- Connection pool pre-warming - All connections established at startup
- Prepared statement caching - Tile queries pre-prepared on all connections
- ST_TileEnvelope margin - PostGIS 3.1+ margin parameter for better tile edge clipping
- SRID-aware envelope transformation - Transform tile envelope to table SRID instead of every geometry
- Moka-based tile cache - LRU cache with configurable size and TTL
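The envelope math behind `ST_TileEnvelope` is plain Web Mercator arithmetic. A Python sketch of what the server asks PostGIS to compute for each z/x/y (without the optional margin parameter):

```python
# Half-extent of the Web Mercator (EPSG:3857) world.
EXTENT = 20037508.342789244

def tile_envelope(z, x, y):
    """Return (xmin, ymin, xmax, ymax) in EPSG:3857 for an XYZ tile,
    mirroring PostGIS ST_TileEnvelope(z, x, y)."""
    size = 2 * EXTENT / (1 << z)   # tile edge length in metres
    xmin = -EXTENT + x * size
    ymax = EXTENT - y * size       # y grows downward in XYZ
    return (xmin, ymax - size, xmin + size, ymax)

print(tile_envelope(0, 0, 0))  # the full mercator extent, +/-20037508.34
```

Transforming this single envelope into the table's SRID (rather than transforming every geometry into 3857) is what makes the SRID-aware optimization above pay off: one `ST_Transform` per tile instead of one per row.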
Feature Comparison
| Feature | tileserver-rs | tileserver-gl | martin |
|---|---|---|---|
| Language | Rust | Node.js | Rust |
| PMTiles | ✅ | ✅ | ✅ |
| MBTiles | ✅ | ✅ | ✅ |
| MLT (MapLibre Tiles) | ✅ | ❌ | ✅ |
| PostGIS | ✅ | ❌ | ✅ |
| Raster Rendering | ✅ Native | ✅ Node | ❌ |
| Static Images | ✅ | ✅ | ❌ |
| PMTiles Req/sec | 1,409 | 1,274 | 53 |
| MBTiles Req/sec | 736 | 756 | 876 |
| PostGIS Req/sec | 3,596 | - | 3,674 |
tileserver-rs provides the best balance: the fastest PMTiles performance (~10% faster than tileserver-gl, 26x faster than martin), PostgreSQL performance matching martin (both PostGIS-bound at ~3,600 req/s), native MapLibre rendering for raster tiles, and static image generation, all in a single binary.
COG/Raster Performance
tileserver-rs supports Cloud Optimized GeoTIFF (COG) serving with on-the-fly reprojection and PNG encoding via GDAL.
Test Configuration
- COG File: 4096x4096 RGB image in Web Mercator (EPSG:3857)
- Connections: 10 concurrent (COG processing is CPU-intensive)
- Output: 256x256 PNG tiles
COG Benchmark Results (tileserver-rs)
| Zoom | Requests/sec | Throughput | Avg Latency | P99 Latency |
|---|---|---|---|---|
| z0 | 2 req/s | 276 KB/s | 2,568ms | 4,343ms |
| z1 | 7 req/s | 1 MB/s | 1,349ms | 2,261ms |
| z2 | 20 req/s | 3.4 MB/s | 478ms | 1,177ms |
| z3 | 38 req/s | 7.6 MB/s | 258ms | 595ms |
Key Observations
- Latency scales with tile complexity - Lower zoom levels read more data from the COG
- Higher zoom = faster - z3+ tiles achieve 38+ req/s with sub-300ms latency
- CPU-bound - COG processing (GDAL read + PNG encode) limits throughput
- LZW compression - The benchmark COG uses LZW compression; uncompressed COGs are faster
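The "lower zoom reads more data" effect is easy to quantify for this test file: the 4096x4096 image spans the full mercator extent, so a z0 tile must decode every source pixel, and each extra zoom level quarters the read. A rough sketch (assuming no overviews are used):

```python
SRC = 4096  # source image edge (full Web Mercator extent)
OUT = 256   # output tile edge

for z in range(4):
    src_px = SRC // (1 << z)   # source pixels per tile edge at zoom z
    scale = src_px / OUT
    print(f"z{z}: reads {src_px}x{src_px} source px "
          f"({scale:.0f}x downsample per edge)")
# z0 decodes 16.7M pixels per tile; z3 only 262K - matching the
# 2,568ms -> 258ms latency drop in the table above.
```

With internal overviews, GDAL can read a pre-downsampled level instead of the full-resolution pixels, which is why building overviews is recommended in the optimization tips.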
Running COG Benchmarks
```shell
# Test COG performance only
node run-benchmarks.js --format cog --connections 10

# Compare with TiTiler (Python COG server)
docker compose -f benchmarks/docker-compose.yml up -d titiler tileserver-rs
node run-benchmarks.js --format cog --server tileserver-rs
node run-benchmarks.js --format cog --server titiler
```
TiTiler Comparison
TiTiler is a Python-based COG tile server by Development Seed, built on rio-tiler/GDAL and FastAPI. We benchmarked both servers with the same COG file (4096×4096 RGB, EPSG:3857) at 10 concurrent connections.
| Server | Avg Req/sec | Avg Latency | Memory Usage | Notes |
|---|---|---|---|---|
| TiTiler | 184 req/s | 54ms | 200 MB | Rio-tiler/GDAL, Python (FastAPI) |
| tileserver-rs | 19 req/s | 1,217ms | 624 MB | GDAL-based, Rust |
COG Performance by Zoom Level
| Zoom | tileserver-rs | TiTiler | Winner |
|---|---|---|---|
| z0 | 3 req/s (3,016ms) | 157 req/s (63ms) | TiTiler |
| z1 | 5-8 req/s (1,174-2,054ms) | 170-194 req/s (51-58ms) | TiTiler |
| z2 | 22-23 req/s (419-453ms) | 196-199 req/s (50ms) | TiTiler |
| z3 | 53 req/s (189ms) | 187 req/s (53ms) | TiTiler |
Honest assessment: TiTiler wins on pure COG serving — it's purpose-built for raster data with a mature rio-tiler/rasterio stack optimized specifically for Cloud Optimized GeoTIFF access patterns. tileserver-rs COG support uses raw GDAL bindings which haven't been optimized to the same degree yet.
Why tileserver-rs Still Wins Overall
TiTiler only serves COG/raster data. tileserver-rs is a unified tile server that handles everything in a single binary:
| Capability | tileserver-rs | TiTiler |
|---|---|---|
| PMTiles (vector) | ✅ 1,409 req/s | ❌ |
| MBTiles (vector) | ✅ 736 req/s | ❌ |
| PostGIS (vector) | ✅ 3,596 req/s | ❌ |
| COG/Raster | ✅ 19 req/s | ✅ 184 req/s |
| MLT Transcoding | ✅ | ❌ |
| Native Raster Rendering | ✅ MapLibre Native | ❌ |
| Static Map Images | ✅ | ❌ |
| Style JSON Serving | ✅ | ❌ |
| Font Serving | ✅ | ❌ |
| Hot Reload | ✅ | ❌ |
Bottom line: If you only need COG serving, TiTiler is excellent. If you need a unified tile server that handles vector tiles, raster rendering, PostGIS, static images, and COG — all in a single binary with 1,400+ req/s PMTiles performance — tileserver-rs is the clear choice.
Run `docker compose -f benchmarks/docker-compose.yml up -d` to start both servers for comparison.
Optimization Tips
- Use release builds - `cargo build --release` is 10-50x faster than debug
- Use PMTiles for production - cloud-native, HTTP range request friendly
- Use MBTiles for development - easy to generate and inspect with SQLite tools
- Enable compression - gzipped tiles reduce bandwidth significantly
- Use CDN caching - tiles are immutable, cache with long TTLs
- Match connections to cores - more connections than CPU cores adds overhead
- For COG serving - use internal tiling and overviews for faster access