
What's the fastest serverless database provider?

July 19, 2023

What’s the fastest serverless database provider, in terms of latency?

Here, I’m defining “serverless database provider” as something that works with serverless applications without major configuration. This means traditional dedicated databases like AWS RDS are not included. Managing connection pools sucks.

I decided to focus on latency since it’s a relatively objective metric that can be compared without requiring too much math or knowledge. It can be easily observed and directly affects your application’s performance.

All numbers shown are in milliseconds unless mentioned otherwise.

Database providers

Here is the list of database providers and methods I compared. I picked these based on a few conditions.

I’m also going to compare different drivers (notably queries via HTTP vs TCP).

| Database Provider | Driver | Language | Notes |
|---|---|---|---|
| CockroachDB | pg driver (TCP) | Cockroach | |
| Fauna | faunadb driver (HTTP) | FQL | Hosted in United States region group |
| Firestore | firebase SDK (HTTP) | (Document based) | |
| MongoDB Atlas | mongodb driver (TCP) | MongoDB | |
| Neon | @neondatabase/serverless driver (HTTP with connection caching) | PostgreSQL | |
| Neon | @neondatabase/serverless driver (WebSocket) | PostgreSQL | |
| Neon | pg driver (TCP) | PostgreSQL | Built-in pooling |
| PlanetScale | @planetscale/database driver (HTTP) | MySQL | No foreign key support |
| PlanetScale | mysql2 (TCP) | MySQL | No foreign key support |
| Supabase | @supabase/supabase-js client (HTTP) | PostgreSQL | RLS disabled |
| Supabase | pg (TCP) | PostgreSQL | Built-in pooling with pgBouncer |
| Turso | libsql (HTTP) | LibSQL (SQLite) | Transactions supported via batched queries |
| Turso | libsql (WebSocket) | LibSQL (SQLite) | |
| Vercel Postgres | @vercel/postgres | PostgreSQL | Based on Neon |

In addition, I also tried out some Redis providers.

| Database Provider | Driver | Language | Notes |
|---|---|---|---|
| Upstash Redis | @upstash/redis client (HTTP) | Redis | |
| Upstash Redis | ioredis (TCP) | Redis | |
| Vercel KV | @vercel/kv | Redis | Based on Upstash Redis |

Methodology

I created an empty database table with each provider. I will not be populating them since, again, I am testing latency, not query speeds. The database will be hosted in AWS us-east-1 or GCP us-east4, and on the cheapest tier (i.e. free). The API route making the queries will also be hosted on Vercel Serverless Functions in AWS us-east-1.

To replicate a serverless application, I set up an API route that will make 3 consecutive “get all entries” queries. The time it took to finish each of those queries will be recorded. This API route will be called 5 times every 0.5 seconds, or 10 requests (30 database calls) per second, which I will continue for 60 seconds. In total, 600 requests and 1800 database queries will be made in a minute. This number is quite arbitrary, but that should cover anywhere from around 10k daily users (250 queries/user) to 50k daily users (50 queries/user). I also tried quadrupling the queries and results were more or less similar.
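The benchmark route can be sketched roughly like this. This is a minimal TypeScript sketch, not the actual test code: `getAllEntries` is a stand-in for each provider’s own “get all entries” call, and the real route ran on Vercel Serverless Functions.

```typescript
// Minimal sketch of the benchmark route. `getAllEntries` is a stand-in
// for each provider's own "get all entries" query; the real test used
// the drivers listed in the tables above.
async function timeQueries(
  getAllEntries: () => Promise<unknown>,
  count = 3
): Promise<number[]> {
  const timings: number[] = [];
  for (let i = 0; i < count; i++) {
    const start = performance.now();
    await getAllEntries(); // consecutive, not parallel
    timings.push(performance.now() - start);
  }
  return timings; // one latency sample (in ms) per query
}
```

Calling this route repeatedly from the load generator yields the 1800 raw samples, with the first sample of each invocation counted as an “initial” query and the rest as “subsequent” queries.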

Anyway, I ruled out AppWrite Cloud since it can currently only be hosted in Frankfurt, and Xata since its rate limiting did not allow me to make enough requests. Globally replicated databases such as Cloudflare D1/KV and Deno KV were also omitted since they’re out of scope for this test and require a different testing approach.

I did not measure cold starts, and I ran a single query before the 1800 queries to account for them.

Some quick observations

Before we look at the numbers, here are some small observations I made. I have also omitted some databases from the final results and included my reasons here.

PlanetScale HTTP vs TCP

To my surprise, there weren’t any differences between using PlanetScale’s serverless driver and a regular MySQL driver (mysql2) with PlanetScale. I’d expected the TCP connection to be slow on initial queries (around 10~100ms) and super fast on subsequent queries, which was the case for every other TCP connection. However, latency was the same across the 2 query types. As such, I’ll be omitting PlanetScale on TCP connections from further results since they’re nearly identical to HTTP connections.

Issues with WebSocket connections

For both Neon and Turso on WebSocket connections, big hikes were observed for initial queries, sometimes reaching 1000ms. I’ll be omitting Neon with WebSocket connections since regular TCP connections were better in all metrics, and WebSocket connections are intended for environments without TCP support (edge functions). On the other hand, Turso with WebSocket will still be included as interactive transactions are not supported over HTTP.

Neon + WebSocket initial query

| Average | 99th percentile | Maximum |
|---|---|---|
| 57 | 1050 | 1062 |

Vercel

Unsurprisingly, there were no noticeable differences between Vercel Postgres and Neon, and Vercel KV and Upstash Redis. With Vercel’s offering being more expensive, I don’t see a single reason to use them. I think I preferred the DX of Neon and Upstash as well.

Upstash HTTP vs TCP

While HTTP saw better initial queries, TCP saw more stable query speeds.

Initial query

| Name | Average | 90th Percentile | 95% | 99% | Maximum |
|---|---|---|---|---|---|
| Upstash Redis (HTTP) | 5 | 6 | 13 | 79 | 109 |
| Upstash Redis (TCP) | 7 | 12 | 15 | 32 | 79 |

Subsequent query

| Name | Average | 90th Percentile | 95% | 99% | Maximum |
|---|---|---|---|---|---|
| Upstash Redis (HTTP) | 4 | 5 | 15 | 40 | 63 |
| Upstash Redis (TCP) | 2 | 8 | 8 | 10 | 19 |
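For reference, the summary statistics these tables report (average plus percentiles) can be derived from the raw timings with something like the following. This is my own sketch using the nearest-rank percentile method; the post doesn’t show its actual aggregation code.

```typescript
// Sketch of deriving per-table summary stats from raw timing samples
// (nearest-rank percentiles); not the post's actual aggregation code.
function summarize(samples: number[]) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank percentile: the value at rank ceil(p/100 * n), 1-indexed.
  const pct = (p: number) =>
    sorted[Math.max(0, Math.ceil((p / 100) * sorted.length) - 1)];
  return {
    average: sorted.reduce((sum, t) => sum + t, 0) / sorted.length,
    p90: pct(90),
    p95: pct(95),
    p99: pct(99),
    max: sorted[sorted.length - 1],
  };
}
```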

Cold starts

While I did not run tests for measuring cold starts, I did run some queries manually.

Neon, which includes Vercel Postgres, seems to have cold starts usually ranging anywhere from 500ms~1000ms, but sometimes nearly 3 seconds, after a few minutes of inactivity.

Overview

The general trend is that TCP connections are fast once connected, while HTTP connections aren’t as fast but provide stable query speeds all around. That said, there are some clear outliers. For one, PlanetScale and Neon are hitting sub-10ms latency, and Supabase with TCP provides both fast connections and queries. I’m also pretty impressed by CockroachDB since it doesn’t use any pooling under the hood.

MongoDB Atlas has the best query speeds for subsequent queries but has abysmal initial connection time. I’m also not sure why Supabase with HTTP is slower than their competitors (Edit: It looks like the auth middleware which isn’t included when used with TCP is the cause).

Keep in mind the averages have a margin of error from 1ms (for low numbers) to 10ms (for high numbers). This is based on my observations; no real calculations were done.

In the chart below, the top bar represents the latency of initial queries, and the bottom represents the latency of subsequent queries.

Chart comparing 90th, 95th, and 99th percentile of initial and subsequent queries

Initial query

| Name | Average | 90th Percentile | 95% | 99% | Maximum |
|---|---|---|---|---|---|
| CockroachDB | 55 | 60 | 75 | 144 | 1068 |
| Fauna | 44 | 52 | 69 | 114 | 129 |
| Firestore | 41 | 42 | 58 | 371 | 420 |
| MongoDB Atlas | 131 | 141 | 153 | 245 | 5122 |
| Neon (HTTP + connection caching) | 5 | 8 | 10 | 17 | 35 |
| Neon (TCP) | 55 | 67 | 76 | 96 | 134 |
| PlanetScale (HTTP) | 8 | 12 | 15 | 24 | 94 |
| Supabase (HTTP) | 41 | 50 | 58 | 137 | 205 |
| Supabase (TCP) | 11 | 13 | 18 | 26 | 11 |
| Turso (HTTP) | 27 | 41 | 45 | 56 | 109 |
| Turso (WebSocket) | 42 | 36 | 63 | 352 | 1530 |

Subsequent query

| Name | Average | 90th Percentile | 95% | 99% | Maximum |
|---|---|---|---|---|---|
| CockroachDB | 4 | 4 | 5 | 9 | 14 |
| Fauna | 29 | 34 | 36 | 96 | 135 |
| Firestore | 29 | 39 | 42 | 55 | 118 |
| MongoDB Atlas | <1 | <1 | <1 | <1 | 14 |
| Neon (HTTP + connection caching) | 4 | 8 | 10 | 19 | 37 |
| Neon (TCP) | 2 | 3 | 6 | 9 | 26 |
| PlanetScale (HTTP) | 7 | 11 | 15 | 29 | 59 |
| Supabase (HTTP) | 38 | 48 | 55 | 72 | 207 |
| Supabase (TCP) | 1 | 1 | 1 | 4 | 7 |
| Turso (HTTP) | 22 | 41 | 44 | 68 | 115 |
| Turso (WebSocket) | 10 | 6 | 16 | 332 | 495 |

Expected latency

| Name | Initial query | Subsequent query | 1 query | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|
| CockroachDB | 55 | 4 | 55 | 59 | 63 | 67 | 71 |
| Fauna | 44 | 29 | 44 | 73 | 103 | 132 | 161 |
| Firestore | 41 | 29 | 41 | 70 | 99 | 127 | 156 |
| MongoDB Atlas | 131 | 0 | 131 | 131 | 132 | 132 | 132 |
| Neon HTTP (connection caching) | 5 | 4 | 5 | 9 | 14 | 18 | 23 |
| Neon TCP | 55 | 2 | 55 | 57 | 59 | 61 | 63 |
| PlanetScale HTTP | 8 | 7 | 8 | 15 | 23 | 30 | 37 |
| Supabase HTTP | 41 | 38 | 41 | 79 | 118 | 156 | 195 |
| Supabase TCP | 11 | 1 | 11 | 12 | 13 | 14 | 15 |
| Turso HTTP | 23 | 22 | 23 | 45 | 68 | 90 | 113 |
| Turso WebSocket | 42 | 10 | 42 | 52 | 62 | 72 | 82 |
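The expected-latency columns above follow a simple sequential model: one initial (cold-connection) query plus (n − 1) subsequent queries, with small differences due to rounding of the underlying averages. As a sketch:

```typescript
// Expected latency (ms) for n sequential queries in one invocation:
// one initial query, then (n - 1) subsequent ones.
function expectedLatency(initial: number, subsequent: number, n: number): number {
  return initial + (n - 1) * subsequent;
}

// CockroachDB from the table above: initial 55, subsequent 4.
expectedLatency(55, 4, 5); // → 71, matching the "5 queries" column
```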
NoSQL databases

Chart comparing the expected latency of NoSQL databases

SQL databases

Chart comparing the expected latency of SQL databases

SQL databases with transaction support

Chart comparing the expected latency of SQL databases with transaction support

Conclusions

I don’t think there are any wrong options per se, but I think PlanetScale and Supabase with TCP can be considered “winners.” Both provide low latency without major hikes and support transactions. The ultra low latency of Neon + HTTP is impressive, but I think the lack of transaction support is a big drawback, at least for me. PlanetScale also doesn’t support foreign keys, which may be a deal-breaker for some (though apparently it’s coming soon™). All that being said, I think it’s safe to say all 3 providers provide fast query speeds, and the performance is probably comparable (though I have no data to back that up). That just leaves pricing and features, which are easier to compare.

Supabase Auth (RLS) doesn’t work out of the box when it’s used over TCP, though my opinion is that security checks should be handled by your server rather than the database. I also want to recommend libraries like Kysely, which makes writing queries easier without the performance implications of using hefty ORMs like Prisma (Pro tip: avoid Prisma).

I kinda wish it was easier to set up your own server, which would allow me to establish a connection once and reuse that for ultra-fast queries. Of course I’m aware that that brings its own issues, but it’s definitely something I’d like to explore.