
Testing the limits of Supabase + Postgres in a serverless environment

In this article, we will compare how Postgres performs in a serverless environment versus on a traditional server.

The Postgres database will be hosted on Supabase, so we will also compare how an ORM performs against Supabase's JS client library.

We will be comparing the following setups in our testing.

  1. Cloudflare Workers + Supabase JS
  2. Cloudflare Workers + Drizzle ORM
  3. Express + Supabase JS
  4. Express + Drizzle ORM

So what are our assumptions going into this test?

  • Since the serverless environment is ephemeral, it has to establish a new connection to the database on every request. This can be a bottleneck in some cases.
  • With Supabase's JS library, however, requests go to Supabase's REST endpoint, which already has a connection pool in place and is designed to handle very large numbers of concurrent connections.
  • In a traditional Node server, on the other hand, the connection is established once and reused for every request.
  • So which would be faster: querying the database over an already established connection, or going through Supabase JS's REST methods?
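A back-of-the-envelope model makes this trade-off concrete. The sketch below uses made-up costs (`CONNECT_COST_MS` and `QUERY_COST_MS` are illustrative numbers, not measurements) to compare paying the connection-setup price on every request against paying it once:

```javascript
// Minimal simulation of the two connection patterns.
// The latency numbers are invented for illustration; real values
// depend on network distance, TLS handshakes, and Postgres auth.
const CONNECT_COST_MS = 50; // hypothetical cost of opening a new connection
const QUERY_COST_MS = 5;    // hypothetical cost of the query itself

function perRequestConnectionCost(requests) {
  // Serverless worker: every request opens a fresh connection.
  return requests * (CONNECT_COST_MS + QUERY_COST_MS);
}

function reusedConnectionCost(requests) {
  // Long-lived server: connect once, then only pay for queries.
  return CONNECT_COST_MS + requests * QUERY_COST_MS;
}

console.log(perRequestConnectionCost(100)); // 5500
console.log(reusedConnectionCost(100));     // 550
```

With these toy numbers the per-request pattern is roughly ten times more expensive over 100 requests; the real gap is exactly what the benchmark below tries to measure.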

Let’s start by setting up the project.

  1. Create a new Supabase project and run it
Create a new supabase project
supabase init && supabase start

If all goes well, you should be able to access the Supabase dashboard at http://127.0.0.1:54323

  2. Now let's create a new table from the SQL Editor in the dashboard
Create a new table
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name TEXT,
  email TEXT
);
ALTER TABLE users DISABLE ROW LEVEL SECURITY;
  3. Let’s populate the table with 100,000 mock users
Insert some data
INSERT INTO users (name, email)
SELECT
  'User ' || gs AS name,
  'user' || gs || '@example.com' AS email
FROM generate_series(1, 100000) AS gs;
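As a side note, the `generate_series` seed follows a simple naming scheme. The hypothetical helper below (purely illustrative, not part of the setup) produces the same records in JavaScript, which can be handy for sanity-checking API responses later:

```javascript
// Mirrors the SQL seed: 'User ' || gs and 'user' || gs || '@example.com'
// for gs in 1..n. Illustrative only; the real data lives in Postgres.
function mockUsers(n) {
  const users = [];
  for (let gs = 1; gs <= n; gs++) {
    users.push({ name: `User ${gs}`, email: `user${gs}@example.com` });
  }
  return users;
}

console.log(mockUsers(2));
// [ { name: 'User 1', email: 'user1@example.com' },
//   { name: 'User 2', email: 'user2@example.com' } ]
```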
  4. Let’s create the Express server with the following code
express-app/index.js
import express from "express";
import { createClient } from "@supabase/supabase-js";
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";
import { pgTable, serial, text } from "drizzle-orm/pg-core";

const app = express();
const port = 3000;

const supabase = createClient(
  "http://127.0.0.1:54321",
  "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6ImFub24iLCJleHAiOjE5ODM4MTI5OTZ9.CRXP1A7WOeoJeXxjNni43kdQwgnWNReilDMblYTn_I0"
);

const client = postgres(
  "postgresql://postgres:postgres@127.0.0.1:54322/postgres"
);

const usersTable = pgTable("users", {
  id: serial("id").primaryKey(),
  name: text("name").notNull(),
  email: text("email").notNull(),
});

export const db = drizzle(client, {
  schema: {
    usersTable,
  },
});

app.get("/", (req, res) => {
  res.send("Hello World!");
});

app.get("/supabasejs", async (req, res) => {
  try {
    const { data, error } = await supabase.from("users").select().limit(100);
    if (error) return res.status(400).send(error.message);
    res.json(data);
  } catch (e) {
    console.log(e);
    res.status(500).send(e.message);
  }
});

app.get("/drizzle", async (req, res) => {
  try {
    const users = await db.query.usersTable.findMany({
      limit: 100,
    });
    res.json(users);
  } catch (e) {
    console.log(e);
    res.status(500).send(e.message);
  }
});

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`);
});
  5. We will now create the Cloudflare Worker
cf-worker/src/index.ts
import { createClient } from "@supabase/supabase-js";
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";
import { pgTable, serial, text } from "drizzle-orm/pg-core";

const usersTable = pgTable("users", {
  id: serial("id").primaryKey(),
  name: text("name").notNull(),
  email: text("email").notNull(),
});

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/supabasejs") {
      const supabase = createClient(
        "http://127.0.0.1:54321",
        "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6ImFub24iLCJleHAiOjE5ODM4MTI5OTZ9.CRXP1A7WOeoJeXxjNni43kdQwgnWNReilDMblYTn_I0",
      );
      const { data, error } = await supabase.from("users").select().limit(100);
      if (error) return new Response(error.message, { status: 500 });
      return new Response(JSON.stringify(data), {
        headers: {
          "content-type": "application/json",
        },
      });
    } else if (url.pathname === "/drizzle") {
      const client = postgres(
        "postgresql://postgres:postgres@127.0.0.1:54322/postgres",
      );
      const db = drizzle(client, {
        schema: {
          usersTable,
        },
      });
      const users = await db.query.usersTable.findMany({
        limit: 100,
      });
      return new Response(JSON.stringify(users), {
        headers: {
          "content-type": "application/json",
        },
      });
    }
    return new Response("Hello World!");
  },
} satisfies ExportedHandler<Env>;
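The worker dispatches on `url.pathname` only. Isolated as a plain function (a sketch for clarity, not code you need to add), the routing decision is:

```javascript
// The worker's routing, reduced to a pure function:
// each benchmark endpoint is selected by exact pathname match,
// and anything else falls through to the "Hello World!" response.
function route(pathname) {
  if (pathname === "/supabasejs") return "supabasejs";
  if (pathname === "/drizzle") return "drizzle";
  return "default";
}

console.log(route("/drizzle"));  // drizzle
console.log(route("/anything")); // default
```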

After starting both servers, you should have four routes available for testing.

  1. http://localhost:3000/supabasejs - Express server using Supabase JS
  2. http://localhost:3000/drizzle - Express server using Drizzle ORM
  3. http://localhost:8787/supabasejs - Cloudflare Worker using Supabase JS
  4. http://localhost:8787/drizzle - Cloudflare Worker using Drizzle ORM

Now let's create a simple script that will test the performance of these routes.

  6. Create the load-testing script. We will use autocannon for this.
test.js
import { stdout, stderr } from "node:process";
import autocannon, { printResult } from "autocannon";

function print(result) {
  const printedResult = printResult(result);
  stdout.write(printedResult);
  return printedResult;
}

const urls = [
  "http://localhost:8787/supabasejs",
  "http://localhost:8787/drizzle",
  "http://localhost:3000/supabasejs",
  "http://localhost:3000/drizzle",
];

async function loadTest(url) {
  return new Promise((resolve, reject) => {
    const instance = autocannon(
      {
        url,
        connections: 100,
        pipelining: 100,
        duration: 60,
      },
      (err, result) => {
        if (err) {
          reject(err);
        } else {
          const printedResult = print(result);
          resolve({ result, printedResult, responseCounts });
        }
      },
    );
    let completedRequests = 0;
    const responseCounts = {
      "2xx": 0,
      "3xx": 0,
      "4xx": 0,
      "5xx": 0,
      other: 0,
    };
    instance.on("response", (client, statusCode) => {
      completedRequests += 1;
      if (statusCode >= 200 && statusCode < 300) {
        responseCounts["2xx"] += 1;
      } else if (statusCode >= 300 && statusCode < 400) {
        responseCounts["3xx"] += 1;
      } else if (statusCode >= 400 && statusCode < 500) {
        responseCounts["4xx"] += 1;
      } else if (statusCode >= 500 && statusCode < 600) {
        responseCounts["5xx"] += 1;
      } else {
        responseCounts.other += 1;
      }
      console.log(`Completed Requests for ${url}: ${completedRequests}`);
      console.log(
        `2xx: ${responseCounts["2xx"]}, 3xx: ${responseCounts["3xx"]}, 4xx: ${responseCounts["4xx"]}, 5xx: ${responseCounts["5xx"]}, other: ${responseCounts.other}`,
      );
    });
  });
}

async function runTests() {
  const results = [];
  for (const url of urls) {
    console.log(`Running load test for ${url}`);
    try {
      const result = await loadTest(url);
      results.push(result);
      console.log(`Completed load test for ${url}\n`);
    } catch (error) {
      stderr.write(`Error during load test for ${url}: ${error.message}\n`);
    }
  }
  summarizeResults(results);
}

function summarizeResults(results) {
  console.log("\nSummary of Results:\n");
  results.forEach(({ result, printedResult, responseCounts }) => {
    console.log(`Results for ${result.url}:`);
    console.log(printedResult);
    console.log(`Response Codes Summary for ${result.url}:`);
    console.log(
      `2xx: ${responseCounts["2xx"]}, 3xx: ${responseCounts["3xx"]}, 4xx: ${responseCounts["4xx"]}, 5xx: ${responseCounts["5xx"]}, other: ${responseCounts.other}`,
    );
    console.log("\n");
  });
}

runTests().catch(console.error);
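The per-response bookkeeping in `test.js` reduces to a small bucketing function. Extracted on its own (illustrative only; `bucket` is a hypothetical name, not part of the script above):

```javascript
// Classify an HTTP status code into the buckets tracked by test.js:
// 2xx/3xx/4xx/5xx ranges, everything else counted as "other".
function bucket(statusCode) {
  if (statusCode >= 200 && statusCode < 300) return "2xx";
  if (statusCode >= 300 && statusCode < 400) return "3xx";
  if (statusCode >= 400 && statusCode < 500) return "4xx";
  if (statusCode >= 500 && statusCode < 600) return "5xx";
  return "other";
}

console.log(bucket(200)); // 2xx
console.log(bucket(503)); // 5xx
```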

Results from running the test for 60 seconds with 100 connections and a pipelining factor of 100:

Total Requests

Total number of requests for different configurations (higher is better)

Requests per Second

Number of requests per second for different configurations (higher is better)

Bytes per Second

Number of bytes transferred per second for different configurations (higher is better)

Latency

Latency for different configurations (lower is better)

Conclusion

From the results, we can see that the biggest performance bottleneck is connection establishment in the serverless environment. This is most evident in the Drizzle numbers: latency for the Cloudflare Worker, which opens a new connection on every request, is far higher than for the traditional server, where the connection is reused.

You can find the full source code here

Log dumps

Results for CF Workers + Supabase:

| Stat    | 2.5%    | 50%      | 97.5%    | 99%      | Avg         | Stdev      | Max      |
|---------|---------|----------|----------|----------|-------------|------------|----------|
| Latency | 1795 ms | 10858 ms | 11629 ms | 11682 ms | 10067.33 ms | 2468.93 ms | 11839 ms |

| Stat      | 1%      | 2.5%    | 50%     | 97.5%   | Avg     | Stdev  | Min     |
|-----------|---------|---------|---------|---------|---------|--------|---------|
| Req/Sec   | 570     | 752     | 901     | 1,031   | 900.52  | 79.41  | 570     |
| Bytes/Sec | 3.34 MB | 4.41 MB | 5.28 MB | 6.04 MB | 5.28 MB | 465 kB | 3.34 MB |

Req/Bytes counts sampled once per second.

Number of samples: 60

64k requests in 60.07s, 317 MB read

Response Codes Summary for http://localhost:8787/supabasejs:
2xx: 54031, 3xx: 0, 4xx: 0, 5xx: 0, other: 0

Results for CF Workers + Drizzle:

| Stat    | 2.5%    | 50%      | 97.5%    | 99%      | Avg        | Stdev       | Max      |
|---------|---------|----------|----------|----------|------------|-------------|----------|
| Latency | 1089 ms | 17000 ms | 34263 ms | 36654 ms | 17420.1 ms | 10019.33 ms | 42703 ms |

| Stat      | 1%  | 2.5% | 50%     | 97.5%   | Avg     | Stdev  | Min     |
|-----------|-----|------|---------|---------|---------|--------|---------|
| Req/Sec   | 0   | 0    | 210     | 433     | 218.29  | 178.32 | 3       |
| Bytes/Sec | 0 B | 0 B  | 1.08 MB | 2.31 MB | 1.13 MB | 929 kB | 17.6 kB |

Req/Bytes counts sampled once per second.

Number of samples: 60

11302 2xx responses, 1795 non-2xx responses
30k requests in 60.06s, 68 MB read
7k errors (7k timeouts)

Response Codes Summary for http://localhost:8787/drizzle:
2xx: 11302, 3xx: 0, 4xx: 0, 5xx: 1795, other: 0

Results for Express.js + Supabase:

| Stat    | 2.5%   | 50%      | 97.5%    | 99%      | Avg        | Stdev     | Max      |
|---------|--------|----------|----------|----------|------------|-----------|----------|
| Latency | 105 ms | 10024 ms | 12764 ms | 13425 ms | 7111.44 ms | 4314.4 ms | 20199 ms |

| Stat      | 1%  | 2.5% | 50%     | 97.5%   | Avg     | Stdev    | Min     |
|-----------|-----|------|---------|---------|---------|----------|---------|
| Req/Sec   | 0   | 0    | 551     | 3,431   | 976.75  | 1,094.45 | 5       |
| Bytes/Sec | 0 B | 0 B  | 1.57 MB | 6.23 MB | 2.12 MB | 2.07 MB  | 9.69 kB |

Req/Bytes counts sampled once per second.

Number of samples: 60

19477 2xx responses, 39123 non-2xx responses
84k requests in 60.6s, 128 MB read
15k errors (15k timeouts)

Response Codes Summary for http://localhost:3000/supabasejs:
2xx: 19477, 3xx: 0, 4xx: 39123, 5xx: 0, other: 0

Results for Express.js + Drizzle:

| Stat    | 2.5%    | 50%     | 97.5%   | 99%     | Avg        | Stdev     | Max     |
|---------|---------|---------|---------|---------|------------|-----------|---------|
| Latency | 2299 ms | 2715 ms | 4045 ms | 4304 ms | 2891.94 ms | 467.38 ms | 4636 ms |

| Stat      | 1%     | 2.5%    | 50%     | 97.5%   | Avg      | Stdev   | Min    |
|-----------|--------|---------|---------|---------|----------|---------|--------|
| Req/Sec   | 763    | 919     | 3,601   | 3,837   | 3,382.24 | 617.51  | 763    |
| Bytes/Sec | 4.6 MB | 5.54 MB | 21.7 MB | 23.1 MB | 20.4 MB  | 3.72 MB | 4.6 MB |

Req/Bytes counts sampled once per second.

Number of samples: 60

213k requests in 60.03s, 1.22 GB read

Response Codes Summary for http://localhost:3000/drizzle:
2xx: 202910, 3xx: 0, 4xx: 0, 5xx: 0, other: 0

Combined Results Comparison

| Metric         | CF Workers + Supabase | CF Workers + Drizzle | Express.js + Supabase | Express.js + Drizzle |
|----------------|-----------------------|----------------------|-----------------------|----------------------|
| Latency (ms)   |                       |                      |                       |                      |
| 2.5%           | 1795                  | 1089                 | 105                   | 2299                 |
| 50%            | 10858                 | 17000                | 10024                 | 2715                 |
| 97.5%          | 11629                 | 34263                | 12764                 | 4045                 |
| 99%            | 11682                 | 36654                | 13425                 | 4304                 |
| Avg            | 10067.33              | 17420.1              | 7111.44               | 2891.94              |
| Stdev          | 2468.93               | 10019.33             | 4314.4                | 467.38               |
| Max            | 11839                 | 42703                | 20199                 | 4636                 |
| Req/Sec        |                       |                      |                       |                      |
| 1%             | 570                   | 0                    | 0                     | 763                  |
| 2.5%           | 752                   | 0                    | 0                     | 919                  |
| 50%            | 901                   | 210                  | 551                   | 3,601                |
| 97.5%          | 1,031                 | 433                  | 3,431                 | 3,837                |
| Avg            | 900.52                | 218.29               | 976.75                | 3,382.24             |
| Stdev          | 79.41                 | 178.32               | 1,094.45              | 617.51               |
| Min            | 570                   | 3                    | 5                     | 763                  |
| Bytes/Sec      |                       |                      |                       |                      |
| 1%             | 3.34 MB               | 0 B                  | 0 B                   | 4.6 MB               |
| 2.5%           | 4.41 MB               | 0 B                  | 0 B                   | 5.54 MB              |
| 50%            | 5.28 MB               | 1.08 MB              | 1.57 MB               | 21.7 MB              |
| 97.5%          | 6.04 MB               | 2.31 MB              | 6.23 MB               | 23.1 MB              |
| Avg            | 5.28 MB               | 1.13 MB              | 2.12 MB               | 20.4 MB              |
| Stdev          | 465 kB                | 929 kB               | 2.07 MB               | 3.72 MB              |
| Min            | 3.34 MB               | 17.6 kB              | 9.69 kB               | 4.6 MB               |
| Total Requests | 64k in 60.07s         | 30k in 60.06s        | 84k in 60.6s          | 213k in 60.03s       |
| Total Data     | 317 MB read           | 68 MB read           | 128 MB read           | 1.22 GB read         |
| Response Codes |                       |                      |                       |                      |
| 2xx            | 54031                 | 11302                | 19477                 | 202910               |
| 3xx            | 0                     | 0                    | 0                     | 0                    |
| 4xx            | 0                     | 0                    | 39123                 | 0                    |
| 5xx            | 0                     | 1795                 | 0                     | 0                    |
| Other          | 0                     | 0                    | 0                     | 0                    |
| Errors         | 0                     | 7k timeouts          | 15k timeouts          | 0                    |
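To put the combined results in perspective, here is a quick calculation on the average Req/Sec values above (numbers copied from this single run, so treat the ratios as rough):

```javascript
// Average Req/Sec figures taken from the comparison table above.
const avgReqPerSec = {
  cfWorkersSupabase: 900.52,
  cfWorkersDrizzle: 218.29,
  expressSupabase: 976.75,
  expressDrizzle: 3382.24,
};

// How much faster Express + Drizzle was than the two Worker setups.
const vsWorkersDrizzle =
  avgReqPerSec.expressDrizzle / avgReqPerSec.cfWorkersDrizzle;
const vsWorkersSupabase =
  avgReqPerSec.expressDrizzle / avgReqPerSec.cfWorkersSupabase;

console.log(vsWorkersDrizzle.toFixed(1));  // 15.5
console.log(vsWorkersSupabase.toFixed(1)); // 3.8
```

In other words, for this run the reused-connection Express + Drizzle setup sustained roughly 15x the throughput of the per-request-connection Worker + Drizzle setup.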