Query optimization
Before going to production, ensure that none of the queries that your app generates are executed suboptimally. Failure to do so can cause severe performance issues.
The optimization procedure involves executing a representative sample of your app's queries, analyzing the slow ones and iteratively adding any helpful indexes. Here are the steps:
1. Create a development deployment of your squid.
   - Use the squid name that you intend to use in production, a new slot and (optionally) a dev tag. See the Slots and tags guide.
2. Wait for your dev deployment to sync.
3. Configure your dev deployment to log mildly slow SQL queries.
   - Set `log_min_duration_statement` to some low value in your manifest:
     ```yaml
     deploy:
       addons:
         postgres:
           config:
             log_min_duration_statement: 50
     ...
     ```
     I'm using 50 ms = 0.05 s here. All queries that take longer than that to execute will now be logged by the `db` container.
   - Update your dev deployment, e.g. with `sqd deploy -s <dev-slot> .`
4. Send a representative set of queries to your dev deployment. For this step you can use automated or manual tests of your API consumer app.
5. View the slow queries using `sqd logs`, e.g.
   ```bash
   sqd logs -f -n <my-squid-name> -s <dev-slot> -c db
   ```
6. Connect to your dev deployment's database with `psql`. You can find the connection command by going to the squids view of the network app > your dev deployment's page > "DB access" tab.
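
   Once connected, it can help to check which indexes already exist before experimenting. Below is a minimal sketch that uses the standard `pg_indexes` system view; the `transfer` table name is a placeholder, and the query assumes your tables live in the default `public` schema:

   ```sql
   -- List the indexes currently defined on a table (replace the placeholder table name)
   SELECT indexname, indexdef
   FROM pg_indexes
   WHERE schemaname = 'public'
     AND tablename = 'transfer';
   ```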
7. Execute the slow queries with `EXPLAIN ANALYZE` and view their execution plans.
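
   For example, copy a query from the slow query log and prepend `EXPLAIN ANALYZE` to it. The query below is purely illustrative (a hypothetical `transfer` table filtered by `from_id`); substitute your own logged queries. In the resulting plan, sequential scans over large tables and high actual execution times are the usual hints that an index is missing.

   ```sql
   -- Illustrative only: EXPLAIN ANALYZE on a query taken from the slow query log
   EXPLAIN ANALYZE
   SELECT id, value, block_number
   FROM transfer
   WHERE from_id = '0x0000000000000000000000000000000000000000'
   ORDER BY block_number DESC
   LIMIT 10;
   ```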
8. Hypothesize about which indexes could be used to speed up your queries.
   - Aim to use as few indexes as possible: having more than necessary can hurt performance.
   - One possible strategy is to use ChatGPT. Just give it the execution plan and ask which indexes should be added. If it struggles, give it your database schema as well (e.g. in the form of your `schema.graphql` file listing). It is also useful to ask it to explain its suggestions.
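
   To illustrate, for the hypothetical query from the previous step a reasonable hypothesis could be a plain index on the filtered column, or a composite index that also covers the sort column. Both definitions below are assumptions made for the sake of the example; test them one at a time and keep only what measurably improves the plan. In the regular flow you would express the winning index as an `@index` statement in `schema.graphql` (next step) rather than create it by hand.

   ```sql
   -- Hypothetical candidates for the illustrative query above; keep only the one that helps
   CREATE INDEX transfer_from_id_idx ON transfer (from_id);
   CREATE INDEX transfer_from_id_block_number_idx ON transfer (from_id, block_number);
   ```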
9. Update the indexes of your dev deployment's database:
   - Ensure that your local database container schema is the same as the schema of your dev deployment's database. Assuming that the codebases are the same,
     ```bash
     docker compose up -d
     npm run build
     npx squid-typeorm-migration apply
     ```
     should do the trick. If you encounter problems, wipe the database with `docker compose down` and repeat the step.
   - Add or remove some `@index` statements in your squid's schema file.
   - Regenerate the TypeORM model classes:
     ```bash
     npx squid-typeorm-codegen
     ```
   - Build your code:
     ```bash
     npm run build
     ```
   - Generate an incremental migration file:
     ```bash
     npx squid-typeorm-migration generate
     ```
   - Update your dev deployment in the Cloud without resetting its database, e.g. with `sqd deploy -s <dev-slot> .`

   The main guide for this step is in this section.
10. Repeat steps 4-9 until there are no more slow queries.
    - If your queries are complicated, you can skip step 9 and add indexes manually in `psql`, e.g. like this:
      ```sql
      CREATE INDEX "my_idx" ON my_table (column);
      ```
      This can drastically reduce the time you spend per iteration, but then you'll have to manually add the combination of indexes that worked to `schema.graphql` and redo the whole optimization procedure once more to ensure consistency. Otherwise, you may run into unexpected performance issues due to lost/changed indexes as you redeploy or modify your squid.
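
      When iterating manually like this, consider also dropping the hypotheses that did not pay off, so that leftover indexes don't skew later measurements. A sketch, using the placeholder index name from the example above:

      ```sql
      -- Remove a manually created index that did not improve the plans
      DROP INDEX IF EXISTS "my_idx";
      ```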
11. Set a less strict rule for logging slow queries, e.g. as follows:
    ```yaml
    deploy:
      addons:
        postgres:
          config:
            log_min_duration_statement: 1000 # 1 second
    ...
    ```
    Update the deployment:
    ```bash
    sqd deploy -s <dev-slot> .
    ```
12. Mark your development deployment for production use.
    - If you used a development tag, remove it with `sqd tags remove`.
    - Add a production tag to the deployment with `sqd tags add`.

    See also the Zero-downtime updates section.