Best practices
Here is a list of items to check before you deploy your squid to production:
- Make sure that you use batch processing throughout your code. Consider using `@belopash/typeorm-store` for large projects with extensive entity relations and frequent database reads.
- Filter your data in the batch handler. E.g., if you request event logs from a particular contract, check that the `address` field of each returned data item matches the contract address before processing it. This ensures that any future changes to your processor configuration will not cause the newly added data to be routed to your old processing code by mistake.

  Batch handler data filtering used to be compulsory before the release of `@subsquid/[email protected]`. Now it is optional, but highly recommended.
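The two points above can be sketched as follows. The `Log` shape, names and the `processBatch` helper are simplified stand-ins for illustration, not the actual `@subsquid/evm-processor` API:

```typescript
// Illustrative sketch: filter incoming event logs by contract address,
// then process the whole batch at once instead of item by item.

interface Log {
  address: string
  data: string
}

// Hypothetical contract address used throughout this example
const CONTRACT_ADDRESS = '0xabc0000000000000000000000000000000000001'.toLowerCase()

// Keep only the logs emitted by the contract we care about.
function filterLogs(logs: Log[]): Log[] {
  return logs.filter(log => log.address.toLowerCase() === CONTRACT_ADDRESS)
}

// Batch-oriented processing: transform all matching logs first, then
// persist them in a single store call rather than one write per item.
function processBatch(logs: Log[]): string[] {
  const entities: string[] = []
  for (const log of filterLogs(logs)) {
    entities.push(log.data) // decoding/transformation would happen here
  }
  // e.g. await ctx.store.upsert(entities) -- one database round trip per batch
  return entities
}
```
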
- If your squid saves its data to a database, make sure your schema has `@index` decorators on all entities that will be looked up frequently.
- Follow the queries optimization procedure for best results.
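In the squid schema file, indexes are requested with the `@index` directive. A sketch, with entity and field names chosen purely as examples:

```graphql
type Transfer @entity {
  id: ID!
  # single-column indexes for frequent lookups by sender or receiver
  from: String! @index
  to: String! @index
  amount: BigInt!
}

# multi-column index declared at the entity level
type Approval @entity @index(fields: ["owner", "spender"]) {
  id: ID!
  owner: String!
  spender: String!
  amount: BigInt!
}
```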
- If your squid serves a GraphQL API:
  - Do not use OpenReader if your application uses subscriptions. Use PostGraphile or Hasura instead.
  - If you do use OpenReader:
    - configure the built-in DoS protection against heavy queries;
    - configure caching.
  - If you use PostGraphile or Hasura, follow their docs to harden your service in a similar way.
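For OpenReader, both query limits and caching are exposed as `squid-graphql-server` command line flags. A sketch with illustrative values; check `--help` of your server version for the exact option names and defaults:

```bash
npx squid-graphql-server \
  --max-root-fields 10 \
  --max-response-size 1000 \
  --dumb-cache in-memory \
  --dumb-cache-ttl 1000 \
  --dumb-cache-size 100 \
  --dumb-cache-max-age 5000
```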
- If you deploy your squid to SQD Cloud:
  - Deploy your squid to a Professional organization.
  - Use `dedicated: true` in the `scale:` section of the manifest.
  - Make sure that your `scale:` section requests a sufficient but not excessive amount of resources.
  - Set your deployment up for zero downtime updates. Use a tag-based URL, not slot-based URLs, to access your API, e.g. from your frontend app.
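A `scale:` section along these lines covers the two recommendations above; the profiles, storage size and replica count are illustrative, so consult the Cloud manifest reference for the fields your version supports:

```yaml
# squid.yaml (fragment) -- example values only
scale:
  dedicated: true
  addons:
    postgres:
      storage: 20G
      profile: medium
  processor:
    profile: medium
  api:
    profile: medium
    # more than one replica helps keep the API available during updates
    replicas: 2
```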
  - Make sure to use secrets for all sensitive data you may have used in your code. The most common examples are API keys and URLs that contain them.
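Secrets are set with the Cloud CLI and then referenced from the manifest instead of being hard-coded. A sketch; the variable name is an example, and the exact reference syntax may differ between manifest versions:

```yaml
# set once with: sqd secrets set RPC_ENDPOINT <url>
# then reference it in squid.yaml (fragment):
deploy:
  processor:
    env:
      RPC_ENDPOINT: ${{ secrets.RPC_ENDPOINT }}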
  - Follow the recommendations from the Cloud logging page.