Best practices
Here is a list of things to check before you deploy your squid to production:
- Make sure that you use batch processing throughout your code. Consider using `@belopash/typeorm-store` for large projects with extensive entity relations and frequent database reads.
- Filter your data in the batch handler. E.g., if you request event logs from a particular contract, check that the `address` field of each returned data item matches the contract address before processing it (see the sketch after this list). This ensures that any future changes in your processor configuration will not cause the newly added data to be routed to your old processing code by mistake. Batch handler data filtering used to be compulsory before the release of `@subsquid/[email protected]`; now it is optional but highly recommended.
- If your squid saves its data to a database, make sure your schema has `@index` decorators for all entities that will be looked up frequently (see the schema sketch after this list).
- If your squid serves a GraphQL API, consider (see the flags sketch after this list):
  - configuring the built-in DoS protection against heavy queries;
  - configuring caching.
- If you deploy your squid to Subsquid Cloud:
  - Deploy your squid to a Professional organization.
  - Use `dedicated: true` in the `scale:` section of the manifest.
  - Make sure that your `scale:` section requests a sufficient but not excessive amount of resources (see the manifest sketch after this list).
  - Once you deploy, set a production alias URL to simplify subsequent updates. Use it, rather than the API URLs of individual squid versions, to access your API, e.g. from your frontend app.
  - Make sure to use secrets for all sensitive data you may have used in your code. The most common examples are API keys and URLs containing them (see the secrets sketch after this list).
  - Follow the recommendations from the Cloud logging page.
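
Below is a minimal sketch of in-handler filtering for an EVM squid. The `./processor` module and the `CONTRACT_ADDRESS` constant are assumptions made for the example; adapt the names to your project. Addresses in the returned data items are typically lowercased, so compare against a lowercase constant.

```typescript
import {TypeormDatabase} from '@subsquid/typeorm-store'
// `processor` is your configured batch processor instance;
// CONTRACT_ADDRESS is a hypothetical lowercase contract address constant.
import {processor, CONTRACT_ADDRESS} from './processor'

processor.run(new TypeormDatabase(), async (ctx) => {
  for (const block of ctx.blocks) {
    for (const log of block.logs) {
      // Filter in the batch handler: skip any logs that do not come
      // from the contract this code was written for.
      if (log.address !== CONTRACT_ADDRESS) continue
      // ...decode the log and accumulate entity instances here
    }
  }
  // ...save the accumulated entities in a single batch via ctx.store here
})
```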
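A schema sketch with `@index` decorators on fields that are expected to be queried frequently. The `Transfer` entity and its fields are hypothetical:

```graphql
type Transfer @entity {
  id: ID!
  from: String! @index
  to: String! @index
  amount: BigInt!
  blockNumber: Int! @index
  timestamp: DateTime!
}
```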
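For the GraphQL API item, DoS protection and caching are configured via flags passed to the GraphQL server process. The manifest layout, flag names and values below are a sketch only; verify them against the DoS protection and caching documentation pages and your version of `@subsquid/graphql-server`.

```yaml
# squid.yaml fragment (assumed layout): passing query limits and caching
# flags to the GraphQL server. All values are illustrative.
deploy:
  api:
    cmd:
      - npx
      - squid-graphql-server
      - --max-root-fields
      - "10"
      - --max-response-size
      - "1000"
      - --dumb-cache
      - in-memory
      - --dumb-cache-ttl
      - "5000"
```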
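A sketch of a `scale:` manifest section for a dedicated Cloud deployment. The profile names, the addon list and the replica count are illustrative assumptions; size them to your actual load rather than copying them verbatim.

```yaml
# squid.yaml fragment: request dedicated resources, but only as much as needed
scale:
  dedicated: true
  addons:
    postgres:
      profile: medium
  processor:
    profile: medium
  api:
    replicas: 2
    profile: small
```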
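For the secrets item, a sketch of referencing a Cloud secret from the manifest. `RPC_ENDPOINT` is a hypothetical variable name, and the secret itself would be created separately in the Cloud (e.g. via the Squid CLI's secrets command, if available in your version).

```yaml
# squid.yaml fragment: keep the RPC URL with its embedded API key out of the code
deploy:
  env:
    RPC_ENDPOINT: ${{ secrets.RPC_ENDPOINT }}
```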