Best practices

Check the items below before deploying your squid to production:

  • Make sure that you use batch processing throughout your code. Consider using @belopash/typeorm-store for large projects with extensive entity relations and frequent database reads.
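The core idea of batch processing is to accumulate entities in memory across the whole batch and persist them with a single database call, instead of one round trip per data item. A minimal sketch in plain TypeScript (no SDK imports; `MockStore` is a hypothetical stand-in for the store's `upsert`):

```typescript
interface Transfer {
  id: string
  from: string
  to: string
  value: bigint
}

// Hypothetical stand-in for a database store: counts how many
// round trips a batch costs.
class MockStore {
  roundTrips = 0
  saved: Transfer[] = []
  upsert(entities: Transfer[]): void {
    this.roundTrips += 1
    this.saved.push(...entities)
  }
}

function processBatch(store: MockStore, rawLogs: Transfer[]): void {
  // Accumulate in a Map to deduplicate by id within the batch
  const transfers = new Map<string, Transfer>()
  for (const log of rawLogs) {
    transfers.set(log.id, {...log})
  }
  // One write for the whole batch, however many items it contains
  store.upsert([...transfers.values()])
}
```

The same pattern applies to reads: look up all the entities a batch needs in one query up front, then work against the in-memory map.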

  • Filter your data in the batch handler. For example, if you request event logs from a particular contract, check that the address field of each returned data item matches the contract address before processing it. This ensures that future changes to your processor configuration will not route newly added data to your old processing code by mistake.

info

Batch handler data filtering used to be compulsory before the release of @subsquid/[email protected]. Now it is optional but highly recommended.
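Such filtering might look like the following sketch (the contract address is a placeholder; addresses are compared lowercased, since EVM tooling emits them in varying cases):

```typescript
// Placeholder address for illustration
const CONTRACT_ADDRESS = '0xdac17f958d2ee523a2206206994597c13d831ec7'

interface Log {
  address: string
  topics: string[]
}

// Keep only the logs emitted by the contract we actually index
function relevantLogs(logs: Log[]): Log[] {
  return logs.filter(log => log.address.toLowerCase() === CONTRACT_ADDRESS)
}
```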

  • If your squid saves its data to a database, make sure your schema has @index decorators on all entity fields that will be looked up frequently.
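For instance, a schema where transfers are frequently filtered by sender or timestamp might index those fields (entity and field names here are illustrative):

```graphql
type Transfer @entity {
  id: ID!
  # frequently used in WHERE clauses, so indexed
  from: String! @index
  to: String!
  value: BigInt!
  timestamp: DateTime! @index
}
```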

  • If your squid serves a GraphQL API:

    1. Do not use OpenReader if your application uses subscriptions; use PostGraphile or Hasura instead.
    2. If you do use OpenReader, harden it by setting query limits so that expensive requests cannot overload your database.
    3. If you use PostGraphile or Hasura, follow their docs to harden your service in a similar way.
  • If you deploy your squid to SQD Cloud:

    1. Deploy your squid to a Professional organization.
    2. Use dedicated: true in the scale: section of the manifest.
    3. Make sure that your scale: section requests a sufficient but not excessive amount of resources.
    4. Set your deployment up for zero downtime updates. Use tag-based URLs, not slot-based URLs, to access your API, e.g. from your frontend app.
    5. Make sure to use secrets for all sensitive data you may have used in your code. The most common examples are API keys and URLs that contain them.
    6. Follow the recommendations from the Cloud logging page.
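A manifest fragment illustrating points 2, 3 and 5 above might look like this (profiles, storage size and the secret name are placeholder values; adjust them to your squid's actual needs):

```yaml
deploy:
  env:
    # Reference a secret instead of hard-coding the RPC URL;
    # RPC_ENDPOINT is a hypothetical secret name
    RPC_ENDPOINT: ${{ secrets.RPC_ENDPOINT }}

scale:
  dedicated: true
  processor:
    profile: medium
  api:
    profile: small
  addons:
    postgres:
      profile: medium
```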