Indexing the Transaction Receipts

In this step-by-step tutorial we will build a squid that indexes Fuel Network data.

Prerequisites: Node.js v20 or newer, Git, Docker.

Download the project

Begin by retrieving the template and installing the dependencies:

git clone https://github.com/subsquid-labs/fuel-example
cd fuel-example
npm ci

Configuring the data source

"Data source" is a component that defines what data should be retrieved and where to get it. To configure the data source to retrieve the data produced by the receipt field of the fuel transaction, we initialize it like this:

src/main.ts
import {DataSourceBuilder} from '@subsquid/fuel-stream'

const dataSource = new DataSourceBuilder()
    .setGateway('https://v2.archive.subsquid.io/network/fuel-testnet')
    .setGraphql({
        url: 'https://testnet.fuel.network/v1/graphql',
        strideConcurrency: 3,
        strideSize: 50
    })
    .setFields({
        receipt: {
            contract: true,
            receiptType: true
        }
    })
    .addReceipt({
        type: ['LOG_DATA']
    })
    .build()

Here,

  • https://v2.archive.subsquid.io/network/fuel-testnet is the address for the public SQD Network gateway for Fuel testnet. Check out the exhaustive public gateways list.
  • The argument of addReceipt() is a set of filters that tells the processor to retrieve all receipts of type LOG_DATA (a narrower variation is sketched right after this list).
  • The argument of setFields() specifies the exact fields we need for every data item type. In this case we request contract and receiptType for receipt data items.
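
Filter conditions within a single request are combined with AND. So, if we only cared about the logs emitted by one particular contract, the request above could likely be narrowed further with a contract filter. This is a hedged sketch: the contract filter field and the placeholder address are assumptions, check the FuelDataSource reference for the exact filter set.

// replaces the .addReceipt() call in the builder chain above
.addReceipt({
    type: ['LOG_DATA'],
    contract: ['0x…']  // hypothetical contract id placeholder
})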

See also FuelDataSource reference and the comments in main.ts of the fuel-example repo.

With a data source it becomes possible to retrieve filtered blockchain data from SQD Network, transform it and save the result to a destination of choice.

Decoding the event data

The other part of the squid processor (the ingester process of the indexer) is the callback function used to process batches of the filtered data, the batch handler. In the Fuel Squid SDK it is typically defined within a run() call, like this:

import {run} from '@subsquid/batch-processor'

run(dataSource, database, async ctx => {
    // data transformation and persistence code here
})

Here,

  • dataSource is the data source object described in the previous section
  • database is a Database implementation specific to the target data sink. We want to store the data in a PostgreSQL database and present it with a GraphQL API, so we provide a TypeormDatabase object here (see the sketch after this list).
  • ctx is a batch context object that exposes a batch of data (at ctx.blocks) and any data persistence facilities derived from db (at ctx.store). See Block data for Fuel for details on how the data batches are presented.
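
For reference, here is a minimal sketch of how such a database object is constructed. TypeormDatabase comes from @subsquid/typeorm-store; with no arguments it reads the PostgreSQL connection settings from the DB_* environment variables, e.g. those set in .env.

import {TypeormDatabase} from '@subsquid/typeorm-store'

// Persists entities to PostgreSQL and tracks the processor's progress
const database = new TypeormDatabase()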

The batch handler is where the raw on-chain data is decoded, transformed and persisted. This is the part we'll be concerned with for the rest of the tutorial.

We begin by defining a database and starting the data processing:

src/main.ts
import {run} from '@subsquid/batch-processor'
import {augmentBlock} from '@subsquid/fuel-objects'
import {TypeormDatabase} from '@subsquid/typeorm-store'
import {Contract} from './model'

const database = new TypeormDatabase()

run(dataSource, database, async ctx => {
    // Contract entities to be persisted at the end of the batch, keyed by id
    let contracts: Map<string, Contract> = new Map()

    // Block items that we get from `ctx.blocks` are flat JS objects.
    //
    // We can use the `augmentBlock()` function from `@subsquid/fuel-objects`
    // to enrich block items with references to related objects.
    let blocks = ctx.blocks.map(augmentBlock)

    for (let block of blocks) {
        for (let receipt of block.receipts) {
            if (receipt.receiptType == 'LOG_DATA' && receipt.contract != null) {
                let contract = contracts.get(receipt.contract)
                if (!contract) {
                    // not seen in this batch - look it up in the database
                    contract = await ctx.store.findOne(Contract, {where: {id: receipt.contract}})
                    if (!contract) {
                        contract = new Contract({
                            id: receipt.contract,
                            logsCount: 0,
                            foundAt: block.header.height
                        })
                    }
                }
                contract.logsCount += 1
                contracts.set(contract.id, contract)
            }
        }
    }

    await ctx.store.upsert([...contracts.values()])
})

This goes through all the receipts in the blocks, verifies that they have the LOG_DATA type, reads the contract field of each receipt, increments the log counter of the corresponding Contract entity and saves all the updated entities to the database at the end of the batch.
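
Note that the handler issues one findOne() query per contract that is new to the batch. If that ever becomes a bottleneck, the known contracts can instead be prefetched with a single query at the start of the handler. Here is a sketch of that variation, assuming the TypeORM-style findBy() method of ctx.store and the In operator from typeorm:

import {In} from 'typeorm'

// Inside the batch handler, before the main loop:
// collect every contract id mentioned by the batch...
let ids = new Set<string>()
for (let block of blocks) {
    for (let receipt of block.receipts) {
        if (receipt.receiptType == 'LOG_DATA' && receipt.contract != null) {
            ids.add(receipt.contract)
        }
    }
}

// ...then fetch the contracts already in the database with one query
let contracts: Map<string, Contract> = new Map(
    (await ctx.store.findBy(Contract, {id: In([...ids])})).map(c => [c.id, c])
)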

At this point the squid is ready for its first test run. Execute

npx tsc                              # compile the squid
docker compose up -d                 # start a PostgreSQL container
npx squid-typeorm-migration apply    # create the database tables
node -r dotenv/config lib/main.js    # start the processor

You can verify that the data is being stored in the database by running

docker exec "$(basename "$(pwd)")-db-1" psql -U postgres -c "SELECT * FROM contract"
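
To serve the stored data over GraphQL, as mentioned above, start the API server. Assuming that fuel-example follows the usual squid template convention of shipping @subsquid/graphql-server, that is done with

npx squid-graphql-server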

Full code can be found in the subsquid-labs/fuel-example repo.