Step 2: Deriving owners and tokens
This is the second part of the tutorial in which we build a squid that indexes Bored Ape Yacht Club NFTs, their transfers, and owners from the Ethereum blockchain, fetches the metadata from IPFS and regular HTTP URLs, stores all the data in a database, and serves it over a GraphQL API. In the first part we created a simple squid that scraped `Transfer` events emitted by the BAYC token contract. Here, we go a step further and derive separate entities for the NFTs and their owners from the transfers. The new entities will be linked to the corresponding `Transfer` entities. These links will be automatically translated into primary key-foreign key references in the new database schema and will enable efficient cross-entity GraphQL queries.
Prerequisites: Node.js, Squid CLI, Docker, a project folder with the code from the first part (this commit).
Writing schema.graphql
Start the process by adding new entities to the schema.graphql file:
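In sketch form (the field choices here are illustrative; the final schema linked below is authoritative):

```graphql
type Owner @entity {
  id: ID! # owner address
}

type Token @entity {
  id: ID! # string representation of tokenId
  tokenId: Int!
  owner: Owner
}
```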
`Token` is considered an owning entity in relation to `Owner`. As a result,
- On the database side: the `token` table that maps to the `Token` entity gains a foreign key column `owner_id` holding primary keys of the `owner` table. The column is automatically indexed, so there is no need to add `@index`.
- On the TypeORM side: the `Token` entity gains an `owner` field decorated with `@ManyToOne`. To create a well-formed `Token` entity instance in processor code, we now have to first get hold of an appropriate `Owner` entity instance and populate the `owner` field of the new `Token` with a reference to it, as shown in the sketch after this list.
- On the GraphQL side: queries to `token` can now select `owner` and any of its subfields (`id` is the only one available for now).
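A minimal sketch of what this looks like in mapping code, using the TypeORM model classes generated from the schema (the literal values are hypothetical):

```typescript
import {Owner, Token} from './model'

// First create (or look up) the Owner...
let owner = new Owner({id: '0x1234...abcd'}) // hypothetical address
// ...then reference it when constructing the Token
let token = new Token({
  id: '6325', // hypothetical tokenId
  tokenId: 6325,
  owner,
})
```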
Next, we replace the `from`, `to`, and `tokenId` fields of the `Transfer` entity with fields of the new entity types:
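In sketch form, assuming the scalar fields (`timestamp`, `blockNumber`, `txHash`) carry over unchanged from part one:

```graphql
type Transfer @entity {
  id: ID!
  token: Token!
  from: Owner!
  to: Owner!
  timestamp: DateTime!
  blockNumber: Int!
  txHash: String!
}
```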
To finish the schema, we make the inverse `ownedTokens` and `transfers` fields accessible through GraphQL and TypeORM by declaring them with `@derivedFrom`:
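One plausible arrangement, placing `ownedTokens` on `Owner` and `transfers` on `Token` (check the final schema linked below for the exact placement):

```graphql
type Owner @entity {
  id: ID! # owner address
  ownedTokens: [Token!]! @derivedFrom(field: "owner")
}

type Token @entity {
  id: ID! # string representation of tokenId
  tokenId: Int!
  owner: Owner
  transfers: [Transfer!]! @derivedFrom(field: "token")
}
```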
You can find the final version of schema.graphql here. Once you’re finished, regenerate the TypeORM entity code with the following command:
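With the standard SQD tooling this is typically:

```bash
npx squid-typeorm-codegen
```

The generated entity classes land under src/model.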
Creating the entities
Note how the entities we define form an acyclic dependency graph: `Owner` entity instances can be made straight from the raw event data; `Token`s require the raw data plus the `Owner`s; `Transfer` entities require all of the above. This means the entities can be created and persisted in dependency order (`Owner`s, then `Token`s, then `Transfer`s in this case). We will assume that this order can be hardcoded by the programmer.
Further, at each step we will process the data for the whole batch instead of handling items individually. This is crucial for achieving good syncing performance.
With all that in mind, let’s create a batch processor that generates and persists all of our entities:
src/main.ts
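In sketch form, assuming the `processor` object and the generated model classes from part one (the helper functions are defined in the steps below):

```typescript
import {TypeormDatabase} from '@subsquid/typeorm-store'
import {Owner, Token, Transfer} from './model'
import {processor} from './processor'

// A plain record mirroring the flat Transfer entity from part one
interface RawTransfer {
  id: string
  tokenId: bigint
  from: string
  to: string
  timestamp: Date
  blockNumber: number
  txHash: string
}

processor.run(new TypeormDatabase(), async (ctx) => {
  // Extract the raw event data, much like the old handler did
  const rawTransfers: RawTransfer[] = getRawTransfers(ctx)

  // Derive the entities in dependency order
  const owners: Map<string, Owner> = createOwners(rawTransfers)
  const tokens: Map<string, Token> = createTokens(rawTransfers, owners)
  const transfers: Transfer[] = createTransfers(rawTransfers, tokens, owners)

  // Persist in the same order, so that referenced rows exist first
  await ctx.store.upsert([...owners.values()])
  await ctx.store.upsert([...tokens.values()])
  await ctx.store.insert(transfers)
})
```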
The `RawTransfer` interface mirrors the `Transfer` entity as it was at the beginning of this part of the tutorial. This allows us to reuse most of the code of the old batch handler in `getRawTransfers()`:
src/main.ts
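A sketch that reuses the part-one decoding logic; `CONTRACT_ADDRESS`, the generated `bayc` ABI module, and the `Context` type are assumed to match the part-one code:

```typescript
import * as bayc from './abi/bayc'
import {Context} from './processor' // assumed exported from processor.ts

// BAYC contract address, carried over from part one
const CONTRACT_ADDRESS = '0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d'

function getRawTransfers(ctx: Context): RawTransfer[] {
  let transfers: RawTransfer[] = []
  for (let block of ctx.blocks) {
    for (let log of block.logs) {
      if (log.address === CONTRACT_ADDRESS &&
          log.topics[0] === bayc.events.Transfer.topic) {
        let {from, to, tokenId} = bayc.events.Transfer.decode(log)
        transfers.push({
          id: log.id,
          tokenId,
          from,
          to,
          timestamp: new Date(block.header.timestamp),
          blockNumber: block.header.height,
          // requires transactionHash in the processor field selection
          txHash: log.transactionHash,
        })
      }
    }
  }
  return transfers
}
```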
Next, we create the `Owner` entity instances. We will need these to create both `Token`s and `Transfer`s. In both scenarios, we'll have the IDs of the owners (i.e., their addresses) prepared. To simplify future lookups, we return the `Owner` instances as a `Map<string, Owner>`:
src/main.ts
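A sketch of this step; since `Map` keys are unique, setting an entry per sender and per recipient deduplicates the addresses:

```typescript
function createOwners(rawTransfers: RawTransfer[]): Map<string, Owner> {
  let owners: Map<string, Owner> = new Map()
  for (let t of rawTransfers) {
    // Both senders and recipients are (or were) owners
    owners.set(t.from, new Owner({id: t.from}))
    owners.set(t.to, new Owner({id: t.to}))
  }
  return owners
}
```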
The `Token` instances will also need to be looked up later, so we return them as a `Map<string, Token>`. To identify the most recent owner of each token, we traverse all the transfers in the order they appear on the blockchain and assign the owner of any involved token to its recipient:
src/main.ts
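A sketch of this step; because the transfers arrive in blockchain order, later transfers of the same token simply overwrite the `owner` set by earlier ones:

```typescript
function createTokens(
  rawTransfers: RawTransfer[],
  owners: Map<string, Owner>
): Map<string, Token> {
  let tokens: Map<string, Token> = new Map()
  for (let t of rawTransfers) {
    let tokenIdString = `${t.tokenId}`
    tokens.set(tokenIdString, new Token({
      id: tokenIdString,
      tokenId: Number(t.tokenId),
      owner: owners.get(t.to), // the recipient of the latest transfer wins
    }))
  }
  return tokens
}
```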
Some of the `Token` and `Owner` instances might have been created in previous batches, so we use `ctx.store.upsert()` to store the new instances while updating any existing ones.
In some circumstances we might have had to retrieve the old entity instances from the database before updating them, but here we have all the required fields populated, so we can simply overwrite each stored entity with `ctx.store.upsert()`. Finally, we create the `Transfer` entity instances through a simple mapping:
src/main.ts
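A sketch of the mapping, looking the referenced entities up in the previously built `Map`s:

```typescript
function createTransfers(
  rawTransfers: RawTransfer[],
  tokens: Map<string, Token>,
  owners: Map<string, Owner>
): Transfer[] {
  return rawTransfers.map(t => new Transfer({
    id: t.id,
    token: tokens.get(`${t.tokenId}`),
    from: owners.get(t.from),
    to: owners.get(t.to),
    timestamp: t.timestamp,
    blockNumber: t.blockNumber,
    txHash: t.txHash,
  }))
}
```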
Since `Transfer`s are unique, we can safely use `ctx.store.insert()` to persist them.
At this point, the squid has accomplished everything planned for this part of the tutorial. The only remaining task is to drop and recreate the database (if it’s running), then regenerate and apply the migrations:
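With the current SQD tooling, these steps typically look like this (adjust to your setup):

```bash
docker compose down
docker compose up -d
rm -r db/migrations
npx squid-typeorm-migration generate
npx squid-typeorm-migration apply
```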
