Maximizing Performance - A Guide to Efficient RPC Calls with QuickNode

Updated on
Sep 17, 2024

18 min read

Overview

Getting data to and from a blockchain necessarily requires transferring a lot of information between your system and the network. To optimize the performance of your QuickNode endpoint and keep your dApps running smoothly while minimizing costs, it's crucial to know how to make effective RPC calls to the network. In this guide, we've outlined essential best practices for doing just that:

  • Make the Right Calls
  • Optimize RPC Requests
  • Utilize QuickAlerts and Subscriptions
  • Properly Handle API Responses
  • Secure Your Endpoint
  • Other Best Practices

Feel free to bookmark this page for future reference. If you have any questions, don't hesitate to reach out to us on Discord or Twitter.

Make the Right Calls

The first step in making effective RPC calls is to make sure you are making the right calls (and using them correctly) in the first place. Here are a few things to keep in mind when making API requests:

Understand the Method

It's important to understand the API documentation and the available endpoints, parameters, and response formats. Upgrades happen frequently, so it's essential to reference the API docs to stay informed of changes over time. QuickNode keeps an up-to-date list of method documentation for all methods we support for each network.

Use the Correct Method for the Job

There are many different methods available for interacting with the network, and each one has its own particular use case. While some RPC methods may appear similar, they can produce different responses and come with varying multiplier costs. That's why it's crucial to conduct thorough research and select the correct method for each task. For example, eth_getTransactionCount is used to get the number of transactions sent from an address, while eth_getTransactionByHash is used to get the transaction details for a specific transaction hash. It's important to use the correct method for the job, as using the wrong method can result in unnecessary network traffic and increased costs.
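
For instance, here's a minimal sketch using axios (the endpoint URL, wallet address, and transaction hash are placeholders) that shows each method doing the job it is designed for:
const axios = require("axios");

// Hypothetical endpoint URL and placeholder values for illustration only
const RPC_URL = "https://example.ethereum-mainnet.quiknode.pro/123456/";

const call = (method, params) =>
  axios.post(RPC_URL, { jsonrpc: "2.0", id: 1, method, params });

(async () => {
  // Right call for "how many transactions has this address sent?"
  const nonce = await call("eth_getTransactionCount", [
    "0xYOUR_WALLET_ADDRESS",
    "latest",
  ]);
  console.log("Transaction count:", nonce.data.result);

  // Right call for "what are the details of this one transaction?"
  const tx = await call("eth_getTransactionByHash", ["0xYOUR_TRANSACTION_HASH"]);
  console.log("Transaction details:", tx.data.result);
})();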

Another way to reduce the number of RPC requests is to use the correct class instances when working with Ethers.js. For example, JsonRpcProvider sends one or two extra eth_chainId requests to verify the network before the library sends each RPC request, which can add up if you are making a lot of requests, whereas StaticJsonRpcProvider assumes the chain ID will not change and caches it. Both produce the same results, but the latter sends fewer requests to the network, which can improve your dApp's performance and reduce costs.
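
Here's a minimal sketch with ethers v5 (the endpoint URL is a placeholder) showing the two providers side by side; both return the same data, but the static provider skips the repeated network checks:
const { ethers } = require("ethers");

// Hypothetical endpoint URL for illustration only
const RPC_URL = "https://example.ethereum-mainnet.quiknode.pro/123456/";

// JsonRpcProvider re-checks the network (extra eth_chainId calls) around requests
const dynamicProvider = new ethers.providers.JsonRpcProvider(RPC_URL);

// StaticJsonRpcProvider assumes the chain ID never changes and caches it,
// so repeated calls skip those extra round trips
const staticProvider = new ethers.providers.StaticJsonRpcProvider(RPC_URL);

(async () => {
  // Same result either way; the static provider just sends fewer requests
  console.log(await dynamicProvider.getBlockNumber());
  console.log(await staticProvider.getBlockNumber());
})();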

Optimize RPC Requests

Once you have selected the right calls to make for your job, you can optimize your API requests to improve performance and reduce costs. Here are a few things to keep in mind when optimizing API requests:

Use Filters

When making API calls that retrieve data, it's important to use filters in the query to limit the amount of data returned.

  • Ethereum Example: If you are using eth_getLogs to get logs for a specific contract, you can use the fromBlock and toBlock parameters to limit the range of blocks you are searching, and the address parameter to limit the logs returned to that contract's address. Both filters reduce the amount of data returned.
  • Solana Example: Similarly, suppose you are using getProgramAccounts on Solana. You can use GetProgramAccountsFilter to filter for accounts of a specific byte size (using {dataSize: SIZE}) or for accounts whose serialized data contains a specific value at a specific location (using {memcmp: {offset: LOCATION, bytes: SEARCH_VALUE}}). Adding filters to your query instead of retrieving all accounts (and filtering on the client side) reduces the amount of data returned and improves overall performance--see the sketch after this list. For more information on using GetProgramAccountsFilter, see the Solana RPC documentation.
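
As a concrete illustration, here's a minimal sketch (the endpoint URL is a placeholder) that asks getProgramAccounts for only the SPL Token accounts owned by a single wallet--token accounts are 165 bytes with the owner stored at offset 32--instead of pulling every program account and filtering client-side:
const { Connection, PublicKey } = require("@solana/web3.js");

// Hypothetical endpoint URL for illustration only
const connection = new Connection("https://example.solana-mainnet.quiknode.pro/123456/");
const TOKEN_PROGRAM = new PublicKey("TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA");
const OWNER = "DcTmx4VLcf5euAB17nynax7g55xuB3XKBDyz1pudMcjW";

(async () => {
  // Filter server-side: only 165-byte token accounts whose owner field
  // (offset 32 in the account data) matches OWNER
  const accounts = await connection.getProgramAccounts(TOKEN_PROGRAM, {
    filters: [
      { dataSize: 165 },
      { memcmp: { offset: 32, bytes: OWNER } },
    ],
  });
  console.log(`Found ${accounts.length} token accounts`);
})();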

Avoid Batching Multiple RPC Requests

A batch request is a single request that contains multiple RPC calls (specifically, a single POST request with an array of RPC Method calls in the data element). Though a little counterintuitive, batching RPC requests is not the best approach for making many requests to your endpoint. In fact, batching your request can actually slow down your dApp's performance. This happens for several reasons:

  • Additional Processing Time: Once the batch is received, it is broken down into individual requests and routed to the appropriate nodes. This routing handles edge cases between clients and node types, and it is extra processing that a single request does not need.
  • Error Handling: If one request in the batch fails while the others succeed, it's harder for our system and yours to communicate the results--if we send a 200, you would be unaware of the error; if we send an error, you might not know that some of the requests succeeded.
  • One Process Can Slow Down Others: If you send multiple requests in parallel, we can respond to them in parallel. If you send them in a batch and one request takes longer to return, the whole batch must wait for that one slower request, slowing down the response time for all of the requests.
  • Parallel Processing Limits: Our system limits the number of requests that can be processed in parallel, so a batch is worked through in chunks rather than all at once. This can mean slower response times for some of your requests; you are better off making the requests individually and processing them in parallel.

Every request in a batch still counts toward your RPS limits--submitting requests as a batch does not reduce the number of requests you are making. Instead of batching, set a rate limiter on your end to ensure you do not exceed your RPS limits; this lets you send as many requests as you are allowed without impacting your dApp's performance. You can verify your RPS limits here.

In certain situations, you might want to use the Single Flight RPC add-on to streamline your RPC calls and improve efficiency. This can be particularly useful when you need to gather block and transaction information in a single call: Single Flight RPC gives you faster block traces and a more convenient way to handle complex data retrieval scenarios. For more information, see the QuickNode Marketplace.
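
To illustrate the alternative to batching, here's a minimal sketch (the endpoint URL is a placeholder) that sends three calls as individual requests and processes them in parallel with Promise.all:
const axios = require("axios");

// Hypothetical endpoint URL and address for illustration only
const RPC_URL = "https://example.ethereum-mainnet.quiknode.pro/123456/";

const call = (method, params, id) =>
  axios.post(RPC_URL, { jsonrpc: "2.0", id, method, params });

(async () => {
  // Instead of one batched POST, fire the calls individually and let them
  // run in parallel -- each one succeeds or fails on its own
  const [blockNumber, gasPrice, balance] = await Promise.all([
    call("eth_blockNumber", [], 1),
    call("eth_gasPrice", [], 2),
    call("eth_getBalance", ["0xYOUR_WALLET_ADDRESS", "latest"], 3),
  ]);
  console.log(blockNumber.data.result, gasPrice.data.result, balance.data.result);
})();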

Bundle Solana Transaction Instructions

If you are using Solana, bundle your transaction instructions whenever possible. This lets you complete multiple tasks in a single sendTransaction request. The Solana runtime will process each of the instructions contained within the transaction atomically--if any part of an instruction fails, then the entire transaction will fail. On Solana, transaction instructions can be appended to a single transaction by using Transaction.add().

For example, if you want to send two instructions to the network at the same time, you can do so in two ways:
const ix1 = new TransactionInstruction({/* your instruction */});
const ix2 = new TransactionInstruction({/* your instruction */});

// Option A: Add both instructions to the same transaction
const bulkTx = new Transaction();
bulkTx.add(ix1, ix2);
await connection.sendTransaction(bulkTx, [/* your signers */]);

// Option B: Add each instruction to a separate transaction
const indivTx1 = new Transaction();
const indivTx2 = new Transaction();
indivTx1.add(ix1);
indivTx2.add(ix2);

await connection.sendTransaction(indivTx1, [/* your signers */]);
await connection.sendTransaction(indivTx2, [/* your signers */]);

As you can see, the first method uses a single transaction to send both instructions to the network, while the second method uses two separate transactions. The first method is more efficient, as it only requires a single request to the network, while the second requires two. For a more detailed example of how instructions can be bundled into a single transaction, see our Guide: How to Send Bulk Transactions on Solana.

Use Pagination

Pagination is a technique for breaking a large data set into smaller, more manageable pieces and retrieving those pieces over a series of requests. In the context of RPC calls, pagination limits the amount of data returned in a single API response, which can reduce response time and improve the performance of your dApp. Let's say you want to retrieve all the NFTs held by a specific account. If there are thousands or even millions of NFTs in that account, retrieving all of them in a single API call would be slow. Instead, you can use pagination to retrieve the NFTs in smaller chunks.

Here's an example of QuickNode's NFT API taking advantage of pagination:
const axios = require("axios");

(() => {
  const config = {
    headers: {
      "Content-Type": "application/json",
    },
  };
  const data = {
    jsonrpc: "2.0",
    id: 1,
    method: "qn_fetchNFTs",
    params: {
      wallet: "DcTmx4VLcf5euAB17nynax7g55xuB3XKBDyz1pudMcjW",
      omitFields: ["provenance", "traits"],
      page: 1,
      perPage: 10,
    },
  };
  axios
    .post("https://example.solana-mainnet.quiknode.pro/123456/", data, config)
    .then(function (response) {
      // handle success
      console.log(response.data);
    })
    .catch((err) => {
      // handle error
      console.log(err);
    });
})();

In the example above, we are fetching the NFTs held by a specific wallet address. The page parameter specifies which page of results to return, and the perPage parameter specifies how many results to return per page--in this case, 10. This is useful in an application that displays a user's NFTs: by paginating, you can improve load times and reduce the amount of data returned in a single RPC call.

Caching

Caching is a technique for storing frequently used API data in memory so it can be reused without making a new API call. When you cache API data, you keep a local copy on your system, which you can access much more quickly than fetching it from the remote API every time you need it. Imagine, for example, a dApp that fetches a user's NFTs and displays them on the page. With a caching strategy, you can store the user's NFTs in memory so you don't have to make a new API call every time a component renders. This can significantly reduce the number of large API calls made to the network. Note: this is not appropriate for all use cases, and you should only use caching when you do not require fresh data from the network.

Here's a simple example of how you can use `localStorage` to cache RPC results in a React app (it assumes you already have a web3.js Connection instance available as `connection`):
import { useState, useEffect } from 'react';
// Assumes an existing @solana/web3.js Connection instance exported from your app
import { connection } from './connection';

function LargestAccounts() {
  const [accounts, setAccounts] = useState([]);

  useEffect(() => {
    async function fetchData() {
      const cachedData = localStorage.getItem('largestAccountsCache');
      if (cachedData) {
        setAccounts(JSON.parse(cachedData));
      } else {
        // getLargestAccounts returns a parsed result directly -- no .json() call needed
        const response = await connection.getLargestAccounts({/*YOUR SEARCH CRITERIA*/});
        // Convert to plain objects so the data can be cached as JSON
        const data = response.value.map((account) => ({
          address: account.address.toBase58(),
          lamports: account.lamports,
        }));
        setAccounts(data);
        localStorage.setItem('largestAccountsCache', JSON.stringify(data));
      }
    }
    fetchData();
  }, []);

  return (
    <div>
      {accounts.map((account) => (
        <div key={account.address}>
          {account.address} - {account.lamports}
        </div>
      ))}
    </div>
  );
}

export default LargestAccounts;

In this example, we use the useEffect hook to fetch the largest accounts data when the component mounts. We first check whether the data is already stored in local storage using localStorage.getItem(). If it is, we set the accounts state to the cached data. If not, we make the RPC call, set the accounts state to the fetched data, and store that data in local storage using localStorage.setItem() so it will be available the next time the component mounts.

Utilize QuickAlerts and Subscription Requests

Subscriptions offer a more efficient way for applications to receive blockchain data than traditional RPC polling. With traditional API requests, an application must repeatedly poll the server to check for updates or new data, often resulting in numerous unnecessary requests. Subscriptions address this by letting you listen for specific events or triggers: when the event occurs, a notification is delivered to your registered URL or open WebSocket connection, and your application processes the data and takes appropriate action.

QuickAlerts

QuickNode enables webhook functionality by allowing you to create QuickAlerts based on the parameters you define. For example, let's say you want to track changes to the balance of a wallet. Instead of constantly polling the chain for updates on that account balance, you can create a QuickAlert that notifies you when the balance of that wallet changes. This way, you only receive data when your conditions are met (in this case, when the balance of the tracked wallet changes). Not only do you avoid wasting resources on constant polling, but you are also made aware of changes to the wallet balance as soon as they occur. For more information on how to set up QuickAlerts, check out our QuickAlerts Overview Guide.
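
To illustrate the receiving side, here's a minimal sketch of a webhook handler, assuming an Express server and a hypothetical /quickalerts route registered as the alert's destination:
const express = require("express");

const app = express();
app.use(express.json());

// Hypothetical route; use whatever destination URL you registered for the alert
app.post("/quickalerts", (req, res) => {
  // req.body contains the matched event payload delivered by the alert
  console.log("Received alert payload:", JSON.stringify(req.body));
  // ...update balances, notify users, etc.
  res.sendStatus(200); // acknowledge receipt promptly
});

app.listen(3000, () => console.log("Listening for alerts on port 3000"));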

Solana WebSockets

In addition to QuickAlerts, you can also use Subscription Requests using Websockets to receive notifications when a certain event occurs. For more information on using WebSocket Subscriptions with Solana, check out our guide, How to Use WebSocket Subscriptions with Solana.
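
As a quick illustration, here's a minimal sketch using @solana/web3.js (the endpoint URLs and wallet address are placeholders) that subscribes to account changes instead of repeatedly polling for the balance:
const { Connection, PublicKey, LAMPORTS_PER_SOL } = require("@solana/web3.js");

// Hypothetical endpoint URLs and wallet for illustration only
const connection = new Connection(
  "https://example.solana-mainnet.quiknode.pro/123456/",
  { wsEndpoint: "wss://example.solana-mainnet.quiknode.pro/123456/" }
);
const wallet = new PublicKey("DcTmx4VLcf5euAB17nynax7g55xuB3XKBDyz1pudMcjW");

// Get notified whenever the account changes instead of polling getBalance
const subscriptionId = connection.onAccountChange(wallet, (accountInfo) => {
  console.log(`New balance: ${accountInfo.lamports / LAMPORTS_PER_SOL} SOL`);
});

// Later, when you no longer need updates:
// await connection.removeAccountChangeListener(subscriptionId);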

Properly Handle RPC Responses

Once you have made an RPC request, you must handle the response correctly. Here are a few things to keep in mind when handling RPC responses:

Check the Response Code

When you make an RPC request, the response will include a status code indicating whether the request was successful. It's important to check this code (and the body of the response) and handle the result appropriately. For example, if the response returns an error, you know the request was unsuccessful and should handle that error accordingly.
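
Keep in mind that JSON-RPC endpoints often return HTTP 200 even when the call itself fails; the error comes back in the response body. Here's a minimal sketch with axios (the endpoint URL is a placeholder) that checks both:
const axios = require("axios");

// Hypothetical endpoint URL for illustration only
const RPC_URL = "https://example.ethereum-mainnet.quiknode.pro/123456/";

(async () => {
  try {
    const response = await axios.post(RPC_URL, {
      jsonrpc: "2.0",
      id: 1,
      method: "eth_blockNumber",
      params: [],
    });
    // A 200 status alone isn't enough -- JSON-RPC errors come back in the body
    if (response.data.error) {
      console.error(`RPC error ${response.data.error.code}: ${response.data.error.message}`);
    } else {
      console.log("Latest block:", response.data.result);
    }
  } catch (err) {
    // Non-2xx statuses (e.g. 429 rate limits) land here with axios
    console.error("HTTP error:", err.response ? err.response.status : err.message);
  }
})();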

Properly Handle Errors

When a request fails, handle the failure gracefully and avoid retrying the same request indefinitely. Continuously resending requests that keep returning errors could get you rate limited, which would disrupt your dApp's user experience.

Here's an example of a React component snippet that retries a failed request but limits the number of retries to 3:
// Inside a React component that defines a `data` state via useState (setData)
const MAX_RETRIES = 3;

const fetchData = async (retries = 0) => {
  try {
    const result = await axios.get('https://api.example.com/data');
    setData(result.data);
  } catch (error) {
    if (retries < MAX_RETRIES) {
      // Retry the call after a delay
      setTimeout(() => fetchData(retries + 1), 1000);
    } else {
      // Maximum retries reached. Handle the error.
    }
  }
};

useEffect(() => {
  fetchData();
}, []);

Implement Circuit Breakers

Circuit breakers are a way to prevent your dApp from making too many requests to the network. You can implement a circuit breaker that will stop making requests to the network if a certain trigger has been satisfied.

Here's an example of a circuit breaker that stops making requests once `3` consecutive requests have failed, and keeps the circuit open for a set timeout:
class CircuitBreaker {
  constructor(threshold, timeout) {
    this.threshold = threshold; // maximum number of failed requests before the circuit breaks
    this.timeout = timeout; // duration (in milliseconds) to keep the circuit broken
    this.failures = 0; // number of failed requests
    this.isBroken = false; // whether or not the circuit is currently broken
    this.lastFailure = null; // time of the last failed request
  }

  async callApi() {
    // If the circuit is broken, check if it's time to close it
    if (this.isBroken) {
      const now = new Date().getTime();
      const timeSinceLastFailure = now - this.lastFailure;
      if (timeSinceLastFailure > this.timeout) {
        // Circuit has been broken long enough. Try again
        this.isBroken = false;
      } else {
        // Circuit still broken, return cached response or error message
        return this.getCachedResponse();
      }
    }

    try {
      // Call API and return response
      const response = await fetch('https://api.example.com/data');
      const data = await response.json();
      // Reset failure count if successful
      this.failures = 0;
      return data;
    } catch (error) {
      // Increment failure count and check if circuit should break
      this.failures++;
      if (this.failures >= this.threshold) {
        this.isBroken = true;
        this.lastFailure = new Date().getTime();
      }
      // Return cached response or error message
      return this.getCachedResponse();
    }
  }

  getCachedResponse() {
    // Return cached response or error message
    return { message: 'API is currently unavailable' };
  }
}

// Usage
const circuitBreaker = new CircuitBreaker(3, 5000); // Break circuit after 3 failed requests, keep circuit broken for 5 seconds
const data = await circuitBreaker.callApi(); // Call API using circuit breaker

In this example, the CircuitBreaker class wraps API calls and tracks the number of failed requests. Once the failure count reaches the threshold (in this case, 3), the circuit breaker trips and further requests to the API are blocked for the set timeout (in this case, 5 seconds). While blocked, the callApi method returns a cached response or error message. Once the timeout has elapsed, the circuit closes again and API calls are attempted anew.

Implement Rate Limits

When you make API requests, you are making a request to a remote server. If you make too many requests in a short period of time, you could overload the server and get rate limited (check our pricing page to see the number of requests you can make each second). To prevent this from happening, consider implementing rate limits to limit the number of RPC requests you make.

  1. Set a limit: First, know what rate limit you want to set for your dApp. For example, limit the number of requests a user can make in a given time period based on your plan and expected user volume.
  2. Track requests: When a user makes a request, track the time and store it in a data structure like an array or a map.
  3. Check against limit: Before processing each request, check how many requests the user has made in the given period. If they have exceeded the limit, return an error message or block the request.
  4. Clear old requests: Periodically clear out old requests from the data structure to avoid hitting memory limits. Alternatively, create a queue of requests and process them one at a time; this ensures you stay within the rate limit, though it can also slow down your dApp.
Here's a simple example of the queue approach:
const requestQueue = [];
let isProcessing = false;

function addToQueue(request) {
  requestQueue.push(request);
  processQueue();
}

function processQueue() {
  // Nothing to do if a request is already in flight or the queue is empty
  if (isProcessing || requestQueue.length === 0) {
    return;
  }

  isProcessing = true;
  const request = requestQueue.shift();

  // makeRequest is your function that actually sends the RPC call
  makeRequest(request)
    .then(() => {
      isProcessing = false;
      processQueue();
    })
    .catch((error) => {
      console.error(error);
      isProcessing = false;
      processQueue();
    });
}

In this example, we're using a requestQueue array to store requests that need to be processed and a processQueue function to manage the queue. When a new request is added with addToQueue, we check whether a request is already being processed. If not, we start processing the queue by calling processQueue. The processQueue function removes the next request from the queue and calls makeRequest to process it. When the request completes, we set isProcessing to false and call processQueue again to handle the next request in the queue. Using a queue ensures that we do not send too many requests simultaneously and that requests are processed in the correct order.

Security

Your QuickNode endpoint is private, and you should treat it with the same care as any API key or password. If your endpoint gets into the wrong hands, a third party could abuse it by making requests on your behalf. This could result in you getting rate limited (which would impact your dApp's performance) and in higher bills. To prevent this, store your endpoint URL and API keys securely to prevent unauthorized access, and take advantage of the security measures available at quicknode.com/endpoints/<YOUR_ENDPOINT_ID>/security:

Note: JWT Authorization, Domain Masking, and Referrer Whitelists are only available for the Growth plan and above.

Other Best Practices

Compression

HTTP responses from blockchain methods can contain a vast amount of data and take a long time to arrive at the client. Compression reduces the size of data sent over the network, which saves bandwidth and shortens transfer times. Check out our Guide on How to Enable Gzip on RPC calls in JavaScript using Ethers.js to learn more about how to compress and decompress RPC calls.
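
As a rough illustration with axios in Node (the endpoint URL is a placeholder, and this assumes the server honors the Accept-Encoding header), you can request a gzipped response and let axios decompress it for you:
const axios = require("axios");

// Hypothetical endpoint URL for illustration only
const RPC_URL = "https://example.ethereum-mainnet.quiknode.pro/123456/";

(async () => {
  const response = await axios.post(
    RPC_URL,
    { jsonrpc: "2.0", id: 1, method: "eth_getBlockByNumber", params: ["latest", true] },
    {
      headers: {
        "Content-Type": "application/json",
        // Ask the server to gzip the (potentially large) response body
        "Accept-Encoding": "gzip",
      },
      // axios in Node decompresses gzip responses automatically
      decompress: true,
    }
  );
  console.log("Transactions in block:", response.data.result.transactions.length);
})();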

Monitor Usage

Your QuickNode dashboard contains several useful tools to help you monitor your endpoint's usage. You can view your endpoint's usage statistics, including the number of requests made, the number of failed requests, the types of requests made, the sources of requests, and method response times. You can also view your usage history by endpoint to see how it has changed over time.

To check out your endpoint's usage dashboard, head to https://www.quicknode.com/endpoints/<YOUR_ENDPOINT_ID>/metrics.

Document API Usage

If your endpoint serves API calls from third parties, it is important to ensure your users have the tools necessary to make effective calls to your endpoint. This includes providing documentation on how to use your API. It is important to describe the structure of your APIs so that other developers can use them properly. You should also provide examples of how to use your API to make it easier for developers to get started. You may include relevant information associated with QuickNode API credits in your documentation.

Wrap Up

Congrats! You now have the tools and knowledge to make the most of your QuickNode endpoint and ensure optimal performance for your dApps. If you are building something and have questions about how to improve your application, feel free to reach out to us on Discord or Twitter.

We <3 Feedback

If you have any feedback on this guide, let us know. We'd love to hear from you.
