Overview
This guide provides version-specific instructions for upgrading your stream to version 5. Follow the section that matches your current version.
You can find your current stream version by visiting your stream settings page within the QuickNode dashboard. Your stream version is displayed inside the destination section.
Version 5 introduces the following improvements:
- Enhanced Performance: Faster processing and more flexible filtering capabilities
- Asynchronous Operations: Streams filters now support asynchronous operations with the Key-Value Store
- Consistent Data Structure: Standardized JSON format for all destinations
- Improved Metadata Handling: Better organization and access to stream metadata
Key Changes in Version 5
1. Data Structure
All data is now delivered to your destination or filter in a consistent JSON format:
{
  "data": [...], // Your stream data
  "metadata": {
    "stream_id": "...",
    "stream_name": "...",
    "stream_region": "...",
    "network": "...",
    "dataset": "...",
    "start_range": 0,
    "end_range": 0,
    "batch_start_range": 0,
    "batch_end_range": 0,
    "keep_distance_from_tip": 0
  }
}
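For example, a filter can read these metadata fields instead of hard-coding per-stream values. A minimal sketch that tags each item with its source network and dataset (field names come from the structure above):
function main(stream) {
  const { data, metadata } = stream;
  // Build a label from the standardized metadata fields.
  const label = `${metadata.network}/${metadata.dataset}`;
  const tagged = data.map((item) => ({ source: label, item }));
  return { data: tagged, metadata };
}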
2. Key-Value Store Methods
In V5, all key-value store methods must:
- Be used asynchronously
- Have the qnLib. prefix prepended (e.g., await qnLib.qnGetList(key) instead of qnGetList(key))
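For instance, a V5 filter that checks incoming items against a stored list might look like this. A minimal sketch, assuming a list named allowlist already exists in your Key-Value Store and that items carry a hypothetical address field:
async function main(stream) {
  const { data, metadata } = stream;
  // V5: Key-Value Store calls are awaited and carry the qnLib. prefix.
  const allowlist = await qnLib.qnGetList("allowlist");
  // Keep only items whose (hypothetical) address field appears on the list.
  const filtered = data.filter((item) => allowlist.includes(item.address));
  return { data: filtered, metadata };
}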
Database Schema Changes
Postgres Destinations
If you're using Postgres and currently on Version 1, you'll need to migrate your table structure. The change occurred in Version 2:
Old V1 table structure:
CREATE TABLE your_table (
  block_number BIGINT,
  network VARCHAR,
  stream_id VARCHAR,
  data JSONB,
  PRIMARY KEY (block_number, network)
);
New V2-V5 table structure:
CREATE TABLE your_table (
  from_block_number BIGINT,
  to_block_number BIGINT,
  network VARCHAR,
  stream_id VARCHAR,
  data JSONB,
  PRIMARY KEY (from_block_number, to_block_number, network)
);
Snowflake Destinations
If you're using Snowflake and currently on Version 1, you'll need to migrate your table structure. The change occurred in Version 2:
Old V1 table structure:
CREATE TABLE your_table (
  block_number BIGINT,
  network VARCHAR,
  stream_id VARCHAR,
  data VARIANT,
  PRIMARY KEY (block_number, network)
);
New V2-V5 table structure:
CREATE TABLE your_table (
  from_block_number BIGINT,
  to_block_number BIGINT,
  network VARCHAR,
  stream_id VARCHAR,
  data VARIANT,
  PRIMARY KEY (from_block_number, to_block_number, network)
);
Migration Paths
From Version 1
1. Filter Function Changes
- Current: Processes data block by block
- New: Processes entire batch at once
- Migration steps:
Old V1 filter:
function main(obj) {
  const { streamData, streamName, streamRegion, streamNetwork, streamDataset, streamId } = obj;
  // Process individual blocks
  return streamData;
}
New V5 filter:
function main(stream) {
  const { data, metadata } = stream;
  // Process the entire batch
  return {
    data: data,
    metadata: metadata
  };
}
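If your V1 filter transformed each block individually, move that per-block logic into a loop over the batch. A sketch, with transformBlock standing in for whatever your V1 filter did to a single block:
function main(stream) {
  const { data, metadata } = stream;
  // V1 ran your logic once per block; in V5 you iterate the batch yourself.
  const processed = data.map((block) => transformBlock(block));
  return { data: processed, metadata };
}

// Hypothetical stand-in for your existing per-block logic.
function transformBlock(block) {
  return block;
}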
2. Key-Value Store Updates
- Add the qnLib. prefix to all key-value operations
- Make operations async
Old:
const value = qnGetList("key");
New:
const value = await qnLib.qnGetList("key");
// NB: declare any function that uses await, including main, with the async keyword;
// otherwise it returns a Promise object instead of your data:
// async function main(stream) { ... }
3. Database Migration (if using Postgres or Snowflake)
- Create new table with V2-V5 structure
- Migrate data from the old table to the new table (a SQL sketch follows this list)
- Update application to use new table
- Drop old table after verification
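A sketch of the first two steps for Postgres, assuming a hypothetical new table name your_table_v5; since V1 wrote one block per row, from_block_number and to_block_number both take the old block_number. The Snowflake version is identical except data is VARIANT:
-- 1. Create the new table with the V2-V5 structure.
CREATE TABLE your_table_v5 (
  from_block_number BIGINT,
  to_block_number BIGINT,
  network VARCHAR,
  stream_id VARCHAR,
  data JSONB,
  PRIMARY KEY (from_block_number, to_block_number, network)
);

-- 2. Copy V1 rows across; each old row covered exactly one block.
INSERT INTO your_table_v5 (from_block_number, to_block_number, network, stream_id, data)
SELECT block_number, block_number, network, stream_id, data
FROM your_table;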
From Version 2
1. Filter Function Changes
- Current: Processes the entire batch, but with a different payload structure
- New: Uses the standardized metadata structure
- Migration steps:
Old V2 filter:
function filter(data) {
  // Process the data array directly
  const processedData = data; // your per-batch processing here
  return processedData;
}
New V5 filter:
function main(stream) {
  const { data, metadata } = stream;
  // Process the entire batch
  return {
    data: data,
    metadata: metadata
  };
}
2. Key-Value Store Updates
- Add the qnLib. prefix
- Make operations async
Old:
const value = qnGetList("key");
New:
const value = await qnLib.qnGetList("key");
// NB: declare any function that uses await, including main, with the async keyword;
// otherwise it returns a Promise object instead of your data:
// async function main(stream) { ... }
From Version 3 or 4
1. Filter Function Changes
- Current: Similar to V2 but with some metadata handling
- New: Uses standardized metadata structure
- Migration steps:
Old V3/V4 filter:
function filter(data) {
  // Process data along with partial metadata
  const processedData = data; // your per-batch processing here
  return processedData;
}
New V5 filter:
function main(stream) {
  const { data, metadata } = stream;
  // Process the entire batch
  return {
    data: data,
    metadata: metadata
  };
}
2. Key-Value Store Updates
- Add the qnLib. prefix
- Make operations async
Old:
const value = qnGetList("key");
New:
const value = await qnLib.qnGetList("key");
// NB: declare any function that uses await, including main, with the async keyword;
// otherwise it returns a Promise object instead of your data:
// async function main(stream) { ... }
Destination-Specific Changes
Webhook Destinations
- No configuration changes needed
- Headers are automatically handled
- Metadata is included in payload
S3 Destinations
- No configuration changes needed
- Data format follows new JSON structure
- Compression options remain the same
Postgres Destinations
- New table structure required (if on V1)
- Data format follows new JSON structure
- Primary key constraints updated
Snowflake Destinations
- New table structure required (if on V1)
- Data format follows new JSON structure
- Primary key constraints updated
Functions Destinations
- No configuration changes needed
- Data format follows new JSON structure
Testing Steps
1. Backup Your Data
- Take snapshots of your current data
- For database destinations, back up tables (a snapshot sketch follows this list)
- Document current filter function behavior
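For Postgres, a table snapshot can be as simple as the following (hypothetical names; Snowflake supports the same CREATE TABLE ... AS SELECT pattern):
-- Snapshot the table before migrating (hypothetical names).
CREATE TABLE your_table_backup AS
SELECT * FROM your_table;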
2. Test Filter Functions
- Test with small batches first
- Verify metadata handling
- Check key-value store operations
- Compare output with the current version (a local harness is sketched below)
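To compare outputs before switching over, you can run your V5 filter locally against a captured payload. A minimal Node.js harness, assuming a payload shaped like the V5 structure shown earlier (stub out any qnLib calls locally, since qnLib only exists in the Streams runtime):
// Minimal local harness (sketch). Replace samplePayload with a batch
// captured from your stream; the shape follows the V5 structure above.
const samplePayload = {
  data: [{ example: true }], // substitute real batch data
  metadata: {
    stream_id: "...",
    stream_name: "...",
    stream_region: "...",
    network: "...",
    dataset: "...",
    start_range: 0,
    end_range: 0,
    batch_start_range: 0,
    batch_end_range: 0,
    keep_distance_from_tip: 0
  }
};

// Paste your V5 filter here in place of this pass-through.
async function main(stream) {
  const { data, metadata } = stream;
  return { data, metadata };
}

main(samplePayload)
  .then((result) => console.log(JSON.stringify(result, null, 2)))
  .catch((err) => console.error(err));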
3. Test Destination Delivery
- Verify data format
- Check metadata inclusion
- Validate compression (if used)
- Test with different data types
4. Monitor Performance
- Watch for any performance changes
- Monitor error rates
- Check data consistency
- Compare processing times
Troubleshooting
Common issues and solutions:
1. Filter Function Errors
- Check async/await usage
- Verify the qnLib. prefix on key-value operations
- Ensure the proper return format
- Validate metadata structure
2. Data Format Issues
- Verify metadata structure
- Check data array format
- Validate JSON structure
- Compare with the expected format (a quick shape check is sketched below)
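A quick way to catch format regressions is to assert the output shape before deploying. A sketch based on the V5 structure shown earlier:
// Sketch: verify a filter's output matches the V5 { data, metadata } shape.
function validateOutput(output) {
  if (!output || typeof output !== "object") {
    throw new Error("Filter must return an object");
  }
  if (!Array.isArray(output.data)) {
    throw new Error("Expected output.data to be an array");
  }
  if (!output.metadata || typeof output.metadata !== "object") {
    throw new Error("Expected output.metadata to be an object");
  }
}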
3. Database Issues
- Verify table schemas
- Check primary key constraints
- Validate data types
- Test data migration scripts (a row-count check is sketched below)
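For example, a simple row-count comparison between the old and new tables (hypothetical table names) can confirm a migration copied everything:
-- Row counts should match after migration (hypothetical table names).
SELECT
  (SELECT COUNT(*) FROM your_table) AS old_rows,
  (SELECT COUNT(*) FROM your_table_v5) AS new_rows;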
Support
If you encounter issues:
- Check the logs for specific error messages
- Review the documentation for your destination type
- Contact support with detailed error information
We ❤️ Feedback!
If you have any feedback or questions about this documentation, let us know. We'd love to hear from you!