Under PeerDAS, each full node only stores a subset of all blob data ("custody data columns"), computed from:

- the node's `NodeId` (generated on first startup)
- the node's custody group count (4 by default for a full node)

For example, given its node ID, a full node might end up as a custodian of data columns 1, 3, 5 and 7 (see the sketch below).
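For illustration, here's a rough Rust sketch of how custody groups can be derived from the node ID, along the lines of the consensus spec's `get_custody_groups` (hash successive node-ID values and reduce modulo the number of custody groups, skipping duplicates). The constants and byte-order details here are illustrative assumptions, not code from the Lighthouse codebase:

```rust
use sha2::{Digest, Sha256};

/// Number of custody groups (128 in the current spec; illustrative here).
const NUMBER_OF_CUSTODY_GROUPS: u64 = 128;

/// Derive a node's custody groups from its 256-bit node ID: hash successive
/// node-ID values, take the first 8 bytes modulo NUMBER_OF_CUSTODY_GROUPS,
/// and skip duplicates until we have `custody_group_count` groups.
fn get_custody_groups(node_id: [u8; 32], custody_group_count: u64) -> Vec<u64> {
    assert!(custody_group_count <= NUMBER_OF_CUSTODY_GROUPS);
    let mut groups: Vec<u64> = Vec::new();
    let mut current_id = node_id; // treated as a little-endian uint256

    while (groups.len() as u64) < custody_group_count {
        let digest = Sha256::digest(current_id);
        let group = u64::from_le_bytes(digest[..8].try_into().unwrap())
            % NUMBER_OF_CUSTODY_GROUPS;
        if !groups.contains(&group) {
            groups.push(group);
        }
        // Increment the 256-bit id (little-endian), wrapping on overflow.
        for byte in current_id.iter_mut() {
            let (next, overflowed) = byte.overflowing_add(1);
            *byte = next;
            if !overflowed {
                break;
            }
        }
    }
    groups.sort_unstable();
    groups
}

fn main() {
    // A full node with the default custody group count of 4.
    let node_id = [42u8; 32];
    println!("custody groups: {:?}", get_custody_groups(node_id, 4));
}
```

The key property for this issue is that the output is a pure function of the node ID and the custody group count: change either input and the custody set changes with it.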
On a restart, if either the node ID or the custody group count changes (e.g. when switching between a supernode and a full node via the `--subscribe-all-data-column-subnets` flag), the node's set of custody columns also changes, resulting in an inconsistent data column DB - similar to switching between an archive node and a non-archive node.
This means the node would no longer be able to serve the data columns it's expected to store, and may end up getting downscored by all of its peers.
Proposed Solution
Handling custody column changes is likely quite complex, so it might be easier and quicker to resync from scratch.
We could persist the custody info in the DB and perform an integrity check on startup to verify that the newly computed custody columns match what's in the database.
If they don't match, exit the process and inform the user to re-sync instead. A rough sketch of this check is shown below.
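A minimal sketch of what that startup check could look like, assuming we persist the node ID and custody group count that the custody columns were derived from (all names here are hypothetical, not Lighthouse's actual DB schema):

```rust
/// Custody info persisted in the DB (hypothetical schema, for illustration).
#[derive(Debug, PartialEq, Eq)]
struct PersistedCustodyInfo {
    node_id: [u8; 32],
    custody_group_count: u64,
}

/// Startup integrity check: compare freshly computed custody info against
/// what was persisted, and refuse to start on a mismatch.
fn check_custody_consistency(
    persisted: Option<PersistedCustodyInfo>,
    current: PersistedCustodyInfo,
) -> Result<(), String> {
    match persisted {
        // First startup: nothing persisted yet, so record the current info.
        None => {
            // db.put_custody_info(&current); // hypothetical DB write
            Ok(())
        }
        // Unchanged custody info: safe to continue with the existing DB.
        Some(old) if old == current => Ok(()),
        // Mismatch: the data column DB no longer matches this node's
        // custody assignment, so exit and ask the user to re-sync.
        Some(old) => Err(format!(
            "Custody info changed (persisted: {:?}, current: {:?}). \
             The data column DB is inconsistent; please re-sync from scratch.",
            old, current
        )),
    }
}
```

On an `Err`, the caller would log the message and exit the process rather than continue with an inconsistent DB.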
Note: we may end up storing this info as part of validator custody (#6767), and we should be able to use the same info from the DB for both features.