Important operations
It looks complicated, but we can work by elimination: first filter out the methods that fall into the following categories: data queries, existence checks, logic checks, and repeatable operations. The methods that remain are the ones that deserve our attention.
init
/// Initializes the blockchain and returns a new Chain instance. Does a
/// check on the current chain head to make sure it exists and creates one
/// based on the genesis block if necessary.
process_block
/// Processes a single block, then checks for orphans, processing
/// those as well if they're found
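The "process a block, then retry orphans" flow can be modeled with a minimal, self-contained sketch. The `Block`, `OrphanPool`, and `Chain` types below are illustrative stand-ins, not Grin's actual structures; they only show the shape of the logic: park blocks whose parent is unknown, and drain any waiting orphans once their parent is accepted.

```rust
use std::collections::HashMap;

// Illustrative block: identified by its own hash and its parent's hash.
struct Block {
    hash: u64,
    prev: u64,
}

// Blocks whose parent is not yet known are parked here, keyed by parent hash.
#[derive(Default)]
struct OrphanPool {
    by_parent: HashMap<u64, Vec<Block>>,
}

struct Chain {
    known: HashMap<u64, Block>, // accepted blocks by hash
    orphans: OrphanPool,
}

impl Chain {
    fn new(genesis: Block) -> Self {
        let mut known = HashMap::new();
        known.insert(genesis.hash, genesis);
        Chain { known, orphans: OrphanPool::default() }
    }

    // Mirrors the process_block flow: accept the block if its parent is
    // known, otherwise park it as an orphan; then retry any orphans the
    // newly accepted block unlocks.
    fn process_block(&mut self, b: Block) -> bool {
        if !self.known.contains_key(&b.prev) {
            self.orphans.by_parent.entry(b.prev).or_default().push(b);
            return false;
        }
        let mut queue = vec![b];
        while let Some(blk) = queue.pop() {
            let hash = blk.hash;
            self.known.insert(hash, blk);
            // Any orphans waiting on this block can now be processed too.
            if let Some(children) = self.orphans.by_parent.remove(&hash) {
                queue.extend(children);
            }
        }
        true
    }
}

fn main() {
    let mut chain = Chain::new(Block { hash: 0, prev: 0 });
    // Block 2 arrives before its parent: it becomes an orphan.
    assert!(!chain.process_block(Block { hash: 2, prev: 1 }));
    // Block 1 arrives: it is accepted and unlocks orphan block 2.
    assert!(chain.process_block(Block { hash: 1, prev: 0 }));
    assert!(chain.known.contains_key(&2));
}
```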
process_block_single
/// Attempt to add a new block to the chain.
/// Returns true if it has been added to the longest chain,
/// or false if it was added to a fork (or the orphan pool).
process_block_no_orphans
/// Attempt to add a new block to the chain. Returns the new chain tip if it
/// has been added to the longest chain, None if it's added to an (as of
/// now) orphan chain.
process_block_header
/// Process a block header received during "header first" propagation.
validate
/// Validate the current chain state.
txhashset_read
/// Provides a reading view into the current txhashset state as well as
/// the required indexes for a consumer to rewind to a consistent state
/// at the provided block hash.
txhashset_write
/// Writes a reading view on a txhashset state that's been provided to us.
/// If we're willing to accept that new state, the data stream will be
/// read as a zip file, unzipped and the resulting state files should be
/// rewound to the provided indexes.
The remaining operations matter as well, but space is limited, so we pick just a few to walk through here.
compact
/// Triggers chain compaction, cleaning up some unnecessary historical
/// information. We introduce a chain depth called horizon, which is
/// typically in the range of a couple days. Before that horizon, this
/// method will:
///
/// * compact the MMR data files, flushing the corresponding remove logs
/// * delete old records from the k/v store (older blocks, indexes, etc.)
///
/// This operation can be resource intensive and takes some time to execute.
/// Meanwhile, the chain will not be able to accept new blocks. It should
/// therefore be called judiciously.
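The horizon cut-off reduces to a simple height calculation. The constant value and function name below are illustrative (Grin's actual horizon is a consensus parameter); the sketch only shows that everything strictly below the cut-off height is eligible for compaction.

```rust
// Illustrative horizon: roughly two days of blocks at one block per minute.
const CUT_THROUGH_HORIZON: u64 = 60 * 24 * 2;

// Everything strictly below the returned height may be compacted:
// MMR data files pruned and old block records deleted from the k/v store.
fn compaction_cutoff(head_height: u64) -> u64 {
    head_height.saturating_sub(CUT_THROUGH_HORIZON)
}

fn main() {
    // A young chain has nothing old enough to compact.
    assert_eq!(compaction_cutoff(100), 0);
    // A mature chain keeps the last CUT_THROUGH_HORIZON blocks intact.
    assert_eq!(compaction_cutoff(10_000), 7_120);
}
```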
get_last_n_output
/// returns the last n nodes inserted into the output sum tree
get_last_n_rangeproof
/// as above, for rangeproofs
get_last_n_kernel
/// as above, for kernels
unspent_outputs_by_insertion_index
/// Returns unspent outputs by insertion index
total_difficulty
/// Total difficulty at the head of the chain
orphans_len
/// Orphans pool size
total_header_difficulty
/// Total difficulty at the head of the header chain
reset_head
/// Reset header_head and sync_head to head of current body chain
head
/// Get the tip that's also the head of the chain
head_header
/// Block header for the chain head
get_block
/// Gets a block by hash
get_block_header
/// Gets a block header by hash
get_header_by_height
/// Gets the block header at the provided height
is_on_current_chain
/// Verifies the given block header is actually on the current chain.
/// Checks the header_by_height index to verify the header is where we say
/// it is
get_sync_head
/// Get the tip of the current "sync" header chain.
/// This may be significantly different to current header chain.
get_header_head
/// Get the tip of the header chain.
difficulty_iter
/// Builds an iterator on blocks starting from the current chain head and
/// running backward. Specialized to return information pertaining to block
/// difficulty calculation (timestamp and previous difficulties).
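The backward walk can be sketched as a plain Rust iterator. The `HeaderInfo` struct and in-memory map here are illustrative; Grin's real iterator reads headers from the store, but the traversal order is the same: start at the head and step back one height at a time.

```rust
use std::collections::HashMap;

// Minimal header info relevant to difficulty calculation.
#[derive(Debug, PartialEq)]
struct HeaderInfo {
    height: u64,
    timestamp: u64,
    total_difficulty: u64,
}

// Iterator that starts at the chain head and yields headers backward
// until it steps off the genesis block.
struct DifficultyIter<'a> {
    headers: &'a HashMap<u64, HeaderInfo>, // height -> header
    next_height: Option<u64>,
}

impl<'a> Iterator for DifficultyIter<'a> {
    type Item = &'a HeaderInfo;
    fn next(&mut self) -> Option<Self::Item> {
        let h = self.next_height?;
        let header = self.headers.get(&h)?;
        // checked_sub yields None at height 0, ending the iteration.
        self.next_height = h.checked_sub(1);
        Some(header)
    }
}

fn main() {
    let mut headers = HashMap::new();
    for h in 0..3 {
        headers.insert(h, HeaderInfo { height: h, timestamp: h * 60, total_difficulty: h * 10 });
    }
    let iter = DifficultyIter { headers: &headers, next_height: Some(2) };
    let heights: Vec<u64> = iter.map(|hi| hi.height).collect();
    // Runs backward from the head.
    assert_eq!(heights, vec![2, 1, 0]);
}
```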
block_exists
/// Check whether we have a block without reading it
is_unspent
/// For the given commitment, find the unspent output and return the
/// associated output data. Return an error if the output does not exist
/// or has been spent. This query is done in a way that is consistent
/// with the current chain state, specifically the current winning
/// (valid, most work) fork.
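The two failure modes (unknown vs. already spent) can be sketched as a map lookup against the current fork's view of the output set. The `UtxoView` type and error names below are illustrative, not Grin's actual API.

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum OutputError {
    NotFound,
    AlreadySpent,
}

// Illustrative view of the winning fork's output set: commitment -> spent flag.
struct UtxoView {
    outputs: HashMap<&'static str, bool>, // true if spent
}

impl UtxoView {
    // Mirrors the is_unspent contract: error if the output is unknown
    // or has already been spent on this fork.
    fn is_unspent(&self, commit: &str) -> Result<(), OutputError> {
        match self.outputs.get(commit) {
            None => Err(OutputError::NotFound),
            Some(true) => Err(OutputError::AlreadySpent),
            Some(false) => Ok(()),
        }
    }
}

fn main() {
    let mut outputs = HashMap::new();
    outputs.insert("commit_a", false);
    outputs.insert("commit_b", true);
    let view = UtxoView { outputs };
    assert_eq!(view.is_unspent("commit_a"), Ok(()));
    assert_eq!(view.is_unspent("commit_b"), Err(OutputError::AlreadySpent));
    assert_eq!(view.is_unspent("commit_c"), Err(OutputError::NotFound));
}
```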