mining: add getMemoryLoad() and track template non-mempool memory footprint #33922
Sjors wants to merge 5 commits into bitcoin:master
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks: for details see https://corecheck.dev/bitcoin/bitcoin/pulls/33922.

Reviews: see the guideline for information on the review process. If your review is incorrectly listed, please copy-paste …

Conflicts: reviewers, this pull request conflicts with the following ones: … If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
I haven't benchmarked this yet on mainnet, so I'm not sure if checking every (unique) transaction for mempool presence is unacceptably expensive. If people prefer, I could also add a way for the …
Force-pushed from 21ad8c1 to f22413f
🚧 At least one of the CI tasks failed.

Hints: try to run the tests locally, according to the documentation. However, a CI failure may still …

Leave a comment here, if you need help tracking down a confusing failure.
```cpp
TxTemplateMap& tx_refs{*Assert(m_tx_template_refs)};
// Don't track the dummy coinbase, because it can be modified in-place
// by submitSolution()
```
b9306b79b8f5667a2679236af8792bb1c36db817: in addition, we might be wiping the dummy coinbase from the template later: Sjors#106
Force-pushed from f22413f to 3b77529
Concept ACK
I think it would be better if we have internal memory management for the mining interface IPC, since we hold on to the block templates.
I would suggest the following approach:

- Add a memory budget for the mining interface.
- Introduce a tracking list of recently built block templates and total memory usage.
- Add templates to the list and increment the memory usage after every `createnewblock` or `waitnext` return.
- Whenever the memory budget is exhausted, release templates in FIFO order.

I think this mechanism should be safe: we create a new template after a time interval elapses even if fees increase, and that interval is usually enough for the client to receive and distribute the template to miners, so by the time the budget is exhausted the miners will have long since switched to the most recent template (because of the time interval used in between returns of `waitnext`).
Mining interface clients should also handle their own memory internally.
Currently, I don’t see much use for the exposed getMemoryLoad method. In my opinion, we should not rely on the IPC client to manage our memory.
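To make the suggestion above concrete, here is a rough sketch of what such budget-plus-FIFO bookkeeping could look like. All names (`TemplateBudget`, `TemplateEntry`) are hypothetical; this is not code from the PR:

```cpp
// Hypothetical sketch of a memory budget with FIFO eviction for held templates.
#include <cstddef>
#include <deque>
#include <memory>
#include <utility>

struct TemplateEntry {
    std::shared_ptr<void> template_ref; // stand-in for a held block template
    size_t mem_bytes{0};                // measured non-mempool footprint of this template
};

class TemplateBudget
{
    std::deque<TemplateEntry> m_templates; // oldest template first
    size_t m_used_bytes{0};
    const size_t m_budget_bytes;

public:
    explicit TemplateBudget(size_t budget_bytes) : m_budget_bytes{budget_bytes} {}

    // Called after every createnewblock / waitnext return.
    void Add(TemplateEntry entry)
    {
        m_used_bytes += entry.mem_bytes;
        m_templates.push_back(std::move(entry));
        // When the budget is exhausted, drop the oldest templates first (FIFO),
        // always keeping at least the most recent one.
        while (m_used_bytes > m_budget_bytes && m_templates.size() > 1) {
            m_used_bytes -= m_templates.front().mem_bytes;
            m_templates.pop_front();
        }
    }

    size_t Used() const { return m_used_bytes; }
};
```

Whether templates should ever be evicted unconditionally is exactly what the rest of this thread debates.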
It seems counterintuitive, but from a memory management perspective IPC clients are treated no differently than our own code. And if we started FIFO-deleting templates that are used by our own code, we'd crash. So I think FIFO deletion should be a last resort (not implemented here). There's another reason why we should give clients an opportunity to gracefully release templates in whatever order they prefer. Maybe there are 100 downstream ASICs, one of which is very slow at loading templates, so it's only given a new template when the tip changes, not when there's a fee change. In that scenario you have a specific template that the client wants to "defend" at all costs. In practice I'm hoping none of this matters and we can pick and recommend defaults that make it unlikely to get close to a memory limit, other than during some weird token launch.
IMHO I think we should separate that, and treat clients differently from our own code, because they are different codebases and separate applications with their own memory.
I see your point but I don’t think that’s a realistic scenario, and I think we shouldn’t design software to be one-size-fits-all.
Delegating template eviction responsibility to the client can put us in a situation where they handle it poorly and cause us to OOM (but I guess your argument is that we'd rather take that chance than be in a situation where we make miners potentially lose out on rewards).
Note that it's already the clients' responsibility; that's inherent to how multiprocess works. In the scenario where they handle it poorly, we can use FIFO deletion. All …
We currently don't track whether any given …
Afaik that means revalidating the block from scratch, removing one advantage the …
Force-pushed from 3b77529 to 24592b7
I restructured the implementation and commits a bit. The … It's also less code churn because I don't have to touch the … It also made it easier to move … This in turn let me split out a separate commit that introduces the actual … I added some comments to point out that we don't hold a …
Force-pushed from 24592b7 to 03dcfae
One caveat is that … Expanded the PR description.
Force-pushed from 03dcfae to ac1e97a
Force-pushed from 6cfd0f2 to 2ba0f0b
Force-pushed from 2ba0f0b to c37d715
Rebased after #34184.
src/node/context.h (Outdated)

```cpp
TxTemplateMap template_tx_refs GUARDED_BY(template_state_mutex);
//! Cache latest getblocktemplate result for BIP 22 long polling. Must be cleared
//! before template_tx_refs.
std::unique_ptr<interfaces::BlockTemplate> gbt_template;
```
Is "gbt_template" supposed to mean "get block template template"? Maybe "gbt_result" or "get_block_template_result" would be better.
src/node/types.h (Outdated)

```cpp
/*
 * Map how many templates refer to each transaction reference.
 */
using TxTemplateMap = std::map<CTransactionRef, size_t>;
```
How many entries are expected to be stored in this map? std::map has O(log(size)) lookup whereas std::unordered_map has O(1). Here we do not need the entries to be ordered.
nit: start the comment with /** to make doxygen recognize it and attach it to the following code in the documentation.
Done both.
Assuming one template per second for two hours (long block interval), less than 10k.
Just mentioning - it is the same for map and unordered_map when used with shared_ptr (or CTransactionRef) - they compare pointers. So, two distinct objects that have the same values for all their members will be considered different. I am not sure if this is a problem in our code. Grepping the code for (map|set).*CTransactionRef, it looks like this will be the first case in non-test code where we use CTransactionRef as a key without providing custom comparator/hasher.
To illustrate with an explicit example:
```cpp
CMutableTransaction mutable_tx;
// mutable_tx.vin = ...
// mutable_tx.vout = ...
CTransaction tx1{mutable_tx};
CTransaction tx2 = tx1; // tx2 is a copy of tx1, same transaction _logically_

CTransactionRef tx1_ref{MakeTransactionRef(tx1)};
CTransactionRef tx2_ref{MakeTransactionRef(tx2)};

std::map<CTransactionRef, int> m;
assert(m.emplace(tx1_ref, 5).second); // inserted
assert(m.emplace(tx2_ref, 6).second); // inserted
assert(m.size() == 2); // has 2 elements
```
If this needs to be resolved, then the below should do it:

```diff
diff --git i/src/node/types.h w/src/node/types.h
index 164667772a..7bb187ac1c 100644
--- i/src/node/types.h
+++ w/src/node/types.h
@@ -11,21 +11,23 @@
 //! files.
 #ifndef BITCOIN_NODE_TYPES_H
 #define BITCOIN_NODE_TYPES_H
 #include <consensus/amount.h>
-#include <cstddef>
-#include <cstdint>
-#include <optional>
 #include <policy/policy.h>
 #include <primitives/transaction.h>
 #include <script/script.h>
-#include <unordered_map>
 #include <uint256.h>
+#include <util/hasher.h>
 #include <util/time.h>
+
+#include <cstddef>
+#include <cstdint>
+#include <optional>
+#include <unordered_map>
 #include <vector>
 namespace node {
 enum class TransactionError {
     OK, //!< No error
     MISSING_INPUTS,
@@ -173,11 +175,11 @@ enum class TxBroadcast : uint8_t {
     NO_MEMPOOL_PRIVATE_BROADCAST,
 };
 /**
  * Map how many templates refer to each transaction reference.
  */
-using TxTemplateMap = std::unordered_map<CTransactionRef, size_t>;
+using TxTemplateMap = std::unordered_map<CTransactionRef, size_t, CTransactionRefSaltedHash, CTransactionRefComp>;
 } // namespace node
 #endif // BITCOIN_NODE_TYPES_H
diff --git i/src/util/hasher.h w/src/util/hasher.h
index 02c7703391..d3b77ba72d 100644
--- i/src/util/hasher.h
+++ w/src/util/hasher.h
@@ -114,7 +114,23 @@
 private:
 public:
     SaltedSipHasher();
     size_t operator()(const std::span<const unsigned char>& script) const;
 };
+
+struct CTransactionRefSaltedHash {
+    SaltedWtxidHasher m_wtxid_hasher;
+
+    size_t operator()(const CTransactionRef& tx) const
+    {
+        return m_wtxid_hasher(tx->GetWitnessHash());
+    }
+};
+
+struct CTransactionRefComp {
+    bool operator()(const CTransactionRef& a, const CTransactionRef& b) const
+    {
+        return a->GetWitnessHash() == b->GetWitnessHash();
+    }
+};
+
 #endif // BITCOIN_UTIL_HASHER_H
```

Using a salted hash instead of barely using the first bytes of the transaction id as a hash because somebody may craft a pile of transactions with such ids as to make unordered_map lookup time deteriorate from O(1) to O(size). Not sure if that is an overkill in the current use case. In PrivateBroadcast::m_transactions I used the simpler:

```cpp
struct CTransactionRefHash {
    size_t operator()(const CTransactionRef& tx) const
    {
        return static_cast<size_t>(tx->GetWitnessHash().ToUint256().GetUint64(0));
    }
};
```

because there 1. the transactions are originating locally (do not come from untrusted sources) and 2. the expectation is to store a small number of transactions, so even O(size) will not hog the machine.
src/node/context.h (Outdated)

```cpp
Mutex template_state_mutex;
//! Track how many templates (which we hold on to on behalf of connected IPC
//! clients) are referencing each transaction.
TxTemplateMap template_tx_refs GUARDED_BY(template_state_mutex);
//! Cache latest getblocktemplate result for BIP 22 long polling. Must be cleared
//! before template_tx_refs.
std::unique_ptr<interfaces::BlockTemplate> gbt_template;
```
I think the comment warrants an elaboration. gbt_template is not explicitly cleared anywhere, so that happens in the destructor of NodeContext. Struct members are destroyed in reverse order of their declaration. Is this comment intended to prevent swapping the declaration order of template_tx_refs and gbt_template?

Maybe:

```cpp
//! Cache latest getblocktemplate result for BIP 22 long polling. Must be cleared
//! before template_tx_refs because the destructor of this decrements the count
//! in `template_tx_refs` of each transaction in the template. If it does not find
//! some of its transactions in `template_tx_refs` then it will abort.
```
I took your comment.
It’s not only about declaration order. The intent is to destroy all BlockTemplate instances, including the one in gbt_result, while template_tx_refs is still alive so destructors can decrement reference counts. Once all templates are gone (and the map is empty), template_tx_refs can be destroyed.
The getblocktemplate RPC uses a static BlockTemplate, which goes out of scope only after the node has completed its shutdown sequence. This becomes a problem when a later commit implements a destructor that uses m_node.
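As an aside, the declaration-order point is easy to demonstrate in isolation (all names here are made up for illustration):

```cpp
// Members of a struct are destroyed in reverse order of declaration, so a member
// declared *before* another is still alive while the later member's destructor runs.
#include <cstdio>

struct RefMap {
    ~RefMap() { std::puts("RefMap destroyed last"); }
};

struct TemplateHolder {
    ~TemplateHolder() { std::puts("TemplateHolder destroyed first (may still use RefMap)"); }
};

struct Context {
    RefMap refs;              // declared first => destroyed last
    TemplateHolder template_; // declared last  => destroyed first
};

int main()
{
    Context ctx; // on scope exit: prints the TemplateHolder line, then the RefMap line
}
```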
IPC clients can hold on to block templates indefinitely, which has the same impact as when the node holds a shared pointer to the CBlockTemplate. Because each template in turn tracks CTransactionRefs, transactions that are removed from the mempool will not have their memory cleared. This commit adds bookkeeping to the block template constructor and destructor that will let us track the resulting memory footprint.
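A minimal sketch of the kind of constructor/destructor bookkeeping this commit message describes, against a shared reference-count map. The names and the omission of locking are simplifications, not the PR's actual code:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <utility>
#include <vector>

struct Tx {};                            // stand-in for CTransaction
using TxRef = std::shared_ptr<const Tx>; // stand-in for CTransactionRef
using TxTemplateMap = std::map<TxRef, size_t>;

class TemplateRefs
{
    TxTemplateMap& m_refs;    // shared map living in the node context
    std::vector<TxRef> m_txs; // this template's transactions (excluding the dummy coinbase)

public:
    TemplateRefs(TxTemplateMap& refs, std::vector<TxRef> txs) : m_refs{refs}, m_txs{std::move(txs)}
    {
        for (const auto& tx : m_txs) ++m_refs[tx]; // constructor: one more template refers to tx
    }
    ~TemplateRefs()
    {
        for (const auto& tx : m_txs) {
            auto it{m_refs.find(tx)};
            assert(it != m_refs.end());              // every tracked tx must still be in the map
            if (--it->second == 0) m_refs.erase(it); // drop the entry once no template refers to it
        }
    }
};
```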
Calculate the non-mempool memory footprint for template transaction references. Add bench logging to collect data on whether caching or simplified heuristics are needed, such as not checking for mempool presence.
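The loop described here might look roughly like the following sketch; the helper name and signature in the PR may differ, and the bench logging is omitted. `RecursiveDynamicUsage` and `node::TxTemplateMap` are assumed to come from the Bitcoin Core tree, and the mempool check is abstracted into a predicate:

```cpp
#include <core_memusage.h>          // RecursiveDynamicUsage()
#include <node/types.h>             // node::TxTemplateMap (added by this PR)
#include <primitives/transaction.h> // CTransactionRef

#include <cstddef>
#include <functional>

//! Sum the memory used by template-referenced transactions that the mempool does
//! not account for. The mempool check is passed in as a predicate to keep the
//! sketch independent of the exact CTxMemPool API.
size_t TemplateNonMempoolUsage(const node::TxTemplateMap& tx_refs,
                               const std::function<bool(const CTransactionRef&)>& in_mempool)
{
    size_t total{0};
    for (const auto& entry : tx_refs) {
        const CTransactionRef& tx{entry.first};
        if (in_mempool(tx)) continue;        // still accounted for by the mempool itself
        total += RecursiveDynamicUsage(*tx); // each unique transaction counted once
    }
    return total;
}
```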
Allow IPC clients to inspect the amount of memory consumed by non-mempool transactions in blocks. Returns a MemoryLoad struct which can later be expanded to e.g. include a limit. Expand the interface_ipc.py test to demonstrate the behavior and to illustrate how clients can call destroy() to reduce memory pressure.
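Roughly what the new interface addition might look like; the `MemoryLoad` field shown here is a guess at a minimal shape, and the surrounding `Mining` interface is heavily abbreviated:

```cpp
#include <cstddef>

namespace interfaces {
//! Memory consumed by transactions that are only referenced by block templates
//! (i.e. not accounted for by the mempool). Could later grow e.g. a limit field.
struct MemoryLoad {
    size_t template_bytes{0}; // hypothetical field name; total across all clients
};

//! Abbreviated sketch of the mining interface; the real class has many more methods.
class Mining
{
public:
    virtual ~Mining() = default;

    //! Query the current template memory load (total, across all connected clients).
    virtual MemoryLoad getMemoryLoad() = 0;
};
} // namespace interfaces
```

A client could poll this value and call destroy() on templates it no longer needs when the reported footprint grows.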
Force-pushed from c37d715 to 04553cd
Rebased after #34422 for easier testing with Rust, see 2140-dev/bitcoin-capnp-types#13. Implemented all of @vasild's nits.
vasild left a comment:
ACK 04553cd
Would be good to figure out if #33922 (comment) needs addressing.

Implements a way to track the memory footprint of all non-mempool transactions that are still being referenced by block templates, see discussion in #33899. It does not impose a limit.

IPC clients can query this footprint (total, across all clients) using the `getMemoryLoad()` IPC method. Its client-side usage is demonstrated here: …

Additionally, the functional test in `interface_ipc.py` is expanded to demonstrate how template memory management works: templates are not released until the client drops references to them, calls the template destroy method, or disconnects. The destroy method is called automatically by clients using libmultiprocess, as sv2-tp does. In the Python tests it also happens when references are destroyed or go out of scope.

The PR starts with preparation refactor commits:

- Rework `interface_ipc.py` so `destroy()` calls happen in an order that's useful to later demonstrate memory management
- Move `std::unique_ptr<BlockTemplate> block_template` from a `static` defined in `rpc/mining.cpp` to `NodeContext`. This prevents a crash when we switch to a non-trivial destructor later (which uses `m_node`).

Then the main commits:

- Add `template_tx_refs` to `NodeContext` to track how many templates contain any given transaction. This map is updated by the `BlockTemplate` constructor and destructor.
- Add `GetTemplateMemoryUsage()`, which loops over this map and sums up the memory footprint for transactions outside the mempool
- Add `getMemoryLoad()` and add test coverage