As these were not updated when 'backporting' the 225430 checkpoint
into head.
Additionally, report verification progress in debug.log, and
tweak the sigcheck-verification-speed-factor a bit.

Global cleanups

This should make detecting leaks much easier.

There exists a per-message-processed send buffer overflow protection,
where processing is halted when the send buffer is larger than the
allowed maximum.
This protection does not apply to individual items, however, and
getdata has the potential to cause large amounts of data to be
sent. If several hundred blocks are requested in one getdata, the
send buffer can easily grow 50 megabytes above the send buffer limit.
This commit breaks up the processing of getdata requests, remembering
them inside a CNode when too many are requested at once.
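
A minimal sketch of that shape, with made-up type names and sizes rather
than the real CNode/CInv definitions: leftover getdata items sit in a
per-node deque and are only drained while the send buffer is under its cap.

    #include <deque>
    #include <string>

    // Illustrative stand-ins, not the real classes.
    struct InvSketch { int type; std::string hash; };

    struct NodeSketch {
        std::deque<InvSketch> vRecvGetData;   // getdata items still to answer
        size_t nSendSize = 0;                 // bytes queued for sending
    };

    const size_t SEND_BUFFER_LIMIT = 10 * 1000 * 1000;   // assumed cap

    void SendItem(NodeSketch& node, const InvSketch&) {
        node.nSendSize += 1000000;            // pretend each block reply is ~1 MB
    }

    void ProcessGetDataSketch(NodeSketch& node) {
        // Stop as soon as the buffer is full; the rest stays queued for the
        // next message-processing pass instead of blowing past the limit.
        while (!node.vRecvGetData.empty() && node.nSendSize < SEND_BUFFER_LIMIT) {
            SendItem(node, node.vRecvGetData.front());
            node.vRecvGetData.pop_front();
        }
    }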

* Change CNode::vRecvMsg to be a deque instead of a vector (less copying)
* Make sure to acquire cs_vRecvMsg in CNode::CloseSocketDisconnect (as it
  may be called without that lock).
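
A rough illustration of both points, using standard-library stand-ins for
CNode's actual members: popping a processed message off the front of a deque
is O(1), and the disconnect path takes the receive lock itself because its
callers may not hold it.

    #include <deque>
    #include <mutex>
    #include <vector>

    struct NetMessageSketch { std::vector<unsigned char> data; };

    struct NodeSketch {
        std::mutex cs_vRecvMsg;                       // guards vRecvMsg
        std::deque<NetMessageSketch> vRecvMsg;

        void CloseSocketDisconnect() {
            std::lock_guard<std::mutex> lock(cs_vRecvMsg);
            vRecvMsg.clear();                         // safe: lock held here
        }
    };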

Replaces CNode::vRecv buffer with a vector of CNetMessage objects. This
simplifies ProcessMessages() and eliminates several redundant data copies.
Overview:
* socket thread now parses the incoming message datastream into
  header/data components, as encapsulated by CNetMessage
* socket thread adds each CNetMessage to a vector inside CNode
* message thread (ProcessMessages) iterates through CNode's CNetMessage vector
Message parsing is made more strict:
* Socket is disconnected if a message is larger than MAX_SIZE
  or if CMessageHeader deserialization fails (the latter is impossible?).
  Previously, the code would simply eat garbage data all day long.
* Socket is disconnected if we fail to find pchMessageStart.
  We do not search through garbage to find pchMessageStart; each
  message must begin precisely after the last message ends.
ProcessMessages() always processes a complete message, and is more efficient:
* buffer is always precisely sized, using CDataStream::resize(),
  rather than progressively sized in 64k chunks. More efficient
  for large messages like "block".
* whole-buffer memory copy eliminated (vRecv -> vMsg)
* other buffer-shifting memory copies eliminated (vRecv.insert, vRecv.erase)
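
A compressed sketch of the header-then-payload state machine (field names and
the 24-byte header layout are assumptions for illustration, not the exact
CNetMessage definition): the socket thread accumulates the fixed-size header,
reads the payload length out of it, sizes the data buffer once, and fills it
until the message is complete.

    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct NetMessageSketch {
        static const size_t HEADER_SIZE = 24;   // magic + command + length + checksum
        std::vector<unsigned char> header;
        std::vector<unsigned char> data;
        uint32_t nDataSize = 0;                 // parsed from the header
        bool fHeaderDone = false;

        bool Complete() const { return fHeaderDone && data.size() == nDataSize; }

        // Consume up to nBytes from the raw socket stream; return bytes used.
        size_t Feed(const unsigned char* pch, size_t nBytes) {
            size_t nUsed = 0;
            while (nUsed < nBytes && !fHeaderDone) {
                header.push_back(pch[nUsed++]);
                if (header.size() == HEADER_SIZE) {
                    std::memcpy(&nDataSize, &header[16], 4);  // little-endian length field
                    data.reserve(nDataSize);                  // sized once, not in 64k steps
                    fHeaderDone = true;
                }
            }
            while (nUsed < nBytes && data.size() < nDataSize)
                data.push_back(pch[nUsed++]);
            return nUsed;
        }
    };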
|
| | |
|
| | |
|
| |\
| |
| | |

small changes in init, main, checkpoints.h and bitcoin-qt.pro
|
| | |
| |
| |
| |
| |
| |
| |
| |
| | |

- remove an unneeded MODAL flag, as MSG_ERROR sets MODAL
- re-order an if-clause in main to have bool checks before a function call
- fix some log messages that used wrong function names
- make a log message use a correct ellipsis
- remove some unneeded spaces, brackets and line-breaks
- fix style for adding files in the Qt project

Various performance tweaks to CCoinsView
|
| | | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |

* Pass txids to CCoinsView functions by reference instead of by value
* Add a method to swap CCoins, and use it in some places to avoid an
  allocating copy + destruct.
* Optimize CCoinsViewCache::FetchCoins to do only a single search
  through the backing map.
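
A toy version of the FetchCoins idea with stand-in types (std::map for the
cache, std::string for the txid): one lower_bound search both answers the
hit case and provides the insertion hint for the miss case, and swap() moves
the fetched coins into the cache entry without an extra copy.

    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct CoinsSketch {
        std::vector<int> vout;
        void swap(CoinsSketch& other) { vout.swap(other.vout); }  // no copy + destruct
    };

    struct CacheSketch {
        std::map<std::string, CoinsSketch> cacheCoins;

        // txid passed by const reference; a single search of the map.
        CoinsSketch* FetchCoins(const std::string& txid) {
            std::map<std::string, CoinsSketch>::iterator it = cacheCoins.lower_bound(txid);
            if (it != cacheCoins.end() && it->first == txid)
                return &it->second;                            // cache hit
            CoinsSketch fetched;                               // pretend this came from the backing view
            it = cacheCoins.insert(it, std::make_pair(txid, CoinsSketch()));
            it->second.swap(fetched);                          // hand the contents over cheaply
            return &it->second;
        }
    };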

Native versions for AllocateFileRange()
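
One plausible shape for such a helper, shown for Linux only and with the
portable fallback simplified (the real change also covers other platforms'
primitives): ask the kernel to preallocate the range, and only write
zero-filled chunks when no native call is available.

    #include <cstdio>
    #if defined(__linux__)
    #include <fcntl.h>
    #endif

    void AllocateFileRangeSketch(FILE* file, unsigned int offset, unsigned int length) {
    #if defined(__linux__)
        // Fast path: the filesystem reserves the space without writing data.
        if (posix_fallocate(fileno(file), offset, length) == 0)
            return;
    #endif
        // Portable fallback: append zeros in 64 KiB chunks.
        static const char buf[65536] = {};
        fseek(file, offset, SEEK_SET);
        while (length > 0) {
            unsigned int now = length < sizeof(buf) ? length : (unsigned int)sizeof(buf);
            fwrite(buf, 1, now, file);
            length -= now;
        }
    }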
|
| | | | |
|
| | | | |
|
| |\ \ \
| | | |
| | | | |

Bugfix CValidationResult for BIP30 + add DoS
|
| | | | | |
|
| |/ / /
| | |
| | |
| | |
| | | |

Calling ResendWalletTransactions when reindexing, importing or during initial
block download (IBD) spams other nodes with our old transactions, because they
become unconfirmed.
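
A minimal sketch of the resulting guard, with illustrative flag names rather
than the actual globals:

    // Hypothetical stand-ins for the real state flags.
    static bool fReindex = false;
    static bool fImporting = false;
    static bool IsInitialBlockDownloadSketch() { return false; }

    void ResendWalletTransactionsSketch() {
        // While rebuilding or still catching up with the chain, our own
        // transactions look unconfirmed, so rebroadcasting only spams peers.
        if (fReindex || fImporting || IsInitialBlockDownloadSketch())
            return;
        // ... otherwise relay wallet transactions that missed a block ...
    }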

Make transactions larger than 100K non-standard
|
| | |/ / /
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |

Extremely large transactions with lots of inputs can cost the network
almost as much to process as they cost the sender in fees.
We would never create transactions larger than 100K; this change
makes transactions larger than 100K non-standard, so they are not
relayed/mined by default. This is most important for miners that might
create blocks larger than 250K, who could be vulnerable to a
make-your-blocks-so-expensive-to-verify-they-get-orphaned attack.
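
The policy itself fits in a couple of lines; a sketch with the threshold
written out explicitly (the constant name is an assumption):

    // Non-standard if the serialized size reaches 100,000 bytes; such a
    // transaction is still valid in a block, it just isn't relayed or mined
    // by default.
    static const unsigned int MAX_STANDARD_TX_SIZE = 100000;

    bool IsStandardSizeSketch(unsigned int nSerializedSize) {
        return nSerializedSize < MAX_STANDARD_TX_SIZE;
    }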

and ensure orphan processing (when their parents are found) cannot be used to
counter-DDoS the node providing the parent
Also fix a minor typo

Improve error handling during validation
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | |/ / |
|
| |/ / |
|
| |\ \
| | |
| | | |

Treat non-final transactions as non-standard

At least one service that accepted zero-confirmation transactions
was vulnerable because an attacker could send a transaction
with a lock time far in the future, and then have plenty of time in
which to get a double-spend mined (perhaps from a miner who wasn't
on the network when the first transaction was broadcast).
That is a variation on the "Finney attack". We still don't
recommend anybody accept 0-confirmation transactions as final
payment for anything. This change keeps non-final transactions
from appearing in the wallet, and, assuming most of the network
accepts this change, will prevent them from being relayed until
they are final.
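
For reference, a simplified restatement of the finality rule the check relies
on, with stand-in types: lock times below 500,000,000 are compared against
block height, higher values against block time, and inputs can opt out by
maxing their sequence numbers.

    #include <cstdint>
    #include <vector>

    static const int64_t LOCKTIME_THRESHOLD = 500000000;

    struct TxInSketch { uint32_t nSequence; };
    struct TxSketch { int64_t nLockTime; std::vector<TxInSketch> vin; };

    bool IsFinalTxSketch(const TxSketch& tx, int64_t nBlockHeight, int64_t nBlockTime) {
        if (tx.nLockTime == 0)
            return true;
        if (tx.nLockTime < (tx.nLockTime < LOCKTIME_THRESHOLD ? nBlockHeight : nBlockTime))
            return true;
        for (size_t i = 0; i < tx.vin.size(); i++)
            if (tx.vin[i].nSequence != 0xffffffff)
                return false;          // an input still allows the tx to be replaced
        return true;
    }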

Remove IsFromMe() check in CTxMemPool::accept()
|
| | | | |
|
| | | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |

Fixes issue #2178: attacker could penny-flood with invalid-signature
transactions to deduce which addresses belonged to your node.
I'm committing this early for code review; I still need to write up
a test plan.
Executive summary of fix: check all transactions received from the network
for penny-flood rate-limiting before adding to the memory pool. But do NOT
rate-limit transactions added to the memory pool:
- because of blockchain reorgs
- stored in the wallet and added at startup
- sent from the GUI or one of the send* RPC commands (CWallet::CommitTransaction)
The limit-free-transactions code really should be a method on CNode, with
counters per-peer. But that is a bigger change for another day.
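
The rate limiter in question is a small exponentially decaying counter; a
sketch with illustrative numbers (the window and the per-minute budget are
assumptions, not the exact defaults):

    #include <cmath>
    #include <cstdint>

    static double dFreeCount = 0;          // bytes of recent fee-less transactions
    static int64_t nLastTime = 0;
    static double dFreeKBPerMinute = 15;   // e.g. a -limitfreerelay style setting

    bool AllowFreeTxSketch(unsigned int nTxBytes, int64_t nNow) {
        if (nLastTime == 0) nLastTime = nNow;
        // Drain the counter exponentially over a ~10 minute window.
        dFreeCount *= std::pow(1.0 - 1.0 / 600.0, (double)(nNow - nLastTime));
        nLastTime = nNow;
        if (dFreeCount > dFreeKBPerMinute * 10 * 1000)
            return false;                  // over budget: make the sender wait (or pay a fee)
        dFreeCount += nTxBytes;
        return true;
    }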

Add optional transaction index to databases

By specifying -txindex when initializing the database, a txid-to-diskpos
index is maintained in the blktree database. This index is used to
help answer getrawtransaction() RPC queries when enabled.
Changing the -txindex value requires a -reindex; the client will abort
at startup if the database and the specified -txindex do not match.
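
Conceptually the index is just a txid-to-location map; a loose sketch with
stand-in types (the real entries point into the block files):

    #include <map>
    #include <string>

    struct DiskTxPosSketch { int nFile; long nOffset; };    // which block file, where in it

    std::map<std::string, DiskTxPosSketch> txIndexSketch;   // persisted alongside the block tree

    // getrawtransaction-style lookup: one index probe, then the caller seeks
    // into the block file and deserializes the transaction.
    bool ReadTxPosSketch(const std::string& txid, DiskTxPosSketch& pos) {
        std::map<std::string, DiskTxPosSketch>::const_iterator it = txIndexSketch.find(txid);
        if (it == txIndexSketch.end())
            return false;                                    // not indexed (or -txindex is off)
        pos = it->second;
        return true;
    }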

Bugfix - Moved SyncWithWallets out of ProcessMessage and into CTxMemPool::accept()
|
| | |/ /
| | |
| | |
| | | |

This ensures that when adding multiple wallets they will be aware of each
other's transactions.

Add a notfound message to getdata.
|
| | |/ /
| | |
| | |
| | | |

Sent when transactions that aren't in the relayable set are requested.
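
A small sketch of the serving side with stand-in types: anything a getdata
asks for that we cannot supply is batched into one "notfound" reply.

    #include <string>
    #include <vector>

    struct InvSketch { int type; std::string hash; };

    bool HaveItemSketch(const InvSketch&) { return false; }                  // stand-in lookup
    void PushMessageSketch(const char*, const std::vector<InvSketch>&) {}    // stand-in send

    void ServeGetDataSketch(const std::vector<InvSketch>& vInv) {
        std::vector<InvSketch> vNotFound;
        for (size_t i = 0; i < vInv.size(); i++) {
            if (HaveItemSketch(vInv[i]))
                continue;                      // ...push the block/tx itself here...
            vNotFound.push_back(vInv[i]);      // remember what we couldn't serve
        }
        if (!vNotFound.empty())
            PushMessageSketch("notfound", vNotFound);   // one reply, so the peer stops waiting
    }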

Send transactions after a CMerkleBlock when asked for it in an inv.
|
| | | | | |
|
| | |/ /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |

This actually simplifies some SPV code, as they can keep track of
a filtered block and its txn before accepting both in one step.
The previous argument was that SPV nodes should handle the txn the
same as any other free txn and then mark them as connected to a
block when they get the filtered block itself. However, it now
appears that SPV nodes will need to put in more effort to verify
loose txn than they would to verify txn in blocks, thus making it
more appropriate to send the txn after the filtered block.
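
The ordering this settles on is easy to show with stand-in types: the
merkleblock goes out first, immediately followed by the matched transactions
it commits to.

    #include <string>
    #include <vector>

    struct MerkleBlockSketch { std::vector<std::string> vMatchedTxids; };

    void SendMerkleBlockSketch(const MerkleBlockSketch&) {}   // stand-in send helpers
    void SendTxSketch(const std::string&) {}

    void ServeFilteredBlockSketch(const MerkleBlockSketch& mb) {
        SendMerkleBlockSketch(mb);                            // header + partial merkle tree
        for (size_t i = 0; i < mb.vMatchedTxids.size(); i++)
            SendTxSketch(mb.vMatchedTxids[i]);                // then the txn it refers to
    }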
|
| |/ / |
|
| |\ \
| | |
| | | |

Parallel script verification
|
| | | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |

Since block validation happens in parallel, multiple threads may be
accessing the signature cache simultaneously. To prevent contention:
* Turn the signature cache lock into a shared mutex
* Make reading from the cache only acquire a shared lock
* Let block validations not store their results in the cache
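
A rough rendering of the same locking discipline using the standard library's
shared mutex (stand-in types, not the real cache class): lookups take a shared
lock so readers don't serialize each other, only insertions take the exclusive
lock, and callers doing block validation simply skip storing.

    #include <mutex>
    #include <set>
    #include <shared_mutex>
    #include <string>
    #include <utility>

    class SigCacheSketch {
        std::shared_mutex cs_sigcache;
        std::set<std::pair<std::string, std::string>> setValid;   // (sighash, pubkey+sig)

    public:
        bool Get(const std::string& hash, const std::string& sig) {
            std::shared_lock<std::shared_mutex> lock(cs_sigcache); // many readers at once
            return setValid.count(std::make_pair(hash, sig)) > 0;
        }
        void Set(const std::string& hash, const std::string& sig, bool fStore) {
            if (!fStore)
                return;                                            // block validation: don't cache
            std::unique_lock<std::shared_mutex> lock(cs_sigcache); // writers are exclusive
            setValid.insert(std::make_pair(hash, sig));
        }
    };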
|
| | | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |

* During block verification (when parallelism is requested), script
  check actions are stored instead of being executed immediately.
* After each processed transaction, its signature actions are
  pushed to a CScriptCheckQueue, which maintains a queue and some
  synchronization mechanism.
* Two or more threads (if enabled) start processing elements from
  this queue.
* When the block connection code is finished processing transactions,
  it joins the worker pool until the queue is empty.
As cs_main is held the entire time, and all verification must be
finished before the block continues processing, this does not reach
the best possible performance. It is a less drastic change than
some more advanced mechanisms (like doing verification out-of-band
entirely, and rolling back blocks when a failure is detected).
The -par=N flag controls the number of threads (1-16). 0 means auto,
and is the default.
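
A toy version of the queue to make the control flow concrete (this is not
the real CScriptCheckQueue, just the pattern): worker threads drain pending
checks while the block-connecting thread keeps adding them, and at the end
that thread drains the queue itself and waits for the stragglers before
reporting whether every check passed.

    #include <atomic>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>

    class CheckQueueSketch {
        std::mutex cs;
        std::deque<std::function<bool()>> pending;
        std::atomic<int> nInFlight{0};
        std::atomic<bool> fAllOk{true};

        bool PopAndRun() {
            std::function<bool()> check;
            {
                std::lock_guard<std::mutex> lock(cs);
                if (pending.empty()) return false;
                check = pending.front();
                pending.pop_front();
            }
            if (!check()) fAllOk = false;
            --nInFlight;
            return true;
        }

    public:
        void Add(std::function<bool()> check) {        // called per transaction
            std::lock_guard<std::mutex> lock(cs);
            pending.push_back(check);
            ++nInFlight;
        }
        void WorkUntilEmpty() { while (PopAndRun()) {} }   // loop body for -par worker threads
        bool Wait() {                                  // called when the block is done adding
            WorkUntilEmpty();                          // join the workers ourselves
            while (nInFlight > 0) std::this_thread::yield();
            return fAllOk;
        }
    };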