7 transactions per second. That is the current limit of the Bitcoin protocol and network. As a point of comparison, the Mastercard network processes roughly 1,200 transactions per second at peak, at about 120 milliseconds per transaction, and that pales in comparison to a high-frequency trading system, which might process tens of thousands of transactions per second with turnaround times in the single-digit microseconds. Clearly there is a technology gap between the best centralized processing systems and our decentralized wunderkind Bitcoin. To be fair, the playing field isn't level. A centralized system has a massive advantage in terms of inter-node connectivity, and is able to leverage advanced high-speed interconnects, directly accessible remote memory, and extremely fast host IPC (inter-process communication). Bitcoin must deal with low-bandwidth links, untrustworthy actors, intermittent connectivity, and a host of other difficult scalability challenges.
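The 7-per-second ceiling falls straight out of the protocol's constants. Here's a back-of-the-envelope sketch in Python; the ~250-byte average transaction size is my assumption for illustration, and real averages vary with transaction shape:

```python
# Rough derivation of Bitcoin's throughput ceiling.
# AVG_TX_BYTES is an assumed average; real transaction sizes vary.
MAX_BLOCK_BYTES = 1_000_000      # 1 MB block size limit
AVG_TX_BYTES = 250               # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600     # one block every ~10 minutes

txs_per_block = MAX_BLOCK_BYTES // AVG_TX_BYTES   # 4000 transactions
tps = txs_per_block / BLOCK_INTERVAL_SECONDS      # ~6.7 per second

print(txs_per_block, round(tps, 1))   # 4000 6.7
```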
Bitcoin's limitations are severalfold. First, the block size is tiny relative to the capabilities of modern networks: currently 1 megabyte. Only so many transactions can fit into a single 1-megabyte chunk of data, which puts a hard cap on how many we can process in a ten-minute window. As the popularity of the protocol grows and second-generation systems are built on top of it, we will begin to feel the pain. There are two relatively simple ways to address this. First, let's fit more transactions into existing blocks. I've done some analysis of the blockchain's 45-million-plus transactions at the time of this writing. The vast majority of them are standard P2PKH scripts, which have a fixed format, written here in Erlang bit syntax.
<<?OP_DUP:8, ?OP_HASH160:8, 16#14:8, PubkeyHash:160/bitstring, ?OP_EQUALVERIFY:8, ?OP_CHECKSIG:8>>
It seems to me that for standard transaction types we could just use a placeholder that knocks off 4 bytes per transaction. The public key itself can be either uncompressed or compressed, which means it's possible to insert 512-bit public keys into standard transactions. Let's get rid of that too and stick with compressed keys only. Finally, a single public key could be used multiple times across the blockchain, or even within a single block; perhaps we need a lookup-table mechanism, or OP codes that let us reference elements of other transactions.
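To make the placeholder idea concrete: a standard P2PKH scriptPubKey is 25 bytes, 5 of which are a fixed template (OP_DUP, OP_HASH160, the 0x14 push, OP_EQUALVERIFY, OP_CHECKSIG). Collapsing that template into a single tag byte leaves 1 + 20 = 21 bytes, the 4-byte saving above. A minimal Python sketch; the 0xFF placeholder tag is hypothetical, not a real opcode assignment:

```python
# P2PKH template compression sketch. Opcode byte values are the real
# Bitcoin script encodings; the placeholder tag is a made-up convention.
OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xA9, 0x88, 0xAC
P2PKH_PLACEHOLDER = 0xFF  # hypothetical tag, not an assigned opcode

def compress_script(script: bytes) -> bytes:
    """Collapse a standard 25-byte P2PKH script to tag + pubkey hash."""
    is_p2pkh = (len(script) == 25 and script[0] == OP_DUP
                and script[1] == OP_HASH160 and script[2] == 0x14
                and script[23] == OP_EQUALVERIFY and script[24] == OP_CHECKSIG)
    if is_p2pkh:
        return bytes([P2PKH_PLACEHOLDER]) + script[3:23]
    return script  # non-standard scripts pass through unchanged

def expand_script(blob: bytes) -> bytes:
    """Inverse: rebuild the full 25-byte script from the compact form."""
    if len(blob) == 21 and blob[0] == P2PKH_PLACEHOLDER:
        return bytes([OP_DUP, OP_HASH160, 0x14]) + blob[1:] + \
               bytes([OP_EQUALVERIFY, OP_CHECKSIG])
    return blob

script = bytes([OP_DUP, OP_HASH160, 0x14]) + b"\x01" * 20 + \
         bytes([OP_EQUALVERIFY, OP_CHECKSIG])
compact = compress_script(script)
print(len(script) - len(compact))          # 4 bytes saved per output
print(expand_script(compact) == script)    # True: lossless round trip
```

Because every node knows the template, the substitution is lossless and purely a wire/storage optimization; nothing about validation changes.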
Taking it one step further, maybe it is possible to adapt an existing protocol like FIX/FAST, but built specifically for blockchain use. A presence map per transaction might allow us to encode entire sets of transactions without explicitly including the scripts. That protocol had similar design constraints: minimal bandwidth is consumed to decrease transmission latency, at the expense of encoding and decoding complexity. The goal of this complexity is simply to cram more transactions into a single block without necessarily increasing the block size.
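The core FAST trick is the presence map: one bit per optional field indicating whether the value travels on the wire or is implied by a shared template. A toy Python sketch of that mechanism; the field names and defaults are invented for illustration, not a proposed wire format:

```python
# FAST-style presence-map sketch. One bit per optional field says whether
# the field is serialized explicitly or taken from the shared template.
# Field layout and defaults here are hypothetical.
FIELDS = ["version", "locktime", "script_template"]
DEFAULTS = {"version": 1, "locktime": 0, "script_template": "p2pkh"}

def encode(tx: dict):
    """Return (presence bitmap, only the fields that differ from defaults)."""
    pmap, explicit = 0, {}
    for i, name in enumerate(FIELDS):
        if tx.get(name, DEFAULTS[name]) != DEFAULTS[name]:
            pmap |= 1 << i          # bit set -> value is present on the wire
            explicit[name] = tx[name]
    return pmap, explicit

def decode(pmap: int, explicit: dict) -> dict:
    """Rebuild the full field set from the bitmap plus template defaults."""
    return {name: explicit[name] if pmap & (1 << i) else DEFAULTS[name]
            for i, name in enumerate(FIELDS)}

tx = {"version": 1, "locktime": 500_000, "script_template": "p2pkh"}
pmap, explicit = encode(tx)
print(bin(pmap), explicit)           # only locktime travels explicitly
print(decode(pmap, explicit) == tx)  # True: lossless reconstruction
```

For a blockchain full of near-identical P2PKH transactions, most presence bits would be zero, which is exactly where this style of encoding wins.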
The second problem is that miners are incentivized to publish smaller blocks. The block reward far outweighs the transaction fees, and there is a computational advantage to publishing a block that gets accepted by the majority of the network faster: the propagation delay of a larger block can result in an orphan. Orphaned blocks are worth approximately nothing, and anything that can be done to reduce the orphan rate will positively affect mining revenue. This problem only becomes more pernicious as the block size grows and the transaction count increases. I think we need to change the conditions of the race so that, within a time threshold, an alternate winner can steal the mining reward if its block includes higher-value transactions. Granted, this might cause block stuffing to ensure maximum rewards, but it would even the playing field in some respects: it would make sense to both reap more mining fees and have a better chance at the block reward.
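The incentive is easy to quantify with a toy model. If competing blocks arrive as a Poisson process (one per 600 seconds on average), the chance of losing the race while your block propagates for t seconds is roughly 1 - exp(-t/600). The propagation times and fee totals below are illustrative assumptions, not measurements:

```python
# Toy model of the small-block incentive. Assumes competing blocks arrive
# as a Poisson process at rate 1/600 s; numbers below are illustrative.
import math

def expected_reward(subsidy_btc, fees_btc, prop_seconds):
    """Expected revenue after discounting for orphan risk."""
    orphan_prob = 1 - math.exp(-prop_seconds / 600)
    return (subsidy_btc + fees_btc) * (1 - orphan_prob)

small = expected_reward(25.0, 0.1, prop_seconds=2)    # small, fee-light block
large = expected_reward(25.0, 0.5, prop_seconds=20)   # large, fee-rich block
print(round(small, 3), round(large, 3))
print(small > large)  # True: extra fees don't cover the extra orphan risk
```

With a 25 BTC subsidy dwarfing the fees, even a modest increase in propagation delay wipes out the gain from including more fee-paying transactions, which is exactly why the reward-stealing rule above tries to make fee-rich blocks competitive again.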
We can also increase the block size substantially so that more transactions can be shoved into each block, assuming we can disincentivize publishing smaller blocks. This might have the effect of increasing centralization, as slower, less well-connected nodes get pushed off the network or become unable to keep up with the transactional volume.