Batching for Plebs (aka congestion control without a soft fork)

I started this document to assess the possibility of coinjoin software that is coordinator-free and asynchronous. It turned into something else, but I’d like to state my motivations for creating it before I share what it turned into. An asynchronous coinjoin would be one where the participants don’t have to be online when the coinjoin happens. This is unlike existing coinjoin software such as joinmarket, wasabi, and whirlpool, all of which require you to be online to cosign the final form of any coinjoin transaction you want to be a part of. A coordinator-free coinjoin would not have a central server that orchestrates the inputs and outputs in the coinjoin transaction. Both are good goals: removing the coordinator would make coinjoins cheaper and more censorship resistant, and removing the synchronicity requirement would make them easier to participate in.

Use sighash_one

I initially wondered if I could accomplish these goals using sighash_one. The first person in the coinjoin would create a psbt with one input worth 10,000 sats (for example) and one output worth 9,900 sats (for example). They would sign that psbt with sighash_one anyone_can_pay and then post it online. Someone else could come along later, find the psbt online, and add to it. Because the first signature is anyone_can_pay, the second user can add their own input without invalidating the prior signature, and because the first signature is sighash_one, the second user can add another output without invalidating the prior signature. So they add their own 10k sat input and another 9.9k sat output, sign the transaction again with sighash_one anyone_can_pay, and put the psbt online again for someone else to add to.
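Here’s a toy sketch of that grow-as-you-go psbt, just to make the mechanics concrete. The class names and the fake signature strings are mine and the crypto is stubbed out; a real implementation would sign each input with the SIGHASH_SINGLE | SIGHASH_ANYONECANPAY flags.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IOPair:
    input_value: int     # sats this participant contributes
    output_value: int    # sats they want paid out (input minus their fee contribution)
    signature: str = ""  # covers only this input and the same-index output

@dataclass
class SharedPsbt:
    pairs: List[IOPair] = field(default_factory=list)

    def add_pair(self, input_value: int, output_value: int, signer: str) -> None:
        # SIGHASH_SINGLE commits to the output at the same index as the signed input,
        # and ANYONECANPAY commits to only that input, so signatures added earlier
        # stay valid as new pairs are appended.
        pair = IOPair(input_value, output_value)
        pair.signature = f"sig_by_{signer}_over_pair_{len(self.pairs)}"  # placeholder, not real crypto
        self.pairs.append(pair)

    def fee(self) -> int:
        # Whatever the inputs exceed the outputs by is left for the miner.
        return sum(p.input_value - p.output_value for p in self.pairs)

psbt = SharedPsbt()
psbt.add_pair(10_000, 9_900, "alice")  # first participant signs and posts this online
psbt.add_pair(10_000, 9_900, "bob")    # a later participant finds it and extends it
print(psbt.fee())                      # 200 sats accumulated toward the mining fee
```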

Voila, better coinjoins!

This process lets randos add input/output pairs whenever they want, and it continues until someone decides to broadcast the transaction. The broadcaster can be anyone who knows about the psbt, and the transaction can be broadcasted the moment it has enough fees to pay for its inclusion in an upcoming block. Any additional input/output pairs beyond the first few just result in a larger coinjoin, and all states of the coinjoin transaction are valid until one of them gets into a block, so up until the moment the coinjoin is in a block, anyone could broadcast the prior states too if they wanted. So in this model there is no coordinator and no one has to stay online after adding their input/output pair. They can go away and come back later to check whether the transaction was broadcasted, and if it wasn’t broadcasted yet, they can broadcast it themselves.
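A rough sketch of that “enough fees” check, assuming approximate P2WPKH sizes (the real numbers depend on the script types involved):

```python
# Assumed approximate sizes for a P2WPKH input/output pair and the fixed
# transaction overhead; real numbers vary with the script types involved.
TX_OVERHEAD_VBYTES = 11
INPUT_VBYTES = 68
OUTPUT_VBYTES = 31

def ready_to_broadcast(num_pairs: int, accumulated_fee_sats: int,
                       target_feerate_sat_vb: float) -> bool:
    # Broadcast once the fee left behind by all the pairs covers the whole
    # transaction at the feerate you're targeting.
    vsize = TX_OVERHEAD_VBYTES + num_pairs * (INPUT_VBYTES + OUTPUT_VBYTES)
    return accumulated_fee_sats >= vsize * target_feerate_sat_vb

print(ready_to_broadcast(2, 200, 1.0))    # False: ~209 vbytes wants ~209 sats at 1 sat/vB
print(ready_to_broadcast(5, 1_500, 2.0))  # True: ~506 vbytes wants ~1012 sats at 2 sat/vB
```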

Except it’s broken

After getting to this point I realized that this software would not in fact be effective coinjoin software. The purpose of coinjoins is to prevent analysts from identifying which input is paired with which output. But sighash_one explicitly pairs them – you sign one input and one output, so analysts just have to check which output your input signed, and then they know that one’s yours. That situation might be improved if some of the signers signed *each other’s* outputs. This would give everyone in the coinjoin plausible deniability that the output they signed is theirs, but doing that with minimal trust sounds difficult.

Salvaged as a batching tool

At that point I realized that this doesn’t have to be a coinjoin tool; it can just be a batching tool. Users who spend their coins in the way described above would not get any privacy benefits, but they would probably pay lower fees than if they sent their transaction without batching. Every bitcoin transaction has more than just inputs, outputs, and signatures: it also has version numbers and locktime info, which take up valuable byte space that must be paid for. This metadata is typically about 4% of the total size of a standard bitcoin transaction. A batched transaction spreads the cost of this metadata among all the participants, resulting in a cost savings of up to 4%. It would also do away with a coinjoin’s typical requirement of fixed amounts, which is a good thing to get rid of if the goal is to help people make ordinary bitcoin payments.
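To make the amortization concrete, here’s a back-of-the-envelope sketch with sizes and a feerate I’m assuming for illustration; the exact percentage saved depends on the script types and on whether you’d otherwise need a change output:

```python
FEERATE = 10.0           # sat/vB, assumed for illustration
OVERHEAD = 10.5          # version, locktime, counts, segwit marker/flag (vbytes, approx.)
INPUT, OUTPUT = 68, 31   # approx. P2WPKH input/output sizes in vbytes

def fee_per_participant(n: int) -> float:
    # Each participant pays for their own input/output pair plus an equal
    # share of the transaction-level overhead.
    return FEERATE * (INPUT + OUTPUT + OVERHEAD / n)

for n in (1, 2, 5, 20):
    print(n, round(fee_per_participant(n), 1))
# As the batch grows, the per-participant fee falls toward FEERATE * (INPUT + OUTPUT);
# the fixed overhead is the part that gets shared.
```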

Such a batching tool would be useful because, while batching software for bitcoin does exist and is regularly used by bitcoin exchanges, plebs don't currently get to use these tools in an automated and non-custodial manner. Coinjoin software is sort of a batching tool that plebs can use non-custodially, but the cost savings usually get eaten up in coinjoin fees, which are either paid to a coordinator (in whirlpool and wasabi) or to market makers (in joinmarket). A non-custodial batching tool that lets users keep the savings would thus be quite useful.

Broken again

When I got to this point it occurred to me that paying someone via a batched transaction created in the above manner would be difficult because sighash_one does not let you create a change output. Bitcoin transactions usually have one or more inputs and two or more outputs – at least one output goes to your intended recipient and another goes back to your wallet as change. But with sighash_one you only get to sign one output, and you get no guarantee that the recipient will send you back your change.

Fix it with lightning

Except there are ways to guarantee you get your change back. Suppose your output goes to an HTLC with two spend paths: one is a preimage path whereby your intended recipient gets the full amount of the utxo; the other is a timelocked path whereby you get back the full amount after two weeks. So now, before you send your batch transaction which funds that HTLC, you ask the recipient to send you a lightning payment for your change amount. The hash that unlocks this lightning payment and lets you settle it matches the one in the HTLC that the batch transaction is supposed to fund. You don’t know the preimage to this payment hash, so when your LND node tells you your change payment is in an “accepted” state, you can’t actually settle it without the preimage. That’s when you add your input to the batch psbt, and now your payment to the recipient is effectively in final settlement. You pay a reduced fee on the base layer, and you either get the full amount back (if the recipient doesn’t move his funds out of the HTLC) or you get your change back (if the recipient *does* move his funds out of the HTLC, thus revealing the preimage you need to settle your change payment). And your recipient can treat the payment as finished too, as long as he plans to settle it at some point before the two weeks are up.
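To make the construction concrete, here’s a rough sketch of the two-path HTLC script. The opcode layout follows the usual HTLC pattern, but the exact script, the use of a CSV delay for the two-week path, and the placeholder names are my assumptions rather than a finished design; the lightning leg would be a held change payment from the recipient (what LND calls a hold invoice, if I understand the flow right), locked to the same payment hash.

```python
from textwrap import dedent

def change_htlc_script(payment_hash_hex: str, recipient_pubkey: str,
                       sender_pubkey: str, csv_blocks: int = 2016) -> str:
    # Path 1: the recipient sweeps the whole output by revealing the preimage
    #         (which is also what lets the sender settle their held lightning change).
    # Path 2: after roughly two weeks (~2016 blocks), the sender reclaims everything.
    return dedent(f"""
        OP_IF
            OP_SHA256 {payment_hash_hex} OP_EQUALVERIFY
            {recipient_pubkey} OP_CHECKSIG
        OP_ELSE
            {csv_blocks} OP_CHECKSEQUENCEVERIFY OP_DROP
            {sender_pubkey} OP_CHECKSIG
        OP_ENDIF
    """).strip()

print(change_htlc_script("<payment_hash>", "<recipient_pubkey>", "<sender_pubkey>"))
```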

Congestion control

I think this allows us to effectively use batching and the lightning network for congestion control. If you send someone a payment via this batching method, the recipient has up to two weeks to actually settle your payment, or however long your lightning payment’s CLTV timeout is set for. If batchers send the final payment out with a low fee during a high-fee environment, recipients of the money don’t have to settle immediately; they can wait until fees come down. And if fees don’t come down within two weeks, any recipient (or a group of recipients) can use CPFP to bump the fee of the batch transaction to make it confirm faster. Voila, congestion control achieved by non-custodial, asynchronous batching transactions, and lightning, without a soft fork. To me that sounds pretty useful.
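The CPFP math a recipient (or group of recipients) would do looks roughly like this, with all sizes and rates assumed for illustration:

```python
def cpfp_child_fee(parent_fee: int, parent_vsize: int,
                   child_vsize: int, target_sat_vb: float) -> int:
    # The child has to pay for itself *and* drag the parent's feerate up,
    # so that the whole parent+child package hits the target feerate.
    package_vsize = parent_vsize + child_vsize
    return max(0, round(target_sat_vb * package_vsize - parent_fee))

# e.g. a ~600-vbyte batch that paid 600 sats (1 sat/vB), bumped to 20 sat/vB
# by a recipient spending their HTLC output with a ~150-vbyte child:
print(cpfp_child_fee(parent_fee=600, parent_vsize=600,
                     child_vsize=150, target_sat_vb=20.0))  # 14400 sats
```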
