The necessary prerequisite for any change to the Bitcoin protocol

Monday, 01 February, Year 8 d.Tr. | Author: Mircea Popescu

Summary :

All blocks must include a SHA3-512i digest calculated over a bitfield composed of the nonce-th byte of every preceding block, wrapped.ii

Rationale :

The issues being resolved have been discussed at length in #bitcoin-assets, whose logs you are invited to read - right now, and in integrum.iii

This notwithstanding, an unbinding summary is that the miner-node divisioniv is both an unintended consequence of the poor design and inept implementation of Bitcoin by its original author and the single known possible threat to its continued survival. This measure heals that rift, by making it impossible for miners to mine without nodesv ; and by giving nodes a directly valuable piece of information they can sell.vi

The Vatican's Armored Divisionsvii :

I won't bother with parading for your benefit, nor will I recount the sad story of "what happens when you don't do what MP says". If you've done any reading worth the mention you should know all that by now ; if you need any explanation as to why my pronouncements are binding, you necessarily have no clue about Bitcoin-anything. See here instead.

I will however say that detailsviii are negotiable, on one hand, and that I am open to considering other changes being bundled with this change, possibly including an increase of the blocksize. I will, however, attack and sink any other change whatsoever, without regard to who proposes it, who supports it, or what it contains. The only way to make any alteration whatsoever is to make an alteration that includes this one.

And now that we understand each other, back to your regularly scheduled program.

———
  1. The principal consideration is that unlike the rest of the SHA "family", the keccak function takes unlimited input. Bitcoin is forever after all.

    The political importance of not using NSA/NIST crapolade is a secondary concern, even if it makes a very valid, muchly needed statement, namely that the United States has no future in technology just like it has no future in political geography. []

  2. Exempli gratia : if the fourth block is added to a blockchain consisting of

    1. 60 bd e7 67 77 70 20 b2 e6 7c 46 c3 (12 bytes)
    2. 75 80 d2 b0 6e 6c 6d a9 5d 12 98 fe bf (13 bytes)
    3. df fc 22 5f 2a 4d 50 d6 f3 fc c3 (11 bytes)

    Then should that block use a nonce of 17, it must include a field equal to sha3-512(70 6e 50), whereas should that block use a nonce of 11, it must include a field equal to sha3-512(c3 fe df).
    []
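
    A minimal sketch of the above in Python, assuming "wrapped" means the nonce taken modulo each block's length, zero-indexed - the convention that reproduces the bytes selected in this example ; nothing past the worked figures is specified here :

    import hashlib

    # The three example blocks from this footnote, as raw bytes (12, 13 and 11 bytes long).
    blocks = [
        bytes.fromhex("60bde767777020b2e67c46c3"),
        bytes.fromhex("7580d2b06e6c6da95d1298febf"),
        bytes.fromhex("dffc225f2a4d50d6f3fcc3"),
    ]

    def digest_bitfield(blocks, nonce):
        # the nonce-th byte out of every preceding block, wrapped (nonce mod block length, zero-indexed)
        return bytes(block[nonce % len(block)] for block in blocks)

    def block_digest(blocks, nonce):
        # the SHA3-512 field the new block must carry
        return hashlib.sha3_512(digest_bitfield(blocks, nonce)).hexdigest()

    print(digest_bitfield(blocks, 17).hex())   # 706e50
    print(digest_bitfield(blocks, 11).hex())   # c3fedf
    print(block_digest(blocks, 17))            # sha3-512(70 6e 50)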

  3. The notion that you might be participating in Bitcoin in any capacity or to any degree without keeping up with the logs is not unlike the notion that you're participating in the political process through "reading newspapers" or whatever it is you do. []
  4. There's multiple aspects to the issue.

    One aspect is that while nodes - no, not "full" nodes, simply nodes ; everything else is not a node at all - do provide useful service, they have no way to extract payment in exchange. This takes us to the present, sad situation where the network barely consists of a few hundred nodes, and on the strength of that alone could be toppled by a fart. (Yes, supposedly significant reserve capacity exists. Let's just hope nobody actually gives this a run for its money.)

    Another aspect is that new blocks are mined by one group (miners) but have to be stored in perpetuity by another group (nodes). This creates a situation where X (users) pay Y (miners) to inconvenience Z (nodes), which is unsustainable not to mention sheer nonsense.

    It is true that so called "solutions" to this fundamental problem have been pluriously presented by Bitcoin's enemies in sheep's clothing. Nevertheless, they all reduce to attempts, more or less blatant, to leverage this weakness of the protocol into further damage - not a single one of them is to any degree an actual solution or even vaguely addresses the problem at all. []

  5. For reasons that I think obvious, mining will continue on ASICs, even if this change will require new ASICs be baked. Nevertheless, for technological reasons it will be impossible to include the generation of this bitfield in the ASICs in question - instead, they will have to depend on importing it from outside.

    Whether miners will run their own nodes or allow a decentralized market of "subscription information services" to spawn up will remain to be seen, but if you do believe in the economic superiority of decentralization then you're stuck believing this will happen necessarily.

    In any case, a word to the wise : if you are designing ASIC chips, and you are not including the possibility of feeding a bitfield like this in blocks, you are deliberately ensuring failure not just for yourself, but for your customers as well. This change WILL eventually come in, start planning accordingly, today. []

  6. Logically what you'd do as a node operator is create KNBs (known nonce blocks) every time a new block is found. Depending on how fast your machine goes, you should be able to output thousands of these per second. A miner that has to feed its rigs something will then buy these blocks from you and proceed to use them (and possibly announce them afterwards too, to protect other miners from being scammed with the same nonce block).

    This forces a minimum population of nodes to exist in order for mining to even be possible - what use is more hashing if there's nothing to hash ? []
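
    One way a node operator might produce such KNBs - a sketch only, under the same zero-indexed wrapping assumption as the example in footnote ii ; the function names are illustrative, not part of the proposal :

    import hashlib

    def digest_bitfield(blocks, nonce):
        # the nonce-th byte out of every preceding block, wrapped
        return bytes(block[nonce % len(block)] for block in blocks)

    def make_knbs(blocks, nonces):
        # precompute (nonce, digest) pairs a miner could buy and feed to its rigs
        return {n: hashlib.sha3_512(digest_bitfield(blocks, n)).digest() for n in nonces}

    # Regenerated against the new chain tip every time a block is found, e.g. :
    # knbs = make_knbs(all_blocks_so_far, range(1_000_000))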

  7. Is there any сталин in the house ? []
  8. Probably the most important of which, shifting the nonce before taking the nonce-th element. Taking the nonce as-is requires strict parity between each hash and the calculated digest, which would require 64 MB of information be available to the miner for each Mhash. This is perhaps not practical - although it does have the marked advantage of making ASICs altogether impractical for mining, and returning that process to more traditional computers. A rather generous eight bit shift would mean the miner needs 64 bytes every Mhash, meaning that each Phash chip must have 64 GB/s worth of data to work its magic. []
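
    To make the knob concrete, a minimal sketch, assuming the shift simply drops the low bits of the nonce before the byte selection, so one digest gets reused across 2**shift_bits consecutive nonce values ; the exact width is precisely the negotiable detail :

    SHIFT_BITS = 8   # illustrative value only ; 0 would mean a fresh digest for every single hash

    def digest_bitfield(blocks, nonce, shift_bits=SHIFT_BITS):
        # shift the nonce first, then take the shifted-nonce-th byte of every block, wrapped
        effective = nonce >> shift_bits
        return bytes(block[effective % len(block)] for block in blocks)
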
Category: Bitcoin

42 Responses

  1. wachtwoord
    Monday, 1 February 2016

    Very creative approach. Don't you think mining pools will run one full-node and make the current SHA3-512 digest available freely to support pooled mining (their bread and butter)? Running a full-node is not that expensive (for a mining pool).

  2. This is an ingenious way to ensure that miners are storing the block chain.

    Would this be a soft fork or a hard fork?

  3. Christianity answers both insofar as most people care.
    You're responding to a block size increases are either hardforks or extension blocks (ugh).
    Illegitimate transactions are meant to transfer bitcoins (or at least some discussion on the spammer end, as new spammers have popped up trying to violate the rules in one way or another Everyone cannot know everything.

  4. Mircea Popescu
    Monday, 1 February 2016

    @wachtwoord This depends very much on the exact degree of shifting for the nonce that ends up implemented. If the digests are too scarce, the miners will probably compete for them and so they won't be shared any more than say the passwords to the letter-level dns servers are shared. On the other hand if there's some slack and digests aren't THAT hard to come by, they might share them.

    Understand that every single digest has a cost, which can be actually calculated on the basis of say AWS prices. This cost is significantly above the cost of one hash in terms of capital goods, energy, actual money, what have you.

    There's a different game-theoretic application for whether they'll burn digests (ie, publish the digests they've already tried).

    @PeterL Necessarily a hard fork.

    @Duke-jr Sorry, what ?

  5. Wachtwoord
    Monday, 1 February 2016

    Thanks for the explanation. Does the difficulty/scarcity need to be static and chosen at the time of the hard fork or can it scale based on computational power like the difficulty for the hashing function itself?

  6. This is one of the most interesting Bitcoin proposals I've ever read.

    However, I'm not sure that the miners will actually need to buy KNBs from full nodes. Won't miners just create "ASICs" (of some kind) which can efficiently calculate KNBs?

    ASIC design is not my area of expertise (I would guess that Bitcoin ASICs are built to only accept data of a fixed size/format and double-SHA it), but won't there be some investment that miners (dedicated specialists) can make, which improves their productivity to a degree which is superior to "buying from nodes" (however cheap "buying from nodes" may be)?

  7. Mircea Popescu
    Monday, 1 February 2016

    @Wachtwoord In a very theoretical way it could scale, but due to the complexities involved and the bugs they promise to induce, I very much doubt a practical implementation could be anything other than fixed.

    Then again, were "we" the "community" to give up this sad, famous power rangerism and instead spend the time to seriously review code and so forth, it could perhaps be broached. Hard to say off the cuff.

    @Paul Sztorc It is notoriously difficult to predict this sort of thing correctly, however as far as the simulations I've seen go, that asic meaningfully reduces to "CPU + HDD controller". It is not feasible to reimplement these just to feel special - so they won't be ASICs.

    Obviously a miner can buy a server just as well as the next bloke, so in general miners could run fully integrated operations, having their own digesters and their own miners talk to each other. Nevertheless, the fact that they integrate these distinct operations doesn't make them not be distinct anymore, just like Chiquita integrating retail and sea transport doesn't make retail and sea transport the same activity.

    There's liable to be significant drift between the two, which will likely tear them apart, not only for the usual economic reasons as seen in all companies (what vertically integrated corps can you name ? Apple doesn't manufacture retina screens for instance) but for a very specific reason too : mining fluctuates whereas digesting is fixed. Should you find blocks in quick succession you'll need more digesting work done than should you find them further apart (in fact if you draw the demand curve for digestion, it's infinity just after a block is found and drops from there to the stable rate of mining).

    But given all this, the only honest answer remains nevertheless "who knows".

  8. Riccardo Casatta
    Tuesday, 2 February 2016

    Using sha3 because of input limitations of sha2 is pointless. One byte for every 10-minute block, times 2^64, is approximately 26k times the age of the universe.

    Creation of Proof of Storage should be fast enough to prevent orphans. I think taking the nonce-th byte from a fixed number of random blocks (deterministically derived from the nonce) should be faster while achieving the same point.

  9. Mircea Popescu
    Tuesday, 2 February 2016

    Using sha3 because of input limitations of sha2 is pointless.

    There's no because in the original. Please don't mistake amusement for causation.

    Creation of Proof of Storage should be fast enough to prevent orphans.

    This actually merits a lot more discussion. For instance : how would orphans result, when everyone's stuck doing the same digestion work ?

    I think taking the nonce-th byte from a fixed number of random blocks (deterministically derived from the nonce) should be faster while achieving the same point.

    There may be merit to this idea, especially if tweaked to read not "(deterministically derived from the nonce)" but instead "determined through doing modulo-blockheight on the block's hash, then shifting the blockhash and repeating the process". If you want a total of 64 blocks to be selected randomly, you shift by ones. If you just want 32, you shift by twos.

  10. Could you explain in more detail how the random blocks idea works?

    Also from the article it is not clear if this change would work so that the block hash includes just the previous block headers and the nonce, so that the digest is only needed after the nonce is found, or if the block hash includes the previous block headers, the nonce and the digest.

  11. Mircea Popescu
    Tuesday, 2 February 2016

    Could you explain in more detail how the random blocks idea works?

    Suppose you are building on a block with hash ab cd ef which is at height 10. ab cd ef happens to be a number, equal to 11259375 in decimal notation. Modulo 10 that comes to 5, so the 1st byte of the digest is to be taken from block #5. Shifting by 4, the hash becomes a bc de, which is also a number, equal to 703710 in decimal notation. Modulo 10 that comes to 0, so the 2nd byte of the digest is to be taken from block #0. Shifting by 4 again you're left with ab cd, which is 43981, so 3rd byte from block #1. And so following, 4th byte from block #8, 5th byte from block #1 and 6th byte from block #0 again.

    This way you didn't have to go through all 11 blocks in order, but merely looked at 6 of them. Obviously how much the hash is shifted can be altered to suit.

    Also from the article it is not clear if this change would work so that the block hash includes just the previous block headers and the nonce, so that the digest is only needed after the nonce is found, or if the block hash includes the previous block headers, the nonce and the digest.

    I'd have thought it's clear, but just in case it isn't : currently mining works as sha(sha(headers+nonce)) ; by this proposal mining would work as sha(sha(headers+nonce+digest)).
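
    A minimal sketch of the selection walk just described, assuming the hash is shifted by one hex digit (4 bits) per step ; which byte is then read out of each selected block (presumably the nonce-th, wrapped, as in footnote ii) is left open here :

    def select_blocks(prev_hash_hex, height, count):
        # repeatedly take the hash modulo the block height, then shift it by one hex digit
        h = int(prev_hash_hex, 16)
        picks = []
        for _ in range(count):
            picks.append(h % height)
            h >>= 4   # shift by 8 bits instead to halve the number of blocks selected
        return picks

    # The worked example : building on hash ab cd ef at height 10, picking six bytes.
    print(select_blocks("abcdef", 10, 6))   # [5, 0, 1, 8, 1, 0]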

  12. Nonce is cycled in the miner because it's the easiest thing to do. If it becomes the hardest thing to do, miners will simply keep one value of a nonce and generate a different transaction in every cycle instead.

  13. This is doable as a soft-fork, which subsequently "hardens" as economically relevant nodes learn of the new rule and enforce it. An initial deployment only requires miner upgrades (with the added benefit of yet another opportunity to uncloak those who use badly-implemented SPV mining hacks).

    Overall though the real "false dichotomy" isn't XT vs Blockstream, or Core vs Classic, or... it's hard/soft forking. The proposed change is less effective if miners implement it "softly" (a situation unlikely to happen, as the change is undeniably against miners' naive self-interest), and only has teeth once all nodes verify to harden the fork.

  14. Mircea Popescu
    Tuesday, 2 February 2016

    @jurov That was discussed a little in the log (but didn't really get a fair shake because of random derp getting in the way etc) :

    punkman: mircea_popescu: re:sha3-digest, here's one idea, asic miners can get around recalculating the digest on every hash by changing the merkle-root/timestamp instead of the nonce
    mircea_popescu: punkman quite, yes. this is deliberate. won't be too easy tho, asic needs EVEN MORE ram that way. (miners currently use this incidentally, to some degree)

    punkman: why even more ram?
    [...]
    punkman: mutating the merkle root just needs some sha256 hashing though
    mircea_popescu: sure. but is also finite.

    punkman: is not
    mircea_popescu: how do you mean ?

    punkman: you just put the nonce in coinbase, isn't it equivalent to mutating the actual nonce?
    mircea_popescu: punkman yes, up until people start rejecting nonstandard blocks.

    punkman: you can encode that nonce in standard coinbase outputs
    mircea_popescu: yes, but costs money and takes time.

    punkman: orders of magnitude cheaper than recalculating nonce+digest
    mircea_popescu: not really. for one thing, now it's finite.

    Sure, to some degree miners can fiddle the block they mine to escape from under this. That is their reserve power, to protect them from the blight that'd be 0 shift digests. Nevertheless - it is not free nor easy to do, especially given the very limited medium of the asics they can make.

  15. Mircea Popescu
    Tuesday, 2 February 2016

    @Adlai

    The proposed change is less effective if miners implement it "softly"

    Carried by political passions of the moment, people tend to interpret this as well as anything else in terms of "it's to punish X" etc. Leaving aside that political expediency makes for horrible design principles - there's a patent reason I announced this years in advance, and that reason isn't to surprise miners.

    The effect of stranding miners is not contemplated. The cause for this very banal and otherwise absolutely necessary move is to heal a well documented, generally accepted, actually present gap in the protocol, not to hit anyone over the head by name.

    So no - it will be perfectly effective if the [more intelligent] miners implement it slowly and with minimal pain.

    More generally - I don't believe in the "soft fork" pretense, just like I don't believe in helping people whether they want to be helped or not and all that USG-like claptrap. The correct approach to mankind's problem is to make the truth plain and measurement tools cheap and effective.

  16. Overall, smells to me like exactly the sort of artificial limitation that can be avoided with some clever hack. Like, sacrificing 1 satoshi to an unspendable address allows quickly churning merkle roots.

  17. Mircea Popescu
    Tuesday, 2 February 2016

    Can you off the cuff guess the order of magnitude difference between the value of one satoshi and the value of one hash today ?

  18. Finally a question I know the answer to!

    ;;genrate 1000000000
    gribble The expected generation output, at 1000000000.0 Mhps, given difficulty of 1.20033340651e+11, is 4.18972356932 BTC per day and 0.174571815388 BTC per hour.

    If 1000000000 Mhps = 0.174571815388 BTC per hour then 1 satoshi = 1000000000000000*3600/17457181 = 206218862025 hashes.

  19. Mircea Popescu
    Tuesday, 2 February 2016

    What's a coupla hundred billion when you've got passion!

  20. ButterFarts
    Tuesday, 2 February 2016

    Paying one satoshi each hash is not what he's saying. Miner only pays the satoshi if the block is mined. Miner doesn't even have to pay anything, could just put 10BTC in one address and send one satoshi more each try to himself. Each 10BTC is good for 1 gigahash this way, no cost to the miner at all.

  21. Mircea Popescu
    Tuesday, 2 February 2016

    Well, not other than the cost of keeping 10 BTC around for each Gh - or I suppose more properly for each Gh he expects to mine in between finding two blocks. Seems prohibitively expensive.

    The other approach (introduce satoshi payments to random address) may get caught in multiple places. For one thing, there's no reason for blocks that pay out to invalid addresses to validate. The cost of validating each address on ASIC hardware seems to exceed the cost of simply importing a bitfield. The dust spam trap, for another, may raise the cost significantly enough. What would you prefer, paying 10000 satoshi for a block's worth of digests, or paying 10001 satoshi to a dead address sort of dilemma. Note that the requirement here isn't for nodes to make A LOT for their services (not any more than the point here is to "destroy mining", for that matter). Just as long as they make something as opposed to nothing.

  22. If the cost to run one node is 1 Bitcoin/month and there are 4320 blocks mined in a month then 25k satoshi/block would be enough to pay for the node.

  23. Satoshi's Sisteroshi
    Tuesday, 2 February 2016

    All this talk about jiggling the Merkle tree is childish. MP said the pow would be sha(sha(header+nonce+digest)). He could have just as well said the pow would be that both sha(header+nonce) and sha(digest) come under difficulty, in which case there's no Merkle tree jiggling to be had whatsoever. The method as proposed in principle is workable alright, wisely leaving implementation details tba.

  24. Mircea Popescu
    Tuesday, 2 February 2016

    @an-on Note that those 4320 blocks mined in a month are all the blocks mined by all miners, they wouldn't necessarily be mined by the customer of your particular node.

    @Satoshi's Sisteroshi Something like that.

    Nice name btw.

  25. The link in comment 14 is broken.

    It should be http://btcbase.org/log/?date=01-02-2016#1392779

  26. Mircea Popescu
    Thursday, 4 February 2016

    A right thanks.

  27. If you are so sure the 2MB hardfork can't happen, bet on it: https://www.betmoose.com/bet/will-the-bitcoin-classic-2mb-hard-fork-happen-in-2016-1593

  28. Mircea Popescu
    Monday, 15 February 2016

    I can't even be sufficiently derogatory.

    Get lost, and take that transparent scam / low effort idiocy with you.

  29. Perhaps alleviating Merkle-root twiddling and breaking pooled mining:

    SHA256(SHA256(header+nonce+digest+addr+sig))

    Where:
    - coinbase transaction always pays out in its entirety to the contents of the new `addr` field
    - `sig` is a signature of the digest by the private key corresponding to the new `addr` field

    Sources:
    - http://bitcoinstats.com/irc/bitcoin-dev/logs/2011/11/21#l1321905670.0
    - https://bitcointalk.org/index.php?topic=652443.20
    - http://hackingdistributed.com/2014/06/13/time-for-a-hard-bitcoin-fork/

  30. Mircea Popescu
    Sunday, 6 March 2016

    To begin with : why are you chaining sha256 (as opposed to, say, the 512 we're discussing in the article ? yes this difference matters - a lot!). Why are you chaining twice (as opposed to, say, three times) ?

    This proposal is a miserable USG-enabler. There need not be any form of acceptance in Bitcoin ; the notion of including signed acceptance of transactions by the receiver in Bitcoin transactions is pure USG.

  31. 512! 512! 256 was simply muscle memory.

    > the notion of including signed acceptance of transactions by the receiver in Bitcoin transactions is pure USG.

    I don't quite understand how the above is 'signed acceptance of transactions by the receiver': the bitfield to be signed is the nonce-th digest, not anything transaction related.

  32. Mircea Popescu
    Sunday, 6 March 2016

    >`sig` is a signature of the digest by the private key corresponding to the new `addr` field

    How am I to interpret this ?

  33. So I read the post, then I read *all* the comments and now I feel uber dumb. And I'm a coder. I was expecting to understand something from this discussion but my brain hurts.

  34. Mircea Popescu
    Wednesday, 9 March 2016

    Welcome to Trilema.

  35. Mircea Popescu
    Wednesday, 1 March 2017

    Exciting update: further discussion in the forum culminated with a very interesting proposal to use Luby codes (on the basis of the fixed-width transaction model that recently emerged).

  1. [...] this year, a proposal was made to reduce miner centralization through a hard fork. The specifics of the design are not fully flushed out but the essence of this [...]

  2. [...] the hard fork missile crisis has yet to be fully resolved with a treaty, the block size ceiling remains at 1 megabyte, which has begun to price lower value transaction out [...]

  3. [...] displaced by buzzwords. ↩Really. There has not been nor will there likely ever be a bitcoin node at PorcFest. ↩Again really. Most of the actual commerce taking place is at the food vendors [...]

  4. [...] is another twistable knob. can be 1, can be whatever. see footnote 2, example Exempli gratia : if the fourth block is added to a blockchain consisting [...]

  5. [...] [^]The necessary prerequisite for any change to the Bitcoin protocol [^] mircea_popescu: http://log.bitcoin-assets.com/?date=01-02-2016#1393026 << at least it wasn;t fucking developed by teh nsa. assbot: Logged on 01-02-2016 19:29:18; ascii_butugychag: ;;later tell mircea_popescu in what sense is adoptinc keccak a rejection of usg standards? it was actually adopted as sha3... mircea_popescu: as far as we know. whatevs. minor point. ascii_butugychag: btw between that thread and now i went and read the keccak spec ascii_butugychag: it is mighty spiffy. ascii_butugychag: accordionizes to size. mircea_popescu: :) mircea_popescu: i don't need to explain what i meant by not finite then ? ascii_butugychag: aha. ascii_butugychag: other hashes also accept infinite bits but they eat where they shit. mircea_popescu: quite. mircea_popescu: and mind that while in no means do i propose this is "Asic resistant", from a designer perspective you must appreciate i'm giving you a fun job to do. mircea_popescu: at least therer's that. mircea_popescu: always make sure everyone's having fun. ascii_butugychag: quite! nobody will be plagiarizing old verilog from fpga docs to bake this one. ascii_butugychag: very asian-resistant. ascii_butugychag: which is a mega-plus. [...]

  6. [...] Hash (Keccak): 0xaf9e302a664122389d17ee0fa4394d0c24c33236143c1f26faed97ebbd017d0e Signature: [...]

  7. [...] that the Bitcoin protocol externalizes much of the cost of transacting onto all node operators, and unless a satisfactory solution to that tough problem is deployed, transaction throughput must be kept a [...]
