Gossipd design document

Friday, 09 September, Year 8 d.Tr. | Author: Mircea Popescu

This is an up-to-date draft specification for gossipd. It is up-to-date in the sense that it completely incorporates[1] discussion in the forum on the topic. It is a specification in the sense that on its basis one could distinguish an item which is gossipd from an item which isn't gossipd. It is a draft both in the sense that the design is still open to discussion in many (though not all) points and in the sense that this particular statement is made by me, and as of yet hasn't been approved or corrected by others.

The item herein discussed started life almost two years ago as "a better ircd" (and of course independently throughout the minds composing our esteemed Republic). Ample discussion hence has muchly refined the concept and clarified its details, to the point that the original statement is no longer relevant to the discussion.

Throughout this text, as well as throughout discussion to date, "gossipd" ambiguously denotes the protocol as well as the implementation thereof. This is not because we don't comprehend the difference, through some sort of contagion from the original Bitcoin brainrot ; nor because "gossip" would have to get its final p uppercased or something to distinguish it from the common noun ; but because the discussion is yet young and the distinction is as of yet without much difference.

I. Gossipd will have access to a read-only[2] database[3] of identities[4] known to it.
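
For concreteness only, a minimal Python sketch of reading such a database, assuming the plaintext file and "e, N, comment" line format suggested in the footnotes ; every name in it is illustrative, not part of the spec.

from collections import namedtuple

# One identity per line of the operator-maintained plaintext file,
# in the "e, N, comment" format of footnote 4. Gossipd only ever reads it.
Identity = namedtuple("Identity", ["e", "N", "comment"])

def load_identities(path):
    identities = []
    with open(path, "r") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            e, N, comment = (field.strip() for field in line.split(",", 2))
            identities.append(Identity(e, N, comment))
    return identities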

II. Gossipd will perpetually run an RSA-key generation process ; and store the produced keys.[5] The keys will be arbitrarily marked as usable and bogus by unspecified criteria. Usable keys will be used whenever a new key is required by gossipd operation - such as when introduced to a new peer. Bogus keys will not be used.[6]
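
As a sketch only, and using the Python cryptography library purely as a stand-in for whatever RSA generator an implementation actually employs, the key store described here might look as follows ; the pacing and the names are assumptions, not spec.

import threading
import time
from cryptography.hazmat.primitives.asymmetric import rsa

class KeyStore:
    """Produced keys, each marked usable or bogus (footnote 6 : a one-way door)."""
    def __init__(self):
        self._keys = []
        self._lock = threading.Lock()

    def add(self, key):
        with self._lock:
            self._keys.append({"key": key, "status": "usable"})

    def mark_bogus(self, index):
        # keys may be moved into the bogus group, never out of it
        with self._lock:
            self._keys[index]["status"] = "bogus"

    def next_usable(self):
        # a usable key, e.g. for when introduced to a new peer
        with self._lock:
            for entry in self._keys:
                if entry["status"] == "usable":
                    return entry["key"]
            return None

def keygen_loop(store, interval=360):
    # roughly one key every six minutes, per the guidance (not mandate) of footnote 5
    while True:
        store.add(rsa.generate_private_key(public_exponent=65537, key_size=2048))
        time.sleep(interval)

# threading.Thread(target=keygen_loop, args=(KeyStore(),), daemon=True).start()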

III. Gossipd will receive inbound connections[7] from identified clients[8] and on the basis of that identification produce an encrypted challenge string, which constitutes its response. If the other party responds with the proper challenge string, the connection is established ; otherwise it is dropped.[9]

For a functional example consider node A, whose "encryption" mechanism consists of sha256(string+"hurr"), and node B, whose encryption mechanism consists of sha256("durr"+string).

A trying to establish a connection with B will send B

d99b584025792454dff2a0657a33ec71a2ae1334f65524aa10480112046a2be6

To which B will reply with

e8c3fb96cf946d7eef95ec7d04a3c15f72fa77bf39e2e764b55cc8dfd9a08907

To which A will reply with

325152dfbd94bf7d9fcb7d8200a48b83dd0dd5c2e22bd035be9098a4136fdef3

Thereby establishing the connection.[10]
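
The same toy exchange, as a short Python sketch following footnote 10. The hash functions merely stand in for real encryption, and since the exact plaintexts behind the digests quoted above are not given, these digests need not match them.

import hashlib

def encrypt_to_A(plaintext):
    # A's toy "encryption" ; only A, in the analogy, can revert it
    return hashlib.sha256((plaintext + "hurr").encode()).hexdigest()

def encrypt_to_B(plaintext):
    # B's toy "encryption" ; only B, in the analogy, can revert it
    return hashlib.sha256(("durr" + plaintext).encode()).hexdigest()

# A -> B : identification, "encrypted" to B
hello = encrypt_to_B("This is A.")

# B -> A : challenge, "encrypted" to A, so that only A can recover the plaintext
challenge_plaintext = "Is this A?"
challenge = encrypt_to_A(challenge_plaintext)

# A -> B : the recovered challenge, re-"encrypted" to B
reply = encrypt_to_B(challenge_plaintext)

# B compares the reply with what it expects ; a match establishes the connection
connection_established = (reply == encrypt_to_B(challenge_plaintext))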

Unsolicited challenge strings will also be sent, at intervals and to destinations specified by the operator. In general it is expected a mix of friends' keys, own bogus keys and own usable keys will be used for this purpose in varying proportions - the exact scheme used should be exposed as configurable to the operator of the gossipd node.
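
A sketch of the sort of knob contemplated ; the configuration shape and the proportions are illustrative assumptions only.

import random

unsolicited = {
    "interval_seconds": 600,
    "mix": {"friend_key": 0.5, "own_usable_key": 0.3, "own_bogus_key": 0.2},
}

def pick_key_class(cfg=unsolicited):
    # which class of key to use for the next unsolicited challenge string
    classes = list(cfg["mix"].keys())
    weights = list(cfg["mix"].values())
    return random.choices(classes, weights=weights, k=1)[0]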

IV. Gossipd will maintain a list of messages it has received, in the form of

time, X, Y, text

with the meaning that it has received the text at local time from Y who claims the original source is X.
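
A minimal sketch of such a record ; storage details (memory, disk, anything else) are left entirely to implementations, and the names are illustrative.

import time
from collections import namedtuple

# local receipt time, claimed original source X, relaying peer Y, the text itself
Message = namedtuple("Message", ["time", "X", "Y", "text"])

message_log = []

def record(X, Y, text):
    # append a message relayed by Y, which Y claims originated with X
    message_log.append(Message(time.time(), X, Y, text))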

It is intentionally not specified how gossipd should behave in order to

  1. resolve timestamp conflicts (eg, if Y reports text1 from X after text2 from X whereas Z reports text1 from X before text2 from X) ;

  2. resolve content conflicts (eg, if user is connected to both X and Y, and Y reports a message to originate from X that X itself does not report ; or conversely ; etc)

because this resolution falls upon the operator. A good gossipd implementation would supply well chosen knobs to simplify these and other tasks, but the specification thereof would be premature.

V. Gossipd will forward the complete list of messages[11] it knows to any new client connected to it. This behaviour is in the hands of the operator - he may choose to forward only part of the history, or an altered or altogether fictitious history. There should probably be a provision for clients that have connected before, so as to only exchange data from the point of the last connection forward (this could be accomplished perhaps through a mutual check of the "last line heard", which would of course require clients to keep permanent track of what they sent to whom - not altogether a bad idea).
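
As a sketch of the "last line heard" idea, and nothing more than a sketch : on reconnection, forward only what came after the last message the peer acknowledges, falling back to whatever history the operator chooses to expose.

def history_for_peer(message_log, last_line_heard=None):
    if last_line_heard is not None:
        for i, msg in enumerate(message_log):
            if msg == last_line_heard:
                return message_log[i + 1:]
    # no common point, or a first-time connection : the operator's policy
    # decides how much of the history, if any, to forward
    return message_log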

VI. GUI and UX considerations are not in the scope of this design document.

VII. Rationale and advantages have been discussed at length in other places, and in any case are not in the scope of this design document.

Please add your comments below, either directly in the box or else via trackback from your own blog. There's a significant advantage to concentrating discussion on this topic here, so we don't have to fish through two hundred thousand lines of log next time this item has to be revisited.

———
  1. Completely incorporates here means that all points brought have been considered, and on their merits included or not, in the original form or as they best serve altered.
  2. The database is to be populated, and maintained, by the operator - not by the program.
  3. Implementing this in the form of a plaintext file stands out as the height of sense. See also the Bitcoin wallet discussion.
  4. At the very least in the now familiar e, N, comment format.

    As Republican software matures, and especially once G is delivered, this part of the specification will have to be expanded to accommodate scryptography, which is to say cryptography that exists as scripts operating on a standardized bignum machine.

    Let it be mentioned, to dispel possible confusion, that no user <-> identity bijection is contemplated here. It is this author's expectation that the average user will have shared on the order of hundreds of different keys with each of his friends, with an unspecified portion thereof shared among those friends. This means that a Dunbar-average fellow will have generated on the order of 10`000 RSA keypairs to use gossipd normally (which means gossipd can not function without a proper RNG source - something I believe was implicit but now becomes explicit).

  5. A speed roughly similar to that of the Bitcoin network (about one key created every 6 minutes, for a total of ~240 per day) is contemplated here. Obviously this will vary by implementation and locale ; the magic number provided is not included in the spec but intended as mere guidance.
  6. Implementations may allow the operator to move keys that were used in the past into the bogus group ; but not out of the bogus group.
  7. The method is not yet established. Obviously, TCP is widely available ; but also obviously it has serious problems. This part of the spec is by far the vaguest, for which reason all implementations are in principle acceptable - without prototyping possible solutions it seems improbable we'll come to any sort of bridge here (and moreover, the "scandalosity" resulted in no small part from trying to give this issue time).
  8. Gossipd is a p2p protocol, so the client is exactly itself.
  9. It is by this spec not illegal for a client to send challenge strings to another client which has not solicited them. What the receiving client should do in this case is not specified but left at the discretion of the implementer.
  10. If you're not into fighting trapdoor functions, A says sha256("durr" + "This is A.") to which B replies with sha256("Is this A?" + "hurr") to which A replies in turn with sha256("durr" + "Is this A?"). The fact that A could decrypt a message encrypted to A's key proves to B that A is indeed A. If B has no key associated with A's claimed identity, the session can not continue.
  11. To point out the obvious : messages may consist of new keys for itself ; or for a third party (either in plaintext or encrypted for the destination ; either signed or unsigned).

163 Responses

  1. This is a very light touch of the subject.

    Fact remains, the "nothing to all-comers" principle is a necessary foundation of any gossipd that wishes to survive outside of the LAN inside the walls of Castle Popescustein - i.e. in the wild, where genuine "hellos" may well be outnumbered by sybil "hellos" by billion:one or whatever arbitrarily pessimistic proportion.

    And this pulls in the single-packet-auth discussion. I contend that the problem is not in practice layer-separable. And thereby attempts to solve it in any way but to begin from the bottom layer, are doomed to TCPistic catastrophe.

    In the same vein, RSA is susceptible to chosen-ciphertext manipulations and is therefore not a complete answer to the crypto question.

  2. Mircea Popescu, Friday, 9 September 2016:

    I would say that the spec as it is satisfies the "nothing to all comers" as well as the "single packet authentication" in the sense that A must identify itself ; and must identify itself as someone B knows.

  3. The scheme described here relies on a permanent (I do not see any obvious mechanism for rolling it) *symmetric* key.

    Aside from "symmetric crypto has not yet been discovered" considerations, this system is vulnerable to replay. And not only replay, but how exactly does one establish a session key using this cipher? I.e. what does it mean to "establish the connection" ?

  4. Mircea Popescu, Friday, 9 September 2016:

    Why would the scheme involve symmetric keys at all ? B sends stuff to A encrypted to A's RSA key. A replies, encrypting to B's RSA key. No symmetric cyphers involved.

    The "connection was established" statement in the original spec is perhaps misguiding, in that through (misguided) practice a connection has come to be regarded as a state. Gossipd is no state machine ; that a gossipd connection was established simply means that B will now dump on A a quantity of information ; no more.

  5. In the example case, the "hurr","durr" pair are a symmetric key.

  6. Mircea Popescu, Friday, 9 September 2016:

    I suppose that example is miserable then.

    All it intends to show is that if there is a function-A known to B and a function-B known to A, so that A knows how to revert function-A and B knows how to revert function-B, then the scheme works.

    The example should make this evident in that "somehow" B knows to extract "A" from the hash, and reuse it into its challenge process. Aaaanyway.

  7. I recommend to visualize the example nodes A and B as ~radio~ stations. Which is the correct model for the extant net, rather than the usual mythical untampered two-way link with valid return addresses imagined by naive folks. What does it mean to "reply" to a message ? How do you link the hurr-durr session to the subsequent payloads?

  8. Mircea Popescu, Friday, 9 September 2016:

    In order for communication to happen, there must be some way to reach the other party.

    For the scheme described, it is irrelevant if the communication is broadcast in a unicast or multicast fashion. As long as B has a way to speak ; and A has a way to hear what B says, then once the "connection" has been established B can tell A what it knows, which is the whole point.

    For an example, and not part of spec :

    I. A to B's key, via B-defined channel : "This is A, and extremepandasex."
    II. B to A's key, via whatever channel : "Truly A ?"
    III. A to B : "Truly A ?"
    IV. http://obama-sucks-dicks.osd/extremepandasex.txt now contains B's payload encrypted to A's key.

  9. Let's work an example.

    1) A broadcasts: rsa_crypt_to(B_pubkey, hash("shared_hurr_with_B"+"I'm A"))

    1.1) B verifies that the hash lines up with what is expected from the shared hurr stored next to the stored pubkey for A.

    2) B broadcasts: rsa_crypt_to(A_pubkey, hash("shared_hurr_with_A"+"I'm B"))

    2.1) A verifies that the hash lines up with what is expected from the shared hurr stored next to the stored pubkey for B.

    ... now we say a "connection was established".

    This presumably means a series of
    N) A broadcasts: rsa_crypt_to(B_pubkey, payload_N)
    N+1) B broadcasts: rsa_crypt_to(A_pubkey, payload_N+1)
    N+2) A broadcasts: rsa_crypt_to(B_pubkey, payload_N+2)
    ...

    Questions:

    -----
    The big one:

    0) What links steps N, N+1, N+2... to steps 1 and 2 ????

    Which is to say, at, e.g., step N, how does B know that it is receiving the payload from the same station that solved the challenge ?
    -----

    Others:

    1) If we aren't using radio, but a wired packet-switched network, how do we direct the reply messages ? What is to keep an enemy from replaying old "hellos" to misdirect replies and spawn sessions with phantoms (e.g., old ip) ?

    2) What, if anything, terminates a session?

    3) What keeps the enemy from replaying arbitrary subset of the messages?

    4) What keeps the enemy from reordering the messages ?

  10. Mircea Popescu, Friday, 9 September 2016:

    Nothing in there is correct.

    1) There's no "shared hurr with B". Hurr is B, in the sense that 6160E1CAC8A3C52966FD76998A736F0E2FB7B452 is mircea_popescu.

    1.1) B verifies that the "A" that comes out of decryption exists in his db from I in the spec.

    2) B does not receive as a result of A having identified. A receives. If B wants to receive, B becomes A, and A (or C) become B.

    O.1) If the enemy should somehow obtain A's pubkey (a situation that is a superset of the enemy replaying old helos from A), and proceed to flood B with messages from A, then B will :

    O.1.1) Invalidate A's key and
    O.1.2) Announce A.

    At this point the enemy has delivered valuable information to gossipd ; and gossipd suffered no hardship for it.

    O.2) There are no sessions.

    O.3-4) There are no message fragments.

    There ~could~ be, of course, predicated upon answering O.2-4.

  11. 1) If "hurr" is not a shared secret, but is publicly known, enemy E can emit a rsa_crypt_to(B_pubkey, hash("hurr"+"I'm A")) as easily as A. What am I missing?

    2) An RSA public key can be trivially derived from a small number (and no one exactly knows how few) encrypted messages.

    3) If there are no sessions, what is the purpose of the hash("hurr"...) part of the dance?

  12. Mircea Popescu, Friday, 9 September 2016:

    1) That he has to know an A.

    2) Sure.

    3) Apparently very confusingly, I gave an example of an encryption function. Apparently choosing a trapdoor function for this purpose was a very stupid idea, because it auto-loads all sorts of things that weren't intended.

    Really, the whole "sha256sum("hurr"+message)" is intended to stand in place of, as an equivalent of, rsa_encrypt, cs_encrypt or whatever else. It just has the advantage that it's much easier to calculate than those things.

  13. 1) What does it mean to "know an A" ? What prevents enemy from similarly "knowing" ?

    3) Your scheme as I presently understand it posits the existence of "secret public keys". Which are not a strictly impossible thing, but rely on some other - yet to be described - mechanism to be established.

  14. Mircea Popescu, Friday, 9 September 2016:

    1) The universal statal deployment of "identity papers" should be a good indication as to what prevents the enemy from knowing something as banal as people's names.

    Be that as it may : in a population of 1mn gossipd users which on average each know one hundred others, the chance of a panoptical enemy hitting on an A that B knows is one in ten thousand. In practice there is no such thing as a panoptical enemy, and even should it manage to know 999`990 of the 1`000`000 users, it's to be expected that 80% of the time, the only ones it really wants to impersonate are the remaining ten.

    2) There's no requirement for public keys to be secret for this scheme to work, much like there's no requirement to have actually, physically impregnable locks for banks to exist ; or to have actually unbreachable armor for wars to be won.

    The point is not to flatten the enemy with a wall ; but to welcome him to the jungle.

  15. 1) This walks dangerously close to "let's use rot13, enemy is retarded."

    Only hard protocol is interesting in the long term -- and not a promise of "they'll never learn the value of X, even though it can be inferred from the sum of traffic in polynomial time, because their mothers dropped them as children".

    2) If I, enemy E, know public key of B, I can impersonate A and - at the very least - cost B unbounded cpu cycles. Which is unacceptable.

    3) The point, as I understand it, ~is~ to flatten the enemy against a wall of genuinely hard crypto. And not to merely inconvenience him with rot13.

  16. Likewise if the thing does not work as well with 3 users as with 3 million, it does not in fact work.

  17. Mircea Popescu, Friday, 9 September 2016:

    1) The comparison is flawed. It's properly "let's use RSA, the enemy is retarded". The problem here is the expense involved ; not some sort of plain (and for that matter delusional) impossibility. So yes, it's very much a hard protocol.

    Note that as there's no requirement for gossipd traffic to flow over any particular interface or in any particular manner, the "sum of traffic" is by definition incomputable, over any sort of time.

    2) You mean, "If I, enemy E, know the pubkey of B and the name of A such that A is one of B's friends, then I can send bogus requests purporting to be from A which I can never follow up, alerting B to the fact that his key is known to me and putatively A that he is being impersonated. In exchange for this I get to dribble at their gates powerlessly until they can be bothered to cut me off." then yes, you are exactly correct.

    3) For starters the point would be to discuss what we're discussing rather than random replacement.

  18. Mircea Popescu, Friday, 9 September 2016:

    The likewise is nonsense ; and an attempt to make it work for three users is poison, in the sense that it guarantees you will never be able to make a protocol that actually works for three million.

    It's, if you will, the equivalent of work hardening - your thing becomes brittle and therefore useless.

  19. 1) If breaking the scheme is not provably equivalent to breaking RSA, but rather equals "go and either break RSA OR collect 95% of the packets" - it is weak, as weak as the weakest link in the chain, and the RSA is window dressing.

    2) How does the "cutting off" work ?

    3) If the scheme does not work for 1 ... N users, it never worked. Unlike, e.g., Bitcoin, which worked just fine for N==2. and 3. and 3 million. A scheme that ~relies~ on a million noise-spewing sybils to smokescreen over the actual traffic is an idiocy a la Tor.

  20. Mircea Popescu, Friday, 9 September 2016:

    1) It's not evident to me we're discussing the same thing.

    2) B is not required to accept anything from A, the whole thing is predicated on B having marked A as a friend. This can be undone by B at any time and for any reason - including this one. A is then at liberty to remedy the defect, for instance through issuing A', for B's use (which does not impede him from using A with C as before).

    3) It's not evident to me we're discussing the same thing.

  21. 1) A "gossipd" that falters in any way when the number of users is 2, or 3, etc. is equivalent to a Tor. Crypto that is insecure against a panoptical (i.e. captures 100% of the ciphertext for all time) opponent is a masturbatory exercise, and not crypto at all.

    2) If the enemy can induce B to "unfriend" A, or vice-versa, your communication link falls apart at his pleasure.

    3) How's that ?

  22. > 1) If breaking the scheme is not provably equivalent to breaking RSA,

    Lulz since when is RSA hardness provably equivalent to anything.

  23. Mircea Popescu, Friday, 9 September 2016:

    @Stanislav Datskovskiy

    1) That may be, then again it may not be ; what's it all to do with gossipd ?

    2) Once enemy can impersonate A, this is a foregone conclusion already and of not much concern.

    Also, careful with the symbolics, you seem to confuse the word "rose" with the item denoted. Enemy can make you discard the name of your friend ; this has no bearing on the friend itself.

    3) A statement of absence is not open to an inquiry of "how come". Once produced, it's on you to show how your previous statements derive from common ground.

    It is, if you will, a case of A having been unfriended. It requires no further work from B.

    @A Nom Right.

  24. 1) It has everything to do with the described scheme. If gossipd requires a million nodes to come into existence simultaneously, and remain indistinguishable to the enemy from the genuine article, it is a Tor.

    2) If the enemy E can force a key renegotiation (B: "hey A, I'm getting a megatonne of replayed purportedly-A, get yerself a new key willya") the principle of "nothing-to-allcomers" is violated. Now A has to somehow communicate "hey B, I heard you, here's a new key for me." And moreover, to communicate it strictly to B. And actually carry through the negotiation of the new keys.

    I.e. "allcomers" can force A and B to renegotiate their shared secret. Which process fits nowhere into the described scheme, and presumably happens out of band (where? over coffee, in meatspace?)

    Now whatever cost A and B incur from having to do this, can be forced unlimitedly frequently by E, at 0 cost to E.

  25. A Nom: ever met a fella who was convinced that breaking into his house is ~exactly~ equivalent to picking his expensive, fancy front door lock? And not, say, to breaking a window with a brick?

    What is so hard to understand about "if the house has glass windows, the door lock is decorative" ?

  26. The proven strength (or lack thereof) of RSA is immaterial. What ~is~ possible is to craft a scheme that is provably ~as strong as $cipher~. Thereby if a $cipher of proven strength should ever be discovered, and put to use, the scheme is now provably of that strength.

  27. Mircea Popescu, Friday, 9 September 2016:

    1) There's nowhere to be found in the spec any mention of these 1mn nodes being required. If you think they are required, you're more than welcome to show they are. The fact that I gave an example which included a million users is no grounds to now presume the million users are somehow required, what is this, the Washington Post ?!

    2) The principle is not violated inasmuch as the enemy in this scenario is not an allcomer, but an actually informed participant. The allcomer is one who knows nothing, not your friend once removed. That the cost to E of finding B's A would be 0 is a ridiculous proposition.

    As an exercise (in stupidity, as I suspect you'll now cling to this example and enact it into a strawman), A could ask C to convey his new key to B. Which is no degradation in the protocol but how gossipd works in the first place - the only way to meet people is by being introduced by other people you both know. This, incidentally, is quite what I meant above by "work hardening" - instead of considering the protocol as is, you instead move on to considering an insane version based on your own experience. The fact that you should have friends to exist is not nearly as apparent when you consider a space of 3 as it is when you consider a space of one million.

  28. Ah ok, something more like "to prove RSA is the weakest link".

  29. 1) "the chance of a panoptical enemy hitting on an A that B knows is one in ten thousand. In practice there is no such thing as a panoptical enemy, and even should it manage to know 999`990 of the 1`000`000 users, it's to be expected that 80% of the time, the only ones it really wants to impersonate are the remaining ten."

    This implies combinatorial explosion as the sole protection against E trying every A,B.

    2) Any idiot can simply capture a packet on the wire and replay it $maxint times.

    3) If E, simply by sending a few replayed packets, can force A to catch a plane and visit castle of B to negotiate a new key, whereas they would otherwise not need to, E can bankrupt A and B (A by plane tickets, B by coffee bill, say.) This is elementary.

  30. And it was not yet answered what exactly ensures the contiguity of channel between an auth and a payload transmission.

    And any way I cut it, the scheme does rely on "secret public keys." In the sense that if E knows any node's public key, he can flood it with rubbish continuously.

    It is not clear to me what the purpose of the auth steps even is!

    Seems like you still have traces of the TCP-era sketch in your thinking, where a "pipe" linking A and B for a certain time T was posited to exist. In the sense where bits somehow come with a return address and can be "answered", or can somehow speak for the authenticity of future incoming bits without an explicit mathematical linkage.

  31. Mircea Popescu, Friday, 9 September 2016:

    1) No, giving example of how enemy suffers does not imply this is the only way enemy suffers. It implies this is an example of how enemy suffers.

    2) Capturing the packet is half the work, idiot has to also spoof the original sender's logical (as opposed to geographic, so to speak) location.

    3) The presumptions you rest this on don't hold. It's not 0, and it's not "simply by replaying packets".

    In point of fact the situation is no different from current : "someone" / Hanno Boeck, the deceitful shitbag could right now email you ten billion tidbits of nothing encrypted to your key. Your defenses to this are exactly the same : you can disconnect the email in question.

    That Hanno Boeck, the deceitful shitbag doesn't currently do this is perhaps indicative, but be that as it may.

    4) It's not directly clear that what you call "contiguity of channel" is even a thing ; or for that matter necessary or relevant to gossipd.

    5) Again, for some reason. If E knows B's public key, he can flood B with meaningless packets. They do decrypt, but they do not decrypt to anything meaningful. B can trivially discard these, just as trilema currently discards ten million requests a month. Who cares, and why would they ?

    6) The "traces of TCP-era" or whatever exist in this conversation strictly because you specifically injected them. You don't get to come into a discussion of gossipd, try as best you can to force it happen in terms of TCP, then over the constant protests of the other party declare that "they have traces of TCP in their thinking". Because, again, not Washington Post, shit doesn't stick.

  32. 2-3, 6)

    "Logical location" and "disconnect the email" are TCPisms.

    Transmission of bits is a scalar, not a vector. This is the root of the misconception behind the entire scheme.

    There is no "disconnect the spammer" in this universe. There is only "I will reject strings that do not cause function F to return True." What is F ?

    5) This would be great but we do not have a notion of "meaningful" separate from the wholly useless (because there IS no contiguous channel) auth sequence.

    7) Arbitrarily "irrelevantizing" my observations will work for you exactly as well as it worked for Xerxes.

  33. Mircea Popescu, Friday, 9 September 2016:

    Here's a basic implementation of F in pseudocode :

    F(X):

    • if decrypt(X)
      • if db.key(decrypt(X))
        • return (generate_challenge(db.key(decrypt(X))))
      • else return ("unknown source")
    • else return ("bogus data")
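
    A direct Python rendering of this F, with decrypt(), db.key() and generate_challenge() left as hypothetical helpers standing in for whatever the implementation provides :

    def F(X, db, decrypt, generate_challenge):
        plaintext = decrypt(X)
        if not plaintext:
            return "bogus data"         # did not decrypt at all
        key = db.key(plaintext)
        if not key:
            return "unknown source"     # decrypted, but to no known identity
        return generate_challenge(key)  # answer a known source with a challenge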

    5) "Meaningful" plainly defined in this spec. Using the prototype F from above, we call a message meaningful if F(message) returns generate_challenge(db.key(decrypt(X))) and meaningless otherwise.

    7) This is certainly true.

    The problem with the statement, true as it is, being that the only instance of "irrelevant" in this thread to date is "For the scheme described, it is irrelevant if the communication is broadcast in a unicast or multicast fashion." in comment #8. Am I to understand you take issue with that then ?

  34. F1) What does return (generate_challenge... physically do? ( How, for instance, does the challenge reach the challengee without being readable by E ? )

    F2) Where in X does one stuff the payload (e.g., "pandasex") ?

    7/8) Re: #4 in comment 31.

    Picture the entire system as a set of radios. You are "A," on Earth, while I, "B", and enemy "E" likewise, are on Mars. You cannot distinguish a transmission from B from same from E via any means other than its content. Nor can you reply selectively to B, while excluding E, via purely analogue means.

    Can you see which elements of the proposed scheme fall apart under this light?

  35. Mircea Popescu, Friday, 9 September 2016:

    F1) It creates a string. Very much like how this works currently :

    <mircea_popescu> !!rate deedbot 1
    <deedbot> Get your OTP: http://wotpaste.cascadianhacker.com/r/9rjgi/?raw=true

    I'm too lazy to decrypt it, but you know what is inside : the result of a generate_challenge call.

    F2) This is not currently specified.

    7/8 ok, but my saying "it's not clear that what you call X is a thing, or for that matter relevant or necessary" does not directly map to "arbitrarily "irrelevantizing" my observations", especially given that X was not defined in any sense ; and otherwise depends on tenuous propositions that don't seem to be accepted (for eg, comment #4).

    Accepting your A - Earth B/E - Mars model as stated, I don't see any elements of the proposed design falling apart. Go ahead ?

  36. F1 seems to me to imply that anyone who knows a pubkey can trigger a challenge broadcast from that pubkey's privholder. Thereby a node can be distinguished from a dead planet (or unplugged ethernet cord), which is a Bad Thing. Now they can be scanned for by E.

    F2 is the crux of my problem - it is not only unspecified but - as far as I can see - quite impossible to specify in line with the given scheme. There is no protocolically (vs promisetronically) correct place for a payload.

    Earth/Mars - my point was that the formulation "if Boeck sends 10 trillion spams, disconnect your email and get a new one" doesn't work here. There are no "addresses" ! In so far as physical geometry is concerned, in my example messages from B and E appear to come from the same physical location. And in so far as E can tempt A to tune out Mars entirely, he wins (because, e.g., now I shall be unable to give early warning to A that E has launched his flotilla towards Earth.)

    Likewise E can alter any bits in any particular AB message, and spoof any supposed "logical location". The only thing E cannot do is (resting on the strength assumption of the cipher strictly) to break the cipher.

  37. Mircea Popescu, Friday, 9 September 2016:

    F1 I have no idea why it implies that to you, because it's not actually the case.

    Someone who knows a pubkey, and a friend of the owner of that pubkey, can indeed pretend to be that friend, up until the first challenge. This is necessarily the case ; and will be the case in any possible implementation of a communication mechanism.

    F2 I don't see any merit to the claim that "it can't be done". Should the hello format for X be encrypt(destination, "Hi, this is A and parameters") the problem is completely and directly solved.

    Earth/Mars - makes entirely no sense. I can distinguish a transmission from B from a transmission from E on the first pass, because the transmission from B decrypts, whereas the transmission from E does not.

    Should E invest the time and effort to find out my privkey, then I can still distinguish a transmission from B from a transmission from E because the transmission from B decrypts to "hello this is B speaking", whereas the transmission from E does not decrypt to "hello, this is B speaking".

    Should E invest the further time and effort necessary to find out B also, I can still distinguish a transmission from B from a transmission from E because B responds correctly to my challenge, whereas E does not respond correctly to my challenge.

    Should E invest even further and obtain a way to decrypt with B's key, it is proper to say that B and E are in fact now the same thing ; and judgements - of any kind and nature - made about E are properly made about B, and can be safely recorded as such.

    None of this has any bearing on geographical location. The "logical location" concept contemplated here does not map to the physical world in any sense, it is a direct equivalent for the colloquialism "from where I'm standing", in expressions such as "that dick looks more like a firehose from where I'm standing."

  38. F1) E can trigger a challenge just by replaying the last TB of captured material, in hopes that it contains challenge requests from target's friends.

    This problem - and the other problems discussed - all disappear if you have signed-1packet-hellos. Which do ~not~ need to be signed with a royal key that "one would not care to produce a standing-forever sig for third parties" with.

    Any attempt to weasel out of signed-1packet-hello results in a node that is mechanically distinguishable from an unplugged machine by someone who can replay old traffic.

    F2) how does the "and" in "Hi, this is A and parameters" work ? I.e. what links "A" and "parameters" mathematically ?

  39. How does one "disconnect email"? Remove office@trilema.com from your mailserver's database and tell everyone to use another entry point?

    > I can distinguish a transmission from B from a transmission from E on the first pass, because the transmission from B decrypts, whereas the transmission from E does not.

    What does "transmission from" mean here? Google's email server address? From: header? A bag of bits just arrived to your ethernet card from..well..ether. Where did it arrive from?

  40. I mean, yes you can distinguish by decrypting them. But you must try decrypting every bag of bytes that comes out of ether anyway, any other heuristics/assumptions about where it comes "from" will be used against you.

  41. Mircea Popescu, Saturday, 10 September 2016:

    F1) It is true that E can trigger challenges by replaying captured material. It is arguable that this gives him any sort of advantage. For one thing, if he has captured material, he already knows B. For the other thing, if he has captured material, he already knows B talks to A. So yes, he could, and he gains nothing.

    The alternative you propose solves, at a humongous cost, a problem that doesn't exist. Repudiable communication - "really, A said this to B ? who told you ?" (usually rendered in Romanian as "cine ti-o cicat nasu' ?") - transforms into irrepudiable communication ("but here is the so and so signed by your supposedly not related key except it's relatable, but feel free to hope the secret stays secret - and while at it let me put it in a little (just the tip!) and take some pics which I won't publish on the internet!").

    I'm not willing to give away gossipd altogether simply to protect against some imaginary bugaboo.

    You're more than welcome to implement something with signed-1packet-helos. I will not be using it for this purpose ; and I don't currently see another purpose I might want it for. Deedbot.org already exists and in my view suffices for handling signed material ; "royal" keys are the only keys that should ever sign anything.

    F2) Nothing links them mathematically. They are linked arbitrarily, by the fact that when A put together its helo to B, he created the string "hi this is A and parameters" rather than the string "hi this is A".

  42. Mircea Popescu, Saturday, 10 September 2016:

    @jurov

    > How does one "disconnect email"?

    By not reading it anymore, is what I had in mind.

    > What does "transmission from" mean here? Google's email server address? From: header?

    Merely who it is from, a metasyntactical quality of the message we need in this discussion ; nothing formal about the message itself.

    > I mean, yes you can distinguish by decrypting them. But you must try decrypting every bag of bytes that comes out of ether anyway

    Yup.

  43. F1) E is panopticonic and has EVERYONE's transmissions from all time. He knows nothing about any particular A,B, but can throw packets from all of history at anybody he likes and wait for something to budge.

    No one asked you to give up on your aspect of irrepudiability, but I will not give up on ~mine~. Which includes unscannability, and impossibility of en route diddlage.

    The frying pan needs a handle, yes, but "let's not have the fire under it" results in no sort of solution.

    The net that carries sybilade (packets without signatures of any kind) has no future. Because what ~can~ be drowned in shit, ~will~ be.

    F2) If nothing cryptographically links the two strings, a third party can cut them apart and transmit "I am A and parameter=='Let's fuck pigs'".

  44. *repudiability. heh.

  45. Mircea Popescu, Sunday, 11 September 2016:

    In this statement, F1 is basically a blocking dispute for the whole thing ; unless resolved somehow we're stuck here.

    I don't myself agree that the scheme specified can be drowned in shit ; nor do I agree that any aspect of anything is thereby lost. It also seems to me entirely spurious to pretend like E, however "panopticonic" it may find itself today, necessarily can maintain that quality in the described scheme. Consider the following model :

    I. Individuals I1 through In currently use keys K1 through Km by some mapping ; and have produced to date messages M1,1 through Mm,k. E has a complete list of all Ms and their meta-data, as well as a complete list of all Ks and a complete relation map between I and K. E does not have the capacity to decrypt any Ms.

    II. Gossipd as specified is introduced. Individuals I1 through In create keys K'1 through K'o and communicate them to each other by some criteria which needn't be defined. E dutifully adds the new Ms to his lists.

    III. E observes that no further (or significantly fewer) messages flow among I ; does some googlefu and discovers the gossipd ; does some printing press kranking and assorted derpage and obtains a mostly functioning gossipd, which is braindamaged in design but meets his bizarro specifications.

    IV. E replays his entire list of Ms to all the K's that he can find. Nothing happens. For the sake of argument the Is are in this example retarded, and no endless stream of lulz ends up on qntra as a result of this ill conceived attempt.

    V. E manages to capture the entire list of Ms originating from K'j, and replays them against all K's he can find. This results in challenges from K'q ... K'w. E extracts the keys of K'q ... K'w, and proceeds to spam them with numerous hellos from K'j. This results in challenges and then silence.

    VI. E now knows that... what ? Nothing, basically, except that this also didn't work. K'j used his K'l identity, which unknown to E also maps to the same I, to replace K'j ; the nodes relaying K'q ... K'w possibly used some bandwidth, but necessarily not more than what the operators (who may, but do not have to be the respective Is!) are willing to expend.

    There is literally nothing here for E. Gossipd as specified effectually

    A. Does away with the entire value of E's db of M, K and I.

    B. Prevents E from acquiring another useful db of any of these ; and prevents E from using the fragments he does acquire in any useful way.

    The end.

    F2) Let us try it out in practice. Please cut out the parts in the case reproduced below :

    -----BEGIN PGP MESSAGE-----
    Version: GnuPG v1.4.10 (GNU/Linux)

    hQEMA5a9B3cKC9FrAQgAn5Q3ejtP2V5lOjG1iWrsSE9jIHPKy9P9Gp7I8n0I24fP
    ScQ6+BDHjLJ9rk1lsA/OtIsPBGRs8AazFtBUP4JN5iKEAzba24/rjJX5mNClfaRw
    ylzIvXj9sclROOHCjdep48ccleIZIUDdmtJvDDAx5iYIheSOazwABEsHCG8RRi8P
    JDDJ3//aqaDHOYEhKtMSQkLYK5So34vc4aKHDFh23avON7mYh4J12a8REIOP+zby
    j9rHbZMv13IMqgiZiCupvl0bATw3FHvqEJeQYND6Hv5ozLBZ3CbW1nNFIo0yZHFI
    rF/BHcmAGvutXYK37h6xWl9cbwsCdvn53C16p+oMEtJgAbkvTY4RaHD0t+CmXg1j
    MPGURLNMXNhAN/3misNv0QizxfnlLPP3Baj4K8cqFpf6EP1LGXSyyD9WBv1Ozn0o
    IaslRzzlYIKueBi1VaTb4xVjMjMrfzCVwvakmworKVMN
    =qujK
    -----END PGP MESSAGE-----

  46. Framedragger, Sunday, 11 September 2016:

    MP writes:

    > Earth/Mars - makles entirely no sense. I can distinguish a transmission from B from a transmission from E on the first pass, because the transmission from B decrypts, whereas the transmission from E does not.

    Sorry if I'm repeating things, but: the clear advantage of the signed-1packet-hellos approach is that you wouldn't need to check all that data sent from Mars. You'd only need to check some amount of bits for a given chunk / packet (if it's packet-switched), and dismiss the rest of the chunk if there's no match. Different from attempting to decrypt "everything".

    Counterpoint / devil's advocate: it'd have to be a wholly new IP stack for one to be able to make use of this efficiency, otherwise as jurov said "you must try decrypting every bag of bytes that comes out of ether anyway" - even if the hello is at the beginning of every packet, you must accept the whole packet into your buffer and then check and dismiss if need be, no? Obviously it'd be great to have an implementation of some new stack which'd be different, and could especially be made use of in one of them post-nuclear radio scenarios.

    > Should E invest the further time and effort necessary to find out B also, I can still distinguish a transmission from B from a transmission from E because B responds correctly to my challenge, whereas E does not respond correctly to my challenge.

    As alf said, being exposed to this "must now send 1 trillion challenges" scenario doesn't sound so great (does it), unless it is vehemently argued that the scenario is too hypothetical. IMHO.

  47. Mircea Popescu, Sunday, 11 September 2016:

    > you wouldn't need to check all that data sent from Mars

    Of course you would - you have to check that 1st packet in any case. If you do 1 packet auth you will be spammed with that first packet, broken. If you do 3 packet auth, you will be spammed with one of those first three packets, broken.

    In ALL cases you will have to at some point check ; prior to that point you will be spammed. Since the limit on how much you're spammed is internal to the spammer and external to you, there is nothing you can do to limit how much you will be spammed in the absolute sense. If E has a 10k pps / 10 Mbps pipe and wants to spam you, you will receive 10k pps / 10 Mbps no matter what you do.

    Change the format - he'll change it too, you are getting what he has to give. Which is why the point of defense here is to make it easy to recognize him, which the current scheme provides for as well as possible, which is an absolute, and which means that any scheme provides exactly as well as this one, or worse. There is no better.

    It is very naive to imagine spammer will send some "rest" you can dismiss. He won't, he has no reason to. This way of thinking (which we call "static" in the log, and deride as infantile) is also how the welfare state ended up with the idea that you can, for instance, "fix" the "homeless problem" by "giving them free shit" - with the predictable result of even fewer people willing to work, or willing to represent their unwillingness to work as anything but a disability.

    > being exposed to this "must now send 1 trillion challenges" scenario doesn't sound so great

    Deedbot is currently exposed to it, doesn't seem to suffer.

    You can't, as a matter of principle, simply make this sort of statement. You are a very poorly qualified devil's advocate, principally because you are no actual devil. You are similarly a very poorly qualified god, principally because you are no actual god.

  48. Framedragger: This is incorrect. At no point is it possible to somehow avoid processing the packet. MP's reply has it.

    Mircea Popescu: In my conception of gossiptronics, there are no "hellos" or "goodbyes", etc. but only fixed-length packets, signed with ephemeral key and - inside this - enciphered to destination's ephemeral key; the payload being a Luby Transform [1] fragment of a larger payload (when channel is idle, a randomly-generated filler, but enemy does not know this; the channel is always operated at capacity and packets are sent at fixed intervals.)

    Enemy can spam the channel but each of his packets can be rejected in ~constant time~. And without providing any feedback to the enemy. [2] This is key.

    > Deedbot is currently exposed to it, doesn't seem to suffer.

    This is disingenuous -- Deedbot rides on top of the sewage filtering mechanisms provided by Freenode. Which could vanish, or flip into reverse gear, tomorrow. 100% promisetronic.

    [1] Recall, it does not matter in what order these are received, and a variably-large proportion can go missing entirely and still result in a correct reconstitution.

    [2] If I am E and sitting on your channel - which in the case of actual radio can be done by any child - I can derive ALL of your "secret" RSA pubkeys. Then I can transmit requests for challenges, and distinguish nodes. [3] And, via spamming, create situations which require manual intervention, every five seconds if I wish. We went over this, and thus far I've seen no adequate counter which doesn't reduce to General LeMay's cigar, 'It wouldn't dare!"

    [3] Your scheme, with the challenge responses, also exposes a "decryption oracle", which is one way that a primary school pupil can bust RSA private key of arbitrary size.

  49. Mircea Popescu, Sunday, 11 September 2016:

    1. No I get it. The thing with your model is that it includes an impossible object - ephemeral signatures. If it's signed then it's not ephemeral, by the definition of what signed is ; if it's ephemeral then it can't be signed, by the definition of what ephemeral is.

    [Signatures specifically exist, and completely succeed, at extracting specified strings from their intertextuality and instantiating them ; ephemerality is a property of the intertextual gestalt that is mutually exclusive with such instantiation.]

    2. It seems to me that anything can be rejected in constant time, depending strictly on the rejectatron one deploys.

    3. Deedbot has ~nothing to do with freenode. It works as a webinterface in point of fact. For that matter, jurov's implementation of a v eater worked on top of email. The protocol itself as we implemented it to date is transport layer agnostic - deliberately (if I guess not entirely apparent ?)

    4. I'm in no way arguing against Luby or Raptor etc. These all seem rather valuable and I think they have their place ; they are not specifically mentioned in the design document because it's unclear in my mind how exactly they'd mesh in, a matter which I'm hoping you'll clarify once we get past this thing here.

    5. You can only discover keys that are used during the portion of communication you can intercept. It was said (in #17, but perhaps not loud enough), that the "sum of traffic" is by definition incomputable. You can't sit on my channel. You can listen to some fragment of my chatter, and that is all.

    6. Breathing requires "manual intervention" if you're the sort that can't breathe without paying attention to it - and are now stuck breathing deliberately just for having read this.

    Nevertheless, setting up your node to, eg, autodrop any key from which it receives over 5 messages in a minute is trivial - and more importantly, currently practiced. Virtually any ssh server will do this, for instance.

    So no, it's not the case that you raised a valid objection which has been arbitrarily rejected. It's that you raised something which you seem to think is an objection, but in fact is completely nonsensical and in direct contradiction with both the whole body of practice known to date as well as any theoretical approach to the matter. It's not unlike saying that "all species will die out because the DNA is not made of diamond". For one thing, notwithstanding how cool you think diamond is, it's not practical for making DNA ; for another thing, the current purine based system works just fine - something you can directly find by looking out the window, should you live in a place with windows and be inclined to look out of them ever.

    7. This is a restatement of the observation that "a well equipped schoolboy" could "break RSA key of arbitrary size" through the application of third grade arithmetics. Sure, he could, very true. Good luck to him!

  50. > Nevertheless, setting up your node to, eg, autodrop any key from which it receives over 5 messages in a minute is trivial - and more importantly, currently practiced. Virtually any ssh server will do this, for instance.

    This is a DOS vector. E can trivially nudge B to reject genuine traffic from A, as often as he likes.

    > Deedbot has ~nothing to do with freenode. It works as a webinterface in point of fact.

    And if it gets a round-the-clock and regularly updated DDOSatron, Deedbot users will be starring in a game of whack-a-mole, as the mole. That this has not taken place, or for that matter the fact that I was able to load Trilema now, is strictly because the enemy is presently lazy and shy.

    > You can listen to some fragment of my chatter, and that is all.

    If I intercept a few packets, I can deduce the public key. And then proceed to use the decryption oracle you handily provided to deduce the node's private key.

    I have been hoping to demonstrate this via argument, here, because it is much cheaper and less traumatic than later having to publicly break this abomination should anyone be foolish enough to implement it as described here. But so far apparently failing.

  51. Unrelatedly, "ephemeral signature" is not by any means "an impossible object." Consider, for instance, a signature where the private key is at a later time given away publicly. It is in every sense ephemeral, in that the enemy can make no claim of attribution after (or, in any practical way, during) the secrecy interval.

    This is not a solved problem in the sense of "solution published long ago and in widespread use" - but is a ~solvable~ one, in the "make a practical Cramer-Shoup" sense.

  52. Mircea Popescu, Sunday, 11 September 2016:

    > This is a DOS vector. E can trivially nudge B to reject genuine traffic from A, as often as he likes.

    Yeah, you made this statement numerous times ; I rejected it numerous times. At this juncture, you can either keep presenting it as a nude statement to the predictable result ; or else actually show it is so.

    > And if it gets a round-the-clock and regularly updated DDOSatron, Deedbot users will be starring in a game of whack-a-mole, as the mole. That this has not taken place, or for that matter the fact that I was able to load Trilema now, is strictly because the enemy is presently lazy and shy.

    I'm sure this is so, which is why meta-NSA rules the world and nobody using anything else other than challenge-based auth (eg in the past week phf, framedragger) has any sort of problems in practice.

    Seriously now. Window. Outside. Look.

    > If I intercept a few packets, I can deduce the public key.

    Yes, and if magical unicorns then circle quadrature.

    Here, intercept this packet and deduce the RSA keys :

    -----BEGIN PGP MESSAGE-----
    Version: GnuPG v1.4.10 (GNU/Linux)

    hQIMA/grGFxR4PPGAQ/+K1gSNBPYnr8QYpOe/4hWsoU5YhtfUteuxHw/Oc4szrfw
    gkNcy1qow4wln9lRCdGJCSXWVOUBgqTjesrcsz/S4gnsChM/1E4MsXyK0Qn4fIqU
    wC6tKe40t7na/FAVVzUg0KOkPx9YTJ2EXYvSVVTqz1n7hCLoXnmje6ixeNnhD4WX
    A0V4b55vOh5Q7m7ZOu7bivxUX/HkiUZ+M9Oto81lc88P5U/z+TLGsouA5L+vHMK+
    mGFhwOJ2yKFxGnMa3aBhQjTGbkf4Oo00lrCIDwg5zVAQ+tSmDaSumP6B3L63q7aC
    j7ayTowKGNg+OdLlS4d5bLYvJEOu892jzF9we1LT2tccomPBfy6G/n9spxAh6aWe
    H8pFeKOZf5sBHtyd8fap5JGN4GFLUrCK6g8YyM/+1RTxsBxJhnZ4n1pmTn9Qh8F0
    eIQR+ZwxgXY0+lt9icnC3DB/fvNwBNBbS/IT1fY7kXxdwESdwFDbnaSVbhg6Fj44
    LfzzzJ14cHGHplTGekAPf93Fbd1d+Iy/SYxtIP/KuA1UGTR7EoxZTukxmivo/aUl
    oiG5BQK5gAmAe48vdf/jj20yVGASFC80iq4J2oE+sdpx8lwGcoz/kTn0DRL7TeO8
    iGusF2yn01TeWxqSO+rtrELjhrN5VhikSidNg5Qz4UwEdNzBsd5eCf843xk39EbS
    rwHq7BTQo/nHOvW2EStso110I/3Ns4ajX1wt7YaNMb57JczeJCx+jX+jSvpryrDX
    7pNLZOY0QSCAUjz8NaM9022oIwLC+Y40xWyNPiqQ+V7zbWdmPfPBMsGWfAIz8tq9
    rx/dm5Qdkh2qgTDHIa9PwSqS7BpOhKwmCxQ4MBx+G/swAueNkKQyPexjoXGiz2WG
    PzTa/W6zzpOkwVbe3oXbIe9+vBroyX3S9LER7ETr2DU=
    =IKD2
    -----END PGP MESSAGE-----

    Knowing it is addressed to Vulpes, which you can indeed "trivially" find (at the cost of you know, maintaining a sks server, one which, you know, we deign to use) - please enumerate the keys inside.

    Or at least you know, specify if they're mine ; or maybe they're trinque's new keys relayed by me to Vulpes ? Or maybe they're Framedragger's ? Or maybe it's a grocery list ?

    The very strength and the very point of gossipd is that it does away with pretty much everything from the old world. You can't as much as presume that communication in between A and B happens on a channel between A and B. You can't look inside communication. There's nothing for you.

    In fact, a gossipd which implements challenges as literally always encrypting the same string, for all comers, is not necessarily a bad implementation, nor is it illegal by this here spec. If the operator has the resources to keep delivering the payloads, he is more than welcome to do just that.

    [This is not nearly as bad an idea as it seems to the naive first pass : think that E in the sense of USG.NSA.E suffers from a very blockchain-like problem : it is stuck preserving all encrypted messages until it can decrypt them, which is potentially forever. Basically the cost for us (miners) to create and deliver a message accrues once ; the cost for him (node) to save it accrues forever. It is a very powerful DDOS tool, the only thing is that it works passively and against the enemy ~on the grounds of his enmity and as a result solely of that enmity ~, ie, mathematically, and without possibility of error. And of course any time E's had enough - E can just stop being himself.]

    > Consider, for instance, a signature where the private key is at a later time given away publicly.

    This makes the catastrophic destruction of privacy inherent in a signature scheme worse ; not better. Your only faint hope of having a functional gossipd with signatures is that they somehow "are kept secret forever". This obviously will never work in practice ; but to give them away is just not even.

  53. > -----BEGIN PGP MESSAGE-----

    Contention was only that given a handful of these, I can make a new message addressed to the Vulpes key (let's suppose there were not such a thing as SKS, or published pubkeys, to make this utterly trivial). No more, no less.

    > Window. Outside. Look.

    The extant and apparent enemy is quite irrelevant, because we live on Tard Planet. Only the ~possible~ is relevant.

    > You can't as much as presume that communication in between A and B happens on a channel between A and B. You can't look inside communication. There's nothing for you.

    In practice, enemy who surrounds my house has a great deal of insight into the set of possible channels I might be making use of.

    And, as in the Mars example, we all share the same spectrum.  There is not a separate, magical one, for anyone to use.

    > think that E in the sense of USG.NSA.E suffers from a very blockchain-like problem : it is stuck preserving all encrypted messages until it can decrypt them, which is potentially forever.

    It is not clear to me that the enemy is "stuck" doing any particular such thing.

    > This makes the catastrophic destruction of privacy inherent in a signature scheme worse ; not better. Your only faint hope of having a functional gossipd with signatures is that they somehow "are kept secret forever". This obviously will never work in practice ; but to give them away is just not even.

    This is what you like to call a "nude statement".  Care to elaborate?
    As I see it, the signature can do its job - of giving a filtration criterion - and then die (private key published.) Creating thereby no attribution hazard. Where is the need for 'secret forever' ? And where do you believe this fails in practice ?
  54. Mircea Popescu
    Sunday, 11 September 2016

    Contention was only that given a handful of these, I can make a new message addressed to the Vulpes key (let's suppose there were not such a thing as SKS, or published pubkeys, to make this utterly trivial). No more, no less.

    Sure. It is not disputed that you can reconstruct the RSA pubkey of destination messages out of a number of such messages.

    The extant and apparent enemy is quite irrelevant, because we live on Tard Planet. Only the ~possible~ is relevant.

    Yes, but you are arguing the impossible.

    In practice, enemy who surrounds my house has a great deal of insight into the set of possible channels I might be making use of.

    Except, of course, if you're using gossipd. Because, as you said above, you can't actually discern if I'm talking to ben vulpes myself ; or if someone else is talking through me ; or if unrelated third parties are talking to each other.

    And, as in the Mars example, we all share the same spectrum. There is not a separate, magical one, for anyone to use.

    Yes, there is. Gossipd as specified creates a separate, magical spectrum. And it's fucking large, too! Right now you could be conveying my bits from A to B and nobody'd even know!

    It is not clear to me that the enemy is "stuck" doing any particular such thing.

    Maybe so ; maybe not. If you believe E isn't so stuck, you wouldn't run the node described ; otherwise you might. The point remains that fucking with the challenge-based authentication earns you nothing.

    This is what you like to call a "nude statement". Care to elaborate?

    Let us start with a quote :

    Before the agents left, Ross did volunteer that “hypothetically” anyone could have shipped drugs or fake IDs to him via a website called Silk Road.

    This is offspring of the same ever-pregnant mother as of the scheme proposed here ; and I've no intention to partake in her sweet offerings.

    As I see it, the signature can do its job - of giving a filtration criterion - and then die (private key published.)

    A signature can never die. You can have a convention among friends to no longer consider it ; but it's among friends only, much like the convention that we won't fuck your wife - or at least tell you if we get her pregnant.

  55. > Gossipd as specified creates a separate, magical spectrum.

    I am entirely at a loss re: how to make sense of this statement. My house has a couple of wires, etc., all tapped, and radio emanations, ditto. Where do I get one of these magical spectra..?

    > This is offspring of the same ever-pregnant mother as of the scheme proposed here ; and I've no intention to partake in her sweet offerings.

    A prisoner can be impaled by the demented USG-Nero for any reason whatsoever, or no reason. By the same token your ability to decrypt also "attributes" you as the owner of some proscribed key.

    > A signature can never die.

    The signature "dies" if it becomes loudly and ~undisputably possible for ~anyone~ to have created it. We discussed the "leakage of private key is a death" thing on numerous occasions, IIRC.

    Sorta like how the phuctored keys are dead. They no longer authenticate anything, or anyone.

  56. Mircea Popescu
    Sunday, 11 September 2016

    By the same token your ability to decrypt also "attributes" you as the owner of some proscribed key.

    No, not by the same token. The two are not equivalent, and this happens to be important.

    The signature "dies" if it becomes loudly and ~undisputably possible for ~anyone~ to have created it.

    Nope. A signature never dies. People can agree to discard them, or not, but this agreement stands with signatures in the same exact relation property agreements stand with physical reality. A convention and a conceit ; not a fact.

  57. What does a signature for which the private key is publicly known prove? And to whom?

    Please expand on this.

  58. Mircea Popescu
    Sunday, 11 September 2016

    Same thing it always proved : that the owner of the key signed the matter in question.

  59. Who is the owner ?

    Let's say I turn up a string, "let's fuck pigs!" signed by the key which appears in http://phuctor.nosuchlabs.com/gpgkey/221218755BB59C166BD88435267638C04F757D5630064CA207D228B4A7520F57 .

    What useful statements can be made about such a signature ? The set of people who could have plausibly created it includes more or less everyone with half a brain.

  60. Mircea Popescu
    Sunday, 11 September 2016

    I have no idea ; nor am I interested in considering the matter - much like if we were discussing automobile engines I wouldn't consider who is the owner either.

    As far as the key itself is concerned, what stands at all points through its (perpetual) existence is that whatever it signs, its owner signed. That's all.

  61. > I have no idea ; nor am I interested in considering the matter

    Your argument "keys never die" is then an article of faith? (It is also in screaming contradiction with reality - what useful attribution can be made from the signed string in the earlier example?)

    And it appears to be in contradiction with your historic position, where any key found to be in the control of multiple unrelated parties is "dead" for any conceivable purpose.

    This is, IMHO, a serious reversal. Care to elaborate?

  62. Mircea Popescu
    Sunday, 11 September 2016

    No, keys never die as a matter of fact. That "but MP! here's this meatspace reason we should all agree to pretend like they did" is rejected out of hand has nothing to do with faith. Why should I import the dubious nonsense of the vague into a design I actually am taking seriously ?

    You are confusing one kind of situation with another. The one kind has a central spot - I can for instance kill keys via deedbot ; this is nice and good, but also irrelevant, because the other kind is a decentralized system where by deliberate design no such thing as consensus is possible.

  63. In what sense is the "pretense" a pretense, and not actual fact? In what sense is the linked Phuctor key alive and well ? What am I missing here.

  64. Mircea Popescu
    Sunday, 11 September 2016

    Suppose Phuctor is not in X's WoT. How is X to establish what is found written on Phuctor's page ?

  65. Phuctor is a 0-trust affair. The modular exponentiation either comes out correctly, or it does not. Faith does not play into it.

  66. Mircea Popescu
    Sunday, 11 September 2016

    That made no sense.

    What was said was, "Suppose Phuctor is not in X's WoT. How is X to establish what is found written on Phuctor's page ?" ; what you did was "I shall now assume that what was said was that Phuctor IS in X's WoT, exactly contrary to the statement I purport to discuss, because I am an idiot five year old with magical powers ; and as a token of exchange in the shamanical incantation I just engaged, let's pretend that at issue was whether X's trust rating to Phuctor is negative or positive".

    Stop emulating a petulant mongoloid. Phuctor is not in X's WoT. At all. Phuctor does not exist, as near as X can determine.

  67. Speaking of ZKP and other vapors https://archive.is/j8a93

  68. Mircea Popescu
    Sunday, 11 September 2016

    The problem with "zero knowledge proof" is not so much different with pretty much everything else composing the contemporary United States : it discusses a desire, not a thing. That it does so under all the pomp of a three letter acronym is exactly what UStards would do, of course, but doesn't much help towards the realisation of that desire. Who knew that merely referring to hallucinations as if they were things fails to enact hallucinations into ontology.

  69. Mircea Popescu #68: did you take any effort whatsoever to separate the fact and fantasy re: ZKP, or simply pissed on it from reflex? Because your response looks quite exactly like an allergic response. The same token whereby someone like Taleb might well dismiss ~all~ public key crypto, because the only discussion he is familiar with is Microsoft's, or Hanno Boeck's, etc.

    Mircea Popescu #66: if I were to publish the schematics for a +ev cold fusor tomorrow, there will doubtlessly turn out to exist at least one deaf, blind, and retarded fellow in Nepal who will not hear about it for so long as he lives. It does not thereby follow that the +ev fusor has "not been discovered." Ditto for the breaking of a private key. If the factors are broadcast (for the sake of argument, picture a machine which transmits a phuctoring in a packet sent to randomly-selected ipv4, all day long) they are for any reasonable purpose "published." And signatures by this key thereby authenticate nothing and no one.

  70. Also let's remember that I was "killing" a key for the purpose of attributability -- "the set of people who could have produced this signature has now grown by four orders of magnitude, and retroactively" -- and not "revocation" ("no one whosoever shall henceforth risk accepting this key as valid"). The latter, as MP points out here, and in past #t thread re "revocation", is quite impossible, given as it would require an absolute consensus to be reached by a decentralized system.

  71. Mircea Popescu
    Sunday, 11 September 2016

    Article updated, to include footnote 7.

    @Stanislav Datskovskiy Re ZKP : the only actual implementation I am aware of is the Hamiltonian cycle thing, whereby A and B get to engage in 1024 rounds of "do you want to see the homomorphism or the cycle in the homomorphic graph", with a built-in 50-50 odds of answering correctly by chance (hence the "1024" rounds) and a plain certainty that if A can predict what choice B will make then A can trivially "prove" himself. Figure it out, creating thousands of large homomorphic graphs, presumably on the basis of urandom. How not to piss ?

    Re the fusor : gossipd is a decentralized world. All things in the lightcone are alight, all the rest are dark. This is not escapable, and the fact remains that a key's birth and its supposed death are discrete, independent phenomena, which means that there is no way to ensure an absolute pairing in a decentralized world, which further means that what is "dead" eternal lies, and the word death is not the proper description of the situation.

    It is unclear to me that the difference you propose consists of anything.

  72. It might be worth discussing exactly what kind of "key death" would satisfy MP re repudiability of signatures. Why is it necessary for ~everyone~ to simultaneously learn that the key was broken? (Which - yes - is quite impossible)

  73. Mircea Popescu
    Sunday, 11 September 2016

    This is pretty scifi to contemplate, but my best guess so far would be that the only way sh would satisfy repudiability is if a key had the property that while live, anyone aware of the pubkey could verify a signature and anyone aware of the privkey could create a signature ; but once dead, everyone, whether aware of privkey or pubkey, would become unable to either create or verify signatures ; and that any matter in any way derived from the key's existence vanishes from the universe - such as for instance transactions in the blockchain predicated on something being signed by that key literally disappearing from the blockchain without a trace.

    In short, it seems to me it is simply not possible.

  74. Are you saying Satisfactorily Repudiable Key === Reversible Time Flow?

  75. Mircea Popescu
    Sunday, 11 September 2016

    Actually it seems reversible time is not good enough, because you could reverse it again... more like some sort of reversible-and-splits-if-reverses. Anyway, more like the premise of a science fiction novel than anything.

  76. Re: #73:

    Why not also specify that key-killing must also turn mice into men, cure all ailments, transmute lead into gold, and make the operator fart purest argon?

    You have carefully drawn a picture of an impossible feat, but it is not clear to me how the problem at hand ~necessarily~ reduces to this feat (removing a hypothetical future inquisitor's ability to derive meaning from a signature.)

  77. Mircea Popescu
    Sunday, 11 September 2016

    Consider a sh scheme.

    At t0, A creates K ; E creates E.DB [evil deedbot], an exact equivalent of deedbot as is that you can't read.

    At t1, A connects to B via K.m1 ; E intercepts K.m1 ; records K.m1 in E.DB

    At t2, A "kills" K.

    At t3, E reveals E.DB, definitively linking A to K.m1 notwithstanding that the promise of deedbot-sh was that K will be ephemeral.

    You just raped your user A, E is grateful.

  78. How does revealing the evil DB prove anything to a third (fourth) party?

    It could say whatever E felt like crapping out - e.g., that A killed Kennedy.

  79. Mircea Popescu
    Monday, 12 September 2016

    No, the fact that K.m1 was signed at time t1 is verifiable by third party through reviewing blockchain. Exactly like in case of deedbot.

  80. Enemy could just as handily memorialize ~decryption~ auths in Evil Deedbot, neh?

  81. Mircea Popescu
    Monday, 12 September 2016

    Neh. deedbot-sh looks like this :

    Metadata : A to B
    Payload : (Hi).signed_by_A

    At this point, E can safely mark down K.m1 as per #77 above.

    Meanwhile deedbot-dc looks like this :

    Metadata : A to B
    Payload : Hi

    Metadata : B to A
    Payload : 185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969

    Metadata : A to B
    Payload : Hello

    At this point E could mark down that he thinks "Hello" = 185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969, which is very different from above ; or just as well he could mark that he thinks "Hello" != 185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969.

    The key to resolving whether in fact = or != rests on knowledge privy to B only, which he can't communicate in a trustless manner. Meanwhile the key to resolving whether (Hi).signed_by_A is valid or not valid does not rest on knowledge privy to B only, which means that it can be recorded and verified later by unrelated parties, and which further means that far from being any kind of gossipd, gossipd-sh is merely a reimplementation of deedbot, requiring no trust among parties and explicitly containing all the bits necessary to certify all communications. Such reimplementation is of course spurious, as we already have a deedbot which works just fine (and if it doesn't we could just fix it).
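
    For illustration only, a minimal Python sketch of the asymmetry described above ; every name, secret and message here is invented, and the widely available "cryptography" package is assumed for the RSA part. The keyed-hash challenge of deedbot-dc can only be resolved by someone holding the peer's secret, whereas the signature of deedbot-sh is checkable by any bystander holding the pubkey, at any later date.

      import hashlib
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      # deedbot-dc flavour : a challenge that is a hash keyed with B's secret.
      # E observes the hex string and, later, "Hello", but cannot resolve = vs !=
      # without the secret, which only B holds.
      b_secret = "some-per-peer-secret"        # invented for the example
      challenge = hashlib.sha256((b_secret + "Hello").encode()).hexdigest()

      # deedbot-sh flavour : (Hi).signed_by_A. Anyone who ever obtains A's pubkey
      # can verify this, at any later time, with no cooperation from A or B.
      a_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      sig = a_key.sign(b"Hi", padding.PKCS1v15(), hashes.SHA256())
      a_key.public_key().verify(sig, b"Hi", padding.PKCS1v15(), hashes.SHA256())
      # verify() not raising is a permanent, third-party-checkable record that
      # A's key signed "Hi" -- exactly the attribution hazard discussed above.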

    It is true that E could also extract A's pubkey out of a few hellos/challenges it intercepts. This is equally irrelevant for both schemes, as neither relies on secret-pubkeys to work.

    One could, of course, wrap the entire above in one further pass of encryption, so rather than

    Payload : (Hi).signed_by_A

    you would have

    Payload : ((Hi).signed_by_A).encrypted_to_B

    This is in practice the exact equivalent of whitening for RNGs : it makes it appear to naive inspection that the problem has been resolved ; while it is preserved and lurks underneath. I am currently torn as to whether the gossipd spec should include this extra layer, which is why it's not really in the spec. I can see decent arguments for either side, but that debate is not relevant to this debate.

  82. Mircea Popescu
    Monday, 12 September 2016

    Updated last line to add "thousand" in "two hundred thousand lines of log".

  83. Mircea Popescu
    Monday, 12 September 2016

    Also updated 4th footnote to include the last paragraph.

  84. One possible cut of the Gordian Knot re: my "enemy's ability to trigger a response from a suspected-node on demand" would be for every node to have a "lighthouse" - an always-on broadcaster of authentication challenge strings.

    In practice this can be a box that spams packets 24/7 to reasonably wide swaths of ipv4 space. Or even a shortwave station.

    It does not have to be ~physically~ connected to its respective gossip node, so long as the two can agree on what constitutes a valid challenge string (they can operate from OTP synced monthly, or, if you like to live dangerously, Shamir's RSAtronic prng...)

    To craft a valid packet, a sender must collect a single auth string from the receiving node's lighthouse (via whatever means, can be a shortwave tuner), craft auth with it as described by Mircea Popescu earlier, encipher to receiver's RSA pubkey, and send.

    This variant is not, incidentally, intrinsically incompatible with Mircea Popescu's - conceivably he might choose to hand out auth challenges to all-comers, while I operate lighthouse; while retaining the other basic mechanics.

  85. I will add that it is not even necessary for the lighthouse to be operated by the same party as the gossip node making use of it, as it contains no secrets and its sole purpose is to convey random bits to two+ physical locations simultaneously. In that respect it is even theoretically possible to make use of some natural phenomenon that can be reliably observed in multiple places, such as a pulsar.

    The receiver knows whether an incoming packet's auth is equal to, e.g., hash($some_friend's_permaseekrit + $some_segment_of_lighthouseola_from_past_hour) - or not. And if so - which friend's.
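
    For illustration, a minimal Python sketch of that receiver-side check, with all names and secrets invented : the node hashes each recently heard lighthouse segment with each friend's shared secret and compares the results against the auth carried by an incoming packet.

      import hashlib

      # Invented example data : recent lighthouse output and per-friend shared secrets.
      lighthouse_segments = ["segment-heard-at-1205", "segment-heard-at-1210"]
      friend_secrets = {"friend_B": "permaseekrit-B", "friend_C": "permaseekrit-C"}

      def identify_sender(incoming_auth):
          """Return which friend, if any, produced this auth string."""
          for friend, secret in friend_secrets.items():
              for segment in lighthouse_segments:
                  candidate = hashlib.sha256((secret + segment).encode()).hexdigest()
                  if candidate == incoming_auth:
                      return friend
          return None   # not a known friend -> drop

      # A friend who heard "segment-heard-at-1210" would have sent :
      auth = hashlib.sha256(("permaseekrit-B" + "segment-heard-at-1210").encode()).hexdigest()
      print(identify_sender(auth))   # -> friend_B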

  86. I am trying to work out how the lighthouse thing works. Is this lighthouse broadcasting the same challenge string for everybody, or does it send encrypted to all known friends keys?

    A with friends B, C, D periodically broadcasts a series of ('Challenge1').encrypted_to_B, ('Challenge2').encrypted_to_C, ('Challenge3').encrypted_to_D

    B sends to A ('Challenge1' + {payload}).encrypted_to_A

    Am I on the right track here?

  87. PeterL: "same string for everyone" is intrinsic in the word "broadcast", yes.

    Your variant also works, but would require the lighthouse to be operated directly from its respective gossip node. Whereas "random bits, hash a piece with your shared secret, rsaify, and send back to me" does not require it. And potentially multiple nodes can share a lighthouse, which can be simply a TRNG broadcasting from Mars, or the like.

  88. PeterL: and yes, your variant is theoretically stronger, because it incorporates RSA in both directions, instead of relying on the hash shared-secret auth for the incomings. So there is that.

  89. PeterL: the only serious problem is that you would be broadcasting all of your peers' public keys. (See earlier in discussion: they can be derived trivially from a few examples of ciphertext.)

  90. Stanislav: Eventually you will pass some encrypted text to your peers, so perhaps we can assume that the enemy will have their public keys (they are public after all).

    So that they will not be able to listen to your lighthouse and thereby enumerate the set of your peers, you could insert into the series random garbage encrypted to random nobodies taken from sks between the challenges encrypted to peers.

  91. PeterL: the problem is enemy's ability to ~enumerate~ the peers, rather than his learning the pubkeys per se.

  92. Mircea Popescu
    Monday, 12 September 2016

    One possible cut of the Gordian Knot re: my "enemy's ability to trigger a response from a suspected-node on demand" would be for every node to have a "lighthouse" - an always-on broadcaster of authentication challenge strings.

    This seems rather likely. Depending also on how piddly a set-up someone arranges for himself.

    It does not have to be ~physically~ connected to its respective gossip node, so long as the two can agree on what constitutes a valid challenge string (they can operate from OTP synced monthly, or, if you like to live dangerously, Shamir's RSAtronic prng...)

    This is especially enhanced by the simple fact that the scheme does not depend on the strength of the challenge string, as mentioned before ; that strength is merely relevant internal housekeeping, "how much you like talking to randos" sort of thing.

    Trust flows both ways, and the fact that E managed to connect to A "easily" because he knows A uses the string "36" for all auths also means that E has no serious reason to believe A isn't lying to him outright (unless, of course, he has reasons out of band, which is and will remain a recurring theme in gossipd design : it is intended to favour friends over enemies, implemented as favouring those who "just know" against those who deduce).

    To craft a valid packet, a sender must collect a single auth string from the receiving node's lighthouse (via whatever means, can be a shortwave tuner), craft auth with it as described by Mircea Popescu earlier, encipher to receiver's RSA pubkey, and send.

    This variant is not, incidentally, intrinsically incompatible with Mircea Popescu's - conceivably he might choose to hand out auth challenges to all-comers, while I operate lighthouse; while retaining the other basic mechanics.

    Quite. There's nothing wrong with the arrangement, and indeed large scale usage seems to naturally favour it.

    I will add that it is not even necessary for the lighthouse to be operated by the same party as the gossip node making use of it, as it contains no secrets and its sole purpose is to convey random bits to two+ physical locations simultaneously. In that respect it is even theoretically possible to make use of some natural phenomenon that can be reliably observed in multiple places, such as a pulsar.

    Such an arrangement is conceivable, although if it's a physical event it will require instruments, which cost money, which then makes it a case-by-case decision.

    PeterL: "same string for everyone" is intrinsic in the word "broadcast", yes.

    One possible problem here is "when does the string change".

    And potentially multiple nodes can share a lighthouse, which can be simply a TRNG broadcasting from Mars, or the like.

    This also threatens to introduce a locally-central Jesus nut in the whole scheme.

    Stanislav: Eventually you will pass some encrypted text to your peers, so perhaps we can assume that the enemy will have their public keys (they are public after all).

    This is not a safe assumption under the spec. See addendum to footnote 4, and think that in principle the very paranoid could operate their gossipd in a single-message-burn-key mode, so no key is ever reused by them (think a sort of otr on top of gossipd). The significant cost of the method is the one thing keeping me from actually putting it right in the standard - we're already spending a lot doing pure-rsa as opposed to kochlean symcipher.

    So that they will not be able to listen to your lighthouse and thereby enumerate the set of your peers, you could insert into the series random garbage encrypted to random nobodies taken from sks between the challenges encrypted to peers.

    It stands to reason that, much like the humans it emulates, gossipd would create imaginary friends and carry on conversations with them. For better security - E can not distinguish between the case where a node is talking to an imaginary friend and the case where a node is talking to an actual friend over an unknown spectrum.

  93. PeterL said "So that they will not be able to ... enumerate the peers, you could ..."

    Stanislav responded "the problem is enemy's ability to ~enumerate~ the peers"

    Thank you for agreeing with me about the problem, but what did you think of my solution?

  94. Mircea Popescu: I was thinking specifically of the 'single-use message mode', yes.

    Even with a shared lighthouse, a node can easily track the auth strings that have been recently "spent" by his peers, and refuse any replays.

    The thing does not become a "Jesus nut" unless somehow everyone gets lethally lazy and starts using BBC World, or similar, in place of a proper lighthouse.

  95. Mircea Popescu
    Monday, 12 September 2016

    People are known to get lethally lazy, and what's worse it seems to go in direct proportion to the quality of the tools at their disposal.

  96. In principle it is even possible to use Bitcoin blocks as a lighthouse.

  97. Mircea Popescu
    Monday, 12 September 2016

    Very theoretically, because miners have a lot of leeway in what the blocks contain.

  98. PeterL: your "random garbage" is distinguishable from "enciphered to actual peers" in constant time.

  99. Mircea Popescu: an "empty" block still contains a certain amount of extractable entropy. And no, this is not necessarily an ideal lighthouse, but it is quite usable and "everyone already has one."

  100. Mircea Popescu
    Monday, 12 September 2016

    Possibly. Anyway, will revise the spec shortly because it does bear some expansion in parts.

  101. Another reason I quite like "lighthouse" method is that it OTPizes very easily if so desired. A "fleet in being", if you will.

  102. Stanislav: "PeterL: your "random garbage" is distinguishable from "enciphered to actual peers" in constant time."

    A sends the series ("CS").encrypt_to_B, ("CS").encrypt_to_C, ("CS").encrypt_to_E, ("CS").encrypt_to_F, ("CS").encrypt_to_G

    how is E to know that B and C are peers while E, F, and G are not?

  103. Mircea Popescu: lighthouses also permit an astonishing variety of "feints" - e.g., Mircea Popescu could proclaim a lighthouse publicly, but in actuality make use of another, or multiples, and thereby distinguish different types of people attempting to connect.

    PeterL: genuine peers' pubkeys (as derived from the packets) will recur; chaff - will not.

  104. Mircea Popescu
    Monday, 12 September 2016

    Altered to introduce point II (which subsequently renumbered all others past I), and footnotes 5, 6, 11 (which idem).

  105. Mircea Popescu : this is useful but - as far as I can tell - orthogonal to the lighthouse thread?

  106. Mircea Popescu
    Monday, 12 September 2016

    Well yes, how am I going to put the lighthouse in the spec ? It's a wholly optional item after all.

  107. Mircea Popescu : well, you could specify that outgoing raw auth strings not be RSA'd to anyone, but merely be random strings waiting to get hashed and RSA'd back to the node. At least optionally. The spec, as I presently read it, precludes the lighthouse method entirely: "In general it is expected a mix of friends' keys, own bogus keys and own usable keys will be used for this purpose in varying proportions..."

  108. Mircea Popescu
    Monday, 12 September 2016

    Ah yes there is this. Damn.

    Ok will take some thinking as to how to do this correctly.

  109. The basic mathematical demand on a "lighthouse" for use by hypothetical nodes A and B are:

    1) The lighthouse emits an unending sequence of strings S1, S2, ...

    2) The strings neither recur with any appreciable probability, nor have any straightforward dependence on a predictable phenomenon (such as the time of day, or a PRNG of public - or inferrable, as is often the case - seed.) Thus, one example of an ~unsuitable~ sequence generator would be hash(timeofday.)

    3) A and B can both reliably hear the lighthouse. They need not ever, note, agree on a precise time synchronization, but only that some particular string S was uttered by the lighthouse in a particular sliding window interval (e.g. past hour.)

    4) The lighthouse must generate and emit its strings sufficiently quickly to supply the expected flow of traffic into the nodes using said lighthouse, one string per ciphered incoming packet - and ideally greater by a large factor.

  110. I also would like to emphasize that the "lighthouse" need not necessarily be a public affair, but could just as easily be a box on your desk, and on the desk of B, which spits out, and then destroys, a stored OTP token from a NAND flash every ten seconds for the next five years or until you throw it in the stove, or similar.

  111. A "receiving" end of any particular node makes use of its lighthouse in the following way: every incoming S is hashed with - separately - K1, K2, ..., Kn where K are the shared secrets of particular peers. The result is stored next to the S in a ring buffer (which is as long as the desired sliding window.) These will be referred to as S1K1,S1K2,...,S2K1, ... SnKn, a matrix.

    An incoming packet is de-RSA'd and the auth cookie inside is searched for inside the current ring buffer matrix. If the latter is stored in tree form, this can be an NLog(N) operation; and it parallelizes infinitely, so given custom hardware it can be an O(1) operation.

    If the string is found, you have a winner: an authorized packet incoming. This is processed via the scheme described earlier.

    On the transmitting end of a node: a string S is drawn from the transmitting node's ring buffer and hashed (in a way which precludes length extension attack) with the K we share with the desired destination node. This is then RSA'd to the pubkey of said destination, and queued for transmission.
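
    For illustration, a rough Python sketch of the matrix just described, with invented names throughout and a plain dict standing in for the tree (behaviour, not performance, being the point). Plain sha512 over K + S is used for brevity only ; per the comment, a real implementation would want a construction immune to length extension.

      import hashlib
      from collections import deque

      WINDOW = 1000                              # sliding window length, invented figure
      friend_secrets = {"B": "K1-secret", "C": "K2-secret"}   # invented shared secrets

      ring = deque(maxlen=WINDOW)                # (S, {hash: friend}) entries
      index = {}                                 # SnKn -> (friend, S), the "matrix"

      def hear_lighthouse(S):
          """Pre-hash each incoming lighthouse string S with every shared secret."""
          entry = {}
          for friend, K in friend_secrets.items():
              h = hashlib.sha512((K + S).encode()).hexdigest()
              entry[h] = friend
              index[h] = (friend, S)
          if len(ring) == WINDOW:                # expire the oldest window entry
              for old_h in ring[0][1]:
                  index.pop(old_h, None)
          ring.append((S, entry))

      def classify(cookie):
          """cookie = auth string recovered after de-RSAing an incoming packet."""
          return index.get(cookie)               # (friend, S) if authorized, else None

      def make_cookie(S, friend):
          """Transmit side : hash a heard S with the secret shared with `friend`."""
          return hashlib.sha512((friend_secrets[friend] + S).encode()).hexdigest()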

  112. A node's transmitting end ought to feel quite secure in sending queued-for-transmission packets every which way, so long as they also go to the intended recipient. This promiscuity is pure win, as nodes will neither benefit nor suffer from receiving packets not intended for them.

  113. Depending on the realities of the underlying physical medium, it may or may not be possible to make a lighthouse that contains no secrets. For instance, a machine simply throwing out UDP packets of random bytes is unsuitable, because there is no way for the "listeners" to distinguish the output of the lighthouse from forged packets by the enemy. A lighthouse situated on the Internet as we know it would probably have to emit signed (with a key used for nothing else) packets, which would then be verified by listening nodes prior to use.

    Enemy capture of a lighthouse key would enable him to send ~differing~ rubbish to differing listeners; a situation that is mechanically detectable, in the form of a total loss of connectivity; and nothing else.

  114. On second thought, signed lighthouse packets may not be necessary - for so long as the enemy cannot entirely swamp the flow of genuine "light" with his own, the link remains usable; whereas if he can, signatures will not help.

  115. Mircea Popescu
    Tuesday, 13 September 2016

    This will require thinking about - but the ring buffer mechanism proposed is certainly very interesting.

    In other news, happy 100th comment to me!

  116. Mircea Popescu
    Tuesday, 13 September 2016

    So what I'm thinking here is along the lines of :

    Modify II to read

    II. Gossipd will perpetually run a RSA-key generation process ; and store the produced keys. The keys will be arbitrarily marked as usable and bogus by unspecified criteria. Usable keys will be used whenever keys are required by gossipd operation - such as when introduced to a new peer. Bogus keys will not be used.

    III. Gossipd will perpetually run a lighthouse process, which consists of calculating sha512 over a number equal to an operator provided value + the unixtime ; store and broadcast the resulting hashes.

    Modify III to read :

    IV. Gossipd will receive messages M, and proceed to :

    1. Decrypt the message to obtain its contents M.c
    2. Split M.c into the text M.c.t and the cookie M.c.c
    3. Hash M.c.t with the last 100 (?) items in his lighthouse current list obtaining M.c.t.h1 through M.c.t.h100.
    4. Compare each of M.c.t.h1 through M.c.t.h100 with M.c.c and in case of identity deliver M.c.t to operator ; drop M otherwise.

    Modify VI to read

    VII. GUI and UX considerations are not in the scope of this design document, except that the operator must at a minimum be provided with :

    1. A method to specify the lighthouse constant whenever he feels like (operators are encouraged to set this numeric integer to values larger than 2^1024, ie ~three hundred digits).
    2. A method to specify the lighthouse output frequency.

    as part of a clearly labeled, plain text configuration file .

    This make sense ? (The spec of III is in formal contradiction to your 2 in #109, but I believe it resolves the root cause ?)
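
    For concreteness, a Python sketch of the receive path in candidate point IV ; the split convention (a trailing 128-hex-character sha512 cookie), the concatenation order and the stubbed-out RSA step are assumptions of the sketch, not of the spec.

      import hashlib

      LIGHTHOUSE_DEPTH = 100                     # "the last 100 (?) items", per the draft

      def process_message(M, lighthouse_list, rsa_decrypt):
          Mc = rsa_decrypt(M)                    # 1. decrypt to obtain M.c
          Mct, Mcc = Mc[:-128], Mc[-128:]        # 2. split into text M.c.t and cookie M.c.c
          for item in lighthouse_list[-LIGHTHOUSE_DEPTH:]:
              h = hashlib.sha512((Mct + item).encode()).hexdigest()   # 3. M.c.t.h1 .. h100
              if h == Mcc:                       # 4. identity -> deliver M.c.t
                  return Mct
          return None                            # drop M otherwise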

  117. Roughly correct, but why nail down a PRNG?

    In my conception, 'lighthoused' is a separate proggy, even; potentially runs on separate machine; and simply milks TRNG and sends out to some broad swatch of recipients, including the owner's node(s).

    It is entirely possible to operate and make use of more than one lighthouse.
    And given as the strings may arrive in random order, I'd keep a lot more than 100 around.

  118. Likewise, this is a great time to roll out Keccak, IMHO.

  119. Also it is in practice overwhelmingly faster to prehash each incoming "photon" from the lighthouse(s) with K1..Kn as I described; the results can then be stored as, e.g., a red-black tree, for fast lookup.

  120. Mircea Popescu
    Tuesday, 13 September 2016

    Re 117 : The problem with specifying it as "separate" is that you'll end up with a Bitcoin-nodes situation, where everyone wants to use one but nobody perceives he should run one. This is pretty horrid.

    Re 118 : I did say sha512, and for this exact reason. Accordion ftw (log link for likbez).

    Re 119 : You'll have to properly spec this if you want it.

  121. Sha512 != keccak

  122. And folks who refuse to run lighthouses will have shit connectivity. It is a case of correctly-aligned incentives, imagine!

  123. Mircea Popescu
    Tuesday, 13 September 2016

    It's what I meant! Sha-3, whatevs!

    Anyway, we still need some manner to specify how one finds A's lighthouse.

  124. He tells you.

  125. Mircea Popescu
    Tuesday, 13 September 2016

    For one thing, that is a very poor spec as it's allowing detachment from operating lighthouse, contrary to your #122.

    For the other thing, he tells you what, an url ? Perhaps to be expanded later, maybe, to other things ? Will be pretty hard to code this thing with a hole like that in there, makes the whole thing a much less useful spec.

  126. Lighthouses ~push~.

  127. Mircea Popescu
    Tuesday, 13 September 2016

    Stop with the fragmentarium and write it all out clearly please.

  128. Let's try the whole thing.

    "Lighthoused" eats a config file containing some number of IP addrs. and/or ranges (expressed in the "/8", "/16", "/24" forms) and a transmission rate, expressed in datagrams/second.

    Every second, a number, N (which is derived from the tx rate) of 512-bit strings is pulled from /dev/random and datagrammed out to IP addrs pulled (via random selection, via same) from the programmed set.

    Repeat, ad infinitum.
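
    For illustration, a bare-bones Python sketch of such a "lighthoused", assuming IP/UDP as the carrier and os.urandom as the entropy source purely for the sake of a runnable example ; the range, rate and port are made-up configuration values (the /24 used is a documentation range, a stand-in for whatever the operator actually programs in).

      import ipaddress, os, random, socket, time

      RANGES = [ipaddress.ip_network("203.0.113.0/24")]   # invented target range
      RATE = 10                                           # datagrams per second
      PORT = 30303                                        # arbitrary choice for the sketch

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

      while True:
          for _ in range(RATE):
              photon = os.urandom(64)                     # one 512-bit string
              net = random.choice(RANGES)
              addr = str(net[random.randrange(net.num_addresses)])
              sock.sendto(photon, (addr, PORT))
          time.sleep(1)                                   # repeat, ad infinitum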

  129. Mircea Popescu
    Tuesday, 13 September 2016

    Why not let people poll it like in sanity ?

  130. Holy shit Mircea Popescu, POLL A LIGHTHOUSE!?

    The whole point of a lighthouse is unidirectional flow!

  131. Though anyone is welcome to hand out photons from his lighthouse via carrier pigeon, or solely via requests over a cup of tea, handwritten, etc.

  132. Mircea Popescu
    Tuesday, 13 September 2016

    So you don't want to publish it to a website ~anyone can read, because you want it to send packets to a list of nodes. Which it knows of. But it's not integrated with gossipd node.

    This is idiocy, as stated. Needs re-do.

  133. Again, in gossip world, there is no such thing as "a website". There are radios (or effectively same thing operating over extant net, while we have it) sending and receiving packets. "Website", with the attendant TCP idiocies of stateful connections, SYN, ACK, etc. is for the birds.

    And you are quite welcome to lighthouse from the same machine as is running your node. But I do not see why to glue them together by mandate.

  134. For some reason I thought that this was obvious in the choice of the name, but the entire ~point~ of a lighthouse is that it broadcasts (in the example given, unsolicited spamograms to thousands, potentially, of machines) and that the act of reception is a ~purely passive~ one.

    The variant where no lighthouse is used at all is your original picture, where folks ask for the cookie, and is a wholly other thing - it has nothing to do with the lighthouse scheme.

  135. Mircea Popescu
    Tuesday, 13 September 2016

    There is no such thing as website but then you talk about IPs.

    Here's the thing : your idea seemed maybe promising originally, but by now it's dead through having been covered in too much diarrhea dribble to be seriously considered.

    Go write the thing out, completely, clearly, in one single flat text without unneeded reference etc. I'm not going to be trying to fish for sense out of disparate bits that don't make sense taken together and manifestly don't cover the subject, it's a horrible use of my time.

  136. I'll gladly specify the whole thing, end to end, when you specify the physical medium on which a prototype is expected to be implemented. Examples include IP, shortwave, pigeon.

    The medium unfortunately cannot be abstracted out entirely.
    Just as you cannot specify a 10-34 thread on a screw cut from (how?) helium.

  137. Mircea Popescu: your spec, as originally written, provides repudiation, but nonchalantly pisses on more or less everything else that could be had out of a proper gossipd (unjammability, deniability of the-fact-of node operation, ability to use unidirectional broadcast radio, asymmetric connections (pigeon forward, radio back), etc.)

    If augmented with lighthouse (compatibility, no one forces you to use it) -- you get the entire package. And in that case it makes sense for you and I to work on the same spec and write one proggy.

    But if not, it does not. And we write separate specs, and write two proggies. There is nothing particularly wrong with this; it is why Beelzebub put more than one bloke upon the earth in the first place.

  138. Framedragger
    Tuesday, 13 September 2016

    In #136, DZ writes:

    > I'll gladly specify the whole thing, end to end, when you specify the physical medium on which a prototype is expected to be implemented. [...] The medium unfortunately cannot be abstracted out entirely.

    Agree with the latter. May I humbly suggest IP with, possibly, UDP atop it? Maps easier to pigeon and > lightsecond-distance geographical scenarios. No ACKs.

    Also, a concern (may be dismissed as "implementation-level detail", but if one is bound to wait for very good PRNGs for a long time it does suck on a more fundamental "can't prototype and therefore iterate on design" level, IMO): *is* it possible on a practical level to disseminate so much pseudorandom data constantly without worrying about entropy depletion or other parties being able to reconstruct the seed?

  139. Framedragger: ~$5 of parts buys you a MB+/sec of ~8 bits/byte entropy.

  140. Framedragger: as I recall, during the last major gossipd thread, we learned that Mircea Popescu is violently allergic to UDP (he filters it at the castle wall), but not allergic to otherwise-identical datagrams with protocol number not equal to 17 (UDP). The problem is that all commonplace OS require you to run as root in order to emit these.

  141. Framedragger
    Tuesday, 13 September 2016

    Stan: re. entropy source, hm, I guess that's that.

    Re. raw sockets requiring root, that does suck, but seriously mr. MP, this makes things *rather* impractical indeed. Isn't there a `netcap` for allowing a process to open / bind to these sockets? (There is one for allowing non-root < 1024 port binds). Otherwise a local daemon could provide these to locally authenticated processes, etc. Ridoinculous.

  142. Also there is no need for pseudorandom-anything. TRNG.

  143. Mircea Popescu
    Tuesday, 13 September 2016

    @Stanislav Datskovskiy

    I'll gladly specify the whole thing, end to end, when you

    Mno. This is not how things work.

    Part A of this not being how things work is that you don't get to redefine the task. You will specify the lighthouse thing, not some other thing, properly and without further bitching or I'll ignore it without further mention.

    Part B of this not being how things work is that you don't get to insert conditionals like you were the chief of the world - strictly and immutably because you aren't. If you can put your idea in a form usable by others - do. If you can not, there it dies, like so many others in the history of human thought, and you're more than welcome to "world is so unfair" until the cows come home.

    @Framedragger

    Also, a concern (may be dismissed as "implementation-level detail", but if one is bound to wait for very good PRNGs for a long time it does suck on a more fundamental "can't prototype and therefore iterate on design" level, IMO): *is* it possible on a practical level to dissimate so much pseudorandom data constantly without worrying about entropy depletion or other parties being able to reconstruct the seed?

    It's not clear if you mean part I (the RSA key generation) or part II (of the candidate spec, the keccak thing). In principle part II should be immune to this, at least if I correctly understand specific claims made/proven by authors (see noekeon.org, quite a good read anyway).

    Part I is a lot iffier. A "normal" box should be able to spit out a 4096 bit RSA key every five minutes, but then again it may take half an hour. Still, the requirement that you power up a node and wait a month before you can actually use it is not to my view a killer - Bitcoin as the golden standard of modern computing requires about three months as it is. In that month it can make 1500 keys at half hour each. This said, a proper RNG would certainly help immensely - and we are kinda-sorta designing towards a world where these are a common appliance.

    Re the UDP discussion, some quotes wouldn't hurt anything neh ?

  144. Mircea Popescu: appreciate that it is very difficult to do as you asked because it is not clear to me what you disliked about the first attempt ("machine broadcasts random bits, over whatever physical medium is available, to largest possible set of passive listeners.")

    Perhaps this does not even belong in the formal spec. Consider simply stating "auth cookies are random strings of a certain length, and are placed somewhere where the intended receivers can find them." This is inclusive of shortwave, udp, chalk on sidewalk, wherever.

  145. Mircea Popescu
    Tuesday, 13 September 2016

    I strictly disliked that it was not complete. It seemed interesting in parts, but it left a lot of unanswered questions, which is a breaker for a spec. Just sit still for half an hour and write the whole thing down as one piece of text - not a summary, not a mixed bag of hints, not some comparison-metaphor chimera.

    It is perfectly possible that the thing doesn't belong in the gossipd spec per se, and can just live as an implementation convention shared by some implementations. It is presently impossible to tell which is rather the case, for - you've guessed it - absence of an actual description of the lighthouse mechanism.

  146. Mircea Popescu: there is no escaping this job, true. Expect a full-length response in a couple of days.

  147. Framedragger
    Wednesday, 14 September 2016

    In #143, MP writes:

    > It's not clear if you mean part I (the RSA key generation) or part II (of the candidate spec, the keccak thing). In principle part II should be immune to this, at least if I correctly understand specific claims made/proven by authors (see noekeon.org, quite a good read anyway).

    I meant part II, actually. Fair enough re. safety assumptions I suppose, though this is what made me comment in the first place: "The lighthouse must generate and emit its strings sufficiently quickly to supply the expected flow of traffic into the nodes using said lighthouse, *one string per ciphered incoming packet - and ideally greater by a large factor.*"

    > Part I is a lot iffier. A "normal" box should be able to spit out a 4096 bit RSA key every five minutes, but then again it may take half an hour. Still, the requirement that you power up a node and wait a month before you can actually use it is not to my view a killer

    Sure. I'd still be worried about possible entropy depletion, and the trust that is implicitly put into software which tells whether it has enough entropy, and makes decisions (don't generate additional bits of privkey for now, etc.) accordingly. The key safety part is really important; and doing other things which require entropy on the same machine simultaneously may not be such a great idea, but this is speculation-territory. (Empirical "random uniformity of lots of generated keys in $x time window" tests may help, maybe?)

  148. Mircea Popescu
    Wednesday, 14 September 2016

    Actually I'm considering changing

    which consists of calculating sha512 over a number equal to an operator provided value + the unixtime

    to

    which consists of calculating sha3-keccak over a number equal to an operator provided value + the unixtime * microtime

    which, other than clarifying the intended hash function also should in principle allow polling at arbitrary frequencies (literally, as iirc it is not possible to obtain the same microtime from two different calls to it).

    The objection re specifying the prng is not without merit ; however I believe specifying one as a default is the correct approach. The user is evidently free to alter that line and recompile. As often as he feels like it, even. I don't believe the scheme can be improved in any meaningful sense, but be that as it may.

    Understand also that these aren't signing keys. If you use a shitty RSA key, the most that can happen is that the adversary spots it and factors it before you stop using it, in which case for the dt interval = (when you stop using it) - (when he factors it) he can read what data you receive, but not feed you bogus data, and all this if and only if a) he can intercept all your connections and b) actually has complete history of what communication you've seen to date. All this of course assumes you only use that one key, and of course the problem is automatically and immediately resolved once you replace the weak key, which process happens outside of his knowledge (as you'd be sending it to other people's RSA). In short, the scheme as proposed leverages network effects to significantly improve the opsec value of RSA - even on machines with very little actual entropy.
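
    Read literally, the revised lighthouse clause might look like the following Python sketch ; hashlib's sha3_512 stands in for "sha3-keccak" (it is the standardized SHA-3 variant of Keccak), time_ns() is one possible reading of "microtime", and the operator constant is an invented figure.

      import hashlib, time

      OPERATOR_CONSTANT = 2**1024 + 12345     # invented ; "larger than 2^1024" per the draft

      def lighthouse_string():
          # "an operator provided value + the unixtime * microtime"
          n = OPERATOR_CONSTANT + int(time.time()) * time.time_ns()
          return hashlib.sha3_512(str(n).encode()).hexdigest()

      print(lighthouse_string())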

  149. Wouldn't it make sense to add an Eliza implementation (Markov chain process, whatever) to the spec, so that nonsense can be cheaply generated? This way user can easily decide what part of what he conveys to any A for any B is pretty much random junk.

  150. Mircea Popescu
    Wednesday, 14 September 2016

    Certainly. I was debating whether to put this in the spec or leave it at the discretion of the implementation. I would definitely consider a spec for this Eliza bot - go ahead and write it.

  151. Framedragger
    Wednesday, 14 September 2016

    Regarding the responses in #148, fair enough. I would not trust all systems to return unique microtime per multiple calls, this seems to vary anyway and is full of implementation-specific madness (e.g. older Linux kernels do not seem to use High Precision Event Timer (running on Intel at 10MHz or so) which gives more precise timing). IMHO this addition is good but one should not *trust* the system to return unique values for this.

    Regarding RSA factorisation and what happens in that event, that is of course a good point; however this reminds me of a lack of forward secrecy - if a key is extracted (or inferred from multiple datagrams or w/e) and later factored, all recorded communication encrypted to that key would be deciphered. Recall that in "radio tower" scenarios, the "recording" part is particularly easy/-ier.

    Perhaps one could argue that the constant shuffling of keys provides for a degree of forward secrecy? I wonder if a node responding to a valid request could include a session key. I know that symmetric crypto is not particularly liked here, but an already-asymmetrically-encrypted payload could be further symmetrically encrypted with AES256 (at the very least this would not weaken security, and it's highly efficient on modern processors, much more than *actual* RSA anyway (i.e. not GnuPG's "RSA-encrypt a symmetric session key and encrypt the message with the latter, actually"), no?)

    This would satisfy the "where is PFS!!1" crowd (I'm part of it anyway). I understand if this may be pushing it, though.

  152. Mircea Popescu
    Wednesday, 14 September 2016

    Regarding RSA factorisation and what happens in that event,

    This concern links exactly into what I meant earlier by "The point is not to flatten the enemy with a wall ; but to welcome him to the jungle." in #14. Gossipd as specced makes no attempt to provide forward secrecy as a hard certainty. It is my considered opinion that no other scheme actually does, notwithstanding what anyone may purport & pretend ; and that such is not practically useful even if it were provided, because the weak link always was and will remain the very people involved. In short, much like "Derpy Autonomous Corporations" and other "trustless-blockchain-blala", forward secrecy is an attempt to build on a fundamental misunderstanding of the problem, misunderstanding that happens to come very naturally to a certain mind (ie, the desocialized mind) so they keep reinforcing it "being a thing" for each other.

    Consequently gossipd implements "forward secrecy" correctly, which is to say as deniability. Yes you may eventually hear all gossip ; no you may not discern truth from fiction on this basis.

    This is not to say, of course, that gossipd can not be used as a transport layer for, say, plain old RSA encrypted payloads - in a manner very similar to how irc-and-dpaste works right now, for instance. If you and your friend X decide all your gossipd comms are to be rot-13d, who's going to prevent you ? Certainly not gossipd itself.

  153. Hm. Regarding III ("Gossipd will receive inbound connections from identified clients"), presumably gossipd is to decrypt messages encrypted to any key in its "good generated keys" set from II. As key generation continues in II, the time required to decrypt/check incoming messages in III grows. I don't see how this can be sustainable, but I see why MP does not want to drop the "perpetually generate" clause, either.

    Unfortunately I can't think of a way out here myself, unless there is enough info before the encrypted blob to certify to the receiving node that the message is to be trusted (hence "it's OK to try to decrypt, this DoS vector is not being exploited"). This however does break Stan's "not a single bit leaked."

  154. Mircea Popescu
    Saturday, 18 March 2017

    I have no idea how that "presumably" follows or what sense any of the rest of it makes. Try being more specific.

  155. Sorry, was too curt, should sleep before answering.

    To decrypt a message, you need to know which key to use for decryption. If you have a buncha keys, you need to go through that buncha keys to check which one to use. Now, (1) under small scales this is negligible; and either way, (2) average and/or worst time complexity could be log2(n) if the thing's implemented well. But given the perpetual key generation process, your bunch just keeps growing. Unless I missed something obvious, as always.

    This issue largely goes away if we have a way of discarding keys (by key age or whatever).

    Also note, if we have some kind of index in memory (to have that efficient time complexity), then even with 86400*365*10 keys we'd have ~28 operations on average to reach the key (+ grabbing it). (The index would be 1-2 GB assuming some decent indexing scheme or another). If however you need to read from FS all the time, it's not as pretty.

    (Some numbers for reference: L2 cache get ~7ns, memory get ~100ns, 4K SSD read 0.15-godknowswhat ms.)

  156. The "presumably" assumed that incoming messages are encrypted to one of the keys generated in II. If all incoming data is like that, then the case described applies; if however you prepend it with something probably-heathen (session identifier which breaks both (1) statelessness desired by Stan as well as (2) "no useful bits leaked for enemy"), you may make this particular issue go away. (That prepended data could be a nonce (changes with each message) to avoid it getting replayed.)

  157. I'm trivially wrong in #156 because you still need to think about the "first time" another node sends a message to gossipd. (But this reeks of too much state already, so the whole "maybe prepend with..." is probably to be dismissed.)

  158. Mircea Popescu
    Saturday, 18 March 2017

    > Unless I missed something obvious, as always.

    You missed the obvious part that the RSA keys allocated to peers are controlled by the operator. If you wish to expose 20, you expose 20. If you wish to expose 200, you expose 200. It's your job to decide how many keys you wish to expose, which is why "The keys will be arbitrarily marked as usable and bogus by unspecified criteria".

    > This issue largely goes away if we have a way of discarding keys

    We do, it is called "The keys will be arbitrarily marked as usable and bogus by unspecified criteria". Ie, whenever operator feels like it, keys get nuked.

    > we'd have ~28 operations on average to reach the key

    Don't get me wrong, I'm reasonably impressed you interiorized that discussion, but really now. Gravitation doesn't bring down the London Bridge not because it doesn't work, but because as novel a concept as it may be, nevertheless it was reasonably familiar to the original builders.

    > if however you prepend it with something probably-heathen

    No such thing is in the spec.

  159. Thanks for bearing with me. Even though it was noted that the key set is controlled by the operator, in my mind I pictured "a large bunch of keys" by default somehow. So this makes sense, then.
