QuoteOriginally posted by: outrun
I think #2 gets constructed from not doing everything the #1 way?

I'd say that #2 gets constructed from not doing ANYTHING the #1 way. Once you use a non-anonymous method to communicate between two people, those two people are forever linked as known associates.

QuoteOriginally posted by: outrun
What do you think about error correcting codes or running XOR's? The idea is to have little bits of information (instead of a whole newspaper) that depend on large amounts of data.

I like it a lot. But there is a HUGE risk in clever encodings. If the retriever downloads sufficient data to decode message 1 but does not download enough to decode any possible second message, then the watchers will know which message the retriever was after, and hence which retriever the writer was communicating with. I think we can assume that the watchers know something about the reconstruction algorithm: even if the writer and retriever have a secret choice of reconstruction algorithm, the watchers will know about the prevailing anonymous messaging software and will be able to use information-theoretic analysis to put nonparametric bounds on what the retriever might have been looking for. The point is that the less the retriever downloads, the more likely they are to give away their anonymity. The retriever must always download sufficient data to decode any of M possible messages from N possible writers (a sketch of this rule follows below).

One possible solution to this problem is to obscure the message boundaries: break each message up into little bits, like you suggested, but then write those bits over time, interlaced with little bits from other messages. Not only does this reduce the risk of a watcher using knowledge of the decoder to conclude that the retriever has decoded a specific message, it also prevents the watchers from correlating the timing of writes and reads (e.g., the watcher notices a consistent ping-pong pattern: person A writes, then person B reads, then B writes, then A reads, then A writes again, ...). If everyone is continuously writing a stream of data, and everyone is continuously reading a stream of data from seemingly random locations and times, then there's no pattern in the timing metadata.
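
To make the fixed-size-download rule concrete, here's a minimal Python sketch. It's illustrative only: the names (`board`, `candidate_ids`, etc.) are hypothetical, and plain XOR secret sharing stands in for a real error-correcting code. The point it demonstrates is that the retriever always fetches the shares of every candidate message, so the watchers see an identical access pattern no matter which one it decodes.

```python
import secrets

BLOCK = 32   # bytes per stored block
K = 3        # shares per message; all K are needed to decode

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_xor(message: bytes) -> list[bytes]:
    """Split a BLOCK-sized message into K shares. Any K-1 shares are
    indistinguishable from random noise; XOR of all K recovers it."""
    shares = [secrets.token_bytes(BLOCK) for _ in range(K - 1)]
    last = message
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def join_xor(shares: list[bytes]) -> bytes:
    out = bytes(BLOCK)
    for s in shares:
        out = xor_bytes(out, s)
    return out

def retrieve(board: dict, candidate_ids: list, wanted_id) -> bytes:
    """Download the shares of EVERY candidate message, so the access
    pattern (and total bytes fetched) is the same no matter which
    message we actually want, then decode only that one locally."""
    fetched = {mid: [board[(mid, i)] for i in range(K)]
               for mid in candidate_ids}        # always M * K blocks
    return join_xor(fetched[wanted_id])

# Demo: two writers post messages; the retriever's download is the
# same size and shape whichever message it decodes.
board = {}
msgs = {"alice": b"meet at the usual place".ljust(BLOCK),
        "bob":   b"nothing to report".ljust(BLOCK)}
for mid, m in msgs.items():
    for i, share in enumerate(split_xor(m)):
        board[(mid, i)] = share

assert retrieve(board, ["alice", "bob"], "alice") == msgs["alice"]
```

The download cost is M*K blocks regardless of the target, which is exactly the trade: anonymity is bought with bandwidth.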
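
And a hedged sketch of the interleaved, constant-rate writing idea: `write_stream` below is an illustrative generator only (block placement, addressing, and the matching constant-rate reader are omitted). It shows how chunks from several messages can be shuffled together and padded out with dummy blocks so that exactly one fixed-size block leaves per tick, whether or not there is anything real to say.

```python
import random
import secrets

CHUNK = 32  # fixed block size, so block length leaks nothing

def chunks(msg: bytes) -> list[bytes]:
    """Pad a message to a multiple of CHUNK and split it up."""
    padded = msg + b"\x00" * (-len(msg) % CHUNK)
    return [padded[i:i + CHUNK] for i in range(0, len(padded), CHUNK)]

def write_stream(messages: list[bytes]):
    """Emit one block per tick, forever: chunks from all pending
    messages, interlaced in random order, then random dummy blocks
    (cover traffic) once the real data runs out, so the stream never
    visibly starts or stops."""
    pending = [c for m in messages for c in chunks(m)]
    random.SystemRandom().shuffle(pending)  # interlace across messages
    while True:
        yield pending.pop() if pending else secrets.token_bytes(CHUNK)

# One block leaves every tick whether or not a message is pending.
stream = write_stream([b"first message", b"a second, longer message"])
first_blocks = [next(stream) for _ in range(4)]
assert all(len(b) == CHUNK for b in first_blocks)
```

A watcher tapping this stream sees a steady sequence of uniform-looking, fixed-size blocks; the ping-pong pattern disappears because writes no longer coincide with having something to say.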