Usenet as a True Random Number Generator

In the past, there were some jokes about using Usenet as a source of
randomness for an RNG. Since I'm running an open news server, I decided
to check whether it would actually work.

Here is my approach, step by step:

1. Take the header of a Usenet article. Articles themselves can be
completely identical (or even empty), but at least one header field is
supposed to be unique: the Message-ID. In practice, there will be many
more unique header fields.

2. Since all incoming data is ASCII, to reduce ASCII-related patterns
in the stream of bits, take the first bit of each byte of text.

3. Put the bits on top of a stack.

4. Once the stack is equal to or bigger than X, take X bits from the
top, leaving the rest on the stack. This ensures that processing of the
data does not always start from the beginning of a header.

5. Do a simple skew correction, the same as RANDOM.ORG does on their
data.

6. Write the processed bits to a string.

7. Once the string length equals Y, create a hash of the bit string.
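
For illustration, here is a minimal sketch of steps 2-7 in Perl. This
is not the actual rngfeed script; X, Y, the von Neumann-style pair
corrector, and the choice of SHA-256 are illustrative assumptions:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Digest::SHA qw(sha256_hex);

  my $X = 512;    # bits taken from the stack per batch (placeholder)
  my $Y = 2048;   # corrected bits needed before hashing (placeholder)

  my @stack;      # step 3: the bit stack
  my $bits = '';  # step 6: corrected bits accumulate in this string

  while (my $line = <STDIN>) {  # header lines fed by the news server
      # step 2: keep one bit of each byte (here: the low-order bit)
      push @stack, map { ord($_) & 1 } split //, $line;

      # step 4: once the stack holds at least X bits, take X from the top
      next if @stack < $X;
      my @batch = splice @stack, -$X;

      # step 5: simple skew correction on non-overlapping bit pairs
      # (von Neumann: 01 -> 0, 10 -> 1; 00 and 11 are discarded)
      while (@batch >= 2) {
          my ($a, $b) = splice @batch, 0, 2;
          $bits .= $a if $a != $b;
      }

      # step 7: hash every Y corrected bits and emit the result
      print sha256_hex(substr($bits, 0, $Y, '')), "\n"
          while length($bits) >= $Y;
  }

Fed a stream of header lines on stdin, this prints one hex hash per Y
corrected bits.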

The hashing is done on data which still clearly has patterns; however,
under normal circumstances there is no way to predict where those
patterns start. To attack the RNG, an attacker would have to control
all upstream servers, or the whole Usenet network or a significant part
of it, depending on the type of attack.

I tried not to use the results of previous computations. If the data is
truly random, there is no need to try to "add randomness".

Here is the Perl program I used: http://neodome.net/rng/rngfeed
It was receiving data directly from an INN server which gets a full
text-only feed from its upstreams.

Results can be observed here: http://neodome.net/rng/
The BIN files are hashes in raw binary format; the TXT files are text
representations of the same data.

I have not done a statistical analysis of the data yet; however,
something tells me it's random.

I would also like to thank everyone who participated in the experiment
by sending articles to Usenet. Every bit counts!

-- 
Neodome
Neodome
12/12/2016 1:19:35 PM

On 12.12.16 14:19, Neodome Admin wrote:
> In the past, there were some jokes about using Usenet as a source of
> randomness for an RNG. Since I'm running an open news server, I
> decided to check whether it would actually work.
> [...]

Just controlling the router to your news server for even a short time
might allow an attacker to eliminate all randomness. I did not check
your Perl script, so my question: did you implement any precautions to
prevent such a disaster?

-- 
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=
Karl
12/12/2016 4:03:45 PM
Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
> On 12.12.16 14:19, Neodome Admin wrote:
> [...]
>
> Just controlling the router to your news server for even a short time
> might allow an attacker to eliminate all randomness. I did not check
> your Perl script, so my question: did you implement any precautions to
> prevent such a disaster?

You mean my upstream provider? I don't have one, I receive news from
several sources, just like, for example, eternal-september does. By
controlling one of them an attacker would be able to control part of
the data that my server receives. But the rest of the network is still
unpredictable. They cannot predict that, for example, Karl Frank will
decide to send a message at 10:03 on 12.12.2016, and they cannot
predict when and from whom this message will arrive at my server.

-- 
Neodome
Neodome
12/12/2016 4:44:20 PM
On 2016-12-12, Neodome Admin <admin@neodome.net> wrote:
> Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
>> [...]
>> Just controlling the router to your news server for even a short time
>> might allow an attacker to eliminate all randomness. [...]
>
> You mean my upstream provider? I don't have one, I receive news from

You have an ISP -- someone through whom all of the stuff coming to you
via the internet passes.

> [...]
William
12/12/2016 5:24:48 PM
On 12.12.16 17:44, Neodome Admin wrote:
> Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
>> [...]
>
> You mean my upstream provider? I don't have one, I receive news from
> several sources, just like, for example, eternal-september does.
> [...]

There is always a "default route" where all your traffic originates.
Just give it a try: log in to your server and type the command

netstat -nr

You should get something like

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
xxx.xxx.xxx.0   0.0.0.0         255.255.255.0   U         0 0          0 eth0
0.0.0.0         xxx.xxx.xxx.254 0.0.0.0         UG        0 0          0 eth0

where xxx.xxx.xxx.0 is the representation of your server's IP address
and xxx.xxx.xxx.254 is the default gateway through which all your
incoming and outgoing traffic is routed.

A man-in-the-middle then has the simple task of filtering and
manipulating any kind of traffic in order to control your entropy
source.



-- 
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=
Karl
12/12/2016 7:03:41 PM
On 2016-12-12, Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
> On 12.12.16 17:44, Neodome Admin wrote:
>> [...]
>
> There is always a "default route" where all your traffic originates.
> Just give it a try: log in to your server and type the command

Well, no. The default route is where outgoing traffic gets sent if
there is no more specific route for it. It does not say where the
traffic comes from.
However, your computer is connected by a single wire to something, so
everything comes and goes along that wire (wireless I will call a
virtual wire). The thing that the wire is connected to is in turn
connected to something else (e.g. in the telephone office); again it is
a single route.

Now eventually the traffic can branch out, but there is effectively a
single track from your machine to some machine out there. That other
machine is then a point at which an attacker could inject carefully
crafted packets to destroy the randomness of your numbers.

The usual way to get around this is to glean "randomness" from a bunch
of places -- clock time, keyboard presses, internet traffic, etc. Then
that is hashed sufficiently strongly (i.e. large many-to-one) so that
knowing any one source would not give any advantage to the attacker;
knowing a bunch of them could give an advantage.

Note that all of this is already done by the /dev/urandom generator,
so what you are making is a very slow and limited version of it.
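
For illustration, the shape of that idea in Perl (only a sketch of the
pooling principle, not the kernel's actual algorithm):

  use Digest::SHA qw(sha256_hex);
  use Time::HiRes qw(gettimeofday);

  # pool several weak, independent sources ...
  my @pool = (gettimeofday(), $$, times());
  # ... and hash them many-to-one, so that knowing one source alone
  # tells an attacker essentially nothing about the output
  print sha256_hex(join "\0", @pool), "\n";

  # the kernel already maintains such a pool; just read from it:
  open my $ur, '<:raw', '/dev/urandom' or die "open: $!";
  read $ur, my $buf, 16;
  print unpack('H*', $buf), "\n";  # 16 random bytes, as hex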

> [...]
William
12/12/2016 7:28:50 PM
On 12.12.16 20:28, William Unruh wrote:
> On 2016-12-12, Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
>> [...]
>
> Well, no. The default route is where outgoing traffic gets sent if
> there is no more specific route for it. It does not say where the
> traffic comes from.

The upstream provider in a datacenter always controls all incoming and
outgoing routers/switches and can divert any traffic as he feels
necessary, no matter whether your server has just one default gateway
or any number of alternate connections. Mostly the routing is realised
by VLAN routing nowadays, to my current knowledge.

https://en.wikipedia.org/wiki/Vlan_routing

And the right place for the interceptor is control and/or manipulation
of the Border Gateway Protocol (BGP):

https://en.wikipedia.org/wiki/IP_hijacking

That's what I mean by "A man-in-the-middle then has the simple task of
filtering and manipulating any kind of traffic in order to control your
entropy source."


And this is just a nice read on how to implement a
man-in-the-middle proxy:

https://blog.heckel.xyz/2013/07/01/how-to-use-mitmproxy-to-read-and-modify-https-traffic-of-your-phone/





-- 
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=
Karl
12/12/2016 8:48:23 PM
> 2. Since all incoming data is ASCII, to reduce ASCII-related patterns
> in the stream of bits, take the first bit of each byte of text.

I certainly hope this does not mean "take the high-order bit of
each byte". Given that the vast majority of the characters used
in news articles are present in 7-bit ASCII, that gives you a heavy
bias towards nearly all zeroes, even if there is an occasional
character with the high-order bit set (in UTF-8 or ISO 8859-1 or
whatever).

I'm not so sure that "take the low-order bit of each byte" is
particularly good either (but it has to be better than using the
high-order bit). Perhaps using the XOR of all the bits of the character
would get more randomness out of a news article.
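
For what it's worth, the XOR of all bits of a byte is just its parity,
e.g. in Perl (illustrative only, not taken from the rngfeed script):

  sub bit_parity {
      my $b = sprintf '%08b', ord $_[0];  # the byte as a bit string
      return ($b =~ tr/1//) % 2;          # count of 1s mod 2 = XOR of bits
  }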

No matter how you map each character into 1 bit, you're going to
have a bunch of bit patterns that occur frequently, such as whatever
patterns like \r\nMessage-ID:\s (14 bits long) or \r\nSubject:\s
(11 bits long) translate to (where \r, \n, and \s stand for ASCII
carriage return, line feed, and space, respectively). Although these
patterns will not always be byte-aligned, it still might be a problem.
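
For example, the low-order-bit pattern contributed by \r\nMessage-ID:\s
can be checked directly (a throwaway one-liner, assuming the
low-order-bit mapping):

  perl -e 'print map { ord($_) & 1 } split //, "\r\nMessage-ID: "'
  # prints 10111111111000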

gordonb
12/12/2016 11:28:05 PM
William Unruh <unruh@invalid.ca> wrote:
> On 2016-12-12, Neodome Admin <admin@neodome.net> wrote:
>> [...]
>> You mean my upstream provider? I don't have one, I receive news from
>
> You have an ISP -- someone through whom all of the stuff coming to you
> via the internet passes.

I see what you're trying to say. However, there is nothing that
prevents me from receiving my feed encrypted. Actually, just recently
there was a proposal for the NNTP protocol to allow that. Moreover,
instead of receiving articles, I can download them via an anonymous
network such as Tor, so my ISP won't even know which servers I'm
connecting to. Or I can download articles using my laptop and free
Wi-Fi from my naive neighbor, and then feed them to the server through
an encrypted tunnel.

On a serious note, Usenet is a publicly available block of data. There
are almost endless ways to access it, and then to make sure that you
are accessing the right data by using several different sources. I
won't even have to write any code to do that, since it's already been
written by other people.

Instead of trying to compromise an SSL or Tor connection, it might be
cheaper and faster for an attacker to find me, break my legs, and
politely ask me to rewrite the code the way they want. *That* would be
a successful attack.


-- 
Neodome
Neodome
12/13/2016 12:01:55 AM
Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
> On 12.12.16 17:44, Neodome Admin wrote:
>> [...]
>
> There is always a "default route" where all your traffic originates.

Please see my reply to William.

-- 
Neodome
Neodome
12/13/2016 12:02:02 AM
Gordon Burditt <gordonb.orbdt@burditt.org> wrote:
>> 2. Since all incoming data is ASCII, to reduce ASCII-related
>> patterns in the stream of bits, take the first bit of each byte of
>> text.
> 
> I certainly hope this does not mean "take the high-order bit of
> each byte".  Given that the vast majority of the characters used
> in news articles are present in 7-bit ASCII, that gives you a heavy
> bias towards nearly all zeroes, even if there is an occasional
> character with the high-order bit on (in UTF-8 or ISO8859-1 or
> whatever).

You are correct. I was talking about the low-order bit; you can see it
in the code.

> I'm not so sure that "take the low-order bit of each byte" is
> particularly good either (but it has to be better than using the
> high-order bit). Perhaps using the XOR of all the bits of the
> character would get more randomness out of a news article.

I was thinking about it, but I'm not sure XOR would help.

> No matter how you map each character into 1 bit, you're going to
> have a bunch of bit patterns that occur frequently, such as whatever
> patterns like \r\nMessage-ID:\s (14 bits long) or \r\nSubject:\s
> (11 bits long) translate to (where \r, \n, and \s stand for ASCII
> carriage return, line feed, and space, respectively). Although these
> patterns will not always be byte-aligned, it still might be a
> problem.

Again, you are correct. I mentioned it in my first message: even after
skew correction there are some patterns, mostly header field names.
However, at the point when the hashing is done, there is no way to
predict where those patterns will be in the input data. An attacker
might assume that somewhere in the middle of the input data there was
some particular bit pattern, but would it really help him? It would if
he knew the internal state of the generator, but if he knows that, then
the game is over anyway.

-- 
Neodome
Neodome
12/13/2016 12:20:39 AM
On 13.12.16 01:02, Neodome Admin wrote:
> Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
>> [...]
>> There is always a "default route" where all your traffic originates.
>
> Please see my reply to William.

It seems you missed the point. The bottleneck is your default gateway,
the single entry point. If an interceptor can control the bit stream
that *has* to pass through this gateway, he has complete control of
what you will receive, no matter how many different sources you are
trying to access.




-- 
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=
Karl
12/13/2016 12:25:46 AM
In sci.crypt Neodome Admin <admin@neodome.net> wrote:
> On a serious note, Usenet is a publicly available block of data.

Which is what makes it somewhat unsuitable as a randomness source.

Rich
12/13/2016 12:28:40 AM
William Unruh <unruh@invalid.ca> wrote:
> On 2016-12-12, Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
>> [...]
>
> [...]
> Note that all of this is already done by the /dev/urandom generator,
> so what you are making is a very slow and limited version of it.

So, /dev/urandom is a True Random Number Generator?


-- 
Neodome
Neodome
12/13/2016 12:45:10 AM
Rich <rich@example.invalid> wrote:
> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>> On a serious note, Usenet is publicly available block of data.
> 
> Which is what makes it somewhat unsuitable as a randomness source.

I made the results of my experiment publicly available. Would you like
to do a statistical analysis and prove that the data is not random?

-- 
Neodome
Neodome
12/13/2016 12:51:34 AM
Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
> On 13.12.16 01:02, Neodome Admin wrote:
>> [...]
>
> It seems you missed the point. The bottleneck is your default gateway,
> the single entry point. If an interceptor can control the bit stream
> that *has* to pass through this gateway, he has complete control of
> what you will receive, no matter how many different sources you are
> trying to access.

If it were that easy, we would not be using SSL right now.


-- 
Neodome
Neodome
12/13/2016 12:51:41 AM
On 2016-12-13, Neodome Admin <admin@neodome.net> wrote:
> William Unruh <unruh@invalid.ca> wrote:
>> [...]
>> Note that all of this is already done by the /dev/urandom generator,
>> so what you are making is a very slow and limited version of it.
>
> So, /dev/urandom is a True Random Number Generator?

It is much better than the above one.  No idea what you mean by a "True
Random Number Generator".

0
William
12/13/2016 4:23:24 AM
In sci.crypt Neodome Admin <admin@neodome.net> wrote:
> Rich <rich@example.invalid> wrote:
>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>> On a serious note, Usenet is publicly available block of data.
>> 
>> Which is what makes it somewhat unsuitable as a randomness source.
> 
> I made results of my experiment publicly available. Would you like to do
> statistical analysis and prove that the data is not random?

Not for free, no.

But if you consider the meaning of "randomness", i.e. the "lack of
predictability" and then think about "publicly available block of
data" you might reach a reasonable conclusion.

Using a "publicly available block of data" does not in any way provide
a "lack of predictability" to an adversary, because it (the publicly
available block of data) is also available to the adversary.

Therefore, usenet is "somewhat unsuitable as a randomness source".

Note that "very hard to predict" is not identical to a "lack of
predictability".

0
Rich
12/13/2016 4:42:52 AM
On 13.12.16 01:51, Neodome Admin wrote:
> Karl.Frank<Karl.Frank@Freecx.co.uk>  wrote:

>> It seems you miss the point. The bottleneck is your default gateway, the
>> single entry point. If an interceptor can control the bit stream that
>> *has* to pass through this gateway he has complete control of what you
>> will receive, no matter how many different sources you are trying to
>> access.
>
> If it were that easy, we would not be using SSL right now.
>

There is no security from SSL if an interceptor is determined to
interfere. I assume you didn't follow the link to the article about
MITMPROXY:

https://blog.heckel.xyz/2013/07/01/how-to-use-mitmproxy-to-read-and-modify-https-traffic-of-your-phone/

Maybe these two readings will help you to understand the problem

http://www.zdnet.com/article/how-the-nsa-and-your-boss-can-intercept-and-break-ssl/

https://www.wilderssecurity.com/threads/bluecoat-known-for-ssl-mitm-now-has-a-ca-signed-by-symantec.386121/




-- 
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=
0
Karl
12/13/2016 10:44:32 AM
William Unruh suggested:

[...]

> No idea what you mean by a "True Random Number Generator".

Maybe he means this:

https://en.wikipedia.org/wiki/Hardware_random_number_generator

But I don't think USENET headers meet the criteria.
0
FromTheRafters
12/13/2016 11:22:30 AM
In sci.crypt FromTheRafters <erratic@nomail.afraid.org> wrote:
> William Unruh suggested:
> 
> [...]
> 
>> No idea what you mean by a "True Random Number Generator".
> 
> Maybe he means this:
> 
> https://en.wikipedia.org/wiki/Hardware_random_number_generator
> 
> But I don't think USENET headers meet the criteria.

There are far too many predictable patterns in USENET headers to meet
those criteria.

Therefore my assertion in a prior posting that USENET is somewhat
unsuitable as a randomness source.

0
Rich
12/13/2016 11:34:34 AM
In sci.crypt Neodome Admin <admin@neodome.net> wrote:
> Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
>> 
>> It seems you miss the point. The bottleneck is your default gateway,
>> the single entry point.  If an interceptor can control the bit
>> stream that *has* to pass through this gateway he has complete
>> control of what you will receive, no matter how many different
>> sources you are trying to access.
> 
> If it were that easy, we would not be using SSL right now.

Unless you use SSL client certificates (i.e., certificates on both
sides of the connection) and require that the connection be dropped if
either side's cert does not match, SSL is vulnerable to a
man-in-the-middle attack.  That "man in the middle" has to be present
when the link is first brought up, and the client side has to not be
doing something like verifying that the cert it gets now matches the
cert it got last time from "site X" [imperfect, but better than
nothing].  Given that (and none of the above is the default), a proxy
in the middle can capture all your SSL traffic, all while you see the
green "lock" icon and think you are secure.

This is how corporate monitoring proxies like BlueCoat operate.  They
MITM the SSL channels so they get to see everything inside the
channel, while you are none the wiser.

And, if there is a proxy that is decrypting and scanning your traffic
inside the SSL connection, that same proxy can also change the traffic
to anything it wishes you to receive.

0
Rich
12/13/2016 11:42:16 AM
On 2016-12-13, Rich <rich@example.invalid> wrote:
> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>> Rich <rich@example.invalid> wrote:
>>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>>> On a serious note, Usenet is publicly available block of data.
>>> 
>>> Which is what makes it somewhat unsuitable as a randomness source.
>> 
>> I made results of my experiment publicly available. Would you like to do
>> statistical analysis and prove that the data is not random?
>
> Not for free, no.
>
> But if you consider the meaning of "randomness", i.e. the "lack of
> predictability" and then think about "publicly available block of
> data" you might reach a reasonable conclusion.
>
> Using a "publicly available block of data" does not in any way provide
> a "lack of predictability" to an adversary, because it (the publicly
> available block of data) is also available to the adversary.
>
> Therefore, usenet is "somewhat unsuitable as a randomness source".
>
> Note that "very hard to predict" is not identical to a "lack of
> predictability".

Well it is, for a suitable value of "very". After all, all password
hashing schemes rely on this being the case. One does not need exactly
zero predictability; P = 1/Ackermann(5,5) is more than sufficient. Now,
whether or not a single source of randomness is sufficient to give an
appropriately low level of predictability is perhaps open to question.
That is why urandom, or random, uses a number of sources of potential
randomness. It makes that "very" much, much larger. And if the machine
is so thoroughly compromised that all sources are controllable by the
adversary, you have far, far bigger worries.


>
0
William
12/13/2016 3:52:12 PM
On 2016-12-13, FromTheRafters <erratic@nomail.afraid.org> wrote:
> William Unruh suggested:
>
> [...]
>
>> No idea what you mean by a "True Random Number Generator".
>
> Maybe he means this:
>
> https://en.wikipedia.org/wiki/Hardware_random_number_generator
>
> But I don't think USENET headers meet the criteria.

Except that no hardware random number generator meets the criterion of
non-predictability. They are all biased in one way or another, some
known, some unknown. Usenet headers in the raw certainly have a lot of
biases. Hashing them could well remove most of those biases, but there
may still be biases from long-time correlations in the stream.
0
William
12/13/2016 3:57:20 PM
In sci.crypt William Unruh <unruh@invalid.ca> wrote:
> On 2016-12-13, Rich <rich@example.invalid> wrote:
>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>> Rich <rich@example.invalid> wrote:
>>>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>>>> On a serious note, Usenet is publicly available block of data.
>>>> 
>>>> Which is what makes it somewhat unsuitable as a randomness source.
>>> 
>>> I made results of my experiment publicly available. Would you like to do
>>> statistical analysis and prove that the data is not random?
>>
>> Not for free, no.
>>
>> But if you consider the meaning of "randomness", i.e. the "lack of
>> predictability" and then think about "publicly available block of
>> data" you might reach a reasonable conclusion.
>>
>> Using a "publicly available block of data" does not in any way provide
>> a "lack of predictability" to an adversary, because it (the publicly
>> available block of data) is also available to the adversary.
>>
>> Therefore, usenet is "somewhat unsuitable as a randomness source".
>>
>> Note that "very hard to predict" is not identical to a "lack of
>> predictability".
> 
> Well it is, for a suitable value of "very". After all, all password
> hashing schemes rely on this being the case. One does not need exactly
> zero predictability; P = 1/Ackermann(5,5) is more than sufficient. Now,
> whether or not a single source of randomness is sufficient to give an
> appropriately low level of predictability is perhaps open to question.
> That is why urandom, or random, uses a number of sources of potential
> randomness. It makes that "very" much, much larger. And if the machine
> is so thoroughly compromised that all sources are controllable by the
> adversary, you have far, far bigger worries.

Yep.  Which is why I said "somewhat unsuitable".  As a single source of
entropy, Usenet content may not be at all suitable as a randomness
source.  And if memory serves, the starting post at least implied using
usenet content as a single source of entropy.

But as one of many different sources of entropy all mixed together, it
may be ok, provided it is not a significant majority of the input
entropy such that a determined and/or well-funded attacker can tweak
things to gain benefit for themselves.
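
A minimal sketch of what "mixed together" can mean in practice, assuming
SHA-256 as the combiner (this is an illustration, not the kernel's actual
pool construction, and the inputs below are placeholders):

  use strict;
  use warnings;
  use Time::HiRes ();
  use Digest::SHA qw(sha256_hex);

  # Placeholder for bits gleaned from the news feed.
  my $usenet_bits = '0110100101011100';

  # A high-resolution timestamp as a second, independent source.
  my $timing = pack('d', Time::HiRes::time());

  # Some local noise as a third source.
  open my $fh, '<', '/dev/urandom' or die "urandom: $!";
  read($fh, my $local, 32) == 32 or die "short read";
  close $fh;

  # Hash the concatenation: an attacker who knows only one of the
  # inputs gains no usable handle on the 256-bit output.
  print sha256_hex($usenet_bits . $timing . $local), "\n";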

But as you are well aware, very few of the questions that arrive here,
like the one that started this thread, have binary answers.  Which is
why what often happens is that one or more of us pose questions back of
the form "define hard" and/or "define your adversary".  And those all
too often shock the newbie who thinks he has a wonderful system and was
expecting a "yes - secure" answer only.

0
Rich
12/13/2016 5:08:21 PM
William Unruh <unruh@invalid.ca> wrote:
> On 2016-12-13, Neodome Admin <admin@neodome.net> wrote:
>> William Unruh <unruh@invalid.ca> wrote:
>>> On 2016-12-12, Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
>>>> <snip>
>>>> 
>>>> There is always a "default route" where all your traffic originates from.
>>>> Just give yourself a try and log in to your server and type the command
>>> 
>>> Well, no. The default route is where outgoing traffic gets sent if there
>>> is no more specific route to send it. It does not say where the traffic
>>> comes from.
>>> However, your computer is connected by a single wire to something. So
>>> everything comes and goes along that wire (wireless I will call a virtual
>>> wire). That thing that the wire is connected to is then connected to
>>> something else (e.g. in the telephone office); again it is a single route.
>>> 
>>> Now eventually stuff can branch out, but there is effectively a single
>>> track from your machine to some machine out there. That other machine is
>>> then a point at which an attacker could inject carefully crafted packets
>>> to destroy the randomness of your numbers.
>>> 
>>> The usual way to get around this is to glean "randomness" from a bunch of
>>> places: clock time, keyboard presses, internet traffic, etc. Then that
>>> is hashed sufficiently strongly (i.e. large many-to-one) so that knowing
>>> any one source would not give any advantage to the attacker. Knowing a
>>> bunch of them could give an advantage.
>>> 
>>> Note that all of this is already done in the /dev/urandom
>>> generator, so what you are making is a very slow and limited version of
>>> this.
>> 
>> So, /dev/urandom is a True Random Number Generator?
> 
> It is much better than the above one.

Proof required.

> No idea what you mean by a "True
> Random Number Generator".

Please see the answer of FromTheRafters.


-- 
Neodome
0
Neodome
12/13/2016 9:04:04 PM
Rich <rich@example.invalid> wrote:
> In sci.crypt FromTheRafters <erratic@nomail.afraid.org> wrote:
>> William Unruh suggested:
>> 
>> [...]
>> 
>>> No idea what you mean by a "True Random Number Generator".
>> 
>> Maybe he means this:
>> 
>> https://en.wikipedia.org/wiki/Hardware_random_number_generator
>> 
>> But I don't think USENET headers meet the criteria.
> 
> There are far too many predictable patterns in USENET headers to meet
> that criteria.

There are far too many predictable patterns in any random data source. If
you wait long enough, it will give you byte 0xFF, or byte 0x4B, or some
other byte. Is it predictable?

> Therefore my assertion in a prior posting that USENET is somewhat
> unsuitable as a randomness source.
> 
> 



-- 
Neodome
0
Neodome
12/13/2016 9:04:11 PM
Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
> On 13.12.16 01:51, Neodome Admin wrote:
>> Karl.Frank<Karl.Frank@Freecx.co.uk>  wrote:
> 
>>> It seems you miss the point. The bottleneck is your default gateway, the
>>> single entry point. If an interceptor can control the bit stream that
>>> *has* to pass through this gateway he has complete control of what you
>>> will receive, no matter how many different sources you are trying to
>>> access.
>> 
>> If it were that easy, we would not be using SSL right now.
>> 
> 
> There is no security from SSL if an interceptor is determined to
> interfere. I assume you didn't follow the link to the article about
> MITMPROXY

No, I didn't. As I said, the possibilities for obtaining Usenet articles
are almost endless. Unless you can break any and all encryption schemes in
the world, your argument is invalid.

You might have much better chances with sending your own articles to my
server instead of trying to intercept and modify someone else's.


-- 
Neodome
0
Neodome
12/13/2016 9:04:21 PM
Rich <rich@example.invalid> wrote:
> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>> Rich <rich@example.invalid> wrote:
>>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>>> On a serious note, Usenet is publicly available block of data.
>>> 
>>> Which is what makes it somewhat unsuitable as a randomness source.
>> 
>> I made results of my experiment publicly available. Would you like to do
>> statistical analysis and prove that the data is not random?
> 
> Not for free, no.

:)

> But if you consider the meaning of "randomness", i.e. the "lack of
> predictability" and then think about "publicly available block of
> data" you might reach a reasonable conclusion.
> 
> Using a "publicly available block of data" does not in any way provide
> a "lack of predictability" to an adversary, because it (the publicly
> available block of data) is also available to the adversary.

Prove it then. What's the problem?

> Therefore, usenet is "somewhat unsuitable as a randomness source".
> 
> Note that "very hard to predict" is not identical to a "lack of
> predictability".
> 
> 

I'm not sure if I ever used the phrase "very hard to predict". I've said
"there is no way to predict".

-- 
Neodome
0
Neodome
12/13/2016 9:04:30 PM
On 13/12/16 21:04, Neodome Admin wrote:
> Rich <rich@example.invalid> wrote:
<snip>
>>
>> Using a "publicly available block of data" does not in any way provide
>> a "lack of predictability" to an adversary, because it (the publicly
>> available block of data) is also available to the adversary.
>
> Prove it then. What's the problem?

The problem is that the onus is on you, as the proposer of the new 
scheme, to demonstrate that it is sound.

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
12/13/2016 9:09:08 PM
Richard Heathfield <rjh@cpax.org.uk> wrote:
> On 13/12/16 21:04, Neodome Admin wrote:
>> Rich <rich@example.invalid> wrote:
> <snip>
>>> 
>>> Using a "publicly available block of data" does not in any way provide
>>> a "lack of predictability" to an adversary, because it (the publicly
>>> available block of data) is also available to the adversary.
>> 
>> Prove it then. What's the problem?
> 
> The problem is that the onus is on you, as the proposer of the new 
> scheme, to demonstrate that it is sound.

I already did. The generated random data is online, and anyone is free to
download it and make sure it really is random. Otherwise, they are free to
point out any patterns.

It seems that most of the people who try to argue with me don't know how
Usenet works. Yes, it is a publicly available block of data, but it's
generated in real time, and an attacker doesn't know what I'm receiving and
what I'm processing *right now*. It's like using the sound of a waterfall
as a source of randomness. You might try to affect it by putting something
in the water, but unless you put your microphone next to mine, you don't
know what I'm recording right now.

-- 
Neodome
0
Neodome
12/13/2016 9:50:51 PM
On 13/12/16 21:50, Neodome Admin wrote:
> Richard Heathfield <rjh@cpax.org.uk> wrote:
>> On 13/12/16 21:04, Neodome Admin wrote:
>>> Rich <rich@example.invalid> wrote:
>> <snip>
>>>>
>>>> Using a "publicly available block of data" does not in any way provide
>>>> a "lack of predictability" to an adversary, because it (the publicly
>>>> available block of data) is also available to the adversary.
>>>
>>> Prove it then. What's the problem?
>>
>> The problem is that the onus is on you, as the proposer of the new
>> scheme, to demonstrate that it is sound.
>
> I already did. The generated random data is online, and anyone is free to
> download it and make sure it really is random.

Producing data is not the same as demonstrating that the data meet the 
required criteria. The onus is on you to do that. For example, you could 
publish the results of your Diehard tests. (You may already have done 
that but, if so, I missed it.)

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
12/13/2016 11:01:00 PM
In sci.crypt Neodome Admin <admin@neodome.net> wrote:
> Rich <rich@example.invalid> wrote:
>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>> Rich <rich@example.invalid> wrote:
>>>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>>>> On a serious note, Usenet is publicly available block of data.
>>>> 
>>>> Which is what makes it somewhat unsuitable as a randomness source.
>>> 
>>> I made results of my experiment publicly available. Would you like to do
>>> statistical analysis and prove that the data is not random?
>> 
>> Not for free, no.
> 
> :)
> 
>> But if you consider the meaning of "randomness", i.e. the "lack of
>> predictability" and then think about "publicly available block of
>> data" you might reach a reasonable conclusion.
>> 
>> Using a "publicly available block of data" does not in any way provide
>> a "lack of predictability" to an adversary, because it (the publicly
>> available block of data) is also available to the adversary.
> 
> Prove it then. What's the problem?

I don't have to.  That sentence is enough 'proof', provided the reader
is open-minded enough to follow it and understand its meaning.

You, however, are wedded to your scheme, and are therefore biased,
which means you are not looking at the sentence with an open enough
mindset.

0
Rich
12/13/2016 11:23:40 PM
In sci.crypt Neodome Admin <admin@neodome.net> wrote:
> Richard Heathfield <rjh@cpax.org.uk> wrote:
>> On 13/12/16 21:04, Neodome Admin wrote:
>>> Rich <rich@example.invalid> wrote:
>> <snip>
>>>> 
>>>> Using a "publicly available block of data" does not in any way provide
>>>> a "lack of predictability" to an adversary, because it (the publicly
>>>> available block of data) is also available to the adversary.
>>> 
>>> Prove it then. What's the problem?
>> 
>> The problem is that the onus is on you, as the proposer of the new 
>> scheme, to demonstrate that it is sound.
> 
> I already did. The generated random data is online, and anyone is free to
> download it and make sure it really is random.

This applies: http://dilbert.com/strip/2001-10-25

0
Rich
12/13/2016 11:24:54 PM
In sci.crypt Neodome Admin <admin@neodome.net> wrote:
> Rich <rich@example.invalid> wrote:
>> In sci.crypt FromTheRafters <erratic@nomail.afraid.org> wrote:
>>> William Unruh suggested:
>>> 
>>> [...]
>>> 
>>>> No idea what you mean by a "True Random Number Generator".
>>> 
>>> Maybe he means this:
>>> 
>>> https://en.wikipedia.org/wiki/Hardware_random_number_generator
>>> 
>>> But I don't think USENET headers meet the criteria.
>> 
>> There are far too many predictable patterns in USENET headers to meet
>> those criteria.
> 
> There are far too many predictable patterns in any random data source. If
> you wait long enough, it will give you byte 0xFF, or byte 0x4B, or some
> other byte. Is it predictable?

That's not a 'predictable pattern'.

In this thread alone, there are at least the following predictable
patterns in the headers:

  \r\nFrom: Neodome Admin <admin@neodome.net>\r\n
    (plus all the other 'from' lines, omitted)
  \r\nNewsgroups: sci.crypt,sci.crypt.random-numbers,comp.lang.perl.misc\r\n
  \r\nSubject: Re: Usenet as a True Random Number Generator\r\n
  \r\nOrganization: Neodome\r\n
    (plus other 'orgs' used by the other posters)
  \r\nMime-Version: 1.0\r\n
  \r\nContent-Type: text/plain; charset=UTF-8\r\n
  \r\nContent-Transfer-Encoding: 8bit\r\n
  \r\nUser-Agent: NewsTap/5.2.1 (iPhone/iPod Touch)\r\n

And some of those form even longer predictable patterns (i.e., the From,
Newsgroups, and Subject headers are adjacent in my copy here, and the
Mime-Version, Content-Type, and Content-Transfer-Encoding headers are also
adjacent).

Plus all the quoted text (which is nearly verbatim identical save for
an added "> " on the left).
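
The bias those patterns leave behind is easy to measure, too. A minimal
sketch, assuming the scheme takes the low bit of each byte of text (the
header line used as input here is just an example):

  use strict;
  use warnings;

  my $sample = "From: Neodome Admin <admin\@neodome.net>\r\n";
  my ($ones, $total) = (0, 0);
  for my $byte (split //, $sample) {
      $ones  += ord($byte) & 1;    # the low bit of each byte
      $total += 1;
  }
  printf "%d of %d low bits set (%.1f%%; unbiased would be ~50%%)\n",
      $ones, $total, 100 * $ones / $total;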

0
Rich
12/13/2016 11:29:47 PM
On 13.12.16 22:04, Neodome Admin wrote:
> Karl.Frank<Karl.Frank@Freecx.co.uk>  wrote:
>> <snip>
>>
>> There is no security from SSL if an interceptor is determined to
>> interfere. I assume you didn't follow the link to the article about
>> MITMPROXY
>
> No, I didn't. As I said, the possibilities for obtaining Usenet articles
> are almost endless. Unless you can break any and all encryption schemes in
> the world, your argument is invalid.
>
> You might have much better chances with sending your own articles to my
> server instead of trying to intercept and modify someone else's.
>

I see you don't even understand the basics of how network connections work.




-- 
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=
0
Karl
12/14/2016 1:28:32 PM
On 2016-12-13, Rich <rich@example.invalid> wrote:
> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>> Rich <rich@example.invalid> wrote:
>>> In sci.crypt FromTheRafters <erratic@nomail.afraid.org> wrote:
>>>> William Unruh suggested:
>>>> 
>>>> [...]
>>>> 
>>>>> No idea what you mean by a "True Random Number Generator".
>>>> 
>>>> Maybe he means this:
>>>> 
>>>> https://en.wikipedia.org/wiki/Hardware_random_number_generator
>>>> 
>>>> But I don't think USENET headers meet the criteria.
>>> 
>>> There are far too many predictable patterns in USENET headers to meet
>>> those criteria.
>> 
>> There are far too many predictable patterns in any random data source. If
>> you wait long enough, it will give you byte 0xFF, or byte 0x4B, or some
>> other byte. Is it predictable?

A predictable pattern is something where, given some set of bytes from the
pattern, you can predict with higher probability than 1/256 what the
next (or any other specific) byte will be.
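
To make that concrete, here is a toy zeroth-order version of the test: it
ignores context and just asks how often the single best guess would win on
whatever sample is fed in on stdin (illustrative only):

  use strict;
  use warnings;
  use List::Util qw(max);

  my $sample = do { local $/; <STDIN> };
  die "no input\n" unless defined $sample && length $sample;

  my %count;
  $count{$_}++ for split //, $sample;

  # The best context-free guess is the most frequent byte; if its
  # empirical probability beats 1/256, the data shows a predictable
  # pattern in the sense above.
  my $p = max(values %count) / length($sample);
  printf "best guess succeeds with p = %.4f (baseline 1/256 = %.4f)\n",
      $p, 1 / 256;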
0
William
12/14/2016 3:49:02 PM
Rich <rich@example.invalid> wrote:
> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>> Rich <rich@example.invalid> wrote:
>>> In sci.crypt FromTheRafters <erratic@nomail.afraid.org> wrote:
>>>> William Unruh suggested:
>>>> 
>>>> [...]
>>>> 
>>>>> No idea what you mean by a "True Random Number Generator".
>>>> 
>>>> Maybe he means this:
>>>> 
>>>> https://en.wikipedia.org/wiki/Hardware_random_number_generator
>>>> 
>>>> But I don't think USENET headers meet the criteria.
>>> 
>>> There are far too many predictable patterns in USENET headers to meet
>>> those criteria.
>> 
>> There are far too many predictable patterns in any random data source. If
>> you wait long enough, it will give you byte 0xFF, or byte 0x4B, or some
>> other byte. Is it predictable?
> 
> That's not a 'predictable pattern'.
> 
> In this thread alone, there are at least the following predictable
> patterns in the headers:
> 
>   \r\nFrom: Neodome Admin <admin@neodome.net>\r\n
>     (plus all the other 'from' lines, omitted)
>   \r\nNewsgroups: sci.crypt,sci.crypt.random-numbers,comp.lang.perl.misc\r\n
>   \r\nSubject: Re: Usenet as a True Random Number Generator\r\n
>   \r\nOrganization: Neodome\r\n
>     (plus other 'orgs' used by the other posters)
>   \r\nMime-Version: 1.0\r\n
>   \r\nContent-Type: text/plain; charset=UTF-8\r\n
>   \r\nContent-Transfer-Encoding: 8bit\r\n
>   \r\nUser-Agent: NewsTap/5.2.1 (iPhone/iPod Touch)\r\n
> 
> And, some of those form even longer predictable patterns (i.e., the
> from and newgroups and subject headers are adjacent in my copy here. 
> Plus the Mime-Version, Content-Type, and Content-Transfer-Encoding are
> also adjacent.

Did you read my first message? Headers are treated as an endless stream of
data, and hashing is done on a window in the middle of that stream. You, as
the attacker, might assume that somewhere in that window there was some
particular pattern of bits (left there by the string "MIME-Version: 1.0"),
but you don't know where. How would that knowledge help you? I don't see a
way unless you can restore the original data by looking at the hash, which
is, as far as I know, impossible.
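
Roughly, in code (a minimal sketch of the stream/window/hash idea only,
not the actual rngfeed script; the window size, the low-bit extraction,
and SHA-256 are assumptions here, and the skew-correction step is
omitted):

  use strict;
  use warnings;
  use Digest::SHA qw(sha256_hex);

  my $bits = '';    # bits accumulated from incoming headers
  my $X    = 512;   # window size; an arbitrary choice here

  while (my $line = <STDIN>) {    # header lines arrive as a stream
      $bits .= ord($_) & 1 for split //, $line;
      while (length($bits) >= $X) {
          # Take X bits off the front and leave the rest buffered, so
          # the hashed window never lines up with the start of a header.
          my $window = substr($bits, 0, $X, '');
          print sha256_hex($window), "\n";
      }
  }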

> Plus all the quoted text (which is nearly verbatim identical save for
> an added "> " on the left).

I'm not processing article text.

-- 
Neodome
0
Neodome
12/14/2016 10:02:05 PM
Rich <rich@example.invalid> wrote:
> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>> Rich <rich@example.invalid> wrote:
>>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>>> Rich <rich@example.invalid> wrote:
>>>>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>>>>> On a serious note, Usenet is publicly available block of data.
>>>>> 
>>>>> Which is what makes it somewhat unsuitable as a randomness source.
>>>> 
>>>> I made results of my experiment publicly available. Would you like to do
>>>> statistical analysis and prove that the data is not random?
>>> 
>>> Not for free, no.
>> 
>> :)
>> 
>>> But if you consider the meaning of "randomness", i.e. the "lack of
>>> predictability" and then think about "publicly available block of
>>> data" you might reach a reasonable conclusion.
>>> 
>>> Using a "publicly available block of data" does not in any way provide
>>> a "lack of predictability" to an adversary, because it (the publicly
>>> available block of data) is also available to the adversary.
>> 
>> Prove it then. What's the problem?
> 
> I don't have to.  That sentence is enough 'proof', provided the reader
> is open-minded enough to follow it and understand its meaning.
> 
> You, however, are wedded to your scheme, and are therefore biased,
> which means you are not looking at the sentence with an open enough
> mindset.

...This game can be played by both parties, you know?

No, it's you who are biased. Anyone with an open enough mindset will see
that from your usage of words such as "newbie".

-- 
Neodome
0
Neodome
12/14/2016 10:02:14 PM
Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
> On 13.12.16 22:04, Neodome Admin wrote:
>> Karl.Frank<Karl.Frank@Freecx.co.uk>  wrote:
>>> <snip>
>>> There is no security from SSL if an interceptor is determined to
>>> interfere. I assume you didn't follow the link to the article about
>>> MITMPROXY
>> 
>> No, I didn't. As I said, the possibilities for obtaining Usenet articles
>> are almost endless. Unless you can break any and all encryption schemes in
>> the world, your argument is invalid.
>> 
>> You might have much better chances with sending your own articles to my
>> server instead of trying to intercept and modify someone else's.
>> 
> 
> I see you don't even understand the basics of how network connections work.

That's not an argument.


-- 
Neodome
0
Neodome
12/14/2016 10:02:21 PM
Rich <rich@example.invalid> wrote:
> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>> Richard Heathfield <rjh@cpax.org.uk> wrote:
>>> On 13/12/16 21:04, Neodome Admin wrote:
>>>> Rich <rich@example.invalid> wrote:
>>> <snip>
>>>>> 
>>>>> Using a "publicly available block of data" does not in any way provide
>>>>> a "lack of predictability" to an adversary, because it (the publicly
>>>>> available block of data) is also available to the adversary.
>>>> 
>>>> Prove it then. What's the problem?
>>> 
>>> The problem is that the onus is on you, as the proposer of the new 
>>> scheme, to demonstrate that it is sound.
>> 
>> I already did. Generated random data is online, and anyone is free to
>> download it and make sure it really is random.
> 
> This applies: http://dilbert.com/strip/2001-10-25

I've seen that comic strip before. :)

That's why I saved data generated for over a month before I wrote a message
to sci.crypt.

-- 
Neodome
0
Neodome
12/14/2016 10:02:28 PM
Richard Heathfield <rjh@cpax.org.uk> wrote:
> On 13/12/16 21:50, Neodome Admin wrote:
>> Richard Heathfield <rjh@cpax.org.uk> wrote:
>>> On 13/12/16 21:04, Neodome Admin wrote:
>>>> Rich <rich@example.invalid> wrote:
>>> <snip>
>>>>> 
>>>>> Using a "publicly available block of data" does not in any way provide
>>>>> a "lack of predictability" to an adversary, because it (the publicly
>>>>> available block of data) is also available to the adversary.
>>>> 
>>>> Prove it then. What's the problem?
>>> 
>>> The problem is that the onus is on you, as the proposer of the new
>>> scheme, to demonstrate that it is sound.
>> 
>> I already did. Generated random data is online, and anyone is free to
>> download it and make sure it really is random.
> 
> Producing data is not the same as demonstrating that the data meet the 
> required criteria. The onus is on you to do that. For example, you could 
> publish the results of your Diehard tests. (You may already have done 
> that but, if so, I missed it.)

I did not do that yet. The main reason is the fact that Usenet is not
endless. As you can see, I produced only about 150-200 kilobytes of data
per day. In other words, to produce any quantity of random data, I might
have to use my data as a seed for some CRNG (as RANDOM.ORG does). I'll
accept any advice on that.

How much data is required to successfully pass Diehard test?
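
For what it's worth, the seeding idea could be sketched like this (a toy
counter-mode construction over SHA-256; not a vetted DRBG design, and not
necessarily what RANDOM.ORG does):

  use strict;
  use warnings;
  use Digest::SHA qw(sha256);

  # Expand a short seed (e.g. 32 bytes of the Usenet-derived data)
  # into as many bytes as needed by hashing seed || counter.
  sub expand {
      my ($seed, $nbytes) = @_;
      my ($out, $ctr) = ('', 0);
      $out .= sha256($seed . pack('N', $ctr++))
          while length($out) < $nbytes;
      return substr($out, 0, $nbytes);
  }

  my $seed   = "0123456789abcdef0123456789abcdef";   # placeholder seed
  my $stream = expand($seed, 1024);                  # 1 KB of output
  print length($stream), " bytes generated\n";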

-- 
Neodome
0
Neodome
12/14/2016 10:18:07 PM
[Note - dropped crosspost to comp.lang.perl.misc.  The discussion has
nothing to do with perl]

In sci.crypt Neodome Admin <admin@neodome.net> wrote:
> Rich <rich@example.invalid> wrote:
>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>> Rich <rich@example.invalid> wrote:
>>>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>>>> Rich <rich@example.invalid> wrote:
>>>>>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>>>>>> On a serious note, Usenet is publicly available block of data.
>>>>>> 
>>>>>> Which is what makes it somewhat unsuitable as a randomness
>>>>>> source.
>>>>> 
>>>>> I made results of my experiment publicly available. Would you
>>>>> like to do statistical analysis and prove that the data is not
>>>>> random?
>>>> 
>>>> Not for free, no.
>>> 
>>> :)
>>> 
>>>> But if you consider the meaning of "randomness", i.e. the "lack of
>>>> predictability" and then think about "publicly available block of
>>>> data" you might reach a reasonable conclusion.
>>>> 
>>>> Using a "publicly available block of data" does not in any way
>>>> provide a "lack of predictability" to an adversary, because it
>>>> (the publicly available block of data) is also available to the
>>>> adversary.
>>> 
>>> Prove it then. What's the problem?
>> 
>> I don't have to.  That sentence is enough 'proof', provided the
>> reader is open-minded enough to follow it and understand its
>> meaning.
>> 
>> You, however, are wedded to your scheme, and are therefore biased,
>> which means you are not looking at the sentence with an open enough
>> mindset.
> 
> ...This game can be played by both parties, you know?
> 
> No, it's you who are biased. The one with open enough mindset will
> see that by your usage of words such as "newbie".

As far as I can recall, we've never seen a post from you on sci.crypt
before.  Therefore, given the definition of "newbie":

https://www.merriam-webster.com/dictionary/newbie


    Definition of newbie

    :  newcomer; especially :  a newcomer to cyberspace

the usage is proper.  You are a newcomer to sci.crypt, therefore, a
"newbie".

0
Rich
12/14/2016 10:42:00 PM
On 14/12/16 22:18, Neodome Admin wrote:
<snip>

> How much data is required to successfully pass Diehard test?

Diehard seems to have died. Presumably hard.

All the links I could find were to a university site that kept 404ing on me.

There is, however, a suite called Dieharder, which is available here:

http://www.phy.duke.edu/~rgb/General/dieharder.php

Note that the question "how much data is required" is the wrong 
question. It is the generator, not the data, that is tested.

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
12/14/2016 11:41:00 PM
On 15.12.16 00:41, Richard Heathfield wrote:
> On 14/12/16 22:18, Neodome Admin wrote:
> <snip>
>
>> How much data is required to successfully pass Diehard test?
>
> Diehard seems to have died. Presumably hard.
>
> All the links I could find were to a university site that kept 404ing on
> me.
>
> There is, however, a suite called Dieharder, which is available here:
>
> http://www.phy.duke.edu/~rgb/General/dieharder.php
>
> Note that the question "how much data is required" is the wrong
> question. It is the generator, not the data, that is tested.
>

The successor of Diehard, in my opinion, is TestU01, whose tests can
reveal any kind of problem a PRNG might have:

https://en.wikipedia.org/wiki/TestU01
http://simul.iro.umontreal.ca/testu01/tu01.html

My experience with dieharder is that the results are very
unreliable.

Unfortunately, I was unable to compile PractRand:

http://pracrand.sourceforge.net/



-- 
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=
0
Karl
12/15/2016 12:11:24 AM
On 14.12.16 23:02, Neodome Admin wrote:
> Karl.Frank<Karl.Frank@Freecx.co.uk>  wrote:
>> On 13.12.16 22:04, Neodome Admin wrote:
>>> Karl.Frank<Karl.Frank@Freecx.co.uk>   wrote:
>>>> <snip>
>>>> There is no security from SSL if an interceptor is determined to
>>>> interfere. I assume you didn't follow the link to the article about
>>>> MITMPROXY
>>>
>>> No, I didn't. As I said, the possibilities for obtaining Usenet articles
>>> are almost endless. Unless you can break any and all encryption schemes in
>>> the world, your argument is invalid.
>>>
>>> You might have much better chances with sending your own articles to my
>>> server instead of trying to intercept and modify someone else's.
>>>
>>
>> I see you don't even understand the basics of how network connections work.
>
> That's not an argument.
>
Arguments for the options and possibilities of tampering with your data
stream are found in the articles behind the links I've posted; no need to
post the content here again, as everything is written there already. But if
you don't read them, you might not see the problems, nor why the gateway is
your bottleneck and single point of failure when retrieving your data.



-- 
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=
0
Karl
12/15/2016 12:21:01 AM
On 12/12/16 6:51 PM, Neodome Admin wrote:
> Rich <rich@example.invalid> wrote:
>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>> On a serious note, Usenet is publicly available block of data.
>>
>> Which is what makes it somewhat unsuitable as a randomness source.
> 
> I made results of my experiment publicly available. Would you like to do
> statistical analysis and prove that the data is not random?

I think you have missed the point.

Unless you are using it solely to derive non-secrets (salts, nonces,
IVs, etc.), you want your source of randomness to also be secret.
Using publicly available data for your source of randomness, even if
you make it tamper-proof, is a bad idea for anything like key
generation.

Cheers,

-j

-- 
Jeffrey Goldberg          http://goldmark.org/jeff/
I rarely read HTML or poorly quoting posts
Reply-To address is valid
0
Jeffrey
12/15/2016 5:40:27 PM
Richard Heathfield <rjh@cpax.org.uk> wrote:
> On 14/12/16 22:18, Neodome Admin wrote:
> <snip>
> 
>> How much data is required to successfully pass Diehard test?
> 
> Diehard seems to have died. Presumably hard.
> 
> All the links I could find were to a university site that kept 404ing on me.
> 
> There is, however, a suite called Dieharder, which is available here:
> 
> http://www.phy.duke.edu/~rgb/General/dieharder.php

Thank you, I'll check it out.

Even without using any test suite I can tell that the random data cannot be
compressed by common compression programs (the resulting archive is bigger
than the original file, i.e., the compression algorithm cannot find any
patterns in its input).
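
That check can be reproduced in a few lines with the core
IO::Compress::Gzip module (the file name here is just an example):

  use strict;
  use warnings;
  use IO::Compress::Gzip qw(gzip $GzipError);

  my $file = 'hashes.bin';    # e.g. one of the BIN files
  open my $fh, '<:raw', $file or die "$file: $!";
  my $data = do { local $/; <$fh> };
  close $fh;

  gzip(\$data => \my $gz) or die "gzip failed: $GzipError";
  printf "original %d bytes, gzipped %d bytes\n",
      length($data), length($gz);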

> Note that the question "how much data is required" is the wrong 
> question. It is the generator, not the data, that is tested.

You see, that's the problem. If I'm using some PRNG with a seed originating
from my TRNG, then what really is being tested? The PRNG or the TRNG?

-- 
Neodome
0
Neodome
12/16/2016 6:24:35 AM
Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
> On 15.12.16 00:41, Richard Heathfield wrote:
>> On 14/12/16 22:18, Neodome Admin wrote:
>> <snip>
>> 
>>> How much data is required to successfully pass Diehard test?
>> 
>> Diehard seems to have died. Presumably hard.
>> 
>> All the links I could find were to a university site that kept 404ing on
>> me.
>> 
>> There is, however, a suite called Dieharder, which is available here:
>> 
>> http://www.phy.duke.edu/~rgb/General/dieharder.php
>> 
>> Note that the question "how much data is required" is the wrong
>> question. It is the generator, not the data, that is tested.
>> 
> 
> The successor of Diehard, in my opinion, is TestU01, whose tests can
> reveal any kind of problem a PRNG might have:
> 
> https://en.wikipedia.org/wiki/TestU01
> http://simul.iro.umontreal.ca/testu01/tu01.html
> 
> My experience with dieharder is that the results are very
> unreliable.
> 
> Unfortunately, I was unable to compile PractRand:
> 
> http://pracrand.sourceforge.net/

Thank you, I'll check these links out, especially TestU01.

-- 
Neodome
0
Neodome
12/16/2016 6:24:41 AM
Jeffrey Goldberg <nobody@goldmark.org> wrote:
> On 12/12/16 6:51 PM, Neodome Admin wrote:
>> Rich <rich@example.invalid> wrote:
>>> In sci.crypt Neodome Admin <admin@neodome.net> wrote:
>>>> On a serious note, Usenet is publicly available block of data.
>>> 
>>> Which is what makes it somewhat unsuitable as a randomness source.
>> 
>> I made results of my experiment publicly available. Would you like to do
>> statistical analysis and prove that the data is not random?
> 
> I think you have missed the point.
> 
> Unless you are using it solely to derive non-secrets (salts, nonces,
> IVs, etc) then you want your source of randomness to also be secret.
> Using publicly available data for your source of randomness, even if
> you make it tamper-proof, is a bad idea for anything like key
> generation.
> 
> Cheers,
> 
> -j
> 

I already stated it in this thread, but I guess I'll have to do it again.
The source of randomness is publicly available, but it's impossible for an
attacker to know what I'm receiving and processing *right now* unless he
controls all my upstream servers, and all the people who connect directly
to my server to post an article, including myself.

Of course, the resulting random data would be unsuitable for generating
secret keys if I deliberately keep my server open and the resulting raw
data itself available for anyone to download. But for the sake of the
experiment let's assume I'm not doing that.

-- 
Neodome
0
Neodome
12/16/2016 6:24:49 AM
Karl.Frank <Karl.Frank@Freecx.co.uk> wrote:
> On 14.12.16 23:02, Neodome Admin wrote:
>> Karl.Frank<Karl.Frank@Freecx.co.uk>  wrote:
>>> <snip>
>>> 
>>> I see you don't even understand the basics of how network connections work.
>> 
>> That's not an argument.
>> 
> Arguments for the options and possibilities of tampering with your data
> stream are found in the articles behind the links I've posted; no need to
> post the content here again, as everything is written there already. But if
> you don't read them, you might not see the problems, nor why the gateway is
> your bottleneck and single point of failure when retrieving your data.

Come on, Frank. I already pointed out the problem with your attack plan.
Unless you can crack any and all encryption schemes available, I'm able to
fight back, which makes your argument invalid.

If you were right, there would be no such thing as an encrypted connection.


-- 
Neodome
0
Neodome
12/16/2016 6:42:22 AM
On 16/12/16 06:24, Neodome Admin wrote:

<snip>

> I already stated it in this thread, but I guess I'll have to do it again.
> The source of randomness is publicly available, but it's impossible for an
> attacker to know what I'm receiving and processing *right now* unless he
> controls all my upstream servers, and all the people who connect directly
> to my server to post an article, including myself.

No, he wouldn't have to control anything. He'd need read access to the 
machine that your ISP has at the other end of your broadband connection, 
that's all. Given that, he knows exactly what you're getting and when 
you're getting it.

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
12/16/2016 8:25:36 AM
On 16.12.16 07:42, Neodome Admin wrote:
> Karl.Frank<Karl.Frank@Freecx.co.uk>  wrote:
>> On 14.12.16 23:02, Neodome Admin wrote:
>>> <snip>
>> Arguments for the options and possibilities of tampering with your data
>> stream are found in the articles behind the links I've posted; no need to
>> post the content here again, as everything is written there already. But if
>> you don't read them, you might not see the problems, nor why the gateway is
>> your bottleneck and single point of failure when retrieving your data.
>
> Come on, Frank. I already pointed out the problem with your attack plan.
> Unless you can crack any and all encryption schemes available, I'm able to
> fight back, which makes your argument invalid.
>
Let me finally try to explain the technical situation with a different
example:

It seems to me that you are under the impression that the news postings
reach your server like raindrops from the sky, while in fact they flow
through one single pipe as a stream of water. An attacker can simply
"poison" this single stream.


> If you were right, there would be no such thing as an encrypted connection.
>
I take into account that you're a newbie in terms of encryption. But my
advice, again, is to read the articles and try to understand why there is
no secure encrypted connection when a well-funded attacker is keen to
interfere.




-- 
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=
Karl
12/16/2016 11:18:37 AM
On 16.12.16 07:42, Neodome Admin wrote:

> If you were right, there would be no such thing as encrypted connection.
>

"For your eyes only."

Quote:
"This form of MITM attack is one of the deadliest because it takes what
we think is a secure connection and makes it completely insecure. If you
consider how many secure sites you visit each day and then consider the
potential impact if all of those connections were insecure and that data
fell into the wrong hands then you will truly understand the potential
impact this could have on you or your organization."

http://www.windowsecurity.com/articles-tutorials/authentication_and_encryption/Understanding-Man-in-the-Middle-Attacks-ARP-Part4.html

-- 
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=
Karl
12/16/2016 9:20:56 PM
Richard Heathfield <rjh@cpax.org.uk> wrote:
> On 16/12/16 06:24, Neodome Admin wrote:
> 
> <snip>
> 
>> I already stated it in this thread, but I guess I'll have to do it again.
>> The source of randomness is publicly available, but it's impossible for an
>> attacker to know what I'm receiving and processing *right now* unless he
>> controls all my upstream servers, and all people who connect directly to my
>> server to post an article, including myself.
> 
> No, he wouldn't have to control anything. He'd need read access to the 
> machine that your ISP has at the other end of your broadband connection, 
> that's all. Given that, he knows exactly what you're getting and when 
> you're getting it.

How about this, Richard.

I have an I2P anonymous-network router running on my machine. I will set
up a tunnel via I2P, buy another VPS, and forward all I2P traffic
through that second VPS. I'll give you access to it so you can intercept
all the traffic. I'll download some Usenet articles through the tunnel,
and you'll tell me what I downloaded and from where. If you're able to
do that, I'll pay you money. If you're not, you pay me. I'm sure it
would not be much harder than decrypting a Tor connection, which you
guys have probably already done, since you are so sure about MITM attacks.

The same goes for Karl Frank.

-- 
Neodome
Neodome
12/17/2016 9:16:01 PM
I believe that there is a certain amount of randomness injected at
a news hub system that is difficult to predict even if you're at
that host's ISP and can monitor almost all the traffic.  Is it *ENOUGH*?
I have my doubts.  Would it be better to merge some of this randomness
into /dev/urandom instead?  Probably.
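
If one did want to fold the harvested bytes into the kernel pool, a
minimal sketch in Perl might look like this (assuming Linux device
semantics; mix_into_urandom() and the placeholder harvest are invented
for the example). Writing to /dev/urandom stirs the bytes in, but it
does not credit the kernel's entropy estimate; crediting would take the
RNDADDENTROPY ioctl and root privileges.

#!/usr/bin/perl
# Sketch: stir harvested Usenet bytes into the Linux kernel pool.
# Writing to /dev/urandom mixes the data in but does NOT increase
# the kernel's entropy estimate.
use strict;
use warnings;

sub mix_into_urandom {
    my ($bytes) = @_;
    open my $fh, '>', '/dev/urandom'
        or die "cannot open /dev/urandom: $!";
    binmode $fh;
    print {$fh} $bytes;
    close $fh or die "close failed: $!";
}

# Placeholder for whatever the header harvester produced.
my $harvest = pack 'C*', map { int rand 256 } 1 .. 32;
mix_into_urandom($harvest);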

A news "hub" is one where a collection of news feeds join and are
merged into one feed to downstream hosts.  It might receive "full"
feeds from at least 4 countries and 2 continents.  It also has lots
of downstream feeds where there are occasional outgoing posts headed
for the rest of the world.  Google, Microsoft, and Yahoo probably
run a few.  Most of the rest of them are pretty much leaf nodes
except for some local (perhaps dialup) users.  Articles go in, they
get merged sort of like a riffle shuffle of decks of cards, and
duplicates get eliminated.  Especially if there's a backlog of articles
being injected, the order may depend on things not visible even to an
ISP handling almost all net traffic (whose view, at the very least,
excludes the sysadmin posting from the system console).

Assuming that things are going well, the news hub might get 3 copies
of the same article coming through different routes and the path
seen by a downstream host will depend on which copy it processes
first.  Article numbers assigned within newsgroups depend on the
past history of articles it has received.  When articles are coming
in from multiple sources, which one gets processed first may depend
on contention in file locks and mutexes.  All of this combines to
make what goes *OUT* a bit unpredictable even if you know everything
that went *IN* and when it arrived, down to the microsecond; and that
input doesn't even include the local users posting from the host's
local LAN, whose traffic never passes through the ISP.
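
To make that riffle shuffle concrete, here is a toy Perl model (purely
illustrative, nothing like INN's actual code): three feeds race to
deliver copies of one article, the hub keeps the first Message-ID it
sees, and which Path: survives depends only on an arrival order that an
outside observer cannot fully predict.

#!/usr/bin/perl
# Toy hub merge: the first copy of a Message-ID wins and later
# duplicates are dropped, so the surviving Path: depends on arrival
# order (timing, backlog, lock contention), not on content alone.
use strict;
use warnings;
use List::Util qw(shuffle);

my %history;    # Message-ID => Path: of the copy accepted first

sub offer {
    my ($msgid, $path) = @_;
    return 0 if exists $history{$msgid};    # duplicate: reject it
    $history{$msgid} = $path;
    return 1;
}

# shuffle() stands in for the unpredictable arrival order at the hub.
my @arrivals = shuffle(
    [ '<abc123@example.net>', 'hub!feed-us!poster'   ],
    [ '<abc123@example.net>', 'hub!feed-eu!poster'   ],
    [ '<abc123@example.net>', 'hub!feed-asia!poster' ],
);
offer(@$_) for @arrivals;

print "accepted $_ via $history{$_}\n" for keys %history;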

Things that would impact the header checksum would be the route the
*FIRST* copy of the article takes, in the Path: header, a timestamp
when the article arrived (Not Date:) down to the second, and the
newsgroup(s) and local article number(s) it is given (in the Xref:
header).  All of those would be high-priority for inclusion in a
header checksum because they are influenced by the local host rather
than the single feed.  One local post can shift article numbers for
a particular newsgroup for the indefinite future.
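
As a sketch of that priority list (illustrative Perl, not the rngfeed
script; header_digest() and the sample headers are invented for the
example), a harvester could fold exactly those locally influenced
fields into a digest:

#!/usr/bin/perl
# Digest only the header material the local hub influences: the Path:
# of the first-arriving copy, the Xref: line with locally assigned
# article numbers, and the local arrival time (not the Date: header).
use strict;
use warnings;
use Digest::SHA qw(sha256_hex);
use Time::HiRes qw(time);

sub header_digest {
    my ($headers) = @_;    # raw header block as one string
    my ($path) = $headers =~ /^Path:\s*(.+)$/mi;
    my ($xref) = $headers =~ /^Xref:\s*(.+)$/mi;
    my $arrival = time();  # local arrival timestamp, sub-second
    return sha256_hex(join "\0", $path // '', $xref // '', $arrival);
}

my $hdrs = "Path: hub!feed-eu!poster\n"
         . "Xref: hub misc.test:123456\n";
print header_digest($hdrs), "\n";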

Eve is a little limited in what she can do that doesn't get noticed.
Local and downstream users will complain if the article volume drops
off.  Some users expect that if they post something, they will get
sensible, non-automated replies from other posters out in the world,
and they'll complain if that stops.

The sysadmin would notice if all but one of the feeds fell significantly
behind, so that all articles came from one feed.  Eve can inject articles
that look like SPAM, or perhaps tamper with the headers in a
not-very-visible way.  It's easy to inject articles that contribute
known bits from the headers.  It's not so easy to predict what order
these groups of bits end up in.  Users may talk to users with
different hosts, so it has to at least look like USENET is working.

gordonb
12/18/2016 5:29:56 AM
On 12/12/2016 5:19 AM, Neodome Admin wrote:
> 
> <snip>
> 

That is interesting to me, but Eve's minions will be all over it trying 
to poison the entropy. Humm...

FWIW, check this crap out:

Entropy from multi-threaded race-conditions, with working code.  ;^)

https://groups.google.com/d/topic/comp.lang.c++/7u_rLgQe86k/discussion
(please read the whole thread if interested...)

The problem is that it's virtually impossible to get a 100% repeating 
sequence without using a virtual machine to exactly record the program's 
actions. So debugging is going to be a bitch if you don't have a 
controlled environment.
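
FWIW the effect reproduces in Perl with ithreads too. This is only a
sketch of the idea, not the C++ code from the linked thread: the
unsynchronized $counter++ is a racy fetch-then-store, so lost updates
make the final count drift from run to run.

#!/usr/bin/perl
# Race-condition entropy sketch: four threads increment a shared
# counter without locking; increments get lost nondeterministically,
# so the final value (especially its low bits) varies between runs.
use strict;
use warnings;
use threads;
use threads::shared;

my $counter : shared = 0;

my @workers = map {
    threads->create(sub { $counter++ for 1 .. 100_000 });
} 1 .. 4;
$_->join for @workers;

# Any shortfall below 400000 is scheduler-dependent update loss.
printf "count=%d  low byte=%d\n", $counter, $counter & 0xFF;
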
Chris
12/19/2016 10:25:08 PM
Gordon Burditt <gordonb.uba7y@burditt.org> wrote:
> 
> <snip>
> 

Best analysis so far. Thank you.

-- 
Neodome
Neodome
12/23/2016 2:24:31 AM