Motherboard Forums



"Random" number generation (reprise)

 
 
Don Y
07-28-2011, 08:25 AM
Hi,

[yes, I understand the difference between random, pseudo-random,
etc. I've chosen *random* in the subject line, deliberately.]

0. All of the following happens without human intervention or
supervision.

1. Imagine deploying *identical* devices -- possibly differentiated
from each other with just a small-ish serial number.

2. The devices themselves are not physically secure -- an adversary
could legally purchase one, dismantle it, etc. to determine its current
state to any degree of exactness "for little investment". I.e., it is
not possible to "hide secrets" in them at the time of manufacture.

3. *Deployed* devices are secure (not easily tampered without being
observed) though the infrastructure is vulnerable and should be seen
as providing *no* security.

4. The devices have some small amount (i.e., tens or hundreds of bytes)
of persistent store.

5. The devices download their operating "software" as part of their
bootstrap.

6. It is practically feasible for an adversary to selectively spoof
the "image server" at IPL (initial program load) -- i.e., enter into a
dialog with a particular device that is trying to boot at that time.

The challenge is to come up with a scheme by which only "approved"
images can be executed on the device(s).

From (2), you have to assume the adversary effectively has the
source code for your bootstrap (or can *get* it easily). I.e.,
you might as well *publish* it!

From (1) and (2), the "uniqueness" of a particular device is
trivia that can easily be gleaned empirically.

From (5), any persistent *changes* to the device's state (e.g., (4))
can be observed *directly* (by a rogue image executing on the device)

*If* the challenge is met, then (3) suggests that (4) can become a
true "secret".

So, thwarting (6) seems to be the effective goal, here...

--

If the device can come up with a "truly" (cough) random number
prior to initiating the bootstrap, then a key exchange protocol
could be implemented that would add security to the "image
transfer".

This "random" number can't depend on -- or be significantly
influenced by -- observable events *outside* the device. It
can't be possible for someone with copies of the (published!)
source code, schematics, etc. to determine what a particular
instance of this number is likely to be!

So, how to come up with a hardware entropy source to generate/augment
this random number -- cheaply, reliably and in very little space?

Note that generating this datum must finish before the bootstrap can
complete, so its generation time adds directly to the boot time.
However, the actual sequencing of the
bootstrap might allow the image to be transferred *while* the
random number is generated and then *authenticated* afterwards
(e.g., a signed secure hash computed ex post facto). If the image
is large (more exactly, if the transport time is *long*), then
this could buy considerable time for number generation!
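
A rough sketch of that overlap (illustration only -- fetch_image(),
gather_entropy() and verify_signed_hash() are hypothetical stand-ins,
stubbed here so the snippet runs; a real loader would replace them with
the network transfer, the hardware noise source, and a signature check):

import hashlib
import os
import threading
import time

def fetch_image():
    # Stand-in for the slow network transfer of the (untrusted) image.
    time.sleep(0.1)
    image = b"\x90" * 4096
    return image, hashlib.sha256(image).digest()   # stub for a *signed* hash

def gather_entropy(nbytes):
    # Stand-in for the local hardware noise source.
    return os.urandom(nbytes)

def verify_signed_hash(image, signed_hash):
    # Stub: a real loader would verify an RSA/DSA signature here.
    return hashlib.sha256(image).digest() == signed_hash

def bootstrap():
    result = {}
    def download():
        result["image"], result["sig"] = fetch_image()
    t = threading.Thread(target=download)
    t.start()
    seed = gather_entropy(32)       # the transfer time is "free" RNG time
    t.join()
    if not verify_signed_hash(result["image"], result["sig"]):
        raise RuntimeError("image rejected")
    return result["image"], seed    # seed feeds the key-exchange protocol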

If (4), then this can build upon previous "accumulated entropy".

I've been stewing over Schneier to get a better feel for just
how much "new entropy" is required... in theory, a secure (4)
suggests that very little *should* be required as long as
the protocols don't allow replay attacks, etc. (?) But, I
have yet to come up with hard and fast numbers (if that guy
truly groks all this stuff, he would be amazing to share a
few pitchers with!)

The ideas that come to mind (for small/cheap) are measuring
thermal/avalanche noise. I can't see how an adversary could
predict this sort of thing even with access to *detailed*
operating environment data!
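
For instance (a minimal sketch; read_adc() is a hypothetical hook for an
ADC wired across the noise source, stubbed with os.urandom so it runs),
one common conditioning trick is to keep only the noisiest bit of each
raw sample and hash many samples together into a seed:

import hashlib
import os

def read_adc():
    # Hypothetical hardware read; replace with the real ADC driver.
    return os.urandom(1)[0]

def harvest_seed(nsamples=4096):
    h = hashlib.sha256()
    for _ in range(nsamples):
        lsb = read_adc() & 1           # keep only the least-significant bit
        h.update(bytes([lsb]))
    return h.digest()                  # 32-byte conditioned seed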

A potential vulnerability would be in detecting failures of
the "entropy source". While the device in question could
actively monitor its output and do some statistical tests
on the data it generates, I suspect that monitoring would have
to take place over a longer period of time than is available
*during* the bootstrap. [And, a savvy adversary could
intentionally force a reboot *before* the software could
indicate (to itself) that the RNG is defective to avoid it
on subsequent boots!]
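
A cheap on-line sanity check -- in the spirit of the NIST SP 800-90B
continuous health tests, with purely illustrative thresholds -- might
look like this:

def source_looks_alive(samples, max_run=64, bias_limit=0.20):
    # Repetition count: a long run of identical raw samples usually means
    # the source is stuck (dead diode, railed ADC, ...).
    run, last = 0, object()
    for s in samples:
        run = run + 1 if s == last else 1
        last = s
        if run >= max_run:
            return False
    # Crude bias check on the least-significant bits.
    lsbs = [s & 1 for s in samples]
    if lsbs and abs(sum(lsbs) / len(lsbs) - 0.5) > bias_limit:
        return False
    return True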

Bottom line... I'm looking for pointers to practical sources
for this randomness. Especially with guidelines on how to
tweak the amount of entropy yielded per unit time, etc.
(so I can scale the acquisition system to fit the bandwidth
of that source, economically)

Thx,
--don
 
 
 
 
 
Ivan Shmakov
07-28-2011, 01:53 PM
>>>>> Don Y <(E-Mail Removed)> writes:

[…]

> 6. It is practically feasible for an adversary to selectively spoof
> the "image server" at IPL. (i.e., enter into a dialog with a
> particular device that is trying to boot at that time)


> The challenge is to come up with a scheme by which only "approved"
> images can be executed on the device(s).


[…]

> So, thwarting (6) seems to be the effective goal, here...


> --


> If the device can come up with a "truly" (cough) random number
> prior to initiating the bootstrap, then a key exchange protocol
> could be implemented that would add security to the "image
> transfer".


I'm a bit out of context here, but if the “key exchange” there
means something like Diffie-Hellman, then there's a catch.
Namely, while Diffie-Hellman, indeed, relies on an RNG, and
secures the data link, it doesn't, in fact, provide
/authentication/.

IOW, the device would establish a data link with the image
server that cannot be feasibly eavesdropped. But it /wouldn't/
/know/ whether it's the intended image server, or a rogue
server.

For authentication, something like RSA or DSA is to be used, and
those don't rely on an RNG once the key pair is generated.

(And sorry if I've missed the point, BTW.)

[…]

--
FSF associate member #7257
 
 
 
 
 
Don Y
07-28-2011, 02:38 PM
Hi Ivan,

On 7/28/2011 6:53 AM, Ivan Shmakov wrote:
>>>>>> Don Y<(E-Mail Removed)> writes:

>
> > 6. It is practically feasible for an adversary to selectively spoof
> > the "image server" at IPL. (i.e., enter into a dialog with a
> > particular device that is trying to boot at that time)

>
> > The challenge is to come up with a scheme by which only "approved"
> > images can be executed on the device(s).

>
> > So, thwarting (6) seems to be the effective goal, here...

>
> > If the device can come up with a "truly" (cough) random number
> > prior to initiating the bootstrap, then a key exchange protocol
> > could be implemented that would add security to the "image
> > transfer".

>
> I'm a bit out of context here, but if the “key exchange” there
> means something like Diffie-Hellman, then there's a catch.
> Namely, while Diffie-Hellman, indeed, relies on an RNG, and


I don't think DH will work -- MiM issues.

> secures the data link, it doesn't, in fact, provide
> /authentication/.
>
> IOW, the device would establish a data link with the image
> server that cannot be feasibly eavesdropped. But it /wouldn't/
> /know/ whether it's the intended image server, or a rogue
> server.


Correct. The first step is to have a means of generating a *unique*
key (for each device instance) that can't be predicted.

I still have to sort out how to keep the images themselves "bonafide"...

> For authentication, something like RSA or DSA is to be used, and
> those don't rely on an RNG once the key pair is generated.
>
> (And sorry if I've missed the point, BTW.)

 
 
Ivan Shmakov
07-28-2011, 03:57 PM
>>>>> Don Y <(E-Mail Removed)> writes:
>>>>> On 7/28/2011 6:53 AM, Ivan Shmakov wrote:
>>>>> Don Y <(E-Mail Removed)> writes:


[…]

>>> If the device can come up with a "truly" (cough) random number
>>> prior to initiating the bootstrap, then a key exchange protocol
>>> could be implemented that would add security to the "image
>>> transfer".


>> I'm a bit out of context here, but if the “key exchange” there means
>> something like Diffie-Hellman, then there's a catch. Namely, while
>> Diffie-Hellman, indeed, relies on an RNG, and


> I don't think DH will work -- MiM issues.


Exactly.

(Unless reinforced with digital signatures, that is.)

>> secures the data link, it doesn't, in fact, provide
>> /authentication/.


>> IOW, the device would establish a data link with the image server
>> that cannot be feasibly eavesdropped. But it /wouldn't/ /know/
>> whether it's the intended image server, or a rogue server.


> Correct. The first step is to have a means of generating a *unique*
> key (for each device instance) that can't be predicted.


I'd argue that it should be the /second/ step, not the first.
Unless the image is itself a secret.

> I still have to sort out how to keep the images themselves
> "bonafide"...


The device has a tiny bit of persistent storage, right? Why not
use the standard digital signature approach?

Prior to deploying the devices, we generate a key pair for
the chosen asymmetric cipher. The public key is stored in the
device's persistent memory. The private key is used to sign
(encrypt) the message digest (e. g., SHA-1) of the image.

The device receives the image and the encrypted digest. It then
decrypts the encrypted digest with the key it holds in its
persistent memory and compares it to the digest computed
locally. The image is only used if the two are equal (and thus
the image hasn't been tampered with.)
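
A sketch of that flow, using the Python "cryptography" package purely as
an illustration (a real device would use an embedded crypto library;
SHA-256 is used below in place of the SHA-1 mentioned above):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# --- done once, before deployment ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # this is what goes into the
                                        # device's persistent store

# --- image server side: sign the image (really: sign its digest) ---
image = b"...firmware image bytes..."
signature = private_key.sign(image, padding.PKCS1v15(), hashes.SHA256())

# --- device side: verify before executing ---
try:
    public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
    accepted = True     # digest matches and was signed by the private key holder
except InvalidSignature:
    accepted = False    # tampered image or rogue server: refuse to boot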

>> For authentication, something like RSA or DSA is to be used, and
>> those don't rely on an RNG once the key pair is generated.


>> (And sorry if I've missed the point, BTW.)


--
FSF associate member #7257
 
 
Don Y
07-28-2011, 04:34 PM
Hi Ivan,

On 7/28/2011 8:57 AM, Ivan Shmakov wrote:

> >> IOW, the device would establish a data link with the image server
> >> that cannot be feasibly eavesdropped. But it /wouldn't/ /know/
> >> whether it's the intended image server, or a rogue server.

>
> > Correct. The first step is to have a means of generating a *unique*
> > key (for each device instance) that can't be predicted.

>
> I'd argue that it should be the /second/ step, not the first.
> Unless the image is itself a secret.


No -- which is also part of the problem (i.e., you can't hide any
secrets *in* the image(s), either!)

> > I still have to sort out how to keep the images themselves
> > "bonafide"...

>
> The device has a tiny bit of persistent storage, right? Why not
> to use the standard digital signature approach?
>
> Prior to deploying the devices, we're generating a key pair for
> the chosen asymmetric cipher. One of the keys is stored in the
> device's persistent memory. The other is used to encrypt the
> message digest (e. g., SHA-1) of the image.


I have been trying to avoid an initialization step. I.e., deploy
*identical* devices and let them gather entropy to make themselves
unique.

I have one system in place, now, with which I took this approach.
But, it requires setting up each device before deployment. And, it
means those settings *must* persist through battery failures, crashes,
etc.

I was hoping to come up with a scheme that could, effectively,
"fix itself" (as if it had just been deployed!)

> The device receives the image and the encrypted digest. It then
> decrypts the encrypted digest with the key it has in its
> persistent memory and compares it to the digest computed
> locally. The image is only used if these are equal (and thus
> the image isn't tampered.)
>
> >> For authentication, something like RSA or DSA is to be used, and
> >> those don't rely on an RNG once the key pair is generated.

>
> >> (And sorry if I've missed the point, BTW.)

 
 
Charles Bryant
07-29-2011, 12:27 AM
In article <j0r6dm$28v$(E-Mail Removed)>, Don Y <(E-Mail Removed)> wrote:
}2. The devices themselves are not physically secure -- an adversary
}could legally purchase one, dismantle it, etc. to determine its current
}state to any degree of exactness "for little investment". I.e., it is
}not possible to "hide secrets" in them at the time of manufacture.
}
}3. *Deployed* devices are secure (not easily tampered without being
}observed) though the infrastructure is vulnerable and should be seen
}as providing *no* security.
}
}4. The devices have some small amount (i.e., tens or hundreds of bytes)
}of persistent store.
}
}5. The devices download their operating "software" as part of their
}bootstrap.
}
}6. It is practically feasible for an adversary to selectively spoof
}the "image server" at IPL. (i.e., enter into a dialog with a
}particular device that is trying to boot at that time)
}
}The challenge is to come up with a scheme by which only "approved"
}images can be executed on the device(s).

The devices are made containing a public key. When the loader fetches an
image, the image has been encrypted using the corresponding private
key. The loader decrypts it with the public key and executes the
decrypted image (preferably after checking it has decrypted into a valid
image). An attacker can fetch and decrypt any images, and simulate the
action of the loader as much as they like, but without breaking the
underlying encryption method or possessing the private key they cannot
generate their own image.

I don't see any need for random numbers.
 
 
Ivan Shmakov
07-29-2011, 03:34 AM
>>>>> Charles Bryant <(E-Mail Removed)> writes:
>>>>> In article <j0r6dm$28v$(E-Mail Removed)>, Don Y wrote:


[…]

>> 6. It is practically feasible for an adversary to selectively spoof
>> the "image server" at IPL. (i.e., enter into a dialog with a
>> particular device that is trying to boot at that time)


>> The challenge is to come up with a scheme by which only "approved"
>> images can be executed on the device(s).


> The devices are made containing a public key. When the loader fetchs
> an image, the image has been encrypted using the corresponding
> private key. The loader decrypts it with the public key and executes
> the decrypted image (preferrably after checking it has decrypted into
> a valid image).


I deem it infeasible for an embedded device to decrypt the whole
image using an asymmetric cipher — there's just too much
computation involved. Traditionally, digital signatures are
made by encrypting a message digest (such as SHA-1) instead.

(And note that even with whole image encryption, it's still
advisable to have a message digest transmitted along with the
image, since otherwise an attacker may force the device to
accept garbage.)

> An attacker can fetch and decrypt any images, and simulate the action
> of the loaded as much as they like, but without breaking the
> underlying encryption method or possessing the private key they
> cannot generate their own image.


Right. But since the image isn't a secret anyway, it's not
worth encrypting in the first place.

> I don't see any need for random numbers.


--
FSF associate member #7257
 
 
Ivan Shmakov
07-29-2011, 03:45 AM
>>>>> Don Y <(E-Mail Removed)> writes:
>>>>> On 7/28/2011 8:57 AM, Ivan Shmakov wrote:


[…]

>>> I still have to sort out how to keep the images themselves
>>> "bonafide"...


>> The device has a tiny bit of persistent storage, right? Why not to
>> use the standard digital signature approach?


>> Prior to deploying the devices, we're generating a key pair for the
>> chosen asymmetric cipher. One of the keys is stored in the device's
>> persistent memory. The other is used to encrypt the message digest
>> (e. g., SHA-1) of the image.


> I have been trying to avoid an initialization step. I.e., deploy
> *identical* devices and let them gather entropy to make themselves
> unique.


How is this going to enable the devices to distinguish the
intended image server from an attacker's?

> I have one system in place, now, with which I took this approach.
> But, it requires setting up each device before deployment. And,
> means those settings *must* be persistent (battery failures, crashes,
> etc).


The typical size of a public key is 2048 bits, which is 256
bytes. There are even 8-bit MCUs for which it's not a big deal
to store such a key in the on-chip flash memory.

Moreover, the key can (and probably should) be identical for the
whole set of devices used by a single “owner” (an organization
or unit), which may make uploading it much less of a hassle.
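
As a quick illustration of the size claim (using the Python
"cryptography" package; the DER-encoded public key carries a little
ASN.1 overhead on top of the 256-byte modulus):

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
der = key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(len(der))   # roughly 294 bytes for a 2048-bit key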

> I was hoping to come up with a scheme that could, effectively, "fix
> itself" (as if it had just been deployed!)


I'm afraid there cannot be one. The devices have to be
“labelled” somehow, so that they could know their “owner.”

[…]

--
FSF associate member #7257
 
 
1 Lucky Texan
07-29-2011, 02:40 PM
On Jul 28, 3:25 am, Don Y <(E-Mail Removed)> wrote:
> Hi,
>
> [yes, I understand the difference between random, pseudo-random,
> etc. I've chosen *random* in the subject line, deliberately.]
>
> 0. All of the following happens without human intervention or
> supervision.

[…]



Just a coupla wild ideas.

If there is operator input, some local code run before retrieving
the image from the host could run an RNG, ask the operator to
"Press a key to begin", and stop on whatever number is current when
the key is pressed -- that would likely yield an 'unpredictable'
value to use.

If a system has a local RTC and has access to a host, internet, or
other external clock, perhaps measuring (with a local routine?) the
'drift' between the two clocks at startup could yield a value to use.
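
Something along these lines, perhaps (a minimal sketch; read_rtc() and
read_network_time() are hypothetical stubs, faked here so it runs -- and,
as the follow-up notes, the result is slow to accumulate and exposed to
anyone who can manipulate the reference clock):

import hashlib
import random
import time

def read_rtc():
    # Hypothetical local RTC read.
    return time.time()

def read_network_time():
    # Hypothetical external clock read, faked with a little jitter.
    return time.time() + random.uniform(-0.05, 0.05)

def drift_seed(rounds=64):
    h = hashlib.sha256()
    for _ in range(rounds):
        h.update(repr(read_network_time() - read_rtc()).encode())
        time.sleep(0.01)
    return h.digest()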
 
 
Don Y
07-29-2011, 06:03 PM
On 7/29/2011 7:40 AM, 1 Lucky Texan wrote:
> On Jul 28, 3:25 am, Don Y<(E-Mail Removed)> wrote:


>> 0. All of the following happens without human intervention or
>> supervision.


> Just a coupla wild ideas.
>
> If there is operator input, running some local code, before


See "0", above. :> Imagine the device controls a robotic
manipulator that wants to identify itself (its "instance")
to some server, retrieve *its* firmware image and then begin
its operations on the assembly line, etc. You wouldn't want
a "technician" to have to walk the length of the line any
time power is cycled *just* to "Press a key to begin"...

> retrieveing the image from the host, that runs a RNG and asks for
> "Press a key to begin" and stops on a number would likely yield an
> 'unpredictable' value to use.
>
> If a system has a local RTC and has access to a host or internet or
> other external clock, perhaps subtracting (using local routine?) the
> 'drift' between the 2 clocks on startup could yield a value to use.


Hmmm... I suspect that would have to take some time to accumulate
enough entropy. Also, it leaves you exposed to someone manipulating
that clock.

I was hoping that the randomness could be handled *entirely*
locally -- so it would be more tamper-resistant.
 
 
 
 