
"Random" number generation (reprise)

Discussion in 'Embedded' started by Don Y, Jul 28, 2011.

  1. Don Y

    Don Y Guest

    Hi,

    [yes, I understand the difference between random, pseudo-random,
    etc. I've chosen *random* in the subject line, deliberately.]

    0. All of the following happens without human intervention or
    supervision.

    1. Imagine deploying *identical* devices -- possibly differentiated
    from each other with just a small-ish serial number.

    2. The devices themselves are not physically secure -- an adversary
    could legally purchase one, dismantle it, etc. to determine its current
    state to any degree of exactness "for little investment". I.e., it is
    not possible to "hide secrets" in them at the time of manufacture.

    3. *Deployed* devices are secure (not easily tampered without being
    observed) though the infrastructure is vulnerable and should be seen
    as providing *no* security.

    4. The devices have some small amount (i.e., tens or hundreds of bytes)
    of persistent store.

    5. The devices download their operating "software" as part of their
    bootstrap.

    6. It is practically feasible for an adversary to selectively spoof
    the "image server" at IPL. (i.e., enter into a dialog with a
    particular device that is trying to boot at that time)

    The challenge is to come up with a scheme by which only "approved"
    images can be executed on the device(s).

    From (2), you have to assume the adversary effectively has the
    source code for your bootstrap (or can *get* it easily). I.e.,
    you might as well *publish* it!

    From (1) and (2), the "uniqueness" of a particular device is
    trivia that can easily be gleaned empirically.

    From (5), any persistent *changes* to the device's state (e.g., (4))
    can be observed *directly* (by a rogue image executing on the device)

    *If* the challenge is met, then (3) suggests that (4) can become a
    true "secret".

    So, thwarting (6) seems to be the effective goal, here...

    --

    If the device can come up with a "truly" (cough) random number
    prior to initiating the bootstrap, then a key exchange protocol
    could be implemented that would add security to the "image
    transfer".

    This "random" number can't depend on -- or be significantly
    influenced by -- observable events *outside* the device. It
    can't be possible for someone with copies of the (published!)
    source code, schematics, etc. to determine what a particular
    instance of this number is likely to be!

    So, how to come up with a hardware entropy source to generate/augment
    this random number -- cheaply, reliably and in very little space?

    Note that generating the datum is a prerequisite to completing the
    bootstrap, so the time it requires matters. However, the actual sequencing of the
    bootstrap might allow the image to be transferred *while* the
    random number is generated and then *authenticated* afterwards
    (e.g., a signed secure hash computed ex post facto). If the image
    is large (more exactly, if the transport time is *long*), then
    this could buy considerable time for number generation!

    If (4), then this can build upon previous "accumulated entropy".
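    One common way to build on previously accumulated entropy with only
    tens of bytes of persistent store is to keep a hash-based pool and
    stir each boot's fresh samples into it. A minimal Python sketch,
    illustrative only (the sample strings are hypothetical placeholders
    for raw noise data):

```python
import hashlib

def stir_pool(pool: bytes, fresh: bytes) -> bytes:
    """Fold freshly gathered noise into the small persistent pool by
    hashing pool and sample together; mixing never discards entropy
    already in the pool, so even a slow source accumulates across boots."""
    return hashlib.sha256(pool + fresh).digest()

pool = b"\x00" * 32       # factory state: identical on every device
pool = stir_pool(pool, b"boot-1 raw noise samples")   # written back to (4)
pool = stir_pool(pool, b"boot-2 raw noise samples")
```

    The 32-byte pool fits comfortably in the tens-of-bytes store of (4).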

    I've been stewing over Schneier to get a better feel for just
    how much "new entropy" is required... in theory, a secure (4)
    suggests that very little *should* be required as long as
    the protocols don't allow replay attacks, etc. (?) But, I
    have yet to come up with hard and fast numbers (if that guy
    truly groks all this stuff, he would be amazing to share a
    few pitchers with!)

    The ideas that come to mind (for small/cheap) are measuring
    thermal/avalanche noise. I can't see how an adversary could
    predict this sort of thing even with access to *detailed*
    operating environment data!
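    As a concrete example of turning such a noisy source into usable
    bits: the classic von Neumann extractor removes bias from a raw bit
    stream (though not correlation between samples). A Python sketch,
    illustrative only; a real bootstrap would do this in firmware on
    the target:

```python
def von_neumann_extract(raw_bits):
    """Debias a raw bit stream: for each non-overlapping pair, emit the
    first bit when the pair is (0,1) or (1,0), and drop (0,0)/(1,1).
    Removes bias (not correlation); output rate is at most 1/4 of input."""
    out = []
    for a, b in zip(raw_bits[0::2], raw_bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann_extract([0, 1, 1, 0, 0, 0, 1, 1, 0, 1]))  # -> [0, 1, 0]
```

    The drop rate also gives a crude handle on scaling acquisition
    bandwidth: the more biased the source, the more raw bits per output bit.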

    A potential vulnerability would be in detecting failures of
    the "entropy source". While the device in question could
    actively monitor its output and do some statistical tests
    on the data it generates, I suspect that monitoring would have
    to take place over a longer period of time than is available
    *during* the bootstrap. [And, a savvy adversary could
    intentionally force a reboot *before* the software could
    indicate (to itself) that the RNG is defective to avoid it
    on subsequent boots!]
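    One continuous health test that *is* cheap enough to run even during
    a short bootstrap is a repetition-count check: declare the source
    dead if it ever emits the same value too many times in a row. Python
    sketch; the cutoff shown is arbitrary and would really be derived
    from the source's estimated entropy and an acceptable false-alarm rate:

```python
def repetition_count_ok(samples, cutoff=35):
    """Return False if any value repeats 'cutoff' times in a row --
    the classic symptom of a dead or stuck noise source. Cheap enough
    to run on every sample, even during a short bootstrap."""
    run = 1
    for prev, cur in zip(samples, samples[1:]):
        run = run + 1 if cur == prev else 1
        if run >= cutoff:
            return False
    return True
```

    It only catches gross failures (stuck-at faults), not subtle bias,
    but it needs no long observation window.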

    Bottom line... I'm looking for pointers to practical sources
    of this randomness. Especially with guidelines on how to
    tweak the amount of entropy yielded per unit time, etc.
    (so I can scale the acquisition system to fit the bandwidth
    of that source, economically)

    Thx,
    --don
     
    Don Y, Jul 28, 2011
    #1

  2. Ivan Shmakov

    Ivan Shmakov Guest

    >>>>> Don Y <> writes:

    […]

    > 6. It is practically feasible for an adversary to selectively spoof
    > the "image server" at IPL. (i.e., enter into a dialog with a
    > particular device that is trying to boot at that time)


    > The challenge is to come up with a scheme by which only "approved"
    > images can be executed on the device(s).


    […]

    > So, thwarting (6) seems to be the effective goal, here...


    > --


    > If the device can come up with a "truly" (cough) random number
    > prior to initiating the bootstrap, then a key exchange protocol
    > could be implemented that would add security to the "image
    > transfer".


    I'm a bit out of context here, but if the “key exchange” there
    means something like Diffie-Hellman, then there's a catch.
    Namely, while Diffie-Hellman, indeed, relies on an RNG, and
    secures the data link, it doesn't, in fact, provide
    /authentication/.

    IOW, the device would establish a data link with the image
    server that cannot be feasibly eavesdropped. But it /wouldn't/
    /know/ whether it's the intended image server, or a rogue
    server.
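    To make the catch concrete: unauthenticated Diffie-Hellman lets two
    parties derive a shared secret, but nothing in the exchange tells
    the device *who* it shares that secret with. A toy Python sketch
    with deliberately tiny public parameters (real use needs large,
    well-chosen groups and random secret exponents):

```python
# Toy Diffie-Hellman exchange (tiny numbers for illustration only)
p, g = 23, 5             # public parameters, known to everyone
a, b = 6, 15             # secret exponents (normally from each side's RNG)

A = pow(g, a, p)         # the device transmits A
B = pow(g, b, p)         # "the image server" transmits B -- but which server?

shared_device = pow(B, a, p)
shared_server = pow(A, b, p)
assert shared_device == shared_server
# Both ends now hold the same secret, but the device cannot tell
# whether B came from the real image server or from a man-in-the-middle
# who simply ran the same arithmetic.
```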

    For authentication, something like RSA or DSA is to be used, and
    those don't rely on an RNG once the key pair is generated.

    (And sorry if I've missed the point, BTW.)

    […]

    --
    FSF associate member #7257
     
    Ivan Shmakov, Jul 28, 2011
    #2

  3. Don Y

    Don Y Guest

    Hi Ivan,

    On 7/28/2011 6:53 AM, Ivan Shmakov wrote:
    >>>>>> Don Y<> writes:

    >
    > > 6. It is practically feasible for an adversary to selectively spoof
    > > the "image server" at IPL. (i.e., enter into a dialog with a
    > > particular device that is trying to boot at that time)

    >
    > > The challenge is to come up with a scheme by which only "approved"
    > > images can be executed on the device(s).

    >
    > > So, thwarting (6) seems to be the effective goal, here...

    >
    > > If the device can come up with a "truly" (cough) random number
    > > prior to initiating the bootstrap, then a key exchange protocol
    > > could be implemented that would add security to the "image
    > > transfer".

    >
    > I'm a bit out of context here, but if the “key exchange” there
    > means something like Diffie-Hellman, then there's a catch.
    > Namely, while Diffie-Hellman, indeed, relies on an RNG, and


    I don't think DH will work -- MitM (man-in-the-middle) issues.

    > secures the data link, it doesn't, in fact, provide
    > /authentication/.
    >
    > IOW, the device would establish a data link with the image
    > server that cannot be feasibly eavesdropped. But it /wouldn't/
    > /know/ whether it's the intended image server, or a rogue
    > server.


    Correct. The first step is to have a means of generating a *unique*
    key (for each device instance) that can't be predicted.

    I still have to sort out how to keep the images themselves "bonafide"...

    > For authentication, something like RSA or DSA is to be used, and
    > those don't rely on an RNG once the key pair is generated.
    >
    > (And sorry if I've missed the point, BTW.)
     
    Don Y, Jul 28, 2011
    #3
  4. Ivan Shmakov

    Ivan Shmakov Guest

    >>>>> Don Y <> writes:
    >>>>> On 7/28/2011 6:53 AM, Ivan Shmakov wrote:
    >>>>> Don Y <> writes:


    […]

    >>> If the device can come up with a "truly" (cough) random number
    >>> prior to initiating the bootstrap, then a key exchange protocol
    >>> could be implemented that would add security to the "image
    >>> transfer".


    >> I'm a bit out of context here, but if the “key exchange” there means
    >> something like Diffie-Hellman, then there's a catch. Namely, while
    >> Diffie-Hellman, indeed, relies on an RNG, and


    > I don't think DH will work -- MitM (man-in-the-middle) issues.


    Exactly.

    (Unless reinforced with digital signatures, that is.)

    >> secures the data link, it doesn't, in fact, provide
    >> /authentication/.


    >> IOW, the device would establish a data link with the image server
    >> that cannot be feasibly eavesdropped. But it /wouldn't/ /know/
    >> whether it's the intended image server, or a rogue server.


    > Correct. The first step is to have a means of generating a *unique*
    > key (for each device instance) that can't be predicted.


    I'd argue that it should be the /second/ step, not the first.
    Unless the image is itself a secret.

    > I still have to sort out how to keep the images themselves
    > "bonafide"...


    The device has a tiny bit of persistent storage, right? Why not
    use the standard digital signature approach?

    Prior to deploying the devices, we're generating a key pair for
    the chosen asymmetric cipher. One of the keys is stored in the
    device's persistent memory. The other is used to encrypt the
    message digest (e.g., SHA-1) of the image.

    The device receives the image and the encrypted digest. It then
    decrypts the encrypted digest with the key it has in its
    persistent memory and compares it to the digest computed
    locally. The image is only used if these are equal (and thus
    the image hasn't been tampered with.)
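    A sketch of that flow, using a deliberately toy textbook-RSA key
    pair and SHA-256 as the digest (Python; illustrative only -- a real
    deployment would use a vetted crypto library, proper signature
    padding, and a key of at least 2048 bits):

```python
import hashlib

# Toy textbook-RSA key pair (illustration only: real deployments need
# >= 2048-bit keys, proper signature padding, and a vetted library).
n, e = 3233, 17          # public key -- this is what every device stores
d = 2753                 # private key -- stays with whoever signs images

def digest(image: bytes) -> int:
    # digest reduced mod n so the toy key can "encrypt" it
    return int.from_bytes(hashlib.sha256(image).digest(), "big") % n

def sign(image: bytes) -> int:
    # done once, on the image server, with the private key
    return pow(digest(image), d, n)

def device_accepts(image: bytes, sig: int) -> bool:
    # the device decrypts the signature with its stored public key
    # and compares against the digest it computes locally
    return pow(sig, e, n) == digest(image)

image = b"approved firmware image"
sig = sign(image)
assert device_accepts(image, sig)                # genuine image passes
assert not device_accepts(image, (sig + 1) % n)  # corrupted signature fails
```

    Note the device only ever does one public-key operation per boot,
    plus a hash over the image, which suits a small bootstrap.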

    >> For authentication, something like RSA or DSA is to be used, and
    >> those don't rely on an RNG once the key pair is generated.


    >> (And sorry if I've missed the point, BTW.)


    --
    FSF associate member #7257
     
    Ivan Shmakov, Jul 28, 2011
    #4
  5. Don Y

    Don Y Guest

    Hi Ivan,

    On 7/28/2011 8:57 AM, Ivan Shmakov wrote:

    > >> IOW, the device would establish a data link with the image server
    > >> that cannot be feasibly eavesdropped. But it /wouldn't/ /know/
    > >> whether it's the intended image server, or a rogue server.

    >
    > > Correct. The first step is to have a means of generating a *unique*
    > > key (for each device instance) that can't be predicted.

    >
    > I'd argue that it should be the /second/ step, not the first.
    > Unless the image is itself a secret.


    No -- which is also part of the problem (i.e., you can't hide any
    secrets *in* the image(s), either!)

    > > I still have to sort out how to keep the images themselves
    > > "bonafide"...

    >
    > The device has a tiny bit of persistent storage, right? Why not
    > use the standard digital signature approach?
    >
    > Prior to deploying the devices, we're generating a key pair for
    > the chosen asymmetric cipher. One of the keys is stored in the
    > device's persistent memory. The other is used to encrypt the
    > message digest (e. g., SHA-1) of the image.


    I have been trying to avoid an initialization step. I.e., deploy
    *identical* devices and let them gather entropy to make themselves
    unique.

    I have one system in place, now, with which I took this approach.
    But, it requires setting up each device before deployment. And, it
    means those settings *must* be persistent (battery failures, crashes,
    etc).

    I was hoping to come up with a scheme that could, effectively,
    "fix itself" (as if it had just been deployed!)

    > The device receives the image and the encrypted digest. It then
    > decrypts the encrypted digest with the key it has in its
    > persistent memory and compares it to the digest computed
    > locally. The image is only used if these are equal (and thus
    > the image hasn't been tampered with.)
    >
    > >> For authentication, something like RSA or DSA is to be used, and
    > >> those don't rely on an RNG once the key pair is generated.

    >
    > >> (And sorry if I've missed the point, BTW.)
     
    Don Y, Jul 28, 2011
    #5
  6. In article <j0r6dm$28v$>, Don Y <> wrote:
    }2. The devices themselves are not physically secure -- an adversary
    }could legally purchase one, dismantle it, etc. to determine its current
    }state to any degree of exactness "for little investment". I.e., it is
    }not possible to "hide secrets" in them at the time of manufacture.
    }
    }3. *Deployed* devices are secure (not easily tampered without being
    }observed) though the infrastructure is vulnerable and should be seen
    }as providing *no* security.
    }
    }4. The devices have some small amount (i.e., tens or hundreds of bytes)
    }of persistent store.
    }
    }5. The devices download their operating "software" as part of their
    }bootstrap.
    }
    }6. It is practically feasible for an adversary to selectively spoof
    }the "image server" at IPL. (i.e., enter into a dialog with a
    }particular device that is trying to boot at that time)
    }
    }The challenge is to come up with a scheme by which only "approved"
    }images can be executed on the device(s).

    The devices are made containing a public key. When the loader fetches
    an image, the image has been encrypted using the corresponding private
    key. The loader decrypts it with the public key and executes the
    decrypted image (preferably after checking it has decrypted into a valid
    image). An attacker can fetch and decrypt any images, and simulate the
    action of the loader as much as they like, but without breaking the
    underlying encryption method or possessing the private key they cannot
    generate their own image.

    I don't see any need for random numbers.
     
    Charles Bryant, Jul 29, 2011
    #6
  7. Ivan Shmakov

    Ivan Shmakov Guest

    >>>>> Charles Bryant <> writes:
    >>>>> In article <j0r6dm$28v$>, Don Y wrote:


    […]

    >> 6. It is practically feasible for an adversary to selectively spoof
    >> the "image server" at IPL. (i.e., enter into a dialog with a
    >> particular device that is trying to boot at that time)


    >> The challenge is to come up with a scheme by which only "approved"
    >> images can be executed on the device(s).


    > The devices are made containing a public key. When the loader fetches
    > an image, the image has been encrypted using the corresponding
    > private key. The loader decrypts it with the public key and executes
    > the decrypted image (preferably after checking it has decrypted into
    > a valid image).


    I deem it infeasible for an embedded device to decrypt the whole
    image using an asymmetric cipher — there's just too much
    computation involved. Traditionally, digital signatures are
    made by encrypting a message digest (such as SHA-1) instead.

    (And note that even with whole image encryption, it's still
    advisable to have a message digest transmitted along with the
    image, since otherwise an attacker may force the device to
    accept garbage.)

    > An attacker can fetch and decrypt any images, and simulate the action
    > of the loader as much as they like, but without breaking the
    > underlying encryption method or possessing the private key they
    > cannot generate their own image.


    Right. But since the image isn't a secret anyway, it's not
    worth encrypting in the first place.

    > I don't see any need for random numbers.


    --
    FSF associate member #7257
     
    Ivan Shmakov, Jul 29, 2011
    #7
  8. Ivan Shmakov

    Ivan Shmakov Guest

    >>>>> Don Y <> writes:
    >>>>> On 7/28/2011 8:57 AM, Ivan Shmakov wrote:


    […]

    >>> I still have to sort out how to keep the images themselves
    >>> "bonafide"...


    >> The device has a tiny bit of persistent storage, right? Why not
    >> use the standard digital signature approach?


    >> Prior to deploying the devices, we're generating a key pair for the
    >> chosen asymmetric cipher. One of the keys is stored in the device's
    >> persistent memory. The other is used to encrypt the message digest
    >> (e. g., SHA-1) of the image.


    > I have been trying to avoid an initialization step. I.e., deploy
    > *identical* devices and let them gather entropy to make themselves
    > unique.


    How is this going to make the devices able to distinguish the
    intended image server from the attacker's?

    > I have one system in place, now, with which I took this approach.
    > But, it requires setting up each device before deployment. And,
    > means those settings *must* be persistent (battery failures, crashes,
    > etc).


    The typical size of a public key is 2048 bits, which is 256
    bytes. There are even 8-bit MCUs for which it's not a big deal
    to store such a key in the on-chip flash memory.

    Moreover, the key can (and probably should) be identical for the
    whole set of devices used by a single “owner” (an organization
    or unit), which may make uploading it much less of a hassle.

    > I was hoping to come up with a scheme that could, effectively, "fix
    > itself" (as if it had just been deployed!)


    I'm afraid there cannot be one. The devices have to be
    “labelled” somehow, so that they could know their “owner.”

    […]

    --
    FSF associate member #7257
     
    Ivan Shmakov, Jul 29, 2011
    #8
  9. On Jul 28, 3:25 am, Don Y <> wrote:
    > Hi,
    >
    > […]
    >
    > 0. All of the following happens without human intervention or
    > supervision.
    >
    > […]
    >
    > So, how to come up with a hardware entropy source to generate/augment
    > this random number -- cheaply, reliably and in very little space?
    >
    > […]
    >
    > Thx,
    > --don



    Just a coupla wild ideas.

    If there is operator input, running some local code, before
    retrieving the image from the host, that runs an RNG and asks for
    "Press a key to begin" and stops on a number would likely yield an
    'unpredictable' value to use.

    If a system has a local RTC and has access to a host or internet or
    other external clock, perhaps subtracting (using local routine?) the
    'drift' between the 2 clocks on startup could yield a value to use.
     
    1 Lucky Texan, Jul 29, 2011
    #9
  10. Don Y

    Don Y Guest

    On 7/29/2011 7:40 AM, 1 Lucky Texan wrote:
    > On Jul 28, 3:25 am, Don Y<> wrote:


    >> 0. All of the following happens without human intervention or
    >> supervision.


    > Just a coupla wild ideas.
    >
    > If there is operator input, running some local code, before


    See "0", above. :> Imagine the device controls a robotic
    manipulator that wants to identify itself (its "instance")
    to some server, retrieve *its* firmware image and then begin
    its operations on the assembly line, etc. You wouldn't want
    a "technician" to have to walk the length of the line any
    time power is cycled *just* to "Press a key to begin"...

    > retrieving the image from the host, that runs an RNG and asks for
    > "Press a key to begin" and stops on a number would likely yield an
    > 'unpredictable' value to use.
    >
    > If a system has a local RTC and has access to a host or internet or
    > other external clock, perhaps subtracting (using local routine?) the
    > 'drift' between the 2 clocks on startup could yield a value to use.


    Hmmm... I suspect that would have to take some time to accumulate
    enough entropy. Also, it leaves you exposed to someone manipulating
    that clock.

    I was hoping that the randomness could be handled *entirely*
    locally -- so it would be more tamper-resistant.
     
    Don Y, Jul 29, 2011
    #10
  11. Don Y

    Don Y Guest

    Hi Ivan (and Charles),

    On 7/28/2011 8:34 PM, Ivan Shmakov wrote:
    >>>>>> Charles Bryant<> writes:
    >>>>>> In article<j0r6dm$28v$>, Don Y wrote:

    >
    > >> 6. It is practically feasible for an adversary to selectively spoof
    > >> the "image server" at IPL. (i.e., enter into a dialog with a
    > >> particular device that is trying to boot at that time)

    >
    > >> The challenge is to come up with a scheme by which only "approved"
    > >> images can be executed on the device(s).

    >
    > > The devices are made containing a public key. When the loader fetches
    > > an image, the image has been encrypted using the corresponding
    > > private key. The loader decrypts it with the public key and executes
    > > the decrypted image (preferably after checking it has decrypted into
    > > a valid image).

    >
    > I deem it infeasible for an embedded device to decrypt the whole
    > image using an asymmetric cipher — there's just too much
    > computation involved. Traditionally, the digital signatures are
    > made by encrypting a message digest (such as SHA-1) instead.


    Yes. When it comes to the image transfer, all you want
    to do is ensure that some bogus image hasn't been substituted
    for a legitimate one: a "signed executable".

    > (And note that even with whole image encryption, it's still
    > advisable to have a message digest transmitted along with the
    > image, since otherwise an attacker may force the device to
    > accept garbage.)
    >
    > > An attacker can fetch and decrypt any images, and simulate the action
    > > of the loader as much as they like, but without breaking the
    > > underlying encryption method or possessing the private key they
    > > cannot generate their own image.

    >
    > Right. But since the image isn't a secret anyway, it's not
    > worth encrypting in the first place.
    >
    > > I don't see any need for random numbers.


    How do you protect further communications between the device (and
    "whatever") *after* the image has been loaded? Embed keys in the
    images? Rely on some application (and instance) -specific source
    of entropy to algorithmically *generate* random numbers?

    Randomness must be worth *something* as CPU manufacturers move
    to support it "in hardware". I'm just trying to do similarly
    "on the cheap" (for CPUs that don't *have* those capabilities)
     
    Don Y, Jul 29, 2011
    #11
  12. Ivan Shmakov

    Ivan Shmakov Guest

    >>>>> Don Y <> writes:
    >>>>> On 7/28/2011 8:34 PM, Ivan Shmakov wrote:
    >>>>> Charles Bryant <> writes:


    […]

    >>> An attacker can fetch and decrypt any images, and simulate the
    >>> action of the loader as much as they like, but without breaking the
    >>> underlying encryption method or possessing the private key they
    >>> cannot generate their own image.


    >> Right. But since the image isn't a secret anyway, it's not worth
    >> encrypting in the first place.


    >>> I don't see any need for random numbers.


    > How do you protect further communications between the device (and
    > "whatever") *after* the image has been loaded? Embed keys in the
    > images? Rely on some application (and instance) -specific source of
    > entropy to algorithmically *generate* random numbers?


    If the secure communication after the image has been loaded is
    an issue, then a good RNG is indeed a good thing to have.

    (I'm not sure, however, that such a requirement was explicitly
    stated in the OP.)

    […]

    --
    FSF associate member #7257
     
    Ivan Shmakov, Jul 30, 2011
    #12
  13. Don Y

    Don Y Guest

    Hi Ivan,

    On 7/29/2011 9:16 PM, Ivan Shmakov wrote:

    [attributions elided]

    > >>> An attacker can fetch and decrypt any images, and simulate the
    > >>> action of the loader as much as they like, but without breaking the
    > >>> underlying encryption method or possessing the private key they
    > >>> cannot generate their own image.

    >
    > >> Right. But since the image isn't a secret anyway, it's not worth
    > >> encrypting in the first place.

    >
    > >>> I don't see any need for random numbers.

    >
    > > How do you protect further communications between the device (and
    > > "whatever") *after* the image has been loaded? Embed keys in the
    > > images? Rely on some application (and instance) -specific source of
    > > entropy to algorithmically *generate* random numbers?

    >
    > If the secure communication after the image has been loaded is
    > an issue, then a good RNG is indeed a good thing to have.
    >
    > (I'm not sure, however, that such a requirement was explicitly
    > stated in the OP.)


    It *wasn't* stated. I had assumed that, if it was present for the
    image loading, then it would remain present thereafter :>

    E.g., I didn't explicitly state that the devices would use the
    network for inter-device *communication* (all I mentioned was
    image transfer), yet I fully expect to use it for that purpose!

    <wink>
     
    Don Y, Jul 30, 2011
    #13
  14. Noob

    Noob Guest

    Don Y wrote:

    > The challenge is to come up with a scheme by which only "approved"
    > images can be executed on the device(s).


    You should try asking on sci.crypt
     
    Noob, Aug 1, 2011
    #14
  15. Noob wrote:
    >Don Y wrote:
    >
    >> The challenge is to come up with a scheme by which only "approved"
    >> images can be executed on the device(s).

    >
    >You should try asking on sci.crypt


    Or read up on the many (often successful) attempts to
    root/jailbreak/open/break-into game consoles, cell phones, e-readers,
    very-big-and-secretive government agencies, banks, etc.,
    and reach the conclusion that it may not be possible with the limited
    resources guys like us have.
    --
    Roberto Waltman

    [ Please reply to the group,
    return address is invalid ]
     
    Roberto Waltman, Aug 1, 2011
    #15
  16. Don Y

    Don Y Guest

    Hi Roberto,

    On 8/1/2011 7:07 AM, Roberto Waltman wrote:
    > Noob wrote:
    >> Don Y wrote:
    >>
    >>> The challenge is to come up with a scheme by which only "approved"
    >>> images can be executed on the device(s).

    >>
    >> You should try asking on sci.crypt

    >
    > Or read up on the many (often successful) attempts to
    > root/jailbreak/open/break-into game consoles, cell phones, e-readers,
    > very-big-and-secretive government agencies, banks, etc.,
    > and reach the conclusion that it may not be possible with the limited
    > resources guys like us have.


    That sort of attack would be equivalent to an attack on a "deployed"
    device, in my scenario:

    3. *Deployed* devices are secure (not easily tampered without
    being observed) though the infrastructure is vulnerable and
    should be seen as providing *no* security.

    I.e., once a device "has its secret" (see conversation elsewhere),
    tampering with the device isn't easy to do -- *unless* you do it
    over the network, etc. (eavesdropping on "secure" communications,
    etc.).

    The hacks you describe are like letting someone into the building
    "after hours" and giving them free rein over the devices... (almost
    impossible to protect against -- even with deep pockets!)
     
    Don Y, Aug 1, 2011
    #16
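    [Archive note: once a deployed device holds its per-device secret (point 3
    above), "approving" a downloaded image reduces to a keyed-hash check against
    a tag supplied by the image server. A minimal sketch in Python; the
    `device_secret` provisioning and the server-side tagging step are
    assumptions for illustration, not anything specified in the thread:]

    ```python
    import hmac
    import hashlib

    def approve_image(image: bytes, tag: bytes, device_secret: bytes) -> bool:
        """Accept the image only if its HMAC-SHA256 tag (computed by the
        image server using the shared per-device secret) matches."""
        expected = hmac.new(device_secret, image, hashlib.sha256).digest()
        # compare_digest is constant-time, so the check itself leaks nothing
        return hmac.compare_digest(expected, tag)

    # Hypothetical values, for illustration only.
    secret = b"per-device secret, established after deployment"
    image = b"\x7fELF boot image bytes"
    tag = hmac.new(secret, image, hashlib.sha256).digest()

    print(approve_image(image, tag, secret))         # True
    print(approve_image(image + b"!", tag, secret))  # False
    ```

    [Note this only works *after* the secret exists; it says nothing about the
    initial "vulnerability window" the thread keeps circling back to.]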
  17. 1 Lucky Texan

    On Jul 29, 1:03 pm, Don Y <> wrote:
    > On 7/29/2011 7:40 AM, 1 Lucky Texan wrote:
    >


    > > If a system has a local RTC and has access to a host or internet or
    > > other external clock, perhaps subtracting (using local routine?) the
    > > 'drift' between the 2 clocks on  startup could yield a value to use.

    >
    > Hmmm... I suspect that would have to take some time to accumulate
    > enough entropy.  Also, it leaves you exposed to someone manipulating
    > that clock.
    >
    > I was hoping that the randomness could be handled *entirely*
    > locally -- so it would be more tamper-resistant.



    Manipulating the 'host' clock does not 'guarantee' a break-in since the
    local clock is part of the calc.

    Perhaps it would help if the local routine were more robust. Say, after
    performing the calculation, it rejects any '0' and it rejects the value
    stored from last time, then performs a few no-ops, then retries the
    subtraction. And, perhaps, use the result as a pointer into a preloaded
    local database of long 'keys' if you feel the need to further make it
    'guess resistant'. Not sure that's really helpful, though.
    This should make it very difficult for an attacker to force a re-boot
    and 'guess' at a result. It does of course require a battery-backed
    clock. (wouldn't have to be the RTC, could be a purpose-built 'noise/
    drift-prone' circuit)
    This should make it very difficult for an attacker to force a re-boot
    and 'guess' at a result. It does of course require a battery-backed
    clock. (wouldn't have to be the RTC, could be a purpose-built 'noise/
    drift-prone' circuit)
     
    1 Lucky Texan, Aug 2, 2011
    #17
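    [Archive note: the reject-0 / reject-last-value / retry loop described in
    the post above can be sketched in a few lines of Python. The clock readers
    are hypothetical stand-ins, since the post leaves the hardware unspecified:]

    ```python
    import time

    def drift_value(read_local, read_host, last_value, retries=8):
        """Difference two clock readings; reject 0 and a repeat of the
        value stored last time, then retry, as the post suggests."""
        for _ in range(retries):
            v = (read_host() - read_local()) & 0xFFFFFFFF
            if v != 0 and v != last_value:
                return v
            time.sleep(0.001)  # stand-in for "performs a few no-ops"
        return None  # exhausted: caller must treat this as failure, never as 0

    # Toy constant clocks for illustration: host reads 5, local reads 1.
    print(drift_value(lambda: 1, lambda: 5, last_value=0))  # 4
    print(drift_value(lambda: 1, lambda: 1, last_value=0))  # None (v == 0)
    ```

    [The `return None` branch matters: falling back to a fixed value on
    exhaustion would hand the attacker exactly the predictability the
    rejection tests are meant to remove.]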
  18. Don Y

    Don Y Guest

    On 8/2/2011 6:44 AM, 1 Lucky Texan wrote:
    > On Jul 29, 1:03 pm, Don Y<> wrote:
    >> On 7/29/2011 7:40 AM, 1 Lucky Texan wrote:
    >>

    >
    >>> If a system has a local RTC and has access to a host or internet or
    >>> other external clock, perhaps subtracting (using local routine?) the
    >>> 'drift' between the 2 clocks on startup could yield a value to use.

    >>
    >> Hmmm... I suspect that would have to take some time to accumulate
    >> enough entropy. Also, it leaves you exposed to someone manipulating
    >> that clock.
    >>
    >> I was hoping that the randomness could be handled *entirely*
    >> locally -- so it would be more tamper-resistant.

    >
    > Manipulating the 'host' clock does not 'guarantee' a break-in since the
    > local clock is part of the calc.


    Yes, but that can be predicted, to some degree (remember, the local
    device can't see changes/differences in its clock unless it has
    something to benchmark against -- some other reference).

    > Perhaps it would help if the local routine were more robust. Say, after
    > performing the calculation, it rejects any '0' and it rejects the value
    > stored from last time, then performs a few no-ops, then retries the
    > subtraction. And, perhaps, use the result as a pointer into a preloaded
    > local database of long 'keys' if you feel the need to further make it
    > 'guess resistant'. Not sure that's really helpful, though.
    > This should make it very difficult for an attacker to force a re-boot
    > and 'guess' at a result. It does of course require a battery-backed
    > clock. (wouldn't have to be the RTC, could be a purpose-built 'noise/
    > drift-prone' circuit)


    You have to assume the attacker can cycle power at any time
    (e.g., several of the systems I am working on run off PoE, so
    it is trivial for an attacker with access to the infrastructure
    to cycle power as well as to control network traffic).

    It's just A Bad Idea to have anything exposed to outside influences.
    *Motivated* adversaries will find *some* way to exploit that.
    Witness how insecure so many systems *designed* to be secure
    actually are! :-/ Hacking phones, vending machines, gaming
    devices, etc.
     
    Don Y, Aug 3, 2011
    #18
  19. George Neuner

    On Thu, 28 Jul 2011 01:25:30 -0700, Don Y <> wrote:

    >6. It is practically feasible for an adversary to selectively spoof
    >the "image server" at IPL. (i.e., enter into a dialog with a
    >particular device that is trying to boot at that time)


    Hi Don,

    Coming very late to this discussion, but I've read through most of the
    thread. I think you should take a look at the Amoeba OS security
    model.

    Amoeba's distributed trust system is too complex to describe here, but
    it is based on public-key plus (P)RNG. It's related to, but more
    secure and more scalable than Kerberos (if you're familiar with that).
    The system uses randomly generated session keys and public-key
    encryption to communicate security tokens - which may include
    symmetric encryption keys. The whole system is probably overkill for
    your need, but I think the core is simple enough.

    WRT hardware RNG, Linux has the ability to use an unconnected port. It
    just reads environmental noise and assembles a random set of bits.
    It's easy enough to put a small (unshielded) coil in the device
    connected to a 1-bit AD converter. That will give you a source of
    truly random radio noise.

    George
     
    George Neuner, Aug 4, 2011
    #19
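    [Archive note: a raw 1-bit sampler of radio noise, as George describes,
    will be biased and correlated, so it needs whitening before use. A common
    recipe (not from the thread) is a von Neumann extractor over sample pairs,
    followed by hashing the collected pool. A Python sketch, with the ADC
    replaced by a stand-in bit list:]

    ```python
    import hashlib

    def von_neumann(bits):
        """Debias a raw bit stream: per pair, (0,1) -> 0, (1,0) -> 1,
        equal pairs discarded. Unbiased if the samples are independent."""
        it = iter(bits)
        return [a for a, b in zip(it, it) if a != b]

    def condition(bits):
        """Fold the debiased bits into a fixed 32-byte seed via SHA-256."""
        packed = bytes(
            sum(bit << i for i, bit in enumerate(bits[j:j + 8]))
            for j in range(0, len(bits), 8)
        )
        return hashlib.sha256(packed).digest()

    raw = [0, 1, 1, 0, 1, 1, 0, 0, 1, 0]  # pretend 1-bit ADC samples
    print(von_neumann(raw))  # [0, 1, 1]
    ```

    [The extractor throws away at least half the samples, which is the usual
    price: a biased source yields few usable bits per raw sample, so the
    device must accumulate for a while before seeding anything.]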
  20. Don Y

    Don Y Guest

    Hi George,

    On 8/4/2011 2:39 PM, George Neuner wrote:
    > On Thu, 28 Jul 2011 01:25:30 -0700, Don Y<> wrote:
    >
    >> 6. It is practically feasible for an adversary to selectively spoof
    >> the "image server" at IPL. (i.e., enter into a dialog with a
    >> particular device that is trying to boot at that time)

    >
    > Coming very late to this discussion, but I've read through most of the
    > thread. I think you should take a look at the Amoeba OS security
    > model.


    Is this Tanenbaum's Amoeba (e.g., bullet server, etc.) or something
    newer (you tend to be more "current" on these things than I)? Google
    turned up an "AmoebaOS"...

    I'll dig through Tanenbaum's texts to see what turns up (the "AmoebaOS"
    web site seemed pretty uninformative).

    > Amoeba's distributed trust system is too complex to describe here, but
    > it is based on public-key plus (P)RNG. It's related to, but more
    > secure and more scalable than Kerberos (if you're familiar with that).
    > The system uses randomly generated session keys and public-key
    > encryption to communicate security tokens - which may include
    > symmetric encryption keys. The whole system is probably overkill for
    > your need, but I think the core is simple enough.


    I think my problem lies in wanting the devices to be "identical"
    at deployment. It seems like you really need a way to "marry"
    them to a particular system in order to avoid the possibility
    of spoofing (that vulnerability window mentioned elsewhere).

    > WRT hardware RNG, Linux has the ability to use an unconnected port. It
    > just reads environmental noise and assembles a random set of bits.
    > It's easy enough to put a small (unshielded) coil in the device
    > connected to a 1-bit AD converter. That will give you a source of
    > truly random radio noise.


    But you could (?) influence that with an RF source nearby.
    I was thinking of using something like bandwidth-limited
    thermal noise, etc. I.e., something that an attacker can't
    influence (?).

    [I've become super paranoid of just how clever attackers can
    be -- esp if you give them the sources, schematics, etc. :< ]
     
    Don Y, Aug 5, 2011
    #20