Memory compatibility?

Discussion in 'Asus' started by Puddin' Man, Jun 7, 2011.

  1. Puddin' Man

    Puddin' Man Guest

    Just about 1 year ago, I built a desktop based on:

    Intel i5-650
    Asus P7H55D-M EVO
    G.SKILL Ripjaws Series 4GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model
    F3-12800CL9D-4GBRL
    <etc>

    The Ripjaws mem tested OK with memtest86+. The system has been doing well for the last
    year.

    Newegg now has a sale on:

    G.SKILL Ripjaws Series 4GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model
    F3-12800CL9S-4GBRL

    for $34. Note that the "old" is 9D, the "new" is 9S.

    I can't discern any difference in the specs:

    "old" - http://www.gskill.com/products.php?index=222
    "new" -
    http://www.newegg.com/Product/Produ...-na-_-na&AID=10446076&PID=361116&SID=FW9wkedl
    (apologies for wrap)

    The Asus P7H55D-M EVO has 2 empty dimm slots. Any reason -not- to expect full
    compatibility between the old and new Ripjaws mem?

    Thx,
    P

    "Law Without Equity Is No Law At All. It Is A Form Of Jungle Rule."
     
    Puddin' Man, Jun 7, 2011
    #1

  2. Paul

    Paul Guest

    Puddin' Man wrote:
    > Just about 1 year ago, I built a desktop based on:
    >
    > Intel i5-650
    > Asus P7H55D-M EVO
    > G.SKILL Ripjaws Series 4GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model
    > F3-12800CL9D-4GBRL
    > <etc>
    >
    > The Ripjaws mem tested OK with memtest86+. The system has been doing well for the last
    > year.
    >
    > Newegg now has a sale on:
    >
    > G.SKILL Ripjaws Series 4GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model
    > F3-12800CL9S-4GBRL
    >
    > for $34. Note that the "old" is 9D, the "new" is 9S.
    >
    > I can't discern any difference in the specs:
    >
    > "old" - http://www.gskill.com/products.php?index=222
    > "new" -
    > http://www.newegg.com/Product/Produ...-na-_-na&AID=10446076&PID=361116&SID=FW9wkedl
    > (apologies for wrap)
    >
    > The Asus P7H55D-M EVO has 2 empty dimm slots. Any reason -not- to expect full
    > compatibility between the old and new Ripjaws mem?
    >
    > Thx,
    > P


    One product consists of (2) 2GB sticks, while the other is (1) 4GB stick ?

    Maybe the "D" stands for Dual, the "S" for Single ?

    Check the vip.asus.com forums, or use the Newegg reviews for your P7H55D-M EVO
    motherboard, to see how well 4GB sticks work.

    Paul
     
    Paul, Jun 7, 2011
    #2

  3. Puddin' Man

    Puddin' Man Guest

    On Tue, 07 Jun 2011 16:01:51 -0400, Paul <> wrote:

    >One product consists of (2) 2GB sticks, while the other is (1) 4GB stick ?


    It appears you are correct. I missed the obvious.

    >Maybe the "D" stands for Dual, the "S" for Single ?


    Less certain about that.

    >Check the vip.asus.com forums, or use the Newegg reviews for your P7H55D-M EVO
    >motherboard, to see how well 4GB sticks work.


    The reviews looked very good, but mine is a dual-channel design, and, as I
    recall, a 4gb stick would unbalance it. Prefer not to go that route.

    Many Thanks,
    P

    "Law Without Equity Is No Law At All. It Is A Form Of Jungle Rule."
     
    Puddin' Man, Jun 7, 2011
    #3
  4. Paul

    Paul Guest

    Puddin' Man wrote:
    > On Tue, 07 Jun 2011 16:01:51 -0400, Paul <> wrote:
    >
    >> One product consists of (2) 2GB sticks, while the other is (1) 4GB stick ?

    >
    > It appears you are correct. I missed the obvious.
    >
    >> Maybe the "D" stands for Dual, the "S" for Single ?

    >
    > Less certain about that.
    >
    >> Check the vip.asus.com forums, or use the Newegg reviews for your P7H55D-M EVO
    >> motherboard, to see how well 4GB sticks work.

    >
    > The reviews looked very good, but mine is a dual-channel design, and, as I
    > recall, a 4gb stick would unbalance it. Prefer not to go that route.
    >
    > Many Thanks,
    > P
    >


    You could use 2x2GB plus 2x4GB. The speed would be adjusted to the
    slowest pair of modules. A slight correction may be required with
    four modules - if memory testing shows a problem, you can make
    a slight adjustment (bump up CAS one notch, use extra VDimm,
    command rate is probably already 2N, drop memory bus clock
    rate one notch). Don't boot into Windows, until memtest86+ is
    passing clean.

    Paul
     
    Paul, Jun 7, 2011
    #4
  5. Puddin' Man

    Puddin' Man Guest

    On Tue, 07 Jun 2011 18:12:10 -0400, Paul <> wrote:

    >Puddin' Man wrote:
    >> On Tue, 07 Jun 2011 16:01:51 -0400, Paul <> wrote:
    >>
    >>> One product consists of (2) 2GB sticks, while the other is (1) 4GB stick ?

    >>
    >> It appears you are correct. I missed the obvious.
    >>
    >>> Maybe the "D" stands for Dual, the "S" for Single ?

    >>
    >> Less certain about that.
    >>
    >>> Check the vip.asus.com forums, or use the Newegg reviews for your P7H55D-M EVO
    >>> motherboard, to see how well 4GB sticks work.

    >>
    >> The reviews looked very good, but mine is a dual-channel design, and, as I
    >> recall, a 4gb stick would unbalance it. Prefer not to go that route.
    >>
    >> Many Thanks,
    >> P
    >>

    >
    >You could use 2x2GB plus 2x4GB. The speed would be adjusted to the
    >slowest pair of modules. A slight correction may be required with
    >four modules - if memory testing shows a problem, you can make
    >a slight adjustment (bump up CAS one notch, use extra VDimm,
    >command rate is probably already 2N, drop memory bus clock
    >rate one notch). Don't boot into Windows, until memtest86+ is
    >passing clean.


    At least 3 full iterations.

    Worth considering, I guess, but I'm not sure I'm gonna need 12 gb
    in the long run. Suspect 8 gb would be sufficient.

    Somewhat curious that the same 2x2GB that I originally installed
    is now priced at $47.99 while the "equivalent" 1x4GB is down to $34.
    Maybe they'll price the 2x2GB down as well before it's all over.

    Best,
    P

    "Law Without Equity Is No Law At All. It Is A Form Of Jungle Rule."
     
    Puddin' Man, Jun 8, 2011
    #5
  6. Paul

    Paul Guest

    Puddin' Man wrote:

    >
    > Worth considering, I guess, but I'm not sure I'm gonna need 12 gb
    > in the long run. Suspect 8 gb would be sufficient.
    >
    > Somewhat curious that the same 2x2GB that I originally installed
    > is now priced at $47.99 while the "equivalent" 1x4GB is down to $34.
    > Maybe they'll price the 2x2GB down as well before it's all over.
    >
    > Best,
    > P


    I like to keep my modules in pairs, as it would make potential resale
    easier.

    If you look in section 2.4.2 of the manual ("Memory Configurations"
    or something similar), you'll likely see that the memory controller
    supports flex memory.

    You can use three DIMMs, like this. Two DIMMs go on one channel, and
    one DIMM on the other channel. Memory quantity in each channel is
    equal, which means it's "dual channel from top to bottom". Memory
    speed would be reflective of a 4 stick configuration, as the
    more heavily loaded channel limits how aggressively it can be set up.

    2GB 4GB
    2GB

    Another thing you can do, is tolerate this kind of configuration.

    2GB 4GB

    In that case, the lower 2GB+2GB of memory space is dual channel, while
    the last remaining 2GB on the right, runs single channel. This makes the
    bandwidth performance of the memory, location specific. The
    computer still works. (I tested this about five years ago, on
    one of my computers, and couldn't detect which memory space I was
    in, based on the observed speed.)

    So you can do just about anything you want with it.
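
    (For illustration, a quick sketch of that arithmetic in Python, assuming only
    that the interleaved region spans twice the smaller channel's capacity and the
    remainder on the larger channel runs single channel; the function name is made up.)

    def flex_memory_regions(chan_a_gb, chan_b_gb):
        # Dual-channel (interleaved) region covers twice the smaller channel's capacity;
        # whatever is left over on the larger channel runs single channel.
        dual = 2 * min(chan_a_gb, chan_b_gb)
        single = abs(chan_a_gb - chan_b_gb)
        return dual, single

    print(flex_memory_regions(4, 4))  # 2GB+2GB vs 4GB -> (8, 0), dual channel throughout
    print(flex_memory_regions(2, 4))  # 2GB vs 4GB -> (4, 2), 4GB dual plus 2GB single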

    If it was mine, I'd run it like this

    4GB 4GB

    and save the 2x2GB in a drawer for later. If the 4GB modules fail,
    you have some spare memory on hand. Or, if you decide some day to test
    12GB, again, you can do it. (I prefer to run my machines with two sticks,
    and have done so, on the last four computers.)

    4GB 4GB
    2GB 2GB

    HTH,
    Paul
     
    Paul, Jun 8, 2011
    #6
  7. Paul

    Paul Guest

    Puddin' Man wrote:
    > On Wed, 08 Jun 2011 03:07:52 -0400, Paul <> wrote:
    >
    >> Puddin' Man wrote:
    >>
    >>> Worth considering, I guess, but I'm not sure I'm gonna need 12 gb
    >>> in the long run. Suspect 8 gb would be sufficient.
    >>>
    >>> Somewhat curious that the same 2x2GB that I originally installed
    >>> is now priced at $47.99 while the "equivalent" 1x4GB is down to $34.
    >>> Maybe they'll price the 2x2GB down as well before it's all over.
    >>>
    >>> Best,
    >>> P

    >> I like to keep my modules in pairs, as it would make potential resale
    >> easier.

    >
    > There are numerous advantages in keeping modules in pairs.
    >
    >> If you look in section 2.4.2 of the manual ("Memory Configurations"
    >> or something similar), you'll likely see that the memory controller
    >> supports flex memory.

    >
    > I cannot count the Asus manual as the definitive authority on the
    > memory controller as it resides in the 45nm "half" of my i5-650
    > cpu die and lies in the "Domain of Intel", not Asus.


    You have the option of downloading the datasheet for the Intel processor
    and verifying the existence of a flex memory controller. When Asus writes
    the manual, they too would have access to the datasheet, and how the memory
    (and BIOS setup code) works.

    I'd download the file myself and look, but it means booting up a VM
    and viewing the PDF there. (I use a certain version of Acrobat Reader,
    as the newest versions, suck. Intel PDF documents now require the latest
    Reader.)

    >
    >> You can use three DIMMs, like this. Two DIMMs go on one channel, and
    >> one DIMM on the other channel. Memory quantity in each channel is
    >> equal, which means it's "dual channel from top to bottom". Memory
    >> speed would be reflective of a 4 stick configuration, as the
    >> more heavily loaded channel limits how aggressively it can be set up.
    >>
    >> 2GB 4GB
    >> 2GB

    >
    > A literal reading indicates you are correct in this. I was assuming
    > it somehow mapped the mem by dimm sockets. The manual indicates any
    > combination of 1, 2, and 4gb modules are OK and dual-channel performance
    > is totally maintained whenever mem capacity in both channels is equal.


    There are two implementations of dual channel in existence. The AMD
    version, used to make a "128 bit wide DIMM" from two DIMMs, and that
    method implied a need to match modules religiously.

    The other method, is to have a memory controller per DIMM, and
    when an access request comes in, the mapping logic determines
    which controller responds. In the 2+2 versus 4GB DIMM case,
    the "hits" fall in the appropriate place, and alternate between
    channels. Since accesses tend to involve cache line sized requests,
    there aren't really any "boundary" conditions to speak of.

    When you have mismatched amounts of memory on each channel,
    the requests are all answered by a single controller on one
    channel, when there is no memory across from it.

    The Flex memory setup, always does the best it can (i.e. you
    can't "build a better one"). And all it needs to do that, is
    for the hardware (controllers) to be set up properly, so that
    a continuous physical memory map exists (i.e. don't have
    two controllers respond to the same absolute address).
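
    (Again, just a Python sketch of that mapping idea; the cache-line-sized interleave
    granularity here is an assumption for illustration, not something taken from the
    Intel datasheet, and the function name is made up.)

    CACHE_LINE = 64  # bytes; interleave granularity assumed for illustration
    GB = 1 << 30

    def channel_for_address(addr, chan_a_bytes, chan_b_bytes):
        # Below the top of the balanced region, successive cache lines alternate
        # between channels; above it, only the larger channel has memory left.
        dual_top = 2 * min(chan_a_bytes, chan_b_bytes)
        if addr < dual_top:
            return (addr // CACHE_LINE) % 2
        return 0 if chan_a_bytes > chan_b_bytes else 1

    print(channel_for_address(3 * GB, 2 * GB, 4 * GB))  # below the 4GB mark: alternating
    print(channel_for_address(5 * GB, 2 * GB, 4 * GB))  # above it: always the 4GB channel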

    >
    >> Another thing you can do, is tolerate this kind of configuration.
    >>
    >> 2GB 4GB
    >>
    >> In that case, the lower 2GB+2GB of memory space is dual channel, while
    >> the last remaining 2GB on the right, runs single channel. This makes the
    >> bandwidth performance of the memory, location specific. The
    >> computer still works. (I tested this about five years ago, on
    >> one of my computers, and couldn't detect which memory space I was
    >> in, based on the observed speed.)

    >
    > You couldn't discern a difference between single and dual-channel performance?
    >
    > I don't follow you on this. If you have 2gb in channel A and 4gb in channel B:
    >
    > a.) It is dual-channel unbalanced.
    > b.) According to the manual, the board maps 2gb from each channel as dual-
    > channel, and the excess 2gb in channel B as single-channel (per section 2.4.2).


    I was able to use a specially modified copy of memtest86+, to verify
    my memory controller actually worked that way. The dual channel section
    of the memory space, ran 1400MB/sec, while the single channel section of the
    memory space, ran 900MB/sec. (That gives you some idea how long ago this was.)
    The memory config looked like this.

    512MB <--- remaining memory runs at single channel rates
    512MB 512MB <--- dual channel memory space

    What I'm saying is, when running applications in Windows, if you're blindfolded,
    you can't tell which portion of that space you're using at the moment. Even
    though there is a 500MB/sec difference in memory bandwidth, the L1 and L2
    cache tend to hide the details. It feels just as smooth, as if the
    machine was running fully at 1400MB/sec.

    The benchmark application can detect the difference, and I used memtest86+
    because it works with physical memory addresses, and I could be absolutely
    sure of what I was testing. The code change required adding three lines
    to the source, to print additional information to the memtest screen.

    >
    >> So you can do just about anything you want with it.

    >
    > Subject to certain performance constraints.


    Pre-built computers are frequently shipped with "stupid" configurations,
    which take advantage of Flex memory, and nobody is the wiser.

    >
    >> If it was mine, I'd run it like this
    >>
    >> 4GB 4GB
    >>
    >> and save the 2x2GB in a drawer for later. If the 4GB modules fail,
    >> you have some spare memory on hand. Or, if you decide some day to test
    >> 12GB, again, you can do it. (I prefer to run my machines with two sticks,
    >> and have done so, on the last four computers.)
    >>
    >> 4GB 4GB
    >> 2GB 2GB

    >
    > There are certain advantages in so doing. If the 4gb modules were selling
    > for, say, $12 each ...
    >
    > Cheers,
    > P


    So you're saying, you don't appreciate the fact you're getting a
    gigabyte of memory for roughly $10 ? How many things can you buy at the
    store today, that are falling in price ? I went to Home Depot the
    other day, to price a four foot length of tubular steel, and
    they wanted close to $16 for it. Luckily for us, memory is
    still headed in the opposite direction. And it's not
    even clear, why that is happening. You'd think with
    Japan all messed up, there'd be some price gouging.

    Paul
     
    Paul, Jun 9, 2011
    #7
  8. Puddin' Man

    Puddin' Man Guest

    On Wed, 08 Jun 2011 19:34:45 -0400, Paul <> wrote:

    >Puddin' Man wrote:
    >> On Wed, 08 Jun 2011 03:07:52 -0400, Paul <> wrote:
    >>
    >>> Puddin' Man wrote:
    >>>
    >>>> Worth considering, I guess, but I'm not sure I'm gonna need 12 gb
    >>>> in the long run. Suspect 8 gb would be sufficient.
    >>>>
    >>>> Somewhat curious that the same 2x2GB that I originally installed
    >>>> is now priced at $47.99 while the "equivalent" 1x4GB is down to $34.
    >>>> Maybe they'll price the 2x2GB down as well before it's all over.
    >>>>
    >>>> Best,
    >>>> P
    >>> I like to keep my modules in pairs, as it would make potential resale
    >>> easier.

    >>
    >> There are numerous advantages in keeping modules in pairs.
    >>
    >>> If you look in section 2.4.2 of the manual ("Memory Configurations"
    >>> or something similar), you'll likely see that the memory controller
    >>> supports flex memory.


    Note ref to "memory controller" (singular).

    >> I cannot count the Asus manual as the definitive authority on the
    >> memory controller as it resides in the 45nm "half" of my i5-650
    >> cpu die and lies in the "Domain of Intel", not Asus.

    >
    >You have the option of downloading the datasheet for the Intel processor
    >and verifying the existence of a flex memory controller. When Asus writes
    >the manual, they too would have access to the datasheet, and how the memory
    >(and BIOS setup code) works.
    >
    >I'd download the file myself and look, but it means booting up a VM
    >and viewing the PDF there. (I use a certain version of Acrobat Reader,
    >as the newest versions, suck. Intel PDF documents now require the latest
    >Reader.)


    That's hardly necessary. However, I'll briefly mention that the datasheet
    is written in (hopefully, hopefully) understandable American-English,
    while the Asus manual is written in (very ambitious) Taiwanese-English,
    thereby leaving considerable margin for less-than-perfect interpretation(s). :)

    >>
    >>> You can use three DIMMs, like this. Two DIMMs go on one channel, and
    >>> one DIMM on the other channel. Memory quantity in each channel is
    >>> equal, which means it's "dual channel from top to bottom". Memory
    >>> speed would be reflective of a 4 stick configuration, as the
    >>> more heavily loaded channel limits how aggressively it can be set up.
    >>>
    >>> 2GB 4GB
    >>> 2GB

    >>
    >> A literal reading indicates you are correct in this. I was assuming
    >> it somehow mapped the mem by dimm sockets. The manual indicates any
    >> combination of 1, 2, and 4gb modules are OK and dual-channel performance
    >> is totally maintained whenever mem capacity in both channels is equal.

    >
    >There are two implementations of dual channel in existence. The AMD
    >version, used to make a "128 bit wide DIMM" from two DIMMs, and that
    >method implied a need to match modules religiously.


    This is likely what I was previously thinking of.

    >The other method, is to have a memory controller per DIMM, and


    "a memory controller per DIMM"??? Uh-oh, we could have difficulties,
    here. :)

    >when an access request comes in, the mapping logic determines
    >which controller responds.


    Note ref to "which controller" (implies several).

    >In the 2+2 versus 4GB DIMM case,
    >the "hits" fall in the appropriate place, and alternate between
    >channels. Since accesses tend to involve cache line sized requests,
    >there aren't really any "boundary" conditions to speak of.
    >
    >When you have mismatched amounts of memory on each channel,
    >the requests are all answered by a single controller on one
    >channel, when there is no memory across from it.


    This statement seems to contradict itself. If there is -some-
    volume of memory on each channel, there cannot be an absence
    of memory on any adjacent channel.

    >The Flex memory setup, always does the best it can (i.e. you
    >can't "build a better one"). And all it needs to do that, is
    >for the hardware (controllers) to be set up properly, so that
    >a continuous physical memory map exists (i.e. don't have
    >two controllers respond to the same absolute address).


    Note ref to "memory controllers" (plural).

    >>
    >>> Another thing you can do, is tolerate this kind of configuration.
    >>>
    >>> 2GB 4GB
    >>>
    >>> In that case, the lower 2GB+2GB of memory space is dual channel, while
    >>> the last remaining 2GB on the right, runs single channel. This makes the
    >>> bandwidth performance of the memory, location specific. The
    >>> computer still works. (I tested this about five years ago, on
    >>> one of my computers, and couldn't detect which memory space I was
    >>> in, based on the observed speed.)

    >>
    >> You couldn't discern a difference between single and dual-channel performance?
    >>
    >> I don't follow you on this. If you have 2gb in channel A and 4gb in channel B:
    >>
    >> a.) It is dual-channel unbalanced.
    >> b.) According to the manual, the board maps 2gb from each channel as dual-
    >> channel, and the excess 2gb in channel B as single-channel (per section 2.4.2).

    >
    >I was able to use a specially modified copy of memtest86+, to verify
    >my memory controller actually worked that way. The dual channel section
    >of the memory space, ran 1400MB/sec, while the single channel section of the
    >memory space, ran 900MB/sec. (That gives you some idea how long ago this was.)
    >The memory config looked like this.
    >
    > 512MB <--- remaining memory runs at single channel rates
    > 512MB 512MB <--- dual channel memory space
    >
    >What I'm saying is, when running applications in Windows, if you're blindfolded,
    >you can't tell which portion of that space you're using at the moment. Even
    >though there is a 500MB/sec difference in memory bandwidth, the L1 and L2
    >cache tend to hide the details. It feels just as smooth, as if the
    >machine was running fully at 1400MB/sec.


    Check.

    >The benchmark application can detect the difference, and I used memtest86+
    >because it works with physical memory addresses, and I could be absolutely
    >sure of what I was testing. The code change required adding three lines
    >to the source, to print additional information to the memtest screen.


    Neat.

    >>> So you can do just about anything you want with it.

    >>
    >> Subject to certain performance constraints.

    >
    >Pre-built computers are frequently shipped with "stupid" configurations,
    >which take advantage of Flex memory, and nobody is the wiser.


    One reason why we build.

    >>
    >>> If it was mine, I'd run it like this
    >>>
    >>> 4GB 4GB
    >>>
    >>> and save the 2x2GB in a drawer for later. If the 4GB modules fail,
    >>> you have some spare memory on hand. Or, if you decide some day to test
    >>> 12GB, again, you can do it. (I prefer to run my machines with two sticks,
    >>> and have done so, on the last four computers.)
    >>>
    >>> 4GB 4GB
    >>> 2GB 2GB

    >>
    >> There are certain advantages in so doing. If the 4gb modules were selling
    >> for, say, $12 each ...
    >>
    >> Cheers,
    >> P

    >
    >So you're saying, you don't appreciate the fact you're getting a
    >gigabyte of memory for roughly $10 ?


    Actually a bit less. $34/4 = $8.50.

    >How many things can you buy at the
    >store today, that are falling in price ? I went to Home Depot the
    >other day, to price a four foot length of tubular steel, and
    >they wanted close to $16 for it.


    Yeah, it's been like that around here for years. Vicious price increases
    and strictly comical gov't efforts to lie about it.

    >Luckily for us, memory is
    >still headed in the opposite direction. And it's not
    >even clear, why that is happening. You'd think with
    >Japan all messed up, there'd be some price gouging.


    Most memory is made in non-Nipponese Asian locales?

    For the record, I -have- to appreciate the $8.50/gb price because
    I cannot forget having to pay ~ $110/4 = $27.50/gb for it 13 months
    ago. There was a "spike", and I needed to complete/test a build.

    But I have to justify expense on pc resources according to expectations of
    usage. If the "sweet spot" for my little desktop is expected to be around 8gb
    physical mem, it's hard to justify the additional expense of an added 4gb, if
    only $34.

    Prost,
    P

    "Law Without Equity Is No Law At All. It Is A Form Of Jungle Rule."
     
    Puddin' Man, Jun 9, 2011
    #8
  9. Paul

    Paul Guest

    Puddin' Man wrote:
    > On Wed, 08 Jun 2011 19:34:45 -0400, Paul <> wrote:
    >
    >> Puddin' Man wrote:
    >>> On Wed, 08 Jun 2011 03:07:52 -0400, Paul <> wrote:
    >>>
    >>>> Puddin' Man wrote:
    >>>>
    >>>>> Worth considering, I guess, but I'm not sure I'm gonna need 12 gb
    >>>>> in the long run. Suspect 8 gb would be sufficient.
    >>>>>
    >>>>> Somewhat curious that the same 2x2GB that I originally installed
    >>>>> is now priced at $47.99 while the "equivalent" 1x4GB is down to $34.
    >>>>> Maybe they'll price the 2x2GB down as well before it's all over.
    >>>>>
    >>>>> Best,
    >>>>> P
    >>>> I like to keep my modules in pairs, as it would make potential resale
    >>>> easier.
    >>> There are numerous advantages in keeping modules in pairs.
    >>>
    >>>> If you look in section 2.4.2 of the manual ("Memory Configurations"
    >>>> or something similar), you'll likely see that the memory controller
    >>>> supports flex memory.

    >
    > Note ref to "memory controller" (singular).
    >
    >>> I cannot count the Asus manual as the definitive authority on the
    >>> memory controller as it resides in the 45nm "half" of my i5-650
    >>> cpu die and lies in the "Domain of Intel", not Asus.

    >> You have the option of downloading the datasheet for the Intel processor
    >> and verifying the existence of a flex memory controller. When Asus writes
    >> the manual, they too would have access to the datasheet, and how the memory
    >> (and BIOS setup code) works.
    >>
    >> I'd download the file myself and look, but it means booting up a VM
    >> and viewing the PDF there. (I use a certain version of Acrobat Reader,
    >> as the newest versions, suck. Intel PDF documents now require the latest
    >> Reader.)

    >
    > That's hardly necessary. However, I'll briefly mention that the datasheet
    > is written in (hopefully, hopefully) understandable American-English,
    > while the Asus manual is written in (very ambitious) Taiwanese-English,
    > thereby leaving considerable margin for less-than-perfect interpretation(s). :)
    >
    >>>> You can use three DIMMs, like this. Two DIMMs go on one channel, and
    >>>> one DIMM on the other channel. Memory quantity in each channel is
    >>>> equal, which means it's "dual channel from top to bottom". Memory
    >>>> speed would be reflective of a 4 stick configuration, as the
    >>>> more heavily loaded channel limits how aggressively it can be set up.
    >>>>
    >>>> 2GB 4GB
    >>>> 2GB
    >>> A literal reading indicates you are correct in this. I was assuming
    >>> it somehow mapped the mem by dimm sockets. The manual indicates any
    >>> combination of 1, 2, and 4gb modules are OK and dual-channel performance
    >>> is totally maintained whenever mem capacity in both channels is equal.

    >> There are two implementations of dual channel in existence. The AMD
    >> version, used to make a "128 bit wide DIMM" from two DIMMs, and that
    >> method implied a need to match modules religiously.

    >
    > This is likely what I was previously thinking of.
    >
    >> The other method, is to have a memory controller per DIMM, and

    >
    > "a memory controller per DIMM"??? Uh-oh, we could have difficulties,
    > here. :)
    >
    >> when an access request comes in, the mapping logic determines
    >> which controller responds.

    >
    > Note ref to "which controller" (implies several).
    >
    >> In the 2+2 versus 4GB DIMM case,
    >> the "hits" fall in the appropriate place, and alternate between
    >> channels. Since accesses tend to involve cache line sized requests,
    >> there aren't really any "boundary" conditions to speak of.
    >>
    >> When you have mismatched amounts of memory on each channel,
    >> the requests are all answered by a single controller on one
    >> channel, when there is no memory across from it.

    >
    > This statement seems to contradict itself. If there is -some-
    > volume of memory on each channel, there cannot be an absence
    > of memory on any adjacent channel.
    >
    >> The Flex memory setup, always does the best it can (i.e. you
    >> can't "build a better one"). And all it needs to do that, is
    >> for the hardware (controllers) to be set up properly, so that
    >> a continuous physical memory map exists (i.e. don't have
    >> two controllers respond to the same absolute address).

    >
    > Note ref to "memory controllers" (plural).
    >
    >>>> Another thing you can do, is tolerate this kind of configuration.
    >>>>
    >>>> 2GB 4GB
    >>>>
    >>>> In that case, the lower 2GB+2GB of memory space is dual channel, while
    >>>> the last remaining 2GB on the right, runs single channel. This makes the
    >>>> bandwidth performance of the memory, location specific. The
    >>>> computer still works. (I tested this about five years ago, on
    >>>> one of my computers, and couldn't detect which memory space I was
    >>>> in, based on the observed speed.)
    >>> You couldn't discern a difference between single and dual-channel performance?
    >>>
    >>> I don't follow you on this. If you have 2gb in channel A and 4gb in channel B:
    >>>
    >>> a.) It is dual-channel unbalanced.
    >>> b.) According to the manual, the board maps 2gb from each channel as dual-
    >>> channel, and the excess 2gb in channel B as single-channel (per section 2.4.2).

    >> I was able to use a specially modified copy of memtest86+, to verify
    >> my memory controller actually worked that way. The dual channel section
    >> of the memory space, ran 1400MB/sec, while the single channel section of the
    >> memory space, ran 900MB/sec. (That gives you some idea how long ago this was.)
    >> The memory config looked like this.
    >>
    >> 512MB <--- remaining memory runs at single channel rates
    >> 512MB 512MB <--- dual channel memory space
    >>
    >> What I'm saying is, when running applications in Windows, if you're blindfolded,
    >> you can't tell which portion of that space you're using at the moment. Even
    >> though there is a 500MB/sec difference in memory bandwidth, the L1 and L2
    >> cache tend to hide the details. It feels just as smooth, as if the
    >> machine was running fully at 1400MB/sec.

    >
    > Check.
    >
    >> The benchmark application can detect the difference, and I used memtest86+
    >> because it works with physical memory addresses, and I could be absolutely
    >> sure of what I was testing. The code change required adding three lines
    >> to the source, to print additional information to the memtest screen.

    >
    > Neat.
    >
    >>>> So you can do just about anything you want with it.
    >>> Subject to certain performance constraints.

    >> Pre-built computers are frequently shipped with "stupid" configurations,
    >> which take advantage of Flex memory, and nobody is the wiser.

    >
    > One reason why we build.
    >
    >>>> If it was mine, I'd run it like this
    >>>>
    >>>> 4GB 4GB
    >>>>
    >>>> and save the 2x2GB in a drawer for later. If the 4GB modules fail,
    >>>> you have some spare memory on hand. Or, if you decide some day to test
    >>>> 12GB, again, you can do it. (I prefer to run my machines with two sticks,
    >>>> and have done so, on the last four computers.)
    >>>>
    >>>> 4GB 4GB
    >>>> 2GB 2GB
    >>> There are certain advantages in so doing. If the 4gb modules were selling
    >>> for, say, $12 each ...
    >>>
    >>> Cheers,
    >>> P

    >> So you're saying, you don't appreciate the fact you're getting a
    >> gigabyte of memory for roughly $10 ?

    >
    > Actually a bit less. $34/4 = $8.50.
    >
    >> How many things can you buy at the
    >> store today, that are falling in price ? I went to Home Depot the
    >> other day, to price a four foot length of tubular steel, and
    >> they wanted close to $16 for it.

    >
    > Yeah, it's been like that around here for years. Vicious price increases
    > and strictly comical gov't efforts to lie about it.
    >
    >> Luckily for us, memory is
    >> still headed in the opposite direction. And it's not
    >> even clear, why that is happening. You'd think with
    >> Japan all messed up, there'd be some price gouging.

    >
    > Most memory is made in non-Nipponese Asian locales?
    >
    > For the record, I -have- to appreciate the $8.50/gb price because
    > I cannot forget having to pay ~ $110/4 = $27.50/gb for it 13 months
    > ago. There was a "spike", and I needed to complete/test a build.
    >
    > But I have to justify expense on pc resources according to expectations of
    > usage. If the "sweet spot" for my little desktop is expected to be around 8gb
    > physical mem, it's hard to justify the additional expense of an added 4gb, if
    > only $34.
    >
    > Prost,
    > P
    >
    > "Law Without Equity Is No Law At All. It Is A Form Of Jungle Rule."
    >


    I don't know exactly how Intel chose to implement their memory
    controller. But the difference is, there is more independence between
    DIMMs and channels. AMD simplified their scheme, because doing
    so reduced the design time to get their product to market. Intel
    has about 10x the design resources of AMD, so they can do all
    sorts of stuff if they want. As far as I know, Nvidia were the
    first to support this sort of thing (Flex memory like operation),
    so Intel shouldn't necessarily get all the credit.

    In the case of the Nforce2 chipset (the one I tested with the custom
    version of memtest86+), if you looked in Device Manager, there
    was actual evidence of individual entries for controllers. Intel
    doesn't show such details. Which is fine, because it can all remain
    safely hidden at the BIOS level. (The BIOS does all the heavy
    lifting anyway, and sets up the mappings and any hardware details.
    Windows shouldn't be doing that.)

    When there are uneven amounts of memory on the two channels, some of
    that memory can still be matched up across the channels. For example,
    for the purposes of drawing a diagram, I can install two DIMMs like this.

    512MB 1GB

    and it ends up looking like this, for the purposes of explanation.

    512MB
    512MB 512MB

    For the bottom 1GB of space, you can alternate side to side, going
    up through the memory space.

    At just above the 1GB mark, now, you've only got 512MB remaining,
    and it's on the right hand channel. There is no opportunity to
    go from side to side any more. Accesses can only go to that channel,
    and so the measured memory bandwidth drops. When I did my benchmark
    with memtest86+, that's what I found. Above the "magic mark" for
    my configuration, bandwidth changed from 1400MB/sec to 900MB/sec.

    You can see here that Flex Memory usage is detectable with synthetic
    benchmarks, but is harder to see with practical applications. Notice
    that the system still works in single channel, and single channel
    mode carries a bit more of a penalty. The problem with doing benchmarks
    this way is that you have no control over what memory is being tested.
    Presumably, at least initially after a reboot, Windows "fills" memory
    in a certain direction, choosing to start either near the top of the
    physical address space or nearer the bottom. So when doing these
    kinds of tests, with a memory configuration like the one I show above,
    particular care would have to be taken to get the "right" result. For
    example, if your benchmark happened to run within the lower 1GB,
    you might see no difference at all between true dual channel and
    some unbalanced flex mode.

    http://www.tomshardware.com/reviews/intel-stakes-vision-pc-future-775-launch,830-28.html

    This isn't a worry for you. If you want to run 2+2 on one channel and
    4GB on the other channel, that is "dual channel all the way". If
    you want to test how much of an effect an unbalanced config can
    make, you can also test 2GB on one channel, and 4GB on the other (6GB total).
    Then do your benchmarks.
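
    (One last Python sketch, of why the benchmark's placement matters; the 1400 and
    900 MB/sec figures are just the old Nforce2 numbers from earlier in the thread,
    used as stand-ins, and the function name is made up.)

    GB = 1 << 30
    DUAL_BW, SINGLE_BW = 1400, 900  # MB/sec, illustrative stand-in figures only

    def observed_bandwidth(buf_start, buf_size, chan_a=2 * GB, chan_b=4 * GB):
        # Split the benchmark buffer between the interleaved region and the
        # single-channel tail, then time-weight each part at its own rate.
        dual_top = 2 * min(chan_a, chan_b)
        dual_part = max(0, min(buf_start + buf_size, dual_top) - buf_start)
        single_part = buf_size - dual_part
        elapsed = dual_part / DUAL_BW + single_part / SINGLE_BW
        return buf_size / elapsed  # byte units cancel out, result is in MB/sec

    print(observed_bandwidth(0, 1 * GB))       # entirely in the dual-channel region: 1400
    print(observed_bandwidth(5 * GB, 1 * GB))  # entirely in the single-channel tail: 900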

    Paul
     
    Paul, Jun 9, 2011
    #9
  10. Puddin' Man

    Puddin' Man Guest

    On Thu, 09 Jun 2011 17:14:56 -0400, Paul <> wrote:

    >I don't know exactly how Intel chose to implement their memory
    >controller. But the difference is, there is more independence between
    >DIMMs and channels. AMD simplified their scheme, because doing
    >so reduced the design time to get their product to market. Intel
    >has about 10x the design resources of AMD, so they can do all
    >sorts of stuff if they want. As far as I know, Nvidia were the
    >first to support this sort of thing (Flex memory like operation),
    >so Intel shouldn't necessarily get all the credit.
    >
    >In the case of the Nforce2 chipset (the one I tested with the custom
    >version of memtest86+), if you looked in Device Manager, there
    >was actual evidence of individual entries for controllers. Intel
    >doesn't show such details. Which is fine, because it can all remain
    >safely hidden at the BIOS level. (The BIOS does all the heavy
    >lifting anyway, and sets up the mappings and any hardware details.
    >Windows shouldn't be doing that.)
    >
    >When there are uneven amounts of memory on the two channels, some of
    >that memory can still be matched up across the channels. For example,
    >for the purposes of drawing a diagram, I can install two DIMMs like this.
    >
    > 512MB 1GB
    >
    >and it ends up looking like this, for the purposes of explanation.
    >
    > 512MB
    > 512MB 512MB
    >
    >For the bottom 1GB of space, you can alternate side to side, going
    >up through the memory space.
    >
    >At just above the 1GB mark, now, you've only got 512MB remaining,
    >and it's on the right hand channel. There is no opportunity to
    >go from side to side any more. Accesses can only go to that channel,
    >and so the measured memory bandwidth drops. When I did my benchmark
    >with memtest86+, that's what I found. Above the "magic mark" for
    >my configuration, bandwidth changed from 1400MB/sec to 900MB/sec.
    >
    >You can see here that Flex Memory usage is detectable with synthetic
    >benchmarks, but is harder to see with practical applications. Notice
    >that the system still works in single channel, and single channel
    >mode carries a bit more of a penalty. The problem with doing benchmarks
    >this way is that you have no control over what memory is being tested.
    >Presumably, at least initially after a reboot, Windows "fills" memory
    >in a certain direction, choosing to start either near the top of the
    >physical address space or nearer the bottom. So when doing these
    >kinds of tests, with a memory configuration like the one I show above,
    >particular care would have to be taken to get the "right" result. For
    >example, if your benchmark happened to run within the lower 1GB,
    >you might see no difference at all between true dual channel and
    >some unbalanced flex mode.
    >
    >http://www.tomshardware.com/reviews/intel-stakes-vision-pc-future-775-launch,830-28.html
    >
    >This isn't a worry for you. If you want to run 2+2 on one channel and
    >4GB on the other channel, that is "dual channel all the way". If
    >you want to test how much of an effect an unbalanced config can
    >make, you can also test 2GB on one channel, and 4GB on the other (6GB total).
    >Then do your benchmarks.


    I don't need to do benchmarks. Just need to keep things simple and understandable.

    No problem with any of this. Many thanks for pointing it out.

    Skoal,
    P

    "Law Without Equity Is No Law At All. It Is A Form Of Jungle Rule."
     
    Puddin' Man, Jun 10, 2011
    #10
