DIMMs And The Intel Nehalem Memory Architecture Connection


In this post I want to focus on the intrinsic connection between DIMMs and the Intel Nehalem memory architecture. I was reading some good papers on this topic and I discovered some interesting details that I wanted to share with you in this blog post.

At the end of the exercise, by picking the right combination of memory model, memory size, channel population and processor model, you can achieve substantial cost savings, especially at scale.

Before that, we need to take a closer look at two core server components: memory and processors. Let’s start with the UDIMM and RDIMM memory architectures, then I’ll go through the Intel Nehalem/Westmere memory architecture. Finally I will walk through a couple of scenarios to exercise what we have learned here. For both scenarios I will pick what I think to be the right memory and processor combinations. Feel free to comment and share your experience.

This is quite a long post so bear with me 😉

UDIMMs versus RDIMMs

There are some differences between UDIMMs and RDIMMs that are important when choosing the best options for memory performance. To make a long story short, here is a summary of the comparison between UDIMMs and RDIMMs:

  • Typically UDIMMs are a bit cheaper than RDIMMs
  • For one DIMM per memory channel UDIMMs have slightly better memory bandwidth than RDIMMs.
  • For two DIMMs per memory channel RDIMMs have better memory bandwidth than UDIMMs.
  • For the same capacity, RDIMMs require more power per DIMM than UDIMMs (roughly 0.5 to 1.0 Watt more).
  • RDIMMs also provide an extra measure of RAS:
    • Address / control signal parity detection.
    • RDIMMs can use x4 DRAMs so SDDC can correct all DRAM device errors even in independent channel mode.
  • UDIMMs are currently limited to 4GB in a Dual Rank mode.
  • UDIMMs are limited to two DIMMs per memory channel.

So you could go for UDIMMs because they are a bit cheaper, a bit faster and require less power than RDIMMs for the same capacity.

On the other hand, you would go for RDIMMs if you need higher capacity per memory module and more reliable error control and data correction than UDIMMs offer.

So we have defined the pros and cons of these two memory models. Keep this in mind; now let’s have a closer look at the Intel Nehalem/Westmere memory architecture and the processor models available.

[UPDATE] The LRDIMM case. This is a new type of memory; LRDIMM stands for Load Reduced DIMM. It allows massive memory expansion without sacrificing performance. Remember that as soon as you populate the third DIMM in a channel, the memory speed drops to 800MHz. LRDIMM increases capacity whilst maintaining high memory speed by fooling the memory controller: the LR buffer makes a quad-rank DIMM look like a dual-rank DIMM to the memory controller, which allows up to three DIMMs per channel, and since that is still below the eight-rank-per-channel limit the memory speed remains at 1333MHz. How cool is that 🙂 Obviously you can’t mix LRDIMMs with either RDIMMs or UDIMMs. If you are looking for maximum capacity and sustained performance, look at LRDIMMs. More in DDR3 for Dummies – 2nd edition
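
To make the rank arithmetic concrete, here is a minimal Python sketch of the rule described above, assuming the eight-rank-per-channel limit mentioned in the paragraph; the function name and the simplified "quad rank looks like dual rank" behaviour are illustrative, not vendor code.

    def apparent_ranks_per_channel(dimms_per_channel, ranks_per_dimm, lr_buffer=False):
        # The LR buffer presents a quad-rank DIMM as a dual-rank DIMM
        # to the memory controller (the "fooling" described above).
        seen_ranks = 2 if (lr_buffer and ranks_per_dimm == 4) else ranks_per_dimm
        return dimms_per_channel * seen_ranks

    RANK_LIMIT = 8
    print(apparent_ranks_per_channel(3, 4))                  # 12 > 8 -> over the limit
    print(apparent_ranks_per_channel(3, 4, lr_buffer=True))  # 6 <= 8 -> stays at 1333MHz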

Intel Nehalem-DP/Westmere-DP Memory Architecture and Processor Models

There is no difference in the memory architecture between Nehalem and Westmere. Let me summarize below what I find important about this memory architecture:

  • A 2-way Xeon system (DP) has one QPI link to connect to the other socket and one QPI link to connect to the IOH chipset (IO Hub). Optionally you can have two IOHs.
  • QPI operates at a clock rate of either 2.4 GHz(=4.8GT/s), 2.93 GHz(=5.86GT/s), or 3.2 GHz(=6.4GT/s).
  • The QPI link has a maximum bi-directional bandwidth of 6.4GT/s x 2 Bytes/transfer x 2 directions = ~25.6GB/s.
  • The GT/s figure is based on the full 20-bit link width (20 lanes), whilst the GB/s figure is based on the real payload of 16 bits (2 Bytes) per transfer. For more information on this particular topic, read An Introduction to the Intel® QuickPath Interconnect.
  • A Nehalem/Westmere DP system supports up to 18 DDR3 DIMM slots (3 channels of 3 DIMMs per socket).
  • In general, servers support DDR3 DIMMs with a maximum memory clock of 166MHz, which gives a data rate of 1333MT/s. This is many times misleadingly advertised as the I/O clock rate, by labeling the MT/s as MHz.
  • The three DDR3 channels to local DRAM support a maximum bandwidth of 3 x 8 Bytes x 1.333GT/s = ~32GB/s, that is ~10.66GB/s per channel (see the quick calculation after this list).
  • At 1066MT/s the maximum bandwidth is ~25.58GB/s, that is ~8.52GB/s per channel.
  • At 800MT/s the maximum bandwidth is ~19.2GB/s, that is ~6.4GB/s per channel.
  • The available bandwidth to access memory blocks on the other socket is bound by the QPI link speed.
  • The available bandwidth through the QPI link is 12.8GB/s each way, which is approximately 40% of the bandwidth to local DRAM.
  • At the time of authoring this post, 12MB is the maximum shared L3 cache available for Intel Xeon 5000 series.
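
As a quick sanity check of the figures above, here is a small Python sketch of the bandwidth arithmetic. The 2-Byte QPI payload and the 8-Byte (64-bit) DDR3 transfer width are the assumptions stated in the bullets; the helper names are mine.

    # Peak bandwidth arithmetic for QPI and the three DDR3 channels.
    def qpi_bandwidth_gbs(gt_per_s, payload_bytes=2, directions=2):
        # Both directions combined, payload only (16 of the 20 lanes).
        return gt_per_s * payload_bytes * directions

    def ddr3_bandwidth_gbs(mt_per_s, channels=3, bytes_per_transfer=8):
        return mt_per_s / 1000.0 * bytes_per_transfer * channels

    print(qpi_bandwidth_gbs(6.4))      # ~25.6 GB/s
    print(ddr3_bandwidth_gbs(1333))    # ~32.0 GB/s, i.e. ~10.66 GB/s per channel
    print(ddr3_bandwidth_gbs(1066))    # ~25.6 GB/s
    print(ddr3_bandwidth_gbs(800))     # ~19.2 GB/s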

The diagram below shows the memory layout of a Nehalem DP Server. By the way DP stands for Dual-Processor.

Note the text in green, I will talk about that later in the post.

The next diagram lists the theoretical bandwidth for local and remote memory accesses.
Note that the remote memory access goes through the QPI link.

But that’s not the only thing you need to think about. There are other considerations that are often overlooked. For instance, the memory frequency at which the system operates is determined by the minimum of three factors:

  1. DIMM frequency.
  2. Memory controller speed.
  3. Channel population scheme.

We can summarize this with the following formula:

System memory speed = MIN(memory controller speed, DIMM frequency, channel population)

First, the memory controller speed is limited by the processor model. In general, Xeon 5600 ‘X’ series processors support memory at up to 1333 MHz, while ‘L’ and ‘E’ series processors support either 1066 or 800 MHz depending on the CPU clock frequency. This is not a hard rule though; there are exceptions that I will call ‘marketing exceptions’, so it is better to check the technical details of each processor model.

Second, the operating memory speed is dictated by the DIMM frequency. 1066 MHz DIMMs cannot run at 1333 MHz, but 1333 MHz and 1066 MHz can both run at lower frequencies.

Finally, the channel population scheme matters: one DIMM per channel (1 DPC) or two DPC can run at either 1066 or 1333 MHz, depending on processor model and DIMM type, but as soon as you put more than two DIMMs in any one memory channel, the speed of all the memory drops to 800 MHz.
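
Here is a minimal Python sketch of that MIN rule, assuming the general Nehalem/Westmere behaviour described above (three or more DIMMs per channel force 800 MHz); vendor-specific exceptions, such as the BL490c G7 case mentioned further below, are not modelled.

    def system_memory_speed(controller_mhz, dimm_mhz, dimms_per_channel):
        # Effective DDR3 speed for the channel, per the MIN formula above.
        population_limit_mhz = 800 if dimms_per_channel >= 3 else 1333
        return min(controller_mhz, dimm_mhz, population_limit_mhz)

    print(system_memory_speed(1333, 1333, 2))   # 1333 -> full speed
    print(system_memory_speed(1333, 1333, 3))   # 800  -> 3 DPC penalty
    print(system_memory_speed(1066, 1333, 1))   # 1066 -> limited by the memory controller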

The table below summarizes this topic:

The performance difference between 1333MHz and 1066MHz is about 8.5%, between 1333MHz and 800MHz about 28.5%, and between 1066MHz and 800MHz about 22%.

Below is a table grouping the different DIMM capacities and types available for an HP ProLiant BL460c G7. Note that in some circumstances you can drop to 800MHz just by populating the second channel, e.g. on the HP BL490c G7.

On the same topic, you also need to focus on the processor model. Intel has released many different product lines of Nehalem/Westmere processors; each combination of processor die and package has both a separate codename and a product code.

Just for the x86 server market, Intel has four different Xeon processor families/sequences, and within each family/sequence a bunch of different processor numbers such as the X5690 or the E5502.

Let’s have a look at the dual-socket Intel Xeon 5000 Processor Sequence, and more precisely at the 5500 and 5600 sequences. There you have something like 40 different processor numbers available, making your choice even more difficult.

For each processor number you have the processor clock rate, number of cores and threads, L3 cache size, QPI bus speed, HT technology, TDP, etc. All these processor characteristics are important for making the right choice, but they also make the choice overly complicated.

The right combination and Business Requirements

With all of these options (UDIMMs, RDIMMs, various DIMM sizes and speeds, low-voltage DIMMs, processor frequencies and other processor technology features, etc.) there is a vast number of possibilities, and it’s not always obvious which combination of hardware elements to interlink in order to build something consistent and coherent with regard to your business requirements and the server architecture. It’s like a giant puzzle of 1000 pieces of information that you need to order logically to come up with the best combination.

Notice I’m not asking ‘which options give the highest performance’, because companies are not always tied to a pure high-performance business requirement. Energy efficiency or high consolidation can also be your company’s number one business requirement.

Note that in these economically hard times, cost savings are mandatory for many companies and may override the traditional business requirements cited above. The cost-savings rule helps keep the company’s business requirements within budget boundaries.

Sure, your company can have other business requirements than the ones above; I know at least one company where the end-user experience is rated number one. A business requirement list is definitely not limited to three or four items.

In many cases companies have multiple business requirements: ‘we need high performance and high consolidation at the lowest cost’… Huh! The goal is to juggle these business requirements and come up with the right combination.

Sometimes this turns into the Triangle Project, with no viable combination 🙂

Scenarios

Imagine the following scenario: your company’s server vendor policy is HP, and for this project you have picked the HP BL460c G7. The business requirement is high consolidation, so you need memory, plenty of memory. You load up the server with 12x32GB RDIMM memory modules for a maximum memory size of 384GB running at … 800MHz. Now which processor would you choose in this case? Would you buy the X5690 @ $1663.00 or the E5649 @ $774.00?

In this specific config the memory controller ends up running at the same speed for both processors, that is 800MHz. QPI is faster on the X5690, but you can’t use it at full throttle anyway because the memory speed is down to 800MHz. So between the two CPUs only the clock speed really differs, ~1GHz more for the X5690, but it also costs more than twice the price of the E5649. Is it worth the extra $889.00?

By loading up with 12x16GB memory modules for a total of 192GB instead, your memory frequency remains at 1333MHz, and faster processors, such as one from the X series, are now a valid option. But then you no longer stick to your business requirement! You have gone from the highest possible consolidation ratio (100%) to half of that (50%).
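
To make the trade-off explicit, here is a small, purely illustrative Python sketch of the scenario 1 arithmetic: the CPU list prices and memory speeds are the ones quoted above, and DIMM prices are deliberately left out since they vary by vendor and over time.

    configs = {
        "max consolidation": {"dimms": 12, "gb_per_dimm": 32, "mem_mhz": 800,  "cpu": "E5649", "cpu_price": 774.00},
        "max memory speed":  {"dimms": 12, "gb_per_dimm": 16, "mem_mhz": 1333, "cpu": "X5690", "cpu_price": 1663.00},
    }

    for name, c in configs.items():
        capacity_gb = c["dimms"] * c["gb_per_dimm"]
        cpu_cost = 2 * c["cpu_price"]   # dual-socket blade, two CPUs
        print(f"{name}: {capacity_gb}GB @ {c['mem_mhz']}MHz, 2x {c['cpu']} = ${cpu_cost:.2f}")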

Another scenario: you have again picked the HP BL460c G7. This time the business requirement is energy efficiency. Remember, UDIMMs use less power than RDIMMs, so you go for them and load up the server with 12x4GB UDIMM memory modules, two per channel, which is the UDIMM maximum. For the same capacity, an RDIMM requires 0.5 to 1.0 Watt more. Now which processor would you choose? The one with the lowest power consumption might be the right choice, like the L5609. But then you do not benefit from the UDIMMs running at 1333MHz, because that CPU supports a maximum of 1066MHz… What about going for 6x32GB RDIMM LV (1.35V instead of 1.5V) running at 1066MHz, for a total capacity of 192GB (4x more than the UDIMM maximum)? And choosing the L5630, also using only 40W, but with HT and Turbo Boost technology for when you need extra power…
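
And a similarly hedged sketch for scenario 2. The only power figure used is the 0.5 to 1.0 Watt RDIMM overhead quoted earlier in the post; absolute per-DIMM power depends on the exact module and must be taken from the vendor’s QuickSpecs.

    udimm_dimms, udimm_gb_each = 12, 4     # 12 x 4GB UDIMM (the UDIMM maximum)
    rdimm_dimms, rdimm_gb_each = 6, 32     # 6 x 32GB RDIMM LV

    udimm_total = udimm_dimms * udimm_gb_each      # 48GB
    rdimm_total = rdimm_dimms * rdimm_gb_each      # 192GB
    print(rdimm_total // udimm_total)              # 4 -> 4x the UDIMM maximum

    # Register overhead of the RDIMM LV option, using the 0.5-1.0W per-DIMM delta:
    print(rdimm_dimms * 0.5, "-", rdimm_dimms * 1.0, "Watts")   # 3.0 - 6.0 Watts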

Take these two scenarios just for what they are; they may not reflect any real case. They are just there to demonstrate the thinking process with the information we gathered today.

Nor do I have a secret formula that will sort out this kind of puzzle. At least I hope I shed some light on these little-known but important links between the memory and the Intel Nehalem architectures.

Here are some tools that I hope will help you pick the right combination:

There are two other puzzle pieces I will shed some light on next time: processor-clock-frequency-sensitive applications and memory-bandwidth-sensitive applications. So stay tuned 😉

Sources: wikipedia.org, intel.com, dell.com, hp.com and google.com

About PiroNet

Didier Pironet is an independent blogger and freelancer with 15+ years of IT industry experience. Didier is also a former VMware, Inc. employee, where he specialised in Datacenter and Cloud Infrastructure products as well as Infrastructure, Operations and IT Business Management products. Didier is passionate about technology, a creative and visionary thinker who expresses himself with passion and excitement, hopefully inspiring and enrolling people in innovation and change.

15 Responses to DIMMs And The Intel Nehalem Memory Architecture Connection

  1. Shanetech says:

    Great post, thank you for putting that together. I am glad to see you have configuration tools listed as well. Not knowing how you are configuring the DIMMs can potentially waste money. In VDI scenarios this is also very important because you want to ensure that those awesome x5680s you just sprung for are running the fastest that they and their architecture can.

  2. Hello and thanks for your article. I found you because I have so far unsuccessfully understood why on my Xserve3,1 (OS X) Nehalem 2.93 “8 core” my memory is being used at 800MHz rather than the rated 1066MHz 😦

    Although hundreds of postings say to tweak the bios on non-mac machines (so-called hackintoshes) ours is a trusted/true Xserve3,1 running 10.6.8. When I do “About this Mac” I learn that I have 48 GB 800 MHz DDR3 RAM… but popping the lid on the machines shows full banks of identical Kensington KVR1066D3Q8R7SK3. Some blog-posts simply say this is a msg-cosmetic problem and that I should just ignore the reported value (arghh).

    What’s your take on this?
    Am I underusing my memory? How can I get the full 1066MHz out of my memory?

    thanks for any help/followup.
    shawn

    • deinoscloud says:

      Hi Shawn, thx for your comment.
      As soon as you put more than two DPC in any one memory channel, the speed of all the memory drops to 800 MHz.
      And in some circumstances (read: the vendor’s own specs / by design) you can drop to 800MHz just by populating the second channel.

      You can try to leave only one channel populated (24GB) and see how it affects memory speed…

  3. netlistpost says:

    I find it surprising that you have left out Netlist’s HyperCloud memory.

    It is the only other memory (other one is Kingston) certified for VMware.
    http://alliances.vmware.com/public_html/catalog/searchResult.php?catCombo=System+Boards&isVmwareReadySelected=No&isServicesProduct=no&searchKey=

    http://alliances.vmware.com/public_html/catalog/ViewProduct.php?Id=a045000000GQT8gAAH
    16GB Hypercloud DDR3 2vR 1333

    http://alliances.vmware.com/public_html/catalog/ViewProduct.php?Id=a0450000008ZdykAAC&productName=Kingston%20Memory
    Kingston

    HyperCloud will be getting more press as Romley rollout happens as only LRDIMMs (based on Inphi buffer chipsets) and HyperCloud will be offering the load-reduced/rank-multiplication memory modules for Romley (Feb 2012 or so).

    However LRDIMMs have latency issues compared to HyperCloud.

    Inphi is the only supplier of buffer chipsets for LRDIMMs – IDTI seems to have deemphasized it over time, and Texas Instruments has been “not interested” in LRDIMMs.

    Latency issues in LRDIMMs makes 16GB LRDIMMs effectively non-competitive with 16GB RDIMMs (2-rank ones based on 4Gbit DRAM dies).
    32GB LRDIMMs can however eke out a performance advantage vs. 32GB RDIMMs (which are currently 4-rank).
    HP/Samsung confirmed at IDF conference on LRDIMM that they will not be selling 16GB LRDIMMs – but will focus on 32GB LRDIMMs as those still are competitive with 32GB RDIMMs (which are 4-rank).

    In addition, HyperCloud is able to do 768GB at 1333MHz (or MT/s) on a dual-socket server versus LRDIMMs which can only achieve 768GB at 1066MHz (using NLST HyperCloud 32GB and 32GB LRDIMMs based on Inphi’s buffer chipset respectively).

    LRDIMMs are a successor of the earlier MetaRAM (if folks remember that company).

    In addition Netlist’s approach will be used in DDR4:

    http://www.theregister.co.uk/2011/11/30/netlist_32gb_hypercloud_memory/
    Netlist puffs HyperCloud DDR3 memory to 32GB
    DDR4 spec copies homework
    By Timothy Prickett Morgan
    Posted in Servers, 30th November 2011 20:51 GMT

    check out the comments section for the above article – where I have posted info on Netlist vs. LRDIMM:

    http://forums.theregister.co.uk/forum/1/2011/11/30/netlist_32gb_hypercloud_memory/
    Netlist puffs HyperCloud DDR3 memory to 32GB
    Posted Thursday 1st December 2011 09:38 GMT

    • PiroNet says:

      I didn’t know about Netlist’s HyperCloud memory until now.
      Definitely worth a look at this new technology.
      Thx for sharing!

  4. netlistpost says:

    NLST HyperCloud or LRDIMM type solutions are only required if you want heavily memory loaded systems (virtualization, cloud computing, high performance computing (HPC), in-memory databases, high frequency trading).

    People would have heard about LRDIMMs because of Intel’s marketing of them for Romley (see below for why LRDIMM-type solutions, while needed before, are only now being promoted “for Romley”).

    NOTE: Some of the stuff I mention maybe obvious, but I am just putting it in for completeness for naive readers.

    However the trends are now in place which tend to bring those niches more into the mainstream (reduction of desktop PCs – move to tablets and connected phones to cloud-computing – multiplied “computer users” as non-tech folks become “computer users” thanks to their smart phones).

    This “high memory loading” technology will be important now because of a number of factors coming together – the increasing need for higher memory per server (increasing server cores requiring higher total memory per server to keep up the same memory per core levels).

    The power reduction achievable if you can cut server numbers in your data center. Thankfully the greater concentration of power per server leveraged well with availability of virtualization. Many more VMs per server possible – reducing server count and thus server power, but also cost of plant and UPS provisioning per data center.

    The scaling up of applications to use more memory – beginning with high performance computing (HPC), high frequency trading, in-memory databases (where all data in DRAM), virtualization and cloud computing (all of which call for multiplication of DRAM per server).

    And the changes taking place in the marketplace even at the consumer computing level – as “computer users” (traditionally desktop and laptop users) – expand greatly in number with smartphones becoming the new computer. Allied with that the needs/availability of iCloud type services (5GB per user for nearly the whole world), Siri (voice recognition for the whole world) means you are talking about server requirements which vastly outpace all previous expectations (in fact it is not only a replacement for current computing, but it expands the whole user base and maybe a multiplication of total computing needs as every person potentially becomes a computer user – even in the third world).

    This need for search, voice search, and cloud storage per user drive the need for greater DRAM use per server.

    This is a trend which will persist despite economic issues.

    Thus server growth and DRAM per server growth is probably the most predictable robust growth segment in the memory space for the next couple of years.

    Current memory systems experience speed slowdown at high memory loading:

    1 DPC – 1333MHz
    2 DPC – 1066MHz (can be 1333MHz on some newer systems)
    3 DPC – 800MHz

    Intel has been pushing Inphi and others (IDTI and Texas Instruments it seems have now backed off) to make the LRDIMMs.

    However what they have delivered has “latency issues”, and in addition it seems at 3 DPC they are getting 1066MHz (and not 1333MHz).

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=41242&mid=41261&tof=1&frt=2#41261
    Re: LRDIMM Inability to run at 1333MHz Defeats the purpose 15-Dec-11 02:10 pm

    http://finance.yahoo.com/news/Netlist-HyperCloud-Technology-iw-1971535964.html?x=0
    Netlist’s HyperCloud Technology Faster Than LRDIMM on Next Generation Servers: Testing Validates the Speed Advantage of HyperCloud
    Patented HyperCloud Technology Enables 1333 MT/s Memory Speeds on Future Intel(R) Xeon(R) E5 Family Based Two-Processor Servers While LRDIMM Only Enables 1066 MT/s
    Press Release: Netlist, Inc. – Tue, Dec 13, 2011 6:00 AM EST

    http://finance.yahoo.com/news/HyperCloud-Achieves-Server-iw-3376256974.html?x=0&l=1
    HyperCloud Achieves Server Memory Speed Breakthrough at SC11
    Demonstration Highlights HyperCloud’s Advantages Over Commodity RDIMM, LRDIMM
    Press Release: Netlist, Inc. – Wed, Nov 16, 2011 4:00 PM EST

    LRDIMMs are essentially infringing NLST IP – however, fortunately for Netlist, despite that infringement what they have delivered has these problems (thus the marketplace will determine which is best):

    – LRDIMMs have “latency issues”
    – LRDIMMs require a BIOS update for current servers (Romley are expected to ideally arrive with this BIOS fix) – a BIOS update essentially makes them problematic to be used prior to Romley
    – LRDIMMs are not interoperable with standard RDIMMs

    – 16GB LRDIMMs are inferior to 16GB RDIMMs 2-rank (using 4Gbit DRAM die) because the LRDIMM “latency issues” make them underperform those RDIMMs (HP/Samsung comments from IDF conference on LRDIMMs)

    – 32GB LRDIMMs will be the only one showing some advantage c.f. RDIMMs (and this is only because 32GB RDIMMs can currently only be made as 4-rank) (HP/Samsung comments IDF conference on LRDIMMs)

    – 768GB LRDIMMs in a 2-socket server can only achieve 1066MHz (and not 1333MHz like HyperCloud)

    This explains why Intel is pushing LRDIMM for Romley. Because LRDIMMs require a BIOS update, this makes it an extremely unpleasant solution to use in current servers.

    For Romley, Intel is going to try to fix the BIOS so that LRDIMMs can work there.

    THIS is the primary reason why Intel has not promoted LRDIMM type solutions prior to Romley (because they did not HAVE solutions which would work).

    However the marketing by Intel for LRDIMM is helpful for Netlist because it creates a pre-awareness of that need.

    With LRDIMMs arriving with these problems, and since it takes a year or two to re-engineer and re-qualify (if you ignore the fact that LRDIMMs may have serious design issues emanating from the design choices made – centralized buffer vs. decentralized etc.) – that place Netlist’s HyperCloud in a position to take more than the 1% of servers conservative estimate that Netlist has suggested.

    For Romley the load-reduction/rank-multiplication space for LRDIMMs/HyperCloud is estimated to be 20% of Romley servers eventually.

    If LRDIMMs fail to live up to expectations, it is entirely possible that Netlist’s HyperCloud could wind up dominating this 20% server market.

    Netlist guidance for a possible 1% share was enough to surprise some analysts (like Rich Kugele of Needham) at the Q3 2011 CC.

    If you scale that closer to the full 20% server market, this could be a formidable opportunity.

    It is thus not surprising that recently Netlist signed an “exclusive” deal with HP (this means that HP is exclusively tied to using HyperCloud – though details are unclear at this time) and a non-exclusive deal with IBM:

    http://finance.yahoo.com/news/Netlist-HyperCloud-Technology-iw-1971535964.html?x=0
    UPDATE 1-Netlist signs deals with IBM, HP
    Mon Nov 14, 2011 5:19pm EST

    Since Romley arrival is imminent and LRDIMMs are targeting Romley, we are now starting to see real LRDIMMs and will consequently be seeing reviews of LRDIMMs.

    Prior to this (as a NLST shareholder) we have only had NLST’s word for it – that LRDIMMs will have latency issues.

    On the marketing side, Intel has been pushing LRDIMMs and this is going to favor HyperCloud as users realize that LRDIMMs do not deliver as promised, and instead cause more problems than they solve.

    Why Intel is pushing LRDIMM

    Intel is pushing LRDIMMs because it had to come up with a way to handle the memory slowdown for high memory loaded systems – since the market for such systems are expected to increase in the future (analysts expect 20% of Romley servers will require LRDIMM/HyperCloud type solutions).

    Cisco has already tried to solve the high memory loading problem with their Cisco UCS strategy. They use an ASIC-on-motherboard approach but which makes the motherboard non-standard.

    Netlist approach is an ASIC-on-memory-module (same thing LRDIMMs have tried to copy) which makes the module:

    – requires no BIOS update (HyperCloud can be used in pre-Romley as well as Romley systems)
    – plug and play
    – interoperable with standard RDIMMs

    In addition there are certain latency advantages with HyperCloud (over LRDIMM and Cisco UCS’s ASIC-on-motherboard approach).

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=40867&mid=40954&tof=1&frt=2#40954
    Re: NLST vs IPHI .. 32GB LRDIMM speed slowdown 11-Dec-11 12:10 am

    quote:
    —-
    In addition you have the latency advantages for NLST:

    – LRDIMMs have a “5 ns latency penalty” compared to RDIMMs.
    – CSCO UCS has a “6 ns latency penalty” compared to RDIMMs.
    – NLST HyperCloud have similar latency as RDIMMs (a huge advantage) and have a “4 clock latency improvement” over the LRDIMM
    —-

    Coming back to DDR4

    Netlist has been pointing out the intersection of it’s IP with DDR4 (confirmed by the article above which points out the similarities of the JEDEC DDR4 approach with NLST HyperCloud).

    Netlist has stated that while for Romley their IP is valuable for this 20% server segment, for DDR4 it will be “mainstream”.

    The reason for this is that the issues which cause memory slowdown at 2 DPC and 3 DPC on current and Romley systems will become prevalent at the 1 DPC (1 DIMM per memory channel) level – which will require a load-reduction/rank-multiplication based solution like NLST HyperCloud.

  5. netlistpost says:

    Check out this post for a more detailed examination of LRDIMMs and comparison with HyperCloud for the upcoming Romley platform rollout (Spring 2012):

    http://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-420587
    netlist
    01/13/2012 at 7:33 am

    Title: High memory loading LRDIMMs on Romley – An Introduction to Next-Gen Memory for Romley
    Date: January 10, 2012

  6. netlistpost says:

    Netlist HyperCloud has become available on IBM and now HP.

    The second rollout in Romley “tiered rollout” now has HP listing Netlist HyperCloud for the HP Gen8 servers.

    It is the only memory that delivers:

    3 DPC at 1333MHz at 1.5V

    That is, for heavy memory loading applications.

    It is available as a Factory Installed Option (FIO).

    http://h18004.www1.hp.com/products/quickspecs/14225_na/14225_na.html

    quote:
    —-
    Load Reduced DIMMs (LRDIMM)
    HP 32GB (1x32GB) Quad Rank x4 PC3L-10600L (DDR3-1333) Load Reduced CAS-9 Low Voltage Memory Kit 647903-B21

    HyperCloud DIMMs (HDIMM)
    HP 16GB (1x16GB) Dual Rank x4 PC3-10600H (DDR3-1333) HyperCloud CAS-9 FIO Memory Kit 678279-B21
    NOTE: This is a Factory Installed Option (FIO) only.

    Performance
    Because HP SmartMemory is certified, performance tested and tuned for HP ProLiant, certain performance features are unique with HP SmartMemory. For example, while the industry supports DDR3-1333 RDIMM at 1.5V, today’s Gen8 servers support DDR3-1333 RDIMM up to 3 DIMMs per channel at 1066MT/s running at 1.35V. This equates to up to 20% less power at the DIMM level with no performance penalty and now with HyperCloud Memory on DL360p Gen8 and the DL380p Gen8 servers will support 3 DIMMs per channel at 1333MT/s running at 1.5 V. In addition, the industry supports UDIMM at 2 DIMMs per channel at 1066MT/s. HP SmartMemory supports 2 DIMMs per channel 1333MT/s, or 25% greater bandwidth.
    —-

    “HP Smart Memory HyperCloud” or “HP HDIMM” is the name they are giving it.

  7. netlistpost says:

    NLST has pointed out that the move to increasing memory speeds, i.e. 1600MHz and higher for DDR4, will force the 3 DPC issues down to 2 DPC and lower – thus making NLST IP “mainstream”, i.e. required even at 2 DPC. Thus mainstream for DDR4, because those speeds require this solution.

    Thus “mainstreaming” of NLST IP at not just 3 DPC, but also 2 DPC.

    Today I posted a summary of an argument why the 3 DPC issues will start to appear at 2 DPC (i.e. NLST HyperCloud will become relevant at 2 DPC even) with the arrival of 32GB RDIMMs.

    The arrival of 32GB RDIMMs – because they are 4-rank and 2-rank versions are not available – will create a situation where the issues seen at 3 DPC appear at 2 DPC.

    And 32GB RDIMMs will be available shortly for Romley.

    Check it out as blog comments here:

    http://marchamilton.wordpress.com/2012/02/07/optimizing-hpc-server-memory-configurations/#comment-286
    HPC_fan says:
    May 17, 2012 at 1:23 pm

    http://marchamilton.wordpress.com/2012/02/07/optimizing-hpc-server-memory-configurations/#comment-287
    HPC_fan says:

  8. netlistpost says:

    An article on memory choices for the HP DL360p and DL380p virtualization servers. Hope it is simple to understand.

    I’ll get to the IBM System x3630 M4 server shortly.

    Installing memory on 2-socket servers – memory mathematics
    May 24, 2012

    For HP:

    Memory options for the HP DL360p and DL380p servers – 16GB memory modules
    May 24, 2012

    Memory options for the HP DL360p and DL380p servers – 32GB memory modules
    May 24, 2012

  9. vicl2012v says:

    Links did not appear, so here are the links:

    http://ddr3memory.wordpress.com/2012/05/24/installing-memory-on-2-socket-servers-memory-mathematics/
    May 24, 2012
    Installing memory on 2-socket servers – memory mathematics

    For HP:

    http://ddr3memory.wordpress.com/2012/05/24/memory-options-for-the-hp-dl360p-and-dl380p-servers-16gb-memory-modules/
    May 24, 2012
    Memory options for the HP DL360p and DL380p servers – 16GB memory modules

    http://ddr3memory.wordpress.com/2012/05/24/memory-options-for-the-hp-dl360p-and-dl380p-servers-32gb-memory-modules/
    May 24, 2012
    Memory options for the HP DL360p and DL380p servers – 32GB memory modules

    For IBM:

    http://ddr3memory.wordpress.com/2012/05/25/memory-options-for-the-ibm-system-x3630-m4-server-16gb-memory-modules-2/
    May 25, 2012
    Memory options for the IBM System x3630 M4 server – 16GB memory modules

    http://ddr3memory.wordpress.com/2012/05/25/memory-options-for-the-ibm-system-x3630-m4-server-32gb-memory-modules/
    May 25, 2012
    Memory options for the IBM System x3630 M4 server – 32GB memory modules

  10. Anushka Mathur says:

    Thanks for this informative article. This basic knowledge is required by all while dealing with memory modules. In fact, my friend has recommended me to visit RAM Manufacturer , for getting all kind of information related to memory modules.

  11. ddr3memory says:

    Hello Didier,

    VMware certifies Netlist 16GB and 32GB HyperCloud memory modules (supplied by IBM/HP) and Netlist 16GB VLP RDIMMs (supplied by IBM) – as the only memory certified for VMware products.

    If you know of RDIMMs or LRDIMMs listed for VMware, please let me know. Thanks.

    http://ddr3memory.wordpress.com/2012/07/05/memory-for-vmware-virtualization-servers/
    Memory for VMware virtualization servers
    July 5, 2012

  12. Pingback: Welcome to vSphere-land! » Hardware Links
