Add 'Excessive Bandwidth Memory'

master · Caroline Colosimo · 2 weeks ago · parent commit 329ba8604d

1 changed file with 7 additions: Excessive-Bandwidth-Memory.md
@@ -0,0 +1,7 @@
High Bandwidth Memory (HBM) is a computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM), initially from Samsung, AMD and SK Hynix. It is used with high-performance graphics accelerators and network devices, as on-package RAM in upcoming CPUs, and in FPGAs and some supercomputers (such as the NEC SX-Aurora TSUBASA and Fujitsu A64FX). HBM achieves higher bandwidth than DDR4 or GDDR5 while using less power, and in a substantially smaller form factor. This is achieved by stacking up to eight DRAM dies and an optional base die, which can include buffer circuitry and test logic. The stack is often connected to the memory controller on a GPU or CPU through a substrate, such as a silicon interposer. Alternatively, the memory die can be stacked directly on the CPU or GPU chip. Within the stack, the dies are vertically interconnected by through-silicon vias (TSVs) and microbumps. HBM is similar in principle to, but incompatible with, the Hybrid Memory Cube (HMC) interface developed by Micron Technology. The HBM memory bus is very wide compared to other DRAM memories such as DDR4 or GDDR5.
An HBM stack of four DRAM dies (4-Hi) has two 128-bit channels per die, for a total of 8 channels and an overall width of 1024 bits. A graphics card/GPU with four 4-Hi HBM stacks would therefore have a memory bus 4096 bits wide. In comparison, the bus width of GDDR memories is 32 bits per channel, with 16 channels for a graphics card with a 512-bit memory interface. HBM supports up to 4 GB per package. The larger number of connections to the memory, relative to DDR4 or GDDR5, required a new method of connecting the HBM memory to the GPU (or other processor). AMD and Nvidia have both used purpose-built silicon chips, called interposers, to connect the memory and GPU. The interposer has the added benefit of requiring the memory and processor to be physically close, shortening memory paths. However, as semiconductor device fabrication is significantly more expensive than printed circuit board manufacture, this adds cost to the final product.
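
These bus-width figures are simple arithmetic; the short Python sketch below reproduces them (the helper name `hbm_stack_width` is my own, purely for illustration).

```python
# Reproducing the bus-width arithmetic above.

CHANNEL_WIDTH_BITS = 128  # each HBM channel has a 128-bit data bus
CHANNELS_PER_DIE = 2      # two channels per DRAM die

def hbm_stack_width(dies: int) -> int:
    """Total bus width of one HBM stack, in bits."""
    return dies * CHANNELS_PER_DIE * CHANNEL_WIDTH_BITS

# A 4-Hi stack: 4 dies x 2 channels x 128 bits = 1024 bits.
assert hbm_stack_width(4) == 1024

# A GPU with four 4-Hi stacks: 4 x 1024 = a 4096-bit memory bus.
assert 4 * hbm_stack_width(4) == 4096

# GDDR comparison: 16 channels x 32 bits = a 512-bit interface.
assert 16 * 32 == 512
```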
The HBM DRAM is tightly coupled to the host compute die with a distributed interface. The interface is divided into independent channels; the channels are completely independent of one another and are not necessarily synchronous with each other. The HBM DRAM uses a wide-interface architecture to achieve high-speed, low-power operation. Each channel interface maintains a 128-bit data bus operating at double data rate (DDR). HBM supports transfer rates of 1 GT/s per pin (transferring 1 bit per pin per transfer), yielding an overall package bandwidth of 128 GB/s. The second generation of High Bandwidth Memory, HBM2, also specifies up to eight dies per stack and doubles pin transfer rates to up to 2 GT/s. Retaining the 1024-bit-wide access, HBM2 reaches 256 GB/s of memory bandwidth per package. The HBM2 spec allows up to 8 GB per package. HBM2 was expected to be particularly useful for performance-sensitive consumer applications such as virtual reality. On January 19, 2016, Samsung announced early mass production of HBM2, at up to 8 GB per stack.
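
The package-bandwidth figures follow from bus width times per-pin transfer rate, divided by 8 bits per byte. A minimal check (the function name is illustrative, not from any spec):

```python
def package_bandwidth_gb_s(bus_width_bits: int, pin_rate_gt_s: float) -> float:
    """Package bandwidth in GB/s: one bit per pin per transfer, 8 bits per byte."""
    return bus_width_bits * pin_rate_gt_s / 8

# HBM:  1024-bit bus at 1 GT/s per pin -> 128 GB/s per package.
assert package_bandwidth_gb_s(1024, 1.0) == 128.0

# HBM2: same 1024-bit bus at 2 GT/s per pin -> 256 GB/s per package.
assert package_bandwidth_gb_s(1024, 2.0) == 256.0
```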
In late 2018, JEDEC announced an update to the HBM2 specification, providing for increased bandwidth and capacities. Up to 307 GB/s per stack (2.5 Tbit/s effective data rate) is now supported in the official specification, though products operating at this speed had already been available. Additionally, the update added support for 12-Hi stacks (12 dies), making capacities of up to 24 GB per stack possible. On March 20, 2019, Samsung announced its Flashbolt HBM2E, featuring eight dies per stack and a transfer rate of 3.2 GT/s, providing a total of 16 GB and 410 GB/s per stack. On August 12, 2019, SK Hynix announced its HBM2E, featuring eight dies per stack and a transfer rate of 3.6 GT/s, providing a total of 16 GB and 460 GB/s per stack. On July 2, 2020, SK Hynix announced that mass production had begun. In October 2019, Samsung announced its 12-layered HBM2E. In late 2020, Micron announced that the HBM2E standard would be updated, and alongside that unveiled the next standard, known as HBMnext (later renamed HBM3).
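
The same formula recovers the HBM2E figures quoted above (vendors round to the nearest 10 GB/s); the 307 GB/s in the updated spec corresponds to a 2.4 GT/s pin rate on the same 1024-bit bus. A sketch, redefining the illustrative helper from above so it runs standalone:

```python
def package_bandwidth_gb_s(bus_width_bits: int, pin_rate_gt_s: float) -> float:
    """Package bandwidth in GB/s from bus width and per-pin transfer rate."""
    return bus_width_bits * pin_rate_gt_s / 8

# Updated HBM2 spec: 2.4 GT/s x 1024 bits -> 307.2 GB/s per stack,
# i.e. 307.2 x 8 = 2457.6 Gbit/s, the ~2.5 Tbit/s effective rate quoted.
assert round(package_bandwidth_gb_s(1024, 2.4), 1) == 307.2

# Samsung Flashbolt HBM2E: 3.2 GT/s -> 409.6 GB/s (quoted as 410 GB/s).
assert round(package_bandwidth_gb_s(1024, 3.2), 1) == 409.6

# SK Hynix HBM2E: 3.6 GT/s -> 460.8 GB/s (quoted as 460 GB/s).
assert round(package_bandwidth_gb_s(1024, 3.6), 1) == 460.8
```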