HBM IP Filing Trends: Need, Major Players, and Patent Filings – Lexology

High Bandwidth Memory (HBM) is a new type of CPU/GPU memory that is opening doors to previously unimaginable levels of computer performance with its fast information transfer capability. The HBM industry is expanding rapidly due to its applicability across myriad sectors and industries. This is evident from its expected CAGR of 32.9% in the coming years (from USD 1.8 billion in 2021 to USD 4.9 billion by 2031). Tech giants such as Samsung and Intel are manufacturing heavily within the HBM industry, which has propelled the innovation boom.
This article discusses HBM in detail, the demand for this new technology, market prediction, major HBM IP players, and the future prospects of HBM.


Introduction to 3D High Bandwidth Memory (HBM)

HBM is a 3D-stacked DRAM optimized for high-bandwidth operation and lower power consumption compared to GDDR memory. HBM consists of vertically stacked DRAM chips connected through a vertical interconnect technology called TSV (through-silicon via) on a high-speed logic layer, reducing connectivity impedance and thereby total power usage. HBM generally enables the stacking of four, eight, or twelve DRAMs linked to one another by TSVs. Furthermore, HBM typically uses an interposer to link the compute elements to the memory.

Figure 1: Cut-through image of a graphics card with HBM (Source)

What is the need for HBM?

With the ever-increasing demand for high operating frequencies in graphics processors (GPUs) and general-purpose processors (CPUs), limited memory bandwidth constricts the maximum performance of a system. Previous memory technologies such as GDDR5 are unable to overcome issues like rising power consumption, which limits the growth of graphics performance, and a larger form factor needed to meet the bandwidth demands of GPUs/CPUs (multiple chips are required to achieve the necessary high bandwidth along with the power circuitry). Moreover, difficulty of on-chip integration is another issue those technologies fail to tackle.

High Bandwidth Memory is a requisite for high-performance computing (HPC) systems, high-performance memory, and large-scale processing in data centers and AI applications. Current GPU and FPGA accelerators on the market are memory-constrained, indicating the need for more memory capacity at a very high bandwidth. This brings High Bandwidth Memory into the picture. HBM places the RAM as close as possible to the logic die to reduce the overall required footprint and allows the use of extremely wide data buses to hit the needed levels of performance.

Figure 2: HBM stacks' bandwidth progression against GDDR and DDR (Source)

Evolution of HBM

JEDEC adopted High Bandwidth Memory as an industry standard in October 2013. The 1st generation of HBM had four dies with two 128-bit channels per die, for a 1,024-bit interface. Four stacks enable access to 16 GB of total memory and 4,096 bits of memory width, eight times that of a 512-bit GDDR5 memory interface. HBM with a four-die stack running at 500 MHz can produce more than 100 GB/sec of bandwidth per stack, much greater than 32-bit GDDR5 memory.

The 2nd generation, HBM2, was accepted by JEDEC in January 2016. It increases the signaling rate to 2 Gb/sec while keeping the same 1,024-bit interface per stack. A package can drive 256 GB/sec per stack, with a maximum potential capacity of 64 GB at 8 GB per stack. HBM2E, the enhanced version of HBM2, increased the signaling rate to 2.5 Gb/sec per pin and up to 307 GB/sec of bandwidth per stack.

On January 27, 2022, JEDEC formally announced the 3rd-generation HBM3 standard. According to one manufacturer, SK Hynix, HBM3 can stack DRAM chips up to 16 dies high, and the capacity can double again to 4 GB per chip, which makes 64 GB per stack and 256 GB of total capacity with at least 2.66 TB/sec of aggregate bandwidth. The third-generation HBM3 is expected in systems throughout 2022, with a signaling rate of over 5.2 Gb/sec, offering more than 665 GB/sec per stack.
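The per-stack figures quoted for each generation follow directly from multiplying the 1,024-bit interface width by the per-pin signaling rate. A minimal sketch of that arithmetic, using the rates cited above (the function name is ours, for illustration only):

```python
def stack_bandwidth_gb_per_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in GB/sec: width (bits) x per-pin rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

# Per-pin signaling rates quoted in the text; first-generation HBM's 500 MHz
# double-data-rate clock gives an effective 1 Gb/s per pin.
for name, rate in [("HBM", 1.0), ("HBM2", 2.0), ("HBM3", 5.2)]:
    print(f"{name}: {stack_bandwidth_gb_per_s(1024, rate):.1f} GB/sec per stack")
```

This reproduces the roughly 128, 256, and 665+ GB/sec per-stack numbers cited for the successive generations.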

The following section presents predictions for the HBM market in memory stacking technology.

HBM Market Predictions 

The High Bandwidth Memory business is growing quickly. It has a promising future due to several factors, including wide access to cloud-based and quantum technologies, advancements in the use of cloud-based quantum computing, and GPUs and FPGAs. The HBM market is expected to expand significantly in the coming years, reaching USD 4,088.5 million by 2026.

Figure 3: High Bandwidth Memory Market Growth (Source)

Major Players in the Global HBM Marketplace

Some of the major manufacturers in the HBM industry are mentioned below:

1. Samsung Electronics: The South Korean corporation developed the industry's first High Bandwidth Memory (HBM) integrated with artificial intelligence (AI) processing power: the HBM-PIM. The new processing-in-memory (PIM) architecture brings powerful AI computing capabilities inside high-performance memory to accelerate large-scale processing in data centers, high-performance computing (HPC) systems, and AI-enabled mobile applications. Furthermore, Samsung's packaging technologies, like I-Cube and H-Cube, have been specifically developed to integrate HBM with the processing chips.

2. SK Hynix: This South Korean supplier of dynamic RAM and flash memory chips recently released HBM3, its Gen 4 HBM product for data centers. HBM3 can transmit up to 163 full-HD movies in a single second, with a maximum data processing speed of 819 gigabytes per second, a 78% improvement over HBM2E.

3. Intel Corporation: The chip manufacturer introduced the Sapphire Rapids Xeon SP processor featuring HBM for bandwidth-sensitive, AI, and data-intensive applications. The processors are expected to use AMX's 64-bit programming model to speed up tile operations.

4. Micron: The company said that NVIDIA's GeForce RTX 3090 would be based on GDDR6X technology and capable of about 1 TB/sec of memory bandwidth. Micron's GDDR6X is the world's fastest graphics memory.

5. Advanced Micro Devices: AMD launched the new AMD Radeon Pro 5000 series Graphics Processing Units (GPUs) built on 7nm process technology for the iMac platform. These new GPUs include 16GB of high-speed GDDR6 memory and can run a wide range of graphically intensive applications and workloads.

6. Xilinx: The Versal adaptive compute acceleration platform (ACAP) portfolio, a new series from Xilinx, integrates HBM to offer fast compute acceleration for massive, connected data sets using fewer and lower-cost servers. The latest Versal HBM series incorporates advanced HBM2e DRAM.

The next section highlights HBM IP trends over the last five years to illustrate the growing market and applicability scope of the technology.

HBM  IP (Patent) Filing Trends  by Assignees and Jurisdiction (last five years)

The US accounts for more than 50% of all HBM IP (patent) filings since 2017 relative to the other top patent filing jurisdictions in this space. A quick analysis of the HBM IP filing trends of the top patent assignees over the past five years shows the dominance of Samsung over other market players. Further, Intel and Micron are the other top filers from 2017 onwards in the HBM space.

Figure 4: Top 10 Patent Assignees based on Percentage Filings from 2017 for HBM (Derwent Innovation)


Figure 5: Top patent filing jurisdictions from 2017 for HBM (Derwent Innovation)

  • AMD and Xilinx are both major participants in the HBM domain with strong IP portfolios. The recent acquisition of Xilinx by AMD would only strengthen AMD's HBM IP portfolio related to the technology's use in high-performance computing applications.
  • Samsung's existing HBM IP portfolio and the integration of AI with HBM through HBM-PIM show that Samsung plans to hold a stronghold in this domain for the foreseeable future. The HBM-PIM has been tested in the Xilinx Virtex UltraScale+ (Alveo) AI accelerator. Xilinx and Samsung have been collaborating to enable high-performance solutions for data center, networking, and real-time signal processing applications with the help of the Virtex UltraScale+ HBM family and Versal HBM series products.
  • Nvidia chose to use Samsung's HBM2 Flarebolt in its Tesla P100 accelerators to power data centers in need of supercharged efficiency. AMD selected HBM2 Flarebolt for its Radeon Instinct data-center accelerators and its high-end graphics cards. Intel has embraced the technology, leveraging HBM2 Flarebolt to introduce high-performance, power-efficient graphics solutions for mobile PCs. Rambus and Northwest Logic teamed up to introduce HBM2 Flarebolt-compatible memory controller and physical layer (PHY) technology designed for use in high-performance networking chips. Other companies developing products combining HBM2 Flarebolt memory with various networking capabilities include Mobiveil, eSilicon, Open-Silicon, and Wave Computing.
  • SK Hynix aims to solidify its leadership in the HBM domain with the first mover's advantage of supplying HBM3 to major companies such as Nvidia for integration with the NVIDIA H100 Tensor Core GPU.

The HBM industry is rapidly expanding. So, let us look at some recent developments in the industry and the predictions for the future of this market.

Recent Developments and Future Prospects of HBM

High Bandwidth Memory solutions are currently optimized for data center applications such as compute acceleration, machine learning (ML), data preprocessing and buffering, and database acceleration. Major HBM IP owners and manufacturers are upping their game by integrating HBM with processors for AI computing capabilities and for high-performance computing (HPC) systems, in order to accelerate large-scale processing in data centers and AI-enabled mobile apps.

The technology is picking up steam in the market, going from virtually no revenue a few years ago to what will likely be billions of dollars over the next few years. HBM technology is evolving expeditiously to meet the rising need for broad access to cloud-based and quantum technologies, GPUs (graphics processing units) and FPGAs, high-performance computing (HPC) systems to accelerate large-scale processing in data centers, and AI software. Its advantages over prior memory solutions, namely lower power, higher memory capacity through stacking, increased bandwidth, and proximity to the logic chip, should drive increased adoption in various future applications.


Businesses that stay at the forefront of technology will benefit greatly from the impending transformation in memory stacking technology with HBM. We have seen the HBM IP trends, and the top competitors in this 3D memory stacking technology domain are fast expanding to capitalize on the immense economic opportunities offered by HBM. The HBM market is undergoing rapid development and change. Thus, protecting IP rights is a critical strategy for industry players. As a result, it is crucial for companies already operating in this sector, and for those wishing to enter it, to evaluate the existing HBM IP landscape.