As generative artificial intelligence drives surging demand for advanced memory, Samsung Electronics is ramping up production of the next generation of high-bandwidth memory (HBM). Sources familiar with the matter say Samsung plans to begin manufacturing its HBM4 chips as early as next month, a strategic move to supply key customers such as Nvidia and to narrow the lead held by SK Hynix in this specialized segment.
The move marks a critical juncture for Samsung as it seeks to regain its footing in the high-bandwidth memory market, particularly after production delays last year weighed on both its earnings and its stock performance. HBM is indispensable to modern AI accelerators, a sector that has grown rapidly alongside advances in generative AI.
While shipment volumes and contract details have not been publicly disclosed, South Korea's Korea Economic Daily reported that Samsung has passed HBM4 qualification for both Nvidia and Advanced Micro Devices (AMD). That clears the way for Samsung to begin shipping to Nvidia next month, a significant step in its effort to reclaim market share from its chief rival.
Meanwhile, SK Hynix, which currently commands a dominant share of the HBM market, continues to strengthen its position. The company is the main supplier of the advanced memory chips used in Nvidia's AI processors, underscoring its influence in the supply chain for cutting-edge AI hardware. In October, SK Hynix said it had concluded supply negotiations with major customers for the coming year, reflecting secured demand for its products.
SK Hynix is also expanding its manufacturing capacity. An executive recently said the company will begin processing silicon wafers at its new M15X fabrication facility in Cheongju, South Korea, starting next month. The plant is expected to boost HBM output, though it remains undecided whether HBM4 chips will be part of the initial production run.
The timing is significant: Nvidia is preparing to launch its next-generation AI platform, Vera Rubin, later this year. According to Nvidia CEO Jensen Huang, the Vera Rubin platform is already in full production and is designed to use HBM4 memory, underscoring the technology's importance to Nvidia's AI roadmap.
Investors are also focused on the fourth-quarter earnings reports from Samsung and SK Hynix, both scheduled for Thursday. Memory chip pricing has been volatile: Samsung has raised prices on major memory products by as much as 60% since September 2025. The company recently moved to quash circulating rumors of an unprecedented 80% price hike across its entire memory portfolio, a claim denied by both Samsung and allied memory module makers.
These price dynamics come amid broader market scrutiny as memory suppliers adapt to shifting demand and supply conditions driven by AI and related computing workloads.
From a financial and market perspective, Nvidia continues to show strong price momentum across short-, medium-, and long-term horizons despite a relatively weak value ranking. That reflects sustained investor interest in Nvidia's role in the AI hardware stack, particularly as it transitions to its next generation of processing platforms.