Nvidia (NVDA) is experiencing a slight dip in its stock value as investors respond to the company’s decision to increase the production of Groq-designed AI inference chips through Samsung Electronics. This move comes as demand surges for energy-efficient AI processors, particularly those optimized for inference tasks instead of traditional training workloads.
Industry insiders report that Samsung will boost wafer production for Groq chips from 9,000 wafers last year to 15,000 this year, with the chips manufactured on Samsung's 4-nanometer process. Beyond supporting Nvidia's AI ambitions, Samsung will also produce processors for South Korea's HyperAccel, an emerging AI chip startup.
Groq Chips Target GPU Bottlenecks
Groq, a U.S.-based AI chip startup that Nvidia backs through a technology licensing agreement, focuses on inference processing rather than the GPU-heavy training workloads its competitors target. Its Language Processing Units (LPUs) use on-chip static RAM (SRAM) instead of high-bandwidth memory (HBM), delivering over 80 terabytes per second of memory bandwidth while consuming roughly one-tenth the energy of comparable GPUs.
By keeping its current-generation LPUs on a mature 14-nanometer process and bypassing HBM, Groq sidesteps the supply-chain bottlenecks plaguing GPU production. The LPUs are designed, engineered, and manufactured entirely within North America, in contrast to the globally sourced components typical of modern GPUs. Analysts argue that this approach lets Nvidia meet rising demand for inference processing without being constrained by GPU supply chains.
Market Effects and Memory Shortages
Surging demand for AI chips is rippling through the broader electronics market: increased AI chip production is creating memory shortages that constrain component availability for personal computers and mobile devices. Samsung, a beneficiary of these trends, reported an operating profit increase of more than 200% year-on-year in Q4 2025, driven largely by its memory business.
Nonetheless, not all of Samsung's divisions are thriving. Operating profit in the mobile division fell 9.5% year-on-year to 1.9 trillion won as rising component costs squeezed margins. Analysts warn that persistent price pressure on memory and AI chip components could push electronics manufacturers to pass these costs on to consumers, driving prices higher across the board.
New AI Chips Ahead of GTC 2026
Looking ahead, Nvidia is expected to unveil a new Groq-designed inference chip at its GTC 2026 event. Reports suggest the forthcoming chip will continue to use SRAM rather than HBM, reinforcing the emphasis on energy efficiency and supply-chain resilience.
Investors and market watchers are closely tracking Nvidia's strategic partnerships and product roadmap; the market's cautious response reflects both the company's growth potential and the near-term cost pressures it faces.
This collaborative effort between Nvidia and Samsung highlights the increasing significance of specialized AI inference processors in the semiconductor industry. As the demand for energy-efficient AI solutions accelerates, Nvidia’s emphasis on scaling production while avoiding traditional GPU bottlenecks may prove to be a decisive factor in the competitive AI chip market.
