SAN JOSE, California, March 17 (Yonhap-Reuters) – Nvidia Chief Executive Jensen Huang on Monday pushed back against concerns over so‑called “circular transactions,” defending the U.S. chipmaker’s strategy of investing in major customers that also buy its flagship AI processors.
“We invest in companies we believe will succeed,” Huang told reporters at a press conference at the Signia by Hilton hotel in San Jose. “We see their upcoming business pipeline, so we know they are going to hit a ‘home run.’ The risk is extremely low,” he said.
Nvidia has taken equity stakes in a number of heavy users of its graphics processing units (GPUs), including OpenAI, cloud provider CoreWeave and infrastructure firm Nscale. The overlapping roles of supplier, investor and partner have prompted questions in some quarters about whether Nvidia is effectively helping to finance demand for its own chips.
Huang rejected that characterization, arguing that Nvidia’s insight into customers’ product roadmaps and workloads justifies the bets. He did not disclose the size of individual investments or address whether Nvidia has internal safeguards to manage potential conflicts of interest.
The comments came a day after Huang used a keynote speech to project a $1 trillion revenue opportunity for AI chips by next year. On Monday he stressed that the figure was conservative and limited in scope.
“There are still 21 months left, so it could be even more than that,” he said, adding that the estimate covers only Nvidia’s next‑generation Blackwell and Rubin GPUs. It does not include central processing units (CPUs), Groq’s language processing unit (LPU) for inference, or “Feynman,” the GPU architecture planned to follow Rubin.
Huang framed the rise of AI “agents” – software that can directly perform tasks on behalf of users – as a major inflection point for inference workloads, the computations needed to run AI models rather than train them.
He said OpenAI’s “o1,” deployed through ChatGPT, marked the arrival of the first broadly available “thinking model,” and that it was followed by Anthropic’s “Claude Code,” an agent‑style system that further boosted demand for inference. Until recently, he said, such capabilities were largely confined to enterprises.
According to Huang, that changed with the emergence of OpenCLO, which he credited with opening AI agents “to everyone” rather than just corporate users. “OpenCLO, they deserve a lot of credit for that,” he said, while noting that the platform also exposed “serious challenges in the security sector.”
Huang’s remarks underscored the role of “NemoCLO,” a framework adopted by OpenCLO and integrated into Nvidia’s ecosystem, which he said was designed to strengthen security and governance around agentic AI while preserving broad access.
By stressing both the low risk of its customer‑investment strategy and the expanding, potentially understated market for AI silicon, Huang sought to reassure investors and regulators that Nvidia can continue to fuel the AI boom without creating undue financial or security vulnerabilities.
Copyright ⓒ NewsRoad. Unauthorized reproduction and redistribution prohibited.