Industry News · sram · inference latency · nvidia · openai
OpenAI Seeks Alternatives To Nvidia Inference Chips
OpenAI has been seeking alternatives to Nvidia's inference chips since last year, Reuters reports, after finding some Nvidia hardware too slow for specific inference tasks such as code generation. It has struck deals with AMD and Cerebras and held talks with Groq. Negotiations over Nvidia's proposed investment of up to $100 billion in OpenAI have stalled as OpenAI shifts toward SRAM-heavy chips for faster inference, and Nvidia's $20 billion licensing deal with Groq has reshaped OpenAI's options.


