Shinsegae Ends OpenAI Talks, Commits to Reflection AI

Shinsegae Group has halted discussions with OpenAI and will concentrate its AI efforts on a strategic partnership with Reflection AI. The retailer plans to jointly build and operate a large-scale AI data center in South Korea, starting with a 250-MW phased deployment, and to integrate AI across six core retail functions: product sourcing, ordering, pricing, logistics, inventory, and customer management. Shinsegae had previously signed a separate MOU with OpenAI to explore a ChatGPT-based shopping agent and product linking, but announced it would discontinue OpenAI talks to "agilely and efficiently pursue" the Reflection AI collaboration. The move prioritizes infrastructure control and a single-vendor execution path for end-to-end AI commerce.
What happened
Shinsegae Group has abruptly discontinued collaboration talks with OpenAI and shifted focus to its American partner Reflection AI, accelerating plans to build and operate a large AI data center in South Korea. The company publicly said it will end the OpenAI discussions only about 10 days after publicizing them, and will instead concentrate resources on expanding retail-integration projects with Reflection AI and constructing a phased 250-MW AI data center.
Technical details
Shinsegae previously announced a two-track AI approach: an agent layer built with ChatGPT integrations for an in-app shopping assistant and a parallel infrastructure play with Reflection AI for on-premise scale compute. The new direction collapses those tracks toward Reflection AI for both infrastructure and applied retail models. Critical technical points practitioners should note:
- The data center plan is described as a phased 250-MW deployment, implying hyperscale GPU capacity and significant power and cooling engineering requirements (a rough capacity estimate follows this list).
- Shinsegae intends to operationalize AI across six retail domains: product sourcing, ordering, pricing, logistics, inventory management, and customer management. These domains will require supply-chain forecasting models, demand-sensing time-series models, price-optimization engines, logistics routing and capacity models, and customer personalization systems (a toy pricing sketch follows this list).
- The company emphasizes joint operation of the data center, which suggests a co-location or managed dedicated cloud approach rather than a public-cloud-only integration model.
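For a sense of what a 250-MW budget implies, here is a back-of-envelope estimate of accelerator capacity. The per-GPU draw, server overhead, and PUE figures below are illustrative assumptions, not numbers from the announcement.

```python
# Rough, illustrative estimate of accelerator capacity for a 250-MW AI data center.
# All inputs below are assumptions for illustration, not figures from the Shinsegae announcement.

FACILITY_POWER_MW = 250          # announced phased target
PUE = 1.3                        # assumed power usage effectiveness (cooling, conversion losses)
GPU_TDP_KW = 1.0                 # assumed per-accelerator draw for a modern training GPU
SERVER_OVERHEAD_FRACTION = 0.35  # assumed CPU, memory, networking, storage overhead per GPU

it_power_kw = FACILITY_POWER_MW * 1000 / PUE
power_per_gpu_kw = GPU_TDP_KW * (1 + SERVER_OVERHEAD_FRACTION)
approx_gpus = int(it_power_kw / power_per_gpu_kw)

print(f"IT power budget: ~{it_power_kw:,.0f} kW")
print(f"Approximate accelerators supported: ~{approx_gpus:,}")
# Under these assumptions the facility supports on the order of 10^5 GPUs,
# which is why power delivery and cooling dominate the engineering effort.
```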
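As one concrete example of the operational ML these domains imply, the sketch below fits a constant-elasticity (log-log) demand model to synthetic sales data and searches a price grid for the profit-maximizing point. It is a toy illustration of what a price-optimization engine does, using made-up data and an assumed model form, not anything Shinsegae has described.

```python
import numpy as np

# Toy price-optimization sketch: fit ln(q) = a + b*ln(p) to synthetic observations,
# then choose the price that maximizes expected profit above an assumed unit cost.
rng = np.random.default_rng(0)
prices = rng.uniform(8.0, 15.0, size=200)                      # historical prices (synthetic)
true_elasticity = -1.8
demand = np.exp(6.0 + true_elasticity * np.log(prices)
                + rng.normal(0, 0.1, size=200))                # synthetic demand with noise

# Ordinary least squares on the log-log demand curve
X = np.column_stack([np.ones_like(prices), np.log(prices)])
a_hat, b_hat = np.linalg.lstsq(X, np.log(demand), rcond=None)[0]

unit_cost = 6.0                                                # assumed cost floor
candidate_prices = np.linspace(unit_cost * 1.05, 20.0, 500)    # respect a minimum margin
expected_demand = np.exp(a_hat + b_hat * np.log(candidate_prices))
expected_profit = (candidate_prices - unit_cost) * expected_demand

best_price = candidate_prices[np.argmax(expected_profit)]
print(f"Estimated elasticity: {b_hat:.2f}")
print(f"Profit-maximizing price under this toy model: {best_price:.2f}")
```

Production systems layer demand sensing, inventory constraints, and cross-item effects on top of this kind of core, but the basic estimate-then-optimize loop is the same.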
Context and significance
This pivot reveals two strategic priorities. First, Shinsegae values owning or jointly operating infrastructure at hyperscale, a move aligned with sovereign-AI narratives and with other retailers seeking to avoid vendor lock-in for sensitive customer and supply-chain data. Second, it shows the tradeoff between rapid integration through third-party agent APIs such as ChatGPT and the longer-term control over model training, latency, and data governance that comes with running an in-house stack.
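To make that tradeoff concrete: many self-hosted serving stacks (vLLM, for example) expose an OpenAI-compatible HTTP API, so an agent written against ChatGPT-style chat-completion calls can often be repointed at in-house GPUs by changing the base URL and model name. The endpoint and model below are placeholders assuming such a deployment exists; this is a minimal sketch, not Shinsegae's actual architecture.

```python
from openai import OpenAI

# Minimal sketch: the same chat-completions client code can target either a hosted API
# or an OpenAI-compatible server running on in-house GPUs (e.g. a vLLM deployment).
# The base_url and model name are hypothetical placeholders, not real Shinsegae endpoints.
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # hypothetical in-house endpoint
    api_key="EMPTY",                                 # many self-hosted servers ignore the key
)

response = client.chat.completions.create(
    model="in-house-retail-assistant",               # placeholder model name
    messages=[
        {"role": "system", "content": "You are a shopping assistant for a Korean retailer."},
        {"role": "user", "content": "Find me a mid-priced espresso machine."},
    ],
)
print(response.choices[0].message.content)
```

The migration cost is therefore less about client code and more about model quality, latency, and the data governance of what flows through the endpoint.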
For the broader AI ecosystem, the deal signals demand for vertically integrated retail AI solutions that pair large models with bespoke operational ML. If the data center reaches the advertised scale, it will be a local anchor for GPU capacity and could attract hardware partners, system integrators, and software vendors focused on model licensing, fine-tuning, and data-platform integration. KED Global and other outlets framed the infrastructure component in dollar terms, citing multibillion-dollar buildout figures tied to Nvidia-backed partners, which underscores the capital intensity and hardware orientation of this strategy.
What to watch
Track three near-term items:
- technical terms of the Reflection AI joint-operation MOU and ownership of model weights and training data
- whether Shinsegae retains any ChatGPT-based agent work or reimplements conversational shopping with models hosted in the new data center
- vendor participation on GPUs, networking, and power builds, since those choices will determine latency, model update cadence, and cost structure
Bottom line
Shinsegae is prioritizing infrastructure sovereignty and an integrated retail AI stack over a multi-vendor agent strategy. For practitioners this is a pragmatic, capital-heavy approach that favors full-stack integration, model governance, and operational performance at the expense of faster, lower-friction API-based deployments.
Scoring Rationale
The story matters because a major retailer is moving from API-first agent experiments to owning hyperscale infrastructure, which affects model governance, latency, and vertical AI deployments. It is regionally significant and capital intensive but not a frontier-model or regulatory watershed.


