SK Group Shifts AI Strategy to Efficiency, Boosts Partnerships with OpenAI, Nvidia, AWS for Scalable Solutions
November 3, 2025
SK hynix’s evolution into a full-stack AI memory creator involves closer customer collaboration and joint development of memory solutions, as articulated by the company’s leadership.
Industry chatter notes Nvidia’s leadership acknowledging SK hynix’s rapid development, underscoring SK’s growing technical capabilities.
Chey Tae-won identified memory, energy, and AI infrastructure bottlenecks as limits to AI growth, advocating an efficiency-over-scale approach to bridge demand-supply gaps.
The leadership highlighted a broader paradigm of prioritizing efficiency to address lead times, power needs, and geopolitical factors affecting AI deployment.
SK hynix plans to start full-scale operation of a new HBM fabrication site in Cheongju next year and expand capacity dramatically by 2027 to develop ultra-high-capacity memory and cost-efficient large-scale NAND products.
As part of that vision, SKT seeks to partner with AWS on large-scale data centers and to extend its AI infrastructure footprint internationally.
SK hynix plans to deploy ultra-high-capacity memory and cost-efficient NAND designs to cut costs and boost data efficiency, progress that Nvidia’s leadership has acknowledged.
OpenAI and AWS both affirmed their commitment to SK Group partnerships, with OpenAI requesting hundreds of thousands of HBM wafers for Stargate and AWS backing a mega AI data center in Ulsan.
OpenAI and AWS partnerships are central to SK Group’s AI strategy, with leaders underscoring the need for massive, coordinated infrastructure investments.
OpenAI’s involvement emphasizes the critical role of collaborations in advancing powerful AI models.
The group is pursuing AI-enabled manufacturing and data-center optimization, including applying AI to memory fabrication and operations, and collaborating with Nvidia to boost production efficiency via the Omniverse platform.
The summit featured participants from AWS, Nvidia, Schneider Electric, Kakao, and various Korean AI startups, with demos of AI-driven services and humanoid foundation models.
The overall strategy focuses on memory semiconductors, AI infrastructure, and AI applications to relieve bottlenecks from demand exceeding supply.
SK Telecom announced plans to accelerate its AI data center business and work with affiliates to deploy more energy-efficient data center solutions.
SK hynix positions itself as a full-stack AI memory creator, expanding the role of memory to improve computing resources and tackle inference bottlenecks.
A new Yongin semiconductor cluster planned for 2027 will reach capacity equivalent to 24 M15X fabs, signaling a major expansion in memory chip manufacturing in Korea.
SK Telecom’s new CEO Jung Jai-hun aims to make Korea a global AI infrastructure hub, expanding AI data centers in Ulsan with AWS to exceed 1 gigawatt capacity and planning centers with OpenAI, plus ambitions to expand to Vietnam, Malaysia and Singapore.
The SK Group, led by Chey Tae-won, shifts its AI strategy from chasing scale to emphasizing efficiency, signaling deeper partnerships with OpenAI, Nvidia and AWS to deliver more efficient AI solutions.
At the SK AI Summit 2025 in Seoul, the focus was on moving from sheer scale to efficiency to meet growing AI demand across memory, infrastructure, and services.
The group’s plan centers on a coordinated, alliance-driven approach to overcome bottlenecks and scale efficient AI infrastructure and applications, including closer co-design with customers.
SK hynix plans to expand memory production and pursue ultra-high-capacity memory and NAND designs to improve data efficiency, including meeting OpenAI’s request for a large volume of high-bandwidth memory wafers for Stargate.
Open questions remain about how these alliances will address memory supply constraints, the scale of expanding data centers, and Korea’s evolving role as an AI infrastructure hub.
SK hynix and Nvidia are developing a joint AI manufacturing system and exploring a specialized AI factory to automate chip production.
Overall, the coverage portrays a concerted, alliance-focused effort to relieve AI supply bottlenecks and scale efficient AI infrastructure across memory, data centers, and applications.
Industry leaders like Nvidia’s Tim Costa and Kakao’s Chung Shin-a highlighted AI supercomputing for semiconductor design and the use of sustainable AI agents.
SK hynix’s CEO Kwak Noh-jung envisions the company becoming a full-stack AI memory creator through co-design with customers, outlining a memory roadmap spanning HBM4/HBM4E (2026–2028) and HBM5/HBM5E (2029–2031) to support AI workloads.
To tackle memory bottlenecks, SK Group intends to expand production capacity and technology, including the Yongin cluster slated for 2027 with sizable output.
Summary based on 3 sources
Sources

The Korea Times • Nov 3, 2025
SK Group pivots from scale to efficiency to meet soaring AI market demand
The Korea Herald • Nov 3, 2025
SK pivots from scale to efficiency in global AI race
THE INVESTOR • Nov 3, 2025
SK pivots from scale to efficiency in global AI race