Celestica: The Quiet Infrastructure Winner Inside AI Data Centers
Why hyperscaler integration and speed to production are reshaping Celestica’s role in AI infrastructure
Celestica is no longer a generic electronics manufacturer that wins on price. It is increasingly a technology-platform solutions provider that wins on time to production, co-design depth, and the ability to industrialize the hardest parts of AI infrastructure at scale. That combination is why the company is leaning into a 2026 outlook that implies step-change growth, not a normal year, and why the market is treating Celestica less like an assembler and more like an AI infrastructure compounder.
Investment thesis
The bullish case for Celestica is that AI data centers are shifting from a "buy parts and integrate later" model toward a "design, validate, and manufacture as a system" model. In that world, the scarce resource is not access to a contract manufacturer. The scarce resource is an engineering and manufacturing partner that can move with hyperscaler cycles, ship at the bleeding edge of networking bandwidth, and reliably deliver complex rack-level solutions that increasingly include liquid cooling and custom silicon adjacency.
Celestica is positioning itself exactly there. Management is guiding to 2026 revenue of $16.0 billion, adjusted operating margin of 7.8 percent, adjusted EPS of $8.20, and non-GAAP free cash flow of $500 million. That is not good execution in a steady market; it is a company describing a multi-year demand wave with momentum continuing into 2027.
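As a quick sanity check, the guided figures can be combined into a couple of derived ratios. The inputs below are management's stated 2026 targets from the paragraph above; the derived numbers are simple arithmetic, not company disclosures.

```python
# Back-of-envelope check on the 2026 guidance cited above.
# Inputs are management's guided targets; outputs are derived, not disclosed.

revenue = 16.0e9            # guided 2026 revenue, USD
adj_op_margin = 0.078       # guided adjusted operating margin
free_cash_flow = 500e6      # guided non-GAAP free cash flow, USD

# Implied adjusted operating income: $16.0B x 7.8% = ~$1.25B
adj_op_income = revenue * adj_op_margin

# Implied conversion of adjusted operating income into free cash flow: ~40%
fcf_conversion = free_cash_flow / adj_op_income

print(f"Implied adjusted operating income: ${adj_op_income / 1e9:.2f}B")
print(f"Implied FCF / adj. operating income: {fcf_conversion:.0%}")
```

A roughly 40 percent conversion of adjusted operating income into free cash flow is the quantitative core of the "scale with cash generation" argument made later in the piece.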
Why networking is becoming the gating factor in AI infrastructure
AI clusters scale differently than traditional compute. As clusters get larger, the number of interconnects grows dramatically, and the network becomes a first-order limiter of performance and utilization. Celestica frames this as networking intensity increasing with the scaling of compute, and ties it to accelerating demand for back-end networking, where performance and bandwidth requirements are higher and refresh cycles are shorter.
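The scaling intuition can be made concrete with a minimal sketch. This is illustrative only: real AI fabrics use Clos or fat-tree topologies rather than full meshes, and none of these figures come from Celestica. The point is simply that link counts grow faster than node counts.

```python
# Illustrative only: in a full mesh, links grow as n*(n-1)/2,
# so doubling the node count roughly quadruples the interconnects.
# Real back-end fabrics (Clos, fat-tree) scale less steeply but still
# superlinearly in switch ports and cables.

def full_mesh_links(n: int) -> int:
    """Point-to-point links needed to fully connect n nodes."""
    return n * (n - 1) // 2

for nodes in (256, 512, 1024):
    links = full_mesh_links(nodes)
    print(f"{nodes:>5} nodes -> {links:>7} links ({links / nodes:.1f} per node)")
```

Doubling a cluster from 512 to 1,024 nodes roughly quadruples the full-mesh link count, which is why networking, not compute, becomes the gating factor as deployments scale.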
Industry data supports this shift. Ethernet has become the dominant protocol for AI back-end networks, with higher-speed platforms capturing a growing share of deployments. The result is not just higher unit volume, but faster upgrade cycles as customers move from one bandwidth generation to the next.
Zooming out, data center Ethernet switching has entered a phase where spending growth is driven by AI workloads rather than traditional enterprise refreshes. This creates an environment where vendors with proven high bandwidth platforms and hyperscaler integration can gain share quickly.
What Celestica actually sells now
Celestica increasingly sells technology platform solutions rather than discrete manufactured products. Its value proposition spans design, engineering and testing, manufacturing, supply chain, and aftermarket services. Once a hyperscaler qualifies a platform across performance, thermals, reliability, and manufacturability, switching costs rise sharply.
Two aspects of Celestica's current positioning illustrate this shift. The company has highlighted a 1.6T networking rack program for a hyperscaler that required early deployment of next-generation silicon and integration into rack-scale liquid cooling. That engagement is expected to ramp into mass production in late 2026 and has already led to follow-on awards for additional liquid-cooled, rack-level platforms.
At the same time, Celestica is investing ahead of demand by expanding North American capacity to support AI customers, including a manufacturing center of excellence for AI racks and a new design hub planned for 2026, with further power and footprint expansion through 2027 and beyond.
CCS is the compounding engine, and hyperscaler exposure keeps rising
Connectivity and Cloud Solutions is the core of Celestica’s growth profile. This segment captures the highest value portions of AI data center spend, including high bandwidth networking and custom hyperscaler designs.
What matters most is who is driving that growth. Hyperscalers now account for a rising share of CCS revenue, signaling deeper integration and longer-lived programs rather than opportunistic shipments. As hyperscaler exposure increases, revenue visibility improves and platform continuity becomes more likely across technology generations.
This shift turns CCS into a compounding engine rather than a cyclical segment tied to one-off demand spikes.
The 800G to 1.6T upgrade path is not theoretical
Celestica is not simply participating in the transition to higher bandwidth networking. It is positioned as a leader. The company has cited leading share across 200G, 400G, and 800G Ethernet platforms, as well as a dominant position in custom Ethernet switch designs for hyperscalers.
More importantly, Celestica is already aligned with the next step in the roadmap. Its 1.6T platforms incorporate next-generation switching silicon and direct-to-chip liquid cooling, increasing both technical complexity and customer dependence on experienced partners.
As the industry eventually moves toward 3.2T platforms, the advantages of incumbency, accumulated validation work, and manufacturing readiness are likely to grow rather than shrink.
Engineering intensity is rising, and Celestica is leaning into it
Celestica employs more than 1,100 design engineers across multiple global design sites and plans to significantly increase research and development spending in 2026. Investment is focused on next generation networking, advanced interconnects, optical technologies, and cooling architectures needed for dense AI compute.
This engineering depth matters because differentiation is shifting away from who can assemble hardware toward who can co design, qualify, and ramp platforms within hyperscaler timelines. In that environment, speed and reliability become competitive weapons.
Celestica’s ability to move quickly from early silicon access to operational prototypes and production ready systems reinforces its position as a preferred partner rather than a replaceable supplier.
The financial profile supports a bullish multiple narrative
This is not a growth-at-any-cost story. Celestica is scaling rapidly while generating meaningful cash flow. Revenue growth has been accompanied by improving adjusted operating margins, reflecting better mix, productivity gains, and operating leverage.
Management has raised its full-year 2025 outlook and laid out a 2026 framework that combines strong top-line growth with substantial free cash flow generation. If achieved, this profile is unusual for hardware-adjacent infrastructure companies, which often require heavy capital investment to sustain growth.
One important nuance is that reported GAAP results can be influenced by non-operating factors tied to financial instruments. The underlying operating trajectory is better reflected in adjusted metrics and cash flow, which show a business that is strengthening rather than merely benefiting from accounting effects.
What the next 18 months need to prove
The path forward is relatively well defined. Celestica expects continued strength in 800G networking platforms, the ramp of 1.6T programs in the second half of 2026, and the progression of a rack-scale custom AI system for a digital-native customer toward meaningful production in 2027.
Execution across these ramps would confirm that Celestica's engineering-led model can sustain growth across multiple technology transitions, not just one cycle.
The key point is that no single event needs to go perfectly. The thesis depends on a sequence of platform ramps where Celestica already has incumbency and customer trust.
The risks are real, but the moat is built for them
Customer concentration remains high, with a small number of hyperscale clients representing a large share of revenue. That creates exposure to timing risk if a major customer delays a build-out, shifts architecture, or introduces aggressive dual sourcing.
Gross margins are structurally constrained by the manufacturing-intensive nature of the business. However, as platforms become more complex and the engineering contribution rises, the quality of revenue can improve even if headline margins remain modest.
Competition is also real, particularly as AI infrastructure attracts more capital and attention. Celestica's advantage is that it already holds meaningful share in the highest-bandwidth tiers and is embedded in customer roadmaps that extend several generations forward.
Takeaway
Celestica has moved up the stack from "build what we are told" to "co-design and industrialize what AI data centers need next." The evidence is tangible: rising hyperscaler exposure, leadership positions in high-bandwidth Ethernet, a defined upgrade path from 800G to 1.6T and beyond, and a growth outlook that pairs scale with cash generation.
If the company executes the 2026 and 2027 ramp sequence it has outlined, Celestica does not need heroic assumptions to justify its positioning. It simply needs to keep doing what it is already doing: shipping the next platform first and turning complexity into durable customer relationships.

