Data Centers in Earth Orbit
Elon Musk is merging SpaceX and xAI, with plans for space-based AI data centers. Musk says that solar-powered, space-based data centers are the only way to meet AI's burgeoning energy demands.
To achieve this, SpaceX wants to launch a constellation of 1 million satellites that will orbit Earth and harness the sun to power AI data centers, according to an 18 Sep 2025 filing with the Federal Communications Commission (FCC), GN Docket No. 25-302.
The filing was made by:
- Spectrum Business Trust 2025-1 (the Trust),
- Space Exploration Technologies Corp. (SpaceX), and
- EchoStar Corporation and its wholly owned subsidiaries (EchoStar).
Arguments FOR Orbital Data Centers ... and Early Steps
✅ Amazon/Blue Origin
"Blue Origin has been working on technology for AI data centers in space, building on Jeff Bezos' prediction that "giant gigawatt data centers" in orbit could beat the cost of their Earth-bound peers within 10 to 20 years by tapping uninterrupted solar power and radiating heat directly into space." [Reuters/ChatGPT]
✅ Nvidia
Starcloud has already offered a glimpse of that future: its Starcloud-1 satellite, launched on a Falcon 9 last month, carries an Nvidia H100 - the most powerful AI chip ever placed in orbit - and is training and running Google’s open‑source Gemma model as a proof of concept.
The company ultimately envisions a modular “hypercluster” of satellites providing about five gigawatts of computing power, comparable to several hyperscale data centers combined. [Reuters/ChatGPT] [White Paper: Aug 2024]
Arguments AGAINST Orbital Data Centers
ChatGPT offers the following comments on some of the issues involved in building a space-based data center:
- Cooling would rely on thermal radiation (infrared emission into space), which requires huge radiator panels. Rejecting 1 MW of heat may require thousands of square meters of radiator area (see the sizing sketch after this list).
- Orbit is a harsh radiation environment, with threats from cosmic rays, solar particle events, and, depending on the orbit, trapped radiation belts
- Radiation impacts on hardware: bit flips in memory, silent data corruption, gradual transistor degradation, and reduced chip lifespan
- Launch issues remain, though SpaceX offers mature launch technology
- Data bandwidth and latency between the ground and orbit (a latency lower bound is sketched after the summary list below)
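The radiator-area claim can be checked with the Stefan-Boltzmann law, P = εσAT⁴. The sketch below is a minimal back-of-the-envelope sizing, assuming an illustrative radiator temperature of 300 K, an emissivity of 0.9, and panels that radiate from both faces; none of these figures come from the FCC filing or the companies named above.

```python
# Rough radiator sizing via the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
# Emissivity, radiator temperature, and the two-sided assumption are illustrative
# assumptions, not figures from SpaceX, Starcloud, or the FCC filing.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_w, temp_k=300.0, emissivity=0.9, sides=2):
    """Radiator area needed to reject heat_w watts at temp_k, ignoring
    absorbed sunlight and Earth infrared (which would increase the area)."""
    flux_per_m2 = emissivity * SIGMA * temp_k**4   # W radiated per m^2 of surface
    return heat_w / (flux_per_m2 * sides)          # panel area, both faces radiating

if __name__ == "__main__":
    for mw in (1, 10, 1000):
        area = radiator_area_m2(mw * 1e6)
        print(f"{mw:>5} MW -> ~{area:,.0f} m^2 of radiator panel")
```

Under these assumptions a 1 MW payload already needs on the order of 1,200 m² of panel, and a real design must also shed absorbed sunlight and Earthshine, so the "thousands of square meters" estimate above is plausible.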
In summary, ChatGPT outlines the following issues - in descending order of difficulty:
- 🔥 Heat rejection
- ☢️ Radiation tolerance
- 🚀 Launch mass & cost
- 📡 Data movement
- 🔧 Maintenance & upgrades
- ☄️ Orbital safety
- ☀️ Power generation
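For the 📡 data-movement item, the hard physical floor is light travel time between the ground and the satellite. The sketch below computes that lower bound for a few representative altitudes; the altitudes are typical published values, and real links add slant range, processing, and queuing delays on top of this floor.

```python
# Back-of-the-envelope one-way light travel time for ground-to-satellite links.
# Altitudes are typical published values; slant range and processing delays
# are ignored, so these numbers are lower bounds only.

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

ORBITS_KM = {
    "LEO (Starlink-like, ~550 km)": 550,
    "MEO (~8,000 km)": 8_000,
    "GEO (35,786 km)": 35_786,
}

for name, altitude_km in ORBITS_KM.items():
    one_way_ms = altitude_km / C_KM_PER_S * 1_000
    print(f"{name}: >= {one_way_ms:.1f} ms one-way, >= {2 * one_way_ms:.1f} ms round trip")
```

The LEO floor is under 2 ms one-way, while GEO is roughly 120 ms one-way; everything beyond that comes from the link and network stack rather than physics.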
MEANWHILE, BACK ON EARTH
✅ Amazon's Northern Indiana Data Center for Anthropic

The 1,200-acre data center will support 500,000 AWS Trainium 2 and Trainium 3 chips. This number will increase to 1 million chips by the end of 2025.
Power for the data center will be provided by NIPSCO (Northern Indiana Public Service Co). To support the energy requirements of that data center, NIPSCO Generation will build about 3 gigawatts of new generation capacity, including new gas-fired power plants and battery storage systems. [ChatGPT]
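As a rough plausibility check on those figures, and only under the hypothetical assumption that the full 3 GW of new capacity ultimately serves the campus at its 1-million-chip build-out, the implied per-chip power budget can be computed directly:

```python
# Purely illustrative arithmetic relating the stated figures: ~3 GW of new
# generation and up to 1,000,000 accelerator chips. Assumes (hypothetically)
# that all new capacity serves the campus; the actual allocation is not stated.

new_generation_w = 3e9    # ~3 GW of new NIPSCO capacity
chip_count = 1_000_000    # planned chip count at full build-out

per_chip_budget_w = new_generation_w / chip_count
print(f"Implied budget: ~{per_chip_budget_w:,.0f} W per chip, "
      "including cooling, networking, and facility overhead")
```

That works out to a few kilowatts per accelerator once cooling, networking, and facility overhead are included, which is broadly consistent with high-density, liquid-cooled AI deployments.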
The cost of the Rainier data center campus and related facilities is estimated at around USD 26 billion:
- USD 11 billion for the initial buildout, and
- USD 15 billion for expanded data center campuses.
AI Technical Infrastructure
This is an investor's view of investment opportunities in AI technical infrastructure, prepared with ClaudeAI.
✅ marks companies in which we are investors
TIER 1 - Mission Critical (Highest AI Value):
- 1. Broadcom (AVGO) - The Custom Silicon King
- Q4 FY2025 AI revenue: $6.5B, up 74%
- $73B AI backlog over next 18 months
- FY2026 AI revenue projected at $40.4B
- Dominates custom AI chips (✅ Google TPUs, ✅ Meta, OpenAI partnerships)
- 70% market share in custom AI Application-specific Integrated Circuits (ASICs)
- Strategic value: Irreplaceable for hyperscalers wanting alternatives to Nvidia
- 2. Marvell (MRVL) - The Connectivity Backbone
- Q3 FY2026 revenue: $2.075B, up 37% YoY
- AI revenue exceeded $1.5B in FY2025, projected to surpass $2.5B in FY2026
- Data center segment now represents 75% of revenue
- Powers ✅ Amazon AWS Trainium/Inferentia chips
- Celestial AI acquisition adds photonic fabric technology with $1B run-rate target by 2028
- Strategic value: Critical for optical interconnects (800G to 1.6T transition)
- 3. Arista Networks (ANET) - The AI Networking Leader
- Q3 2025 revenue: $2.31B, up 27.5% YoY
- FY2026 guidance: $10.65B revenue, with $2.75B in AI revenue
- Etherlink architecture accounts for roughly two-thirds of AI cluster sales
- Partnered with ✅ NVIDIA, AMD, OpenAI
- Strategic value: Backend networking for AI training clusters is non-negotiable
TIER 2 - Highly Important:
- 4. Applied Materials (AMAT) - The Equipment Enabler
- FY2025 revenue: $28.37B, up 4% YoY
- Preparing for higher demand in H2 2026
- 30% higher revenue opportunity per fab from Gate-All-Around (GAA) transistors and backside power delivery
- Strategic value: Enables all advanced chip manufacturing (foundational, not AI-specific)
- 5. Pure Storage (PSTG) - The AI Storage Specialist
- Q3 FY2026 revenue: $964.5M, up 16% YoY
- Subscription ARR: $1.8B, up 17% YoY
- Partnership with ✅ NVIDIA for AI Factory deployments
- 45% of revenue from subscriptions
- Strategic value: High-performance storage for AI inference becoming critical
- 6. Analog Devices (ADI) - Signal Integrity & Power
- (Need to add: power management for AI chips)
TIER 3 - Supporting Infrastructure:
- 7. NetApp (NTAP) - Enterprise AI Storage
- FY2025 revenue: $6.57B, up 4.85% YoY
- 150 AI infrastructure deals closed in Q4
- All-flash array revenue: $4.1B annualized run-rate, up 14%
- Strategic value: Enterprise AI adoption, but less critical than hyperscaler-focused
- 8. Seagate (STX) / Western Digital (WDC) - Mass Storage
- Strategic value: Least AI-specific; training uses NVMe/SSD, HDDs mainly for archival
- AI workloads favor flash over spinning disks
- 9. Rambus (RMBS) - Memory Interfaces
- Strategic value: Enabling technology but smaller TAM, niche player in chiplets
More Critical Players
Must Have
Vertiv (VRT) - Liquid cooling & power infrastructure for AI datacenters
Eaton (ETN) - Power distribution critical as clusters scale
Coherent (COHR) - Optical transceivers (competes with Marvell)
ASML (ASML) - Extreme ultraviolet (EUV) photolithography monopoly (enables everything, though it is a Dutch company)
Strongly Consider
✅ Nvidia (NVDA) - The elephant in the room for GPUs
AMD (AMD) - MI300X competing with Nvidia
Super Micro (SMCI) - AI server integration
Dell/HPE - AI infrastructure deployment
And also: