Integrated Solar Panels Enable Reduced-Mass Orbital AI Inference

A new arXiv paper proposes a distributed compute architecture for sun-synchronous orbit (SSO) satellites that integrates solar cells, radiator surfaces, and compute into thin, modular panels to maximize compute per launched mass. The design targets more than 100 kW of compute per launched metric ton and an estimated 500 W/kg specific power by using large vapor-chamber radiators to keep IC junction temperatures near 40°C, improving efficiency and reliability. A reference configuration—16 MW of compute in a 150 ton satellite composed of 16,000 panels—fits in a single Starship payload. A 1 kW/panel subarray design can run an inference-only LLM with a 500,000 token context window and 128 attention blocks at 553 tokens/sec/session across 256 concurrent sessions, and the full satellite could support more than 7,900 simultaneous inferences. The architecture emphasizes panel sizes from 1–4 m² to trade off heat transport, compute efficiency, and inter-panel communication, and is presented as scalable via on-orbit assembly or larger payloads.
What happened
The paper Reduced-Mass Orbital AI Inference via Integrated Solar, Compute, and Radiator Panels outlines a distributed compute architecture for SSO satellites that co-locates power generation, heat rejection, and compute in modular panels. The authors claim designs achieving >100 kW of compute per launched metric ton and a specific power near 500 W/kg, enabling a reference 16 MW, 150 ton satellite to fit in a single Starship payload bay.
Technical details
The proposal builds large vapor-chamber radiator areas into each panel so IC junctions can operate near 40°C, trading added radiator area and mass for the higher energy efficiency and reliability that the lower junction temperature provides. Panels are sized 1–4 m² to balance vapor-chamber heat transport against compute density and link requirements. A canonical 1 kW/panel subarray comprises 512 panels and is sized to run an inference-only LLM with a 500,000 token context window and 128 attention blocks at 553 tokens/sec/session across 256 simultaneous sessions. A full satellite with 31 such subarrays supports >7,900 concurrent inferences. The paper provides mass, power, and layout estimates (e.g., 16,000 panels in a 20 m × 2200 m grid for the reference satellite) and assumes custom components across the solar, compute, and thermal subsystems.
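The quoted totals can be checked with back-of-the-envelope arithmetic. The sketch below takes the figures from the paragraph above (panels per subarray, sessions per subarray, subarray count, satellite mass, total compute); the variable grouping is our own, not the paper's.

```python
# Sanity check of the reference-configuration figures quoted above.
PANELS_PER_SUBARRAY = 512      # panels in one 1 kW/panel subarray
SESSIONS_PER_SUBARRAY = 256    # concurrent LLM sessions per subarray
SUBARRAYS = 31                 # subarrays in the full satellite
SATELLITE_MASS_T = 150         # launched mass, metric tons
TOTAL_COMPUTE_KW = 16_000      # 16 MW of compute

panels = SUBARRAYS * PANELS_PER_SUBARRAY        # -> 15,872, ~16,000 quoted
sessions = SUBARRAYS * SESSIONS_PER_SUBARRAY    # -> 7,936, >7,900 quoted
compute_per_ton = TOTAL_COMPUTE_KW / SATELLITE_MASS_T  # -> ~106.7 kW/t, >100 quoted

print(panels, sessions, round(compute_per_ton, 1))
```

All three derived values land on the paper's claims: ~16,000 panels, just over 7,900 concurrent inferences, and more than 100 kW of compute per launched metric ton.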
Context and significance
This design reframes spacecraft architecture for AI workloads by shifting radiator and PV support structures into the same structural plane as compute, dramatically increasing specific power relative to conventional spacecraft (claimed 500 W/kg vs. <100 W/kg). For practitioners, the paper offers a systems-level blueprint for on-orbit inference farms addressing long-context LLM workloads without depending solely on downlinking data. It intersects trends in edge inference, thermal-aware hardware design, and launch-capacity-driven economics.
What to watch
Engineering validation of vapor-chamber radiators at this scale, radiation-hardened/custom IC development, and demonstration of inter-panel high-bandwidth links will determine practical viability.
Scoring Rationale
The paper proposes a high-impact systems architecture that could reshape on-orbit AI deployment economics by boosting compute-per-launch-mass. Practical importance depends on engineering validation (thermal, radiation, custom ICs) and launch/assembly feasibility.