Inspiration

The global datacenter industry is projected to consume over 8% of the world's electricity by 2030. As organizations race to build compute infrastructure for AI workloads, site selection remains a manual, fragmented process — spreadsheets, consultants, and gut instinct. We asked ourselves: what if you could evaluate any point in the solar system as a datacenter site, with real climate science, AI-powered analysis, and blockchain-verified carbon records — all on a single interactive globe?

That question became Orbital Atlas: a multi-celestial datacenter feasibility platform that lets infrastructure planners explore Earth, the Moon, Mars, and 5,000+ real orbital satellites in one unified interface.


What It Does

Orbital Atlas is a full-stack platform where users can:

  • Explore datacenter locations across four celestial bodies on an interactive 3D globe with real-time satellite tracking
  • Analyze each site's feasibility using a unified 0–100 scoring engine (power, cooling, connectivity, resilience, cost)
  • Predict future carbon intensity using our custom-trained Ridge Regression model with scenario-based projections out to 2050
  • Chat with an agentic AI advisor (Claude) that can autonomously search locations, compare sites, and even generate Stripe payment links mid-conversation
  • Purchase detailed AI-generated construction blueprints ($299/site) covering power strategy, cooling design, network topology, staffing, and risk analysis
  • Certify carbon records on the Solana blockchain via memo-based transactions

The Novel AI Model: Hybrid Ridge Regression for CO$_2$ Intensity Prediction

The heart of Orbital Atlas is a client-side machine learning model that predicts localized grid carbon intensity — the grams of CO$_2$ emitted per kilowatt-hour of electricity at any given location.

Why We Built It

Publicly available carbon data is coarse: you get a single number per country or per broad electricity zone. But a datacenter in northern Norway (powered by hydroelectric) has a radically different carbon footprint than one in southern Poland (coal-heavy), even though both are in Europe. We needed sub-country granularity from country-level inputs.

The Architecture

We implemented a Ridge Regression model with L2 regularization ($\alpha = 2.0$), trained on 112 real-world samples of measured grid carbon intensity:

$$\hat{y} = \mathbf{w}^T \mathbf{x} + b, \quad \text{where } \mathbf{w} = \arg\min_{\mathbf{w}} \left( \|\mathbf{y} - X\mathbf{w}\|^2 + \alpha \|\mathbf{w}\|^2 \right)$$
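In the single-feature case the ridge minimizer has the closed form $w = \sum x_i y_i / (\sum x_i^2 + \alpha)$. A minimal scalar sketch of that normal equation (assuming centered data with no intercept; names are ours, not the production model's):

```typescript
// Scalar ridge regression: w = sum(x*y) / (sum(x^2) + alpha).
// Assumes centered data (no intercept term); illustrative only.
function ridgeFit1D(x: number[], y: number[], alpha: number): number {
  let xy = 0;
  let xx = 0;
  for (let i = 0; i < x.length; i++) {
    xy += x[i] * y[i];
    xx += x[i] * x[i];
  }
  return xy / (xx + alpha);
}

// alpha = 0 recovers ordinary least squares; alpha = 2.0 (the value we
// use) shrinks the weight toward zero to resist overfitting.
console.log(ridgeFit1D([1, 2, 3], [2, 4, 6], 0));   // 2 (exact OLS fit)
console.log(ridgeFit1D([1, 2, 3], [2, 4, 6], 2.0)); // 1.75 (shrunk)
```

The shrinkage is what makes Ridge safe on our tiny dataset: the penalty biases weights toward zero, trading a little training fit for much better generalization.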

The model uses 8 domain-engineered features derived from just two raw inputs (country CI and zone CI):

  • Country carbon intensity: national energy policy baseline
  • Zone carbon intensity: regional grid mix signal
  • $\sqrt{\text{zone\_ci}}$: diminishing returns at high intensities
  • $\text{zone\_ci} \times \text{country\_ci}$: interaction, local-vs-national divergence
  • $\text{country\_ci}^2$: non-linear national effects
  • Coal fraction in energy mix: dominant driver of high CI
  • Country-level CO$_2$ trend: decarbonization trajectory
  • $\text{trend} \times \text{country\_ci}$: trend impact scales with intensity
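The feature list above maps to a small builder function along these lines (a sketch; the field names and the coal/trend lookups are assumptions, not the production code):

```typescript
interface RawInputs {
  countryCi: number;    // national carbon intensity, gCO2/kWh
  zoneCi: number;       // electricity-zone carbon intensity, gCO2/kWh
  coalFraction: number; // share of coal in the energy mix, 0..1
  ciTrend: number;      // annual change in national CI, gCO2/kWh/yr
}

// Expand two CI readings plus mix/trend data into the 8 engineered features.
function buildFeatures(r: RawInputs): number[] {
  return [
    r.countryCi,             // national energy policy baseline
    r.zoneCi,                // regional grid mix signal
    Math.sqrt(r.zoneCi),     // diminishing returns at high intensities
    r.zoneCi * r.countryCi,  // local-vs-national divergence
    r.countryCi ** 2,        // non-linear national effects
    r.coalFraction,          // dominant driver of high CI
    r.ciTrend,               // decarbonization trajectory
    r.ciTrend * r.countryCi, // trend impact scales with intensity
  ];
}
```

Because every feature derives from data we can get for any country, the model generalizes to locations it has never seen.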

Performance

Using Leave-One-Out Cross-Validation across all 112 samples:

$$\text{MAE}_{\text{LOO}} = 50.1\ \text{gCO}_2\text{/kWh}, \quad R^2_{\text{LOO}} = 0.895$$

An $R^2$ of 0.895 means the model explains ~90% of the variance in local carbon intensity from coarse national data — a strong result for a hackathon model trained on limited data.
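Leave-One-Out CV fits the model $n$ times, each time holding out a single sample and scoring the prediction on it. A generic sketch of the procedure (the `fit` callback stands in for the real Ridge trainer):

```typescript
// A fit function trains on (X, y) and returns a predictor.
type Fit = (X: number[][], y: number[]) => (x: number[]) => number;

// Leave-One-Out cross-validation: train n models, each missing one
// sample, and evaluate every model on its held-out point.
function looCv(X: number[][], y: number[], fit: Fit) {
  const preds: number[] = [];
  for (let i = 0; i < X.length; i++) {
    const Xtr = X.filter((_, j) => j !== i);
    const ytr = y.filter((_, j) => j !== i);
    preds.push(fit(Xtr, ytr)(X[i]));
  }
  const mae = preds.reduce((s, p, i) => s + Math.abs(p - y[i]), 0) / y.length;
  const mean = y.reduce((s, v) => s + v, 0) / y.length;
  const ssRes = preds.reduce((s, p, i) => s + (p - y[i]) ** 2, 0);
  const ssTot = y.reduce((s, v) => s + (v - mean) ** 2, 0);
  return { mae, r2: 1 - ssRes / ssTot };
}
```

With only 112 samples, LOO is cheap (112 refits of a closed-form model) and wastes no data, which is why we preferred it over a fixed train/test split.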

Time-Series Scenario Projections

We extended the model with three scenario-based CO$_2$ trajectory projections:

  • Business as Usual (BAU): 1% annual coal decline, 1% clean energy growth
  • Net Zero 2050: 4% coal decline, 4% clean growth (Paris-aligned)
  • Accelerated Transition: 6% coal decline, 6% clean growth

The projection uses exponential decay on the coal fraction:

$$\text{coal}_t = \text{coal}_0 \cdot (1 - r_{\text{decline}})^t, \quad \text{CI}_t = \text{CI}_0 \cdot \left[1 - \left(1 - \text{coal}_t / \text{coal}_0\right) \cdot 0.8\right]$$

This runs entirely in the browser (~50ms per prediction), giving users real-time scrubbing through 2026–2050 carbon trajectories as they compare locations.
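The projection formulas translate directly to a few lines of code. A sketch mirroring the equations (the 0.8 factor encodes our assumption that roughly 80% of a grid's carbon intensity is attributable to its coal share):

```typescript
// Project carbon intensity `years` ahead under exponential coal decline.
// ci0: current intensity (gCO2/kWh); coal0: current coal fraction (0..1);
// declineRate: annual coal decline (e.g. 0.04 for the Net Zero scenario).
function projectCi(ci0: number, coal0: number, declineRate: number, years: number): number {
  if (coal0 === 0) return ci0; // no coal to phase out in this model
  const coalT = coal0 * (1 - declineRate) ** years;
  return ci0 * (1 - (1 - coalT / coal0) * 0.8);
}

// Scenario usage, e.g. Net Zero 2050 at 4% annual coal decline:
//   projectCi(ci, coal, 0.04, 2050 - 2026)
```

Since this is pure arithmetic, sweeping all 25 years for all three scenarios stays well under the ~50ms budget that keeps the timeline scrubber interactive.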

What Makes It Novel

  1. Domain-aware feature engineering — coal fraction and interaction terms capture physical relationships, not just statistical correlations
  2. Client-side execution — zero API latency; the trained model weights ship as a 2KB JSON file
  3. Scenario composability — users can compare BAU vs. Net Zero futures for any location, enabling forward-looking site decisions
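Shipping weights as JSON means inference is a single dot product at runtime. A sketch of the client-side path (the weight-file shape is an assumption):

```typescript
// Trained model weights as shipped in the ~2KB JSON file.
interface ModelWeights {
  w: number[]; // one coefficient per engineered feature
  b: number;   // intercept
}

// Inference is just w·x + b over the feature vector, so the browser can
// score a location with no network round-trip at all.
function predict(m: ModelWeights, features: number[]): number {
  return features.reduce((s, xi, i) => s + m.w[i] * xi, m.b);
}

// In the app the weight file would be fetched once at startup, e.g.:
//   const model: ModelWeights = await fetch("/model.json").then(r => r.json());
```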

The Agentic AI Framework

Our backend implements an agentic Claude integration using the Anthropic SDK's tool_runner pattern. Rather than simple prompt → response, our AI advisor operates in an autonomous loop with 8 callable tools:

User → Claude → [SearchLocations] → Claude → [CompareLocations] → Claude → [CreatePaymentLink] → Response

The AI can chain multiple tool calls to answer complex queries like "Find me the three greenest locations under $0.10/kWh and create a checkout link for the best one." This goes beyond typical chatbot integrations — the AI reasons, acts, and transacts.
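Stripped of SDK specifics, the control flow is a loop: the model either requests a tool (whose result is fed back into the conversation) or emits a final answer. This is a simplified mock of that loop, not the actual Anthropic `tool_runner` API:

```typescript
type ToolCall = { tool: string; input: unknown };
type ModelStep = { toolCall?: ToolCall; answer?: string };
// One model turn: either a tool request or a final answer.
type Model = (history: unknown[]) => ModelStep;

// Generic agentic loop: execute requested tools and append results to
// the history until the model produces an answer (with a step budget
// so a confused model cannot loop forever).
function runAgent(
  model: Model,
  tools: Record<string, (input: unknown) => unknown>,
  query: string
): string {
  const history: unknown[] = [query];
  for (let step = 0; step < 10; step++) {
    const next = model(history);
    if (next.answer !== undefined) return next.answer;
    const call = next.toolCall!;
    history.push(call, tools[call.tool](call.input));
  }
  throw new Error("tool loop exceeded step budget");
}
```

The real advisor's eight tools (search, compare, payment-link creation, and so on) plug into the `tools` map; the chaining behavior in the diagram above falls out of the loop for free.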


How We Built It

Tech Stack

  • Frontend: React 19, Vite, Three.js, react-globe.gl, Tailwind CSS, shadcn/ui
  • Backend: Ruby on Rails 7.1 (API mode), SQLite
  • AI: Anthropic Claude (Sonnet 4) with agentic tool_runner
  • ML: Custom Ridge Regression (TypeScript, client-side)
  • Payments: Stripe Checkout + Webhooks
  • Blockchain: Solana devnet (Rust/Axum microservice, memo transactions)
  • Satellite Data: CelesTrak TLE feeds + satellite.js SGP4 propagation

Architecture

The system is designed as three independent services:

  1. React SPA — 3D globe, ML predictions, satellite tracking, all client-side heavy lifting
  2. Rails API — Claude orchestration, Stripe payments, portfolio management, location database
  3. Rust Solana Service — Lightweight Axum server for memo-based blockchain minting

This separation let our 4-person team work in parallel without stepping on each other.


Challenges We Faced

  1. Real-Time Satellite Propagation at Scale

Rendering 5,000+ satellites with SGP4 orbital mechanics at 3-second refresh intervals pushed the browser to its limits. We had to carefully manage Three.js object pooling and batch position updates to avoid frame drops on the 3D globe. The TLE parsing alone (two-line element sets from CelesTrak) required handling dozens of edge cases in satellite classification (LEO/MEO/GEO/HEO based on mean motion thresholds).
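Classification hinges on two values parsed from each TLE: mean motion (revolutions per day) and eccentricity. A sketch with illustrative thresholds (the exact cutoffs in our classifier differ slightly):

```typescript
type OrbitClass = "LEO" | "MEO" | "GEO" | "HEO";

// Classify an orbit from TLE mean motion (rev/day) and eccentricity.
// Illustrative thresholds: LEO ~ orbital period under ~128 minutes,
// GEO ~ one revolution per sidereal day, HEO flagged by high eccentricity.
function classifyOrbit(meanMotion: number, eccentricity: number): OrbitClass {
  if (eccentricity > 0.25) return "HEO"; // e.g. Molniya-type orbits
  if (meanMotion >= 11.25) return "LEO";
  if (meanMotion >= 0.99 && meanMotion <= 1.01) return "GEO";
  return "MEO";
}

console.log(classifyOrbit(15.5, 0.0003)); // "LEO" (ISS-like)
console.log(classifyOrbit(2.0, 0.01));    // "MEO" (GPS-like)
```

The edge cases mentioned above live around these boundaries: decaying LEO satellites drift in mean motion, and near-geosynchronous objects hover just outside the GEO band.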

  2. Cross-Celestial Feasibility Scoring

How do you compare a datacenter in Iceland to one on the Moon? We built a unified scoring engine that normalizes wildly different physical parameters into comparable 0–100 scores across six dimensions. Lunar bases get bonuses for vacuum radiative cooling but penalties for 1.3-second communication latency. Mars sites suffer from dust storms reducing solar irradiance. Orbital platforms benefit from continuous solar power but face Van Allen belt radiation. Getting these weightings to produce intuitive, defensible rankings took significant iteration.
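The core idea is a weighted sum over normalized dimension scores. A sketch of that shape only; the weights and clamping here are placeholders, not the engine's real values:

```typescript
// Placeholder dimension weights (must sum to 1); the real engine tunes
// these and applies per-body bonuses/penalties before combining.
const WEIGHTS = {
  power: 0.25,
  cooling: 0.2,
  connectivity: 0.2,
  resilience: 0.2,
  cost: 0.15,
};

type Dimension = keyof typeof WEIGHTS;

const clamp01 = (v: number): number => Math.max(0, Math.min(1, v));

// Combine normalized 0-100 dimension scores into one overall 0-100 score.
function overallScore(dims: Record<Dimension, number>): number {
  let total = 0;
  for (const [dim, w] of Object.entries(WEIGHTS)) {
    total += w * clamp01(dims[dim as Dimension] / 100) * 100;
  }
  return Math.round(total);
}
```

The hard part was never this arithmetic: it was choosing normalizations so that a lunar base's vacuum-cooling bonus and its 1.3-second latency penalty land on the same 0-100 scale as Iceland's geothermal grid.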

  3. Training ML on Sparse Data

With only 112 training samples, overfitting was a real danger. We chose Ridge Regression specifically for its L2 regularization, and validated with Leave-One-Out CV (the most honest evaluation for small datasets). The breakthrough was feature engineering — adding coal fraction, squared terms, and interaction features boosted $R^2$ from 0.72 to 0.895 without adding a single training sample.

  4. Solana on Windows

Our blockchain developer (Person 4) hit immediate friction: the Solana CLI doesn't support Windows natively. We pivoted to a Rust Axum microservice using solana-sdk crates directly, with memo-based transactions instead of full program deployment. This gave us blockchain certification without the toolchain headaches.

  5. Making Claude Transactional

Getting the AI advisor to not just recommend but actually create payment links required careful tool design. The CreatePaymentLink tool generates a real Stripe checkout session, meaning a misfire costs money. We implemented guardrails in the system prompt and tool descriptions to ensure Claude only creates links when explicitly requested.
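In tool-calling setups the guardrail lives largely in the tool description the model reads. A sketch of how such a definition might look (the wording is ours, not the production prompt, and the schema shape follows the common Claude-style tool format):

```typescript
// Tool definition whose description carries the guardrail: the model
// reads this text when deciding whether the tool is appropriate.
const createPaymentLinkTool = {
  name: "CreatePaymentLink",
  description:
    "Create a real Stripe checkout link for a site blueprint ($299). " +
    "ONLY call this when the user has explicitly asked to purchase. " +
    "Never call it speculatively or to nudge an undecided user.",
  input_schema: {
    type: "object",
    properties: {
      locationId: { type: "string", description: "ID of the site to purchase" },
    },
    required: ["locationId"],
  },
};
```

Pairing an emphatic description with a matching rule in the system prompt gave us defense in depth: in testing, Claude consistently asked for confirmation before generating a link.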


What We Learned

  • Client-side ML is underrated. Shipping a 2KB model weight file eliminates an entire class of latency and infrastructure problems. For prediction tasks with modest feature spaces, browser-side inference is the right call.
  • Agentic AI needs boundaries. Tool-calling LLMs are powerful but need explicit constraints. Our system prompts evolved from "be helpful" to precisely scoped instructions with tool-specific guardrails.
  • Domain features beat more data. Our 112-sample model outperforms naive approaches on 10x the data because we encoded physical knowledge (coal fraction drives CI) into the feature space.
  • Hackathon architecture should be parallel-friendly. Three independent services meant four developers could ship simultaneously. Tight coupling would have been a bottleneck.

What Makes Orbital Atlas Stand Out

  1. First multi-celestial datacenter planning tool — nobody else lets you compare Iceland, the ISS, and a Martian lava tube in one interface
  2. Real science, not mockups — SGP4 orbital mechanics, Ridge Regression with LOO CV, actual Solana transactions, real Stripe payments
  3. AI that acts, not just talks — Claude autonomously chains tools and creates payment links
  4. Future-aware — scenario-based CO$_2$ projections let planners make decisions for 2030, not just today
  5. Full vertical integration — from 3D visualization to ML prediction to AI analysis to payment to blockchain certification, all in one platform

Built at HackEurope Dublin 2026 by a team of four.

Built With

  • anthropic-sdk
  • celestrak-tle
  • claude-api
  • climate-trace
  • docker
  • ember-global-electricity-review-2024
  • iea-electricity-review
  • lucide-react
  • natural-earth-geojson
  • plotly
  • react
  • react-globe.gl
  • ridge-regression
  • ruby
  • ruby-on-rails
  • rust
  • sgp4-orbital-propagation
  • solana
  • solana-devnet
  • solana-web3.js
  • sqlite
  • stripe
  • tailwind-css
  • three.js
  • typescript
  • vite
  • webgl