By: Doug Mohney, Editor in Chief, Fiber Broadband Association
Published: December 12, 2023
The concept of increasing on-orbit computing resources is not a new one, with startups such as Blue Origin and OrbitsEdge, and name brands like HP, IBM, and Microsoft floating different ideas and concepts. As latency increases and bandwidth decreases when spacecraft move farther away from Earth, the availability and advantages of cheap, fast, and plentiful compute power fade away, leading inevitably to servers and data centers that travel to the Moon and beyond. Probes and explorers will need responsive resources to collect, store, and analyze data for scientific and commercial purposes, as well as to operate the complex systems necessary for in situ resource collection and processing.
Building applications that generate a lot of raw data requires on-board processing or a way to offload all of the data to another platform. If you are offloading data via RF or optical links to another platform, it's easier to send it to a node on an in-orbit network, which can then move it to the ground for cheap and plentiful processing and archiving.
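The offload decision above is ultimately a bandwidth arithmetic problem. The sketch below compares how long it takes to move one pass of raw sensor data over an RF downlink versus an optical inter-satellite link; every figure (the 40 Gbit capture size, the 0.5 Gbit/s X-band rate, the 10 Gbit/s optical rate) is an illustrative assumption, not a measured or quoted number.

```python
# Back-of-envelope comparison: offloading raw data over RF vs. optical.
# All rates and data volumes below are illustrative assumptions only.

def transfer_time_s(data_bits: float, link_bps: float) -> float:
    """Seconds needed to move a data set over a link of the given rate."""
    return data_bits / link_bps

GBIT = 1e9
raw_capture = 40 * GBIT      # assumed raw data from one sensor pass
rf_downlink = 0.5 * GBIT     # assumed X-band direct-to-ground rate
optical_isl = 10 * GBIT      # assumed optical inter-satellite link rate

print(f"RF downlink:  {transfer_time_s(raw_capture, rf_downlink):.0f} s")
print(f"Optical ISL:  {transfer_time_s(raw_capture, optical_isl):.0f} s")
```

Under these assumptions the optical relay moves the same capture roughly 20x faster, which is why routing through an in-orbit network node to the ground can beat waiting for a direct downlink window.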
That’s not to say edge computing won’t be part of the equation for both crewed and uncrewed LEO platforms. There have certainly been many demonstrations of in-orbit compute on the International Space Station combining on-site servers with ground-based cloud computing to explore faster and more efficient solutions for weather modeling, image analysis, and life sciences applications such as medical imaging and DNA sequencing. Additive manufacturing is also something that can’t be done efficiently at the end of a broadband connection, especially if multiple iterations by humans are necessary to test and refine printed items before they are put into use.
For example, a SAR pass generates tens of megabytes of raw data that need to be processed into a usable image. The in-orbit compute crowd suggests it is faster to transfer the data set to another satellite for processing, creating a usable product that can be delivered from orbit to an end-user on the ground. But is this arrangement economically practical or useful? Shaving a few hundred milliseconds to several seconds off the delivery of a usable image makes sense only in time-critical applications, and only national security users and possibly high-frequency traders would be willing to invest in that capability.
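The "several seconds" figure can be sanity-checked with simple arithmetic. The sketch below compares end-to-end latency for delivering a usable SAR image two ways: processing in orbit and downlinking only the small finished product, versus downlinking the raw data and processing on the ground. The 50 MB raw size tracks the "tens of megabytes" figure above; all link rates and processing times are illustrative assumptions.

```python
# Latency comparison for delivering a usable SAR image.
# All rates and processing times are illustrative assumptions.

def latency_in_orbit_s(raw_mb, product_mb, isl_mbps, downlink_mbps, proc_s):
    """Relay raw data to a compute satellite, process, downlink the product."""
    return raw_mb * 8 / isl_mbps + proc_s + product_mb * 8 / downlink_mbps

def latency_on_ground_s(raw_mb, downlink_mbps, proc_s):
    """Downlink the raw data, then process in a terrestrial data center."""
    return raw_mb * 8 / downlink_mbps + proc_s

raw_mb, product_mb = 50.0, 5.0   # assumed raw capture and finished image sizes

in_orbit = latency_in_orbit_s(raw_mb, product_mb,
                              isl_mbps=1000, downlink_mbps=100, proc_s=2.0)
on_ground = latency_on_ground_s(raw_mb, downlink_mbps=100, proc_s=1.0)

print(f"In-orbit processing: {in_orbit:.1f} s")
print(f"Ground processing:   {on_ground:.1f} s")
print(f"Savings:             {on_ground - in_orbit:.1f} s")
```

Under these assumptions the in-orbit path saves on the order of two seconds, which lines up with the paragraph's point: a real but small margin that only the most time-critical users would pay for.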
Let’s also talk about raw data’s value for a moment. Where does it go once it is processed? The original files need to be stored for future use, both to document how derivative works were created and to support change-over-time applications. Does this original data get shipped off the collecting platform to the ground, as is traditionally done, or does it go from the server-in-the-sky platform down to a high-speed ground station, either directly or via an in-orbit high-speed optical network?
Some have suggested that orbiting LEO server satellites could serve as accessible resources for areas with no locally available data centers, but this runs into two issues. First, you’re back to the simpler solution of a high-speed satellite broadband network backhauling to a larger, essentially unlimited data center for crunching numbers. SpaceX Starlink, OneWeb, and others are here to stay, and if they can’t sustain their operations, there’s no way flying servers work from a business perspective.
Second, if high-speed broadband backhaul is not practical due to issues of data sovereignty, a constellation of third-party servers 200 miles overhead doesn’t solve those problems, especially if e-commerce transactions and medical records are in play. If there’s going to be true benefit from a data center in space, proponents will have to come up with better and simpler LEO use cases or refocus their efforts on the ways and means of deploying and proliferating such resources beyond Earth orbit.