Are we making spacecraft too autonomous?

Software has never played a more critical role in spaceflight. It has made missions safer and far more efficient, allowing a spacecraft to adjust automatically to changing conditions. According to Darrel Raines, a NASA engineer leading software development for the Orion deep space capsule, autonomy is particularly key in regimes with "critical response time," like the ascent of a rocket after liftoff, when a problem might require initiating an abort sequence in a matter of seconds. Or in instances where the crew might be incapacitated for some reason. 

And increased autonomy is practically a necessity for making some forms of spaceflight work at all. Ad Astra is a Houston-based company that's looking to make plasma rocket propulsion technology viable. The experimental engine uses plasma made from argon gas, which is heated with electromagnetic waves. A "tuning" process overseen by the system's software automatically finds the optimal frequencies for this heating. The engine comes to full power in only a few milliseconds. "There's no way for a human to respond to something like that in time," says CEO Franklin Chang Díaz, a former astronaut who flew on several space shuttle missions from 1986 to 2002. Algorithms in the control system are used to recognize changing conditions in the rocket as it moves through the startup sequence, and to act accordingly. "We wouldn't be able to do any of this well without software," he says.
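A minimal sketch of what an automatic tuning loop like this might look like: sweep candidate drive frequencies, measure how much power the plasma absorbs at each, and lock onto the peak. The plasma model, function names, and numbers here are illustrative assumptions, not Ad Astra's actual code.

```python
# Hypothetical frequency-tuning sketch. The absorption curve and the
# assumed resonance at 13.56 MHz are stand-ins for real sensor data.

def absorbed_power(freq_mhz: float) -> float:
    """Stand-in for a sensor reading: wave power absorbed by the
    plasma at a given drive frequency (peaks near resonance)."""
    resonance = 13.56  # assumed resonant frequency, in MHz
    return 1.0 / (1.0 + (freq_mhz - resonance) ** 2)

def tune(lo: float, hi: float, steps: int = 200) -> float:
    """Sweep the band and return the frequency with peak absorption."""
    best_f, best_p = lo, float("-inf")
    for i in range(steps + 1):
        f = lo + (hi - lo) * i / steps
        p = absorbed_power(f)
        if p > best_p:
            best_f, best_p = f, p
    return best_f

print(round(tune(10.0, 17.0), 2))  # settles near the assumed resonance
```

A real controller would run a loop like this in milliseconds and re-tune continuously as conditions in the rocket change, which is exactly the response time no human operator could match.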

But overrelying on software and autonomous systems in spaceflight creates new opportunities for problems to arise. That's especially true for many of the space industry's new contenders, who aren't necessarily used to the sort of aggressive and comprehensive testing needed to weed out software issues, and who are still trying to strike the right balance between automation and manual control.

These days, a couple of errors in more than a million lines of code can spell the difference between mission success and mission failure. We saw that late last year, when Boeing's Starliner capsule (the other vehicle NASA is counting on to send American astronauts into space) failed to reach the ISS because of a glitch in its internal timer. A human pilot could have overridden the glitch, which ended up burning Starliner's thrusters prematurely. NASA administrator Jim Bridenstine remarked soon after Starliner's problems arose: "Had we had an astronaut on board, we very well may be at the International Space Station right now." 

But it was later revealed that several other errors in the software weren't caught before launch, including one that could have led to the destruction of the spacecraft. That, too, was something human crew members could easily have overridden.

Boeing is certainly no stranger to building and testing spaceflight technologies, so it was a surprise to see the company fail to catch these issues before the Starliner test flight. "Software defects, particularly in complex spacecraft code, are not unexpected," NASA said when the second glitch was made public. "However, there were numerous instances where the Boeing software quality processes either should have or could have uncovered the defects." Boeing declined a request for comment.

According to Luke Schreier, the vice president and general manager of aerospace at NI (formerly National Instruments), software issues are inevitable, whether in autonomous vehicles or in spacecraft. "That's just life," he says. The main solution is to test aggressively ahead of time to find those issues and fix them: "You have to have a really rigorous software testing program to find those mistakes that will inevitably be there."

Enter AI

Space, however, is a uniquely difficult environment to test for. The conditions a spacecraft will encounter aren't easy to emulate on the ground. While an autonomous vehicle can be taken out of the simulator and eased into lighter real-world conditions to refine the software little by little, you can't do the same for a launch vehicle. Launch, spaceflight, and a return to Earth either happen or they don't; there is no "light" version.

This, says Schreier, is why AI is such a big deal in spaceflight nowadays: you can develop an autonomous system capable of anticipating those conditions, rather than requiring every condition to be learned in a specific simulation. "You couldn't possibly simulate on your own all the corner cases of the new hardware you're designing," he says. 

So for some groups, testing software isn't just a matter of finding and fixing errors in the code; it's also a way to train AI-driven software. Take Virgin Orbit, which recently tried to send its LauncherOne vehicle into space for the first time. The company worked with NI to develop a test bench that looped all of the vehicle's sensors and avionics together with the software meant to run a mission to orbit (down to the exact lengths of wiring used within the vehicle). By the time LauncherOne was ready to fly, it believed it had already been to space thousands of times thanks to the testing, and it had already faced many different types of scenarios.
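The core idea of a test bench like that can be sketched very simply: replay many scenarios through the flight software and check that its decisions match what the mission design expects. Everything below, from the sensor names to the thresholds, is an illustrative assumption, not the Virgin Orbit or NI system.

```python
# Toy hardware-in-the-loop-style harness: a stand-in "flight logic"
# function is exercised against a library of recorded scenarios.

def flight_logic(sensors: dict) -> str:
    """Stand-in for flight software: decide an action from sensor data."""
    if sensors["chamber_pressure"] < 0.2:   # assumed abort threshold
        return "abort"
    if sensors["altitude_km"] >= 100.0:     # assumed end of powered flight
        return "coast"
    return "burn"

SCENARIOS = {
    "nominal_ascent":  ({"chamber_pressure": 0.9, "altitude_km": 42.0},  "burn"),
    "engine_failure":  ({"chamber_pressure": 0.1, "altitude_km": 42.0},  "abort"),
    "orbit_insertion": ({"chamber_pressure": 0.9, "altitude_km": 110.0}, "coast"),
}

def run_bench() -> list:
    """Replay every scenario and return the names of any that fail."""
    return [name for name, (sensors, expected) in SCENARIOS.items()
            if flight_logic(sensors) != expected]

print(run_bench())  # an empty list means every scenario passed
```

A real bench runs thousands of such scenarios against the actual avionics hardware rather than a toy function, which is what lets the vehicle "believe" it has flown before it ever leaves the ground.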

Of course, LauncherOne's first test flight ended in failure, for reasons that have still not been disclosed. If it was due to software limitations, the attempt is yet another sign that there's a limit to how much an AI can be trained to handle real-world conditions. 

Raines adds that in contrast to the slower approach NASA takes for testing, private companies are able to move much more rapidly. For some, like SpaceX, this works out well. For others, like Boeing, it can lead to surprising hiccups. 

Ultimately, "the worst thing you can do is make something fully manual or fully autonomous," says Nathan Uitenbroek, another NASA engineer working on Orion's software development. Humans have to be able to intervene if the software glitches or if the computer's memory is corrupted by an unanticipated event (like a blast of cosmic rays). But they also rely on the software to tell them when other problems arise. 

NASA is used to figuring out this balance, and it has redundancy built into its crewed vehicles. The space shuttle ran on multiple computers using the same software; if one had a problem, the others could take over. A separate computer ran entirely different software, so it could take over the whole spacecraft if a systemic glitch was affecting the others. Raines and Uitenbroek say the same redundancy is used on Orion, which also includes a layer of automatic function that bypasses the software entirely for critical functions like parachute release. 
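One common way to get this kind of redundancy is majority voting: several computers compute the same command, and a single faulty unit is simply outvoted. The sketch below is a toy model of that idea under assumed names, not NASA's actual implementation.

```python
# Illustrative majority-vote redundancy: the outputs of several
# redundant flight computers are compared, and the majority answer
# wins; with no majority, control falls back to the backup computer.

from collections import Counter

def vote(outputs: list):
    """Return the majority answer from redundant flight computers."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority: hand control to backup computer")
    return winner

# Three computers agree; one has been corrupted and disagrees:
print(vote(["fire_thruster", "fire_thruster", "fire_thruster", "hold"]))
```

The separate backup machine running entirely different software guards against the failure voting can't catch: a bug shared by every copy of the primary software.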

On the Crew Dragon, there are instances where astronauts can manually initiate abort sequences, and where they can override software on the basis of new inputs. But the design of these vehicles means it's more difficult now for a human to take complete control. The touch-screen console is still connected to the spacecraft's software, and you can't just bypass it entirely when you want to take over the spacecraft, even in an emergency. 

There's no consensus on how much further the human role in spaceflight will—or should—shrink. Uitenbroek thinks trying to develop software that can account for every possible contingency is simply impractical, especially when you have deadlines to meet. 

Chang Díaz disagrees, saying the world is shifting “to a point where eventually the human is going to be taken out of the equation.” 

Which approach wins out may depend on the level of success achieved by the different parties sending people into space. NASA has no intention of taking humans out of the equation, but if commercial companies find they have an easier time minimizing the human pilot's role and letting the AI take charge, then touch screens and pilotless flight to the ISS are just a taste of what's to come.