Missionary Misfits: Meet A Former Fighter Pilot, Current Autonomous Vehicles Road Warrior

Missy Cummings, one of the Navy’s first female fighter pilots.


As a tech investor, I get to talk to some intriguing people. But there’s one group that I think is really special. These are people who are supremely talented, endlessly curious, fanatically committed, and who are unconstrained by disciplinary boundaries. This combination of traits makes them uniquely well-suited to solve the world’s most critical and intractable problems. I’ve dubbed these individuals “missionary misfits,” and every few months, I’ll introduce readers to one of these extraordinary humans.

The first missionary misfit I’d like you to meet is Missy Cummings. Missy is a professor in Duke’s School of Engineering and the Duke Institute of Brain Sciences; she’s also the director of the Humans and Autonomy Laboratory and Duke Robotics. Her research centers on autonomous vehicles and the policy, ethical, and social impact of autonomous technologies. 

Missy was also one of the Navy’s first female fighter pilots, a feat more, ahem, maverick than anything anyone pulls in Top Gun. But that’s what makes her a missionary misfit: if she feels it’s important, Missy isn’t afraid to defy convention and punch through the limitations that others try to impose on her. In the excerpts from our conversation below (edited for length and clarity), we spoke about her experience challenging gender norms in the military, asking hard questions about the fundamentals of autonomous transportation, and taking public jabs at Elon Musk. After reading, I think it’ll be abundantly clear why the ringtone I assigned her is Tom Petty’s “I Won’t Back Down.” 

In your memoir, Hornet’s Nest, you recount that being one of the first female fighter pilots was not the most welcoming of experiences.

Any group of people at the vanguard of cultural change is generally not well accepted. We had a pretty rough time as female fighter pilots. The guys didn’t want to share the treehouse. 

Are there moments of hostile reception that stand out?

The first day I arrived at the squadron, there was a golf tournament. At the time, I was a middle grade officer. There was a young female officer with me. When we checked in, we were told to head down to the tournament. So, we walked down and they threw a couple of Hooters uniforms at us. They told us we could drive the beer carts. I threw the Hooters outfits back at them. I wasn’t going to put up with it. I didn’t feel like I needed to impress them and I think they resented it. They still do. 

How did these experiences inform what you chose to do post-Navy? 

I have that type of personality that will always challenge the status quo, but I was miserable. I really loved flying, but I had no friends or life. After flying for three years, I knew I couldn’t sustain my life like that. It was also obvious to me that there were a lot of problems in aviation in terms of aircraft design. During the three years that I flew fighters, 36 people died due to accidents. So, I decided to go into academia and got my PhD. I loved aviation and I still do, but I’ve moved on to driverless cars. It takes a person with a strong personality to be able to stand up to Silicon Valley and say “What you’re doing is wrong.” 

Do you have a recent example of a time you’ve really pushed the perimeter?

I do it all the time with Elon Musk. I go after him regarding Autopilot. I love what he’s doing at SpaceX and I love electric cars. I’m not bothered by him as a person, I just really want him to reconsider what he’s doing with Autopilot because I think it’s exceedingly dangerous. 

Are you excited or unnerved by the fact that most autonomous innovation is happening in the commercial domain rather than the military? 

A little bit of both. I think it’s great when technology transitions away from the military into the commercial space. It’s good because it gives technology the chance to mature in a way it wouldn’t had it stayed inside the military. But the flip side is that the military, government, and Department of Defense contractors can’t afford to hire good talent to create capable and safe autonomous systems.

Agreed. I’ve always thought of Palantir as just a labor arbitrage opportunity that enables large government contractors to indirectly hire the best and brightest software developers, data scientists and other contractors, under the guise of working for a Silicon Valley startup. Do you think that’s a good thing or just another sad sign that organizations that desperately need it are unable to attract strong talent?

I think Palantir is misunderstood. It’s a small blow to the talent market, comparatively speaking. But there isn’t an easy answer. The government’s going to have to start changing its position on some things. Pay, for one. But sometimes, the problem isn’t with the available labor pool. Let’s take the example of another hot topic, facial recognition. From a privacy perspective, I don’t really have a problem with facial recognition. But the technology that government agencies are using is terrible. That’s actually the bigger problem, not recruiting.

Do you worry that the U.S. is falling behind with respect to automation and AI? Take China, for instance. They are mandating adoption of technologies like facial recognition, giving them a chance to scale. 

I’m actually not worried about that at all. I’m not worried that China or Russia will pull ahead of us in AI technology. Everyone is really bad at it right now. I think the thing that I do worry about is the “Space Race” to get autonomous cars on the road. We’re all using shortcuts for a lot of critical elements. The number one problem is people don’t know the limitations of this technology. And authoritarian regimes usually foster an environment that discourages asking questions, which is dangerous.

Right. Building these frontier technologies is really, really hard. Personally, I think we’ll have flying cars before we have fully autonomous cars. What do you think will be the biggest technological challenges that are keeping us from riding in robotaxis? 

The number one technical challenge is perception—reliable perception under the wide-ranging, broad set of conditions that these vehicles have to operate in. Number two is understanding how to test that these systems are working correctly. If I hear one more person say they can test these systems in simulation, my head is going to explode. I don’t care how good you think your simulation is, Waymo, I don’t care how high-resolution your images of the world are—a million simulation miles equals one real mile. Simulation miles can help you target the low-hanging fruit, the relatively easy problems. But if you don’t have a system that works from first principles, you can’t use simulation to fix specific problems because you don’t know what those problems are.

If perception is broken, what do you believe will solve those problems? What would it take to get you into a robotaxi? 

I think the solution lies at the sensor level. This isn’t really an algorithm problem. For example: LIDAR doesn’t work with moisture in the air, and radar has problems with static vehicles. I don’t think we’re going to be able to resolve some of the key issues until the limitations of the sensors get worked out. There’s a lot of work to be done on software development, but before we even begin to approach AGI [artificial general intelligence], we need to understand that it’s going to take more infrastructure to support it.

Definitely. And some of these technologies only make sense if every car is autonomous. So, that adds another layer of complexity that’s truly unpredictable. Does this play into your beef with Tesla and Elon Musk?

Yeah, it’s extremely dangerous. It lulls drivers into a false sense of security. They get very complacent. An individual Tesla will not perform the same way across repeated tests. And certainly a fleet of Teslas will all perform differently, even though they’re the same make and model. This starts to raise questions about repeatability. If you can’t get a single Tesla to repeat its behavior in the same conditions over and over again, then why are we letting these cars, in theory, engage in automated driving? We know that it sets people up to be complacent and the odds of something tragic happening are much higher.

While I agree with you that it’s dangerous, I’m also encouraged by the fact that we’re having conversations we would never have had had Tesla not released Autopilot. 

I think I would be encouraged if the company had a more durable monitoring system. But the reality is, it’s killing people in some pretty horrific ways. Tesla makes you think that the car is more capable than it is. 

But do you agree that we’re pushing the technology forward faster as a result of Tesla?

Oh yeah, I love Tesla as a company. Electric cars are the future. But, the hubris around what the automation can do, especially a car that does not have LIDAR on it, worries me. Any car that hangs its hat on a computer vision perception system is terribly flawed. 

What are your most gifted students at Duke planning to do when they graduate? 

They’re all heading to Silicon Valley! 

[Laughter] And your job is to strap them to the mast and make sure they don’t go?

No, my job is to make sure they have the right weapons to go out there.
