What’s wrong with software development? When Testing in Production is Unacceptable


Testing in production is, in almost all cases, unacceptable. However, here are some headlines about companies testing in production where the stakes included human lives.

Uber.

A company valued by some at $95 billion killed a homeless woman because it was testing in production.

Tesla.

A company valued at $50.73 billion seems to have been at least in part responsible for the death of a driver using its Autopilot system. This is particularly striking given the CEO's public crusade against AI taking over.

In both cases, the tech hadn't been fully tested before being unleashed under production conditions.

The proof? The people who were killed are the proof.

In both cases, a junior-level engineer could have crafted test cases on a test track to assess whether this would happen. In the Uber case, the killing of a homeless woman, you simply recreate that exact road condition on a test track and have a fake pedestrian cross the road. In the Tesla case, you have a driver drive a car on a test track. If needed, you have the driver remote in from a safe steering wheel inside the Tesla building so that, should the driver get distracted or complacent, he doesn't die if something goes wrong.

Among autonomous vehicle and autopilot experts, there is an understanding that a feature like Autopilot makes an implicit claim of safety, and that we humans behave as if that claim were true. Had Tesla understood that such features invite the wrong assumptions about human/self-driving behavior, the driver wouldn't have died while testing Autopilot. His car would have crashed on a test track, and he'd have been jolted to awareness in a safe location, where he was driving the car through a simulator attached remotely to it.

Now, tell me how expensive the test suite would be on test tracks. Tell me.

I'll tell you this: it would have cost far more to put together a full test suite on test tracks than the undisclosed sum Uber paid the family of the homeless woman it killed. You're right. It would be very expensive, indeed.

However, no human lives would have been lost. To me, a human life is worth more than the cost of building the test facilities.

In my mind, you could create a company solely to road-test autonomous vehicle and autopilot capabilities. This company would put together challenging road situations of all types, test how human drivers react to different autonomous vehicle and autopilot situations, and serve as one of the steps required before unleashing an autonomous or autopilot-equipped car on public roads. Its services would be expensive, and it would be audited by government public-safety groups and private auditing firms, just as my manufacturing firm was audited before putting invoicing code into production.

I've seen people test in production before, and it is generally ugly. It usually happens because the company refused to pay for adequate testing in lower environments. That is certainly the case for Uber and Tesla, companies valued in the double-digit billions.

We as humans should extend Musk's suspicion of AI threatening humans to the race to put self-driving cars on our public streets. Musk famously said that "AI is a greater risk than North Korea," and posted the warning with a photo of a poster that read "in the end, the machines will win." The machines win when companies choose a tech race over human lives, and America becomes increasingly feudalistic, with tech-company oligarchs trampling the lower-income populations their inadequately regulated tech endangers. It is precisely the dystopian future anticipated by early cyberpunk.

Tesla's statement on the situation exposes a fallacy of testing, as well as Musk's contradictory thinking about AI (AI in cars: good; other AI: bad):

“In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.”

Not every test case is the same. In "the 320 million miles" Tesla refers to, were the drivers simply backing out of their driveways and driving around a quiet neighborhood block? Or were those 320 million miles the equivalent of a full regression suite and end-to-end user acceptance testing, modeling all of the edge-case conditions of real-life driving?

The question is rhetorical. The test suite didn't contain enough test cases to cover actual production conditions. While I don't literally believe Tesla logged 320 million miles of backing out of driveways and cruising around quiet neighborhood blocks, they didn't test enough to prevent a death on Autopilot.
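To make that concrete, here is a back-of-the-envelope sketch in Python. The two quoted per-mile figures come from Tesla's statement above; every other number is a made-up assumption, purely for illustration. It shows where the "3.7 times" ratio comes from, and how a fleet whose miles skew toward easy conditions can look safer per mile even if its technology is no safer at all:

```python
# Back-of-the-envelope check of Tesla's math, plus an illustration of why
# "fatalities per mile" comparisons mislead when the miles differ in kind.
# Only the two quoted figures come from Tesla's statement; everything else
# here is an assumed number for illustration.

ALL_VEHICLES_MILES_PER_FATALITY = 86e6    # quoted: all vehicles, all makers
AUTOPILOT_MILES_PER_FATALITY = 320e6      # quoted: Autopilot-equipped Teslas

# Tesla's "3.7 times less likely" is just the ratio of the two figures.
print(AUTOPILOT_MILES_PER_FATALITY / ALL_VEHICLES_MILES_PER_FATALITY)  # ~3.7

# Now suppose (hypothetically) that "easy" miles -- quiet neighborhoods,
# clear highways -- are far safer per mile than "hard" edge-case miles,
# and that one fleet's logged miles skew heavily toward the easy kind.
FATALITIES_PER_EASY_MILE = 1 / 500e6   # assumed
FATALITIES_PER_HARD_MILE = 1 / 40e6    # assumed

def blended_rate(easy_fraction: float) -> float:
    """Fatalities per mile for a fleet with the given share of easy miles."""
    return (easy_fraction * FATALITIES_PER_EASY_MILE
            + (1 - easy_fraction) * FATALITIES_PER_HARD_MILE)

# A fleet logging 95% easy miles, compared with an average fleet at 50%:
print(blended_rate(0.50) / blended_rate(0.95))  # ~4.3x "safer" from mix alone
```

Under those assumed numbers, the mile mix alone makes the skewed fleet look roughly four times safer per mile, with zero difference in the underlying technology. That is the rhetorical question above, made concrete: a raw miles-per-fatality ratio tells you nothing unless the miles cover the same conditions.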

Tell me again how expensive that would be.

The testing wasn't done, and a driver died. Tesla's blaming it on the driver is also a red herring. Part of testing is understanding how users will actually use the tech in real-life situations, not handing them the tech and assuming they'll use it just like your testers did. And this happened while a Roadster was drifting around in space, the latest in a line of space junk put up there with no citizen ratification, potentially causing problems for astronomers.

Ironically, the Roadster is playing Bowie's Space Oddity, whose subject is an autonomous failure killing a human. Just check out these lyrics: a very bad choice by the marketing department, indeed.

And I think my spaceship knows which way to go
Tell my wife I love her very much she knows
Ground Control to Major Tom
Your circuit’s dead, there’s something wrong

Please note: I want self-driving cars, like, yesterday. I just want appropriate testing done first, and that takes money. It takes money that could employ an army of testers, engineers, and software developers, and it could very well help revive the lower-income communities currently exposed to self-driving cars. Those communities could benefit from that employment rather than be killed by tech that fails for lack of it.

America has the talent to do this; our companies just need to pay for it, and our universities could teach testing as a major. We need to retool the people whose jobs are being displaced by AI, machine learning, and advanced tech currently out of their career reach, so that they can test the technology literally disrupting their very human lives.

In other words, I want a testing revolution in America, one that hires armies of test engineers, engineers, and user-centered-design engineers commensurate with the progress companies aspire to. I've seen even products that don't kill people unleashed while everyone turned a blind eye to bad testing, incomplete testing, or inadequate testing budgets.

I’d expect more from the United States regarding the safety of its citizens.

We have driving fatalities every day. But driving fatalities linked to self-driving cars and autopilot are qualitatively different: they shape public sentiment, and they raise the share of driving fatalities in which the fault is non-human, with multi-billion-dollar firms killing working-class and homeless people.

Toyota and other firms linked to mechanical failures were rightfully taken to task for their engineering failures, though perhaps not enough.

As the percentage of non-human fault is amped up by technocrats anxious to win this technological race at any cost except the cost of testing, we humans need to change the traffic light from green to yellow.

Listen to Musk himself when asked, "What can we do about this?"

I don’t know… but there are some scary outcomes… and we should try to make sure that the outcomes are good and not bad.

One thing is certain: we can’t respond in autopilot mode. Multi-billion dollar companies are willing to unleash AI without budgeting enough for testing, and our lives and jobs are on the line because of it.

We complacently accepted the ego-stroking launch of a Roadster into space.

We can't answer mega-billionaire corporations and their murderous AI toys with the final line of Bowie's Space Oddity:

“And there’s nothing I can do.”

Thanks for reading! When I see applause for a piece of mine, I want to write more pieces! Please applaud if you appreciate people writing about topics like this.

If you enjoyed this piece, you might consider reading some of the other stories I wrote for this series. Thanks!

Disclaimer: I am a senior software professional with experience as project lead, team lead, and lead engineer in the greater Boston area. I am actively seeking a job while currently fully employed. The world of software development can be an ugly place, and I hope to expose some of the more bewildering and uncomfortable corners in the hopes of improving the industry and the craft. Out of respect for current and former employers, I will keep the names of employers and/or clients scrubbed and anonymized as best as possible in these reports.
