My bipolar AI manager just fired me!

Companies are resorting to automating their human resources operations, banks are approving loans with the help of computerised systems, and courts are relying on algorithms to grant parole


The pandemic brought chaos all over the globe and spared no one. One of those harshly affected was a school bus driver working in a small rural town. When the number of infected people began to rise, the school shut, and what had seemed like a steady job suddenly fizzled into nothingness. Essentially, an unexpected series of events changed her life overnight. Having children dependent on her complicated her situation by several orders of magnitude, and the problem quickly became one of meeting basic needs: how to put food on the table?

Luckily for her, the pandemic also opened new opportunities. Since people were spending more time at home and could not go out, they desperately needed deliveries, and the small-goods delivery business boomed overnight. She had already made a few delivery runs before the pandemic as a part-time gig, but it quickly became her only source of income after she lost her main job.

The task was not easy: racing around to deliver thousands of packages within a limited time frame. But the worst part was not the workload; it was the Artificial Intelligence (AI) manager.

The AI manager had no face. It was not someone you could crack a joke with, get angry at or raise concerns with. The manager app became the main communication channel; it could track the vehicle’s movement and sometimes also demand impossible feats. These demanding requirements arise from the fact that some companies make very bold promises to their customers: rather than making clients wait, they offer same-day deliveries. So the algorithm monitors whether drivers reached the delivery station, whether they completed the route within the predefined time window, whether they left the package on the porch hidden from thieves, and so on. The algorithm scans the incoming data, analyses it and decides whether a driver gets more routes or is deactivated. As simple as flipping a switch, on or off.

But the algorithm does not seem to give much weight to factors beyond a driver’s control, like driving through kilometres of winding dirt roads in the snow or waiting an hour to retrieve a package because the delivery station is overflowing with other drivers. These issues and many others throw drivers behind schedule, dragging down their delivery ratings.
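To make that decision process concrete, here is a deliberately toy sketch in Python of the kind of rule such a system might apply. The tiers, thresholds and field names are my own assumptions, not the company’s actual code; the only point it is meant to illustrate is what a rule like this cannot see, namely the reason a route was missed.

```python
from dataclasses import dataclass


@dataclass
class Delivery:
    on_time: bool          # completed within the predefined time window
    safely_dropped: bool   # package left on the porch, hidden from thieves
    abandoned: bool        # route not completed, for whatever reason


def rate_driver(history: list[Delivery]) -> str:
    """Collapse a driver's record into a single tier.

    Note what is missing: snowed-in dirt roads, queues at the delivery
    station, punctures. The rule only sees the outcome, never the cause.
    """
    recent = history[-20:]
    if not recent:
        return "at-risk"
    # Any recently abandoned route drops the tier outright.
    if any(d.abandoned for d in recent):
        return "at-risk"
    score = sum(d.on_time and d.safely_dropped for d in recent) / len(recent)
    if score >= 0.95:
        return "great"
    if score >= 0.80:
        return "fair"
    return "at-risk"


def should_deactivate(tier: str) -> bool:
    # "As simple as flipping a switch": one tier maps straight to blocking the app.
    return tier == "at-risk"


if __name__ == "__main__":
    record = [Delivery(True, True, False)] * 19 + [Delivery(False, True, True)]
    tier = rate_driver(record)
    print(tier, should_deactivate(tier))  # -> "at-risk" True, after a single bad day
```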

In one particular case, the driver had a puncture. When she reported her situation, the company asked her to return the package, which she did, even though the tyre was almost flat. Her rating fell from “great” to “at-risk” almost immediately because she had technically abandoned her delivery. Following the incident, she did receive emails from the company reassuring her that she was still one of their best drivers. Most probably, it was just a bipolar AI trying hard to be empathetic, because the very next day the algorithm re-evaluated her score and coldly terminated her by simply blocking her app.

She was stunned! She had delivered more than 8,000 packages and was rated one of the best drivers, and because of a flat tyre, they fired her on the spot! Luckily, the system provides for an appeal, so she filed one. Once again, the empathetic bipolar AI sent her an email a few days later, telling her that it was sorry for the delays and that her appeal was being processed. I’m sure the AI lost many sleepless hours of processing time thinking about her precarious situation and her kids. But a few days later, the bipolar AI sent her another email informing her that the company’s position had not changed after the appeal and that they would not reinstate her. At the end of the email, it “genuinely” wished her all the success in her future endeavours.

The effect of this decision was that she began to struggle financially. She stopped paying her mortgage, the bank repossessed her car, she almost lost her house, she became dependent on government assistance, and her children spent one of the most miserable Christmases of their lives.

This episode is not a horror story situated in a distant dystopian future governed by an AI. It happened last year to a 42-year-old single mother in the United States.

Such a system has many faults. We cannot build algorithms that demand unrealistic goals; it makes no sense. While we should aim for productivity, we cannot treat people like machines. If something is wrong with a person’s performance, it should be discussed humanely, with their backstory taken into account. If an algorithm assesses a person, that person should have the right to examine the assessment, counter its arguments and appeal it. Finally, we should never remove human judges from the loop. Algorithms are not infallible, they are subject to biases, and they cannot determine the future of human lives based on a discrete mathematical formula.

This story is not a one-off mistake; similar cases are sprouting every day. Companies are resorting to automating their human resources operations, banks are approving loans with the help of computerised systems, and courts are relying on algorithms to grant parole. If we want to resolve this situation, algorithms that affect people must be transparent about their decisions, justify them and give people the information they need. People should have the means to call out mistakes and access to simple corrective mechanisms. Legislators are duty-bound to enact rules that prevent harm before it is too late. Only then can we hope to create a fair society in which AI helps every one of us improve our lives instead of acting as a digital executioner over our livelihoods.
