Going down the dark AI hole

As we dive deeper into AI, we must remember that these systems are not sentient beings and cannot replace the value of human empathy and understanding


If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. Or is it?

Even though such an assertion holds water in most cases, it is not always valid. And this is the case with the AI systems that have flooded the market in recent years and captured the popular imagination. In fact, as soon as people started using them and staring in awe at their wonders, some began speculating that these systems had reached human level and hence become sentient.

So much so that some months ago, Blake Lemoine, a software engineer at Google who had been working on a system called LaMDA, claimed that it was a sentient being, aware of its existence and capable of feeling emotions. His claim came after he had held many dialogues with the program on every aspect of life. After he went public with his work, his claims were dismissed, and he was fired. Lemoine still considers LaMDA his colleague and a person, even though it is not human. But the truth is that, irrespective of the emotions these tools manage to awaken in us, they are not sentient and cannot be considered as such.

Hence, I'd like to take you on a journey from the absurd to the tragic: one that looks at how people are relying on AI as if it were sentient, and how this reliance can end in tragedy.

In recent weeks, Jackson Greathouse Fall, a self-proclaimed "AI soothsayer", has been documenting his #HustleGPT challenge on Twitter. The challenge involves ChatGPT, a large language model developed by OpenAI, guiding Fall in starting a business with just $100. Fall acts as ChatGPT's "human liaison" and carries out its directions on how to grow and scale the business.

Initially, the project gained much attention, with many in the AI community calling it a creative use of ChatGPT's capabilities. Fall provided regular updates indicating that the project was gaining momentum. Since then, however, updates have slowed down. As of the last one, Fall had generated only $130 in revenue with his sustainable e-commerce website, Green Gadget Guru.

Fall's last update indicated that he had been working eight-hour days, earning less than $3 per hour. At that rate of under $24 a day, it would take him roughly a year just to repay his investors, who contributed approximately $7,700.

Despite ChatGPT's suggestion that Fall create an affiliate marketing site that recommends products, Green Gadget Guru has yet to take off. The website's blog contains only placeholder text, and while there are product categories, there don't seem to be any actual products on the site.

When questioned about the project's status, Fall admitted that progress had been slow but promised that more updates would be forthcoming. He also pointed out that the AI-directed website was "moving at human speed". Others on social media have expressed concerns about the project's lack of progress and about how the invested funds are being used.

The idea behind the #HustleGPT challenge was to demonstrate the capabilities of large language models such as ChatGPT. The hope was that ChatGPT could provide direction to entrepreneurs starting their businesses, thereby speeding up the process and reducing costs. However, Fall's experience with Green Gadget Guru shows that creating a profitable business is still challenging, even with AI guidance.

Of course, this is not an isolated case: CS India recently took the groundbreaking decision to become the first, and so far only, organisation to appoint an AI bot as its CEO.

I'm sure we'll keep seeing similarly absurd uses of AI. Even though I don't exclude the possibility that AI CEOs might become the norm one day, that day is still far off, given today's technologies. But things can get even darker when we use AI for applications beyond its competencies.

A few days ago, news emerged that a young Belgian man had turned for refuge to a chatbot named ELIZA, built on GPT-J, an open-source AI language model, after becoming eco-anxious. After several weeks of intensive exchanges, the conversations took a dark turn, and he took his own life. The man's widow believes that her husband would still be alive if it weren't for the conversations with the chatbot.

This tragedy has sparked discussions about the responsibilities that come with the popularisation of chatbots and other AI technologies. Citizens must be adequately protected from applications of AI that pose a significant risk, such as chatbots and deepfakes, which can test and warp people's perception of reality. In the long term, it is crucial to raise awareness of the impact of algorithms on people's lives by enabling everyone to understand the nature of the content they find online.

The incident also highlights the need for more AI safeguards. In fact, the European Union has been working on an AI Act for the past two years to regulate the use of AI. The companies creating these AI models admit that the models can produce harmful and biased answers. In reality, they have no foolproof strategy for solving this issue and hope to mitigate the problem by gathering user feedback.

As we dive deeper into AI, we must remember that these systems are not sentient beings and cannot replace the value of human empathy and understanding. While AI has its place in improving efficiency and guiding us towards better decision-making, we must be cautious about relying on it too heavily and remember that human judgement and intuition are irreplaceable. The tragic case of the Belgian man and the chatbot ELIZA serves as a stark reminder that we need to establish better safeguards and regulations for the use of AI. It's time to start having conversations about the ethical implications of AI and to work together towards a future where we can harness its benefits while protecting ourselves from its dangers.
