
Tech Talk: Liar, Liar, Your Pants Are on Fire!

Image by Gerd Altmann from Pixabay

This series is designed to help people understand modern technology and become more confident in using computing devices. It is not designed to educate experts.

The author is involved in tutoring older students at SeniorNet, a New Zealand-wide organisation. SeniorNet hopes that its students will feel more confident using their computing devices as a result of the learning opportunities offered. This series of articles shares that hope.


As you are no doubt aware, I’m interested in things technological. So when I recently read a headline titled “Why ChatGPT should be considered a malevolent AI – and be destroyed”, published on theregister.com, it piqued my interest.

The article was written by Alexander Hanff, a well-known privacy expert and advocate for data ethics. He had not had any previous experience with ChatGPT and decided that now was the time to delve into the system, particularly as it related to his area of interest.

You can read the resulting article here.

As this would be his first interaction with ChatGPT, he decided to use himself as a test subject, to find out what information had been scraped into the system and whether any of his “private” information would be revealed.

The system gave him a canned biography which was more or less correct. ChatGPT incorrectly told him he was born in London in 1971 (he was born at the other end of the country, in a different year) but correctly summarised his career as a privacy technologist. He found the information quite flattering.

But then it went right off the rails by telling him he was dead! He’d been dead, apparently, since 2019. Further probing revealed that no cause of death was given, but a link to an obituary in The Guardian newspaper was provided. That link turned out to be bogus; it had never existed.

So, ChatGPT was making stuff up. Search as he might, Hanff could find nothing that could have caused this computer programme to invent an account of his death.

Later in the article, he indicated that the error had been corrected, and any reference to his death had been deleted.

Being curious, and having organised a free ChatGPT membership, I decided to ask it about this man myself.

My question was “Who is Alexander Hanff?” In the present tense. No suggestion he was dead. And the response was, in part: “Alexander Hanff (1971-2019) was a privacy activist and expert who advocated for digital rights and online privacy. He was born in the United Kingdom and spent much of his life in Spain.”
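For readers who like to peek under the bonnet, the same question can be put to ChatGPT programmatically. Below is a minimal sketch in Python, assuming the openai package (the interface as it stood in early 2023, before version 1.0) and a personal API key; the key shown is a placeholder, and gpt-3.5-turbo was the model behind the free ChatGPT service at the time.

    # A minimal sketch: asking ChatGPT a question through OpenAI's API.
    # Assumes the openai Python package (pre-1.0 interface) and your own API key.
    import openai

    openai.api_key = "sk-..."  # placeholder: your personal API key goes here

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model behind the free ChatGPT at the time
        messages=[
            {"role": "user", "content": "Who is Alexander Hanff?"},
        ],
    )

    # Print whatever the model answers; as this article shows,
    # accuracy is not guaranteed.
    print(response.choices[0].message.content)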

So, he was dead, then undead. Now he’s back to being dead again. And this is in line with information in the original article.

Further probing revealed three links to articles about his demise. Unsurprisingly, all three were bogus. What gives? Is the system making up the data, has it been fed incorrect data, or has it been gamed into giving this answer because Hanff is on some blacklist somewhere?

When the information tendered is patently false, everything coming out of the system becomes questionable. It’s lying with more skill than a naughty child.

Going back to the original article, we are told that ChatGPT is trained using certain criteria, called “frameworks”. These are:

  • Fairness, Accountability, and Transparency (FAT)
  • Ethical AI
  • Responsible AI
  • Human-Centred AI
  • Privacy by Design
  • Beneficence
  • Non-maleficence

Nothing in there about honesty. And can a computer programme ever recognise fake news? Can it be made self-correcting if fake news is detected?

Given what we know about how people are willing to feed what could best be described as pig swill to chatbots, and given that ChatGPT (built on the GPT-3.5 model) was trained on data only up to 2021, it will always be both behind the times and filled with the worst that humanity can provide (as well as, hopefully, the best).

We probably think of this as an amusing toy, to be played with, tinkered with and perhaps teased. It can get some information correct but is out of date on other information. For example, I asked it to provide me with information about the assassination of Jacinda Ardern. It correctly advised me that there was no information on any assassination, but then went on to tell me she was still the New Zealand Prime Minister. As far as we know, there is no ability (yet) to learn from the questions asked, and that’s good given the potential for evil that could be unleashed. Also, what arrangements will be developed to feed it more up-to-date data? And what about today’s conspiracy theory becoming tomorrow’s acknowledged “fact”?

Given that ChatGPT is still in development, should we be worried about the shortcomings being revealed? Microsoft has announced it is incorporating this technology into both its Bing search results and its Edge browser (and is putting a link on the Windows 11 taskbar). This is to be powered by a combination of a next-gen OpenAI GPT model and Microsoft’s own Prometheus model. The number one computer OS provider is getting into bed with this lying, malevolent technology, in a marriage that many may think was made in heaven. Can Google be far behind (its version is called Bard)? Other search companies playing with AI include Baidu (China), which calls its offering “Ernie Bot” in English, or “Wenxin Yiyan” in Chinese; and Tencent, also in China, whose offering is called HunyuanAide.

And where these go, will Meta (Facebook, whose AI model is named LLaMA), TikTok, and a whole gaggle of other social networks be far behind?

Hanff’s article goes on to describe a number of real-world situations where this technology, if it goes wrong, could adversely affect people. We need to consider the effects very carefully before we allow this into our world…

PS. I haven’t had the courage to ask it about myself. I would hate to find out I’m already dead, or perhaps convicted of some heinous crime. Or, worse still, I’m such a nobody that the system doesn’t even know I exist.

I believe we need to let the philosophers loose on this subject, and then a whole tsunami of lawyers. We need to understand it in detail before we allow it to be in a position to affect our lives. We’ve already got politicians, big business, the woke brigade, Russia and China (amongst others) all interfering in how we live our lives. Is it time to invite another of the same ilk into our tent? I can credit the AI system with “artificial”, but not “intelligence”. I will leave that to humans, and our cats.

The Future: Since I started this article, OpenAI has released version 4, which is reported to be much improved. It has also released plug-ins that enable certain apps to converse with ChatGPT and supply it with more up-to-date information. Other systems have come out of the woodwork in the wake of OpenAI’s release of ChatGPT, so the whole artificial intelligence field is now under much closer scrutiny, which can be no bad thing. With the promises being made about this technology, we need to have a good grasp of just what it means (see my previous paragraph).
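To give a feel for what those plug-ins accomplish, here is a conceptual sketch in Python. To be clear, this is not OpenAI’s actual plug-in machinery; it simply illustrates the underlying pattern of fetching fresh information from a live source and handing it to the model alongside the question. The feed URL is hypothetical, and the same pre-1.0 openai package and placeholder API key are assumed as in the earlier sketch.

    # A conceptual sketch of supplying ChatGPT with up-to-date information.
    # NOT the actual plug-in mechanism; it shows the general pattern only:
    # fetch fresh data from a live source, then include it in the prompt.
    import urllib.request

    import openai

    openai.api_key = "sk-..."  # placeholder: your personal API key

    # Hypothetical live source; any regularly updated feed would do.
    with urllib.request.urlopen("https://example.com/todays-news.txt") as feed:
        fresh_notes = feed.read().decode("utf-8")

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # The fresh material rides along as context for the model.
            {"role": "system",
             "content": "Answer using only these up-to-date notes:\n" + fresh_notes},
            {"role": "user", "content": "What is in the news today?"},
        ],
    )
    print(response.choices[0].message.content)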

This is a fast-moving area of development. One of the things I like about OpenAI is its open approach to much of what it is doing, as opposed to the closed approach being taken by companies such as Google (Bard) and Microsoft (Bing), among others.

I have just watched an extensive chat with Sam Altman, the head honcho from OpenAI, and you can see this here.

The conversation was with Lex Fridman, who does in-depth interviews on YouTube. It’s over two hours in length, so set aside a chunk of time if you want to see it in its entirety.

Historical Note: OpenAI was founded in San Francisco in 2015 by Sam Altman, Greg Brockman, Ilya Sutskever, Elon Musk and others, with backers including Reid Hoffman, Jessica Livingston and Peter Thiel; collectively they pledged US$1 billion. Musk resigned from the board in 2018 but remained a donor.

What are your thoughts?
