
Artificial Intelligence
First, some context. I have been heavily involved in IT since I was a young child, worked in the industry for decades and still have a hand in it now. I love new technology, and have no fear of it in itself.
However, most things can be used for good or ill. The Internet is a wonderful invention, but has brought scams, exploitation, brutal bullying and so on. You know the sorts of things I mean. Smartphones are super, but have also brought addiction, doom-scrolling etc., and have even broken up marriages.
AI has its benefits. If you want a picture of a horse doing ballet or to research washing machines, great. It can take a lot of low-level leg-work out of jobs and tasks, leaving people to deal with the meatier bits, but can also be used to write and submit essays which the "writer" had no part in and has learned nothing from.
There are a few sayings relevant to this topic: "Anything free is worth every penny" and "If you're not the customer, then you are the product."
If you turn to an AI for "therapy" (if we must use that term), what are you actually doing? You don't pay for this, but somehow the owners of that AI must be covering the huge costs of running it. Where do they get the money? Well, one source is your interactions with it. AIs train on the information they gather, both on their own and from users.
I occasionally run a lecture for counsellors on data protection. Here in the UK we have laws (the GDPR) governing how we handle personal data. I asked ChatGPT where it is based and what the implications of that are; here is its full reply to me if you want to read it. Some highlights:
- It does not mean every conversation is stored forever or routinely reviewed by humans.
- Conversations may be logged temporarily for safety, abuse prevention, and model improvement
Implying, of course, that some conversations are stored forever and reviewed by humans. Next, it gave me this:
Bottom-line summary

| Dimension | Impact of being U.S.-based |
| --- | --- |
| Legal jurisdiction | Moderate |
| Privacy risk | Comparable to major tech platforms |
| Cultural bias | Noticeable unless corrected |
| Latency | Minimal impact |
| Surveillance fears | Often overstated, but not zero |
So it's as secure as social media, and nothing ever gets misused there. It has a noticeable cultural bias towards the US (where it is "based"), which is not where you live. Finally, the surveillance possibilities are "not zero", in its own words.
Contrast that with a real, live therapist. In my case I use Zoom for sessions, and it is set to encrypt all communication between us so our sessions cannot be intercepted. Everyone has a cultural bias, but I'm natively from the UK, as are the majority of my clients, and I'm trained in working cross-culturally. I could be subject to criminal proceedings if I misused your information, and expelled from the professional societies I belong to for working unethically.
AI has none of these safeguards. But on top of that, there are the interactions themselves.
A chap called Albert Mehrabian put forth the idea that communication is made up of three things: the words you say, how you say them, and your body language and facial expression. They make up 7%, 38% and 55% of the message respectively. This model is often wrongly applied to all communication, when in fact it was primarily about conveying feelings and attitudes, i.e. exactly what we're on about here.
Words are all the AI has. It can't see that you blush or have a tear in your eye when you say something. In therapy, that's a huge part of the communication. The other day I noticed that a client touched their necklace whenever they mentioned a particular thing, and so I was able to ask about its significance, which turned out to be extremely beneficial to the work. An AI would have missed that.
I trained for six years in this work, and have committed to many hours of continuing professional development every year since. You might argue that an AI has effectively thousands of hours of training, but it never sees the consequences of its interactions.
If my client is visibly upset then we work to soothe that before we end the session. If you suddenly stop talking to an AI because you're upset, how does it know? How does it know to modify its behaviour, or to ask if you're OK? And indeed, there has been quite a lot of press coverage about the harm AIs do in this way.
If you're still with me (see, if we were face to face I'd know whether this was working, which is precisely what all this is about), consider this:
- You go to the dentist for a filling, and they tell you they learned everything from watching YouTube videos and never went to dental school. Now, open wide.
- You go to a hairdresser who consults a book of step-by-step pictures for using the clippers. You've always wanted a reverse Mohican, haven't you.
- Your GP lets on that they slept through the lectures on infected cuts, and they just pasted Wikipedia entries into their essays. That toe will probably grow back.
- Your vet remarks on what a big hamster that is when it's a guinea pig, but they all look alike on Instagram, don't they.
- The person delivering your baby promises they've studied every single episode of "All Creatures Great and Small". Now, get me those rubber gloves which go up to your armpit.
And most relevantly: your therapist tells you they have no formal qualifications or training, have never been supervised, don't have regular casework reviews, and don't belong to any professional society with a code of ethical practice. But hey, "how does that make you feel?"
Times are tight, absolutely, I get that. And being a therapist, of course I would say that AI is bad news. But a bad haircut grows out eventually, and that's that. If you wouldn't want that unqualified hairdresser near your head, I strongly urge you to seek out properly qualified therapy from a human being before you let a mish-mash of machine learning muck about with your brain, with potentially much longer-lasting consequences than having to wear a hat for a while.




