My Thoughts on AI

AI is the buzzword of the moment. It's been on the front of newspapers and magazines, and almost everyone has at least heard of it. It's "The Next Big Thing™️".

As with anything that's The Next Big Thing, there's excitement and benefits, but also worry and liabilities. I want to take a little time and explain where I sit on the spectrum.


Admittedly, there are huge benefits to be excited about. What AI does is incredible, both as an astonishing piece of computer science and in what it enables.


Obviously the biggest benefit is what AI can generate. It's a modern miracle that we can type in short prompts and get incredible imagery back, or ask it to write text that might help us prepare for a job interview, play games or cook.


Using Neural Networks, we're likely to see innovations in processes. In medicine alone, it could improve image analysis, speeding up diagnoses and improving their accuracy.

Conceptual Compression

It used to be that, to make a really interesting and attractive artwork, you had two options. First, you could spend years learning the medium and crafting the art yourself. Alternatively, you could hire someone with those skills to create the piece for you, at a cost.

With AI, you can now access the results of that task without needing to engage anyone who has the skills. This is amazing. Word did this with spellchecking; Google Maps did it for navigation.

It's truly useful for the user when a complex concept can be compressed and abstracted to a point that they can avail of the results without needing any knowledge of the complexity of the process.


I've started with the benefits to make sure that you, dear reader, see that I can see what is good about AI. I truly understand that there are benefits to individuals and society in AI existing. My question for you is: do they outweigh the liabilities?

Copyright and consent

Let's get this out of the way immediately. We don't know where these AI companies are sourcing the content that they use to train their models. Washington Post journalists have put in the work to see where some AIs sourced their material.

The argument from AI supporters here is that the information is publicly available. This is undeniably true. You can visit most of these sites and read all their content for free. But remember these are businesses.

A business needs to make money to survive. Typically, for online publications, that's achieved through subscriptions or advertising, which only works if you visit the site and see the ad. Chat-based AI will typically give an answer on a topic, but without any attribution to the source. This means the people who created the content cannot monetise it.

Note: for the next section, it's important to know I work for MediaHuis, owners of the largest set of news publications in Ireland and my job depends on advertising.

This might seem like a small thing, but many users already get some of their news from AI. They will ask about a topic, then ask for more elaboration on it. This directly hits the profitability of an industry that is already struggling.

Not only does AI threaten the survival of advertising-supported publications, it also affects artists whose work needs to be promoted so they can be hired and make a living. The Washington Post report above shows that many of the biggest sites where designers show their wares (Behance, DeviantArt, Dribbble) have been scraped for AI. You can now access the combined skills of all the artists on these sites, without needing to pay the artists who only put their content up there to promote themselves.

All of this is done without consent. The models have already been created. There's no opt-out, no clarity about whether your labour has been included, and no acknowledgement that your ability to make money has been impacted.

At the very least this needs international government oversight, or it should be made illegal to train on content without offering an ability to opt out.

You can hear more info on this from the latest episode of ShopTalkShow.


Bias is a big one for society.

Who is choosing the sites, and what are their biases? Humans are so bad at recognising their own biases that there's a term for it: the bias blind spot. AI tools are being used internationally, but created by a small number of people. There's no way to ensure these people are completely without bias, and again, that Washington Post article shows that many of the sites have huge biases.

What are the biases of the datasets? This is unknowable. But given that people have managed to create racist hand dryers, it seems impossible that large language models won't have some issues.


By the very definition of the "generative" in Generative Pre-trained Transformer (GPT), these AIs don't look up answers; they generate likely output, piece by piece, until it has enough fidelity to appear as if it was created by another human. The AI does not understand the concepts or ideas it is working on; it's just producing a statistically likely artifact, given the model it was trained on.
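The point that the AI is "just creating a likely artifact" can be sketched with a toy word-level model. Everything below (the `FOLLOWERS` table, the `generate` function) is invented for illustration and is vastly simpler than a real transformer, but the loop is the same idea: repeatedly sample a likely next token, with no understanding of meaning involved.

```python
import random

# A tiny "language model": for each word, the words that may follow it,
# weighted by how often. These values are made up for illustration; a real
# GPT learns probabilities like these with a neural network, over a
# vocabulary of tens of thousands of tokens.
FOLLOWERS = {
    "the": {"court": 1, "contract": 2},
    "court": {"ruled": 3},
    "contract": {"states": 2, "requires": 1},
    "ruled": {"that": 1},
    "states": {"that": 1},
    "requires": {"that": 1},
    "that": {"the": 1},
}

def generate(start, length, seed=0):
    """Repeatedly sample a likely next word. The output merely *looks*
    plausible -- nothing here knows what a court or a contract is."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        options = FOLLOWERS.get(word)
        if not options:
            break  # no known continuation; stop generating
        words = list(options)
        word = rng.choices(words, weights=[options[w] for w in words])[0]
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

The output reads like a fragment of legalese, which is exactly the trap: plausibility without understanding.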

What does that mean? Well, when you ask it to create a legal document, it has no idea about the law. It's purely guessing, doing an impression of a legal document. It doesn't know the laws of your country. People are already getting into trouble with AI lawyers.

Purely anecdotally, I've noticed that my friends and colleagues who use it often seem to trust the results too much. As if ChatGPT has done research for them.

This misunderstanding is very dangerous. It could muddy knowledge as AI-generated content, published by humans, becomes reference material. This erodes the scientific method and society's ability to build upon itself.

Experiential knowledge

When it comes to creating knowledge too, I can already see friends and colleagues being deferential to AI. How did they come to the conclusion that this or that idea is correct? "AI said so". They are receiving knowledge (again, by definition, inaccurate knowledge) from an AI and taking it as fact. They have put no effort into understanding the knowledge, and they have not applied it to gain any experience of whether it is correct or not.

If it continues, this trend will lead to people not understanding why they do things the way they do, and to huge skillsets being lost.

Unknown social impact

I'm old enough to have known a time before Social Media and after. There's so much research out there about the negative impacts of Social Media that we only realised years later. We have no idea what the impacts of AI are, and my fear is that they'll be bigger.

Social Media, being social, had direct effects on how we communicate together personally. It then had further effects on society at large.

AI is affecting how we communicate together in every setting, how work is done, and how the world is understood. It's potentially bigger than Social Media, and its effects could be too.


The computing cost is a massive one, and it's not spoken about enough. AI requires some HUGE computing power.

Have you noticed that most AI only runs through web interfaces? That's because the power required to run a model isn't available on most people's devices.

Have you noticed that those services are often overwhelmed? Even using Microsoft's Azure cloud infrastructure, ChatGPT and DALL-E are still often over capacity.

We don't know the impact here, but it's significant enough to bottleneck one of the largest cloud infrastructure providers on the planet regularly.


At the moment, most AI tools are free, at least to try. This sets the expectation that using AI is cheap. It's not, though: we're being used to further train the models, and to create a reliance on the tools.

Additionally, I worry that when the big companies start charging for their AI tools (which is inevitable), people will create less well-monitored bootleg AIs that will be more likely to suffer from the liabilities above.


AI is amazing, and we should absolutely take advantage of it, but only where we can understand its effects. For example, in medical research or analysis: if we can show it's quick and accurate in making assessments, saving a doctor time and allowing them to first confirm the AI's diagnosis, then great.

Otherwise, we should come at this with an abundance of caution. I support the various leaders who have said progress on AI should pause for now, until we understand it better. I also believe it needs to be incredibly closely regulated, internationally.