
Does my Chatbot Really Understand Me? The Illusion of Meaning in AI
March 2025
One of the most challenging and fascinating questions I considered during my AI studies last year was the gap between language proficiency and genuine understanding. We're seeing LLMs from companies like OpenAI, Anthropic, and DeepSeek produce stunningly human-like text, but is that evidence of true comprehension? I kept wondering: can merely manipulating language rules ever lead to real meaning? It's easy to be impressed by the output, but it's important to ask whether we're seeing real understanding or just incredibly convincing simulation.

Embedding Responsible AI: how RAI drives innovation
February 2025
I've spent the past week researching responsible AI (RAI) maturity roadmaps, focusing particularly on collaborative, industry-specific examples. Given AI's growing influence, responsible implementation is essential, and my research highlights how RAI not only mitigates risk but also drives innovation.

AI misleading content: who’s responsible?
November 2024
My 2024 Master's dissertation in philosophy, on generative AI large language models and epistemic responsibility, asks: who is responsible when GenAI gets it wrong?

The challenge of scaling Generative AI pilots
January 2025
Scaling AI from pilot projects to real-world operations is a major hurdle for businesses. While data handling is manageable for small-scale tests, it becomes far more complex at scale. Key challenges include data quality and structure, a lack of clear digital boundaries, and effectively integrating AI with human decision-making. These are not simply technical problems, but require a fundamental shift in operational strategy.

Exploring GenAI: 9 things to think about
January 2025
Generative AI is not some mystical force. Its power comes from the careful application of advanced algorithms to lots of data. Success depends less on magic and more on the very practical work of building the data pipelines, developing the necessary infrastructure, implementing responsible governance policies, etc. But this dedication to the fundamentals is what will drive real innovation in your organisation.

Hallucinations and risks
February 2025
Generative AI models, like ChatGPT, sometimes produce incorrect, misleading, or nonsensical text, a phenomenon known as "hallucination". These hallucinations carry significant societal risks, including erosion of trust and the spread of misinformation. But did you know that there can be four distinct categories of these AI "mistakes"?

These blog posts represent my own thoughts and analysis, except where otherwise noted by references. The information is based on my understanding at the time of writing, and I cannot guarantee its continued accuracy.