Exploring GenAI: 9 things to think about

Generative AI's potential isn't about magic; it's about the strategic application of advanced algorithms to lots of data. True innovation in this space comes not from hype, but from the practical groundwork: building data pipelines, developing infrastructure, and implementing responsible governance. Informed by my previous experience and recent research, here are some key considerations for organisations exploring Generative AI.

Why do so many companies fail to move beyond PoC?
Research from organisations such as RAND, Forbes, and Gartner shows that the majority of companies do not operationalise their AI PoCs. Here are some questions to ponder.

  1. Was the pilot the right one?
    Not all PoCs are created equal. Was the pilot project designed to scale, or was it a siloed initiative with limited long-term value?

  2. Are we selecting the right PoCs to operationalise?
    Decision-making forums, including key stakeholders across the enterprise, should assess PoCs for scalability and impact. Engaging diverse voices from across (and beyond) the organisation ensures more thorough consideration of all aspects. PoCs developed in isolation often fail to account for the practical realities of operationalising AI.

  3. What makes Generative AI unique, and why does it need special considerations?
    Unlike traditional AI systems, Generative AI introduces complexities such as the potential for bias and misuse, and the need for continuous monitoring. Organisations must understand these nuances in order to build effective guardrails.

  4. How should governance evolve to support AI across its entire software development lifecycle (SDLC)?
    AI governance cannot be an afterthought; it must span from ideation to deployment and beyond, ensuring compliance, transparency, and ethical operation at every step, and it must itself evolve as models, regulations, and use cases change.

  5. What is the stakeholder model to ensure success?
    Scaling AI requires a cross-functional stakeholder model that includes IT, legal, compliance, business functions, and, where relevant, external regulators. Consider who will own the success of the initiative, and how collaboration will be enforced.

  6. How do the EU AI Act and UK AI governance impact our plans?
    Regulatory compliance is not optional. Understanding how evolving frameworks such as the EU AI Act will affect AI deployment is critical for risk mitigation and long-term success.

  7. What steps are required to make Generative AI a reality across the enterprise?
    Is there a roadmap for integrating Generative AI into the organisation? This includes technical enablers, cultural transformation, training, and alignment of AI initiatives with business goals.

  8. How do we ensure trust in AI decision-making?
    AI already makes decisions for some organisations, even if we don’t realise it (e.g. Netflix choosing our movies, Amazon choosing our books, banks deciding whether they’ll lend to us). So, what mechanisms need to be established to verify, validate, and audit AI decisions? This is paramount to maintaining stakeholder trust.

  9. Can we describe the problems we’re solving digitally?
    To operationalise AI effectively, problems must be defined digitally, in ways that AI systems can interpret and adhere to. It's not enough to say, "We want to improve customer service" or "We want to reduce costs." These are high-level business goals, but AI needs more specific, quantifiable input: the problem must be expressed using data, numbers, categories, and other digital representations that AI algorithms can process. Without this clarity, solutions risk becoming inconsistent or misaligned with business objectives. This, by the way, is not just an AI problem. The underlying issue is a general challenge in problem-solving and project management, often referred to as "defining the problem" or "requirements gathering."

    What makes it more complex with AI, or GenAI, is that AI systems learn from data: they need concrete, structured data to understand the problem. We should avoid, where possible (and sometimes it isn’t possible), a ‘black box’ AI system whose decision-making processes are opaque or hidden, making it difficult to understand how it arrives at its outputs. If the problem wasn’t defined precisely, it becomes even harder to diagnose why the AI is producing undesirable results; the lack of transparency can amplify the consequences of a poorly defined problem.
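One practical mitigation, touching both the auditability question above and the black-box concern, is simply to record every AI decision together with the inputs that produced it, so undesirable outputs can be traced back later. A minimal sketch in Python (the model name and fields are invented for illustration; a real system would need secure, tamper-evident storage):

```python
import datetime
import json

def log_decision(model_id, inputs, output, log):
    """Append an auditable record of one AI decision.

    Keeping inputs alongside outputs makes it possible to investigate
    why a system produced a given result, even when the model itself
    is opaque.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
    }
    log.append(json.dumps(record))  # serialise so the record is immutable text
    return record

# Hypothetical usage: a lending model approves an application.
audit_log = []
log_decision("loan-scorer-v1", {"income": 42000, "term_months": 36}, "approve", audit_log)
```

This doesn't make the model explainable, but it does make its behaviour inspectable, which is the minimum needed to verify, validate, and audit decisions over time.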

For example, consider a customer service chatbot (super high level!).

Use Case: The company wants to deploy an AI chatbot to improve customer support efficiency without diminishing customer satisfaction.

Digital Description:

  • Objective: Resolve at least 80% of tier-1 queries autonomously while maintaining a customer satisfaction score of 90% or higher.

Constraints:

  • The bot must escalate queries involving financial disputes, legal inquiries, or sensitive customer data to a human representative.

  • It must not generate responses involving speculative advice (e.g., medical, financial).

Data Requirements:

  • FAQs, knowledge base articles, customer interaction logs, and sentiment analysis models.

Exclusions:

  • The bot must not request personally identifiable information (PII) beyond what is strictly necessary for query resolution.

This is a more precise digital description. The next step would be to uncover the “data chain” this would require, to determine feasibility: where is the data held, is it accessible, do you own it, is it on a third-party server, does it have GDPR restrictions, is it sitting in a spreadsheet somewhere, etc.
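A description this precise can even be captured as a machine-readable spec that the team tests the deployed system against. A minimal Python sketch, using the thresholds from the example above (the class and field names are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class ChatbotSpec:
    """Illustrative, machine-readable form of the digital description above."""
    # Objective: quantifiable targets the deployment must meet
    min_autonomous_resolution_rate: float = 0.80  # tier-1 queries resolved without a human
    min_csat_score: float = 0.90                  # customer satisfaction threshold
    # Constraints: topics that must always be escalated to a human
    escalation_topics: tuple = ("financial dispute", "legal inquiry", "sensitive customer data")
    # Exclusions: advice categories the bot must never generate
    forbidden_advice: tuple = ("medical", "financial")

    def meets_targets(self, resolution_rate: float, csat: float) -> bool:
        """Check measured performance against the objective."""
        return (resolution_rate >= self.min_autonomous_resolution_rate
                and csat >= self.min_csat_score)

spec = ChatbotSpec()
print(spec.meets_targets(resolution_rate=0.83, csat=0.91))  # True
print(spec.meets_targets(resolution_rate=0.83, csat=0.85))  # False: CSAT below 90%
```

The point is not the code itself, but that every word of the business goal has been turned into something measurable.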

My phrase "data chain" refers to the sequence of data sources, transformations, and processes needed to actually get the data required for this precise digital description. It's essentially mapping out where the necessary data will come from, how it will be collected, cleaned, and prepared, and how it will flow into the AI system.

So, the "next steps" involve figuring out if it's actually possible to get all this data. Is the data available? Is it in a usable format? Is it accessible? Is it reliable? What is its provenance? Determining the "data chain" and its feasibility is crucial because even with a perfectly defined digital problem, if the data isn't there or can't be obtained, the AI solution can't be built. It's about checking if the practical data requirements match the theoretical digital description of the problem.
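These feasibility questions can be worked through as a simple per-source checklist. A hypothetical sketch (the source names and attributes are invented for illustration; a real assessment would cover format, reliability, and provenance too):

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """One link in the 'data chain': a source and its feasibility attributes."""
    name: str
    location: str         # where the data is held (internal DB, third-party server, spreadsheet...)
    accessible: bool      # can we actually reach it today?
    owned: bool           # do we own it, or does a third party?
    gdpr_restricted: bool # does processing need a legal basis / DPIA?

def blocked_sources(chain):
    """Return the names of sources that block the data chain outright."""
    return [s.name for s in chain if not s.accessible]

chain = [
    DataSource("FAQs", "internal CMS", accessible=True, owned=True, gdpr_restricted=False),
    DataSource("Interaction logs", "third-party CRM", accessible=False, owned=False, gdpr_restricted=True),
]
print(blocked_sources(chain))  # ['Interaction logs']
```

Even a crude checklist like this surfaces the show-stoppers early: if a required source is inaccessible or legally restricted, the theoretical digital description cannot be met, however well it was written.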

(This is my personal blog, so the info here might not be perfect and definitely isn't advice)
