
At the recent AI International Summit in Brussels, an invigorating session explored the transformation challenges AI poses to organizations. We wanted to offer our perspective.

AI now represents a new operating condition—akin to electrification, the internet, or the advent of cloud computing—reshaping how work gets done, how decisions are made, and what organizations will tolerate as they learn.

With the advent of agentic AI systems, organizational dynamics shift again, because these systems not only generate content but also act autonomously. The whole organization becomes the site of AI transformation. The hard part isn’t the model, the technical deployment, or the launch of pilots. The friction lives elsewhere, in the cultural challenges described below:

  • Teams can’t agree where the tool fits because the process itself is messy, undocumented, or politicized. AI is viewed as an assistant that requires human oversight rather than an autonomous decision maker, which reveals a cultural resistance to ceding authority to AI even when it scores well on every technical parameter.

  • People test tools in isolation, see a spark of productivity, and then hit the wall when they try to move from “personal acceleration” to organisational capability. International studies indicate that most users restrict AI to low‑risk productivity tasks because of cultural perceptions of how much error the organization will tolerate, which means pilots succeed technically but stall when asked to change decision workflows or responsibilities.

  • Approval for AI initiatives is sought from the compliance department as if approval were a one-time event rather than a living posture. In our latest studies, worries about data access, misuse, and the need for human verification are repeatedly cited as barriers to deeper engagement, showing that governance and trust culture, not model accuracy, limit adoption.


    AI-generated conceptual illustration


Roles are shifting, and so is the meaning of sound judgment

When a tool can draft, summarize, propose, diagnose, simulate, and recommend, work rarely vanishes; it rearranges. Some jobs become orchestration-heavy, guiding systems, curating inputs, checking outputs, and managing exceptions. Some split into a fast lane of experimentation and a slow lane of assurance. And some acquire a new shadow role: the person who does the work, and the separate person who must account for how AI shaped the outcome. Take translators: AI can produce transcripts and rough translations, but the human time now goes to proofreading, preserving nuance, and deciding when to override the machine. That shift forces a deeper question for organizations: what do we mean by quality now? Not just accuracy or compliance, but traceability, contestability, and clear lines of responsibility—changes that are as much cultural as they are technical.


The agentic moment, when 'tools' become coworkers

LLMs nudged us into new habits and agentic systems pushed us into a different workplace altogether. When a system watches a queue, sets priorities, triggers actions, follows up, and escalates issues, it starts behaving like a coworker. In some consulting workflows, you can feel it doing the heavy lifting—if you counted agentic man‑hours against total man‑hours, the agentic share would be strikingly large. Once that threshold is crossed, the question is “What should it be allowed to do?”

What follows are practical, urgent questions: what permissions do autonomous agents get without human sign‑off; where are the stopping rules; how do you spot errors that look perfectly plausible; and how do you audit choices that never appeared as formal decisions? Autonomy forces organizations to make explicit what used to live in habit and intuition: risk appetite, escalation paths, accountability, controls, and even the definition of 'done'. That’s why many AI projects surface gaps you didn’t know you had and expose governance needs you didn’t realise existed.
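
To make this concrete, here is a minimal sketch, in Python, of what an explicit permission gate with stopping rules and an audit trail might look like. The action names, confidence threshold, and policy lists are hypothetical illustrations, not a prescribed design.

```python
# A minimal sketch of an agent permission gate. The action names, the
# confidence threshold, and the policy lists are hypothetical; real
# policies would come out of a governance review.
from dataclasses import dataclass, field
from datetime import datetime, timezone

AUTO_APPROVED = {"summarize_ticket", "draft_reply", "set_priority"}         # no sign-off needed
HUMAN_REQUIRED = {"issue_refund", "close_account", "send_external_email"}   # always escalate

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, action: str, decision: str, reason: str) -> None:
        # Log every choice, including ones that never surface as formal decisions
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "decision": decision,
            "reason": reason,
        })

def authorize(action: str, confidence: float, log: AuditLog) -> str:
    """Return 'allow', or 'escalate' to a human, for a proposed agent action."""
    if action in HUMAN_REQUIRED:
        log.record(action, "escalate", "policy: human sign-off required")
        return "escalate"
    # Stopping rule: even approved actions escalate when the agent is unsure
    if action in AUTO_APPROVED and confidence >= 0.8:
        log.record(action, "allow", f"auto-approved at confidence {confidence:.2f}")
        return "allow"
    log.record(action, "escalate", "unlisted action or low confidence")
    return "escalate"

log = AuditLog()
print(authorize("draft_reply", 0.92, log))   # allow
print(authorize("issue_refund", 0.99, log))  # escalate, regardless of confidence
```

The point is not these specific rules but that risk appetite, escalation paths, and the audit trail become explicit, inspectable artifacts rather than habits.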


AI doesn’t merely expose process weaknesses; it becomes a diagnostic lens showing where you’re strong, where you’re exposed, and where friction will surface next. That’s why a tool like the ALIGN Index matters—not as a checklist, but as a navigation instrument. Used well, an index sharpens the questions leaders must answer:

  • Are AI efforts truly aligned to business intent and values, or merely chasing novelty?

  • Is governance woven through the lifecycle or tacked on after deployment?

  • Are impacts on people, customers, and operations measured rather than assumed?

  • Do teams have clear accountability and decision rights as autonomy grows?


The closing thought

AI is now a question of how quickly organizations can learn and unlearn. LLMs and agentic systems force a stark trade-off: move fast and accept fragility, surprises, and exposure to scrutiny, or slow down so much that the organization never compounds value. The real challenge, and the real opportunity, is learning to do both: moving quickly while staying on course to meet organisational goals.

 
 
 

Introduction to AI Needs Assessment

Before organisations invest heavily in models and pilots, they need a clear line of sight from strategic intent to measurable outcomes. This note introduces a pragmatic approach to diagnosing where AI will create real value, what capabilities are missing, and how leadership can govern risk while accelerating impact.


Why AI Needs Assessment?

AI initiatives often stall because organisations jump straight into projects without understanding what's missing. A needs assessment identifies the gap between the current state and aspirations, focusing on results rather than activities. Clarifying these gaps early aligns vision and investment and helps leaders build AI responsibly.


GAIN (Gather, Assess, Illuminate and Navigate) Framework

This framework, which we developed, turns AI needs into impact. It synchronises teams and strategy, uncovers and prioritises promising AI use cases, and helps leaders see their current state, spot opportunities and map a path to sustainable adoption. This use-case discovery is often the greatest benefit of a needs assessment.


Setup

Before the commencement of any AI initiative, it’s imperative to secure executive buy-in, define what success looks like at the strategic, tactical and operational levels, and tailor the framework to your context. Appoint champions to keep momentum.


Gather (Stage 1)

Collect data and perspectives across the organisation. Methods can range from conversations and employee/customer interviews to surveys and field experiments; the choice of method usually depends on the indicator, purpose and frequency of data collection. Combining qualitative and quantitative insights ensures a realistic picture of readiness.


Assess (Stage 2)

In this stage, the key task is diagnosing how AI insights flow through the organisation to expose logic gaps and underused potential. A logic model is developed to link activities to outcomes and show how investments in data, governance and skills lead to ethical, effective AI.


Operationalise: Adapting the logic model into actionable roadmaps provides the most direct route to operationalisation: for example, tying AI literacy, governance and data capabilities to outcomes like safety and ethics, as sketched below.
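
As an illustration only, a logic model can be held as simple structured data so the assumed chain from investments to outcomes is explicit and reviewable. Every entry below is hypothetical.

```python
# A minimal sketch of a logic model as data; every entry is a hypothetical
# illustration of how investments map to activities, outputs and outcomes.
logic_model = {
    "inputs":     ["data platform budget", "AI literacy training", "governance staff"],
    "activities": ["curate datasets", "run literacy workshops", "review use cases"],
    "outputs":    ["documented data pipelines", "trained teams", "approved use cases"],
    "outcomes":   ["safe, ethical AI in production", "better-grounded decisions"],
}

# Walking the chain makes the assumed cause-and-effect explicit and reviewable
for stage, items in logic_model.items():
    print(f"{stage:>10}: " + "; ".join(items))
```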


Prioritise: When options compete, multicriteria analysis is useful: weight factors such as cost, time, regulation and safety, score each candidate initiative, and feed the top-scoring initiatives into the logic model to guide resources and metrics. The sketch below illustrates the arithmetic.
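
Here is a minimal sketch of the weighted scoring, assuming hypothetical weights and initiative scores; both would come from your own Stage 1 data in practice.

```python
# A minimal sketch of weighted multicriteria scoring. The weights and the
# 1-5 scores are hypothetical; scores are benefit-oriented (5 = best, so a
# cost score of 5 means "very affordable").
WEIGHTS = {"cost": 0.25, "time": 0.20, "regulation": 0.25, "safety": 0.30}

# Illustrative candidate initiatives; in practice scores come from gathered data
initiatives = {
    "customer-support copilot":  {"cost": 4, "time": 5, "regulation": 3, "safety": 3},
    "contract-review assistant": {"cost": 3, "time": 3, "regulation": 2, "safety": 4},
    "demand-forecasting model":  {"cost": 2, "time": 3, "regulation": 5, "safety": 5},
}

def weighted_score(scores: dict) -> float:
    """Collapse one initiative's criterion scores into a single weighted total."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Rank initiatives; the top scorers feed into the logic model
for name, scores in sorted(initiatives.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{weighted_score(scores):.2f}  {name}")
```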


Bridge the gap: A key element is crafting a performance-need statement that identifies the capability gaps stopping the organisation from reaching its target outcomes. This statement highlights why governance and AI literacy practices should be established, how progress will be measured, where AI excels, and where capabilities must be formalised to prioritise investment and scale successful use cases.


Illuminate (Stage 3)

Translating findings into an AI roadmap and co-creating a plan that sets priorities, identifies use cases, fills talent gaps, embeds ethics and sequences milestones is the defining feature of this stage. A one-page roadmap lays out a 12-month execution plan with milestones, leading indicators and executive ownership. Use-case canvases align business goals, data maturity and safeguards to reduce errors and incorporate human feedback into AI systems.


Navigate (Stage 4)

In this stage, activate the strategy through leadership briefings, departmental playbooks and pilot launches so that every function owns its priorities and delivers early wins. The ultimate outcome is a coherent transformation that integrates leadership intent, diagnostic insight, capability building and ethical governance.


 Final Takeaway

An AI needs assessment ensures initiatives deliver value by helping teams discover and prioritise high-impact use cases. By following GAIN (gathering data, assessing gaps, illuminating a roadmap and navigating rollout), organisations can turn AI from experiments into a strategic asset while upholding ethics and governance.

 
 
 

A while back, we wrote about why CEOs need an AI vision. In this follow-up blog, we connect the lessons of a chess game to a practical roadmap for business leaders to evolve their AI vision. The goal is to harness GenAI-driven innovation: aligning new AI initiatives with core business goals and upholding ethical guardrails.


1. Introduction

Business strategy in the age of Generative AI can be compared to a high-stakes chess game. Just as chess players balance bold moves with careful defense, today’s executives need to boldly embrace AI innovations and manage AI’s risks. Generative AI has rapidly leapt from niche research projects to a “board-level strategic priority,” with one-third of companies already using GenAI and many CEOs personally experimenting with AI tools. This poses new questions for organizations: How do we integrate GenAI’s powerful capabilities into our game plan while avoiding an AI “checkmate” from missteps?


2. Solid Opening

In chess, a once-rare opening or tactic can suddenly become mainstream, forcing all players to adjust. Similarly, generative AI has exploded from a niche project into a mainstream imperative for businesses. In the short period since modern GenAI tools became widely available, nearly one-third of companies globally report using GenAI in at least one business function. What’s more, GenAI is now on the agenda of corporate boards around the world: over one-quarter of companies say GenAI is discussed at the board level. This surge in executive attention signals that GenAI is no longer just an experiment in the R&D lab; it’s a core strategic concern for the entire organization, and it underscores an urgent need for guidance at the highest levels on how to play this new game responsibly. In chess, well-set opening moves give breathing space toward the endgame, when you need to defend or checkmate the opposition. Similarly, to stay in the match, business leaders must update their AI vision and strategies, approaching GenAI with the same diligence as a critical chess move: study it, plan for different outcomes, and integrate it into an overarching game strategy.


3. Quality of Pieces

Not all AI deployments are created equal. A single, well-positioned, high-impact GenAI use case—like automated customer support or dynamic pricing—can unlock disproportionate value. These are your "queens or rooks" on the board. Prioritizing quality over novelty ensures resources are focused where strategic gain is highest. The world of business is often likened to a game of chess, where strategic planning, anticipation, and timely moves determine success. In both arenas, leaders must assess the opponent’s moves, adapt to changing scenarios, and make decisions that lead to a win over time. The famous victory of IBM’s Deep Blue over world champion Garry Kasparov in 1997 symbolized AI’s potential to surpass human strategic thinking. It’s a reminder that leveraging AI effectively can be like having a grandmaster on your team. Today, generative AI is that game-changing “piece” on the board. Business leaders, like skilled chess players, should incorporate AI insights into their strategy to navigate complex markets. Much as a chess player uses patterns from previous games to inform each move, CEOs can use GenAI’s reasoning abilities and predictive power to inform decisions and discover moves that a human strategist might miss.


4. Total Number of Pieces

In blitz chess, the number of remaining pieces directly influences tactical flexibility. Similarly, in GenAI strategy, breadth of deployment matters. Organizations that embed GenAI across multiple functions—from HR and legal to customer service and finance—create a broader surface area for impact. Leaders should inventory all current GenAI applications to assess where coverage is thin or under-leveraged. Likewise, any GenAI initiative your organization pursues should align closely with your core business objectives and customer needs. In strategic terms, this is the principle of Relevance. Leaders should ask: does this move advance our mission or improve key performance metrics? For example, if an analyst uses GenAI to write reports, that initiative should tie directly to better client recommendations. By evaluating the relevance of GenAI projects up front, you ground your AI vision in genuine business value. This mirrors how in chess every move, even a daring sacrifice, serves a vision of winning in the shortest possible time. A well-aligned AI vision acts as a roadmap that connects technological capabilities to business outcomes. It keeps teams focused on AI projects that matter, addresses pain points, unlocks new opportunities, and prevents wasted effort on trends that don’t fit your strategy. In practice, just as a chess player evaluates the board after each move, you should re-evaluate your GenAI projects regularly and be ready to pivot if they’re not contributing to corporate goals. In this way, you ensure every “move” your organization makes with GenAI brings you closer to a competitive “checkmate” – whether that’s market leadership, operational excellence, or attracting better customers.

 

5. Positioning Your Pieces

In chess, a low-value pawn close to promotion can outweigh a distant rook. It’s about position: in GenAI, organizations with robust data governance, agile IT infrastructure, and embedded ethical oversight hold a positional advantage. Even with fewer use cases, these firms can pivot faster, absorb risk better, and build AI responsibly.

But positional advantage alone isn’t enough: even the most ingenious chess move must be in line with the rules of the game. In business, ethical and policy guardrails are the rules by which AI initiatives must abide. This is the basic tenet of Coherence. Implementing coherence means aligning AI development with your organization’s existing policies and values, and with external requirements like data privacy laws and industry or national regulations. For example, if your company has strict customer data guidelines, any AI-driven personalization (say, using customer purchase histories to suggest products) must respect those privacy limits and security standards. By weaving these policies into AI projects at the start, you prevent costly conflicts or ethical lapses down the line. In practice, upholding ethical guardrails might include 1) establishing an AI governance committee to review new GenAI use cases for compliance with laws and ethical standards, 2) developing clear AI ethics guidelines, and 3) training employees on responsible AI use and how to spot issues like biased outputs or privacy risks early.


6. Managing AI Risks: Competing on Tempo And Avoiding an Unexpected Checkmate

In blitz chess, you might have the best position and pieces—but if you run out of time, you still lose. Likewise, the GenAI landscape is moving fast. Industry norms, customer expectations, and regulatory frameworks are evolving quickly. This calls for quick yet thoughtful executive action.

Every chess player knows the danger of a clever trap – one wrong move can lead to an unexpected checkmate. In the same way, deploying GenAI without a structured process can expose your organization to significant risks. One major risk is the amplification of misinformation or bias. For example, a GenAI model might confidently produce false information or reflect harmful biases present in training data, which could mislead decision-makers or offend customers. A related risk is hallucination, where LLMs make confident but incorrect assertions. Additionally, the automation of tasks raises the spectre of loss of control: an autonomous AI assistant can save time, but if left unchecked it might make decisions misaligned with human preferences or ethical norms. Setting clear policies for AI autonomy, e.g. defining what an AI agent is allowed to do independently and when human oversight is required, lets you keep the benefits of automation without losing control of the board.

Just as a chess player defends against an opponent’s threats while executing their own plan, a savvy CEO will address these AI risks as part of their strategy. Practical steps include: instituting rigorous AI testing and validation to catch errors or bias before systems go live, setting up oversight mechanisms like “human checkpoints” for high-stakes AI decisions, and developing incident response plans for AI failures or breaches.
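
As one concrete illustration of "rigorous testing and validation", here is a minimal sketch of a pre-launch gate that blocks release when overall accuracy is too low or error rates diverge across groups. The thresholds and evaluation records are hypothetical, not prescribed values.

```python
# A minimal sketch of a pre-launch validation gate; the thresholds and the
# evaluation records below are hypothetical illustrations.
ACCURACY_FLOOR = 0.90   # block release below this overall accuracy
MAX_GROUP_GAP = 0.05    # block release if accuracy diverges across groups

def validate(results: list) -> bool:
    """results: [{'group': str, 'correct': bool}, ...] from an offline evaluation."""
    overall = sum(r["correct"] for r in results) / len(results)
    # Compute per-group accuracy to surface biased error patterns
    by_group = {}
    for r in results:
        by_group.setdefault(r["group"], []).append(r["correct"])
    rates = [sum(v) / len(v) for v in by_group.values()]
    gap = max(rates) - min(rates)
    print(f"accuracy={overall:.3f}, group gap={gap:.3f}")
    return overall >= ACCURACY_FLOOR and gap <= MAX_GROUP_GAP

# Illustrative evaluation records; a real run would use held-out test data
sample = (
      [{"group": "A", "correct": True}] * 95 + [{"group": "A", "correct": False}] * 5
    + [{"group": "B", "correct": True}] * 90 + [{"group": "B", "correct": False}] * 10
)
print("release approved:", validate(sample))
```

Failing such a gate routes the system back to remediation, or to a human checkpoint, rather than into production.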


7. Planning Moves Ahead: A Proactive AI Roadmap

As AI technology and the competitive landscape change, you need to be ready to adjust your strategy – in chess terms, update your position. Maybe a new regulation will restrict certain AI data usage, requiring a pivot in your approach, or a breakthrough in AI reasoning will open opportunities for deeper automation in your industry. Organizations that monitor these trends and experiment in controlled pilot projects will be better positioned to respond than those that play catch-up.

 

Conclusion: Making the Right Moves

As the GenAI era unfolds, business leaders would do well to take a page from the chessboard. Just as every grandmaster plays with foresight, purpose, and discipline, so too must organizations evolve their AI vision with clarity and control.

  1. Define the AI vision – Clearly articulate how GenAI fits into your business strategy and visualise what winning with AI looks like for your organization.

  2. Align every move with strategy – Prioritize GenAI projects that align with core business goals and customer needs. Avoid shiny AI experiments that don’t drive real value.

  3. Establish the rules of play – Implement governance, ethics guidelines, and policies to guide AI development and usage. Set “no-go zones” for unacceptable AI uses and ensure compliance with laws from day one.

  4. Leverage power pieces wisely – Embrace GenAI’s new capabilities to power autonomous agents and roll them out in controlled stages, with monitoring, to learn their dynamics and limits.

  5. Manage risk – Proactively identify and mitigate AI risks such as misinformation, bias, IP leaks, and privacy issues. Conduct rigorous testing and maintain human oversight, especially for high-stakes AI decisions.

  6. Plan scenarios ahead – Develop a forward-looking AI roadmap that anticipates future opportunities and regulations, and invest in skills and infrastructure that will support your AI growth over time.

 
 
 