Key Risks of Implementing AI in the Enterprise

Key Risks of Implementing AI in the Enterprise – Transcript

In this video series, I’ve been arguing that we should manage AI like an asset. By approaching it through an asset management lens, we’re not only controlling cost and mitigating risk, but also accelerating innovation and achieving better outcomes faster—ultimately helping our organisations remain competitive.

So far, we’ve discussed the signs of AI sprawl, the types of AI commonly in use, and the barriers that hinder its adoption. Now, I’d like to delve deeper into the specific risks that AI poses in the enterprise. Let’s take the three categories of AI technology we highlighted earlier—predictive analytics and machine learning, natural language processing, and generative AI—and translate them into their implications for the IT stack. Whether solutions run on-premises, in the cloud, or in a hybrid model, they essentially boil down to three core infrastructure components:

Storage:

To train and operate an AI algorithm, you need substantial amounts of data. For something like facial recognition of pests, as in the Rentokil example, you need a vast repository of images and videos—essentially, all the “raw material” to teach the system what a rat looks like and how to identify it. This often means a significant storage requirement, scaling well beyond what many enterprises are used to in conventional IT.

Compute:

Processing and making sense of all this data—especially training complex AI models—often requires high-performance compute resources. This is why companies like NVIDIA, renowned for their graphics processing units (GPUs), have become so influential. These chips demand huge amounts of energy and cooling, and their cost profile dwarfs that of standard CPU-based servers. Enterprises must be prepared for both the operational complexity and the financial implications of these next-generation compute demands.

Software (Algorithms and Code):

Today’s AI landscape leans heavily on open-source tools and frameworks. While this reduces some barriers to entry, it introduces its own set of risks. Over time, we may see a shift towards proprietary toolsets offered by specialised vendors. As that happens, licensing models, vendor lock-in, and reliance on third parties all become concerns—familiar themes for IT Asset Management, but now applied to the powerful and rapidly evolving world of AI.

Key Risks of Implementing AI in the Enterprise

Bringing these three components together—storage, compute, and software—creates a variety of risks. Security and privacy remain paramount, and with AI’s power, any breach or misuse could be particularly damaging. Costs, meanwhile, can spiral out of control as enterprises experiment without a clear view of ROI. On top of that, impending regulation means we must take governance and compliance seriously.

All this points to the need for a cross-functional “AI asset management team.” Such a team might include InfoSec specialists to manage security concerns, FinOps or ITAM professionals to keep a handle on costs, and board-level input to navigate regulatory complexities and ethical considerations. By assembling the right stakeholders, we can manage AI strategically, not just reactively—treating it as a genuine organisational asset.

In the next video, I’ll examine the regulatory angle in more detail. After all, as regulatory frameworks emerge and evolve, they’ll shape how we acquire, deploy, and manage these powerful technologies.

View on YouTube here: https://youtu.be/LCAF1329vQg
