AI Adoption: Learning from the Cloud's Early Days

chuvakin

In the early days of cloud computing around the mid-2010s, many organizations began informally experimenting with the cloud. Cloud was new and exciting, cloud was free (at times and in small doses), cloud was fun … thought the developers!

So those developers, fueled by pizza and late-night coding sessions, spun up cloud instances like they were going out of style. “It’s just a sandbox,” they’d say, never imagining those playful experiments could become the backbone of the company’s cloud infrastructure.

They learned, they built, they rapidly grew their knowledge and skills, and they impressed their bosses with new capability demonstrations. They did not, however, secure the cloud. Why would you need to secure a sandbox environment? It's not like there was any sensitive data there, right? Some also believed that because the cloud is more secure, their experimental environments are also secure by default (spoiler: this is not entirely true).

Some time passed and their bosses came to these cloud pioneers and said: “Hey, you know how to cloud! Now go and move enterprise services to the cloud!” Can you guess what happened next?

As you can imagine, many of these “just for learning” sandbox environments suddenly became the blueprints for running enterprise applications in the cloud.

Today, we’re seeing this pattern repeat with AI deployments - cloud migration mishaps and select “worst practices” are re-emerging as gen AI mistakes. Just like those early cloud pioneers who neglected to build a solid foundation, many organizations are now rushing into AI without a secure blueprint.

As you read this blog, you may recognize similar seismic technological transformations you have observed before, such as the shift from mainframes to client-server architectures, and rightfully point out that these challenges are not unique to cloud adoption or AI implementation, but are par for the course for transformation initiatives. Well, perhaps, but let’s unpack that a bit…

Parallels & Mishaps

Cloud computing in the 2010s and AI in the 2020s are both game-changers, and it's fascinating to observe the parallels in their adoption journeys. Both have exploded onto the scene, moving quickly from experimental projects to real-world deployments with serious business impact. Both, however, necessitate a serious conversation about upskilling and reskilling our workforce, which inevitably ripples through the entire company culture.

Just as organizations made the mistake of underestimating the complexity of cloud migration, they are now making the mistake of underestimating the complexity of AI implementations and adoption.

However, there's a notable difference in pace. AI adoption - especially via shadow AI - is happening much faster than cloud adoption did. While cloud mostly reshaped our infrastructure, AI is diving deep and transforming core business processes themselves - the adoption curve for generative AI is incredibly steep, outpacing even past technological shifts like the PC revolution and the rise of the Internet.

Still, we're already noticing some familiar challenges with AI adoption that mirror the issues we faced during cloud migration and other digital transformations. It appears that many organizations are falling into the same traps as before. Below are some of the trends we’re seeing emerge: 

Strategy and objectives … or lack thereof?

Jumping into AI or cloud initiatives without a well-defined business purpose is a recipe for disaster. It’s crucial to start by clearly articulating the business problem being solved, the specific goals to be achieved, and success metrics that cover various stages of AI project maturity. Without determining and strategically aligning the “why” of implementation, projects can quickly veer off course and fail to deliver meaningful results.

Cost is also a significant factor. Failing to account for long-term costs and then blaming the technology for being “too expensive” when the issue is actually poor planning is a common pitfall. Many projects require an outsized upfront investment of both capital and labor before meaningful ROI is captured.

1. The solution to everything

Many organizations seem to fall into the trap of approaching AI as a magic bullet, expecting it to be the solution to all problems. This often leads to vague objective-setting and unrealistic expectations. For instance, aiming to improve customer retention, increase customer satisfaction, or save money are all worthwhile goals, but without defining specific AI applications, use cases, and measurable outcomes, a vague objective can result in disjointed efforts and wasted resources.

This happened with cloud adoption, and now the same “silver bullet thinking” has flooded the domain of AI. The solution is the same: focus sharply on a use case, pick a problem and solve it, then learn from it.

2. Lack of clarity on current state

A comprehensive assessment of an organization’s current state is essential. Organizations that bypass this critical step often encounter significant obstacles. In the context of AI, this involves evaluating data governance maturity and available skill sets. For cloud migration, it necessitated an analysis of existing infrastructure and application suitability. A failure to conduct a thorough assessment can lead to suboptimal technology choices, inappropriate strategies, inefficient use of resources that impacts budgets and delivery timelines, and frustration caused by project roadblocks.

In the cloud days, we saw the lift-and-shift approach take hold; in the AI era, we’re seeing the similar “let’s just add a chatbot” approach. Both fail to address the foundational aspects of data governance and security.

3. One size fits all  

Different AI models and algorithms have varying strengths and weaknesses. It is crucial to select the right tool for the specific task and understand that general models may need to be tuned for your specific applications. Applying a generic AI solution to diverse problems without considering the nuances of each situation can lead to suboptimal results and missed opportunities. Trying to force-fit a solution that worked elsewhere to save on additional expense is a common mistake; the short-term savings frequently come at the expense of larger, longer-term savings. Customization and careful consideration of the unique needs of an organization’s planned use cases are essential.

Like in the cloud days, the desire to “adopt cloud” led many astray (Adopt how? For what? Is cloud even better for this?) and now the same is happening with “airline magazine adoption” where the CIO reads about AI in the airline magazine and demands that the company adopts it. In reality, looking through the lens of how a customer might participate in the AI ecosystem, we see four basic scenarios that are informed by the organization’s needs and require different risk management strategies.

4. Security, security, security

Operational and security considerations warrant careful attention. The transition from experimental to production environments requires robust security protocols. A lack of internal expertise to effectively manage and secure these technologies can expose the organization to significant risks. Furthermore, the reliance on readily available experimental tools as the foundation for production systems, without adequate consideration for scalability, interoperability and security, can create substantial vulnerabilities and long-term technical debt.

As we said, the first cloud deployments were experimental and thus did not demand much security. Sadly, they were then used as blueprints for production. The same pattern has led to shadow AI-centric adoption models that increase security risks.

Change Management

The human element in AI implementation is often underestimated. It's not just about technology - it's fundamentally about people. Overlooking this can lead to resistance, slow adoption, and ultimately, unsuccessful AI initiatives. Employees may have concerns about job security, feel overwhelmed by new tools, or struggle with shifting roles. If you neglect change management, you're not just slowing down AI adoption; you're breeding resentment and creating shadow AI deployments that will haunt your security team for years. 

1. Don’t forget the humans

Both AI and cloud projects often require significant changes to existing processes and workflows. Experience has shown that neglecting the human side of this change can create roadblocks. Effective change management, including open communication, comprehensive training, and addressing employee concerns, is essential. Communication, of course, is a two-way street. While top-down communication is crucial for setting strategy and agreeing on the right tools, the value of bottom-up input shouldn't be overlooked. Teams often have valuable insights into relevant use cases, and these should inform the overall governance process. This helps prevent siloed efforts that create inconsistencies and potential security vulnerabilities.

2. Pieces of a whole

Integration challenges are another area to consider. One parallel between cloud and AI adoption is the temptation to prioritize speed over security. Early cloud adoption often saw a focus on ease of use and cost savings, sometimes at the expense of security. This resulted in unsecured experimental environments that later became production systems, creating vulnerabilities and potential losses. This risk is equally relevant to AI and should be addressed by an organization’s top level AI governance board.

3. Continuous learning

Building the necessary expertise is also key. Thinking a few introductory lessons are enough to equip everyone to deploy and manage AI systems effectively is a risky assumption. Organizations should invest in building a skilled AI team to guide the development, deployment, and management of these solutions. Fostering a culture of AI literacy through training and knowledge sharing is also important.

With both cloud (then, but also now) and AI (now), boosting team competence in the domain is a hard must. Securing the cloud is hard if you never learned the “ways of the cloud.” The same is even more true for AI.

4. Iteration

Finally, adequate testing and validation are essential. Technology adoption shouldn’t be viewed as a one-time event; it’s an ongoing process that requires continuous investment, iteration, and adaptation. Launching AI models can’t be a “set it and forget it” exercise; it requires ongoing monitoring and evaluation to ensure they are functioning as intended.

We are still, to this day, advising clients that cloud transformation is a “journey” and not a destination, one that must keep evolving. AI is no different.

Conclusions

To summarize, many organizations adopted cloud without a clear articulation of their business goals, resulting in wasted resources and missed opportunities. The same can happen for AI today.

Organizations can learn from past cloud mistakes to improve AI adoption by focusing on setting a well-defined and holistic strategy, realistic expectations, and addressing skills gaps, security concerns, and integration issues.

So what can you do to avoid these pitfalls?

  • Define a clear AI strategy and secure executive buy-in: Focus on outlining specific business goals AI will address and presenting a compelling business case to leadership, ensuring alignment and resource allocation. Document the strategy and key performance indicators (KPIs) to track progress.
  • Have a clear view of the organization's critical AI assets and data governance maturity: Conduct an inventory of existing data, models, and AI tools, and assess data quality, security, and compliance practices. Implement data governance policies to ensure data integrity and responsible AI use.
  • Prioritize security from the start: Integrate security considerations into every stage of AI development and deployment, including data protection, model security, and access controls. Implement security audits and vulnerability assessments regularly.
  • Embrace continuous monitoring, evaluation, and iteration: Establish monitoring systems to track AI model performance and identify potential biases or errors. Regularly review and update models based on feedback and new data to ensure effectiveness and accuracy.
  • Address AI skill shortages and foster a culture of AI literacy: Provide training programs and workshops to upskill employees in AI concepts and tools. Encourage knowledge sharing and collaboration across teams to build internal expertise.
  • Gain insights from resistance and friction: Actively solicit feedback from employees and users regarding AI implementations, and analyze resistance to identify underlying concerns or usability issues. Use this feedback to refine AI strategies and improve user adoption.
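The “continuous monitoring, evaluation, and iteration” point above can be made concrete with a small sketch. The class name, window size, and threshold below are illustrative assumptions (not a reference to any specific product or API); the idea is simply to track a rolling quality metric for a deployed model and flag when it degrades enough to warrant human review:

```python
from collections import deque


class ModelMonitor:
    """Hypothetical sketch: track rolling accuracy for a deployed model
    and flag when it drops below an acceptable threshold."""

    def __init__(self, window_size=100, min_accuracy=0.9):
        # Keep only the most recent outcomes in a fixed-size window.
        self.window = deque(maxlen=window_size)
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        """Record one prediction outcome (True if correct)."""
        self.window.append(prediction == actual)

    @property
    def accuracy(self):
        """Rolling accuracy over the window, or None if no data yet."""
        if not self.window:
            return None
        return sum(self.window) / len(self.window)

    def needs_review(self):
        """True when there is data and accuracy has degraded."""
        acc = self.accuracy
        return acc is not None and acc < self.min_accuracy


# Example: 7 correct and 3 incorrect predictions in a window of 10.
monitor = ModelMonitor(window_size=10, min_accuracy=0.8)
for pred, actual in [("a", "a")] * 7 + [("a", "b")] * 3:
    monitor.record(pred, actual)
# monitor.accuracy is now 0.7, below the 0.8 threshold,
# so monitor.needs_review() returns True.
```

In practice this logic would feed dashboards and alerting rather than a boolean, and the metric might be a bias or drift measure instead of raw accuracy, but the principle is the same: measure continuously, compare against agreed-upon KPIs, and iterate when the numbers slip.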