Generative AI and the transformation it can bring is the talk of boardrooms the world over. There is an almost endless list of use cases and claims that the technology can be used to drive efficiencies, reduce costs, and improve customer experience. At the same time, there is also much talk of accuracy and ethics.

Easily accessible applications like ChatGPT and Google’s Bard have thrust generative AI into the limelight. However, wider use of AI in the enterprise will require much greater scrutiny to ensure risks are mitigated and value is delivered.

As with the advent of any new technology, business leaders are asking a lot of questions. What does this mean for my business? What are the opportunities and risks? Can we use this technology to drive positive business outcomes?

There is a lot of hype, but the consensus seems to be that AI can deliver value if carefully considered and applied. But can your cloud platform cope with generative AI? Here we look at the impact of generative AI on an organisation’s technology infrastructure, and the key points to consider before you dive, or even dip a toe, into the muddy waters of AI.

Why enterprises care about AI

Traditional AI (also known as rule-based AI) consists of rules programmed to perform specific tasks. Generative AI involves training a machine learning model or algorithm to create new content based on patterns. These models learn patterns and structures from a given dataset and then generate new examples that follow similar patterns.

The types of content that can be created by generative AI span text, video and images. It is particularly useful for use cases such as:

  1. Improving the customer experience: By analysing customer preferences and behaviours, AI models can tailor interactions to specific individuals, for example, by generating personalised recommendations or promotions.
  2. Product development and prototyping: Generative AI can assist in product development by generating virtual prototypes and designs.
  3. Automation and optimisation: Businesses can use AI models to automate repetitive tasks, streamline workflows, and improve operational efficiency.
  4. Risk assessment and prediction: Generative AI models can analyse historical data and monitor live data to consider the likelihood of specific events occurring as well as predicting different outcomes. This information can be used to proactively manage risks and develop effective risk mitigation strategies.
  5. Customer insights and market research: Generative AI can analyse large volumes of customer data, social media content, and market intelligence to extract valuable insights that can help inform business strategy.

Overall, generative AI allows large businesses to drive innovation, improve operational efficiency, enhance customer experiences, and gain a competitive edge in a rapidly evolving marketplace. But successfully using AI to deliver any of those promised outcomes is neither simple nor easy.

Five key challenges for enterprises to consider

For service providers looking to build an enterprise-grade generative AI solution, it isn’t quite as straightforward as opening a browser and asking ChatGPT to do it for you. Whilst the use cases of generative AI are numerous and varied, they all require huge amounts of data and processing capacity to deliver scalable value. Enterprises choosing to train their own AI models will face additional complications that require extensive resources and skills to tackle.

Before embarking on a generative AI build or deployment project, Cloudscaler recommends first considering the following five challenges:

  1. Data privacy and security: AI relies on vast amounts of data, and enterprises must ensure the privacy and security of sensitive information. Inadequate data protection measures can lead to data breaches, unauthorised access, and misuse of confidential data. In fact, many predict that the explosion of AI tools will almost inevitably lead to major insider data breach incidents. Robust security protocols, encryption, access controls, and compliance with privacy regulations are essential to mitigate these risks.
  2. Data quality and quantity: Generative AI models require large and diverse datasets for training. Enterprises may face challenges in acquiring high-quality data that is representative of the real-world scenarios they wish to replicate. Data scarcity, bias, and inaccuracies can affect the performance and reliability of generative models.
  3. Legal and regulatory compliance: The use of AI in the enterprise must comply with relevant legal and regulatory frameworks. Depending on the industry and the specific application of AI, there may be specific regulations related to privacy, data protection, fairness, transparency, and intellectual property rights that need to be considered. Failure to comply with these regulations can lead to legal consequences and reputational damage, not to mention penalties and fines.
  4. Computational resources and infrastructure: Training and deploying generative AI models can be computationally intensive and require significant computational resources. Enterprises may need to invest in powerful hardware, cloud infrastructure (compute and storage capacity) and networking to support the training and inference processes of generative AI models. These resources will also need to be scalable so they can meet fluctuations in demand.
  5. Applying long-term thinking: As you consider your first foray into the world of generative AI, it’s important to plan for the future. As with any significant investment in technology, be clear on your long-term objectives, not just short-term gains. Apply a solid strategy, principles and operating model to working with AI. Take the time to build a foundation that can support and secure future success, whether that means wider use of AI or future deployments of new AI applications.
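Challenge 2 above (data quality and quantity) is often the first place an AI project stumbles, and basic checks can be automated before any training begins. The sketch below is a minimal, toolchain-agnostic illustration, not a prescribed method; the function name and the sample fields are invented for this example.

```python
from collections import Counter

def audit_dataset(records, required_fields, label_field):
    """Flag common quality problems in a training set before training starts.

    Returns counts of rows with missing required fields, exact duplicate
    rows, and the label distribution (useful for spotting class imbalance).
    """
    missing = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    # Count each fully identical row; anything seen more than once is a duplicate.
    seen = Counter(tuple(sorted(r.items())) for r in records)
    duplicates = sum(n - 1 for n in seen.values() if n > 1)
    labels = Counter(r.get(label_field) for r in records)
    return {"missing": missing, "duplicates": duplicates, "labels": dict(labels)}

# Hypothetical sample: one duplicate row and one row with an empty text field.
sample = [
    {"text": "great service", "label": "positive"},
    {"text": "great service", "label": "positive"},
    {"text": "", "label": "negative"},
]
report = audit_dataset(sample, required_fields=["text"], label_field="label")
```

In practice a real pipeline would add domain-specific checks (bias audits, outlier detection, schema validation), but even a simple gate like this prevents obviously broken data from reaching an expensive training run.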

AI and cloud: a match made in heaven

It’s largely accepted that using public cloud to experiment and train models is best practice when building AI solutions. The wealth of cloud-native services such as Amazon SageMaker and Amazon Bedrock, plus easily accessible compute and storage capacity, makes public cloud an obvious choice.

However, it’s important to recognise that for an enterprise applying AI in the cloud, it isn’t as simple as turning on a new service. Cloud and networking will, of course, solve the need for scalable infrastructure. But it’s the platform you build on top of that infrastructure that will ensure you comprehensively overcome the challenges and mitigate the risks of deploying generative AI.

Building a cloud foundation fit for generative AI

A cloud platform (often referred to as a cloud landing zone) is a foundational environment or framework that provides a standardised and secure setup for complex cloud operations. It serves as a starting point for organisations to build and manage their cloud infrastructure in a consistent and controlled manner. It makes enterprise-level or multi-account cloud environments easier to govern and easier to adopt.

A cloud platform defines and enforces best practices, security policies, and governance controls for cloud deployments. It helps organisations establish a strong foundation for their cloud presence by addressing common architectural, operational, and security considerations.

When building or uplifting an existing cloud platform to house an AI solution, Cloudscaler recommends an enterprise considers the following four areas:

  • Security: Ensure comprehensive security across your entire cloud platform by implementing controls and guardrails holistically. Restrict developers to accessing cloud resources only through the central platform, so that every service inherits the security controls set by the central team by default.
  • Regulatory compliance: Looking beyond security, guardrails can also be enforced to ensure regulatory compliance. Regulations such as the GDPR, or industry-specific frameworks such as HIPAA in healthcare and PCI DSS for card payment handling, may affect how your AI services are architected.
  • Data controls: Establish data controls to safeguard data quality. These should cover data governance, validation and verification, effective data integration processes, and monitoring, as well as data retention and archiving.
  • Scalability and developer time to value: A cloud platform can drastically reduce the time to value of any cloud use case, not just AI. Focus on building a central platform for the entire enterprise to avoid teams duplicating effort by building multiple platforms with different standards. Consider building pre-assured and configured tooling (such as Amazon S3 storage buckets) for developers to consume off the shelf. Used together, these two approaches will significantly accelerate developer time to value: teams can sign in and start consuming cloud services without having to configure tools or platforms themselves.
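The "pre-assured tooling" idea in the last bullet can be made concrete: a central platform team codifies secure defaults once, and every bucket a developer requests inherits them. The sketch below only assembles the desired state as plain data; in practice these settings would be applied through an infrastructure-as-code tool or the cloud provider’s SDK. The function and bucket name are hypothetical, though the settings shown mirror Amazon S3’s real encryption, public-access-block and versioning options.

```python
def pre_assured_bucket_config(bucket_name: str) -> dict:
    """Return a secure-by-default configuration for a team storage bucket.

    Encodes the platform team's guardrails centrally so developers never
    configure (or misconfigure) these settings themselves.
    """
    return {
        "bucket": bucket_name,
        # Encrypt objects at rest with a KMS-managed key by default.
        "encryption": {
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
            ]
        },
        # Block every form of public access.
        "public_access_block": {
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
        # Keep object history for recovery and audit.
        "versioning": {"Status": "Enabled"},
    }

cfg = pre_assured_bucket_config("ml-training-data")
```

A developer requesting storage for an AI workload then consumes this template off the shelf, and any organisation-wide policy change (say, a new encryption standard) is made in one place rather than in every team’s codebase.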

Cloud platforms will be critical to success

Scalable cloud platforms are vital for the successful implementation of generative AI projects in enterprises. While generative AI offers transformative potential, its deployment in an enterprise requires careful consideration of opportunities, risks, and challenges.

Cloud platforms will play a crucial role in overcoming these challenges and establishing a strong foundation for generative AI deployment. A well-designed cloud platform provides a standardised and secure environment, enforcing best practices, security policies, and governance controls for cloud operations. An enterprise with a sound cloud infrastructure and platform in place is an enterprise well-armed to achieve great things with generative AI.

Cloudscaler has extensive experience of building landing zones. Get in touch now to understand how we can help ensure your cloud platform is ready for AI.