Embracing the Benefits of AI in the Workplace Securely

Since the release of ChatGPT, AI has seen explosive growth, prompting many organisations, including our own, to explore how employees can use it.

While AI offers numerous benefits, it also presents specific risks that must be managed carefully. In this blog, we examine the key risks associated with AI, how we've implemented AI in our organisation, and how you can embrace AI's benefits safely.

Key risk areas identified 

Data sensitivity 

When employees use AI tools, they may input sensitive data. As with any third-party system, that data enters a "black box", raising the question of what controls surround it. What if these systems are hacked? Sensitive information could be exposed on the internet. It is crucial to ensure that data is protected and to understand the security measures your AI providers have in place.
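As a simple illustration of protecting data at the point of input, the sketch below strips a few common sensitive patterns from a prompt before it leaves the organisation. The patterns and the pre-send filter are our own illustrative assumptions; a real deployment would use a dedicated data loss prevention (DLP) tool rather than hand-rolled regular expressions.

```python
import re

# Illustrative patterns only; a real deployment would rely on a
# dedicated DLP tool, not hand-rolled regexes like these.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk-ni-number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "card-number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern before the
    prompt leaves the organisation's boundary."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarise this email from jane.doe@example.com (NI number QQ123456C)."
print(redact(prompt))
# Summarise this email from [REDACTED email] (NI number [REDACTED uk-ni-number]).
```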

Data ownership

Data ownership is another significant concern. When data is input into an AI system, what happens to it? For example, if someone inputs a contract or financial data to get advice, could that data end up being used elsewhere without permission? The possibility of sensitive data appearing in someone else’s output is a serious risk that organisations must consider.

AI hallucinations

AI models can sometimes produce inaccurate or entirely fabricated output, a phenomenon known as "AI hallucination". For instance, there was a case where someone used an AI to research legal precedents for an appeal to HMRC. The AI provided cases that seemed supportive but were fabricated, leading to a failed appeal. This underscores the risk of relying on AI-generated information without verification.

Copyright infringement

Using AI-generated content can lead to copyright infringement issues. If AI output includes material that someone else originally created, there could be legal repercussions. Ensuring that AI tools respect intellectual property rights and that outputs are scrutinised for originality is essential.

Mitigating AI risks

Developing a strategy and policy

  1. Define usage: Establish how your organisation wants to use AI. Determine your risk appetite and identify approved tools.
  2. Due diligence: Implement a thorough process for onboarding new tools, including due diligence on potential risks.
  3. Contractual agreements: Clearly outline terms with suppliers regarding data usage, storage, and liability for copyright infringement. Contracts should specify what happens to the data and how long it is stored.

Implementing controls 

  1. Check outputs: Always verify AI-generated content before acting on it. For instance, if AI generates code, it should be tested and reviewed for accuracy and security before it is accepted (see the first sketch below).
  2. Technical controls: Use technical measures to enforce your policy. If certain types of data or tools are not allowed, implement controls that actually prevent their use (see the second sketch below).
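To make the first point concrete, here is a minimal sketch of treating AI-suggested code as untrusted until it passes a test. The normalise_email helper and its checks are hypothetical examples of ours; in practice the tests would run in your normal code review and CI process.

```python
# Suppose an AI assistant suggested this helper for normalising
# employee email addresses. Treat it as untrusted until it is tested.
def normalise_email(address: str) -> str:
    return address.strip().lower()

# A small check run before the suggestion is accepted; in practice this
# would be a proper test in the project's CI pipeline (pytest or similar).
def test_normalise_email() -> None:
    assert normalise_email("  Jane.Doe@Example.COM ") == "jane.doe@example.com"
    assert normalise_email("plain@example.com") == "plain@example.com"

test_normalise_email()
print("AI-suggested code passed its checks")
```

The point is not these particular tests but the habit: AI output earns trust the same way any other untrusted contribution does.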
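As a minimal sketch of the second point, the snippet below checks a request against a hypothetical approved-tools allowlist before it can reach an AI service. The host names are invented for illustration; in a real environment this control would usually live in a web proxy or firewall rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist drawn from the organisation's AI policy.
APPROVED_AI_HOSTS = {"approved-ai.example.com"}

def enforce_policy(url: str) -> None:
    """Refuse any request bound for an AI service that is not approved."""
    host = urlparse(url).hostname
    if host not in APPROVED_AI_HOSTS:
        raise PermissionError(f"{host} is not an approved AI tool")

enforce_policy("https://approved-ai.example.com/v1/chat")  # allowed
try:
    enforce_policy("https://unapproved-ai.example.net/v1/chat")  # blocked
except PermissionError as exc:
    print(f"Blocked: {exc}")
```

Keeping the allowlist in one enforced place means the written policy and the technical control cannot quietly drift apart.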

Leveraging available guidance

There is excellent guidance available to help organisations use AI responsibly. The National Cyber Security Centre (NCSC) offers comprehensive resources on AI usage and security, including its Guidelines for Secure AI System Development, which cover secure design, development, deployment, and operation and maintenance.

MHR's approach to AI

We recognise that AI is still a relatively new technology, and that many organisations, ours included, are learning how to incorporate it safely and effectively. By staying informed and proactive, we aim to harness AI's benefits while mitigating its risks.

Conclusion

AI has the potential to revolutionise the workplace, but it must be embraced with caution.

By understanding and managing the risks, developing robust policies, and leveraging available guidance, organisations can safely incorporate AI into their operations.

Stay tuned for more insights from our Cyber Security team as we continue to explore the evolving landscape of AI and cyber threats.
