Generative AI (GenAI) can make your work more productive across many different use cases. However, to truly enjoy that boost in productivity, you have to ensure you're using AI ethically.

As Simon Haighton-Williams, CEO of The Adaptavist Group, told the BBC, "It's a good leg up in the same way that having a good encyclopaedia or a calculator lets you do things that you couldn't have done without those tools." So let's go through six key ethical challenges associated with GenAI and how to ensure you and your team are using it responsibly:

1. Transparency and disclosure

One of the fundamental pillars of the ethical use of AI is transparency. Organisations need to be open about their use of AI systems, particularly when these systems are involved in decision-making processes that affect individuals or society at large.

When individual employees hide their use of AI from managers or the company's IT and security teams, it also has ethical implications. This practice is known as 'shadow AI', and it poses numerous risks, including:

  1. Unintended exposure of sensitive information
  2. Lack of awareness and risk mitigation strategies
  3. Potential breaches of privacy policies

For instance, employees might unknowingly upload proprietary information to public AI models, compromising data security and potentially violating privacy regulations. 

Cybernews revealed that 4% of employees paste sensitive data into GenAI tools every week, increasing the risk of sensitive data exfiltration, which brings us to our next point…

2. Protecting privacy and data security

As GenAI systems process vast amounts of data, safeguarding privacy and ensuring data security has become paramount. Organisations must implement robust measures to protect sensitive information and comply with relevant laws and regulations, such as the EU's General Data Protection Regulation (GDPR) or the US's Health Insurance Portability and Accountability Act (HIPAA).

As Karoliina Torttila, director of AI at Trimble, pointed out to the BBC, "This brings us to a skill we're all forced to develop: We have to be able to understand what is sensitive data. There are places where you would not put your medical information, and you have to be able to make those types of judgment calls." 

3. Ensuring accuracy and human oversight

While GenAI systems are incredibly powerful, they are not infallible. Two significant challenges in this realm are bias and hallucinations:

  • Bias: AI systems can inadvertently perpetuate or amplify existing societal biases present in their training data, leading to unfair or prejudiced outputs, such as resume-scanning AI models favouring male candidates.
  • Hallucinations: GenAI models can sometimes generate false or nonsensical information and present it as fact. This can result in the spread of misinformation and wrongful decision-making.

To mitigate these issues, it's crucial to implement human oversight and verification processes, such as having a person review and fact-check AI-generated output before it is published or used to make decisions.

4. Intellectual property rights

The use of GenAI raises complex questions about intellectual property rights. Existing copyright laws were not designed with AI in mind, leading to uncertainty about how they apply to AI-generated content.

New rules are being developed to clarify this area of copyright law, but in the meantime, organisations must address copyright concerns, ensure proper attribution, and implement clear policies regarding the use of AI-generated content. 

5. Accountability and management

We mentioned shadow AI earlier in terms of users' responsibility for transparency and disclosure, but it is equally important that companies take responsibility for setting guidelines employees can adhere to. Many employees who bring shadow AI into the organisation may not fully understand the risks of doing so, let alone how to mitigate them.

The way forward is through education, policies, and regular audits. With Narus, you can maintain an audit log to track user activity and get a full audit trail whenever a user violates a safeguard policy. Organisations using Narus can also configure security alerts to flag prompts that contain personally identifiable information (PII), sensitive topics, or banned words.
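
This isn't a description of Narus's internals, but as a rough illustration of how a prompt-level safeguard like this can work, here is a minimal Python sketch that flags prompts matching simple PII patterns or a banned-word list. The pattern names and blocklist are hypothetical placeholders:

```python
import re

# Illustrative patterns only; a production safeguard would use more robust
# detection (e.g. named-entity recognition and checksum validation).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

BANNED_WORDS = {"project-aurora", "confidential"}  # hypothetical blocklist

def flag_prompt(prompt: str) -> list[str]:
    """Return the reasons, if any, that this prompt should raise an alert."""
    reasons = [f"possible {name}" for name, pattern in PII_PATTERNS.items()
               if pattern.search(prompt)]
    reasons += [f"banned word: {word}" for word in BANNED_WORDS
                if word in prompt.lower()]
    return reasons

if __name__ == "__main__":
    print(flag_prompt("Summarise this: jane@example.com, SSN 123-45-6789"))
    # -> ['possible email', 'possible us_ssn']
```

The flag-and-alert flow is the point here: prompts are screened before they leave the organisation, and anything suspicious is logged for review.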

6. Environmental considerations

The environmental impact of AI cannot be overlooked. Large AI models require significant computational resources, contributing to carbon emissions.

Organisations should acknowledge this impact and explore ways to minimise their carbon footprint, such as through:

  • Prompt engineering: Minimising trial-and-error usage by crafting more precise prompts reduces unnecessary iterations, and shorter prompts generally require less computation.
  • Prompt caching: By reusing saved prompts and responses, you can reduce the computational load on AI models, lowering energy consumption (see the sketch after this list).
  • Choosing the right model: Using the smallest model possible that will be effective for a given task can reduce energy requirements.
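
As a rough illustration of the caching idea, here is a minimal Python sketch that memoises responses keyed by a hash of the prompt; `call_model` is a hypothetical stand-in for a real provider API call:

```python
import hashlib
from typing import Callable

# Minimal client-side response cache, keyed by a hash of the prompt text.
_cache: dict[str, str] = {}

def cached_completion(prompt: str, call_model: Callable[[str], str]) -> str:
    """Serve repeated prompts from the cache instead of re-querying the model."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # only hit the model on a cache miss
    return _cache[key]

if __name__ == "__main__":
    def fake_model(prompt: str) -> str:  # hypothetical stand-in for an API call
        print("model called")  # in practice, this is the expensive step
        return f"response to: {prompt}"

    cached_completion("Summarise our data policy", fake_model)
    cached_completion("Summarise our data policy", fake_model)  # cache hit: no second call
```

Many providers also offer built-in prompt caching that achieves similar savings on the server side, without any client-side bookkeeping.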

By addressing these ethical considerations, we can ensure that AI technologies serve the best interests of society while minimising potential harm. The journey towards ethical AI is ongoing, requiring constant vigilance, adaptation, and a commitment to transparency and accountability.

