Many tech start-ups are experts in AI-driven solutions. Even so, the regulatory landscape is changing fast, and AI needs to be developed and used with real care to avoid crossing legal red lines or incurring avoidable cost further down the line. The regulatory framework now demands that developers and users of AI meet high standards of privacy, security and ethics.

Key issues to address include:

  • privacy protections, including security and how to tackle possible bias and built-in discrimination;
  • how best to protect your IP;
  • the need to explain decision-making by AI to users.

Recent examples of controversy include possible bias in the use of algorithms to decide exam results across the UK, the Home Office’s use of algorithms to decide visa applications and controversial algorithms being used to determine individual pricing and targeted marketing.

These developments have led to deep scrutiny by regulatory authorities across the UK and EU. So much so that:

  • Europe will see a new EU regulation designed specifically to regulate the development and use of AI, underpinned by fines anticipated to be as much as 10% of annual global turnover.
  • The UK’s CMA (Competition and Markets Authority) is examining the use of algorithms to determine pricing to consumers.
  • Both the ICO and EU privacy regulators are considering controls on the use of facial recognition technology in public places.

The ICO has also issued guidance about how to explain the use of AI, in terms of both process and individual decisions, as well as how to audit its use. Operators of AI in the UK need to provide customers with a description of the algorithm, why it is being used, contact details of the operator, the data used, human oversight controls, and the potential risks and technicalities of the algorithm.


The proposed Europe-wide AI regulation takes a risk-based approach, classifying systems into four levels of risk: unacceptable, high, limited and minimal, with special rules for biometrics. When it comes into force, it will affect all those with trading partners and customers located in the EU. AI posing an unacceptable risk will be prohibited, while high-risk systems will be subject to strict obligations before they can be put on the market, including:

  • adequate risk assessment and mitigation systems;
  • high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • logging of activity to ensure that results can be traced;
  • detailed documentation providing all information necessary on the system and its purpose for regulatory authorities to assess compliance;
  • clear and adequate information to the user;
  • appropriate human oversight measures to minimise risk;
  • high levels of robustness, security and accuracy.

The changes may be far-reaching, as commonly used HR systems such as CV-sorting software are considered high risk.


Dynamic and personalised pricing is firmly on the CMA’s radar, and it has recruited a whole team of data scientists to assess the impact. Although the CMA recognises the benefits, it is concerned about potentially harmful effects: where it is difficult for consumers to detect the use of these algorithms, where vulnerable consumers are targeted, and where unfair ‘distributive’ effects occur. These harms can arise through the manipulation of consumer choices, often without the consumer being aware.

Digital price tags can draw on large amounts of personal data relating to consumer spending habits, phone data and loyalty card data. One example is the higher ‘surge’ pricing applied by taxi firms when, say, a user’s phone battery is almost out of juice. The CMA is also concerned about “dark patterns”, where a website is designed to steer consumers down a path they might not actually want, such as being directed to take out a subscription and needing to press buttons very carefully to avoid committing yourself. There are steps you can take to fall on the right side of the line.

As well as ensuring that AI tools identify and mitigate bias, you will need to ensure you understand the capabilities and limitations of algorithmic tools and consider how to ensure the fair treatment of users.

Last but certainly not least, given the risks and controversies outlined above, you will likely need to carry out impact assessments for data privacy and for equality, and to have, and comply with, a suitable policy on data ethics.

At Cube by Lewis Silkin we can help you identify and manage the AI risks.


