GFTU Emplaw Monthly - September 2023


The Strikes (Minimum Service Levels) Act 2023: what does it mean for workers?

An insightful article from Cloisters which spotlights the practical impacts of the new law.

Read here

International Jurisdiction Wars In A Post-Brexit And Post-Covid-19 World

An interesting article, from Gowling WLG, for employers and employees on where litigation can be brought when one is based in the UK and the other isn't.

Read here


Updated ACAS guidance on managing sickness absence

Over the summer, Acas updated its guidance on Fit Notes and Proof of Sickness, Recording and Reducing Sickness Absence, and Absence Trigger Points. These guides form part of a suite of guidance from Acas on managing absence.

Other guidance in the suite includes: Getting a doctor's report about an employee's health, Creating absence policies, Keeping in touch during absence, and Returning to work after absence.

Holiday, sickness and leave | Acas

ICO guidance for employers on processing workers' health data

The ICO has published detailed guidance which explains an employer's data protection obligations when they process health data concerning their workers.

This followed the ICO's review of feedback collected from a consultation on a draft version of the guidance (for background see Emplaw Monthly November 2022).

The guidance includes a set of checklists which provide an overview and quick guide to what needs to be considered whenever collecting and using workers’ health information.

Information about workers’ health | ICO

Large increases to illegal working penalties

If a UK employer does not carry out a right to work check and the Home Office finds them to be employing an illegal worker, they may be required to pay a civil penalty. The Government has announced that it will triple current penalty levels.

The increases, which will be implemented early next year, are:

For a first breach, the civil penalty will increase from a maximum of £15,000 to £45,000 per illegal worker;

In the case of repeat offenders, the civil penalty will increase from a maximum of £20,000 to £60,000 per illegal worker.

The government publishes quarterly reports of the total number of fines for illegal working issued to named non-compliant employers in each region of the UK.

For more information on an employer's duties to ensure it only employs legal workers, please see the Emplaw law card Right to Work in the UK (full content available to subscribers only).

Tripling of fines for those supporting illegal migrants - GOV.UK

House of Commons Library publishes research briefing on AI and Employment Law

The briefing considers the legal framework relevant to the use of AI in the workplace and explains the most relevant technologies. It also summarises the government’s current approach to regulation, as well as alternatives presented by the TUC and others, and provides an overview of the EU AI Act.

There are no explicit UK laws governing the use of AI at work, but the briefing sets out the legal framework to consider, including:

  • the duty of mutual trust and confidence under common law
  • the Equality Act 2010 (the briefing notes it is widely accepted that AI tools can exhibit biases)
  • the Employment Rights Act 1996 where AI is used in decision making (as regards dismissal for example)
  • the right to privacy under Article 8 of the European Convention on Human Rights which places some restrictions on the use of surveillance tools to monitor workers
  • the UK GDPR and the Data Protection Act 2018, which place restrictions on data collection and processing. In particular, Article 22 of UK GDPR gives a data subject the right ‘not to be subject to a decision based solely on automated processing, including profiling, which produces legal [or similarly significant] effects concerning him or her’. On the face of it, this Article seems to require a human in the loop, but there is ambiguity in the definition as well as limited exceptions. Interestingly, the briefing comments that in practice the Information Commissioner’s Office has not yet issued any penalties to enforce this Article, and that the Data Protection and Digital Information Bill (No. 2) (see Emplaw Monthly May 2023), currently going through Parliament, would replace the current “general prohibition” on automated decision making with alternative “specific safeguards”.

The briefing sets out, in understandable wording, what is meant by AI and some of the different technologies it includes.

It focusses on the AI technology in the workplace known as ‘Algorithmic Management’ (or ‘Digital Taylorism’, named after the approach also known as ‘scientific management’ devised by Frederick W Taylor around the turn of the 20th century). These terms refer to the use of AI or other algorithmic tools by employers to manage workers, standardise tasks and maximise efficiency. Over recent years, algorithmic management and the use of AI tools have become more widespread across many sectors of the UK, in particular in three broad areas:

  • in recruitment, to devise job adverts, source candidates, filter CVs, set online assessment tests and review interview performance
  • in task allocation and performance management, including scheduling shifts according to predicted footfall/changing circumstances and evaluating worker performance
  • in surveillance and monitoring of the workforce, including tracking workers to monitor productivity or health and safety in the workplace, and the monitoring of biometric data (biological or behavioural data about individuals) such as facial recognition and potentially brain activity.

The briefing summarises:

  • The UK government's approach to the regulation of AI as set out in its March 2023 AI White Paper (see more detail on the Paper, the subsequent consultation and links to relevant documents in Emplaw Monthly May 2023). In short, the government proposes a non-statutory approach, relying on existing regulators to regulate the use of AI in the context of their specific areas while following five broad principles: safety, transparency, fairness, accountability, and contestability. The Government will establish central support functions to monitor and evaluate the regulatory framework's effectiveness and continuously adapt the regulatory regime as necessary
  • The responses that have been received to that approach including those from the BEIS select committee, the ICO and the Equality and Human Rights Commission
  • The alternative policy proposals that have been put forward including The AI Law Consultancy’s report ‘Technology Managing People – the legal implications’ (see Emplaw Monthly April 2021), commissioned by the Trades Union Congress (TUC). Their proposals include amending GDPR to establish a universal right to personalised explainability for high-risk AI systems in the workplace, and recommending that the government reverse the burden of proof in discrimination claims related to high-risk workplace AI systems, so that employers have to demonstrate non-discrimination. The conclusions of the AI Law Consultancy have been adopted or adapted by the TUC in their own AI manifesto, ‘Dignity at Work and the AI Revolution’
  • The EU AI Act – the text was approved by the European Parliament in June this year and lawmakers have begun negotiations to finalise the legislation which is expected at the end of 2023 or early 2024, and will likely be followed by an 18–24-month lead in. The EU approach to AI regulation is much more interventionist and prescriptive than the UK government has so far proposed.

The EU intends to establish various new regulators, including a central European AI Board and national AI authorities in each member state. The EU is taking a risk-based approach to AI regulation according to four risk tiers: unacceptable, high, limited, and minimal. Systems deemed to pose an unacceptable risk are entirely prohibited, and others would be subject to a set of legal obligations throughout the lifecycle of an AI system, which could include training, testing, validation, conformity assessments, risk management systems, and post-market monitoring.

The EU is also proposing financial penalties for AI misuse of up to €30 million or 6% of global turnover. The AI Act's provisions are intended to apply to actors whose system outputs are used in the EU, even if the provider is based outside of the EU.

Artificial intelligence and employment law - House of Commons Library

Artificial intelligence act

EU AI Act: first regulation on artificial intelligence | News | European Parliament

Technology_Managing_People_2021_Report_AW_0.pdf

Connected tech: smart or sinister? Report by House of Commons Committee

The Report, published in August, investigates the specific benefits and harms of using "connected technology", including in the workplace; the government has two months to respond. Recommendations include that monitoring of employees in smart workplaces should be done only in consultation with, and with the consent of, the employees concerned.

"Connected technology" is any (type of) physical object that is connected to the internet or other digital networks. The harms and benefits were widely considered. For example Amazon argued that the robotics, machine learning and other technologies used in Amazon fulfilment centres have reduced the physical burden on employees, reducing walking time and taking on repetitive tasks. However, other evidence described instances where the micro-determination of time and movement tracking through connected devices, which had been introduced to improve productivity by some employers, had led to workers feeling alienated and experiencing increased stress and anxiety.

The Report is long and wide-ranging, and its recommendations also include that the Information Commissioner’s Office should develop its existing draft guidance on “Employment practices: monitoring at work” into a principles-based code for designers and operators of workplace connected tech. The ICO draft guidance was put out to public consultation between October 2022 and January 2023.

Connected tech: smart or sinister? - Culture, Media and Sport Committee

UK AI regulation: 12 governance challenges must be addressed

On 31 August 2023, the House of Commons Science, Innovation and Technology Committee published an interim report based on input from its inquiry into the governance of AI. The report sets out the twelve essential challenges that AI governance must meet if public safety and confidence in AI are to be secured.

The challenges are:

  • The Bias challenge: AI can introduce or perpetuate biases that society finds unacceptable
  • The Privacy challenge: AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants
  • The Misrepresentation challenge: AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions, or character
  • The Access to Data challenge: The most powerful AI needs very large datasets, which are held by few organisations
  • The Access to Compute challenge: The development of powerful AI requires significant compute power, access to which is limited to a few organisations
  • The Black Box challenge: Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements
  • The Open-Source challenge: Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms
  • The Intellectual Property and Copyright Challenge: Some AI models and tools make use of other people's content: policy must establish the rights of the originators of this content, and these rights must be enforced
  • The Liability challenge: If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done
  • The Employment challenge: AI will disrupt the jobs that people do and that are available to be done. Policy makers must anticipate and manage the disruption
  • The International Coordination challenge: AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking
  • The Existential challenge: Some people think that AI is a major threat to human life. If that is a possibility, governance needs to provide protections for national security

AI offers significant opportunities, but twelve governance challenges must be addressed says Science, Innovation and Technology Committee - Committees - UK Parliament

TUC launches AI taskforce

On 4th September, the TUC launched a new AI taskforce as it calls for urgent new legislation to safeguard workers’ rights and to ensure AI benefits all. The taskforce aims to bring together leading specialists in law, technology, politics, HR and the voluntary sector, and to publish an expert-drafted AI and Employment Bill early in 2024 which it will lobby to have incorporated into UK law.

TUC launches AI taskforce as it calls for “urgent” new legislation to safeguard workers’ rights and ensure AI “benefits all” | TUC


Consultation on "reasonable steps" under the Strikes (Minimum Service Levels) Act 2023

The Department for Business and Trade has opened a consultation on a draft statutory Code of Practice on the "reasonable steps" a trade union should take to comply with the Strikes (Minimum Service Levels) Act 2023. Consultation closes on 6 October 2023.

Under the Act (for more details see the Cloisters' Article above), where a trade union gives an employer a notice of strike action, in the sectors to which the law applies, an employer may issue a work notice identifying persons who are required to work, and the work they must carry out during the strike to secure minimum levels of service.

Unions must take “reasonable steps” to ensure members who are identified on a work notice comply with it on a strike day. This is required for a union to maintain its protection from liability in tort in relation to an act done by the union to induce a person to take part, or to continue to take part, in a strike. If a trade union fails to take reasonable steps, the employer could seek damages from the union or an injunction to prevent the strike action taking place.

The draft Code of Practice proposes five steps which a trade union should take to satisfy the reasonable steps requirement under the following headings:

Step 1: identification of members

Step 2: encouraging individual members to comply with a work notice, e.g., issuing a "compliance notice" to encourage such members to comply with the work notice

Step 3: communications to the wider membership

Step 4: picketing, e.g., instructions which should be issued to the picket supervisor

Step 5: assurance, e.g., trade unions should ensure that they do not undermine the steps they take to meet the reasonable steps requirement

The Code states that failure to observe the Code does not, by itself, make any person liable to legal proceedings, but its provisions are admissible in evidence and taken into account in proceedings before any court or employment tribunal where the court or tribunal considers them relevant.

The finalised Code will need Parliamentary approval “when Parliamentary time allows”. The Government will also make consequential amendments to the Picketing Code to reflect the changes brought in by the Act.

The TUC’s general secretary Paul Nowak has responded to the announcement of the consultation saying ‘This is a sham consultation. Ministers have ignored a mountain of evidence on how these laws are unworkable and will escalate disputes’.

Minimum service levels: Code of Practice on reasonable steps - GOV.UK

TUC slams “sham consultation” on Strikes Act | TUC

SRA report on non-disclosure agreements in workplace complaints

The Solicitors Regulation Authority’s report follows its review into the use of NDAs in workplace complaints. The report focuses on the role of solicitors in complying with the SRA’s Warning Notice.

The review found no direct evidence of solicitors drafting NDAs with the deliberate intention of preventing the reporting of inappropriate behaviour, but did find that firms tend to use their own standard templates, which often do not take account of the individual circumstances of a given case.

SRA | Thematic Review: The use of Non-Disclosure Agreements in workplace complaints | Solicitors Regulation Authority


Presidential Guidance for Employment Tribunals on Alternative Dispute Resolution

The guidance highlights the four main types of ADR available to parties involved in litigation in an employment tribunal, namely: Acas's early conciliation service; judicial mediation; judicial assessment; and dispute resolution appointments.

Whilst recognising that Employment Tribunals must decide a case where the parties cannot reach agreement, the guidance emphasises that they can and should encourage parties to resolve their cases by agreement.

The guidance largely focuses on the positives of ADR and how it can assist parties in avoiding the financial and emotional strain of litigation. However, it also recognises the extensive costs that Employment Tribunal hearings impose upon the public purse.

PG-ADR-July-2023-final1.pdf