
Is AI guilty of unfairly discriminating against certain job applicants?

Mounting foreign case law is giving both employers and artificial intelligence (AI) developers food for thought, just as South Africa ushered in the National Artificial Intelligence Policy Framework 2024 (Policy Framework) in October 2024.

The ongoing United States (US) case of Derek Mobley v Workday warrants the attention of South African employers using AI platforms and AI software developers themselves. Derek Mobley, an African American male over 40 with anxiety and depression, alleges that Workday’s AI screening tools discriminate against job applicants based on race, age, and disability. Despite being qualified, Mobley was rejected for over 100 positions by companies using Workday’s AI screening tools.

Mobley successfully established a prima facie case of “unfair discrimination” as he demonstrated a causal link between an employment practice and its impact on a designated group:

  • Mobley argued that Workday’s algorithmic tools likely screen out applicants with mental health disorders or cognitive impairments by using biased training data and personality tests. The court recognised a common discriminatory component, evidenced by Mobley’s numerous rejections.
  • Mobley demonstrated that he was rejected from over 100 employers all using the Workday platform, despite being qualified. He asserted that this suggested potential bias against race, age, and disability.
  • Mobley showed that he received rejections within an hour of applying, indicating that there was no human oversight exercised over the AI decision-making.

Software developers and AI vendors worldwide should take note of the unfolding US case as courts may begin to recognise third-party service providers as agents liable for actions typically performed by an employer. Since the employers to which Mobley applied delegated their hiring and recruiting functions to Workday, the court recognised Workday as their agent.

If this case is decided in Mobley's favour, it could set a precedent for holding third-party providers accountable for how their designs and AI systems are used by customers who may not follow best practice in AI usage.

SA employers beware

South African employers should also pay attention to this case, given how closely the legal principles on unfair discrimination at issue track those applied in South Africa. The case highlights the potential for AI software to infer demographic information from inputs like zip code and educational history, potentially leading to unfair discrimination if proper human oversight is not applied.

Employers cannot escape liability by delegating their duties to third parties. Despite the absence of codified AI legislation in South Africa, employers still face the risk of being held liable under the existing employment legislative framework. For example, the Employment Equity Act, which applies to all employers, employees, and job applicants, prohibits unfair discrimination in the workplace.

AI usage can impact other workplace areas, including retrenchments, promotions, demotions, opportunities, and bonuses. If AI software is found to show bias in such areas, even unintentionally, then employers may be held liable for committing an unfair labour practice in terms of the Labour Relations Act.

Avenues for recourse

While an unfair discrimination dispute between a third-party service provider (i.e. not the employer) and a job applicant falls outside the purview of traditional employment law, it does not leave the parties without recourse.

If this situation were to occur in South Africa, the below alternative recourse may be explored:

  • An employee or job applicant may bring an unfair discrimination dispute against a third-party service provider to the Equality Court under the Promotion of Equality and Prevention of Unfair Discrimination Act.
  • An employee or job applicant may seek to hold the third-party service provider/software developer liable under the Protection of Personal Information Act where automated decision-making has occurred in a manner that contravenes the relevant provisions of that Act (which may include giving a person affected by an automated decision an opportunity to make representations about the decision, and providing that person with sufficient information about how the decision was taken).
  • An employer may seek to hold the third-party service provider/software developer liable after considering the agreed terms and conditions of the applicable agreement concluded between them.
  • An employer with an annual turnover of less than R2m may bring a claim for damages under section 61 of the Consumer Protection Act against a third-party service provider or software developer for liability arising from defective goods.

Proceed with caution

Not only do employers need to exercise caution when using AI in the workplace; software vendors should also be alive to the risk that AI systems used as decision-making tools may impair equal access to employment opportunities.

Our previous insight highlighted the human-centred AI Principles adopted by the Organisation for Economic Co-operation and Development in May 2019, which were until recently the primary guide for South African employers implementing AI in the workplace. In October 2024, the Department of Communications and Digital Technologies introduced the National Artificial Intelligence Policy Framework 2024. Although this is not codified law, it illustrates the progressive steps being taken to safeguard the interests of South African citizens.

Ensuring transparency and fairness

The Policy Framework outlines 12 strategic pillars for best practice in AI usage. The three most prominent are transparency and explainability; fairness and mitigating bias; and maintaining human oversight.

For transparency and explainability, the Policy Framework aims to hold employers, organisations, and software developers accountable for the actions and outcomes of AI systems. A practical step employers may take in this regard is to educate themselves and their employees on how AI decisions are made. Understanding the decision-making process behind an AI tool may help detect biases and mitigate unfair discrimination by detecting and correcting skewed data.

To ensure fairness, employers should take steps to ensure that AI systems are trained on diverse data sets that include all demographic groups. Most importantly, human oversight should always be applied to AI decision-making so that outcomes can remain fair and unbiased. As seen by emerging case law, AI might miss contextual nuances and produce outcomes devoid of ethical consideration, thereby perpetuating biases present in its training data.

Given South Africa's comprehensive employment law framework, South African employers should adopt a proactive approach to implementing the best practices and pillars envisaged by the new Policy Framework, to help mitigate liability flowing from AI usage in the workplace. Software developers designing AI systems for employers should ensure that these pillars are reflected in their software design.

About Mehnaaz Bux, Karl Blom, Caitlin Leahy, and Daniel Philipps

Mehnaaz Bux, Partner, Karl Blom, Partner, Caitlin Leahy, Candidate Attorney and Daniel Philipps, Candidate Attorney at Webber Wentzel