When Ethics and Evidence Take Precedence Over Automation
Part 1 emphasized that AI cannot replace the expert witness’s independent judgment. This week, we explore a more pressing concern—the legal and ethical dangers of introducing untested and unaccountable technology into the courtroom.
Ethical and Professional Safeguards: The Essentials of Psycho-Legal Reporting
A psycho-legal report is a professional product subject to ethical, legal, and evidentiary norms, and its preparation cannot be delegated to non-human tools.
The Health Professions Council of South Africa (HPCSA) mandates that all registered practitioners maintain personal accountability for the professional content they create and submit, even in medico-legal scenarios.
As stated in the HPCSA’s Ethical Rules of Conduct (Booklet 2, Rule 13(a)): “A practitioner shall be personally responsible for his or her professional decisions and actions…”
Read: Balancing Algorithms and Empathy: AI’s Role in SA Psycho-Legal Practice
This means that an Industrial Psychologist (IOP) must be able to justify and defend every opinion presented in a report, whether during expert testimony or peer review. No level of software automation absolves the practitioner of this responsibility. Excessive reliance on AI for career forecasting or trajectory analysis, especially without human oversight, exposes the IOP to potential ethical violations, professional sanctions, and legal repercussions.
Although this article focuses on Industrial and Organisational Psychology, the legal implications of expert reporting are equally crucial. Anrich van Stryp, director and attorney specializing in Intellectual Property & Commercial Law at Brits Law, points out that employing generative AI in psycho-legal reporting introduces significant risks related to admissibility, professional liability, intellectual property, and data privacy. South African law requires that expert opinions be given by qualified professionals exercising independent judgment. Outputs from unverifiable machines cannot meet the reliability criteria outlined in the Law of Evidence Amendment Act 45 of 1988 or the independence requirement of Rule 36(9) of the Uniform Rules of Court.
Van Stryp notes that in Holtzhauzen v Roodt 1997 (4) SA 766 (W), the Court held that admissibility requires a transparent methodology that can withstand cross-examination, a standard that opaque, “black box” AI methods cannot satisfy. Foreign and regional frameworks echo this: the UK’s CPR Part 35 demands independence and methodological transparency from experts, while the EU’s AI Act (2024) classifies forensic and medical AI applications as “high risk,” necessitating human oversight and auditability. AI-generated psycho-legal evidence would not comply with these evidentiary requirements.
Generative AI is associated with a real risk of hallucinated content, defined as: “Hallucinated text gives the impression of being fluent…despite being unfaithful and nonsensical. It appears to be grounded in the real context provided, although it is actually hard to specify or verify the existence of such contexts.” (Ji et al., 2023, “Survey of Hallucination in Natural Language Generation”, ACM Computing Surveys)
This concern is not merely theoretical. Generative AI systems are known to misrepresent legal citations, fabricate authors, conjure false references, or extrapolate from incomplete inputs in ways that lack transparency and verifiability. In the context of expert evidence, especially when testifying in court or facing cross-examination, even a single unverifiable assertion could discredit the report and damage the practitioner’s credibility.
Regarding data privacy, the HPCSA’s Booklet 5 (Confidentiality Guidelines) and the Protection of Personal Information Act (PoPIA) impose rigorous responsibilities on health professionals concerning the management of personal and medical information. Submitting claimant records to an insecure platform for “AI drafting” contravenes these standards and endangers sensitive data.
Van Stryp emphasizes that if psycho-legal reports contain third-party employer records, educational materials, or psychometric test data, uploading such proprietary information to AI platforms without prior consent constitutes copyright infringement and a breach of contractual confidentiality obligations. This poses not only a legal risk but may also violate ethical standards of responsible data management and professional trust.
Read: Bringing AI into the Courtroom: A New Era for Psycho-Legal Work
Currently, no publicly accessible AI system meets the necessary encryption, consent, or professional auditability standards; moreover, AI cannot sign a report, testify in court, or take accountability for its content.
Caution Over Codification: AI Cannot Formulate Psycho-Legal Opinions
In high-volume litigation, automation may seem efficient. Yet, forming a psycho-legal opinion is a high-stakes exercise in professional reasoning, grounded in scientific, ethical, and legal admissibility prerequisites.
Importantly, no existing AI tool fulfills the professional, evidentiary, or legal criteria set forth by:
- HPCSA Ethical Guidelines (Booklet 2, s6.2): Practitioners must justify every professional decision and personally validate report content.
- HPCSA Confidentiality Guidelines (Booklet 5, s4.1.1): Sharing sensitive information with third parties necessitates explicit consent and stringent security measures.
The process of forming a psycho-legal opinion involves more than merely fitting facts into formulas. It requires the IOP to analyze complex interconnections between the claimant’s background, established limitations, and the most probable occupational trajectory within the realistic labor market. This is not a process that can be templated or partially automated without jeopardizing professional and legal standards.
Generative AI tools are inherently unreliable for use in regulated psycho-legal practice. As Ji et al. (2023) demonstrate, these systems often produce plausible yet unsupported answers, particularly when prompts are ambiguous or require domain-specific validation. The illusion of accuracy can be dangerously misleading.
Psycho-legal reporting is not a mere presentation of data. It involves applied judgment under circumstances of uncertainty and potential consequences. It requires case-specific reasoning, evidence-based discipline, and ethical restraint—none of which can be mimicked by automation.
Concerns regarding liability are equally significant. Van Stryp notes that negligent misstatement under South African law applies to experts whose reports are used in litigation. An IOP who incorporates AI-generated material that turns out to be inaccurate, misleading, or fabricated risks both delictual liability and HPCSA sanctions. Notably, AI providers disclaim liability for professional use of their outputs, placing the onus solely on the practitioner. Foreign frameworks mirror this: UK courts have held professionals accountable for relying on flawed AI-generated advice, and the EU AI Act mandates human accountability in the operation of high-risk systems.
Data protection and privacy frameworks intensify these risks. Van Stryp highlights that Section 14 of the Constitution, along with the Protection of Personal Information Act 4 of 2013 (PoPIA), imposes stringent security measures and forbids cross-border data transfers without protective safeguards. Uploading claimant records to offshore AI servers without explicit consent is unlawful, potentially exposing practitioners to civil liability and regulatory penalties. These provisions reflect the GDPR and the UK Data Protection Act, which limit processing or transferring sensitive personal, occupational, and health data without explicit consent, adequate protection, and human oversight.
The Path Forward: Anchored Innovation, Not AI-Led Replacement
Integrating technology into psycho-legal practice is not inherently problematic. Responsible innovation, guided by practitioners who understand the ethical, evidentiary, and professional requirements of the work, can facilitate timely, accurate, and defensible outcomes.
However, AI must function as a tool of enhancement, not substitution. It may assist with tasks such as sourcing legitimate references for the practitioner to validate, but it must never be employed to formulate or substantiate core professional opinions—particularly when those opinions will serve as expert testimony in legal proceedings.
The risks are far from hypothetical. Misuse of generative AI in psycho-legal reporting can undermine evidence admissibility, violate ethical and confidentiality standards, and compromise evidentiary accuracy—potentially exposing the IOP to disciplinary measures or legal action.
An algorithm cannot assume the IOP’s critical role in navigating the intersection of psychological functioning, occupational capabilities, and labor market dynamics.
Instead of viewing AI as a replacement, the psycho-legal community should establish regulated standards for ethical AI support, grounded in collaboration, validation, and legal-ethical scrutiny—not mere market enthusiasm.
Final Note
This article is authored from within the profession. Its intent is not to dismiss technology but to emphasize that psycho-legal applications must begin with clarity, legal fidelity, and ethical restraint.
Natasha Gerber, Industrial Psychologist – M.Com (Ind Psych) Pret.