By PEARL MOSES
AI AFFECTS — and will continue to affect — law firms’ traditional methods of working.
It’s a good fit for the industry, which relies heavily on standardised documents and precedents. Drafting legal documents can be labour-intensive, and AI appears well suited to handling it. Contracts, policies, and other legal documents tend to be normative, so AI’s capabilities in gathering and synthesising information can do a lot of the heavy lifting.
Machine learning identifies relevant information and acts as contract-review software. This allows lawyers to focus on their analysis of the contract, and how best to advise clients.
Legal applications such as conveyancing or contract- or licence-generation are relatively safe areas in which to employ AI tools such as Harvey or ChatGPT. Law firms can draw on standardised templates and precedent banks to scaffold document generation, making the results more predictable than with many free-text outputs.
That said, AI output will need careful review: a major part of practising law is understanding clients’ individual circumstances. AI output is unlikely to be optimal at this stage.
Managing inputs could be equally challenging. Data submitted to an AI may become part of a training model, which could violate confidentiality obligations or privacy rights. In Europe, the use of AI might breach the principles of the EU’s General Data Protection Regulation (GDPR).
Law firms would need a firm legal basis to feed any personal data into a generative AI tool such as Harvey, and have contracts in place to cover data processing by third parties.
AI can benefit the profession, but some caution is needed. AI must be “taught” — which means it can only be as objective as the people who teach it. Biased data leads to biased AI. Lawyers and judges are only as good as the information they receive, and AI should increase the quality of that information. No matter how sophisticated the tech becomes, it is no substitute for human judgement or decision-making.
No consensus yet exists on how AI will shape the legal profession, but it’s poised to transform nearly every aspect of it. The technologies powering it will create unprecedented legal issues, including ownership, liability, privacy, and policing.
In light of such concerns, law schools must prepare tomorrow’s lawyers and judges to use AI, and fully understand the implications.
While some legal roles may disappear, we could be at the start of an evolutionary process. New roles may appear to work with AI, rather than against it, and new skills can be developed to produce an upskilled legal workforce.
Pearl Moses is compliance director at Setfords Solicitors
Edinburgh AI firm snapped up by global player Bayer
SCOTTISH AI platform Blackford Analysis has been acquired by pharmaceutical company Bayer.
Blackford began operating from the University of Edinburgh in 2010, and is still headquartered in the city.
Blackford says it will continue to operate as an independent organisation “on an arm’s-length basis” to preserve its entrepreneurial culture. It will remain headquartered in Scotland and no staff or management changes are expected.
The company’s stated aim is to advance its technology, channel partnerships and clinical application portfolio while benefiting from the infrastructure and global reach of Bayer. The acquisition is expected to be completed this year, “pending the satisfaction of customary closing conditions”.
Blackford has strategic partnerships with blue-chip companies and AI providers. It develops and commercialises technology for sale in international markets.