“Companies adopt AI governance in anticipation of regulation.”
Article by JOTA featuring comments from Founding Partner Alessandra Mourão.
Law firms report growing demand for AI governance policies ahead of formal legislation.
By Nino Guimarães
In anticipation of artificial intelligence (AI) regulation, companies are turning to law firms to develop internal governance policies. According to lawyers specializing in the area, demand comes mainly from tech companies, banks, and multinationals seeking to establish their own rules for the use of AI tools.
The law firms interviewed by JOTA say they aim to present clients with governance projects that preempt specific legislation, offering best-practice standards for sensitive issues such as intellectual property rights and data protection. They are also involved in drafting AI solution contracts, including clauses on data protection and cybersecurity.
Renato Opice Blum, an expert in Digital Law and data protection, explains that while legislation for AI is expected, major companies view governance policies not only as a good compliance practice but also as a competitive advantage.
“Our governance model aims to minimize risks, mitigate problems, and offer competitive differentiation for organizations. It’s an irreversible and essential process, given the growing adoption of AI across various sectors,” he stated.
According to Opice Blum, governance policies must be adaptable to the rapid changes in technology and legislation, with ongoing monitoring. He emphasizes that the projects developed by his firm are based on a broad set of international legislation, technical standards, and sub-legal rules that ensure both technical and legal security.
“Our framework is built on 13 main points, including security, explainability (understanding of the AI models used), monitoring, risk management, copyright compliance, and data protection,” he notes.
Attorney Patrícia Peck, a full member of the National Cybersecurity Committee (CNCiber), observes that since the release of the first generative AI models, companies have increasingly turned to specialized law firms for guidance. According to her, implementing usage rules and AI-related contract clauses serves as a form of prevention against potential future litigation.
Peck explains that governance projects also serve to clearly define the permitted uses of AI within a company, making explicit what cannot be done with the tools. In their projects, lawyers also analyze license agreements and guide clients in drafting AI service contract clauses.
According to Patrícia Peck, before buying or selling AI-based solutions, it’s important that contracts include clauses that address:
1. Definition of terminology
A clear indication of the type of AI solution to be used, its purpose, and limitations.
2. Data protection and intellectual property
Guarantees that AI use complies with current laws on copyright, image rights, and personal data protection.
3. Cybersecurity and attack prevention
Protection against data poisoning and assurance of digital security during AI training and usage.
4. Ethics and governance
Alignment between the AI solution and the company’s code of ethics and governance policies.
Self-regulation
The attorney argues that internal policies also serve as a form of corporate self-regulation. She believes that even with the approval of a legal framework for AI in Brazil, each economic sector will need to define more specific rules for its own context.
“To ensure greater legal certainty, it’s advisable for sectors to develop specific regulations for their AI applications. For example, the use of AI in healthcare for medical reports or in candidate screening during recruitment processes requires rules that address the specificities of those fields,” she argues.
Governance within Law Firms
In addition to drafting governance policies for clients, law firms themselves are adopting specific rules for the use of AI in their services. According to attorney Alessandra Mourão, client demand has also influenced internal practices at law firms.
She notes that the risk of leaking sensitive data and information has led clients to request contractual clauses that clearly address the use of AI in legal services.
“Client information security is one of the biggest concerns when using AI. There’s a fear that, by using an AI tool, confidential information might be exposed or absorbed by the platform, beyond the control of the firm,” she said.
For Mourão, ethical concerns around AI use are shared by both companies and the legal community. “It’s essential that law firms clearly disclose when AI is used in their operations and ensure detailed reviews to avoid errors or failures,” she pointed out.
JOTA Portal, March 3, 2025, 07:55 AM.
https://www.jota.info/justica/empresas-adotam-governanca-de-ia-em-antecipacao-a-regulacao