When people imagine a future of “robot lawyers,” they tend to focus on employment and economic effects, such as the implications for the lawyer labor market (“will the robots take our jobs?”). Often overlooked, but no less important, are the ethical implications of artificial intelligence.
Last week, at the Global Privacy Summit of the International Association of Privacy Professionals (IAPP), I attended a session tackling this very subject — “Machines That Can Learn: Can They Also Be Taught Human Values?” — with the following panelists:
- Martin Abrams, Executive Director, Information Accountability Foundation (IAF);
- Mark MacCarthy, SVP, Public Policy, Software & Information Industry Association (SIIA); and
- Eleonore Pauwels, Director, AI Lab, Science and Technology Innovation Program, Wilson Center.

Mark MacCarthy of the SIIA opened by noting the significance of the subject. Machine learning has been described as “the most important general-purpose technology of our era.” Taken as a whole, it will improve human civilization, promoting justice and prosperity.
But as with any technology, AI presents challenges — and we need ethical principles to navigate them. MacCarthy identified a few basics: “AI must be developed in a way that promotes transparency and fairness, that’s available to all, and that doesn’t reinforce existing inequalities.” (For a more detailed discussion, see the issue brief released last fall by the SIIA, Ethical Principles for Artificial Intelligence and Data Analytics.)
Does AI require new laws and regulations? It would be difficult if not impossible to regulate AI writ large, given that the term covers a broad range of techniques used in many different ways. “It would be like regulating regression analysis,” as MacCarthy put it.
“But we may need to throw a regulatory net around specific applications of these techniques,” he said. “Using machine learning isn’t a get out of jail free card. You can’t say, for example, ‘I’m using AI, so I don’t need to live up to fair lending laws.’”

At its root, the challenge is figuring out what can and cannot be done with data — which is, after all, the foundation of artificial intelligence. AI is not an entirely new phenomenon, but instead builds upon existing data technologies and the huge amount of data available to us today.
As Martin Abrams acknowledged, it’s no easy feat: “We are living in incredibly uncertain times as to what we do with data and how we can use data to create value.” As reflected by the controversy over Cambridge Analytica and its use (or misuse) of Facebook data, we require sound processes and safeguards.
For AI to flourish, we need what Abrams described as “the freedom to think with data.” This thinking with data requires observation — seeing what people click on, for example, and using that information to make suggestions to them based on their behavior.
In the United States, we enjoy a high degree of freedom to gather data and to think with data, using information from the internet, smartphones, and similar devices. Other nations, however, impose more restrictions — most notably Europe, through the General Data Protection Regulation (GDPR).
According to Abrams, there’s increasing pushback against observational technologies, driven by legitimate concerns about the loss of autonomy. But when regulators try to solve this problem, they often reach for blunt instruments that are not friendly to innovation. The challenge is to balance protection of privacy, autonomy, and other values against the need for innovation and technological progress.
Abrams’s organization, the IAF, explores these issues in a paper, A Unified Ethical Frame for Big Data Analysis. The IAF’s analytical framework requires a pause between thinking about and acting upon data. The difficulty is that with AI, the pause often gets removed, with the machine or tool acting immediately after processing the data. But without such a pause, how can we ensure that ethical principles receive due consideration?
The SIIA, as the principal trade association for the software and digital content industry, has also developed ethical principles for AI, which Mark MacCarthy outlined:
1. Rights: Engage in data practices that respect human rights and promote equal dignity and autonomy. But which rights? They include life, privacy, religion, property, freedom of thought, and due process of law.
2. Justice: Fairly distribute the benefits and burdens of social life, avoiding data practices that disproportionately burden vulnerable groups. The distribution of benefits and burdens should not be based on protected categories like race, gender, ethnicity, and religion.
3. Welfare: Aim to create the greatest possible benefit from use of data and advanced modeling techniques. Increase human welfare through improvements in the provision of public services and low-cost, high-quality goods and services. (We care more about this issue of welfare in the United States, given our utilitarian bent; Europe is more focused on deontology.)
4. Virtue: Adhere to data principles that encourage the practice of virtues that contribute to human flourishing. Help people to live good lives in their communities. Virtues include honesty, courage, moderation, self-control, humility, empathy, and civility.
These general principles must be supplemented by specific principles appropriate to the context or domain of use. For example, in the legal profession, supplemental principles might reflect the duties a lawyer owes to clients. Regardless of the field, what’s crucial is that the vision for ethical compliance must be built in from the beginning.
As technology advances, the need for ethical guidance will only increase, as Eleonore Pauwels’s presentation made clear. She reviewed some current and possible future uses of AI — microchip implants for employees, people monitoring their DNA in shared cloud labs, “smart dolls” that can send data about children to the cloud — and emphasized how their benefits must be weighed against ethical concerns.
As we go about striking this balance, we need to have conversations about the trade-offs — ideally public conversations, according to Mark MacCarthy.
“Ethical judgments need to be considered. That process can be done behind closed doors, but transparency is better.”
David Lat is editor at large and founding editor of Above the Law, as well as the author of Supreme Ambitions: A Novel. He previously worked as a federal prosecutor in Newark, New Jersey; a litigation associate at Wachtell, Lipton, Rosen & Katz; and a law clerk to Judge Diarmuid F. O’Scannlain of the U.S. Court of Appeals for the Ninth Circuit. You can connect with David on Twitter (@DavidLat), LinkedIn, and Facebook, and you can reach him by email at [email protected].