What The AI Industry Can Learn From The Media Industry

AI-generated content and LLMs are in their infancy. New policies, guidelines, and styles will need to be developed for AI-generated content.

News and media organizations have editorial policies and standards intended to define and guide the kind of content offered to their target audience. The Wall Street Journal targets a business audience with news, facts, and information. Disney has several properties that appeal to specific audiences in different mediums, but they all support a core mission of entertaining and inspiring people around the world through unparalleled storytelling.

For any media company, there are style guides, editorial policies, review processes, and editing. Some companies follow comprehensive policies and processes with rigor; others might not. Similarly, some companies are transparent about their policies; others aren't.

Law Firms Should Consider How Artificial Intelligence (AI) Will Support Their Brand

With AI already a major factor in legal technology in 2023, law firms must assess the technology’s role in their business. Law firms are already posting job openings for “Legal Prompt Engineers,” and Allen & Overy is one of the first major law firms to deploy a firmwide GPT application.

In addition to broad-scale large language models (LLMs) like ChatGPT and the new chat-assisted Bing search, any organization that trains AI or produces content with AI will likely want to consider developing a policy.

What techniques are used to query an LLM? The way a content creator queries an LLM matters. Newer techniques such as chain-of-thought prompting push an LLM to show the steps behind its answer, which makes human review easier, as the sketch below illustrates.
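
To make that concrete, here is a minimal sketch of the difference between a plain prompt and a chain-of-thought prompt, assuming the OpenAI Python client (v1+); the model name, question, and wording are hypothetical illustrations, not a recommended practice.

    # Minimal sketch: plain prompt vs. chain-of-thought prompt.
    # Assumes the OpenAI Python client (v1+); the model name and
    # question are hypothetical illustrations.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "Does this clause obligate the vendor to indemnify the customer?"

    # Plain prompt: the model answers directly, with no visible reasoning.
    plain = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )

    # Chain-of-thought prompt: asking for step-by-step reasoning makes
    # the model's logic visible, giving a human reviewer something to check.
    chain = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": question
            + " Explain your reasoning step by step,"
              " then state your conclusion on a final line.",
        }],
    )

    print(chain.choices[0].message.content)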

Organizations that create content with AI will want to consider how the AI is trained, how it is instructed, how it formats output, and how the AI-generated content is reviewed before publication.
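
As a hypothetical sketch of what "instructing" and "formatting" might look like in practice, an editorial policy can be expressed as a system instruction, with every draft held for human sign-off before publication. The policy text, model name, and sign-off step below are illustrative assumptions, not any vendor's or firm's actual practice.

    # Hypothetical sketch: an editorial policy expressed as a system
    # instruction, with output held for human sign-off before publishing.
    # Policy text and model name are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()

    EDITORIAL_POLICY = (
        "You are drafting content for a law firm blog. "
        "Do not cite authority you cannot verify. Use plain English. "
        "Format the answer as a headline followed by three short paragraphs."
    )

    draft = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": EDITORIAL_POLICY},
            {"role": "user", "content": "Summarize the risks of AI-drafted contracts."},
        ],
    ).choices[0].message.content

    # Nothing publishes without a human sign-off, mirroring an editor's review.
    print(draft)
    approved = input("Publish this draft? [y/N] ").strip().lower() == "y"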

Just as large media companies stake their brands on the finished work product, so do law firms. An editor reviews a news story; a partner reviews an associate's memo, whether it was generated by an AI or a human.

Relating Editorial Policy To AI

Few people have heard of Meta’s Galactica LLM, even though it was released in demo form two weeks prior to OpenAI’s ChatGPT. Why? Because it was pulled down just three days later, after its responses exhibited bias and spewed nonsense. Galactica’s AI training was not as good as ChatGPT’s, and consumers of Galactica got to see the nonsense firsthand.

AI-generated content and LLMs are in their infancy. New policies, guidelines, and styles will need to be developed for AI-generated content. After all, content is content, whether generated by humans, machines, or humans and machines.

What is the role of AI training in creating consistent output? What is the role and responsibility of those who generate AI content to review it, much as an editor would? Galactica's output was analogous to a media company hiring trained journalists and untrained writers alike, and then publishing with little or no editorial review.

Part of ChatGPT's success is that significant chunks of objectionable content were labeled as such during training, so the model would learn to recognize ugly content like hate speech, sexual abuse, torture, and worse.

In the analogy above, ChatGPT's AI training has standardized more of the writing and reduced the number of untrained writers, but it still has a ways to go. In most use cases, ChatGPT's output will require review, much as an editor reviews a story. That applies even if the content will only be consumed by the user who created it.
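
One way to automate part of that first-pass review is a moderation check before a draft ever reaches a human editor. The sketch below assumes the OpenAI Python client (v1+) and its moderation endpoint; the escalation logic is a hypothetical illustration, not a substitute for editorial judgment.

    # Hypothetical sketch: screen AI-generated text with a moderation
    # check before it reaches a human editor. Assumes the OpenAI
    # Python client (v1+) and its moderation endpoint.
    from openai import OpenAI

    client = OpenAI()

    def needs_escalation(draft: str) -> bool:
        """Flag drafts that trip any moderation category for closer review."""
        result = client.moderations.create(input=draft)
        return result.results[0].flagged

    draft = "AI-generated article text goes here."
    if needs_escalation(draft):
        print("Escalate to a senior editor before publishing.")
    else:
        print("Proceed to standard editorial review.")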

A Glimpse Into Training ChatGPT

A little-known fact: much of the dirty work of finding objectionable content and removing it from the training sets used by ChatGPT was initially outsourced to workers in Kenya. Creating ethical, responsible AI in ChatGPT required some poor souls to review horrific content. That may bother some readers, as it bothers me. Just realize the concept isn't new; some people play similar roles in the media industry. Video editors watch some pretty horrific footage during the editing process and then pixelate the images or cut away at just the right time to protect the broader viewing audience.

AI Editorial Policies Have Parallels To Traditional Content Creation Editorial Policies

There are parallels between AI content creation and traditional content creation. The decisions made by the Kenyan workers under the direction of OpenAI represent a de facto AI training policy. That de facto policy will evolve, and it is unknown whether OpenAI will ever summarize or publish a training policy.

The GPT-3 model has more than 175 billion machine learning parameters, and GPT-4 will likely measure its parameters in the trillions, which should noticeably improve the model's ability to answer questions accurately.

When an AI service provider eventually publishes its AI training policy, it will help content creators make sense of the output: what reflects intentional policy, and what is AI bias or error that needs to be corrected. AI training won't ever filter out everything.

Regulation Is Probably Inevitable

If self-regulation and disclosure don't occur, you can be sure legislative bodies will begin to require disclosures. In 2021, the European Union proposed a regulatory framework for AI, the AI Act.

Awareness of AI-related issues is ramping up faster than awareness of Internet-related issues did. Similar events occurred in the early days of the Internet, when privacy was a concern. The Electronic Frontier Foundation and TRUSTe (now TrustArc) advocated for personal liberties and voluntary disclosures to protect the privacy and rights of individuals. Eventually, privacy policies became standard fare on websites. Now, laws and regulations like the GDPR and CCPA define much of what can be done with personally identifiable information.

We are quickly entering new territory, and taking the necessary steps to prepare for what may come is essential. Let's learn from others who have already traveled a similar journey.


Ken Crutchfield is Vice President and General Manager of Legal Markets at Wolters Kluwer Legal & Regulatory U.S., a leading provider of information, business intelligence, regulatory and legal workflow solutions. Ken has more than three decades of experience as a leader in information and software solutions across industries. He can be reached at ken.crutchfield@wolterskluwer.com.
