
    Ownership in AI: Who is responsible for the benefits and downsides of AI’s output?

    Constantly evolving and ever more popular, generative AI is being used by more individuals and organisations. Text, digital art, code and even music are being created with the help of Large Language Models (LLMs) and AI chatbots such as ChatGPT. But who is the author and, importantly, who is the owner? Gauri Kohli writes.

    Things move quickly in the generative AI world. This fast-paced evolution, driven by the launch of new tools such as DALL-E 2, Bard and the recently released GPT-4, is creating complex issues surrounding the output generated through these LLMs, such as ownership, attribution and copyright.
    One perspective is that ownership resides with the companies that invest their resources and expertise in creating these LLMs and that are responsible for their development. However, LLM users also play a key role in determining the output. They feed in queries, prompts or context that guide the model’s responses. It can be argued that their input is essential in generating the output and that they should therefore get some level of ownership.

    Answering the question of who owns the output is tricky, according to Professor David Epstein, Executive Director, Susilo Institute for Ethics in the Global Economy at the Questrom School of Business, Boston University. “There is no law against integrating things that we have read and interpreting that information and creating a new narrative. And that is what is newly copyrighted,” he says.

    “Using that model, as long as ChatGPT does not quote or use sources materially unchanged, then the AI is the owner of that. If a user produces a paper or statement lifted directly or materially unchanged from the AI, then [the AI] should be referenced as the rightful owner of that material.”

    Other experts also suggest that the question of ownership is entirely dependent on where you are located. Dr Andres Guadamuz, Reader in Intellectual Property Law at the University of Sussex, notes that in some countries, the outputs have no copyright and they’re in the public domain, whereas in others, the question remains open to interpretation. In the UK, for instance, the output belongs to the person who made it possible for the work to be created.

    University of Oxford researchers, in collaboration with international experts, recently published a study in Nature Machine Intelligence addressing the complex ethical issues surrounding responsibility for outputs generated by LLMs. The study, co-authored by an interdisciplinary team of experts in law, bioethics, machine learning and related fields, delves into the potential impact of LLMs in critical areas such as education, academic publishing and intellectual property.

    While other studies have focussed primarily on harmful consequences and AI responsibility, the paper diverges, turning instead to questions of credit and authorship. To claim intellectual ownership credit, or authorship, for a creation, a person has to put in a certain amount of skill and effort, explains the paper’s joint first author Dr Brian D Earp. Therefore, they must also be able to take responsibility for the creation, producing what he sees as a paradox for outputs of human-prompted generative AI.

    “Suppose you instruct an LLM to write an essay based on a few keywords or bullet points. Well, you won’t have expended much effort, or demonstrated any real skill, and so as the human contributor to the essay, you can’t really claim intellectual ownership over it,” he says.

    “But at the same time, the LLM that actually produced the essay can’t take moral responsibility for its creation, because it isn’t the right kind of agent.”

    As a consequence, neither entity can take full ownership of the output. Accordingly, much of the creative work generated in the coming years may, strictly speaking, be author-less.

    What Dr Earp and fellow joint first author Dr Sebastian Porsdam Mann, with their collaborators, are now considering is the question of credit or blame for outputs of LLMs that have been specifically trained on one’s own prior, human-generated writing. “We argue that if a human uses a personalised LLM, trained on their own past original writing to generate new ideas or articles, then, compared to using a general-purpose LLM, the human in such a case would deserve relatively more credit and should be able to claim at least partial ownership of the output,” observes Dr Earp.

    Can AI replace writers and researchers?
    While there are issues with the accuracy and bias of the materials that AI platforms generate, there is growing speculation that these platforms could replace some of the work of writers, analysts and other content creators. “We need to start considering that an increasing number of works are going to be generated with AI, and short of widespread use of AI detectors, this is a reality that we will have to contend with,” says University of Sussex’s Dr Guadamuz.

    Experts like Professor Epstein at Boston University believe it will replace much of the work now done by humans. “All those jobs of writers, analysts and other content creators are at risk, and it is unclear that we will need much more content that would employ those replaced. In other words, it is unlikely that work products will expand at the rate that people are replaced to take up the slack,” he says.

    As far as inaccuracies and biases are concerned, ideally humans will provide oversight of the AI content generated to ensure the message they are trying to get out is both accurate and unbiased, or biased in the way they want to communicate their opinions. “People will become the editors of the AI instead of the other way around,” adds Professor Epstein.

    Experts are also debating whether LLMs should be used in processes and fields that require critical decisions, such as medical care, law or finance.

    Who gets the blame for damaging content?
    As frequent users of generative AI already know, LLMs will at times confidently provide inaccurate or outright false information, known as “hallucinations”. Image generators have likewise tended to struggle with details such as fingers. More sinisterly, however, these tools provide opportunities for fraud and hoaxes.
    In March, an image of Pope Francis wearing a white, puffy jacket went viral, and many believed it was real before later discovering it was actually created using AI image generator, Midjourney. A month later, music artist Drake appeared to have another hit single on his hands, except he didn’t write or perform it. An AI doppelganger did.

    Who is responsible when generative AI produces unwanted or harmful output, whether intentionally or unintentionally, remains an open question. Should the generative AI, the company behind it, or the user who posed the query be liable?

    “I believe that the one who publishes this content is liable, however it is generated. The publisher and author are responsible for what is published now, so that should not change just because it is generated by AI,” says Professor Epstein.

    However, Dr Guadamuz, whose main areas of research are artificial intelligence and copyright, says the answer will depend on the situation. In its terms of service, OpenAI claims it is not liable. With the consistent growth of generative AI and its expanding use, the issue of LLM output and IP ownership is set to grow even more complex.

    This article was from QS Insights Magazine, Issue 3.