Ownership in AI: Who is responsible for the benefits and downsides of AI’s output?

Constantly evolving and ever more popular, generative AI is being used by more individuals and organisations. Text, digital art, code and even music is being created with the help of Large Language Models (LLMs) and AI chatbots such as ChatGPT. But who is the author, and importantly, who is the owner? Gauri Kohli writes.

Things move quickly in the generative AI world. This fast-paced evolution, driven by the launch of new tools such as DALL-E 2, Bard and the recently released GPT-4, is creating complex issues surrounding the output generated by these LLMs, including ownership, attribution and copyright.
One perspective is that ownership resides with the companies that invest their resources and expertise in creating these LLMs and are responsible for their development. However, LLM users also play a key role in determining the output: they feed in the queries, prompts or context that guide the model’s responses. It can be argued that their input is essential to generating the output, and that they should therefore receive some level of ownership.

Answering the question of who owns the output is tricky, according to Professor David Epstein, Executive Director, Susilo Institute for Ethics in the Global Economy at the Questrom School of Business, Boston University. “There is no law against integrating things that we have read and interpreting that information and creating a new narrative. And that is what is newly copyrighted,” he says.

“Using that model, as long as ChatGPT does not quote or use sources materially unchanged, then the AI is the owner of that. If a user produces a paper or statement lifted directly or materially unchanged from the AI, then [the AI] should be referenced as the rightful owner of that material.”

Other experts also suggest that the question of ownership is entirely dependent on where you are located. Dr Andres Guadamuz, Reader in Intellectual Property Law at the University of Sussex, notes that in some countries, the outputs have no copyright and they’re in the public domain, whereas in others, the question remains open to interpretation. In the UK, for instance, the output belongs to the person who made it possible for the work to be created.

University of Oxford researchers, in collaboration with international experts, recently published a study in Nature Machine Intelligence addressing the complex ethical issues surrounding responsibility for outputs generated by LLMs. The study, co-authored by an interdisciplinary team of experts in law, bioethics, machine learning and related fields, delves into the potential impact of LLMs in critical areas such as education, academic publishing and intellectual property.

While other studies have focussed primarily on harmful consequences and AI responsibility, the paper diverges. To claim intellectual ownership credit, or authorship, for a creation, a person has to put in a certain amount of skill and effort, explains the paper’s joint first author Dr Brian D Earp. Therefore, they must also be able to take responsibility for the creation, producing what he sees as a paradox for outputs of human-prompted generative AI.

“Suppose you instruct an LLM to write an essay based on a few keywords or bullet points. Well, you won’t have expended much effort, or demonstrated any real skill, and so as the human contributor to the essay, you can’t really claim intellectual ownership over it,” he says.

“But at the same time, the LLM that actually produced the essay can’t take moral responsibility for its creation, because it isn’t the right kind of agent.”

As a consequence, neither entity can take full ownership of the output. Accordingly, much of the creative work generated in the coming years may, strictly speaking, be author-less.

What Dr Earp and fellow joint first author Dr Sebastian Porsdam Mann, with their collaborators, are now considering is the question of credit or blame for outputs of LLMs that have been specifically trained on one’s own prior, human-generated writing. “We argue that if a human uses a personalised LLM, trained on their own past original writing to generate new ideas or articles, then, compared to using a general-purpose LLM, the human in such a case would deserve relatively more credit and should be able to claim at least partial ownership of the output,” observes Dr Earp.

Can AI replace writers and researchers?
While there are issues with the accuracy and bias of the materials that AI platforms generate, there is growing speculation that these platforms could replace some of the work of writers, analysts, and other content creators. “We need to start considering that an increasing number of works are going to be generated with AI, and short of widespread use of AI detectors, this is a reality that we will have to be content with,” said University of Sussex’s Dr Guadamuz.

Experts like Professor Epstein at Boston University believe it will replace much of the work now done by humans. “All those jobs of writers, analysts and other content creators are at risk, and it is unclear that we will need much more content that would employ those replaced. In other words, it is unlikely that work products will expand at the rate that people are replaced to take up the slack,” he says.

As far as inaccuracies and biases are concerned, ideally humans will provide oversight of the AI content generated to ensure the message they are trying to get out is both accurate and unbiased, or biased in the way they want to communicate their opinions. “People will become the editors of the AI instead of the other way around,” adds Professor Epstein.

Experts are also debating whether LLMs should be used for processes and fields that require critical decisions, such as medical care, law or finance.

Who gets the blame for damaging content?
As frequent users of generative AI already know, LLMs will at times confidently provide inaccurate or outright false information, a phenomenon known as “hallucination”. Image generators, meanwhile, have tended to struggle with details such as fingers. More sinisterly, however, these tools provide opportunities for fraud and hoaxes.
In March, an image of Pope Francis wearing a white, puffy jacket went viral, and many believed it was real before later discovering it was actually created using AI image generator, Midjourney. A month later, music artist Drake appeared to have another hit single on his hands, except he didn’t write or perform it. An AI doppelganger did.

The question of who is responsible when generative AI produces unwanted or harmful output, whether intentionally or unintentionally, remains open-ended. Should the generative AI, the company behind it, or the user who posed the query be liable?

“I believe that the one who publishes this content is liable, however it is generated. The publisher and author are responsible for what is published now, so that should not change just because it is generated by AI,” says Professor Epstein.

However, Dr Guadamuz, whose main areas of research are artificial intelligence and copyright, says the answer will depend on the situation. In its terms of service, OpenAI claims it is not liable. With the consistent growth of generative AI and its expanding use, the issue of LLM output and IP ownership is set to become even more complex.

This article was from the QS Insights Magazine, Issue 3. Read the full edition.

Learning in the metaverse

Universities in the US are starting to tread carefully into the metaverse, and while some are excited at the potential benefits in student engagement, others warn against getting too carried away with the new tech. Eugenia Lim writes.

In recent months, it has become clear that the metaverse is further from being realised than initially touted. Among the many promises of how it would revolutionise online interactions, the interoperability of identity, data and digital assets has been most commonly lauded. However, it remains to be seen what exactly this will look like and how it will be experienced en masse.

In the higher education space, universities are just beginning to gain ground in the metaverse. More classes are being taught with virtual reality, and various institutions are establishing their digital twin campuses. However, many of these efforts are still in the pilot testing phase.

Early studies indicate learning in the metaverse can increase student engagement, collaboration and retention, leading proponents to believe that a virtual reality (VR) headset could very well be on the back-to-school shopping list for university students.

Metaversity poster child

In the United States, Atlanta’s Morehouse College has been the poster child for learning in the metaverse. In 2021 it piloted a Metaversity, a fully spatial 3D digital twin campus of the physical building and its grounds.

In the midst of the COVID-19 pandemic, Muhsinah Morris, Assistant Professor of Education and Morehouse’s “Metaversity” Director, sought more engaging teaching solutions to lower student dropout rates from her classes being taught remotely. Her initial experiment to teach an inorganic chemistry class in virtual reality eventually led Morehouse College to increase its offerings to over a dozen classes taught in the metaverse.

Most notably, as a historically Black liberal arts college, it now offers the first ever VR Black history class, where students can explore an “underground railroad” museum and experience what it might have been like on a slave ship in the 1830s.

Morehouse College’s latest survey shows that 90 percent of students who have taken Metaversity courses say they were more effective than anything else they had participated in. To top it all off, students achieved an increase of over 11.9 percent in their grades and a 10 percent increase in attendance rates, better than they did with remote learning alone.
“What this says to me is that our students attend class at a much higher rate because they’re more engaged and excited to come to class,” Professor Morris said in an interview with local media outlet the Saporta Report.

“They are fully immersed in this digitally stimulating world. They are going back in time, going through the human body or going forward into a futuristic world that they could not go to until now.”

Seeing double

Digital twin campuses feature front and center in most universities’ efforts to establish their presence in the metaverse. Leading the charge is Iowa-based VR company VictoryXR, which is also behind Morehouse College’s digital twin campus.

The company says digital twin metaversities allow users to experience their campus grounds in virtual reality, within the metaverse. These 3D, hyper-realistic spaces are designed to feel as immersive and realistic as a physical university.

VictoryXR also has a hand in assisting Meta to build 10 digital campuses affiliated with real universities. This is part of the social media giant’s US$150 million investment into its Meta Immersive Learning project, which seeks to take education to virtual reality environments.

To date, VictoryXR has signed up 55 colleges and universities, mostly in the United States, according to the company’s CEO Steve Grubbs in an interview with Forbes. Each college starts with a pilot to test the proof of concept with 50-100 students. Grubbs estimates there are more than 2,500 students currently experiencing Metaversity at a higher education level.

Tread carefully

University of Maryland Global Campus (UMGC) is one of the institutions in the midst of running a pilot test with VictoryXR. While the college also has a digital twin campus set up, its Pilot Program Manager, Daniel Mintz, is quick to warn against getting carried away with the new tech.

“I think the metaversity is a distraction, I don’t need to have a student that can ‘walk’ from here and ‘walk’ across the street and go to Yale,” he says, referring to how metaversities could be connected in virtual reality.

“Our university is going to invest in this if we think student retention will go up, not because it sounds like a nice thing to be true.”

Mintz says UMGC’s focus will remain on how learning in the metaverse impacts the success rate and the retention rate for individual students. Its pilot programme findings have been largely positive, recording higher re-enrolment rates, increased engagement with faculty, as well as academic success. However, Mintz explains that the university is still testing its operational ability. “The downside is, we haven’t figured out at scale, what we’re going to do, and how we’re going to deal with the financing of it.”

In the first year of the UMGC immersive programme pilot, the college prepared about a dozen different courses, ranging from Forensic Biology to the Foundations of Oral Communication. However, the rollout is limited to just 12 students a class, a little under half the size of a traditionally taught class. Part of the university’s limitation is cost: UMGC pays for all the VR headsets, which students borrow for the duration of the class. It also has to budget for software licensing.

When asked if the metaverse could replace current learning arrangements, Mintz says it is “slow going”, especially since the penetration of VR headset use remains relatively low.
Universities will also have to decide whether to buy or build their VR capabilities as they expand their course offerings. While UMGC is making plans to manage and update new course content, it is still early days. It offers a certificate programme in which students learn to create virtual and augmented reality content, with the aim of producing more VR content for future classes.

Mintz concludes: “Over time, we hope to use that class to develop content for our immersive classes. But that’s not today. That’s tomorrow.”

This article was from the QS Insights Magazine, Issue 4. Read the full edition.

SMU computing dons receive global recognition for outstanding contributions in software engineering and artificial intelligence

Professor David Lo and Associate Professor Akshat Kumar from the School of Computing and Information Systems (SCIS) have been recognised for their outstanding contributions and accomplishments in the fields of software engineering and artificial intelligence respectively.

Professor David Lo has been awarded the 2021 IEEE CS TCSE Distinguished Service Award for his extensive and outstanding service to the software engineering community in his many roles in major software engineering conferences and journals. He is the first in Singapore and second in Asia to have received this prestigious award.

The IEEE Computer Society is the world’s largest professional organisation devoted to computer science, and the Technical Council on Software Engineering (TCSE) is the voice of software engineering within the IEEE and the Computer Society. TCSE aims to advance awareness of software engineering and to support education and training through conferences, workshops, and other professional activities that contribute to the growth and enrichment of software engineering academics and professionals.

Associate Professor Akshat Kumar has been named a Senior Member of the Association for the Advancement of Artificial Intelligence (AAAI). He is among nine worldwide to achieve this recognition, and the only academic in Singapore and Asia to be named among the 2021 honourees.

Senior Member status recognises AAAI members who have achieved significant accomplishments within the field of artificial intelligence. To be eligible for nomination, candidates must have been members of AAAI for at least five consecutive years and active in the professional arena for at least ten years.

AAAI is a scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines. It aims to promote research in, and responsible use of, artificial intelligence.

Professor Lo said, “I am honoured and humbled to receive the 2021 IEEE CS TCSE Distinguished Service Award. I would like to thank the hundreds of colleagues whom I have worked with in conference and journal organizations and to SCIS and SMU for their support. It has been a fun and rewarding journey to work together with many wonderful colleagues in SCIS, SMU, Singapore, and from across the globe to co-organize more than 30 international conferences. I especially fondly remember the conferences that were held at the SMU campus. Thank you very much SCIS and SMU for supporting these events!”

Prof Lo’s research sits at the intersection of software engineering and data science, also known as software analytics. It encompasses socio-technical aspects and the analysis of different kinds of software artefacts, such as code, execution traces, bug reports, Q&A posts, user feedback and developer networks, as well as the interplay between them. He designs data science solutions that transform passive data into tools that improve developer productivity and system quality, and generate new insights.

Prof Lo has published more than 400 papers in refereed conferences and journals. His research has created impact in several ways: collectively, his works have attracted much interest from the research community and inspired many subsequent studies that push the frontiers of knowledge in software engineering and data science. This is evidenced by the more than 16,000 citations listed on Google Scholar, corresponding to an H-index of 71.

In addition to his current line of research work on software analytics, Prof Lo is keen to solve an emerging problem — how best to adapt software engineering processes and tools, currently used to design conventional software, for AI system development. AI is advancing rapidly and has been, or will be, incorporated into many systems that humans interact with daily, such as self-driving cars. His immediate goal is to investigate and characterise the limits of current best practices and tools as applied to AI system development, and to design novel solutions that address those limitations.

Associate Professor Akshat Kumar said, “I am greatly honoured to be selected as a Senior Member of AAAI. I am fortunate enough to have great mentors, students, and collaborators over the course of my career, and an intellectually stimulating work environment at SMU’s School of Computing and Information Systems. I am very thankful for their continued support and collaboration which are invaluable for my research and academic career.”

Prof Kumar’s research is in the area of planning and decision-making under uncertainty, with a focus on multiagent systems and urban system optimisation. His work addresses our increasingly interconnected society and urban environments, from personal digital assistants to self-driving taxi fleets and autonomous ships, and develops computational techniques that allow such complex ecosystems of autonomous agents to operate in a coordinated fashion. Over the past few years, Prof Kumar’s work has addressed challenges in these diverse urban settings, including scalability to thousands of agents, uncertainty and partial observability, and resource-constrained optimisation.

In addition to his academic contributions, Prof Kumar participated in the Fujitsu-SMU Urban Computing and Engineering Corporate Lab from 2014 to 2019. He and his collaborators designed maritime simulators and novel intelligent scheduling algorithms that can coordinate vessel traffic in the Singapore Straits for better safety of navigation. These simulators and approaches are based on studying real location data for ships entering Singapore waters over a long period of time. Results of these studies have appeared in leading AI conferences.

Prior to joining the School of Information Systems (the former name of SCIS) in 2014, Prof Kumar was a research scientist at the IBM research lab in New Delhi. He obtained his Bachelor’s degree from the Indian Institute of Technology Guwahati, India, and his Master’s and PhD from the University of Massachusetts Amherst, all in computer science.

Over his decade-long career in AI, Prof Kumar has published more than 40 papers in refereed conferences and journals.

Prof Kumar’s work has received numerous awards, including the Best Dissertation Award at the International Conference on Automated Planning and Scheduling (ICAPS 2014) and a runner-up award at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013). His work has also received the Outstanding Application Paper Award at ICAPS 2014 and the Best Paper Award in the computational sustainability track of the 2017 AAAI Conference on Artificial Intelligence. All of these are top-tier conferences in the field of AI.

At SMU, he was awarded the Lee Kong Chian Fellowship in 2017 for his sustained research contributions. Looking ahead, Prof Kumar sees multiagent systems becoming ever more relevant with the adoption of the Internet of Things. He is particularly excited by the research challenges that arise with such unprecedented connectivity, such as dealing with the problem of scale, ensuring safe cohabitation of humans and autonomous agents, and ensuring coordination in the presence of both cooperating and competing agents.