Ethical issues concerning Gen-AI

Reading time: 6 minutes

Introduction

The term ‘AI’ has recently been surrounded by an enormous amount of hype. Many companies and products claim to use AI that will revolutionize their industries, boost productivity, and push the boundaries of what’s possible with technology, even though they are often simply employing basic algorithms or automation. On the other hand, a catastrophic vision of the end of humanity caused by artificial intelligence taking over the world is strongly present in the public debate.

What do we mean by AI?

First, it is worth clarifying what the term ‘AI’ currently means in this context. Usually, it refers to systems such as ‘ChatGPT’, ‘Copilot’, or ‘Gemini’: in other words, generative AI.

According to Wikipedia, generative artificial intelligence (Gen-AI) is a type of AI capable of generating new data, such as text, images, or videos using generative models. These models learn the patterns and structure of their input training data and then generate new data with similar characteristics, often using deep neural networks. Of course, this is very far from our intuition about what human intelligence is. 

AI that can perform any intellectual task that a human can is called Artificial General Intelligence (AGI). AGI is a hypothetical future AI system that could match or exceed human intelligence across a wide range of cognitive tasks. 

Superintelligence refers to a hypothetical artificial intelligence that would vastly surpass human cognitive abilities in virtually all domains. It is this type of AI that features in public debates about AI posing an existential threat to humanity if its goals are not carefully aligned with human values. For now, these are considerations in the realm of futurology and science fiction (which does not mean they are not important!).

So, for now, we are dealing with Gen-AI tools that are designed for specific tasks and lack broad understanding and adaptability. Nonetheless, as this technology rapidly advances and becomes more widely adopted, it is crucial to examine the key ethical issues surrounding its development and use.

Privacy and Data Security

One of the primary ethical issues with generative AI is the potential for privacy violations and data breaches. These AI models are trained on vast amounts of data, which may include personal information. If proper safeguards are not in place, this sensitive data could be exposed or misused. Companies developing and using generative AI must prioritize data privacy, implement robust security measures, and adhere to relevant regulations.
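
As a concrete illustration of one such safeguard, below is a minimal Python sketch that scrubs obvious personal identifiers from text before it enters a training corpus. The patterns and the redact_pii helper are illustrative assumptions, not any company’s actual pipeline; production systems would use dedicated PII-detection tooling and human review.

import re

# Two illustrative patterns only; real pipelines rely on dedicated
# PII-detection libraries and human review, not hand-written regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    # Replace anything matching a known PII pattern with a typed
    # placeholder before the text is added to a training corpus.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact_pii("Reach Jan at jan.kowalski@example.com or +48 123 456 789."))
# -> Reach Jan at [EMAIL REDACTED] or [PHONE REDACTED].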

Recently, OpenAI’s CTO Mira Murati sparked controversy during an interview with The Wall Street Journal about the company’s new video-generating AI model, Sora. When pressed about the sources of training data used for Sora, Murati appeared uncertain, stating that she wasn’t sure if videos from platforms like YouTube, Instagram, or Facebook were used. Instead, she referred to “publicly available data” and avoided further questions on the topic. This lack of transparency regarding the training data has raised concerns in the AI community.

Bias and Discrimination

Generative AI models can perpetuate and even amplify biases present in the data they are trained on. If the training data contains societal biases, the AI-generated content may reflect and reinforce these prejudices, leading to discriminatory outcomes. AI developers need to use diverse and unbiased datasets, regularly audit their models for bias, and take steps to mitigate any identified issues.

One example of bias and discrimination in generative AI is the tendency for image generation models to overrepresent and reinforce harmful stereotypes when depicting people of certain genders, races, or ethnicities.

For instance, a study by researchers at UNESCO found pervasive gender biases in generative AI systems. When prompted to generate images of people in various professions, the AI models tended to depict men more frequently in high-prestige jobs like doctors and engineers, while women were more often shown in lower-status roles like teachers and nurses.
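
An audit along these lines can be sketched in a few lines of Python. Everything below is hypothetical scaffolding: generate stands in for whatever image or text model is being tested, and classify for the human or automated annotation step that labels each output.

import random
from collections import Counter

def audit_gender_balance(generate, classify, professions, samples=100):
    # For each profession, prompt the model `samples` times and tally the
    # apparent gender of the person depicted. `generate` and `classify`
    # are stand-ins for the model API and the annotation step an auditor
    # would actually use.
    report = {}
    for job in professions:
        tally = Counter(
            classify(generate(f"a photo of a {job}")) for _ in range(samples)
        )
        report[job] = {label: count / samples for label, count in tally.items()}
    return report

# Toy demo with stubbed-out model and classifier, just to show the shape
# of the audit; real runs would plug in an actual generator.
demo_generate = lambda prompt: prompt
demo_classify = lambda output: random.choice(["man", "woman"])
print(audit_gender_balance(demo_generate, demo_classify, ["engineer", "nurse"], samples=20))

A report showing, say, 90% ‘man’ for ‘engineer’ and 90% ‘woman’ for ‘nurse’ would flag exactly the kind of skew the UNESCO study describes.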

Misinformation and Deepfakes

The realistic content generated by AI has the potential to be used for spreading misinformation and creating convincing deepfakes. These fake images, videos, or texts can be used to manipulate public opinion, defame individuals, or even influence political processes. Combating the misuse of generative AI for misinformation requires a combination of technological solutions, such as watermarking and detection tools, as well as public awareness and media literacy initiatives.
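
On the detection side, one published idea for text is statistical watermarking, such as the ‘green list’ scheme of Kirchenbauer et al. (2023): the generator is nudged toward a pseudorandom subset of the vocabulary, leaving a trace a verifier can test for. The toy Python sketch below illustrates only the verification step; the function names are mine, and real watermarking systems are vendor-specific.

import hashlib
import math

def green_list(prev_token, vocab, fraction=0.5):
    # Deterministically pseudo-shuffle the vocabulary using the previous
    # token as a seed, and take the top `fraction` as the "green list".
    ranked = sorted(
        vocab,
        key=lambda t: hashlib.sha256((prev_token + "|" + t).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def watermark_z_score(tokens, vocab, fraction=0.5):
    # Count how many tokens fall inside the green list induced by their
    # predecessor; assumes at least two tokens.
    n = len(tokens) - 1
    hits = sum(
        tok in green_list(prev, vocab, fraction)
        for prev, tok in zip(tokens, tokens[1:])
    )
    # z-score against the binomial null hypothesis (unwatermarked text)
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))

On ordinary text the score hovers around zero; text from a generator that consistently favored its green lists scores far above common thresholds (e.g. z > 4), which is what detection tools test for.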

One notable example of a deepfake created with generative AI that spread widely online was the fake image of an explosion near the Pentagon in Washington, D.C. in May 2023. The realistic-looking image, which showed fire and smoke billowing from an apparent blast close to the U.S. Department of Defense headquarters, quickly went viral on social media platforms like Twitter. Some mainstream Indian TV news channels even broadcast the image along with reports about an alleged attack.

Intellectual Property and Copyright

Generative AI raises concerns about intellectual property rights and copyright infringement. As these models are trained on existing content, there are questions about the ownership of AI-generated works and the potential for plagiarism. Clear legal frameworks and industry standards need to be developed to address these issues and protect the rights of content creators.

One prominent example is the lawsuit filed by Getty Images against Stability AI, the company behind the popular AI image generator Stable Diffusion.

In the lawsuit, Getty Images alleges that Stability AI used millions of copyrighted images from its database to train Stable Diffusion without proper licensing or permission. Getty claims this constitutes copyright infringement, as the AI model can generate images that are derivative works based on the copyrighted training data.

Transparency and Accountability

The lack of transparency in how generative AI models arrive at their outputs is another significant ethical concern. Without clear explanations of the decision-making process, it becomes difficult to hold AI systems accountable for their actions. Developers must prioritize transparency, provide explanations for AI-generated content, and establish mechanisms for redress when issues arise.

One example of the lack of transparency and accountability in generative AI usage is the case of CNET quietly publishing AI-written articles for months without proper disclosure. In early 2023, it was discovered that the technology news website CNET had been using an AI tool to write articles on personal finance and other topics since November 2022. The articles contained errors and were not clearly labeled as AI-generated, leading to concerns about transparency and accuracy.

Job Displacement

As generative AI becomes more sophisticated, there are fears that it could lead to job displacement, particularly in creative fields. While AI may automate certain tasks, it is crucial to recognize that it also has the potential to create new jobs and enhance human creativity. For example, some Gen-AI-based tools like Jasper.ai and Copy.ai are marketed as AI-powered copywriting assistants that can generate blog posts, social media content, and ad copy. As these tools become more sophisticated, there is a risk that companies may rely more heavily on AI-generated content, potentially reducing the demand for human copywriters.

Addressing this issue requires a proactive approach to reskilling and upskilling workers, as well as fostering collaboration between humans and AI.

Summary

In conclusion, generative AI presents a range of ethical challenges that must be addressed to ensure its responsible development and deployment. By prioritizing privacy, mitigating bias, combating misinformation, protecting intellectual property rights, promoting transparency, and establishing effective governance frameworks, we can harness the potential of generative AI while minimizing its risks. As this technology continues to evolve, ongoing dialogue and collaboration among all stakeholders will be essential to navigating the ethical landscape of generative AI.

Image generated using Microsoft Designer. Prompt: “A puppy to attract the attention of LinkedIn users”