Legal Consequences of Generative AI on Copyright and Privacy

Generative AI technologies like ChatGPT, DALL-E, and MidJourney have set a new standard for automated creativity and problem-solving. While they contribute heavily to education, marketing, and entertainment, the rapid development of such technologies has raised alarms about intellectual property rights and privacy.

India’s existing legal frameworks, the Copyright Act, 1957 and the Information Technology Act, 2000, do not explicitly address the problems this kind of generation creates. These problems mirror an international debate over copyright in works created by AI and over the use of data in training procedures. This paper examines these problems in depth and proposes practical solutions to them.

Generative AI and Copyright Challenges

Redefining authorship and ownership

Copyright law, through Section 13 of India’s Copyright Act, protects the rights of human authors over creative works. But when a generative AI system produces music, art, or literature, ownership becomes complicated:

  • Should copyright vest in the developer of the AI, in the person providing the input, or should the work simply fall into the public domain?
  • Can the AI itself hold any rights or status as a creative author?

In the United States, Thaler v. Copyright Office (2022) settled the point that copyright law confers rights only on human authors. The UK and the EU have likewise not extended copyright to AI-generated works to date. India has made no specific provision for AI-generated content, leaving developers and users vulnerable to legal disputes.

Accidental infringement

Generative AI models are trained on huge datasets, often derived from copyrighted material available online. Even though AI does not literally copy these works, it may produce outputs that are substantially similar to them, giving rise to infringement claims. Some examples include:

  • Music generators producing melodies that are substantially similar to copyrighted songs.
  • AI art tools producing images that closely imitate the style of famous artists.

Under Section 51 of the Copyright Act, using copyrighted works without permission is considered infringement. However, in Indian law, whether AI training itself infringes on copyright is unclear, creating uncertainty for developers.

Licensing and dataset transparency

A third important issue is the unclear provenance of the datasets used to train AI systems. Many developers do not obtain proper licenses, which exposes them to liability when their AI outputs are monetized. Without clear legal stipulations, creators, developers, and users face immense difficulty in meeting copyright requirements.

The recommendations therefore include dataset transparency and a requirement that developers maintain a log of the licensed materials used to train their AI systems.
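To make the idea of such a log concrete, the following is a minimal sketch, in Python, of what a developer-maintained provenance record for training material might look like. The schema and field names (source_url, license, consent_obtained, and so on) are illustrative assumptions, not a format prescribed by any statute or regulator.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class TrainingDataRecord:
    """One entry in a developer-maintained log of training material.

    All field names are illustrative assumptions; no law or standard
    currently prescribes this exact schema.
    """
    source_url: str               # where the material was obtained
    license: str                  # e.g. "CC-BY-4.0", "commercially licensed", "unknown"
    rights_holder: str            # the author or publisher, where known
    contains_personal_data: bool  # whether the item includes personal data
    consent_obtained: bool        # relevant only if personal data is present
    date_collected: date          # when the material was added to the dataset

# Example: log one item and serialize the log for a later audit or dispute.
log = [
    TrainingDataRecord(
        source_url="https://example.com/articles/123",
        license="CC-BY-4.0",
        rights_holder="Example Publisher",
        contains_personal_data=False,
        consent_obtained=False,
        date_collected=date(2024, 1, 15),
    )
]

serializable = [
    asdict(record) | {"date_collected": record.date_collected.isoformat()}
    for record in log
]
print(json.dumps(serializable, indent=2))
```

Even a simple record of this kind would let a developer show, in an audit or a dispute, which materials were licensed and whether any personal data was collected with consent.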

Data collection and consent issues

Generative AI systems rely on vast amounts of data, much of which is collected without users’ explicit consent. This practice conflicts with international data protection regimes such as the European Union’s GDPR, which requires that data collection be based on informed consent.

The Digital Personal Data Protection Act, 2023 in India underlines the need for consent-based data processing, but it lacks explicit rules for generative AI. As a result, individuals whose data is used to train AI systems may be unaware of risks such as unauthorized profiling or identity theft.

Deepfake technology abuse

Deepfakes, a technology based on generative AI, pose a critical threat to privacy and security. By creating hyper-realistic but entirely fabricated content, deepfakes have been used for:

  • Spreading false information and propaganda.
  • Committing identity theft and financial fraud.
  • Creating non-consensual explicit videos to blackmail individuals.

India’s Information Technology Act, 2000 contains provisions to combat cybercrime but is ill-equipped to deal with AI-generated deepfakes. Section 66E (violation of privacy) and Section 67A (publication of sexually explicit material) can be invoked, but neither addresses the harm caused by deepfakes as a whole.

Personal data security issues

Generative AI tools can also leak sensitive personal information through poor training practices or malicious manipulation. This raises ethical and legal questions about developers’ responsibility to maintain data integrity and prevent misuse.

Cross-national comparative approaches

Several countries have been proactive in addressing the challenges posed by generative AI:

European Union

The European Union’s Artificial Intelligence Act, which remains under negotiation, aims to regulate AI systems through a risk-based approach. High-risk systems would be subject to strict requirements on privacy, dataset quality, and human oversight.

United States

The US has adopted a sectoral approach, applied mainly in the health and financial sectors. It has no overarching AI law; however, its courts have heard some of the most important copyright disputes, including Thaler v. Copyright Office.

China

China has enacted strict regulations for generative AI, requiring that all AI-generated material conform to the country’s ethical norms and that developers disclose all training data sources.

India can draw on these models to create a balanced approach that fosters innovation while protecting rights.

Proposed legal reforms in India

Copyright law amendment

To correct the deficiencies of the existing legal framework, the following amendments to the Copyright Act, 1957 are proposed:

  • Define authorship of AI-generated works, either by ascribing it to developers or users, or by placing such works in an entirely new intellectual property category.
  • Clarify fair-use exceptions for AI training in a way that does not prejudice the rights of copyright holders.

Data protection frameworks

The Digital Personal Data Protection Act must be expanded to include:

  • Explicit provisions on datasets used to train AI systems, including consent for the collection and anonymization of personal data;
  • Penalties for developers who obtain data unlawfully.

Deepfakes and abuse of AI

To curb the abuse of deepfakes, it is recommended that the government:

  • Explicitly prohibit non-consensual uses of deepfakes under the IT Act.
  • Develop AI detection tools that focus on identifying manipulated content and assist law enforcement agencies.

Responsible AI development

Governments and industry players need to collaborate to create an ethical framework for AI development that emphasizes openness, accountability, and inclusion.

The role of the judiciary and the legal profession

The judiciary is also an important contributor to the legal framework around generative AI. Judicial decisions can fill legislative gaps, as in Google India Pvt. Ltd. v. Visaka Industries Ltd., where the court analyzed intermediary liability under the IT Act. Legal professionals, likewise, need to keep pace with technological advancements to argue such cases effectively.

Conclusion

Generative AI holds great promise but also raises serious challenges: it can transform the world while unsettling traditional standards of legality. Addressing it requires a multi-faceted approach that encompasses legislative updates, judicial interpretation, and industry cooperation.

As an emerging centre for AI innovation, India needs to be proactive in building a comprehensive regulatory framework. Keeping innovation and accountability in balance can ensure that the capabilities of generative AI are fully realized while human rights are protected and ethical use is fostered.


Author: Mohammed Rayyan Azaz Ahmed Sonde, 2nd-year LLB student at Lords Universal College of Law, Mumbai
