Algorithmic Power and Human Autonomy: Rethinking Consent in the Digital Age

Abstract

The rise of digital technology has engendered a profound paradox: while cyberspace democratises access to information and amplifies marginalised voices, it also enables untrammelled mechanisms of manipulation and control over individual autonomy. This essay explores the perplexing dynamic between technology, human autonomy, and cyberspace, arguing that current legal frameworks are inherently ill-equipped to deal with the new challenges posed by algorithmic governance and data asymmetry. Highlighting landmark developments such as the Cambridge Analytica scandal, the Schrems II judgment, and comparative regulatory models including the EU’s GDPR, Digital Services Act, and Digital Markets Act, the essay critically interrogates the fiction of “free consent” in the digital environment.

It further analyses how algorithmic decision-making can heavily influence domains such as employment, credit, and criminal justice, undermining procedural fairness and eroding individual autonomy. The essay asserts that consent-based regulatory paradigms are insufficient and argues for structural interventions, including data fiduciaries, algorithmic impact assessments, privacy-by-design mandates, and human-in-the-loop requirements, to protect human autonomy in cyberspace. Finally, the essay issues a clarion call for adaptive and reformative legal frameworks capable of addressing collective digital vulnerabilities, not just individual autonomy.

Technology, Human Autonomy, and Cyberspace: Navigating the Digital Conundrum

As shocking as it may appear, our digital profiles can be used by big tech companies to influence our electoral choices. This became evident in 2018[1], when whistleblower Christopher Wylie divulged the shocking reality of how the psychological profiles of millions of Facebook users had been harvested and weaponised to influence electoral choices. The scandal laid bare a fundamental paradox of our digital age: the same technology that democratises access to information and gives a platform to marginalised voices is also the tool that enables unbridled control and manipulation of ordinary citizens. As we move deeper into digital space, it is pertinent to determine whether technology is enhancing or eroding human autonomy, and it has become a legal imperative to balance digital freedom and human autonomy in that space. Human autonomy is the capacity for self-determination and meaningful choice, one of the rudimentary ideas underlying Article 21[2] of the Indian Constitution.

Technology’s Paradoxical Impact on Autonomy

There is a paradoxical relationship between technology and human autonomy: the technology that democratises access to information and gives voice to marginalised communities also creates challenges. Open-source software and encryption tools facilitate secure and private communication, resisting state surveillance. The internet’s architecture promised a secure, decentralised space where people could express their opinions, bypassing socio-economic and political barriers. Yet this profound potential coexists with debilitating control. The eminent scholar Shoshana Zuboff has lucidly shown[3] how technology harvests our behavioural data and how algorithms feed on it to predict and sway our choices. This is conspicuous across platforms such as Spotify, Netflix, and Instagram: by analysing our online behaviour, their algorithms recommend well-curated content that narrows the diversity of perspectives, constraining the intellectual autonomy necessary for democratic citizenship.

As countries grapple with the negative effects of technology, they have begun to counter them with legal frameworks. The most comprehensive attempt to restore individual autonomy over personal data, enshrining principles of consent, transparency, and data portability, is the European Union’s General Data Protection Regulation (GDPR)[4]. Yet this framework has its own limitations: when users face impenetrable privacy policies and dark patterns engineered to nudge them towards data sharing, labelling the result “consent” renders the model ineffective.

Consent – A Fictitious Concept and the Power of Algorithms

The legal concept of “consent” in digital regulation fails in cyberspace. Take-it-or-leave-it contracts, agreements with no bona fide negotiating power, dominate the digital landscape. Such adhesion contracts compel users to accept terms and conditions, since actually reading them would require an unrealistic investment of time: one study[5] calculated that reading all the privacy policies a user encounters annually would take 76 working days. At a time when everyone relies on online platforms for digital services, opting out of such privacy policies means social marginalisation. Platforms like Amazon, Google, and Facebook mediate social connections, yet consent to their terms and conditions looks more like coercion than choice. The problem is accentuated by algorithmic decision-making systems that determine creditworthiness, employment opportunities, insurance premiums, and bail decisions. These systems encode bias while claiming objectivity, operating as inscrutable black boxes, as Frank Pasquale critiqued in The Black Box Society[6] in examining algorithmic opacity and its impact on accountability. For example, when an algorithm denies someone a loan, the person cannot fathom the decision’s rationale, much less challenge its assumptions; the opacity of these systems fundamentally undermines procedural fairness and individual autonomy.

These tensions were highlighted in the recent case of Schrems II (2020)[7], in which the Court of Justice of the European Union invalidated the EU-US Privacy Shield framework, holding that US surveillance practices violated European citizens’ fundamental rights to privacy and data protection. The decision made clear that autonomy requires protection not just from private actors but also from state surveillance that often treats personal data as intelligence fodder. Similarly, companies like Meta have faced lawsuits over manipulative design features and algorithmic amplification of harmful content, reinforcing the notion that platforms should merely facilitate user choices, not architect them.

Regulations and Possible Future Frameworks

Globally, legal systems are grappling with how to preserve human autonomy in a rapidly evolving technological cyberspace. The most aggressive regulator has been the European Union, which introduced the Digital Services Act[8] and the Digital Markets Act to complement the GDPR and address platform power and content moderation. These instruments impose risk-based requirements on algorithmic systems, prohibit certain applications that contradict fundamental rights, and seek to maintain transparency and human oversight for high-risk uses.

The United States has taken a more sector-specific regulatory approach. The California Consumer Privacy Act[9] gives Californians certain data rights, while other states are considering legislation on algorithmic accountability. Though an example of federalism at work, this is also a patchwork approach to data protection. A more comprehensive approach has been adopted by China, which combines consumer data protection with extensive state surveillance and social credit systems, demonstrating that autonomy-focused regulation can coexist with authoritarian control. Despite the existence of multiple frameworks, none is sufficient.

Most of these frameworks tend to individualise autonomy, focusing only on personal data rights and individual consent while ignoring how networked technologies create collective vulnerabilities. Where large-scale disinformation campaigns use social media to influence election outcomes, for example, the harm goes beyond a violation of privacy to a collective attack on democratic autonomy.

Effective regulation must therefore move beyond consent-based frameworks towards structural interventions. To prevent prospective harm, legal frameworks should make it mandatory to build algorithmic systems that respect autonomy from inception, through approaches such as privacy by design[10], interpretability requirements for algorithms affecting rights, and human-in-the-loop mandates for consequential decisions. These approaches recognise that in cyberspace, autonomy must be prioritised and legally protected.

Conclusion

The interaction between technology, human autonomy, and cyberspace reveals the profound challenge the law faces in maintaining harmony among these concepts. Technology simultaneously liberates and constrains, empowers and manipulates. Obsolete legal frameworks, developed for physical space and industrial-age power dynamics, prove inadequate to address algorithmic governance, data asymmetries, and architectural control.

Effective legal frameworks must therefore move beyond individualised consent models towards a model that protects both individual and collective autonomy. The law must address technology not only as a tool that people use but also as an environment that people inhabit, an environment whose architecture shapes what autonomy itself means. The question we face in building the legal architecture for the digital age is this: will cyberspace be a space of human liberation or a space of unprecedented control?


[1] Carole Cadwalladr and Emma Graham-Harrison, “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica,” The Guardian (17 March 2018), available at: https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.

[2] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1 (Supreme Court of India).

[3] Shoshana Zuboff, The Age of Surveillance Capitalism (Profile Books 2019), https://profilebooks.com/work/the-age-of-surveillance-capitalism/.

[4] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation) [2016] OJ L 119/1.

[5] Aleecia M. McDonald and Lorrie Faith Cranor, “The Cost of Reading Privacy Policies,” I/S: A Journal of Law and Policy for the Information Society (2008), available at: https://kb.osu.edu/bitstream/handle/1811/72839/ISJLP_V4N3_543.pdf.

[6] Frank Pasquale, The Black Box Society (Harvard University Press 2015).

[7] Case C-311/18, Data Protection Commissioner v Facebook Ireland Ltd and Maximillian Schrems (“Schrems II”), Judgment of the Court (Grand Chamber), 16 July 2020, available at: https://curia.europa.eu/juris/liste.jsf?num=C-311/18.

[8] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services [2022] OJ L 277/1.

[9] California Consumer Privacy Act 2018 (Cal Civ Code § 1798.100).

[10] Ann Cavoukian, “Privacy by Design: The 7 Foundational Principles” (2011), available at: https://www.ipc.on.ca/wp-content/uploads/resources/7foundationalprinciples.pdf.


Author’s Details

Name: Shriya Awasthi

1st-year law student, National Law University, Shimla

Mobile: 9877425295

Email: Shriyaawasthi1808@gmail.com
