Political Risks in Artificial Intelligence: A Comparative Analysis
https://doi.org/10.53658/RW2025-4-4(18)-225-238
Abstract
The study presented in this article aims to identify systemic threats to political stability and digital sovereignty in the Republic of Uzbekistan arising from fragmented legal regulation of artificial intelligence (AI). A comparative analysis of the regulatory models of Uzbekistan, Russia, and Kazakhstan revealed critical gaps in Uzbekistan's AI legal framework: the absence of specialized legislation on the use of AI, of ethical standards, of an explicit prohibition on manipulative techniques such as deepfakes, of risk assessment mechanisms, of a clear delineation of liability for harm, and of a developed national infrastructure with effective training tools. Viewed through Marshall McLuhan's media ecological theory, which treats AI as an "external brain," these gaps create a vicious cycle of interrelated political risks. The population's cognitive vulnerability, described as "brain rot," combined with unregulated synthetic media (deepfakes), creates the conditions for mass manipulation; technological dependence on foreign platforms and the outflow of specialists lead to the loss of digital sovereignty; and the uncritical deployment of AI in the public sector without accountability mechanisms undermines institutional trust. The article provides evidence that maintaining the current regulatory approach turns AI from a tool for development into a source of systemic vulnerability. Minimizing these risks requires the prompt adoption of comprehensive legislation that takes into account best practices from neighboring countries and addresses current challenges in the legal regulation of AI.
About the Author
Anton V. Vasilenko, Independent Researcher, Moscow, Russian Federation
References
1. Vinogradova E.A. Potential Threats of Unauthorized Use of Political Deepfakes during Political Elections: International Experience. Mirovaya Politika [World Politics], 2024; 3:44–60 [In Russian]. https://doi.org/10.25136/2409-8671.2024.3.71519. EDN: KNTVCO. Available from: https://nbpublish.com/library_read_article.php?id=71519.
2. Grgić-Hlača N. et al. Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency. 2024:1–15 [In English]. https://doi.org/10.1145/3544548.3581025.
3. Castells M. The Information Age: Economy, Society, and Culture. Oxford: Blackwell, 2000. 608 p. [In English].
4. McLuhan M. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964 [In English].
5. Postman N. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. New York: Penguin Books, 1985 [In English].
6. Rădulescu B.-G. The Threat of Algorithmic Populism: Intelligence Strategies for Safeguarding Democracy. Intelligence Info. 2025; 8(1):33–49 [In English]. Available from: https://www.intelligenceinfo.org/the-threat-of-algorithmic-populism-intelligence-strategies-for-safeguarding-democracy/.
7. The Political Economy of Attention. Annual Review of Anthropology. 2023; 52:287–304 [In English]. Available from: https://www.annualreviews.org/content/journals/10.1146/annurev-anthro-101819-110356.
8. Toffler A. The Third Wave. New York: Bantam Books, 1980 [In English].
9. Yousef A.M.F., Alshamy A., Tlili A., Metwally A.H.S. Demystifying the New Dilemma of Brain Rot in the Digital Era: A Review. Brain Sci. 2025; 15:283 [In English]. https://doi.org/10.3390/brainsci15030283.
For citations:
Vasilenko A.V. Political Risks in Artificial Intelligence: A Comparative Analysis. Russia & World: Sc. Dialogue. 2025;(4):225-238. (In Russ.) https://doi.org/10.53658/RW2025-4-4(18)-225-238