REGULATORY CHALLENGES TO INVESTIGATING AI-DRIVEN CYBERCRIMES

Authors

  • Naeem Allah Rakha, Tashkent State University of Law

Keywords:

AI Cybercrimes, Digital Evidence, Cyber Law, International Frameworks

Abstract

The rapid rise of artificial intelligence (AI) has transformed many sectors, but it has also enabled new and complex forms of cybercrime. Criminals now use AI for deepfakes, automated phishing, ransomware, and botnet attacks that are harder to detect and to prove in court. This research examines the main regulatory challenges that impede effective investigation and prosecution of AI-driven cybercrimes. The study uses a doctrinal legal method, supported by comparative analysis of international and domestic laws and a review of open-source legal materials. The findings show that current laws are not prepared for AI’s unique risks. Liability remains unclear when AI systems misuse data, as seen in US copyright lawsuits against OpenAI and Microsoft. Deepfake cases, such as the fabricated video of Scarlett Johansson, show how AI tools threaten reputation, trust, and the integrity of legal evidence. A striking example came in April 2025, when a New York court confronted an AI-generated avatar attempting to argue a case, raising serious concerns about authenticity and accountability in legal settings. These cases expose gaps in the law, weak rules of evidence, and unclear standards for cross-border crimes. This paper argues that legal systems must adapt quickly. It calls for clear definitions of AI-enabled crime, new liability rules for autonomous systems, strict verification of AI-generated evidence, and stronger global cooperation. The value of this study lies in showing how quickly AI is reshaping both crime and law, and in proposing reforms that can build stronger, fairer, and more secure legal frameworks for the digital age.

References

Aleksandrowicz, M. (2025, March 19). Europol warns of AI-driven crime threats. https://www.reuters.com/world/europe/europol-warns-ai-driven-crime-threats-2025-03-18/

AllahRakha, N. (2024). Legal Frameworks for AI-Driven Cybercrime Prevention. Uzbek Journal of Law and Digital Policy, 2(6), 1–24. https://doi.org/10.59022/ujldp.253

AllahRakha, N. (2025). AI and Corruption: Legal Liability in Algorithmic Decision-Making. Access to Justice in Eastern Europe, 8(3), 303–326. https://doi.org/10.33327/AJEE-18-8.3-a000120

Muñoz, A. V. (2025). AI in the Crosshairs: Advancing Cybersecurity and Digital Forensics in the Era of Intelligent Threats. https://doi.org/10.13140/RG.2.2.34418.77769

Maio, A. (2025). Artificial Intelligence and Crime: The Dual Role of AI in Criminal Activity and Crime Prevention. Zenodo. https://doi.org/10.5281/ZENODO.16945903

Burton, J., Janjeva, A., Moseley, S., & Alice. (2025). AI and Serious Online Crime (CETaS Research Reports). Centre for Emerging Technology and Security. https://cetas.turing.ac.uk/publications/ai-and-serious-online-crime

Caldwell, M., Andrews, J. T. A., Tanay, T., & Griffin, L. D. (2020). AI-enabled future crime. Crime Science, 9(1), 14. https://doi.org/10.1186/s40163-020-00123-8

Carpenter, C. (2025). Whose [Crime] is it Anyway? Journal of International Criminal Justice, mqae055. https://doi.org/10.1093/jicj/mqae055

Chen, N. (2025). Stolen Stories or Fair Use? The New York Times v. OpenAI and the Limits of Machine Learning. Columbia Undergraduate Law Review. https://www.culawreview.org/ddc-x-culr-1/nyt-v-openai-and-microsoft

Christen, M., Burri, T., Kandul, S., & Vörös, P. (2023). Who is controlling whom? Reframing “meaningful human control” of AI systems in security. Ethics and Information Technology, 25(1), 10. https://doi.org/10.1007/s10676-023-09686-x

Custers, B., Lahmann, H., & Scott, B. I. (2025). From liability gaps to liability overlaps: Shared responsibilities and fiduciary duties in AI and other complex technologies. AI & SOCIETY, 40(5), 4035–4050. https://doi.org/10.1007/s00146-024-02137-1

Daniel, L. (2025, April 8). AI Avatars Replacing Human Lawyers In Court? Recent Case Says Not So Fast. Forbes. https://www.forbes.com/sites/larsdaniel/2025/04/08/ai-avatars-replacing-human-lawyers-in-court-recent-case-says-not-so-fast/

Daniele, L. (2024). Incidentality of the civilian harm in international humanitarian law and its Contra Legem antonyms in recent discourses on the laws of war. Journal of Conflict and Security Law, 29(1), 21–54. https://doi.org/10.1093/jcsl/krae004

Dathathri, S., See, A., Ghaisas, S., Huang, P.-S., McAdam, R., Welbl, J., Bachani, V., Kaskasoli, A., Stanforth, R., Matejovicova, T., Hayes, J., Vyas, N., Merey, M. A., Brown-Cohen, J., Bunel, R., Balle, B., Cemgil, T., Ahmed, Z., Stacpoole, K., … Kohli, P. (2024). Scalable watermarking for identifying large language model outputs. Nature, 634(8035), 818–823. https://doi.org/10.1038/s41586-024-08025-4

Farooq, A., & De Vreese, C. (2025). Deciphering authenticity in the age of AI: How AI-generated disinformation images and AI detection tools influence judgements of authenticity. AI & SOCIETY. https://doi.org/10.1007/s00146-025-02416-5

Fernandez‐Basso, C., Gutiérrez‐Batista, K., Gómez‐Romero, J., Ruiz, M. D., & Martin‐Bautista, M. J. (2025). An AI knowledge‐based system for police assistance in crime investigation. Expert Systems, 42(1), e13524. https://doi.org/10.1111/exsy.13524

Fidler, M. (2025). Fragmentation of International Cybercrime Law. Utah Law Review, 2025(3), 737–804. https://dc.law.utah.edu/cgi/viewcontent.cgi?article=1413&context=ulr

Fine, A., Le, S., & Miller, M. K. (2023). Content Analysis of Judges’ Sentiments Toward Artificial Intelligence Risk Assessment Tools. Criminology, Criminal Justice, Law & Society, 24(2), 31–46. https://scholasticahq.com/criminology-criminal-justice-law-society/

Gaeta, P. (2024). Who Acts When Autonomous Weapons Strike? Journal of International Criminal Justice, 21(5), 1033–1055. https://doi.org/10.1093/jicj/mqae001

Guembe, B., Azeta, A., Misra, S., Osamor, V. C., Fernandez-Sanz, L., & Pospelova, V. (2022). The Emerging Threat of Ai-driven Cyber Attacks: A Review. Applied Artificial Intelligence, 36(1), 2037254. https://doi.org/10.1080/08839514.2022.2037254

Härmand, K. (2023). AI Systems’ Impact on the Recognition of Foreign Judgements: The Case of Estonia. Juridica International, 32, 107–118. https://doi.org/10.12697/JI.2023.32.09

Hyslip, T., & Pittman, J. (2015). A Survey of Botnet Detection Techniques by Command and Control Infrastructure. Journal of Digital Forensics, Security and Law. https://doi.org/10.15394/jdfsl.2015.1195

Königs, P. (2022). Artificial intelligence and responsibility gaps: What is the problem? Ethics and Information Technology, 24(3), 36. https://doi.org/10.1007/s10676-022-09643-0

Krishna, V. V. (2024). AI and contemporary challenges: The good, bad and the scary. Journal of Open Innovation: Technology, Market, and Complexity, 10(1), 100178. https://doi.org/10.1016/j.joitmc.2023.100178

Kimbrough, J. L. (2025). Developing Lawyering Skills in the Age of Artificial Intelligence: A Framework for Legal Education. Journal of Technology Law & Policy, 29(1), 31–71. https://scholarship.law.ufl.edu/cgi/viewcontent.cgi?article=1242&context=jtlp

Lake, T. (2024, April 26). A school principal faced threats after being accused of offensive language on a recording. Now police say it was a deepfake. CNN. https://edition.cnn.com/2024/04/26/us/pikesville-principal-maryland-deepfake-cec/index.html

López-Borrull, A., & Lopezosa, C. (2025). Mapping the Impact of Generative AI on Disinformation: Insights from a Scoping Review. Publications, 13(3), 33. https://doi.org/10.3390/publications13030033

Lundberg, E., & Mozelius, P. (2025). The potential effects of deepfakes on news media and entertainment. AI & SOCIETY, 40(4), 2159–2170. https://doi.org/10.1007/s00146-024-02072-1

Mahardhika, V., Astuti, P., & Mustaffa, A. (2023). Could Artificial Intelligence be the Subject of Criminal Law? Yustisia Jurnal Hukum, 12(1), 1. https://doi.org/10.20961/yustisia.v12i1.56065

Malik, A. W., Bhatti, D. S., Park, T.-J., Ishtiaq, H. U., Ryou, J.-C., & Kim, K.-I. (2024). Cloud Digital Forensics: Beyond Tools, Techniques, and Challenges. Sensors, 24(2), 433. https://doi.org/10.3390/s24020433

Meurs, T., Cartwright, E., Cartwright, A., Junger, M., & Abhishta, A. (2024). Deception in double extortion ransomware attacks: An analysis of profitability and credibility. Computers & Security, 138, 103670. https://doi.org/10.1016/j.cose.2023.103670

Miranda, B. (2025, August 5). ‘We didn’t vote for ChatGPT’: Swedish PM under fire for using AI in role. The Guardian. https://www.theguardian.com/technology/2025/aug/05/chat-gpt-swedish-pm-ulf-kristersson-under-fire-for-using-ai-in-role

Mohsendokht, M., Li, H., Kontovas, C., Chang, C.-H., Qu, Z., & Yang, Z. (2024). Decoding dependencies among the risk factors influencing maritime cybersecurity: Lessons learned from historical incidents in the past two decades. Ocean Engineering, 312, 119078. https://doi.org/10.1016/j.oceaneng.2024.119078

Montgomery, B. (2025, July 1). AI companies start winning the copyright fight. The Guardian. https://www.theguardian.com/technology/2025/jun/30/ai-techscape-copyright

Nastoska, A., Jancheska, B., Rizinski, M., & Trajanov, D. (2025). Evaluating Trustworthiness in AI: Risks, Metrics, and Applications Across Industries. Electronics, 14(13), 2717. https://doi.org/10.3390/electronics14132717

Nerantzi, E., & Sartor, G. (2024). ‘Hard AI Crime’: The Deterrence Turn. Oxford Journal of Legal Studies, 44(3), 673–701. https://doi.org/10.1093/ojls/gqae018

Pantanowitz, L., Hanna, M., Pantanowitz, J., Lennerz, J., Henricks, W. H., Shen, P., Quinn, B., Bennet, S., & Rashidi, H. H. (2024). Regulatory Aspects of Artificial Intelligence and Machine Learning. Modern Pathology, 37(12), 100609. https://doi.org/10.1016/j.modpat.2024.100609

Park, P. S., Goldstein, S., O’Gara, A., Chen, M., & Hendrycks, D. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5), 100988. https://doi.org/10.1016/j.patter.2024.100988

Pesetski, A. (2020). Deepfakes: A New Content Category for a Digital Age. WILLIAM & MARY BILL OF RIGHTS JOURNAL, 29(2), 503–532. https://scholarship.law.wm.edu/cgi/viewcontent.cgi?article=1965&context=wmborj

Popa, C., Pallath, R., Cunningham, L., Tahiri, H., Kesavarajah, A., & Wu, T. (2025). Deepfake Technology Unveiled: The Commoditization of AI and Its Impact on Digital Trust (arXiv:2506.07363). arXiv. https://doi.org/10.48550/arXiv.2506.07363

Rahman-Jones, I. (2025, August 28). AI firm says its technology weaponised by hackers. BBC. https://www.bbc.com/news/articles/crr24eqnnq9o

Runyon, N. (2025, May 8). Deepfakes on trial: How judges are navigating AI evidence authentication. Thomson Reuters. https://www.thomsonreuters.com/en-us/posts/ai-in-courts/deepfakes-evidence-authentication/

Sandoval, M.-P., De Almeida Vau, M., Solaas, J., & Rodrigues, L. (2024). Threat of deepfakes to the criminal justice system: A systematic review. Crime Science, 13(1), 41. https://doi.org/10.1186/s40163-024-00239-1

Sherman, N., & Hooker, L. (2025, June 25). Judge backs AI firm over use of copyrighted books. BBC. https://www.bbc.com/news/articles/c77vr00enzyo

Steven, G. (2021). It’s Time to Put Character Back into the Character-Evidence Rule. Marquette Law Review, 104(3), 709–811. https://scholarship.law.marquette.edu/cgi/viewcontent.cgi?article=5483&context=mulr

Thomas, D. (2025, January 14). Judge rebukes Minnesota over AI errors in “deepfakes” lawsuit. Reuters. https://www.reuters.com/legal/government/judge-rebukes-minnesota-over-ai-errors-deepfakes-lawsuit-2025-01-13/

Wang, X. (2024). Global (re-)framing of cybercrime: An emerging common interest in flux of competing normative powers? Leiden Journal of International Law, 1–27. https://doi.org/10.1017/S0922156524000402

Young, F. (2025). A Deepfake Evidentiary Rule (Just in Case). University of Illinois Chicago. https://library.law.uic.edu/news-stories/a-deepfake-evidentiary-rule-just-in-case/

Završnik, A. (2020). Criminal justice, artificial intelligence systems, and human rights. ERA Forum, 20(4), 567–583. https://doi.org/10.1007/s12027-020-00602-0

Published

2026-03-26

Section

Natural and Applied Sciences in Forensics, Cybercrime and Security