Artificial Intelligence in Criminalistics and Forensic Examination: Issues of Legal Personality and Algorithmic Bias
https://doi.org/10.30764/1819-2785-2023-2-30-37
Abstract
The active development and implementation of artificial intelligence (AI) technologies in various spheres of human activity have set in motion qualitative changes in social relations. This necessitates the development of legal and technical standards regulating AI technologies. The most controversial issue in this regard is the recognition of AI legal personality. An analysis of the various opinions on the matter shows that the existing legal doctrine lacks a consolidated approach. Establishing a legal status for AI systems would allow for several options depending on their type and purpose, ranging from a technical means to the status of an “electronic personality” and recognition as a full-fledged subject of law.
Given the specifics of criminalistics and forensic examination, it is preferable to treat AI systems as technical means. Machine learning, regarded as a form of AI, uses mathematical data models to train a computer through specialized algorithms and training data. Algorithms can create or reproduce distortions and inaccuracies unintentionally embedded in the training data, which manifests as algorithmic bias. To eliminate such bias, particular attention must be paid to the quality of the training data. The authors have developed special methods for preparing such data, which are presented in this article with reference to ballistic identification systems. Another element of a systematic technical solution to the problem of algorithmic bias is the development of standards for minimizing unjustified bias in algorithmic decisions.
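The article itself contains no code, but the link between training-data quality and algorithmic bias can be made concrete with a minimal Python sketch. This is purely illustrative and is not the authors' method: it measures the class distribution of a training set of ballistic mark images and undersamples the dominant classes so that no single weapon model skews the resulting model. All file names, class labels, and counts below are hypothetical.

# Illustrative sketch only (not from the article): detecting and correcting a skewed
# class distribution in training data before fitting a classifier of ballistic marks.
import random
from collections import Counter

def class_distribution(labels):
    """Return the share of each class among the training labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

def undersample_to_balance(samples, labels, seed=0):
    """Randomly undersample every class to the size of the rarest class."""
    rng = random.Random(seed)
    by_class = {}
    for sample, label in zip(samples, labels):
        by_class.setdefault(label, []).append(sample)
    target = min(len(items) for items in by_class.values())
    balanced_samples, balanced_labels = [], []
    for label, items in by_class.items():
        for sample in rng.sample(items, target):
            balanced_samples.append(sample)
            balanced_labels.append(label)
    return balanced_samples, balanced_labels

if __name__ == "__main__":
    # Hypothetical training set: cartridge case images labelled by weapon model.
    labels = ["model_A"] * 900 + ["model_B"] * 80 + ["model_C"] * 20
    samples = [f"img_{i:04d}.png" for i in range(len(labels))]
    print("before:", class_distribution(labels))      # model_A dominates (~0.9)
    _, balanced_labels = undersample_to_balance(samples, labels)
    print("after: ", class_distribution(balanced_labels))  # roughly equal shares

In a real forensic workflow, a check of this kind would precede the training of, for example, a firing pin mark classifier, so that its error rate does not silently depend on which weapon models happen to be over-represented in the reference collection.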
About the Authors
A. V. Kokin
Russian Federation
Kokin Andrey Vasil’evich – Doctor of Law, Chief Forensic Examiner at the Department of Trace and Ballistics Examinations; Professor of the Department of Weapons and Trace Examinations at the Educational and Scientific Forensic Complex
Moscow 109028;
Moscow 117997
Yu. D. Denisov
Russian Federation
Denisov Yurii Dmitrievich – Candidate of Law, Distinguished Lawyer of the Russian Federation, Director
Moscow 109028
References
1. Etzioni A., Etzioni O. Incorporating Ethics into Artificial Intelligence. The Journal of Ethics. 2017. Vol. 21. No. 4. P. 403–418. https://doi.org/10.1007/978-3-319-69623-2_15
2. Danaher J. The Rise of the Robots and the Crisis of Moral Patiency. AI & Society: Knowledge, Culture and Communication. 2017. P. 1–8. https://doi.org/10.1007/s00146-017-0773-9
3. Solum L.B. Legal Personhood for Artificial Intelligences. North Carolina Law Review. 1992. Vol. 70. No. 4. P. 1231–1287.
4. Neznamov A.V., Naumov V.B. Regulation Strategy for Robotics and Cyberphysical Systems. Law. 2018. No. 2. P. 69–89. (In Russ.).
5. Solaiman S.M. Legal Personality of Robots, Corporations, Idols and Chimpanzees: A Quest for Legitimacy. Artificial Intelligence and Law. 2017. Vol. 25. No. 2. P. 155–179. https://doi.org/10.1007/s10506-016-9192-3
6. Bryson J.J., Diamantis M.E., Grant T.D. Of, For, and By the People: The Legal Lacuna of Synthetic Persons. Artificial Intelligence and Law. 2017. Vol. 25. No. 3. P. 273–291. https://doi.org/10.1007/s10506-017-9214-9
7. Yastrebov О.А. The Legal Capacity of Electronic Persons: Theoretical and Methodological Approaches. Proceedings of the Institute of State and Law of the RAS. 2018. Vol. 13. No. 2. P. 36–53. (In Russ.).
8. Kharitonova Yu.S., Savina V.S., Pagnini F. Artificial Intelligence’s Algorithmic Bias: Ethical and Legal Issues. Perm University Herald. Juridical Sciences. 2021. No. 53. P. 488–515. (In Russ.). https://doi.org/10.17072/1995-4190-2021-53-488-515
9. Carriquiry A., Hofmann H., Tai X.H., VanderPlas S. Machine Learning in Forensic Applications. Significance. 2019. Vol. 2. No. 2. P. 29–35. https://doi.org/10.1111/j.1740-9713.2019.01252.x
10. Fedorenko V.A., Sorokina K.O., Giverts P.V. Classification of Firing Pin Marks Images by Weapon Specimens Using a Fully-Connected Neural Network. Izvestiya of Saratov University. Economics. Management. Law. 2022. Vol. 22. Iss. 2. P. 184–190. (In Russ.). https://doi.org/10.18500/1994-2540-2022-22-2-184-190
11. Giverts P., Sorokina K., Fedorenko V. Examination of the Possibility to Use Siamese Networks for the Comparison of Firing Marks. Journal of Forensic Sciences. 2022. Vol. 67. Iss. 6. P. 2416–2424. https://doi.org/10.1111/1556-4029.15143
12. Song J. Proposed “Congruent Matching Cells (CMC)” Method for Ballistic Identification and Error Rate Estimation. AFTE Journal. 2015. Vol. 47. No. 3. P. 177–185.
13. Sorokina K.O., Fedorenko V.A., Giverts P.V. Evaluation of the Similarity of Images of Breech Face Marks Using the Method of Correlation Cells. Journal of Information Technologies and Computing Systems. 2019. No. 3. P. 3–15. (In Russ.). https://doi.org/10.14357/20718632190301
14. Chen Z., Chu W., Soons J.A., Thompson R.M., Song J., Zhao X. Fired Bullet Signature Correlation Using the Congruent Matching Profile Segments (CMPS) Method. Forensic Science International. 2019. P. 10–19. https://doi.org/10.1016/j.forsciint.2019.109964
For citations:
Kokin A.V., Denisov Yu.D. Artificial Intelligence in Criminalistics and Forensic Examination: Issues of Legal Personality and Algorithmic Bias. Theory and Practice of Forensic Science. 2023;18(2):30-37. (In Russ.) https://doi.org/10.30764/1819-2785-2023-2-30-37