Mathematical methods of the digitalization of HR processes in public administration

DOI:

https://doi.org/10.26906/EiR.2025.4(99).4169

Keywords:

public administration, artificial intelligence in personnel selection, algorithmic bias, algorithmic discrimination, high-risk HR systems, EU "AI Act", data audit, mathematical methods

Abstract

The study analyzes the dual effect of AI in public-sector recruitment: it increases productivity but risks reproducing structural inequalities and algorithmic discrimination, undermining trust in public authority. Current approaches to fairness remain fragmented, leaving gaps between measured efficiency and societal perceptions of justice. Bias can emerge at every stage of the recruitment pipeline. Under the EU "AI Act", such systems are classified as high-risk technologies subject to mandatory audits. To address this, a composite fairness and non-discrimination index is proposed, integrating technical, ethical, and legal metrics into a unified assessment system. This model enables the identification of vulnerabilities and supports evidence-based decisions. It is substantiated that digitalization requires a systemic methodology for ensuring fairness as a key condition for trust in government algorithms.
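The composite index described above can be illustrated as a weighted aggregation of normalized sub-metrics. The sketch below is not the authors' published model: the metric names, weights, and the use of a demographic-parity gap as the technical component are hypothetical examples chosen for illustration.

```python
# Illustrative sketch of a composite fairness index: each sub-metric is
# normalized to [0, 1] (1 = fully fair), then combined by weights.

def demographic_parity_gap(selected, group):
    """Absolute difference in selection rates between groups (0 = parity)."""
    rates = {}
    for g in set(group):
        outcomes = [s for s, gg in zip(selected, group) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

def composite_fairness_index(metrics, weights):
    """Weighted mean of normalized fairness metrics, each in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in metrics) / total

# Example: selection outcomes (1 = selected) for candidates in groups A and B
selected = [1, 0, 1, 1, 0, 0, 1, 0]
group    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(selected, group)   # 0.75 - 0.25 = 0.5
index = composite_fairness_index(
    # hypothetical sub-scores for the technical, ethical, and legal dimensions
    {"technical": 1 - gap, "ethical_audit": 0.8, "legal_compliance": 0.9},
    {"technical": 0.5, "ethical_audit": 0.25, "legal_compliance": 0.25},
)
# index = 0.5*0.5 + 0.25*0.8 + 0.25*0.9 = 0.675
```

A single score of this kind supports the kind of audit the abstract describes: a low value flags the system for review, and the individual sub-metrics localize the vulnerability to the technical, ethical, or legal dimension.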

Author Biographies

Oleksandr Oliinyk, Zaporizhzhia National University

Candidate of Philosophical Sciences, Associate Professor of the Department of Business Administration and Management of Foreign Economic Activity

Damir Bikulov, Zaporizhzhia National University

Doctor of Public Administration, Professor of the Department of Business Administration and Management of Foreign Economic Activity

Olha Holovan, Zaporizhzhia National University

Candidate of Physical and Mathematical Sciences, Associate Professor of the Department of Business Administration and Management of Foreign Economic Activity

Svitlana Markova, Zaporizhzhia National University

Doctor of Economic Sciences, Professor of the Department of Business Administration and Management of Foreign Economic Activity

Olha Veritova, Zaporizhzhia National University

Candidate of Pedagogical Sciences, Senior Lecturer of the Department of Business Administration and Management of Foreign Economic Activity

References

1. Köchling A., Wehner M.C. (2020). Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision making in the context of HR recruitment and HR development. Business Research, vol. 13, pp. 795–848.

2. Starke C., Lünich M. (2020). Artificial intelligence for political decision-making in the European Union: effects on citizens' perceptions of input, throughput and output legitimacy. Data & Policy, no. 2, pp. 1–17.

3. Bansak K., Ferwerda J., Hainmueller J., Dillon A., Hangartner D., Lawrence D., Weinstein J. (2018). Improving refugee integration through data-driven algorithmic assignment. Science, vol. 359, pp. 325–329.

4. Mujtaba D.F., Mahapatra N.R. (2019). Ethical considerations in AI-based recruitment. IEEE International Symposium on Technology and Society, pp. 1–7.

5. Fregnan E., Ivaldi S., Scaratti G. (2020). HRM 4.0 and new managerial competences profile: the COMAU case. Frontiers in Psychology, vol. 11, pp. 1–16.

6. Zeebaree S.R., Shukur H.M., Hussan B.K. (2019). Human resource management systems for enterprise organizations: a review. Periodicals of Engineering and Natural Sciences, vol. 7, no. 2, pp. 660–669.

7. Acikgoz Y., Davison K.H., Compagnone M., Laske M. (2020). Justice perceptions of artificial intelligence in selection. International Journal of Selection and Assessment, vol. 28, pp. 399–416.

8. DiRomualdo A., El Khoury D., Girimonte F. (2018). HR in the digital age: how digital technology will change HR organization structure, processes and roles. Strategic HR Review, vol. 17, pp. 234–242.

9. Verma S., Rubin J. (2018). Fairness definitions explained. Proceedings of the IEEE/ACM International Workshop on Software Fairness (FairWare '18), pp. 1–7.

10. Mehrabi N., Morstatter F., Saxena N., Lerman K., Galstyan A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, vol. 54, no. 6, pp. 1–35.

11. Alves G., Bernier F., Couceiro M., Makhlouf K., Palamidessi C., Zhioua S. (2023). Survey on fairness notions and related tensions. EURO Journal on Decision Processes, vol. 11, pp. 1–14.

12. Mujtaba D.F., Mahapatra N.R. (2024). Fairness in AI-driven recruitment: challenges, metrics, methods, and future directions. arXiv preprint, pp. 1–17.

13. Fabris A., Baranowska N., Dennis M.J. et al. (2024). Fairness and bias in algorithmic hiring: a multidisciplinary survey. ACM Transactions on Intelligent Systems and Technology, vol. 16, no. 1, pp. 1–54.

14. Chen Z. (2023). Ethics and discrimination in artificial intelligence enabled recruitment practices. Humanities and Social Sciences Communications, vol. 10, pp. 1–12.

15. Capasso M., Arora P., Sharma D., Tacconi C. (2024). On the right to work in the age of artificial intelligence: ethical safeguards in algorithmic human resource management. Business and Human Rights Journal, vol. 9, issue 3, pp. 346–360.

16. Alon-Barkat S., Busuioc M. (2021). Human–AI interactions in public sector decision making: automation bias and selective adherence to algorithmic advice. Journal of Public Administration Research and Theory, vol. 33, issue 1, pp. 153–169.

Published

2025-12-26

How to Cite

Oliinyk, O., Bikulov, D., Holovan, O., Markova, S., & Veritova, O. (2025). Mathematical methods of the digitalization of HR processes in public administration. Economics and Region, 4(99), 167–176. https://doi.org/10.26906/EiR.2025.4(99).4169

Issue

Section

Mathematical methods, models and information technologies in economics