Integrating Ethical AI Governance into Administrative Decision Support Systems: A Case-Study Synthesis and Implementation Framework
Abstract
Administrative decision support systems (DSS) increasingly rely on Artificial Intelligence (AI) to allocate resources, prioritise cases, and recommend actions. While these systems promise efficiency and predictive power, they also introduce ethical risks, including bias, opacity, and accountability gaps. This paper examines how ethical governance models—self-regulatory, co-regulatory, and statutory—can be integrated into AI-enabled DSS. Drawing on a focused literature review and three case studies in healthcare, civil service human resources, and municipal administration, we compare safeguards across models (fairness audits, transparency artefacts, accountability mechanisms, and human oversight) and assess their effects on trust, contestability, and performance. Our analysis finds that hybrid governance, combining internal controls, multi-stakeholder engagement, and proportionate statutory oversight, best balances innovation with ethical integrity. We propose a practical mapping from principles to controls, metrics, and documentary artefacts, and outline priorities for research and policy, including adaptive regulation, routine impact assessment, and post-deployment monitoring for bias and drift. The paper concludes with an expanded framework and checklist to help practitioners operationalise ethical AI governance throughout the DSS lifecycle.
Copyright (c) 2025 Abdullah Alrwili, Farook Ayyub Kantharia

This work is licensed under a Creative Commons Attribution 4.0 International License.
