Algorithmic fairness as sociotechnical system
A typology of the information construct
DOI: https://doi.org/10.59490/dgo.2025.952

Keywords: Algorithmic fairness, fairness, information construct, justice, sociotechnical systems

Abstract
Organizations and enterprises search for ways to exploit the vast amounts of data produced by citizens, sensors, devices, and administrative processes. Capitalizing on these data should be done responsibly, by preventing, mitigating, and managing undesired side effects such as violations of rules and regulations, human rights, ethical principles, and privacy and security requirements. A key challenge in employing data, algorithms, and data-driven systems is adhering to the principles of fairness and justice. In this contribution we focus on algorithmic fairness, which can itself be framed as a sociotechnical system with interacting social and technical/formal subsystems. Information is a key construct of any sociotechnical system: information creation and exchange can reduce the opacity of the interactions between the social and formal subsystems, and of the interactions between these subsystems and the environment in which they operate. Based on the literature, we categorize the types and flows of the information construct within the sociotechnical system of algorithmic fairness into seven categories. These insights about the seven categories of the information construct can form a common mental model through which social and technical disciplines can inform each other systematically and align their views on algorithmic fairness.
Copyright (c) 2025 Mortaza S. Bargh, Sunil Choenni, Floris ter Braak

This work is licensed under a Creative Commons Attribution 4.0 International License.
