Using Generative AI for Identifying Electoral Irregularities on Social Media

Authors

Moura, S., Santos, E., Sampaio, P., & Brito, K.

DOI:

https://doi.org/10.59490/dgo.2025.990

Keywords:

Generative AI, electoral irregularities, social media, election monitoring, online campaigns

Abstract

The increasing use of social media (SM) in political campaigns has raised concerns about electoral irregularities, such as unauthorized voter solicitation and the use of sound trucks. Manually monitoring and enforcing electoral regulations is time-consuming and prone to inconsistency, highlighting the need for automated solutions. A major challenge in automating the detection of electoral violations is the lack of sufficient labeled data. Moreover, the effectiveness of Generative AI in addressing this issue remains underexplored, especially its ability to create synthetic data and enhance detection accuracy. In this context, this study assesses the potential of Generative AI for identifying electoral irregularities on SM, focusing on two common violations in the 2024 Brazilian municipal elections: voter solicitation and the use of sound trucks. The goal is to evaluate whether synthetic image generation, combined with AI-based visual analysis, can improve the identification of such infractions. We first generate synthetic images using Imagen 3, Stable Diffusion, and FLUX, identifying Imagen 3 as the most effective at producing realistic and visually coherent images. We then test the ability of three AI models (Gemini 2.0 Flash, Llama 3.2 Vision, and PaliGemma 2) to detect electoral violations in both real and synthetic images. To enhance detection accuracy, we apply different prompting strategies, including basic, chain-of-thought, and detailed prompts. Our findings show that Gemini 2.0 Flash performs best, particularly with detailed prompts. In addition, synthetic images help mitigate data scarcity, improving model training and evaluation. Overall, the study demonstrates that Generative AI, combined with optimized prompt engineering, can significantly enhance the accuracy of detecting electoral irregularities.
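To make the prompting-strategy comparison in the abstract concrete, the following minimal Python sketch shows one way to query a vision-language model about a single image using a basic, a chain-of-thought, and a detailed prompt. This is an illustration only, not the authors' code: the google-generativeai SDK usage, the "gemini-2.0-flash" model id, the prompt wording, and the image file name are assumptions made for the example.

# Minimal sketch (not the authors' implementation): compare basic, chain-of-thought,
# and detailed prompts when asking a vision-language model whether an image shows a
# specific electoral irregularity. Assumes the google-generativeai Python SDK and the
# "gemini-2.0-flash" model id; prompt texts and file names are illustrative only.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder; supply a real API key
model = genai.GenerativeModel("gemini-2.0-flash")

PROMPTS = {
    "basic": (
        "Does this image show a sound truck being used for electoral campaigning? "
        "Answer yes or no."
    ),
    "chain_of_thought": (
        "Examine the image step by step: (1) describe any vehicles present, "
        "(2) check for loudspeakers, banners, or campaign material, "
        "(3) decide whether it depicts a sound truck used for electoral campaigning. "
        "End with 'Answer: yes' or 'Answer: no'."
    ),
    "detailed": (
        "You are reviewing social media images posted during Brazilian municipal "
        "elections. A 'sound truck' irregularity is a vehicle fitted with loudspeakers "
        "broadcasting campaign content in violation of electoral rules. Based on this "
        "definition, does the image show such an irregularity? Answer yes or no and "
        "briefly justify."
    ),
}

def classify(image_path: str) -> dict:
    """Query the model once per prompting strategy and collect the raw answers."""
    image = Image.open(image_path)
    return {
        name: model.generate_content([prompt, image]).text
        for name, prompt in PROMPTS.items()
    }

if __name__ == "__main__":
    # "example_post.jpg" is a hypothetical local file standing in for a campaign image.
    print(classify("example_post.jpg"))

Keeping the three prompt variants in one dictionary makes it straightforward to score each strategy against a labeled set of real and synthetic images and compare detection accuracy across models.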

References

Ajayi, A. I., & Adesote, S. A. (2016). The Social Media and Consolidation of Democracy in Nigeria: Uses, Potentials and Challenges. In [link] (Vol. 6, Issue 1).

Brito, K., & Adeodato, P. J. L. (2022). Measuring the performances of politicians on social media and the correlation with major Latin American election results. Government Information Quarterly, 39(4), 101745. https://doi.org/10.1016/j.giq.2022.101745

Brito, K., & Adeodato, P. J. L. (2023). Machine learning for predicting elections in Latin America based on social media engagement and polls. Government Information Quarterly, 40(1), 101782. https://doi.org/10.1016/j.giq.2022.101782

Brito, K., Paula, N., Fernandes, M., & Meira, S. (2019). Social Media and Presidential Campaigns – Preliminary Results of the 2018 Brazilian Presidential Election. Proceedings of the 20th Annual International Conference on Digital Government Research, 332–341. https://doi.org/10.1145/3325112.3325252

Clemmensen, L. H., & Kjærsgaard, R. D. (2022). Data Representativity for Machine Learning and AI Systems.

Citron, D. K., & Chesney, R. (2019). Deepfakes and the New Disinformation War. Foreign Affairs.

Ettinger, A. (2020). What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models. Transactions of the Association for Computational Linguistics, 8, 34–48. https://doi.org/10.1162/tacl_a_00298

Falkenberg, M., Zollo, F., Quattrociocchi, W., Pfeffer, J., & Baronchelli, A. (2024). Patterns of partisan toxicity and engagement reveal the common structure of online political communication across countries. Nature Communications, 15(1), 9560. https://doi.org/10.1038/s41467-024-53868-0

Ferrara, E. (2024). Charting the Landscape of Nefarious Uses of Generative Artificial Intelligence for Online Election Interference.

Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104. https://doi.org/10.1145/2818717

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Gomes, N. D. (2001). Persuasive forms of political communication: political propaganda and electoral advertising (Vol. 3). Edipucrs.

Grattafiori, A., Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Vaughan, A., Yang, A., Fan, A., Goyal, A., Hartshorn, A., Yang, A., Mitra, A., Sravankumar, A., Korenev, A., Hinsvark, A., … Ma, Z. (2024). The Llama 3 Herd of Models.

Haupt, M. R., Yang, L., Purnat, T., & Mackey, T. (2024). Evaluating the Influence of Role-Playing Prompts on ChatGPT’s Misinformation Detection Accuracy: Quantitative Study. JMIR Infodemiology, 4, e60678. https://doi.org/10.2196/60678

Hongladarom, S. (2023). Shoshana Zuboff, The age of surveillance capitalism: the fight for a human future at the new frontier of power. AI & SOCIETY, 38(6), 2359–2361. https://doi.org/10.1007/s00146-020-01100-0

Howard, P. N., & Kollanyi, B. (2016). Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. [link].

Kent, M. L., & Li, C. (2020). Toward a normative social media theory for public relations. Public Relations Review, 46(1), 101857. https://doi.org/10.1016/j.pubrev.2019.101857

Knoth, N., Tolzin, A., Janson, A., & Leimeister, J. M. (2024). AI literacy and its implications for prompt engineering strategies. Computers and Education: Artificial Intelligence, 6, 100225. https://doi.org/10.1016/j.caeai.2024.100225

Kwak, N., Lane, D. S., Weeks, B. E., Kim, D. H., Lee, S. S., & Bachleda, S. (2018). Perceptions of Social Media for Politics: Testing the Slacktivism Hypothesis. Human Communication Research, 44(2), 197–221. https://doi.org/10.1093/hcr/hqx008

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998

Tkacheva, O. (2013). Electoral Fraud Social Media and Whistle. [link]

Roy, M. (2017). Cathy O’Neil. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishers, 2016. 272p. Hardcover, $26 (ISBN 978-0553418811). College & Research Libraries, 78(3), 403. https://doi.org/10.5860/crl.78.3.403

Santana, J., Santana, M., Sampaio, P., & Brito, K. (2024). Towards a Methodology for Analyzing Visual Elements in Social Media Posts of Politicians. Proceedings of the 17th International Conference on Theory and Practice of Electronic Governance, 366–373. https://doi.org/10.1145/3680127.3680218

Santos, É. L. D. (2023). Coronelismo and clientelism: Echoes in Brazilian society in the presidential elections (Brazil 2018–2022). In [link]. Faculdade de Ciências Humanas e Sociais, Universidade Estadual Paulista “Júlio de Mesquita Filho.”

Schmitt, M., & Flechais, I. (2023). Digital Deception: Generative Artificial Intelligence in Social Engineering and Phishing. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4602790

Basu Mallick, S., & Kilpatrick, L. (2025, February 5). Gemini 2.0: Flash, Flash-Lite and Pro. [link].

Steiner, A., Pinto, A. S., Tschannen, M., Keysers, D., Wang, X., Bitton, Y., Gritsenko, A., Minderer, M., Sherbondy, A., Long, S., Qin, S., Ingle, R., Bugliarello, E., Kazemzadeh, S., Mesnard, T., Alabdulmohsin, I., Beyer, L., & Zhai, X. (2024). PaliGemma 2: A Family of Versatile VLMs for Transfer.

Straub, V. J., Morgan, D., Bright, J., & Margetts, H. (2022). Artificial intelligence in government: Concepts, standards, and a unified framework.

Sun, T. Q., & Medaglia, R. (2019). Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare. Government Information Quarterly, 36(2), 368–383. https://doi.org/10.1016/j.giq.2018.09.008

Superior Electoral Court. (2024). Statistics. [link].

Woolley, S. C., & Howard, P. N. (2018). Computational Propaganda (S. C. Woolley & P. N. Howard, Eds.; Vol. 1). Oxford University Press. https://doi.org/10.1093/oso/9780190931407.001.0001

Published

2025-05-21

How to Cite

Moura, S., Santos, E., Sampaio, P., & Brito, K. (2025). Using Generative AI for Identifying Electoral Irregularities on Social Media. Conference on Digital Government Research, 26. https://doi.org/10.59490/dgo.2025.990

Conference Proceedings Volume

Section

Research papers