Behavioral Manipulation in Big Data Implementation: Systematic Literature Review
DOI: https://doi.org/10.57152/malcom.v6i1.2418

Keywords: Behavioral Manipulation, Big Data Implementation, Decision Making, Systematic Literature Review

Abstract
This study examines behavioral manipulation in big data implementation through a systematic literature review of thirty peer-reviewed articles published between 2020 and 2025. The review aims to provide a clear understanding of the mechanisms, impacts, and mitigation strategies related to the use of big data to influence human behavior. The PRISMA 2020 framework was applied, starting with 250 identified records; after screening against the inclusion and exclusion criteria, 30 studies were selected for full analysis. The results indicate that behavioral manipulation most frequently occurs through algorithmic recommendation systems, price personalization, deceptive interface designs (dark patterns), and data-driven persuasion techniques. These mechanisms were consistently associated with reduced user autonomy, biased decision-making, psychological pressure, and widening social inequalities. Several studies further reveal that algorithmic transparency alone is insufficient to prevent manipulation when users lack meaningful understanding of, or control over, automated systems. The review also identifies emerging mitigation strategies, including dynamic consent mechanisms, independent algorithmic audits, ethical-by-design interfaces, and adaptive regulatory frameworks. However, the findings suggest that such interventions remain fragmented and unevenly implemented across sectors. Approximately 83.3% of the reviewed studies conclude that addressing behavioral manipulation in big data implementation requires an integrated response combining technical safeguards, ethical system design, regulatory oversight, and strengthened digital literacy.
References
D. Acemoglu, A. Makhdoumi, A. Malekian, and A. Ozdaglar, “When Big Data Enables Behavioral Manipulation,” 2024.
M. Li et al., “A Comprehensive Study on Dark Patterns,” vol. 1, no. 1, 2024, [Online]. Available: http://arxiv.org/abs/2412.09147
T. Kollmer and A. Eckhardt, “Dark Patterns: Conceptualization and Future Research Directions,” Bus. Inf. Syst. Eng., vol. 65, no. 2, pp. 201–208, 2023, doi: 10.1007/s12599-022-00783-7.
G. N. Dec and K. Agrawal, “Personalized Recommendations in EdTech: Evidence from a,” 2022.
Y. Sazid and K. Sakib, “Prevalence and User Perception of Dark Patterns: A Case Study on E-Commerce Websites of Bangladesh,” Int. Conf. Eval. Nov. Approaches to Softw. Eng. (ENASE) - Proc., pp. 238–249, 2024, doi: 10.5220/0012734900003687.
S. Genovesi, K. Kaesling, and S. Robbins, Recommender Systems: Legal and Ethical Issues, The International Library of Ethics, Law and Technology, vol. 40, 2023. [Online]. Available: https://doi.org/10.1007/978-3-031-34804-4
S. Tjahyadi and E. Titoni, “Risk Analysis In Indonesian Educational Online Learning Systems: A Systematic Literature Review,” JITE (Journal Informatics Telecommun. Eng., vol. 8, no. 2, pp. 240–247, 2025, doi: 10.31289/jite.v8i2.13239.
H. Sama, I. Deu, and S. Tjahyadi, “Development of an Online-Based Management System to Facilitate School Events,” Conf. Manag. Bus. Innov. Educ. Soc. Sci., vol. 4, no. 1, pp. 26–39, 2024, [Online]. Available: https://journal.uib.ac.id/index.php/combines
R. A. García-Hernández et al., “A Systematic Literature Review of Modalities, Trends, and Limitations in Emotion Recognition, Affective Computing, and Sentiment Analysis,” Appl. Sci., vol. 14, no. 16, 2024, doi: 10.3390/app14167165.
P. Greenfield, “The Cambridge Analytica files: the story so far,” The Guardian, 2018.
H. R. Kwon and E. A. Silva, “Mapping the Landscape of Behavioral Theories: Systematic Literature Review,” J. Plan. Lit., vol. 35, no. 2, pp. 161–179, 2020, doi: 10.1177/0885412219881135.
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving Face: Investigating the ethical concerns of facial recognition auditing,” AIES 2020 - Proc. AAAI/ACM Conf. AI, Ethics, Soc., pp. 145–151, 2020, doi: 10.1145/3375627.3375820.
M. J. Page et al., “The PRISMA 2020 statement: An updated guideline for reporting systematic reviews,” BMJ, vol. 372, 2021, doi: 10.1136/bmj.n71.
D. Acemoglu, A. Makhdoumi, A. Malekian, and A. E. Ozdaglar, “A Model of Behavioral Manipulation,” SSRN Electron. J., 2023, doi: 10.2139/ssrn.4638206.
D. Acemoglu, A. Makhdoumi, A. Malekian, and A. Ozdaglar, “When Big Data Enables Behavioral Manipulation,” Am. Econ. Rev. Insights, vol. 7, no. 1, pp. 19–38, 2025, doi: 10.1257/aeri.20230589.
C. Li, Y. Chen, and Y. Shang, “A review of industrial big data for decision making in intelligent manufacturing,” Eng. Sci. Technol. an Int. J., vol. 29, 2022, doi: 10.1016/j.jestch.2021.06.001.
J. Bandy, “Problematic Machine Behavior: A Systematic Literature Review of Algorithm Audits,” Proc. ACM Human-Computer Interact., vol. 5, no. CSCW1, pp. 1–34, 2021, doi: 10.1145/3449148.
T. T. Nguyen et al., “Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures,” ACM Comput. Surv., vol. 57, no. 1, 2024, doi: 10.1145/3677328.
M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li, “Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning,” Proc. - IEEE Symp. Secur. Priv., vol. 2018-May, no. 1, pp. 19–35, 2018, doi: 10.1109/SP.2018.00057.
G. Shmueli and A. Tafti, “Rejoinder: How to ‘improve’ prediction using behavior modification,” Int. J. Forecast., vol. 39, no. 2, pp. 566–569, 2023, doi: 10.1016/j.ijforecast.2022.12.007.
A. Punetha, “Dark Patterns in User Interfaces: A Systematic and Meta-Analysis,” 2024.
J. Baumeister, J.-Y. Park, A. Cunningham, S. Von Itzstein, I. Gwilt, A. Davis, and J. Walsh, “Patterns in the Dark: Deceptive Practices in Online Interactions. A Report to the Data Standards Chair”.
S. Sabour et al., “Human Decision-making is Susceptible to AI-driven Manipulation,” 2025, [Online]. Available: http://arxiv.org/abs/2502.07663
M. Carroll, A. Chan, H. Ashton, and D. Krueger, “Characterizing Manipulation from AI Systems,” ACM Int. Conf. Proceeding Ser., 2023, doi: 10.1145/3617694.3623226.
Y. Fan and X. Liu, “Exploring the role of AI algorithmic agents: The impact of algorithmic decision autonomy on consumer purchase decisions,” Front. Psychol., vol. 13, no. October, pp. 1–17, 2022, doi: 10.3389/fpsyg.2022.1009173.
Z. Hu, “Research on the Impact of Social Media Algorithmic on User Decision-making: Focus on Algorithmic Transparent and Ethical Design,” Appl. Comput. Eng., vol. 174, no. 1, pp. 18–22, 2025, doi: 10.54254/2755-2721/2025.po24665.
S. Arora, S. Arora, and J. Hastings, “The Psychological Impacts of Algorithmic and AI-Driven Social Media on Teenagers: A Call to Action,” 2024 IEEE Digit. Platforms Soc. Harms, DPSH 2024, 2024, doi: 10.1109/DPSH60098.2024.10774922.
E. Bogert, A. Schecter, and R. T. Watson, “Humans rely more on algorithms than social influence as a task becomes more difficult,” Sci. Rep., vol. 11, no. 1, pp. 1–9, 2021, doi: 10.1038/s41598-021-87480-9.
W. J. Chang, K. Seaborn, and A. A. Adams, “Theorizing Deception: A Scoping Review of Theory in Research on Dark Patterns and Deceptive Design,” Conf. Hum. Factors Comput. Syst. - Proc., 2024, doi: 10.1145/3613905.3650997.
P. Fagan, “Clicks and tricks: The dark art of online persuasion,” Curr. Opin. Psychol., vol. 58, no. July 2024, p. 101844, 2024, doi: 10.1016/j.copsyc.2024.101844.
H. Overbye-Thompson and R. E. Rice, “Understanding how users may work around algorithmic bias,” AI Soc., 2025, doi: 10.1007/s00146-025-02498-1.
S. Padarha, “Data-Driven Dystopia: An Uninterrupted Breach of Ethics,” Mar. 2021.
M. Stella, E. Ferrara, and M. De Domenico, “Bots increase exposure to negative and inflammatory content in online social systems,” Proc. Natl. Acad. Sci. U. S. A., vol. 115, no. 49, pp. 12435–12440, 2018, doi: 10.1073/pnas.1803470115.
U. Franke, “Algorithmic Transparency, Manipulation, and Two Concepts of Liberty,” Philos. Technol., vol. 37, no. 1, pp. 2–7, 2024, doi: 10.1007/s13347-024-00713-3.
H. Wang, “Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency,” Philos. Technol., vol. 35, no. 3, pp. 1–25, 2022, doi: 10.1007/s13347-022-00564-w.
U. Franke, “How Much Should You Care About Algorithmic Transparency as Manipulation?,” Philos. Technol., vol. 35, no. 4, pp. 1–7, 2022, doi: 10.1007/s13347-022-00586-4.
C. Starke, J. Baleis, B. Keller, and F. Marcinkowski, “Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature,” Big Data Soc., vol. 9, no. 2, 2022, doi: 10.1177/20539517221115189.
J. R. Saura, D. Ribeiro-Soriano, and D. Palacios-Marqués, “Assessing behavioral data science privacy issues in government artificial intelligence deployment,” Gov. Inf. Q., vol. 39, no. 4, 2022, doi: 10.1016/j.giq.2022.101679.
P. Hacker, “Manipulation by algorithms. Exploring the triangle of unfair commercial practice, data protection, and privacy law,” Eur. Law J., vol. 29, no. 1–2, pp. 142–175, 2023, doi: 10.1111/eulj.12389.
M. Hosseini, M. Wieczorek, and B. Gordijn, “Ethical Issues in Social Science Research Employing Big Data,” Sci. Eng. Ethics, vol. 28, no. 3, pp. 1–21, 2022, doi: 10.1007/s11948-022-00380-7.
L. Cellard, “Surfacing Algorithms: An Inventive Method for Accountability,” Qual. Inq., vol. 28, no. 7, pp. 798–813, 2022, doi: 10.1177/10778004221097055.
M. Valderrama, M. P. Hermosilla, and R. Garrido, “State of the Evidence: Algorithmic Transparency,” Open Gov. Partnership, May 2023.
J. D. Gutiérrez and A. Gillwald, “Algorithmic Transparency in the Public Sector: A State-of-the-Art Report of Algorithmic Transparency Instruments,” Nov. 2024.
A. R. Lee, D. Koo, I. K. Kim, E. Lee, S. Yoo, and H. Y. Lee, “Opportunities and challenges of a dynamic consent-based application: personalized options for personal health data sharing and utilization,” BMC Med. Ethics, vol. 25, no. 1, 2024, doi: 10.1186/s12910-024-01091-3.
S. Grimmelikhuijsen, “Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-Making,” Public Adm. Rev., vol. 83, no. 2, pp. 241–262, 2023, doi: 10.1111/puar.13483.
Y. Yuan, Y. Shi, T. Su, and H. Zhang, “Resistance or compliance? The impact of algorithmic awareness on people’s attitudes toward online information browsing,” Front. Psychol., vol. 16, 2025, doi: 10.3389/fpsyg.2025.1563592.
“Dark Patterns, Online Reviews, and Gender: A Behavioural Analysis of Consumer Decision Making on TEMU in the E-Commerce Context,” pp. 1–61.
G. Aridor, D. Goncalves, D. Kluver, R. Kong, and J. Konstan, “The Economics of Recommender Systems: Evidence from a Field Experiment on MovieLens,” EC 2023 - Proc. 24th ACM Conf. Econ. Comput., no. 10129, p. 117, 2023, doi: 10.1145/3580507.3597677.
M. Mansoury, H. Abdollahpouri, M. Pechenizkiy, B. Mobasher, and R. Burke, “Feedback Loop and Bias Amplification in Recommender Systems,” Int. Conf. Inf. Knowl. Manag. Proc., pp. 2145–2148, 2020, doi: 10.1145/3340531.3412152.
D. Kowald, “Investigating Popularity Bias Amplification in Recommender Systems Employed in the Entertainment Domain,” EWAF, p. 3, 2025.
M. Horta Ribeiro, V. Veselovsky, and R. West, “The Amplification Paradox in Recommender Systems,” Proc. Int. AAAI Conf. Web Soc. Media, vol. 17, pp. 1138–1142, 2023, doi: 10.1609/icwsm.v17i1.22223.
Copyright (c) 2025 Nancy Vanessa, Hendi Sama, Mangapul Siahaan

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Copyright © by Author; Published by Institut Riset dan Publikasi Indonesia (IRPI)
This Indonesian Journal of Machine Learning and Computer Science is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.