Volume 38, Issue 1
  • ISSN: 0921-5077
  • E-ISSN: 1875-7235

Abstract

The use of simple algorithms to combine information (mechanical prediction) leads to better decisions than combining such information "in the head" (holistic prediction). Nevertheless, algorithms are rarely used in personnel selection and higher education admissions decisions. In this dissertation research, we investigated how this algorithm aversion can be overcome. We found that strengthening professionals' autonomy within mechanical prediction encourages the use of algorithms and often results in better selection decisions than holistic prediction. In addition, training decision makers in mechanical prediction increased algorithm use. We discuss the practical implications of this dissertation and present steps that practitioners and academics can take to encourage the use of algorithms and improve decision making.
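As a concrete illustration of what "mechanical prediction" means here, the sketch below combines several applicant scores with a fixed, unit-weighted decision rule instead of an in-the-head judgment. It is a minimal sketch only: the predictor names, weights, and cutoff are illustrative assumptions, not the rule studied in the dissertation.

```python
# Minimal sketch of a mechanical prediction rule: each applicant's
# predictor scores are standardized and combined with fixed, equal
# (unit) weights, and a preset cutoff determines the decision.
# Predictors, scores, and cutoff below are illustrative assumptions.
from statistics import mean, stdev

applicants = {
    "A": {"cognitive_test": 82, "structured_interview": 7.0, "gpa": 7.8},
    "B": {"cognitive_test": 74, "structured_interview": 8.5, "gpa": 6.9},
    "C": {"cognitive_test": 90, "structured_interview": 6.0, "gpa": 7.2},
}

def standardize(scores):
    """Convert raw scores to z-scores so predictors are on one scale."""
    m, s = mean(scores.values()), stdev(scores.values())
    return {name: (x - m) / s for name, x in scores.items()}

# Standardize each predictor across applicants.
predictors = {p for a in applicants.values() for p in a}
z = {p: standardize({name: a[p] for name, a in applicants.items()})
     for p in predictors}

# Unit-weighted composite: simply sum the z-scores per applicant.
composite = {name: sum(z[p][name] for p in predictors) for name in applicants}

# The same cutoff is applied to everyone; the rule, not the assessor, decides.
CUTOFF = 0.0
for name, score in sorted(composite.items(), key=lambda kv: -kv[1]):
    decision = "admit/hire" if score >= CUTOFF else "reject"
    print(f"Applicant {name}: composite = {score:+.2f} -> {decision}")
```

The point of the example is not the particular weights but the consistency: every applicant is evaluated with the same, explicit rule, which is what distinguishes mechanical from holistic prediction.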
