
dc.contributor.advisor  Lim, Seung-Lark
dc.contributor.author  Hass, Norah Celeste
dc.date.issued  2019
dc.date.submitted  2019 Summer
dc.description  Title from PDF of title page viewed September 24, 2019
dc.description  Dissertation advisor: Seung-Lark Lim
dc.description  Vita
dc.description  Includes bibliographical references (pages 168-184)
dc.description  Thesis (Ph.D.)--Department of Psychology. University of Missouri--Kansas City, 2019
dc.description.abstract  As technology advances, processes traditionally carried out by humans are being automated in a variety of industries, such as automotive, security, and food service. In the medical field, advances in automation allow for disease classification, diagnosis, and even treatment recommendations. Technological advances have improved automated diagnosis to the point that many cases can be diagnosed more accurately by a computer program than by a medical doctor. The hindrance to implementing these technologies is that the systems must not only exist; they must also be accepted, trusted, and appropriately used by both patients and healthcare providers. Previous literature on automation acceptance has focused primarily on how design features and characteristics of the automation influence human trust. Less research has explored the role that user characteristics, such as personality and dispositional traits, play in developing trust. User responses to automation may warrant adaptation in how automation is presented and distributed in order to encourage its acceptance. In the present study, researchers examined the relationship between user characteristics, trust, and automation use in a medical screening decision task. Although user characteristics were found to predict trust attitudes, they did not significantly predict trust behaviors, i.e., automation use. These findings are discussed in light of the differences between attitudes and behaviors in predicting trust. Keywords: trust in automation, medical decision-making, trust, automation, user traits  eng
dc.description.tableofcontents  Introduction -- Review of literature -- Methods -- Appendix A. Checklist for trust between people and automation -- Appendix B. Propensity to trust questionnaire -- Appendix C. Automation-induced complacency potential rating scale -- Appendix D. Rotter's Interpersonal Trust Scale -- Appendix E. Big Five Inventory -- Appendix F. Revised Domain-Specific Risk-Taking (DOSPERT) scale -- Appendix G. Desirability of Control scale -- Appendix H. Levenson IPC Scale -- Appendix I. General Self-Efficacy Scale -- Appendix J. Demographics -- Appendix K. Media and Technology Usage and Attitudes Scale -- Appendix L. Multidimensional Health Locus of Control Scale -- Appendix M. Pre- and post-task questions
dc.format.extent  viii, 185 pages
dc.identifier.uri  https://hdl.handle.net/10355/69666
dc.publisher  University of Missouri--Kansas City  eng
dc.subject.lcsh  Medical screening -- Automation
dc.subject.lcsh  Patients -- Attitudes
dc.subject.lcsh  Medical personnel -- Attitudes
dc.subject.lcsh  Medicine -- Automation
dc.subject.other  Dissertation -- University of Missouri--Kansas City -- Psychology
dc.title  "Can I get a second opinion?" How user characteristics impact trust in automation in a medical screening task  eng
dc.type  Thesis  eng
thesis.degree.discipline  Psychology (UMKC)
thesis.degree.grantor  University of Missouri--Kansas City
thesis.degree.level  Doctoral
thesis.degree.name  Ph.D. (Doctor of Philosophy)

