I am intrigued by human cognitive interaction with technology, specifically how humans respond to content generated by a human versus an algorithm. Recent research shows that when people know something, such as a piece of art or advice, has been generated by a machine, they are less likely to favor it (Mukherjee et al., 2023). There are certain categories where we seem willing to trust the machine, but they are limited to things like detecting tax fraud and managing a financial portfolio. When it comes to relationship advice, on the other hand, we are much more skeptical! From what I gather, areas that fall in the realm of mathematics, economics, and science seem to be “appropriate” territory for a machine, but when a situation calls for social and emotional intelligence, we are much more careful not to rely on an algorithm.

I wonder if this is justifiable. Is it possible that a machine can learn things about human relationships and make relevant suggestions based on that knowledge? Isn’t a machine less likely to rely on anecdotal evidence and subjective experience? Then again, subjectivity has its own value. Perhaps we are looking for that special human touch that algorithms lack.

Ironically, when someone does not know whether something was generated by a human or a machine, they will often favor the machine-generated content (Mukherjee et al., 2023). I think people are wary of machine content because it feels like cheating. But that is just my subjective take on the matter.
References
Mukherjee, S., Senapati, D., & Mahajan, I. (2023). Toward behavioral AI: Cognitive factors underlying the public psychology of artificial intelligence. In S. Mukherjee, V. Dutt, & N. Srinivasan (Eds.), Applied cognitive science and technology: Implications of interactions between human cognition and technology. Springer.


