Sharing the load: Achieving balance between humans and LLMs

When I think about finding the right balance between users and computers, especially large language models (LLMs), I think of being assigned to a group project in school: there is always one person willing to do all of the work because they want a good grade. The trade-off is that when this happens, true collaboration doesn’t necessarily take place. The LLM may not care about grades, but it is willing to do as much of the work as we let it. At the other extreme, some people shy away from using LLMs entirely because they believe the outcomes produced are inauthentic, or somehow less valuable because they were not produced by a human being. (I would counter that a human still has to start the process of generating content with an idea and a prompt.) I believe there is a “sweet spot” where humans and LLMs can work together to achieve desirable outcomes. Since humans have limited mental capacity (according to cognitive load theory), why not augment our ability to perform tasks (Kirschner et al., 2018)?

I suppose the next question is whether we will find that balance in collaboration naturally, or discover it through trial and error. My guess is that it will be a little bit of both.

References

Hawkins, T., & Cassenti, D. N. (2023). Defining the relationship between the level of autonomy in a computer and the cognitive workload of its user. In S. Mukherjee, V. Dutt, & N. Srinivasan (Eds.), Applied cognitive science and technology: Implications of interactions between human cognition and technology. Springer.

Kirschner, P. A., Sweller, J., Kirschner, F., & Zambrano R., J. (2018). From cognitive load theory to collaborative cognitive load theory. International Journal of Computer-Supported Collaborative Learning, 13(2), 213–233. https://doi.org/10.1007/s11412-018-9277-y
