Human-computer interaction (HCI) focuses on the design and use of computer technology, particularly the interfaces between people (users) and computers. Researchers in this field observe how humans interact with computers and design technologies that allow humans to interact with computers in new ways. HCI encompasses diverse fields, such as user experience design, usability, and cognitive psychology, aiming to create intuitive and effective interfaces that improve user satisfaction and performance.
An important challenge in HCI and education is the integration of large language models (LLMs) into undergraduate programming courses. These advanced AI tools, such as OpenAI's GPT models, have the potential to change how programming is taught and learned. However, their impact on students' learning processes, self-efficacy, and career perceptions remains a major open question. Understanding how these tools can be effectively integrated into educational settings is essential to maximizing their benefits while minimizing potential harms.
Traditionally, teaching programming has relied on lectures, textbooks, and interactive coding assignments. Some educational environments have begun to incorporate simpler AI tools for code generation and debugging assistance, but the integration of sophisticated LLMs is still in its infancy. These models can generate, debug, and explain code, offering new ways to support students as they learn. Despite this potential, it is necessary to understand how students adapt to these tools and how the tools influence learning outcomes and self-confidence.
Researchers at the University of Michigan conducted an in-depth study of the social factors influencing the adoption and use of LLMs in an undergraduate programming course. The study drew on social shaping theory to examine how students' social perceptions, peer influence, and career expectations shaped their use of LLMs. The research team used a mixed-methods approach, including an anonymous end-of-course survey of 158 students, mid-course self-efficacy surveys, student interviews, and regression analysis of midterm performance data. This multifaceted approach aimed to provide a detailed understanding of the dynamics at play.
Methodologically, the study combined an anonymous survey distributed to students, semi-structured interviews for deeper insights, and regression analysis of midterm performance data, triangulating multiple sources to understand the social dynamics affecting LLM use. Researchers found that students' use of LLMs was associated with their future career expectations and their perceptions of peer use. Notably, self-reported early LLM use correlated with lower self-efficacy and midterm scores, and perceived overreliance on LLMs, rather than actual use, was associated with decreased self-efficacy later in the course.
The survey and interviews were designed to collect both quantitative and qualitative data. The survey, conducted during the final week of in-person classes, aimed to capture a representative sample of student attitudes and perceptions regarding LLMs. It included 25 questions covering areas such as familiarity with LLM tools, usage patterns, and concerns about over-reliance, along with five self-efficacy questions assessing students' confidence in their programming abilities. These data were then analyzed with regression techniques to identify significant trends and correlations.
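To make the quantitative side of this approach concrete, below is a minimal sketch of how such a regression analysis might look in Python with pandas and statsmodels. The column names (`early_llm_use`, `perceived_overreliance`, `perceived_peer_use`, `midterm_score`, and the five `self_efficacy_*` items) and the input file are illustrative assumptions, not the paper's actual variables or analysis pipeline.

```python
# Illustrative sketch only; column names and the CSV file are hypothetical
# placeholders for the survey fields and midterm data described above.
import pandas as pd
import statsmodels.formula.api as smf

# Load anonymized survey responses joined with midterm performance data.
df = pd.read_csv("survey_with_midterm.csv")  # hypothetical file

# Average the five Likert-scale self-efficacy items into one composite score.
efficacy_items = [f"self_efficacy_{i}" for i in range(1, 6)]
df["self_efficacy"] = df[efficacy_items].mean(axis=1)

# Regress midterm scores on self-reported early LLM use, controlling for
# perceived overreliance and perceived peer use.
midterm_model = smf.ols(
    "midterm_score ~ early_llm_use + perceived_overreliance + perceived_peer_use",
    data=df,
).fit()
print(midterm_model.summary())

# Regress self-efficacy on perceived overreliance to probe the finding that
# perception of overreliance, rather than actual use, tracks lower confidence.
efficacy_model = smf.ols(
    "self_efficacy ~ perceived_overreliance + early_llm_use",
    data=df,
).fit()
print(efficacy_model.summary())
```

In a sketch like this, the coefficient on `early_llm_use` in the first model and on `perceived_overreliance` in the second correspond to the kinds of associations the study reports.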
Notable results indicated that early LLM use correlated with lower self-efficacy and midterm scores, while perceived overreliance on LLMs, rather than actual use, was associated with decreased self-efficacy later in the course. Students' career aspirations and perceptions of their peers' LLM use significantly influenced their decisions to use LLMs. For example, students who believed that overreliance on LLMs would harm their job prospects tended to prefer learning programming skills independently. Conversely, those who anticipated significant future use of LLMs in their careers were more likely to engage with these tools during the course.
The study also examined how integrating LLMs into the curriculum affected performance and learning outcomes. Students who used LLMs reported mixed results in terms of programming self-efficacy and learning: some found that LLMs helped them understand complex coding concepts and error messages, while others felt the tools undermined confidence in their coding abilities. Regression analysis revealed that students who felt overly dependent on LLMs had lower self-efficacy scores, highlighting the importance of balanced tool use.
![](https://www.marktechpost.com/wp-content/uploads/2024/06/Screenshot-2024-06-11-at-2.08.21-PM-1024x421.png)
In conclusion, the study highlights the complex dynamics of integrating LLMs into undergraduate programming education. Social factors such as peer usage and career aspirations strongly influence the adoption of these tools. Although LLMs can enrich learning experiences, over-reliance on them can negatively affect student confidence and performance. It is therefore crucial to strike a balance so that students build strong foundational skills while still leveraging AI tools where they help. These findings point to the need for thoughtful integration strategies that consider both the technological capabilities of LLMs and the social context of their use in educational settings.
Source
- https://arxiv.org/pdf/2406.06451