Assessing Student Understanding of Computing: Self-Efficacy, Non-CS Majors, and ChatGPT
July 17, 2024 at 8:00 am
Assessment is a hot topic in computing education research right now.
I’m sharing below a workshop announcement from Nell O’Rourke and Melissa Chen. They want to help students make accurate self-assessments because (as Nell’s group has found in the past, with one paper described here) students tend to have inflated views of what they should be able to do, and when they can’t achieve those lofty goals, their self-efficacy suffers.
We just received notice that our panel for SIGCSE Virtual 2024 has been accepted, on the topic of “Assessments for Non-CS Major Computing Classes,” from Jinyoung Hur, Parmit Chilana, Katie Cunningham, Dan Garcia, and me. I’ll give away my position on this panel here: we get assessments for non-CS majors wrong because we think about those students as CS majors. Calling a non-major’s introductory computing course “CS0” assumes that it’s the starting point for a sequence that continues on to CS1 and beyond. Mastery learning is a good idea, but only when the skills to be mastered are appropriate for the student. Asking non-CS majors to master the skills of a CS1 holds them to the standards of the CS major. There is more than one kind of “knowing how to code.” There are conversational programmers, computational artists and scientists, and others in our CS1 classes who need to code, or to understand the process of coding, but who don’t need or want the skills of a professional software developer. Assessment for non-CS majors has to recognize alternative endpoints for computing education.
Side note: Everything we say about computing education for non-CS majors applies to K-12 computing education as well. We should not assume that K-12 students are being prepared for software development jobs. Not all K-12 students will be CS majors, and programming has uses in many careers besides software development.
Finally, ChatGPT is showing up everywhere in computing education research these days. We computing teachers have typically assessed understanding of computing by evaluating proficiency with textual programming. Now ChatGPT can be as proficient in textual languages as the average CS1 student. Assessing understanding becomes harder when we can’t use proficiency as a proxy: the LLMs can make students appear proficient without any actual understanding.
We have a lot to do in assessment as computing education expands and LLMs can perform more of the programming tasks.
——————————
Do you teach an undergraduate introductory computing or programming course and want to help your students make accurate judgments about their programming ability?
We are researchers from Northwestern University interested in co-designing curricular and policy interventions with instructors to help students more accurately assess their programming abilities and develop higher self-efficacy.
Sign up here to learn more about our research on student self-assessments and to collaboratively design interventions at our two-day workshop on August 7 and August 8, 12-3 PM Central Time. Registration will close 3 days prior to the first session. More information about this workshop is available on the workshop website.
To be eligible for this workshop, you must teach an undergraduate-level introductory course and be 18 years of age or older. This study has been approved by the Northwestern University IRB (STU00222017: “Designing interventions to support student motivation and self-efficacy”). The PI for this study is Professor Eleanor O’Rourke.
If you have any questions, please email melissac@u.northwestern.edu.
Best,
Dr. Eleanor O’Rourke
Melissa Chen
Northwestern University