- Nichole Powell, Assistant Professor of Chemistry, Oxford College of Emory University, USA, email@example.com
- Brenda Harmon, Senior Lecturer and Chemistry Laboratory Director, Oxford College of Emory University, USA, firstname.lastname@example.org
- Margy MacMillan, Professor/Communications Librarian, Mount Royal University, Calgary, AB, Canada, mmacmillan@mtroyal.ca
In a pre-conference workshop at ISSOTL 2014, a small group of us decided to investigate how much of the SoTL work presented at ISSOTL 2014 was actually based on data from students. As we dove into the work, trends emerged around the kinds of investigations reported on and the kinds of data presented. This blog post provides a quick summary of what we found and raises questions that point the way to future work.
The workshop, “A Collaborative flash-research project on the state of the field of SoTL,” was led by Peter Felten, Nancy Chick, and Margy MacMillan. It had a number of objectives, including familiarizing participants with the breadth of SoTL research, developing protocols for analysing data from conference programs, and actually looking for patterns and meaning in the abstracts of successful submissions to ISSOTL 2014. Our team – Margy from Mount Royal University, Nichole Powell and Brenda Harmon from Emory University, and Sook (Ivy) Chia from the Singapore Ministry of Education – first extracted presentations that identifiably dealt with data from students. Some of us continued working on the data through the winter and spring and thought others might be interested in what we found.
We found that of the 178 presentations and 44 posters (T=222), abstracts for 98 clearly stated that at least some portion of the presentation was based on data generated by learners (we did not include pre-conference workshops). Of these, using Pat Hutchings’s taxonomy of questions, we classified 53 as “What is” questions, 26 as “What works” questions, and 19 as including elements of both.
We first looked at indicators of which populations were being studied, so we extracted information on the level and discipline of the students and courses involved.
| Student level | # of abstracts (T=98) |
| --- | --- |
| Freshman/early undergraduate | 22 |
| Upper-level undergraduate | 10 |
| Not given | 7 |
| Mixed levels | 5 |
It appears that the majority of studies involve undergraduate students, with the largest identifiable subcategory being first-year students. This leaves a lot of room for those who wish to study students further along in academia. It would be useful to know, for example, whether interventions that show effects in freshman populations produce similar effects in later years.
Similarly, we looked at patterns in the representation of disciplines:
| Subject/Discipline (where noted) | # of abstracts (T=98) |
| --- | --- |
| History/Social Science | 15 |
| All (drawn from across many disciplines) | 12 |
| Not given | 6 |
| Liberal arts/interdisciplinary | 5 |
| Fine Arts | 4 |
The sciences are well represented at the conference. Is it possible that faculty in the sciences who hold appointments with high teaching loads choose to publish SoTL work rather than laboratory-based scholarship? Or is there more support for SoTL in the sciences? Or are presentations based on methods drawn from the sciences more likely to be accepted?
We were interested in the kinds of student data used, distinguishing between performance data (based on students’ assignments, tests, assessments, etc.) and perception data (based on how students thought they were doing and/or their attitudes toward the subject matter).
| Type of data | # of abstracts (T=98) |
| --- | --- |
| Only perception | 49 |
| Only performance | 25 |
| Both performance and perception | 20 |
| Not determinable from abstract | 4 |
Researchers used a variety of performance measures, including tests, written work, concept maps, performance reviews, peer reviews, grade point average, and retention. Similarly, we saw a variety of perception measures: reflections, self-assessments, focus groups, interviews, surveys, and recorded conversations. Some abstracts noted another interesting indicator of student learning: behavior. The results here show a marked reliance on perception data to gauge the effects of various interventions or to understand what is going on in the classroom. What are the implications of this for the strength of conclusions about student learning? Are there benefits to combining data from perception and performance measures, and if so, what are they?
This is just a preliminary look at an interesting data set; there is more work that could be done by looking at criteria in combination. For example, are there patterns in whether “What is” or “What works” questions are measured through performance or perception? And what are the disciplinary differences in the levels of students studied, the questions asked, and the measures used?
This activity – examining the program abstracts for themes – was a fabulous way to clarify our thinking and to give us a “big picture” understanding of what SoTL is and can be. In the workshop we worked with people from various disciplines, which allowed us to move outside our own disciplinary lenses and required us to think outside our disciplinary boxes. We have gained a much clearer understanding of what SoTL is and what it can be (its various forms, the types of questions pursued, the evidence of student learning, etc.).
Open questions others might want to investigate:
- What are the characteristics of research that investigates students across sections, courses or disciplines?
- What are the patterns in methods of gathering and analyzing data from students?
- Are there patterns in the research undertaken, or the results obtained, at different kinds of institutions?
- Do the patterns we saw in the data for the 2014 conference show up in the abstracts for other ISSOTL conferences? other non-ISSOTL conferences?