Our chemical education research group investigates methods of content delivery, student cognition in problem-solving strategies, and assessment in preparatory and introductory college chemistry courses. More specifically, this work centers on three main projects:
- Measuring and enhancing students’ scale literacy
- Examining multiple-choice assessments for differential item functioning
- Combining measures of students’ affect with performance to examine persistence in a STEM major
In addition, I lead various projects in my role as the Director of the Examinations Institute of the American Chemical Society, Division of Chemical Education, together with the Associate Director, Prof. Jeff Raker of the University of South Florida. These are described in detail on the ACS Exams website. Typically, the researchers on these projects include postdoctoral researchers (for whom we are regularly recruiting; additional information is on the ACS Exams website) and Ashford fellows (faculty members on sabbatical; additional information about these opportunities is also available). Very occasionally, there may be opportunities for undergraduate or graduate researchers to participate in these projects, but this is not common.
Finally, I am proud to participate in a number of collaborative projects with faculty from other institutions. As these projects are led by those faculty members, they are not described here. However, periodically there may be opportunities for undergraduate or graduate researchers to participate in these projects.
Scale Literacy
Grasping scale outside the visual realm can be difficult, particularly with regard to the very small. Undergraduate students in preparatory and introductory chemistry courses, for example, are required to begin thinking about certain concepts in chemistry on the particle level, which is orders of magnitude smaller than the resolving ability of the human eye. The development of a student’s scale literacy beyond the concepts of chemistry has been noted by the AAAS (American Association for the Advancement of Science) as an important component of a student’s overall science literacy. Research has shown that students need to continue cultivating their understanding of scale, particularly down to the nanometer range, beyond their elementary and secondary education years. Additionally, it has been found that students who use instrumentation at these very small scales have a better concept of scale than those who do not.
This project measures changes in both a student’s scale perception and unitizing on the atomic level. Unitizing is the development and use of a convenient or familiar unit. For instance, although we use the meter as a common unit of length, it is on the order of human size (we unitize to what is most familiar to us), and it is often only through necessity that we unitize to other units (for example, the light year). Students in preparatory or introductory college chemistry are expected to “think conceptually” of atoms and molecules interacting. The precursor expectation to this is unitizing on the atomic level (with the atom as the unit). Once students unitize on the atomic level, the transfer of both enhanced scale perception and atomic unitizing to other specific content areas is measured.
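As a minimal sketch of what unitizing on the atomic level means arithmetically, the snippet below re-expresses lengths with the atom as the unit. The atomic diameter used is an assumed round value (~0.3 nm) chosen for illustration, not a figure from this project:

```python
# Illustration of unitizing: re-expressing lengths with the atom as the unit.
# ATOM_DIAMETER_M is an assumed round value (~0.3 nm), not a project figure.
ATOM_DIAMETER_M = 3e-10

def in_atom_units(length_m):
    """Express a length (in meters) as a multiple of an atomic diameter."""
    return length_m / ATOM_DIAMETER_M
```

Under this assumption, something as familiar as a human hair (roughly 1e-4 m across) spans a few hundred thousand atomic diameters, which conveys how far the atomic unit sits from everyday human-scale units.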
This project began by investigating students’ conception of scale (both absolute and relative) through interviews and one-on-one activities. The results of these interviews were then used to develop a means of measuring students’ scaling skills and scale conceptual understanding on a classwide level. These assessments have been combined to measure a student’s “scale literacy” and found to be better predictors of success in general chemistry than more traditional measures of math or chemistry content knowledge. The validation studies on these assessments have included response process validity testing.
Scale-themed instruction was then developed for general chemistry I and II and incorporated in all course components: lecture (lecture activities were built for active learning), laboratory experiments and supplemental instruction (available for use by the broad community). These have been systematically incorporated into instruction and tested for learning gains over multiple semesters (with replication studies for all combined components).
Most recently, this project has transitioned from chemistry to anatomy and physiology, where the scale literacy assessments have been re-validated with this new population and the baseline regression model has been developed.
Students interested in this project have opportunities to work on a number of key aspects of the project including evaluating process changes due to scale instruction, evaluating long-term effects of scale instruction and evaluating the use of the supplemental instruction activities.
Differential Item Functioning
Differential item functioning (DIF) is an item-level characteristic of test items in which an item is found to be statistically easier for members of one demographic comparison group than another. DIF analyses typically involve matching examinees from different subgroups (such as gender, race/ethnicity, socioeconomic status, or language ability) on a proficiency variable, carrying out item analysis for each group, and evaluating the results for statistical significance. Where DIF is present, the item is said to “favor” one group over another (a result suggesting that examinees at equal skill levels from different subgroups do not have an equivalent chance of answering a question correctly due to subgroup membership). Statistical techniques for detecting DIF include item response theory (IRT), the simultaneous item bias test (SIBTEST), and the Mantel-Haenszel statistic; these can be applied to both multiple-choice and constructed-response items.
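To make the Mantel-Haenszel approach concrete, the sketch below computes the common odds ratio, the continuity-corrected chi-square statistic, and the ETS delta-scale transformation from per-stratum 2x2 tables (one table per proficiency level). The formulas follow standard psychometric references; the function name and the input format are illustrative choices, not code from this project:

```python
import math

def mantel_haenszel_dif(strata):
    """Mantel-Haenszel DIF statistics from per-stratum 2x2 tables.

    Each stratum (one proficiency level, e.g. one total-score band) is a
    tuple (A, B, C, D): reference-group correct/incorrect counts, then
    focal-group correct/incorrect counts.
    Returns (odds_ratio, chi_square, ets_delta).
    """
    num = den = 0.0                    # components of the common odds ratio
    sum_a = sum_ea = sum_var = 0.0     # components of the chi-square statistic
    for a, b, c, d in strata:
        t = a + b + c + d
        if t < 2:
            continue                   # stratum too small to contribute
        num += a * d / t
        den += b * c / t
        n_ref, n_foc = a + b, c + d    # group sizes in this stratum
        m_corr, m_inc = a + c, b + d   # correct/incorrect totals
        sum_a += a
        sum_ea += n_ref * m_corr / t
        sum_var += n_ref * n_foc * m_corr * m_inc / (t * t * (t - 1))
    odds_ratio = num / den
    # Chi-square with the usual 0.5 continuity correction
    chi_square = (abs(sum_a - sum_ea) - 0.5) ** 2 / sum_var
    # ETS delta scale: 0 means no DIF; sign indicates which group is favored
    ets_delta = -2.35 * math.log(odds_ratio)
    return odds_ratio, chi_square, ets_delta
```

For two hypothetical strata in which both groups perform identically, e.g. `[(40, 10, 40, 10), (30, 20, 30, 20)]`, the odds ratio is 1 and the delta is 0, consistent with no DIF; an odds ratio well away from 1 with a significant chi-square flags the item for review.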
In this project, multiple-choice exam items in general chemistry were investigated using trial-tested preliminary exams prepared for a standardized first-term general chemistry exam. These items were then retested in general chemistry I using both the original items (those not included in the final released version of the test) and clones of the original items. They were tested under high-, medium-, and low-stakes conditions, with proficiency matched internally (using the score on the assessment itself as the measure of proficiency) and externally (using placement exam scores, ACT scores and subscores, and standardized final exam scores). These items have been tested over three semesters, and the subset of items showing persistent DIF has been coded into an eye-tracking study. Eye-tracking measures used to examine differences between the subgroups solving these tasks include performance, time on task, scan-path maps, fixation patterns, and task-evoked pupillary response. Additionally, these subgroups have been collectively examined for processing differences correlated with DIF persistence.
These studies have also been extended to general chemistry II where the method is similar, examining both trial items and items found to exhibit persistent DIF in higher stakes testing. Additionally, these methods have also been used to examine subgroups based on test form (through work with ACS Exams) and treatment vs. control (based on program changes). This latter work entailed incorporating studies of sample size stability for both overall size and ratio between subgroups.
Students interested in this project have opportunities to work on a number of key aspects of the project including continuing the interview work to investigate the extent of domain specificity of task-evoked pupillary response, persistence of differential performance (through repeated testing) and applying alternate methods of classifying proficiency level.
Persistence in STEM majors
In addition to differential performance on tasks, subgroups have been found to have differential persistence in STEM majors. Using social cognitive theory and social cognitive career theory, four indicators are used to predict persistence in STEM majors: self-efficacy, outcome expectations, interest, and goals. When considering the performance model, self-efficacy and outcome expectations emerge as key components of the model. Instruments to measure these in chemistry have been developed and validated with students in introductory and general chemistry.
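One common way a four-indicator prediction model like this can be pictured is as a logistic model mapping the indicators to a persistence probability. The sketch below is purely illustrative: the function name, weights, and intercept are hypothetical placeholders, not fitted values from this research:

```python
import math

# Hypothetical sketch of an SCCT-style persistence model: the four indicators
# feed a logistic function. Weights and intercept are illustrative
# placeholders, not estimates from this project.
def persistence_probability(self_efficacy, outcome_expectations, interest,
                            goals, weights=(0.9, 0.6, 0.5, 0.7),
                            intercept=-2.0):
    """Return a modeled probability of persisting in a STEM major."""
    indicators = (self_efficacy, outcome_expectations, interest, goals)
    z = intercept + sum(w * x for w, x in zip(weights, indicators))
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link
```

In a fitted version, the relative sizes of the estimated weights would indicate which indicators (e.g. self-efficacy vs. outcome expectations) carry the most predictive weight for persistence.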
Changes in persistence may occur over the course of a single semester and may be due to many different factors. In an attempt to isolate factors related to course experiences, subset instruments of self-efficacy and outcome expectations have been developed and validated with students in general chemistry. Further, to investigate the fluidity of these, additional interventions have been developed and tested. The current work on this project includes establishing the best methods for longitudinal evaluation of students’ persistence in STEM majors and the potential to move beyond the performance model.
Students interested in this project have opportunities to work on a number of key aspects of the project including investigating the current results of instruments in use in this group to measure interest and goals and incorporating these into the model, investigating key subgroups for differential persistence measures and combining with the work on differential performance for any key performance areas.