Derek Briggs

  • Professor, Research & Evaluation Methodology
  • Director, CADRE
Address

Miramontes Baca Education Building, Room 400D
University of Colorado at Boulder
249 UCB
Boulder, CO 80309

As a professor in the Research and Evaluation Methodology program, where he also directs the Center for Assessment Design Research and Evaluation (CADRE), Derek Briggs has built his career around a deceptively simple question: How do we know what students have truly learned? He addresses this question through research on the assessment, measurement, and evaluation of student learning.

An elected Fellow of the American Educational Research Association and a member of the National Academy of Education, Dr. Briggs has served as president of the National Council on Measurement in Education and has edited the journal Educational Measurement: Issues and Practice. He has authored a widely respected book on the foundations of measurement, and his research has shaped how educators think about tracking student growth, bridging day-to-day classroom assessment with high-stakes testing, and understanding the impact of educational programs and interventions on student achievement. Dr. Briggs regularly advises researchers, state agencies, school districts, and other organizations on the design and use of educational tests and assessments.

Dr. Briggs is also known for his broader contributions to psychometrics and quantitative research methodology. He pushes researchers to think hard about why they choose the methods they use—and to explain them clearly and transparently, without hiding behind big words. At CU Boulder, Dr. Briggs teaches graduate courses that blend theory with hands-on statistical analysis, preparing the next generation of scholars to ask hard questions and demand strong evidence.

Education

PhD Education, University of California, Berkeley, 2002
MA Education, University of California, Berkeley, 1998
BA Economics, Carleton College, 1993

Research

How does one hold teachers and schools accountable for student learning? How do we know “what works” in terms of interventions that have the most potential to increase student learning? I see two related methodological obstacles to answering these questions. First, analyses premised on standardized test results assume that these tests validly measure what they claim to measure. Such claims can’t be accepted at face value. Second, even when these test outcomes can be accepted as valid measures of some construct of interest (e.g., “understanding of mathematics”), it is very difficult to isolate the effect that teachers, schools, and other interventions may be having on them, let alone to generalize the presumed effect beyond the local context of a given study. Because the policies associated with accountability programs and effective educational interventions are high stakes in nature, it is important to scrutinize the methods used to measure and evaluate growth in student achievement. The goal is to find ways to improve these methods, or to develop better alternatives.
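
To make the second obstacle concrete, here is a minimal, purely illustrative sketch (not a model used in any CADRE project or endorsed here) of the kind of covariate-adjusted “value-added” regression whose assumptions this line of research scrutinizes. The data, school effects, and variable names are all simulated and invented for demonstration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated data: 2,000 students nested in 20 schools, each school assigned a
# hypothetical "true" effect on the current-year test score.
n_students, n_schools = 2000, 20
school = rng.integers(0, n_schools, size=n_students)
true_school_effect = rng.normal(0.0, 0.2, size=n_schools)
prior_score = rng.normal(0.0, 1.0, size=n_students)
current_score = (0.7 * prior_score
                 + true_school_effect[school]
                 + rng.normal(0.0, 0.5, size=n_students))

df = pd.DataFrame({"current": current_score,
                   "prior": prior_score,
                   "school": school})

# Naive value-added model: current score regressed on prior score plus
# school indicators; the school coefficients are the "value-added" estimates.
fit = smf.ols("current ~ prior + C(school)", data=df).fit()
print(fit.params.filter(like="school").head())
```

Whether estimates like these can bear a causal interpretation about schools, and whether the test scores themselves measure what they claim to, are exactly the questions raised above.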

A precondition for the measurement of student learning is typically the administration and scoring of a standardized test instrument. In much of the educational research literature it is implicitly assumed that differences in test scores among students (i.e., differences in student “achievement”) are suggestive of different levels of what students have learned in a given domain. It follows from this that subsequent changes in these scores over time for any given student can be interpreted as a quantification of learning. A key aim of my research is to increase the plausibility of such assumptions by making tighter linkages between what students learn and the way that this is measured.

Much of my research focuses upon both technical and theoretical issues in the development of measurement models. My emphasis has been on models within the framework of what is known as item response theory (IRT). Even my most technical research endeavors have a clear relevance to the bigger picture of developing valid measures of student learning. From a more theoretical standpoint, I am interested in foundational issues that are easy to overlook in psychometrics. For instance, is it appropriate to apply the terminology of measurement, which has its roots in the physical sciences, to describe the process of quantification in the social, psychological and behavioral sciences? Psychometricians often talk past each other because we do not share a commonly understood definition or vocabulary for educational measurement.
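
As a concrete (and heavily simplified) illustration of what an IRT model does, the sketch below uses a two-parameter logistic (2PL) item response function to simulate dichotomous item responses. The examinee abilities and item parameters are invented for demonstration and do not come from any of the projects described here.

```python
import numpy as np

rng = np.random.default_rng(42)

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response given
    ability theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Invented parameters for a small demonstration: 500 examinees, 5 items.
theta = rng.normal(0.0, 1.0, size=500)      # latent abilities
a = np.array([0.8, 1.0, 1.2, 1.5, 2.0])     # item discriminations
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # item difficulties

# Simulate a 500 x 5 matrix of 0/1 item responses under the model.
probs = p_correct(theta[:, None], a[None, :], b[None, :])
responses = rng.binomial(1, probs)

# Harder items (larger b) should show lower observed proportions correct.
print(responses.mean(axis=0))
```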

My portfolio of research projects is constantly changing, but it almost always involves psychometric investigations using empirical and/or simulated data, or more evaluative projects that involve statistical modeling and analysis. Most of these projects are part of ongoing work through the Center for Assessment Design Research and Evaluation (CADRE).

Please visit the CADRE website and my curriculum vitae for summaries of current projects as well as links to publications, project reports, etc. Feel free to email me for “pre-print” versions of any publications behind journal paywalls.

Teaching

In the various courses I teach, my goals are (1) to show my students how quantitative methods and models can help them as they make and critique research arguments, (2) to impress upon them that the validity of research conclusions depends not upon the specific methodological approach being taken, but upon how well the approach fits the research question that was posed, (3) to help them learn how different quantitative methods fit together and how they can be used effectively, and (4) to motivate them to deepen their understanding of different methodological approaches.

I teach graduate-level courses on quantitative research methodology, with a focus on psychometrics. These include:

  • Quantitative Methods in Educational Research I (EDUC 8230)
  • Measurement in Survey Research (EDUC 8710)
  • Psychometric Modeling: Item Response Theory (EDUC 8720)
  • Latent Variable and Structural Equation Modeling (EDUC 7396)