Assessment at Duke University



A Handbook for
Chairs and Program Officers

Developed by
Jennifer L. Hill, Ed.D.

Text reviewed by
Alessandra Dinin, Ph.D., Matt Serra, Ph.D., and Evan Widney

Please direct questions to Jennifer Hill.


We thank the following individuals for their long-standing commitment to the practice of learning outcomes assessment at Duke, and for their active support of the initiatives and services described in this document.



Valerie Ashby, Ph.D.

Lee Baker, Ph.D.

Frank Blalark, Ph.D.

Jennifer Francis, Ph.D.

Molly Goldwasser, Ed.D.

Patricia Hull

David Jamieson-Drake, Ph.D.

Valerie Konczal

Sally Kornbluth, Ph.D.

Peter Lange, Ph.D.

Shawn Miller

Elise Mueller, Ph.D.

Arlie Petters, Ph.D.

Philip Pope

Cheryl Ratchford

Kendrick Tatum

Robert Thompson, Ph.D.

Lee Willard, Ph.D.

Edward Gomes





Educators are constantly engaged in the practice of assessment and improvement, whether or not we are aware of it.  Every enhancement to a learning experience, inside the classroom and in the co-curriculum, is based on some explicit or implicit evaluation of the learning occurring (or not occurring) therein.  Learning outcomes assessment is the continuous and systematic process by which we (1) collect evidence about students’ learning, (2) communicate these findings to students, colleagues, college leaders, accreditors, and the community at large, and (3) demonstrate to a variety of stakeholders that we use these findings to inform and improve educational practice.  Executed well, it is a scholarly enterprise that utilizes a variety of research methodologies and is held to rigorous standards.[1]

Observers sometimes approach assessment narrowly, perhaps assuming that as long as data about students are collected, the requirements of assessment have been satisfied.  That assumption contains a kernel of truth:  evidence is central to this enterprise.  However, assessment as we practice it in Trinity College is a broader tradition of evaluating the degree to which our students know and can do the things we expect of them, and then making conscious, evidence-based enhancements to our programs and practices.  Thus, the four main purposes of program assessment are:

To improve.  The assessment process should cultivate recommendations for ways the faculty can enhance the program.

To inform.  The assessment process should inform faculty and other stakeholders of the program’s impact and influence.

To prove.  The assessment process should demonstrate to students, faculty, staff, and external observers the program’s strengths and opportunities for improvement.

To support.  The assessment process should provide support for campus decision-making activities such as program review and strategic planning, as well as external accountability (e.g., accreditation).


A common misconception about assessment is that it is driven and directed by external accreditors, who enforce uniformity of outcomes, measures, and standards in undergraduate education.  On the contrary, our regional accreditor[2], as a partner in the process of assessment, encourages faculty autonomy in the development of learning outcomes and the methods by which they are studied.  Likewise, the Office of Assessment urges program faculty to develop relevant learning outcomes for students in that program.  As disciplinary experts, you are in the best position to articulate these values and to establish an assessment methodology that makes sense for your program. 

Another misconception about assessment is that assessment happens only occasionally.  Program reviews[3] may occur on a 10-year cycle, but assessment – as a culture of iterative study of and reflections on student learning – is always underway.  The Office of Assessment supports this approach by encouraging regular meetings among assessment liaisons, offering workshops and information sessions throughout the academic year, and most importantly structuring your assessment report as a reflective portfolio.  To be effective – that is, to use evidence productively – assessment must be continuous.  As the program evolves, so too does its assessment plan and methods of measurement. 

In 1992, the American Association of Higher Education published nine principles[4] of assessment, which remain relevant and influential twenty-some years later.  These principles assert:




The assessment of student learning begins with educational values.

Assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time.

Assessment works best when the programs it seeks to improve have clear, explicitly stated purposes.

Assessment requires attention to outcomes but also equally to the experiences that lead to those outcomes.

Assessment works best when it is ongoing, not episodic.

Assessment fosters wider improvement when representatives from across the educational communities are involved.

Assessment makes a difference when it begins with issues of use and illuminates questions that people really care about.

Assessment is more likely to lead to improvement when it is part of a larger set of conditions that promote change.

Through assessment, educators meet responsibilities to students and to the public.


This handbook is not intended to supplant regular meetings among assessment personnel and program officers.  Rather, this resource is intended to provide program officers with the structure, guidance, and information they need to lead the practice of assessment within their programs.   


[1] Jennifer Hill EDUC 289S course syllabus

[2] Southern Association of Colleges and Schools, Commission on Colleges




A search for the term assessment cycle on the Internet returns an almost limitless number of illustrations of assessment.  The descriptive vocabulary may vary from institution to institution, but these models share some central elements. 

First, the faculty within a program must determine its learning outcomes: specific and measurable statements of what students know and are able to do by virtue of their participation in that program. 

Second, the program must select or design the measures (or methods) by which it will collect information about the learning outcomes.  It’s often necessary to use multiple measures, both direct and indirect, to understand whether and to what degree students are developing in these areas. 

Third, once the data are in, faculty representatives begin the process of analyzing and interpreting them.  This includes comparing the results with the program’s expectations.

Fourth, the individuals leading the collection and interpretation of evidence must share their findings with the larger faculty body.

Fifth, the program’s faculty determines how to use these findings to shape future enhancements to faculty teaching and student learning in the program.


The Department Assessment Portfolio [DAP], which is described in detail in this document, guides faculty and program officers through this process. 



The Office of Assessment strives for common, accessible language to describe the study of teaching and learning.  Some key terms, however, commonly appear in written publications on assessment and in conversations among practitioners of assessment.


Action research.  Action research[1] commonly is understood as research intended to address an immediate issue, as well as the iterative process of problem-solving among individuals engaged in communities of practice.  Linda Suskie provides a helpful contrast between traditional empirical research and assessment:


“[the former] is conducted to test theories, while assessment… [is] a distinct form of action research, a distinct type of research whose purpose is to inform and improve one’s own practice, rather than make broad generalizations…  It is disciplined and systematic and uses many of the methodologies of traditional research…  [Practitioners] aim to keep the benefits of assessment in proportion to the time and resources devoted to them.”[2]


Authentic evidence.  Authentic assessments sometimes are called “performance assessments.”  In contrast to traditional assessments (e.g., multiple-choice tests), authentic assessments (writing assignments, projects, lab activities, computer programs, and portfolios, among others) enable students to demonstrate their skills, competencies, and abilities in real-world situations.  The evidence derived from these measures may be described as “authentic evidence.”


Benchmarks.  It would be difficult to interpret the results of one’s assessment activities without a standard against which to compare those results.  As one begins planning and implementing an assessment task, it is helpful to articulate that standard.  The Department Assessment Portfolio [DAP] refers to benchmarks as “targets”.  The following table[3] introduces some of the benchmarks commonly recommended and used by the Office of Assessment.



Benchmark:  Questions the benchmark can answer

Local standards:  Are our students meeting our own standards or expectations?

External standards:  Are our students meeting standards or expectations set by someone else?

Internal peer benchmark:  How do our students compare to peers within our course, program, or college?

External peer benchmark:  How do our students compare to peers at other colleges?

Value-added benchmark:  Are our students improving over time?


The selection of an appropriate benchmark depends on the availability of comparative information, whether from past studies, internally within the institution, or externally across institutions.  In all cases, it is useful to develop a consensus among faculty so that, eventually, the findings will be accepted and acted upon by specific stakeholders.
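To make the idea of a local-standards benchmark concrete, the short Python sketch below checks whether a hypothetical target has been met; all scores, thresholds, and targets are invented for illustration only.

```python
# Hypothetical local-standards benchmark: the program expects at least
# 80% of seniors to score 3 or higher (on a 5-point rubric) for one SLO.
rubric_scores = [4, 3, 5, 2, 4, 3, 3, 5, 4, 2]  # invented sample data
target_proportion = 0.80
threshold = 3

# Proportion of students at or above the threshold score.
proportion = sum(s >= threshold for s in rubric_scores) / len(rubric_scores)
met = proportion >= target_proportion

print(f"{proportion:.0%} of students met the threshold; target met: {met}")
# → 80% of students met the threshold; target met: True
```

The same comparison logic applies to external or peer benchmarks; only the source of the comparison value changes.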


Closing the loop.  Perhaps the single most important part of assessment is using one’s findings to influence or make changes to student learning experiences.  If a program’s targets are not met, the faculty should consider why, and what to do about it.  But even when targets are met, one can still use these findings to guide discussion about the state of the curriculum, advising, resources, the assessment plan itself, and so on.  In many cases, adjustments are small-scale and straightforward to administer.  In other cases, the suggested changes may require months or a year, implemented over sequential steps.  The Department Assessment Portfolio [DAP] walks one through the process of interpreting findings and making decisions about next steps.  


Curriculum map.  A curriculum map[4],[5] is a planning and organizational device in which the practitioner aligns the student learning outcomes with different points in the students’ learning experience.  It encourages discussions among faculty about when students are expected to encounter, develop, and master essential learning outcomes.  It exposes and enables a critical review of possible misalignments.  Finally, it helps individuals plan for the most suitable times to collect evidence of learning from students, and from which learning tasks. 


Direct and indirect measures.  There are many ways to characterize or differentiate assessment measures.  Direct measures are those that show in a clear, tangible, and compelling way what students know and can do.  These include performances, presentations, written work, capstone projects, and mentor or employer observations.  Usually these demonstrations of ability are rated by faculty reviewers, using a rubric.  By contrast, indirect measures approximate what students know and can do.  Suskie explains, “indirect evidence consists of proxy signs that students are probably learning”.[6]  Surveys and course evaluations are indirect measures:  in most cases they ask students to self-report their perceived learning progress.  Other examples include course grades, retention and graduation rates, placement rates, participation in learning experiences, and awards received.  It’s worth noting that grades usually are treated as indirect measures because often it is difficult to reliably deconstruct how the grade maps onto or is aligned with the learning outcomes of the course or program.


Experiential learning.  The Association for Experiential Education defines it as “a philosophy that informs many methodologies in which educators purposefully engage with learners in direct experience and focused reflection in order to increase knowledge, develop skills, clarify values, and develop people's capacity to contribute to their communities.”[7]  Recognized by the Association of American Colleges and Universities [AACU] as a high-impact practice[8], experiential learning connects classroom-based learning activities with real-world contexts and problems.  The Office of Civic Engagement[9], DukeEngage[10], and Service-Learning[11] are examples of Duke programs connecting students and faculty to structured opportunities for experiential learning.


Formative and summative assessment.   Formative assessment generally refers to the learning process:  how does one provide feedback to students to facilitate the process of learning and development?  Such assessments happen while the course (or other learning experience) is underway, and they are used to improve or enhance the learning of current students by making in-time adjustments to the pedagogy or learning environment.  Summative assessment, by contrast, occurs at the end of the learning experience, when we take stock of students’ learning across the experience.  Formative and summative assessments are equally important and compatible. 


High-impact practices.  A handful of well-established teaching and learning practices are known to be beneficial for college students from many backgrounds, especially historically underserved students, who often do not have equitable access to high-impact learning.[12]  These practices include:


First-Year Experiences

Common Intellectual Experiences

Learning Communities

Writing-Intensive Courses

Collaborative Assignments and Projects

Undergraduate Research

Diversity/Global Learning


Service Learning, Community-Based Learning


Capstone Courses and Projects


Longitudinal and cross-sectional designs.  Like many other research areas, assessment tries to understand gains made by groups of individuals over time.  In that respect, this longitudinal approach is similar to a time-series design in which levels of competency (or skill, disposition, etc.) are measured at multiple points in time.  This approach requires monitoring individuals over time and, in many cases, managing attrition in a study sample.  Despite the analytic value of a longitudinal design, sometimes its logistical challenges compel the analyst to use a cross-sectional design.  Instead of measuring learning gains in a student group over time, one might have data from a single point in time.  To interpret data from a single point in time, however, the analyst must determine a suitable benchmark for comparison.  (See section on benchmarks above.)  Both approaches are useful for studying student learning, and they can be used in a complementary way.
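The contrast between the two designs can be sketched in a few lines of Python; all student identifiers, scores, and the benchmark below are invented for illustration.

```python
# Longitudinal view: the same (hypothetical) students measured at
# program entry and again at graduation.
entry_scores = {"s1": 2.0, "s2": 3.0, "s3": 2.5}
exit_scores = {"s1": 3.5, "s2": 4.0, "s3": 3.0}

# Value-added (longitudinal) analysis: average within-student gain.
gains = [exit_scores[s] - entry_scores[s] for s in entry_scores]
mean_gain = sum(gains) / len(gains)

# Cross-sectional analysis: one snapshot compared against a benchmark
# (an assumed external standard, since no earlier data are available).
benchmark = 3.2
snapshot_mean = sum(exit_scores.values()) / len(exit_scores)

print(f"mean gain: {mean_gain:.2f}")
print(f"snapshot mean {snapshot_mean:.2f} vs benchmark {benchmark}")
```

Note that the longitudinal version requires tracking the same students across time (and handling attrition), while the cross-sectional version is only interpretable relative to the chosen benchmark.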


Measures.  A measure is a source of evidence.  We use it as a noun, to indicate the device, tool, or mechanism through which we will collect information about student learning.  Measures include student surveys, interviews, faculty-evaluated papers, among many others.  A helpful list of measures is available on the Cleveland State University website.[13]  This website also is linked from within the Department Assessment Portfolio [DAP] for easier reference.


Mission statement.  All Duke University academic departments and programs should have codified mission statements on the program website.  The mission statement, as an aspirational declaration of the core values of the program, should anchor the program’s student learning outcomes.  One should observe clear alignment between the mission of the program and the student learning outcomes under exploration in the program’s assessment plan.


Program objectives.  We understand that academic departments and programs measure a variety of things germane to undergraduate teaching and learning.  Assessment, as it is commonly understood, focuses on understanding what and how students learn.  Program objectives illustrate how the program itself wants to evolve.  They may include tracking enrollment statistics, growing the number of faculty lines, promoting undergraduate research, and expanding laboratory space.  These all are critical inputs to student learning, but they are not measures of learning.  The Department Assessment Portfolio [DAP] provides you a space to document and share this important work, but take care not to confuse program objectives and program evaluation with the genuine assessment of student learning.


Program review, program evaluation, and institutional effectiveness.  Assessment, program review, evaluation, and institutional effectiveness often are used interchangeably, but inappropriately so.  Linda Suskie attempts to clarify the difference:


Program review is a comprehensive evaluation of an academic program that is designed both to foster improvement and demonstrate accountability.  [They include] self-study conducted by the program’s faculty and staff, a visit by one or more external reviewers, and recommendations for improvement… Student learning assessment should be a primary component of the program review process.[14]


She goes on to describe institutional effectiveness as promoting not only students’ learning, but also each of the other college-wide aims [e.g., research and scholarship, diversity, community service].  Program review, evaluation, and institutional effectiveness represent broad efforts to demonstrate how well the institution is achieving its mission and core values.  Learning outcomes assessment is a key part of those efforts.


Quantitative and Qualitative traditions of inquiry.  Quantitative assessments generally use structured, pre-set response options that are numerically coded for later analysis through descriptive or inferential statistical techniques.  Qualitative assessments, on the other hand, use more flexible and naturalistic methods to identify themes and patterns.


Individuals who are new to the process of assessment understandably might assume that they are expected to produce numbers:  correlation coefficients, inferential statistical models, etc.  Not true!  Our faculty partners are encouraged to develop assessments of learning that are aligned with and authentic to the research traditions of their disciplines.  For example, a well-designed and carefully implemented set of focus groups, whose transcripts are rigorously coded and evaluated by a trained analyst, can yield important, even transformative, information about student learning.  We encourage our partners to recognize and be open to the variety of assessment measures that can reveal insights about student learning.
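As a small, hypothetical illustration of how coded qualitative evidence might be summarized, the Python sketch below tallies invented theme codes assigned to focus-group transcript excerpts:

```python
from collections import Counter

# Codes assigned to focus-group transcript excerpts by a trained
# analyst; the code names and counts are invented for illustration.
coded_excerpts = [
    "applies-theory", "cites-evidence", "applies-theory",
    "collaboration", "cites-evidence", "applies-theory",
]

theme_counts = Counter(coded_excerpts)
for theme, count in theme_counts.most_common():
    print(theme, count)
# → applies-theory 3
#   cites-evidence 2
#   collaboration 1
```

A simple frequency count like this is only a starting point; the analytic value lies in the coding scheme and the interpretation of the themes.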


Rubrics.  Simply put, rubrics are scoring guides.  They guide raters through the criteria that represent levels of learning or competency.  Their benefits include:


  • They can clarify vague or undefined objectives.
  • They help students understand the instructor’s or program’s expectations.
  • They give students structure for self-evaluation and improvement.
  • They help make scoring more consistent, reliable, and valid.
  • They improve the quality of feedback and reduce disagreements with students.


Rubrics can take many forms and encompass almost limitless content.  Thus, a rubric that is reliable, valid, and usable among multiple raters takes time and practice to develop.  Often it is helpful to pilot test a rubric before deploying it on a large scale.  For excellent examples of rubrics on a variety of competency areas, see the AACU VALUE Project.[15]  They are free to the public and open for local adaptation.
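During a pilot test, one simple check of scoring consistency is the exact agreement rate between two raters. The Python sketch below uses invented scores; more robust statistics (e.g., Cohen’s kappa) also account for agreement expected by chance.

```python
# Scores assigned by two raters to the same six student artifacts
# during a rubric pilot (all values invented for illustration).
rater_a = [3, 4, 2, 5, 3, 4]
rater_b = [3, 4, 3, 5, 3, 4]

# Exact agreement: the share of artifacts given identical scores.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"exact agreement: {agreement:.0%}")
# → exact agreement: 83%
```

Low agreement during a pilot usually signals that the rubric’s criteria or level descriptions need revision, or that raters need calibration before full deployment.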


Student Learning Outcome [SLO].  A student learning outcome represents a destination:  what do students know and what are they able to do by virtue of a learning experience?  What are the knowledge, skills, attitudes, and habits of mind that students develop as a result of a program, class, or major? 


Program officers often need assistance developing and adapting student learning outcomes.  A simple mnemonic is A + B + C:  An actor undertakes or demonstrates a behavior in some learning context.  For example, by graduation, students in the Business major are able to evaluate multiple sources of data to craft a hypothesis.  The student learning outcome focuses on the resulting competency or skill, not on the inputs to or process of learning.  Continuing the example above, the term paper requirement is not the outcome; it is the process by which students reach the outcome.  Writing sound learning outcomes is a critical early step in the process of assessment planning.  The measures one selects, the targets one sets, and one’s interpretation of evidence depend on the language of the student learning outcome. 


Targets.  See benchmarks, above.


Triangulation.  One of the principles of action research is the necessity of triangulation.  Student learning is messy and complicated.  Given the myriad factors that influence learning (student, organizational, institutional) it can be very difficult to establish causality.  As much as possible, we use multiple measures to paint a multi-dimensional picture of student learning.  These sources of evidence ideally corroborate one another and reinforce the conclusion that students, indeed, have learned what you expect them to.
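A triangulation check can be as simple as tallying how many independent measures corroborate a conclusion about an outcome; the Python sketch below uses invented measure names and results:

```python
# Hypothetical triangulation for one SLO: did each independent
# measure meet its target? (Measure names and results are invented.)
measures = {
    "capstone rubric (direct)": True,
    "senior survey self-report (indirect)": True,
    "alumni placement rate (indirect)": False,
}

corroborating = sum(measures.values())
print(f"{corroborating} of {len(measures)} measures corroborate the outcome")
# → 2 of 3 measures corroborate the outcome
```

When measures disagree, the disagreement itself is informative: it may point to a weakness in one measure, or to a dimension of learning that the others are missing.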


[1] Reason, P., & Bradbury, H. (Eds.). (2001). Handbook of action research: Participative inquiry and practice. Sage.

[2] Suskie, L. (2018). Assessing student learning: A common sense guide. John Wiley & Sons.  See page 15.


[3] Suskie, L. (2018). Assessing student learning: A common sense guide. John Wiley & Sons.  See table 15.1 on page 234.

[6] Suskie, L. (2018). Assessing student learning: A common sense guide. John Wiley & Sons.  See page 20. 

[14] Suskie, L. (2018). Assessing student learning: A common sense guide. John Wiley & Sons.



The departmental assessment liaison is a long-established role in Trinity College.  Historically this individual has coordinated assessment planning and implementation within and on behalf of the department, including the reporting of findings and proposed next steps for the development of the curriculum, course content, pedagogy, and academic support services. 


An additional role emerged in spring 2018:  the curricular liaison.  In response to the College’s new initiative, Enhancing Undergraduate Teaching and Learning, department chairs appoint the curricular liaison to facilitate visionary thinking and planning in the service of excellence in the first two years of the college experience.  Like the assessment liaison, the curricular liaison is expected to use evidence of student learning to inform plans and resource requests focused on the first and second years.  The two roles are expected to interact often and constructively; in many cases, a single faculty member holds both roles on behalf of the department’s faculty. 


The practical execution of roles naturally varies from program to program.  In general, however, the following distribution of responsibilities tends to characterize the division of assessment activity in most Trinity College programs. It is expected that the following personnel will collaborate closely to develop and implement a sound assessment plan, and to use results to inform teaching and learning in the future. 




Assessment liaison

Curricular liaison


Oversees self-evaluation activities within the program.

Leads the development and yearly review of learning outcomes for the undergraduate experience.

Aligns learning goals for years 1 and 2 with the learning outcomes for undergraduate education generally.   

Assigning of responsibilities

Determines which persons are responsible for the development, continuation, and evolution of the assessment plan and its implementation.

Seeks assistance and collaboration from faculty colleagues.

Seeks assistance and collaboration from faculty colleagues.


Provides leadership support to the assessment liaison

Executes the planned assessment strategy with support from Chair, DUS, staff, and other faculty

Works with program officers to design and implement innovations.  Works with the assessment liaison to determine how success of implementation should be measured and integrate assessment into the portfolio.


Plans structured opportunities for the assessment liaison to share updates with the faculty and solicit feedback.

Initiates conversations about student learning and assessment with faculty colleagues.  Informs the program of institutional requirements regarding learning assessment.  Liaises with the Office of Assessment.

Initiates and promotes conversations about student learning in the first two years.  Liaises with the Office of Assessment and the Dean of Academic Affairs.  


Authors an annual report which articulates planning goals and objectives, and reports recent activities and outcomes.

Completes the annual Department Assessment Portfolio by June 1 of each academic year.

Develops periodic reports focused on student success benchmarks and learning outcomes for years 1 and 2 of the undergraduate experience.  Contributes to the annual Department Assessment Portfolio due by June 1 of each academic year.   

Data collection & analysis

Provides leadership support to the assessment liaison

Structures and executes the collection and analysis, delegating to or collaborating with colleagues as appropriate to study learning in the program generally.

Utilizes existing and planned assessment data to answer questions about student learning in the first two years.  Consults with assessment liaison to integrate new data sources.  Structures and executes the collection and analysis, delegating to or collaborating with colleagues as appropriate for efforts concerning excellence in education in the first two years.

Future planning

Leads the effort to use findings to enhance student learning.

Provides evidence-based recommendations to the faculty community to enhance undergraduate student learning generally.

Provides evidence-based recommendations to the faculty community to enhance undergraduate student learning in the first two years.


Identifies needs and seeks resources (e.g., funding)

Consults regularly with Office of Assessment for support and guidance.

Consults regularly with Office of Assessment for support and guidance.  Communicates financial needs to the Dean of Academic Affairs.




The Department Assessment Portfolio [DAP] is the tool through which Trinity College academic programs plan for, describe, and document their study of undergraduate student learning.  It is intended to be used throughout the academic year, reaching completion in late May.  On June 1 of each academic year the Office of Assessment will begin reviewing and providing feedback on each program’s portfolio.  Program officers should expect notification of feedback in August, in preparation for the start of the next academic year.

Early in each academic year, the assessment liaison and/or Chair should meet with the program’s assigned assessment staff member to (a) begin the assessment process for the year and (b) ensure access to and discuss any recent updates to the DAP.   A comprehensive users’ guide[1] as well as a brief quick start guide[2] are available for help and guidance. 

The DAP requires the following pieces of information.  At minimum, assessment liaisons should discuss these topics with the program Chair, DUS, and curricular liaison, but preferably they should be informed and developed through wider conversations among program faculty. 

  1. Mission statement for the program
  2. Explanation of the assignment and sharing of assessment responsibilities within the program
  3. Explanation of the implementation of planned actions from the previous assessment cycle.
  4. Evidence of an assessment plan and a proposal to improve the experience of undergraduate students in the first two years.
  5. Articulation of Student Learning Outcomes [SLOs]. 
    Because the crafting of the SLO is a key part of assessment planning, the Office of Assessment evaluates them carefully early in each assessment cycle.  We also offer workshops, both live and recorded, to assist with the development of SLOs.[3]
  6. For each of the 2-4 SLOs under exploration each year, the program describes:
    • The measure(s) by which evidence about this SLO is collected
    • The target:  what it hopes to find
    • The actual findings, and a judgment as to whether the SLO was met
  7. A general interpretative summary of the assessment findings for that outcome, across measures
  8. A plan for changes or enhancements, which may include updates to courses, course sequencing, advising practice, physical resources, or the assessment plan itself.

The portfolio is completed in PebblePad[4], the university’s enterprise portfolio platform.  Trinity College does not accept paper copies of the assessment portfolio, or versions delivered in other file formats (e.g., pdf, xlsx).  Please be sure to discuss access to and completion of the DAP with your assigned assessment staff member.  A quick start guide[5] and comprehensive users’ guide[6] also are available to support your work.

While the content of departments’ assessment plans varies widely, Trinity College nonetheless holds departments to common standards of assessment practice.  The rubric we use to identify points of strength and concern, guide our feedback, and report a general summary to College leadership is listed below; it was originally developed and deployed by the former Arts & Sciences Faculty Assessment Committee [ASFAC].  We use the numerical ratings to monitor aggregate progress over time and to identify areas currently needing focused support across the College. 


Each criterion is rated on a five-point scale (1-5 pts), anchored by the labels Not sufficient, Can be improved, and Satisfactory (Meets expectations).

1: The department clearly identifies its mission & goals.

2: The department clearly identifies specific learning objectives/outcomes.

3: The department clearly identifies measures, instruments, and indicators.

4: The department clearly identifies the methods and standards used to judge the quality of learning products and indicators.

5: Achievement targets are clearly identified.

6: Findings are discussed in the assessment report; the report indicates if targets have been met.

7: Findings are shared and discussed with faculty members in the department for purposes of future target setting and action planning; faculty members engage in meaningful discussion about the assessment.

8: The department takes specific actions (based on findings) to strengthen undergraduate education. The department sets clear future targets.


Written feedback from Assessment personnel is compiled each summer during the review process, and the feedback is released to all departments concurrently at the end of the summer.  Only individuals who author or formally collaborate on the portfolio will see the feedback and receive notice of its availability in PebblePad.  Chairs and Directors of Undergraduate Study may request the feedback from their assessment liaisons at any time.






The Office of Assessment administers course evaluations on behalf of Trinity College academic departments.  The questionnaire occasionally is revised at the request of and in partnership with the Provost’s Office, and we inform program officers by email as soon as the updates are finalized.  General instructions for academic departments and programs are available online, and are distributed by email each term prior to the opening of the course evaluation window.[1]

Student questionnaires and associated “codebooks”, for the current term and past terms, also are available on our website.[2]  In general, we evaluate all Trinity College courses, with the following exceptions:

  • If a course has fewer than five (5) enrolled students, we do not evaluate the course to protect student confidentiality. 
  • We do not evaluate independent study courses given the wide variability of content, format, and expectations. 
  • We evaluate graduate-level courses only in cases where one or more undergraduates are enrolled. 
  • We evaluate labs, recitations, discussion sections, and activity courses only at the request of the DUS and/or Chair.  
  • We do not evaluate House courses, except as directed by the Dean of Academic Affairs. 
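The eligibility rules above can be read as a simple decision procedure. The sketch below expresses them as a filter function; the `Course` structure and its field names are illustrative assumptions, not part of any actual Office of Assessment system.

```python
# Hypothetical sketch of the evaluation-eligibility rules listed above.
# The Course structure and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Course:
    enrollment: int
    independent_study: bool
    graduate_level: bool
    undergrads_enrolled: int
    lab_or_section: bool            # lab, recitation, discussion, or activity course
    requested_by_dus_or_chair: bool
    house_course: bool
    dean_directed: bool             # Dean of Academic Affairs override

def is_evaluated(c: Course) -> bool:
    if c.enrollment < 5:            # under five students: protect confidentiality
        return False
    if c.independent_study:         # too variable in content and format
        return False
    if c.graduate_level and c.undergrads_enrolled == 0:
        return False                # grad courses evaluated only with undergrads enrolled
    if c.lab_or_section and not c.requested_by_dus_or_chair:
        return False                # sections evaluated only by DUS/Chair request
    if c.house_course and not c.dean_directed:
        return False                # House courses only as directed by the Dean
    return True
```

Each `return False` corresponds to one bullet in the list above, applied in order.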



A general schedule for course evaluation administration is included in this document under Key Dates.  In general, and as of fall 2017, evaluations for students open about 17 days before the last day of class and close about 5 days after the last day of class.  This window will vary by term as the official university calendar shifts.[3]  Please watch for informational emails from the Office of Assessment explaining relevant dates and deadlines for the present term. 


Responsibilities of the academic program

A full description of program-level responsibilities for evaluations is listed on our website.[4]  At the request of the Office of Assessment, department personnel approve the final list of courses to be evaluated each term.  Department personnel bear no responsibility for the dissemination of questionnaires to students:  students access evaluations via DukeHub and receive notification emails from the office of the Dean of Academic Affairs.  That said, faculty are strongly urged to explain the value of course evaluations to students during each evaluation window; please reiterate this point to your faculty colleagues.  Recommendations to faculty are listed online.[5]  In the interest of monitoring and providing effective feedback about teaching and learning, department Chairs and DUSes are expected to access course evaluation reports regularly.



Course evaluation data can and should be used to inform teaching and planning.  We occasionally update the format and content of published reports to meet the evolving data needs of Trinity College departments, and to accommodate changes to the questionnaire itself. 

Tableau is a dynamic, interactive data visualization tool.[6]  Once a report is created and uploaded to Tableau, authorized users can manipulate its tables and figures to answer important questions about their course(s) or academic department(s).

If you are unsure if you have access to your reports, please consult the section in this document titled “Communication and access requests”.   A quick start guide is available and should address most immediate questions.[7]   For users who would like to explore the reports in greater depth, including screen shots of different report views, a comprehensive users’ guide also is available.[8]  Because all Tableau reports sit behind Duke’s IT firewall, users must be on the Duke network or logged in via a VPN.[9]

Code sheets, to help you translate the numeric values, are available online as well.[10]



Members of the Duke community with a valid NetID may access the Student Accessible Course Evaluation System [SACES] reports to view the course evaluation results for select courses.  Access to individual courses is available through the course search in DukeHub, and one may browse all courses directly within the Tableau platform.[11]  Instructors who choose not to opt in to the SACES system will not appear in these tables.  Also, because the SACES reports, like all Tableau reports, sit behind Duke’s IT firewall, users must be on the Duke network or logged in via a VPN.[12]

Some faculty may recall the now-obsolete Instructor Course Description Form, often called the “faculty form”.  This questionnaire asked instructors to provide information about the course pedagogy and expectations, in addition to the instructor’s decision to release course evaluation results to students (“opt-in”) or to keep them private (“opt-out”).   In the absence of this questionnaire, faculty now indicate their opt-in/opt-out preference on a Qualtrics web form:

SACES reports are not updated in real-time.  Assessment personnel update the reports at least once each semester, roughly one month before the start of bookbagging for the following term.

Appointments, Promotion, and Tenure [APT]

Several tables within the program’s course evaluation reports are required for the APT process.  The Office of Assessment provides full information in the APT section of our users’ guide.[13]  Individuals completing the course evaluation section of a dossier are encouraged to contact Faculty Affairs for guidance.[14] 


Supplemental evaluations of teaching and learning

Faculty and program personnel occasionally ask if additional questions can be appended to the standard Trinity College evaluation form.  In short, no, the form cannot be extended to include supplemental questions.  However, program personnel are welcome to develop complementary tools to solicit additional information from students. 

Many faculty wish to collect additional information about student learning, often at a middle point in the term.  Trinity College does not have a formal, universal process for mid-semester evaluations, but we have cultivated a handful of online resources and recommendations to guide faculty through this process.[15] 



Course evaluations are not the only report type published in Tableau.  We also publish information on the students graduating from your major, minor, and/or affiliated certificate program in the Program-level reports (student information) dashboard.  The purpose of this set of reports is to help you understand which students your program has served in the major, minor, and/or certificate; what these students do during their time at Duke; and, perhaps most importantly, how they score on Trinity College’s assessment of key competencies in the general education curriculum.

To elaborate on this last point, the College issues a handful of assessments to new first-year students in the summer before Orientation.  Of the approximately 1,700 new matriculants, one-third are asked to complete a test of ethical reasoning, one-third a test of global perspectives and engagement, and one-third a test of quantitative literacy and reasoning.  Participation is voluntary, and the tests are completed online before August Orientation.  In addition, a test of critical thinking is administered to a voluntary sample of students after their arrival on campus, shortly after the end of the add/drop period.  This is a proctored test, which cannot be administered online.

The results of these tests become a baseline measure for these competencies at matriculation.  Student participants are asked to re-take the same assessment in their senior year, roughly February – March, to help us determine whether and to what degree students have developed in each of these competency areas.  We understand that many factors influence students’ performance on these measures, and that they are best interpreted in conjunction with other sources of evidence. 

Although these data originally were intended for the assessment of the Trinity College general education – and continue to be used in this way – it also makes sense to provide relevant subsets of the data to program officers as well.  When you log into the Tableau reports, your credentials will determine which cases you are permitted to see:  specifically, students who graduated with a major, minor, and/or affiliated certificate in your program.  Login here:



The Office of Assessment developed a simplified dashboard, Enhancing Undergraduate Teaching and Learning, that provides each department with data about which students are taking its courses, the quality of teaching in those courses, and grade distributions.  These streamlined reports help answer questions pertaining to the first two years of an undergraduate student’s Duke experience, such as:


  • What is the student experience with the introduction to your field?
  • If your introductory courses are the only ones that a student takes in your field, then is it the experience that you would want them to have?
  • How do your introductory courses create a climate of inclusion and belonging?
  • If a student takes an introductory or service course in your discipline,
    • does it engage that student so that the student sees its importance and relevance?
    • does it stimulate a student intellectually so that the student’s intellectual capacities are deepened and broadened?


These dashboards should be read and utilized in conjunction with other information about student learning in your program.  Your contact in the Office of Assessment can help you make sense of and use these data sources effectively.



Portfolios are a well-established pedagogy and practice in higher education.  Originally conceived to showcase students’ signature work, portfolios now are understood to be a way of learning.  By structuring and making space for the process of discovery and integration, portfolios enable authentic, experiential, and contextualized learning.

The claim is that all learning is situated; there is no escaping that. Presenting knowledge that is out of context, as in a lecture about abstract concepts, runs against the way that humans, especially novice learners, learn.


[Portfolios] invite learners to see connections among their learning situations and to therefore see patterns and find meaning.


Excerpted from the Field Guide to Eportfolios[1]


Folio thinking[2],[3], as a pedagogy, includes the following essential principles, all of which contribute to self-understanding and meaning-making throughout students’ education.


Integrating learning across experiences

Collaborating with peers and colleagues

Envisioning assessment as a learning space and activity

Empowering self-evaluation

Interpreting creatively

Reflecting on learning over time




Faculty and program officers who would like to learn more about the pedagogy of portfolios or their implementation in a class or program are encouraged to visit Duke’s portfolio homepage:


[1] Batson, T., Coleman, K. S., Chen, H. L., Watson, C. E., Rhodes, T. L., & Harver, A. (Eds.). (2017). Field guide to eportfolio. Washington, DC: Association of American Colleges and Universities.

[2] Chen, H.L. (2004, January). Supporting individual folio learning: Folio thinking in practice.  Poster presented at the NLII Annual Meeting, San Diego, California.

[3] Lombardi, J.  (2008). To portfolio or not to portfolio: Helpful or hyped?  College Teaching 56 (1), 7-10.




Policy on the dissemination of student data

The Office of Assessment and the office of the Director of Academic Services and Systems recognize that much of the information we collect and use is of a fundamentally private nature. We make both implicit and explicit pledges of confidentiality to students and faculty.  The assessment community should release data in an intentional, purposive, and controlled manner that:

  • permits an appropriate level of disclosure,
  • ensures the timely receipt of data by interested parties,
  • enables accurate and meaningful interpretation and use, and
  • protects the confidentiality of faculty and students, and of Trinity College of Arts & Sciences (TCA&S) and the university overall.

Additionally, we conclude that:

  • The interests of data managers must be balanced with the university’s support for program-driven assessment. Programs’ access to data must be mediated by the need to safeguard data from possible misinterpretation.
  • The assessment community will promote and use a single source of data. The Office of Assessment, in collaboration with the office of the Director of Academic Services and Systems, will be the chief analysts and communicators of Trinity College-level data. 
  • The TCA&S Assessment Working Group will evaluate the proposed dissemination of aggregate or itemized data and provide guidance to the Dean of Academic Affairs as to the personnel and/or organizations that may receive or access data. 
  • No itemized data files including the following categories will be provided: ethnicity; grades, GPA, or rank; admissions markers (SAT, ACT, Reader Ratings); financial or financial aid data; and athletics data. The dissemination of aggregate reports using protected data categories necessitates special review by assessment personnel.

Data requests will be routed through members of the Trinity College Assessment Working Group, which will coordinate and provide guidance pertaining to data distribution, use, and interpretation.  Trinity College programs are welcome to consult with the Office of Institutional Research and/or the Office of the University Registrar regarding student data, but it is likely those groups will re-route data requests back to the Office of Assessment and the Director of Academic Services and Systems for Trinity College.  The full policy is available online.[1]

A form to officially request data is available online as well.[2]  The Office of Assessment, in cooperation with the Office of the Associate Dean for Information Systems & Data Management of Trinity College, uses this web form to document data needs, discuss the most appropriate operationalization of information, and inform data recipients of their responsibilities for appropriate usage.


Undergraduate Survey Policy

The Office of Assessment subscribes to and reinforces Duke’s Undergraduate Survey Policy[3], overseen by the Office of Institutional Research [IR].  Programs that wish to issue a survey to enrolled students must consult with IR prior to administration.  The consultation may include discussions of survey design, timing, and sampling.  The purpose of this discussion is to ensure that students and, in some cases, faculty and staff are not overwhelmed with surveys, and that the surveys issued within this learning community are rigorously designed and likely to yield usable information.


Research ethics and the Institutional Review Board

Because they involve human subjects, many assessment projects require review by Duke’s IRB.  The Office of Research Support and the campus IRB[4] provide clarification about the operational definition of human subjects research.


On the other hand, internally focused assessment activities intended to inform programming, policies, and/or curriculum development may not require IRB review.  As a general rule, the Office of Assessment suggests that researchers seek clarification and assistance from Campus IRB personnel if they may later pursue presentations or publications based on their assessment work.  Office of Assessment personnel and consultants from the Campus IRB can help you determine your level of responsibility.  Whether one seeks full or expedited IRB review, however, obtaining informed consent from student-participants is always a recommended practice. 



The Office of Assessment regularly sends announcements by email on a variety of topics including updates to the course evaluation process, upcoming workshops and information sessions, and deadlines for the Department Assessment Portfolio [DAP].  We base our email listserv on (a) the current list of department Chairs and program officers on file with the Dean of Academic Affairs, Trinity College and (b) the current list of assessment liaisons on file with the Senior Associate Dean for Academic Planning.  If you are unsure whether you are on the email listserv, please contact Jennifer Hill to inquire.


Every other October, the Office of Assessment initiates a College-wide audit of access to Tableau data reports.  The purpose of this audit is to confirm continued access among current program officers, and to rescind access for individuals who no longer serve in roles requiring access to sensitive program data.  An email will be sent to department Chairs itemizing the names and IDs of individuals who have access to Tableau reports on behalf of the department; Chairs are asked to review and possibly make updates to that cohort of authorized users.  Independent of the biennial audit of report access, program officers may request access for new users:


Each Trinity College department and certificate program has a designated contact[1] within the Office of Assessment.  This individual serves as your primary support and guide as you and your colleagues work through the process of assessment.  We expect these assignments to persist over time, so please take the opportunity to get to know and form a relationship with your assessment contact:




The Office of Assessment is one of a handful of groups at Duke creating, accessing, and using data.  Previously this document introduced (a) course evaluations, (b) Identity and experience of recent graduates, and (c) Undergraduate student learning experience as important sources of information about teaching and learning in your program.  You also may require or benefit from additional information supplied by one of our campus partners. 


Office of Institutional Research [IR]

At many institutions, assessment and institutional research are integrated into a single entity, perhaps titled Institutional Effectiveness or similar.  At Duke, the two offices are separate, with distinct reporting responsibilities, but they do communicate and collaborate frequently.  Generally speaking, assessment focuses internally on the study of student learning within our programs and courses, whereas institutional research focuses externally, reporting key information to external stakeholders and agencies.  There are substantial overlaps between the two groups’ access to data, analytic capabilities, and commitments to providing sound, evidence-based policy recommendations. 

The Office of Institutional Research[1] administers Duke’s primary undergraduate student surveys, as part of the Consortium on the Financing of Higher Education [COFHE], a body of Duke’s peer institutions.  The surveys individually focus on enrolled students, graduating seniors, and alumni.  Individual academic departments are invited to consult with the Director of IR to determine if any of these survey data can inform the study of teaching and learning in that department.


The Office of the University Registrar  

The Registrar is the custodian of student academic records at Duke University.  The Registrar has formal procedures for requesting student-level information[2]; however, given the strong partnership between the Office of Assessment and the Registrar, Registrar staff are likely to refer data requests from Trinity College academic programs to the Office of Assessment.  This practice is not intended to delay your access to relevant data, but to ensure that Trinity College has an opportunity to catalog and coordinate the use of student records data.  In many cases, data requested by Trinity academic programs already are available in online dashboards, and assessment personnel can help you locate, interpret, and use these data quickly and effectively.


How to request data

The Registrar is the primary clearinghouse for requests of student, faculty, and program data.[3]  As noted in the previous section, it is not inappropriate to begin the request process with the Registrar; however, the Office of Assessment and the Registrar have mutually determined that requests for data on undergraduates or the undergraduate experience in Trinity College or the Pratt School of Engineering will be routed to the Office of the Associate Dean for Information Systems & Data Management of Trinity College.  To make your request as efficient as possible, consider consulting with the Associate Dean for Information Systems & Data Management or with an Office of Assessment staff member for guidance.  We can help you determine (a) which data are likely to answer your question, (b) how the data should be operationalized and queried, and (c) which restrictions, if any, may apply to your request.  Please also see the previous section in this document, “Policy on the dissemination of student data.” 


Consider exploring the multiple data sources available to you in Tableau:  (a) course evaluations, (b) Identity and experience of recent graduates, and (c) Undergraduate student learning experience.  These were constructed to provide immediate access to a variety of student and program data, and to expedite and streamline your work.



The following institutes, programs, and offices support the work of assessment, both directly and indirectly, and often provide guidance for individuals studying the process of teaching and learning.  Several are described elsewhere in this document.

Social Science Research Institute [SSRI]

Office of the University Registrar

Office of Institutional Research

Learning Innovations, formerly the Center for Instructional Technology

Provost’s Office of Faculty Affairs

Office of Research Support, and the Duke Institutional Review Board

Data and Visualization Services (Duke Libraries)

Division of Student Affairs:  Assessment & Research




Tableau.  Of all of the enterprise web services available at Duke, the only one that is essential for program leaders is Tableau.  Tableau is a dynamic data visualization platform, and it houses Trinity College’s reports on course evaluations, Identity and experience of recent graduates, and Undergraduate student learning experience.  It is important that program officers check their access to Tableau early in their tenure to ensure ready access to essential reports.  See the sections on Communication and/or Getting Help, or contact the Office of Assessment directly.

Qualtrics.  Qualtrics is Duke’s enterprise survey software.  It is preferable to other third-party platforms (e.g., SurveyMonkey) because one can enable authentication and respondent identification through Shibboleth.  OIT supports and publishes help guides for new users of Qualtrics[1], and Qualtrics provides ample help through the Qualtrics Community.[2]


PebblePad.  PebblePad is Duke’s enterprise portfolio platform.  This is the space in which program leaders complete the Department Assessment Portfolio [DAP].  For courses, it simplifies the process of designing and administering learning portfolios.  For students, it enables the development of organic, authentic representations of learning, which can be kept private or posted publicly to the Internet.  For more information about PebblePad, see portfolio@Duke.[3]


NVivo.  NVivo is a qualitative data management and analysis tool.  It helps researchers maintain, organize, and analyze text, audio, and video sources, and use analytic techniques (e.g., coding) to find patterns and themes in unstructured materials.  It is available for free through OIT.[4]


SAS, SPSS, STATA, R, MATLAB.  The Office of Assessment is a SAS office:  we use desktop SAS to manage data and run quantitative analyses.  We have some expertise in STATA and SPSS.  All applications are available (at variable cost) through OIT.[5]  Researchers are encouraged to seek statistical and data management support through SSRI[6] or Data and Visualization Services (Duke Libraries).[7]




  • Confirm that you have access to Tableau reports for course evaluations and information about students affiliated with your program.
  • Determine who needs access to departmental data. Watch for the biennial notification of the next access audit, every other October.
  • Make sure that the designated staff member in your department checks and approves the list of courses that require evaluation each term.  Watch for emails from the Office of Assessment.
  • Confirm with assessment personnel that we have you on our list of Chairs, and that your email has been added to our listserv.
  • Support and encourage the assessment efforts of the assessment and curricular liaisons.

Director of Undergraduate Studies and/or Assessment liaison

  • Identify your designated liaison from among the staff in the Office of Assessment.[1]  Schedule an initial one-on-one consultation to discuss plans for the assessment of teaching and learning in your program.
  • Confirm that you can access the current version of the Department Assessment Portfolio.  Request a Quick Start guide from your designated liaison in the Office of Assessment. 
  • Obtain and review assessment feedback from the previous year’s Department Assessment Portfolio [DAP].
  • Obtain a copy of the program’s assessment plan from your predecessor, if applicable.
  • Review and understand your program’s current Student Learning Outcomes [SLOs].  Prepare to discuss them with faculty colleagues. 
  • Confirm with assessment personnel that we have you on our list of DUSes/Liaisons, and that your email has been added to our listserv.
  • Schedule a meeting with your department’s curricular liaison early in the academic year to coordinate and communicate assessment efforts for the coming year.


Curricular liaison

  • Identify your designated liaison from among the staff in the Office of Assessment.[2] 
  • Ask your department’s assessment liaison to give you read/write access to the Department Assessment Portfolio [DAP].  You and your colleague can contact your assigned assessment liaison for help if you wish.
  • Confirm that you have access to Tableau reports for course evaluations and information about students affiliated with your program.
  • Schedule a meeting with your department’s assessment liaison early in the academic year to coordinate and communicate assessment efforts for the coming year.





Fall term


Initial assessment consultations begin.  Each program is assigned an Office of Assessment staff liaison, with whom they will discuss assessment planning for that year. 

October/November (Biennial)

Tableau access audit:  Office of Assessment distributes to each Program Chair and Certificate Director a list of the individuals who have access to confidential data about students in that program.  The Chair is asked to review, confirm, and/or update this list to maintain security and confidentiality of student information.


The Office of Assessment sends program officers the proposed list of courses to be evaluated in the fall term.  Corrections and updates are due to the Office of Assessment by the end of October.  No late changes or course additions will be accepted after the published deadline.

November 1

Any necessary updates about the fall course evaluation process will be sent to Chairs, DUSes, and staff assistants via our assessment listserv.  Updates may include new information about the evaluation questionnaire, deadlines, or reporting processes.

Third week in November – mid-December

Course evaluations are open for completion by students.  Specific dates vary by term.

Third week in December

Course evaluation reports for the fall semester are published in Tableau.  Because course evaluations for Study Away courses have variable end dates, reports may be appended with new data throughout the following month. 


Spring term

Early March

The Office of Assessment sends program officers the proposed list of courses to be evaluated in the spring term.  Corrections and updates are due to the Office of Assessment by the middle of March.  No late changes or course additions will be accepted after the published deadline.


Any necessary updates about the spring course evaluation process will be sent to Chairs, DUSes, and staff assistants via our assessment listserv.  Updates may include new information about the evaluation questionnaire, deadlines, or reporting processes.

Mid-April – Early May

Course evaluations are open for completion by students.  Specific dates vary by term.

Second week in May

Course evaluation reports for the spring semester are published in Tableau.  Because course evaluations for Study Away courses have variable end dates, reports may be appended with new data throughout the following month. 



June 1 of each calendar year

The Department Assessment Portfolio [DAP] must be complete and ready for review by the Office of Assessment.  The submitted portfolio will represent assessment activities and findings from the immediately preceding academic year.

August of each calendar year

The Office of Assessment releases written feedback on the Department Assessment Portfolio [DAP].  The email notification will include instructions for access and for scheduling follow-up conversations.




The Office of Assessment maintains a comprehensive list of relevant informational resources on our website.[1]  These include well-regarded national and regional organizations that lead and support best practices in assessment, as well as individual institutions and programs that provide quality examples of a variety of assessment activities and methodologies.  A brief bibliography of important scholarship in the field of assessment is listed below.


Cambridge, D., Cambridge, B. L., & Yancey, K. B. (Eds.). (2009). Electronic portfolios 2.0: Emergent research on implementation and impact. Stylus Publishing, LLC.


Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. In T. W. Banta & Associates (Eds.), Building a scholarship of assessment (pp. 3-25). San Francisco, CA: Jossey-Bass.


Kuh, G. D. (2008). Excerpt from high-impact educational practices: What they are, who has access to them, and why they matter. Association of American Colleges and Universities.


Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. L. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in US colleges and universities. Urbana, IL: National Institute for Learning Outcomes Assessment.


Maki, P. L. (2012). Assessing for learning: Building a sustainable commitment across the institution. Stylus Publishing, LLC.


Palomba, C. A., & Banta, T. W. (1999). Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. Higher and Adult Education Series. Jossey-Bass.


Suskie, L. (2018). Assessing student learning: A common sense guide. John Wiley & Sons.


Walvoord, B. E. (2010). Assessment clear and simple: A practical guide for institutions, departments, and general education. John Wiley & Sons.



External web resources

Association of American Colleges & Universities [AACU]

Liberal Education & America’s Promise [LEAP] Initiative

Valid Assessment of Learning in Undergraduate Education [VALUE] Project

Project Kaleidoscope [PKAL]

Association for Authentic, Experiential, and Evidence-Based Learning [AAEEBL]

National Institute for Learning Outcomes Assessment [NILOA]

Consortium on the Financing of Higher Education [COFHE]

Teagle Foundation

Spencer Foundation

Lumina Foundation