The Smarter Balanced Assessment Consortium offers universal embedded online tools that improve
accessibility for all students, several embedded designated supports that improve accessibility for
ELs, and accommodations for students with disabilities as required by their individualized education
program (IEP) or 504 plan (Smarter Balanced 2013b). Examples of designated supports, depending
on the type of assessment, include bilingual glossaries, translated test directions, and text-to-speech
features. The type of support useful to ELs varies depending on the student’s age and level of English
language proficiency, as well as the subject area assessed, type of assessment task, and other factors.
(See Usability, Accessibility, and Accommodations Guidelines from Smarter Balanced 2014 located
at http://www.smarterbalanced.org/wordpress/wp-content/uploads/2014/08/SmarterBalanced_
Guidelines.pdf.)
Additional Methods of Medium- and Long-Cycle Assessment
Additional methods for evaluating student achievement in medium or long cycles include rubrics
and student portfolios.
Rubrics
Performance assessments that require students to demonstrate learning through an oral, written,
or multimodal performance task (e.g., a presentation, a report) can be evaluated according to a
rubric. A commonly accepted definition of a rubric is a document that articulates the expectations for
an assignment by listing the criteria, or what counts, and describing levels of quality (Andrade and
others 2009). Criteria relate to the learning that students are being asked to demonstrate rather than
the tasks themselves, and they should provide clear descriptions of performance across a continuum
of quality (Brookhart 2013). The criteria are linked to standards and reflect what is required to meet a
specific standard or cluster of standards.
Descriptions of performance are usually presented within score levels, and the number of score
levels depends on the extent to which criteria across the levels can distinguish among varying
degrees of understanding and skills. The knowledge and skills at one level differ distinctively from
those at other levels (Lane 2013). Commercially produced performance assessments used for high-stakes
assessment purposes (e.g., placement or end-of-year grades) should provide evidence of their
technical quality. (See the section on technical quality in this chapter.) Examples of evidence include
review by language and literacy experts, review to ensure cultural and language sensitivity, and field
tests that demonstrate that the rubric differentiates performance across levels of the rubric and across
grades.
For classroom assessment, in situations where stakes are not so high, teachers—sometimes
in collaboration with students—can develop rubrics for performance assessments. Co-construction
of rubrics with students is a powerful way to build student understanding and acceptance of
expectations. When creating rubrics, three points are important. First, rubrics should express as
clearly and concisely as possible the expected performance at each level; it is therefore important
to avoid ambiguous language. Before the rubric is used, its language should be explained to students.
Second, expectations are communicated through non-pejorative descriptions of what performance looks
like at each level, reflecting a growth mindset. Third, the gradations of quality are specifically
articulated across levels. Figure 8.6 presents an example of a rubric for scoring an essay. The
dimensions of the rubric are listed on the left-hand side, and the criteria are clearly described
across four levels of performance along the top.