Testing Lecture Comprehension Through Listening-to-summarize Cloze Tasks
authentic oral discourse, etc., we need to provide test-takers with lectures containing those authentic spoken language characteristics. It is also worth mentioning that since a lecture is a “planned, message-oriented” discourse (Hansen 1994: 246), the main function of a lecture is to deliver knowledge or information to the audience; hence a lecture is assumed to be a monologue with little or no interaction between the speaker and the listener. Test input in the form of a monologue lecture satisfies situational authenticity in this regard.
In line with the significance of the interaction between test tasks and test-takers’ abilities, Bachman and Palmer (1996) reconstructed the concept of “interactiveness” from the original “interactive authenticity”. Interactiveness is defined as “the extent and type of involvement of the test taker’s individual characteristics in accomplishing a test task” (Bachman and Palmer 1996: 25). Making the interaction between the test-taker and the test task close to that between the language user and the task in the target-language use domain poses a real challenge if we embark on a task-based construct of academic listening. Zou (2004) emphasized three aspects for improving interactiveness in listening tests: (1) oral discourse features in the listening input; (2) listening test item writing; and (3) test format, of which test format is the most important, because its impact on test-takers’ responses ultimately determines the interactiveness of the test.
To ensure the quality of test items, test designers can follow the five steps of listening test item development (Zou 2004: 35):



  1. Select appropriate materials for the listening test and modify them if necessary;

  2. Read aloud the potential script, or record it beforehand and play it to a group of
    item writers;

  3. Ask item writers to take notes of important details/points while listening to it;

  4. Compare notes and identify those points on consensus;

  5. Locate information units and write items on them according to text size, test
    difficulty level and even distribution of tested information units in the listening
    material.
Writing items via the acoustic channel can reflect cognitive processes typical of listening comprehension and ensure the interactiveness of the listening test (Zou 2004: 35). To compare different test formats in terms of their influence on students’ performance, Zou (2004) exemplified three formats: the first, an MCQ format that only required students to identify what was heard; the second, a gap-filling format that required students to comprehend relevant information and fill in the key words; the third, a short-answer format that required students to comprehend and summarize the discourse within five sentences. Obviously, the three aforementioned formats target different skills and strategies, defining different degrees of interactiveness.

Traditionally, MCQ and gap-filling tasks in listening tests guarantee a satisfactory level of reliability, but they have also been criticized for not sufficiently representing the listening construct. MCQ, for example, involves different skills, including those especially tailored to the test format. People believe proper training


3.3 Task-Based Construct 29