of the test-taking strategies specifically targeting MCQ test format can improve test
performance (Powers and Rock 1999; Dude and Duckworth 2002). Wu (1998)
found that the MCQ test format favored more advanced listeners, that misinterpretations
of the options led to wrong choices, and that, occasionally, test-takers' choice of the
correct answer resulted from incorrect reasoning. Field's (2012: 417) verbal protocol analysis
indicated that "MCQ items promote a process of checking information against
pre-established cues rather than the more ecological one of receiving, interpreting
and organizing it". In short, it remains doubtful whether the MCQ format really tests
test-takers' comprehension in listening tests.
As for the listening-recall gap-filling task, it might only test the skill of
recognizing the sound of a word rather than grasping its meaning. Anderson (1972)
argued that texts can be processed on a perceptual or phonological level without
bringing the meaning to mind. Evidently, test-takers' writing down the correct words
to fill in those blanks on the basis of sound recognition alone does not demonstrate
comprehension. Anderson's argument makes it clear that unless cognitive evidence of
semantic processing is provided, a test format cannot be claimed to test comprehension.
To prevent test-takers from picking an option or filling in a gap with a word without
really understanding the meaning of the discourse, the gap-filling task on summaries
has emerged as a test format. Test-takers must fill in gaps in a summary derived from
the original recorded passage. That is to say, test-takers' response format is
different from the original input, and they are forced to understand the content
before filling in the gaps. Buck (1992) conducted a study with
gap-filling on summaries as a test format and found it quite reliable. The underlying
argument is similar: different test formats are calibrated for different levels of
skills, strategies and cognitive processes.
To conclude, this section has presented two approaches to investigating the academic
listening construct, i.e., the approach toward the competence-based construct and
that toward the task-based construct. Recent developments in L2 listening
comprehension research have witnessed the integration of perspectives from cognitive
psychology. The latest studies on cognitive processes, especially higher-level
processing in academic listening, can compensate for conventional result-oriented
research approaches that focus mainly on listening sub-skills and strategies. A
deeper understanding of the academic listening competence underlying language
users' academic listening performance also poses new challenges to task-based
construct development. Apart from ensuring the authenticity of academic listening
test input, interactiveness between the test-taker and the test task should also be
enhanced if we aim to measure test-takers' higher levels of listening
comprehension. Practically, both the competence-based and the task-based approaches
have their strengths and weaknesses and, taken together, each can "compensate for
the weakness of the other" (Buck 2001: 109). We need to make full use of both
approaches and develop test tasks that both require the relevant abilities and match
critical target-language use tasks if we want to develop a valid academic
listening test.
30 3 Approaches to Assessment of Lecture Comprehension