Needs Assessment
The assessments used prior to Phase I were students’ self-reported appraisals of personal language learning needs and attitudes toward listening comprehension (via an open-ended survey that was part of the course needs analysis), discussions with my mentor teacher, classroom observations, and the Metacognitive Awareness Listening Questionnaire (MALQ). The questionnaire, developed by Vandergrift et al. (2006), measures the four recursive processes of metacognitive listening skill: planning, monitoring, evaluation, and problem solving.
Student Self-Report. As part of the course needs assessment, students filled out an open-ended questionnaire with questions on their assessment of personal goals or needs for improving English speaking, pronunciation, and listening. It also included sections for writing learning goals, estimated time to be spent studying, and overall semester goals. The questions related to listening were the following:
- Please list 3-5 specific reasons why you need to improve your listening skills. In other words, what situations in your real life (currently or in the future) require listening skills?
- What do you consider your strengths and weaknesses in your listening skills currently? Please explain.
- How do you know that you have understood what you have listened to? How do you know when you haven’t understood?
- What do you do when you realize that you haven’t understood what you have listened to?
The strengths and weaknesses in listening comprehension that students reported were also coded and grouped according to theme. There was a total of fourteen reported strengths categorized as follows: understanding main ideas (4), comprehension during interpersonal communication (6), and enjoying learning or learning quickly to improve listening (4). There were twenty-four weaknesses reported that were grouped into four themes: limited vocabulary (9), losing attention when the content was too difficult (3), difficulty understanding academic English (5), and keeping up with the speed of native English speakers (7).
The remaining questions yielded less relevant information. Generally, students reported that they knew when they did not understand when they felt confused, and asked for repetition when they realized that they had not understood. After analyzing the responses, I realized that I had failed to specify the context for responses – I had wanted to know how they dealt with listening in an academic context – and students had mostly responded referring to interpersonal communication in their daily lives.
Overall, the picture created by the responses to the survey revealed that students felt they needed to use English for daily life and for academic and career purposes. Further, their reported weaknesses – vocabulary, academic English, and the speed of native speakers – corresponded to their perceived needs. These findings were encouraging in that students’ perceived needs fell generally within the objectives for the course. I had the opportunity to talk with students more about their responses during two informal conversations that occurred one week after the needs assessment survey was completed. Both confirmed that students experienced difficulty in keeping up with the speed of native speakers. The first conversation was with a group of four students before class started. I asked them what they thought would be their greatest challenges with listening after completing the Listening and Speaking course. One student replied that she was afraid of going to classes with native speakers in which professors or other students would not slow down for non-native speakers; the other students agreed that it would be a difficult transition. In another conversation with a female student that occurred after class, I asked what other classes she was taking. She told me that she had been in a computer programming class for two weeks. She felt comfortable with the math and coding assignments, but whenever the professor lectured or gave directions for the labs she felt completely confused. She said, “I was just like monkey – only click, click, and follow what other students do. Even when professor sat with me to help me, I could not understand him. He talked so fast, fast. I just feel so angry with myself – why I can’t understand? So I gave up the class.”
Mentor Teacher Consultation. Since I was new to the listening and speaking course, my mentor teacher’s experience and insight helped me understand the broader context of the students’ needs. Through conversations with her, I learned that the students who had come from the level two Listening and Speaking course (all but one student) had been using the Pathways 2: Listening, Speaking & Critical Thinking textbook (Chase & Johannsen, 2012), meaning they would be skipping an entire level in moving to the Pathways 4: Listening, Speaking & Critical Thinking textbook (McIntyre, 2012) for our course. Although students in the past had been able to deal with the change, they struggled at first with the more complex listening content and vocabulary. The reason for choosing a more advanced text was to prepare students for classes outside the ESOL program. My mentor teacher perceived that even though the transition from Pathways 2 to Pathways 4 would be challenging, the adjustment to unsupported classroom learning it was preparing them for was more significant. In essence, the level three Listening and Speaking course was the last chance to focus specifically on listening skills for mainstream classes. Thus, improving listening comprehension was a high priority for the course.
Metacognitive Awareness Listening Questionnaire. In order to gain insight into students’ listening strategy use – specifically their metacognitive awareness – I chose the MALQ because it was designed precisely for this purpose. The questionnaire was developed by Vandergrift, Goh, Mareschal, and Tafaghodtari (2006) to assess second language learners’ “metacognitive awareness and perceived use of strategies while listening to oral texts” (p. 449). The questionnaire consists of twenty-one statements relating to five factors, which represent groups of strategies listeners may use during the listening process: problem-solving, planning and evaluation, mental translation, person knowledge, and directed attention. An example statement from the problem-solving factor, “I use the words I understand to guess the meaning of words I don’t understand,” is typical of the questionnaire. For each statement, students mark their degree of agreement or disagreement on a scale of one to six, from strongly disagree (1) to strongly agree (6).
Prior to filling out the questionnaire, students took a listening diagnostic test administered by the course instructor which was similar to the listening section of the course tests. Students were informed that the diagnostic was only to provide information on their current level of listening comprehension and note-taking and would not be graded. Students were given a sheet of paper for note-taking and told that they would listen to an audio recording twice, then answer multiple choice and short answer questions on the content. Immediately following completion of the listening diagnostic, students took the MALQ. The primary reason for this order was to help students think of academic listening as they filled out the questionnaire. Before students began, I informed them that although the MALQ was part of the course listening content, it was not graded and they should answer as accurately and honestly as possible. Then I explained the questionnaire and the directions verbally and demonstrated how to answer the example question. Students were free to ask questions for clarification as they completed the survey.
The results of the survey were analyzed by summing the numerical values of reported answers within each MALQ skill grouping (see Appendix A for full results). The developers have not made a score sheet or interpretation rubric available for the MALQ, so scores were interpreted on the assumption that a high total within a category indicates high awareness or use of the strategies in that grouping, while a low total indicates low awareness or use. There are two exceptions. In the person knowledge category, which is related to self-efficacy and anxiety and is described in greater detail below, an aggregate of reported scores is not representative of strategy use. The other exception is mental translation, the only ineffective strategy included in the MALQ: reliance on translation during listening is not associated with efficient, skilled listening, and it is included as a strategy listeners must learn to avoid. Higher scores in this category therefore represent a less-skilled approach to listening.
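As a rough illustration of this scoring approach (the developers publish no score sheet, so the item groupings below follow the five factors reported in Vandergrift et al., 2006, and the function itself is my own sketch, not an official procedure):

```python
# Hypothetical scoring sketch: sum the 1-6 Likert responses within each
# MALQ category. Item numbers follow the factor groupings reported in
# Vandergrift et al. (2006).
CATEGORIES = {
    "planning_evaluation": [1, 10, 14, 20, 21],
    "directed_attention": [2, 6, 12, 16],
    "person_knowledge": [3, 8, 15],
    "mental_translation": [4, 11, 18],
    "problem_solving": [5, 7, 9, 13, 17, 19],
}

def category_scores(responses):
    """responses: dict mapping MALQ item number (1-21) to a 1-6 rating."""
    return {cat: sum(responses[item] for item in items)
            for cat, items in CATEGORIES.items()}
```

For example, a respondent who marked every item “slightly agree” (4) would total 20 for planning and evaluation (five items) but only 12 for mental translation (three items), which is why the raw category totals are not directly comparable.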
Results indicated that students were more likely to employ strategies in the planning and evaluation and problem-solving categories. Considering each question individually provided greater insight into the specific strategies students underutilized. For example, in the planning and evaluation category, an item of note was question 21, “I have a goal in mind as I listen,” to which fifteen students responded in agreement and only three reported that they slightly disagreed. Responses to question 10 in this category, “Before I start listening, I think of similar texts that I may have listened to,” indicated that fewer students utilized this strategy, with 50% of responses in agreement and 50% in disagreement.
In the strategy area of directed attention, students reported the lowest degree of agreement overall. Looking at individual responses, a clearer picture emerged of how students attended during listening. Of note is question 16, “When I have difficulty understanding what I hear, I give up and stop listening.” Although 61% of students responded in disagreement, 33% gave an answer of five (agree), indicating that they tended to give up when the content was difficult.
High numbers in the mental translation skill area indicated frequent reliance on translation during listening, which is not an effective strategy beyond beginning-level L2 acquisition; as students gain higher L2 proficiency, they must learn to avoid mental translation (Vandergrift et al., 2006). However, closer examination of the three questions in this category revealed that students’ reported use of mental translation was not unequivocal. For the first question in the category (question 4), “I translate in my head as I listen,” 50% of students agreed. For the second (question 11), “I translate key words in my head as I listen,” 67% agreed to some degree. However, the final question, “I translate word by word as I listen,” revealed that students were using translation differently; only 39% agreed. For this sample, these scores may indicate that listeners translated content words with which they were less familiar, but fewer relied on translation as a regular strategy. Only one respondent, S04, reported no use of translation, with a total score of three.
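The item-level percentages reported here can be reproduced by counting responses of four or above (“slightly agree” through “strongly agree”) as agreement. A minimal sketch, using invented responses rather than actual class data:

```python
# Illustrative sketch: an item's "agreement" percentage counts Likert
# responses of 4-6 (slightly agree to strongly agree). The responses
# below are invented for demonstration, not actual student data.
def percent_agree(item_responses):
    agree = sum(1 for r in item_responses if r >= 4)
    return round(100 * agree / len(item_responses))

sample = [5, 6, 4, 2, 3, 4, 5, 1, 6, 4, 2, 5, 3, 4, 6, 2, 5, 3]  # 18 students
print(percent_agree(sample))  # 11 of 18 at 4 or above -> prints 61
```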
In addition to analyzing the MALQ strategy categories individually and comparing question responses within categories, I wanted to see if patterns emerged from comparison across categories. In order to do so, I converted the category scores into percentages since categories had different numbers of questions. In the chart below (Figure 5), categories are displayed by respondent. When I first examined the scores, it was difficult to detect a specific pattern. I met with a critical friend who asked me how I had organized the respondents. I realized that I had not considered grouping respondents differently to look for trends. With the critical friend, I made an informal analysis of student proficiency based on performance on listening tasks, the first unit test, and general listening behavior in class. This resulted in rough categories of higher proficiency (S01-S08), intermediate (S09-S16), and lower proficiency (S17 and S18).
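The percentage conversion described above can be sketched as dividing each category total by its maximum possible score, six points per item; the item counts below follow the factor groupings in Vandergrift et al. (2006), and the function is my own illustration:

```python
# Sketch of the percentage conversion described above: each category
# total is divided by its maximum possible score (6 points per item)
# so that categories with different numbers of items are comparable.
ITEMS_PER_CATEGORY = {
    "planning_evaluation": 5,
    "directed_attention": 4,
    "person_knowledge": 3,
    "mental_translation": 3,
    "problem_solving": 6,
}

def to_percentages(category_totals):
    """category_totals: dict of category name -> summed Likert responses."""
    return {cat: round(100 * total / (6 * ITEMS_PER_CATEGORY[cat]), 1)
            for cat, total in category_totals.items()}
```

For instance, a mental translation total of 9 out of a possible 18 and a directed attention total of 12 out of a possible 24 both convert to 50%, even though the raw sums differ.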
Next, I set aside the two categories in which students reported higher awareness or use, planning and evaluation and problem-solving, in order to examine directed attention and mental translation. These were the areas I wanted to target in the intervention since, in combination, they indicated two areas where students could improve. This comparison revealed a distinctive pattern across proficiency levels. In Figure 6, category scores are displayed as percentages with trend lines added to emphasize the disparity in reliance on attention versus translation. Students with higher language proficiency appeared to rely more on attention than translation, whereas lower proficiency students relied more on mental translation. I wondered if it would be possible to help students utilize attention strategies and decrease their reliance on mental translation.
From this variety of data, an intervention supporting students’ development of listening skills, specifically metacognitive listening strategies, seemed appropriate within the context and time frame of the action research project.