The Influence of Accent Familiarity on Rater Behavior in L2 Speech Assessment


INTRODUCTION

Rater training has long been considered the panacea for variance in speech assessment ratings. For decades, large-scale language testing organizations have used human raters, specifically native or first language (L1) speakers, as the standard measurement tool for evaluating second language (L2) oral proficiency. However, nuances in human behavior and construct-irrelevant factors need to be taken into account to ensure that test raters are making fair and reliable evaluations. During the late 1980s, language assessment researchers began to explore the influence of rater characteristics on rater performance. This interest gave rise to further investigations of effects on rater behavior in the 1990s (Bachman, 2000). Chalhoub-Deville (1996) reported that “rater groups vary in expectation and evaluation from task to task” (p. 66). At the center of the discussion on L2 rater performance and training is the impact of raters’ background characteristics as rater effects. Rater effects are factors that contribute to undesirable test score variance. Some researchers have categorized these “effects” as rater severity/leniency, rater type, and experience, as well as rater bias (Myford & Wolfe, 2003; Yan, 2014).


To date, language testing research continues to examine the impact of rater background factors and the risks they may pose to test reliability and validity. In the domain of L2 speech assessment, increasing attention has been given to raters’ language background, with particular attention to raters’ familiarity with L2 accented speech. Several studies have examined the interaction between raters’ accent familiarity and their perception of L2 speech as well as their rating behavior. However, the results of these studies have been inconclusive. Some studies have demonstrated an influence of accent familiarity on rater judgments (Carey, Mannell, & Dunn, 2011; Winke, Gass, & Myford, 2013) whereas other studies have indicated that accent familiarity has no effect on speech assessment scores (Huang, 2013; Xi & Mollaun, 2009). These conflicting results highlight the complexity of the issue and emphasize the need to determine whether rater accent familiarity is an actual risk to test validity and reliability.

Bearing this in mind, the primary goal of this paper is to investigate the influence of accent familiarity on rater behavior in L2 speech assessment. The secondary goal is to determine whether rater training can mitigate the effects of raters’ accent familiarity on rater judgment and behavior. The paper will begin with a theoretical discussion of accent and accent familiarity in research, followed by a review of the literature surrounding the influence of accent familiarity and rater training. Finally, the paper will conclude with a summary of the research findings and a discussion of the implications for language test raters and training programs, as well as suggestions for future research.

THEORETICAL BACKGROUND

What is an accent?

Before we can address the impact of accent familiarity, it is crucial that we underline the theoretical constructs that are central to this discussion. According to Derwing and Munro (2009), “listeners are amazingly sensitive to the presence or absence of a foreign accent” (p. 477). That is to say that a speaker’s accent can impact listening comprehension in a multitude of ways. In spite of the wealth of research on L2 accented speech, there is a lack of consensus on how best to define accent. Currently, the proposed theoretical definitions for accent range from the dialectal to the phonological. Some researchers have described accent as the extent to which speech differs from a local variety of English and the perceived influence of that difference on listeners’ comprehension (Derwing & Munro, 2009; Ockey & French, 2016). Accent has also been defined as variation in aspects of pronunciation, such as vowel and consonant sounds, stress, rhythm, and intonation (Harding, 2011). Nevertheless, a distinction can be drawn between these two conceptualizations: the first is sociolinguistic in nature and presumes that communication is occurring in an English-dominant or diglossic environment, whereas the latter is purely linguistic and refers to the transfer of linguistic patterns from an L1 to an L2. Drawing from these definitions, it can be said that accent comprises both linguistic and sociolinguistic dimensions. What we know about L2 accented speech, however, is largely based on listeners’ perceptions and evaluations of L2 speech (Derwing & Munro, 2009; Kang & Ginther, 2017).

Much of the work on L2 speech perception has explored three underlying constructs: (a) comprehensibility, (b) accentedness, and (c) intelligibility, through listener judgments using a rating or measurement scale (Derwing & Munro, 2009; Gass & Varonis, 1984; Kennedy & Trofimovich, 2008; Kraut & Wulff, 2013; Ockey & French, 2016; Saito, Trofimovich, & Isaacs, 2015). Comprehensibility refers to the perceived effort required to understand L2 speech, and accentedness refers to the perceived strength of a speaker’s accent. Accentedness has also been described by some researchers as a measure of “nativelikeness” (Saito, Trofimovich, & Isaacs, 2015). Intelligibility, on the other hand, is not as easy to explain. Derwing and Munro (2009) define intelligibility as “how much the listener actually understands” (p. 480). Saito, Trofimovich, and Isaacs (2015) cited research that views comprehensibility as part and parcel of intelligibility. Commonly, researchers have measured intelligibility through the amount of speech that listeners can accurately transcribe after hearing a speech stimulus. However, accurate transcriptions of L2 speech do not completely illustrate a listener’s perception of L2 speech (Derwing & Munro, 2009).
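
To make the transcription-based measure concrete, here is a minimal sketch of how intelligibility is often scored: the proportion of spoken words that a listener's transcription recovers. The function name, the sample sentences, and the simple multiset-overlap scoring choice are illustrative assumptions, not the protocol of any study cited here.

```python
# Minimal sketch: intelligibility as the share of spoken words recovered in a
# listener's transcription. Scoring choice (position-insensitive multiset
# overlap) and the sample sentences are illustrative assumptions only.
from collections import Counter

def intelligibility_score(spoken: str, transcribed: str) -> float:
    """Return the proportion of spoken words present in the transcription."""
    spoken_counts = Counter(spoken.lower().split())
    heard_counts = Counter(transcribed.lower().split())
    overlap = sum((spoken_counts & heard_counts).values())  # min count per word
    return overlap / sum(spoken_counts.values())

# "nine" misheard as "night": 6 of 7 words recovered -> ~0.86
print(intelligibility_score("the library closes at nine on weekdays",
                            "the library closes at night on weekdays"))
```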

Following previous studies on speech perception, Saito et al. (2015) relied on listener judgments from native English speakers to reveal the linguistic dimensions that impact L2 comprehensibility and accentedness. The results from this study affirmed that comprehensibility and accentedness are interconnected yet distinct constructs (Derwing & Munro, 2009). This was demonstrated by the fact that measures of accentedness were greatly influenced by aspects of pronunciation (i.e., speech rate, vowel/consonant errors, and word stress) with little to no effect from lexicogrammar variables, whereas comprehensibility was influenced by a combination of lexicogrammar and pronunciation variables. To elucidate the differences between all three constructs, Derwing and Munro (2009) revisited their accented-speech research spanning decades, in which they examined the associations between accentedness, comprehensibility, and intelligibility. They found that listeners may judge speakers with heavy accents to be highly intelligible, though the reverse does not hold, deeming the two constructs “partially independent” (p. 479). Further, they found that comprehensibility had a closer relationship with intelligibility than accentedness did.


What is accent familiarity?

While it is commonly accepted that a speaker’s accent, in concert with a listener’s perception, may impede overall comprehension of what is heard, a listener’s familiarity with an accent can also aid comprehension. In an early investigation, Gass and Varonis (1984) hypothesized that four variables of familiarity could increase the comprehensibility of nonnative English speakers. It should be noted that the authors actually examined intelligibility, although they use the term comprehensibility throughout the study. Their results suggest that familiarity with nonnative accents “generally facilitates comprehension” whereas familiarity with the topic being discussed “greatly facilitates comprehension” (p. 81). This study has since become the foundation for work on the impact of accent familiarity.

Kennedy and Trofimovich (2008) attempted to address the methodological shortcomings of the Gass and Varonis study by increasing the number of participants and speech samples used to investigate the effects of listener familiarity on the comprehensibility, accentedness, and intelligibility of L2 speech. Their findings with regard to accent familiarity indicated that greater familiarity with L2 speech led to greater intelligibility. On the other hand, familiarity with L2 speech did not exhibit significant effects on comprehensibility and accentedness. Kennedy and Trofimovich (2008) operationalized accent familiarity as listener experience, which they defined as “the extent of previous exposure to L2 speech” (p. 461). However, the researchers did not specify the degree of listeners’ exposure to Chinese-accented English, which accounted for half of the speech samples in this study. Gass and Varonis (1984) also failed to detail the degree of familiarity that L1 English listeners had with the Arabic- and Japanese-accented speech types under direct examination in their study. It is possible that both studies could have produced different outcomes had the listeners’ language exposure been specific to the accented-speech types in question and not accented speech in general. However, evidence from Bradlow and Bent’s (2008) study on listener adaptation to foreign-accented speech indicated that increased exposure to Chinese-accented English through listening to sentence recordings yielded greater intelligibility scores. They posit that it is not the amount of language exposure that matters but rather the quality of the language that is heard (p. 717); that is, exposure aids adaptation only insofar as the speech heard is sufficiently representative of the accent in question. Yet no construct definitions for accent familiarity were presented in any of these studies.

In an earlier study, Bent and Bradlow (2003) defined accent familiarity as shared interlanguage knowledge between nonnative interlocutors with the same L1 background. The researchers maintain that having shared interlanguage knowledge can benefit speech intelligibility, a phenomenon they termed the interlanguage speech intelligibility benefit (ISIB). The results from their study demonstrated that nonnative speech was considered more intelligible by listeners who had the same L1 background as the speaker. Major, Fitzmaurice, Bunta, and Balasubramanian (2002) noted shared-L1 effects in a study from a year prior in the context of listening assessment. Typically, studies exploring accent familiarity and listening comprehension have either employed L1 English listeners or listeners who share an L1 with the speakers selected for the study. As such, accent familiarity has often been operationalized as having a shared L1. Major et al. (2002) observed listeners from varying language backgrounds to determine whether listening comprehension in English is greater when the listener shares the L1 of the speaker. Their analysis revealed that native Spanish speakers displayed greater listening comprehension when they listened to Spanish-accented English. Conversely, this effect was not seen amongst the other nonnative English groups. Although the outcomes from this study did not fully support the shared-L1 hypothesis, they did indicate that familiarity with accented speech can influence listening comprehension. The results were largely inconclusive, most likely due to the confounding of test task difficulty with accent familiarity. Harding (2011) yielded similar results in his attempt to find evidence of a shared-L1 effect on listening comprehension. His differential item functioning analysis demonstrated that Chinese L1 listeners benefited from listening to a speaker from the same L1 background across all of the test tasks, but Japanese L1 listeners showed very little benefit from listening to a Japanese L1 speaker.

Alternatively, Ockey and French (2016) endeavored to investigate accent familiarity by observing the relationship between familiarity, accentedness, and listening comprehension using L2 English test-takers. Speakers with varying degrees of accent strength from the United States, the United Kingdom, and Australia were selected to record lectures that were used on the TOEFL listening section. Test-takers were given a questionnaire to gauge their familiarity with the represented accented-speech types. Although the authors did not formally state how accent familiarity was operationalized, it can be inferred from the questionnaire that accent familiarity referred to listeners’ exposure to speakers with the accented-speech types represented in the study. Test-takers reported their general exposure to the various L1 English accents, as well as their exposure to the accents through an English teacher, real-time communication, and television or radio. The results indicated that listening scores decreased as accent strength increased. Further analysis was conducted on the test scores for the effects of accent familiarity. Those results supported previous research in that familiarity with Australian and British accents was beneficial for comprehension, and unfamiliarity with those accents had adverse effects on comprehension.

Overall, these studies suggest that familiarity with accented speech may develop either from having a shared language background with a speaker or from some degree of exposure to a particular accent. Nevertheless, solid evidence for the existence of a shared-L1 benefit has yet to be provided (Kang, Moran, & Thomson, 2019). Given the indeterminate nature of the shared-L1 benefit and the varying operational definitions of accent familiarity, the only claim that can be made with confidence is that accented speech has differential effects on listening comprehension. These effects are important to underscore given the growing concerns surrounding the incorporation of various English accent types on high-stakes L2 listening comprehension tests and the potential threat to test validity. By the same token, the possible effects of accent familiarity on listeners in speech perception research and on English listening comprehension examinees can also be extended to L2 oral proficiency raters.

Although the primary responsibility of an oral proficiency rater is considered to be assigning consistent (reliable) scores, it could be argued that the first and foremost responsibility of a rater is that of a listener. Yan and Ginther (2017) explained that the distinction between listeners and raters exists in the context of language testing. While the authors maintain the interchangeable use of the two terms, they reserve the term rater for people who have received special training to evaluate speech performance on a language proficiency exam. Furthermore, they assert that studies employing “everyday listeners” usually concentrate on listeners’ perception of comprehensibility, accentedness, and intelligibility, whereas raters are mainly concerned with these three constructs with respect to language proficiency (p. 69). By this account, what distinguishes a rater from an everyday listener boils down to rater training. However, it is debatable to claim that listeners are not raters and vice versa, when raters must naturally listen to L2 speech samples on a speaking proficiency test before they can assign ratings.


LITERATURE REVIEW


An Overview of Accent Familiarity in L2 Speech Assessment

An increasing number of language testing studies have drawn from the speech perception research explored in the previous section to address the potential influence of accent familiarity in L2 speech assessment. Within these studies, it is suggested that accent familiarity is a source of rater bias (Winke, Gass, & Myford, 2013; Xi & Mollaun, 2009). It must be emphasized that rater bias generally refers to a rater’s tendency to assign harsh or lenient scores, not to a rater’s personal implicit bias. Xi and Mollaun (2009) delineated bias into positive and negative bias. The authors explained that positive bias may afford raters a greater understanding of a particular accent, leading to lenient score assignments, whereas negative bias results in harsher scores. Xi and Mollaun (2009) contend that negative bias can manifest in unfairly low score assignments on the basis of the rater and test-taker’s shared L1 and raters’ perception of their own scoring tendencies. Yet research has not established accent familiarity as a true rater effect or source of bias, owing to the lack of conclusive evidence on its impact on oral proficiency scores.

One potential cause of these divergent results on the effects of raters’ accent familiarity could be the differing operational definitions for accent familiarity across studies. Carey, Mannell, and Dunn (2011) operationalized accent familiarity as prolonged exposure to test-takers’ L2 pronunciation and variety of English, terming it interlanguage phonology familiarity. Winke, Gass, and Myford (2013) and Winke and Gass (2013) operationalized accent familiarity as having formally studied the test-taker’s (speaker’s) L1 as a foreign or second language. Huang, Alegre, and Eisenberg (2016) went a step further to include language learning through familial or heritage ties in addition to formal language learning in their operational definition. However, Browne and Fulcher (2016) underscored that these operational definitions are not clear-cut construct definitions; instead, they are inferences made about raters’ language experiences with L2 accents. Browne and Fulcher (2016) proposed a new definition that describes accent familiarity as “a speech perception benefit developed through exposure and linguistic experience” (p. 39).

Another possible explanation for the contradictory outcomes can be attributed to differences in methodology across studies. These include the implementation of different research designs (i.e., mixed methods or quantitative) and, notably, the use of professionally trained raters (Carey, Mannell, & Dunn, 2011; Kang, Rubin, & Kermad, 2019; Winke & Gass, 2013; Winke, Gass, & Myford, 2013; Xi & Mollaun, 2009; Yan, 2014) versus untrained raters (Browne & Fulcher, 2016; Huang, 2013; Huang, Alegre, & Eisenberg, 2016; Wei & Llosa, 2015). In the section below, I will survey in detail seven studies that have utilized trained and untrained raters to examine whether rater accent familiarity influences raters’ perception and evaluation of L2 oral proficiency, and I will consider the potential of rater training as a tool to reduce the effects of accent familiarity.


The Effects of Accent Familiarity with Trained Raters

Perhaps the most seminal studies on the effects of raters’ accent familiarity have observed those effects by employing trained or certified raters. The groundbreaking study by Xi and Mollaun (2009) investigated the influence of accent familiarity and rater training on rater performance. In this study, rater participants from India were selected to rate 100 TOEFL iBT speech samples from Indian and non-Indian test-takers. One group of raters received the regular training given to ETS operational raters, who were primarily native speakers in the United States and presumably less familiar with Indian-accented English. The other group received the regular rater training with an additional training session that only included speech samples from Indian examinees. Results from the reliability and agreement analysis revealed that the scores assigned by both groups of Indian raters had high agreement with those of operational raters. Although the generalizability (G) study analysis found raters who received the special training to be more consistent, the overall results indicated that the raters’ accent familiarity did not affect their ability to reliably evaluate English oral proficiency. The qualitative analysis of survey responses reported that the raters felt confident in their ability to rate Indian speakers as a result of the rater training. Additionally, the raters reported that their familiarity with Indian-accented English did not make it difficult to make fair and accurate scoring judgments in this context, although some had previously been concerned about their accent familiarity. Given these results, Xi and Mollaun (2009) speculated that rater bias could be reduced through rater training.


On the other hand, Carey et al. (2011) and Winke et al. (2013) both employed trained raters and found that accent familiarity can influence rating behavior. Carey and colleagues posited that interlanguage phonology familiarity, that is, raters’ familiarity with an L2 speaker’s pronunciation, can positively or negatively influence rater judgments of a test-taker’s speech performance. Ninety-nine IELTS examiners with varying first languages, nationalities, and teaching experience were selected from five international test centers (Korea, New Zealand, India, Australia, and Hong Kong) to rate IELTS oral proficiency interviews for three test-takers with Korean-, Indian-, and Chinese-accented English. The raters used the authentic four-point pronunciation subscale to evaluate the oral proficiency interviews, in addition to completing three questions to determine their familiarity with the three interlanguages: Korean, Indian, and Chinese. Logistic regression analysis revealed that pronunciation scores were higher when the rater was familiar with the test-taker’s accent and higher when the speaker was rated in their home country. As a result, rater familiarity was found to have noticeable effects on rater behavior regardless of rater training.
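
As a rough illustration of the kind of logistic regression analysis reported by Carey et al. (2011), the sketch below regresses the odds of awarding a high pronunciation score on rater familiarity and test-center location. The data are synthetic and the variable names are my own; this is a sketch of the technique, not a reproduction of the study's model.

```python
# Sketch of a familiarity-effect logistic regression on synthetic data.
# Variables `familiar` and `home_centre` are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
familiar = rng.integers(0, 2, n)      # 1 = rater familiar with the accent
home_centre = rng.integers(0, 2, n)   # 1 = rated in the speaker's home country
# Simulate "high score awarded" with a built-in positive familiarity effect.
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * familiar + 0.4 * home_centre)))
high_score = (rng.random(n) < p).astype(int)

df = pd.DataFrame({"high_score": high_score,
                   "familiar": familiar,
                   "home_centre": home_centre})
model = smf.logit("high_score ~ familiar + home_centre", data=df).fit(disp=False)
print(model.params)          # positive coefficients -> higher odds of a high score
print(np.exp(model.params))  # odds ratios, the usual effect-size reading
```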

In support of Carey and colleagues, Winke et al. (2013) recognized that accent familiarity was a possible source of rater bias. To increase the generalizability of their study, Winke et al. (2013) selected a larger number of rater participants than both Xi and Mollaun (2009) and Carey et al. (2011). The researchers analyzed the behavior of raters whose L2 was the same as the test-takers’ L1, adding another dimension to the discourse on the effects of accent familiarity. Additionally, the raters received rater training before assigning any ratings. Since the speech samples in this study belonged to test-takers whose L1 was Spanish, Chinese, or Korean, the researchers were mainly interested in the ratings assigned by raters whose L2 was Spanish, Chinese, or Korean, to see if accent familiarity influenced their ratings. The raters and test-takers were divided by their language backgrounds and put into subgroups. The many-facet Rasch analysis indicated that Spanish L2 and Chinese L2 raters were less harsh in rating test-takers whose L1 was the language that they had previously learned. In spite of this evidence of rater bias, the data revealed the bias size to be quite small. Moreover, the study’s results indicated no bias in L2 Korean raters towards L1 Korean test-takers. However, the researchers suggested that this result could have been attributed to the small sample of L2 Korean raters. This study provides evidence that accent familiarity does impact speech performance ratings, albeit minimally.
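
To show what a “bias term” means in a many-facet Rasch framework, the following sketch fits a deliberately simplified dichotomous model, P(acceptable) = sigmoid(ability − severity + bias × sharedL1), to synthetic data and recovers the familiarity bias. Operational many-facet Rasch analysis handles ordinal rating scales and anchors facets for identifiability; everything here, including the data and parameter values, is an illustrative assumption rather than the analysis Winke et al. ran.

```python
# Simplified many-facet Rasch sketch with a rater-familiarity bias term.
# Dichotomous for brevity; all data and parameter values are synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

rng = np.random.default_rng(0)
n_examinees, n_raters = 60, 8
true_ability = rng.normal(0, 1, n_examinees)
true_severity = rng.normal(0, 0.5, n_raters)
shared_l1 = rng.integers(0, 2, (n_raters, n_examinees))  # 1 = rater knows examinee's L1
true_bias = 0.4  # leniency boost under familiarity (in logits)

# Simulate dichotomous "acceptable / not acceptable" judgments.
logits = true_ability[None, :] - true_severity[:, None] + true_bias * shared_l1
X = rng.random((n_raters, n_examinees)) < expit(logits)

def neg_log_lik(params):
    ability = params[:n_examinees]
    severity = params[n_examinees:-1]
    bias = params[-1]
    p = np.clip(expit(ability[None, :] - severity[:, None] + bias * shared_l1),
                1e-9, 1 - 1e-9)
    return -np.sum(X * np.log(p) + (~X) * np.log(1 - p))

res = minimize(neg_log_lik, np.zeros(n_examinees + n_raters + 1),
               method="L-BFGS-B")
# Ability/severity are identified only up to a constant here; the bias term is
# identified by variation in shared_l1 and should land near 0.4.
print(f"estimated familiarity bias (logits): {res.x[-1]:.2f}")
```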

Building on their previous study, Winke and Gass (2013) explored raters’ reasoning behind the ratings they assigned, in an attempt to see whether raters are aware that their accent familiarity is a potential source of bias that could impact their scoring. A subset of the trained rater participants from the previous study (Winke, Gass, & Myford, 2013) was selected for this study. A total of 26 raters were videotaped while rating the speech samples, and the videotapes were later used in stimulated recall sessions. Although accent familiarity was defined in this study as having previously learned the test-takers’ L1 as an L2, a few participants self-reported familial ties to some of the test-takers’ L1s and were considered heritage learners. The stimulated recall comments evidenced that raters’ L2 learning experiences had impacted the way they listened to the speech samples, which in turn affected the way they scored them. Furthermore, this study revealed that raters are aware of accents and of how their familiarity played a role in their rating. Additionally, the findings suggested that raters attend to accents as they try to comprehend the speech or for affective reasons.

Combined, Carey et al. (2011) and Winke et al. (2013) corroborate that raters’ accent familiarity is a source of rater bias that contributes to lenient rating assignments from trained raters. On the other hand, Xi and Mollaun (2009) established that rater training can mitigate the effects of accent familiarity and the possibility of rater bias attributable to raters’ accent familiarity. Quantitatively, test scores did not exhibit any severe or lenient scoring patterns. Furthermore, the results from the qualitative analysis confirmed that the raters in their study reported feeling more secure in their ability to accurately rate Indian speakers after receiving rater training. Although the qualitative analysis conducted on the rater feedback survey responses pointed out that raters are aware of their accent familiarity and its potential to impact their rating judgments, it did not bear any consequence for the test scores. Likewise, the qualitative analysis conducted by Winke and Gass (2013) gave further insight into accent familiarity as a rater effect by indicating that raters’ perceptions of their own rating ability and language experiences may inform rater bias. Despite the conflicting findings, these studies all agreed that large-scale testing companies should address raters’ accent familiarity and the potential risk that it presents to the reliability of L2 speech proficiency scores.


The Effects of Accent Familiarity with Untrained Raters

Alternatively, a considerable number of accent familiarity studies have examined the impact of accent familiarity on naive or untrained raters. For example, Huang (2013) investigated the effects of raters’ familiarity with Chinese-accented English, employing three untrained rater groups that varied on two variables: accent familiarity and English language teaching experience. The first rater group consisted of participants who were categorized as unfamiliar with Chinese-accented English and had no teaching experience. The second group consisted of raters with no teaching experience who reported being familiar with Chinese-accented English through language study, and the third group consisted of English language teachers who reported having Chinese students in their classrooms. Each rater rated 26 speech samples using analytic and holistic rubrics. Raters also completed an accent identification task that was used to measure raters’ accent familiarity.

To address her first research question, Huang (2013) conducted a one-way analysis of variance (ANOVA) to determine whether the groups varied in their performance on the accent identification task, which was used to confirm raters’ self-reported accent familiarity. The results indicated that the unfamiliar non-teaching group had the poorest performance on the accent identification task, establishing their lack of familiarity with Chinese-accented English. The results also confirmed that the other rater groups were indeed familiar with Chinese-accented English. Next, Huang (2013) analyzed the reliability and severity of raters’ analytic and holistic scores. Although internal consistency was within an acceptable range, the results revealed significant differences in inter-rater consistency amongst the groups. Moreover, the rater severity analysis indicated that the unfamiliar raters had a tendency to assign more lenient scores. Seeing as the unfamiliar rater group was primarily responsible for test score variance in this study, no firm conclusions can be drawn about the effect of accent familiarity or teaching experience on rater performance. However, Huang (2013) did point out that raters arrive at their scoring judgments in different ways, because the raters who were familiar with Chinese-accented speech and who had teaching experience stated that their background in these areas influenced their judgments.
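
The group comparison Huang reports can be illustrated with a one-way ANOVA, as sketched below on invented accent-identification scores for the three rater groups; a significant F statistic, followed by post-hoc tests, would single out the unfamiliar group.

```python
# One-way ANOVA sketch comparing rater groups on an accent identification
# task, in the spirit of Huang (2013). All scores are invented.
from scipy import stats

unfamiliar_no_teaching = [4, 5, 3, 6, 4, 5]  # correct identifications (of 10)
familiar_no_teaching = [7, 8, 6, 9, 7, 8]
familiar_teachers = [8, 7, 9, 8, 9, 7]

f_stat, p_value = stats.f_oneway(unfamiliar_no_teaching,
                                 familiar_no_teaching,
                                 familiar_teachers)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```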

Similarly, Wei and Llosa (2015) used a mixed-methods design to explore the effects of accent familiarity on rater performance. The researchers selected six rater participants (three American raters and three Indian raters) to identify the extent to which the two rating groups differed in their evaluation of Indian test-takers. Over 200 samples were obtained from Indian TOEFL iBT test-takers. Two quantitative analyses were conducted to measure internal consistency and rater severity. The results showed that the two rater groups were equally consistent in their scoring patterns. Interestingly, rater severity did not demonstrate any significant differences between the two groups, yet the analysis revealed that the most lenient rater was American and the most severe was Indian. These findings not only suggested that accent familiarity can contribute to rater bias (Carey, Mannell, & Dunn, 2011; Winke & Gass, 2013; Winke, Gass, & Myford, 2013); they also affirmed Xi and Mollaun’s (2009) concept of positive and negative bias.
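
The two quantitative checks described above can be sketched as follows: internal consistency across raters (here via Cronbach's alpha, one common choice) and per-rater severity read off as mean assigned scores. The ratings matrix is synthetic; Wei and Llosa's actual statistics and data are not reproduced here.

```python
# Sketch of internal consistency (Cronbach's alpha) and rater severity
# (mean score per rater) on a synthetic 6-rater x 10-sample ratings matrix.
import numpy as np

rng = np.random.default_rng(2)
true_quality = rng.integers(1, 5, 10).astype(float)        # samples scored 1-4
ratings = np.clip(true_quality[None, :] + rng.normal(0, 0.5, (6, 10)), 1, 4)

def cronbach_alpha(items: np.ndarray) -> float:
    """Treat raters (rows) as 'items' and speech samples as observations."""
    k = items.shape[0]
    item_vars = items.var(axis=1, ddof=1).sum()
    total_var = items.sum(axis=0).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(ratings):.2f}")                # consistency
print("mean score per rater:", ratings.mean(axis=1).round(2))  # severity/leniency
```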

The qualitative analysis, however, demonstrated that the Indian raters and the American raters who reported familiarity with Indian-accented English were more successful at comprehending speech produced by Indian test-takers, thus providing evidence in support of the existence of an interlanguage speech intelligibility benefit (Bent & Bradlow, 2003). Beyond the role of accent familiarity in scoring judgments, the researchers were also motivated by the scarcity of speech assessment studies that focus on raters from Outer Circle countries such as India (see Kachru, 1985, 1991). Accordingly, they explored raters’ attitudes towards Indian-accented English by tabulating the number of positive and negative comments made about Indian English during the think-aloud procedure. It was revealed that the Indian raters varied greatly in their attitudes towards Indian English, whereas the American raters largely expressed positive comments about it.

By comparison, Huang et al. (2016) investigated whether accent familiarity and the source of that familiarity affected the behavior of untrained raters. In their study, the source of familiarity refers to how the raters became familiar with the language (i.e., through formal study or heritage ties). Raters were categorized by language and source of familiarity to create three groups: Spanish Heritage, Spanish Non-Heritage, and Chinese Heritage. Although accent familiarity was found to assist in accent identification, no significant difference was found in the numerical ratings across rater groups, which indicated that the raters were not influenced by accent familiarity. These results deviated from Carey et al.’s (2011) and Winke and Gass’s (2013) findings. In stark contrast, the survey data indicated that accent familiarity had significant effects on rater behavior for all groups: many of the raters reported that they were influenced by their accent familiarity and, as a result, were more lenient with their ratings.

While these three studies support the general consensus that accent familiarity facilitates accent identification, the discrepancies between raters’ perceptions and their numerical ratings illustrate the larger picture of mixed results in the field of speech assessment and rater effects research. The differences found amongst the three studies demonstrate that additional research needs to be conducted before we can move from hypotheses to grounded statements about the effects of accent familiarity on rater judgment and scoring. Additionally, these studies shed light on another background variable that may be at play during L2 speech evaluation: the role of raters’ educational and professional background in English language teaching. The findings from Huang (2013) and Wei and Llosa (2015) showed that raters can, to some extent, still make accurate scoring judgments on L2 oral proficiency without having been formally trained to do so. It appeared that raters’ experience in English language teaching provided them with the wherewithal to correctly interpret the rating criteria, which prevented them from making inaccurate judgments about test-takers’ oral proficiency.

DISCUSSION AND CONCLUSION

Since its conceptualization in speech perception research, accent familiarity has become a topic of much discussion within multiple research domains. Speech assessment research, specifically, has expanded the operationalization of accent familiarity from a listener/rater sharing the speaker/test-taker’s L1, to raters’ exposure to L2 accented speech (Carey et al., 2011), to raters having the test-taker’s L1 as their L2 (Winke et al., 2013). Given the widespread international use of high-stakes language tests, such as the Test of English as a Foreign Language (TOEFL) and the International English Language Testing System (IELTS), testing companies have been left with no recourse other than to utilize raters from differing language backgrounds (Winke, Gass, & Myford, 2013).

Furthermore, the growing sociolinguistic debate around World Englishes has initiated a shift in standards from “nativelike” pronunciation to intelligible pronunciation. Therefore, it is imperative to understand how all raters, regardless of their linguistic background, align themselves with rating L2 speech proficiency. In the context of L2 speech assessment, however, it is equally important to understand the influence of rater-specific variables and the potential threats they pose to test reliability and validity. Kang, Rubin, and Kermad (2019) contend that “oral proficiency scores ought to be affected only by test-takers’ performance and how their performance aligns with the rating criteria” (p. 2). In keeping with this view, the goal of this paper was twofold: to investigate the influence of accent familiarity on rater behavior, and to determine whether rater training can minimize the biasing effects on L2 oral proficiency scores related to raters’ accent familiarity.

Despite the acute awareness of accent familiarity and its impact on rater performance in the existing body of language assessment literature, research results have been mixed. Studies have varied in methodological approach (i.e., quantitative vs. qualitative), in the use of trained raters, and in rating criteria (i.e., analytic vs. holistic). While some studies have yielded results indicating an influence of accent familiarity on rater behavior, others have found little to no effect.

Some researchers contend that mixed-methods research designs may be more effective than quantitative analysis alone for examining rater-related factors that affect rater performance (Yan, 2014). Although a few studies have relied solely on quantitative analysis to observe the effects of accent familiarity (Carey et al., 2011; Winke et al., 2013), most studies have applied either a mixed-methods (Wei & Llosa, 2015; Xi & Mollaun, 2009; Yan, 2014) or a quasi-experimental design (Huang, 2013; Huang et al., 2016). By combining quantitative and qualitative analyses, we are given a window into raters’ perceptions that quantitative analysis alone simply does not provide. During qualitative analysis, researchers gain insight into whether or not raters feel affected by their familiarity with test-takers’ accents, as well as into the implicit biases that raters bring to rating sessions. This further underscores that bias is not entirely quantifiable.

With regard to whether rater training can mitigate rater bias, research appears to suggest that it can, even though some studies employing trained (Winke et al., 2013) or certified raters (Carey et al., 2011) contradict the effects of rater training. Surprisingly, support for rater training has also been demonstrated in studies that neither employed trained raters nor displayed influential effects of accent familiarity (Huang, 2013).

A recent study conducted by Kang, Rubin, and Kermad (2019) substantiated previous research on rater training. The results from this study indicated that the influence of rater-specific variables, including raters’ accent familiarity and raters’ attitudes towards L2 speech, was lessened after a single session of rater training. Evidence from Kang et al. (2019) as well as the other studies suggests that the influence of accent familiarity on rater performance is circumstantial at best. Taken together, these findings imply that raters’ familiarity with L2 speech does have some impact on rater behavior. Therefore, rater training programs need to contend with raters’ linguistic backgrounds as well as other rater-related variables to ensure that test-takers receive reliable and unbiased oral proficiency scores.

Although it may not be feasible for rater training packages to address all rater background factors at once, it may be beneficial to include modules that address these factors, much as Xi and Mollaun (2009) implemented specialized training geared specifically towards evaluating Indian test-takers.

Finally, one major limitation across the accent familiarity studies reviewed here relates to sample size. The relatively small samples of rater participants constrain the generalizability of their findings. Additionally, many of these studies have looked only at the effects of familiarity with widely spoken L2 English varieties. Future studies should aim to replicate this work using larger rater pools and a wider range of L2 English varieties to improve generalizability.

REFERENCES

  • Bachman, L. F. (2000). Modern language testing at the turn of the century: Assuring that what we count counts. Language Testing, 17(1), 1-42.
  • Bent, T., & Bradlow, A. R. (2003). The interlanguage speech intelligibility benefit. Journal of the Acoustical Society of America, 114(3), 1600-1610. doi:10.1121/1.1603234
  • Bradlow, A. R., & Bent, T. (2008). Perceptual adaptation to non-native speech. Cognition, 106(2), 707-729. https://doi.org/10.1016/j.cognition.2007.04.005
  • Browne, K., & Fulcher, G. (2016). Pronunciation and intelligibility in assessing spoken fluency. In T. Isaacs & P. Trofimovich (Eds.), Second language pronunciation assessment: Interdisciplinary perspectives (pp. 37-53). Bristol: Multilingual Matters. https://doi.org/10.21832/ISAACS6848
  • Carey, M. D., Mannell, R. H., & Dunn, P. K. (2011). Does a rater’s familiarity with a candidate’s pronunciation affect the rating in oral proficiency interviews? Language Testing, 28(2), 201–219. doi:10.1177/0265532210393704.
  • Chalhoub-Deville, M. (1996). Performance assessment and the components of the oral construct across different tests and rater groups. In M. Milanovic & N. Saville (Eds.), Performance testing, cognition and assessment (pp. 55-73). Cambridge: University of Cambridge Local Examinations Syndicate and Cambridge University Press.
  • Derwing, T., & Munro, M. (2009). Putting accent in its place: rethinking obstacles to communication. Language Teaching, 42(4), 476-490. doi:10.1017/S026144480800551X
  • Gass, S., & Varonis, E. M. (1984). The effect of familiarity on the comprehensibility of nonnative speech. Language Learning, 34(1), 65-87. doi:10.1111/j.1467-1770.1984.tb00996.x
  • Harding, L. (2011). Accent and listening assessment: A validation study of the use of speakers with L2 accents on an academic English listening test. Frankfurt, Germany: Peter Lang
  • Huang, B. (2013). The effects of accent familiarity and language teaching experience on raters’ judgements of non-native speech. System, 41(3), 770-785. doi: 10.1016/j.system.2013.07.009
  • Huang, B., Alegre, A., & Eisenberg, A. (2016). A cross-linguistic investigation of the effects of raters’ accent familiarity on speaking assessment. Language Assessment Quarterly, 13(1), 25-41. doi:10.1080/15434303.2015.1134540
  • Kang, O., Moran, M., & Thomson, R. (2019). The effects of international accents and shared first language on listening comprehension tests. TESOL Quarterly, 53(3), 56-81.
  • Kennedy, S., & Trofimovich, P. (2008). Intelligibility, comprehensibility, and accentedness of L2 speech: The role of listener experience and semantic context. The Canadian Modern Language Review, 64(3), 459-489.
  • Major, R.C., Fitzmaurice, S.F., Bunta, F., & Balasubramanian, C. (2002). The effects of nonnative accents on listening comprehension: implications for ESL assessment. TESOL Quarterly, 36, 173-190. doi:10.2307/3588329
  • Myford, C. M., & Wolfe, E. W. (2003). Detecting and measuring rater effects using many-facet Rasch measurement: Part I. Journal of Applied Measurement, 4(4), 386-422.
  • Ockey, G. J., & French, R. (2016). From one to multiple accents on a test of L2 listening comprehension. Applied Linguistics, 37(5), 693-715. doi:10.1093/applin/amu060
  • Saito, K., Trofimovich, P., & Isaacs, T. (2015). Using listener judgments to investigate linguistic influences on L2 comprehensibility and accentedness: A validation and generalization study. Applied Linguistics, 38, 439–462.
  • Wei, J., & Llosa, L. (2015). Investigating differences between American and Indian raters in assessing TOEFL iBT speaking tasks. Language Assessment Quarterly, 12(3), 283-304. doi:10.1080/15434303.2015.1037446
  • Winke, P., & Gass, S. (2013). The influence of second language experience and accent familiarity on oral proficiency rating: A qualitative investigation. TESOL Quarterly, 47(4), 762-789. Retrieved from http://www.jstor.org/stable/43267928
  • Winke, P., Gass, S., & Myford, C. (2013). Raters’ L2 background as a potential source of bias in rating oral performance. Language Testing, 30(2), 231–252. doi:10.1177/0265532212456968
  • Xi, X., & Mollaun, P. (2009). How do raters from India perform in scoring the TOEFL IBT™ speaking section and what kind of training helps? ETS Research Report Series, 2009: i-37. doi:10.1002/j.2333-8504.2009.tb02188.x
  • Yan, X. (2014). An examination of rater performance on a local oral English proficiency test: A mixed-methods approach. Language Testing, 31(4), 501-527. doi:10.1177/0265532214536171
  • Yan, X., & Ginther, A. (2017). Listeners and raters: Similarities and differences in evaluation of accented speech. In O. Kang & A. Ginther (Eds.), Assessment in second language pronunciation (pp. 67-88). Taylor and Francis. https://doi.org/10.4324/9781315170756
