Rebecca Orton

Linguistics 741

Take Home Final Examination

 

1.  Definitions are the key to determining the usefulness of the terms 'dialect' and 'pidgin' in sign language research.  However, the term 'dialect' is difficult to define.  Wolfram and Christian (1989) ask what a 'dialect' is, and there is no clear definition of 'dialect' in linguistic research.  In many of the world's spoken languages, the terms 'language' and 'dialect' can be difficult to distinguish (Wardhaugh 1986:23).  If the term 'dialect' is already difficult to apply in spoken language research, how much more difficult would it be to define 'dialect' for sign language research?  Nevertheless, there do appear to be dialects of ASL in the United States, for example, Ameslan in Oregon.

 

The term 'pidgin', on the other hand, has many different definitions and characteristics.  Kegl et al. (in press:31) say that one characteristic of a pidgin is that it is derived from a standard language and "is the result of simplification (the dropping of unnecessary complication such as inflections and a reduction in the lexical inventory)".  Wardhaugh (1989:35) says that "pidgin languages are very much reduced in the function they serve in society in contrast to standardized languages."  Wardhaugh (1989:57) defines a pidgin as "a language with no native speakers: it is no one's first language but is a contact language."

 

The definitions of pidgin do not describe contact sign adequately because in contact sign, two modalities are involved: mouthing of the dominant spoken language, and signing.  For example, there is evidence against the usefulness of the term 'pidgin' when comparing English-based spoken pidgins with contact sign based on ASL.  Characteristics of English-based spoken pidgins are: contribution of mostly lexical items from the non-English language; sounding out of mostly English words; use of English meanings; reduced morphology; and shallow, superficial sentence structure.  Characteristics of ASL-based contact sign are: contribution of more lexical items, morphology, and syntactic structure from ASL; use of mostly ASL and ASL-like signs; the sharing of meanings between English and ASL, with some invented; shared non-manual signals (NMS), ASL verb inflection, and English morphology; and deeper sentence structure (Lucas & Valli 1992).

 

The modality of sign language is different from that of spoken language.  However, Kegl et al. (in press:10) were able to take advantage of these modality differences to "clearly distinguish the superstrate and substrate contributions" of spoken Spanish and LSN/ISN to PSN.  LSN is the expanded symmetric pidgin or jargon used between deaf signers only.  ISN is the native language the deaf community uses in Nicaragua.  PSN is the spoken/signed pidgin.  Kegl et al. (in press:8) say that "PSN, the communication form used between hearing speakers and Deaf signers, is very distinctive because it characteristically involves the interspersing, sometimes overlapping of spoken Spanish and Nicaraguan sign, abundantly sprinkled with commonly occurring Nicaraguan gestures."  This example is evidence for the usefulness of the term 'pidgin' in sign language research.

 

Even if spoken language research does not apply directly to sign language research, the foundation of knowledge from spoken language research would be helpful.  Differences are bound to show up in comparing signed to spoken languages.  Parallels between spoken and signed languages would be more noticeable as well.  Clearly, spoken language research can contribute to sign language research by providing methods, theories, concepts, and other such 'received knowledge.'

 

2.  I disagree with the statement that the sociolinguistic situation of deaf communities the world over is about the same because deaf people are always a minority among hearing people.  There are two parts to this statement.  The first part is that 'the sociolinguistic situation of the deaf communities the world over is about the same.'  The second part is that 'deaf people are always a minority among hearing people.'  That the second part is the cause of the first part is questionable.  The second part contains an absolute, 'always', which is tricky to deal with, especially alongside a vague word like 'minority'.  In addition, there are exceptions to the first part anyway, as can be expected.  We will tackle each of these issues below.

 

There are other reasons why the sociolinguistic situations of deaf communities the world over may be about the same.  Johnson (1994:103) says that "because deaf ethnic groups are embedded within larger mainstream societies, many of the values and behavior patterns of the mainstream are incorporated into the culture of the deaf ethnic group."  Most deaf communities are embedded in industrialized societies.  Some differences show up when a non-industrialized deaf community, such as the Mayan deaf village, is compared with industrialized deaf communities (Johnson 1994).  In addition, differences between non-literate and literate societies are more significant than the differences between signed and spoken languages.  For example, Branson et al. say that "sign languages and oral languages in non-literate societies may in fact have more in common syntactically than do oral languages in non-literate societies and oral languages in literate societies."  Most deaf communities are surrounded by a culture with a written language, not just a spoken language.  A deaf community within a culture using only an oral language, like the Bali deaf village in Indonesia, is difficult to find (Branson et al.).

 

There are problems with saying that deaf people are always a minority among hearing people because the term 'minority' is too vague.  Are Deaf people a minority because of their sign language, or because they have a hearing loss and hence are viewed as pathological and not just different?  Disabled people are a minority not because they have a different language from the majority, but because they are viewed as pathological.  Is a group of ethnic people a minority because of their language or because of physical characteristics, like skin color, shape of eyes, etc.?  Such a group may speak the majority language.  One thing is for sure: these differences within the ethnic group are not viewed as pathological.  A pathological viewpoint may be the reason why a group of people is considered a minority.  Language differences may not a minority group make.

 

More than half of the people in the world are bilingual or multilingual (Grosjean 1992:307).  Being monolingual is not the typical situation, even for hearing people.  Granted, hearing bilinguals in one spoken language and one sign language are even less common.  But whole communities of such hearing bilinguals are found, for example, on Martha's Vineyard (Groce 1980, in Johnson 1994:106), in the Mayan deaf village (Johnson 1994), in the Bali deaf village (Branson et al.), and in the Venezuelan community of signed/spoken bilinguals (p.c. Pietrosemoli, in Lucas and Valli 1992:39).  These communities do not view their deaf people as a pathological minority, but as simply different.

 

3.  I agree that bilingualism is a positive thing.  I personally believe that, and I see nothing inherently negative about bilingualism itself.  Even the various definitions of bilingualism suggest nothing negative.  One definition is the "ability to produce meaningful utterances in two (or more) languages" (Grosjean 1992:308).  Another definition, which I have revised, is command of at least one language skill, such as reading, writing, speaking, signing, or listening, in another language (Grosjean 1992:308).  Grosjean's (1992:308) definition of bilingualism is the "use of two or more languages (or dialects) in their everyday lives."

 

Deducing from these various definitions, it is clear that fluency in both languages is not a prerequisite, contrary to what Luke and Richards (1982:58) say.  Oviedo's definition does concern fluency: bilingualism is "the ability to produce discourse equally acceptable in whatever languages are considered."  But Oviedo admitted that this definition is more restricted than the others above.  Not being fully fluent in both languages may be considered a negative thing, except that this is the case for the vast majority of people in the world (Grosjean 1992:308).  It is possible that these people are more fluent at the bilingual end of the continuum than at the monolingual end.  For example, bilinguals may be more fluent in code mixing or contact sign than in the monolingual language itself (Luke & Richards 1982:57-58, Lucas & Valli 1992, Grosjean 1992:312).

 

If a bilingual tries to deactivate one language in order to communicate with a monolingual, interference from the bilingual's other language or interlanguage may occur.  This interference could be considered a negative thing depending on the status of the languages concerned.  But sometimes this interference becomes part of the contact language as a positive marker of a bilingual.  Lucas and Valli (1992:34) say that "what begins as 'sporadic and idiosyncratic' interference in the individual speaker may become the norm at the community level, over time."  Romaine (1989:50, in Lucas and Valli 1992:5) says that "over time the effects of interference in a bilingual speech community can be cumulative, and lead to new norms, which are different from those observed by monolinguals who use the language elsewhere."

 

There are benefits to being bilingual.  More specialized registers are available to a bilingual than to a monolingual.  For example, ASL does not have complete access to decontextualized definitions, but English does (Schley).  As another example, a Hong Kong Cantonese-English bilingual has access to registers for science and law in English (Luke & Richards 1982:57).

 

In fact, there is something negative about being monolingual in a diglossic situation.  Hong Kong is a close example of diglossia without bilingualism (Luke & Richards 1982:51).  This situation limits people's access to each other.  Resources have to be duplicated in two languages.  Hong Kong has duplicate resources for Cantonese and English speakers in most situations, and for language-independent situations, separate language facilities are provided (Luke & Richards 1982:52).  If there were bilingualism throughout the society, then these resources could be combined and shared by all.  People would also have more access to each other.

 

4.  I disagree that sign languages in contact with spoken languages have a great number of borrowings.  As Lucas and Valli (1992:40) say, the term borrowing "has been defined in terms of spoken language contact and generally refers to the relationships between two spoken language phonologies and morphologies.  The basic premise is that spoken language phonologies, while unique, are composed of the same basic parts - the sounds articulated by the vocal tract - and the borrowing takes place within the same modality."  The term 'borrowing' was established in spoken language research long before sign language research started.

 

Dr. Jean Ann herself said that "the term 'borrowing' in the traditional sense would describe a case in which one sign language incorporates a sign which originally came from another sign language."  Both Ann and Davis (1989:97) say that 'borrowing' is not an adequate term for such phenomena as fingerspelling.  A fingerspelled word in ASL represents not 'spoken English' but rather 'written English' (Robert E. Johnson).  Ann makes very clear what I mean: "a fingerspelled English word is not English - but rather a signed representation of a written representation of an English word".  In Ann's opinion, "it is not obvious that these are indeed 'borrowings' from English to ASL".  Lucas and Valli (1992:43) say that "fingerspelling... is a unique outcome of language contact... not the outcome of contact between a sign language and a spoken language, but rather between a sign language and the orthographic system used to represent that spoken language."

 

The term 'borrowing' itself was borrowed (pun intended) into sign language research as part of 'received knowledge', but this does not always work.  Lucas and Valli (1992:40) say that "contact between a sign language and a spoken language necessarily involves two modalities, and the outcomes of contact need to be defined in different ways as a result."  Lucas and Valli (1992:40) "suggest that the terms that have been used to define spoken language contact phenomena, such as borrowing, cannot be indiscriminately applied to sign-spoken events."  Grosjean (1992:312), for example, is not being very careful when saying that Deaf bilinguals "choose a base language of the community or a signed version of the spoken language.  Then, according to various momentary needs, and by means of signing, fingerspelling, mouthing, etc., they bring in the other language in the form of code-switches or borrowing.  The result has recently been called contact signing (Lucas and Valli, 1992)."

 

Personally, I think that, if anything, contact signing is a unique form of code mixing, not code-switching or borrowing.  Indeed, Lucas and Valli (1992:4) mention that there is an "ongoing debate over terms used for describing the linguistic outcomes of language contact... is a given phenomenon an example of borrowing or is it more aptly described as code switching; is a given utterance a case of interference or borrowing; what is the precise difference between code switching and code mixing..."  Perhaps this debate can be resolved by inventing new terms, now that we understand why old terms like 'borrowing' do not fully work.

 

5.  Eye gaze may distinguish sign language discourse from spoken language discourse.  Eye gaze is defined as a non-manual signal in sign languages, but it also has functions in spoken languages.  Kendon proposes three fundamental functions of eye gaze in spoken languages: "monitoring (to gather information about addressee), signaling (to inform addressee of intentions and expectations), and expressive (to emphasize feelings and attitudes)" (Martinez, in Lucas 1995:278).  For example, "brief eye contact at phrase ending and longer gaze at the end of utterances may serve to monitor feedback as well as signal turn-yielding" (Martinez, in Lucas 1995:278).  Argyle and Cook (1976) propose similar functions of eye gaze in spoken languages: "as a signal for grammatical junctures and speaker-select, as a channel for obtaining feedback and gaze aversion as it relates to a 'cognitive overload hypothesis'" (Martinez, in Lucas 1995:278).

 

One function of eye gaze already mentioned above for spoken languages could be applicable to sign languages, too.  Kendon (1967:22-63) suggested "that gaze aversion at the beginning of an utterance could signal an intention to hold the turn and preclude interruptions as well as enable the speaker to organize his utterance" (Martinez, in Lucas 1995:278).  Martinez (1993) and Baker (1976b:1-13) both noticed this pattern of behavior in sign language as well.  Baker also mentioned that "eye gaze patterns could be a combined outcome of discourse, lexical and grammatical factors" in sign languages (Martinez, in Lucas 1995:279).  In a sign language such as Venezuelan Sign Language (VSL), eye gaze is a non-manual signal that introduces new "argumental information" and "nonnarrative information in the narration" such as "comments, flashbacks, descriptions", etc., and utilizes "grammatical space" (Oviedo).  Oviedo describes in detail the grammatical function of eye gaze in space for sign language: "due to the grammatical use of gaze direction in VSL, there is no need to lexicalize manually all arguments of the narration.  Rather it is enough to establish a particular place for them in space, a position relative to the protagonist (if located in the same narrative space as in the selected fragments).  Each subsequent time it is necessary to refer to them, orienting the gaze toward their location(s)."

 

Another function of eye gaze in sign language concerns the receiver.  The receiver tends to gaze at the signer all the time.  This receiver eye gaze has been demonstrated in ASL and with Deaf Filipinos (Martinez, in Lucas 1995:272).  It may be that eye gaze on the receiver's part is simply required in order to perceive the sign language the signer is using, and thus is not really a non-manual signal in the sense established above.  Listeners have no such fundamental need to look at the speaker in order to understand spoken language.  The listener of a spoken language may not value gazing at facial expressions, body language, and gestures as much as a receiver of a signed language.  Even though these visual features are common to both spoken and signed languages, sign languages utilize them much more and in different ways than spoken languages.  This is clearly true of the functions of eye gaze as well.  In spoken language, eye gaze is not as significant as it is in sign language.  In sign language, eye gaze is used grammatically and lexically, as well as for the functions that spoken languages typically use eye gaze for.

 

6.  Sociolinguistics contributes a lot to the discussion of the question 'what is pure XSL?'  Sociolinguistics contributes objectivity and analysis in terms of the linguistic features of XSL.  Sociolinguistics can analyze the grammatical structures used in XSL.  Sociolinguistics can provide analysis of the differences between spoken languages and signed languages.  Sociolinguistics can analyze modality differences, such as the use of space in sign languages that is not readily available in spoken languages.  Sociolinguistics can determine whether a spoken language or a written language has indeed influenced a sign language in different ways, for example, through fingerspelling.  Sociolinguistics contributes 'received knowledge' about pidgins and creoles that may be applied to the analysis of what is pure XSL and what is not.

 

Sociolinguistic analysis of contact sign features can show what XSL is not.  Sociolinguistics analyzes what is unique, for example, contact sign phenomena, and what is common among the world's languages.  Sociolinguistics can point out important features of pure XSL that are missing in contact sign, such as eye gaze.  Sociolinguistics takes into consideration non-manual signals within XSL.  Sociolinguists have found that Deaf people used only English-like signing or contact signing with hearing researchers in an effort to hide their ASL.  Sociolinguistics can also point out differences between intergroup and intragroup XSL.  For example, Aramburo (1989:115) noted that "exaggerated body movements and facial expressions are not as prevalent when a deaf participant converses with a hearing participant as compared to when both participants are deaf."

 

Sociolinguistics can point out and clarify misconceptions among hearing people about what is pure XSL.  In the past, Trotter noted that teachers of the deaf were taught ASL in ways that were detrimental.  For example, "students were encouraged to use the handshape of the initial letter of the English word and 'create' their own sign.  This sign was supposed to iconically represent the concept of the English word or perhaps bear sufficient similarity in form to a related sign" (Trotter 1989:218).  Trotter knows that sociolinguistics can contribute to the discussion of what is pure ASL.  For example, regarding the situation above,

"no discussion was provided of grammatical facial expression, use of signing space, classifiers, inflections, or the distinction between ASL and non-ASL initialized signs.  Nor was it ever made clear that voicing or mouthing of English words is not part of ASL" (Trotter 1989:218).

 

In conclusion, there is no doubt that sociolinguistics contributes many things to the discussion of what is pure XSL.  The only question that remains is whether the question 'what is pure XSL?' can be answered.  Again, sociolinguistics can contribute to the answer.  Pure XSL cannot be determined precisely.  The definition of XSL depends on who is using it and where it is being used.  Pure XSL is just as difficult to define as standard English.  There are no speakers of standard English, only dialects of English, and this is also applicable to pure XSL.  People will judge for themselves what is pure XSL.

 

7.  First we must define what a linguistic variable is.  A "linguistic variable is an item in the structure of a language, an item that has alternate realizations, as one speaker realizes it one way and another a different way or the same speaker realizes it differently on different occasions" (Wardhaugh 1986:140).  So a linguistic variable is something in a language that varies.

 

What significance must a linguistic variable have?  Wardhaugh answers, for example, that it "is any relationship we find between these habits and either (or both) the social class to which each speaker belongs or the circumstances which bring about one pronunciation rather than the other" (Wardhaugh 1986:140).  Wardhaugh mentions three categories of significance that a linguistic variable can fall into, namely, indicators, markers, and stereotypes.  An "indicator is a linguistic variable to which little or no social import is attached.  Only a linguistically trained observer is aware of indicators" (Wardhaugh 1986:140).  "A marker does carry with it social significance... Markers may be potent carriers of social information.  People are aware of markers, and the distribution of markers is clearly related to social groupings and to styles of speaking" (Wardhaugh 1986:140).  "A stereotype is a popular and therefore conscious characterization of the speech of a particular group" (Wardhaugh 1986:140).  Wardhaugh (1986:140) mentions that "studies of variation tend therefore to focus on describing the distributions of linguistic variables which are markers; they may explain how stereotypes arise, but they merely note indicators."

 

Now that the linguistic variable has been defined and explained in terms of social significance, I think that one linguistic variable worth studying in ASL and contact sign is mouthing.  There is a range from full English mouthing, to reduced mouthing, to ASL mouthing of ASL lexical items like CHA, PAH, etc.  Deaf people who do not mouth are considered to use "pure ASL."  Hearing and hard of hearing people are excluded from the "pure ASL" group because they benefit from mouthing.  MJ Bienvenu has found that mouthing helps hearing people understand her signing better.  Lip readers, typically hard of hearing people, directly benefit from mouthing.  Mouthing is clearly a marker of who belongs to the "pure ASL" group and who belongs to the contact sign group.  Because research has favored native or "pure ASL," mouthing appears to have been neglected in sign language research.  Mouthing deserves attention, as it is seen in more than one sign language and contact sign variety in the world.

 

A linguistic variable that would not be worth studying is the set of variant forms of the fingerspelled 'E.'  There is a variant of the fingerspelled 'E' with two fingers on the thumb, another with all four fingers on the thumb, and one where the four fingers are bent but do not touch the bent thumb, called an 'open E.'  The first variant is a result of phonological assimilation from letters fingerspelled before the 'E' that have only two fingers extended, like 'V', 'U', etc.  The second variant is the standard 'E.'  The third variant, the open 'E', is a form used by new signers due to misconceptions about what the fingerspelled 'E' should look like.  There is no significant social relationship here other than distinguishing a new signer.  If someone persists in using the open 'E' handshape, that is an idiosyncrasy, not a trend within a group of fluent signers.  It does not matter whether the signers are fluent in contact sign or ASL.  What matters is the fluency itself, and that is the difference between the two linguistic variables.