While the experiments of Hubel and Wiesel discussed in section 2.2 give us some indication of the nature of cortical plasticity, the plasticity under examination is very limited. Specifically, it is plasticity within a single modality. In fact, cortical tissue can exhibit a far greater degree of plasticity under the right circumstances. In cases where one modality is entirely absent during the development of the cortex, such as congenital blindness, areas of the brain normally associated with the missing modality can adapt to process information from entirely different modalities. This phenomenon is particularly relevant to the sign-phonology argument, which claims that similarities between sign-phonology and spoken phonology are evidence of the modal independence of phonology. This section will argue that this argument is incorrect, because sign-phonology can just as easily be understood as an example of cross-modal plasticity.

2.3.1 Cross-Modal Plasticity in Occipital Areas

The conclusion of the experimental data in section 2.2 was that V1 only develops into a visual centre because it receives signals from the retina. An obvious question, then, is what happens to V1 in people who are born blind? In such cases V1 receives no signals from the retina at all, so we would not expect it to develop into a visual centre, but into something else entirely. The answer is particularly interesting to linguists. In the congenitally blind, occipital areas are involved in language processing:

(9) In congenitally blind adults, occipital areas show a response profile that is similar to classic language areas: During sentence comprehension, occipital regions are sensitive to high-level linguistic information such as the meanings of words and compositional sentence structure [...] Like classic language areas, occipital regions respond most to sentences, less to lists of words and to Jabberwocky (non-word sentences), and still less to lists of nonsense words and backwards speech [...] There is also some evidence that occipital activation is functionally relevant to language processing: rTMS to the occipital pole leads to verb-generation errors and impairs Braille reading in congenitally blind individuals. (Bedny, Pascual-Leone, Dravida, & Saxe, 2011, p. 1)

Bedny et al. (2011) used neuroimaging techniques to compare the blood flow in the occipital areas of congenitally blind and late blind adults while they performed sentence comprehension tasks. This allowed them to determine whether or not the co-opting of occipital areas for language processing is dependent on a critical period early in life. Their method and results are reviewed and discussed below.

Method

The study was conducted on a total of 42 participants. Of those, 22 were sighted, 11 were congenitally blind and 9 were late blind.

The participants heard short passages of English speech and were then required to answer true/false questions. Unfortunately, Bedny et al. do not note whether or not the participants were native speakers of English.

As a control condition, participants heard two passages of speech played backwards, a long passage and a short passage. The participants then had to determine whether the short passage was a part of the longer passage or a novel string. The backwards speech is assumed to lack any English syntactic, semantic or phonological information, and therefore should not trigger any response in the language processing areas of the brain.

Functional MRI (fMRI) scans of the whole brain were taken while the participants performed the tasks. This allowed the experimenters to monitor the flow of blood in the occipital areas.
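To make the logic of the comparison concrete, the sketch below shows, in Python, how a per-participant "language response" in an occipital region of interest might be quantified as the difference between the response to forward speech and the response to backwards speech. The trial estimates and condition labels are invented for illustration; this is not Bedny et al.'s analysis pipeline, only a schematic of the contrast it relies on.

```python
import numpy as np

# Hypothetical per-trial response estimates (e.g. GLM betas) from an
# occipital region of interest for a single participant. The numbers are
# invented for illustration; they are not data from Bedny et al. (2011).
trial_betas = np.array([1.2, 0.9, 1.1, 0.3, 0.2, 0.4])
conditions = np.array(["sentences", "sentences", "sentences",
                       "backwards", "backwards", "backwards"])

# The quantity of interest is the difference between the mean response to
# forward (linguistic) speech and to the backwards-speech control.
language_contrast = (trial_betas[conditions == "sentences"].mean()
                     - trial_betas[conditions == "backwards"].mean())

print(f"sentences minus backwards contrast: {language_contrast:.2f}")
```

A positive contrast indicates that the region responds more strongly to linguistic material than to the non-linguistic control.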

Results

All three groups had comparable accuracy when performing the tasks. The linguistic tasks were accomplished with 82-85% accuracy, while the backwards speech tasks were accomplished with 52-56% accuracy, close to chance for a binary judgement (see Bedny et al., 2011, Table 2).

The congenitally blind participants had a notably higher response in V1 during the story and questions portion of the trial than during the backwards speech portion. The difference in response was greater in the left hemisphere than in the right. No such difference was recorded in the V1 of late blind and sighted participants, who exhibited a similar response in V1 during all portions of the trial, and showed no evidence of lateralization.
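As a rough illustration of how such a group difference could be assessed, the sketch below compares hypothetical per-participant V1 contrasts (sentences minus backwards speech) between a congenitally blind group and a sighted/late blind group, and checks lateralization within the congenitally blind group by comparing the two hemispheres. The values are invented and the t-tests are purely illustrative; they are not the statistics reported by Bedny et al. (2011).

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant V1 contrasts (sentences minus backwards
# speech); all values are invented for illustration only.
blind_left = np.array([0.9, 1.1, 0.8, 1.0, 1.2])     # congenitally blind, left V1
blind_right = np.array([0.5, 0.6, 0.4, 0.7, 0.5])    # congenitally blind, right V1
control_left = np.array([0.1, -0.1, 0.0, 0.2, 0.0])  # sighted/late blind, left V1

# Is the left-hemisphere contrast larger in the congenitally blind group
# than in the sighted/late blind group?
t_group, p_group = stats.ttest_ind(blind_left, control_left)

# Is the contrast lateralized (left > right) within the congenitally blind?
t_lat, p_lat = stats.ttest_rel(blind_left, blind_right)

print(f"group difference: t = {t_group:.2f}, p = {p_group:.4f}")
print(f"lateralization:   t = {t_lat:.2f}, p = {p_lat:.4f}")
```

The pattern described above corresponds to a reliably positive contrast in the congenitally blind group, a contrast near zero in the other groups, and a larger contrast in the left hemisphere than in the right.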

In classic language areas, congenitally blind participants showed a response similar to that of late blind and sighted participants.

Discussion

The high response in V1 of congenitally blind participants during linguistic tasks is strong evidence that V1 is involved in language processing in the congenitally blind. This tells us that, in the absence of signals from the retina, V1 can be co-opted by non-visual modalities. The fact that there is no similar response in the late blind participants implies that this co-opting can only take place during a critical period, after which cross-modal plasticity is presumably not possible.

Clearly, this is a much greater degree of plasticity than was observed in section 2.2. However, it is entirely in keeping with Wiesel and Hubel’s findings that the development of V1 is not determined by a genetic wiring diagram but rather by signals received from other areas of the central nervous system.

2.3.2 Relevance to Sign-Phonology

The similarities between sign-phonology and spoken phonology have been cited as evidence that phonology is modally independent. If the same phonology can work across multiple modalities, it is argued, then the phonology must be modally independent. However, the existence of cross-modal plasticity presents a serious problem for this argument. If a region of the brain so obviously modally-dependent as the visual system can be co-opted by other modalities in the congenitally blind, then it would seem entirely plausible that a modally-dependent phonology could be co-opted by other modalities in the deaf.

But is there any evidence that sign-phonology is an instance of cross-modal plasticity? Bedny et al. (2011) discovered a clear difference in the V1 response between the congenitally blind and the late blind/sighted, which presents a means of determining when an area of the brain has been co-opted cross-modally.

In the congenitally blind, V1 was active during language processing, and thus this area had been co-opted. In the late blind/sighted, no such activation was recorded, and thus V1 had not been co-opted cross-modally.

By way of analogy, then, if sign-phonology is an instance of cross-modal plasticity, we should see a clear difference between the activation patterns of congenitally deaf signers and those of hearing signers. Ideally, we should see a difference in those areas of the brain normally associated with phonology.

Such differences may well have already been discovered. MacSweeney et al. (2002) used fMRI to compare the responses of congenitally deaf signers to those of hearing signers while they performed sentence acceptability tasks in British Sign Language. They discovered a greater response in the superior temporal gyri (STG) of congenitally deaf signers compared to hearing signers. This would seem to indicate some degree of cross-modal plasticity in the absence of auditory input.

The remaining question is whether or not the STG is associated with phonological processing. Certainly, this area is normally associated with auditory processing. Hickok and Poeppel (2007) propose that the STG is involved in spectrotemporal analysis, which is clearly not a form of modally-independent processing.

However, Hickok and Poeppel propose that phonological level processing itself takes place in the nearby superior temporal sulcus (STS).

Whatever the exact distribution of labour between the STG and the STS, there is a clear difference in the activation patterns of congenitally deaf signers and hearing signers, and this difference is localized to areas of the brain associated with modally-dependent aspects of language. It was precisely the difference between the congenitally blind and the late blind that formed the evidence for cross-modal plasticity in Bedny et al. (2011). Thus there is undeniably some degree of cross-modal plasticity in congenitally deaf signers. The remaining question is the extent to which this plasticity correlates with the observable differences and similarities between sign-phonology and spoken phonology. Whatever the answer, it is currently untenable to claim that any similarities between sign-phonology and spoken phonology must be evidence of modal independence in phonology, since cross-modal plasticity is at least an equally plausible hypothesis.

A Note on Hearing Signers

Despite lacking the activation in the STG indicative of cross-modal plasticity, hearing individuals are nonetheless fully capable of acquiring a sign-phonological grammar. Does this refute the claim that the similarities between sign-phonology and spoken phonology are an instance of cross-modal plasticity? Not necessarily. Firstly, it should be noted that hearing individuals are on average less competent at signing than deaf individuals. MacSweeney et al. report that the hearing signers performed the sentence acceptability task less accurately than deaf signers.2

Additionally, it should be pointed out that hearing signers have succeeded in learning a language created, maintained and propagated chiefly by deaf individuals. Humans are nothing if not expert pattern learners. It is entirely conceivable that hearing individuals learning sign may be able to overcome a handicap stemming from the lack of STG recruitment by means of some alternative mechanism, for example, general-purpose learning mechanisms or plasticity elsewhere in the brain driven by exposure to PLD.

2MacSweeney et al. point out that this disparity may be in part due to the fact that hearing children of deaf parents may interact with their parents differently to deaf children of deaf parents. Additionally, hearing signers may use sign language less frequently in adult life compared to deaf signers. Exactly how important these factors might be remains an open question.

Because these languages are created by deaf signers, the structure of sign-phonology should be regarded as indicative of the linguistic capacity of the congenitally deaf. There is no evidence that a community of hearing individuals could spontaneously create and use a sign language with the complexity and richness of British Sign Language. Indeed, in this case, absence of evidence may well be evidence of absence, given that we have the recorded histories of hearing communities going back literally thousands of years, and have not one instance of a hearing community ever developing a full sign language. While functional arguments have been offered to explain the prevalence of spoken language over sign, such as the ability to communicate in the dark, these arguments do not hold up to much scrutiny. It is just as easy to think up functional advantages for sign language, such as the ability to communicate in silence (Fitch, 2010, p. 442).

Moreover, if modal independence were true, then there would be no clear reason for people to restrict themselves to either spoken language or sign language.

If the whole of grammar were modally independent, then using the same grammar in multiple modalities should be a comparatively simple task. There could easily be a signed and a spoken version of English, which speakers could switch between at will. And yet all natural languages, both signed and spoken, appear to be confined to a single modality.3

We can follow this line of reasoning even further. It is quite easy to conceive of a language which utilises multiple modalities simultaneously. For example, lexical items could be expressed with speech, while tense and aspect could be expressed using sign. As long as the grammar itself were modally independent, nothing would seem to forbid this. And yet nothing like this appears to exist. Despite the fact that people instinctively gesticulate when speaking, this gesticulation never seems to exhibit any kind of grammar comparable to sign language. This should be deeply puzzling to anyone advocating a modally-independent phonology.

If we assume a modally-dependent phonology, however, the puzzle disappears. Even assuming a modally-independent syntax/semantics,4 if the phonological component of grammar is modally-dependent, then there would be no easy way of expressing the same language in different modalities, or of using multiple modalities in the same language. A different modality would entail a different phonological grammar, an entirely different proposition from simply transducing the same grammar between different modalities.