PROGRAMME

| Monday, July 4th | Tuesday, July 5th | Wednesday, July 6th |
| About Complexity | Phonology & cognitive processing | Phonetics & phonology I |
| Björn Lindblom | René Carré | John Ohala |
| Christophe Coupé, Egidio Marsico | Sharon Peperkamp | |
| Ian Maddieson | Noël Nguyen | Gérard Philippson |
| Christopher Kello | Nathalie Bedoin & Sonia Krifi | Christophe Coupé, Egidio Marsico |
| Didier Demolin | Willy Serniclaes & Christian Geng | Adamantios Gafos |
| Emergent properties | Acquisition | Phonetics & phonology II |
| Didier Demolin | Michael Studdert-Kennedy | Ian Maddieson |
| Björn Lindblom | Nathalie Vallée | Abby Cohn |
| René Carré | Sophie Kern | Ioana Chitoran |
| Louis Goldstein | Yvan Rose | John Ohala |

DINNER at "Les Terrasses de la Tour Rose"
ABSTRACTS by order of presentation
MONDAY MORNING
Ian MADDIESON, "Interrelationships between measures of phonological complexity"
University of California, Berkeley
A number of measures indexing phonological complexity, including properties of consonant and vowel systems, syllable structure and tone and stress, have been calculated for a large sample of genetically and geographically diverse languages, approaching one thousand. The primary issue in this paper is whether these measures of phonological complexity show a tendency to be positively or negatively correlated with each other. Linguists generally believe all languages are equally complex, implying that complexity of different types should be negatively correlated. Global analyses of the current sample more frequently show no correlation or positive correlation between the measures, rather than negative correlations.
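To make the kind of test involved concrete, here is a minimal sketch in Python of pairwise rank correlations between per-language complexity indices. The measure names and values are hypothetical placeholders, not the sample or the exact statistics used in the talk.

# Sketch: pairwise rank correlations between per-language complexity measures.
# Measure names and values are hypothetical, not the actual sample described above.
from itertools import combinations
from scipy.stats import spearmanr

languages = {
    "lang_A": {"consonants": 22, "vowel_qualities": 5, "max_onset": 2, "tones": 0},
    "lang_B": {"consonants": 45, "vowel_qualities": 9, "max_onset": 3, "tones": 4},
    "lang_C": {"consonants": 13, "vowel_qualities": 5, "max_onset": 1, "tones": 2},
    "lang_D": {"consonants": 30, "vowel_qualities": 7, "max_onset": 2, "tones": 0},
    "lang_E": {"consonants": 18, "vowel_qualities": 3, "max_onset": 1, "tones": 3},
}

measures = sorted(next(iter(languages.values())))
for m1, m2 in combinations(measures, 2):
    x = [languages[lg][m1] for lg in languages]
    y = [languages[lg][m2] for lg in languages]
    rho, p = spearmanr(x, y)
    # Negative rho would support complexity trade-offs ("all languages equally
    # complex"); positive or near-zero rho matches the global results reported above.
    print(f"{m1:15s} vs {m2:15s}  rho={rho:+.2f}  p={p:.2f}")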
Christopher KELLO, "Self-organization and fractal structure in phonological systems and dynamics"
Krasnow Institute for Advanced Study
I will talk about two recent projects, one experimental and one computational, in the context of the following theoretical framework.
- Phonological structures emerge from interactions among language users. These interactions are constrained, but never determined, by properties of human organisms and their environments. Interactions over longer time scales give rise to languages, shorter time scales give rise to dialects, and even shorter time scales give rise to transient modes of speaking. Different constraints may apply at different time scales.
- Phonological structures emerge from interactions within language users. Processes and representations of phonology develop over the longer time scales of speech acquisition and maturation, and the shorter time scales of speech perception and production. Again, different constraints may apply at different time scales.
- The constraints are idiosyncratic from the perspective of phonology, which makes any given phonological structure idiosyncratic. But there is reason to believe that the dynamics of interactions across and within language users are governed by principles that apply not only across the linguistic and cognitive sciences, but across the biological sciences at large. Evidence is found in the fractal structure and dynamics that have so often been observed in biological systems. Similar evidence is mounting for linguistic and cognitive systems.
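The fractal dynamics referred to here are commonly quantified by estimating how spectral power scales with frequency (1/f-type scaling). The Python sketch below shows one standard estimate, the slope of a log-log power spectrum, applied to synthetic data; it illustrates the general method only, not the analyses of the two projects.

# Sketch: estimating a 1/f-type scaling exponent from a time series via the
# slope of the log-log power spectrum. Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def synth_one_over_f(n, alpha=1.0):
    """Generate a signal whose power spectrum falls off roughly as 1/f^alpha."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amps = np.zeros_like(freqs)
    amps[1:] = freqs[1:] ** (-alpha / 2.0)          # amplitude ~ f^(-alpha/2)
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    return np.fft.irfft(amps * np.exp(1j * phases), n=n)

def spectral_exponent(x):
    """Fit log power vs. log frequency; return the (sign-flipped) slope."""
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0)[1:]
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

series = synth_one_over_f(2 ** 14, alpha=1.0)
print(f"estimated scaling exponent ~= {spectral_exponent(series):.2f}")  # roughly 1.0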
Didier DEMOLIN, "Control and regulation in phonological systems"
Universidade de Sao Paulo
This paper examines the role of some speech production and perception mechanisms in order to explain the basis of phonetic and phonological universals. Although there are obviously many automatic processes (due to the bio-physical constraints on the vocal tract) that explain how speech production is regulated, there are more controlled phenomena than is usually assumed. Kingston and Diehl (1994) (henceforth K&D), in defining their view of phonetic knowledge, suggest that many phenomena are auditorily driven and state that far more articulations are controlled by speakers than is usually assumed. This phonetic knowledge is intimately connected to the phonology of the language, because it is by means of this knowledge that phonological strings, that is phonological representations as well as phonetic constraints, are recognized in the acoustic signal. Examples from the world’s languages will illustrate and test K&D’s hypothesis.
MONDAY AFTERNOON
Björn LINDBLOM, "Deriving Language from non-Language"
Stockholm University & University of Texas at Austin
The present meeting provides a welcome opportunity to ask: “Where does phonetic structure come from?” – a broad question that ultimately demands a developmental and evolutionary answer. For interesting recent work bearing on this issue see e.g., Carré 2004, Clements 2003, Flemming 2005, Goldstein 2003, Nowak & Krakauer 1999, Oudeyer 2003, Stevens 2003, Studdert-Kennedy 2002, Zuidema & de Boer 2005. My contribution will be to propose that a key factor promoting combinatorial phonetic structure is a motor universal having to do with the way the brain represents voluntary movement. I shall suggest that, during ontogeny and in the shaping of the world’s sound systems, this universal interacts with expressive needs and with constraints such as distinctiveness (speech signals as adaptations to robust and noise-resistant perceptual processing) and energetics (economy of effort). In a sense the suggestion offers nothing new. It merely revives a classical phonetic concept by highlighting some of its non-speech parallels and precursors.
To illustrate the proposal, a simple tutorial algorithm will be presented. It sequentially selects holistic phonetic shapes from a larger set of possible patterns (a continuous search space) according to a probability score. The probability score of any given element is a function of its ‘articulatory cost’, ‘input frequency’ and ‘imitation success rate’. Initially the output of the model is determined by articulatory costs but, in the course of continued ambient exposure and sustained imitative efforts, the probability scores are recalibrated and the output is gradually steered in the direction of the input patterns. Recalibration of scores occurs in two situations. When a given pattern is heard, its probability of being produced/learned is increased. Acquiring a particular item is thus merely a sort of copying of holistic patterns. However, it is significantly constrained (delayed) by the item’s articulatory cost score. The other condition is met when a pattern is acceptably imitated - that is, produced in an adult-like manner both articulatorily and acoustically. “Adult-like” refers to movements produced by a general-purpose (not limited to speech) mechanism (GPM) that derives ‘least action’ trajectories between arbitrarily specified target positions. Since this last assumption holds the key to the model’s combinatorial re-use of the continuously distributed information of the search space, a brief motivation is offered in the next paragraph.
I shall assume that speech movements are represented as overdamped oscillations produced by interpolating movement paths between spatial targets. The interpolation mechanism generates a smooth movement between any two points in the action space and does so in a way that exhibits ‘motor equivalence’. This organization is suggested by the rich literature on non-speech movement (for reviews see Kawato 1999, Todorov 2004). For instance, in modeling point-to-point arm movements (reaching), investigators tend to conceptualize the behavior in terms of endpoint control plus trajectory formation that works either by means of stiffness tuning (EP models), or by optimization (of e.g., ‘efficiency’ or ‘precision’ as in optimal control models).
In this large body of work on non-speech, there is no shortage of approaches but a common trend appears to be that target specification is pervasive and primary, whereas movement paths are determined by the response characteristics of the motor system (for further discussion see Lindblom et al in press).
The preceding paragraph helps clarify the fate of successfully imitated patterns. This is how the model works. Imitation-induced recalibration is applied, not to the pattern as a whole, but only to its targets. The GPM is not affected by the procedure. The updating is thus independent of context. An example: Suppose that the algorithm learns a motion path QX and that this pattern is the first to use Q. If the ambient input should require mastering also QY and QZ, no further learning of Q is necessary, no matter what the articulatory cost of Q is or what the degree of exposure to QY and QZ has been, because, by definition, once a target has been established in a single case, the recalibration instantly affects all motions involving the participation of Q. As a result, the probability of re-using Q in novel contexts is considerably enhanced. [A similar logic was used in the ‘nepotism game’ (Lindblom 2000) to illustrate how combinatorial patterning could come about for functional reasons].
Implicit in this reasoning is the following testable developmental hypothesis: Phonemically coded input patterns will be acquired faster and more easily than systems of holistic signals with zero re-use (for experimental work bearing on this idea see e.g., Schwartz & Leonard (1982)). [Question: Is the “formal” property of combinatorial phonetic structure in fact an adaptation to learning?] The model will be illustrated using a set of syllable-like phonetic patterns. I will argue that, if we acknowledge the importance of gestures for speech perception but give targets a primary role in speech motor control, we may be able to achieve two things: (i) get some further insights into the issue of “where phonetic structure comes from”; and (ii) begin to resolve the long-standing “gestures vs targets” issue of experimental phonetics.
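A minimal Python sketch of the selection-and-recalibration loop described above. All pattern names, costs and functional forms are hypothetical; only the overall logic follows the text: probability scores combine articulatory cost, input frequency and imitation success, and successful imitation recalibrates targets rather than whole patterns, so every pattern sharing a target benefits at once.

# Sketch of the tutorial algorithm described above. All numbers and functional
# forms are hypothetical; the point is the logic: (i) probability scores combine
# articulatory cost, input frequency and imitation success; (ii) exposure raises a
# pattern's score; (iii) successful imitation recalibrates TARGETS, so every
# pattern sharing a target benefits at once (combinatorial re-use).
import random

random.seed(1)

# Hypothetical search space: each pattern is a pair of targets with a cost.
patterns = {
    "QX": {"targets": ("Q", "X"), "cost": 0.8},
    "QY": {"targets": ("Q", "Y"), "cost": 0.9},
    "QZ": {"targets": ("Q", "Z"), "cost": 0.7},
    "RS": {"targets": ("R", "S"), "cost": 0.3},
}
input_freq = {"QX": 5, "QY": 3, "QZ": 2, "RS": 1}      # cumulative ambient exposure
target_skill = {t: 0.0 for p in patterns.values() for t in p["targets"]}

def score(name):
    """Probability score: high input frequency and mastered targets help,
    articulatory cost hinders. Functional form is an arbitrary placeholder."""
    p = patterns[name]
    mastery = sum(target_skill[t] for t in p["targets"]) / len(p["targets"])
    return (1 + input_freq[name]) * (0.2 + mastery) / (1 + p["cost"])

for step in range(200):
    names = list(patterns)
    chosen = random.choices(names, weights=[score(n) for n in names])[0]
    # Imitation succeeds with a probability that decreases with articulatory cost.
    if random.random() < 1.0 - 0.5 * patterns[chosen]["cost"]:
        for t in patterns[chosen]["targets"]:
            target_skill[t] = min(1.0, target_skill[t] + 0.1)   # recalibrate targets

# Once Q is mastered via QX, the scores of QY and QZ rise "for free".
for n in patterns:
    print(n, round(score(n), 2))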
Carré R (2004): “On the phonetic properties of an acoustic tube: Vowel and consonant systems”, manuscript, Département Traitement du Signal et des Images, ENST, Paris.
Clements G N (2003): “Testing feature economy”, Proceedings of the 15th International Congress of Phonetic Sciences, Barcelona, Spain.
Flemming E (in press): “A phonetically-based model of phonological vowel reduction”, submitted to J Phonetics.
Goldstein L (2003): “Emergence of discrete gestures”, Proceedings of the 15th International Congress of Phonetic Sciences, Barcelona, Spain.
Kawato M (1999): “Internal models for motor control and trajectory planning.” Current Opinion in Neurobiology 9:718-727.
Lindblom B (2000): “Developmental origins of adult phonology: The interplay between phonetic emergents and evolutionary adaptations”, Phonetica 57(2-4):297-314.
Lindblom B, Mauk C & Moon S-J (in press): “Dynamic specification in the production of speech and sign”, in Meyer G & Divenyi P (eds): Dynamics of speech production and perception, NATO ASI series, (IOP Press, Amsterdam).
Lindblom B (2004): “The organization of speech movements: Specification of units and modes of control”, Slifka J et al (eds): From sound to sense: 50+ years of discoveries in speech communication, Research Laboratory of Electronics, MIT, Cambridge, Mass
Nowak M A & Krakauer D C (1999): “The evolution of language”, Proc Nat Acad Sci, 96:8028-8033.
Oudeyer P-Y (2003): L’auto-organisation de la parole, doctoral dissertation, Sony CSL, Paris.
Perrier P, Ostry D J et al (1996): “The equilibrium point hypothesis and its application to speech motor control.” J Speech & Hearing Res 39:365-378.
Schwartz R G & Leonard L B (1982): “Do children pick and choose? An examination of phonological selection and avoidance in early lexical acquisition”, J Child Language 9:319-336.
Stevens K N (1989): “On the quantal nature of speech,” J. Phonetics 17:3-46.
Stevens K N (2003): “Acoustic and perceptual evidence for universal phonological features”, Proceedings of the 15th International Congress of Phonetic Sciences, Barcelona, Spain.
Studdert-Kennedy M (2002): “How did language go discrete?” 4th International Conference on the Evolution of Language, Harvard University.
Todorov E (2004): “Optimality principles in sensorimotor control”, Nature Neuroscience 7:907-915.
Zuidema W & de Boer B (2005): “The evolution of combinatorial phonology”, manuscript.
René CARRÉ, "Production and perception of vowels without acoustic static targets"
ENST, Paris (collaboration with Pierre Divenyi, Egidio Marsico, ...)
It is generally taken for granted that vowels can be produced with static articulatory positions, so that they can be represented by points in the acoustic plane. But the characteristics of these vowels vary with speaker, consonantal environment (co-articulation) and production rate (reduction phenomena). The acoustic characteristics of the vowels of a language are therefore averages over broad categories (male, female, and child voices) or ranges (which unfortunately overlap). These characteristics are generally obtained from the analysis of laboratory productions. The resulting standard averages are ‘targets’ to be reached, and it is supposed that they are represented as such in the speech production and perception system. In short, vowels are considered from a static point of view. At this level, several questions can be raised: how are vowel representations set up if vowel realizations rarely reach the targets? Is the representation the same from one person to another? How are vowels with different acoustic characteristics perceived: by using the context? by normalization? The results of the numerous studies addressing these questions are very often partial and contradictory, in a word disappointing. They underline the importance of dynamics in vowel perception (Strange, 1989), but they have not led to a new, simple and explanatory theory of the observed phenomena.
To better understand these speech production and perception phenomena, an original approach based on the acoustic characteristics of a tube 17 cm in length is used (Carré, 2004; Carré, et al., 2004). Suppose an initial area function and a goal: to obtain a maximal acoustic space with minimal deformation of the tube’s area function. An algorithm pursuing this goal operates in a Darwinian fashion: new shapes of the area function are selected according to the criterion of minimal deformation leading to maximal acoustic variation. Within these evolutionary dynamics, the deformations of the tube are not performed to reach targets, which are unknown during the process, but to increase an acoustic contrast, producing displacements along specific directions and defining trajectories in the acoustic plane. The displacements stop at the limits imposed by the acoustic properties of the tube.
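The selection loop can be sketched as follows in Python. The acoustic mapping used here is a crude placeholder standing in for a real computation of formants from the area function, and all constants (number of sections, step sizes, area limits) are illustrative rather than taken from the model.

# Sketch of the deformation-selection loop: among small random deformations of the
# area function, keep the one giving the largest acoustic displacement per unit of
# articulatory deformation. The acoustic mapping below is a crude placeholder for a
# real tube-to-formant computation; constants and section count are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_SECTIONS = 8                      # 17 cm tube split into 8 sections (illustrative)

def acoustic_point(area):
    """Placeholder 2-D 'acoustic' projection of the area function.
    Stands in for (F1, F2) computed from a proper acoustic model."""
    weights1 = np.linspace(-1.0, 1.0, N_SECTIONS)          # front/back contrast
    weights2 = np.cos(np.linspace(0, 2 * np.pi, N_SECTIONS))
    a = np.log(area)
    return np.array([weights1 @ a, weights2 @ a])

area = np.ones(N_SECTIONS) * 4.0    # start from a uniform 4 cm^2 tube
trajectory = [acoustic_point(area)]

for step in range(50):
    best, best_gain = None, 0.0
    for _ in range(40):                                    # candidate deformations
        delta = rng.normal(scale=0.1, size=N_SECTIONS)
        candidate = np.clip(area * np.exp(delta), 0.2, 12.0)   # keep areas plausible
        gain = (np.linalg.norm(acoustic_point(candidate) - trajectory[-1])
                / (np.linalg.norm(delta) + 1e-9))
        if gain > best_gain:
            best, best_gain = candidate, gain
    area = best
    trajectory.append(acoustic_point(area))

print("net acoustic displacement:", np.round(trajectory[-1] - trajectory[0], 2))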
The results obtained lead: a) to the definition of an acoustic space of maximal possible variation, which is in fact the vowel triangle; b) to the identification of privileged places of articulation which are, in fact, the places of articulation of vowels and consonants; c) to the structuring of the acoustic space into privileged trajectories which correspond, in fact, to [ai], [au], [iu], [ay], and [aµ]. The deductive approach developed here does not yield vowels as end-products but trajectories, which can be used to successfully predict the vowel systems observed in the world’s languages (Carré, submitted). The deformations of the area function are simple and few in number. They can be obtained by two gestural deformations corresponding to specific tongue gestures and lip gestures. Following this approach, the goal in speech production is not to reach a static acoustic target but to select a direction and a displacement rate in the acoustic plane in order to achieve some ‘vocal effect’. In production, following the traditional view in which vowels are represented by static targets, starting for example from [a], we aim to reach the target [i]. In perception, the acoustic characteristics of [i] are used for identification. From our dynamic point of view, in production we first choose the specific direction corresponding to the trajectory [ai], then the transition rate to select among [ɛ, e, i]. In perception, all the necessary and sufficient information to detect [i] is present at the very beginning of the transition and all along the transition. This approach explains the results obtained with the ‘silent center’ paradigm in perception tests (Strange, 1983; see also Divenyi, et al., 1995) as well as production data, especially the more or less constant transition duration observed in syllabic production (Kent and Moll, 1969) and across different production rates (Gay, 1978).
In the paper, experiments on V1V2 production and perception will be described to test our hypothesis, and several other issues will be revisited in the light of our predictions. Theoretical consequences will be discussed, especially with regard to vowel reduction, hyper- and hypo-speech, normalization, perceptual overshoot… Finally, a unified, fully dynamic approach to vowel and consonant representation on the one hand, and to phonetic substance and linguistic form on the other, will be proposed and discussed.
Carré, R. (submitted). "On the phonetic characteristics of an acoustic tube: Vowel and consonant systems."
Carré, R. (2004). "From acoustic tube to speech production," Speech Communication 42, 227-240.
Carré, R., Serniclaes, W. and Marsico, E. (2004). "Production and perception of vowel categories," Proc. of the From Sound to Sense Conference, (MIT, Cambridge).
Divenyi, P., Lindblom, B. and Carré, R. (1995). "The role of transition velocity in the perception of V1V2 complexes," Proceedings of the XIIIth Int. Congress of Phonetic Sciences, (Stockholm), pp. 258-261.
Louis GOLDSTEIN, "Emergence of syllable structure from a coupled oscillator model of intergestural timing"
Yale University and Haskins Laboratories
It is possible to view speech as a combinatorial system in which a small set of atomic units of action (gestures) are combined into a large number of distinct patterns. It is important, then, to identify the "glue" that provides temporal cohesion among the units, keeping them in the appropriate pattern. A model of intergestural timing at the level of the syllable has been developed (Nam & Saltzman, 2003; Goldstein, Byrd & Saltzman, in press) in which the glue is provided by dynamical coupling. Specifically, each gesture is associated with a planning oscillator, and the oscillators are coupled pair-wise to one another. The output of the planning system is a set of limit-cycle oscillations with stabilized relative phases, which trigger production of the appropriate gestures at the appropriate times. Systems of coupled oscillators are known to harbor a small number of stable modes, and these stable modes (in-phase and anti-phase for human movements) are hypothesized (Browman & Goldstein, 2000) to be the basis of syllable structure: C-V coordination exploits the in-phase mode, and V-C and C-C exploit the anti-phase mode. This hypothesis can provide an account of a range of macroscopic and microscopic properties of syllable structure, which will be discussed: the relatively free combinatoriality exhibited between onsets and rimes in languages, the unmarkedness of CV syllables (including their early acquisition), the (typically) weightless status of onsets, and differences in the stability of timing exhibited by onset and coda clusters.
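A minimal Python sketch of the kind of coupled-phase-oscillator planning dynamics described here (not the Nam & Saltzman implementation): each pair of planning oscillators is pulled toward a target relative phase, 0 for in-phase C-V coordination and pi for anti-phase V-C, and the relative phases settle onto those values.

# Sketch of pairwise-coupled planning oscillators settling into target relative
# phases (0 = in-phase, pi = anti-phase). Coupling form and constants are generic
# phase-oscillator choices, not the specific Nam & Saltzman (2003) model.
import math

# Three gestures of a CVC-like plan: onset C, V, coda C.
phases = {"C_onset": 2.0, "V": 0.3, "C_coda": 4.5}      # arbitrary initial phases
omega = 2 * math.pi * 2.0                               # common natural frequency (2 Hz)

# Target relative phases: C-V in-phase, V-C (coda) anti-phase.
targets = {("C_onset", "V"): 0.0, ("V", "C_coda"): math.pi}
K, DT = 4.0, 0.001

for _ in range(5000):                                   # 5 s of planning time
    dphi = {g: omega for g in phases}
    for (a, b), psi in targets.items():
        # Pull the pair toward relative phase psi (symmetric coupling).
        force = -K * math.sin(phases[a] - phases[b] - psi)
        dphi[a] += force
        dphi[b] -= force
    for g in phases:
        phases[g] += DT * dphi[g]

def rel(a, b):
    """Relative phase wrapped into (-pi, pi]."""
    return math.atan2(math.sin(phases[a] - phases[b]), math.cos(phases[a] - phases[b]))

print("C_onset - V relative phase:", round(rel("C_onset", "V"), 2))   # ~ 0.0
print("V - C_coda relative phase:", round(rel("V", "C_coda"), 2))     # ~ +/-3.14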
Browman, C. P., & Goldstein, L. (2000). Competing constraints on intergestural coordination and self-organization of phonological structures. Bulletin de la Communication Parlée, 5, 25-34.
Goldstein, L., Byrd, D., and Saltzman, E. (to appear) The role of vocal tract gestural action units in understanding the evolution of phonology. In M. Arbib (Ed.) From Action to Language: The Mirror Neuron System. Cambridge: Cambridge University Press.
TUESDAY MORNING
Sharon PEPERKAMP, "Statistical inferences and linguistic knowledge in early phonological acquisition"
Université Paris 8 & Laboratoire de Sciences Cognitives et Psycholinguistique
Recent work has shown that both adults and infants can use statistical information during phonological acquisition. Concentrating on the acquisition of allophonic rules, I argue that acquisition is not purely statistical and that linguistic knowledge is exploited as well.
In the first half of the talk, I show results from a simulation of an algorithm for the acquisition of allophonic rules on phonetically-transcribed child-directed speech. This algorithm exploits the fact that segments that are related by an allophonic rule have complementary distributions. It is shown that statistics based on the distribution of segments and their local context give a robust indicator of complementary distributions, but are not sufficient to distinguish between real and spurious allophonic distributions. It is only by adding linguistic constraints on the form of possible allophonic rules that spurious rules are discarded.
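To make the distributional part concrete, here is a Python sketch: each pair of segments is scored by how dissimilar their contexts are (a symmetrised Kullback-Leibler divergence over right-hand contexts is one plausible statistic, not necessarily the one used in this work), and only pairs passing a linguistic filter are kept as candidate allophones. Corpus, contexts and filter are toy placeholders.

# Sketch: detecting candidate allophone pairs from complementary distributions.
# Toy transcriptions and filter; the statistic is one plausible choice only.
import math
from collections import Counter, defaultdict
from itertools import combinations

corpus = ["Si", "sa", "su", "Sin", "sun", "san", "niSi", "nasa"]   # "S" only before i

def right_contexts(words):
    """Count the segment (or word boundary '#') following each segment."""
    ctx = defaultdict(Counter)
    for w in words:
        for i, seg in enumerate(w):
            nxt = w[i + 1] if i + 1 < len(w) else "#"
            ctx[seg][nxt] += 1
    return ctx

def sym_kl(p_counts, q_counts, alpha=0.5):
    """Symmetrised KL divergence between two smoothed context distributions."""
    keys = set(p_counts) | set(q_counts)
    p = {k: p_counts[k] + alpha for k in keys}
    q = {k: q_counts[k] + alpha for k in keys}
    zp, zq = sum(p.values()), sum(q.values())
    kl_pq = sum((p[k] / zp) * math.log((p[k] / zp) / (q[k] / zq)) for k in keys)
    kl_qp = sum((q[k] / zq) * math.log((q[k] / zq) / (p[k] / zp)) for k in keys)
    return kl_pq + kl_qp

def plausible_allophones(a, b):
    """Stand-in for the constraint on possible allophonic rules (e.g. the two
    segments must be phonetically close); here simply a hand-listed set."""
    return {a, b} in [{"s", "S"}, {"t", "d"}]

ctx = right_contexts(corpus)
for a, b in combinations(sorted(ctx), 2):
    score = sym_kl(ctx[a], ctx[b])
    flag = "candidate allophones" if plausible_allophones(a, b) and score > 1.0 else ""
    print(f"{a}/{b}: divergence = {score:.2f} {flag}")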
In the second half of the talk, I show results from an experiment using an artificial language-learning paradigm with a production task. It is shown that French adults can learn phonetically natural allophonic rules (such as intervocalic stop voicing) but fail to learn unnatural rules that arbitrarily link surface segments to underlying phonemes. These results suggest that phonetic naturalness plays a role in phonological acquisition. The effect of naturalness, though, can be modulated as a function of the task that participants have to perform. In particular, when tested with a perception task, participants can learn both natural and unnatural rules.
I will discuss these findings in light of theories of phonological processing and acquisition.
Noël NGUYEN, "The dynamical approach to speech perception: from fine phonetic detail to abstract phonological categories"
Laboratoire Parole et Langage, CNRS & Univ. Provence
Much attention has been devoted recently to the potential role of phonetic detail in the perception and understanding of speech. On the one hand, because speech perception is resistant to noise and to the huge intra- and inter-speaker variability of the acoustic signal, it has often been hypothesized that lexical access involves mapping speech onto a set of context-independent abstract features. On the other hand, recent research suggests that listeners are sensitive to fine phonetic detail and retain many if not all the encountered phonetic variants of a word in memory. In this talk, the opposition between abstractionist and exemplar-based models of speech perception will be discussed in the light of a number of recent studies that examined the perceptual relevance of phonetic detail both in English and French. I will also offer new empirical evidence for a non-linear dynamical model of speech perception (Tuller, Case, Ding & Kelso, 1994) in which stable perceptual categories are associated with attractors of a potential function and are gradually built up by the listener in the identification of speech sounds.
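For reference, the Tuller, Case, Ding & Kelso (1994) model can be summarized by gradient dynamics on a potential of roughly the following form, where x is the perceptual state and k a control parameter reflecting the acoustic input and its context; percepts correspond to minima of V, and category switching (with hysteresis) occurs when one minimum disappears as k is scanned:

V(x) = k x - \tfrac{1}{2} x^{2} + \tfrac{1}{4} x^{4},
\qquad
\frac{dx}{dt} = -\frac{\partial V}{\partial x} = -k + x - x^{3}.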
Nathalie BEDOIN & Sonia KRIFI, "The hierarchy of phonetic feature categories in printed syllable matching in adult skilled readers, normally developing young readers, and dyslexic children"
Laboratoire EMC / DDL, Université Lyon 2
Skilled readers are sensitive to phonetic features shared by consonants within one printed stimulus and between successive stimuli (Bedoin, 2003; Krifi, Bedoin & Mérigot, 2003). Voicing similarity provided the most impressive phonetic similarity effects. We recently replicated these effects in a verbal production task.
In a new series of experiments, we investigated the relative weight implicitly granted to voicing, manner and place of articulation in guiding responses in a syllable matching task. Subjects presented with a printed target CV syllable had to select one of two proposed CV syllables, according to intuitively estimated acoustic similarity. Manner and place similarity were pitted against each other in Experiment 1, manner and voicing in Experiment 2, place and voicing in Experiment 3. Adult skilled readers’ responses were mainly guided by manner similarity, especially for voiced consonants, suggesting a modulatory role of voicing upon this effect. Place similarity also guided matches, particularly for front consonants. We traced the development of these rules in normally reading children (second-, third- and fourth-graders). Dyslexic children without phonological impairment did not use phonetic rules for syllable matching: whatever the phonetic feature category, their choices never differed from chance. By contrast, phonetic feature categories guided responses in dyslexic children with phonological deficits. However, they followed phonetic rules that were clearly distinct from skilled readers’ rules: 1/ they preferred manner of articulation only for stop consonants, 2/ the modulatory role of voicing disappeared, 3/ their responses were more frequently guided by place of articulation than by voicing, but only for back consonants (contrary to skilled readers). Printed syllable matching tasks may improve our understanding of the organisation of phonological knowledge in dyslexic children.
Willy SERNICLAES* & Christian GENG**, "Cross-linguistic trends in the perception of place of articulation in stop consonants: A comparison between Hungarian and French"
* Laboratoire de Psychologie Expérimentale, CNRS & Université René Descartes (Paris 5)
** Centre for General Linguistics, Typology and Universals Research (ZAS), Berlin
A basic question in the study of speech perception is to understand how the predispositions evidenced in the newborn adapt to the cross-linguistic diversity of phoneme categories. Possible answers to this question in current theories of speech development are: (1) selection of predispositions relevant for perceiving categories in a given language (Werker & Tees, 1984; Infant Behavior and Development, 7, 49); (2) creation of couplings between predispositions (Serniclaes et al., 2004; J. Exp. Child Psychology, 87, 336); (3) creation of a language-specific model, unrelated to the predispositions (Kuhl, 1994; Current Opinion in Neurobiology, 4, 812). Here we present some new evidence in support of the role of predispositions in the build-up of adult percepts. These models were tested in the framework of the Distinctive Region Model of speech production, which assigns the four potential place categories to four different regions in the F2-F3 transition onset space, with flat transitions corresponding to natural boundaries in the neutral vocoid context (Carré et al., 2002; 7th International Conference on Spoken Language Processing, 1681). Different synthetic speech continua, varying along different directions in the F2-F3 transition space, were constructed, and both labelling and discrimination data were collected in two languages differing in their number of place categories: French (a three-category language) vs. Hungarian (a four-category language). Results (Serniclaes et al., 2003; Proc. 15th International Congress on Phonetic Sciences, 391; Bogliotti, 2005; PhD. Thesis, Université Paris 7 - Denis Diderot; Geng et al., 2005; ISCA Meeting London) indicate that labelling boundaries occupy similar positions for both languages in the F2-F3 space, the palatal-velar Hungarian boundary being located inside the palato-velar French category. However, labelling boundaries in Hungarian do not match the natural flat transition boundaries. Yet, the discrimination data reveal that the latter remain active in both languages. The implications of these findings for perceptual development theories, specifically for the selection vs. coupling issue, will be discussed.
TUESDAY AFTERNOON
Nathalie VALLÉE, "Some favoured syllabic patterns in the world’s languages explained by sensori-motor constraints"
Institut de la Communication Parlée, Grenoble
This talk will deal with the syllabic structures of lexical units.
A typological study based on a database of 15 natural languages (ULSID) will be presented. Results concerning co-occurrences between phonemes of the same syllable and between phonemes of two consecutive syllables will be reported. Computed ratios between observed and expected syllables show that some combinations are clearly favoured and others disfavoured, and we claim that some of these patterns could be explained by sensori-motor constraints. To support this, we will report i) data from experimental work; ii) data on speech development in children.
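The observed/expected ratios mentioned above are of the kind sketched below in Python (counts and category labels are toy values, not ULSID data); a ratio above 1 marks a favoured combination, below 1 a disfavoured one.

# Sketch: observed/expected (O/E) ratios for consonant-vowel co-occurrence within
# the syllable. Counts and category labels are toy values, not ULSID data.
from collections import Counter

# (consonant place, vowel class) counts for CV syllables in a toy lexicon.
observed = Counter({
    ("coronal", "front"): 120, ("coronal", "central"): 60, ("coronal", "back"): 45,
    ("labial", "front"): 40, ("labial", "central"): 95, ("labial", "back"): 50,
    ("dorsal", "front"): 30, ("dorsal", "central"): 55, ("dorsal", "back"): 105,
})

total = sum(observed.values())
c_totals, v_totals = Counter(), Counter()
for (c, v), n in observed.items():
    c_totals[c] += n
    v_totals[v] += n

print(f"{'C place':10s} {'V class':8s} {'O':>5s} {'E':>7s} {'O/E':>5s}")
for (c, v), n in sorted(observed.items()):
    expected = c_totals[c] * v_totals[v] / total   # independence baseline
    print(f"{c:10s} {v:8s} {n:5d} {expected:7.1f} {n / expected:5.2f}")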
Sophie KERN, "Universals and language specificities in canonical babbling"
Dynamique Du Langage
Pre-linguistic babbling shows common trends across languages. Similarities in sound types, sequences, and utterance type preferences across varied languages have been frequently documented, suggesting a universal foundation for babbling patterns in the characteristics of the production system. Frequently reported consonants show stop, nasal and glide manner and coronal and labial place of articulation. Mid and low front and central vowels are most often observed. Three preferred within-syllable co-occurrence patterns also emerge from a large set of different languages: coronal (tongue tip closure) consonants with front vowels, dorsal (tongue back closure) consonants with back vowels, and labial (lip closure) consonants with central vowels.
On the other hand, it is generally acknowledged that input from the ambient language plays a role in children’s perception as early as 8-10 months. It has also been proposed that input from the ambient language may influence the shaping of children’s production preferences at some point in the late babbling and first word periods. This potential for ambient language influence has been examined for utterance and syllable structures, vowel and consonant repertoires and distributions, as well as CV co-occurrence preferences.
This presentation has two main aims. 1.) description of relationships between children’s pre-linguistic vocalization patterns and characteristics of the production system and 2.) exploration of the relative role of learning from ambient language input during the canonical babbling period. A cross-linguistic perspective is adopted: sixteen typically developing children from monolingual environments participated (4 Turkish, 4 French, 4 Romanian, 4 Dutch infants). One hour of spontaneous vocalization data was recorded every two weeks in the children’s homes from babbling onset until onset of first words. Parents followed their normal routines with their child. Vocalizations were broadly transcribed using IPA. All singleton consonants and singleton vowels as well as syllable-like vocalizations were analysed. Data were entered for computer analysis. In addition, minimally 1,000 dictionary entries from each ambient language were analyzed for comparison.
Yvan ROSE, "Conceptual and empirical challenges to statistical approaches to child language production"
Memorial University of Newfoundland & Dynamique Du Langage
The literature on infant speech perception provides robust support for the hypothesis that input statistics offer strong cues for the discrimination of sound sequences, the perception and development of linguistic categories, and the development of the mental lexicon (see Gerken and Aslin 2005 for a recent review). In the field of early speech production, scholars have recently extended this hypothesis and claimed that statistics of the input can be taken as the main predictor of learning paths and speech production patterns in first language acquisition (e.g. Levelt, Schiller and Levelt 2000, Demuth 2003). In this paper, I argue that such an approach to the study of speech production (as opposed to perception) in child language is both conceptually and empirically inadequate. Looking at cross-linguistic evidence from the literature on child language, and focusing more specifically on variability in production patterns and emergent processes such as consonant harmony, I will argue that the phenomena observed in child language production must come from several different sources, thereby arguing against approaches based on single factors (e.g. statistical learning-only; markedness-only). I will conclude with a discussion of avenues that could be taken in future research to solve some of the current mysteries in child language phonological acquisition.
WEDNESDAY MORNING
Gérard PHILIPPSON, "Some problems in defining and organizing phonological primes"
INALCO & Dynamique Du Langage
Phonological primes must fulfill three tasks:
- account for all and only the contrasts attested in the world's languages
- give an explanatory representation of morphophonological alternations
- be phonetically appropriate, whether from the articulatory or acoustic angle
A comparison will be presented of the respective merits of different approaches to phonological primes such as Articulator theory (Halle et al., 2000; Watson, 2002), Vowel-place theory (Clements, 1993; Clements & Hume, 1995) and Government Phonology (Harris & Lindsey, 1995; Scheer, 1999). The main question to be addressed will be that of consonant-vowel interaction in various Arabic dialects.
Christophe COUPÉ, Egidio MARSICO, François PELLEGRINO, "Complexity of phonological inventories: features & structures"
Dynamique Du Langage
What do sound inventories tell us about the complexity of phonological systems? We address the issue of measuring the structural complexity of phonological systems. The issues at stake are therefore twofold: i) taking into account the complexity of the constituents, ii) taking into account the complexity of the relations between the constituents (the systemic level). This leads us both to evaluate the relevance of various measurements of graph complexity (off-diagonal complexity, graph mapping, networks of oppositions) and to question the nature of the basic constituents of phonological inventories (dynamic descriptions, unification of vocalic and consonantal feature spaces).
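As an illustration of the systemic level, the Python sketch below builds a toy "network of oppositions" (segments linked when they differ by exactly one feature) and reports simple degree statistics; the feature coding is hypothetical and the statistic is only a rough proxy, not the specific complexity measures under investigation here.

# Sketch: a toy "network of oppositions" over a small inventory. Two segments are
# linked if they differ in exactly one feature value. The degree statistics are a
# simple structural proxy, not the graph-complexity measures discussed in the talk.
from itertools import combinations

# Hypothetical feature coding of a small consonant inventory.
inventory = {
    "p": {"voice": 0, "place": "lab", "manner": "stop", "nasal": 0},
    "b": {"voice": 1, "place": "lab", "manner": "stop", "nasal": 0},
    "t": {"voice": 0, "place": "cor", "manner": "stop", "nasal": 0},
    "d": {"voice": 1, "place": "cor", "manner": "stop", "nasal": 0},
    "k": {"voice": 0, "place": "dor", "manner": "stop", "nasal": 0},
    "s": {"voice": 0, "place": "cor", "manner": "fric", "nasal": 0},
    "m": {"voice": 1, "place": "lab", "manner": "stop", "nasal": 1},
    "n": {"voice": 1, "place": "cor", "manner": "stop", "nasal": 1},
}

edges = []
for a, b in combinations(inventory, 2):
    diffs = [f for f in inventory[a] if inventory[a][f] != inventory[b][f]]
    if len(diffs) == 1:                      # minimal opposition: one feature apart
        edges.append((a, b, diffs[0]))

degree = {seg: 0 for seg in inventory}
for a, b, _ in edges:
    degree[a] += 1
    degree[b] += 1

print("minimal oppositions:", edges)
print("degrees:", degree)
print("mean degree:", sum(degree.values()) / len(degree))  # denser = more feature re-use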
Adamantios GAFOS, "Dynamical systems and integrated phonetics-phonology"
New York University & Haskins Laboratories
How is the discreteness of phonological systems related to the continuity of phonetic substance? According to the conventional view, the relation between the discrete aspects of phonological systems and the continuous dimensions of their phonetic substance is to be fleshed out by a process of translation from discrete symbols to continuous physical properties of an articulatory-acoustic nature. This is the view in the background of most current work in linguistics and cognitive science in general (see the notion of a transducer in Fodor & Pylyshyn 81 and Harnad 90). I will argue for a different view using a single formal language, the mathematics of non-linear dynamics. This formal language enables us to express qualitative and quantitative aspects of a complex system within a unified framework, and does away with the temporal metaphor of precedence between the qualitative and the quantitative, without losing sight of the essential distinction between the two. Specifically, in this talk, I will propose models of the relation between continuity and discreteness for two language-particular but nevertheless generalizable phenomena, the phonetic basis of vowel harmony and the incompleteness of a class of neutralization phenomena. In each case, the proposed model links the experimentally observed continuous distinctions to the discreteness of phonological form.
WEDNESDAY AFTERNOON
Abby COHN, "Gradience and Categoriality in Sound Patterns"
Cornell University
In this presentation, I explore the nature of gradience vs. categoriality in the domain of linguistic sound systems and consider the implication of these patterns for the nature of phonetics vs. phonology. A widely held hypothesis is that phonology is the domain of abstract patterns understood to be discrete and categorical, and phonetics is the domain of the quantitative realization of those patterns in time and space. Following this view, we expect to observe patterns of categorical phonology and gradient phonetics. Yet there is evidence suggesting that both categorical phonetics and gradient phonology also exist. The existence of categorical phonetics--periods of stability in space through time--is in fact not surprising. This results directly from certain discontinuities in the phonetics. More controversial is the status of gradient phonology, that is, phonological patterns best characterized in terms of continuous variables. It is particularly evidence claiming that there is gradient phonology that has led some to question whether phonetics and phonology are distinct. After reviewing recent proposals of the nature of the relationship between phonology and phonetics, I explore evidence for gradient phonology in the different aspects of what is understood to be phonology--contrast, phonotactics, morphophonemics and allophony.
Ioana CHITORAN, "Phonetic naturalness in phonology"
Dartmouth College
The question of the relationship between phonetics and phonology revolves to a large extent around the issue of naturalness in phonology. Establishing whether phonetics and phonology constitute separate systems or not depends partly on whether natural phonetic explanations are taken to be directly encoded in the phonology or not. I will survey different proposals put forward over time, focusing on three main views which differ in the specific way they integrate the role of natural explanations in phonology.
(i) Phonetic naturalness in diachrony (e.g., Ohala 1981, 1990; Hyman 1976, 2001; Blevins 2004)
(ii) Direct encoding of phonetic detail and full integration of phonetic knowledge in phonology (e.g., Steriade 2000, 2001; Flemming 1995, 2001);
(iii) Indirect reflection of phonetic detail in phonological constraints (e.g., Hayes 1999, Hayes & Steriade 2004).
John OHALA, "Languages’ sound inventories: the devil in the details"
University of California, Berkeley
The inventories of speech sounds in works such as Lepsius (1855, 1863), Maddieson (1984) and the IPA handbook (1999) or the discussion of patterns of speech sound inventories in theoretical works such as Jakobson, Fant, and Halle (1952), etc. usually assume a certain neat symmetry. English, for example, is usually said to have a symmetrical set of stops /p t k/ and /b d g/ and fricatives /f θ s ʃ/ and /v ð z ʒ/. In the accompanying discussion one can sometimes find out that /p t k/ are voiceless aspirated and that /b d g/ are typically voiceless unaspirated in initial position but voiced intervocalically (or perhaps inter-sonorantly). But other details weaken the notion of complete symmetry: even intervocalically the /g/, as in the word ‘again’, may be largely voiceless. One may also find out that the place of articulation of the ‘alveolar’ stops /d t/ is not precisely the same as that of the corresponding fricatives /z s/, the latter set being slightly more retracted and more laminal than apical. For one Swedish dialect, Livijn & Engstrand (2001) found that the place of articulation of /d/ may be more retracted than that for /t/. Historically, the symmetry imposed on segment inventories has been justified by economy of expression and symbolization and by structuralist assumptions. The question arises, though: how much detail that may be relevant to phonological universals is being hidden by such economy? The aerodynamic voicing constraint is invoked to explain the lack of a voiced velar stop /g/ in Thai, Dutch, and Czech (in native vocabulary) in spite of +/- voice being manifested contrastively in stops with more forward places of articulation. But the same constraint is probably the cause of the asymmetry regarding the voicelessness of the intervocalic /g/ in English and the different place of articulation of Swedish /d/ as opposed to /t/. I will review other cases where phonetic details are at odds with phonological neatness and symmetry in segment inventories. Can this problem be resolved? In the end, we must be clear as to what our phonological generalizations are intended to accomplish.
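The aerodynamic voicing constraint invoked above can be illustrated with a toy Python simulation (all parameter values are illustrative, not measurements): during a stop closure, glottal airflow raises supraglottal pressure and shrinks the transglottal pressure drop that sustains voicing; a small, stiff velar cavity reaches the cutoff much sooner than a larger, more yielding labial one.

# Toy illustration of the aerodynamic voicing constraint: during a stop closure,
# airflow through the glottis raises pressure in the sealed supraglottal cavity,
# shrinking the transglottal pressure drop that drives vocal-fold vibration.
# All parameter values are illustrative, not measured data.
import math

P_SUB = 8.0          # subglottal pressure (cm H2O)
THRESHOLD = 2.0      # minimal transglottal drop needed to sustain voicing (cm H2O)
A_GLOTTIS = 0.05     # mean glottal opening (cm^2)
RHO = 0.00114        # density of air (g/cm^3)
DT = 0.001           # time step (s)

def devoicing_time(compliance):
    """Time (s) until the transglottal drop falls below THRESHOLD, for a closed
    cavity whose walls yield 'compliance' cm^3 per cm H2O of pressure rise."""
    p_oral, t = 0.0, 0.0
    while P_SUB - p_oral > THRESHOLD and t < 1.0:
        dp = (P_SUB - p_oral) * 980.0                    # cm H2O -> dyn/cm^2
        flow = A_GLOTTIS * math.sqrt(2.0 * dp / RHO)     # orifice equation, cm^3/s
        p_oral += DT * flow / compliance                 # pressure rise in the cavity
        t += DT
    return t

# Larger, more yielding cavity for a labial closure; small stiff cavity for a velar.
print(f"labial-like closure (compliance 15): voicing sustainable for ~{devoicing_time(15.0) * 1000:.0f} ms")
print(f"velar-like closure  (compliance 3):  voicing sustainable for ~{devoicing_time(3.0) * 1000:.0f} ms")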
Jakobson, Roman; Fant, C. Gunnar M.; Halle, Morris. 1952. Preliminaries to speech analysis. The distinctive features and their correlates. [Acoustic Laboratory, MIT, Technical Report No. 13] Cambridge: Acoustic Laboratory, MIT.
Lepsius, R. 1855. Das allgemeine linguistische Alphabet. Grundsätze der Übertragung fremder Schriftsysteme und bisher noch ungeschriebener Sprachen in europäische Buchstaben. Berlin: Verlag von Wilhelm Hertz.
Lepsius, C. R. 1863. Standard alphabet for reducing unwritten languages and foreign graphic systems to a uniform orthography in European letters. 2nd ed. London: Williams & Norgate.
Livijn, P. & Engstrand, O. (2001), Place of articulation for coronals in some Swedish dialects. Proceedings of Fonetik 2001, the XIVth Swedish Phonetics Conference, Örenäs, May 30 - June 1, 2001. Working Papers, Department of Linguistics, Lund University 49: 112-115.
Maddieson, I. 1984. Patterns of sounds. Cambridge University Press.