  Peter Roach

Relaxed pronunciation

29/1/2021


 
It is a well-known fact that when we speak naturally we fail to pronounce all the phonemes and syllables that would be predicted if we said each separate word slowly and carefully. I have been reading Wikipedia’s articles on Elision and related topics, and I’ve been getting confused. This is an area that has always seemed vague and ill-defined, and Wikipedia has a lot of articles that overlap each other and don’t seem coordinated. In particular, I have problems with an unusual article called Relaxed pronunciation. The title bothers me: I have never come across the term in any work on phonetics or pronunciation teaching, and I suspect that the person who wrote the article just made it up. Wikipedia articles are supposed to be on recognized topics, and to contain references to published literature, but this article has none. Lots of writers use terms like ‘casual speech’ (e.g. Linda Shockey in Sound Patterns of Spoken English) or ‘spontaneous speech’ (e.g. Richard Cauldwell’s Phonology for Listening); we talk about “connected speech phenomena”. These aspects of speech are tremendously important and need careful analysis and discussion.

If you look at the “Relaxed pronunciation” article, you will see that it begins with a list of American English casual-speech forms of some common words and phrases, with the normal orthography, an IPA transcription of the reduced form and a respelling to indicate the pronunciation. Some are straightforward, such as
     kind of: [ˈkaɪɾ̃ə], kinda
     don't you: [ˈdoʊntʃə], doncha

while others strike me as odd:
     fixing to: "finna"
     I'm going to: [ˈaɪmə], "I'mma" or [ˈɑmənə], "Ah-muhnuh"

A few cases of single words are listed among the phrases:
     suppose: [spoʊz] s'pose
     library: [ˈlaɪbɹi], [ˈlaɪˌbɛɹi]

After the American English examples, the article becomes a bit of a free-for-all where other contributors have added examples of what they know about other languages. None seems to have taken the trouble to look up a handbook on the pronunciation of the language they are writing about. It all has a rather anecdotal feel.

The Dutch and German examples are given only in a respelling form. There are half a dozen Russian examples, given with IPA transcription, and something about contracted forms in Russian poetry with an example that I can’t understand. French gets very little coverage, with notes about the pronunciations of ‘te’ and ‘Qu'est-ce que’; however, there is a more detailed analysis (with IPA) of “il ne savait peut-être plus ce qu'il faisait”. The section on Spanish is brief and not very clear: we get some notes about the verb ‘estar’, the pronunciation of ‘para’ and some discussion of the lenition of intervocalic /d/ into an approximant. Many of the assertions in this section have been challenged with the comment “citation needed”. Portuguese gets quite a lot of examples (I think these are all European Portuguese, though we are not told). Confusingly, the reduced form is often given to the left of the “=” sign, for example

     home = homem (man)

As far as I know, ‘homem’ is not pronounced with a final /m/ anyway, so it’s not clear what exactly the change is. The limitations of using spelling to indicate pronunciation in detail can be seen in examples of words beginning with ‘qu’, as in

     que + o = q'o

The section on Japanese is very brief, and does not use IPA. For Turkish (no IPA) we get examples to illustrate one single instance, the contraction of ‘ne’ to ‘n’ before a vowel. We also get an example of vowel loss in poetry. For Hindustani there is a two-sentence explanation of how /h/ is elided in relaxed speech. The article ends with the observation that in Bengali

            “it is common to change the sound of rchh to chchh or chch in normal speech. For example, করছ will be pronounced কচ্ছ or কচ্চ.”

- which I find pretty opaque.  
 
Here is a list of related Wikipedia topics (I am sure it is incomplete) that cover the field of elision, contraction etc. There is a wealth of knowledge in these articles, but there seems to be no route-map to help the reader to find their way.
  • Elision (perhaps the best known heading in the phonetics and pronunciation literature). There is also a language-specific article on Elision (French)
  • Syncope (not much used in works on phonetics and pronunciation, more familiar in works on metrics and versification, but it's a useful article)
  • Stress and vowel reduction  (particularly the section on weak forms)
  • Contraction (Wikipedia lists this as Contraction (Grammar), but it seems to me to be mainly about pronunciation - and to add to the confusion there is a substantial part of English auxiliaries and contractions devoted to the same topic).
  • Crasis
  • Sandhi
  • Synalepha (which contains links to a lot of other similar processes not usually found in the phonetics literature).
Incidentally, if you look for John Wells’ useful term Compression in WP you will be redirected to Syncope.

Voiceless approximants: do they really exist?

5/12/2020


 
[NOTE: I HAVE NOW ADDED SOME NOTES BASED ON THE MATERIAL BELOW TO THE RELEVANT SECTION OF THE WIKIPEDIA ARTICLE ON APPROXIMANTS - SEE IT HERE]


The introduction of the term approximant has been a useful one for phonetics. It has come to replace the earlier terms frictionless continuant and semivowel. It is usually credited to Peter Ladefoged, in his early work A Study of West African Languages (1964), and Abercrombie recommends adoption of the term in his 1967 book Elements of General Phonetics. The basic idea is straightforward. We can take the opening section of Wikipedia’s article on Approximant as the basis for a provisional definition:
Approximants are speech sounds that involve the articulators approaching each other but not narrowly enough, nor with enough articulatory precision, to create turbulent airflow. Therefore, approximants fall between fricatives, which do produce a turbulent airstream, and vowels, which normally produce no turbulence. This class is composed of sounds like [ɹ] (as in ''rest'') and semivowels like [j] and [w] (as in ''yes'' and ''west'', respectively), as well as lateral approximants like [l].

I have no problem with this as a definition of approximant. However, a much later introduction has been the concept of voiceless approximant. An often-quoted example is the sound [ʍ] found in the “voiceless w” in some English speakers’ pronunciation of ‘whine’ (particularly in Scottish accents). Another is the palatal sound at the beginning of the word ‘hue’. Ladefoged and Maddieson (The Sounds of the World’s Languages, 1996, p. 326) give a few examples of voiceless approximants: speaking of the distinction between ‘you’ and ‘hue’ in English, they write “the onset in the second word is normally a voiceless palatal approximant, [j̊], for which the IPA has no unitary symbol”. Later, they write “In … Scotland, the words ‘weather’ and ‘whether’ contrast, the latter beginning with a non-fricative [ʍ]”. From the data they report, it seems extremely unlikely that any language has a phonemic contrast between a voiceless approximant and a voiceless fricative.

The problem with voiceless approximants now confronts us. We know from the definition of approximant that if an articulation is made with a narrow enough air passage to produce turbulent airflow, and hence to generate fricative noise, then it cannot be an approximant. If the airflow and articulation of an approximant have to be such that turbulence is not generated, how can a voiceless approximant be audible? And if a voiceless approximant is audible, how can it be non-fricative? This dilemma is presented in a very useful discussion piece, arising from disagreement among Wikipedia contributors, on Linguistics Stack Exchange: “Do voiceless approximants exist? What is the consensus among phoneticians/phonologists?”

It seems that the only way out of this dilemma is to argue that there are two ways of generating an audible voiceless continuant sound, one being the familiar type of most fricatives, where the noise resulting from turbulent flow is produced at the point of articulation, and the other being noise produced by non-turbulent airflow and generated in the vocal tract, though not at a specific point of articulation. A detailed explanation by Shadle (“The aerodynamics of speech” in Hardcastle and Laver (eds) The Handbook of Phonetic Sciences, 1997, p. 37) describes the difference between turbulent and laminar flow (interestingly, though she produced valuable work on speech aerodynamics she was an engineer who had previously been working on jet engines). If we go back to the pioneering work of Kenneth Pike, in his ‘Phonetics’ (1943), we find a distinction proposed between ‘local’ friction and ‘cavity’ friction. Pike’s cavity friction presumably depends on laminar flow. On p. 71, he writes “One of the marked weaknesses in current usage of friction as a criterion for consonant vowel differentiation lies in the failure to distinguish satisfactorily between two types of friction which function very differently and have different origins, even though the border lines are not sharp between them. Generally speaking, one type retains its audibility when voiced (e.g. for sibilants); the other is a weak fricative and audible only when sounds are voiceless (e.g. most vowels and certain sonants). The first type results from stricture at a single local point; the second is due to cavity friction, that is, voiceless resonance of a chamber as a whole caused by air going through it as through an open tube.”

It does seem, then, that there is at least a theoretical possibility of a viable phonetic distinction between voiceless fricatives and voiceless approximants.

In another theoretical discussion, Catford (Fundamental Problems in Phonetics, 1977, pp. 120-124) suggests a three-way distinction between fricative, approximant and resonant. “In fricative articulation the articulatory channel is very small and the flow through it is always turbulent.” Of approximants he writes “they have non-turbulent flow when voiced; but the flow becomes turbulent when they are made voiceless”. Of resonants, he writes “In voiceless resonants … there must always be turbulent flow through the glottis; if there were not, there would simply be silence.” In the case of Catford’s resonant, therefore, the noise excitation in the vocal tract is the same as for [h], being made by turbulent flow through the glottis. A similar position is found in Laver’s Principles of Phonetics (p. 269): “All resonants have a stricture of open approximation. A necessary condition for the performance of resonant is that the airstream passes through the vocal tract in a smoothly laminar flow, with no audible local frication.” The idea of the voiceless resonant is necessary for understanding how voiceless vowels are audible.
​
I haven’t said anything about the acoustic characteristics of voiceless approximants, mainly because I have not yet had time to analyze a controlled set of recordings comparing fricatives and voiced approximants. What I expect to find in spectrograms (and have found in a few trials) is that in a voiceless approximant made with as little local friction as I can manage, the formants corresponding to those of the voiced equivalent are visible in the noise spectrum generated by laminar flow. In the case of voiceless continuants with local friction, what is most visible is the regions of high-frequency energy similar to those of other voiceless fricatives.
Now we need to look at how the concept of voiceless approximants works in practice rather than in theory. Pike’s work was aimed at establishing a pure theory of phonetics removed from the influence of phonology and of phonemic function. But however fine the distinctions he considered between phonetically different articulations, distinctions were only of interest if they were above the threshold of perception (Pike, p. 151). This is a relevant condition in the present issue.
 
To keep things simple, we can confine the discussion to the voiceless [j] and [w] sounds of English (which I will call ‘hue’ and ‘whine’), and leave aside sounds such as [h] and [ɬ]. Accounts of voiceless approximants in the context of European languages seem always to be found in discussions of voiceless counterparts of (voiced) approximant phonemes: these are often (but not always) voiceless allophones of voiced approximant phonemes. Sounds classed as voiceless approximants appear always to be made with the tongue close to the roof of the mouth, though the place of constriction is of course variable. They are related to a number of other similar sounds. The relationships can be set out like this:
 
(1)   Voiceless approximants and voiceless fricatives: if the articulation produces local friction with turbulent flow, we get a palatal fricative [ç] (though some writers on English phonology prefer the symbolization [hj]) for ‘hue’. For ‘whine’ the IPA symbol is [ʍ] in the case of a voiceless fricative, but this is also used for the suggested voiceless approximant.
(2)   Voiceless approximants and devoiced approximants: the latter can be shown with a devoicing diacritic ([ ̥ ] or [ ̊ ]) appended to the symbol for a voiced approximant. Thus ‘hue’ is [j̊] and ‘whine’ is [w̥].
(3)   Voiceless approximants shown as opened fricatives: a lesser degree of articulatory constriction can be shown by means of an “open” diacritic [ ̨ ], which could be taken to imply absence of local friction. Thus [x̨], [ʍ̨].
(4)   Voiceless approximants and devoiced vowels: we could in theory show ‘hue’ as [i̥] and ‘whine’ as [u̥].
 
If the voiceless approximant is to be acceptable as a viable phonetic category, it must fulfil certain conditions:
  1. No sound can be classed as a voiceless approximant if it contains audible local friction.
  2. Phoneticians must be able to make a reliable auditory discrimination between voiceless approximants and voiceless fricatives. The auditory difference between local friction and cavity friction must be sufficient to enable the distinction to be made.
  3. It must be possible for a phonetician to produce and demonstrate clearly distinct voiceless fricatives and voiceless approximants.
  4. Languages must exist where voiceless approximants (as defined in (1)) occur regularly in speech.
 
Conclusion
I do not believe that the conditions listed above are met. On the other hand, it seems clear that at least some phonetics researchers feel the need for the category voiceless approximant. I believe that the best solution is to treat the label voiceless approximant as a “cover term”, i.e. one which does not map on to a single phonetic entity, but on to a collection of related entities. I would use a form of words like this: “The term voiceless approximant denotes a voiceless sound with an articulation similar to a voiced approximant, which may consist of cavity friction resulting from laminar airflow in the vocal tract, or local friction resulting from turbulent airflow through one or more constrictions, or both”.
 


Professor Erik Fudge

18/11/2020


 
I was very sorry to hear of the recent death of Professor Erik Fudge. He was a pleasant and good-natured colleague at Reading University, and in fact I knew him earlier in the days when he was at Hull University and I was at Leeds. I am a great admirer of his work on phonology, particularly “Syllables” in Journal of Linguistics, 5, 1969, and his well-known book  English Word Stress. This is a sad loss.


November 10th, 2020

10/11/2020


 
[Note: I have now rewritten the Wikipedia article described below]

Auditory phonetics


Phonetics is traditionally said to have three principal branches: Acoustic, Articulatory and Auditory.  Wikipedia has, appropriately, articles on Acoustic phonetics, Articulatory phonetics and Auditory phonetics. I have always felt that of these three, Auditory gets the least attention and may be regarded as something of a Cinderella. If you look at each of the three articles, you will probably find, as I do, some serious weaknesses, but the Auditory one is by far the most unsatisfactory. It consists largely of some sketchy definitions and some rather dubious history.
  • The statement near the beginning that “With auditory experiments such as sound impulses or questionnaires it can be investigated, based on the feedback from the listeners” doesn’t tell us much.
  • I am quite mystified by this: “Neurological measurement methods are also used, e.g. determine the frequency range of the sound perception of individual letters”.
  • The claim that “Auditory phonetics therefore focuses heavily on the neurophysiological and anatomical processes in hearing”, though based, apparently, on a source that I haven’t read and can’t get hold of, seems wrong. If Auditory phonetics comprises (as I believe it does) speech perception, then the subject must depend just as much on cognitive aspects of speech communication as on physiology and anatomy.
  • We are told (without a reference) that “Georg von Békésy has put forward a theory that shows how language is processed in the ear and in the human brain”, but the idea of “language being processed in the ear” makes no sense.
  • There is a table which attempts to pair physical properties with corresponding auditory sensations. Among the pairings we find “frequency” paired with “high”, and “duration” paired with “tone color”.
  • The attempt to explain categorical perception falls well short of the mark. It mixes up the segmentation of continuous speech into successive units with the perception of vowel and consonant differences along a continuum.
I won’t go on, but I would encourage you to read the whole thing and judge for yourself. The Talk page for this article shows that it is not well regarded and has had a somewhat troubled history. I think that as a general rule, any very broad, over-arching article tends to be less satisfactory than the more precisely focused topics that are subsumed under it, but it should still be possible to do better than this.
​
I think I will spend my remaining lockdown time revising the “Three A’s” articles one by one, starting with Auditory. Any suggestions gratefully received.

The Queen's/King's English

25/9/2020


 
A recent edit of the Wikipedia article on Received Pronunciation added in as an alternative name "The Queen's/King's English". This has been removed by another editor on the grounds that no reference was given to establish that this term is really an alternative to "Received Pronunciation". I suspect that this may lead to a lengthy argument, but to me it is beyond question that the "Queen's/King's English" name refers to a set of grammatical, lexical and phonological characteristics of one variety of English, while "Received Pronunciation" is only concerned with phonetic and phonological characteristics of the standard accent. 

(Sept 26th) I am not surprised to see that the person who added the term "The Queen's/King's English" has put it back in, with a reference. Not surprisingly, the reference is to the British Library web article that seems to have become the default reference on the subject of RP (apart from the Wikipedia article), and this is one of a number of ways in which the BL article seems to me unsatisfactory. I still believe that it is incorrect to say that the term "The Queen's/King's English" is an alternative name for RP.

Digital Assistants' Accents

3/6/2020


 
In Wikipedia's article on Received Pronunciation (a topic in which I keep finding things to grumble about) there is a section called Usage. It ends with a sentence that is pretty dubious but which raises an interesting question. The sentence reads "Most British voices in apps like Siri and Google Assistant speak RP, and most TV and radio stations across the UK use this accent." The second part of the sentence is clearly nonsense - across TV and radio broadcasting as a whole, the amount of speaking with an RP accent is extremely small. The first part of the sentence is given with no supporting citation, something which normally isn't approved of in Wikipedia.
I will remove the incorrect statement about TV and radio pronunciation [NOW DONE], but if anyone knows of any respectable study of the choice of accent built into digital assistants I would be very glad to hear of it. I imagine the choice of voices for car SatNav systems would be a similar case.

"Conservative RP" again

12/5/2020


 
[I HAVE NOW MADE THE CHANGES SUGGESTED HERE AND ON THE WIKIPEDIA TALK PAGE, AND HOPE THEY ARE AN IMPROVEMENT]


 I last wrote in this blog on the subject of Wikipedia’s presentation of “Conservative RP” way back in 2016, when I was objecting to a poorly-written article dedicated to the subject. You can read what I wrote here. Some of the errors and unsubstantiated claims in that article were modified by an editor, and at some point a slightly reduced version was shoe-horned into the Wikipedia article on Received Pronunciation, and the offending article itself was deleted. You can read the current piece here.
Looking at it again I still feel that this is an unacceptable piece of work, so although I find the subject of RP rather boring, I am reluctantly starting to revise the material. The first objection I have is that the whole idea of “Conservative RP” is muddled. Its description implies that this is a present-day accent used or adopted by some British speakers (principally older and higher-class speakers), alongside other present-day accents of English; much of what is presented, however, is simply an account of the phonetics of RP of fifty to a hundred years ago. If speakers on the BBC used "Conservative RP" up to 1960, that is because they were speaking with the RP accent of sixty years ago, not because they had a distinctively conservative accent at the time. The material mixes up diachronic (historical) with synchronic (present-day) analysis; this article has a section on Historical variation, and it is there that “Conservative RP” belongs.
Section 5 of the Wikipedia piece is almost entirely based on a much-cited but superficial web article on the British Library’s “British Accents and Dialects” site. The examples given in the BL article are based on a single speaker who was born in 1909. The lead in the WP material contrasts Conservative RP with “Contemporary RP”, and claims that the Oxford English Dictionary’s pronunciations were based on Conservative RP for its first two editions, but on Contemporary RP for its third edition. No reference is given for this. We are given two alternative names for Conservative RP:  Traditional RP and Upper RP. Section 1.1 of the same WP article goes through half a dozen other names for RP, and Section 1.2 lists sub-varieties. If the terms “Traditional” and “Upper” belong anywhere, it is there (but references for their use are not given).
We are then given an unordered list of “phonological features” of Conservative RP that distinguish it from Contemporary. I have pointed out before that most of the “features” are phonetic, not phonological. The most important point to make here is that there is a great deal of overlap between this list and the list given in Section 4.3 (Historical variation), and this overlap must be confusing to readers.
Finally, Sections 1.1 and 1.2 could be better organized. What I propose to do is to move anything useful from Section 5 into Section 4.3 and then remove Section 5.

The 1989 Kiel Convention of the IPA

20/4/2020


 
I recently posted about the official photograph of all the participants at the 1989 Kiel Convention of the International Phonetic Association. I am glad to say that, with the help of Professor Klaus Kohler, the IPA will now host the picture, together with the names of the people in the picture, on its own website.

Phonetic transcription

29/12/2019


 
[NOTE: I have now made a lot of changes to the sections mentioned below, so you will find the text different from what is referred to in this post.]

I have been re-reading Wikipedia's article on Phonetic transcription. It's a huge subject, and the article does a reasonable job of covering the basics, but I have some problems with the sections “Versus orthography”, “Narrow versus broad transcription” and “Types of notational systems”. I'll be going through these one by one, so here I'll just comment on the first one.

I believe there are a couple of oversimplifications in “Versus orthography”. The statement “Other languages, such as Spanish and Italian have a more consistent (but still imperfect) relationship between orthography and pronunciation (phonemic orthography)” implies that no language has an orthography that is genuinely "phonemic", but the Wikipedia article Phonemic orthography states that “That ideal situation is rare but exists in a few languages”. There is another oversimplification in the statement “phonetic transcription can provide a function that the orthography cannot. It displays a one-to-one relationship between symbols and sounds, unlike traditional writing systems”. But phonetic transcriptions of a single utterance may identify different numbers of sounds: Lodge (2009, pp. 67-8) shows five different phonetic transcriptions of the word ‘cab’, some of which contain a symbol for the aspiration component of the initial /k/ and others a symbol for the final release of /b/. Laver (1994, p. 558) gives an illustration of successive efforts at transcription of a Czech utterance by a transcriber who was an expert in phonetics but not a speaker of Czech. Three transcriptions of the word [prɒstʃi] (“simpler”) have different numbers of symbols. This shows that the principle of one-to-one correspondence between a phonetic symbol and a single sound is not strictly true in practice: what we identify as “a sound” is often arbitrary. The other part of this claim, that orthography is not able to offer a one-to-one correspondence between letters and sounds, is shown to be incorrect by the statement quoted above from the Phonemic orthography article. I hope to modify this section to take account of these points.



J D O'Connor

21/12/2019


 
I heard recently from a former pupil of the UCL phonetician J D O'Connor (known to all as Doc), and this prompted me to look at what Wikipedia has to say about him in this article. In summary, there is very little. The main sentence is the following: "Joseph Desmond O'Connor (10 December 1919 – 15 July 1998) was a British linguist and Chair of Phonetics at University College London". To me, the phrase "Chair of Phonetics" doesn't seem right. His title was "Professor of Phonetics". I imagine the "Chair" term is American. I wonder if I should correct the title, or if this would seem merely picky?
Although such subjective thoughts aren't suitable for Wikipedia, I think it's worth noting here that he was a much-loved and influential figure in the world of twentieth-century British phonetics. It was good that he had a festschrift (edited by Jack Windsor Lewis) dedicated to him. I feel privileged to have been taught by him, and I know I learned a lot from his books and papers. It would be nice if someone would expand this article to say more about him.

[UPDATE: I have now changed "Chair" to "Professor"]


    A blog that discusses problems in Wikipedia's coverage of Phonetics

    Peter Roach

    Emeritus Professor of Phonetics,
    ​University of Reading, UK
