Please write exclusively to firstname.lastname@example.org. My other mailbox, email@example.com, has been discontinued.
01. What is Natural Phonetics? In what sense is it natural?
02. Is phonetics part of linguistics?
03. Aren't you committed to any theoretical framework at all, then? I mean, what is linguistics about, in your opinion?
04. Should we content ourselves with descriptive adequacy, then, and leave explicative adequacy aside?
05. How can you acquire sufficient skills to do Natural Phonetics properly?
06. How long will it take me to be able to get into so much detail?
07. What's the purpose of going about sounds the way you do?
08. Why do you use so many symbols?
09. Wouldn't it be better to stick to the phonemic principle put forward by the IPA and then use diacritics for all the minute differences?
10. What's wrong with the IPA alphabet?
11. Why is it called a phonetic alphabet in the first place, then?
12. What's the purpose of putting so much detail in the transcriptions?
13. But you must admit that your transcriptions are difficult to read.
14. But can you really distinguish between all those sounds?
15. You must recognize, though, that your impressionistic way of doing phonetics tends to lack objectivity.
16. Why aren't Jones's Cardinal Vowels as useful a classification for vowel sounds as your Natural Phonetics classification?
17. Why should I bother learning Natural Phonetics if it's so much easier doing phonetics with a computer (I'm thinking, for instance, of pitch analysis)?
18. You can't deny, though, the dramatic progress in technology which was made possible by acoustic phonetics alone.
19. Why aren't sequences such as [je, ja, wo, wa] proper phonetic diphthongs?
20. Why do you call syllables «phonosyllables»?
21. Do you approve of normative phonetics?
22. What is neutral pronunciation?
23. Are you really willing to work with just "anyone" in order to spread the method of Natural Phonetics?
By Natural Phonetics, I mean that kind of analysis and introspection of linguistic sounds and intonation which you can do by yourself, without any expensive and complicated machinery. It is something you can do even while you’re taking a shower or when you’re in bed in the dark. You won’t need any contrivance other than your own keenness and willingness to «play» with the sounds of language, either your own or others’: just in case, a tape recorder is the only «external» device which can come in handy.
Most importantly, you don’t need any obscure and abstruse theory, either. Plain common sense is enough, as long as you are not wearing the blinkers of spelling. In fact, you should bear in mind that spelling is quite a different thing from actual pronunciation, even for those tongues where there is an almost straightforward one-letter-one-sound correspondence (and there are not many of them). Letters are not sounds, however useful they may be to represent them (in some languages arguably more consistently than in others). All too often, we seem to forget that letters don’t sound; rather, it’s the other way round: it is sounds that are «lettered», i.e. given a letter to represent them conveniently.
We often forget —or are misled into doing so by the traditional school— that in many senses speech comes before writing, perhaps not in importance but certainly in urgency: children spontaneously learn to speak but have to be taught how to read. In several cultures, some individuals cannot use written language; and very many languages even lack a written form.
When it comes to musing over the sounds of a language in adulthood (not necessarily a foreign language, but also your own language with a different accent), the most helpful form of written record for sounds is a phonetic alphabet, which pursues the ideal of one symbol for each sound and one sound for each symbol. And given that the articulatory possibilities of man are multifarious, a rich choice of these symbols will be needed. An accurate set of articulatory figures is also necessary to guide you appropriately.
No, if linguistics is meant in the peculiar Chomskyan sense. Yes indeed, if we more loosely take it to mean the scientific study of language.
Since any language is made up of meanings (the concern of semantics) expressed through words (the concern of morphology), formed by sounds (the subject matter of phonetics), put into phrases and sentences (i.e. syntax), and completed by intonation (i.e. tonetics and paraphonics), there is no reason not to consider phonetics part of linguistics.
As I see it, modern linguistics can be divided into three different (but complementary) branches: descriptive (which I call glottography), explicative (which I call glottosophy), and quantitative (i.e. glottometry). I’m concerned with glottography; Chomskyans, with glottosophy; acoustic phoneticians and many sociolinguists, with glottometry. Each branch has aims and methods of its own; but altogether, the three of them contribute to a better understanding of how speech works.
Arguably, accurate and articulate descriptions lie at the basis of any linguistic fact. That’s why I’m firmly convinced that descriptive phonetics and tonetics are still necessary, and that the method of Natural Phonetics (with its three fundamental components, i.e. articulatory, auditory, and functional) is the most fruitful way of producing accurate and useful descriptions.
Functional phonetics, generally called «phonology» or «phonemics», is inevitably a part of phonetics, although some scholars still think that phonetics is just the material part of phonology itself. But, just as a car has at least three essential parts (i.e. a body to carry you, wheels to move on, and an engine to make the wheels turn), so the functional component is nothing but a part of the whole.
Technology too has developed an auxiliary aspect of phonetics, which seeks help outside the speaker-hearer, especially through acoustic phonetics. Today, it’s affordable for anyone able to use a computer, and allows them to do instrumental and quantitative phonetics (instead of natural phonetics), which is very often tantamount to saying eye-phonetics instead of ear-phonetics, i.e. a kind of phonetics which is artificial, not to say unnatural.
Traditional linguistics mainly studied the history of languages (i.e. glottology), or their sources and development (i.e. etymology). Modern linguistics has added the ambition to explain any linguistic fact, by pursuing «explanatory» adequacy. However, I think that any explanation must be based on complete and accurate descriptions. Even phonetically and tonetically (to say nothing of paraphonics), our languages still lack such satisfactory descriptions.
Even English has not yet received such a full description, although in recent years many scholars have begun to collect recordings of actual spontaneous connected speech and to transcribe them, in spite of the fact that they keep using only a very poor and unsatisfactory notational system, which doesn’t allow them to be accurate and precise enough. They risk going far beyond their intentions, by overdifferentiating minor nuances while ignoring the ones which matter more for learning languages and accents.
I have begun to do this for the 12 culture languages (and variants thereof) given in my Handbook of Pronunciation (HPr), while for more than 300 other languages I have given a sketchy but accurate set of diagrams, from which interested and gifted readers may derive full descriptions of the languages they want to work on.
What I mean is that even today there is a terrible need for good phonotonetic descriptions, based on a rich set of symbols, such as my own canIPA inventory, which comprises 52 basic vocoid symbols and several hundred contoids. If we then consider that there are 8 further potential vocoids, that taking intermediate lip-positions into account we could get at least 26 more, and that there are as many nasalized vocoids as oral ones, we can reckon about 1000 linguistic sounds altogether, which are all to be found in my Handbook of Phonetics (HPh), each with its own symbol (generally with no «second-class» diacritics): 500 basic, 300 complementary, and 200 supplementary ones.
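The arithmetic behind these totals can be checked in a few lines; the following is just an illustrative tally using the figures quoted above (the grouping into variables is my own reading of the text, not part of the canIPA inventory itself):

```python
# Illustrative tally of the vocoid figures quoted above; the variable
# names and grouping are mine, not part of the canIPA inventory itself.

basic_vocoids = 52        # basic vocoid symbols
potential_vocoids = 8     # further potential vocoids
intermediate_lip = 26     # extra vocoids from intermediate lip-positions

oral_vocoids = basic_vocoids + potential_vocoids + intermediate_lip
nasalized_vocoids = oral_vocoids   # one nasalized counterpart per oral vocoid

print(oral_vocoids + nasalized_vocoids)  # prints 172

# The roughly 1000 sounds in the Handbook of Phonetics break down as:
print(500 + 300 + 200)                   # prints 1000
```

Together with the several hundred contoids, the 172 vocoids thus account for the overall figure of about 1000 sounds.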
I would answer that it all depends on what your purposes are. If you aim at mastering the sounds of a language at an adult age, either a foreign language or your own, then the most rewarding way to do it is through Natural Phonetics. Indeed, it must be acknowledged that very often generative accounts lose sight of descriptive accuracy.
First and foremost, you must be willing to actually become a real listener to any sound nuances you or other people can produce. Then, you should refine your innate but generally dulled imitation skills, aided by kinesthesia, which allows you to feel even the tiniest movements in your mouth.
Much will depend on your natural endowment. Let’s say it’s pretty much like learning to play an instrument and to read music. Anyway, the true feeling which must guide you is that there is always something new to discover and to check, or —to put it philosophically— you should be Socratic enough to admit that what we know is that we just do not know enough.
Essentially, to master the sounds of a language more quickly: both the way they are pronounced and the way they are organized to convey meaning.
Secondarily, it’s fun. It’s a whole new world! Instead of a miserable set of a few written vowel signs, say 5 or 10, there are at least 50 of them there for you to play with, as you like. And instead of two or three lousy score of consonants, you have hundreds of them at your disposal. This way you’re made to feel their differences and similarities. You can easily spot languages and accents. You can fake regional, social, and foreign accents just for fun, or even earn a living out of it in the entertainment world…
Because only in this way can one record pronunciation as precisely as possible. When children paint their drawings, they only use the basic colors available in fibre-tip pens, so that their mum and dad are made to have, say, yellow or black or red hair, and pink (or brown) skin… When you use only the official IPA symbols, it’s as though you contented yourself with those few basic colors. Through training, most people can acquire enough skill to identify around 1000 phonetic sounds (with the possibility, for the finest ears, of distinguishing further nuances), even though each of the world’s languages generally uses only a few dozen phonemic sounds (or «phonemes»), i.e. sounds which can change the meaning of two similar words, like «bit» and «bet», or «light» and «right».
When you can use many different words to show nuances of meaning, your communication is decidedly better than when using, say, broken English. Thus, if you have many symbols at your disposal, you can be more precise in identifying similar but different sounds. Inevitably, with only a few symbols available, even your ability to distinguish different sounds is seriously compromised.
If you did so, you would inevitably feel that minute nuances are less important, and you’d end up preferring not to bother about them. Diacritics are necessary, but they aren’t always sufficient or clear enough, given that they are more likely than a self-sufficient symbol to be used ambiguously by different transcribers. In addition, all too often a sound represented by a symbol bearing one or more diacritics is inevitably felt to be a second-class sound, and its symbol a complicated one.
Although it’s obvious that certain sounds (and symbols) are more basic and widespread than others, canIPA symbols are more «egalitarian» inasmuch as they are all practically on a par. However, the 26 letters of the Roman alphabet are used for those sounds of the world’s languages that are the most frequent and the most widely used. Rarer or more complex sounds do have derived symbols, but they’re not discriminated by any mark attached to them.
That it’s not a real phoneTic alphabet, i.e. one which achieves the ideal of «one sound, one symbol», but just a phoneMic alphabet. It must be acknowledged, however, that it is way better than any other alphabet of the kind.
For the sake of tradition. When you open a common grammar-book, you generally find a very short chapter at the beginning called Phonetics, which supposedly should deal with sounds even though it mainly deals with letters, making just some confused and confusing remarks about how they are supposed to sound. But letters don’t sound: they simply try to represent sounds, with no great success, to be frank.
Allow me to answer with a question (which is not at all silly to me): why shouldn’t painters be as precise as they can, or musicians as sophisticated as they can?
They are. But they can actually guide you to the real pronunciations as used by people every day. When you content yourself with phonemic symbols, the only way you can hope to get near the real sounds is to be a native speaker. And even in that case, you may be pronouncing differently from what the phonemic symbols show, without even realizing you’re saying something different.
Yes! Many people are able to distinguish a great many different car models, including the year they were made. Of course, they must be particularly interested in cars. When you’re very much interested in sounds, you find that you’re able to feel and appreciate their differences. Try it and see… Obviously, some particular or newer sounds can be more difficult to grasp at first: a good tape recorder is very important.
At the beginning, when you know little about linguistic sounds, you may easily be mistaken. But, with more experience, you can be very precise. And —what’s more— you can arrive at the mean value of a series of sounds (which you represent with a given symbol) in a way that machines can’t. Machines can tell you more than you need about any particular sound string you have recorded, but nothing beyond it. It’s up to you to obtain the mean value, by listening to different voices as well. Machines are quite dull in this respect. The human ear is stratospherically better than any machine, also because it doesn’t bother with what is only plain and confusing rubbish.
Because they’re based on the «highest point» of the tongue in X-ray prints. This produced a sort of deformed trapezoid, with the upper part much longer than the lower part, and the back part shorter than the front part. The reasons for these asymmetries lie in precise physical constraints: the tongue is in fact more mobile in the high-front area than in the low-back area.
If we consider, instead, a constant point, namely the center of the medium-dorsum (that is, the absolute center of the back of the tongue), the resulting figure is similar to a much more regular quadrilateral. And even though any diagram with sharp corners is rather unnatural, it is still helpful to make the figure as schematic and regular as possible. Although simplified in this way, the diagram retains all of its usefulness in practical contexts.
Another defect was the attempt to subdivide the internal spaces between the four «cardinal» points in the quadrilateral by means of «auditory equidistance», instead of continuing with articulatory subdivisions naturally aided by auditory feedback. It is quite clear that something which is purely auditory cannot be faithfully transmitted without direct contact with the source or producer of the sound. In this manner, even the learning and training of specialized phoneticians has suffered, and the results have inevitably included undesired and unappreciated discrepancies with respect to the articulatory method assisted by auditory feedback.
However, it must be acknowledged that chaos reigned supreme before Jones’s Cardinal Vowels. They were a very great achievement indeed and we will all be eternally grateful to DJ for them.
It’s up to you. If I become deaf, I will do acoustic phonetics, of course. Then I could at least «see» the sounds and intonation samples I would no longer be able to listen to. But isn’t that what so many acoustic phoneticians do every day, possibly while listening to some more or less good music? As a matter of fact, they don’t care about how sounds and pitch sequences actually sound. They just care about their physical characteristics or their quantitative peculiarities. They just content themselves with watching, like so many Peeping Toms. If you know what I mean.
I would be blind if I did. The fact is that I’m more interested in the artistic side of sounds than in the technical side.
Simply because, by definition, a diphthong is a sequence of two vowel sounds not separated by an increase in stress. Instead, [j] and [w] are consonant sounds, just like [m, l, p, s]. And nobody considers sequences like [ma, la, pa, sa] as diphthongs.
To avoid confusion between the phonic level and the ubiquitous but far less significant —in our sense, at least— graphic level.
I do, however old-fashioned this might make me appear! To me, it’s a way of showing respect to an aspect of a language which is secondary to none. Just as people strive hard to use correct grammar and proper vocabulary, why shouldn’t they also try to pronounce their language correctly?
A pronunciation which can’t be placed, either geographically or —to some extent, at least— socially. Very often uninterested people deny its existence, although they themselves end up being somehow influenced by it. Of course, neutral pronunciation has to be learnt, because —generally— nobody has it from birth, just as nobody fully masters grammar quite spontaneously, or the lexicon without personal application.
Why not? Provided they share the principles of Natural Phonetics and are willing to go into them deeply enough. In addition, they should know enough about the language(s) they intend to work on together. That’s to say: we must get to know ourselves phonetically, or natural-phonetically.