The Physics of a Language

Sharif Adnan

 

If I say that a language is, at bottom, nothing but physics, you will raise an eyebrow, won’t you?

Yet this is not a hypothetical claim but a scientific one. Still skeptical? Okay, let me clarify my position: welcome to the physics of language.

Language is a combination of speech sounds, and air is one of the essential ingredients for producing speech. Every speech sound has a unique quality and specific physical properties at the moment it is uttered. Etymologically, the term acoustic phonetics goes back to two Greek words: akoustikos, meaning “of or for hearing, ready to hear,” and phonetikos, meaning “vocal.” Acoustic phonetics, then, is the branch of linguistics that deals with the physical properties of speech sounds.

Language is ever-changing, and new branches of linguistics keep emerging; acoustic phonetics is one such branch of modern linguistics. Now, let’s walk through some fundamental concepts of acoustic phonetics, which are interrelated and help us analyze speech sounds in a systematic way.

Waves

If you throw a stone into the middle of a quiet pond, it sends ripples across the surface. Sound moves through the air in much the same way those ripples move through the water.

Fig. 1.1: X-Y wave graph

In Figure 1.1, the line X represents the surface of the water, that is, the rest position; Y marks the crest, the highest point of the wave; and the lowest point of the wave is its trough.

The most fundamental features of a wave are its frequency and amplitude.

Fig.1.2: Amplitude and frequency

Amplitude is the maximum distance a wave travels from its rest position, and for sound it is expressed in decibels (dB). By analyzing the amplitude we learn how far the air particles are displaced and, perceptually, how loud the sound is; loudness corresponds to amplitude, not to frequency.
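To make this concrete, here is a small sketch (my own illustration, not from the article) that expresses the amplitude of a digitized sine wave in decibels; the sampling rate, tone frequency, peak amplitude, and reference level are arbitrary choices.

# A minimal sketch: the amplitude of a sine wave expressed in decibels.
# The sampling rate, frequency, peak amplitude, and reference level are
# illustrative assumptions, not values taken from the article.
import numpy as np

fs = 16000                                  # samples per second (assumed)
t = np.arange(fs) / fs                      # one second of time points
peak = 0.5                                  # peak amplitude of the wave
wave = peak * np.sin(2 * np.pi * 1000 * t)  # a 1 kHz tone

rms = np.sqrt(np.mean(wave ** 2))           # root-mean-square amplitude
level_db = 20 * np.log10(rms / 1.0)         # decibels relative to a reference of 1.0
print(f"RMS amplitude: {rms:.3f}, level: {level_db:.1f} dB re 1.0")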

Frequency:

Frequency refers to the number of waves that pass a point per unit of time. It is an objective, measurable property, and a sound can be analyzed through this aspect of the wave. Frequency is measured in hertz (Hz), that is, cycles (vibrations) per second. For a human to hear a sound, the frequency of the vibration must lie roughly between 20 and 20,000 Hz, the normal audible range for human beings.
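As a small worked example (again my own sketch, not part of the article), frequency can be computed as the reciprocal of the period of one cycle and then checked against the rough 20–20,000 Hz audible range mentioned above.

# A minimal sketch: frequency as the reciprocal of the period, checked
# against the approximate human audible range cited in the text.
def frequency_from_period(period_seconds):
    """Return the frequency in hertz for a cycle lasting period_seconds."""
    return 1.0 / period_seconds

def is_audible(freq_hz, low=20.0, high=20000.0):
    """Rough audibility check using the 20-20,000 Hz range."""
    return low <= freq_hz <= high

f = frequency_from_period(0.005)   # a 5 ms period corresponds to 200 Hz
print(f, is_audible(f))            # 200.0 True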

 Fig.1.3: High and low frequency

Compression and Rarefaction:

Longitudinal waves have compressions and rarefactions. A compression is a region of the wave where the particles are pressed closest together; a rarefaction is a region where the particles are spread furthest apart.

Fig. 1.4: Compression and rarefaction

Soundwaves:
Soundwaves result from a vibration and travel through a propagation medium, that is, the pathway along which sound travels. In the earlier example, a stone was thrown into the pond, and the water was the propagation medium. For speech, air is the usual medium through which sound travels.

In acoustic phonetics there are two types of vibration: periodic and aperiodic.

Periodic vibration: a vibration consisting of a regularly repeated pattern, like the simple wave shown in Figure 1.1.

Aperiodic vibration: a vibration with no regularly repeating pattern and therefore a less musical, less rhythmic quality, like the roar of a jet engine.
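To make the contrast concrete, here is a small sketch (my own illustration, with arbitrary sampling rate and frequencies) that builds a periodic signal, a pure tone, and an aperiodic one, white noise.

# A minimal sketch contrasting periodic and aperiodic vibration.
# The sampling rate, tone frequency, and noise level are assumed values.
import numpy as np

fs = 16000                                 # samples per second (assumed)
t = np.arange(fs) / fs                     # one second of time points

periodic = np.sin(2 * np.pi * 200 * t)     # regular repeating pattern: a 200 Hz tone
rng = np.random.default_rng(0)
aperiodic = rng.normal(0.0, 0.3, size=fs)  # no repeating pattern: white noise

# The tone repeats exactly every 1/200 s (80 samples); the noise does not.
period = fs // 200
print(np.allclose(periodic[:period], periodic[period:2 * period]))    # True
print(np.allclose(aperiodic[:period], aperiodic[period:2 * period]))  # False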

Fig. 1.5: Soundwaves

Pitch:

Pitch is the perceptual correlate of the fundamental frequency of the speech wave: it is how we hear frequency. Broadly, the higher the frequency, the higher the perceived pitch.
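Pitch itself is perceptual, but the underlying fundamental frequency can be estimated from a waveform. The sketch below uses a simple autocorrelation peak, one common textbook approach rather than anything prescribed here, and the signal it analyzes is a synthetic tone.

# A minimal sketch: estimating the fundamental frequency (heard as pitch)
# of a waveform from its autocorrelation peak. The test signal is synthetic.
import numpy as np

def estimate_f0(x, fs, f_min=50.0, f_max=500.0):
    """Return a rough fundamental-frequency estimate in hertz."""
    x = x - x.mean()
    corr = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lags >= 0
    lo, hi = int(fs / f_max), int(fs / f_min)             # plausible lag range
    lag = lo + np.argmax(corr[lo:hi])                     # lag of the strongest repeat
    return fs / lag

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 120 * t)          # a 120 Hz tone, roughly a low voice
print(round(estimate_f0(tone, fs)))         # about 120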

Fig. 1.6: Low and high pitch

Formants:

In a speech signal, the resonant frequencies of the vocal tract (the frequencies that resonate the loudest) are called formants. They show up as peaks in a spectrum. At any one point in time there may be any number of formants, but for speech the most informative are the first three, referred to as F1, F2, and F3.
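Formant frequencies are often estimated by modelling the vocal tract as a set of resonances. The sketch below uses linear prediction, a standard textbook technique rather than anything specific to this article, and runs it on a synthetic signal whose two resonances are known in advance; all parameter values are illustrative assumptions.

# A minimal sketch: estimating resonant (formant-like) frequencies with
# linear prediction. The test signal is synthetic noise shaped by two known
# resonances near 700 Hz and 1200 Hz; all settings are illustrative.
import numpy as np
from scipy import linalg, signal

fs = 8000                                       # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)

# Build a signal with resonances near 700 Hz and 1200 Hz (vowel-like F1 and F2).
a_true = np.array([1.0])
for f in (700.0, 1200.0):
    theta, r = 2 * np.pi * f / fs, 0.97
    a_true = np.convolve(a_true, [1.0, -2 * r * np.cos(theta), r * r])
x = signal.lfilter([1.0], a_true, rng.normal(size=fs))

# Autocorrelation-method linear prediction of order p.
p = 4
ac = np.correlate(x, x, mode="full")[len(x) - 1: len(x) + p]
lpc = linalg.solve_toeplitz(ac[:p], ac[1:p + 1])

# The resonant frequencies are the angles of the complex roots of the
# prediction polynomial, scaled back to hertz.
roots = np.roots(np.concatenate(([1.0], -lpc)))
roots = roots[np.imag(roots) > 0]
formants = sorted(np.angle(roots) * fs / (2 * np.pi))
print([round(f) for f in formants])             # roughly [700, 1200]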

Fig. 1.7: Formants of four American vowel sounds

Spectrogram

A spectrogram is a visual representation of sound: it displays the amplitude of the frequency components of the signal over time.

Fig. 1.8. A Spectrogram of the Word

A spectrogram is produced by a mathematical analysis of the signal (typically a short-time Fourier transform). It displays time on the horizontal X-axis and frequency on the vertical Y-axis. In Figure 1.8, the periodic, aperiodic, and transient portions of the sound are marked.
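As an illustration (my own sketch, not taken from the article), a spectrogram can be computed with an off-the-shelf short-time Fourier transform routine; the input here is a synthetic rising tone rather than a recorded word, and the sampling rate and window length are arbitrary choices.

# A minimal sketch: computing a spectrogram (time on the X-axis,
# frequency on the Y-axis). The input is a synthetic chirp.
import numpy as np
from scipy import signal

fs = 8000                                       # sampling rate in Hz (assumed)
t = np.arange(2 * fs) / fs                      # two seconds of samples
x = signal.chirp(t, f0=200, f1=2000, t1=2.0)    # a tone rising from 200 Hz to 2 kHz

# freqs in Hz, times in seconds, and the power in each time-frequency cell.
freqs, times, power = signal.spectrogram(x, fs=fs, nperseg=256)
print(power.shape)                              # (frequency bins, time frames)

# To plot it (optional):
# import matplotlib.pyplot as plt
# plt.pcolormesh(times, freqs, 10 * np.log10(power)); plt.show()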

In sum, phonetics deals with the general properties of human speech sounds, whereas acoustic phonetics focuses on their physical properties. Every speech sound has a specific, measurable effect on the air during its production, and analyzing those effects scientifically is the main concern of acoustic phonetics. Other areas of linguistics also analyze language, but what makes acoustic phonetics special is that it handles linguistic data with the precision needed by anyone who wants to study sound scientifically. Not bad, huh!

 
