A musical "interval" is a frequency ratio.
For example, a "perfect fourth" is a frequency ratio of 4/3, and a "semitone" is a ratio of roughly 1.0595, or "S".
Increasing the effective length of an instrument by a factor of S causes its frequency to decrease by that same factor. A "fractional change" in frequency is defined as the change in frequency divided by the original frequency: if "Fo" is the original frequency and "Fn" is the new frequency, then the fractional change is (Fn - Fo)/Fo, which simplifies to (Fn/Fo - 1).
If the pitch goes up one semitone, then the fractional change in frequency is just S-1, or 0.0595. If the pitch goes up half a semitone (i.e. 50 "cents") then the fractional change in frequency is approximately (S - 1)/2, or about 0.0297. And if the pitch goes up just one cent, then the fractional change in frequency is approximately (S - 1)/100. (Strictly speaking, a "cent" is a ratio equal to the 100th root of S, or about 1.0005777895, so if the pitch goes up by one cent then the fractional change in frequency is about 0.0005777895.)
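These quantities are easy to verify numerically. The short sketch below (the variable names are mine, not part of the original) computes the equal-tempered semitone ratio S as the 12th root of 2, the cent as the 100th root of S, and the fractional changes discussed above:

```python
import math

# Equal-tempered semitone ratio: the 12th root of 2.
S = 2 ** (1 / 12)          # about 1.059463
cent = 2 ** (1 / 1200)     # 100th root of S, about 1.0005777895

# Fractional change in frequency = Fn/Fo - 1 for each interval.
print(f"one semitone:          {S - 1:.4f}")             # about 0.0595
print(f"half semitone (exact): {math.sqrt(S) - 1:.4f}")  # about 0.0293
print(f"half semitone (approx):{(S - 1) / 2:.4f}")       # about 0.0297
print(f"one cent:              {cent - 1:.10f}")         # about 0.0005777895
```

Note that the approximation (S - 1)/2 for a half semitone differs from the exact value, sqrt(S) - 1, only in the fourth decimal place.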
The modern standard frequency for "A" is 440 Hz. ("Hz" is the abbreviation for "hertz", the modern name for cycles per second.)
Some orchestras tune to A = 442 Hz instead of 440. How sharp are they?
The fractional change in frequency = (442 - 440) ÷ 440 = 0.004545. To convert that to semitones, divide by (S - 1):
0.004545 ÷ 0.0595 = 0.0764 semitones, or approximately 7.6 cents sharp.
More Precise Answer:
N = 1200 × log2(442/440) = 3986 × log10(442/440) = 7.8 cents sharp. (The factor 3986 is 1200 divided by log10 of 2.)
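The two methods can be compared side by side. This sketch (function names are mine) implements the approximation from above and the exact base-2 logarithm formula:

```python
import math

S = 2 ** (1 / 12)  # equal-tempered semitone ratio

def cents_approx(f_new, f_old):
    # Approximation: fractional change divided by (S - 1) gives
    # semitones; multiply by 100 to get cents.
    return (f_new / f_old - 1) / (S - 1) * 100

def cents_exact(f_new, f_old):
    # Exact: 1200 times the base-2 log of the frequency ratio.
    return 1200 * math.log2(f_new / f_old)

print(f"approx: {cents_approx(442, 440):.2f} cents")  # about 7.64
print(f"exact:  {cents_exact(442, 440):.2f} cents")   # about 7.85
```

For this two-hertz detuning the two methods agree to within about a fifth of a cent, far below what any listener could detect.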
Notice that my approximation method gives results that are reasonably close to those of the more precise method for intervals that are a small fraction of a semitone. In the work that follows I shall continue to use the approximation method because of its simplicity. Any errors introduced by the approximation will be dwarfed by the intonation quirks of real instruments.