
In my last technical article, I wrote about the relationship between amplifier power and how we hear. In that piece I showed that small increases in amplifier power make almost no difference to the loudness we hear. In this article I'm going to delve into the murky waters of how different manufacturers derive their power ratings, and why, in many instances, because of how the measurements were made, you cannot compare amplifier power ratings.
Theoretically, calculating amplifier power is easy.
Using Ohm’s Law, we can calculate amplifier power in a number of ways, but I’ll stick to the most common.
Power (in watts) = Voltage squared divided by Resistance, or P = V²/R
Or
Power = Current multiplied by Voltage, or P = I x V
It is worth noting that when we talk about voltage we mean RMS voltage, which stands for Root Mean Square and is the DC equivalent of an AC voltage. To convert the peak voltage of a sine wave to RMS, you simply multiply the peak voltage by 0.707. I'll get to the importance of this point a little later.
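As a purely illustrative example of that conversion: a sine wave that peaks at 100 volts has an RMS value of 100 x 0.707 = 70.7 volts, which delivers the same power into a load as 70.7 volts of DC would.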
The Resistance in the equation would typically be the impedance of a loudspeaker. Speaker impedance is a little different to speaker resistance as it is frequency dependent.
When I first got into audio, a long, long time ago, most manufacturers followed a similar approach to deriving their amplifier power ratings.
In the stereo world most manufacturers' specifications would say something along the lines of 50 watts per channel, both channels driven into 8 ohm at less than 0.1% distortion and from 20 Hz to 20 000 Hz.
By using Ohm's Law we could calculate that this amplifier delivers 20 volts RMS (or 28.29 volts peak) into an 8 ohm speaker:
P = (20 x 20)/8
P = 50 watts
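And to confirm the peak figure quoted above, we simply reverse the RMS conversion:
Peak voltage = 20/0.707
Peak voltage = 28.29 volts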
This spec list was quite comprehensive and made comparing amplifier power (but not sound) relatively easy, so long as all parameters were kept the same.
And then a number of things happened, the first of these, in my opinion, being the introduction of Home Theatre.
At the inception of Home Theatre, the rear channels of a Dolby Pro Logic system were bandwidth-limited from 100 Hz to 7 kHz. This, together with the fact that subwoofers were now becoming more common in audio systems, meant that in a sub-equipped system the main amplifier didn't need to do any of the real low frequency heavy lifting. This in turn meant that engineers could, shall we say, finesse amplifier specs a little.
We now began to see amplifiers rated at, for example, 5 x 100 watts at 8 ohm.
What was really meant was not that the above amplifier could actually deliver 100 watts into all channels simultaneously and from 20 Hz to 20 kHz, but rather that if used in a Pro Logic system and with a subwoofer, the amplifier had the potential to deliver its rated power. Change any parameter so that more power was needed, such as running the front channels full range, and the amplifier would run out of steam rather quickly.
Such an amplifier would likely have a 2 amp fuse on the power supply side, and this brings me to the second power calculation I mentioned earlier: Power = Current multiplied by Voltage.
By using this method to calculate power, we find that the most power this amplifier could draw from Eskom (load shedding dependent of course) would be 440 watts, calculated by multiplying the fuse rating (2 amp) by the incoming voltage of 220 volts.
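Written out in the same way as our earlier calculations:
P = 2 amps x 220 volts
P = 440 watts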
Now, because an amplifier can't continuously deliver more power than it draws from its supply, we can see that our amplifier could only deliver around 88 watts per channel with all five channels driven simultaneously. But even this is a bit of a stretch, as it assumes that the amp is 100% efficient. That is, every watt the amplifier draws from the wall is available to drive the speakers attached to it.
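Shared equally across the five channels, that works out as:
440 watts / 5 channels = 88 watts per channel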
In a best-case scenario we could assume that the amp is a Class D design and is 85% efficient, with the remaining 15% of the power used to drive circuitry in the amplifier or lost as heat (Class A/B or Class A amplifiers are far less efficient and will have higher losses). This means that only 374 watts are available across all channels to drive speakers.
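Again, the arithmetic is straightforward:
440 watts x 0.85 = 374 watts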
If used in a home theatre, our example amplifier would probably be able to deliver 100 watts to each of the three front speakers, and around 37 watts to each of the two rear speakers. All in all, this wasn't too bad, and not too far from the quoted specification.
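That allocation uses up just about all of the available power:
(3 x 100 watts) + (2 x 37 watts) = 374 watts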
When we moved from Pro Logic to Dolby Digital (AC-3), the rear channels became full bandwidth, and this meant that theoretically all five channels could demand maximum power. In this scenario, and if we divided power equally, the maximum would be about 75 watts per channel.
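Using the 374 usable watts from our example:
374 watts / 5 channels = 74.8 watts, or roughly 75 watts per channel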
Our amplifier example would still be rated at 5 x 100 watts, and so long as the other specifications were in line, this would still be considered a reasonable specification, because in theatre mode, where it's unlikely that all channels would be driven to the maximum at once, each channel could still potentially deliver at, or close to, 100 watts.
Somewhere down the home theatre line, someone decided that a little more creativity was needed to make their products a little more marketable than the opposition, and the standard 8 ohm rating became 6 ohm.
Read on to see how this change would affect the power specification of an imaginary 5 x 50 watt amplifier.
Going back to our Ohm’s Law calculation for a 50 watt amplifier, but now using 6 ohm instead of 8 ohm we get the following:
P = (20 x 20)/6
P = 66.67 watts
As you can see, by simply changing the rating from 8 ohm to 6 ohm, our power rating increased by 33%.
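To see where the 33% comes from:
66.67 watts / 50 watts = 1.33, a 33% increase for exactly the same amplifier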
The next thing that happened was that a decimal point was shifted and the distortion limit was moved to around 1%. With a little more distortion allowed, our same amplifier would now make around 70 watts. Following this, we saw two seemingly insignificant changes in the ultra-fine print of the spec sheet.
Now, if you followed the asterisk at the end of the power spec and looked at the bottom of page 67, subsection 3, you would read the following: "power at 6 ohm, 1% distortion, one channel driven at 1000 Hz."
Measured under those conditions, the single driven channel would easily make over 100 watts. Marketing could now simply call this a 5 x 100 watt amplifier, which makes it far more saleable than an opposition product rated at half the power, even if the reality is that both amplifiers deliver the same power.
Of course, none of us would ever listen to only one channel at full volume playing a 1000 Hz test tone, so using these conditions to derive amplifier power seems just a little dishonest to me. Manufacturers counter this by saying that because the test methodology is disclosed, they're being honest.
But wait, there’s more.
It has become fairly common, particularly for mass market brands, to list a peak power rating in bold on a spec sheet and an RMS rating in a much smaller font elsewhere.
You may remember I mentioned the importance of RMS voltage right at the top of this piece. It turns out that if we use the peak voltage rather than the RMS voltage in the power calculation, we get exactly double the power.
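You can see this using our original 8 ohm example: plugging the 28.29 volt peak figure into the formula instead of the 20 volt RMS figure gives
P = (28.29 x 28.29)/8
P = 100 watts
which is exactly double the honest 50 watt rating.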
So the peak power of our amplifier, with one channel driven at 1000 Hz and 1% distortion, is 200 watts.
200 watts per channel sounds a lot better than 100 or even 50 watts. And why not go all the way and call this five-channel amplifier a 1000 watt amplifier, which sounds even more impressive? Once again, if all the information is given, these manufacturers aren't really telling porky pies. Misleading, yes, but outright lies, no.
As you can see, it’s easy to get misled about amplifier power ratings, and the more channels there are, the more some manufacturers stretch the truth.
Ultimately, what I set out to do in this article was to point out that when comparing amplifier power, it's vitally important to know how the power ratings were derived. I hope this article sheds some light on this aspect of amplifiers. Better brands will always be open and honest about how they derive their specifications.
There are quite a few oversimplifications in this article that technically minded readers can pick apart. This was done intentionally, to hopefully make the piece more understandable to novices in the arena.
Joel Kopping