
Advantages of the Doherty Amplifier Configuration


By Boris Aleiner

The Doherty amplifier configuration is a de-facto standard for RF Power Amplifiers. These notes, written in layman’s terms, explain its principles of operation and, as a follow-up, the reason for its overwhelming success.

Introduction

The purpose of the RF Power Amplifier (PA) is to amplify an RF signal to the level expected at the antenna port. This needs to be done not only accurately (so that the system receiver can recognize the transmitted signal), but also efficiently (to avoid wasting DC power and draining the battery).

However, efficient amplifiers introduce distortions that prevent accurate recognition of the transmitted signal. We need to find a compromise between a distortion-free and a still-efficient amplifier. The Doherty configuration offers the least expensive solution for achieving this compromise.

There are many excellent papers on the subject of Doherty amplifiers; however, they focus on the specifics of its implementation. The intention of these notes is to show the logic for and the advantage of the Doherty-type configuration.

Consequences of Backoff

RF Power Amplifiers are divided into different classes (Class A to C for controlled-current-source amplifiers, Class D and above for switched-mode amplifiers). The higher the class, the more efficient (but less linear) the amplifier. Controlled-current-source Power Amplifiers offer the easiest compromise between linearity and efficiency; the issue with them is that their efficiency drops rapidly from its maximum when the input RF power is reduced.
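To put a number on that drop, here is a minimal Python sketch, not taken from the article, that evaluates the textbook ideal-efficiency expressions for Class A and Class B stages versus output back-off; the back-off values chosen are purely illustrative.

```python
# Hedged sketch: ideal drain efficiency of Class A and Class B stages versus
# output power back-off, using the textbook expressions
#   Class A: eta = 0.5 * (v/Vmax)^2
#   Class B: eta = (pi/4) * (v/Vmax)
# where v is the output voltage amplitude. Values are illustrative only.
import math

def class_a_efficiency(backoff_db: float) -> float:
    """Ideal Class A drain efficiency at a given output back-off (dB)."""
    v_ratio = 10 ** (-backoff_db / 20)          # voltage relative to full drive
    return 0.5 * v_ratio ** 2                   # peaks at 50% at full drive

def class_b_efficiency(backoff_db: float) -> float:
    """Ideal Class B drain efficiency at a given output back-off (dB)."""
    v_ratio = 10 ** (-backoff_db / 20)
    return (math.pi / 4) * v_ratio              # peaks at ~78.5% at full drive

for obo in (0, 3, 6, 9, 12):
    print(f"{obo:2d} dB back-off:  Class A {class_a_efficiency(obo):5.1%}   "
          f"Class B {class_b_efficiency(obo):5.1%}")
```

Even at a modest 6 dB back-off the ideal Class B efficiency falls to about half of its peak value, which is exactly the problem the rest of these notes address.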

RF power can be reduced for a number of reasons: orders from base stations and requests from mobiles, to name a few. The most difficult reason to deal with, though, is insufficient linearity. It is an unlikely scenario, since many cost-efficient linearization schemes are available today; however, if they do not work, the only recourse left is to roll back the RF power. The problem is that the efficiency of an amplifier operating at rolled-back power (called "backoff") is sharply reduced.

Parallel Sub-Amplifiers

We need a method to keep efficiency intact when the RF power is backed off. It is conceivable simply to choke the amplifier at that level by reducing its DC supply; however, when the backoff is used to improve linearity, this would not work, since it would bring back an unacceptable level of nonlinearity. We need a different amplifier configuration to resolve this issue: one that uses parallel stages of amplification instead of series ones.

In traditional amplifiers the desired gain is achieved by applying the output of one stage to the input of the next, that is, in series. As was pointed out before, the issue with this approach is that maximum efficiency is reached only at the highest RF power. Instead, we can place those stages (called sub-amplifiers) in parallel with each other. The idea is to tune each sub-amplifier to be as efficient as the linearity of its specific power range allows, and then to combine the results. By properly combining the contributions from the sub-amplifiers, we expect to reduce the efficiency drop when the power is backed off.

The parallel-stage approach makes the sub-amplifiers share the load (versus having a dedicated load, as in "traditional" amplifiers), and this sharing is what allows a constant level of efficiency over the power range. In effect, this approach creates what is known as "Active Load Pull" or "Load Modulation."

Load Modulation

Load modulation is changing the actual value of a load by combining RF currents from the sub-amplifiers. According to Ohm's Law, the higher the total current, the lower the value of the load. That is, applying separate RF outputs to the load changes its value, or, in other words, modulates it. This modulation changes the sub-amplifiers' efficiency, since efficiency is determined by two factors: the DC power supply and the value of the output load. At a fixed bias (the voltage between a transistor's gate and source), it is easier for a transistor to work into a smaller load, which shows up as higher linearity (in other words, lower efficiency). Conversely, when the load increases, the voltage drop across it increases and the DC voltage left at the transistor's drain is reduced (since this voltage is the difference between the DC supply and the voltage drop across the load); the transistor is choked, and this shows up as higher efficiency. This is shown graphically in the Appendix using a technique known as a "load line."

A load modulated by parallel sub-amplifier stages creates the conditions for constant efficiency over a range of RF powers.
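To make the load-efficiency link above concrete, here is a minimal Python sketch (with illustrative supply and drive values, not taken from the article) of an ideal Class B stage at a fixed bias and fixed drive: as the load resistance grows, the voltage swing grows with it until it reaches the supply rail and the stage is choked, so efficiency rises with the load value.

```python
# Hedged sketch: ideal Class B drain efficiency versus load resistance at
# fixed bias and fixed drive. The fundamental current amplitude I1 is held
# constant; the voltage swing grows with the load until it hits the supply,
# after which the stage is voltage-saturated ("choked") at eta = pi/4.
import math

VDD = 28.0          # drain supply voltage, V (assumed for illustration)
I1  = 1.0           # fundamental drain-current amplitude, A (assumed)

def class_b_efficiency_vs_load(r_load: float) -> float:
    v_swing = min(I1 * r_load, VDD)       # voltage swing clips at the supply
    return (math.pi / 4) * (v_swing / VDD)

for r_load in (5, 10, 20, 28, 40):
    print(f"R_load = {r_load:3d} ohm  ->  efficiency = "
          f"{class_b_efficiency_vs_load(r_load):5.1%}")
```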

Impedance Inverters

Load modulation changes the efficiency of the Power Amplifier. When two separate RF signals are applied to the load, its value is reduced, which means that the efficiency of the sub-amplifiers is reduced (as shown in the previous chapter). To keep efficiency high, we need the load to increase as the power increases. This is done by an impedance inverter. The inverter also helps to separate the sub-amplifiers from one another.

The simplest way to implement an impedance inverter is to use a quarter-wave transmission line; however, this approach is bandwidth-limited. There are other ways to invert impedance, including sections of transmission lines with different impedances connected in PI or TEE configurations, which makes bandwidth much less of an issue in practice.
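A minimal Python sketch of the quarter-wave inverter, using the standard lossless transmission-line input-impedance formula; the impedances and frequencies are illustrative assumptions, not values from the article. At the design frequency the line is a pure inverter (Zin = Z0^2/ZL); away from it the inversion degrades, which is the bandwidth limitation mentioned above.

```python
# Hedged sketch: impedance inversion by a quarter-wave transmission line,
# using the standard lossless-line formula
#   Zin = Z0 * (ZL + j*Z0*tan(theta)) / (Z0 + j*ZL*tan(theta)),
# where theta is the electrical length (90 degrees at the design frequency).
import math

def quarter_wave_zin(z0: float, z_load: complex, f: float, f0: float) -> complex:
    theta = (math.pi / 2) * (f / f0)          # 90 degrees at f = f0
    t = complex(0.0, math.tan(theta))
    return z0 * (z_load + z0 * t) / (z0 + z_load * t)

Z0, ZL, F0 = 50.0, 25.0, 2.0e9                # ohm, ohm, Hz (assumed values)
for f in (1.6e9, 1.8e9, 2.0e9, 2.2e9, 2.4e9):
    zin = quarter_wave_zin(Z0, ZL, f, F0)
    print(f"f = {f/1e9:3.1f} GHz  ->  Zin = {zin.real:6.1f} {zin.imag:+6.1f}j ohm")
```

At 2.0 GHz the 25 ohm load is inverted to about 100 ohm; at the band edges a noticeable reactive part appears, showing why a single quarter-wave section limits bandwidth.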

Principle of Operation

To summarize: an amplifier with high efficiency over a range of RF input powers should consist of parallel sub-amplifiers (each tuned to its maximum efficiency in its predetermined range of input powers) connected to the same load and separated from each other by impedance inverters. The resulting block diagram for two sub-amplifiers is given in Fig. 1.


Fig. 1 • Block diagram of the concept.

Let's bias one of the sub-amplifiers in Class AB and the other in Class B. In this case, at low input powers only the AB-biased sub-amplifier is working, and it behaves as a "traditional" amplifier. As the power increases, the second sub-amplifier (biased in Class B) starts to operate, so each of them contributes to the load. The RF current through the load increases, so the actual value of the load is reduced. Due to the impedance inverter, the Class AB sub-amplifier sees the load value as increased, so its efficiency should improve.

When the Class B sub-amplifier starts to operate, though, the efficiency of the Class AB amplifier drops, because its input RF power is reduced (part of it is now applied to the Class B amplifier); but this drop is compensated by the load increase seen by this amplifier. As a result, the overall efficiency of the Class AB amplifier stays flat as the RF power increases, which is an improvement over the drastic drop of efficiency in "traditional" amplifiers.

The Class B sub-amplifier does not see an inverter, so it behaves as a "traditional" highly efficient amplifier.

The resulting efficiency of this type of Power Amplifier is shown in the graph of Fig. 2.


Fig. 2 • Amplifier’s efficiency η.

Now, with the concept block diagram complete, we can describe its operation mathematically. The equations are based on Kirchhoff's laws; they are straightforward and have been presented by many authors (starting with William H. Doherty, who pioneered this idea in 1936). The goal of the equations is to confirm the graph in Fig. 2 and to see what improvements can be made to the operation of this type of amplifier. Various modifications based on this approach are given in the literature.
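Since Fig. 2 is reproduced here only as a caption, the sketch below evaluates the well-known textbook efficiency curve of an ideal symmetric (6 dB) two-way Doherty with Class B sub-amplifiers; it is the standard result from the literature, offered as an illustration rather than a derivation of the article's own equations, and the drive points are arbitrary.

```python
# Hedged sketch: ideal two-way (6 dB) Doherty drain efficiency versus
# normalized drive v (v = 1.0 is full power; the peaking stage turns on
# at v = 0.5). Standard textbook expressions, illustrative points only.
import math

def doherty_efficiency(v: float) -> float:
    """Ideal symmetric Doherty drain efficiency at normalized drive v."""
    if v <= 0.5:
        # Only the carrier is active; it behaves like a Class B stage
        # working into twice its optimum load, so efficiency rises linearly.
        return (math.pi / 2) * v
    # Both stages active: the classic Doherty-region expression.
    return (math.pi / 2) * v * v / (3 * v - 1)

for v in (0.25, 0.5, 0.67, 0.8, 1.0):
    backoff_db = -20 * math.log10(v)          # output power back-off in dB
    print(f"drive {v:4.2f} ({backoff_db:4.1f} dB back-off) -> "
          f"efficiency {doherty_efficiency(v):5.1%}")
```

The numbers reproduce the familiar shape of Fig. 2: the efficiency reaches roughly 78.5% at 6 dB back-off, dips slightly in between, and returns to 78.5% at full power, instead of collapsing the way a single Class B stage does.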

Conclusion

In these notes the Power Amplifier concept of the Doherty-type configuration was presented and explained. It is based on independent parallel branches of sub-amplifiers, each tuned to operate in its own RF power range and connected to a common load. It was shown that, by virtue of load modulation (reducing the load value by adding RF signals from the sub-amplifiers) and the application of impedance inverters (to enhance the efficiency of the sub-amplifiers when the load value is reduced), this configuration improves efficiency when the RF power is backed off.


Fig.A1 • Load line.

The reasons for backoff are many, and the most difficult one to deal with is nonlinearity correction, where the Doherty configuration is the only cost-effective solution for sustaining power efficiency. It does not require additional components, since "traditional" amplifiers are very often also implemented with parallel branches of sub-amplifiers (which is done for many reasons, reliability improvement being one of them); they simply work differently, and no load modulation is involved.

That is to say, the Doherty configuration offers an important advantage over a "traditional" amplifier configuration (enhanced efficiency when power drops) without drawbacks and at no additional cost. As a consequence, it offers significant savings over other efficiency-enhancement schemes needed for traditional amplifiers, where additional costly components are required.

Literature

No papers were quoted here, since the goal was to demonstrate the logic leading to the Doherty-type configuration. There are many excellent papers on the subject of Doherty amplifiers; one of the most comprehensive is the paper by Raymond Pengelly, Christian Fager, and Mustafa Özen entitled "Doherty's Legacy" (IEEE Microwave Magazine, February 2016). The authors compiled works by many authorities on Doherty-type amplifiers (S. C. Cripps, F. H. Raab, A. Grebennikov, et al.), with a total of 87 entries. It is a good starting point for anyone who needs to expand on specific areas of Doherty-type amplifiers.

Appendix

The notion of Load Modulation is better understood through the graphical presentation known as a "load line." It superimposes the current-voltage (I-V) curves of a transistor on the I-V graph of a resistive load.

The transistor I-V curves display the dependence of the drain current ID on the drain-to-source voltage VDS, with the gate-to-source voltage VGS as a parameter. For an idealized transistor the drain current depends only on how open the transistor's channel is, so it does not change with variations of VDS and depends only on VGS.

The graph of a resistive load is a representation of Ohm's Law (which states that the current through a conductor between two points is directly proportional to the voltage across those points), so it is a straight line whose position is fixed by the voltage applied to the load and the current through it.

The point where they intersect determines the operating point (that is, the point at which RF operation starts), since the same current flows through the load and the transistor, as seen in the insert in Fig. A1.

The load line shows that, for any given VGS, changes in the load lead to corresponding changes in VDS. When the value of R is increased, the load line becomes shallower (its slope is -1/R), so VDS becomes smaller. The reduced VDS chokes the transistor and makes it operate in a less linear (that is, more efficient) mode.
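A minimal Python sketch of this intersection, using an idealized transistor whose drain current is set only by VGS; the supply, threshold, and transconductance numbers are illustrative assumptions, not values taken from Fig. A1.

```python
# Hedged sketch of the load-line operating point: the transistor curve
# (ID set only by VGS in this idealized model) intersects the load line
#   VDS = VDD - ID * R,
# so raising R leaves less drain-source voltage across the device.

VDD = 28.0   # drain supply, V (assumed)

def drain_current(vgs: float) -> float:
    """Idealized ID(VGS): simple linear transconductance above threshold."""
    vth, gm = 3.0, 0.5          # threshold (V) and gm (A/V), assumed values
    return max(0.0, gm * (vgs - vth))

def operating_point(vgs: float, r_load: float) -> tuple:
    """Intersection of the transistor curve with the resistive load line."""
    i_d = drain_current(vgs)
    v_ds = VDD - i_d * r_load   # the device keeps whatever the load does not drop
    return i_d, v_ds

for r in (5.0, 10.0, 20.0):
    i_d, v_ds = operating_point(vgs=5.0, r_load=r)
    print(f"R = {r:4.1f} ohm:  ID = {i_d:.2f} A,  VDS = {v_ds:5.1f} V")
```

With the drain current fixed by VGS, doubling R simply doubles the voltage dropped across the load, leaving less VDS across the transistor, which is the "choking" behavior described above.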

About the Author

Boris Aleiner is an RF Engineer with many years of experience at leading telecom companies. He has a number of patents, has published numerous papers, and now serves as a consultant. He can be reached at baleiner@gmail.com.
