FPGA methodology for power analysis of embedded adaptive beamforming

In wireless communication systems, multipath and interference effects degrade the SNR and increase the BER of received signals. Antenna arrays and beamforming algorithms are typically used to improve system performance and noise immunity. In non-adaptive beamforming, where signal-path gains remain fixed, power dissipation is predictable, but SNR and BER depend on the changing statistics of the received signals. In adaptive beamforming, by contrast, signal-path gains are adjusted in real time according to the statistics of the received signals: the SNR is improved and the BER is decreased, but power dissipation undergoes unpredictable transients during the gain-adaptation process. Such transients are especially detrimental to battery-based power management systems in mobile applications. In this paper, we propose an FPGA-based methodology for the analysis, modeling and prediction of power dissipation in embedded array signal processing systems containing adaptive beamforming components. This methodology enables exploration of the adaptive beamforming design space in terms of power, timing, overhead, arithmetic precision and computational resources.

A distinct feature of this methodology is that it enables such design-space exploration in real time and on actual received waveforms. We describe a specific implementation of this methodology using a hardware prototype based on Xilinx's Virtex-7 FPGA. We use this prototype to explore the design space of a four-channel Least-Mean-Squares (LMS) beamformer. The main result of this exploration is the selection of an adaptive-algorithm design point that represents the best tradeoff between parameter convergence, machine precision and energy efficiency for the embedded array signal processor.
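For reference, the complex LMS weight update at the heart of such a beamformer can be sketched as below. This is a generic floating-point illustration in Python/NumPy, not the paper's fixed-point FPGA implementation; the function name, step size mu, and signal names are illustrative assumptions:

```python
import numpy as np

def lms_beamformer(x, d, mu=0.01):
    """Generic complex LMS adaptive beamformer sketch (illustrative).

    x  : (num_samples, num_channels) array of antenna snapshots
    d  : (num_samples,) desired/reference signal
    mu : step size; trades convergence speed against steady-state error
    Returns the final weight vector and the per-sample error signal.
    """
    n_samples, n_ch = x.shape
    w = np.zeros(n_ch, dtype=complex)
    e = np.zeros(n_samples, dtype=complex)
    for k in range(n_samples):
        y = np.vdot(w, x[k])                # beamformer output y = w^H x
        e[k] = d[k] - y                     # error against the reference
        w = w + mu * np.conj(e[k]) * x[k]   # standard complex LMS update
    return w, e
```

The per-sample multiply-accumulate count of this loop is what an FPGA realization maps onto arithmetic resources, and the step size mu directly affects how long the adaptation transients (and hence the power transients discussed above) persist.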