Optimum Averaging of Superimposed Training Schemes in OFDM under Realistic Time-Variant Channels

publication date

  • August 2021

start page

  • 115620

end page

  • 115631

volume

  • 9

Electronic International Standard Serial Number (EISSN)

  • 2169-3536

abstract

  • The current global bandwidth shortage in orthogonal frequency division multiplexing (OFDM)-based systems motivates the use of more spectrally efficient techniques. Superimposed training (ST) is a candidate in this regard because it incurs no information rate loss; additionally, it is very flexible to deploy and has a low computational cost. However, the data symbols transmitted together with the training sequences cause an intrinsic interference. Previous studies, based on an oversimplified quasi-static channel model, have mitigated this interference by averaging the received signal over the coherence time. In this paper, the mean square error (MSE) of the channel estimate is minimized in a realistic time-variant scenario. The optimization problem is stated and theoretical derivations are presented to obtain the optimum number of OFDM symbols to be averaged. The derived optimal averaging length depends on the signal-to-noise ratio (SNR) and yields an MSE up to two orders of magnitude better than that obtained by averaging over the coherence time. Moreover, in most cases the optimal number of OFDM symbols is much smaller, roughly a 90% reduction with respect to the coherence time, which also decreases the system delay. These results therefore achieve the goal of improving the channel estimation error while attaining better energy efficiency and reduced delays.
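
  The trade-off the abstract describes (longer averaging suppresses noise and data interference but lets the channel "age" within the window) can be illustrated numerically. The following is a minimal sketch, not the authors' derivation: the parameters (K, rho, alpha, snr_db, n_sym, n_trials) are illustrative assumptions, the time-variant channel is a first-order Gauss-Markov (AR(1)) process standing in for the paper's channel model, and the estimator is a plain averaged least-squares division by the known superimposed pilot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper)
K = 64           # subcarriers
snr_db = 10      # per-subcarrier SNR
rho = 0.999      # AR(1) correlation between consecutive OFDM symbols
alpha = 0.3      # fraction of transmit power given to the superimposed pilot
n_sym = 400      # OFDM symbols simulated per trial
n_trials = 200   # Monte Carlo trials

noise_var = 10 ** (-snr_db / 10)
# Known unit-modulus pilot, scaled to its power share
c = np.sqrt(alpha) * np.exp(2j * np.pi * rng.random(K))

def run_trial(rng):
    """Simulate n_sym OFDM symbols over an AR(1) time-variant flat channel."""
    h = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    y = np.empty((n_sym, K), dtype=complex)
    for m in range(n_sym):
        # Channel evolves between OFDM symbols (stationary, unit variance)
        h = rho * h + np.sqrt(1 - rho**2) * (
            rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
        # QPSK data superimposed on the pilot, plus AWGN
        d = np.sqrt((1 - alpha) / 2) * (
            rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K))
        w = np.sqrt(noise_var / 2) * (
            rng.standard_normal(K) + 1j * rng.standard_normal(K))
        y[m] = h * (c + d) + w
    return y, h  # h is the channel at the most recent symbol

# MSE of the averaged LS estimate versus the averaging window M:
# averaging the last M received symbols shrinks data interference and noise,
# but the estimate drifts away from the current channel as M grows.
windows = np.arange(1, n_sym + 1, 10)
mse = np.zeros(len(windows))
for _ in range(n_trials):
    y, h_true = run_trial(rng)
    for i, M in enumerate(windows):
        h_hat = y[-M:].mean(axis=0) / c
        mse[i] += np.mean(np.abs(h_hat - h_true) ** 2)
mse /= n_trials

best = windows[np.argmin(mse)]
print(f"optimal averaging window ~ {best} OFDM symbols, MSE = {mse.min():.4f}")
```

  Sweeping snr_db in this sketch shows the qualitative behavior claimed in the abstract: the MSE-minimizing window length shifts with the SNR, and it is typically far shorter than a window sized to the channel coherence time.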

keywords

  • ofdm; superimposed training; time-variant channel; channel estimation; least squares; optimization; averaging