It has been demonstrated that modified stacked denoising autoencoders (MSDAEs) can implement high-performance missing-value imputation schemes. On the other hand, complete MSDAE (CMSDAE) classifiers, which extend their inputs with target estimates from an auxiliary classifier and are trained layer by layer to recover both the observation and the target estimates, offer better classification results than MSDAEs. Consequently, investigating whether CMSDAEs can also improve MSDAE-based imputation is of clear practical importance. In this correspondence, two types of imputation mechanisms with CMSDAEs are considered. The first is a direct procedure in which the CMSDAE output is just the target. The second mechanism is suggested by the presence of the targets in the vectors to be autoencoded: it applies well-known multitask learning (MTL) ideas, treating the observations as a secondary task. Experimental results show that these CMSDAE structures, in particular the MTL versions, improve the quality of missing-value imputation, giving the best result in 5 of the 6 missing-value problems considered.
autoencoding; complete deep learners; missing data; multitask learning