Convolutional Neural Networks (CNNs) are widely used for image classification. Pruning is a popular technique for reducing their complexity so that they fit on resource-limited systems such as FPGAs. In this paper, the robustness of pruned CNNs against errors in the weights and in the configuration memory of an FPGA accelerator is evaluated, with VGG16 as a case study and two popular pruning methods (magnitude-based pruning and filter pruning) considered. In particular, the accuracy loss of the original VGG16 and of networks pruned at different rates is measured through fault-injection experiments, and the results show that the effect of errors on weights and on configuration memory differs between the two pruning methods. For errors in the weights, networks pruned with either method become more reliable as the pruning rate increases, but those obtained by filter pruning are relatively less reliable. For errors in the configuration memory, errors in about 30% of the configuration bits affect CNN operation, and only 14% of those introduce significant accuracy loss. However, the same critical bits affect the two pruning methods differently: networks pruned with the magnitude-based method are less reliable than the original VGG16, whereas those obtained by filter pruning are more reliable than the original VGG16. These differences are explained by the structure of the CNN accelerator and the properties of the two pruning methods. The impact of quantization on CNN reliability is also evaluated for the magnitude-based pruning method.
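The two pruning methods and the weight fault-injection step can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the 8-bit `uint8` weight representation, and the single-bit-flip fault model are illustrative assumptions chosen to show the idea (magnitude-based pruning zeroes individual small-magnitude weights, filter pruning removes whole output filters, and fault injection flips one bit of one stored weight).

```python
import numpy as np

def magnitude_prune(w, rate):
    """Magnitude-based pruning (sketch): zero the smallest-magnitude weights."""
    k = int(rate * w.size)
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

def filter_prune(w, rate):
    """Filter pruning (sketch): drop whole output filters with the smallest
    L1 norm. w has shape (out_channels, in_channels, kH, kW)."""
    n = w.shape[0]
    k = int(rate * n)
    norms = np.abs(w).reshape(n, -1).sum(axis=1)
    keep = np.sort(np.argsort(norms)[k:])  # surviving filter indices, in order
    return w[keep]

def inject_weight_fault(w_q, idx, bit):
    """Single-fault injection (sketch): flip one bit of one 8-bit quantized
    weight, mimicking a soft error in the weight memory."""
    faulty = w_q.copy()
    faulty.flat[idx] ^= np.uint8(1 << bit)
    return faulty
```

An accuracy-loss experiment would then rerun inference with `inject_weight_fault` applied at randomly sampled `(idx, bit)` positions and compare against the fault-free accuracy; errors in the FPGA configuration memory are a different fault model (they corrupt the accelerator's logic rather than its data) and are not captured by this sketch.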