Support Vector Machines (SVMs), originally proposed for binary classification, have become a very popular technique in machine learning. For multi-class classification, various single-objective and multi-objective models have been proposed. However, most single-objective models consider neither the different costs of different misclassifications nor the users' preferences. This drawback has been addressed by multi-objective models, but these models construct large and hard second-order cone programs (SOCPs) and offer only weakly Pareto-optimal solutions. In this paper, we propose the Projected Multi-objective SVM (PM), a multi-objective technique that works in a higher-dimensional space than the objective space. For PM, we can characterize the associated Pareto-optimal solutions. Additionally, PM significantly alleviates the computational bottleneck of classification with a large number of classes. Our experimental results show that PM outperforms single-objective multi-class SVMs (based on the all-together, one-against-all, and one-against-one methods) as well as other multi-objective SVMs. Compared with single-objective multi-class SVMs, PM provides a wider set of options tailored to different misclassifications without sacrificing training time. Compared with other multi-objective methods, PM improves the out-of-sample quality of the approximation of the Pareto frontier while considerably reducing the computational burden.