Automated machine learning (AutoML) greatly reduces the human effort required for architecture engineering. However, mainstream AutoML methods, such as neural architecture search (NAS), are tailored to well-designed search spaces in which promising architectures are densely distributed. In contrast, AutoML-Zero builds machine-learning algorithms from basic primitives and can discover novel architectures beyond human knowledge. AutoML-Zero thus shows the potential to build machine-learning systems without relying on either feature engineering or architecture engineering. However, it optimizes only a single objective, such as accuracy, and has no mechanism to ensure that the constraints of real-world applications are satisfied. We propose a multi-objective variant of AutoML-Zero, called MOAZ, that distributes solutions along a Pareto front by trading off the accuracy and computational complexity of the discovered machine-learning algorithms. In addition to generating diverse Pareto-optimal solutions, MOAZ explores the sparse search space effectively, improving search efficiency. Experimental results on linear regression tasks show that MOAZ reduces median complexity by 87.4% compared with AutoML-Zero while maintaining the required accuracy, and accelerates the median search convergence rate by 84%. These results point toward further improving search accuracy and reducing human intervention in AutoML.
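
The selection step at the heart of such a multi-objective search can be sketched as a non-dominated filter over (error, complexity) pairs, where both objectives are minimized. This is an illustrative sketch of generic Pareto filtering, not code from MOAZ; the function name and tuple layout are assumptions.

```python
def pareto_front(candidates):
    """Return the non-dominated subset of (error, complexity) pairs.

    A candidate is dominated if some other candidate is no worse in
    both objectives and strictly better in at least one (both are
    minimized). Illustrative only; not the paper's implementation.
    """
    front = []
    for c in candidates:
        dominated = False
        for o in candidates:
            if o is c:
                continue
            no_worse = o[0] <= c[0] and o[1] <= c[1]
            strictly_better = o[0] < c[0] or o[1] < c[1]
            if no_worse and strictly_better:
                dominated = True
                break
        if not dominated:
            front.append(c)
    return front


# (error, complexity) pairs for four hypothetical algorithms:
# the third is dominated by the first (higher error AND higher cost).
print(pareto_front([(0.1, 50), (0.2, 30), (0.15, 60), (0.05, 100)]))
# → [(0.1, 50), (0.2, 30), (0.05, 100)]
```

Each surviving pair represents a different accuracy/complexity trade-off, so a practitioner can pick the cheapest algorithm that still meets a given accuracy requirement.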