Neural Architecture Search (NAS) is an active research field that aims to design neural networks automatically. Nevertheless, it is usually an expensive process, since the search algorithm must evaluate the performance of many candidate solutions drawn from a vast search space. Consequently, different strategies have been proposed to perform NAS efficiently. The recent development of zero-cost performance predictors has shown great promise, as they estimate a network's performance without any training. However, a predictor's correlation with a model's final performance may depend on both the network search space and the benchmark dataset, so each performance predictor might lead the search process to favor very different network patterns. We define a design principle as a restriction on a hyperparameter distribution that is expected to yield optimal network performance. In this work, we propose an automatic iterative approach to uncover the design principles of deep neural networks optimized by zero-cost performance predictors, and we discuss the insights obtained from its application.
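To make the idea of a zero-cost performance predictor concrete, the sketch below scores candidate architectures at initialization, with no training. It is a simplified, synflow-style proxy written in NumPy (the function name, layer-width encoding, and scoring rule are illustrative assumptions, not the predictors used in this work): an all-ones vector is propagated through the absolute values of randomly initialized weight matrices, and the log of the summed output serves as the score used to rank candidates.

```python
import numpy as np

def zero_cost_score(layer_widths, seed=0):
    """Simplified synflow-style zero-cost proxy (illustrative, not the
    paper's predictors): propagate an all-ones vector through |W_l| for
    randomly initialized linear layers and return log of the output sum."""
    rng = np.random.default_rng(seed)
    h = np.ones(layer_widths[0])
    for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
        # Random init scaled by fan-in; absolute values keep the signal positive.
        W = np.abs(rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in)))
        h = W @ h
    return float(np.log(h.sum()))

# Rank two hypothetical candidate architectures without training either one.
wide = zero_cost_score([32, 64, 64, 10])    # wider hidden layers
narrow = zero_cost_score([32, 8, 8, 10])    # narrower hidden layers
```

A search algorithm can call such a proxy on thousands of candidates in seconds; the caveat raised above is that how well these scores correlate with trained accuracy varies across search spaces and datasets.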