IT-enabled systems for peer-based creation, review, and evaluation are widespread across open innovation and knowledge management. Despite a noticeable variety of designs, particularly in the structure of peer networks (that is, how participants are linked to each other as creators and reviewers), these design choices are rarely grounded in design research. Characteristics of peer network structure, such as reciprocity and clustering, may affect how well such systems reveal participants' competencies and the quality of their products. Designing peer review systems that produce valid and reliable evaluations is therefore a fundamental concern. Using a simulation approach, we show that reciprocity and clustering do have an effect, but its direction and magnitude depend on the evaluation scale used. So far, we have found no evidence that transitional networks are more effective than "pure" networks. We outline directions for further investigation.
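To make the two structural characteristics concrete, the following sketch computes reciprocity (the fraction of directed review ties that are mutual) and the average clustering coefficient of the underlying undirected graph for a small, purely hypothetical peer-review network. The toy edge set and the exact metric definitions are illustrative assumptions, not taken from the study's simulation.

```python
from itertools import combinations

# Hypothetical directed peer-review network: edge (a, b) means "a reviews b".
# The graph below is a toy example, not data from the paper.
edges = {("A", "B"), ("B", "A"), ("B", "C"), ("C", "D"), ("D", "B")}

def reciprocity(edges):
    """Fraction of directed edges whose reverse edge also exists."""
    return sum((b, a) in edges for (a, b) in edges) / len(edges)

def clustering(edges):
    """Average local clustering coefficient of the underlying undirected graph."""
    und = {frozenset(e) for e in edges}          # drop edge direction
    nodes = {n for e in und for n in e}
    neigh = {n: {m for m in nodes if frozenset((n, m)) in und} for n in nodes}
    coeffs = []
    for n in nodes:
        k = len(neigh[n])
        if k < 2:                                # clustering undefined; count as 0
            coeffs.append(0.0)
            continue
        # count edges among n's neighbours (closed triangles through n)
        links = sum(frozenset((u, v)) in und
                    for u, v in combinations(neigh[n], 2))
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

print(reciprocity(edges))   # 0.4 — only the A<->B tie is mutual
print(clustering(edges))    # ~0.583 — driven by the B-C-D triangle
```

Varying these two quantities over generated networks, while holding size and density fixed, is one way to set up the kind of simulation the abstract describes.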