A distributed consensus algorithm for estimating the maximum of the nodes' initial state values in a network is analyzed in the presence of communication noise. Conventionally, each node estimates the maximum by updating its state value with the largest measurement received in every iteration. However, due to additive channel noise, the estimate of the maximum at each node acquires a positive drift in every iteration, causing the node states to diverge from the true maximum value. Max-plus algebra is used to study this ergodic process, wherein, at each iteration, the state vector is multiplied, in the max-plus sense, by a random matrix characterized by the noise distribution. The growth rate of the state values due to noise is studied by analyzing the Lyapunov exponent of the product of these random noise matrices in the max-plus semiring. The growth rate of the state values is shown to be bounded by a constant that depends on the spectral radius of the network and the noise variance. Simulation results supporting the theory are also presented.
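The drift phenomenon described above can be illustrated with a minimal simulation. The sketch below (an assumption for illustration: a ring topology, uniform initial states, and i.i.d. Gaussian channel noise, none of which are specified in the abstract) runs the conventional max-consensus update, where each node replaces its state with the largest of its own value and the noisy values received from its neighbors, and reports the resulting per-iteration growth of the states beyond the true maximum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ring network of n nodes (topology assumed, not from the paper)
n = 10
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

x = rng.uniform(0.0, 1.0, size=n)   # initial node states
true_max = x.max()
sigma = 0.1                         # channel-noise standard deviation (assumed)

T = 200
for _ in range(T):
    # Each node receives its neighbors' states corrupted by additive noise,
    # then keeps the largest of its own state and the received measurements.
    x = np.array([
        max([x[i]] + [x[j] + sigma * rng.standard_normal() for j in neighbors[i]])
        for i in range(n)
    ])

# Average excess over the true maximum, per iteration: strictly positive,
# reflecting the drift induced by taking maxima over noisy measurements.
drift_per_iter = (x.mean() - true_max) / T
print(drift_per_iter)
```

Because the maximum of noisy measurements is biased upward, `drift_per_iter` is positive and the node states grow without bound rather than settling at `true_max`, which is the behavior the max-plus Lyapunov-exponent analysis quantifies.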