Per-device adaptive test is a promising direction, offering the best trade-off between test quality and test time reported so far. In this work, we propose a method for the online assessment of the information content of the next test in the test queue. This assessment can be used to tune the trade-off between test quality and test time of a per-device adaptive test. Since the majority of specification parameters are correlated, the overall information content of multiple tests is difficult to extract. We model multivariate correlations among specification parameters and take these correlations into account to estimate the overall information utility of a given set of tests. The proposed method runs in the background and can be integrated into an existing per-device or per-wafer adaptive test flow. Experimental results on three distinct industrial circuits with sizable data sets show that the proposed technique can finely tune the trade-off, even achieving a zero test escape rate with appreciable test-time savings.
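To illustrate the idea of a multivariate information utility, the following is a minimal sketch, assuming a joint Gaussian model of the specification parameters (an assumption made here for illustration; the paper's actual correlation model may differ). The `information_gain` function and its arguments are hypothetical names: it measures, via the Schur complement, how much the entropy of the not-yet-measured parameters shrinks once a candidate set of tests has been observed.

```python
import numpy as np

def information_gain(cov, tested, untested):
    """Entropy reduction (in nats) of the untested specification
    parameters after observing the tested ones, assuming the
    parameters are jointly Gaussian with covariance `cov`.
    `tested` and `untested` are disjoint lists of parameter indices.
    """
    S11 = cov[np.ix_(tested, tested)]       # covariance of tested params
    S22 = cov[np.ix_(untested, untested)]   # covariance of untested params
    S21 = cov[np.ix_(untested, tested)]     # cross-covariance
    # Conditional covariance of untested given tested (Schur complement).
    S_cond = S22 - S21 @ np.linalg.solve(S11, S21.T)
    # Gain = h(untested) - h(untested | tested)
    #      = 0.5 * log( det(S22) / det(S_cond) )
    _, logdet_marginal = np.linalg.slogdet(S22)
    _, logdet_cond = np.linalg.slogdet(S_cond)
    return 0.5 * (logdet_marginal - logdet_cond)

# Strongly correlated pair: measuring one parameter is informative
# about the other, so the gain is positive.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
print(information_gain(cov, tested=[0], untested=[1]))  # > 0

# Uncorrelated parameters: a test on one yields no information
# about the other, so the gain is zero.
print(information_gain(np.eye(2), tested=[0], untested=[1]))  # ~ 0.0
```

In an adaptive flow, a scheduler could evaluate this gain for the next test in the queue and skip tests whose marginal information falls below a threshold, which is one way to tune the quality/time trade-off the abstract describes.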