Compared with well-edited videos that follow predefined structures (e.g., news or sports broadcasts), extracting key frames from unconstrained consumer videos is a far more challenging problem due to their extremely diverse content (no pre-imposed structure) and uncontrolled video quality (e.g., poor lighting or camera shake). To exploit the spatio-temporal correlation present in a video for key frame extraction, we propose a bilayer group sparse representation in which the input video frames are first segmented into homogeneous patches and group sparsity is imposed at two levels simultaneously: (i) patch-to-frame and (ii) frame-to-sequence. The grouped sparse coefficients are further combined with frame quality scores to generate key frames. Extensive experiments on videos from actual end users show that the proposed approach compares favorably with existing methods, confirming its effectiveness.
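To make the underlying idea concrete, the sketch below shows a much-simplified, single-layer variant of group sparse frame selection (not the bilayer patch-to-frame/frame-to-sequence model of the paper): each frame is reconstructed as a sparse combination of the frames themselves, with an l2,1 penalty that zeroes out entire rows of the coefficient matrix, so the surviving rows identify representative (key) frames. The solver is plain proximal gradient descent (ISTA) with block soft-thresholding; all names and parameters here are illustrative assumptions.

```python
import numpy as np

def group_soft_threshold(v, t):
    # Block soft-thresholding: shrink the whole group (row) toward zero,
    # setting it exactly to zero when its l2 norm falls below t.
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1.0 - t / norm) * v

def group_sparse_select(X, lam=0.1, n_iter=200):
    """Toy single-layer group sparse frame selection (illustrative only).

    Solves  min_A  0.5 * ||X - X A||_F^2 + lam * sum_i ||A[i, :]||_2
    where columns of X are (featurized) frames.  Rows of A that remain
    nonzero correspond to frames that help reconstruct the whole clip,
    i.e., key-frame candidates.
    """
    n = X.shape[1]
    A = np.zeros((n, n))
    # Lipschitz constant of the smooth part's gradient.
    L = np.linalg.norm(X.T @ X, 2)
    for _ in range(n_iter):
        grad = X.T @ (X @ A - X)        # gradient of the data-fit term
        B = A - grad / L                # gradient step
        for i in range(n):              # proximal step, row by row
            A[i] = group_soft_threshold(B[i], lam / L)
    scores = np.linalg.norm(A, axis=1)  # row norms rank the frames
    return A, scores
```

In practice one would rank frames by `scores` (optionally reweighted by per-frame quality measures, in the spirit of the quality scores mentioned above) and keep the top-ranked ones as key frames.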