TY - GEN
T1 - SSR-GNNs
T2 - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022
AU - Cheng, Sheng
AU - Ren, Yi
AU - Yang, Yezhou
N1 - Funding Information:
Acknowledgement This work was supported by the NSF DMR#2020277 and the DARPA GAILA ADAM project.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - This paper follows cognitive studies to investigate a graph representation for sketches, where information about strokes, i.e., parts of a sketch, is encoded on vertices and inter-stroke information on edges. The resultant graph representation facilitates the training of a Graph Neural Network for classification tasks, and achieves accuracy and robustness comparable to the state-of-the-art against translation and rotation attacks, as well as stronger attacks on graph vertices and topologies, i.e., modifications and additions of strokes, all without resorting to adversarial training. Prior studies on sketches, e.g., graph transformers, encode stroke control points on vertices, which are not invariant to spatial transformations. In contrast, we encode vertices and edges using pairwise distances among control points to achieve invariance. Compared with an existing generative sketch model for one-shot classification [26], our method does not rely on run-time statistical inference. Lastly, the proposed representation enables the generation of novel sketches that are structurally similar to, yet separable from, the existing dataset.
AB - This paper follows cognitive studies to investigate a graph representation for sketches, where information about strokes, i.e., parts of a sketch, is encoded on vertices and inter-stroke information on edges. The resultant graph representation facilitates the training of a Graph Neural Network for classification tasks, and achieves accuracy and robustness comparable to the state-of-the-art against translation and rotation attacks, as well as stronger attacks on graph vertices and topologies, i.e., modifications and additions of strokes, all without resorting to adversarial training. Prior studies on sketches, e.g., graph transformers, encode stroke control points on vertices, which are not invariant to spatial transformations. In contrast, we encode vertices and edges using pairwise distances among control points to achieve invariance. Compared with an existing generative sketch model for one-shot classification [26], our method does not rely on run-time statistical inference. Lastly, the proposed representation enables the generation of novel sketches that are structurally similar to, yet separable from, the existing dataset.
UR - http://www.scopus.com/inward/record.url?scp=85137765283&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137765283&partnerID=8YFLogxK
U2 - 10.1109/CVPRW56347.2022.00561
DO - 10.1109/CVPRW56347.2022.00561
M3 - Conference contribution
AN - SCOPUS:85137765283
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 5127
EP - 5137
BT - Proceedings - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022
PB - IEEE Computer Society
Y2 - 19 June 2022 through 20 June 2022
ER -