Scan-to-BIM is the process of converting 3D reconstructions into building information models (BIM). Currently, it involves manual tracing of point clouds by human users in BIM authoring tools, with limited automation available for walls, floors, windows, doors, and piping. Emerging semantic segmentation methods demonstrate a versatility that could extend automated Scan-to-BIM well beyond these few object categories. The accuracy of supervised deep learning methods for 3D scene segmentation has improved rapidly over the past year, driven by the recent availability of large, annotated datasets of indoor spaces. Unfortunately, the semantic object categories in the available datasets do not cover many essential BIM object categories, such as heating, ventilation, and air-conditioning (HVAC) and plumbing systems. To leverage the success of deep learning for Scan-to-BIM, we present 3DFacilities, an annotated dataset of 3D reconstructions of building facilities. The dataset contains over 11,000 individual RGB-D frames comprising 50 scene reconstructions, annotated with 3D camera poses and per-vertex and per-pixel semantic labels. Our dataset is available at https://thomasczerniawski.com/3dfacilities/.