Many architectures have been proposed to solve tightly coupled multirobot tasks (MT) through coalitions of heterogeneous robots. However, several issues remain unaddressed. When coalitions are formed, sensor constraints among the robots are also established. For example, in a leader-follower task, the follower robots must keep the leader within their fields of view, while in a box-pushing task, a supervisor robot must track the direction of the box's motion and monitor the pushing path to the goal for obstacle avoidance. How to keep these constraints satisfied throughout execution, from the initial configuration to task completion, remains an open question. In addition, environmental factors, both static and dynamic, can affect the maintenance of the constraints. Further problems arise when the constraints cannot be satisfied under the current circumstances: the view of the leader may be blocked, for instance, or obstacles may occlude the box. This paper proposes a general method that addresses these issues for a broad range of applications whose sensors share certain characteristics. Our approach combines sensor models, environment sampling, measures of information quality, a sampling-based motion model, and a constraint model. To our knowledge, this approach offers the first generic formulation of robotic sensor constraints applicable to a wide variety of applications. To illustrate the method, we apply it to robot tracking and navigation tasks, both in simulation and on physical robots. Experimental results demonstrate the flexibility and robustness of the approach.