Human-robot interactions (HRI) can be modeled as differential games with incomplete information, where each agent holds private reward parameters. Because finding perfect Bayesian equilibria of such games remains an open challenge, existing studies often decouple the belief and physical dynamics by iterating between belief update and motion planning. To simplify the computation, the robot's reward parameters are often assumed to be known to the humans. We show in this paper that under this simplification, the robot performs a non-empathetic belief update about the humans' parameters, which incurs high safety risks in uncontrolled intersection scenarios. In contrast, we propose a model for empathetic belief update, in which each agent updates the joint probabilities of all agents' parameter combinations. The update uses a neural network that approximates the agents' Nash equilibrial action-values. We compare empathetic and non-empathetic belief update methods on a two-vehicle uncontrolled intersection case with short reaction time. Results show that when both agents are unknowingly aggressive (or non-aggressive), i.e., when they hold false beliefs about each other's parameters, empathy is necessary for avoiding collisions. This paper demonstrates the importance of acknowledging the incomplete-information nature of HRI.
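As a minimal illustrative sketch (not the paper's implementation; the function names, the Boltzmann action likelihood, and the toy Q-function below are all assumptions), the empathetic update can be written as one Bayesian step over the joint parameter space: each candidate parameter pair is reweighted by the likelihood of both agents' observed actions under (approximated) equilibrial action-values.

```python
import math

def softmax_likelihood(q_values, action, temperature=1.0):
    """Boltzmann likelihood of `action` given one agent's action-values."""
    exps = {a: math.exp(q / temperature) for a, q in q_values.items()}
    return exps[action] / sum(exps.values())

def empathetic_update(belief, obs_h, obs_r, q_fn):
    """One Bayesian step over the JOINT parameter space (hypothetical sketch).

    belief: dict mapping (theta_h, theta_r) -> probability
    obs_h, obs_r: actions observed from the human and the robot
    q_fn(theta_h, theta_r, agent): dict action -> Q-value; in the paper this
    role is played by a neural network approximating Nash equilibrial
    action-values, here by any callable with this interface.
    """
    new_belief = {}
    for (th, tr), p in belief.items():
        # Weight each parameter pair by how well it explains BOTH actions.
        lh = softmax_likelihood(q_fn(th, tr, "human"), obs_h)
        lr = softmax_likelihood(q_fn(th, tr, "robot"), obs_r)
        new_belief[(th, tr)] = p * lh * lr
    z = sum(new_belief.values())  # normalize to a probability distribution
    return {k: v / z for k, v in new_belief.items()}

# Toy stand-in for the learned action-value network (assumption, for demo only):
# an "aggressive" agent prefers "go", a "non_aggressive" one prefers "yield".
def toy_q(theta_h, theta_r, agent):
    theta = theta_h if agent == "human" else theta_r
    if theta == "aggressive":
        return {"go": 2.0, "yield": 0.0}
    return {"go": 0.0, "yield": 2.0}

# Start from a uniform joint belief over both agents' parameter pairs.
params = ["aggressive", "non_aggressive"]
belief = {(th, tr): 0.25 for th in params for tr in params}

# Observing both agents choosing "go" shifts mass toward (aggressive, aggressive).
posterior = empathetic_update(belief, obs_h="go", obs_r="go", q_fn=toy_q)
```

A non-empathetic update, by contrast, would fix the robot's parameter and update only the marginal over the human's, which is exactly the simplification the abstract argues against.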