Abstract In this paper we investigate the relation between the potential and geometric time delays in gravitational lensing. In the original paper of Shapiro (1964), it is stated that radar signals between Earth and Venus that pass near a massive object (the Sun) experience a time delay compared with the path taken in the absence of any mass. The reason for this delay is the influence of gravity on the coordinate velocity of a light ray in a gravitational potential. The contribution from the change of the path length, which happens to be of second order, is considered negligible. Nevertheless, in gravitational lens theory the geometric delay, related to the change of path length, is routinely taken into account along with the potential term. In this work we explain this apparent discrepancy. We address the contribution of the geometric part of the time delay in different situations and introduce a unified treatment with two limiting regimes of lensing. One limit corresponds to the time-delay experiments near the Sun, where the geometric delay is shown to be negligible. The other corresponds to the typical gravitational lens scenario with multiple imaging, where the geometric delay is shown to be significant. We introduce a compact, analytical, and quantitative criterion based on the relation between the angular position of the source and the Einstein radius. This criterion allows one to determine easily when the geometric delay must be taken into account. In particular, it is shown that the geometric delay is non-negligible in the case of good alignment between the source, the lens, and the observer, because in that case it becomes a first-order quantity (of the same order as the potential term).
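As an illustrative sketch (using the standard single-plane lensing conventions rather than the paper's own notation; the symbols below are assumptions: $\boldsymbol{\beta}$ the angular source position, $\boldsymbol{\theta}$ the image position, $\theta_E$ the Einstein radius, $\psi$ the lensing potential, $z_L$ the lens redshift, and $D_L$, $D_S$, $D_{LS}$ the angular diameter distances), the time delay of a lensed image splits into a geometric part and a potential (Shapiro) part:

\[
  \Delta t(\boldsymbol{\theta}) \;=\; \frac{1+z_L}{c}\,\frac{D_L D_S}{D_{LS}}
  \left[\, \tfrac{1}{2}\,\lvert\boldsymbol{\theta}-\boldsymbol{\beta}\rvert^{2}
  \;-\; \psi(\boldsymbol{\theta}) \,\right].
\]

For a point-mass lens, $\psi(\theta)=\theta_E^{2}\ln\lvert\theta\rvert$ and the lens equation gives $\theta-\beta=\theta_E^{2}/\theta$, so for $\beta\gg\theta_E$ the geometric term of the primary image is of order $\theta_E^{4}/(2\beta^{2})$, i.e. second order relative to the potential term of order $\theta_E^{2}$ (the Shapiro regime), whereas for $\beta\lesssim\theta_E$ it becomes of order $\theta_E^{2}$ itself, the same order as the potential term (the multiple-imaging regime).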