Reliable evaluation of QRS detection algorithms requires comparability and reproducibility. Although it is common practice to assess QRS detection accuracy with standard binary classification parameters, much less attention is paid to the temporal accuracy of the detector. A variety of temporal tolerance values, ranging from 60 ms to 160 ms, are used in the literature for performance evaluation of QRS detection, which sometimes results in comparisons of algorithms with different temporal resolutions. This paper addresses the dependence of the accuracy of QRS detection algorithms, expressed as detection error rate, sensitivity, and positive predictivity, on the temporal resolution of the detection, defined by the Detector Temporal Tolerance (DTT). In this work, the classification statistics achieved by three state-of-the-art low-complexity algorithms over a broad range of DTT values (from 160 ms down to 8 ms) on the entire standard MIT-BIH Arrhythmia Database are compared with the performance of the Pan-Tompkins algorithm. The analysis shows that as the DTT decreases, the classification statistics of R-peak detection algorithms deteriorate, and that the rate of deterioration is characteristic of a given algorithm. In addition, the algorithms change their positions in the detection accuracy ranking as the DTT value changes. The analyses performed prove that DTT is an integral parameter of QRS complex detection that determines the reproducibility of test results and enables fair comparative studies.
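To make the role of the tolerance concrete, the following is a minimal sketch of tolerance-based scoring: each detected R-peak is matched to a reference annotation, a match within ±DTT counts as a true positive, and sensitivity (Se), positive predictivity (+P), and detection error rate (DER) are computed from the resulting counts. The function name, the greedy one-to-one matching scheme, and the sample timestamps are illustrative assumptions, not the evaluation code used in the paper.

```python
def evaluate_detections(annotations_ms, detections_ms, dtt_ms):
    """Score R-peak detections against reference annotations.

    A detection matched to an annotation within +/- dtt_ms is a true
    positive (TP); unmatched annotations are false negatives (FN) and
    unmatched detections are false positives (FP).

    NOTE: greedy in-order matching is an illustrative simplification,
    not the matching rule prescribed by any particular standard.
    """
    annotations = sorted(annotations_ms)
    detections = sorted(detections_ms)
    tp = 0
    i = j = 0
    while i < len(annotations) and j < len(detections):
        diff = detections[j] - annotations[i]
        if abs(diff) <= dtt_ms:
            tp += 1                 # matched pair -> true positive
            i += 1
            j += 1
        elif diff < 0:
            j += 1                  # spurious early detection -> FP
        else:
            i += 1                  # annotation with no detection -> FN
    fn = len(annotations) - tp
    fp = len(detections) - tp
    se = tp / (tp + fn)             # sensitivity
    ppv = tp / (tp + fp)            # positive predictivity
    der = (fp + fn) / len(annotations)  # detection error rate
    return se, ppv, der


# The same detections scored at two different DTT values: a looser
# tolerance yields visibly better statistics for identical output.
ann = [1000, 2000, 3000]            # reference R-peak times (ms)
det = [1005, 2100, 3500]            # detector output (ms)
print(evaluate_detections(ann, det, 150))  # DTT = 150 ms
print(evaluate_detections(ann, det, 50))   # DTT = 50 ms
```

Running the example with DTT = 150 ms matches two of the three detections, while DTT = 50 ms matches only one, so Se, +P, and DER all degrade at the tighter tolerance. This is the effect the paper quantifies across algorithms and DTT values.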