Effects of Prior Data on Inference- and Filtering-Based Electrocardiographic Imaging

Taha Erenler1 and Yesim Serinagaoglu Dogrusoz2
1Middle East Technical University, Electrical and Electronics Engineering Dept.; 2Middle East Technical University


Abstract

Background: Statistical estimation improves the accuracy of electrocardiographic imaging (ECGI) when “good” prior information is available. Maximum likelihood (ML) and maximum a posteriori (MAP) based learning algorithms estimate the model parameters from these priors. There is still a need to evaluate how a “good” training set should be constructed and how it affects the inverse solutions. We compare the performances of ML- and MAP-based prior estimation methods using two different training datasets.

Methods: Torso measurements were simulated at 30 dB and 10 dB SNR from two datasets of LV-paced epicardial potentials obtained from the University of Utah. The first dataset was used for testing, and the second for training. Two different training sets were composed: (I) pacing locations near the test beat’s pacing location, and (II) pacing locations within a wider region. The problem was modeled in state-space form under zero-mean, independent Gaussian noise assumptions. First, the state transition matrix and the initial state’s mean and covariance were estimated from the training set using ML and MAP estimation. Then, a Kalman smoother was used to solve the inverse problem (MLIF: ML-based; MAPIF: MAP-based). Tikhonov regularization was applied for comparison. Reconstructed electrograms and activation times (AT) were compared with the recorded values.

Results: Electrograms: MAPIF improves the results compared to MLIF with both training sets (mean CC increase >0.05 at 30 dB, >0.01 at 10 dB). Among the statistical methods, MLIF-I is the most sensitive to noise, but still performs better than Tikhonov. AT: for the 30 dB case, MLIF-II is comparable to Tikhonov, but both MAPIF methods perform worse (CC lower by 0.1). MLIF-II is the most robust to noise (CC decreases of 0.02, 0.11, 0.14, 0.16, and 0.28 for MLIF-II, MLIF-I, MAPIF-I, MAPIF-II, and Tikhonov, respectively).

Conclusion: MLIF is the most robust to noise when trained with more general training data. MAPIF yields better electrogram reconstructions with both training sets, especially under moderate noise.
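For readers unfamiliar with the formulation sketched in the Methods, a generic linear-Gaussian state-space model for ECGI is written out below. The notation is illustrative and chosen here, not necessarily the paper’s own: x_k denotes the epicardial potentials at time k, y_k the torso measurements, A the state transition matrix, H the forward transfer matrix, and Q and R the process and measurement noise covariances.

```latex
% Generic linear-Gaussian state-space model for ECGI (illustrative notation, assumed here)
\begin{aligned}
  \mathbf{x}_{k+1} &= \mathbf{A}\mathbf{x}_{k} + \mathbf{w}_{k}, &\quad \mathbf{w}_{k} &\sim \mathcal{N}(\mathbf{0},\,\mathbf{Q}),\\
  \mathbf{y}_{k}   &= \mathbf{H}\mathbf{x}_{k} + \mathbf{v}_{k}, &\quad \mathbf{v}_{k} &\sim \mathcal{N}(\mathbf{0},\,\mathbf{R}),\\
  \mathbf{x}_{0}   &\sim \mathcal{N}(\boldsymbol{\mu}_{0},\,\boldsymbol{\Sigma}_{0}).
\end{aligned}
```

Under this model, the quantities estimated from the training set in the Methods (the state transition matrix A and the initial-state statistics μ₀ and Σ₀) completely specify the Kalman smoother used for the inverse solution.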
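The sketch below shows, in minimal form, how such a pipeline could be implemented: a least-squares ML estimate of the state transition matrix and initial-state statistics from training electrograms, followed by a standard Kalman filter with a Rauch-Tung-Striebel (RTS) smoothing pass. All function and variable names are illustrative and not taken from the paper; the MAP variant, which would incorporate prior terms into these parameter estimates, is omitted for brevity.

```python
import numpy as np


def ml_estimate_transition(X_train):
    """Least-squares (ML) estimate of the state transition matrix A from one
    training beat's epicardial potentials X_train (n_nodes x n_time),
    so that x_{k+1} is approximately A @ x_k."""
    X0, X1 = X_train[:, :-1], X_train[:, 1:]
    # A = X1 X0^+ ; the pseudo-inverse keeps the estimate stable when X0 is rank-deficient
    return X1 @ np.linalg.pinv(X0)


def ml_estimate_initial_state(train_beats):
    """Sample mean and covariance of the first time frame over a list of training
    beats (each n_nodes x n_time). With few beats the sample covariance is
    rank-deficient, so a small diagonal term is added here as a simple safeguard."""
    X0 = np.stack([beat[:, 0] for beat in train_beats], axis=1)  # n_nodes x n_beats
    return X0.mean(axis=1), np.cov(X0) + 1e-6 * np.eye(X0.shape[0])


def kalman_rts_smoother(Y, A, H, Q, R, mu0, P0):
    """Kalman filter followed by a Rauch-Tung-Striebel smoothing pass.
    Y: n_leads x n_time torso measurements.
    Returns the smoothed epicardial potentials (n_nodes x n_time)."""
    n, T = mu0.size, Y.shape[1]
    xf, Pf = np.zeros((n, T)), np.zeros((T, n, n))  # filtered estimates
    xp, Pp = np.zeros((n, T)), np.zeros((T, n, n))  # one-step predictions
    x, P = mu0, P0
    for k in range(T):
        if k > 0:                        # prediction (the prior is used directly at k = 0)
            x, P = A @ x, A @ P @ A.T + Q
        xp[:, k], Pp[k] = x, P
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ (Y[:, k] - H @ x)    # measurement update
        P = P - K @ H @ P
        xf[:, k], Pf[k] = x, P
    xs = xf.copy()                       # backward (RTS) smoothing pass
    for k in range(T - 2, -1, -1):
        G = Pf[k] @ A.T @ np.linalg.inv(Pp[k + 1])
        xs[:, k] = xf[:, k] + G @ (xs[:, k + 1] - xp[:, k + 1])
    return xs
```

For comparison, a standard frame-by-frame zero-order Tikhonov reconstruction amounts to solving (HᵀH + λ²I) x_k = Hᵀ y_k independently for each time instant, which ignores the temporal coupling that the state-space methods exploit.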