
SYMTRACK

Beyond Detection: A Structure-Aware Framework for Scene Text Tracking

Scene Text Tracking. SymTrack tracks target text instances in videos without relying on per-frame detection, maintaining robust trajectories under perspective shifts, dense distractors, and fine-grained structural variations.

Abstract

Modern visual object trackers achieve impressive results on general targets, yet their performance drops substantially on scene text. Although currently underexplored, tracking text in videos is essential for dynamic text manipulation tasks such as segmentation, removal, and editing. To fill this gap, this paper formalizes the task as Scene Text Tracking and presents the first systematic study of it. We identify three primary challenges: 1) severe geometric distortions from perspective shifts, 2) high visual ambiguity across different instances, and 3) high sensitivity to fine-grained structural details. To address these issues, we propose SymTrack, a unified detection-free framework with a synergistic dual-branch design. It integrates a Cross-Expert Calibration mechanism to reduce semantic bias and a Predictive Token Rectification mechanism to correct structural imbalances, complemented by an Adaptive Inference Engine that stabilizes predictions under motion constraints. Given the lack of dedicated benchmarks for this task, we adapt three video text spotting datasets into a benchmark with high-quality annotations. Extensive experiments demonstrate that SymTrack sets a new state of the art on all three benchmarks, outperforming the previous best trackers by up to 11.97% AUC on BOVTextSOT. Overall, our work promotes efficient and thorough text tracking, paving the way toward more generalized video text manipulation.

Method Overview

Overview of the SymTrack framework

Overview of SymTrack. SymTrack adopts a synergistic dual-branch design for scene text tracking. The Predictive Token Rectification (PTR) branch rectifies tokens to alleviate structural imbalance, while the Cross-Expert Calibration (CEC) branch injects text-specific priors to suppress visually ambiguous distractors. During inference, the training-free Adaptive Inference Engine (AIE) dynamically adapts the search region and regularizes temporal predictions.
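To make the AIE's inference-time behavior concrete, here is a minimal, hypothetical sketch of a motion-aware search-region update. The paper's exact rule is not reproduced here; the constant-velocity prediction, the scale factors, and the function name below are illustrative assumptions, not SymTrack's published implementation.

```python
# Hypothetical sketch in the spirit of SymTrack's training-free AIE:
# predict the next target center from recent motion and enlarge the
# search crop when the target moves quickly relative to its size.
# All constants here are assumptions for illustration.

def adapt_search_region(prev_box, velocity, base_factor=4.0, max_factor=6.0):
    """prev_box = (cx, cy, w, h); velocity = (dx, dy) estimated from
    recent frames. Returns a square search region (cx, cy, side)
    centered on the motion-predicted target position."""
    cx, cy, w, h = prev_box
    dx, dy = velocity
    # Predict the next center under a constant-velocity motion constraint.
    pred_cx, pred_cy = cx + dx, cy + dy
    # Enlarge the crop when displacement is large relative to target size,
    # capped so the search region never grows without bound.
    speed = (dx ** 2 + dy ** 2) ** 0.5
    factor = min(base_factor * (1.0 + speed / max(w, h, 1e-6)), max_factor)
    side = factor * max(w, h)
    return pred_cx, pred_cy, side
```

For a static target the crop stays at the base size; a fast-moving target gets both a shifted center and a larger crop, which is one simple way to keep the target inside the search region without per-frame detection.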

Benchmark Results

State-of-the-art comparison on ArTVideoSOT, DSTextSOT, and BOVTextSOT. Method suffixes (e.g., 256, 384) denote the corresponding configuration and input resolution. V-L denotes vision-language tracking; VTS denotes video text spotting. VTS models are excluded on BOVTextSOT because their public implementations lack Chinese character sets.

Type         Method           Venue         ArTVideoSOT            DSTextSOT              BOVTextSOT
                                            AUC    PNorm  P        AUC    PNorm  P        AUC    PNorm  P
Vision-only  SiamRPN++        CVPR 2019     56.40  67.30  71.90    44.40  54.40  63.20    58.70  71.50  65.50
             STARK            ICCV 2021     70.37  83.48  86.84    57.59  68.63  78.01    61.92  75.16  76.33
             OSTrack-256      ECCV 2022     64.86  77.95  81.99    52.50  63.49  70.83    58.67  73.04  74.03
             OSTrack-384      ECCV 2022     64.80  77.82  82.05    54.83  66.51  74.44    59.18  72.68  73.30
             AiATrack         ECCV 2022     66.41  77.96  81.77    57.92  68.12  79.02    64.16  75.07  73.42
             SeqTrack-L384    CVPR 2023     64.35  76.46  80.92    54.63  65.81  74.19    60.42  76.18  76.70
             ARTrack-256      CVPR 2023     64.85  78.81  79.53    48.53  56.12  65.20    62.75  72.28  73.01
             GRM-256          CVPR 2023     68.22  79.84  83.30    53.05  63.87  71.04    59.59  72.12  72.82
             GRM-384          CVPR 2023     68.47  80.65  83.64    55.51  66.07  74.63    59.13  71.02  71.66
             ROMTrack         ICCV 2023     70.62  83.32  87.13    56.82  68.79  75.61    62.82  73.74  74.90
             ODTrack          AAAI 2024     69.81  83.54  86.68    62.71  75.84  84.26    64.74  77.74  78.45
             SymTrack (Ours)  -             77.74  91.29  95.88    70.66  83.61  91.83    77.06  90.05  90.18
V-L          DUTrack-256      CVPR 2025     68.73  82.46  86.87    60.57  72.77  81.31    65.09  78.98  79.04
             DUTrack-384      CVPR 2025     72.09  85.97  89.36    63.63  76.72  85.00    65.08  79.41  79.30
VTS          TransVTSpotter   NeurIPS 2021   8.84  78.11  38.07     4.93  75.21  67.80    -      -      -
             TransDETR        IJCV 2024      9.18  78.75  43.31     5.08  76.09  69.79    -      -      -

SymTrack achieves the best result in every column across all three benchmarks.
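The AUC, PNorm, and P columns follow the standard single-object-tracking evaluation protocol: AUC is the area under the success curve over IoU thresholds, P is the fraction of frames whose predicted center lies within 20 pixels of the ground-truth center, and PNorm normalizes that center error by the target's size. A minimal sketch of the first two metrics (benchmark toolkits may differ in threshold sampling and edge cases):

```python
# Standard SOT-style metrics over per-frame boxes given as (x, y, w, h).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    return inter / (aw * ah + bw * bh - inter + 1e-9)

def success_auc(pred_boxes, gt_boxes, n_thresholds=21):
    """Area under the success curve: mean fraction of frames whose IoU
    exceeds each threshold sampled uniformly in [0, 1] (the AUC column)."""
    ious = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    ths = [i / (n_thresholds - 1) for i in range(n_thresholds)]
    rates = [sum(v > t for v in ious) / len(ious) for t in ths]
    return sum(rates) / len(rates)

def precision(pred_boxes, gt_boxes, thresh=20.0):
    """Fraction of frames whose predicted center is within `thresh`
    pixels of the ground-truth center (the P column)."""
    def center(b):
        return (b[0] + b[2] / 2, b[1] + b[3] / 2)
    errs = [((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2) ** 0.5
            for (cx1, cy1), (cx2, cy2) in
            ((center(p), center(g)) for p, g in zip(pred_boxes, gt_boxes))]
    return sum(e <= thresh for e in errs) / len(errs)
```

PNorm is computed analogously to `precision`, with the center error divided per-frame by the ground-truth box dimensions before thresholding, which makes the score comparable across target scales.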

BibTeX


@inproceedings{yu2026symtrack,
  title={Beyond Detection: A Structure-Aware Framework for Scene Text Tracking},
  author={Yu, Chenmin and Yu, Liu and Wu, Daiqing and Li, Gengluo and Chen, Zeyu and Zhou, Yu},
  booktitle={Proceedings of the 43rd International Conference on Machine Learning},
  year={2026},
  url={https://EdisonYCM.github.io/SymTrack/}
}