Conference Paper: ByteTrack: Multi-object tracking by associating every detection box

Title: ByteTrack: Multi-object tracking by associating every detection box
Authors: Zhang, Y; Sun, P; Jiang, Y; Yu, D; Weng, F; Yuan, Z; Luo, P; Liu, W; Wang, X
Issue Date: 2022
Publisher: Ortra Ltd.
Citation: European Conference on Computer Vision (Hybrid), Tel Aviv, Israel, October 23-27, 2022. In Proceedings of the European Conference on Computer Vision (ECCV), 2022
Abstract: Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos. Most methods obtain identities by associating detection boxes whose scores are higher than a threshold. Objects with low detection scores, e.g. occluded objects, are simply thrown away, which leads to non-negligible missing of true objects and fragmented trajectories. To solve this problem, we present a simple, effective and generic association method, tracking by associating almost every detection box instead of only the high-score ones. For the low-score detection boxes, we utilize their similarities with tracklets to recover true objects and filter out the background detections. When applied to 9 different state-of-the-art trackers, our method achieves consistent improvement on IDF1 score ranging from 1 to 10 points. To push forward the state-of-the-art performance of MOT, we design a simple and strong tracker, named ByteTrack. For the first time, we achieve 80.3 MOTA, 77.3 IDF1 and 63.1 HOTA on the test set of MOT17 with 30 FPS running speed on a single V100 GPU. ByteTrack also achieves state-of-the-art performance on the MOT20, HiEve and BDD100K tracking benchmarks.
Persistent Identifier: http://hdl.handle.net/10722/315799
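
The abstract above describes a two-stage association: high-score detections are matched to existing tracklets first, and the still-unmatched tracklets are then matched against low-score detections to recover occluded objects rather than discarding them. The sketch below illustrates that idea in a minimal form; the helper names, score thresholds, and the greedy IoU matcher are illustrative assumptions, not the released ByteTrack code, which additionally uses Kalman-filter motion prediction and Hungarian assignment.

```python
# Minimal sketch of two-stage "associate every box" matching (illustrative only).

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def greedy_match(tracks, dets, iou_thresh):
    """Greedy IoU matching (a simple stand-in for Hungarian assignment)."""
    matches, used = [], set()
    for ti, t in enumerate(tracks):
        best_j, best_iou = -1, iou_thresh
        for dj, d in enumerate(dets):
            if dj in used:
                continue
            v = iou(t["box"], d["box"])
            if v > best_iou:
                best_j, best_iou = dj, v
        if best_j >= 0:
            matches.append((ti, best_j))
            used.add(best_j)
    matched_tracks = {ti for ti, _ in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_tracks]
    unmatched_dets = [j for j in range(len(dets)) if j not in used]
    return matches, unmatched_tracks, unmatched_dets

def byte_associate(tracks, detections, high_thresh=0.6, low_thresh=0.1, iou_thresh=0.3):
    """One frame of two-stage association.

    tracks:     list of dicts with keys "id" and "box"
    detections: list of dicts with keys "box" and "score"
    Thresholds are illustrative assumptions.
    """
    d_high = [d for d in detections if d["score"] >= high_thresh]
    d_low = [d for d in detections if low_thresh <= d["score"] < high_thresh]

    # Stage 1: match all tracklets against high-score detections.
    matches1, unmatched_tracks, unmatched_high = greedy_match(tracks, d_high, iou_thresh)

    # Stage 2: leftover tracklets (often occluded objects) get a second chance
    # against the low-score detections instead of those boxes being thrown away.
    remaining = [tracks[i] for i in unmatched_tracks]
    matches2, still_unmatched, _ = greedy_match(remaining, d_low, iou_thresh)

    matched = [(tracks[ti]["id"], d_high[dj]) for ti, dj in matches1]
    matched += [(remaining[ti]["id"], d_low[dj]) for ti, dj in matches2]
    # Unmatched high-score detections would seed new tracklets; unmatched
    # low-score detections are treated as background and dropped.
    new_track_seeds = [d_high[j] for j in unmatched_high]
    lost_tracks = [remaining[i]["id"] for i in still_unmatched]
    return matched, new_track_seeds, lost_tracks
```

Called once per frame, this returns the matched (track id, detection) pairs, the high-score detections that would start new tracklets, and the tracklets left unmatched; the second matching pass is what recovers the low-score, occluded boxes that threshold-only trackers discard.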

 

DC Field | Value | Language
dc.contributor.author | Zhang, Y | -
dc.contributor.author | Sun, P | -
dc.contributor.author | Jiang, Y | -
dc.contributor.author | Yu, D | -
dc.contributor.author | Weng, F | -
dc.contributor.author | Yuan, Z | -
dc.contributor.author | Luo, P | -
dc.contributor.author | Liu, W | -
dc.contributor.author | Wang, X | -
dc.date.accessioned | 2022-08-19T09:04:39Z | -
dc.date.available | 2022-08-19T09:04:39Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | European Conference on Computer Vision (Hybrid), Tel Aviv, Israel, October 23-27, 2022. In Proceedings of the European Conference on Computer Vision (ECCV), 2022 | -
dc.identifier.uri | http://hdl.handle.net/10722/315799 | -
dc.description.abstract | Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos. Most methods obtain identities by associating detection boxes whose scores are higher than a threshold. Objects with low detection scores, e.g. occluded objects, are simply thrown away, which leads to non-negligible missing of true objects and fragmented trajectories. To solve this problem, we present a simple, effective and generic association method, tracking by associating almost every detection box instead of only the high-score ones. For the low-score detection boxes, we utilize their similarities with tracklets to recover true objects and filter out the background detections. When applied to 9 different state-of-the-art trackers, our method achieves consistent improvement on IDF1 score ranging from 1 to 10 points. To push forward the state-of-the-art performance of MOT, we design a simple and strong tracker, named ByteTrack. For the first time, we achieve 80.3 MOTA, 77.3 IDF1 and 63.1 HOTA on the test set of MOT17 with 30 FPS running speed on a single V100 GPU. ByteTrack also achieves state-of-the-art performance on the MOT20, HiEve and BDD100K tracking benchmarks. | -
dc.language | eng | -
dc.publisher | Ortra Ltd. | -
dc.relation.ispartof | Proceedings of the European Conference on Computer Vision (ECCV), 2022 | -
dc.title | ByteTrack: Multi-object tracking by associating every detection box | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.doi | 10.48550/arXiv.2110.06864 | -
dc.identifier.hkuros | 335588 | -
dc.publisher.place | Israel | -
