Chen et al. PTZ evaluation framework, ICIP 2015
For reproducible evaluation of tracking with a simulated PTZ camera. Includes the PTZ camera simulator and tracking evaluation framework (C++ code), plus spherical panoramic videos with annotations for single-object tracking evaluation. Requested Citation Acknowledgment: Chen, G., St-Charles, P.-L., Bouachir, W., Bilodeau, G.-A., Bergevin, R., Reproducible Evaluation of Pan-Tilt-Zoom Tracking, IEEE International Conference on Image Processing (ICIP 2015), Quebec, QC, Canada, September 27-30, 2015
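The simulator's core operation is rendering a virtual perspective view from a spherical (equirectangular) panorama, where pan/tilt rotate the virtual optical axis and zoom narrows the field of view. A minimal NumPy sketch of that idea follows; it is not the authors' C++ implementation, and the function name and nearest-neighbour sampling are illustrative assumptions.

```python
import numpy as np

def ptz_view(pano, pan_deg, tilt_deg, fov_deg, out_w, out_h):
    """Render a virtual PTZ view from an equirectangular panorama.

    Toy nearest-neighbour sampler: pan/tilt rotate the virtual camera,
    a smaller fov_deg simulates zooming in.
    """
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)  # focal length (px)
    # Pixel grid of the virtual camera, centred on the optical axis.
    x = np.arange(out_w) - out_w / 2.0
    y = np.arange(out_h) - out_h / 2.0
    xx, yy = np.meshgrid(x, y)
    # Unit viewing rays in camera coordinates (z forward).
    dirs = np.stack([xx, yy, np.full_like(xx, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate rays by tilt (around x), then pan (around y).
    t, p = np.radians(tilt_deg), np.radians(pan_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(t), -np.sin(t)],
                   [0, np.sin(t), np.cos(t)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    d = dirs @ (Ry @ Rx).T
    # Convert rays to spherical coordinates and look up panorama pixels.
    lon = np.arctan2(d[..., 0], d[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))    # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return pano[v, u]
```

Because the panorama covers the full sphere, any pan/tilt command stays in bounds, which is what makes evaluation reproducible: every tracker sees the exact same scene regardless of where it points the virtual camera.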
LITIV Dataset 2018, Online Mutual Foreground Segmentation for Multispectral Stereo Videos, International Journal of Computer Vision, 2019
For mutual foreground segmentation. Requested Citation Acknowledgment: St-Charles, P.-L., Bilodeau, G.-A., Bergevin, R., Online Mutual Foreground Segmentation for Multispectral Stereo Videos, International Journal of Computer Vision, 2019
LITIV-VAP dataset, ICCV Multi-Sensor Fusion for Dynamic Scene Understanding workshop, 2017
For mutual foreground segmentation. Augmentation of the VAP dataset with foreground masks and calibration data. Requested Citation Acknowledgment: St-Charles, P.-L., Bilodeau, G.-A., Bergevin, R., Mutual Foreground Segmentation with Multispectral Stereo Pairs, International Conference on Computer Vision Workshops (ICCV Workshops), Venice, Italy, October 22-29, 2017
Bilodeau et al. dataset, Infrared Physics & Technology, 2014
For registering infrared and visible images of people appearing at different planes.
Dataset and ground-truth download link (117 MB) Requested Citation Acknowledgment: Bilodeau, G.-A., Torabi, A., St-Charles, P.-L., Riahi, D., Thermal-Visible Registration of Human Silhouettes: a Similarity Measure Performance Evaluation, Infrared Physics & Technology, Vol. 64, May 2014, pp. 79-86
Torabi et al. dataset, CVIU, 2012 / St-Charles et al. CVPRW 2015
For infrared-visible image registration. Important note: the St-Charles et al. CVPRW 2015 paper added ground-truth polygons to facilitate evaluation; the ground-truth matrices and videos are the same as in the Torabi et al. CVIU 2012 paper.
Dataset and ground-truth download link (100 MB) Requested Citation Acknowledgment: Torabi, A., Massé, G., Bilodeau, G.-A., An iterative integrated framework for thermal-visible image registration, sensor fusion, and people tracking for video surveillance applications, Computer Vision and Image Understanding, Vol. 116, Issue 2, 2012, pp. 210-221, and, if you use the polygons: St-Charles, P.-L., Bilodeau, G.-A., Bergevin, R., Online Multimodal Video Registration Based on Shape Matching, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), Boston, MA, USA, June 7-12, 2015
Bouachir et al. dataset, CVIU 2015
For object tracking. Includes occlusions and many distractors. With ground truth.
Dataset and ground-truth download link (30 MB) Requested Citation Acknowledgment: Bouachir, W., Bilodeau, G.-A., Collaborative part-based tracking using salient predictors, Computer Vision and Image Understanding, Vol. 137, August 2015, pp. 88-101
Mutual Foreground Segmentation with Multispectral Stereo Pairs
Simultaneous registration and segmentation of infrared and visible images. Work done by Pierre-Luc St-Charles at the LITIV lab at Polytechnique Montreal. Appeared at ICCVW 2017. If you use this code, please cite: St-Charles, P.-L., Bilodeau, G.-A., Bergevin, R., Mutual Foreground Segmentation with Multispectral Stereo Pairs, International Conference on Computer Vision Workshops (ICCV Workshops), Venice, Italy, October 22-29, 2017
Multimodal registration of videos based on the contours of shapes
This method was specifically developed for registering people's silhouettes in infrared and visible images. Work done by Pierre-Luc St-Charles at the LITIV lab at Polytechnique Montreal. Appeared at CVPRW 2015. If you use this code, please cite: St-Charles, P.-L., Bilodeau, G.-A., Bergevin, R., Online Multimodal Video Registration Based on Shape Matching, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), Boston, MA, USA, June 7-12, 2015
Test of features for infrared/visible silhouette registration
Code to test various feature descriptors for infrared and visible silhouette registration. If you use this code, please cite: Bilodeau, G.-A., Torabi, A., St-Charles, P.-L., Riahi, D., Thermal-Visible Registration of Human Silhouettes: a Similarity Measure Performance Evaluation, Infrared Physics & Technology, Vol. 64, May 2014, pp. 79-86
PAWCS: Background subtraction using background words with automatic adjustment of local sensitivity
This change detection method, called PAWCS, is based on a non-parametric model where each sample is modeled with LBSP and color information inside background words. Parameters are automatically adjusted based on noise measurements. Work done by Pierre-Luc St-Charles at the LITIV lab at Polytechnique Montreal. Published at IEEE WACV 2015 and in IEEE TIP 2016. If you use this code, please cite: St-Charles, P.-L., Bilodeau, G.-A., Bergevin, R., Universal Background Subtraction Using Word Consensus Models, IEEE Transactions on Image Processing, Vol. 25, Issue 10, 2016, pp. 4768-4781
SuBSENSE: Background subtraction using Local Binary Similarity Patterns (LBSP) with automatic adjustment of local sensitivity
This change detection method, called SuBSENSE, is based on a non-parametric model where each sample is modeled with LBSP and color information. Parameters are automatically adjusted based on noise measurements. Work done by Pierre-Luc St-Charles at the LITIV lab at Polytechnique Montreal. Published at the CVPR 2014 Workshops. If you use this code, please cite: St-Charles, P.-L., Bilodeau, G.-A., Bergevin, R., Flexible Background Subtraction with Self-Balanced Local Sensitivity, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), Columbus, OH, USA, June 23-28, 2014
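The non-parametric principle behind this family of methods can be sketched with a minimal per-pixel sample-consensus model (ViBe-style): a pixel is background when its colour matches enough stored samples. This toy version omits what makes SuBSENSE distinctive, namely the LBSP features and the feedback loops that tune the distance threshold and update rate per pixel; the class name and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class SampleConsensusBG:
    """Toy per-pixel sample-consensus background model.

    A pixel is classified as background when its colour matches at least
    `min_matches` of the N stored samples within distance `radius` per
    channel. SuBSENSE adds LBSP features and per-pixel feedback on the
    threshold and update rate, both omitted here for brevity.
    """

    def __init__(self, first_frame, n_samples=20, radius=20, min_matches=2):
        self.radius, self.min_matches = radius, min_matches
        # Initialise all samples from the first frame, plus a little noise.
        noise = rng.integers(-10, 11, (n_samples,) + first_frame.shape)
        self.samples = np.clip(first_frame[None].astype(int) + noise, 0, 255)

    def apply(self, frame, update_prob=1 / 16):
        """Return a boolean foreground mask and update the model in place."""
        dist = np.abs(self.samples - frame[None].astype(int)).sum(axis=-1)
        matches = (dist < self.radius * frame.shape[-1]).sum(axis=0)
        fg = matches < self.min_matches
        # Conservative update: refresh one random sample at some
        # randomly chosen background pixels only.
        upd = (~fg) & (rng.random(fg.shape) < update_prob)
        idx = rng.integers(0, len(self.samples), fg.shape)
        ys, xs = np.nonzero(upd)
        self.samples[idx[ys, xs], ys, xs] = frame[ys, xs]
        return fg
```

The conservative, stochastic update is the design choice that matters: foreground pixels never contaminate the model, and random sample replacement lets the background age gracefully without a fixed learning rate.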
LOBSTER: Background subtraction with Local Binary Similarity Patterns (LBSP)
This change detection method, called LOBSTER, is based on a non-parametric model where each sample is modeled with LBSP and color information. Work done by Pierre-Luc St-Charles at the LITIV lab at Polytechnique Montreal. Published at WACV14. If you use this code, please cite: St-Charles, P.-L., Bilodeau, G.-A., Improving Background Subtraction using Local Binary Similarity Patterns, IEEE Winter conference on Applications of Computer Vision (WACV14), Steamboat Springs, Colorado, USA, March 24-26, 2014
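The LBSP descriptor shared by LOBSTER, SuBSENSE, and PAWCS can be sketched as follows. Unlike classic LBP, which encodes an intensity *ordering* (is the neighbour brighter than the centre?), LBSP encodes *similarity*: a bit is set when a neighbour lies within a threshold of the centre intensity, with the threshold proportional to that intensity for some robustness to illumination change. This is a simplified sketch: the published descriptor samples a fixed 16-pixel pattern inside the 5x5 window, whereas this toy version uses all 24 neighbours.

```python
import numpy as np

def lbsp(patch, rel_thresh=0.3):
    """24-bit Local Binary Similarity Pattern of a 5x5 grayscale patch.

    Bit i is 1 when neighbour i is within rel_thresh * centre of the
    centre pixel's intensity (similarity), not when it is brighter
    (ordering, as in classic LBP).
    """
    center = int(patch[2, 2])
    similar = np.abs(patch.astype(int) - center) <= rel_thresh * center
    bits = np.delete(similar.ravel(), 12)  # drop the centre pixel itself
    return int("".join("1" if b else "0" for b in bits), 2)
```

In the background model, a candidate pixel matches a stored sample only when both the colour distance and the Hamming distance between their LBSP strings are small, which catches camouflaged foreground that colour alone would miss.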
MKCF tracker
This is the implementation of the MKCF multiple object tracker. Work done by Yuebin Yang at the LITIV lab at Polytechnique Montreal. Published at CRV 2017. If you use this code, please cite: Yang, Y., Bilodeau, G.-A., Multiple Object Tracking with Kernelized Correlation Filters in Urban Mixed Traffic, 14th Conference on Computer and Robot Vision (CRV), Edmonton, Alberta, Canada, May 16-19, 2017, pp. 209-216
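At the heart of any KCF-based tracker is a correlation filter trained in the Fourier domain: ridge regression against a Gaussian-shaped label centred on the target, with detection by locating the peak of the correlation response. Below is a minimal single-channel, linear-kernel sketch of that core idea; the function names are illustrative, and the actual MKCF tracker adds the kernel trick, multi-channel features, windowing, online model updates, and multi-object management on top.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    """Train a linear correlation filter on one template patch.

    Ridge regression is solved elementwise in the Fourier domain against
    a Gaussian label y peaked at the origin (circularly), so that
    ifft2(filt * fft2(patch)) approximates y.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy = np.minimum(ys, h - ys)  # circular distances to the origin
    dx = np.minimum(xs, w - xs)
    y = np.exp(-(dx**2 + dy**2) / (2 * sigma**2))  # desired response
    X, Y = np.fft.fft2(patch), np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)  # Fourier-domain filter

def detect(filt, patch):
    """Return the (dy, dx) circular shift maximising the response."""
    resp = np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

The appeal for real-time multiple object tracking is that training and detection both reduce to a handful of FFTs, so one filter per tracked road user stays cheap.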
SPiKeS: Superpixel-Keypoints Structure for Robust Visual Tracking
This is the implementation of the SPiKeS object tracker based on voting, superpixels, and keypoints. Work done by François-Xavier Derue at the LITIV lab at Polytechnique Montreal. Published in Machine Vision and Applications. If you use this code, please cite: Derue, F.-X., Bilodeau, G.-A., Bergevin, R., SPiKeS: Superpixel-Keypoints Structure for Robust Visual Tracking, Machine Vision and Applications, 2017
CTSE: Single object tracking with structure encoding
This is the implementation of the CTSE single-object tracker based on voting and the coherent motion of keypoints. It also uses contextual information around the tracked object. Work done by Tanushri Chakravorty at the LITIV lab at Polytechnique Montreal. Published at ICIP 2015. If you use this code, please cite: Chakravorty, T., Bilodeau, G.-A., Granger, E., Contextual Object Tracker with Structure Encoding, IEEE International Conference on Image Processing (ICIP 2015), Quebec, QC, Canada, September 27-30, 2015, pp. 4937-4941
Urban Tracker: Multiple object tracking for a priori unknown objects
Multiple object tracker based on FREAK keypoints, bounding box interpolation, and a state machine to handle occlusions and fragmentation. Work done by Jean-Philippe Jodoin at the LITIV lab at Polytechnique Montreal. Published at WACV14. If you use this code, please cite: Jodoin, J.-P., Bilodeau, G.-A., Saunier, N., Urban Tracker: Multiple Object Tracking in Urban Mixed Traffic, IEEE Winter conference on Applications of Computer Vision (WACV14), Steamboat Springs, Colorado, USA, March 24-26, 2014
SpotNet: Self-Attention Multi-Task Network for Object Detection
Work done by Hughes Perreault at the LITIV Lab at Polytechnique Montreal. If you use this code, please cite: Perreault, H., Bilodeau, G.-A., Saunier, N., Héritier, M., SpotNet: Self-Attention Multi-Task Network for Object Detection, 17th Conference on Computer and Robot Vision (CRV), Ottawa, Ontario, Canada, May 13-15, 2020
Spatio-Temporal Feedback to Detect and Segment Carried Objects
Carried object detection with spatio-temporal information. Work done by Farnoosh Ghadiri at the LITIV Lab at Polytechnique Montreal. Published at BMVC 2017. If you use this code, please cite: Ghadiri, F., Bergevin, R., Bilodeau, G.-A., Spatio-Temporal Feedback to Detect and Segment Carried Objects, 28th British Machine Vision Conference (BMVC), London, UK, September 4-7, 2017
Carried Object Detection based on an Ensemble of Contour Exemplars
Carried object detection based on a contour dictionary. Work done by Farnoosh Ghadiri at the LITIV Lab at Polytechnique Montreal. Published at ECCV 2016. If you use this code, please cite: Ghadiri, F., Bergevin, R., Bilodeau, G.-A., Carried Object Detection based on an Ensemble of Contour Exemplars, 14th European Conference on Computer Vision (ECCV 2016), Amsterdam, the Netherlands, October 8-14, 2016, Part VII, pp. 852-866
Activity recognition with the MoFREAK spatio-temporal descriptor
Action/activity recognition for surveillance scenarios with local binary feature descriptors. Work done by Chris Whiten for the VIVA Research Lab at the University of Ottawa and the LITIV Lab at Polytechnique Montreal. Work was completed for TRECVID 2012, extended with further action recognition research, and published at CRV 2013. If you use this code, please cite: Whiten, C., Laganiere, R., Bilodeau, G.-A., Efficient Action Recognition with MoFREAK, Tenth Conference on Computer and Robot Vision (CRV 2013), Regina, Saskatchewan, Canada, May 28-31, 2013, pp. 319-325
Source code for:
El Maachi, I., Bilodeau, G.-A., Bouachir, W., Deep 1D-Convnet for accurate Parkinson disease detection and severity prediction from gait, Expert Systems With Applications, Volume 143, 2020