
REFERENCES

1. Yi Y, Zheng Z, Lin M. Realistic action recognition with salient foreground trajectories. Expert Systems with Applications, 2017, 75: 44-55.
2. Han Y, Yang Y, Wu F, et al. Compact and Discriminative Descriptor Inference Using Multi-Cues. IEEE Transactions on Image Processing, 2015, 24(12): 5114-5126.
3. Wen Z, Wang C, Xiao B, et al. Human action recognition using weighted pooling. IET Computer Vision, 2014, 8(6): 579-587.
4. Feng L, Zhao Y, Zhao W, et al. A comparative review of graph convolutional networks for human skeleton-based action recognition. Artificial Intelligence Review, 2022, 55(5): 4275-4305.
5. Rodrigues A, Pereira A S, Rui M, et al. Using Artificial Intelligence for Pattern Recognition in a Sports Context. Sensors, 2020, 20(11): 3040.
6. Zhang S, Gao C, Jing Z, et al. Discriminative Part Selection for Human Action Recognition. IEEE Transactions on Multimedia, 2017, 20: 769-780.
7. Wang B, Yu L, Xiao W, et al. Position and locality constrained soft coding for human action recognition. Journal of Electronic Imaging, 2013, 22(4): 041118.
8. Wu D. Online position recognition and correction method for sports athletes. Cognitive Systems Research, 2018, 52: 174-181.
9. Du Y, Fu Y, Wang L. Representation Learning of Temporal Dynamics for Skeleton-Based Action Recognition. IEEE Transactions on Image Processing, 2016, 25(7): 3010-3022.
10. Ma S, Bargal S A, Zhang J, et al. Do Less and Achieve More: Training CNNs for Action Recognition Utilizing Action Images from the Web. Pattern Recognition, 2015, 68: 334-345.
11. Niu L, Li W, Xu D. Exploiting Privileged Information from Web Data for Action and Event Recognition. International Journal of Computer Vision, 2016, 118(2): 130-150.
12. Hu B, Yuan J, Wu Y. Discriminative Action States Discovery for Online Action Recognition. IEEE Signal Processing Letters, 2016, 23(10): 1374-1378.
13. Chen C, Jafari R, Kehtarnavaz N. Improving Human Action Recognition Using Fusion of Depth Camera and Inertial Sensors. IEEE Transactions on Human-Machine Systems, 2015, 45(1): 51-61.
14. Liu L, Shao L, Li X, et al. Learning Spatio-Temporal Representations for Action Recognition: A Genetic Programming Approach. IEEE Transactions on Cybernetics, 2015, 46(1): 158-170.
15. Kong Y, Fu Y. Max-Margin Heterogeneous Information Machine for RGB-D Action Recognition. International Journal of Computer Vision, 2017, 123(3): 350-371.
16. Alcantara M F, et al. Real-time action recognition using a multilayer descriptor with variable size. Journal of Electronic Imaging, 2016, 25(1): 013020.
17. Nazir S, Yousaf M H, Nebel J C, et al. A Bag of Expression framework for improved human action recognition. Pattern Recognition Letters, 2018, 103: 39-45.
18. Liu Y, Dong H, Wang L. Trampoline Motion Decomposition Method Based on Deep Learning Image Recognition. Scientific Programming, 2021, 2021: 1-8.
19. Ramezani M, Yaghmaee F. A review on human action analysis in videos for retrieval applications. Artificial Intelligence Review, 2016, 46(4): 485-514.
20. Xu W, Miao Z, Yu J, et al. Action Recognition and Localization with Spatial and Temporal Contexts. Neurocomputing, 2019, 333: 351-363.
21. Yong B, Zhang G, Chen H, et al. Intelligent monitor system based on cloud and convolutional neural networks. The Journal of Supercomputing, 2017, 73(7): 3260-3276.
22. Lin B, Fang B, Yang W, et al. Human Action Recognition Based on Spatio-temporal Three-Dimensional Scattering Transform Descriptor and an Improved VLAD Feature Encoding Algorithm. Neurocomputing, 2018, 348: 145-157.
23. Lemieux N, Noumeir R. A Hierarchical Learning Approach for Human Action Recognition. Sensors, 2020, 20(17): 4946.
24. Wang T, Li J, Wu H N, et al. ResLNet: deep residual LSTM network with longer input for action recognition. Frontiers of Computer Science, 2022, 16(6): 1-9.
25. Merler M, Mac K, Joshi D, et al. Automatic Curation of Sports Highlights Using Multimodal Excitement Features. IEEE Transactions on Multimedia, 2019, 21(5): 1147-1160.