[cs-talks] IVC Seminar - Thursday 4/21, 2-3PM in MCS 148
Devits, Christopher R
cdevits at bu.edu
Tue Apr 19 09:27:21 EDT 2016
Recognition and prediction of human activities from videos
April 21, 2-3pm
Dr. Yu Kong
Robust recognition and prediction of human activities from videos relies heavily on understanding the spatiotemporal properties of sequential data. For example, classification can be based on motion relationships detected from the full sequence. In certain time-critical cases, however, intelligent systems do not have the luxury of waiting for the entire action to finish executing, and prompt decisions must be made from temporally incomplete data.
In this talk, I will first focus on recognizing human interactions using semantic descriptions discovered from data. Specifically, I will introduce both manually labeled and unsupervised semantic motion descriptions for characterizing interaction properties. I will then describe an approach to activity prediction, where classification must be made from temporally incomplete action executions. The proposed model captures the temporal dynamics of human activities by explicitly considering both the full history of observed features and the features within smaller temporal segments.
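The prediction setting described above can be illustrated with a toy sketch: classify an action after observing only the first k of T temporal segments, scoring each class against both an accumulated history feature and the latest local segment feature. All names, the per-class templates, and the distance-based scoring are illustrative assumptions, not the speaker's actual model.

```python
import numpy as np

def predict_partial(segment_features, templates, k):
    """Classify from only the first k temporal segments, combining the
    accumulated history with the most recent local segment.
    (Hypothetical sketch; not the model presented in the talk.)"""
    history = segment_features[:k].mean(axis=0)   # global history feature
    local = segment_features[k - 1]               # latest local segment
    scores = {}
    for label, (hist_t, loc_t) in templates.items():
        # Higher score = closer to this class's history and local templates.
        scores[label] = (-np.linalg.norm(history - hist_t)
                         - np.linalg.norm(local - loc_t))
    return max(scores, key=scores.get)

# Tiny synthetic example: two "activities" with distinct feature patterns.
rng = np.random.default_rng(0)
wave = np.ones((10, 4)) + 0.1 * rng.standard_normal((10, 4))
punch = -np.ones((10, 4)) + 0.1 * rng.standard_normal((10, 4))
templates = {
    "wave": (np.ones(4), np.ones(4)),
    "punch": (-np.ones(4), -np.ones(4)),
}

# Even with only 3 of 10 segments observed, the prefix suffices here.
print(predict_partial(wave, templates, k=3))   # wave
print(predict_partial(punch, templates, k=3))  # punch
```

The point of the sketch is only the problem formulation: a decision is forced before the full sequence (all 10 segments) is available.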
Dr. Yu Kong received the B.Eng. degree in automation from Anhui University in 2006 and the PhD degree in computer science from the Beijing Institute of Technology, China, in 2012. He was a visiting student at the National Laboratory of Pattern Recognition (NLPR), Chinese Academy of Sciences, from 2007 to 2009, and visited the Department of Computer Science and Engineering, State University of New York at Buffalo, in 2012. He is now a postdoctoral research associate in the Department of Electrical and Computer Engineering at Northeastern University, Boston, MA. Dr. Kong's research interests are computer vision, social media analytics, and machine learning.