[cs-talks] IVC Seminars - Thursday and Friday this week
cs at bu.edu
Wed Apr 6 08:11:37 EDT 2016
From Augmented Reality to Augmented Human
Judith Amores, MIT
Thursday April 7, 2pm – 3pm, MCS 148
Over the last two decades, long-distance communication has become dramatically easier thanks to new technologies. E-mail, Facebook, Skype, and text messaging allow for immediate communication and enable conversations between people who are thousands of miles apart. Current technologies are very good at connecting people remotely, but when it comes to performing physical tasks or being emotionally connected at a distance, they offer limited support. A deep and personal connection requires ways of expressing empathy, sharing memories, and showing intimacy and affection. In this talk, I will discuss possibilities that go beyond conventional face-to-face technologies, making otherwise imperceptible cognitive-state information explicitly known. I will introduce my research projects, including designs and applications with head-mounted displays and wearable fashion for augmenting our perception and connecting people at a distance.
Judith is a second-year graduate student in the Fluid Interfaces Group at the MIT Media Lab. Her research focuses on Human-Computer Interaction, with the aim of making the user experience more seamless, natural, and integrated into our physical lives. Before joining MIT she graduated as a Multimedia Engineer and worked as a UX Researcher at URL Barcelona. She has worked at Microsoft Research, developing interactive prototypes in the areas of mixed and virtual reality, and has explored the use of wearable devices such as head-mounted displays and wearable fashion to create solutions that more naturally extend our minds, bodies, and behavior. She has been awarded a Facebook Graduate Fellowship, and her work has been featured in press such as The Creators Project, CNN, and Fast Company, and published in top HCI conferences.
Diverse Particle Selection for High-Dimensional Inference in Graphical Models
Erik Sudderth, Brown University
Friday April 8, 2pm – 3pm, MCS 148
Rich graphical models for real-world scene understanding encode the shape and pose of objects via high-dimensional, continuous variables. We describe a particle-based max-product inference algorithm which maintains a diverse set of posterior mode hypotheses, and is robust to initialization. At each iteration, the set of particle hypotheses is augmented via stochastic proposals, and then reduced via an optimization algorithm that minimizes distortions in max-product messages. Our particle selection metric is submodular, and thus efficient greedy algorithms have rigorous optimality guarantees. By avoiding the stochastic resampling steps underlying standard particle filters, we also avoid common degeneracies where particles collapse onto a single hypothesis. Our approach significantly outperforms previous particle-based algorithms in the estimation of human pose from images and videos, and the prediction of protein side-chain conformations.
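The greedy selection step the abstract alludes to can be illustrated with a small sketch. The selection metric below is a simple facility-location (coverage) objective, used here only as a hypothetical stand-in for the paper's message-distortion metric; the function names and the data are invented for illustration. Because a monotone submodular objective is maximized greedily, the classic (1 - 1/e) approximation guarantee applies.

```python
import numpy as np

def greedy_select(candidates, k, score):
    """Greedily choose k particle indices maximizing a submodular set score.

    candidates: (n, d) array of particle hypotheses.
    score: function mapping (list of indices, candidates) -> float.
    """
    selected = []
    remaining = list(range(len(candidates)))
    for _ in range(k):
        # Pick the candidate giving the largest marginal gain.
        best = max(remaining, key=lambda i: score(selected + [i], candidates))
        selected.append(best)
        remaining.remove(best)
    return selected

def coverage_score(indices, candidates):
    # Facility-location objective: every candidate is "covered" by its
    # nearest selected particle; we reward small worst-case distances,
    # so selected particles spread across distinct posterior modes.
    if not indices:
        return 0.0
    chosen = candidates[indices]                                  # (m, d)
    dists = np.linalg.norm(candidates[:, None] - chosen[None], axis=2)
    return float(-dists.min(axis=1).sum())
```

For example, given 1-D hypotheses clustered around 0, 5, and 10, selecting three particles picks one representative per cluster rather than collapsing onto a single mode, which mirrors the diversity behavior the abstract describes.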
Erik B. Sudderth is an Assistant Professor in the Brown University Department of Computer Science. He received the Bachelor's degree (summa cum laude, 1999) in Electrical Engineering from the University of California, San Diego, and the Master's and Ph.D. degrees (2006) in EECS from the Massachusetts Institute of Technology. His research interests include probabilistic graphical models; nonparametric Bayesian methods; and applications of statistical machine learning in computer vision and the sciences. He received an NSF CAREER award, the ISBA Mitchell Prize, and was named one of "AI's 10 to Watch" by IEEE Intelligent Systems Magazine.