[cs-talks] Upcoming Seminars: iBench (Thurs) + IVC (Thurs)

cs, Group cs at bu.edu
Thu Feb 19 10:19:11 EST 2015


iBench/CS512 Lecture
Formal Methods over Euclidean Spaces
Sicun Gao, MIT CSAIL
Thursday, February 19, 2015 at 12:30pm in MCS 148

Abstract: Formal methods are crucial for building computing systems that physically engage us in safety-critical ways, such as airplanes, cardiac pacemakers, and nuclear plants. I will present a logical framework that aims to accomplish the following:

1. Understand the computational complexity of controlling nonlinear and hybrid physical systems. Such problems are commonly considered wildly undecidable because they involve real numbers, differential equations, and so on. I show how reasonable upper bounds can be obtained through a practical formulation.

2. Enhance automation in the design and implementation of control systems through automated reasoning engines. These engines are designed to cope with NP-hard problems, matching the complexity of the problems to be solved. The key is to engineer exponential algorithms to behave well in practice by combining the full power of logical reasoning and numerical algorithms. I will introduce our solver dReal and show some promising experimental results.

3. Certify the correctness of control software through formal proofs. All algorithms used for the design and analysis of these systems should produce mathematical proofs that can be machine-checked. This requires a thorough logical analysis of techniques in control theory and numerical computing.
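
For readers unfamiliar with solvers like dReal (mentioned in point 2), the toy Python sketch below gives a flavor of how such engines interleave logical case-splitting with numerical interval evaluation, answering either "unsat" or "satisfiable up to a precision delta". The formula, constants, and helper names are invented for illustration; this is not the speaker's code or dReal's API.

import math

def interval_f(lo, hi):
    """Crude interval enclosure of f(x) = sin(x) + x**2 - 0.5 on [lo, hi]."""
    sin_lo, sin_hi = -1.0, 1.0                      # sin is always in [-1, 1]
    if hi - lo < math.pi:                           # tighten on narrow boxes (sin is 1-Lipschitz)
        samples = [math.sin(lo), math.sin(hi)]
        sin_lo, sin_hi = min(samples) - (hi - lo), max(samples) + (hi - lo)
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sin_lo + sq_lo - 0.5, sin_hi + sq_hi - 0.5

def delta_sat(lo, hi, delta=1e-3):
    """Branch-and-prune: is f(x) <= 0 satisfiable on [lo, hi], up to precision delta?"""
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        f_lo, f_hi = interval_f(a, b)
        if f_lo > 0.0:                              # whole box infeasible: prune it
            continue
        if f_hi <= delta or b - a < delta:          # feasible up to the tolerance delta
            return "delta-sat"
        m = (a + b) / 2.0                           # otherwise split (the "logical" branching step)
        stack.extend([(a, m), (m, b)])
    return "unsat"

print(delta_sat(-2.0, 2.0))   # expected: delta-sat (e.g. f(0) = -0.5)
print(delta_sat(3.0, 4.0))    # expected: unsat (f is strictly positive on [3, 4])

Real solvers handle systems of constraints, differential equations, and much smarter pruning; the point here is only the combination of exhaustive case analysis with numerically safe interval bounds.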

Bio: Sicun Gao is a postdoctoral researcher at MIT CSAIL. His research focuses on automated formal methods for the design and analysis of cyber-physical systems. He obtained his PhD from Carnegie Mellon University in 2012 and his BS from Peking University in 2006.

IVC Seminar
Computational Understanding of Image Memorability
Zoya Bylinskii, MIT
Thursday, February 19, 2015 at 4pm in MCS 148

Abstract: In this talk, I will describe the research done in the Oliva Lab on Image Memorability - a quantifiable property of images that can be used to predict whether an image will be remembered or forgotten. Apart from presenting the lab's research directions and findings, I will focus on the work I have done in understanding and modeling the intrinsic and extrinsic factors that affect image memorability. I will present results on how consistent people are in which images they find memorable and forgettable (across experiments, settings, and visual stimuli), and I will show how these findings generalize to information visualizations. I will also demonstrate how the extrinsic factors of image context and observer eye behavior modulate image memorability. I will present an information-theoretic model of context and image distinctiveness to quantify their effects on memorability. Finally, I will demonstrate how eye movements, pupil dilations, and blinks can be predictive of image memorability. In particular, our computational model can use an observer's eye movements on an image to predict whether or not the image will be later remembered. In this talk, I hope to offer a more complete picture of image memorability, including its contributions to cognitive science and the computational applications it makes possible.
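
As a rough illustration of the prediction task described above (using an observer's eye movements to predict whether an image will later be remembered), here is a minimal, self-contained Python sketch. The feature set, synthetic data, and classifier are assumptions chosen for illustration only; they are not the Oliva Lab's actual model. Requires numpy and scikit-learn.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200

# Per-trial eye-movement summaries (all invented for illustration):
# [number of fixations, mean fixation duration (ms), scanpath length (px), blink count]
X = np.column_stack([
    rng.poisson(8, n_trials),
    rng.normal(250, 40, n_trials),
    rng.normal(1200, 300, n_trials),
    rng.poisson(2, n_trials),
])
# Binary label: 1 if the image was remembered in a later memory test, else 0 (toy correlation).
y = (X[:, 0] + rng.normal(0, 2, n_trials) > 8).astype(int)

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

In practice one would of course replace the synthetic arrays with real fixation and blink data per observer-image pair, but the overall shape of the pipeline (summary features -> classifier -> cross-validated accuracy) is the same.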


The following is the first paper on image memorability to come out of the Oliva Lab, and it started a whole direction of research: http://cvcl.mit.edu/papers/IsolaXiaoTorralbaOliva-PredictingImageMemory-CVPR2011.pdf -- it can give people some background, though I will provide an intro as well.


Bio: Zoya Bylinskii is a PhD student at MIT, jointly supervised by Aude Oliva and Fredo Durand. She works in the area of computational perception - at the intersection of cognitive science and computer science. Specifically, she is interested in studying human memory and attention in order to build computational models that advance our understanding of these areas and their applications. Her current work spans a number of research directions, including image memorability, saliency benchmarking, and information visualizations. Zoya most recently completed her MS under the supervision of Antonio Torralba and Aude Oliva, on a "Computational Understanding of Image Memorability". Prior to this, her BS research on parts-based object recognition was supervised by Sven Dickinson at the University of Toronto. She also spent a lovely summer in 2011 working at BU with Stan Sclaroff on reduplication detection in sign language :)

——
UPCOMING

IVC Seminar
Improving Face Analysis Using Expression Dynamics
Hamdi Dibeklioglu, Delft University of Technology
Monday, February 23, 2015 at 3pm in MCS 148

Abstract: Most approaches to face analysis rely solely on static appearance. However, temporal analysis of expressions reveals interesting patterns. In this talk, I will describe automatic spontaneity detection for enjoyment smiles using the temporal dynamics of different facial regions. We have recorded spontaneous and posed enjoyment smiles of hundreds of visitors to the NEMO Science Centre in Amsterdam, thus creating the most comprehensive smile database to date: the UvA-NEMO Smile Database (www.uva-nemo.org). Our findings on this publicly available database show that facial dynamics go beyond expression analysis. I will discuss how we can use expression dynamics to improve age estimation and kinship detection.
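
As a rough sketch of what "temporal dynamics" features might look like in code (my illustration, not the speaker's implementation), the Python snippet below summarizes the speed and acceleration of a toy lip-corner trajectory; the signal, sampling rate, and feature names are invented.

import numpy as np

fps = 30.0
t = np.arange(0, 3, 1 / fps)                                   # a 3-second clip
displacement = np.clip(np.sin(np.pi * t / 3), 0, None) * 8.0   # toy lip-corner displacement (mm)

speed = np.gradient(displacement, 1 / fps)                     # first derivative (mm/s)
accel = np.gradient(speed, 1 / fps)                            # second derivative (mm/s^2)

features = {
    "amplitude_mm": displacement.max(),
    "max_speed_mm_s": np.abs(speed).max(),
    "max_accel_mm_s2": np.abs(accel).max(),
    "onset_duration_s": t[displacement.argmax()],              # time to reach the apex
}
print(features)   # such per-region summaries could feed a spontaneous-vs-posed classifier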

Bio: Hamdi Dibeklioglu received the B.Sc. degree from Yeditepe University, Istanbul, Turkey, in 2006, the M.Sc. degree from Bogazici University, Istanbul, Turkey, in 2008, and the Ph.D. degree from the University of Amsterdam, Amsterdam, The Netherlands, in 2014. He is currently a Post-Doctoral Researcher with the Pattern Recognition and Bioinformatics Group, Delft University of Technology, Delft, The Netherlands. He is also a Guest Researcher with the Intelligent Systems Lab Amsterdam, University of Amsterdam. His research interests include computer vision, pattern recognition, and automatic analysis of human behavior.

BUSec Seminar
Security and Privacy for the Forthcoming Vehicle-to-Vehicle Communications System
William Whyte, Security Innovation
Wednesday, February 25, 2015 at 9:30am in MCS 180 — Hariri Seminar Room

Abstract: The US Department of Transportation announced on February 3rd, 2014, that it intends to mandate a system for inclusion in all light vehicles that would allow them to broadcast their position and velocity on a more-or-less continuous basis. The system is claimed to have the capability to prevent up to 80% of all unimpaired collisions. The presentation, by a key member of the team designing the communications security for the system, will discuss the security needs, the constraints due to cost and other issues, and the efforts that are being made to ensure that the system will not compromise end-user privacy. This will include an overview of some novel cryptographic constructs that improve the scalability, robustness, and privacy of the system. There may even be proofs.
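
As a toy illustration of the pseudonym idea behind vehicle-to-vehicle privacy (not the actual design discussed in the talk), the Python sketch below signs position/velocity broadcasts with short-lived rotating keys, so that successive messages are harder to link to a single car. It uses the pyca/cryptography package; the message format and rotation policy are invented for the example.

import json, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# A small pool of pseudonym keys; real systems manage many short-lived certificates.
pseudonym_keys = [ec.generate_private_key(ec.SECP256R1()) for _ in range(5)]

def sign_safety_message(lat, lon, speed_mps, msg_index):
    key = pseudonym_keys[msg_index // 100 % len(pseudonym_keys)]   # rotate every 100 messages
    payload = json.dumps({"lat": lat, "lon": lon, "v": speed_mps, "t": time.time()}).encode()
    signature = key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return payload, signature, key.public_key()

payload, sig, pub = sign_safety_message(42.35, -71.10, 13.4, msg_index=0)
pub.verify(sig, payload, ec.ECDSA(hashes.SHA256()))   # raises InvalidSignature if tampered
print("signed", len(payload), "bytes; signature", len(sig), "bytes")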
