[cs-talks] Upcoming CS Seminars: Data Seminar (Tues) + IVC (Tues) + BUSec Seminar (Wed) + Student Sem (Thurs)
fgreen1 at bu.edu
Mon Oct 19 11:29:14 EDT 2015
Applications of Mining Heterogeneous Information Networks
Yizhou Sun, Northeastern University
Tuesday, October 20, 2015 at 11am in MCS 148
Abstract: Most real-world applications that handle big data, including interconnected social media and social networks, scientific, engineering, or medical information systems, online e-commerce systems, and most database systems, can be structured into heterogeneous information networks. Unlike homogeneous information networks, where all objects and links are treated as being of a single type or are left untyped, heterogeneous information networks in our model are semi-structured and typed, following a network schema. Recent studies have demonstrated the power of heterogeneous information networks in many real-world applications. In this talk, I will introduce some of these interesting studies, including (1) recommendation and (2) information diffusion.
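As a minimal illustration of the "typed network following a schema" idea (the node and link types below, modeled on a bibliographic network, are illustrative assumptions and not taken from the talk), such a network can be sketched as:

```python
# A heterogeneous information network: every node and edge has a type,
# and edges must conform to a declared network schema.
SCHEMA = {
    ("Author", "writes", "Paper"),
    ("Paper", "published_in", "Venue"),
}

class HIN:
    def __init__(self, schema):
        self.schema = schema
        self.node_type = {}   # node id -> type name
        self.edges = []       # (src, relation, dst) triples

    def add_node(self, node_id, node_type):
        self.node_type[node_id] = node_type

    def add_edge(self, src, relation, dst):
        # Reject edges whose (src type, relation, dst type) triple
        # is not part of the network schema.
        triple = (self.node_type[src], relation, self.node_type[dst])
        if triple not in self.schema:
            raise ValueError(f"edge {triple} violates the network schema")
        self.edges.append((src, relation, dst))

    def neighbors(self, node_id, relation):
        return [d for s, r, d in self.edges if s == node_id and r == relation]

g = HIN(SCHEMA)
g.add_node("a1", "Author")
g.add_node("p1", "Paper")
g.add_node("v1", "Venue")
g.add_edge("a1", "writes", "p1")
g.add_edge("p1", "published_in", "v1")
print(g.neighbors("a1", "writes"))  # ['p1']
```

A homogeneous network would collapse all of these into one untyped node set; keeping the types lets algorithms exploit paths such as Author-Paper-Venue.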
Bio: Yizhou Sun is an assistant professor in the College of Computer and Information Science of Northeastern University. She received her Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2012. Her principal research interest is in mining information and social networks, and more generally in data mining, machine learning, and network science, with a focus on modeling novel problems and proposing scalable algorithms for large-scale, real-world applications. Yizhou has over 60 publications in books, journals, and major conferences. Tutorials based on her thesis work on mining heterogeneous information networks have been given at several premier conferences, including EDBT 2009, SIGMOD 2010, KDD 2010, ICDE 2012, VLDB 2012, and ASONAM 2012. She received the 2012 ACM SIGKDD Best Student Paper Award, the 2013 ACM SIGKDD Doctoral Dissertation Award, a 2013 Yahoo ACE (Academic Career Enhancement) Award, and a 2015 NSF CAREER Award.
Understanding and Improving the Internal Representation of CNNs
Aditya Khosla, MIT
Tuesday, October 20, 2015 at 2pm in MCS 148
Abstract: The recent success of convolutional neural networks (CNNs) on object recognition has made CNNs the state-of-the-art approach for a variety of tasks in computer vision. This has led to a plethora of recent works that analyze the internal representation of CNNs in an attempt to unlock the secret to their remarkable performance and to provide a means of improving it further by understanding its shortcomings. While some works suggest that CNNs learn a distributed code for objects, others suggest that they learn a more semantically interpretable representation consisting of various components such as colors, textures, objects, and scenes.
In this talk, I explore methods to deepen the semantic understanding of the internal representation of CNNs and propose methods for improving it. Unlike prior work that relies on the manual annotation of each neuron, we propose an approach that uses existing annotations from a variety of datasets to automatically understand the semantics of the firings of each neuron. Specifically, we classify each neuron as a detector of color, texture, shape, object parts, objects, or scenes, and apply this to automatically parse images at various levels in a single forward pass of a CNN. We find that despite the availability of ground-truth annotations from various datasets, the task of identifying exactly what a unit is doing turns out to be rather challenging. As such, we introduce a visualization benchmark containing annotations of the internal units of popular CNN models, allowing further research to be conducted in a more structured setting.
We demonstrate that our approach performs well on this benchmark and can be applied to answering a number of questions related to CNNs: How do the semantics of neurons evolve during training? Do they latch on to specific concepts and stick to them, or do they fluctuate? Do the semantics learned by a network differ when training from scratch or fine-tuning? How does the representation change if the image set is the same but the label space changes?
Bio: Aditya Khosla is a PhD student at MIT working on deep learning for computer vision and human cognition. He is interested in developing machine learning techniques that go beyond simply identifying what an image or video contains, and instead predict the impact visual media has on people, e.g., predicting whether someone would like an image, or whether they would remember it. He is also interested in applying computational techniques to predictably modify these properties of visual media automatically. He is a recipient of the Facebook Fellowship, and his work on predicting image popularity and modifying face memorability has been widely featured in popular media such as The New York Times, BBC, and TechCrunch. For more information, visit: http://mit.edu/khosla
On the Correlation Intractability of Obfuscated Pseudorandom Functions
Yilei Chen, BU
Wednesday, October 21, 2015 at 10am in MCS 180 (Hariri Seminar Room)
Abstract: A family of hash functions is called "correlation intractable" if it is hard to find, given a random function in the family, an input-output pair that satisfies any "sparse" relation, namely any relation that is hard to satisfy for truly random functions. Correlation intractability captures a strong and natural Random Oracle-like property. However, it is widely considered to be unobtainable. Indeed, it was shown that correlation intractable functions do not exist for some length parameters [Canetti, Goldreich and Halevi, J. ACM 2004]. Furthermore, no candidate constructions have been proposed in the literature for any setting of the parameters.
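Informally (a hedged paraphrase of the two notions in the abstract; the notation below is assumed, not taken from the talk):

```latex
% R is sparse: a truly random output almost never satisfies it.
\[
  \forall x:\quad \Pr_{y}\bigl[(x, y) \in R\bigr] \le \mathrm{negl}(n).
\]
% The family H = {h_k} is correlation intractable for R: no efficient
% adversary, given a random key k, can find a satisfying input.
\[
  \forall \text{ PPT } A:\quad
  \Pr_{k}\bigl[x \leftarrow A(k) : (x, h_k(x)) \in R\bigr] \le \mathrm{negl}(n).
\]
```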
We construct a correlation intractable function ensemble that withstands all relations with a priori bounded polynomial complexity. We assume the existence of sub-exponentially secure indistinguishability obfuscators, puncturable pseudorandom functions, and input-hiding obfuscators for evasive circuits. The existence of the latter is implied by Virtual-Grey-Box obfuscation for evasive circuits.
Joint work with: Ran Canetti and Leonid Reyzin
Great Ideas in Theoretical Computer Science That Don't Work
Yilei Chen, BU
Thursday, October 22, 2015 at 12:30pm in MCS 148
Description: In this talk we discuss, as the title indicates, great ideas in (theoretical) computer science that don't work.