[cs-talks] CS Upcoming Seminars: NRG (Tues) + BUSec (Wed) + IVC (Thurs)

Greenwald, Faith fgreen1 at bu.edu
Mon Feb 29 11:10:52 EST 2016


BUSec Seminar/Thesis Proposal
On the Cryptographic Hardness of Finding a Nash Equilibrium  [Thesis Proposal]
Omer Paneth, BU
Wednesday, March 2, 2016 at 9:45am in MCS 148

Abstract: We prove that finding a Nash equilibrium of a game is hard, assuming the existence of indistinguishability obfuscation and injective one-way functions with sub-exponential hardness. We do so by showing how these cryptographic primitives give rise to a hard computational problem that lies in the complexity class PPAD, for which finding a Nash equilibrium is known to be complete.

Previous proposals for basing PPAD-hardness on program obfuscation considered a strong "virtual black-box" notion that is subject to severe limitations and is unlikely to be realizable for the programs in question. In contrast, for indistinguishability obfuscation no such limitations are known, and recently, several candidate constructions of indistinguishability obfuscation were suggested based on different hardness assumptions on multilinear maps.

Our result provides further evidence of the intractability of finding a Nash equilibrium, one that is extrinsic to the evidence presented so far.

Joint work with Nir Bitansky and Alon Rosen
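As a toy illustration of the object whose computational hardness the talk addresses (this sketch is not part of the proposal itself), a mixed Nash equilibrium of a 2x2 bimatrix game can be computed directly from the players' indifference conditions; the function name and the example game below are illustrative choices, not from the talk:

```python
def mixed_nash_2x2(A, B):
    """Mixed Nash equilibrium of a 2x2 bimatrix game with no pure
    equilibrium, via indifference conditions (illustrative sketch).
    A: row player's payoffs, B: column player's payoffs."""
    # Row player plays row 0 with probability p chosen so the column
    # player is indifferent between her two columns:
    # p*B[0][0] + (1-p)*B[1][0] == p*B[0][1] + (1-p)*B[1][1]
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # Column player plays column 0 with probability q chosen so the
    # row player is indifferent between her two rows.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies: the unique equilibrium is (1/2, 1/2).
A = [[1, -1], [-1, 1]]       # row player's payoffs (zero-sum game)
B = [[-1, 1], [1, -1]]       # column player's payoffs
print(mixed_nash_2x2(A, B))  # (0.5, 0.5)
```

For 2x2 games this is elementary; the talk's point is that no analogous efficient procedure is expected for general games, since the problem is PPAD-complete.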

BU Lecture
"Center for the Study of Ancient Documents"
Oxford University
Wednesday, March 2, 2016 at 4pm
MET College, room 625

IVC Seminar
Tejas Kulkarni
Thursday, March 3, 2016 at 2pm in MCS 148

Abstract: Recent advances in probabilistic modeling and statistical learning, coupled with the availability of large training datasets, have led to remarkable progress in perception. An alternative to this empirical, regression-based approach, often termed 'analysis-by-synthesis', instead relies on models or simulations of the world that can be used to interpret perceptual observations. How do we best design systems that map raw scenes into structured representations? In this talk I will demonstrate models that take raw pixels as input, produce sub-symbolic representations using deep neural networks, and finally bind these distributed representations into structured ones. In the final part of the talk, I will turn to using simulators as a training ground for learning agents via deep reinforcement learning. Time permitting, I shall discuss and present results on the importance of structured representations in deep reinforcement learning when rewards are sparse.
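To make the sparse-reward setting mentioned above concrete, here is a minimal tabular Q-learning sketch on a toy chain environment where reward arrives only at the final state; the environment, hyperparameters, and code are illustrative assumptions, not the deep RL methods of the talk:

```python
import random

random.seed(0)

# Toy sparse-reward chain MDP: states 0..N-1, actions {0: left, 1: right}.
# Reward is 1 only on reaching the terminal state N-1; 0 everywhere else.
N = 6
Q = [[0.0, 0.0] for _ in range(N)]   # tabular action-value estimates
alpha, gamma, eps = 0.5, 0.9, 0.3    # learning rate, discount, exploration

for episode in range(300):
    s = 0
    while s != N - 1:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update toward the bootstrapped target.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy after training: expect "move right" (action 1) everywhere.
policy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(N - 1)]
print(policy)
```

Even in this tiny chain, early episodes wander for a long time before the single reward is ever seen, which is a miniature version of the exploration problem that motivates the structured representations discussed in the talk.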

Bio: Tejas Kulkarni is a PhD candidate in the Department of Brain and Cognitive Sciences and Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. His broad interest is at the intersection of Artificial Intelligence, Cognitive Science and Neuroscience. He is specifically interested in bridging Deep Learning and Probabilistic Modeling/Programming, inspired by findings from Neuroscience and Psychology. Recently, he has focused on applying these techniques in the domain of vision as inverse graphics and deep reinforcement learning. He has been awarded the Henry Singleton award and the Leventhal Fellowship for his graduate work. He has also won best paper honorable mention awards at the CVPR and EMNLP conferences.
