[Busec] BUsec this week: Kai-Min Chung (Tues 11AM) & David Xiao (Thurs 11AM)

Sharon Goldberg goldbe at cs.bu.edu
Tue Feb 28 10:16:49 EST 2012

Just a reminder for Kai-Min's talk today at 11AM.

On Sun, Feb 26, 2012 at 3:44 PM, Sharon Goldberg <goldbe at cs.bu.edu> wrote:
> All,
> We have two visitors giving seminars this week:  Tuesday 11AM we have
> Kai-Min Chung visiting from Cornell, and Thursday 11AM we have Dave
> Xiao visiting from France.  Both seminars will be at 111 Cummington
> St, but we'll move to the bigger room, MCS148.
> We continue next Tues with our own Ben Fuller, speaking about joint
> work with Leo Reyzin and Adam O'Neill.
> See you there!
> Sharon
> **************
> Title: The Randomness Complexity of Parallel Repetition
> Speaker: Kai-Min Chung, Cornell
> When:  Feb 28, 11AM, MCS148
> Abstract:
> Consider an $m$-round interactive protocol with soundness error $1/2$.
> How much extra randomness is required to decrease the soundness error
> to $\delta$ through parallel repetition? Previous work, initiated by
> Bellare, Goldreich and Goldwasser, shows that for public-coin
> interactive protocols with statistical soundness, $m \cdot O(\log
> (1/\delta))$ bits of extra randomness suffice. In this talk, I will
> present our recent work on the above question:
> - We establish the first derandomized parallel repetition theorem for
> public-coin interactive protocols with *computational soundness*
> (a.k.a. arguments). The parameters of our result essentially match
> those of the earlier work in the information-theoretic setting.
> - We show that obtaining even a sub-linear dependency on the number of
> rounds $m$ (i.e., $o(m) \cdot \log(1/\delta)$) is impossible in the
> information-theoretic setting and requires the existence of one-way
> functions in the computational setting.
> - We show that non-trivial derandomized parallel repetition for
> private-coin protocols is impossible in the information-theoretic
> setting and requires the existence of one-way functions in the
> computational setting.
> These results are tight in the sense that parallel repetition theorems
> in the computational setting can trivially be derandomized using
> pseudorandom generators, which are implied by the existence of one-way
> functions.
> This is joint work with Rafael Pass.
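> As a back-of-the-envelope illustration (an editorial aside, not from the
> paper): with soundness error 1/2, about log2(1/delta) independent
> repetitions bring the error down to delta, and naive parallel repetition
> pays fresh coins for every round of every extra copy. A minimal sketch,
> where the per-round coin count is a made-up parameter:

```python
def repetitions_needed(delta, base_error=0.5):
    """Smallest k such that base_error ** k <= delta."""
    k, err = 0, 1.0
    while err > delta:
        err *= base_error
        k += 1
    return k

def naive_extra_randomness(m_rounds, coins_per_round, delta):
    """Naive parallel repetition: each of the k - 1 extra copies
    re-samples fresh coins for all m rounds (hypothetical counts)."""
    k = repetitions_needed(delta)
    return (k - 1) * m_rounds * coins_per_round

# Driving soundness 1/2 down to 2^-40 takes 40 repetitions.
print(repetitions_needed(2 ** -40))
```

> The derandomized theorems in the talk replace that multiplicative blowup
> with only $m \cdot O(\log(1/\delta))$ extra random bits overall.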
> ****************
> Title: Privacy, incentives, and truthfulness
> Speaker: David Xiao, LIAFA (France)
> When: Thursday, March 1, 11AM, MCS148
> Abstract:
> Privacy has become an ever more pressing concern as we
> conduct more and more of our lives in public forums such as the
> Internet. One privacy question that has received much study is how a
> database curator may output "sanitized" data that does not reveal too
> much information about any particular individual.  This criterion has
> been formalized as differential privacy, proposed originally by Dwork
> et al. (TCC '06 and ICALP '06), which captures the idea that "the
> presence or absence of any individual's data does not change the
> distribution of the sanitized data by much". This guarantee has been
> interpreted to mean that individuals should be comfortable revealing
> their information, since their participation barely changes the
> output.
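> (Editorial aside, not part of the talk: the textbook example of a
> differentially private mechanism is the Laplace mechanism from the Dwork
> et al. line of work cited above. A minimal sketch for a counting query,
> whose sensitivity is 1, so Laplace(1/epsilon) noise suffices:)

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(rows, predicate, epsilon):
    """A counting query changes by at most 1 when one row is added or
    removed, so adding Laplace(1/epsilon) noise to the true count
    gives epsilon-differential privacy."""
    true_count = sum(1 for r in rows if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

> The "presence or absence of any individual" quoted above corresponds to
> the two neighboring databases in the formal definition: the mechanism's
> output distributions on them differ by at most a factor of e^epsilon.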
> In this talk, we advocate combining the study of privacy in
> conjunction with game theory, since individuals need to be motivated
> by some incentive in order to part with their private information.  We
> focus on the notion of truthfulness, which says that a mechanism
> should be designed so that it is in the individuals' own interest to
> give their true information.  We show that there exist games for which
> differentially private mechanisms, in particular the exponential
> mechanism of McSherry and Talwar (FOCS '07), do not motivate the
> individuals to participate truthfully. On the positive side, we show
> that a wide class of games do admit differentially
> private, truthful, and efficient mechanisms.
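> (Another aside for those unfamiliar with it: the exponential mechanism
> mentioned above selects an outcome r with probability proportional to
> exp(epsilon * quality(data, r) / (2 * sensitivity)). A minimal sketch
> over a finite outcome set; the quality function here is a stand-in:)

```python
import math
import random

def exponential_mechanism(data, outcomes, quality, epsilon, sensitivity=1.0):
    """Sample outcome r with Pr[r] proportional to
    exp(epsilon * quality(data, r) / (2 * sensitivity))."""
    weights = [math.exp(epsilon * quality(data, r) / (2.0 * sensitivity))
               for r in outcomes]
    return random.choices(outcomes, weights=weights)[0]

# With large epsilon the highest-quality outcome wins almost surely;
# with epsilon = 0 the choice is uniform over all outcomes.
```

> The talk's negative result says that even this mechanism, despite its
> privacy guarantee, can fail to make truthful participation rational.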
> Finally, we explore the possibility of tradeoffs between utility and
> privacy.  This is because individuals may be willing to give up some
> privacy if they receive enough utility from a game, and vice versa. We
> show that, under a natural measure of information cost, certain
> differentially private mechanisms such as releasing a differentially
> private histogram or a differentially private synthetic database may
> reveal so much information that individuals would rather suffer the
> consequences of lying than have their information published.
> ***********
> Title: A unified approach to deterministic encryption
> Speaker: Ben Fuller, BU
> Abstract:
> Public Key Encryption is a ubiquitous cryptographic tool.  A
> randomized encryption algorithm is necessary to achieve semantic
> security.  However, several applications require a deterministic
> encryption algorithm, or gain additional functionality from one.
> In this talk, we present a general construction of deterministic
> encryption schemes that unifies prior work and gives novel schemes. We
> focus on a single message instantiation based on any trapdoor function
> that has sufficiently many hardcore bits.  Our work also provides a
> construction that achieves "bounded" multi-message security from lossy
> trapdoor functions through a generalization of the leftover hash
> lemma.  Our single message scheme is enabled by two tools that are of
> broader interest:
> - A weaker and more precise sufficient condition for "semantic"
> security on a high-entropy message distribution. Namely, we show that
> to establish "semantic" security on a distribution M of messages, it
> suffices to establish indistinguishability for all conditional
> distributions M|E, where E is an event of probability at least 1/4.
> (Prior work required indistinguishability on all distributions of a
> given entropy.)
> - A result about computational entropy of conditional distributions.
> Namely, we show that conditioning on an event E of probability p
> reduces the quality of computational entropy by a factor of p and its
> quantity by log_2 1/p.  We also extend our result about computational
> entropy to the average case, which is useful in reasoning about
> leakage-resilient cryptography: leaking \lambda bits of information
> reduces the quality of computational entropy by a factor of 2^\lambda
> and its quantity by \lambda.
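> (A quick sanity check of the "quantity" part of this claim, using plain
> information-theoretic min-entropy rather than the computational entropy
> the talk is actually about: conditioning on an event of probability p
> costs at most log2(1/p) bits, with equality in this example.)

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2(max_x Pr[X = x]) for a dict of probabilities."""
    return -math.log2(max(dist.values()))

def condition(dist, event):
    """Condition the distribution on the event; return (X | E, Pr[E])."""
    p = sum(pr for x, pr in dist.items() if event(x))
    return {x: pr / p for x, pr in dist.items() if event(x)}, p

uniform16 = {x: 1 / 16 for x in range(16)}              # H_inf = 4 bits
conditioned, p = condition(uniform16, lambda x: x < 4)  # Pr[E] = 1/4
# Quantity drops by exactly log2(1/p) = 2 bits here: 4 -> 2.
```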
> Joint work with Adam O'Neill and Leo Reyzin.
> --
> Sharon Goldberg
> Computer Science, Boston University
> http://www.cs.bu.edu/~goldbe

