[NRG] BUsec this week: Robert Lychev (Mon 10AM), Cynthia Dwork (Wed 11AM)

Sharon Goldberg goldbe at cs.bu.edu
Sun Apr 28 12:14:06 EDT 2013


All,

At this week's seminar, Robert Lychev (GATech/BU) will be presenting
his new work that was just accepted to SIGCOMM'13.  Monday 10AM.

On Wednesday at 11AM, we are very excited to host a distinguished
lecture by Cynthia Dwork on her new work on "Fairness Through
Awareness".

Abstracts below.  Hope to see you all there!

Best,
Sharon


BUsec Calendar:  http://www.bu.edu/cs/busec/
BUsec Mailing list:  http://cs-mailman.bu.edu/mailman/listinfo/busec
How to get to BU from MIT:  Try the CT2 bus or MIT's "Boston Daytime
Shuttle" http://web.mit.edu/facilities/transportation/shuttles/daytime_boston.html

*****

Is the Juice Worth the Squeeze?  BGP Security in Partial Deployment
Speaker: Robert Lychev, GATech & BU
Date: Monday April 29, 2013 10AM
MCS137, 111 Cummington St, Boston

Abstract

The Border Gateway Protocol (BGP) sets up routes between the smaller
networks that make up the Internet. However, BGP is vulnerable to
serious problems such as the propagation of bogus routing information
caused by attacks or misconfigurations. The S*BGP protocols (Secure BGP,
secure origin BGP, BGPsec, etc) were proposed to address these issues,
but the transition to S*BGP is expected to be long and slow, with
S*BGP coexisting in “partial deployment” alongside BGP for possibly a
very long time. We use theoretical and experimental analyses to study
the security benefits provided by partially-deployed S*BGP and show
how the complex interactions between S*BGP and insecure BGP can
introduce new vulnerabilities and instabilities into the interdomain
routing system.

Joint work with Michael Schapira and Sharon Goldberg.
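
For those less familiar with interdomain routing, here is a minimal
Python sketch of the kind of interaction at issue (an illustration
only, not code from the paper; the route attributes and values below
are made up): in standard BGP route selection, economics (local
preference) comes before path length, and the question is where, if
anywhere, a route's "secure" bit fits into that ranking.

    # Toy route-selection comparator (illustrative only, not from the paper).
    # Shows how an AS running S*BGP can still end up on an insecure route
    # when security is ranked below local preference in route selection.
    from dataclasses import dataclass

    @dataclass
    class Route:
        dest: str
        local_pref: int      # business-relationship preference (higher wins)
        as_path_len: int     # shorter wins
        secure: bool         # True if the route validated under S*BGP

    def best_route(routes, security_rank):
        """Pick a route; security_rank says where the 'secure' bit sits:
        'first'  -> before local preference
        'second' -> after local preference, before path length
        'third'  -> after both (tie-breaker only)
        """
        def key(r):
            sec = 0 if r.secure else 1   # prefer secure (lower sorts first)
            lp = -r.local_pref           # prefer higher local preference
            plen = r.as_path_len         # prefer shorter AS paths
            if security_rank == "first":
                return (sec, lp, plen)
            if security_rank == "second":
                return (lp, sec, plen)
            return (lp, plen, sec)       # "third"
        return min(routes, key=key)

    routes = [
        Route("d", local_pref=200, as_path_len=4, secure=False),  # e.g. via a customer
        Route("d", local_pref=100, as_path_len=2, secure=True),   # e.g. via a provider
    ]

    for rank in ("first", "second", "third"):
        chosen = best_route(routes, rank)
        print(rank, "->", "secure" if chosen.secure else "insecure")
    # Only when security is ranked first does the secure route win;
    # otherwise economics keeps traffic on the insecure path.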

****
Fairness Through Awareness
Speaker: Cynthia Dwork, Microsoft Research, SVC
Date:  Wednesday May 1, 2013, 11AM
Hariri Institute, 111 Cummington St, Boston

"Why was I not shown this advertisement? Why was my loan application
denied? Why was I rejected from this university?"

This talk will address fairness in classification, where the goal is
to prevent discrimination against protected population subgroups in
classification systems while simultaneously preserving utility for the
party carrying out the classification, for example, the advertiser,
bank, or admissions committee. We argue that a classification is fair
only when individuals who are similar with respect to the
classification task at hand are treated similarly, and this in turn
requires an understanding of the subcultures of the population. Similarity
metrics are applied in many contexts, but these are often hidden. Our
work explicitly exposes the metric, opening it to public debate.
(Joint work with Moritz Hardt, Toniann Pitassi, Omer Reingold, and
Richard Zemel.)
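
(Roughly, the paper's core definition: a randomized classifier M
mapping each individual to a distribution over outcomes is fair if
D(M(x), M(y)) <= d(x, y) for every pair of individuals x and y, where
d is a task-specific similarity metric on individuals and D is a
distance between distributions of outcomes, e.g. statistical distance.
In other words, similar people must receive similar distributions over
outcomes.)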

Our approach provides a (theoretical) method by which an on-line
advertising network can prevent discrimination against protected
groups, even when the advertisers are unknown and untrusted. We
briefly discuss the role of fairness in consumer objections to
behavioral targeting and explain how traditional notions of privacy
miss the mark and fail to address these concerns. (Joint work with Deirdre
Mulligan.)

Finally, we discuss a machine learning instantiation of our approach,
in which the distance metric need not be given but can instead be
learned.  (Joint work with Toniann Pitassi, Yu Wu, and Richard Zemel.)

--
Sharon Goldberg
Computer Science, Boston University
http://www.cs.bu.edu/~goldbe

