New England Machine Learning Hackathon: Hacking Bias in ML

REGISTER HERE
Join us to hack on bias, discrimination, and fairness in machine learning, algorithms, and big data analytics! Our goal is for each team to develop a website that addresses these issues.
Prizes will be awarded at the end of the day. The winning team will receive a Surface Pro 4 for the team leader and an Xbox One S FIFA ’17 bundle for each team member.
Topics are being collected to form teams. Please register and note if you are interested in leading a team. Our teams currently include:
  • Racial Discrimination in Facial Recognition, Genevieve Patterson, Microsoft Research
    Government agencies are rapidly adopting automatic face recognition and matching in law enforcement practices. Unfortunately, commonly used data-driven training algorithms are only as good as the data you feed them. We will explore the discriminatory effects of training deep nets on racially unbalanced collections of face images and how such training data bias can be identified and corrected (a small data-audit sketch follows the topic list).
  • Word Biases, Max Leiserson, Microsoft Research
    When you envision a nurse, a woman most likely pops into your mind. If you imagine an accomplished executive, on the other hand, it’s quite likely you’re thinking about a man. It’s not just you, though. The machine learning algorithms that target ads at us, prune our search results, or sort resumes for recruiters are all plagued by gendered stereotypes (see the embedding probe sketch after the topic list). Background paper: https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings
  • Pre-Trial Fairness, Sam Corbett-Davies, Stanford
    Courts around the country use machine-learned risk scores to guide them in deciding whether defendants should be detained before their trial. There is concern that these scores could be unfair to certain groups, but recent research has shown that different concepts of fairness are mutually exclusive, so policy makers must make trade-offs (see the synthetic risk-score sketch after the topic list). In this project we’ll develop an interactive webpage to explore the fairness trade-offs inherent in risk assessments, similar to this work from Google studying fictitious loans. Background paper: https://5harad.com/papers/fairness.pdf
  • Political Influence: Who Has Political Power and How Do You Measure It?, Weiwei Pan, Harvard Institute for Applied Computational Science
    “The unequal distribution of power among the members of a political system is one of the most pervasive facts of political life.” – S. J. Brams (Measuring the Concentration of Power in Political Systems, 1968).
  • Equity in Higher Education and the Future of Work, Sergio Marrero, Co-Founder, Caila
    Over 60% of the U.S. workforce and 93% of the world’s population do not have a post-secondary degree, while 78% of jobs in the U.S. will require training beyond high school by 2018. As the cost of college rises, ‘bundled’ higher education has become increasingly inaccessible. There is a multitude of new online and in-person courses, but employers are biased against online and incremental training short of a degree because it is unclear how to assess its relevance to jobs. In this project we will develop a webpage that matches bundles of courses directly to jobs, creating alternative pathways to employment that are accessible to all.
  • Other topics to be added …
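For the facial-recognition topic, here is a minimal sketch of the kind of training-data audit a team might start from: count how each demographic group is represented and derive inverse-frequency loss weights. The group names and counts below are made up for illustration.

```python
# Hypothetical audit of demographic balance in a face-image training set,
# followed by inverse-frequency reweighting so each group contributes
# equally to the training loss. All labels and counts are illustrative.
from collections import Counter

labels = ["group_a"] * 8000 + ["group_b"] * 1500 + ["group_c"] * 500  # per-image annotations

counts = Counter(labels)
total = len(labels)
print("share of training data:", {g: round(n / total, 3) for g, n in counts.items()})

# Weight samples from under-represented groups more heavily.
weights = {g: total / (len(counts) * n) for g, n in counts.items()}
print("per-sample loss weights:", {g: round(w, 2) for g, w in weights.items()})
```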
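For the word-biases topic, a small sketch of how a team might probe a pretrained embedding for gendered associations, in the spirit of the paper linked above. It assumes gensim and a locally downloaded copy of the GoogleNews word2vec vectors; the file name and word list are assumptions.

```python
# Probe pretrained word embeddings for gender associations (illustrative sketch).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Complete the analogy "man : programmer :: woman : ?" via vector arithmetic.
print(vectors.most_similar(positive=["woman", "programmer"], negative=["man"], topn=5))

# Compare how strongly occupation words lean toward "she" versus "he".
for job in ["nurse", "engineer", "receptionist", "architect"]:
    gap = vectors.similarity(job, "she") - vectors.similarity(job, "he")
    print(f"{job:>14s}  she-minus-he similarity: {gap:+.3f}")
```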
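For the pre-trial fairness topic, a synthetic risk-score sketch of why the fairness criteria conflict: a score that is perfectly calibrated by construction can still produce different false positive rates for two groups with different base rates. All distributions and numbers are invented for illustration.

```python
# Synthetic illustration of the calibration vs. error-rate tension in risk scores.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, a, b):
    """Each person has a true reoffense probability p ~ Beta(a, b); the score
    reports p exactly, so it is perfectly calibrated by construction."""
    p = rng.beta(a, b, n)
    reoffend = rng.random(n) < p
    return reoffend, p

threshold = 0.5  # detain anyone whose score is at least 0.5
for name, (a, b) in [("Group A", (2, 5)), ("Group B", (4, 3))]:
    reoffend, score = simulate(200_000, a, b)
    detained = score >= threshold
    fpr = detained[~reoffend].mean()     # non-reoffenders who are detained
    fnr = (~detained)[reoffend].mean()   # reoffenders who are released
    print(f"{name}: base rate {reoffend.mean():.2f}, FPR {fpr:.2f}, FNR {fnr:.2f}")
```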

Date

May 11, 2017

Time

All Day

Location

Microsoft New England R&D
1 Memorial Drive, 1st floor