In his influential textbook on artificial intelligence, Stuart Russell poses the question: 'What if we succeed in creating intelligent systems? How do we retain control over the systems we built and make sure they do exactly what we want?' As AI systems grow more powerful, resolving this question becomes increasingly important. And yet, compared to the effort spent making these systems more competent in the first place, it has been highly neglected by the AI research community. In this talk, Rohin Shah from the Center for Human-Compatible AI at UC Berkeley will give a short introduction to the problem of aligning powerful AI systems with human values and intentions. This will be followed by a live Q&A and discussions in smaller groups, facilitated by more experienced students.
The event will be held on Thursday, September 24th, from 18-19:30 CEST (Amsterdam time) via Zoom.
The only prerequisites are an interest in the topic and a basic understanding of current machine learning systems. First-year master's students are encouraged to join.
If you'd like to join, please fill in this very short registration form so that we can anticipate the number of participants.
We only have 30 spots, since we want to keep the discussion groups small, so please register early if you want to attend.
This talk is jointly organized by safety-interested AI and machine learning students at UvA and ETH Zurich.