Program – RuSSIR 2017
19.04.2017

Program

Keynotes by Ruslan Salakhutdinov (Carnegie Mellon University, USA) and Jaap Kamps (University of Amsterdam, The Netherlands)

Evaluating Personal Assistants on Mobile Devices

Julia Kiseleva (University of Amsterdam)

The iPhone was introduced only a decade ago, in 2007, but has fundamentally changed the way we interact with online information. Mobile devices differ radically from classic command-based and point-and-click user interfaces, now allowing for gesture-based interaction using fine-grained touch and swipe signals. With the rapid growth of voice-controlled intelligent personal assistants on mobile devices, such as Microsoft’s Cortana, Google Now, and Apple’s Siri, mobile devices have become personal, allowing us to be online all the time and assisting us in any task, both at work and in our daily lives, making context a crucial factor to consider. Mobile usage now exceeds desktop usage and is still growing at a rapid rate, yet our main ways of training and evaluating personal assistants are still based on (and framed in) classical desktop interactions, focusing on explicit queries, clicks, and dwell time. However, modern user interaction with mobile devices is radically different due to touch screens with gesture- and voice-based control and the varying context of use, e.g., in a car or on a bike, often invalidating the assumptions underlying today’s user satisfaction evaluation. There is an urgent need to understand voice- and gesture-based interaction, taking all interaction signals and context into account in appropriate ways. We propose a research agenda for developing methods to evaluate and improve context-aware user satisfaction with mobile interactions using gesture-based signals at scale.

Design and Implementation of User Experiments in Information Retrieval

Ying-Hsang Liu (Charles Sturt University)

This course is designed to provide students with an overview of interactive information retrieval (IIR) evaluation studies, with particular reference to the modelling of users, tasks and contexts, methods for IIR studies, and related experimental design issues. The design and evaluation of search user interfaces will be emphasised.

An Interactive “View” of Probabilistic Models for Text Retrieval, Classification, Quantification

Giorgio Maria Di Nunzio (Dept. of Information Engineering – University of Padua)

In this course, we will present an overview of probabilistic models for IR (Boolean, Binary Independence Model, BM25, Language Models) by means of interactive visualizations that allow for an intuitive understanding of how these probabilistic models work and how we can improve them. We will start from the basic assumptions made by classical probabilistic models, then derive, step by step, a fully functional probabilistic Bayesian framework. We will show how to interpret these models in an interactive two-dimensional space, called Likelihood Spaces, by means of the Shiny R package. There will be lots of entertaining hands-on activities showing real cases of text retrieval, classification and quantification that use standard IR datasets. Finally, we will also discuss why some models are usually better than others (both intuitively and formally) and present open problems for future research.
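To give a flavour of the models covered, here is a minimal sketch of the BM25 term-weighting formula mentioned in the abstract. The smoothed-IDF variant and the default values for the parameters k1 and b are common choices, not necessarily the exact formulation used in the course:

```python
import math

def bm25_score(tf, df, doc_len, avgdl, n_docs, k1=1.2, b=0.75):
    """BM25 weight of a single query term in one document.

    tf: term frequency in the document
    df: number of documents in the collection containing the term
    doc_len / avgdl: document length and average document length
    n_docs: total number of documents in the collection
    """
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)  # smoothed IDF
    # Saturating term frequency, normalized by document length.
    norm_tf = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avgdl))
    return idf * norm_tf

# The same term frequency scores higher in a shorter-than-average document.
short_doc = bm25_score(tf=3, df=5, doc_len=50, avgdl=100, n_docs=1000)
long_doc = bm25_score(tf=3, df=5, doc_len=200, avgdl=100, n_docs=1000)
```

A full query score would sum this weight over all query terms; the interactive visualizations in the course explore how varying k1 and b changes the ranking.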

Deep Learning for Language and Vision

Efstratios Gavves (University of Amsterdam)

Deep learning is primarily a study of multi-layered neural networks, spanning a great range of model architectures. The course will study the theory of deep learning, namely of modern, multi-layered neural networks trained on big data. The course will have a particular focus on computer vision and language modeling, which are perhaps the most recognizable and impressive applications of deep learning. The course will be composed of a theoretical and a practical part. At the end of the course, the students will have obtained the following theoretical and practical skills.
In the theory part, students will be taught the fundamentals of deep learning and the latest, state-of-the-art models that empower popular applications such as Google Photos, Google Translate, Google Text-to-Speech, Google Brain, Facebook Friend Finder, self-driving cars, self-learning robots, AlphaGo, etc. Students will learn to re-implement similar models, as well as build novel ones for new tasks. We also plan to have experts from the field present their views on the subject during the theory sessions.
During the practicals, the students will implement the core versions of some of the aforementioned applications. The course will focus on state-of-the-art programming frameworks. Students will learn what the most relevant and frequent practical problems are, and how they can be addressed in practice.

Visual Retrieval and Mining

Stefan Rueger (The Open University’s Knowledge Media Institute)

At its very core, visual retrieval means the process of searching for images or videos. The intriguing bit here is that the query itself can be visual: for example, you take a picture of a landmark with your mobile phone, which then finds a similar picture in a repository and tells you more about the building, thus linking reality to databases.
This course goes a bit further by examining the full matrix of a variety of query modes versus document types. How do you retrieve all clips of a Formula 1 race in which a logo of a sponsor appears? How do you find a particular episode of the Simpsons, given sketches of two protagonists? We will discuss underlying techniques and common approaches to facilitate visual search engines: metadata-driven retrieval; piggy-back text retrieval; automated image annotation; content-based retrieval.
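At its simplest, the content-based retrieval mentioned above reduces to nearest-neighbour search over image feature vectors. The toy three-dimensional "descriptors" and file names below are purely illustrative; real systems use high-dimensional features such as CNN activations or local-descriptor histograms:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, repository, top_k=2):
    """Rank repository images by visual similarity to the query vector."""
    ranked = sorted(repository.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Hypothetical repository of pre-extracted image descriptors.
repo = {
    "landmark_a.jpg": [0.9, 0.1, 0.0],
    "landmark_b.jpg": [0.1, 0.9, 0.1],
    "beach.jpg":      [0.0, 0.2, 0.9],
}
results = retrieve([0.8, 0.2, 0.1], repo, top_k=1)  # closest match first
```

The other three approaches in the list (metadata-driven, piggy-back text retrieval, automated annotation) sidestep this vector matching by attaching text to images and reusing text search.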

Deep Learning and Conversational AI

Mikhail Burtsev and Valentin Malykh (MIPT)

The course gives an overview of the current state of the art in deep learning and its applications to natural language processing, understanding, and generation. Lectures will cover convolutional and recurrent neural network models applied to NLP, such as word2vec, seq2seq, and the encoder-decoder architecture. The current state of the field will be illustrated by a detailed presentation of Google’s Smart Reply and Neural Machine Translation systems. An introduction to deep reinforcement learning, which is currently on the rise in the conversational AI field, will be provided through the case of a deep RL dialogue agent.
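As a small illustration of the word2vec model mentioned above, here is how skip-gram training pairs are typically extracted from a sentence before any network training takes place. The function name and window size are illustrative, not taken from any particular implementation:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs, as in word2vec's skip-gram model."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a word is not its own context
                pairs.append((center, tokens[j]))
    return pairs

sentence = "neural networks learn word representations".split()
pairs = skipgram_pairs(sentence, window=1)
```

The skip-gram objective then trains word embeddings to predict the context word from the center word over millions of such pairs; seq2seq models build on similar embeddings with an encoder-decoder architecture.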

Neural Networks for Information Retrieval

Tom Kenter, Alexey Borisov, Christophe Van Gysel, Mostafa Dehghani, Maarten de Rijke and Bhaskar Mitra (University of Amsterdam, Yandex, and Microsoft Bing)

Machine learning plays a role in many aspects of modern IR systems, and deep learning is applied in all of them. The fast pace of modern-day research has given rise to many different approaches for many different IR problems. The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. Additionally, it is interesting to see what key insights into IR problems the new technologies have given us. The aim of this full-day tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR research. It covers the key architectures in use, as well as the most promising future directions.