Tutorial 1

Social Media Data Analytics for Disaster Management

Sanjay Madria, Missouri University of Science and Technology, Rolla, USA

A disaster can refer to the effects of natural hazards such as hurricanes, floods, earthquakes, tornadoes, and heatwaves. Every disaster-management activity, such as taking precautions, managing evacuations, and running rescue missions, demands accurate and up-to-date information to enable a quick, easy, and cost-effective process and hence reduce the loss of lives and property. Social media has emerged as a valuable supplementary tool in this context, providing real-time data that can assist authorities in developing prompt and effective response strategies. However, despite its potential, utilizing social media data for disaster management presents several challenges. It requires a multi-faceted approach that leverages deep learning and natural language processing (NLP) techniques to tackle the complexities of contextual information and the relevance of social media content. The tutorial will offer actionable insights that significantly enhance situational awareness, decision-making, and resource allocation during disasters. The tutorial will focus on the following questions:
i) How can we detect, classify, and analyze hateful and offensive emotions during large-scale events based on social media data such as tweets?
ii) How can deep learning models improve sentiment analysis by identifying low-level emotions in major events?
iii) How can fine-grained data enhance crisis communication classification and decision-making in disaster response?
iv) How can we assess information relevance and urgency to prioritize emergency responses?
v) How can we identify and classify first responders during emergencies?
vi) How can we develop an automated fact-checking system to verify disaster claims and combat misinformation?
vii) How can unsupervised learning be used to extract key phrases and detect critical sub-events from unstructured disaster data?


Tutorial 2

Enhancing Reproducibility and Replicability in Information Retrieval:

A Path Towards Scientific Integrity and Effective Research

Antonio Ferrara1, Claudio Pomo1, and Nicola Tonellotto2

1Politecnico di Bari, Bari, Italy, 2University of Pisa, Pisa, Italy

While Information Retrieval (IR) and Recommender Systems (RecSys) have made significant strides, reproducibility and replicability remain elusive due to methodological inconsistencies and pressure for high-impact results. Despite reproducibility tracks, guidelines, and frameworks, inconsistent application and limited motivation for rigorous reproducibility hinder progress. This tutorial introduces tools and methodologies for reproducibility in IR and RecSys research. It focuses on foundational concepts, such as selecting appropriate baselines and designing replicable experimental pipelines, and on practical case studies demonstrating the impact of minor experimental variations. This session equips researchers with actionable strategies for producing robust, transparent studies, fostering scientific integrity within the IR and RecSys communities and supporting more reliable and impactful research advancements.


Tutorial 3

Creating Accessible Digital Content and Applications

Ombretta Gaggi, University of Padua, Padua, Italy

Access to digital content and web pages is a fundamental right for all citizens and is crucial for enabling people with disabilities to participate fully in society and the workforce. Unfortunately, imposing accessibility by law is insufficient, as many websites still present significant accessibility issues. Moreover, there is a lack of awareness of accessibility, even among designers and developers. This tutorial teaches basic concepts for creating accessible digital content, such as web pages, papers, and presentations.
