Language Models and Neuro-Symbolic Learning

Pasquale Minervini, University of Edinburgh

Registration form:

Register here

Course summary:

This mini-course provides a comprehensive exploration of recent advances in Natural Language Processing (NLP), with a particular focus on the development and implications of large language models (LLMs). The curriculum begins by tracing the evolution of NLP technologies, from early neural models to the transformative impact of deep learning architectures such as Transformers. A significant portion of the course is dedicated to large language models, detailing their design, their training methodologies, and the emerging paradigm of scaling laws in AI. The course then covers two popular strategies for aligning LLM outputs with human values and preferences: Instruction Fine-Tuning and Reinforcement Learning from Human Feedback (RLHF). Students will learn about the theoretical underpinnings of RLHF, its implementation challenges, and its role in improving the reliability and ethical grounding of model responses. Finally, the course covers Retrieval-Augmented Generation (RAG), which improves the relevance and factual accuracy of generated text by conditioning the model on dynamically retrieved content.
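As a flavour of the RAG material, the sketch below shows the retrieve-then-generate loop in miniature: a toy bag-of-words retriever scores a small corpus against a query, and the top passages are packed into a prompt for a downstream language model. The corpus, the scoring scheme, and the prompt format are illustrative assumptions rather than the course's reference implementation; practical systems use learned dense retrievers and a real LLM in place of these stand-ins.

# A minimal sketch of the retrieve-then-generate loop behind RAG.
# The corpus, query, and prompt format are illustrative placeholders.

from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a stand-in for a learned dense encoder."""
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

corpus = [
    "Edinburgh is the capital city of Scotland.",
    "Transformers process token sequences with self-attention.",
    "RLHF fine-tunes a model against a learned reward function.",
]

def retrieve(query: str, k: int = 2) -> list:
    """Return the k passages most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def rag_prompt(query: str) -> str:
    """Condition generation on retrieved evidence rather than on parameters alone."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(rag_prompt("What is the capital of Scotland?"))

The point of the exercise is the division of labour: retrieval keeps the evidence fresh and inspectable, while the generator only has to read it, which is what makes RAG attractive for knowledge-intensive tasks.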

About the lecturer:

Pasquale is a Lecturer in Natural Language Processing at the School of Informatics, University of Edinburgh. Previously, he was a Senior Research Fellow at UCL (2017-2022); a postdoc at the INSIGHT Centre for Data Analytics, Ireland (2016); and a postdoc at the University of Bari, Italy (2015). Pasquale's research interests are in NLP and ML, focusing on relational learning and learning from graph-structured data, solving knowledge-intensive tasks, hybrid neuro-symbolic models, compositional generalisation, and designing data-efficient and robust deep learning models. He has published 100+ peer-reviewed papers at top-tier AI conferences, received multiple awards (including an Outstanding Paper Award at ICLR 2021), and delivered several tutorials on Explainable AI and relational learning (including four AAAI tutorials). On behalf of the University of Edinburgh and UCL, he is the PI of a seven-figure EU Horizon 2020 research grant on applications of relational learning to cancer research, an Edinburgh Laboratory for Integrated Artificial Intelligence (ELIAI) grant on learning neural models with structured latent representations, and multiple industry grants. His team won two of the three tracks of the Efficient Open-Domain Question Answering Challenge at NeurIPS 2020. For more information, see his website: www.neuralnoise.com.

Location and schedule:
Wednesday, June 12 in 3180
16:15 - 17:45 Lecture
Thursday, June 13 in 3180
14:15 - 15:45 Lecture
16:15 - 17:45 Tutorial
Friday, June 14 in 3180
14:15 - 15:45 Lecture
16:15 - 17:45 Tutorial