Archives

NLP & LLMs: Harnessing the power of language for economic and related research

Do you want to learn how to make use of text data for your research?

This in-person workshop is ideal for researchers who want to explore the potential of Natural Language Processing (NLP) and Large Language Models (LLMs) for economic and related research.

You will receive a broad overview of the theoretical foundations of NLP and its practical concepts. The workshop closes with a Coding Lab in which you apply this knowledge to real-world problems.

Prerequisites

A basic understanding of Machine Learning and Deep Learning is required.

Basic Python skills are required. Familiarity with common modules for text processing and deep learning frameworks is recommended.

Schedule

09:30 – 11:00 NLP Basics (1-3)

11:15 – 12:45 Neural Nets & Transfer Learning (4-8)

13:45 – 15:45 Generative Models (9-End)

16:00 – 17:00 Coding Lab

Topics

1 Learning Paradigms

Understand the different learning paradigms, Relate the type of learning to the amount of labeled data required

2 NLP tasks

Understand the different types of tasks (low- vs. high-level), Purely linguistic tasks vs. more general classification tasks

3 Word Embeddings

Understand what word embeddings are, Learn the main methods for creating them

4 Recurrent Neural Networks

Understand the recurrent structure of RNNs, Learn the different types of RNNs

5 Attention

Understand the attention mechanism, Learn the different types of attention, The Transformer / Self-Attention

6 The BERT Architecture

Use of the transformer encoder in this model, Understand the pre-training, Gain an understanding of the fine-tuning procedure, Differences between token- and sequence classification

7 BERTology

Understand how impactful this architecture was, See how it changed research in the field, Glimpse into BERTology

8 Model distillation

Soft vs. hard targets, Understand how distillation works, DistilBERT, Other approaches towards compression

9 Towards a unified task format  

Developments of the post-BERT era, Reformulating classification tasks, Multi-task learning, Fine-tuning on task prefixes

10 GPT series

Use of the transformer decoder, Input modifications (and how they are useful), Concept of prompting
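The workshop does not prescribe a specific toolkit, but as a rough illustration of the kind of exercise the Coding Lab could involve, here is a minimal sketch assuming the Hugging Face transformers library as one possible toolchain. The model names and prompts are illustrative placeholders rather than workshop materials; the sketch touches on sequence classification with a fine-tuned encoder model (Topic 6) and prompting a generative decoder model (Topic 10).

    # Minimal sketch, assuming the Hugging Face "transformers" library is installed;
    # model names and prompts are illustrative, not part of the workshop materials.
    from transformers import pipeline

    # Sequence classification with a pre-trained, fine-tuned encoder model (cf. Topic 6).
    classifier = pipeline("sentiment-analysis")  # loads a default fine-tuned model
    print(classifier("Text data is surprisingly useful for economic research."))

    # Prompting a generative, decoder-only model (cf. Topic 10).
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Natural language processing helps economists to", max_new_tokens=20))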

Turning PDFs into Research Data

Do you ever feel that the data you need for your research, such as company reports or building plans, is accessible but not in a convenient table?

Perhaps the information you need is spread out across many different documents?

If only we could read and extract structured data from thousands of written documents. 

In this course, we explore how to accomplish this task by combining web scraping, Optical Character Recognition (OCR), and Natural Language Processing (NLP). Over four weeks, online lessons and interactive sessions teach you the fundamentals of these key technologies.

Topics

  • Methods for extracting text and files from websites using tools such as Selenium, and how to avoid common pitfalls (a short code sketch of these extraction steps follows this list).
  • Methods for extracting text from images, such as scans of written documents.
  • Technologies that can help automate data extraction from harvested text, and a critical review of common data quality issues.
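As referenced above, here is a minimal sketch of the first two extraction steps, assuming Selenium with a Chrome driver plus the Pillow and pytesseract packages are available; the URL and image file are placeholders, not course materials.

    # Minimal sketch: harvest PDF links with Selenium, then OCR a scanned page.
    # Assumes Chrome/chromedriver, Pillow, and pytesseract (Tesseract OCR) are installed;
    # the URL and image file below are placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from PIL import Image
    import pytesseract

    # Step 1: collect links to PDF documents from a web page.
    driver = webdriver.Chrome()
    driver.get("https://example.org/reports")  # placeholder URL
    pdf_links = [a.get_attribute("href")
                 for a in driver.find_elements(By.TAG_NAME, "a")
                 if (a.get_attribute("href") or "").endswith(".pdf")]
    driver.quit()

    # Step 2: extract raw text from a scanned page image with Tesseract OCR.
    text = pytesseract.image_to_string(Image.open("scanned_page.png"))  # placeholder file
    print(len(pdf_links), text[:200])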

Format

This is an online course. 

  • Week 1: Watch pre-prepared video lectures covering relevant theory and demonstrations of example exercises; the topic is web scraping and OCR (~45 min). Interactive online session (~60 min).
  • Week 2: Apply last week’s lessons to the example coding exercise or your own project (~30 min). Interactive online session (~60 min).
  • Week 3: Watch pre-prepared video lectures covering relevant theory and demonstrations of example exercises; the topic is NLP and common data extraction issues (~30 min). Interactive online session (~60 min).
  • Week 4: Apply last week’s lessons to the example coding exercise or your own project (~30 min). Interactive online session (~60 min).

Weekly Meetings

The course includes four live online meetings, in which you will discuss the week’s contents with the instructor and fellow participants:

Meeting 1: Aug 27, 2024, 4:30pm – 5:30pm CEST
Meeting 2: Sep 03, 2024, 4:30pm – 5:30pm CEST
Meeting 3: Sep 10, 2024, 4:30pm – 5:30pm CEST
Meeting 4: Sep 17, 2024, 4:30pm – 5:30pm CEST

Prerequisites

  • Basic programming knowledge (R, Python, …)
    • Note that the course will be taught in Python, but if you only know R, that is still fine! The code examples are simple and run entirely on Google Colab, so you will not have to install anything. This course is a good opportunity to try Python for the first time, and you can also take the self-paced BERD introduction to Python course.
  • Willingness to learn new technical skills
  • A Google Account

About the Instructor

John ‘Jack’ Collins is a PhD student in Sociology at the Graduate School of Economic and Social Sciences. He holds a Bachelor of Sociology with Honours from the Australian National University and a Master’s degree in Data Science from James Cook University. His Master’s project concerned predictive modelling of student attrition from sub-tertiary courses in Australia, and during his Master’s studies he also assisted in research projects on social attitudes and voting behaviour in Australia. Before starting his PhD, Jack was a Senior IT Consultant specialising in data engineering, analytics and software development. He is interested in applying data science and IT to sociological research, particularly with regard to machine learning, analytics, and web applications.