**CPSC 532Y / 538L: Causal Machine Learning**
**Instructor**: [Mathias Lécuyer](https://mathias.lecuyer.me)
**Schedule**: MW 11:00-12:20 -- Term 1 (September - December 2023)
**Location**: SPPH 143 (the class is in person)
**Office Hours**: My general office hours are Mon. 13-15. I should be in my office, but if you want to make sure that I'm around and that you have a spot, you can book it [here](https://freebusy.io/lecuyer/15min) (priority to booked students).
**Logistics**: Logistics and discussions will happen on Piazza [^piazza]. You can [login through Canvas here](https://canvas.ubc.ca/courses/123462), which also shows the access code. **If you want to audit or attend the first sessions, please feel free to join.** For now, every UBC student should be able to access the Canvas page.
**Previous offerings**: while anything might change, this offering will be similar to the previous one: [2022w1](previous_offerings/2022w1).
Objectives
==========
This class has two main educational goals:
1. Cover the basics of causal inference, including:
1. The Potential Outcomes framework ("Rubin model")
2. The Causal Graph framework ("Pearl model")
3. Goals of causal inference, and core techniques
2. Introduce recent developments using Machine Learning (ML) to extend the core techniques, and address some of their limitations. Topics include:
1. Using ML to estimate heterogeneous causal effects, both in regression settings (meta-learners, random forests, orthogonal learning…) and in IV settings (DeepIV, method of moments).
2. Using ML to learn confounders.
3. Using ML for sensitivity analysis.
4. Causal discovery
5. Link with RL.
We will cover these topics through a mix of lectures, paper presentations by students, and in-class discussions of the material. **This is a graduate seminar, and all students are expected to be actively involved during lectures and paper presentations. The grading will reflect these expectations.**
Prerequisites
=============
While there are no formal prerequisites, so anyone can take the class, a basic background in probability, statistics, and Machine Learning is assumed. The assignments and mini-project will also require the ability to program in Python.
A motivated student should be able to learn any missing background on their own as the class progresses (and students with missing background knowledge are expected to do so).
Evaluation
==========
Your course grade will be based on the following breakdown:
- In class participation: 25% + 5% (for going above and beyond)
- Paper Discussion Lead: 20%
- Assignment: 15% + 15%
- Mini-project: 15% + 5% (for going above and beyond)
Paper discussion
----------------
Each student will be Discussion Lead for one paper (2 or 3 leads per paper). [Register here for a date](https://freebusy.io/lecuyer/paper-discussion-lead).
Each **Discussion Lead** will present the paper for 15 minutes (either splitting the content, or each giving a full presentation), covering:
- The causal inference questions and why they are important. Can you frame them in the different causal frameworks?
- Notations, assumptions, quick summary of the approach.
- Data setting they focus on / minimal interesting example. Work through it.
The rest of the session is for discussing the paper; discussion leads animate and structure the discussion. The presentation will be graded on content, clarity, and delivery, as well as on the quality of the follow-up discussion (presenters are expected to encourage and manage the discussion).
The reading is **required for all students**.
Students who are not presenting should ask questions and participate in the discussion, and this participation will count towards the general participation grade. **Preparing topics of discussion, comments, and questions is part of the requirements** for participation.
Assignments and project
-----------------------
There will be two open-ended assignments and a mini-project.
For the first assignment, each student will submit a **short** explanation as a Python notebook, and present their code and results to me in a short meeting scheduled for this purpose.
The second assignment and project can be done in groups.
There is a progression in which the assignments (and potentially the project) build on each other. At a high level, the first assignment requires you to construct a data distribution to showcase challenges when measuring causal quantities, and how to address those challenges. In the second assignment, you will extend this approach to study causal ML models.
For mini-projects, you can either apply what we saw in class to a research project from your area, or work on projects I will propose that use similar approaches to the assignments to tackle open questions in causal ML.
**_Assignment 1:_**
1. Create a generative data model that illustrates the challenges of causal inference (i.e. when observational quantities are different from causal ones).
2. Express these challenges in the frameworks of Potential Outcomes and Structural Causal Models (using the assumptions we saw in class).
3. Empirically demonstrate these challenges (estimation errors).
4. Give at least one condition sufficient for identifiability (expressed with the proper formalism) and demonstrate it empirically.
Keep it simple! Think about it as replicating our running example in class.
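To make the starting point concrete, here is a minimal sketch of the kind of generative data model Assignment 1 asks for (all parameter values are illustrative choices, not a prescribed solution): a binary confounder drives both treatment assignment and the potential outcomes, so the naive observational contrast differs from the true ATE.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder Z drives both treatment assignment and the potential outcomes.
z = rng.binomial(1, 0.5, n)
y0 = 2.0 * z + rng.normal(0, 1, n)               # potential outcome under T=0
y1 = y0 + 1.0                                    # potential outcome under T=1 (ATE = 1)
t = rng.binomial(1, np.where(z == 1, 0.8, 0.2))  # treated more often when Z=1
y = np.where(t == 1, y1, y0)                     # observed outcome

# The true ATE needs both potential outcomes, which only the simulator sees.
true_ate = (y1 - y0).mean()
# The naive observational contrast is biased by the confounder (≈ 2.2 here).
naive = y[t == 1].mean() - y[t == 0].mean()
print(true_ate, naive)
```

The gap between the two printed numbers is exactly the "observational vs. causal" challenge the assignment asks you to construct and then resolve.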
**Presented to me on the week of Oct 16 (dates TBD).** You should start thinking about it!
**_Assignment 2:_**
Using a generative data model, showcase how ML based approaches to learn causal quantities can fail:
1. When estimating the ATE based on counterfactual predictions.
2. When fitting a model for the CATE.
(1) Create a generative model that shows the issues empirically. (2) Explain the failures intuitively and/or theoretically. (3) Show assumptions under which these challenges can be addressed and fit a model accordingly.
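As a hedged illustration of failure mode (2), here is one way a fitted CATE model can fail even without confounding: a T-learner whose outcome models are misspecified (linear fits for a quadratic effect). The data model, degrees, and `fit_poly` helper are all hypothetical choices for this sketch, not the required setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.uniform(-2, 2, n)
t = rng.binomial(1, 0.5, n)                 # randomized: no confounding here
y = x**2 * t + x + rng.normal(0, 0.1, n)    # true CATE(x) = x**2

def fit_poly(x, y, degree):
    # Least-squares polynomial fit; returns a prediction function.
    coef = np.polyfit(x, y, degree)
    return lambda x: np.polyval(coef, x)

def t_learner_cate(degree):
    # T-learner: fit separate outcome models on treated and control units.
    mu1 = fit_poly(x[t == 1], y[t == 1], degree)
    mu0 = fit_poly(x[t == 0], y[t == 0], degree)
    return mu1(x) - mu0(x)

true_cate = x**2
mse_linear = np.mean((t_learner_cate(1) - true_cate) ** 2)  # misspecified model
mse_quad = np.mean((t_learner_cate(2) - true_cate) ** 2)    # correct model class
print(mse_linear, mse_quad)
```

The linear T-learner has large CATE error because no linear function can represent `x**2`; switching to the correct model class removes the bias, mirroring step (3) of the assignment.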
Syllabus
========
See [what we covered last term](previous_offerings/2022w1/#schedule).
Schedule
--------
6 Sep 2023: Introduction: Why Causal Inference?
11 Sep 2023: The Potential Outcomes framework (lecture)
Definitions and notations.
13 Sep 2023: The Potential Outcomes framework (lecture)
Concept: Identifiability. Assumptions: (conditional) ignorability. Techniques and intuition: randomization.
18 Sep 2023: The Potential Outcomes framework (lecture)
Identifiability without randomization, estimation, heterogeneity. Confidence intervals.
20 Sep 2023: Structural Causal Models
Definitions: DAGs, SCMs, Do().
25 Sep 2023: Structural Causal Models
Definitions: DAG structures.
27 Sep 2023: Structural Causal Models
2 Oct 2023: **No class** (National Day for Truth and Reconciliation)
4 Oct 2023: Structural Causal Models
Backdoor criterion, do-calculus (rules 1 and 2).
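As a companion to this session, here is a minimal worked example (with illustrative, hand-picked probabilities) of the backdoor adjustment formula on a discrete SCM where Z → T, Z → Y, and T → Y, so Z satisfies the backdoor criterion relative to (T, Y):

```python
from itertools import product

# Hypothetical discrete SCM: Z -> T, Z -> Y, T -> Y.
p_z = {0: 0.5, 1: 0.5}                           # P(Z=z)
p_t_given_z = {0: 0.2, 1: 0.8}                   # P(T=1 | Z=z)
p_y_given_tz = {(t, z): 0.1 + 0.3 * t + 0.4 * z  # P(Y=1 | T=t, Z=z)
                for t, z in product((0, 1), (0, 1))}

# Backdoor adjustment: P(Y=1 | do(T=1)) = sum_z P(Y=1 | T=1, z) P(z)
p_do = sum(p_y_given_tz[(1, z)] * p_z[z] for z in (0, 1))

# Observational conditioning: P(Y=1 | T=1) = sum_z P(Y=1 | T=1, z) P(z | T=1)
p_t1 = sum(p_t_given_z[z] * p_z[z] for z in (0, 1))
p_obs = sum(p_y_given_tz[(1, z)] * p_t_given_z[z] * p_z[z] / p_t1
            for z in (0, 1))

print(p_do, p_obs)  # they differ because Z opens a backdoor path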
9 Oct 2023: **No class** (Thanksgiving)
11 Oct 2023: Structural Causal Models
Rule 3, Using do-Calculus, Counterfactuals
12 Oct 2023: (Makeup Monday) Structural Causal Models
Ladder of Causation, PNS
16 Oct 2023: Regression (lecture)
Basic facts, interpretation(s) as an estimator for the ATE (using potential outcomes), confidence intervals.
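One of the "basic facts" from this lecture can be checked numerically: in a randomized experiment, the OLS coefficient on a binary treatment equals the difference in means, and both estimate the ATE. A minimal sketch (simulated data, illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
t = rng.binomial(1, 0.5, n)              # randomized treatment
y = 2.0 + 1.5 * t + rng.normal(0, 1, n)  # true ATE = 1.5

# OLS of Y on [1, T]: the coefficient on T is the difference in means.
X = np.column_stack([np.ones(n), t])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

diff_means = y[t == 1].mean() - y[t == 0].mean()
print(beta[1], diff_means)  # identical up to floating point
```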
18 Oct 2023: IPS (lecture)
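A minimal sketch of the inverse propensity scoring (IPS/IPW) estimator this lecture covers, on simulated data with an observed confounder and a known propensity (all numeric choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
z = rng.uniform(0, 1, n)                     # observed confounder
e = 0.2 + 0.6 * z                            # propensity P(T=1 | Z)
t = rng.binomial(1, e)
y = 1.0 * t + 3.0 * z + rng.normal(0, 1, n)  # true ATE = 1.0

# The naive contrast is biased because high-Z units are treated more often.
naive = y[t == 1].mean() - y[t == 0].mean()

# Inverse propensity weighting with the (here, known) propensity recovers the ATE:
# E[TY/e(Z)] = E[Y(1)] and E[(1-T)Y/(1-e(Z))] = E[Y(0)].
ipw = np.mean(t * y / e - (1 - t) * y / (1 - e))

print(naive, ipw)  # naive is biased upward; ipw ≈ 1.0
```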
20 Oct 2023: ⚠️ Assignment Presentations
Present your assignment: 5min presentation, 5min questions. Register [here](https://freebusy.io/lecuyer/assigment-presentation).
23 Oct 2023: Review + Projects
We will split the time in two for:
- A review session after assignment.
- A description of potential projects.
25 Oct 2023: HW2 and Project groups
You will have the session to form groups for HW2 and the projects, and to discuss them and start working.
By the end of the week, I will require groups for HW2 and Projects (on Piazza---groups don't have to be the same for both HW2 and projects).
27 Oct 2023: ⚠️ Declare groups for HW2 and Project
On piazza.
30 Oct 2023: (Paper) ML for heterogeneous causal effects
[Machine Learning Methods for Estimating Heterogeneous Causal Effects (Athey, Imbens, 2013)](https://www.researchgate.net/profile/Guido-Imbens-2/publication/274644919_Machine_Learning_Methods_for_Estimating_Heterogeneous_Causal_Effects/links/553c02250cf2c415bb0b1720/Machine-Learning-Methods-for-Estimating-Heterogeneous-Causal-Effects.pdf)
1 Nov 2023: (Paper) ML for predicting counterfactuals
[Meta-learners for Estimating Heterogeneous Treatment Effects using Machine Learning (Künzel, Sekhon, Bickel, Yu, 2017)](https://arxiv.org/abs/1706.03461)
[Adapting Neural Networks for the Estimation of Treatment Effects (Shi, Blei, Veitch, 2019)](https://arxiv.org/abs/1906.02120).
3 Nov 2023: ⚠️ In class participation self-assessment.
To encourage participation as it is a large fraction of the grade, we'll do a participation self-assessment and goal-setting exercise. By Fri. Nov. 3, each student should **email me** a paragraph (2 or 3 sentences) on:
- Their thoughts on their participation so far (frequency, quality, efforts).
- Their goals for the rest of the class.
6 Nov 2023: Double/Debiased/Orthogonal ML (paper)
[Orthogonal Statistical Learning (Foster, Syrgkanis, 2019)](https://arxiv.org/abs/1901.09036)
8 Nov 2023: Causal Representation Learning (paper)
[Desiderata for Representation Learning: A Causal Perspective](https://arxiv.org/abs/2109.03795)
10 Nov 2023: ⚠️ Project paragraph due
Each group will send me a project proposal/paragraph (1p. max, though .5p is likely enough). It should include:
- A motivation paragraph.
- A setting paragraph (causal question you're looking at).
- A plan (how you'll try to tackle the question). Frame the plan as a hypothesis and experiments you'll run / theory you'll do to see if the hypothesis is supported. This way, you learn something (and have a project) whether it works or not.
13 Nov 2023: **No class** (Midterm break)
15 Nov 2023: **No class** (Midterm break)
20 Nov 2023: Instrumental Variables (lecture)
- Natural Experiments
- Instrumental Variables
- Two-stage least squares (2SLS)
- (Maybe) Method of Moments
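The 2SLS procedure from this lecture can be sketched in a few lines of NumPy: with an unobserved confounder, OLS of Y on T is biased, while regressing Y on the first-stage fitted values of T recovers the true effect. The simulated structural equations and coefficients below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
u = rng.normal(0, 1, n)                    # unobserved confounder
z = rng.binomial(1, 0.5, n).astype(float)  # instrument: affects Y only through T
t = 0.5 * z + 0.8 * u + rng.normal(0, 1, n)
y = 1.0 * t + 1.5 * u + rng.normal(0, 1, n)  # true effect of T on Y is 1.0

# OLS of Y on T is biased upward because U drives both T and Y.
X = np.column_stack([np.ones(n), t])
ols = np.linalg.lstsq(X, y, rcond=None)[0][1]

# 2SLS: first stage regresses T on Z; second stage regresses Y on fitted T.
Z = np.column_stack([np.ones(n), z])
t_hat = Z @ np.linalg.lstsq(Z, t, rcond=None)[0]
X2 = np.column_stack([np.ones(n), t_hat])
tsls = np.linalg.lstsq(X2, y, rcond=None)[0][1]

print(ols, tsls)  # OLS biased; 2SLS ≈ 1.0
```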
22 Nov 2023: DeepIV (paper)
[Deep IV: A Flexible Approach for Counterfactual Prediction (Hartford, Lewis, Leyton-Brown, Taddy, 2017)](https://proceedings.mlr.press/v70/hartford17a/hartford17a.pdf)
24 Nov 2023: ⚠️ Assignment 2 due
Present your HW2 **as a group**: a 10-15min presentation, followed by 10-15min of discussion and questions. Register [here](https://freebusy.io/lecuyer/assigment-presentation).
27 Nov 2023: Deep Method of Moments (paper)
[Deep Generalized Method of Moments for Instrumental Variable Analysis (Bennett, Kallus, Schnabel, 2019)](https://arxiv.org/abs/1905.12495)
29 Nov 2023: Sensitivity analysis (paper)
[Sense and Sensitivity Analysis: Simple Post-Hoc Analysis of Bias Due to Unobserved Confounding (Veitch, Zaveri, 2020)](https://proceedings.neurips.cc/paper/2020/hash/7d265aa7147bd3913fb84c7963a209d1-Abstract.html)
4 Dec 2023: Causal Discovery (papers)
[Review of Causal Discovery Methods Based on Graphical Models (Glymour, Zhang, Spirtes, 2019)](https://www.frontiersin.org/articles/10.3389/fgene.2019.00524/full)
[DAGs with NO TEARS: Continuous Optimization for Structure Learning](https://proceedings.neurips.cc/paper_files/paper/2018/file/e347c51419ffb23ca3fd5050202f9c3d-Paper.pdf)
6 Dec 2023: The deconfounder (paper)
[The Blessings of Multiple Causes (Wang, Blei, 2018)](https://arxiv.org/abs/1805.06826)
15 Dec 2023: ⚠️ Project reports due (4p max, less is fine)
[^piazza]: In this course, you will be using Piazza, which is a tool to help facilitate discussions. When creating an account in the tool, you will be asked to provide personally identifying information. Because this tool is hosted on servers in the U.S. and not in Canada, by creating an account you will also be consenting to the storage of your information in the U.S. Please know you are not required to consent to sharing this personal information with the tool, if you are uncomfortable doing so. If you choose not to provide consent, you may create an account using a nickname and a non-identifying email address, then let your instructor know what alias you are using in the tool.