Greetings

I am a fifth-year PhD student at the University of Illinois, Urbana-Champaign (UIUC), where I am advised by the eminent Nan Jiang. My current research interests are in deriving principled methods for reinforcement learning (RL). I mostly think about the question of when and how we can use function approximation to derive sound, efficient, and useful RL algorithms.

During my PhD, I’ve had the immense delight of interning with Dylan Foster and Akshay Krishnamurthy at Microsoft Research, Dean P. Foster at Amazon Research, and Csaba Szepesvári at the University of Alberta.

In a previous life, I completed an MSc in Computer Science and a BSc in Maths & Physics, both from McGill University. My MSc research was advised by the dynamic duo of Prakash Panangaden and Marc G. Bellemare.

Here are links to my Google Scholar and my CV. I can be reached at philipa4 at illinois dot edu (or, in the event of overeager spam filters, at amortilaphilip at gmail dot com).

Education

  • PhD in Computer Science @ University of Illinois, Urbana-Champaign (in progress).

    • Advised by Nan Jiang.

  • MSc in Computer Science @ McGill University.

    • Advised by Prakash Panangaden and Marc G. Bellemare.

  • BSc in Maths & Physics @ McGill University.

Experience

  • Summer 2023. Research Intern @ Microsoft Research (New England).

    • Advised by Dylan J. Foster and Akshay Krishnamurthy.

  • Summer 2022. Research Intern @ Amazon Research (New York).

    • Advised by Dean P. Foster.

    • Topic: Coordination & Communication in Partially Observed Cooperative Games.

  • Fall 2021. Research Intern @ Amazon Research (New York).

    • Advised by Dean P. Foster.

    • Topic: Optimal Algorithms for Expert-Assisted RL With Linear Features.

  • Summer 2021. Visiting Researcher @ University of Alberta.

    • Advised by Csaba Szepesvári.

    • Topic: Optimal Methods for Off-policy Evaluation With Misspecification.

  • Summer 2020. Visiting Researcher @ University of Alberta.

    • Advised by Csaba Szepesvári.

    • Topic: Limits of Sample-Efficient Learning With Linear Features.

Publications

Preprints

  1. Scalable Online Exploration via Coverability

    Philip Amortila, Dylan J. Foster, Akshay Krishnamurthy

    Preprint [arXiv]

  2. Mitigating Covariate Shift in Misspecified Regression with Applications to Reinforcement Learning

    Philip Amortila, Tongyi Cao, Akshay Krishnamurthy

    Preprint [arXiv]

Conference Papers

  1. Harnessing Density Ratios for Online Reinforcement Learning

    Philip Amortila, Dylan J. Foster, Nan Jiang, Ayush Sekhari, Tengyang Xie

    ICLR 2024 Spotlight [arXiv]

  2. The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation

    Philip Amortila, Nan Jiang, Csaba Szepesvári

    ICML 2023 [arXiv]

  3. A Few Expert Queries Suffices for Sample-Efficient RL with Resets and Linear Value Approximation 

    Philip Amortila, Nan Jiang, Dhruv Madeka, Dean P. Foster

    NeurIPS 2022 [arXiv, talk]

  4. On Query-efficient Planning in MDPs under Linear Realizability of the Optimal State-value Function

    Gellert Weisz, Philip Amortila, Barnabás Janzer, Yasin Abbasi-Yadkori, Nan Jiang, Csaba Szepesvári

    COLT 2021 [arXiv, talk]

  5. Exponential Lower Bounds for Planning in MDPs With Linearly-Realizable Optimal Action-Value Functions

    Gellert Weisz, Philip Amortila, Csaba Szepesvári

    ALT 2021 Best Student Paper Award [arXiv, talk]

  6. Solving Constrained Markov Decision Processes via Backward Value Functions

    Harsh Satija, Philip Amortila, Joelle Pineau

    ICML 2020 [arXiv, talk]

  7. A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms

    Philip Amortila, Doina Precup, Prakash Panangaden, Marc G. Bellemare

    AISTATS 2020 [arXiv, talk] & NeurIPS 2019 Optimization in RL Workshop Spotlight [talk]

  8. Learning Graph Weighted Models on Pictures

    Philip Amortila, Guillaume Rabusseau

    ICGI 2018 [arXiv]

Technical Notes

  1. A Variant of the Wang-Foster-Kakade Lower Bound for the Discounted Setting

    Philip Amortila, Nan Jiang, Tengyang Xie [arXiv]

Workshop Papers

  1.  Temporally Extended Metrics for Markov Decision Processes

    Philip Amortila, Marc G. Bellemare, Prakash Panangaden, Doina Precup

    AAAI 2019 Safety in AI Workshop Spotlight [pdf]

Awards

  • 2022 & 2023. Finalist for Apple PhD Fellowship (2022) and Google PhD Fellowship (2023).

    • Nominated by UIUC for the national competitions (3 nominees selected from among all UIUC students).

  • 2021. Best Student Paper Award at ALT 2021.

  • 2019. NSERC Postgraduate Doctoral Fellowship (PGS-D).

Service

Reviewer

  • Journal of Machine Learning Research (JMLR). 2022, 2023

  • Transactions of Machine Learning Research (TMLR). 2022, 2023

  • International Conference on Machine Learning (ICML). 2020, 2021, 2022, 2023

  • Neural Information Processing Systems (NeurIPS). 2020, 2021, 2022

  • International Conference on Learning Representations (ICLR). 2023

Teaching Experience

Teaching Assistant

  • Fall 2023. CS 542 Statistical Reinforcement Learning @ UIUC.

  • Spring 2023. CS 443 Reinforcement Learning @ UIUC.

  • Fall 2019. CS 498 Reinforcement Learning @ UIUC.

  • Fall 2018. CS 598 Foundations of Machine Learning @ McGill.

  • Winter 2018. CS 551 Applied Machine Learning @ McGill.

  • Fall 2017. CS 551 Applied Machine Learning @ McGill.

  • Winter 2017. CS 302 Functional Programming @ McGill.
