About The Meeting
The New England Mechanistic Interpretability (NEMI) workshop aims to bring together academic and industry researchers from New England and the surrounding regions who are advancing the field of mechanistic interpretability in machine learning systems. The workshop will serve as a forum to share recent progress, challenges, and ideas in reverse-engineering, circuit analysis, and other techniques that seek to understand how models compute internally. NEMI seeks to foster a participatory and collaborative environment where researchers at all levels, including graduate students, early-career scientists, and established experts, can engage in discussion and feedback. We particularly encourage submissions from rising researchers currently enrolled in graduate programs at New England-based universities. Topics of interest include, but are not limited to, interpretability of neural circuits, activation patching, probe-based analysis, feature attribution methods, model simplification, scaling laws, and applications of interpretability. The workshop will feature a dynamic program including invited keynote speakers, selected oral presentations, interactive poster sessions, and opportunities for open discussion.

Schedule
09:00 AM - 10:00 AM | Breakfast & Registration
10:00 AM - 10:10 AM | Opening Remarks
10:10 AM - 10:30 AM | Keynote 1: Lee Sharkey: "Mech Interp: Where should we go from here?"
10:30 AM - 10:40 AM | Student Talk 1
10:40 AM - 10:50 AM | Student Talk 2
10:50 AM - 11:00 AM | Coffee Break (10 mins)
11:00 AM - 11:20 AM | Keynote 2: Tamar Rott Shaham: "Can Language Models Interpret Humans?"
11:20 AM - 11:50 AM | Round 1 of LLM Team Matching
11:50 AM - 12:00 PM | Group Photo
12:00 PM - 01:00 PM | Lunch + Continued Team Matching
01:00 PM - 02:00 PM | NDIF/NNsight + 2nd/3rd Round of Matches
02:00 PM - 04:00 PM | Poster Session
04:00 PM - 04:10 PM | Coffee Break
04:10 PM - 04:30 PM | Keynote 3: Aaron Mueller: "Beyond Human Concepts: Evaluating and Applying Unsupervised Interpretability"
04:30 PM - 04:40 PM | Student Talk 3
04:40 PM - 04:50 PM | Student Talk 4
04:50 PM - 05:10 PM | Keynote 4: Ekdeep Singh Lubana: "Looking Inwards: Implicit Assumptions Formally Constrain Mechanistic Interpretability"
05:10 PM - 05:50 PM | Panel Discussion
05:55 PM - 06:00 PM | Closing Remarks
06:00 PM+ | Optional Social
Registration
Register by August 4, 2025.
Submission Guidelines
We invite submissions for the NEMI 2025 workshop, a one-day event dedicated to exploring the latest developments in mechanistic interpretability research. We welcome submissions on all aspects of interpretability. A subset of submissions will be selected for oral presentations; the remainder will be presented as posters. We encourage submissions from rising researchers enrolled in graduate programs at universities in the New England region.
Dates
- Registration deadline: August 4, 2025 (AOE)
- Submission deadline: August 9, 2025 (AOE)
- Notification: August 12, 2025 (AOE)
- Event Date: August 22, 2025
Keynote Speakers

Aaron Mueller
Boston University

Ekdeep Singh Lubana
Harvard University

Student Organizers

Koyena Pal
Northeastern University
Alex Loftus
Northeastern University
Emma Bortz
Northeastern University
Aruna Sankaranarayanan
MIT

Senior Program Committee

David Bau
Northeastern University
Jacob Andreas
MIT
Hima Lakkaraju
Harvard University
Najoung Kim
Boston University

Logistics Support

Heather Sciacca
Northeastern University

Venue
Curry Student Center, 360 Huntington Ave, Boston, MA 02115