Hey! I'm a PhD student at Oxford working on LLM explainability and building some evals along the way. You can check out all my research at www.harrymayne.com
- University of Oxford
- London
- www.harrymayne.com
- @HarryMayne5
- in/harry-mayne-04aba7194
Popular repositories
- SV_interpretability: Code for the paper "Can sparse autoencoders be used to decompose and interpret steering vectors?"
- qwen_3_chat_templates: Alternative chat templates for Qwen 3 8B. Useful for multi-turn RL. (Jupyter Notebook, 8 stars)
- ICU-patient-subgroups: "Unsupervised Learning Approaches for Identifying ICU Patient Subgroups: Do Results Generalise?"
- SAELensPlus: Training Sparse Autoencoders on Language Models. Forked from decoderesearch/SAELens. (HTML)


