
Dev/prvep experiment with docs #8

Draft
pellet wants to merge 69 commits into master from dev/prvep_experiment_with_docs

Conversation


@pellet pellet commented Apr 12, 2026

  • Use the conda env for building docs on GitHub.
  • Added setuptools to conda environment to fix CI build issue.
  • Added PR-VEP experiment.
  • Added PR-VEP visualization/analysis.

@pellet force-pushed the dev/prvep_experiment_with_docs branch from c67758a to fb931fa on April 12, 2026 at 05:04
pellet added 28 commits April 12, 2026 16:16
Future-proof VEP naming now that more VEP types will be added.
…uploaded

- Add visual-PRVEP to datasets.py with None placeholder gdrive ID
- fetch_dataset raises a clear error if gdrive ID is missing
- Rewrite example to use fetch_dataset instead of dotenv/DATA_DIR
- Add visual_vep to sphinx-gallery examples_dirs/gallery_dirs
- Exclude visual_vep examples from execution until dataset is on Drive (TODO comment)
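The fetch_dataset guard described above might look roughly like this. The registry layout, key names, and function signature here are assumptions for illustration; only the None-placeholder idea and the clear-error behaviour come from the commit message:

```python
# Hypothetical registry entry; the real datasets.py structure may differ.
DATASETS = {
    "visual-PRVEP": {"gdrive_id": None},  # placeholder until the file is uploaded
}

def fetch_dataset(name):
    """Download a registered dataset, failing loudly if no Drive ID is set yet."""
    entry = DATASETS[name]
    if entry["gdrive_id"] is None:
        raise ValueError(
            f"Dataset '{name}' has no Google Drive ID yet and cannot be "
            "fetched until it is uploaded."
        )
    # ... actual download logic would go here ...
```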
- Move intro + examples include to top (matching other experiment pages)
- Move Running the Experiment up as quick-start
- Add Participant Preparation section on glasses/contacts requirement
- Detail sections (stimulus, VR, electrodes, timing) follow as reference
Add parabolic interpolation to get_peak() for ~0.5 ms latency resolution
at a 250 Hz sampling rate. Increase the default block count from 4 to 8
(400 reversals per eye). Add a Latency Resolution section to the PR-VEP docs.
Add 02r__pattern_reversal_longitudinal.py example that loads multiple
sessions for a subject, extracts per-eye P100 latency with parabolic
interpolation, and plots trends over time. Add Longitudinal Tracking
section to PR-VEP docs with baseline guidance.
pellet added 30 commits April 22, 2026 20:44
- Updated 01r__pattern_reversal_viz to automatically fetch the example
  dataset from Google Drive via gdown if missing (enables CI/CD doc builds).
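The fetch-if-missing pattern for the example dataset might be sketched as below. The file ID and output path are placeholders; the real example uses its own values:

```python
from pathlib import Path

# Placeholder ID and path for illustration only.
GDRIVE_ID = "<drive-file-id>"
RAW_PATH = Path("data") / "prvep_example_raw.fif"

def fetch_example_dataset():
    """Download the example recording via gdown only when it is not cached,
    so Sphinx-gallery builds work both locally and on CI."""
    if not RAW_PATH.exists():
        import gdown  # lazy import: cached runs need neither network nor gdown
        RAW_PATH.parent.mkdir(parents=True, exist_ok=True)
        gdown.download(id=GDRIVE_ID, output=str(RAW_PATH), quiet=False)
    return RAW_PATH
```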
- Modified PR-VEP block schedule to group trials by eye, allowing the use
  of a physical patch (ISCEV standard) without interrupting the VR session.
- Upgraded VR instruction screens to render stereoscopically with color-coded
  backgrounds (black for patched eye, grey for open eye).
- Fixed diagnostics.py signal check to correctly identify and warn about
  shared reference (M1/A2) failures when all channels inflate simultaneously.
- Removed the complicated state-machine logic used to decode v1 markers
  from block-start codes.
- The pipeline now expects v2 markers (where both eye and size are fully
  encoded into integers 1-4) by default.
- Removed bitwise math comments to clarify the condition-to-integer mapping.
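Decoding v2 markers then becomes a plain lookup. The specific integer-to-condition assignment below is an assumption; only the fact that eye and check size are jointly encoded as integers 1-4 comes from the commit message:

```python
# Hypothetical v2 condition table; the actual code-to-condition assignment
# in the pipeline may differ.
V2_CONDITIONS = {
    1: ("left", "large"),
    2: ("left", "small"),
    3: ("right", "large"),
    4: ("right", "small"),
}

def decode_v2_marker(code):
    """Return (eye, check_size) for a v2 trigger code; reject anything else."""
    try:
        return V2_CONDITIONS[code]
    except KeyError:
        raise ValueError(
            f"marker {code} is not a v2 condition code (expected 1-4); "
            "v1 block-start codes are no longer supported"
        ) from None
```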
- Split Oz evoked plots into two distinct cells (Large vs Small checks)
  for better negative space and intuitive side-by-side L/R eye comparison.
- Switched default REF_SCHEME to 'Mastoid M2' and added a comment
  explaining why M1 was too noisy for linked mastoids in session 016,
  allowing BM12 (Halliday inversion) to compute.
- Kept the overall pipeline referenced to Fz (ISCEV) for KISS compliance.
- Modified BM12 to temporarily re-reference to M2 locally, enabling the
  Halliday Fz polarity inversion check even when M1 is too noisy for
  a linked mastoid reference.
- Rewrote get_peak to find the absolute local maximum within the search
  window, removing MNE's strict requirement for positive values.
- Supports waveforms with large downward baseline shifts (e.g. from Quest 2
  pixel response time) without needing distortive high-pass filtering.
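A standalone sketch of that windowed-peak logic, operating on plain NumPy arrays rather than an mne.Evoked (the function name and signature are illustrative):

```python
import numpy as np

def get_peak(times, data, tmin, tmax):
    """Return (latency, amplitude) of the largest sample inside [tmin, tmax],
    even when that value is negative. Unlike mode='pos' peak picking, this
    tolerates waveforms riding on a downward baseline shift, so no distortive
    high-pass filtering is needed first. Illustrative sketch only."""
    mask = (times >= tmin) & (times <= tmax)
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        raise ValueError("search window contains no samples")
    i = idx[np.argmax(data[idx])]
    return times[i], data[i]
```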
