
Gsoc2026 evaluations #216

Open
abhiram123467 wants to merge 3 commits into ML4SCI:main from abhiram123467:gsoc2026-evaluations

Conversation

@abhiram123467

GSoC 2026 Evaluation Submission

Applicant: Abhi Ramg
Project: ML4Sci — DeepLense

This pull request contains my evaluation notebook submissions for the GSoC 2026 DeepLense project. The submission covers three separate tasks demonstrating proficiency in foundation models, diffusion models, and super-resolution architectures for gravitational lensing.

🔭 Task Breakdown & Results

1. DEEPLENSE4: Foundation Model for Gravitational Lensing

  • Methodology: Built a Vision Transformer (ViT) Masked Autoencoder (MAE) inspired by Lens-JEPA.
  • Pre-training: Self-supervised MAE pre-training on no_sub images with a 75% masking ratio to learn underlying structural physics.
  • Task IX.A (Classification): Fine-tuned the encoder for 3-class dark matter classification (CDM, Axion, No Substructure) achieving robust ROC-AUC scores.
  • Task IX.B (Super-Resolution): Reused the frozen encoder backbone and fine-tuned a custom decoder for 4x super-resolution.
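The MAE pre-training step above hinges on patchifying each `no_sub` image and hiding most patches from the encoder. As a minimal sketch of that masking scheme (the patch size of 16 and the seed are illustrative assumptions, not taken from the notebook):

```python
import numpy as np

def patchify(img, patch=16):
    # Split a square (H, W) image into non-overlapping flattened patches.
    H, W = img.shape
    gh, gw = H // patch, W // patch
    patches = img.reshape(gh, patch, gw, patch).transpose(0, 2, 1, 3)
    return patches.reshape(gh * gw, patch * patch)

def random_mask(num_patches, mask_ratio=0.75, rng=None):
    # Keep a random 25% of patches visible; the encoder only sees these.
    rng = rng or np.random.default_rng(0)
    num_keep = int(num_patches * (1 - mask_ratio))
    perm = rng.permutation(num_patches)
    keep = np.sort(perm[:num_keep])
    mask = np.ones(num_patches, dtype=bool)
    mask[keep] = False  # False = visible, True = masked (to be reconstructed)
    return keep, mask
```

With a 75% ratio the decoder must reconstruct three quarters of the image from the remaining quarter, which is what forces the encoder to learn lensing structure rather than local texture.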

2. DEEPLENSE8: Physics-Informed Diffusion Models

  • Methodology: Implemented a Denoising Diffusion Probabilistic Model (DDPM) with a custom U-Net backbone (incorporating Sinusoidal Time Embeddings and Attention blocks).
  • Training: Trained on 10,000 strong lensing images over 1000 diffusion timesteps.
  • Evaluation: Generated realistic lensing images from pure noise, evaluated with the Fréchet Inception Distance (FID) to quantify domain-specific realism.
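The 1000-step training described above relies on the closed-form DDPM forward process, q(x_t | x_0) = N(sqrt(ᾱ_t) x_0, (1 - ᾱ_t) I). A minimal NumPy sketch, assuming the standard linear beta schedule from the DDPM paper (the exact schedule used in the notebook may differ):

```python
import numpy as np

def linear_beta_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    # Linearly spaced noise variances over T diffusion timesteps.
    return np.linspace(beta_start, beta_end, T)

def forward_diffuse(x0, t, alphas_cumprod, rng=None):
    # Sample x_t ~ q(x_t | x_0) in one shot:
    #   x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I)
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal(x0.shape)
    abar = alphas_cumprod[t]
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps, eps

betas = linear_beta_schedule(T=1000)
alphas_cumprod = np.cumprod(1.0 - betas)  # abar_t, monotonically decreasing
```

The U-Net is then trained to predict `eps` from `x_t` and `t`; by the final timestep `abar_t` is nearly zero, so `x_T` is effectively pure noise, which is why sampling can start from a Gaussian.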

3. DEEPLENSE9: Unsupervised Super-Resolution

  • Methodology: Developed a Residual Channel Attention Network (RCAN-lite) to reconstruct high-resolution lensing features from degraded inputs.
  • Task VI.A (Simulated): Trained on simulated HR/LR pairs, optimizing a combined L1 + Gradient loss function to preserve critical arc sharpness.
  • Task VI.B (Real HSC/HST): Applied transfer learning by fine-tuning the Task A model on 300 real telescope image pairs to handle physical noise distributions (Domain Adaptation).
  • Metrics: Evaluated against Bicubic baselines using MSE, SSIM, and PSNR.
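The combined L1 + gradient objective mentioned in Task VI.A can be sketched as below; the finite-difference gradient term penalizes blurred arc edges that plain L1 tolerates. The weighting `lam` is a hypothetical value for illustration, not the one used in the notebook:

```python
import numpy as np

def l1_loss(pred, target):
    # Mean absolute pixel error.
    return np.abs(pred - target).mean()

def gradient_loss(pred, target):
    # Compare finite-difference image gradients along x and y to
    # penalize edge blurring (important for thin lensing arcs).
    dx = lambda im: im[:, 1:] - im[:, :-1]
    dy = lambda im: im[1:, :] - im[:-1, :]
    return (np.abs(dx(pred) - dx(target)).mean()
            + np.abs(dy(pred) - dy(target)).mean())

def combined_loss(pred, target, lam=0.1):
    # lam balances pixel fidelity against edge sharpness (assumed value).
    return l1_loss(pred, target) + lam * gradient_loss(pred, target)
```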

📁 Attached Files

  • DEEPLENSE4_Foundation_Model_FINAL.ipynb
  • DEEPLENSE8_DDPM_FINAL.ipynb
  • DEEPLENSE9_SuperResolution.ipynb
  • Related output visualizations and metric dashboards.

Added a robust loader for DeepLense .npy files that supports multiple formats and includes a PyTorch Dataset class for easy data handling.
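A sketch of the normalization such a loader needs to perform: DeepLense `.npy` files variously store images as plain `(H, W)` arrays, channel-last `(H, W, 1)` arrays, or pickled object arrays. The function below (name and format-handling branches are illustrative assumptions, not the PR's actual code) coerces all of these to a channel-first `(1, H, W)` float32 array, which a PyTorch `Dataset.__getitem__` can then wrap in a tensor:

```python
import numpy as np

def load_deeplense_npy(path):
    # Load one DeepLense sample and normalize it to (1, H, W) float32.
    arr = np.load(path, allow_pickle=True)
    if arr.dtype == object:
        # Some files wrap the image inside a 0-d or 1-element object array.
        arr = np.asarray(arr.item() if arr.shape == () else arr[0])
    arr = np.asarray(arr, dtype=np.float32)
    if arr.ndim == 2:                              # (H, W) -> (1, H, W)
        arr = arr[None, ...]
    elif arr.ndim == 3 and arr.shape[-1] == 1:     # (H, W, 1) -> (1, H, W)
        arr = arr.transpose(2, 0, 1)
    return arr
```

A Dataset class would simply call this in `__getitem__` and return `torch.from_numpy(load_deeplense_npy(path))` alongside the label.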
