Adithya S · Research Contribution · Computer Vision & Bias Analysis
This repository contains the code, experiments, and evaluation notebooks for our submission to the MediaEval 2025 Synthetic Image Detection Task.
The challenge focused on distinguishing real from synthetic images and on identifying manipulated regions across diverse datasets.
Our work explores how well modern deep-learning models handle bias, distribution shifts, and generalization in synthetic image detection.
We evaluated two families of models:
- DenseNet121 → binary classification (real vs synthetic)
- U-Net → pixel-level manipulation localization
Across experiments, we found that segmentation-based approaches generalize more consistently, while classification models often develop silent dataset bias — favoring one label regardless of image content.
This repo includes our training code, results, and analysis notebooks for both tasks.
- DenseNet121 struggled with generalization and tended to bias toward the “real” class under dataset variation.
- U-Net segmentation delivered stronger cross-distribution performance and better captured localized manipulations.
- Results varied significantly across synthetic sources, showing the need for more diverse synthetic datasets and improved augmentation strategies.
- Segmentation models showed unexpected robustness, outperforming classifiers when synthetic images came from unseen generators.
```
mediaEval/
│
├── taskA/                 # DenseNet121 classification experiments
├── taskB/                 # U-Net segmentation experiments
├── train.ipynb            # Main training & evaluation notebook
├── taskA.ipynb            # Task A — real vs synthetic evaluation
├── taskB.ipynb            # Task B — manipulation localization evaluation
├── .ipynb_checkpoints/    # Notebook auto-checkpoints
└── ticket.pdf             # MediaEval 2025 submission ticket
```
- Frameworks: PyTorch, Torchvision
- Models: DenseNet121, U-Net
- Tasks:
- Task A: binary classification
- Task B: manipulation segmentation
- Metrics: F1 score, IoU, pixel accuracy
- Augmentations: flips, rotations, color jitter, random crops
| Task | Model | Metric | Score |
|---|---|---|---|
| A: Real vs Synthetic | DenseNet121 | F1 | ~0.06 |
| B: Localization | U-Net | IoU | ~0.30 |
| RAISE-mixed subset | U-Net | F1 | 0.65 |
These results highlight consistent limitations in classifier generalization and the comparative strength of segmentation models on manipulation detection.
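For reference, the three reported metrics can be computed for binary masks as follows; this is a self-contained NumPy sketch, not the evaluation code behind the official scores:

```python
import numpy as np

def binary_metrics(pred, target):
    """F1, IoU, and pixel accuracy for binary 0/1 masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # true positives
    fp = np.logical_and(pred, ~target).sum()  # false positives
    fn = np.logical_and(~pred, target).sum()  # false negatives
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    acc = (pred == target).mean()
    return f1, iou, acc

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
f1, iou, acc = binary_metrics(pred, target)
# tp=1, fp=1, fn=0  ->  F1 = 2/3, IoU = 0.5, accuracy = 0.75
```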
As part of the research team, I worked on:
- training and evaluating classification & segmentation models
- analyzing bias and performance degradation under dataset shifts
- preparing comparative study notes between DenseNet and U-Net approaches
- structuring our experiments into organized notebooks for reproducibility
- assisting in assembling the MediaEval 2025 submission
This repo is a cleaned and documented version of the work I contributed to.
If you’d like to discuss the work or collaborate:
Email: adithya2410402@ssn.edu.in