
🧪 MediaEval 2025 — Synthetic Image Detection

Adithya S · Research Contribution · Computer Vision & Bias Analysis

This repository contains the code, experiments, and evaluation notebooks for our submission to the MediaEval 2025 Synthetic Image Detection Task.
The challenge focused on distinguishing real vs synthetic images and identifying manipulated regions across diverse datasets.

Our work explores how well modern deep-learning models handle bias, distribution shifts, and generalization in synthetic image detection.


📌 Overview

We evaluated two families of models:

  • DenseNet121 → binary classification (real vs synthetic)
  • U-Net → pixel-level manipulation localization

Across experiments, we found that segmentation-based approaches generalize more consistently, while classification models often develop silent dataset bias — favoring one label regardless of image content.

This repo includes our training code, results, and analysis notebooks for both tasks.


🧠 Key Insights

  • DenseNet121 struggled with generalization and tended to bias toward the “real” class under dataset variation.
  • U-Net segmentation delivered stronger cross-distribution performance and better captured localized manipulations.
  • Results varied significantly across synthetic sources, showing the need for more diverse synthetic datasets and improved augmentation strategies.
  • Segmentation models showed unexpected robustness, outperforming classifiers when synthetic images came from unseen generators.

📂 Repository Structure


mediaEval/
│
├── taskA/                  # DenseNet121 classification experiments
├── taskB/                  # U-Net segmentation experiments
├── train.ipynb             # Main training & evaluation notebook
├── taskA.ipynb             # Task A — real vs synthetic evaluation
├── taskB.ipynb             # Task B — manipulation localization evaluation
├── .ipynb_checkpoints/     # Notebook auto-checkpoints
└── ticket.pdf              # MediaEval 2025 submission ticket


🧰 Methods & Tools

  • Frameworks: PyTorch, Torchvision
  • Models: DenseNet121, U-Net
  • Tasks:
    • Task A: binary classification
    • Task B: manipulation segmentation
  • Metrics: F1 score, IoU, pixel accuracy
  • Augmentations: flips, rotations, color jitter, random crops

📊 Results Summary

| Task                 | Model       | Metric | Score |
|----------------------|-------------|--------|-------|
| A: Real vs Synthetic | DenseNet121 | F1     | ~0.06 |
| B: Localization      | U-Net       | IoU    | ~0.30 |
| RAISE-mixed subset   | U-Net       | F1     | 0.65  |

These results highlight consistent limitations in classifier generalization and the comparative strength of segmentation models on manipulation detection.
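For reference, IoU and pixel accuracy on binary manipulation masks can be computed as below. This is a generic sketch of the standard definitions, not the exact evaluation code from the notebooks:

```python
import torch

def iou_and_pixel_acc(pred: torch.Tensor, target: torch.Tensor,
                      eps: float = 1e-7) -> tuple[float, float]:
    """Binary-mask IoU and pixel accuracy for {0,1} tensors of equal shape."""
    pred, target = pred.bool(), target.bool()
    intersection = (pred & target).sum().float()
    union = (pred | target).sum().float()
    iou = (intersection + eps) / (union + eps)   # eps guards empty masks
    pixel_acc = (pred == target).float().mean()  # fraction of agreeing pixels
    return iou.item(), pixel_acc.item()

pred = torch.tensor([[1, 1, 0, 0]])
target = torch.tensor([[1, 0, 0, 0]])
iou, acc = iou_and_pixel_acc(pred, target)
# intersection = 1, union = 2, so IoU = 0.5; 3 of 4 pixels agree, so acc = 0.75
```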


🧩 My Contribution

As part of the research team, I worked on:

  • training and evaluating classification & segmentation models
  • analyzing bias and performance degradation under dataset shifts
  • preparing comparative study notes between DenseNet and U-Net approaches
  • structuring our experiments into organized notebooks for reproducibility
  • assisting in assembling the MediaEval 2025 submission

This repo is a cleaned and documented version of the work I contributed to.


📫 Contact

If you’d like to discuss the work or collaborate:

Email: adithya2410402@ssn.edu.in

