Multimodal Facial Emotion Recognition Using Deep Learning Models

This dissertation project is a requirement for the completion of the Master’s in Artificial Intelligence degree at the University of the West of England, Bristol (UWE).

Complete, detailed documentation for this project is available on GitBook, and the accompanying paper is available on ResearchGate.

Project Overview

This documentation covers the benchmarking of deep learning models for multimodal facial emotion recognition using RGB and thermal images. The objective is to compare model performance across different architectures and input modalities to improve the accuracy and reliability of emotion recognition systems.
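The actual model architectures live in the branches listed under "Step 2" below. Purely as an illustration of what "combining input modalities" can mean, here is a minimal sketch (plain NumPy; the label set and function names are hypothetical, not from this repository) of late fusion: each modality produces class scores, the per-modality probabilities are averaged, and the fused prediction is the argmax.

```python
import numpy as np

# Hypothetical emotion label set, for illustration only.
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw model scores into probabilities."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(rgb_logits: np.ndarray, thermal_logits: np.ndarray,
                rgb_weight: float = 0.5) -> int:
    """Weighted average of per-modality probabilities, then argmax."""
    probs = (rgb_weight * softmax(rgb_logits)
             + (1.0 - rgb_weight) * softmax(thermal_logits))
    return int(probs.argmax())

# Toy logits: the RGB stream leans strongly toward "happy",
# the thermal stream agrees weakly.
rgb = np.array([0.1, 2.0, 0.5, 0.2])
thermal = np.array([0.3, 1.1, 0.9, 0.1])
print(EMOTIONS[late_fusion(rgb, thermal)])  # -> happy
```

Late fusion is only one option; the benchmarked models may instead fuse at the feature level or train separate per-modality networks.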

| Phase | Duration | Activities |
| --- | --- | --- |
| Proposal Submission | January – March 2025 | Develop and finalise the project proposal; conduct an initial background study; obtain approval. |
| Research Gathering | March – May 2025 | Collect data and resources; review related literature; refine research objectives and methodology. |
| Development | May – August 2025 | Implement the main project work; testing, analysis, and evaluation. |
| Paper Writing | August – September 2025 | Document results; compile findings; finalise the research paper and presentation materials. |

Set Up

Before running the project, please make sure you have the following on your machine:

  • An IDE of your choice, such as PyCharm or VS Code
  • Python 3.9 or newer

Alternatively, you can run the project in Google Colab.
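As a quick sanity check (this one-liner is a suggestion, not part of the project itself), you can confirm your interpreter meets the 3.9+ requirement before continuing:

```shell
# Fails with an AssertionError if the Python version is below 3.9
python3 -c 'import sys; assert sys.version_info >= (3, 9), sys.version'
```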

How to run the project

To run the project, please follow the steps outlined below.

Step 1: Clone the repository

  git clone https://gitlab.uwe.ac.uk/lmr2-sanejo/group-4-dissertation

Step 2: Select a branch name

The code implementations can be found in the following GitLab branches.

Data augmentation

Models

  • vit_model: Vision Transformer implementation.
  • YOLO11: YOLO11 implementation.
  • CNN_Model: CNN implementation.

Step 3: Running the project

Once you have checked out a branch, see its README.md for instructions on running the model; further details are in the GitBook documentation.

License

This repository was created as part of a dissertation project at the University of the West of England (UWE).

The code is provided for academic purposes only and may not be used for commercial applications.

All rights reserved by the authors and UWE.

Authors

  • Fiorella Scarpino (21010043), University of the West of England (UWE)
  • May Sanejo (15006280), University of the West of England (UWE)
  • Soumia Kouadri (24058628), University of the West of England (UWE)
