A comprehensive GUI application for training U-Net segmentation models and analyzing structural damage in images, with both PyTorch and Keras implementations.
- Professional Interface: Sleek dark mode design with custom logo
- Colorful Buttons: Color-coded buttons for different actions (success, warning, info, etc.)
- Enhanced Typography: Modern fonts and improved readability
- Responsive Layout: Adaptive interface that works on different screen sizes
- Visual Feedback: Hover effects and status indicators
- Dual Framework Support: Train models using PyTorch (segmentation-models-pytorch) or Keras (keras-unet-collection)
- Multiple Backbones: Choose from ResNet, VGG, DenseNet, MobileNet, and EfficientNet architectures
- Real-time Training Monitoring: Live progress tracking and training logs
- Configurable Parameters: Adjust epochs, batch size, and data directories
- Automatic Model Saving: Best models saved automatically during training
- U-Net Segmentation: Advanced semantic segmentation for precise damage detection
- Multi-Model Support: Load and compare PyTorch and Keras models simultaneously
- Batch Processing: Analyze entire directories of images automatically
- Visual Results: Generate overlay images showing detected damage areas
- Damage Classification: Categorizes damage as Manageable, Partially damaged, or Completely damaged
- Traditional Computer Vision: White pixel detection method for immediate analysis
- No Training Required: Analyze images without pre-trained models
- Baseline Comparison: Compare ML results with traditional CV methods
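The white-pixel detection idea behind the OpenCV analysis can be illustrated with a few lines of OpenCV; this is only a minimal sketch (the image path and the 200 threshold value are assumptions, not the exact logic in `damage_analyzer.py`):

```python
import cv2

# Minimal sketch of white-pixel damage counting (threshold value and file name are assumptions)
image = cv2.imread("sample_crack.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(image, 200, 255, cv2.THRESH_BINARY)  # bright pixels become white (255)
white_pixels = cv2.countNonZero(binary)
print(f"Detected {white_pixels} white (damage) pixels")
```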
- Clone or download this repository
- Install Python 3.8 or higher
- Install dependencies: `pip install -r requirements.txt`
- Run the application: `python app.py`
- Choose your workflow:
- Training Tab: Train new models from your data
- Prediction Tab: Use trained models for damage detection
- OpenCV Tab: Quick analysis using traditional computer vision
- Images: Place your raw images in a directory (e.g., `RAW_Images/`)
- Masks: Place corresponding binary masks in another directory (e.g., `Masks/`)
- Masks should be binary images where white pixels indicate damage
- Use the OpenCV tab to generate masks from images with visible damage
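Before starting a training run, it can help to verify that every image has a corresponding mask. The sketch below assumes masks share their image's file name, which is a convention assumed here rather than enforced by the application:

```python
from pathlib import Path

# Hypothetical sanity check: assumes each mask file shares its image's base name
images = {p.stem for p in Path("RAW_Images").glob("*.*")}
masks = {p.stem for p in Path("Masks").glob("*.*")}

missing_masks = images - masks
if missing_masks:
    print(f"Images without masks: {sorted(missing_masks)}")
else:
    print("Every image has a matching mask.")
```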
- Go to the Training tab
- Select framework (PyTorch or Keras) and backbone architecture
- Set training parameters (epochs, batch size)
- Choose your data directories
- Click Start Training
- Monitor progress in real-time
PyTorch Options:
- ResNet: `resnet18`, `resnet34`, `resnet50`, `resnet101`, `resnet152`
- VGG: `vgg11`, `vgg13`, `vgg16`, `vgg19`
- DenseNet: `densenet121`, `densenet169`, `densenet201`
- MobileNet: `mobilenet_v2`
- EfficientNet: `efficientnet-b0` through `efficientnet-b7`
Keras Options:
- ResNet: `ResNet50`, `ResNet101`, `ResNet152`, `ResNet50V2`, `ResNet101V2`, `ResNet152V2`
- VGG: `VGG16`, `VGG19`
- DenseNet: `DenseNet121`, `DenseNet169`, `DenseNet201`
- MobileNet: `MobileNetV2`
- Go to the Prediction tab
- Load your trained model(s) - specify the correct backbone used during training
- Click Load Single Image to select an image
- Click Analyze Current Image to get results
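For orientation, single-image inference with a trained PyTorch model follows roughly the steps sketched below. This is not the exact code in `model_loader.py`: the backbone, checkpoint path, plain 0-1 scaling, and the 0.5 mask threshold are all assumptions.

```python
import cv2
import numpy as np
import torch
import segmentation_models_pytorch as smp

# Sketch only: backbone, checkpoint path, and threshold are assumptions;
# assumes the checkpoint stores a state_dict for this architecture
model = smp.Unet(encoder_name="resnet34", encoder_weights=None, in_channels=3, classes=1)
model.load_state_dict(torch.load("models/best_pytorch_model.pth", map_location="cpu"))
model.eval()

image = cv2.imread("test_image.jpg")
resized = cv2.resize(image, (512, 512))                                  # match the 512x512 input size
tensor = torch.from_numpy(resized.transpose(2, 0, 1)).float().unsqueeze(0) / 255.0

with torch.no_grad():
    logits = model(tensor)
mask = (torch.sigmoid(logits) > 0.5).squeeze().numpy().astype(np.uint8) * 255
print("Damage pixels:", int(np.count_nonzero(mask)))
```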
- Load your trained model(s)
- Click Batch Process Directory
- Select a folder containing images
- Results will be saved automatically with masks and overlay images
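The overlay images mentioned above amount to blending a highlight color over the predicted mask. The sketch below assumes a saved binary mask and uses red with arbitrary blend weights; these are illustrative choices, not the application's exact settings:

```python
import cv2

# Sketch: blend a red highlight over damage pixels (mask is a 0/255 binary image)
image = cv2.imread("test_image.jpg")
mask = cv2.imread("test_image_mask.png", cv2.IMREAD_GRAYSCALE)
mask = cv2.resize(mask, (image.shape[1], image.shape[0]))

overlay = image.copy()
overlay[mask > 0] = (0, 0, 255)                       # BGR red where damage was detected
blended = cv2.addWeighted(image, 0.6, overlay, 0.4, 0)
cv2.imwrite("test_image_overlay.png", blended)
```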
The system uses area-based thresholds to classify damage:
- No Damage: 0 pixels
- Manageable: 1-5,025 pixels
- Partially Damaged: 5,026-17,670 pixels
- Completely Damaged: >17,670 pixels
These thresholds are based on the original research and can be modified in the code.
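Translated directly into code, the classification is a simple lookup on the white-pixel count; the boundaries below are taken from the list above (a sketch, not the application's exact function):

```python
def classify_damage(damage_pixels: int) -> str:
    """Map a white-pixel count to a damage category using the thresholds above."""
    if damage_pixels == 0:
        return "No Damage"
    if damage_pixels <= 5025:
        return "Manageable"
    if damage_pixels <= 17670:
        return "Partially Damaged"
    return "Completely Damaged"

print(classify_damage(6000))  # -> "Partially Damaged"
```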
```
├── app.py                 # Main GUI application
├── utils/                 # Core utilities
│   ├── __init__.py
│   ├── model_loader.py    # Model loading and prediction
│   ├── image_utils.py     # Image processing utilities
│   └── training_utils.py  # Training management
├── models/                # Place your trained models here
├── RAW_Images/            # Training images (create this folder)
├── Masks/                 # Training masks (create this folder)
├── requirements.txt       # Dependencies
└── README.md              # This file
```
- Uses the `segmentation-models-pytorch` library
- U-Net architecture with ImageNet-pretrained encoders
- Binary segmentation with sigmoid activation
- BCEWithLogitsLoss for training
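In segmentation-models-pytorch terms, this setup corresponds roughly to the following sketch; the encoder choice, learning rate, and the single dummy training step are illustrative, not the application's exact training code:

```python
import torch
import segmentation_models_pytorch as smp

# Sketch of the PyTorch setup: U-Net with an ImageNet-pretrained encoder, binary output
model = smp.Unet(
    encoder_name="resnet34",     # any backbone from the PyTorch options listed earlier
    encoder_weights="imagenet",  # ImageNet-pretrained encoder
    in_channels=3,
    classes=1,                   # single-channel damage mask
)
loss_fn = torch.nn.BCEWithLogitsLoss()  # expects raw logits; sigmoid is applied at inference time
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on dummy data
images = torch.rand(2, 3, 512, 512)
masks = torch.randint(0, 2, (2, 1, 512, 512)).float()
optimizer.zero_grad()
loss = loss_fn(model(images), masks)
loss.backward()
optimizer.step()
```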
- Uses the `keras-unet-collection` library
- U-Net++ architecture with ImageNet-pretrained backbones
- Binary segmentation with sigmoid activation
- Binary crossentropy loss for training
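The Keras path can be sketched with the keras-unet-collection U-Net++ builder (`models.unet_plus_2d`). Treat the call below as an assumption about the setup rather than the application's configuration: the filter sizes, backbone choice, and compile settings are illustrative, and the arguments should be checked against the library's documentation:

```python
from keras_unet_collection import models

# Sketch: U-Net++ with an ImageNet-pretrained ResNet50 backbone and sigmoid output
model = models.unet_plus_2d(
    (512, 512, 3),
    filter_num=[64, 128, 256, 512],
    n_labels=1,
    backbone="ResNet50",
    weights="imagenet",
    output_activation="Sigmoid",
)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```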
This GUI integrates the complete functionality from the original GitHub repository:
- Training scripts: `train_pytorch_unet.py` and `train_keras_unet.py`
- Prediction scripts: `predict_pytorch.py` and `predict_keras.py`
- OpenCV analysis: `damage_analyzer.py`
All functionality is now available through an intuitive graphical interface with additional features like real-time monitoring and batch processing.
- CUDA/GPU Issues: Models will automatically fall back to CPU if CUDA is unavailable
- Memory Errors: Reduce batch size if you encounter out-of-memory errors
- Model Loading Fails: Ensure the backbone specified matches the one used during training
- Training Stops: Check that your data directories contain matching image and mask files
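The CPU fallback mentioned above is the standard PyTorch device check; a sketch of the pattern (not the application's exact code):

```python
import torch

# Pick CUDA when available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")
# model.to(device)  # the loaded model and input tensors are then moved to this device
```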
- GPU Training: Use CUDA-compatible GPU for faster training
- Batch Size: Start with batch size 4, reduce if memory issues occur
- Image Size: Models use 512x512 input size for optimal performance
- Data Quality: Ensure masks accurately represent damage areas
If you encounter issues:
- Check the training log for detailed error messages
- Verify your data directory structure and file formats
- Ensure all dependencies are correctly installed
- Try with a smaller dataset first to validate the setup
Based on the comprehensive damage detection system from Project_19 with enhanced GUI interface and additional features for improved usability.