This project implements a Convolutional Neural Network (CNN) to detect Distributed Denial of Service (DDoS) attacks. Using the CIC-IDS2017 dataset (225,746 rows), we convert 1D numerical logs into 2D grayscale images.
By treating network flows as spatial patterns, the CNN can identify malicious "burst" signatures that are often invisible to traditional linear models.
The fundamental challenge is converting 1D CSV data into a 2D image format suitable for a CNN.
To capture a "snapshot" of activity, we group 150 consecutive rows into a single sample window.
- Security Logic: If any row within the 150-row window contains a DDoS signature, the entire image is labeled as DDoS. This ensures the model learns to identify the specific malicious signal within the noise.
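The windowing and labeling rule above can be sketched as follows; `make_windows` is an illustrative helper name, not part of the project code:

```python
import numpy as np

def make_windows(features, labels, window=150):
    """Group consecutive rows into fixed-size windows.

    A window is labeled DDoS (1) if ANY row inside it is malicious,
    so the model learns to spot the malicious signal within the noise.
    """
    n = len(features) // window
    X, y = [], []
    for i in range(n):
        sl = slice(i * window, (i + 1) * window)
        X.append(features[sl])
        y.append(int(labels[sl].max() > 0))  # "any row" labeling rule
    return np.stack(X), np.array(y)
```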
Features are standardized using Z-score normalization so that high-variance network spikes remain visible, then rescaled to the [0, 1] range and reshaped to 128×128×1 for the CNN.
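A minimal sketch of this standardization step, assuming each window is a rows × features array (the final resize to 128×128, e.g. via `cv2.resize`, is omitted here):

```python
import numpy as np

def standardize_window(window):
    # Z-score standardization per feature column, so that
    # high-variance spikes are not drowned out by large-scale features
    mu = window.mean(axis=0)
    sigma = window.std(axis=0) + 1e-8  # avoid division by zero
    z = (window - mu) / sigma
    # Min-max rescale to [0, 1] for image-style CNN input
    z_min, z_max = z.min(), z.max()
    return (z - z_min) / (z_max - z_min + 1e-8)
```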
- Status: Model was stuck in a local minimum, predicting only the majority class. The heatmap showed a uniform "fog," indicating a failure to extract features.
- Status: Implemented Dropout and lowered the learning rate. Accuracy improved, but the model showed "Edge Bias," focusing on zero-padding rather than the network flow data.
- Adjustments: Switched to a Functional API architecture, implemented Batch Normalization, used Z-Score Standardization, and applied Class Weights to force the model to prioritize DDoS detection.
- Outcome: The model achieved near-perfect metrics across both classes. The Grad-CAM heatmap now shows precise "hotspots" proving feature identification.
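The class weighting from the adjustments above can be computed with scikit-learn's `compute_class_weight`; the label array `y_train` below is illustrative, not the project's actual split:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Illustrative imbalanced labels: 0 = Benign, 1 = DDoS
y_train = np.array([0] * 900 + [1] * 100)

weights = compute_class_weight(class_weight='balanced',
                               classes=np.array([0, 1]),
                               y=y_train)
class_weight = {0: weights[0], 1: weights[1]}
# Pass to model.fit(..., class_weight=class_weight) so that errors on
# the rarer DDoS class are penalized more heavily than Benign errors.
```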
```python
from tensorflow.keras import layers, models

# Functional API for Keras 3 compatibility
inputs = layers.Input(shape=(128, 128, 1))
x = layers.Conv2D(32, (3, 3), padding='same', activation='relu')(inputs)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D((2, 2))(x)

# Target layer for Grad-CAM interpretability
target_layer = layers.Conv2D(64, (3, 3), padding='same', activation='relu', name="target_conv")(x)
x = layers.BatchNormalization()(target_layer)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Flatten()(x)
x = layers.Dense(64, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = models.Model(inputs=inputs, outputs=outputs)
```

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| Benign | 0.99 | 0.99 | 0.99 | 126 |
| DDoS | 0.99 | 0.99 | 0.99 | 175 |
| Accuracy | | | 0.99 | 301 |
| Macro Avg | 0.99 | 0.99 | 0.99 | 301 |
| Weighted Avg | 0.99 | 0.99 | 0.99 | 301 |
To interpret the model's decision, we compute the importance weight α_k of each feature map A^k as the global average of the gradient of the class score y:

α_k = (1/Z) · Σ_i Σ_j ∂y / ∂A^k_ij

The visual heatmap L is then a ReLU-activated weighted sum of the feature maps:

L = ReLU(Σ_k α_k · A^k)
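The Grad-CAM weighting described above can be sketched in NumPy, assuming the feature maps and their gradients for the `target_conv` layer have already been extracted for one sample (e.g., with `tf.GradientTape`):

```python
import numpy as np

def grad_cam_heatmap(feature_maps, gradients):
    """feature_maps, gradients: arrays of shape (H, W, K) for one sample."""
    # alpha_k: global average of the gradients over spatial positions
    alphas = gradients.mean(axis=(0, 1))
    # ReLU of the alpha-weighted sum of feature maps
    heatmap = np.maximum((feature_maps * alphas).sum(axis=-1), 0.0)
    if heatmap.max() > 0:
        heatmap /= heatmap.max()  # normalize to [0, 1] for display
    return heatmap
```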
- Training: Run `ddos_cnn_vision.py`. Training uses the Adam optimizer and Binary Cross-Entropy loss.
- Analysis: View the generated Grad-CAM heatmap to identify which network features contributed to the DDoS classification.
```shell
pip install tensorflow pandas numpy matplotlib opencv-python Pillow scikit-learn
```

