124 changes: 61 additions & 63 deletions README.md
# SynAccel-Sentinel

**Status:** Active Development (Phase-1 Prototype)
**Last Updated:** November 2025

**SynAccel-Sentinel** is a modular, Python-based security detection framework designed to identify risks across cloud environments and network activity. Developed by the **SynAccel Cyber R&D** initiative as part of the larger **SynAccel** ecosystem, it explores how automation and feedback loops can enable **self-learning cloud defenses** that detect risks, respond intelligently, and adapt over time, with a focus on early-stage detection, analysis, and response logic for modern security threats.

---

## Current Focus

### Phase-1: Adaptive Response Loop (ARL)
Sentinel currently includes a working **Adaptive Response Loop**, which allows the system to learn from its own detections and automatically tighten its response policy when repeated risks occur.

**What’s implemented so far:**
- **Detectors** — Identify AWS misconfigurations (IAM and S3 modules).
- **Responders** — Perform actions or tagging based on the live policy.
- **Core (ARL)** — Tracks detections, updates 24-hour counters, and adjusts policy automatically.
- **Config + State** — JSON files store Sentinel’s current policy and adaptive memory.

```
Detectors → Reports → Core (ARL) → Updated Config → Responders
    ↑                                                    ↓
    └───────────────── 24h State Memory ─────────────────┘
```

**Example behavior**
- Multiple public S3 buckets in 24h → `auto_remediate_public = true`
- Repeated IAM users without MFA → `require_mfa = true`, later `disable_keys_on_nomfa = true`
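The policy-tightening step behind these examples can be sketched as follows. This is an illustrative sketch only: the threshold values, detection-type names, and the `tighten_policy` helper are hypothetical, not the shipped configuration or API.

```python
from collections import Counter

# Hypothetical thresholds -- illustrative only, not Sentinel's real config.
THRESHOLDS = {"s3_public_bucket": 3, "iam_user_no_mfa": 3}

def tighten_policy(policy, detections_24h):
    """Flip policy flags when a detection type repeats within the 24h window."""
    counts = Counter(d["type"] for d in detections_24h)
    if counts["s3_public_bucket"] >= THRESHOLDS["s3_public_bucket"]:
        policy["auto_remediate_public"] = True
    if counts["iam_user_no_mfa"] >= THRESHOLDS["iam_user_no_mfa"]:
        policy["require_mfa"] = True
    return policy

# Three public-bucket detections inside the window tighten the S3 policy.
policy = tighten_policy({}, [{"type": "s3_public_bucket"}] * 3)
```

In the real loop the updated policy is written back to `configs/sentinel_config.json`, which the responders read on their next run.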


**Run**
```bash
python src/core/sentinel_core.py
```

See the live Phase-1 demo of Sentinel’s Adaptive Response Loop:
[View Showcase →](docs/showcase_phase1_arl.md)

**Current Folder Structure**
```
SynAccel-Sentinel/
├── src/
│   ├── detectors/
│   │   ├── iam_exposure_detector.py
│   │   └── s3_public_access_detector.py
│   ├── responders/
│   │   ├── iam_responder.py
│   │   └── s3_responder.py
│   ├── core/
│   │   └── sentinel_core.py
│   ├── utils/
│   └── ...
├── configs/
│   └── sentinel_config.json
├── state/
│   └── sentinel_state.json
├── reports/
│   ├── sample_output/
│   └── ...
└── README.md
```

### Phase-2: Behavioral scoring and weighted risk aggregation (Planned)

---

## Current Features

### AWS Misconfiguration Detection
Located in: `/src/detectors/`

- IAM exposure detection
- Public S3 bucket detection
- Extensible detector-based design

---

### CloudTrail Anomaly Detection (Experimental)
Located in: `/src/cloudtrail_anomaly_detector/` — on `guardduty-dev` branch

- Pulls CloudTrail events using `boto3`
- Flags:
- Unusual API call frequency
- High-risk actions (Delete*, Put*, Attach*, etc.)
- Rule-driven via `anomaly_rules.json`
- Designed to integrate with Sentinel's core loop
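A minimal `anomaly_rules.json` consistent with the keys the detector reads might look like the following. The threshold and lookback values match the code's defaults; the prefix list simply follows the high-risk examples above and is illustrative.

```json
{
  "frequency_threshold": 50,
  "high_risk_prefixes": ["Delete", "Put", "Attach"],
  "lookback_hours": 24
}
```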

---

### GuardDuty Integration (Experimental)
Located in: `/src/guardduty_integration/` — on `guardduty-dev` branch

- Connects to AWS GuardDuty
- Pulls recent findings
- Parses and normalizes alerts for Sentinel
- Awaiting live AWS testing/activation
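The normalized alert shape can be previewed without AWS credentials, since the parsing step is pure Python. The sketch below mirrors the normalization done by `parse_findings` for a single finding; the sample values are made up.

```python
def parse_finding(f):
    # Mirrors the per-finding normalization in finding_parser.parse_findings.
    return {
        "id": f.get("Id"),
        "type": f.get("Type"),
        "severity": f.get("Severity"),
        "resource": f.get("Resource"),
        "created_at": f.get("CreatedAt"),
        "title": f.get("Title"),
        "description": f.get("Description"),
    }

# Hypothetical GuardDuty finding -- fields absent here come back as None.
sample = {
    "Id": "finding-001",
    "Type": "Recon:EC2/PortProbeUnprotectedPort",
    "Severity": 5.0,
    "Title": "Unprotected port probed",
}
alert = parse_finding(sample)
```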


---

## Development Flow

- Stable code remains on `main`
- Experimental modules are developed on the `guardduty-dev` branch
- Changes are merged into `main` via pull requests after testing

---

## Planned Next Steps

- Finish AWS account activation
- Enable GuardDuty + CloudTrail logging
- Live test anomaly detection modules
- Build Sentinel core execution loop
- Add logging/dashboard

---

## Goal

To create an **adaptive, AI-assisted security monitoring system** capable of detecting misconfigurations, abnormal behavior, and potential threats across cloud and network environments.



3 changes: 3 additions & 0 deletions src/cloudtrail_anomaly_detector/__init__.py
from .cloudtrail_anomaly_detector import CloudTrailAnomalyDetector

__all__ = ["CloudTrailAnomalyDetector"]
113 changes: 113 additions & 0 deletions src/cloudtrail_anomaly_detector/cloudtrail_anomaly_detector.py
import json
from collections import Counter
from datetime import datetime, timedelta

import boto3

class CloudTrailAnomalyDetector:
    """CloudTrailAnomalyDetector
    ------------------
    - Pulls recent CloudTrail events
    - Applies simple rule-based anomaly detection
    - Returns a list of anomaly dictionaries that Sentinel can consume
    """

def __init__(self, region_name="us-east-1", rules_path=None):
"""
:param region_name: AWS region for CloudTrail
:param rules_path: path to anomaly_rules.json
"""
self.client = boto3.client("cloudtrail", region_name=region_name)

# Load detection rules from json
if rules_path is None:
rules_path = "src/cloudtrail_anomaly_detector/anomaly_rules.json"

with open(rules_path, "r") as f:
self.rules = json.load(f)
self.frequency_threshold = self.rules.get("frequency_threshold", 50)
self.high_risk_prefixes = self.rules.get("high_risk_prefixes", [])
self.lookback_hours = self.rules.get("lookback_hours", 24)

    def _fetch_events(self):
        """Fetch CloudTrail events for the configured time window
        using lookup_events.
        """
        end_time = datetime.utcnow()
        start_time = end_time - timedelta(hours=self.lookback_hours)

        # lookup_events returns at most 50 events per call; paginate with
        # NextToken if the full lookback window is needed.
        response = self.client.lookup_events(
            StartTime=start_time,
            EndTime=end_time,
            MaxResults=50,
        )

        return response.get("Events", [])

    def _identify_frequency_anomalies(self, events):
        """Flag API calls whose count within the lookback window meets
        or exceeds the configured frequency threshold.

        Args:
            events: list of CloudTrail event dicts from _fetch_events().
        """
        event_names = [e["EventName"] for e in events]
        counts = Counter(event_names)

        anomalies = []

        for event_name, count in counts.items():
            if count >= self.frequency_threshold:
                anomalies.append({
                    "type": "API_FREQUENCY_ANOMALY",
                    "event_name": event_name,
                    "count": count,
                    "threshold": self.frequency_threshold
                })

        return anomalies

    def _identify_high_risk_actions(self, events):
        """Flag events whose API name starts with a high-risk prefix,
        e.g. Delete*, Put*, Attach*.
        """
        anomalies = []

        for e in events:
            event_name = e["EventName"]
            username = e.get("Username", "Unknown")
            event_time = e.get("EventTime")

            if any(event_name.startswith(prefix) for prefix in self.high_risk_prefixes):
anomalies.append({
"type": "HIGH_RISK_ACTION",
"event_name": event_name,
"user": username,
"time": event_time.isoformat() if hasattr(event_time, "isoformat") else str(event_time)
})

return anomalies

def run(self):
"""Main entry point.
1. Fetch events
2. Run detection logic
3. Return combined list of anomalies
"""

events = self._fetch_events()
if not events:
return []

anomalies = []
anomalies.extend(self._identify_frequency_anomalies(events))
anomalies.extend(self._identify_high_risk_actions(events))

return anomalies
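The frequency rule can be exercised offline with synthetic events, since the detection logic itself needs no AWS call. The event names and threshold below are made up for illustration:

```python
from collections import Counter

# Synthetic events standing in for CloudTrail lookup_events output.
events = [{"EventName": "DeleteBucket"}] * 5 + [{"EventName": "ListBuckets"}]
threshold = 5  # illustrative; the real value comes from anomaly_rules.json

counts = Counter(e["EventName"] for e in events)
anomalies = [
    {
        "type": "API_FREQUENCY_ANOMALY",
        "event_name": name,
        "count": n,
        "threshold": threshold,
    }
    for name, n in counts.items()
    if n >= threshold
]
```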






4 changes: 4 additions & 0 deletions src/guardduty_integration/__init__.py
from .guardduty_connector import GuardDutyConnector
from .finding_parser import parse_findings

__all__ = ["GuardDutyConnector", "parse_findings"]
21 changes: 21 additions & 0 deletions src/guardduty_integration/finding_parser.py
def parse_findings(findings):
"""
Convert raw GuardDuty finding objects into a simplified alert structure.

Input: list of findings from GuardDutyConnector.run()
Output: list of dicts with key fields Sentinel can act on.
"""
alerts = []

for f in findings:
alerts.append({
"id": f.get("Id"),
"type": f.get("Type"),
"severity": f.get("Severity"),
"resource": f.get("Resource"),
"created_at": f.get("CreatedAt"),
"title": f.get("Title"),
"description": f.get("Description")
})

return alerts
53 changes: 53 additions & 0 deletions src/guardduty_integration/guardduty_connector.py
import boto3


class GuardDutyConnector:
"""
GuardDutyConnector
------------------
- Connects to AWS GuardDuty
- Fetches recent findings
- Returns raw finding objects for further parsing
"""

def __init__(self, region_name="us-east-1"):
self.client = boto3.client("guardduty", region_name=region_name)
self.detector_id = self._get_detector_id()

def _get_detector_id(self):
"""
Get the first available GuardDuty detector ID.
"""
response = self.client.list_detectors()
detector_ids = response.get("DetectorIds", [])

if not detector_ids:
raise RuntimeError("No GuardDuty detectors found in this account/region.")

return detector_ids[0]

def get_findings(self, max_results=50):
"""
List and retrieve GuardDuty findings.
"""
finding_ids_response = self.client.list_findings(
DetectorId=self.detector_id,
MaxResults=max_results
)

finding_ids = finding_ids_response.get("FindingIds", [])
if not finding_ids:
return []

findings_response = self.client.get_findings(
DetectorId=self.detector_id,
FindingIds=finding_ids
)

return findings_response.get("Findings", [])

def run(self, max_results=50):
"""
Public entry point – fetch recent findings.
"""
return self.get_findings(max_results=max_results)
7 changes: 7 additions & 0 deletions todo.txt
# V Add IAM Exposure Detector (MFA and policy check)
# V Add IAM Responder (auto-tag or remediate risky users)


# In experimental 11/26

# V Add CloudTrail anomaly detector
# V Add GuardDuty Integration module


TODO in Adaptive Loop

# FINISHED 11/3