
Deep Time Series Attack (from PR #330)#361

Merged
fazelehh merged 5 commits into main from deep-time-series-attack-merged
Jan 27, 2026

Conversation

@fazelehh (Collaborator)

Summary

This PR incorporates the Deep Time Series (DTS) attack from PR #330 by @ZyzzGit, with:

  • Merge conflicts resolved with current main
  • Typo fix: "learing" → "learning"

Original PR Description

Adds the Deep Time Series (DTS) attack implementation, which leverages deep learning (instead of signal analysis) to attack time series models. N.B. it only supports the time series modality. The update includes:

  • The DTS attack
  • An API for training and inference of the deep membership classifier (used within DTS)
  • Two time series classification models (LSTM and InceptionTime)
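The core idea behind the deep membership classifier can be sketched roughly as follows. This is a hypothetical, minimal illustration and not the PR's actual code: a small network is trained on per-sample features derived from the target model's behavior (here, synthetic stand-ins for loss and confidence), with members labeled 1 and non-members 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-sample features: members tend to show lower loss / higher
# confidence on the target model than non-members (a stand-in for the real
# time-series features a DTS-style classifier would consume).
n = 500
member_feats = rng.normal(loc=[0.3, 0.9], scale=0.15, size=(n, 2))
nonmember_feats = rng.normal(loc=[0.9, 0.5], scale=0.15, size=(n, 2))
X = np.vstack([member_feats, nonmember_feats])
y = np.concatenate([np.ones(n), np.zeros(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Minimal one-hidden-layer MLP trained with full-batch gradient descent
# on binary cross-entropy.
W1 = rng.normal(scale=0.5, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

lr = 0.5
for _ in range(300):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()      # predicted membership probability
    grad_logit = (p - y)[:, None] / len(y)  # d(BCE)/d(logit)
    gW2 = h.T @ grad_logit
    gb2 = grad_logit.sum(axis=0)
    gh = grad_logit @ W2.T * (1 - h**2)   # backprop through tanh
    gW1 = X.T @ gh
    gb1 = gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5
accuracy = (pred == y).mean()
```

In the actual implementation the classifier operates on time series and uses the LSTM or InceptionTime architectures listed above; this sketch only shows the member-vs-non-member training setup.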

New Files

  • leakpro/attacks/mia_attacks/dts.py - Main DTS attack implementation (275 lines)
  • leakpro/attacks/utils/dts_mia_classifier/mi_classifier.py - MIC model API (158 lines)
  • leakpro/attacks/utils/dts_mia_classifier/models/inception_time.py - InceptionTime model (136 lines)
  • leakpro/attacks/utils/dts_mia_classifier/models/lstm_classifier.py - LSTM classifier (31 lines)

Modified Files

  • leakpro/attacks/mia_attacks/attack_factory_mia.py - Register "dts": AttackDTS
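The registration presumably follows the usual string-to-class factory pattern; a hypothetical sketch of that pattern (the registry and function names here are illustrative, not LeakPro's actual API — only the "dts" key and AttackDTS class name come from the PR):

```python
# Illustrative string-to-class attack registry, mirroring the mapping this
# PR adds in attack_factory_mia.py. Names other than "dts"/AttackDTS are
# invented for the sketch.
class AttackDTS:
    """Placeholder for the Deep Time Series attack class."""
    def __init__(self, config=None):
        self.config = config or {}

ATTACK_REGISTRY = {
    "dts": AttackDTS,  # the entry this PR registers
}

def create_attack(name, config=None):
    """Look up an attack by name and instantiate it with its config."""
    try:
        return ATTACK_REGISTRY[name](config)
    except KeyError:
        raise ValueError(f"Unknown attack: {name!r}") from None

attack = create_attack("dts", {"clf_data_fraction": 0.5})
```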

Test plan

  • CI tests pass
  • Manual testing with time series dataset

Closes #330


ZyzzGit and others added 5 commits on July 1, 2025 at 19:43
Resolve conflict in attack_factory_mia.py by keeping all attacks:
- base, ramia, multi_signal_lira (from main)
- dts (from this PR)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@henrikfo (Collaborator) left a comment


  • There are no tests. However, the code looks good, and there are no apparent issues with it. Can we open an issue for the missing tests to keep track of them, then implement them at a later stage?
  • No random seeds, for instance in `data_indices = np.random.choice(...)` or `train_test_split(...)` in dts.py, which will make reproducibility harder.
  • It might be appropriate to warn when users set `clf_data_fraction` too low. I think users will understand this without the warning, but it could be a good runtime failsafe while we are maturing the tool. A simple check after creating the MIC dataset would do: `if len(self.mic_data) < 10: logger.warning(f"MIC dataset only has {len(self.mic_data)} samples...")`

Similar to the first bullet, the latter points can be implemented at a later stage if we just keep track of them with an issue. Anyhow, the code looks great. If you have tested it on your end and it works, I'm approving the PR.
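The two suggestions above (fixed seeds and a small-dataset warning) could be combined like this. This is a hypothetical sketch, not the PR's code: `clf_data_fraction` and the under-10-samples check come from the review comment, while the function name and everything else are invented for illustration.

```python
import logging
import numpy as np

logger = logging.getLogger("dts_example")

def build_mic_dataset(n_samples, clf_data_fraction, seed=1234):
    """Select a reproducible subset of sample indices for the MIC dataset."""
    rng = np.random.default_rng(seed)  # fixed seed -> reproducible selection
    n_select = int(n_samples * clf_data_fraction)
    indices = rng.choice(n_samples, size=n_select, replace=False)
    # Failsafe suggested in the review: warn when the fraction leaves too
    # few samples to train the membership classifier meaningfully.
    if len(indices) < 10:
        logger.warning("MIC dataset only has %d samples; consider raising "
                       "clf_data_fraction.", len(indices))
    return indices

a = build_mic_dataset(1000, 0.2, seed=42)
b = build_mic_dataset(1000, 0.2, seed=42)
# same seed -> identical index selection
```

Passing the seed explicitly (rather than relying on global `np.random` state) also keeps runs independent of whatever other code has done with the RNG.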

@fazelehh fazelehh merged commit 4eba83c into main Jan 27, 2026
1 check passed
@fazelehh fazelehh mentioned this pull request Jan 27, 2026
