Add rate limiting to movefeed API#4

Open
mmiermans wants to merge 2 commits into master from feature/rate-limiting

Conversation

@mmiermans

Summary

  • Add file-based rate limiting to the movefeed API endpoint
  • Limits to 100 requests per minute per client IP
  • Returns 429 with retry_after when limit is exceeded

Test plan

  • Verify rate limiting kicks in after 100 requests
  • Verify rate limit resets after 60 seconds
  • Verify normal requests work under the limit

Limits requests to 100 per minute per client IP using
file-based tracking in /tmp.
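A minimal sketch of the approach this PR describes (the paths, field names, and control flow here are assumptions for illustration, not the PR's actual code):

```php
<?php
// Hypothetical sketch of file-based rate limiting as described above.
$limit  = 100;               // max requests per window
$window = 60;                // window length in seconds
$dir    = '/tmp/rate_limits';

if (!is_dir($dir) && !mkdir($dir, 0700, true) && !is_dir($dir)) {
    http_response_code(503); // can't track state; fail closed
    exit();
}

// Hash the client IP so the filename is filesystem-safe (no path traversal).
$ip        = $_SERVER['REMOTE_ADDR'];
$rate_file = $dir . '/' . hash('sha256', $ip);

$fp = fopen($rate_file, 'c+');
if ($fp === false) {
    http_response_code(503);
    exit();
}
flock($fp, LOCK_EX);         // serialize concurrent requests per IP

$now  = time();
$data = json_decode(stream_get_contents($fp), true);
if (!is_array($data) || $now - $data['window_start'] >= $window) {
    $data = ['window_start' => $now, 'count' => 0];
}
$data['count']++;

if ($data['count'] > $limit) {
    $retry = $data['window_start'] + $window - $now;
    http_response_code(429);
    header('Content-Type: application/json');
    header('Retry-After: ' . $retry);
    echo json_encode(['error' => 'rate_limited', 'retry_after' => $retry]);
} else {
    rewind($fp);
    ftruncate($fp, 0);       // avoid stale trailing bytes when JSON shrinks
    fwrite($fp, json_encode($data));
}
flock($fp, LOCK_UN);
fclose($fp);
```

This sketch already folds in the fixes the review below asks for (fopen check, ftruncate); the actual PR may differ in structure and naming.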

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@claude

claude bot commented Feb 20, 2026

Claude finished @mmiermans's task in 2m 11s.


Code Review


The rate limiting approach is reasonable for a low-traffic PHP app: file-based windowed counters, proper use of REMOTE_ADDR, SHA-256 hashing for the filename, and exclusive locking to prevent race conditions. A few issues worth addressing:


Bug: Missing ftruncate() causes corrupted state after window resets

File: webclient/public/getmovefeed.php:43-45

rewind($fp);
fwrite($fp, json_encode($rate_data));
fclose($fp);

rewind() moves the file pointer back to position 0, but doesn't truncate the file. When the window resets, count drops from (e.g.) 99 back to 1, making the JSON string 1 byte shorter than before. The stale trailing byte remains, producing invalid JSON like:

{"window_start":1740000001,"count":1}}
                                     ^ stale byte

On the next request, json_decode returns null, the rate data is silently treated as a new session (count resets to 1), and the tracking becomes unreliable. This will happen for any IP that has made ≥ 10 requests when their window expires.

Fix: add ftruncate($fp, 0) after rewinding.
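A sketch of the corrected write path (variable names follow the snippet quoted above):

```php
// Truncate before writing so a shorter JSON string does not leave
// stale bytes from the previous, longer one at the end of the file.
rewind($fp);
ftruncate($fp, 0);
fwrite($fp, json_encode($rate_data));
fclose($fp);
```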


Bug: Missing check for fopen() failure

File: webclient/public/getmovefeed.php:17-18

$fp = fopen($rate_file, 'c+');
flock($fp, LOCK_EX);

If fopen fails (disk full, permission error, etc.), it returns false. Passing false to flock emits a PHP warning and the script continues without locking or tracking, effectively bypassing rate limiting for that request.
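One way to handle this (whether to fail open or fail closed is a policy choice; this sketch fails closed):

```php
$fp = fopen($rate_file, 'c+');
if ($fp === false) {
    // Can't open the tracking file; fail closed rather than
    // silently skipping rate limiting for this request.
    http_response_code(503);
    exit();
}
flock($fp, LOCK_EX);
```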


Minor: No file cleanup — stale files accumulate indefinitely

Each unique IP creates a file in /tmp/rate_limits that is never deleted. On a public-facing endpoint this could accumulate thousands of files. Consider periodically deleting files older than a few minutes, or using a system cron job, or switching to APCu/Redis for shared state if available.
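If staying with the file-based approach, an opportunistic sweep is one low-effort option (the 1% sampling rate and 10-minute threshold are illustrative assumptions):

```php
// Hypothetical cleanup: on roughly 1% of requests, delete tracking
// files that haven't been modified for 10 minutes (i.e. well past
// any active 60-second window).
if (mt_rand(1, 100) === 1) {
    foreach (glob('/tmp/rate_limits/*') as $file) {
        if (time() - filemtime($file) > 600) {
            @unlink($file); // suppress warning if another request won the race
        }
    }
}
```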


Minor: Missing Content-Type on the 503 response

File: webclient/public/getmovefeed.php:8-10

http_response_code(503);
exit();

The 503 path doesn't set a Content-Type header and returns an empty body. Clients (and the browser) may render it inconsistently. At minimum, matching the application/json used elsewhere would be consistent.
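A minimal version of that change (the JSON body shape is an assumption; only the Content-Type header is the point):

```php
http_response_code(503);
header('Content-Type: application/json');
echo json_encode(['error' => 'service_unavailable']);
exit();
```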


Summary: The locking strategy and IP handling are solid. The ftruncate omission is the most important fix — it will cause silent rate-limit resets for active IPs after each window expiry. The fopen failure check is a safety net worth adding. The rest are low-priority.

- Use REMOTE_ADDR instead of X-Forwarded-For to prevent IP spoofing
- Hash IP for filename to prevent path traversal
- Add file locking to prevent race conditions
- Handle mkdir failure and json_decode null

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>