LinkFinder

About

LinkFinder discovers endpoints and their parameters in JavaScript files, helping penetration testers and bug hunters uncover hidden API surface on targets they are testing. It uses jsbeautifier to normalize minified source before scanning with a regular expression that finds:

  • Full URLs (https://example.com/path, http://10.0.0.1/api)
  • Absolute and relative paths (/path, ../path, ./path)
  • Relative URLs with extensions (api/create.php, vendor.bundle.min.js)
  • REST endpoints without extensions (api/v1/users)
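To illustrate the approach, here is a heavily simplified sketch of a quoted-string endpoint regex in Python — this is not LinkFinder's actual pattern (the real one in linkfinder.py covers many more cases), just a minimal model of the four categories above:

```python
import re

# Simplified illustration of an endpoint-matching regex. The real
# pattern in linkfinder.py is considerably more thorough; this sketch
# only covers the four categories listed above.
ENDPOINT_RE = re.compile(
    r"""["'](
        https?://[^"'\s]+                    # full URLs
        | \.{0,2}/[^"'\s]*                   # absolute and relative paths
        | [\w\-/]+\.(?:js|php|json)[^"'\s]*  # relative URLs with extensions
        | api/[\w\-/]+                       # extensionless REST endpoints
    )["']""",
    re.VERBOSE,
)

js = (
    'fetch("https://example.com/api/v1"); import("./assets/app.js"); '
    'post("/api/v1/users"); get("api/v1/users");'
)
found = [m.group(1) for m in ENDPOINT_RE.finditer(js)]
print(found)
```

Running the regex over beautified (rather than minified) source is what makes the quoted-string heuristic reliable, since jsbeautifier restores the quoting and spacing that minifiers strip.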

A Chrome extension is also available.

Installation

Requires Python 3.9+.

git clone https://github.com/GerbenJavado/LinkFinder.git
cd LinkFinder
pip install .

After installation, the linkfinder command is available on your $PATH.

To install with development tools (ruff, mypy, pytest):

pip install '.[dev]'

Usage

Usage: linkfinder [OPTIONS]

Options:
  -i, --input <target>     Target to scan: a URL, local file, glob pattern
                           (e.g. '*.js'), or Burp Suite XML export (combine
                           with --burp).  [required]
  -o, --output <path>      Output file path. Use 'cli' to print results to
                           stdout instead.  [default: output.html]
  -r, --regex <pattern>    Only include endpoints that match this regex
                           (e.g. ^/api/).
  -d, --domain             Recursively crawl all JS files discovered on the
                           target domain.
  -b, --burp               Treat input as a Burp Suite 'Save selected items'
                           XML export.
  -c, --cookies <cookies>  Cookie header value to include with each HTTP
                           request.
  -t, --timeout <seconds>  HTTP request timeout in seconds.  [default: 10]
  -k, --insecure           Disable TLS certificate verification (useful for
                           self-signed certs).
      --no-browser         Write HTML output without opening it in the browser.
  -l, --list               Treat input as a newline-delimited file of URLs
                           to scan.
  -q, --quiet              Suppress progress and informational messages
                           (useful for scripting).
  -h, --help               Show this message and exit.

Examples

Scan a single JS file and save results as HTML:

linkfinder -i https://example.com/app.js -o results.html

Fast stdout output (skips jsbeautifier):

linkfinder -i https://example.com/app.js -o cli

Crawl an entire domain and follow all discovered JS files:

linkfinder -i https://example.com -d

Burp Suite export (Target tab → right-click → Save selected items):

linkfinder -i burp_export.xml -b

Scan a folder of JS files, filter to /api/ endpoints only:

linkfinder -i '*.js' -r ^/api/ -o cli

Scan a newline-delimited URL list (e.g. output from gau or waybackurls):

linkfinder -i urls.txt -l -o cli

Target with a self-signed certificate:

linkfinder -i https://internal.corp/app.js -k -o cli
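Scans of several files or URL lists often report the same endpoint more than once. Assuming cli output prints one endpoint per line (as this sketch does), a few lines of Python can deduplicate and sort the results before triage:

```python
# Deduplicate and sort endpoints from cli output, which this sketch
# assumes is one endpoint per line, e.g. captured with:
#   linkfinder -i urls.txt -l -q -o cli > endpoints.txt
# `sample` stands in for the captured file contents.
sample = """\
/api/v1/users
https://example.com/api
/api/v1/users
./assets/app.js
"""

endpoints = sorted({line.strip() for line in sample.splitlines() if line.strip()})
for endpoint in endpoints:
    print(endpoint)
```

The same effect is available from a shell with `sort -u endpoints.txt`; the Python form is handy when you want to filter or group results further in a script.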

Docker

Build the image:

docker build -t linkfinder .

Run against a URL, saving HTML output to the current directory:

docker run --rm -v "$(pwd)":/linkfinder/output linkfinder \
  -i https://example.com/app.js -o /linkfinder/output/output.html

Use /linkfinder/output as the output path so the file is accessible after the container exits.

Development

Run the test suite:

pytest

Lint and format:

ruff check .
ruff format .

Type checking:

mypy linkfinder.py

License

Published under the MIT License.

Thanks to @jackhcable for feedback and @edoverflow for making this project cleaner.
