LLM-powered Python bytecode decompiler. Uses litellm to support OpenAI, Anthropic, Google, Ollama, and other providers. Handles .pyc files from any Python version via xdis.
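As a rough illustration of the first step of the pipeline, here is how a `.pyc` file can be read and disassembled with the standard library alone. This is only a sketch: pychd itself uses xdis so it can handle bytecode from any Python version, not just the running interpreter's.

```python
import dis, io, marshal, os, py_compile, tempfile

# Compile a tiny module so the sketch is self-contained.
src = "x = 1 + 2\n"
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "m.py")
    with open(path, "w") as f:
        f.write(src)
    pyc_path = py_compile.compile(path, cfile=os.path.join(tmp, "m.pyc"))
    with open(pyc_path, "rb") as f:
        f.read(16)                      # skip the 16-byte .pyc header (Python 3.7+)
        code = marshal.loads(f.read())  # the module's code object

buf = io.StringIO()
dis.dis(code, file=buf)
disassembly = buf.getvalue()
print(disassembly)
```

The disassembly text produced this way is what gets handed to the model for reconstruction.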
```sh
just build   # build the Docker image
just lint    # run linters (ruff, pyrefly)
just test    # run tests
just ci      # lint + test
just shell   # interactive shell inside the container
```

Install with pip:

```sh
pip install pychd
```

Compile Python source to bytecode:

```sh
pychd compile <directory | file>
just compile example/python/01_example_variables.py
```

Decompile a `.pyc` file:

```sh
pychd decompile <pyc-file> [-m MODEL] [-o OUTPUT]
```

```sh
# Default model (ollama/deepseek-r1)
just decompile example/__pycache__/01_example_variables.cpython-314.pyc

# Specify a model
just decompile example/__pycache__/01_example_variables.cpython-314.pyc gpt-4o
```

Supported `-m` values include any model supported by litellm:
| Provider | Example |
|---|---|
| OpenAI | gpt-4o |
| Anthropic | claude-sonnet-4-20250514 |
| Google | gemini/gemini-2.0-flash |
| Ollama (local) | ollama/deepseek-r1, ollama/llama3 |
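Under litellm's unified API, the provider prefix in the model string selects the backend. A decompile request might be assembled roughly like this; the helper and prompt text are hypothetical, not pychd's actual implementation:

```python
def build_request(disassembly: str, model: str = "ollama/deepseek-r1") -> dict:
    """Return keyword arguments for litellm.completion() (hypothetical helper)."""
    return {
        "model": model,  # provider prefix (e.g. "ollama/") selects the backend
        "messages": [
            {"role": "system",
             "content": "Reconstruct Python source from this disassembly."},
            {"role": "user", "content": disassembly},
        ],
    }

req = build_request("LOAD_CONST 0 (42)\nRETURN_VALUE", model="gpt-4o")
print(req["model"])

# The actual call needs credentials or a running Ollama server:
# from litellm import completion
# response = completion(**req)
```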
Large disassembly is automatically split into token-safe chunks when it exceeds the model's context window.
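The chunking step can be sketched as a line-boundary splitter under a token budget. This is an assumption-laden sketch: it approximates token counts as characters divided by four, whereas a real implementation would use the model's tokenizer.

```python
def chunk_disassembly(text: str, max_tokens: int = 4000) -> list[str]:
    """Split disassembly into chunks that each fit a token budget.

    Splits on line boundaries so no instruction is cut in half.
    Token counts are approximated as len(line) // 4.
    """
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for line in text.splitlines(keepends=True):
        line_tokens = max(1, len(line) // 4)
        if current and current_tokens + line_tokens > max_tokens:
            chunks.append("".join(current))
            current, current_tokens = [], 0
        current.append(line)
        current_tokens += line_tokens
    if current:
        chunks.append("".join(current))
    return chunks

chunks = chunk_disassembly("LOAD_CONST 0 (None)\n" * 5000, max_tokens=1000)
print(len(chunks))
```

Splitting on line boundaries matters here: cutting a disassembly instruction in half would give the model malformed input.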
Compare original source against decompiled output using AST comparison:
```sh
pychd validate <original> <decompiled> [-v]
just validate example/python/ example/decompiled/
```

All development tasks run inside Docker via `just`; no local Python installation is required.
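The AST comparison behind the validation step can be sketched with the standard library's `ast` module. This is only an illustration; pychd's actual normalization rules may differ:

```python
import ast

def ast_equal(original_src: str, decompiled_src: str) -> bool:
    """Compare two sources by AST rather than text.

    ast.dump with include_attributes=False ignores line/column info,
    so formatting differences don't affect the comparison.
    """
    a = ast.parse(original_src)
    b = ast.parse(decompiled_src)
    return ast.dump(a, include_attributes=False) == ast.dump(b, include_attributes=False)

print(ast_equal("x=1+2", "x = (1 + 2)"))   # → True: same AST, different formatting
print(ast_equal("x=1", "x=2"))             # → False: different constants
```

Comparing ASTs rather than raw text is what lets the validator accept decompiled output whose whitespace and parenthesization differ from the original.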
```sh
just fix     # auto-fix lint issues
just test    # run pytest
just shell   # drop into the container
```

Example Python source files are in `example/python/`. Pre-generated decompiled output is in `example/decompiled/`.