- 👀 Overview
- 📁 Directory
- 🔧 Installation
- 🧐 Evaluation
- 💻 Generation
| Directory | Description |
|---|---|
| `case_files/` | All files required for the test cases |
| `dataset/` | Workflow generation instructions and evaluation checks |
| `experiment_result/` | The final results of the evaluation |
| `assets/` | The images used in README.md |
| `nodes/` | The functional logic of each node |
| `prompts/` | System prompt and evaluation prompts |
| `yaml/` | The generated Dify workflow YAML files, available from https://huggingface.co/datasets/zjunlp/Chat2Workflow-Evaluation |
Conda virtual environments offer a lightweight and flexible setup. We recommend managing each project in its own conda environment.

```shell
conda create -n chat2workflow python=3.10
conda activate chat2workflow
pip install -r requirements.txt
```

Before installing Dify, make sure your machine meets the following minimum system requirements:

- CPU >= 2 cores
- RAM >= 4 GiB
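To check whether your machine meets these requirements, a quick sketch for Linux (on macOS the equivalents are `sysctl -n hw.ncpu` and `sysctl -n hw.memsize`):

```shell
# Report CPU core count and total RAM (Linux).
echo "CPU cores: $(nproc)"
echo "RAM (GiB): $(free -g | awk '/^Mem:/ {print $2}')"
```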
Obtain the specified version of Dify:

```shell
git clone https://github.com/langgenius/dify.git --branch 1.9.2 --depth 1
```

The easiest way to start the Dify server is through Docker Compose. Before running Dify with the following commands, make sure that Docker and Docker Compose are installed on your machine:

```shell
cd dify/docker
cp .env.example .env
# [Optional]: The default port is 80. You can change it ({MY_PORT}) here.
# If you change it, also update `config.yaml` to match.
perl -pi -e 's/^EXPOSE_NGINX_PORT=.*/EXPOSE_NGINX_PORT={MY_PORT}/' .env
docker compose up -d
```

Once the containers are running, you can access the Dify dashboard in your browser at http://localhost:{MY_PORT}/install and start the initialization process.
- Set up an admin account, and fill the same information into `config.yaml`:
  - email_address
  - user_name
  - password
- Install the following plugins at the specified versions in [Plugins]-[MarketPlace]:
  - langgenius/tongyi:0.1.13 (API key configured in [Settings]-[WORKSPACE]-[Model Provider])
  - langgenius/openai:0.2.7 (API key configured in [Settings]-[WORKSPACE]-[Model Provider])
  - wwwzhouhui/qwen_text2image:0.0.3 (API key configured in [Plugins])
  - langgenius/google:0.0.9 (API key configured in [Plugins])
  - bowenliang123/md_exporter:2.2.0
  - hjlarry/mermaid_converter:0.0.1
  - langgenius/echarts:0.0.1
In this setup, the LLM defaults to `tongyi:qwen3-vl-plus`, TTS (Text-to-Speech) to `openai:gpt-4o-mini-tts`, image generation to `qwen_text2image:qwen-image`, and the search engine to `google:SerpApi`. After the workflow is generated, you can modify these nodes as needed.
- Fill in the information in `config.yaml`:

```yaml
# GitHub REST API token for higher rate limits.
# Used for the GithubSummary task in the resolve stage.
github_rest_token: "github_xxx" # null or "github_xxx"
# Your admin account
user_name: "xxx"
email_address: "xxx@yyy.com"
password: "xxxxx"
# LLM API for workflow generation and evaluation
llm_api_key: "sk-xxxxxx"
base_url: "xxxxx"
evaluation_model: deepseek-chat
```

- Modify the `model_name` and then execute the scripts sequentially:
```shell
# Step 1: Generate the LLM responses.
# The results will be stored in `output/llm_response`.
bash bash_generation.sh
# Step 2: The pass stage of the evaluation.
# The results will be stored in `output/pass_eval` and `output/yaml`.
bash bash_pass_stage.sh
# Step 3: The resolve stage of the evaluation.
# The results will be stored in `output/resolve_eval`.
bash bash_resolve_stage.sh
```

- Fill in the information in `config.yaml`:
```yaml
# LLM API for workflow generation and evaluation
llm_api_key: "sk-xxxxxx"
base_url: "xxxxx"
```

- Run the Python script to start the workflow generation program:

```shell
chainlit run chat2workflow.py -w
```

Click on the returned link to start the interactive conversation. The results will be stored in `output/generated_workflows`.
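If the launch command fails, it can help to first confirm that `chainlit` is importable in the active environment. A small, hedged sketch (it only checks importability; it does not start the app):

```shell
# Check that chainlit is importable in the active Python environment.
if python3 -c "import importlib.util, sys; sys.exit(0 if importlib.util.find_spec('chainlit') else 1)"; then
  status="installed"
else
  status="missing"
fi
echo "chainlit: ${status}"   # if missing, run: pip install -r requirements.txt
```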
Finally, import the generated YAML file into the Dify platform for execution.

