Guangyu Wei*, Ke Han*, Yueming Lyu†, Yu Luo, Yue Jiang, Caifeng Shan, Nicu Sebe
(*Equal contribution, †Corresponding author)
We will publicly release all implementation details, including code, datasets, and infrastructure, to enable result verification and contribute to the research community.
If you have any questions, please open an issue or contact
wgy3129@stu.ouc.edu.cn.
[2025.12.27] We have fully open-sourced the Agent code (Step 1).
[2025.12.22] We have fully open-sourced the expert architecture and training code (Step 3).
Previous Releases
[2025.10.17] We have publicly released the EmergingNews-25 dataset. To use it, complete the Application to Use the EmergingNews-25 from EASE for Emerging Fake News Detection via [this form](https://forms.office.com/r/mJRTtJR2Qf); upon approval, the dataset will be available for download and use.
The dataset is structured as follows:
```
├── data
│   ├── news
│   │   └── news.json
│   └── imgs
│       ├── 0.png
│       ├── 1.jpg
│       ├── 2.png
│       └── ...          # {id}.jpg/png/webp
```
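For orientation, here is a minimal loading sketch (assuming, consistent with the formats below, that each record in `news.json` carries an `id` matching its image filename; `DATA_ROOT` is yours to set):

```python
import json
from pathlib import Path

DATA_ROOT = Path("data")  # adjust to where you placed the dataset

# Load all news records from the single JSON file.
with open(DATA_ROOT / "news" / "news.json", encoding="utf-8") as f:
    records = json.load(f)

# Image filenames follow {id}.jpg/png/webp, so glob for whichever
# extension a given record actually uses.
for record in records:
    matches = list((DATA_ROOT / "imgs").glob(f"{record['id']}.*"))
    image_path = matches[0] if matches else None
    print(record["id"], image_path)
```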
- Clone the repository:
```bash
git clone https://github.com/wgyhhhh/EASE.git
cd EASE/Expert
```
- Install dependencies:
```bash
conda create --name EASE python=3.10
conda activate EASE
pip install -r requirements.txt
```
After downloading the pretrained models from their links (bert-base-uncased and chinese-bert-wwm-ext), please configure the local `bert_path` in your scripts.
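For instance, if the scripts load the checkpoint through Hugging Face `transformers` (the usual pattern for these BERT variants; treat this as an illustration, not the repo's exact loading code), `bert_path` just needs to point at the downloaded directory:

```python
from transformers import BertModel, BertTokenizer

# Local directory holding the downloaded checkpoint
# (bert-base-uncased for English data, chinese-bert-wwm-ext for Chinese).
bert_path = "/path/to/bert-base-uncased"

tokenizer = BertTokenizer.from_pretrained(bert_path)
model = BertModel.from_pretrained(bert_path)

# Quick sanity check that the local weights load and run.
inputs = tokenizer("A quick sanity check.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```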
- Agent
Please first register API keys for the OpenAI API and Serper API, and fill them into the ./Agent/config/api_keys.yaml file. Currently, the Agent integrates only OpenAI's GPT-series models; support for other models will be added in future updates, and we encourage everyone to submit Pull Requests.
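The exact key names inside `api_keys.yaml` are defined by the repo; as an illustration only, here is a sketch that reads a YAML file of an assumed shape and exports the keys as environment variables (both field names and the env-var convention for Serper are hypothetical):

```python
import os
import yaml  # pip install pyyaml

# NOTE: "openai_api_key" / "serper_api_key" are assumed field names --
# check ./Agent/config/api_keys.yaml for the actual ones.
with open("Agent/config/api_keys.yaml", encoding="utf-8") as f:
    keys = yaml.safe_load(f)

os.environ["OPENAI_API_KEY"] = keys["openai_api_key"]
os.environ["SERPER_API_KEY"] = keys["serper_api_key"]
```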
The dataset should be organized in the following format and placed in the /data/ directory, divided into train.json, val.json, and test.json:
```json
[
    {
        "id": 0,
        "content": "News",
        "label": "real"   # real or fake
    },
    {
        "id": 1,
        "content": "News",
        ...
    },
    ...
]
```
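Before launching the Agent, the splits can be sanity-checked against this schema with a short script like the following (a sketch only; paths follow the layout above):

```python
import json

# Verify that every record in each split matches the expected schema.
for split in ("train", "val", "test"):
    with open(f"data/{split}.json", encoding="utf-8") as f:
        items = json.load(f)
    for item in items:
        assert isinstance(item["id"], int), f"bad id in {split}"
        assert isinstance(item["content"], str), f"bad content in {split}"
        assert item["label"] in ("real", "fake"), f"bad label in {split}"
    print(f"{split}.json: {len(items)} records OK")
```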
```bash
cd EASE/Agent
conda create --name Agent python=3.10
conda activate Agent
pip install -r requirements.txt
python run.py
```
After processing, the data is transformed into the following format, ready for the expert model:
```json
[
    {
        "id": 0,
        "content": "News",
        "label": "real",   # real or fake
        "sentiment": "Sentiment analysis from Agent",
        "reasoning": "Reasoning knowledge from Agent",
        "evidence": "External evidence from Agent",
        "sentiment_pred": "fake",   # prediction for the news based on this knowledge
        "reasoning_pred": "real",
        "evidence_pred": "real",
        "sentiment_acc": 0,   # whether it matches the label (1 if it matches, otherwise 0)
        "reasoning_acc": 1,
        "evidence_acc": 1
    },
    {
        "id": 1,
        "content": "News",
        ...
    },
    ...
]
```
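The `*_acc` fields make it easy to measure how often each knowledge source alone predicts the correct label; for example (the processed filename below is a placeholder for whatever `run.py` writes out):

```python
import json

# "processed.json" is a placeholder -- use the file produced by run.py.
with open("data/processed.json", encoding="utf-8") as f:
    items = json.load(f)

for expert in ("sentiment", "reasoning", "evidence"):
    # Each *_acc field is 1 when that expert's prediction matches the label.
    accuracy = sum(item[f"{expert}_acc"] for item in items) / len(items)
    print(f"{expert}: {accuracy:.3f}")
```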
- Run
```bash
# For training on the FakeNewsDetection dataset
bash train.sh
```
After obtaining the trained weights (saved in `results/EASE_{expert_type}_{dataset}/checkpoints/parameter_{expert_type}_{dataset}.pkl`), simply update the corresponding paths in `test.sh` to run batch testing on the news dataset.
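To sanity-check a trained checkpoint before batch testing, you can try loading it directly (a sketch assuming the `.pkl` file is PyTorch-serialized, which the training pipeline suggests but the repo ultimately defines; the expert type and dataset name below are examples):

```python
import torch

# Fill in your actual expert_type and dataset to match the saved path.
ckpt_path = "results/EASE_sentiment_FakeNewsDetection/checkpoints/parameter_sentiment_FakeNewsDetection.pkl"

state = torch.load(ckpt_path, map_location="cpu")
print(type(state))  # typically a state_dict (an OrderedDict of tensors)
```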
```bash
bash test.sh
```
Please cite the paper as follows if you use the data or code from EASE:
```bibtex
@misc{wei2025realtimefakenewsdetection,
      title={Towards Real-Time Fake News Detection under Evidence Scarcity},
      author={Guangyu Wei and Ke Han and Yueming Lyu and Yu Luo and Yue Jiang and Caifeng Shan and Nicu Sebe},
      year={2025},
      eprint={2510.11277},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
}
```
