
Prompt placeholder error in vLLM #18

@lhbjyaqi12

Description


Hello, I ran this eval.sh:

CUDA_VISIBLE_DEVICES=0,1 python scripts/eval_vllm.py \
    --model_path "lingcco/fakeVLM" \
    --val_batch_size 16 \
    --workers 16 \
    --output_path results/fakevlm.json \
    --test_json_file "/home/work/xueyunqi/fakevlm/FakeVLM-main/test.json" \
    --data_base_test "/home/work/xueyunqi/fakevlm/FakeVLM-main/test"

After downloading the model from Hugging Face, I get the following error:

(EngineCore_DP0 pid=26617) RuntimeError: Expected there to be 1 prompt placeholders corresponding to 1 image items, but instead found 0 prompt placeholders! Make sure the implementation of _call_hf_processor and _get_mm_fields_config are consistent with each other.
(EngineCore_DP0 pid=26617) ERROR 01-07 16:00:16 [multiproc_executor.py:231] Worker proc VllmWorker-0 died unexpectedly, shutting down executor.
/root/miniconda3/lib/python3.13/multiprocessing/resource_tracker.py:324: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown: {'/psm_1120bbfb'}
warnings.warn(
Traceback (most recent call last):
File "/home/work/xueyunqi/FakeVLM-main/scripts/eval_vllm.py", line 211, in
main()
~~~~^^
File "/home/work/xueyunqi/FakeVLM-main/scripts/eval_vllm.py", line 196, in main
model = load_model(args)
File "/home/work/xueyunqi/FakeVLM-main/scripts/eval_vllm.py", line 70, in load_model
llm = LLM(model=args.model_path,
dtype="float16",
max_model_len=800,
tensor_parallel_size=torch.cuda.device_count()
)
File "/root/miniconda3/lib/python3.13/site-packages/vllm/entrypoints/llm.py", line 351, in init
self.llm_engine = LLMEngine.from_engine_args(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
engine_args=engine_args, usage_context=UsageContext.LLM_CLASS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/root/miniconda3/lib/python3.13/site-packages/vllm/v1/engine/llm_engine.py", line 183, in from_engine_args
return cls(
vllm_config=vllm_config,
...<4 lines>...
multiprocess_mode=enable_multiprocessing,
)
File "/root/miniconda3/lib/python3.13/site-packages/vllm/v1/engine/llm_engine.py", line 109, in init
self.engine_core = EngineCoreClient.make_client(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
multiprocess_mode=multiprocess_mode,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
log_stats=self.log_stats,
^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/root/miniconda3/lib/python3.13/site-packages/vllm/v1/engine/core_client.py", line 93, in make_client
return SyncMPClient(vllm_config, executor_class, log_stats)
File "/root/miniconda3/lib/python3.13/site-packages/vllm/v1/engine/core_client.py", line 648, in init
super().init(
~~~~~~~~~~~~~~~~^
asyncio_mode=False,
^^^^^^^^^^^^^^^^^^^
...<2 lines>...
log_stats=log_stats,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/root/miniconda3/lib/python3.13/site-packages/vllm/v1/engine/core_client.py", line 477, in init
with launch_core_engines(vllm_config, executor_class, log_stats) as (
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.13/contextlib.py", line 148, in exit
next(self.gen)
~~~~^^^^^^^^^^
File "/root/miniconda3/lib/python3.13/site-packages/vllm/v1/engine/utils.py", line 903, in launch_core_engines
wait_for_engine_startup(
~~~~~~~~~~~~~~~~~~~~~~~^
handshake_socket,
^^^^^^^^^^^^^^^^^
...<5 lines>...
coordinator.proc if coordinator else None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/root/miniconda3/lib/python3.13/site-packages/vllm/v1/engine/utils.py", line 960, in wait_for_engine_startup
raise RuntimeError(
...<3 lines>...
)
RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {'EngineCore_DP0': 1}
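
If it helps with diagnosing this: my understanding of the error is that vLLM's multimodal processor expects exactly one image placeholder token in the prompt text for every image item passed in, and here it finds zero. Below is a minimal sketch of how I would expect a direct multimodal call to look. The "<image>" placeholder and the USER/ASSISTANT template are my assumptions based on LLaVA-style models, not something taken from this repo, and the image path is just a placeholder.

from vllm import LLM, SamplingParams
from PIL import Image

# Same settings as load_model() uses in the traceback above.
llm = LLM(model="lingcco/fakeVLM", dtype="float16", max_model_len=800)

# Hypothetical test image; replace with a file under data_base_test.
image = Image.open("test/example.jpg")

# The prompt text must contain one image placeholder per image item,
# otherwise vLLM raises the "0 prompt placeholders" error shown above.
prompt = "USER: <image>\nIs this image real or fake? ASSISTANT:"

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)

If the prompt built inside eval_vllm.py already contains the placeholder, then a version mismatch between vLLM and the model's processor (_call_hf_processor vs _get_mm_fields_config, as the message says) seems like the other likely cause.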

Can you help explain and solve this? Thanks.
