
Conversation

@yiliu30 (Contributor) commented Jan 12, 2026

Signed-off-by: yiliu30 <yi4.liu@intel.com>
Copilot AI review requested due to automatic review settings on January 12, 2026 at 10:07.
@yiliu30 changed the title from "allow set flashinfer workspace" to "[vllm-ext] allows setting flashinfer workspace" on Jan 12, 2026.
Copilot AI left a comment

Pull request overview

This PR adds support for configuring the FlashInfer workspace buffer size. It introduces a new environment variable, VLLM_AR_FLASHINFER_WORKSPACE_BUFFER_SIZE, that lets users override the default workspace buffer size.

Changes:

  • Added new environment variable VLLM_AR_FLASHINFER_WORKSPACE_BUFFER_SIZE to control the FlashInfer workspace buffer size (see the usage sketch after this list)
  • Implemented automatic update of FlashInfer's workspace buffer size at module initialization when the environment variable is set
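
For illustration, the override might be opted into like this (a minimal sketch: the variable name comes from this PR; the 512 MiB value and the set-before-import flow are assumptions for illustration only):

import os

# Assumed usage: set the variable before the vllm_ext module initializes,
# since the PR applies the override at module-init time.
# 512 MiB is an arbitrary example value, not a recommendation.
os.environ["VLLM_AR_FLASHINFER_WORKSPACE_BUFFER_SIZE"] = str(512 * 1024 * 1024)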

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File descriptions:

  • auto_round_extension/vllm_ext/envs_ext.py: Registers the new VLLM_AR_FLASHINFER_WORKSPACE_BUFFER_SIZE environment variable with a default value of 0
  • auto_round_extension/vllm_ext/vllm_oot_patches.py: Implements the workspace buffer size update logic and applies it during module initialization if the environment variable is set
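
Taken together, the two files describe a small flow: read the variable (default 0, i.e. keep vLLM's default), and when it is set, patch the backend's module-level constant at init time. Below is a minimal sketch of that flow, assuming the names quoted in this review; the exact registration mechanics in envs_ext.py are an assumption:

import os

# envs_ext.py side (assumed shape): register the variable with a
# default of 0, meaning "not set, keep vLLM's default".
VLLM_AR_FLASHINFER_WORKSPACE_BUFFER_SIZE = int(
    os.getenv("VLLM_AR_FLASHINFER_WORKSPACE_BUFFER_SIZE", "0")
)

# vllm_oot_patches.py side (names from the quoted snippets): patch the
# module-level constant vLLM's FlashInfer backend reads for its buffer.
def update_flashinfer_workspace_buffer_size(new_size: int):
    import vllm.v1.attention.backends.flashinfer as flashinfer_backend

    flashinfer_backend.WORKSPACE_BUFFER_SIZE = new_size

# Applied once at module initialization, only if the user set the variable.
if VLLM_AR_FLASHINFER_WORKSPACE_BUFFER_SIZE > 0:
    update_flashinfer_workspace_buffer_size(VLLM_AR_FLASHINFER_WORKSPACE_BUFFER_SIZE)

Patching the constant only takes effect if it happens before the backend allocates its workspace buffer, which is presumably why the PR applies it at module initialization.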

Comment on lines +23 to +25
import vllm.v1.attention.backends.flashinfer as flashinfer_backend

flashinfer_backend.WORKSPACE_BUFFER_SIZE = new_size

Copilot AI commented Jan 12, 2026

This code executes at module import time and may fail if the flashinfer backend module is not available or hasn't been initialized. Consider wrapping this in a try-except block to handle ImportError gracefully, or defer the update until runtime when the backend is actually needed.

Suggested change:

-import vllm.v1.attention.backends.flashinfer as flashinfer_backend
-flashinfer_backend.WORKSPACE_BUFFER_SIZE = new_size
+try:
+    import vllm.v1.attention.backends.flashinfer as flashinfer_backend
+except ImportError:
+    logger.warning(
+        "FlashInfer backend is not available; "
+        "skipping update of WORKSPACE_BUFFER_SIZE to %s",
+        new_size,
+    )
+    return
+except Exception:
+    # Catch any unexpected errors during backend import or initialization
+    logger.exception(
+        "Failed to import FlashInfer backend while trying to update "
+        "WORKSPACE_BUFFER_SIZE to %s",
+        new_size,
+    )
+    return
+flashinfer_backend.WORKSPACE_BUFFER_SIZE = new_size
+logger.info("Updated FlashInfer WORKSPACE_BUFFER_SIZE to %s", new_size)

def update_flashinfer_workspace_buffer_size(new_size: int):
    import vllm.v1.attention.backends.flashinfer as flashinfer_backend

Copilot AI commented Jan 12, 2026

If the flashinfer backend module cannot be imported or WORKSPACE_BUFFER_SIZE doesn't exist, this will raise an unclear error. Add error handling with a descriptive message explaining that FlashInfer backend is required for this feature.

Suggested change:

-import vllm.v1.attention.backends.flashinfer as flashinfer_backend
+try:
+    import vllm.v1.attention.backends.flashinfer as flashinfer_backend
+except ImportError as e:
+    raise RuntimeError(
+        "FlashInfer backend is required to set WORKSPACE_BUFFER_SIZE, but it "
+        "could not be imported. Make sure vLLM is installed and built with "
+        "FlashInfer support before setting VLLM_AR_FLASHINFER_WORKSPACE_BUFFER_SIZE."
+    ) from e
+if not hasattr(flashinfer_backend, "WORKSPACE_BUFFER_SIZE"):
+    raise RuntimeError(
+        "FlashInfer backend does not define WORKSPACE_BUFFER_SIZE. This feature "
+        "requires a FlashInfer-enabled version of vLLM that exposes this setting."
+    )
