Summary
A LoadProhibited panic occurs inside AsyncMiddlewareChain::_runChain → std::_Function_handler::_M_manager when setCloseClientOnQueueFull(true) force-closes an AsyncWebSocketClient while a concurrent HTTP request on the same AsyncClient is mid-flight in the middleware chain.
Crash signature
```
E async_ws: [<url>][<id>] Too many messages queued: closing connection
Guru Meditation Error: Core 0 panic'ed (LoadProhibited). Exception was unhandled.
EXCVADDR: 0x00000014  EXCCAUSE: 0x1c
PC: std::_Function_handler<void (), AsyncMiddlewareChain::_runChain(...)::{lambda()#1}>::_M_manager
  (inlined) std_function.h:161 _M_create<...>
  (inlined) std_function.h:215 _M_init_functor<...>
  (inlined) std_function.h:198/282 _M_manager

Backtrace frames #0..#2:
  AsyncMiddlewareChain::_runChain lambda _M_manager
  std::_List_iterator<AsyncMiddleware*>::operator++(int)
  (inlined) Middleware.cpp:67 operator()
  AsyncMiddlewareChain::_runChain at Middleware.cpp:56
```
EXCVADDR = 0x14 ≈ offset of _M_manager in std::function's erased storage → the next std::function the outer chain is about to re-invoke has been torn down (zero-initialized _M_manager).
Mechanism
1. The same AsyncClient holds both an active HTTP request and a WS upgrade.
2. The server publishes WS frames faster than the slow client drains them, so _messageQueue.size() >= WS_MAX_QUEUED_MESSAGES.
3. AsyncWebSocketClient::_queueMessage with closeWhenFull == true (the default) calls _client->close() synchronously — the comment at AsyncWebSocket.cpp:489–494 itself documents the reentrant chain: _client->close() shall call the callback function _onDisconnect() … _onDisconnect() → _handleDisconnect() → ~AsyncWebSocketClient().
4. Meanwhile, an HTTP handler on the same client is inside the nested chain at WebRequest.cpp:851–865:

   ```cpp
   _server->_runChain(this, [this]() {
     if (_handler) {
       _handler->_runChain(this, [this]() {
         _handler->handleRequest(this);
       });
     }
   });
   ```

5. _runChain (Middleware.cpp:56–70) captures its next std::function and its iterator by reference into the per-step lambda. The synchronous teardown initiated from (3) frees request/handler state while the outer next() in step (4) is still about to invoke the inner chain. The next evaluation reads a torn std::function (_M_manager = nullptr) and faults at offset 0x14. A standalone sketch of this pattern follows the list.
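The failure mode can be shown off-device. The sketch below is not the library's code: the class and member names are illustrative stand-ins for the chain/iterator/next pattern in step 5, and the delete stands in for the synchronous teardown from step 3.

```cpp
#include <cstdio>
#include <functional>
#include <list>

// Minimal stand-in for a middleware chain whose per-step lambda captures the
// iterator and the owning object; not ESPAsyncWebServer code.
struct Chain {
  std::list<std::function<void(std::function<void()>)>> middlewares;

  void run(std::function<void()> finalizer) {
    auto it = middlewares.begin();
    std::function<void()> next;   // lives in this stack frame
    next = [&]() {                // captures it, finalizer, next and `this` by reference
      if (it == middlewares.end())
        return finalizer();
      auto &mw = *it;
      ++it;
      mw(next);                   // each middleware decides when to call next()
    };
    next();
  }
};

Chain *g_chain = nullptr;

int main() {
  g_chain = new Chain();
  g_chain->middlewares.push_back([](std::function<void()> next) {
    std::puts("middleware 1: synchronous teardown (stands in for _client->close())");
    delete g_chain;   // frees the list `it` points into and the object the lambda uses
    next();           // re-enters the chain lambda, which now reads freed memory
  });
  g_chain->middlewares.push_back([](std::function<void()> next) {
    std::puts("middleware 2: only reachable through freed state");
    next();
  });
  g_chain->run([] { std::puts("handler"); });
  return 0;
}
```

Built with -fsanitize=address this shows up as a heap-use-after-free on the second entry into next(); on the ESP32 the analogous read is what surfaces as the LoadProhibited panic above.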
Is this already fixed?
No. Adjacent work exists, but it does not cover this path:
- Related changes that do not touch _runChain or _queueMessage teardown.
- A _client mutex in _queueMessage/_onDisconnect: it protects the WS side against concurrent access but does not prevent the synchronous close() → ~AsyncWebSocketClient() chain from unwinding while an HTTP handler on the same AsyncClient holds stack frames in _runChain.

None of the above protects the HTTP middleware path against synchronous client teardown initiated from _queueMessage.
Reproduction
ws.onEvent(...) sets client->setCloseClientOnQueueFull(true).
- Publish small WS frames via
ws.textAll(...) at a rate the client cannot drain (e.g. status poll every 250 ms to a slow Wi-Fi client).
- Issue concurrent HTTP requests (e.g.
/api/status) against the same server.
- When the WS queue fills and the lib force-closes the client, any request whose nested
_runChain is still unwinding faults.
Observed on: ESPAsyncWebServer v3.10.3, AsyncTCP v3.4.10, arduino-esp32 3.3.x / IDF 5.5.4, ESP32-S3.
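For reference, a minimal sketch of that setup. The endpoint name, payload, and 250 ms period are just the values used above; the HTTP side is exercised by polling /api/status from another machine.

```cpp
#include <ESPAsyncWebServer.h>

AsyncWebServer server(80);
AsyncWebSocket ws("/ws");

void setup() {
  // Wi-Fi setup omitted for brevity.
  ws.onEvent([](AsyncWebSocket *s, AsyncWebSocketClient *client, AwsEventType type,
                void *arg, uint8_t *data, size_t len) {
    if (type == WS_EVT_CONNECT) {
      client->setCloseClientOnQueueFull(true);  // the default force-close behaviour
    }
  });
  server.addHandler(&ws);

  // Concurrent HTTP traffic on the same server (poll this from another machine).
  server.on("/api/status", HTTP_GET, [](AsyncWebServerRequest *request) {
    request->send(200, "application/json", "{\"ok\":true}");
  });

  server.begin();
}

void loop() {
  // Publish faster than a slow Wi-Fi client can drain; once the queue hits
  // WS_MAX_QUEUED_MESSAGES the library force-closes the client and the crash
  // window described under "Mechanism" opens.
  static uint32_t last = 0;
  if (millis() - last >= 250) {
    last = millis();
    ws.textAll("{\"status\":\"ok\"}");
  }
}
```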
Suggested fix
Don't tear the client down synchronously from _queueMessage. Defer to cleanupClients() / the next async-tcp event tick — the same direction #424 proposed for disconnect cleanup — so that no HTTP handler running on the same AsyncClient can be unwound while its stack frames are still live.
A minimal first-order fix inside _queueMessage: mark the client for async close (e.g. set a flag + notify cleanup task) instead of invoking _client->close() directly. cleanupClients() already runs on a safe context.
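A rough sketch of that direction follows. It is illustrative only: _markedForClose and _closeWhenFull are invented names, the _queueMessage argument list is elided, and the cleanupClients() body only paraphrases where the deferred close would go.

```cpp
// Inside AsyncWebSocketClient: when the queue is full, only record the intent.
void AsyncWebSocketClient::_queueMessage(/* existing message arguments */) {
  if (_messageQueue.size() >= WS_MAX_QUEUED_MESSAGES) {
    if (_closeWhenFull) {
      _markedForClose = true;  // hypothetical flag: remember the intent, nothing more
    }
    return;                    // drop the frame either way, no _client->close() here
  }
  // ... existing enqueue + send path unchanged ...
}

// cleanupClients() is already called from a context where no HTTP handler for
// this client is mid-flight, so closing here cannot unwind _runChain frames.
void AsyncWebSocket::cleanupClients(uint16_t maxClients) {
  for (auto &client : _clients) {
    if (client._markedForClose) {
      client.close();          // teardown happens outside any handler's stack
    }
  }
  // ... existing disconnected-client / maxClients trimming unchanged ...
}
```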
Workaround for users hitting this today
Call client->setCloseClientOnQueueFull(false) at WS_EVT_CONNECT: excess frames are then silently dropped instead of the client being destroyed. This requires producer-side rate limiting to avoid unbounded queue growth.
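In user code the workaround looks roughly like this, assuming the same global ws instance as in the reproduction sketch; the 250 ms throttle is only an example value.

```cpp
#include <ESPAsyncWebServer.h>

extern AsyncWebSocket ws;  // the ws instance from the reproduction sketch

// Call once at setup time: drop excess frames instead of force-closing.
void installQueueFullWorkaround() {
  ws.onEvent([](AsyncWebSocket *s, AsyncWebSocketClient *client, AwsEventType type,
                void *arg, uint8_t *data, size_t len) {
    if (type == WS_EVT_CONNECT) {
      client->setCloseClientOnQueueFull(false);
    }
  });
}

// Producer side: bound the publish rate so per-client queues cannot grow
// without limit once dropping is silent.
void publishStatusThrottled() {
  static uint32_t last = 0;
  if (millis() - last < 250) {   // example rate limit, tune to payload size
    return;
  }
  last = millis();
  ws.textAll("{\"status\":\"ok\"}");
}
```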