Feedback & performance suggestions after a weekend with LivePilot #36
hansdesmedt
started this conversation in
Ideas
Hey! First of all, really impressive project. I installed LivePilot this weekend and spent a good amount of time exploring it. The depth of what you've built is remarkable: the M4L bridge with deep LOM access, the spectral analyzer, the composition engine, the device atlas with 5000+ devices. It's clear a huge amount of thought and effort went into this. I got some genuinely useful results out of it, especially the analyzer and the rack introspection.
That said, I ran into a consistent issue: response times are really slow, and I hit timeouts fairly regularly.
I was curious about the root cause, so I pulled the codebase into Claude and did a bit of an audit. Here's what came out of that analysis, sharing in case it's useful.
Main bottleneck: 462 tools in the context window
The biggest factor seems to be the sheer number of MCP tools. With 462 tool definitions, roughly ~50K tokens of tool schemas get loaded into the LLM's context on every interaction. That has three knock-on effects: the model spends significantly more time on tool selection (picking from 462 options vs. say 30-40), the enlarged context adds latency to every single request, even trivial ones, and combined with the skill/reference docs (~1.1 MB of markdown, ~200 KB of YAML affordances) the effective prompt becomes very large.
For comparison, a leaner setup with 25-40 well-designed tools would sit around ~5K tokens, roughly a 10x reduction.
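For transparency on where those numbers come from: they're back-of-envelope estimates using the common ~4 characters/token rule of thumb, not a real tokenizer. Something like this quick script (stand-in tool schemas, not LivePilot's actual ones) reproduces the kind of arithmetic involved:

```javascript
// Rough estimate of the prompt overhead contributed by tool schemas.
// ~4 chars per token is a rule of thumb, not an exact tokenizer.
const estimateTokens = (text) => Math.ceil(text.length / 4);

// Sum the serialized size of every tool definition advertised to the model.
const promptOverhead = (tools) =>
  tools.reduce((sum, tool) => sum + estimateTokens(JSON.stringify(tool)), 0);

// Stand-in schemas purely for illustration:
const tools = [
  { name: "get_track", description: "Read a track's state",
    inputSchema: { type: "object", properties: { track_index: { type: "integer" } } } },
  { name: "set_param", description: "Set a device parameter",
    inputSchema: { type: "object", properties: { path: { type: "string" }, value: { type: "number" } } } },
];

console.log(`~${promptOverhead(tools)} tokens for ${tools.length} tools`);
```

Run against a JSON dump of the real 462 definitions, the same function is what lands in the ~50K range.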
Potential improvements
A few ideas that came out of the analysis:
1. Tool consolidation / tiered loading
Split tools into a core set (~30-40 tools: tracks, clips, devices, transport, browser) that's always loaded, and extended domains (composition engine, sound design, splice, wonder mode, etc.) that load on demand via a meta-tool like
`enable_domain("composition")`. This alone would probably have the biggest impact on perceived speed.

2. Deep parameter set via M4L bridge
The bridge already supports reading parameters deep in chains via `walk_rack` and `get_params`, but there's no corresponding write operation. Adding a `set_param_deep` command to `livepilot_bridge.js` would be relatively straightforward (cursor goto + set value) and would close an important gap. Right now you can introspect nested rack chains but can't modify parameters inside them.

3. Lazy skill loading
Instead of loading all skill markdown and reference docs into context up front, only pull in the relevant skill when a specific domain is actually invoked.
4. Response caching with invalidation
Session info, track info, and device parameters could be cached server-side and invalidated only on mutations. This would speed up the common pattern of read-then-act.
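To make idea 1 a bit more concrete, here's a rough, SDK-agnostic sketch of tiered loading; the domain and tool names are made up for illustration, not LivePilot's actual ones:

```javascript
// Always-advertised core set (illustrative names, not LivePilot's real tools).
const CORE_TOOLS = ["get_tracks", "get_clips", "get_devices", "transport", "browse"];

// Extended domains, kept out of the context until explicitly requested.
const DOMAIN_TOOLS = {
  composition: ["generate_progression", "generate_melody"],
  sound_design: ["design_patch", "morph_preset"],
};

const enabledDomains = new Set();

// Meta-tool: the model calls this once, then the server re-advertises tools.
// A real MCP server would follow up with a tools/list_changed notification
// so the client refetches the (now larger) tool list.
function enableDomain(name) {
  if (!(name in DOMAIN_TOOLS)) throw new Error(`unknown domain: ${name}`);
  enabledDomains.add(name);
}

// The tool list actually exposed to the client at any given moment.
function activeTools() {
  const extras = [...enabledDomains].flatMap((d) => DOMAIN_TOOLS[d]);
  return [...CORE_TOOLS, "enable_domain", ...extras];
}
```

The nice property is that the default prompt only ever pays for the core set plus one meta-tool; a session that never touches composition never loads those schemas.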
Offer
I'd be happy to put together a PR for some of these if you think they'd be valuable, particularly the tool consolidation and the `set_param_deep` bridge command, since those seem like the highest-impact, most contained changes. Let me know what you think and what would fit best with your roadmap.

Thanks again for building this. It's the most complete Ableton MCP integration out there by a wide margin.