LLM sees scripts but does not actually call them

A Home Assistant user running a local LLM (7B/14B models on 12GB of VRAM) through LiteLLM and the Local OpenAI LLM integration (installed via HACS) reports that the model recognizes scripts (including a Music Assistant Blueprint script) and outputs the JSON for a tool call (llm_script_for_music_assistant_voice_requests), but it does not actually execute the script, even though tool-calling capability is enabled and the script is exposed.
What happened
A Home Assistant user configured a fully local LLM on a desktop with 12GB of VRAM and tested 7B and 14B models. They routed requests via LiteLLM and connected those endpoints to the Local OpenAI LLM integration (installed via HACS) in Home Assistant. The assistant can see and reference scripts (including a Music Assistant script configured with a Blueprint) and produces the JSON it would use to call the script, but it does not actually run the script when asked to do so.
Technical details
In a sample interaction, the assistant states that it will use the llm_script_for_music_assistant_voice_requests function and returns the JSON payload:
```json
{
  "name": "llm_script_for_music_assistant_voice_requests",
  "arguments": {
    "media_type": "artist",
    "artist": "Taylor Swift",
    "album": "",
    "media_id": "Taylor Swift",
    "media_description": "music by Taylor Swift",
    "area": ["Office"],
    "media_player": [],
    "shuffle": false
  }
}
```
When prompted to "actually do the tool call," the assistant replies that it does not have the capability to directly execute tool calls or interact with the home automation system and suggests copying the JSON into Home Assistant to execute it.
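The JSON the model produces maps naturally onto a Home Assistant service call, since Home Assistant exposes scripts as services in the `script` domain. A minimal sketch of that mapping, assuming the service name matches the tool name (the endpoint path follows the Home Assistant REST API; the tool-call dict is abbreviated from the example above):

```python
import json

# Tool-call payload as emitted by the model (abbreviated from the post).
tool_call = {
    "name": "llm_script_for_music_assistant_voice_requests",
    "arguments": {
        "media_type": "artist",
        "artist": "Taylor Swift",
        "media_id": "Taylor Swift",
        "area": ["Office"],
        "shuffle": False,
    },
}

# Home Assistant exposes scripts as services in the "script" domain, so
# executing this call means POSTing the arguments as service data to the
# REST API (an authorized client would send this with a bearer token).
endpoint = f"/api/services/script/{tool_call['name']}"
service_data = json.dumps(tool_call["arguments"])

print(endpoint)
print(service_data)
```

This is the step the user is effectively being asked to do by hand when the assistant suggests "copying the JSON into Home Assistant" — the integration is supposed to perform it automatically.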
Components involved
- Home Assistant
- LiteLLM
- Local OpenAI LLM integration (installed via HACS)
- Music Assistant (Blueprint script referenced)
Observed technical risk vectors
- The LLM can produce actionable JSON for automation but may not be configured or permitted to perform tool calls end-to-end, which can confuse users expecting direct execution.
- Misalignment between the assistant's tool-calling ability and Home Assistant's execution path (integration/configuration or permission settings) can produce a false sense of capability.
Context and significance
This is a common user-facing gap when integrating local LLMs with home automation: the model can generate the correct call structure but either lacks permission, tooling, or a configured runtime path to execute the call. The post confirms the user has enabled the 'tools' calling capability and exposed the script, yet execution still does not occur, indicating the problem may be in how the integration handles tool invocations or in the assistant's tool-execution endpoint.
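In an OpenAI-compatible pipeline, that runtime path is a client-side loop: the model only returns a `tool_calls` payload, and the integration must parse it, invoke the matching Home Assistant service, and feed the result back. A minimal sketch of that dispatch step, assuming the response follows the OpenAI chat-completions message shape (`execute` is a hypothetical callback standing in for the integration's service-call code):

```python
import json

def dispatch_tool_calls(message, execute):
    """Run each tool call the model emitted and collect tool-role replies.

    The model itself never executes anything: if this step is skipped or
    misconfigured, the user sees the JSON but nothing actually runs.
    """
    results = []
    for call in message.get("tool_calls", []):
        fn = call["function"]
        args = json.loads(fn["arguments"])  # arguments arrive as a JSON string
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": execute(fn["name"], args),
        })
    return results

# Usage with a fabricated response message and a stub executor:
message = {
    "tool_calls": [{
        "id": "call_1",
        "function": {
            "name": "llm_script_for_music_assistant_voice_requests",
            "arguments": '{"artist": "Taylor Swift", "area": ["Office"]}',
        },
    }]
}
replies = dispatch_tool_calls(message, lambda name, args: f"ran {name}")
print(replies[0]["content"])
```

In the OpenAI format, a `finish_reason` of `tool_calls` signals that this loop should run; a model or integration that instead renders the call as plain assistant text, as described in the post, never reaches the execution step.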
What's next
Troubleshooting steps to try include verifying the integration endpoint routing from LiteLLM to the Local OpenAI integration, checking Home Assistant logs for rejected/failed service calls, and confirming the exact exposure/configuration of the Blueprint-based script within Home Assistant.
What to watch
If others report similar behavior, it could point to a gap in the Local OpenAI LLM integration's tool-call handling or a common misconfiguration pattern when using local LLM endpoints via LiteLLM.
Bottom line
The local LLM correctly recognizes and formats the script call but will not execute it in the user's environment; the issue appears to be in execution/permission/configuration rather than the model's ability to generate the call JSON.
Why it matters
For practitioners integrating local LLMs into home automation, being able to distinguish between generation of actionable instructions and actual execution is critical to reliable automation and user expectations.
Scoring Rationale
User-facing integration issue that affects execution of automation but is limited to configuration/integration paths rather than broader safety or systemic model failure.