<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Api-Failure on ErrorVault — Developer Error Code Dictionary</title><link>https://errorvault.dev/tags/api-failure/</link><description>Recent content in Api-Failure on ErrorVault — Developer Error Code Dictionary</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Tue, 15 Oct 2024 00:00:00 +0000</lastBuildDate><atom:link href="https://errorvault.dev/tags/api-failure/feed.xml" rel="self" type="application/rss+xml"/><item><title>Fix clw-llm-failure: OpenClaw LLM service connection or inference failure error</title><link>https://errorvault.dev/openclaw/openclaw-clw-llm-failure-llm-service-failure/</link><pubDate>Tue, 01 Oct 2024 00:00:00 +0000</pubDate><guid>https://errorvault.dev/openclaw/openclaw-clw-llm-failure-llm-service-failure/</guid><description>&lt;h2 id="1-symptoms">1. Symptoms&lt;/h2>
&lt;p>The &lt;code>clw-llm-failure&lt;/code> error in OpenClaw manifests during LLM inference calls. Common indicators:&lt;/p>
&lt;ul>
&lt;li>Console output: &lt;code>ERROR: clw-llm-failure: Failed to invoke LLM endpoint: [detailed message, e.g., 'HTTP 503 Service Unavailable']&lt;/code>&lt;/li>
&lt;li>Application halts on &lt;code>ClawLLM::infer()&lt;/code> or &lt;code>clw_llm_generate()&lt;/code>.&lt;/li>
&lt;li>HTTP status codes in logs: 429 (rate limit), 500 (internal server error), or connection refused.&lt;/li>
&lt;li>No response payload; partial traces show successful init but failure on &lt;code>/v1/completions&lt;/code> POST.&lt;/li>
&lt;/ul>
&lt;p>Example log snippet:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-mysql" data-lang="mysql">&lt;span style="display:flex;">&lt;span>[&lt;span style="color:#bd93f9">2024&lt;/span>&lt;span style="color:#ff79c6">-&lt;/span>&lt;span style="color:#bd93f9">10&lt;/span>&lt;span style="color:#ff79c6">-&lt;/span>&lt;span style="color:#bd93f9">01&lt;/span>T12:&lt;span style="color:#bd93f9">00&lt;/span>:&lt;span style="color:#bd93f9">00&lt;/span>Z] INFO: ClawLLM init &lt;span style="color:#ff79c6">with&lt;/span> model&lt;span style="color:#ff79c6">=&lt;/span>gpt&lt;span style="color:#ff79c6">-&lt;/span>&lt;span style="color:#bd93f9">4&lt;/span>o&lt;span style="color:#ff79c6">-&lt;/span>mini, endpoint&lt;span style="color:#ff79c6">=&lt;/span>https:&lt;span style="color:#ff79c6">//&lt;/span>api.openai.com&lt;span style="color:#ff79c6">/&lt;/span>v1
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>[&lt;span style="color:#bd93f9">2024&lt;/span>&lt;span style="color:#ff79c6">-&lt;/span>&lt;span style="color:#bd93f9">10&lt;/span>&lt;span style="color:#ff79c6">-&lt;/span>&lt;span style="color:#bd93f9">01&lt;/span>T12:&lt;span style="color:#bd93f9">00&lt;/span>:&lt;span style="color:#bd93f9">01&lt;/span>Z] ERROR: clw&lt;span style="color:#ff79c6">-&lt;/span>llm&lt;span style="color:#ff79c6">-&lt;/span>failure: Request &lt;span style="color:#ff79c6">to&lt;/span> LLM failed. Code: &lt;span style="color:#ff79c6">-&lt;/span>&lt;span style="color:#bd93f9">32603&lt;/span>, Message: Internal JSON&lt;span style="color:#ff79c6">-&lt;/span>RPC error.
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>[&lt;span style="color:#bd93f9">2024&lt;/span>&lt;span style="color:#ff79c6">-&lt;/span>&lt;span style="color:#bd93f9">10&lt;/span>&lt;span style="color:#ff79c6">-&lt;/span>&lt;span style="color:#bd93f9">01&lt;/span>T12:&lt;span style="color:#bd93f9">00&lt;/span>:&lt;span style="color:#bd93f9">01&lt;/span>Z] FATAL: Inference aborted. Retries exhausted: &lt;span style="color:#bd93f9">3&lt;/span>&lt;span style="color:#ff79c6">/&lt;/span>&lt;span style="color:#bd93f9">3&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#ff79c6">-&lt;/span>&lt;span style="color:#6272a4">--
&lt;p>Runtime symptoms include:&lt;/p>
&lt;ul>
&lt;li>Spikes in CPU usage during retries.&lt;/li>
&lt;li>Increased latency (&amp;gt;10s per call).&lt;/li>
&lt;li>Docker containers restarting when OpenClaw runs in containerized environments.&lt;/li>
&lt;/ul>
&lt;p>This error blocks all downstream LLM-dependent pipelines, such as RAG systems or chatbots built on OpenClaw.&lt;/p>
&lt;h2 id="2-root-cause">2. Root Cause&lt;/h2>
&lt;p>&lt;code>clw-llm-failure&lt;/code> triggers when OpenClaw&amp;#39;s ClawLLM client cannot complete an inference request. Core causes:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Network/Connectivity&lt;/strong>: Firewall blocks, DNS resolution failures, or proxy misconfiguration. OpenClaw uses HTTP/2 to reach LLM providers (OpenAI, Anthropic, etc.).&lt;/li>
&lt;li>&lt;strong>Authentication&lt;/strong>: An invalid or missing &lt;code>CLAW_LLM_API_KEY&lt;/code>, or expired tokens.&lt;/li>
&lt;li>&lt;strong>Provider-Side Issues&lt;/strong>: Model overload (e.g., GPT-4 rate limits), endpoint downtime, or unsupported model names.&lt;/li>
&lt;li>&lt;strong>Configuration Mismatch&lt;/strong>: An incorrect &lt;code>base_url&lt;/code> or &lt;code>model_id&lt;/code>, or a malformed payload (e.g., a missing &lt;code>temperature&lt;/code> field in the JSON).&lt;/li>
&lt;li>&lt;strong>Resource Exhaustion&lt;/strong>: Local timeouts (30s by default) or memory leaks in multi-threaded calls.&lt;/li>
&lt;li>&lt;strong>JSON-RPC Parsing&lt;/strong>: Malformed responses from the LLM API that ClawLLM v2.1+ does not handle.&lt;/li>
&lt;/ol>
&lt;p>Inspect traces with &lt;code>claw_debug --llm&lt;/code>. The root cause often correlates with the result of a &lt;code>curl -v&lt;/code> test against the endpoint, as in the sketch below.&lt;/p>
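&lt;p>A minimal connectivity probe, assuming the OpenAI-compatible &lt;code>/v1/completions&lt;/code> endpoint seen in the traces above (the payload fields here are illustrative, not OpenClaw&amp;#39;s exact request body; adjust them to your provider):&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash"># Verbose probe of the completions endpoint; watch for DNS, TLS, or auth errors.
# Illustrative payload only, not OpenClaw&amp;#39;s exact request body.
curl -v -X POST &amp;#34;https://api.openai.com/v1/completions&amp;#34; \
  -H &amp;#34;Authorization: Bearer $CLAW_LLM_API_KEY&amp;#34; \
  -H &amp;#34;Content-Type: application/json&amp;#34; \
  -d &amp;#39;{&amp;#34;model&amp;#34;: &amp;#34;gpt-4o-mini&amp;#34;, &amp;#34;prompt&amp;#34;: &amp;#34;ping&amp;#34;, &amp;#34;max_tokens&amp;#34;: 1}&amp;#39;
&lt;/code>&lt;/pre>&lt;/div>
&lt;p>A 200 response here despite a persisting &lt;code>clw-llm-failure&lt;/code> points at causes 4 through 6 rather than network or auth.&lt;/p>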
&lt;h2 id="3-step-by-step-fix">3. Step-by-Step Fix&lt;/h2>
&lt;p>Resolve &lt;code>clw-llm-failure&lt;/code> systematically: start with the environment basics, then escalate to configuration and code changes.&lt;/p>
&lt;h3 id="step-1-verify-environment">Step 1: Verify Environment&lt;/h3>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash"># Check API key and endpoint
echo $CLAW_LLM_API_KEY | wc -c  # &amp;gt;30 chars expected
curl -H &amp;#34;Authorization: Bearer $CLAW_LLM_API_KEY&amp;#34; https://api.openai.com/v1/models
&lt;/code>&lt;/pre>&lt;/div>
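&lt;p>If the key looks valid but calls still fail, rule out DNS and proxy issues (cause 1) with standard tooling; none of these commands are OpenClaw-specific:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash"># Confirm the endpoint hostname resolves
getent hosts api.openai.com
# Surface any proxy settings the HTTP client may inherit
env | grep -i _proxy
# Watch the verbose output for DNS, TLS, or &amp;#34;connection refused&amp;#34; failures
curl -sv https://api.openai.com/v1/models -o /dev/null
&lt;/code>&lt;/pre>&lt;/div>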
&lt;h3 id="step-2-update-clawllm-config">Step 2: Update ClawLLM Config&lt;/h3>
&lt;p>Set robust defaults in &lt;code>claw_config.toml&lt;/code>:&lt;/p>
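&lt;p>A minimal sketch, assuming a TOML layout built from the options named in this article (&lt;code>base_url&lt;/code>, &lt;code>model_id&lt;/code>, the 30s timeout, the 3-retry default); the exact key names are assumptions, so confirm them against the OpenClaw reference for your version:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash"># Hypothetical sketch: writes a claw_config.toml using assumed key names.
cat &amp;gt; claw_config.toml &amp;lt;&amp;lt;&amp;#39;EOF&amp;#39;
[llm]
base_url = &amp;#34;https://api.openai.com/v1&amp;#34;
model_id = &amp;#34;gpt-4o-mini&amp;#34;
timeout_seconds = 60   # headroom over the 30s default noted in cause 5
max_retries = 5        # the log above shows retries exhausted at 3/3
EOF
&lt;/code>&lt;/pre>&lt;/div></description></item></channel></rss>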