Error · 7 reports
Fix ServiceUnavailableError in LiteLLM
✅ Solution
ServiceUnavailableError usually indicates that the LLM provider is overloaded or temporarily unavailable. Implement retry logic with exponential backoff around the failing calls, either by passing `num_retries` to `litellm.completion` or by wrapping the call with a retry decorator such as tenacity's `@retry`. Check the LLM provider's status page to confirm whether there is an outage before retrying, and consider raising your rate limits or falling back to a different model if the issue persists.
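A minimal sketch of the decorator approach, assuming the `tenacity` package is installed; the model name and prompt are placeholders, and `litellm.ServiceUnavailableError` is LiteLLM's mapped exception for 503-style provider failures:

```python
# Exponential-backoff retry around a LiteLLM call (sketch).
# Assumes tenacity is installed; model and prompt are placeholders.
import litellm
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


@retry(
    retry=retry_if_exception_type(litellm.ServiceUnavailableError),
    wait=wait_exponential(multiplier=1, min=2, max=60),  # waits 2s, 4s, 8s, ... capped at 60s
    stop=stop_after_attempt(5),  # give up after 5 attempts
    reraise=True,  # re-raise the original exception after the final attempt
)
def call_llm(prompt: str):
    return litellm.completion(
        model="gpt-4o",  # placeholder: any litellm-supported model
        messages=[{"role": "user", "content": prompt}],
    )


response = call_llm("Hello!")
print(response.choices[0].message.content)
```

This sketch only retries on ServiceUnavailableError; broaden the `retry_if_exception_type` filter if you also want to retry rate-limit or timeout errors. If you don't need custom backoff behavior, `litellm.completion`'s built-in `num_retries` argument may be enough on its own.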
Related Issues
Real GitHub issues where developers encountered this error:
[Bug]: Limit the "stop" and "stop_sequences" arguments based on model · Jan 11, 2026
[Bug]: Sonar Deep Research Streaming Issue · Jan 6, 2026
[Bug]: Inconsistent HTTP status code (503 vs 400) when using stream=true and max_tokens=-1 with Vertex AI (Gemini) · Jan 6, 2026
[Bug]: Structured Output + Tool Calling is not working with Gemini in Open AI Agents SDK · Jan 4, 2026
[Bug]: Vertex AI returns INVALID_ARGUMENT when using multiple tool types (enterprise_web_search, url_context, etc.) · Dec 30, 2025
Timeline
First reported: Dec 26, 2025
Last reported: Jan 11, 2026