Conversation
Critical fixes for runtime errors:
- Fixed ChatMessage object attribute access in services.py (line 81)
- Changed message.get() to message.content for Pydantic objects
- Removed incorrect await from search_semantic_cache() call
- Re-enabled DLP middleware after debugging

The API now responds correctly:
✅ POST /v1/chat/completions returns proper OpenAI-compatible responses
✅ All middleware functioning correctly
✅ Token usage tracking working
✅ Container startup and runtime working

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
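The attribute-access fix can be illustrated in isolation. The sketch below uses a stdlib dataclass as a stand-in for the project's Pydantic `ChatMessage` model (field names are assumptions), since both behave the same way here: instances expose attributes, not dict-style methods.

```python
from dataclasses import dataclass

# Stand-in for the Pydantic ChatMessage model (field names assumed);
# like a Pydantic object, a dataclass instance has attributes, not dict methods.
@dataclass
class ChatMessage:
    role: str
    content: str

message = ChatMessage(role="user", content="Hello")

# Dict-style access fails on an object: there is no .get() method,
# which is the runtime error the commit fixes.
failed = False
try:
    message.get("content")
except AttributeError:
    failed = True

# Attribute access is the correct form.
assert message.content == "Hello"
```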
Pull Request Overview
The PR fixes runtime errors in process_chat_completion by correcting message attribute access and adjusting the asynchronous cache lookup.
- Use direct attribute access for the last message’s content.
- Remove `await` on `search_semantic_cache` call.
- Re-enable DLP middleware.
Comments suppressed due to low confidence (1)
app/services.py:85
- Removing `await` here may assign a coroutine instead of its result if `search_semantic_cache` is still async. Either ensure the function is now synchronous or reintroduce `await`.
semantic_redis_key = search_semantic_cache(prompt_text)
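The reviewer's concern can be demonstrated with a minimal sketch; `search_semantic_cache` below is a hypothetical async stand-in, not the project's implementation.

```python
import asyncio

async def search_semantic_cache(prompt_text: str):
    # Hypothetical async lookup; returns a cache key for the prompt.
    return f"cache:{prompt_text}"

async def handler():
    # Without await, the call yields a coroutine object, not the key;
    # any truthiness check on it would always pass, silently breaking caching.
    maybe_key = search_semantic_cache("hello")
    assert asyncio.iscoroutine(maybe_key)
    maybe_key.close()  # avoid the "coroutine was never awaited" warning

    # With await, the actual result is assigned.
    key = await search_semantic_cache("hello")
    assert key == "cache:hello"

asyncio.run(handler())
```

This is why the review flags the change: dropping `await` is only safe if the function was also converted to a plain synchronous function.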
  # Get the last user message content
- prompt_text = request.messages[-1].get("content", "")
+ last_message = request.messages[-1]
+ prompt_text = last_message.content if hasattr(last_message, 'content') else ""
[nitpick] Consider using getattr(last_message, 'content', '') for more concise default extraction.
Suggested change
- prompt_text = last_message.content if hasattr(last_message, 'content') else ""
+ prompt_text = getattr(last_message, 'content', '')
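The two forms are equivalent here; a quick check using a stdlib `SimpleNamespace` as a stand-in for the message object:

```python
from types import SimpleNamespace

last_message = SimpleNamespace(content="Hi there")
empty_message = SimpleNamespace()  # no 'content' attribute

# Verbose form from the diff:
verbose = last_message.content if hasattr(last_message, 'content') else ""
# Suggested concise form:
concise = getattr(last_message, 'content', '')

assert verbose == concise == "Hi there"
# Both fall back to the default when the attribute is missing:
assert getattr(empty_message, 'content', '') == ""
```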