fix Docker container 500 Internal Server Error #27

Merged
ozanunal0 merged 1 commit into main from dev on Jul 8, 2025

Conversation

@ozanunal0 (Owner)

Critical fixes for runtime errors:

  • Fixed ChatMessage object attribute access in services.py (line 81)
  • Changed message.get() to message.content for Pydantic objects (see the sketch below)
  • Removed incorrect await from search_semantic_cache() call
  • Re-enabled DLP middleware after debugging

The API now responds correctly:
✅ POST /v1/chat/completions returns proper OpenAI-compatible responses
✅ All middleware functioning correctly
✅ Token usage tracking working
✅ Container startup and runtime working
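
A minimal sketch of the message-access fix (illustrative; assumes ChatMessage is a standard Pydantic model with role and content fields):

```python
from pydantic import BaseModel

class ChatMessage(BaseModel):
    role: str
    content: str

msg = ChatMessage(role="user", content="Hello")

# Before the fix: Pydantic models are not dicts, so .get() raises
# AttributeError, which surfaced as the 500 Internal Server Error.
# prompt_text = msg.get("content", "")

# After the fix: direct attribute access on the Pydantic object.
prompt_text = msg.content
```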

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings July 8, 2025 20:58
ozanunal0 merged commit 010bb8c into main on Jul 8, 2025
6 of 12 checks passed

sonarqubecloud Bot commented Jul 8, 2025

Copilot AI left a comment

Pull Request Overview

The PR fixes runtime errors in process_chat_completion by correcting message attribute access and adjusting the asynchronous cache lookup.

  • Use direct attribute access for the last message’s content.
  • Remove await on search_semantic_cache call.
  • Re-enable DLP middleware.
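
A hedged sketch of what re-enabling the middleware might look like (assuming a FastAPI app, consistent with the OpenAI-compatible endpoint; DLPMiddleware is a hypothetical name based on the PR description):

```python
from fastapi import FastAPI
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

app = FastAPI()

class DLPMiddleware(BaseHTTPMiddleware):
    """Hypothetical data-loss-prevention middleware."""

    async def dispatch(self, request: Request, call_next):
        # Inspect or redact the request here before forwarding it on.
        return await call_next(request)

# "Re-enabling" amounts to registering the middleware again:
app.add_middleware(DLPMiddleware)
```
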
Comments suppressed due to low confidence (1)

app/services.py:85

  • Removing await here may assign a coroutine instead of its result if search_semantic_cache is still async. Either ensure the function is now synchronous or reintroduce await.
    semantic_redis_key = search_semantic_cache(prompt_text)
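
To make the reviewer's point concrete, here is an illustrative sketch (the function names mirror the PR, but the bodies are stand-ins):

```python
import asyncio

def search_semantic_cache(prompt: str):
    """Stand-in for the now-synchronous cache lookup."""
    return "cache-key"

async def search_semantic_cache_async(prompt: str):
    """Stand-in for an async variant of the same lookup."""
    return "cache-key"

# Safe once the function is synchronous: the result comes back directly.
semantic_redis_key = search_semantic_cache("some prompt")

# The pitfall Copilot flags: calling an async function without await
# returns a coroutine object, not the key, and truthiness checks on a
# coroutine are always True, silently breaking cache-hit logic.
coro = search_semantic_cache_async("some prompt")
print(type(coro))  # <class 'coroutine'>
coro.close()       # close it to avoid a "never awaited" RuntimeWarning

# If the function were still async, the call would need to be awaited:
semantic_redis_key = asyncio.run(search_semantic_cache_async("some prompt"))
```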

Comment thread on app/services.py:

```diff
  # Get the last user message content
- prompt_text = request.messages[-1].get("content", "")
+ last_message = request.messages[-1]
+ prompt_text = last_message.content if hasattr(last_message, 'content') else ""
```

Copilot AI commented Jul 8, 2025

[nitpick] Consider using getattr(last_message, 'content', '') for more concise default extraction.

Suggested change

```diff
- prompt_text = last_message.content if hasattr(last_message, 'content') else ""
+ prompt_text = getattr(last_message, 'content', '')
```
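
A quick illustrative check that the two forms are equivalent when the attribute is missing (not from the PR):

```python
class NoContent:
    pass

m = NoContent()
assert (m.content if hasattr(m, "content") else "") == ""
assert getattr(m, "content", "") == ""
```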

