This repository was archived by the owner on Mar 13, 2026. It is now read-only.

dependency failed to start: container cognita-postgres is unhealthy #421

@Amitt1412

Description

Dear Team,

I have been trying to deploy the app locally on my Windows system with Docker Desktop, and the following is the output in the terminal. Could you please suggest how I can resolve this issue?

I'm trying to run the build with: docker-compose --env-file compose.env up

cognita-postgres  | fixing permissions on existing directory /var/lib/postgresql/data ... ok                                                                                       
cognita-postgres  | creating subdirectories ... ok
cognita-postgres  | selecting dynamic shared memory implementation ... posix
cognita-postgres  | selecting default max_connections ... 20                                                                                                                       
cognita-postgres  | selecting default shared_buffers ... 400kB
cognita-postgres  | selecting default time zone ... Etc/UTC
cognita-postgres  | creating configuration files ... ok                                                                                                                            
cognita-postgres  | 2025-01-16 10:59:37.027 UTC [83] FATAL:  data directory "/var/lib/postgresql/data" has invalid permissions
cognita-postgres  | 2025-01-16 10:59:37.027 UTC [83] DETAIL:  Permissions should be u=rwx (0700) or u=rwx,g=rx (0750).
cognita-postgres  | child process exited with exit code 1
cognita-postgres  | initdb: removing contents of data directory "/var/lib/postgresql/data"
cognita-postgres  | running bootstrap script ...
cognita-postgres exited with code 1
Gracefully stopping... (press Ctrl+C again to force)
dependency failed to start: container cognita-postgres is unhealthy
PS C:\Users\AmitTiwari\PycharmProjects\cognita> 
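From the log, the FATAL line says the data directory /var/lib/postgresql/data has invalid permissions (Postgres requires 0700 or 0750). My understanding (an assumption on my part, not confirmed) is that this commonly happens on Windows when that directory is a bind mount to a host folder, because NTFS permissions cannot be mapped to the modes Postgres requires. A sketch of a compose override that uses a Docker named volume instead (the service name cognita-postgres is taken from the log above; the volume name postgres-data is my own placeholder):

```yaml
# docker-compose.override.yml (sketch, assuming the service is named cognita-postgres)
services:
  cognita-postgres:
    volumes:
      # Named volume: Docker manages the filesystem inside its Linux VM,
      # so initdb can set the 0700 permissions it requires.
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
```

If a previous failed initialization left data behind, running docker-compose --env-file compose.env down -v before starting again may also be needed (note: this removes the database contents).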

Here is the configuration in models_config.yaml that I have been trying to deploy with:

model_providers:
  ############################ Local ############################################
  #   Uncomment this provider if you want to use local models providers         #
  #   using ollama and infinity model server                                    #
  ###############################################################################

#  - provider_name: local-ollama
#    api_format: openai
#    base_url: http://ollama-server:11434/v1/
#    api_key_env_var: ""
#    llm_model_ids:
#      - "qwen2:1.5b"
#    embedding_model_ids: []
#    reranking_model_ids: []
#    default_headers: {}

#  - provider_name: local-infinity
#    api_format: openai
#    base_url: http://infinity-server:7997/
#    api_key_env_var: INFINITY_API_KEY
#    llm_model_ids: []
#    embedding_model_ids:
#      - "mixedbread-ai/mxbai-embed-large-v1"
#    reranking_model_ids:
#      - "mixedbread-ai/mxbai-rerank-xsmall-v1"
#    default_headers: {}

#  - provider_name: faster-whisper
#    api_format: openai
#    base_url: http://faster-whisper:8000
#    api_key_env_var: ""
#    llm_model_ids: []
#    embedding_model_ids: []
#    reranking_model_ids: []
#    audio_model_ids:
#      - "Systran/faster-distil-whisper-large-v3"
#    default_headers: {}
############################ OpenAI ###########################################
#   Uncomment this provider if you want to use OpenAI as a models provider    #
#   Remember to set `OPENAI_API_KEY` in container environment                 #
###############################################################################

 - provider_name: openai
   api_format: openai
   api_key_env_var: OPENAI_API_KEY
   llm_model_ids:
     - "gpt-3.5-turbo"
     - "gpt-4o"
   embedding_model_ids:
     - "text-embedding-3-small"
     - "text-embedding-ada-002"
   reranking_model_ids: []
   default_headers: {}

############################ TrueFoundry ###########################################
#   Uncomment this provider if you want to use TrueFoundry as a models provider    #
#   Remember to set `TFY_API_KEY` in container environment                         #
####################################################################################

# - provider_name: truefoundry
#   api_format: openai
#   base_url: https://llm-gateway.truefoundry.com/api/inference/openai
#   api_key_env_var: TFY_API_KEY
#   llm_model_ids:
#     - "openai-main/gpt-4o-mini"
#     - "openai-main/gpt-4-turbo"
#     - "openai-main/gpt-3-5-turbo"
#   embedding_model_ids:
#     - "openai-main/text-embedding-3-small"
#     - "openai-main/text-embedding-ada-002"
#   reranking_model_ids: []
#   default_headers: {}
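Since openai is the only provider left uncommented above, and the comments say OPENAI_API_KEY must be set in the container environment, here is a minimal sketch of what I have in compose.env (the key value is a placeholder; whether this file is forwarded into the container depends on the compose file, which I have not modified):

```shell
# compose.env (sketch; the key value below is a placeholder)
OPENAI_API_KEY=sk-...
```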
