[FEATURE] Dashboard for AI Integration #143
junaidferoz wants to merge 9 commits into project-1-ai-standardization
Conversation
Fix #122 comment collapse
Update dockerignore
Pull request overview
This PR introduces a first-class LLM integration layer (direct HTTP calls to providers) plus new admin/user dashboard pages to manage providers, API keys, prompt templates, and usage reporting within CARE.
Changes:
- Added an `LLMService` backend service to execute LLM requests, manage keys/templates, and log usage/costs.
- Added new Vue dashboard pages for LLM usage (keys/templates/logs) and an admin page for provider configuration.
- Added encryption utilities and new Sequelize models/migrations for LLM-related tables and settings/nav seeding.
Reviewed changes
Copilot reviewed 19 out of 20 changed files in this pull request and generated 21 comments.
Show a summary per file
| File | Description |
|---|---|
| frontend/src/store/modules/service.js | Adds Vuex wiring for LLMService socket response types and result cleanup. |
| frontend/src/components/dashboard/LlmProviders.vue | New admin UI for CRUD on llm_provider records (models, endpoints, enable/disable). |
| frontend/src/components/dashboard/LlmDashboard.vue | New unified dashboard for keys, templates, usage stats, request log, and test-run. |
| backend/webserver/sockets/service.js | Auto-connects LLMService on socket init when enabled via settings. |
| backend/webserver/services/llm.js | Implements direct provider calls, key/template CRUD, usage stats/logs, and logging. |
| backend/utils/encryption.js | Adds AES-256-GCM encrypt/decrypt + API key masking helper. |
| backend/db/models/api_key.js | New API key model helpers (getAccessibleKeys, resolveKey). |
| backend/db/models/prompt_template.js | New prompt template model and “accessible to user” query helper. |
| backend/db/models/llm_provider.js | New provider registry model (enabled providers, lookup by slug). |
| backend/db/models/llm_log.js | New usage log model with pagination and aggregated stats queries. |
| backend/db/migrations/20260331100000-create-api_key.js | Creates api_key table. |
| backend/db/migrations/20260331100001-create-llm_provider.js | Creates llm_provider table. |
| backend/db/migrations/20260331100002-create-llm_log.js | Creates llm_log table + indexes. |
| backend/db/migrations/20260331100003-create-prompt_template.js | Creates prompt_template table. |
| backend/db/migrations/20260331100004-seed-llm_provider.js | Seeds default providers/models. |
| backend/db/migrations/20260331100005-seed-llm_settings.js | Seeds LLM settings keys. |
| backend/db/migrations/20260331100006-seed-llm_nav_and_rights.js | Seeds nav entries + rights for the new dashboard pages. |
| backend/db/.sequelizerc | Loads .env for Sequelize CLI runs. |
| backend/db/config/config.js | Minor formatting change. |
| .gitignore | Ignores logs/. |
```js
} else {
  cur = {...state.services[service][serviceType]};
}
if (!data.data.error) {
```
LLMService results with data.data.error are currently dropped, but the LlmDashboard watcher expects error responses to be stored (it reads results[requestId].error). This causes failed test runs to never resolve and eventually show a timeout instead of the actual error. Store error payloads as well (or store a sentinel entry) so the UI can react to failures.
Suggested change:
```js
if (data && data.data && data.data.id !== undefined && data.data.id !== null) {
  // Store both successful and error results so the UI can react to failures.
```
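A minimal sketch of the intended behavior, assuming results are keyed by request id (the function and field names here are hypothetical, not the actual store code):

```javascript
// Store an incoming LLMService payload in the results map keyed by request id.
// Error payloads are kept too, so a watcher reading results[requestId].error
// can resolve a failed test run instead of timing out.
function storeResult(results, data) {
  if (data && data.data && data.data.id !== undefined && data.data.id !== null) {
    results[data.data.id] = data.data; // includes .error when the request failed
  }
  return results;
}
```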
```js
return this.providers.map(p => {
  const models = Array.isArray(p.models)
    ? p.models
    : (typeof p.models === 'string' ? JSON.parse(p.models) : []);
  return {
```
providers parsing uses JSON.parse(p.models) without a try/catch. If a provider record contains invalid JSON (e.g., from manual edits or a partial update), this computed property will throw and break the whole page render. Wrap parsing in a safe helper (try/catch) and fall back to an empty array (and optionally surface a toast).
```js
this.currentPage = paginationData.page;
this.loadLogs();
```
The log table is configured for server-side pagination and sorting, but handlePaginationUpdate ignores limit, order, and filter from BasicTable and loadLogs() always sends {limit: 25, order: [['createdAt','DESC']]}. This breaks sorting (column sort clicks) and items-per-page changes. Persist the emitted pagination data (page/limit/order/filter) and pass it through to getUsageLogs.
Suggested change:
```js
// Persist full pagination state so server-side calls can respect page, limit, order, and filter.
this.logPagination = {
  ...this.logPagination,
  ...paginationData,
};
this.currentPage = paginationData.page;
// Forward pagination data to the log loader so it can pass it through to getUsageLogs.
this.loadLogs(paginationData);
```
```js
if (this.logs.length === 0) return;
const headers = ['Timestamp', 'Provider', 'Model', 'Status', 'Input Tokens', 'Output Tokens', 'Est. Cost', 'Latency (ms)'];
const rows = this.logs.map(l => [
  new Date(l.createdAt).toISOString(), l.provider, l.model, l.status,
  l.inputTokens || 0, l.outputTokens || 0, l.estimatedCost || 0, l.latencyMs || 0,
]);
const csv = [headers.join(','), ...rows.map(r => r.join(','))].join('\n');
const blob = new Blob([csv], {type: 'text/csv'});
```
CSV export builds rows by simple join(',') without escaping commas, quotes, or newlines in fields (e.g., provider/model names or any future string fields), producing invalid CSV for many values. Use proper CSV escaping/quoting (or a small CSV utility) before creating the blob.
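A small escaping helper in the RFC 4180 style would be enough here; this is a sketch (names are illustrative), not the PR's code:

```javascript
// Quote any field containing commas, quotes, or newlines, and double embedded
// quotes, per the common RFC 4180 convention.
function csvField(value) {
  const s = String(value ?? '');
  return /[",\n\r]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
}

// Build a CSV string from a header row and data rows using the escaper.
function toCsv(headers, rows) {
  return [headers, ...rows].map(r => r.map(csvField).join(',')).join('\n');
}
```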
```html
<label class="form-label fw-bold">Provider</label>
<select v-model="keyForm.provider" class="form-select" :disabled="!!editingKey">
  <option value="" disabled>Select a provider...</option>
  <option v-for="p in providers" :key="p.slug" :value="p.slug">{{ p.name }}</option>
  <option value="custom">Custom</option>
</select>
</div>
<div class="mb-3">
  <label class="form-label fw-bold">Name</label>
  <input v-model="keyForm.name" type="text" class="form-control" placeholder="e.g. My OpenAI Key" />
</div>
<div class="mb-3">
  <label class="form-label fw-bold">API Key</label>
  <input v-model="keyForm.apiKey" type="password" class="form-control"
    :placeholder="editingKey ? 'Leave blank to keep existing' : 'sk-...'" />
</div>
<div v-if="keyForm.provider === 'custom'" class="mb-3">
  <label class="form-label fw-bold">Custom API Endpoint</label>
  <input v-model="keyForm.apiEndpoint" type="text" class="form-control" placeholder="https://your-api.com/v1" />
</div>
```
The UI offers a Custom provider option and allows setting a custom API endpoint, but the backend requires provider to match a configured provider slug and will reject unknown slugs (and _callProvider routing is provider-slug based). Selecting custom will therefore create unusable keys / failed requests. Either remove the custom option, or implement an explicit custom-provider flow end-to-end (including request routing and provider validation).
```js
return Array.isArray(provider.models) ? provider.models
  : (typeof provider.models === 'string' ? JSON.parse(provider.models) : []);
```
availableModels() does JSON.parse(provider.models) without guarding for invalid JSON. A malformed models value will throw and break the whole dashboard render. Wrap parsing in a try/catch (or normalize models in the backend so it’s always an array).
Suggested change:
```js
if (Array.isArray(provider.models)) {
  return provider.models;
}
if (typeof provider.models === 'string') {
  try {
    const parsed = JSON.parse(provider.models);
    return Array.isArray(parsed) ? parsed : [];
  } catch (e) {
    // Malformed JSON; safely fall back to no available models
    return [];
  }
}
return [];
```
```html
<option value="">All Statuses</option>
<option value="success">Success</option>
<option value="error">Error</option>
<option value="timeout">Timeout</option>
```
The status filter includes a timeout option, but the backend only ever writes status: 'success' or status: 'error' to llm_log. As a result, filtering by timeout will always return zero rows. Either log provider timeouts distinctly (e.g., detect axios timeout errors and set status: 'timeout') or remove the filter option.
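If the first option is taken, the mapping could be as small as this sketch (assuming axios-style errors, where classic axios sets `code: 'ECONNABORTED'` on timeout and newer versions may use `'ETIMEDOUT'`):

```javascript
// Map a provider-call error to a distinct log status before writing llm_log.
// The error codes checked are axios conventions, assumed here rather than
// confirmed against this codebase's HTTP client configuration.
function statusFromError(err) {
  if (err && (err.code === 'ECONNABORTED' || err.code === 'ETIMEDOUT')) {
    return 'timeout';
  }
  return 'error';
}
```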
Suggested change (remove the line):
```html
<option value="timeout">Timeout</option>
```
```js
module.exports = (sequelize, DataTypes) => {
  class LlmProvider extends MetaModel {
    static autoTable = true;
    static publicTable = true;

    static associate(models) {
    }

    /**
```
llm_provider is intended to be admin-managed (the dashboard route is admin-only), but there is no server-side authorization guard on create/update/delete for this model. Because it’s an autoTable, a non-admin client can still call appDataUpdate directly unless you enforce permissions in model hooks (e.g., beforeCreate/beforeUpdate/beforeDestroy checking options.context.currentUserId is admin) or move provider CRUD behind an admin-checked service command.
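A sketch of the hook-based option; the names `options.context.currentUserId` and `isAdmin` are assumptions about CARE's model layer, not confirmed API:

```javascript
// Hypothetical guard for mutating hooks on an admin-only autoTable model:
// reject the operation unless the request context identifies an admin user.
function assertAdmin(options, isAdmin) {
  const userId = options && options.context && options.context.currentUserId;
  if (userId == null || !isAdmin(userId)) {
    throw new Error('llm_provider changes require admin rights');
  }
}

// Wiring sketch (Sequelize supports Model.addHook for lifecycle events):
// LlmProvider.addHook('beforeCreate', (inst, options) => assertAdmin(options, isAdmin));
// LlmProvider.addHook('beforeUpdate', (inst, options) => assertAdmin(options, isAdmin));
// LlmProvider.addHook('beforeDestroy', (inst, options) => assertAdmin(options, isAdmin));
```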
backend/db/models/llm_log.js (Outdated)
```js
const since = new Date(Date.now() - days * 24 * 60 * 60 * 1000);

const where = {createdAt: {[Op.gte]: since}};
if (userId) where.userId = userId;
```
getUsageStats treats userId = null as system-wide, but the filter logic is if (userId) where.userId = userId; which will skip filtering for userId = 0 as well. Using an explicit null check (userId !== null) avoids accidental system-wide stats if a falsy userId value is ever passed in.
Suggested change:
```js
if (userId !== null) where.userId = userId;
```
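The pitfall can be shown in isolation (a sketch; `buildWhere` is a hypothetical name, and it also guards against `undefined`):

```javascript
// A truthiness test drops userId = 0 and silently returns system-wide stats;
// an explicit null/undefined check keeps 0 as a real filter value.
function buildWhere(userId) {
  const where = {};
  if (userId !== null && userId !== undefined) where.userId = userId;
  return where;
}
```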
backend/webserver/services/llm.js (Outdated)
```js
const provider = this.providers.find(p => p.slug === providerSlug);
if (!provider || !provider.enabled) {
  throw new Error(`Provider "${providerSlug}" is not available or has been disabled by an administrator.`);
}

const maxTokens = parseInt(await this.server.db.models['setting'].get('service.llm.maxTokensPerRequest')) || 4096;
const decryptedKey = decrypt(apiKey.encryptedKey);
const endpoint = apiKey.apiEndpoint || provider.apiBaseUrl;

const result = await this._callProvider(providerSlug, endpoint, decryptedKey, model, resolvedMessages, maxTokens);
```
LLM requests validate that the provider exists/enabled, but there’s no validation that the requested model is allowed for that provider (or restricted by admin-configured model lists). This conflicts with the PR description about restricting models system-wide and allows clients to call arbitrary model IDs. Validate model against provider.models (and optionally enforce provider-level enabled/disabled per model) before calling the provider API.
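The check could be a single guard before `_callProvider`; this is a sketch, and the shape of `provider.models` (strings or `{id}` objects) is an assumption:

```javascript
// Reject models that are not in the provider's configured model list.
function assertModelAllowed(provider, model) {
  const models = Array.isArray(provider.models) ? provider.models : [];
  const allowed = models.some(m => (typeof m === 'string' ? m : m.id) === model);
  if (!allowed) {
    throw new Error(`Model "${model}" is not configured for provider "${provider.slug}".`);
  }
}
```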
```js
allowNull: true,
defaultValue: null,
},
encryptedKey: {
```
Do we need this here? We should encrypt the columns in the Postgres table itself.
```js
allowNull: false,
defaultValue: true,
},
shared: {
```
We should not share API keys directly; better to share the models once a user has added them.
```js
allowNull: true,
defaultValue: null,
},
usageLimitMonthly: {
```
How to limit the API keys and models still needs to be discussed (please put it on the agenda for the team meeting).
Okay, I have added this as one of the points to discuss in our gradient meeting.
```js
module.exports = {
  async up(queryInterface, Sequelize) {
    await queryInterface.createTable('llm_provider', {
```
This is not about the provider; the provider comes with the API key itself. Let's call it `ai_model`.
Not needed; this is coming from the LiteLLM and brokerIO integration.
```
@@ -0,0 +1,76 @@
'use strict';

const navElements = [
```
There are two dashboards, API Keys and Models, not "LLM Dashboard" and "LLM Providers"; both should be available to all users, not only to admins.
This was added so that when we run npx sequelize-cli db:migrate, the CLI loads the .env file and has access to environment variables like POSTGRES_HOST, POSTGRES_CAREDB, etc. Without it, the config.js can't resolve process.env.POSTGRES_CAREDB and the migration fails.
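Based on that explanation, the `.sequelizerc` presumably looks something like this sketch (the exact config path is an assumption, and `dotenv` must be installed):

```javascript
// .sequelizerc — evaluated by the Sequelize CLI before it reads its config.
// Loading dotenv here makes POSTGRES_HOST, POSTGRES_CAREDB, etc. visible to
// config.js during `npx sequelize-cli db:migrate`.
require('dotenv').config();

module.exports = {
  config: 'db/config/config.js', // path is an assumption for illustration
};
```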
@dennis-zyska do we need that file though?
backend/db/models/llm_log.js (Outdated)
```js
 * @param {number} days - number of days to look back
 * @returns {Promise<Object>}
 */
static async getUsageStats(userId = null, days = 30) {
```
Please start with empty model files (this holds for all models), and only add functions that we have discussed and actually use or need somewhere; otherwise it can get messy really fast.
…h I can do myself)
Dashboard for AI Integration

Adds a new "LLM Dashboard" page to the CARE frontend and the supporting database schema, as described in the issue. Users can manage API keys, create prompt templates, and view usage statistics from a single page. Admins get a separate "LLM Providers" page to control available providers.
New User Features
- A prompt template editor with `{{placeholder}}` syntax, automatic parameter detection, and a preview area.

New Dev Features
- Migrations creating the `api_key`, `llm_provider`, `llm_log`, and `prompt_template` tables, seeding three default providers (OpenAI, Anthropic, Google) and LLM-related settings, and registering the new nav elements and user rights.
- New Sequelize models (`api_key.js`, `llm_provider.js`, `llm_log.js`, `prompt_template.js`) extending `MetaModel`.
- An encryption utility (`backend/utils/encryption.js`) providing AES-256-GCM encrypt/decrypt for API key storage and a masking helper for frontend display.
- Vuex wiring in `frontend/src/store/modules/service.js` to handle incoming `LLMService` messages once a backend service is connected.

Future Steps