[FEATURE] Dashboard for AI Integration#143

Open
junaidferoz wants to merge 9 commits into project-1-ai-standardization from feat-141-Dashboard_Implementation

Conversation


@junaidferoz junaidferoz commented Mar 31, 2026

Dashboard for AI Integration

Adds a new "LLM Dashboard" page to the CARE frontend along with the supporting database schema, as described in the issue. Users can manage API keys, create prompt templates, and view usage statistics from a single page. Admins get a separate "LLM Providers" page to control which providers are available.

New User Features

  • LLM Dashboard page accessible from the sidebar for all users, combining API key management, prompt templates, usage stats, and a request log in one view.
  • Add/edit/delete API keys through a modal form with provider selection, naming, sharing options (system-wide, per study, per project), and monthly token limits.
  • Add/edit/delete prompt templates through a modal with a {{placeholder}} syntax editor, automatic parameter detection, and a preview area.
  • Usage stats cards showing total requests, input/output tokens, and estimated cost.
  • Request log table with filters for provider, status, and time range, plus CSV export and a detail modal for inspecting individual requests.
  • LLM Providers admin page where admins can enable/disable providers and manage which models are available system-wide.

New Dev Features

  • Seven database migrations creating the api_key, llm_provider, llm_log, and prompt_template tables, seeding three default providers (OpenAI, Anthropic, Google), LLM-related settings, and registering the new nav elements and user rights.
  • Four Sequelize models (api_key.js, llm_provider.js, llm_log.js, prompt_template.js) extending MetaModel.
  • Encryption utility (backend/utils/encryption.js) providing AES-256-GCM encrypt/decrypt for API key storage and a masking helper for frontend display.
  • Vuex store extension in frontend/src/store/modules/service.js to handle incoming LLMService messages once a backend service is connected.

Future Steps

  • Integrate LiteLLM as the backend service so dashboard actions persist and return real data.
  • Add a unified model browser listing models from LiteLLM, Ollama, and NLP skills.
  • Wire prompt template input mapping to auto-populate parameters from UI components.
  • Enforce sharing scopes at the backend level.

Copilot AI review requested due to automatic review settings March 31, 2026 14:14
@junaidferoz junaidferoz self-assigned this Mar 31, 2026
@junaidferoz junaidferoz added enhancement New feature or request frontend requires changes in the frontend of CARE labels Mar 31, 2026
@junaidferoz junaidferoz linked an issue Mar 31, 2026 that may be closed by this pull request

Copilot AI left a comment


Pull request overview

This PR introduces a first-class LLM integration layer (direct HTTP calls to providers) plus new admin/user dashboard pages to manage providers, API keys, prompt templates, and usage reporting within CARE.

Changes:

  • Added an LLMService backend service to execute LLM requests, manage keys/templates, and log usage/costs.
  • Added new Vue dashboard pages for LLM usage (keys/templates/logs) and an admin page for provider configuration.
  • Added encryption utilities and new Sequelize models/migrations for LLM-related tables and settings/nav seeding.

Reviewed changes

Copilot reviewed 19 out of 20 changed files in this pull request and generated 21 comments.

Summary per file:
frontend/src/store/modules/service.js Adds Vuex wiring for LLMService socket response types and result cleanup.
frontend/src/components/dashboard/LlmProviders.vue New admin UI for CRUD on llm_provider records (models, endpoints, enable/disable).
frontend/src/components/dashboard/LlmDashboard.vue New unified dashboard for keys, templates, usage stats, request log, and test-run.
backend/webserver/sockets/service.js Auto-connects LLMService on socket init when enabled via settings.
backend/webserver/services/llm.js Implements direct provider calls, key/template CRUD, usage stats/logs, and logging.
backend/utils/encryption.js Adds AES-256-GCM encrypt/decrypt + API key masking helper.
backend/db/models/api_key.js New API key model helpers (getAccessibleKeys, resolveKey).
backend/db/models/prompt_template.js New prompt template model and “accessible to user” query helper.
backend/db/models/llm_provider.js New provider registry model (enabled providers, lookup by slug).
backend/db/models/llm_log.js New usage log model with pagination and aggregated stats queries.
backend/db/migrations/20260331100000-create-api_key.js Creates api_key table.
backend/db/migrations/20260331100001-create-llm_provider.js Creates llm_provider table.
backend/db/migrations/20260331100002-create-llm_log.js Creates llm_log table + indexes.
backend/db/migrations/20260331100003-create-prompt_template.js Creates prompt_template table.
backend/db/migrations/20260331100004-seed-llm_provider.js Seeds default providers/models.
backend/db/migrations/20260331100005-seed-llm_settings.js Seeds LLM settings keys.
backend/db/migrations/20260331100006-seed-llm_nav_and_rights.js Seeds nav entries + rights for the new dashboard pages.
backend/db/.sequelizerc Loads .env for Sequelize CLI runs.
backend/db/config/config.js Minor formatting change.
.gitignore Ignores logs/.


} else {
cur = {...state.services[service][serviceType]};
}
if (!data.data.error) {

Copilot AI Mar 31, 2026


LLMService results with data.data.error are currently dropped, but the LlmDashboard watcher expects error responses to be stored (it reads results[requestId].error). This causes failed test runs to never resolve and eventually show a timeout instead of the actual error. Store error payloads as well (or store a sentinel entry) so the UI can react to failures.

Suggested change:
- if (!data.data.error) {
+ // Store both successful and error results so the UI can react to failures.
+ if (data && data.data && data.data.id !== undefined && data.data.id !== null) {

Comment on lines +137 to +141
return this.providers.map(p => {
const models = Array.isArray(p.models)
? p.models
: (typeof p.models === 'string' ? JSON.parse(p.models) : []);
return {

Copilot AI Mar 31, 2026


providers parsing uses JSON.parse(p.models) without a try/catch. If a provider record contains invalid JSON (e.g., from manual edits or a partial update), this computed property will throw and break the whole page render. Wrap parsing in a safe helper (try/catch) and fall back to an empty array (and optionally surface a toast).
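A safe helper along the lines the review suggests could look like this sketch (the function name and fallback behavior are illustrative):

```javascript
// Parse a provider's models field defensively: accept arrays as-is,
// attempt JSON.parse on strings, and fall back to [] on anything malformed
// so a bad record cannot break the whole page render.
function parseModels(models) {
  if (Array.isArray(models)) return models;
  if (typeof models === 'string') {
    try {
      const parsed = JSON.parse(models);
      return Array.isArray(parsed) ? parsed : [];
    } catch (e) {
      return []; // malformed JSON from manual edits or partial updates
    }
  }
  return [];
}
```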

Comment on lines +797 to +798
this.currentPage = paginationData.page;
this.loadLogs();

Copilot AI Mar 31, 2026


The log table is configured for server-side pagination and sorting, but handlePaginationUpdate ignores limit, order, and filter from BasicTable and loadLogs() always sends {limit: 25, order: [['createdAt','DESC']]}. This breaks sorting (column sort clicks) and items-per-page changes. Persist the emitted pagination data (page/limit/order/filter) and pass it through to getUsageLogs.

Suggested change:
- this.currentPage = paginationData.page;
- this.loadLogs();
+ // Persist full pagination state so server-side calls can respect page, limit, order, and filter.
+ this.logPagination = {
+   ...this.logPagination,
+   ...paginationData,
+ };
+ this.currentPage = paginationData.page;
+ // Forward pagination data to the log loader so it can pass it through to getUsageLogs.
+ this.loadLogs(paginationData);

Comment on lines +807 to +814
if (this.logs.length === 0) return;
const headers = ['Timestamp', 'Provider', 'Model', 'Status', 'Input Tokens', 'Output Tokens', 'Est. Cost', 'Latency (ms)'];
const rows = this.logs.map(l => [
new Date(l.createdAt).toISOString(), l.provider, l.model, l.status,
l.inputTokens || 0, l.outputTokens || 0, l.estimatedCost || 0, l.latencyMs || 0,
]);
const csv = [headers.join(','), ...rows.map(r => r.join(','))].join('\n');
const blob = new Blob([csv], {type: 'text/csv'});

Copilot AI Mar 31, 2026


CSV export builds rows by simple join(',') without escaping commas, quotes, or newlines in fields (e.g., provider/model names or any future string fields), producing invalid CSV for many values. Use proper CSV escaping/quoting (or a small CSV utility) before creating the blob.
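A minimal RFC 4180-style escaping sketch (helper names are illustrative, not from the PR):

```javascript
// Quote any field containing a comma, double quote, or newline, and
// double embedded quotes, per RFC 4180; leave plain fields untouched.
function csvField(value) {
  const s = String(value ?? '');
  return /[",\n\r]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

// Join headers and rows into CSV text with each field escaped.
function toCsv(headers, rows) {
  return [headers, ...rows].map((r) => r.map(csvField).join(',')).join('\n');
}
```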

Comment on lines +200 to +219
<label class="form-label fw-bold">Provider</label>
<select v-model="keyForm.provider" class="form-select" :disabled="!!editingKey">
<option value="" disabled>Select a provider...</option>
<option v-for="p in providers" :key="p.slug" :value="p.slug">{{ p.name }}</option>
<option value="custom">Custom</option>
</select>
</div>
<div class="mb-3">
<label class="form-label fw-bold">Name</label>
<input v-model="keyForm.name" type="text" class="form-control" placeholder="e.g. My OpenAI Key" />
</div>
<div class="mb-3">
<label class="form-label fw-bold">API Key</label>
<input v-model="keyForm.apiKey" type="password" class="form-control"
:placeholder="editingKey ? 'Leave blank to keep existing' : 'sk-...'" />
</div>
<div v-if="keyForm.provider === 'custom'" class="mb-3">
<label class="form-label fw-bold">Custom API Endpoint</label>
<input v-model="keyForm.apiEndpoint" type="text" class="form-control" placeholder="https://your-api.com/v1" />
</div>

Copilot AI Mar 31, 2026


The UI offers a Custom provider option and allows setting a custom API endpoint, but the backend requires provider to match a configured provider slug and will reject unknown slugs (and _callProvider routing is provider-slug based). Selecting custom will therefore create unusable keys / failed requests. Either remove the custom option, or implement an explicit custom-provider flow end-to-end (including request routing and provider validation).

Comment on lines +549 to +550
return Array.isArray(provider.models) ? provider.models
: (typeof provider.models === 'string' ? JSON.parse(provider.models) : []);

Copilot AI Mar 31, 2026


availableModels() does JSON.parse(provider.models) without guarding for invalid JSON. A malformed models value will throw and break the whole dashboard render. Wrap parsing in a try/catch (or normalize models in the backend so it’s always an array).

Suggested change:
- return Array.isArray(provider.models) ? provider.models
-   : (typeof provider.models === 'string' ? JSON.parse(provider.models) : []);
+ if (Array.isArray(provider.models)) {
+   return provider.models;
+ }
+ if (typeof provider.models === 'string') {
+   try {
+     const parsed = JSON.parse(provider.models);
+     return Array.isArray(parsed) ? parsed : [];
+   } catch (e) {
+     // Malformed JSON; safely fall back to no available models
+     return [];
+   }
+ }
+ return [];

<option value="">All Statuses</option>
<option value="success">Success</option>
<option value="error">Error</option>
<option value="timeout">Timeout</option>

Copilot AI Mar 31, 2026


The status filter includes a timeout option, but the backend only ever writes status: 'success' or status: 'error' to llm_log. As a result, filtering by timeout will always return zero rows. Either log provider timeouts distinctly (e.g., detect axios timeout errors and set status: 'timeout') or remove the filter option.

Suggested change (remove the option):
- <option value="timeout">Timeout</option>
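Alternatively, if the timeout option is kept, logging timeouts distinctly could be sketched as follows (this assumes the backend uses axios; which error codes axios sets on timeouts varies by version, so treat the codes as an assumption to verify):

```javascript
// Classify a caught provider error into a status for llm_log.
// axios has historically used 'ECONNABORTED' for request timeouts and
// newer versions may surface 'ETIMEDOUT'; anything else is a plain error.
function classifyError(err) {
  if (err && (err.code === 'ECONNABORTED' || err.code === 'ETIMEDOUT')) {
    return 'timeout';
  }
  return 'error';
}
```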

Comment on lines +4 to +12
module.exports = (sequelize, DataTypes) => {
class LlmProvider extends MetaModel {
static autoTable = true;
static publicTable = true;

static associate(models) {
}

/**

Copilot AI Mar 31, 2026


llm_provider is intended to be admin-managed (the dashboard route is admin-only), but there is no server-side authorization guard on create/update/delete for this model. Because it’s an autoTable, a non-admin client can still call appDataUpdate directly unless you enforce permissions in model hooks (e.g., beforeCreate/beforeUpdate/beforeDestroy checking options.context.currentUserId is admin) or move provider CRUD behind an admin-checked service command.
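A hook-based guard might look like the following sketch (the options.context shape and the currentUser/isAdmin fields are assumptions about CARE's conventions, not confirmed API):

```javascript
// Reject create/update/destroy on llm_provider unless the request context
// carries an admin user. The context shape here is hypothetical.
function requireAdmin(instance, options) {
  const user = options && options.context && options.context.currentUser;
  if (!user || !user.isAdmin) {
    throw new Error('Only admins may modify llm_provider records.');
  }
}

// Registered once in the model definition, e.g.:
// LlmProvider.addHook('beforeCreate', requireAdmin);
// LlmProvider.addHook('beforeUpdate', requireAdmin);
// LlmProvider.addHook('beforeDestroy', requireAdmin);
```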

const since = new Date(Date.now() - days * 24 * 60 * 60 * 1000);

const where = {createdAt: {[Op.gte]: since}};
if (userId) where.userId = userId;

Copilot AI Mar 31, 2026


getUsageStats treats userId = null as system-wide, but the filter logic is if (userId) where.userId = userId; which will skip filtering for userId = 0 as well. Using an explicit null check (userId !== null) avoids accidental system-wide stats if a falsy userId value is ever passed in.

Suggested change:
- if (userId) where.userId = userId;
+ if (userId !== null) where.userId = userId;

Comment on lines +88 to +97
const provider = this.providers.find(p => p.slug === providerSlug);
if (!provider || !provider.enabled) {
throw new Error(`Provider "${providerSlug}" is not available or has been disabled by an administrator.`);
}

const maxTokens = parseInt(await this.server.db.models['setting'].get('service.llm.maxTokensPerRequest')) || 4096;
const decryptedKey = decrypt(apiKey.encryptedKey);
const endpoint = apiKey.apiEndpoint || provider.apiBaseUrl;

const result = await this._callProvider(providerSlug, endpoint, decryptedKey, model, resolvedMessages, maxTokens);

Copilot AI Mar 31, 2026


LLM requests validate that the provider exists/enabled, but there’s no validation that the requested model is allowed for that provider (or restricted by admin-configured model lists). This conflicts with the PR description about restricting models system-wide and allows clients to call arbitrary model IDs. Validate model against provider.models (and optionally enforce provider-level enabled/disabled per model) before calling the provider API.
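Such a check could be sketched as follows (assuming provider.models holds, or parses to, an array of allowed model IDs; the helper name is illustrative):

```javascript
// Throw before calling the provider API if the requested model is not in
// the provider's admin-configured model list. Assumes seeded provider
// records store models as an array or a JSON-encoded array string.
function assertModelAllowed(provider, model) {
  const models = Array.isArray(provider.models)
    ? provider.models
    : (typeof provider.models === 'string' ? JSON.parse(provider.models) : []);
  if (!models.includes(model)) {
    throw new Error(`Model "${model}" is not enabled for provider "${provider.slug}".`);
  }
}
```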

@junaidferoz junaidferoz changed the title from "[FEATURE] Dashboard creation for APIs" to "[FEATURE] LLM Model management dashbaord" Apr 1, 2026
@junaidferoz junaidferoz added this to the AI Standardization milestone Apr 1, 2026
@junaidferoz junaidferoz linked an issue Apr 1, 2026 that may be closed by this pull request
@junaidferoz junaidferoz changed the base branch from dev to project-1-ai-standardization April 1, 2026 14:01
@junaidferoz junaidferoz changed the title from "[FEATURE] LLM Model management dashbaord" to "[FEATURE] Dashboard for AI Integration" Apr 2, 2026
allowNull: true,
defaultValue: null,
},
encryptedKey: {
Collaborator


Do we need this here? We should encrypt the columns in the Postgres table itself.

allowNull: false,
defaultValue: true,
},
shared: {
Collaborator


We should not share API keys directly; better to share the models once a user has added them.

allowNull: true,
defaultValue: null,
},
usageLimitMonthly: {
Collaborator


We need to discuss how to limit the API keys and models (please put it on the agenda of the team meetings).

Collaborator Author


Okay, I have set this as one of the points to be discussed in our gradient meeting.


module.exports = {
async up(queryInterface, Sequelize) {
await queryInterface.createTable('llm_provider', {
Collaborator


It is not about the provider; the provider comes with the API key itself. Let's call it ai_model.

Collaborator


Not needed; this is coming from the LiteLLM and brokerIO integration.

@@ -0,0 +1,76 @@
'use strict';

const navElements = [
Collaborator


There should be two dashboards, "API Keys" and "Models", not "LLM Dashboard" and "LLM Providers", and both should be available to all users, not only admins.

Collaborator


Why is that needed?

Collaborator Author


This was added so that when we run npx sequelize-cli db:migrate, the CLI loads the .env file and has access to environment variables like POSTGRES_HOST, POSTGRES_CAREDB, etc. Without it, the config.js can't resolve process.env.POSTGRES_CAREDB and the migration fails.
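For reference, a typical .sequelizerc that does this looks roughly like the following (the paths are illustrative and may differ in CARE):

```javascript
// .sequelizerc: load .env before the Sequelize CLI reads config.js, so
// process.env.POSTGRES_HOST, POSTGRES_CAREDB, etc. resolve during migrations.
require('dotenv').config();

const path = require('path');

module.exports = {
  'config': path.resolve('db', 'config', 'config.js'),
  'models-path': path.resolve('db', 'models'),
  'migrations-path': path.resolve('db', 'migrations'),
};
```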

Collaborator Author


@dennis-zyska do we need that file though?

* @param {number} days - number of days to look back
* @returns {Promise<Object>}
*/
static async getUsageStats(userId = null, days = 30) {
Collaborator


Please start with empty model files (this holds for all models) and only add functions that we have discussed and actually use or need; otherwise it can get messy really fast.



Development

Successfully merging this pull request may close these issues.

[FEATURE] LLM Model Management Dashboard

5 participants