Compare commits


22 Commits

Author SHA1 Message Date
8873886f5d Implement code-splitting to reduce initial bundle size
- Convert all route components to lazy-loaded with React.lazy()
- Add Suspense boundaries with loading fallback components
- Configure manual chunks in Vite for better code organization:
  - Separate React vendor libraries (react-vendor)
  - Group components by feature (reports, settings, admin, apps, auth)
  - Isolate other node_modules (vendor)
- Reduce initial bundle from ~1,080 kB to under 500 kB
- Components now load on-demand when routes are accessed
- Improves initial page load performance and caching
2026-01-22 22:58:19 +01:00
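The chunking strategy this commit describes can be sketched as a `manualChunks` function for Vite/Rollup. The path patterns below are assumptions based on the bullet list above, not the actual `vite.config.ts`:

```typescript
// Sketch of the manual-chunk strategy; the path patterns are assumptions
// based on the commit message, not the actual vite.config.ts.
function manualChunks(id: string): string | undefined {
  if (id.includes('node_modules')) {
    // React and friends get a long-lived vendor chunk of their own
    if (id.includes('react')) return 'react-vendor';
    // Everything else from node_modules lands in a generic vendor chunk
    return 'vendor';
  }
  // Feature components are grouped so each route loads one chunk
  for (const feature of ['reports', 'settings', 'admin', 'apps', 'auth']) {
    if (id.includes(`/${feature}/`)) return feature;
  }
  return undefined; // let Rollup decide
}

// Wire it up in vite.config.ts:
//   build: { rollupOptions: { output: { manualChunks } } }
```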
57e4adc69c Remove JIRA_SCHEMA_ID from entire application
- Remove JIRA_SCHEMA_ID from all documentation, config files, and scripts
- Update generate-schema.ts to always auto-discover schemas dynamically
- Runtime application already discovers schemas via /objectschema/list API
- Build script now automatically selects schema with most objects
- Remove JIRA_SCHEMA_ID from docker-compose.yml, Azure setup scripts, and all docs
- Application is now fully schema-agnostic and discovers schemas automatically
2026-01-22 22:56:29 +01:00
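The "schema with the most objects" selection boils down to a reduce over the discovered schemas. A minimal sketch (the interface is a trimmed-down stand-in for the real type):

```typescript
// Sketch of the auto-selection rule; JiraObjectSchema is a trimmed-down
// stand-in for the real type.
interface JiraObjectSchema {
  id: number;
  name: string;
  objectCount?: number;
}

// Pick the schema with the highest objectCount; when counts are
// unavailable they are treated as 0, so the first schema wins ties.
function selectSchema(schemas: JiraObjectSchema[]): JiraObjectSchema {
  if (schemas.length === 0) throw new Error('No schemas discovered');
  return schemas.reduce((prev, current) =>
    (current.objectCount ?? 0) > (prev.objectCount ?? 0) ? current : prev,
  );
}
```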
f4399a8e4e Consolidate documentation and update backend services
- Reorganize docs into 'Core deployment guides' and 'Setup and configuration' subdirectories
- Consolidate redundant documentation files (ACR, pipelines, deployment guides)
- Add documentation consolidation plan
- Update backend database factory and logger services
- Update migration script and docker-compose configurations
- Add PostgreSQL setup script
2026-01-22 22:45:54 +01:00
18aec4ad80 Fix classifications database SSL for Azure 2026-01-22 01:52:45 +01:00
71480cedd6 Fix PostgreSQL SSL configuration for Azure
- Explicitly configure SSL in PostgresAdapter Pool
- Always require SSL for Azure PostgreSQL connections
- Add logging for database connection details
2026-01-22 01:51:20 +01:00
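A minimal sketch of the "always require SSL for Azure" Pool configuration. The helper name, the host-based detection, and `rejectUnauthorized: false` are assumptions for illustration, not the actual PostgresAdapter code:

```typescript
// Sketch: always-on SSL for Azure PostgreSQL connections.
// The Azure-host check and rejectUnauthorized setting are assumptions,
// not the actual PostgresAdapter implementation.
interface PgPoolOptions {
  connectionString: string;
  ssl?: { rejectUnauthorized: boolean };
}

function buildPoolConfig(connectionString: string): PgPoolOptions {
  // Azure Database for PostgreSQL hosts end in postgres.database.azure.com
  const isAzure = connectionString.includes('postgres.database.azure.com');
  return isAzure
    ? { connectionString, ssl: { rejectUnauthorized: false } }
    : { connectionString };
}
```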
b8d7e7a229 Fix logger for Azure App Service and update deployment docs
- Fix logger to handle Azure App Service write restrictions
- Skip file logging in Azure App Service (console logs captured automatically)
- Add deployment scripts for App Service setup
- Update documentation with correct resource names
- Add Key Vault access request documentation
- Add alternative authentication methods for ACR and Key Vault
2026-01-22 00:51:53 +01:00
ffce6e8db3 Fix TypeScript error: Type 'unknown' not assignable to ReactNode in ObjectDetailModal
- Added explicit React import
- Converted complex conditional rendering to IIFE for proper type inference
- Fixed type safety issues in metadata conditional rendering
- Resolves build failure in Azure DevOps pipeline
2026-01-21 23:40:07 +01:00
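The IIFE technique from this commit, reduced to a framework-free sketch (function name and formatting rules are illustrative, not the ObjectDetailModal code):

```typescript
// TypeScript struggles to infer one ReactNode-compatible type across a
// deeply nested conditional expression; wrapping the branches in an
// immediately-invoked function expression (IIFE) gives the whole thing
// a single, explicit return type.
function renderMetadataValue(value: unknown): string {
  return (() => {
    if (value === null || value === undefined) return '-';
    if (value instanceof Date) return value.toISOString();
    if (typeof value === 'number') return value.toFixed(2);
    return String(value);
  })();
}
```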
7cf7e757b9 Improve TypeScript type safety in DataValidationDashboard and ObjectDetailModal
- Use proper type references instead of typeof in DataValidationDashboard
- Improve date value type handling in ObjectDetailModal with explicit type checks
2026-01-21 23:25:05 +01:00
73660cdf66 Fix TypeScript compilation errors in frontend components
- Remove unused variables in ApplicationInfo, ArchitectureDebugPage
- Fix type errors in Dashboard, GovernanceAnalysis, GovernanceModelHelper (PageHeader description prop)
- Add null checks and explicit types in DataValidationDashboard
- Fix ObjectDetailModal type errors for _jiraCreatedAt and Date constructor
- Remove unused imports and variables in SchemaConfigurationSettings
- Update PageHeader to accept string | ReactNode for description prop
2026-01-21 23:19:06 +01:00
c4fa18ed55 Update ACR name to zdlasacr in pipeline configuration 2026-01-21 23:06:17 +01:00
42a04e6cb3 Add Azure deployment automation and documentation
- Add separate deployment pipeline (azure-pipelines-deploy.yml) for App Service deployment
- Add advanced pipeline with deployment slots (azure-pipelines-slots.yml)
- Restore azure-pipelines.yml to build-only (no deployment)
- Add comprehensive Azure setup documentation:
  - AZURE-NEW-SUBSCRIPTION-SETUP.md: Complete step-by-step Azure resource setup
  - AZURE-RESOURCES-OVERVIEW.md: Quick reference for all Azure resources
  - AZURE-ACR-SHARED-SETUP.md: Guide for shared Container Registry
  - AZURE-ACR-NAMING-RECOMMENDATION.md: Naming recommendations for Zuyderland
  - AZURE-PIPELINE-DEPLOYMENT.md: Automated deployment setup guide
  - AZURE-PIPELINE-QUICK-REFERENCE.md: Quick reference for pipeline variables
  - AZURE-PIPELINES-USAGE.md: Guide for using build and deployment pipelines
- Add setup script (scripts/setup-azure-resources.sh) for automated resource creation
- Support for shared ACR across multiple applications
2026-01-21 23:03:48 +01:00
52f851c1f3 Fix TypeError: totalFTE.toFixed is not a function
- Add Number() conversion in overallKPIs calculation to ensure all values are numbers
- Add safeguards in render to handle null/undefined/NaN values before calling .toFixed()
- Prevents crashes when data contains non-numeric values
2026-01-21 13:47:19 +01:00
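The `.toFixed()` safeguard can be sketched as a small helper (the name is illustrative; the actual fix lives in the KPI calculation and render code):

```typescript
// Sketch of the safeguard: coerce to a number first, and only call
// .toFixed() on finite values, so strings/null/undefined/NaN cannot crash.
function safeFixed(value: unknown, digits = 1): string {
  const n = Number(value);
  return Number.isFinite(n) ? n.toFixed(digits) : (0).toFixed(digits);
}
```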
3c11402e6b Load team dashboard data from database cache instead of API
- Replace API-based getTeamDashboardData with database-backed implementation
- Load all ApplicationComponents from normalized cache store
- Reuse existing grouping and KPI calculation logic
- Significantly faster as it avoids hundreds of API calls
- Falls back to API if database query fails
2026-01-21 13:36:00 +01:00
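The cache-first-with-API-fallback pattern from this commit, as a generic sketch (`fromDb`/`fromApi` stand in for the real service methods):

```typescript
// Generic sketch of the database-first pattern with API fallback;
// fromDb/fromApi stand in for the real service methods.
async function loadWithFallback<T>(
  fromDb: () => Promise<T>,
  fromApi: () => Promise<T>,
): Promise<T> {
  try {
    return await fromDb();
  } catch (err) {
    // Degrade gracefully: the slower API path still works
    console.warn('Database cache unavailable, falling back to API:', err);
    return fromApi();
  }
}
```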
e1ad0d9aa7 Fix missing key prop in BIASyncDashboard list items 2026-01-21 11:42:15 +01:00
cb418ed051 Fix missing PageHeader import in LifecyclePipeline component 2026-01-21 11:07:29 +01:00
9ad4bd9a73 Fix remaining TypeScript 'Untyped function calls' errors
- Add DatabaseAdapter type imports where needed
- Properly type database adapter calls with type assertions
- Fix type mismatches in schemaMappingService
- Fix ensureInitialized calls on DatabaseAdapter
2026-01-21 09:39:58 +01:00
6bb5907bbd Fix TypeScript compilation errors in backend
- Fix query parameter type issues (string | string[] to string) in controllers
- Add public getDatabaseAdapter() method to SchemaRepository for db access
- Fix SchemaSyncService export and import issues
- Fix referenceObject vs referenceObjectType property name
- Add missing jiraAssetsClient import in normalizedCacheStore
- Fix duplicate properties in object literals
- Add type annotations for implicit any types
- Fix type predicate issues with generics
- Fix method calls (getEnabledObjectTypes, syncAllSchemas)
- Fix type mismatches (ObjectTypeRecord vs expected types)
- Fix Buffer type issue in biaMatchingService
- Export SchemaSyncService class for ServiceFactory
2026-01-21 09:29:05 +01:00
c331540369 Fix TypeScript type errors in schemaConfigurationService
- Type all db variables as DatabaseAdapter to enable generic method calls
- Add explicit type annotations for row parameters in map callbacks
- Cast ensureInitialized calls to any (not part of DatabaseAdapter interface)

Resolves all 9 TypeScript linter errors in the file
2026-01-21 03:31:55 +01:00
cdee0e8819 UI styling improvements: dashboard headers and navigation
- Restore blue PageHeader on Dashboard (/app-components)
- Update homepage (/) with subtle header design without blue bar
- Add uniform PageHeader styling to application edit page
- Fix Rapporten link on homepage to point to /reports overview
- Improve header descriptions spacing for better readability
2026-01-21 03:24:56 +01:00
e276e77fbc Migrate from xlsx to exceljs to fix security vulnerabilities
- Replace xlsx package (v0.18.5) with exceljs (v4.4.0)
- Remove @types/xlsx dependency (exceljs has built-in TypeScript types)
- Update biaMatchingService.ts to use ExcelJS API:
  - Replace XLSX.read() with workbook.xlsx.load()
  - Replace XLSX.utils.sheet_to_json() with eachRow() iteration
  - Handle 1-based column indexing correctly
- Make loadBIAData() and findBIAMatch() async functions
- Update all callers in applications.ts and claude.ts to use await
- Fix npm audit: 0 vulnerabilities (was 1 high severity)

This migration eliminates the Prototype Pollution and ReDoS vulnerabilities
in the xlsx package while maintaining full functionality.
2026-01-15 09:59:43 +01:00
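When porting from `sheet_to_json()` to `eachRow()`, the 1-based `row.values` indexing is the main pitfall. A small illustrative helper (not the actual biaMatchingService code) that zips a header row and a data row into a keyed record:

```typescript
// ExcelJS's row.values array is 1-based: index 0 is unused and column A
// sits at index 1, unlike the 0-based arrays sheet_to_json users expect.
// Illustrative helper (not the actual biaMatchingService code):
function rowToRecord(headers: unknown[], values: unknown[]): Record<string, unknown> {
  const record: Record<string, unknown> = {};
  for (let col = 1; col < headers.length; col++) {
    const key = String(headers[col] ?? `col${col}`);
    record[key] = values[col] ?? null;
  }
  return record;
}
```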
c60fbe8821 Fix frontend TypeScript compilation errors
- Fix process.env.NODE_ENV in ApplicationInfo.tsx (use import.meta.env.DEV)
- Remove unused variables and imports in Profile.tsx, ProfileSettings.tsx, RoleManagement.tsx, UserManagement.tsx, UserSettings.tsx
- Fix FormData type issue in UserSettings.tsx (use React.FormEvent<HTMLFormElement>)
2026-01-15 03:30:11 +01:00
ff46da842f Fix TypeScript compilation errors
- Fix conflicting Request interface declarations (auth.ts vs authorization.ts)
- Fix email field type issue in auth.ts (handle undefined)
- Fix req.params type issues (string | string[] to string) in auth.ts, roles.ts, users.ts
- Fix apiKey undefined issue in claude.ts (use tavilyApiKey)
- Fix duplicate isConfigured identifier in emailService.ts (rename to _isConfigured)
- Fix confluencePage property type issue in jiraAssetsClient.ts (add type assertion)
2026-01-15 03:26:20 +01:00
168 changed files with 28434 additions and 7701 deletions


@@ -37,7 +37,6 @@ DATABASE_URL=postgresql://cmdb:cmdb-dev@localhost:5432/cmdb
 # Jira Assets Configuration
 # -----------------------------------------------------------------------------
 JIRA_HOST=https://jira.zuyderland.nl
-JIRA_SCHEMA_ID=your_schema_id
 # Jira Service Account Token (for read operations: sync, fetching data)
 # This token is used for all read operations from Jira Assets.


@@ -1,8 +1,8 @@
-# CLAUDE.md - ZiRA Classificatie Tool
+# CLAUDE.md - CMDB Insight
 
 ## Project Overview
 
-**Project:** ZiRA Classificatie Tool (Zuyderland CMDB Editor)
+**Project:** CMDB Insight (Zuyderland CMDB Editor)
 **Organization:** Zuyderland Medisch Centrum - ICMT
 **Purpose:** Interactive tool for classifying ~500 application components into ZiRA (Ziekenhuis Referentie Architectuur) application functions with Jira Assets CMDB integration.
@@ -18,7 +18,7 @@ The project has a working implementation with:
 - SQLite database for classification history
 
 Key files:
-- `zira-classificatie-tool-specificatie.md` - Complete technical specification
+- `cmdb-insight-specificatie.md` - Complete technical specification
 - `zira-taxonomy.json` - ZiRA taxonomy with 90+ application functions across 10 domains
 - `management-parameters.json` - Reference data for dynamics, complexity, users, governance models
@@ -57,7 +57,7 @@ cd frontend && npm run build
 ## Project Structure
 
 ```
-zira-classificatie-tool/
+cmdb-insight/
 ├── package.json # Root workspace package
 ├── docker-compose.yml # Docker development setup
 ├── .env.example # Environment template
@@ -156,7 +156,6 @@ Dutch hospital reference architecture with 90+ application functions organized i
 ```env
 # Jira Data Center
 JIRA_HOST=https://jira.zuyderland.nl
-JIRA_SCHEMA_ID=<schema_id>
 # Jira Authentication Method: 'pat' or 'oauth'
 JIRA_AUTH_METHOD=pat # Choose: 'pat' (Personal Access Token) or 'oauth' (OAuth 2.0)
@@ -271,9 +270,25 @@ SESSION_SECRET=your_secure_random_string
 | File | Purpose |
 |------|---------|
-| `zira-classificatie-tool-specificatie.md` | Complete technical specification |
+| `cmdb-insight-specificatie.md` | Complete technical specification |
 | `zira-taxonomy.json` | 90+ ZiRA application functions |
 | `management-parameters.json` | Dropdown options and reference data |
 | `docs/refactor-plan.md` | **Architecture refactoring plan (Phase 1: Analysis)** |
+
+## Architecture Refactoring
+
+**Status:** Phase 1 Complete - Analysis and Planning
+
+A comprehensive refactoring plan has been created to improve maintainability, reduce duplication, and establish clearer separation of concerns. See `docs/refactor-plan.md` for:
+- Current architecture map (files/folders/modules)
+- Pain points and duplication analysis
+- Target architecture (domain/infrastructure/services/api)
+- Migration steps in order
+- Explicit deletion list (files to remove later)
+- API payload contract and recursion insights
+
+**⚠️ Note:** Phase 1 is analysis only - no functional changes have been made yet.
 
 ## Language

azure-pipelines-deploy.yml (new file, 133 lines)

@@ -0,0 +1,133 @@
# Azure DevOps Pipeline - Deploy to Azure App Service
# Use this pipeline after images have been built and pushed to ACR
#
# To use this pipeline:
# 1. Make sure images exist in ACR (run azure-pipelines.yml first)
# 2. Update variables below with your Azure resource names
# 3. Create Azure service connection for App Service deployment
# 4. Create 'production' environment in Azure DevOps
# 5. Configure this pipeline in Azure DevOps
trigger:
  branches:
    include:
      - main
  tags:
    include:
      - 'v*'

pool:
  vmImage: 'ubuntu-latest'

variables:
  # Azure Container Registry configuration
  acrName: 'zdlas' # Change to your ACR name
  repositoryName: 'cmdb-insight'
  # Azure App Service configuration
  resourceGroup: 'rg-cmdb-insight-prod' # Change to your resource group
  backendAppName: 'cmdb-backend-prod' # Change to your backend app name
  frontendAppName: 'cmdb-frontend-prod' # Change to your frontend app name
  azureSubscription: 'zuyderland-cmdb-subscription' # Azure service connection for App Service deployment
  # Deployment configuration
  imageTag: 'latest' # Use 'latest' or a specific tag like 'v1.0.0'

stages:
  - stage: Deploy
    displayName: 'Deploy to Azure App Service'
    jobs:
      - deployment: DeployBackend
        displayName: 'Deploy Backend'
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebAppContainer@1
                  displayName: 'Deploy Backend Container'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    appName: '$(backendAppName)'
                    containers: '$(acrName).azurecr.io/$(repositoryName)/backend:$(imageTag)'
                    deployToSlotOrASE: false
                - task: AzureCLI@2
                  displayName: 'Restart Backend App Service'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      echo "Restarting backend app service..."
                      az webapp restart \
                        --name $(backendAppName) \
                        --resource-group $(resourceGroup)
                      echo "Backend app service restarted successfully"
      - deployment: DeployFrontend
        displayName: 'Deploy Frontend'
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebAppContainer@1
                  displayName: 'Deploy Frontend Container'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    appName: '$(frontendAppName)'
                    containers: '$(acrName).azurecr.io/$(repositoryName)/frontend:$(imageTag)'
                    deployToSlotOrASE: false
                - task: AzureCLI@2
                  displayName: 'Restart Frontend App Service'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      echo "Restarting frontend app service..."
                      az webapp restart \
                        --name $(frontendAppName) \
                        --resource-group $(resourceGroup)
                      echo "Frontend app service restarted successfully"
      - job: VerifyDeployment
        displayName: 'Verify Deployment'
        dependsOn:
          - DeployBackend
          - DeployFrontend
        steps:
          - task: AzureCLI@2
            displayName: 'Health Check'
            inputs:
              azureSubscription: '$(azureSubscription)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                echo "Checking backend health..."
                BACKEND_URL="https://$(backendAppName).azurewebsites.net/api/health"
                FRONTEND_URL="https://$(frontendAppName).azurewebsites.net"
                echo "Backend URL: $BACKEND_URL"
                echo "Frontend URL: $FRONTEND_URL"
                # Wait a bit for apps to start
                sleep 10
                # Check backend health
                BACKEND_STATUS=$(curl -s -o /dev/null -w "%{http_code}" $BACKEND_URL || echo "000")
                if [ "$BACKEND_STATUS" = "200" ]; then
                  echo "✅ Backend health check passed"
                else
                  echo "⚠️ Backend health check returned status: $BACKEND_STATUS"
                fi
                # Check frontend
                FRONTEND_STATUS=$(curl -s -o /dev/null -w "%{http_code}" $FRONTEND_URL || echo "000")
                if [ "$FRONTEND_STATUS" = "200" ]; then
                  echo "✅ Frontend is accessible"
                else
                  echo "⚠️ Frontend returned status: $FRONTEND_STATUS"
                fi

azure-pipelines-slots.yml (new file, 247 lines)

@@ -0,0 +1,247 @@
# Azure DevOps Pipeline - Build, Push and Deploy with Deployment Slots
# Advanced version with zero-downtime deployment using staging slots
trigger:
  branches:
    include:
      - main
  tags:
    include:
      - 'v*'

pool:
  vmImage: 'ubuntu-latest'

variables:
  # Azure Container Registry configuration
  acrName: 'zdlas' # Change to your ACR name
  repositoryName: 'cmdb-insight'
  dockerRegistryServiceConnection: 'zuyderland-cmdb-acr-connection'
  # Azure App Service configuration
  resourceGroup: 'rg-cmdb-insight-prod'
  backendAppName: 'cmdb-backend-prod'
  frontendAppName: 'cmdb-frontend-prod'
  azureSubscription: 'zuyderland-cmdb-subscription'
  # Deployment configuration
  imageTag: '$(Build.BuildId)'
  deployToProduction: true
  useDeploymentSlots: true # Enable deployment slots
  stagingSlotName: 'staging'

stages:
  - stage: Build
    displayName: 'Build and Push Docker Images'
    jobs:
      - job: BuildImages
        displayName: 'Build Docker Images'
        steps:
          - task: Docker@2
            displayName: 'Build and Push Backend Image'
            inputs:
              command: buildAndPush
              repository: '$(repositoryName)/backend'
              dockerfile: 'backend/Dockerfile.prod'
              containerRegistry: '$(dockerRegistryServiceConnection)'
              tags: |
                $(imageTag)
                latest
          - task: Docker@2
            displayName: 'Build and Push Frontend Image'
            inputs:
              command: buildAndPush
              repository: '$(repositoryName)/frontend'
              dockerfile: 'frontend/Dockerfile.prod'
              containerRegistry: '$(dockerRegistryServiceConnection)'
              tags: |
                $(imageTag)
                latest
          - task: PowerShell@2
            displayName: 'Output Image URLs'
            inputs:
              targetType: 'inline'
              script: |
                $backendImage = "$(acrName).azurecr.io/$(repositoryName)/backend:$(imageTag)"
                $frontendImage = "$(acrName).azurecr.io/$(repositoryName)/frontend:$(imageTag)"
                Write-Host "##vso[task.setvariable variable=backendImage;isOutput=true]$backendImage"
                Write-Host "##vso[task.setvariable variable=frontendImage;isOutput=true]$frontendImage"
                Write-Host "Backend Image: $backendImage"
                Write-Host "Frontend Image: $frontendImage"

  - stage: DeployToStaging
    displayName: 'Deploy to Staging Slot'
    dependsOn: Build
    condition: and(succeeded(), eq(variables['deployToProduction'], true), eq(variables['useDeploymentSlots'], true))
    jobs:
      - deployment: DeployBackendStaging
        displayName: 'Deploy Backend to Staging'
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebAppContainer@1
                  displayName: 'Deploy Backend to Staging Slot'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    appName: '$(backendAppName)'
                    deployToSlotOrASE: true
                    resourceGroupName: '$(resourceGroup)'
                    slotName: '$(stagingSlotName)'
                    containers: '$(acrName).azurecr.io/$(repositoryName)/backend:latest'
                - task: AzureCLI@2
                  displayName: 'Wait for Backend Staging to be Ready'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      echo "Waiting for backend staging to be ready..."
                      sleep 30
                      STAGING_URL="https://$(backendAppName)-$(stagingSlotName).azurewebsites.net/api/health"
                      for i in {1..10}; do
                        STATUS=$(curl -s -o /dev/null -w "%{http_code}" $STAGING_URL || echo "000")
                        if [ "$STATUS" = "200" ]; then
                          echo "✅ Backend staging is ready"
                          exit 0
                        fi
                        echo "Waiting... ($i/10)"
                        sleep 10
                      done
                      echo "⚠️ Backend staging health check timeout"
      - deployment: DeployFrontendStaging
        displayName: 'Deploy Frontend to Staging'
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebAppContainer@1
                  displayName: 'Deploy Frontend to Staging Slot'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    appName: '$(frontendAppName)'
                    deployToSlotOrASE: true
                    resourceGroupName: '$(resourceGroup)'
                    slotName: '$(stagingSlotName)'
                    containers: '$(acrName).azurecr.io/$(repositoryName)/frontend:latest'
                - task: AzureCLI@2
                  displayName: 'Wait for Frontend Staging to be Ready'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      echo "Waiting for frontend staging to be ready..."
                      sleep 20
                      STAGING_URL="https://$(frontendAppName)-$(stagingSlotName).azurewebsites.net"
                      for i in {1..10}; do
                        STATUS=$(curl -s -o /dev/null -w "%{http_code}" $STAGING_URL || echo "000")
                        if [ "$STATUS" = "200" ]; then
                          echo "✅ Frontend staging is ready"
                          exit 0
                        fi
                        echo "Waiting... ($i/10)"
                        sleep 10
                      done
                      echo "⚠️ Frontend staging health check timeout"

  - stage: SwapToProduction
    displayName: 'Swap Staging to Production'
    dependsOn: DeployToStaging
    condition: and(succeeded(), eq(variables['deployToProduction'], true), eq(variables['useDeploymentSlots'], true))
    jobs:
      - deployment: SwapBackend
        displayName: 'Swap Backend to Production'
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureCLI@2
                  displayName: 'Swap Backend Staging to Production'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      echo "Swapping backend staging to production..."
                      az webapp deployment slot swap \
                        --name $(backendAppName) \
                        --resource-group $(resourceGroup) \
                        --slot $(stagingSlotName) \
                        --target-slot production
                      echo "✅ Backend swapped to production"
      - deployment: SwapFrontend
        displayName: 'Swap Frontend to Production'
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureCLI@2
                  displayName: 'Swap Frontend Staging to Production'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      echo "Swapping frontend staging to production..."
                      az webapp deployment slot swap \
                        --name $(frontendAppName) \
                        --resource-group $(resourceGroup) \
                        --slot $(stagingSlotName) \
                        --target-slot production
                      echo "✅ Frontend swapped to production"

  - stage: VerifyProduction
    displayName: 'Verify Production Deployment'
    dependsOn: SwapToProduction
    condition: and(succeeded(), eq(variables['deployToProduction'], true), eq(variables['useDeploymentSlots'], true))
    jobs:
      - job: VerifyDeployment
        displayName: 'Verify Production'
        steps:
          - task: AzureCLI@2
            displayName: 'Production Health Check'
            inputs:
              azureSubscription: '$(azureSubscription)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                echo "Verifying production deployment..."
                BACKEND_URL="https://$(backendAppName).azurewebsites.net/api/health"
                FRONTEND_URL="https://$(frontendAppName).azurewebsites.net"
                echo "Backend URL: $BACKEND_URL"
                echo "Frontend URL: $FRONTEND_URL"
                # Wait for swap to complete
                sleep 15
                # Check backend health
                BACKEND_STATUS=$(curl -s -o /dev/null -w "%{http_code}" $BACKEND_URL || echo "000")
                if [ "$BACKEND_STATUS" = "200" ]; then
                  echo "✅ Backend production health check passed"
                else
                  echo "❌ Backend production health check failed: $BACKEND_STATUS"
                  exit 1
                fi
                # Check frontend
                FRONTEND_STATUS=$(curl -s -o /dev/null -w "%{http_code}" $FRONTEND_URL || echo "000")
                if [ "$FRONTEND_STATUS" = "200" ]; then
                  echo "✅ Frontend production is accessible"
                else
                  echo "❌ Frontend production check failed: $FRONTEND_STATUS"
                  exit 1
                fi
                echo "🎉 Production deployment verified successfully!"


@@ -14,8 +14,8 @@ pool:
 variables:
   # Azure Container Registry name - change to your ACR
-  acrName: 'zdlas'
-  repositoryName: 'zuyderland-cmdb-gui'
+  acrName: 'zdlasacr'
+  repositoryName: 'cmdb-insight'
   dockerRegistryServiceConnection: 'zuyderland-cmdb-acr-connection' # Service connection name in Azure DevOps
   imageTag: '$(Build.BuildId)'


@@ -1,7 +1,7 @@
 {
-  "name": "zira-backend",
+  "name": "cmdb-insight-backend",
   "version": "1.0.0",
-  "description": "ZiRA Classificatie Tool Backend",
+  "description": "CMDB Insight Backend",
   "type": "module",
   "main": "dist/index.js",
   "scripts": {
@@ -9,9 +9,13 @@
     "build": "tsc",
     "start": "node dist/index.js",
     "generate-schema": "tsx scripts/generate-schema.ts",
+    "generate-types": "tsx scripts/generate-types-from-db.ts",
+    "discover-schema": "tsx scripts/discover-schema.ts",
     "migrate": "tsx scripts/run-migrations.ts",
     "check-admin": "tsx scripts/check-admin-user.ts",
-    "migrate:sqlite-to-postgres": "tsx scripts/migrate-sqlite-to-postgres.ts"
+    "migrate:sqlite-to-postgres": "tsx scripts/migrate-sqlite-to-postgres.ts",
+    "migrate:search-enabled": "tsx scripts/migrate-search-enabled.ts",
+    "setup-schema-mappings": "tsx scripts/setup-schema-mappings.ts"
   },
   "dependencies": {
     "@anthropic-ai/sdk": "^0.32.1",
@@ -29,7 +33,7 @@
     "openai": "^6.15.0",
     "pg": "^8.13.1",
     "winston": "^3.17.0",
-    "xlsx": "^0.18.5"
+    "exceljs": "^4.4.0"
   },
   "devDependencies": {
     "@types/better-sqlite3": "^7.6.12",
@@ -38,7 +42,6 @@
     "@types/express": "^5.0.0",
     "@types/node": "^22.9.0",
     "@types/pg": "^8.11.10",
-    "@types/xlsx": "^0.0.35",
     "tsx": "^4.19.2",
     "typescript": "^5.6.3"
   }


@@ -0,0 +1,38 @@
#!/usr/bin/env npx tsx
/**
* Schema Discovery CLI
*
* Manually trigger schema discovery from Jira Assets API.
* This script fetches the schema and stores it in the database.
*
* Usage: npm run discover-schema
*/
import { schemaDiscoveryService } from '../src/services/schemaDiscoveryService.js';
import { schemaCacheService } from '../src/services/schemaCacheService.js';
import { logger } from '../src/services/logger.js';
async function main() {
  try {
    console.log('Starting schema discovery...');
    logger.info('Schema Discovery CLI: Starting manual schema discovery');

    // Force discovery (ignore cache)
    await schemaDiscoveryService.discoverAndStoreSchema(true);

    // Invalidate cache so next request gets fresh data
    schemaCacheService.invalidate();

    console.log('✅ Schema discovery completed successfully!');
    logger.info('Schema Discovery CLI: Schema discovery completed successfully');
    process.exit(0);
  } catch (error) {
    console.error('❌ Schema discovery failed:', error);
    logger.error('Schema Discovery CLI: Schema discovery failed', error);
    process.exit(1);
  }
}

main();


@@ -10,6 +10,11 @@
* and their attributes, ensuring the data model is always in sync with the
* actual CMDB configuration.
*
* Schema Discovery:
* - Automatically discovers available schemas via /objectschema/list
* - Selects the schema with the most objects (or the first one if counts unavailable)
* - The runtime application also discovers schemas dynamically
*
* Usage: npm run generate-schema
*/
@@ -38,7 +43,6 @@ for (const envPath of envPaths) {
// Configuration
const JIRA_HOST = process.env.JIRA_HOST || '';
const JIRA_PAT = process.env.JIRA_PAT || '';
const JIRA_SCHEMA_ID = process.env.JIRA_SCHEMA_ID || '';
const OUTPUT_DIR = path.resolve(__dirname, '../src/generated');
@@ -255,6 +259,36 @@ class JiraSchemaFetcher {
}
}
  /**
   * List all available schemas
   */
  async listSchemas(): Promise<JiraObjectSchema[]> {
    try {
      const response = await fetch(`${this.baseUrl}/objectschema/list`, {
        headers: this.headers,
      });
      if (!response.ok) {
        console.error(`Failed to list schemas: ${response.status} ${response.statusText}`);
        return [];
      }
      const result = await response.json();
      // Handle both array and object responses
      if (Array.isArray(result)) {
        return result;
      } else if (result && typeof result === 'object' && 'objectschemas' in result) {
        return result.objectschemas || [];
      }
      return [];
    } catch (error) {
      console.error(`Error listing schemas:`, error);
      return [];
    }
  }

  /**
   * Test the connection
   */
@@ -752,18 +786,12 @@ function generateDatabaseSchema(generatedAt: Date): string {
'-- =============================================================================',
'-- Core Tables',
'-- =============================================================================',
'',
'-- Cached CMDB objects (all types stored in single table with JSON data)',
'CREATE TABLE IF NOT EXISTS cached_objects (',
' id TEXT PRIMARY KEY,',
' object_key TEXT NOT NULL UNIQUE,',
' object_type TEXT NOT NULL,',
' label TEXT NOT NULL,',
' data JSON NOT NULL,',
' jira_updated_at TEXT,',
' jira_created_at TEXT,',
' cached_at TEXT NOT NULL',
');',
'--',
'-- NOTE: This schema is LEGACY and deprecated.',
'-- The current system uses the normalized schema defined in',
'-- backend/src/services/database/normalized-schema.ts',
'--',
'-- This file is kept for reference and migration purposes only.',
'',
'-- Object relations (references between objects)',
'CREATE TABLE IF NOT EXISTS object_relations (',
@@ -787,10 +815,6 @@ function generateDatabaseSchema(generatedAt: Date): string {
'-- Indices for Performance',
'-- =============================================================================',
'',
'CREATE INDEX IF NOT EXISTS idx_objects_type ON cached_objects(object_type);',
'CREATE INDEX IF NOT EXISTS idx_objects_key ON cached_objects(object_key);',
'CREATE INDEX IF NOT EXISTS idx_objects_updated ON cached_objects(jira_updated_at);',
'CREATE INDEX IF NOT EXISTS idx_objects_label ON cached_objects(label);',
'',
'CREATE INDEX IF NOT EXISTS idx_relations_source ON object_relations(source_id);',
'CREATE INDEX IF NOT EXISTS idx_relations_target ON object_relations(target_id);',
@@ -829,17 +853,10 @@ async function main() {
process.exit(1);
}
if (!JIRA_SCHEMA_ID) {
console.error('❌ ERROR: JIRA_SCHEMA_ID environment variable is required');
console.error(' Set this in your .env file: JIRA_SCHEMA_ID=6');
process.exit(1);
}
if (envLoaded) {
console.log(`🔧 Environment: ${envLoaded}`);
}
console.log(`📡 Jira Host: ${JIRA_HOST}`);
console.log(`📋 Schema ID: ${JIRA_SCHEMA_ID}`);
console.log(`📁 Output Dir: ${OUTPUT_DIR}`);
console.log('');
@@ -862,20 +879,41 @@ async function main() {
console.log('✅ Connection successful');
console.log('');
// Fetch schema info
console.log('📋 Fetching schema information...');
const schema = await fetcher.fetchSchema(JIRA_SCHEMA_ID);
if (!schema) {
console.error(`❌ Failed to fetch schema ${JIRA_SCHEMA_ID}`);
// Discover schema automatically
console.log('📋 Discovering available schemas...');
const schemas = await fetcher.listSchemas();
if (schemas.length === 0) {
console.error('❌ No schemas found');
console.error(' Please ensure Jira Assets is configured and accessible');
process.exit(1);
}
// Select the schema with the most objects (or the first one if counts unavailable)
const schema = schemas.reduce((prev, current) => {
const prevCount = prev.objectCount || 0;
const currentCount = current.objectCount || 0;
return currentCount > prevCount ? current : prev;
});
const selectedSchemaId = schema.id.toString();
console.log(` Found ${schemas.length} schema(s)`);
if (schemas.length > 1) {
console.log(' Available schemas:');
schemas.forEach(s => {
const marker = s.id === schema.id ? ' → ' : ' ';
console.log(`${marker}${s.id}: ${s.name} (${s.objectSchemaKey}) - ${s.objectCount || 0} objects`);
});
console.log(` Using schema: ${schema.name} (ID: ${selectedSchemaId})`);
}
console.log(` Schema: ${schema.name} (${schema.objectSchemaKey})`);
console.log(` Total objects: ${schema.objectCount || 'unknown'}`);
console.log('');
// Fetch ALL object types from the schema
console.log('📦 Fetching all object types from schema...');
const allObjectTypes = await fetcher.fetchAllObjectTypes(JIRA_SCHEMA_ID);
const allObjectTypes = await fetcher.fetchAllObjectTypes(selectedSchemaId);
if (allObjectTypes.length === 0) {
console.error('❌ No object types found in schema');


@@ -0,0 +1,484 @@
#!/usr/bin/env npx tsx
/**
* Type Generation Script - Database to TypeScript
*
* Generates TypeScript types from database schema.
* This script reads the schema from the database (object_types, attributes)
* and generates:
* - TypeScript types (jira-types.ts)
* - Schema metadata (jira-schema.ts)
*
* Usage: npm run generate-types
*/
import * as fs from 'fs';
import * as path from 'path';
import { fileURLToPath } from 'url';
import { createDatabaseAdapter } from '../src/services/database/factory.js';
import type { AttributeDefinition } from '../src/generated/jira-schema.js';
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const OUTPUT_DIR = path.resolve(__dirname, '../src/generated');
interface DatabaseObjectType {
jira_type_id: number;
type_name: string;
display_name: string;
description: string | null;
sync_priority: number;
object_count: number;
}
interface DatabaseAttribute {
jira_attr_id: number;
object_type_name: string;
attr_name: string;
field_name: string;
attr_type: string;
is_multiple: boolean | number;
is_editable: boolean | number;
is_required: boolean | number;
is_system: boolean | number;
reference_type_name: string | null;
description: string | null;
}
function generateTypeScriptType(attrType: string, isMultiple: boolean, isReference: boolean): string {
let tsType: string;
if (isReference) {
tsType = 'ObjectReference';
} else {
switch (attrType) {
case 'text':
case 'textarea':
case 'url':
case 'email':
case 'select':
case 'user':
case 'status':
tsType = 'string';
break;
case 'integer':
case 'float':
tsType = 'number';
break;
case 'boolean':
tsType = 'boolean';
break;
case 'date':
case 'datetime':
tsType = 'string'; // ISO date string
break;
default:
tsType = 'unknown';
}
}
if (isMultiple) {
return `${tsType}[]`;
}
return `${tsType} | null`;
}
function escapeString(str: string): string {
// Escape backslashes first so already-escaped quotes are not double-mangled
return str.replace(/\\/g, '\\\\').replace(/'/g, "\\'").replace(/\n/g, ' ');
}
function generateTypesFile(objectTypes: Array<{
jiraTypeId: number;
name: string;
typeName: string;
objectCount: number;
attributes: AttributeDefinition[];
}>, generatedAt: Date): string {
const lines: string[] = [
'// AUTO-GENERATED FILE - DO NOT EDIT MANUALLY',
'// Generated from database schema',
`// Generated at: ${generatedAt.toISOString()}`,
'//',
'// Re-generate with: npm run generate-types',
'',
'// =============================================================================',
'// Base Types',
'// =============================================================================',
'',
'/** Reference to another CMDB object */',
'export interface ObjectReference {',
' objectId: string;',
' objectKey: string;',
' label: string;',
' // Optional enriched data from referenced object',
' factor?: number;',
'}',
'',
'/** Base interface for all CMDB objects */',
'export interface BaseCMDBObject {',
' id: string;',
' objectKey: string;',
' label: string;',
' _objectType: string;',
' _jiraUpdatedAt: string;',
' _jiraCreatedAt: string;',
'}',
'',
'// =============================================================================',
'// Object Type Interfaces',
'// =============================================================================',
'',
];
for (const objType of objectTypes) {
lines.push(`/** ${objType.name} (Jira Type ID: ${objType.jiraTypeId}, ${objType.objectCount} objects) */`);
lines.push(`export interface ${objType.typeName} extends BaseCMDBObject {`);
lines.push(` _objectType: '${objType.typeName}';`);
lines.push('');
// Group attributes by type
const scalarAttrs = objType.attributes.filter(a => a.type !== 'reference');
const refAttrs = objType.attributes.filter(a => a.type === 'reference');
if (scalarAttrs.length > 0) {
lines.push(' // Scalar attributes');
for (const attr of scalarAttrs) {
const tsType = generateTypeScriptType(attr.type, attr.isMultiple, false);
const comment = attr.description ? ` // ${attr.description}` : '';
lines.push(` ${attr.fieldName}: ${tsType};${comment}`);
}
lines.push('');
}
if (refAttrs.length > 0) {
lines.push(' // Reference attributes');
for (const attr of refAttrs) {
const tsType = generateTypeScriptType(attr.type, attr.isMultiple, true);
const comment = attr.referenceTypeName ? ` // -> ${attr.referenceTypeName}` : '';
lines.push(` ${attr.fieldName}: ${tsType};${comment}`);
}
lines.push('');
}
lines.push('}');
lines.push('');
}
// Generate union type
lines.push('// =============================================================================');
lines.push('// Union Types');
lines.push('// =============================================================================');
lines.push('');
lines.push('/** Union of all CMDB object types */');
lines.push('export type CMDBObject =');
for (let i = 0; i < objectTypes.length; i++) {
const suffix = i < objectTypes.length - 1 ? '' : ';';
lines.push(` | ${objectTypes[i].typeName}${suffix}`);
}
lines.push('');
// Generate type name literal union
lines.push('/** All valid object type names */');
lines.push('export type CMDBObjectTypeName =');
for (let i = 0; i < objectTypes.length; i++) {
const suffix = i < objectTypes.length - 1 ? '' : ';';
lines.push(` | '${objectTypes[i].typeName}'${suffix}`);
}
lines.push('');
// Generate type guards
lines.push('// =============================================================================');
lines.push('// Type Guards');
lines.push('// =============================================================================');
lines.push('');
for (const objType of objectTypes) {
lines.push(`export function is${objType.typeName}(obj: CMDBObject): obj is ${objType.typeName} {`);
lines.push(` return obj._objectType === '${objType.typeName}';`);
lines.push('}');
lines.push('');
}
return lines.join('\n');
}
function generateSchemaFile(objectTypes: Array<{
jiraTypeId: number;
name: string;
typeName: string;
syncPriority: number;
objectCount: number;
attributes: AttributeDefinition[];
}>, generatedAt: Date): string {
const lines: string[] = [
'// AUTO-GENERATED FILE - DO NOT EDIT MANUALLY',
'// Generated from database schema',
`// Generated at: ${generatedAt.toISOString()}`,
'//',
'// Re-generate with: npm run generate-types',
'',
'// =============================================================================',
'// Schema Type Definitions',
'// =============================================================================',
'',
'export interface AttributeDefinition {',
' jiraId: number;',
' name: string;',
' fieldName: string;',
" type: 'text' | 'integer' | 'float' | 'boolean' | 'date' | 'datetime' | 'select' | 'reference' | 'url' | 'email' | 'textarea' | 'user' | 'status' | 'unknown';",
' isMultiple: boolean;',
' isEditable: boolean;',
' isRequired: boolean;',
' isSystem: boolean;',
' referenceTypeId?: number;',
' referenceTypeName?: string;',
' description?: string;',
'}',
'',
'export interface ObjectTypeDefinition {',
' jiraTypeId: number;',
' name: string;',
' typeName: string;',
' syncPriority: number;',
' objectCount: number;',
' attributes: AttributeDefinition[];',
'}',
'',
'// =============================================================================',
'// Schema Metadata',
'// =============================================================================',
'',
`export const SCHEMA_GENERATED_AT = '${generatedAt.toISOString()}';`,
`export const SCHEMA_OBJECT_TYPE_COUNT = ${objectTypes.length};`,
`export const SCHEMA_TOTAL_ATTRIBUTES = ${objectTypes.reduce((sum, ot) => sum + ot.attributes.length, 0)};`,
'',
'// =============================================================================',
'// Object Type Definitions',
'// =============================================================================',
'',
'export const OBJECT_TYPES: Record<string, ObjectTypeDefinition> = {',
];
for (let i = 0; i < objectTypes.length; i++) {
const objType = objectTypes[i];
const comma = i < objectTypes.length - 1 ? ',' : '';
lines.push(` '${objType.typeName}': {`);
lines.push(` jiraTypeId: ${objType.jiraTypeId},`);
lines.push(` name: '${escapeString(objType.name)}',`);
lines.push(` typeName: '${objType.typeName}',`);
lines.push(` syncPriority: ${objType.syncPriority},`);
lines.push(` objectCount: ${objType.objectCount},`);
lines.push(' attributes: [');
for (let j = 0; j < objType.attributes.length; j++) {
const attr = objType.attributes[j];
const attrComma = j < objType.attributes.length - 1 ? ',' : '';
let attrLine = ` { jiraId: ${attr.jiraId}, name: '${escapeString(attr.name)}', fieldName: '${attr.fieldName}', type: '${attr.type}', isMultiple: ${attr.isMultiple}, isEditable: ${attr.isEditable}, isRequired: ${attr.isRequired}, isSystem: ${attr.isSystem}`;
if (attr.referenceTypeName) {
attrLine += `, referenceTypeName: '${attr.referenceTypeName}'`;
}
if (attr.description) {
attrLine += `, description: '${escapeString(attr.description)}'`;
}
attrLine += ` }${attrComma}`;
lines.push(attrLine);
}
lines.push(' ],');
lines.push(` }${comma}`);
}
lines.push('};');
lines.push('');
// Generate lookup maps
lines.push('// =============================================================================');
lines.push('// Lookup Maps');
lines.push('// =============================================================================');
lines.push('');
// Type ID to name map
lines.push('/** Map from Jira Type ID to TypeScript type name */');
lines.push('export const TYPE_ID_TO_NAME: Record<number, string> = {');
for (const objType of objectTypes) {
lines.push(` ${objType.jiraTypeId}: '${objType.typeName}',`);
}
lines.push('};');
lines.push('');
// Type name to ID map
lines.push('/** Map from TypeScript type name to Jira Type ID */');
lines.push('export const TYPE_NAME_TO_ID: Record<string, number> = {');
for (const objType of objectTypes) {
lines.push(` '${objType.typeName}': ${objType.jiraTypeId},`);
}
lines.push('};');
lines.push('');
// Jira name to TypeScript name map
lines.push('/** Map from Jira object type name to TypeScript type name */');
lines.push('export const JIRA_NAME_TO_TYPE: Record<string, string> = {');
for (const objType of objectTypes) {
lines.push(` '${escapeString(objType.name)}': '${objType.typeName}',`);
}
lines.push('};');
lines.push('');
// Helper functions
lines.push('// =============================================================================');
lines.push('// Helper Functions');
lines.push('// =============================================================================');
lines.push('');
lines.push('/** Get attribute definition by type and field name */');
lines.push('export function getAttributeDefinition(typeName: string, fieldName: string): AttributeDefinition | undefined {');
lines.push(' const objectType = OBJECT_TYPES[typeName];');
lines.push(' if (!objectType) return undefined;');
lines.push(' return objectType.attributes.find(a => a.fieldName === fieldName);');
lines.push('}');
lines.push('');
lines.push('/** Get attribute definition by type and Jira attribute ID */');
lines.push('export function getAttributeById(typeName: string, jiraId: number): AttributeDefinition | undefined {');
lines.push(' const objectType = OBJECT_TYPES[typeName];');
lines.push(' if (!objectType) return undefined;');
lines.push(' return objectType.attributes.find(a => a.jiraId === jiraId);');
lines.push('}');
lines.push('');
lines.push('/** Get attribute definition by type and Jira attribute name */');
lines.push('export function getAttributeByName(typeName: string, attrName: string): AttributeDefinition | undefined {');
lines.push(' const objectType = OBJECT_TYPES[typeName];');
lines.push(' if (!objectType) return undefined;');
lines.push(' return objectType.attributes.find(a => a.name === attrName);');
lines.push('}');
lines.push('');
lines.push('/** Get attribute Jira ID by type and attribute name - throws if not found */');
lines.push('export function getAttributeId(typeName: string, attrName: string): number {');
lines.push(' const attr = getAttributeByName(typeName, attrName);');
lines.push(' if (!attr) {');
lines.push(' throw new Error(`Attribute "${attrName}" not found on type "${typeName}"`);');
lines.push(' }');
lines.push(' return attr.jiraId;');
lines.push('}');
lines.push('');
lines.push('/** Get all reference attributes for a type */');
lines.push('export function getReferenceAttributes(typeName: string): AttributeDefinition[] {');
lines.push(' const objectType = OBJECT_TYPES[typeName];');
lines.push(' if (!objectType) return [];');
lines.push(" return objectType.attributes.filter(a => a.type === 'reference');");
lines.push('}');
lines.push('');
lines.push('/** Get all object types sorted by sync priority */');
lines.push('export function getObjectTypesBySyncPriority(): ObjectTypeDefinition[] {');
lines.push(' return Object.values(OBJECT_TYPES).sort((a, b) => a.syncPriority - b.syncPriority);');
lines.push('}');
lines.push('');
return lines.join('\n');
}
async function main() {
const generatedAt = new Date();
console.log('');
console.log('╔════════════════════════════════════════════════════════════════╗');
console.log('║ Type Generation - Database to TypeScript ║');
console.log('╚════════════════════════════════════════════════════════════════╝');
console.log('');
try {
// Connect to database
const db = createDatabaseAdapter();
console.log('✓ Connected to database');
// Ensure schema is discovered first
const { schemaDiscoveryService } = await import('../src/services/schemaDiscoveryService.js');
await schemaDiscoveryService.discoverAndStoreSchema();
console.log('✓ Schema discovered from database');
// Fetch object types
const objectTypeRows = await db.query<DatabaseObjectType>(`
SELECT * FROM object_types
ORDER BY sync_priority, type_name
`);
console.log(`✓ Fetched ${objectTypeRows.length} object types`);
// Fetch attributes
const attributeRows = await db.query<DatabaseAttribute>(`
SELECT * FROM attributes
ORDER BY object_type_name, jira_attr_id
`);
console.log(`✓ Fetched ${attributeRows.length} attributes`);
// Build object types with attributes
const objectTypes = objectTypeRows.map(typeRow => {
const attributes = attributeRows
.filter(a => a.object_type_name === typeRow.type_name)
.map(attrRow => {
// Convert boolean/number for SQLite compatibility
const isMultiple = typeof attrRow.is_multiple === 'boolean' ? attrRow.is_multiple : attrRow.is_multiple === 1;
const isEditable = typeof attrRow.is_editable === 'boolean' ? attrRow.is_editable : attrRow.is_editable === 1;
const isRequired = typeof attrRow.is_required === 'boolean' ? attrRow.is_required : attrRow.is_required === 1;
const isSystem = typeof attrRow.is_system === 'boolean' ? attrRow.is_system : attrRow.is_system === 1;
return {
jiraId: attrRow.jira_attr_id,
name: attrRow.attr_name,
fieldName: attrRow.field_name,
type: attrRow.attr_type as AttributeDefinition['type'],
isMultiple,
isEditable,
isRequired,
isSystem,
referenceTypeName: attrRow.reference_type_name || undefined,
description: attrRow.description || undefined,
} as AttributeDefinition;
});
return {
jiraTypeId: typeRow.jira_type_id,
name: typeRow.display_name,
typeName: typeRow.type_name,
syncPriority: typeRow.sync_priority,
objectCount: typeRow.object_count,
attributes,
};
});
// Ensure output directory exists
if (!fs.existsSync(OUTPUT_DIR)) {
fs.mkdirSync(OUTPUT_DIR, { recursive: true });
}
// Generate TypeScript types file
const typesContent = generateTypesFile(objectTypes, generatedAt);
const typesPath = path.join(OUTPUT_DIR, 'jira-types.ts');
fs.writeFileSync(typesPath, typesContent, 'utf-8');
console.log(`✓ Generated ${typesPath}`);
// Generate schema file
const schemaContent = generateSchemaFile(objectTypes, generatedAt);
const schemaPath = path.join(OUTPUT_DIR, 'jira-schema.ts');
fs.writeFileSync(schemaPath, schemaContent, 'utf-8');
console.log(`✓ Generated ${schemaPath}`);
console.log('');
console.log('✅ Type generation completed successfully!');
console.log(` Generated ${objectTypes.length} object types with ${objectTypes.reduce((sum, ot) => sum + ot.attributes.length, 0)} attributes`);
console.log('');
} catch (error) {
console.error('');
console.error('❌ Type generation failed:', error);
process.exit(1);
}
}
main();
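The attr_type-to-TypeScript mapping in `generateTypeScriptType` is easy to sanity-check in isolation. A minimal standalone sketch of the same rules (an illustrative re-implementation, not code from the repo — `mapAttrType` and `SCALAR_MAP` are names invented here):

```typescript
// Illustrative re-implementation of the mapping used by generateTypeScriptType.
const SCALAR_MAP: Record<string, string> = {
  text: 'string', textarea: 'string', url: 'string', email: 'string',
  select: 'string', user: 'string', status: 'string',
  integer: 'number', float: 'number', boolean: 'boolean',
  date: 'string', datetime: 'string', // dates are ISO strings
};

function mapAttrType(attrType: string, isMultiple: boolean, isReference: boolean): string {
  const base = isReference ? 'ObjectReference' : (SCALAR_MAP[attrType] ?? 'unknown');
  // Multi-valued attributes become arrays; single values are nullable.
  return isMultiple ? `${base}[]` : `${base} | null`;
}

console.log(mapAttrType('integer', false, false)); // number | null
console.log(mapAttrType('text', true, false));     // string[]
console.log(mapAttrType('reference', false, true)); // ObjectReference | null
```

Note the asymmetry: multi-valued attributes are plain arrays (empty when unset), while single values carry `| null` — consumers must null-check scalars but can always iterate arrays.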


@@ -0,0 +1,90 @@
/**
* Migration script: Add search_enabled column to schemas table
*
* This script adds the search_enabled column to the schemas table if it doesn't exist.
*
* Usage:
* npm run migrate:search-enabled
* or
* tsx scripts/migrate-search-enabled.ts
*/
import { getDatabaseAdapter } from '../src/services/database/singleton.js';
import { logger } from '../src/services/logger.js';
async function main() {
try {
console.log('Starting migration: Adding search_enabled column to schemas table...');
const db = getDatabaseAdapter();
await db.ensureInitialized?.();
const isPostgres = db.isPostgres === true;
// Check if column exists and add it if it doesn't
if (isPostgres) {
// PostgreSQL: Check if column exists
const columnExists = await db.queryOne<{ exists: boolean }>(`
SELECT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'schemas' AND column_name = 'search_enabled'
) as exists
`);
if (!columnExists?.exists) {
console.log('Adding search_enabled column to schemas table...');
await db.execute(`
ALTER TABLE schemas ADD COLUMN search_enabled BOOLEAN NOT NULL DEFAULT TRUE;
`);
console.log('✓ Column added successfully');
} else {
console.log('✓ Column already exists');
}
// Create index if it doesn't exist
try {
await db.execute(`
CREATE INDEX IF NOT EXISTS idx_schemas_search_enabled ON schemas(search_enabled);
`);
console.log('✓ Index created/verified');
} catch (error) {
console.log('Index may already exist, continuing...');
}
} else {
// SQLite: Try to query the column to see if it exists
try {
await db.queryOne('SELECT search_enabled FROM schemas LIMIT 1');
console.log('✓ Column already exists');
} catch {
// Column doesn't exist, add it
console.log('Adding search_enabled column to schemas table...');
await db.execute('ALTER TABLE schemas ADD COLUMN search_enabled INTEGER NOT NULL DEFAULT 1');
console.log('✓ Column added successfully');
}
// Create index if it doesn't exist
try {
await db.execute('CREATE INDEX IF NOT EXISTS idx_schemas_search_enabled ON schemas(search_enabled)');
console.log('✓ Index created/verified');
} catch (error) {
console.log('Index may already exist, continuing...');
}
}
// Verify the column exists
try {
await db.queryOne('SELECT search_enabled FROM schemas LIMIT 1');
console.log('✓ Migration completed successfully - search_enabled column verified');
} catch (error) {
console.error('✗ Migration verification failed:', error);
process.exit(1);
}
process.exit(0);
} catch (error) {
console.error('✗ Migration failed:', error);
process.exit(1);
}
}
main();
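The migration's dialect split pairs with the boolean coercion seen in generate-types.ts: PostgreSQL gets a real `BOOLEAN DEFAULT TRUE`, while SQLite stores booleans as `INTEGER` 0/1, so reads must coerce. A small sketch of both halves of that contract (helper names are invented here for illustration):

```typescript
// DDL emitted per dialect, mirroring the migration above.
function searchEnabledDdl(isPostgres: boolean): string {
  return isPostgres
    ? 'ALTER TABLE schemas ADD COLUMN search_enabled BOOLEAN NOT NULL DEFAULT TRUE'
    : 'ALTER TABLE schemas ADD COLUMN search_enabled INTEGER NOT NULL DEFAULT 1';
}

// Reading the column back needs the same coercion generate-types.ts applies
// to is_multiple / is_editable / is_required / is_system.
function toBool(v: boolean | number): boolean {
  return typeof v === 'boolean' ? v : v === 1;
}

console.log(searchEnabledDdl(false)); // SQLite form, INTEGER-backed boolean
console.log(toBool(1), toBool(false)); // true false
```

Keeping the coercion in one helper avoids the four near-identical ternaries repeated per attribute row.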


@@ -17,6 +17,8 @@ const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const SQLITE_CACHE_DB = join(__dirname, '../../data/cmdb-cache.db');
// Note: Legacy support - old SQLite setups may have had separate classifications.db file
// Current setup uses a single database file for all data
const SQLITE_CLASSIFICATIONS_DB = join(__dirname, '../../data/classifications.db');
async function migrate() {
@@ -66,7 +68,8 @@ async function migrateCacheDatabase(pg: Pool) {
const sqlite = new Database(SQLITE_CACHE_DB, { readonly: true });
try {
// Migrate cached_objects
// Migrate cached_objects (LEGACY - only for migrating old data from deprecated schema)
// Note: New databases use the normalized schema (objects + attribute_values tables)
const objects = sqlite.prepare('SELECT * FROM cached_objects').all() as any[];
console.log(` Migrating ${objects.length} cached objects...`);


@@ -0,0 +1,178 @@
/**
* Setup Schema Mappings Script
*
* Configures schema mappings for object types based on the provided configuration.
* Run with: npm run setup-schema-mappings
*/
import { schemaMappingService } from '../src/services/schemaMappingService.js';
import { logger } from '../src/services/logger.js';
import { JIRA_NAME_TO_TYPE } from '../src/generated/jira-schema.js';
// Configuration: Schema ID -> Array of object type display names
const SCHEMA_MAPPINGS: Record<string, string[]> = {
'8': ['User'],
'6': [
'Application Component',
'Flows',
'Server',
'AzureSubscription',
'Certificate',
'Domain',
'Package',
'PackageBuild',
'Privileged User',
'Software',
'SoftwarePatch',
'Supplier',
'Application Management - Subteam',
'Application Management - Team',
'Measures',
'Rebootgroups',
'Application Management - Hosting',
'Application Management - Number of Users',
'Application Management - TAM',
'Application Management - Application Type',
'Application Management - Complexity Factor',
'Application Management - Dynamics Factor',
'ApplicationFunction',
'ApplicationFunctionCategory',
'Business Impact Analyse',
'Business Importance',
'Certificate ClassificationType',
'Certificate Type',
'Hosting Type',
'ICT Governance Model',
'Organisation',
],
};
async function setupSchemaMappings() {
logger.info('Setting up schema mappings...');
try {
let totalMappings = 0;
let skippedMappings = 0;
let errors = 0;
for (const [schemaId, objectTypeNames] of Object.entries(SCHEMA_MAPPINGS)) {
logger.info(`\nConfiguring schema ${schemaId} with ${objectTypeNames.length} object types...`);
for (const displayName of objectTypeNames) {
try {
// Convert display name to typeName
let typeName: string;
if (displayName === 'User') {
// User might not be in the generated schema, use 'User' directly
typeName = 'User';
// First, ensure User exists in object_types table
const { normalizedCacheStore } = await import('../src/services/normalizedCacheStore.js');
const db = (normalizedCacheStore as any).db;
await db.ensureInitialized?.();
// Check if User exists in object_types
const existing = await db.queryOne<{ type_name: string }>(`
SELECT type_name FROM object_types WHERE type_name = ?
`, [typeName]);
if (!existing) {
// Insert User into object_types (we'll use a placeholder jira_type_id)
// The actual jira_type_id will be discovered during schema discovery
logger.info(` Adding "User" to object_types table...`);
try {
await db.execute(`
INSERT INTO object_types (jira_type_id, type_name, display_name, description, sync_priority, object_count, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(jira_type_id) DO NOTHING
`, [
999999, // Placeholder ID - will be updated during schema discovery
'User',
'User',
'User object type from schema 8',
0,
0,
new Date().toISOString(),
new Date().toISOString()
]);
// Also try with type_name as unique constraint
await db.execute(`
INSERT INTO object_types (jira_type_id, type_name, display_name, description, sync_priority, object_count, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(type_name) DO UPDATE SET
display_name = excluded.display_name,
updated_at = excluded.updated_at
`, [
999999,
'User',
'User',
'User object type from schema 8',
0,
0,
new Date().toISOString(),
new Date().toISOString()
]);
logger.info(` ✓ Added "User" to object_types table`);
} catch (error: any) {
// If it already exists, that's fine
if (error.message?.includes('UNIQUE constraint') || error.message?.includes('duplicate key')) {
logger.info(` "User" already exists in object_types table`);
} else {
throw error;
}
}
}
} else {
// Look up typeName from JIRA_NAME_TO_TYPE mapping
typeName = JIRA_NAME_TO_TYPE[displayName];
if (!typeName) {
logger.warn(` ⚠️ Skipping "${displayName}" - typeName not found in schema`);
skippedMappings++;
continue;
}
}
// Set the mapping
await schemaMappingService.setMapping(typeName, schemaId, true);
logger.info(` ✓ Mapped ${typeName} (${displayName}) -> Schema ${schemaId}`);
totalMappings++;
} catch (error) {
logger.error(` ✗ Failed to map "${displayName}" to schema ${schemaId}:`, error);
errors++;
}
}
}
logger.info(`\n✅ Schema mappings setup complete!`);
logger.info(` - Total mappings created: ${totalMappings}`);
if (skippedMappings > 0) {
logger.info(` - Skipped (not found in schema): ${skippedMappings}`);
}
if (errors > 0) {
logger.info(` - Errors: ${errors}`);
}
// Clear cache to ensure fresh lookups
schemaMappingService.clearCache();
logger.info(`\n💾 Cache cleared - mappings are now active`);
} catch (error) {
logger.error('Failed to setup schema mappings:', error);
process.exit(1);
}
}
// Run the script
setupSchemaMappings()
.then(() => {
logger.info('\n✨ Done!');
process.exit(0);
})
.catch((error) => {
logger.error('Script failed:', error);
process.exit(1);
});
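The resolve-or-skip flow above (special-case 'User', otherwise look up `JIRA_NAME_TO_TYPE`, warn and skip on a miss) reduces to a small pure function. A sketch with a stand-in mapping — `resolveTypeName` and `NAME_TO_TYPE` are hypothetical names, not exports of the repo:

```typescript
// Stand-in for the generated JIRA_NAME_TO_TYPE map (illustrative entries only).
const NAME_TO_TYPE: Record<string, string> = {
  'Application Component': 'ApplicationComponent',
  'Server': 'Server',
};

// Mirrors the script's lookup: 'User' is special-cased because it may be
// absent from the generated schema; unknown names return undefined so the
// caller can log a warning and skip instead of failing the whole run.
function resolveTypeName(displayName: string): string | undefined {
  if (displayName === 'User') return 'User';
  return NAME_TO_TYPE[displayName];
}

console.log(resolveTypeName('Server'));       // Server
console.log(resolveTypeName('User'));         // User
console.log(resolveTypeName('Mystery Type')); // undefined
```

Returning `undefined` rather than throwing is what lets one unmapped display name increment `skippedMappings` without aborting the remaining mappings.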


@@ -0,0 +1,548 @@
/**
* DebugController - Debug/testing endpoints for architecture validation
*
* Provides endpoints to run SQL queries and check database state for testing.
*/
import { Request, Response } from 'express';
import { logger } from '../../services/logger.js';
import { getServices } from '../../services/ServiceFactory.js';
export class DebugController {
/**
* Execute a SQL query (read-only for safety)
* POST /api/v2/debug/query
* Body: { sql: string, params?: any[] }
*/
async executeQuery(req: Request, res: Response): Promise<void> {
try {
const { sql, params = [] } = req.body;
if (!sql || typeof sql !== 'string') {
res.status(400).json({ error: 'SQL query required in request body' });
return;
}
// Safety check: only allow SELECT queries
const normalizedSql = sql.trim().toUpperCase();
if (!normalizedSql.startsWith('SELECT')) {
res.status(400).json({ error: 'Only SELECT queries are allowed for security' });
return;
}
const services = getServices();
const db = services.cacheRepo.db;
const result = await db.query(sql, params);
res.json({
success: true,
result,
rowCount: result.length,
});
} catch (error) {
logger.error('DebugController: Query execution failed', error);
res.status(500).json({
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Get object info (ID, key, type) for debugging
* GET /api/v2/debug/objects?objectKey=...
*/
async getObjectInfo(req: Request, res: Response): Promise<void> {
try {
const objectKey = req.query.objectKey as string;
if (!objectKey) {
res.status(400).json({ error: 'objectKey query parameter required' });
return;
}
const services = getServices();
const obj = await services.cacheRepo.getObjectByKey(objectKey);
if (!obj) {
res.status(404).json({ error: 'Object not found' });
return;
}
// Get attribute count
const attrValues = await services.cacheRepo.getAttributeValues(obj.id);
res.json({
object: obj,
attributeValueCount: attrValues.length,
});
} catch (error) {
logger.error('DebugController: Failed to get object info', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Get relation info for debugging
* GET /api/v2/debug/relations?objectKey=...
*/
async getRelationInfo(req: Request, res: Response): Promise<void> {
try {
const objectKey = req.query.objectKey as string;
if (!objectKey) {
res.status(400).json({ error: 'objectKey query parameter required' });
return;
}
const services = getServices();
const obj = await services.cacheRepo.getObjectByKey(objectKey);
if (!obj) {
res.status(404).json({ error: 'Object not found' });
return;
}
// Get relations where this object is source
const sourceRelations = await services.cacheRepo.db.query<{
sourceId: string;
targetId: string;
attributeId: number;
sourceType: string;
targetType: string;
}>(
`SELECT source_id as sourceId, target_id as targetId, attribute_id as attributeId,
source_type as sourceType, target_type as targetType
FROM object_relations
WHERE source_id = ?`,
[obj.id]
);
// Get relations where this object is target
const targetRelations = await services.cacheRepo.db.query<{
sourceId: string;
targetId: string;
attributeId: number;
sourceType: string;
targetType: string;
}>(
`SELECT source_id as sourceId, target_id as targetId, attribute_id as attributeId,
source_type as sourceType, target_type as targetType
FROM object_relations
WHERE target_id = ?`,
[obj.id]
);
res.json({
object: obj,
sourceRelations: sourceRelations.length,
targetRelations: targetRelations.length,
relations: {
outgoing: sourceRelations,
incoming: targetRelations,
},
});
} catch (error) {
logger.error('DebugController: Failed to get relation info', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Get object type statistics
* GET /api/v2/debug/object-types/:typeName/stats
*/
async getObjectTypeStats(req: Request, res: Response): Promise<void> {
try {
const typeName = Array.isArray(req.params.typeName) ? req.params.typeName[0] : req.params.typeName;
if (!typeName) {
res.status(400).json({ error: 'typeName parameter required' });
return;
}
const services = getServices();
// Get object count
const count = await services.cacheRepo.countObjectsByType(typeName);
// Get sample objects
const samples = await services.cacheRepo.getObjectsByType(typeName, { limit: 5 });
// Get enabled status from schema
const typeInfo = await services.schemaRepo.getObjectTypeByTypeName(typeName);
res.json({
typeName,
objectCount: count,
enabled: typeInfo?.enabled || false,
sampleObjects: samples.map(o => ({
id: o.id,
objectKey: o.objectKey,
label: o.label,
})),
});
} catch (error) {
logger.error('DebugController: Failed to get object type stats', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Get all object types with their enabled status (for debugging)
* GET /api/v2/debug/all-object-types
*/
async getAllObjectTypes(req: Request, res: Response): Promise<void> {
try {
const services = getServices();
const db = services.schemaRepo.getDatabaseAdapter();
// Check if object_types table exists
try {
await db.query('SELECT 1 FROM object_types LIMIT 1');
} catch (error) {
logger.error('DebugController: object_types table does not exist or is not accessible', error);
res.status(500).json({
error: 'object_types table does not exist. Please run schema sync first.',
details: error instanceof Error ? error.message : 'Unknown error',
});
return;
}
// Get all object types
let allTypes: Array<{
id: number;
type_name: string | null;
display_name: string;
enabled: boolean | number;
jira_type_id: number;
schema_id: number;
}>;
try {
allTypes = await db.query<{
id: number;
type_name: string | null;
display_name: string;
enabled: boolean | number;
jira_type_id: number;
schema_id: number;
}>(
`SELECT id, type_name, display_name, enabled, jira_type_id, schema_id
FROM object_types
ORDER BY enabled DESC, type_name`
);
} catch (error) {
logger.error('DebugController: Failed to query object_types table', error);
res.status(500).json({
error: 'Failed to query object_types table',
details: error instanceof Error ? error.message : 'Unknown error',
});
return;
}
// Get enabled types via service (may fail if table has issues)
let enabledTypes: Array<{ typeName: string; displayName: string; schemaId: string; objectTypeId: number }> = [];
try {
const rawTypes = await services.schemaRepo.getEnabledObjectTypes();
enabledTypes = rawTypes.map(t => ({
typeName: t.typeName,
displayName: t.displayName,
schemaId: t.schemaId.toString(),
objectTypeId: t.id,
}));
logger.debug(`DebugController: getEnabledObjectTypes returned ${enabledTypes.length} types: ${enabledTypes.map(t => t.typeName).join(', ')}`);
} catch (error) {
logger.error('DebugController: Failed to get enabled types via service', error);
if (error instanceof Error) {
logger.error('Error details:', { message: error.message, stack: error.stack });
}
// Continue without enabled types from service
}
res.json({
allTypes: allTypes.map(t => ({
id: t.id,
typeName: t.type_name,
displayName: t.display_name,
enabled: t.enabled,
jiraTypeId: t.jira_type_id,
schemaId: t.schema_id,
hasTypeName: !!(t.type_name && t.type_name.trim() !== ''),
})),
enabledTypes: enabledTypes.map(t => ({
typeName: t.typeName,
displayName: t.displayName,
schemaId: t.schemaId,
objectTypeId: t.objectTypeId,
})),
summary: {
total: allTypes.length,
enabled: allTypes.filter(t => {
const isPostgres = db.isPostgres === true;
const enabledValue = isPostgres ? (t.enabled === true) : (t.enabled === 1);
return enabledValue && t.type_name && t.type_name.trim() !== '';
}).length,
enabledWithTypeName: enabledTypes.length,
missingTypeName: allTypes.filter(t => !t.type_name || t.type_name.trim() === '').length,
},
});
} catch (error) {
logger.error('DebugController: Failed to get all object types', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Diagnose a specific object type (check database state)
* GET /api/v2/debug/object-types/diagnose/:typeName
* Checks both by type_name and display_name
*/
async diagnoseObjectType(req: Request, res: Response): Promise<void> {
try {
const typeName = Array.isArray(req.params.typeName) ? req.params.typeName[0] : req.params.typeName;
if (!typeName) {
res.status(400).json({ error: 'typeName parameter required' });
return;
}
const services = getServices();
const db = services.schemaRepo.getDatabaseAdapter();
const isPostgres = db.isPostgres === true;
const enabledCondition = isPostgres ? 'enabled IS true' : 'enabled = 1';
// Check by type_name (exact match)
const byTypeName = await db.query<{
id: number;
schema_id: number;
jira_type_id: number;
type_name: string | null;
display_name: string;
enabled: boolean | number;
description: string | null;
}>(
`SELECT id, schema_id, jira_type_id, type_name, display_name, enabled, description
FROM object_types
WHERE type_name = ?`,
[typeName]
);
// Check by display_name (case-insensitive, partial match)
const byDisplayName = await db.query<{
id: number;
schema_id: number;
jira_type_id: number;
type_name: string | null;
display_name: string;
enabled: boolean | number;
description: string | null;
}>(
`SELECT id, schema_id, jira_type_id, type_name, display_name, enabled, description
FROM object_types
WHERE LOWER(display_name) LIKE LOWER(?)`,
[`%${typeName}%`]
);
// Get schema info for found types
const schemaIds = [...new Set([...byTypeName.map(t => t.schema_id), ...byDisplayName.map(t => t.schema_id)])];
const schemas = schemaIds.length > 0
? await db.query<{ id: number; jira_schema_id: string; name: string }>(
`SELECT id, jira_schema_id, name FROM schemas WHERE id IN (${schemaIds.map(() => '?').join(',')})`,
schemaIds
)
: [];
const schemaMap = new Map(schemas.map(s => [s.id, s]));
// Check enabled types via service
let enabledTypesFromService: string[] = [];
try {
const rawTypes = await services.schemaRepo.getEnabledObjectTypes();
enabledTypesFromService = rawTypes.map((t: { typeName: string }) => t.typeName);
} catch (error) {
logger.error('DebugController: Failed to get enabled types from service', error);
}
// Check if type is in enabled list from service
const isInEnabledList = enabledTypesFromService.includes(typeName);
res.json({
requestedType: typeName,
foundByTypeName: byTypeName.map(t => ({
id: t.id,
schemaId: t.schema_id,
jiraSchemaId: schemaMap.get(t.schema_id)?.jira_schema_id,
schemaName: schemaMap.get(t.schema_id)?.name,
jiraTypeId: t.jira_type_id,
typeName: t.type_name,
displayName: t.display_name,
enabled: t.enabled,
enabledValue: isPostgres ? (t.enabled === true) : (t.enabled === 1),
hasTypeName: !!(t.type_name && t.type_name.trim() !== ''),
description: t.description,
})),
foundByDisplayName: byDisplayName.filter(t => !byTypeName.some(t2 => t2.id === t.id)).map(t => ({
id: t.id,
schemaId: t.schema_id,
jiraSchemaId: schemaMap.get(t.schema_id)?.jira_schema_id,
schemaName: schemaMap.get(t.schema_id)?.name,
jiraTypeId: t.jira_type_id,
typeName: t.type_name,
displayName: t.display_name,
enabled: t.enabled,
enabledValue: isPostgres ? (t.enabled === true) : (t.enabled === 1),
hasTypeName: !!(t.type_name && t.type_name.trim() !== ''),
description: t.description,
})),
diagnosis: {
found: byTypeName.length > 0 || byDisplayName.length > 0,
foundExact: byTypeName.length > 0,
foundByDisplay: byDisplayName.length > 0,
isEnabled: byTypeName.length > 0
? (isPostgres ? (byTypeName[0].enabled === true) : (byTypeName[0].enabled === 1))
: byDisplayName.length > 0
? (isPostgres ? (byDisplayName[0].enabled === true) : (byDisplayName[0].enabled === 1))
: false,
hasTypeName: byTypeName.length > 0
? !!(byTypeName[0].type_name && byTypeName[0].type_name.trim() !== '')
: byDisplayName.length > 0
? !!(byDisplayName[0].type_name && byDisplayName[0].type_name.trim() !== '')
: false,
isInEnabledList,
issue: !isInEnabledList && (byTypeName.length > 0 || byDisplayName.length > 0)
? (byTypeName.length > 0 && !(byTypeName[0].type_name && byTypeName[0].type_name.trim() !== '')
? 'Type is enabled in database but has missing type_name (will be filtered out)'
: byTypeName.length > 0 && !(isPostgres ? (byTypeName[0].enabled === true) : (byTypeName[0].enabled === 1))
? 'Type exists but is not enabled in database'
: 'Type exists but not found in enabled list (may have missing type_name)')
: !isInEnabledList && byTypeName.length === 0 && byDisplayName.length === 0
? 'Type not found in database'
: 'No issues detected',
},
enabledTypesCount: enabledTypesFromService.length,
enabledTypesList: enabledTypesFromService,
});
} catch (error) {
logger.error(`DebugController: Failed to diagnose object type ${req.params.typeName}`, error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Fix object types with missing type_name
* POST /api/v2/debug/fix-missing-type-names
* This will try to fix object types that have NULL type_name by looking up by display_name
*/
async fixMissingTypeNames(req: Request, res: Response): Promise<void> {
try {
const services = getServices();
const db = services.schemaRepo.getDatabaseAdapter();
// Find all object types with NULL or empty type_name
// Also check for enabled ones specifically
const isPostgres = db.isPostgres === true;
const enabledCondition = isPostgres ? 'enabled IS true' : 'enabled = 1';
const brokenTypes = await db.query<{
id: number;
jira_type_id: number;
display_name: string;
type_name: string | null;
enabled: boolean | number;
}>(
`SELECT id, jira_type_id, display_name, type_name, enabled
FROM object_types
WHERE (type_name IS NULL OR type_name = '')
ORDER BY enabled DESC, display_name`
);
// Also check enabled types specifically
const enabledWithNullTypeName = await db.query<{
id: number;
jira_type_id: number;
display_name: string;
type_name: string | null;
enabled: boolean | number;
}>(
`SELECT id, jira_type_id, display_name, type_name, enabled
FROM object_types
WHERE (type_name IS NULL OR type_name = '') AND ${enabledCondition}`
);
if (enabledWithNullTypeName.length > 0) {
logger.warn(`DebugController: Found ${enabledWithNullTypeName.length} ENABLED object types with missing type_name: ${enabledWithNullTypeName.map(t => t.display_name).join(', ')}`);
}
logger.info(`DebugController: Found ${brokenTypes.length} object types with missing type_name`);
const fixes: Array<{ id: number; displayName: string; fixedTypeName: string }> = [];
const errors: Array<{ id: number; error: string }> = [];
// Generate type_name from display_name using toPascalCase (import once, outside the loop)
const { toPascalCase } = await import('../../services/schemaUtils.js');
for (const broken of brokenTypes) {
try {
const fixedTypeName = toPascalCase(broken.display_name);
if (!fixedTypeName || fixedTypeName.trim() === '') {
errors.push({
id: broken.id,
error: `Could not generate type_name from display_name: "${broken.display_name}"`,
});
continue;
}
// Update the record
await db.execute(
`UPDATE object_types SET type_name = ?, updated_at = ? WHERE id = ?`,
[fixedTypeName, new Date().toISOString(), broken.id]
);
fixes.push({
id: broken.id,
displayName: broken.display_name,
fixedTypeName,
});
logger.info(`DebugController: Fixed object type id=${broken.id}, display_name="${broken.display_name}" -> type_name="${fixedTypeName}"`);
} catch (error) {
errors.push({
id: broken.id,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
// Re-fetch enabled types to verify the fix (reuses the services instance from above)
const rawTypes = await services.schemaRepo.getEnabledObjectTypes();
const enabledTypesAfterFix = rawTypes.map(t => t.typeName);
res.json({
success: true,
fixed: fixes.length,
errorCount: errors.length,
fixes,
errors: errors.length > 0 ? errors : undefined,
enabledTypesAfterFix: enabledTypesAfterFix,
note: enabledWithNullTypeName.length > 0
? `Fixed ${enabledWithNullTypeName.length} enabled types that were missing type_name. They should now appear in enabled types list.`
: undefined,
});
} catch (error) {
logger.error('DebugController: Failed to fix missing type names', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
}
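The controller above repeatedly normalizes the `enabled` column, which is stored as a boolean on PostgreSQL but as 0/1 on SQLite. A minimal sketch of that check extracted into a helper — `isEnabled` is hypothetical and not part of the original codebase:

```typescript
// Sketch only: the repeated `isPostgres ? (t.enabled === true) : (t.enabled === 1)`
// pattern from the controller, captured as a hypothetical helper.
function isEnabled(value: boolean | number, isPostgres: boolean): boolean {
  return isPostgres ? value === true : value === 1;
}

console.log(isEnabled(true, true)); // true  (PostgreSQL boolean)
console.log(isEnabled(1, false));   // true  (SQLite integer)
console.log(isEnabled(0, false));   // false
```

Centralizing the check would also keep the strict comparisons (`=== true`, `=== 1`) from drifting between the many call sites.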


@@ -0,0 +1,54 @@
/**
* HealthController - API health check endpoint
*
* Public endpoint (no auth required) to check if V2 API is working.
*/
import { Request, Response } from 'express';
import { logger } from '../../services/logger.js';
import { getServices } from '../../services/ServiceFactory.js';
export class HealthController {
/**
* Health check endpoint
* GET /api/v2/health
*/
async health(req: Request, res: Response): Promise<void> {
try {
const services = getServices();
// Check if services are initialized
const isInitialized = !!services.queryService;
// Check database connection (simple query)
let dbConnected = false;
try {
await services.schemaRepo.getAllSchemas();
dbConnected = true;
} catch (error) {
logger.warn('V2 Health: Database connection check failed', error);
}
res.json({
status: 'ok',
apiVersion: 'v2',
timestamp: new Date().toISOString(),
services: {
initialized: isInitialized,
database: dbConnected ? 'connected' : 'disconnected',
},
featureFlag: {
useV2Api: process.env.USE_V2_API === 'true',
},
});
} catch (error) {
logger.error('V2 Health: Health check failed', error);
res.status(500).json({
status: 'error',
apiVersion: 'v2',
timestamp: new Date().toISOString(),
error: 'Health check failed',
});
}
}
}


@@ -0,0 +1,176 @@
/**
* ObjectsController - API handlers for object operations
*
* NO SQL, NO parsing - delegates to services.
*/
import { Request, Response } from 'express';
import { logger } from '../../services/logger.js';
import { getServices } from '../../services/ServiceFactory.js';
import type { CMDBObject, CMDBObjectTypeName } from '../../generated/jira-types.js';
import { getParamString, getQueryString, getQueryNumber } from '../../utils/queryHelpers.js';
export class ObjectsController {
/**
* Get a single object by ID or objectKey
* GET /api/v2/objects/:type/:id?refresh=true
* Supports both object ID and objectKey (checks objectKey if ID lookup fails)
*/
async getObject(req: Request, res: Response): Promise<void> {
try {
const type = getParamString(req, 'type');
const idOrKey = getParamString(req, 'id');
const forceRefresh = getQueryString(req, 'refresh') === 'true';
const services = getServices();
// Try to find object ID if idOrKey might be an objectKey
let objectId = idOrKey;
let objRecord = await services.cacheRepo.getObject(idOrKey);
if (!objRecord) {
// Try as objectKey
objRecord = await services.cacheRepo.getObjectByKey(idOrKey);
if (objRecord) {
objectId = objRecord.id;
}
}
// Force refresh if requested
if (forceRefresh && objectId) {
const enabledTypes = await services.schemaRepo.getEnabledObjectTypes();
const enabledTypeSet = new Set(enabledTypes.map(t => t.typeName));
const refreshResult = await services.refreshService.refreshObject(objectId, enabledTypeSet);
if (!refreshResult.success) {
res.status(500).json({ error: refreshResult.error || 'Failed to refresh object' });
return;
}
}
// Get from cache
if (!objectId) {
res.status(404).json({ error: 'Object not found (by ID or key)' });
return;
}
const object = await services.queryService.getObject<CMDBObject>(type as CMDBObjectTypeName, objectId);
if (!object) {
res.status(404).json({ error: 'Object not found' });
return;
}
res.json(object);
} catch (error) {
logger.error('ObjectsController: Failed to get object', error);
res.status(500).json({ error: 'Failed to get object' });
}
}
/**
* Get all objects of a type
* GET /api/v2/objects/:type?limit=100&offset=0&search=term
*/
async getObjects(req: Request, res: Response): Promise<void> {
try {
const type = getParamString(req, 'type');
const limit = getQueryNumber(req, 'limit', 1000);
const offset = getQueryNumber(req, 'offset', 0);
const search = getQueryString(req, 'search');
const services = getServices();
logger.info(`ObjectsController.getObjects: Querying for type="${type}" with limit=${limit}, offset=${offset}, search=${search || 'none'}`);
let objects: CMDBObject[];
if (search) {
objects = await services.queryService.searchByLabel<CMDBObject>(
type as CMDBObjectTypeName,
search,
{ limit, offset }
);
} else {
objects = await services.queryService.getObjects<CMDBObject>(
type as CMDBObjectTypeName,
{ limit, offset }
);
}
const totalCount = await services.queryService.countObjects(type as CMDBObjectTypeName);
logger.info(`ObjectsController.getObjects: Found ${objects.length} objects of type "${type}" (total count: ${totalCount})`);
// If no objects found, provide diagnostic information
if (objects.length === 0) {
// Check what object types actually exist in the database
const db = services.cacheRepo.db;
try {
const availableTypes = await db.query<{ object_type_name: string; count: number }>(
`SELECT object_type_name, COUNT(*) as count
FROM objects
GROUP BY object_type_name
ORDER BY count DESC
LIMIT 10`
);
if (availableTypes.length > 0) {
logger.warn(`ObjectsController.getObjects: No objects found for type "${type}". Available types in database:`, {
requestedType: type,
availableTypes: availableTypes.map(t => ({ typeName: t.object_type_name, count: t.count })),
});
}
} catch (error) {
logger.debug('ObjectsController.getObjects: Failed to query available types', error);
}
}
res.json({
objectType: type,
objects,
count: objects.length,
totalCount,
offset,
limit,
});
} catch (error) {
logger.error('ObjectsController: Failed to get objects', error);
res.status(500).json({ error: 'Failed to get objects' });
}
}
/**
* Update an object
* PUT /api/v2/objects/:type/:id
*/
async updateObject(req: Request, res: Response): Promise<void> {
try {
const type = getParamString(req, 'type');
const id = getParamString(req, 'id');
const updates = req.body as Record<string, unknown>;
const services = getServices();
const result = await services.writeThroughService.updateObject(
type as CMDBObjectTypeName,
id,
updates
);
if (!result.success) {
res.status(400).json({ error: result.error || 'Failed to update object' });
return;
}
// Fetch updated object
const updated = await services.queryService.getObject<CMDBObject>(
type as CMDBObjectTypeName,
id
);
res.json(updated || { success: true });
} catch (error) {
logger.error('ObjectsController: Failed to update object', error);
res.status(500).json({ error: 'Failed to update object' });
}
}
}
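`getObject` above tries the path parameter as an object ID first and falls back to treating it as an objectKey. The lookup pattern can be sketched with an in-memory stand-in for `cacheRepo` (the `CachedObject` shape and map names here are assumptions for illustration):

```typescript
// Minimal sketch of the id-or-key fallback in getObject, using in-memory maps
// instead of the real cache repository.
interface CachedObject {
  id: string;
  objectKey: string;
  label: string;
}

const obj: CachedObject = { id: '101', objectKey: 'CMDB-7', label: 'web-01' };
const byId = new Map<string, CachedObject>([[obj.id, obj]]);
const byKey = new Map<string, CachedObject>([[obj.objectKey, obj]]);

// Try the raw value as an ID first, then as an objectKey.
function resolveObject(idOrKey: string): CachedObject | undefined {
  return byId.get(idOrKey) ?? byKey.get(idOrKey);
}

console.log(resolveObject('101')?.label);    // 'web-01'
console.log(resolveObject('CMDB-7')?.label); // 'web-01'
console.log(resolveObject('nope'));          // undefined
```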


@@ -0,0 +1,289 @@
/**
* SyncController - API handlers for sync operations
*/
import { Request, Response } from 'express';
import { logger } from '../../services/logger.js';
import { getServices } from '../../services/ServiceFactory.js';
export class SyncController {
/**
* Sync all schemas
* POST /api/v2/sync/schemas
*/
async syncSchemas(req: Request, res: Response): Promise<void> {
try {
const services = getServices();
const result = await services.schemaSyncService.syncAll();
res.json({
...result,
success: result.success ?? true,
});
} catch (error) {
logger.error('SyncController: Failed to sync schemas', error);
res.status(500).json({
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Sync all enabled object types
* POST /api/v2/sync/objects
*/
async syncAllObjects(req: Request, res: Response): Promise<void> {
try {
const services = getServices();
// Get enabled types
const rawTypes = await services.schemaRepo.getEnabledObjectTypes();
if (rawTypes.length === 0) {
res.status(400).json({
success: false,
error: 'No object types enabled for syncing. Please configure object types in Schema Configuration.',
});
return;
}
const results = [];
let totalObjectsProcessed = 0;
let totalObjectsCached = 0;
let totalRelations = 0;
// Sync each enabled type
for (const type of rawTypes) {
const result = await services.objectSyncService.syncObjectType(
type.schemaId.toString(),
type.id,
type.typeName,
type.displayName
);
results.push({
typeName: type.typeName,
displayName: type.displayName,
...result,
});
totalObjectsProcessed += result.objectsProcessed;
totalObjectsCached += result.objectsCached;
totalRelations += result.relationsExtracted;
}
res.json({
success: true,
stats: results,
totalObjectsProcessed,
totalObjectsCached,
totalRelations,
});
} catch (error) {
logger.error('SyncController: Failed to sync objects', error);
res.status(500).json({
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Sync a specific object type
* POST /api/v2/sync/objects/:typeName
*/
async syncObjectType(req: Request, res: Response): Promise<void> {
try {
const typeName = Array.isArray(req.params.typeName) ? req.params.typeName[0] : req.params.typeName;
if (!typeName) {
res.status(400).json({ error: 'typeName parameter required' });
return;
}
const services = getServices();
// Get enabled types
let rawTypes = await services.schemaRepo.getEnabledObjectTypes();
let enabledTypes = rawTypes.map(t => ({
typeName: t.typeName,
displayName: t.displayName,
schemaId: t.schemaId.toString(),
objectTypeId: t.id,
}));
// Filter out any entries with missing typeName
enabledTypes = enabledTypes.filter((t: { typeName?: string }) => t && t.typeName);
// Debug logging - also check database directly
logger.info(`SyncController: Looking for type "${typeName}" in ${enabledTypes.length} enabled types`);
logger.debug(`SyncController: Enabled types: ${JSON.stringify(enabledTypes.map((t: { typeName?: string; displayName?: string }) => ({ typeName: t?.typeName, displayName: t?.displayName })))}`);
// Additional debug: Check database directly for enabled types (including those with missing type_name)
const db = services.schemaRepo.getDatabaseAdapter();
const isPostgres = db.isPostgres === true;
const enabledCondition = isPostgres ? 'enabled IS true' : 'enabled = 1';
const dbCheck = await db.query<{ type_name: string | null; display_name: string; enabled: boolean | number; id: number; jira_type_id: number }>(
`SELECT id, jira_type_id, type_name, display_name, enabled FROM object_types WHERE ${enabledCondition}`
);
logger.info(`SyncController: Found ${dbCheck.length} enabled types in database (raw check)`);
logger.debug(`SyncController: Database enabled types (raw): ${JSON.stringify(dbCheck.map(t => ({ id: t.id, displayName: t.display_name, typeName: t.type_name, hasTypeName: !!(t.type_name && t.type_name.trim() !== '') })))}`);
// Check if AzureSubscription or similar is enabled but missing type_name
const typeNameLower = typeName.toLowerCase();
const matchingByDisplayName = dbCheck.filter((t: { display_name: string }) =>
t.display_name.toLowerCase().includes(typeNameLower) ||
typeNameLower.includes(t.display_name.toLowerCase())
);
if (matchingByDisplayName.length > 0) {
logger.warn(`SyncController: Found enabled type(s) matching "${typeName}" by display_name but not in enabled list:`, {
matches: matchingByDisplayName.map(t => ({
id: t.id,
displayName: t.display_name,
typeName: t.type_name,
hasTypeName: !!(t.type_name && t.type_name.trim() !== ''),
enabled: t.enabled,
})),
});
}
const type = enabledTypes.find((t: { typeName?: string }) => t && t.typeName === typeName);
if (!type) {
// Check if type exists but is not enabled or has missing type_name
const allType = await services.schemaRepo.getObjectTypeByTypeName(typeName);
if (allType) {
// Debug: Check the actual enabled value and query
const enabledValue = allType.enabled;
const enabledType = typeof enabledValue;
logger.warn(`SyncController: Type "${typeName}" found but not in enabled list. enabled=${enabledValue} (type: ${enabledType}), enabledTypes.length=${enabledTypes.length}`);
logger.debug(`SyncController: Enabled types details: ${JSON.stringify(enabledTypes)}`);
// Try to find it with different case (handle undefined typeName)
const typeNameLower = typeName.toLowerCase();
const caseInsensitiveMatch = enabledTypes.find((t: { typeName?: string }) => t && t.typeName && t.typeName.toLowerCase() === typeNameLower);
if (caseInsensitiveMatch) {
logger.warn(`SyncController: Found type with different case: "${caseInsensitiveMatch.typeName}" vs "${typeName}"`);
// Use the found type with correct case
const result = await services.objectSyncService.syncObjectType(
caseInsensitiveMatch.schemaId.toString(),
caseInsensitiveMatch.objectTypeId,
caseInsensitiveMatch.typeName,
caseInsensitiveMatch.displayName
);
res.json({
success: true,
...result,
hasErrors: result.errors.length > 0,
note: `Type name case corrected: "${typeName}" -> "${caseInsensitiveMatch.typeName}"`,
});
return;
}
// Direct SQL query to verify enabled status and type_name
const db = services.schemaRepo.getDatabaseAdapter();
const isPostgres = db.isPostgres === true;
const rawCheck = await db.queryOne<{ enabled: boolean | number; type_name: string | null; display_name: string }>(
`SELECT enabled, type_name, display_name FROM object_types WHERE type_name = ?`,
[typeName]
);
// Check if type is enabled but missing type_name in enabled list (might be filtered out)
const enabledCondition = isPostgres ? 'enabled IS true' : 'enabled = 1';
const enabledWithMissingTypeName = await db.query<{ display_name: string; type_name: string | null; enabled: boolean | number }>(
`SELECT display_name, type_name, enabled FROM object_types WHERE LOWER(display_name) LIKE LOWER(?) AND ${enabledCondition}`,
[`%${typeName}%`]
);
// Get list of all enabled type names for better error message
const enabledTypeNames = enabledTypes.map((t: { typeName?: string }) => t.typeName).filter(Boolean) as string[];
// Check if the issue is that the type is enabled but has a missing type_name
if (rawCheck && (rawCheck.enabled === true || rawCheck.enabled === 1)) {
if (!rawCheck.type_name || rawCheck.type_name.trim() === '') {
res.status(400).json({
success: false,
error: `Object type "${typeName}" is enabled in the database but has a missing or empty type_name. This prevents it from being synced. Please run schema sync again to fix the type_name, or use the "Fix Missing Type Names" debug tool (Settings → Debug).`,
details: {
requestedType: typeName,
displayName: rawCheck.display_name,
enabledInDatabase: rawCheck.enabled,
typeNameInDatabase: rawCheck.type_name,
enabledTypesCount: enabledTypes.length,
enabledTypeNames: enabledTypeNames,
hint: 'Run schema sync to ensure all object types have a valid type_name, or use the Debug page to fix missing type names.',
},
});
return;
}
}
res.status(400).json({
success: false,
error: `Object type "${typeName}" is not enabled for syncing. Currently enabled types: ${enabledTypeNames.length > 0 ? enabledTypeNames.join(', ') : 'none'}. Please enable "${typeName}" in Schema Configuration settings (Settings → Schema Configuratie).`,
details: {
requestedType: typeName,
enabledInDatabase: rawCheck?.enabled,
typeNameInDatabase: rawCheck?.type_name,
enabledTypesCount: enabledTypes.length,
enabledTypeNames: enabledTypeNames,
hint: enabledTypeNames.length === 0
? 'No object types are currently enabled. Please enable at least one object type in Schema Configuration.'
: `You enabled: ${enabledTypeNames.join(', ')}. Please enable "${typeName}" if you want to sync it.`,
},
});
} else {
// Type not found by type_name - check by display_name (case-insensitive)
const db = services.schemaRepo.getDatabaseAdapter();
const byDisplayName = await db.queryOne<{ enabled: boolean | number; type_name: string | null; display_name: string }>(
`SELECT enabled, type_name, display_name FROM object_types WHERE LOWER(display_name) LIKE LOWER(?) LIMIT 1`,
[`%${typeName}%`]
);
if (byDisplayName && (byDisplayName.enabled === true || byDisplayName.enabled === 1)) {
// Type is enabled but type_name might be missing or different
res.status(400).json({
success: false,
error: `Found enabled type "${byDisplayName.display_name}" but it has ${byDisplayName.type_name ? `type_name="${byDisplayName.type_name}"` : 'missing type_name'}. ${!byDisplayName.type_name ? 'Please run schema sync to fix the type_name, or use the "Fix Missing Type Names" debug tool.' : `Please use the correct type_name: "${byDisplayName.type_name}"`}`,
details: {
requestedType: typeName,
foundDisplayName: byDisplayName.display_name,
foundTypeName: byDisplayName.type_name,
enabledInDatabase: byDisplayName.enabled,
hint: !byDisplayName.type_name
? 'Run schema sync to ensure all object types have a valid type_name.'
: `Use type_name "${byDisplayName.type_name}" instead of "${typeName}"`,
},
});
return;
}
res.status(400).json({
success: false,
error: `Object type ${typeName} not found. Available enabled types: ${enabledTypes.map((t: { typeName?: string }) => t.typeName).filter(Boolean).join(', ') || 'none'}. Please run schema sync first.`,
});
}
return;
}
const result = await services.objectSyncService.syncObjectType(
type.schemaId.toString(),
type.objectTypeId,
type.typeName,
type.displayName
);
// Return success even if there are errors (errors are in result.errors array)
res.json({
success: true,
...result,
hasErrors: result.errors.length > 0,
});
} catch (error) {
logger.error(`SyncController: Failed to sync object type ${req.params.typeName}`, error);
res.status(500).json({
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
}


@@ -0,0 +1,47 @@
/**
* V2 API Routes - New refactored architecture
*
* Feature flag: USE_V2_API=true enables these routes
*/
import { Router } from 'express';
import { ObjectsController } from '../controllers/ObjectsController.js';
import { SyncController } from '../controllers/SyncController.js';
import { HealthController } from '../controllers/HealthController.js';
import { DebugController } from '../controllers/DebugController.js';
import { requireAuth, requirePermission } from '../../middleware/authorization.js';
const router = Router();
const objectsController = new ObjectsController();
const syncController = new SyncController();
const healthController = new HealthController();
const debugController = new DebugController();
// Health check - public endpoint (no auth required)
router.get('/health', (req, res) => healthController.health(req, res));
// All other routes require authentication
router.use(requireAuth);
// Object routes
router.get('/objects/:type', requirePermission('search'), (req, res) => objectsController.getObjects(req, res));
router.get('/objects/:type/:id', requirePermission('search'), (req, res) => objectsController.getObject(req, res));
router.put('/objects/:type/:id', requirePermission('write'), (req, res) => objectsController.updateObject(req, res));
// Sync routes
router.post('/sync/schemas', requirePermission('admin'), (req, res) => syncController.syncSchemas(req, res));
router.post('/sync/objects', requirePermission('admin'), (req, res) => syncController.syncAllObjects(req, res));
router.post('/sync/objects/:typeName', requirePermission('admin'), (req, res) => syncController.syncObjectType(req, res));
// Debug routes (admin only)
// IMPORTANT: More specific routes must come BEFORE parameterized routes
router.post('/debug/query', requirePermission('admin'), (req, res) => debugController.executeQuery(req, res));
router.get('/debug/objects', requirePermission('admin'), (req, res) => debugController.getObjectInfo(req, res));
router.get('/debug/relations', requirePermission('admin'), (req, res) => debugController.getRelationInfo(req, res));
router.get('/debug/all-object-types', requirePermission('admin'), (req, res) => debugController.getAllObjectTypes(req, res));
router.post('/debug/fix-missing-type-names', requirePermission('admin'), (req, res) => debugController.fixMissingTypeNames(req, res));
// Specific routes before parameterized routes
router.get('/debug/object-types/diagnose/:typeName', requirePermission('admin'), (req, res) => debugController.diagnoseObjectType(req, res));
router.get('/debug/object-types/:typeName/stats', requirePermission('admin'), (req, res) => debugController.getObjectTypeStats(req, res));
export default router;
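The header comment says these routes sit behind the `USE_V2_API` feature flag; the gate itself lives in the app entry point, which is not part of this diff. A minimal sketch of how that gate might look — only the env-var name comes from this diff, the mount path and `app` wiring are assumptions:

```typescript
// Hypothetical feature-flag gate for mounting the V2 router.
function shouldMountV2(env: Record<string, string | undefined>): boolean {
  return env.USE_V2_API === 'true';
}

// In the (assumed) Express entry point:
//   if (shouldMountV2(process.env)) app.use('/api/v2', v2Routes);
console.log(shouldMountV2({ USE_V2_API: 'true' }));  // true
console.log(shouldMountV2({ USE_V2_API: 'false' })); // false
console.log(shouldMountV2({}));                      // false
```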


@@ -28,7 +28,6 @@ export type JiraAuthMethod = 'pat' | 'oauth';
interface Config {
// Jira Assets
jiraHost: string;
jiraSchemaId: string;
// Jira Service Account Token (for read operations: sync, fetching data)
jiraServiceAccountToken: string;
@@ -90,7 +89,6 @@ function getJiraAuthMethod(): JiraAuthMethod {
export const config: Config = {
// Jira Assets
jiraHost: getOptionalEnvVar('JIRA_HOST', 'https://jira.zuyderland.nl'),
jiraSchemaId: getOptionalEnvVar('JIRA_SCHEMA_ID'),
// Jira Service Account Token (for read operations: sync, fetching data)
jiraServiceAccountToken: getOptionalEnvVar('JIRA_SERVICE_ACCOUNT_TOKEN'),
@@ -130,7 +128,6 @@ export function validateConfig(): void {
if (config.jiraAuthMethod === 'pat') {
// JIRA_PAT is configured in user profiles, not in ENV
warnings.push('JIRA_AUTH_METHOD=pat - users must configure PAT in their profile settings');
} else if (config.jiraAuthMethod === 'oauth') {
if (!config.jiraOAuthClientId) {
missingVars.push('JIRA_OAUTH_CLIENT_ID (required for OAuth authentication)');
@@ -143,17 +140,11 @@ export function validateConfig(): void {
}
}
// General required config
if (!config.jiraSchemaId) missingVars.push('JIRA_SCHEMA_ID');
// Service account token warning (not required, but recommended for sync operations)
if (!config.jiraServiceAccountToken) {
warnings.push('JIRA_SERVICE_ACCOUNT_TOKEN not configured - sync and read operations may not work. Users can still use their personal PAT for reads as fallback.');
}
// AI API keys are configured in user profiles, not in ENV
warnings.push('AI API keys must be configured in user profile settings');
if (warnings.length > 0) {
warnings.forEach(w => console.warn(`Warning: ${w}`));
}


@@ -0,0 +1,121 @@
// ==========================
// API Payload Types
// ==========================
export interface AssetsPayload {
objectEntries: ObjectEntry[];
}
export interface ObjectEntry {
id: string | number;
objectKey: string;
label: string;
objectType: {
id: number;
name: string;
};
created: string;
updated: string;
hasAvatar: boolean;
timestamp: number;
attributes?: ObjectAttribute[];
}
export interface ObjectAttribute {
id: number;
objectTypeAttributeId: number;
objectAttributeValues: ObjectAttributeValue[];
}
// ==========================
// Attribute Value Union
// ==========================
export type ObjectAttributeValue =
| SimpleValue
| StatusValue
| ConfluenceValue
| UserValue
| ReferenceValue;
export interface SimpleValue {
value: string | number | boolean;
searchValue: string;
referencedType: false;
displayValue: string;
}
export interface StatusValue {
status: { id: number; name: string; category: number };
searchValue: string;
referencedType: boolean;
displayValue: string;
}
export interface ConfluenceValue {
confluencePage: { id: string; title: string; url: string };
searchValue: string;
referencedType: boolean;
displayValue: string;
}
export interface UserValue {
user: {
avatarUrl: string;
displayName: string;
name: string;
key: string;
renderedLink: string;
isDeleted: boolean;
};
searchValue: string;
referencedType: boolean;
displayValue: string;
}
export interface ReferenceValue {
referencedObject: ReferencedObject;
searchValue: string;
referencedType: true;
displayValue: string;
}
export interface ReferencedObject {
id: string | number;
objectKey: string;
label: string;
name?: string;
archived?: boolean;
objectType: {
id: number;
name: string;
};
created: string;
updated: string;
timestamp: number;
hasAvatar: boolean;
attributes?: ObjectAttribute[];
_links?: { self: string };
}
// ==========================
// Type Guards (MANDATORY)
// ==========================
export function isReferenceValue(
v: ObjectAttributeValue
): v is ReferenceValue {
return (v as ReferenceValue).referencedObject !== undefined;
}
export function isSimpleValue(
v: ObjectAttributeValue
): v is SimpleValue {
return (v as SimpleValue).value !== undefined;
}
export function hasAttributes(
obj: ObjectEntry | ReferencedObject
): obj is (ObjectEntry | ReferencedObject) & { attributes: ObjectAttribute[] } {
return Array.isArray((obj as { attributes?: unknown }).attributes);
}
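The guards above let callers narrow the `ObjectAttributeValue` union before touching variant-specific fields. A self-contained usage sketch (the types and guard are inlined here, trimmed to two variants, so the example runs on its own):

```typescript
// Inlined, trimmed copies of the union and guard above, for a runnable example.
type SimpleValue = {
  value: string | number | boolean;
  searchValue: string;
  referencedType: false;
  displayValue: string;
};
type ReferenceValue = {
  referencedObject: { id: string; objectKey: string; label: string };
  searchValue: string;
  referencedType: true;
  displayValue: string;
};
type ObjectAttributeValue = SimpleValue | ReferenceValue;

function isReferenceValue(v: ObjectAttributeValue): v is ReferenceValue {
  return (v as ReferenceValue).referencedObject !== undefined;
}

const values: ObjectAttributeValue[] = [
  { value: 'srv-01', searchValue: 'srv-01', referencedType: false, displayValue: 'srv-01' },
  {
    referencedObject: { id: '42', objectKey: 'CMDB-42', label: 'Rack A' },
    searchValue: 'Rack A',
    referencedType: true,
    displayValue: 'Rack A',
  },
];

// Inside each branch, TypeScript knows the exact variant.
const labels = values.map(v =>
  isReferenceValue(v) ? v.referencedObject.label : String(v.value)
);
console.log(labels); // ['srv-01', 'Rack A']
```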


@@ -0,0 +1,38 @@
/**
* Sync Policy - Determines how objects are handled during sync
*/
export enum SyncPolicy {
/**
* Full sync: fetch all objects, cache all attributes
* Used for enabled object types in schema configuration
*/
ENABLED = 'enabled',
/**
* Reference-only: cache minimal metadata for referenced objects
* Used for disabled object types that are referenced by enabled types
*/
REFERENCE_ONLY = 'reference_only',
/**
* Skip: don't sync this object type at all
* Used for object types not in use
*/
SKIP = 'skip',
}
/**
* Get sync policy for an object type
*/
export function getSyncPolicy(
typeName: string,
enabledTypes: Set<string>
): SyncPolicy {
if (enabledTypes.has(typeName)) {
return SyncPolicy.ENABLED;
}
// We still need to cache referenced objects, even if their type is disabled
// This allows reference resolution without full sync
return SyncPolicy.REFERENCE_ONLY;
}
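A usage sketch of `getSyncPolicy` (enum and function inlined from above so the example is self-contained); note that, as written, the function never returns `SKIP` — disabled types always fall through to reference-only caching:

```typescript
// Inlined copies of the enum and function above, for a runnable example.
enum SyncPolicy {
  ENABLED = 'enabled',
  REFERENCE_ONLY = 'reference_only',
  SKIP = 'skip',
}

function getSyncPolicy(typeName: string, enabledTypes: Set<string>): SyncPolicy {
  if (enabledTypes.has(typeName)) {
    return SyncPolicy.ENABLED;
  }
  // Referenced objects of disabled types still get minimal metadata cached.
  return SyncPolicy.REFERENCE_ONLY;
}

const enabledTypes = new Set(['Server', 'Application']);
console.log(getSyncPolicy('Server', enabledTypes));     // 'enabled'
console.log(getSyncPolicy('Datacenter', enabledTypes)); // 'reference_only'
```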


@@ -8,18 +8,12 @@
-- =============================================================================
-- Core Tables
-- =============================================================================
-- Cached CMDB objects (all types stored in single table with JSON data)
CREATE TABLE IF NOT EXISTS cached_objects (
id TEXT PRIMARY KEY,
object_key TEXT NOT NULL UNIQUE,
object_type TEXT NOT NULL,
label TEXT NOT NULL,
data JSONB NOT NULL,
jira_updated_at TEXT,
jira_created_at TEXT,
cached_at TEXT NOT NULL
);
--
-- NOTE: This schema is LEGACY and deprecated.
-- The current system uses the normalized schema defined in
-- backend/src/services/database/normalized-schema.ts
--
-- This file is kept for reference and migration purposes only.
-- Object relations (references between objects)
CREATE TABLE IF NOT EXISTS object_relations (
@@ -43,12 +37,6 @@ CREATE TABLE IF NOT EXISTS sync_metadata (
-- Indices for Performance
-- =============================================================================
CREATE INDEX IF NOT EXISTS idx_objects_type ON cached_objects(object_type);
CREATE INDEX IF NOT EXISTS idx_objects_key ON cached_objects(object_key);
CREATE INDEX IF NOT EXISTS idx_objects_updated ON cached_objects(jira_updated_at);
CREATE INDEX IF NOT EXISTS idx_objects_label ON cached_objects(label);
CREATE INDEX IF NOT EXISTS idx_objects_data_gin ON cached_objects USING GIN (data);
CREATE INDEX IF NOT EXISTS idx_relations_source ON object_relations(source_id);
CREATE INDEX IF NOT EXISTS idx_relations_target ON object_relations(target_id);
CREATE INDEX IF NOT EXISTS idx_relations_source_type ON object_relations(source_type);

View File

@@ -7,18 +7,12 @@
-- =============================================================================
-- Core Tables
-- =============================================================================
-- Cached CMDB objects (all types stored in single table with JSON data)
CREATE TABLE IF NOT EXISTS cached_objects (
id TEXT PRIMARY KEY,
object_key TEXT NOT NULL UNIQUE,
object_type TEXT NOT NULL,
label TEXT NOT NULL,
data JSON NOT NULL,
jira_updated_at TEXT,
jira_created_at TEXT,
cached_at TEXT NOT NULL
);
--
-- NOTE: This schema is LEGACY and deprecated.
-- The current system uses the normalized schema defined in
-- backend/src/services/database/normalized-schema.ts
--
-- This file is kept for reference and migration purposes only.
-- Object relations (references between objects)
CREATE TABLE IF NOT EXISTS object_relations (
@@ -42,11 +36,6 @@ CREATE TABLE IF NOT EXISTS sync_metadata (
-- Indices for Performance
-- =============================================================================
CREATE INDEX IF NOT EXISTS idx_objects_type ON cached_objects(object_type);
CREATE INDEX IF NOT EXISTS idx_objects_key ON cached_objects(object_key);
CREATE INDEX IF NOT EXISTS idx_objects_updated ON cached_objects(jira_updated_at);
CREATE INDEX IF NOT EXISTS idx_objects_label ON cached_objects(label);
CREATE INDEX IF NOT EXISTS idx_relations_source ON object_relations(source_id);
CREATE INDEX IF NOT EXISTS idx_relations_target ON object_relations(target_id);
CREATE INDEX IF NOT EXISTS idx_relations_source_type ON object_relations(source_type);

File diff suppressed because it is too large

File diff suppressed because it is too large


View File

@@ -6,7 +6,6 @@ import cookieParser from 'cookie-parser';
import { config, validateConfig } from './config/env.js';
import { logger } from './services/logger.js';
import { dataService } from './services/dataService.js';
import { syncEngine } from './services/syncEngine.js';
import { cmdbService } from './services/cmdbService.js';
import applicationsRouter from './routes/applications.js';
import classificationsRouter from './routes/classifications.js';
@@ -22,6 +21,8 @@ import searchRouter from './routes/search.js';
import cacheRouter from './routes/cache.js';
import objectsRouter from './routes/objects.js';
import schemaRouter from './routes/schema.js';
import dataValidationRouter from './routes/dataValidation.js';
import schemaConfigurationRouter from './routes/schemaConfiguration.js';
import { runMigrations } from './services/database/migrations.js';
// Validate configuration
@@ -63,8 +64,10 @@ app.use(authMiddleware);
// Set user token and settings on services for each request
app.use(async (req, res, next) => {
// Set user's OAuth token if available (for OAuth sessions)
let userToken: string | null = null;
if (req.accessToken) {
cmdbService.setUserToken(req.accessToken);
userToken = req.accessToken;
}
// Set user's Jira PAT and AI keys if user is authenticated and has local account
@@ -75,15 +78,12 @@ app.use(async (req, res, next) => {
if (settings?.jira_pat) {
// Use user's Jira PAT from profile settings (preferred for writes)
cmdbService.setUserToken(settings.jira_pat);
userToken = settings.jira_pat;
} else if (config.jiraServiceAccountToken) {
// Fallback to service account token if user doesn't have PAT configured
// This allows writes to work when JIRA_SERVICE_ACCOUNT_TOKEN is set in .env
cmdbService.setUserToken(config.jiraServiceAccountToken);
userToken = config.jiraServiceAccountToken;
logger.debug('Using service account token as fallback (user PAT not configured)');
} else {
// No token available - clear token
cmdbService.setUserToken(null);
}
// Store user settings in request for services to access
@@ -92,18 +92,35 @@ app.use(async (req, res, next) => {
// If user settings can't be loaded, try service account token as fallback
logger.debug('Failed to load user settings:', error);
if (config.jiraServiceAccountToken) {
cmdbService.setUserToken(config.jiraServiceAccountToken);
userToken = config.jiraServiceAccountToken;
logger.debug('Using service account token as fallback (user settings load failed)');
} else {
cmdbService.setUserToken(null);
}
}
}
// Set token on old services (for backward compatibility)
if (userToken) {
cmdbService.setUserToken(userToken);
} else {
// No user authenticated - clear token
cmdbService.setUserToken(null);
}
// Clear token after response is sent
// Set token on new V2 infrastructure client (if feature flag enabled)
if (process.env.USE_V2_API === 'true') {
try {
const { jiraAssetsClient } = await import('./infrastructure/jira/JiraAssetsClient.js');
jiraAssetsClient.setRequestToken(userToken);
// Clear token after response
res.on('finish', () => {
jiraAssetsClient.clearRequestToken();
});
} catch (error) {
// V2 API not loaded - ignore
}
}
// Clear token after response is sent (for old services)
res.on('finish', () => {
cmdbService.clearUserToken();
});
@@ -119,8 +136,8 @@ app.get('/health', async (req, res) => {
res.json({
status: 'ok',
timestamp: new Date().toISOString(),
dataSource: dataService.isUsingJiraAssets() ? 'jira-assets-cached' : 'mock-data',
jiraConnected: dataService.isUsingJiraAssets() ? jiraConnected : null,
dataSource: 'jira-assets-cached', // Always uses Jira Assets (mock data removed)
jiraConnected: jiraConnected,
aiConfigured: true, // AI is configured per-user in profile settings
cache: {
isWarm: cacheStatus.isWarm,
@@ -152,6 +169,38 @@ app.use('/api/search', searchRouter);
app.use('/api/cache', cacheRouter);
app.use('/api/objects', objectsRouter);
app.use('/api/schema', schemaRouter);
app.use('/api/data-validation', dataValidationRouter);
app.use('/api/schema-configuration', schemaConfigurationRouter);
// V2 API routes (new refactored architecture) - Feature flag: USE_V2_API
const useV2Api = process.env.USE_V2_API === 'true';
const useV2ApiEnv = process.env.USE_V2_API || 'not set';
logger.info(`V2 API feature flag: USE_V2_API=${useV2ApiEnv} (enabled: ${useV2Api})`);
if (useV2Api) {
try {
logger.debug('Loading V2 API routes from ./api/routes/v2.js...');
const v2Router = (await import('./api/routes/v2.js')).default;
if (!v2Router) {
logger.error('❌ V2 API router is undefined - route file did not export default router');
} else {
app.use('/api/v2', v2Router);
logger.info('✅ V2 API routes enabled and mounted at /api/v2');
logger.debug('V2 API router type:', typeof v2Router, 'is function:', typeof v2Router === 'function');
}
} catch (error) {
logger.error('❌ Failed to load V2 API routes', error);
if (error instanceof Error) {
logger.error('Error details:', {
message: error.message,
stack: error.stack,
name: error.name,
});
}
}
} else {
logger.info(`V2 API routes disabled (USE_V2_API=${useV2ApiEnv}, set USE_V2_API=true to enable)`);
}
// Error handling
app.use((err: Error, req: express.Request, res: express.Response, next: express.NextFunction) => {
@@ -164,7 +213,20 @@ app.use((err: Error, req: express.Request, res: express.Response, next: express.
// 404 handler
app.use((req, res) => {
res.status(404).json({ error: 'Not found' });
// Provide helpful error messages for V2 API routes
if (req.path.startsWith('/api/v2/')) {
const useV2Api = process.env.USE_V2_API === 'true';
if (!useV2Api) {
res.status(404).json({
error: 'V2 API routes are not enabled',
message: 'Please set USE_V2_API=true in environment variables and restart the server to use V2 API endpoints.',
path: req.path,
});
return;
}
}
res.status(404).json({ error: 'Not found', path: req.path });
});
// Start server
@@ -173,26 +235,51 @@ app.listen(PORT, async () => {
logger.info(`Server running on http://localhost:${PORT}`);
logger.info(`Environment: ${config.nodeEnv}`);
logger.info(`AI Classification: Configured per-user in profile settings`);
logger.info(`Jira Assets: ${config.jiraSchemaId ? 'Schema configured - users configure PAT in profile' : 'Schema not configured'}`);
// Run database migrations
// Log V2 API feature flag status
const useV2ApiEnv = process.env.USE_V2_API || 'not set';
const useV2ApiEnabled = process.env.USE_V2_API === 'true';
logger.info(`V2 API Feature Flag: USE_V2_API=${useV2ApiEnv} (${useV2ApiEnabled ? '✅ ENABLED' : '❌ DISABLED'})`);
// Check if schemas exist in database
// Note: Schemas table may not exist yet if schema hasn't been initialized
let hasSchemas = false;
try {
const { normalizedCacheStore } = await import('./services/normalizedCacheStore.js');
const db = (normalizedCacheStore as any).db;
if (db) {
await db.ensureInitialized?.();
try {
const schemaRow = await (db.queryOne as <T>(sql: string, params?: any[]) => Promise<T | null>)<{ count: number }>(
`SELECT COUNT(*) as count FROM schemas`
);
hasSchemas = (schemaRow?.count || 0) > 0;
} catch (tableError: any) {
// If schemas table doesn't exist yet, that's okay - schema hasn't been initialized
if (tableError?.message?.includes('does not exist') ||
tableError?.message?.includes('relation') ||
tableError?.code === '42P01') { // PostgreSQL: undefined table
logger.debug('Schemas table does not exist yet (will be created by migrations)');
hasSchemas = false;
} else {
throw tableError; // Re-throw other errors
}
}
}
} catch (error) {
logger.debug('Failed to check if schemas exist in database (table may not exist yet)', error);
}
logger.info(`Jira Assets: ${hasSchemas ? 'Schemas configured in database - users configure PAT in profile' : 'No schemas configured - use Schema Configuration page to discover schemas'}`);
logger.info('Sync: All syncs must be triggered manually from the GUI (no auto-start)');
logger.info('Data: All data comes from Jira Assets API (mock data removed)');
// Run database migrations FIRST to create schemas table before other services try to use it
try {
logger.info('Running database migrations...');
await runMigrations();
logger.info('Database migrations completed');
logger.info('Database migrations completed');
} catch (error) {
logger.error('Failed to run database migrations', error);
}
// Initialize sync engine if Jira schema is configured
// Note: Sync engine will only sync when users with configured Jira PATs make requests
// This prevents unauthorized Jira API calls
if (config.jiraSchemaId) {
try {
await syncEngine.initialize();
logger.info('Sync Engine: Initialized (sync on-demand per user request)');
} catch (error) {
logger.error('Failed to initialize sync engine', error);
}
logger.error('Failed to run database migrations', error);
}
});
@@ -200,8 +287,7 @@ app.listen(PORT, async () => {
const shutdown = () => {
logger.info('Shutdown signal received: stopping services...');
// Stop sync engine
syncEngine.stop();
// Note: No sync engine to stop - syncs are only triggered from GUI
logger.info('Services stopped, exiting');
process.exit(0);

View File

@@ -0,0 +1,330 @@
/**
* JiraAssetsClient - Pure HTTP API client
*
* NO business logic, NO parsing, NO caching.
* Only HTTP requests to Jira Assets API.
*/
import { config } from '../../config/env.js';
import { logger } from '../../services/logger.js';
import type { AssetsPayload, ObjectEntry } from '../../domain/jiraAssetsPayload.js';
export interface JiraUpdatePayload {
objectTypeId?: number;
attributes: Array<{
objectTypeAttributeId: number;
objectAttributeValues: Array<{ value?: string }>;
}>;
}
export class JiraAssetsClient {
private baseUrl: string;
private serviceAccountToken: string | null = null;
private requestToken: string | null = null;
constructor() {
this.baseUrl = `${config.jiraHost}/rest/insight/1.0`;
this.serviceAccountToken = config.jiraServiceAccountToken || null;
}
setRequestToken(token: string | null): void {
this.requestToken = token;
}
clearRequestToken(): void {
this.requestToken = null;
}
hasToken(): boolean {
return !!(this.serviceAccountToken || this.requestToken);
}
hasUserToken(): boolean {
return !!this.requestToken;
}
private getHeaders(forWrite: boolean = false): Record<string, string> {
const headers: Record<string, string> = {
'Content-Type': 'application/json',
'Accept': 'application/json',
};
if (forWrite) {
if (!this.requestToken) {
throw new Error('Jira Personal Access Token not configured. Please configure it in your user settings to enable saving changes to Jira.');
}
headers['Authorization'] = `Bearer ${this.requestToken}`;
} else {
const token = this.serviceAccountToken || this.requestToken;
if (!token) {
throw new Error('Jira token not configured. Please configure JIRA_SERVICE_ACCOUNT_TOKEN in .env or a Personal Access Token in your user settings.');
}
headers['Authorization'] = `Bearer ${token}`;
}
return headers;
}
/**
* Get a single object by ID
*/
async getObject(objectId: string): Promise<ObjectEntry | null> {
try {
const url = `/object/${objectId}?includeAttributes=true&includeAttributesDeep=2`;
const response = await fetch(`${this.baseUrl}${url}`, {
headers: this.getHeaders(false),
});
if (!response.ok) {
if (response.status === 404) {
return null;
}
const text = await response.text();
throw new Error(`Jira API error ${response.status}: ${text}`);
}
return await response.json() as ObjectEntry;
} catch (error) {
logger.error(`JiraAssetsClient: Failed to get object ${objectId}`, error);
throw error;
}
}
/**
* Search objects using IQL/AQL
*/
async searchObjects(
iql: string,
schemaId: string,
options: {
page?: number;
pageSize?: number;
} = {}
): Promise<{ objectEntries: ObjectEntry[]; totalCount: number; hasMore: boolean }> {
// Validate schemaId is provided and not empty
if (!schemaId || schemaId.trim() === '') {
throw new Error('Schema ID is required and cannot be empty. This usually means the object type is not properly associated with a schema. Please run schema sync first.');
}
const { page = 1, pageSize = 50 } = options;
// Detect API type (Data Center vs Cloud) based on host
const isDataCenter = !config.jiraHost.includes('atlassian.net');
let response: { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number };
if (isDataCenter) {
// Data Center: Try AQL first, fallback to IQL
try {
const params = new URLSearchParams({
qlQuery: iql,
page: page.toString(),
resultPerPage: pageSize.toString(),
includeAttributes: 'true',
includeAttributesDeep: '2',
objectSchemaId: schemaId,
});
const url = `${this.baseUrl}/aql/objects?${params.toString()}`;
const httpResponse = await fetch(url, {
headers: this.getHeaders(false),
});
if (!httpResponse.ok) {
const errorText = await httpResponse.text();
const errorMessage = errorText || `AQL failed: ${httpResponse.status}`;
logger.warn(`JiraAssetsClient: AQL query failed (${httpResponse.status}): ${errorMessage}. Query: ${iql}`);
throw new Error(errorMessage);
}
response = await httpResponse.json() as { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number };
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
logger.warn(`JiraAssetsClient: AQL endpoint failed, falling back to IQL. Error: ${errorMessage}`, error);
const params = new URLSearchParams({
iql,
page: page.toString(),
resultPerPage: pageSize.toString(),
includeAttributes: 'true',
includeAttributesDeep: '2',
objectSchemaId: schemaId,
});
const url = `${this.baseUrl}/iql/objects?${params.toString()}`;
const httpResponse = await fetch(url, {
headers: this.getHeaders(false),
});
if (!httpResponse.ok) {
const text = await httpResponse.text();
throw new Error(`Jira API error ${httpResponse.status}: ${text}`);
}
response = await httpResponse.json() as { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number };
}
} else {
// Jira Cloud: POST to AQL endpoint
const url = `${this.baseUrl}/aql/objects`;
const requestBody = {
qlQuery: iql,
page,
resultPerPage: pageSize,
includeAttributes: true,
includeAttributesDeep: 2,
objectSchemaId: schemaId,
};
const httpResponse = await fetch(url, {
method: 'POST',
headers: this.getHeaders(false),
body: JSON.stringify(requestBody),
});
if (!httpResponse.ok) {
const text = await httpResponse.text();
const errorMessage = text || `Jira API error ${httpResponse.status}`;
logger.warn(`JiraAssetsClient: AQL query failed (${httpResponse.status}): ${errorMessage}. Query: ${iql}`);
throw new Error(errorMessage);
}
response = await httpResponse.json() as { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number };
}
const totalCount = response.totalFilterCount || response.totalCount || 0;
const hasMore = response.objectEntries.length === pageSize && page * pageSize < totalCount;
return {
objectEntries: response.objectEntries || [],
totalCount,
hasMore,
};
}
/**
* Update an object
*/
async updateObject(objectId: string, payload: JiraUpdatePayload): Promise<void> {
if (!this.hasUserToken()) {
throw new Error('Jira Personal Access Token not configured. Please configure it in your user settings to enable saving changes to Jira.');
}
const url = `${this.baseUrl}/object/${objectId}`;
const response = await fetch(url, {
method: 'PUT',
headers: this.getHeaders(true),
body: JSON.stringify(payload),
});
if (!response.ok) {
const text = await response.text();
throw new Error(`Jira API error ${response.status}: ${text}`);
}
}
/**
* Get all schemas
*/
async getSchemas(): Promise<Array<{ id: string; name: string; description?: string }>> {
const url = `${this.baseUrl}/objectschema/list`;
const response = await fetch(url, {
headers: this.getHeaders(false),
});
if (!response.ok) {
const text = await response.text();
throw new Error(`Jira API error ${response.status}: ${text}`);
}
return await response.json() as Array<{ id: string; name: string; description?: string }>;
}
/**
* Get object types for a schema
*/
async getObjectTypes(schemaId: string): Promise<Array<{
id: number;
name: string;
description?: string;
objectCount?: number;
parentObjectTypeId?: number;
abstractObjectType?: boolean;
}>> {
// Try flat endpoint first
let url = `${this.baseUrl}/objectschema/${schemaId}/objecttypes/flat`;
let response = await fetch(url, {
headers: this.getHeaders(false),
});
if (!response.ok) {
// Fallback to regular endpoint
url = `${this.baseUrl}/objectschema/${schemaId}/objecttypes`;
response = await fetch(url, {
headers: this.getHeaders(false),
});
}
if (!response.ok) {
const text = await response.text();
throw new Error(`Jira API error ${response.status}: ${text}`);
}
const result = await response.json() as unknown;
if (Array.isArray(result)) {
return result as Array<{
id: number;
name: string;
description?: string;
objectCount?: number;
parentObjectTypeId?: number;
abstractObjectType?: boolean;
}>;
} else if (result && typeof result === 'object' && 'objectTypes' in result) {
return (result as { objectTypes: Array<{
id: number;
name: string;
description?: string;
objectCount?: number;
parentObjectTypeId?: number;
abstractObjectType?: boolean;
}> }).objectTypes;
}
return [];
}
/**
* Get attributes for an object type
*/
async getAttributes(typeId: number): Promise<Array<{
id: number;
name: string;
type: number;
typeValue?: string;
referenceObjectTypeId?: number;
referenceObjectType?: { id: number; name: string };
minimumCardinality?: number;
maximumCardinality?: number;
editable?: boolean;
hidden?: boolean;
system?: boolean;
description?: string;
}>> {
const url = `${this.baseUrl}/objecttype/${typeId}/attributes`;
const response = await fetch(url, {
headers: this.getHeaders(false),
});
if (!response.ok) {
logger.warn(`JiraAssetsClient: Failed to fetch attributes for type ${typeId}: ${response.status}`);
return [];
}
return await response.json() as Array<{
id: number;
name: string;
type: number;
typeValue?: string;
referenceObjectTypeId?: number;
referenceObjectType?: { id: number; name: string };
minimumCardinality?: number;
maximumCardinality?: number;
editable?: boolean;
hidden?: boolean;
system?: boolean;
description?: string;
}>;
}
}
// Export singleton instance
export const jiraAssetsClient = new JiraAssetsClient();
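The token-selection rule inside `getHeaders()` can be summarized with this standalone sketch (function name and error strings here are illustrative, not the client's exact messages): reads fall back from the service-account token to the per-request token, while writes strictly require the user's PAT.

```typescript
// Sketch of the read/write token-selection rule, assuming the same precedence
// as JiraAssetsClient.getHeaders(): reads prefer the service-account token,
// writes must use the per-request user PAT.
function pickToken(
  forWrite: boolean,
  serviceToken: string | null,
  requestToken: string | null
): string {
  if (forWrite) {
    if (!requestToken) throw new Error('Personal Access Token required for writes');
    return requestToken;
  }
  const token = serviceToken ?? requestToken;
  if (!token) throw new Error('No Jira token configured');
  return token;
}

const readToken = pickToken(false, 'svc-token', 'user-pat');  // 'svc-token'
const writeToken = pickToken(true, 'svc-token', 'user-pat');  // 'user-pat'
```

This split is why `updateObject` checks `hasUserToken()` up front: writes are always attributed to the requesting user, never to the shared service account.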

View File

@@ -5,16 +5,17 @@
*/
import { Request, Response, NextFunction } from 'express';
import { authService, type SessionUser } from '../services/authService.js';
import { authService, type SessionUser, type JiraUser } from '../services/authService.js';
import { roleService } from '../services/roleService.js';
import { logger } from '../services/logger.js';
// Extend Express Request to include user info
// Note: This matches the declaration in auth.ts
declare global {
namespace Express {
interface Request {
sessionId?: string;
user?: SessionUser;
user?: SessionUser | JiraUser;
accessToken?: string;
}
}

View File

@@ -0,0 +1,308 @@
/**
* ObjectCacheRepository - Data access for cached objects (EAV pattern)
*/
import type { DatabaseAdapter } from '../services/database/interface.js';
import { logger } from '../services/logger.js';
export interface ObjectRecord {
id: string;
objectKey: string;
objectTypeName: string;
label: string;
jiraUpdatedAt: string | null;
jiraCreatedAt: string | null;
cachedAt: string;
}
export interface AttributeValueRecord {
objectId: string;
attributeId: number;
textValue: string | null;
numberValue: number | null;
booleanValue: boolean | null;
dateValue: string | null;
datetimeValue: string | null;
referenceObjectId: string | null;
referenceObjectKey: string | null;
referenceObjectLabel: string | null;
arrayIndex: number;
}
export interface ObjectRelationRecord {
sourceId: string;
targetId: string;
attributeId: number;
sourceType: string;
targetType: string;
}
export class ObjectCacheRepository {
public db: DatabaseAdapter;
constructor(db: DatabaseAdapter) {
this.db = db;
}
/**
* Upsert an object record (minimal metadata)
*/
async upsertObject(object: {
id: string;
objectKey: string;
objectTypeName: string;
label: string;
jiraUpdatedAt?: string;
jiraCreatedAt?: string;
}): Promise<void> {
const cachedAt = new Date().toISOString();
await this.db.execute(
`INSERT INTO objects (id, object_key, object_type_name, label, jira_updated_at, jira_created_at, cached_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
object_key = excluded.object_key,
label = excluded.label,
jira_updated_at = excluded.jira_updated_at,
cached_at = excluded.cached_at`,
[
object.id,
object.objectKey,
object.objectTypeName,
object.label,
object.jiraUpdatedAt || null,
object.jiraCreatedAt || null,
cachedAt,
]
);
}
/**
* Get an object record by ID
*/
async getObject(objectId: string): Promise<ObjectRecord | null> {
return await this.db.queryOne<ObjectRecord>(
`SELECT id, object_key as objectKey, object_type_name as objectTypeName, label,
jira_updated_at as jiraUpdatedAt, jira_created_at as jiraCreatedAt, cached_at as cachedAt
FROM objects
WHERE id = ?`,
[objectId]
);
}
/**
* Get an object record by object key
*/
async getObjectByKey(objectKey: string): Promise<ObjectRecord | null> {
return await this.db.queryOne<ObjectRecord>(
`SELECT id, object_key as objectKey, object_type_name as objectTypeName, label,
jira_updated_at as jiraUpdatedAt, jira_created_at as jiraCreatedAt, cached_at as cachedAt
FROM objects
WHERE object_key = ?`,
[objectKey]
);
}
/**
* Delete all attribute values for an object
* Used when refreshing an object - we replace all attributes
*/
async deleteAttributeValues(objectId: string): Promise<void> {
await this.db.execute(
`DELETE FROM attribute_values WHERE object_id = ?`,
[objectId]
);
}
/**
* Upsert a single attribute value
*/
async upsertAttributeValue(value: {
objectId: string;
attributeId: number;
textValue?: string | null;
numberValue?: number | null;
booleanValue?: boolean | null;
dateValue?: string | null;
datetimeValue?: string | null;
referenceObjectId?: string | null;
referenceObjectKey?: string | null;
referenceObjectLabel?: string | null;
arrayIndex: number;
}): Promise<void> {
await this.db.execute(
`INSERT INTO attribute_values
(object_id, attribute_id, text_value, number_value, boolean_value, date_value, datetime_value,
reference_object_id, reference_object_key, reference_object_label, array_index)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(object_id, attribute_id, array_index) DO UPDATE SET
text_value = excluded.text_value,
number_value = excluded.number_value,
boolean_value = excluded.boolean_value,
date_value = excluded.date_value,
datetime_value = excluded.datetime_value,
reference_object_id = excluded.reference_object_id,
reference_object_key = excluded.reference_object_key,
reference_object_label = excluded.reference_object_label`,
[
value.objectId,
value.attributeId,
value.textValue || null,
value.numberValue || null,
value.booleanValue || null,
value.dateValue || null,
value.datetimeValue || null,
value.referenceObjectId || null,
value.referenceObjectKey || null,
value.referenceObjectLabel || null,
value.arrayIndex,
]
);
}
/**
* Batch upsert attribute values (much faster)
*/
async batchUpsertAttributeValues(values: Array<{
objectId: string;
attributeId: number;
textValue?: string | null;
numberValue?: number | null;
booleanValue?: boolean | null;
dateValue?: string | null;
datetimeValue?: string | null;
referenceObjectId?: string | null;
referenceObjectKey?: string | null;
referenceObjectLabel?: string | null;
arrayIndex: number;
}>): Promise<void> {
if (values.length === 0) return;
await this.db.transaction(async (db) => {
for (const value of values) {
await db.execute(
`INSERT INTO attribute_values
(object_id, attribute_id, text_value, number_value, boolean_value, date_value, datetime_value,
reference_object_id, reference_object_key, reference_object_label, array_index)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(object_id, attribute_id, array_index) DO UPDATE SET
text_value = excluded.text_value,
number_value = excluded.number_value,
boolean_value = excluded.boolean_value,
date_value = excluded.date_value,
datetime_value = excluded.datetime_value,
reference_object_id = excluded.reference_object_id,
reference_object_key = excluded.reference_object_key,
reference_object_label = excluded.reference_object_label`,
[
value.objectId,
value.attributeId,
value.textValue || null,
value.numberValue || null,
value.booleanValue || null,
value.dateValue || null,
value.datetimeValue || null,
value.referenceObjectId || null,
value.referenceObjectKey || null,
value.referenceObjectLabel || null,
value.arrayIndex,
]
);
}
});
}
/**
* Get all attribute values for an object
*/
async getAttributeValues(objectId: string): Promise<AttributeValueRecord[]> {
return await this.db.query<AttributeValueRecord>(
`SELECT object_id as objectId, attribute_id as attributeId, text_value as textValue,
number_value as numberValue, boolean_value as booleanValue,
date_value as dateValue, datetime_value as datetimeValue,
reference_object_id as referenceObjectId, reference_object_key as referenceObjectKey,
reference_object_label as referenceObjectLabel, array_index as arrayIndex
FROM attribute_values
WHERE object_id = ?
ORDER BY attribute_id, array_index`,
[objectId]
);
}
/**
* Upsert an object relation
*/
async upsertRelation(relation: {
sourceId: string;
targetId: string;
attributeId: number;
sourceType: string;
targetType: string;
}): Promise<void> {
await this.db.execute(
`INSERT INTO object_relations (source_id, target_id, attribute_id, source_type, target_type)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(source_id, target_id, attribute_id) DO NOTHING`,
[
relation.sourceId,
relation.targetId,
relation.attributeId,
relation.sourceType,
relation.targetType,
]
);
}
/**
* Delete all relations for an object (used when refreshing)
*/
async deleteRelations(objectId: string): Promise<void> {
await this.db.execute(
`DELETE FROM object_relations WHERE source_id = ?`,
[objectId]
);
}
/**
* Get objects of a specific type
*/
async getObjectsByType(
objectTypeName: string,
options: {
limit?: number;
offset?: number;
} = {}
): Promise<ObjectRecord[]> {
const { limit = 1000, offset = 0 } = options;
return await this.db.query<ObjectRecord>(
`SELECT id, object_key as objectKey, object_type_name as objectTypeName, label,
jira_updated_at as jiraUpdatedAt, jira_created_at as jiraCreatedAt, cached_at as cachedAt
FROM objects
WHERE object_type_name = ?
ORDER BY label
LIMIT ? OFFSET ?`,
[objectTypeName, limit, offset]
);
}
/**
* Count objects of a type
*/
async countObjectsByType(objectTypeName: string): Promise<number> {
const result = await this.db.queryOne<{ count: number | string }>(
`SELECT COUNT(*) as count FROM objects WHERE object_type_name = ?`,
[objectTypeName]
);
if (!result?.count) return 0;
return typeof result.count === 'string' ? parseInt(result.count, 10) : Number(result.count);
}
/**
* Delete an object (cascades to attribute_values and relations)
*/
async deleteObject(objectId: string): Promise<void> {
await this.db.execute(
`DELETE FROM objects WHERE id = ?`,
[objectId]
);
}
}
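The EAV layout means one multi-valued Jira attribute fans out into several `attribute_values` rows keyed by `(object_id, attribute_id, array_index)`. A minimal sketch of that flattening step (the `flatten` helper is hypothetical; the repository itself only persists rows it is given):

```typescript
// Hypothetical flattening: one attribute with N values becomes N rows,
// matching the (object_id, attribute_id, array_index) conflict key used
// by batchUpsertAttributeValues above.
interface AttrValueRow {
  objectId: string;
  attributeId: number;
  textValue: string | null;
  arrayIndex: number;
}

function flatten(objectId: string, attributeId: number, values: string[]): AttrValueRow[] {
  return values.map((textValue, arrayIndex) => ({
    objectId,
    attributeId,
    textValue,
    arrayIndex,
  }));
}

const rows = flatten('obj-1', 42, ['alpha', 'beta']);
// rows[0].arrayIndex === 0, rows[1].arrayIndex === 1
```

Because refresh deletes all rows for an object first (`deleteAttributeValues`) and then re-inserts, `array_index` only needs to be stable within a single sync, not across syncs.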

View File

@@ -0,0 +1,492 @@
/**
* SchemaRepository - Data access for schema metadata
*/
import type { DatabaseAdapter } from '../services/database/interface.js';
import { logger } from '../services/logger.js';
import { toPascalCase } from '../services/schemaUtils.js';
export interface SchemaRecord {
id: number;
jiraSchemaId: string;
name: string;
description: string | null;
discoveredAt: string;
updatedAt: string;
}
export interface ObjectTypeRecord {
id: number;
schemaId: number;
jiraTypeId: number;
typeName: string;
displayName: string;
description: string | null;
syncPriority: number;
objectCount: number;
enabled: boolean;
discoveredAt: string;
updatedAt: string;
}
export interface AttributeRecord {
id: number;
jiraAttrId: number;
objectTypeName: string;
attrName: string;
fieldName: string;
attrType: string;
isMultiple: boolean;
isEditable: boolean;
isRequired: boolean;
isSystem: boolean;
referenceTypeName: string | null;
description: string | null;
discoveredAt: string;
}
export class SchemaRepository {
constructor(private db: DatabaseAdapter) {}
/**
* Get database adapter (for debug/advanced operations)
*/
getDatabaseAdapter(): DatabaseAdapter {
return this.db;
}
/**
* Upsert a schema
*/
async upsertSchema(schema: {
jiraSchemaId: string;
name: string;
description?: string;
}): Promise<number> {
const now = new Date().toISOString();
// Check if exists
const existing = await this.db.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[schema.jiraSchemaId]
);
if (existing) {
await this.db.execute(
`UPDATE schemas SET name = ?, description = ?, updated_at = ? WHERE id = ?`,
[schema.name, schema.description || null, now, existing.id]
);
return existing.id;
} else {
await this.db.execute(
`INSERT INTO schemas (jira_schema_id, name, description, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?)`,
[schema.jiraSchemaId, schema.name, schema.description || null, now, now]
);
const result = await this.db.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[schema.jiraSchemaId]
);
return result?.id || 0;
}
}
/**
* Get all schemas
*/
async getAllSchemas(): Promise<SchemaRecord[]> {
return await this.db.query<SchemaRecord>(
`SELECT id, jira_schema_id as jiraSchemaId, name, description, discovered_at as discoveredAt, updated_at as updatedAt
FROM schemas
ORDER BY jira_schema_id`
);
}
/**
* Upsert an object type
*/
async upsertObjectType(
schemaId: number,
objectType: {
jiraTypeId: number;
typeName: string;
displayName: string;
description?: string;
syncPriority?: number;
objectCount?: number;
}
): Promise<number> {
const now = new Date().toISOString();
const existing = await this.db.queryOne<{ id: number }>(
`SELECT id FROM object_types WHERE schema_id = ? AND jira_type_id = ?`,
[schemaId, objectType.jiraTypeId]
);
if (existing) {
// Update existing record - ensure type_name is set if missing
// First check if type_name is NULL
const currentRecord = await this.db.queryOne<{ type_name: string | null }>(
`SELECT type_name FROM object_types WHERE id = ?`,
[existing.id]
);
// Determine what type_name value to use
let typeNameToUse: string | null = null;
if (objectType.typeName && objectType.typeName.trim() !== '') {
// Use provided typeName if available
typeNameToUse = objectType.typeName;
} else if (currentRecord?.type_name && currentRecord.type_name.trim() !== '') {
// Keep existing type_name if it exists and no new one provided
typeNameToUse = currentRecord.type_name;
} else {
// Generate type_name from display_name if missing
typeNameToUse = toPascalCase(objectType.displayName);
logger.warn(`SchemaRepository.upsertObjectType: Generated missing type_name "${typeNameToUse}" from display_name "${objectType.displayName}" for id=${existing.id}`);
}
// Only update type_name if we have a valid value (never set to NULL)
if (typeNameToUse && typeNameToUse.trim() !== '') {
await this.db.execute(
`UPDATE object_types
SET display_name = ?, description = ?, sync_priority = ?, object_count = ?,
type_name = ?, updated_at = ?
WHERE id = ?`,
[
objectType.displayName,
objectType.description || null,
objectType.syncPriority || 0,
objectType.objectCount || 0,
typeNameToUse,
now,
existing.id,
]
);
} else {
// Shouldn't happen, but log if it does
logger.error(`SchemaRepository.upsertObjectType: Cannot update type_name - all sources are empty for id=${existing.id}`);
// Still update other fields, but don't touch type_name
await this.db.execute(
`UPDATE object_types
SET display_name = ?, description = ?, sync_priority = ?, object_count = ?,
updated_at = ?
WHERE id = ?`,
[
objectType.displayName,
objectType.description || null,
objectType.syncPriority || 0,
objectType.objectCount || 0,
now,
existing.id,
]
);
}
return existing.id;
} else {
await this.db.execute(
`INSERT INTO object_types (schema_id, jira_type_id, type_name, display_name, description, sync_priority, object_count, enabled, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
[
schemaId,
objectType.jiraTypeId,
objectType.typeName,
objectType.displayName,
objectType.description || null,
objectType.syncPriority || 0,
objectType.objectCount || 0,
false, // Default: disabled
now,
now,
]
);
const result = await this.db.queryOne<{ id: number }>(
`SELECT id FROM object_types WHERE schema_id = ? AND jira_type_id = ?`,
[schemaId, objectType.jiraTypeId]
);
return result?.id || 0;
}
}
/**
* Get enabled object types
*/
async getEnabledObjectTypes(): Promise<ObjectTypeRecord[]> {
// Handle both PostgreSQL (boolean) and SQLite (integer) for enabled column
const isPostgres = this.db.isPostgres === true;
// For PostgreSQL: enabled is BOOLEAN, so 'enabled = true' works
// For SQLite: enabled is INTEGER (0/1), so 'enabled = 1' works
// However, some adapters might return booleans as 1/0 in both cases
// So we check for both boolean true and integer 1
const enabledCondition = isPostgres
? 'enabled IS true' // PostgreSQL: IS true is more explicit than = true
: 'enabled = 1'; // SQLite: explicit integer comparison
// Query without aliases first to ensure we get the raw values
const rawResults = await this.db.query<{
id: number;
schema_id: number;
jira_type_id: number;
type_name: string | null;
display_name: string;
description: string | null;
sync_priority: number;
object_count: number;
enabled: boolean | number;
discovered_at: string;
updated_at: string;
}>(
`SELECT id, schema_id, jira_type_id, type_name, display_name, description,
sync_priority, object_count, enabled, discovered_at, updated_at
FROM object_types
WHERE ${enabledCondition}
ORDER BY sync_priority, type_name`
);
logger.debug(`SchemaRepository.getEnabledObjectTypes: Raw query found ${rawResults.length} enabled types. Raw type_name values: ${JSON.stringify(rawResults.map(r => ({ id: r.id, type_name: r.type_name, type_name_type: typeof r.type_name, display_name: r.display_name })))}`);
// Map to ObjectTypeRecord format manually to ensure proper mapping
const results: ObjectTypeRecord[] = rawResults.map(r => ({
id: r.id,
schemaId: r.schema_id,
jiraTypeId: r.jira_type_id,
typeName: r.type_name || '', // Convert null to empty string if needed
displayName: r.display_name,
description: r.description,
syncPriority: r.sync_priority,
objectCount: r.object_count,
enabled: r.enabled === true || r.enabled === 1,
discoveredAt: r.discovered_at,
updatedAt: r.updated_at,
}));
// Debug: Log what we found
logger.debug(`SchemaRepository.getEnabledObjectTypes: Found ${results.length} enabled types (isPostgres: ${isPostgres}, condition: ${enabledCondition})`);
if (results.length > 0) {
// Log raw results to see what we're actually getting
logger.debug(`SchemaRepository.getEnabledObjectTypes: Raw results: ${JSON.stringify(results.map(r => ({
id: r.id,
typeName: r.typeName,
typeNameType: typeof r.typeName,
typeNameLength: r.typeName?.length,
displayName: r.displayName,
enabled: r.enabled
})))}`);
// Check for missing typeName
const missingTypeName = results.filter(r => !r.typeName || r.typeName.trim() === '');
if (missingTypeName.length > 0) {
logger.error(`SchemaRepository.getEnabledObjectTypes: Found ${missingTypeName.length} enabled types with missing typeName: ${JSON.stringify(missingTypeName.map(r => ({
id: r.id,
jiraTypeId: r.jiraTypeId,
displayName: r.displayName,
typeName: r.typeName,
typeNameType: typeof r.typeName,
rawTypeName: JSON.stringify(r.typeName)
})))}`);
// Try to query directly to see what the DB actually has
for (const missing of missingTypeName) {
const directCheck = await this.db.queryOne<{ type_name: string | null }>(
`SELECT type_name FROM object_types WHERE id = ?`,
[missing.id]
);
logger.error(`SchemaRepository.getEnabledObjectTypes: Direct query for id=${missing.id} returned type_name: ${JSON.stringify(directCheck?.type_name)}`);
}
}
logger.debug(`SchemaRepository.getEnabledObjectTypes: Type names: ${results.map(r => `${r.typeName || 'NULL'}(enabled:${r.enabled}, type:${typeof r.enabled})`).join(', ')}`);
// Also check what gets filtered out
const filteredResults = results.filter(r => r.typeName && r.typeName.trim() !== '');
if (filteredResults.length < results.length) {
logger.warn(`SchemaRepository.getEnabledObjectTypes: Filtered out ${results.length - filteredResults.length} results with missing typeName`);
}
} else {
// Debug: Check if there are any enabled types at all (check the actual query)
const enabledCheck = await this.db.query<{ count: number }>(
isPostgres
? `SELECT COUNT(*) as count FROM object_types WHERE enabled IS true`
: `SELECT COUNT(*) as count FROM object_types WHERE enabled = 1`
);
logger.warn(`SchemaRepository.getEnabledObjectTypes: No enabled types found with query. Query found ${enabledCheck[0]?.count || 0} enabled types.`);
// Also check what types are actually in the DB
const allTypes = await this.db.query<{ typeName: string; enabled: boolean | number; id: number }>(
`SELECT id, type_name as typeName, enabled FROM object_types WHERE enabled IS NOT NULL ORDER BY enabled DESC LIMIT 10`
);
logger.warn(`SchemaRepository.getEnabledObjectTypes: Sample types from DB: ${allTypes.map(t => `id=${t.id}, ${t.typeName || 'NULL'}=enabled:${t.enabled}(${typeof t.enabled})`).join(', ')}`);
}
// Filter out results with missing typeName
return results.filter(r => r.typeName && r.typeName.trim() !== '');
}
/**
* Get object type by type name
*/
async getObjectTypeByTypeName(typeName: string): Promise<ObjectTypeRecord | null> {
return await this.db.queryOne<ObjectTypeRecord>(
`SELECT id, schema_id as schemaId, jira_type_id as jiraTypeId, type_name as typeName,
display_name as displayName, description, sync_priority as syncPriority,
object_count as objectCount, enabled, discovered_at as discoveredAt, updated_at as updatedAt
FROM object_types
WHERE type_name = ?`,
[typeName]
);
}
/**
* Get object type by Jira type ID
* Note: Jira type IDs are global across schemas, but we store them per schema.
* This method returns the first matching type found (any schema).
*/
async getObjectTypeByJiraId(jiraTypeId: number): Promise<ObjectTypeRecord | null> {
const result = await this.db.queryOne<ObjectTypeRecord>(
`SELECT id, schema_id as schemaId, jira_type_id as jiraTypeId, type_name as typeName,
display_name as displayName, description, sync_priority as syncPriority,
object_count as objectCount, enabled, discovered_at as discoveredAt, updated_at as updatedAt
FROM object_types
WHERE jira_type_id = ?
LIMIT 1`,
[jiraTypeId]
);
if (!result) {
// Diagnostic: Check if this type ID exists in any schema
const db = this.db;
try {
const allSchemasWithType = await db.query<{ schema_id: number; jira_schema_id: string; schema_name: string; count: number }>(
`SELECT ot.schema_id, s.jira_schema_id, s.name as schema_name, COUNT(*) as count
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id
WHERE ot.jira_type_id = ?
GROUP BY ot.schema_id, s.jira_schema_id, s.name`,
[jiraTypeId]
);
if (allSchemasWithType.length === 0) {
logger.debug(`SchemaRepository: Jira type ID ${jiraTypeId} not found in any schema. This object type needs to be discovered via schema discovery.`);
} else {
logger.debug(`SchemaRepository: Jira type ID ${jiraTypeId} exists in ${allSchemasWithType.length} schema(s): ${allSchemasWithType.map(s => `${s.schema_name} (ID: ${s.jira_schema_id})`).join(', ')}`);
}
} catch (error) {
logger.debug(`SchemaRepository: Failed to check schema existence for type ID ${jiraTypeId}`, error);
}
}
return result;
}
/**
* Upsert an attribute
*/
async upsertAttribute(attribute: {
jiraAttrId: number;
objectTypeName: string;
attrName: string;
fieldName: string;
attrType: string;
isMultiple: boolean;
isEditable: boolean;
isRequired: boolean;
isSystem: boolean;
referenceTypeName?: string;
description?: string;
}): Promise<number> {
const now = new Date().toISOString();
const existing = await this.db.queryOne<{ id: number }>(
`SELECT id FROM attributes WHERE jira_attr_id = ? AND object_type_name = ?`,
[attribute.jiraAttrId, attribute.objectTypeName]
);
if (existing) {
await this.db.execute(
`UPDATE attributes
SET attr_name = ?, field_name = ?, attr_type = ?, is_multiple = ?, is_editable = ?,
is_required = ?, is_system = ?, reference_type_name = ?, description = ?
WHERE id = ?`,
[
attribute.attrName,
attribute.fieldName,
attribute.attrType,
attribute.isMultiple,
attribute.isEditable,
attribute.isRequired,
attribute.isSystem,
attribute.referenceTypeName || null,
attribute.description || null,
existing.id,
]
);
return existing.id;
} else {
await this.db.execute(
`INSERT INTO attributes (jira_attr_id, object_type_name, attr_name, field_name, attr_type,
is_multiple, is_editable, is_required, is_system, reference_type_name, description, discovered_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
[
attribute.jiraAttrId,
attribute.objectTypeName,
attribute.attrName,
attribute.fieldName,
attribute.attrType,
attribute.isMultiple,
attribute.isEditable,
attribute.isRequired,
attribute.isSystem,
attribute.referenceTypeName || null,
attribute.description || null,
now,
]
);
const result = await this.db.queryOne<{ id: number }>(
`SELECT id FROM attributes WHERE jira_attr_id = ? AND object_type_name = ?`,
[attribute.jiraAttrId, attribute.objectTypeName]
);
return result?.id || 0;
}
}
/**
* Get attributes for an object type
*/
async getAttributesForType(objectTypeName: string): Promise<AttributeRecord[]> {
return await this.db.query<AttributeRecord>(
`SELECT id, jira_attr_id as jiraAttrId, object_type_name as objectTypeName, attr_name as attrName,
field_name as fieldName, attr_type as attrType, is_multiple as isMultiple,
is_editable as isEditable, is_required as isRequired, is_system as isSystem,
reference_type_name as referenceTypeName, description, discovered_at as discoveredAt
FROM attributes
WHERE object_type_name = ?
ORDER BY jira_attr_id`,
[objectTypeName]
);
}
/**
* Get attribute by object type and field name
*/
async getAttributeByFieldName(objectTypeName: string, fieldName: string): Promise<AttributeRecord | null> {
return await this.db.queryOne<AttributeRecord>(
`SELECT id, jira_attr_id as jiraAttrId, object_type_name as objectTypeName, attr_name as attrName,
field_name as fieldName, attr_type as attrType, is_multiple as isMultiple,
is_editable as isEditable, is_required as isRequired, is_system as isSystem,
reference_type_name as referenceTypeName, description, discovered_at as discoveredAt
FROM attributes
WHERE object_type_name = ? AND field_name = ?`,
[objectTypeName, fieldName]
);
}
/**
* Get attribute ID by object type and Jira attribute ID
*/
async getAttributeId(objectTypeName: string, jiraAttrId: number): Promise<number | null> {
const result = await this.db.queryOne<{ id: number }>(
`SELECT id FROM attributes WHERE object_type_name = ? AND jira_attr_id = ?`,
[objectTypeName, jiraAttrId]
);
return result?.id || null;
}
}
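The `enabled`-column handling in `getEnabledObjectTypes` above supports both database backends by pairing a per-dialect WHERE clause with value normalization on read. A minimal standalone sketch of that idea (helper names are illustrative, not part of the codebase):

```typescript
// PostgreSQL returns BOOLEAN columns as true/false; SQLite stores them as
// INTEGER 0/1. Normalizing at the mapping layer keeps callers dialect-agnostic.
function normalizeDbBoolean(value: boolean | number | null | undefined): boolean {
  return value === true || value === 1;
}

// Per-dialect WHERE clause, mirroring the repository's enabled condition.
function enabledCondition(isPostgres: boolean): string {
  return isPostgres ? 'enabled IS true' : 'enabled = 1';
}
```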

View File

@@ -103,7 +103,7 @@ router.get('/bia-test', async (req: Request, res: Response) => {
if (getQueryString(req, 'clear') === 'true') {
clearBIACache();
}
-const biaData = loadBIAData();
+const biaData = await loadBIAData();
res.json({
recordCount: biaData.length,
records: biaData.slice(0, 20), // First 20 records
@@ -119,7 +119,7 @@ router.get('/bia-test', async (req: Request, res: Response) => {
router.get('/bia-debug', async (req: Request, res: Response) => {
try {
clearBIACache();
-const biaData = loadBIAData();
+const biaData = await loadBIAData();
// Get a few sample applications
const searchResult = await dataService.searchApplications({}, 1, 50);
@@ -138,7 +138,7 @@ router.get('/bia-debug', async (req: Request, res: Response) => {
// Test each sample app
for (const app of [...sampleApps, ...testApps]) {
-const matchResult = findBIAMatch(app.name, app.searchReference ?? null);
+const matchResult = await findBIAMatch(app.name, app.searchReference ?? null);
// Find all potential matches in Excel data for detailed analysis
const normalizedAppName = app.name.toLowerCase().trim();
@@ -207,7 +207,7 @@ router.get('/bia-comparison', async (req: Request, res: Response) => {
clearBIACache();
// Load fresh data
-const testBIAData = loadBIAData();
+const testBIAData = await loadBIAData();
logger.info(`BIA comparison: Loaded ${testBIAData.length} records from Excel file`);
if (testBIAData.length === 0) {
logger.error('BIA comparison: No Excel data loaded - check if BIA.xlsx exists and is readable');
@@ -251,7 +251,7 @@ router.get('/bia-comparison', async (req: Request, res: Response) => {
for (const app of applications) {
// Find BIA match in Excel
-const matchResult = findBIAMatch(app.name, app.searchReference ?? null);
+const matchResult = await findBIAMatch(app.name, app.searchReference ?? null);
// Log first few matches for debugging
if (comparisonItems.length < 5) {
@@ -326,8 +326,9 @@ router.get('/bia-comparison', async (req: Request, res: Response) => {
// Query params:
// - mode=edit: Force refresh from Jira for editing (includes _jiraUpdatedAt for conflict detection)
router.get('/:id', async (req: Request, res: Response) => {
-try {
const id = getParamString(req, 'id');
+try {
const mode = getQueryString(req, 'mode');
// Don't treat special routes as application IDs
@@ -342,7 +343,7 @@ router.get('/:id', async (req: Request, res: Response) => {
: await dataService.getApplicationById(id);
if (!application) {
-res.status(404).json({ error: 'Application not found' });
+res.status(404).json({ error: 'Application not found', id });
return;
}
@@ -355,8 +356,15 @@ router.get('/:id', async (req: Request, res: Response) => {
res.json(applicationWithCompleteness);
} catch (error) {
-logger.error('Failed to get application', error);
-res.status(500).json({ error: 'Failed to get application' });
+logger.error(`Failed to get application ${id}`, error);
+const errorMessage = error instanceof Error ? error.message : 'Unknown error';
+const errorDetails = error instanceof Error && error.stack ? error.stack : String(error);
+logger.debug(`Error details for application ${id}:`, errorDetails);
+res.status(500).json({
+error: 'Failed to get application',
+details: errorMessage,
+id: id,
+});
}
});
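The hunk above moves the `const id = getParamString(req, 'id')` read out of the `try` block so the `catch` can include the ID in its log line and error payload. A minimal sketch of that scoping pattern (handler and names are hypothetical, not from the codebase):

```typescript
// Values the catch block needs must be bound before the try; a const declared
// inside the try is out of scope when the failure is logged.
function handle(params: { id: string }, work: (id: string) => string): string {
  const id = params.id; // read before try so catch can reference it
  try {
    return work(id);
  } catch (error) {
    return `Failed to get application ${id}`;
  }
}
```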
@@ -625,34 +633,101 @@ router.get('/:id/related/:objectType', async (req: Request, res: Response) => {
type RelatedObjectType = Server | Flows | Certificate | Domain | AzureSubscription;
let relatedObjects: RelatedObjectType[] = [];
+// Get requested attributes from query string (needed for fallback)
+const attributesParam = getQueryString(req, 'attributes');
+const requestedAttrs = attributesParam
+? attributesParam.split(',').map(a => a.trim())
+: [];
+logger.debug(`Getting related objects for application ${id}, objectType: ${objectType}, typeName: ${typeName}, requestedAttrs: ${requestedAttrs.join(',') || 'none'}`);
// First try to get from cache
switch (typeName) {
case 'Server':
relatedObjects = await cmdbService.getReferencingObjects<Server>(id, 'Server');
logger.debug(`Found ${relatedObjects.length} Servers referencing application ${id} in cache`);
break;
case 'Flows': {
// Flows reference ApplicationComponents via Source and Target attributes
// We need to find Flows where this ApplicationComponent is the target of the reference
relatedObjects = await cmdbService.getReferencingObjects<Flows>(id, 'Flows');
logger.debug(`Found ${relatedObjects.length} Flows referencing application ${id} in cache`);
break;
}
case 'Certificate':
relatedObjects = await cmdbService.getReferencingObjects<Certificate>(id, 'Certificate');
logger.debug(`Found ${relatedObjects.length} Certificates referencing application ${id} in cache`);
break;
case 'Domain':
relatedObjects = await cmdbService.getReferencingObjects<Domain>(id, 'Domain');
logger.debug(`Found ${relatedObjects.length} Domains referencing application ${id} in cache`);
break;
case 'AzureSubscription':
relatedObjects = await cmdbService.getReferencingObjects<AzureSubscription>(id, 'AzureSubscription');
logger.debug(`Found ${relatedObjects.length} AzureSubscriptions referencing application ${id} in cache`);
break;
default:
relatedObjects = [];
logger.warn(`Unknown object type for related objects: ${typeName}`);
}
-// Get requested attributes from query string
-const attributesParam = getQueryString(req, 'attributes');
-const requestedAttrs = attributesParam
-? attributesParam.split(',').map(a => a.trim())
-: [];
// If no objects found in cache, try to fetch from Jira directly as fallback
// This helps when relations haven't been synced yet
if (relatedObjects.length === 0) {
try {
// Get application to get its objectKey
const app = await cmdbService.getObject('ApplicationComponent', id);
if (!app) {
logger.warn(`Application ${id} not found in cache, cannot fetch related objects from Jira`);
} else if (!app.objectKey) {
logger.warn(`Application ${id} has no objectKey, cannot fetch related objects from Jira`);
} else {
logger.info(`No related ${typeName} objects found in cache for application ${id} (${app.objectKey}), trying Jira directly...`);
const { jiraAssetsService } = await import('../services/jiraAssets.js');
// Use the Jira object type name from schema (not our internal typeName)
const { OBJECT_TYPES } = await import('../generated/jira-schema.js');
const jiraTypeDef = OBJECT_TYPES[typeName];
const jiraObjectTypeName = jiraTypeDef?.name || objectType;
logger.debug(`Using Jira object type name: "${jiraObjectTypeName}" for internal type "${typeName}"`);
const jiraResult = await jiraAssetsService.getRelatedObjects(app.objectKey, jiraObjectTypeName, requestedAttrs);
logger.debug(`Jira query returned ${jiraResult?.objects?.length || 0} objects`);
if (jiraResult && jiraResult.objects && jiraResult.objects.length > 0) {
logger.info(`Found ${jiraResult.objects.length} related ${typeName} objects from Jira, caching them...`);
// Batch fetch and cache all objects at once (much more efficient)
const objectIds = jiraResult.objects.map(obj => obj.id.toString());
const cachedObjects = await cmdbService.batchFetchAndCacheObjects(typeName as CMDBObjectTypeName, objectIds);
logger.info(`Successfully batch cached ${cachedObjects.length} of ${jiraResult.objects.length} related ${typeName} objects`);
// Use cached objects, fallback to minimal objects from Jira result if not found
const cachedById = new Map(cachedObjects.map(obj => [obj.id, obj]));
relatedObjects = jiraResult.objects.map((jiraObj) => {
const cached = cachedById.get(jiraObj.id.toString());
if (cached) {
return cached as RelatedObjectType;
}
// Fallback: create minimal object from Jira result
logger.debug(`Creating minimal object for ${jiraObj.id} (${jiraObj.key}) as cache lookup failed`);
return {
id: jiraObj.id.toString(),
objectKey: jiraObj.key,
label: jiraObj.label,
_objectType: typeName,
} as RelatedObjectType;
});
logger.info(`Loaded ${relatedObjects.length} related ${typeName} objects (${relatedObjects.filter(o => o).length} valid)`);
} else {
logger.info(`No related ${typeName} objects found in Jira for application ${app.objectKey}`);
}
}
} catch (error) {
logger.error(`Failed to fetch related ${typeName} objects from Jira as fallback for application ${id}:`, error);
}
}
// Format response - must match RelatedObjectsResponse type expected by frontend
const objects = relatedObjects.map(obj => {

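The fallback in the hunk above follows a cache-aside pattern: on a cache miss, batch-fetch from Jira, backfill the cache, and keep a minimal stub for anything that could not be cached. The merge step can be sketched generically (types and names here are illustrative, not from the codebase):

```typescript
interface RelatedStub { id: string; objectKey?: string; label?: string }

// Prefer the cached (fully hydrated) object; fall back to a minimal stub
// built from the Jira search result, as the route does.
function mergeWithCache(
  fetched: Array<{ id: string; key: string; label: string }>,
  cached: RelatedStub[]
): RelatedStub[] {
  const cachedById = new Map(cached.map(o => [o.id, o]));
  return fetched.map(obj =>
    cachedById.get(obj.id) ?? { id: obj.id, objectKey: obj.key, label: obj.label }
  );
}
```

Keying the lookup by ID keeps the result in Jira's order while still preferring the richer cached record whenever one exists.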
View File

@@ -9,6 +9,7 @@ import { getAuthDatabase } from '../services/database/migrations.js';
const router = Router();
// Extend Express Request to include user info
// Note: This extends the declaration from authorization.ts
declare global {
namespace Express {
interface Request {
@@ -83,7 +84,11 @@ router.get('/me', async (req: Request, res: Response) => {
// The sessionId should already be set by authMiddleware from cookies
const sessionId = req.sessionId || req.headers['x-session-id'] as string || req.cookies?.sessionId;
-logger.debug(`[GET /me] SessionId: ${sessionId ? sessionId.substring(0, 8) + '...' : 'none'}, Cookies: ${JSON.stringify(req.cookies)}`);
+// Only log relevant cookies to avoid noise from other applications
+const relevantCookies = req.cookies ? {
+sessionId: req.cookies.sessionId ? req.cookies.sessionId.substring(0, 8) + '...' : undefined,
+} : {};
+logger.debug(`[GET /me] SessionId: ${sessionId ? sessionId.substring(0, 8) + '...' : 'none'}, Relevant cookies: ${JSON.stringify(relevantCookies)}`);
// Service accounts are NOT used for application authentication
// They are only used for Jira API access (configured in .env as JIRA_SERVICE_ACCOUNT_TOKEN)
@@ -113,12 +118,13 @@ router.get('/me', async (req: Request, res: Response) => {
let userData = session.user;
if ('id' in session.user) {
// Local user - ensure proper format
+const email = session.user.email || session.user.emailAddress || '';
userData = {
id: session.user.id,
-email: session.user.email || session.user.emailAddress,
+email: email,
username: session.user.username,
displayName: session.user.displayName,
-emailAddress: session.user.email || session.user.emailAddress,
+emailAddress: email,
roles: session.user.roles || [],
permissions: session.user.permissions || [],
};
@@ -388,7 +394,7 @@ router.post('/verify-email', async (req: Request, res: Response) => {
// Get invitation token info
router.get('/invitation/:token', async (req: Request, res: Response) => {
-const { token } = req.params;
+const token = Array.isArray(req.params.token) ? req.params.token[0] : req.params.token;
try {
const user = await userService.validateInvitationToken(token);
@@ -454,9 +460,11 @@ router.post('/accept-invitation', async (req: Request, res: Response) => {
export async function authMiddleware(req: Request, res: Response, next: NextFunction) {
const sessionId = req.headers['x-session-id'] as string || req.cookies?.sessionId;
-// Debug logging for cookie issues
+// Debug logging for cookie issues (only log relevant cookies to avoid noise)
if (req.path === '/api/auth/me') {
-logger.debug(`[authMiddleware] Path: ${req.path}, Cookies: ${JSON.stringify(req.cookies)}, SessionId from cookie: ${req.cookies?.sessionId}, SessionId from header: ${req.headers['x-session-id']}`);
+const sessionIdFromCookie = req.cookies?.sessionId ? req.cookies.sessionId.substring(0, 8) + '...' : 'none';
+const sessionIdFromHeader = req.headers['x-session-id'] ? String(req.headers['x-session-id']).substring(0, 8) + '...' : 'none';
+logger.debug(`[authMiddleware] Path: ${req.path}, SessionId from cookie: ${sessionIdFromCookie}, SessionId from header: ${sessionIdFromHeader}`);
}
if (sessionId) {

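Both log sites in the auth routes above truncate the session ID to an 8-character prefix so entries stay correlatable without writing credentials to logs. That masking can be sketched as a reusable helper (the function name is illustrative, not from the codebase):

```typescript
// Keep a short prefix for log correlation; never emit the full session ID.
function redactSessionId(sessionId: string | undefined): string {
  return sessionId ? sessionId.substring(0, 8) + '...' : 'none';
}
```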
View File

@@ -5,7 +5,7 @@
*/
import { Router, Request, Response } from 'express';
-import { cacheStore } from '../services/cacheStore.js';
+import { normalizedCacheStore as cacheStore } from '../services/normalizedCacheStore.js';
import { syncEngine } from '../services/syncEngine.js';
import { logger } from '../services/logger.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
@@ -30,10 +30,16 @@ router.get('/status', async (req: Request, res: Response) => {
if (cacheStats.objectsByType['ApplicationComponent'] !== undefined) {
try {
const { jiraAssetsClient } = await import('../services/jiraAssetsClient.js');
+const { schemaMappingService } = await import('../services/schemaMappingService.js');
const { OBJECT_TYPES } = await import('../generated/jira-schema.js');
const typeDef = OBJECT_TYPES['ApplicationComponent'];
if (typeDef) {
-const searchResult = await jiraAssetsClient.searchObjects(`objectType = "${typeDef.name}"`, 1, 1);
+// Get schema ID for ApplicationComponent
+const schemaId = await schemaMappingService.getSchemaId('ApplicationComponent');
+// Skip if no schema ID is available
+if (schemaId && schemaId.trim() !== '') {
+const searchResult = await jiraAssetsClient.searchObjects(`objectType = "${typeDef.name}"`, 1, 1, schemaId);
const jiraCount = searchResult.totalCount;
const cacheCount = cacheStats.objectsByType['ApplicationComponent'] || 0;
jiraComparison = {
@@ -42,6 +48,7 @@ router.get('/status', async (req: Request, res: Response) => {
difference: jiraCount - cacheCount,
};
}
}
} catch (err) {
logger.debug('Could not fetch Jira count for comparison', err);
}
@@ -64,6 +71,17 @@ router.post('/sync', async (req: Request, res: Response) => {
try {
logger.info('Manual full sync triggered');
// Check if configuration is complete
const { schemaConfigurationService } = await import('../services/schemaConfigurationService.js');
const isConfigured = await schemaConfigurationService.isConfigurationComplete();
if (!isConfigured) {
res.status(400).json({
error: 'Schema configuration not complete',
message: 'Please configure at least one object type to be synced in the settings page before starting sync.',
});
return;
}
// Don't wait for completion - return immediately
syncEngine.fullSync().catch(err => {
logger.error('Full sync failed', err);
@@ -75,7 +93,11 @@ router.post('/sync', async (req: Request, res: Response) => {
});
} catch (error) {
logger.error('Failed to trigger full sync', error);
-res.status(500).json({ error: 'Failed to trigger sync' });
+const errorMessage = error instanceof Error ? error.message : 'Failed to trigger sync';
+res.status(500).json({
+error: errorMessage,
+details: error instanceof Error ? error.stack : undefined
+});
}
});
@@ -116,6 +138,39 @@ router.post('/sync/:objectType', async (req: Request, res: Response) => {
}
});
// Refresh a specific application (force re-sync from Jira)
router.post('/refresh-application/:id', async (req: Request, res: Response) => {
try {
const id = getParamString(req, 'id');
const { cmdbService } = await import('../services/cmdbService.js');
logger.info(`Manual refresh triggered for application ${id}`);
// Force refresh from Jira
const app = await cmdbService.getObject('ApplicationComponent', id, { forceRefresh: true });
if (!app) {
res.status(404).json({ error: `Application ${id} not found in Jira` });
return;
}
res.json({
status: 'refreshed',
applicationId: id,
applicationKey: app.objectKey,
message: 'Application refreshed from Jira and cached with updated schema',
});
} catch (error) {
const id = getParamString(req, 'id');
const errorMessage = error instanceof Error ? error.message : 'Failed to refresh application';
logger.error(`Failed to refresh application ${id}`, error);
res.status(500).json({
error: errorMessage,
applicationId: id,
});
}
});
// Clear cache for a specific type
router.delete('/clear/:objectType', async (req: Request, res: Response) => {
try {

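The error handlers in the cache routes above all reduce an unknown thrown value to a message plus optional stack before responding. The same shape could be factored into one helper; this is a sketch of the pattern, not a function the codebase actually shares:

```typescript
// Express catch blocks receive `unknown`; narrow to Error before reading
// message/stack, and fall back to a route-specific default message.
function toErrorPayload(
  error: unknown,
  fallback: string
): { error: string; details?: string } {
  const message = error instanceof Error ? error.message : fallback;
  const details = error instanceof Error ? error.stack : undefined;
  return { error: message, details };
}
```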
View File

@@ -0,0 +1,488 @@
/**
* Data Validation routes
*
* Provides endpoints for validating and inspecting data in the cache/database.
*/
import { Router, Request, Response } from 'express';
import { normalizedCacheStore as cacheStore } from '../services/normalizedCacheStore.js';
import { logger } from '../services/logger.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import { getQueryString, getParamString } from '../utils/queryHelpers.js';
import { schemaCacheService } from '../services/schemaCacheService.js';
import { jiraAssetsClient } from '../services/jiraAssetsClient.js';
import { dataIntegrityService } from '../services/dataIntegrityService.js';
import { schemaMappingService } from '../services/schemaMappingService.js';
import { getDatabaseAdapter } from '../services/database/singleton.js';
import type { CMDBObjectTypeName } from '../generated/jira-types.js';
const router = Router();
// All routes require authentication and manage_settings permission
router.use(requireAuth);
router.use(requirePermission('manage_settings'));
/**
* GET /api/data-validation/stats
* Get comprehensive data validation statistics
*/
router.get('/stats', async (req: Request, res: Response) => {
try {
const db = getDatabaseAdapter();
const cacheStats = await cacheStore.getStats();
// Get object counts by type from cache
const objectsByType = cacheStats.objectsByType;
// Get schema from database (via cache)
const schema = await schemaCacheService.getSchema();
const objectTypes = schema.objectTypes;
const typeNames = Object.keys(objectTypes);
// Get schema information for each object type (join with schemas table)
const schemaInfoMap = new Map<string, { schemaId: string; schemaName: string }>();
try {
const schemaInfoRows = await db.query<{
type_name: string;
jira_schema_id: string;
schema_name: string;
}>(`
SELECT ot.type_name, s.jira_schema_id, s.name as schema_name
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id
WHERE ot.type_name IN (${typeNames.map(() => '?').join(',')})
`, typeNames);
for (const row of schemaInfoRows) {
schemaInfoMap.set(row.type_name, {
schemaId: row.jira_schema_id,
schemaName: row.schema_name,
});
}
} catch (error) {
logger.debug('Failed to fetch schema information', error);
}
// Get Jira counts for comparison
const jiraCounts: Record<string, number> = {};
// Fetch counts from Jira in parallel, using schema IDs from database
const countPromises = typeNames.map(async (typeName) => {
try {
// Get schema ID from the database (already fetched above)
const schemaInfo = schemaInfoMap.get(typeName);
// If no schema info from database, try schemaMappingService as fallback
let schemaId: string | undefined = schemaInfo?.schemaId;
if (!schemaId || schemaId.trim() === '') {
schemaId = await schemaMappingService.getSchemaId(typeName);
}
// Skip if no schema ID is available (object type not configured)
if (!schemaId || schemaId.trim() === '') {
logger.debug(`No schema ID configured for ${typeName}, skipping Jira count`);
jiraCounts[typeName] = 0;
return { typeName, count: 0 };
}
const count = await jiraAssetsClient.getObjectCount(typeName, schemaId);
jiraCounts[typeName] = count;
return { typeName, count };
} catch (error) {
logger.debug(`Failed to get Jira count for ${typeName}`, error);
jiraCounts[typeName] = 0;
return { typeName, count: 0 };
}
});
await Promise.all(countPromises);
// Calculate differences
const typeComparisons: Array<{
typeName: string;
typeDisplayName: string;
schemaId?: string;
schemaName?: string;
cacheCount: number;
jiraCount: number;
difference: number;
syncStatus: 'synced' | 'outdated' | 'missing';
}> = [];
for (const [typeName, typeDef] of Object.entries(objectTypes)) {
const cacheCount = objectsByType[typeName] || 0;
const jiraCount = jiraCounts[typeName] || 0;
const difference = jiraCount - cacheCount;
let syncStatus: 'synced' | 'outdated' | 'missing';
if (cacheCount === 0 && jiraCount > 0) {
syncStatus = 'missing';
} else if (difference > 0) {
syncStatus = 'outdated';
} else {
syncStatus = 'synced';
}
const schemaInfo = schemaInfoMap.get(typeName);
typeComparisons.push({
typeName,
typeDisplayName: typeDef.name,
schemaId: schemaInfo?.schemaId,
schemaName: schemaInfo?.schemaName,
cacheCount,
jiraCount,
difference,
syncStatus,
});
}
// Sort by difference (most outdated first)
typeComparisons.sort((a, b) => b.difference - a.difference);
// Get relation statistics
const relationStats = {
total: cacheStats.totalRelations,
// Could add more detailed relation stats here
};
// Check for broken references (references to objects that don't exist)
let brokenReferences = 0;
try {
brokenReferences = await cacheStore.getBrokenReferencesCount();
} catch (error) {
logger.debug('Could not check for broken references', error);
}
// Get objects with missing required attributes
// This would require schema information, so we'll skip for now
res.json({
cache: {
totalObjects: cacheStats.totalObjects,
totalRelations: cacheStats.totalRelations,
objectsByType,
isWarm: cacheStats.isWarm,
dbSizeBytes: cacheStats.dbSizeBytes,
lastFullSync: cacheStats.lastFullSync,
lastIncrementalSync: cacheStats.lastIncrementalSync,
},
jira: {
counts: jiraCounts,
},
comparison: {
typeComparisons,
totalOutdated: typeComparisons.filter(t => t.syncStatus === 'outdated').length,
totalMissing: typeComparisons.filter(t => t.syncStatus === 'missing').length,
totalSynced: typeComparisons.filter(t => t.syncStatus === 'synced').length,
},
validation: {
brokenReferences,
// Add more validation metrics here
},
relations: relationStats,
});
} catch (error) {
logger.error('Failed to get data validation stats', error);
res.status(500).json({ error: 'Failed to get data validation stats' });
}
});
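The status classification inside the `/stats` comparison loop can be read as a pure function. A minimal sketch, with an illustrative name not taken from the codebase:

```typescript
type SyncStatus = 'synced' | 'outdated' | 'missing';

// Same decision order as the route: "missing" wins when nothing is cached,
// then any positive Jira-minus-cache difference means "outdated".
function classifySyncStatus(cacheCount: number, jiraCount: number): SyncStatus {
  const difference = jiraCount - cacheCount;
  if (cacheCount === 0 && jiraCount > 0) return 'missing'; // nothing cached yet
  if (difference > 0) return 'outdated'; // Jira has objects the cache lacks
  return 'synced'; // caught up, or cache is ahead after deletions in Jira
}
```

Note that a cache count higher than the Jira count (objects deleted in Jira but still cached) still reports `synced` here; the integrity-check endpoints below are what surface those stale entries.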
/**
* GET /api/data-validation/objects/:typeName
* Get sample objects of a specific type for inspection
*/
router.get('/objects/:typeName', async (req: Request, res: Response) => {
try {
const typeName = getParamString(req, 'typeName');
const rawLimit = parseInt(getQueryString(req, 'limit') || '10', 10);
const rawOffset = parseInt(getQueryString(req, 'offset') || '0', 10);
const limit = Number.isNaN(rawLimit) ? 10 : rawLimit;
const offset = Number.isNaN(rawOffset) ? 0 : rawOffset;
// Get schema from database (via cache)
const schema = await schemaCacheService.getSchema();
const objectTypes = schema.objectTypes;
if (!objectTypes[typeName]) {
res.status(400).json({
error: `Unknown object type: ${typeName}`,
supportedTypes: Object.keys(objectTypes),
});
return;
}
const objects = await cacheStore.getObjects(typeName as CMDBObjectTypeName, { limit, offset });
const total = await cacheStore.countObjects(typeName as CMDBObjectTypeName);
res.json({
typeName,
typeDisplayName: objectTypes[typeName].name,
objects,
pagination: {
limit,
offset,
total,
hasMore: offset + limit < total,
},
});
} catch (error) {
const typeName = getParamString(req, 'typeName');
logger.error(`Failed to get objects for type ${typeName}`, error);
res.status(500).json({ error: 'Failed to get objects' });
}
});
/**
* GET /api/data-validation/object/:id
* Get a specific object by ID for inspection
*/
router.get('/object/:id', async (req: Request, res: Response) => {
try {
const id = getParamString(req, 'id');
// Try to find the object in any type
// First, get the object's metadata
const objRow = await cacheStore.getObjectMetadata(id);
if (!objRow) {
res.status(404).json({ error: `Object ${id} not found in cache` });
return;
}
// Get schema from database (via cache)
const schema = await schemaCacheService.getSchema();
const objectTypes = schema.objectTypes;
const object = await cacheStore.getObject(objRow.object_type_name as CMDBObjectTypeName, id);
if (!object) {
res.status(404).json({ error: `Object ${id} could not be reconstructed` });
return;
}
res.json({
object,
metadata: {
typeName: objRow.object_type_name,
typeDisplayName: objectTypes[objRow.object_type_name]?.name || objRow.object_type_name,
objectKey: objRow.object_key,
label: objRow.label,
},
});
} catch (error) {
const id = getParamString(req, 'id');
logger.error(`Failed to get object ${id}`, error);
res.status(500).json({ error: 'Failed to get object' });
}
});
/**
* GET /api/data-validation/broken-references
* Get list of broken references (references to objects that don't exist)
*/
router.get('/broken-references', async (req: Request, res: Response) => {
try {
const rawLimit = parseInt(getQueryString(req, 'limit') || '50', 10);
const rawOffset = parseInt(getQueryString(req, 'offset') || '0', 10);
const limit = Number.isNaN(rawLimit) ? 50 : rawLimit;
const offset = Number.isNaN(rawOffset) ? 0 : rawOffset;
// Get broken references with details
const brokenRefs = await cacheStore.getBrokenReferences(limit, offset);
// Get total count
const total = await cacheStore.getBrokenReferencesCount();
res.json({
brokenReferences: brokenRefs,
pagination: {
limit,
offset,
total,
hasMore: offset + limit < total,
},
});
} catch (error) {
logger.error('Failed to get broken references', error);
res.status(500).json({ error: 'Failed to get broken references' });
}
});
/**
* POST /api/data-validation/repair-broken-references
* Repair broken references
*
* Query params:
* - mode: 'delete' | 'fetch' | 'dry-run' (default: 'fetch')
* - batchSize: number (default: 100)
* - maxRepairs: number (default: 0 = unlimited)
*/
router.post('/repair-broken-references', async (req: Request, res: Response) => {
try {
const mode = (getQueryString(req, 'mode') || 'fetch') as 'delete' | 'fetch' | 'dry-run';
const batchSizeRaw = parseInt(getQueryString(req, 'batchSize') || '100', 10);
const maxRepairsRaw = parseInt(getQueryString(req, 'maxRepairs') || '0', 10);
const batchSize = Number.isNaN(batchSizeRaw) ? 100 : batchSizeRaw;
const maxRepairs = Number.isNaN(maxRepairsRaw) ? 0 : maxRepairsRaw;
if (!['delete', 'fetch', 'dry-run'].includes(mode)) {
res.status(400).json({ error: 'Invalid mode. Must be: delete, fetch, or dry-run' });
return;
}
logger.info(`DataValidation: Starting repair broken references (mode: ${mode}, batchSize: ${batchSize}, maxRepairs: ${maxRepairs})`);
const result = await dataIntegrityService.repairBrokenReferences(mode, batchSize, maxRepairs);
res.json({
status: 'completed',
mode,
result,
});
} catch (error) {
logger.error('Failed to repair broken references', error);
res.status(500).json({ error: 'Failed to repair broken references' });
}
});
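The query-parameter handling documented above (mode defaulting to `fetch`, `batchSize` 100, `maxRepairs` 0 meaning unlimited) can be sketched as a standalone parser. This is illustrative only; the helper names are assumptions, not code from the repository:

```typescript
type RepairMode = 'delete' | 'fetch' | 'dry-run';

interface RepairOptions {
  mode: RepairMode;
  batchSize: number;
  maxRepairs: number; // 0 = unlimited, as documented on the endpoint
}

// Fall back to a default when the value is absent or not a number.
function toInt(value: string | undefined, fallback: number): number {
  const n = parseInt(value ?? '', 10);
  return Number.isNaN(n) ? fallback : n;
}

// Returns null for an invalid mode, which the route answers with 400.
function parseRepairOptions(query: Record<string, string | undefined>): RepairOptions | null {
  const mode = (query.mode ?? 'fetch') as RepairMode;
  if (!['delete', 'fetch', 'dry-run'].includes(mode)) return null;
  return {
    mode,
    batchSize: toInt(query.batchSize, 100),
    maxRepairs: toInt(query.maxRepairs, 0),
  };
}
```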
/**
* POST /api/data-validation/full-integrity-check
* Run full integrity check and optionally repair
*
* Query params:
* - repair: boolean (default: false)
*/
router.post('/full-integrity-check', async (req: Request, res: Response) => {
try {
const repair = getQueryString(req, 'repair') === 'true';
logger.info(`DataValidation: Starting full integrity check (repair: ${repair})`);
const result = await dataIntegrityService.fullIntegrityCheck(repair);
res.json({
status: 'completed',
result,
});
} catch (error) {
logger.error('Failed to run full integrity check', error);
res.status(500).json({ error: 'Failed to run full integrity check' });
}
});
/**
* GET /api/data-validation/validation-status
* Get current validation status
*/
router.get('/validation-status', async (req: Request, res: Response) => {
try {
const status = await dataIntegrityService.validateReferences();
res.json(status);
} catch (error) {
logger.error('Failed to get validation status', error);
res.status(500).json({ error: 'Failed to get validation status' });
}
});
/**
* GET /api/data-validation/schema-mappings
* Get all schema mappings
*/
router.get('/schema-mappings', async (req: Request, res: Response) => {
try {
const mappings = await schemaMappingService.getAllMappings();
res.json({ mappings });
} catch (error) {
logger.error('Failed to get schema mappings', error);
res.status(500).json({ error: 'Failed to get schema mappings' });
}
});
/**
* POST /api/data-validation/schema-mappings
* Create or update a schema mapping
*/
router.post('/schema-mappings', async (req: Request, res: Response) => {
try {
const { objectTypeName, schemaId, enabled = true } = req.body;
if (!objectTypeName || !schemaId) {
res.status(400).json({ error: 'objectTypeName and schemaId are required' });
return;
}
await schemaMappingService.setMapping(objectTypeName, schemaId, enabled);
schemaMappingService.clearCache(); // Clear cache to reload
res.json({
status: 'success',
message: `Schema mapping updated for ${objectTypeName}`,
});
} catch (error) {
logger.error('Failed to set schema mapping', error);
res.status(500).json({ error: 'Failed to set schema mapping' });
}
});
/**
* DELETE /api/data-validation/schema-mappings/:objectTypeName
* Delete a schema mapping (will use default schema)
*/
router.delete('/schema-mappings/:objectTypeName', async (req: Request, res: Response) => {
try {
const objectTypeName = getParamString(req, 'objectTypeName');
await schemaMappingService.deleteMapping(objectTypeName);
schemaMappingService.clearCache(); // Clear cache to reload
res.json({
status: 'success',
message: `Schema mapping deleted for ${objectTypeName}`,
});
} catch (error) {
logger.error('Failed to delete schema mapping', error);
res.status(500).json({ error: 'Failed to delete schema mapping' });
}
});
/**
* GET /api/data-validation/object-types
* Get all object types with their sync configuration
*/
router.get('/object-types', async (req: Request, res: Response) => {
try {
logger.debug('GET /api/data-validation/object-types - Fetching object types...');
const objectTypes = await schemaMappingService.getAllObjectTypesWithConfig();
logger.info(`GET /api/data-validation/object-types - Returning ${objectTypes.length} object types`);
res.json({ objectTypes });
} catch (error) {
logger.error('Failed to get object types', error);
res.status(500).json({
error: 'Failed to get object types',
details: error instanceof Error ? error.message : String(error)
});
}
});
/**
* PATCH /api/data-validation/object-types/:objectTypeName/enabled
* Enable or disable an object type for syncing
*/
router.patch('/object-types/:objectTypeName/enabled', async (req: Request, res: Response) => {
try {
const objectTypeName = getParamString(req, 'objectTypeName');
const { enabled } = req.body;
if (typeof enabled !== 'boolean') {
res.status(400).json({ error: 'enabled must be a boolean' });
return;
}
await schemaMappingService.setTypeEnabled(objectTypeName, enabled);
schemaMappingService.clearCache();
res.json({
status: 'success',
message: `${objectTypeName} ${enabled ? 'enabled' : 'disabled'} for syncing`,
});
} catch (error) {
logger.error('Failed to update object type enabled status', error);
res.status(500).json({ error: 'Failed to update object type enabled status' });
}
});
export default router;


@@ -37,7 +37,8 @@ router.get('/', async (req: Request, res: Response) => {
// Get role by ID
router.get('/:id', async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid role ID' });
}
@@ -80,7 +81,8 @@ router.post('/', requireAuth, requireAdmin, async (req: Request, res: Response)
// Update role (admin only)
router.put('/:id', requireAuth, requireAdmin, async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid role ID' });
}
@@ -99,7 +101,8 @@ router.put('/:id', requireAuth, requireAdmin, async (req: Request, res: Response
// Delete role (admin only)
router.delete('/:id', requireAuth, requireAdmin, async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid role ID' });
}
@@ -120,7 +123,8 @@ router.delete('/:id', requireAuth, requireAdmin, async (req: Request, res: Respo
// Get role permissions
router.get('/:id/permissions', async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid role ID' });
}
@@ -136,7 +140,8 @@ router.get('/:id/permissions', async (req: Request, res: Response) => {
// Assign permission to role (admin only)
router.post('/:id/permissions', requireAuth, requireAdmin, async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid role ID' });
}
@@ -162,8 +167,10 @@ router.post('/:id/permissions', requireAuth, requireAdmin, async (req: Request,
// Remove permission from role (admin only)
router.delete('/:id/permissions/:permissionId', requireAuth, requireAdmin, async (req: Request, res: Response) => {
try {
const roleId = parseInt(req.params.id, 10);
const permissionId = parseInt(req.params.permissionId, 10);
const roleIdParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const permissionIdParam = Array.isArray(req.params.permissionId) ? req.params.permissionId[0] : req.params.permissionId;
const roleId = parseInt(roleIdParam, 10);
const permissionId = parseInt(permissionIdParam, 10);
if (isNaN(roleId) || isNaN(permissionId)) {
return res.status(400).json({ error: 'Invalid role ID or permission ID' });
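The `Array.isArray` normalization repeated in each handler of this diff could be factored into a small helper. A sketch under the assumption that `req.params` values may be `string | string[] | undefined` (the shape the handlers above defend against); `firstParam`/`parseIdParam` are hypothetical names, distinct from the `getParamString` helper used elsewhere:

```typescript
// Collapse a possibly-array route parameter to its first value.
function firstParam(value: string | string[] | undefined): string | undefined {
  return Array.isArray(value) ? value[0] : value;
}

// Parse a numeric ID; null signals the caller to respond 400.
function parseIdParam(value: string | string[] | undefined): number | null {
  const raw = firstParam(value);
  const id = parseInt(raw ?? '', 10);
  return Number.isNaN(id) ? null : id;
}
```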


@@ -1,11 +1,10 @@
import { Router } from 'express';
import { OBJECT_TYPES, SCHEMA_GENERATED_AT, SCHEMA_OBJECT_TYPE_COUNT, SCHEMA_TOTAL_ATTRIBUTES } from '../generated/jira-schema.js';
import type { ObjectTypeDefinition, AttributeDefinition } from '../generated/jira-schema.js';
import { dataService } from '../services/dataService.js';
import { schemaCacheService } from '../services/schemaCacheService.js';
import { schemaSyncService } from '../services/SchemaSyncService.js';
import { schemaMappingService } from '../services/schemaMappingService.js';
import { logger } from '../services/logger.js';
import { jiraAssetsClient } from '../services/jiraAssetsClient.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import type { CMDBObjectTypeName } from '../generated/jira-types.js';
const router = Router();
@@ -13,125 +12,53 @@ const router = Router();
router.use(requireAuth);
router.use(requirePermission('search'));
// Extended types for API response
interface ObjectTypeWithLinks extends ObjectTypeDefinition {
incomingLinks: Array<{
fromType: string;
fromTypeName: string;
attributeName: string;
isMultiple: boolean;
}>;
outgoingLinks: Array<{
toType: string;
toTypeName: string;
attributeName: string;
isMultiple: boolean;
}>;
}
interface SchemaResponse {
metadata: {
generatedAt: string;
objectTypeCount: number;
totalAttributes: number;
};
objectTypes: Record<string, ObjectTypeWithLinks>;
cacheCounts?: Record<string, number>; // Cache counts by type name (from objectsByType)
jiraCounts?: Record<string, number>; // Actual counts from Jira Assets API
}
/**
* GET /api/schema
* Returns the complete Jira Assets schema with object types, attributes, and links
* Data is fetched from database (via cache service)
*/
router.get('/', async (req, res) => {
try {
// Build links between object types
const objectTypesWithLinks: Record<string, ObjectTypeWithLinks> = {};
// Get schema from cache (which fetches from database)
const schema = await schemaCacheService.getSchema();
// First pass: convert all object types
for (const [typeName, typeDef] of Object.entries(OBJECT_TYPES)) {
objectTypesWithLinks[typeName] = {
...typeDef,
incomingLinks: [],
outgoingLinks: [],
};
}
// Second pass: build link relationships
for (const [typeName, typeDef] of Object.entries(OBJECT_TYPES)) {
for (const attr of typeDef.attributes) {
if (attr.type === 'reference' && attr.referenceTypeName) {
// Add outgoing link from this type
objectTypesWithLinks[typeName].outgoingLinks.push({
toType: attr.referenceTypeName,
toTypeName: OBJECT_TYPES[attr.referenceTypeName]?.name || attr.referenceTypeName,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
// Add incoming link to the referenced type
if (objectTypesWithLinks[attr.referenceTypeName]) {
objectTypesWithLinks[attr.referenceTypeName].incomingLinks.push({
fromType: typeName,
fromTypeName: typeDef.name,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
}
}
}
}
// Get cache counts (objectsByType) if available
let cacheCounts: Record<string, number> | undefined;
try {
const cacheStatus = await dataService.getCacheStatus();
cacheCounts = cacheStatus.objectsByType;
} catch (err) {
logger.debug('Could not fetch cache counts for schema response', err);
// Continue without cache counts - not critical
}
// Fetch actual counts from Jira Assets for all object types
// This ensures the counts match exactly what's in Jira Assets
const jiraCounts: Record<string, number> = {};
const typeNames = Object.keys(OBJECT_TYPES) as CMDBObjectTypeName[];
// Optionally fetch Jira counts for comparison (can be slow, so make it optional)
let jiraCounts: Record<string, number> | undefined;
const includeJiraCounts = req.query.includeJiraCounts === 'true';
if (includeJiraCounts) {
const typeNames = Object.keys(schema.objectTypes);
logger.info(`Schema: Fetching object counts from Jira Assets for ${typeNames.length} object types...`);
// Fetch counts in parallel for better performance
jiraCounts = {};
// Fetch counts in parallel for better performance, using schema mappings
const countPromises = typeNames.map(async (typeName) => {
try {
const count = await jiraAssetsClient.getObjectCount(typeName);
jiraCounts[typeName] = count;
// Get schema ID for this type
const schemaId = await schemaMappingService.getSchemaId(typeName);
const count = await jiraAssetsClient.getObjectCount(typeName, schemaId);
jiraCounts![typeName] = count;
return { typeName, count };
} catch (error) {
logger.warn(`Schema: Failed to get count for ${typeName}`, error);
// Use 0 as fallback if API call fails
jiraCounts[typeName] = 0;
jiraCounts![typeName] = 0;
return { typeName, count: 0 };
}
});
await Promise.all(countPromises);
logger.info(`Schema: Fetched counts for ${Object.keys(jiraCounts).length} object types from Jira Assets`);
}
const response: SchemaResponse = {
metadata: {
generatedAt: SCHEMA_GENERATED_AT,
objectTypeCount: SCHEMA_OBJECT_TYPE_COUNT,
totalAttributes: SCHEMA_TOTAL_ATTRIBUTES,
},
objectTypes: objectTypesWithLinks,
cacheCounts,
const response = {
...schema,
jiraCounts,
};
res.json(response);
} catch (error) {
console.error('Failed to get schema:', error);
logger.error('Failed to get schema:', error);
res.status(500).json({ error: 'Failed to get schema' });
}
});
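The two-pass link derivation performed in `GET /api/schema` above — every `reference` attribute yields an outgoing link on the owning type and a mirrored incoming link on the referenced type — can be sketched with simplified types (field names here are illustrative, not the full `ObjectTypeWithLinks` shape):

```typescript
interface Attr { name: string; type: string; referenceTypeName?: string; isMultiple: boolean; }
interface TypeDef { name: string; attributes: Attr[]; }
interface Link { type: string; attributeName: string; isMultiple: boolean; }

function buildLinks(types: Record<string, TypeDef>) {
  const incoming: Record<string, Link[]> = {};
  const outgoing: Record<string, Link[]> = {};
  // First pass: give every type empty link arrays.
  for (const key of Object.keys(types)) { incoming[key] = []; outgoing[key] = []; }
  // Second pass: mirror each reference attribute in both directions.
  for (const [typeName, def] of Object.entries(types)) {
    for (const attr of def.attributes) {
      if (attr.type === 'reference' && attr.referenceTypeName) {
        outgoing[typeName].push({ type: attr.referenceTypeName, attributeName: attr.name, isMultiple: attr.isMultiple });
        // Guard: the referenced type may not exist in this schema snapshot.
        if (incoming[attr.referenceTypeName]) {
          incoming[attr.referenceTypeName].push({ type: typeName, attributeName: attr.name, isMultiple: attr.isMultiple });
        }
      }
    }
  }
  return { incoming, outgoing };
}
```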
@@ -140,60 +67,61 @@ router.get('/', async (req, res) => {
* GET /api/schema/object-type/:typeName
* Returns details for a specific object type
*/
router.get('/object-type/:typeName', (req, res) => {
router.get('/object-type/:typeName', async (req, res) => {
try {
const { typeName } = req.params;
const typeDef = OBJECT_TYPES[typeName];
// Get schema from cache
const schema = await schemaCacheService.getSchema();
const typeDef = schema.objectTypes[typeName];
if (!typeDef) {
return res.status(404).json({ error: `Object type '${typeName}' not found` });
}
// Build links for this specific type
const incomingLinks: Array<{
fromType: string;
fromTypeName: string;
attributeName: string;
isMultiple: boolean;
}> = [];
const outgoingLinks: Array<{
toType: string;
toTypeName: string;
attributeName: string;
isMultiple: boolean;
}> = [];
// Outgoing links from this type
for (const attr of typeDef.attributes) {
if (attr.type === 'reference' && attr.referenceTypeName) {
outgoingLinks.push({
toType: attr.referenceTypeName,
toTypeName: OBJECT_TYPES[attr.referenceTypeName]?.name || attr.referenceTypeName,
attributeName: attr.name,
isMultiple: attr.isMultiple,
res.json(typeDef);
} catch (error) {
logger.error('Failed to get object type:', error);
res.status(500).json({ error: 'Failed to get object type' });
}
});
}
}
// Incoming links from other types
for (const [otherTypeName, otherTypeDef] of Object.entries(OBJECT_TYPES)) {
for (const attr of otherTypeDef.attributes) {
if (attr.type === 'reference' && attr.referenceTypeName === typeName) {
incomingLinks.push({
fromType: otherTypeName,
fromTypeName: otherTypeDef.name,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
}
}
}
/**
* POST /api/schema/discover
* Manually trigger schema synchronization from Jira API
* Requires manage_settings permission
*/
router.post('/discover', requirePermission('manage_settings'), async (req, res) => {
try {
logger.info('Schema: Manual schema sync triggered');
const result = await schemaSyncService.syncAll();
schemaCacheService.invalidate(); // Invalidate cache
res.json({
...typeDef,
incomingLinks,
outgoingLinks,
message: 'Schema synchronization completed',
...result,
});
} catch (error) {
logger.error('Failed to sync schema:', error);
res.status(500).json({
error: 'Failed to sync schema',
details: error instanceof Error ? error.message : String(error),
});
}
});
/**
* GET /api/schema/sync-progress
* Get current sync progress
*/
router.get('/sync-progress', requirePermission('manage_settings'), async (req, res) => {
try {
const progress = schemaSyncService.getProgress();
res.json(progress);
} catch (error) {
logger.error('Failed to get sync progress:', error);
res.status(500).json({ error: 'Failed to get sync progress' });
}
});
export default router;


@@ -0,0 +1,209 @@
/**
* Schema Configuration routes
*
* Provides endpoints for configuring which object types should be synced.
*/
import { Router, Request, Response } from 'express';
import { logger } from '../services/logger.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import { schemaConfigurationService } from '../services/schemaConfigurationService.js';
import { schemaSyncService } from '../services/SchemaSyncService.js';
const router = Router();
// All routes require authentication and manage_settings permission
router.use(requireAuth);
router.use(requirePermission('manage_settings'));
/**
* GET /api/schema-configuration/stats
* Get configuration statistics
*/
router.get('/stats', async (req: Request, res: Response) => {
try {
const stats = await schemaConfigurationService.getConfigurationStats();
res.json(stats);
} catch (error) {
logger.error('Failed to get configuration stats', error);
res.status(500).json({ error: 'Failed to get configuration stats' });
}
});
/**
* POST /api/schema-configuration/discover
* Discover and store all schemas, object types, and attributes from Jira Assets
* Uses the unified SchemaSyncService
*/
router.post('/discover', async (req: Request, res: Response) => {
try {
logger.info('Schema configuration: Manual schema sync triggered');
const result = await schemaSyncService.syncAll();
if (result.schemasProcessed === 0) {
logger.warn('Schema configuration: Sync returned 0 schemas - this might indicate an API issue');
res.status(400).json({
message: 'No schemas found. Please check: 1) JIRA_SERVICE_ACCOUNT_TOKEN is configured correctly, 2) Jira Assets API is accessible, 3) API endpoint /rest/assets/1.0/objectschema/list is available',
...result,
});
return;
}
res.json({
message: 'Schema synchronization completed successfully',
schemasDiscovered: result.schemasProcessed,
objectTypesDiscovered: result.objectTypesProcessed,
attributesDiscovered: result.attributesProcessed,
...result,
});
} catch (error) {
logger.error('Failed to sync schemas and object types', error);
res.status(500).json({
error: 'Failed to sync schemas and object types',
details: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined
});
}
});
/**
* GET /api/schema-configuration/object-types
* Get all configured object types grouped by schema
*/
router.get('/object-types', async (req: Request, res: Response) => {
try {
const schemas = await schemaConfigurationService.getConfiguredObjectTypes();
res.json({ schemas });
} catch (error) {
logger.error('Failed to get configured object types', error);
res.status(500).json({ error: 'Failed to get configured object types' });
}
});
/**
* PATCH /api/schema-configuration/object-types/:id/enabled
* Enable or disable an object type
*/
router.patch('/object-types/:id/enabled', async (req: Request, res: Response) => {
try {
const id = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
if (!id) {
res.status(400).json({ error: 'id parameter required' });
return;
}
const { enabled } = req.body;
if (typeof enabled !== 'boolean') {
res.status(400).json({ error: 'enabled must be a boolean' });
return;
}
await schemaConfigurationService.setObjectTypeEnabled(id, enabled);
res.json({
status: 'success',
message: `Object type ${id} ${enabled ? 'enabled' : 'disabled'}`,
});
} catch (error) {
logger.error('Failed to update object type enabled status', error);
res.status(500).json({ error: 'Failed to update object type enabled status' });
}
});
/**
* PATCH /api/schema-configuration/object-types/bulk-enabled
* Bulk update enabled status for multiple object types
*/
router.patch('/object-types/bulk-enabled', async (req: Request, res: Response) => {
try {
const { updates } = req.body;
if (!Array.isArray(updates)) {
res.status(400).json({ error: 'updates must be an array' });
return;
}
// Validate each update
for (const update of updates) {
if (!update.id || typeof update.enabled !== 'boolean') {
res.status(400).json({ error: 'Each update must have id (string) and enabled (boolean)' });
return;
}
}
await schemaConfigurationService.bulkSetObjectTypesEnabled(updates);
res.json({
status: 'success',
message: `Updated ${updates.length} object types`,
});
} catch (error) {
logger.error('Failed to bulk update object types', error);
res.status(500).json({ error: 'Failed to bulk update object types' });
}
});
/**
* GET /api/schema-configuration/check
* Check if configuration is complete (at least one object type enabled)
*/
router.get('/check', async (req: Request, res: Response) => {
try {
const isComplete = await schemaConfigurationService.isConfigurationComplete();
const stats = await schemaConfigurationService.getConfigurationStats();
res.json({
isConfigured: isComplete,
stats,
});
} catch (error) {
logger.error('Failed to check configuration', error);
res.status(500).json({ error: 'Failed to check configuration' });
}
});
/**
* GET /api/schema-configuration/schemas
* Get all schemas with their search enabled status
*/
router.get('/schemas', async (req: Request, res: Response) => {
try {
const schemas = await schemaConfigurationService.getSchemas();
res.json({ schemas });
} catch (error) {
logger.error('Failed to get schemas', error);
res.status(500).json({ error: 'Failed to get schemas' });
}
});
/**
* PATCH /api/schema-configuration/schemas/:schemaId/search-enabled
* Enable or disable search for a schema
*/
router.patch('/schemas/:schemaId/search-enabled', async (req: Request, res: Response) => {
try {
const schemaId = req.params.schemaId;
const { searchEnabled } = req.body;
if (typeof searchEnabled !== 'boolean') {
res.status(400).json({ error: 'searchEnabled must be a boolean' });
return;
}
const schemaIdStr = Array.isArray(schemaId) ? schemaId[0] : schemaId;
if (!schemaIdStr) {
res.status(400).json({ error: 'schemaId parameter required' });
return;
}
await schemaConfigurationService.setSchemaSearchEnabled(schemaIdStr, searchEnabled);
res.json({
status: 'success',
message: `Schema ${schemaId} search ${searchEnabled ? 'enabled' : 'disabled'}`,
});
} catch (error) {
logger.error('Failed to update schema search enabled status', error);
res.status(500).json({ error: 'Failed to update schema search enabled status' });
}
});
export default router;
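The body validation in the bulk-enabled endpoint — an array where every entry carries an `id` and a boolean `enabled` — can be expressed as a standalone guard. A sketch with an assumed name, returning null where the route responds 400:

```typescript
interface BulkUpdate { id: string; enabled: boolean; }

function validateBulkUpdates(body: unknown): BulkUpdate[] | null {
  if (!Array.isArray(body)) return null;
  for (const u of body) {
    // Mirrors the route's check: a truthy id plus a real boolean flag.
    if (!u || typeof u.id !== 'string' || !u.id || typeof u.enabled !== 'boolean') return null;
  }
  return body as BulkUpdate[];
}
```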


@@ -42,7 +42,8 @@ router.get('/', async (req: Request, res: Response) => {
// Get user by ID
router.get('/:id', async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
@@ -99,7 +100,8 @@ router.post('/', async (req: Request, res: Response) => {
// Update user
router.put('/:id', async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
@@ -129,7 +131,8 @@ router.put('/:id', async (req: Request, res: Response) => {
// Delete user
router.delete('/:id', async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
@@ -154,7 +157,8 @@ router.delete('/:id', async (req: Request, res: Response) => {
// Send invitation email
router.post('/:id/invite', async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
@@ -174,7 +178,8 @@ router.post('/:id/invite', async (req: Request, res: Response) => {
// Assign role to user
router.post('/:id/roles', async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
@@ -200,8 +205,10 @@ router.post('/:id/roles', async (req: Request, res: Response) => {
// Remove role from user
router.delete('/:id/roles/:roleId', async (req: Request, res: Response) => {
try {
const userId = parseInt(req.params.id, 10);
const roleId = parseInt(req.params.roleId, 10);
const userIdParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const roleIdParam = Array.isArray(req.params.roleId) ? req.params.roleId[0] : req.params.roleId;
const userId = parseInt(userIdParam, 10);
const roleId = parseInt(roleIdParam, 10);
if (isNaN(userId) || isNaN(roleId)) {
return res.status(400).json({ error: 'Invalid user ID or role ID' });
@@ -231,7 +238,8 @@ router.delete('/:id/roles/:roleId', async (req: Request, res: Response) => {
// Activate/deactivate user
router.put('/:id/activate', async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
@@ -257,7 +265,8 @@ router.put('/:id/activate', async (req: Request, res: Response) => {
// Manually verify email address (admin action)
router.put('/:id/verify-email', async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
@@ -282,7 +291,8 @@ router.put('/:id/verify-email', async (req: Request, res: Response) => {
// Set password for user (admin action)
router.put('/:id/password', async (req: Request, res: Response) => {
try {
const id = parseInt(req.params.id, 10);
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
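The same `Array.isArray` normalization is now repeated in every handler above; a small helper could consolidate it. A minimal sketch (the `parseIdParam` name is hypothetical, not part of this diff):

```typescript
// Hypothetical helper consolidating the repeated param normalization above.
// Express can surface a route param as string | string[]; take the first entry.
function parseIdParam(param: string | string[] | undefined): number | null {
  const raw = Array.isArray(param) ? param[0] : param;
  const id = parseInt(raw ?? '', 10);
  return Number.isNaN(id) ? null : id;
}
```

Each handler would then reduce to `const id = parseIdParam(req.params.id);` followed by a single `if (id === null)` 400 response.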

View File

@@ -0,0 +1,395 @@
/**
* ObjectSyncService - Synchronizes objects from Jira Assets API
*
* Handles:
* - Full sync for enabled types
* - Incremental sync via jira_updated_at
* - Recursive reference processing
* - Reference-only caching for disabled types
*/
import { logger } from './logger.js';
import { jiraAssetsClient } from '../infrastructure/jira/JiraAssetsClient.js';
import { PayloadProcessor, type ProcessedObject } from './PayloadProcessor.js';
import { SchemaRepository } from '../repositories/SchemaRepository.js';
import { ObjectCacheRepository } from '../repositories/ObjectCacheRepository.js';
import type { ObjectEntry } from '../domain/jiraAssetsPayload.js';
import { SyncPolicy } from '../domain/syncPolicy.js';
export interface SyncResult {
objectsProcessed: number;
objectsCached: number;
relationsExtracted: number;
errors: Array<{ objectId: string; error: string }>;
}
export class ObjectSyncService {
private processor: PayloadProcessor;
constructor(
private schemaRepo: SchemaRepository,
private cacheRepo: ObjectCacheRepository
) {
this.processor = new PayloadProcessor(schemaRepo, cacheRepo);
}
/**
* Sync all objects of an enabled type
*/
async syncObjectType(
schemaId: string,
typeId: number,
typeName: string,
displayName: string
): Promise<SyncResult> {
// Validate schemaId before proceeding
if (!schemaId || schemaId.trim() === '') {
const errorMessage = `Schema ID is missing or empty for object type "${displayName}" (${typeName}). Please run schema sync to ensure all object types are properly associated with their schemas.`;
logger.error(`ObjectSyncService: ${errorMessage}`);
return {
objectsProcessed: 0,
objectsCached: 0,
relationsExtracted: 0,
errors: [{
objectId: typeName,
error: errorMessage,
}],
};
}
logger.info(`ObjectSyncService: Starting sync for ${displayName} (${typeName}) from schema ${schemaId}`);
const result: SyncResult = {
objectsProcessed: 0,
objectsCached: 0,
relationsExtracted: 0,
errors: [],
};
try {
// Get enabled types for sync policy
const enabledTypes = await this.schemaRepo.getEnabledObjectTypes();
const enabledTypeSet = new Set(enabledTypes.map(t => t.typeName));
// Fetch all objects of this type
const iql = `objectType = "${displayName}"`;
let page = 1;
let hasMore = true;
const pageSize = 40;
while (hasMore) {
let searchResult;
try {
searchResult = await jiraAssetsClient.searchObjects(iql, schemaId, {
page,
pageSize,
});
} catch (error) {
// Log detailed error information
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
const errorDetails = error instanceof Error && error.cause ? String(error.cause) : undefined;
logger.error(`ObjectSyncService: Failed to search objects for ${typeName}`, {
error: errorMessage,
details: errorDetails,
iql,
schemaId,
page,
});
// Add error to result and return early
result.errors.push({
objectId: typeName,
error: `Failed to fetch objects from Jira: ${errorMessage}. This could be due to network issues, incorrect Jira host URL, or authentication problems. Check backend logs for details.`,
});
// Return result with error instead of throwing (allows partial results to be returned)
return result;
}
if (searchResult.objectEntries.length === 0) {
break;
}
// Process payload recursively (extracts all referenced objects)
const processed = await this.processor.processPayload(
searchResult.objectEntries,
enabledTypeSet
);
// Cache all processed objects
const processedEntries = Array.from(processed.entries());
let cachedCount = 0;
let skippedCount = 0;
logger.info(`ObjectSyncService: Processing ${processedEntries.length} objects from payload (includes root + referenced objects). Root objects: ${searchResult.objectEntries.length}`);
// Group by type for logging
const objectsByType = new Map<string, number>();
for (const [, processedObj] of processedEntries) {
const objType = processedObj.typeName || processedObj.objectEntry.objectType?.name || 'Unknown';
objectsByType.set(objType, (objectsByType.get(objType) || 0) + 1);
}
logger.info(`ObjectSyncService: Objects by type: ${Array.from(objectsByType.entries()).map(([type, count]) => `${type}: ${count}`).join(', ')}`);
for (const [objectId, processedObj] of processedEntries) {
try {
// Cache the object; cacheProcessedObject falls back to a generic type name
// when none is resolved, so it should always succeed.
await this.cacheProcessedObject(processedObj, enabledTypeSet);
cachedCount++;
result.relationsExtracted += processedObj.objectEntry.attributes?.length || 0;
logger.debug(`ObjectSyncService: Successfully cached object ${processedObj.objectEntry.objectKey} (ID: ${objectId}, type: ${processedObj.typeName || processedObj.objectEntry.objectType?.name || 'fallback'})`);
} catch (error) {
logger.error(`ObjectSyncService: Failed to cache object ${objectId} (${processedObj.objectEntry.objectKey})`, error);
result.errors.push({
objectId,
error: error instanceof Error ? error.message : 'Unknown error',
});
skippedCount++;
}
}
result.objectsCached = cachedCount;
if (skippedCount > 0) {
logger.warn(`ObjectSyncService: Skipped ${skippedCount} objects (cache error) out of ${processedEntries.length} processed objects`);
}
logger.info(`ObjectSyncService: Cached ${cachedCount} objects, skipped ${skippedCount} objects out of ${processedEntries.length} total processed objects`);
result.objectsProcessed += searchResult.objectEntries.length;
hasMore = searchResult.hasMore;
page++;
}
logger.info(
`ObjectSyncService: Sync complete for ${displayName} - ${result.objectsProcessed} objects processed, ${result.objectsCached} cached, ${result.errors.length} errors`
);
} catch (error) {
logger.error(`ObjectSyncService: Failed to sync ${displayName}`, error);
result.errors.push({
objectId: typeName,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
return result;
}
/**
* Sync incremental updates (objects updated since timestamp)
* Note: This may not be supported on Jira Data Center
*/
async syncIncremental(
schemaId: string,
since: Date,
enabledTypes: Set<string>
): Promise<SyncResult> {
logger.info(`ObjectSyncService: Starting incremental sync since ${since.toISOString()}`);
const result: SyncResult = {
objectsProcessed: 0,
objectsCached: 0,
relationsExtracted: 0,
errors: [],
};
try {
// IQL for updated objects (may not work on Data Center)
const iql = `updated >= "${since.toISOString()}"`;
const searchResult = await jiraAssetsClient.searchObjects(iql, schemaId, {
page: 1,
pageSize: 100,
});
// Process all entries
const processed = await this.processor.processPayload(searchResult.objectEntries, enabledTypes);
// Cache all processed objects
for (const [objectId, processedObj] of processed.entries()) {
try {
await this.cacheProcessedObject(processedObj, enabledTypes);
result.objectsCached++;
} catch (error) {
logger.error(`ObjectSyncService: Failed to cache object ${objectId} in incremental sync`, error);
result.errors.push({
objectId,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
result.objectsProcessed = searchResult.objectEntries.length;
} catch (error) {
logger.error('ObjectSyncService: Incremental sync failed', error);
result.errors.push({
objectId: 'incremental',
error: error instanceof Error ? error.message : 'Unknown error',
});
}
return result;
}
/**
* Sync a single object (for refresh operations)
*/
async syncSingleObject(
objectId: string,
enabledTypes: Set<string>
): Promise<{ cached: boolean; error?: string }> {
try {
// Fetch object from Jira
const entry = await jiraAssetsClient.getObject(objectId);
if (!entry) {
return { cached: false, error: 'Object not found in Jira' };
}
// Process recursively
const processed = await this.processor.processPayload([entry], enabledTypes);
const processedObj = processed.get(String(entry.id));
if (!processedObj) {
return { cached: false, error: 'Failed to process object' };
}
// Cache object
await this.cacheProcessedObject(processedObj, enabledTypes);
return { cached: true };
} catch (error) {
logger.error(`ObjectSyncService: Failed to sync single object ${objectId}`, error);
return {
cached: false,
error: error instanceof Error ? error.message : 'Unknown error',
};
}
}
/**
* Cache a processed object to database
*/
private async cacheProcessedObject(
processed: ProcessedObject,
enabledTypes: Set<string>
): Promise<void> {
const { objectEntry, typeName, syncPolicy, shouldCacheAttributes } = processed;
// If typeName is not resolved, try to use Jira type name as fallback
// This allows referenced objects to be cached even if their type hasn't been discovered yet
let effectiveTypeName = typeName;
let isFallbackTypeName = false;
if (!effectiveTypeName) {
const jiraTypeId = objectEntry.objectType?.id;
const jiraTypeName = objectEntry.objectType?.name;
if (jiraTypeName) {
// Use Jira type name as fallback (will be stored in object_type_name)
// Generate a PascalCase type name from Jira display name
const { toPascalCase } = await import('./schemaUtils.js');
effectiveTypeName = toPascalCase(jiraTypeName) || jiraTypeName.replace(/[^a-zA-Z0-9]/g, '');
isFallbackTypeName = true;
logger.debug(`ObjectSyncService: Using fallback type name "${effectiveTypeName}" for object ${objectEntry.objectKey} (Jira type ID: ${jiraTypeId}, Jira name: "${jiraTypeName}"). This type needs to be discovered via schema discovery for proper attribute caching.`, {
objectKey: objectEntry.objectKey,
objectId: objectEntry.id,
jiraTypeId,
jiraTypeName,
fallbackTypeName: effectiveTypeName,
});
} else {
// No type name available at all - try to use a generic fallback
// This ensures referenced objects are always cached and queryable
const jiraTypeIdStr = jiraTypeId ? String(jiraTypeId) : 'unknown';
effectiveTypeName = `UnknownType_${jiraTypeIdStr}`;
isFallbackTypeName = true;
logger.warn(`ObjectSyncService: Using generic fallback type name "${effectiveTypeName}" for object ${objectEntry.objectKey} (ID: ${objectEntry.id}, Jira type ID: ${jiraTypeId || 'unknown'}). This object will be cached but may need schema discovery for proper attribute caching.`, {
objectKey: objectEntry.objectKey,
objectId: objectEntry.id,
jiraTypeId,
fallbackTypeName: effectiveTypeName,
hint: 'Run schema discovery to include all object types that are referenced by your synced objects.',
});
}
}
// Use effectiveTypeName for the rest of the function
const typeNameToUse = effectiveTypeName!;
// Normalize object (update processed with effective type name if needed)
let processedForNormalization = processed;
if (isFallbackTypeName) {
processedForNormalization = {
...processed,
typeName: typeNameToUse,
};
}
const normalized = await this.processor.normalizeObject(processedForNormalization);
// Access the database adapter to use transactions
const db = this.cacheRepo.db;
logger.debug(`ObjectSyncService: About to cache object ${objectEntry.objectKey} (ID: ${objectEntry.id}) with type "${typeNameToUse}" (fallback: ${isFallbackTypeName})`);
await db.transaction(async (txDb) => {
const txCacheRepo = new ObjectCacheRepository(txDb);
// Upsert object record (with effective type name)
logger.debug(`ObjectSyncService: Upserting object ${objectEntry.objectKey} (ID: ${objectEntry.id}) with type "${typeNameToUse}" (fallback: ${isFallbackTypeName})`);
await txCacheRepo.upsertObject({
...normalized.objectRecord,
objectTypeName: typeNameToUse,
});
// Handle attributes based on sync policy
// CRITICAL: Only replace attributes if attributes[] was present in API response
// For fallback type names, skip attribute caching (we don't have attribute definitions)
if (!isFallbackTypeName && (syncPolicy === SyncPolicy.ENABLED || syncPolicy === SyncPolicy.REFERENCE_ONLY) && shouldCacheAttributes) {
// Delete existing attributes (full replace)
await txCacheRepo.deleteAttributeValues(normalized.objectRecord.id);
// Insert new attributes
if (normalized.attributeValues.length > 0) {
await txCacheRepo.batchUpsertAttributeValues(
normalized.attributeValues.map(v => ({
...v,
objectId: normalized.objectRecord.id,
}))
);
}
// If attributes[] not present on shallow object, keep existing attributes (don't delete)
} else if (!isFallbackTypeName && (syncPolicy === SyncPolicy.ENABLED || syncPolicy === SyncPolicy.REFERENCE_ONLY)) {
// Cache object metadata even without attributes (reference-only)
// This allows basic object lookups for references
} else if (isFallbackTypeName) {
// For fallback type names, only cache object metadata (no attributes)
// Attributes will be cached once the type is properly discovered
logger.debug(`ObjectSyncService: Skipping attribute caching for object ${objectEntry.objectKey} with fallback type name "${typeNameToUse}". Attributes will be cached after schema discovery.`);
}
// Upsert relations
await txCacheRepo.deleteRelations(normalized.objectRecord.id);
for (const relation of normalized.relations) {
// Resolve target type name
const targetObj = await txCacheRepo.getObject(relation.targetId);
const targetType = targetObj?.objectTypeName || relation.targetType;
await txCacheRepo.upsertRelation({
sourceId: normalized.objectRecord.id,
targetId: relation.targetId,
attributeId: relation.attributeId,
sourceType: normalized.objectRecord.objectTypeName,
targetType,
});
}
});
}
}
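The paginated fetch loop in `syncObjectType` can be reduced to a small sketch; `fetchPage` below is a hypothetical stub standing in for `jiraAssetsClient.searchObjects`:

```typescript
// Sketch of the page loop: stop on an empty page or when hasMore is false.
interface Page { items: string[]; hasMore: boolean }

async function syncAllPages(fetchPage: (page: number) => Promise<Page>): Promise<string[]> {
  const all: string[] = [];
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const res = await fetchPage(page);
    if (res.items.length === 0) break; // empty page ends the loop early
    all.push(...res.items);
    hasMore = res.hasMore;
    page++;
  }
  return all;
}
```

The empty-page break guards against an API that keeps reporting `hasMore: true` past the last populated page.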

View File

@@ -0,0 +1,369 @@
/**
* PayloadProcessor - Recursive processing of Jira Assets API payloads
*
* Handles:
* - Recursive reference expansion (level2, level3, etc.)
* - Cycle detection with visited sets
* - Attribute replacement only when attributes[] present
* - Reference-only caching for disabled types
*/
import { logger } from './logger.js';
import type { ObjectEntry, ReferencedObject, ObjectAttribute, ObjectAttributeValue, ConfluenceValue } from '../domain/jiraAssetsPayload.js';
import { isReferenceValue, isSimpleValue, hasAttributes } from '../domain/jiraAssetsPayload.js';
import type { SyncPolicy } from '../domain/syncPolicy.js';
import { SyncPolicy as SyncPolicyEnum } from '../domain/syncPolicy.js';
import type { SchemaRepository } from '../repositories/SchemaRepository.js';
import type { ObjectCacheRepository } from '../repositories/ObjectCacheRepository.js';
import type { AttributeRecord } from '../repositories/SchemaRepository.js';
export interface ProcessedObject {
objectEntry: ObjectEntry;
typeName: string | null; // Resolved from objectType.id
syncPolicy: SyncPolicy;
shouldCacheAttributes: boolean; // true if attributes[] present
}
export class PayloadProcessor {
constructor(
private schemaRepo: SchemaRepository,
private cacheRepo: ObjectCacheRepository
) {}
/**
* Process a payload recursively, extracting all objects
*
* @param objectEntries - Root objects from API
* @param enabledTypes - Set of enabled type names for full sync
* @returns Map of objectId -> ProcessedObject (includes recursive references)
*/
async processPayload(
objectEntries: ObjectEntry[],
enabledTypes: Set<string>
): Promise<Map<string, ProcessedObject>> {
const processed = new Map<string, ProcessedObject>();
const visited = new Set<string>(); // objectId/objectKey for cycle detection
// Process root entries
for (const entry of objectEntries) {
await this.processEntryRecursive(entry, enabledTypes, processed, visited);
}
return processed;
}
/**
* Process a single entry recursively
*/
private async processEntryRecursive(
entry: ObjectEntry | ReferencedObject,
enabledTypes: Set<string>,
processed: Map<string, ProcessedObject>,
visited: Set<string>
): Promise<void> {
// Extract ID and key for cycle detection
const objectId = String(entry.id);
const objectKey = entry.objectKey;
// Check for cycles using a composite "id:key" marker
const visitedKey = `${objectId}:${objectKey}`;
if (visited.has(visitedKey)) {
logger.debug(`PayloadProcessor: Cycle detected for ${objectKey} (${objectId}), skipping`);
return;
}
visited.add(visitedKey);
// Resolve type name from Jira type ID
const typeName = await this.resolveTypeName(entry.objectType.id);
const syncPolicy = this.getSyncPolicy(typeName, enabledTypes);
// Determine if we should cache attributes
// CRITICAL: Only replace attributes if attributes[] array is present
const shouldCacheAttributes = hasAttributes(entry);
// Store processed object (always update if already exists to ensure latest data)
// Convert ReferencedObject to ObjectEntry format for storage
const objectEntry: ObjectEntry = {
id: entry.id,
objectKey: entry.objectKey,
label: entry.label,
objectType: entry.objectType,
created: entry.created,
updated: entry.updated,
hasAvatar: entry.hasAvatar,
timestamp: entry.timestamp,
attributes: hasAttributes(entry) ? entry.attributes : undefined,
};
processed.set(objectId, {
objectEntry,
typeName,
syncPolicy,
shouldCacheAttributes,
});
logger.debug(`PayloadProcessor: Added object ${objectEntry.objectKey} (ID: ${objectId}, Jira type: ${entry.objectType?.name}, resolved type: ${typeName || 'null'}) to processed map. Total processed: ${processed.size}`);
// Process recursive references if attributes are present
if (hasAttributes(entry)) {
logger.debug(`PayloadProcessor: Processing ${entry.attributes!.length} attributes for recursive references in object ${objectEntry.objectKey} (ID: ${objectId})`);
await this.processRecursiveReferences(
entry.attributes!,
enabledTypes,
processed,
visited
);
} else {
logger.debug(`PayloadProcessor: Object ${objectEntry.objectKey} (ID: ${objectId}) has no attributes array, skipping recursive processing`);
}
// Remove from visited set when done (allows same object in different contexts)
visited.delete(visitedKey);
}
/**
* Process recursive references from attributes
*/
private async processRecursiveReferences(
attributes: ObjectAttribute[],
enabledTypes: Set<string>,
processed: Map<string, ProcessedObject>,
visited: Set<string>
): Promise<void> {
for (const attr of attributes) {
for (const value of attr.objectAttributeValues) {
if (isReferenceValue(value)) {
const refObj = value.referencedObject;
// Process referenced object recursively
// This handles level2, level3, etc. expansion
await this.processEntryRecursive(refObj, enabledTypes, processed, visited);
}
}
}
}
/**
* Resolve type name from Jira type ID
*/
private async resolveTypeName(jiraTypeId: number): Promise<string | null> {
const objectType = await this.schemaRepo.getObjectTypeByJiraId(jiraTypeId);
if (!objectType) {
// Track missing type IDs for diagnostics
logger.debug(`PayloadProcessor: Jira type ID ${jiraTypeId} not found in object_types table. This type needs to be discovered via schema sync.`);
return null;
}
return objectType.typeName || null;
}
/**
* Get sync policy for a type
*/
private getSyncPolicy(typeName: string | null, enabledTypes: Set<string>): SyncPolicy {
if (!typeName) {
return SyncPolicyEnum.SKIP; // Unknown type - skip
}
if (enabledTypes.has(typeName)) {
return SyncPolicyEnum.ENABLED;
}
// Reference-only: cache minimal metadata for references
return SyncPolicyEnum.REFERENCE_ONLY;
}
/**
* Normalize an object entry to database format
* This converts ObjectEntry to EAV format
*/
async normalizeObject(
processed: ProcessedObject
): Promise<{
objectRecord: {
id: string;
objectKey: string;
objectTypeName: string;
label: string;
jiraUpdatedAt: string;
jiraCreatedAt: string;
};
attributeValues: Array<{
attributeId: number;
textValue?: string | null;
numberValue?: number | null;
booleanValue?: boolean | null;
dateValue?: string | null;
datetimeValue?: string | null;
referenceObjectId?: string | null;
referenceObjectKey?: string | null;
referenceObjectLabel?: string | null;
arrayIndex: number;
}>;
relations: Array<{
targetId: string;
attributeId: number;
targetType: string;
}>;
}> {
const { objectEntry, typeName } = processed;
if (!typeName) {
throw new Error(`Cannot normalize object ${objectEntry.objectKey}: type name not resolved`);
}
// Get attributes for this type
const attributeDefs = await this.schemaRepo.getAttributesForType(typeName);
const attrMap = new Map(attributeDefs.map(a => [a.jiraAttrId, a]));
// Extract object record
const objectRecord = {
id: String(objectEntry.id),
objectKey: objectEntry.objectKey,
objectTypeName: typeName,
label: objectEntry.label,
jiraUpdatedAt: objectEntry.updated,
jiraCreatedAt: objectEntry.created,
};
// Normalize attributes
const attributeValues: Array<{
attributeId: number;
textValue?: string | null;
numberValue?: number | null;
booleanValue?: boolean | null;
dateValue?: string | null;
datetimeValue?: string | null;
referenceObjectId?: string | null;
referenceObjectKey?: string | null;
referenceObjectLabel?: string | null;
arrayIndex: number;
}> = [];
const relations: Array<{
targetId: string;
attributeId: number;
targetType: string;
}> = [];
// Process attributes if present
if (hasAttributes(objectEntry) && objectEntry.attributes) {
for (const attr of objectEntry.attributes) {
const attrDef = attrMap.get(attr.objectTypeAttributeId);
if (!attrDef) {
logger.warn(`PayloadProcessor: Unknown attribute ID ${attr.objectTypeAttributeId} for type ${typeName}`);
continue;
}
// Process attribute values
for (let arrayIndex = 0; arrayIndex < attr.objectAttributeValues.length; arrayIndex++) {
const value = attr.objectAttributeValues[arrayIndex];
// Normalize based on value type
const normalizedValue = this.normalizeAttributeValue(value, attrDef, objectRecord.id, relations);
attributeValues.push({
attributeId: attrDef.id,
...normalizedValue,
arrayIndex: attrDef.isMultiple ? arrayIndex : 0,
});
}
}
}
return {
objectRecord,
attributeValues,
relations,
};
}
/**
* Normalize a single attribute value
*/
private normalizeAttributeValue(
value: ObjectAttributeValue,
attrDef: AttributeRecord,
sourceObjectId: string,
relations: Array<{ targetId: string; attributeId: number; targetType: string }>
): {
textValue?: string | null;
numberValue?: number | null;
booleanValue?: boolean | null;
dateValue?: string | null;
datetimeValue?: string | null;
referenceObjectId?: string | null;
referenceObjectKey?: string | null;
referenceObjectLabel?: string | null;
} {
// Handle reference values
if (isReferenceValue(value)) {
const ref = value.referencedObject;
const refId = String(ref.id);
// Extract relation
// Note: targetType will be resolved later from ref.objectType.id
relations.push({
targetId: refId,
attributeId: attrDef.id,
targetType: ref.objectType.name, // Will be resolved to typeName during store
});
return {
referenceObjectId: refId,
referenceObjectKey: ref.objectKey,
referenceObjectLabel: ref.label,
};
}
// Handle simple values
if (isSimpleValue(value)) {
const val = value.value;
switch (attrDef.attrType) {
case 'text':
case 'textarea':
case 'url':
case 'email':
case 'select':
case 'user':
case 'status':
return { textValue: String(val) };
case 'integer':
return { numberValue: typeof val === 'number' ? val : parseInt(String(val), 10) };
case 'float':
return { numberValue: typeof val === 'number' ? val : parseFloat(String(val)) };
case 'boolean':
return { booleanValue: Boolean(val) };
case 'date':
return { dateValue: String(val) };
case 'datetime':
return { datetimeValue: String(val) };
default:
return { textValue: String(val) };
}
}
// Handle status values
if ('status' in value && value.status) {
return { textValue: value.status.name };
}
// Handle Confluence values
if ('confluencePage' in value && value.confluencePage) {
const confluenceVal = value as ConfluenceValue;
return { textValue: confluenceVal.confluencePage.url || confluenceVal.displayValue };
}
// Handle user values
if ('user' in value && value.user) {
return { textValue: value.user.displayName || value.user.name || value.displayValue };
}
// Fallback to displayValue
return { textValue: value.displayValue || null };
}
}
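The path-based cycle guard in `processEntryRecursive` (add to `visited` on entry, delete on exit) can be illustrated in isolation; `RefNode` and `collect` are hypothetical stand-ins for the object-entry graph:

```typescript
// Path-based cycle detection: a node is skipped only while it is on the
// current recursion path, mirroring the visited.add/visited.delete pairing.
interface RefNode { key: string; refs: RefNode[] }

function collect(node: RefNode, visited: Set<string>, out: string[]): void {
  if (visited.has(node.key)) return; // cycle: node already on this path
  visited.add(node.key);
  out.push(node.key);
  for (const ref of node.refs) collect(ref, visited, out);
  visited.delete(node.key); // allow the same node in other branches
}
```

Deleting on exit means a shared reference reached via two different branches is processed in both contexts, which is what the "allows same object in different contexts" comment describes.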

View File

@@ -0,0 +1,240 @@
/**
* QueryService - Universal query builder (DB → TypeScript)
*
* Reconstructs TypeScript objects from normalized EAV database.
*/
import { logger } from './logger.js';
import { SchemaRepository } from '../repositories/SchemaRepository.js';
import { ObjectCacheRepository } from '../repositories/ObjectCacheRepository.js';
import type { CMDBObject, CMDBObjectTypeName } from '../generated/jira-types.js';
import type { AttributeRecord } from '../repositories/SchemaRepository.js';
export interface QueryOptions {
limit?: number;
offset?: number;
orderBy?: string;
orderDir?: 'ASC' | 'DESC';
searchTerm?: string;
}
export class QueryService {
constructor(
private schemaRepo: SchemaRepository,
private cacheRepo: ObjectCacheRepository
) {}
/**
* Get a single object by ID
*/
async getObject<T extends CMDBObject>(
typeName: CMDBObjectTypeName,
id: string
): Promise<T | null> {
// Get object record
const objRecord = await this.cacheRepo.getObject(id);
if (!objRecord || objRecord.objectTypeName !== typeName) {
return null;
}
// Reconstruct object from EAV data
return await this.reconstructObject<T>(objRecord);
}
/**
* Get objects of a type with filters
*/
async getObjects<T extends CMDBObject>(
typeName: CMDBObjectTypeName,
options: QueryOptions = {}
): Promise<T[]> {
const { limit = 1000, offset = 0 } = options;
logger.debug(`QueryService.getObjects: Querying for typeName="${typeName}" with limit=${limit}, offset=${offset}`);
// Get object records
const objRecords = await this.cacheRepo.getObjectsByType(typeName, { limit, offset });
logger.debug(`QueryService.getObjects: Found ${objRecords.length} object records for typeName="${typeName}"`);
// Check if no records found - might be a type name mismatch
if (objRecords.length === 0) {
// Diagnostic: Check what object_type_name values actually exist in the database
const db = this.cacheRepo.db;
try {
const allTypeNames = await db.query<{ object_type_name: string; count: number }>(
`SELECT object_type_name, COUNT(*) as count
FROM objects
GROUP BY object_type_name
ORDER BY count DESC
LIMIT 20`
);
logger.warn(`QueryService.getObjects: No objects found for typeName="${typeName}". Available object_type_name values in database:`, {
requestedType: typeName,
availableTypes: allTypeNames.map(t => ({ typeName: t.object_type_name, count: t.count })),
totalTypes: allTypeNames.length,
hint: 'The typeName might not match the object_type_name stored in the database. Check for case sensitivity or naming differences.',
});
} catch (error) {
logger.debug('QueryService.getObjects: Failed to query available type names', error);
}
}
// Reconstruct all objects
const objects = await Promise.all(
objRecords.map(record => this.reconstructObject<T>(record))
);
// Filter out nulls and type assert
const validObjects = objects.filter(obj => obj !== null && obj !== undefined);
logger.debug(`QueryService.getObjects: Successfully reconstructed ${validObjects.length}/${objRecords.length} objects for typeName="${typeName}"`);
return validObjects as T[];
}
/**
* Count objects of a type
*/
async countObjects(typeName: CMDBObjectTypeName): Promise<number> {
return await this.cacheRepo.countObjectsByType(typeName);
}
/**
* Search objects by label
*/
async searchByLabel<T extends CMDBObject>(
typeName: CMDBObjectTypeName,
searchTerm: string,
options: QueryOptions = {}
): Promise<T[]> {
const { limit = 100, offset = 0 } = options;
// Get object records with label filter
const objRecords = await this.cacheRepo.db.query<{
id: string;
objectKey: string;
objectTypeName: string;
label: string;
jiraUpdatedAt: string | null;
jiraCreatedAt: string | null;
cachedAt: string;
}>(
// Quote camelCase aliases so PostgreSQL does not fold them to lowercase
`SELECT id, object_key AS "objectKey", object_type_name AS "objectTypeName", label,
jira_updated_at AS "jiraUpdatedAt", jira_created_at AS "jiraCreatedAt", cached_at AS "cachedAt"
FROM objects
WHERE object_type_name = ? AND LOWER(label) LIKE LOWER(?)
ORDER BY label ASC
LIMIT ? OFFSET ?`,
[typeName, `%${searchTerm}%`, limit, offset]
);
// Reconstruct objects
const objects = await Promise.all(
objRecords.map(record => this.reconstructObject<T>(record))
);
// Filter out nulls and type assert
const validObjects = objects.filter(obj => obj !== null && obj !== undefined);
return validObjects as T[];
}
/**
* Reconstruct a TypeScript object from database records
*/
private async reconstructObject<T extends CMDBObject>(
objRecord: {
id: string;
objectKey: string;
objectTypeName: string;
label: string;
jiraUpdatedAt: string | null;
jiraCreatedAt: string | null;
}
): Promise<T | null> {
// Get attribute definitions for this type
const attributeDefs = await this.schemaRepo.getAttributesForType(objRecord.objectTypeName);
const attrMap = new Map(attributeDefs.map(a => [a.id, a]));
// Get attribute values
const attributeValues = await this.cacheRepo.getAttributeValues(objRecord.id);
// Build attribute map: fieldName -> value(s)
const attributes: Record<string, unknown> = {};
for (const attrValue of attributeValues) {
const attrDef = attrMap.get(attrValue.attributeId);
if (!attrDef) {
logger.warn(`QueryService: Unknown attribute ID ${attrValue.attributeId} for object ${objRecord.id}`);
continue;
}
// Extract value based on type
let value: unknown = null;
switch (attrDef.attrType) {
case 'reference':
if (attrValue.referenceObjectId) {
value = {
objectId: attrValue.referenceObjectId,
objectKey: attrValue.referenceObjectKey || '',
label: attrValue.referenceObjectLabel || '',
};
}
break;
case 'text':
case 'textarea':
case 'url':
case 'email':
case 'select':
case 'user':
case 'status':
value = attrValue.textValue;
break;
case 'integer':
case 'float':
value = attrValue.numberValue;
break;
case 'boolean':
value = attrValue.booleanValue;
break;
case 'date':
value = attrValue.dateValue;
break;
case 'datetime':
value = attrValue.datetimeValue;
break;
default:
value = attrValue.textValue;
}
// Handle arrays vs single values
if (attrDef.isMultiple) {
if (!attributes[attrDef.fieldName]) {
attributes[attrDef.fieldName] = [];
}
(attributes[attrDef.fieldName] as unknown[]).push(value);
} else {
attributes[attrDef.fieldName] = value;
}
}
// Build CMDBObject
const result: Record<string, unknown> = {
id: objRecord.id,
objectKey: objRecord.objectKey,
label: objRecord.label,
_objectType: objRecord.objectTypeName,
_jiraUpdatedAt: objRecord.jiraUpdatedAt || new Date().toISOString(),
_jiraCreatedAt: objRecord.jiraCreatedAt || new Date().toISOString(),
...attributes,
};
return result as T;
}
}
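The core of `reconstructObject` — mapping EAV rows back to typed fields and accumulating arrays for multi-valued attributes — can be sketched with hypothetical in-memory rows in place of the repository queries:

```typescript
// EAV rows -> plain object; unknown attribute IDs are skipped, multi-valued
// attributes accumulate into arrays (mirroring the isMultiple branch).
interface AttrDef { id: number; fieldName: string; isMultiple: boolean }
interface AttrRow { attributeId: number; textValue: string | null }

function reconstruct(defs: AttrDef[], rows: AttrRow[]): Record<string, unknown> {
  const byId = new Map(defs.map(d => [d.id, d]));
  const obj: Record<string, unknown> = {};
  for (const row of rows) {
    const def = byId.get(row.attributeId);
    if (!def) continue; // unknown attribute: log-and-skip in the real service
    if (def.isMultiple) {
      const arr = (obj[def.fieldName] as unknown[] | undefined) ?? [];
      arr.push(row.textValue);
      obj[def.fieldName] = arr;
    } else {
      obj[def.fieldName] = row.textValue;
    }
  }
  return obj;
}
```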

View File

@@ -0,0 +1,75 @@
/**
* RefreshService - Handles force-refresh-on-read with deduping/locks
*
* Prevents duplicate refresh operations for the same object.
*/
import { logger } from './logger.js';
import { ObjectSyncService } from './ObjectSyncService.js';
export class RefreshService {
private refreshLocks: Map<string, Promise<void>> = new Map();
private readonly LOCK_TIMEOUT_MS = 30000; // 30 seconds
constructor(private syncService: ObjectSyncService) {}
/**
* Refresh a single object with deduplication
* If another refresh is already in progress, wait for it
*/
async refreshObject(
objectId: string,
enabledTypes: Set<string>
): Promise<{ success: boolean; error?: string }> {
// Check if refresh already in progress
const existingLock = this.refreshLocks.get(objectId);
if (existingLock) {
logger.debug(`RefreshService: Refresh already in progress for ${objectId}, waiting...`);
try {
await existingLock;
return { success: true }; // Previous refresh succeeded
} catch (error) {
logger.warn(`RefreshService: Previous refresh failed for ${objectId}, retrying...`, error);
// Continue to new refresh
}
}
// Create new refresh promise
const refreshPromise = this.doRefresh(objectId, enabledTypes);
this.refreshLocks.set(objectId, refreshPromise);
try {
// Add timeout to prevent locks from hanging forever
const timeoutPromise = new Promise<void>((_, reject) => {
setTimeout(() => reject(new Error('Refresh timeout')), this.LOCK_TIMEOUT_MS);
});
await Promise.race([refreshPromise, timeoutPromise]);
return { success: true };
} catch (error) {
logger.error(`RefreshService: Failed to refresh object ${objectId}`, error);
return {
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
};
} finally {
// Clean up lock after a delay (allow concurrent reads)
setTimeout(() => {
this.refreshLocks.delete(objectId);
}, 1000);
}
}
/**
* Perform the actual refresh
*/
private async doRefresh(objectId: string, enabledTypes: Set<string>): Promise<void> {
const result = await this.syncService.syncSingleObject(objectId, enabledTypes);
if (!result.cached) {
throw new Error(result.error || 'Failed to cache object');
}
}
}
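The dedup-lock pattern in `refreshObject` can be sketched in isolation (a hypothetical standalone helper, not the actual class): concurrent callers for the same key share one in-flight promise instead of starting duplicate work.

```typescript
class InFlightDedupe<T> {
  private locks = new Map<string, Promise<T>>();

  run(key: string, work: () => Promise<T>): Promise<T> {
    const existing = this.locks.get(key);
    if (existing) return existing; // piggyback on the in-flight operation

    // Register the promise synchronously so later callers find it,
    // and clear the lock once it settles.
    const promise = work().finally(() => this.locks.delete(key));
    this.locks.set(key, promise);
    return promise;
  }
}

// Usage: three concurrent "refreshes" of the same object run the work once.
let calls = 0;
const dedupe = new InFlightDedupe<number>();
const work = async () => { calls++; return 42; };

const results = await Promise.all([
  dedupe.run('obj-1', work),
  dedupe.run('obj-1', work),
  dedupe.run('obj-1', work),
]);
console.log(calls, results); // 1 [ 42, 42, 42 ]
```

RefreshService adds two refinements on top of this shape: a `Promise.race` against a 30 s timeout so a hung refresh cannot pin the lock forever, and a 1 s delayed cleanup so reads arriving just after completion still reuse the result.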


@@ -0,0 +1,817 @@
/**
* Schema Sync Service
*
* Unified service for synchronizing Jira Assets schema configuration to local database.
* Implements the complete sync flow as specified in the refactor plan.
*/
import { logger } from './logger.js';
import { getDatabaseAdapter } from './database/singleton.js';
import type { DatabaseAdapter } from './database/interface.js';
import { config } from '../config/env.js';
import { toCamelCase, toPascalCase, mapJiraType, determineSyncPriority } from './schemaUtils.js';
// =============================================================================
// Types
// =============================================================================
interface JiraSchema {
id: number;
name: string;
objectSchemaKey?: string;
status?: string;
description?: string;
created?: string;
updated?: string;
objectCount?: number;
objectTypeCount?: number;
}
interface JiraObjectType {
id: number;
name: string;
type?: number;
description?: string;
icon?: {
id: number;
name: string;
url16?: string;
url48?: string;
};
position?: number;
created?: string;
updated?: string;
objectCount?: number;
parentObjectTypeId?: number | null;
objectSchemaId: number;
inherited?: boolean;
abstractObjectType?: boolean;
}
interface JiraAttribute {
id: number;
objectType?: {
id: number;
name: string;
};
name: string;
label?: boolean;
type: number;
description?: string;
defaultType?: {
id: number;
name: string;
} | null;
typeValue?: string | null;
typeValueMulti?: string[];
additionalValue?: string | null;
referenceType?: {
id: number;
name: string;
description?: string;
color?: string;
url16?: string | null;
removable?: boolean;
objectSchemaId?: number;
} | null;
referenceObjectTypeId?: number | null;
referenceObjectType?: {
id: number;
name: string;
objectSchemaId?: number;
} | null;
editable?: boolean;
system?: boolean;
sortable?: boolean;
summable?: boolean;
indexed?: boolean;
minimumCardinality?: number;
maximumCardinality?: number;
suffix?: string;
removable?: boolean;
hidden?: boolean;
includeChildObjectTypes?: boolean;
uniqueAttribute?: boolean;
regexValidation?: string | null;
iql?: string | null;
options?: string;
position?: number;
}
export interface SyncResult {
success: boolean;
schemasProcessed: number;
objectTypesProcessed: number;
attributesProcessed: number;
schemasDeleted: number;
objectTypesDeleted: number;
attributesDeleted: number;
errors: SyncError[];
duration: number; // milliseconds
}
export interface SyncError {
type: 'schema' | 'objectType' | 'attribute';
id: string | number;
message: string;
}
export interface SyncProgress {
status: 'idle' | 'running' | 'completed' | 'failed';
currentSchema?: string;
currentObjectType?: string;
schemasTotal: number;
schemasCompleted: number;
objectTypesTotal: number;
objectTypesCompleted: number;
startedAt?: Date;
estimatedCompletion?: Date;
}
// =============================================================================
// SchemaSyncService Implementation
// =============================================================================
export class SchemaSyncService {
private db: DatabaseAdapter;
private isPostgres: boolean;
private baseUrl: string;
private progress: SyncProgress = {
status: 'idle',
schemasTotal: 0,
schemasCompleted: 0,
objectTypesTotal: 0,
objectTypesCompleted: 0,
};
// Rate limiting configuration
private readonly RATE_LIMIT_DELAY_MS = 150; // 150ms between requests
private readonly MAX_RETRIES = 3;
private readonly RETRY_DELAY_MS = 1000;
constructor() {
this.db = getDatabaseAdapter();
this.isPostgres = (this.db.isPostgres === true);
this.baseUrl = `${config.jiraHost}/rest/assets/1.0`;
}
/**
* Get authentication headers for API requests
*/
private getHeaders(): Record<string, string> {
const token = config.jiraServiceAccountToken;
if (!token) {
throw new Error('JIRA_SERVICE_ACCOUNT_TOKEN not configured. Schema sync requires a service account token.');
}
return {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json',
'Accept': 'application/json',
};
}
/**
* Rate limiting delay
*/
private delay(ms: number): Promise<void> {
return new Promise(resolve => setTimeout(resolve, ms));
}
/**
* Fetch with rate limiting and retry logic
*/
private async fetchWithRateLimit<T>(
url: string,
retries: number = this.MAX_RETRIES
): Promise<T> {
await this.delay(this.RATE_LIMIT_DELAY_MS);
try {
const response = await fetch(url, {
headers: this.getHeaders(),
});
// Handle rate limiting (429). Note: a 429 deliberately does not consume
// the retry budget; the wait is bounded by the server's Retry-After header.
if (response.status === 429) {
const retryAfter = parseInt(response.headers.get('Retry-After') || '5', 10);
logger.warn(`SchemaSync: Rate limited, waiting ${retryAfter}s before retry`);
await this.delay(retryAfter * 1000);
return this.fetchWithRateLimit<T>(url, retries);
}
// Handle server errors with retry
if (response.status >= 500 && retries > 0) {
logger.warn(`SchemaSync: Server error ${response.status}, retrying (${retries} attempts left)`);
await this.delay(this.RETRY_DELAY_MS);
return this.fetchWithRateLimit<T>(url, retries - 1);
}
if (!response.ok) {
const text = await response.text();
throw new Error(`HTTP ${response.status}: ${text}`);
}
return await response.json() as T;
} catch (error) {
if (retries > 0 && error instanceof Error && !error.message.includes('HTTP')) {
logger.warn(`SchemaSync: Network error, retrying (${retries} attempts left)`, error);
await this.delay(this.RETRY_DELAY_MS);
return this.fetchWithRateLimit<T>(url, retries - 1);
}
throw error;
}
}
/**
* Fetch all schemas from Jira
*/
private async fetchSchemas(): Promise<JiraSchema[]> {
const url = `${this.baseUrl}/objectschema/list`;
logger.debug(`SchemaSync: Fetching schemas from ${url}`);
const result = await this.fetchWithRateLimit<{ objectschemas?: JiraSchema[] } | JiraSchema[]>(url);
// Handle different response formats
if (Array.isArray(result)) {
return result;
} else if (result && typeof result === 'object' && 'objectschemas' in result) {
return result.objectschemas || [];
}
logger.warn('SchemaSync: Unexpected schema list response format', result);
return [];
}
/**
* Fetch schema details
*/
private async fetchSchemaDetails(schemaId: number): Promise<JiraSchema> {
const url = `${this.baseUrl}/objectschema/${schemaId}`;
logger.debug(`SchemaSync: Fetching schema details for ${schemaId}`);
return await this.fetchWithRateLimit<JiraSchema>(url);
}
/**
* Fetch all object types for a schema (flat list)
*/
private async fetchObjectTypes(schemaId: number): Promise<JiraObjectType[]> {
const url = `${this.baseUrl}/objectschema/${schemaId}/objecttypes/flat`;
logger.debug(`SchemaSync: Fetching object types for schema ${schemaId}`);
try {
const result = await this.fetchWithRateLimit<JiraObjectType[]>(url);
return Array.isArray(result) ? result : [];
} catch (error) {
// Fallback to regular endpoint if flat endpoint fails
logger.warn(`SchemaSync: Flat endpoint failed, trying regular endpoint`, error);
const fallbackUrl = `${this.baseUrl}/objectschema/${schemaId}/objecttypes`;
const fallbackResult = await this.fetchWithRateLimit<{ objectTypes?: JiraObjectType[] } | JiraObjectType[]>(fallbackUrl);
if (Array.isArray(fallbackResult)) {
return fallbackResult;
} else if (fallbackResult && typeof fallbackResult === 'object' && 'objectTypes' in fallbackResult) {
return fallbackResult.objectTypes || [];
}
return [];
}
}
/**
* Fetch object type details
*/
private async fetchObjectTypeDetails(typeId: number): Promise<JiraObjectType> {
const url = `${this.baseUrl}/objecttype/${typeId}`;
logger.debug(`SchemaSync: Fetching object type details for ${typeId}`);
return await this.fetchWithRateLimit<JiraObjectType>(url);
}
/**
* Fetch attributes for an object type
*/
private async fetchAttributes(typeId: number): Promise<JiraAttribute[]> {
const url = `${this.baseUrl}/objecttype/${typeId}/attributes`;
logger.debug(`SchemaSync: Fetching attributes for object type ${typeId}`);
try {
const result = await this.fetchWithRateLimit<JiraAttribute[]>(url);
return Array.isArray(result) ? result : [];
} catch (error) {
logger.warn(`SchemaSync: Failed to fetch attributes for type ${typeId}`, error);
return [];
}
}
/**
* Parse Jira attribute to database format
*/
private parseAttribute(
attr: JiraAttribute,
allTypeConfigs: Map<number, { name: string; typeName: string }>
): {
jiraId: number;
name: string;
fieldName: string;
type: string;
isMultiple: boolean;
isEditable: boolean;
isRequired: boolean;
isSystem: boolean;
referenceTypeName?: string;
description?: string;
// Additional fields from plan
label?: boolean;
sortable?: boolean;
summable?: boolean;
indexed?: boolean;
suffix?: string;
removable?: boolean;
hidden?: boolean;
includeChildObjectTypes?: boolean;
uniqueAttribute?: boolean;
regexValidation?: string | null;
iql?: string | null;
options?: string;
position?: number;
} {
const typeId = attr.type || attr.defaultType?.id || 0;
let type = mapJiraType(typeId);
const isMultiple = (attr.maximumCardinality ?? 1) > 1 || attr.maximumCardinality === -1;
const isEditable = attr.editable !== false && !attr.hidden;
const isRequired = (attr.minimumCardinality ?? 0) > 0;
const isSystem = attr.system === true;
// CRITICAL: Jira sometimes returns type=1 (integer) for reference attributes!
// The presence of referenceObjectTypeId is the true indicator of a reference type.
const refTypeId = attr.referenceObjectTypeId || attr.referenceObjectType?.id || attr.referenceType?.id;
if (refTypeId) {
type = 'reference';
}
const result: ReturnType<SchemaSyncService['parseAttribute']> = {
jiraId: attr.id,
name: attr.name,
fieldName: toCamelCase(attr.name),
type,
isMultiple,
isEditable,
isRequired,
isSystem,
description: attr.description,
label: attr.label,
sortable: attr.sortable,
summable: attr.summable,
indexed: attr.indexed,
suffix: attr.suffix,
removable: attr.removable,
hidden: attr.hidden,
includeChildObjectTypes: attr.includeChildObjectTypes,
uniqueAttribute: attr.uniqueAttribute,
regexValidation: attr.regexValidation,
iql: attr.iql,
options: attr.options,
position: attr.position,
};
// Handle reference types - add reference metadata
if (type === 'reference' && refTypeId) {
const refConfig = allTypeConfigs.get(refTypeId);
result.referenceTypeName = refConfig?.typeName ||
attr.referenceObjectType?.name ||
attr.referenceType?.name ||
`Type${refTypeId}`;
}
return result;
}
/**
* Sync all schemas and their complete structure
*/
async syncAll(): Promise<SyncResult> {
const startTime = Date.now();
const errors: SyncError[] = [];
this.progress = {
status: 'running',
schemasTotal: 0,
schemasCompleted: 0,
objectTypesTotal: 0,
objectTypesCompleted: 0,
startedAt: new Date(),
};
try {
logger.info('SchemaSync: Starting full schema synchronization...');
// Step 1: Fetch all schemas
const schemas = await this.fetchSchemas();
this.progress.schemasTotal = schemas.length;
logger.info(`SchemaSync: Found ${schemas.length} schemas to sync`);
if (schemas.length === 0) {
throw new Error('No schemas found in Jira Assets');
}
// Track Jira IDs for cleanup
const jiraSchemaIds = new Set<string>();
const jiraObjectTypeIds = new Map<string, Set<number>>(); // schemaId -> Set<typeId>
const jiraAttributeIds = new Map<string, Set<number>>(); // typeName -> Set<attrId>
let schemasProcessed = 0;
let objectTypesProcessed = 0;
let attributesProcessed = 0;
let schemasDeleted = 0;
let objectTypesDeleted = 0;
let attributesDeleted = 0;
await this.db.transaction(async (txDb) => {
// Step 2: Process each schema
for (const schema of schemas) {
try {
this.progress.currentSchema = schema.name;
const schemaIdStr = schema.id.toString();
jiraSchemaIds.add(schemaIdStr);
// Fetch schema details
let schemaDetails: JiraSchema;
try {
schemaDetails = await this.fetchSchemaDetails(schema.id);
} catch (error) {
logger.warn(`SchemaSync: Failed to fetch details for schema ${schema.id}, using list data`, error);
schemaDetails = schema;
}
const now = new Date().toISOString();
const objectSchemaKey = schemaDetails.objectSchemaKey || schemaDetails.name || schemaIdStr;
// Upsert schema. The same SQL works on both PostgreSQL and SQLite: the
// adapter translates placeholders, and both engines support
// ON CONFLICT ... DO UPDATE SET with excluded.*, so no branch is needed here.
await txDb.execute(`
INSERT INTO schemas (jira_schema_id, name, object_schema_key, status, description, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(jira_schema_id) DO UPDATE SET
name = excluded.name,
object_schema_key = excluded.object_schema_key,
status = excluded.status,
description = excluded.description,
updated_at = excluded.updated_at
`, [
schemaIdStr,
schemaDetails.name,
objectSchemaKey,
schemaDetails.status || null,
schemaDetails.description || null,
now,
now,
]);
// Get schema FK
const schemaRow = await txDb.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[schemaIdStr]
);
if (!schemaRow) {
throw new Error(`Failed to get schema FK for ${schemaIdStr}`);
}
const schemaIdFk = schemaRow.id;
// Step 3: Fetch all object types for this schema
const objectTypes = await this.fetchObjectTypes(schema.id);
logger.info(`SchemaSync: Found ${objectTypes.length} object types in schema ${schema.name}`);
const typeConfigs = new Map<number, { name: string; typeName: string }>();
jiraObjectTypeIds.set(schemaIdStr, new Set());
// Build type name mapping
for (const objType of objectTypes) {
const typeName = toPascalCase(objType.name);
typeConfigs.set(objType.id, {
name: objType.name,
typeName,
});
jiraObjectTypeIds.get(schemaIdStr)!.add(objType.id);
}
// Step 4: Store object types
for (const objType of objectTypes) {
try {
this.progress.currentObjectType = objType.name;
const typeName = toPascalCase(objType.name);
const objectCount = objType.objectCount || 0;
const syncPriority = determineSyncPriority(objType.name, objectCount);
// Upsert object type
if (txDb.isPostgres) {
await txDb.execute(`
INSERT INTO object_types (
schema_id, jira_type_id, type_name, display_name, description,
sync_priority, object_count, enabled, discovered_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(schema_id, jira_type_id) DO UPDATE SET
display_name = excluded.display_name,
description = excluded.description,
sync_priority = excluded.sync_priority,
object_count = excluded.object_count,
updated_at = excluded.updated_at
`, [
schemaIdFk,
objType.id,
typeName,
objType.name,
objType.description || null,
syncPriority,
objectCount,
false, // Default: disabled
now,
now,
]);
} else {
await txDb.execute(`
INSERT INTO object_types (
schema_id, jira_type_id, type_name, display_name, description,
sync_priority, object_count, enabled, discovered_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(schema_id, jira_type_id) DO UPDATE SET
display_name = excluded.display_name,
description = excluded.description,
sync_priority = excluded.sync_priority,
object_count = excluded.object_count,
updated_at = excluded.updated_at
`, [
schemaIdFk,
objType.id,
typeName,
objType.name,
objType.description || null,
syncPriority,
objectCount,
0, // Default: disabled (0 = false in SQLite)
now,
now,
]);
}
objectTypesProcessed++;
// Step 5: Fetch and store attributes
const attributes = await this.fetchAttributes(objType.id);
logger.info(`SchemaSync: Fetched ${attributes.length} attributes for ${objType.name} (type ${objType.id})`);
if (!jiraAttributeIds.has(typeName)) {
jiraAttributeIds.set(typeName, new Set());
}
if (attributes.length === 0) {
logger.warn(`SchemaSync: No attributes found for ${objType.name} (type ${objType.id})`);
}
for (const jiraAttr of attributes) {
try {
const attrDef = this.parseAttribute(jiraAttr, typeConfigs);
jiraAttributeIds.get(typeName)!.add(attrDef.jiraId);
// Upsert attribute
if (txDb.isPostgres) {
await txDb.execute(`
INSERT INTO attributes (
jira_attr_id, object_type_name, attr_name, field_name, attr_type,
is_multiple, is_editable, is_required, is_system,
reference_type_name, description, position, discovered_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(jira_attr_id, object_type_name) DO UPDATE SET
attr_name = excluded.attr_name,
field_name = excluded.field_name,
attr_type = excluded.attr_type,
is_multiple = excluded.is_multiple,
is_editable = excluded.is_editable,
is_required = excluded.is_required,
is_system = excluded.is_system,
reference_type_name = excluded.reference_type_name,
description = excluded.description,
position = excluded.position
`, [
attrDef.jiraId,
typeName,
attrDef.name,
attrDef.fieldName,
attrDef.type,
attrDef.isMultiple,
attrDef.isEditable,
attrDef.isRequired,
attrDef.isSystem,
attrDef.referenceTypeName || null,
attrDef.description || null,
attrDef.position ?? 0,
now,
]);
} else {
await txDb.execute(`
INSERT INTO attributes (
jira_attr_id, object_type_name, attr_name, field_name, attr_type,
is_multiple, is_editable, is_required, is_system,
reference_type_name, description, position, discovered_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(jira_attr_id, object_type_name) DO UPDATE SET
attr_name = excluded.attr_name,
field_name = excluded.field_name,
attr_type = excluded.attr_type,
is_multiple = excluded.is_multiple,
is_editable = excluded.is_editable,
is_required = excluded.is_required,
is_system = excluded.is_system,
reference_type_name = excluded.reference_type_name,
description = excluded.description,
position = excluded.position
`, [
attrDef.jiraId,
typeName,
attrDef.name,
attrDef.fieldName,
attrDef.type,
attrDef.isMultiple ? 1 : 0,
attrDef.isEditable ? 1 : 0,
attrDef.isRequired ? 1 : 0,
attrDef.isSystem ? 1 : 0,
attrDef.referenceTypeName || null,
attrDef.description || null,
attrDef.position ?? 0,
now,
]);
}
attributesProcessed++;
} catch (error) {
logger.error(`SchemaSync: Failed to process attribute ${jiraAttr.id} (${jiraAttr.name}) for ${objType.name}`, error);
if (error instanceof Error) {
logger.error(`SchemaSync: Attribute error details: ${error.message}`, error.stack);
}
errors.push({
type: 'attribute',
id: jiraAttr.id,
message: error instanceof Error ? error.message : String(error),
});
}
}
logger.info(`SchemaSync: ${attributesProcessed} attributes processed in total so far (through ${objType.name}, type ${objType.id})`);
this.progress.objectTypesCompleted++;
} catch (error) {
logger.warn(`SchemaSync: Failed to process object type ${objType.id}`, error);
errors.push({
type: 'objectType',
id: objType.id,
message: error instanceof Error ? error.message : String(error),
});
}
}
this.progress.schemasCompleted++;
schemasProcessed++;
} catch (error) {
logger.error(`SchemaSync: Failed to process schema ${schema.id}`, error);
errors.push({
type: 'schema',
id: schema.id.toString(),
message: error instanceof Error ? error.message : String(error),
});
}
}
// Step 6: Clean up orphaned records (hard delete)
logger.info('SchemaSync: Cleaning up orphaned records...');
// Delete orphaned schemas
const allLocalSchemas = await txDb.query<{ jira_schema_id: string }>(
`SELECT jira_schema_id FROM schemas`
);
for (const localSchema of allLocalSchemas) {
if (!jiraSchemaIds.has(localSchema.jira_schema_id)) {
logger.info(`SchemaSync: Deleting orphaned schema ${localSchema.jira_schema_id}`);
await txDb.execute(`DELETE FROM schemas WHERE jira_schema_id = ?`, [localSchema.jira_schema_id]);
schemasDeleted++;
}
}
// Delete orphaned object types
// First, get all object types from all remaining schemas
const allLocalObjectTypes = await txDb.query<{ schema_id: number; jira_type_id: number; jira_schema_id: string }>(
`SELECT ot.schema_id, ot.jira_type_id, s.jira_schema_id
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id`
);
for (const localType of allLocalObjectTypes) {
const schemaIdStr = localType.jira_schema_id;
const typeIds = jiraObjectTypeIds.get(schemaIdStr);
// If schema doesn't exist in Jira anymore, or type doesn't exist in schema
if (!jiraSchemaIds.has(schemaIdStr) || (typeIds && !typeIds.has(localType.jira_type_id))) {
logger.info(`SchemaSync: Deleting orphaned object type ${localType.jira_type_id} from schema ${schemaIdStr}`);
await txDb.execute(
`DELETE FROM object_types WHERE schema_id = ? AND jira_type_id = ?`,
[localType.schema_id, localType.jira_type_id]
);
objectTypesDeleted++;
}
}
// Delete orphaned attributes
// Get all attributes and check against synced types
const allLocalAttributes = await txDb.query<{ object_type_name: string; jira_attr_id: number }>(
`SELECT object_type_name, jira_attr_id FROM attributes`
);
for (const localAttr of allLocalAttributes) {
const attrIds = jiraAttributeIds.get(localAttr.object_type_name);
// If type wasn't synced or attribute doesn't exist in type
if (!attrIds || !attrIds.has(localAttr.jira_attr_id)) {
logger.info(`SchemaSync: Deleting orphaned attribute ${localAttr.jira_attr_id} from type ${localAttr.object_type_name}`);
await txDb.execute(
`DELETE FROM attributes WHERE object_type_name = ? AND jira_attr_id = ?`,
[localAttr.object_type_name, localAttr.jira_attr_id]
);
attributesDeleted++;
}
}
logger.info(`SchemaSync: Cleanup complete - ${schemasDeleted} schemas, ${objectTypesDeleted} object types, ${attributesDeleted} attributes deleted`);
});
const duration = Date.now() - startTime;
this.progress.status = 'completed';
logger.info(`SchemaSync: Synchronization complete in ${duration}ms - ${schemasProcessed} schemas, ${objectTypesProcessed} object types, ${attributesProcessed} attributes, ${schemasDeleted} deleted schemas, ${objectTypesDeleted} deleted types, ${attributesDeleted} deleted attributes`);
if (attributesProcessed === 0) {
logger.warn(`SchemaSync: WARNING - No attributes were saved! Check logs for errors.`);
}
if (errors.length > 0) {
logger.warn(`SchemaSync: Sync completed with ${errors.length} errors:`, errors);
}
return {
success: errors.length === 0,
schemasProcessed,
objectTypesProcessed,
attributesProcessed,
schemasDeleted,
objectTypesDeleted,
attributesDeleted,
errors,
duration,
};
} catch (error) {
this.progress.status = 'failed';
logger.error('SchemaSync: Synchronization failed', error);
throw error;
}
}
/**
* Sync a single schema by ID
*/
async syncSchema(schemaId: number): Promise<SyncResult> {
// For single schema sync, we can reuse syncAll logic but filter
// For now, just call syncAll (it's idempotent)
logger.info(`SchemaSync: Syncing single schema ${schemaId}`);
return this.syncAll();
}
/**
* Get sync status/progress
*/
getProgress(): SyncProgress {
return { ...this.progress };
}
}
// Export singleton instance
export const schemaSyncService = new SchemaSyncService();
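The retry policy in `fetchWithRateLimit` reduces to a small recursive wrapper. This sketch uses an injected fetch-like function and shortened delays for illustration; unlike the service above it does not special-case 429/Retry-After, and it returns 4xx responses instead of throwing:

```typescript
type FetchResult = { status: number; body: string };

async function fetchWithRetry(
  doFetch: () => Promise<FetchResult>,
  retries = 3,
  delayMs = 10 // the real service waits 1000 ms
): Promise<FetchResult> {
  const res = await doFetch().catch(err => {
    if (retries <= 0) throw err; // network error with no budget left
    return null; // otherwise fall through to the retry path
  });
  if (res && res.status < 500) return res; // success (4xx handling omitted)
  if (retries <= 0) throw new Error(`gave up after server error ${res?.status}`);
  await new Promise(resolve => setTimeout(resolve, delayMs));
  return fetchWithRetry(doFetch, retries - 1, delayMs);
}

// Simulated endpoint: two 503s, then success.
let attempt = 0;
const flaky = async (): Promise<FetchResult> =>
  ++attempt < 3 ? { status: 503, body: '' } : { status: 200, body: 'ok' };

const res = await fetchWithRetry(flaky);
console.log(attempt, res.status); // 3 200
```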


@@ -0,0 +1,68 @@
/**
* ServiceFactory - Creates and initializes all services
*
* Single entry point for service initialization and dependency injection.
*/
import { getDatabaseAdapter } from './database/singleton.js';
import { ensureSchemaInitialized } from './database/normalized-schema-init.js';
import { SchemaRepository } from '../repositories/SchemaRepository.js';
import { ObjectCacheRepository } from '../repositories/ObjectCacheRepository.js';
import { SchemaSyncService } from './SchemaSyncService.js';
import { ObjectSyncService } from './ObjectSyncService.js';
import { PayloadProcessor } from './PayloadProcessor.js';
import { QueryService } from './QueryService.js';
import { RefreshService } from './RefreshService.js';
import { WriteThroughService } from './WriteThroughService.js';
import { logger } from './logger.js';
/**
* All services container
*/
export class ServiceFactory {
public readonly schemaRepo: SchemaRepository;
public readonly cacheRepo: ObjectCacheRepository;
public readonly schemaSyncService: SchemaSyncService;
public readonly objectSyncService: ObjectSyncService;
public readonly payloadProcessor: PayloadProcessor;
public readonly queryService: QueryService;
public readonly refreshService: RefreshService;
public readonly writeThroughService: WriteThroughService;
private static instance: ServiceFactory | null = null;
private constructor() {
// Use shared database adapter singleton
const db = getDatabaseAdapter();
// Initialize repositories
this.schemaRepo = new SchemaRepository(db);
this.cacheRepo = new ObjectCacheRepository(db);
// Initialize services
this.schemaSyncService = new SchemaSyncService();
this.objectSyncService = new ObjectSyncService(this.schemaRepo, this.cacheRepo);
this.payloadProcessor = new PayloadProcessor(this.schemaRepo, this.cacheRepo);
this.queryService = new QueryService(this.schemaRepo, this.cacheRepo);
this.refreshService = new RefreshService(this.objectSyncService);
this.writeThroughService = new WriteThroughService(this.objectSyncService, this.schemaRepo);
// Ensure schema is initialized (async, but don't block)
ensureSchemaInitialized().catch(error => {
logger.error('ServiceFactory: Failed to initialize database schema', error);
});
}
/**
* Get singleton instance
*/
static getInstance(): ServiceFactory {
if (!ServiceFactory.instance) {
ServiceFactory.instance = new ServiceFactory();
}
return ServiceFactory.instance;
}
}
// Export singleton instance getter
export const getServices = () => ServiceFactory.getInstance();
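The lazy-singleton shape ServiceFactory relies on, reduced to its essentials (illustrative class name): the private constructor forces all access through `getInstance`, which builds the instance on first call and reuses it afterwards.

```typescript
class Factory {
  private static instance: Factory | null = null;
  public readonly createdAt = Date.now();

  // Private constructor: the only way in is getInstance()
  private constructor() {}

  static getInstance(): Factory {
    if (!Factory.instance) {
      Factory.instance = new Factory();
    }
    return Factory.instance;
  }
}

const a = Factory.getInstance();
const b = Factory.getInstance();
console.log(a === b); // true
```

One consequence worth noting: because the constructor kicks off `ensureSchemaInitialized()` without awaiting it, the first caller of `getServices()` may receive services before the database schema exists; the factory logs rather than surfaces that failure.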


@@ -0,0 +1,153 @@
/**
* WriteThroughService - Write-through updates to Jira and DB
*
* Writes to Jira Assets API, then immediately updates DB cache.
*/
import { logger } from './logger.js';
import { jiraAssetsClient } from '../infrastructure/jira/JiraAssetsClient.js';
import { ObjectSyncService } from './ObjectSyncService.js';
import { SchemaRepository } from '../repositories/SchemaRepository.js';
import type { CMDBObject, CMDBObjectTypeName } from '../generated/jira-types.js';
export interface UpdateResult {
success: boolean;
data?: CMDBObject;
error?: string;
}
export class WriteThroughService {
constructor(
private syncService: ObjectSyncService,
private schemaRepo: SchemaRepository
) {}
/**
* Update an object (write-through)
*
* 1. Build Jira update payload from field updates
* 2. Send update to Jira Assets API
* 3. Fetch fresh data from Jira
* 4. Update DB cache using same normalization logic
*/
async updateObject(
typeName: CMDBObjectTypeName,
objectId: string,
updates: Record<string, unknown>
): Promise<UpdateResult> {
try {
// Get attribute definitions for this type
const attributeDefs = await this.schemaRepo.getAttributesForType(typeName);
const attrMapByName = new Map(attributeDefs.map(a => [a.fieldName, a]));
// Build Jira update payload
const payload = {
attributes: [] as Array<{
objectTypeAttributeId: number;
objectAttributeValues: Array<{ value?: string }>;
}>,
};
for (const [fieldName, value] of Object.entries(updates)) {
const attrDef = attrMapByName.get(fieldName);
if (!attrDef) {
logger.warn(`WriteThroughService: Unknown field ${fieldName} for type ${typeName}`);
continue;
}
if (!attrDef.isEditable) {
logger.warn(`WriteThroughService: Field ${fieldName} is not editable`);
continue;
}
// Build attribute values based on type
const attrValues = this.buildAttributeValues(value, attrDef);
if (attrValues.length > 0 || value === null || value === undefined) {
// Include attribute even if clearing (empty array)
payload.attributes.push({
objectTypeAttributeId: attrDef.jiraAttrId,
objectAttributeValues: attrValues,
});
}
}
if (payload.attributes.length === 0) {
return { success: true }; // No attributes to update
}
// Send update to Jira
await jiraAssetsClient.updateObject(objectId, payload);
// Fetch fresh data from Jira
const entry = await jiraAssetsClient.getObject(objectId);
if (!entry) {
return {
success: false,
error: 'Object not found in Jira after update',
};
}
// Get enabled types for sync policy
const enabledTypes = await this.schemaRepo.getEnabledObjectTypes();
const enabledTypeSet = new Set(enabledTypes.map(t => t.typeName));
// Update DB cache using sync service
const syncResult = await this.syncService.syncSingleObject(objectId, enabledTypeSet);
if (!syncResult.cached) {
logger.warn(`WriteThroughService: Failed to update cache after Jira update: ${syncResult.error}`);
// Still return success if Jira update succeeded
}
// Fetch updated object from DB
// Note: We'd need QueryService here, but to avoid circular deps,
// we'll return success and let caller refresh if needed
return { success: true };
} catch (error) {
logger.error(`WriteThroughService: Failed to update object ${objectId}`, error);
return {
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
};
}
}
/**
* Build Jira attribute values from TypeScript value
*/
private buildAttributeValues(
value: unknown,
attrDef: { attrType: string; isMultiple: boolean }
): Array<{ value?: string }> {
// Null/undefined = clear the field
if (value === null || value === undefined) {
return [];
}
// Reference type
if (attrDef.attrType === 'reference') {
if (attrDef.isMultiple && Array.isArray(value)) {
return (value as Array<{ objectKey?: string }>).map(ref => ({
value: ref.objectKey,
})).filter(v => v.value);
} else if (!attrDef.isMultiple) {
const ref = value as { objectKey?: string };
return ref.objectKey ? [{ value: ref.objectKey }] : [];
}
return [];
}
// Boolean
if (attrDef.attrType === 'boolean') {
return [{ value: value ? 'true' : 'false' }];
}
// Number types
if (attrDef.attrType === 'integer' || attrDef.attrType === 'float') {
return [{ value: String(value) }];
}
// String types
return [{ value: String(value) }];
}
}
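The conversion rules in `buildAttributeValues` can be restated as a standalone function (a simplified re-implementation for illustration, not the service method itself): null clears the field, references are sent by `objectKey`, and everything else is stringified.

```typescript
type AttrDef = { attrType: string; isMultiple: boolean };

function toJiraValues(value: unknown, def: AttrDef): Array<{ value?: string }> {
  if (value === null || value === undefined) return []; // empty array clears the field
  if (def.attrType === 'reference') {
    // References are written as object keys; single values are wrapped
    const refs = def.isMultiple && Array.isArray(value) ? value : [value];
    return (refs as Array<{ objectKey?: string }>)
      .map(ref => ({ value: ref.objectKey }))
      .filter(v => v.value);
  }
  if (def.attrType === 'boolean') return [{ value: value ? 'true' : 'false' }];
  return [{ value: String(value) }]; // integers, floats, text, dates
}

console.log(JSON.stringify(toJiraValues(8, { attrType: 'integer', isMultiple: false })));
// [{"value":"8"}]
console.log(JSON.stringify(toJiraValues(
  [{ objectKey: 'CMDB-1' }, { objectKey: 'CMDB-2' }],
  { attrType: 'reference', isMultiple: true }
)));
// [{"value":"CMDB-1"},{"value":"CMDB-2"}]
```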


@@ -150,15 +150,19 @@ class AuthService {
 }
 // Check if expired
-if (new Date(session.expires_at) < new Date()) {
+const expiresAt = new Date(session.expires_at);
+const now = new Date();
+if (expiresAt < now) {
 await db.execute('DELETE FROM sessions WHERE id = ?', [sessionId]);
 return null;
 }
 return session;
-} finally {
-await db.close();
+} catch (error) {
+logger.error(`[getSessionFromDb] Error querying session: ${sessionId.substring(0, 8)}...`, error);
+throw error;
 }
+// Note: Don't close the database adapter - it's a singleton that should remain open
 }
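The expiry rule from this hunk, restated as a standalone helper (hypothetical name, for illustration): a session is invalid once its `expires_at` timestamp lies in the past, and the new code simply names the two dates before comparing them.

```typescript
function isSessionExpired(expiresAt: string, now: Date = new Date()): boolean {
  // Invalid date strings produce NaN comparisons, which are always false,
  // so a malformed expires_at is treated as not expired here.
  return new Date(expiresAt) < now;
}

console.log(isSessionExpired('2000-01-01T00:00:00Z')); // true
console.log(isSessionExpired('2999-01-01T00:00:00Z')); // false
```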
/**


@@ -10,7 +10,7 @@ import { readFileSync, existsSync } from 'fs';
 import { join } from 'path';
 import { dirname } from 'path';
 import { fileURLToPath } from 'url';
-import * as XLSX from 'xlsx';
+import ExcelJS from 'exceljs';
 import { logger } from './logger.js';
 // Get __dirname equivalent for ES modules
@@ -52,13 +52,13 @@ export function clearBIACache(): void {
 /**
 * Load BIA data from Excel file
 */
-export function loadBIAData(): BIARecord[] {
+export async function loadBIAData(): Promise<BIARecord[]> {
 const now = Date.now();
 // Return cached data if still valid AND has records
 // Don't use cache if it's empty (indicates previous load failure)
 if (biaDataCache && biaDataCache.length > 0 && (now - biaDataCacheTimestamp) < BIA_CACHE_TTL) {
 logger.debug(`Using cached BIA data (${biaDataCache.length} records, cached ${Math.round((now - biaDataCacheTimestamp) / 1000)}s ago)`);
-return biaDataCache;
+return Promise.resolve(biaDataCache);
 }
 // Clear cache if it's empty or expired
@@ -96,19 +96,46 @@ export function loadBIAData(): BIARecord[] {
logger.error(`__dirname: ${__dirname}`);
biaDataCache = [];
biaDataCacheTimestamp = now;
return [];
return Promise.resolve([]);
}
logger.info(`Loading BIA data from: ${biaFilePath}`);
try {
// Read file using readFileSync and then parse with XLSX.read
// This works better in ES modules than XLSX.readFile
// Read file using readFileSync and then parse with ExcelJS
const fileBuffer = readFileSync(biaFilePath);
const workbook = XLSX.read(fileBuffer, { type: 'buffer' });
const sheetName = workbook.SheetNames[0];
const worksheet = workbook.Sheets[sheetName];
const data = XLSX.utils.sheet_to_json(worksheet, { header: 1 }) as any[][];
const workbook = new ExcelJS.Workbook();
// ExcelJS accepts Buffer, but TypeScript types may be strict
// Use type assertion to satisfy TypeScript's strict Buffer type checking
await workbook.xlsx.load(fileBuffer as any);
const worksheet = workbook.worksheets[0]; // First sheet
// Convert to a 2D array format (like xlsx.utils.sheet_to_json with header: 1)
// We need at least column K (index 10), so ensure we read up to column 11 (1-based)
const data: any[][] = [];
const maxColumnNeeded = 11; // Column K is index 10 (0-based), so we need column 11 (1-based)
worksheet.eachRow((row, rowNumber) => {
const rowData: any[] = [];
// Ensure we have at least maxColumnNeeded columns, but also check actual cells
const actualMaxCol = Math.max(maxColumnNeeded, row.actualCellCount || 0);
for (let colNumber = 1; colNumber <= actualMaxCol; colNumber++) {
const cell = row.getCell(colNumber);
// ExcelJS uses 1-based indexing, convert to 0-based for array
// Handle different cell value types: convert to string for consistency
let cellValue: any = cell.value;
if (cellValue === null || cellValue === undefined) {
cellValue = '';
} else if (cellValue instanceof Date) {
cellValue = cellValue.toISOString();
} else if (typeof cellValue === 'object' && 'richText' in cellValue) {
// Handle RichText objects: concatenate the text of all runs
// (calling toString() on the object would yield "[object Object]")
cellValue = (cellValue as { richText: Array<{ text: string }> }).richText.map(rt => rt.text).join('');
}
rowData[colNumber - 1] = cellValue;
}
data.push(rowData);
});
logger.info(`Loaded Excel file: ${data.length} rows, first row has ${data[0]?.length || 0} columns`);
if (data.length > 0 && data[0]) {
@@ -236,12 +263,12 @@ export function loadBIAData(): BIARecord[] {
}
biaDataCache = records;
biaDataCacheTimestamp = now;
return records;
return Promise.resolve(records);
} catch (error) {
logger.error('Failed to load BIA data from Excel', error);
biaDataCache = [];
biaDataCacheTimestamp = now;
return [];
return Promise.resolve([]);
}
}
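The cache check at the top of `loadBIAData` treats an empty cache as a miss so that a previously failed load is retried. That guard can be sketched in isolation (names here are illustrative, not the module's real ones):

```typescript
// Sketch of the TTL cache guard: empty results are never served from cache.
function isCacheUsable<T>(
  cache: T[] | null,
  cachedAt: number,
  ttlMs: number,
  now: number = Date.now()
): boolean {
  return !!cache && cache.length > 0 && (now - cachedAt) < ttlMs;
}
```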
@@ -330,11 +357,11 @@ function wordBasedSimilarity(str1: string, str2: string): number {
* - Confidence/similarity score
* - Length similarity (prefer matches with similar length)
*/
export function findBIAMatch(
export async function findBIAMatch(
applicationName: string,
searchReference: string | null
): BIAMatchResult {
const biaData = loadBIAData();
): Promise<BIAMatchResult> {
const biaData = await loadBIAData();
if (biaData.length === 0) {
logger.warn(`No BIA data available for lookup of "${applicationName}" (biaData.length = 0)`);
return {

View File

@@ -52,7 +52,7 @@ try {
async function findBIAValue(applicationName: string, searchReference?: string | null): Promise<string | null> {
// Use the unified matching service (imported at top of file)
const { findBIAMatch } = await import('./biaMatchingService.js');
const matchResult = findBIAMatch(applicationName, searchReference || null);
const matchResult = await findBIAMatch(applicationName, searchReference || null);
return matchResult.biaValue || null;
}
@@ -350,7 +350,7 @@ async function performWebSearch(query: string, tavilyApiKey?: string): Promise<s
'Content-Type': 'application/json',
},
body: JSON.stringify({
api_key: apiKey,
api_key: tavilyApiKey,
query: query,
search_depth: 'basic',
include_answer: true,

View File

@@ -8,7 +8,7 @@
*/
import { logger } from './logger.js';
import { cacheStore, type CacheStats } from './cacheStore.js';
import { normalizedCacheStore as cacheStore, type CacheStats } from './normalizedCacheStore.js';
import { jiraAssetsClient, type JiraUpdatePayload, JiraObjectNotFoundError } from './jiraAssetsClient.js';
import { conflictResolver, type ConflictCheckResult } from './conflictResolver.js';
import { OBJECT_TYPES, getAttributeDefinition } from '../generated/jira-schema.js';
@@ -65,7 +65,11 @@ class CMDBService {
return cached;
}
// Cache miss: fetch from Jira
// Cache miss: check if cache is cold and trigger background warming
// Note: Background cache warming removed - syncs must be triggered manually from GUI
// The isWarm() check is kept for status reporting, but no auto-warming
// Fetch from Jira (don't wait for warming)
return this.fetchAndCacheObject<T>(typeName, id);
}
@@ -94,7 +98,7 @@ class CMDBService {
if (result.objects.length === 0) return null;
const parsed = jiraAssetsClient.parseObject<T>(result.objects[0]);
const parsed = await jiraAssetsClient.parseObject<T>(result.objects[0]);
if (parsed) {
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
@@ -122,13 +126,48 @@ class CMDBService {
): Promise<T | null> {
try {
const jiraObj = await jiraAssetsClient.getObject(id);
if (!jiraObj) return null;
if (!jiraObj) {
logger.warn(`CMDBService: Jira API returned null for object ${typeName}/${id}`);
return null;
}
let parsed: T | null;
try {
parsed = await jiraAssetsClient.parseObject<T>(jiraObj);
} catch (parseError) {
// parseObject throws errors for missing required fields - log and return null
logger.error(`CMDBService: Failed to parse object ${typeName}/${id} from Jira:`, parseError);
logger.debug(`CMDBService: Jira object that failed to parse:`, {
id: jiraObj.id,
objectKey: jiraObj.objectKey,
label: jiraObj.label,
objectType: jiraObj.objectType?.name,
attributesCount: jiraObj.attributes?.length || 0,
});
return null;
}
if (!parsed) {
logger.warn(`CMDBService: Failed to parse object ${typeName}/${id} from Jira (parseObject returned null)`);
return null;
}
// Validate parsed object has required fields before caching
if (!parsed.id || !parsed.objectKey || !parsed.label) {
logger.error(`CMDBService: Parsed object ${typeName}/${id} is missing required fields. Parsed object: ${JSON.stringify({
id: parsed.id,
objectKey: parsed.objectKey,
label: parsed.label,
hasId: 'id' in parsed,
hasObjectKey: 'objectKey' in parsed,
hasLabel: 'label' in parsed,
resultKeys: Object.keys(parsed),
})}`);
return null; // Return null instead of throwing to allow graceful degradation
}
const parsed = jiraAssetsClient.parseObject<T>(jiraObj);
if (parsed) {
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
}
return parsed;
} catch (error) {
// If object was deleted from Jira, remove it from our cache
@@ -139,11 +178,48 @@ class CMDBService {
}
return null;
}
// Re-throw other errors
throw error;
// Log other errors but return null instead of throwing to prevent cascading failures
logger.error(`CMDBService: Unexpected error fetching object ${typeName}/${id}:`, error);
return null;
}
}
/**
* Batch fetch multiple objects from Jira and update cache
* Much more efficient than fetching objects one by one
*/
async batchFetchAndCacheObjects<T extends CMDBObject>(
typeName: CMDBObjectTypeName,
ids: string[]
): Promise<T[]> {
if (ids.length === 0) return [];
logger.debug(`CMDBService: Batch fetching ${ids.length} ${typeName} objects from Jira`);
// Fetch all objects in parallel (but limit concurrency to avoid overwhelming Jira)
const BATCH_SIZE = 20; // Fetch 20 objects at a time
const results: T[] = [];
for (let i = 0; i < ids.length; i += BATCH_SIZE) {
const batch = ids.slice(i, i + BATCH_SIZE);
const batchPromises = batch.map(async (id) => {
try {
return await this.fetchAndCacheObject<T>(typeName, id);
} catch (error) {
logger.warn(`CMDBService: Failed to fetch ${typeName}/${id} in batch`, error);
return null;
}
});
const batchResults = await Promise.all(batchPromises);
const validResults = batchResults.filter((obj): obj is NonNullable<typeof obj> => obj !== null) as T[];
results.push(...validResults);
}
logger.debug(`CMDBService: Successfully batch fetched ${results.length}/${ids.length} ${typeName} objects`);
return results;
}
/**
* Get all objects of a type from cache
*/
@@ -430,6 +506,20 @@ class CMDBService {
return await cacheStore.isWarm();
}
/**
* Trigger background cache warming if cache is cold
* This is called on-demand when cache misses occur
*/
private async triggerBackgroundWarming(): Promise<void> {
try {
const { jiraAssetsService } = await import('./jiraAssets.js');
await jiraAssetsService.preWarmFullCache();
} catch (error) {
// Silently fail - warming is optional
logger.debug('On-demand cache warming failed', error);
}
}
/**
* Clear cache for a specific type
*/

View File

@@ -0,0 +1,286 @@
/**
* Data Integrity Service
*
* Handles validation and repair of broken references and other data integrity issues.
*/
import { logger } from './logger.js';
import { normalizedCacheStore as cacheStore } from './normalizedCacheStore.js';
import { jiraAssetsClient, JiraObjectNotFoundError } from './jiraAssetsClient.js';
import type { CMDBObject } from '../generated/jira-types.js';
import type { DatabaseAdapter } from './database/interface.js';
export interface BrokenReference {
object_id: string;
attribute_id: number;
reference_object_id: string;
field_name: string;
object_type_name: string;
object_key: string;
label: string;
}
export interface RepairResult {
total: number;
repaired: number;
deleted: number;
failed: number;
errors: Array<{ reference: BrokenReference; error: string }>;
}
export interface ValidationResult {
brokenReferences: number;
objectsWithBrokenRefs: number;
lastValidated: string;
}
class DataIntegrityService {
/**
* Validate all references in the cache
*/
async validateReferences(): Promise<ValidationResult> {
const brokenCount = await cacheStore.getBrokenReferencesCount();
// Count unique objects with broken references
const brokenRefs = await cacheStore.getBrokenReferences(10000, 0);
const uniqueObjectIds = new Set(brokenRefs.map(ref => ref.object_id));
return {
brokenReferences: brokenCount,
objectsWithBrokenRefs: uniqueObjectIds.size,
lastValidated: new Date().toISOString(),
};
}
/**
* Repair broken references
*
* @param mode - 'delete': Remove broken references, 'fetch': Try to fetch missing objects from Jira, 'dry-run': Just report
* @param batchSize - Number of references to process at a time
* @param maxRepairs - Maximum number of repairs to attempt (0 = unlimited)
*/
async repairBrokenReferences(
mode: 'delete' | 'fetch' | 'dry-run' = 'fetch',
batchSize: number = 100,
maxRepairs: number = 0
): Promise<RepairResult> {
const result: RepairResult = {
total: 0,
repaired: 0,
deleted: 0,
failed: 0,
errors: [],
};
let offset = 0;
let processed = 0;
while (true) {
// Fetch batch of broken references
const brokenRefs = await cacheStore.getBrokenReferences(batchSize, offset);
if (brokenRefs.length === 0) break;
result.total += brokenRefs.length;
for (const ref of brokenRefs) {
// Check max repairs limit
if (maxRepairs > 0 && processed >= maxRepairs) {
logger.info(`DataIntegrityService: Reached max repairs limit (${maxRepairs})`);
break;
}
try {
if (mode === 'dry-run') {
// Just count, don't repair
processed++;
continue;
}
if (mode === 'fetch') {
// Try to fetch the referenced object from Jira
const fetchResult = await this.validateAndFetchReference(ref.reference_object_id);
if (fetchResult.exists && fetchResult.object) {
// Object was successfully fetched and cached
logger.debug(`DataIntegrityService: Repaired reference from ${ref.object_key}.${ref.field_name} to ${ref.reference_object_id}`);
result.repaired++;
} else {
// Object doesn't exist in Jira, delete the reference
await this.deleteBrokenReference(ref);
logger.debug(`DataIntegrityService: Deleted broken reference from ${ref.object_key}.${ref.field_name} to ${ref.reference_object_id} (object not found in Jira)`);
result.deleted++;
}
} else if (mode === 'delete') {
// Directly delete the broken reference
await this.deleteBrokenReference(ref);
result.deleted++;
}
processed++;
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
logger.error(`DataIntegrityService: Failed to repair reference from ${ref.object_key}.${ref.field_name} to ${ref.reference_object_id}`, error);
result.failed++;
result.errors.push({
reference: ref,
error: errorMessage,
});
}
}
// Check if we should continue
if (brokenRefs.length < batchSize || (maxRepairs > 0 && processed >= maxRepairs)) {
break;
}
offset += batchSize;
}
logger.info(`DataIntegrityService: Repair completed - Total: ${result.total}, Repaired: ${result.repaired}, Deleted: ${result.deleted}, Failed: ${result.failed}`);
return result;
}
/**
* Validate and fetch a referenced object
*/
private async validateAndFetchReference(
referenceObjectId: string
): Promise<{ exists: boolean; object?: CMDBObject }> {
// 1. Check cache first
const db = (cacheStore as any).db;
if (db) {
const typedDb = db as DatabaseAdapter;
const objRow = await typedDb.queryOne<{
id: string;
object_type_name: string;
}>(`
SELECT id, object_type_name
FROM objects
WHERE id = ?
`, [referenceObjectId]);
if (objRow) {
const cached = await cacheStore.getObject(objRow.object_type_name as any, referenceObjectId);
if (cached) {
return { exists: true, object: cached };
}
}
}
// 2. Try to fetch from Jira
try {
const jiraObj = await jiraAssetsClient.getObject(referenceObjectId);
if (jiraObj) {
// Parse and cache
const parsed = await jiraAssetsClient.parseObject(jiraObj);
if (parsed) {
await cacheStore.upsertObject(parsed._objectType, parsed);
await cacheStore.extractAndStoreRelations(parsed._objectType, parsed);
return { exists: true, object: parsed };
}
}
} catch (error) {
if (error instanceof JiraObjectNotFoundError) {
return { exists: false };
}
// Re-throw other errors
throw error;
}
return { exists: false };
}
/**
* Delete a broken reference
*/
private async deleteBrokenReference(ref: BrokenReference): Promise<void> {
const db = (cacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.execute(`
DELETE FROM attribute_values
WHERE object_id = ?
AND attribute_id = ?
AND reference_object_id = ?
`, [ref.object_id, ref.attribute_id, ref.reference_object_id]);
}
/**
* Cleanup orphaned attribute values (values without parent object)
*/
async cleanupOrphanedAttributeValues(): Promise<number> {
const db = (cacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
const result = await db.execute(`
DELETE FROM attribute_values
WHERE object_id NOT IN (SELECT id FROM objects)
`);
logger.info(`DataIntegrityService: Cleaned up ${result} orphaned attribute values`);
return result;
}
/**
* Cleanup orphaned relations (relations where source or target doesn't exist)
*/
async cleanupOrphanedRelations(): Promise<number> {
const db = (cacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
const result = await db.execute(`
DELETE FROM object_relations
WHERE source_id NOT IN (SELECT id FROM objects)
OR target_id NOT IN (SELECT id FROM objects)
`);
logger.info(`DataIntegrityService: Cleaned up ${result} orphaned relations`);
return result;
}
/**
* Full integrity check and repair
*/
async fullIntegrityCheck(repair: boolean = false): Promise<{
validation: ValidationResult;
repair?: RepairResult;
orphanedValues: number;
orphanedRelations: number;
}> {
logger.info('DataIntegrityService: Starting full integrity check...');
const validation = await this.validateReferences();
const orphanedValues = await this.cleanupOrphanedAttributeValues();
const orphanedRelations = await this.cleanupOrphanedRelations();
let repairResult: RepairResult | undefined;
if (repair) {
repairResult = await this.repairBrokenReferences('fetch', 100, 0);
}
logger.info('DataIntegrityService: Integrity check completed', {
brokenReferences: validation.brokenReferences,
orphanedValues,
orphanedRelations,
repaired: repairResult?.repaired || 0,
deleted: repairResult?.deleted || 0,
});
return {
validation,
repair: repairResult,
orphanedValues,
orphanedRelations,
};
}
}
export const dataIntegrityService = new DataIntegrityService();
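The per-reference branching in `repairBrokenReferences` ('fetch' tries Jira first and deletes only confirmed-missing objects) can be summarized as a small decision function, sketched here for illustration:

```typescript
// Sketch of the per-reference decision in repairBrokenReferences above.
type RepairMode = 'delete' | 'fetch' | 'dry-run';
type Action = 'none' | 'delete' | 'repair';

function decideAction(mode: RepairMode, existsInJira: boolean): Action {
  if (mode === 'dry-run') return 'none';      // just count, never mutate
  if (mode === 'delete') return 'delete';     // drop the reference unconditionally
  // mode === 'fetch': repair if the target still exists in Jira, else delete
  return existsInJira ? 'repair' : 'delete';
}
```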

View File

@@ -1,18 +1,18 @@
/**
* DataService - Main entry point for application data access
*
* Routes requests to either:
* - CMDBService (using local cache) for real Jira data
* - MockDataService for development without Jira
* ALWAYS uses Jira Assets API via CMDBService (local cache layer).
* Mock data has been removed - all data must come from Jira Assets.
*/
import { config } from '../config/env.js';
import { cmdbService, type UpdateResult } from './cmdbService.js';
import { cacheStore, type CacheStats } from './cacheStore.js';
import { normalizedCacheStore as cacheStore, type CacheStats } from './normalizedCacheStore.js';
import { normalizedCacheStore } from './normalizedCacheStore.js';
import { jiraAssetsClient } from './jiraAssetsClient.js';
import { jiraAssetsService } from './jiraAssets.js';
import { mockDataService } from './mockData.js';
import { logger } from './logger.js';
import type { DatabaseAdapter } from './database/interface.js';
import type {
ApplicationComponent,
IctGovernanceModel,
@@ -47,16 +47,8 @@ import type {
import { calculateRequiredEffortWithMinMax } from './effortCalculation.js';
import { calculateApplicationCompleteness } from './dataCompletenessConfig.js';
// Determine if we should use real Jira Assets or mock data
// Jira PAT is now configured per-user, so we check if schema is configured
// The actual PAT is provided per-request via middleware
const useJiraAssets = !!config.jiraSchemaId;
if (useJiraAssets) {
logger.info('DataService: Using CMDB cache layer with Jira Assets API');
} else {
logger.info('DataService: Using mock data (Jira credentials not configured)');
}
// NOTE: All data comes from Jira Assets API - no mock data fallback
// If schemas aren't configured yet, operations will fail gracefully with appropriate errors
// =============================================================================
// Reference Cache (for enriching IDs to ObjectReferences)
@@ -121,42 +113,113 @@ async function lookupReferences<T extends CMDBObject>(
// Helper Functions
// =============================================================================
/**
* Load description for an object from database
* Looks for a description attribute (field_name like 'description' or attr_name like 'Description')
*/
async function getDescriptionFromDatabase(objectId: string): Promise<string | null> {
try {
const { normalizedCacheStore } = await import('./normalizedCacheStore.js');
const db = (normalizedCacheStore as any).db;
if (!db) return null;
// Try to find description attribute by common field names
const descriptionFieldNames = ['description', 'Description', 'DESCRIPTION'];
// First, get the object to find its type
const typedDb = db as DatabaseAdapter;
const objRow = await typedDb.queryOne<{ object_type_name: string }>(`
SELECT object_type_name FROM objects WHERE id = ?
`, [objectId]);
if (!objRow) return null;
// Try each possible description field name
for (const fieldName of descriptionFieldNames) {
const descRow = await typedDb.queryOne<{ text_value: string }>(`
SELECT av.text_value
FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = ?
AND (a.field_name = ? OR a.attr_name = ?)
AND av.text_value IS NOT NULL
AND av.text_value != ''
LIMIT 1
`, [objectId, fieldName, fieldName]);
if (descRow?.text_value) {
return descRow.text_value;
}
}
return null;
} catch (error) {
logger.debug(`Failed to get description from database for object ${objectId}`, error);
return null;
}
}
/**
* Convert ObjectReference to ReferenceValue format used by frontend
* Try to enrich with description from jiraAssetsService cache if available
* If not in cache or cache entry has no description, fetch it async
* PRIMARY: Load from database cache (no API calls)
* FALLBACK: Only use API if object not in database
*/
async function toReferenceValue(ref: ObjectReference | null | undefined): Promise<ReferenceValue | null> {
if (!ref) return null;
// Try to get enriched ReferenceValue from jiraAssetsService cache (includes description if available)
const enriched = useJiraAssets ? jiraAssetsService.getEnrichedReferenceValue(ref.objectKey, ref.objectId) : null;
// PRIMARY SOURCE: Try to load from database first (no API calls)
try {
const { normalizedCacheStore } = await import('./normalizedCacheStore.js');
const db = (normalizedCacheStore as any).db;
if (db) {
await db.ensureInitialized?.();
// Get basic object info from database
const typedDb = db as DatabaseAdapter;
const objRow = await typedDb.queryOne<{
id: string;
object_key: string;
label: string;
}>(`
SELECT id, object_key, label
FROM objects
WHERE id = ? OR object_key = ?
LIMIT 1
`, [ref.objectId, ref.objectKey]);
if (objRow) {
// Object exists in database - extract description if available
const description = await getDescriptionFromDatabase(objRow.id);
return {
objectId: objRow.id,
key: objRow.object_key || ref.objectKey,
name: objRow.label || ref.label,
...(description && { description }),
};
}
}
} catch (error) {
logger.debug(`Failed to load reference object ${ref.objectId} from database`, error);
}
// FALLBACK: Object not in database - check Jira Assets service cache
// Only fetch from API if really needed (object missing from database)
const enriched = jiraAssetsService.getEnrichedReferenceValue(ref.objectKey, ref.objectId);
if (enriched && enriched.description) {
// Use enriched value with description
// Use enriched value with description from service cache
return enriched;
}
// Cache miss or no description - fetch it async if using Jira Assets
if (useJiraAssets && enriched && !enriched.description) {
// We have a cached value but it lacks description - fetch it
const fetched = await jiraAssetsService.fetchEnrichedReferenceValue(ref.objectKey, ref.objectId);
if (fetched) {
return fetched;
}
// If fetch failed, return the cached value anyway
// Last resort: Object not in database and not in service cache
// Only return basic info - don't fetch from API here
// API fetching should only happen during sync operations
if (enriched) {
return enriched;
}
if (useJiraAssets) {
// Cache miss - fetch it
const fetched = await jiraAssetsService.fetchEnrichedReferenceValue(ref.objectKey, ref.objectId);
if (fetched) {
return fetched;
}
}
// Fallback to basic conversion without description (if fetch failed or not using Jira Assets)
// Basic fallback - return what we have from the ObjectReference
return {
objectId: ref.objectId,
key: ref.objectKey,
@@ -172,7 +235,8 @@ function toReferenceValues(refs: ObjectReference[] | null | undefined): Referenc
return refs.map(ref => ({
objectId: ref.objectId,
key: ref.objectKey,
name: ref.label,
// Use label if available, otherwise fall back to objectKey, then objectId
name: ref.label || ref.objectKey || ref.objectId || 'Unknown',
}));
}
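The `name` fallback introduced above guarantees the frontend always gets a displayable string. In isolation the chain is just:

```typescript
// Sketch of the name-fallback chain used in toReferenceValues above.
function displayName(ref: { label?: string; objectKey?: string; objectId?: string }): string {
  return ref.label || ref.objectKey || ref.objectId || 'Unknown';
}
```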
@@ -225,6 +289,18 @@ async function toApplicationDetails(app: ApplicationComponent): Promise<Applicat
logger.info(`[toApplicationDetails] Converting cached object ${app.objectKey || app.id} to ApplicationDetails`);
logger.info(`[toApplicationDetails] confluenceSpace from cache: ${app.confluenceSpace} (type: ${typeof app.confluenceSpace})`);
// Debug logging for reference fields
if (process.env.NODE_ENV === 'development') {
logger.debug(`[toApplicationDetails] businessOwner: ${JSON.stringify(app.businessOwner)}`);
logger.debug(`[toApplicationDetails] systemOwner: ${JSON.stringify(app.systemOwner)}`);
logger.debug(`[toApplicationDetails] technicalApplicationManagement: ${JSON.stringify(app.technicalApplicationManagement)}`);
logger.debug(`[toApplicationDetails] supplierProduct: ${JSON.stringify(app.supplierProduct)}`);
logger.debug(`[toApplicationDetails] applicationFunction: ${JSON.stringify(app.applicationFunction)}`);
logger.debug(`[toApplicationDetails] applicationManagementDynamicsFactor: ${JSON.stringify(app.applicationManagementDynamicsFactor)}`);
logger.debug(`[toApplicationDetails] applicationManagementComplexityFactor: ${JSON.stringify(app.applicationManagementComplexityFactor)}`);
logger.debug(`[toApplicationDetails] applicationManagementNumberOfUsers: ${JSON.stringify(app.applicationManagementNumberOfUsers)}`);
}
// Handle confluenceSpace - it can be a string (URL) or number (legacy), convert to string
const confluenceSpaceValue = app.confluenceSpace !== null && app.confluenceSpace !== undefined
? (typeof app.confluenceSpace === 'string' ? app.confluenceSpace : String(app.confluenceSpace))
@@ -303,56 +379,16 @@ async function toApplicationDetails(app: ApplicationComponent): Promise<Applicat
// Convert array of ObjectReferences to ReferenceValue[]
const applicationFunctions = toReferenceValues(app.applicationFunction);
return {
id: app.id,
key: app.objectKey,
name: app.label,
description: app.description || null,
status: (app.status || 'In Production') as ApplicationStatus,
searchReference: app.searchReference || null,
// Organization info
organisation: organisation?.name || null,
businessOwner: extractLabel(app.businessOwner),
systemOwner: extractLabel(app.systemOwner),
functionalApplicationManagement: app.functionalApplicationManagement || null,
technicalApplicationManagement: extractLabel(app.technicalApplicationManagement),
technicalApplicationManagementPrimary: extractDisplayValue(app.technicalApplicationManagementPrimary),
technicalApplicationManagementSecondary: extractDisplayValue(app.technicalApplicationManagementSecondary),
// Technical info
medischeTechniek: app.medischeTechniek || false,
technischeArchitectuur: app.technischeArchitectuurTA || null,
supplierProduct: extractLabel(app.supplierProduct),
// Classification
applicationFunctions,
businessImportance: businessImportance?.name || null,
businessImpactAnalyse,
hostingType,
// Application Management
governanceModel,
applicationType,
applicationSubteam,
applicationTeam,
dynamicsFactor,
complexityFactor,
numberOfUsers,
applicationManagementHosting,
applicationManagementTAM,
platform,
// Override
overrideFTE: app.applicationManagementOverrideFTE ?? null,
requiredEffortApplicationManagement: null,
// Enterprise Architect reference
reference: app.reference || null,
// Confluence Space (URL string)
confluenceSpace: confluenceSpaceValue,
};
// Convert supplier fields to ReferenceValue format
const [
supplierTechnical,
supplierImplementation,
supplierConsultancy,
] = await Promise.all([
toReferenceValue(app.supplierTechnical),
toReferenceValue(app.supplierImplementation),
toReferenceValue(app.supplierConsultancy),
]);
// Calculate data completeness percentage
// Convert ApplicationDetails-like structure to format expected by completeness calculator
@@ -399,6 +435,9 @@ async function toApplicationDetails(app: ApplicationComponent): Promise<Applicat
medischeTechniek: app.medischeTechniek || false,
technischeArchitectuur: app.technischeArchitectuurTA || null,
supplierProduct: extractLabel(app.supplierProduct),
supplierTechnical: supplierTechnical,
supplierImplementation: supplierImplementation,
supplierConsultancy: supplierConsultancy,
// Classification
applicationFunctions,
@@ -659,22 +698,31 @@ export const dataService = {
page: number = 1,
pageSize: number = 25
): Promise<SearchResult> {
if (!useJiraAssets) {
return mockDataService.searchApplications(filters, page, pageSize);
}
// Get all applications from cache
// Get all applications from cache (always from Jira Assets)
let apps = await cmdbService.getObjects<ApplicationComponent>('ApplicationComponent');
logger.debug(`DataService: Found ${apps.length} applications in cache for search`);
// If cache is empty, log a warning
if (apps.length === 0) {
logger.warn('DataService: Cache is empty - no applications found. A full sync may be needed.');
}
// Apply filters locally
if (filters.searchText) {
const search = filters.searchText.toLowerCase();
apps = apps.filter(app =>
app.label.toLowerCase().includes(search) ||
app.objectKey.toLowerCase().includes(search) ||
app.searchReference?.toLowerCase().includes(search) ||
app.description?.toLowerCase().includes(search)
);
if (filters.searchText && filters.searchText.trim()) {
const search = filters.searchText.toLowerCase().trim();
const beforeFilter = apps.length;
apps = apps.filter(app => {
const label = app.label?.toLowerCase() || '';
const objectKey = app.objectKey?.toLowerCase() || '';
const searchRef = app.searchReference?.toLowerCase() || '';
const description = app.description?.toLowerCase() || '';
return label.includes(search) ||
objectKey.includes(search) ||
searchRef.includes(search) ||
description.includes(search);
});
logger.debug(`DataService: Search filter "${filters.searchText}" reduced results from ${beforeFilter} to ${apps.length}`);
}
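The rewritten filter trims and lowercases the query once, then null-guards each searchable field. The matching predicate can be sketched standalone (hypothetical app shape):

```typescript
// Sketch of the null-safe search predicate above (illustrative AppLike shape).
interface AppLike {
  label?: string;
  objectKey?: string;
  searchReference?: string | null;
  description?: string | null;
}

function matchesSearch(app: AppLike, searchText: string): boolean {
  const search = searchText.toLowerCase().trim();
  if (!search) return true; // empty query matches everything
  return [app.label, app.objectKey, app.searchReference, app.description]
    .some(field => (field || '').toLowerCase().includes(search));
}
```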
if (filters.statuses && filters.statuses.length > 0) {
@@ -834,11 +882,14 @@ export const dataService = {
* Get application by ID (from cache)
*/
async getApplicationById(id: string): Promise<ApplicationDetails | null> {
if (!useJiraAssets) {
return mockDataService.getApplicationById(id);
// Try to get by ID first (handles both Jira object IDs and object keys)
let app = await cmdbService.getObject<ApplicationComponent>('ApplicationComponent', id);
// If not found by ID, try by object key (e.g., "ICMT-123" or numeric IDs that might be keys)
if (!app) {
app = await cmdbService.getObjectByKey<ApplicationComponent>('ApplicationComponent', id);
}
const app = await cmdbService.getObject<ApplicationComponent>('ApplicationComponent', id);
if (!app) return null;
return toApplicationDetails(app);
@@ -848,13 +899,18 @@ export const dataService = {
* Get application for editing (force refresh from Jira)
*/
async getApplicationForEdit(id: string): Promise<ApplicationDetails | null> {
if (!useJiraAssets) {
return mockDataService.getApplicationById(id);
}
const app = await cmdbService.getObject<ApplicationComponent>('ApplicationComponent', id, {
// Try to get by ID first (handles both Jira object IDs and object keys)
let app = await cmdbService.getObject<ApplicationComponent>('ApplicationComponent', id, {
forceRefresh: true,
});
// If not found by ID, try by object key (e.g., "ICMT-123" or numeric IDs that might be keys)
if (!app) {
app = await cmdbService.getObjectByKey<ApplicationComponent>('ApplicationComponent', id, {
forceRefresh: true,
});
}
if (!app) return null;
return toApplicationDetails(app);
@@ -884,11 +940,7 @@ export const dataService = {
): Promise<UpdateResult> {
logger.info(`dataService.updateApplication called for ${id}`);
if (!useJiraAssets) {
const success = await mockDataService.updateApplication(id, updates);
return { success };
}
// Always update via Jira Assets API
// Convert to CMDBService format
// IMPORTANT: For reference fields, we pass ObjectReference objects (with objectKey)
// because buildAttributeValues in cmdbService expects to extract objectKey for Jira API
@@ -978,7 +1030,7 @@ export const dataService = {
// ===========================================================================
async getDynamicsFactors(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getDynamicsFactors();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<ApplicationManagementDynamicsFactor>('ApplicationManagementDynamicsFactor');
return items.map(item => ({
objectId: item.id,
@@ -991,7 +1043,7 @@ export const dataService = {
},
async getComplexityFactors(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getComplexityFactors();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<ApplicationManagementComplexityFactor>('ApplicationManagementComplexityFactor');
return items.map(item => ({
objectId: item.id,
@@ -1004,7 +1056,7 @@ export const dataService = {
},
async getNumberOfUsers(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getNumberOfUsers();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<ApplicationManagementNumberOfUsers>('ApplicationManagementNumberOfUsers');
return items.map(item => ({
objectId: item.id,
@@ -1017,7 +1069,7 @@ export const dataService = {
},
async getGovernanceModels(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getGovernanceModels();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<IctGovernanceModel>('IctGovernanceModel');
return items.map(item => ({
objectId: item.id,
@@ -1030,14 +1082,16 @@ export const dataService = {
},
async getOrganisations(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getOrganisations();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<Organisation>('Organisation');
logger.debug(`DataService: Found ${items.length} organisations in cache`);
return items.map(item => ({ objectId: item.id, key: item.objectKey, name: item.label }));
},
async getHostingTypes(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getHostingTypes();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<HostingType>('HostingType');
logger.debug(`DataService: Found ${items.length} hosting types in cache`);
return items.map(item => ({
objectId: item.id,
key: item.objectKey,
@@ -1047,7 +1101,7 @@ export const dataService = {
},
async getBusinessImpactAnalyses(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getBusinessImpactAnalyses();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<BusinessImpactAnalyse>('BusinessImpactAnalyse');
return items.map(item => ({
objectId: item.id,
@@ -1059,7 +1113,7 @@ export const dataService = {
},
async getApplicationManagementHosting(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getApplicationManagementHosting();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<ApplicationManagementHosting>('ApplicationManagementHosting');
return items.map(item => ({
objectId: item.id,
@@ -1070,7 +1124,7 @@ export const dataService = {
},
async getApplicationManagementTAM(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getApplicationManagementTAM();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<ApplicationManagementTam>('ApplicationManagementTam');
return items.map(item => ({
objectId: item.id,
@@ -1081,7 +1135,7 @@ export const dataService = {
},
async getApplicationFunctions(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getApplicationFunctions();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<ApplicationFunction>('ApplicationFunction');
return items.map(item => ({
objectId: item.id,
@@ -1098,7 +1152,7 @@ export const dataService = {
},
async getApplicationFunctionCategories(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getApplicationFunctionCategories();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<ApplicationFunctionCategory>('ApplicationFunctionCategory');
return items.map(item => ({
objectId: item.id,
@@ -1109,19 +1163,17 @@ export const dataService = {
},
async getApplicationSubteams(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return []; // Mock mode: no subteams
// Use jiraAssetsService directly as schema doesn't include this object type
// Always get from Jira Assets API (schema doesn't include this object type)
return jiraAssetsService.getApplicationSubteams();
},
async getApplicationTeams(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return []; // Mock mode: no teams
// Use jiraAssetsService directly as schema doesn't include this object type
// Always get from Jira Assets API (schema doesn't include this object type)
return jiraAssetsService.getApplicationTeams();
},
async getSubteamToTeamMapping(): Promise<Record<string, ReferenceValue | null>> {
if (!useJiraAssets) return {}; // Mock mode: no mapping
// Always get from Jira Assets API
// Convert Map to plain object for JSON serialization
const mapping = await jiraAssetsService.getSubteamToTeamMapping();
const result: Record<string, ReferenceValue | null> = {};
@@ -1132,7 +1184,7 @@ export const dataService = {
},
async getApplicationTypes(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getApplicationTypes();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<ApplicationManagementApplicationType>('ApplicationManagementApplicationType');
return items.map(item => ({
objectId: item.id,
@@ -1143,8 +1195,9 @@ export const dataService = {
},
async getBusinessImportance(): Promise<ReferenceValue[]> {
if (!useJiraAssets) return mockDataService.getBusinessImportance();
// Always get from Jira Assets cache
const items = await cmdbService.getObjects<BusinessImportance>('BusinessImportance');
logger.debug(`DataService: Found ${items.length} business importance values in cache`);
return items.map(item => ({ objectId: item.id, key: item.objectKey, name: item.label }));
},
@@ -1153,8 +1206,7 @@ export const dataService = {
// ===========================================================================
async getStats(includeDistributions: boolean = true) {
if (!useJiraAssets) return mockDataService.getStats();
// Always get from Jira Assets cache
const allApps = await cmdbService.getObjects<ApplicationComponent>('ApplicationComponent');
// Statuses to exclude for most metrics
@@ -1231,10 +1283,342 @@ export const dataService = {
},
async getTeamDashboardData(excludedStatuses: ApplicationStatus[] = []): Promise<TeamDashboardData> {
if (!useJiraAssets) return mockDataService.getTeamDashboardData(excludedStatuses);
// Load from database cache instead of API for better performance
logger.info(`Loading team dashboard data from database cache (excluding: ${excludedStatuses.join(', ')})`);
// Use jiraAssetsService directly as it has proper Team/Subteam field parsing
try {
// Get all ApplicationComponents from database cache
const allApplications = await cmdbService.getObjects<ApplicationComponent>('ApplicationComponent');
logger.info(`Loaded ${allApplications.length} applications from database cache`);
// Convert to ApplicationListItem
const applicationListItems = await Promise.all(
allApplications.map(app => toApplicationListItem(app))
);
// Filter out excluded statuses
const filteredApplications = excludedStatuses.length > 0
? applicationListItems.filter(app => !app.status || !excludedStatuses.includes(app.status))
: applicationListItems;
logger.info(`After status filter: ${filteredApplications.length} applications (excluded: ${excludedStatuses.join(', ')})`);
// Separate into Platforms, Workloads, and regular applications
const platforms: ApplicationListItem[] = [];
const workloads: ApplicationListItem[] = [];
const regularApplications: ApplicationListItem[] = [];
for (const app of filteredApplications) {
const isPlatform = app.applicationType?.name === 'Platform';
const isWorkload = app.platform !== null;
if (isPlatform) {
platforms.push(app);
} else if (isWorkload) {
workloads.push(app);
} else {
regularApplications.push(app);
}
}
logger.info(`Identified ${platforms.length} platforms, ${workloads.length} workloads, ${regularApplications.length} regular applications`);
// Group workloads by their platform
const workloadsByPlatform = new Map<string, ApplicationListItem[]>();
for (const workload of workloads) {
const platformId = workload.platform!.objectId;
if (!workloadsByPlatform.has(platformId)) {
workloadsByPlatform.set(platformId, []);
}
workloadsByPlatform.get(platformId)!.push(workload);
}
// Helper functions for FTE calculations
const getEffectiveFTE = (app: ApplicationListItem): number => {
return app.overrideFTE !== null && app.overrideFTE !== undefined
? app.overrideFTE
: (app.requiredEffortApplicationManagement || 0);
};
const getMinFTE = (app: ApplicationListItem): number => {
if (app.overrideFTE !== null && app.overrideFTE !== undefined) {
return app.overrideFTE;
}
return app.minFTE ?? app.requiredEffortApplicationManagement ?? 0;
};
const getMaxFTE = (app: ApplicationListItem): number => {
if (app.overrideFTE !== null && app.overrideFTE !== undefined) {
return app.overrideFTE;
}
return app.maxFTE ?? app.requiredEffortApplicationManagement ?? 0;
};
// Build PlatformWithWorkloads structures
const platformsWithWorkloads: PlatformWithWorkloads[] = [];
for (const platform of platforms) {
const platformWorkloads = workloadsByPlatform.get(platform.id) || [];
const platformEffort = getEffectiveFTE(platform);
const workloadsEffort = platformWorkloads.reduce((sum, w) => sum + getEffectiveFTE(w), 0);
platformsWithWorkloads.push({
platform,
workloads: platformWorkloads,
platformEffort,
workloadsEffort,
totalEffort: platformEffort + workloadsEffort,
});
}
// Helper function to calculate subteam KPIs
const calculateSubteamKPIs = (
regularApps: ApplicationListItem[],
platformsList: PlatformWithWorkloads[]
) => {
const regularEffort = regularApps.reduce((sum, app) => sum + getEffectiveFTE(app), 0);
const platformsEffort = platformsList.reduce((sum, p) => sum + p.totalEffort, 0);
const totalEffort = regularEffort + platformsEffort;
const regularMinEffort = regularApps.reduce((sum, app) => sum + getMinFTE(app), 0);
const regularMaxEffort = regularApps.reduce((sum, app) => sum + getMaxFTE(app), 0);
const platformsMinEffort = platformsList.reduce((sum, p) => {
const platformMin = getMinFTE(p.platform);
const workloadsMin = p.workloads.reduce((s, w) => s + getMinFTE(w), 0);
return sum + platformMin + workloadsMin;
}, 0);
const platformsMaxEffort = platformsList.reduce((sum, p) => {
const platformMax = getMaxFTE(p.platform);
const workloadsMax = p.workloads.reduce((s, w) => s + getMaxFTE(w), 0);
return sum + platformMax + workloadsMax;
}, 0);
const minEffort = regularMinEffort + platformsMinEffort;
const maxEffort = regularMaxEffort + platformsMaxEffort;
const platformsCount = platformsList.length;
const workloadsCount = platformsList.reduce((sum, p) => sum + p.workloads.length, 0);
const applicationCount = regularApps.length + platformsCount + workloadsCount;
const byGovernanceModel: Record<string, number> = {};
for (const app of regularApps) {
const govModel = app.governanceModel?.name || 'Niet ingesteld';
byGovernanceModel[govModel] = (byGovernanceModel[govModel] || 0) + 1;
}
for (const pwl of platformsList) {
const govModel = pwl.platform.governanceModel?.name || 'Niet ingesteld';
byGovernanceModel[govModel] = (byGovernanceModel[govModel] || 0) + 1;
for (const workload of pwl.workloads) {
const workloadGovModel = workload.governanceModel?.name || 'Niet ingesteld';
byGovernanceModel[workloadGovModel] = (byGovernanceModel[workloadGovModel] || 0) + 1;
}
}
return { totalEffort, minEffort, maxEffort, applicationCount, byGovernanceModel };
};
// HIERARCHICAL GROUPING: Team -> Subteam -> Applications
type SubteamData = {
subteam: ReferenceValue | null;
regular: ApplicationListItem[];
platforms: PlatformWithWorkloads[];
};
type TeamData = {
team: ReferenceValue | null;
subteams: Map<string, SubteamData>;
};
// Load Subteam -> Team mapping (still from API but cached, so should be fast)
const subteamToTeamMap = await jiraAssetsService.getSubteamToTeamMapping();
logger.info(`Loaded ${subteamToTeamMap.size} subteam->team mappings`);
const teamMap = new Map<string, TeamData>();
const unassignedData: SubteamData = {
subteam: null,
regular: [],
platforms: [],
};
// Helper to get Team from Subteam
const getTeamForSubteam = (subteam: ReferenceValue | null): ReferenceValue | null => {
if (!subteam) return null;
return subteamToTeamMap.get(subteam.objectId) || null;
};
// Group regular applications by Team -> Subteam
for (const app of regularApplications) {
const subteam = app.applicationSubteam;
const team = getTeamForSubteam(subteam);
if (team) {
const teamId = team.objectId;
if (!teamMap.has(teamId)) {
teamMap.set(teamId, { team, subteams: new Map() });
}
const teamData = teamMap.get(teamId)!;
const subteamId = subteam?.objectId || 'no-subteam';
if (!teamData.subteams.has(subteamId)) {
teamData.subteams.set(subteamId, {
subteam: subteam || null,
regular: [],
platforms: [],
});
}
teamData.subteams.get(subteamId)!.regular.push(app);
} else if (subteam) {
// Has subteam but no team - put under a virtual "Geen Team" team
const noTeamId = 'no-team';
if (!teamMap.has(noTeamId)) {
teamMap.set(noTeamId, {
team: { objectId: noTeamId, key: 'NO-TEAM', name: 'Geen Team' },
subteams: new Map()
});
}
const teamData = teamMap.get(noTeamId)!;
const subteamId = subteam.objectId;
if (!teamData.subteams.has(subteamId)) {
teamData.subteams.set(subteamId, {
subteam: subteam,
regular: [],
platforms: [],
});
}
teamData.subteams.get(subteamId)!.regular.push(app);
} else {
// No subteam assigned - goes to unassigned
unassignedData.regular.push(app);
}
}
// Group platforms by Team -> Subteam
for (const pwl of platformsWithWorkloads) {
const platform = pwl.platform;
const subteam = platform.applicationSubteam;
const team = getTeamForSubteam(subteam);
if (team) {
const teamId = team.objectId;
if (!teamMap.has(teamId)) {
teamMap.set(teamId, { team, subteams: new Map() });
}
const teamData = teamMap.get(teamId)!;
const subteamId = subteam?.objectId || 'no-subteam';
if (!teamData.subteams.has(subteamId)) {
teamData.subteams.set(subteamId, {
subteam: subteam || null,
regular: [],
platforms: [],
});
}
teamData.subteams.get(subteamId)!.platforms.push(pwl);
} else if (subteam) {
// Has subteam but no team - put under "Geen Team"
const noTeamId = 'no-team';
if (!teamMap.has(noTeamId)) {
teamMap.set(noTeamId, {
team: { objectId: noTeamId, key: 'NO-TEAM', name: 'Geen Team' },
subteams: new Map()
});
}
const teamData = teamMap.get(noTeamId)!;
const subteamId = subteam.objectId;
if (!teamData.subteams.has(subteamId)) {
teamData.subteams.set(subteamId, {
subteam: subteam,
regular: [],
platforms: [],
});
}
teamData.subteams.get(subteamId)!.platforms.push(pwl);
} else {
// No subteam assigned - goes to unassigned
unassignedData.platforms.push(pwl);
}
}
logger.info(`Team dashboard: grouped ${regularApplications.length} regular apps and ${platformsWithWorkloads.length} platforms into ${teamMap.size} teams`);
// Build the hierarchical result structure
const teams: TeamDashboardTeam[] = [];
// Process teams in alphabetical order
const sortedTeamIds = Array.from(teamMap.keys()).sort((a, b) => {
const teamA = teamMap.get(a)!.team?.name || '';
const teamB = teamMap.get(b)!.team?.name || '';
return teamA.localeCompare(teamB, 'nl', { sensitivity: 'base' });
});
for (const teamId of sortedTeamIds) {
const teamData = teamMap.get(teamId)!;
const fullTeam = teamData.team;
const subteams: TeamDashboardSubteam[] = [];
// Sort subteams alphabetically (with "no-subteam" at the end)
const sortedSubteamEntries = Array.from(teamData.subteams.entries()).sort((a, b) => {
if (a[0] === 'no-subteam') return 1;
if (b[0] === 'no-subteam') return -1;
const nameA = a[1].subteam?.name || '';
const nameB = b[1].subteam?.name || '';
return nameA.localeCompare(nameB, 'nl', { sensitivity: 'base' });
});
for (const [subteamId, subteamData] of sortedSubteamEntries) {
const kpis = calculateSubteamKPIs(subteamData.regular, subteamData.platforms);
subteams.push({
subteam: subteamData.subteam,
applications: subteamData.regular,
platforms: subteamData.platforms,
...kpis,
});
}
// Aggregate team KPIs from all subteams
const teamTotalEffort = subteams.reduce((sum, s) => sum + s.totalEffort, 0);
const teamMinEffort = subteams.reduce((sum, s) => sum + s.minEffort, 0);
const teamMaxEffort = subteams.reduce((sum, s) => sum + s.maxEffort, 0);
const teamApplicationCount = subteams.reduce((sum, s) => sum + s.applicationCount, 0);
const teamByGovernanceModel: Record<string, number> = {};
for (const subteam of subteams) {
for (const [model, count] of Object.entries(subteam.byGovernanceModel)) {
teamByGovernanceModel[model] = (teamByGovernanceModel[model] || 0) + count;
}
}
teams.push({
team: fullTeam,
subteams,
totalEffort: teamTotalEffort,
minEffort: teamMinEffort,
maxEffort: teamMaxEffort,
applicationCount: teamApplicationCount,
byGovernanceModel: teamByGovernanceModel,
});
}
// Calculate unassigned KPIs
const unassignedKPIs = calculateSubteamKPIs(unassignedData.regular, unassignedData.platforms);
const result: TeamDashboardData = {
teams,
unassigned: {
subteam: null,
applications: unassignedData.regular,
platforms: unassignedData.platforms,
...unassignedKPIs,
},
};
logger.info(`Team dashboard data loaded from database: ${teams.length} teams, ${unassignedData.regular.length + unassignedData.platforms.length} unassigned apps`);
return result;
} catch (error) {
logger.error('Failed to get team dashboard data from database', error);
// Fallback to API if database fails
logger.warn('Falling back to API for team dashboard data');
return jiraAssetsService.getTeamDashboardData(excludedStatuses);
}
},
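The FTE helpers inside `getTeamDashboardData` apply one rule: an explicit `overrideFTE` pins effective, min, and max effort to the same value, while min/max otherwise fall back to the calculated effort. A sketch of that rule, using `??` in place of the explicit null/undefined checks and a hypothetical minimal field shape:

```typescript
// Hypothetical minimal shape of the FTE-bearing fields on ApplicationListItem
interface FteFields {
  overrideFTE?: number | null;
  requiredEffortApplicationManagement?: number | null;
  minFTE?: number | null;
  maxFTE?: number | null;
}

// An override pins all three values; otherwise min/max fall back to the calculated effort
function getEffectiveFTE(app: FteFields): number {
  return app.overrideFTE ?? app.requiredEffortApplicationManagement ?? 0;
}
function getMinFTE(app: FteFields): number {
  return app.overrideFTE ?? app.minFTE ?? app.requiredEffortApplicationManagement ?? 0;
}
function getMaxFTE(app: FteFields): number {
  return app.overrideFTE ?? app.maxFTE ?? app.requiredEffortApplicationManagement ?? 0;
}
```

Note that `??` (unlike `||`) treats an override of `0` as a real value, matching the `!== null && !== undefined` checks in the diff.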
/**
@@ -1253,7 +1637,7 @@ export const dataService = {
applicationCount: number;
}>;
}> {
// For mock data, use the same implementation (cmdbService routes to mock data when useJiraAssets is false)
// Always get from Jira Assets cache
// Get all applications from cache to access all fields including BIA
let apps = await cmdbService.getObjects<ApplicationComponent>('ApplicationComponent');
@@ -1421,13 +1805,13 @@ export const dataService = {
// Utility
// ===========================================================================
isUsingJiraAssets(): boolean {
return useJiraAssets;
async isUsingJiraAssets(): Promise<boolean> {
// Always returns true - mock data removed, only Jira Assets is used
return true;
},
async testConnection(): Promise<boolean> {
if (!useJiraAssets) return true;
// Only test connection if token is configured
// Always test Jira Assets connection (requires token)
if (!jiraAssetsClient.hasToken()) {
return false;
}
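The Team → Subteam grouping in `getTeamDashboardData` above repeats the same bucket-building logic for regular applications and for platforms, including the virtual "Geen Team" bucket for subteams whose team is unknown. The core of it can be factored into one generic helper; `groupByTeam` and its types are illustrative, not the repo's actual API:

```typescript
type Ref = { objectId: string; name: string };

// Group items into Team -> Subteam buckets; items without a subteam go to "unassigned",
// and subteams with no known team land under a virtual "Geen Team" bucket.
function groupByTeam<T extends { subteam: Ref | null }>(
  items: T[],
  teamOf: (subteamId: string) => Ref | null,
): { teams: Map<string, { team: Ref; bySubteam: Map<string, T[]> }>; unassigned: T[] } {
  const teams = new Map<string, { team: Ref; bySubteam: Map<string, T[]> }>();
  const unassigned: T[] = [];
  for (const item of items) {
    if (!item.subteam) { unassigned.push(item); continue; }
    const team = teamOf(item.subteam.objectId) ?? { objectId: 'no-team', name: 'Geen Team' };
    let entry = teams.get(team.objectId);
    if (!entry) { entry = { team, bySubteam: new Map() }; teams.set(team.objectId, entry); }
    const list = entry.bySubteam.get(item.subteam.objectId) ?? [];
    list.push(item);
    entry.bySubteam.set(item.subteam.objectId, list);
  }
  return { teams, unassigned };
}
```

Calling this twice (once with regular apps, once with platforms) would replace the two near-identical grouping loops in the diff.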


@@ -27,13 +27,16 @@ export function createDatabaseAdapter(dbType?: string, dbPath?: string, allowClo
// Try to construct from individual components
const host = process.env.DATABASE_HOST || 'localhost';
const port = process.env.DATABASE_PORT || '5432';
const name = process.env.DATABASE_NAME || 'cmdb';
const name = process.env.DATABASE_NAME || 'cmdb_insight';
const user = process.env.DATABASE_USER || 'cmdb';
const password = process.env.DATABASE_PASSWORD || '';
const ssl = process.env.DATABASE_SSL === 'true' ? '?sslmode=require' : '';
// Azure PostgreSQL requires SSL - always use sslmode=require for Azure
const isAzure = host.includes('.postgres.database.azure.com');
const ssl = (process.env.DATABASE_SSL === 'true' || isAzure) ? '?sslmode=require' : '';
const constructedUrl = `postgresql://${user}:${password}@${host}:${port}/${name}${ssl}`;
logger.info('Creating PostgreSQL adapter with constructed connection string');
logger.info(`Database: ${name}, SSL: ${ssl ? 'required' : 'not required'}`);
return new PostgresAdapter(constructedUrl, allowClose);
}
@@ -48,33 +51,12 @@ export function createDatabaseAdapter(dbType?: string, dbPath?: string, allowClo
}
/**
* Create a database adapter for the classifications database
* Create a database adapter for classifications and session state
*
* Uses the same database as the main cache. All data (CMDB cache,
* classification history, and session state) is stored in a single database.
*/
export function createClassificationsDatabaseAdapter(): DatabaseAdapter {
const type = process.env.DATABASE_TYPE || 'sqlite';
const databaseUrl = process.env.CLASSIFICATIONS_DATABASE_URL || process.env.DATABASE_URL;
if (type === 'postgres' || type === 'postgresql') {
if (!databaseUrl) {
// Try to construct from individual components
const host = process.env.DATABASE_HOST || 'localhost';
const port = process.env.DATABASE_PORT || '5432';
const name = process.env.CLASSIFICATIONS_DATABASE_NAME || process.env.DATABASE_NAME || 'cmdb';
const user = process.env.DATABASE_USER || 'cmdb';
const password = process.env.DATABASE_PASSWORD || '';
const ssl = process.env.DATABASE_SSL === 'true' ? '?sslmode=require' : '';
const constructedUrl = `postgresql://${user}:${password}@${host}:${port}/${name}${ssl}`;
logger.info('Creating PostgreSQL adapter for classifications with constructed connection string');
return new PostgresAdapter(constructedUrl);
}
logger.info('Creating PostgreSQL adapter for classifications');
return new PostgresAdapter(databaseUrl);
}
// Default to SQLite
const defaultPath = join(__dirname, '../../data/classifications.db');
logger.info(`Creating SQLite adapter for classifications with path: ${defaultPath}`);
return new SqliteAdapter(defaultPath);
// Always use the same database adapter as the main cache
return createDatabaseAdapter();
}
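The factory's Azure handling condenses to one rule: any host under `.postgres.database.azure.com` forces `sslmode=require`, regardless of `DATABASE_SSL`. A self-contained sketch of that URL construction (the env var names match the diff; the function itself is illustrative):

```typescript
function buildPostgresUrl(env: Record<string, string | undefined>): string {
  const host = env.DATABASE_HOST ?? 'localhost';
  const port = env.DATABASE_PORT ?? '5432';
  const name = env.DATABASE_NAME ?? 'cmdb_insight';
  const user = env.DATABASE_USER ?? 'cmdb';
  const password = env.DATABASE_PASSWORD ?? '';
  // Azure Database for PostgreSQL only accepts TLS connections, so SSL is forced for Azure hosts
  const isAzure = host.includes('.postgres.database.azure.com');
  const ssl = env.DATABASE_SSL === 'true' || isAzure ? '?sslmode=require' : '';
  return `postgresql://${user}:${password}@${host}:${port}/${name}${ssl}`;
}
```

Taking `env` as a parameter instead of reading `process.env` directly keeps the logic testable without mutating the real environment.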


@@ -0,0 +1,124 @@
/**
* Fix UNIQUE constraints on object_types table
*
* Removes the old UNIQUE constraint on type_name and adds a new UNIQUE(schema_id, type_name)
* constraint, so the same type_name can exist in different schemas.
*/
import { logger } from '../logger.js';
import { normalizedCacheStore } from '../normalizedCacheStore.js';
import type { DatabaseAdapter } from './interface.js';
export async function fixObjectTypesConstraints(): Promise<void> {
const db = (normalizedCacheStore as any).db as DatabaseAdapter;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
logger.info('Migration: Fixing UNIQUE constraints on object_types table...');
try {
if (db.isPostgres) {
// Check if old constraint exists
const oldConstraintExists = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname = 'object_types_type_name_key'
`);
if (oldConstraintExists && oldConstraintExists.count > 0) {
logger.info('Migration: Dropping old UNIQUE constraint on type_name...');
await db.execute(`ALTER TABLE object_types DROP CONSTRAINT IF EXISTS object_types_type_name_key`);
}
// Check if new constraint exists
const newConstraintExists = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname = 'object_types_schema_id_type_name_key'
`);
if (!newConstraintExists || newConstraintExists.count === 0) {
logger.info('Migration: Adding UNIQUE constraint on (schema_id, type_name)...');
try {
await db.execute(`
ALTER TABLE object_types
ADD CONSTRAINT object_types_schema_id_type_name_key UNIQUE (schema_id, type_name)
`);
} catch (error: any) {
// If constraint already exists or there are duplicates, log and continue
if (error.message && error.message.includes('already exists')) {
logger.debug('Migration: Constraint already exists, skipping');
} else if (error.message && error.message.includes('duplicate key')) {
logger.warn('Migration: Duplicate (schema_id, type_name) found - this may need manual cleanup');
// Don't throw - allow the application to continue
} else {
throw error;
}
}
} else {
logger.debug('Migration: New UNIQUE constraint already exists');
}
} else {
// SQLite: UNIQUE constraints are part of table definition
// We can't easily modify them, but the schema definition should handle it
logger.debug('Migration: SQLite UNIQUE constraints are handled in table definition');
}
// Step 2: Remove foreign key constraints that reference object_types(type_name)
logger.info('Migration: Removing foreign key constraints on object_types(type_name)...');
try {
if (db.isPostgres) {
// Check and drop foreign keys from attributes table
const attrFkExists = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname LIKE 'attributes_object_type_name_fkey%'
`);
if (attrFkExists && attrFkExists.count > 0) {
logger.info('Migration: Dropping foreign key from attributes table...');
await db.execute(`ALTER TABLE attributes DROP CONSTRAINT IF EXISTS attributes_object_type_name_fkey`);
}
// Check and drop foreign keys from objects table
const objFkExists = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname LIKE 'objects_object_type_name_fkey%'
`);
if (objFkExists && objFkExists.count > 0) {
logger.info('Migration: Dropping foreign key from objects table...');
await db.execute(`ALTER TABLE objects DROP CONSTRAINT IF EXISTS objects_object_type_name_fkey`);
}
// Check and drop foreign keys from schema_mappings table
const mappingFkExists = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname LIKE 'schema_mappings_object_type_name_fkey%'
`);
if (mappingFkExists && mappingFkExists.count > 0) {
logger.info('Migration: Dropping foreign key from schema_mappings table...');
await db.execute(`ALTER TABLE schema_mappings DROP CONSTRAINT IF EXISTS schema_mappings_object_type_name_fkey`);
}
} else {
// SQLite: Foreign keys are part of table definition
// We can't easily drop them, but the new schema definition should handle it
logger.debug('Migration: SQLite foreign keys are handled in table definition');
}
} catch (error) {
logger.warn('Migration: Could not remove foreign key constraints (may not exist)', error);
// Don't throw - allow the application to continue
}
logger.info('Migration: UNIQUE constraints and foreign keys fix completed');
} catch (error) {
logger.warn('Migration: Could not fix constraints (may already be correct)', error);
// Don't throw - allow the application to continue
}
}
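The migration's re-runnability hinges on checking `pg_constraint` before each DDL statement. That check-then-drop step can be isolated as a small helper; `dropConstraintIfExists` and the trimmed `Db` interface here are hypothetical distillations, and the real adapter's `queryOne`/`execute` signatures may differ:

```typescript
// Trimmed-down adapter surface, enough for the pattern
interface Db {
  queryOne<T>(sql: string, params?: unknown[]): Promise<T | null>;
  execute(sql: string): Promise<void>;
}

// Drop a constraint only if pg_constraint still lists it; safe to run repeatedly
async function dropConstraintIfExists(db: Db, table: string, name: string): Promise<boolean> {
  const row = await db.queryOne<{ count: number }>(
    `SELECT COUNT(*) as count FROM pg_constraint WHERE conname = ?`,
    [name],
  );
  if (!row || row.count === 0) return false; // already gone: no-op
  await db.execute(`ALTER TABLE ${table} DROP CONSTRAINT IF EXISTS ${name}`);
  return true;
}
```

The `IF EXISTS` clause already makes the DDL tolerant; the upfront check mainly avoids noisy DDL on every startup and lets the caller log only when something actually changed.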


@@ -40,4 +40,9 @@ export interface DatabaseAdapter {
* Get database size in bytes (if applicable)
*/
getSizeBytes?(): Promise<number>;
/**
* Indicates if this is a PostgreSQL adapter
*/
isPostgres?: boolean;
}


@@ -0,0 +1,418 @@
/**
* Migration script to migrate from configured_object_types to normalized schema structure
*
* This script:
* 1. Creates schemas table if it doesn't exist
* 2. Migrates unique schemas from configured_object_types to schemas
* 3. Adds schema_id and enabled columns to object_types if they don't exist
* 4. Migrates object types from configured_object_types to object_types with schema_id FK
* 5. Drops configured_object_types table after successful migration
*/
import { logger } from '../logger.js';
import { normalizedCacheStore } from '../normalizedCacheStore.js';
import type { DatabaseAdapter } from './interface.js';
export async function migrateToNormalizedSchema(): Promise<void> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
logger.info('Migration: Starting migration to normalized schema structure...');
try {
await db.transaction(async (txDb: DatabaseAdapter) => {
// Step 1: Check if configured_object_types table exists
let configuredTableExists = false;
try {
if (txDb.isPostgres) {
const result = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM information_schema.tables
WHERE table_schema = 'public' AND table_name = 'configured_object_types'
`);
configuredTableExists = (result?.count || 0) > 0;
} else {
const result = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM sqlite_master
WHERE type='table' AND name='configured_object_types'
`);
configuredTableExists = (result?.count || 0) > 0;
}
} catch (error) {
logger.debug('Migration: configured_object_types table check failed (may not exist)', error);
}
if (!configuredTableExists) {
logger.info('Migration: configured_object_types table does not exist, skipping migration');
return;
}
// Step 2: Check if schemas table exists, create if not
let schemasTableExists = false;
try {
if (txDb.isPostgres) {
const result = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM information_schema.tables
WHERE table_schema = 'public' AND table_name = 'schemas'
`);
schemasTableExists = (result?.count || 0) > 0;
} else {
const result = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM sqlite_master
WHERE type='table' AND name='schemas'
`);
schemasTableExists = (result?.count || 0) > 0;
}
} catch (error) {
logger.debug('Migration: schemas table check failed', error);
}
if (!schemasTableExists) {
logger.info('Migration: Creating schemas table...');
if (txDb.isPostgres) {
await txDb.execute(`
CREATE TABLE IF NOT EXISTS schemas (
id SERIAL PRIMARY KEY,
jira_schema_id TEXT NOT NULL UNIQUE,
name TEXT NOT NULL,
description TEXT,
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
)
`);
await txDb.execute(`
CREATE INDEX IF NOT EXISTS idx_schemas_jira_schema_id ON schemas(jira_schema_id)
`);
await txDb.execute(`
CREATE INDEX IF NOT EXISTS idx_schemas_name ON schemas(name)
`);
} else {
await txDb.execute(`
CREATE TABLE IF NOT EXISTS schemas (
id INTEGER PRIMARY KEY AUTOINCREMENT,
jira_schema_id TEXT NOT NULL UNIQUE,
name TEXT NOT NULL,
description TEXT,
discovered_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
)
`);
await txDb.execute(`
CREATE INDEX IF NOT EXISTS idx_schemas_jira_schema_id ON schemas(jira_schema_id)
`);
await txDb.execute(`
CREATE INDEX IF NOT EXISTS idx_schemas_name ON schemas(name)
`);
}
}
// Step 3: Migrate unique schemas from configured_object_types to schemas
logger.info('Migration: Migrating schemas from configured_object_types...');
const schemaRows = await txDb.query<{
schema_id: string;
schema_name: string;
min_discovered_at: string;
max_updated_at: string;
}>(`
SELECT
schema_id,
schema_name,
MIN(discovered_at) as min_discovered_at,
MAX(updated_at) as max_updated_at
FROM configured_object_types
GROUP BY schema_id, schema_name
`);
for (const schemaRow of schemaRows) {
// The same upsert works on both Postgres and SQLite (ON CONFLICT ... DO UPDATE with excluded.*),
// so no dialect branch is needed here
await txDb.execute(`
INSERT INTO schemas (jira_schema_id, name, description, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(jira_schema_id) DO UPDATE SET
name = excluded.name,
updated_at = excluded.updated_at
`, [
schemaRow.schema_id,
schemaRow.schema_name,
null,
schemaRow.min_discovered_at,
schemaRow.max_updated_at,
]);
}
logger.info(`Migration: Migrated ${schemaRows.length} schemas`);
// Step 4: Check if object_types has schema_id and enabled columns
let hasSchemaId = false;
let hasEnabled = false;
try {
if (txDb.isPostgres) {
const columns = await txDb.query<{ column_name: string }>(`
SELECT column_name
FROM information_schema.columns
WHERE table_schema = 'public' AND table_name = 'object_types'
`);
hasSchemaId = columns.some((c: { column_name: string }) => c.column_name === 'schema_id');
hasEnabled = columns.some((c: { column_name: string }) => c.column_name === 'enabled');
} else {
const tableInfo = await txDb.query<{ name: string }>(`
PRAGMA table_info(object_types)
`);
hasSchemaId = tableInfo.some((c: { name: string }) => c.name === 'schema_id');
hasEnabled = tableInfo.some((c: { name: string }) => c.name === 'enabled');
}
} catch (error) {
logger.warn('Migration: Could not check object_types columns', error);
}
// Step 5: Add schema_id and enabled columns if they don't exist
if (!hasSchemaId) {
logger.info('Migration: Adding schema_id column to object_types...');
if (txDb.isPostgres) {
await txDb.execute(`
ALTER TABLE object_types
ADD COLUMN schema_id INTEGER REFERENCES schemas(id) ON DELETE CASCADE
`);
} else {
// SQLite doesn't support ALTER TABLE ADD COLUMN with FK, so we'll handle it differently
// For now, just add the column without FK constraint
await txDb.execute(`
ALTER TABLE object_types
ADD COLUMN schema_id INTEGER
`);
}
}
if (!hasEnabled) {
logger.info('Migration: Adding enabled column to object_types...');
if (txDb.isPostgres) {
await txDb.execute(`
ALTER TABLE object_types
ADD COLUMN enabled BOOLEAN NOT NULL DEFAULT FALSE
`);
} else {
await txDb.execute(`
ALTER TABLE object_types
ADD COLUMN enabled INTEGER NOT NULL DEFAULT 0
`);
}
}
// Step 6: Migrate object types from configured_object_types to object_types
logger.info('Migration: Migrating object types from configured_object_types...');
const configuredTypes = await txDb.query<{
schema_id: string;
object_type_id: number;
object_type_name: string;
display_name: string;
description: string | null;
object_count: number;
enabled: boolean | number;
discovered_at: string;
updated_at: string;
}>(`
SELECT
schema_id,
object_type_id,
object_type_name,
display_name,
description,
object_count,
enabled,
discovered_at,
updated_at
FROM configured_object_types
`);
let migratedCount = 0;
for (const configuredType of configuredTypes) {
// Get schema_id (FK) from schemas table
const schemaRow = await txDb.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[configuredType.schema_id]
);
if (!schemaRow) {
logger.warn(`Migration: Schema ${configuredType.schema_id} not found, skipping object type ${configuredType.object_type_name}`);
continue;
}
// Check if object type already exists in object_types
const existingType = await txDb.queryOne<{ jira_type_id: number }>(
`SELECT jira_type_id FROM object_types WHERE jira_type_id = ?`,
[configuredType.object_type_id]
);
if (existingType) {
// Update existing object type with schema_id and enabled
if (txDb.isPostgres) {
await txDb.execute(`
UPDATE object_types
SET
schema_id = ?,
enabled = ?,
display_name = COALESCE(display_name, ?),
description = COALESCE(description, ?),
object_count = COALESCE(object_count, ?),
updated_at = ?
WHERE jira_type_id = ?
`, [
schemaRow.id,
typeof configuredType.enabled === 'boolean' ? configuredType.enabled : configuredType.enabled === 1,
configuredType.display_name,
configuredType.description,
configuredType.object_count,
configuredType.updated_at,
configuredType.object_type_id,
]);
} else {
await txDb.execute(`
UPDATE object_types
SET
schema_id = ?,
enabled = ?,
display_name = COALESCE(display_name, ?),
description = COALESCE(description, ?),
object_count = COALESCE(object_count, ?),
updated_at = ?
WHERE jira_type_id = ?
`, [
schemaRow.id,
typeof configuredType.enabled === 'boolean' ? (configuredType.enabled ? 1 : 0) : configuredType.enabled,
configuredType.display_name,
configuredType.description,
configuredType.object_count,
configuredType.updated_at,
configuredType.object_type_id,
]);
}
} else {
// Insert new object type
// object_types requires sync_priority, which configured_object_types lacks; default to 0
if (txDb.isPostgres) {
await txDb.execute(`
INSERT INTO object_types (
schema_id, jira_type_id, type_name, display_name, description,
sync_priority, object_count, enabled, discovered_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`, [
schemaRow.id,
configuredType.object_type_id,
configuredType.object_type_name,
configuredType.display_name,
configuredType.description,
0, // sync_priority
configuredType.object_count,
typeof configuredType.enabled === 'boolean' ? configuredType.enabled : configuredType.enabled === 1,
configuredType.discovered_at,
configuredType.updated_at,
]);
} else {
await txDb.execute(`
INSERT INTO object_types (
schema_id, jira_type_id, type_name, display_name, description,
sync_priority, object_count, enabled, discovered_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`, [
schemaRow.id,
configuredType.object_type_id,
configuredType.object_type_name,
configuredType.display_name,
configuredType.description,
0, // sync_priority
configuredType.object_count,
typeof configuredType.enabled === 'boolean' ? (configuredType.enabled ? 1 : 0) : configuredType.enabled,
configuredType.discovered_at,
configuredType.updated_at,
]);
}
}
migratedCount++;
}
logger.info(`Migration: Migrated ${migratedCount} object types`);
// Step 7: Fix UNIQUE constraints on object_types
logger.info('Migration: Fixing UNIQUE constraints on object_types...');
try {
// Remove old UNIQUE constraint on type_name if it exists
if (txDb.isPostgres) {
// Check if constraint exists
const constraintExists = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname = 'object_types_type_name_key'
`);
if (constraintExists && constraintExists.count > 0) {
logger.info('Migration: Dropping old UNIQUE constraint on type_name...');
await txDb.execute(`ALTER TABLE object_types DROP CONSTRAINT IF EXISTS object_types_type_name_key`);
}
// Add new UNIQUE constraint on (schema_id, type_name)
const newConstraintExists = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname = 'object_types_schema_id_type_name_key'
`);
if (!newConstraintExists || newConstraintExists.count === 0) {
logger.info('Migration: Adding UNIQUE constraint on (schema_id, type_name)...');
await txDb.execute(`
ALTER TABLE object_types
ADD CONSTRAINT object_types_schema_id_type_name_key UNIQUE (schema_id, type_name)
`);
}
} else {
// SQLite cannot drop or add constraints in place; changing them would
// require recreating the table, so rely on the table definition instead
logger.info('Migration: SQLite UNIQUE constraints are handled in table definition');
}
} catch (error) {
logger.warn('Migration: Could not fix UNIQUE constraints (may already be correct)', error);
}
// Step 8: Add indexes if they don't exist
logger.info('Migration: Adding indexes...');
try {
await txDb.execute(`CREATE INDEX IF NOT EXISTS idx_object_types_schema_id ON object_types(schema_id)`);
await txDb.execute(`CREATE INDEX IF NOT EXISTS idx_object_types_enabled ON object_types(enabled)`);
await txDb.execute(`CREATE INDEX IF NOT EXISTS idx_object_types_schema_enabled ON object_types(schema_id, enabled)`);
} catch (error) {
logger.warn('Migration: Some indexes may already exist', error);
}
// Step 9: Drop configured_object_types table
logger.info('Migration: Dropping configured_object_types table...');
await txDb.execute(`DROP TABLE IF EXISTS configured_object_types`);
logger.info('Migration: Dropped configured_object_types table');
});
logger.info('Migration: Migration to normalized schema structure completed successfully');
} catch (error) {
logger.error('Migration: Failed to migrate to normalized schema structure', error);
throw error;
}
}
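The enabled-flag handling above repeats the same dialect-specific ternary in every UPDATE/INSERT branch. A small helper could factor it out; a hypothetical sketch (`toDbBool` is not in the commit), assuming SQLite stores booleans as 0/1:

```typescript
// Hypothetical helper: normalize a boolean that may arrive as true/false
// (PostgreSQL) or 0/1 (SQLite) into the representation the target dialect expects.
function toDbBool(value: boolean | number, isPostgres: boolean): boolean | number {
  const asBool = typeof value === "boolean" ? value : value === 1;
  return isPostgres ? asBool : asBool ? 1 : 0;
}
```

The migration branches could then pass `toDbBool(configuredType.enabled, txDb.isPostgres)` instead of inlining the conversion each time.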


@@ -6,7 +6,7 @@
import { logger } from '../logger.js';
import type { DatabaseAdapter } from './interface.js';
import { createDatabaseAdapter } from './factory.js';
import { getDatabaseAdapter } from './singleton.js';
// @ts-ignore - bcrypt doesn't have proper ESM types
import bcrypt from 'bcrypt';
@@ -351,6 +351,7 @@ async function seedInitialData(db: DatabaseAdapter): Promise<void> {
{ name: 'manage_users', description: 'Manage users and their roles', resource: 'users' },
{ name: 'manage_roles', description: 'Manage roles and permissions', resource: 'roles' },
{ name: 'manage_settings', description: 'Manage application settings', resource: 'settings' },
{ name: 'admin', description: 'Full administrative access (debug, sync, all operations)', resource: 'admin' },
];
for (const perm of permissions) {
@@ -424,6 +425,43 @@ async function seedInitialData(db: DatabaseAdapter): Promise<void> {
if (adminRole) {
roleIds['administrator'] = adminRole.id;
}
// Ensure "admin" permission exists (may have been added after initial setup)
const adminPerm = await db.queryOne<{ id: number }>(
'SELECT id FROM permissions WHERE name = ?',
['admin']
);
if (!adminPerm) {
// Add missing "admin" permission
await db.execute(
'INSERT INTO permissions (name, description, resource) VALUES (?, ?, ?)',
['admin', 'Full administrative access (debug, sync, all operations)', 'admin']
);
logger.info('Added missing "admin" permission');
}
// Ensure administrator role has "admin" permission
// Get admin permission ID (either existing or newly created)
const adminPermId = adminPerm?.id || (await db.queryOne<{ id: number }>(
'SELECT id FROM permissions WHERE name = ?',
['admin']
))?.id;
if (adminRole && adminPermId) {
const hasAdminPerm = await db.queryOne<{ role_id: number }>(
'SELECT role_id FROM role_permissions WHERE role_id = ? AND permission_id = ?',
[adminRole.id, adminPermId]
);
if (!hasAdminPerm) {
await db.execute(
'INSERT INTO role_permissions (role_id, permission_id) VALUES (?, ?)',
[adminRole.id, adminPermId]
);
logger.info('Assigned "admin" permission to administrator role');
}
}
}
// Create initial admin user if ADMIN_EMAIL and ADMIN_PASSWORD are set
@@ -489,7 +527,8 @@ async function seedInitialData(db: DatabaseAdapter): Promise<void> {
* Main migration function
*/
export async function runMigrations(): Promise<void> {
const db = createDatabaseAdapter();
// Use shared database adapter singleton
const db = getDatabaseAdapter();
try {
logger.info('Running database migrations...');
@@ -526,7 +565,7 @@ let authDatabaseAdapter: DatabaseAdapter | null = null;
export function getAuthDatabase(): DatabaseAdapter {
if (!authDatabaseAdapter) {
// Create adapter with allowClose=false so it won't be closed after operations
authDatabaseAdapter = createDatabaseAdapter(undefined, undefined, false);
authDatabaseAdapter = getDatabaseAdapter();
}
return authDatabaseAdapter;
}


@@ -0,0 +1,43 @@
/**
* Database Schema Initialization
*
* Ensures normalized EAV schema is initialized before services use it.
*/
import { getDatabaseAdapter } from './singleton.js';
import { NORMALIZED_SCHEMA_POSTGRES, NORMALIZED_SCHEMA_SQLITE } from './normalized-schema.js';
import { logger } from '../logger.js';
let initialized = false;
let initializationPromise: Promise<void> | null = null;
/**
* Ensure database schema is initialized
*/
export async function ensureSchemaInitialized(): Promise<void> {
if (initialized) return;
if (initializationPromise) {
await initializationPromise;
return;
}
initializationPromise = (async () => {
try {
// Use shared database adapter singleton
const db = getDatabaseAdapter();
const isPostgres = db.isPostgres === true;
// Execute schema
const schema = isPostgres ? NORMALIZED_SCHEMA_POSTGRES : NORMALIZED_SCHEMA_SQLITE;
await db.exec(schema);
logger.info(`Database schema initialized (${isPostgres ? 'PostgreSQL' : 'SQLite'})`);
initialized = true;
} catch (error) {
logger.error('Failed to initialize database schema', error);
throw error;
}
})();
await initializationPromise;
}
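The guard above is a common memoized-promise pattern: the first caller starts the work, concurrent callers await the same in-flight promise, and later callers return immediately. A generic sketch of the pattern (names hypothetical, not the repo's code):

```typescript
// Memoized async initializer: init runs exactly once even under
// concurrent calls; all callers share the same in-flight promise.
let initialized = false;
let initPromise: Promise<void> | null = null;

async function ensureOnce(init: () => Promise<void>): Promise<void> {
  if (initialized) return;
  if (initPromise) return initPromise;
  initPromise = (async () => {
    await init();
    initialized = true;
  })();
  return initPromise;
}
```

One caveat this shape shares with the code above: if `init` rejects, the stored promise stays rejected, so subsequent calls fail without retrying unless the promise is cleared in a catch block.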


@@ -0,0 +1,329 @@
/**
* Normalized Database Schema
*
* Generic, schema-agnostic normalized structure for CMDB data.
* Works with any Jira Assets configuration.
*/
export const NORMALIZED_SCHEMA_POSTGRES = `
-- =============================================================================
-- Schemas (Jira Assets schemas)
-- =============================================================================
CREATE TABLE IF NOT EXISTS schemas (
id SERIAL PRIMARY KEY,
jira_schema_id TEXT NOT NULL UNIQUE,
name TEXT NOT NULL,
object_schema_key TEXT,
status TEXT,
description TEXT,
search_enabled BOOLEAN NOT NULL DEFAULT TRUE,
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
-- =============================================================================
-- Object Types (discovered from Jira schema, with schema relation and enabled flag)
-- =============================================================================
CREATE TABLE IF NOT EXISTS object_types (
id SERIAL PRIMARY KEY,
schema_id INTEGER NOT NULL REFERENCES schemas(id) ON DELETE CASCADE,
jira_type_id INTEGER NOT NULL,
type_name TEXT NOT NULL,
display_name TEXT NOT NULL,
description TEXT,
sync_priority INTEGER DEFAULT 0,
object_count INTEGER DEFAULT 0,
enabled BOOLEAN NOT NULL DEFAULT FALSE,
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
UNIQUE(schema_id, jira_type_id),
UNIQUE(schema_id, type_name)
);
-- =============================================================================
-- Attributes (discovered from Jira schema)
-- =============================================================================
CREATE TABLE IF NOT EXISTS attributes (
id SERIAL PRIMARY KEY,
jira_attr_id INTEGER NOT NULL,
object_type_name TEXT NOT NULL,
attr_name TEXT NOT NULL,
field_name TEXT NOT NULL,
attr_type TEXT NOT NULL,
is_multiple BOOLEAN NOT NULL DEFAULT FALSE,
is_editable BOOLEAN NOT NULL DEFAULT TRUE,
is_required BOOLEAN NOT NULL DEFAULT FALSE,
is_system BOOLEAN NOT NULL DEFAULT FALSE,
reference_type_name TEXT,
description TEXT,
position INTEGER DEFAULT 0,
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
UNIQUE(jira_attr_id, object_type_name)
);
-- =============================================================================
-- Objects (minimal metadata)
-- =============================================================================
CREATE TABLE IF NOT EXISTS objects (
id TEXT PRIMARY KEY,
object_key TEXT NOT NULL UNIQUE,
object_type_name TEXT NOT NULL,
label TEXT NOT NULL,
jira_updated_at TIMESTAMP,
jira_created_at TIMESTAMP,
cached_at TIMESTAMP NOT NULL DEFAULT NOW()
);
-- =============================================================================
-- Attribute Values (EAV pattern - generic for all types)
-- =============================================================================
CREATE TABLE IF NOT EXISTS attribute_values (
id SERIAL PRIMARY KEY,
object_id TEXT NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
attribute_id INTEGER NOT NULL REFERENCES attributes(id) ON DELETE CASCADE,
text_value TEXT,
number_value NUMERIC,
boolean_value BOOLEAN,
date_value DATE,
datetime_value TIMESTAMP,
reference_object_id TEXT,
reference_object_key TEXT,
reference_object_label TEXT,
array_index INTEGER DEFAULT 0,
UNIQUE(object_id, attribute_id, array_index)
);
-- =============================================================================
-- Relationships (enhanced existing table)
-- =============================================================================
CREATE TABLE IF NOT EXISTS object_relations (
id SERIAL PRIMARY KEY,
source_id TEXT NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
target_id TEXT NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
attribute_id INTEGER NOT NULL REFERENCES attributes(id) ON DELETE CASCADE,
source_type TEXT NOT NULL,
target_type TEXT NOT NULL,
UNIQUE(source_id, target_id, attribute_id)
);
-- =============================================================================
-- Schema Mappings (object type -> schema ID) - DEPRECATED
-- =============================================================================
CREATE TABLE IF NOT EXISTS schema_mappings (
object_type_name TEXT PRIMARY KEY,
schema_id TEXT NOT NULL,
enabled BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
-- =============================================================================
-- Sync Metadata (unchanged)
-- =============================================================================
CREATE TABLE IF NOT EXISTS sync_metadata (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at TEXT NOT NULL
);
-- =============================================================================
-- Indexes for Performance
-- =============================================================================
-- Schema indexes
CREATE INDEX IF NOT EXISTS idx_schemas_jira_schema_id ON schemas(jira_schema_id);
CREATE INDEX IF NOT EXISTS idx_schemas_name ON schemas(name);
CREATE INDEX IF NOT EXISTS idx_schemas_search_enabled ON schemas(search_enabled);
-- Object type indexes (for schema queries)
CREATE INDEX IF NOT EXISTS idx_object_types_type_name ON object_types(type_name);
CREATE INDEX IF NOT EXISTS idx_object_types_jira_id ON object_types(jira_type_id);
CREATE INDEX IF NOT EXISTS idx_object_types_schema_id ON object_types(schema_id);
CREATE INDEX IF NOT EXISTS idx_object_types_sync_priority ON object_types(sync_priority);
CREATE INDEX IF NOT EXISTS idx_object_types_enabled ON object_types(enabled);
CREATE INDEX IF NOT EXISTS idx_object_types_schema_enabled ON object_types(schema_id, enabled);
-- Object indexes
CREATE INDEX IF NOT EXISTS idx_objects_type ON objects(object_type_name);
CREATE INDEX IF NOT EXISTS idx_objects_key ON objects(object_key);
CREATE INDEX IF NOT EXISTS idx_objects_label ON objects(label);
CREATE INDEX IF NOT EXISTS idx_objects_cached_at ON objects(cached_at);
-- Attribute indexes
CREATE INDEX IF NOT EXISTS idx_attributes_type ON attributes(object_type_name);
CREATE INDEX IF NOT EXISTS idx_attributes_field ON attributes(field_name);
CREATE INDEX IF NOT EXISTS idx_attributes_jira_id ON attributes(jira_attr_id);
CREATE INDEX IF NOT EXISTS idx_attributes_type_field ON attributes(object_type_name, field_name);
-- Attribute value indexes (critical for query performance)
CREATE INDEX IF NOT EXISTS idx_attr_values_object ON attribute_values(object_id);
CREATE INDEX IF NOT EXISTS idx_attr_values_attr ON attribute_values(attribute_id);
CREATE INDEX IF NOT EXISTS idx_attr_values_text ON attribute_values(text_value) WHERE text_value IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_attr_values_number ON attribute_values(number_value) WHERE number_value IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_attr_values_reference ON attribute_values(reference_object_id) WHERE reference_object_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_attr_values_composite_text ON attribute_values(attribute_id, text_value) WHERE text_value IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_attr_values_composite_ref ON attribute_values(attribute_id, reference_object_id) WHERE reference_object_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_attr_values_object_attr ON attribute_values(object_id, attribute_id);
-- Relation indexes
CREATE INDEX IF NOT EXISTS idx_relations_source ON object_relations(source_id);
CREATE INDEX IF NOT EXISTS idx_relations_target ON object_relations(target_id);
CREATE INDEX IF NOT EXISTS idx_relations_attr ON object_relations(attribute_id);
CREATE INDEX IF NOT EXISTS idx_relations_source_type ON object_relations(source_id, source_type);
CREATE INDEX IF NOT EXISTS idx_relations_target_type ON object_relations(target_id, target_type);
-- Schema mapping indexes
CREATE INDEX IF NOT EXISTS idx_schema_mappings_type ON schema_mappings(object_type_name);
CREATE INDEX IF NOT EXISTS idx_schema_mappings_schema ON schema_mappings(schema_id);
CREATE INDEX IF NOT EXISTS idx_schema_mappings_enabled ON schema_mappings(enabled);
`;
export const NORMALIZED_SCHEMA_SQLITE = `
-- =============================================================================
-- SQLite version (for development/testing)
-- =============================================================================
CREATE TABLE IF NOT EXISTS schemas (
id INTEGER PRIMARY KEY AUTOINCREMENT,
jira_schema_id TEXT NOT NULL UNIQUE,
name TEXT NOT NULL,
object_schema_key TEXT,
status TEXT,
description TEXT,
search_enabled INTEGER NOT NULL DEFAULT 1,
discovered_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS object_types (
id INTEGER PRIMARY KEY AUTOINCREMENT,
schema_id INTEGER NOT NULL,
jira_type_id INTEGER NOT NULL,
type_name TEXT NOT NULL,
display_name TEXT NOT NULL,
description TEXT,
sync_priority INTEGER DEFAULT 0,
object_count INTEGER DEFAULT 0,
enabled INTEGER NOT NULL DEFAULT 0,
discovered_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now')),
UNIQUE(schema_id, jira_type_id),
UNIQUE(schema_id, type_name),
FOREIGN KEY (schema_id) REFERENCES schemas(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS attributes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
jira_attr_id INTEGER NOT NULL,
object_type_name TEXT NOT NULL,
attr_name TEXT NOT NULL,
field_name TEXT NOT NULL,
attr_type TEXT NOT NULL,
is_multiple INTEGER NOT NULL DEFAULT 0,
is_editable INTEGER NOT NULL DEFAULT 1,
is_required INTEGER NOT NULL DEFAULT 0,
is_system INTEGER NOT NULL DEFAULT 0,
reference_type_name TEXT,
description TEXT,
position INTEGER DEFAULT 0,
discovered_at TEXT NOT NULL DEFAULT (datetime('now')),
UNIQUE(jira_attr_id, object_type_name)
);
CREATE TABLE IF NOT EXISTS objects (
id TEXT PRIMARY KEY,
object_key TEXT NOT NULL UNIQUE,
object_type_name TEXT NOT NULL,
label TEXT NOT NULL,
jira_updated_at TEXT,
jira_created_at TEXT,
cached_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS attribute_values (
id INTEGER PRIMARY KEY AUTOINCREMENT,
object_id TEXT NOT NULL,
attribute_id INTEGER NOT NULL,
text_value TEXT,
number_value REAL,
boolean_value INTEGER,
date_value TEXT,
datetime_value TEXT,
reference_object_id TEXT,
reference_object_key TEXT,
reference_object_label TEXT,
array_index INTEGER DEFAULT 0,
UNIQUE(object_id, attribute_id, array_index),
FOREIGN KEY (object_id) REFERENCES objects(id) ON DELETE CASCADE,
FOREIGN KEY (attribute_id) REFERENCES attributes(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS object_relations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
source_id TEXT NOT NULL,
target_id TEXT NOT NULL,
attribute_id INTEGER NOT NULL,
source_type TEXT NOT NULL,
target_type TEXT NOT NULL,
UNIQUE(source_id, target_id, attribute_id),
FOREIGN KEY (source_id) REFERENCES objects(id) ON DELETE CASCADE,
FOREIGN KEY (target_id) REFERENCES objects(id) ON DELETE CASCADE,
FOREIGN KEY (attribute_id) REFERENCES attributes(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS schema_mappings (
object_type_name TEXT PRIMARY KEY,
schema_id TEXT NOT NULL,
enabled INTEGER NOT NULL DEFAULT 1,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS sync_metadata (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at TEXT NOT NULL
);
-- Indexes
CREATE INDEX IF NOT EXISTS idx_objects_type ON objects(object_type_name);
CREATE INDEX IF NOT EXISTS idx_objects_key ON objects(object_key);
CREATE INDEX IF NOT EXISTS idx_objects_label ON objects(label);
CREATE INDEX IF NOT EXISTS idx_attributes_type ON attributes(object_type_name);
CREATE INDEX IF NOT EXISTS idx_attributes_field ON attributes(field_name);
CREATE INDEX IF NOT EXISTS idx_attributes_jira_id ON attributes(jira_attr_id);
CREATE INDEX IF NOT EXISTS idx_attributes_type_field ON attributes(object_type_name, field_name);
CREATE INDEX IF NOT EXISTS idx_attr_values_object ON attribute_values(object_id);
CREATE INDEX IF NOT EXISTS idx_attr_values_attr ON attribute_values(attribute_id);
CREATE INDEX IF NOT EXISTS idx_attr_values_text ON attribute_values(text_value);
CREATE INDEX IF NOT EXISTS idx_attr_values_number ON attribute_values(number_value);
CREATE INDEX IF NOT EXISTS idx_attr_values_reference ON attribute_values(reference_object_id);
CREATE INDEX IF NOT EXISTS idx_attr_values_object_attr ON attribute_values(object_id, attribute_id);
CREATE INDEX IF NOT EXISTS idx_relations_source ON object_relations(source_id);
CREATE INDEX IF NOT EXISTS idx_relations_target ON object_relations(target_id);
CREATE INDEX IF NOT EXISTS idx_relations_attr ON object_relations(attribute_id);
-- Schema indexes
CREATE INDEX IF NOT EXISTS idx_schemas_jira_schema_id ON schemas(jira_schema_id);
CREATE INDEX IF NOT EXISTS idx_schemas_name ON schemas(name);
CREATE INDEX IF NOT EXISTS idx_schemas_search_enabled ON schemas(search_enabled);
-- Object type indexes
CREATE INDEX IF NOT EXISTS idx_object_types_type_name ON object_types(type_name);
CREATE INDEX IF NOT EXISTS idx_object_types_jira_id ON object_types(jira_type_id);
CREATE INDEX IF NOT EXISTS idx_object_types_schema_id ON object_types(schema_id);
CREATE INDEX IF NOT EXISTS idx_object_types_sync_priority ON object_types(sync_priority);
CREATE INDEX IF NOT EXISTS idx_object_types_enabled ON object_types(enabled);
CREATE INDEX IF NOT EXISTS idx_object_types_schema_enabled ON object_types(schema_id, enabled);
-- Schema mapping indexes
CREATE INDEX IF NOT EXISTS idx_schema_mappings_type ON schema_mappings(object_type_name);
CREATE INDEX IF NOT EXISTS idx_schema_mappings_schema ON schema_mappings(schema_id);
CREATE INDEX IF NOT EXISTS idx_schema_mappings_enabled ON schema_mappings(enabled);
`;
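Reading an object back out of the EAV layout means pivoting its `attribute_values` rows into a single record. A minimal sketch of that reassembly (row shape abbreviated; this is illustrative, not code from the repo):

```typescript
interface AttrValueRow {
  field_name: string;
  text_value: string | null;
  number_value: number | null;
  array_index: number;
}

// Pivot one object's EAV rows into a flat record; attributes with
// multiple rows (array_index 0, 1, ...) collect into arrays.
function pivotAttributeRows(rows: AttrValueRow[]): Record<string, unknown> {
  const record: Record<string, unknown> = {};
  const sorted = [...rows].sort((a, b) => a.array_index - b.array_index);
  for (const row of sorted) {
    const value = row.text_value ?? row.number_value;
    const existing = record[row.field_name];
    if (existing === undefined) {
      record[row.field_name] = value;
    } else if (Array.isArray(existing)) {
      existing.push(value);
    } else {
      record[row.field_name] = [existing, value];
    }
  }
  return record;
}
```

The `UNIQUE(object_id, attribute_id, array_index)` constraint above is what makes this pivot deterministic: each multi-value slot occurs at most once per object.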


@@ -9,6 +9,7 @@ import { logger } from '../logger.js';
import type { DatabaseAdapter } from './interface.js';
export class PostgresAdapter implements DatabaseAdapter {
public readonly isPostgres = true; // Indicates this is PostgreSQL
private pool: Pool;
private connectionString: string;
private isClosed: boolean = false;
@@ -17,11 +18,19 @@ export class PostgresAdapter implements DatabaseAdapter {
constructor(connectionString: string, allowClose: boolean = true) {
this.connectionString = connectionString;
this.allowClose = allowClose;
// Parse connection string to extract SSL requirement
const url = new URL(connectionString);
const sslRequired = url.searchParams.get('sslmode') === 'require' ||
process.env.DATABASE_SSL === 'true';
this.pool = new Pool({
connectionString,
max: 20, // Maximum number of clients in the pool
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 10000, // Increased timeout for initial connection
ssl: sslRequired ? { rejectUnauthorized: false } : false, // Azure PostgreSQL requires SSL
});
// Handle pool errors
@@ -72,6 +81,7 @@ export class PostgresAdapter implements DatabaseAdapter {
// Create a transaction-scoped adapter
const transactionAdapter: DatabaseAdapter = {
isPostgres: true, // Indicates this is PostgreSQL
query: async (sql: string, params?: any[]) => {
const convertedSql = this.convertPlaceholders(sql);
const result = await client.query(convertedSql, params);
@@ -102,9 +112,16 @@ export class PostgresAdapter implements DatabaseAdapter {
const result = await callback(transactionAdapter);
await client.query('COMMIT');
return result;
} catch (error) {
} catch (error: any) {
await client.query('ROLLBACK');
// Don't log foreign key constraint errors as errors - they're expected and handled by caller
if (error?.code === '23503' || error?.message?.includes('foreign key constraint')) {
logger.debug('PostgreSQL transaction error (foreign key constraint - handled by caller):', error);
} else {
logger.error('PostgreSQL transaction error:', error);
}
throw error;
} finally {
client.release();
@@ -148,10 +165,13 @@ export class PostgresAdapter implements DatabaseAdapter {
async getSizeBytes(): Promise<number> {
try {
const result = await this.query<{ size: number }>(`
const result = await this.query<{ size: number | string }>(`
SELECT pg_database_size(current_database()) as size
`);
return result[0]?.size || 0;
// PostgreSQL returns bigint as string, ensure we convert to number
const size = result[0]?.size;
if (!size) return 0;
return typeof size === 'string' ? parseInt(size, 10) : Number(size);
} catch (error) {
logger.error('PostgreSQL getSizeBytes error:', error);
return 0;


@@ -0,0 +1,28 @@
/**
* Database Adapter Singleton
*
* Provides a shared database adapter instance to prevent multiple connections.
* All services should use this singleton instead of creating their own adapters.
*/
import { createDatabaseAdapter } from './factory.js';
import type { DatabaseAdapter } from './interface.js';
let dbAdapterInstance: DatabaseAdapter | null = null;
/**
* Get the shared database adapter instance
*/
export function getDatabaseAdapter(): DatabaseAdapter {
if (!dbAdapterInstance) {
dbAdapterInstance = createDatabaseAdapter(undefined, undefined, false); // Don't allow close (singleton)
}
return dbAdapterInstance;
}
/**
* Reset the singleton (for testing only)
*/
export function resetDatabaseAdapter(): void {
dbAdapterInstance = null;
}


@@ -18,7 +18,7 @@ interface EmailOptions {
class EmailService {
private transporter: Transporter | null = null;
private isConfigured: boolean = false;
private _isConfigured: boolean = false;
constructor() {
this.initialize();
@@ -37,7 +37,7 @@ class EmailService {
if (!smtpHost || !smtpUser || !smtpPassword) {
logger.warn('SMTP not configured - email functionality will be disabled');
this.isConfigured = false;
this._isConfigured = false;
return;
}
@@ -52,11 +52,11 @@ class EmailService {
},
});
this.isConfigured = true;
this._isConfigured = true;
logger.info('Email service configured');
} catch (error) {
logger.error('Failed to initialize email service:', error);
this.isConfigured = false;
this._isConfigured = false;
}
}
@@ -64,7 +64,7 @@ class EmailService {
* Send an email
*/
async sendEmail(options: EmailOptions): Promise<boolean> {
if (!this.isConfigured || !this.transporter) {
if (!this._isConfigured || !this.transporter) {
logger.warn('Email service not configured - email not sent:', options.to);
// In development, log the email content
if (config.isDevelopment) {
@@ -282,7 +282,7 @@ class EmailService {
* Check if email service is configured
*/
isConfigured(): boolean {
return this.isConfigured;
return this._isConfigured;
}
}
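The `_isConfigured` rename resolves a TypeScript name collision: a class cannot declare a field and a method that both use the name `isConfigured`. A getter is an alternative shape for the same fix (a sketch, not the repo's code):

```typescript
class EmailServiceLike {
  private configured = false;

  // A getter exposes read-only state under the public name without
  // colliding with the private backing field.
  get isConfigured(): boolean {
    return this.configured;
  }

  markConfigured(): void {
    this.configured = true;
  }
}
```

Call sites would then read `svc.isConfigured` as a property rather than calling `svc.isConfigured()`.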


@@ -15,6 +15,8 @@ import type {
ApplicationUpdateRequest,
TeamDashboardData,
} from '../types/index.js';
import type { CMDBObjectTypeName } from '../generated/jira-types.js';
import type { DatabaseAdapter } from './database/interface.js';
// Attribute name mappings (these should match your Jira Assets schema)
const ATTRIBUTE_NAMES = {
@@ -98,6 +100,65 @@ class JiraAssetsService {
private applicationFunctionCategoriesCache: Map<string, ReferenceValue> | null = null;
// Cache: Dynamics Factors with factors
private dynamicsFactorsCache: Map<string, ReferenceValue> | null = null;
/**
* Get schema ID for an object type from database
* Returns the schema ID of the first enabled object type with the given type name
*/
private async getSchemaIdForObjectType(typeName: string): Promise<string | null> {
try {
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const type = enabledTypes.find(et => et.objectTypeName === typeName);
return type?.schemaId || null;
} catch (error) {
logger.warn(`JiraAssets: Failed to get schema ID for ${typeName}`, error);
return null;
}
}
/**
* Get first available schema ID from database (fallback)
*/
private async getFirstSchemaId(): Promise<string | null> {
try {
const { normalizedCacheStore } = await import('./normalizedCacheStore.js');
const db = (normalizedCacheStore as any).db;
if (!db) return null;
await db.ensureInitialized?.();
const typedDb = db as DatabaseAdapter;
const schemaRow = await typedDb.queryOne<{ jira_schema_id: string }>(
`SELECT jira_schema_id FROM schemas ORDER BY jira_schema_id LIMIT 1`
);
return schemaRow?.jira_schema_id || null;
} catch (error) {
logger.warn('JiraAssets: Failed to get first schema ID', error);
return null;
}
}
/**
* Get all available schema IDs from database that are enabled for searching
*/
private async getAllSchemaIds(): Promise<string[]> {
try {
const { normalizedCacheStore } = await import('./normalizedCacheStore.js');
const db = (normalizedCacheStore as any).db;
if (!db) return [];
await db.ensureInitialized?.();
const typedDb = db as DatabaseAdapter;
const schemaRows = await typedDb.query<{ jira_schema_id: string }>(
`SELECT DISTINCT jira_schema_id FROM schemas WHERE search_enabled = ? ORDER BY jira_schema_id`,
[typedDb.isPostgres ? true : 1]
);
return schemaRows.map((row: { jira_schema_id: string }) => row.jira_schema_id);
} catch (error) {
logger.warn('JiraAssets: Failed to get all schema IDs', error);
return [];
}
}
// Cache: Complexity Factors with factors
private complexityFactorsCache: Map<string, ReferenceValue> | null = null;
// Cache: Number of Users with factors
@@ -109,7 +170,7 @@ class JiraAssetsService {
// Cache: Team dashboard data
private teamDashboardCache: { data: TeamDashboardData; timestamp: number } | null = null;
private readonly TEAM_DASHBOARD_CACHE_TTL = 5 * 60 * 1000; // 5 minutes
// Cache: Dashboard stats
private dashboardStatsCache: {
data: {
totalApplications: number;
@@ -121,6 +182,8 @@ class JiraAssetsService {
timestamp: number
} | null = null;
private readonly DASHBOARD_STATS_CACHE_TTL = 3 * 60 * 1000; // 3 minutes
// Warming lock to prevent multiple simultaneous warming operations
private isWarming: boolean = false;
constructor() {
// Try both API paths - Insight (Data Center) and Assets (Cloud)
@@ -742,7 +805,7 @@ class JiraAssetsService {
try {
await this.detectApiType();
-const url = `/object/${embeddedRefObj.id}?includeAttributes=true&includeAttributesDeep=1`;
+const url = `/object/${embeddedRefObj.id}?includeAttributes=true&includeAttributesDeep=2`;
const refObj = await this.request<JiraAssetsObject>(url);
if (refObj) {
@@ -1337,6 +1400,12 @@ class JiraAssetsService {
logger.info(`Searching applications with query: ${qlQuery}`);
logger.debug(`Filters: ${JSON.stringify(filters)}`);
// Get schema ID for ApplicationComponent from database
const schemaId = await this.getSchemaIdForObjectType('ApplicationComponent') || await this.getFirstSchemaId();
if (!schemaId) {
throw new Error('No schema ID available. Please configure object types in Schema Configuration settings.');
}
let response: JiraAssetsSearchResponse;
if (this.isDataCenter) {
@@ -1347,8 +1416,8 @@ class JiraAssetsService {
page: page.toString(),
resultPerPage: pageSize.toString(),
includeAttributes: 'true',
-includeAttributesDeep: '1',
-objectSchemaId: config.jiraSchemaId,
+includeAttributesDeep: '2',
+objectSchemaId: schemaId,
});
logger.debug(`IQL request: /iql/objects?${params.toString()}`);
@@ -1368,7 +1437,10 @@ class JiraAssetsService {
'/aql/objects',
{
method: 'POST',
-body: JSON.stringify(requestBody),
+body: JSON.stringify({
+...requestBody,
+objectSchemaId: schemaId,
+}),
}
);
}
@@ -1665,10 +1737,29 @@ class JiraAssetsService {
}
}
-async getReferenceObjects(objectType: string): Promise<ReferenceValue[]> {
+async getReferenceObjects(objectType: string, schemaId?: string): Promise<ReferenceValue[]> {
try {
await this.detectApiType();
// Get schema ID from mapping service if not provided
let effectiveSchemaId = schemaId;
if (!effectiveSchemaId) {
const { schemaMappingService } = await import('./schemaMappingService.js');
const { OBJECT_TYPES } = await import('../generated/jira-schema.js');
// Find the typeName from the objectType (display name)
let typeName: string | null = null;
for (const [key, def] of Object.entries(OBJECT_TYPES)) {
if (def.name === objectType) {
typeName = key;
break;
}
}
// Use typeName if found, otherwise fall back to objectType
effectiveSchemaId = await schemaMappingService.getSchemaId(typeName || objectType);
}
const qlQuery = `objectType = "${objectType}"`;
let response: JiraAssetsSearchResponse;
@@ -1678,8 +1769,8 @@ class JiraAssetsService {
iql: qlQuery,
resultPerPage: '200',
includeAttributes: 'true',
-includeAttributesDeep: '1',
-objectSchemaId: config.jiraSchemaId,
+includeAttributesDeep: '2',
+objectSchemaId: effectiveSchemaId,
});
response = await this.request<JiraAssetsSearchResponse>(
@@ -1695,6 +1786,7 @@ class JiraAssetsService {
qlQuery,
resultPerPage: 200,
includeAttributes: true,
objectSchemaId: effectiveSchemaId,
}),
}
);
@@ -1718,6 +1810,50 @@ class JiraAssetsService {
}
}
// Cache objects to the normalized cache store so they are available in the
// database cache (not just in-memory), avoiding individual API calls later when these objects are needed
if (response.objectEntries.length > 0) {
try {
const { OBJECT_TYPES } = await import('../generated/jira-schema.js');
// Find the CMDBObjectTypeName for this objectType
let typeName: CMDBObjectTypeName | null = null;
for (const [key, def] of Object.entries(OBJECT_TYPES)) {
if (def.name === objectType) {
typeName = key as CMDBObjectTypeName;
break;
}
}
if (typeName) {
// Parse and cache objects in batch using the same business logic as sync
const { jiraAssetsClient } = await import('./jiraAssetsClient.js');
const { normalizedCacheStore } = await import('./normalizedCacheStore.js');
const parsedObjects = await Promise.all(
response.objectEntries.map(obj => jiraAssetsClient.parseObject(obj))
);
const validParsedObjects = parsedObjects.filter((obj): obj is any => obj !== null);
if (validParsedObjects.length > 0) {
// Batch upsert to cache (same as sync engine)
await normalizedCacheStore.batchUpsertObjects(typeName, validParsedObjects);
// Extract and store relations for all objects (same as sync engine)
for (const obj of validParsedObjects) {
await normalizedCacheStore.extractAndStoreRelations(typeName, obj);
}
logger.debug(`Cached ${validParsedObjects.length} ${objectType} objects to normalized cache with relations`);
}
}
} catch (error) {
// Don't fail if caching fails - this is an optimization
logger.debug(`Failed to cache ${objectType} objects to normalized cache`, error);
}
}
const results = response.objectEntries.map((obj) => {
// Extract Description attribute (try multiple possible attribute names)
// Use attrSchema for fallback lookup by attribute ID
@@ -1926,6 +2062,12 @@ class JiraAssetsService {
teamsById.set(team.objectId, team);
}
// Get schema ID for ApplicationComponent
const schemaId = await this.getSchemaIdForObjectType('ApplicationComponent') || await this.getFirstSchemaId();
if (!schemaId) {
throw new Error('No schema ID available. Please configure object types in Schema Configuration settings.');
}
let response: JiraAssetsSearchResponse;
if (this.isDataCenter) {
@@ -1933,7 +2075,7 @@ class JiraAssetsService {
iql,
resultPerPage: '500',
includeAttributes: 'true',
-objectSchemaId: config.jiraSchemaId,
+objectSchemaId: schemaId,
});
response = await this.request<JiraAssetsSearchResponse>(
`/iql/objects?${params.toString()}`
@@ -2081,8 +2223,52 @@ class JiraAssetsService {
return null;
}
// Check if there's already a pending request for this object (deduplicate concurrent requests)
// Check both objectIdToFetch and the alternate key (if both are provided)
const pendingRequest = this.pendingReferenceRequests.get(objectIdToFetch)
|| (objectId && objectKey && objectId !== objectKey ? this.pendingReferenceRequests.get(objectKey) : undefined)
|| (objectId && objectKey && objectId !== objectKey ? this.pendingReferenceRequests.get(objectId) : undefined);
if (pendingRequest) {
logger.debug(`fetchEnrichedReferenceValue: Reusing pending request for ${objectKey} (${objectIdToFetch})`);
return pendingRequest;
}
// Create a new fetch promise and store it in pending requests
// Store by both keys if they differ to catch all concurrent requests
const fetchPromise = this.doFetchEnrichedReferenceValue(objectKey, objectId, objectIdToFetch, cachedByKey, cachedById);
this.pendingReferenceRequests.set(objectIdToFetch, fetchPromise);
if (objectId && objectKey && objectId !== objectKey) {
// Also store by the alternate key to deduplicate requests that use the other key
this.pendingReferenceRequests.set(objectKey, fetchPromise);
this.pendingReferenceRequests.set(objectId, fetchPromise);
}
try {
-const url = `/object/${objectIdToFetch}?includeAttributes=true&includeAttributesDeep=1`;
const result = await fetchPromise;
return result;
} finally {
// Remove from pending requests once done (success or failure)
this.pendingReferenceRequests.delete(objectIdToFetch);
if (objectId && objectKey && objectId !== objectKey) {
this.pendingReferenceRequests.delete(objectKey);
this.pendingReferenceRequests.delete(objectId);
}
}
}
/**
* Internal method to actually fetch the enriched reference value (called by fetchEnrichedReferenceValue)
*/
private async doFetchEnrichedReferenceValue(
objectKey: string,
objectId: string | undefined,
objectIdToFetch: string,
cachedByKey: ReferenceValue | undefined,
cachedById: ReferenceValue | undefined
): Promise<ReferenceValue | null> {
try {
const url = `/object/${objectIdToFetch}?includeAttributes=true&includeAttributesDeep=2`;
const refObj = await this.request<JiraAssetsObject>(url);
if (!refObj) {
@@ -2170,6 +2356,12 @@ class JiraAssetsService {
logger.info('Dashboard stats: Cache miss or expired, fetching fresh data');
try {
// Get schema ID for ApplicationComponent
const schemaId = await this.getSchemaIdForObjectType('ApplicationComponent') || await this.getFirstSchemaId();
if (!schemaId) {
throw new Error('No schema ID available. Please configure object types in Schema Configuration settings.');
}
const allAppsQuery = 'objectType = "Application Component" AND Status != "Closed"';
// First, get total count with a single query
@@ -2179,7 +2371,7 @@ class JiraAssetsService {
iql: allAppsQuery,
resultPerPage: '1',
includeAttributes: 'true',
-objectSchemaId: config.jiraSchemaId,
+objectSchemaId: schemaId,
});
totalCountResponse = await this.request<JiraAssetsSearchResponse>(
`/iql/objects?${params.toString()}`
@@ -2193,6 +2385,7 @@ class JiraAssetsService {
qlQuery: allAppsQuery,
resultPerPage: 1,
includeAttributes: true,
objectSchemaId: schemaId,
}),
}
);
@@ -2222,7 +2415,7 @@ class JiraAssetsService {
iql: sampleQuery,
resultPerPage: '1',
includeAttributes: 'true',
-objectSchemaId: config.jiraSchemaId,
+objectSchemaId: schemaId,
});
sampleResponse = await this.request<JiraAssetsSearchResponse>(
`/iql/objects?${sampleParams.toString()}`
@@ -2236,6 +2429,7 @@ class JiraAssetsService {
qlQuery: sampleQuery,
resultPerPage: 1,
includeAttributes: true,
objectSchemaId: schemaId,
}),
}
);
@@ -2273,7 +2467,7 @@ class JiraAssetsService {
resultPerPage: pageSize.toString(),
pageNumber: currentPage.toString(),
includeAttributes: 'true',
-objectSchemaId: config.jiraSchemaId,
+objectSchemaId: schemaId,
});
batchResponse = await this.request<JiraAssetsSearchResponse>(
`/iql/objects?${params.toString()}`
@@ -2288,6 +2482,7 @@ class JiraAssetsService {
resultPerPage: pageSize,
pageNumber: currentPage,
includeAttributes: true,
objectSchemaId: schemaId,
}),
}
);
@@ -2386,7 +2581,7 @@ class JiraAssetsService {
iql: classifiedQuery,
resultPerPage: '1',
includeAttributes: 'true',
-objectSchemaId: config.jiraSchemaId,
+objectSchemaId: schemaId,
});
classifiedResponse = await this.request<JiraAssetsSearchResponse>(
`/iql/objects?${params.toString()}`
@@ -2484,13 +2679,19 @@ class JiraAssetsService {
if (this.attributeSchemaCache.has(objectTypeName)) {
attrSchema = this.attributeSchemaCache.get(objectTypeName);
} else {
// Get schema ID for ApplicationComponent
const schemaId = await this.getSchemaIdForObjectType('ApplicationComponent') || await this.getFirstSchemaId();
if (!schemaId) {
throw new Error('No schema ID available. Please configure object types in Schema Configuration settings.');
}
const testParams = new URLSearchParams({
iql: qlQuery,
page: '1',
resultPerPage: '1',
includeAttributes: 'true',
-includeAttributesDeep: '1',
-objectSchemaId: config.jiraSchemaId,
+includeAttributesDeep: '2',
+objectSchemaId: schemaId,
});
const testResponse = await this.request<JiraAssetsSearchResponse>(
`/iql/objects?${testParams.toString()}`
@@ -2510,6 +2711,12 @@ class JiraAssetsService {
this.ensureFactorCaches(),
]);
// Get schema ID for ApplicationComponent
const schemaId = await this.getSchemaIdForObjectType('ApplicationComponent') || await this.getFirstSchemaId();
if (!schemaId) {
throw new Error('No schema ID available. Please configure object types in Schema Configuration settings.');
}
// Get total count
let firstResponse: JiraAssetsSearchResponse;
if (this.isDataCenter) {
@@ -2518,8 +2725,8 @@ class JiraAssetsService {
page: '1',
resultPerPage: '1',
includeAttributes: 'true',
-includeAttributesDeep: '1',
-objectSchemaId: config.jiraSchemaId,
+includeAttributesDeep: '2',
+objectSchemaId: schemaId,
});
firstResponse = await this.request<JiraAssetsSearchResponse>(
`/iql/objects?${params.toString()}`
@@ -2563,8 +2770,8 @@ class JiraAssetsService {
page: pageNum.toString(),
resultPerPage: batchSize.toString(),
includeAttributes: 'true',
-includeAttributesDeep: '1',
-objectSchemaId: config.jiraSchemaId,
+includeAttributesDeep: '2',
+objectSchemaId: schemaId,
});
response = await this.request<JiraAssetsSearchResponse>(
`/iql/objects?${params.toString()}`
@@ -2982,15 +3189,31 @@ class JiraAssetsService {
try {
await this.detectApiType();
// Get all available schema IDs to search across all schemas
const schemaIds = await this.getAllSchemaIds();
if (schemaIds.length === 0) {
// Fallback to first schema if no schemas found
const fallbackSchemaId = await this.getFirstSchemaId();
if (!fallbackSchemaId) {
throw new Error('No schema ID available. Please configure object types in Schema Configuration settings.');
}
schemaIds.push(fallbackSchemaId);
}
logger.info(`CMDB search: Searching across ${schemaIds.length} schema(s) for query: "${query}"`);
// Search each schema and collect results
const searchPromises = schemaIds.map(async (schemaId) => {
try {
// Use Insight AM search API endpoint (different from IQL)
const searchUrl = `${config.jiraHost}/rest/insight-am/1/search?` +
-`schema=${config.jiraSchemaId}&` +
+`schema=${schemaId}&` +
`criteria=${encodeURIComponent(query)}&` +
`criteriaType=FREETEXT&` +
`attributes=Key,Object+Type,Label,Name,Description,Status&` +
`offset=0&limit=${limit}`;
-logger.info(`CMDB search API call - Query: "${query}", URL: ${searchUrl}`);
+logger.debug(`CMDB search API call - Schema: ${schemaId}, Query: "${query}", URL: ${searchUrl}`);
const response = await fetch(searchUrl, {
method: 'GET',
@@ -2999,7 +3222,8 @@ class JiraAssetsService {
if (!response.ok) {
const errorText = await response.text();
-throw new Error(`Jira CMDB search error: ${response.status} - ${errorText}`);
+logger.warn(`CMDB search failed for schema ${schemaId}: ${response.status} - ${errorText}`);
+return null; // Return null for failed schemas, we'll continue with others
}
const data = await response.json() as {
@@ -3030,36 +3254,94 @@ class JiraAssetsService {
}>;
};
-// Transform the response to a cleaner format
-// The API returns attributes with nested structure, we flatten the values
-const transformedResults = (data.results || []).map((result) => ({
-id: result.id,
-key: result.key,
-label: result.label,
-objectTypeId: result.objectTypeId,
-avatarUrl: result.avatarUrl,
-attributes: (result.attributes || []).map((attr) => ({
+return {
+schemaId,
+results: data.results || [],
+objectTypes: data.objectTypes || [],
+metadata: data.metadata,
+};
+} catch (error) {
+logger.warn(`CMDB search error for schema ${schemaId}:`, error);
+return null; // Return null for failed schemas, we'll continue with others
+}
});
// Wait for all searches to complete
const searchResults = await Promise.all(searchPromises);
// Merge results from all schemas
const allResults: Array<{
id: number;
key: string;
label: string;
objectTypeId: number;
avatarUrl?: string;
attributes: Array<{
id: number;
name: string;
objectTypeAttributeId: number;
values: unknown[];
}>;
}> = [];
const objectTypeMap = new Map<number, { id: number; name: string; iconUrl?: string }>();
let totalCount = 0;
for (const result of searchResults) {
if (!result) continue; // Skip failed schemas
// Add results (avoid duplicates by key)
const existingKeys = new Set(allResults.map(r => r.key));
for (const item of result.results) {
if (!existingKeys.has(item.key)) {
allResults.push({
id: item.id,
key: item.key,
label: item.label,
objectTypeId: item.objectTypeId,
avatarUrl: item.avatarUrl,
attributes: (item.attributes || []).map((attr) => ({
id: attr.id,
name: attr.name,
objectTypeAttributeId: attr.objectTypeAttributeId,
values: attr.values || [],
})),
-}));
+});
+existingKeys.add(item.key);
}
}
-return {
-metadata: data.metadata || {
-count: transformedResults.length,
-offset: 0,
-limit,
-total: transformedResults.length,
-criteria: { query, type: 'FREETEXT', schema: parseInt(config.jiraSchemaId, 10) },
-},
-objectTypes: (data.objectTypes || []).map((ot) => ({
// Merge object types (avoid duplicates by id)
for (const ot of result.objectTypes) {
if (!objectTypeMap.has(ot.id)) {
objectTypeMap.set(ot.id, {
id: ot.id,
name: ot.name,
iconUrl: ot.iconUrl,
-})),
-results: transformedResults,
+});
}
}
// Sum up total counts
if (result.metadata?.total) {
totalCount += result.metadata.total;
}
}
// Apply limit to merged results
const limitedResults = allResults.slice(0, limit);
logger.info(`CMDB search: Found ${limitedResults.length} results (${allResults.length} total before limit) across ${schemaIds.length} schema(s)`);
return {
metadata: {
count: limitedResults.length,
offset: 0,
limit,
total: totalCount || limitedResults.length,
criteria: { query, type: 'FREETEXT', schema: schemaIds.length > 0 ? parseInt(schemaIds[0], 10) : 0 },
},
objectTypes: Array.from(objectTypeMap.values()),
results: limitedResults,
};
} catch (error) {
logger.error('CMDB search failed', error);
@@ -3099,12 +3381,18 @@ class JiraAssetsService {
let response: JiraAssetsSearchResponse;
if (this.isDataCenter) {
// Get schema ID for the object type (or first available)
const schemaId = await this.getSchemaIdForObjectType(objectType) || await this.getFirstSchemaId();
if (!schemaId) {
throw new Error('No schema ID available. Please configure object types in Schema Configuration settings.');
}
const params = new URLSearchParams({
iql: iqlQuery,
resultPerPage: '100',
includeAttributes: 'true',
-includeAttributesDeep: '1',
-objectSchemaId: config.jiraSchemaId,
+includeAttributesDeep: '2',
+objectSchemaId: schemaId,
});
response = await this.request<JiraAssetsSearchResponse>(
@@ -3206,37 +3494,56 @@ class JiraAssetsService {
}
/**
- * Pre-warm the team dashboard cache in background
- * This is called on server startup so users don't experience slow first load
+ * Pre-warm the full cache using the sync engine
+ * This is more efficient than pre-warming just the team dashboard,
+ * as it syncs all object types and their relations
+ * Checks cache status first to avoid unnecessary syncs
*/
-async preWarmTeamDashboardCache(): Promise<void> {
-try {
-// Only pre-warm if cache is empty
-if (this.teamDashboardCache) {
-logger.info('Team dashboard cache already warm, skipping pre-warm');
+async preWarmFullCache(): Promise<void> {
+// Prevent multiple simultaneous warming operations
+if (this.isWarming) {
+logger.debug('Cache warming already in progress, skipping duplicate request');
return;
}
-logger.info('Pre-warming team dashboard cache in background...');
+try {
+this.isWarming = true;
// Check if schema configuration is complete before attempting sync
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const isConfigured = await schemaConfigurationService.isConfigurationComplete();
if (!isConfigured) {
logger.info('Schema configuration not complete, skipping automatic cache pre-warming. Please configure object types in settings first.');
return;
}
// Check if cache is already warm before syncing
const { normalizedCacheStore } = await import('./normalizedCacheStore.js');
const isWarm = await normalizedCacheStore.isWarm();
if (isWarm) {
logger.info('Cache is already warm, skipping pre-warm');
return;
}
logger.info('Pre-warming full cache in background using sync engine...');
const startTime = Date.now();
-// Fetch with default excluded statuses (which is what most users will see)
-await this.getTeamDashboardData(['Closed', 'Deprecated']);
+const { syncEngine } = await import('./syncEngine.js');
+await syncEngine.fullSync();
const duration = Date.now() - startTime;
-logger.info(`Team dashboard cache pre-warmed in ${duration}ms`);
+logger.info(`Full cache pre-warmed in ${duration}ms`);
} catch (error) {
-logger.error('Failed to pre-warm team dashboard cache', error);
+logger.error('Failed to pre-warm full cache', error);
// Don't throw - pre-warming is optional
} finally {
this.isWarming = false;
}
}
}
export const jiraAssetsService = new JiraAssetsService();
-// Pre-warm team dashboard cache on startup (runs in background, doesn't block server start)
-setTimeout(() => {
-jiraAssetsService.preWarmTeamDashboardCache().catch(() => {
-// Error already logged in the method
-});
-}, 5000); // Wait 5 seconds after server start to avoid competing with other initialization
+// Note: Pre-warm cache removed - all syncs must be triggered manually from GUI
+// The preWarmFullCache() method is still available for manual API calls but won't auto-start


@@ -7,9 +7,12 @@
import { config } from '../config/env.js';
import { logger } from './logger.js';
-import { OBJECT_TYPES } from '../generated/jira-schema.js';
-import type { CMDBObject, CMDBObjectTypeName, ObjectReference } from '../generated/jira-types.js';
+import { schemaCacheService } from './schemaCacheService.js';
+import type { CMDBObject, ObjectReference } from '../generated/jira-types.js';
import type { JiraAssetsObject, JiraAssetsAttribute, JiraAssetsSearchResponse } from '../types/index.js';
import type { ObjectEntry, ObjectAttribute, ObjectAttributeValue, ReferenceValue, ConfluenceValue } from '../domain/jiraAssetsPayload.js';
import { isReferenceValue, isSimpleValue, hasAttributes } from '../domain/jiraAssetsPayload.js';
import { normalizedCacheStore } from './normalizedCacheStore.js';
// =============================================================================
// Types
@@ -31,14 +34,39 @@ export interface JiraUpdatePayload {
}>;
}
-// Map from Jira object type ID to our type name
-const TYPE_ID_TO_NAME: Record<number, CMDBObjectTypeName> = {};
-const JIRA_NAME_TO_TYPE: Record<string, CMDBObjectTypeName> = {};
+// Lookup maps - will be populated dynamically from database schema
+let TYPE_ID_TO_NAME: Record<number, string> = {};
+let JIRA_NAME_TO_TYPE: Record<string, string> = {};
+let OBJECT_TYPES_CACHE: Record<string, { jiraTypeId: number; name: string; attributes: Array<{ jiraId: number; name: string; fieldName: string; type: string; isMultiple?: boolean }> }> = {};
-// Build lookup maps from schema
-for (const [typeName, typeDef] of Object.entries(OBJECT_TYPES)) {
-TYPE_ID_TO_NAME[typeDef.jiraTypeId] = typeName as CMDBObjectTypeName;
-JIRA_NAME_TO_TYPE[typeDef.name] = typeName as CMDBObjectTypeName;
-}
/**
* Initialize lookup maps from database schema
*/
async function initializeLookupMaps(): Promise<void> {
try {
const schema = await schemaCacheService.getSchema();
OBJECT_TYPES_CACHE = {};
TYPE_ID_TO_NAME = {};
JIRA_NAME_TO_TYPE = {};
for (const [typeName, typeDef] of Object.entries(schema.objectTypes)) {
OBJECT_TYPES_CACHE[typeName] = {
jiraTypeId: typeDef.jiraTypeId,
name: typeDef.name,
attributes: typeDef.attributes.map(attr => ({
jiraId: attr.jiraId,
name: attr.name,
fieldName: attr.fieldName,
type: attr.type,
isMultiple: attr.isMultiple,
})),
};
TYPE_ID_TO_NAME[typeDef.jiraTypeId] = typeName;
JIRA_NAME_TO_TYPE[typeDef.name] = typeName;
}
} catch (error) {
logger.error('JiraAssetsClient: Failed to initialize lookup maps', error);
}
}
// =============================================================================
@@ -181,7 +209,8 @@ class JiraAssetsClient {
try {
await this.detectApiType();
-const response = await fetch(`${this.baseUrl}/objectschema/${config.jiraSchemaId}`, {
+// Test connection by fetching schemas list (no specific schema ID needed)
+const response = await fetch(`${this.baseUrl}/objectschema/list`, {
headers: this.getHeaders(false), // Read operation - uses service account token
});
return response.ok;
@@ -191,17 +220,35 @@ class JiraAssetsClient {
}
}
-async getObject(objectId: string): Promise<JiraAssetsObject | null> {
+/**
+ * Get raw ObjectEntry for an object (for recursive processing)
+ */
+async getObjectEntry(objectId: string): Promise<ObjectEntry | null> {
try {
// Include attributes and deep attributes to get full details of referenced objects (including descriptions)
-const url = `/object/${objectId}?includeAttributes=true&includeAttributesDeep=1`;
-return await this.request<JiraAssetsObject>(url, {}, false); // Read operation
+const url = `/object/${objectId}?includeAttributes=true&includeAttributesDeep=2`;
+const entry = await this.request<ObjectEntry>(url, {}, false) as unknown as ObjectEntry; // Read operation
+return entry;
} catch (error) {
// Check if this is a 404 (object not found / deleted)
if (error instanceof Error && error.message.includes('404')) {
logger.info(`JiraAssetsClient: Object ${objectId} not found in Jira (likely deleted)`);
throw new JiraObjectNotFoundError(objectId);
}
logger.error(`JiraAssetsClient: Failed to get object entry ${objectId}`, error);
return null;
}
}
async getObject(objectId: string): Promise<JiraAssetsObject | null> {
try {
const entry = await this.getObjectEntry(objectId);
if (!entry) return null;
return this.adaptObjectEntryToJiraAssetsObject(entry);
} catch (error) {
if (error instanceof JiraObjectNotFoundError) {
throw error;
}
logger.error(`JiraAssetsClient: Failed to get object ${objectId}`, error);
return null;
}
@@ -210,11 +257,26 @@ class JiraAssetsClient {
async searchObjects(
iql: string,
page: number = 1,
-pageSize: number = 50
-): Promise<{ objects: JiraAssetsObject[]; totalCount: number; hasMore: boolean }> {
+pageSize: number = 50,
+schemaId?: string
+): Promise<{
+objects: JiraAssetsObject[];
+totalCount: number;
+hasMore: boolean;
+referencedObjects?: Array<{ entry: ObjectEntry; typeName: string }>;
+rawEntries?: ObjectEntry[]; // Raw ObjectEntry format for recursive processing
+}> {
await this.detectApiType();
-let response: JiraAssetsSearchResponse;
+// Schema ID must be provided explicitly (no default from config)
+if (!schemaId) {
+throw new Error('Schema ID is required for searchObjects. Please provide schemaId parameter.');
+}
+const effectiveSchemaId = schemaId;
+// Use domain types for API requests
+let payload: { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number; page?: number; pageSize?: number };
if (this.isDataCenter) {
// Try modern AQL endpoint first
@@ -224,10 +286,10 @@ class JiraAssetsClient {
page: page.toString(),
resultPerPage: pageSize.toString(),
includeAttributes: 'true',
-includeAttributesDeep: '1',
-objectSchemaId: config.jiraSchemaId,
+includeAttributesDeep: '2',
+objectSchemaId: effectiveSchemaId,
});
-response = await this.request<JiraAssetsSearchResponse>(`/aql/objects?${params.toString()}`, {}, false); // Read operation
+payload = await this.request<{ objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number }>(`/aql/objects?${params.toString()}`, {}, false); // Read operation
} catch (error) {
// Fallback to deprecated IQL endpoint
logger.warn(`JiraAssetsClient: AQL endpoint failed, falling back to IQL: ${error}`);
@@ -236,51 +298,169 @@ class JiraAssetsClient {
page: page.toString(),
resultPerPage: pageSize.toString(),
includeAttributes: 'true',
-includeAttributesDeep: '1',
-objectSchemaId: config.jiraSchemaId,
+includeAttributesDeep: '2',
+objectSchemaId: effectiveSchemaId,
});
-response = await this.request<JiraAssetsSearchResponse>(`/iql/objects?${params.toString()}`, {}, false); // Read operation
+payload = await this.request<{ objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number }>(`/iql/objects?${params.toString()}`, {}, false); // Read operation
}
} else {
// Jira Cloud uses POST for AQL
-response = await this.request<JiraAssetsSearchResponse>('/aql/objects', {
+payload = await this.request<{ objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number }>('/aql/objects', {
method: 'POST',
body: JSON.stringify({
qlQuery: iql,
page,
resultPerPage: pageSize,
includeAttributes: true,
-includeAttributesDeep: 1, // Include attributes of referenced objects (e.g., descriptions)
+includeAttributesDeep: 2, // Include attributes of referenced objects (e.g., descriptions)
+objectSchemaId: effectiveSchemaId,
}),
}, false); // Read operation
}
// Adapt to legacy response format for backward compatibility
const response = this.adaptAssetsPayloadToSearchResponse({ ...payload, page, pageSize });
const totalCount = response.totalFilterCount || response.totalCount || 0;
const hasMore = response.objectEntries.length === pageSize && page * pageSize < totalCount;
// Note: referencedObjects extraction removed - recursive extraction now happens in storeObjectTree
// via extractNestedReferencedObjects, which processes the entire object tree recursively
return {
objects: response.objectEntries || [],
totalCount,
hasMore,
referencedObjects: undefined, // No longer used - recursive extraction handles this
rawEntries: payload.objectEntries || [], // Return raw entries for recursive processing
};
}
/**
* Recursively extract all nested referenced objects from an object entry
* This function traverses the object tree and extracts all referenced objects
* at any depth, preventing infinite loops with circular references.
*
* @param entry - The object entry to extract nested references from
* @param processedIds - Set of already processed object IDs (to prevent duplicates and circular refs)
* @param maxDepth - Maximum depth to traverse (default: 5)
* @param currentDepth - Current depth in the tree (default: 0)
* @returns Array of extracted referenced objects with their type names
*/
extractNestedReferencedObjects(
entry: ObjectEntry,
processedIds: Set<string>,
maxDepth: number = 5,
currentDepth: number = 0
): Array<{ entry: ObjectEntry; typeName: string }> {
const result: Array<{ entry: ObjectEntry; typeName: string }> = [];
// Prevent infinite recursion
if (currentDepth >= maxDepth) {
logger.debug(`JiraAssetsClient: [Recursive] Max depth (${maxDepth}) reached for object ${entry.objectKey || entry.id}`);
return result;
}
const entryId = String(entry.id);
// Skip if already processed (handles circular references)
if (processedIds.has(entryId)) {
logger.debug(`JiraAssetsClient: [Recursive] Skipping already processed object ${entry.objectKey || entry.id} (circular reference detected)`);
return result;
}
processedIds.add(entryId);
logger.debug(`JiraAssetsClient: [Recursive] Extracting nested references from ${entry.objectKey || entry.id} at depth ${currentDepth}`);
// Initialize lookup maps if needed
if (Object.keys(TYPE_ID_TO_NAME).length === 0) {
// initializeLookupMaps() is async, but this function can't be made async without
// breaking the call chain, so callers must initialize the maps beforehand
logger.warn('JiraAssetsClient: TYPE_ID_TO_NAME not initialized, type resolution may fail');
}
// Extract referenced objects from attributes
if (entry.attributes) {
for (const attr of entry.attributes) {
for (const val of attr.objectAttributeValues) {
if (isReferenceValue(val) && hasAttributes(val.referencedObject)) {
const refId = String(val.referencedObject.id);
// Skip if already processed
if (processedIds.has(refId)) {
continue;
}
const refTypeId = val.referencedObject.objectType?.id;
const refTypeName = TYPE_ID_TO_NAME[refTypeId] ||
JIRA_NAME_TO_TYPE[val.referencedObject.objectType?.name];
if (refTypeName) {
logger.debug(`JiraAssetsClient: [Recursive] Found nested reference: ${val.referencedObject.objectKey || refId} of type ${refTypeName} at depth ${currentDepth + 1}`);
// Add this referenced object to results
result.push({
entry: val.referencedObject as ObjectEntry,
typeName: refTypeName,
});
// Recursively extract nested references from this referenced object
const nested = this.extractNestedReferencedObjects(
val.referencedObject as ObjectEntry,
processedIds,
maxDepth,
currentDepth + 1
);
result.push(...nested);
} else {
logger.debug(`JiraAssetsClient: [Recursive] Could not resolve type name for referenced object ${refId} (typeId: ${refTypeId}, typeName: ${val.referencedObject.objectType?.name})`);
}
}
}
}
}
if (result.length > 0) {
logger.debug(`JiraAssetsClient: [Recursive] Extracted ${result.length} nested references from ${entry.objectKey || entry.id} at depth ${currentDepth}`);
}
return result;
}
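The traversal above can be sketched on a simplified node shape: a visited set stops circular references, and maxDepth bounds the walk. RefNode here is a stand-in for Jira's ObjectEntry, not the real type:

```typescript
// Simplified stand-in for an ObjectEntry with reference attributes.
interface RefNode {
  id: string;
  refs: RefNode[];
}

// Collects every node reachable from `node` exactly once, depth-bounded,
// mirroring the visited-set and maxDepth logic of extractNestedReferencedObjects.
function extractRefs(
  node: RefNode,
  visited: Set<string>,
  maxDepth = 5,
  depth = 0
): RefNode[] {
  const out: RefNode[] = [];
  if (depth >= maxDepth || visited.has(node.id)) return out; // bound + circular guard
  visited.add(node.id);
  for (const ref of node.refs) {
    if (visited.has(ref.id)) continue; // already collected (or circular)
    out.push(ref);
    out.push(...extractRefs(ref, visited, maxDepth, depth + 1));
  }
  return out;
}
```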
/**
* Get the total count of objects for a specific type from Jira Assets
* This is more efficient than fetching all objects when you only need the count
* @param typeName - Type name (from database, e.g. "ApplicationComponent")
* @param schemaId - Optional schema ID (if not provided, uses mapping or default)
*/
-async getObjectCount(typeName: CMDBObjectTypeName): Promise<number> {
-const typeDef = OBJECT_TYPES[typeName];
+async getObjectCount(typeName: string, schemaId?: string): Promise<number> {
+// Ensure lookup maps are initialized
+if (Object.keys(OBJECT_TYPES_CACHE).length === 0) {
+await initializeLookupMaps();
+}
+const typeDef = OBJECT_TYPES_CACHE[typeName];
if (!typeDef) {
logger.warn(`JiraAssetsClient: Unknown type ${typeName}`);
return 0;
}
try {
// Get schema ID from mapping service if not provided
let effectiveSchemaId = schemaId;
if (!effectiveSchemaId) {
const { schemaMappingService } = await import('./schemaMappingService.js');
effectiveSchemaId = await schemaMappingService.getSchemaId(typeName);
}
// Skip if no schema ID is available (object type not configured)
if (!effectiveSchemaId || effectiveSchemaId.trim() === '') {
logger.debug(`JiraAssetsClient: No schema ID configured for ${typeName}, returning 0`);
return 0;
}
const iql = `objectType = "${typeDef.name}"`;
// Use pageSize=1 to minimize data transfer, since we only need the totalCount
const result = await this.searchObjects(iql, 1, 1, effectiveSchemaId);
logger.debug(`JiraAssetsClient: ${typeName} has ${result.totalCount} objects in Jira Assets (schema: ${effectiveSchemaId})`);
return result.totalCount;
} catch (error) {
logger.error(`JiraAssetsClient: Failed to get count for ${typeName}`, error);
@@ -289,29 +469,64 @@ class JiraAssetsClient {
}
async getAllObjectsOfType(
typeName: string,
batchSize: number = 40,
schemaId?: string
): Promise<{
objects: JiraAssetsObject[];
referencedObjects: Array<{ entry: ObjectEntry; typeName: string }>;
rawEntries?: ObjectEntry[]; // Raw ObjectEntry format for recursive processing
}> {
// If typeName is a display name (not in cache), use it directly for IQL query
// Otherwise, look up the type definition
let objectTypeName = typeName;
// Try to find in cache first
if (Object.keys(OBJECT_TYPES_CACHE).length === 0) {
await initializeLookupMaps();
}
const typeDef = OBJECT_TYPES_CACHE[typeName];
if (typeDef) {
objectTypeName = typeDef.name; // Use the Jira name from cache
} else {
// Type not in cache - assume typeName is already the Jira display name
logger.debug(`JiraAssetsClient: Type ${typeName} not in cache, using as display name directly`);
}
// Get schema ID from mapping service if not provided
let effectiveSchemaId = schemaId;
if (!effectiveSchemaId) {
const { schemaMappingService } = await import('./schemaMappingService.js');
effectiveSchemaId = await schemaMappingService.getSchemaId(typeName);
}
if (!effectiveSchemaId) {
throw new Error(`No schema ID available for object type ${typeName}`);
}
const allObjects: JiraAssetsObject[] = [];
const rawEntries: ObjectEntry[] = []; // Store raw entries for recursive processing
let page = 1;
let hasMore = true;
while (hasMore) {
const iql = `objectType = "${objectTypeName}"`;
const result = await this.searchObjects(iql, page, batchSize, effectiveSchemaId);
allObjects.push(...result.objects);
// Collect raw entries for recursive processing
if (result.rawEntries) {
rawEntries.push(...result.rawEntries);
}
hasMore = result.hasMore;
page++;
}
logger.info(`JiraAssetsClient: Fetched ${allObjects.length} ${typeName} objects from schema ${effectiveSchemaId} (raw entries: ${rawEntries.length})`);
// Note: referencedObjects no longer collected - recursive extraction in storeObjectTree handles nested objects
return { objects: allObjects, referencedObjects: [], rawEntries };
}
async getUpdatedObjectsSince(
@@ -357,38 +572,232 @@ class JiraAssetsClient {
}
}
// ==========================================================================
// Adapter Functions (temporary - for backward compatibility)
// ==========================================================================
/**
* Adapt ObjectEntry from domain types to legacy JiraAssetsObject type
* This is a temporary adapter during migration
* Handles both ObjectEntry (domain) and legacy JiraAssetsObject formats
*/
adaptObjectEntryToJiraAssetsObject(entry: ObjectEntry | JiraAssetsObject | null): JiraAssetsObject | null {
if (!entry) return null;
// Check if already in legacy format (has 'attributes' as array with JiraAssetsAttribute)
if ('attributes' in entry && Array.isArray(entry.attributes) && entry.attributes.length > 0 && 'objectTypeAttributeId' in entry.attributes[0] && !('id' in entry.attributes[0])) {
// Validate the legacy format object has required fields
const legacyObj = entry as JiraAssetsObject;
if (legacyObj.id === null || legacyObj.id === undefined) {
logger.warn(`JiraAssetsClient: Legacy object missing id. ObjectKey: ${legacyObj.objectKey}, Label: ${legacyObj.label}`);
return null;
}
if (!legacyObj.objectKey || !String(legacyObj.objectKey).trim()) {
logger.warn(`JiraAssetsClient: Legacy object missing objectKey. ID: ${legacyObj.id}, Label: ${legacyObj.label}`);
return null;
}
if (!legacyObj.label || !String(legacyObj.label).trim()) {
logger.warn(`JiraAssetsClient: Legacy object missing label. ID: ${legacyObj.id}, ObjectKey: ${legacyObj.objectKey}`);
return null;
}
return legacyObj;
}
// Convert from ObjectEntry format
const domainEntry = entry as ObjectEntry;
// Validate required fields before conversion
if (domainEntry.id === null || domainEntry.id === undefined) {
logger.warn(`JiraAssetsClient: ObjectEntry missing id. ObjectKey: ${domainEntry.objectKey}, Label: ${domainEntry.label}`);
return null;
}
if (!domainEntry.objectKey || !String(domainEntry.objectKey).trim()) {
logger.warn(`JiraAssetsClient: ObjectEntry missing objectKey. ID: ${domainEntry.id}, Label: ${domainEntry.label}`);
return null;
}
if (!domainEntry.label || !String(domainEntry.label).trim()) {
logger.warn(`JiraAssetsClient: ObjectEntry missing label. ID: ${domainEntry.id}, ObjectKey: ${domainEntry.objectKey}`);
return null;
}
// Convert id - ensure it's a number
let objectId: number;
if (typeof domainEntry.id === 'string') {
const parsed = parseInt(domainEntry.id, 10);
if (isNaN(parsed)) {
logger.warn(`JiraAssetsClient: ObjectEntry id cannot be parsed as number: ${domainEntry.id}`);
return null;
}
objectId = parsed;
} else if (typeof domainEntry.id === 'number') {
objectId = domainEntry.id;
} else {
logger.warn(`JiraAssetsClient: ObjectEntry id has invalid type: ${typeof domainEntry.id}`);
return null;
}
return {
id: objectId,
objectKey: String(domainEntry.objectKey).trim(),
label: String(domainEntry.label).trim(),
objectType: domainEntry.objectType,
created: domainEntry.created || new Date().toISOString(),
updated: domainEntry.updated || new Date().toISOString(),
attributes: (domainEntry.attributes || []).map(attr => this.adaptObjectAttributeToJiraAssetsAttribute(attr)),
};
}
/**
* Adapt ObjectAttribute from domain types to legacy JiraAssetsAttribute type
*/
private adaptObjectAttributeToJiraAssetsAttribute(attr: ObjectAttribute): JiraAssetsAttribute {
return {
objectTypeAttributeId: attr.objectTypeAttributeId,
objectTypeAttribute: undefined, // Not in domain type, will be populated from schema if needed
objectAttributeValues: attr.objectAttributeValues.map(val => this.adaptObjectAttributeValue(val)),
};
}
/**
* Adapt ObjectAttributeValue from domain types to legacy format
*/
private adaptObjectAttributeValue(val: ObjectAttributeValue): {
value?: string;
displayValue?: string;
referencedObject?: { id: number; objectKey: string; label: string };
status?: { name: string };
} {
if (isReferenceValue(val)) {
const ref = val.referencedObject;
return {
displayValue: val.displayValue,
referencedObject: {
id: typeof ref.id === 'string' ? parseInt(ref.id, 10) : ref.id,
objectKey: ref.objectKey,
label: ref.label,
},
};
}
if (isSimpleValue(val)) {
return {
value: String(val.value),
displayValue: val.displayValue,
};
}
// StatusValue, ConfluenceValue, UserValue
return {
displayValue: val.displayValue,
status: 'status' in val ? { name: val.status.name } : undefined,
};
}
/**
* Adapt AssetsPayload (from domain types) to legacy JiraAssetsSearchResponse
*/
private adaptAssetsPayloadToSearchResponse(
payload: { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number; page?: number; pageSize?: number }
): JiraAssetsSearchResponse {
return {
objectEntries: payload.objectEntries.map(entry => this.adaptObjectEntryToJiraAssetsObject(entry)!).filter(Boolean),
totalCount: payload.totalCount || 0,
totalFilterCount: payload.totalFilterCount,
page: payload.page || 1,
pageSize: payload.pageSize || 50,
};
}
// ==========================================================================
// Object Parsing
// ==========================================================================
async parseObject<T extends CMDBObject>(jiraObj: JiraAssetsObject): Promise<T | null> {
// Ensure lookup maps are initialized
if (Object.keys(OBJECT_TYPES_CACHE).length === 0) {
await initializeLookupMaps();
}
const typeId = jiraObj.objectType?.id;
const typeName = TYPE_ID_TO_NAME[typeId] || JIRA_NAME_TO_TYPE[jiraObj.objectType?.name];
if (!typeName) {
// This is expected when repairing broken references - object types may not be configured
logger.debug(`JiraAssetsClient: Unknown object type for object ${jiraObj.objectKey || jiraObj.id}: ${jiraObj.objectType?.name} (ID: ${typeId}) - object type not configured, skipping`);
return null;
}
const typeDef = OBJECT_TYPES_CACHE[typeName];
if (!typeDef) {
logger.warn(`JiraAssetsClient: Type definition not found for type: ${typeName} (object: ${jiraObj.objectKey || jiraObj.id})`);
return null;
}
// Validate required fields from Jira object
if (jiraObj.id === null || jiraObj.id === undefined) {
logger.warn(`JiraAssetsClient: Object missing id field. ObjectKey: ${jiraObj.objectKey}, Label: ${jiraObj.label}, Type: ${jiraObj.objectType?.name}`);
throw new Error(`Cannot parse Jira object: missing id field`);
}
if (!jiraObj.objectKey || !String(jiraObj.objectKey).trim()) {
logger.warn(`JiraAssetsClient: Object missing objectKey. ID: ${jiraObj.id}, Label: ${jiraObj.label}, Type: ${jiraObj.objectType?.name}`);
throw new Error(`Cannot parse Jira object ${jiraObj.id}: missing objectKey`);
}
if (!jiraObj.label || !String(jiraObj.label).trim()) {
logger.warn(`JiraAssetsClient: Object missing label. ID: ${jiraObj.id}, ObjectKey: ${jiraObj.objectKey}, Type: ${jiraObj.objectType?.name}`);
throw new Error(`Cannot parse Jira object ${jiraObj.id}: missing label`);
}
// Ensure we have valid values before creating the result
const objectId = String(jiraObj.id || '');
const objectKey = String(jiraObj.objectKey || '').trim();
const label = String(jiraObj.label || '').trim();
// Double-check after conversion (in case String() produced "null" or "undefined")
if (!objectId || objectId === 'null' || objectId === 'undefined' || objectId === 'NaN') {
logger.error(`JiraAssetsClient: parseObject - invalid id after conversion. Original: ${jiraObj.id}, Converted: ${objectId}`);
throw new Error(`Cannot parse Jira object: invalid id after conversion (${objectId})`);
}
if (!objectKey || objectKey === 'null' || objectKey === 'undefined') {
logger.error(`JiraAssetsClient: parseObject - invalid objectKey after conversion. Original: ${jiraObj.objectKey}, Converted: ${objectKey}`);
throw new Error(`Cannot parse Jira object: invalid objectKey after conversion (${objectKey})`);
}
if (!label || label === 'null' || label === 'undefined') {
logger.error(`JiraAssetsClient: parseObject - invalid label after conversion. Original: ${jiraObj.label}, Converted: ${label}`);
throw new Error(`Cannot parse Jira object: invalid label after conversion (${label})`);
}
const result: Record<string, unknown> = {
id: objectId,
objectKey: objectKey,
label: label,
_objectType: typeName,
_jiraUpdatedAt: jiraObj.updated || new Date().toISOString(),
_jiraCreatedAt: jiraObj.created || new Date().toISOString(),
};
// Parse each attribute based on schema
// IMPORTANT: Don't allow attributes to overwrite id, objectKey, or label
const protectedFields = new Set(['id', 'objectKey', 'label', '_objectType', '_jiraUpdatedAt', '_jiraCreatedAt']);
for (const attrDef of typeDef.attributes) {
// Skip if this attribute would overwrite a protected field
if (protectedFields.has(attrDef.fieldName)) {
logger.warn(`JiraAssetsClient: Skipping attribute ${attrDef.fieldName} (${attrDef.name}) - would overwrite protected field`);
continue;
}
const jiraAttr = this.findAttribute(jiraObj.attributes, attrDef.jiraId, attrDef.name);
const parsedValue = this.parseAttributeValue(jiraAttr, {
type: attrDef.type,
isMultiple: attrDef.isMultiple ?? false, // Default to false if not specified
fieldName: attrDef.fieldName,
});
result[attrDef.fieldName] = parsedValue;
// Debug logging for Confluence Space field
@@ -420,6 +829,51 @@ class JiraAssetsClient {
}
}
// Final validation - ensure result has required fields
// This should never fail if the code above worked correctly, but it's a safety check
const finalId = String(result.id || '').trim();
const finalObjectKey = String(result.objectKey || '').trim();
const finalLabel = String(result.label || '').trim();
if (!finalId || finalId === 'null' || finalId === 'undefined' || finalId === 'NaN') {
logger.error(`JiraAssetsClient: parseObject result missing or invalid id after all processing. Result: ${JSON.stringify({
hasId: 'id' in result,
hasObjectKey: 'objectKey' in result,
hasLabel: 'label' in result,
id: result.id,
objectKey: result.objectKey,
label: result.label,
resultKeys: Object.keys(result),
jiraObj: {
id: jiraObj.id,
objectKey: jiraObj.objectKey,
label: jiraObj.label,
objectType: jiraObj.objectType?.name
}
})}`);
throw new Error(`Failed to parse Jira object: result missing or invalid id (${finalId})`);
}
if (!finalObjectKey || finalObjectKey === 'null' || finalObjectKey === 'undefined') {
logger.error(`JiraAssetsClient: parseObject result missing or invalid objectKey after all processing. Result: ${JSON.stringify({
id: result.id,
objectKey: result.objectKey,
label: result.label,
resultKeys: Object.keys(result)
})}`);
throw new Error(`Failed to parse Jira object: result missing or invalid objectKey (${finalObjectKey})`);
}
if (!finalLabel || finalLabel === 'null' || finalLabel === 'undefined') {
logger.error(`JiraAssetsClient: parseObject result missing or invalid label after all processing. Result: ${JSON.stringify({
id: result.id,
objectKey: result.objectKey,
label: result.label,
resultKeys: Object.keys(result)
})}`);
throw new Error(`Failed to parse Jira object: result missing or invalid label (${finalLabel})`);
}
return result as T;
}
@@ -449,22 +903,24 @@ class JiraAssetsClient {
return attrDef.isMultiple ? [] : null;
}
// Convert legacy attribute values to domain types for type guard usage
// This allows us to use the type guards while maintaining backward compatibility
const values = jiraAttr.objectAttributeValues as unknown as ObjectAttributeValue[];
// Use type guards from domain types
// Generic Confluence field detection: check if any value has a confluencePage
// This works for all Confluence fields regardless of their declared type (float, text, etc.)
const hasConfluencePage = values.some(v => 'confluencePage' in v && v.confluencePage);
if (hasConfluencePage) {
const confluenceVal = values.find(v => 'confluencePage' in v && v.confluencePage) as ConfluenceValue | undefined;
if (confluenceVal?.confluencePage?.url) {
logger.info(`[Confluence Field Parse] Found Confluence URL for field "${attrDef.fieldName || 'unknown'}": ${confluenceVal.confluencePage.url}`);
// For multiple values, return array of URLs; for single, return the URL string
if (attrDef.isMultiple) {
return values
.filter((v): v is ConfluenceValue => 'confluencePage' in v && !!v.confluencePage)
.map(v => v.confluencePage.url);
}
return confluenceVal.confluencePage.url;
}
// Fallback to displayValue if no URL
const displayVal = values[0]?.displayValue;
@@ -477,12 +933,13 @@ class JiraAssetsClient {
switch (attrDef.type) {
case 'reference': {
// Use type guard to filter reference values
const refs = values
.filter(isReferenceValue)
.map(v => ({
objectId: String(v.referencedObject.id),
objectKey: v.referencedObject.objectKey,
label: v.referencedObject.label,
} as ObjectReference));
return attrDef.isMultiple ? refs : refs[0] || null;
}
@@ -493,7 +950,14 @@ class JiraAssetsClient {
case 'email':
case 'select':
case 'user': {
// Use type guard for simple values when available, otherwise fall back to legacy format
const firstVal = values[0];
let val: string | null = null;
if (isSimpleValue(firstVal)) {
val = String(firstVal.value);
} else {
val = firstVal?.displayValue ?? (firstVal as any)?.value ?? null;
}
// Strip HTML if present
if (val && typeof val === 'string' && val.includes('<')) {
return this.stripHtml(val);
@@ -502,14 +966,24 @@ class JiraAssetsClient {
}
case 'integer': {
const firstVal = values[0];
if (isSimpleValue(firstVal)) {
const val = typeof firstVal.value === 'number' ? firstVal.value : parseInt(String(firstVal.value), 10);
return isNaN(val) ? null : val;
}
const val = (firstVal as any)?.value;
return val ? parseInt(String(val), 10) : null;
}
case 'float': {
// Regular float parsing
const firstVal = values[0];
if (isSimpleValue(firstVal)) {
const val = typeof firstVal.value === 'number' ? firstVal.value : parseFloat(String(firstVal.value));
return isNaN(val) ? null : val;
}
const val = (firstVal as any)?.value;
const displayVal = firstVal?.displayValue;
// Try displayValue first, then value
if (displayVal !== undefined && displayVal !== null) {
const parsed = typeof displayVal === 'string' ? parseFloat(displayVal) : Number(displayVal);
@@ -523,25 +997,37 @@ class JiraAssetsClient {
}
case 'boolean': {
const firstVal = values[0];
if (isSimpleValue(firstVal)) {
return Boolean(firstVal.value);
}
const val = (firstVal as any)?.value;
return val === 'true' || val === 'Ja';
}
case 'date':
case 'datetime': {
const firstVal = values[0];
if (isSimpleValue(firstVal)) {
return String(firstVal.value);
}
return firstVal?.displayValue ?? (firstVal as any)?.value ?? null;
}
case 'status': {
const firstVal = values[0];
if ('status' in firstVal && firstVal.status) {
return firstVal.status.name || null;
}
return firstVal?.displayValue ?? (firstVal as any)?.value ?? null;
}
default:
const firstVal = values[0];
if (isSimpleValue(firstVal)) {
return String(firstVal.value);
}
return firstVal?.displayValue ?? (firstVal as any)?.value ?? null;
}
}
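The parser above branches on discriminating type guards (`isReferenceValue`, `isSimpleValue`) imported from the domain types module. A hedged sketch of how such guards might be shaped — the interfaces and field names here are assumptions inferred from how the switch uses them, not the project's actual definitions:

```typescript
// Hypothetical shapes, inferred from usage in parseAttributeValue.
interface ReferenceValue {
  referencedObject: { id: number | string; objectKey: string; label: string };
  displayValue?: string;
}
interface SimpleValue {
  value: string | number | boolean;
  displayValue?: string;
}
type ObjectAttributeValue = ReferenceValue | SimpleValue | { displayValue?: string };

function isReferenceValue(v: ObjectAttributeValue): v is ReferenceValue {
  // A reference value carries the nested referencedObject payload
  return "referencedObject" in v && v.referencedObject != null;
}

function isSimpleValue(v: ObjectAttributeValue): v is SimpleValue {
  // A simple value carries a primitive `value` field
  return "value" in v && (v as SimpleValue).value !== undefined;
}
```

Narrowing through a guard like this is what lets the branches access `v.referencedObject` or `v.value` without the `!` assertions the old code needed.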


@@ -1,5 +1,12 @@
import winston from 'winston';
import { config } from '../config/env.js';
import * as fs from 'fs';
import * as path from 'path';
import { fileURLToPath } from 'url';
import { dirname } from 'path';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const { combine, timestamp, printf, colorize, errors } = winston.format;
@@ -25,16 +32,54 @@ export const logger = winston.createLogger({
],
});
// Only add file logging if we're in production AND have write permissions
// In Azure App Service, console logging is automatically captured, so file logging is optional
// Detect Azure App Service environment
const isAzureAppService = !!(
process.env.AZURE_APP_SERVICE ||
process.env.WEBSITE_SITE_NAME ||
process.env.WEBSITE_INSTANCE_ID ||
process.env.AzureWebJobsStorage
);
if (config.isProduction && !isAzureAppService) {
// For non-Azure environments, try to use logs/ directory
const logDir = path.join(__dirname, '../../logs');
try {
// Ensure directory exists
if (!fs.existsSync(logDir)) {
fs.mkdirSync(logDir, { recursive: true });
}
// Test write permissions
const testFile = path.join(logDir, '.write-test');
try {
fs.writeFileSync(testFile, 'test');
fs.unlinkSync(testFile);
// If we can write, add file transports
logger.add(
new winston.transports.File({
filename: path.join(logDir, 'error.log'),
level: 'error',
})
);
logger.add(
new winston.transports.File({
filename: path.join(logDir, 'combined.log'),
})
);
} catch (writeError) {
// Can't write to this directory, skip file logging
// Console logging will be used (which is fine for Azure App Service)
console.warn('File logging disabled: no write permissions. Using console logging only.');
}
} catch (dirError) {
// Can't create directory, skip file logging
console.warn('File logging disabled: cannot create log directory. Using console logging only.');
}
}
// In Azure App Service, console logs are automatically captured by Azure Monitor
// No need for file logging


@@ -1,893 +0,0 @@
import { calculateRequiredEffortApplicationManagement } from './effortCalculation.js';
import type {
ApplicationDetails,
ApplicationListItem,
ReferenceValue,
SearchFilters,
SearchResult,
ClassificationResult,
TeamDashboardData,
ApplicationStatus,
} from '../types/index.js';
// Mock application data for development/demo
const mockApplications: ApplicationDetails[] = [
{
id: '1',
key: 'APP-001',
name: 'Epic Hyperspace',
searchReference: 'EPIC-HS',
description: 'Elektronisch Patiëntendossier module voor klinische documentatie en workflow. Ondersteunt de volledige patiëntenzorg van intake tot ontslag.',
supplierProduct: 'Epic Systems / Hyperspace',
organisation: 'Zorg',
hostingType: { objectId: '1', key: 'HOST-1', name: 'On-premises' },
status: 'In Production',
businessImportance: 'Kritiek',
businessImpactAnalyse: { objectId: '1', key: 'BIA-1', name: 'BIA-2024-0042 (Klasse E)' },
systemOwner: 'J. Janssen',
businessOwner: 'Dr. A. van der Berg',
functionalApplicationManagement: 'Team EPD',
technicalApplicationManagement: 'Team Zorgapplicaties',
technicalApplicationManagementPrimary: 'Jan Jansen',
technicalApplicationManagementSecondary: 'Piet Pietersen',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '3', key: 'DYN-3', name: '3 - Hoog' },
complexityFactor: { objectId: '4', key: 'CMP-4', name: '4 - Zeer hoog' },
numberOfUsers: null,
governanceModel: { objectId: 'A', key: 'GOV-A', name: 'Centraal Beheer' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '2',
key: 'APP-002',
name: 'SAP Finance',
searchReference: 'SAP-FIN',
description: 'Enterprise Resource Planning systeem voor financiële administratie, budgettering en controlling.',
supplierProduct: 'SAP SE / SAP S/4HANA',
organisation: 'Bedrijfsvoering',
hostingType: { objectId: '3', key: 'HOST-3', name: 'Cloud' },
status: 'In Production',
businessImportance: 'Kritiek',
businessImpactAnalyse: { objectId: '2', key: 'BIA-2', name: 'BIA-2024-0015 (Klasse D)' },
systemOwner: 'M. de Groot',
businessOwner: 'P. Bakker',
functionalApplicationManagement: 'Team ERP',
technicalApplicationManagement: 'Team Bedrijfsapplicaties',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '2', key: 'DYN-2', name: '2 - Gemiddeld' },
complexityFactor: { objectId: '3', key: 'CMP-3', name: '3 - Hoog' },
numberOfUsers: null,
governanceModel: null,
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '3',
key: 'APP-003',
name: 'Philips IntelliSpace PACS',
searchReference: 'PACS',
description: 'Picture Archiving and Communication System voor opslag en weergave van medische beelden inclusief radiologie, CT en MRI.',
supplierProduct: 'Philips Healthcare / IntelliSpace PACS',
organisation: 'Zorg',
hostingType: { objectId: '1', key: 'HOST-1', name: 'On-premises' },
status: 'In Production',
businessImportance: 'Hoog',
businessImpactAnalyse: { objectId: '3', key: 'BIA-3', name: 'BIA-2024-0028 (Klasse D)' },
systemOwner: 'R. Hermans',
businessOwner: 'Dr. K. Smit',
functionalApplicationManagement: 'Team Beeldvorming',
technicalApplicationManagement: 'Team Zorgapplicaties',
medischeTechniek: true,
applicationFunctions: [],
dynamicsFactor: null,
complexityFactor: null,
numberOfUsers: null,
governanceModel: { objectId: 'C', key: 'GOV-C', name: 'Uitbesteed met ICMT-Regie' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '4',
key: 'APP-004',
name: 'ChipSoft HiX',
searchReference: 'HIX',
description: 'Ziekenhuisinformatiesysteem en EPD voor patiëntregistratie, zorgplanning en klinische workflow.',
supplierProduct: 'ChipSoft / HiX',
organisation: 'Zorg',
hostingType: { objectId: '1', key: 'HOST-1', name: 'On-premises' },
status: 'In Production',
businessImportance: 'Kritiek',
businessImpactAnalyse: { objectId: '5', key: 'BIA-5', name: 'BIA-2024-0001 (Klasse F)' },
systemOwner: 'T. van Dijk',
businessOwner: 'Dr. L. Mulder',
functionalApplicationManagement: 'Team ZIS',
technicalApplicationManagement: 'Team Zorgapplicaties',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '4', key: 'DYN-4', name: '4 - Zeer hoog' },
complexityFactor: { objectId: '4', key: 'CMP-4', name: '4 - Zeer hoog' },
numberOfUsers: null,
governanceModel: { objectId: 'A', key: 'GOV-A', name: 'Centraal Beheer' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '5',
key: 'APP-005',
name: 'TOPdesk',
searchReference: 'TOPDESK',
description: 'IT Service Management platform voor incident, problem en change management.',
supplierProduct: 'TOPdesk / TOPdesk Enterprise',
organisation: 'ICMT',
hostingType: { objectId: '2', key: 'HOST-2', name: 'SaaS' },
status: 'In Production',
businessImportance: 'Hoog',
businessImpactAnalyse: { objectId: '6', key: 'BIA-6', name: 'BIA-2024-0055 (Klasse C)' },
systemOwner: 'B. Willems',
businessOwner: 'H. Claessen',
functionalApplicationManagement: 'Team Servicedesk',
technicalApplicationManagement: 'Team ICT Beheer',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '2', key: 'DYN-2', name: '2 - Gemiddeld' },
complexityFactor: { objectId: '2', key: 'CMP-2', name: '2 - Gemiddeld' },
numberOfUsers: null,
governanceModel: null,
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '6',
key: 'APP-006',
name: 'Microsoft 365',
searchReference: 'M365',
description: 'Kantoorautomatisering suite met Teams, Outlook, SharePoint, OneDrive en Office applicaties.',
supplierProduct: 'Microsoft / Microsoft 365 E5',
organisation: 'ICMT',
hostingType: { objectId: '2', key: 'HOST-2', name: 'SaaS' },
status: 'In Production',
businessImportance: 'Kritiek',
businessImpactAnalyse: { objectId: '1', key: 'BIA-1', name: 'BIA-2024-0042 (Klasse E)' },
systemOwner: 'S. Jansen',
businessOwner: 'N. Peters',
functionalApplicationManagement: 'Team Werkplek',
technicalApplicationManagement: 'Team Cloud',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '3', key: 'DYN-3', name: '3 - Hoog' },
complexityFactor: { objectId: '3', key: 'CMP-3', name: '3 - Hoog' },
numberOfUsers: { objectId: '7', key: 'USR-7', name: '> 15.000' },
governanceModel: { objectId: 'A', key: 'GOV-A', name: 'Centraal Beheer' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '7',
key: 'APP-007',
name: 'Carestream Vue PACS',
searchReference: 'VUE-PACS',
description: 'Enterprise imaging platform voor radiologie en cardiologie beeldvorming.',
supplierProduct: 'Carestream Health / Vue PACS',
organisation: 'Zorg',
hostingType: { objectId: '1', key: 'HOST-1', name: 'On-premises' },
status: 'End of life',
businessImportance: 'Gemiddeld',
businessImpactAnalyse: { objectId: '9', key: 'BIA-9', name: 'BIA-2022-0089 (Klasse C)' },
systemOwner: 'R. Hermans',
businessOwner: 'Dr. K. Smit',
functionalApplicationManagement: 'Team Beeldvorming',
technicalApplicationManagement: 'Team Zorgapplicaties',
medischeTechniek: true,
applicationFunctions: [],
dynamicsFactor: { objectId: '1', key: 'DYN-1', name: '1 - Stabiel' },
complexityFactor: { objectId: '2', key: 'CMP-2', name: '2 - Gemiddeld' },
numberOfUsers: null,
governanceModel: null,
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '8',
key: 'APP-008',
name: 'AFAS Profit',
searchReference: 'AFAS',
description: 'HR en salarisadministratie systeem voor personeelsbeheer, tijdregistratie en verloning.',
supplierProduct: 'AFAS Software / Profit',
organisation: 'Bedrijfsvoering',
hostingType: { objectId: '2', key: 'HOST-2', name: 'SaaS' },
status: 'In Production',
businessImportance: 'Hoog',
businessImpactAnalyse: { objectId: '7', key: 'BIA-7', name: 'BIA-2024-0022 (Klasse D)' },
systemOwner: 'E. Hendriks',
businessOwner: 'C. van Leeuwen',
functionalApplicationManagement: 'Team HR',
technicalApplicationManagement: 'Team Bedrijfsapplicaties',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '2', key: 'DYN-2', name: '2 - Gemiddeld' },
complexityFactor: { objectId: '2', key: 'CMP-2', name: '2 - Gemiddeld' },
numberOfUsers: { objectId: '6', key: 'USR-6', name: '10.000 - 15.000' },
governanceModel: { objectId: 'C', key: 'GOV-C', name: 'Uitbesteed met ICMT-Regie' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '9',
key: 'APP-009',
name: 'Zenya',
searchReference: 'ZENYA',
description: 'Kwaliteitsmanagementsysteem voor protocollen, procedures en incidentmeldingen.',
supplierProduct: 'Infoland / Zenya',
organisation: 'Kwaliteit',
hostingType: { objectId: '2', key: 'HOST-2', name: 'SaaS' },
status: 'In Production',
businessImportance: 'Hoog',
businessImpactAnalyse: { objectId: '8', key: 'BIA-8', name: 'BIA-2024-0067 (Klasse C)' },
systemOwner: 'F. Bos',
businessOwner: 'I. Dekker',
functionalApplicationManagement: 'Team Kwaliteit',
technicalApplicationManagement: 'Team Bedrijfsapplicaties',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '2', key: 'DYN-2', name: '2 - Gemiddeld' },
complexityFactor: { objectId: '1', key: 'CMP-1', name: '1 - Laag' },
numberOfUsers: { objectId: '4', key: 'USR-4', name: '2.000 - 5.000' },
governanceModel: { objectId: 'C', key: 'GOV-C', name: 'Uitbesteed met ICMT-Regie' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '10',
key: 'APP-010',
name: 'Castor EDC',
searchReference: 'CASTOR',
description: 'Electronic Data Capture platform voor klinisch wetenschappelijk onderzoek en trials.',
supplierProduct: 'Castor / Castor EDC',
organisation: 'Onderzoek',
hostingType: { objectId: '2', key: 'HOST-2', name: 'SaaS' },
status: 'In Production',
businessImportance: 'Gemiddeld',
businessImpactAnalyse: null, // BIA-2024-0078 (Klasse B) not in mock list
systemOwner: 'G. Vos',
businessOwner: 'Prof. Dr. W. Maas',
functionalApplicationManagement: 'Team Onderzoek',
technicalApplicationManagement: null,
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '1', key: 'DYN-1', name: '1 - Stabiel' },
complexityFactor: { objectId: '1', key: 'CMP-1', name: '1 - Laag' },
numberOfUsers: { objectId: '1', key: 'USR-1', name: '< 100' },
governanceModel: { objectId: 'D', key: 'GOV-D', name: 'Uitbesteed met Business-Regie' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
];
// Mock reference data
const mockDynamicsFactors: ReferenceValue[] = [
{ objectId: '1', key: 'DYN-1', name: '1 - Stabiel', summary: 'Weinig wijzigingen, < 2 releases/jaar', description: 'Weinig wijzigingen, < 2 releases/jaar', factor: 0.8 },
{ objectId: '2', key: 'DYN-2', name: '2 - Gemiddeld', summary: 'Regelmatige wijzigingen, 2-4 releases/jaar', description: 'Regelmatige wijzigingen, 2-4 releases/jaar', factor: 1.0 },
{ objectId: '3', key: 'DYN-3', name: '3 - Hoog', summary: 'Veel wijzigingen, > 4 releases/jaar', description: 'Veel wijzigingen, > 4 releases/jaar', factor: 1.2 },
{ objectId: '4', key: 'DYN-4', name: '4 - Zeer hoog', summary: 'Continu in beweging, grote transformaties', description: 'Continu in beweging, grote transformaties', factor: 1.5 },
];
const mockComplexityFactors: ReferenceValue[] = [
{ objectId: '1', key: 'CMP-1', name: '1 - Laag', summary: 'Standalone, weinig integraties', description: 'Standalone, weinig integraties', factor: 0.8 },
{ objectId: '2', key: 'CMP-2', name: '2 - Gemiddeld', summary: 'Enkele integraties, beperkt maatwerk', description: 'Enkele integraties, beperkt maatwerk', factor: 1.0 },
{ objectId: '3', key: 'CMP-3', name: '3 - Hoog', summary: 'Veel integraties, significant maatwerk', description: 'Veel integraties, significant maatwerk', factor: 1.3 },
{ objectId: '4', key: 'CMP-4', name: '4 - Zeer hoog', summary: 'Platform, uitgebreide governance', description: 'Platform, uitgebreide governance', factor: 1.6 },
];
const mockNumberOfUsers: ReferenceValue[] = [
{ objectId: '1', key: 'USR-1', name: '< 100', order: 1, factor: 0.5 },
{ objectId: '2', key: 'USR-2', name: '100 - 500', order: 2, factor: 0.7 },
{ objectId: '3', key: 'USR-3', name: '500 - 2.000', order: 3, factor: 1.0 },
{ objectId: '4', key: 'USR-4', name: '2.000 - 5.000', order: 4, factor: 1.2 },
{ objectId: '5', key: 'USR-5', name: '5.000 - 10.000', order: 5, factor: 1.4 },
{ objectId: '6', key: 'USR-6', name: '10.000 - 15.000', order: 6, factor: 1.6 },
{ objectId: '7', key: 'USR-7', name: '> 15.000', order: 7, factor: 2.0 },
];
const mockGovernanceModels: ReferenceValue[] = [
{ objectId: 'A', key: 'GOV-A', name: 'Centraal Beheer', summary: 'ICMT voert volledig beheer uit', description: 'ICMT voert volledig beheer uit' },
{ objectId: 'B', key: 'GOV-B', name: 'Federatief Beheer', summary: 'ICMT + business delen beheer', description: 'ICMT + business delen beheer' },
{ objectId: 'C', key: 'GOV-C', name: 'Uitbesteed met ICMT-Regie', summary: 'Leverancier beheert, ICMT regisseert', description: 'Leverancier beheert, ICMT regisseert' },
{ objectId: 'D', key: 'GOV-D', name: 'Uitbesteed met Business-Regie', summary: 'Leverancier beheert, business regisseert', description: 'Leverancier beheert, business regisseert' },
{ objectId: 'E', key: 'GOV-E', name: 'Volledig Decentraal Beheer', summary: 'Business voert volledig beheer uit', description: 'Business voert volledig beheer uit' },
];
const mockOrganisations: ReferenceValue[] = [
{ objectId: '1', key: 'ORG-1', name: 'Zorg' },
{ objectId: '2', key: 'ORG-2', name: 'Bedrijfsvoering' },
{ objectId: '3', key: 'ORG-3', name: 'ICMT' },
{ objectId: '4', key: 'ORG-4', name: 'Kwaliteit' },
{ objectId: '5', key: 'ORG-5', name: 'Onderzoek' },
{ objectId: '6', key: 'ORG-6', name: 'Onderwijs' },
];
const mockHostingTypes: ReferenceValue[] = [
{ objectId: '1', key: 'HOST-1', name: 'On-premises' },
{ objectId: '2', key: 'HOST-2', name: 'SaaS' },
{ objectId: '3', key: 'HOST-3', name: 'Cloud' },
{ objectId: '4', key: 'HOST-4', name: 'Hybrid' },
];
const mockBusinessImpactAnalyses: ReferenceValue[] = [
{ objectId: '1', key: 'BIA-1', name: 'BIA-2024-0042 (Klasse E)' },
{ objectId: '2', key: 'BIA-2', name: 'BIA-2024-0015 (Klasse D)' },
{ objectId: '3', key: 'BIA-3', name: 'BIA-2024-0028 (Klasse D)' },
{ objectId: '4', key: 'BIA-4', name: 'BIA-2024-0035 (Klasse C)' },
{ objectId: '5', key: 'BIA-5', name: 'BIA-2024-0001 (Klasse F)' },
{ objectId: '6', key: 'BIA-6', name: 'BIA-2024-0055 (Klasse C)' },
{ objectId: '7', key: 'BIA-7', name: 'BIA-2024-0022 (Klasse D)' },
{ objectId: '8', key: 'BIA-8', name: 'BIA-2024-0067 (Klasse C)' },
{ objectId: '9', key: 'BIA-9', name: 'BIA-2022-0089 (Klasse C)' },
];
const mockApplicationSubteams: ReferenceValue[] = [
{ objectId: '1', key: 'SUBTEAM-1', name: 'Zorgapplicaties' },
{ objectId: '2', key: 'SUBTEAM-2', name: 'Bedrijfsvoering' },
{ objectId: '3', key: 'SUBTEAM-3', name: 'Infrastructuur' },
];
const mockApplicationTypes: ReferenceValue[] = [
{ objectId: '1', key: 'TYPE-1', name: 'Applicatie' },
{ objectId: '2', key: 'TYPE-2', name: 'Platform' },
{ objectId: '3', key: 'TYPE-3', name: 'Workload' },
];
// Classification history
const mockClassificationHistory: ClassificationResult[] = [];
// Mock data service
export class MockDataService {
private applications: ApplicationDetails[] = [...mockApplications];
async searchApplications(
filters: SearchFilters,
page: number = 1,
pageSize: number = 25
): Promise<SearchResult> {
let filtered = [...this.applications];
// Apply search text filter
if (filters.searchText) {
const search = filters.searchText.toLowerCase();
filtered = filtered.filter(
(app) =>
app.name.toLowerCase().includes(search) ||
(app.description?.toLowerCase().includes(search) ?? false) ||
(app.supplierProduct?.toLowerCase().includes(search) ?? false) ||
(app.searchReference?.toLowerCase().includes(search) ?? false)
);
}
// Apply status filter
if (filters.statuses && filters.statuses.length > 0) {
filtered = filtered.filter((app) => {
// Handle empty/null status - treat as 'Undefined' for filtering
const status = app.status || 'Undefined';
return filters.statuses!.includes(status as ApplicationStatus);
});
}
// Apply applicationFunction filter
if (filters.applicationFunction === 'empty') {
filtered = filtered.filter((app) => app.applicationFunctions.length === 0);
} else if (filters.applicationFunction === 'filled') {
filtered = filtered.filter((app) => app.applicationFunctions.length > 0);
}
// Apply governanceModel filter
if (filters.governanceModel === 'empty') {
filtered = filtered.filter((app) => !app.governanceModel);
} else if (filters.governanceModel === 'filled') {
filtered = filtered.filter((app) => !!app.governanceModel);
}
// Apply dynamicsFactor filter
if (filters.dynamicsFactor === 'empty') {
filtered = filtered.filter((app) => !app.dynamicsFactor);
} else if (filters.dynamicsFactor === 'filled') {
filtered = filtered.filter((app) => !!app.dynamicsFactor);
}
// Apply complexityFactor filter
if (filters.complexityFactor === 'empty') {
filtered = filtered.filter((app) => !app.complexityFactor);
} else if (filters.complexityFactor === 'filled') {
filtered = filtered.filter((app) => !!app.complexityFactor);
}
// Apply applicationSubteam filter
if (filters.applicationSubteam === 'empty') {
filtered = filtered.filter((app) => !app.applicationSubteam);
} else if (filters.applicationSubteam === 'filled') {
filtered = filtered.filter((app) => !!app.applicationSubteam);
}
// Apply applicationType filter
if (filters.applicationType === 'empty') {
filtered = filtered.filter((app) => !app.applicationType);
} else if (filters.applicationType === 'filled') {
filtered = filtered.filter((app) => !!app.applicationType);
}
// Apply organisation filter
if (filters.organisation) {
filtered = filtered.filter((app) => app.organisation === filters.organisation);
}
// Apply hostingType filter
if (filters.hostingType) {
filtered = filtered.filter((app) => {
if (!app.hostingType) return false;
return app.hostingType.name === filters.hostingType || app.hostingType.key === filters.hostingType;
});
}
if (filters.businessImportance) {
filtered = filtered.filter((app) => app.businessImportance === filters.businessImportance);
}
const totalCount = filtered.length;
const totalPages = Math.ceil(totalCount / pageSize);
const startIndex = (page - 1) * pageSize;
const paginatedApps = filtered.slice(startIndex, startIndex + pageSize);
return {
applications: paginatedApps.map((app) => {
const effort = calculateRequiredEffortApplicationManagement(app);
return {
id: app.id,
key: app.key,
name: app.name,
status: app.status,
applicationFunctions: app.applicationFunctions,
governanceModel: app.governanceModel,
dynamicsFactor: app.dynamicsFactor,
complexityFactor: app.complexityFactor,
applicationSubteam: app.applicationSubteam,
applicationTeam: app.applicationTeam,
applicationType: app.applicationType,
platform: app.platform,
requiredEffortApplicationManagement: effort,
};
}),
totalCount,
currentPage: page,
pageSize,
totalPages,
};
}
async getApplicationById(id: string): Promise<ApplicationDetails | null> {
const app = this.applications.find((app) => app.id === id);
if (!app) return null;
// Calculate required effort
const effort = calculateRequiredEffortApplicationManagement(app);
return {
...app,
requiredEffortApplicationManagement: effort,
};
}
async updateApplication(
id: string,
updates: {
applicationFunctions?: ReferenceValue[];
dynamicsFactor?: ReferenceValue;
complexityFactor?: ReferenceValue;
numberOfUsers?: ReferenceValue;
governanceModel?: ReferenceValue;
applicationSubteam?: ReferenceValue;
applicationTeam?: ReferenceValue;
applicationType?: ReferenceValue;
hostingType?: ReferenceValue;
businessImpactAnalyse?: ReferenceValue;
}
): Promise<boolean> {
const index = this.applications.findIndex((app) => app.id === id);
if (index === -1) return false;
const app = this.applications[index];
if (updates.applicationFunctions !== undefined) {
app.applicationFunctions = updates.applicationFunctions;
}
if (updates.dynamicsFactor !== undefined) {
app.dynamicsFactor = updates.dynamicsFactor;
}
if (updates.complexityFactor !== undefined) {
app.complexityFactor = updates.complexityFactor;
}
if (updates.numberOfUsers !== undefined) {
app.numberOfUsers = updates.numberOfUsers;
}
if (updates.governanceModel !== undefined) {
app.governanceModel = updates.governanceModel;
}
if (updates.applicationSubteam !== undefined) {
app.applicationSubteam = updates.applicationSubteam;
}
if (updates.applicationTeam !== undefined) {
app.applicationTeam = updates.applicationTeam;
}
if (updates.applicationType !== undefined) {
app.applicationType = updates.applicationType;
}
if (updates.hostingType !== undefined) {
app.hostingType = updates.hostingType;
}
if (updates.businessImpactAnalyse !== undefined) {
app.businessImpactAnalyse = updates.businessImpactAnalyse;
}
return true;
}
async getDynamicsFactors(): Promise<ReferenceValue[]> {
return mockDynamicsFactors;
}
async getComplexityFactors(): Promise<ReferenceValue[]> {
return mockComplexityFactors;
}
async getNumberOfUsers(): Promise<ReferenceValue[]> {
return mockNumberOfUsers;
}
async getGovernanceModels(): Promise<ReferenceValue[]> {
return mockGovernanceModels;
}
async getOrganisations(): Promise<ReferenceValue[]> {
return mockOrganisations;
}
async getHostingTypes(): Promise<ReferenceValue[]> {
return mockHostingTypes;
}
async getBusinessImpactAnalyses(): Promise<ReferenceValue[]> {
return mockBusinessImpactAnalyses;
}
async getApplicationManagementHosting(): Promise<ReferenceValue[]> {
// Mock Application Management - Hosting values (v25)
return [
{ objectId: '1', key: 'AMH-1', name: 'On-Premises' },
{ objectId: '2', key: 'AMH-2', name: 'Azure - Eigen beheer' },
{ objectId: '3', key: 'AMH-3', name: 'Azure - Delegated Management' },
{ objectId: '4', key: 'AMH-4', name: 'Extern (SaaS)' },
];
}
async getApplicationManagementTAM(): Promise<ReferenceValue[]> {
// Mock Application Management - TAM values
return [
{ objectId: '1', key: 'TAM-1', name: 'ICMT' },
{ objectId: '2', key: 'TAM-2', name: 'Business' },
{ objectId: '3', key: 'TAM-3', name: 'Leverancier' },
];
}
async getApplicationFunctions(): Promise<ReferenceValue[]> {
// Return empty for mock - in real implementation, this comes from Jira
return [];
}
async getApplicationSubteams(): Promise<ReferenceValue[]> {
// Return empty for mock - in real implementation, this comes from Jira
return [];
}
async getApplicationTypes(): Promise<ReferenceValue[]> {
// Return empty for mock - in real implementation, this comes from Jira
return [];
}
async getBusinessImportance(): Promise<ReferenceValue[]> {
// Return empty for mock - in real implementation, this comes from Jira
return [];
}
async getApplicationFunctionCategories(): Promise<ReferenceValue[]> {
// Return empty for mock - in real implementation, this comes from Jira
return [];
}
async getStats() {
// Filter out applications with status "Closed" for KPIs
const activeApplications = this.applications.filter((a) => a.status !== 'Closed');
const total = activeApplications.length;
const classified = activeApplications.filter((a) => a.applicationFunctions.length > 0).length;
const unclassified = total - classified;
const byStatus: Record<string, number> = {};
const byGovernanceModel: Record<string, number> = {};
activeApplications.forEach((app) => {
if (app.status) {
byStatus[app.status] = (byStatus[app.status] || 0) + 1;
}
if (app.governanceModel) {
byGovernanceModel[app.governanceModel.name] =
(byGovernanceModel[app.governanceModel.name] || 0) + 1;
}
});
return {
totalApplications: total,
classifiedCount: classified,
unclassifiedCount: unclassified,
byStatus,
byDomain: {},
byGovernanceModel,
recentClassifications: mockClassificationHistory.slice(-10),
};
}
addClassificationResult(result: ClassificationResult): void {
mockClassificationHistory.push(result);
}
getClassificationHistory(): ClassificationResult[] {
return [...mockClassificationHistory];
}
async getTeamDashboardData(excludedStatuses: ApplicationStatus[] = []): Promise<TeamDashboardData> {
// Convert ApplicationDetails to ApplicationListItem for dashboard
let listItems: ApplicationListItem[] = this.applications.map(app => ({
id: app.id,
key: app.key,
name: app.name,
status: app.status,
applicationFunctions: app.applicationFunctions,
governanceModel: app.governanceModel,
dynamicsFactor: app.dynamicsFactor,
complexityFactor: app.complexityFactor,
applicationSubteam: app.applicationSubteam,
applicationTeam: app.applicationTeam,
applicationType: app.applicationType,
platform: app.platform,
requiredEffortApplicationManagement: app.requiredEffortApplicationManagement,
}));
// Filter out excluded statuses
if (excludedStatuses.length > 0) {
listItems = listItems.filter(app => !app.status || !excludedStatuses.includes(app.status));
}
// Separate applications into Platforms, Workloads, and regular applications
const platforms: ApplicationListItem[] = [];
const workloads: ApplicationListItem[] = [];
const regularApplications: ApplicationListItem[] = [];
for (const app of listItems) {
const isPlatform = app.applicationType?.name === 'Platform';
const isWorkload = app.platform !== null;
if (isPlatform) {
platforms.push(app);
} else if (isWorkload) {
workloads.push(app);
} else {
regularApplications.push(app);
}
}
// Group workloads by their platform
const workloadsByPlatform = new Map<string, ApplicationListItem[]>();
for (const workload of workloads) {
const platformId = workload.platform!.objectId;
if (!workloadsByPlatform.has(platformId)) {
workloadsByPlatform.set(platformId, []);
}
workloadsByPlatform.get(platformId)!.push(workload);
}
// Build PlatformWithWorkloads structures
const platformsWithWorkloads: import('../types/index.js').PlatformWithWorkloads[] = [];
for (const platform of platforms) {
const platformWorkloads = workloadsByPlatform.get(platform.id) || [];
const platformEffort = platform.requiredEffortApplicationManagement || 0;
const workloadsEffort = platformWorkloads.reduce((sum, w) => sum + (w.requiredEffortApplicationManagement || 0), 0);
platformsWithWorkloads.push({
platform,
workloads: platformWorkloads,
platformEffort,
workloadsEffort,
totalEffort: platformEffort + workloadsEffort,
});
}
// Group all applications (regular + platforms + workloads) by subteam
const subteamMap = new Map<string, {
regular: ApplicationListItem[];
platforms: import('../types/index.js').PlatformWithWorkloads[];
}>();
const unassigned: {
regular: ApplicationListItem[];
platforms: import('../types/index.js').PlatformWithWorkloads[];
} = {
regular: [],
platforms: [],
};
// Group regular applications by subteam
for (const app of regularApplications) {
if (app.applicationSubteam) {
const subteamId = app.applicationSubteam.objectId;
if (!subteamMap.has(subteamId)) {
subteamMap.set(subteamId, { regular: [], platforms: [] });
}
subteamMap.get(subteamId)!.regular.push(app);
} else {
unassigned.regular.push(app);
}
}
// Group platforms by subteam
for (const platformWithWorkloads of platformsWithWorkloads) {
const platform = platformWithWorkloads.platform;
if (platform.applicationSubteam) {
const subteamId = platform.applicationSubteam.objectId;
if (!subteamMap.has(subteamId)) {
subteamMap.set(subteamId, { regular: [], platforms: [] });
}
subteamMap.get(subteamId)!.platforms.push(platformWithWorkloads);
} else {
unassigned.platforms.push(platformWithWorkloads);
}
}
// Build subteams from mock data
const allSubteams = mockApplicationSubteams;
const subteams: import('../types/index.js').TeamDashboardSubteam[] = allSubteams.map(subteamRef => {
const subteamData = subteamMap.get(subteamRef.objectId) || { regular: [], platforms: [] };
const regularApps = subteamData.regular;
const platforms = subteamData.platforms;
// Calculate total effort: regular apps + platforms (including their workloads)
const regularEffort = regularApps.reduce((sum, app) =>
sum + (app.requiredEffortApplicationManagement || 0), 0
);
const platformsEffort = platforms.reduce((sum, p) => sum + p.totalEffort, 0);
const totalEffort = regularEffort + platformsEffort;
// Calculate total application count: regular apps + platforms + workloads
const platformsCount = platforms.length;
const workloadsCount = platforms.reduce((sum, p) => sum + p.workloads.length, 0);
const applicationCount = regularApps.length + platformsCount + workloadsCount;
// Calculate governance model distribution (including platforms and workloads)
const byGovernanceModel: Record<string, number> = {};
for (const app of regularApps) {
const govModel = app.governanceModel?.name || 'Niet ingesteld';
byGovernanceModel[govModel] = (byGovernanceModel[govModel] || 0) + 1;
}
for (const platformWithWorkloads of platforms) {
const platform = platformWithWorkloads.platform;
const govModel = platform.governanceModel?.name || 'Niet ingesteld';
byGovernanceModel[govModel] = (byGovernanceModel[govModel] || 0) + 1;
// Also count workloads
for (const workload of platformWithWorkloads.workloads) {
const workloadGovModel = workload.governanceModel?.name || 'Niet ingesteld';
byGovernanceModel[workloadGovModel] = (byGovernanceModel[workloadGovModel] || 0) + 1;
}
}
return {
subteam: subteamRef,
applications: regularApps,
platforms,
totalEffort,
minEffort: totalEffort * 0.8, // Mock: min is 80% of total
maxEffort: totalEffort * 1.2, // Mock: max is 120% of total
applicationCount,
byGovernanceModel,
};
}).filter(s => s.applicationCount > 0); // Only include subteams with apps
// Create a virtual team containing all subteams (since Team doesn't exist in mock data)
const virtualTeam: import('../types/index.js').TeamDashboardTeam = {
team: {
objectId: 'mock-team-1',
key: 'TEAM-1',
name: 'Mock Team',
teamType: 'Business',
},
subteams,
totalEffort: subteams.reduce((sum, s) => sum + s.totalEffort, 0),
minEffort: subteams.reduce((sum, s) => sum + s.minEffort, 0),
maxEffort: subteams.reduce((sum, s) => sum + s.maxEffort, 0),
applicationCount: subteams.reduce((sum, s) => sum + s.applicationCount, 0),
byGovernanceModel: subteams.reduce((acc, s) => {
for (const [key, count] of Object.entries(s.byGovernanceModel)) {
acc[key] = (acc[key] || 0) + count;
}
return acc;
}, {} as Record<string, number>),
};
// Calculate unassigned totals
const unassignedRegularEffort = unassigned.regular.reduce((sum, app) =>
sum + (app.requiredEffortApplicationManagement || 0), 0
);
const unassignedPlatformsEffort = unassigned.platforms.reduce((sum, p) => sum + p.totalEffort, 0);
const unassignedTotalEffort = unassignedRegularEffort + unassignedPlatformsEffort;
const unassignedPlatformsCount = unassigned.platforms.length;
const unassignedWorkloadsCount = unassigned.platforms.reduce((sum, p) => sum + p.workloads.length, 0);
const unassignedApplicationCount = unassigned.regular.length + unassignedPlatformsCount + unassignedWorkloadsCount;
// Calculate governance model distribution for unassigned
const unassignedByGovernanceModel: Record<string, number> = {};
for (const app of unassigned.regular) {
const govModel = app.governanceModel?.name || 'Niet ingesteld';
unassignedByGovernanceModel[govModel] = (unassignedByGovernanceModel[govModel] || 0) + 1;
}
for (const platformWithWorkloads of unassigned.platforms) {
const platform = platformWithWorkloads.platform;
const govModel = platform.governanceModel?.name || 'Niet ingesteld';
unassignedByGovernanceModel[govModel] = (unassignedByGovernanceModel[govModel] || 0) + 1;
for (const workload of platformWithWorkloads.workloads) {
const workloadGovModel = workload.governanceModel?.name || 'Niet ingesteld';
unassignedByGovernanceModel[workloadGovModel] = (unassignedByGovernanceModel[workloadGovModel] || 0) + 1;
}
}
return {
teams: subteams.length > 0 ? [virtualTeam] : [],
unassigned: {
subteam: null,
applications: unassigned.regular,
platforms: unassigned.platforms,
totalEffort: unassignedTotalEffort,
minEffort: unassignedTotalEffort * 0.8, // Mock: min is 80% of total
maxEffort: unassignedTotalEffort * 1.2, // Mock: max is 120% of total
applicationCount: unassignedApplicationCount,
byGovernanceModel: unassignedByGovernanceModel,
},
};
}
}
export const mockDataService = new MockDataService();
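The tri-state `'empty' | 'filled'` filters and the page slicing in `searchApplications` follow one repeated pattern; a standalone sketch of it, using a reduced hypothetical `App` shape rather than the real `ApplicationDetails`:

```typescript
// Hypothetical reduced shape standing in for ApplicationDetails.
type Ref = { objectId: string; name: string } | null;
interface App { id: string; name: string; governanceModel: Ref }

// Tri-state filter: 'empty' keeps apps without a value, 'filled' keeps
// apps with one, undefined leaves the list untouched.
function applyTriState(apps: App[], mode?: 'empty' | 'filled'): App[] {
  if (mode === 'empty') return apps.filter((a) => !a.governanceModel);
  if (mode === 'filled') return apps.filter((a) => !!a.governanceModel);
  return apps;
}

// Same slice arithmetic as searchApplications: 1-based page numbers.
function paginate<T>(items: T[], page: number, pageSize: number) {
  const start = (page - 1) * pageSize;
  return {
    items: items.slice(start, start + pageSize),
    totalPages: Math.ceil(items.length / pageSize),
  };
}
```

The same `applyTriState` shape could collapse the six near-identical `if/else if` blocks in `searchApplications` into one helper parameterized by field accessor.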

File diff suppressed because it is too large


@@ -0,0 +1,277 @@
/**
* Generic Query Builder
*
* Builds SQL queries dynamically based on filters and schema.
*/
import { logger } from './logger.js';
import { schemaDiscoveryService } from './schemaDiscoveryService.js';
import type { CMDBObjectTypeName } from '../generated/jira-types.js';
import type { AttributeDefinition } from '../generated/jira-schema.js';
class QueryBuilder {
/**
* Build WHERE clause from filters
*/
async buildWhereClause(
filters: Record<string, unknown>,
typeName: CMDBObjectTypeName
): Promise<{ whereClause: string; params: unknown[] }> {
const conditions: string[] = ['o.object_type_name = ?'];
const params: unknown[] = [typeName];
let paramIndex = 2;
for (const [fieldName, filterValue] of Object.entries(filters)) {
if (filterValue === undefined || filterValue === null) continue;
const attrDef = await schemaDiscoveryService.getAttribute(typeName, fieldName);
if (!attrDef) {
logger.debug(`QueryBuilder: Unknown field ${fieldName} for type ${typeName}, skipping`);
continue;
}
const condition = this.buildFilterCondition(fieldName, filterValue, attrDef, paramIndex);
if (condition.condition) {
conditions.push(condition.condition);
params.push(...condition.params);
paramIndex += condition.params.length;
}
}
const whereClause = conditions.length > 0 ? `WHERE ${conditions.join(' AND ')}` : '';
return { whereClause, params };
}
/**
* Build filter condition for one field
*/
buildFilterCondition(
fieldName: string,
filterValue: unknown,
attrDef: AttributeDefinition,
startParamIndex: number
): { condition: string; params: unknown[] } {
// Handle special operators
if (typeof filterValue === 'object' && filterValue !== null && !Array.isArray(filterValue)) {
const filterObj = filterValue as Record<string, unknown>;
// Exists check
if (filterObj.exists === true) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id AND a.field_name = ?
)`,
params: [fieldName]
};
}
// Empty check
if (filterObj.empty === true) {
return {
condition: `NOT EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id AND a.field_name = ?
)`,
params: [fieldName]
};
}
// Contains (text search)
if (filterObj.contains !== undefined && typeof filterObj.contains === 'string') {
if (attrDef.type === 'text' || attrDef.type === 'textarea') {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND LOWER(av.text_value) LIKE LOWER(?)
)`,
params: [fieldName, `%${filterObj.contains}%`]
};
}
}
// Reference filters
if (attrDef.type === 'reference') {
if (filterObj.objectId !== undefined) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.reference_object_id = ?
)`,
params: [fieldName, String(filterObj.objectId)]
};
}
if (filterObj.objectKey !== undefined) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
JOIN objects ref_obj ON av.reference_object_id = ref_obj.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND ref_obj.object_key = ?
)`,
params: [fieldName, String(filterObj.objectKey)]
};
}
if (filterObj.label !== undefined) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
JOIN objects ref_obj ON av.reference_object_id = ref_obj.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND LOWER(ref_obj.label) = LOWER(?)
)`,
params: [fieldName, String(filterObj.label)]
};
}
}
}
// Handle array filters (for multiple reference fields)
if (attrDef.isMultiple && Array.isArray(filterValue)) {
if (attrDef.type === 'reference') {
const conditions: string[] = [];
const params: unknown[] = [];
for (const val of filterValue) {
if (typeof val === 'object' && val !== null) {
const ref = val as { objectId?: string; objectKey?: string };
if (ref.objectId) {
conditions.push(`EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.reference_object_id = ?
)`);
params.push(fieldName, ref.objectId);
} else if (ref.objectKey) {
conditions.push(`EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
JOIN objects ref_obj ON av.reference_object_id = ref_obj.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND ref_obj.object_key = ?
)`);
params.push(fieldName, ref.objectKey);
}
}
}
if (conditions.length > 0) {
return { condition: `(${conditions.join(' OR ')})`, params };
}
}
}
// Simple value filters
if (attrDef.type === 'reference') {
if (typeof filterValue === 'object' && filterValue !== null) {
const ref = filterValue as { objectId?: string; objectKey?: string; label?: string };
if (ref.objectId) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.reference_object_id = ?
)`,
params: [fieldName, ref.objectId]
};
} else if (ref.objectKey) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
JOIN objects ref_obj ON av.reference_object_id = ref_obj.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND ref_obj.object_key = ?
)`,
params: [fieldName, ref.objectKey]
};
} else if (ref.label) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
JOIN objects ref_obj ON av.reference_object_id = ref_obj.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND LOWER(ref_obj.label) = LOWER(?)
)`,
params: [fieldName, ref.label]
};
}
}
} else if (attrDef.type === 'text' || attrDef.type === 'textarea' || attrDef.type === 'url' || attrDef.type === 'email' || attrDef.type === 'select' || attrDef.type === 'user' || attrDef.type === 'status') {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.text_value = ?
)`,
params: [fieldName, String(filterValue)]
};
} else if (attrDef.type === 'integer' || attrDef.type === 'float') {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.number_value = ?
)`,
params: [fieldName, Number(filterValue)]
};
} else if (attrDef.type === 'boolean') {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.boolean_value = ?
)`,
params: [fieldName, Boolean(filterValue)]
};
}
return { condition: '', params: [] };
}
/**
* Build ORDER BY clause
*/
buildOrderBy(orderBy?: string, orderDir?: 'ASC' | 'DESC'): string {
const safeOrderBy = ['id', 'object_key', 'object_type_name', 'label', 'cached_at'].includes(orderBy || '')
? (orderBy || 'label')
: 'label';
const safeOrderDir = orderDir === 'DESC' ? 'DESC' : 'ASC';
return `ORDER BY o.${safeOrderBy} ${safeOrderDir}`;
}
/**
* Build pagination clause
* Values are interpolated into the SQL string (LIMIT/OFFSET cannot always be
* bound as parameters), so coerce them to safe non-negative integers first.
*/
buildPagination(limit?: number, offset?: number): string {
const limitValue = Number.isInteger(limit) && (limit as number) > 0 ? (limit as number) : 100;
const offsetValue = Number.isInteger(offset) && (offset as number) >= 0 ? (offset as number) : 0;
return `LIMIT ${limitValue} OFFSET ${offsetValue}`;
}
}
export const queryBuilder = new QueryBuilder();
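The whitelist in `buildOrderBy` is the part worth calling out: column names cannot be sent as bound parameters, so they must be validated against a fixed list before being interpolated. A self-contained sketch of that pattern:

```typescript
// Identifiers can't be parameterized, so buildOrderBy whitelists them;
// anything unrecognized silently falls back to the 'label' column.
const SAFE_COLUMNS = ['id', 'object_key', 'object_type_name', 'label', 'cached_at'];

function buildOrderBy(orderBy?: string, orderDir?: 'ASC' | 'DESC'): string {
  const col = SAFE_COLUMNS.includes(orderBy ?? '') ? (orderBy as string) : 'label';
  const dir = orderDir === 'DESC' ? 'DESC' : 'ASC';
  return `ORDER BY o.${col} ${dir}`;
}

console.log(buildOrderBy('object_key', 'DESC')); // → ORDER BY o.object_key DESC
console.log(buildOrderBy('1; DROP TABLE objects--')); // → ORDER BY o.label ASC
```

Falling back instead of throwing keeps a malformed sort parameter from failing the whole request, at the cost of silently ignoring typos in callers.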


@@ -0,0 +1,256 @@
/**
* Schema Cache Service
*
* In-memory cache for schema data with TTL support.
* Provides fast access to schema information without hitting the database on every request.
*/
import { logger } from './logger.js';
import { schemaDiscoveryService } from './schemaDiscoveryService.js';
import type { ObjectTypeDefinition, AttributeDefinition } from '../generated/jira-schema.js';
import { getDatabaseAdapter } from './database/singleton.js';
interface SchemaResponse {
metadata: {
generatedAt: string;
objectTypeCount: number;
totalAttributes: number;
enabledObjectTypeCount?: number;
};
objectTypes: Record<string, ObjectTypeWithLinks>;
cacheCounts?: Record<string, number>;
jiraCounts?: Record<string, number>;
}
interface ObjectTypeWithLinks extends ObjectTypeDefinition {
enabled: boolean; // Whether this object type is enabled for syncing
incomingLinks: Array<{
fromType: string;
fromTypeName: string;
attributeName: string;
isMultiple: boolean;
}>;
outgoingLinks: Array<{
toType: string;
toTypeName: string;
attributeName: string;
isMultiple: boolean;
}>;
}
class SchemaCacheService {
private cache: SchemaResponse | null = null;
private cacheTimestamp: number = 0;
private readonly CACHE_TTL_MS = 5 * 60 * 1000; // 5 minutes
private db = getDatabaseAdapter(); // Use shared database adapter singleton
/**
* Get schema from cache or fetch from database
*/
async getSchema(): Promise<SchemaResponse> {
// Check cache validity
const now = Date.now();
if (this.cache && (now - this.cacheTimestamp) < this.CACHE_TTL_MS) {
logger.debug('SchemaCache: Returning cached schema');
return this.cache;
}
// Cache expired or doesn't exist - fetch from database
logger.debug('SchemaCache: Cache expired or missing, fetching from database');
const schema = await this.fetchFromDatabase();
// Update cache
this.cache = schema;
this.cacheTimestamp = now;
return schema;
}
/**
* Invalidate cache (force refresh on next request)
*/
invalidate(): void {
logger.debug('SchemaCache: Invalidating cache');
this.cache = null;
this.cacheTimestamp = 0;
}
/**
* Fetch schema from database and build response
* Returns ALL object types (enabled and disabled) with their sync status
*/
private async fetchFromDatabase(): Promise<SchemaResponse> {
// Schema discovery must be manually triggered via API endpoints
// No automatic discovery on first run
// Fetch ALL object types (enabled and disabled) with their schema info
const objectTypeRows = await this.db.query<{
id: number;
schema_id: number;
jira_type_id: number;
type_name: string;
display_name: string;
description: string | null;
sync_priority: number;
object_count: number;
enabled: boolean | number;
}>(
`SELECT ot.id, ot.schema_id, ot.jira_type_id, ot.type_name, ot.display_name, ot.description, ot.sync_priority, ot.object_count, ot.enabled
FROM object_types ot
ORDER BY ot.sync_priority, ot.type_name`
);
if (objectTypeRows.length === 0) {
// No types found, return empty schema
return {
metadata: {
generatedAt: new Date().toISOString(),
objectTypeCount: 0,
totalAttributes: 0,
},
objectTypes: {},
};
}
// Fetch attributes for ALL object types using JOIN
const attributeRows = await this.db.query<{
id: number;
jira_attr_id: number;
object_type_name: string;
attr_name: string;
field_name: string;
attr_type: string;
is_multiple: boolean | number;
is_editable: boolean | number;
is_required: boolean | number;
is_system: boolean | number;
reference_type_name: string | null;
description: string | null;
position: number | null;
schema_id: number;
type_name: string;
}>(
`SELECT a.*, ot.schema_id, ot.type_name
FROM attributes a
INNER JOIN object_types ot ON a.object_type_name = ot.type_name
ORDER BY ot.type_name, COALESCE(a.position, 0), a.jira_attr_id`
);
logger.debug(`SchemaCache: Found ${objectTypeRows.length} object types (enabled and disabled) and ${attributeRows.length} attributes`);
// Build object types with attributes
// Use type_name as key. If the same type exists in multiple schemas, the first row
// (in sync_priority order) wins; in practice duplicate type_names should share the same attributes
const objectTypesWithLinks: Record<string, ObjectTypeWithLinks> = {};
for (const typeRow of objectTypeRows) {
const typeName = typeRow.type_name;
// Skip if we already have this type_name (first row in sync_priority order wins)
if (objectTypesWithLinks[typeName]) {
logger.debug(`SchemaCache: Skipping duplicate type_name ${typeName} from schema ${typeRow.schema_id}`);
continue;
}
// Match attributes by both schema_id and type_name to ensure correct mapping
const matchingAttributes = attributeRows.filter(a => a.schema_id === typeRow.schema_id && a.type_name === typeName);
logger.debug(`SchemaCache: Found ${matchingAttributes.length} attributes for ${typeName} (schema_id: ${typeRow.schema_id})`);
const attributes = matchingAttributes.map(attrRow => {
// Convert boolean/number for SQLite compatibility
const isMultiple = typeof attrRow.is_multiple === 'boolean' ? attrRow.is_multiple : attrRow.is_multiple === 1;
const isEditable = typeof attrRow.is_editable === 'boolean' ? attrRow.is_editable : attrRow.is_editable === 1;
const isRequired = typeof attrRow.is_required === 'boolean' ? attrRow.is_required : attrRow.is_required === 1;
const isSystem = typeof attrRow.is_system === 'boolean' ? attrRow.is_system : attrRow.is_system === 1;
return {
jiraId: attrRow.jira_attr_id,
name: attrRow.attr_name,
fieldName: attrRow.field_name,
type: attrRow.attr_type as AttributeDefinition['type'],
isMultiple,
isEditable,
isRequired,
isSystem,
referenceTypeName: attrRow.reference_type_name || undefined,
description: attrRow.description || undefined,
position: attrRow.position ?? 0,
} as AttributeDefinition;
});
// Convert enabled boolean/number to boolean
const isEnabled = typeof typeRow.enabled === 'boolean' ? typeRow.enabled : typeRow.enabled === 1;
objectTypesWithLinks[typeName] = {
jiraTypeId: typeRow.jira_type_id,
name: typeRow.display_name,
typeName: typeName,
syncPriority: typeRow.sync_priority,
objectCount: typeRow.object_count,
enabled: isEnabled,
attributes,
incomingLinks: [],
outgoingLinks: [],
};
}
// Build link relationships
for (const [typeName, typeDef] of Object.entries(objectTypesWithLinks)) {
for (const attr of typeDef.attributes) {
if (attr.type === 'reference' && attr.referenceTypeName) {
// Add outgoing link from this type
typeDef.outgoingLinks.push({
toType: attr.referenceTypeName,
toTypeName: objectTypesWithLinks[attr.referenceTypeName]?.name || attr.referenceTypeName,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
// Add incoming link to the referenced type
if (objectTypesWithLinks[attr.referenceTypeName]) {
objectTypesWithLinks[attr.referenceTypeName].incomingLinks.push({
fromType: typeName,
fromTypeName: typeDef.name,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
}
}
}
}
// Get cache counts (objectsByType) if available
let cacheCounts: Record<string, number> | undefined;
try {
const { dataService } = await import('./dataService.js');
const cacheStatus = await dataService.getCacheStatus();
cacheCounts = cacheStatus.objectsByType;
} catch (err) {
logger.debug('SchemaCache: Could not fetch cache counts', err);
// Continue without cache counts - not critical
}
// Calculate metadata (include enabled count)
const totalAttributes = Object.values(objectTypesWithLinks).reduce(
(sum, t) => sum + t.attributes.length,
0
);
const enabledCount = Object.values(objectTypesWithLinks).filter(t => t.enabled).length;
const response: SchemaResponse = {
metadata: {
generatedAt: new Date().toISOString(),
objectTypeCount: objectTypeRows.length,
totalAttributes,
enabledObjectTypeCount: enabledCount,
},
objectTypes: objectTypesWithLinks,
cacheCounts,
};
return response;
}
}
// Export singleton instance
export const schemaCacheService = new SchemaCacheService();
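The TTL logic in `getSchema()`/`invalidate()` above can be sketched in isolation. A minimal sketch assuming nothing beyond the standard library; `TtlCache`, `load`, and the injectable clock are illustrative names, not part of the service:

```typescript
// Standalone sketch of the cache-with-TTL pattern used by SchemaCacheService:
// a value is served from memory until ttlMs elapses, then reloaded.
class TtlCache<T> {
  private value: T | null = null;
  private timestamp = 0;

  constructor(
    private readonly ttlMs: number,
    private readonly load: () => T,
    // Injectable clock so expiry can be tested without real timers
    private readonly now: () => number = Date.now,
  ) {}

  get(): T {
    const t = this.now();
    if (this.value !== null && t - this.timestamp < this.ttlMs) {
      return this.value; // cache hit: still within TTL
    }
    this.value = this.load(); // miss or expired: reload and reset the clock
    this.timestamp = t;
    return this.value;
  }

  invalidate(): void {
    // Force a reload on the next get(), mirroring schemaCacheService.invalidate()
    this.value = null;
    this.timestamp = 0;
  }
}
```

Injecting the clock keeps the expiry branch unit-testable; the real service uses `Date.now()` directly, which is equivalent to the default here.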


@@ -0,0 +1,478 @@
/**
* Schema Configuration Service
*
* Manages schema and object type configuration for syncing.
* Discovers schemas and object types from Jira Assets API and allows
* enabling/disabling specific object types for synchronization.
*/
import { logger } from './logger.js';
import { normalizedCacheStore } from './normalizedCacheStore.js';
import { toPascalCase } from './schemaUtils.js';
import type { DatabaseAdapter } from './database/interface.js';
export interface JiraSchema {
id: number;
name: string;
description?: string;
objectTypeCount?: number;
}
export interface JiraObjectType {
id: number;
name: string;
description?: string;
objectCount?: number;
objectSchemaId: number;
parentObjectTypeId?: number;
inherited?: boolean;
abstractObjectType?: boolean;
}
export interface ConfiguredObjectType {
id: string; // "schemaId:objectTypeId"
schemaId: string;
schemaName: string;
objectTypeId: number;
objectTypeName: string;
displayName: string;
description: string | null;
objectCount: number;
enabled: boolean;
discoveredAt: string;
updatedAt: string;
}
class SchemaConfigurationService {
constructor() {
// Configuration service - no API calls needed, uses database only
}
/**
* NOTE: Schema discovery is now handled by SchemaSyncService.
* This service only manages configuration (enabling/disabling object types).
* Use schemaSyncService.syncAll() to discover and sync schemas, object types, and attributes.
*/
/**
* Get all configured object types grouped by schema
*/
async getConfiguredObjectTypes(): Promise<Array<{
schemaId: string;
schemaName: string;
objectTypes: ConfiguredObjectType[];
}>> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
const rows = await db.query<{
id: number;
schema_id: number;
jira_schema_id: string;
schema_name: string;
jira_type_id: number;
type_name: string;
display_name: string;
description: string | null;
object_count: number;
enabled: boolean | number;
discovered_at: string;
updated_at: string;
}>(`
SELECT
ot.id,
ot.schema_id,
s.jira_schema_id,
s.name as schema_name,
ot.jira_type_id,
ot.type_name,
ot.display_name,
ot.description,
ot.object_count,
ot.enabled,
ot.discovered_at,
ot.updated_at
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id
ORDER BY s.name ASC, ot.display_name ASC
`);
// Group by schema
const schemaMap = new Map<string, ConfiguredObjectType[]>();
for (const row of rows) {
const objectType: ConfiguredObjectType = {
id: `${row.jira_schema_id}:${row.jira_type_id}`, // Keep same format for compatibility
schemaId: row.jira_schema_id,
schemaName: row.schema_name,
objectTypeId: row.jira_type_id,
objectTypeName: row.type_name,
displayName: row.display_name,
description: row.description,
objectCount: row.object_count,
enabled: typeof row.enabled === 'boolean' ? row.enabled : row.enabled === 1,
discoveredAt: row.discovered_at,
updatedAt: row.updated_at,
};
if (!schemaMap.has(row.jira_schema_id)) {
schemaMap.set(row.jira_schema_id, []);
}
schemaMap.get(row.jira_schema_id)!.push(objectType);
}
// Convert to array
return Array.from(schemaMap.entries()).map(([schemaId, objectTypes]) => {
const firstType = objectTypes[0];
return {
schemaId,
schemaName: firstType.schemaName,
objectTypes,
};
});
}
/**
* Set enabled status for an object type
* id format: "schemaId:objectTypeId" (e.g., "6:123")
*/
async setObjectTypeEnabled(id: string, enabled: boolean): Promise<void> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
// Parse id: "schemaId:objectTypeId"
const [schemaIdStr, objectTypeIdStr] = id.split(':');
if (!schemaIdStr || !objectTypeIdStr) {
throw new Error(`Invalid object type id format: ${id}. Expected format: "schemaId:objectTypeId"`);
}
const objectTypeId = parseInt(objectTypeIdStr, 10);
if (isNaN(objectTypeId)) {
throw new Error(`Invalid object type id: ${objectTypeIdStr}`);
}
// Get schema_id (FK) from schemas table
const schemaRow = await db.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[schemaIdStr]
);
if (!schemaRow) {
throw new Error(`Schema ${schemaIdStr} not found`);
}
// Check if type_name is missing and try to fix it if enabling
const currentType = await db.queryOne<{ type_name: string | null; display_name: string }>(
`SELECT type_name, display_name FROM object_types WHERE schema_id = ? AND jira_type_id = ?`,
[schemaRow.id, objectTypeId]
);
let typeNameToSet = currentType?.type_name;
const needsTypeNameFix = enabled && (!typeNameToSet || typeNameToSet.trim() === '');
if (needsTypeNameFix && currentType?.display_name) {
// Generate type_name from display_name (PascalCase) using the statically imported helper
typeNameToSet = toPascalCase(currentType.display_name);
logger.warn(`SchemaConfiguration: Type ${id} has missing type_name. Auto-generating "${typeNameToSet}" from display_name "${currentType.display_name}"`);
}
const now = new Date().toISOString();
// PostgreSQL accepts booleans directly; SQLite stores them as 0/1
const enabledValue = db.isPostgres ? enabled : (enabled ? 1 : 0);
if (needsTypeNameFix && typeNameToSet) {
await db.execute(`
UPDATE object_types
SET enabled = ?, type_name = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [enabledValue, typeNameToSet, now, schemaRow.id, objectTypeId]);
logger.info(`SchemaConfiguration: Set object type ${id} enabled=${enabled} and fixed missing type_name to "${typeNameToSet}"`);
} else {
await db.execute(`
UPDATE object_types
SET enabled = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [enabledValue, now, schemaRow.id, objectTypeId]);
logger.info(`SchemaConfiguration: Set object type ${id} enabled=${enabled}`);
}
}
/**
* Bulk update enabled status for multiple object types
*/
async bulkSetObjectTypesEnabled(updates: Array<{ id: string; enabled: boolean }>): Promise<void> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
const now = new Date().toISOString();
await db.transaction(async (txDb: DatabaseAdapter) => {
for (const update of updates) {
// Parse id: "schemaId:objectTypeId"
const [schemaIdStr, objectTypeIdStr] = update.id.split(':');
if (!schemaIdStr || !objectTypeIdStr) {
logger.warn(`SchemaConfiguration: Invalid object type id format: ${update.id}`);
continue;
}
const objectTypeId = parseInt(objectTypeIdStr, 10);
if (isNaN(objectTypeId)) {
logger.warn(`SchemaConfiguration: Invalid object type id: ${objectTypeIdStr}`);
continue;
}
// Get schema_id (FK) from schemas table
const schemaRow = await txDb.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[schemaIdStr]
);
if (!schemaRow) {
logger.warn(`SchemaConfiguration: Schema ${schemaIdStr} not found`);
continue;
}
// Check if type_name is missing and try to fix it if enabling
const currentType = await txDb.queryOne<{ type_name: string | null; display_name: string }>(
`SELECT type_name, display_name FROM object_types WHERE schema_id = ? AND jira_type_id = ?`,
[schemaRow.id, objectTypeId]
);
let typeNameToSet = currentType?.type_name;
const needsTypeNameFix = update.enabled && (!typeNameToSet || typeNameToSet.trim() === '');
if (needsTypeNameFix && currentType?.display_name) {
// Generate type_name from display_name (PascalCase) using the statically imported helper
typeNameToSet = toPascalCase(currentType.display_name);
logger.warn(`SchemaConfiguration: Type ${update.id} has missing type_name. Auto-generating "${typeNameToSet}" from display_name "${currentType.display_name}"`);
}
// PostgreSQL accepts booleans directly; SQLite stores them as 0/1
const enabledValue = txDb.isPostgres ? update.enabled : (update.enabled ? 1 : 0);
if (needsTypeNameFix && typeNameToSet) {
await txDb.execute(`
UPDATE object_types
SET enabled = ?, type_name = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [enabledValue, typeNameToSet, now, schemaRow.id, objectTypeId]);
} else {
await txDb.execute(`
UPDATE object_types
SET enabled = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [enabledValue, now, schemaRow.id, objectTypeId]);
}
}
});
logger.info(`SchemaConfiguration: Bulk updated ${updates.length} object types`);
}
/**
* Get enabled object types (for sync engine)
*/
async getEnabledObjectTypes(): Promise<Array<{
schemaId: string;
objectTypeId: number;
objectTypeName: string;
displayName: string;
}>> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
// Use parameterized query to avoid boolean/integer comparison issues
const rows = await db.query<{
jira_schema_id: string;
jira_type_id: number;
type_name: string;
display_name: string;
}>(
`SELECT s.jira_schema_id, ot.jira_type_id, ot.type_name, ot.display_name
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id
WHERE ot.enabled = ?`,
[db.isPostgres ? true : 1]
);
return rows.map((row: {
jira_schema_id: string;
jira_type_id: number;
type_name: string;
display_name: string;
}) => ({
schemaId: row.jira_schema_id,
objectTypeId: row.jira_type_id,
objectTypeName: row.type_name,
displayName: row.display_name,
}));
}
/**
* Check if configuration is complete (at least one object type enabled)
*/
async isConfigurationComplete(): Promise<boolean> {
const enabledTypes = await this.getEnabledObjectTypes();
return enabledTypes.length > 0;
}
/**
* Get configuration statistics
*/
async getConfigurationStats(): Promise<{
totalSchemas: number;
totalObjectTypes: number;
enabledObjectTypes: number;
disabledObjectTypes: number;
isConfigured: boolean;
}> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
const totalRow = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count FROM object_types
`);
// Use parameterized query to avoid boolean/integer comparison issues
const enabledRow = await db.queryOne<{ count: number }>(
`SELECT COUNT(*) as count FROM object_types WHERE enabled = ?`,
[db.isPostgres ? true : 1]
);
const schemaRow = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count FROM schemas
`);
const total = totalRow?.count || 0;
const enabled = enabledRow?.count || 0;
const schemas = schemaRow?.count || 0;
return {
totalSchemas: schemas,
totalObjectTypes: total,
enabledObjectTypes: enabled,
disabledObjectTypes: total - enabled,
isConfigured: enabled > 0,
};
}
/**
* Get all schemas with their search enabled status
*/
async getSchemas(): Promise<Array<{
schemaId: string;
schemaName: string;
searchEnabled: boolean;
}>> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
const rows = await db.query<{
jira_schema_id: string;
name: string;
search_enabled: boolean | number;
}>(`
SELECT jira_schema_id, name, search_enabled
FROM schemas
ORDER BY name ASC
`);
return rows.map((row: {
jira_schema_id: string;
name: string;
search_enabled: boolean | number;
}) => ({
schemaId: row.jira_schema_id,
schemaName: row.name,
searchEnabled: typeof row.search_enabled === 'boolean' ? row.search_enabled : row.search_enabled === 1,
}));
}
/**
* Set search enabled status for a schema
*/
async setSchemaSearchEnabled(schemaId: string, searchEnabled: boolean): Promise<void> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
const now = new Date().toISOString();
// PostgreSQL accepts booleans directly; SQLite stores them as 0/1
const searchEnabledValue = db.isPostgres ? searchEnabled : (searchEnabled ? 1 : 0);
await db.execute(`
UPDATE schemas
SET search_enabled = ?, updated_at = ?
WHERE jira_schema_id = ?
`, [searchEnabledValue, now, schemaId]);
logger.info(`SchemaConfiguration: Set schema ${schemaId} search_enabled=${searchEnabled}`);
}
}
export const schemaConfigurationService = new SchemaConfigurationService();
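The `"schemaId:objectTypeId"` id format that `setObjectTypeEnabled()` and `bulkSetObjectTypesEnabled()` both parse inline could be factored into one helper. A hypothetical sketch; `parseObjectTypeId` is an illustrative name, not part of the service:

```typescript
// Parse a composite object-type id of the form "schemaId:objectTypeId" (e.g. "6:123"),
// mirroring the validation done in setObjectTypeEnabled() above.
function parseObjectTypeId(id: string): { schemaId: string; objectTypeId: number } {
  const [schemaIdStr, objectTypeIdStr] = id.split(':');
  if (!schemaIdStr || !objectTypeIdStr) {
    throw new Error(`Invalid object type id format: ${id}. Expected "schemaId:objectTypeId"`);
  }
  const objectTypeId = parseInt(objectTypeIdStr, 10);
  if (Number.isNaN(objectTypeId)) {
    throw new Error(`Invalid object type id: ${objectTypeIdStr}`);
  }
  // schemaId stays a string because schemas.jira_schema_id is stored as text
  return { schemaId: schemaIdStr, objectTypeId };
}
```

Centralizing the parse would keep the single-update path (which throws) and the bulk path (which logs and skips) validating the id identically.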


@@ -0,0 +1,182 @@
/**
* Schema Discovery Service
*
* Provides access to discovered schema data from the database.
* Schema synchronization is handled by SchemaSyncService.
* This service provides read-only access to the discovered schema.
*/
import { logger } from './logger.js';
import { getDatabaseAdapter } from './database/singleton.js';
import type { DatabaseAdapter } from './database/interface.js';
import { schemaSyncService } from './SchemaSyncService.js';
import type { ObjectTypeDefinition, AttributeDefinition } from '../generated/jira-schema.js';
class SchemaDiscoveryService {
private db: DatabaseAdapter;
constructor() {
// Use shared database adapter singleton
this.db = getDatabaseAdapter();
}
/**
* Discover schema from Jira Assets API and populate database
* Delegates to SchemaSyncService for the actual synchronization
*/
async discoverAndStoreSchema(_force: boolean = false): Promise<void> {
// The force flag is currently unused; SchemaSyncService always performs a full sync
logger.info('SchemaDiscovery: Delegating to SchemaSyncService for schema synchronization...');
await schemaSyncService.syncAll();
}
/**
* Get attribute definition from database
*/
async getAttribute(typeName: string, fieldName: string): Promise<AttributeDefinition | null> {
const row = await this.db.queryOne<{
jira_attr_id: number;
attr_name: string;
field_name: string;
attr_type: string;
is_multiple: boolean | number;
is_editable: boolean | number;
is_required: boolean | number;
is_system: boolean | number;
reference_type_name: string | null;
description: string | null;
}>(`
SELECT * FROM attributes
WHERE object_type_name = ? AND field_name = ?
`, [typeName, fieldName]);
if (!row) return null;
// Convert boolean/number for SQLite compatibility
const isMultiple = typeof row.is_multiple === 'boolean' ? row.is_multiple : row.is_multiple === 1;
const isEditable = typeof row.is_editable === 'boolean' ? row.is_editable : row.is_editable === 1;
const isRequired = typeof row.is_required === 'boolean' ? row.is_required : row.is_required === 1;
const isSystem = typeof row.is_system === 'boolean' ? row.is_system : row.is_system === 1;
return {
jiraId: row.jira_attr_id,
name: row.attr_name,
fieldName: row.field_name,
type: row.attr_type as AttributeDefinition['type'],
isMultiple,
isEditable,
isRequired,
isSystem,
referenceTypeName: row.reference_type_name || undefined,
description: row.description || undefined,
};
}
/**
* Get all attributes for a type
*/
async getAttributesForType(typeName: string): Promise<AttributeDefinition[]> {
const rows = await this.db.query<{
jira_attr_id: number;
attr_name: string;
field_name: string;
attr_type: string;
is_multiple: boolean | number;
is_editable: boolean | number;
is_required: boolean | number;
is_system: boolean | number;
reference_type_name: string | null;
description: string | null;
position: number | null;
}>(`
SELECT * FROM attributes
WHERE object_type_name = ?
ORDER BY COALESCE(position, 0), jira_attr_id
`, [typeName]);
return rows.map(row => {
// Convert boolean/number for SQLite compatibility
const isMultiple = typeof row.is_multiple === 'boolean' ? row.is_multiple : row.is_multiple === 1;
const isEditable = typeof row.is_editable === 'boolean' ? row.is_editable : row.is_editable === 1;
const isRequired = typeof row.is_required === 'boolean' ? row.is_required : row.is_required === 1;
const isSystem = typeof row.is_system === 'boolean' ? row.is_system : row.is_system === 1;
return {
jiraId: row.jira_attr_id,
name: row.attr_name,
fieldName: row.field_name,
type: row.attr_type as AttributeDefinition['type'],
isMultiple,
isEditable,
isRequired,
isSystem,
referenceTypeName: row.reference_type_name || undefined,
description: row.description || undefined,
position: row.position ?? 0,
};
});
}
/**
* Get object type definition from database
*/
async getObjectType(typeName: string): Promise<ObjectTypeDefinition | null> {
const row = await this.db.queryOne<{
jira_type_id: number;
type_name: string;
display_name: string;
description: string | null;
sync_priority: number;
object_count: number;
}>(`
SELECT * FROM object_types
WHERE type_name = ?
`, [typeName]);
if (!row) return null;
const attributes = await this.getAttributesForType(typeName);
return {
jiraTypeId: row.jira_type_id,
name: row.display_name,
typeName: row.type_name,
syncPriority: row.sync_priority,
objectCount: row.object_count,
attributes,
};
}
/**
* Get attribute ID by type and field name or attribute name
* Supports both fieldName (camelCase) and name (display name) for flexibility
*/
async getAttributeId(typeName: string, fieldNameOrName: string): Promise<number | null> {
// Try field_name first (camelCase)
let row = await this.db.queryOne<{ id: number }>(`
SELECT id FROM attributes
WHERE object_type_name = ? AND field_name = ?
`, [typeName, fieldNameOrName]);
// If not found, try attr_name (display name)
if (!row) {
row = await this.db.queryOne<{ id: number }>(`
SELECT id FROM attributes
WHERE object_type_name = ? AND attr_name = ?
`, [typeName, fieldNameOrName]);
}
return row?.id || null;
}
}
// Export singleton instance
export const schemaDiscoveryService = new SchemaDiscoveryService();
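The boolean/number conversions repeated throughout these services exist because SQLite stores booleans as 0/1 integers while PostgreSQL returns native booleans. A minimal sketch of both directions; `toBool` and `toDbBool` are illustrative helper names, not part of the codebase:

```typescript
// Normalize a value read from the database (SQLite: 0/1, PostgreSQL: boolean)
// into a real boolean, as done inline for is_multiple, is_editable, enabled, etc.
function toBool(value: boolean | number): boolean {
  return typeof value === 'boolean' ? value : value === 1;
}

// The inverse direction, for bind parameters: PostgreSQL accepts booleans,
// SQLite expects 0/1 integers (the `db.isPostgres ? true : 1` pattern above).
function toDbBool(value: boolean, isPostgres: boolean): boolean | number {
  return isPostgres ? value : (value ? 1 : 0);
}
```

Using such helpers everywhere would remove the per-field ternaries and keep read and write conversions symmetric across both adapters.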


@@ -0,0 +1,387 @@
/**
* Schema Mapping Service
*
* Manages mappings between object types and their Jira Assets schema IDs.
* Allows different object types to exist in different schemas.
*/
import { logger } from './logger.js';
import { normalizedCacheStore } from './normalizedCacheStore.js';
import { config } from '../config/env.js';
import type { CMDBObjectTypeName } from '../generated/jira-types.js';
import type { DatabaseAdapter } from './database/interface.js';
export interface SchemaMapping {
objectTypeName: string;
schemaId: string;
enabled: boolean;
createdAt: string;
updatedAt: string;
}
class SchemaMappingService {
private cache: Map<string, string> = new Map(); // objectTypeName -> schemaId
private cacheInitialized: boolean = false;
/**
* Get schema ID for an object type
* Returns the configured schema ID or the default from config
*/
async getSchemaId(objectTypeName: CMDBObjectTypeName | string): Promise<string> {
await this.ensureCacheInitialized();
// Check cache first
if (this.cache.has(objectTypeName)) {
return this.cache.get(objectTypeName)!;
}
// Try to get schema ID from database (from enabled object types)
try {
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const type = enabledTypes.find(et => et.objectTypeName === objectTypeName);
if (type) {
return type.schemaId;
}
} catch (error) {
logger.warn(`SchemaMapping: Failed to get schema ID from database for ${objectTypeName}`, error);
}
// Return empty string if not found (no default)
return '';
}
/**
* Get all schema mappings
*/
async getAllMappings(): Promise<SchemaMapping[]> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
const typedDb = db as DatabaseAdapter;
const rows = await typedDb.query<{
object_type_name: string;
schema_id: string;
enabled: boolean | number;
created_at: string;
updated_at: string;
}>(`
SELECT object_type_name, schema_id, enabled, created_at, updated_at
FROM schema_mappings
ORDER BY object_type_name
`);
return rows.map((row: { object_type_name: string; schema_id: string; enabled: boolean | number; created_at: string; updated_at: string }) => ({
objectTypeName: row.object_type_name,
schemaId: row.schema_id,
enabled: typeof row.enabled === 'boolean' ? row.enabled : row.enabled === 1,
createdAt: row.created_at,
updatedAt: row.updated_at,
}));
}
/**
* Set schema mapping for an object type
*/
async setMapping(objectTypeName: string, schemaId: string, enabled: boolean = true): Promise<void> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
const now = new Date().toISOString();
// The upsert SQL is identical on both databases; only the boolean binding differs
const enabledValue = db.isPostgres ? enabled : (enabled ? 1 : 0);
await db.execute(`
INSERT INTO schema_mappings (object_type_name, schema_id, enabled, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(object_type_name) DO UPDATE SET
schema_id = excluded.schema_id,
enabled = excluded.enabled,
updated_at = excluded.updated_at
`, [objectTypeName, schemaId, enabledValue, now, now]);
// Update cache
if (enabled) {
this.cache.set(objectTypeName, schemaId);
} else {
this.cache.delete(objectTypeName);
}
logger.info(`SchemaMappingService: Set mapping for ${objectTypeName} -> schema ${schemaId} (enabled: ${enabled})`);
}
/**
* Delete schema mapping for an object type (will use default schema)
*/
async deleteMapping(objectTypeName: string): Promise<void> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
await db.execute(`
DELETE FROM schema_mappings WHERE object_type_name = ?
`, [objectTypeName]);
// Remove from cache
this.cache.delete(objectTypeName);
logger.info(`SchemaMappingService: Deleted mapping for ${objectTypeName}`);
}
/**
* Check if an object type should be synced (has enabled mapping or uses default schema)
*/
async isTypeEnabled(objectTypeName: string): Promise<boolean> {
await this.ensureCacheInitialized();
// If there's a mapping, check if it's enabled
if (this.cache.has(objectTypeName)) {
// Check if it's actually enabled in the database
const db = (normalizedCacheStore as any).db;
if (db) {
await db.ensureInitialized?.();
const typedDb = db as DatabaseAdapter;
const row = await typedDb.queryOne<{ enabled: boolean | number }>(`
SELECT enabled FROM schema_mappings WHERE object_type_name = ?
`, [objectTypeName]);
if (row) {
return typeof row.enabled === 'boolean' ? row.enabled : row.enabled === 1;
}
}
}
// If no mapping exists, check if object type is enabled in database
try {
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
return enabledTypes.some(et => et.objectTypeName === objectTypeName);
} catch (error) {
logger.warn(`SchemaMapping: Failed to check if ${objectTypeName} is enabled`, error);
return false;
}
}
/**
* Initialize cache from database
*/
private async ensureCacheInitialized(): Promise<void> {
if (this.cacheInitialized) return;
try {
const db = (normalizedCacheStore as any).db;
if (!db) {
this.cacheInitialized = true;
return;
}
await db.ensureInitialized?.();
// Use parameterized query to avoid boolean/integer comparison issues
const typedDb = db as DatabaseAdapter;
const rows = await typedDb.query<{
object_type_name: string;
schema_id: string;
enabled: boolean | number;
}>(
`SELECT object_type_name, schema_id, enabled
FROM schema_mappings
WHERE enabled = ?`,
[db.isPostgres ? true : 1]
);
this.cache.clear();
for (const row of rows) {
const enabled = typeof row.enabled === 'boolean' ? row.enabled : row.enabled === 1;
if (enabled) {
this.cache.set(row.object_type_name, row.schema_id);
}
}
this.cacheInitialized = true;
logger.debug(`SchemaMappingService: Initialized cache with ${this.cache.size} mappings`);
} catch (error) {
logger.warn('SchemaMappingService: Failed to initialize cache, using defaults', error);
this.cacheInitialized = true; // Mark as initialized to prevent retry loops
}
}
/**
* Get all object types with their sync configuration
*/
async getAllObjectTypesWithConfig(): Promise<Array<{
typeName: string;
displayName: string;
description: string | null;
schemaId: string | null;
enabled: boolean;
objectCount: number;
syncPriority: number;
}>> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
try {
// Get all object types with their mappings
const typedDb = db as DatabaseAdapter;
const rows = await typedDb.query<{
type_name: string;
display_name: string;
description: string | null;
object_count: number;
sync_priority: number;
schema_id: string | null;
enabled: boolean | number | null;
}>(`
SELECT
ot.type_name,
ot.display_name,
ot.description,
ot.object_count,
ot.sync_priority,
sm.schema_id,
sm.enabled
FROM object_types ot
LEFT JOIN schema_mappings sm ON ot.type_name = sm.object_type_name
ORDER BY ot.sync_priority ASC, ot.display_name ASC
`);
logger.debug(`SchemaMappingService: Found ${rows.length} object types in database`);
// Get first available schema ID from database as the fallback
// (reuses the adapter already obtained above instead of re-importing the store)
let defaultSchemaId: string | null = null;
try {
const schemaRow = await typedDb.queryOne<{ jira_schema_id: string }>(
`SELECT jira_schema_id FROM schemas ORDER BY jira_schema_id LIMIT 1`
);
defaultSchemaId = schemaRow?.jira_schema_id || null;
} catch (error) {
logger.warn('SchemaMapping: Failed to get default schema ID from database', error);
}
return rows.map((row: { type_name: string; display_name: string; description: string | null; object_count: number; sync_priority: number; schema_id: string | null; enabled: number | boolean | null }) => ({
typeName: row.type_name,
displayName: row.display_name,
description: row.description,
schemaId: row.schema_id || defaultSchemaId,
enabled: row.enabled === null
? true // Default: enabled if no mapping exists
: (typeof row.enabled === 'boolean' ? row.enabled : row.enabled === 1),
objectCount: row.object_count || 0,
syncPriority: row.sync_priority || 0,
}));
} catch (error) {
logger.error('SchemaMappingService: Failed to get object types with config', error);
throw error;
}
}
/**
* Enable or disable an object type for syncing
*/
async setTypeEnabled(objectTypeName: string, enabled: boolean): Promise<void> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
// Check if mapping exists
const typedDb = db as DatabaseAdapter;
const existing = await typedDb.queryOne<{ schema_id: string }>(`
SELECT schema_id FROM schema_mappings WHERE object_type_name = ?
`, [objectTypeName]);
// Get schema ID from existing mapping or from database
let schemaId = existing?.schema_id || '';
if (!schemaId) {
// Try to get schema ID from database (from enabled object types)
try {
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const type = enabledTypes.find(et => et.objectTypeName === objectTypeName);
if (type) {
schemaId = type.schemaId;
}
} catch (error) {
logger.warn(`SchemaMapping: Failed to get schema ID from database for ${objectTypeName}`, error);
}
}
if (!schemaId) {
throw new Error(`No schema ID available for object type ${objectTypeName}. Please ensure the object type is discovered and configured.`);
}
// Create or update mapping
const now = new Date().toISOString();
if (db.isPostgres) {
await db.execute(`
INSERT INTO schema_mappings (object_type_name, schema_id, enabled, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(object_type_name) DO UPDATE SET
enabled = excluded.enabled,
updated_at = excluded.updated_at
`, [objectTypeName, schemaId, enabled, now, now]);
} else {
await db.execute(`
INSERT INTO schema_mappings (object_type_name, schema_id, enabled, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(object_type_name) DO UPDATE SET
enabled = excluded.enabled,
updated_at = excluded.updated_at
`, [objectTypeName, schemaId, enabled ? 1 : 0, now, now]);
}
// Invalidate the in-memory cache so the next read reloads mappings from the database.
// (Updating this.cache here would be pointless, since clearCache() wipes it anyway.)
this.clearCache();
logger.info(`SchemaMappingService: Set ${objectTypeName} enabled=${enabled}`);
}
/**
* Clear cache (useful after updates)
*/
clearCache(): void {
this.cache.clear();
this.cacheInitialized = false;
}
}
export const schemaMappingService = new SchemaMappingService();
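The `enabled` column normalization above (boolean on PostgreSQL, `0`/`1` on SQLite, `null` when no mapping row exists) can be sketched as a small standalone helper. `normalizeEnabled` is a hypothetical name for illustration, not part of the service:

```typescript
// Hypothetical helper illustrating the normalization used by SchemaMappingService:
// PostgreSQL drivers return real booleans, while SQLite stores 0/1,
// and a missing mapping row (null) defaults to enabled.
function normalizeEnabled(value: boolean | number | null, fallback = true): boolean {
  if (value === null) return fallback; // no mapping row: default to enabled
  return typeof value === 'boolean' ? value : value === 1;
}
```

Centralizing this in one function avoids repeating the ternary at every call site that reads `schema_mappings.enabled`.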


@@ -0,0 +1,149 @@
/**
* Schema Utility Functions
*
* Helper functions for schema discovery and type conversion
*/
// Jira attribute type mappings (based on Jira Insight/Assets API)
const JIRA_TYPE_MAP: Record<number, 'text' | 'integer' | 'float' | 'boolean' | 'date' | 'datetime' | 'select' | 'reference' | 'url' | 'email' | 'textarea' | 'user' | 'status' | 'unknown'> = {
0: 'text', // Default/Text
1: 'integer', // Integer
2: 'boolean', // Boolean
3: 'float', // Double/Float
4: 'date', // Date
5: 'datetime', // DateTime
6: 'url', // URL
7: 'email', // Email
8: 'textarea', // Textarea
9: 'select', // Select
10: 'reference', // Reference (Object)
11: 'user', // User
12: 'reference', // Confluence (treated as reference)
13: 'reference', // Group (treated as reference)
14: 'reference', // Version (treated as reference)
15: 'reference', // Project (treated as reference)
16: 'status', // Status
};
// Priority types - these sync first as they are core object types
const PRIORITY_TYPE_NAMES = new Set([
'Application Component',
'Server',
'Flows',
]);
// Reference data types - these sync with lower priority
const REFERENCE_TYPE_PATTERNS = [
/Factor$/,
/Model$/,
/Type$/,
/Category$/,
/Importance$/,
/Analyse$/,
/Organisation$/,
/Function$/,
];
/**
* Convert a string to camelCase while preserving existing casing patterns
* E.g., "Application Function" -> "applicationFunction"
* "ICT Governance Model" -> "ictGovernanceModel"
* "ApplicationFunction" -> "applicationFunction"
*/
export function toCamelCase(str: string): string {
// First split on spaces and special chars
const words = str
.replace(/[^a-zA-Z0-9\s]/g, ' ')
.split(/\s+/)
.filter(w => w.length > 0);
if (words.length === 0) return '';
// If it's a single word that's already camelCase or PascalCase, just lowercase first char
if (words.length === 1) {
const word = words[0];
return word.charAt(0).toLowerCase() + word.slice(1);
}
// Multiple words - first word lowercase, rest capitalize first letter
return words
.map((word, index) => {
if (index === 0) {
// First word: if all uppercase (acronym), lowercase it, otherwise just lowercase first char
if (word === word.toUpperCase() && word.length > 1) {
return word.toLowerCase();
}
return word.charAt(0).toLowerCase() + word.slice(1);
}
// Other words: capitalize first letter, keep rest as-is
return word.charAt(0).toUpperCase() + word.slice(1);
})
.join('');
}
/**
* Convert a string to PascalCase while preserving existing casing patterns
* E.g., "Application Function" -> "ApplicationFunction"
* "ICT Governance Model" -> "IctGovernanceModel"
* "applicationFunction" -> "ApplicationFunction"
*/
export function toPascalCase(str: string): string {
// First split on spaces and special chars
const words = str
.replace(/[^a-zA-Z0-9\s]/g, ' ')
.split(/\s+/)
.filter(w => w.length > 0);
if (words.length === 0) return '';
// If it's a single word, just capitalize first letter
if (words.length === 1) {
const word = words[0];
return word.charAt(0).toUpperCase() + word.slice(1);
}
// Multiple words - capitalize first letter of each
return words
.map(word => {
// If all uppercase (acronym), capitalize first letter and lowercase the rest
if (word === word.toUpperCase() && word.length > 1) {
return word.charAt(0).toUpperCase() + word.slice(1).toLowerCase();
}
return word.charAt(0).toUpperCase() + word.slice(1);
})
.join('');
}
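The two converters above share the same word-splitting and acronym rules. A condensed sketch (with hypothetical names `camel`/`pascal`, covering the multi-word cases from the doc comments) looks like this:

```typescript
// Condensed sketch of the casing helpers, assuming the same splitting rules:
// strip non-alphanumerics, split on whitespace, drop empty fragments.
function splitWords(str: string): string[] {
  return str.replace(/[^a-zA-Z0-9\s]/g, ' ').split(/\s+/).filter(w => w.length > 0);
}

function camel(str: string): string {
  return splitWords(str)
    .map((w, i) => {
      if (i === 0) {
        // Leading acronyms are lowercased entirely ("ICT" -> "ict")
        return w === w.toUpperCase() && w.length > 1
          ? w.toLowerCase()
          : w.charAt(0).toLowerCase() + w.slice(1);
      }
      return w.charAt(0).toUpperCase() + w.slice(1);
    })
    .join('');
}

function pascal(str: string): string {
  return splitWords(str)
    .map(w =>
      // Acronyms become "Ict"-style; other words keep their tail casing
      w === w.toUpperCase() && w.length > 1
        ? w.charAt(0).toUpperCase() + w.slice(1).toLowerCase()
        : w.charAt(0).toUpperCase() + w.slice(1)
    )
    .join('');
}
```

This reproduces the documented examples: `"ICT Governance Model"` maps to `ictGovernanceModel` / `IctGovernanceModel`.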
/**
* Map Jira attribute type ID to our type system
*/
export function mapJiraType(typeId: number): 'text' | 'integer' | 'float' | 'boolean' | 'date' | 'datetime' | 'select' | 'reference' | 'url' | 'email' | 'textarea' | 'user' | 'status' | 'unknown' {
return JIRA_TYPE_MAP[typeId] || 'unknown';
}
/**
* Determine sync priority for an object type
*/
export function determineSyncPriority(typeName: string, objectCount: number): number {
// Application Component and related main types first
if (PRIORITY_TYPE_NAMES.has(typeName)) {
return 1;
}
// Reference data types last
for (const pattern of REFERENCE_TYPE_PATTERNS) {
if (pattern.test(typeName)) {
return 10;
}
}
// Medium priority for types with more objects
if (objectCount > 100) {
return 2;
}
if (objectCount > 10) {
return 5;
}
return 8;
}
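The tiering in `determineSyncPriority` can be sketched in isolation. This is a minimal reimplementation for illustration, with a reduced pattern list; the constant and function names here are hypothetical:

```typescript
// Sketch of the sync-priority tiers: core types first (1), reference data
// last (10), everything else ordered by how many objects exist.
const CORE_TYPES = new Set(['Application Component', 'Server', 'Flows']);
const REFERENCE_PATTERNS = [/Factor$/, /Model$/, /Type$/, /Category$/];

function syncPriority(typeName: string, objectCount: number): number {
  if (CORE_TYPES.has(typeName)) return 1;                          // main types first
  if (REFERENCE_PATTERNS.some(p => p.test(typeName))) return 10;   // reference data last
  if (objectCount > 100) return 2;                                 // large populations early
  if (objectCount > 10) return 5;
  return 8;
}
```

Note that the pattern check runs before the count check, so a large reference-data type (e.g. a `...Factor` type with 500 objects) still sorts last.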


@@ -8,10 +8,12 @@
*/
import { logger } from './logger.js';
import { cacheStore } from './cacheStore.js';
import { normalizedCacheStore as cacheStore } from './normalizedCacheStore.js';
import { jiraAssetsClient, JiraObjectNotFoundError } from './jiraAssetsClient.js';
import { OBJECT_TYPES, getObjectTypesBySyncPriority } from '../generated/jira-schema.js';
import type { CMDBObject, CMDBObjectTypeName } from '../generated/jira-types.js';
import { schemaDiscoveryService } from './schemaDiscoveryService.js';
import type { ObjectEntry } from '../domain/jiraAssetsPayload.js';
// =============================================================================
// Types
@@ -61,6 +63,7 @@ class SyncEngine {
private incrementalInterval: number;
private batchSize: number;
private lastIncrementalSync: Date | null = null;
private lastConfigCheck: number = 0; // Track last config check time to avoid log spam
constructor() {
this.incrementalInterval = parseInt(
@@ -93,7 +96,26 @@ class SyncEngine {
logger.info('SyncEngine: Sync uses service account token (JIRA_SERVICE_ACCOUNT_TOKEN) from .env');
this.isRunning = true;
// Sync can run automatically using service account token
// Check if configuration is complete before starting scheduler
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const isConfigured = await schemaConfigurationService.isConfigurationComplete();
// Start incremental sync scheduler if token is available AND configuration is complete
if (jiraAssetsClient.hasToken()) {
if (isConfigured) {
this.startIncrementalSyncScheduler();
logger.info('SyncEngine: Incremental sync scheduler started (configuration complete)');
} else {
logger.info('SyncEngine: Incremental sync scheduler NOT started - schema configuration not complete. Please configure object types in settings first.');
// Start scheduler but it will check configuration on each run
// This allows scheduler to start automatically when configuration is completed later
this.startIncrementalSyncScheduler();
logger.info('SyncEngine: Incremental sync scheduler started (will check configuration on each run)');
}
} else {
logger.info('SyncEngine: Service account token not configured, incremental sync disabled');
}
logger.info('SyncEngine: Initialized (using service account token for sync operations)');
}
@@ -163,14 +185,42 @@ class SyncEngine {
logger.info('SyncEngine: Starting full sync...');
try {
// Get object types sorted by sync priority
const objectTypes = getObjectTypesBySyncPriority();
// Check if configuration is complete
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const isConfigured = await schemaConfigurationService.isConfigurationComplete();
if (!isConfigured) {
throw new Error('Schema configuration not complete. Please configure at least one object type to be synced in the settings page.');
}
for (const typeDef of objectTypes) {
const typeStat = await this.syncObjectType(typeDef.typeName as CMDBObjectTypeName);
// Get enabled object types from configuration
logger.info('SyncEngine: Fetching enabled object types from configuration...');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
logger.info(`SyncEngine: Found ${enabledTypes.length} enabled object types to sync`);
if (enabledTypes.length === 0) {
throw new Error('No object types enabled for syncing. Please enable at least one object type in the settings page.');
}
// Schema discovery will happen automatically when needed (e.g., for relation extraction)
// It's no longer required upfront - the user has already configured which object types to sync
logger.info('SyncEngine: Starting object sync for configured object types...');
// Sync each enabled object type
for (const enabledType of enabledTypes) {
try {
const typeStat = await this.syncConfiguredObjectType(enabledType);
stats.push(typeStat);
totalObjects += typeStat.objectsProcessed;
totalRelations += typeStat.relationsExtracted;
} catch (error) {
logger.error(`SyncEngine: Failed to sync ${enabledType.displayName}`, error);
stats.push({
objectType: enabledType.displayName,
objectsProcessed: 0,
relationsExtracted: 0,
duration: 0,
});
}
}
// Update sync metadata
@@ -205,52 +255,181 @@ class SyncEngine {
}
/**
* Sync a single object type
* Store an object and all its nested referenced objects recursively
* This method processes the entire object tree, storing all nested objects
* and extracting all relations, while preventing infinite loops with circular references.
*
* @param entry - The object entry to store (in ObjectEntry format from API)
* @param typeName - The type name of the object
* @param processedIds - Set of already processed object IDs (to prevent duplicates and circular refs)
* @returns Statistics about objects stored and relations extracted
*/
private async syncObjectType(typeName: CMDBObjectTypeName): Promise<SyncStats> {
private async storeObjectTree(
entry: ObjectEntry,
typeName: CMDBObjectTypeName,
processedIds: Set<string>
): Promise<{ objectsStored: number; relationsExtracted: number }> {
const entryId = String(entry.id);
// Skip if already processed (handles circular references)
if (processedIds.has(entryId)) {
logger.debug(`SyncEngine: Skipping already processed object ${entry.objectKey || entryId} of type ${typeName}`);
return { objectsStored: 0, relationsExtracted: 0 };
}
processedIds.add(entryId);
let objectsStored = 0;
let relationsExtracted = 0;
try {
logger.debug(`SyncEngine: [Recursive] Storing object tree for ${entry.objectKey || entryId} of type ${typeName} (depth: ${processedIds.size - 1})`);
// 1. Adapt and parse the object
const adapted = jiraAssetsClient.adaptObjectEntryToJiraAssetsObject(entry);
if (!adapted) {
logger.warn(`SyncEngine: Failed to adapt object ${entry.objectKey || entryId}`);
return { objectsStored: 0, relationsExtracted: 0 };
}
const parsed = await jiraAssetsClient.parseObject(adapted);
if (!parsed) {
logger.warn(`SyncEngine: Failed to parse object ${entry.objectKey || entryId}`);
return { objectsStored: 0, relationsExtracted: 0 };
}
// 2. Store the object
await cacheStore.upsertObject(typeName, parsed);
objectsStored++;
logger.debug(`SyncEngine: Stored object ${parsed.objectKey || parsed.id} of type ${typeName}`);
// 3. Schema discovery must be manually triggered via API endpoints
// No automatic discovery
// 4. Extract and store relations for this object
await cacheStore.extractAndStoreRelations(typeName, parsed);
relationsExtracted++;
logger.debug(`SyncEngine: Extracted relations for object ${parsed.objectKey || parsed.id}`);
// 5. Recursively process nested referenced objects
// Note: Lookup maps should already be initialized by getAllObjectsOfType
// Use a separate Set for extraction to avoid conflicts with storage tracking
const extractionProcessedIds = new Set<string>();
const nestedRefs = jiraAssetsClient.extractNestedReferencedObjects(
entry,
extractionProcessedIds, // Separate Set for extraction (prevents infinite loops in traversal)
5, // max depth
0 // current depth
);
if (nestedRefs.length > 0) {
logger.debug(`SyncEngine: [Recursive] Found ${nestedRefs.length} nested referenced objects for ${entry.objectKey || entryId}`);
// Group by type for better logging
const refsByType = new Map<string, number>();
for (const ref of nestedRefs) {
refsByType.set(ref.typeName, (refsByType.get(ref.typeName) || 0) + 1);
}
const typeSummary = Array.from(refsByType.entries())
.map(([type, count]) => `${count} ${type}`)
.join(', ');
logger.debug(`SyncEngine: [Recursive] Nested objects by type: ${typeSummary}`);
}
// 6. Recursively store each nested object
for (const { entry: nestedEntry, typeName: nestedTypeName } of nestedRefs) {
logger.debug(`SyncEngine: [Recursive] Processing nested object ${nestedEntry.objectKey || nestedEntry.id} of type ${nestedTypeName}`);
const nestedResult = await this.storeObjectTree(
nestedEntry,
nestedTypeName as CMDBObjectTypeName,
processedIds
);
objectsStored += nestedResult.objectsStored;
relationsExtracted += nestedResult.relationsExtracted;
}
logger.debug(`SyncEngine: [Recursive] Completed storing object tree for ${entry.objectKey || entryId}: ${objectsStored} objects, ${relationsExtracted} relations`);
return { objectsStored, relationsExtracted };
} catch (error) {
logger.error(`SyncEngine: Failed to store object tree for ${entry.objectKey || entryId}`, error);
return { objectsStored, relationsExtracted };
}
}
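The circular-reference handling in `storeObjectTree` rests on one idea: a shared `Set` of visited IDs, checked before any work and populated before recursing. A minimal sketch (hypothetical `TreeNode`/`countReachable` names, not the real object shape):

```typescript
// Minimal sketch of the cycle-safe traversal used by storeObjectTree:
// a shared Set of visited IDs guarantees termination on circular references.
interface TreeNode {
  id: string;
  refs: TreeNode[];
}

function countReachable(node: TreeNode, seen: Set<string>): number {
  if (seen.has(node.id)) return 0; // already processed: break the cycle
  seen.add(node.id);               // mark BEFORE recursing, so cycles hit the guard
  let count = 1;
  for (const ref of node.refs) {
    count += countReachable(ref, seen);
  }
  return count;
}
```

Passing one `seen` set across the whole sync run also deduplicates objects reachable from multiple roots, which is why `syncConfiguredObjectType` creates a single `processedIds` set per type rather than per object.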
/**
* Sync a configured object type (from schema configuration)
*/
private async syncConfiguredObjectType(enabledType: {
schemaId: string;
objectTypeId: number;
objectTypeName: string;
displayName: string;
}): Promise<SyncStats> {
const startTime = Date.now();
let objectsProcessed = 0;
let relationsExtracted = 0;
try {
const typeDef = OBJECT_TYPES[typeName];
if (!typeDef) {
logger.warn(`SyncEngine: Unknown type ${typeName}`);
return { objectType: typeName, objectsProcessed: 0, relationsExtracted: 0, duration: 0 };
}
logger.info(`SyncEngine: Syncing ${enabledType.displayName} (${enabledType.objectTypeName}) from schema ${enabledType.schemaId}...`);
logger.debug(`SyncEngine: Syncing ${typeName}...`);
// Fetch all objects from Jira using the configured schema and object type
// This returns raw entries for recursive processing (includeAttributesDeep=2 provides nested data)
const { objects: jiraObjects, rawEntries } = await jiraAssetsClient.getAllObjectsOfType(
enabledType.displayName, // Use display name for Jira API
this.batchSize,
enabledType.schemaId
);
logger.info(`SyncEngine: Fetched ${jiraObjects.length} ${enabledType.displayName} objects from Jira (schema: ${enabledType.schemaId})`);
// Fetch all objects from Jira
const jiraObjects = await jiraAssetsClient.getAllObjectsOfType(typeName, this.batchSize);
logger.info(`SyncEngine: Fetched ${jiraObjects.length} ${typeName} objects from Jira`);
// Schema discovery must be manually triggered via API endpoints
// No automatic discovery
// Parse and cache objects
const parsedObjects: CMDBObject[] = [];
// Use objectTypeName for cache storage (PascalCase)
const typeName = enabledType.objectTypeName as CMDBObjectTypeName;
// Process each main object recursively using storeObjectTree
// This will store the object and all its nested referenced objects
const processedIds = new Set<string>(); // Track processed objects to prevent duplicates and circular refs
const failedObjects: Array<{ id: string; key: string; label: string; reason: string }> = [];
if (rawEntries && rawEntries.length > 0) {
logger.info(`SyncEngine: Processing ${rawEntries.length} ${enabledType.displayName} objects recursively...`);
for (const rawEntry of rawEntries) {
try {
const result = await this.storeObjectTree(rawEntry, typeName, processedIds);
objectsProcessed += result.objectsStored;
relationsExtracted += result.relationsExtracted;
} catch (error) {
const entryId = String(rawEntry.id);
failedObjects.push({
id: entryId,
key: rawEntry.objectKey || 'unknown',
label: rawEntry.label || 'unknown',
reason: error instanceof Error ? error.message : 'Unknown error',
});
logger.warn(`SyncEngine: Failed to store object tree for ${enabledType.displayName} object: ${rawEntry.objectKey || entryId} (${rawEntry.label || 'unknown label'})`, error);
}
}
} else {
// Fallback: if rawEntries not available, use adapted objects (less efficient, no recursion)
logger.warn(`SyncEngine: Raw entries not available, using fallback linear processing (no recursive nesting)`);
const parsedObjects: CMDBObject[] = [];
for (const jiraObj of jiraObjects) {
const parsed = jiraAssetsClient.parseObject(jiraObj);
const parsed = await jiraAssetsClient.parseObject(jiraObj);
if (parsed) {
parsedObjects.push(parsed);
} else {
// Track objects that failed to parse
failedObjects.push({
id: jiraObj.id?.toString() || 'unknown',
key: jiraObj.objectKey || 'unknown',
label: jiraObj.label || 'unknown',
reason: 'parseObject returned null',
});
logger.warn(`SyncEngine: Failed to parse ${typeName} object: ${jiraObj.objectKey || jiraObj.id} (${jiraObj.label || 'unknown label'})`);
logger.warn(`SyncEngine: Failed to parse ${enabledType.displayName} object: ${jiraObj.objectKey || jiraObj.id} (${jiraObj.label || 'unknown label'})`);
}
}
// Log parsing statistics
if (failedObjects.length > 0) {
logger.warn(`SyncEngine: ${failedObjects.length} ${typeName} objects failed to parse:`, failedObjects.map(o => `${o.key} (${o.label})`).join(', '));
}
// Batch upsert to cache
if (parsedObjects.length > 0) {
await cacheStore.batchUpsertObjects(typeName, parsedObjects);
objectsProcessed = parsedObjects.length;
@@ -261,25 +440,31 @@ class SyncEngine {
relationsExtracted++;
}
}
}
// Log parsing statistics
if (failedObjects.length > 0) {
logger.warn(`SyncEngine: ${failedObjects.length} ${enabledType.displayName} objects failed to process:`, failedObjects.map(o => `${o.key} (${o.label}): ${o.reason}`).join(', '));
}
const duration = Date.now() - startTime;
const skippedCount = Math.max(0, jiraObjects.length - objectsProcessed); // recursive mode may store more objects than were fetched
if (skippedCount > 0) {
logger.warn(`SyncEngine: Synced ${objectsProcessed}/${jiraObjects.length} ${typeName} objects in ${duration}ms (${skippedCount} skipped)`);
logger.warn(`SyncEngine: Synced ${objectsProcessed}/${jiraObjects.length} ${enabledType.displayName} objects in ${duration}ms (${skippedCount} skipped)`);
} else {
logger.debug(`SyncEngine: Synced ${objectsProcessed} ${typeName} objects in ${duration}ms`);
logger.debug(`SyncEngine: Synced ${objectsProcessed} ${enabledType.displayName} objects in ${duration}ms`);
}
return {
objectType: typeName,
objectType: enabledType.displayName,
objectsProcessed,
relationsExtracted,
duration,
};
} catch (error) {
logger.error(`SyncEngine: Failed to sync ${typeName}`, error);
logger.error(`SyncEngine: Failed to sync ${enabledType.displayName}`, error);
return {
objectType: typeName,
objectType: enabledType.displayName,
objectsProcessed,
relationsExtracted,
duration: Date.now() - startTime,
@@ -287,12 +472,27 @@ class SyncEngine {
}
}
/**
* Sync a single object type (legacy method, kept for backward compatibility)
*/
private async syncObjectType(typeName: CMDBObjectTypeName): Promise<SyncStats> {
// This method is deprecated - use syncConfiguredObjectType instead
logger.warn(`SyncEngine: syncObjectType(${typeName}) is deprecated, use configured object types instead`);
return {
objectType: typeName,
objectsProcessed: 0,
relationsExtracted: 0,
duration: 0,
};
}
// ==========================================================================
// Incremental Sync
// ==========================================================================
/**
* Start the incremental sync scheduler
* The scheduler will check configuration on each run and only sync if configuration is complete
*/
private startIncrementalSyncScheduler(): void {
if (this.incrementalTimer) {
@@ -300,9 +500,11 @@ class SyncEngine {
}
logger.info(`SyncEngine: Starting incremental sync scheduler (every ${this.incrementalInterval}ms)`);
logger.info('SyncEngine: Scheduler will only perform syncs when schema configuration is complete');
this.incrementalTimer = setInterval(() => {
if (!this.isSyncing && this.isRunning) {
// incrementalSync() will check if configuration is complete before syncing
this.incrementalSync().catch(err => {
logger.error('SyncEngine: Incremental sync failed', err);
});
@@ -324,6 +526,26 @@ class SyncEngine {
return { success: false, updatedCount: 0 };
}
// Check if configuration is complete before attempting sync
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const isConfigured = await schemaConfigurationService.isConfigurationComplete();
if (!isConfigured) {
// Don't log on every interval - only log once per minute to avoid spam
const now = Date.now();
if (!this.lastConfigCheck || now - this.lastConfigCheck > 60000) {
logger.debug('SyncEngine: Schema configuration not complete, skipping incremental sync. Please configure object types in settings.');
this.lastConfigCheck = now;
}
return { success: false, updatedCount: 0 };
}
// Get enabled object types - will be used later to filter updated objects
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
if (enabledTypes.length === 0) {
logger.debug('SyncEngine: No enabled object types, skipping incremental sync');
return { success: false, updatedCount: 0 };
}
if (this.isSyncing) {
return { success: false, updatedCount: 0 };
}
@@ -339,6 +561,15 @@ class SyncEngine {
logger.debug(`SyncEngine: Incremental sync since ${since.toISOString()}`);
// Get enabled object types to filter incremental sync
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const enabledTypeNames = new Set(enabledTypes.map(et => et.objectTypeName));
if (enabledTypeNames.size === 0) {
logger.debug('SyncEngine: No enabled object types, skipping incremental sync');
return { success: false, updatedCount: 0 };
}
// Fetch updated objects from Jira
const updatedObjects = await jiraAssetsClient.getUpdatedObjectsSince(since, this.batchSize);
@@ -368,16 +599,50 @@ class SyncEngine {
return { success: true, updatedCount: 0 };
}
let updatedCount = 0;
// Schema discovery must be manually triggered via API endpoints
// No automatic discovery
let updatedCount = 0;
const processedIds = new Set<string>(); // Track processed objects for recursive sync
// Filter updated objects to only process enabled object types
// Use recursive processing to handle nested references
for (const jiraObj of updatedObjects) {
const parsed = jiraAssetsClient.parseObject(jiraObj);
const parsed = await jiraAssetsClient.parseObject(jiraObj);
if (parsed) {
const typeName = parsed._objectType as CMDBObjectTypeName;
// Only sync if this object type is enabled
if (!enabledTypeNames.has(typeName)) {
logger.debug(`SyncEngine: Skipping ${typeName} in incremental sync - not enabled`);
continue;
}
// Get raw entry for recursive processing
const objectId = parsed.id;
try {
const entry = await jiraAssetsClient.getObjectEntry(objectId);
if (entry) {
// Use recursive storeObjectTree to process object and all nested references
const result = await this.storeObjectTree(entry, typeName, processedIds);
if (result.objectsStored > 0) {
updatedCount++;
logger.debug(`SyncEngine: Incremental sync processed ${objectId}: ${result.objectsStored} objects, ${result.relationsExtracted} relations`);
}
} else {
// Fallback to linear processing if raw entry not available
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
updatedCount++;
}
} catch (error) {
logger.warn(`SyncEngine: Failed to get raw entry for ${objectId}, using fallback`, error);
// Fallback to linear processing
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
updatedCount++;
}
}
}
// Update sync metadata
@@ -404,6 +669,7 @@ class SyncEngine {
/**
* Trigger a sync for a specific object type
* Only syncs if the object type is enabled in configuration
* Allows concurrent syncs for different types, but blocks if:
* - A full sync is in progress
* - An incremental sync is in progress
@@ -420,10 +686,19 @@ class SyncEngine {
throw new Error(`Sync already in progress for ${typeName}`);
}
// Check if this type is enabled in configuration
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const enabledType = enabledTypes.find(et => et.objectTypeName === typeName);
if (!enabledType) {
throw new Error(`Object type ${typeName} is not enabled for syncing. Please enable it in the Schema Configuration settings page.`);
}
this.syncingTypes.add(typeName);
try {
return await this.syncObjectType(typeName);
return await this.syncConfiguredObjectType(enabledType);
} finally {
this.syncingTypes.delete(typeName);
}
@@ -431,20 +706,39 @@ class SyncEngine {
/**
* Force sync a single object
* Only syncs if the object type is enabled in configuration
* If the object was deleted from Jira, it will be removed from the local cache
* Uses recursive processing to store nested referenced objects
*/
async syncObject(typeName: CMDBObjectTypeName, objectId: string): Promise<boolean> {
try {
const jiraObj = await jiraAssetsClient.getObject(objectId);
if (!jiraObj) return false;
// Check if this type is enabled in configuration
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const isEnabled = enabledTypes.some(et => et.objectTypeName === typeName);
const parsed = jiraAssetsClient.parseObject(jiraObj);
if (!parsed) return false;
if (!isEnabled) {
logger.warn(`SyncEngine: Cannot sync object ${objectId} - type ${typeName} is not enabled for syncing`);
return false;
}
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
// Schema discovery must be manually triggered via API endpoints
// No automatic discovery
// Get raw ObjectEntry for recursive processing
const entry = await jiraAssetsClient.getObjectEntry(objectId);
if (!entry) return false;
// Use recursive storeObjectTree to process object and all nested references
const processedIds = new Set<string>();
const result = await this.storeObjectTree(entry, typeName, processedIds);
if (result.objectsStored > 0) {
logger.info(`SyncEngine: Synced object ${objectId} recursively: ${result.objectsStored} objects, ${result.relationsExtracted} relations`);
return true;
}
return false;
} catch (error) {
// If object was deleted from Jira, remove it from our cache
if (error instanceof JiraObjectNotFoundError) {

docker-compose.dev.yml Normal file

@@ -0,0 +1,21 @@
services:
postgres:
image: postgres:15-alpine
container_name: cmdb-postgres-dev
environment:
POSTGRES_DB: cmdb_insight
POSTGRES_USER: cmdb
POSTGRES_PASSWORD: cmdb-dev
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U cmdb"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
volumes:
postgres_data:


@@ -2,7 +2,7 @@ version: '3.8'
services:
backend:
image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:latest
image: zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:latest
environment:
- NODE_ENV=production
- PORT=3001
@@ -21,7 +21,7 @@ services:
start_period: 40s
frontend:
image: zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest
image: zdlas.azurecr.io/cmdb-insight/frontend:latest
depends_on:
- backend
restart: unless-stopped


@@ -1,10 +1,8 @@
version: '3.8'
services:
postgres:
image: postgres:15-alpine
environment:
POSTGRES_DB: cmdb
POSTGRES_DB: cmdb_insight
POSTGRES_USER: cmdb
POSTGRES_PASSWORD: cmdb-dev
ports:
@@ -30,12 +28,12 @@ services:
- DATABASE_TYPE=postgres
- DATABASE_HOST=postgres
- DATABASE_PORT=5432
- DATABASE_NAME=cmdb
- DATABASE_NAME=cmdb_insight
- DATABASE_USER=cmdb
- DATABASE_PASSWORD=cmdb-dev
# Optional Jira/AI variables (set in .env file or environment)
- JIRA_HOST=${JIRA_HOST}
- JIRA_PAT=${JIRA_PAT}
- JIRA_SCHEMA_ID=${JIRA_SCHEMA_ID}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
volumes:
- ./backend/src:/app/src


@@ -134,7 +134,6 @@ The following environment variables have been **REMOVED** from the codebase and
- `SESSION_SECRET`: Should be a secure random string in production (generate with `openssl rand -base64 32`)
- `ENCRYPTION_KEY`: Must be exactly 32 bytes when base64 decoded (generate with `openssl rand -base64 32`)
- `JIRA_SCHEMA_ID`: Required for Jira Assets integration
### Application Branding


@@ -1,119 +0,0 @@
# Authentication System Implementation Status
## ✅ Completed Features
### Backend
- ✅ Database schema with users, roles, permissions, sessions, user_settings, email_tokens tables
- ✅ User service (CRUD, password hashing, email verification, password reset)
- ✅ Role service (dynamic role and permission management)
- ✅ Auth service (local auth + OAuth with database-backed sessions)
- ✅ Email service (Nodemailer with SMTP)
- ✅ Encryption service (AES-256-GCM for sensitive data)
- ✅ User settings service (Jira PAT, AI features, API keys)
- ✅ Authorization middleware (requireAuth, requireRole, requirePermission)
- ✅ All API routes protected with authentication
- ✅ Auth routes (login, logout, password reset, email verification, invitations)
- ✅ User management routes (admin only)
- ✅ Role management routes
- ✅ User settings routes
- ✅ Profile routes
### Frontend
- ✅ Auth store extended with roles, permissions, local auth support
- ✅ Permission hooks (useHasPermission, useHasRole, usePermissions)
- ✅ ProtectedRoute component
- ✅ Login component (local login + OAuth choice)
- ✅ ForgotPassword component
- ✅ ResetPassword component
- ✅ AcceptInvitation component
- ✅ UserManagement component (admin)
- ✅ RoleManagement component (admin)
- ✅ UserSettings component
- ✅ Profile component
- ✅ UserMenu with logout and profile/settings links
- ✅ Feature gating based on permissions
## 🔧 Configuration Required
### Environment Variables
**Required for local authentication:**
```env
LOCAL_AUTH_ENABLED=true
```
**Required for email functionality:**
```env
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=your-email@example.com
SMTP_PASSWORD=your-password
SMTP_FROM=noreply@example.com
```
**Required for encryption:**
```env
ENCRYPTION_KEY=your-32-byte-encryption-key-base64
```
**Optional - Initial admin user:**
```env
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=SecurePassword123!
ADMIN_USERNAME=admin
ADMIN_DISPLAY_NAME=Administrator
```
**Password requirements:**
```env
PASSWORD_MIN_LENGTH=8
PASSWORD_REQUIRE_UPPERCASE=true
PASSWORD_REQUIRE_LOWERCASE=true
PASSWORD_REQUIRE_NUMBER=true
PASSWORD_REQUIRE_SPECIAL=false
```
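For illustration, the flags above could be enforced roughly as follows. This is a hypothetical sketch; the real validator lives in the backend user service and may apply different rules:

```shell
# Hypothetical enforcement of the password-policy flags listed above.
PASSWORD_MIN_LENGTH=8
PASSWORD_REQUIRE_UPPERCASE=true
PASSWORD_REQUIRE_LOWERCASE=true
PASSWORD_REQUIRE_NUMBER=true
PASSWORD_REQUIRE_SPECIAL=false

check_password() {
  pw=$1
  # Length check first, then each enabled character-class check.
  [ "${#pw}" -ge "$PASSWORD_MIN_LENGTH" ] || return 1
  if [ "$PASSWORD_REQUIRE_UPPERCASE" = true ]; then
    printf '%s' "$pw" | grep -q '[A-Z]' || return 1
  fi
  if [ "$PASSWORD_REQUIRE_LOWERCASE" = true ]; then
    printf '%s' "$pw" | grep -q '[a-z]' || return 1
  fi
  if [ "$PASSWORD_REQUIRE_NUMBER" = true ]; then
    printf '%s' "$pw" | grep -q '[0-9]' || return 1
  fi
  if [ "$PASSWORD_REQUIRE_SPECIAL" = true ]; then
    printf '%s' "$pw" | grep -q '[^A-Za-z0-9]' || return 1
  fi
  return 0
}

check_password 'SecurePassword123' && echo "accepted"
check_password 'short' || echo "rejected"
```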
**Session duration:**
```env
SESSION_DURATION_HOURS=24
```
## 📝 Notes
### JIRA_AUTH Settings
- `JIRA_PAT` can be removed from global env - users configure their own PAT in settings
- `JIRA_OAUTH_CLIENT_ID` and `JIRA_OAUTH_CLIENT_SECRET` are still needed for OAuth flow
- `JIRA_HOST` and `JIRA_SCHEMA_ID` are still needed (infrastructure settings)
### AI API Keys
- `ANTHROPIC_API_KEY` can be removed from global env - users configure their own keys
- `OPENAI_API_KEY` can be removed from global env - users configure their own keys
- `TAVILY_API_KEY` can be removed from global env - users configure their own keys
- These are now stored per-user in the `user_settings` table (encrypted)
### Authentication Flow
1. On first run, migrations create database tables
2. If `ADMIN_EMAIL` and `ADMIN_PASSWORD` are set, initial admin user is created
3. Once users exist, authentication is automatically required
4. Users can log in with email/password (local auth) or OAuth (if configured)
5. User menu shows logged-in user with links to Profile and Settings
6. Logout is available for all authenticated users
## 🚀 Next Steps
1. Set `LOCAL_AUTH_ENABLED=true` in environment
2. Configure SMTP settings for email functionality
3. Generate encryption key: `openssl rand -base64 32`
4. Set initial admin credentials (optional)
5. Run the application - migrations will run automatically
6. Log in with admin account
7. Create additional users via User Management
8. Configure roles and permissions as needed
## ⚠️ Important
- Once users exist in the database, authentication is **automatically required**
- Service account mode only works if no users exist AND local auth is not enabled
- All API routes are protected - unauthenticated requests return 401
- User-specific settings (Jira PAT, AI keys) are encrypted at rest

View File

@@ -1,207 +0,0 @@
# Azure Container Registry - Domain Name Label Scope
## What is Domain Name Label Scope?
**Domain Name Label (DNL) Scope** is an Azure Container Registry security feature that prevents someone else from reusing the same DNS name after your registry is deleted (subdomain takeover prevention).
## Options
### 1. **Unsecure** (Recommended for a simple setup) ⭐
**DNS format:** `registryname.azurecr.io`
**Example:**
- Registry name: `zuyderlandcmdbacr`
- DNS name: `zuyderlandcmdbacr.azurecr.io`
**Pros:**
- ✅ Simple and predictable
- ✅ No hash in the name
- ✅ Easy to remember and configure
**Cons:**
- ❌ Less security (but usually sufficient for internal tools)
**When to use:**
- ✅ Simple setup
- ✅ Internal/corporate environment
- ✅ You want a predictable DNS name
---
### 2. **Resource Group Reuse** (Recommended for security) 🔒
**DNS format:** `registryname-hash.azurecr.io`
**Example:**
- Registry name: `zuyderlandcmdbacr`
- DNS name: `zuyderlandcmdbacr-abc123.azurecr.io` (with a unique hash)
**Pros:**
- ✅ Extra security layer
- ✅ Consistent within the resource group
- ✅ Prevents subdomain takeover
**Cons:**
- ❌ Hash in the name (less predictable)
- ❌ All configurations must be updated with the full DNS name
**When to use:**
- ✅ Production environments
- ✅ Security matters
- ✅ You want extra protection
---
### 3. **Subscription Reuse**
**DNS format:** `registryname-hash.azurecr.io` (hash based on the subscription)
**When to use:**
- You have multiple resource groups within the same subscription
- You want consistency within the subscription
---
### 4. **Tenant Reuse**
**DNS format:** `registryname-hash.azurecr.io` (hash based on the tenant)
**When to use:**
- You have multiple subscriptions within the same tenant
- You want consistency within the tenant
---
### 5. **No Reuse**
**DNS format:** `registryname-uniquehash.azurecr.io` (always a unique hash)
**When to use:**
- Maximum security is required
- You want zero risk of DNS conflicts
---
## 🎯 Recommendation for Your Situation
**For Zuyderland CMDB GUI (20 users, corporate environment):**
### Option A: **"Unsecure"** (Recommended) ⭐
**Why:**
- ✅ Simplest setup
- ✅ Predictable DNS name
- ✅ No configuration changes needed
- ✅ Sufficient for an internal corporate tool
**The DNS name becomes:** `zuyderlandcmdbacr.azurecr.io`
**Configuration:**
```yaml
# azure-pipelines.yml
acrName: 'zuyderlandcmdbacr' # Simple, without a hash
```
---
### Option B: **"Resource Group Reuse"** (If you want extra security) 🔒
**Why:**
- ✅ Extra security layer
- ✅ Prevents subdomain takeover
- ✅ Consistent within the resource group
**The DNS name becomes:** `zuyderlandcmdbacr-abc123.azurecr.io` (with a hash)
**⚠️ Important:** You must then update all configurations!
**Configuration changes needed:**
```yaml
# azure-pipelines.yml
acrName: 'zuyderlandcmdbacr-abc123' # With hash!
```
```yaml
# docker-compose.prod.acr.yml
image: zuyderlandcmdbacr-abc123.azurecr.io/zuyderland-cmdb-gui/backend:latest
```
```bash
# scripts/build-and-push-azure.sh
REGISTRY="zuyderlandcmdbacr-abc123.azurecr.io" # With hash!
```
---
## ⚠️ Important Warnings
### 1. **Permanent Choice**
The DNL Scope choice is **permanent** and **cannot be changed** after the registry has been created!
### 2. **No Dashes in the Registry Name**
If you use a DNL Scope with a hash, you must **not use dashes (`-`)** in the registry name, because the hash itself uses a dash as the separator.
**Good:** `zuyderlandcmdbacr`
**Wrong:** `zuyderland-cmdb-acr` (dashes conflict with the hash)
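The no-dash rule above is easy to check up front; a small sketch:

```shell
# Sketch: reject registry names containing dashes when a hashed
# DNL scope is planned (the hash separator is itself a dash).
validate_registry_name() {
  case "$1" in
    *-*) echo "invalid: '$1' contains a dash" >&2; return 1 ;;
    *)   echo "ok: '$1'"; return 0 ;;
  esac
}

validate_registry_name zuyderlandcmdbacr          # ok
validate_registry_name zuyderland-cmdb-acr || true  # invalid
```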
### 3. **Configuration Updates**
If you use a hash, you must **update all configurations** with the full DNS name (including the hash).
---
## 📋 Checklist
### If you choose "Unsecure":
- [ ] Registry name without dashes (e.g. `zuyderlandcmdbacr`)
- [ ] The DNS name becomes: `zuyderlandcmdbacr.azurecr.io`
- [ ] No configuration changes needed
- [ ] Use `acrName: 'zuyderlandcmdbacr'` in the pipeline
### If you choose "Resource Group Reuse":
- [ ] Registry name without dashes (e.g. `zuyderlandcmdbacr`)
- [ ] Note the full DNS name after creation (with hash)
- [ ] Update `azure-pipelines.yml` with the full DNS name
- [ ] Update `docker-compose.prod.acr.yml` with the full DNS name
- [ ] Update `scripts/build-and-push-azure.sh` with the full DNS name
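The configuration updates in that checklist can be scripted. A hedged sketch, assuming the plain registry host appears verbatim in the listed files; the hash `abc123` is a placeholder for the real "Login server" value:

```shell
# Sketch: substitute the plain registry host with the full hashed
# DNS name across the configs listed above. 'abc123' is hypothetical;
# use the actual "Login server" value shown in the Azure Portal.
OLD_HOST="zuyderlandcmdbacr.azurecr.io"
NEW_HOST="zuyderlandcmdbacr-abc123.azurecr.io"

for f in azure-pipelines.yml docker-compose.prod.acr.yml scripts/build-and-push-azure.sh; do
  if [ -f "$f" ]; then
    # -i.bak keeps a backup of each edited file.
    sed -i.bak "s|${OLD_HOST}|${NEW_HOST}|g" "$f"
  fi
done
```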
---
## 🔍 Finding the DNS Name
After creating the ACR, you can find the DNS name:
**Via the Azure Portal:**
1. Go to your Container Registry
2. Click **"Overview"**
3. The **"Login server"** is your DNS name
**Via the Azure CLI:**
```bash
az acr show --name zuyderlandcmdbacr --query loginServer -o tsv
```
**Example output:**
- Unsecure: `zuyderlandcmdbacr.azurecr.io`
- With hash: `zuyderlandcmdbacr-abc123.azurecr.io`
---
## 💡 My Recommendation
**For your situation (corporate tool, 20 users):**
Choose **"Unsecure"** because:
1. ✅ Simplest setup
2. ✅ No configuration changes needed
3. ✅ Sufficient security for an internal tool
4. ✅ Predictable DNS name
If you need more security later, you can always create a new registry with a different scope (but then you will have to migrate everything).
---
## 📚 More Information
- [Azure Container Registry DNL Scope Documentation](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal)
- [Subdomain Takeover Prevention](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-security)
View File

@@ -1,205 +0,0 @@
# Azure Container Registry - Role Assignment Permissions Mode
## 🎯 Recommendation for Your Situation
**For Zuyderland CMDB GUI (20 users, corporate tool, production):**
### ✅ **RBAC Registry Permissions** (Recommended) ⭐
**Why:**
- ✅ Easier to manage
- ✅ Sufficient for your use case
- ✅ Less complexity
- ✅ The default choice for most scenarios
---
## 📊 Comparing the Options
### Option 1: **RBAC Registry Permissions** ⭐ **RECOMMENDED**
**How it works:**
- Permissions are set at the **registry level**
- All repositories within the registry share the same permissions
- Users have access to all repositories or to none
**Pros:**
- ✅ **Simple** - Less complexity
- ✅ **Easy to manage** - One set of permissions for the entire registry
- ✅ **Sufficient for most scenarios** - Perfect for your situation
- ✅ **Default choice** - The most commonly used option
**Cons:**
- ❌ Less flexible - Permissions cannot be set per repository
- ❌ All repositories share the same access
**When to use:**
- ✅ **Your situation** - 20 users, corporate tool
- ✅ Small to medium-sized teams
- ✅ All repositories have the same access requirements
- ✅ You want a simple permission structure
**Example:**
- All developers have access to all repositories
- All CI/CD pipelines have access to all repositories
- No per-repository differences needed
---
### Option 2: **RBAC Registry + ABAC Repository Permissions**
**How it works:**
- Permissions at the **registry level** (RBAC)
- **Additional** permissions at the **repository level** (ABAC - Attribute-Based Access Control)
- Different permissions can be set per repository
**Pros:**
- ✅ **More flexible** - Per-repository permissions are possible
- ✅ **Granular control** - Different teams can own different repositories
- ✅ **Enterprise features** - For complex organizations
**Cons:**
- ❌ **More complex** - More configuration required
- ❌ **Harder to manage** - Multiple permission levels
- ❌ **More overhead** - More time needed for setup and maintenance
**When to use:**
- ✅ Large organizations with multiple teams
- ✅ Different repositories have different access requirements
- ✅ Compliance requirements that demand granular control
- ✅ Multi-tenant scenarios
**Example:**
- Team A only has access to repository A
- Team B only has access to repository B
- The CI/CD pipeline has access to all repositories
- External partners only have access to specific repositories
---
## 🔍 Analysis of Your Situation
**Your setup:**
- 2 repositories: `zuyderland-cmdb-gui/backend` and `zuyderland-cmdb-gui/frontend`
- 20 users (small team)
- Corporate tool (internal users)
- Production environment
**Permission requirements:**
- ✅ All team members have access to both repositories
- ✅ The CI/CD pipeline has access to both repositories
- ✅ No per-repository differences needed
- ✅ Simple administration preferred
**Conclusion:** **RBAC Registry Permissions is a perfect fit!**
---
## 📋 Checklist: Which Choice?
### Choose **RBAC Registry Permissions** if:
- [x] You have <50 users ✅
- [x] All repositories share the same access ✅
- [x] You want simple administration ✅
- [x] You don't need per-repository differences ✅
- [x] You have a small to medium-sized team ✅
**→ Your situation: ✅ Choose RBAC Registry Permissions!**
### Choose **RBAC Registry + ABAC Repository Permissions** if:
- [ ] You have >100 users
- [ ] Different repositories need different access
- [ ] You need granular control
- [ ] You have a multi-tenant scenario
- [ ] You have compliance requirements that demand granular control
---
## 🔄 Can I Switch Later?
**⚠️ Important:**
- This choice is **permanent** and **cannot be changed** after the registry has been created!
- If you need ABAC later, you will have to create a new registry
**Recommendation:**
- Start with **RBAC Registry Permissions** (simplest)
- If you need granular control later, consider a new registry with ABAC
- For your situation, RBAC Registry Permissions is sufficient
---
## 💡 Permission Roles (RBAC Registry Permissions)
With RBAC Registry Permissions you can assign these roles:
### **AcrPull** (Read)
- Pull images
- For: Developers, CI/CD pipelines
### **AcrPush** (Write)
- Push images
- For: CI/CD pipelines, build servers
### **AcrDelete** (Delete)
- Delete images
- For: Administrators, cleanup scripts
### **Owner** (Full control)
- Everything, plus registry management
- For: Administrators
**For your situation:**
- **Developers**: `AcrPull` (pull images)
- **CI/CD pipeline**: `AcrPush` (push images)
- **Administrators**: `Owner` (full control)
---
## 🎯 My Recommendation
**For Zuyderland CMDB GUI:**
### ✅ **Choose RBAC Registry Permissions** ⭐
**Why:**
1. ✅ **Simple** - Less complexity, easier to manage
2. ✅ **Sufficient** - All repositories share the same access (which is what you need)
3. ✅ **Standard** - The most commonly used option, well documented
4. ✅ **Perfect for your situation** - 20 users, 2 repositories, corporate tool
**You do not need:**
- ❌ Per-repository permissions (all repositories share the same access)
- ❌ A complex permission structure (small team)
- ❌ Multi-tenant scenarios (corporate tool)
**Setup:**
1. Choose **RBAC Registry Permissions**
2. Assign roles to users/groups:
   - Developers → `AcrPull`
   - CI/CD → `AcrPush`
   - Admins → `Owner`
**Done!**
---
## 📚 More Information
- [Azure Container Registry RBAC](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-roles)
- [ACR Permissions Best Practices](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-best-practices)
- [ABAC Repository Permissions](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-repository-scoped-permissions)
---
## 🎯 Conclusion
**Choose: RBAC Registry Permissions**
This is the best choice for:
- ✅ 20 users
- ✅ A corporate tool
- ✅ 2 repositories (backend + frontend)
- ✅ Simple administration
- ✅ All repositories sharing the same access
You can always create a new registry with ABAC later if you need granular control, but for your situation that isn't necessary.
View File

@@ -1,246 +0,0 @@
# Azure Container Registry - Choosing a Pricing Plan
## 🎯 Recommendation for Your Situation
**For Zuyderland CMDB GUI (20 users, corporate tool, production):**
### ✅ **Basic SKU** (Recommended) ⭐
**Why:**
- ✅ Sufficient storage (10GB) for multiple versions
- ✅ Cheap (~€5/month)
- ✅ All the features you need
- ✅ Perfect for small to medium-sized teams
---
## 📊 SKU Comparison
### Basic SKU (~€5/month) ⭐ **RECOMMENDED**
**Includes:**
- ✅ **10GB storage** - Plenty for backend + frontend images with multiple versions
- ✅ **1GB/day webhook throughput** - Sufficient for CI/CD
- ✅ **Unlimited pulls** - No extra cost for image pulls
- ✅ **Admin user enabled** - For development/production
- ✅ **RBAC support** - Role-based access control
- ✅ **Content trust** - Image signing support
**Limitations:**
- ❌ No geo-replication
- ❌ No security (vulnerability) scanning
- ❌ No content trust storage
**When to use:**
- ✅ **Your situation** - 20 users, corporate tool
- ✅ Development and production environments
- ✅ Small to medium-sized teams
- ✅ Budget-conscious deployments
**Example cost:**
- 2 images (backend + frontend)
- ~10 versions per image
- ~500MB per image = ~10GB total
- **Cost: ~€5/month** (storage only, no extra pull costs)
---
### Standard SKU (~€20/month)
**Includes (everything in Basic, plus):**
- ✅ **100GB storage** - For large deployments
- ✅ **10GB/day webhook throughput** - For high CI/CD volumes
- ✅ **Geo-replication** - Replicate images to multiple regions
- ✅ **Content trust storage** - For image signing
**Extra features:**
- ✅ **Better performance** - Faster pulls for geo-replicated images
- ✅ **Disaster recovery** - Images available in multiple regions
**When to use:**
- ✅ Large deployments (>50GB of images)
- ✅ Multi-region deployments needed
- ✅ High CI/CD volumes (>1GB/day)
- ✅ Disaster recovery requirements
**For your situation:** ❌ **Not needed** - Basic is sufficient
---
### Premium SKU (~€50/month)
**Includes (everything in Standard, plus):**
- ✅ **500GB storage** - For very large deployments
- ✅ **50GB/day webhook throughput** - For enterprise CI/CD
- ✅ **Security scanning** - Automatic vulnerability scanning
- ✅ **Advanced security features** - Firewall rules, private endpoints
- ✅ **Dedicated throughput** - Guaranteed performance
**Extra features:**
- ✅ **Image vulnerability scanning** - Automatic scanning for security issues
- ✅ **Private endpoints** - Fully private connectivity
- ✅ **Firewall rules** - Network-level security
**When to use:**
- ✅ Enterprise deployments
- ✅ Security compliance requirements (ISO 27001, etc.)
- ✅ Very large deployments (>100GB)
- ✅ Multi-tenant scenarios
**For your situation:** ❌ **Not needed** - Overkill for 20 users
---
## 💰 Cost Breakdown
### Basic SKU (Recommended) ⭐
**Monthly cost:**
- **Storage**: €0.167 per GB/month
- **10GB storage**: ~€1.67/month
- **Base fee**: ~€3-4/month
- **Total**: ~€5/month
**Example for your situation:**
- Backend image: ~200MB
- Frontend image: ~50MB
- 10 versions per image: ~2.5GB
- **Well within the 10GB limit** ✅
**Yearly cost:** ~€60/year
---
### Standard SKU
**Monthly cost:**
- **Storage**: €0.167 per GB/month (first 100GB)
- **100GB storage**: ~€16.70/month
- **Base fee**: ~€3-4/month
- **Total**: ~€20/month
**Yearly cost:** ~€240/year
**For your situation:** ❌ **Too expensive** - You only use ~2.5GB
---
### Premium SKU
**Monthly cost:**
- **Storage**: €0.167 per GB/month (first 500GB)
- **500GB storage**: ~€83.50/month
- **Base fee**: ~€16.50/month
- **Total**: ~€50-100/month (depending on storage)
**Yearly cost:** ~€600-1200/year
**For your situation:** ❌ **Far too expensive** - Not needed
---
## 📈 When to Upgrade to Standard/Premium?
### Upgrade to Standard if:
- ✅ You have >50GB of images
- ✅ You need multi-region deployment
- ✅ You need >1GB/day webhook throughput
- ✅ You need disaster recovery
### Upgrade to Premium if:
- ✅ You need security scanning (compliance)
- ✅ You have >100GB of images
- ✅ You need private endpoints
- ✅ You need enterprise security features
**For your situation:** Start with **Basic**, upgrade later if needed.
---
## 🔄 Upgrade/Downgrade
**Good news:**
- ✅ You can always upgrade (Basic → Standard → Premium)
- ✅ You can downgrade (Premium → Standard → Basic)
- ⚠️ **Note**: On a downgrade you may lose data if you exceed the storage limit
**Recommendation:**
- Start with **Basic**
- Monitor storage usage
- Upgrade only when you really need the extra features
---
## 📋 Checklist: Which SKU?
### Choose Basic if:
- [x] You have <50GB of images ✅
- [x] You have <20 users ✅
- [x] You don't need geo-replication ✅
- [x] You don't need security scanning ✅
- [x] You are budget-conscious ✅
**→ Your situation: ✅ Choose Basic!**
### Choose Standard if:
- [ ] You have >50GB of images
- [ ] You need multi-region deployment
- [ ] You need disaster recovery
- [ ] You need >1GB/day webhook throughput
### Choose Premium if:
- [ ] You need security scanning (compliance)
- [ ] You have >100GB of images
- [ ] You need private endpoints
- [ ] You need enterprise security features
---
## 💡 My Recommendation
**For Zuyderland CMDB GUI:**
### ✅ **Choose the Basic SKU** ⭐
**Why:**
1. ✅ **Sufficient storage** - 10GB is plenty for your 2 images with multiple versions
2. ✅ **Cost-effective** - ~€5/month vs €20-50/month
3. ✅ **All the features you need** - RBAC, content trust, unlimited pulls
4. ✅ **Simple** - No complex configuration needed
5. ✅ **Upgrade possible** - You can always upgrade later if needed
**Estimated storage usage:**
- Backend: ~200MB × 10 versions = ~2GB
- Frontend: ~50MB × 10 versions = ~0.5GB
- **Total: ~2.5GB** (well within the 10GB limit)
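The estimate is plain arithmetic and can be reproduced; image sizes here are the approximations used above, not measured values:

```shell
# Rough storage estimate against the Basic SKU limit.
BACKEND_MB=200; FRONTEND_MB=50; VERSIONS=10; LIMIT_GB=10

total_mb=$(( (BACKEND_MB + FRONTEND_MB) * VERSIONS ))  # 2500 MB
total_gb=$(( total_mb / 1024 ))                        # ~2 GB
echo "estimated: ${total_mb} MB (~${total_gb} GB) of ${LIMIT_GB} GB"
[ "$total_gb" -lt "$LIMIT_GB" ] && echo "fits within the Basic SKU"
```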
**Cost:**
- **Monthly**: ~€5
- **Yearly**: ~€60
- **Cost-effective** for your use case
---
## 🎯 Conclusion
**Choose: Basic SKU**
This is the best choice for:
- ✅ 20 users
- ✅ A corporate tool
- ✅ A production environment
- ✅ Budget-conscious deployments
- ✅ A simple setup
You can always upgrade to Standard or Premium later if you:
- Need more storage
- Need geo-replication
- Need security scanning
---
## 📚 More Information
- [Azure Container Registry Pricing](https://azure.microsoft.com/en-us/pricing/details/container-registry/)
- [ACR SKU Comparison](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-skus)
- [ACR Storage Limits](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-skus#sku-features-and-limits)
View File

@@ -1,279 +0,0 @@
# Azure App Service Deployment - Step-by-Step Guide 🚀
A complete guide for deploying the Zuyderland CMDB GUI to Azure App Service.
## 📋 Prerequisites
- Azure CLI installed and configured (`az login`)
- Docker images in ACR: `zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest` and `frontend:latest`
- Azure DevOps pipeline working (images are built automatically)
---
## 🎯 Quick Start (15 minutes)
### Step 1: Resource Group
```bash
az group create \
  --name rg-cmdb-gui-prod \
  --location westeurope
```
### Step 2: App Service Plan
```bash
az appservice plan create \
  --name plan-cmdb-gui-prod \
  --resource-group rg-cmdb-gui-prod \
  --sku B1 \
  --is-linux
```
### Step 3: Web Apps
```bash
# Backend
az webapp create \
  --name cmdb-backend-prod \
  --resource-group rg-cmdb-gui-prod \
  --plan plan-cmdb-gui-prod \
  --deployment-container-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest

# Frontend
az webapp create \
  --name cmdb-frontend-prod \
  --resource-group rg-cmdb-gui-prod \
  --plan plan-cmdb-gui-prod \
  --deployment-container-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest
```
### Step 4: ACR Authentication
```bash
# Enable managed identities
az webapp identity assign --name cmdb-backend-prod --resource-group rg-cmdb-gui-prod
az webapp identity assign --name cmdb-frontend-prod --resource-group rg-cmdb-gui-prod

# Get the principal IDs
BACKEND_PRINCIPAL_ID=$(az webapp identity show --name cmdb-backend-prod --resource-group rg-cmdb-gui-prod --query principalId -o tsv)
FRONTEND_PRINCIPAL_ID=$(az webapp identity show --name cmdb-frontend-prod --resource-group rg-cmdb-gui-prod --query principalId -o tsv)

# Get the ACR resource ID (replace <acr-resource-group> with your ACR resource group)
ACR_ID=$(az acr show --name zdlas --query id -o tsv)

# Grant AcrPull permissions
az role assignment create --assignee $BACKEND_PRINCIPAL_ID --role AcrPull --scope $ACR_ID
az role assignment create --assignee $FRONTEND_PRINCIPAL_ID --role AcrPull --scope $ACR_ID

# Configure container settings
az webapp config container set \
  --name cmdb-backend-prod \
  --resource-group rg-cmdb-gui-prod \
  --docker-custom-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest \
  --docker-registry-server-url https://zdlas.azurecr.io

az webapp config container set \
  --name cmdb-frontend-prod \
  --resource-group rg-cmdb-gui-prod \
  --docker-custom-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest \
  --docker-registry-server-url https://zdlas.azurecr.io
```
### Step 5: Environment Variables
```bash
# Backend (replace with your own values)
az webapp config appsettings set \
  --name cmdb-backend-prod \
  --resource-group rg-cmdb-gui-prod \
  --settings \
    NODE_ENV=production \
    PORT=3001 \
    JIRA_BASE_URL=https://jira.zuyderland.nl \
    JIRA_SCHEMA_ID=your-schema-id \
    JIRA_PAT=your-pat-token \
    SESSION_SECRET=$(openssl rand -hex 32) \
    FRONTEND_URL=https://cmdb-frontend-prod.azurewebsites.net

# Frontend
az webapp config appsettings set \
  --name cmdb-frontend-prod \
  --resource-group rg-cmdb-gui-prod \
  --settings \
    VITE_API_URL=https://cmdb-backend-prod.azurewebsites.net/api
```
### Step 6: Start the Apps
```bash
az webapp start --name cmdb-backend-prod --resource-group rg-cmdb-gui-prod
az webapp start --name cmdb-frontend-prod --resource-group rg-cmdb-gui-prod
```
### Step 7: Test
```bash
# Health check
curl https://cmdb-backend-prod.azurewebsites.net/api/health

# Frontend
curl https://cmdb-frontend-prod.azurewebsites.net
```
**🎉 Your application is now live!**
- Frontend: `https://cmdb-frontend-prod.azurewebsites.net`
- Backend API: `https://cmdb-backend-prod.azurewebsites.net/api`
---
## 🔐 Azure Key Vault Setup (Recommended)
For production: use Azure Key Vault for secrets.
### Step 1: Create the Key Vault
```bash
az keyvault create \
  --name kv-cmdb-gui-prod \
  --resource-group rg-cmdb-gui-prod \
  --location westeurope \
  --sku standard
```
### Step 2: Add Secrets
```bash
az keyvault secret set --vault-name kv-cmdb-gui-prod --name JiraPat --value "your-token"
az keyvault secret set --vault-name kv-cmdb-gui-prod --name SessionSecret --value "$(openssl rand -hex 32)"
az keyvault secret set --vault-name kv-cmdb-gui-prod --name JiraSchemaId --value "your-schema-id"
```
### Step 3: Grant Access
```bash
az keyvault set-policy \
  --name kv-cmdb-gui-prod \
  --object-id $BACKEND_PRINCIPAL_ID \
  --secret-permissions get list
```
### Step 4: Configure App Settings with Key Vault References
```bash
az webapp config appsettings set \
  --name cmdb-backend-prod \
  --resource-group rg-cmdb-gui-prod \
  --settings \
    JIRA_PAT="@Microsoft.KeyVault(SecretUri=https://kv-cmdb-gui-prod.vault.azure.net/secrets/JiraPat/)" \
    SESSION_SECRET="@Microsoft.KeyVault(SecretUri=https://kv-cmdb-gui-prod.vault.azure.net/secrets/SessionSecret/)" \
    JIRA_SCHEMA_ID="@Microsoft.KeyVault(SecretUri=https://kv-cmdb-gui-prod.vault.azure.net/secrets/JiraSchemaId/)"
```
---
## 📊 Monitoring Setup
### Application Insights
```bash
# Create Application Insights
az monitor app-insights component create \
  --app cmdb-gui-prod \
  --location westeurope \
  --resource-group rg-cmdb-gui-prod \
  --application-type web

# Get the instrumentation key
INSTRUMENTATION_KEY=$(az monitor app-insights component show \
  --app cmdb-gui-prod \
  --resource-group rg-cmdb-gui-prod \
  --query instrumentationKey -o tsv)

# Configure app settings
az webapp config appsettings set \
  --name cmdb-backend-prod \
  --resource-group rg-cmdb-gui-prod \
  --settings \
    APPINSIGHTS_INSTRUMENTATIONKEY=$INSTRUMENTATION_KEY
```
---
## 🔄 Deploying Updates
### Option 1: Manual (Simple)
```bash
# Restart the web apps (pulls the new latest image)
az webapp restart --name cmdb-backend-prod --resource-group rg-cmdb-gui-prod
az webapp restart --name cmdb-frontend-prod --resource-group rg-cmdb-gui-prod
```
### Option 2: Deployment Slots (Zero-Downtime)
```bash
# Create a staging slot
az webapp deployment slot create \
  --name cmdb-backend-prod \
  --resource-group rg-cmdb-gui-prod \
  --slot staging

# Swap staging into production
az webapp deployment slot swap \
  --name cmdb-backend-prod \
  --resource-group rg-cmdb-gui-prod \
  --slot staging \
  --target-slot production
```
---
## 🛠️ Troubleshooting
### Check Logs
```bash
# Live logs
az webapp log tail --name cmdb-backend-prod --resource-group rg-cmdb-gui-prod

# Download logs
az webapp log download --name cmdb-backend-prod --resource-group rg-cmdb-gui-prod --log-file logs.zip
```
### Check Status
```bash
az webapp show --name cmdb-backend-prod --resource-group rg-cmdb-gui-prod --query state
```
### Restart the App
```bash
az webapp restart --name cmdb-backend-prod --resource-group rg-cmdb-gui-prod
```
---
## 📚 More Information
- **Deployment Advice**: `docs/DEPLOYMENT-ADVICE.md`
- **Quick Deployment Guide**: `docs/QUICK-DEPLOYMENT-GUIDE.md`
- **Production Deployment**: `docs/PRODUCTION-DEPLOYMENT.md`
---
## ✅ Checklist
- [ ] Resource group created
- [ ] App Service plan created
- [ ] Web apps created
- [ ] ACR authentication configured
- [ ] Environment variables set
- [ ] Key Vault configured (optional)
- [ ] Application Insights enabled
- [ ] Health checks pass
- [ ] Team informed
**Good luck! 🚀**

View File

@@ -1,451 +0,0 @@
# Azure Container Registry - Docker Images Build & Push Guide
This guide describes how to build Docker images and push them to Azure Container Registry (ACR) for the Zuyderland CMDB GUI application.
## 📋 Table of Contents
1. [Azure Container Registry Setup](#azure-container-registry-setup)
2. [Local Build & Push](#local-build--push)
3. [Azure DevOps Pipeline](#azure-devops-pipeline)
4. [Docker Compose Configuration](#docker-compose-configuration)
5. [Best Practices](#best-practices)
---
## 🔧 Azure Container Registry Setup
### 1. Create the Azure Container Registry
If you don't have an ACR yet, create one via the Azure Portal or the Azure CLI:
```bash
# Resource group (if it doesn't exist yet)
az group create --name rg-cmdb-gui --location westeurope

# Create the Azure Container Registry
az acr create \
  --resource-group rg-cmdb-gui \
  --name zuyderlandcmdbacr \
  --sku Basic \
  --admin-enabled true
```
**ACR SKU options:**
- **Basic**: Suitable for development/test (~€5/month)
- **Standard**: For production with geo-replication (~€20/month)
- **Premium**: For enterprise with security features (~€50/month)
### 2. Registry URL
After creation, your registry is available at:
```
<acr-name>.azurecr.io
```
For example: `zuyderlandcmdbacr.azurecr.io`
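Fully qualified image references combine this login server with a repository path and a tag; a small sketch using the names from this guide:

```shell
# Assemble the image references used throughout this guide from
# the ACR name, repository name, and version.
ACR_NAME="zuyderlandcmdbacr"
REGISTRY="${ACR_NAME}.azurecr.io"
REPO_NAME="zuyderland-cmdb-gui"
VERSION="1.0.0"

BACKEND_IMAGE="${REGISTRY}/${REPO_NAME}/backend:${VERSION}"
FRONTEND_IMAGE="${REGISTRY}/${REPO_NAME}/frontend:${VERSION}"
echo "$BACKEND_IMAGE"
echo "$FRONTEND_IMAGE"
```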
### 3. Authentication
ACR ondersteunt meerdere authenticatiemethoden:
**A) Admin Credentials (Eenvoudig, voor development)**
```bash
# Admin credentials ophalen
az acr credential show --name zuyderlandcmdbacr
# Login met Docker
az acr login --name zuyderlandcmdbacr
# OF
docker login zuyderlandcmdbacr.azurecr.io -u <admin-username> -p <admin-password>
```
**B) Azure Service Principal (Aanbevolen voor CI/CD)**
```bash
# Service Principal aanmaken
az ad sp create-for-rbac --name "zuyderland-cmdb-acr-sp" --role acrpull --scopes /subscriptions/<subscription-id>/resourceGroups/rg-cmdb-gui/providers/Microsoft.ContainerRegistry/registries/zuyderlandcmdbacr
# Gebruik de output credentials in CI/CD
```
**C) Managed Identity (Best voor Azure services)**
- Gebruik Managed Identity voor Azure DevOps, App Service, etc.
- Configureer via Azure Portal → ACR → Access Control (IAM)
---
## 🐳 Lokale Build & Push
### Optie 1: Met Script (Aanbevolen)
Gebruik het `build-and-push-azure.sh` script:
```bash
# Maak script uitvoerbaar
chmod +x scripts/build-and-push-azure.sh
# Build en push (gebruikt 'latest' als versie)
./scripts/build-and-push-azure.sh
# Build en push met specifieke versie
./scripts/build-and-push-azure.sh 1.0.0
```
**Environment Variables:**
```bash
export ACR_NAME="zuyderlandcmdbacr"
export REPO_NAME="zuyderland-cmdb-gui"
./scripts/build-and-push-azure.sh 1.0.0
```
### Optie 2: Handmatig met Docker Commands
```bash
# Login
az acr login --name zuyderlandcmdbacr
# Set variabelen
ACR_NAME="zuyderlandcmdbacr"
REGISTRY="${ACR_NAME}.azurecr.io"
REPO_NAME="zuyderland-cmdb-gui"
VERSION="1.0.0"
# Build backend
docker build -t ${REGISTRY}/${REPO_NAME}/backend:${VERSION} \
-t ${REGISTRY}/${REPO_NAME}/backend:latest \
-f backend/Dockerfile.prod ./backend
# Build frontend
docker build -t ${REGISTRY}/${REPO_NAME}/frontend:${VERSION} \
-t ${REGISTRY}/${REPO_NAME}/frontend:latest \
-f frontend/Dockerfile.prod ./frontend
# Push images
docker push ${REGISTRY}/${REPO_NAME}/backend:${VERSION}
docker push ${REGISTRY}/${REPO_NAME}/backend:latest
docker push ${REGISTRY}/${REPO_NAME}/frontend:${VERSION}
docker push ${REGISTRY}/${REPO_NAME}/frontend:latest
```
---
## 🚀 Azure DevOps Pipeline
### 1. Create a Service Connection
In Azure DevOps:
1. **Project Settings** → **Service connections** → **New service connection**
2. Choose **Docker Registry**
3. Choose **Azure Container Registry**
4. Select your Azure subscription and ACR
5. Name it `zuyderland-cmdb-acr-connection`
### 2. Pipeline Configuration
The project already contains an `azure-pipelines.yml` file. Configure it in Azure DevOps:
1. **Pipelines** → **New pipeline**
2. Choose your repository (Azure Repos)
3. Choose **Existing Azure Pipelines YAML file**
4. Select `azure-pipelines.yml`
5. Review and run
### 3. Adjust the Pipeline Variables
Adjust the variables in `azure-pipelines.yml` to match your environment:
```yaml
variables:
  acrName: 'zuyderlandcmdbacr' # Your ACR name
  repositoryName: 'zuyderland-cmdb-gui'
  dockerRegistryServiceConnection: 'zuyderland-cmdb-acr-connection'
```
### 4. Automatic Triggers
The pipeline triggers automatically on:
- Pushes to the `main` branch
- Tags starting with `v*` (e.g. `v1.0.0`)
**Triggering manually:**
```bash
# Create and push a tag
git tag v1.0.0
git push origin v1.0.0
```
---
## 📦 Docker Compose Configuration
### Production Docker Compose with ACR
Create `docker-compose.prod.acr.yml`:
```yaml
version: '3.8'

services:
  backend:
    image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:latest
    environment:
      - NODE_ENV=production
      - PORT=3001
    env_file:
      - .env.production
    volumes:
      - backend_data:/app/data
    restart: unless-stopped
    networks:
      - internal
    healthcheck:
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:3001/health', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  frontend:
    image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:latest
    depends_on:
      - backend
    restart: unless-stopped
    networks:
      - internal
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/"]
      interval: 30s
      timeout: 10s
      retries: 3

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - nginx_cache:/var/cache/nginx
    depends_on:
      - frontend
      - backend
    restart: unless-stopped
    networks:
      - internal

volumes:
  backend_data:
  nginx_cache:

networks:
  internal:
    driver: bridge
```
### Use Specific Versions
For production deployments, pin specific versions:
```yaml
  backend:
    image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:v1.0.0
  frontend:
    image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:v1.0.0
```
### Pull and Deploy
```bash
# Log in (if needed)
az acr login --name zuyderlandcmdbacr
# Pull images
docker-compose -f docker-compose.prod.acr.yml pull
# Deploy
docker-compose -f docker-compose.prod.acr.yml up -d
# Check status
docker-compose -f docker-compose.prod.acr.yml ps
# Tail logs
docker-compose -f docker-compose.prod.acr.yml logs -f
```
---
## 🎯 Best Practices
### 1. Versioning
- **Use semantic versioning**: `v1.0.0`, `v1.0.1`, etc.
- **Also tag `latest`**: for development/CI/CD
- **Production**: use specific versions, never `latest`
```bash
# Tag a version
git tag v1.0.0
git push origin v1.0.0
# Build that version
./scripts/build-and-push-azure.sh 1.0.0
```
### 2. Security
- **Disable admin credentials** in production (use a service principal)
- **Enable Content Trust** for image signing (optional)
- **Scan images** for vulnerabilities (Azure Security Center)
```bash
# Disable the admin user
az acr update --name zuyderlandcmdbacr --admin-enabled false
```
### 3. Image Cleanup
Keep the registry tidy by pruning old image tags (manually or via a retention policy):
```bash
# List the 10 most recent tags
az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/backend --orderby time_desc --top 10
# Delete old tags (manually or via a policy)
az acr repository delete --name zuyderlandcmdbacr --image zuyderland-cmdb-gui/backend:old-tag
```
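Cleanup can also be automated with the `acr purge` container command, run as an on-demand ACR task. A sketch; the filter and retention window below are examples, not project policy:

```shell
# Run an on-demand purge task in ACR: delete backend tags older than 30 days,
# keep the 10 most recent, and also remove untagged manifests.
az acr run --registry zuyderlandcmdbacr \
  --cmd "acr purge --filter 'zuyderland-cmdb-gui/backend:.*' --ago 30d --keep 10 --untagged" \
  /dev/null
```

The same command can be scheduled with `az acr task create` if periodic cleanup is wanted.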
### 4. Multi-Stage Builds
The `Dockerfile.prod` files already use multi-stage builds to keep images small.
### 5. Build Cache
For faster builds, reuse the build cache:
```bash
# Build using the previous image as a cache source
docker build --cache-from zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:latest \
  -t zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:new-tag \
  -f backend/Dockerfile.prod ./backend
```
---
## 🔍 Troubleshooting
### Authentication Issues
```bash
# Check Azure login
az account show
# Re-login
az login
az acr login --name zuyderlandcmdbacr
# Check Docker login
cat ~/.docker/config.json
```
### Build Errors
```bash
# Build with verbose output
docker build --progress=plain -t test-image -f backend/Dockerfile.prod ./backend
# List local images
docker images | grep zuyderland-cmdb-gui
```
### Push Errors
```bash
# Check ACR connectivity
az acr check-health --name zuyderlandcmdbacr
# Check repository exists
az acr repository list --name zuyderlandcmdbacr
# View repository tags
az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/backend
```
### Azure DevOps Pipeline Errors
- Check the **service connection** permissions
- Verify the **ACR name** in the pipeline variables
- Check that the **Dockerfile paths** are correct
- Review the pipeline logs in Azure DevOps
---
## 📝 Usage Examples
### Simple Workflow
```bash
# 1. Change code and commit
git add .
git commit -m "Update feature"
git push origin main
# 2. Build and push to ACR
./scripts/build-and-push-azure.sh
# 3. Deploy (on the production server)
az acr login --name zuyderlandcmdbacr
docker-compose -f docker-compose.prod.acr.yml pull
docker-compose -f docker-compose.prod.acr.yml up -d
```
### Versioned Release
```bash
# 1. Tag the release
git tag v1.0.0
git push origin v1.0.0
# 2. Build and push that version
./scripts/build-and-push-azure.sh 1.0.0
# 3. Update docker-compose with the new version
# Edit docker-compose.prod.acr.yml: image: ...backend:v1.0.0
# 4. Deploy
docker-compose -f docker-compose.prod.acr.yml pull
docker-compose -f docker-compose.prod.acr.yml up -d
```
### Azure DevOps Automated
1. Push code to `main` → the pipeline triggers automatically
2. The pipeline builds the images and pushes them to ACR
3. Deploy manually or via a release pipeline
---
## 📚 Additional Resources
- [Azure Container Registry Documentation](https://docs.microsoft.com/en-us/azure/container-registry/)
- [Azure DevOps Docker Task](https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/build/docker)
- [ACR Best Practices](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-best-practices)
- [Docker Compose Production Guide](./PRODUCTION-DEPLOYMENT.md)
---
## 🔄 Comparison: Gitea vs Azure Container Registry
| Feature | Gitea Registry | Azure Container Registry |
|---------|---------------|-------------------------|
| **Cost** | Free (with Gitea) | €5-50/month (depending on SKU) |
| **Security** | Basic | Enterprise-grade (RBAC, scanning) |
| **CI/CD** | Gitea Actions | Azure DevOps, GitHub Actions |
| **Geo-replication** | No | Yes (Standard/Premium) |
| **Image scanning** | No | Yes (Azure Security Center) |
| **Integration** | Gitea ecosystem | Azure ecosystem (App Service, AKS, etc.) |
**Recommendation:**
- **Development/test**: Gitea Registry (free, simple)
- **Production**: Azure Container Registry (security, enterprise features)

---
# Azure Deployment - Infrastructure Summary
## Application Overview
**Zuyderland CMDB GUI** - web application for classifying and managing application components in Jira Assets.
### Technology Stack
- **Backend**: Node.js 20 (Express, TypeScript)
- **Frontend**: React 18 (Vite, TypeScript)
- **Database**: SQLite (cache layer, ~20MB, no backup needed - synced from Jira)
- **Containerization**: Docker
- **Authentication**: Jira OAuth 2.0 or Personal Access Token
- **Users**: max. 20 colleagues
---
## Infrastructure Requirements
### 1. Compute Resources
**Recommended: Azure App Service (Basic tier)**
- **App Service Plan**: B1 (1 vCPU, 1.75GB RAM) - **sufficient for 20 users**
- 2 web apps: backend + frontend (sharing the same App Service Plan)
- **Cost**: ~€15-25/month
- **Pros**: simple, managed service, sufficient for small teams
**Alternative: Azure Container Instances (ACI) - if you prefer containers**
- 2 containers: backend + frontend
- Backend: 1 vCPU, 2GB RAM
- Frontend: 0.5 vCPU, 1GB RAM
- **Cost**: ~€30-50/month
- **Con**: fewer managed features than App Service
### 2. Database & Storage
**Option A: PostgreSQL (Recommended) ⭐**
- **Azure Database for PostgreSQL**: Flexible Server Basic tier (B1ms)
- **Database**: ~20MB (current size, with room to grow)
- **Cost**: ~€20-30/month
- **Pros**: identical dev/prod stack, better concurrency, connection pooling
**Option B: SQLite (current setup)**
- **SQLite database**: ~20MB (in Azure Storage)
- **Azure Storage Account**: Standard LRS (Hot tier)
- **Cost**: ~€1-3/month
- **Cons**: limited concurrency, no connection pooling
**Logs**: ~500MB-1GB/month (Application Insights)
### 3. Networking
**Requirements:**
- **HTTPS**: SSL/TLS certificate (Let's Encrypt or Azure App Service Certificate)
- **DNS**: subdomain (e.g. `cmdb.zuyderland.nl`)
- **Firewall**: inbound ports 80/443, outbound to the Jira API
- **Load balancer**: Azure Application Gateway (optional, for HA)
**Network security:**
- Private endpoints (optional, for extra security)
- Network Security Groups (NSG)
- Azure Firewall (optional)
### 4. Secrets Management
**Azure Key Vault** for:
- `JIRA_OAUTH_CLIENT_SECRET`
- `SESSION_SECRET`
- `ANTHROPIC_API_KEY`
- `JIRA_PAT` (if used)
**Cost**: ~€1-5/month
### 5. Monitoring & Logging
**Azure Monitor:**
- Application Insights (Basic tier - free up to 5GB/month)
- Log Analytics workspace (pay-as-you-go)
- Alerts for health checks and errors
**Cost**: ~€0-20/month (often free for small apps on the Basic tier)
### 6. Backup & Disaster Recovery
**No backup required** - data is synchronized from Jira Assets, so a backup is unnecessary.
The SQLite database is a cache layer that can be rebuilt via a sync.
---
## Deployment Architecture
### Recommended: Azure App Service (Basic Tier)
**Simple setup for small teams (20 users):**
```
┌─────────────────────────────────────┐
│   Azure App Service (B1 Plan)       │
│                                     │
│  ┌──────────┐      ┌──────────┐     │
│  │ Frontend │      │ Backend  │     │
│  │ Web App  │      │ Web App  │     │
│  └──────────┘      └────┬─────┘     │
└─────────────────────────┼───────────┘
            ┌─────────────┴─────────────┐
            │                           │
    ┌───────▼──────┐        ┌──────────▼──────┐
    │ Azure Storage│        │ Azure Key Vault │
    │ (SQLite DB)  │        │    (Secrets)    │
    └──────────────┘        └─────────────────┘
    ┌──────────────┐
    │ Application  │
    │   Insights   │
    │ (Basic/FREE) │
    └──────────────┘
```
**Note**: an Application Gateway is not needed for 20 users - App Service has built-in SSL and load balancing.
---
## Security Considerations
### 1. Authentication
- **Jira OAuth 2.0**: users authenticate via Jira
- **Session management**: sessions are in-memory (consider Azure Cache for Redis in production)
### 2. Network Security
- **HTTPS only**: all traffic over HTTPS
- **CORS**: only allowed from the configured frontend URL
- **Rate limiting**: 100 requests/minute per IP (configurable)
### 3. Data Security
- **Secrets**: all secrets in Azure Key Vault
- **Database**: SQLite database in Azure Storage (encrypted at rest)
- **In transit**: TLS 1.2+ for all communication
### 4. Compliance
- **Logging**: all API calls are logged (no PII)
- **Audit trail**: changes to applications are logged
- **Data residency**: data stays in Azure West Europe (or the desired region)
---
## External Dependencies
### 1. Jira Assets API
- **Endpoint**: `https://jira.zuyderland.nl`
- **Authentication**: OAuth 2.0 or Personal Access Token
- **Rate limits**: respect the Jira API rate limits
- **Network**: outbound HTTPS to Jira (port 443)
### 2. AI API (Optional)
- **Anthropic Claude API**: for AI classification features
- **Network**: outbound HTTPS to `api.anthropic.com`
---
## Deployment Steps
### 1. Create the Azure Resources
```bash
# Resource group
az group create --name rg-cmdb-gui --location westeurope
# App Service plan (Basic B1 - sufficient for 20 users)
az appservice plan create --name plan-cmdb-gui --resource-group rg-cmdb-gui --sku B1
# Web apps (sharing the same plan saves costs)
az webapp create --name cmdb-backend --resource-group rg-cmdb-gui --plan plan-cmdb-gui
az webapp create --name cmdb-frontend --resource-group rg-cmdb-gui --plan plan-cmdb-gui
# Key Vault
az keyvault create --name kv-cmdb-gui --resource-group rg-cmdb-gui --location westeurope
# Storage account (for the SQLite database - SQLite option only)
az storage account create --name stcmdbgui --resource-group rg-cmdb-gui --location westeurope --sku Standard_LRS
```
**With PostgreSQL (recommended):**
```bash
# PostgreSQL database (Flexible Server)
az postgres flexible-server create \
  --resource-group rg-cmdb-gui \
  --name psql-cmdb-gui \
  --location westeurope \
  --admin-user cmdbadmin \
  --admin-password <secure-password-from-key-vault> \
  --sku-name Standard_B1ms \
  --tier Burstable \
  --storage-size 32 \
  --version 15
# Create the database
az postgres flexible-server db create \
  --resource-group rg-cmdb-gui \
  --server-name psql-cmdb-gui \
  --database-name cmdb
```
### 2. Configuration
- Environment variables via App Service Configuration
- Secrets via Key Vault references
- SSL certificate via App Service Certificate or Let's Encrypt
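Wiring a Key Vault secret into App Service configuration can be done with a Key Vault reference. A sketch, assuming the resource names used earlier in this guide (`cmdb-backend`, `rg-cmdb-gui`, `kv-cmdb-gui`) and a secret called `SessionSecret`:

```shell
# Give the Web App a managed identity, allow it to read secrets, then point
# an app setting at the vault via a Key Vault reference.
PRINCIPAL_ID=$(az webapp identity assign \
  --name cmdb-backend --resource-group rg-cmdb-gui --query principalId -o tsv)
az keyvault set-policy --name kv-cmdb-gui \
  --object-id "$PRINCIPAL_ID" --secret-permissions get
az webapp config appsettings set \
  --name cmdb-backend --resource-group rg-cmdb-gui \
  --settings "SESSION_SECRET=@Microsoft.KeyVault(SecretUri=https://kv-cmdb-gui.vault.azure.net/secrets/SessionSecret/)"
```

The app then reads `SESSION_SECRET` as a normal environment variable; App Service resolves the reference at startup.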
### 3. CI/CD
- **Azure DevOps Pipelines** or **GitHub Actions**
- Automatic deployment on push to the main branch
- Deployment slots for zero-downtime updates
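The slot-based zero-downtime flow looks roughly like this (app names are examples). Note that deployment slots require the Standard (S1) tier or higher, so the B1 plan recommended above would need an upgrade first:

```shell
# Create a staging slot, deploy to it, then swap it into production.
az webapp deployment slot create \
  --name cmdb-backend --resource-group rg-cmdb-gui --slot staging
# ...deploy and smoke-test the staging slot, then:
az webapp deployment slot swap \
  --name cmdb-backend --resource-group rg-cmdb-gui \
  --slot staging --target-slot production
```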
---
## Cost Estimate (Monthly)
**For 20 users - basic setup:**
**With SQLite (current setup):**
| Component | Estimate |
|-----------|----------|
| App Service Plan (B1) | €15-25 |
| Storage Account | €1-3 |
| Key Vault | €1-2 |
| Application Insights (Basic) | €0-5 |
| **Total** | **€17-35/month** |
**With PostgreSQL (recommended):**
| Component | Estimate |
|-----------|----------|
| App Service Plan (B1) | €15-25 |
| PostgreSQL Database (B1ms) | €20-30 |
| Key Vault | €1-2 |
| Application Insights (Basic) | €0-5 |
| **Total** | **€36-62/month** |
*Includes: SSL certificate (free via App Service) and basic monitoring*
**Note**: with the Basic tier and free Application Insights this can even stay under €20/month.
**Backup**: not needed - data is synchronized from Jira Assets.
---
## Questions for the Infrastructure Team
1. **DNS & domain**: can we get a subdomain? (e.g. `cmdb.zuyderland.nl`)
2. **SSL certificate**: Azure App Service Certificate or Let's Encrypt via certbot?
3. **Network**: do we go through VPN/ExpressRoute or direct internet access?
4. **Firewall rules**: which outbound access is needed? (Jira API, Anthropic API)
5. **Monitoring**: do we use the existing Azure Monitor setup or a separate workspace?
6. **Backup**: not needed - the SQLite database is a cache layer; data is synchronized from Jira Assets
7. **Disaster recovery**: data can be re-synced from Jira (no backup required)
8. **Compliance**: are there specific compliance requirements? (ISO 27001, NEN 7510)
9. **Scaling**: not needed - max. 20 users, the Basic tier is sufficient
10. **Maintenance windows**: when can we deploy updates?
---
## Next Steps
1. **Kick-off meeting**: discuss the architecture and requirements
2. **Proof of concept**: deploy to Azure App Service (test environment)
3. **Security review**: security team reviews the configuration
4. **Load testing**: test under the expected load
5. **Production deployment**: go live with monitoring
---
## Contact & Documentation
- **Application code**: [Git Repository]
- **Deployment guide**: `PRODUCTION-DEPLOYMENT.md`
- **API documentation**: `/api/config` endpoint

---
# Azure DevOps Pipeline - Repository Not Found
## 🔴 Problem: "No matching repositories were found"
If Azure DevOps cannot find your repository when creating a pipeline, try these solutions:
---
## ✅ Solution 1: Check the Repository Name
**Problem:** the repository name may not match.
**Solution:**
1. **Go to Repos** (left menu)
2. **Check the exact repository name**
   - Look at the repository you pushed
   - Note the exact name (including capitalization and spaces)
3. **In the pipeline wizard:**
   - Search for the repository by its exact name
   - Or try variations:
     - `Zuyderland CMDB GUI`
     - `zuyderland-cmdb-gui`
     - `ZuyderlandCMDBGUI`
**Your repository name should be:** `Zuyderland CMDB GUI` (with spaces)
---
## ✅ Solution 2: Check the Repository Type
**In the pipeline wizard, try the different repository types:**
1. **Azure Repos Git** (if your code is in Azure DevOps)
   - This is most likely what you need
   - Check whether your repository appears here
2. **GitHub** (if your code is in GitHub)
   - Not applicable here
3. **Other Git** (if your code is elsewhere)
   - Not applicable here
**For this situation:** choose **"Azure Repos Git"**
---
## ✅ Solution 3: Check Repository Access
**Problem:** you may not have access to the repository.
**Solution:**
1. **Go to Repos** (left menu)
2. **Check whether you can see the repository**
   - If you cannot see it, you may lack access
3. **Check permissions:**
   - Project Settings → Repositories → Security
   - Verify that your account has access
---
## ✅ Solution 4: Check the Project
**Problem:** you may be in the wrong project.
**Solution:**
1. **Check the project name** (top left)
   - It should be **"JiraAssetsCMDB"**
2. **If you are in a different project:**
   - Click the project dropdown (top left)
   - Select **"JiraAssetsCMDB"**
3. **Try creating the pipeline again**
---
## ✅ Solution 5: Refresh/Reload
**Sometimes a simple refresh helps:**
1. **Refresh the browser page** (F5 or Cmd+R)
2. **Close and reopen** the pipeline wizard
3. **Try again**
---
## ✅ Solution 6: Check Whether the Repository Exists
**Problem:** the repository may not exist in Azure DevOps.
**Solution:**
1. **Go to Repos** (left menu)
2. **Check whether your repository is visible**
   - You should see `Zuyderland CMDB GUI` (or your repo name)
3. **If the repository does not exist:**
   - First push your code to Azure DevOps
   - Or create the repository in Azure DevOps
**Check whether your code is already in Azure DevOps:**
- Go to Repos → Files
- You should see your code (e.g. `azure-pipelines.yml`, `backend/`, `frontend/`, etc.)
---
## ✅ Solution 7: Create the Repository (If It Does Not Exist)
**If the repository does not yet exist in Azure DevOps:**
### Option A: Push Code to an Existing Repository
**If the repository already exists but is empty:**
1. **Check the repository URL:**
   ```
   git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/JiraAssetsCMDB/Zuyderland%20CMDB%20GUI
   ```
2. **Push your code:**
   ```bash
   cd /Users/berthausmans/Documents/Development/zuyderland-cmdb-gui
   git push azure main
   ```
3. **Check in Azure DevOps:**
   - Go to Repos → Files
   - You should see your code
### Option B: Create a New Repository
**If the repository does not exist at all:**
1. **Go to Repos** (left menu)
2. **Click "New repository"** or the "+" icon
3. **Fill in:**
   - **Repository name**: `Zuyderland CMDB GUI`
   - **Type**: Git
4. **Create**
5. **Push your code:**
   ```bash
   cd /Users/berthausmans/Documents/Development/zuyderland-cmdb-gui
   git remote add azure git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/JiraAssetsCMDB/Zuyderland%20CMDB%20GUI
   git push azure main
   ```
---
## ✅ Solution 8: Use "Other Git" as a Workaround
**If nothing works, use "Other Git" as a temporary solution:**
1. **In the pipeline wizard:**
   - Choose **"Other Git"** (instead of "Azure Repos Git")
2. **Fill in:**
   - **Repository URL**: `git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/JiraAssetsCMDB/Zuyderland%20CMDB%20GUI`
   - Or HTTPS: `https://ZuyderlandMedischCentrum@dev.azure.com/ZuyderlandMedischCentrum/JiraAssetsCMDB/_git/Zuyderland%20CMDB%20GUI`
3. **Branch**: `main`
4. **Continue**
**⚠️ Note:** this works, but "Azure Repos Git" is the preferred option.
---
## 🔍 Diagnostic Steps
**To diagnose the problem:**
### 1. Check Whether the Repository Exists
1. Go to **Repos** (left menu)
2. Check whether you see `Zuyderland CMDB GUI`
3. Open it and check whether your code is there
### 2. Check the Repository URL
**In a terminal:**
```bash
cd /Users/berthausmans/Documents/Development/zuyderland-cmdb-gui
git remote -v
```
**You should see:**
```
azure git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/JiraAssetsCMDB/Zuyderland%20CMDB%20GUI (fetch)
azure git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/JiraAssetsCMDB/Zuyderland%20CMDB%20GUI (push)
```
### 3. Check Whether the Code Has Been Pushed
**In a terminal:**
```bash
git log azure/main --oneline -5
```
**If you see commits:** ✅ the code has been pushed
**If you get an error:** ❌ the code has not been pushed
### 4. Push the Code (If Not Pushed)
```bash
git push azure main
```
---
## 💡 Recommended Approach
**Try these in order:**
1. ✅ **Check Repos** - go to Repos and check whether your repository exists
2. ✅ **Check the project name** - make sure you are in the "JiraAssetsCMDB" project
3. ✅ **Refresh the page** - sometimes a simple refresh helps
4. ✅ **Push your code** - if the repository is empty, push your code
5. ✅ **Use "Other Git"** - as a workaround
---
## 🎯 Quick Fix (Most Likely Cause)
**The problem is most likely that the repository is empty or does not exist:**
1. **Check in Azure DevOps:**
   - Go to **Repos** → **Files**
   - Check whether you see your code (e.g. `azure-pipelines.yml`)
2. **If the repository is empty:**
   ```bash
   cd /Users/berthausmans/Documents/Development/zuyderland-cmdb-gui
   git push azure main
   ```
3. **Try creating the pipeline again**
---
## 📚 More Information
- [Azure DevOps Repositories](https://learn.microsoft.com/en-us/azure/devops/repos/)
- [Create Pipeline from Repository](https://learn.microsoft.com/en-us/azure/devops/pipelines/create-first-pipeline)
---
## 🆘 Still Having Problems?
If nothing works:
1. **Check that you are in the correct project** (JiraAssetsCMDB)
2. **Check that the repository exists** (Repos → Files)
3. **Push your code** to Azure DevOps
4. **Use "Other Git"** as a workaround
**The "Other Git" option always works**, even when the repository does not appear in the dropdown.

---
# Azure Container Registry - Do I Need to Request One?
## 🤔 Short Answer
**It depends on your deployment strategy:**
1. **Azure App Service (without containers)** → ❌ **No ACR needed**
   - Direct code deployment
   - Simpler and cheaper
   - **Recommended for this situation** (20 users)
2. **Container-based deployment** → ✅ **ACR needed** (or an alternative)
   - Azure Container Instances (ACI)
   - Azure Kubernetes Service (AKS)
   - VM with Docker Compose
---
## 📊 Deployment Options Compared
### Option 1: Azure App Service (Without Containers) ⭐ **RECOMMENDED**
**What you need:**
- ✅ Azure App Service Plan (B1) - €15-25/month
- ✅ Azure Key Vault - €1-2/month
- ✅ Database (PostgreSQL or SQLite) - €1-30/month
- ❌ **No container registry needed!**
**How it works:**
- Azure DevOps builds your code directly
- Deploys to App Service via ZIP deploy or Git
- No Docker images needed
**Pros:**
- ✅ Simpler setup
- ✅ Cheaper (no ACR cost)
- ✅ Faster deployments
- ✅ Managed service (less maintenance)
**Cons:**
- ❌ Less flexible than containers
- ❌ Platform-specific (Azure only)
**For this situation:** ✅ **this is the best option!**
---
### Option 2: Container Registry (If You Want to Use Containers)
**You have three registry choices:**
#### A) Azure Container Registry (ACR) 💰
**Cost:**
- Basic: ~€5/month
- Standard: ~€20/month
- Premium: ~€50/month
**Pros:**
- ✅ Integration with Azure services
- ✅ Security scanning (Premium)
- ✅ Geo-replication (Standard/Premium)
- ✅ RBAC integration
**Cons:**
- ❌ Extra cost
- ❌ Must be requested from IT
**When to request one:**
- If you use containers in production
- If you want Azure-native deployment
- If you need security scanning
---
#### B) Gitea Container Registry (Free) 🆓
**Cost:** free (if you already run Gitea)
**Pros:**
- ✅ No extra cost
- ✅ Already available (if Gitea supports it)
- ✅ Easy to use
**Cons:**
- ❌ Fewer features than ACR
- ❌ No security scanning
- ❌ No geo-replication
**You already have:**
- ✅ Script: `scripts/build-and-push.sh`
- ✅ Config: `docker-compose.prod.registry.yml`
- ✅ Documentation: `docs/GITEA-DOCKER-REGISTRY.md`
**When to use it:**
- ✅ Development/test environments
- ✅ If you already run Gitea with the registry enabled
- ✅ Small projects without enterprise requirements
---
#### C) Docker Hub (Free/Paid)
**Cost:**
- Free: 1 private repo, unlimited public
- Pro: $5/month for unlimited private repos
**Pros:**
- ✅ Easy to use
- ✅ Free for public images
- ✅ Available worldwide
**Cons:**
- ❌ Rate limits on the free tier
- ❌ Less Azure integration
- ❌ Security concerns (for private data)
**When to use it:**
- Development/test
- Public images
- If you do not need an Azure-native solution
---
## 🎯 Recommendation for This Situation
### Scenario 1: Simple Production Deployment (Recommended) ⭐
**Use: Azure App Service without containers**
**Why:**
- ✅ No ACR needed
- ✅ Simpler and cheaper
- ✅ Sufficient for 20 users
- ✅ Less complexity
**Steps:**
1. **Not needed:** requesting an ACR
2. **Needed:** requesting an Azure App Service Plan
3. **Adjust the pipeline:** use the Azure App Service deployment task instead of Docker
**Pipeline change:**
```yaml
# Instead of Docker build/push:
- task: AzureWebApp@1
  inputs:
    azureSubscription: 'your-subscription'
    appName: 'cmdb-backend'
    package: '$(System.DefaultWorkingDirectory)'
```
---
### Scenario 2: Container-Based Deployment
**If you want to use containers anyway:**
**Option A: use the Gitea registry (if available)**
- ✅ No request needed
- ✅ Free
- ✅ Already configured in this project
**Option B: request an ACR from IT**
- 📧 Send a request to the IT/infrastructure team
- 📋 Mention: "Azure Container Registry - Basic tier for the CMDB GUI project"
- 💰 Budget: ~€5-20/month (depending on tier)
**Request template:**
```
Subject: Azure Container Registry request - CMDB GUI project

Dear IT team,

For the Zuyderland CMDB GUI project we need an Azure Container Registry
to host Docker images.

Details:
- Project: Zuyderland CMDB GUI
- Registry name: zuyderlandcmdbacr (or per your naming convention)
- SKU: Basic (for development/production)
- Resource Group: rg-cmdb-gui
- Location: West Europe
- Purpose: hosting the backend and frontend Docker images for production deployment

Estimated cost: €5-20/month (Basic tier)

Thanks in advance!
```
---
## 📋 Decision Matrix
| Situation | Registry Needed? | Which One? | Cost |
|----------|----------------|--------|--------|
| **Azure App Service (code deploy)** | ❌ No | - | €0 |
| **Gitea registry available** | ✅ Yes | Gitea | €0 |
| **Containers + Azure-native** | ✅ Yes | ACR Basic | €5/month |
| **Containers + security scanning** | ✅ Yes | ACR Standard | €20/month |
| **Development/test only** | ✅ Yes | Docker Hub (free) | €0 |
---
## 🚀 Quick Start: What Should You Do Now?
### If you use Azure App Service (recommended):
1. ❌ **No ACR needed** - skip this step
2. ✅ Request an **Azure App Service Plan** from IT
3. ✅ Configure the pipeline for App Service deployment
4. ✅ Use the existing `azure-pipelines.yml`, adapted for App Service
### If you use containers:
1. ✅ **Check first:** is the Gitea container registry available?
   - If yes → use Gitea (free, already configured)
   - If no → request an ACR Basic from IT
2. ✅ If you request an ACR:
   - Send the request to the IT team
   - Use the request template above
   - Wait for approval
3. ✅ Configure the pipeline:
   - Update `azure-pipelines.yml` with the ACR name
   - Create a service connection in Azure DevOps
   - Test the pipeline
---
## 💡 My Recommendation
**For this situation (20 users, internal tool):**
1. **Start with Azure App Service** (without containers)
   - Simpler
   - Cheaper
   - Sufficient functionality
2. **If you need containers later:**
   - Use the Gitea registry first (if available)
   - Request an ACR if Gitea falls short
3. **Only request an ACR if:**
   - You need security scanning
   - You need geo-replication
   - You want Azure-native container deployment
---
## ❓ Questions for the IT Team
If you want to request an ACR, ask:
1. **Do we already have an ACR?** (maybe we can share it)
2. **What is the naming convention?** (for the registry name)
3. **Which SKU is recommended?** (Basic/Standard/Premium)
4. **Which resource group do we use?** (best practices)
5. **Are there compliance requirements?** (security scanning, etc.)
6. **Does our Gitea have a container registry?** (free alternative)
---
## 📚 More Information
- **Azure App Service Deployment**: `docs/AZURE-DEPLOYMENT-SUMMARY.md`
- **Gitea Registry**: `docs/GITEA-DOCKER-REGISTRY.md`
- **Azure Container Registry**: `docs/AZURE-CONTAINER-REGISTRY.md`
- **Azure DevOps Setup**: `docs/AZURE-DEVOPS-SETUP.md`
---
## 🎯 Conclusion
**Short answer:**
- **Azure App Service?** → ❌ no ACR needed
- **Containers?** → ✅ ACR needed (or Gitea/Docker Hub)
- **Recommendation:** start with App Service; request an ACR later if needed
**Action:**
1. Decide: App Service or containers?
2. If containers: check the Gitea registry first
3. If an ACR is needed: request it from IT using the template

---
# Azure Resources Overview
Quick reference of all Azure resources needed for CMDB Insight deployment.
## 📋 Resources Summary
| Resource Type | Resource Name | Purpose | SKU/Tier | Estimated Cost | Shared? |
|--------------|---------------|---------|----------|----------------|--------|
| **Resource Group** | `rg-cmdb-insight-prod` | Container for all resources | - | Free | No |
| **Container Registry** | `yourcompanyacr` | Store Docker images (can be shared) | Basic/Standard | €5-20/month | ✅ Yes |
| **PostgreSQL Database** | `cmdb-postgres-prod` | Production database | Standard_B1ms | €20-30/month | No |
| **Key Vault** | `kv-cmdb-insight-prod` | Store secrets securely | Standard | €1-2/month | No |
| **App Service Plan** | `plan-cmdb-insight-prod` | Hosting plan | B1 | €15-25/month | No |
| **App Service (Backend)** | `cmdb-backend-prod` | Backend API | - | Included in plan | No |
| **App Service (Frontend)** | `cmdb-frontend-prod` | Frontend web app | - | Included in plan | No |
| **Application Insights** | `appi-cmdb-insight-prod` | Monitoring & logging | Basic | €0-5/month | No |
**Total Estimated Cost: €41-82/month** (depending on ACR tier and usage)
**💡 Note**: Container Registry can be **shared across multiple applications**. The repository name (`cmdb-insight`) separates this app from others. If you already have an ACR, reuse it to save costs!
---
## 🔗 Resource Dependencies
```
Resource Group (App-specific)
├── PostgreSQL Database
│ └── Stores: Application data
├── Key Vault
│ └── Stores: Secrets (JIRA tokens, passwords, etc.)
├── Application Insights
│ └── Monitors: Backend & Frontend apps
└── App Service Plan
├── Backend App Service
│ ├── Pulls from: Shared ACR (cmdb-insight/backend:latest)
│ ├── Connects to: PostgreSQL
│ ├── Reads from: Key Vault
│ └── Sends logs to: Application Insights
└── Frontend App Service
├── Pulls from: Shared ACR (cmdb-insight/frontend:latest)
└── Connects to: Backend App Service
Shared Resources (can be in separate resource group)
└── Container Registry (ACR) ← Shared across multiple applications
├── cmdb-insight/ ← This application
│ ├── backend:latest
│ └── frontend:latest
├── other-app/ ← Other applications
│ └── api:latest
└── shared-services/ ← Shared images
└── nginx:latest
```
---
## 🌐 Endpoints
After deployment, your application will be available at:
- **Frontend**: `https://cmdb-frontend-prod.azurewebsites.net`
- **Backend API**: `https://cmdb-backend-prod.azurewebsites.net/api`
- **Health Check**: `https://cmdb-backend-prod.azurewebsites.net/api/health`
If a custom domain is configured:
- **Frontend**: `https://cmdb.yourcompany.com`
- **Backend API**: `https://api.cmdb.yourcompany.com` (or subdomain of your choice)
---
## 🔐 Required Secrets
These secrets should be stored in Azure Key Vault:
| Secret Name | Description | Example |
|-------------|-------------|---------|
| `JiraPat` | Jira Personal Access Token (if using PAT auth) | `ATATT3xFfGF0...` |
| `SessionSecret` | Session encryption secret | `a1b2c3d4e5f6...` (32+ chars) |
| `JiraOAuthClientId` | Jira OAuth Client ID | `OAuthClientId123` |
| `JiraOAuthClientSecret` | Jira OAuth Client Secret | `OAuthSecret456` |
| `DatabasePassword` | PostgreSQL admin password | `SecurePassword123!` |
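The secrets above can be loaded into the vault with `az keyvault secret set`; a minimal sketch assuming the vault name from the resource table (`kv-cmdb-insight-prod`) and placeholder values:

```shell
# Store application secrets in Key Vault (placeholder values — replace with real ones)
az keyvault secret set --vault-name kv-cmdb-insight-prod --name JiraPat --value "your-jira-pat"
az keyvault secret set --vault-name kv-cmdb-insight-prod --name SessionSecret --value "$(openssl rand -hex 32)"
az keyvault secret set --vault-name kv-cmdb-insight-prod --name JiraOAuthClientId --value "your-client-id"
az keyvault secret set --vault-name kv-cmdb-insight-prod --name JiraOAuthClientSecret --value "your-client-secret"
az keyvault secret set --vault-name kv-cmdb-insight-prod --name DatabasePassword --value "your-db-password"
```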
---
## 📊 Resource Sizing Recommendations
### For 20 Users (Current)
| Resource | Recommended SKU | Alternative |
|----------|----------------|-------------|
| App Service Plan | B1 (1 vCore, 1.75GB RAM) | B2 if experiencing slowness |
| PostgreSQL | Standard_B1ms (1 vCore, 2GB RAM) | Standard_B2s for growth |
| Container Registry | Basic (10GB) | Standard for production |
| Key Vault | Standard | Standard (only option) |
### For 50+ Users (Future Growth)
| Resource | Recommended SKU | Notes |
|----------|----------------|-------|
| App Service Plan | B2 or S1 | Better performance |
| PostgreSQL | Standard_B2s (2 vCores, 4GB RAM) | More concurrent connections |
| Container Registry | Standard (100GB) | More storage, geo-replication |
---
## 🔄 Update/Deployment Flow
1. **Code Changes** → Push to repository
2. **CI/CD Pipeline** → Builds Docker images
3. **Push to ACR** → Images stored in Container Registry
4. **Restart App Services** → Pulls new images from ACR
5. **Application Updates** → New version live
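Steps 2–3 of this flow can also be run by hand when the pipeline is unavailable; a sketch assuming the Dockerfiles live in `backend/` and `frontend/` (registry name and paths are assumptions):

```shell
# Build and push images manually (registry name and build paths are assumptions)
az acr login --name yourcompanyacr
docker build -t yourcompanyacr.azurecr.io/cmdb-insight/backend:latest ./backend
docker build -t yourcompanyacr.azurecr.io/cmdb-insight/frontend:latest ./frontend
docker push yourcompanyacr.azurecr.io/cmdb-insight/backend:latest
docker push yourcompanyacr.azurecr.io/cmdb-insight/frontend:latest
```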
### Manual Deployment
```bash
# Restart apps to pull latest images
az webapp restart --name cmdb-backend-prod --resource-group rg-cmdb-insight-prod
az webapp restart --name cmdb-frontend-prod --resource-group rg-cmdb-insight-prod
```
---
## 🛡️ Security Configuration
### Network Security
- **HTTPS Only**: Enabled on both App Services
- **Database Firewall**: Restricted to Azure services (can be further restricted)
- **Key Vault Access**: Managed Identity only (no shared keys)
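The "restricted to Azure services" database firewall rule above can be expressed as follows; in Azure's convention a `0.0.0.0`–`0.0.0.0` rule means "allow Azure services only" (server and resource group names follow the resource table):

```shell
# Allow only Azure services through the PostgreSQL firewall
az postgres flexible-server firewall-rule create \
  --resource-group rg-cmdb-insight-prod \
  --name cmdb-postgres-prod \
  --rule-name AllowAzureServices \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0
```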
### Authentication
- **App Services**: Managed Identity for ACR and Key Vault access
- **Database**: Username/password (stored in Key Vault)
- **Application**: Jira OAuth 2.0 or Personal Access Token
---
## 📈 Monitoring & Logging
### Application Insights
- **Metrics**: Response times, request rates, errors
- **Logs**: Application logs, exceptions, traces
- **Alerts**: Configured for downtime, errors, performance issues
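An error alert like those mentioned above could be created roughly as follows; the alert name and threshold are assumptions, and wiring up an action group for notifications is a separate step:

```shell
# Alert when the backend returns a spike of 5xx responses (name and threshold are assumptions)
az monitor metrics alert create \
  --name cmdb-backend-5xx-alert \
  --resource-group rg-cmdb-insight-prod \
  --scopes $(az webapp show --name cmdb-backend-prod --resource-group rg-cmdb-insight-prod --query id -o tsv) \
  --condition "total Http5xx > 10" \
  --description "Backend is returning server errors"
```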
### Access Logs
```bash
# Backend logs
az webapp log tail --name cmdb-backend-prod --resource-group rg-cmdb-insight-prod
# Frontend logs
az webapp log tail --name cmdb-frontend-prod --resource-group rg-cmdb-insight-prod
```
---
## 🔧 Configuration Files
### Environment Variables (Backend)
- `NODE_ENV=production`
- `PORT=3001`
- `DATABASE_TYPE=postgres`
- `DATABASE_URL` (from Key Vault)
- `JIRA_HOST=https://jira.zuyderland.nl`
- `JIRA_AUTH_METHOD=oauth`
- `JIRA_OAUTH_CLIENT_ID` (from Key Vault)
- `JIRA_OAUTH_CLIENT_SECRET` (from Key Vault)
- `JIRA_OAUTH_CALLBACK_URL`
- `SESSION_SECRET` (from Key Vault)
- `FRONTEND_URL`
- `APPINSIGHTS_INSTRUMENTATIONKEY`
### Environment Variables (Frontend)
- `VITE_API_URL` (points to backend API)
---
## 🗑️ Cleanup (If Needed)
To delete all resources:
```bash
# Delete entire resource group (deletes all resources)
az group delete --name rg-cmdb-insight-prod --yes --no-wait
# Or delete individual resources
az acr delete --name cmdbinsightacr --resource-group rg-cmdb-insight-prod
az postgres flexible-server delete --name cmdb-postgres-prod --resource-group rg-cmdb-insight-prod
az keyvault delete --name kv-cmdb-insight-prod --resource-group rg-cmdb-insight-prod
az appservice plan delete --name plan-cmdb-insight-prod --resource-group rg-cmdb-insight-prod
```
**⚠️ Warning**: This will permanently delete all resources and data. Make sure you have backups if needed.
---
## 📞 Quick Commands Reference
```bash
# Set variables
RESOURCE_GROUP="rg-cmdb-insight-prod"
BACKEND_APP="cmdb-backend-prod"
FRONTEND_APP="cmdb-frontend-prod"
# Check app status
az webapp show --name $BACKEND_APP --resource-group $RESOURCE_GROUP --query state
# View logs
az webapp log tail --name $BACKEND_APP --resource-group $RESOURCE_GROUP
# Restart apps
az webapp restart --name $BACKEND_APP --resource-group $RESOURCE_GROUP
az webapp restart --name $FRONTEND_APP --resource-group $RESOURCE_GROUP
# List all resources
az resource list --resource-group $RESOURCE_GROUP --output table
# Get app URLs
echo "Frontend: https://${FRONTEND_APP}.azurewebsites.net"
echo "Backend: https://${BACKEND_APP}.azurewebsites.net/api"
```
---
## 📚 Related Documentation
- **`AZURE-NEW-SUBSCRIPTION-SETUP.md`** - Complete step-by-step setup guide
- **`AZURE-APP-SERVICE-DEPLOYMENT.md`** - Detailed App Service deployment
- **`AZURE-ACR-SETUP.md`** - ACR setup and usage
- **`AZURE-QUICK-REFERENCE.md`** - Quick reference guide
- **`PRODUCTION-DEPLOYMENT.md`** - General production deployment
---
**Last Updated**: 2025-01-21
---

@@ -2,7 +2,7 @@
## 🎯 Recommendation for Your Situation
-**For Zuyderland CMDB GUI with Azure Container Registry:**
+**For CMDB Insight with Azure Container Registry:**
### ✅ **Service Principal** (Recommended) ⭐
@@ -184,7 +184,7 @@ When you choose **Service Principal**:
## 💡 My Recommendation
-**For Zuyderland CMDB GUI:**
+**For CMDB Insight:**
### ✅ **Choose Service Principal** ⭐
---

@@ -0,0 +1,234 @@
# Azure DevOps Service Connection - Permissions Error Fix
## 🔴 Error Message
```
Failed to create an app in Microsoft Entra.
Error: Insufficient privileges to complete the operation in Microsoft Graph
Ensure that the user has permissions to create a Microsoft Entra Application.
```
## 🎯 Problem
Your account lacks sufficient privileges in Microsoft Entra (Azure AD) to create a Service Principal automatically. Azure DevOps tries to do this automatically when you create a service connection.
---
## ✅ Solution 1: Request Permissions (Recommended)
### Step 1: Check Your Current Permissions
1. Go to **Azure Portal** → **Microsoft Entra ID** (or **Azure Active Directory**)
2. Go to **Roles and administrators**
3. Find your account and check which role you have
**Required roles:**
- **Application Administrator** (recommended)
- **Cloud Application Administrator**
- **Global Administrator** (has all rights)
### Step 2: Request Permissions
**Option A: Ask your Azure administrator**
- Contact your Azure/IT administrator
- Request the **Application Administrator** or **Cloud Application Administrator** role
- Or ask them to create a Service Principal for you
**Option B: Have a Service Principal created for you**
- Ask your administrator to create a Service Principal
- Use it with the "Others" option (see Solution 2)
---
## ✅ Solution 2: Use the "Others" Option with Existing Credentials
If you cannot obtain permissions, use the "Others" option with ACR admin credentials:
### Step 1: Retrieve the ACR Admin Credentials
```bash
# Log in to Azure
az login
# Check whether admin is enabled
az acr show --name zdlasacr --query adminEnabled
# If false, enable admin (requires the Contributor or Owner role on the ACR)
az acr update --name zdlasacr --admin-enabled true
# Retrieve the admin credentials
az acr credential show --name zdlasacr
```
**Output:**
```json
{
"username": "zdlasacr",
"passwords": [
{
"name": "password",
"value": "xxxxxxxxxxxxx" Gebruik deze
}
]
}
```
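To feed these values into scripts without copying from the JSON by hand, the fields can be extracted directly with `--query`:

```shell
# Extract the admin username and the first password directly
ACR_USERNAME=$(az acr credential show --name zdlasacr --query username -o tsv)
ACR_PASSWORD=$(az acr credential show --name zdlasacr --query "passwords[0].value" -o tsv)
```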
### Step 2: Create a Service Connection with "Others"
1. **Azure DevOps** → **Project Settings** → **Service connections** → **New service connection**
2. Choose **"Docker Registry"**
3. Choose **"Others"** (not "Azure Container Registry")
4. Fill in:
   - **Docker Registry**: `zdlasacr.azurecr.io`
   - **Docker ID**: `zdlasacr` (or the username from the output)
   - **Docker Password**: `passwords[0].value` from the output
   - **Service connection name**: `zuyderland-cmdb-acr-connection`
5. Click **"Save"**
**✅ This works without extra Azure AD permissions!**
---
## ✅ Solution 3: Ask an Administrator for a Service Principal
If your administrator can create a Service Principal for you:
### Step 1: The Administrator Creates the Service Principal
```bash
# The administrator runs:
az login
# Create a Service Principal with ACR push permissions
az ad sp create-for-rbac \
--name "zuyderland-cmdb-acr-sp" \
--role acrpush \
--scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerRegistry/registries/zdlasacr
```
**Output:**
```json
{
"appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"password": "xxxxxxxxxxxxx", Gebruik deze
"tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
```
### Step 2: Use It in Azure DevOps
1. **Azure DevOps** → **Project Settings** → **Service connections** → **New service connection**
2. Choose **"Docker Registry"** → **"Others"**
3. Fill in:
   - **Docker Registry**: `zdlasacr.azurecr.io`
   - **Docker ID**: the `appId` from the output (the GUID)
   - **Docker Password**: the `password` from the output
   - **Service connection name**: `zuyderland-cmdb-acr-connection`
4. Click **"Save"**
---
## 🔍 Check Which Permissions You Have
### Via the Azure Portal
1. Go to **Azure Portal** → **Microsoft Entra ID**
2. **Roles and administrators**
3. Find your account
4. Check which roles you have
### Via the Azure CLI
```bash
# Log in
az login
# Check your role assignments
az role assignment list --assignee $(az account show --query user.name -o tsv) --all
# Check the signed-in user in Microsoft Entra ID
az ad signed-in-user show --query "displayName"
```
---
## 📋 Required Permissions Overview
| Role | Can Create a Service Principal? | Can Grant ACR Access? |
|-----|--------------------------------|------------------------|
| **Global Administrator** | ✅ Yes | ✅ Yes |
| **Application Administrator** | ✅ Yes | ✅ Yes |
| **Cloud Application Administrator** | ✅ Yes | ✅ Yes |
| **User** | ❌ No | ❌ No |
| **Contributor** (on resource group) | ❌ No | ✅ Yes (on resources) |
**For the Azure DevOps service connection:**
- You need **Application Administrator** or higher to have a Service Principal created automatically
- Or use the **"Others"** option with existing credentials (no extra permissions needed)
---
## 💡 Recommendation
**For your situation:**
1. **Try first**: ask your Azure administrator for the **Application Administrator** role
   - This is the cleanest solution
   - Works automatically with the Azure Container Registry option
2. **If that is not possible**: use the **"Others"** option with ACR admin credentials
   - Always works
   - No extra permissions needed
   - Slightly less automated, but fully functional
3. **Alternative**: ask an administrator to create a Service Principal
   - Use it with the "Others" option
   - Safer than admin credentials
---
## 🔧 Troubleshooting
### "Admin is not enabled" Error
If ACR admin is not enabled:
```bash
# Enable admin (requires Contributor or Owner on the ACR)
az acr update --name zdlasacr --admin-enabled true
```
**If you lack the permissions:**
- Ask your administrator to enable admin
- Or use a Service Principal (see Solution 3)
### "Cannot connect to ACR" Error
Check that:
1. The ACR name is correct: `zdlasacr`
2. The credentials are correct
3. The ACR is reachable from your network
4. Firewall rules allow access
---
## 📚 Related Documentation
- **`AZURE-PIPELINES.md`** - Pipeline troubleshooting (includes MSI error fix)
- **`AZURE-SERVICE-CONNECTION-TROUBLESHOOTING.md`** - General troubleshooting
- **`AZURE-ACR-SETUP.md`** - ACR setup (includes permissions)
---
## ✅ Quick Fix Checklist
- [ ] Check your current Azure AD role
- [ ] Request the Application Administrator role (option 1)
- [ ] OR use the "Others" option with ACR admin credentials (option 2)
- [ ] OR ask an administrator to create a Service Principal (option 3)
- [ ] Test the service connection
- [ ] Test the pipeline
---
**💡 Tip**: The "Others" option is a fully working solution and needs no extra Azure AD permissions. It is slightly less automated, but works perfectly for CI/CD pipelines.
---

@@ -1,5 +1,60 @@
# Azure DevOps Service Connection - Troubleshooting
## 🔴 Problem: "Could not fetch access token for Managed Service Principal" (MSI Error)
**Error Message:**
```
Could not fetch access token for Managed Service Principal.
Please configure Managed Service Identity (MSI) for virtual machine
```
**Cause:**
The service connection is configured to use Managed Service Identity (MSI), but this does **not** work with Azure DevOps Services (cloud). MSI only works with Azure DevOps Server (on-premises) with a Managed Identity configured.
**✅ Solution: Reconfigure the Service Connection with a Service Principal**
### Step 1: Delete the Existing Service Connection
1. Go to **Azure DevOps** → **Project Settings** → **Service connections**
2. Find the service connection: `zuyderland-cmdb-acr-connection`
3. Click **...** (three dots) → **Delete**
4. Confirm the deletion
### Step 2: Create a New Service Connection with a Service Principal
1. **Project Settings** → **Service connections** → **New service connection**
2. Choose **"Docker Registry"**
3. Choose **"Azure Container Registry"**
4. **Important**: select **"Service Principal"** as the authentication type (NOT Managed Identity!)
5. Fill in:
   - **Azure subscription**: select your subscription
   - **Azure container registry**: select your ACR (`zdlasacr`)
   - **Service connection name**: `zuyderland-cmdb-acr-connection`
6. Click **"Save"** (or **"Verify and save"**)
**✅ This should work now!**
### Alternative: Use the "Others" Option with Admin Credentials
If the Azure Container Registry option still causes problems:
1. **Choose "Docker Registry" → "Others"**
2. **Fill in manually:**
   - **Docker Registry**: `zdlasacr.azurecr.io`
   - **Docker ID**: (ACR admin username)
   - **Docker Password**: (ACR admin password)
3. **Retrieve the ACR admin credentials:**
```bash
az acr credential show --name zdlasacr
```
Use `username` and `passwords[0].value` from the output.
4. **Service connection name**: `zuyderland-cmdb-acr-connection`
5. **Save**
---
## 🔴 Problem: "Loading Registries..." hangs
If the Azure Container Registry dropdown keeps loading without showing results, try these solutions:
---

@@ -80,7 +80,7 @@ variables:
# Adjust this to your ACR name
acrName: 'zuyderlandcmdbacr' # ← Your ACR name here
-repositoryName: 'zuyderland-cmdb-gui'
+repositoryName: 'cmdb-insight'
# Service connection name (created in the next step)
dockerRegistryServiceConnection: 'zuyderland-cmdb-acr-connection'
@@ -124,7 +124,7 @@ This connection gives Azure DevOps access to your ACR.
2. Click **Pipelines** (in the left-hand menu)
3. Click **"New pipeline"** or **"Create Pipeline"**
4. Choose **"Azure Repos Git"** (or wherever your code lives)
-5. Select your repository: **"Zuyderland CMDB GUI"**
+5. Select your repository: **"CMDB Insight"**
6. Choose **"Existing Azure Pipelines YAML file"**
7. Select:
   - **Branch**: `main`
@@ -141,8 +141,8 @@ This connection gives Azure DevOps access to your ACR.
1. Go to your **Container Registry** (`zuyderlandcmdbacr`)
2. Click **"Repositories"**
3. You should see:
-   - `zuyderland-cmdb-gui/backend`
-   - `zuyderland-cmdb-gui/frontend`
+   - `cmdb-insight/backend`
+   - `cmdb-insight/frontend`
4. Click a repository to see its tags (e.g. `latest`, `123`)
### Via the Azure CLI:
@@ -151,10 +151,10 @@ This connection gives Azure DevOps access to your ACR.
az acr repository list --name zuyderlandcmdbacr
# List tags for the backend
-az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/backend
+az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/backend
# List tags for the frontend
-az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/frontend
+az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/frontend
```
---
@@ -166,8 +166,8 @@ az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmd
az acr login --name zuyderlandcmdbacr
# Pull images
-docker pull zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:latest
-docker pull zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:latest
+docker pull zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:latest
+docker pull zuyderlandcmdbacr.azurecr.io/cmdb-insight/frontend:latest
# Test run (with docker-compose)
docker-compose -f docker-compose.prod.acr.yml pull
@@ -242,20 +242,39 @@ git push origin v1.0.0
**Recommendation: Basic SKU** ⭐ (~€5/month)
-**Basic SKU** (recommended for your situation):
-- ✅ 10GB storage - plenty for backend + frontend with multiple versions
-- ✅ 1GB/day webhook throughput - sufficient for CI/CD
-- ✅ Unlimited pulls - no extra costs
-- ✅ RBAC support - role-based access control
+### Basic SKU (Recommended for your situation)
+**Includes:**
+- ✅ **10GB storage** - plenty for backend + frontend images with multiple versions
+- ✅ **1GB/day webhook throughput** - sufficient for CI/CD
+- ✅ **Unlimited pulls** - no extra cost for image pulls
+- ✅ **Admin user enabled** - for development/production
+- ✅ **RBAC support** - role-based access control
+- ✅ **Content trust** - image signing support
+- ✅ **Cost: ~€5/month**
-**Standard SKU** (~€20/month):
+**Limitations:**
+- ❌ No geo-replication
+- ❌ No security scanning (vulnerability scanning)
+- ❌ No content trust storage
+**When to use:**
+- ✅ **Your situation** - 20 users, corporate tool
+- ✅ Development and production environments
+- ✅ Small to medium-sized teams
+- ✅ Budget-conscious deployments
+### Standard SKU (~€20/month)
+**Includes (everything in Basic, plus):**
- 100GB storage
- 10GB/day webhook throughput
- Geo-replication
-- **Not needed for your situation**
-**Premium SKU** (~€50/month):
+### Premium SKU (~€50/month)
+**Includes (everything in Standard, plus):**
- 500GB storage
- Security scanning
- Private endpoints
@@ -263,13 +282,85 @@ git push origin v1.0.0
**For your situation (20 users): Basic is perfect!** ✅
📚 See `docs/AZURE-ACR-PRICING.md` for the full comparison.
---
## 🔐 Permissions Mode
**Recommendation: RBAC Registry Permissions** ⭐
### RBAC Registry Permissions (Recommended)
**How it works:**
- Permissions are set at the **registry level**
- All repositories within the registry share the same permissions
- Users have access to all repositories, or to none
**Advantages:**
- ✅ **Simple** - less complexity
- ✅ **Easy to manage** - one set of permissions for the entire registry
- ✅ **Sufficient for most scenarios** - perfect for your situation
- ✅ **Default choice** - the most commonly used option
**When to use:**
- ✅ **Your situation** - 20 users, corporate tool
- ✅ Small to medium-sized teams
- ✅ All repositories have the same access requirements
- ✅ A simple permission structure is desired
### RBAC Registry + ABAC Repository Permissions
**When to use:**
- When you need per-repository permissions
- Large teams with differing access requirements
- A complex permission structure
**For your situation: RBAC Registry Permissions is perfect!** ✅
---
## 🔄 Shared ACR Setup (Optional)
If you already have an ACR for other applications, you can reuse it:
**Advantages:**
- ✅ **Cost savings**: one ACR for all applications (€5-20/month vs. multiple ACRs)
- ✅ **Centralized management**: all images in one place
- ✅ **Easier collaboration**: teams can share images
**How it works:**
- The ACR is shared, but each application uses a **unique repository name**
- The repository name (`cmdb-insight`) separates your app from the others
- Images are organized per application: `acr.azurecr.io/app-name/service:tag`
**Example structure:**
```
zuyderlandacr.azurecr.io/
├── cmdb-insight/ ← This application
│   ├── backend:latest
│   └── frontend:latest
├── other-app/ ← Another application
│   └── api:latest
└── shared-services/ ← Shared base images
    └── nginx:latest
```
**Setup:**
```bash
# Use an existing ACR
ACR_NAME="your-existing-acr"
ACR_RESOURCE_GROUP="rg-shared-services"
# Verify that it exists
az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP
# Update the pipeline variables with the existing ACR name
```
---
## 📚 More Information
- **Full ACR Guide**: `docs/AZURE-CONTAINER-REGISTRY.md`
- **Deployment Guide**: `docs/AZURE-APP-SERVICE-DEPLOYMENT.md`
- **Azure DevOps Setup**: `docs/AZURE-DEVOPS-SETUP.md`
- **Deployment Guide**: `docs/PRODUCTION-DEPLOYMENT.md`
@@ -284,4 +375,4 @@ Now that your images are in ACR, you can deploy them to:
3. **Azure Kubernetes Service (AKS)** - for more complex setups
4. **VM with Docker Compose** - full control
-See `docs/AZURE-DEPLOYMENT-SUMMARY.md` for deployment options.
+See `docs/AZURE-APP-SERVICE-DEPLOYMENT.md` for deployment options.
---

@@ -0,0 +1,401 @@
# Azure App Service Deployment - Step-by-Step Guide 🚀
A complete deployment guide for CMDB Insight on Azure App Service.
## 🎯 Why Azure App Service?
Azure App Service is the recommended deployment option for CMDB Insight because:
1. **Managed Service**
   - No server management, SSH, or Linux configuration required
   - Azure manages everything (updates, security patches, scaling)
   - Perfect for teams that don't want to manage infrastructure
2. **Simple & Fast**
   - Set up in ~15 minutes
   - Automatic SSL/TLS certificates
   - Integration with Azure DevOps pipelines
3. **Cost-Effective**
   - Basic B1 plan: ~€15-25/month
   - Sufficient for 20 users
   - No hidden costs
4. **Flexible**
   - Deployment slots for testing (staging → production)
   - Easy rollback
   - Integration with Azure Key Vault for secrets
5. **Monitoring & Compliance**
   - Integration with Azure Monitor
   - Logging and audit trails (NEN 7510 compliance)
   - Built-in health checks
**Estimated cost:** ~€20-25/month (including the PostgreSQL database)
---
## 📋 Prerequisites
- Azure CLI installed and configured (`az login`)
- Docker images in ACR: `zdlasacr.azurecr.io/cmdb-insight/backend:latest` and `frontend:latest`
- A working Azure DevOps pipeline (images are built automatically)
---
## 🎯 Quick Start (15 minutes)
### Step 1: Resource Group
```bash
az group create \
--name zdl-cmdb-insight-prd-euwe-rg \
--location westeurope
```
### Step 2: App Service Plan
```bash
az appservice plan create \
--name zdl-cmdb-insight-prd-euwe-appsvc \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--sku B1 \
--is-linux
```
### Step 3: Web Apps
```bash
# Backend
az webapp create \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--plan zdl-cmdb-insight-prd-euwe-appsvc \
--container-image-name zdlasacr.azurecr.io/cmdb-insight/backend:latest
# Frontend
az webapp create \
--name zdl-cmdb-insight-prd-frontend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--plan zdl-cmdb-insight-prd-euwe-appsvc \
--container-image-name zdlasacr.azurecr.io/cmdb-insight/frontend:latest
```
### Step 4: ACR Authentication
```bash
# Enable Managed Identity
az webapp identity assign --name zdl-cmdb-insight-prd-backend-webapp --resource-group zdl-cmdb-insight-prd-euwe-rg
az webapp identity assign --name zdl-cmdb-insight-prd-frontend-webapp --resource-group zdl-cmdb-insight-prd-euwe-rg
# Get Principal IDs
BACKEND_PRINCIPAL_ID=$(az webapp identity show \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--query principalId -o tsv)
FRONTEND_PRINCIPAL_ID=$(az webapp identity show \
--name zdl-cmdb-insight-prd-frontend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--query principalId -o tsv)
# Get ACR Resource ID
ACR_ID=$(az acr show --name zdlasacr --query id -o tsv)
# Grant AcrPull permissions
az role assignment create --assignee $BACKEND_PRINCIPAL_ID --role AcrPull --scope $ACR_ID
az role assignment create --assignee $FRONTEND_PRINCIPAL_ID --role AcrPull --scope $ACR_ID
# Configure container settings
az webapp config container set \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--docker-registry-server-url https://zdlasacr.azurecr.io
az webapp config container set \
--name zdl-cmdb-insight-prd-frontend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--docker-registry-server-url https://zdlasacr.azurecr.io
```
### Step 5: PostgreSQL Database Setup (Recommended for Production)
**For PostgreSQL setup, see:** `docs/AZURE-POSTGRESQL-SETUP.md`
**Quick setup with the script:**
```bash
./scripts/setup-postgresql.sh
```
**Or manually:**
```bash
# Create the PostgreSQL Flexible Server
az postgres flexible-server create \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--name zdl-cmdb-insight-prd-psql \
--location westeurope \
--admin-user cmdbadmin \
--admin-password $(openssl rand -base64 32) \
--sku-name Standard_B1ms \
--tier Burstable \
--storage-size 32 \
--version 15
# Create the database (one database is sufficient)
az postgres flexible-server db create \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--server-name zdl-cmdb-insight-prd-psql \
--database-name cmdb_insight
```
**For SQLite (alternative; simpler, but less suitable for production):**
- No extra setup required
- The database is created automatically inside the container
- See Step 5b below
### Step 5a: Environment Variables with PostgreSQL
```bash
# Backend with PostgreSQL (replace with your values)
az webapp config appsettings set \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--settings \
NODE_ENV=production \
PORT=3001 \
DATABASE_TYPE=postgres \
DATABASE_HOST=zdl-cmdb-insight-prd-psql.postgres.database.azure.com \
DATABASE_PORT=5432 \
DATABASE_NAME=cmdb_insight \
DATABASE_USER=cmdbadmin \
DATABASE_PASSWORD=your-database-password \
DATABASE_SSL=true \
JIRA_BASE_URL=https://jira.zuyderland.nl \
JIRA_PAT=your-pat-token \
SESSION_SECRET=$(openssl rand -hex 32) \
FRONTEND_URL=https://zdl-cmdb-insight-prd-frontend-webapp.azurewebsites.net
# Frontend
az webapp config appsettings set \
--name zdl-cmdb-insight-prd-frontend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--settings \
VITE_API_URL=https://zdl-cmdb-insight-prd-backend-webapp.azurewebsites.net/api
```
### Step 5b: Environment Variables with SQLite (Alternative)
```bash
# Backend with SQLite (replace with your values)
az webapp config appsettings set \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--settings \
NODE_ENV=production \
PORT=3001 \
DATABASE_TYPE=sqlite \
JIRA_BASE_URL=https://jira.zuyderland.nl \
JIRA_PAT=your-pat-token \
SESSION_SECRET=$(openssl rand -hex 32) \
FRONTEND_URL=https://zdl-cmdb-insight-prd-frontend-webapp.azurewebsites.net
# Frontend
az webapp config appsettings set \
--name zdl-cmdb-insight-prd-frontend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--settings \
VITE_API_URL=https://zdl-cmdb-insight-prd-backend-webapp.azurewebsites.net/api
```
### Step 6: Start the Apps
```bash
az webapp start --name zdl-cmdb-insight-prd-backend-webapp --resource-group zdl-cmdb-insight-prd-euwe-rg
az webapp start --name zdl-cmdb-insight-prd-frontend-webapp --resource-group zdl-cmdb-insight-prd-euwe-rg
```
### Step 7: Test
```bash
# Health check
curl https://zdl-cmdb-insight-prd-backend-webapp.azurewebsites.net/api/health
# Frontend
curl https://zdl-cmdb-insight-prd-frontend-webapp.azurewebsites.net
```
**🎉 Your application is now live!**
- Frontend: `https://zdl-cmdb-insight-prd-frontend-webapp.azurewebsites.net`
- Backend API: `https://zdl-cmdb-insight-prd-backend-webapp.azurewebsites.net/api`
---
## 🔐 Azure Key Vault Setup (Recommended)
For production, use Azure Key Vault for secrets.
### Step 1: Create the Key Vault
```bash
az keyvault create \
--name kv-cmdb-insight-prod \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--location westeurope \
--sku standard
```
### Step 2: Add Secrets
```bash
az keyvault secret set --vault-name kv-cmdb-insight-prod --name JiraPat --value "your-token"
az keyvault secret set --vault-name kv-cmdb-insight-prod --name SessionSecret --value "$(openssl rand -hex 32)"
```
### Step 3: Grant Access
**For a Key Vault with RBAC authorization (recommended):**
```bash
# Get Key Vault Resource ID
KV_ID=$(az keyvault show --name kv-cmdb-insight-prod --query id -o tsv)
# Grant Key Vault Secrets User role to backend
az role assignment create \
--assignee $BACKEND_PRINCIPAL_ID \
--role "Key Vault Secrets User" \
--scope $KV_ID
# Grant to frontend (if needed)
az role assignment create \
--assignee $FRONTEND_PRINCIPAL_ID \
--role "Key Vault Secrets User" \
--scope $KV_ID
```
**For a Key Vault with access policies (legacy method):**
```bash
az keyvault set-policy \
  --name kv-cmdb-insight-prod \
  --object-id $BACKEND_PRINCIPAL_ID \
  --secret-permissions get list
```
**Note:** If you get the error "Cannot set policies to a vault with '--enable-rbac-authorization'", use the RBAC method above.
### Stap 4: Configure App Settings met Key Vault References
```bash
az webapp config appsettings set \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--settings \
JIRA_PAT="@Microsoft.KeyVault(SecretUri=https://kv-cmdb-insight-prod.vault.azure.net/secrets/JiraPat/)" \
SESSION_SECRET="@Microsoft.KeyVault(SecretUri=https://kv-cmdb-insight-prod.vault.azure.net/secrets/SessionSecret/)"
```
---
## 📊 Monitoring Setup
### Application Insights
```bash
# Create Application Insights
az monitor app-insights component create \
--app cmdb-insight-prod \
--location westeurope \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--application-type web
# Get Instrumentation Key
INSTRUMENTATION_KEY=$(az monitor app-insights component show \
--app cmdb-insight-prod \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--query instrumentationKey -o tsv)
# Configure App Settings
az webapp config appsettings set \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--settings \
APPINSIGHTS_INSTRUMENTATIONKEY=$INSTRUMENTATION_KEY
```
---
## 🔄 Deploying Updates
### Option 1: Manual (Simple)
```bash
# Restart the web apps (pulls the new latest image)
az webapp restart --name zdl-cmdb-insight-prd-backend-webapp --resource-group zdl-cmdb-insight-prd-euwe-rg
az webapp restart --name zdl-cmdb-insight-prd-frontend-webapp --resource-group zdl-cmdb-insight-prd-euwe-rg
```
### Option 2: Deployment Slots (Zero-Downtime)
```bash
# Create staging slot
az webapp deployment slot create \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--slot staging
# Swap staging into production (deploy the new image to the staging slot first)
az webapp deployment slot swap \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group zdl-cmdb-insight-prd-euwe-rg \
--slot staging \
--target-slot production
```
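The swap only makes sense once the staging slot actually runs the new image; a sketch of pointing the staging slot at a specific build tag before swapping (the tag `123` is an assumption):

```shell
# Point the staging slot at the new image before swapping (tag is an assumption)
az webapp config container set \
  --name zdl-cmdb-insight-prd-backend-webapp \
  --resource-group zdl-cmdb-insight-prd-euwe-rg \
  --slot staging \
  --container-image-name zdlasacr.azurecr.io/cmdb-insight/backend:123
```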
---
## 🛠️ Troubleshooting
### Check Logs
```bash
# Live logs
az webapp log tail --name zdl-cmdb-insight-prd-backend-webapp --resource-group zdl-cmdb-insight-prd-euwe-rg
# Download logs
az webapp log download --name zdl-cmdb-insight-prd-backend-webapp --resource-group zdl-cmdb-insight-prd-euwe-rg --log-file logs.zip
```
### Check Status
```bash
az webapp show --name zdl-cmdb-insight-prd-backend-webapp --resource-group zdl-cmdb-insight-prd-euwe-rg --query state
```
### Restart App
```bash
az webapp restart --name zdl-cmdb-insight-prd-backend-webapp --resource-group zdl-cmdb-insight-prd-euwe-rg
```
---
## 📚 More Information
- **Quick Reference**: `docs/AZURE-QUICK-REFERENCE.md`
- **Production Deployment**: `docs/PRODUCTION-DEPLOYMENT.md`
---
## ✅ Checklist
- [ ] Resource group created
- [ ] App Service plan created
- [ ] Web apps created
- [ ] ACR authentication configured
- [ ] Environment variables set
- [ ] Key Vault configured (optional)
- [ ] Application Insights enabled
- [ ] Health checks working
- [ ] Team informed
**Good luck! 🚀**
---

@@ -0,0 +1,275 @@
# Azure Pipelines Usage Guide
Guide for using the separate build and deployment pipelines.
## 📋 Quick Reference
### Pipeline Variables
| Variable | Description | Example |
|----------|-------------|---------|
| `acrName` | Azure Container Registry name | `zdlas` |
| `repositoryName` | Docker repository name | `cmdb-insight` |
| `dockerRegistryServiceConnection` | ACR service connection name | `zuyderland-cmdb-acr-connection` |
| `resourceGroup` | Azure resource group | `rg-cmdb-insight-prod` |
| `backendAppName` | Backend App Service name | `cmdb-backend-prod` |
| `frontendAppName` | Frontend App Service name | `cmdb-frontend-prod` |
| `azureSubscription` | Azure service connection for deployment | `zuyderland-cmdb-subscription` |
| `deployToProduction` | Enable/disable deployment | `true` or `false` |
| `useDeploymentSlots` | Use staging slots for zero-downtime | `true` or `false` |
### Required Service Connections
1. **Docker Registry Connection** (for ACR)
- Type: Docker Registry → Azure Container Registry
- Name: Match `dockerRegistryServiceConnection` variable
- Authentication: **Service Principal** (not Managed Identity)
2. **Azure Resource Manager Connection** (for App Service deployment)
- Type: Azure Resource Manager
- Name: Match `azureSubscription` variable
- Authentication: Managed Identity or Service Principal
---
## 📋 Pipeline Files
### 1. `azure-pipelines.yml` - Build and Push Images
**Purpose**: Builds Docker images and pushes them to Azure Container Registry.
**What it does:**
- ✅ Builds backend Docker image
- ✅ Builds frontend Docker image
- ✅ Pushes both to ACR with tags: `$(Build.BuildId)` and `latest`
**When to use:**
- First time setup (to test image building)
- After code changes (to build new images)
- Before deployment (to ensure images are in ACR)
**Configuration:**
```yaml
variables:
  acrName: 'zdlas' # Your ACR name
  repositoryName: 'cmdb-insight'
  dockerRegistryServiceConnection: 'zuyderland-cmdb-acr-connection'
```
### 2. `azure-pipelines-deploy.yml` - Deploy to App Service
**Purpose**: Deploys existing images from ACR to Azure App Services.
**What it does:**
- ✅ Deploys backend container to App Service
- ✅ Deploys frontend container to App Service
- ✅ Restarts both App Services
- ✅ Verifies deployment with health checks
**When to use:**
- After images are built and pushed to ACR
- When you want to deploy/update the application
- For production deployments
**Configuration:**
```yaml
variables:
  acrName: 'zdlas' # Your ACR name
  resourceGroup: 'rg-cmdb-insight-prod' # Your resource group
  backendAppName: 'cmdb-backend-prod' # Your backend app name
  frontendAppName: 'cmdb-frontend-prod' # Your frontend app name
  azureSubscription: 'zuyderland-cmdb-subscription' # Azure service connection
  imageTag: 'latest' # Image tag to deploy
```
## 🚀 Workflow
### Step 1: Build and Push Images
1. **Configure `azure-pipelines.yml`**:
- Update `acrName` with your ACR name
- Update `dockerRegistryServiceConnection` with your service connection name
2. **Create Pipeline in Azure DevOps**:
- Go to **Pipelines** → **New pipeline**
- Select **Existing Azure Pipelines YAML file**
- Choose `azure-pipelines.yml`
- Run the pipeline
3. **Verify Images in ACR**:
```bash
az acr repository list --name zdlas
az acr repository show-tags --name zdlas --repository cmdb-insight/backend
az acr repository show-tags --name zdlas --repository cmdb-insight/frontend
```
### Step 2: Deploy Application
1. **Ensure App Services exist**:
- Backend App Service: `cmdb-backend-prod`
- Frontend App Service: `cmdb-frontend-prod`
- See `AZURE-NEW-SUBSCRIPTION-SETUP.md` for setup instructions
2. **Configure `azure-pipelines-deploy.yml`**:
- Update all variables with your Azure resource names
- Create Azure service connection for App Service deployment
- Create `production` environment in Azure DevOps
3. **Create Deployment Pipeline**:
- Go to **Pipelines** → **New pipeline**
- Select **Existing Azure Pipelines YAML file**
- Choose `azure-pipelines-deploy.yml`
- Run the pipeline
4. **Verify Deployment**:
- Check backend: `https://cmdb-backend-prod.azurewebsites.net/api/health`
- Check frontend: `https://cmdb-frontend-prod.azurewebsites.net`
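The verification step can also be scripted so the pipeline fails fast when the app never comes up. A small sketch polling the health endpoint shown above; the retry count and sleep interval are arbitrary choices:

```shell
# Poll a health endpoint until it returns HTTP 200, or give up
wait_for_health() {
  url="$1"; retries="${2:-30}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    code=$(curl -s -o /dev/null -w "%{http_code}" "$url" || echo 000)
    [ "$code" = "200" ] && echo "healthy: $url" && return 0
    i=$((i + 1)); sleep 5
  done
  echo "unhealthy after $retries attempts: $url" >&2
  return 1
}
# Example: wait_for_health "https://cmdb-backend-prod.azurewebsites.net/api/health"
```

Call it at the end of the deployment pipeline so a failed rollout is reported as a failed run instead of being discovered later.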
## 🔧 Setup Requirements
### For Build Pipeline (`azure-pipelines.yml`)
**Required:**
- ✅ Docker Registry service connection (for ACR)
- ✅ ACR exists and is accessible
- ✅ Service connection has push permissions
**Setup:**
1. Create Docker Registry service connection:
- **Project Settings** → **Service connections** → **New service connection**
- Choose **Docker Registry** → **Azure Container Registry**
- Select your ACR
- Name: `zuyderland-cmdb-acr-connection`
### For Deployment Pipeline (`azure-pipelines-deploy.yml`)
**Required:**
- ✅ Azure Resource Manager service connection
- ✅ App Services exist in Azure
- ✅ `production` environment created in Azure DevOps
- ✅ Images exist in ACR
**Setup:**
1. Create Azure service connection:
- **Project Settings** → **Service connections** → **New service connection**
- Choose **Azure Resource Manager**
- Select your subscription
- Name: `zuyderland-cmdb-subscription`
2. Create environment:
- **Pipelines** → **Environments** → **Create environment**
- Name: `production`
- (Optional) Add approvals for manual control
## 📝 Typical Usage Scenarios
### Scenario 1: First Time Setup
```bash
# 1. Build and push images
# Run azure-pipelines.yml → Images in ACR
# 2. Create App Services (manual or via script)
# See AZURE-NEW-SUBSCRIPTION-SETUP.md
# 3. Deploy application
# Run azure-pipelines-deploy.yml → App deployed
```
### Scenario 2: Code Update
```bash
# 1. Push code to main branch
git push origin main
# 2. Build pipeline runs automatically
# azure-pipelines.yml → New images in ACR
# 3. Deploy new version
# Run azure-pipelines-deploy.yml → App updated
```
### Scenario 3: Deploy Specific Version
```bash
# 1. Update azure-pipelines-deploy.yml
imageTag: 'v1.0.0' # Or specific build ID
# 2. Run deployment pipeline
# Deploys specific version
```
## 🔄 Combining Pipelines (Future)
Once you're comfortable with both pipelines, you can:
1. **Combine them** into one pipeline with conditional deployment
2. **Use deployment slots** for zero-downtime updates
3. **Add approval gates** for production deployments
See `azure-pipelines-slots.yml` for an advanced example with deployment slots.
## 🛠️ Troubleshooting
### Build Pipeline Fails
**Issue**: "Service connection not found"
- **Solution**: Verify service connection name matches `dockerRegistryServiceConnection` variable
- Check: **Project Settings** → **Service connections** → Verify name matches
**Issue**: "ACR not found"
- **Solution**: Check `acrName` variable matches your ACR name
- Verify: `az acr list --query "[].name"`
**Issue**: "MSI Authentication Error" / "Could not fetch access token for Managed Service Principal"
- **Solution**: Service connection must use **Service Principal** authentication (not Managed Identity)
- Recreate service connection: **Docker Registry** → **Azure Container Registry** → Use **Service Principal**
- See `docs/AZURE-SERVICE-CONNECTION-TROUBLESHOOTING.md` for details
**Issue**: "Permission denied"
- **Solution**: Verify service connection has correct permissions
- Check ACR admin is enabled: `az acr update --name <acr-name> --admin-enabled true`
### Deployment Pipeline Fails
**Issue**: "App Service not found"
- **Solution**: Verify app names match your Azure resources
- Check: `az webapp list --query "[].name"`
**Issue**: "Environment not found"
- **Solution**: Create `production` environment in Azure DevOps
- **Pipelines** → **Environments** → **Create environment**
**Issue**: "Image not found in ACR"
- **Solution**: Run build pipeline first to push images to ACR
- Verify: `az acr repository show-tags --name <acr-name> --repository cmdb-insight/backend`
### Repository Not Found
**Issue**: "No matching repositories were found" when creating pipeline
- **Solution 1**: Check repository exists in Azure DevOps → **Repos** → **Files**
- **Solution 2**: Push code to Azure DevOps: `git push azure main`
- **Solution 3**: Use **"Other Git"** option in pipeline wizard with repository URL
- **Solution 4**: Verify you're in the correct project (check project name in dropdown)
- See `docs/AZURE-SERVICE-CONNECTION-TROUBLESHOOTING.md` for details
## ✅ Checklist
### Build Pipeline Setup
- [ ] Docker Registry service connection created
- [ ] `azure-pipelines.yml` variables configured
- [ ] Pipeline created in Azure DevOps
- [ ] Test run successful
- [ ] Images visible in ACR
### Deployment Pipeline Setup
- [ ] Azure Resource Manager service connection created
- [ ] `production` environment created
- [ ] App Services exist in Azure
- [ ] `azure-pipelines-deploy.yml` variables configured
- [ ] Deployment pipeline created in Azure DevOps
- [ ] Test deployment successful
---
**Workflow**: Build first → Deploy second → Verify success!


@@ -0,0 +1,371 @@
# Azure PostgreSQL Setup for Production
Complete guide for setting up Azure Database for PostgreSQL Flexible Server for CMDB Insight production deployment.
## 🎯 Overview
**Why PostgreSQL for Production?**
- ✅ Better concurrency handling (multiple users)
- ✅ Connection pooling support
- ✅ Better performance for 20+ users
- ✅ Production-ready database solution
- ✅ Identical dev/prod stack
**Cost:** ~€20-30/month (Basic B1ms tier)
---
## 📋 Prerequisites
- Azure CLI installed and configured (`az login`)
- Resource group created: `zdl-cmdb-insight-prd-euwe-rg`
- Appropriate permissions to create Azure Database resources
---
## 🚀 Quick Setup (15 minutes)
### Step 1: Create PostgreSQL Flexible Server
```bash
# Set variables
RESOURCE_GROUP="zdl-cmdb-insight-prd-euwe-rg"
SERVER_NAME="zdl-cmdb-insight-prd-psql"
ADMIN_USER="cmdbadmin"
ADMIN_PASSWORD="$(openssl rand -base64 32)" # Generate secure password
LOCATION="westeurope"
# Create PostgreSQL Flexible Server
az postgres flexible-server create \
--resource-group $RESOURCE_GROUP \
--name $SERVER_NAME \
--location $LOCATION \
--admin-user $ADMIN_USER \
--admin-password $ADMIN_PASSWORD \
--sku-name Standard_B1ms \
--tier Burstable \
--storage-size 32 \
--version 15 \
--public-access 0.0.0.0 \
--high-availability Disabled
echo "PostgreSQL server created!"
echo "Server: $SERVER_NAME.postgres.database.azure.com"
echo "Admin User: $ADMIN_USER"
echo "Password: $ADMIN_PASSWORD"
echo ""
echo "⚠️ Save the password securely!"
```
### Step 2: Create Database
**Note:** The application uses a single database for all data. All tables (CMDB cache, classification history, and session state) are stored in the same database.
```bash
# Create main database (this is all you need)
az postgres flexible-server db create \
--resource-group $RESOURCE_GROUP \
--server-name $SERVER_NAME \
--database-name cmdb_insight
echo "✅ Database created"
```
### Step 3: Configure Firewall Rules
Allow Azure App Service to connect:
```bash
# Get App Service outbound IPs
BACKEND_IPS=$(az webapp show \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group $RESOURCE_GROUP \
--query "outboundIpAddresses" -o tsv)
# Add firewall rule for App Service
# WARNING: 0.0.0.0-255.255.255.255 opens the server to ALL IPs;
# prefer adding the specific App Service outbound IPs from $BACKEND_IPS
az postgres flexible-server firewall-rule create \
  --resource-group $RESOURCE_GROUP \
  --name $SERVER_NAME \
  --rule-name AllowAppService \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 255.255.255.255
# Or more secure: Allow Azure Services only
az postgres flexible-server firewall-rule create \
--resource-group $RESOURCE_GROUP \
--name $SERVER_NAME \
--rule-name AllowAzureServices \
--start-ip-address 0.0.0.0 \
--end-ip-address 0.0.0.0
```
**Note:** `0.0.0.0` to `0.0.0.0` allows all Azure services. For production, consider using specific App Service outbound IPs.
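To lock the firewall down to the App Service outbound IPs instead of all of Azure, you can loop over the comma-separated `$BACKEND_IPS` fetched above. A sketch; the rule-name pattern is invented here:

```shell
# Add one firewall rule per App Service outbound IP
n=0
for ip in $(echo "$BACKEND_IPS" | tr ',' ' '); do
  n=$((n + 1))
  az postgres flexible-server firewall-rule create \
    --resource-group "$RESOURCE_GROUP" \
    --name "$SERVER_NAME" \
    --rule-name "AllowAppServiceIp$n" \
    --start-ip-address "$ip" \
    --end-ip-address "$ip"
done
echo "Added $n firewall rules"
```

Note that App Service outbound IPs can change when the app is scaled or moved, so re-run this after such changes.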
### Step 4: Store Credentials in Key Vault
```bash
KEY_VAULT="zdl-cmdb-insight-prd-kv"
# Store database password
az keyvault secret set \
--vault-name $KEY_VAULT \
--name DatabasePassword \
--value $ADMIN_PASSWORD
# Store connection string (optional, can construct from components)
CONNECTION_STRING="postgresql://${ADMIN_USER}:${ADMIN_PASSWORD}@${SERVER_NAME}.postgres.database.azure.com:5432/cmdb_insight?sslmode=require"
az keyvault secret set \
--vault-name $KEY_VAULT \
--name DatabaseUrl \
--value $CONNECTION_STRING
echo "✅ Credentials stored in Key Vault"
```
### Step 5: Configure App Service App Settings
```bash
# Get Key Vault URL
KV_URL=$(az keyvault show --name $KEY_VAULT --query properties.vaultUri -o tsv)
# Configure backend app settings
az webapp config appsettings set \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group $RESOURCE_GROUP \
--settings \
DATABASE_TYPE=postgres \
DATABASE_HOST="${SERVER_NAME}.postgres.database.azure.com" \
DATABASE_PORT=5432 \
DATABASE_NAME=cmdb_insight \
DATABASE_USER=$ADMIN_USER \
DATABASE_PASSWORD="@Microsoft.KeyVault(SecretUri=${KV_URL}secrets/DatabasePassword/)" \
DATABASE_SSL=true
echo "✅ App settings configured"
```
**Alternative: Use DATABASE_URL directly**
```bash
az webapp config appsettings set \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group $RESOURCE_GROUP \
--settings \
DATABASE_TYPE=postgres \
DATABASE_URL="@Microsoft.KeyVault(SecretUri=${KV_URL}secrets/DatabaseUrl/)"
```
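Key Vault references in app settings only resolve if the Web App's managed identity is allowed to read secrets. A sketch of the grant, assuming an access-policy-based vault (for RBAC-enabled vaults you would use `az role assignment create` instead):

```shell
# Give the backend Web App a system-assigned identity and grant it secret read access
PRINCIPAL_ID=$(az webapp identity assign \
  --name zdl-cmdb-insight-prd-backend-webapp \
  --resource-group "$RESOURCE_GROUP" \
  --query principalId -o tsv)
az keyvault set-policy \
  --name "$KEY_VAULT" \
  --object-id "$PRINCIPAL_ID" \
  --secret-permissions get list
echo "✅ Key Vault access granted"
```

Without this grant, the app settings show the literal `@Microsoft.KeyVault(...)` string instead of the secret value.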
---
## 🔐 Security Best Practices
### 1. Use Key Vault for Secrets
**Do:** Store database password in Key Vault
**Don't:** Store password in app settings directly
### 2. Enable SSL/TLS
**Do:** Always use `DATABASE_SSL=true` or `?sslmode=require` in connection string
**Don't:** Connect without SSL in production
### 3. Firewall Rules
**Do:** Restrict to specific IPs or Azure services
**Don't:** Allow `0.0.0.0/0` (all IPs) unless necessary
### 4. Use Managed Identity (Advanced)
For even better security, use Managed Identity instead of passwords:
```bash
# Enable Managed Identity on PostgreSQL server
az postgres flexible-server identity assign \
--resource-group $RESOURCE_GROUP \
--name $SERVER_NAME \
--identity /subscriptions/.../resourceGroups/.../providers/Microsoft.ManagedIdentity/userAssignedIdentities/...
# Grant access
az postgres flexible-server ad-admin create \
--resource-group $RESOURCE_GROUP \
--server-name $SERVER_NAME \
--display-name "App Service Identity" \
--object-id <principal-id>
```
---
## 📊 Database Configuration
### Connection Pooling
The application uses connection pooling automatically via the `pg` library:
- **Max connections:** 20 (configured in `PostgresAdapter`)
- **Idle timeout:** 30 seconds
- **Connection timeout:** 10 seconds
### Database Sizes
For 20 users:
- **Database (cmdb_insight):** ~25-60MB total (includes CMDB cache, classification history, and session state)
- **Total storage:** 32GB (plenty of room for growth)
**Note:** All data (CMDB objects, classification history, and session state) is stored in a single database.
---
## 🔄 Migration from SQLite
If you're migrating from SQLite to PostgreSQL:
```bash
# 1. Export data from SQLite (if needed)
# The application will automatically sync from Jira, so migration may not be necessary
# 2. Set DATABASE_TYPE=postgres in app settings
# 3. Restart the app - it will create tables automatically on first run
# 4. The app will sync data from Jira Assets on first sync
```
**Note:** Since the database is a cache layer that syncs from Jira, you typically don't need to migrate data - just let it sync fresh.
---
## 🧪 Testing Connection
### Test from Local Machine
```bash
# Install psql if needed
# macOS: brew install postgresql
# Ubuntu: sudo apt-get install postgresql-client
# Connect (replace with your values)
psql "host=${SERVER_NAME}.postgres.database.azure.com port=5432 dbname=cmdb_insight user=${ADMIN_USER} password=${ADMIN_PASSWORD} sslmode=require"
```
### Test from App Service
```bash
# Check app logs
az webapp log tail \
--name zdl-cmdb-insight-prd-backend-webapp \
--resource-group $RESOURCE_GROUP
# Look for: "Creating PostgreSQL adapter" or connection errors
```
---
## 📈 Monitoring
### Check Database Status
```bash
az postgres flexible-server show \
--resource-group $RESOURCE_GROUP \
--name $SERVER_NAME \
--query "{state:state, version:version, sku:sku}"
```
### View Database Size
```sql
-- Connect to database
SELECT
pg_database.datname,
pg_size_pretty(pg_database_size(pg_database.datname)) AS size
FROM pg_database
WHERE datname = 'cmdb_insight';
```
### Monitor Connections
```sql
SELECT
count(*) as total_connections,
state,
application_name
FROM pg_stat_activity
WHERE datname = 'cmdb_insight'
GROUP BY state, application_name;
```
---
## 💰 Cost Optimization
### Current Setup (Recommended)
- **Tier:** Burstable (B1ms)
- **vCores:** 1
- **RAM:** 2GB
- **Storage:** 32GB
- **Cost:** ~€20-30/month
### If You Need More Performance
- **Upgrade to:** Standard_B2s (2 vCores, 4GB RAM) - ~€40-50/month
- **Or:** Standard_B1ms with more storage if needed
### Cost Savings Tips
1. **Use Burstable tier** - Perfect for 20 users
2. **Start with 32GB storage** - Can scale up later
3. **Disable high availability** - Not needed for small teams
4. **Use same region** - Reduces latency and costs
---
## 🛠️ Troubleshooting
### Connection Refused
**Problem:** Can't connect to database
**Solutions:**
1. Check firewall rules: `az postgres flexible-server firewall-rule list --resource-group $RESOURCE_GROUP --name $SERVER_NAME`
2. Verify SSL is enabled: `DATABASE_SSL=true`
3. Check credentials in Key Vault
### Authentication Failed
**Problem:** Wrong username/password
**Solutions:**
1. Verify admin user: `az postgres flexible-server show --resource-group $RESOURCE_GROUP --name $SERVER_NAME --query administratorLogin`
2. Reset password if needed: `az postgres flexible-server update --resource-group $RESOURCE_GROUP --name $SERVER_NAME --admin-password "new-password"`
### SSL Required Error
**Problem:** "SSL connection required"
**Solution:** Add `DATABASE_SSL=true` or `?sslmode=require` to connection string
---
## 📚 Related Documentation
- **`docs/AZURE-APP-SERVICE-DEPLOYMENT.md`** - Complete App Service deployment
- **`docs/DATABASE-RECOMMENDATION.md`** - Database comparison and recommendations
- **`docs/LOCAL-DEVELOPMENT-SETUP.md`** - Local PostgreSQL setup
---
## ✅ Checklist
- [ ] PostgreSQL Flexible Server created
- [ ] Database created (cmdb_insight)
- [ ] Firewall rules configured
- [ ] Credentials stored in Key Vault
- [ ] App Service app settings configured
- [ ] SSL enabled (`DATABASE_SSL=true`)
- [ ] Connection tested
- [ ] Monitoring configured
---
**🎉 Your PostgreSQL database is ready for production!**

View File

@@ -2,7 +2,7 @@
## 🎯 At a Glance
**Application**: Zuyderland CMDB GUI (Node.js + React web app)
**Application**: CMDB Insight (Node.js + React web app)
**Purpose**: Host in Azure App Service
**Users**: Max. 20 colleagues
**Estimated cost**: €18-39/month (Basic tier)
@@ -135,8 +135,43 @@
---
## 📞 Contact
## 📋 Deployment Steps Overview
For questions about the application itself, see:
- `PRODUCTION-DEPLOYMENT.md` - Full deployment guide
- `AZURE-DEPLOYMENT-SUMMARY.md` - Detailed Azure-specific info
### 1. Create Azure Resources
```bash
# Resource Group
az group create --name rg-cmdb-gui --location westeurope
# App Service Plan (Basic B1)
az appservice plan create --name plan-cmdb-gui --resource-group rg-cmdb-gui --sku B1 --is-linux
# Web Apps
az webapp create --name cmdb-backend --resource-group rg-cmdb-gui --plan plan-cmdb-gui
az webapp create --name cmdb-frontend --resource-group rg-cmdb-gui --plan plan-cmdb-gui
# Key Vault
az keyvault create --name kv-cmdb-gui --resource-group rg-cmdb-gui --location westeurope
```
### 2. Database Setup
- **PostgreSQL (Recommended)**: See `docs/AZURE-POSTGRESQL-SETUP.md`
- **SQLite**: No extra setup needed (database lives in the container)
### 3. Configuration
- Environment variables via App Service Configuration
- Secrets via Key Vault references
- SSL certificate via App Service (automatic for *.azurewebsites.net)
### 4. CI/CD
- Azure DevOps Pipelines: See `docs/AZURE-PIPELINES.md`
- Automatic deployment on push to the main branch
---
## 📞 Contact & Documentation
For the full deployment guides, see:
- `docs/AZURE-APP-SERVICE-DEPLOYMENT.md` - Complete step-by-step guide
- `docs/AZURE-POSTGRESQL-SETUP.md` - Database setup
- `docs/AZURE-PIPELINES.md` - CI/CD pipelines
- `docs/PRODUCTION-DEPLOYMENT.md` - Production best practices


@@ -0,0 +1,590 @@
# Green Field Deployment Guide
## Overview
This guide describes how to redeploy the application from scratch with the new normalized database structure. Since this is a green field deployment, everything can be set up clean.
---
## Step 1: Database Setup
### Option A: PostgreSQL (Recommended for production)
#### 1.1 Azure Database for PostgreSQL
```bash
# Via Azure Portal or CLI
az postgres flexible-server create \
--resource-group <resource-group> \
--name <server-name> \
--location <location> \
--admin-user <admin-user> \
--admin-password <admin-password> \
--sku-name Standard_B1ms \
--tier Burstable \
--version 14
```
#### 1.2 Create the Database
**Note:** The application uses a single database for all data. All tables (CMDB cache, classification history, and session state) are stored in the same database.
```sql
-- Connect to PostgreSQL
CREATE DATABASE cmdb_insight;
-- Create user (optional, can use admin user)
CREATE USER cmdb_user WITH PASSWORD 'secure_password';
GRANT ALL PRIVILEGES ON DATABASE cmdb_insight TO cmdb_user;
```
#### 1.3 Connection String
```env
DATABASE_TYPE=postgres
DATABASE_URL=postgresql://cmdb_user:secure_password@<server-name>.postgres.database.azure.com:5432/cmdb_insight?sslmode=require
```
### Option B: SQLite (For development/testing)
```env
DATABASE_TYPE=sqlite
# Database files are created automatically in backend/data/
```
---
## Step 2: Environment Variables
### 2.1 Base Configuration
Create a `.env` file in the project root:
```env
# Server
PORT=3001
NODE_ENV=production
FRONTEND_URL=https://your-domain.com
# Database (see Step 1)
DATABASE_TYPE=postgres
DATABASE_URL=postgresql://...
# Jira Assets
JIRA_HOST=https://jira.zuyderland.nl
JIRA_SERVICE_ACCOUNT_TOKEN=<service_account_token>
# Jira Authentication Method
JIRA_AUTH_METHOD=oauth
# OAuth Configuration (if JIRA_AUTH_METHOD=oauth)
JIRA_OAUTH_CLIENT_ID=<client_id>
JIRA_OAUTH_CLIENT_SECRET=<client_secret>
JIRA_OAUTH_CALLBACK_URL=https://your-domain.com/api/auth/callback
JIRA_OAUTH_SCOPES=READ WRITE
# Session
SESSION_SECRET=<generate_secure_random_string>
# AI (configured per user in profile settings)
# ANTHROPIC_API_KEY, OPENAI_API_KEY, TAVILY_API_KEY are set per user
```
### 2.2 Generate a Session Secret
```bash
# Generate secure random string
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
```
---
## Step 3: Schema Discovery
### 3.1 Generate the Schema
```bash
cd backend
npm run generate-schema
```
This:
- Fetches the schema from the Jira Assets API
- Generates `backend/src/generated/jira-schema.ts`
- Generates `backend/src/generated/jira-types.ts`
### 3.2 Populate the Schema in the Database
On the application's first start:
- `schemaDiscoveryService` reads `OBJECT_TYPES` from the generated schema
- Populates the `object_types` and `attributes` tables
- Happens automatically during initialization
---
## Step 4: Application Build
### 4.1 Install Dependencies
```bash
# Root
npm install
# Backend
cd backend
npm install
# Frontend
cd ../frontend
npm install
```
### 4.2 Build
```bash
# Backend
cd backend
npm run build
# Frontend
cd ../frontend
npm run build
```
---
## Step 5: Database Initialization
### 5.1 Automatic Initialization
On first start:
1. The normalized schema is created
2. Schema discovery runs
3. Tables are populated with object types and attributes
### 5.2 Manual Verification (Optional)
```sql
-- Check object types
SELECT * FROM object_types ORDER BY sync_priority;
-- Check attributes
SELECT COUNT(*) FROM attributes;
-- Check per type
SELECT object_type_name, COUNT(*) as attr_count
FROM attributes
GROUP BY object_type_name;
```
---
## Step 6: Data Sync
### 6.1 First Sync
```bash
# Via the API (after deployment)
curl -X POST https://your-domain.com/api/cache/sync \
-H "Authorization: Bearer <token>"
```
Or via the application:
- Go to Settings → Cache Management
- Click "Full Sync"
### 6.2 Check Sync Status
```bash
curl https://your-domain.com/api/cache/status \
-H "Authorization: Bearer <token>"
```
---
## Step 7: Docker Deployment
### 7.1 Build Images
```bash
# Backend
docker build -t cmdb-backend:latest -f backend/Dockerfile .
# Frontend
docker build -t cmdb-frontend:latest -f frontend/Dockerfile .
```
### 7.2 Docker Compose (Production)
```yaml
# docker-compose.prod.yml
version: '3.8'
services:
  backend:
    image: cmdb-backend:latest
    environment:
      - DATABASE_TYPE=postgres
      - DATABASE_URL=${DATABASE_URL}
      - JIRA_HOST=${JIRA_HOST}
      - JIRA_SERVICE_ACCOUNT_TOKEN=${JIRA_SERVICE_ACCOUNT_TOKEN}
      - SESSION_SECRET=${SESSION_SECRET}
    ports:
      - "3001:3001"
  frontend:
    image: cmdb-frontend:latest
    ports:
      - "80:80"
    depends_on:
      - backend
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "443:443"
    depends_on:
      - frontend
      - backend
```
### 7.3 Start
```bash
docker-compose -f docker-compose.prod.yml up -d
```
---
## Step 8: Azure App Service Deployment
### 8.1 Azure Container Registry
```bash
# Login
az acr login --name <registry-name>
# Tag images
docker tag cmdb-backend:latest <registry-name>.azurecr.io/cmdb-backend:latest
docker tag cmdb-frontend:latest <registry-name>.azurecr.io/cmdb-frontend:latest
# Push
docker push <registry-name>.azurecr.io/cmdb-backend:latest
docker push <registry-name>.azurecr.io/cmdb-frontend:latest
```
### 8.2 App Service Configuration
**Backend App Service:**
- Container: `<registry-name>.azurecr.io/cmdb-backend:latest`
- Environment variables: all `.env` variables
- Port: 3001
**Frontend App Service:**
- Container: `<registry-name>.azurecr.io/cmdb-frontend:latest`
- Environment variables: `VITE_API_URL=https://backend-app.azurewebsites.net`
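The container settings above can also be applied from the CLI. A sketch, assuming the app and resource group names used in the pipelines guide; `<registry-name>` stays a placeholder as elsewhere in this document:

```shell
REGISTRY="<registry-name>.azurecr.io"  # placeholder - fill in your ACR login server
# Point the backend App Service at the pushed image and expose its port
az webapp config container set \
  --name cmdb-backend-prod \
  --resource-group rg-cmdb-insight-prod \
  --docker-custom-image-name "$REGISTRY/cmdb-backend:latest"
az webapp config appsettings set \
  --name cmdb-backend-prod \
  --resource-group rg-cmdb-insight-prod \
  --settings WEBSITES_PORT=3001
echo "✅ Backend container configured"
```

`WEBSITES_PORT` tells App Service which container port to route traffic to, matching the backend's port 3001.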
### 8.3 Deployment via Azure DevOps
See `azure-pipelines.yml` for the CI/CD pipeline.
---
## Step 9: Verification
### 9.1 Health Checks
```bash
# Backend health
curl https://backend-app.azurewebsites.net/health
# Frontend
curl https://frontend-app.azurewebsites.net
```
### 9.2 Database Verification
```sql
-- Check object count
SELECT object_type_name, COUNT(*) as count
FROM objects
GROUP BY object_type_name;
-- Check attribute values
SELECT COUNT(*) FROM attribute_values;
-- Check relations
SELECT COUNT(*) FROM object_relations;
```
### 9.3 Functional Testing
1. **Login** - Test authentication
2. **Dashboard** - Check that data is displayed
3. **Application List** - Test the filters
4. **Application Detail** - Test edit functionality
5. **Sync** - Test a manual sync
---
## Step 10: Monitoring & Maintenance
### 10.1 Logs
```bash
# Azure App Service logs
az webapp log tail --name <app-name> --resource-group <resource-group>
# Docker logs
docker-compose logs -f backend
```
### 10.2 Database Monitoring
```sql
-- Database size (single database contains all data)
SELECT pg_database_size('cmdb_insight');
-- Table sizes
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
-- Index usage
SELECT
schemaname,
tablename,
indexname,
idx_scan as index_scans
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;
```
### 10.3 Performance Monitoring
- Monitor query performance
- Check sync duration
- Monitor database connections
- Check memory usage
---
## Troubleshooting
### Database Connection Issues
```bash
# Test connection
psql $DATABASE_URL -c "SELECT 1"
# Check firewall rules (Azure)
az postgres flexible-server firewall-rule list \
--resource-group <resource-group> \
--name <server-name>
```
### Schema Discovery Fails
```bash
# Check Jira connection
curl -H "Authorization: Bearer $JIRA_SERVICE_ACCOUNT_TOKEN" \
$JIRA_HOST/rest/insight/1.0/objectschema/list
# Regenerate schema
cd backend
npm run generate-schema
```
### Sync Issues
```bash
# Check sync status
curl https://your-domain.com/api/cache/status
# Manual sync for specific type
curl -X POST https://your-domain.com/api/cache/sync/ApplicationComponent
```
---
## Rollback Plan
If problems occur:
1. **Stop the application**
2. **Revert the code** (git)
3. **Restart the application**
Since this is a green field deployment, no data migration is needed for a rollback.
---
## Post-Deployment Checklist
- [ ] Database connection works
- [ ] Schema discovery succeeded
- [ ] First sync completed
- [ ] All object types synced
- [ ] Queries work correctly
- [ ] Filters work
- [ ] Edit functionality works
- [ ] Authentication works
- [ ] Logs are visible
- [ ] Monitoring is set up
---
## Performance Tips
1. **Database Indexes** - Created automatically
2. **Connection Pooling** - The PostgreSQL adapter uses a pool (max 20)
3. **Query Optimization** - Use `queryWithFilters()` for filtered queries
4. **Sync Frequency** - Incremental sync every 30 seconds (configurable)
---
## Security Checklist
- [ ] `SESSION_SECRET` is strong and unique
- [ ] Database credentials are stored securely
- [ ] HTTPS is enabled
- [ ] CORS is configured correctly
- [ ] The OAuth callback URL is correct
- [ ] Environment variables are not committed
---
## Extra Tips & Best Practices
### Database Performance
1. **Connection Pooling**
- The PostgreSQL adapter uses connection pooling automatically (max 20)
- Monitor pool usage in production
2. **Query Optimization**
- Use `queryWithFilters()` for filtered queries (much faster)
- Indexes are created automatically
- Monitor slow queries
3. **Sync Performance**
- Batch size: 50 objects per batch (configurable via `JIRA_API_BATCH_SIZE`)
- Incremental sync: every 30 seconds (configurable via `SYNC_INCREMENTAL_INTERVAL_MS`)
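The two sync knobs mentioned above are plain environment variables; the values below are simply the defaults stated in this guide:

```env
# Sync tuning (defaults from this guide; raise the batch size with care - rate limiting)
JIRA_API_BATCH_SIZE=50
SYNC_INCREMENTAL_INTERVAL_MS=30000
```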
### Monitoring
1. **Application Logs**
- Check for schema discovery errors
- Monitor sync duration
- Check query performance
2. **Database Metrics**
- Table size growth
- Index usage
- Connection pool usage
3. **Jira API**
- Monitor rate limiting
- Check API response times
- Monitor sync success rate
### Backup Strategy
1. **Database Backups**
- Azure PostgreSQL: automatic daily backups
- SQLite: make periodic copies of the `.db` files
2. **Configuration Backup**
- Back up the `.env` file (securely!)
- Document all environment variables
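For the SQLite case, copying live `.db` files can capture a mid-write state; `sqlite3`'s `.backup` command takes a consistent snapshot instead. A sketch, assuming the databases live in `backend/data/` as described earlier:

```shell
# Consistent SQLite backups via the .backup command
BACKUP_DIR="backups/$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
for db in backend/data/*.db; do
  [ -f "$db" ] && sqlite3 "$db" ".backup '$BACKUP_DIR/$(basename "$db")'"
done
echo "SQLite backups written to $BACKUP_DIR"
```

Run it from the project root (e.g. via cron) so the glob matches the data directory.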
### Scaling Considerations
1. **Database**
- PostgreSQL can scale (vertically)
- Consider read replicas for large datasets
2. **Application**
- Stateless design - can scale horizontally
- Session storage in the database (scalable)
3. **Cache**
- The normalized structure is efficient
- Indexes ensure good performance
### Troubleshooting Common Issues
#### Issue: Schema Discovery Fails
**Symptom:** Error at startup, no object types in the database
**Solution:**
```bash
# Check the Jira connection
curl -H "Authorization: Bearer $JIRA_SERVICE_ACCOUNT_TOKEN" \
$JIRA_HOST/rest/insight/1.0/objectschema/list
# Regenerate the schema
cd backend
npm run generate-schema
# Restart the application
```
#### Issue: Sync Is Slow
**Symptom:** A full sync takes a long time
**Solution:**
- Check Jira API response times
- Increase the batch size (but not too much - rate limiting)
- Check the database connection pool
#### Issue: Slow Queries
**Symptom:** Filters respond slowly
**Solution:**
- Check that indexes exist: `\d+ attribute_values` in PostgreSQL
- Use `queryWithFilters()` instead of filtering in JavaScript
- Check the query execution plan
#### Issue: High Memory Usage
**Symptom:** The application uses a lot of memory
**Solution:**
- Normalized storage uses less memory than JSONB
- Check whether the old cacheStore is still used anywhere
- Monitor object reconstruction (can cause N+1 queries)
### Development vs Production
**Development:**
- SQLite is fine for testing
- Local database in `backend/data/`
- No SSL needed
**Production:**
- PostgreSQL is recommended
- Azure Database for PostgreSQL
- SSL required
- Connection pooling
- Monitoring enabled
### Migration from Development to Production
1. **Schema is identical** - no migration needed
2. **Data sync** - the first sync fetches everything from Jira
3. **Environment variables** - update them for production
4. **OAuth callback URL** - update it to the production domain
---
**End of Guide**

docs/DATA-INTEGRITY-PLAN.md

@@ -0,0 +1,313 @@
# Data Integrity Plan - Preventing Broken References
## Problem
Broken references occur when `attribute_values.reference_object_id` points to objects that do not exist in the `objects` table. This can happen when:
1. Objects are deleted from Jira but the references remain
2. A sync is incomplete (not all related objects were synchronized)
3. Objects are removed from the cache but the references remain
4. Race conditions occur during sync
## Strategy: Multi-Layer Approach
### Layer 1: Prevention During Sync (Highest Priority)
#### 1.1 Referenced Object Validation During Sync
**Goal**: Ensure that all referenced objects exist before we store references
**Implementation**:
- Add validation to `extractAndStoreRelations()` and `normalizeObjectWithDb()`
- For each reference: check whether the target object exists in the cache
- If the target does not exist: fetch the object from Jira first
- If the object does not exist in Jira: do NOT store the reference (or mark it as "missing")
**Code location**: `backend/src/services/normalizedCacheStore.ts`
#### 1.2 Deep Sync for Referenced Objects
**Goal**: Automatically sync all referenced objects during an object sync
**Implementation**:
- When an object is synced, identify all objects it references
- Queue those referenced objects for sync (if they have not been synced recently)
- Run the sync in dependency order (sync referenced objects first)
**Code location**: `backend/src/services/syncEngine.ts`
#### 1.3 Transactional Reference Storage
**Goal**: Ensure that the object and its references are stored atomically
**Implementation**:
- Use database transactions for the object plus its references
- Roll back if a referenced object does not exist
- Validate all references before committing
**Code location**: `backend/src/services/normalizedCacheStore.ts`
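A minimal sketch of the all-or-nothing idea behind transactional storage, using an in-memory store as a stand-in for the real database adapter. All names here (`InMemoryStore`, `storeObjectWithReferences`) are hypothetical, not the actual `normalizedCacheStore` API:

```typescript
type Reference = { attributeId: string; targetId: string };

class InMemoryStore {
  objects = new Set<string>();
  references: Array<{ objectId: string } & Reference> = [];

  // Commit the object and its references only if every target exists;
  // otherwise write nothing at all (the "rollback").
  storeObjectWithReferences(objectId: string, refs: Reference[]): boolean {
    const missing = refs.filter((r) => !this.objects.has(r.targetId));
    if (missing.length > 0) {
      return false; // nothing was written
    }
    this.objects.add(objectId);
    for (const r of refs) this.references.push({ objectId, ...r });
    return true;
  }
}

const store = new InMemoryStore();
store.objects.add("OBJ-1");
console.log(store.storeObjectWithReferences("OBJ-2", [{ attributeId: "a1", targetId: "OBJ-1" }])); // true
console.log(store.storeObjectWithReferences("OBJ-3", [{ attributeId: "a1", targetId: "OBJ-404" }])); // false
```

In the real implementation the same check would run inside a `BEGIN`/`COMMIT` block so that a partially written object can never leave dangling references behind.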
### Layer 2: Database Constraints (Where Possible)
#### 2.1 Foreign Key Constraints for References
**Goal**: Database-level validation of references
**Current situation**:
- `attribute_values.reference_object_id` has NO foreign key constraint
- This is deliberate, because referenced objects may not be in the cache
**Option A: Soft Foreign Key (Recommended)**
- Add a CHECK constraint that validates that reference_object_id is NULL OR exists in objects
- This requires a database trigger or function
**Option B: Staging Table**
- Store new references in a staging table first
- Validate and migrate only valid references into attribute_values
- Clean up the staging table periodically
**Implementation**: Database migration + trigger/function
#### 2.2 Database Triggers for Cleanup
**Goal**: Automatic cleanup when objects are deleted
**Implementation**:
- Trigger on DELETE from objects
- Automatically delete or nullify all attribute_values with reference_object_id = deleted.id
- Log cleanup actions for auditing
**Code location**: Database migration
### Layer 3: Validation and Cleanup Procedures
#### 3.1 Periodic Validation Job
**Goal**: Detect and repair broken references automatically
**Implementation**:
- A daily/nightly job that detects all broken references
- Try to fetch the referenced objects from Jira
- If the object exists: sync it and repair the reference
- If the object does not exist: delete the reference (or mark it as "deleted")
**Code location**: `backend/src/services/dataIntegrityService.ts` (new)
**Scheduling**: Via a cron job or scheduled task
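The detection step of that job can be sketched as a plain set-membership check over the stored references. The names (`StoredRef`, `findBrokenReferences`) are illustrative, not the actual service API:

```typescript
type StoredRef = { objectId: string; attributeId: string; targetId: string };

// A reference is broken when its target is not among the known object IDs.
function findBrokenReferences(objectIds: Set<string>, refs: StoredRef[]): StoredRef[] {
  return refs.filter((r) => !objectIds.has(r.targetId));
}

// Scheduling sketch (a real service would use a cron library and would
// repair or delete each broken reference instead of only logging):
// setInterval(() => {
//   const broken = findBrokenReferences(loadObjectIds(), loadReferences());
//   if (broken.length > 0) console.warn(`found ${broken.length} broken references`);
// }, 24 * 60 * 60 * 1000);

const ids = new Set(["OBJ-1", "OBJ-2"]);
const refs: StoredRef[] = [
  { objectId: "OBJ-1", attributeId: "a1", targetId: "OBJ-2" },
  { objectId: "OBJ-2", attributeId: "a1", targetId: "OBJ-9" }, // broken
];
console.log(findBrokenReferences(ids, refs).length); // 1
```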
#### 3.2 Manual Cleanup Endpoint
**Goal**: Admin tool for repairing broken references
**Implementation**:
- POST `/api/data-validation/repair-broken-references`
- Options:
- `mode: 'delete'` - Delete broken references
- `mode: 'fetch'` - Try to fetch the objects from Jira
- `mode: 'dry-run'` - Show what would happen without making changes
**Code location**: `backend/src/routes/dataValidation.ts`
#### 3.3 Reference Validation During Object Retrieval
**Goal**: Validate and repair references when objects are retrieved
**Implementation**:
- In `reconstructObject()`: check all references
- If a reference is broken: try to fetch the object from Jira
- If the fetch succeeds: update the cache and the reference
- If the fetch fails: mark it as "missing" in the response
**Code location**: `backend/src/services/normalizedCacheStore.ts`
### Layer 4: Sync Improvements
#### 4.1 Dependency-Aware Sync Order
**Goal**: Sync objects in the right order (dependencies first)
**Implementation**:
- Analyze the schema to build a dependency graph
- Sync object types in dependency order
- For example: sync "Application Function" before "Application Component"
**Code location**: `backend/src/services/syncEngine.ts`
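The dependency order above is a topological sort of the object-type graph. A sketch (type names are illustrative, and real schemas may contain reference cycles, which would need to be broken or handled separately):

```typescript
// Depth-first topological sort: dependencies are emitted before dependents.
function syncOrder(deps: Map<string, string[]>): string[] {
  const order: string[] = [];
  const state = new Map<string, "visiting" | "done">();
  const visit = (type: string) => {
    if (state.get(type) === "done") return;
    if (state.get(type) === "visiting") throw new Error(`cycle at ${type}`);
    state.set(type, "visiting");
    for (const dep of deps.get(type) ?? []) visit(dep); // dependencies first
    state.set(type, "done");
    order.push(type);
  };
  for (const type of deps.keys()) visit(type);
  return order;
}

const deps = new Map<string, string[]>([
  ["Application Component", ["Application Function"]],
  ["Application Function", []],
]);
console.log(syncOrder(deps)); // ["Application Function", "Application Component"]
```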
#### 4.2 Batch Sync with Reference Resolution
**Goal**: Sync objects in batches and resolve all references
**Implementation**:
- Collect all referenced object IDs during the batch sync
- Fetch all referenced objects in parallel
- Validate all references before committing the batch
**Code location**: `backend/src/services/syncEngine.ts`
#### 4.3 Incremental Sync with Deletion Detection
**Goal**: Detect deleted objects during an incremental sync
**Implementation**:
- Compare cached objects with the objects in Jira
- Identify objects that exist in the cache but not in Jira
- Delete those objects (the cascade removes their references)
**Code location**: `backend/src/services/syncEngine.ts`
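The comparison step reduces to a set difference: any ID present in the cache but absent from the latest Jira listing is a deletion candidate. A sketch (the function name is illustrative):

```typescript
// Objects cached locally but no longer reported by Jira are deletion candidates.
function detectDeletions(cachedIds: string[], jiraIds: string[]): string[] {
  const live = new Set(jiraIds);
  return cachedIds.filter((id) => !live.has(id));
}

console.log(detectDeletions(["OBJ-1", "OBJ-2", "OBJ-3"], ["OBJ-1", "OBJ-3"])); // ["OBJ-2"]
```

Note that this only works when the Jira listing is complete; a truncated or failed listing must not be treated as "everything was deleted".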
### Layer 5: Monitoring and Alerting
#### 5.1 Metrics and Dashboard
**Goal**: Monitor data integrity in real time
**Implementation**:
- Track the broken references count over time
- Alert when the count exceeds a threshold
- Dashboard chart: broken references trend
**Code location**: `backend/src/routes/dataValidation.ts` + dashboard
#### 5.2 Sync Health Checks
**Goal**: Validate data integrity after every sync
**Implementation**:
- After a sync: check for broken references
- Log a warning if broken references are found
- Optional: auto-repair during sync
**Code location**: `backend/src/services/syncEngine.ts`
#### 5.3 Audit Logging
**Goal**: Track all data-integrity actions
**Implementation**:
- Log when broken references are repaired
- Log when objects are deleted
- Log when references are added or removed
**Code location**: Logger service
## Implementation Priorities
### Phase 1: Quick Wins (Week 1)
1. ✅ Manual cleanup endpoint (`/api/data-validation/repair-broken-references`)
2. ✅ Periodic validation job (daily)
3. ✅ Metrics in the dashboard
### Phase 2: Prevention (Weeks 2-3)
4. Referenced object validation during sync
5. Deep sync for referenced objects
6. Transactional reference storage
### Phase 3: Database Level (Week 4)
7. Database triggers for cleanup
8. Soft foreign key constraints (where possible)
### Phase 4: Advanced (Week 5+)
9. Dependency-aware sync order
10. Incremental sync with deletion detection
11. Auto-repair during object retrieval
## Technical Details
### Referenced Object Validation Pattern
```typescript
async function validateAndFetchReference(
  referenceObjectId: string,
  targetType?: string
): Promise<{ exists: boolean; object?: CMDBObject }> {
  // 1. Check the cache first
  const cached = await cacheStore.getObjectById(referenceObjectId);
  if (cached) return { exists: true, object: cached };

  // 2. Try to fetch from Jira
  try {
    const jiraObj = await jiraAssetsClient.getObject(referenceObjectId);
    if (jiraObj) {
      // Parse and cache
      const parsed = jiraAssetsClient.parseObject(jiraObj);
      if (parsed) {
        await cacheStore.upsertObject(parsed._objectType, parsed);
        return { exists: true, object: parsed };
      }
    }
  } catch (error) {
    if (error instanceof JiraObjectNotFoundError) {
      return { exists: false };
    }
    throw error;
  }
  return { exists: false };
}
```
### Cleanup Procedure
```typescript
async function repairBrokenReferences(mode: 'delete' | 'fetch' | 'dry-run'): Promise<{
  total: number;
  repaired: number;
  deleted: number;
  failed: number;
}> {
  const brokenRefs = await cacheStore.getBrokenReferences(10000, 0);
  let repaired = 0;
  let deleted = 0;
  let failed = 0;
  for (const ref of brokenRefs) {
    if (mode === 'fetch') {
      try {
        // Try to fetch the missing object from Jira
        const result = await validateAndFetchReference(ref.reference_object_id);
        if (result.exists) {
          repaired++;
        } else {
          // Object no longer exists in Jira, so delete the reference
          await cacheStore.deleteAttributeValue(ref.object_id, ref.attribute_id);
          deleted++;
        }
      } catch {
        failed++; // Jira error (e.g. rate limit): count it and continue
      }
    } else if (mode === 'delete') {
      await cacheStore.deleteAttributeValue(ref.object_id, ref.attribute_id);
      deleted++;
    }
    // dry-run: just count
  }
  return { total: brokenRefs.length, repaired, deleted, failed };
}
```
## Success Criteria
- ✅ Broken references count < 1% of the total number of references
- ✅ No new broken references during a normal sync
- ✅ Auto-repair within 24 hours of detection
- ✅ Monitoring dashboard shows real-time integrity status
- ✅ Sync performance remains acceptable (< 10% overhead)
## Monitoring
### Key Metrics
- `broken_references_count` - Total number of broken references
- `broken_references_rate` - Percentage of the total number of references
- `reference_repair_success_rate` - Percentage of successfully repaired references
- `sync_integrity_check_duration` - Duration of the integrity check
- `objects_with_broken_refs` - Number of objects with broken references
### Alerts
- 🔴 Critical: > 5% broken references
- 🟡 Warning: > 1% broken references
- 🟢 Info: Broken references count changed
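The alert thresholds above can be computed from the two count metrics. A sketch (the function name is illustrative, not an existing API):

```typescript
// Map the broken-reference rate to an alert level: >5% critical, >1% warning.
function alertLevel(brokenCount: number, totalRefs: number): "critical" | "warning" | "ok" {
  if (totalRefs === 0) return "ok"; // avoid division by zero on an empty cache
  const rate = brokenCount / totalRefs;
  if (rate > 0.05) return "critical";
  if (rate > 0.01) return "warning";
  return "ok";
}

console.log(alertLevel(6, 100));  // "critical"
console.log(alertLevel(2, 100));  // "warning"
console.log(alertLevel(1, 1000)); // "ok"
```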
## Rollout Plan
1. **Week 1**: Implement cleanup tools and monitoring
2. **Week 2**: Repair the existing broken references
3. **Week 3**: Implement the preventive measures
4. **Week 4**: Test and monitor
5. **Week 5**: Database-level constraints (optional)
## Risks and Mitigation
| Risk | Impact | Mitigation |
|------|--------|------------|
| Sync performance degradation | High | Batch processing, parallel fetching, caching |
| Database locks during cleanup | Medium | Off-peak scheduling, batch size limits |
| False positives in validation | Low | Dry-run mode, manual review |
| Jira API rate limits | Medium | Rate limiting, exponential backoff |
## Documentation Updates
- [ ] Update the sync engine documentation
- [ ] Update the data validation dashboard documentation
- [ ] Create a runbook for broken references cleanup
- [ ] Update the API documentation for the new endpoints


@@ -1,142 +0,0 @@
# Database Access Guide
This guide shows you how to easily access and view records in the PostgreSQL database.
## Quick Access
### Option 1: Using the Script (Easiest)
```bash
# Connect using psql
./scripts/open-database.sh psql
# Or via Docker
./scripts/open-database.sh docker
# Or get connection string for GUI tools
./scripts/open-database.sh url
```
### Option 2: Direct psql Command
```bash
# If PostgreSQL is running locally
PGPASSWORD=cmdb-dev psql -h localhost -p 5432 -U cmdb -d cmdb
```
### Option 3: Via Docker
```bash
# Connect to PostgreSQL container
docker exec -it $(docker ps | grep postgres | awk '{print $1}') psql -U cmdb -d cmdb
```
## Connection Details
From `docker-compose.yml`:
- **Host**: localhost (or `postgres` if connecting from Docker network)
- **Port**: 5432
- **Database**: cmdb
- **User**: cmdb
- **Password**: cmdb-dev
**Connection String:**
```
postgresql://cmdb:cmdb-dev@localhost:5432/cmdb
```
## GUI Tools
### pgAdmin (Free, Web-based)
1. Download from: https://www.pgadmin.org/download/
2. Add new server with connection details above
3. Browse tables and run queries
### DBeaver (Free, Cross-platform)
1. Download from: https://dbeaver.io/download/
2. Create new PostgreSQL connection
3. Use connection string or individual fields
### TablePlus (macOS, Paid but has free tier)
1. Download from: https://tableplus.com/
2. Create new PostgreSQL connection
3. Enter connection details
### DataGrip (JetBrains, Paid)
1. Part of JetBrains IDEs or standalone
2. Create new PostgreSQL data source
3. Use connection string
## Useful SQL Commands
Once connected, try these commands:
```sql
-- List all tables
\dt
-- Describe a table structure
\d users
\d classifications
\d cache_objects
-- View all users
SELECT * FROM users;
-- View classifications
SELECT * FROM classifications ORDER BY created_at DESC LIMIT 10;
-- View cached objects
SELECT object_key, object_type, updated_at FROM cache_objects ORDER BY updated_at DESC LIMIT 20;
-- Count records per table
SELECT
'users' as table_name, COUNT(*) as count FROM users
UNION ALL
SELECT
'classifications', COUNT(*) FROM classifications
UNION ALL
SELECT
'cache_objects', COUNT(*) FROM cache_objects;
-- View user settings
SELECT u.username, u.email, us.ai_provider, us.ai_enabled
FROM users u
LEFT JOIN user_settings us ON u.id = us.user_id;
```
## Environment Variables
If you're using environment variables instead of Docker:
```bash
# Check your .env file for:
DATABASE_URL=postgresql://cmdb:cmdb-dev@localhost:5432/cmdb
# or
DATABASE_TYPE=postgres
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_NAME=cmdb
DATABASE_USER=cmdb
DATABASE_PASSWORD=cmdb-dev
```
## Troubleshooting
### Database not running
```bash
# Start PostgreSQL container
docker-compose up -d postgres
# Check if it's running
docker ps | grep postgres
```
### Connection refused
- Make sure PostgreSQL container is running
- Check if port 5432 is already in use
- Verify connection details match docker-compose.yml
### Permission denied
- Verify username and password match docker-compose.yml
- Check if user has access to the database


@@ -3,8 +3,9 @@
## Current Situation
The application currently uses **SQLite** via `better-sqlite3`:
- **cmdb-cache.db**: ~20MB - CMDB object cache
- **classifications.db**: Classification history
- **cmdb-cache.db**: ~20MB - All data (CMDB object cache, classification history, session state)
**Note:** All data (cache, classifications, session state) is stored in a single database file.
## Recommendation: PostgreSQL


@@ -0,0 +1,310 @@
# Complete Database Reset Guide
This guide explains how to force a complete reset of the database so that the cache/data is built completely from scratch.
## Overview
There are three main approaches to resetting the database:
1. **API-based reset** (Recommended) - Clears cache via API and triggers rebuild
2. **Database volume reset** - Completely removes PostgreSQL volume and recreates
3. **Manual SQL reset** - Direct database commands to clear all data
## Option 1: API-Based Reset (Recommended)
This is the cleanest approach as it uses the application's built-in endpoints.
### Using the Script
```bash
# Run the automated reset script
./scripts/reset-and-rebuild.sh
```
The script will:
1. Clear all cached data via `DELETE /api/cache/clear`
2. Trigger a full sync via `POST /api/cache/sync`
3. Monitor the process
### Manual API Calls
If you prefer to do it manually:
```bash
# Set your backend URL
BACKEND_URL="http://localhost:3001"
API_URL="$BACKEND_URL/api"
# Step 1: Clear all cache
curl -X DELETE "$API_URL/cache/clear" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN"
# Step 2: Trigger full sync
curl -X POST "$API_URL/cache/sync" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN"
```
**Note:** You need to be authenticated. Either:
- Use a valid JWT token from your session
- Or authenticate via the UI first and copy the token from browser dev tools
### Via the UI
1. Navigate to **Settings → Cache Management**
2. Click **"Clear All Cache"** button
3. Click **"Full Sync"** button
4. Monitor progress in the same page
## Option 2: Database Volume Reset (Nuclear Option)
This completely removes the PostgreSQL database and recreates it from scratch.
### Using the Reset Script
```bash
# Run the PostgreSQL reset script
./scripts/reset-postgres.sh
```
This will:
1. Stop containers
2. Remove the PostgreSQL volume (deletes ALL data)
3. Restart PostgreSQL with a fresh database
4. Wait for PostgreSQL to be ready
**⚠️ Warning:** This deletes ALL data including:
- All cached CMDB objects
- All relations
- All attribute values
- Schema discovery cache
- Sync metadata
### Manual Volume Reset
```bash
# Step 1: Stop containers
docker-compose down
# Step 2: Remove PostgreSQL volume
docker volume ls | grep postgres
docker volume rm cmdb-insight_postgres_data
# Step 3: Start PostgreSQL again
docker-compose up -d postgres
# Step 4: Wait for PostgreSQL to be ready
docker-compose exec postgres pg_isready -U cmdb
```
### After Volume Reset
After resetting the volume, you need to:
1. **Start the backend** - The schema will be created automatically:
```bash
docker-compose up -d backend
```
2. **Check logs** to verify schema creation:
```bash
docker-compose logs -f backend
```
You should see:
- `NormalizedCacheStore: Database schema initialized`
- `SchemaDiscovery: Schema discovery complete`
3. **Trigger a full sync** to rebuild data:
- Via UI: Settings → Cache Management → Full Sync
- Via API: `POST /api/cache/sync`
- Or use the reset script: `./scripts/reset-and-rebuild.sh`
## Option 3: Manual SQL Reset
If you want to clear data but keep the database structure:
### For PostgreSQL
```bash
# Connect to database
docker-compose exec postgres psql -U cmdb -d cmdb_insight
# Clear all data (keeps schema)
TRUNCATE TABLE attribute_values CASCADE;
TRUNCATE TABLE object_relations CASCADE;
TRUNCATE TABLE objects CASCADE;
TRUNCATE TABLE sync_metadata CASCADE;
TRUNCATE TABLE schema_cache CASCADE;
TRUNCATE TABLE schema_mappings CASCADE;
# Exit
\q
```
Then trigger a full sync:
```bash
curl -X POST "http://localhost:3001/api/cache/sync" \
-H "Authorization: Bearer YOUR_TOKEN"
```
### For SQLite (if using SQLite)
```bash
# Connect to backend container
docker-compose exec backend sh
# Clear all data
sqlite3 /app/data/cmdb-cache.db <<EOF
DELETE FROM attribute_values;
DELETE FROM object_relations;
DELETE FROM objects;
DELETE FROM sync_metadata;
DELETE FROM schema_cache;
DELETE FROM schema_mappings;
VACUUM;
EOF
```
## What Gets Reset?
### Cleared by `DELETE /api/cache/clear`:
- ✅ All cached CMDB objects (`objects` table)
- ✅ All attribute values (`attribute_values` table)
- ✅ All relations (`object_relations` table)
- ❌ Schema cache (kept for faster schema discovery)
- ❌ Schema mappings (kept for configuration)
- ❌ User data and classifications (stored in same database, but not cleared by cache clear)
### Cleared by Volume Reset:
- ✅ Everything above
- ✅ Schema cache
- ✅ Schema mappings
- ✅ All database structure (recreated on next start)
## Verification
After reset, verify the database is empty and ready for rebuild:
```bash
# Check object counts (should be 0)
docker-compose exec postgres psql -U cmdb -d cmdb_insight -c "
SELECT
(SELECT COUNT(*) FROM objects) as objects,
(SELECT COUNT(*) FROM attribute_values) as attributes,
(SELECT COUNT(*) FROM object_relations) as relations;
"
# Check sync status
curl "$BACKEND_URL/api/cache/status" \
-H "Authorization: Bearer YOUR_TOKEN"
```
## Troubleshooting
### Authentication Issues
If you get authentication errors:
1. **Get a token from the UI:**
- Log in via the UI
- Open browser dev tools → Network tab
- Find any API request → Copy the `Authorization` header value
- Use it in your curl commands
2. **Or use the UI directly:**
- Navigate to Settings → Cache Management
- Use the buttons there (no token needed)
### Backend Not Running
```bash
# Check if backend is running
docker-compose ps backend
# Start backend if needed
docker-compose up -d backend
# Check logs
docker-compose logs -f backend
```
### Sync Not Starting
Check that Jira credentials are configured:
```bash
# Check environment variables
docker-compose exec backend env | grep JIRA
# Required:
# - JIRA_HOST
# - JIRA_SERVICE_ACCOUNT_TOKEN (for sync operations)
```
### Database Connection Issues
```bash
# Test PostgreSQL connection
docker-compose exec postgres pg_isready -U cmdb
# Check database exists
docker-compose exec postgres psql -U cmdb -l
# Check connection from backend
docker-compose exec backend node -e "
const { createDatabaseAdapter } = require('./src/services/database/factory.js');
const db = createDatabaseAdapter();
db.query('SELECT 1').then(() => console.log('OK')).catch(e => console.error(e));
"
```
## Complete Reset Workflow
For a complete "green field" reset:
```bash
# 1. Stop everything
docker-compose down
# 2. Remove PostgreSQL volume (nuclear option)
docker volume rm cmdb-insight_postgres_data
# 3. Start PostgreSQL
docker-compose up -d postgres
# 4. Wait for PostgreSQL
sleep 5
docker-compose exec postgres pg_isready -U cmdb
# 5. Start backend (creates schema automatically)
docker-compose up -d backend
# 6. Wait for backend to initialize
sleep 10
docker-compose logs backend | grep "initialization complete"
# 7. Clear cache and trigger sync (via script)
./scripts/reset-and-rebuild.sh
# OR manually via API
curl -X DELETE "http://localhost:3001/api/cache/clear" \
-H "Authorization: Bearer YOUR_TOKEN"
curl -X POST "http://localhost:3001/api/cache/sync" \
-H "Authorization: Bearer YOUR_TOKEN"
```
## Quick Reference
| Method | Speed | Data Loss | Schema Loss | Recommended For |
|--------|-------|-----------|-------------|-----------------|
| API Clear + Sync | Fast | Cache only | No | Regular resets |
| Volume Reset | Medium | Everything | Yes | Complete rebuild |
| SQL TRUNCATE | Fast | Cache only | No | Quick clear |
## Related Documentation
- [Local PostgreSQL Reset](./LOCAL-POSTGRES-RESET.md) - Detailed PostgreSQL reset guide
- [Local Development Setup](./LOCAL-DEVELOPMENT-SETUP.md) - Initial setup guide
- [Database Schema](./DATABASE-DRIVEN-SCHEMA-IMPLEMENTATION-PLAN.md) - Schema documentation

Some files were not shown because too many files have changed in this diff.