Compare commits


34 Commits

Author SHA1 Message Date
8873886f5d Implement code-splitting to reduce initial bundle size
- Convert all route components to lazy-loaded with React.lazy()
- Add Suspense boundaries with loading fallback components
- Configure manual chunks in Vite for better code organization:
  - Separate React vendor libraries (react-vendor)
  - Group components by feature (reports, settings, admin, apps, auth)
  - Isolate other node_modules (vendor)
- Reduce initial bundle from ~1,080 kB to under 500 kB
- Components now load on-demand when routes are accessed
- Improves initial page load performance and caching
2026-01-22 22:58:19 +01:00
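The chunking strategy this commit describes can be sketched as a Vite config. The chunk names come from the commit message; the matching rules and source paths are assumptions, not the actual implementation:

```typescript
// vite.config.ts — sketch of the manualChunks layout described above.
// Chunk names follow the commit message; the module lists are illustrative.
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id: string) {
          if (id.includes("node_modules")) {
            // Keep React itself in a separate long-lived vendor chunk.
            if (/node_modules\/(react|react-dom|react-router)/.test(id)) {
              return "react-vendor";
            }
            return "vendor"; // isolate remaining node_modules
          }
          // Group components by feature so each area loads on demand.
          for (const feature of ["reports", "settings", "admin", "apps", "auth"]) {
            if (id.includes(`/src/${feature}/`)) return feature;
          }
        },
      },
    },
  },
});
```

Route components would additionally be wrapped with `React.lazy()` behind `Suspense` boundaries so a chunk is only fetched when its route is visited.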
57e4adc69c Remove JIRA_SCHEMA_ID from entire application
- Remove JIRA_SCHEMA_ID from all documentation, config files, and scripts
- Update generate-schema.ts to always auto-discover schemas dynamically
- Runtime application already discovers schemas via /objectschema/list API
- Build script now automatically selects schema with most objects
- Remove JIRA_SCHEMA_ID from docker-compose.yml, Azure setup scripts, and all docs
- Application is now fully schema-agnostic and discovers schemas automatically
2026-01-22 22:56:29 +01:00
f4399a8e4e Consolidate documentation and update backend services
- Reorganize docs into 'Core deployment guides' and 'Setup and configuration' subdirectories
- Consolidate redundant documentation files (ACR, pipelines, deployment guides)
- Add documentation consolidation plan
- Update backend database factory and logger services
- Update migration script and docker-compose configurations
- Add PostgreSQL setup script
2026-01-22 22:45:54 +01:00
18aec4ad80 Fix classifications database SSL for Azure 2026-01-22 01:52:45 +01:00
71480cedd6 Fix PostgreSQL SSL configuration for Azure
- Explicitly configure SSL in PostgresAdapter Pool
- Always require SSL for Azure PostgreSQL connections
- Add logging for database connection details
2026-01-22 01:51:20 +01:00
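The SSL decision this commit describes can be sketched as a small helper. The option shape mirrors what node-postgres accepts for `Pool({ ssl })`; treating `*.postgres.database.azure.com` hosts as "Azure" and using `rejectUnauthorized: false` are assumptions, not the commit's actual code:

```typescript
// Sketch of the explicit SSL configuration described above: Azure Database
// for PostgreSQL requires TLS, so force it for Azure hosts. The host
// heuristic and option shape are illustrative assumptions.
export function sslConfig(host: string): false | { rejectUnauthorized: boolean } {
  if (host.endsWith(".postgres.database.azure.com")) {
    // Always require SSL for Azure PostgreSQL connections.
    return { rejectUnauthorized: false };
  }
  return false; // local/dev connections stay unencrypted unless configured
}
```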
b8d7e7a229 Fix logger for Azure App Service and update deployment docs
- Fix logger to handle Azure App Service write restrictions
- Skip file logging in Azure App Service (console logs captured automatically)
- Add deployment scripts for App Service setup
- Update documentation with correct resource names
- Add Key Vault access request documentation
- Add alternative authentication methods for ACR and Key Vault
2026-01-22 00:51:53 +01:00
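The logger guard this commit describes can be sketched as follows. Azure App Service sets `WEBSITE_SITE_NAME` in its environment; using its presence to mean "running in App Service" is an assumption about how the detection works, and the function name is illustrative:

```typescript
// Sketch of the write-restriction guard described above. In App Service the
// filesystem may be read-only and console output is captured automatically,
// so file transports are skipped there.
export function fileLoggingEnabled(env: Record<string, string | undefined>): boolean {
  return env.WEBSITE_SITE_NAME === undefined;
}
```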
ffce6e8db3 Fix TypeScript error: Type 'unknown' not assignable to ReactNode in ObjectDetailModal
- Added explicit React import
- Converted complex conditional rendering to IIFE for proper type inference
- Fixed type safety issues in metadata conditional rendering
- Resolves build failure in Azure DevOps pipeline
2026-01-21 23:40:07 +01:00
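The IIFE pattern this commit applies can be illustrated without JSX: wrapping a multi-branch conditional in an immediately-invoked function gives TypeScript one expression whose return type it can narrow, instead of a union inferred across scattered ternaries. The function and values below are illustrative, not the modal's actual code:

```typescript
// Sketch of the IIFE trick described above: each branch returns a string,
// so the whole expression is typed string rather than unknown.
export function renderMetadataValue(value: unknown): string {
  return (() => {
    if (value instanceof Date) return value.toISOString();
    if (typeof value === "string") return value;
    return "n/a"; // fallback keeps the expression assignable to ReactNode
  })();
}
```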
7cf7e757b9 Improve TypeScript type safety in DataValidationDashboard and ObjectDetailModal
- Use proper type references instead of typeof in DataValidationDashboard
- Improve date value type handling in ObjectDetailModal with explicit type checks
2026-01-21 23:25:05 +01:00
73660cdf66 Fix TypeScript compilation errors in frontend components
- Remove unused variables in ApplicationInfo, ArchitectureDebugPage
- Fix type errors in Dashboard, GovernanceAnalysis, GovernanceModelHelper (PageHeader description prop)
- Add null checks and explicit types in DataValidationDashboard
- Fix ObjectDetailModal type errors for _jiraCreatedAt and Date constructor
- Remove unused imports and variables in SchemaConfigurationSettings
- Update PageHeader to accept string | ReactNode for description prop
2026-01-21 23:19:06 +01:00
c4fa18ed55 Update ACR name to zdlasacr in pipeline configuration 2026-01-21 23:06:17 +01:00
42a04e6cb3 Add Azure deployment automation and documentation
- Add separate deployment pipeline (azure-pipelines-deploy.yml) for App Service deployment
- Add advanced pipeline with deployment slots (azure-pipelines-slots.yml)
- Restore azure-pipelines.yml to build-only (no deployment)
- Add comprehensive Azure setup documentation:
  - AZURE-NEW-SUBSCRIPTION-SETUP.md: Complete step-by-step Azure resource setup
  - AZURE-RESOURCES-OVERVIEW.md: Quick reference for all Azure resources
  - AZURE-ACR-SHARED-SETUP.md: Guide for shared Container Registry
  - AZURE-ACR-NAMING-RECOMMENDATION.md: Naming recommendations for Zuyderland
  - AZURE-PIPELINE-DEPLOYMENT.md: Automated deployment setup guide
  - AZURE-PIPELINE-QUICK-REFERENCE.md: Quick reference for pipeline variables
  - AZURE-PIPELINES-USAGE.md: Guide for using build and deployment pipelines
- Add setup script (scripts/setup-azure-resources.sh) for automated resource creation
- Support for shared ACR across multiple applications
2026-01-21 23:03:48 +01:00
52f851c1f3 Fix TypeError: totalFTE.toFixed is not a function
- Add Number() conversion in overallKPIs calculation to ensure all values are numbers
- Add safeguards in render to handle null/undefined/NaN values before calling .toFixed()
- Prevents crashes when data contains non-numeric values
2026-01-21 13:47:19 +01:00
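The safeguard this commit describes can be sketched as a small formatter: coerce to a number first, and fall back to 0 when the result is not finite, so `.toFixed()` is never called on a non-number. The function name and the one-decimal format are assumptions:

```typescript
// Sketch of the Number() conversion plus render safeguard described above.
export function formatFTE(value: unknown): string {
  const n = Number(value);
  // null/undefined/strings that are not numeric all collapse to 0 here,
  // preventing "totalFTE.toFixed is not a function" crashes.
  return (Number.isFinite(n) ? n : 0).toFixed(1);
}
```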
3c11402e6b Load team dashboard data from database cache instead of API
- Replace API-based getTeamDashboardData with database-backed implementation
- Load all ApplicationComponents from normalized cache store
- Reuse existing grouping and KPI calculation logic
- Significantly faster as it avoids hundreds of API calls
- Falls back to API if database query fails
2026-01-21 13:36:00 +01:00
e1ad0d9aa7 Fix missing key prop in BIASyncDashboard list items 2026-01-21 11:42:15 +01:00
cb418ed051 Fix missing PageHeader import in LifecyclePipeline component 2026-01-21 11:07:29 +01:00
9ad4bd9a73 Fix remaining TypeScript 'Untyped function calls' errors
- Add DatabaseAdapter type imports where needed
- Properly type database adapter calls with type assertions
- Fix type mismatches in schemaMappingService
- Fix ensureInitialized calls on DatabaseAdapter
2026-01-21 09:39:58 +01:00
6bb5907bbd Fix TypeScript compilation errors in backend
- Fix query parameter type issues (string | string[] to string) in controllers
- Add public getDatabaseAdapter() method to SchemaRepository for db access
- Fix SchemaSyncService export and import issues
- Fix referenceObject vs referenceObjectType property name
- Add missing jiraAssetsClient import in normalizedCacheStore
- Fix duplicate properties in object literals
- Add type annotations for implicit any types
- Fix type predicate issues with generics
- Fix method calls (getEnabledObjectTypes, syncAllSchemas)
- Fix type mismatches (ObjectTypeRecord vs expected types)
- Fix Buffer type issue in biaMatchingService
- Export SchemaSyncService class for ServiceFactory
2026-01-21 09:29:05 +01:00
c331540369 Fix TypeScript type errors in schemaConfigurationService
- Type all db variables as DatabaseAdapter to enable generic method calls
- Add explicit type annotations for row parameters in map callbacks
- Cast ensureInitialized calls to any (not part of DatabaseAdapter interface)

Resolves all 9 TypeScript linter errors in the file
2026-01-21 03:31:55 +01:00
cdee0e8819 UI styling improvements: dashboard headers and navigation
- Restore blue PageHeader on Dashboard (/app-components)
- Update homepage (/) with subtle header design without blue bar
- Add uniform PageHeader styling to application edit page
- Fix Rapporten link on homepage to point to /reports overview
- Improve header descriptions spacing for better readability
2026-01-21 03:24:56 +01:00
e276e77fbc Migrate from xlsx to exceljs to fix security vulnerabilities
- Replace xlsx package (v0.18.5) with exceljs (v4.4.0)
- Remove @types/xlsx dependency (exceljs has built-in TypeScript types)
- Update biaMatchingService.ts to use ExcelJS API:
  - Replace XLSX.read() with workbook.xlsx.load()
  - Replace XLSX.utils.sheet_to_json() with eachRow() iteration
  - Handle 1-based column indexing correctly
- Make loadBIAData() and findBIAMatch() async functions
- Update all callers in applications.ts and claude.ts to use await
- Fix npm audit: 0 vulnerabilities (was 1 high severity)

This migration eliminates the Prototype Pollution and ReDoS vulnerabilities
in the xlsx package while maintaining full functionality.
2026-01-15 09:59:43 +01:00
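The 1-based indexing mentioned above is the main trap when porting from `sheet_to_json`: ExcelJS's `row.getCell(1)` is column A, while the old code worked with 0-based arrays. A minimal sketch of the off-by-one mapping (the `getCell` signature is mimicked; header names are illustrative):

```typescript
// Convert a 1-based cell lookup (as in ExcelJS row.getCell) into a keyed
// record, the shape sheet_to_json used to produce.
export function rowToRecord(
  headers: string[],
  getCell: (col: number) => unknown, // 1-based, like ExcelJS row.getCell
): Record<string, unknown> {
  const record: Record<string, unknown> = {};
  headers.forEach((header, i) => {
    record[header] = getCell(i + 1); // header 0 maps to column 1 (A)
  });
  return record;
}
```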
c60fbe8821 Fix frontend TypeScript compilation errors
- Fix process.env.NODE_ENV in ApplicationInfo.tsx (use import.meta.env.DEV)
- Remove unused variables and imports in Profile.tsx, ProfileSettings.tsx, RoleManagement.tsx, UserManagement.tsx, UserSettings.tsx
- Fix FormData type issue in UserSettings.tsx (use React.FormEvent<HTMLFormElement>)
2026-01-15 03:30:11 +01:00
ff46da842f Fix TypeScript compilation errors
- Fix conflicting Request interface declarations (auth.ts vs authorization.ts)
- Fix email field type issue in auth.ts (handle undefined)
- Fix req.params type issues (string | string[] to string) in auth.ts, roles.ts, users.ts
- Fix apiKey undefined issue in claude.ts (use tavilyApiKey)
- Fix duplicate isConfigured identifier in emailService.ts (rename to _isConfigured)
- Fix confluencePage property type issue in jiraAssetsClient.ts (add type assertion)
2026-01-15 03:26:20 +01:00
1fa424efb9 Add authentication, user management, and database migration features
- Implement OAuth 2.0 and PAT authentication methods
- Add user management, roles, and profile functionality
- Add database migrations and admin user scripts
- Update services for authentication and user settings
- Add protected routes and permission hooks
- Update documentation for authentication and database access
2026-01-15 03:20:50 +01:00
f3637b85e1 Add comprehensive deployment advice and App Service deployment guide
- Add DEPLOYMENT-ADVICE.md with detailed analysis and recommendations
- Add AZURE-APP-SERVICE-DEPLOYMENT.md with step-by-step instructions
- Include NEN 7510 compliance considerations
- Include VPN/private network options for future
- Include monitoring and Elastic stack integration guidance
2026-01-14 18:09:42 +01:00
408c9f4727 Add quick deployment guide and update docker-compose ACR name
- Add QUICK-DEPLOYMENT-GUIDE.md with step-by-step instructions
- Update docker-compose.prod.acr.yml to use zdlas ACR name
- Include App Service and VM deployment options
2026-01-14 17:57:02 +01:00
df3f6f6899 Update docker-compose.prod.acr.yml with correct ACR name and add deployment next steps guide
- Update ACR name from zuyderlandcmdbacr to zdlas
- Add comprehensive deployment next steps guide
- Include deployment options: App Service, ACI, VM, AKS
2026-01-14 17:56:40 +01:00
de7b529ffb Fix PowerShell script variable usage in pipeline
- Use PowerShell variables instead of Azure DevOps variables in same script
- Fix backendImage and frontendImage output
2026-01-14 17:01:39 +01:00
68518f0193 Fix NodeJS.Timeout type errors in frontend
- Change NodeJS.Timeout to ReturnType<typeof setTimeout> for browser compatibility
- Fix timeout ref types in GovernanceModelBadge and TeamDashboard
- All TypeScript compilation errors now resolved
2026-01-14 16:59:16 +01:00
aba16f68de Fix all frontend TypeScript compilation errors
- Add _jiraUpdatedAt to ApplicationDetails type
- Fix bia type in ComplexityDynamicsBubbleChart (null to empty string)
- Fix source type in GovernanceModelHelper (explicit union type)
- Add vite-env.d.ts for import.meta.env types
- Add node.d.ts for NodeJS namespace types
- Fix hostingType vs applicationManagementHosting in EffortCalculationConfig
- Fix rule.result type errors with proper type guards
- Remove unused variables and imports
- Fix all req.query and req.params type errors
2026-01-14 16:57:01 +01:00
f51e9b8574 Fix all req.params and req.query type errors
- Add getParamString helper function for req.params
- Replace all req.params destructuring with getParamString
- Fix remaining req.query.* direct usage errors
- All TypeScript compilation errors now resolved
2026-01-14 16:50:33 +01:00
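The `getParamString` helper this commit describes can be sketched as follows. Express types `req.params` and `req.query` values as `string | string[] | undefined`; normalizing to the first value with an empty-string fallback is an assumption about the helper's exact behavior:

```typescript
// Sketch of the getParamString helper described above: collapse
// string | string[] | undefined down to a plain string.
export function getParamString(
  params: Record<string, string | string[] | undefined>,
  key: string,
): string {
  const value = params[key];
  if (Array.isArray(value)) return value[0] ?? "";
  return value ?? "";
}
```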
81d477ec8c Fix TypeScript compilation errors
- Add searchReference to ApplicationListItem type
- Fix result variable in toApplicationDetails function
- Add query helper functions for req.query parameter handling
- Fix req.query.* type errors in routes (applications, cache, classifications, objects)
- Fix CompletenessCategoryConfig missing id property
- Fix number | null type errors in dataService
- Add utils/queryHelpers.ts for reusable query parameter helpers
2026-01-14 16:36:22 +01:00
fb7dd23027 Fix Dockerfile: Use npm install instead of npm ci (package-lock.json missing) 2026-01-14 16:28:18 +01:00
55c8fee3b8 Add Azure Container Registry setup and documentation
- Configure ACR name: zdlas in azure-pipelines.yml
- Add Azure Container Registry documentation and guides
- Add scripts for ACR creation and image building
- Add docker-compose config for ACR deployment
- Remove temporary Excel lock file
2026-01-14 12:25:25 +01:00
96ed8a9ecf Configure ACR name: zdlas 2026-01-14 12:10:08 +01:00
195 changed files with 45571 additions and 5827 deletions


@@ -1,12 +1,31 @@
# Application
# =============================================================================
# CMDB Insight - Environment Configuration
# =============================================================================
# Copy this file to .env and update the values according to your environment
# =============================================================================
# -----------------------------------------------------------------------------
# Application Configuration
# -----------------------------------------------------------------------------
PORT=3001
NODE_ENV=development
FRONTEND_URL=http://localhost:5173
# Application Branding
APP_NAME=CMDB Insight
APP_TAGLINE=Management console for Jira Assets
APP_COPYRIGHT=© {year} Zuyderland Medisch Centrum
# -----------------------------------------------------------------------------
# Database Configuration
# -----------------------------------------------------------------------------
# Use 'postgres' for PostgreSQL or 'sqlite' for SQLite (default)
DATABASE_TYPE=postgres
# Option 1: Use DATABASE_URL (recommended for PostgreSQL)
DATABASE_URL=postgresql://cmdb:cmdb-dev@localhost:5432/cmdb
# Or use individual components:
# Option 2: Use individual components (alternative to DATABASE_URL)
# DATABASE_HOST=localhost
# DATABASE_PORT=5432
# DATABASE_NAME=cmdb
@@ -14,17 +33,70 @@ DATABASE_URL=postgresql://cmdb:cmdb-dev@localhost:5432/cmdb
# DATABASE_PASSWORD=cmdb-dev
# DATABASE_SSL=false
# -----------------------------------------------------------------------------
# Jira Assets Configuration
# -----------------------------------------------------------------------------
JIRA_HOST=https://jira.zuyderland.nl
JIRA_PAT=your_personal_access_token_here
JIRA_SCHEMA_ID=your_schema_id
JIRA_API_BATCH_SIZE=20
# Claude API
ANTHROPIC_API_KEY=your_anthropic_api_key_here
# Jira Service Account Token (for read operations: sync, fetching data)
# This token is used for all read operations from Jira Assets.
# Write operations (saving changes) require users to configure their own PAT in profile settings.
JIRA_SERVICE_ACCOUNT_TOKEN=your_service_account_personal_access_token
JIRA_API_BATCH_SIZE=15
# Tavily API Key (available via https://tavily.com)
TAVILY_API_KEY=your_tavily_api_key_here
# Jira Authentication Method
# Note: User Personal Access Tokens (PAT) are NOT configured here - users configure them in their profile settings
# The service account token above is used for read operations, user PATs are used for write operations.
# OpenAI API
OPENAI_API_KEY=your_openai_api_key_here
# Options: 'pat' (Personal Access Token) or 'oauth' (OAuth 2.0)
JIRA_AUTH_METHOD=pat
# Option 2: OAuth 2.0 Authentication
# Required when JIRA_AUTH_METHOD=oauth
# JIRA_OAUTH_CLIENT_ID=your_oauth_client_id
# JIRA_OAUTH_CLIENT_SECRET=your_oauth_client_secret
# JIRA_OAUTH_CALLBACK_URL=http://localhost:3001/api/auth/callback
# JIRA_OAUTH_SCOPES=READ WRITE
# Legacy: JIRA_OAUTH_ENABLED (for backward compatibility)
# JIRA_OAUTH_ENABLED=false
# -----------------------------------------------------------------------------
# Local Authentication System
# -----------------------------------------------------------------------------
# Enable local authentication (email/password login)
LOCAL_AUTH_ENABLED=true
# Allow public registration (optional, default: false)
REGISTRATION_ENABLED=false
# Session Configuration
SESSION_SECRET=change-this-secret-in-production
SESSION_DURATION_HOURS=24
# Password Requirements
PASSWORD_MIN_LENGTH=8
PASSWORD_REQUIRE_UPPERCASE=true
PASSWORD_REQUIRE_LOWERCASE=true
PASSWORD_REQUIRE_NUMBER=true
PASSWORD_REQUIRE_SPECIAL=false
# Email Configuration (for invitations, password resets, etc.)
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=your-email@example.com
SMTP_PASSWORD=your-email-password
SMTP_FROM=noreply@example.com
# Encryption Key (for encrypting sensitive user data like API keys)
# Generate with: openssl rand -base64 32
ENCRYPTION_KEY=your-32-byte-encryption-key-base64
# Initial Administrator User (optional - created on first migration)
# If not set, you'll need to create an admin user manually
ADMIN_USERNAME=administrator
ADMIN_PASSWORD=SecurePassword123!
ADMIN_EMAIL=admin@example.com
ADMIN_DISPLAY_NAME=Administrator


@@ -1,8 +1,8 @@
# CLAUDE.md - ZiRA Classificatie Tool
# CLAUDE.md - CMDB Insight
## Project Overview
**Project:** ZiRA Classificatie Tool (Zuyderland CMDB Editor)
**Project:** CMDB Insight (Zuyderland CMDB Editor)
**Organization:** Zuyderland Medisch Centrum - ICMT
**Purpose:** Interactive tool for classifying ~500 application components into ZiRA (Ziekenhuis Referentie Architectuur) application functions with Jira Assets CMDB integration.
@@ -18,7 +18,7 @@ The project has a working implementation with:
- SQLite database for classification history
Key files:
- `zira-classificatie-tool-specificatie.md` - Complete technical specification
- `cmdb-insight-specificatie.md` - Complete technical specification
- `zira-taxonomy.json` - ZiRA taxonomy with 90+ application functions across 10 domains
- `management-parameters.json` - Reference data for dynamics, complexity, users, governance models
@@ -57,7 +57,7 @@ cd frontend && npm run build
## Project Structure
```
zira-classificatie-tool/
cmdb-insight/
├── package.json # Root workspace package
├── docker-compose.yml # Docker development setup
├── .env.example # Environment template
@@ -156,7 +156,6 @@ Dutch hospital reference architecture with 90+ application functions organized i
```env
# Jira Data Center
JIRA_HOST=https://jira.zuyderland.nl
JIRA_SCHEMA_ID=<schema_id>
# Jira Authentication Method: 'pat' or 'oauth'
JIRA_AUTH_METHOD=pat # Choose: 'pat' (Personal Access Token) or 'oauth' (OAuth 2.0)
@@ -193,9 +192,9 @@ JIRA_ATTR_APPLICATION_CLUSTER=<attr_id>
JIRA_ATTR_APPLICATION_TYPE=<attr_id>
# AI Classification
ANTHROPIC_API_KEY=<claude_api_key>
OPENAI_API_KEY=<openai_api_key> # Optional: alternative to Claude
DEFAULT_AI_PROVIDER=claude # 'claude' or 'openai'
# Note: AI API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY, TAVILY_API_KEY),
# default AI provider, and web search are configured per-user in profile settings,
# not in environment variables
# Server
PORT=3001
@@ -271,9 +270,25 @@ SESSION_SECRET=your_secure_random_string
| File | Purpose |
|------|---------|
| `zira-classificatie-tool-specificatie.md` | Complete technical specification |
| `cmdb-insight-specificatie.md` | Complete technical specification |
| `zira-taxonomy.json` | 90+ ZiRA application functions |
| `management-parameters.json` | Dropdown options and reference data |
| `docs/refactor-plan.md` | **Architecture refactoring plan (Phase 1: Analysis)** |
## Architecture Refactoring
**Status:** Phase 1 Complete - Analysis and Planning
A comprehensive refactoring plan has been created to improve maintainability, reduce duplication, and establish clearer separation of concerns. See `docs/refactor-plan.md` for:
- Current architecture map (files/folders/modules)
- Pain points and duplication analysis
- Target architecture (domain/infrastructure/services/api)
- Migration steps in order
- Explicit deletion list (files to remove later)
- API payload contract and recursion insights
**⚠️ Note:** Phase 1 is analysis only - no functional changes have been made yet.
## Language

azure-pipelines-deploy.yml (new file, 133 lines)

@@ -0,0 +1,133 @@
# Azure DevOps Pipeline - Deploy to Azure App Service
# Use this pipeline after images have been built and pushed to ACR
#
# To use this pipeline:
# 1. Make sure images exist in ACR (run azure-pipelines.yml first)
# 2. Update variables below with your Azure resource names
# 3. Create Azure service connection for App Service deployment
# 4. Create 'production' environment in Azure DevOps
# 5. Configure this pipeline in Azure DevOps
trigger:
branches:
include:
- main
tags:
include:
- 'v*'
pool:
vmImage: 'ubuntu-latest'
variables:
# Azure Container Registry configuration
acrName: 'zdlas' # Change to your ACR name
repositoryName: 'cmdb-insight'
# Azure App Service configuration
resourceGroup: 'rg-cmdb-insight-prod' # Change to your resource group
backendAppName: 'cmdb-backend-prod' # Change to your backend app name
frontendAppName: 'cmdb-frontend-prod' # Change to your frontend app name
azureSubscription: 'zuyderland-cmdb-subscription' # Azure service connection for App Service deployment
# Deployment configuration
imageTag: 'latest' # Use 'latest' or specific tag like 'v1.0.0'
stages:
- stage: Deploy
displayName: 'Deploy to Azure App Service'
jobs:
- deployment: DeployBackend
displayName: 'Deploy Backend'
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- task: AzureWebAppContainer@1
displayName: 'Deploy Backend Container'
inputs:
azureSubscription: '$(azureSubscription)'
appName: '$(backendAppName)'
containers: '$(acrName).azurecr.io/$(repositoryName)/backend:$(imageTag)'
deployToSlotOrASE: false
- task: AzureCLI@2
displayName: 'Restart Backend App Service'
inputs:
azureSubscription: '$(azureSubscription)'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
echo "Restarting backend app service..."
az webapp restart \
--name $(backendAppName) \
--resource-group $(resourceGroup)
echo "Backend app service restarted successfully"
- deployment: DeployFrontend
displayName: 'Deploy Frontend'
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- task: AzureWebAppContainer@1
displayName: 'Deploy Frontend Container'
inputs:
azureSubscription: '$(azureSubscription)'
appName: '$(frontendAppName)'
containers: '$(acrName).azurecr.io/$(repositoryName)/frontend:$(imageTag)'
deployToSlotOrASE: false
- task: AzureCLI@2
displayName: 'Restart Frontend App Service'
inputs:
azureSubscription: '$(azureSubscription)'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
echo "Restarting frontend app service..."
az webapp restart \
--name $(frontendAppName) \
--resource-group $(resourceGroup)
echo "Frontend app service restarted successfully"
- job: VerifyDeployment
displayName: 'Verify Deployment'
dependsOn:
- DeployBackend
- DeployFrontend
steps:
- task: AzureCLI@2
displayName: 'Health Check'
inputs:
azureSubscription: '$(azureSubscription)'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
echo "Checking backend health..."
BACKEND_URL="https://$(backendAppName).azurewebsites.net/api/health"
FRONTEND_URL="https://$(frontendAppName).azurewebsites.net"
echo "Backend URL: $BACKEND_URL"
echo "Frontend URL: $FRONTEND_URL"
# Wait a bit for apps to start
sleep 10
# Check backend health
BACKEND_STATUS=$(curl -s -o /dev/null -w "%{http_code}" $BACKEND_URL || echo "000")
if [ "$BACKEND_STATUS" = "200" ]; then
echo "✅ Backend health check passed"
else
echo "⚠️ Backend health check returned status: $BACKEND_STATUS"
fi
# Check frontend
FRONTEND_STATUS=$(curl -s -o /dev/null -w "%{http_code}" $FRONTEND_URL || echo "000")
if [ "$FRONTEND_STATUS" = "200" ]; then
echo "✅ Frontend is accessible"
else
echo "⚠️ Frontend returned status: $FRONTEND_STATUS"
fi

azure-pipelines-slots.yml (new file, 247 lines)

@@ -0,0 +1,247 @@
# Azure DevOps Pipeline - Build, Push and Deploy with Deployment Slots
# Advanced version with zero-downtime deployment using staging slots
trigger:
branches:
include:
- main
tags:
include:
- 'v*'
pool:
vmImage: 'ubuntu-latest'
variables:
# Azure Container Registry configuration
acrName: 'zdlas' # Change to your ACR name
repositoryName: 'cmdb-insight'
dockerRegistryServiceConnection: 'zuyderland-cmdb-acr-connection'
# Azure App Service configuration
resourceGroup: 'rg-cmdb-insight-prod'
backendAppName: 'cmdb-backend-prod'
frontendAppName: 'cmdb-frontend-prod'
azureSubscription: 'zuyderland-cmdb-subscription'
# Deployment configuration
imageTag: '$(Build.BuildId)'
deployToProduction: true
useDeploymentSlots: true # Enable deployment slots
stagingSlotName: 'staging'
stages:
- stage: Build
displayName: 'Build and Push Docker Images'
jobs:
- job: BuildImages
displayName: 'Build Docker Images'
steps:
- task: Docker@2
displayName: 'Build and Push Backend Image'
inputs:
command: buildAndPush
repository: '$(repositoryName)/backend'
dockerfile: 'backend/Dockerfile.prod'
containerRegistry: '$(dockerRegistryServiceConnection)'
tags: |
$(imageTag)
latest
- task: Docker@2
displayName: 'Build and Push Frontend Image'
inputs:
command: buildAndPush
repository: '$(repositoryName)/frontend'
dockerfile: 'frontend/Dockerfile.prod'
containerRegistry: '$(dockerRegistryServiceConnection)'
tags: |
$(imageTag)
latest
- task: PowerShell@2
displayName: 'Output Image URLs'
inputs:
targetType: 'inline'
script: |
$backendImage = "$(acrName).azurecr.io/$(repositoryName)/backend:$(imageTag)"
$frontendImage = "$(acrName).azurecr.io/$(repositoryName)/frontend:$(imageTag)"
Write-Host "##vso[task.setvariable variable=backendImage;isOutput=true]$backendImage"
Write-Host "##vso[task.setvariable variable=frontendImage;isOutput=true]$frontendImage"
Write-Host "Backend Image: $backendImage"
Write-Host "Frontend Image: $frontendImage"
- stage: DeployToStaging
displayName: 'Deploy to Staging Slot'
dependsOn: Build
condition: and(succeeded(), eq(variables['deployToProduction'], true), eq(variables['useDeploymentSlots'], true))
jobs:
- deployment: DeployBackendStaging
displayName: 'Deploy Backend to Staging'
environment: 'staging'
strategy:
runOnce:
deploy:
steps:
- task: AzureWebAppContainer@1
displayName: 'Deploy Backend to Staging Slot'
inputs:
azureSubscription: '$(azureSubscription)'
appName: '$(backendAppName)'
deployToSlotOrASE: true
resourceGroupName: '$(resourceGroup)'
slotName: '$(stagingSlotName)'
containers: '$(acrName).azurecr.io/$(repositoryName)/backend:latest'
- task: AzureCLI@2
displayName: 'Wait for Backend Staging to be Ready'
inputs:
azureSubscription: '$(azureSubscription)'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
echo "Waiting for backend staging to be ready..."
sleep 30
STAGING_URL="https://$(backendAppName)-$(stagingSlotName).azurewebsites.net/api/health"
for i in {1..10}; do
STATUS=$(curl -s -o /dev/null -w "%{http_code}" $STAGING_URL || echo "000")
if [ "$STATUS" = "200" ]; then
echo "✅ Backend staging is ready"
exit 0
fi
echo "Waiting... ($i/10)"
sleep 10
done
echo "⚠️ Backend staging health check timeout"
- deployment: DeployFrontendStaging
displayName: 'Deploy Frontend to Staging'
environment: 'staging'
strategy:
runOnce:
deploy:
steps:
- task: AzureWebAppContainer@1
displayName: 'Deploy Frontend to Staging Slot'
inputs:
azureSubscription: '$(azureSubscription)'
appName: '$(frontendAppName)'
deployToSlotOrASE: true
resourceGroupName: '$(resourceGroup)'
slotName: '$(stagingSlotName)'
containers: '$(acrName).azurecr.io/$(repositoryName)/frontend:latest'
- task: AzureCLI@2
displayName: 'Wait for Frontend Staging to be Ready'
inputs:
azureSubscription: '$(azureSubscription)'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
echo "Waiting for frontend staging to be ready..."
sleep 20
STAGING_URL="https://$(frontendAppName)-$(stagingSlotName).azurewebsites.net"
for i in {1..10}; do
STATUS=$(curl -s -o /dev/null -w "%{http_code}" $STAGING_URL || echo "000")
if [ "$STATUS" = "200" ]; then
echo "✅ Frontend staging is ready"
exit 0
fi
echo "Waiting... ($i/10)"
sleep 10
done
echo "⚠️ Frontend staging health check timeout"
- stage: SwapToProduction
displayName: 'Swap Staging to Production'
dependsOn: DeployToStaging
condition: and(succeeded(), eq(variables['deployToProduction'], true), eq(variables['useDeploymentSlots'], true))
jobs:
- deployment: SwapBackend
displayName: 'Swap Backend to Production'
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- task: AzureCLI@2
displayName: 'Swap Backend Staging to Production'
inputs:
azureSubscription: '$(azureSubscription)'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
echo "Swapping backend staging to production..."
az webapp deployment slot swap \
--name $(backendAppName) \
--resource-group $(resourceGroup) \
--slot $(stagingSlotName) \
--target-slot production
echo "✅ Backend swapped to production"
- deployment: SwapFrontend
displayName: 'Swap Frontend to Production'
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- task: AzureCLI@2
displayName: 'Swap Frontend Staging to Production'
inputs:
azureSubscription: '$(azureSubscription)'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
echo "Swapping frontend staging to production..."
az webapp deployment slot swap \
--name $(frontendAppName) \
--resource-group $(resourceGroup) \
--slot $(stagingSlotName) \
--target-slot production
echo "✅ Frontend swapped to production"
- stage: VerifyProduction
displayName: 'Verify Production Deployment'
dependsOn: SwapToProduction
condition: and(succeeded(), eq(variables['deployToProduction'], true), eq(variables['useDeploymentSlots'], true))
jobs:
- job: VerifyDeployment
displayName: 'Verify Production'
steps:
- task: AzureCLI@2
displayName: 'Production Health Check'
inputs:
azureSubscription: '$(azureSubscription)'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
echo "Verifying production deployment..."
BACKEND_URL="https://$(backendAppName).azurewebsites.net/api/health"
FRONTEND_URL="https://$(frontendAppName).azurewebsites.net"
echo "Backend URL: $BACKEND_URL"
echo "Frontend URL: $FRONTEND_URL"
# Wait for swap to complete
sleep 15
# Check backend health
BACKEND_STATUS=$(curl -s -o /dev/null -w "%{http_code}" $BACKEND_URL || echo "000")
if [ "$BACKEND_STATUS" = "200" ]; then
echo "✅ Backend production health check passed"
else
echo "❌ Backend production health check failed: $BACKEND_STATUS"
exit 1
fi
# Check frontend
FRONTEND_STATUS=$(curl -s -o /dev/null -w "%{http_code}" $FRONTEND_URL || echo "000")
if [ "$FRONTEND_STATUS" = "200" ]; then
echo "✅ Frontend production is accessible"
else
echo "❌ Frontend production check failed: $FRONTEND_STATUS"
exit 1
fi
echo "🎉 Production deployment verified successfully!"

azure-pipelines.yml (new file, 61 lines)

@@ -0,0 +1,61 @@
# Azure DevOps Pipeline - Build and Push Docker Images
# This file can be used in Azure DevOps Pipelines
trigger:
branches:
include:
- main
tags:
include:
- 'v*'
pool:
vmImage: 'ubuntu-latest'
variables:
# Azure Container Registry name - change to your ACR
acrName: 'zdlasacr'
repositoryName: 'cmdb-insight'
dockerRegistryServiceConnection: 'zuyderland-cmdb-acr-connection' # Service connection name in Azure DevOps
imageTag: '$(Build.BuildId)'
stages:
- stage: Build
displayName: 'Build and Push Docker Images'
jobs:
- job: BuildImages
displayName: 'Build Docker Images'
steps:
- task: Docker@2
displayName: 'Build and Push Backend Image'
inputs:
command: buildAndPush
repository: '$(repositoryName)/backend'
dockerfile: 'backend/Dockerfile.prod'
containerRegistry: '$(dockerRegistryServiceConnection)'
tags: |
$(imageTag)
latest
- task: Docker@2
displayName: 'Build and Push Frontend Image'
inputs:
command: buildAndPush
repository: '$(repositoryName)/frontend'
dockerfile: 'frontend/Dockerfile.prod'
containerRegistry: '$(dockerRegistryServiceConnection)'
tags: |
$(imageTag)
latest
- task: PowerShell@2
displayName: 'Output Image URLs'
inputs:
targetType: 'inline'
script: |
$backendImage = "$(acrName).azurecr.io/$(repositoryName)/backend:$(imageTag)"
$frontendImage = "$(acrName).azurecr.io/$(repositoryName)/frontend:$(imageTag)"
Write-Host "##vso[task.setvariable variable=backendImage]$backendImage"
Write-Host "##vso[task.setvariable variable=frontendImage]$frontendImage"
Write-Host "Backend Image: $backendImage"
Write-Host "Frontend Image: $frontendImage"

View File

@@ -2,9 +2,9 @@ FROM node:20-alpine AS builder
WORKDIR /app
# Install dependencies
# Install dependencies (including dev dependencies for build)
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force
RUN npm install && npm cache clean --force
# Copy source
COPY . .
@@ -19,7 +19,7 @@ WORKDIR /app
# Install only production dependencies
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force
RUN npm install --omit=dev && npm cache clean --force
# Copy built files
COPY --from=builder /app/dist ./dist

File diff suppressed because it is too large

Binary file not shown.

View File

@@ -1,7 +1,7 @@
{
"name": "zira-backend",
"name": "cmdb-insight-backend",
"version": "1.0.0",
"description": "ZiRA Classificatie Tool Backend",
"description": "CMDB Insight Backend",
"type": "module",
"main": "dist/index.js",
"scripts": {
@@ -9,10 +9,19 @@
"build": "tsc",
"start": "node dist/index.js",
"generate-schema": "tsx scripts/generate-schema.ts",
"migrate:sqlite-to-postgres": "tsx scripts/migrate-sqlite-to-postgres.ts"
"generate-types": "tsx scripts/generate-types-from-db.ts",
"discover-schema": "tsx scripts/discover-schema.ts",
"migrate": "tsx scripts/run-migrations.ts",
"check-admin": "tsx scripts/check-admin-user.ts",
"migrate:sqlite-to-postgres": "tsx scripts/migrate-sqlite-to-postgres.ts",
"migrate:search-enabled": "tsx scripts/migrate-search-enabled.ts",
"setup-schema-mappings": "tsx scripts/setup-schema-mappings.ts"
},
"dependencies": {
"@anthropic-ai/sdk": "^0.32.1",
"@types/bcrypt": "^6.0.0",
"@types/nodemailer": "^7.0.5",
"bcrypt": "^6.0.0",
"better-sqlite3": "^11.6.0",
"cookie-parser": "^1.4.7",
"cors": "^2.8.5",
@@ -20,10 +29,11 @@
"express": "^4.21.1",
"express-rate-limit": "^7.4.1",
"helmet": "^8.0.0",
"nodemailer": "^7.0.12",
"openai": "^6.15.0",
"pg": "^8.13.1",
"winston": "^3.17.0",
"xlsx": "^0.18.5"
"exceljs": "^4.4.0"
},
"devDependencies": {
"@types/better-sqlite3": "^7.6.12",
@@ -32,7 +42,6 @@
"@types/express": "^5.0.0",
"@types/node": "^22.9.0",
"@types/pg": "^8.11.10",
"@types/xlsx": "^0.0.35",
"tsx": "^4.19.2",
"typescript": "^5.6.3"
}

View File

@@ -0,0 +1,109 @@
/**
* Check Admin User
*
* Script to check whether the admin user exists and verify its credentials.
*
* Usage:
* tsx scripts/check-admin-user.ts
*/
import { getAuthDatabase } from '../src/services/database/migrations.js';
import { userService } from '../src/services/userService.js';
import { roleService } from '../src/services/roleService.js';
async function main() {
try {
const db = getAuthDatabase();
console.log('\n=== Checking Admin User ===\n');
// Check environment variables
const adminEmail = process.env.ADMIN_EMAIL;
const adminUsername = process.env.ADMIN_USERNAME || 'admin';
const adminPassword = process.env.ADMIN_PASSWORD;
console.log('Environment Variables:');
console.log(` ADMIN_EMAIL: ${adminEmail || 'NOT SET'}`);
console.log(` ADMIN_USERNAME: ${adminUsername}`);
console.log(` ADMIN_PASSWORD: ${adminPassword ? '***SET***' : 'NOT SET'}`);
console.log('');
// Check if users table exists
try {
const userCount = await db.queryOne<{ count: number }>(
'SELECT COUNT(*) as count FROM users'
);
console.log(`Total users in database: ${userCount?.count || 0}`);
} catch (error) {
console.error('❌ Users table does not exist. Run migrations first: npm run migrate');
await db.close();
process.exit(1);
}
// Try to find user by email
if (adminEmail) {
const userByEmail = await userService.getUserByEmail(adminEmail);
if (userByEmail) {
console.log(`✓ User found by email: ${adminEmail}`);
console.log(` - ID: ${userByEmail.id}`);
console.log(` - Username: ${userByEmail.username}`);
console.log(` - Display Name: ${userByEmail.display_name}`);
console.log(` - Active: ${userByEmail.is_active}`);
console.log(` - Email Verified: ${userByEmail.email_verified}`);
// Check roles
const roles = await roleService.getUserRoles(userByEmail.id);
console.log(` - Roles: ${roles.map(r => r.name).join(', ') || 'None'}`);
// Test password if provided
if (adminPassword) {
const isValid = await userService.verifyPassword(adminPassword, userByEmail.password_hash);
console.log(` - Password verification: ${isValid ? '✓ VALID' : '✗ INVALID'}`);
}
} else {
console.log(`✗ User NOT found by email: ${adminEmail}`);
}
}
// Try to find user by username
const userByUsername = await userService.getUserByUsername(adminUsername);
if (userByUsername) {
console.log(`✓ User found by username: ${adminUsername}`);
console.log(` - ID: ${userByUsername.id}`);
console.log(` - Email: ${userByUsername.email}`);
console.log(` - Display Name: ${userByUsername.display_name}`);
console.log(` - Active: ${userByUsername.is_active}`);
console.log(` - Email Verified: ${userByUsername.email_verified}`);
// Check roles
const roles = await roleService.getUserRoles(userByUsername.id);
console.log(` - Roles: ${roles.map(r => r.name).join(', ') || 'None'}`);
// Test password if provided
if (adminPassword) {
const isValid = await userService.verifyPassword(adminPassword, userByUsername.password_hash);
console.log(` - Password verification: ${isValid ? '✓ VALID' : '✗ INVALID'}`);
}
} else {
console.log(`✗ User NOT found by username: ${adminUsername}`);
}
// List all users
const allUsers = await db.query<any>('SELECT id, email, username, display_name, is_active, email_verified FROM users');
if (allUsers && allUsers.length > 0) {
console.log(`\n=== All Users (${allUsers.length}) ===`);
for (const user of allUsers) {
const roles = await roleService.getUserRoles(user.id);
console.log(` - ${user.email} (${user.username}) - Active: ${user.is_active}, Verified: ${user.email_verified}, Roles: ${roles.map(r => r.name).join(', ') || 'None'}`);
}
}
await db.close();
console.log('\n✓ Check completed\n');
} catch (error) {
console.error('✗ Error:', error);
process.exit(1);
}
}
main();

View File

@@ -0,0 +1,38 @@
#!/usr/bin/env npx tsx
/**
* Schema Discovery CLI
*
* Manually trigger schema discovery from Jira Assets API.
* This script fetches the schema and stores it in the database.
*
* Usage: npm run discover-schema
*/
import { schemaDiscoveryService } from '../src/services/schemaDiscoveryService.js';
import { schemaCacheService } from '../src/services/schemaCacheService.js';
import { logger } from '../src/services/logger.js';
async function main() {
try {
console.log('Starting schema discovery...');
logger.info('Schema Discovery CLI: Starting manual schema discovery');
// Force discovery (ignore cache)
await schemaDiscoveryService.discoverAndStoreSchema(true);
// Invalidate cache so next request gets fresh data
schemaCacheService.invalidate();
console.log('✅ Schema discovery completed successfully!');
logger.info('Schema Discovery CLI: Schema discovery completed successfully');
process.exit(0);
} catch (error) {
console.error('❌ Schema discovery failed:', error);
logger.error('Schema Discovery CLI: Schema discovery failed', error);
process.exit(1);
}
}
main();

View File

@@ -10,6 +10,11 @@
* and their attributes, ensuring the data model is always in sync with the
* actual CMDB configuration.
*
* Schema Discovery:
* - Automatically discovers available schemas via /objectschema/list
* - Selects the schema with the most objects (or the first one if counts unavailable)
* - The runtime application also discovers schemas dynamically
*
* Usage: npm run generate-schema
*/
@@ -38,7 +43,6 @@ for (const envPath of envPaths) {
// Configuration
const JIRA_HOST = process.env.JIRA_HOST || '';
const JIRA_PAT = process.env.JIRA_PAT || '';
const JIRA_SCHEMA_ID = process.env.JIRA_SCHEMA_ID || '';
const OUTPUT_DIR = path.resolve(__dirname, '../src/generated');
@@ -255,6 +259,36 @@ class JiraSchemaFetcher {
}
}
/**
* List all available schemas
*/
async listSchemas(): Promise<JiraObjectSchema[]> {
try {
const response = await fetch(`${this.baseUrl}/objectschema/list`, {
headers: this.headers,
});
if (!response.ok) {
console.error(`Failed to list schemas: ${response.status} ${response.statusText}`);
return [];
}
const result = await response.json();
// Handle both array and object responses
if (Array.isArray(result)) {
return result;
} else if (result && typeof result === 'object' && 'objectschemas' in result) {
return result.objectschemas || [];
}
return [];
} catch (error) {
console.error(`Error listing schemas:`, error);
return [];
}
}
/**
* Test the connection
*/
@@ -752,18 +786,12 @@ function generateDatabaseSchema(generatedAt: Date): string {
'-- =============================================================================',
'-- Core Tables',
'-- =============================================================================',
'',
'-- Cached CMDB objects (all types stored in single table with JSON data)',
'CREATE TABLE IF NOT EXISTS cached_objects (',
' id TEXT PRIMARY KEY,',
' object_key TEXT NOT NULL UNIQUE,',
' object_type TEXT NOT NULL,',
' label TEXT NOT NULL,',
' data JSON NOT NULL,',
' jira_updated_at TEXT,',
' jira_created_at TEXT,',
' cached_at TEXT NOT NULL',
');',
'--',
'-- NOTE: This schema is LEGACY and deprecated.',
'-- The current system uses the normalized schema defined in',
'-- backend/src/services/database/normalized-schema.ts',
'--',
'-- This file is kept for reference and migration purposes only.',
'',
'-- Object relations (references between objects)',
'CREATE TABLE IF NOT EXISTS object_relations (',
@@ -787,10 +815,6 @@ function generateDatabaseSchema(generatedAt: Date): string {
'-- Indices for Performance',
'-- =============================================================================',
'',
'CREATE INDEX IF NOT EXISTS idx_objects_type ON cached_objects(object_type);',
'CREATE INDEX IF NOT EXISTS idx_objects_key ON cached_objects(object_key);',
'CREATE INDEX IF NOT EXISTS idx_objects_updated ON cached_objects(jira_updated_at);',
'CREATE INDEX IF NOT EXISTS idx_objects_label ON cached_objects(label);',
'',
'CREATE INDEX IF NOT EXISTS idx_relations_source ON object_relations(source_id);',
'CREATE INDEX IF NOT EXISTS idx_relations_target ON object_relations(target_id);',
@@ -829,17 +853,10 @@ async function main() {
process.exit(1);
}
if (!JIRA_SCHEMA_ID) {
console.error('❌ ERROR: JIRA_SCHEMA_ID environment variable is required');
console.error(' Set this in your .env file: JIRA_SCHEMA_ID=6');
process.exit(1);
}
if (envLoaded) {
console.log(`🔧 Environment: ${envLoaded}`);
}
console.log(`📡 Jira Host: ${JIRA_HOST}`);
console.log(`📋 Schema ID: ${JIRA_SCHEMA_ID}`);
console.log(`📁 Output Dir: ${OUTPUT_DIR}`);
console.log('');
@@ -862,20 +879,41 @@ async function main() {
console.log('✅ Connection successful');
console.log('');
// Fetch schema info
console.log('📋 Fetching schema information...');
const schema = await fetcher.fetchSchema(JIRA_SCHEMA_ID);
if (!schema) {
console.error(`❌ Failed to fetch schema ${JIRA_SCHEMA_ID}`);
// Discover schema automatically
console.log('📋 Discovering available schemas...');
const schemas = await fetcher.listSchemas();
if (schemas.length === 0) {
console.error('❌ No schemas found');
console.error(' Please ensure Jira Assets is configured and accessible');
process.exit(1);
}
// Select the schema with the most objects (or the first one if counts unavailable)
const schema = schemas.reduce((prev, current) => {
const prevCount = prev.objectCount || 0;
const currentCount = current.objectCount || 0;
return currentCount > prevCount ? current : prev;
});
const selectedSchemaId = schema.id.toString();
console.log(` Found ${schemas.length} schema(s)`);
if (schemas.length > 1) {
console.log(' Available schemas:');
schemas.forEach(s => {
const marker = s.id === schema.id ? ' → ' : ' ';
console.log(`${marker}${s.id}: ${s.name} (${s.objectSchemaKey}) - ${s.objectCount || 0} objects`);
});
console.log(` Using schema: ${schema.name} (ID: ${selectedSchemaId})`);
}
console.log(` Schema: ${schema.name} (${schema.objectSchemaKey})`);
console.log(` Total objects: ${schema.objectCount || 'unknown'}`);
console.log('');
// Fetch ALL object types from the schema
console.log('📦 Fetching all object types from schema...');
const allObjectTypes = await fetcher.fetchAllObjectTypes(JIRA_SCHEMA_ID);
const allObjectTypes = await fetcher.fetchAllObjectTypes(selectedSchemaId);
if (allObjectTypes.length === 0) {
console.error('❌ No object types found in schema');

View File

@@ -0,0 +1,484 @@
#!/usr/bin/env npx tsx
/**
* Type Generation Script - Database to TypeScript
*
* Generates TypeScript types from database schema.
* This script reads the schema from the database (object_types, attributes)
* and generates:
* - TypeScript types (jira-types.ts)
* - Schema metadata (jira-schema.ts)
*
* Usage: npm run generate-types
*/
import * as fs from 'fs';
import * as path from 'path';
import { fileURLToPath } from 'url';
import { createDatabaseAdapter } from '../src/services/database/factory.js';
import type { AttributeDefinition } from '../src/generated/jira-schema.js';
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const OUTPUT_DIR = path.resolve(__dirname, '../src/generated');
interface DatabaseObjectType {
jira_type_id: number;
type_name: string;
display_name: string;
description: string | null;
sync_priority: number;
object_count: number;
}
interface DatabaseAttribute {
jira_attr_id: number;
object_type_name: string;
attr_name: string;
field_name: string;
attr_type: string;
is_multiple: boolean | number;
is_editable: boolean | number;
is_required: boolean | number;
is_system: boolean | number;
reference_type_name: string | null;
description: string | null;
}
function generateTypeScriptType(attrType: string, isMultiple: boolean, isReference: boolean): string {
let tsType: string;
if (isReference) {
tsType = 'ObjectReference';
} else {
switch (attrType) {
case 'text':
case 'textarea':
case 'url':
case 'email':
case 'select':
case 'user':
case 'status':
tsType = 'string';
break;
case 'integer':
case 'float':
tsType = 'number';
break;
case 'boolean':
tsType = 'boolean';
break;
case 'date':
case 'datetime':
tsType = 'string'; // ISO date string
break;
default:
tsType = 'unknown';
}
}
if (isMultiple) {
return `${tsType}[]`;
}
return `${tsType} | null`;
}
function escapeString(str: string): string {
  return str.replace(/\\/g, '\\\\').replace(/'/g, "\\'").replace(/\n/g, ' ');
}
function generateTypesFile(objectTypes: Array<{
jiraTypeId: number;
name: string;
typeName: string;
objectCount: number;
attributes: AttributeDefinition[];
}>, generatedAt: Date): string {
const lines: string[] = [
'// AUTO-GENERATED FILE - DO NOT EDIT MANUALLY',
'// Generated from database schema',
`// Generated at: ${generatedAt.toISOString()}`,
'//',
'// Re-generate with: npm run generate-types',
'',
'// =============================================================================',
'// Base Types',
'// =============================================================================',
'',
'/** Reference to another CMDB object */',
'export interface ObjectReference {',
' objectId: string;',
' objectKey: string;',
' label: string;',
' // Optional enriched data from referenced object',
' factor?: number;',
'}',
'',
'/** Base interface for all CMDB objects */',
'export interface BaseCMDBObject {',
' id: string;',
' objectKey: string;',
' label: string;',
' _objectType: string;',
' _jiraUpdatedAt: string;',
' _jiraCreatedAt: string;',
'}',
'',
'// =============================================================================',
'// Object Type Interfaces',
'// =============================================================================',
'',
];
for (const objType of objectTypes) {
lines.push(`/** ${objType.name} (Jira Type ID: ${objType.jiraTypeId}, ${objType.objectCount} objects) */`);
lines.push(`export interface ${objType.typeName} extends BaseCMDBObject {`);
lines.push(` _objectType: '${objType.typeName}';`);
lines.push('');
// Group attributes by type
const scalarAttrs = objType.attributes.filter(a => a.type !== 'reference');
const refAttrs = objType.attributes.filter(a => a.type === 'reference');
if (scalarAttrs.length > 0) {
lines.push(' // Scalar attributes');
for (const attr of scalarAttrs) {
const tsType = generateTypeScriptType(attr.type, attr.isMultiple, false);
const comment = attr.description ? ` // ${attr.description}` : '';
lines.push(` ${attr.fieldName}: ${tsType};${comment}`);
}
lines.push('');
}
if (refAttrs.length > 0) {
lines.push(' // Reference attributes');
for (const attr of refAttrs) {
const tsType = generateTypeScriptType(attr.type, attr.isMultiple, true);
const comment = attr.referenceTypeName ? ` // -> ${attr.referenceTypeName}` : '';
lines.push(` ${attr.fieldName}: ${tsType};${comment}`);
}
lines.push('');
}
lines.push('}');
lines.push('');
}
// Generate union type
lines.push('// =============================================================================');
lines.push('// Union Types');
lines.push('// =============================================================================');
lines.push('');
lines.push('/** Union of all CMDB object types */');
lines.push('export type CMDBObject =');
for (let i = 0; i < objectTypes.length; i++) {
const suffix = i < objectTypes.length - 1 ? '' : ';';
lines.push(` | ${objectTypes[i].typeName}${suffix}`);
}
lines.push('');
// Generate type name literal union
lines.push('/** All valid object type names */');
lines.push('export type CMDBObjectTypeName =');
for (let i = 0; i < objectTypes.length; i++) {
const suffix = i < objectTypes.length - 1 ? '' : ';';
lines.push(` | '${objectTypes[i].typeName}'${suffix}`);
}
lines.push('');
// Generate type guards
lines.push('// =============================================================================');
lines.push('// Type Guards');
lines.push('// =============================================================================');
lines.push('');
for (const objType of objectTypes) {
lines.push(`export function is${objType.typeName}(obj: CMDBObject): obj is ${objType.typeName} {`);
lines.push(` return obj._objectType === '${objType.typeName}';`);
lines.push('}');
lines.push('');
}
return lines.join('\n');
}
function generateSchemaFile(objectTypes: Array<{
jiraTypeId: number;
name: string;
typeName: string;
syncPriority: number;
objectCount: number;
attributes: AttributeDefinition[];
}>, generatedAt: Date): string {
const lines: string[] = [
'// AUTO-GENERATED FILE - DO NOT EDIT MANUALLY',
'// Generated from database schema',
`// Generated at: ${generatedAt.toISOString()}`,
'//',
'// Re-generate with: npm run generate-types',
'',
'// =============================================================================',
'// Schema Type Definitions',
'// =============================================================================',
'',
'export interface AttributeDefinition {',
' jiraId: number;',
' name: string;',
' fieldName: string;',
" type: 'text' | 'integer' | 'float' | 'boolean' | 'date' | 'datetime' | 'select' | 'reference' | 'url' | 'email' | 'textarea' | 'user' | 'status' | 'unknown';",
' isMultiple: boolean;',
' isEditable: boolean;',
' isRequired: boolean;',
' isSystem: boolean;',
' referenceTypeId?: number;',
' referenceTypeName?: string;',
' description?: string;',
'}',
'',
'export interface ObjectTypeDefinition {',
' jiraTypeId: number;',
' name: string;',
' typeName: string;',
' syncPriority: number;',
' objectCount: number;',
' attributes: AttributeDefinition[];',
'}',
'',
'// =============================================================================',
'// Schema Metadata',
'// =============================================================================',
'',
`export const SCHEMA_GENERATED_AT = '${generatedAt.toISOString()}';`,
`export const SCHEMA_OBJECT_TYPE_COUNT = ${objectTypes.length};`,
`export const SCHEMA_TOTAL_ATTRIBUTES = ${objectTypes.reduce((sum, ot) => sum + ot.attributes.length, 0)};`,
'',
'// =============================================================================',
'// Object Type Definitions',
'// =============================================================================',
'',
'export const OBJECT_TYPES: Record<string, ObjectTypeDefinition> = {',
];
for (let i = 0; i < objectTypes.length; i++) {
const objType = objectTypes[i];
const comma = i < objectTypes.length - 1 ? ',' : '';
lines.push(` '${objType.typeName}': {`);
lines.push(` jiraTypeId: ${objType.jiraTypeId},`);
lines.push(` name: '${escapeString(objType.name)}',`);
lines.push(` typeName: '${objType.typeName}',`);
lines.push(` syncPriority: ${objType.syncPriority},`);
lines.push(` objectCount: ${objType.objectCount},`);
lines.push(' attributes: [');
for (let j = 0; j < objType.attributes.length; j++) {
const attr = objType.attributes[j];
const attrComma = j < objType.attributes.length - 1 ? ',' : '';
let attrLine = ` { jiraId: ${attr.jiraId}, name: '${escapeString(attr.name)}', fieldName: '${attr.fieldName}', type: '${attr.type}', isMultiple: ${attr.isMultiple}, isEditable: ${attr.isEditable}, isRequired: ${attr.isRequired}, isSystem: ${attr.isSystem}`;
if (attr.referenceTypeName) {
attrLine += `, referenceTypeName: '${attr.referenceTypeName}'`;
}
if (attr.description) {
attrLine += `, description: '${escapeString(attr.description)}'`;
}
attrLine += ` }${attrComma}`;
lines.push(attrLine);
}
lines.push(' ],');
lines.push(` }${comma}`);
}
lines.push('};');
lines.push('');
// Generate lookup maps
lines.push('// =============================================================================');
lines.push('// Lookup Maps');
lines.push('// =============================================================================');
lines.push('');
// Type ID to name map
lines.push('/** Map from Jira Type ID to TypeScript type name */');
lines.push('export const TYPE_ID_TO_NAME: Record<number, string> = {');
for (const objType of objectTypes) {
lines.push(` ${objType.jiraTypeId}: '${objType.typeName}',`);
}
lines.push('};');
lines.push('');
// Type name to ID map
lines.push('/** Map from TypeScript type name to Jira Type ID */');
lines.push('export const TYPE_NAME_TO_ID: Record<string, number> = {');
for (const objType of objectTypes) {
lines.push(` '${objType.typeName}': ${objType.jiraTypeId},`);
}
lines.push('};');
lines.push('');
// Jira name to TypeScript name map
lines.push('/** Map from Jira object type name to TypeScript type name */');
lines.push('export const JIRA_NAME_TO_TYPE: Record<string, string> = {');
for (const objType of objectTypes) {
lines.push(` '${escapeString(objType.name)}': '${objType.typeName}',`);
}
lines.push('};');
lines.push('');
// Helper functions
lines.push('// =============================================================================');
lines.push('// Helper Functions');
lines.push('// =============================================================================');
lines.push('');
lines.push('/** Get attribute definition by type and field name */');
lines.push('export function getAttributeDefinition(typeName: string, fieldName: string): AttributeDefinition | undefined {');
lines.push(' const objectType = OBJECT_TYPES[typeName];');
lines.push(' if (!objectType) return undefined;');
lines.push(' return objectType.attributes.find(a => a.fieldName === fieldName);');
lines.push('}');
lines.push('');
lines.push('/** Get attribute definition by type and Jira attribute ID */');
lines.push('export function getAttributeById(typeName: string, jiraId: number): AttributeDefinition | undefined {');
lines.push(' const objectType = OBJECT_TYPES[typeName];');
lines.push(' if (!objectType) return undefined;');
lines.push(' return objectType.attributes.find(a => a.jiraId === jiraId);');
lines.push('}');
lines.push('');
lines.push('/** Get attribute definition by type and Jira attribute name */');
lines.push('export function getAttributeByName(typeName: string, attrName: string): AttributeDefinition | undefined {');
lines.push(' const objectType = OBJECT_TYPES[typeName];');
lines.push(' if (!objectType) return undefined;');
lines.push(' return objectType.attributes.find(a => a.name === attrName);');
lines.push('}');
lines.push('');
lines.push('/** Get attribute Jira ID by type and attribute name - throws if not found */');
lines.push('export function getAttributeId(typeName: string, attrName: string): number {');
lines.push(' const attr = getAttributeByName(typeName, attrName);');
lines.push(' if (!attr) {');
lines.push(' throw new Error(`Attribute "${attrName}" not found on type "${typeName}"`);');
lines.push(' }');
lines.push(' return attr.jiraId;');
lines.push('}');
lines.push('');
lines.push('/** Get all reference attributes for a type */');
lines.push('export function getReferenceAttributes(typeName: string): AttributeDefinition[] {');
lines.push(' const objectType = OBJECT_TYPES[typeName];');
lines.push(' if (!objectType) return [];');
lines.push(" return objectType.attributes.filter(a => a.type === 'reference');");
lines.push('}');
lines.push('');
lines.push('/** Get all object types sorted by sync priority */');
lines.push('export function getObjectTypesBySyncPriority(): ObjectTypeDefinition[] {');
lines.push(' return Object.values(OBJECT_TYPES).sort((a, b) => a.syncPriority - b.syncPriority);');
lines.push('}');
lines.push('');
return lines.join('\n');
}
async function main() {
const generatedAt = new Date();
console.log('');
console.log('╔════════════════════════════════════════════════════════════════╗');
console.log('║ Type Generation - Database to TypeScript ║');
console.log('╚════════════════════════════════════════════════════════════════╝');
console.log('');
try {
// Connect to database
const db = createDatabaseAdapter();
console.log('✓ Connected to database');
// Ensure schema is discovered first
const { schemaDiscoveryService } = await import('../src/services/schemaDiscoveryService.js');
await schemaDiscoveryService.discoverAndStoreSchema();
console.log('✓ Schema discovered from database');
// Fetch object types
const objectTypeRows = await db.query<DatabaseObjectType>(`
SELECT * FROM object_types
ORDER BY sync_priority, type_name
`);
console.log(`✓ Fetched ${objectTypeRows.length} object types`);
// Fetch attributes
const attributeRows = await db.query<DatabaseAttribute>(`
SELECT * FROM attributes
ORDER BY object_type_name, jira_attr_id
`);
console.log(`✓ Fetched ${attributeRows.length} attributes`);
// Build object types with attributes
const objectTypes = objectTypeRows.map(typeRow => {
const attributes = attributeRows
.filter(a => a.object_type_name === typeRow.type_name)
.map(attrRow => {
// Convert boolean/number for SQLite compatibility
const isMultiple = typeof attrRow.is_multiple === 'boolean' ? attrRow.is_multiple : attrRow.is_multiple === 1;
const isEditable = typeof attrRow.is_editable === 'boolean' ? attrRow.is_editable : attrRow.is_editable === 1;
const isRequired = typeof attrRow.is_required === 'boolean' ? attrRow.is_required : attrRow.is_required === 1;
const isSystem = typeof attrRow.is_system === 'boolean' ? attrRow.is_system : attrRow.is_system === 1;
return {
jiraId: attrRow.jira_attr_id,
name: attrRow.attr_name,
fieldName: attrRow.field_name,
type: attrRow.attr_type as AttributeDefinition['type'],
isMultiple,
isEditable,
isRequired,
isSystem,
referenceTypeName: attrRow.reference_type_name || undefined,
description: attrRow.description || undefined,
} as AttributeDefinition;
});
return {
jiraTypeId: typeRow.jira_type_id,
name: typeRow.display_name,
typeName: typeRow.type_name,
syncPriority: typeRow.sync_priority,
objectCount: typeRow.object_count,
attributes,
};
});
// Ensure output directory exists
if (!fs.existsSync(OUTPUT_DIR)) {
fs.mkdirSync(OUTPUT_DIR, { recursive: true });
}
// Generate TypeScript types file
const typesContent = generateTypesFile(objectTypes, generatedAt);
const typesPath = path.join(OUTPUT_DIR, 'jira-types.ts');
fs.writeFileSync(typesPath, typesContent, 'utf-8');
console.log(`✓ Generated ${typesPath}`);
// Generate schema file
const schemaContent = generateSchemaFile(objectTypes, generatedAt);
const schemaPath = path.join(OUTPUT_DIR, 'jira-schema.ts');
fs.writeFileSync(schemaPath, schemaContent, 'utf-8');
console.log(`✓ Generated ${schemaPath}`);
console.log('');
console.log('✅ Type generation completed successfully!');
console.log(` Generated ${objectTypes.length} object types with ${objectTypes.reduce((sum, ot) => sum + ot.attributes.length, 0)} attributes`);
console.log('');
} catch (error) {
console.error('');
console.error('❌ Type generation failed:', error);
process.exit(1);
}
}
main();

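The scalar branch of `generateTypeScriptType` in the file above reduces to a small lookup; a condensed restatement for illustration (`scalarTsType` is not a function in the repo):

```typescript
// String-like and date attribute types map to `string`,
// numeric types to `number`; anything unrecognized is `unknown`.
function scalarTsType(attrType: string): string {
  switch (attrType) {
    case 'text': case 'textarea': case 'url': case 'email':
    case 'select': case 'user': case 'status':
    case 'date': case 'datetime': // stored as ISO date strings
      return 'string';
    case 'integer': case 'float':
      return 'number';
    case 'boolean':
      return 'boolean';
    default:
      return 'unknown';
  }
}
```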
View File

@@ -0,0 +1,90 @@
/**
* Migration script: Add search_enabled column to schemas table
*
* This script adds the search_enabled column to the schemas table if it doesn't exist.
*
* Usage:
* npm run migrate:search-enabled
* or
* tsx scripts/migrate-search-enabled.ts
*/
import { getDatabaseAdapter } from '../src/services/database/singleton.js';
import { logger } from '../src/services/logger.js';
async function main() {
try {
console.log('Starting migration: Adding search_enabled column to schemas table...');
const db = getDatabaseAdapter();
await db.ensureInitialized?.();
const isPostgres = db.isPostgres === true;
// Check if column exists and add it if it doesn't
if (isPostgres) {
// PostgreSQL: Check if column exists
const columnExists = await db.queryOne<{ exists: boolean }>(`
SELECT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'schemas' AND column_name = 'search_enabled'
) as exists
`);
if (!columnExists?.exists) {
console.log('Adding search_enabled column to schemas table...');
await db.execute(`
ALTER TABLE schemas ADD COLUMN search_enabled BOOLEAN NOT NULL DEFAULT TRUE;
`);
console.log('✓ Column added successfully');
} else {
console.log('✓ Column already exists');
}
// Create index if it doesn't exist
try {
await db.execute(`
CREATE INDEX IF NOT EXISTS idx_schemas_search_enabled ON schemas(search_enabled);
`);
console.log('✓ Index created/verified');
} catch (error) {
console.log('Index may already exist, continuing...');
}
} else {
// SQLite: Try to query the column to see if it exists
try {
await db.queryOne('SELECT search_enabled FROM schemas LIMIT 1');
console.log('✓ Column already exists');
} catch {
// Column doesn't exist, add it
console.log('Adding search_enabled column to schemas table...');
await db.execute('ALTER TABLE schemas ADD COLUMN search_enabled INTEGER NOT NULL DEFAULT 1');
console.log('✓ Column added successfully');
}
// Create index if it doesn't exist
try {
await db.execute('CREATE INDEX IF NOT EXISTS idx_schemas_search_enabled ON schemas(search_enabled)');
console.log('✓ Index created/verified');
} catch (error) {
console.log('Index may already exist, continuing...');
}
}
// Verify the column exists
try {
await db.queryOne('SELECT search_enabled FROM schemas LIMIT 1');
console.log('✓ Migration completed successfully - search_enabled column verified');
} catch (error) {
console.error('✗ Migration verification failed:', error);
process.exit(1);
}
process.exit(0);
} catch (error) {
console.error('✗ Migration failed:', error);
process.exit(1);
}
}
main();

View File

@@ -17,6 +17,8 @@ const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const SQLITE_CACHE_DB = join(__dirname, '../../data/cmdb-cache.db');
// Note: Legacy support - old SQLite setups may have had separate classifications.db file
// Current setup uses a single database file for all data
const SQLITE_CLASSIFICATIONS_DB = join(__dirname, '../../data/classifications.db');
async function migrate() {
@@ -66,7 +68,8 @@ async function migrateCacheDatabase(pg: Pool) {
const sqlite = new Database(SQLITE_CACHE_DB, { readonly: true });
try {
// Migrate cached_objects
// Migrate cached_objects (LEGACY - only for migrating old data from deprecated schema)
// Note: New databases use the normalized schema (objects + attribute_values tables)
const objects = sqlite.prepare('SELECT * FROM cached_objects').all() as any[];
console.log(` Migrating ${objects.length} cached objects...`);

View File

@@ -0,0 +1,27 @@
/**
* Run Database Migrations
*
* Standalone script to run database migrations manually.
*
* Usage:
* npm run migrate
* or
* tsx scripts/run-migrations.ts
*/
import { runMigrations } from '../src/services/database/migrations.js';
async function main() {
try {
console.log('Starting database migrations...');
await runMigrations();
console.log('✓ Database migrations completed successfully');
process.exit(0);
} catch (error) {
console.error('✗ Migration failed:', error);
process.exit(1);
}
}
main();

View File

@@ -0,0 +1,178 @@
/**
* Setup Schema Mappings Script
*
* Configures schema mappings for object types based on the provided configuration.
* Run with: npm run setup-schema-mappings
*/
import { schemaMappingService } from '../src/services/schemaMappingService.js';
import { logger } from '../src/services/logger.js';
import { JIRA_NAME_TO_TYPE } from '../src/generated/jira-schema.js';
// Configuration: Schema ID -> Array of object type display names
const SCHEMA_MAPPINGS: Record<string, string[]> = {
'8': ['User'],
'6': [
'Application Component',
'Flows',
'Server',
'AzureSubscription',
'Certificate',
'Domain',
'Package',
'PackageBuild',
'Privileged User',
'Software',
'SoftwarePatch',
'Supplier',
'Application Management - Subteam',
'Application Management - Team',
'Measures',
'Rebootgroups',
'Application Management - Hosting',
'Application Management - Number of Users',
'Application Management - TAM',
'Application Management - Application Type',
'Application Management - Complexity Factor',
'Application Management - Dynamics Factor',
'ApplicationFunction',
'ApplicationFunctionCategory',
'Business Impact Analyse',
'Business Importance',
'Certificate ClassificationType',
'Certificate Type',
'Hosting Type',
'ICT Governance Model',
'Organisation',
],
};
async function setupSchemaMappings() {
logger.info('Setting up schema mappings...');
try {
let totalMappings = 0;
let skippedMappings = 0;
let errors = 0;
for (const [schemaId, objectTypeNames] of Object.entries(SCHEMA_MAPPINGS)) {
logger.info(`\nConfiguring schema ${schemaId} with ${objectTypeNames.length} object types...`);
for (const displayName of objectTypeNames) {
try {
// Convert display name to typeName
let typeName: string;
if (displayName === 'User') {
// User might not be in the generated schema, use 'User' directly
typeName = 'User';
// First, ensure User exists in object_types table
const { normalizedCacheStore } = await import('../src/services/normalizedCacheStore.js');
const db = (normalizedCacheStore as any).db;
await db.ensureInitialized?.();
// Check if User exists in object_types
const existing = await db.queryOne<{ type_name: string }>(`
SELECT type_name FROM object_types WHERE type_name = ?
`, [typeName]);
if (!existing) {
// Insert User into object_types (we'll use a placeholder jira_type_id)
// The actual jira_type_id will be discovered during schema discovery
logger.info(` Adding "User" to object_types table...`);
try {
await db.execute(`
INSERT INTO object_types (jira_type_id, type_name, display_name, description, sync_priority, object_count, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(jira_type_id) DO NOTHING
`, [
999999, // Placeholder ID - will be updated during schema discovery
'User',
'User',
'User object type from schema 8',
0,
0,
new Date().toISOString(),
new Date().toISOString()
]);
// Also try with type_name as unique constraint
await db.execute(`
INSERT INTO object_types (jira_type_id, type_name, display_name, description, sync_priority, object_count, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(type_name) DO UPDATE SET
display_name = excluded.display_name,
updated_at = excluded.updated_at
`, [
999999,
'User',
'User',
'User object type from schema 8',
0,
0,
new Date().toISOString(),
new Date().toISOString()
]);
logger.info(` ✓ Added "User" to object_types table`);
} catch (error: any) {
// If it already exists, that's fine
if (error.message?.includes('UNIQUE constraint') || error.message?.includes('duplicate key')) {
logger.info(` "User" already exists in object_types table`);
} else {
throw error;
}
}
}
} else {
// Look up typeName from JIRA_NAME_TO_TYPE mapping
typeName = JIRA_NAME_TO_TYPE[displayName];
if (!typeName) {
logger.warn(` ⚠️ Skipping "${displayName}" - typeName not found in schema`);
skippedMappings++;
continue;
}
}
// Set the mapping
await schemaMappingService.setMapping(typeName, schemaId, true);
logger.info(` ✓ Mapped ${typeName} (${displayName}) -> Schema ${schemaId}`);
totalMappings++;
} catch (error) {
logger.error(` ✗ Failed to map "${displayName}" to schema ${schemaId}:`, error);
errors++;
}
}
}
logger.info(`\n✅ Schema mappings setup complete!`);
logger.info(` - Total mappings created: ${totalMappings}`);
if (skippedMappings > 0) {
logger.info(` - Skipped (not found in schema): ${skippedMappings}`);
}
if (errors > 0) {
logger.info(` - Errors: ${errors}`);
}
// Clear cache to ensure fresh lookups
schemaMappingService.clearCache();
logger.info(`\n💾 Cache cleared - mappings are now active`);
} catch (error) {
logger.error('Failed to setup schema mappings:', error);
process.exit(1);
}
}
// Run the script
setupSchemaMappings()
.then(() => {
logger.info('\n✨ Done!');
process.exit(0);
})
.catch((error) => {
logger.error('Script failed:', error);
process.exit(1);
});
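The core of the loop above is resolving display names through the generated `JIRA_NAME_TO_TYPE` map, counting names it cannot resolve as skipped. That lookup can be sketched in isolation (the mapping object here is illustrative, not the real generated schema):

```typescript
// Resolve display names against a name -> typeName map, tallying unresolved names.
function resolveTypeNames(
  nameToType: Record<string, string>,
  displayNames: string[]
): { resolved: string[]; skipped: string[] } {
  const resolved: string[] = [];
  const skipped: string[] = [];
  for (const name of displayNames) {
    const typeName = nameToType[name];
    if (typeName) {
      resolved.push(typeName);
    } else {
      skipped.push(name); // logged as a warning in the script above
    }
  }
  return { resolved, skipped };
}
```

Separating resolution from the mapping side effects makes the skip count easy to verify before any database writes happen.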

View File

@@ -0,0 +1,548 @@
/**
* DebugController - Debug/testing endpoints for architecture validation
*
* Provides endpoints to run SQL queries and check database state for testing.
*/
import { Request, Response } from 'express';
import { logger } from '../../services/logger.js';
import { getServices } from '../../services/ServiceFactory.js';
export class DebugController {
/**
* Execute a SQL query (read-only for safety)
* POST /api/v2/debug/query
* Body: { sql: string, params?: any[] }
*/
async executeQuery(req: Request, res: Response): Promise<void> {
try {
const { sql, params = [] } = req.body;
if (!sql || typeof sql !== 'string') {
res.status(400).json({ error: 'SQL query required in request body' });
return;
}
// Safety check: only allow statements whose first keyword is SELECT (best-effort guard, not a SQL parser)
const normalizedSql = sql.trim().toUpperCase();
if (!normalizedSql.startsWith('SELECT')) {
res.status(400).json({ error: 'Only SELECT queries are allowed for security' });
return;
}
const services = getServices();
const db = services.cacheRepo.db;
const result = await db.query(sql, params);
res.json({
success: true,
result,
rowCount: result.length,
});
} catch (error) {
logger.error('DebugController: Query execution failed', error);
res.status(500).json({
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Get object info (ID, key, type) for debugging
* GET /api/v2/debug/objects?objectKey=...
*/
async getObjectInfo(req: Request, res: Response): Promise<void> {
try {
const objectKey = req.query.objectKey as string;
if (!objectKey) {
res.status(400).json({ error: 'objectKey query parameter required' });
return;
}
const services = getServices();
const obj = await services.cacheRepo.getObjectByKey(objectKey);
if (!obj) {
res.status(404).json({ error: 'Object not found' });
return;
}
// Get attribute count
const attrValues = await services.cacheRepo.getAttributeValues(obj.id);
res.json({
object: obj,
attributeValueCount: attrValues.length,
});
} catch (error) {
logger.error('DebugController: Failed to get object info', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Get relation info for debugging
* GET /api/v2/debug/relations?objectKey=...
*/
async getRelationInfo(req: Request, res: Response): Promise<void> {
try {
const objectKey = req.query.objectKey as string;
if (!objectKey) {
res.status(400).json({ error: 'objectKey query parameter required' });
return;
}
const services = getServices();
const obj = await services.cacheRepo.getObjectByKey(objectKey);
if (!obj) {
res.status(404).json({ error: 'Object not found' });
return;
}
// Get relations where this object is source
const sourceRelations = await services.cacheRepo.db.query<{
sourceId: string;
targetId: string;
attributeId: number;
sourceType: string;
targetType: string;
}>(
`SELECT source_id as sourceId, target_id as targetId, attribute_id as attributeId,
source_type as sourceType, target_type as targetType
FROM object_relations
WHERE source_id = ?`,
[obj.id]
);
// Get relations where this object is target
const targetRelations = await services.cacheRepo.db.query<{
sourceId: string;
targetId: string;
attributeId: number;
sourceType: string;
targetType: string;
}>(
`SELECT source_id as sourceId, target_id as targetId, attribute_id as attributeId,
source_type as sourceType, target_type as targetType
FROM object_relations
WHERE target_id = ?`,
[obj.id]
);
res.json({
object: obj,
sourceRelations: sourceRelations.length,
targetRelations: targetRelations.length,
relations: {
outgoing: sourceRelations,
incoming: targetRelations,
},
});
} catch (error) {
logger.error('DebugController: Failed to get relation info', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Get object type statistics
* GET /api/v2/debug/object-types/:typeName/stats
*/
async getObjectTypeStats(req: Request, res: Response): Promise<void> {
try {
const typeName = Array.isArray(req.params.typeName) ? req.params.typeName[0] : req.params.typeName;
if (!typeName) {
res.status(400).json({ error: 'typeName parameter required' });
return;
}
const services = getServices();
// Get object count
const count = await services.cacheRepo.countObjectsByType(typeName);
// Get sample objects
const samples = await services.cacheRepo.getObjectsByType(typeName, { limit: 5 });
// Get enabled status from schema
const typeInfo = await services.schemaRepo.getObjectTypeByTypeName(typeName);
res.json({
typeName,
objectCount: count,
enabled: typeInfo?.enabled || false,
sampleObjects: samples.map(o => ({
id: o.id,
objectKey: o.objectKey,
label: o.label,
})),
});
} catch (error) {
logger.error('DebugController: Failed to get object type stats', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Get all object types with their enabled status (for debugging)
* GET /api/v2/debug/all-object-types
*/
async getAllObjectTypes(req: Request, res: Response): Promise<void> {
try {
const services = getServices();
const db = services.schemaRepo.getDatabaseAdapter();
// Check if object_types table exists
try {
await db.query('SELECT 1 FROM object_types LIMIT 1');
} catch (error) {
logger.error('DebugController: object_types table does not exist or is not accessible', error);
res.status(500).json({
error: 'object_types table does not exist. Please run schema sync first.',
details: error instanceof Error ? error.message : 'Unknown error',
});
return;
}
// Get all object types
let allTypes: Array<{
id: number;
type_name: string | null;
display_name: string;
enabled: boolean | number;
jira_type_id: number;
schema_id: number;
}>;
try {
allTypes = await db.query<{
id: number;
type_name: string | null;
display_name: string;
enabled: boolean | number;
jira_type_id: number;
schema_id: number;
}>(
`SELECT id, type_name, display_name, enabled, jira_type_id, schema_id
FROM object_types
ORDER BY enabled DESC, type_name`
);
} catch (error) {
logger.error('DebugController: Failed to query object_types table', error);
res.status(500).json({
error: 'Failed to query object_types table',
details: error instanceof Error ? error.message : 'Unknown error',
});
return;
}
// Get enabled types via service (may fail if table has issues)
let enabledTypes: Array<{ typeName: string; displayName: string; schemaId: string; objectTypeId: number }> = [];
try {
const rawTypes = await services.schemaRepo.getEnabledObjectTypes();
enabledTypes = rawTypes.map(t => ({
typeName: t.typeName,
displayName: t.displayName,
schemaId: t.schemaId.toString(),
objectTypeId: t.id,
}));
logger.debug(`DebugController: getEnabledObjectTypes returned ${enabledTypes.length} types: ${enabledTypes.map(t => t.typeName).join(', ')}`);
} catch (error) {
logger.error('DebugController: Failed to get enabled types via service', error);
if (error instanceof Error) {
logger.error('Error details:', { message: error.message, stack: error.stack });
}
// Continue without enabled types from service
}
res.json({
allTypes: allTypes.map(t => ({
id: t.id,
typeName: t.type_name,
displayName: t.display_name,
enabled: t.enabled,
jiraTypeId: t.jira_type_id,
schemaId: t.schema_id,
hasTypeName: !!(t.type_name && t.type_name.trim() !== ''),
})),
enabledTypes: enabledTypes.map(t => ({
typeName: t.typeName,
displayName: t.displayName,
schemaId: t.schemaId,
objectTypeId: t.objectTypeId,
})),
summary: {
total: allTypes.length,
enabled: allTypes.filter(t => {
const isPostgres = db.isPostgres === true;
const enabledValue = isPostgres ? (t.enabled === true) : (t.enabled === 1);
return enabledValue && t.type_name && t.type_name.trim() !== '';
}).length,
enabledWithTypeName: enabledTypes.length,
missingTypeName: allTypes.filter(t => !t.type_name || t.type_name.trim() === '').length,
},
});
} catch (error) {
logger.error('DebugController: Failed to get all object types', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Diagnose a specific object type (check database state)
* GET /api/v2/debug/object-types/diagnose/:typeName
* Checks both by type_name and display_name
*/
async diagnoseObjectType(req: Request, res: Response): Promise<void> {
try {
const typeName = Array.isArray(req.params.typeName) ? req.params.typeName[0] : req.params.typeName;
if (!typeName) {
res.status(400).json({ error: 'typeName parameter required' });
return;
}
const services = getServices();
const db = services.schemaRepo.getDatabaseAdapter();
const isPostgres = db.isPostgres === true;
// Check by type_name (exact match)
const byTypeName = await db.query<{
id: number;
schema_id: number;
jira_type_id: number;
type_name: string | null;
display_name: string;
enabled: boolean | number;
description: string | null;
}>(
`SELECT id, schema_id, jira_type_id, type_name, display_name, enabled, description
FROM object_types
WHERE type_name = ?`,
[typeName]
);
// Check by display_name (case-insensitive, partial match)
const byDisplayName = await db.query<{
id: number;
schema_id: number;
jira_type_id: number;
type_name: string | null;
display_name: string;
enabled: boolean | number;
description: string | null;
}>(
`SELECT id, schema_id, jira_type_id, type_name, display_name, enabled, description
FROM object_types
WHERE LOWER(display_name) LIKE LOWER(?)`,
[`%${typeName}%`]
);
// Get schema info for found types
const schemaIds = [...new Set([...byTypeName.map(t => t.schema_id), ...byDisplayName.map(t => t.schema_id)])];
const schemas = schemaIds.length > 0
? await db.query<{ id: number; jira_schema_id: string; name: string }>(
`SELECT id, jira_schema_id, name FROM schemas WHERE id IN (${schemaIds.map(() => '?').join(',')})`,
schemaIds
)
: [];
const schemaMap = new Map(schemas.map(s => [s.id, s]));
// Check enabled types via service
let enabledTypesFromService: string[] = [];
try {
const rawTypes = await services.schemaRepo.getEnabledObjectTypes();
enabledTypesFromService = rawTypes.map((t: { typeName: string }) => t.typeName);
} catch (error) {
logger.error('DebugController: Failed to get enabled types from service', error);
}
// Check if type is in enabled list from service
const isInEnabledList = enabledTypesFromService.includes(typeName as string);
res.json({
requestedType: typeName,
foundByTypeName: byTypeName.map(t => ({
id: t.id,
schemaId: t.schema_id,
jiraSchemaId: schemaMap.get(t.schema_id)?.jira_schema_id,
schemaName: schemaMap.get(t.schema_id)?.name,
jiraTypeId: t.jira_type_id,
typeName: t.type_name,
displayName: t.display_name,
enabled: t.enabled,
enabledValue: isPostgres ? (t.enabled === true) : (t.enabled === 1),
hasTypeName: !!(t.type_name && t.type_name.trim() !== ''),
description: t.description,
})),
foundByDisplayName: byDisplayName.filter(t => !byTypeName.some(t2 => t2.id === t.id)).map(t => ({
id: t.id,
schemaId: t.schema_id,
jiraSchemaId: schemaMap.get(t.schema_id)?.jira_schema_id,
schemaName: schemaMap.get(t.schema_id)?.name,
jiraTypeId: t.jira_type_id,
typeName: t.type_name,
displayName: t.display_name,
enabled: t.enabled,
enabledValue: isPostgres ? (t.enabled === true) : (t.enabled === 1),
hasTypeName: !!(t.type_name && t.type_name.trim() !== ''),
description: t.description,
})),
diagnosis: {
found: byTypeName.length > 0 || byDisplayName.length > 0,
foundExact: byTypeName.length > 0,
foundByDisplay: byDisplayName.length > 0,
isEnabled: byTypeName.length > 0
? (isPostgres ? (byTypeName[0].enabled === true) : (byTypeName[0].enabled === 1))
: byDisplayName.length > 0
? (isPostgres ? (byDisplayName[0].enabled === true) : (byDisplayName[0].enabled === 1))
: false,
hasTypeName: byTypeName.length > 0
? !!(byTypeName[0].type_name && byTypeName[0].type_name.trim() !== '')
: byDisplayName.length > 0
? !!(byDisplayName[0].type_name && byDisplayName[0].type_name.trim() !== '')
: false,
isInEnabledList,
issue: !isInEnabledList && (byTypeName.length > 0 || byDisplayName.length > 0)
? (byTypeName.length > 0 && !(byTypeName[0].type_name && byTypeName[0].type_name.trim() !== '')
? 'Type is enabled in database but has missing type_name (will be filtered out)'
: byTypeName.length > 0 && !(isPostgres ? (byTypeName[0].enabled === true) : (byTypeName[0].enabled === 1))
? 'Type exists but is not enabled in database'
: 'Type exists but not found in enabled list (may have missing type_name)')
: !isInEnabledList && byTypeName.length === 0 && byDisplayName.length === 0
? 'Type not found in database'
: 'No issues detected',
},
enabledTypesCount: enabledTypesFromService.length,
enabledTypesList: enabledTypesFromService,
});
} catch (error) {
logger.error(`DebugController: Failed to diagnose object type ${req.params.typeName}`, error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Fix object types with missing type_name
* POST /api/v2/debug/fix-missing-type-names
* This will try to fix object types that have NULL type_name by looking up by display_name
*/
async fixMissingTypeNames(req: Request, res: Response): Promise<void> {
try {
const services = getServices();
const db = services.schemaRepo.getDatabaseAdapter();
// Find all object types with NULL or empty type_name
// Also check for enabled ones specifically
const isPostgres = db.isPostgres === true;
const enabledCondition = isPostgres ? 'enabled IS true' : 'enabled = 1';
const brokenTypes = await db.query<{
id: number;
jira_type_id: number;
display_name: string;
type_name: string | null;
enabled: boolean | number;
}>(
`SELECT id, jira_type_id, display_name, type_name, enabled
FROM object_types
WHERE (type_name IS NULL OR type_name = '')
ORDER BY enabled DESC, display_name`
);
// Also check enabled types specifically
const enabledWithNullTypeName = await db.query<{
id: number;
jira_type_id: number;
display_name: string;
type_name: string | null;
enabled: boolean | number;
}>(
`SELECT id, jira_type_id, display_name, type_name, enabled
FROM object_types
WHERE (type_name IS NULL OR type_name = '') AND ${enabledCondition}`
);
if (enabledWithNullTypeName.length > 0) {
logger.warn(`DebugController: Found ${enabledWithNullTypeName.length} ENABLED object types with missing type_name: ${enabledWithNullTypeName.map(t => t.display_name).join(', ')}`);
}
logger.info(`DebugController: Found ${brokenTypes.length} object types with missing type_name`);
const fixes: Array<{ id: number; displayName: string; fixedTypeName: string }> = [];
const errors: Array<{ id: number; error: string }> = [];
for (const broken of brokenTypes) {
try {
// Generate type_name from display_name using toPascalCase
const { toPascalCase } = await import('../../services/schemaUtils.js');
const fixedTypeName = toPascalCase(broken.display_name);
if (!fixedTypeName || fixedTypeName.trim() === '') {
errors.push({
id: broken.id,
error: `Could not generate type_name from display_name: "${broken.display_name}"`,
});
continue;
}
// Update the record
await db.execute(
`UPDATE object_types SET type_name = ?, updated_at = ? WHERE id = ?`,
[fixedTypeName, new Date().toISOString(), broken.id]
);
fixes.push({
id: broken.id,
displayName: broken.display_name,
fixedTypeName,
});
logger.info(`DebugController: Fixed object type id=${broken.id}, display_name="${broken.display_name}" -> type_name="${fixedTypeName}"`);
} catch (error) {
errors.push({
id: broken.id,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
// Re-fetch enabled types to verify the fix (services initialized above)
const rawTypes = await services.schemaRepo.getEnabledObjectTypes();
const enabledTypesAfterFix = rawTypes.map(t => t.typeName);
res.json({
success: true,
fixed: fixes.length,
errorCount: errors.length,
fixes,
errors: errors.length > 0 ? errors : undefined,
enabledTypesAfterFix: enabledTypesAfterFix,
note: enabledWithNullTypeName.length > 0
? `Fixed ${enabledWithNullTypeName.length} enabled types that were missing type_name. They should now appear in enabled types list.`
: undefined,
});
} catch (error) {
logger.error('DebugController: Failed to fix missing type names', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
}
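The read-only guard in `executeQuery` above boils down to one string check. Isolated as a function, and worth noting its limits (it is a sketch of the same check, not a replacement for real query sandboxing):

```typescript
// Best-effort read-only guard mirroring executeQuery's check: accept only
// statements whose first keyword is SELECT. Not a SQL parser; it rejects
// legitimate read-only CTEs starting with WITH, and it is no substitute for
// running the endpoint behind auth on debug builds only.
function isSelectOnly(sql: string): boolean {
  return sql.trim().toUpperCase().startsWith('SELECT');
}
```

For example, `WITH t AS (SELECT 1) SELECT * FROM t` is read-only but would be rejected, which is the safe direction for a debug endpoint to err in.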

View File

@@ -0,0 +1,54 @@
/**
* HealthController - API health check endpoint
*
* Public endpoint (no auth required) to check if V2 API is working.
*/
import { Request, Response } from 'express';
import { logger } from '../../services/logger.js';
import { getServices } from '../../services/ServiceFactory.js';
export class HealthController {
/**
* Health check endpoint
* GET /api/v2/health
*/
async health(req: Request, res: Response): Promise<void> {
try {
const services = getServices();
// Check if services are initialized
const isInitialized = !!services.queryService;
// Check database connection (simple query)
let dbConnected = false;
try {
await services.schemaRepo.getAllSchemas();
dbConnected = true;
} catch (error) {
logger.warn('V2 Health: Database connection check failed', error);
}
res.json({
status: 'ok',
apiVersion: 'v2',
timestamp: new Date().toISOString(),
services: {
initialized: isInitialized,
database: dbConnected ? 'connected' : 'disconnected',
},
featureFlag: {
useV2Api: process.env.USE_V2_API === 'true',
},
});
} catch (error) {
logger.error('V2 Health: Health check failed', error);
res.status(500).json({
status: 'error',
apiVersion: 'v2',
timestamp: new Date().toISOString(),
error: 'Health check failed',
});
}
}
}
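The response shape of `GET /api/v2/health` can be built as a pure function, which makes the contract easy to test without standing up Express. A sketch extracted for illustration (field names mirror the controller above; the function itself is not part of the source):

```typescript
// Pure builder for the health payload returned by GET /api/v2/health.
function buildHealthPayload(initialized: boolean, dbConnected: boolean, useV2Api: boolean) {
  return {
    status: 'ok',
    apiVersion: 'v2',
    timestamp: new Date().toISOString(),
    services: {
      initialized,
      // "connected" only when the schema query above succeeded
      database: dbConnected ? 'connected' : 'disconnected',
    },
    featureFlag: { useV2Api },
  };
}
```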

View File

@@ -0,0 +1,176 @@
/**
* ObjectsController - API handlers for object operations
*
* NO SQL, NO parsing - delegates to services.
*/
import { Request, Response } from 'express';
import { logger } from '../../services/logger.js';
import { getServices } from '../../services/ServiceFactory.js';
import type { CMDBObject, CMDBObjectTypeName } from '../../generated/jira-types.js';
import { getParamString, getQueryString, getQueryNumber } from '../../utils/queryHelpers.js';
export class ObjectsController {
/**
* Get a single object by ID or objectKey
* GET /api/v2/objects/:type/:id?refresh=true
* Supports both object ID and objectKey (checks objectKey if ID lookup fails)
*/
async getObject(req: Request, res: Response): Promise<void> {
try {
const type = getParamString(req, 'type');
const idOrKey = getParamString(req, 'id');
const forceRefresh = getQueryString(req, 'refresh') === 'true';
const services = getServices();
// Try to find object ID if idOrKey might be an objectKey
let objectId = idOrKey;
let objRecord = await services.cacheRepo.getObject(idOrKey);
if (!objRecord) {
// Try as objectKey
objRecord = await services.cacheRepo.getObjectByKey(idOrKey);
if (objRecord) {
objectId = objRecord.id;
}
}
// Force refresh if requested
if (forceRefresh && objectId) {
const enabledTypes = await services.schemaRepo.getEnabledObjectTypes();
const enabledTypeSet = new Set(enabledTypes.map(t => t.typeName));
const refreshResult = await services.refreshService.refreshObject(objectId, enabledTypeSet);
if (!refreshResult.success) {
res.status(500).json({ error: refreshResult.error || 'Failed to refresh object' });
return;
}
}
// Get from cache
if (!objectId) {
res.status(404).json({ error: 'Object not found (by ID or key)' });
return;
}
const object = await services.queryService.getObject<CMDBObject>(type as CMDBObjectTypeName, objectId);
if (!object) {
res.status(404).json({ error: 'Object not found' });
return;
}
res.json(object);
} catch (error) {
logger.error('ObjectsController: Failed to get object', error);
res.status(500).json({ error: 'Failed to get object' });
}
}
/**
* Get all objects of a type
* GET /api/v2/objects/:type?limit=100&offset=0&search=term
*/
async getObjects(req: Request, res: Response): Promise<void> {
try {
const type = getParamString(req, 'type');
const limit = getQueryNumber(req, 'limit', 1000);
const offset = getQueryNumber(req, 'offset', 0);
const search = getQueryString(req, 'search');
const services = getServices();
logger.info(`ObjectsController.getObjects: Querying for type="${type}" with limit=${limit}, offset=${offset}, search=${search || 'none'}`);
let objects: CMDBObject[];
if (search) {
objects = await services.queryService.searchByLabel<CMDBObject>(
type as CMDBObjectTypeName,
search,
{ limit, offset }
);
} else {
objects = await services.queryService.getObjects<CMDBObject>(
type as CMDBObjectTypeName,
{ limit, offset }
);
}
const totalCount = await services.queryService.countObjects(type as CMDBObjectTypeName);
logger.info(`ObjectsController.getObjects: Found ${objects.length} objects of type "${type}" (total count: ${totalCount})`);
// If no objects found, provide diagnostic information
if (objects.length === 0) {
// Check what object types actually exist in the database
const db = services.cacheRepo.db;
try {
const availableTypes = await db.query<{ object_type_name: string; count: number }>(
`SELECT object_type_name, COUNT(*) as count
FROM objects
GROUP BY object_type_name
ORDER BY count DESC
LIMIT 10`
);
if (availableTypes.length > 0) {
logger.warn(`ObjectsController.getObjects: No objects found for type "${type}". Available types in database:`, {
requestedType: type,
availableTypes: availableTypes.map(t => ({ typeName: t.object_type_name, count: t.count })),
});
}
} catch (error) {
logger.debug('ObjectsController.getObjects: Failed to query available types', error);
}
}
res.json({
objectType: type,
objects,
count: objects.length,
totalCount,
offset,
limit,
});
} catch (error) {
logger.error('ObjectsController: Failed to get objects', error);
res.status(500).json({ error: 'Failed to get objects' });
}
}
/**
* Update an object
* PUT /api/v2/objects/:type/:id
*/
async updateObject(req: Request, res: Response): Promise<void> {
try {
const type = getParamString(req, 'type');
const id = getParamString(req, 'id');
const updates = req.body as Record<string, unknown>;
const services = getServices();
const result = await services.writeThroughService.updateObject(
type as CMDBObjectTypeName,
id,
updates
);
if (!result.success) {
res.status(400).json({ error: result.error || 'Failed to update object' });
return;
}
// Fetch updated object
const updated = await services.queryService.getObject<CMDBObject>(
type as CMDBObjectTypeName,
id
);
res.json(updated || { success: true });
} catch (error) {
logger.error('ObjectsController: Failed to update object', error);
res.status(500).json({ error: 'Failed to update object' });
}
}
}
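`getObjects` leans on `getQueryNumber` from `utils/queryHelpers.js` for the `limit`/`offset` defaults. That helper's implementation is not shown in this diff; a plausible sketch of what such a parser looks like (hypothetical, for illustration only):

```typescript
// Hypothetical sketch of a query-number parser like getQueryNumber: parse a
// raw query-string value as a non-negative integer, falling back to a default
// when the value is absent or invalid.
function parseQueryNumber(raw: string | undefined, fallback: number): number {
  if (raw === undefined) return fallback;
  const n = Number.parseInt(raw, 10);
  return Number.isNaN(n) || n < 0 ? fallback : n;
}
```

Clamping negatives to the fallback keeps a request like `?offset=-5` from reaching the SQL layer as an invalid `OFFSET`.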

View File

@@ -0,0 +1,289 @@
/**
* SyncController - API handlers for sync operations
*/
import { Request, Response } from 'express';
import { logger } from '../../services/logger.js';
import { getServices } from '../../services/ServiceFactory.js';
export class SyncController {
/**
* Sync all schemas
* POST /api/v2/sync/schemas
*/
async syncSchemas(req: Request, res: Response): Promise<void> {
try {
const services = getServices();
const result = await services.schemaSyncService.syncAll();
res.json({
...result,
success: result.success ?? true,
});
} catch (error) {
logger.error('SyncController: Failed to sync schemas', error);
res.status(500).json({
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Sync all enabled object types
* POST /api/v2/sync/objects
*/
async syncAllObjects(req: Request, res: Response): Promise<void> {
try {
const services = getServices();
// Get enabled types
const rawTypes = await services.schemaRepo.getEnabledObjectTypes();
if (rawTypes.length === 0) {
res.status(400).json({
success: false,
error: 'No object types enabled for syncing. Please configure object types in Schema Configuration.',
});
return;
}
const results = [];
let totalObjectsProcessed = 0;
let totalObjectsCached = 0;
let totalRelations = 0;
// Sync each enabled type
for (const type of rawTypes) {
const result = await services.objectSyncService.syncObjectType(
type.schemaId.toString(),
type.id,
type.typeName,
type.displayName
);
results.push({
typeName: type.typeName,
displayName: type.displayName,
...result,
});
totalObjectsProcessed += result.objectsProcessed;
totalObjectsCached += result.objectsCached;
totalRelations += result.relationsExtracted;
}
res.json({
success: true,
stats: results,
totalObjectsProcessed,
totalObjectsCached,
totalRelations,
});
} catch (error) {
logger.error('SyncController: Failed to sync objects', error);
res.status(500).json({
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
/**
* Sync a specific object type
* POST /api/v2/sync/objects/:typeName
*/
async syncObjectType(req: Request, res: Response): Promise<void> {
try {
const typeName = Array.isArray(req.params.typeName) ? req.params.typeName[0] : req.params.typeName;
if (!typeName) {
res.status(400).json({ error: 'typeName parameter required' });
return;
}
const services = getServices();
// Get enabled types
let rawTypes = await services.schemaRepo.getEnabledObjectTypes();
let enabledTypes = rawTypes.map(t => ({
typeName: t.typeName,
displayName: t.displayName,
schemaId: t.schemaId.toString(),
objectTypeId: t.id,
}));
// Filter out any entries with missing typeName
enabledTypes = enabledTypes.filter((t: { typeName?: string }) => t && t.typeName);
// Debug logging - also check database directly
logger.info(`SyncController: Looking for type "${typeName}" in ${enabledTypes.length} enabled types`);
logger.debug(`SyncController: Enabled types: ${JSON.stringify(enabledTypes.map((t: { typeName?: string; displayName?: string }) => ({ typeName: t?.typeName, displayName: t?.displayName })))}`);
// Additional debug: Check database directly for enabled types (including those with missing type_name)
const db = services.schemaRepo.getDatabaseAdapter();
const isPostgres = db.isPostgres === true;
const enabledCondition = isPostgres ? 'enabled IS true' : 'enabled = 1';
const dbCheck = await db.query<{ type_name: string | null; display_name: string; enabled: boolean | number; id: number; jira_type_id: number }>(
`SELECT id, jira_type_id, type_name, display_name, enabled FROM object_types WHERE ${enabledCondition}`
);
logger.info(`SyncController: Found ${dbCheck.length} enabled types in database (raw check)`);
logger.debug(`SyncController: Database enabled types (raw): ${JSON.stringify(dbCheck.map(t => ({ id: t.id, displayName: t.display_name, typeName: t.type_name, hasTypeName: !!(t.type_name && t.type_name.trim() !== '') })))}`);
// Check if AzureSubscription or similar is enabled but missing type_name
const typeNameLower = typeName.toLowerCase();
const matchingByDisplayName = dbCheck.filter((t: { display_name: string }) =>
t.display_name.toLowerCase().includes(typeNameLower) ||
typeNameLower.includes(t.display_name.toLowerCase())
);
if (matchingByDisplayName.length > 0) {
logger.warn(`SyncController: Found enabled type(s) matching "${typeName}" by display_name but not in enabled list:`, {
matches: matchingByDisplayName.map(t => ({
id: t.id,
displayName: t.display_name,
typeName: t.type_name,
hasTypeName: !!(t.type_name && t.type_name.trim() !== ''),
enabled: t.enabled,
})),
});
}
const type = enabledTypes.find((t: { typeName?: string }) => t && t.typeName === typeName);
if (!type) {
// Check if type exists but is not enabled or has missing type_name
const allType = await services.schemaRepo.getObjectTypeByTypeName(typeName);
if (allType) {
// Debug: Check the actual enabled value and query
const enabledValue = allType.enabled;
const enabledType = typeof enabledValue;
logger.warn(`SyncController: Type "${typeName}" found but not in enabled list. enabled=${enabledValue} (type: ${enabledType}), enabledTypes.length=${enabledTypes.length}`);
logger.debug(`SyncController: Enabled types details: ${JSON.stringify(enabledTypes)}`);
// Try to find it with different case (handle undefined typeName)
const typeNameLower = typeName.toLowerCase();
const caseInsensitiveMatch = enabledTypes.find((t: { typeName?: string }) => t && t.typeName && t.typeName.toLowerCase() === typeNameLower);
if (caseInsensitiveMatch) {
logger.warn(`SyncController: Found type with different case: "${caseInsensitiveMatch.typeName}" vs "${typeName}"`);
// Use the found type with correct case
const result = await services.objectSyncService.syncObjectType(
caseInsensitiveMatch.schemaId.toString(),
caseInsensitiveMatch.objectTypeId,
caseInsensitiveMatch.typeName,
caseInsensitiveMatch.displayName
);
res.json({
success: true,
...result,
hasErrors: result.errors.length > 0,
note: `Type name case corrected: "${typeName}" -> "${caseInsensitiveMatch.typeName}"`,
});
return;
}
// Direct SQL query to verify enabled status and type_name
const db = services.schemaRepo.getDatabaseAdapter();
const isPostgres = db.isPostgres === true;
const rawCheck = await db.queryOne<{ enabled: boolean | number; type_name: string | null; display_name: string }>(
`SELECT enabled, type_name, display_name FROM object_types WHERE type_name = ?`,
[typeName]
);
// Check if type is enabled but missing type_name in enabled list (might be filtered out)
const enabledCondition = isPostgres ? 'enabled IS true' : 'enabled = 1';
// ILIKE is PostgreSQL-only; SQLite's LIKE is already case-insensitive for ASCII
const likeOperator = isPostgres ? 'ILIKE' : 'LIKE';
const enabledWithMissingTypeName = await db.query<{ display_name: string; type_name: string | null; enabled: boolean | number }>(
`SELECT display_name, type_name, enabled FROM object_types WHERE display_name ${likeOperator} ? AND ${enabledCondition}`,
[`%${typeName}%`]
);
// Get list of all enabled type names for better error message
const enabledTypeNames = enabledTypes.map((t: { typeName?: string }) => t.typeName).filter(Boolean) as string[];
// Check if the issue is that the type is enabled but has a missing type_name
if (rawCheck && (rawCheck.enabled === true || rawCheck.enabled === 1)) {
if (!rawCheck.type_name || rawCheck.type_name.trim() === '') {
res.status(400).json({
success: false,
error: `Object type "${typeName}" is enabled in the database but has a missing or empty type_name. This prevents it from being synced. Please run schema sync again to fix the type_name, or use the "Fix Missing Type Names" debug tool (Settings → Debug).`,
details: {
requestedType: typeName,
displayName: rawCheck.display_name,
enabledInDatabase: rawCheck.enabled,
typeNameInDatabase: rawCheck.type_name,
enabledTypesCount: enabledTypes.length,
enabledTypeNames: enabledTypeNames,
hint: 'Run schema sync to ensure all object types have a valid type_name, or use the Debug page to fix missing type names.',
},
});
return;
}
}
res.status(400).json({
success: false,
error: `Object type "${typeName}" is not enabled for syncing. Currently enabled types: ${enabledTypeNames.length > 0 ? enabledTypeNames.join(', ') : 'none'}. Please enable "${typeName}" in Schema Configuration settings (Settings → Schema Configuration).`,
details: {
requestedType: typeName,
enabledInDatabase: rawCheck?.enabled,
typeNameInDatabase: rawCheck?.type_name,
enabledTypesCount: enabledTypes.length,
enabledTypeNames: enabledTypeNames,
hint: enabledTypeNames.length === 0
? 'No object types are currently enabled. Please enable at least one object type in Schema Configuration.'
: `You enabled: ${enabledTypeNames.join(', ')}. Please enable "${typeName}" if you want to sync it.`,
},
});
} else {
// Type not found by type_name - check by display_name (case-insensitive)
const db = services.schemaRepo.getDatabaseAdapter();
// ILIKE is PostgreSQL-only; fall back to LIKE on SQLite
const likeOperator = db.isPostgres === true ? 'ILIKE' : 'LIKE';
const byDisplayName = await db.queryOne<{ enabled: boolean | number; type_name: string | null; display_name: string }>(
`SELECT enabled, type_name, display_name FROM object_types WHERE display_name ${likeOperator} ? LIMIT 1`,
[`%${typeName}%`]
);
if (byDisplayName && (byDisplayName.enabled === true || byDisplayName.enabled === 1)) {
// Type is enabled but type_name might be missing or different
res.status(400).json({
success: false,
error: `Found enabled type "${byDisplayName.display_name}" but it has ${byDisplayName.type_name ? `type_name="${byDisplayName.type_name}"` : 'missing type_name'}. ${!byDisplayName.type_name ? 'Please run schema sync to fix the type_name, or use the "Fix Missing Type Names" debug tool.' : `Please use the correct type_name: "${byDisplayName.type_name}"`}`,
details: {
requestedType: typeName,
foundDisplayName: byDisplayName.display_name,
foundTypeName: byDisplayName.type_name,
enabledInDatabase: byDisplayName.enabled,
hint: !byDisplayName.type_name
? 'Run schema sync to ensure all object types have a valid type_name.'
: `Use type_name "${byDisplayName.type_name}" instead of "${typeName}"`,
},
});
return;
}
res.status(400).json({
success: false,
error: `Object type ${typeName} not found. Available enabled types: ${enabledTypes.map((t: { typeName?: string }) => t.typeName).filter(Boolean).join(', ') || 'none'}. Please run schema sync first.`,
});
}
return;
}
const result = await services.objectSyncService.syncObjectType(
type.schemaId.toString(),
type.objectTypeId,
type.typeName,
type.displayName
);
// Return success even if there are errors (errors are in result.errors array)
res.json({
success: true,
...result,
hasErrors: result.errors.length > 0,
});
} catch (error) {
logger.error(`SyncController: Failed to sync object type ${req.params.typeName}`, error);
res.status(500).json({
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
}
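The controller above branches twice on the database dialect: PostgreSQL stores `enabled` as a native boolean while SQLite stores 0/1 integers, so both the SQL predicate and the returned column value need dialect-aware handling. A minimal standalone sketch of both helpers (names are illustrative, not from the codebase):

```typescript
// Dialect-aware predicate: pg has native booleans, SQLite uses 0/1 integers.
function enabledPredicate(isPostgres: boolean): string {
  return isPostgres ? 'enabled IS true' : 'enabled = 1';
}

// Driver-dependent value normalization: the pg driver returns true/false,
// while SQLite drivers return 1/0 for the same column.
function isEnabled(value: boolean | number): boolean {
  return value === true || value === 1;
}
```

This mirrors the `enabledCondition` string and the `enabled === true || enabled === 1` checks scattered through the controller; centralizing them would avoid drift between the query and the value check.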


@@ -0,0 +1,47 @@
/**
* V2 API Routes - New refactored architecture
*
* Feature flag: USE_V2_API=true enables these routes
*/
import { Router } from 'express';
import { ObjectsController } from '../controllers/ObjectsController.js';
import { SyncController } from '../controllers/SyncController.js';
import { HealthController } from '../controllers/HealthController.js';
import { DebugController } from '../controllers/DebugController.js';
import { requireAuth, requirePermission } from '../../middleware/authorization.js';
const router = Router();
const objectsController = new ObjectsController();
const syncController = new SyncController();
const healthController = new HealthController();
const debugController = new DebugController();
// Health check - public endpoint (no auth required)
router.get('/health', (req, res) => healthController.health(req, res));
// All other routes require authentication
router.use(requireAuth);
// Object routes
router.get('/objects/:type', requirePermission('search'), (req, res) => objectsController.getObjects(req, res));
router.get('/objects/:type/:id', requirePermission('search'), (req, res) => objectsController.getObject(req, res));
router.put('/objects/:type/:id', requirePermission('write'), (req, res) => objectsController.updateObject(req, res));
// Sync routes
router.post('/sync/schemas', requirePermission('admin'), (req, res) => syncController.syncSchemas(req, res));
router.post('/sync/objects', requirePermission('admin'), (req, res) => syncController.syncAllObjects(req, res));
router.post('/sync/objects/:typeName', requirePermission('admin'), (req, res) => syncController.syncObjectType(req, res));
// Debug routes (admin only)
// IMPORTANT: More specific routes must come BEFORE parameterized routes
router.post('/debug/query', requirePermission('admin'), (req, res) => debugController.executeQuery(req, res));
router.get('/debug/objects', requirePermission('admin'), (req, res) => debugController.getObjectInfo(req, res));
router.get('/debug/relations', requirePermission('admin'), (req, res) => debugController.getRelationInfo(req, res));
router.get('/debug/all-object-types', requirePermission('admin'), (req, res) => debugController.getAllObjectTypes(req, res));
router.post('/debug/fix-missing-type-names', requirePermission('admin'), (req, res) => debugController.fixMissingTypeNames(req, res));
// Specific routes before parameterized routes
router.get('/debug/object-types/diagnose/:typeName', requirePermission('admin'), (req, res) => debugController.diagnoseObjectType(req, res));
router.get('/debug/object-types/:typeName/stats', requirePermission('admin'), (req, res) => debugController.getObjectTypeStats(req, res));
export default router;
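The ordering note in the debug routes above matters because Express tries routes in registration order: a parameterized segment registered first will capture requests meant for a literal segment. A simplified, self-contained sketch of that matching rule (illustrative only, not Express internals):

```typescript
// Patterns are tried in registration order; ':' segments match anything.
type Route = { pattern: string; name: string };

function matchRoute(routes: Route[], path: string): string | undefined {
  for (const { pattern, name } of routes) {
    const p = pattern.split('/');
    const s = path.split('/');
    if (p.length !== s.length) continue;
    if (p.every((seg, i) => seg.startsWith(':') || seg === s[i])) return name;
  }
  return undefined;
}

// Registered in the safe order: the literal 'diagnose' segment first.
const routes: Route[] = [
  { pattern: '/debug/object-types/diagnose/:typeName', name: 'diagnose' },
  { pattern: '/debug/object-types/:typeName/stats', name: 'stats' },
];
```

With the order reversed, a request like `/debug/object-types/diagnose/stats` would match the `stats` route with `typeName = 'diagnose'`, which is exactly the shadowing the comment warns about.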


@@ -28,14 +28,13 @@ export type JiraAuthMethod = 'pat' | 'oauth';
interface Config {
// Jira Assets
jiraHost: string;
jiraSchemaId: string;
// Jira Service Account Token (for read operations: sync, fetching data)
jiraServiceAccountToken: string;
// Jira Authentication Method ('pat' or 'oauth')
jiraAuthMethod: JiraAuthMethod;
// Jira Personal Access Token (used when jiraAuthMethod = 'pat')
jiraPat: string;
// Jira OAuth 2.0 Configuration (used when jiraAuthMethod = 'oauth')
jiraOAuthClientId: string;
jiraOAuthClientSecret: string;
@@ -45,14 +44,9 @@ interface Config {
// Session Configuration
sessionSecret: string;
// AI API Keys
anthropicApiKey: string;
openaiApiKey: string;
defaultAIProvider: 'claude' | 'openai';
// Web Search API (Tavily)
tavilyApiKey: string;
enableWebSearch: boolean;
// AI Configuration
// Note: API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY, TAVILY_API_KEY), default AI provider,
// and web search are now configured per-user in their profile settings, not in environment variables
// Application
port: number;
@@ -60,6 +54,9 @@ interface Config {
isDevelopment: boolean;
isProduction: boolean;
frontendUrl: string;
appName: string;
appTagline: string;
appCopyright: string;
// API Configuration
jiraApiBatchSize: number;
@@ -69,9 +66,9 @@ function getOptionalEnvVar(name: string, defaultValue: string = ''): string {
return process.env[name] || defaultValue;
}
// Helper to determine auth method with backward compatibility
// Helper to determine auth method
function getJiraAuthMethod(): JiraAuthMethod {
// Check new JIRA_AUTH_METHOD first
// Check JIRA_AUTH_METHOD first
const authMethod = getOptionalEnvVar('JIRA_AUTH_METHOD', '').toLowerCase();
if (authMethod === 'oauth') return 'oauth';
if (authMethod === 'pat') return 'pat';
@@ -80,28 +77,25 @@ function getJiraAuthMethod(): JiraAuthMethod {
const oauthEnabled = getOptionalEnvVar('JIRA_OAUTH_ENABLED', 'false').toLowerCase() === 'true';
if (oauthEnabled) return 'oauth';
// Default to 'pat' if JIRA_PAT is set, otherwise 'oauth' if OAuth credentials exist
const hasPat = !!getOptionalEnvVar('JIRA_PAT');
// Default to 'oauth' if OAuth credentials exist, otherwise 'pat'
const hasOAuthCredentials = !!getOptionalEnvVar('JIRA_OAUTH_CLIENT_ID') && !!getOptionalEnvVar('JIRA_OAUTH_CLIENT_SECRET');
if (hasPat) return 'pat';
if (hasOAuthCredentials) return 'oauth';
// Default to 'pat' (will show warning during validation)
// Default to 'pat' (users configure PAT in their profile)
return 'pat';
}
export const config: Config = {
// Jira Assets
jiraHost: getOptionalEnvVar('JIRA_HOST', 'https://jira.zuyderland.nl'),
jiraSchemaId: getOptionalEnvVar('JIRA_SCHEMA_ID'),
// Jira Service Account Token (for read operations: sync, fetching data)
jiraServiceAccountToken: getOptionalEnvVar('JIRA_SERVICE_ACCOUNT_TOKEN'),
// Jira Authentication Method
jiraAuthMethod: getJiraAuthMethod(),
// Jira Personal Access Token (for PAT authentication)
jiraPat: getOptionalEnvVar('JIRA_PAT'),
// Jira OAuth 2.0 Configuration (for OAuth authentication)
jiraOAuthClientId: getOptionalEnvVar('JIRA_OAUTH_CLIENT_ID'),
jiraOAuthClientSecret: getOptionalEnvVar('JIRA_OAUTH_CLIENT_SECRET'),
@@ -111,21 +105,15 @@ export const config: Config = {
// Session Configuration
sessionSecret: getOptionalEnvVar('SESSION_SECRET', 'change-this-secret-in-production'),
// AI API Keys
anthropicApiKey: getOptionalEnvVar('ANTHROPIC_API_KEY'),
openaiApiKey: getOptionalEnvVar('OPENAI_API_KEY'),
defaultAIProvider: (getOptionalEnvVar('DEFAULT_AI_PROVIDER', 'claude') as 'claude' | 'openai'),
// Web Search API (Tavily)
tavilyApiKey: getOptionalEnvVar('TAVILY_API_KEY'),
enableWebSearch: getOptionalEnvVar('ENABLE_WEB_SEARCH', 'false').toLowerCase() === 'true',
// Application
port: parseInt(getOptionalEnvVar('PORT', '3001'), 10),
nodeEnv: getOptionalEnvVar('NODE_ENV', 'development'),
isDevelopment: getOptionalEnvVar('NODE_ENV', 'development') === 'development',
isProduction: getOptionalEnvVar('NODE_ENV', 'development') === 'production',
frontendUrl: getOptionalEnvVar('FRONTEND_URL', 'http://localhost:5173'),
appName: getOptionalEnvVar('APP_NAME', 'CMDB Insight'),
appTagline: getOptionalEnvVar('APP_TAGLINE', 'Management console for Jira Assets'),
appCopyright: getOptionalEnvVar('APP_COPYRIGHT', '© {year} Zuyderland Medisch Centrum'),
// API Configuration
jiraApiBatchSize: parseInt(getOptionalEnvVar('JIRA_API_BATCH_SIZE', '15'), 10),
@@ -139,9 +127,7 @@ export function validateConfig(): void {
console.log(`Jira Authentication Method: ${config.jiraAuthMethod.toUpperCase()}`);
if (config.jiraAuthMethod === 'pat') {
if (!config.jiraPat) {
missingVars.push('JIRA_PAT (required for PAT authentication)');
}
// JIRA_PAT is configured in user profiles, not in ENV
} else if (config.jiraAuthMethod === 'oauth') {
if (!config.jiraOAuthClientId) {
missingVars.push('JIRA_OAUTH_CLIENT_ID (required for OAuth authentication)');
@@ -154,9 +140,10 @@ export function validateConfig(): void {
}
}
// General required config
if (!config.jiraSchemaId) missingVars.push('JIRA_SCHEMA_ID');
if (!config.anthropicApiKey) warnings.push('ANTHROPIC_API_KEY not set - AI classification disabled');
// Service account token warning (not required, but recommended for sync operations)
if (!config.jiraServiceAccountToken) {
warnings.push('JIRA_SERVICE_ACCOUNT_TOKEN not configured - sync and read operations may not work. Users can still use their personal PAT for reads as fallback.');
}
if (warnings.length > 0) {
warnings.forEach(w => console.warn(`Warning: ${w}`));
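The auth-method resolution changed in this diff reduces to a fixed precedence: an explicit `JIRA_AUTH_METHOD` wins, then the legacy `JIRA_OAUTH_ENABLED` flag, then the presence of OAuth credentials, then the `'pat'` default. A hedged standalone sketch (the `env` parameter stands in for `process.env` for testability; not the exact function from `env.ts`):

```typescript
type JiraAuthMethod = 'pat' | 'oauth';

// Resolution order: explicit setting > legacy flag > OAuth credentials > 'pat'.
function resolveAuthMethod(env: Record<string, string | undefined>): JiraAuthMethod {
  const explicit = (env.JIRA_AUTH_METHOD || '').toLowerCase();
  if (explicit === 'oauth' || explicit === 'pat') return explicit as JiraAuthMethod;
  if ((env.JIRA_OAUTH_ENABLED || '').toLowerCase() === 'true') return 'oauth';
  if (env.JIRA_OAUTH_CLIENT_ID && env.JIRA_OAUTH_CLIENT_SECRET) return 'oauth';
  return 'pat'; // users configure their PAT in profile settings
}
```

Note the commit removes the old `hasPat` check, so a configured `JIRA_PAT` no longer influences the default; only the OAuth credentials do.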


@@ -0,0 +1,121 @@
// ==========================
// API Payload Types
// ==========================
export interface AssetsPayload {
objectEntries: ObjectEntry[];
}
export interface ObjectEntry {
id: string | number;
objectKey: string;
label: string;
objectType: {
id: number;
name: string;
};
created: string;
updated: string;
hasAvatar: boolean;
timestamp: number;
attributes?: ObjectAttribute[];
}
export interface ObjectAttribute {
id: number;
objectTypeAttributeId: number;
objectAttributeValues: ObjectAttributeValue[];
}
// ==========================
// Attribute Value Union
// ==========================
export type ObjectAttributeValue =
| SimpleValue
| StatusValue
| ConfluenceValue
| UserValue
| ReferenceValue;
export interface SimpleValue {
value: string | number | boolean;
searchValue: string;
referencedType: false;
displayValue: string;
}
export interface StatusValue {
status: { id: number; name: string; category: number };
searchValue: string;
referencedType: boolean;
displayValue: string;
}
export interface ConfluenceValue {
confluencePage: { id: string; title: string; url: string };
searchValue: string;
referencedType: boolean;
displayValue: string;
}
export interface UserValue {
user: {
avatarUrl: string;
displayName: string;
name: string;
key: string;
renderedLink: string;
isDeleted: boolean;
};
searchValue: string;
referencedType: boolean;
displayValue: string;
}
export interface ReferenceValue {
referencedObject: ReferencedObject;
searchValue: string;
referencedType: true;
displayValue: string;
}
export interface ReferencedObject {
id: string | number;
objectKey: string;
label: string;
name?: string;
archived?: boolean;
objectType: {
id: number;
name: string;
};
created: string;
updated: string;
timestamp: number;
hasAvatar: boolean;
attributes?: ObjectAttribute[];
_links?: { self: string };
}
// ==========================
// Type Guards (MANDATORY)
// ==========================
export function isReferenceValue(
v: ObjectAttributeValue
): v is ReferenceValue {
return (v as ReferenceValue).referencedObject !== undefined;
}
export function isSimpleValue(
v: ObjectAttributeValue
): v is SimpleValue {
return (v as SimpleValue).value !== undefined;
}
export function hasAttributes(
obj: ObjectEntry | ReferencedObject
): obj is (ObjectEntry | ReferencedObject) & { attributes: ObjectAttribute[] } {
return Array.isArray((obj as any).attributes);
}
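A quick sketch of how these guards discriminate the union when flattening attribute values. The types and one guard are repeated inline (reduced to the fields needed for narrowing) so the snippet stands alone:

```typescript
// Minimal inline copies of the union members from jiraAssetsPayload.ts.
interface SimpleValue { value: string | number | boolean; displayValue: string; }
interface ReferenceValue { referencedObject: { objectKey: string; label: string }; displayValue: string; }
type AttrValue = SimpleValue | ReferenceValue;

function isReferenceValue(v: AttrValue): v is ReferenceValue {
  return (v as ReferenceValue).referencedObject !== undefined;
}

// Flatten an attribute value to a display string, narrowing first so the
// compiler permits access to referencedObject only on the reference branch.
function flatten(v: AttrValue): string {
  if (isReferenceValue(v)) {
    return `${v.referencedObject.label} (${v.referencedObject.objectKey})`;
  }
  return v.displayValue;
}
```

Without the guard, TypeScript would reject `v.referencedObject` on the union type, which is why the file marks the guards as mandatory.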


@@ -0,0 +1,38 @@
/**
* Sync Policy - Determines how objects are handled during sync
*/
export enum SyncPolicy {
/**
* Full sync: fetch all objects, cache all attributes
* Used for enabled object types in schema configuration
*/
ENABLED = 'enabled',
/**
* Reference-only: cache minimal metadata for referenced objects
* Used for disabled object types that are referenced by enabled types
*/
REFERENCE_ONLY = 'reference_only',
/**
* Skip: don't sync this object type at all
* Used for object types not in use
*/
SKIP = 'skip',
}
/**
* Get sync policy for an object type
*/
export function getSyncPolicy(
typeName: string,
enabledTypes: Set<string>
): SyncPolicy {
if (enabledTypes.has(typeName)) {
return SyncPolicy.ENABLED;
}
// We still need to cache referenced objects, even if their type is disabled
// This allows reference resolution without full sync
return SyncPolicy.REFERENCE_ONLY;
}
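Usage is straightforward: enabled types get a full sync, everything else falls through to reference-only caching so reference resolution still works. A standalone sketch with the enum and function repeated inline (matching `syncPolicy.ts` above):

```typescript
enum SyncPolicy {
  ENABLED = 'enabled',
  REFERENCE_ONLY = 'reference_only',
  SKIP = 'skip',
}

// Enabled types are fully synced; all others are cached reference-only
// so references can be resolved without a full sync. SKIP is reserved.
function getSyncPolicy(typeName: string, enabledTypes: Set<string>): SyncPolicy {
  return enabledTypes.has(typeName) ? SyncPolicy.ENABLED : SyncPolicy.REFERENCE_ONLY;
}

const enabled = new Set(['Application', 'Server']);
```

Note that as written the function never returns `SyncPolicy.SKIP`; that value is reserved for types that should not be synced at all.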


@@ -8,18 +8,12 @@
-- =============================================================================
-- Core Tables
-- =============================================================================
-- Cached CMDB objects (all types stored in single table with JSON data)
CREATE TABLE IF NOT EXISTS cached_objects (
id TEXT PRIMARY KEY,
object_key TEXT NOT NULL UNIQUE,
object_type TEXT NOT NULL,
label TEXT NOT NULL,
data JSONB NOT NULL,
jira_updated_at TEXT,
jira_created_at TEXT,
cached_at TEXT NOT NULL
);
--
-- NOTE: This schema is LEGACY and deprecated.
-- The current system uses the normalized schema defined in
-- backend/src/services/database/normalized-schema.ts
--
-- This file is kept for reference and migration purposes only.
-- Object relations (references between objects)
CREATE TABLE IF NOT EXISTS object_relations (
@@ -43,12 +37,6 @@ CREATE TABLE IF NOT EXISTS sync_metadata (
-- Indices for Performance
-- =============================================================================
CREATE INDEX IF NOT EXISTS idx_objects_type ON cached_objects(object_type);
CREATE INDEX IF NOT EXISTS idx_objects_key ON cached_objects(object_key);
CREATE INDEX IF NOT EXISTS idx_objects_updated ON cached_objects(jira_updated_at);
CREATE INDEX IF NOT EXISTS idx_objects_label ON cached_objects(label);
CREATE INDEX IF NOT EXISTS idx_objects_data_gin ON cached_objects USING GIN (data);
CREATE INDEX IF NOT EXISTS idx_relations_source ON object_relations(source_id);
CREATE INDEX IF NOT EXISTS idx_relations_target ON object_relations(target_id);
CREATE INDEX IF NOT EXISTS idx_relations_source_type ON object_relations(source_type);


@@ -7,18 +7,12 @@
-- =============================================================================
-- Core Tables
-- =============================================================================
-- Cached CMDB objects (all types stored in single table with JSON data)
CREATE TABLE IF NOT EXISTS cached_objects (
id TEXT PRIMARY KEY,
object_key TEXT NOT NULL UNIQUE,
object_type TEXT NOT NULL,
label TEXT NOT NULL,
data JSON NOT NULL,
jira_updated_at TEXT,
jira_created_at TEXT,
cached_at TEXT NOT NULL
);
--
-- NOTE: This schema is LEGACY and deprecated.
-- The current system uses the normalized schema defined in
-- backend/src/services/database/normalized-schema.ts
--
-- This file is kept for reference and migration purposes only.
-- Object relations (references between objects)
CREATE TABLE IF NOT EXISTS object_relations (
@@ -42,11 +36,6 @@ CREATE TABLE IF NOT EXISTS sync_metadata (
-- Indices for Performance
-- =============================================================================
CREATE INDEX IF NOT EXISTS idx_objects_type ON cached_objects(object_type);
CREATE INDEX IF NOT EXISTS idx_objects_key ON cached_objects(object_key);
CREATE INDEX IF NOT EXISTS idx_objects_updated ON cached_objects(jira_updated_at);
CREATE INDEX IF NOT EXISTS idx_objects_label ON cached_objects(label);
CREATE INDEX IF NOT EXISTS idx_relations_source ON object_relations(source_id);
CREATE INDEX IF NOT EXISTS idx_relations_target ON object_relations(target_id);
CREATE INDEX IF NOT EXISTS idx_relations_source_type ON object_relations(source_type);

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -6,7 +6,6 @@ import cookieParser from 'cookie-parser';
import { config, validateConfig } from './config/env.js';
import { logger } from './services/logger.js';
import { dataService } from './services/dataService.js';
import { syncEngine } from './services/syncEngine.js';
import { cmdbService } from './services/cmdbService.js';
import applicationsRouter from './routes/applications.js';
import classificationsRouter from './routes/classifications.js';
@@ -14,10 +13,17 @@ import referenceDataRouter from './routes/referenceData.js';
import dashboardRouter from './routes/dashboard.js';
import configurationRouter from './routes/configuration.js';
import authRouter, { authMiddleware } from './routes/auth.js';
import usersRouter from './routes/users.js';
import rolesRouter from './routes/roles.js';
import userSettingsRouter from './routes/userSettings.js';
import profileRouter from './routes/profile.js';
import searchRouter from './routes/search.js';
import cacheRouter from './routes/cache.js';
import objectsRouter from './routes/objects.js';
import schemaRouter from './routes/schema.js';
import dataValidationRouter from './routes/dataValidation.js';
import schemaConfigurationRouter from './routes/schemaConfiguration.js';
import { runMigrations } from './services/database/migrations.js';
// Validate configuration
validateConfig();
@@ -55,14 +61,66 @@ app.use((req, res, next) => {
// Auth middleware - extract session info for all requests
app.use(authMiddleware);
// Set user token on CMDBService for each request (for user-specific OAuth)
app.use((req, res, next) => {
// Set user's OAuth token if available
// Set user token and settings on services for each request
app.use(async (req, res, next) => {
// Set user's OAuth token if available (for OAuth sessions)
let userToken: string | null = null;
if (req.accessToken) {
cmdbService.setUserToken(req.accessToken);
userToken = req.accessToken;
}
// Clear token after response is sent
// Set user's Jira PAT and AI keys if user is authenticated and has local account
if (req.user && 'id' in req.user) {
try {
const { userSettingsService } = await import('./services/userSettingsService.js');
const settings = await userSettingsService.getUserSettings(req.user.id);
if (settings?.jira_pat) {
// Use user's Jira PAT from profile settings (preferred for writes)
userToken = settings.jira_pat;
} else if (config.jiraServiceAccountToken) {
// Fallback to service account token if user doesn't have PAT configured
// This allows writes to work when JIRA_SERVICE_ACCOUNT_TOKEN is set in .env
userToken = config.jiraServiceAccountToken;
logger.debug('Using service account token as fallback (user PAT not configured)');
}
// Store user settings in request for services to access
(req as any).userSettings = settings;
} catch (error) {
// If user settings can't be loaded, try service account token as fallback
logger.debug('Failed to load user settings:', error);
if (config.jiraServiceAccountToken) {
userToken = config.jiraServiceAccountToken;
logger.debug('Using service account token as fallback (user settings load failed)');
}
}
}
// Set token on old services (for backward compatibility)
if (userToken) {
cmdbService.setUserToken(userToken);
} else {
cmdbService.setUserToken(null);
}
// Set token on new V2 infrastructure client (if feature flag enabled)
if (process.env.USE_V2_API === 'true') {
try {
const { jiraAssetsClient } = await import('./infrastructure/jira/JiraAssetsClient.js');
jiraAssetsClient.setRequestToken(userToken);
// Clear token after response
res.on('finish', () => {
jiraAssetsClient.clearRequestToken();
});
} catch (error) {
// V2 API not loaded - ignore
}
}
// Clear token after response is sent (for old services)
res.on('finish', () => {
cmdbService.clearUserToken();
});
@@ -78,9 +136,9 @@ app.get('/health', async (req, res) => {
res.json({
status: 'ok',
timestamp: new Date().toISOString(),
dataSource: dataService.isUsingJiraAssets() ? 'jira-assets-cached' : 'mock-data',
jiraConnected: dataService.isUsingJiraAssets() ? jiraConnected : null,
aiConfigured: !!config.anthropicApiKey,
dataSource: 'jira-assets-cached', // Always uses Jira Assets (mock data removed)
jiraConnected: jiraConnected,
aiConfigured: true, // AI is configured per-user in profile settings
cache: {
isWarm: cacheStatus.isWarm,
objectCount: cacheStatus.totalObjects,
@@ -98,6 +156,10 @@ app.get('/api/config', (req, res) => {
// API routes
app.use('/api/auth', authRouter);
app.use('/api/users', usersRouter);
app.use('/api/roles', rolesRouter);
app.use('/api/user-settings', userSettingsRouter);
app.use('/api/profile', profileRouter);
app.use('/api/applications', applicationsRouter);
app.use('/api/classifications', classificationsRouter);
app.use('/api/reference-data', referenceDataRouter);
@@ -107,6 +169,38 @@ app.use('/api/search', searchRouter);
app.use('/api/cache', cacheRouter);
app.use('/api/objects', objectsRouter);
app.use('/api/schema', schemaRouter);
app.use('/api/data-validation', dataValidationRouter);
app.use('/api/schema-configuration', schemaConfigurationRouter);
// V2 API routes (new refactored architecture) - Feature flag: USE_V2_API
const useV2Api = process.env.USE_V2_API === 'true';
const useV2ApiEnv = process.env.USE_V2_API || 'not set';
logger.info(`V2 API feature flag: USE_V2_API=${useV2ApiEnv} (enabled: ${useV2Api})`);
if (useV2Api) {
try {
logger.debug('Loading V2 API routes from ./api/routes/v2.js...');
const v2Router = (await import('./api/routes/v2.js')).default;
if (!v2Router) {
logger.error('❌ V2 API router is undefined - route file did not export default router');
} else {
app.use('/api/v2', v2Router);
logger.info('✅ V2 API routes enabled and mounted at /api/v2');
logger.debug('V2 API router type:', typeof v2Router, 'is function:', typeof v2Router === 'function');
}
} catch (error) {
logger.error('❌ Failed to load V2 API routes', error);
if (error instanceof Error) {
logger.error('Error details:', {
message: error.message,
stack: error.stack,
name: error.name,
});
}
}
} else {
logger.info(`V2 API routes disabled (USE_V2_API=${useV2ApiEnv}, set USE_V2_API=true to enable)`);
}
// Error handling
app.use((err: Error, req: express.Request, res: express.Response, next: express.NextFunction) => {
@@ -119,7 +213,20 @@ app.use((err: Error, req: express.Request, res: express.Response, next: express.
// 404 handler
app.use((req, res) => {
res.status(404).json({ error: 'Not found' });
// Provide helpful error messages for V2 API routes
if (req.path.startsWith('/api/v2/')) {
const useV2Api = process.env.USE_V2_API === 'true';
if (!useV2Api) {
res.status(404).json({
error: 'V2 API routes are not enabled',
message: 'Please set USE_V2_API=true in environment variables and restart the server to use V2 API endpoints.',
path: req.path,
});
return;
}
}
res.status(404).json({ error: 'Not found', path: req.path });
});
// Start server
@@ -127,17 +234,52 @@ const PORT = config.port;
app.listen(PORT, async () => {
logger.info(`Server running on http://localhost:${PORT}`);
logger.info(`Environment: ${config.nodeEnv}`);
logger.info(`AI Classification: ${config.anthropicApiKey ? 'Configured' : 'Not configured'}`);
logger.info(`Jira Assets: ${config.jiraPat ? 'Configured with caching' : 'Using mock data'}`);
logger.info(`AI Classification: Configured per-user in profile settings`);
// Initialize sync engine if using Jira Assets
if (config.jiraPat && config.jiraSchemaId) {
try {
await syncEngine.initialize();
logger.info('Sync Engine: Initialized and running');
} catch (error) {
logger.error('Failed to initialize sync engine', error);
// Log V2 API feature flag status
const useV2ApiEnv = process.env.USE_V2_API || 'not set';
const useV2ApiEnabled = process.env.USE_V2_API === 'true';
logger.info(`V2 API Feature Flag: USE_V2_API=${useV2ApiEnv} (${useV2ApiEnabled ? '✅ ENABLED' : '❌ DISABLED'})`);
// Check if schemas exist in database
// Note: Schemas table may not exist yet if schema hasn't been initialized
let hasSchemas = false;
try {
const { normalizedCacheStore } = await import('./services/normalizedCacheStore.js');
const db = (normalizedCacheStore as any).db;
if (db) {
await db.ensureInitialized?.();
try {
const schemaRow = await (db.queryOne as <T>(sql: string, params?: any[]) => Promise<T | null>)<{ count: number }>(
`SELECT COUNT(*) as count FROM schemas`
);
hasSchemas = (schemaRow?.count || 0) > 0;
} catch (tableError: any) {
// If schemas table doesn't exist yet, that's okay - schema hasn't been initialized
if (tableError?.message?.includes('does not exist') ||
tableError?.message?.includes('relation') ||
tableError?.code === '42P01') { // PostgreSQL: undefined table
logger.debug('Schemas table does not exist yet (will be created by migrations)');
hasSchemas = false;
} else {
throw tableError; // Re-throw other errors
}
}
}
} catch (error) {
logger.debug('Failed to check if schemas exist in database (table may not exist yet)', error);
}
logger.info(`Jira Assets: ${hasSchemas ? 'Schemas configured in database - users configure PAT in profile' : 'No schemas configured - use Schema Configuration page to discover schemas'}`);
logger.info('Sync: All syncs must be triggered manually from the GUI (no auto-start)');
logger.info('Data: All data comes from Jira Assets API (mock data removed)');
// Run database migrations FIRST to create schemas table before other services try to use it
try {
logger.info('Running database migrations...');
await runMigrations();
logger.info('✅ Database migrations completed');
} catch (error) {
logger.error('❌ Failed to run database migrations', error);
}
});
@@ -145,8 +287,7 @@ app.listen(PORT, async () => {
const shutdown = () => {
logger.info('Shutdown signal received: stopping services...');
// Stop sync engine
syncEngine.stop();
// Note: No sync engine to stop - syncs are only triggered from GUI
logger.info('Services stopped, exiting');
process.exit(0);
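The per-request token selection in the middleware earlier in this file boils down to a three-level precedence: the user's Jira PAT from profile settings, then the shared service account token from `.env`, then any OAuth session token. A hedged standalone sketch (function and field names are illustrative, not the actual service API):

```typescript
interface UserSettings { jira_pat?: string | null; }

// Precedence: user PAT (preferred for writes) > service account token
// (shared read fallback) > OAuth session token, if any.
function resolveToken(
  oauthToken: string | null,
  settings: UserSettings | null,
  serviceAccountToken: string | null
): string | null {
  if (settings?.jira_pat) return settings.jira_pat;
  if (serviceAccountToken) return serviceAccountToken;
  return oauthToken;
}
```

One consequence of this ordering, visible in the middleware as well: when a service account token is configured, it overrides an OAuth session token for users without a PAT.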


@@ -0,0 +1,330 @@
/**
* JiraAssetsClient - Pure HTTP API client
*
* NO business logic, NO parsing, NO caching.
* Only HTTP requests to Jira Assets API.
*/
import { config } from '../../config/env.js';
import { logger } from '../../services/logger.js';
import type { AssetsPayload, ObjectEntry } from '../../domain/jiraAssetsPayload.js';
export interface JiraUpdatePayload {
objectTypeId?: number;
attributes: Array<{
objectTypeAttributeId: number;
objectAttributeValues: Array<{ value?: string }>;
}>;
}
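The update payload mirrors Jira's attribute structure: one entry per attribute, with multi-value attributes repeating entries inside `objectAttributeValues`. A minimal sketch of building one (the attribute IDs 101 and 102 are hypothetical stand-ins, not real schema attributes):

```typescript
// Hypothetical payload: IDs 101/102 stand in for real objectTypeAttributeIds.
const payload = {
  attributes: [
    // Single-value attribute (e.g. a name field):
    { objectTypeAttributeId: 101, objectAttributeValues: [{ value: 'web-server-01' }] },
    // Multi-value attribute: repeat entries in objectAttributeValues.
    { objectTypeAttributeId: 102, objectAttributeValues: [{ value: 'linux' }, { value: 'production' }] },
  ],
};
```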
export class JiraAssetsClient {
private baseUrl: string;
private serviceAccountToken: string | null = null;
private requestToken: string | null = null;
constructor() {
this.baseUrl = `${config.jiraHost}/rest/insight/1.0`;
this.serviceAccountToken = config.jiraServiceAccountToken || null;
}
setRequestToken(token: string | null): void {
this.requestToken = token;
}
clearRequestToken(): void {
this.requestToken = null;
}
hasToken(): boolean {
return !!(this.serviceAccountToken || this.requestToken);
}
hasUserToken(): boolean {
return !!this.requestToken;
}
private getHeaders(forWrite: boolean = false): Record<string, string> {
const headers: Record<string, string> = {
'Content-Type': 'application/json',
'Accept': 'application/json',
};
if (forWrite) {
if (!this.requestToken) {
throw new Error('Jira Personal Access Token not configured. Please configure it in your user settings to enable saving changes to Jira.');
}
headers['Authorization'] = `Bearer ${this.requestToken}`;
} else {
const token = this.serviceAccountToken || this.requestToken;
if (!token) {
throw new Error('Jira token not configured. Please configure JIRA_SERVICE_ACCOUNT_TOKEN in .env or a Personal Access Token in your user settings.');
}
headers['Authorization'] = `Bearer ${token}`;
}
return headers;
}
/**
* Get a single object by ID
*/
async getObject(objectId: string): Promise<ObjectEntry | null> {
try {
const url = `/object/${objectId}?includeAttributes=true&includeAttributesDeep=2`;
const response = await fetch(`${this.baseUrl}${url}`, {
headers: this.getHeaders(false),
});
if (!response.ok) {
if (response.status === 404) {
return null;
}
const text = await response.text();
throw new Error(`Jira API error ${response.status}: ${text}`);
}
return await response.json() as ObjectEntry;
} catch (error) {
logger.error(`JiraAssetsClient: Failed to get object ${objectId}`, error);
throw error;
}
}
/**
* Search objects using IQL/AQL
*/
async searchObjects(
iql: string,
schemaId: string,
options: {
page?: number;
pageSize?: number;
} = {}
): Promise<{ objectEntries: ObjectEntry[]; totalCount: number; hasMore: boolean }> {
// Validate schemaId is provided and not empty
if (!schemaId || schemaId.trim() === '') {
throw new Error('Schema ID is required and cannot be empty. This usually means the object type is not properly associated with a schema. Please run schema sync first.');
}
const { page = 1, pageSize = 50 } = options;
// Detect API type (Data Center vs Cloud) based on host
const isDataCenter = !config.jiraHost.includes('atlassian.net');
let response: { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number };
if (isDataCenter) {
// Data Center: Try AQL first, fallback to IQL
try {
const params = new URLSearchParams({
qlQuery: iql,
page: page.toString(),
resultPerPage: pageSize.toString(),
includeAttributes: 'true',
includeAttributesDeep: '2',
objectSchemaId: schemaId,
});
const url = `${this.baseUrl}/aql/objects?${params.toString()}`;
const httpResponse = await fetch(url, {
headers: this.getHeaders(false),
});
if (!httpResponse.ok) {
const errorText = await httpResponse.text();
const errorMessage = errorText || `AQL failed: ${httpResponse.status}`;
logger.warn(`JiraAssetsClient: AQL query failed (${httpResponse.status}): ${errorMessage}. Query: ${iql}`);
throw new Error(errorMessage);
}
response = await httpResponse.json() as { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number };
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
logger.warn(`JiraAssetsClient: AQL endpoint failed, falling back to IQL. Error: ${errorMessage}`, error);
const params = new URLSearchParams({
iql,
page: page.toString(),
resultPerPage: pageSize.toString(),
includeAttributes: 'true',
includeAttributesDeep: '2',
objectSchemaId: schemaId,
});
const url = `${this.baseUrl}/iql/objects?${params.toString()}`;
const httpResponse = await fetch(url, {
headers: this.getHeaders(false),
});
if (!httpResponse.ok) {
const text = await httpResponse.text();
throw new Error(`Jira API error ${httpResponse.status}: ${text}`);
}
response = await httpResponse.json() as { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number };
}
} else {
// Jira Cloud: POST to AQL endpoint
const url = `${this.baseUrl}/aql/objects`;
const requestBody = {
qlQuery: iql,
page,
resultPerPage: pageSize,
includeAttributes: true,
includeAttributesDeep: 2,
objectSchemaId: schemaId,
};
const httpResponse = await fetch(url, {
method: 'POST',
headers: this.getHeaders(false),
body: JSON.stringify(requestBody),
});
if (!httpResponse.ok) {
const text = await httpResponse.text();
const errorMessage = text || `Jira API error ${httpResponse.status}`;
logger.warn(`JiraAssetsClient: AQL query failed (${httpResponse.status}): ${errorMessage}. Query: ${iql}`);
throw new Error(errorMessage);
}
response = await httpResponse.json() as { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number };
}
const objectEntries = response.objectEntries || [];
const totalCount = response.totalFilterCount || response.totalCount || 0;
const hasMore = objectEntries.length === pageSize && page * pageSize < totalCount;
return {
objectEntries,
totalCount,
hasMore,
};
}
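The `hasMore` rule above (a full page, with rows still remaining past the current offset) can be factored into pure helpers for clarity; a minimal sketch, with helper names of my own choosing rather than anything in the client:

```typescript
// Mirrors the hasMore logic in searchObjects: the page came back full
// AND the rows fetched so far do not cover the total result count.
function computeHasMore(entriesLength: number, page: number, pageSize: number, totalCount: number): boolean {
  return entriesLength === pageSize && page * pageSize < totalCount;
}

// Number of pages a caller would need to walk to drain the result set.
function totalPages(totalCount: number, pageSize: number): number {
  return Math.ceil(totalCount / pageSize);
}
```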
/**
* Update an object
*/
async updateObject(objectId: string, payload: JiraUpdatePayload): Promise<void> {
if (!this.hasUserToken()) {
throw new Error('Jira Personal Access Token not configured. Please configure it in your user settings to enable saving changes to Jira.');
}
const url = `${this.baseUrl}/object/${objectId}`;
const response = await fetch(url, {
method: 'PUT',
headers: this.getHeaders(true),
body: JSON.stringify(payload),
});
if (!response.ok) {
const text = await response.text();
throw new Error(`Jira API error ${response.status}: ${text}`);
}
}
/**
* Get all schemas
*/
async getSchemas(): Promise<Array<{ id: string; name: string; description?: string }>> {
const url = `${this.baseUrl}/objectschema/list`;
const response = await fetch(url, {
headers: this.getHeaders(false),
});
if (!response.ok) {
const text = await response.text();
throw new Error(`Jira API error ${response.status}: ${text}`);
}
return await response.json() as Array<{ id: string; name: string; description?: string }>;
}
/**
* Get object types for a schema
*/
async getObjectTypes(schemaId: string): Promise<Array<{
id: number;
name: string;
description?: string;
objectCount?: number;
parentObjectTypeId?: number;
abstractObjectType?: boolean;
}>> {
// Try flat endpoint first
let url = `${this.baseUrl}/objectschema/${schemaId}/objecttypes/flat`;
let response = await fetch(url, {
headers: this.getHeaders(false),
});
if (!response.ok) {
// Fallback to regular endpoint
url = `${this.baseUrl}/objectschema/${schemaId}/objecttypes`;
response = await fetch(url, {
headers: this.getHeaders(false),
});
}
if (!response.ok) {
const text = await response.text();
throw new Error(`Jira API error ${response.status}: ${text}`);
}
const result = await response.json() as unknown;
if (Array.isArray(result)) {
return result as Array<{
id: number;
name: string;
description?: string;
objectCount?: number;
parentObjectTypeId?: number;
abstractObjectType?: boolean;
}>;
} else if (result && typeof result === 'object' && 'objectTypes' in result) {
return (result as { objectTypes: Array<{
id: number;
name: string;
description?: string;
objectCount?: number;
parentObjectTypeId?: number;
abstractObjectType?: boolean;
}> }).objectTypes;
}
return [];
}
/**
* Get attributes for an object type
*/
async getAttributes(typeId: number): Promise<Array<{
id: number;
name: string;
type: number;
typeValue?: string;
referenceObjectTypeId?: number;
referenceObjectType?: { id: number; name: string };
minimumCardinality?: number;
maximumCardinality?: number;
editable?: boolean;
hidden?: boolean;
system?: boolean;
description?: string;
}>> {
const url = `${this.baseUrl}/objecttype/${typeId}/attributes`;
const response = await fetch(url, {
headers: this.getHeaders(false),
});
if (!response.ok) {
logger.warn(`JiraAssetsClient: Failed to fetch attributes for type ${typeId}: ${response.status}`);
return [];
}
return await response.json() as Array<{
id: number;
name: string;
type: number;
typeValue?: string;
referenceObjectTypeId?: number;
referenceObjectType?: { id: number; name: string };
minimumCardinality?: number;
maximumCardinality?: number;
editable?: boolean;
hidden?: boolean;
system?: boolean;
description?: string;
}>;
}
}
// Export singleton instance
export const jiraAssetsClient = new JiraAssetsClient();
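Because the client is a module-level singleton, a per-request PAT must be set before a request and cleared afterwards, or one user's token could leak into another user's request. A sketch of that lifecycle with a minimal stand-in class (the `TokenHolder` and `withUserToken` names are illustrative, not part of the real client):

```typescript
// Minimal stand-in mirroring the singleton's token methods.
class TokenHolder {
  private requestToken: string | null = null;
  setRequestToken(t: string | null): void { this.requestToken = t; }
  clearRequestToken(): void { this.requestToken = null; }
  hasUserToken(): boolean { return !!this.requestToken; }
}

// Scope a user token to one operation; the finally block guarantees
// the token is cleared even when the operation throws.
async function withUserToken<T>(holder: TokenHolder, token: string, fn: () => Promise<T>): Promise<T> {
  holder.setRequestToken(token);
  try {
    return await fn();
  } finally {
    holder.clearRequestToken();
  }
}
```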

View File

@@ -0,0 +1,116 @@
/**
* Authorization Middleware
*
* Middleware functions for route protection based on authentication and permissions.
*/
import { Request, Response, NextFunction } from 'express';
import { authService, type SessionUser, type JiraUser } from '../services/authService.js';
import { roleService } from '../services/roleService.js';
import { logger } from '../services/logger.js';
// Extend Express Request to include user info
// Note: This matches the declaration in auth.ts
declare global {
namespace Express {
interface Request {
sessionId?: string;
user?: SessionUser | JiraUser;
accessToken?: string;
}
}
}
/**
* Middleware to require authentication
*/
export function requireAuth(req: Request, res: Response, next: NextFunction) {
const sessionId = req.headers['x-session-id'] as string || req.cookies?.sessionId;
if (!sessionId) {
return res.status(401).json({ error: 'Authentication required' });
}
// Get session user
authService.getSession(sessionId)
.then(session => {
if (!session) {
return res.status(401).json({ error: 'Invalid or expired session' });
}
// Check if it's a local user session
if ('id' in session.user) {
req.sessionId = sessionId;
req.user = session.user as SessionUser;
req.accessToken = session.accessToken;
next();
} else {
// OAuth-only session (Jira user without local account)
// For now, allow through but user won't have permissions
req.sessionId = sessionId;
req.accessToken = session.accessToken;
next();
}
})
.catch(error => {
logger.error('Auth middleware error:', error);
res.status(500).json({ error: 'Authentication check failed' });
});
}
/**
* Middleware to require a specific role
*/
export function requireRole(roleName: string) {
return async (req: Request, res: Response, next: NextFunction) => {
if (!req.user || !('id' in req.user)) {
return res.status(403).json({ error: 'Permission denied' });
}
const hasRole = await roleService.userHasRole(req.user.id, roleName);
if (!hasRole) {
return res.status(403).json({ error: `Role '${roleName}' required` });
}
next();
};
}
/**
* Middleware to require a specific permission
*/
export function requirePermission(permissionName: string) {
return async (req: Request, res: Response, next: NextFunction) => {
if (!req.user || !('id' in req.user)) {
return res.status(403).json({ error: 'Permission denied' });
}
const hasPermission = await roleService.userHasPermission(req.user.id, permissionName);
if (!hasPermission) {
return res.status(403).json({ error: `Permission '${permissionName}' required` });
}
next();
};
}
/**
* Middleware to check permission (optional, doesn't fail if missing)
* Sets req.hasPermission flag
*/
export function checkPermission(permissionName: string) {
return async (req: Request, res: Response, next: NextFunction) => {
if (req.user && 'id' in req.user) {
const hasPermission = await roleService.userHasPermission(req.user.id, permissionName);
(req as any).hasPermission = hasPermission;
} else {
(req as any).hasPermission = false;
}
next();
};
}
/**
* Middleware to require admin role
*/
export const requireAdmin = requireRole('administrator');
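The guard factories above all share one shape: a function taking a role or permission name and returning an async handler that resolves the check per request. A self-contained sketch of that composition with a stubbed role lookup (the `roles` table, user IDs, and `makeGuard` are invented for illustration; the real check goes through `roleService`):

```typescript
// Stubbed role lookup standing in for roleService.userHasRole.
const roles: Record<string, string[]> = { u1: ['administrator'], u2: ['viewer'] };

async function userHasRole(userId: string, role: string): Promise<boolean> {
  return (roles[userId] ?? []).includes(role);
}

// Same shape as requireRole: a factory returning an async guard.
// Returns the HTTP status the middleware would answer with.
function makeGuard(role: string) {
  return async (userId: string | undefined): Promise<number> => {
    if (!userId) return 403; // no local user attached to the request
    return (await userHasRole(userId, role)) ? 200 : 403;
  };
}
```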

View File

@@ -0,0 +1,308 @@
/**
* ObjectCacheRepository - Data access for cached objects (EAV pattern)
*/
import type { DatabaseAdapter } from '../services/database/interface.js';
import { logger } from '../services/logger.js';
export interface ObjectRecord {
id: string;
objectKey: string;
objectTypeName: string;
label: string;
jiraUpdatedAt: string | null;
jiraCreatedAt: string | null;
cachedAt: string;
}
export interface AttributeValueRecord {
objectId: string;
attributeId: number;
textValue: string | null;
numberValue: number | null;
booleanValue: boolean | null;
dateValue: string | null;
datetimeValue: string | null;
referenceObjectId: string | null;
referenceObjectKey: string | null;
referenceObjectLabel: string | null;
arrayIndex: number;
}
export interface ObjectRelationRecord {
sourceId: string;
targetId: string;
attributeId: number;
sourceType: string;
targetType: string;
}
export class ObjectCacheRepository {
public db: DatabaseAdapter;
constructor(db: DatabaseAdapter) {
this.db = db;
}
/**
* Upsert an object record (minimal metadata)
*/
async upsertObject(object: {
id: string;
objectKey: string;
objectTypeName: string;
label: string;
jiraUpdatedAt?: string;
jiraCreatedAt?: string;
}): Promise<void> {
const cachedAt = new Date().toISOString();
await this.db.execute(
`INSERT INTO objects (id, object_key, object_type_name, label, jira_updated_at, jira_created_at, cached_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
object_key = excluded.object_key,
label = excluded.label,
jira_updated_at = excluded.jira_updated_at,
cached_at = excluded.cached_at`,
[
object.id,
object.objectKey,
object.objectTypeName,
object.label,
object.jiraUpdatedAt || null,
object.jiraCreatedAt || null,
cachedAt,
]
);
}
/**
* Get an object record by ID
*/
async getObject(objectId: string): Promise<ObjectRecord | null> {
return await this.db.queryOne<ObjectRecord>(
`SELECT id, object_key as objectKey, object_type_name as objectTypeName, label,
jira_updated_at as jiraUpdatedAt, jira_created_at as jiraCreatedAt, cached_at as cachedAt
FROM objects
WHERE id = ?`,
[objectId]
);
}
/**
* Get an object record by object key
*/
async getObjectByKey(objectKey: string): Promise<ObjectRecord | null> {
return await this.db.queryOne<ObjectRecord>(
`SELECT id, object_key as objectKey, object_type_name as objectTypeName, label,
jira_updated_at as jiraUpdatedAt, jira_created_at as jiraCreatedAt, cached_at as cachedAt
FROM objects
WHERE object_key = ?`,
[objectKey]
);
}
/**
* Delete all attribute values for an object
* Used when refreshing an object - we replace all attributes
*/
async deleteAttributeValues(objectId: string): Promise<void> {
await this.db.execute(
`DELETE FROM attribute_values WHERE object_id = ?`,
[objectId]
);
}
/**
* Upsert a single attribute value
*/
async upsertAttributeValue(value: {
objectId: string;
attributeId: number;
textValue?: string | null;
numberValue?: number | null;
booleanValue?: boolean | null;
dateValue?: string | null;
datetimeValue?: string | null;
referenceObjectId?: string | null;
referenceObjectKey?: string | null;
referenceObjectLabel?: string | null;
arrayIndex: number;
}): Promise<void> {
await this.db.execute(
`INSERT INTO attribute_values
(object_id, attribute_id, text_value, number_value, boolean_value, date_value, datetime_value,
reference_object_id, reference_object_key, reference_object_label, array_index)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(object_id, attribute_id, array_index) DO UPDATE SET
text_value = excluded.text_value,
number_value = excluded.number_value,
boolean_value = excluded.boolean_value,
date_value = excluded.date_value,
datetime_value = excluded.datetime_value,
reference_object_id = excluded.reference_object_id,
reference_object_key = excluded.reference_object_key,
reference_object_label = excluded.reference_object_label`,
[
value.objectId,
value.attributeId,
value.textValue ?? null,
value.numberValue ?? null,
value.booleanValue ?? null,
value.dateValue ?? null,
value.datetimeValue ?? null,
value.referenceObjectId ?? null,
value.referenceObjectKey ?? null,
value.referenceObjectLabel ?? null,
value.arrayIndex,
]
);
}
/**
* Batch upsert attribute values (much faster)
*/
async batchUpsertAttributeValues(values: Array<{
objectId: string;
attributeId: number;
textValue?: string | null;
numberValue?: number | null;
booleanValue?: boolean | null;
dateValue?: string | null;
datetimeValue?: string | null;
referenceObjectId?: string | null;
referenceObjectKey?: string | null;
referenceObjectLabel?: string | null;
arrayIndex: number;
}>): Promise<void> {
if (values.length === 0) return;
await this.db.transaction(async (db) => {
for (const value of values) {
await db.execute(
`INSERT INTO attribute_values
(object_id, attribute_id, text_value, number_value, boolean_value, date_value, datetime_value,
reference_object_id, reference_object_key, reference_object_label, array_index)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(object_id, attribute_id, array_index) DO UPDATE SET
text_value = excluded.text_value,
number_value = excluded.number_value,
boolean_value = excluded.boolean_value,
date_value = excluded.date_value,
datetime_value = excluded.datetime_value,
reference_object_id = excluded.reference_object_id,
reference_object_key = excluded.reference_object_key,
reference_object_label = excluded.reference_object_label`,
[
value.objectId,
value.attributeId,
value.textValue ?? null,
value.numberValue ?? null,
value.booleanValue ?? null,
value.dateValue ?? null,
value.datetimeValue ?? null,
value.referenceObjectId ?? null,
value.referenceObjectKey ?? null,
value.referenceObjectLabel ?? null,
value.arrayIndex,
]
);
}
});
}
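In the EAV layout, each attribute value lands in exactly one typed column while the others stay null; preserving falsy values like `0` and `false` is why the binding uses `??` rather than `||`. A hedged sketch of that routing (the `toEavColumns` helper is mine; date and reference columns are omitted for brevity):

```typescript
type EavRow = {
  textValue?: string | null;
  numberValue?: number | null;
  booleanValue?: boolean | null;
};

// Routes a JS value into the matching typed column.
// Note that 0 and false are valid values and must survive the mapping.
function toEavColumns(value: string | number | boolean): EavRow {
  if (typeof value === 'number') return { numberValue: value };
  if (typeof value === 'boolean') return { booleanValue: value };
  return { textValue: value };
}
```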
/**
* Get all attribute values for an object
*/
async getAttributeValues(objectId: string): Promise<AttributeValueRecord[]> {
return await this.db.query<AttributeValueRecord>(
`SELECT object_id as objectId, attribute_id as attributeId, text_value as textValue,
number_value as numberValue, boolean_value as booleanValue,
date_value as dateValue, datetime_value as datetimeValue,
reference_object_id as referenceObjectId, reference_object_key as referenceObjectKey,
reference_object_label as referenceObjectLabel, array_index as arrayIndex
FROM attribute_values
WHERE object_id = ?
ORDER BY attribute_id, array_index`,
[objectId]
);
}
/**
* Upsert an object relation
*/
async upsertRelation(relation: {
sourceId: string;
targetId: string;
attributeId: number;
sourceType: string;
targetType: string;
}): Promise<void> {
await this.db.execute(
`INSERT INTO object_relations (source_id, target_id, attribute_id, source_type, target_type)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(source_id, target_id, attribute_id) DO NOTHING`,
[
relation.sourceId,
relation.targetId,
relation.attributeId,
relation.sourceType,
relation.targetType,
]
);
}
/**
* Delete all relations for an object (used when refreshing)
*/
async deleteRelations(objectId: string): Promise<void> {
await this.db.execute(
`DELETE FROM object_relations WHERE source_id = ?`,
[objectId]
);
}
/**
* Get objects of a specific type
*/
async getObjectsByType(
objectTypeName: string,
options: {
limit?: number;
offset?: number;
} = {}
): Promise<ObjectRecord[]> {
const { limit = 1000, offset = 0 } = options;
return await this.db.query<ObjectRecord>(
`SELECT id, object_key as objectKey, object_type_name as objectTypeName, label,
jira_updated_at as jiraUpdatedAt, jira_created_at as jiraCreatedAt, cached_at as cachedAt
FROM objects
WHERE object_type_name = ?
ORDER BY label
LIMIT ? OFFSET ?`,
[objectTypeName, limit, offset]
);
}
/**
* Count objects of a type
*/
async countObjectsByType(objectTypeName: string): Promise<number> {
const result = await this.db.queryOne<{ count: number | string }>(
`SELECT COUNT(*) as count FROM objects WHERE object_type_name = ?`,
[objectTypeName]
);
if (!result?.count) return 0;
return typeof result.count === 'string' ? parseInt(result.count, 10) : Number(result.count);
}
/**
* Delete an object (cascades to attribute_values and relations)
*/
async deleteObject(objectId: string): Promise<void> {
await this.db.execute(
`DELETE FROM objects WHERE id = ?`,
[objectId]
);
}
}

View File

@@ -0,0 +1,492 @@
/**
* SchemaRepository - Data access for schema metadata
*/
import type { DatabaseAdapter } from '../services/database/interface.js';
import { logger } from '../services/logger.js';
import { toPascalCase } from '../services/schemaUtils.js';
export interface SchemaRecord {
id: number;
jiraSchemaId: string;
name: string;
description: string | null;
discoveredAt: string;
updatedAt: string;
}
export interface ObjectTypeRecord {
id: number;
schemaId: number;
jiraTypeId: number;
typeName: string;
displayName: string;
description: string | null;
syncPriority: number;
objectCount: number;
enabled: boolean;
discoveredAt: string;
updatedAt: string;
}
export interface AttributeRecord {
id: number;
jiraAttrId: number;
objectTypeName: string;
attrName: string;
fieldName: string;
attrType: string;
isMultiple: boolean;
isEditable: boolean;
isRequired: boolean;
isSystem: boolean;
referenceTypeName: string | null;
description: string | null;
discoveredAt: string;
}
export class SchemaRepository {
constructor(private db: DatabaseAdapter) {}
/**
* Get database adapter (for debug/advanced operations)
*/
getDatabaseAdapter(): DatabaseAdapter {
return this.db;
}
/**
* Upsert a schema
*/
async upsertSchema(schema: {
jiraSchemaId: string;
name: string;
description?: string;
}): Promise<number> {
const now = new Date().toISOString();
// Check if exists
const existing = await this.db.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[schema.jiraSchemaId]
);
if (existing) {
await this.db.execute(
`UPDATE schemas SET name = ?, description = ?, updated_at = ? WHERE id = ?`,
[schema.name, schema.description || null, now, existing.id]
);
return existing.id;
} else {
await this.db.execute(
`INSERT INTO schemas (jira_schema_id, name, description, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?)`,
[schema.jiraSchemaId, schema.name, schema.description || null, now, now]
);
const result = await this.db.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[schema.jiraSchemaId]
);
return result?.id || 0;
}
}
/**
* Get all schemas
*/
async getAllSchemas(): Promise<SchemaRecord[]> {
return await this.db.query<SchemaRecord>(
`SELECT id, jira_schema_id as jiraSchemaId, name, description, discovered_at as discoveredAt, updated_at as updatedAt
FROM schemas
ORDER BY jira_schema_id`
);
}
/**
* Upsert an object type
*/
async upsertObjectType(
schemaId: number,
objectType: {
jiraTypeId: number;
typeName: string;
displayName: string;
description?: string;
syncPriority?: number;
objectCount?: number;
}
): Promise<number> {
const now = new Date().toISOString();
const existing = await this.db.queryOne<{ id: number }>(
`SELECT id FROM object_types WHERE schema_id = ? AND jira_type_id = ?`,
[schemaId, objectType.jiraTypeId]
);
if (existing) {
// Update existing record - ensure type_name is set if missing
// First check if type_name is NULL
const currentRecord = await this.db.queryOne<{ type_name: string | null }>(
`SELECT type_name FROM object_types WHERE id = ?`,
[existing.id]
);
// Determine what type_name value to use
let typeNameToUse: string | null = null;
if (objectType.typeName && objectType.typeName.trim() !== '') {
// Use provided typeName if available
typeNameToUse = objectType.typeName;
} else if (currentRecord?.type_name && currentRecord.type_name.trim() !== '') {
// Keep existing type_name if it exists and no new one provided
typeNameToUse = currentRecord.type_name;
} else {
// Generate type_name from display_name if missing
typeNameToUse = toPascalCase(objectType.displayName);
logger.warn(`SchemaRepository.upsertObjectType: Generated missing type_name "${typeNameToUse}" from display_name "${objectType.displayName}" for id=${existing.id}`);
}
// Only update type_name if we have a valid value (never set to NULL)
if (typeNameToUse && typeNameToUse.trim() !== '') {
await this.db.execute(
`UPDATE object_types
SET display_name = ?, description = ?, sync_priority = ?, object_count = ?,
type_name = ?, updated_at = ?
WHERE id = ?`,
[
objectType.displayName,
objectType.description || null,
objectType.syncPriority || 0,
objectType.objectCount || 0,
typeNameToUse,
now,
existing.id,
]
);
} else {
// Shouldn't happen, but log if it does
logger.error(`SchemaRepository.upsertObjectType: Cannot update type_name - all sources are empty for id=${existing.id}`);
// Still update other fields, but don't touch type_name
await this.db.execute(
`UPDATE object_types
SET display_name = ?, description = ?, sync_priority = ?, object_count = ?,
updated_at = ?
WHERE id = ?`,
[
objectType.displayName,
objectType.description || null,
objectType.syncPriority || 0,
objectType.objectCount || 0,
now,
existing.id,
]
);
}
return existing.id;
} else {
await this.db.execute(
`INSERT INTO object_types (schema_id, jira_type_id, type_name, display_name, description, sync_priority, object_count, enabled, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
[
schemaId,
objectType.jiraTypeId,
objectType.typeName,
objectType.displayName,
objectType.description || null,
objectType.syncPriority || 0,
objectType.objectCount || 0,
false, // Default: disabled
now,
now,
]
);
const result = await this.db.queryOne<{ id: number }>(
`SELECT id FROM object_types WHERE schema_id = ? AND jira_type_id = ?`,
[schemaId, objectType.jiraTypeId]
);
return result?.id || 0;
}
}
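The fallback path above derives a missing `type_name` from the display name via `toPascalCase`. The real implementation lives in `services/schemaUtils.ts` and may differ; a minimal sketch of the assumed behavior:

```typescript
// Assumed behavior of schemaUtils.toPascalCase:
// "Business Service" -> "BusinessService", "host-name" -> "HostName".
function toPascalCase(input: string): string {
  return input
    .split(/[^A-Za-z0-9]+/)   // split on spaces, hyphens, etc.
    .filter(Boolean)
    .map(w => w[0].toUpperCase() + w.slice(1))
    .join('');
}
```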
/**
* Get enabled object types
*/
async getEnabledObjectTypes(): Promise<ObjectTypeRecord[]> {
// Handle both PostgreSQL (boolean) and SQLite (integer) for enabled column
const isPostgres = this.db.isPostgres === true;
// PostgreSQL stores enabled as BOOLEAN; SQLite stores it as INTEGER (0/1).
// Some adapters return booleans as 0/1 either way, so rows are normalized after the query.
const enabledCondition = isPostgres
? 'enabled IS true' // PostgreSQL: IS true is more explicit than = true
: 'enabled = 1'; // SQLite: explicit integer comparison
// Query without aliases first to ensure we get the raw values
const rawResults = await this.db.query<{
id: number;
schema_id: number;
jira_type_id: number;
type_name: string | null;
display_name: string;
description: string | null;
sync_priority: number;
object_count: number;
enabled: boolean | number;
discovered_at: string;
updated_at: string;
}>(
`SELECT id, schema_id, jira_type_id, type_name, display_name, description,
sync_priority, object_count, enabled, discovered_at, updated_at
FROM object_types
WHERE ${enabledCondition}
ORDER BY sync_priority, type_name`
);
logger.debug(`SchemaRepository.getEnabledObjectTypes: Raw query found ${rawResults.length} enabled types. Raw type_name values: ${JSON.stringify(rawResults.map(r => ({ id: r.id, type_name: r.type_name, type_name_type: typeof r.type_name, display_name: r.display_name })))}`);
// Map to ObjectTypeRecord format manually to ensure proper mapping
const results: ObjectTypeRecord[] = rawResults.map(r => ({
id: r.id,
schemaId: r.schema_id,
jiraTypeId: r.jira_type_id,
typeName: r.type_name || '', // Convert null to empty string if needed
displayName: r.display_name,
description: r.description,
syncPriority: r.sync_priority,
objectCount: r.object_count,
enabled: r.enabled === true || r.enabled === 1,
discoveredAt: r.discovered_at,
updatedAt: r.updated_at,
}));
// Debug: Log what we found
logger.debug(`SchemaRepository.getEnabledObjectTypes: Found ${results.length} enabled types (isPostgres: ${isPostgres}, condition: ${enabledCondition})`);
if (results.length > 0) {
// Log raw results to see what we're actually getting
logger.debug(`SchemaRepository.getEnabledObjectTypes: Raw results: ${JSON.stringify(results.map(r => ({
id: r.id,
typeName: r.typeName,
typeNameType: typeof r.typeName,
typeNameLength: r.typeName?.length,
displayName: r.displayName,
enabled: r.enabled
})))}`);
// Check for missing typeName
const missingTypeName = results.filter(r => !r.typeName || r.typeName.trim() === '');
if (missingTypeName.length > 0) {
logger.error(`SchemaRepository.getEnabledObjectTypes: Found ${missingTypeName.length} enabled types with missing typeName: ${JSON.stringify(missingTypeName.map(r => ({
id: r.id,
jiraTypeId: r.jiraTypeId,
displayName: r.displayName,
typeName: r.typeName,
typeNameType: typeof r.typeName,
rawTypeName: JSON.stringify(r.typeName)
})))}`);
// Try to query directly to see what the DB actually has
for (const missing of missingTypeName) {
const directCheck = await this.db.queryOne<{ type_name: string | null }>(
`SELECT type_name FROM object_types WHERE id = ?`,
[missing.id]
);
logger.error(`SchemaRepository.getEnabledObjectTypes: Direct query for id=${missing.id} returned type_name: ${JSON.stringify(directCheck?.type_name)}`);
}
}
logger.debug(`SchemaRepository.getEnabledObjectTypes: Type names: ${results.map(r => `${r.typeName || 'NULL'}(enabled:${r.enabled}, type:${typeof r.enabled})`).join(', ')}`);
// Also check what gets filtered out
const filteredResults = results.filter(r => r.typeName && r.typeName.trim() !== '');
if (filteredResults.length < results.length) {
logger.warn(`SchemaRepository.getEnabledObjectTypes: Filtered out ${results.length - filteredResults.length} results with missing typeName`);
}
} else {
// Debug: Check if there are any enabled types at all (check the actual query)
const enabledCheck = await this.db.query<{ count: number }>(
isPostgres
? `SELECT COUNT(*) as count FROM object_types WHERE enabled IS true`
: `SELECT COUNT(*) as count FROM object_types WHERE enabled = 1`
);
logger.warn(`SchemaRepository.getEnabledObjectTypes: No enabled types found with query. Query found ${enabledCheck[0]?.count || 0} enabled types.`);
// Also check what types are actually in the DB
const allTypes = await this.db.query<{ typeName: string; enabled: boolean | number; id: number }>(
`SELECT id, type_name as typeName, enabled FROM object_types WHERE enabled IS NOT NULL ORDER BY enabled DESC LIMIT 10`
);
logger.warn(`SchemaRepository.getEnabledObjectTypes: Sample types from DB: ${allTypes.map(t => `id=${t.id}, ${t.typeName || 'NULL'}=enabled:${t.enabled}(${typeof t.enabled})`).join(', ')}`);
}
// Filter out results with missing typeName
return results.filter(r => r.typeName && r.typeName.trim() !== '');
}
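The driver-dependent `enabled` handling above boils down to one normalization rule, applied both in the SQL condition and when mapping rows. A sketch of that rule as a standalone helper (the `isEnabled` name is mine):

```typescript
// Normalizes the `enabled` column across drivers: PostgreSQL returns a
// boolean, SQLite returns 0/1, and some adapters return 0/1 for both.
function isEnabled(v: boolean | number | null | undefined): boolean {
  return v === true || v === 1;
}
```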
/**
* Get object type by type name
*/
async getObjectTypeByTypeName(typeName: string): Promise<ObjectTypeRecord | null> {
return await this.db.queryOne<ObjectTypeRecord>(
`SELECT id, schema_id as schemaId, jira_type_id as jiraTypeId, type_name as typeName,
display_name as displayName, description, sync_priority as syncPriority,
object_count as objectCount, enabled, discovered_at as discoveredAt, updated_at as updatedAt
FROM object_types
WHERE type_name = ?`,
[typeName]
);
}
/**
* Get object type by Jira type ID
* Note: Jira type IDs are global across schemas, but we store them per schema.
* This method returns the first matching type found (any schema).
*/
async getObjectTypeByJiraId(jiraTypeId: number): Promise<ObjectTypeRecord | null> {
const result = await this.db.queryOne<ObjectTypeRecord>(
`SELECT id, schema_id as schemaId, jira_type_id as jiraTypeId, type_name as typeName,
display_name as displayName, description, sync_priority as syncPriority,
object_count as objectCount, enabled, discovered_at as discoveredAt, updated_at as updatedAt
FROM object_types
WHERE jira_type_id = ?
LIMIT 1`,
[jiraTypeId]
);
if (!result) {
// Diagnostic: Check if this type ID exists in any schema
const db = this.db;
try {
const allSchemasWithType = await db.query<{ schema_id: number; jira_schema_id: string; schema_name: string; count: number }>(
`SELECT ot.schema_id, s.jira_schema_id, s.name as schema_name, COUNT(*) as count
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id
WHERE ot.jira_type_id = ?
GROUP BY ot.schema_id, s.jira_schema_id, s.name`,
[jiraTypeId]
);
if (allSchemasWithType.length === 0) {
logger.debug(`SchemaRepository: Jira type ID ${jiraTypeId} not found in any schema. This object type needs to be discovered via schema discovery.`);
} else {
logger.debug(`SchemaRepository: Jira type ID ${jiraTypeId} exists in ${allSchemasWithType.length} schema(s): ${allSchemasWithType.map(s => `${s.schema_name} (ID: ${s.jira_schema_id})`).join(', ')}`);
}
} catch (error) {
logger.debug(`SchemaRepository: Failed to check schema existence for type ID ${jiraTypeId}`, error);
}
}
return result;
}
/**
* Upsert an attribute
*/
async upsertAttribute(attribute: {
jiraAttrId: number;
objectTypeName: string;
attrName: string;
fieldName: string;
attrType: string;
isMultiple: boolean;
isEditable: boolean;
isRequired: boolean;
isSystem: boolean;
referenceTypeName?: string;
description?: string;
}): Promise<number> {
const now = new Date().toISOString();
const existing = await this.db.queryOne<{ id: number }>(
`SELECT id FROM attributes WHERE jira_attr_id = ? AND object_type_name = ?`,
[attribute.jiraAttrId, attribute.objectTypeName]
);
if (existing) {
await this.db.execute(
`UPDATE attributes
SET attr_name = ?, field_name = ?, attr_type = ?, is_multiple = ?, is_editable = ?,
is_required = ?, is_system = ?, reference_type_name = ?, description = ?
WHERE id = ?`,
[
attribute.attrName,
attribute.fieldName,
attribute.attrType,
attribute.isMultiple,
attribute.isEditable,
attribute.isRequired,
attribute.isSystem,
attribute.referenceTypeName || null,
attribute.description || null,
existing.id,
]
);
return existing.id;
} else {
await this.db.execute(
`INSERT INTO attributes (jira_attr_id, object_type_name, attr_name, field_name, attr_type,
is_multiple, is_editable, is_required, is_system, reference_type_name, description, discovered_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
[
attribute.jiraAttrId,
attribute.objectTypeName,
attribute.attrName,
attribute.fieldName,
attribute.attrType,
attribute.isMultiple,
attribute.isEditable,
attribute.isRequired,
attribute.isSystem,
attribute.referenceTypeName || null,
attribute.description || null,
now,
]
);
// Re-select to recover the id of the row we just inserted
const result = await this.db.queryOne<{ id: number }>(
`SELECT id FROM attributes WHERE jira_attr_id = ? AND object_type_name = ?`,
[attribute.jiraAttrId, attribute.objectTypeName]
);
return result?.id || 0;
}
}
/**
* Get attributes for an object type
*/
async getAttributesForType(objectTypeName: string): Promise<AttributeRecord[]> {
return await this.db.query<AttributeRecord>(
`SELECT id, jira_attr_id as jiraAttrId, object_type_name as objectTypeName, attr_name as attrName,
field_name as fieldName, attr_type as attrType, is_multiple as isMultiple,
is_editable as isEditable, is_required as isRequired, is_system as isSystem,
reference_type_name as referenceTypeName, description, discovered_at as discoveredAt
FROM attributes
WHERE object_type_name = ?
ORDER BY jira_attr_id`,
[objectTypeName]
);
}
/**
* Get attribute by object type and field name
*/
async getAttributeByFieldName(objectTypeName: string, fieldName: string): Promise<AttributeRecord | null> {
return await this.db.queryOne<AttributeRecord>(
`SELECT id, jira_attr_id as jiraAttrId, object_type_name as objectTypeName, attr_name as attrName,
field_name as fieldName, attr_type as attrType, is_multiple as isMultiple,
is_editable as isEditable, is_required as isRequired, is_system as isSystem,
reference_type_name as referenceTypeName, description, discovered_at as discoveredAt
FROM attributes
WHERE object_type_name = ? AND field_name = ?`,
[objectTypeName, fieldName]
);
}
/**
* Get attribute ID by object type and Jira attribute ID
*/
async getAttributeId(objectTypeName: string, jiraAttrId: number): Promise<number | null> {
const result = await this.db.queryOne<{ id: number }>(
`SELECT id FROM attributes WHERE object_type_name = ? AND jira_attr_id = ?`,
[objectTypeName, jiraAttrId]
);
return result?.id || null;
}
}
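The `upsertAttribute` method above uses a read-then-write upsert: SELECT for an existing row, then UPDATE it or INSERT a new one. A minimal self-contained sketch of that pattern, with an in-memory store standing in for the real database adapter (`MemoryStore` and `upsertAttr` are illustrative names, not the project's API):

```typescript
interface AttrRow { id: number; jiraAttrId: number; objectTypeName: string; attrName: string; }

// Tiny in-memory stand-in for the repository's database adapter.
class MemoryStore {
  private rows: AttrRow[] = [];
  private nextId = 1;

  findOne(jiraAttrId: number, objectTypeName: string): AttrRow | undefined {
    return this.rows.find(r => r.jiraAttrId === jiraAttrId && r.objectTypeName === objectTypeName);
  }

  insert(row: Omit<AttrRow, 'id'>): number {
    const id = this.nextId++;
    this.rows.push({ id, ...row });
    return id;
  }
}

// Read-then-write upsert, mirroring the shape of upsertAttribute:
// update the existing row if one matches the natural key, otherwise insert.
// Returns the row id either way.
function upsertAttr(store: MemoryStore, attr: Omit<AttrRow, 'id'>): number {
  const existing = store.findOne(attr.jiraAttrId, attr.objectTypeName);
  if (existing) {
    existing.attrName = attr.attrName; // UPDATE path: overwrite mutable fields
    return existing.id;
  }
  return store.insert(attr); // INSERT path
}

const demo = new MemoryStore();
const first = upsertAttr(demo, { jiraAttrId: 7, objectTypeName: 'Server', attrName: 'Hostname' });
const second = upsertAttr(demo, { jiraAttrId: 7, objectTypeName: 'Server', attrName: 'FQDN' });
console.log(first === second); // true: the second call updated the existing row
```

On databases that support it, `INSERT ... ON CONFLICT` collapses the two round trips into one atomic statement; the read-then-write form trades that atomicity for portability across adapters.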


@@ -6,13 +6,18 @@ import { logger } from '../services/logger.js';
import { calculateRequiredEffortApplicationManagementWithBreakdown } from '../services/effortCalculation.js';
import { findBIAMatch, loadBIAData, clearBIACache, calculateSimilarity } from '../services/biaMatchingService.js';
import { calculateApplicationCompleteness } from '../services/dataCompletenessConfig.js';
import { getQueryString, getParamString } from '../utils/queryHelpers.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import type { SearchFilters, ReferenceValue, ClassificationResult, ApplicationDetails, ApplicationStatus } from '../types/index.js';
import type { Server, Flows, Certificate, Domain, AzureSubscription, CMDBObjectTypeName } from '../generated/jira-types.js';
const router = Router();
// Search applications with filters
router.post('/search', async (req: Request, res: Response) => {
// All routes require authentication
router.use(requireAuth);
// Search applications with filters (requires search permission)
router.post('/search', requirePermission('search'), async (req: Request, res: Response) => {
try {
const { filters, page = 1, pageSize = 25 } = req.body as {
filters: SearchFilters;
@@ -31,7 +36,7 @@ router.post('/search', async (req: Request, res: Response) => {
// Get team dashboard data
router.get('/team-dashboard', async (req: Request, res: Response) => {
try {
const excludedStatusesParam = req.query.excludedStatuses as string | undefined;
const excludedStatusesParam = getQueryString(req, 'excludedStatuses');
let excludedStatuses: ApplicationStatus[] = [];
if (excludedStatusesParam && excludedStatusesParam.trim().length > 0) {
@@ -56,7 +61,7 @@ router.get('/team-dashboard', async (req: Request, res: Response) => {
// Get team portfolio health metrics
router.get('/team-portfolio-health', async (req: Request, res: Response) => {
try {
const excludedStatusesParam = req.query.excludedStatuses as string | undefined;
const excludedStatusesParam = getQueryString(req, 'excludedStatuses');
let excludedStatuses: ApplicationStatus[] = [];
if (excludedStatusesParam && excludedStatusesParam.trim().length > 0) {
@@ -95,10 +100,10 @@ router.get('/business-importance-comparison', async (req: Request, res: Response
// Test BIA data loading (for debugging)
router.get('/bia-test', async (req: Request, res: Response) => {
try {
if (req.query.clear === 'true') {
if (getQueryString(req, 'clear') === 'true') {
clearBIACache();
}
const biaData = loadBIAData();
const biaData = await loadBIAData();
res.json({
recordCount: biaData.length,
records: biaData.slice(0, 20), // First 20 records
@@ -114,7 +119,7 @@ router.get('/bia-test', async (req: Request, res: Response) => {
router.get('/bia-debug', async (req: Request, res: Response) => {
try {
clearBIACache();
const biaData = loadBIAData();
const biaData = await loadBIAData();
// Get a few sample applications
const searchResult = await dataService.searchApplications({}, 1, 50);
@@ -133,7 +138,7 @@ router.get('/bia-debug', async (req: Request, res: Response) => {
// Test each sample app
for (const app of [...sampleApps, ...testApps]) {
const matchResult = findBIAMatch(app.name, app.searchReference);
const matchResult = await findBIAMatch(app.name, app.searchReference ?? null);
// Find all potential matches in Excel data for detailed analysis
const normalizedAppName = app.name.toLowerCase().trim();
@@ -202,7 +207,7 @@ router.get('/bia-comparison', async (req: Request, res: Response) => {
clearBIACache();
// Load fresh data
const testBIAData = loadBIAData();
const testBIAData = await loadBIAData();
logger.info(`BIA comparison: Loaded ${testBIAData.length} records from Excel file`);
if (testBIAData.length === 0) {
logger.error('BIA comparison: No Excel data loaded - check if BIA.xlsx exists and is readable');
@@ -246,7 +251,7 @@ router.get('/bia-comparison', async (req: Request, res: Response) => {
for (const app of applications) {
// Find BIA match in Excel
const matchResult = findBIAMatch(app.name, app.searchReference);
const matchResult = await findBIAMatch(app.name, app.searchReference ?? null);
// Log first few matches for debugging
if (comparisonItems.length < 5) {
@@ -321,9 +326,10 @@ router.get('/bia-comparison', async (req: Request, res: Response) => {
// Query params:
// - mode=edit: Force refresh from Jira for editing (includes _jiraUpdatedAt for conflict detection)
router.get('/:id', async (req: Request, res: Response) => {
const id = getParamString(req, 'id');
try {
const { id } = req.params;
const mode = req.query.mode as string | undefined;
const mode = getQueryString(req, 'mode');
// Don't treat special routes as application IDs
if (id === 'team-dashboard' || id === 'team-portfolio-health' || id === 'business-importance-comparison' || id === 'bia-comparison' || id === 'bia-test' || id === 'calculate-effort' || id === 'search') {
@@ -337,7 +343,7 @@ router.get('/:id', async (req: Request, res: Response) => {
: await dataService.getApplicationById(id);
if (!application) {
res.status(404).json({ error: 'Application not found' });
res.status(404).json({ error: 'Application not found', id });
return;
}
@@ -350,15 +356,35 @@ router.get('/:id', async (req: Request, res: Response) => {
res.json(applicationWithCompleteness);
} catch (error) {
logger.error('Failed to get application', error);
res.status(500).json({ error: 'Failed to get application' });
logger.error(`Failed to get application ${id}`, error);
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
const errorDetails = error instanceof Error && error.stack ? error.stack : String(error);
logger.debug(`Error details for application ${id}:`, errorDetails);
res.status(500).json({
error: 'Failed to get application',
details: errorMessage,
id: id,
});
}
});
// Update application with conflict detection
router.put('/:id', async (req: Request, res: Response) => {
// Update application with conflict detection (requires edit permission)
router.put('/:id', requirePermission('edit_applications'), async (req: Request, res: Response) => {
try {
const { id } = req.params;
// Check if user has Jira PAT configured OR service account token is available (required for write operations)
const userSettings = (req as any).userSettings;
const { config } = await import('../config/env.js');
// Allow writes if user has PAT OR service account token is configured
if (!userSettings?.jira_pat && !config.jiraServiceAccountToken) {
res.status(403).json({
error: 'Jira PAT not configured',
message: 'A Personal Access Token (PAT) is required to save changes to Jira Assets. Please configure it in your user settings, or configure JIRA_SERVICE_ACCOUNT_TOKEN in .env as a fallback.'
});
return;
}
const id = getParamString(req, 'id');
const { updates, _jiraUpdatedAt } = req.body as {
updates?: {
applicationFunctions?: ReferenceValue[];
@@ -467,10 +493,23 @@ router.put('/:id', async (req: Request, res: Response) => {
}
});
// Force update (ignore conflicts)
router.put('/:id/force', async (req: Request, res: Response) => {
// Force update (ignore conflicts) (requires edit permission)
router.put('/:id/force', requirePermission('edit_applications'), async (req: Request, res: Response) => {
try {
const { id } = req.params;
// Check if user has Jira PAT configured OR service account token is available (required for write operations)
const userSettings = (req as any).userSettings;
const { config } = await import('../config/env.js');
// Allow writes if user has PAT OR service account token is configured
if (!userSettings?.jira_pat && !config.jiraServiceAccountToken) {
res.status(403).json({
error: 'Jira PAT not configured',
message: 'A Personal Access Token (PAT) is required to save changes to Jira Assets. Please configure it in your user settings, or configure JIRA_SERVICE_ACCOUNT_TOKEN in .env as a fallback.'
});
return;
}
const id = getParamString(req, 'id');
const updates = req.body;
const application = await dataService.getApplicationById(id);
@@ -551,7 +590,7 @@ router.post('/calculate-effort', async (req: Request, res: Response) => {
// Get application classification history
router.get('/:id/history', async (req: Request, res: Response) => {
try {
const { id } = req.params;
const id = getParamString(req, 'id');
const history = await databaseService.getClassificationsByApplicationId(id);
res.json(history);
} catch (error) {
@@ -563,7 +602,8 @@ router.get('/:id/history', async (req: Request, res: Response) => {
// Get related objects for an application (from cache)
router.get('/:id/related/:objectType', async (req: Request, res: Response) => {
try {
const { id, objectType } = req.params;
const id = getParamString(req, 'id');
const objectType = getParamString(req, 'objectType');
// Map object type string to CMDBObjectTypeName
const typeMap: Record<string, CMDBObjectTypeName> = {
@@ -593,33 +633,101 @@ router.get('/:id/related/:objectType', async (req: Request, res: Response) => {
type RelatedObjectType = Server | Flows | Certificate | Domain | AzureSubscription;
let relatedObjects: RelatedObjectType[] = [];
// Get requested attributes from query string (needed for fallback)
const attributesParam = getQueryString(req, 'attributes');
const requestedAttrs = attributesParam
? attributesParam.split(',').map(a => a.trim())
: [];
logger.debug(`Getting related objects for application ${id}, objectType: ${objectType}, typeName: ${typeName}, requestedAttrs: ${requestedAttrs.join(',') || 'none'}`);
// First try to get from cache
switch (typeName) {
case 'Server':
relatedObjects = await cmdbService.getReferencingObjects<Server>(id, 'Server');
logger.debug(`Found ${relatedObjects.length} Servers referencing application ${id} in cache`);
break;
case 'Flows': {
// Flows reference ApplicationComponents via Source and Target attributes
// We need to find Flows where this ApplicationComponent is the target of the reference
relatedObjects = await cmdbService.getReferencingObjects<Flows>(id, 'Flows');
logger.debug(`Found ${relatedObjects.length} Flows referencing application ${id} in cache`);
break;
}
case 'Certificate':
relatedObjects = await cmdbService.getReferencingObjects<Certificate>(id, 'Certificate');
logger.debug(`Found ${relatedObjects.length} Certificates referencing application ${id} in cache`);
break;
case 'Domain':
relatedObjects = await cmdbService.getReferencingObjects<Domain>(id, 'Domain');
logger.debug(`Found ${relatedObjects.length} Domains referencing application ${id} in cache`);
break;
case 'AzureSubscription':
relatedObjects = await cmdbService.getReferencingObjects<AzureSubscription>(id, 'AzureSubscription');
logger.debug(`Found ${relatedObjects.length} AzureSubscriptions referencing application ${id} in cache`);
break;
default:
relatedObjects = [];
logger.warn(`Unknown object type for related objects: ${typeName}`);
}
// If no objects found in cache, try to fetch from Jira directly as fallback
// This helps when relations haven't been synced yet
if (relatedObjects.length === 0) {
try {
// Get application to get its objectKey
const app = await cmdbService.getObject('ApplicationComponent', id);
if (!app) {
logger.warn(`Application ${id} not found in cache, cannot fetch related objects from Jira`);
} else if (!app.objectKey) {
logger.warn(`Application ${id} has no objectKey, cannot fetch related objects from Jira`);
} else {
logger.info(`No related ${typeName} objects found in cache for application ${id} (${app.objectKey}), trying Jira directly...`);
const { jiraAssetsService } = await import('../services/jiraAssets.js');
// Use the Jira object type name from schema (not our internal typeName)
const { OBJECT_TYPES } = await import('../generated/jira-schema.js');
const jiraTypeDef = OBJECT_TYPES[typeName];
const jiraObjectTypeName = jiraTypeDef?.name || objectType;
logger.debug(`Using Jira object type name: "${jiraObjectTypeName}" for internal type "${typeName}"`);
const jiraResult = await jiraAssetsService.getRelatedObjects(app.objectKey, jiraObjectTypeName, requestedAttrs);
logger.debug(`Jira query returned ${jiraResult?.objects?.length || 0} objects`);
if (jiraResult && jiraResult.objects && jiraResult.objects.length > 0) {
logger.info(`Found ${jiraResult.objects.length} related ${typeName} objects from Jira, caching them...`);
// Batch fetch and cache all objects in one pass (single batch instead of one fetch per object)
const objectIds = jiraResult.objects.map(obj => obj.id.toString());
const cachedObjects = await cmdbService.batchFetchAndCacheObjects(typeName as CMDBObjectTypeName, objectIds);
logger.info(`Successfully batch cached ${cachedObjects.length} of ${jiraResult.objects.length} related ${typeName} objects`);
// Use cached objects, fallback to minimal objects from Jira result if not found
const cachedById = new Map(cachedObjects.map(obj => [obj.id, obj]));
relatedObjects = jiraResult.objects.map((jiraObj) => {
const cached = cachedById.get(jiraObj.id.toString());
if (cached) {
return cached as RelatedObjectType;
}
// Fallback: create minimal object from Jira result
logger.debug(`Creating minimal object for ${jiraObj.id} (${jiraObj.key}) as cache lookup failed`);
return {
id: jiraObj.id.toString(),
objectKey: jiraObj.key,
label: jiraObj.label,
_objectType: typeName,
} as RelatedObjectType;
});
logger.info(`Loaded ${relatedObjects.length} related ${typeName} objects (${relatedObjects.filter(o => o).length} valid)`);
} else {
logger.info(`No related ${typeName} objects found in Jira for application ${app.objectKey}`);
}
}
} catch (error) {
logger.error(`Failed to fetch related ${typeName} objects from Jira as fallback for application ${id}:`, error);
}
}
// Get requested attributes from query string
const requestedAttrs = req.query.attributes
? String(req.query.attributes).split(',').map(a => a.trim())
: [];
// Format response - must match RelatedObjectsResponse type expected by frontend
const objects = relatedObjects.map(obj => {

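This diff replaces raw `req.query.x as string` casts with `getQueryString`/`getParamString` from `queryHelpers.ts`. In Express, a query value can be a string, an array of strings, or a nested object, so the cast can lie at runtime. A minimal sketch of what such a narrowing helper might look like (`firstString` is an illustrative name; the project's actual implementation may differ):

```typescript
// Express query values may be string | string[] | nested object | undefined.
type QueryValue = string | string[] | Record<string, unknown> | undefined;

// Narrow to a plain string, taking the first element of repeated parameters.
// Nested objects and missing parameters are treated as absent.
function firstString(value: QueryValue): string | undefined {
  if (typeof value === 'string') return value;
  if (Array.isArray(value) && typeof value[0] === 'string') return value[0];
  return undefined;
}

// ?excludedStatuses=Retired yields 'Retired';
// ?excludedStatuses=Retired&excludedStatuses=Draft yields the first value.
console.log(firstString('Retired'));
console.log(firstString(['Retired', 'Draft']));
console.log(firstString(undefined));
```

With this shape, downstream code like `excludedStatusesParam.trim()` can never hit an array's missing `trim` method.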

@@ -1,64 +1,197 @@
import { Router, Request, Response, NextFunction } from 'express';
import { authService, JiraUser } from '../services/authService.js';
import { authService, type SessionUser, type JiraUser } from '../services/authService.js';
import { userService } from '../services/userService.js';
import { roleService } from '../services/roleService.js';
import { config } from '../config/env.js';
import { logger } from '../services/logger.js';
import { getAuthDatabase } from '../services/database/migrations.js';
const router = Router();
// Extend Express Request to include user info
// Note: This extends the declaration from authorization.ts
declare global {
namespace Express {
interface Request {
sessionId?: string;
user?: JiraUser;
user?: SessionUser | JiraUser;
accessToken?: string;
}
}
}
// Get auth configuration
router.get('/config', (req: Request, res: Response) => {
const authMethod = authService.getAuthMethod();
router.get('/config', async (req: Request, res: Response) => {
// JIRA_AUTH_METHOD is only for backend Jira API configuration, NOT for application authentication
// Application authentication is ALWAYS via local auth or OAuth
// Users authenticate to the application, then their PAT/OAuth token is used for Jira API writes
// JIRA_SERVICE_ACCOUNT_TOKEN is used for Jira API reads
// Check if users exist in database (if migrations have run)
let hasUsers = false;
let db: ReturnType<typeof getAuthDatabase> | undefined;
try {
db = getAuthDatabase();
const userCount = await db.queryOne<{ count: number }>(
'SELECT COUNT(*) as count FROM users'
);
hasUsers = (userCount?.count || 0) > 0;
} catch (error) {
// If the users table doesn't exist yet (migrations not run), hasUsers stays false
} finally {
await db?.close();
}
// Local auth is ALWAYS enabled for application authentication
// (unless explicitly disabled via LOCAL_AUTH_ENABLED=false)
// This allows users to create accounts and log in
const localAuthEnabled = process.env.LOCAL_AUTH_ENABLED !== 'false';
// OAuth is enabled if configured
const oauthEnabled = authService.isOAuthEnabled();
// Service accounts are NOT used for application authentication
// They are only for Jira API read access (JIRA_SERVICE_ACCOUNT_TOKEN in .env)
// serviceAccountEnabled should always be false for authentication purposes
// authMethod is 'local' if local auth is enabled, 'oauth' if only OAuth, or 'none' if both disabled
let authMethod: 'local' | 'oauth' | 'none' = 'none';
if (localAuthEnabled && oauthEnabled) {
authMethod = 'local'; // Default to local, user can choose
} else if (localAuthEnabled) {
authMethod = 'local';
} else if (oauthEnabled) {
authMethod = 'oauth';
}
res.json({
// Configured authentication method ('pat', 'oauth', or 'none')
// Application branding
appName: config.appName,
appTagline: config.appTagline,
appCopyright: config.appCopyright,
// Application authentication method (always 'local' or 'oauth', never 'pat')
// 'pat' is only for backend Jira API configuration, not user authentication
authMethod,
// Legacy fields for backward compatibility
oauthEnabled: authService.isOAuthEnabled(),
serviceAccountEnabled: authService.isUsingServiceAccount(),
// Authentication options
oauthEnabled,
serviceAccountEnabled: false, // Service accounts are NOT for app authentication
localAuthEnabled,
// Jira host for display purposes
jiraHost: config.jiraHost,
});
});
// Get current user (check if logged in)
router.get('/me', (req: Request, res: Response) => {
const sessionId = req.headers['x-session-id'] as string || req.cookies?.sessionId;
router.get('/me', async (req: Request, res: Response) => {
// The sessionId should already be set by authMiddleware from cookies
const sessionId = req.sessionId || req.headers['x-session-id'] as string || req.cookies?.sessionId;
// Only log relevant cookies to avoid noise from other applications
const relevantCookies = req.cookies ? {
sessionId: req.cookies.sessionId ? req.cookies.sessionId.substring(0, 8) + '...' : undefined,
} : {};
logger.debug(`[GET /me] SessionId: ${sessionId ? sessionId.substring(0, 8) + '...' : 'none'}, Relevant cookies: ${JSON.stringify(relevantCookies)}`);
// Service accounts are NOT used for application authentication
// They are only used for Jira API access (configured in .env as JIRA_SERVICE_ACCOUNT_TOKEN)
// Application authentication requires a real user session (local or OAuth)
if (!sessionId) {
// If OAuth not enabled, allow anonymous access with service account
if (authService.isUsingServiceAccount() && !authService.isOAuthEnabled()) {
return res.json({
authenticated: true,
authMethod: 'service-account',
user: {
accountId: 'service-account',
displayName: 'Service Account',
},
});
// No session = not authenticated
// Service account mode is NOT a valid authentication method for the application
return res.json({ authenticated: false });
}
try {
const session = await authService.getSession(sessionId);
if (!session) {
return res.json({ authenticated: false });
}
// Determine auth method from session: OAuth users carry an accountId, local users an id
const authMethod = 'accountId' in session.user ? 'oauth' : 'local';
// For local users, ensure we have all required fields
let userData = session.user;
if ('id' in session.user) {
// Local user - ensure proper format
const email = session.user.email || session.user.emailAddress || '';
userData = {
id: session.user.id,
email: email,
username: session.user.username,
displayName: session.user.displayName,
emailAddress: email,
roles: session.user.roles || [],
permissions: session.user.permissions || [],
};
}
res.json({
authenticated: true,
authMethod,
user: userData,
});
} catch (error) {
logger.error('Error getting session:', error);
return res.json({ authenticated: false });
}
});
const user = authService.getUser(sessionId);
if (!user) {
return res.json({ authenticated: false });
// Local login (email/password)
router.post('/login', async (req: Request, res: Response) => {
if (!authService.isLocalAuthEnabled()) {
return res.status(400).json({ error: 'Local authentication is not enabled' });
}
res.json({
authenticated: true,
authMethod: 'oauth',
user,
});
const { email, password } = req.body;
if (!email || !password) {
return res.status(400).json({ error: 'Email and password are required' });
}
try {
const ipAddress = req.ip || req.socket.remoteAddress || undefined;
const userAgent = req.get('user-agent') || undefined;
const { sessionId, user } = await authService.localLogin(email, password, ipAddress, userAgent);
// Set session cookie
// Note: When using Vite proxy, cookies work correctly as the proxy forwards them
// In development, use 'lax' for same-site requests (localhost:5173 -> localhost:3001 via proxy)
// In production, use 'lax' for security
const cookieOptions: any = {
httpOnly: true,
secure: config.isProduction,
sameSite: 'lax' as const,
path: '/', // Make cookie available for all paths
maxAge: 24 * 60 * 60 * 1000, // 24 hours
};
// In development, don't set domain (defaults to current host)
// This allows the cookie to work with the Vite proxy
if (!config.isDevelopment) {
// In production, you might want to set domain explicitly if needed
// cookieOptions.domain = '.yourdomain.com';
}
res.cookie('sessionId', sessionId, cookieOptions);
logger.debug(`[Local Login] Session cookie set: sessionId=${sessionId.substring(0, 8)}..., path=/, maxAge=24h, sameSite=lax`);
res.json({
success: true,
sessionId,
user,
});
} catch (error) {
logger.error('Local login error:', error);
const message = error instanceof Error ? error.message : 'Login failed';
res.status(401).json({ error: message });
}
});
// Initiate OAuth login
@@ -102,21 +235,41 @@ router.get('/callback', async (req: Request, res: Response) => {
}
try {
const ipAddress = req.ip || req.socket.remoteAddress || undefined;
const userAgent = req.get('user-agent') || undefined;
// Exchange code for tokens
const { sessionId, user } = await authService.exchangeCodeForTokens(
String(code),
String(state)
String(state),
ipAddress,
userAgent
);
logger.info(`OAuth login successful for: ${user.displayName}`);
// Set session cookie
res.cookie('sessionId', sessionId, {
// Note: When using Vite proxy, cookies work correctly as the proxy forwards them
// In development, use 'lax' for same-site requests (localhost:5173 -> localhost:3001 via proxy)
// In production, use 'lax' for security
const cookieOptions: any = {
httpOnly: true,
secure: config.isProduction,
sameSite: 'lax',
sameSite: 'lax' as const,
path: '/', // Make cookie available for all paths
maxAge: 24 * 60 * 60 * 1000, // 24 hours
});
};
// In development, don't set domain (defaults to current host)
// This allows the cookie to work with the Vite proxy
if (!config.isDevelopment) {
// In production, you might want to set domain explicitly if needed
// cookieOptions.domain = '.yourdomain.com';
}
res.cookie('sessionId', sessionId, cookieOptions);
logger.debug(`[OAuth Login] Session cookie set: sessionId=${sessionId.substring(0, 8)}..., path=/, maxAge=24h, sameSite=lax`);
// Redirect to frontend with session info
res.redirect(`${config.frontendUrl}?login=success`);
@@ -128,16 +281,16 @@ router.get('/callback', async (req: Request, res: Response) => {
});
// Logout
router.post('/logout', (req: Request, res: Response) => {
router.post('/logout', async (req: Request, res: Response) => {
const sessionId = req.headers['x-session-id'] as string || req.cookies?.sessionId;
if (sessionId) {
authService.logout(sessionId);
await authService.logout(sessionId);
}
// Clear cookies
res.clearCookie('sessionId');
res.clearCookie('oauth_state');
// Clear cookies (must use same path as when setting)
res.clearCookie('sessionId', { path: '/' });
res.clearCookie('oauth_state', { path: '/' });
res.json({ success: true });
});
@@ -159,37 +312,185 @@ router.post('/refresh', async (req: Request, res: Response) => {
}
});
// Forgot password
router.post('/forgot-password', async (req: Request, res: Response) => {
const { email } = req.body;
if (!email) {
return res.status(400).json({ error: 'Email is required' });
}
try {
await userService.generatePasswordResetToken(email);
// Always return success to prevent email enumeration
res.json({ success: true, message: 'If an account exists with this email, a password reset link has been sent.' });
} catch (error) {
logger.error('Forgot password error:', error);
// Still return success to prevent email enumeration
res.json({ success: true, message: 'If an account exists with this email, a password reset link has been sent.' });
}
});
// Reset password
router.post('/reset-password', async (req: Request, res: Response) => {
const { token, password } = req.body;
if (!token || !password) {
return res.status(400).json({ error: 'Token and password are required' });
}
// Validate password requirements
const minLength = parseInt(process.env.PASSWORD_MIN_LENGTH || '8', 10);
const requireUppercase = process.env.PASSWORD_REQUIRE_UPPERCASE === 'true';
const requireLowercase = process.env.PASSWORD_REQUIRE_LOWERCASE === 'true';
const requireNumber = process.env.PASSWORD_REQUIRE_NUMBER === 'true';
if (password.length < minLength) {
return res.status(400).json({ error: `Password must be at least ${minLength} characters long` });
}
if (requireUppercase && !/[A-Z]/.test(password)) {
return res.status(400).json({ error: 'Password must contain at least one uppercase letter' });
}
if (requireLowercase && !/[a-z]/.test(password)) {
return res.status(400).json({ error: 'Password must contain at least one lowercase letter' });
}
if (requireNumber && !/[0-9]/.test(password)) {
return res.status(400).json({ error: 'Password must contain at least one number' });
}
try {
const success = await userService.resetPasswordWithToken(token, password);
if (success) {
res.json({ success: true, message: 'Password reset successfully' });
} else {
res.status(400).json({ error: 'Invalid or expired token' });
}
} catch (error) {
logger.error('Reset password error:', error);
res.status(500).json({ error: 'Failed to reset password' });
}
});
// Verify email
router.post('/verify-email', async (req: Request, res: Response) => {
const { token } = req.body;
if (!token) {
return res.status(400).json({ error: 'Token is required' });
}
try {
const success = await userService.verifyEmail(token);
if (success) {
res.json({ success: true, message: 'Email verified successfully' });
} else {
res.status(400).json({ error: 'Invalid or expired token' });
}
} catch (error) {
logger.error('Verify email error:', error);
res.status(500).json({ error: 'Failed to verify email' });
}
});
// Get invitation token info
router.get('/invitation/:token', async (req: Request, res: Response) => {
const token = Array.isArray(req.params.token) ? req.params.token[0] : req.params.token;
try {
const user = await userService.validateInvitationToken(token);
if (!user) {
return res.status(400).json({ error: 'Invalid or expired invitation token' });
}
res.json({
valid: true,
user: {
email: user.email,
username: user.username,
display_name: user.display_name,
},
});
} catch (error) {
logger.error('Validate invitation error:', error);
res.status(500).json({ error: 'Failed to validate invitation' });
}
});
// Accept invitation
router.post('/accept-invitation', async (req: Request, res: Response) => {
const { token, password } = req.body;
if (!token || !password) {
return res.status(400).json({ error: 'Token and password are required' });
}
// Validate password requirements
const minLength = parseInt(process.env.PASSWORD_MIN_LENGTH || '8', 10);
const requireUppercase = process.env.PASSWORD_REQUIRE_UPPERCASE === 'true';
const requireLowercase = process.env.PASSWORD_REQUIRE_LOWERCASE === 'true';
const requireNumber = process.env.PASSWORD_REQUIRE_NUMBER === 'true';
if (password.length < minLength) {
return res.status(400).json({ error: `Password must be at least ${minLength} characters long` });
}
if (requireUppercase && !/[A-Z]/.test(password)) {
return res.status(400).json({ error: 'Password must contain at least one uppercase letter' });
}
if (requireLowercase && !/[a-z]/.test(password)) {
return res.status(400).json({ error: 'Password must contain at least one lowercase letter' });
}
if (requireNumber && !/[0-9]/.test(password)) {
return res.status(400).json({ error: 'Password must contain at least one number' });
}
try {
const user = await userService.acceptInvitation(token, password);
if (user) {
res.json({ success: true, message: 'Invitation accepted successfully', user });
} else {
res.status(400).json({ error: 'Invalid or expired invitation token' });
}
} catch (error) {
logger.error('Accept invitation error:', error);
res.status(500).json({ error: 'Failed to accept invitation' });
}
});
// Middleware to extract session and attach user to request
export async function authMiddleware(req: Request, res: Response, next: NextFunction) {
const sessionId = req.headers['x-session-id'] as string || req.cookies?.sessionId;
// Debug logging for cookie issues (only log relevant cookies to avoid noise)
if (req.path === '/api/auth/me') {
const sessionIdFromCookie = req.cookies?.sessionId ? req.cookies.sessionId.substring(0, 8) + '...' : 'none';
const sessionIdFromHeader = req.headers['x-session-id'] ? String(req.headers['x-session-id']).substring(0, 8) + '...' : 'none';
logger.debug(`[authMiddleware] Path: ${req.path}, SessionId from cookie: ${sessionIdFromCookie}, SessionId from header: ${sessionIdFromHeader}`);
}
if (sessionId) {
try {
const session = await authService.getSession(sessionId);
if (session) {
req.sessionId = sessionId;
req.user = session.user;
req.accessToken = session.accessToken;
} else {
logger.debug(`[authMiddleware] Session not found for sessionId: ${sessionId.substring(0, 8)}...`);
}
} catch (error) {
logger.error('Auth middleware error:', error);
}
} else {
if (req.path === '/api/auth/me') {
logger.debug(`[authMiddleware] No sessionId found in cookies or headers for ${req.path}`);
}
}
next();
}
// Middleware to require authentication
export function requireAuth(req: Request, res: Response, next: NextFunction) {
// If OAuth is enabled, require a valid session
if (authService.isOAuthEnabled()) {
if (!req.user) {
return res.status(401).json({ error: 'Authentication required' });
}
}
// If only service account is configured, allow through
else if (!authService.isUsingServiceAccount()) {
return res.status(503).json({ error: 'No authentication method configured' });
}
next();
}
// Re-export authorization middleware for convenience
export { requireRole, requirePermission, requireAdmin } from '../middleware/authorization.js';
export default router;
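The `/accept-invitation` handler above validates passwords against env-driven rules inline. Those checks can be factored into a pure, unit-testable helper; a minimal sketch (the helper names are illustrative, not from this repo):

```typescript
// Hypothetical extraction of the password rules used in /accept-invitation.
// Env parsing mirrors the route: PASSWORD_MIN_LENGTH, PASSWORD_REQUIRE_*.
interface PasswordPolicy {
  minLength: number;
  requireUppercase: boolean;
  requireLowercase: boolean;
  requireNumber: boolean;
}

function policyFromEnv(env: Record<string, string | undefined>): PasswordPolicy {
  return {
    minLength: parseInt(env.PASSWORD_MIN_LENGTH || '8', 10),
    requireUppercase: env.PASSWORD_REQUIRE_UPPERCASE === 'true',
    requireLowercase: env.PASSWORD_REQUIRE_LOWERCASE === 'true',
    requireNumber: env.PASSWORD_REQUIRE_NUMBER === 'true',
  };
}

// Returns the first violation message, or null when the password passes.
function validatePassword(password: string, p: PasswordPolicy): string | null {
  if (password.length < p.minLength) {
    return `Password must be at least ${p.minLength} characters long`;
  }
  if (p.requireUppercase && !/[A-Z]/.test(password)) {
    return 'Password must contain at least one uppercase letter';
  }
  if (p.requireLowercase && !/[a-z]/.test(password)) {
    return 'Password must contain at least one lowercase letter';
  }
  if (p.requireNumber && !/[0-9]/.test(password)) {
    return 'Password must contain at least one number';
  }
  return null;
}
```

The route would then reduce to one call plus a single `400` response on a non-null result.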

View File

@@ -5,14 +5,20 @@
*/
import { Router, Request, Response } from 'express';
import { normalizedCacheStore as cacheStore } from '../services/normalizedCacheStore.js';
import { syncEngine } from '../services/syncEngine.js';
import { logger } from '../services/logger.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import { getQueryString, getParamString } from '../utils/queryHelpers.js';
import { OBJECT_TYPES } from '../generated/jira-schema.js';
import type { CMDBObjectTypeName } from '../generated/jira-types.js';
const router = Router();
// All routes require authentication and manage_settings permission
router.use(requireAuth);
router.use(requirePermission('manage_settings'));
// Get cache status
router.get('/status', async (req: Request, res: Response) => {
try {
@@ -24,17 +30,24 @@ router.get('/status', async (req: Request, res: Response) => {
if (cacheStats.objectsByType['ApplicationComponent'] !== undefined) {
try {
const { jiraAssetsClient } = await import('../services/jiraAssetsClient.js');
const { schemaMappingService } = await import('../services/schemaMappingService.js');
const typeDef = OBJECT_TYPES['ApplicationComponent'];
if (typeDef) {
// Get schema ID for ApplicationComponent
const schemaId = await schemaMappingService.getSchemaId('ApplicationComponent');
// Skip if no schema ID is available
if (schemaId && schemaId.trim() !== '') {
const searchResult = await jiraAssetsClient.searchObjects(`objectType = "${typeDef.name}"`, 1, 1, schemaId);
const jiraCount = searchResult.totalCount;
const cacheCount = cacheStats.objectsByType['ApplicationComponent'] || 0;
jiraComparison = {
jiraCount,
cacheCount,
difference: jiraCount - cacheCount,
};
}
}
} catch (err) {
logger.debug('Could not fetch Jira count for comparison', err);
@@ -58,6 +71,17 @@ router.post('/sync', async (req: Request, res: Response) => {
try {
logger.info('Manual full sync triggered');
// Check if configuration is complete
const { schemaConfigurationService } = await import('../services/schemaConfigurationService.js');
const isConfigured = await schemaConfigurationService.isConfigurationComplete();
if (!isConfigured) {
res.status(400).json({
error: 'Schema configuration not complete',
message: 'Please configure at least one object type to be synced in the settings page before starting sync.',
});
return;
}
// Don't wait for completion - return immediately
syncEngine.fullSync().catch(err => {
logger.error('Full sync failed', err);
@@ -69,14 +93,18 @@ router.post('/sync', async (req: Request, res: Response) => {
});
} catch (error) {
logger.error('Failed to trigger full sync', error);
const errorMessage = error instanceof Error ? error.message : 'Failed to trigger sync';
res.status(500).json({
error: errorMessage,
details: error instanceof Error ? error.stack : undefined
});
}
});
// Trigger sync for a specific object type
router.post('/sync/:objectType', async (req: Request, res: Response) => {
try {
const objectType = getParamString(req, 'objectType');
// Validate object type
if (!OBJECT_TYPES[objectType]) {
@@ -93,18 +121,52 @@ router.post('/sync/:objectType', async (req: Request, res: Response) => {
res.json({
status: 'completed',
objectType: objectType,
stats: result,
});
} catch (error) {
const objectType = getParamString(req, 'objectType');
const errorMessage = error instanceof Error ? error.message : 'Failed to sync object type';
logger.error(`Failed to sync object type ${objectType}`, error);
// Return 409 (Conflict) if sync is already in progress, otherwise 500
const statusCode = errorMessage.includes('already in progress') ? 409 : 500;
res.status(statusCode).json({
error: errorMessage,
objectType: objectType,
});
}
});
// Refresh a specific application (force re-sync from Jira)
router.post('/refresh-application/:id', async (req: Request, res: Response) => {
try {
const id = getParamString(req, 'id');
const { cmdbService } = await import('../services/cmdbService.js');
logger.info(`Manual refresh triggered for application ${id}`);
// Force refresh from Jira
const app = await cmdbService.getObject('ApplicationComponent', id, { forceRefresh: true });
if (!app) {
res.status(404).json({ error: `Application ${id} not found in Jira` });
return;
}
res.json({
status: 'refreshed',
applicationId: id,
applicationKey: app.objectKey,
message: 'Application refreshed from Jira and cached with updated schema',
});
} catch (error) {
const id = getParamString(req, 'id');
const errorMessage = error instanceof Error ? error.message : 'Failed to refresh application';
logger.error(`Failed to refresh application ${id}`, error);
res.status(500).json({
error: errorMessage,
applicationId: id,
});
}
});
@@ -112,7 +174,7 @@ router.post('/sync/:objectType', async (req: Request, res: Response) => {
// Clear cache for a specific type
router.delete('/clear/:objectType', async (req: Request, res: Response) => {
try {
const objectType = getParamString(req, 'objectType');
if (!OBJECT_TYPES[objectType]) {
res.status(400).json({
@@ -132,7 +194,8 @@ router.delete('/clear/:objectType', async (req: Request, res: Response) => {
deletedCount: deleted,
});
} catch (error) {
const objectType = getParamString(req, 'objectType');
logger.error(`Failed to clear cache for ${objectType}`, error);
res.status(500).json({ error: 'Failed to clear cache' });
}
});
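The hunks above consistently replace raw `req.params.x` / `req.query.x` casts with `getParamString` / `getQueryString` / `getQueryNumber` from `../utils/queryHelpers.js`. That file's body is outside this diff; a plausible sketch of what such helpers do, given that Express query values can be `string | string[]` (names and shapes are assumptions):

```typescript
// Hypothetical shape of the queryHelpers used throughout these routes.
// Express can surface params/query values as string or string[], so the
// helpers normalize everything down to a single string.
type ParamLike = string | string[] | undefined;

function firstString(value: unknown): string | undefined {
  if (typeof value === 'string') return value;
  if (Array.isArray(value) && typeof value[0] === 'string') return value[0];
  return undefined;
}

// Minimal request shape so the sketch stays framework-free.
interface ReqLike {
  params: Record<string, ParamLike>;
  query: Record<string, unknown>;
}

function getParamString(req: ReqLike, name: string): string {
  return firstString(req.params[name]) ?? '';
}

function getQueryString(req: ReqLike, name: string): string | undefined {
  return firstString(req.query[name]);
}

function getQueryNumber(req: ReqLike, name: string, fallback: number): number {
  const raw = firstString(req.query[name]);
  const n = raw === undefined ? NaN : parseInt(raw, 10);
  return Number.isNaN(n) ? fallback : n;
}
```

This matches the defensive `Array.isArray(req.params.token)` check in the auth routes above: centralizing it means each handler gets a guaranteed `string`.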

View File

@@ -4,20 +4,50 @@ import { dataService } from '../services/dataService.js';
import { databaseService } from '../services/database.js';
import { logger } from '../services/logger.js';
import { config } from '../config/env.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import { getQueryString, getQueryNumber, getParamString } from '../utils/queryHelpers.js';
const router = Router();
// All routes require authentication
router.use(requireAuth);
// Get AI classification for an application (requires search permission)
router.post('/suggest/:id', requirePermission('search'), async (req: Request, res: Response) => {
try {
const id = getParamString(req, 'id');
// Get provider from query parameter, request body, or user settings (default to 'claude')
const userSettings = (req as any).userSettings;
// Check if AI is enabled for this user
if (!userSettings?.ai_enabled) {
res.status(403).json({
error: 'AI functionality is disabled',
message: 'AI functionality is not enabled in your profile settings. Please enable it in your user settings.'
});
return;
}
// Check if user has selected an AI provider
if (!userSettings?.ai_provider) {
res.status(403).json({
error: 'AI provider not configured',
message: 'Please select an AI provider (Claude or OpenAI) in your user settings.'
});
return;
}
const userDefaultProvider = userSettings.ai_provider === 'anthropic' ? 'claude' : userSettings.ai_provider === 'openai' ? 'openai' : 'claude';
const provider = (getQueryString(req, 'provider') as AIProvider) || (req.body.provider as AIProvider) || (userDefaultProvider as AIProvider);
// Check if user has API key for the selected provider
const hasApiKey = (provider === 'claude' && userSettings.ai_provider === 'anthropic' && !!userSettings.ai_api_key) ||
(provider === 'openai' && userSettings.ai_provider === 'openai' && !!userSettings.ai_api_key);
if (!hasApiKey) {
res.status(503).json({
error: 'AI classification not available',
message: `${provider === 'claude' ? 'Claude (Anthropic)' : 'OpenAI'} API key is not configured. Please configure the API key in your user settings.`
});
return;
}
@@ -28,8 +58,15 @@ router.post('/suggest/:id', async (req: Request, res: Response) => {
return;
}
// Get user API keys from user settings (already loaded above)
const userApiKeys = userSettings ? {
anthropic: userSettings.ai_provider === 'anthropic' ? userSettings.ai_api_key : undefined,
openai: userSettings.ai_provider === 'openai' ? userSettings.ai_api_key : undefined,
tavily: userSettings.tavily_api_key,
} : undefined;
logger.info(`Generating AI classification for: ${application.name} using ${provider}`);
const suggestion = await aiService.classifyApplication(application, provider, userApiKeys);
res.json(suggestion);
} catch (error) {
@@ -52,7 +89,7 @@ router.get('/taxonomy', (req: Request, res: Response) => {
// Get function by code
router.get('/function/:code', (req: Request, res: Response) => {
try {
const code = getParamString(req, 'code');
const func = aiService.getFunctionByCode(code);
if (!func) {
@@ -70,7 +107,7 @@ router.get('/function/:code', (req: Request, res: Response) => {
// Get classification history
router.get('/history', async (req: Request, res: Response) => {
try {
const limit = getQueryNumber(req, 'limit', 50);
const history = await databaseService.getClassificationHistory(limit);
res.json(history);
} catch (error) {
@@ -91,12 +128,16 @@ router.get('/stats', async (req: Request, res: Response) => {
});
// Check if AI is available - returns available providers
router.get('/ai-status', requireAuth, (req: Request, res: Response) => {
const availableProviders = aiService.getAvailableProviders();
// Get user's default provider from settings (default to 'claude')
const userSettings = (req as any).userSettings;
const userDefaultProvider = userSettings?.ai_provider === 'anthropic' ? 'claude' : userSettings?.ai_provider === 'openai' ? 'openai' : 'claude';
res.json({
available: availableProviders.length > 0,
providers: availableProviders,
defaultProvider: userDefaultProvider,
claude: {
available: aiService.isProviderConfigured('claude'),
model: 'claude-sonnet-4-20250514',
@@ -111,7 +152,7 @@ router.get('/ai-status', (req: Request, res: Response) => {
// Get the AI prompt for an application (for debugging/copy-paste)
router.get('/prompt/:id', async (req: Request, res: Response) => {
try {
const id = getParamString(req, 'id');
const application = await dataService.getApplicationById(id);
if (!application) {
@@ -127,10 +168,10 @@ router.get('/prompt/:id', async (req: Request, res: Response) => {
}
});
// Chat with AI about an application (requires search permission)
router.post('/chat/:id', requirePermission('search'), async (req: Request, res: Response) => {
try {
const id = getParamString(req, 'id');
const { message, conversationId, provider: requestProvider } = req.body;
if (!message || typeof message !== 'string' || message.trim().length === 0) {
@@ -138,12 +179,38 @@ router.post('/chat/:id', async (req: Request, res: Response) => {
return;
}
// Get provider from request or user settings (default to 'claude')
const userSettings = (req as any).userSettings;
// Check if AI is enabled for this user
if (!userSettings?.ai_enabled) {
res.status(403).json({
error: 'AI functionality is disabled',
message: 'AI functionality is not enabled in your profile settings. Please enable it in your user settings.'
});
return;
}
// Check if user has selected an AI provider
if (!userSettings?.ai_provider) {
res.status(403).json({
error: 'AI provider not configured',
message: 'Please select an AI provider (Claude or OpenAI) in your user settings.'
});
return;
}
const userDefaultProvider = userSettings.ai_provider === 'anthropic' ? 'claude' : userSettings.ai_provider === 'openai' ? 'openai' : 'claude';
const provider = (requestProvider as AIProvider) || (userDefaultProvider as AIProvider);
// Check if user has API key for the selected provider
const hasApiKey = (provider === 'claude' && userSettings.ai_provider === 'anthropic' && !!userSettings.ai_api_key) ||
(provider === 'openai' && userSettings.ai_provider === 'openai' && !!userSettings.ai_api_key);
if (!hasApiKey) {
res.status(503).json({
error: 'AI chat not available',
message: `${provider === 'claude' ? 'Claude (Anthropic)' : 'OpenAI'} API key is not configured. Please configure the API key in your user settings.`
});
return;
}
@@ -154,8 +221,15 @@ router.post('/chat/:id', async (req: Request, res: Response) => {
return;
}
// Get user API keys from user settings (already loaded above)
const userApiKeys = userSettings ? {
anthropic: userSettings.ai_provider === 'anthropic' ? userSettings.ai_api_key : undefined,
openai: userSettings.ai_provider === 'openai' ? userSettings.ai_api_key : undefined,
tavily: userSettings.tavily_api_key,
} : undefined;
logger.info(`Chat message for: ${application.name} using ${provider}`);
const response = await aiService.chat(application, message.trim(), conversationId, provider, userApiKeys);
res.json(response);
} catch (error) {
@@ -167,7 +241,7 @@ router.post('/chat/:id', async (req: Request, res: Response) => {
// Get conversation history
router.get('/chat/conversation/:conversationId', (req: Request, res: Response) => {
try {
const conversationId = getParamString(req, 'conversationId');
const messages = aiService.getConversationHistory(conversationId);
if (messages.length === 0) {
@@ -185,7 +259,7 @@ router.get('/chat/conversation/:conversationId', (req: Request, res: Response) =
// Clear a conversation
router.delete('/chat/conversation/:conversationId', (req: Request, res: Response) => {
try {
const conversationId = getParamString(req, 'conversationId');
const deleted = aiService.clearConversation(conversationId);
if (!deleted) {

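Both AI routes above (`/suggest/:id` and `/chat/:id`) repeat the same `ai_provider` → `AIProvider` mapping and per-provider API-key check. A sketch of that duplicated logic as one shared helper (names are hypothetical, mirroring the settings fields shown in the diff: `ai_enabled`, `ai_provider`, `ai_api_key`):

```typescript
// Hypothetical consolidation of the provider/key checks repeated in
// /suggest/:id and /chat/:id. Settings fields mirror the diff.
type AIProvider = 'claude' | 'openai';

interface UserAISettings {
  ai_enabled?: boolean;
  ai_provider?: 'anthropic' | 'openai';
  ai_api_key?: string;
}

// Map the stored provider name to the route-level AIProvider, defaulting to 'claude'.
function defaultProviderFor(settings: UserAISettings): AIProvider {
  return settings.ai_provider === 'openai' ? 'openai' : 'claude';
}

// True when the user's configured provider matches the requested one
// and an API key is present — the condition both routes duplicate.
function hasApiKeyFor(settings: UserAISettings, provider: AIProvider): boolean {
  if (!settings.ai_api_key) return false;
  if (provider === 'claude') return settings.ai_provider === 'anthropic';
  return settings.ai_provider === 'openai';
}
```

Each route could then call `hasApiKeyFor(userSettings, provider)` and keep only the response shaping inline.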
View File

@@ -4,6 +4,7 @@ import { join, dirname } from 'path';
import { fileURLToPath } from 'url';
import { logger } from '../services/logger.js';
import { clearEffortCalculationConfigCache, getEffortCalculationConfigV25 } from '../services/effortCalculation.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import type { EffortCalculationConfig, EffortCalculationConfigV25 } from '../config/effortCalculation.js';
import type { DataCompletenessConfig } from '../types/index.js';
@@ -13,9 +14,13 @@ const __dirname = dirname(__filename);
const router = Router();
// All routes require authentication and manage_settings permission
router.use(requireAuth);
router.use(requirePermission('manage_settings'));
// Path to the configuration files
const CONFIG_FILE_PATH = join(__dirname, '../../data/effort-calculation-config.json');
const CONFIG_FILE_PATH_V25 = join(__dirname, '../../data/effort-calculation-config.json');
const COMPLETENESS_CONFIG_FILE_PATH = join(__dirname, '../../data/data-completeness-config.json');
/**
@@ -142,37 +147,39 @@ router.get('/data-completeness', async (req: Request, res: Response) => {
description: 'Configuration for Data Completeness Score fields',
lastUpdated: new Date().toISOString(),
},
categories: [
{
id: 'general',
name: 'General',
description: 'General application information fields',
fields: [
{ id: 'organisation', name: 'Organisation', fieldPath: 'organisation', enabled: true },
{ id: 'applicationFunctions', name: 'ApplicationFunction', fieldPath: 'applicationFunctions', enabled: true },
{ id: 'status', name: 'Status', fieldPath: 'status', enabled: true },
{ id: 'businessImpactAnalyse', name: 'Business Impact Analyse', fieldPath: 'businessImpactAnalyse', enabled: true },
{ id: 'hostingType', name: 'Application Component Hosting Type', fieldPath: 'hostingType', enabled: true },
{ id: 'supplierProduct', name: 'Supplier Product', fieldPath: 'supplierProduct', enabled: true },
{ id: 'businessOwner', name: 'Business Owner', fieldPath: 'businessOwner', enabled: true },
{ id: 'systemOwner', name: 'System Owner', fieldPath: 'systemOwner', enabled: true },
{ id: 'functionalApplicationManagement', name: 'Functional Application Management', fieldPath: 'functionalApplicationManagement', enabled: true },
{ id: 'technicalApplicationManagement', name: 'Technical Application Management', fieldPath: 'technicalApplicationManagement', enabled: true },
],
},
{
id: 'applicationManagement',
name: 'Application Management',
description: 'Application management classification fields',
fields: [
{ id: 'governanceModel', name: 'ICT Governance Model', fieldPath: 'governanceModel', enabled: true },
{ id: 'applicationType', name: 'Application Management - Application Type', fieldPath: 'applicationType', enabled: true },
{ id: 'applicationManagementHosting', name: 'Application Management - Hosting', fieldPath: 'applicationManagementHosting', enabled: true },
{ id: 'applicationManagementTAM', name: 'Application Management - TAM', fieldPath: 'applicationManagementTAM', enabled: true },
{ id: 'dynamicsFactor', name: 'Application Management - Dynamics Factor', fieldPath: 'dynamicsFactor', enabled: true },
{ id: 'complexityFactor', name: 'Application Management - Complexity Factor', fieldPath: 'complexityFactor', enabled: true },
{ id: 'numberOfUsers', name: 'Application Management - Number of Users', fieldPath: 'numberOfUsers', enabled: true },
],
},
],
};
res.json(defaultConfig);
}
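The default data-completeness config changed shape in this hunk: `categories` moved from a keyed object to an array of `{ id, ... }` entries, and each field gained an `id`. If older configs persisted on disk still use the map shape, a small normalizer can upgrade them on read; a sketch under that assumption (types simplified from the diff, field ids defaulting to `fieldPath`):

```typescript
// Hypothetical normalizer from the old map-shaped config to the new
// array shape introduced in this commit.
interface FieldDef { id?: string; name: string; fieldPath: string; enabled: boolean }
interface CategoryDef { id: string; name: string; description: string; fields: FieldDef[] }

type OldCategories = Record<string, Omit<CategoryDef, 'id'>>;
type Categories = CategoryDef[] | OldCategories;

function normalizeCategories(categories: Categories): CategoryDef[] {
  const list: CategoryDef[] = Array.isArray(categories)
    ? categories
    : Object.entries(categories).map(([id, c]) => ({ id, ...c }));
  return list.map(c => ({
    ...c,
    fields: c.fields.map(f => ({ ...f, id: f.id ?? f.fieldPath })),
  }));
}
```

Running this in the config loader would make the new array-based frontend code tolerant of both on-disk formats.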

View File

@@ -4,6 +4,7 @@ import { databaseService } from '../services/database.js';
import { syncEngine } from '../services/syncEngine.js';
import { logger } from '../services/logger.js';
import { validateApplicationConfiguration, calculateRequiredEffortWithMinMax } from '../services/effortCalculation.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import type { ApplicationDetails, ApplicationStatus, ReferenceValue } from '../types/index.js';
import { readFileSync } from 'fs';
import { join, dirname } from 'path';
@@ -11,6 +12,10 @@ import { fileURLToPath } from 'url';
const router = Router();
// All routes require authentication and view_reports permission
router.use(requireAuth);
router.use(requirePermission('view_reports'));
// Simple in-memory cache for dashboard stats
interface CachedStats {
data: unknown;
@@ -778,6 +783,7 @@ router.get('/data-completeness', async (req: Request, res: Response) => {
byField: byFieldArray,
byApplication,
byTeam: byTeamArray,
config: completenessConfig, // Include config so frontend doesn't need to fetch it separately
});
} catch (error) {
logger.error('Failed to get data completeness', error);
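The reports file above declares a `CachedStats` interface under the comment "Simple in-memory cache for dashboard stats"; the cache body itself is outside this diff. A generic TTL-cache sketch of that pattern (all names illustrative, injected clock so expiry is testable):

```typescript
// Hypothetical TTL cache matching the "simple in-memory cache" comment.
// The real CachedStats implementation is not part of this diff.
interface CacheEntry<T> { data: T; expiresAt: number }

class TtlCache<T> {
  private entries = new Map<string, CacheEntry<T>>();
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): T | undefined {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (e.expiresAt <= this.now()) {
      this.entries.delete(key); // expired: drop the entry and report a miss
      return undefined;
    }
    return e.data;
  }

  set(key: string, data: T): void {
    this.entries.set(key, { data, expiresAt: this.now() + this.ttlMs });
  }
}
```

A dashboard handler would check `cache.get('stats')` first and only recompute on a miss, which keeps expensive aggregation off the hot path.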

View File

@@ -0,0 +1,488 @@
/**
* Data Validation routes
*
* Provides endpoints for validating and inspecting data in the cache/database.
*/
import { Router, Request, Response } from 'express';
import { normalizedCacheStore as cacheStore } from '../services/normalizedCacheStore.js';
import { logger } from '../services/logger.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import { getQueryString, getParamString } from '../utils/queryHelpers.js';
import { schemaCacheService } from '../services/schemaCacheService.js';
import { jiraAssetsClient } from '../services/jiraAssetsClient.js';
import { dataIntegrityService } from '../services/dataIntegrityService.js';
import { schemaMappingService } from '../services/schemaMappingService.js';
import { getDatabaseAdapter } from '../services/database/singleton.js';
import type { CMDBObjectTypeName } from '../generated/jira-types.js';
const router = Router();
// All routes require authentication and manage_settings permission
router.use(requireAuth);
router.use(requirePermission('manage_settings'));
/**
* GET /api/data-validation/stats
* Get comprehensive data validation statistics
*/
router.get('/stats', async (req: Request, res: Response) => {
try {
const db = getDatabaseAdapter();
const cacheStats = await cacheStore.getStats();
// Get object counts by type from cache
const objectsByType = cacheStats.objectsByType;
// Get schema from database (via cache)
const schema = await schemaCacheService.getSchema();
const objectTypes = schema.objectTypes;
const typeNames = Object.keys(objectTypes);
// Get schema information for each object type (join with schemas table)
const schemaInfoMap = new Map<string, { schemaId: string; schemaName: string }>();
try {
// Guard against an empty IN () clause when no object types are configured
if (typeNames.length > 0) {
const schemaInfoRows = await db.query<{
type_name: string;
jira_schema_id: string;
schema_name: string;
}>(`
SELECT ot.type_name, s.jira_schema_id, s.name as schema_name
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id
WHERE ot.type_name IN (${typeNames.map(() => '?').join(',')})
`, typeNames);
for (const row of schemaInfoRows) {
schemaInfoMap.set(row.type_name, {
schemaId: row.jira_schema_id,
schemaName: row.schema_name,
});
}
}
} catch (error) {
logger.debug('Failed to fetch schema information', error);
}
// Get Jira counts for comparison
const jiraCounts: Record<string, number> = {};
// Fetch counts from Jira in parallel, using schema IDs from database
const countPromises = typeNames.map(async (typeName) => {
try {
// Get schema ID from the database (already fetched above)
const schemaInfo = schemaInfoMap.get(typeName);
// If no schema info from database, try schemaMappingService as fallback
let schemaId: string | undefined = schemaInfo?.schemaId;
if (!schemaId || schemaId.trim() === '') {
schemaId = await schemaMappingService.getSchemaId(typeName);
}
// Skip if no schema ID is available (object type not configured)
if (!schemaId || schemaId.trim() === '') {
logger.debug(`No schema ID configured for ${typeName}, skipping Jira count`);
jiraCounts[typeName] = 0;
return { typeName, count: 0 };
}
const count = await jiraAssetsClient.getObjectCount(typeName, schemaId);
jiraCounts[typeName] = count;
return { typeName, count };
} catch (error) {
logger.debug(`Failed to get Jira count for ${typeName}`, error);
jiraCounts[typeName] = 0;
return { typeName, count: 0 };
}
});
await Promise.all(countPromises);
// Calculate differences
const typeComparisons: Array<{
typeName: string;
typeDisplayName: string;
schemaId?: string;
schemaName?: string;
cacheCount: number;
jiraCount: number;
difference: number;
syncStatus: 'synced' | 'outdated' | 'missing';
}> = [];
for (const [typeName, typeDef] of Object.entries(objectTypes)) {
const cacheCount = objectsByType[typeName] || 0;
const jiraCount = jiraCounts[typeName] || 0;
const difference = jiraCount - cacheCount;
let syncStatus: 'synced' | 'outdated' | 'missing';
if (cacheCount === 0 && jiraCount > 0) {
syncStatus = 'missing';
} else if (difference > 0) {
syncStatus = 'outdated';
} else {
syncStatus = 'synced';
}
const schemaInfo = schemaInfoMap.get(typeName);
typeComparisons.push({
typeName,
typeDisplayName: typeDef.name,
schemaId: schemaInfo?.schemaId,
schemaName: schemaInfo?.schemaName,
cacheCount,
jiraCount,
difference,
syncStatus,
});
}
// Sort by difference (most outdated first)
typeComparisons.sort((a, b) => b.difference - a.difference);
// Get relation statistics
const relationStats = {
total: cacheStats.totalRelations,
// Could add more detailed relation stats here
};
// Check for broken references (references to objects that don't exist)
let brokenReferences = 0;
try {
brokenReferences = await cacheStore.getBrokenReferencesCount();
} catch (error) {
logger.debug('Could not check for broken references', error);
}
// Get objects with missing required attributes
// This would require schema information, so we'll skip for now
res.json({
cache: {
totalObjects: cacheStats.totalObjects,
totalRelations: cacheStats.totalRelations,
objectsByType,
isWarm: cacheStats.isWarm,
dbSizeBytes: cacheStats.dbSizeBytes,
lastFullSync: cacheStats.lastFullSync,
lastIncrementalSync: cacheStats.lastIncrementalSync,
},
jira: {
counts: jiraCounts,
},
comparison: {
typeComparisons,
totalOutdated: typeComparisons.filter(t => t.syncStatus === 'outdated').length,
totalMissing: typeComparisons.filter(t => t.syncStatus === 'missing').length,
totalSynced: typeComparisons.filter(t => t.syncStatus === 'synced').length,
},
validation: {
brokenReferences,
// Add more validation metrics here
},
relations: relationStats,
});
} catch (error) {
logger.error('Failed to get data validation stats', error);
res.status(500).json({ error: 'Failed to get data validation stats' });
}
});
/**
* GET /api/data-validation/objects/:typeName
* Get sample objects of a specific type for inspection
*/
router.get('/objects/:typeName', async (req: Request, res: Response) => {
try {
const typeName = getParamString(req, 'typeName');
const limit = parseInt(getQueryString(req, 'limit') || '10', 10);
const offset = parseInt(getQueryString(req, 'offset') || '0', 10);
// Get schema from database (via cache)
const schema = await schemaCacheService.getSchema();
const objectTypes = schema.objectTypes;
if (!objectTypes[typeName]) {
res.status(400).json({
error: `Unknown object type: ${typeName}`,
supportedTypes: Object.keys(objectTypes),
});
return;
}
const objects = await cacheStore.getObjects(typeName as CMDBObjectTypeName, { limit, offset });
const total = await cacheStore.countObjects(typeName as CMDBObjectTypeName);
res.json({
typeName,
typeDisplayName: objectTypes[typeName].name,
objects,
pagination: {
limit,
offset,
total,
hasMore: offset + limit < total,
},
});
} catch (error) {
const typeName = getParamString(req, 'typeName');
logger.error(`Failed to get objects for type ${typeName}`, error);
res.status(500).json({ error: 'Failed to get objects' });
}
});
/**
* GET /api/data-validation/object/:id
* Get a specific object by ID for inspection
*/
router.get('/object/:id', async (req: Request, res: Response) => {
try {
const id = getParamString(req, 'id');
// Try to find the object in any type
// First, get the object's metadata
const objRow = await cacheStore.getObjectMetadata(id);
if (!objRow) {
res.status(404).json({ error: `Object ${id} not found in cache` });
return;
}
// Get schema from database (via cache)
const schema = await schemaCacheService.getSchema();
const objectTypes = schema.objectTypes;
const object = await cacheStore.getObject(objRow.object_type_name as any, id);
if (!object) {
res.status(404).json({ error: `Object ${id} could not be reconstructed` });
return;
}
res.json({
object,
metadata: {
typeName: objRow.object_type_name,
typeDisplayName: objectTypes[objRow.object_type_name]?.name || objRow.object_type_name,
objectKey: objRow.object_key,
label: objRow.label,
},
});
} catch (error) {
const id = getParamString(req, 'id');
logger.error(`Failed to get object ${id}`, error);
res.status(500).json({ error: 'Failed to get object' });
}
});
/**
* GET /api/data-validation/broken-references
* Get list of broken references (references to objects that don't exist)
*/
router.get('/broken-references', async (req: Request, res: Response) => {
try {
const limit = parseInt(getQueryString(req, 'limit') || '50', 10);
const offset = parseInt(getQueryString(req, 'offset') || '0', 10);
// Get broken references with details
const brokenRefs = await cacheStore.getBrokenReferences(limit, offset);
// Get total count
const total = await cacheStore.getBrokenReferencesCount();
res.json({
brokenReferences: brokenRefs,
pagination: {
limit,
offset,
total,
hasMore: offset + limit < total,
},
});
} catch (error) {
logger.error('Failed to get broken references', error);
res.status(500).json({ error: 'Failed to get broken references' });
}
});
/**
* POST /api/data-validation/repair-broken-references
* Repair broken references
*
* Query params:
* - mode: 'delete' | 'fetch' | 'dry-run' (default: 'fetch')
* - batchSize: number (default: 100)
* - maxRepairs: number (default: 0 = unlimited)
*/
router.post('/repair-broken-references', async (req: Request, res: Response) => {
try {
const mode = (getQueryString(req, 'mode') || 'fetch') as 'delete' | 'fetch' | 'dry-run';
const batchSize = parseInt(getQueryString(req, 'batchSize') || '100', 10);
const maxRepairs = parseInt(getQueryString(req, 'maxRepairs') || '0', 10);
if (!['delete', 'fetch', 'dry-run'].includes(mode)) {
res.status(400).json({ error: 'Invalid mode. Must be: delete, fetch, or dry-run' });
return;
}
logger.info(`DataValidation: Starting repair broken references (mode: ${mode}, batchSize: ${batchSize}, maxRepairs: ${maxRepairs})`);
const result = await dataIntegrityService.repairBrokenReferences(mode, batchSize, maxRepairs);
res.json({
status: 'completed',
mode,
result,
});
} catch (error) {
logger.error('Failed to repair broken references', error);
res.status(500).json({ error: 'Failed to repair broken references' });
}
});
/**
* POST /api/data-validation/full-integrity-check
* Run full integrity check and optionally repair
*
* Query params:
* - repair: boolean (default: false)
*/
router.post('/full-integrity-check', async (req: Request, res: Response) => {
try {
const repair = getQueryString(req, 'repair') === 'true';
logger.info(`DataValidation: Starting full integrity check (repair: ${repair})`);
const result = await dataIntegrityService.fullIntegrityCheck(repair);
res.json({
status: 'completed',
result,
});
} catch (error) {
logger.error('Failed to run full integrity check', error);
res.status(500).json({ error: 'Failed to run full integrity check' });
}
});
/**
* GET /api/data-validation/validation-status
* Get current validation status
*/
router.get('/validation-status', async (req: Request, res: Response) => {
try {
const status = await dataIntegrityService.validateReferences();
res.json(status);
} catch (error) {
logger.error('Failed to get validation status', error);
res.status(500).json({ error: 'Failed to get validation status' });
}
});
/**
* GET /api/data-validation/schema-mappings
* Get all schema mappings
*/
router.get('/schema-mappings', async (req: Request, res: Response) => {
try {
const mappings = await schemaMappingService.getAllMappings();
res.json({ mappings });
} catch (error) {
logger.error('Failed to get schema mappings', error);
res.status(500).json({ error: 'Failed to get schema mappings' });
}
});
/**
* POST /api/data-validation/schema-mappings
* Create or update a schema mapping
*/
router.post('/schema-mappings', async (req: Request, res: Response) => {
try {
const { objectTypeName, schemaId, enabled = true } = req.body;
if (!objectTypeName || !schemaId) {
res.status(400).json({ error: 'objectTypeName and schemaId are required' });
return;
}
await schemaMappingService.setMapping(objectTypeName, schemaId, enabled);
schemaMappingService.clearCache(); // Clear cache to reload
res.json({
status: 'success',
message: `Schema mapping updated for ${objectTypeName}`,
});
} catch (error) {
logger.error('Failed to set schema mapping', error);
res.status(500).json({ error: 'Failed to set schema mapping' });
}
});
/**
* DELETE /api/data-validation/schema-mappings/:objectTypeName
* Delete a schema mapping (will use default schema)
*/
router.delete('/schema-mappings/:objectTypeName', async (req: Request, res: Response) => {
try {
const objectTypeName = getParamString(req, 'objectTypeName');
await schemaMappingService.deleteMapping(objectTypeName);
schemaMappingService.clearCache(); // Clear cache to reload
res.json({
status: 'success',
message: `Schema mapping deleted for ${objectTypeName}`,
});
} catch (error) {
logger.error('Failed to delete schema mapping', error);
res.status(500).json({ error: 'Failed to delete schema mapping' });
}
});
/**
* GET /api/data-validation/object-types
* Get all object types with their sync configuration
*/
router.get('/object-types', async (req: Request, res: Response) => {
try {
logger.debug('GET /api/data-validation/object-types - Fetching object types...');
const objectTypes = await schemaMappingService.getAllObjectTypesWithConfig();
logger.info(`GET /api/data-validation/object-types - Returning ${objectTypes.length} object types`);
res.json({ objectTypes });
} catch (error) {
logger.error('Failed to get object types', error);
res.status(500).json({
error: 'Failed to get object types',
details: error instanceof Error ? error.message : String(error)
});
}
});
/**
* PATCH /api/data-validation/object-types/:objectTypeName/enabled
* Enable or disable an object type for syncing
*/
router.patch('/object-types/:objectTypeName/enabled', async (req: Request, res: Response) => {
try {
const objectTypeName = getParamString(req, 'objectTypeName');
const { enabled } = req.body;
if (typeof enabled !== 'boolean') {
res.status(400).json({ error: 'enabled must be a boolean' });
return;
}
await schemaMappingService.setTypeEnabled(objectTypeName, enabled);
schemaMappingService.clearCache();
res.json({
status: 'success',
message: `${objectTypeName} ${enabled ? 'enabled' : 'disabled'} for syncing`,
});
} catch (error) {
logger.error('Failed to update object type enabled status', error);
res.status(500).json({ error: 'Failed to update object type enabled status' });
}
});
export default router;
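The list endpoints above all return the same pagination envelope (`limit`, `offset`, `total`, `hasMore`). As an illustrative sketch only (the routes build this object inline; `buildPagination` is a hypothetical name, not part of this diff):

```typescript
// Pagination envelope shared by the data-validation list endpoints.
interface Pagination {
  limit: number;
  offset: number;
  total: number;
  hasMore: boolean;
}

function buildPagination(limit: number, offset: number, total: number): Pagination {
  // There are more rows when the current window ends before the total count.
  return { limit, offset, total, hasMore: offset + limit < total };
}
```

A client pages forward by adding `limit` to `offset` until `hasMore` comes back `false`.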


@@ -7,11 +7,17 @@
import { Router, Request, Response } from 'express';
import { cmdbService } from '../services/cmdbService.js';
import { logger } from '../services/logger.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import { getQueryString, getQueryNumber, getParamString } from '../utils/queryHelpers.js';
import { OBJECT_TYPES } from '../generated/jira-schema.js';
import type { CMDBObject, CMDBObjectTypeName } from '../generated/jira-types.js';
const router = Router();
// All routes require authentication and search permission
router.use(requireAuth);
router.use(requirePermission('search'));
// Get list of supported object types
router.get('/', (req: Request, res: Response) => {
const types = Object.entries(OBJECT_TYPES).map(([typeName, def]) => ({
@@ -31,10 +37,10 @@ router.get('/', (req: Request, res: Response) => {
// Get all objects of a type
router.get('/:type', async (req: Request, res: Response) => {
try {
const { type } = req.params;
const limit = parseInt(req.query.limit as string) || 1000;
const offset = parseInt(req.query.offset as string) || 0;
const search = req.query.search as string | undefined;
const type = getParamString(req, 'type');
const limit = getQueryNumber(req, 'limit', 1000);
const offset = getQueryNumber(req, 'offset', 0);
const search = getQueryString(req, 'search');
// Validate type
if (!OBJECT_TYPES[type]) {
@@ -70,8 +76,9 @@ router.get('/:type', async (req: Request, res: Response) => {
// Get a specific object by ID
router.get('/:type/:id', async (req: Request, res: Response) => {
try {
const { type, id } = req.params;
const forceRefresh = req.query.refresh === 'true';
const type = getParamString(req, 'type');
const id = getParamString(req, 'id');
const forceRefresh = getQueryString(req, 'refresh') === 'true';
// Validate type
if (!OBJECT_TYPES[type]) {
@@ -101,8 +108,10 @@ router.get('/:type/:id', async (req: Request, res: Response) => {
// Get related objects
router.get('/:type/:id/related/:relationType', async (req: Request, res: Response) => {
try {
const { type, id, relationType } = req.params;
const attribute = req.query.attribute as string | undefined;
const type = getParamString(req, 'type');
const id = getParamString(req, 'id');
const relationType = getParamString(req, 'relationType');
const attribute = getQueryString(req, 'attribute');
// Validate types
if (!OBJECT_TYPES[type]) {
@@ -138,8 +147,10 @@ router.get('/:type/:id/related/:relationType', async (req: Request, res: Respons
// Get objects referencing this object (inbound references)
router.get('/:type/:id/referenced-by/:sourceType', async (req: Request, res: Response) => {
try {
const { type, id, sourceType } = req.params;
const attribute = req.query.attribute as string | undefined;
const type = getParamString(req, 'type');
const id = getParamString(req, 'id');
const sourceType = getParamString(req, 'sourceType');
const attribute = getQueryString(req, 'attribute');
// Validate types
if (!OBJECT_TYPES[type]) {


@@ -0,0 +1,117 @@
/**
* Profile Routes
*
* Routes for user profile management (users can manage their own profile).
*/
import { Router, Request, Response } from 'express';
import { userService } from '../services/userService.js';
import { requireAuth } from '../middleware/authorization.js';
import { logger } from '../services/logger.js';
const router = Router();
// All routes require authentication
router.use(requireAuth);
// Get current user profile
router.get('/', async (req: Request, res: Response) => {
try {
if (!req.user || !('id' in req.user)) {
return res.status(401).json({ error: 'Authentication required' });
}
const user = await userService.getUserById(req.user.id);
if (!user) {
return res.status(404).json({ error: 'User not found' });
}
// Don't return sensitive data
const { password_hash, password_reset_token, password_reset_expires, email_verification_token, ...safeUser } = user;
res.json(safeUser);
} catch (error) {
logger.error('Get profile error:', error);
res.status(500).json({ error: 'Failed to fetch profile' });
}
});
// Update profile
router.put('/', async (req: Request, res: Response) => {
try {
if (!req.user || !('id' in req.user)) {
return res.status(401).json({ error: 'Authentication required' });
}
const { username, display_name } = req.body;
const user = await userService.updateUser(req.user.id, {
username,
display_name,
});
// Don't return sensitive data
const { password_hash, password_reset_token, password_reset_expires, email_verification_token, ...safeUser } = user;
res.json(safeUser);
} catch (error) {
logger.error('Update profile error:', error);
const message = error instanceof Error ? error.message : 'Failed to update profile';
res.status(400).json({ error: message });
}
});
// Change password
router.put('/password', async (req: Request, res: Response) => {
try {
if (!req.user || !('id' in req.user)) {
return res.status(401).json({ error: 'Authentication required' });
}
const { current_password, new_password } = req.body;
if (!current_password || !new_password) {
return res.status(400).json({ error: 'Current password and new password are required' });
}
// Verify current password
const user = await userService.getUserById(req.user.id);
if (!user) {
return res.status(404).json({ error: 'User not found' });
}
const isValid = await userService.verifyPassword(current_password, user.password_hash);
if (!isValid) {
return res.status(401).json({ error: 'Current password is incorrect' });
}
// Validate new password requirements
const minLength = parseInt(process.env.PASSWORD_MIN_LENGTH || '8', 10);
const requireUppercase = process.env.PASSWORD_REQUIRE_UPPERCASE === 'true';
const requireLowercase = process.env.PASSWORD_REQUIRE_LOWERCASE === 'true';
const requireNumber = process.env.PASSWORD_REQUIRE_NUMBER === 'true';
if (new_password.length < minLength) {
return res.status(400).json({ error: `Password must be at least ${minLength} characters long` });
}
if (requireUppercase && !/[A-Z]/.test(new_password)) {
return res.status(400).json({ error: 'Password must contain at least one uppercase letter' });
}
if (requireLowercase && !/[a-z]/.test(new_password)) {
return res.status(400).json({ error: 'Password must contain at least one lowercase letter' });
}
if (requireNumber && !/[0-9]/.test(new_password)) {
return res.status(400).json({ error: 'Password must contain at least one number' });
}
// Update password
await userService.updatePassword(req.user.id, new_password);
res.json({ success: true, message: 'Password updated successfully' });
} catch (error) {
logger.error('Change password error:', error);
res.status(500).json({ error: 'Failed to change password' });
}
});
export default router;
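The password change route enforces an env-driven policy (`PASSWORD_MIN_LENGTH`, `PASSWORD_REQUIRE_UPPERCASE`, etc.). A minimal sketch of those checks factored into one function; `validatePassword` and `PasswordPolicy` are hypothetical names, the route itself inlines the same logic:

```typescript
// Env-driven password policy mirroring the checks in PUT /api/profile/password.
interface PasswordPolicy {
  minLength: number;
  requireUppercase: boolean;
  requireLowercase: boolean;
  requireNumber: boolean;
}

// Returns an error message for the first failed rule, or null when valid.
function validatePassword(pw: string, policy: PasswordPolicy): string | null {
  if (pw.length < policy.minLength) {
    return `Password must be at least ${policy.minLength} characters long`;
  }
  if (policy.requireUppercase && !/[A-Z]/.test(pw)) {
    return 'Password must contain at least one uppercase letter';
  }
  if (policy.requireLowercase && !/[a-z]/.test(pw)) {
    return 'Password must contain at least one lowercase letter';
  }
  if (policy.requireNumber && !/[0-9]/.test(pw)) {
    return 'Password must contain at least one number';
  }
  return null; // passes all enabled checks
}
```

Returning the first failing rule matches the route's behavior of responding 400 with a single error message.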


@@ -1,9 +1,13 @@
import { Router, Request, Response } from 'express';
import { dataService } from '../services/dataService.js';
import { logger } from '../services/logger.js';
import { requireAuth } from '../middleware/authorization.js';
const router = Router();
// All routes require authentication
router.use(requireAuth);
// Get all reference data
router.get('/', async (req: Request, res: Response) => {
try {

backend/src/routes/roles.ts (new file, 203 lines)

@@ -0,0 +1,203 @@
/**
* Role Management Routes
*
* Routes for managing roles and permissions (admin only).
*/
import { Router, Request, Response } from 'express';
import { roleService } from '../services/roleService.js';
import { requireAuth, requireAdmin } from '../middleware/authorization.js';
import { logger } from '../services/logger.js';
const router = Router();
// Get all roles with their permissions (listing is public; mutations below are admin-only)
router.get('/', async (req: Request, res: Response) => {
try {
const roles = await roleService.getAllRoles();
// Get permissions for each role
const rolesWithPermissions = await Promise.all(
roles.map(async (role) => {
const permissions = await roleService.getRolePermissions(role.id);
return {
...role,
permissions: permissions.map(p => ({ id: p.id, name: p.name, description: p.description, resource: p.resource })),
};
})
);
res.json(rolesWithPermissions);
} catch (error) {
logger.error('Get roles error:', error);
res.status(500).json({ error: 'Failed to fetch roles' });
}
});
// Get role by ID
router.get('/:id', async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid role ID' });
}
const role = await roleService.getRoleById(id);
if (!role) {
return res.status(404).json({ error: 'Role not found' });
}
const permissions = await roleService.getRolePermissions(id);
res.json({
...role,
permissions: permissions.map(p => ({ id: p.id, name: p.name, description: p.description, resource: p.resource })),
});
} catch (error) {
logger.error('Get role error:', error);
res.status(500).json({ error: 'Failed to fetch role' });
}
});
// Create role (admin only)
router.post('/', requireAuth, requireAdmin, async (req: Request, res: Response) => {
try {
const { name, description } = req.body;
if (!name) {
return res.status(400).json({ error: 'Role name is required' });
}
const role = await roleService.createRole({ name, description });
res.status(201).json(role);
} catch (error) {
logger.error('Create role error:', error);
const message = error instanceof Error ? error.message : 'Failed to create role';
res.status(400).json({ error: message });
}
});
// Update role (admin only)
router.put('/:id', requireAuth, requireAdmin, async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid role ID' });
}
const { name, description } = req.body;
const role = await roleService.updateRole(id, { name, description });
res.json(role);
} catch (error) {
logger.error('Update role error:', error);
const message = error instanceof Error ? error.message : 'Failed to update role';
res.status(400).json({ error: message });
}
});
// Delete role (admin only)
router.delete('/:id', requireAuth, requireAdmin, async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid role ID' });
}
const success = await roleService.deleteRole(id);
if (success) {
res.json({ success: true });
} else {
res.status(404).json({ error: 'Role not found or cannot be deleted' });
}
} catch (error) {
logger.error('Delete role error:', error);
const message = error instanceof Error ? error.message : 'Failed to delete role';
res.status(400).json({ error: message });
}
});
// Get role permissions
router.get('/:id/permissions', async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid role ID' });
}
const permissions = await roleService.getRolePermissions(id);
res.json(permissions);
} catch (error) {
logger.error('Get role permissions error:', error);
res.status(500).json({ error: 'Failed to fetch role permissions' });
}
});
// Assign permission to role (admin only)
router.post('/:id/permissions', requireAuth, requireAdmin, async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid role ID' });
}
const { permission_id } = req.body;
if (!permission_id) {
return res.status(400).json({ error: 'permission_id is required' });
}
const success = await roleService.assignPermissionToRole(id, permission_id);
if (success) {
const permissions = await roleService.getRolePermissions(id);
res.json({ success: true, permissions: permissions.map(p => ({ id: p.id, name: p.name, description: p.description, resource: p.resource })) });
} else {
res.status(400).json({ error: 'Permission already assigned or invalid permission' });
}
} catch (error) {
logger.error('Assign permission error:', error);
res.status(500).json({ error: 'Failed to assign permission' });
}
});
// Remove permission from role (admin only)
router.delete('/:id/permissions/:permissionId', requireAuth, requireAdmin, async (req: Request, res: Response) => {
try {
const roleIdParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const permissionIdParam = Array.isArray(req.params.permissionId) ? req.params.permissionId[0] : req.params.permissionId;
const roleId = parseInt(roleIdParam, 10);
const permissionId = parseInt(permissionIdParam, 10);
if (isNaN(roleId) || isNaN(permissionId)) {
return res.status(400).json({ error: 'Invalid role ID or permission ID' });
}
const success = await roleService.removePermissionFromRole(roleId, permissionId);
if (success) {
const permissions = await roleService.getRolePermissions(roleId);
res.json({ success: true, permissions: permissions.map(p => ({ id: p.id, name: p.name, description: p.description, resource: p.resource })) });
} else {
res.status(404).json({ error: 'Permission not assigned to role' });
}
} catch (error) {
logger.error('Remove permission error:', error);
res.status(500).json({ error: 'Failed to remove permission' });
}
});
// Get all permissions (public)
router.get('/permissions/all', async (req: Request, res: Response) => {
try {
const permissions = await roleService.getAllPermissions();
res.json(permissions);
} catch (error) {
logger.error('Get permissions error:', error);
res.status(500).json({ error: 'Failed to fetch permissions' });
}
});
export default router;
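Every handler in this file repeats the same `Array.isArray(req.params.id)` normalization followed by `parseInt` and a `NaN` check. A shared helper would collapse the pattern; this is a hypothetical refactor sketch, not code from the diff:

```typescript
// Normalize an Express route param (string | string[] | undefined) to a
// numeric ID, or null when missing/non-numeric.
function parseIdParam(raw: string | string[] | undefined): number | null {
  const value = Array.isArray(raw) ? raw[0] : raw;
  const id = parseInt(value ?? '', 10);
  return Number.isNaN(id) ? null : id;
}
```

Each route would then reduce to `const id = parseIdParam(req.params.id); if (id === null) return res.status(400).json({ error: 'Invalid role ID' });`.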


@@ -1,132 +1,64 @@
import { Router } from 'express';
import { OBJECT_TYPES, SCHEMA_GENERATED_AT, SCHEMA_OBJECT_TYPE_COUNT, SCHEMA_TOTAL_ATTRIBUTES } from '../generated/jira-schema.js';
import type { ObjectTypeDefinition, AttributeDefinition } from '../generated/jira-schema.js';
import { dataService } from '../services/dataService.js';
import { schemaCacheService } from '../services/schemaCacheService.js';
import { schemaSyncService } from '../services/SchemaSyncService.js';
import { schemaMappingService } from '../services/schemaMappingService.js';
import { logger } from '../services/logger.js';
import { jiraAssetsClient } from '../services/jiraAssetsClient.js';
import type { CMDBObjectTypeName } from '../generated/jira-types.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
const router = Router();
// Extended types for API response
interface ObjectTypeWithLinks extends ObjectTypeDefinition {
incomingLinks: Array<{
fromType: string;
fromTypeName: string;
attributeName: string;
isMultiple: boolean;
}>;
outgoingLinks: Array<{
toType: string;
toTypeName: string;
attributeName: string;
isMultiple: boolean;
}>;
}
interface SchemaResponse {
metadata: {
generatedAt: string;
objectTypeCount: number;
totalAttributes: number;
};
objectTypes: Record<string, ObjectTypeWithLinks>;
cacheCounts?: Record<string, number>; // Cache counts by type name (from objectsByType)
jiraCounts?: Record<string, number>; // Actual counts from Jira Assets API
}
// All routes require authentication and search permission
router.use(requireAuth);
router.use(requirePermission('search'));
/**
* GET /api/schema
* Returns the complete Jira Assets schema with object types, attributes, and links
* Data is fetched from database (via cache service)
*/
router.get('/', async (req, res) => {
try {
// Build links between object types
const objectTypesWithLinks: Record<string, ObjectTypeWithLinks> = {};
// Get schema from cache (which fetches from database)
const schema = await schemaCacheService.getSchema();
// First pass: convert all object types
for (const [typeName, typeDef] of Object.entries(OBJECT_TYPES)) {
objectTypesWithLinks[typeName] = {
...typeDef,
incomingLinks: [],
outgoingLinks: [],
};
}
// Optionally fetch Jira counts for comparison (can be slow, so make it optional)
let jiraCounts: Record<string, number> | undefined;
const includeJiraCounts = req.query.includeJiraCounts === 'true';
// Second pass: build link relationships
for (const [typeName, typeDef] of Object.entries(OBJECT_TYPES)) {
for (const attr of typeDef.attributes) {
if (attr.type === 'reference' && attr.referenceTypeName) {
// Add outgoing link from this type
objectTypesWithLinks[typeName].outgoingLinks.push({
toType: attr.referenceTypeName,
toTypeName: OBJECT_TYPES[attr.referenceTypeName]?.name || attr.referenceTypeName,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
// Add incoming link to the referenced type
if (objectTypesWithLinks[attr.referenceTypeName]) {
objectTypesWithLinks[attr.referenceTypeName].incomingLinks.push({
fromType: typeName,
fromTypeName: typeDef.name,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
}
if (includeJiraCounts) {
const typeNames = Object.keys(schema.objectTypes);
logger.info(`Schema: Fetching object counts from Jira Assets for ${typeNames.length} object types...`);
jiraCounts = {};
// Fetch counts in parallel for better performance, using schema mappings
const countPromises = typeNames.map(async (typeName) => {
try {
// Get schema ID for this type
const schemaId = await schemaMappingService.getSchemaId(typeName);
const count = await jiraAssetsClient.getObjectCount(typeName, schemaId);
jiraCounts![typeName] = count;
return { typeName, count };
} catch (error) {
logger.warn(`Schema: Failed to get count for ${typeName}`, error);
// Use 0 as fallback if API call fails
jiraCounts![typeName] = 0;
return { typeName, count: 0 };
}
}
});
await Promise.all(countPromises);
logger.info(`Schema: Fetched counts for ${Object.keys(jiraCounts).length} object types from Jira Assets`);
}
// Get cache counts (objectsByType) if available
let cacheCounts: Record<string, number> | undefined;
try {
const cacheStatus = await dataService.getCacheStatus();
cacheCounts = cacheStatus.objectsByType;
} catch (err) {
logger.debug('Could not fetch cache counts for schema response', err);
// Continue without cache counts - not critical
}
// Fetch actual counts from Jira Assets for all object types
// This ensures the counts match exactly what's in Jira Assets
const jiraCounts: Record<string, number> = {};
const typeNames = Object.keys(OBJECT_TYPES) as CMDBObjectTypeName[];
logger.info(`Schema: Fetching object counts from Jira Assets for ${typeNames.length} object types...`);
// Fetch counts in parallel for better performance
const countPromises = typeNames.map(async (typeName) => {
try {
const count = await jiraAssetsClient.getObjectCount(typeName);
jiraCounts[typeName] = count;
return { typeName, count };
} catch (error) {
logger.warn(`Schema: Failed to get count for ${typeName}`, error);
// Use 0 as fallback if API call fails
jiraCounts[typeName] = 0;
return { typeName, count: 0 };
}
});
await Promise.all(countPromises);
logger.info(`Schema: Fetched counts for ${Object.keys(jiraCounts).length} object types from Jira Assets`);
const response: SchemaResponse = {
metadata: {
generatedAt: SCHEMA_GENERATED_AT,
objectTypeCount: SCHEMA_OBJECT_TYPE_COUNT,
totalAttributes: SCHEMA_TOTAL_ATTRIBUTES,
},
objectTypes: objectTypesWithLinks,
cacheCounts,
const response = {
...schema,
jiraCounts,
};
res.json(response);
} catch (error) {
console.error('Failed to get schema:', error);
logger.error('Failed to get schema:', error);
res.status(500).json({ error: 'Failed to get schema' });
}
});
@@ -135,60 +67,61 @@ router.get('/', async (req, res) => {
* GET /api/schema/object-type/:typeName
* Returns details for a specific object type
*/
router.get('/object-type/:typeName', (req, res) => {
const { typeName } = req.params;
const typeDef = OBJECT_TYPES[typeName];
if (!typeDef) {
return res.status(404).json({ error: `Object type '${typeName}' not found` });
}
// Build links for this specific type
const incomingLinks: Array<{
fromType: string;
fromTypeName: string;
attributeName: string;
isMultiple: boolean;
}> = [];
const outgoingLinks: Array<{
toType: string;
toTypeName: string;
attributeName: string;
isMultiple: boolean;
}> = [];
// Outgoing links from this type
for (const attr of typeDef.attributes) {
if (attr.type === 'reference' && attr.referenceTypeName) {
outgoingLinks.push({
toType: attr.referenceTypeName,
toTypeName: OBJECT_TYPES[attr.referenceTypeName]?.name || attr.referenceTypeName,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
router.get('/object-type/:typeName', async (req, res) => {
try {
const { typeName } = req.params;
// Get schema from cache
const schema = await schemaCacheService.getSchema();
const typeDef = schema.objectTypes[typeName];
if (!typeDef) {
return res.status(404).json({ error: `Object type '${typeName}' not found` });
}
res.json(typeDef);
} catch (error) {
logger.error('Failed to get object type:', error);
res.status(500).json({ error: 'Failed to get object type' });
}
// Incoming links from other types
for (const [otherTypeName, otherTypeDef] of Object.entries(OBJECT_TYPES)) {
for (const attr of otherTypeDef.attributes) {
if (attr.type === 'reference' && attr.referenceTypeName === typeName) {
incomingLinks.push({
fromType: otherTypeName,
fromTypeName: otherTypeDef.name,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
}
}
});
/**
* POST /api/schema/discover
* Manually trigger schema synchronization from Jira API
* Requires manage_settings permission
*/
router.post('/discover', requirePermission('manage_settings'), async (req, res) => {
try {
logger.info('Schema: Manual schema sync triggered');
const result = await schemaSyncService.syncAll();
schemaCacheService.invalidate(); // Invalidate cache
res.json({
message: 'Schema synchronization completed',
...result,
});
} catch (error) {
logger.error('Failed to sync schema:', error);
res.status(500).json({
error: 'Failed to sync schema',
details: error instanceof Error ? error.message : String(error),
});
}
});
/**
* GET /api/schema/sync-progress
* Get current sync progress
*/
router.get('/sync-progress', requirePermission('manage_settings'), async (req, res) => {
try {
const progress = schemaSyncService.getProgress();
res.json(progress);
} catch (error) {
logger.error('Failed to get sync progress:', error);
res.status(500).json({ error: 'Failed to get sync progress' });
}
res.json({
...typeDef,
incomingLinks,
outgoingLinks,
});
});
export default router;
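The two-pass link building in the schema route above can be sketched in isolation: every `reference` attribute yields an outgoing link on the owning type and a matching incoming link on the referenced type. A standalone sketch with hypothetical names (`buildLinks`, simplified type shapes), guarding against references to types outside the schema slice just as the route does:

```typescript
interface AttributeDef {
  name: string;
  type: string;
  referenceTypeName?: string;
  isMultiple: boolean;
}

interface TypeDef {
  name: string;
  attributes: AttributeDef[];
}

interface Link { type: string; attributeName: string; isMultiple: boolean }

function buildLinks(types: Record<string, TypeDef>) {
  const incoming: Record<string, Link[]> = {};
  const outgoing: Record<string, Link[]> = {};
  // First pass: initialize empty link lists for every type.
  for (const key of Object.keys(types)) {
    incoming[key] = [];
    outgoing[key] = [];
  }
  // Second pass: each reference attribute produces one outgoing and,
  // when the target type exists, one incoming link.
  for (const [typeName, def] of Object.entries(types)) {
    for (const attr of def.attributes) {
      if (attr.type === 'reference' && attr.referenceTypeName) {
        outgoing[typeName].push({
          type: attr.referenceTypeName,
          attributeName: attr.name,
          isMultiple: attr.isMultiple,
        });
        if (incoming[attr.referenceTypeName]) {
          incoming[attr.referenceTypeName].push({
            type: typeName,
            attributeName: attr.name,
            isMultiple: attr.isMultiple,
          });
        }
      }
    }
  }
  return { incoming, outgoing };
}
```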


@@ -0,0 +1,209 @@
/**
* Schema Configuration routes
*
* Provides endpoints for configuring which object types should be synced.
*/
import { Router, Request, Response } from 'express';
import { logger } from '../services/logger.js';
import { requireAuth, requirePermission } from '../middleware/authorization.js';
import { schemaConfigurationService } from '../services/schemaConfigurationService.js';
import { schemaSyncService } from '../services/SchemaSyncService.js';
const router = Router();
// All routes require authentication and manage_settings permission
router.use(requireAuth);
router.use(requirePermission('manage_settings'));
/**
* GET /api/schema-configuration/stats
* Get configuration statistics
*/
router.get('/stats', async (req: Request, res: Response) => {
try {
const stats = await schemaConfigurationService.getConfigurationStats();
res.json(stats);
} catch (error) {
logger.error('Failed to get configuration stats', error);
res.status(500).json({ error: 'Failed to get configuration stats' });
}
});
/**
* POST /api/schema-configuration/discover
* Discover and store all schemas, object types, and attributes from Jira Assets
* Uses the unified SchemaSyncService
*/
router.post('/discover', async (req: Request, res: Response) => {
try {
logger.info('Schema configuration: Manual schema sync triggered');
const result = await schemaSyncService.syncAll();
if (result.schemasProcessed === 0) {
logger.warn('Schema configuration: Sync returned 0 schemas - this might indicate an API issue');
res.status(400).json({
message: 'No schemas found. Please check: 1) JIRA_SERVICE_ACCOUNT_TOKEN is configured correctly, 2) Jira Assets API is accessible, 3) API endpoint /rest/assets/1.0/objectschema/list is available',
...result,
});
return;
}
res.json({
message: 'Schema synchronization completed successfully',
schemasDiscovered: result.schemasProcessed,
objectTypesDiscovered: result.objectTypesProcessed,
attributesDiscovered: result.attributesProcessed,
...result,
});
} catch (error) {
logger.error('Failed to sync schemas and object types', error);
res.status(500).json({
error: 'Failed to sync schemas and object types',
details: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined
});
}
});
/**
* GET /api/schema-configuration/object-types
* Get all configured object types grouped by schema
*/
router.get('/object-types', async (req: Request, res: Response) => {
try {
const schemas = await schemaConfigurationService.getConfiguredObjectTypes();
res.json({ schemas });
} catch (error) {
logger.error('Failed to get configured object types', error);
res.status(500).json({ error: 'Failed to get configured object types' });
}
});
/**
* PATCH /api/schema-configuration/object-types/:id/enabled
* Enable or disable an object type
*/
router.patch('/object-types/:id/enabled', async (req: Request, res: Response) => {
try {
const id = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
if (!id) {
res.status(400).json({ error: 'id parameter required' });
return;
}
const { enabled } = req.body;
if (typeof enabled !== 'boolean') {
res.status(400).json({ error: 'enabled must be a boolean' });
return;
}
await schemaConfigurationService.setObjectTypeEnabled(id, enabled);
res.json({
status: 'success',
message: `Object type ${id} ${enabled ? 'enabled' : 'disabled'}`,
});
} catch (error) {
logger.error('Failed to update object type enabled status', error);
res.status(500).json({ error: 'Failed to update object type enabled status' });
}
});
/**
* PATCH /api/schema-configuration/object-types/bulk-enabled
* Bulk update enabled status for multiple object types
*/
router.patch('/object-types/bulk-enabled', async (req: Request, res: Response) => {
try {
const { updates } = req.body;
if (!Array.isArray(updates)) {
res.status(400).json({ error: 'updates must be an array' });
return;
}
// Validate each update
for (const update of updates) {
if (!update.id || typeof update.enabled !== 'boolean') {
res.status(400).json({ error: 'Each update must have id (string) and enabled (boolean)' });
return;
}
}
await schemaConfigurationService.bulkSetObjectTypesEnabled(updates);
res.json({
status: 'success',
message: `Updated ${updates.length} object types`,
});
} catch (error) {
logger.error('Failed to bulk update object types', error);
res.status(500).json({ error: 'Failed to bulk update object types' });
}
});
/**
* GET /api/schema-configuration/check
* Check if configuration is complete (at least one object type enabled)
*/
router.get('/check', async (req: Request, res: Response) => {
try {
const isComplete = await schemaConfigurationService.isConfigurationComplete();
const stats = await schemaConfigurationService.getConfigurationStats();
res.json({
isConfigured: isComplete,
stats,
});
} catch (error) {
logger.error('Failed to check configuration', error);
res.status(500).json({ error: 'Failed to check configuration' });
}
});
/**
* GET /api/schema-configuration/schemas
* Get all schemas with their search enabled status
*/
router.get('/schemas', async (req: Request, res: Response) => {
try {
const schemas = await schemaConfigurationService.getSchemas();
res.json({ schemas });
} catch (error) {
logger.error('Failed to get schemas', error);
res.status(500).json({ error: 'Failed to get schemas' });
}
});
/**
* PATCH /api/schema-configuration/schemas/:schemaId/search-enabled
* Enable or disable search for a schema
*/
router.patch('/schemas/:schemaId/search-enabled', async (req: Request, res: Response) => {
try {
const schemaId = req.params.schemaId;
const { searchEnabled } = req.body;
if (typeof searchEnabled !== 'boolean') {
res.status(400).json({ error: 'searchEnabled must be a boolean' });
return;
}
const schemaIdStr = Array.isArray(schemaId) ? schemaId[0] : schemaId;
if (!schemaIdStr) {
res.status(400).json({ error: 'schemaId parameter required' });
return;
}
await schemaConfigurationService.setSchemaSearchEnabled(schemaIdStr, searchEnabled);
res.json({
status: 'success',
message: `Schema ${schemaId} search ${searchEnabled ? 'enabled' : 'disabled'}`,
});
} catch (error) {
logger.error('Failed to update schema search enabled status', error);
res.status(500).json({ error: 'Failed to update schema search enabled status' });
}
});
export default router;
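The bulk-enabled route rejects the whole batch unless every entry carries an `id` and a boolean `enabled` flag. A standalone sketch of that validation as a type guard (the helper name `validateUpdates` is hypothetical, and the explicit string check on `id` is slightly stricter than the route's truthiness check):

```typescript
interface ObjectTypeUpdate {
  id: string;
  enabled: boolean;
}

// Type guard mirroring the per-entry validation in
// PATCH /object-types/bulk-enabled: a non-array or any entry missing a
// non-empty id or a boolean enabled flag rejects the whole batch.
function validateUpdates(updates: unknown): updates is ObjectTypeUpdate[] {
  if (!Array.isArray(updates)) return false;
  return updates.every(
    (u) =>
      typeof u === "object" &&
      u !== null &&
      typeof (u as ObjectTypeUpdate).id === "string" &&
      (u as ObjectTypeUpdate).id.length > 0 &&
      typeof (u as ObjectTypeUpdate).enabled === "boolean"
  );
}
```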


@@ -1,11 +1,16 @@
 import { Router, Request, Response } from 'express';
 import { cmdbService } from '../services/cmdbService.js';
+import { jiraAssetsService } from '../services/jiraAssets.js';
 import { logger } from '../services/logger.js';
+import { config } from '../config/env.js';
+import { requireAuth, requirePermission } from '../middleware/authorization.js';
 const router = Router();
-// CMDB free-text search endpoint (from cache)
+// All routes require authentication and search permission
+router.use(requireAuth);
+router.use(requirePermission('search'));
+// CMDB free-text search endpoint (using Jira API)
 router.get('/', async (req: Request, res: Response) => {
 try {
 const query = req.query.query as string;
@@ -18,53 +23,37 @@ router.get('/', async (req: Request, res: Response) => {
 logger.info(`CMDB search request: query="${query}", limit=${limit}`);
-// Search all types in cache
-const results = await cmdbService.searchAllTypes(query.trim(), { limit });
-// Group results by object type
-const objectTypeMap = new Map<string, { id: number; name: string; iconUrl: string }>();
-const formattedResults = results.map(obj => {
-const typeName = obj._objectType || 'Unknown';
-// Track unique object types
-if (!objectTypeMap.has(typeName)) {
-objectTypeMap.set(typeName, {
-id: objectTypeMap.size + 1,
-name: typeName,
-iconUrl: '', // Can be enhanced to include actual icons
-});
-}
-const objectType = objectTypeMap.get(typeName)!;
-return {
-id: parseInt(obj.id, 10) || 0,
-key: obj.objectKey,
-label: obj.label,
-objectTypeId: objectType.id,
-avatarUrl: '',
-attributes: [], // Can be enhanced to include attributes
-};
-});
-// Build response matching CMDBSearchResponse interface
-const response = {
-metadata: {
-count: formattedResults.length,
-offset: 0,
-limit: limit,
-total: formattedResults.length,
-criteria: {
-query: query,
-type: 'global',
-schema: parseInt(config.jiraSchemaId, 10) || 0,
-},
-},
-objectTypes: Array.from(objectTypeMap.values()),
-results: formattedResults,
-};
-res.json(response);
+// Set user token on jiraAssetsService (same logic as middleware)
+// Use OAuth token if available, otherwise user's PAT, otherwise service account token
+if (req.accessToken) {
+jiraAssetsService.setRequestToken(req.accessToken);
+} else if (req.user && 'id' in req.user) {
+const userSettings = (req as any).userSettings;
+if (userSettings?.jira_pat) {
+jiraAssetsService.setRequestToken(userSettings.jira_pat);
+} else if (config.jiraServiceAccountToken) {
+jiraAssetsService.setRequestToken(config.jiraServiceAccountToken);
+} else {
+jiraAssetsService.setRequestToken(null);
+}
+} else {
+jiraAssetsService.setRequestToken(config.jiraServiceAccountToken || null);
+}
+try {
+// Use Jira API search (searches Key, Object Type, Label, Name, Description, Status)
+// The URL will be logged automatically by jiraAssetsService.searchCMDB()
+const response = await jiraAssetsService.searchCMDB(query.trim(), limit);
+// Clear token after request
+jiraAssetsService.clearRequestToken();
+res.json(response);
+} catch (error) {
+// Clear token on error
+jiraAssetsService.clearRequestToken();
+throw error;
+}
 } catch (error) {
 logger.error('CMDB search failed', error);
 res.status(500).json({ error: 'Failed to search CMDB' });


@@ -0,0 +1,105 @@
/**
* User Settings Routes
*
* Routes for managing user-specific settings (Jira PAT, AI features, etc.).
*/
import { Router, Request, Response } from 'express';
import { userSettingsService } from '../services/userSettingsService.js';
import { requireAuth } from '../middleware/authorization.js';
import { logger } from '../services/logger.js';
const router = Router();
// All routes require authentication
router.use(requireAuth);
// Get current user settings
router.get('/', async (req: Request, res: Response) => {
try {
if (!req.user || !('id' in req.user)) {
return res.status(401).json({ error: 'Authentication required' });
}
const settings = await userSettingsService.getUserSettings(req.user.id);
if (!settings) {
return res.status(404).json({ error: 'Settings not found' });
}
// Don't return sensitive data in full
res.json({
...settings,
jira_pat: settings.jira_pat ? '***' : null, // Mask PAT
ai_api_key: settings.ai_api_key ? '***' : null, // Mask API key
tavily_api_key: settings.tavily_api_key ? '***' : null, // Mask API key
});
} catch (error) {
logger.error('Get user settings error:', error);
res.status(500).json({ error: 'Failed to fetch user settings' });
}
});
// Update user settings
router.put('/', async (req: Request, res: Response) => {
try {
if (!req.user || !('id' in req.user)) {
return res.status(401).json({ error: 'Authentication required' });
}
const { jira_pat, ai_enabled, ai_provider, ai_api_key, web_search_enabled, tavily_api_key } = req.body;
const settings = await userSettingsService.updateUserSettings(req.user.id, {
jira_pat,
ai_enabled,
ai_provider,
ai_api_key,
web_search_enabled,
tavily_api_key,
});
// Don't return sensitive data in full
res.json({
...settings,
jira_pat: settings.jira_pat ? '***' : null,
ai_api_key: settings.ai_api_key ? '***' : null,
tavily_api_key: settings.tavily_api_key ? '***' : null,
});
} catch (error) {
logger.error('Update user settings error:', error);
res.status(500).json({ error: 'Failed to update user settings' });
}
});
// Validate Jira PAT
router.post('/jira-pat/validate', async (req: Request, res: Response) => {
try {
if (!req.user || !('id' in req.user)) {
return res.status(401).json({ error: 'Authentication required' });
}
const { pat } = req.body;
const isValid = await userSettingsService.validateJiraPat(req.user.id, pat);
res.json({ valid: isValid });
} catch (error) {
logger.error('Validate Jira PAT error:', error);
res.status(500).json({ error: 'Failed to validate Jira PAT' });
}
});
// Get Jira PAT status
router.get('/jira-pat/status', async (req: Request, res: Response) => {
try {
if (!req.user || !('id' in req.user)) {
return res.status(401).json({ error: 'Authentication required' });
}
const status = await userSettingsService.getJiraPatStatus(req.user.id);
res.json(status);
} catch (error) {
logger.error('Get Jira PAT status error:', error);
res.status(500).json({ error: 'Failed to get Jira PAT status' });
}
});
export default router;
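Both the GET and PUT handlers above apply the same masking before settings leave the API: stored credentials are replaced with `'***'` so the client can tell a value exists without ever receiving it. A minimal sketch of that pattern (the helper name `maskSecrets` and the `StoredSettings` shape are assumptions for illustration):

```typescript
interface StoredSettings {
  jira_pat: string | null;
  ai_api_key: string | null;
  tavily_api_key: string | null;
}

// Replace each stored secret with a '***' placeholder; null values stay
// null, so the client can distinguish "set" from "not set" without
// seeing the credential itself.
function maskSecrets(settings: StoredSettings): StoredSettings {
  return {
    ...settings,
    jira_pat: settings.jira_pat ? "***" : null,
    ai_api_key: settings.ai_api_key ? "***" : null,
    tavily_api_key: settings.tavily_api_key ? "***" : null,
  };
}
```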

backend/src/routes/users.ts

@@ -0,0 +1,319 @@
/**
* User Management Routes
*
* Routes for managing users (admin only).
*/
import { Router, Request, Response } from 'express';
import { userService } from '../services/userService.js';
import { roleService } from '../services/roleService.js';
import { requireAuth, requireAdmin } from '../middleware/authorization.js';
import { logger } from '../services/logger.js';
const router = Router();
// All routes require authentication and admin role
router.use(requireAuth);
router.use(requireAdmin);
// Get all users
router.get('/', async (req: Request, res: Response) => {
try {
const users = await userService.getAllUsers();
// Get roles for each user
const usersWithRoles = await Promise.all(
users.map(async (user) => {
const roles = await userService.getUserRoles(user.id);
return {
...user,
roles: roles.map(r => ({ id: r.id, name: r.name, description: r.description })),
};
})
);
res.json(usersWithRoles);
} catch (error) {
logger.error('Get users error:', error);
res.status(500).json({ error: 'Failed to fetch users' });
}
});
// Get user by ID
router.get('/:id', async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
const user = await userService.getUserById(id);
if (!user) {
return res.status(404).json({ error: 'User not found' });
}
const roles = await userService.getUserRoles(user.id);
const permissions = await roleService.getUserPermissions(user.id);
res.json({
...user,
roles: roles.map(r => ({ id: r.id, name: r.name, description: r.description })),
permissions: permissions.map(p => ({ id: p.id, name: p.name, description: p.description })),
});
} catch (error) {
logger.error('Get user error:', error);
res.status(500).json({ error: 'Failed to fetch user' });
}
});
// Create user
router.post('/', async (req: Request, res: Response) => {
try {
const { email, username, password, display_name, send_invitation } = req.body;
if (!email || !username) {
return res.status(400).json({ error: 'Email and username are required' });
}
const user = await userService.createUser({
email,
username,
password,
display_name,
send_invitation: send_invitation !== false, // Default to true
});
const roles = await userService.getUserRoles(user.id);
res.status(201).json({
...user,
roles: roles.map(r => ({ id: r.id, name: r.name, description: r.description })),
});
} catch (error) {
logger.error('Create user error:', error);
const message = error instanceof Error ? error.message : 'Failed to create user';
res.status(400).json({ error: message });
}
});
// Update user
router.put('/:id', async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
const { email, username, display_name, is_active } = req.body;
const user = await userService.updateUser(id, {
email,
username,
display_name,
is_active,
});
const roles = await userService.getUserRoles(user.id);
res.json({
...user,
roles: roles.map(r => ({ id: r.id, name: r.name, description: r.description })),
});
} catch (error) {
logger.error('Update user error:', error);
const message = error instanceof Error ? error.message : 'Failed to update user';
res.status(400).json({ error: message });
}
});
// Delete user
router.delete('/:id', async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
// Prevent deleting yourself
if (req.user && 'id' in req.user && req.user.id === id) {
return res.status(400).json({ error: 'Cannot delete your own account' });
}
const success = await userService.deleteUser(id);
if (success) {
res.json({ success: true });
} else {
res.status(404).json({ error: 'User not found' });
}
} catch (error) {
logger.error('Delete user error:', error);
res.status(500).json({ error: 'Failed to delete user' });
}
});
// Send invitation email
router.post('/:id/invite', async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
const success = await userService.sendInvitation(id);
if (success) {
res.json({ success: true, message: 'Invitation sent successfully' });
} else {
res.status(404).json({ error: 'User not found' });
}
} catch (error) {
logger.error('Send invitation error:', error);
res.status(500).json({ error: 'Failed to send invitation' });
}
});
// Assign role to user
router.post('/:id/roles', async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
const { role_id } = req.body;
if (!role_id) {
return res.status(400).json({ error: 'role_id is required' });
}
const success = await userService.assignRole(id, role_id);
if (success) {
const roles = await userService.getUserRoles(id);
res.json({ success: true, roles: roles.map(r => ({ id: r.id, name: r.name, description: r.description })) });
} else {
res.status(400).json({ error: 'Role already assigned or invalid role' });
}
} catch (error) {
logger.error('Assign role error:', error);
res.status(500).json({ error: 'Failed to assign role' });
}
});
// Remove role from user
router.delete('/:id/roles/:roleId', async (req: Request, res: Response) => {
try {
const userIdParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const roleIdParam = Array.isArray(req.params.roleId) ? req.params.roleId[0] : req.params.roleId;
const userId = parseInt(userIdParam, 10);
const roleId = parseInt(roleIdParam, 10);
if (isNaN(userId) || isNaN(roleId)) {
return res.status(400).json({ error: 'Invalid user ID or role ID' });
}
// Prevent removing administrator role from yourself
if (req.user && 'id' in req.user && req.user.id === userId) {
const role = await roleService.getRoleById(roleId);
if (role && role.name === 'administrator') {
return res.status(400).json({ error: 'Cannot remove administrator role from your own account' });
}
}
const success = await userService.removeRole(userId, roleId);
if (success) {
const roles = await userService.getUserRoles(userId);
res.json({ success: true, roles: roles.map(r => ({ id: r.id, name: r.name, description: r.description })) });
} else {
res.status(404).json({ error: 'Role not assigned to user' });
}
} catch (error) {
logger.error('Remove role error:', error);
res.status(500).json({ error: 'Failed to remove role' });
}
});
// Activate/deactivate user
router.put('/:id/activate', async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
const { is_active } = req.body;
if (typeof is_active !== 'boolean') {
return res.status(400).json({ error: 'is_active must be a boolean' });
}
// Prevent deactivating yourself
if (req.user && 'id' in req.user && req.user.id === id && !is_active) {
return res.status(400).json({ error: 'Cannot deactivate your own account' });
}
const user = await userService.updateUser(id, { is_active });
res.json(user);
} catch (error) {
logger.error('Activate user error:', error);
res.status(500).json({ error: 'Failed to update user status' });
}
});
// Manually verify email address (admin action)
router.put('/:id/verify-email', async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
await userService.manuallyVerifyEmail(id);
const user = await userService.getUserById(id);
if (!user) {
return res.status(404).json({ error: 'User not found' });
}
const roles = await userService.getUserRoles(user.id);
res.json({
...user,
roles: roles.map(r => ({ id: r.id, name: r.name, description: r.description })),
});
} catch (error) {
logger.error('Verify email error:', error);
res.status(500).json({ error: 'Failed to verify email' });
}
});
// Set password for user (admin action)
router.put('/:id/password', async (req: Request, res: Response) => {
try {
const idParam = Array.isArray(req.params.id) ? req.params.id[0] : req.params.id;
const id = parseInt(idParam, 10);
if (isNaN(id)) {
return res.status(400).json({ error: 'Invalid user ID' });
}
const { password } = req.body;
if (!password || typeof password !== 'string') {
return res.status(400).json({ error: 'Password is required' });
}
if (password.length < 8) {
return res.status(400).json({ error: 'Password must be at least 8 characters long' });
}
await userService.updatePassword(id, password);
logger.info(`Password set by admin for user: ${id}`);
res.json({ success: true, message: 'Password updated successfully' });
} catch (error) {
logger.error('Set password error:', error);
res.status(500).json({ error: 'Failed to set password' });
}
});
export default router;
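Every route above repeats the same three steps for the `:id` parameter: normalize a possibly-array value, parse it base-10, and reject NaN. That boilerplate can be sketched as one helper (the name `parseIdParam` is hypothetical, not part of the diff):

```typescript
// Normalize a route param that may arrive as string | string[] (Express 5
// can produce arrays), then parse it as a base-10 integer.
// Returns null when the param is missing or not a number.
function parseIdParam(param: string | string[] | undefined): number | null {
  const raw = Array.isArray(param) ? param[0] : param;
  if (!raw) return null;
  const id = parseInt(raw, 10);
  return Number.isNaN(id) ? null : id;
}
```

A route would then respond 400 whenever the helper returns null, matching the `isNaN(id)` checks in the handlers above.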


@@ -0,0 +1,395 @@
/**
* ObjectSyncService - Synchronizes objects from Jira Assets API
*
* Handles:
* - Full sync for enabled types
* - Incremental sync via jira_updated_at
* - Recursive reference processing
* - Reference-only caching for disabled types
*/
import { logger } from './logger.js';
import { jiraAssetsClient } from '../infrastructure/jira/JiraAssetsClient.js';
import { PayloadProcessor, type ProcessedObject } from './PayloadProcessor.js';
import { SchemaRepository } from '../repositories/SchemaRepository.js';
import { ObjectCacheRepository } from '../repositories/ObjectCacheRepository.js';
import type { ObjectEntry } from '../domain/jiraAssetsPayload.js';
import { SyncPolicy } from '../domain/syncPolicy.js';
export interface SyncResult {
objectsProcessed: number;
objectsCached: number;
relationsExtracted: number;
errors: Array<{ objectId: string; error: string }>;
}
export class ObjectSyncService {
private processor: PayloadProcessor;
constructor(
private schemaRepo: SchemaRepository,
private cacheRepo: ObjectCacheRepository
) {
this.processor = new PayloadProcessor(schemaRepo, cacheRepo);
}
/**
* Sync all objects of an enabled type
*/
async syncObjectType(
schemaId: string,
typeId: number,
typeName: string,
displayName: string
): Promise<SyncResult> {
// Validate schemaId before proceeding
if (!schemaId || schemaId.trim() === '') {
const errorMessage = `Schema ID is missing or empty for object type "${displayName}" (${typeName}). Please run schema sync to ensure all object types are properly associated with their schemas.`;
logger.error(`ObjectSyncService: ${errorMessage}`);
return {
objectsProcessed: 0,
objectsCached: 0,
relationsExtracted: 0,
errors: [{
objectId: typeName,
error: errorMessage,
}],
};
}
logger.info(`ObjectSyncService: Starting sync for ${displayName} (${typeName}) from schema ${schemaId}`);
const result: SyncResult = {
objectsProcessed: 0,
objectsCached: 0,
relationsExtracted: 0,
errors: [],
};
try {
// Get enabled types for sync policy
const enabledTypes = await this.schemaRepo.getEnabledObjectTypes();
const enabledTypeSet = new Set(enabledTypes.map(t => t.typeName));
// Fetch all objects of this type
const iql = `objectType = "${displayName}"`;
let page = 1;
let hasMore = true;
const pageSize = 40;
while (hasMore) {
let searchResult;
try {
searchResult = await jiraAssetsClient.searchObjects(iql, schemaId, {
page,
pageSize,
});
} catch (error) {
// Log detailed error information
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
const errorDetails = error instanceof Error && error.cause ? String(error.cause) : undefined;
logger.error(`ObjectSyncService: Failed to search objects for ${typeName}`, {
error: errorMessage,
details: errorDetails,
iql,
schemaId,
page,
});
// Add error to result and return early
result.errors.push({
objectId: typeName,
error: `Failed to fetch objects from Jira: ${errorMessage}. This could be due to network issues, incorrect Jira host URL, or authentication problems. Check backend logs for details.`,
});
// Return result with error instead of throwing (allows partial results to be returned)
return result;
}
if (searchResult.objectEntries.length === 0) {
break;
}
// Process payload recursively (extracts all referenced objects)
const processed = await this.processor.processPayload(
searchResult.objectEntries,
enabledTypeSet
);
// Cache all processed objects
const processedEntries = Array.from(processed.entries());
let cachedCount = 0;
let skippedCount = 0;
logger.info(`ObjectSyncService: Processing ${processedEntries.length} objects from payload (includes root + referenced objects). Root objects: ${searchResult.objectEntries.length}`);
// Group by type for logging
const objectsByType = new Map<string, number>();
for (const [objectId, processedObj] of processedEntries) {
const objType = processedObj.typeName || processedObj.objectEntry.objectType?.name || 'Unknown';
objectsByType.set(objType, (objectsByType.get(objType) || 0) + 1);
}
logger.info(`ObjectSyncService: Objects by type: ${Array.from(objectsByType.entries()).map(([type, count]) => `${type}: ${count}`).join(', ')}`);
for (const [objectId, processedObj] of processedEntries) {
try {
// Cache the object; cacheProcessedObject uses a generic fallback type name
// when no resolved type name is available, so it is expected to succeed
// for every processed object
await this.cacheProcessedObject(processedObj, enabledTypeSet);
cachedCount++;
result.relationsExtracted += processedObj.objectEntry.attributes?.length || 0;
logger.debug(`ObjectSyncService: Successfully cached object ${processedObj.objectEntry.objectKey} (ID: ${objectId}, type: ${processedObj.typeName || processedObj.objectEntry.objectType?.name || 'fallback'})`);
} catch (error) {
logger.error(`ObjectSyncService: Failed to cache object ${objectId} (${processedObj.objectEntry.objectKey})`, error);
result.errors.push({
objectId,
error: error instanceof Error ? error.message : 'Unknown error',
});
skippedCount++;
}
}
result.objectsCached = cachedCount;
if (skippedCount > 0) {
logger.warn(`ObjectSyncService: Skipped ${skippedCount} objects (no type name available or cache error) out of ${processedEntries.length} processed objects`);
}
logger.info(`ObjectSyncService: Cached ${cachedCount} objects, skipped ${skippedCount} objects out of ${processedEntries.length} total processed objects`);
result.objectsProcessed += searchResult.objectEntries.length;
hasMore = searchResult.hasMore;
page++;
}
logger.info(
`ObjectSyncService: Sync complete for ${displayName} - ${result.objectsProcessed} objects processed, ${result.objectsCached} cached, ${result.errors.length} errors`
);
} catch (error) {
logger.error(`ObjectSyncService: Failed to sync ${displayName}`, error);
result.errors.push({
objectId: typeName,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
return result;
}
/**
* Sync incremental updates (objects updated since timestamp)
* Note: This may not be supported on Jira Data Center
*/
async syncIncremental(
schemaId: string,
since: Date,
enabledTypes: Set<string>
): Promise<SyncResult> {
logger.info(`ObjectSyncService: Starting incremental sync since ${since.toISOString()}`);
const result: SyncResult = {
objectsProcessed: 0,
objectsCached: 0,
relationsExtracted: 0,
errors: [],
};
try {
// IQL for updated objects (may not work on Data Center)
const iql = `updated >= "${since.toISOString()}"`;
const searchResult = await jiraAssetsClient.searchObjects(iql, schemaId, {
page: 1,
pageSize: 100,
});
// Process all entries
const processed = await this.processor.processPayload(searchResult.objectEntries, enabledTypes);
// Cache all processed objects
for (const [objectId, processedObj] of processed.entries()) {
try {
await this.cacheProcessedObject(processedObj, enabledTypes);
result.objectsCached++;
} catch (error) {
logger.error(`ObjectSyncService: Failed to cache object ${objectId} in incremental sync`, error);
result.errors.push({
objectId,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
result.objectsProcessed = searchResult.objectEntries.length;
} catch (error) {
logger.error('ObjectSyncService: Incremental sync failed', error);
result.errors.push({
objectId: 'incremental',
error: error instanceof Error ? error.message : 'Unknown error',
});
}
return result;
}
/**
* Sync a single object (for refresh operations)
*/
async syncSingleObject(
objectId: string,
enabledTypes: Set<string>
): Promise<{ cached: boolean; error?: string }> {
try {
// Fetch object from Jira
const entry = await jiraAssetsClient.getObject(objectId);
if (!entry) {
return { cached: false, error: 'Object not found in Jira' };
}
// Process recursively
const processed = await this.processor.processPayload([entry], enabledTypes);
const processedObj = processed.get(String(entry.id));
if (!processedObj) {
return { cached: false, error: 'Failed to process object' };
}
// Cache object
await this.cacheProcessedObject(processedObj, enabledTypes);
return { cached: true };
} catch (error) {
logger.error(`ObjectSyncService: Failed to sync single object ${objectId}`, error);
return {
cached: false,
error: error instanceof Error ? error.message : 'Unknown error',
};
}
}
/**
* Cache a processed object to database
*/
private async cacheProcessedObject(
processed: ProcessedObject,
enabledTypes: Set<string>
): Promise<void> {
const { objectEntry, typeName, syncPolicy, shouldCacheAttributes } = processed;
// If typeName is not resolved, try to use Jira type name as fallback
// This allows referenced objects to be cached even if their type hasn't been discovered yet
let effectiveTypeName = typeName;
let isFallbackTypeName = false;
if (!effectiveTypeName) {
const jiraTypeId = objectEntry.objectType?.id;
const jiraTypeName = objectEntry.objectType?.name;
if (jiraTypeName) {
// Use Jira type name as fallback (will be stored in object_type_name)
// Generate a PascalCase type name from Jira display name
const { toPascalCase } = await import('./schemaUtils.js');
effectiveTypeName = toPascalCase(jiraTypeName) || jiraTypeName.replace(/[^a-zA-Z0-9]/g, '');
isFallbackTypeName = true;
logger.debug(`ObjectSyncService: Using fallback type name "${effectiveTypeName}" for object ${objectEntry.objectKey} (Jira type ID: ${jiraTypeId}, Jira name: "${jiraTypeName}"). This type needs to be discovered via schema discovery for proper attribute caching.`, {
objectKey: objectEntry.objectKey,
objectId: objectEntry.id,
jiraTypeId,
jiraTypeName,
fallbackTypeName: effectiveTypeName,
});
} else {
// No type name available at all - try to use a generic fallback
// This ensures referenced objects are always cached and queryable
const jiraTypeIdStr = jiraTypeId ? String(jiraTypeId) : 'unknown';
effectiveTypeName = `UnknownType_${jiraTypeIdStr}`;
isFallbackTypeName = true;
logger.warn(`ObjectSyncService: Using generic fallback type name "${effectiveTypeName}" for object ${objectEntry.objectKey} (ID: ${objectEntry.id}, Jira type ID: ${jiraTypeId || 'unknown'}). This object will be cached but may need schema discovery for proper attribute caching.`, {
objectKey: objectEntry.objectKey,
objectId: objectEntry.id,
jiraTypeId,
fallbackTypeName: effectiveTypeName,
hint: 'Run schema discovery to include all object types that are referenced by your synced objects.',
});
}
}
// Use effectiveTypeName for the rest of the function
const typeNameToUse = effectiveTypeName!;
// Normalize object (update processed with effective type name if needed)
let processedForNormalization = processed;
if (isFallbackTypeName) {
processedForNormalization = {
...processed,
typeName: typeNameToUse,
};
}
const normalized = await this.processor.normalizeObject(processedForNormalization);
// Access the database adapter to use transactions
const db = this.cacheRepo.db;
logger.debug(`ObjectSyncService: About to cache object ${objectEntry.objectKey} (ID: ${objectEntry.id}) with type "${typeNameToUse}" (fallback: ${isFallbackTypeName})`);
await db.transaction(async (txDb) => {
const txCacheRepo = new ObjectCacheRepository(txDb);
// Upsert object record (with effective type name)
logger.debug(`ObjectSyncService: Upserting object ${objectEntry.objectKey} (ID: ${objectEntry.id}) with type "${typeNameToUse}" (fallback: ${isFallbackTypeName})`);
await txCacheRepo.upsertObject({
...normalized.objectRecord,
objectTypeName: typeNameToUse,
});
// Handle attributes based on sync policy
// CRITICAL: Only replace attributes if attributes[] was present in API response
// For fallback type names, skip attribute caching (we don't have attribute definitions)
if (!isFallbackTypeName && (syncPolicy === SyncPolicy.ENABLED || syncPolicy === SyncPolicy.REFERENCE_ONLY) && shouldCacheAttributes) {
// Delete existing attributes (full replace)
await txCacheRepo.deleteAttributeValues(normalized.objectRecord.id);
// Insert new attributes
if (normalized.attributeValues.length > 0) {
await txCacheRepo.batchUpsertAttributeValues(
normalized.attributeValues.map(v => ({
...v,
objectId: normalized.objectRecord.id,
}))
);
}
} else if (!isFallbackTypeName && (syncPolicy === SyncPolicy.ENABLED || syncPolicy === SyncPolicy.REFERENCE_ONLY)) {
// attributes[] not present on this shallow object: keep existing attributes (don't delete)
// Cache object metadata even without attributes (reference-only)
// This allows basic object lookups for references
} else if (isFallbackTypeName) {
// For fallback type names, only cache object metadata (no attributes)
// Attributes will be cached once the type is properly discovered
logger.debug(`ObjectSyncService: Skipping attribute caching for object ${objectEntry.objectKey} with fallback type name "${typeNameToUse}". Attributes will be cached after schema discovery.`);
}
// Upsert relations
await txCacheRepo.deleteRelations(normalized.objectRecord.id);
for (const relation of normalized.relations) {
// Resolve target type name
const targetObj = await txCacheRepo.getObject(relation.targetId);
const targetType = targetObj?.objectTypeName || relation.targetType;
await txCacheRepo.upsertRelation({
sourceId: normalized.objectRecord.id,
targetId: relation.targetId,
attributeId: relation.attributeId,
sourceType: normalized.objectRecord.objectTypeName,
targetType,
});
}
});
}
}
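`syncObjectType` pages through results with a fixed page size until the API reports no more entries. The loop can be sketched in isolation (the helper name `drainPages` and the `Page` shape are assumptions; `fetchPage` stands in for `jiraAssetsClient.searchObjects`):

```typescript
interface Page<T> {
  entries: T[];
  hasMore: boolean;
}

// Keep requesting pages of a fixed size until the source reports no more
// results (or returns an empty page), accumulating every entry — the same
// page/pageSize/hasMore loop used in syncObjectType.
async function drainPages<T>(
  fetchPage: (page: number, pageSize: number) => Promise<Page<T>>,
  pageSize = 40
): Promise<T[]> {
  const all: T[] = [];
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const result = await fetchPage(page, pageSize);
    if (result.entries.length === 0) break;
    all.push(...result.entries);
    hasMore = result.hasMore;
    page++;
  }
  return all;
}
```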


@@ -0,0 +1,369 @@
/**
* PayloadProcessor - Recursive processing of Jira Assets API payloads
*
* Handles:
* - Recursive reference expansion (level2, level3, etc.)
* - Cycle detection with visited sets
* - Attribute replacement only when attributes[] present
* - Reference-only caching for disabled types
*/
import { logger } from './logger.js';
import type { ObjectEntry, ReferencedObject, ObjectAttribute, ObjectAttributeValue, ConfluenceValue } from '../domain/jiraAssetsPayload.js';
import { isReferenceValue, isSimpleValue, hasAttributes } from '../domain/jiraAssetsPayload.js';
import type { SyncPolicy } from '../domain/syncPolicy.js';
import { SyncPolicy as SyncPolicyEnum } from '../domain/syncPolicy.js';
import type { SchemaRepository } from '../repositories/SchemaRepository.js';
import type { ObjectCacheRepository } from '../repositories/ObjectCacheRepository.js';
import type { AttributeRecord } from '../repositories/SchemaRepository.js';
export interface ProcessedObject {
objectEntry: ObjectEntry;
typeName: string | null; // Resolved from objectType.id
syncPolicy: SyncPolicy;
shouldCacheAttributes: boolean; // true if attributes[] present
}
export class PayloadProcessor {
constructor(
private schemaRepo: SchemaRepository,
private cacheRepo: ObjectCacheRepository
) {}
/**
* Process a payload recursively, extracting all objects
*
* @param objectEntries - Root objects from API
* @param enabledTypes - Set of enabled type names for full sync
* @returns Map of objectId -> ProcessedObject (includes recursive references)
*/
async processPayload(
objectEntries: ObjectEntry[],
enabledTypes: Set<string>
): Promise<Map<string, ProcessedObject>> {
const processed = new Map<string, ProcessedObject>();
const visited = new Set<string>(); // objectId/objectKey for cycle detection
// Process root entries
for (const entry of objectEntries) {
await this.processEntryRecursive(entry, enabledTypes, processed, visited);
}
return processed;
}
/**
* Process a single entry recursively
*/
private async processEntryRecursive(
entry: ObjectEntry | ReferencedObject,
enabledTypes: Set<string>,
processed: Map<string, ProcessedObject>,
visited: Set<string>
): Promise<void> {
// Extract ID and key for cycle detection
const objectId = String(entry.id);
const objectKey = entry.objectKey;
// Check for cycles using a composite key, so ID and objectKey together identify one visit
const visitedKey = `${objectId}:${objectKey}`;
if (visited.has(visitedKey)) {
logger.debug(`PayloadProcessor: Cycle detected for ${objectKey} (${objectId}), skipping`);
return;
}
visited.add(visitedKey);
// Resolve type name from Jira type ID
const typeName = await this.resolveTypeName(entry.objectType.id);
const syncPolicy = this.getSyncPolicy(typeName, enabledTypes);
// Determine if we should cache attributes
// CRITICAL: Only replace attributes if attributes[] array is present
const shouldCacheAttributes = hasAttributes(entry);
// Store processed object (always update if already exists to ensure latest data)
// Convert ReferencedObject to ObjectEntry format for storage
const objectEntry: ObjectEntry = {
id: entry.id,
objectKey: entry.objectKey,
label: entry.label,
objectType: entry.objectType,
created: entry.created,
updated: entry.updated,
hasAvatar: entry.hasAvatar,
timestamp: entry.timestamp,
attributes: hasAttributes(entry) ? entry.attributes : undefined,
};
processed.set(objectId, {
objectEntry,
typeName,
syncPolicy,
shouldCacheAttributes,
});
logger.debug(`PayloadProcessor: Added object ${objectEntry.objectKey} (ID: ${objectId}, Jira type: ${entry.objectType?.name}, resolved type: ${typeName || 'null'}) to processed map. Total processed: ${processed.size}`);
// Process recursive references if attributes are present
if (hasAttributes(entry)) {
logger.debug(`PayloadProcessor: Processing ${entry.attributes!.length} attributes for recursive references in object ${objectEntry.objectKey} (ID: ${objectId})`);
await this.processRecursiveReferences(
entry.attributes!,
enabledTypes,
processed,
visited
);
} else {
logger.debug(`PayloadProcessor: Object ${objectEntry.objectKey} (ID: ${objectId}) has no attributes array, skipping recursive processing`);
}
// Remove from visited set when done (allows same object in different contexts)
visited.delete(visitedKey);
}
/**
* Process recursive references from attributes
*/
private async processRecursiveReferences(
attributes: ObjectAttribute[],
enabledTypes: Set<string>,
processed: Map<string, ProcessedObject>,
visited: Set<string>
): Promise<void> {
for (const attr of attributes) {
for (const value of attr.objectAttributeValues) {
if (isReferenceValue(value)) {
const refObj = value.referencedObject;
// Process referenced object recursively
// This handles level2, level3, etc. expansion
await this.processEntryRecursive(refObj, enabledTypes, processed, visited);
}
}
}
}
/**
* Resolve type name from Jira type ID
*/
private async resolveTypeName(jiraTypeId: number): Promise<string | null> {
const objectType = await this.schemaRepo.getObjectTypeByJiraId(jiraTypeId);
if (!objectType) {
// Track missing type IDs for diagnostics
logger.debug(`PayloadProcessor: Jira type ID ${jiraTypeId} not found in object_types table. This type needs to be discovered via schema sync.`);
return null;
}
return objectType.typeName || null;
}
/**
* Get sync policy for a type
*/
private getSyncPolicy(typeName: string | null, enabledTypes: Set<string>): SyncPolicy {
if (!typeName) {
return SyncPolicyEnum.SKIP; // Unknown type - skip
}
if (enabledTypes.has(typeName)) {
return SyncPolicyEnum.ENABLED;
}
// Reference-only: cache minimal metadata for references
return SyncPolicyEnum.REFERENCE_ONLY;
}
/**
* Normalize an object entry to database format
* This converts ObjectEntry to EAV format
*/
async normalizeObject(
processed: ProcessedObject
): Promise<{
objectRecord: {
id: string;
objectKey: string;
objectTypeName: string;
label: string;
jiraUpdatedAt: string;
jiraCreatedAt: string;
};
attributeValues: Array<{
attributeId: number;
textValue?: string | null;
numberValue?: number | null;
booleanValue?: boolean | null;
dateValue?: string | null;
datetimeValue?: string | null;
referenceObjectId?: string | null;
referenceObjectKey?: string | null;
referenceObjectLabel?: string | null;
arrayIndex: number;
}>;
relations: Array<{
targetId: string;
attributeId: number;
targetType: string;
}>;
}> {
const { objectEntry, typeName } = processed;
if (!typeName) {
throw new Error(`Cannot normalize object ${objectEntry.objectKey}: type name not resolved`);
}
// Get attributes for this type
const attributeDefs = await this.schemaRepo.getAttributesForType(typeName);
const attrMap = new Map(attributeDefs.map(a => [a.jiraAttrId, a]));
// Extract object record
const objectRecord = {
id: String(objectEntry.id),
objectKey: objectEntry.objectKey,
objectTypeName: typeName,
label: objectEntry.label,
jiraUpdatedAt: objectEntry.updated,
jiraCreatedAt: objectEntry.created,
};
// Normalize attributes
const attributeValues: Array<{
attributeId: number;
textValue?: string | null;
numberValue?: number | null;
booleanValue?: boolean | null;
dateValue?: string | null;
datetimeValue?: string | null;
referenceObjectId?: string | null;
referenceObjectKey?: string | null;
referenceObjectLabel?: string | null;
arrayIndex: number;
}> = [];
const relations: Array<{
targetId: string;
attributeId: number;
targetType: string;
}> = [];
// Process attributes if present
if (hasAttributes(objectEntry) && objectEntry.attributes) {
for (const attr of objectEntry.attributes) {
const attrDef = attrMap.get(attr.objectTypeAttributeId);
if (!attrDef) {
logger.warn(`PayloadProcessor: Unknown attribute ID ${attr.objectTypeAttributeId} for type ${typeName}`);
continue;
}
// Process attribute values
for (let arrayIndex = 0; arrayIndex < attr.objectAttributeValues.length; arrayIndex++) {
const value = attr.objectAttributeValues[arrayIndex];
// Normalize based on value type
const normalizedValue = this.normalizeAttributeValue(value, attrDef, objectRecord.id, relations);
attributeValues.push({
attributeId: attrDef.id,
...normalizedValue,
arrayIndex: attrDef.isMultiple ? arrayIndex : 0,
});
}
}
}
return {
objectRecord,
attributeValues,
relations,
};
}
/**
* Normalize a single attribute value
*/
private normalizeAttributeValue(
value: ObjectAttributeValue,
attrDef: AttributeRecord,
sourceObjectId: string,
relations: Array<{ targetId: string; attributeId: number; targetType: string }>
): {
textValue?: string | null;
numberValue?: number | null;
booleanValue?: boolean | null;
dateValue?: string | null;
datetimeValue?: string | null;
referenceObjectId?: string | null;
referenceObjectKey?: string | null;
referenceObjectLabel?: string | null;
} {
// Handle reference values
if (isReferenceValue(value)) {
const ref = value.referencedObject;
const refId = String(ref.id);
// Extract relation
// Note: targetType is re-resolved at store time from the cached target object
relations.push({
targetId: refId,
attributeId: attrDef.id,
targetType: ref.objectType.name, // Will be resolved to typeName during store
});
return {
referenceObjectId: refId,
referenceObjectKey: ref.objectKey,
referenceObjectLabel: ref.label,
};
}
// Handle simple values
if (isSimpleValue(value)) {
const val = value.value;
switch (attrDef.attrType) {
case 'text':
case 'textarea':
case 'url':
case 'email':
case 'select':
case 'user':
case 'status':
return { textValue: String(val) };
case 'integer':
return { numberValue: typeof val === 'number' ? val : parseInt(String(val), 10) };
case 'float':
return { numberValue: typeof val === 'number' ? val : parseFloat(String(val)) };
case 'boolean':
return { booleanValue: Boolean(val) };
case 'date':
return { dateValue: String(val) };
case 'datetime':
return { datetimeValue: String(val) };
default:
return { textValue: String(val) };
}
}
// Handle status values
if ('status' in value && value.status) {
return { textValue: value.status.name };
}
// Handle Confluence values
if ('confluencePage' in value && value.confluencePage) {
const confluenceVal = value as ConfluenceValue;
return { textValue: confluenceVal.confluencePage.url || confluenceVal.displayValue };
}
// Handle user values
if ('user' in value && value.user) {
return { textValue: value.user.displayName || value.user.name || value.displayValue };
}
// Fallback to displayValue
return { textValue: value.displayValue || null };
}
}
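
The visited-set guard in `processEntryRecursive` (add on entry, recurse, delete on exit) can be sketched in isolation. The `Node` shape and `collect` helper below are illustrative stand-ins, not part of the service:

```typescript
// Minimal sketch of the cycle guard used by processEntryRecursive:
// mark the node while it is on the current path, unmark when done.
interface Node {
  id: string;
  refs: Node[];
}

function collect(node: Node, visited = new Set<string>(), out: string[] = []): string[] {
  if (visited.has(node.id)) return out; // cycle: node already on the current path
  visited.add(node.id);
  out.push(node.id);
  for (const ref of node.refs) {
    collect(ref, visited, out);
  }
  // Deleting afterwards allows the same node to be reached via other paths,
  // mirroring visited.delete(visitedKey) in the service.
  visited.delete(node.id);
  return out;
}

// A two-node reference cycle terminates instead of recursing forever:
const a: Node = { id: 'A', refs: [] };
const b: Node = { id: 'B', refs: [a] };
a.refs.push(b);
const order = collect(a); // → ['A', 'B']
```

Deleting the key on exit (rather than keeping it for the whole traversal) is what lets the same referenced object appear under several parents while still breaking true cycles.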

View File

@@ -0,0 +1,240 @@
/**
* QueryService - Universal query builder (DB → TypeScript)
*
* Reconstructs TypeScript objects from normalized EAV database.
*/
import { logger } from './logger.js';
import { SchemaRepository } from '../repositories/SchemaRepository.js';
import { ObjectCacheRepository } from '../repositories/ObjectCacheRepository.js';
import type { CMDBObject, CMDBObjectTypeName } from '../generated/jira-types.js';
import type { AttributeRecord } from '../repositories/SchemaRepository.js';
export interface QueryOptions {
limit?: number;
offset?: number;
orderBy?: string;
orderDir?: 'ASC' | 'DESC';
searchTerm?: string;
}
export class QueryService {
constructor(
private schemaRepo: SchemaRepository,
private cacheRepo: ObjectCacheRepository
) {}
/**
* Get a single object by ID
*/
async getObject<T extends CMDBObject>(
typeName: CMDBObjectTypeName,
id: string
): Promise<T | null> {
// Get object record
const objRecord = await this.cacheRepo.getObject(id);
if (!objRecord || objRecord.objectTypeName !== typeName) {
return null;
}
// Reconstruct object from EAV data
return await this.reconstructObject<T>(objRecord);
}
/**
* Get objects of a type with filters
*/
async getObjects<T extends CMDBObject>(
typeName: CMDBObjectTypeName,
options: QueryOptions = {}
): Promise<T[]> {
const { limit = 1000, offset = 0 } = options;
logger.debug(`QueryService.getObjects: Querying for typeName="${typeName}" with limit=${limit}, offset=${offset}`);
// Get object records
const objRecords = await this.cacheRepo.getObjectsByType(typeName, { limit, offset });
logger.debug(`QueryService.getObjects: Found ${objRecords.length} object records for typeName="${typeName}"`);
// Check if no records found - might be a type name mismatch
if (objRecords.length === 0) {
// Diagnostic: Check what object_type_name values actually exist in the database
const db = this.cacheRepo.db;
try {
const allTypeNames = await db.query<{ object_type_name: string; count: number }>(
`SELECT object_type_name, COUNT(*) as count
FROM objects
GROUP BY object_type_name
ORDER BY count DESC
LIMIT 20`
);
logger.warn(`QueryService.getObjects: No objects found for typeName="${typeName}". Available object_type_name values in database:`, {
requestedType: typeName,
availableTypes: allTypeNames.map(t => ({ typeName: t.object_type_name, count: t.count })),
totalTypes: allTypeNames.length,
hint: 'The typeName might not match the object_type_name stored in the database. Check for case sensitivity or naming differences.',
});
} catch (error) {
logger.debug('QueryService.getObjects: Failed to query available type names', error);
}
}
// Reconstruct all objects
const objects = await Promise.all(
objRecords.map(record => this.reconstructObject<T>(record))
);
// Filter out nulls and type assert
const validObjects = objects.filter(obj => obj !== null && obj !== undefined);
logger.debug(`QueryService.getObjects: Successfully reconstructed ${validObjects.length}/${objRecords.length} objects for typeName="${typeName}"`);
return validObjects as T[];
}
/**
* Count objects of a type
*/
async countObjects(typeName: CMDBObjectTypeName): Promise<number> {
return await this.cacheRepo.countObjectsByType(typeName);
}
/**
* Search objects by label
*/
async searchByLabel<T extends CMDBObject>(
typeName: CMDBObjectTypeName,
searchTerm: string,
options: QueryOptions = {}
): Promise<T[]> {
const { limit = 100, offset = 0 } = options;
// Get object records with label filter
const objRecords = await this.cacheRepo.db.query<{
id: string;
objectKey: string;
objectTypeName: string;
label: string;
jiraUpdatedAt: string | null;
jiraCreatedAt: string | null;
cachedAt: string;
}>(
`SELECT id, object_key as objectKey, object_type_name as objectTypeName, label,
jira_updated_at as jiraUpdatedAt, jira_created_at as jiraCreatedAt, cached_at as cachedAt
FROM objects
WHERE object_type_name = ? AND LOWER(label) LIKE LOWER(?)
ORDER BY label ASC
LIMIT ? OFFSET ?`,
[typeName, `%${searchTerm}%`, limit, offset]
);
// Reconstruct objects
const objects = await Promise.all(
objRecords.map(record => this.reconstructObject<T>(record))
);
// Filter out nulls and type assert
const validObjects = objects.filter(obj => obj !== null && obj !== undefined);
return validObjects as T[];
}
/**
* Reconstruct a TypeScript object from database records
*/
private async reconstructObject<T extends CMDBObject>(
objRecord: {
id: string;
objectKey: string;
objectTypeName: string;
label: string;
jiraUpdatedAt: string | null;
jiraCreatedAt: string | null;
}
): Promise<T | null> {
// Get attribute definitions for this type
const attributeDefs = await this.schemaRepo.getAttributesForType(objRecord.objectTypeName);
const attrMap = new Map(attributeDefs.map(a => [a.id, a]));
// Get attribute values
const attributeValues = await this.cacheRepo.getAttributeValues(objRecord.id);
// Build attribute map: fieldName -> value(s)
const attributes: Record<string, unknown> = {};
for (const attrValue of attributeValues) {
const attrDef = attrMap.get(attrValue.attributeId);
if (!attrDef) {
logger.warn(`QueryService: Unknown attribute ID ${attrValue.attributeId} for object ${objRecord.id}`);
continue;
}
// Extract value based on type
let value: unknown = null;
switch (attrDef.attrType) {
case 'reference':
if (attrValue.referenceObjectId) {
value = {
objectId: attrValue.referenceObjectId,
objectKey: attrValue.referenceObjectKey || '',
label: attrValue.referenceObjectLabel || '',
};
}
break;
case 'text':
case 'textarea':
case 'url':
case 'email':
case 'select':
case 'user':
case 'status':
value = attrValue.textValue;
break;
case 'integer':
case 'float':
value = attrValue.numberValue;
break;
case 'boolean':
value = attrValue.booleanValue;
break;
case 'date':
value = attrValue.dateValue;
break;
case 'datetime':
value = attrValue.datetimeValue;
break;
default:
value = attrValue.textValue;
}
// Handle arrays vs single values
if (attrDef.isMultiple) {
if (!attributes[attrDef.fieldName]) {
attributes[attrDef.fieldName] = [];
}
(attributes[attrDef.fieldName] as unknown[]).push(value);
} else {
attributes[attrDef.fieldName] = value;
}
}
// Build CMDBObject
const result: Record<string, unknown> = {
id: objRecord.id,
objectKey: objRecord.objectKey,
label: objRecord.label,
_objectType: objRecord.objectTypeName,
_jiraUpdatedAt: objRecord.jiraUpdatedAt || new Date().toISOString(),
_jiraCreatedAt: objRecord.jiraCreatedAt || new Date().toISOString(),
...attributes,
};
return result as T;
}
}
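
The core of `reconstructObject` is a fold from EAV rows back into one flat object, with multi-valued attributes accumulating into arrays. A simplified sketch (the row shape here is illustrative; the real service reads typed value columns per `attrType`):

```typescript
// Simplified sketch of the EAV fold performed by reconstructObject.
interface EavRow {
  fieldName: string;
  value: unknown;
  isMultiple: boolean;
}

function foldRows(rows: EavRow[]): Record<string, unknown> {
  const obj: Record<string, unknown> = {};
  for (const row of rows) {
    if (row.isMultiple) {
      // Multi-valued attributes accumulate into an array, as in the service.
      const arr = (obj[row.fieldName] as unknown[]) ?? [];
      arr.push(row.value);
      obj[row.fieldName] = arr;
    } else {
      obj[row.fieldName] = row.value;
    }
  }
  return obj;
}

const folded = foldRows([
  { fieldName: 'hostname', value: 'web-01', isMultiple: false },
  { fieldName: 'tags', value: 'prod', isMultiple: true },
  { fieldName: 'tags', value: 'web', isMultiple: true },
]);
// folded = { hostname: 'web-01', tags: ['prod', 'web'] }
```

Because each row carries an `arrayIndex` at write time, rows come back in insertion order and the array push above preserves the original value ordering.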

View File

@@ -0,0 +1,75 @@
/**
* RefreshService - Handles force-refresh-on-read with deduping/locks
*
* Prevents duplicate refresh operations for the same object.
*/
import { logger } from './logger.js';
import { jiraAssetsClient } from '../infrastructure/jira/JiraAssetsClient.js';
import { ObjectSyncService } from './ObjectSyncService.js';
import { SchemaRepository } from '../repositories/SchemaRepository.js';
export class RefreshService {
private refreshLocks: Map<string, Promise<void>> = new Map();
private readonly LOCK_TIMEOUT_MS = 30000; // 30 seconds
constructor(private syncService: ObjectSyncService) {}
/**
* Refresh a single object with deduplication
* If another refresh is already in progress, wait for it
*/
async refreshObject(
objectId: string,
enabledTypes: Set<string>
): Promise<{ success: boolean; error?: string }> {
// Check if refresh already in progress
const existingLock = this.refreshLocks.get(objectId);
if (existingLock) {
logger.debug(`RefreshService: Refresh already in progress for ${objectId}, waiting...`);
try {
await existingLock;
return { success: true }; // Previous refresh succeeded
} catch (error) {
logger.warn(`RefreshService: Previous refresh failed for ${objectId}, retrying...`, error);
// Continue to new refresh
}
}
// Create new refresh promise
const refreshPromise = this.doRefresh(objectId, enabledTypes);
this.refreshLocks.set(objectId, refreshPromise);
try {
// Add timeout to prevent locks from hanging forever
const timeoutPromise = new Promise<void>((_, reject) => {
setTimeout(() => reject(new Error('Refresh timeout')), this.LOCK_TIMEOUT_MS);
});
await Promise.race([refreshPromise, timeoutPromise]);
return { success: true };
} catch (error) {
logger.error(`RefreshService: Failed to refresh object ${objectId}`, error);
return {
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
};
} finally {
// Clean up lock after a delay (allow concurrent reads)
setTimeout(() => {
this.refreshLocks.delete(objectId);
}, 1000);
}
}
/**
* Perform the actual refresh
*/
private async doRefresh(objectId: string, enabledTypes: Set<string>): Promise<void> {
const result = await this.syncService.syncSingleObject(objectId, enabledTypes);
if (!result.cached) {
throw new Error(result.error || 'Failed to cache object');
}
}
}
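
The deduplication pattern in `refreshObject` — one shared in-flight promise per key, released once settled — reduces to a small helper. The names below (`fetchOnce`, the simulated work) are illustrative, not from the service:

```typescript
// Minimal sketch of promise deduplication as used by RefreshService:
// concurrent callers for the same key await one shared in-flight promise.
const inflight = new Map<string, Promise<number>>();
let calls = 0;

async function fetchOnce(key: string): Promise<number> {
  const existing = inflight.get(key);
  if (existing) return existing; // join the operation already in progress

  const p = (async () => {
    calls++; // stand-in for the real refresh work
    await new Promise(resolve => setTimeout(resolve, 10));
    return calls;
  })();
  inflight.set(key, p);
  try {
    return await p;
  } finally {
    inflight.delete(key); // release the lock once the promise settles
  }
}

// Two concurrent callers trigger only one underlying operation:
const results = await Promise.all([fetchOnce('obj-1'), fetchOnce('obj-1')]);
// calls === 1; both entries of results are equal
```

The service additionally races the shared promise against a 30 s timeout and delays lock cleanup by one second, so a burst of reads landing just after completion still reuses the finished result.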

View File

@@ -0,0 +1,817 @@
/**
* Schema Sync Service
*
* Unified service for synchronizing Jira Assets schema configuration to local database.
* Implements the complete sync flow as specified in the refactor plan.
*/
import { logger } from './logger.js';
import { getDatabaseAdapter } from './database/singleton.js';
import type { DatabaseAdapter } from './database/interface.js';
import { config } from '../config/env.js';
import { toCamelCase, toPascalCase, mapJiraType, determineSyncPriority } from './schemaUtils.js';
// =============================================================================
// Types
// =============================================================================
interface JiraSchema {
id: number;
name: string;
objectSchemaKey?: string;
status?: string;
description?: string;
created?: string;
updated?: string;
objectCount?: number;
objectTypeCount?: number;
}
interface JiraObjectType {
id: number;
name: string;
type?: number;
description?: string;
icon?: {
id: number;
name: string;
url16?: string;
url48?: string;
};
position?: number;
created?: string;
updated?: string;
objectCount?: number;
parentObjectTypeId?: number | null;
objectSchemaId: number;
inherited?: boolean;
abstractObjectType?: boolean;
}
interface JiraAttribute {
id: number;
objectType?: {
id: number;
name: string;
};
name: string;
label?: boolean;
type: number;
description?: string;
defaultType?: {
id: number;
name: string;
} | null;
typeValue?: string | null;
typeValueMulti?: string[];
additionalValue?: string | null;
referenceType?: {
id: number;
name: string;
description?: string;
color?: string;
url16?: string | null;
removable?: boolean;
objectSchemaId?: number;
} | null;
referenceObjectTypeId?: number | null;
referenceObjectType?: {
id: number;
name: string;
objectSchemaId?: number;
} | null;
editable?: boolean;
system?: boolean;
sortable?: boolean;
summable?: boolean;
indexed?: boolean;
minimumCardinality?: number;
maximumCardinality?: number;
suffix?: string;
removable?: boolean;
hidden?: boolean;
includeChildObjectTypes?: boolean;
uniqueAttribute?: boolean;
regexValidation?: string | null;
iql?: string | null;
options?: string;
position?: number;
}
export interface SyncResult {
success: boolean;
schemasProcessed: number;
objectTypesProcessed: number;
attributesProcessed: number;
schemasDeleted: number;
objectTypesDeleted: number;
attributesDeleted: number;
errors: SyncError[];
duration: number; // milliseconds
}
export interface SyncError {
type: 'schema' | 'objectType' | 'attribute';
id: string | number;
message: string;
}
export interface SyncProgress {
status: 'idle' | 'running' | 'completed' | 'failed';
currentSchema?: string;
currentObjectType?: string;
schemasTotal: number;
schemasCompleted: number;
objectTypesTotal: number;
objectTypesCompleted: number;
startedAt?: Date;
estimatedCompletion?: Date;
}
// =============================================================================
// SchemaSyncService Implementation
// =============================================================================
export class SchemaSyncService {
private db: DatabaseAdapter;
private isPostgres: boolean;
private baseUrl: string;
private progress: SyncProgress = {
status: 'idle',
schemasTotal: 0,
schemasCompleted: 0,
objectTypesTotal: 0,
objectTypesCompleted: 0,
};
// Rate limiting configuration
private readonly RATE_LIMIT_DELAY_MS = 150; // 150ms between requests
private readonly MAX_RETRIES = 3;
private readonly RETRY_DELAY_MS = 1000;
constructor() {
this.db = getDatabaseAdapter();
this.isPostgres = (this.db.isPostgres === true);
this.baseUrl = `${config.jiraHost}/rest/assets/1.0`;
}
/**
* Get authentication headers for API requests
*/
private getHeaders(): Record<string, string> {
const token = config.jiraServiceAccountToken;
if (!token) {
throw new Error('JIRA_SERVICE_ACCOUNT_TOKEN not configured. Schema sync requires a service account token.');
}
return {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json',
'Accept': 'application/json',
};
}
/**
* Rate limiting delay
*/
private delay(ms: number): Promise<void> {
return new Promise(resolve => setTimeout(resolve, ms));
}
/**
* Fetch with rate limiting and retry logic
*/
private async fetchWithRateLimit<T>(
url: string,
retries: number = this.MAX_RETRIES
): Promise<T> {
await this.delay(this.RATE_LIMIT_DELAY_MS);
try {
const response = await fetch(url, {
headers: this.getHeaders(),
});
// Handle rate limiting (429)
if (response.status === 429) {
const retryAfter = parseInt(response.headers.get('Retry-After') || '5', 10);
logger.warn(`SchemaSync: Rate limited, waiting ${retryAfter}s before retry`);
await this.delay(retryAfter * 1000);
return this.fetchWithRateLimit<T>(url, retries); // 429 waits per Retry-After and does not consume a retry attempt
}
// Handle server errors with retry
if (response.status >= 500 && retries > 0) {
logger.warn(`SchemaSync: Server error ${response.status}, retrying (${retries} attempts left)`);
await this.delay(this.RETRY_DELAY_MS);
return this.fetchWithRateLimit<T>(url, retries - 1);
}
if (!response.ok) {
const text = await response.text();
throw new Error(`HTTP ${response.status}: ${text}`);
}
return await response.json() as T;
} catch (error) {
if (retries > 0 && error instanceof Error && !error.message.includes('HTTP')) {
logger.warn(`SchemaSync: Network error, retrying (${retries} attempts left)`, error);
await this.delay(this.RETRY_DELAY_MS);
return this.fetchWithRateLimit<T>(url, retries - 1);
}
throw error;
}
}
/**
* Fetch all schemas from Jira
*/
private async fetchSchemas(): Promise<JiraSchema[]> {
const url = `${this.baseUrl}/objectschema/list`;
logger.debug(`SchemaSync: Fetching schemas from ${url}`);
const result = await this.fetchWithRateLimit<{ objectschemas?: JiraSchema[] } | JiraSchema[]>(url);
// Handle different response formats
if (Array.isArray(result)) {
return result;
} else if (result && typeof result === 'object' && 'objectschemas' in result) {
return result.objectschemas || [];
}
logger.warn('SchemaSync: Unexpected schema list response format', result);
return [];
}
/**
* Fetch schema details
*/
private async fetchSchemaDetails(schemaId: number): Promise<JiraSchema> {
const url = `${this.baseUrl}/objectschema/${schemaId}`;
logger.debug(`SchemaSync: Fetching schema details for ${schemaId}`);
return await this.fetchWithRateLimit<JiraSchema>(url);
}
/**
* Fetch all object types for a schema (flat list)
*/
private async fetchObjectTypes(schemaId: number): Promise<JiraObjectType[]> {
const url = `${this.baseUrl}/objectschema/${schemaId}/objecttypes/flat`;
logger.debug(`SchemaSync: Fetching object types for schema ${schemaId}`);
try {
const result = await this.fetchWithRateLimit<JiraObjectType[]>(url);
return Array.isArray(result) ? result : [];
} catch (error) {
// Fallback to regular endpoint if flat endpoint fails
logger.warn(`SchemaSync: Flat endpoint failed, trying regular endpoint`, error);
const fallbackUrl = `${this.baseUrl}/objectschema/${schemaId}/objecttypes`;
const fallbackResult = await this.fetchWithRateLimit<{ objectTypes?: JiraObjectType[] } | JiraObjectType[]>(fallbackUrl);
if (Array.isArray(fallbackResult)) {
return fallbackResult;
} else if (fallbackResult && typeof fallbackResult === 'object' && 'objectTypes' in fallbackResult) {
return fallbackResult.objectTypes || [];
}
return [];
}
}
/**
* Fetch object type details
*/
private async fetchObjectTypeDetails(typeId: number): Promise<JiraObjectType> {
const url = `${this.baseUrl}/objecttype/${typeId}`;
logger.debug(`SchemaSync: Fetching object type details for ${typeId}`);
return await this.fetchWithRateLimit<JiraObjectType>(url);
}
/**
* Fetch attributes for an object type
*/
private async fetchAttributes(typeId: number): Promise<JiraAttribute[]> {
const url = `${this.baseUrl}/objecttype/${typeId}/attributes`;
logger.debug(`SchemaSync: Fetching attributes for object type ${typeId}`);
try {
const result = await this.fetchWithRateLimit<JiraAttribute[]>(url);
return Array.isArray(result) ? result : [];
} catch (error) {
logger.warn(`SchemaSync: Failed to fetch attributes for type ${typeId}`, error);
return [];
}
}
/**
* Parse Jira attribute to database format
*/
private parseAttribute(
attr: JiraAttribute,
allTypeConfigs: Map<number, { name: string; typeName: string }>
): {
jiraId: number;
name: string;
fieldName: string;
type: string;
isMultiple: boolean;
isEditable: boolean;
isRequired: boolean;
isSystem: boolean;
referenceTypeName?: string;
description?: string;
// Additional fields from plan
label?: boolean;
sortable?: boolean;
summable?: boolean;
indexed?: boolean;
suffix?: string;
removable?: boolean;
hidden?: boolean;
includeChildObjectTypes?: boolean;
uniqueAttribute?: boolean;
regexValidation?: string | null;
iql?: string | null;
options?: string;
position?: number;
} {
const typeId = attr.type || attr.defaultType?.id || 0;
let type = mapJiraType(typeId);
const isMultiple = (attr.maximumCardinality ?? 1) > 1 || attr.maximumCardinality === -1;
const isEditable = attr.editable !== false && !attr.hidden;
const isRequired = (attr.minimumCardinality ?? 0) > 0;
const isSystem = attr.system === true;
// CRITICAL: Jira sometimes returns type=1 (integer) for reference attributes!
// The presence of referenceObjectTypeId is the true indicator of a reference type.
const refTypeId = attr.referenceObjectTypeId || attr.referenceObjectType?.id || attr.referenceType?.id;
if (refTypeId) {
type = 'reference';
}
const result: ReturnType<typeof this.parseAttribute> = {
jiraId: attr.id,
name: attr.name,
fieldName: toCamelCase(attr.name),
type,
isMultiple,
isEditable,
isRequired,
isSystem,
description: attr.description,
label: attr.label,
sortable: attr.sortable,
summable: attr.summable,
indexed: attr.indexed,
suffix: attr.suffix,
removable: attr.removable,
hidden: attr.hidden,
includeChildObjectTypes: attr.includeChildObjectTypes,
uniqueAttribute: attr.uniqueAttribute,
regexValidation: attr.regexValidation,
iql: attr.iql,
options: attr.options,
position: attr.position,
};
// Handle reference types - add reference metadata
if (type === 'reference' && refTypeId) {
const refConfig = allTypeConfigs.get(refTypeId);
result.referenceTypeName = refConfig?.typeName ||
attr.referenceObjectType?.name ||
attr.referenceType?.name ||
`Type${refTypeId}`;
}
return result;
}
/**
* Sync all schemas and their complete structure
*/
async syncAll(): Promise<SyncResult> {
const startTime = Date.now();
const errors: SyncError[] = [];
this.progress = {
status: 'running',
schemasTotal: 0,
schemasCompleted: 0,
objectTypesTotal: 0,
objectTypesCompleted: 0,
startedAt: new Date(),
};
try {
logger.info('SchemaSync: Starting full schema synchronization...');
// Step 1: Fetch all schemas
const schemas = await this.fetchSchemas();
this.progress.schemasTotal = schemas.length;
logger.info(`SchemaSync: Found ${schemas.length} schemas to sync`);
if (schemas.length === 0) {
throw new Error('No schemas found in Jira Assets');
}
// Track Jira IDs for cleanup
const jiraSchemaIds = new Set<string>();
const jiraObjectTypeIds = new Map<string, Set<number>>(); // schemaId -> Set<typeId>
const jiraAttributeIds = new Map<string, Set<number>>(); // typeName -> Set<attrId>
let schemasProcessed = 0;
let objectTypesProcessed = 0;
let attributesProcessed = 0;
let schemasDeleted = 0;
let objectTypesDeleted = 0;
let attributesDeleted = 0;
await this.db.transaction(async (txDb) => {
// Step 2: Process each schema
for (const schema of schemas) {
try {
this.progress.currentSchema = schema.name;
const schemaIdStr = schema.id.toString();
jiraSchemaIds.add(schemaIdStr);
// Fetch schema details
let schemaDetails: JiraSchema;
try {
schemaDetails = await this.fetchSchemaDetails(schema.id);
} catch (error) {
logger.warn(`SchemaSync: Failed to fetch details for schema ${schema.id}, using list data`, error);
schemaDetails = schema;
}
const now = new Date().toISOString();
const objectSchemaKey = schemaDetails.objectSchemaKey || schemaDetails.name || schemaIdStr;
// Upsert schema (identical ON CONFLICT upsert syntax works on both PostgreSQL and SQLite)
await txDb.execute(`
INSERT INTO schemas (jira_schema_id, name, object_schema_key, status, description, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(jira_schema_id) DO UPDATE SET
name = excluded.name,
object_schema_key = excluded.object_schema_key,
status = excluded.status,
description = excluded.description,
updated_at = excluded.updated_at
`, [
schemaIdStr,
schemaDetails.name,
objectSchemaKey,
schemaDetails.status || null,
schemaDetails.description || null,
now,
now,
]);
// Get schema FK
const schemaRow = await txDb.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[schemaIdStr]
);
if (!schemaRow) {
throw new Error(`Failed to get schema FK for ${schemaIdStr}`);
}
const schemaIdFk = schemaRow.id;
// Step 3: Fetch all object types for this schema
const objectTypes = await this.fetchObjectTypes(schema.id);
logger.info(`SchemaSync: Found ${objectTypes.length} object types in schema ${schema.name}`);
const typeConfigs = new Map<number, { name: string; typeName: string }>();
jiraObjectTypeIds.set(schemaIdStr, new Set());
// Build type name mapping
for (const objType of objectTypes) {
const typeName = toPascalCase(objType.name);
typeConfigs.set(objType.id, {
name: objType.name,
typeName,
});
jiraObjectTypeIds.get(schemaIdStr)!.add(objType.id);
}
// Step 4: Store object types
for (const objType of objectTypes) {
try {
this.progress.currentObjectType = objType.name;
const typeName = toPascalCase(objType.name);
const objectCount = objType.objectCount || 0;
const syncPriority = determineSyncPriority(objType.name, objectCount);
// Upsert object type (identical SQL; SQLite stores booleans as 0/1)
await txDb.execute(`
INSERT INTO object_types (
schema_id, jira_type_id, type_name, display_name, description,
sync_priority, object_count, enabled, discovered_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(schema_id, jira_type_id) DO UPDATE SET
display_name = excluded.display_name,
description = excluded.description,
sync_priority = excluded.sync_priority,
object_count = excluded.object_count,
updated_at = excluded.updated_at
`, [
schemaIdFk,
objType.id,
typeName,
objType.name,
objType.description || null,
syncPriority,
objectCount,
txDb.isPostgres ? false : 0, // Default: disabled
now,
now,
]);
objectTypesProcessed++;
// Step 5: Fetch and store attributes
const attributes = await this.fetchAttributes(objType.id);
logger.info(`SchemaSync: Fetched ${attributes.length} attributes for ${objType.name} (type ${objType.id})`);
if (!jiraAttributeIds.has(typeName)) {
jiraAttributeIds.set(typeName, new Set());
}
if (attributes.length === 0) {
logger.warn(`SchemaSync: No attributes found for ${objType.name} (type ${objType.id})`);
}
for (const jiraAttr of attributes) {
try {
const attrDef = this.parseAttribute(jiraAttr, typeConfigs);
jiraAttributeIds.get(typeName)!.add(attrDef.jiraId);
// Upsert attribute (identical SQL; booleans are stored as 0/1 on SQLite)
const toDbBool = (b: boolean) => (txDb.isPostgres ? b : b ? 1 : 0);
await txDb.execute(`
INSERT INTO attributes (
jira_attr_id, object_type_name, attr_name, field_name, attr_type,
is_multiple, is_editable, is_required, is_system,
reference_type_name, description, position, discovered_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(jira_attr_id, object_type_name) DO UPDATE SET
attr_name = excluded.attr_name,
field_name = excluded.field_name,
attr_type = excluded.attr_type,
is_multiple = excluded.is_multiple,
is_editable = excluded.is_editable,
is_required = excluded.is_required,
is_system = excluded.is_system,
reference_type_name = excluded.reference_type_name,
description = excluded.description,
position = excluded.position
`, [
attrDef.jiraId,
typeName,
attrDef.name,
attrDef.fieldName,
attrDef.type,
toDbBool(attrDef.isMultiple),
toDbBool(attrDef.isEditable),
toDbBool(attrDef.isRequired),
toDbBool(attrDef.isSystem),
attrDef.referenceTypeName || null,
attrDef.description || null,
attrDef.position ?? 0,
now,
]);
attributesProcessed++;
} catch (error) {
logger.error(`SchemaSync: Failed to process attribute ${jiraAttr.id} (${jiraAttr.name}) for ${objType.name}`, error);
if (error instanceof Error) {
logger.error(`SchemaSync: Attribute error details: ${error.message}`, error.stack);
}
errors.push({
type: 'attribute',
id: jiraAttr.id,
message: error instanceof Error ? error.message : String(error),
});
}
}
logger.info(`SchemaSync: Processed ${attributesProcessed} attributes for ${objType.name} (type ${objType.id})`);
this.progress.objectTypesCompleted++;
} catch (error) {
logger.warn(`SchemaSync: Failed to process object type ${objType.id}`, error);
errors.push({
type: 'objectType',
id: objType.id,
message: error instanceof Error ? error.message : String(error),
});
}
}
this.progress.schemasCompleted++;
schemasProcessed++;
} catch (error) {
logger.error(`SchemaSync: Failed to process schema ${schema.id}`, error);
errors.push({
type: 'schema',
id: schema.id.toString(),
message: error instanceof Error ? error.message : String(error),
});
}
}
// Step 6: Clean up orphaned records (hard delete)
logger.info('SchemaSync: Cleaning up orphaned records...');
// Delete orphaned schemas
const allLocalSchemas = await txDb.query<{ jira_schema_id: string }>(
`SELECT jira_schema_id FROM schemas`
);
for (const localSchema of allLocalSchemas) {
if (!jiraSchemaIds.has(localSchema.jira_schema_id)) {
logger.info(`SchemaSync: Deleting orphaned schema ${localSchema.jira_schema_id}`);
await txDb.execute(`DELETE FROM schemas WHERE jira_schema_id = ?`, [localSchema.jira_schema_id]);
schemasDeleted++;
}
}
// Delete orphaned object types
// First, get all object types from all remaining schemas
const allLocalObjectTypes = await txDb.query<{ schema_id: number; jira_type_id: number; jira_schema_id: string }>(
`SELECT ot.schema_id, ot.jira_type_id, s.jira_schema_id
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id`
);
for (const localType of allLocalObjectTypes) {
const schemaIdStr = localType.jira_schema_id;
const typeIds = jiraObjectTypeIds.get(schemaIdStr);
// If schema doesn't exist in Jira anymore, or type doesn't exist in schema
if (!jiraSchemaIds.has(schemaIdStr) || (typeIds && !typeIds.has(localType.jira_type_id))) {
logger.info(`SchemaSync: Deleting orphaned object type ${localType.jira_type_id} from schema ${schemaIdStr}`);
await txDb.execute(
`DELETE FROM object_types WHERE schema_id = ? AND jira_type_id = ?`,
[localType.schema_id, localType.jira_type_id]
);
objectTypesDeleted++;
}
}
// Delete orphaned attributes
// Get all attributes and check against synced types
const allLocalAttributes = await txDb.query<{ object_type_name: string; jira_attr_id: number }>(
`SELECT object_type_name, jira_attr_id FROM attributes`
);
for (const localAttr of allLocalAttributes) {
const attrIds = jiraAttributeIds.get(localAttr.object_type_name);
// If type wasn't synced or attribute doesn't exist in type
if (!attrIds || !attrIds.has(localAttr.jira_attr_id)) {
logger.info(`SchemaSync: Deleting orphaned attribute ${localAttr.jira_attr_id} from type ${localAttr.object_type_name}`);
await txDb.execute(
`DELETE FROM attributes WHERE object_type_name = ? AND jira_attr_id = ?`,
[localAttr.object_type_name, localAttr.jira_attr_id]
);
attributesDeleted++;
}
}
logger.info(`SchemaSync: Cleanup complete - ${schemasDeleted} schemas, ${objectTypesDeleted} object types, ${attributesDeleted} attributes deleted`);
});
const duration = Date.now() - startTime;
this.progress.status = 'completed';
logger.info(`SchemaSync: Synchronization complete in ${duration}ms - ${schemasProcessed} schemas, ${objectTypesProcessed} object types, ${attributesProcessed} attributes, ${schemasDeleted} deleted schemas, ${objectTypesDeleted} deleted types, ${attributesDeleted} deleted attributes`);
if (attributesProcessed === 0) {
logger.warn(`SchemaSync: WARNING - No attributes were saved! Check logs for errors.`);
}
if (errors.length > 0) {
logger.warn(`SchemaSync: Sync completed with ${errors.length} errors:`, errors);
}
return {
success: errors.length === 0,
schemasProcessed,
objectTypesProcessed,
attributesProcessed,
schemasDeleted,
objectTypesDeleted,
attributesDeleted,
errors,
duration,
};
} catch (error) {
this.progress.status = 'failed';
logger.error('SchemaSync: Synchronization failed', error);
throw error;
}
}
/**
* Sync a single schema by ID
*/
async syncSchema(schemaId: number): Promise<SyncResult> {
// For single schema sync, we can reuse syncAll logic but filter
// For now, just call syncAll (it's idempotent)
logger.info(`SchemaSync: Syncing single schema ${schemaId}`);
return this.syncAll();
}
/**
* Get sync status/progress
*/
getProgress(): SyncProgress {
return { ...this.progress };
}
}
// Export singleton instance
export const schemaSyncService = new SchemaSyncService();

View File

@@ -0,0 +1,68 @@
/**
* ServiceFactory - Creates and initializes all services
*
* Single entry point for service initialization and dependency injection.
*/
import { getDatabaseAdapter } from './database/singleton.js';
import { ensureSchemaInitialized } from './database/normalized-schema-init.js';
import { SchemaRepository } from '../repositories/SchemaRepository.js';
import { ObjectCacheRepository } from '../repositories/ObjectCacheRepository.js';
import { SchemaSyncService } from './SchemaSyncService.js';
import { ObjectSyncService } from './ObjectSyncService.js';
import { PayloadProcessor } from './PayloadProcessor.js';
import { QueryService } from './QueryService.js';
import { RefreshService } from './RefreshService.js';
import { WriteThroughService } from './WriteThroughService.js';
import { logger } from './logger.js';
/**
* All services container
*/
export class ServiceFactory {
public readonly schemaRepo: SchemaRepository;
public readonly cacheRepo: ObjectCacheRepository;
public readonly schemaSyncService: SchemaSyncService;
public readonly objectSyncService: ObjectSyncService;
public readonly payloadProcessor: PayloadProcessor;
public readonly queryService: QueryService;
public readonly refreshService: RefreshService;
public readonly writeThroughService: WriteThroughService;
private static instance: ServiceFactory | null = null;
private constructor() {
// Use shared database adapter singleton
const db = getDatabaseAdapter();
// Initialize repositories
this.schemaRepo = new SchemaRepository(db);
this.cacheRepo = new ObjectCacheRepository(db);
// Initialize services
this.schemaSyncService = new SchemaSyncService();
this.objectSyncService = new ObjectSyncService(this.schemaRepo, this.cacheRepo);
this.payloadProcessor = new PayloadProcessor(this.schemaRepo, this.cacheRepo);
this.queryService = new QueryService(this.schemaRepo, this.cacheRepo);
this.refreshService = new RefreshService(this.objectSyncService);
this.writeThroughService = new WriteThroughService(this.objectSyncService, this.schemaRepo);
// Ensure schema is initialized (async, but don't block)
ensureSchemaInitialized().catch(error => {
logger.error('ServiceFactory: Failed to initialize database schema', error);
});
}
/**
* Get singleton instance
*/
static getInstance(): ServiceFactory {
if (!ServiceFactory.instance) {
ServiceFactory.instance = new ServiceFactory();
}
return ServiceFactory.instance;
}
}
// Export singleton instance getter
export const getServices = () => ServiceFactory.getInstance();
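ServiceFactory above follows the lazy-singleton pattern: the instance is created on first access and every later call returns the same object. A minimal standalone sketch of that pattern (the `Registry` name here is illustrative, not from the codebase):

```typescript
// Lazy singleton: constructed once, on first getInstance() call.
class Registry {
  private static instance: Registry | null = null;
  readonly createdAt = Date.now();

  // Private constructor prevents `new Registry()` outside the class
  private constructor() {}

  static getInstance(): Registry {
    if (!Registry.instance) {
      Registry.instance = new Registry();
    }
    return Registry.instance;
  }
}

const getRegistry = () => Registry.getInstance();
```

Because construction is deferred to the first call, module import order stays cheap and side-effect free, which is why `getServices()` is exported as a getter rather than an eagerly constructed instance.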

View File

@@ -0,0 +1,153 @@
/**
* WriteThroughService - Write-through updates to Jira and DB
*
* Writes to Jira Assets API, then immediately updates DB cache.
*/
import { logger } from './logger.js';
import { jiraAssetsClient } from '../infrastructure/jira/JiraAssetsClient.js';
import { ObjectSyncService } from './ObjectSyncService.js';
import { SchemaRepository } from '../repositories/SchemaRepository.js';
import type { CMDBObject, CMDBObjectTypeName } from '../generated/jira-types.js';
export interface UpdateResult {
success: boolean;
data?: CMDBObject;
error?: string;
}
export class WriteThroughService {
constructor(
private syncService: ObjectSyncService,
private schemaRepo: SchemaRepository
) {}
/**
* Update an object (write-through)
*
* 1. Build Jira update payload from field updates
* 2. Send update to Jira Assets API
* 3. Fetch fresh data from Jira
* 4. Update DB cache using same normalization logic
*/
async updateObject(
typeName: CMDBObjectTypeName,
objectId: string,
updates: Record<string, unknown>
): Promise<UpdateResult> {
try {
// Get attribute definitions for this type
const attributeDefs = await this.schemaRepo.getAttributesForType(typeName);
const attrMapByName = new Map(attributeDefs.map(a => [a.fieldName, a]));
// Build Jira update payload
const payload = {
attributes: [] as Array<{
objectTypeAttributeId: number;
objectAttributeValues: Array<{ value?: string }>;
}>,
};
for (const [fieldName, value] of Object.entries(updates)) {
const attrDef = attrMapByName.get(fieldName);
if (!attrDef) {
logger.warn(`WriteThroughService: Unknown field ${fieldName} for type ${typeName}`);
continue;
}
if (!attrDef.isEditable) {
logger.warn(`WriteThroughService: Field ${fieldName} is not editable`);
continue;
}
// Build attribute values based on type
const attrValues = this.buildAttributeValues(value, attrDef);
if (attrValues.length > 0 || value === null || value === undefined) {
// Include attribute even if clearing (empty array)
payload.attributes.push({
objectTypeAttributeId: attrDef.jiraAttrId,
objectAttributeValues: attrValues,
});
}
}
if (payload.attributes.length === 0) {
return { success: true }; // No attributes to update
}
// Send update to Jira
await jiraAssetsClient.updateObject(objectId, payload);
// Fetch fresh data from Jira
const entry = await jiraAssetsClient.getObject(objectId);
if (!entry) {
return {
success: false,
error: 'Object not found in Jira after update',
};
}
// Get enabled types for sync policy
const enabledTypes = await this.schemaRepo.getEnabledObjectTypes();
const enabledTypeSet = new Set(enabledTypes.map(t => t.typeName));
// Update DB cache using sync service
const syncResult = await this.syncService.syncSingleObject(objectId, enabledTypeSet);
if (!syncResult.cached) {
logger.warn(`WriteThroughService: Failed to update cache after Jira update: ${syncResult.error}`);
// Still return success if Jira update succeeded
}
// Fetch updated object from DB
// Note: We'd need QueryService here, but to avoid circular deps,
// we'll return success and let caller refresh if needed
return { success: true };
} catch (error) {
logger.error(`WriteThroughService: Failed to update object ${objectId}`, error);
return {
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
};
}
}
/**
* Build Jira attribute values from TypeScript value
*/
private buildAttributeValues(
value: unknown,
attrDef: { attrType: string; isMultiple: boolean }
): Array<{ value?: string }> {
// Null/undefined = clear the field
if (value === null || value === undefined) {
return [];
}
// Reference type
if (attrDef.attrType === 'reference') {
if (attrDef.isMultiple && Array.isArray(value)) {
return (value as Array<{ objectKey?: string }>).map(ref => ({
value: ref.objectKey,
})).filter(v => v.value);
} else if (!attrDef.isMultiple) {
const ref = value as { objectKey?: string };
return ref.objectKey ? [{ value: ref.objectKey }] : [];
}
return [];
}
// Boolean
if (attrDef.attrType === 'boolean') {
return [{ value: value ? 'true' : 'false' }];
}
// Number types
if (attrDef.attrType === 'integer' || attrDef.attrType === 'float') {
return [{ value: String(value) }];
}
// String types
return [{ value: String(value) }];
}
}
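The conversion rules in `buildAttributeValues` can be exercised in isolation. This is a hedged, standalone restatement of the same logic (null clears the field, booleans and numbers stringify, single references map `objectKey` into the `value` field), not the service itself:

```typescript
type AttrDef = { attrType: string; isMultiple: boolean };

// Standalone sketch of the value-to-Jira-attribute conversion.
function toAttributeValues(value: unknown, attrDef: AttrDef): Array<{ value?: string }> {
  // Null/undefined = clear the field (empty array)
  if (value === null || value === undefined) return [];
  if (attrDef.attrType === "reference") {
    if (attrDef.isMultiple && Array.isArray(value)) {
      return (value as Array<{ objectKey?: string }>)
        .map(ref => ({ value: ref.objectKey }))
        .filter(v => v.value);
    }
    if (!attrDef.isMultiple) {
      const ref = value as { objectKey?: string };
      return ref.objectKey ? [{ value: ref.objectKey }] : [];
    }
    return [];
  }
  if (attrDef.attrType === "boolean") return [{ value: value ? "true" : "false" }];
  // Integers, floats, and strings all serialize via String()
  return [{ value: String(value) }];
}
```

Note that an empty array is a meaningful payload to the Assets API (it clears the attribute), which is why `updateObject` includes the attribute even when `buildAttributeValues` returns nothing for a null update.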

View File

@@ -1,13 +1,20 @@
import { config } from '../config/env.js';
import { logger } from './logger.js';
import { randomBytes, createHash } from 'crypto';
import { getAuthDatabase } from './database/migrations.js';
import { userService, type User } from './userService.js';
import { roleService } from './roleService.js';
// Token storage (in production, use Redis or similar)
interface UserSession {
accessToken: string;
refreshToken?: string;
expiresAt: number;
user: JiraUser;
// Extended user interface for sessions
export interface SessionUser {
id: number;
email: string;
username: string;
displayName: string;
emailAddress?: string;
avatarUrl?: string;
roles: string[];
permissions: string[];
}
export interface JiraUser {
@@ -17,19 +24,21 @@ export interface JiraUser {
avatarUrl?: string;
}
// In-memory session store (replace with Redis in production)
const sessionStore = new Map<string, UserSession>();
interface DatabaseSession {
id: string;
user_id: number | null;
auth_method: string;
access_token: string | null;
refresh_token: string | null;
expires_at: string;
created_at: string;
ip_address: string | null;
user_agent: string | null;
}
// Session cleanup interval (every 5 minutes)
setInterval(() => {
const now = Date.now();
for (const [sessionId, session] of sessionStore.entries()) {
if (session.expiresAt < now) {
sessionStore.delete(sessionId);
logger.debug(`Cleaned up expired session: ${sessionId.substring(0, 8)}...`);
}
}
}, 5 * 60 * 1000);
const isPostgres = (): boolean => {
return process.env.DATABASE_TYPE === 'postgres' || process.env.DATABASE_TYPE === 'postgresql';
};
// PKCE helpers for OAuth 2.0
export function generateCodeVerifier(): string {
@@ -59,8 +68,196 @@ setInterval(() => {
}
}, 60 * 1000);
// Clean up expired sessions from database
setInterval(async () => {
try {
const db = getAuthDatabase();
const now = new Date().toISOString();
await db.execute(
'DELETE FROM sessions WHERE expires_at < ?',
[now]
);
await db.close();
} catch (error) {
logger.error('Failed to clean up expired sessions:', error);
}
}, 5 * 60 * 1000); // Every 5 minutes
class AuthService {
// Get OAuth authorization URL
/**
* Get session duration in milliseconds
*/
private getSessionDuration(): number {
const hours = parseInt(process.env.SESSION_DURATION_HOURS || '24', 10);
return hours * 60 * 60 * 1000;
}
/**
* Create a session in the database
*/
private async createSession(
userId: number | null,
authMethod: 'local' | 'oauth' | 'jira-oauth',
accessToken?: string,
refreshToken?: string,
ipAddress?: string,
userAgent?: string
): Promise<string> {
const db = getAuthDatabase();
const sessionId = randomBytes(32).toString('hex');
const now = new Date().toISOString();
const expiresAt = new Date(Date.now() + this.getSessionDuration()).toISOString();
try {
await db.execute(
`INSERT INTO sessions (
id, user_id, auth_method, access_token, refresh_token,
expires_at, created_at, ip_address, user_agent
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
[
sessionId,
userId,
authMethod,
accessToken || null,
refreshToken || null,
expiresAt,
now,
ipAddress || null,
userAgent || null,
]
);
logger.info(`Session created: ${sessionId.substring(0, 8)}... (${authMethod})`);
return sessionId;
} finally {
await db.close();
}
}
/**
* Get session from database
*/
private async getSessionFromDb(sessionId: string): Promise<DatabaseSession | null> {
const db = getAuthDatabase();
try {
const session = await db.queryOne<DatabaseSession>(
'SELECT * FROM sessions WHERE id = ?',
[sessionId]
);
if (!session) {
return null;
}
// Check if expired
const expiresAt = new Date(session.expires_at);
const now = new Date();
if (expiresAt < now) {
await db.execute('DELETE FROM sessions WHERE id = ?', [sessionId]);
return null;
}
return session;
} catch (error) {
logger.error(`[getSessionFromDb] Error querying session: ${sessionId.substring(0, 8)}...`, error);
throw error;
}
// Note: Don't close the database adapter - it's a singleton that should remain open
}
/**
* Get user from session (local auth)
*/
async getSessionUser(sessionId: string): Promise<SessionUser | null> {
const session = await this.getSessionFromDb(sessionId);
if (!session || !session.user_id) {
return null;
}
const user = await userService.getUserById(session.user_id);
if (!user || !user.is_active) {
return null;
}
const roles = await roleService.getUserRoles(session.user_id);
const permissions = await roleService.getUserPermissions(session.user_id);
return {
id: user.id,
email: user.email,
username: user.username,
displayName: user.display_name || user.username,
emailAddress: user.email,
roles: roles.map(r => r.name),
permissions: permissions.map(p => p.name),
};
}
/**
* Local login (email or username/password)
*/
async localLogin(
email: string,
password: string,
ipAddress?: string,
userAgent?: string
): Promise<{ sessionId: string; user: SessionUser }> {
logger.debug(`[localLogin] Attempting login with identifier: ${email}`);
// Try email first, then username if email lookup fails
let user = await userService.getUserByEmail(email);
if (!user) {
logger.debug(`[localLogin] Email lookup failed, trying username: ${email}`);
// If email lookup failed, try username
user = await userService.getUserByUsername(email);
}
if (!user) {
logger.warn(`[localLogin] User not found: ${email}`);
throw new Error('Invalid email/username or password');
}
logger.debug(`[localLogin] User found: ${user.email} (${user.username}), active: ${user.is_active}, verified: ${user.email_verified}`);
if (!user.is_active) {
logger.warn(`[localLogin] Account is deactivated: ${user.email}`);
throw new Error('Account is deactivated');
}
// Verify password
const isValid = await userService.verifyPassword(password, user.password_hash);
if (!isValid) {
logger.warn(`[localLogin] Invalid password for user: ${user.email}`);
throw new Error('Invalid email/username or password');
}
logger.info(`[localLogin] Successful login: ${user.email} (${user.username})`);
// Update last login
await userService.updateLastLogin(user.id);
// Create session
const sessionId = await this.createSession(
user.id,
'local',
undefined,
undefined,
ipAddress,
userAgent
);
const sessionUser = await this.getSessionUser(sessionId);
if (!sessionUser) {
throw new Error('Failed to create session');
}
return { sessionId, user: sessionUser };
}
/**
* Get OAuth authorization URL
*/
getAuthorizationUrl(): { url: string; state: string } {
const state = generateState();
const codeVerifier = generateCodeVerifier();
@@ -86,8 +283,15 @@ class AuthService {
return { url: authUrl, state };
}
// Exchange authorization code for tokens
async exchangeCodeForTokens(code: string, state: string): Promise<{ sessionId: string; user: JiraUser }> {
/**
* Exchange authorization code for tokens (Jira OAuth)
*/
async exchangeCodeForTokens(
code: string,
state: string,
ipAddress?: string,
userAgent?: string
): Promise<{ sessionId: string; user: SessionUser | JiraUser }> {
// Retrieve and validate state
const flowData = authFlowStore.get(state);
if (!flowData) {
@@ -129,25 +333,52 @@ class AuthService {
token_type: string;
};
// Fetch user info
const user = await this.fetchUserInfo(tokenData.access_token);
// Fetch user info from Jira
const jiraUser = await this.fetchUserInfo(tokenData.access_token);
// Create session
const sessionId = randomBytes(32).toString('hex');
const session: UserSession = {
accessToken: tokenData.access_token,
refreshToken: tokenData.refresh_token,
expiresAt: Date.now() + (tokenData.expires_in * 1000),
user,
};
// Try to find local user by email
let localUser: User | null = null;
if (jiraUser.emailAddress) {
localUser = await userService.getUserByEmail(jiraUser.emailAddress);
}
sessionStore.set(sessionId, session);
logger.info(`Created session for user: ${user.displayName}`);
if (localUser) {
// Link OAuth to existing local user
const sessionId = await this.createSession(
localUser.id,
'jira-oauth',
tokenData.access_token,
tokenData.refresh_token,
ipAddress,
userAgent
);
return { sessionId, user };
const sessionUser = await this.getSessionUser(sessionId);
if (!sessionUser) {
throw new Error('Failed to create session');
}
logger.info(`OAuth login successful for local user: ${localUser.email}`);
return { sessionId, user: sessionUser };
} else {
// Create session without local user (OAuth-only)
const sessionId = await this.createSession(
null,
'jira-oauth',
tokenData.access_token,
tokenData.refresh_token,
ipAddress,
userAgent
);
logger.info(`OAuth login successful for Jira user: ${jiraUser.displayName}`);
return { sessionId, user: jiraUser };
}
}
// Fetch current user info from Jira
/**
* Fetch current user info from Jira
*/
async fetchUserInfo(accessToken: string): Promise<JiraUser> {
const response = await fetch(`${config.jiraHost}/rest/api/2/myself`, {
headers: {
@@ -177,38 +408,54 @@ class AuthService {
};
}
// Get session by ID
getSession(sessionId: string): UserSession | null {
const session = sessionStore.get(sessionId);
/**
* Get session by ID
*/
async getSession(sessionId: string): Promise<{ user: SessionUser | JiraUser; accessToken?: string } | null> {
const session = await this.getSessionFromDb(sessionId);
if (!session) {
return null;
}
// Check if expired
if (session.expiresAt < Date.now()) {
sessionStore.delete(sessionId);
return null;
if (session.user_id) {
// Local user session
const user = await this.getSessionUser(sessionId);
if (!user) {
return null;
}
return { user };
} else if (session.access_token) {
// OAuth-only session
const user = await this.fetchUserInfo(session.access_token);
return { user, accessToken: session.access_token };
}
return session;
return null;
}
// Get access token for a session
getAccessToken(sessionId: string): string | null {
const session = this.getSession(sessionId);
return session?.accessToken || null;
/**
* Get access token for a session
*/
async getAccessToken(sessionId: string): Promise<string | null> {
const session = await this.getSessionFromDb(sessionId);
return session?.access_token || null;
}
// Get user for a session
/**
* Get user for a session (legacy method for compatibility)
*/
getUser(sessionId: string): JiraUser | null {
const session = this.getSession(sessionId);
return session?.user || null;
// This is a legacy method - use getSessionUser or getSession instead
// For now, return null to maintain compatibility
return null;
}
// Refresh access token
/**
* Refresh access token
*/
async refreshAccessToken(sessionId: string): Promise<boolean> {
const session = sessionStore.get(sessionId);
if (!session?.refreshToken) {
const session = await this.getSessionFromDb(sessionId);
if (!session?.refresh_token) {
return false;
}
@@ -218,7 +465,7 @@ class AuthService {
grant_type: 'refresh_token',
client_id: config.jiraOAuthClientId,
client_secret: config.jiraOAuthClientSecret,
refresh_token: session.refreshToken,
refresh_token: session.refresh_token,
});
try {
@@ -241,16 +488,23 @@ class AuthService {
expires_in: number;
};
// Update session
session.accessToken = tokenData.access_token;
if (tokenData.refresh_token) {
session.refreshToken = tokenData.refresh_token;
// Update session in database
const db = getAuthDatabase();
try {
await db.execute(
'UPDATE sessions SET access_token = ?, refresh_token = ?, expires_at = ? WHERE id = ?',
[
tokenData.access_token,
tokenData.refresh_token || session.refresh_token,
new Date(Date.now() + (tokenData.expires_in * 1000)).toISOString(),
sessionId,
]
);
} finally {
await db.close();
}
session.expiresAt = Date.now() + (tokenData.expires_in * 1000);
sessionStore.set(sessionId, session);
logger.info(`Refreshed token for session: ${sessionId.substring(0, 8)}...`);
return true;
} catch (error) {
logger.error('Token refresh error:', error);
@@ -258,28 +512,55 @@ class AuthService {
}
}
// Logout / destroy session
logout(sessionId: string): boolean {
const existed = sessionStore.has(sessionId);
sessionStore.delete(sessionId);
if (existed) {
logger.info(`Logged out session: ${sessionId.substring(0, 8)}...`);
/**
* Logout / destroy session
*/
async logout(sessionId: string): Promise<boolean> {
const db = getAuthDatabase();
try {
const result = await db.execute(
'DELETE FROM sessions WHERE id = ?',
[sessionId]
);
if (result > 0) {
logger.info(`Logged out session: ${sessionId.substring(0, 8)}...`);
return true;
}
return false;
} finally {
await db.close();
}
return existed;
}
// Check if OAuth is enabled (jiraAuthMethod = 'oauth')
/**
* Check if OAuth is enabled (jiraAuthMethod = 'oauth')
*/
isOAuthEnabled(): boolean {
return config.jiraAuthMethod === 'oauth' && !!config.jiraOAuthClientId && !!config.jiraOAuthClientSecret;
}
// Check if using service account (PAT) mode (jiraAuthMethod = 'pat')
/**
* Check if using service account (PAT) mode (jiraAuthMethod = 'pat')
*/
isUsingServiceAccount(): boolean {
return config.jiraAuthMethod === 'pat' && !!config.jiraPat;
// Service account mode is when auth method is PAT but no local auth is enabled
// and no users exist (checked elsewhere)
return config.jiraAuthMethod === 'pat';
}
// Get the configured authentication method
getAuthMethod(): 'pat' | 'oauth' | 'none' {
/**
* Check if local auth is enabled
*/
isLocalAuthEnabled(): boolean {
return process.env.LOCAL_AUTH_ENABLED === 'true';
}
/**
* Get the configured authentication method
*/
getAuthMethod(): 'pat' | 'oauth' | 'local' | 'none' {
if (this.isLocalAuthEnabled()) return 'local';
if (this.isOAuthEnabled()) return 'oauth';
if (this.isUsingServiceAccount()) return 'pat';
return 'none';
@@ -287,4 +568,3 @@ class AuthService {
}
export const authService = new AuthService();
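The PKCE helper bodies are elided by the hunk above; their standard RFC 7636 shape, as a sketch under the assumption that the codebase uses the S256 challenge method (function names here are illustrative):

```typescript
import { randomBytes, createHash } from "node:crypto";

// base64url encoding per RFC 7636: base64 with URL-safe chars and no padding
const base64url = (buf: Buffer): string =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// code_verifier: a high-entropy random string (32 bytes -> 43 chars)
function makeCodeVerifier(): string {
  return base64url(randomBytes(32));
}

// code_challenge for the S256 method: BASE64URL(SHA-256(verifier))
function makeCodeChallenge(verifier: string): string {
  return base64url(createHash("sha256").update(verifier).digest());
}
```

The verifier is held server-side (in `authFlowStore`, keyed by `state`) until the token exchange, where it proves the authorization request and token request came from the same client.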

View File

@@ -10,7 +10,7 @@ import { readFileSync, existsSync } from 'fs';
import { join } from 'path';
import { dirname } from 'path';
import { fileURLToPath } from 'url';
import * as XLSX from 'xlsx';
import ExcelJS from 'exceljs';
import { logger } from './logger.js';
// Get __dirname equivalent for ES modules
@@ -52,13 +52,13 @@ export function clearBIACache(): void {
/**
* Load BIA data from Excel file
*/
export function loadBIAData(): BIARecord[] {
export async function loadBIAData(): Promise<BIARecord[]> {
const now = Date.now();
// Return cached data if still valid AND has records
// Don't use cache if it's empty (indicates previous load failure)
if (biaDataCache && biaDataCache.length > 0 && (now - biaDataCacheTimestamp) < BIA_CACHE_TTL) {
logger.debug(`Using cached BIA data (${biaDataCache.length} records, cached ${Math.round((now - biaDataCacheTimestamp) / 1000)}s ago)`);
return biaDataCache;
return Promise.resolve(biaDataCache);
}
// Clear cache if it's empty or expired
@@ -96,19 +96,46 @@ export function loadBIAData(): BIARecord[] {
logger.error(`__dirname: ${__dirname}`);
biaDataCache = [];
biaDataCacheTimestamp = now;
return [];
return Promise.resolve([]);
}
logger.info(`Loading BIA data from: ${biaFilePath}`);
try {
// Read file using readFileSync and then parse with XLSX.read
// This works better in ES modules than XLSX.readFile
// Read file using readFileSync and then parse with ExcelJS
const fileBuffer = readFileSync(biaFilePath);
const workbook = XLSX.read(fileBuffer, { type: 'buffer' });
const sheetName = workbook.SheetNames[0];
const worksheet = workbook.Sheets[sheetName];
const data = XLSX.utils.sheet_to_json(worksheet, { header: 1 }) as any[][];
const workbook = new ExcelJS.Workbook();
// ExcelJS accepts a Node Buffer, but its type declarations are stricter; cast to satisfy TypeScript
await workbook.xlsx.load(fileBuffer as any);
const worksheet = workbook.worksheets[0]; // First sheet
// Convert to a 2D array (same shape as xlsx.utils.sheet_to_json with header: 1)
// We need at least column K (index 10), so ensure we read up to column 11 (1-based)
const data: any[][] = [];
const maxColumnNeeded = 11; // Column K is index 10 (0-based), so we need column 11 (1-based)
worksheet.eachRow((row, rowNumber) => {
const rowData: any[] = [];
// Ensure we have at least maxColumnNeeded columns, but also check actual cells
const actualMaxCol = Math.max(maxColumnNeeded, row.actualCellCount || 0);
for (let colNumber = 1; colNumber <= actualMaxCol; colNumber++) {
const cell = row.getCell(colNumber);
// ExcelJS uses 1-based indexing, convert to 0-based for array
// Handle different cell value types: convert to string for consistency
let cellValue: any = cell.value;
if (cellValue === null || cellValue === undefined) {
cellValue = '';
} else if (cellValue instanceof Date) {
cellValue = cellValue.toISOString();
} else if (typeof cellValue === 'object' && 'richText' in cellValue) {
// Handle RichText objects: join the text runs; toString() would yield "[object Object]"
cellValue = (cellValue as { richText: Array<{ text?: string }> }).richText.map(rt => rt.text || '').join('');
}
rowData[colNumber - 1] = cellValue;
}
data.push(rowData);
});
logger.info(`Loaded Excel file: ${data.length} rows, first row has ${data[0]?.length || 0} columns`);
if (data.length > 0 && data[0]) {
@@ -236,12 +263,12 @@ export function loadBIAData(): BIARecord[] {
}
biaDataCache = records;
biaDataCacheTimestamp = now;
return records;
return Promise.resolve(records);
} catch (error) {
logger.error('Failed to load BIA data from Excel', error);
biaDataCache = [];
biaDataCacheTimestamp = now;
return [];
return Promise.resolve([]);
}
}
@@ -330,11 +357,11 @@ function wordBasedSimilarity(str1: string, str2: string): number {
* - Confidence/similarity score
* - Length similarity (prefer matches with similar length)
*/
export function findBIAMatch(
export async function findBIAMatch(
applicationName: string,
searchReference: string | null
): BIAMatchResult {
const biaData = loadBIAData();
): Promise<BIAMatchResult> {
const biaData = await loadBIAData();
if (biaData.length === 0) {
logger.warn(`No BIA data available for lookup of "${applicationName}" (biaData.length = 0)`);
return {

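The conversion of `loadBIAData` to an async, promise-returning function keeps the TTL-cache behavior shown above. A minimal standalone sketch of that pattern (all names here, `CACHE_TTL`, `fetchRecords`, `loadRecords`, are illustrative, not from the repo):

```typescript
// TTL cache for an async loader: reuse the cache only when it is
// non-empty and fresh; an empty cache indicates a previous load failure.
type BIARecordLike = { name: string };

const CACHE_TTL = 5 * 60 * 1000; // 5 minutes, an assumed value
let cache: BIARecordLike[] | null = null;
let cacheTimestamp = 0;

// Stand-in for the Excel parsing step.
async function fetchRecords(): Promise<BIARecordLike[]> {
  return [{ name: 'example' }];
}

export async function loadRecords(): Promise<BIARecordLike[]> {
  const now = Date.now();
  if (cache && cache.length > 0 && now - cacheTimestamp < CACHE_TTL) {
    return cache; // fresh, non-empty cache hit
  }
  try {
    cache = await fetchRecords();
  } catch {
    cache = []; // cache the failure briefly so retries are rate-limited
  }
  cacheTimestamp = now;
  return cache;
}
```

Because the function is `async`, the explicit `Promise.resolve(...)` wrappers in the diff are equivalent to plain `return` statements.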
View File

@@ -52,7 +52,7 @@ try {
async function findBIAValue(applicationName: string, searchReference?: string | null): Promise<string | null> {
// Use the unified matching service (imported at top of file)
const { findBIAMatch } = await import('./biaMatchingService.js');
const matchResult = findBIAMatch(applicationName, searchReference || null);
const matchResult = await findBIAMatch(applicationName, searchReference || null);
return matchResult.biaValue || null;
}
@@ -337,8 +337,9 @@ interface TavilySearchResponse {
}
// Perform web search using Tavily API
async function performWebSearch(query: string): Promise<string | null> {
if (!config.enableWebSearch || !config.tavilyApiKey) {
async function performWebSearch(query: string, tavilyApiKey?: string): Promise<string | null> {
// Tavily API key must be provided - it's configured in user profile settings
if (!tavilyApiKey) {
return null;
}
@@ -349,7 +350,7 @@ async function performWebSearch(query: string): Promise<string | null> {
'Content-Type': 'application/json',
},
body: JSON.stringify({
api_key: config.tavilyApiKey,
api_key: tavilyApiKey,
query: query,
search_depth: 'basic',
include_answer: true,
@@ -610,49 +611,56 @@ class AIService {
private openaiClient: OpenAI | null = null;
constructor() {
if (config.anthropicApiKey) {
this.anthropicClient = new Anthropic({
apiKey: config.anthropicApiKey,
});
logger.info('Anthropic (Claude) API configured');
} else {
logger.warn('Anthropic API key not configured. Claude classification will not work.');
}
if (config.openaiApiKey) {
this.openaiClient = new OpenAI({
apiKey: config.openaiApiKey,
});
logger.info('OpenAI API configured');
} else {
logger.warn('OpenAI API key not configured. OpenAI classification will not work.');
}
// AI API keys are now configured per-user in their profile settings
// Global clients are not initialized - clients are created per-request with user keys
logger.info('AI service initialized - API keys must be configured in user profile settings');
}
// Check if a specific provider is configured
// Note: This now checks if user has configured the provider in their settings
// The actual check should be done per-request with user API keys
isProviderConfigured(provider: AIProvider): boolean {
if (provider === 'claude') {
return this.anthropicClient !== null;
} else {
return this.openaiClient !== null;
}
// Always return true - configuration is checked per-request with user keys
// This maintains backward compatibility for the isConfigured() method
return true;
}
// Get available providers
getAvailableProviders(): AIProvider[] {
const providers: AIProvider[] = [];
if (this.anthropicClient) providers.push('claude');
if (this.openaiClient) providers.push('openai');
return providers;
// Providers are available if users have configured API keys in their settings
// This method is kept for backward compatibility but always returns both providers
// The actual availability is checked per-request with user API keys
return ['claude', 'openai'];
}
async classifyApplication(application: ApplicationDetails, provider: AIProvider = config.defaultAIProvider): Promise<AISuggestion> {
// Validate provider
if (provider === 'claude' && !this.anthropicClient) {
throw new Error('Claude API not configured. Please set ANTHROPIC_API_KEY.');
async classifyApplication(
application: ApplicationDetails,
provider: AIProvider = 'claude', // Default to 'claude', but should be provided from user settings
userApiKeys?: { anthropic?: string; openai?: string; tavily?: string }
): Promise<AISuggestion> {
// Use user API keys if provided, otherwise use global config
// API keys must be provided via userApiKeys - they're configured in user profile settings
const anthropicKey = userApiKeys?.anthropic;
const openaiKey = userApiKeys?.openai;
const tavilyKey = userApiKeys?.tavily;
// Create clients with user keys - API keys must be provided via userApiKeys
let anthropicClient: Anthropic | null = null;
let openaiClient: OpenAI | null = null;
if (anthropicKey) {
anthropicClient = new Anthropic({ apiKey: anthropicKey });
}
if (provider === 'openai' && !this.openaiClient) {
throw new Error('OpenAI API not configured. Please set OPENAI_API_KEY.');
if (openaiKey) {
openaiClient = new OpenAI({ apiKey: openaiKey });
}
// Validate provider - API keys must be provided via userApiKeys
if (provider === 'claude' && !anthropicKey) {
throw new Error('Claude API not configured. Please configure the API key in your user settings.');
}
if (provider === 'openai' && !openaiKey) {
throw new Error('OpenAI API not configured. Please configure the API key in your user settings.');
}
// Check if web search is needed
@@ -661,7 +669,7 @@ class AIService {
logger.info(`Insufficient information detected for ${application.name}, performing web search...`);
const supplierPart = application.supplierProduct ? `${application.supplierProduct} ` : '';
const searchQuery = `${application.name} ${supplierPart}healthcare software`.trim();
webSearchResults = await performWebSearch(searchQuery);
webSearchResults = await performWebSearch(searchQuery, tavilyKey);
if (webSearchResults) {
logger.info(`Web search completed for ${application.name}`);
} else {
@@ -719,8 +727,12 @@ class AIService {
let responseText: string;
if (provider === 'claude') {
// Use Claude (Anthropic)
const message = await this.anthropicClient!.messages.create({
// Use Claude (Anthropic) - client created from user API key
if (!anthropicClient) {
throw new Error('Claude API not configured. Please configure the API key in your user settings.');
}
const client = anthropicClient;
const message = await client.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 4096,
messages: [
@@ -737,8 +749,12 @@ class AIService {
}
responseText = textBlock.text.trim();
} else {
// Use OpenAI
const completion = await this.openaiClient!.chat.completions.create({
// Use OpenAI - client created from user API key
if (!openaiClient) {
throw new Error('OpenAI API not configured. Please configure the API key in your user settings.');
}
const client = openaiClient;
const completion = await client.chat.completions.create({
model: 'gpt-4o',
max_tokens: 4096,
messages: [
@@ -884,7 +900,7 @@ class AIService {
async classifyBatch(
applications: ApplicationDetails[],
onProgress?: (completed: number, total: number) => void,
provider: AIProvider = config.defaultAIProvider
provider: AIProvider = 'claude' // Default to 'claude', but should be provided from user settings
): Promise<Map<string, AISuggestion>> {
const results = new Map<string, AISuggestion>();
const total = applications.length;
@@ -936,8 +952,9 @@ class AIService {
if (provider) {
return this.isProviderConfigured(provider);
}
// Return true if at least one provider is configured
return this.anthropicClient !== null || this.openaiClient !== null;
// Configuration is checked per-request with user API keys
// This method is kept for backward compatibility
return true;
}
// Get the prompt that would be sent to the AI for a given application
@@ -1011,14 +1028,30 @@ class AIService {
application: ApplicationDetails,
userMessage: string,
conversationId?: string,
provider: AIProvider = config.defaultAIProvider
provider: AIProvider = 'claude', // Default to 'claude', but should be provided from user settings
userApiKeys?: { anthropic?: string; openai?: string; tavily?: string }
): Promise<ChatResponse> {
// Validate provider
if (provider === 'claude' && !this.anthropicClient) {
throw new Error('Claude API not configured. Please set ANTHROPIC_API_KEY.');
// API keys must be provided via userApiKeys - they're configured in user profile settings
const anthropicKey = userApiKeys?.anthropic;
const openaiKey = userApiKeys?.openai;
// Create clients with user keys
let anthropicClient: Anthropic | null = null;
let openaiClient: OpenAI | null = null;
if (anthropicKey) {
anthropicClient = new Anthropic({ apiKey: anthropicKey });
}
if (provider === 'openai' && !this.openaiClient) {
throw new Error('OpenAI API not configured. Please set OPENAI_API_KEY.');
if (openaiKey) {
openaiClient = new OpenAI({ apiKey: openaiKey });
}
// Validate provider - API keys must be provided via userApiKeys
if (provider === 'claude' && !anthropicKey) {
throw new Error('Claude API not configured. Please configure the API key in your user settings.');
}
if (provider === 'openai' && !openaiKey) {
throw new Error('OpenAI API not configured. Please configure the API key in your user settings.');
}
// Get or create conversation
@@ -1062,7 +1095,11 @@ class AIService {
const systemMessage = aiMessages.find(m => m.role === 'system');
const otherMessages = aiMessages.filter(m => m.role !== 'system');
const response = await this.anthropicClient!.messages.create({
if (!anthropicClient) {
throw new Error('Claude API not configured. Please configure the API key in your user settings.');
}
const response = await anthropicClient.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 4096,
system: systemMessage?.content || '',
@@ -1075,7 +1112,11 @@ class AIService {
assistantContent = response.content[0].type === 'text' ? response.content[0].text : '';
} else {
// OpenAI
const response = await this.openaiClient!.chat.completions.create({
if (!openaiClient) {
throw new Error('OpenAI API not configured. Please configure the API key in your user settings.');
}
const response = await openaiClient.chat.completions.create({
model: 'gpt-4o',
max_tokens: 4096,
messages: aiMessages.map(m => ({

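The per-request key handling added to `classifyApplication` and the chat method follows one pattern: resolve the user-supplied key for the requested provider, and fail with a settings-oriented error when it is absent. A minimal sketch of that resolution step (`resolveApiKey` is illustrative; the diff inlines this logic rather than extracting a helper):

```typescript
type AIProvider = 'claude' | 'openai';
interface UserApiKeys { anthropic?: string; openai?: string; tavily?: string }

// Pick the key for the requested provider from per-user settings.
// Throws the same style of error the service methods use.
function resolveApiKey(provider: AIProvider, keys?: UserApiKeys): string {
  const key = provider === 'claude' ? keys?.anthropic : keys?.openai;
  if (!key) {
    const name = provider === 'claude' ? 'Claude' : 'OpenAI';
    throw new Error(
      `${name} API not configured. Please configure the API key in your user settings.`
    );
  }
  return key;
}
```

With keys resolved per request, the SDK clients are constructed inside the method body instead of in the constructor, which is why `isProviderConfigured` and `getAvailableProviders` now return static values.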
View File

@@ -8,7 +8,7 @@
*/
import { logger } from './logger.js';
import { cacheStore, type CacheStats } from './cacheStore.js';
import { normalizedCacheStore as cacheStore, type CacheStats } from './normalizedCacheStore.js';
import { jiraAssetsClient, type JiraUpdatePayload, JiraObjectNotFoundError } from './jiraAssetsClient.js';
import { conflictResolver, type ConflictCheckResult } from './conflictResolver.js';
import { OBJECT_TYPES, getAttributeDefinition } from '../generated/jira-schema.js';
@@ -65,7 +65,11 @@ class CMDBService {
return cached;
}
// Cache miss: fetch from Jira
// Cache miss: fetch from Jira directly.
// Background cache warming has been removed; syncs must be triggered manually from the GUI.
// The isWarm() check is kept for status reporting, but no auto-warming occurs.
return this.fetchAndCacheObject<T>(typeName, id);
}
@@ -79,6 +83,13 @@ class CMDBService {
): Promise<T | null> {
// Force refresh: search Jira by key
if (options?.forceRefresh) {
// Check if Jira token is configured before making API call
if (!jiraAssetsClient.hasToken()) {
logger.debug(`CMDBService: Jira PAT not configured, cannot search for ${typeName} with key ${objectKey}`);
// Return cached version if available
return await cacheStore.getObjectByKey<T>(typeName, objectKey) || null;
}
const typeDef = OBJECT_TYPES[typeName];
if (!typeDef) return null;
@@ -87,7 +98,7 @@ class CMDBService {
if (result.objects.length === 0) return null;
const parsed = jiraAssetsClient.parseObject<T>(result.objects[0]);
const parsed = await jiraAssetsClient.parseObject<T>(result.objects[0]);
if (parsed) {
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
@@ -115,13 +126,48 @@ class CMDBService {
): Promise<T | null> {
try {
const jiraObj = await jiraAssetsClient.getObject(id);
if (!jiraObj) return null;
const parsed = jiraAssetsClient.parseObject<T>(jiraObj);
if (parsed) {
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
if (!jiraObj) {
logger.warn(`CMDBService: Jira API returned null for object ${typeName}/${id}`);
return null;
}
let parsed: T | null;
try {
parsed = await jiraAssetsClient.parseObject<T>(jiraObj);
} catch (parseError) {
// parseObject throws errors for missing required fields - log and return null
logger.error(`CMDBService: Failed to parse object ${typeName}/${id} from Jira:`, parseError);
logger.debug(`CMDBService: Jira object that failed to parse:`, {
id: jiraObj.id,
objectKey: jiraObj.objectKey,
label: jiraObj.label,
objectType: jiraObj.objectType?.name,
attributesCount: jiraObj.attributes?.length || 0,
});
return null;
}
if (!parsed) {
logger.warn(`CMDBService: Failed to parse object ${typeName}/${id} from Jira (parseObject returned null)`);
return null;
}
// Validate parsed object has required fields before caching
if (!parsed.id || !parsed.objectKey || !parsed.label) {
logger.error(`CMDBService: Parsed object ${typeName}/${id} is missing required fields. Parsed object: ${JSON.stringify({
id: parsed.id,
objectKey: parsed.objectKey,
label: parsed.label,
hasId: 'id' in parsed,
hasObjectKey: 'objectKey' in parsed,
hasLabel: 'label' in parsed,
resultKeys: Object.keys(parsed),
})}`);
return null; // Return null instead of throwing to allow graceful degradation
}
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
return parsed;
} catch (error) {
// If object was deleted from Jira, remove it from our cache
@@ -132,11 +178,48 @@ class CMDBService {
}
return null;
}
// Re-throw other errors
throw error;
// Log other errors but return null instead of throwing to prevent cascading failures
logger.error(`CMDBService: Unexpected error fetching object ${typeName}/${id}:`, error);
return null;
}
}
/**
* Batch fetch multiple objects from Jira and update cache
* Much more efficient than fetching objects one by one
*/
async batchFetchAndCacheObjects<T extends CMDBObject>(
typeName: CMDBObjectTypeName,
ids: string[]
): Promise<T[]> {
if (ids.length === 0) return [];
logger.debug(`CMDBService: Batch fetching ${ids.length} ${typeName} objects from Jira`);
// Fetch all objects in parallel (but limit concurrency to avoid overwhelming Jira)
const BATCH_SIZE = 20; // Fetch 20 objects at a time
const results: T[] = [];
for (let i = 0; i < ids.length; i += BATCH_SIZE) {
const batch = ids.slice(i, i + BATCH_SIZE);
const batchPromises = batch.map(async (id) => {
try {
return await this.fetchAndCacheObject<T>(typeName, id);
} catch (error) {
logger.warn(`CMDBService: Failed to fetch ${typeName}/${id} in batch`, error);
return null;
}
});
const batchResults = await Promise.all(batchPromises);
const validResults = batchResults.filter((obj): obj is NonNullable<typeof obj> => obj !== null) as T[];
results.push(...validResults);
}
logger.debug(`CMDBService: Successfully batch fetched ${results.length}/${ids.length} ${typeName} objects`);
return results;
}
/**
* Get all objects of a type from cache
*/
@@ -235,7 +318,15 @@ class CMDBService {
return { success: true };
}
// 3. Send update to Jira
// 3. Check if user PAT is configured before sending update (write operations require user PAT)
if (!jiraAssetsClient.hasUserToken()) {
return {
success: false,
error: 'Jira Personal Access Token not configured. Please configure it in your user settings to enable saving changes to Jira.',
};
}
// 4. Send update to Jira
const success = await jiraAssetsClient.updateObject(id, payload);
if (!success) {
@@ -271,6 +362,14 @@ class CMDBService {
id: string,
updates: Record<string, unknown>
): Promise<UpdateResult> {
// Check if user PAT is configured before sending update (write operations require user PAT)
if (!jiraAssetsClient.hasUserToken()) {
return {
success: false,
error: 'Jira Personal Access Token not configured. Please configure it in your user settings to enable saving changes to Jira.',
};
}
try {
const payload = this.buildUpdatePayload(typeName, updates);
@@ -407,6 +506,20 @@ class CMDBService {
return await cacheStore.isWarm();
}
/**
* Trigger background cache warming if cache is cold
* This is called on-demand when cache misses occur
*/
private async triggerBackgroundWarming(): Promise<void> {
try {
const { jiraAssetsService } = await import('./jiraAssets.js');
await jiraAssetsService.preWarmFullCache();
} catch (error) {
// Silently fail - warming is optional
logger.debug('On-demand cache warming failed', error);
}
}
/**
* Clear cache for a specific type
*/

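The new `batchFetchAndCacheObjects` method above bounds concurrency by slicing the id list into chunks of 20 and awaiting each chunk before starting the next. A generic sketch of that chunked parallel fetch, with failures degraded to `null` and filtered out (`batchFetch` and `fetchOne` are illustrative names):

```typescript
// Fetch many items with limited concurrency: process ids in chunks,
// run each chunk in parallel, and drop per-item failures.
async function batchFetch<T>(
  ids: string[],
  fetchOne: (id: string) => Promise<T | null>,
  batchSize = 20,
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    const settled = await Promise.all(
      ids.slice(i, i + batchSize).map(async (id) => {
        try {
          return await fetchOne(id);
        } catch {
          return null; // a single failed fetch should not abort the batch
        }
      }),
    );
    results.push(...settled.filter((r): r is Awaited<T> => r !== null));
  }
  return results;
}
```

The trade-off versus firing all requests at once is deliberate: chunking avoids overwhelming the Jira API while still being far faster than strictly sequential fetches.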
View File

@@ -0,0 +1,286 @@
/**
* Data Integrity Service
*
* Handles validation and repair of broken references and other data integrity issues.
*/
import { logger } from './logger.js';
import { normalizedCacheStore as cacheStore } from './normalizedCacheStore.js';
import { jiraAssetsClient, JiraObjectNotFoundError } from './jiraAssetsClient.js';
import type { CMDBObject } from '../generated/jira-types.js';
import type { DatabaseAdapter } from './database/interface.js';
export interface BrokenReference {
object_id: string;
attribute_id: number;
reference_object_id: string;
field_name: string;
object_type_name: string;
object_key: string;
label: string;
}
export interface RepairResult {
total: number;
repaired: number;
deleted: number;
failed: number;
errors: Array<{ reference: BrokenReference; error: string }>;
}
export interface ValidationResult {
brokenReferences: number;
objectsWithBrokenRefs: number;
lastValidated: string;
}
class DataIntegrityService {
/**
* Validate all references in the cache
*/
async validateReferences(): Promise<ValidationResult> {
const brokenCount = await cacheStore.getBrokenReferencesCount();
// Count unique objects with broken references
const brokenRefs = await cacheStore.getBrokenReferences(10000, 0);
const uniqueObjectIds = new Set(brokenRefs.map(ref => ref.object_id));
return {
brokenReferences: brokenCount,
objectsWithBrokenRefs: uniqueObjectIds.size,
lastValidated: new Date().toISOString(),
};
}
/**
* Repair broken references
*
* @param mode - 'delete': Remove broken references, 'fetch': Try to fetch missing objects from Jira, 'dry-run': Just report
* @param batchSize - Number of references to process at a time
* @param maxRepairs - Maximum number of repairs to attempt (0 = unlimited)
*/
async repairBrokenReferences(
mode: 'delete' | 'fetch' | 'dry-run' = 'fetch',
batchSize: number = 100,
maxRepairs: number = 0
): Promise<RepairResult> {
const result: RepairResult = {
total: 0,
repaired: 0,
deleted: 0,
failed: 0,
errors: [],
};
let offset = 0;
let processed = 0;
while (true) {
// Fetch batch of broken references
const brokenRefs = await cacheStore.getBrokenReferences(batchSize, offset);
if (brokenRefs.length === 0) break;
result.total += brokenRefs.length;
for (const ref of brokenRefs) {
// Check max repairs limit
if (maxRepairs > 0 && processed >= maxRepairs) {
logger.info(`DataIntegrityService: Reached max repairs limit (${maxRepairs})`);
break;
}
try {
if (mode === 'dry-run') {
// Just count, don't repair
processed++;
continue;
}
if (mode === 'fetch') {
// Try to fetch the referenced object from Jira
const fetchResult = await this.validateAndFetchReference(ref.reference_object_id);
if (fetchResult.exists && fetchResult.object) {
// Object was successfully fetched and cached
logger.debug(`DataIntegrityService: Repaired reference from ${ref.object_key}.${ref.field_name} to ${ref.reference_object_id}`);
result.repaired++;
} else {
// Object doesn't exist in Jira, delete the reference
await this.deleteBrokenReference(ref);
logger.debug(`DataIntegrityService: Deleted broken reference from ${ref.object_key}.${ref.field_name} to ${ref.reference_object_id} (object not found in Jira)`);
result.deleted++;
}
} else if (mode === 'delete') {
// Directly delete the broken reference
await this.deleteBrokenReference(ref);
result.deleted++;
}
processed++;
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
logger.error(`DataIntegrityService: Failed to repair reference from ${ref.object_key}.${ref.field_name} to ${ref.reference_object_id}`, error);
result.failed++;
result.errors.push({
reference: ref,
error: errorMessage,
});
}
}
// Check if we should continue
if (brokenRefs.length < batchSize || (maxRepairs > 0 && processed >= maxRepairs)) {
break;
}
offset += batchSize;
}
logger.info(`DataIntegrityService: Repair completed - Total: ${result.total}, Repaired: ${result.repaired}, Deleted: ${result.deleted}, Failed: ${result.failed}`);
return result;
}
/**
* Validate and fetch a referenced object
*/
private async validateAndFetchReference(
referenceObjectId: string
): Promise<{ exists: boolean; object?: CMDBObject }> {
// 1. Check cache first
const db = (cacheStore as any).db;
if (db) {
const typedDb = db as DatabaseAdapter;
const objRow = await typedDb.queryOne<{
id: string;
object_type_name: string;
}>(`
SELECT id, object_type_name
FROM objects
WHERE id = ?
`, [referenceObjectId]);
if (objRow) {
const cached = await cacheStore.getObject(objRow.object_type_name as any, referenceObjectId);
if (cached) {
return { exists: true, object: cached };
}
}
}
// 2. Try to fetch from Jira
try {
const jiraObj = await jiraAssetsClient.getObject(referenceObjectId);
if (jiraObj) {
// Parse and cache
const parsed = await jiraAssetsClient.parseObject(jiraObj);
if (parsed) {
await cacheStore.upsertObject(parsed._objectType, parsed);
await cacheStore.extractAndStoreRelations(parsed._objectType, parsed);
return { exists: true, object: parsed };
}
}
} catch (error) {
if (error instanceof JiraObjectNotFoundError) {
return { exists: false };
}
// Re-throw other errors
throw error;
}
return { exists: false };
}
/**
* Delete a broken reference
*/
private async deleteBrokenReference(ref: BrokenReference): Promise<void> {
const db = (cacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.execute(`
DELETE FROM attribute_values
WHERE object_id = ?
AND attribute_id = ?
AND reference_object_id = ?
`, [ref.object_id, ref.attribute_id, ref.reference_object_id]);
}
/**
* Cleanup orphaned attribute values (values without parent object)
*/
async cleanupOrphanedAttributeValues(): Promise<number> {
const db = (cacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
const result = await db.execute(`
DELETE FROM attribute_values
WHERE object_id NOT IN (SELECT id FROM objects)
`);
logger.info(`DataIntegrityService: Cleaned up ${result} orphaned attribute values`);
return result;
}
/**
* Cleanup orphaned relations (relations where source or target doesn't exist)
*/
async cleanupOrphanedRelations(): Promise<number> {
const db = (cacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
const result = await db.execute(`
DELETE FROM object_relations
WHERE source_id NOT IN (SELECT id FROM objects)
OR target_id NOT IN (SELECT id FROM objects)
`);
logger.info(`DataIntegrityService: Cleaned up ${result} orphaned relations`);
return result;
}
/**
* Full integrity check and repair
*/
async fullIntegrityCheck(repair: boolean = false): Promise<{
validation: ValidationResult;
repair?: RepairResult;
orphanedValues: number;
orphanedRelations: number;
}> {
logger.info('DataIntegrityService: Starting full integrity check...');
const validation = await this.validateReferences();
const orphanedValues = await this.cleanupOrphanedAttributeValues();
const orphanedRelations = await this.cleanupOrphanedRelations();
let repairResult: RepairResult | undefined;
if (repair) {
repairResult = await this.repairBrokenReferences('fetch', 100, 0);
}
logger.info('DataIntegrityService: Integrity check completed', {
brokenReferences: validation.brokenReferences,
orphanedValues,
orphanedRelations,
repaired: repairResult?.repaired || 0,
deleted: repairResult?.deleted || 0,
});
return {
validation,
repair: repairResult,
orphanedValues,
orphanedRelations,
};
}
}
export const dataIntegrityService = new DataIntegrityService();
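The repair loop in `repairBrokenReferences` pages through broken references with a limit/offset cursor, stopping on a short page or when `maxRepairs` is reached. That pagination skeleton, extracted as a standalone sketch (`processInBatches`, `getBatch`, and `handleOne` are illustrative names):

```typescript
// Paginated batch processing with an optional item cap (0 = unlimited),
// mirroring the while/offset loop in repairBrokenReferences above.
async function processInBatches<T>(
  getBatch: (limit: number, offset: number) => Promise<T[]>,
  handleOne: (item: T) => Promise<void>,
  batchSize = 100,
  maxItems = 0,
): Promise<number> {
  let offset = 0;
  let processed = 0;
  while (true) {
    const batch = await getBatch(batchSize, offset);
    if (batch.length === 0) break;
    for (const item of batch) {
      if (maxItems > 0 && processed >= maxItems) return processed;
      await handleOne(item);
      processed++;
    }
    if (batch.length < batchSize) break; // short page: no more data
    offset += batchSize;
  }
  return processed;
}
```

Note one subtlety the real service shares: if `handleOne` deletes rows from the table being paged, a fixed offset can skip items; the service tolerates this because remaining broken references are picked up on the next run.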

File diff suppressed because it is too large

View File

@@ -16,8 +16,9 @@ const __dirname = dirname(__filename);
/**
* Create a database adapter based on environment variables
* @param allowClose - If false, the adapter won't be closed when close() is called (for singletons)
*/
export function createDatabaseAdapter(dbType?: string, dbPath?: string): DatabaseAdapter {
export function createDatabaseAdapter(dbType?: string, dbPath?: string, allowClose: boolean = true): DatabaseAdapter {
const type = dbType || process.env.DATABASE_TYPE || 'sqlite';
const databaseUrl = process.env.DATABASE_URL;
@@ -26,18 +27,21 @@ export function createDatabaseAdapter(dbType?: string, dbPath?: string): Databas
// Try to construct from individual components
const host = process.env.DATABASE_HOST || 'localhost';
const port = process.env.DATABASE_PORT || '5432';
const name = process.env.DATABASE_NAME || 'cmdb';
const name = process.env.DATABASE_NAME || 'cmdb_insight';
const user = process.env.DATABASE_USER || 'cmdb';
const password = process.env.DATABASE_PASSWORD || '';
const ssl = process.env.DATABASE_SSL === 'true' ? '?sslmode=require' : '';
// Azure PostgreSQL requires SSL - always use sslmode=require for Azure
const isAzure = host.includes('.postgres.database.azure.com');
const ssl = (process.env.DATABASE_SSL === 'true' || isAzure) ? '?sslmode=require' : '';
const constructedUrl = `postgresql://${user}:${password}@${host}:${port}/${name}${ssl}`;
logger.info('Creating PostgreSQL adapter with constructed connection string');
return new PostgresAdapter(constructedUrl);
logger.info(`Database: ${name}, SSL: ${ssl ? 'required' : 'not required'}`);
return new PostgresAdapter(constructedUrl, allowClose);
}
logger.info('Creating PostgreSQL adapter');
return new PostgresAdapter(databaseUrl);
return new PostgresAdapter(databaseUrl, allowClose);
}
// Default to SQLite
@@ -47,33 +51,12 @@ export function createDatabaseAdapter(dbType?: string, dbPath?: string): Databas
}
/**
* Create a database adapter for the classifications database
* Create a database adapter for classifications and session state
*
* Uses the same database as the main cache. All data (CMDB cache,
* classification history, and session state) is stored in a single database.
*/
export function createClassificationsDatabaseAdapter(): DatabaseAdapter {
const type = process.env.DATABASE_TYPE || 'sqlite';
const databaseUrl = process.env.CLASSIFICATIONS_DATABASE_URL || process.env.DATABASE_URL;
if (type === 'postgres' || type === 'postgresql') {
if (!databaseUrl) {
// Try to construct from individual components
const host = process.env.DATABASE_HOST || 'localhost';
const port = process.env.DATABASE_PORT || '5432';
const name = process.env.CLASSIFICATIONS_DATABASE_NAME || process.env.DATABASE_NAME || 'cmdb';
const user = process.env.DATABASE_USER || 'cmdb';
const password = process.env.DATABASE_PASSWORD || '';
const ssl = process.env.DATABASE_SSL === 'true' ? '?sslmode=require' : '';
const constructedUrl = `postgresql://${user}:${password}@${host}:${port}/${name}${ssl}`;
logger.info('Creating PostgreSQL adapter for classifications with constructed connection string');
return new PostgresAdapter(constructedUrl);
}
logger.info('Creating PostgreSQL adapter for classifications');
return new PostgresAdapter(databaseUrl);
}
// Default to SQLite
const defaultPath = join(__dirname, '../../data/classifications.db');
logger.info(`Creating SQLite adapter for classifications with path: ${defaultPath}`);
return new SqliteAdapter(defaultPath);
// Always use the same database adapter as the main cache
return createDatabaseAdapter();
}

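The Azure SSL fix in `createDatabaseAdapter` above keys off the hostname: Azure Database for PostgreSQL always requires TLS, so `sslmode=require` is appended whenever the host is under `.postgres.database.azure.com`, regardless of `DATABASE_SSL`. A sketch of that connection-string construction (`buildPostgresUrl` is an illustrative helper; the diff inlines this logic):

```typescript
// Build a PostgreSQL connection URL from environment-style settings,
// forcing sslmode=require for Azure-hosted databases.
function buildPostgresUrl(env: Record<string, string | undefined>): string {
  const host = env.DATABASE_HOST || 'localhost';
  const port = env.DATABASE_PORT || '5432';
  const name = env.DATABASE_NAME || 'cmdb_insight';
  const user = env.DATABASE_USER || 'cmdb';
  const password = env.DATABASE_PASSWORD || '';
  const isAzure = host.includes('.postgres.database.azure.com');
  const ssl = env.DATABASE_SSL === 'true' || isAzure ? '?sslmode=require' : '';
  return `postgresql://${user}:${password}@${host}:${port}/${name}${ssl}`;
}
```

This keeps local development (plain `localhost`, no SSL) working unchanged while making Azure deployments connect correctly even when `DATABASE_SSL` is unset.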
View File

@@ -0,0 +1,124 @@
/**
* Fix UNIQUE constraints on object_types table
*
* Removes old UNIQUE constraint on type_name and adds new UNIQUE(schema_id, type_name)
* This allows the same type_name to exist in different schemas
*/
import { logger } from '../logger.js';
import { normalizedCacheStore } from '../normalizedCacheStore.js';
import type { DatabaseAdapter } from './interface.js';
export async function fixObjectTypesConstraints(): Promise<void> {
const db = (normalizedCacheStore as any).db as DatabaseAdapter;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
logger.info('Migration: Fixing UNIQUE constraints on object_types table...');
try {
if (db.isPostgres) {
// Check if old constraint exists
const oldConstraintExists = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname = 'object_types_type_name_key'
`);
if (oldConstraintExists && oldConstraintExists.count > 0) {
logger.info('Migration: Dropping old UNIQUE constraint on type_name...');
await db.execute(`ALTER TABLE object_types DROP CONSTRAINT IF EXISTS object_types_type_name_key`);
}
// Check if new constraint exists
const newConstraintExists = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname = 'object_types_schema_id_type_name_key'
`);
if (!newConstraintExists || newConstraintExists.count === 0) {
logger.info('Migration: Adding UNIQUE constraint on (schema_id, type_name)...');
try {
await db.execute(`
ALTER TABLE object_types
ADD CONSTRAINT object_types_schema_id_type_name_key UNIQUE (schema_id, type_name)
`);
} catch (error: any) {
// If constraint already exists or there are duplicates, log and continue
if (error.message && error.message.includes('already exists')) {
logger.debug('Migration: Constraint already exists, skipping');
} else if (error.message && error.message.includes('duplicate key')) {
logger.warn('Migration: Duplicate (schema_id, type_name) found - this may need manual cleanup');
// Don't throw - allow the application to continue
} else {
throw error;
}
}
} else {
logger.debug('Migration: New UNIQUE constraint already exists');
}
} else {
// SQLite: UNIQUE constraints are part of table definition
// We can't easily modify them, but the schema definition should handle it
logger.debug('Migration: SQLite UNIQUE constraints are handled in table definition');
}
// Step 2: Remove foreign key constraints that reference object_types(type_name)
logger.info('Migration: Removing foreign key constraints on object_types(type_name)...');
try {
if (db.isPostgres) {
// Check and drop foreign keys from attributes table
const attrFkExists = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname LIKE 'attributes_object_type_name_fkey%'
`);
if (attrFkExists && attrFkExists.count > 0) {
logger.info('Migration: Dropping foreign key from attributes table...');
await db.execute(`ALTER TABLE attributes DROP CONSTRAINT IF EXISTS attributes_object_type_name_fkey`);
}
// Check and drop foreign keys from objects table
const objFkExists = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname LIKE 'objects_object_type_name_fkey%'
`);
if (objFkExists && objFkExists.count > 0) {
logger.info('Migration: Dropping foreign key from objects table...');
await db.execute(`ALTER TABLE objects DROP CONSTRAINT IF EXISTS objects_object_type_name_fkey`);
}
// Check and drop foreign keys from schema_mappings table
const mappingFkExists = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname LIKE 'schema_mappings_object_type_name_fkey%'
`);
if (mappingFkExists && mappingFkExists.count > 0) {
logger.info('Migration: Dropping foreign key from schema_mappings table...');
await db.execute(`ALTER TABLE schema_mappings DROP CONSTRAINT IF EXISTS schema_mappings_object_type_name_fkey`);
}
} else {
// SQLite: Foreign keys are part of table definition
// We can't easily drop them, but the new schema definition should handle it
logger.debug('Migration: SQLite foreign keys are handled in table definition');
}
} catch (error) {
logger.warn('Migration: Could not remove foreign key constraints (may not exist)', error);
// Don't throw - allow the application to continue
}
logger.info('Migration: UNIQUE constraints and foreign keys fix completed');
} catch (error) {
logger.warn('Migration: Could not fix constraints (may already be correct)', error);
// Don't throw - allow the application to continue
}
}
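The checks against `pg_constraint` above make the constraint changes idempotent: each ALTER only runs when the catalog says it is needed. Reduced to a sketch with a stubbed adapter (the `Db` interface and stub below are illustrative, not the project's actual adapter):

```typescript
// Sketch of the idempotent-constraint guard: query pg_constraint for the
// constraint name, and only run the ALTER when it is missing.
interface Db {
  queryOne(sql: string, params?: unknown[]): Promise<{ count: number } | null>;
  execute(sql: string): Promise<void>;
}

// Returns true if the ALTER was executed, false if the constraint existed.
async function addConstraintIfMissing(db: Db, name: string, alterSql: string): Promise<boolean> {
  const row = await db.queryOne(
    'SELECT COUNT(*) as count FROM pg_constraint WHERE conname = ?',
    [name]
  );
  if (row && row.count > 0) return false; // already present: nothing to do
  await db.execute(alterSql);
  return true;
}

// Stub adapter that records executed statements and reports no constraints.
const executed: string[] = [];
const stub: Db = {
  queryOne: async () => ({ count: 0 }),
  execute: async (sql) => { executed.push(sql); },
};
```

Running the guard twice against a real database would execute the ALTER only on the first pass; the second pass would see the constraint in `pg_constraint` and return false.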

View File

@@ -40,4 +40,9 @@ export interface DatabaseAdapter {
* Get database size in bytes (if applicable)
*/
getSizeBytes?(): Promise<number>;
/**
* Indicates if this is a PostgreSQL adapter
*/
isPostgres?: boolean;
}
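The optional `isPostgres` flag lets callers branch on dialect without importing a concrete adapter class. A minimal sketch, with the interface reduced to what the example needs and illustrative stub adapters:

```typescript
// Sketch of dialect branching via the optional isPostgres flag.
interface Adapter {
  execute(sql: string, params?: unknown[]): Promise<void>;
  isPostgres?: boolean;
}

// Pick the dialect-appropriate auto-increment primary key column.
// An absent flag is treated as SQLite, matching the optional-property default.
function idColumn(db: Adapter): string {
  return db.isPostgres
    ? 'id SERIAL PRIMARY KEY'
    : 'id INTEGER PRIMARY KEY AUTOINCREMENT';
}

const pg: Adapter = { execute: async () => {}, isPostgres: true };
const sqlite: Adapter = { execute: async () => {} }; // flag omitted
```

Making the flag optional keeps existing SQLite adapters source-compatible: they never have to declare it.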

View File

@@ -0,0 +1,418 @@
/**
* Migration script to migrate from configured_object_types to normalized schema structure
*
* This script:
* 1. Creates schemas table if it doesn't exist
* 2. Migrates unique schemas from configured_object_types to schemas
* 3. Adds schema_id and enabled columns to object_types if they don't exist
* 4. Migrates object types from configured_object_types to object_types with schema_id FK
* 5. Drops configured_object_types table after successful migration
*/
import { logger } from '../logger.js';
import { normalizedCacheStore } from '../normalizedCacheStore.js';
import type { DatabaseAdapter } from './interface.js';
export async function migrateToNormalizedSchema(): Promise<void> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
logger.info('Migration: Starting migration to normalized schema structure...');
try {
await db.transaction(async (txDb: DatabaseAdapter) => {
// Step 1: Check if configured_object_types table exists
let configuredTableExists = false;
try {
if (txDb.isPostgres) {
const result = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM information_schema.tables
WHERE table_schema = 'public' AND table_name = 'configured_object_types'
`);
configuredTableExists = (result?.count || 0) > 0;
} else {
const result = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM sqlite_master
WHERE type='table' AND name='configured_object_types'
`);
configuredTableExists = (result?.count || 0) > 0;
}
} catch (error) {
logger.debug('Migration: configured_object_types table check failed (may not exist)', error);
}
if (!configuredTableExists) {
logger.info('Migration: configured_object_types table does not exist, skipping migration');
return;
}
// Step 2: Check if schemas table exists, create if not
let schemasTableExists = false;
try {
if (txDb.isPostgres) {
const result = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM information_schema.tables
WHERE table_schema = 'public' AND table_name = 'schemas'
`);
schemasTableExists = (result?.count || 0) > 0;
} else {
const result = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM sqlite_master
WHERE type='table' AND name='schemas'
`);
schemasTableExists = (result?.count || 0) > 0;
}
} catch (error) {
logger.debug('Migration: schemas table check failed', error);
}
if (!schemasTableExists) {
logger.info('Migration: Creating schemas table...');
if (txDb.isPostgres) {
await txDb.execute(`
CREATE TABLE IF NOT EXISTS schemas (
id SERIAL PRIMARY KEY,
jira_schema_id TEXT NOT NULL UNIQUE,
name TEXT NOT NULL,
description TEXT,
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
)
`);
await txDb.execute(`
CREATE INDEX IF NOT EXISTS idx_schemas_jira_schema_id ON schemas(jira_schema_id)
`);
await txDb.execute(`
CREATE INDEX IF NOT EXISTS idx_schemas_name ON schemas(name)
`);
} else {
await txDb.execute(`
CREATE TABLE IF NOT EXISTS schemas (
id INTEGER PRIMARY KEY AUTOINCREMENT,
jira_schema_id TEXT NOT NULL UNIQUE,
name TEXT NOT NULL,
description TEXT,
discovered_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
)
`);
await txDb.execute(`
CREATE INDEX IF NOT EXISTS idx_schemas_jira_schema_id ON schemas(jira_schema_id)
`);
await txDb.execute(`
CREATE INDEX IF NOT EXISTS idx_schemas_name ON schemas(name)
`);
}
}
// Step 3: Migrate unique schemas from configured_object_types to schemas
logger.info('Migration: Migrating schemas from configured_object_types...');
const schemaRows = await txDb.query<{
schema_id: string;
schema_name: string;
min_discovered_at: string;
max_updated_at: string;
}>(`
SELECT
schema_id,
schema_name,
MIN(discovered_at) as min_discovered_at,
MAX(updated_at) as max_updated_at
FROM configured_object_types
GROUP BY schema_id, schema_name
`);
for (const schemaRow of schemaRows) {
// Same upsert for both dialects: Postgres and SQLite share this
// ON CONFLICT ... DO UPDATE syntax, so no branch is needed here.
await txDb.execute(`
INSERT INTO schemas (jira_schema_id, name, description, discovered_at, updated_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(jira_schema_id) DO UPDATE SET
name = excluded.name,
updated_at = excluded.updated_at
`, [
schemaRow.schema_id,
schemaRow.schema_name,
null,
schemaRow.min_discovered_at,
schemaRow.max_updated_at,
]);
}
logger.info(`Migration: Migrated ${schemaRows.length} schemas`);
// Step 4: Check if object_types has schema_id and enabled columns
let hasSchemaId = false;
let hasEnabled = false;
try {
if (txDb.isPostgres) {
const columns = await txDb.query<{ column_name: string }>(`
SELECT column_name
FROM information_schema.columns
WHERE table_schema = 'public' AND table_name = 'object_types'
`);
hasSchemaId = columns.some((c: { column_name: string }) => c.column_name === 'schema_id');
hasEnabled = columns.some((c: { column_name: string }) => c.column_name === 'enabled');
} else {
const tableInfo = await txDb.query<{ name: string }>(`
PRAGMA table_info(object_types)
`);
hasSchemaId = tableInfo.some((c: { name: string }) => c.name === 'schema_id');
hasEnabled = tableInfo.some((c: { name: string }) => c.name === 'enabled');
}
} catch (error) {
logger.warn('Migration: Could not check object_types columns', error);
}
// Step 5: Add schema_id and enabled columns if they don't exist
if (!hasSchemaId) {
logger.info('Migration: Adding schema_id column to object_types...');
if (txDb.isPostgres) {
await txDb.execute(`
ALTER TABLE object_types
ADD COLUMN schema_id INTEGER REFERENCES schemas(id) ON DELETE CASCADE
`);
} else {
// SQLite restricts ALTER TABLE ADD COLUMN with a REFERENCES clause
// (the added column must default to NULL), so add the column without the FK
await txDb.execute(`
ALTER TABLE object_types
ADD COLUMN schema_id INTEGER
`);
}
}
if (!hasEnabled) {
logger.info('Migration: Adding enabled column to object_types...');
if (txDb.isPostgres) {
await txDb.execute(`
ALTER TABLE object_types
ADD COLUMN enabled BOOLEAN NOT NULL DEFAULT FALSE
`);
} else {
await txDb.execute(`
ALTER TABLE object_types
ADD COLUMN enabled INTEGER NOT NULL DEFAULT 0
`);
}
}
// Step 6: Migrate object types from configured_object_types to object_types
logger.info('Migration: Migrating object types from configured_object_types...');
const configuredTypes = await txDb.query<{
schema_id: string;
object_type_id: number;
object_type_name: string;
display_name: string;
description: string | null;
object_count: number;
enabled: boolean | number;
discovered_at: string;
updated_at: string;
}>(`
SELECT
schema_id,
object_type_id,
object_type_name,
display_name,
description,
object_count,
enabled,
discovered_at,
updated_at
FROM configured_object_types
`);
let migratedCount = 0;
for (const configuredType of configuredTypes) {
// Get schema_id (FK) from schemas table
const schemaRow = await txDb.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[configuredType.schema_id]
);
if (!schemaRow) {
logger.warn(`Migration: Schema ${configuredType.schema_id} not found, skipping object type ${configuredType.object_type_name}`);
continue;
}
// Check if object type already exists in object_types
const existingType = await txDb.queryOne<{ jira_type_id: number }>(
`SELECT jira_type_id FROM object_types WHERE jira_type_id = ?`,
[configuredType.object_type_id]
);
if (existingType) {
// Update existing object type with schema_id and enabled
if (txDb.isPostgres) {
await txDb.execute(`
UPDATE object_types
SET
schema_id = ?,
enabled = ?,
display_name = COALESCE(display_name, ?),
description = COALESCE(description, ?),
object_count = COALESCE(object_count, ?),
updated_at = ?
WHERE jira_type_id = ?
`, [
schemaRow.id,
typeof configuredType.enabled === 'boolean' ? configuredType.enabled : configuredType.enabled === 1,
configuredType.display_name,
configuredType.description,
configuredType.object_count,
configuredType.updated_at,
configuredType.object_type_id,
]);
} else {
await txDb.execute(`
UPDATE object_types
SET
schema_id = ?,
enabled = ?,
display_name = COALESCE(display_name, ?),
description = COALESCE(description, ?),
object_count = COALESCE(object_count, ?),
updated_at = ?
WHERE jira_type_id = ?
`, [
schemaRow.id,
typeof configuredType.enabled === 'boolean' ? (configuredType.enabled ? 1 : 0) : configuredType.enabled,
configuredType.display_name,
configuredType.description,
configuredType.object_count,
configuredType.updated_at,
configuredType.object_type_id,
]);
}
} else {
// Insert new object type
// Note: We need sync_priority - use default 0
if (txDb.isPostgres) {
await txDb.execute(`
INSERT INTO object_types (
schema_id, jira_type_id, type_name, display_name, description,
sync_priority, object_count, enabled, discovered_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`, [
schemaRow.id,
configuredType.object_type_id,
configuredType.object_type_name,
configuredType.display_name,
configuredType.description,
0, // sync_priority
configuredType.object_count,
typeof configuredType.enabled === 'boolean' ? configuredType.enabled : configuredType.enabled === 1,
configuredType.discovered_at,
configuredType.updated_at,
]);
} else {
await txDb.execute(`
INSERT INTO object_types (
schema_id, jira_type_id, type_name, display_name, description,
sync_priority, object_count, enabled, discovered_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`, [
schemaRow.id,
configuredType.object_type_id,
configuredType.object_type_name,
configuredType.display_name,
configuredType.description,
0, // sync_priority
configuredType.object_count,
typeof configuredType.enabled === 'boolean' ? (configuredType.enabled ? 1 : 0) : configuredType.enabled,
configuredType.discovered_at,
configuredType.updated_at,
]);
}
}
migratedCount++;
}
logger.info(`Migration: Migrated ${migratedCount} object types`);
// Step 7: Fix UNIQUE constraints on object_types
logger.info('Migration: Fixing UNIQUE constraints on object_types...');
try {
// Remove old UNIQUE constraint on type_name if it exists
if (txDb.isPostgres) {
// Check if constraint exists
const constraintExists = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname = 'object_types_type_name_key'
`);
if (constraintExists && constraintExists.count > 0) {
logger.info('Migration: Dropping old UNIQUE constraint on type_name...');
await txDb.execute(`ALTER TABLE object_types DROP CONSTRAINT IF EXISTS object_types_type_name_key`);
}
// Add new UNIQUE constraint on (schema_id, type_name)
const newConstraintExists = await txDb.queryOne<{ count: number }>(`
SELECT COUNT(*) as count
FROM pg_constraint
WHERE conname = 'object_types_schema_id_type_name_key'
`);
if (!newConstraintExists || newConstraintExists.count === 0) {
logger.info('Migration: Adding UNIQUE constraint on (schema_id, type_name)...');
await txDb.execute(`
ALTER TABLE object_types
ADD CONSTRAINT object_types_schema_id_type_name_key UNIQUE (schema_id, type_name)
`);
}
} else {
// SQLite: UNIQUE constraints are part of the table definition and there is
// no DROP CONSTRAINT, so the recreated schema definition must handle this
logger.info('Migration: SQLite UNIQUE constraints are handled in table definition');
}
} catch (error) {
logger.warn('Migration: Could not fix UNIQUE constraints (may already be correct)', error);
}
// Step 8: Add indexes if they don't exist
logger.info('Migration: Adding indexes...');
try {
await txDb.execute(`CREATE INDEX IF NOT EXISTS idx_object_types_schema_id ON object_types(schema_id)`);
await txDb.execute(`CREATE INDEX IF NOT EXISTS idx_object_types_enabled ON object_types(enabled)`);
await txDb.execute(`CREATE INDEX IF NOT EXISTS idx_object_types_schema_enabled ON object_types(schema_id, enabled)`);
} catch (error) {
logger.warn('Migration: Some indexes may already exist', error);
}
// Step 9: Drop configured_object_types table
logger.info('Migration: Dropping configured_object_types table...');
await txDb.execute(`DROP TABLE IF EXISTS configured_object_types`);
logger.info('Migration: Dropped configured_object_types table');
});
logger.info('Migration: Migration to normalized schema structure completed successfully');
} catch (error) {
logger.error('Migration: Failed to migrate to normalized schema structure', error);
throw error;
}
}
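The script repeatedly converts the `enabled` flag between Postgres booleans and SQLite 0/1 integers with inline ternaries. Factored out, the two directions look like this (a sketch; the code above keeps the ternaries inline):

```typescript
// Read direction: normalize a stored value from either dialect into a boolean.
function readBool(value: boolean | number): boolean {
  return typeof value === 'boolean' ? value : value === 1;
}

// Write direction: produce the dialect's storage representation,
// a native boolean for Postgres or a 0/1 integer for SQLite.
function writeBool(value: boolean, isPostgres: boolean): boolean | number {
  return isPostgres ? value : (value ? 1 : 0);
}
```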

View File

@@ -0,0 +1,571 @@
/**
* Database Migrations
*
* Handles database schema creation and migrations for authentication and authorization system.
*/
import { logger } from '../logger.js';
import type { DatabaseAdapter } from './interface.js';
import { getDatabaseAdapter } from './singleton.js';
// @ts-ignore - bcrypt doesn't have proper ESM types
import bcrypt from 'bcrypt';
const SALT_ROUNDS = 10;
export interface Migration {
name: string;
up: (db: DatabaseAdapter) => Promise<void>;
down?: (db: DatabaseAdapter) => Promise<void>;
}
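The `Migration` interface is not exercised further in this file, but a migration expressed against it might look like the following hypothetical example (the index name is illustrative; the interface and a minimal adapter are restated so the sketch is self-contained):

```typescript
// Restated minimal shapes so the example stands alone.
interface MigDb { execute(sql: string, params?: unknown[]): Promise<void>; }
interface Mig {
  name: string;
  up: (db: MigDb) => Promise<void>;
  down?: (db: MigDb) => Promise<void>;
}

// Hypothetical migration: add (and on rollback, drop) an index on last_login.
const addLastLoginIndex: Mig = {
  name: 'add_last_login_index',
  up: async (db) => {
    await db.execute('CREATE INDEX IF NOT EXISTS idx_users_last_login ON users(last_login)');
  },
  down: async (db) => {
    await db.execute('DROP INDEX IF EXISTS idx_users_last_login');
  },
};

// Stub adapter that records statements instead of touching a database.
const ran: string[] = [];
const fakeDb: MigDb = { execute: async (sql) => { ran.push(sql); } };
```

A runner would apply `up` in order and, when a `down` is present, use it to roll back.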
const isPostgres = (): boolean => {
return process.env.DATABASE_TYPE === 'postgres' || process.env.DATABASE_TYPE === 'postgresql';
};
const getTimestamp = (): string => {
return new Date().toISOString();
};
/**
* Create users table
*/
async function createUsersTable(db: DatabaseAdapter): Promise<void> {
const isPg = isPostgres();
const schema = isPg ? `
CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY,
email TEXT UNIQUE NOT NULL,
username TEXT UNIQUE NOT NULL,
password_hash TEXT NOT NULL,
display_name TEXT,
is_active BOOLEAN DEFAULT true,
email_verified BOOLEAN DEFAULT false,
email_verification_token TEXT,
password_reset_token TEXT,
password_reset_expires TEXT,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
last_login TEXT
);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);
CREATE INDEX IF NOT EXISTS idx_users_email_verification_token ON users(email_verification_token);
CREATE INDEX IF NOT EXISTS idx_users_password_reset_token ON users(password_reset_token);
` : `
CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
email TEXT UNIQUE NOT NULL,
username TEXT UNIQUE NOT NULL,
password_hash TEXT NOT NULL,
display_name TEXT,
is_active INTEGER DEFAULT 1,
email_verified INTEGER DEFAULT 0,
email_verification_token TEXT,
password_reset_token TEXT,
password_reset_expires TEXT,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
last_login TEXT
);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);
CREATE INDEX IF NOT EXISTS idx_users_email_verification_token ON users(email_verification_token);
CREATE INDEX IF NOT EXISTS idx_users_password_reset_token ON users(password_reset_token);
`;
await db.exec(schema);
}
/**
* Create roles table
*/
async function createRolesTable(db: DatabaseAdapter): Promise<void> {
const isPg = isPostgres();
const schema = isPg ? `
CREATE TABLE IF NOT EXISTS roles (
id SERIAL PRIMARY KEY,
name TEXT UNIQUE NOT NULL,
description TEXT,
is_system_role BOOLEAN DEFAULT false,
created_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_roles_name ON roles(name);
` : `
CREATE TABLE IF NOT EXISTS roles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT UNIQUE NOT NULL,
description TEXT,
is_system_role INTEGER DEFAULT 0,
created_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_roles_name ON roles(name);
`;
await db.exec(schema);
}
/**
* Create permissions table
*/
async function createPermissionsTable(db: DatabaseAdapter): Promise<void> {
const isPg = isPostgres();
const schema = isPg ? `
CREATE TABLE IF NOT EXISTS permissions (
id SERIAL PRIMARY KEY,
name TEXT UNIQUE NOT NULL,
description TEXT,
resource TEXT
);
CREATE INDEX IF NOT EXISTS idx_permissions_name ON permissions(name);
CREATE INDEX IF NOT EXISTS idx_permissions_resource ON permissions(resource);
` : `
CREATE TABLE IF NOT EXISTS permissions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT UNIQUE NOT NULL,
description TEXT,
resource TEXT
);
CREATE INDEX IF NOT EXISTS idx_permissions_name ON permissions(name);
CREATE INDEX IF NOT EXISTS idx_permissions_resource ON permissions(resource);
`;
await db.exec(schema);
}
/**
* Create role_permissions junction table
*/
async function createRolePermissionsTable(db: DatabaseAdapter): Promise<void> {
// Identical DDL for both dialects: only INTEGER foreign keys, with no
// SERIAL/AUTOINCREMENT or boolean differences to branch on.
const schema = `
CREATE TABLE IF NOT EXISTS role_permissions (
role_id INTEGER NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
permission_id INTEGER NOT NULL REFERENCES permissions(id) ON DELETE CASCADE,
PRIMARY KEY (role_id, permission_id)
);
CREATE INDEX IF NOT EXISTS idx_role_permissions_role_id ON role_permissions(role_id);
CREATE INDEX IF NOT EXISTS idx_role_permissions_permission_id ON role_permissions(permission_id);
`;
await db.exec(schema);
}
/**
* Create user_roles junction table
*/
async function createUserRolesTable(db: DatabaseAdapter): Promise<void> {
// Identical DDL for both dialects: only INTEGER foreign keys and TEXT
// timestamps, so no dialect branch is needed.
const schema = `
CREATE TABLE IF NOT EXISTS user_roles (
user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
role_id INTEGER NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
assigned_at TEXT NOT NULL,
PRIMARY KEY (user_id, role_id)
);
CREATE INDEX IF NOT EXISTS idx_user_roles_user_id ON user_roles(user_id);
CREATE INDEX IF NOT EXISTS idx_user_roles_role_id ON user_roles(role_id);
`;
await db.exec(schema);
}
/**
* Create user_settings table
*/
async function createUserSettingsTable(db: DatabaseAdapter): Promise<void> {
const isPg = isPostgres();
const schema = isPg ? `
CREATE TABLE IF NOT EXISTS user_settings (
user_id INTEGER PRIMARY KEY REFERENCES users(id) ON DELETE CASCADE,
jira_pat TEXT,
jira_pat_encrypted BOOLEAN DEFAULT true,
ai_enabled BOOLEAN DEFAULT false,
ai_provider TEXT,
ai_api_key TEXT,
web_search_enabled BOOLEAN DEFAULT false,
tavily_api_key TEXT,
updated_at TEXT NOT NULL
);
` : `
CREATE TABLE IF NOT EXISTS user_settings (
user_id INTEGER PRIMARY KEY REFERENCES users(id) ON DELETE CASCADE,
jira_pat TEXT,
jira_pat_encrypted INTEGER DEFAULT 1,
ai_enabled INTEGER DEFAULT 0,
ai_provider TEXT,
ai_api_key TEXT,
web_search_enabled INTEGER DEFAULT 0,
tavily_api_key TEXT,
updated_at TEXT NOT NULL
);
`;
await db.exec(schema);
}
/**
* Create sessions table
*/
async function createSessionsTable(db: DatabaseAdapter): Promise<void> {
// Identical DDL for both dialects: sessions uses a TEXT primary key and
// TEXT timestamps, so no SERIAL/AUTOINCREMENT difference applies.
const schema = `
CREATE TABLE IF NOT EXISTS sessions (
id TEXT PRIMARY KEY,
user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
auth_method TEXT NOT NULL,
access_token TEXT,
refresh_token TEXT,
expires_at TEXT NOT NULL,
created_at TEXT NOT NULL,
ip_address TEXT,
user_agent TEXT
);
CREATE INDEX IF NOT EXISTS idx_sessions_user_id ON sessions(user_id);
CREATE INDEX IF NOT EXISTS idx_sessions_expires_at ON sessions(expires_at);
CREATE INDEX IF NOT EXISTS idx_sessions_auth_method ON sessions(auth_method);
`;
await db.exec(schema);
}
/**
* Create email_tokens table
*/
async function createEmailTokensTable(db: DatabaseAdapter): Promise<void> {
const isPg = isPostgres();
const schema = isPg ? `
CREATE TABLE IF NOT EXISTS email_tokens (
id SERIAL PRIMARY KEY,
user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
token TEXT UNIQUE NOT NULL,
type TEXT NOT NULL,
expires_at TEXT NOT NULL,
used BOOLEAN DEFAULT false,
created_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_email_tokens_token ON email_tokens(token);
CREATE INDEX IF NOT EXISTS idx_email_tokens_user_id ON email_tokens(user_id);
CREATE INDEX IF NOT EXISTS idx_email_tokens_type ON email_tokens(type);
CREATE INDEX IF NOT EXISTS idx_email_tokens_expires_at ON email_tokens(expires_at);
` : `
CREATE TABLE IF NOT EXISTS email_tokens (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
token TEXT UNIQUE NOT NULL,
type TEXT NOT NULL,
expires_at TEXT NOT NULL,
used INTEGER DEFAULT 0,
created_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_email_tokens_token ON email_tokens(token);
CREATE INDEX IF NOT EXISTS idx_email_tokens_user_id ON email_tokens(user_id);
CREATE INDEX IF NOT EXISTS idx_email_tokens_type ON email_tokens(type);
CREATE INDEX IF NOT EXISTS idx_email_tokens_expires_at ON email_tokens(expires_at);
`;
await db.exec(schema);
}
/**
* Seed initial data
*/
async function seedInitialData(db: DatabaseAdapter): Promise<void> {
const isPg = isPostgres();
const now = getTimestamp();
// Check if roles already exist
const existingRoles = await db.query('SELECT COUNT(*) as count FROM roles');
const roleCount = (existingRoles[0] as any).count;
// If roles exist, we still need to check if admin user exists
// (roles might exist but admin user might not)
const rolesExist = parseInt(roleCount) > 0;
if (rolesExist) {
logger.info('Roles already exist, checking if admin user needs to be created...');
}
// Get existing role IDs if roles already exist
const roleIds: Record<string, number> = {};
if (!rolesExist) {
// Insert default permissions
const permissions = [
{ name: 'search', description: 'Access search features', resource: 'search' },
{ name: 'view_reports', description: 'View reports and dashboards', resource: 'reports' },
{ name: 'edit_applications', description: 'Edit application components', resource: 'applications' },
{ name: 'manage_users', description: 'Manage users and their roles', resource: 'users' },
{ name: 'manage_roles', description: 'Manage roles and permissions', resource: 'roles' },
{ name: 'manage_settings', description: 'Manage application settings', resource: 'settings' },
{ name: 'admin', description: 'Full administrative access (debug, sync, all operations)', resource: 'admin' },
];
for (const perm of permissions) {
await db.execute(
'INSERT INTO permissions (name, description, resource) VALUES (?, ?, ?)',
[perm.name, perm.description, perm.resource]
);
}
// Insert default roles
const roles = [
{ name: 'administrator', description: 'Full system access', isSystem: true },
{ name: 'user', description: 'Basic user access', isSystem: true },
];
for (const role of roles) {
const isSystem = isPg ? role.isSystem : (role.isSystem ? 1 : 0);
await db.execute(
'INSERT INTO roles (name, description, is_system_role, created_at) VALUES (?, ?, ?, ?)',
[role.name, role.description, isSystem, now]
);
// Get the inserted role ID
const insertedRole = await db.queryOne<{ id: number }>(
'SELECT id FROM roles WHERE name = ?',
[role.name]
);
if (insertedRole) {
roleIds[role.name] = insertedRole.id;
}
}
// Assign all permissions to administrator role
const allPermissions = await db.query<{ id: number }>('SELECT id FROM permissions');
for (const perm of allPermissions) {
await db.execute(
'INSERT INTO role_permissions (role_id, permission_id) VALUES (?, ?)',
[roleIds['administrator'], perm.id]
);
}
// Assign basic permissions to user role (search and view_reports)
const searchPerm = await db.queryOne<{ id: number }>(
'SELECT id FROM permissions WHERE name = ?',
['search']
);
const viewReportsPerm = await db.queryOne<{ id: number }>(
'SELECT id FROM permissions WHERE name = ?',
['view_reports']
);
if (searchPerm) {
await db.execute(
'INSERT INTO role_permissions (role_id, permission_id) VALUES (?, ?)',
[roleIds['user'], searchPerm.id]
);
}
if (viewReportsPerm) {
await db.execute(
'INSERT INTO role_permissions (role_id, permission_id) VALUES (?, ?)',
[roleIds['user'], viewReportsPerm.id]
);
}
} else {
// Roles exist - get their IDs
const adminRole = await db.queryOne<{ id: number }>(
'SELECT id FROM roles WHERE name = ?',
['administrator']
);
if (adminRole) {
roleIds['administrator'] = adminRole.id;
}
// Ensure "admin" permission exists (may have been added after initial setup)
const adminPerm = await db.queryOne<{ id: number }>(
'SELECT id FROM permissions WHERE name = ?',
['admin']
);
if (!adminPerm) {
// Add missing "admin" permission
await db.execute(
'INSERT INTO permissions (name, description, resource) VALUES (?, ?, ?)',
['admin', 'Full administrative access (debug, sync, all operations)', 'admin']
);
logger.info('Added missing "admin" permission');
}
// Ensure administrator role has "admin" permission
// Get admin permission ID (either existing or newly created)
const adminPermId = adminPerm?.id || (await db.queryOne<{ id: number }>(
'SELECT id FROM permissions WHERE name = ?',
['admin']
))?.id;
if (adminRole && adminPermId) {
const hasAdminPerm = await db.queryOne<{ role_id: number }>(
'SELECT role_id FROM role_permissions WHERE role_id = ? AND permission_id = ?',
[adminRole.id, adminPermId]
);
if (!hasAdminPerm) {
await db.execute(
'INSERT INTO role_permissions (role_id, permission_id) VALUES (?, ?)',
[adminRole.id, adminPermId]
);
logger.info('Assigned "admin" permission to administrator role');
}
}
}
// Create initial admin user if ADMIN_EMAIL and ADMIN_PASSWORD are set
const adminEmail = process.env.ADMIN_EMAIL;
const adminPassword = process.env.ADMIN_PASSWORD;
const adminUsername = process.env.ADMIN_USERNAME || 'admin';
if (adminEmail && adminPassword) {
// Check if admin user already exists
const existingUser = await db.queryOne<{ id: number }>(
'SELECT id FROM users WHERE email = ? OR username = ?',
[adminEmail, adminUsername]
);
if (existingUser) {
// User exists - check if they have admin role
const hasAdminRole = await db.queryOne<{ role_id: number }>(
'SELECT role_id FROM user_roles WHERE user_id = ? AND role_id = ?',
[existingUser.id, roleIds['administrator']]
);
if (!hasAdminRole && roleIds['administrator']) {
// Add admin role if missing
await db.execute(
'INSERT INTO user_roles (user_id, role_id, assigned_at) VALUES (?, ?, ?)',
[existingUser.id, roleIds['administrator'], now]
);
logger.info(`Administrator role assigned to existing user: ${adminEmail}`);
} else {
logger.info(`Administrator user already exists: ${adminEmail}`);
}
} else {
// Create new admin user
const passwordHash = await bcrypt.hash(adminPassword, SALT_ROUNDS);
const displayName = process.env.ADMIN_DISPLAY_NAME || 'Administrator';
await db.execute(
'INSERT INTO users (email, username, password_hash, display_name, is_active, email_verified, created_at, updated_at) VALUES (?, ?, ?, ?, ?, ?, ?, ?)',
[adminEmail, adminUsername, passwordHash, displayName, isPg ? true : 1, isPg ? true : 1, now, now]
);
const adminUser = await db.queryOne<{ id: number }>(
'SELECT id FROM users WHERE email = ?',
[adminEmail]
);
if (adminUser && roleIds['administrator']) {
await db.execute(
'INSERT INTO user_roles (user_id, role_id, assigned_at) VALUES (?, ?, ?)',
[adminUser.id, roleIds['administrator'], now]
);
logger.info(`Initial administrator user created: ${adminEmail}`);
}
}
} else {
logger.warn('ADMIN_EMAIL and ADMIN_PASSWORD not set - skipping initial admin user creation');
}
logger.info('Initial data seeded successfully');
}
/**
* Main migration function
*/
export async function runMigrations(): Promise<void> {
// Use shared database adapter singleton
const db = getDatabaseAdapter();
try {
logger.info('Running database migrations...');
await createUsersTable(db);
await createRolesTable(db);
await createPermissionsTable(db);
await createRolePermissionsTable(db);
await createUserRolesTable(db);
await createUserSettingsTable(db);
await createSessionsTable(db);
await createEmailTokensTable(db);
await seedInitialData(db);
logger.info('Database migrations completed successfully');
} catch (error) {
logger.error('Migration failed:', error);
throw error;
} finally {
await db.close();
}
}
// Singleton cache for auth database adapter
let authDatabaseAdapter: DatabaseAdapter | null = null;
/**
* Get database adapter for auth operations
* Uses a singleton pattern to avoid creating multiple adapters.
* The adapter is configured to not close on close() calls, as it should
* remain open for the application lifetime.
*/
export function getAuthDatabase(): DatabaseAdapter {
if (!authDatabaseAdapter) {
// Reuse the shared adapter singleton; it is not closed after operations
authDatabaseAdapter = getDatabaseAdapter();
}
return authDatabaseAdapter;
}

View File

@@ -0,0 +1,43 @@
/**
* Database Schema Initialization
*
* Ensures normalized EAV schema is initialized before services use it.
*/
import { getDatabaseAdapter } from './singleton.js';
import { NORMALIZED_SCHEMA_POSTGRES, NORMALIZED_SCHEMA_SQLITE } from './normalized-schema.js';
import { logger } from '../logger.js';
let initialized = false;
let initializationPromise: Promise<void> | null = null;
/**
* Ensure database schema is initialized
*/
export async function ensureSchemaInitialized(): Promise<void> {
if (initialized) return;
if (initializationPromise) {
await initializationPromise;
return;
}
initializationPromise = (async () => {
try {
// Use shared database adapter singleton
const db = getDatabaseAdapter();
const isPostgres = db.isPostgres === true;
// Execute schema
const schema = isPostgres ? NORMALIZED_SCHEMA_POSTGRES : NORMALIZED_SCHEMA_SQLITE;
await db.exec(schema);
logger.info(`Database schema initialized (${isPostgres ? 'PostgreSQL' : 'SQLite'})`);
initialized = true;
} catch (error) {
logger.error('Failed to initialize database schema', error);
throw error;
}
})();
await initializationPromise;
}
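The `initialized`/`initializationPromise` pair above is a single-flight guard: concurrent callers await the same in-flight initialization instead of each running the schema DDL. Stripped of the database specifics, the pattern reduces to this generic sketch:

```typescript
// Generic single-flight initializer: `work` runs at most once, and every
// caller of the returned function awaits that single run.
function singleFlight(work: () => Promise<void>): () => Promise<void> {
  let done = false;
  let inFlight: Promise<void> | null = null;
  return async () => {
    if (done) return;            // fast path once initialization finished
    if (!inFlight) {
      inFlight = work().then(() => { done = true; });
    }
    await inFlight;              // concurrent callers share this promise
  };
}

// Three concurrent callers should trigger exactly one run of the work.
let runs = 0;
const ensureInit = singleFlight(async () => { runs += 1; });
```

Note the same caveat applies as in the module above: if the work rejects, the failed promise stays cached, so every later caller sees the original error.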

View File

@@ -0,0 +1,329 @@
/**
* Normalized Database Schema
*
* Generic, schema-agnostic normalized structure for CMDB data.
* Works with any Jira Assets configuration.
*/
export const NORMALIZED_SCHEMA_POSTGRES = `
-- =============================================================================
-- Schemas (Jira Assets schemas)
-- =============================================================================
CREATE TABLE IF NOT EXISTS schemas (
id SERIAL PRIMARY KEY,
jira_schema_id TEXT NOT NULL UNIQUE,
name TEXT NOT NULL,
object_schema_key TEXT,
status TEXT,
description TEXT,
search_enabled BOOLEAN NOT NULL DEFAULT TRUE,
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
-- =============================================================================
-- Object Types (discovered from Jira schema, with schema relation and enabled flag)
-- =============================================================================
CREATE TABLE IF NOT EXISTS object_types (
id SERIAL PRIMARY KEY,
schema_id INTEGER NOT NULL REFERENCES schemas(id) ON DELETE CASCADE,
jira_type_id INTEGER NOT NULL,
type_name TEXT NOT NULL,
display_name TEXT NOT NULL,
description TEXT,
sync_priority INTEGER DEFAULT 0,
object_count INTEGER DEFAULT 0,
enabled BOOLEAN NOT NULL DEFAULT FALSE,
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
UNIQUE(schema_id, jira_type_id),
UNIQUE(schema_id, type_name)
);
-- =============================================================================
-- Attributes (discovered from Jira schema)
-- =============================================================================
CREATE TABLE IF NOT EXISTS attributes (
id SERIAL PRIMARY KEY,
jira_attr_id INTEGER NOT NULL,
object_type_name TEXT NOT NULL,
attr_name TEXT NOT NULL,
field_name TEXT NOT NULL,
attr_type TEXT NOT NULL,
is_multiple BOOLEAN NOT NULL DEFAULT FALSE,
is_editable BOOLEAN NOT NULL DEFAULT TRUE,
is_required BOOLEAN NOT NULL DEFAULT FALSE,
is_system BOOLEAN NOT NULL DEFAULT FALSE,
reference_type_name TEXT,
description TEXT,
position INTEGER DEFAULT 0,
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
UNIQUE(jira_attr_id, object_type_name)
);
-- =============================================================================
-- Objects (minimal metadata)
-- =============================================================================
CREATE TABLE IF NOT EXISTS objects (
id TEXT PRIMARY KEY,
object_key TEXT NOT NULL UNIQUE,
object_type_name TEXT NOT NULL,
label TEXT NOT NULL,
jira_updated_at TIMESTAMP,
jira_created_at TIMESTAMP,
cached_at TIMESTAMP NOT NULL DEFAULT NOW()
);
-- =============================================================================
-- Attribute Values (EAV pattern - generic for all types)
-- =============================================================================
CREATE TABLE IF NOT EXISTS attribute_values (
id SERIAL PRIMARY KEY,
object_id TEXT NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
attribute_id INTEGER NOT NULL REFERENCES attributes(id) ON DELETE CASCADE,
text_value TEXT,
number_value NUMERIC,
boolean_value BOOLEAN,
date_value DATE,
datetime_value TIMESTAMP,
reference_object_id TEXT,
reference_object_key TEXT,
reference_object_label TEXT,
array_index INTEGER DEFAULT 0,
UNIQUE(object_id, attribute_id, array_index)
);
-- =============================================================================
-- Relationships (enhanced existing table)
-- =============================================================================
CREATE TABLE IF NOT EXISTS object_relations (
id SERIAL PRIMARY KEY,
source_id TEXT NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
target_id TEXT NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
attribute_id INTEGER NOT NULL REFERENCES attributes(id) ON DELETE CASCADE,
source_type TEXT NOT NULL,
target_type TEXT NOT NULL,
UNIQUE(source_id, target_id, attribute_id)
);
-- =============================================================================
-- Schema Mappings (object type -> schema ID) - DEPRECATED
-- =============================================================================
CREATE TABLE IF NOT EXISTS schema_mappings (
object_type_name TEXT PRIMARY KEY,
schema_id TEXT NOT NULL,
enabled BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
-- =============================================================================
-- Sync Metadata (unchanged)
-- =============================================================================
CREATE TABLE IF NOT EXISTS sync_metadata (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at TEXT NOT NULL
);
-- =============================================================================
-- Indexes for Performance
-- =============================================================================
-- Schema indexes
CREATE INDEX IF NOT EXISTS idx_schemas_jira_schema_id ON schemas(jira_schema_id);
CREATE INDEX IF NOT EXISTS idx_schemas_name ON schemas(name);
CREATE INDEX IF NOT EXISTS idx_schemas_search_enabled ON schemas(search_enabled);
-- Object type indexes (for schema queries)
CREATE INDEX IF NOT EXISTS idx_object_types_type_name ON object_types(type_name);
CREATE INDEX IF NOT EXISTS idx_object_types_jira_id ON object_types(jira_type_id);
CREATE INDEX IF NOT EXISTS idx_object_types_schema_id ON object_types(schema_id);
CREATE INDEX IF NOT EXISTS idx_object_types_sync_priority ON object_types(sync_priority);
CREATE INDEX IF NOT EXISTS idx_object_types_enabled ON object_types(enabled);
CREATE INDEX IF NOT EXISTS idx_object_types_schema_enabled ON object_types(schema_id, enabled);
-- Object indexes
CREATE INDEX IF NOT EXISTS idx_objects_type ON objects(object_type_name);
CREATE INDEX IF NOT EXISTS idx_objects_key ON objects(object_key);
CREATE INDEX IF NOT EXISTS idx_objects_label ON objects(label);
CREATE INDEX IF NOT EXISTS idx_objects_cached_at ON objects(cached_at);
-- Attribute indexes
CREATE INDEX IF NOT EXISTS idx_attributes_type ON attributes(object_type_name);
CREATE INDEX IF NOT EXISTS idx_attributes_field ON attributes(field_name);
CREATE INDEX IF NOT EXISTS idx_attributes_jira_id ON attributes(jira_attr_id);
CREATE INDEX IF NOT EXISTS idx_attributes_type_field ON attributes(object_type_name, field_name);
-- Attribute value indexes (critical for query performance)
CREATE INDEX IF NOT EXISTS idx_attr_values_object ON attribute_values(object_id);
CREATE INDEX IF NOT EXISTS idx_attr_values_attr ON attribute_values(attribute_id);
CREATE INDEX IF NOT EXISTS idx_attr_values_text ON attribute_values(text_value) WHERE text_value IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_attr_values_number ON attribute_values(number_value) WHERE number_value IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_attr_values_reference ON attribute_values(reference_object_id) WHERE reference_object_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_attr_values_composite_text ON attribute_values(attribute_id, text_value) WHERE text_value IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_attr_values_composite_ref ON attribute_values(attribute_id, reference_object_id) WHERE reference_object_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_attr_values_object_attr ON attribute_values(object_id, attribute_id);
-- Relation indexes
CREATE INDEX IF NOT EXISTS idx_relations_source ON object_relations(source_id);
CREATE INDEX IF NOT EXISTS idx_relations_target ON object_relations(target_id);
CREATE INDEX IF NOT EXISTS idx_relations_attr ON object_relations(attribute_id);
CREATE INDEX IF NOT EXISTS idx_relations_source_type ON object_relations(source_id, source_type);
CREATE INDEX IF NOT EXISTS idx_relations_target_type ON object_relations(target_id, target_type);
-- Schema mapping indexes
CREATE INDEX IF NOT EXISTS idx_schema_mappings_type ON schema_mappings(object_type_name);
CREATE INDEX IF NOT EXISTS idx_schema_mappings_schema ON schema_mappings(schema_id);
CREATE INDEX IF NOT EXISTS idx_schema_mappings_enabled ON schema_mappings(enabled);
`;
export const NORMALIZED_SCHEMA_SQLITE = `
-- =============================================================================
-- SQLite version (for development/testing)
-- =============================================================================
CREATE TABLE IF NOT EXISTS schemas (
id INTEGER PRIMARY KEY AUTOINCREMENT,
jira_schema_id TEXT NOT NULL UNIQUE,
name TEXT NOT NULL,
object_schema_key TEXT,
status TEXT,
description TEXT,
search_enabled INTEGER NOT NULL DEFAULT 1,
discovered_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS object_types (
id INTEGER PRIMARY KEY AUTOINCREMENT,
schema_id INTEGER NOT NULL,
jira_type_id INTEGER NOT NULL,
type_name TEXT NOT NULL,
display_name TEXT NOT NULL,
description TEXT,
sync_priority INTEGER DEFAULT 0,
object_count INTEGER DEFAULT 0,
enabled INTEGER NOT NULL DEFAULT 0,
discovered_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now')),
UNIQUE(schema_id, jira_type_id),
UNIQUE(schema_id, type_name),
FOREIGN KEY (schema_id) REFERENCES schemas(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS attributes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
jira_attr_id INTEGER NOT NULL,
object_type_name TEXT NOT NULL,
attr_name TEXT NOT NULL,
field_name TEXT NOT NULL,
attr_type TEXT NOT NULL,
is_multiple INTEGER NOT NULL DEFAULT 0,
is_editable INTEGER NOT NULL DEFAULT 1,
is_required INTEGER NOT NULL DEFAULT 0,
is_system INTEGER NOT NULL DEFAULT 0,
reference_type_name TEXT,
description TEXT,
position INTEGER DEFAULT 0,
discovered_at TEXT NOT NULL DEFAULT (datetime('now')),
UNIQUE(jira_attr_id, object_type_name)
);
CREATE TABLE IF NOT EXISTS objects (
id TEXT PRIMARY KEY,
object_key TEXT NOT NULL UNIQUE,
object_type_name TEXT NOT NULL,
label TEXT NOT NULL,
jira_updated_at TEXT,
jira_created_at TEXT,
cached_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS attribute_values (
id INTEGER PRIMARY KEY AUTOINCREMENT,
object_id TEXT NOT NULL,
attribute_id INTEGER NOT NULL,
text_value TEXT,
number_value REAL,
boolean_value INTEGER,
date_value TEXT,
datetime_value TEXT,
reference_object_id TEXT,
reference_object_key TEXT,
reference_object_label TEXT,
array_index INTEGER DEFAULT 0,
UNIQUE(object_id, attribute_id, array_index),
FOREIGN KEY (object_id) REFERENCES objects(id) ON DELETE CASCADE,
FOREIGN KEY (attribute_id) REFERENCES attributes(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS object_relations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
source_id TEXT NOT NULL,
target_id TEXT NOT NULL,
attribute_id INTEGER NOT NULL,
source_type TEXT NOT NULL,
target_type TEXT NOT NULL,
UNIQUE(source_id, target_id, attribute_id),
FOREIGN KEY (source_id) REFERENCES objects(id) ON DELETE CASCADE,
FOREIGN KEY (target_id) REFERENCES objects(id) ON DELETE CASCADE,
FOREIGN KEY (attribute_id) REFERENCES attributes(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS schema_mappings (
object_type_name TEXT PRIMARY KEY,
schema_id TEXT NOT NULL,
enabled INTEGER NOT NULL DEFAULT 1,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS sync_metadata (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at TEXT NOT NULL
);
-- Indexes
CREATE INDEX IF NOT EXISTS idx_objects_type ON objects(object_type_name);
CREATE INDEX IF NOT EXISTS idx_objects_key ON objects(object_key);
CREATE INDEX IF NOT EXISTS idx_objects_label ON objects(label);
CREATE INDEX IF NOT EXISTS idx_attributes_type ON attributes(object_type_name);
CREATE INDEX IF NOT EXISTS idx_attributes_field ON attributes(field_name);
CREATE INDEX IF NOT EXISTS idx_attributes_jira_id ON attributes(jira_attr_id);
CREATE INDEX IF NOT EXISTS idx_attributes_type_field ON attributes(object_type_name, field_name);
CREATE INDEX IF NOT EXISTS idx_attr_values_object ON attribute_values(object_id);
CREATE INDEX IF NOT EXISTS idx_attr_values_attr ON attribute_values(attribute_id);
CREATE INDEX IF NOT EXISTS idx_attr_values_text ON attribute_values(text_value);
CREATE INDEX IF NOT EXISTS idx_attr_values_number ON attribute_values(number_value);
CREATE INDEX IF NOT EXISTS idx_attr_values_reference ON attribute_values(reference_object_id);
CREATE INDEX IF NOT EXISTS idx_attr_values_object_attr ON attribute_values(object_id, attribute_id);
CREATE INDEX IF NOT EXISTS idx_relations_source ON object_relations(source_id);
CREATE INDEX IF NOT EXISTS idx_relations_target ON object_relations(target_id);
CREATE INDEX IF NOT EXISTS idx_relations_attr ON object_relations(attribute_id);
-- Schema indexes
CREATE INDEX IF NOT EXISTS idx_schemas_jira_schema_id ON schemas(jira_schema_id);
CREATE INDEX IF NOT EXISTS idx_schemas_name ON schemas(name);
CREATE INDEX IF NOT EXISTS idx_schemas_search_enabled ON schemas(search_enabled);
-- Object type indexes
CREATE INDEX IF NOT EXISTS idx_object_types_type_name ON object_types(type_name);
CREATE INDEX IF NOT EXISTS idx_object_types_jira_id ON object_types(jira_type_id);
CREATE INDEX IF NOT EXISTS idx_object_types_schema_id ON object_types(schema_id);
CREATE INDEX IF NOT EXISTS idx_object_types_sync_priority ON object_types(sync_priority);
CREATE INDEX IF NOT EXISTS idx_object_types_enabled ON object_types(enabled);
CREATE INDEX IF NOT EXISTS idx_object_types_schema_enabled ON object_types(schema_id, enabled);
-- Schema mapping indexes
CREATE INDEX IF NOT EXISTS idx_schema_mappings_type ON schema_mappings(object_type_name);
CREATE INDEX IF NOT EXISTS idx_schema_mappings_schema ON schema_mappings(schema_id);
CREATE INDEX IF NOT EXISTS idx_schema_mappings_enabled ON schema_mappings(enabled);
`;
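
Reading an object back out of the EAV `attribute_values` table means pivoting rows into a flat record, turning repeated `field_name`s (rows with `array_index > 0`) into arrays. A minimal in-memory sketch, with the row shape assumed from the schema above:

```typescript
// Subset of an attribute_values row joined to attributes.field_name.
interface AttributeValueRow {
  field_name: string;
  text_value: string | null;
  number_value: number | null;
  array_index: number;
}

// Pivot EAV rows for one object into a field_name -> value record.
function pivotEavRows(rows: AttributeValueRow[]): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  for (const row of rows) {
    const value = row.text_value ?? row.number_value;
    const existing = result[row.field_name];
    if (existing === undefined) {
      result[row.field_name] = value;
    } else if (Array.isArray(existing)) {
      existing.push(value); // third or later value of a multi-valued attribute
    } else {
      result[row.field_name] = [existing, value]; // promote scalar to array
    }
  }
  return result;
}

const rows: AttributeValueRow[] = [
  { field_name: "name", text_value: "App-01", number_value: null, array_index: 0 },
  { field_name: "tag", text_value: "prod", number_value: null, array_index: 0 },
  { field_name: "tag", text_value: "web", number_value: null, array_index: 1 },
];
const pivoted = pivotEavRows(rows); // name stays scalar, tag becomes ["prod", "web"]
```

Rows should be ordered by `array_index` in the query so multi-valued attributes keep their original order.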

View File

@@ -9,16 +9,28 @@ import { logger } from '../logger.js';
import type { DatabaseAdapter } from './interface.js';
export class PostgresAdapter implements DatabaseAdapter {
public readonly isPostgres = true; // Indicates this is PostgreSQL
private pool: Pool;
private connectionString: string;
private isClosed: boolean = false;
private allowClose: boolean = true; // Set to false for singleton instances
constructor(connectionString: string) {
constructor(connectionString: string, allowClose: boolean = true) {
this.connectionString = connectionString;
this.allowClose = allowClose;
// Parse connection string to extract SSL requirement
const url = new URL(connectionString);
const sslRequired = url.searchParams.get('sslmode') === 'require' ||
process.env.DATABASE_SSL === 'true';
this.pool = new Pool({
connectionString,
max: 20, // Maximum number of clients in the pool
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 10000, // Increased timeout for initial connection
ssl: sslRequired ? { rejectUnauthorized: false } : false, // Azure PostgreSQL requires SSL
});
// Handle pool errors
@@ -69,6 +81,7 @@ export class PostgresAdapter implements DatabaseAdapter {
// Create a transaction-scoped adapter
const transactionAdapter: DatabaseAdapter = {
isPostgres: true, // Indicates this is PostgreSQL
query: async (sql: string, params?: any[]) => {
const convertedSql = this.convertPlaceholders(sql);
const result = await client.query(convertedSql, params);
@@ -99,9 +112,16 @@ export class PostgresAdapter implements DatabaseAdapter {
const result = await callback(transactionAdapter);
await client.query('COMMIT');
return result;
} catch (error) {
} catch (error: any) {
await client.query('ROLLBACK');
logger.error('PostgreSQL transaction error:', error);
// Don't log foreign key constraint errors as errors - they're expected and handled by caller
if (error?.code === '23503' || error?.message?.includes('foreign key constraint')) {
logger.debug('PostgreSQL transaction error (foreign key constraint - handled by caller):', error);
} else {
logger.error('PostgreSQL transaction error:', error);
}
throw error;
} finally {
client.release();
@@ -124,15 +144,34 @@ export class PostgresAdapter implements DatabaseAdapter {
}
async close(): Promise<void> {
await this.pool.end();
// Don't close singleton instances - they should remain open for the app lifetime
if (!this.allowClose) {
return;
}
// Make close() idempotent - safe to call multiple times
if (this.isClosed) {
return;
}
try {
await this.pool.end();
this.isClosed = true;
} catch (error) {
// Pool might already be closed, ignore the error
this.isClosed = true;
}
}
async getSizeBytes(): Promise<number> {
try {
const result = await this.query<{ size: number }>(`
const result = await this.query<{ size: number | string }>(`
SELECT pg_database_size(current_database()) as size
`);
return result[0]?.size || 0;
// PostgreSQL returns bigint as string, ensure we convert to number
const size = result[0]?.size;
if (!size) return 0;
return typeof size === 'string' ? parseInt(size, 10) : Number(size);
} catch (error) {
logger.error('PostgreSQL getSizeBytes error:', error);
return 0;
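
The `getSizeBytes` fix above exists because node-postgres returns `bigint` columns (such as `pg_database_size`) as strings to avoid precision loss. The conversion can be isolated as a small helper; a sketch mirroring the logic in the diff:

```typescript
// pg hands back bigint columns as strings; normalize to a JS number.
// (Safe here: database sizes fit comfortably within Number.MAX_SAFE_INTEGER.)
function toByteCount(size: number | string | null | undefined): number {
  if (size === null || size === undefined) return 0;
  return typeof size === "string" ? parseInt(size, 10) : Number(size);
}

console.log(toByteCount("1073741824")); // 1073741824
console.log(toByteCount(0)); // 0
```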

View File

@@ -0,0 +1,28 @@
/**
* Database Adapter Singleton
*
* Provides a shared database adapter instance to prevent multiple connections.
* All services should use this singleton instead of creating their own adapters.
*/
import { createDatabaseAdapter } from './factory.js';
import type { DatabaseAdapter } from './interface.js';
let dbAdapterInstance: DatabaseAdapter | null = null;
/**
* Get the shared database adapter instance
*/
export function getDatabaseAdapter(): DatabaseAdapter {
if (!dbAdapterInstance) {
dbAdapterInstance = createDatabaseAdapter(undefined, undefined, false); // Don't allow close (singleton)
}
return dbAdapterInstance;
}
/**
* Reset the singleton (for testing only)
*/
export function resetDatabaseAdapter(): void {
dbAdapterInstance = null;
}
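
The singleton above reduces to a small generic pattern: lazy creation on first access, plus a reset hook so tests can force a fresh instance. A sketch with hypothetical names (`createAdapter` stands in for `createDatabaseAdapter`):

```typescript
type Adapter = { id: number };

let instance: Adapter | null = null;
let created = 0; // counts real constructions, to show laziness

function createAdapter(): Adapter {
  created++;
  return { id: created };
}

// Lazy singleton: first call constructs, later calls reuse.
function getAdapter(): Adapter {
  if (!instance) instance = createAdapter();
  return instance;
}

// Test-only escape hatch, mirroring resetDatabaseAdapter().
function resetAdapter(): void {
  instance = null;
}

const a = getAdapter();
const b = getAdapter();
console.log(a === b, created); // true 1
resetAdapter();
getAdapter();
console.log(created); // 2
```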

View File

@@ -18,7 +18,7 @@ const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
// Path to the configuration file (v25)
const CONFIG_FILE_PATH_V25 = join(__dirname, '../../data/effort-calculation-config-v25.json');
const CONFIG_FILE_PATH_V25 = join(__dirname, '../../data/effort-calculation-config.json');
// Cache for loaded configuration
let cachedConfigV25: EffortCalculationConfigV25 | null = null;
@@ -275,12 +275,6 @@ export function calculateRequiredEffortApplicationManagementV25(
breakdown.businessImpactAnalyse = biaClass;
breakdown.applicationManagementHosting = applicationManagementHosting;
logger.debug(`=== Effort Calculation v25 ===`);
logger.debug(`Regiemodel: ${regieModelCode} (${governanceModelRaw})`);
logger.debug(`Application Type: ${applicationType}`);
logger.debug(`BIA: ${biaClass} (${businessImpactAnalyseRaw})`);
logger.debug(`Hosting: ${applicationManagementHosting}`);
// Level 1: Find Regiemodel configuration
if (!regieModelCode || !config.regiemodellen[regieModelCode]) {
breakdown.errors.push(`Geen configuratie gevonden voor regiemodel: ${governanceModelRaw || 'niet ingesteld'}`);
@@ -413,10 +407,6 @@ export function calculateRequiredEffortApplicationManagementV25(
breakdown.hoursPerMonth = breakdown.hoursPerYear / 12;
breakdown.hoursPerWeek = breakdown.hoursPerYear / NET_WORK_WEEKS;
logger.debug(`Base FTE: ${breakdown.baseEffort} (${breakdown.baseEffortMin} - ${breakdown.baseEffortMax})`);
logger.debug(`Final FTE: ${finalEffort}`);
logger.debug(`Hours/year: ${breakdown.hoursPerYear}`);
return { finalEffort, breakdown };
} catch (error) {

View File

@@ -0,0 +1,289 @@
/**
* Email Service
*
* Handles sending emails using Nodemailer with SMTP configuration.
* Used for user invitations, password resets, and email verification.
*/
import nodemailer, { Transporter } from 'nodemailer';
import { logger } from './logger.js';
import { config } from '../config/env.js';
interface EmailOptions {
to: string;
subject: string;
html: string;
text?: string;
}
class EmailService {
private transporter: Transporter | null = null;
private _isConfigured: boolean = false;
constructor() {
this.initialize();
}
/**
* Initialize email transporter
*/
private initialize(): void {
const smtpHost = process.env.SMTP_HOST;
const smtpPort = parseInt(process.env.SMTP_PORT || '587', 10);
const smtpSecure = process.env.SMTP_SECURE === 'true';
const smtpUser = process.env.SMTP_USER;
const smtpPassword = process.env.SMTP_PASSWORD;
const smtpFrom = process.env.SMTP_FROM || smtpUser || 'noreply@example.com';
if (!smtpHost || !smtpUser || !smtpPassword) {
logger.warn('SMTP not configured - email functionality will be disabled');
this._isConfigured = false;
return;
}
try {
this.transporter = nodemailer.createTransport({
host: smtpHost,
port: smtpPort,
secure: smtpSecure, // true for 465, false for other ports
auth: {
user: smtpUser,
pass: smtpPassword,
},
});
this._isConfigured = true;
logger.info('Email service configured');
} catch (error) {
logger.error('Failed to initialize email service:', error);
this._isConfigured = false;
}
}
/**
* Send an email
*/
async sendEmail(options: EmailOptions): Promise<boolean> {
if (!this._isConfigured || !this.transporter) {
logger.warn('Email service not configured - email not sent:', options.to);
// In development, log the email content
if (config.isDevelopment) {
logger.info('Email would be sent:', {
to: options.to,
subject: options.subject,
html: options.html,
});
}
return false;
}
try {
const smtpFrom = process.env.SMTP_FROM || process.env.SMTP_USER || 'noreply@example.com';
await this.transporter.sendMail({
from: smtpFrom,
to: options.to,
subject: options.subject,
html: options.html,
text: options.text || this.htmlToText(options.html),
});
logger.info(`Email sent successfully to ${options.to}`);
return true;
} catch (error) {
logger.error('Failed to send email:', error);
return false;
}
}
/**
* Send invitation email
*/
async sendInvitationEmail(
email: string,
token: string,
displayName?: string
): Promise<boolean> {
const frontendUrl = config.frontendUrl;
const invitationUrl = `${frontendUrl}/accept-invitation?token=${token}`;
const html = `
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<style>
body { font-family: Arial, sans-serif; line-height: 1.6; color: #333; }
.container { max-width: 600px; margin: 0 auto; padding: 20px; }
.header { background-color: #2563eb; color: white; padding: 20px; text-align: center; }
.content { padding: 20px; background-color: #f9fafb; }
.button { display: inline-block; padding: 12px 24px; background-color: #2563eb; color: white; text-decoration: none; border-radius: 4px; margin: 20px 0; }
.footer { text-align: center; padding: 20px; color: #6b7280; font-size: 12px; }
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Welkom bij CMDB Editor</h1>
</div>
<div class="content">
<p>Beste ${displayName || 'gebruiker'},</p>
<p>Je bent uitgenodigd om een account aan te maken voor de CMDB Editor applicatie.</p>
<p>Klik op de onderstaande knop om je account te activeren en een wachtwoord in te stellen:</p>
<p style="text-align: center;">
<a href="${invitationUrl}" class="button">Account activeren</a>
</p>
<p>Of kopieer en plak deze link in je browser:</p>
<p style="word-break: break-all; color: #2563eb;">${invitationUrl}</p>
<p><strong>Deze link is 7 dagen geldig.</strong></p>
</div>
<div class="footer">
<p>Zuyderland Medisch Centrum - CMDB Editor</p>
</div>
</div>
</body>
</html>
`;
return this.sendEmail({
to: email,
subject: 'Uitnodiging voor CMDB Editor',
html,
});
}
/**
* Send password reset email
*/
async sendPasswordResetEmail(
email: string,
token: string,
displayName?: string
): Promise<boolean> {
const frontendUrl = config.frontendUrl;
const resetUrl = `${frontendUrl}/reset-password?token=${token}`;
const html = `
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<style>
body { font-family: Arial, sans-serif; line-height: 1.6; color: #333; }
.container { max-width: 600px; margin: 0 auto; padding: 20px; }
.header { background-color: #dc2626; color: white; padding: 20px; text-align: center; }
.content { padding: 20px; background-color: #f9fafb; }
.button { display: inline-block; padding: 12px 24px; background-color: #dc2626; color: white; text-decoration: none; border-radius: 4px; margin: 20px 0; }
.footer { text-align: center; padding: 20px; color: #6b7280; font-size: 12px; }
.warning { background-color: #fef2f2; border-left: 4px solid #dc2626; padding: 12px; margin: 20px 0; }
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Wachtwoord resetten</h1>
</div>
<div class="content">
<p>Beste ${displayName || 'gebruiker'},</p>
<p>Je hebt een verzoek gedaan om je wachtwoord te resetten.</p>
<p>Klik op de onderstaande knop om een nieuw wachtwoord in te stellen:</p>
<p style="text-align: center;">
<a href="${resetUrl}" class="button">Wachtwoord resetten</a>
</p>
<p>Of kopieer en plak deze link in je browser:</p>
<p style="word-break: break-all; color: #2563eb;">${resetUrl}</p>
<div class="warning">
<p><strong>Let op:</strong> Als je dit verzoek niet hebt gedaan, negeer dan deze email. Je wachtwoord blijft ongewijzigd.</p>
</div>
<p><strong>Deze link is 1 uur geldig.</strong></p>
</div>
<div class="footer">
<p>Zuyderland Medisch Centrum - CMDB Editor</p>
</div>
</div>
</body>
</html>
`;
return this.sendEmail({
to: email,
subject: 'Wachtwoord resetten - CMDB Editor',
html,
});
}
/**
* Send email verification email
*/
async sendEmailVerificationEmail(
email: string,
token: string,
displayName?: string
): Promise<boolean> {
const frontendUrl = config.frontendUrl;
const verifyUrl = `${frontendUrl}/verify-email?token=${token}`;
const html = `
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<style>
body { font-family: Arial, sans-serif; line-height: 1.6; color: #333; }
.container { max-width: 600px; margin: 0 auto; padding: 20px; }
.header { background-color: #059669; color: white; padding: 20px; text-align: center; }
.content { padding: 20px; background-color: #f9fafb; }
.button { display: inline-block; padding: 12px 24px; background-color: #059669; color: white; text-decoration: none; border-radius: 4px; margin: 20px 0; }
.footer { text-align: center; padding: 20px; color: #6b7280; font-size: 12px; }
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>E-mailadres verifiëren</h1>
</div>
<div class="content">
<p>Beste ${displayName || 'gebruiker'},</p>
<p>Bedankt voor het aanmaken van je account. Verifieer je e-mailadres door op de onderstaande knop te klikken:</p>
<p style="text-align: center;">
<a href="${verifyUrl}" class="button">E-mailadres verifiëren</a>
</p>
<p>Of kopieer en plak deze link in je browser:</p>
<p style="word-break: break-all; color: #2563eb;">${verifyUrl}</p>
</div>
<div class="footer">
<p>Zuyderland Medisch Centrum - CMDB Editor</p>
</div>
</div>
</body>
</html>
`;
return this.sendEmail({
to: email,
subject: 'E-mailadres verifiëren - CMDB Editor',
html,
});
}
/**
* Convert HTML to plain text (simple implementation)
*/
private htmlToText(html: string): string {
return html
.replace(/<style[^>]*>.*?<\/style>/gis, '')
.replace(/<script[^>]*>.*?<\/script>/gis, '')
.replace(/<[^>]+>/g, '')
.replace(/\s+/g, ' ')
.trim();
}
/**
* Check if email service is configured
*/
isConfigured(): boolean {
return this._isConfigured;
}
}
export const emailService = new EmailService();
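
The `htmlToText` fallback above is worth seeing in isolation: it strips `<style>` and `<script>` blocks first (so their contents never leak into the plain-text body), then removes remaining tags and collapses whitespace. A self-contained sketch using the same regexes:

```typescript
// Minimal HTML-to-text conversion, as used for the plain-text email fallback.
function htmlToText(html: string): string {
  return html
    .replace(/<style[^>]*>.*?<\/style>/gis, "") // drop CSS blocks entirely
    .replace(/<script[^>]*>.*?<\/script>/gis, "") // drop scripts entirely
    .replace(/<[^>]+>/g, "") // strip remaining tags
    .replace(/\s+/g, " ") // collapse whitespace
    .trim();
}

console.log(htmlToText("<p>Beste <strong>gebruiker</strong>,</p><style>p{color:red}</style>"));
// → "Beste gebruiker,"
```

Note this is deliberately simple: it does not decode HTML entities (`&amp;` stays as-is), which is acceptable for a best-effort text alternative.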

View File

@@ -0,0 +1,115 @@
/**
* Encryption Service
*
* Provides encryption/decryption for sensitive data at rest (Jira PATs, API keys).
* Uses AES-256-GCM for authenticated encryption.
*/
import { createCipheriv, createDecipheriv, randomBytes, scrypt } from 'crypto';
import { promisify } from 'util';
import { logger } from './logger.js';
import { config } from '../config/env.js';
const scryptAsync = promisify(scrypt);
const ALGORITHM = 'aes-256-gcm';
const KEY_LENGTH = 32; // 32 bytes for AES-256
const IV_LENGTH = 16; // 16-byte IV (GCM also accepts the more common 12-byte nonce)
const SALT_LENGTH = 16; // 16 bytes for salt
const TAG_LENGTH = 16; // 16 bytes for authentication tag
class EncryptionService {
private encryptionKey: Buffer | null = null;
/**
* Get or derive encryption key from environment variable
*/
private async getEncryptionKey(): Promise<Buffer> {
if (this.encryptionKey) {
return this.encryptionKey;
}
const envKey = process.env.ENCRYPTION_KEY;
if (!envKey) {
throw new Error('ENCRYPTION_KEY environment variable is required for encryption');
}
// If key is base64 encoded, decode it
let key: Buffer;
try {
key = Buffer.from(envKey, 'base64');
if (key.length !== KEY_LENGTH) {
throw new Error('Invalid key length');
}
} catch (error) {
// If not base64, derive key from string using scrypt
const salt = Buffer.from(envKey.substring(0, SALT_LENGTH), 'utf8');
key = (await scryptAsync(envKey, salt, KEY_LENGTH)) as Buffer;
}
this.encryptionKey = key;
return key;
}
/**
* Encrypt a string value
*/
async encrypt(plaintext: string): Promise<string> {
try {
const key = await this.getEncryptionKey();
const iv = randomBytes(IV_LENGTH);
const cipher = createCipheriv(ALGORITHM, key, iv);
let encrypted = cipher.update(plaintext, 'utf8', 'base64');
encrypted += cipher.final('base64');
const authTag = cipher.getAuthTag();
// Combine IV + authTag + encrypted data
const combined = Buffer.concat([
iv,
authTag,
Buffer.from(encrypted, 'base64')
]);
return combined.toString('base64');
} catch (error) {
logger.error('Encryption error:', error);
throw new Error('Failed to encrypt data');
}
}
/**
* Decrypt a string value
*/
async decrypt(encryptedData: string): Promise<string> {
try {
const key = await this.getEncryptionKey();
const combined = Buffer.from(encryptedData, 'base64');
// Extract IV, authTag, and encrypted data
const iv = combined.subarray(0, IV_LENGTH);
const authTag = combined.subarray(IV_LENGTH, IV_LENGTH + TAG_LENGTH);
const encrypted = combined.subarray(IV_LENGTH + TAG_LENGTH);
const decipher = createDecipheriv(ALGORITHM, key, iv);
decipher.setAuthTag(authTag);
let decrypted = decipher.update(encrypted, undefined, 'utf8');
decrypted += decipher.final('utf8');
return decrypted;
} catch (error) {
logger.error('Decryption error:', error);
throw new Error('Failed to decrypt data');
}
}
/**
* Check if encryption is properly configured
*/
isConfigured(): boolean {
return !!process.env.ENCRYPTION_KEY;
}
}
export const encryptionService = new EncryptionService();
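
The storage layout above (IV, then auth tag, then ciphertext, all base64-encoded together) can be demonstrated end to end with Node's built-in `crypto` module. A self-contained round-trip sketch using the same algorithm and offsets; the key here is random rather than derived from `ENCRYPTION_KEY`:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const ALGORITHM = "aes-256-gcm";
const IV_LENGTH = 16;
const TAG_LENGTH = 16;

// Encrypt: pack IV + auth tag + ciphertext into one base64 string,
// matching the at-rest layout the service uses.
function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(IV_LENGTH);
  const cipher = createCipheriv(ALGORITHM, key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

// Decrypt: slice the combined buffer at the same fixed offsets.
function decrypt(encoded: string, key: Buffer): string {
  const combined = Buffer.from(encoded, "base64");
  const iv = combined.subarray(0, IV_LENGTH);
  const tag = combined.subarray(IV_LENGTH, IV_LENGTH + TAG_LENGTH);
  const ciphertext = combined.subarray(IV_LENGTH + TAG_LENGTH);
  const decipher = createDecipheriv(ALGORITHM, key, iv);
  decipher.setAuthTag(tag); // GCM authenticates; tampering makes final() throw
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

const key = randomBytes(32); // AES-256 needs a 32-byte key
const stored = encrypt("jira-pat-token-example", key);
console.log(decrypt(stored, key)); // jira-pat-token-example
```

Because GCM is authenticated, any modification of the stored blob (IV, tag, or ciphertext) causes `decipher.final()` to throw rather than return corrupted plaintext.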

File diff suppressed because it is too large.

View File

@@ -7,9 +7,12 @@
import { config } from '../config/env.js';
import { logger } from './logger.js';
import { OBJECT_TYPES } from '../generated/jira-schema.js';
import type { CMDBObject, CMDBObjectTypeName, ObjectReference } from '../generated/jira-types.js';
import { schemaCacheService } from './schemaCacheService.js';
import type { CMDBObject, ObjectReference } from '../generated/jira-types.js';
import type { JiraAssetsObject, JiraAssetsAttribute, JiraAssetsSearchResponse } from '../types/index.js';
import type { ObjectEntry, ObjectAttribute, ObjectAttributeValue, ReferenceValue, ConfluenceValue } from '../domain/jiraAssetsPayload.js';
import { isReferenceValue, isSimpleValue, hasAttributes } from '../domain/jiraAssetsPayload.js';
import { normalizedCacheStore } from './normalizedCacheStore.js';
// =============================================================================
// Types
@@ -31,14 +34,39 @@ export interface JiraUpdatePayload {
}>;
}
// Map from Jira object type ID to our type name
const TYPE_ID_TO_NAME: Record<number, CMDBObjectTypeName> = {};
const JIRA_NAME_TO_TYPE: Record<string, CMDBObjectTypeName> = {};
// Lookup maps - will be populated dynamically from database schema
let TYPE_ID_TO_NAME: Record<number, string> = {};
let JIRA_NAME_TO_TYPE: Record<string, string> = {};
let OBJECT_TYPES_CACHE: Record<string, { jiraTypeId: number; name: string; attributes: Array<{ jiraId: number; name: string; fieldName: string; type: string; isMultiple?: boolean }> }> = {};
// Build lookup maps from schema
for (const [typeName, typeDef] of Object.entries(OBJECT_TYPES)) {
TYPE_ID_TO_NAME[typeDef.jiraTypeId] = typeName as CMDBObjectTypeName;
JIRA_NAME_TO_TYPE[typeDef.name] = typeName as CMDBObjectTypeName;
/**
* Initialize lookup maps from database schema
*/
async function initializeLookupMaps(): Promise<void> {
try {
const schema = await schemaCacheService.getSchema();
OBJECT_TYPES_CACHE = {};
TYPE_ID_TO_NAME = {};
JIRA_NAME_TO_TYPE = {};
for (const [typeName, typeDef] of Object.entries(schema.objectTypes)) {
OBJECT_TYPES_CACHE[typeName] = {
jiraTypeId: typeDef.jiraTypeId,
name: typeDef.name,
attributes: typeDef.attributes.map(attr => ({
jiraId: attr.jiraId,
name: attr.name,
fieldName: attr.fieldName,
type: attr.type,
isMultiple: attr.isMultiple,
})),
};
TYPE_ID_TO_NAME[typeDef.jiraTypeId] = typeName;
JIRA_NAME_TO_TYPE[typeDef.name] = typeName;
}
} catch (error) {
logger.error('JiraAssetsClient: Failed to initialize lookup maps', error);
}
}
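The lookup-map build above can be sketched in isolation. The schema literal below is illustrative only; as in `initializeLookupMaps`, just `jiraTypeId` and `name` drive the two maps:

```typescript
// Minimal sketch of the dynamic lookup-map build from a cached schema.
// The schema shape here is an assumption based on the fields read above.
type SchemaTypeDef = { jiraTypeId: number; name: string };

const schema: { objectTypes: Record<string, SchemaTypeDef> } = {
  objectTypes: {
    ApplicationComponent: { jiraTypeId: 42, name: 'Application Component' },
    Server: { jiraTypeId: 7, name: 'Server' },
  },
};

const typeIdToName: Record<number, string> = {};
const jiraNameToType: Record<string, string> = {};
for (const [typeName, def] of Object.entries(schema.objectTypes)) {
  typeIdToName[def.jiraTypeId] = typeName; // 42 -> "ApplicationComponent"
  jiraNameToType[def.name] = typeName;     // "Application Component" -> "ApplicationComponent"
}
```

Both maps resolve to the internal type key, so a referenced object can be matched either by its numeric type ID or by its Jira display name.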
// =============================================================================
@@ -49,7 +77,8 @@ class JiraAssetsClient {
private baseUrl: string;
private defaultHeaders: Record<string, string>;
private isDataCenter: boolean | null = null;
private requestToken: string | null = null;
private serviceAccountToken: string | null = null; // Service account token from .env (for read operations)
private requestToken: string | null = null; // User PAT from profile settings (for write operations)
constructor() {
this.baseUrl = `${config.jiraHost}/rest/insight/1.0`;
@@ -58,17 +87,18 @@ class JiraAssetsClient {
'Accept': 'application/json',
};
// Add PAT authentication if configured
if (config.jiraAuthMethod === 'pat' && config.jiraPat) {
this.defaultHeaders['Authorization'] = `Bearer ${config.jiraPat}`;
}
// Initialize service account token from config (for read operations)
this.serviceAccountToken = config.jiraServiceAccountToken || null;
// User PAT is configured per-user in profile settings
// Authorization header is set per-request via setRequestToken()
}
// ==========================================================================
// Request Token Management (for user-context requests)
// ==========================================================================
setRequestToken(token: string): void {
setRequestToken(token: string | null): void {
this.requestToken = token;
}
@@ -76,6 +106,21 @@ class JiraAssetsClient {
this.requestToken = null;
}
/**
* Check if a token is configured for read operations
* Uses service account token (primary) or user PAT (fallback)
*/
hasToken(): boolean {
return !!(this.serviceAccountToken || this.requestToken);
}
/**
* Check if user PAT is configured for write operations
*/
hasUserToken(): boolean {
return !!this.requestToken;
}
// ==========================================================================
// API Detection
// ==========================================================================
@@ -95,12 +140,26 @@ class JiraAssetsClient {
}
}
private getHeaders(): Record<string, string> {
/**
* Get headers for API requests
* @param forWrite - If true, requires user PAT. If false, uses service account token (or user PAT as fallback)
*/
private getHeaders(forWrite: boolean = false): Record<string, string> {
const headers = { ...this.defaultHeaders };
// Use request-scoped token if available (for user context)
if (this.requestToken) {
if (forWrite) {
// Write operations require user PAT
if (!this.requestToken) {
throw new Error('Jira Personal Access Token not configured. Please configure it in your user settings to enable saving changes to Jira.');
}
headers['Authorization'] = `Bearer ${this.requestToken}`;
} else {
// Read operations: use service account token (primary) or user PAT (fallback)
const token = this.serviceAccountToken || this.requestToken;
if (!token) {
throw new Error('Jira token not configured. Please configure JIRA_SERVICE_ACCOUNT_TOKEN in .env or a Personal Access Token in your user settings.');
}
headers['Authorization'] = `Bearer ${token}`;
}
return headers;
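The read/write token precedence that `getHeaders` implements can be condensed to a small pure function; the names here are illustrative, not part of the client's API:

```typescript
// Sketch of the two-token precedence: reads prefer the service-account
// token and fall back to the user PAT; writes require the user PAT.
function pickToken(
  serviceToken: string | null,
  userToken: string | null,
  forWrite: boolean,
): string {
  if (forWrite) {
    if (!userToken) throw new Error('User PAT required for write operations');
    return userToken;
  }
  const token = serviceToken ?? userToken;
  if (!token) throw new Error('No token configured for read operations');
  return token;
}
```

Keeping the precedence in one place means a misconfigured deployment fails fast with a clear message instead of sending unauthenticated requests to Jira.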
@@ -110,15 +169,21 @@ class JiraAssetsClient {
// Core API Methods
// ==========================================================================
private async request<T>(endpoint: string, options: RequestInit = {}): Promise<T> {
/**
* Make a request to Jira API
* @param endpoint - API endpoint
* @param options - Request options
* @param forWrite - If true, requires user PAT for write operations
*/
private async request<T>(endpoint: string, options: RequestInit = {}, forWrite: boolean = false): Promise<T> {
const url = `${this.baseUrl}${endpoint}`;
logger.debug(`JiraAssetsClient: ${options.method || 'GET'} ${url}`);
logger.debug(`JiraAssetsClient: ${options.method || 'GET'} ${url} (forWrite: ${forWrite})`);
const response = await fetch(url, {
...options,
headers: {
...this.getHeaders(),
...this.getHeaders(forWrite),
...options.headers,
},
});
@@ -136,10 +201,17 @@ class JiraAssetsClient {
// ==========================================================================
async testConnection(): Promise<boolean> {
// Don't test connection if no token is configured
if (!this.hasToken()) {
logger.debug('JiraAssetsClient: No token configured, skipping connection test');
return false;
}
try {
await this.detectApiType();
const response = await fetch(`${this.baseUrl}/objectschema/${config.jiraSchemaId}`, {
headers: this.getHeaders(),
// Test connection by fetching schemas list (no specific schema ID needed)
const response = await fetch(`${this.baseUrl}/objectschema/list`, {
headers: this.getHeaders(false), // Read operation - uses service account token
});
return response.ok;
} catch (error) {
@@ -148,15 +220,35 @@ class JiraAssetsClient {
}
}
async getObject(objectId: string): Promise<JiraAssetsObject | null> {
/**
* Get raw ObjectEntry for an object (for recursive processing)
*/
async getObjectEntry(objectId: string): Promise<ObjectEntry | null> {
try {
return await this.request<JiraAssetsObject>(`/object/${objectId}`);
// Include attributes and deep attributes to get full details of referenced objects (including descriptions)
const url = `/object/${objectId}?includeAttributes=true&includeAttributesDeep=2`;
const entry = await this.request<ObjectEntry>(url, {}, false); // Read operation
return entry;
} catch (error) {
// Check if this is a 404 (object not found / deleted)
if (error instanceof Error && error.message.includes('404')) {
logger.info(`JiraAssetsClient: Object ${objectId} not found in Jira (likely deleted)`);
throw new JiraObjectNotFoundError(objectId);
}
logger.error(`JiraAssetsClient: Failed to get object entry ${objectId}`, error);
return null;
}
}
async getObject(objectId: string): Promise<JiraAssetsObject | null> {
try {
const entry = await this.getObjectEntry(objectId);
if (!entry) return null;
return this.adaptObjectEntryToJiraAssetsObject(entry);
} catch (error) {
if (error instanceof JiraObjectNotFoundError) {
throw error;
}
logger.error(`JiraAssetsClient: Failed to get object ${objectId}`, error);
return null;
}
@@ -165,11 +257,26 @@ class JiraAssetsClient {
async searchObjects(
iql: string,
page: number = 1,
pageSize: number = 50
): Promise<{ objects: JiraAssetsObject[]; totalCount: number; hasMore: boolean }> {
pageSize: number = 50,
schemaId?: string
): Promise<{
objects: JiraAssetsObject[];
totalCount: number;
hasMore: boolean;
referencedObjects?: Array<{ entry: ObjectEntry; typeName: string }>;
rawEntries?: ObjectEntry[]; // Raw ObjectEntry format for recursive processing
}> {
await this.detectApiType();
let response: JiraAssetsSearchResponse;
// Schema ID must be provided explicitly (no default from config)
if (!schemaId) {
throw new Error('Schema ID is required for searchObjects. Please provide schemaId parameter.');
}
const effectiveSchemaId = schemaId;
// Use domain types for API requests
let payload: { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number; page?: number; pageSize?: number };
if (this.isDataCenter) {
// Try modern AQL endpoint first
@@ -179,10 +286,10 @@ class JiraAssetsClient {
page: page.toString(),
resultPerPage: pageSize.toString(),
includeAttributes: 'true',
includeAttributesDeep: '1',
objectSchemaId: config.jiraSchemaId,
includeAttributesDeep: '2',
objectSchemaId: effectiveSchemaId,
});
response = await this.request<JiraAssetsSearchResponse>(`/aql/objects?${params.toString()}`);
payload = await this.request<{ objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number }>(`/aql/objects?${params.toString()}`, {}, false); // Read operation
} catch (error) {
// Fallback to deprecated IQL endpoint
logger.warn(`JiraAssetsClient: AQL endpoint failed, falling back to IQL: ${error}`);
@@ -191,50 +298,169 @@ class JiraAssetsClient {
page: page.toString(),
resultPerPage: pageSize.toString(),
includeAttributes: 'true',
includeAttributesDeep: '1',
objectSchemaId: config.jiraSchemaId,
includeAttributesDeep: '2',
objectSchemaId: effectiveSchemaId,
});
response = await this.request<JiraAssetsSearchResponse>(`/iql/objects?${params.toString()}`);
payload = await this.request<{ objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number }>(`/iql/objects?${params.toString()}`, {}, false); // Read operation
}
} else {
// Jira Cloud uses POST for AQL
response = await this.request<JiraAssetsSearchResponse>('/aql/objects', {
payload = await this.request<{ objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number }>('/aql/objects', {
method: 'POST',
body: JSON.stringify({
qlQuery: iql,
page,
resultPerPage: pageSize,
includeAttributes: true,
includeAttributesDeep: 2, // Include attributes of referenced objects (e.g., descriptions)
objectSchemaId: effectiveSchemaId,
}),
});
}, false); // Read operation
}
// Adapt to legacy response format for backward compatibility
const response = this.adaptAssetsPayloadToSearchResponse({ ...payload, page, pageSize });
const totalCount = response.totalFilterCount || response.totalCount || 0;
const hasMore = response.objectEntries.length === pageSize && page * pageSize < totalCount;
// Note: referencedObjects extraction removed - recursive extraction now happens in storeObjectTree
// via extractNestedReferencedObjects, which processes the entire object tree recursively
return {
objects: response.objectEntries || [],
totalCount,
hasMore,
referencedObjects: undefined, // No longer used - recursive extraction handles this
rawEntries: payload.objectEntries || [], // Return raw entries for recursive processing
};
}
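The `hasMore` computation above assumes a short page (fewer than `pageSize` entries) always means the last page. As a standalone helper:

```typescript
// Sketch of the pagination check used in searchObjects: another page
// exists only if this page was full AND the running total is still
// below the server-reported count.
function hasMorePages(
  returned: number,
  page: number,
  pageSize: number,
  totalCount: number,
): boolean {
  return returned === pageSize && page * pageSize < totalCount;
}
```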
/**
* Recursively extract all nested referenced objects from an object entry
* This function traverses the object tree and extracts all referenced objects
* at any depth, preventing infinite loops with circular references.
*
* @param entry - The object entry to extract nested references from
* @param processedIds - Set of already processed object IDs (to prevent duplicates and circular refs)
* @param maxDepth - Maximum depth to traverse (default: 5)
* @param currentDepth - Current depth in the tree (default: 0)
* @returns Array of extracted referenced objects with their type names
*/
extractNestedReferencedObjects(
entry: ObjectEntry,
processedIds: Set<string>,
maxDepth: number = 5,
currentDepth: number = 0
): Array<{ entry: ObjectEntry; typeName: string }> {
const result: Array<{ entry: ObjectEntry; typeName: string }> = [];
// Prevent infinite recursion
if (currentDepth >= maxDepth) {
logger.debug(`JiraAssetsClient: [Recursive] Max depth (${maxDepth}) reached for object ${entry.objectKey || entry.id}`);
return result;
}
const entryId = String(entry.id);
// Skip if already processed (handles circular references)
if (processedIds.has(entryId)) {
logger.debug(`JiraAssetsClient: [Recursive] Skipping already processed object ${entry.objectKey || entry.id} (circular reference detected)`);
return result;
}
processedIds.add(entryId);
logger.debug(`JiraAssetsClient: [Recursive] Extracting nested references from ${entry.objectKey || entry.id} at depth ${currentDepth}`);
// Initialize lookup maps if needed
if (Object.keys(TYPE_ID_TO_NAME).length === 0) {
// This is async, but we can't make this function async without breaking the call chain
// So we'll initialize it before calling this function
logger.warn('JiraAssetsClient: TYPE_ID_TO_NAME not initialized, type resolution may fail');
}
// Extract referenced objects from attributes
if (entry.attributes) {
for (const attr of entry.attributes) {
for (const val of attr.objectAttributeValues) {
if (isReferenceValue(val) && hasAttributes(val.referencedObject)) {
const refId = String(val.referencedObject.id);
// Skip if already processed
if (processedIds.has(refId)) {
continue;
}
const refTypeId = val.referencedObject.objectType?.id;
const refTypeName = TYPE_ID_TO_NAME[refTypeId] ||
JIRA_NAME_TO_TYPE[val.referencedObject.objectType?.name];
if (refTypeName) {
logger.debug(`JiraAssetsClient: [Recursive] Found nested reference: ${val.referencedObject.objectKey || refId} of type ${refTypeName} at depth ${currentDepth + 1}`);
// Add this referenced object to results
result.push({
entry: val.referencedObject as ObjectEntry,
typeName: refTypeName,
});
// Recursively extract nested references from this referenced object
const nested = this.extractNestedReferencedObjects(
val.referencedObject as ObjectEntry,
processedIds,
maxDepth,
currentDepth + 1
);
result.push(...nested);
} else {
logger.debug(`JiraAssetsClient: [Recursive] Could not resolve type name for referenced object ${refId} (typeId: ${refTypeId}, typeName: ${val.referencedObject.objectType?.name})`);
}
}
}
}
}
if (result.length > 0) {
logger.debug(`JiraAssetsClient: [Recursive] Extracted ${result.length} nested references from ${entry.objectKey || entry.id} at depth ${currentDepth}`);
}
return result;
}
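The traversal above boils down to a visited set plus a depth cap. A simplified model, with an illustrative `Entry` shape standing in for the real `ObjectEntry`:

```typescript
// Simplified model of the recursive reference walk: the visited set
// stops circular references, the depth cap bounds the traversal.
type Entry = { id: string; refs: Entry[] };

function collectRefs(
  entry: Entry,
  visited: Set<string>,
  maxDepth = 5,
  depth = 0,
): Entry[] {
  if (depth >= maxDepth || visited.has(entry.id)) return [];
  visited.add(entry.id);
  const out: Entry[] = [];
  for (const ref of entry.refs) {
    if (visited.has(ref.id)) continue;
    out.push(ref);
    out.push(...collectRefs(ref, visited, maxDepth, depth + 1));
  }
  return out;
}

// A circular pair a <-> b terminates instead of recursing forever:
const a: Entry = { id: 'a', refs: [] };
const b: Entry = { id: 'b', refs: [a] };
a.refs.push(b);
const found = collectRefs(a, new Set<string>());
```

As in the real method, a reference already in the visited set is skipped before recursing, so each object is emitted at most once regardless of how many paths reach it.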
/**
* Get the total count of objects for a specific type from Jira Assets
* This is more efficient than fetching all objects when you only need the count
* @param typeName - Type name (from database, e.g. "ApplicationComponent")
 * @param schemaId - Optional schema ID (if not provided, resolved via the schema mapping service)
*/
async getObjectCount(typeName: CMDBObjectTypeName): Promise<number> {
const typeDef = OBJECT_TYPES[typeName];
async getObjectCount(typeName: string, schemaId?: string): Promise<number> {
// Ensure lookup maps are initialized
if (Object.keys(OBJECT_TYPES_CACHE).length === 0) {
await initializeLookupMaps();
}
const typeDef = OBJECT_TYPES_CACHE[typeName];
if (!typeDef) {
logger.warn(`JiraAssetsClient: Unknown type ${typeName}`);
return 0;
}
try {
// Get schema ID from mapping service if not provided
let effectiveSchemaId = schemaId;
if (!effectiveSchemaId) {
const { schemaMappingService } = await import('./schemaMappingService.js');
effectiveSchemaId = await schemaMappingService.getSchemaId(typeName);
}
// Skip if no schema ID is available (object type not configured)
if (!effectiveSchemaId || effectiveSchemaId.trim() === '') {
logger.debug(`JiraAssetsClient: No schema ID configured for ${typeName}, returning 0`);
return 0;
}
const iql = `objectType = "${typeDef.name}"`;
// Use pageSize=1 to minimize data transfer, we only need the totalCount
const result = await this.searchObjects(iql, 1, 1);
logger.debug(`JiraAssetsClient: ${typeName} has ${result.totalCount} objects in Jira Assets`);
const result = await this.searchObjects(iql, 1, 1, effectiveSchemaId);
logger.debug(`JiraAssetsClient: ${typeName} has ${result.totalCount} objects in Jira Assets (schema: ${effectiveSchemaId})`);
return result.totalCount;
} catch (error) {
logger.error(`JiraAssetsClient: Failed to get count for ${typeName}`, error);
@@ -243,29 +469,64 @@ class JiraAssetsClient {
}
async getAllObjectsOfType(
typeName: CMDBObjectTypeName,
batchSize: number = 40
): Promise<JiraAssetsObject[]> {
const typeDef = OBJECT_TYPES[typeName];
if (!typeDef) {
logger.warn(`JiraAssetsClient: Unknown type ${typeName}`);
return [];
typeName: string,
batchSize: number = 40,
schemaId?: string
): Promise<{
objects: JiraAssetsObject[];
referencedObjects: Array<{ entry: ObjectEntry; typeName: string }>;
rawEntries?: ObjectEntry[]; // Raw ObjectEntry format for recursive processing
}> {
// If typeName is a display name (not in cache), use it directly for IQL query
// Otherwise, look up the type definition
let objectTypeName = typeName;
// Try to find in cache first
if (Object.keys(OBJECT_TYPES_CACHE).length === 0) {
await initializeLookupMaps();
}
const typeDef = OBJECT_TYPES_CACHE[typeName];
if (typeDef) {
objectTypeName = typeDef.name; // Use the Jira name from cache
} else {
// Type not in cache - assume typeName is already the Jira display name
logger.debug(`JiraAssetsClient: Type ${typeName} not in cache, using as display name directly`);
}
// Get schema ID from mapping service if not provided
let effectiveSchemaId = schemaId;
if (!effectiveSchemaId) {
const { schemaMappingService } = await import('./schemaMappingService.js');
effectiveSchemaId = await schemaMappingService.getSchemaId(typeName);
}
if (!effectiveSchemaId) {
throw new Error(`No schema ID available for object type ${typeName}`);
}
const allObjects: JiraAssetsObject[] = [];
const rawEntries: ObjectEntry[] = []; // Store raw entries for recursive processing
let page = 1;
let hasMore = true;
while (hasMore) {
const iql = `objectType = "${typeDef.name}"`;
const result = await this.searchObjects(iql, page, batchSize);
const iql = `objectType = "${objectTypeName}"`;
const result = await this.searchObjects(iql, page, batchSize, effectiveSchemaId);
allObjects.push(...result.objects);
// Collect raw entries for recursive processing
if (result.rawEntries) {
rawEntries.push(...result.rawEntries);
}
hasMore = result.hasMore;
page++;
}
logger.info(`JiraAssetsClient: Fetched ${allObjects.length} ${typeName} objects`);
return allObjects;
logger.info(`JiraAssetsClient: Fetched ${allObjects.length} ${typeName} objects from schema ${effectiveSchemaId} (raw entries: ${rawEntries.length})`);
// Note: referencedObjects no longer collected - recursive extraction in storeObjectTree handles nested objects
return { objects: allObjects, referencedObjects: [], rawEntries };
}
async getUpdatedObjectsSince(
@@ -287,6 +548,11 @@ class JiraAssetsClient {
}
async updateObject(objectId: string, payload: JiraUpdatePayload): Promise<boolean> {
// Write operations require user PAT
if (!this.hasUserToken()) {
throw new Error('Jira Personal Access Token not configured. Please configure it in your user settings to enable saving changes to Jira.');
}
try {
logger.info(`JiraAssetsClient.updateObject: Sending update for object ${objectId}`, {
attributeCount: payload.attributes.length,
@@ -296,7 +562,7 @@ class JiraAssetsClient {
await this.request(`/object/${objectId}`, {
method: 'PUT',
body: JSON.stringify(payload),
});
}, true); // Write operation - requires user PAT
logger.info(`JiraAssetsClient.updateObject: Successfully updated object ${objectId}`);
return true;
@@ -306,38 +572,306 @@ class JiraAssetsClient {
}
}
// ==========================================================================
// Adapter Functions (temporary - for backward compatibility)
// ==========================================================================
/**
* Adapt ObjectEntry from domain types to legacy JiraAssetsObject type
* This is a temporary adapter during migration
* Handles both ObjectEntry (domain) and legacy JiraAssetsObject formats
*/
adaptObjectEntryToJiraAssetsObject(entry: ObjectEntry | JiraAssetsObject | null): JiraAssetsObject | null {
if (!entry) return null;
// Check if already in legacy format (has 'attributes' as array with JiraAssetsAttribute)
if ('attributes' in entry && Array.isArray(entry.attributes) && entry.attributes.length > 0 && 'objectTypeAttributeId' in entry.attributes[0] && !('id' in entry.attributes[0])) {
// Validate the legacy format object has required fields
const legacyObj = entry as JiraAssetsObject;
if (legacyObj.id === null || legacyObj.id === undefined) {
logger.warn(`JiraAssetsClient: Legacy object missing id. ObjectKey: ${legacyObj.objectKey}, Label: ${legacyObj.label}`);
return null;
}
if (!legacyObj.objectKey || !String(legacyObj.objectKey).trim()) {
logger.warn(`JiraAssetsClient: Legacy object missing objectKey. ID: ${legacyObj.id}, Label: ${legacyObj.label}`);
return null;
}
if (!legacyObj.label || !String(legacyObj.label).trim()) {
logger.warn(`JiraAssetsClient: Legacy object missing label. ID: ${legacyObj.id}, ObjectKey: ${legacyObj.objectKey}`);
return null;
}
return legacyObj;
}
// Convert from ObjectEntry format
const domainEntry = entry as ObjectEntry;
// Validate required fields before conversion
if (domainEntry.id === null || domainEntry.id === undefined) {
logger.warn(`JiraAssetsClient: ObjectEntry missing id. ObjectKey: ${domainEntry.objectKey}, Label: ${domainEntry.label}`);
return null;
}
if (!domainEntry.objectKey || !String(domainEntry.objectKey).trim()) {
logger.warn(`JiraAssetsClient: ObjectEntry missing objectKey. ID: ${domainEntry.id}, Label: ${domainEntry.label}`);
return null;
}
if (!domainEntry.label || !String(domainEntry.label).trim()) {
logger.warn(`JiraAssetsClient: ObjectEntry missing label. ID: ${domainEntry.id}, ObjectKey: ${domainEntry.objectKey}`);
return null;
}
// Convert id - ensure it's a number
let objectId: number;
if (typeof domainEntry.id === 'string') {
const parsed = parseInt(domainEntry.id, 10);
if (isNaN(parsed)) {
logger.warn(`JiraAssetsClient: ObjectEntry id cannot be parsed as number: ${domainEntry.id}`);
return null;
}
objectId = parsed;
} else if (typeof domainEntry.id === 'number') {
objectId = domainEntry.id;
} else {
logger.warn(`JiraAssetsClient: ObjectEntry id has invalid type: ${typeof domainEntry.id}`);
return null;
}
return {
id: objectId,
objectKey: String(domainEntry.objectKey).trim(),
label: String(domainEntry.label).trim(),
objectType: domainEntry.objectType,
created: domainEntry.created || new Date().toISOString(),
updated: domainEntry.updated || new Date().toISOString(),
attributes: (domainEntry.attributes || []).map(attr => this.adaptObjectAttributeToJiraAssetsAttribute(attr)),
};
}
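The id handling in the adapter above — accept `string | number`, reject anything unparsable rather than letting `NaN` propagate — can be isolated as a small helper; the name is illustrative:

```typescript
// Sketch of the id normalization performed in the adapter: a numeric id
// passes through, a string id is parsed, and failures return null so the
// caller can skip the object with a warning instead of storing NaN.
function toNumericId(id: string | number): number | null {
  if (typeof id === 'number') return Number.isFinite(id) ? id : null;
  const parsed = parseInt(id, 10);
  return Number.isNaN(parsed) ? null : parsed;
}
```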
/**
* Adapt ObjectAttribute from domain types to legacy JiraAssetsAttribute type
*/
private adaptObjectAttributeToJiraAssetsAttribute(attr: ObjectAttribute): JiraAssetsAttribute {
return {
objectTypeAttributeId: attr.objectTypeAttributeId,
objectTypeAttribute: undefined, // Not in domain type, will be populated from schema if needed
objectAttributeValues: attr.objectAttributeValues.map(val => this.adaptObjectAttributeValue(val)),
};
}
/**
* Adapt ObjectAttributeValue from domain types to legacy format
*/
private adaptObjectAttributeValue(val: ObjectAttributeValue): {
value?: string;
displayValue?: string;
referencedObject?: { id: number; objectKey: string; label: string };
status?: { name: string };
} {
if (isReferenceValue(val)) {
const ref = val.referencedObject;
return {
displayValue: val.displayValue,
referencedObject: {
id: typeof ref.id === 'string' ? parseInt(ref.id, 10) : ref.id,
objectKey: ref.objectKey,
label: ref.label,
},
};
}
if (isSimpleValue(val)) {
return {
value: String(val.value),
displayValue: val.displayValue,
};
}
// StatusValue, ConfluenceValue, UserValue
return {
displayValue: val.displayValue,
status: 'status' in val ? { name: val.status.name } : undefined,
};
}
/**
* Adapt AssetsPayload (from domain types) to legacy JiraAssetsSearchResponse
*/
private adaptAssetsPayloadToSearchResponse(
payload: { objectEntries: ObjectEntry[]; totalCount?: number; totalFilterCount?: number; page?: number; pageSize?: number }
): JiraAssetsSearchResponse {
return {
objectEntries: payload.objectEntries.map(entry => this.adaptObjectEntryToJiraAssetsObject(entry)!).filter(Boolean),
totalCount: payload.totalCount || 0,
totalFilterCount: payload.totalFilterCount,
page: payload.page || 1,
pageSize: payload.pageSize || 50,
};
}
// ==========================================================================
// Object Parsing
// ==========================================================================
parseObject<T extends CMDBObject>(jiraObj: JiraAssetsObject): T | null {
async parseObject<T extends CMDBObject>(jiraObj: JiraAssetsObject): Promise<T | null> {
// Ensure lookup maps are initialized
if (Object.keys(OBJECT_TYPES_CACHE).length === 0) {
await initializeLookupMaps();
}
const typeId = jiraObj.objectType?.id;
const typeName = TYPE_ID_TO_NAME[typeId] || JIRA_NAME_TO_TYPE[jiraObj.objectType?.name];
if (!typeName) {
logger.warn(`JiraAssetsClient: Unknown object type for object ${jiraObj.objectKey || jiraObj.id}: ${jiraObj.objectType?.name} (ID: ${typeId})`);
// This is expected when repairing broken references - object types may not be configured
logger.debug(`JiraAssetsClient: Unknown object type for object ${jiraObj.objectKey || jiraObj.id}: ${jiraObj.objectType?.name} (ID: ${typeId}) - object type not configured, skipping`);
return null;
}
const typeDef = OBJECT_TYPES[typeName];
const typeDef = OBJECT_TYPES_CACHE[typeName];
if (!typeDef) {
logger.warn(`JiraAssetsClient: Type definition not found for type: ${typeName} (object: ${jiraObj.objectKey || jiraObj.id})`);
return null;
}
// Validate required fields from Jira object
if (jiraObj.id === null || jiraObj.id === undefined) {
logger.warn(`JiraAssetsClient: Object missing id field. ObjectKey: ${jiraObj.objectKey}, Label: ${jiraObj.label}, Type: ${jiraObj.objectType?.name}`);
throw new Error(`Cannot parse Jira object: missing id field`);
}
if (!jiraObj.objectKey || !String(jiraObj.objectKey).trim()) {
logger.warn(`JiraAssetsClient: Object missing objectKey. ID: ${jiraObj.id}, Label: ${jiraObj.label}, Type: ${jiraObj.objectType?.name}`);
throw new Error(`Cannot parse Jira object ${jiraObj.id}: missing objectKey`);
}
if (!jiraObj.label || !String(jiraObj.label).trim()) {
logger.warn(`JiraAssetsClient: Object missing label. ID: ${jiraObj.id}, ObjectKey: ${jiraObj.objectKey}, Type: ${jiraObj.objectType?.name}`);
throw new Error(`Cannot parse Jira object ${jiraObj.id}: missing label`);
}
// Ensure we have valid values before creating the result
const objectId = String(jiraObj.id || '');
const objectKey = String(jiraObj.objectKey || '').trim();
const label = String(jiraObj.label || '').trim();
// Double-check after conversion (in case String() produced "null" or "undefined")
if (!objectId || objectId === 'null' || objectId === 'undefined' || objectId === 'NaN') {
logger.error(`JiraAssetsClient: parseObject - invalid id after conversion. Original: ${jiraObj.id}, Converted: ${objectId}`);
throw new Error(`Cannot parse Jira object: invalid id after conversion (${objectId})`);
}
if (!objectKey || objectKey === 'null' || objectKey === 'undefined') {
logger.error(`JiraAssetsClient: parseObject - invalid objectKey after conversion. Original: ${jiraObj.objectKey}, Converted: ${objectKey}`);
throw new Error(`Cannot parse Jira object: invalid objectKey after conversion (${objectKey})`);
}
if (!label || label === 'null' || label === 'undefined') {
logger.error(`JiraAssetsClient: parseObject - invalid label after conversion. Original: ${jiraObj.label}, Converted: ${label}`);
throw new Error(`Cannot parse Jira object: invalid label after conversion (${label})`);
}
const result: Record<string, unknown> = {
id: jiraObj.id.toString(),
objectKey: jiraObj.objectKey,
label: jiraObj.label,
id: objectId,
objectKey: objectKey,
label: label,
_objectType: typeName,
_jiraUpdatedAt: jiraObj.updated || new Date().toISOString(),
_jiraCreatedAt: jiraObj.created || new Date().toISOString(),
};
// Parse each attribute based on schema
// IMPORTANT: Don't allow attributes to overwrite id, objectKey, or label
const protectedFields = new Set(['id', 'objectKey', 'label', '_objectType', '_jiraUpdatedAt', '_jiraCreatedAt']);
for (const attrDef of typeDef.attributes) {
// Skip if this attribute would overwrite a protected field
if (protectedFields.has(attrDef.fieldName)) {
logger.warn(`JiraAssetsClient: Skipping attribute ${attrDef.fieldName} (${attrDef.name}) - would overwrite protected field`);
continue;
}
const jiraAttr = this.findAttribute(jiraObj.attributes, attrDef.jiraId, attrDef.name);
result[attrDef.fieldName] = this.parseAttributeValue(jiraAttr, attrDef);
const parsedValue = this.parseAttributeValue(jiraAttr, {
type: attrDef.type,
isMultiple: attrDef.isMultiple ?? false, // Default to false if not specified
fieldName: attrDef.fieldName,
});
result[attrDef.fieldName] = parsedValue;
// Debug logging for Confluence Space field
if (attrDef.fieldName === 'confluenceSpace') {
logger.info(`[Confluence Space Debug] Object ${jiraObj.objectKey || jiraObj.id}:`);
logger.info(` - Attribute definition: name="${attrDef.name}", jiraId=${attrDef.jiraId}, type="${attrDef.type}"`);
logger.info(` - Found attribute: ${jiraAttr ? 'yes' : 'no'}`);
if (!jiraAttr) {
// Log all available attributes to help debug
const availableAttrs = jiraObj.attributes?.map(a => {
const attrName = a.objectTypeAttribute?.name || 'unnamed';
return `${attrName} (ID: ${a.objectTypeAttributeId})`;
}).join(', ') || 'none';
logger.warn(` - Available attributes (${jiraObj.attributes?.length || 0}): ${availableAttrs}`);
// Try to find similar attributes
const similarAttrs = jiraObj.attributes?.filter(a => {
const attrName = a.objectTypeAttribute?.name || '';
const lowerAttrName = attrName.toLowerCase();
return lowerAttrName.includes('confluence') || lowerAttrName.includes('space');
});
if (similarAttrs && similarAttrs.length > 0) {
logger.warn(` - Found similar attributes: ${similarAttrs.map(a => a.objectTypeAttribute?.name || 'unnamed').join(', ')}`);
}
} else {
logger.info(` - Raw attribute: ${JSON.stringify(jiraAttr, null, 2)}`);
logger.info(` - Parsed value: ${parsedValue} (type: ${typeof parsedValue})`);
}
}
}
// Final validation - ensure result has required fields
// This should never fail if the code above worked correctly, but it's a safety check
const finalId = String(result.id || '').trim();
const finalObjectKey = String(result.objectKey || '').trim();
const finalLabel = String(result.label || '').trim();
if (!finalId || finalId === 'null' || finalId === 'undefined' || finalId === 'NaN') {
logger.error(`JiraAssetsClient: parseObject result missing or invalid id after all processing. Result: ${JSON.stringify({
hasId: 'id' in result,
hasObjectKey: 'objectKey' in result,
hasLabel: 'label' in result,
id: result.id,
objectKey: result.objectKey,
label: result.label,
resultKeys: Object.keys(result),
jiraObj: {
id: jiraObj.id,
objectKey: jiraObj.objectKey,
label: jiraObj.label,
objectType: jiraObj.objectType?.name
}
})}`);
throw new Error(`Failed to parse Jira object: result missing or invalid id (${finalId})`);
}
if (!finalObjectKey || finalObjectKey === 'null' || finalObjectKey === 'undefined') {
logger.error(`JiraAssetsClient: parseObject result missing or invalid objectKey after all processing. Result: ${JSON.stringify({
id: result.id,
objectKey: result.objectKey,
label: result.label,
resultKeys: Object.keys(result)
})}`);
throw new Error(`Failed to parse Jira object: result missing or invalid objectKey (${finalObjectKey})`);
}
if (!finalLabel || finalLabel === 'null' || finalLabel === 'undefined') {
logger.error(`JiraAssetsClient: parseObject result missing or invalid label after all processing. Result: ${JSON.stringify({
id: result.id,
objectKey: result.objectKey,
label: result.label,
resultKeys: Object.keys(result)
})}`);
throw new Error(`Failed to parse Jira object: result missing or invalid label (${finalLabel})`);
}
return result as T;
@@ -363,22 +897,49 @@ class JiraAssetsClient {
private parseAttributeValue(
jiraAttr: JiraAssetsAttribute | undefined,
attrDef: { type: string; isMultiple: boolean }
attrDef: { type: string; isMultiple: boolean; fieldName?: string }
): unknown {
if (!jiraAttr?.objectAttributeValues?.length) {
return attrDef.isMultiple ? [] : null;
}
const values = jiraAttr.objectAttributeValues;
// Convert legacy attribute values to domain types for type guard usage
// This allows us to use the type guards while maintaining backward compatibility
const values = jiraAttr.objectAttributeValues as unknown as ObjectAttributeValue[];
// Use type guards from domain types
// Generic Confluence field detection: check if any value has a confluencePage
const hasConfluencePage = values.some(v => 'confluencePage' in v && v.confluencePage);
if (hasConfluencePage) {
const confluenceVal = values.find(v => 'confluencePage' in v && v.confluencePage) as ConfluenceValue | undefined;
if (confluenceVal?.confluencePage?.url) {
logger.info(`[Confluence Field Parse] Found Confluence URL for field "${attrDef.fieldName || 'unknown'}": ${confluenceVal.confluencePage.url}`);
// For multiple values, return array of URLs; for single, return the URL string
if (attrDef.isMultiple) {
return values
.filter((v): v is ConfluenceValue => 'confluencePage' in v && !!v.confluencePage)
.map(v => v.confluencePage.url);
}
return confluenceVal.confluencePage.url;
}
// Fallback to displayValue if no URL
const displayVal = values[0]?.displayValue;
if (displayVal) {
logger.info(`[Confluence Field Parse] Using displayValue as fallback for field "${attrDef.fieldName || 'unknown'}": ${displayVal}`);
return String(displayVal);
}
return null;
}
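The Confluence branch above can be read in isolation: collect values carrying a `confluencePage` URL, return all of them for a multi-valued field or the first one otherwise, and fall back to `displayValue`. A sketch under assumed minimal shapes (the `ConfluenceValue` interface and `parseConfluenceField` name are illustrative, not the project's actual domain types):

```typescript
// Assumed minimal shape for a Confluence-backed attribute value.
interface ConfluenceValue { confluencePage?: { url: string }; displayValue?: string; }

// Extract the Confluence URL(s): all URLs when the field is multi-valued,
// the first URL otherwise, with displayValue as a fallback.
function parseConfluenceField(values: ConfluenceValue[], isMultiple: boolean): string | string[] | null {
  const withPage = values.filter(
    (v): v is ConfluenceValue & { confluencePage: { url: string } } => !!v.confluencePage?.url
  );
  if (withPage.length > 0) {
    return isMultiple ? withPage.map(v => v.confluencePage.url) : withPage[0].confluencePage.url;
  }
  const display = values[0]?.displayValue;
  return display ? String(display) : null;
}
```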
switch (attrDef.type) {
case 'reference': {
+ // Use type guard to filter reference values
const refs = values
- .filter(v => v.referencedObject)
+ .filter(isReferenceValue)
.map(v => ({
- objectId: v.referencedObject!.id.toString(),
- objectKey: v.referencedObject!.objectKey,
- label: v.referencedObject!.label,
+ objectId: String(v.referencedObject.id),
+ objectKey: v.referencedObject.objectKey,
+ label: v.referencedObject.label,
} as ObjectReference));
return attrDef.isMultiple ? refs : refs[0] || null;
}
@@ -389,7 +950,14 @@ class JiraAssetsClient {
case 'email':
case 'select':
case 'user': {
- const val = values[0]?.displayValue ?? values[0]?.value ?? null;
+ // Use type guard for simple values when available, otherwise fall back to legacy format
+ const firstVal = values[0];
+ let val: string | null = null;
+ if (isSimpleValue(firstVal)) {
+ val = String(firstVal.value);
+ } else {
+ val = firstVal?.displayValue ?? (firstVal as any)?.value ?? null;
+ }
// Strip HTML if present
if (val && typeof val === 'string' && val.includes('<')) {
return this.stripHtml(val);
@@ -398,35 +966,68 @@ class JiraAssetsClient {
}
case 'integer': {
- const val = values[0]?.value;
- return val ? parseInt(val, 10) : null;
+ const firstVal = values[0];
+ if (isSimpleValue(firstVal)) {
+ const val = typeof firstVal.value === 'number' ? firstVal.value : parseInt(String(firstVal.value), 10);
+ return isNaN(val) ? null : val;
+ }
+ const val = (firstVal as any)?.value;
+ return val ? parseInt(String(val), 10) : null;
}
case 'float': {
- const val = values[0]?.value;
- return val ? parseFloat(val) : null;
+ // Regular float parsing
+ const firstVal = values[0];
+ if (isSimpleValue(firstVal)) {
+ const val = typeof firstVal.value === 'number' ? firstVal.value : parseFloat(String(firstVal.value));
+ return isNaN(val) ? null : val;
+ }
+ const val = (firstVal as any)?.value;
+ const displayVal = firstVal?.displayValue;
+ // Try displayValue first, then value
+ if (displayVal !== undefined && displayVal !== null) {
+ const parsed = typeof displayVal === 'string' ? parseFloat(displayVal) : Number(displayVal);
+ return isNaN(parsed) ? null : parsed;
+ }
+ if (val !== undefined && val !== null) {
+ const parsed = typeof val === 'string' ? parseFloat(val) : Number(val);
+ return isNaN(parsed) ? null : parsed;
+ }
+ return null;
}
case 'boolean': {
- const val = values[0]?.value;
+ const firstVal = values[0];
+ if (isSimpleValue(firstVal)) {
+ return Boolean(firstVal.value);
+ }
+ const val = (firstVal as any)?.value;
return val === 'true' || val === 'Ja';
}
case 'date':
case 'datetime': {
- return values[0]?.value ?? values[0]?.displayValue ?? null;
+ const firstVal = values[0];
+ if (isSimpleValue(firstVal)) {
+ return String(firstVal.value);
+ }
+ return firstVal?.displayValue ?? (firstVal as any)?.value ?? null;
}
case 'status': {
- const statusVal = values[0]?.status;
- if (statusVal) {
- return statusVal.name || null;
+ const firstVal = values[0];
+ if ('status' in firstVal && firstVal.status) {
+ return firstVal.status.name || null;
}
- return values[0]?.displayValue ?? values[0]?.value ?? null;
+ return firstVal?.displayValue ?? (firstVal as any)?.value ?? null;
}
default:
- return values[0]?.displayValue ?? values[0]?.value ?? null;
+ const firstVal = values[0];
+ if (isSimpleValue(firstVal)) {
+ return String(firstVal.value);
+ }
+ return firstVal?.displayValue ?? (firstVal as any)?.value ?? null;
}
}
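Every scalar branch in the switch above follows the same pattern: prefer the typed `value` when the new domain shape is present, otherwise fall back to the legacy `displayValue`/`value` fields, else `null`. A minimal sketch of that chain (the `SimpleValue`/`LegacyValue` shapes and the `isSimpleValue` guard here are stand-in assumptions, not the project's actual domain types):

```typescript
type LegacyValue = { value?: string; displayValue?: string };
type SimpleValue = { value: string | number | boolean };

// Assumed stand-in for the domain type guard used in the diff above.
function isSimpleValue(v: unknown): v is SimpleValue {
  return typeof v === 'object' && v !== null && 'value' in v &&
    (v as SimpleValue).value !== undefined;
}

// Prefer the typed value, fall back to the legacy fields, else null.
function readScalar(v: SimpleValue | LegacyValue | undefined): string | null {
  if (v && isSimpleValue(v)) return String(v.value);
  return (v as LegacyValue)?.displayValue ?? (v as LegacyValue)?.value ?? null;
}
```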


@@ -1,5 +1,12 @@
import winston from 'winston';
import { config } from '../config/env.js';
+ import * as fs from 'fs';
+ import * as path from 'path';
+ import { fileURLToPath } from 'url';
+ import { dirname } from 'path';
+ const __filename = fileURLToPath(import.meta.url);
+ const __dirname = dirname(__filename);
const { combine, timestamp, printf, colorize, errors } = winston.format;
@@ -25,16 +32,54 @@ export const logger = winston.createLogger({
],
});
- if (config.isProduction) {
- logger.add(
- new winston.transports.File({
- filename: 'logs/error.log',
- level: 'error',
- })
- );
- logger.add(
- new winston.transports.File({
- filename: 'logs/combined.log',
- })
- );
+ // Only add file logging if we're in production AND have write permissions
+ // In Azure App Service, console logging is automatically captured, so file logging is optional
+ // Detect Azure App Service environment
+ const isAzureAppService = !!(
+ process.env.AZURE_APP_SERVICE ||
+ process.env.WEBSITE_SITE_NAME ||
+ process.env.WEBSITE_INSTANCE_ID ||
+ process.env.AzureWebJobsStorage
+ );
+ if (config.isProduction && !isAzureAppService) {
+ // For non-Azure environments, try to use logs/ directory
+ const logDir = path.join(__dirname, '../../logs');
+ try {
+ // Ensure directory exists
+ if (!fs.existsSync(logDir)) {
+ fs.mkdirSync(logDir, { recursive: true });
+ }
+ // Test write permissions
+ const testFile = path.join(logDir, '.write-test');
+ try {
+ fs.writeFileSync(testFile, 'test');
+ fs.unlinkSync(testFile);
+ // If we can write, add file transports
+ logger.add(
+ new winston.transports.File({
+ filename: path.join(logDir, 'error.log'),
+ level: 'error',
+ })
+ );
+ logger.add(
+ new winston.transports.File({
+ filename: path.join(logDir, 'combined.log'),
+ })
+ );
+ } catch (writeError) {
+ // Can't write to this directory, skip file logging
+ // Console logging will be used (which is fine for Azure App Service)
+ console.warn('File logging disabled: no write permissions. Using console logging only.');
+ }
+ } catch (dirError) {
+ // Can't create directory, skip file logging
+ console.warn('File logging disabled: cannot create log directory. Using console logging only.');
+ }
}
+ // In Azure App Service, console logs are automatically captured by Azure Monitor
+ // No need for file logging
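The two checks introduced above, environment detection and a write probe, are separable. A sketch of each in isolation (function names are illustrative; the environment variable names are the ones the diff itself tests for):

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Any of these variables is set by the Azure App Service platform, so their
// presence implies console logs are captured and file logging is unnecessary.
function isAzureAppService(env: Record<string, string | undefined>): boolean {
  return !!(env.AZURE_APP_SERVICE || env.WEBSITE_SITE_NAME ||
    env.WEBSITE_INSTANCE_ID || env.AzureWebJobsStorage);
}

// Write probe: create the directory, then write and delete a marker file.
// Returns false instead of throwing so the caller can fall back to console logging.
function canWriteLogs(dir: string): boolean {
  try {
    if (!fs.existsSync(dir)) fs.mkdirSync(dir, { recursive: true });
    const probe = path.join(dir, '.write-test');
    fs.writeFileSync(probe, 'test');
    fs.unlinkSync(probe);
    return true;
  } catch {
    return false;
  }
}
```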


@@ -1,893 +0,0 @@
import { calculateRequiredEffortApplicationManagement } from './effortCalculation.js';
import type {
ApplicationDetails,
ApplicationListItem,
ReferenceValue,
SearchFilters,
SearchResult,
ClassificationResult,
TeamDashboardData,
ApplicationStatus,
} from '../types/index.js';
// Mock application data for development/demo
const mockApplications: ApplicationDetails[] = [
{
id: '1',
key: 'APP-001',
name: 'Epic Hyperspace',
searchReference: 'EPIC-HS',
description: 'Elektronisch Patiëntendossier module voor klinische documentatie en workflow. Ondersteunt de volledige patiëntenzorg van intake tot ontslag.',
supplierProduct: 'Epic Systems / Hyperspace',
organisation: 'Zorg',
hostingType: { objectId: '1', key: 'HOST-1', name: 'On-premises' },
status: 'In Production',
businessImportance: 'Kritiek',
businessImpactAnalyse: { objectId: '1', key: 'BIA-1', name: 'BIA-2024-0042 (Klasse E)' },
systemOwner: 'J. Janssen',
businessOwner: 'Dr. A. van der Berg',
functionalApplicationManagement: 'Team EPD',
technicalApplicationManagement: 'Team Zorgapplicaties',
technicalApplicationManagementPrimary: 'Jan Jansen',
technicalApplicationManagementSecondary: 'Piet Pietersen',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '3', key: 'DYN-3', name: '3 - Hoog' },
complexityFactor: { objectId: '4', key: 'CMP-4', name: '4 - Zeer hoog' },
numberOfUsers: null,
governanceModel: { objectId: 'A', key: 'GOV-A', name: 'Centraal Beheer' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '2',
key: 'APP-002',
name: 'SAP Finance',
searchReference: 'SAP-FIN',
description: 'Enterprise Resource Planning systeem voor financiële administratie, budgettering en controlling.',
supplierProduct: 'SAP SE / SAP S/4HANA',
organisation: 'Bedrijfsvoering',
hostingType: { objectId: '3', key: 'HOST-3', name: 'Cloud' },
status: 'In Production',
businessImportance: 'Kritiek',
businessImpactAnalyse: { objectId: '2', key: 'BIA-2', name: 'BIA-2024-0015 (Klasse D)' },
systemOwner: 'M. de Groot',
businessOwner: 'P. Bakker',
functionalApplicationManagement: 'Team ERP',
technicalApplicationManagement: 'Team Bedrijfsapplicaties',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '2', key: 'DYN-2', name: '2 - Gemiddeld' },
complexityFactor: { objectId: '3', key: 'CMP-3', name: '3 - Hoog' },
numberOfUsers: null,
governanceModel: null,
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '3',
key: 'APP-003',
name: 'Philips IntelliSpace PACS',
searchReference: 'PACS',
description: 'Picture Archiving and Communication System voor opslag en weergave van medische beelden inclusief radiologie, CT en MRI.',
supplierProduct: 'Philips Healthcare / IntelliSpace PACS',
organisation: 'Zorg',
hostingType: { objectId: '1', key: 'HOST-1', name: 'On-premises' },
status: 'In Production',
businessImportance: 'Hoog',
businessImpactAnalyse: { objectId: '3', key: 'BIA-3', name: 'BIA-2024-0028 (Klasse D)' },
systemOwner: 'R. Hermans',
businessOwner: 'Dr. K. Smit',
functionalApplicationManagement: 'Team Beeldvorming',
technicalApplicationManagement: 'Team Zorgapplicaties',
medischeTechniek: true,
applicationFunctions: [],
dynamicsFactor: null,
complexityFactor: null,
numberOfUsers: null,
governanceModel: { objectId: 'C', key: 'GOV-C', name: 'Uitbesteed met ICMT-Regie' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '4',
key: 'APP-004',
name: 'ChipSoft HiX',
searchReference: 'HIX',
description: 'Ziekenhuisinformatiesysteem en EPD voor patiëntregistratie, zorgplanning en klinische workflow.',
supplierProduct: 'ChipSoft / HiX',
organisation: 'Zorg',
hostingType: { objectId: '1', key: 'HOST-1', name: 'On-premises' },
status: 'In Production',
businessImportance: 'Kritiek',
businessImpactAnalyse: { objectId: '5', key: 'BIA-5', name: 'BIA-2024-0001 (Klasse F)' },
systemOwner: 'T. van Dijk',
businessOwner: 'Dr. L. Mulder',
functionalApplicationManagement: 'Team ZIS',
technicalApplicationManagement: 'Team Zorgapplicaties',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '4', key: 'DYN-4', name: '4 - Zeer hoog' },
complexityFactor: { objectId: '4', key: 'CMP-4', name: '4 - Zeer hoog' },
numberOfUsers: null,
governanceModel: { objectId: 'A', key: 'GOV-A', name: 'Centraal Beheer' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '5',
key: 'APP-005',
name: 'TOPdesk',
searchReference: 'TOPDESK',
description: 'IT Service Management platform voor incident, problem en change management.',
supplierProduct: 'TOPdesk / TOPdesk Enterprise',
organisation: 'ICMT',
hostingType: { objectId: '2', key: 'HOST-2', name: 'SaaS' },
status: 'In Production',
businessImportance: 'Hoog',
businessImpactAnalyse: { objectId: '6', key: 'BIA-6', name: 'BIA-2024-0055 (Klasse C)' },
systemOwner: 'B. Willems',
businessOwner: 'H. Claessen',
functionalApplicationManagement: 'Team Servicedesk',
technicalApplicationManagement: 'Team ICT Beheer',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '2', key: 'DYN-2', name: '2 - Gemiddeld' },
complexityFactor: { objectId: '2', key: 'CMP-2', name: '2 - Gemiddeld' },
numberOfUsers: null,
governanceModel: null,
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '6',
key: 'APP-006',
name: 'Microsoft 365',
searchReference: 'M365',
description: 'Kantoorautomatisering suite met Teams, Outlook, SharePoint, OneDrive en Office applicaties.',
supplierProduct: 'Microsoft / Microsoft 365 E5',
organisation: 'ICMT',
hostingType: { objectId: '2', key: 'HOST-2', name: 'SaaS' },
status: 'In Production',
businessImportance: 'Kritiek',
businessImpactAnalyse: { objectId: '1', key: 'BIA-1', name: 'BIA-2024-0042 (Klasse E)' },
systemOwner: 'S. Jansen',
businessOwner: 'N. Peters',
functionalApplicationManagement: 'Team Werkplek',
technicalApplicationManagement: 'Team Cloud',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '3', key: 'DYN-3', name: '3 - Hoog' },
complexityFactor: { objectId: '3', key: 'CMP-3', name: '3 - Hoog' },
numberOfUsers: { objectId: '7', key: 'USR-7', name: '> 15.000' },
governanceModel: { objectId: 'A', key: 'GOV-A', name: 'Centraal Beheer' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '7',
key: 'APP-007',
name: 'Carestream Vue PACS',
searchReference: 'VUE-PACS',
description: 'Enterprise imaging platform voor radiologie en cardiologie beeldvorming.',
supplierProduct: 'Carestream Health / Vue PACS',
organisation: 'Zorg',
hostingType: { objectId: '1', key: 'HOST-1', name: 'On-premises' },
status: 'End of life',
businessImportance: 'Gemiddeld',
businessImpactAnalyse: { objectId: '9', key: 'BIA-9', name: 'BIA-2022-0089 (Klasse C)' },
systemOwner: 'R. Hermans',
businessOwner: 'Dr. K. Smit',
functionalApplicationManagement: 'Team Beeldvorming',
technicalApplicationManagement: 'Team Zorgapplicaties',
medischeTechniek: true,
applicationFunctions: [],
dynamicsFactor: { objectId: '1', key: 'DYN-1', name: '1 - Stabiel' },
complexityFactor: { objectId: '2', key: 'CMP-2', name: '2 - Gemiddeld' },
numberOfUsers: null,
governanceModel: null,
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '8',
key: 'APP-008',
name: 'AFAS Profit',
searchReference: 'AFAS',
description: 'HR en salarisadministratie systeem voor personeelsbeheer, tijdregistratie en verloning.',
supplierProduct: 'AFAS Software / Profit',
organisation: 'Bedrijfsvoering',
hostingType: { objectId: '2', key: 'HOST-2', name: 'SaaS' },
status: 'In Production',
businessImportance: 'Hoog',
businessImpactAnalyse: { objectId: '7', key: 'BIA-7', name: 'BIA-2024-0022 (Klasse D)' },
systemOwner: 'E. Hendriks',
businessOwner: 'C. van Leeuwen',
functionalApplicationManagement: 'Team HR',
technicalApplicationManagement: 'Team Bedrijfsapplicaties',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '2', key: 'DYN-2', name: '2 - Gemiddeld' },
complexityFactor: { objectId: '2', key: 'CMP-2', name: '2 - Gemiddeld' },
numberOfUsers: { objectId: '6', key: 'USR-6', name: '10.000 - 15.000' },
governanceModel: { objectId: 'C', key: 'GOV-C', name: 'Uitbesteed met ICMT-Regie' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '9',
key: 'APP-009',
name: 'Zenya',
searchReference: 'ZENYA',
description: 'Kwaliteitsmanagementsysteem voor protocollen, procedures en incidentmeldingen.',
supplierProduct: 'Infoland / Zenya',
organisation: 'Kwaliteit',
hostingType: { objectId: '2', key: 'HOST-2', name: 'SaaS' },
status: 'In Production',
businessImportance: 'Hoog',
businessImpactAnalyse: { objectId: '8', key: 'BIA-8', name: 'BIA-2024-0067 (Klasse C)' },
systemOwner: 'F. Bos',
businessOwner: 'I. Dekker',
functionalApplicationManagement: 'Team Kwaliteit',
technicalApplicationManagement: 'Team Bedrijfsapplicaties',
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '2', key: 'DYN-2', name: '2 - Gemiddeld' },
complexityFactor: { objectId: '1', key: 'CMP-1', name: '1 - Laag' },
numberOfUsers: { objectId: '4', key: 'USR-4', name: '2.000 - 5.000' },
governanceModel: { objectId: 'C', key: 'GOV-C', name: 'Uitbesteed met ICMT-Regie' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
{
id: '10',
key: 'APP-010',
name: 'Castor EDC',
searchReference: 'CASTOR',
description: 'Electronic Data Capture platform voor klinisch wetenschappelijk onderzoek en trials.',
supplierProduct: 'Castor / Castor EDC',
organisation: 'Onderzoek',
hostingType: { objectId: '2', key: 'HOST-2', name: 'SaaS' },
status: 'In Production',
businessImportance: 'Gemiddeld',
businessImpactAnalyse: null, // BIA-2024-0078 (Klasse B) not in mock list
systemOwner: 'G. Vos',
businessOwner: 'Prof. Dr. W. Maas',
functionalApplicationManagement: 'Team Onderzoek',
technicalApplicationManagement: null,
medischeTechniek: false,
applicationFunctions: [],
dynamicsFactor: { objectId: '1', key: 'DYN-1', name: '1 - Stabiel' },
complexityFactor: { objectId: '1', key: 'CMP-1', name: '1 - Laag' },
numberOfUsers: { objectId: '1', key: 'USR-1', name: '< 100' },
governanceModel: { objectId: 'D', key: 'GOV-D', name: 'Uitbesteed met Business-Regie' },
applicationSubteam: null,
applicationTeam: null,
applicationType: null,
platform: null,
requiredEffortApplicationManagement: null,
},
];
// Mock reference data
const mockDynamicsFactors: ReferenceValue[] = [
{ objectId: '1', key: 'DYN-1', name: '1 - Stabiel', summary: 'Weinig wijzigingen, < 2 releases/jaar', description: 'Weinig wijzigingen, < 2 releases/jaar', factor: 0.8 },
{ objectId: '2', key: 'DYN-2', name: '2 - Gemiddeld', summary: 'Regelmatige wijzigingen, 2-4 releases/jaar', description: 'Regelmatige wijzigingen, 2-4 releases/jaar', factor: 1.0 },
{ objectId: '3', key: 'DYN-3', name: '3 - Hoog', summary: 'Veel wijzigingen, > 4 releases/jaar', description: 'Veel wijzigingen, > 4 releases/jaar', factor: 1.2 },
{ objectId: '4', key: 'DYN-4', name: '4 - Zeer hoog', summary: 'Continu in beweging, grote transformaties', description: 'Continu in beweging, grote transformaties', factor: 1.5 },
];
const mockComplexityFactors: ReferenceValue[] = [
{ objectId: '1', key: 'CMP-1', name: '1 - Laag', summary: 'Standalone, weinig integraties', description: 'Standalone, weinig integraties', factor: 0.8 },
{ objectId: '2', key: 'CMP-2', name: '2 - Gemiddeld', summary: 'Enkele integraties, beperkt maatwerk', description: 'Enkele integraties, beperkt maatwerk', factor: 1.0 },
{ objectId: '3', key: 'CMP-3', name: '3 - Hoog', summary: 'Veel integraties, significant maatwerk', description: 'Veel integraties, significant maatwerk', factor: 1.3 },
{ objectId: '4', key: 'CMP-4', name: '4 - Zeer hoog', summary: 'Platform, uitgebreide governance', description: 'Platform, uitgebreide governance', factor: 1.6 },
];
const mockNumberOfUsers: ReferenceValue[] = [
{ objectId: '1', key: 'USR-1', name: '< 100', order: 1, factor: 0.5 },
{ objectId: '2', key: 'USR-2', name: '100 - 500', order: 2, factor: 0.7 },
{ objectId: '3', key: 'USR-3', name: '500 - 2.000', order: 3, factor: 1.0 },
{ objectId: '4', key: 'USR-4', name: '2.000 - 5.000', order: 4, factor: 1.2 },
{ objectId: '5', key: 'USR-5', name: '5.000 - 10.000', order: 5, factor: 1.4 },
{ objectId: '6', key: 'USR-6', name: '10.000 - 15.000', order: 6, factor: 1.6 },
{ objectId: '7', key: 'USR-7', name: '> 15.000', order: 7, factor: 2.0 },
];
const mockGovernanceModels: ReferenceValue[] = [
{ objectId: 'A', key: 'GOV-A', name: 'Centraal Beheer', summary: 'ICMT voert volledig beheer uit', description: 'ICMT voert volledig beheer uit' },
{ objectId: 'B', key: 'GOV-B', name: 'Federatief Beheer', summary: 'ICMT + business delen beheer', description: 'ICMT + business delen beheer' },
{ objectId: 'C', key: 'GOV-C', name: 'Uitbesteed met ICMT-Regie', summary: 'Leverancier beheert, ICMT regisseert', description: 'Leverancier beheert, ICMT regisseert' },
{ objectId: 'D', key: 'GOV-D', name: 'Uitbesteed met Business-Regie', summary: 'Leverancier beheert, business regisseert', description: 'Leverancier beheert, business regisseert' },
{ objectId: 'E', key: 'GOV-E', name: 'Volledig Decentraal Beheer', summary: 'Business voert volledig beheer uit', description: 'Business voert volledig beheer uit' },
];
const mockOrganisations: ReferenceValue[] = [
{ objectId: '1', key: 'ORG-1', name: 'Zorg' },
{ objectId: '2', key: 'ORG-2', name: 'Bedrijfsvoering' },
{ objectId: '3', key: 'ORG-3', name: 'ICMT' },
{ objectId: '4', key: 'ORG-4', name: 'Kwaliteit' },
{ objectId: '5', key: 'ORG-5', name: 'Onderzoek' },
{ objectId: '6', key: 'ORG-6', name: 'Onderwijs' },
];
const mockHostingTypes: ReferenceValue[] = [
{ objectId: '1', key: 'HOST-1', name: 'On-premises' },
{ objectId: '2', key: 'HOST-2', name: 'SaaS' },
{ objectId: '3', key: 'HOST-3', name: 'Cloud' },
{ objectId: '4', key: 'HOST-4', name: 'Hybrid' },
];
const mockBusinessImpactAnalyses: ReferenceValue[] = [
{ objectId: '1', key: 'BIA-1', name: 'BIA-2024-0042 (Klasse E)' },
{ objectId: '2', key: 'BIA-2', name: 'BIA-2024-0015 (Klasse D)' },
{ objectId: '3', key: 'BIA-3', name: 'BIA-2024-0028 (Klasse D)' },
{ objectId: '4', key: 'BIA-4', name: 'BIA-2024-0035 (Klasse C)' },
{ objectId: '5', key: 'BIA-5', name: 'BIA-2024-0001 (Klasse F)' },
{ objectId: '6', key: 'BIA-6', name: 'BIA-2024-0055 (Klasse C)' },
{ objectId: '7', key: 'BIA-7', name: 'BIA-2024-0022 (Klasse D)' },
{ objectId: '8', key: 'BIA-8', name: 'BIA-2024-0067 (Klasse C)' },
{ objectId: '9', key: 'BIA-9', name: 'BIA-2022-0089 (Klasse C)' },
];
const mockApplicationSubteams: ReferenceValue[] = [
{ objectId: '1', key: 'SUBTEAM-1', name: 'Zorgapplicaties' },
{ objectId: '2', key: 'SUBTEAM-2', name: 'Bedrijfsvoering' },
{ objectId: '3', key: 'SUBTEAM-3', name: 'Infrastructuur' },
];
const mockApplicationTypes: ReferenceValue[] = [
{ objectId: '1', key: 'TYPE-1', name: 'Applicatie' },
{ objectId: '2', key: 'TYPE-2', name: 'Platform' },
{ objectId: '3', key: 'TYPE-3', name: 'Workload' },
];
// Classification history
const mockClassificationHistory: ClassificationResult[] = [];
// Mock data service
export class MockDataService {
private applications: ApplicationDetails[] = [...mockApplications];
async searchApplications(
filters: SearchFilters,
page: number = 1,
pageSize: number = 25
): Promise<SearchResult> {
let filtered = [...this.applications];
// Apply search text filter
if (filters.searchText) {
const search = filters.searchText.toLowerCase();
filtered = filtered.filter(
(app) =>
app.name.toLowerCase().includes(search) ||
(app.description?.toLowerCase().includes(search) ?? false) ||
(app.supplierProduct?.toLowerCase().includes(search) ?? false) ||
(app.searchReference?.toLowerCase().includes(search) ?? false)
);
}
// Apply status filter
if (filters.statuses && filters.statuses.length > 0) {
filtered = filtered.filter((app) => {
// Handle empty/null status - treat as 'Undefined' for filtering
const status = app.status || 'Undefined';
return filters.statuses!.includes(status as ApplicationStatus);
});
}
// Apply applicationFunction filter
if (filters.applicationFunction === 'empty') {
filtered = filtered.filter((app) => app.applicationFunctions.length === 0);
} else if (filters.applicationFunction === 'filled') {
filtered = filtered.filter((app) => app.applicationFunctions.length > 0);
}
// Apply governanceModel filter
if (filters.governanceModel === 'empty') {
filtered = filtered.filter((app) => !app.governanceModel);
} else if (filters.governanceModel === 'filled') {
filtered = filtered.filter((app) => !!app.governanceModel);
}
// Apply dynamicsFactor filter
if (filters.dynamicsFactor === 'empty') {
filtered = filtered.filter((app) => !app.dynamicsFactor);
} else if (filters.dynamicsFactor === 'filled') {
filtered = filtered.filter((app) => !!app.dynamicsFactor);
}
// Apply complexityFactor filter
if (filters.complexityFactor === 'empty') {
filtered = filtered.filter((app) => !app.complexityFactor);
} else if (filters.complexityFactor === 'filled') {
filtered = filtered.filter((app) => !!app.complexityFactor);
}
// Apply applicationSubteam filter
if (filters.applicationSubteam === 'empty') {
filtered = filtered.filter((app) => !app.applicationSubteam);
} else if (filters.applicationSubteam === 'filled') {
filtered = filtered.filter((app) => !!app.applicationSubteam);
}
// Apply applicationType filter
if (filters.applicationType === 'empty') {
filtered = filtered.filter((app) => !app.applicationType);
} else if (filters.applicationType === 'filled') {
filtered = filtered.filter((app) => !!app.applicationType);
}
// Apply organisation filter
if (filters.organisation) {
filtered = filtered.filter((app) => app.organisation === filters.organisation);
}
// Apply hostingType filter
if (filters.hostingType) {
filtered = filtered.filter((app) => {
if (!app.hostingType) return false;
return app.hostingType.name === filters.hostingType || app.hostingType.key === filters.hostingType;
});
}
if (filters.businessImportance) {
filtered = filtered.filter((app) => app.businessImportance === filters.businessImportance);
}
const totalCount = filtered.length;
const totalPages = Math.ceil(totalCount / pageSize);
const startIndex = (page - 1) * pageSize;
const paginatedApps = filtered.slice(startIndex, startIndex + pageSize);
return {
applications: paginatedApps.map((app) => {
const effort = calculateRequiredEffortApplicationManagement(app);
return {
id: app.id,
key: app.key,
name: app.name,
status: app.status,
applicationFunctions: app.applicationFunctions,
governanceModel: app.governanceModel,
dynamicsFactor: app.dynamicsFactor,
complexityFactor: app.complexityFactor,
applicationSubteam: app.applicationSubteam,
applicationTeam: app.applicationTeam,
applicationType: app.applicationType,
platform: app.platform,
requiredEffortApplicationManagement: effort,
};
}),
totalCount,
currentPage: page,
pageSize,
totalPages,
};
}
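`searchApplications` above applies each filter in turn and then slices a single 1-based page out of the filtered list. The paging arithmetic can be sketched generically (this helper is an illustration, not code from the repository):

```typescript
// Generic page slice matching the arithmetic in searchApplications:
// pages are 1-based and totalPages rounds up.
function paginate<T>(items: T[], page: number, pageSize: number) {
  const totalCount = items.length;
  const totalPages = Math.ceil(totalCount / pageSize);
  const start = (page - 1) * pageSize;
  return {
    items: items.slice(start, start + pageSize),
    totalCount,
    totalPages,
    currentPage: page,
    pageSize,
  };
}
```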
async getApplicationById(id: string): Promise<ApplicationDetails | null> {
const app = this.applications.find((app) => app.id === id);
if (!app) return null;
// Calculate required effort
const effort = calculateRequiredEffortApplicationManagement(app);
return {
...app,
requiredEffortApplicationManagement: effort,
};
}
async updateApplication(
id: string,
updates: {
applicationFunctions?: ReferenceValue[];
dynamicsFactor?: ReferenceValue;
complexityFactor?: ReferenceValue;
numberOfUsers?: ReferenceValue;
governanceModel?: ReferenceValue;
applicationSubteam?: ReferenceValue;
applicationTeam?: ReferenceValue;
applicationType?: ReferenceValue;
hostingType?: ReferenceValue;
businessImpactAnalyse?: ReferenceValue;
}
): Promise<boolean> {
const index = this.applications.findIndex((app) => app.id === id);
if (index === -1) return false;
const app = this.applications[index];
if (updates.applicationFunctions !== undefined) {
app.applicationFunctions = updates.applicationFunctions;
}
if (updates.dynamicsFactor !== undefined) {
app.dynamicsFactor = updates.dynamicsFactor;
}
if (updates.complexityFactor !== undefined) {
app.complexityFactor = updates.complexityFactor;
}
if (updates.numberOfUsers !== undefined) {
app.numberOfUsers = updates.numberOfUsers;
}
if (updates.governanceModel !== undefined) {
app.governanceModel = updates.governanceModel;
}
if (updates.applicationSubteam !== undefined) {
app.applicationSubteam = updates.applicationSubteam;
}
if (updates.applicationTeam !== undefined) {
app.applicationTeam = updates.applicationTeam;
}
if (updates.applicationType !== undefined) {
app.applicationType = updates.applicationType;
}
if (updates.hostingType !== undefined) {
app.hostingType = updates.hostingType;
}
if (updates.businessImpactAnalyse !== undefined) {
app.businessImpactAnalyse = updates.businessImpactAnalyse;
}
return true;
}
async getDynamicsFactors(): Promise<ReferenceValue[]> {
return mockDynamicsFactors;
}
async getComplexityFactors(): Promise<ReferenceValue[]> {
return mockComplexityFactors;
}
async getNumberOfUsers(): Promise<ReferenceValue[]> {
return mockNumberOfUsers;
}
async getGovernanceModels(): Promise<ReferenceValue[]> {
return mockGovernanceModels;
}
async getOrganisations(): Promise<ReferenceValue[]> {
return mockOrganisations;
}
async getHostingTypes(): Promise<ReferenceValue[]> {
return mockHostingTypes;
}
async getBusinessImpactAnalyses(): Promise<ReferenceValue[]> {
return mockBusinessImpactAnalyses;
}
async getApplicationManagementHosting(): Promise<ReferenceValue[]> {
// Mock Application Management - Hosting values (v25)
return [
{ objectId: '1', key: 'AMH-1', name: 'On-Premises' },
{ objectId: '2', key: 'AMH-2', name: 'Azure - Eigen beheer' },
{ objectId: '3', key: 'AMH-3', name: 'Azure - Delegated Management' },
{ objectId: '4', key: 'AMH-4', name: 'Extern (SaaS)' },
];
}
async getApplicationManagementTAM(): Promise<ReferenceValue[]> {
// Mock Application Management - TAM values
return [
{ objectId: '1', key: 'TAM-1', name: 'ICMT' },
{ objectId: '2', key: 'TAM-2', name: 'Business' },
{ objectId: '3', key: 'TAM-3', name: 'Leverancier' },
];
}
async getApplicationFunctions(): Promise<ReferenceValue[]> {
// Return empty for mock - in real implementation, this comes from Jira
return [];
}
async getApplicationSubteams(): Promise<ReferenceValue[]> {
// Return empty for mock - in real implementation, this comes from Jira
return [];
}
async getApplicationTypes(): Promise<ReferenceValue[]> {
// Return empty for mock - in real implementation, this comes from Jira
return [];
}
async getBusinessImportance(): Promise<ReferenceValue[]> {
// Return empty for mock - in real implementation, this comes from Jira
return [];
}
async getApplicationFunctionCategories(): Promise<ReferenceValue[]> {
// Return empty for mock - in real implementation, this comes from Jira
return [];
}
async getStats() {
// Filter out applications with status "Closed" for KPIs
const activeApplications = this.applications.filter((a) => a.status !== 'Closed');
const total = activeApplications.length;
const classified = activeApplications.filter((a) => a.applicationFunctions.length > 0).length;
const unclassified = total - classified;
const byStatus: Record<string, number> = {};
const byGovernanceModel: Record<string, number> = {};
activeApplications.forEach((app) => {
if (app.status) {
byStatus[app.status] = (byStatus[app.status] || 0) + 1;
}
if (app.governanceModel) {
byGovernanceModel[app.governanceModel.name] =
(byGovernanceModel[app.governanceModel.name] || 0) + 1;
}
});
return {
totalApplications: total,
classifiedCount: classified,
unclassifiedCount: unclassified,
byStatus,
byDomain: {},
byGovernanceModel,
recentClassifications: mockClassificationHistory.slice(-10),
};
}
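`getStats` builds its `byStatus` and `byGovernanceModel` maps with the same count-by-key loop, skipping items whose key is empty. That pattern, extracted as a sketch (the `countBy` helper is hypothetical):

```typescript
// Count occurrences per key, as getStats does for status and governance model.
// Items whose key resolves to null/empty are skipped rather than bucketed.
function countBy<T>(items: T[], key: (item: T) => string | null): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const item of items) {
    const k = key(item);
    if (k) counts[k] = (counts[k] || 0) + 1;
  }
  return counts;
}
```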
addClassificationResult(result: ClassificationResult): void {
mockClassificationHistory.push(result);
}
getClassificationHistory(): ClassificationResult[] {
return [...mockClassificationHistory];
}
async getTeamDashboardData(excludedStatuses: ApplicationStatus[] = []): Promise<TeamDashboardData> {
// Convert ApplicationDetails to ApplicationListItem for dashboard
let listItems: ApplicationListItem[] = this.applications.map(app => ({
id: app.id,
key: app.key,
name: app.name,
status: app.status,
applicationFunctions: app.applicationFunctions,
governanceModel: app.governanceModel,
dynamicsFactor: app.dynamicsFactor,
complexityFactor: app.complexityFactor,
applicationSubteam: app.applicationSubteam,
applicationTeam: app.applicationTeam,
applicationType: app.applicationType,
platform: app.platform,
requiredEffortApplicationManagement: app.requiredEffortApplicationManagement,
}));
// Filter out excluded statuses
if (excludedStatuses.length > 0) {
listItems = listItems.filter(app => !app.status || !excludedStatuses.includes(app.status));
}
// Separate applications into Platforms, Workloads, and regular applications
const platforms: ApplicationListItem[] = [];
const workloads: ApplicationListItem[] = [];
const regularApplications: ApplicationListItem[] = [];
for (const app of listItems) {
const isPlatform = app.applicationType?.name === 'Platform';
const isWorkload = app.platform !== null;
if (isPlatform) {
platforms.push(app);
} else if (isWorkload) {
workloads.push(app);
} else {
regularApplications.push(app);
}
}
// Group workloads by their platform
const workloadsByPlatform = new Map<string, ApplicationListItem[]>();
for (const workload of workloads) {
const platformId = workload.platform!.objectId;
if (!workloadsByPlatform.has(platformId)) {
workloadsByPlatform.set(platformId, []);
}
workloadsByPlatform.get(platformId)!.push(workload);
}
// Build PlatformWithWorkloads structures
const platformsWithWorkloads: import('../types/index.js').PlatformWithWorkloads[] = [];
for (const platform of platforms) {
const platformWorkloads = workloadsByPlatform.get(platform.id) || [];
const platformEffort = platform.requiredEffortApplicationManagement || 0;
const workloadsEffort = platformWorkloads.reduce((sum, w) => sum + (w.requiredEffortApplicationManagement || 0), 0);
platformsWithWorkloads.push({
platform,
workloads: platformWorkloads,
platformEffort,
workloadsEffort,
totalEffort: platformEffort + workloadsEffort,
});
}
// Group all applications (regular + platforms + workloads) by subteam
const subteamMap = new Map<string, {
regular: ApplicationListItem[];
platforms: import('../types/index.js').PlatformWithWorkloads[];
}>();
const unassigned: {
regular: ApplicationListItem[];
platforms: import('../types/index.js').PlatformWithWorkloads[];
} = {
regular: [],
platforms: [],
};
// Group regular applications by subteam
for (const app of regularApplications) {
if (app.applicationSubteam) {
const subteamId = app.applicationSubteam.objectId;
if (!subteamMap.has(subteamId)) {
subteamMap.set(subteamId, { regular: [], platforms: [] });
}
subteamMap.get(subteamId)!.regular.push(app);
} else {
unassigned.regular.push(app);
}
}
// Group platforms by subteam
for (const platformWithWorkloads of platformsWithWorkloads) {
const platform = platformWithWorkloads.platform;
if (platform.applicationSubteam) {
const subteamId = platform.applicationSubteam.objectId;
if (!subteamMap.has(subteamId)) {
subteamMap.set(subteamId, { regular: [], platforms: [] });
}
subteamMap.get(subteamId)!.platforms.push(platformWithWorkloads);
} else {
unassigned.platforms.push(platformWithWorkloads);
}
}
// Build subteams from mock data
const allSubteams = mockApplicationSubteams;
const subteams: import('../types/index.js').TeamDashboardSubteam[] = allSubteams.map(subteamRef => {
const subteamData = subteamMap.get(subteamRef.objectId) || { regular: [], platforms: [] };
const regularApps = subteamData.regular;
const platforms = subteamData.platforms;
// Calculate total effort: regular apps + platforms (including their workloads)
const regularEffort = regularApps.reduce((sum, app) =>
sum + (app.requiredEffortApplicationManagement || 0), 0
);
const platformsEffort = platforms.reduce((sum, p) => sum + p.totalEffort, 0);
const totalEffort = regularEffort + platformsEffort;
// Calculate total application count: regular apps + platforms + workloads
const platformsCount = platforms.length;
const workloadsCount = platforms.reduce((sum, p) => sum + p.workloads.length, 0);
const applicationCount = regularApps.length + platformsCount + workloadsCount;
// Calculate governance model distribution (including platforms and workloads)
const byGovernanceModel: Record<string, number> = {};
for (const app of regularApps) {
const govModel = app.governanceModel?.name || 'Niet ingesteld';
byGovernanceModel[govModel] = (byGovernanceModel[govModel] || 0) + 1;
}
for (const platformWithWorkloads of platforms) {
const platform = platformWithWorkloads.platform;
const govModel = platform.governanceModel?.name || 'Niet ingesteld';
byGovernanceModel[govModel] = (byGovernanceModel[govModel] || 0) + 1;
// Also count workloads
for (const workload of platformWithWorkloads.workloads) {
const workloadGovModel = workload.governanceModel?.name || 'Niet ingesteld';
byGovernanceModel[workloadGovModel] = (byGovernanceModel[workloadGovModel] || 0) + 1;
}
}
return {
subteam: subteamRef,
applications: regularApps,
platforms,
totalEffort,
minEffort: totalEffort * 0.8, // Mock: min is 80% of total
maxEffort: totalEffort * 1.2, // Mock: max is 120% of total
applicationCount,
byGovernanceModel,
};
}).filter(s => s.applicationCount > 0); // Only include subteams with apps
// Create a virtual team containing all subteams (since Team doesn't exist in mock data)
const virtualTeam: import('../types/index.js').TeamDashboardTeam = {
team: {
objectId: 'mock-team-1',
key: 'TEAM-1',
name: 'Mock Team',
teamType: 'Business',
},
subteams,
totalEffort: subteams.reduce((sum, s) => sum + s.totalEffort, 0),
minEffort: subteams.reduce((sum, s) => sum + s.minEffort, 0),
maxEffort: subteams.reduce((sum, s) => sum + s.maxEffort, 0),
applicationCount: subteams.reduce((sum, s) => sum + s.applicationCount, 0),
byGovernanceModel: subteams.reduce((acc, s) => {
for (const [key, count] of Object.entries(s.byGovernanceModel)) {
acc[key] = (acc[key] || 0) + count;
}
return acc;
}, {} as Record<string, number>),
};
// Calculate unassigned totals
const unassignedRegularEffort = unassigned.regular.reduce((sum, app) =>
sum + (app.requiredEffortApplicationManagement || 0), 0
);
const unassignedPlatformsEffort = unassigned.platforms.reduce((sum, p) => sum + p.totalEffort, 0);
const unassignedTotalEffort = unassignedRegularEffort + unassignedPlatformsEffort;
const unassignedPlatformsCount = unassigned.platforms.length;
const unassignedWorkloadsCount = unassigned.platforms.reduce((sum, p) => sum + p.workloads.length, 0);
const unassignedApplicationCount = unassigned.regular.length + unassignedPlatformsCount + unassignedWorkloadsCount;
// Calculate governance model distribution for unassigned
const unassignedByGovernanceModel: Record<string, number> = {};
for (const app of unassigned.regular) {
const govModel = app.governanceModel?.name || 'Niet ingesteld';
unassignedByGovernanceModel[govModel] = (unassignedByGovernanceModel[govModel] || 0) + 1;
}
for (const platformWithWorkloads of unassigned.platforms) {
const platform = platformWithWorkloads.platform;
const govModel = platform.governanceModel?.name || 'Niet ingesteld';
unassignedByGovernanceModel[govModel] = (unassignedByGovernanceModel[govModel] || 0) + 1;
for (const workload of platformWithWorkloads.workloads) {
const workloadGovModel = workload.governanceModel?.name || 'Niet ingesteld';
unassignedByGovernanceModel[workloadGovModel] = (unassignedByGovernanceModel[workloadGovModel] || 0) + 1;
}
}
return {
teams: subteams.length > 0 ? [virtualTeam] : [],
unassigned: {
subteam: null,
applications: unassigned.regular,
platforms: unassigned.platforms,
totalEffort: unassignedTotalEffort,
minEffort: unassignedTotalEffort * 0.8, // Mock: min is 80% of total
maxEffort: unassignedTotalEffort * 1.2, // Mock: max is 120% of total
applicationCount: unassignedApplicationCount,
byGovernanceModel: unassignedByGovernanceModel,
},
};
}
}
export const mockDataService = new MockDataService();
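The subteam grouping in `getTeamDashboardData` above follows a common Map-accumulate pattern: keyed buckets plus an explicit `unassigned` list. A minimal standalone sketch of that pattern (the `Item` shape and field names here are illustrative, not the real `ApplicationListItem`):

```typescript
// Standalone sketch of the Map-based grouping used in getTeamDashboardData.
// Hypothetical item shape; the real code groups ApplicationListItem by applicationSubteam.objectId.
interface Item { id: string; subteamId: string | null; effort: number; }

function groupBySubteam(items: Item[]): { groups: Map<string, Item[]>; unassigned: Item[] } {
  const groups = new Map<string, Item[]>();
  const unassigned: Item[] = [];
  for (const item of items) {
    if (item.subteamId === null) {
      unassigned.push(item); // no subteam reference -> separate bucket
      continue;
    }
    if (!groups.has(item.subteamId)) {
      groups.set(item.subteamId, []);
    }
    groups.get(item.subteamId)!.push(item);
  }
  return { groups, unassigned };
}

const { groups, unassigned } = groupBySubteam([
  { id: 'a', subteamId: 's1', effort: 2 },
  { id: 'b', subteamId: 's1', effort: 3 },
  { id: 'c', subteamId: null, effort: 1 },
]);
// groups.get('s1') holds two items; unassigned holds one
```

Per-bucket aggregates (total effort, governance-model counts) are then a `reduce` over each bucket, exactly as the service does per subteam.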


@@ -0,0 +1,277 @@
/**
* Generic Query Builder
*
* Builds SQL queries dynamically based on filters and schema.
*/
import { logger } from './logger.js';
import { schemaDiscoveryService } from './schemaDiscoveryService.js';
import type { CMDBObjectTypeName } from '../generated/jira-types.js';
import type { AttributeDefinition } from '../generated/jira-schema.js';
class QueryBuilder {
/**
* Build WHERE clause from filters
*/
async buildWhereClause(
filters: Record<string, unknown>,
typeName: CMDBObjectTypeName
): Promise<{ whereClause: string; params: unknown[] }> {
const conditions: string[] = ['o.object_type_name = ?'];
const params: unknown[] = [typeName];
let paramIndex = 2;
for (const [fieldName, filterValue] of Object.entries(filters)) {
if (filterValue === undefined || filterValue === null) continue;
const attrDef = await schemaDiscoveryService.getAttribute(typeName, fieldName);
if (!attrDef) {
logger.debug(`QueryBuilder: Unknown field ${fieldName} for type ${typeName}, skipping`);
continue;
}
const condition = this.buildFilterCondition(fieldName, filterValue, attrDef, paramIndex);
if (condition.condition) {
conditions.push(condition.condition);
params.push(...condition.params);
paramIndex += condition.params.length;
}
}
const whereClause = conditions.length > 0 ? `WHERE ${conditions.join(' AND ')}` : '';
return { whereClause, params };
}
/**
* Build filter condition for one field
*/
buildFilterCondition(
fieldName: string,
filterValue: unknown,
attrDef: AttributeDefinition,
startParamIndex: number
): { condition: string; params: unknown[] } {
// Handle special operators
if (typeof filterValue === 'object' && filterValue !== null && !Array.isArray(filterValue)) {
const filterObj = filterValue as Record<string, unknown>;
// Exists check
if (filterObj.exists === true) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id AND a.field_name = ?
)`,
params: [fieldName]
};
}
// Empty check
if (filterObj.empty === true) {
return {
condition: `NOT EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id AND a.field_name = ?
)`,
params: [fieldName]
};
}
// Contains (text search)
if (filterObj.contains !== undefined && typeof filterObj.contains === 'string') {
if (attrDef.type === 'text' || attrDef.type === 'textarea') {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND LOWER(av.text_value) LIKE LOWER(?)
)`,
params: [fieldName, `%${filterObj.contains}%`]
};
}
}
// Reference filters
if (attrDef.type === 'reference') {
if (filterObj.objectId !== undefined) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.reference_object_id = ?
)`,
params: [fieldName, String(filterObj.objectId)]
};
}
if (filterObj.objectKey !== undefined) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
JOIN objects ref_obj ON av.reference_object_id = ref_obj.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND ref_obj.object_key = ?
)`,
params: [fieldName, String(filterObj.objectKey)]
};
}
if (filterObj.label !== undefined) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
JOIN objects ref_obj ON av.reference_object_id = ref_obj.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND LOWER(ref_obj.label) = LOWER(?)
)`,
params: [fieldName, String(filterObj.label)]
};
}
}
}
// Handle array filters (for multiple reference fields)
if (attrDef.isMultiple && Array.isArray(filterValue)) {
if (attrDef.type === 'reference') {
const conditions: string[] = [];
const params: unknown[] = [];
for (const val of filterValue) {
if (typeof val === 'object' && val !== null) {
const ref = val as { objectId?: string; objectKey?: string };
if (ref.objectId) {
conditions.push(`EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.reference_object_id = ?
)`);
params.push(fieldName, ref.objectId);
} else if (ref.objectKey) {
conditions.push(`EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
JOIN objects ref_obj ON av.reference_object_id = ref_obj.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND ref_obj.object_key = ?
)`);
params.push(fieldName, ref.objectKey);
}
}
}
if (conditions.length > 0) {
return { condition: `(${conditions.join(' OR ')})`, params };
}
}
}
// Simple value filters
if (attrDef.type === 'reference') {
if (typeof filterValue === 'object' && filterValue !== null) {
const ref = filterValue as { objectId?: string; objectKey?: string; label?: string };
if (ref.objectId) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.reference_object_id = ?
)`,
params: [fieldName, ref.objectId]
};
} else if (ref.objectKey) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
JOIN objects ref_obj ON av.reference_object_id = ref_obj.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND ref_obj.object_key = ?
)`,
params: [fieldName, ref.objectKey]
};
} else if (ref.label) {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
JOIN objects ref_obj ON av.reference_object_id = ref_obj.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND LOWER(ref_obj.label) = LOWER(?)
)`,
params: [fieldName, ref.label]
};
}
}
} else if (attrDef.type === 'text' || attrDef.type === 'textarea' || attrDef.type === 'url' || attrDef.type === 'email' || attrDef.type === 'select' || attrDef.type === 'user' || attrDef.type === 'status') {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.text_value = ?
)`,
params: [fieldName, String(filterValue)]
};
} else if (attrDef.type === 'integer' || attrDef.type === 'float') {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.number_value = ?
)`,
params: [fieldName, Number(filterValue)]
};
} else if (attrDef.type === 'boolean') {
return {
condition: `EXISTS (
SELECT 1 FROM attribute_values av
JOIN attributes a ON av.attribute_id = a.id
WHERE av.object_id = o.id
AND a.field_name = ?
AND av.boolean_value = ?
)`,
params: [fieldName, Boolean(filterValue)]
};
}
return { condition: '', params: [] };
}
/**
* Build ORDER BY clause
*/
buildOrderBy(orderBy?: string, orderDir?: 'ASC' | 'DESC'): string {
const safeOrderBy = ['id', 'object_key', 'object_type_name', 'label', 'cached_at'].includes(orderBy || '')
? (orderBy || 'label')
: 'label';
const safeOrderDir = orderDir === 'DESC' ? 'DESC' : 'ASC';
return `ORDER BY o.${safeOrderBy} ${safeOrderDir}`;
}
/**
* Build pagination clause
*/
buildPagination(limit?: number, offset?: number): string {
// Coerce to non-negative integers: these values are interpolated directly into the SQL string
const limitValue = Math.max(0, Math.floor(Number(limit) || 100));
const offsetValue = Math.max(0, Math.floor(Number(offset) || 0));
return `LIMIT ${limitValue} OFFSET ${offsetValue}`;
}
}
export const queryBuilder = new QueryBuilder();
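The WHERE clause produced by `buildWhereClause` above is just the per-field `EXISTS` sub-selects joined with `AND`, each contributing its own `?` placeholders. A simplified standalone sketch of that composition (text-value filters only; the real code resolves the attribute definition via `schemaDiscoveryService` and branches on attribute type):

```typescript
// Simplified sketch of buildWhereClause's condition composition.
// Hypothetical Filter shape; real filters carry typed values and operators.
interface Filter { field: string; value: string; }

function composeWhere(filters: Filter[], typeName: string): { whereClause: string; params: unknown[] } {
  const conditions: string[] = ['o.object_type_name = ?'];
  const params: unknown[] = [typeName];
  for (const f of filters) {
    // Each attribute filter becomes an EXISTS sub-select against the EAV tables
    conditions.push(
      'EXISTS (SELECT 1 FROM attribute_values av ' +
      'JOIN attributes a ON av.attribute_id = a.id ' +
      'WHERE av.object_id = o.id AND a.field_name = ? AND av.text_value = ?)'
    );
    params.push(f.field, f.value);
  }
  return { whereClause: `WHERE ${conditions.join(' AND ')}`, params };
}

const { whereClause, params } = composeWhere(
  [{ field: 'status', value: 'Active' }],
  'Application'
);
// params: ['Application', 'status', 'Active']
```

Because every user-supplied value travels through `params`, only structural SQL (table and column names from the schema) ends up in the query string itself.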


@@ -0,0 +1,385 @@
/**
* Role Service
*
* Handles dynamic role and permission management.
*/
import { logger } from './logger.js';
import { getAuthDatabase } from './database/migrations.js';
const isPostgres = (): boolean => {
return process.env.DATABASE_TYPE === 'postgres' || process.env.DATABASE_TYPE === 'postgresql';
};
export interface Role {
id: number;
name: string;
description: string | null;
is_system_role: boolean;
created_at: string;
}
export interface Permission {
id: number;
name: string;
description: string | null;
resource: string | null;
}
export interface CreateRoleInput {
name: string;
description?: string;
}
export interface UpdateRoleInput {
name?: string;
description?: string;
}
class RoleService {
/**
* Get all roles
*/
async getAllRoles(): Promise<Role[]> {
const db = getAuthDatabase();
try {
return await db.query<Role>(
'SELECT * FROM roles ORDER BY name'
);
} finally {
await db.close();
}
}
/**
* Get role by ID
*/
async getRoleById(id: number): Promise<Role | null> {
const db = getAuthDatabase();
try {
return await db.queryOne<Role>(
'SELECT * FROM roles WHERE id = ?',
[id]
);
} finally {
await db.close();
}
}
/**
* Get role by name
*/
async getRoleByName(name: string): Promise<Role | null> {
const db = getAuthDatabase();
try {
return await db.queryOne<Role>(
'SELECT * FROM roles WHERE name = ?',
[name]
);
} finally {
await db.close();
}
}
/**
* Create a new role
*/
async createRole(input: CreateRoleInput): Promise<Role> {
const db = getAuthDatabase();
const now = new Date().toISOString();
try {
// Check if role already exists
const existing = await this.getRoleByName(input.name);
if (existing) {
throw new Error('Role already exists');
}
await db.execute(
'INSERT INTO roles (name, description, is_system_role, created_at) VALUES (?, ?, ?, ?)',
[input.name, input.description || null, isPostgres() ? false : 0, now]
);
const role = await this.getRoleByName(input.name);
if (!role) {
throw new Error('Failed to create role');
}
logger.info(`Role created: ${role.name}`);
return role;
} finally {
await db.close();
}
}
/**
* Update role
*/
async updateRole(id: number, input: UpdateRoleInput): Promise<Role> {
const db = getAuthDatabase();
try {
const role = await this.getRoleById(id);
if (!role) {
throw new Error('Role not found');
}
if (role.is_system_role) {
throw new Error('Cannot update system role');
}
const updates: string[] = [];
const values: any[] = [];
if (input.name !== undefined) {
// Check if name already exists for another role
const existing = await db.queryOne<Role>(
'SELECT id FROM roles WHERE name = ? AND id != ?',
[input.name, id]
);
if (existing) {
throw new Error('Role name already exists');
}
updates.push('name = ?');
values.push(input.name);
}
if (input.description !== undefined) {
updates.push('description = ?');
values.push(input.description);
}
if (updates.length === 0) {
return role;
}
values.push(id);
await db.execute(
`UPDATE roles SET ${updates.join(', ')} WHERE id = ?`,
values
);
const updated = await this.getRoleById(id);
if (!updated) {
throw new Error('Role not found');
}
logger.info(`Role updated: ${updated.name}`);
return updated;
} finally {
await db.close();
}
}
/**
* Delete role
*/
async deleteRole(id: number): Promise<boolean> {
const db = getAuthDatabase();
try {
const role = await this.getRoleById(id);
if (!role) {
return false;
}
if (role.is_system_role) {
throw new Error('Cannot delete system role');
}
const result = await db.execute(
'DELETE FROM roles WHERE id = ?',
[id]
);
logger.info(`Role deleted: ${role.name}`);
return result > 0;
} finally {
await db.close();
}
}
/**
* Get all permissions
*/
async getAllPermissions(): Promise<Permission[]> {
const db = getAuthDatabase();
try {
return await db.query<Permission>(
'SELECT * FROM permissions ORDER BY resource, name'
);
} finally {
await db.close();
}
}
/**
* Get permission by ID
*/
async getPermissionById(id: number): Promise<Permission | null> {
const db = getAuthDatabase();
try {
return await db.queryOne<Permission>(
'SELECT * FROM permissions WHERE id = ?',
[id]
);
} finally {
await db.close();
}
}
/**
* Get permission by name
*/
async getPermissionByName(name: string): Promise<Permission | null> {
const db = getAuthDatabase();
try {
return await db.queryOne<Permission>(
'SELECT * FROM permissions WHERE name = ?',
[name]
);
} finally {
await db.close();
}
}
/**
* Get permissions for a role
*/
async getRolePermissions(roleId: number): Promise<Permission[]> {
const db = getAuthDatabase();
try {
return await db.query<Permission>(
`SELECT p.* FROM permissions p
INNER JOIN role_permissions rp ON p.id = rp.permission_id
WHERE rp.role_id = ?
ORDER BY p.resource, p.name`,
[roleId]
);
} finally {
await db.close();
}
}
/**
* Assign permission to role
*/
async assignPermissionToRole(roleId: number, permissionId: number): Promise<boolean> {
const db = getAuthDatabase();
try {
await db.execute(
`INSERT INTO role_permissions (role_id, permission_id)
VALUES (?, ?)
ON CONFLICT(role_id, permission_id) DO NOTHING`,
[roleId, permissionId]
);
return true;
} catch (error: any) {
// Fallback for SQLite builds without ON CONFLICT ... DO NOTHING (pre-3.24):
// a UNIQUE constraint violation means the pair is already assigned
if (error.message?.includes('UNIQUE constraint')) {
return false; // Already assigned
}
throw error;
} finally {
await db.close();
}
}
/**
* Remove permission from role
*/
async removePermissionFromRole(roleId: number, permissionId: number): Promise<boolean> {
const db = getAuthDatabase();
try {
const result = await db.execute(
'DELETE FROM role_permissions WHERE role_id = ? AND permission_id = ?',
[roleId, permissionId]
);
return result > 0;
} finally {
await db.close();
}
}
/**
* Get user permissions (from all roles)
*/
async getUserPermissions(userId: number): Promise<Permission[]> {
const db = getAuthDatabase();
try {
return await db.query<Permission>(
`SELECT DISTINCT p.* FROM permissions p
INNER JOIN role_permissions rp ON p.id = rp.permission_id
INNER JOIN user_roles ur ON rp.role_id = ur.role_id
WHERE ur.user_id = ?
ORDER BY p.resource, p.name`,
[userId]
);
} finally {
await db.close();
}
}
/**
* Check if user has permission
*/
async userHasPermission(userId: number, permissionName: string): Promise<boolean> {
const db = getAuthDatabase();
try {
const result = await db.queryOne<{ count: number }>(
`SELECT COUNT(*) as count FROM permissions p
INNER JOIN role_permissions rp ON p.id = rp.permission_id
INNER JOIN user_roles ur ON rp.role_id = ur.role_id
WHERE ur.user_id = ? AND p.name = ?`,
[userId, permissionName]
);
// node-postgres returns COUNT(*) as a string; SQLite returns a number
const count = result?.count ?? 0;
return parseInt(String(count), 10) > 0;
} finally {
await db.close();
}
}
/**
* Check if user has role
*/
async userHasRole(userId: number, roleName: string): Promise<boolean> {
const db = getAuthDatabase();
try {
const result = await db.queryOne<{ count: number }>(
`SELECT COUNT(*) as count FROM roles r
INNER JOIN user_roles ur ON r.id = ur.role_id
WHERE ur.user_id = ? AND r.name = ?`,
[userId, roleName]
);
// node-postgres returns COUNT(*) as a string; SQLite returns a number
const count = result?.count ?? 0;
return parseInt(String(count), 10) > 0;
} finally {
await db.close();
}
}
/**
* Get user roles
*/
async getUserRoles(userId: number): Promise<Role[]> {
const db = getAuthDatabase();
try {
return await db.query<Role>(
`SELECT r.* FROM roles r
INNER JOIN user_roles ur ON r.id = ur.role_id
WHERE ur.user_id = ?
ORDER BY r.name`,
[userId]
);
} finally {
await db.close();
}
}
}
export const roleService = new RoleService();
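`updateRole` above builds its `UPDATE` statement dynamically: one `SET` fragment and one bound value per field actually present in the input, with the `id` appended last. A standalone sketch of that statement assembly (names mirror the service, but this is an illustration, not the service method itself):

```typescript
// Standalone sketch of the dynamic SET-clause assembly used in updateRole.
interface UpdateRoleInput { name?: string; description?: string; }

function buildRoleUpdate(input: UpdateRoleInput, id: number): { sql: string; values: unknown[] } {
  const updates: string[] = [];
  const values: unknown[] = [];
  if (input.name !== undefined) {
    updates.push('name = ?');
    values.push(input.name);
  }
  if (input.description !== undefined) {
    updates.push('description = ?');
    values.push(input.description);
  }
  // id is always bound last, matching the trailing WHERE placeholder
  values.push(id);
  return { sql: `UPDATE roles SET ${updates.join(', ')} WHERE id = ?`, values };
}

const { sql, values } = buildRoleUpdate({ name: 'auditor' }, 7);
// sql: "UPDATE roles SET name = ? WHERE id = ?", values: ['auditor', 7]
```

The `updates.length === 0` early return in the real method matters here: with no fields set, `updates.join(', ')` would produce invalid SQL (`UPDATE roles SET  WHERE ...`).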


@@ -0,0 +1,256 @@
/**
* Schema Cache Service
*
* In-memory cache for schema data with TTL support.
* Provides fast access to schema information without hitting the database on every request.
*/
import { logger } from './logger.js';
import { schemaDiscoveryService } from './schemaDiscoveryService.js';
import type { ObjectTypeDefinition, AttributeDefinition } from '../generated/jira-schema.js';
import { getDatabaseAdapter } from './database/singleton.js';
interface SchemaResponse {
metadata: {
generatedAt: string;
objectTypeCount: number;
totalAttributes: number;
enabledObjectTypeCount?: number;
};
objectTypes: Record<string, ObjectTypeWithLinks>;
cacheCounts?: Record<string, number>;
jiraCounts?: Record<string, number>;
}
interface ObjectTypeWithLinks extends ObjectTypeDefinition {
enabled: boolean; // Whether this object type is enabled for syncing
incomingLinks: Array<{
fromType: string;
fromTypeName: string;
attributeName: string;
isMultiple: boolean;
}>;
outgoingLinks: Array<{
toType: string;
toTypeName: string;
attributeName: string;
isMultiple: boolean;
}>;
}
class SchemaCacheService {
private cache: SchemaResponse | null = null;
private cacheTimestamp: number = 0;
private readonly CACHE_TTL_MS = 5 * 60 * 1000; // 5 minutes
private db = getDatabaseAdapter(); // Use shared database adapter singleton
/**
* Get schema from cache or fetch from database
*/
async getSchema(): Promise<SchemaResponse> {
// Check cache validity
const now = Date.now();
if (this.cache && (now - this.cacheTimestamp) < this.CACHE_TTL_MS) {
logger.debug('SchemaCache: Returning cached schema');
return this.cache;
}
// Cache expired or doesn't exist - fetch from database
logger.debug('SchemaCache: Cache expired or missing, fetching from database');
const schema = await this.fetchFromDatabase();
// Update cache
this.cache = schema;
this.cacheTimestamp = now;
return schema;
}
/**
* Invalidate cache (force refresh on next request)
*/
invalidate(): void {
logger.debug('SchemaCache: Invalidating cache');
this.cache = null;
this.cacheTimestamp = 0;
}
/**
* Fetch schema from database and build response
* Returns ALL object types (enabled and disabled) with their sync status
*/
private async fetchFromDatabase(): Promise<SchemaResponse> {
// Schema discovery must be manually triggered via API endpoints
// No automatic discovery on first run
// Fetch ALL object types (enabled and disabled) with their schema info
const objectTypeRows = await this.db.query<{
id: number;
schema_id: number;
jira_type_id: number;
type_name: string;
display_name: string;
description: string | null;
sync_priority: number;
object_count: number;
enabled: boolean | number;
}>(
`SELECT ot.id, ot.schema_id, ot.jira_type_id, ot.type_name, ot.display_name, ot.description, ot.sync_priority, ot.object_count, ot.enabled
FROM object_types ot
ORDER BY ot.sync_priority, ot.type_name`
);
if (objectTypeRows.length === 0) {
// No types found, return empty schema
return {
metadata: {
generatedAt: new Date().toISOString(),
objectTypeCount: 0,
totalAttributes: 0,
},
objectTypes: {},
};
}
// Fetch attributes for ALL object types using JOIN
const attributeRows = await this.db.query<{
id: number;
jira_attr_id: number;
object_type_name: string;
attr_name: string;
field_name: string;
attr_type: string;
is_multiple: boolean | number;
is_editable: boolean | number;
is_required: boolean | number;
is_system: boolean | number;
reference_type_name: string | null;
description: string | null;
position: number | null;
schema_id: number;
type_name: string;
}>(
`SELECT a.*, ot.schema_id, ot.type_name
FROM attributes a
INNER JOIN object_types ot ON a.object_type_name = ot.type_name
ORDER BY ot.type_name, COALESCE(a.position, 0), a.jira_attr_id`
);
logger.debug(`SchemaCache: Found ${objectTypeRows.length} object types (enabled and disabled) and ${attributeRows.length} attributes`);
// Build object types with attributes
// Use type_name as key (even if same type exists in multiple schemas, we'll show the first enabled one)
// In practice, if same type_name exists in multiple schemas, attributes should be the same
const objectTypesWithLinks: Record<string, ObjectTypeWithLinks> = {};
for (const typeRow of objectTypeRows) {
const typeName = typeRow.type_name;
// Skip if we already have this type_name (first enabled one wins)
if (objectTypesWithLinks[typeName]) {
logger.debug(`SchemaCache: Skipping duplicate type_name ${typeName} from schema ${typeRow.schema_id}`);
continue;
}
// Match attributes by both schema_id and type_name to ensure correct mapping
const matchingAttributes = attributeRows.filter(a => a.schema_id === typeRow.schema_id && a.type_name === typeName);
logger.debug(`SchemaCache: Found ${matchingAttributes.length} attributes for ${typeName} (schema_id: ${typeRow.schema_id})`);
const attributes = matchingAttributes.map(attrRow => {
// Convert boolean/number for SQLite compatibility
const isMultiple = typeof attrRow.is_multiple === 'boolean' ? attrRow.is_multiple : attrRow.is_multiple === 1;
const isEditable = typeof attrRow.is_editable === 'boolean' ? attrRow.is_editable : attrRow.is_editable === 1;
const isRequired = typeof attrRow.is_required === 'boolean' ? attrRow.is_required : attrRow.is_required === 1;
const isSystem = typeof attrRow.is_system === 'boolean' ? attrRow.is_system : attrRow.is_system === 1;
return {
jiraId: attrRow.jira_attr_id,
name: attrRow.attr_name,
fieldName: attrRow.field_name,
type: attrRow.attr_type as AttributeDefinition['type'],
isMultiple,
isEditable,
isRequired,
isSystem,
referenceTypeName: attrRow.reference_type_name || undefined,
description: attrRow.description || undefined,
position: attrRow.position ?? 0,
} as AttributeDefinition;
});
// Convert enabled boolean/number to boolean
const isEnabled = typeof typeRow.enabled === 'boolean' ? typeRow.enabled : typeRow.enabled === 1;
objectTypesWithLinks[typeName] = {
jiraTypeId: typeRow.jira_type_id,
name: typeRow.display_name,
typeName: typeName,
syncPriority: typeRow.sync_priority,
objectCount: typeRow.object_count,
enabled: isEnabled,
attributes,
incomingLinks: [],
outgoingLinks: [],
};
}
// Build link relationships
for (const [typeName, typeDef] of Object.entries(objectTypesWithLinks)) {
for (const attr of typeDef.attributes) {
if (attr.type === 'reference' && attr.referenceTypeName) {
// Add outgoing link from this type
typeDef.outgoingLinks.push({
toType: attr.referenceTypeName,
toTypeName: objectTypesWithLinks[attr.referenceTypeName]?.name || attr.referenceTypeName,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
// Add incoming link to the referenced type
if (objectTypesWithLinks[attr.referenceTypeName]) {
objectTypesWithLinks[attr.referenceTypeName].incomingLinks.push({
fromType: typeName,
fromTypeName: typeDef.name,
attributeName: attr.name,
isMultiple: attr.isMultiple,
});
}
}
}
}
// Get cache counts (objectsByType) if available
let cacheCounts: Record<string, number> | undefined;
try {
const { dataService } = await import('./dataService.js');
const cacheStatus = await dataService.getCacheStatus();
cacheCounts = cacheStatus.objectsByType;
} catch (err) {
logger.debug('SchemaCache: Could not fetch cache counts', err);
// Continue without cache counts - not critical
}
// Calculate metadata (include enabled count)
const totalAttributes = Object.values(objectTypesWithLinks).reduce(
(sum, t) => sum + t.attributes.length,
0
);
const enabledCount = Object.values(objectTypesWithLinks).filter(t => t.enabled).length;
const response: SchemaResponse = {
metadata: {
generatedAt: new Date().toISOString(),
objectTypeCount: objectTypeRows.length,
totalAttributes,
enabledObjectTypeCount: enabledCount,
},
objectTypes: objectTypesWithLinks,
cacheCounts,
};
return response;
}
}
// Export singleton instance
export const schemaCacheService = new SchemaCacheService();


@@ -0,0 +1,478 @@
/**
* Schema Configuration Service
*
* Manages schema and object type configuration for syncing.
* Discovers schemas and object types from Jira Assets API and allows
* enabling/disabling specific object types for synchronization.
*/
import { logger } from './logger.js';
import { normalizedCacheStore } from './normalizedCacheStore.js';
import { config } from '../config/env.js';
import { toPascalCase } from './schemaUtils.js';
import type { DatabaseAdapter } from './database/interface.js';
export interface JiraSchema {
id: number;
name: string;
description?: string;
objectTypeCount?: number;
}
export interface JiraObjectType {
id: number;
name: string;
description?: string;
objectCount?: number;
objectSchemaId: number;
parentObjectTypeId?: number;
inherited?: boolean;
abstractObjectType?: boolean;
}
export interface ConfiguredObjectType {
id: string; // "schemaId:objectTypeId"
schemaId: string;
schemaName: string;
objectTypeId: number;
objectTypeName: string;
displayName: string;
description: string | null;
objectCount: number;
enabled: boolean;
discoveredAt: string;
updatedAt: string;
}
class SchemaConfigurationService {
constructor() {
// Configuration service - no API calls needed, uses database only
}
/**
* NOTE: Schema discovery is now handled by SchemaSyncService.
* This service only manages configuration (enabling/disabling object types).
* Use schemaSyncService.syncAll() to discover and sync schemas, object types, and attributes.
*/
/**
* Get all configured object types grouped by schema
*/
async getConfiguredObjectTypes(): Promise<Array<{
schemaId: string;
schemaName: string;
objectTypes: ConfiguredObjectType[];
}>> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
const rows = await db.query<{
id: number;
schema_id: number;
jira_schema_id: string;
schema_name: string;
jira_type_id: number;
type_name: string;
display_name: string;
description: string | null;
object_count: number;
enabled: boolean | number;
discovered_at: string;
updated_at: string;
}>(`
SELECT
ot.id,
ot.schema_id,
s.jira_schema_id,
s.name as schema_name,
ot.jira_type_id,
ot.type_name,
ot.display_name,
ot.description,
ot.object_count,
ot.enabled,
ot.discovered_at,
ot.updated_at
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id
ORDER BY s.name ASC, ot.display_name ASC
`);
// Group by schema
const schemaMap = new Map<string, ConfiguredObjectType[]>();
for (const row of rows) {
const objectType: ConfiguredObjectType = {
id: `${row.jira_schema_id}:${row.jira_type_id}`, // Keep same format for compatibility
schemaId: row.jira_schema_id,
schemaName: row.schema_name,
objectTypeId: row.jira_type_id,
objectTypeName: row.type_name,
displayName: row.display_name,
description: row.description,
objectCount: row.object_count,
enabled: typeof row.enabled === 'boolean' ? row.enabled : row.enabled === 1,
discoveredAt: row.discovered_at,
updatedAt: row.updated_at,
};
if (!schemaMap.has(row.jira_schema_id)) {
schemaMap.set(row.jira_schema_id, []);
}
schemaMap.get(row.jira_schema_id)!.push(objectType);
}
// Convert to array
return Array.from(schemaMap.entries()).map(([schemaId, objectTypes]) => {
const firstType = objectTypes[0];
return {
schemaId,
schemaName: firstType.schemaName,
objectTypes,
};
});
}
/**
* Set enabled status for an object type
* id format: "schemaId:objectTypeId" (e.g., "6:123")
*/
async setObjectTypeEnabled(id: string, enabled: boolean): Promise<void> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
// Parse id: "schemaId:objectTypeId"
const [schemaIdStr, objectTypeIdStr] = id.split(':');
if (!schemaIdStr || !objectTypeIdStr) {
throw new Error(`Invalid object type id format: ${id}. Expected format: "schemaId:objectTypeId"`);
}
const objectTypeId = parseInt(objectTypeIdStr, 10);
if (isNaN(objectTypeId)) {
throw new Error(`Invalid object type id: ${objectTypeIdStr}`);
}
// Get schema_id (FK) from schemas table
const schemaRow = await db.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[schemaIdStr]
);
if (!schemaRow) {
throw new Error(`Schema ${schemaIdStr} not found`);
}
// Check if type_name is missing and try to fix it if enabling
const currentType = await db.queryOne<{ type_name: string | null; display_name: string }>(
`SELECT type_name, display_name FROM object_types WHERE schema_id = ? AND jira_type_id = ?`,
[schemaRow.id, objectTypeId]
);
let typeNameToSet = currentType?.type_name;
const needsTypeNameFix = enabled && (!typeNameToSet || typeNameToSet.trim() === '');
if (needsTypeNameFix && currentType?.display_name) {
// Generate type_name from display_name (PascalCase); toPascalCase is already imported at the top of this file
typeNameToSet = toPascalCase(currentType.display_name);
logger.warn(`SchemaConfiguration: Type ${id} has missing type_name. Auto-generating "${typeNameToSet}" from display_name "${currentType.display_name}"`);
}
const now = new Date().toISOString();
if (db.isPostgres) {
if (needsTypeNameFix && typeNameToSet) {
await db.execute(`
UPDATE object_types
SET enabled = ?, type_name = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [enabled, typeNameToSet, now, schemaRow.id, objectTypeId]);
logger.info(`SchemaConfiguration: Set object type ${id} enabled=${enabled} and fixed missing type_name to "${typeNameToSet}"`);
} else {
await db.execute(`
UPDATE object_types
SET enabled = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [enabled, now, schemaRow.id, objectTypeId]);
logger.info(`SchemaConfiguration: Set object type ${id} enabled=${enabled}`);
}
} else {
if (needsTypeNameFix && typeNameToSet) {
await db.execute(`
UPDATE object_types
SET enabled = ?, type_name = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [enabled ? 1 : 0, typeNameToSet, now, schemaRow.id, objectTypeId]);
logger.info(`SchemaConfiguration: Set object type ${id} enabled=${enabled} and fixed missing type_name to "${typeNameToSet}"`);
} else {
await db.execute(`
UPDATE object_types
SET enabled = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [enabled ? 1 : 0, now, schemaRow.id, objectTypeId]);
logger.info(`SchemaConfiguration: Set object type ${id} enabled=${enabled}`);
}
}
}
/**
* Bulk update enabled status for multiple object types
*/
async bulkSetObjectTypesEnabled(updates: Array<{ id: string; enabled: boolean }>): Promise<void> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
const now = new Date().toISOString();
await db.transaction(async (txDb: DatabaseAdapter) => {
for (const update of updates) {
// Parse id: "schemaId:objectTypeId"
const [schemaIdStr, objectTypeIdStr] = update.id.split(':');
if (!schemaIdStr || !objectTypeIdStr) {
logger.warn(`SchemaConfiguration: Invalid object type id format: ${update.id}`);
continue;
}
const objectTypeId = parseInt(objectTypeIdStr, 10);
if (isNaN(objectTypeId)) {
logger.warn(`SchemaConfiguration: Invalid object type id: ${objectTypeIdStr}`);
continue;
}
// Get schema_id (FK) from schemas table
const schemaRow = await txDb.queryOne<{ id: number }>(
`SELECT id FROM schemas WHERE jira_schema_id = ?`,
[schemaIdStr]
);
if (!schemaRow) {
logger.warn(`SchemaConfiguration: Schema ${schemaIdStr} not found`);
continue;
}
// Check if type_name is missing and try to fix it if enabling
const currentType = await txDb.queryOne<{ type_name: string | null; display_name: string }>(
`SELECT type_name, display_name FROM object_types WHERE schema_id = ? AND jira_type_id = ?`,
[schemaRow.id, objectTypeId]
);
let typeNameToSet = currentType?.type_name;
const needsTypeNameFix = update.enabled && (!typeNameToSet || typeNameToSet.trim() === '');
if (needsTypeNameFix && currentType?.display_name) {
// Generate type_name from display_name (PascalCase); toPascalCase is already imported at the top of this file
typeNameToSet = toPascalCase(currentType.display_name);
logger.warn(`SchemaConfiguration: Type ${update.id} has missing type_name. Auto-generating "${typeNameToSet}" from display_name "${currentType.display_name}"`);
}
if (txDb.isPostgres) {
if (needsTypeNameFix && typeNameToSet) {
await txDb.execute(`
UPDATE object_types
SET enabled = ?, type_name = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [update.enabled, typeNameToSet, now, schemaRow.id, objectTypeId]);
} else {
await txDb.execute(`
UPDATE object_types
SET enabled = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [update.enabled, now, schemaRow.id, objectTypeId]);
}
} else {
if (needsTypeNameFix && typeNameToSet) {
await txDb.execute(`
UPDATE object_types
SET enabled = ?, type_name = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [update.enabled ? 1 : 0, typeNameToSet, now, schemaRow.id, objectTypeId]);
} else {
await txDb.execute(`
UPDATE object_types
SET enabled = ?, updated_at = ?
WHERE schema_id = ? AND jira_type_id = ?
`, [update.enabled ? 1 : 0, now, schemaRow.id, objectTypeId]);
}
}
}
});
logger.info(`SchemaConfiguration: Bulk updated ${updates.length} object types`);
}
/**
* Get enabled object types (for sync engine)
*/
async getEnabledObjectTypes(): Promise<Array<{
schemaId: string;
objectTypeId: number;
objectTypeName: string;
displayName: string;
}>> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
// Use parameterized query to avoid boolean/integer comparison issues
const rows = await db.query<{
jira_schema_id: string;
jira_type_id: number;
type_name: string;
display_name: string;
}>(
`SELECT s.jira_schema_id, ot.jira_type_id, ot.type_name, ot.display_name
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id
WHERE ot.enabled = ?`,
[db.isPostgres ? true : 1]
);
return rows.map(row => ({
schemaId: row.jira_schema_id,
objectTypeId: row.jira_type_id,
objectTypeName: row.type_name,
displayName: row.display_name,
}));
}
/**
* Check if configuration is complete (at least one object type enabled)
*/
async isConfigurationComplete(): Promise<boolean> {
const enabledTypes = await this.getEnabledObjectTypes();
return enabledTypes.length > 0;
}
/**
* Get configuration statistics
*/
async getConfigurationStats(): Promise<{
totalSchemas: number;
totalObjectTypes: number;
enabledObjectTypes: number;
disabledObjectTypes: number;
isConfigured: boolean;
}> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
const totalRow = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count FROM object_types
`);
// Use parameterized query to avoid boolean/integer comparison issues
const enabledRow = await db.queryOne<{ count: number }>(
`SELECT COUNT(*) as count FROM object_types WHERE enabled = ?`,
[db.isPostgres ? true : 1]
);
const schemaRow = await db.queryOne<{ count: number }>(`
SELECT COUNT(*) as count FROM schemas
`);
const total = totalRow?.count || 0;
const enabled = enabledRow?.count || 0;
const schemas = schemaRow?.count || 0;
return {
totalSchemas: schemas,
totalObjectTypes: total,
enabledObjectTypes: enabled,
disabledObjectTypes: total - enabled,
isConfigured: enabled > 0,
};
}
/**
* Get all schemas with their search enabled status
*/
async getSchemas(): Promise<Array<{
schemaId: string;
schemaName: string;
searchEnabled: boolean;
}>> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
const rows = await db.query<{
jira_schema_id: string;
name: string;
search_enabled: boolean | number;
}>(`
SELECT jira_schema_id, name, search_enabled
FROM schemas
ORDER BY name ASC
`);
return rows.map(row => ({
schemaId: row.jira_schema_id,
schemaName: row.name,
searchEnabled: typeof row.search_enabled === 'boolean' ? row.search_enabled : row.search_enabled === 1,
}));
}
/**
* Set search enabled status for a schema
*/
async setSchemaSearchEnabled(schemaId: string, searchEnabled: boolean): Promise<void> {
const db: DatabaseAdapter = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await (db as any).ensureInitialized?.();
const now = new Date().toISOString();
if (db.isPostgres) {
await db.execute(`
UPDATE schemas
SET search_enabled = ?, updated_at = ?
WHERE jira_schema_id = ?
`, [searchEnabled, now, schemaId]);
} else {
await db.execute(`
UPDATE schemas
SET search_enabled = ?, updated_at = ?
WHERE jira_schema_id = ?
`, [searchEnabled ? 1 : 0, now, schemaId]);
}
logger.info(`SchemaConfiguration: Set schema ${schemaId} search_enabled=${searchEnabled}`);
}
}
export const schemaConfigurationService = new SchemaConfigurationService();
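The service above repeatedly applies two small patterns: normalizing flags that arrive as Postgres booleans or SQLite 0/1 integers, and round-tripping the composite `"schemaId:objectTypeId"` id. A standalone sketch of both (hypothetical helper names, not part of the service):

```typescript
// Hypothetical helpers illustrating two patterns used throughout the service.

// Normalize a flag that may arrive as a Postgres boolean or a SQLite 0/1 integer.
function toBool(value: boolean | number): boolean {
  return typeof value === 'boolean' ? value : value === 1;
}

// Parse the composite object-type id used by the API ("schemaId:objectTypeId", e.g. "6:123").
function parseObjectTypeId(id: string): { schemaId: string; objectTypeId: number } {
  const [schemaId, typeStr] = id.split(':');
  const objectTypeId = parseInt(typeStr ?? '', 10);
  if (!schemaId || !typeStr || Number.isNaN(objectTypeId)) {
    throw new Error(`Invalid object type id format: ${id}. Expected "schemaId:objectTypeId"`);
  }
  return { schemaId, objectTypeId };
}

console.log(toBool(1), toBool(false)); // true false
console.log(parseObjectTypeId('6:123')); // { schemaId: '6', objectTypeId: 123 }
```

Centralizing these in shared helpers would remove the `typeof row.enabled === 'boolean' ? ... : row.enabled === 1` ternary that currently recurs in every query-mapping block.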

View File

@@ -0,0 +1,182 @@
/**
* Schema Discovery Service
*
* Provides access to discovered schema data from the database.
* Schema synchronization is handled by SchemaSyncService.
* This service provides read-only access to the discovered schema.
*/
import { logger } from './logger.js';
import { getDatabaseAdapter } from './database/singleton.js';
import type { DatabaseAdapter } from './database/interface.js';
import { schemaSyncService } from './SchemaSyncService.js';
import type { ObjectTypeDefinition, AttributeDefinition } from '../generated/jira-schema.js';
// Note: this service no longer calls the Jira API directly; it reads discovered data from the database.
class SchemaDiscoveryService {
private db: DatabaseAdapter;
private isPostgres: boolean;
constructor() {
// Use shared database adapter singleton
this.db = getDatabaseAdapter();
// Determine if PostgreSQL based on adapter type
this.isPostgres = (this.db.isPostgres === true);
}
/**
* Discover schema from Jira Assets API and populate database
* Delegates to SchemaSyncService for the actual synchronization
*/
async discoverAndStoreSchema(_force: boolean = false): Promise<void> { // _force kept for API compatibility; syncAll() always performs a full sync
logger.info('SchemaDiscovery: Delegating to SchemaSyncService for schema synchronization...');
await schemaSyncService.syncAll();
}
/**
* Get attribute definition from database
*/
async getAttribute(typeName: string, fieldName: string): Promise<AttributeDefinition | null> {
const row = await this.db.queryOne<{
jira_attr_id: number;
attr_name: string;
field_name: string;
attr_type: string;
is_multiple: boolean | number;
is_editable: boolean | number;
is_required: boolean | number;
is_system: boolean | number;
reference_type_name: string | null;
description: string | null;
}>(`
SELECT * FROM attributes
WHERE object_type_name = ? AND field_name = ?
`, [typeName, fieldName]);
if (!row) return null;
// Convert boolean/number for SQLite compatibility
const isMultiple = typeof row.is_multiple === 'boolean' ? row.is_multiple : row.is_multiple === 1;
const isEditable = typeof row.is_editable === 'boolean' ? row.is_editable : row.is_editable === 1;
const isRequired = typeof row.is_required === 'boolean' ? row.is_required : row.is_required === 1;
const isSystem = typeof row.is_system === 'boolean' ? row.is_system : row.is_system === 1;
return {
jiraId: row.jira_attr_id,
name: row.attr_name,
fieldName: row.field_name,
type: row.attr_type as AttributeDefinition['type'],
isMultiple,
isEditable,
isRequired,
isSystem,
referenceTypeName: row.reference_type_name || undefined,
description: row.description || undefined,
};
}
/**
* Get all attributes for a type
*/
async getAttributesForType(typeName: string): Promise<AttributeDefinition[]> {
const rows = await this.db.query<{
jira_attr_id: number;
attr_name: string;
field_name: string;
attr_type: string;
is_multiple: boolean | number;
is_editable: boolean | number;
is_required: boolean | number;
is_system: boolean | number;
reference_type_name: string | null;
description: string | null;
position: number | null;
}>(`
SELECT * FROM attributes
WHERE object_type_name = ?
ORDER BY COALESCE(position, 0), jira_attr_id
`, [typeName]);
return rows.map(row => {
// Convert boolean/number for SQLite compatibility
const isMultiple = typeof row.is_multiple === 'boolean' ? row.is_multiple : row.is_multiple === 1;
const isEditable = typeof row.is_editable === 'boolean' ? row.is_editable : row.is_editable === 1;
const isRequired = typeof row.is_required === 'boolean' ? row.is_required : row.is_required === 1;
const isSystem = typeof row.is_system === 'boolean' ? row.is_system : row.is_system === 1;
return {
jiraId: row.jira_attr_id,
name: row.attr_name,
fieldName: row.field_name,
type: row.attr_type as AttributeDefinition['type'],
isMultiple,
isEditable,
isRequired,
isSystem,
referenceTypeName: row.reference_type_name || undefined,
description: row.description || undefined,
position: row.position ?? 0,
};
});
}
/**
* Get object type definition from database
*/
async getObjectType(typeName: string): Promise<ObjectTypeDefinition | null> {
const row = await this.db.queryOne<{
jira_type_id: number;
type_name: string;
display_name: string;
description: string | null;
sync_priority: number;
object_count: number;
}>(`
SELECT * FROM object_types
WHERE type_name = ?
`, [typeName]);
if (!row) return null;
const attributes = await this.getAttributesForType(typeName);
return {
jiraTypeId: row.jira_type_id,
name: row.display_name,
typeName: row.type_name,
syncPriority: row.sync_priority,
objectCount: row.object_count,
attributes,
};
}
/**
* Get attribute ID by type and field name or attribute name
* Supports both fieldName (camelCase) and name (display name) for flexibility
*/
async getAttributeId(typeName: string, fieldNameOrName: string): Promise<number | null> {
// Try field_name first (camelCase)
let row = await this.db.queryOne<{ id: number }>(`
SELECT id FROM attributes
WHERE object_type_name = ? AND field_name = ?
`, [typeName, fieldNameOrName]);
// If not found, try attr_name (display name)
if (!row) {
row = await this.db.queryOne<{ id: number }>(`
SELECT id FROM attributes
WHERE object_type_name = ? AND attr_name = ?
`, [typeName, fieldNameOrName]);
}
return row?.id ?? null; // ?? (not ||) so a theoretical id of 0 would not be treated as "not found"
}
}
// Export singleton instance
export const schemaDiscoveryService = new SchemaDiscoveryService();
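`getAttributeId` resolves an attribute by trying the camelCase `field_name` first and falling back to the human-readable `attr_name`. The same two-step fallback can be sketched against an in-memory list (the attribute data here is illustrative, not the real schema):

```typescript
// Minimal in-memory sketch of the getAttributeId fallback: try the camelCase
// fieldName first, then the display name. Rows below are made-up examples.
interface AttrRow { id: number; fieldName: string; attrName: string; }

const attrs: AttrRow[] = [
  { id: 1, fieldName: 'applicationOwner', attrName: 'Application Owner' },
  { id: 2, fieldName: 'lifecycleStatus', attrName: 'Lifecycle Status' },
];

function getAttributeId(fieldNameOrName: string): number | null {
  const byField = attrs.find(a => a.fieldName === fieldNameOrName);
  if (byField) return byField.id;
  const byName = attrs.find(a => a.attrName === fieldNameOrName);
  return byName?.id ?? null;
}

console.log(getAttributeId('applicationOwner')); // 1
console.log(getAttributeId('Lifecycle Status')); // 2
console.log(getAttributeId('missing')); // null
```

The ordering matters: callers passing generated camelCase identifiers hit the first lookup, while ad-hoc callers using Jira display names still resolve via the second.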

View File

@@ -0,0 +1,387 @@
/**
* Schema Mapping Service
*
* Manages mappings between object types and their Jira Assets schema IDs.
* Allows different object types to exist in different schemas.
*/
import { logger } from './logger.js';
import { normalizedCacheStore } from './normalizedCacheStore.js';
import type { CMDBObjectTypeName } from '../generated/jira-types.js';
import type { DatabaseAdapter } from './database/interface.js';
export interface SchemaMapping {
objectTypeName: string;
schemaId: string;
enabled: boolean;
createdAt: string;
updatedAt: string;
}
class SchemaMappingService {
private cache: Map<string, string> = new Map(); // objectTypeName -> schemaId
private cacheInitialized: boolean = false;
/**
* Get schema ID for an object type
* Returns the configured schema ID or the default from config
*/
async getSchemaId(objectTypeName: CMDBObjectTypeName | string): Promise<string> {
await this.ensureCacheInitialized();
// Check cache first
if (this.cache.has(objectTypeName)) {
return this.cache.get(objectTypeName)!;
}
// Try to get schema ID from database (from enabled object types)
try {
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const type = enabledTypes.find(et => et.objectTypeName === objectTypeName);
if (type) {
return type.schemaId;
}
} catch (error) {
logger.warn(`SchemaMapping: Failed to get schema ID from database for ${objectTypeName}`, error);
}
// Return empty string if not found (no default)
return '';
}
/**
* Get all schema mappings
*/
async getAllMappings(): Promise<SchemaMapping[]> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
const typedDb = db as DatabaseAdapter;
const rows = await typedDb.query<{
object_type_name: string;
schema_id: string;
enabled: boolean | number;
created_at: string;
updated_at: string;
}>(`
SELECT object_type_name, schema_id, enabled, created_at, updated_at
FROM schema_mappings
ORDER BY object_type_name
`);
return rows.map(row => ({
objectTypeName: row.object_type_name,
schemaId: row.schema_id,
enabled: typeof row.enabled === 'boolean' ? row.enabled : row.enabled === 1,
createdAt: row.created_at,
updatedAt: row.updated_at,
}));
}
/**
* Set schema mapping for an object type
*/
async setMapping(objectTypeName: string, schemaId: string, enabled: boolean = true): Promise<void> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
const now = new Date().toISOString();
if (db.isPostgres) {
await db.execute(`
INSERT INTO schema_mappings (object_type_name, schema_id, enabled, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(object_type_name) DO UPDATE SET
schema_id = excluded.schema_id,
enabled = excluded.enabled,
updated_at = excluded.updated_at
`, [objectTypeName, schemaId, enabled, now, now]);
} else {
await db.execute(`
INSERT INTO schema_mappings (object_type_name, schema_id, enabled, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(object_type_name) DO UPDATE SET
schema_id = excluded.schema_id,
enabled = excluded.enabled,
updated_at = excluded.updated_at
`, [objectTypeName, schemaId, enabled ? 1 : 0, now, now]);
}
// Update cache
if (enabled) {
this.cache.set(objectTypeName, schemaId);
} else {
this.cache.delete(objectTypeName);
}
logger.info(`SchemaMappingService: Set mapping for ${objectTypeName} -> schema ${schemaId} (enabled: ${enabled})`);
}
/**
* Delete schema mapping for an object type (will use default schema)
*/
async deleteMapping(objectTypeName: string): Promise<void> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
await db.execute(`
DELETE FROM schema_mappings WHERE object_type_name = ?
`, [objectTypeName]);
// Remove from cache
this.cache.delete(objectTypeName);
logger.info(`SchemaMappingService: Deleted mapping for ${objectTypeName}`);
}
/**
* Check if an object type should be synced (has enabled mapping or uses default schema)
*/
async isTypeEnabled(objectTypeName: string): Promise<boolean> {
await this.ensureCacheInitialized();
// If there's a mapping, check if it's enabled
if (this.cache.has(objectTypeName)) {
// Check if it's actually enabled in the database
const db = (normalizedCacheStore as any).db;
if (db) {
await db.ensureInitialized?.();
const typedDb = db as DatabaseAdapter;
const row = await typedDb.queryOne<{ enabled: boolean | number }>(`
SELECT enabled FROM schema_mappings WHERE object_type_name = ?
`, [objectTypeName]);
if (row) {
return typeof row.enabled === 'boolean' ? row.enabled : row.enabled === 1;
}
}
}
// If no mapping exists, check if object type is enabled in database
try {
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
return enabledTypes.some(et => et.objectTypeName === objectTypeName);
} catch (error) {
logger.warn(`SchemaMapping: Failed to check if ${objectTypeName} is enabled`, error);
return false;
}
}
/**
* Initialize cache from database
*/
private async ensureCacheInitialized(): Promise<void> {
if (this.cacheInitialized) return;
try {
const db = (normalizedCacheStore as any).db;
if (!db) {
this.cacheInitialized = true;
return;
}
await db.ensureInitialized?.();
// Use parameterized query to avoid boolean/integer comparison issues
const typedDb = db as DatabaseAdapter;
const rows = await typedDb.query<{
object_type_name: string;
schema_id: string;
enabled: boolean | number;
}>(
`SELECT object_type_name, schema_id, enabled
FROM schema_mappings
WHERE enabled = ?`,
[db.isPostgres ? true : 1]
);
this.cache.clear();
for (const row of rows) {
const enabled = typeof row.enabled === 'boolean' ? row.enabled : row.enabled === 1;
if (enabled) {
this.cache.set(row.object_type_name, row.schema_id);
}
}
this.cacheInitialized = true;
logger.debug(`SchemaMappingService: Initialized cache with ${this.cache.size} mappings`);
} catch (error) {
logger.warn('SchemaMappingService: Failed to initialize cache, using defaults', error);
this.cacheInitialized = true; // Mark as initialized to prevent retry loops
}
}
/**
* Get all object types with their sync configuration
*/
async getAllObjectTypesWithConfig(): Promise<Array<{
typeName: string;
displayName: string;
description: string | null;
schemaId: string | null;
enabled: boolean;
objectCount: number;
syncPriority: number;
}>> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
try {
// Get all object types with their mappings
const typedDb = db as DatabaseAdapter;
const rows = await typedDb.query<{
type_name: string;
display_name: string;
description: string | null;
object_count: number;
sync_priority: number;
schema_id: string | null;
enabled: boolean | number | null;
}>(`
SELECT
ot.type_name,
ot.display_name,
ot.description,
ot.object_count,
ot.sync_priority,
sm.schema_id,
sm.enabled
FROM object_types ot
LEFT JOIN schema_mappings sm ON ot.type_name = sm.object_type_name
ORDER BY ot.sync_priority ASC, ot.display_name ASC
`);
logger.debug(`SchemaMappingService: Found ${rows.length} object types in database`);
// Get first available schema ID from the database (reusing the adapter initialized above)
let defaultSchemaId: string | null = null;
try {
const schemaRow = await typedDb.queryOne<{ jira_schema_id: string }>(
`SELECT jira_schema_id FROM schemas ORDER BY jira_schema_id LIMIT 1`
);
defaultSchemaId = schemaRow?.jira_schema_id || null;
} catch (error) {
logger.warn('SchemaMapping: Failed to get default schema ID from database', error);
}
return rows.map(row => ({
typeName: row.type_name,
displayName: row.display_name,
description: row.description,
schemaId: row.schema_id || defaultSchemaId,
enabled: row.enabled === null
? true // Default: enabled if no mapping exists
: (typeof row.enabled === 'boolean' ? row.enabled : row.enabled === 1),
objectCount: row.object_count || 0,
syncPriority: row.sync_priority || 0,
}));
} catch (error) {
logger.error('SchemaMappingService: Failed to get object types with config', error);
throw error;
}
}
/**
* Enable or disable an object type for syncing
*/
async setTypeEnabled(objectTypeName: string, enabled: boolean): Promise<void> {
const db = (normalizedCacheStore as any).db;
if (!db) {
throw new Error('Database not available');
}
await db.ensureInitialized?.();
// Check if mapping exists
const typedDb = db as DatabaseAdapter;
const existing = await typedDb.queryOne<{ schema_id: string }>(`
SELECT schema_id FROM schema_mappings WHERE object_type_name = ?
`, [objectTypeName]);
// Get schema ID from existing mapping or from database
let schemaId = existing?.schema_id || '';
if (!schemaId) {
// Try to get schema ID from database (from enabled object types)
try {
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const type = enabledTypes.find(et => et.objectTypeName === objectTypeName);
if (type) {
schemaId = type.schemaId;
}
} catch (error) {
logger.warn(`SchemaMapping: Failed to get schema ID from database for ${objectTypeName}`, error);
}
}
if (!schemaId) {
throw new Error(`No schema ID available for object type ${objectTypeName}. Please ensure the object type is discovered and configured.`);
}
// Create or update mapping
const now = new Date().toISOString();
if (db.isPostgres) {
await db.execute(`
INSERT INTO schema_mappings (object_type_name, schema_id, enabled, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(object_type_name) DO UPDATE SET
enabled = excluded.enabled,
updated_at = excluded.updated_at
`, [objectTypeName, schemaId, enabled, now, now]);
} else {
await db.execute(`
INSERT INTO schema_mappings (object_type_name, schema_id, enabled, created_at, updated_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(object_type_name) DO UPDATE SET
enabled = excluded.enabled,
updated_at = excluded.updated_at
`, [objectTypeName, schemaId, enabled ? 1 : 0, now, now]);
}
// Invalidate the cache so the next lookup re-reads the updated mapping from the database.
// (Updating individual entries here would be pointless: clearCache() wipes them immediately.)
this.clearCache();
logger.info(`SchemaMappingService: Set ${objectTypeName} enabled=${enabled}`);
}
/**
* Clear cache (useful after updates)
*/
clearCache(): void {
this.cache.clear();
this.cacheInitialized = false;
}
}
export const schemaMappingService = new SchemaMappingService();
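Both services issue the same SQL text with `?` placeholders on the Postgres path and the SQLite path, which implies the `DatabaseAdapter` rewrites `?` into Postgres-style `$1, $2, ...` internally. A minimal sketch of such a rewrite (this is an assumption about the adapter, not its actual code; string literals containing `?` are not handled, for brevity):

```typescript
// Hypothetical sketch: rewrite '?' placeholders to Postgres '$1, $2, ...'.
function toPgPlaceholders(sql: string): string {
  let n = 0;
  return sql.replace(/\?/g, () => `$${++n}`);
}

console.log(toPgPlaceholders('UPDATE schemas SET search_enabled = ?, updated_at = ? WHERE jira_schema_id = ?'));
// UPDATE schemas SET search_enabled = $1, updated_at = $2 WHERE jira_schema_id = $3
```

Keeping `?` as the single placeholder dialect in service code is what lets the Postgres and SQLite branches differ only in value encoding (`true`/`false` vs `1`/`0`).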

View File

@@ -0,0 +1,149 @@
/**
* Schema Utility Functions
*
* Helper functions for schema discovery and type conversion
*/
// Jira attribute type mappings (based on Jira Insight/Assets API)
const JIRA_TYPE_MAP: Record<number, 'text' | 'integer' | 'float' | 'boolean' | 'date' | 'datetime' | 'select' | 'reference' | 'url' | 'email' | 'textarea' | 'user' | 'status' | 'unknown'> = {
0: 'text', // Default/Text
1: 'integer', // Integer
2: 'boolean', // Boolean
3: 'float', // Double/Float
4: 'date', // Date
5: 'datetime', // DateTime
6: 'url', // URL
7: 'email', // Email
8: 'textarea', // Textarea
9: 'select', // Select
10: 'reference', // Reference (Object)
11: 'user', // User
12: 'reference', // Confluence (treated as reference)
13: 'reference', // Group (treated as reference)
14: 'reference', // Version (treated as reference)
15: 'reference', // Project (treated as reference)
16: 'status', // Status
};
// Priority types - these sync first as they are reference data
const PRIORITY_TYPE_NAMES = new Set([
'Application Component',
'Server',
'Flows',
]);
// Reference data types - these sync with lower priority
const REFERENCE_TYPE_PATTERNS = [
/Factor$/,
/Model$/,
/Type$/,
/Category$/,
/Importance$/,
/Analyse$/,
/Organisation$/,
/Function$/,
];
/**
* Convert a string to camelCase while preserving existing casing patterns
* E.g., "Application Function" -> "applicationFunction"
* "ICT Governance Model" -> "ictGovernanceModel"
* "ApplicationFunction" -> "applicationFunction"
*/
export function toCamelCase(str: string): string {
// First split on spaces and special chars
const words = str
.replace(/[^a-zA-Z0-9\s]/g, ' ')
.split(/\s+/)
.filter(w => w.length > 0);
if (words.length === 0) return '';
// If it's a single word that's already camelCase or PascalCase, just lowercase first char
if (words.length === 1) {
const word = words[0];
return word.charAt(0).toLowerCase() + word.slice(1);
}
// Multiple words - first word lowercase, rest capitalize first letter
return words
.map((word, index) => {
if (index === 0) {
// First word: if all uppercase (acronym), lowercase it, otherwise just lowercase first char
if (word === word.toUpperCase() && word.length > 1) {
return word.toLowerCase();
}
return word.charAt(0).toLowerCase() + word.slice(1);
}
// Other words: capitalize first letter, keep rest as-is
return word.charAt(0).toUpperCase() + word.slice(1);
})
.join('');
}
/**
* Convert a string to PascalCase while preserving existing casing patterns
* E.g., "Application Function" -> "ApplicationFunction"
* "ICT Governance Model" -> "IctGovernanceModel"
* "applicationFunction" -> "ApplicationFunction"
*/
export function toPascalCase(str: string): string {
// First split on spaces and special chars
const words = str
.replace(/[^a-zA-Z0-9\s]/g, ' ')
.split(/\s+/)
.filter(w => w.length > 0);
if (words.length === 0) return '';
// If it's a single word, just capitalize first letter
if (words.length === 1) {
const word = words[0];
return word.charAt(0).toUpperCase() + word.slice(1);
}
// Multiple words - capitalize first letter of each
return words
.map(word => {
// Acronyms (all-uppercase words): capitalize the first letter, lowercase the rest (e.g. "ICT" -> "Ict")
if (word === word.toUpperCase() && word.length > 1) {
return word.charAt(0).toUpperCase() + word.slice(1).toLowerCase();
}
return word.charAt(0).toUpperCase() + word.slice(1);
})
.join('');
}
/**
* Map Jira attribute type ID to our type system
*/
export function mapJiraType(typeId: number): 'text' | 'integer' | 'float' | 'boolean' | 'date' | 'datetime' | 'select' | 'reference' | 'url' | 'email' | 'textarea' | 'user' | 'status' | 'unknown' {
return JIRA_TYPE_MAP[typeId] || 'unknown';
}
/**
* Determine sync priority for an object type
*/
export function determineSyncPriority(typeName: string, objectCount: number): number {
// Application Component and related main types first
if (PRIORITY_TYPE_NAMES.has(typeName)) {
return 1;
}
// Reference data types last
for (const pattern of REFERENCE_TYPE_PATTERNS) {
if (pattern.test(typeName)) {
return 10;
}
}
// Medium priority for types with more objects
if (objectCount > 100) {
return 2;
}
if (objectCount > 10) {
return 5;
}
return 8;
}
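The priority tiers above (1 for the named core types, 10 for reference-data name patterns, then 2/5/8 by object count) can be exercised with a standalone restatement of the function (repeated here only for illustration; the canonical version is the export above):

```typescript
// Standalone restatement of determineSyncPriority for a quick check of the tiers.
const PRIORITY = new Set(['Application Component', 'Server', 'Flows']);
const REF_PATTERNS = [/Factor$/, /Model$/, /Type$/, /Category$/, /Importance$/, /Analyse$/, /Organisation$/, /Function$/];

function determineSyncPriority(typeName: string, objectCount: number): number {
  if (PRIORITY.has(typeName)) return 1;                     // core types sync first
  if (REF_PATTERNS.some(p => p.test(typeName))) return 10;  // reference data last, regardless of size
  if (objectCount > 100) return 2;                          // large populations next
  if (objectCount > 10) return 5;
  return 8;
}

console.log(determineSyncPriority('Server', 3));                 // 1
console.log(determineSyncPriority('ICT Governance Model', 500)); // 10 (pattern wins over count)
console.log(determineSyncPriority('Database Instance', 250));    // 2
console.log(determineSyncPriority('Data Center', 12));           // 5
console.log(determineSyncPriority('Sandbox', 2));                // 8
```

Note the ordering: name patterns are checked before object counts, so a large reference-data type like "ICT Governance Model" still sorts last.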

View File

@@ -8,10 +8,12 @@
*/
import { logger } from './logger.js';
import { cacheStore } from './cacheStore.js';
import { normalizedCacheStore as cacheStore } from './normalizedCacheStore.js';
import { jiraAssetsClient, JiraObjectNotFoundError } from './jiraAssetsClient.js';
import { OBJECT_TYPES, getObjectTypesBySyncPriority } from '../generated/jira-schema.js';
import type { CMDBObject, CMDBObjectTypeName } from '../generated/jira-types.js';
import { schemaDiscoveryService } from './schemaDiscoveryService.js';
import type { ObjectEntry } from '../domain/jiraAssetsPayload.js';
// =============================================================================
// Types
@@ -61,6 +63,7 @@ class SyncEngine {
private incrementalInterval: number;
private batchSize: number;
private lastIncrementalSync: Date | null = null;
private lastConfigCheck: number = 0; // Track last config check time to avoid log spam
constructor() {
this.incrementalInterval = parseInt(
@@ -80,6 +83,8 @@ class SyncEngine {
/**
* Initialize the sync engine
* Performs initial sync if cache is cold, then starts incremental sync
* Note: Sync engine uses service account token from .env (JIRA_SERVICE_ACCOUNT_TOKEN)
* for all read operations. Write operations require user PAT from profile settings.
*/
async initialize(): Promise<void> {
if (this.isRunning) {
@@ -88,27 +93,30 @@ class SyncEngine {
}
logger.info('SyncEngine: Initializing...');
logger.info('SyncEngine: Sync uses service account token (JIRA_SERVICE_ACCOUNT_TOKEN) from .env');
this.isRunning = true;
// Check if we need a full sync
const stats = await cacheStore.getStats();
const lastFullSync = stats.lastFullSync;
const needsFullSync = !stats.isWarm || !lastFullSync || this.isStale(lastFullSync, 24 * 60 * 60 * 1000);
// Check if configuration is complete before starting scheduler
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const isConfigured = await schemaConfigurationService.isConfigurationComplete();
if (needsFullSync) {
logger.info('SyncEngine: Cache is cold or stale, starting full sync in background...');
// Run full sync in background (non-blocking)
this.fullSync().catch(err => {
logger.error('SyncEngine: Background full sync failed', err);
});
// Start incremental sync scheduler if token is available AND configuration is complete
if (jiraAssetsClient.hasToken()) {
if (isConfigured) {
this.startIncrementalSyncScheduler();
logger.info('SyncEngine: Incremental sync scheduler started (configuration complete)');
} else {
logger.info('SyncEngine: Incremental sync scheduler NOT started - schema configuration not complete. Please configure object types in settings first.');
// Start scheduler but it will check configuration on each run
// This allows scheduler to start automatically when configuration is completed later
this.startIncrementalSyncScheduler();
logger.info('SyncEngine: Incremental sync scheduler started (will check configuration on each run)');
}
} else {
logger.info('SyncEngine: Cache is warm, skipping initial full sync');
logger.info('SyncEngine: Service account token not configured, incremental sync disabled');
}
// Start incremental sync scheduler
this.startIncrementalSyncScheduler();
logger.info('SyncEngine: Initialized');
logger.info('SyncEngine: Initialized (using service account token for sync operations)');
}
/**
@@ -140,8 +148,22 @@ class SyncEngine {
/**
* Perform a full sync of all object types
* Uses service account token from .env (JIRA_SERVICE_ACCOUNT_TOKEN)
*/
async fullSync(): Promise<SyncResult> {
// Check if service account token is configured (sync uses service account token)
if (!jiraAssetsClient.hasToken()) {
logger.warn('SyncEngine: Jira service account token not configured, cannot perform sync');
return {
success: false,
stats: [],
totalObjects: 0,
totalRelations: 0,
duration: 0,
error: 'Jira service account token (JIRA_SERVICE_ACCOUNT_TOKEN) not configured in .env. Please configure it to enable sync operations.',
};
}
if (this.isSyncing) {
logger.warn('SyncEngine: Sync already in progress');
return {
@@ -163,14 +185,42 @@ class SyncEngine {
logger.info('SyncEngine: Starting full sync...');
try {
// Get object types sorted by sync priority
const objectTypes = getObjectTypesBySyncPriority();
// Check if configuration is complete
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const isConfigured = await schemaConfigurationService.isConfigurationComplete();
if (!isConfigured) {
throw new Error('Schema configuration not complete. Please configure at least one object type to be synced in the settings page.');
}
for (const typeDef of objectTypes) {
const typeStat = await this.syncObjectType(typeDef.typeName as CMDBObjectTypeName);
stats.push(typeStat);
totalObjects += typeStat.objectsProcessed;
totalRelations += typeStat.relationsExtracted;
// Get enabled object types from configuration
logger.info('SyncEngine: Fetching enabled object types from configuration...');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
logger.info(`SyncEngine: Found ${enabledTypes.length} enabled object types to sync`);
if (enabledTypes.length === 0) {
throw new Error('No object types enabled for syncing. Please enable at least one object type in the settings page.');
}
// Schema discovery will happen automatically when needed (e.g., for relation extraction)
// It's no longer required upfront - the user has already configured which object types to sync
logger.info('SyncEngine: Starting object sync for configured object types...');
// Sync each enabled object type
for (const enabledType of enabledTypes) {
try {
const typeStat = await this.syncConfiguredObjectType(enabledType);
stats.push(typeStat);
totalObjects += typeStat.objectsProcessed;
totalRelations += typeStat.relationsExtracted;
} catch (error) {
logger.error(`SyncEngine: Failed to sync ${enabledType.displayName}`, error);
stats.push({
objectType: enabledType.displayName,
objectsProcessed: 0,
relationsExtracted: 0,
duration: 0,
});
}
}
// Update sync metadata
@@ -205,81 +255,216 @@ class SyncEngine {
}
/**
* Sync a single object type
* Store an object and all its nested referenced objects recursively
* This method processes the entire object tree, storing all nested objects
* and extracting all relations, while guarding against infinite loops caused by circular references.
*
* @param entry - The object entry to store (in ObjectEntry format from API)
* @param typeName - The type name of the object
* @param processedIds - Set of already processed object IDs (to prevent duplicates and circular refs)
* @returns Statistics about objects stored and relations extracted
*/
private async syncObjectType(typeName: CMDBObjectTypeName): Promise<SyncStats> {
private async storeObjectTree(
entry: ObjectEntry,
typeName: CMDBObjectTypeName,
processedIds: Set<string>
): Promise<{ objectsStored: number; relationsExtracted: number }> {
const entryId = String(entry.id);
// Skip if already processed (handles circular references)
if (processedIds.has(entryId)) {
logger.debug(`SyncEngine: Skipping already processed object ${entry.objectKey || entryId} of type ${typeName}`);
return { objectsStored: 0, relationsExtracted: 0 };
}
processedIds.add(entryId);
let objectsStored = 0;
let relationsExtracted = 0;
try {
logger.debug(`SyncEngine: [Recursive] Storing object tree for ${entry.objectKey || entryId} of type ${typeName} (depth: ${processedIds.size - 1})`);
// 1. Adapt and parse the object
const adapted = jiraAssetsClient.adaptObjectEntryToJiraAssetsObject(entry);
if (!adapted) {
logger.warn(`SyncEngine: Failed to adapt object ${entry.objectKey || entryId}`);
return { objectsStored: 0, relationsExtracted: 0 };
}
const parsed = await jiraAssetsClient.parseObject(adapted);
if (!parsed) {
logger.warn(`SyncEngine: Failed to parse object ${entry.objectKey || entryId}`);
return { objectsStored: 0, relationsExtracted: 0 };
}
// 2. Store the object
await cacheStore.upsertObject(typeName, parsed);
objectsStored++;
logger.debug(`SyncEngine: Stored object ${parsed.objectKey || parsed.id} of type ${typeName}`);
// 3. Schema discovery must be manually triggered via API endpoints
// No automatic discovery
// 4. Extract and store relations for this object
await cacheStore.extractAndStoreRelations(typeName, parsed);
relationsExtracted++;
logger.debug(`SyncEngine: Extracted relations for object ${parsed.objectKey || parsed.id}`);
// 5. Recursively process nested referenced objects
// Note: Lookup maps should already be initialized by getAllObjectsOfType
// Use a separate Set for extraction to avoid conflicts with storage tracking
const extractionProcessedIds = new Set<string>();
const nestedRefs = jiraAssetsClient.extractNestedReferencedObjects(
entry,
extractionProcessedIds, // Separate Set for extraction (prevents infinite loops in traversal)
5, // max depth
0 // current depth
);
if (nestedRefs.length > 0) {
logger.debug(`SyncEngine: [Recursive] Found ${nestedRefs.length} nested referenced objects for ${entry.objectKey || entryId}`);
// Group by type for better logging
const refsByType = new Map<string, number>();
for (const ref of nestedRefs) {
refsByType.set(ref.typeName, (refsByType.get(ref.typeName) || 0) + 1);
}
const typeSummary = Array.from(refsByType.entries())
.map(([type, count]) => `${count} ${type}`)
.join(', ');
logger.debug(`SyncEngine: [Recursive] Nested objects by type: ${typeSummary}`);
}
// 6. Recursively store each nested object
for (const { entry: nestedEntry, typeName: nestedTypeName } of nestedRefs) {
logger.debug(`SyncEngine: [Recursive] Processing nested object ${nestedEntry.objectKey || nestedEntry.id} of type ${nestedTypeName}`);
const nestedResult = await this.storeObjectTree(
nestedEntry,
nestedTypeName as CMDBObjectTypeName,
processedIds
);
objectsStored += nestedResult.objectsStored;
relationsExtracted += nestedResult.relationsExtracted;
}
logger.debug(`SyncEngine: [Recursive] Completed storing object tree for ${entry.objectKey || entryId}: ${objectsStored} objects, ${relationsExtracted} relations`);
return { objectsStored, relationsExtracted };
} catch (error) {
logger.error(`SyncEngine: Failed to store object tree for ${entry.objectKey || entryId}`, error);
return { objectsStored, relationsExtracted };
}
}
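The `processedIds` guard is the load-bearing part of `storeObjectTree`: each object ID is recorded before recursing, so mutually referencing objects are each visited exactly once. A self-contained sketch of that pattern (the `Entry` shape here is simplified, not the real `ObjectEntry`):

```typescript
// Simplified entry: an id plus direct references to other entries
interface Entry {
  id: string;
  refs: Entry[];
}

// Count every reachable entry once, even across reference cycles
function countTree(entry: Entry, seen = new Set<string>()): number {
  if (seen.has(entry.id)) return 0; // circular-reference guard
  seen.add(entry.id);
  let n = 1;
  for (const ref of entry.refs) {
    n += countTree(ref, seen);
  }
  return n;
}

// Two objects that reference each other are each counted once
const a: Entry = { id: 'A', refs: [] };
const b: Entry = { id: 'B', refs: [a] };
a.refs.push(b);
console.log(countTree(a)); // 2
```

Without the `seen` set, the mutual reference between `a` and `b` would recurse forever, which is exactly why `storeObjectTree` threads `processedIds` through every recursive call.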
/**
* Sync a configured object type (from schema configuration)
*/
private async syncConfiguredObjectType(enabledType: {
schemaId: string;
objectTypeId: number;
objectTypeName: string;
displayName: string;
}): Promise<SyncStats> {
const startTime = Date.now();
let objectsProcessed = 0;
let relationsExtracted = 0;
try {
const typeDef = OBJECT_TYPES[typeName];
if (!typeDef) {
logger.warn(`SyncEngine: Unknown type ${typeName}`);
return { objectType: typeName, objectsProcessed: 0, relationsExtracted: 0, duration: 0 };
}
logger.info(`SyncEngine: Syncing ${enabledType.displayName} (${enabledType.objectTypeName}) from schema ${enabledType.schemaId}...`);
logger.debug(`SyncEngine: Syncing ${typeName}...`);
// Fetch all objects from Jira using the configured schema and object type
// This returns raw entries for recursive processing (includeAttributesDeep=2 provides nested data)
const { objects: jiraObjects, rawEntries } = await jiraAssetsClient.getAllObjectsOfType(
enabledType.displayName, // Use display name for Jira API
this.batchSize,
enabledType.schemaId
);
logger.info(`SyncEngine: Fetched ${jiraObjects.length} ${enabledType.displayName} objects from Jira (schema: ${enabledType.schemaId})`);
// Fetch all objects from Jira
const jiraObjects = await jiraAssetsClient.getAllObjectsOfType(typeName, this.batchSize);
logger.info(`SyncEngine: Fetched ${jiraObjects.length} ${typeName} objects from Jira`);
// Schema discovery must be manually triggered via API endpoints
// No automatic discovery
// Parse and cache objects
const parsedObjects: CMDBObject[] = [];
// Use objectTypeName for cache storage (PascalCase)
const typeName = enabledType.objectTypeName as CMDBObjectTypeName;
// Process each main object recursively using storeObjectTree
// This will store the object and all its nested referenced objects
const processedIds = new Set<string>(); // Track processed objects to prevent duplicates and circular refs
const failedObjects: Array<{ id: string; key: string; label: string; reason: string }> = [];
for (const jiraObj of jiraObjects) {
const parsed = jiraAssetsClient.parseObject(jiraObj);
if (parsed) {
parsedObjects.push(parsed);
} else {
// Track objects that failed to parse
failedObjects.push({
id: jiraObj.id?.toString() || 'unknown',
key: jiraObj.objectKey || 'unknown',
label: jiraObj.label || 'unknown',
reason: 'parseObject returned null',
});
logger.warn(`SyncEngine: Failed to parse ${typeName} object: ${jiraObj.objectKey || jiraObj.id} (${jiraObj.label || 'unknown label'})`);
if (rawEntries && rawEntries.length > 0) {
logger.info(`SyncEngine: Processing ${rawEntries.length} ${enabledType.displayName} objects recursively...`);
for (const rawEntry of rawEntries) {
try {
const result = await this.storeObjectTree(rawEntry, typeName, processedIds);
objectsProcessed += result.objectsStored;
relationsExtracted += result.relationsExtracted;
} catch (error) {
const entryId = String(rawEntry.id);
failedObjects.push({
id: entryId,
key: rawEntry.objectKey || 'unknown',
label: rawEntry.label || 'unknown',
reason: error instanceof Error ? error.message : 'Unknown error',
});
logger.warn(`SyncEngine: Failed to store object tree for ${enabledType.displayName} object: ${rawEntry.objectKey || entryId} (${rawEntry.label || 'unknown label'})`, error);
}
}
} else {
// Fallback: if rawEntries not available, use adapted objects (less efficient, no recursion)
logger.warn(`SyncEngine: Raw entries not available, using fallback linear processing (no recursive nesting)`);
const parsedObjects: CMDBObject[] = [];
for (const jiraObj of jiraObjects) {
const parsed = await jiraAssetsClient.parseObject(jiraObj);
if (parsed) {
parsedObjects.push(parsed);
} else {
failedObjects.push({
id: jiraObj.id?.toString() || 'unknown',
key: jiraObj.objectKey || 'unknown',
label: jiraObj.label || 'unknown',
reason: 'parseObject returned null',
});
logger.warn(`SyncEngine: Failed to parse ${enabledType.displayName} object: ${jiraObj.objectKey || jiraObj.id} (${jiraObj.label || 'unknown label'})`);
}
}
if (parsedObjects.length > 0) {
await cacheStore.batchUpsertObjects(typeName, parsedObjects);
objectsProcessed = parsedObjects.length;
// Extract relations
for (const obj of parsedObjects) {
await cacheStore.extractAndStoreRelations(typeName, obj);
relationsExtracted++;
}
}
}
// Log parsing statistics
if (failedObjects.length > 0) {
logger.warn(`SyncEngine: ${failedObjects.length} ${typeName} objects failed to parse:`, failedObjects.map(o => `${o.key} (${o.label})`).join(', '));
}
// Batch upsert to cache
if (parsedObjects.length > 0) {
await cacheStore.batchUpsertObjects(typeName, parsedObjects);
objectsProcessed = parsedObjects.length;
// Extract relations
for (const obj of parsedObjects) {
await cacheStore.extractAndStoreRelations(typeName, obj);
relationsExtracted++;
}
logger.warn(`SyncEngine: ${failedObjects.length} ${enabledType.displayName} objects failed to process:`, failedObjects.map(o => `${o.key} (${o.label}): ${o.reason}`).join(', '));
}
const duration = Date.now() - startTime;
const skippedCount = jiraObjects.length - objectsProcessed;
if (skippedCount > 0) {
logger.warn(`SyncEngine: Synced ${objectsProcessed}/${jiraObjects.length} ${typeName} objects in ${duration}ms (${skippedCount} skipped)`);
logger.warn(`SyncEngine: Synced ${objectsProcessed}/${jiraObjects.length} ${enabledType.displayName} objects in ${duration}ms (${skippedCount} skipped)`);
} else {
logger.debug(`SyncEngine: Synced ${objectsProcessed} ${typeName} objects in ${duration}ms`);
logger.debug(`SyncEngine: Synced ${objectsProcessed} ${enabledType.displayName} objects in ${duration}ms`);
}
return {
objectType: typeName,
objectType: enabledType.displayName,
objectsProcessed,
relationsExtracted,
duration,
};
} catch (error) {
logger.error(`SyncEngine: Failed to sync ${typeName}`, error);
logger.error(`SyncEngine: Failed to sync ${enabledType.displayName}`, error);
return {
objectType: typeName,
objectType: enabledType.displayName,
objectsProcessed,
relationsExtracted,
duration: Date.now() - startTime,
@@ -287,12 +472,27 @@ class SyncEngine {
}
}
/**
* Sync a single object type (legacy method, kept for backward compatibility)
*/
private async syncObjectType(typeName: CMDBObjectTypeName): Promise<SyncStats> {
// This method is deprecated - use syncConfiguredObjectType instead
logger.warn(`SyncEngine: syncObjectType(${typeName}) is deprecated, use configured object types instead`);
return {
objectType: typeName,
objectsProcessed: 0,
relationsExtracted: 0,
duration: 0,
};
}
// ==========================================================================
// Incremental Sync
// ==========================================================================
/**
* Start the incremental sync scheduler
* The scheduler will check configuration on each run and only sync if configuration is complete
*/
private startIncrementalSyncScheduler(): void {
if (this.incrementalTimer) {
@@ -300,9 +500,11 @@ class SyncEngine {
}
logger.info(`SyncEngine: Starting incremental sync scheduler (every ${this.incrementalInterval}ms)`);
logger.info('SyncEngine: Scheduler will only perform syncs when schema configuration is complete');
this.incrementalTimer = setInterval(() => {
if (!this.isSyncing && this.isRunning) {
// incrementalSync() will check if configuration is complete before syncing
this.incrementalSync().catch(err => {
logger.error('SyncEngine: Incremental sync failed', err);
});
@@ -312,11 +514,38 @@ class SyncEngine {
/**
* Perform an incremental sync (only updated objects)
* Uses service account token from .env (JIRA_SERVICE_ACCOUNT_TOKEN)
*
* Note: On Jira Data Center, IQL-based incremental sync is not supported.
* We instead check if a periodic full sync is needed.
*/
async incrementalSync(): Promise<{ success: boolean; updatedCount: number }> {
// Check if service account token is configured (sync uses service account token)
if (!jiraAssetsClient.hasToken()) {
logger.debug('SyncEngine: Jira service account token not configured, skipping incremental sync');
return { success: false, updatedCount: 0 };
}
// Check if configuration is complete before attempting sync
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const isConfigured = await schemaConfigurationService.isConfigurationComplete();
if (!isConfigured) {
// Don't log on every interval - only log once per minute to avoid spam
const now = Date.now();
if (!this.lastConfigCheck || now - this.lastConfigCheck > 60000) {
logger.debug('SyncEngine: Schema configuration not complete, skipping incremental sync. Please configure object types in settings.');
this.lastConfigCheck = now;
}
return { success: false, updatedCount: 0 };
}
// Get enabled object types - will be used later to filter updated objects
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
if (enabledTypes.length === 0) {
logger.debug('SyncEngine: No enabled object types, skipping incremental sync');
return { success: false, updatedCount: 0 };
}
if (this.isSyncing) {
return { success: false, updatedCount: 0 };
}
@@ -332,6 +561,15 @@ class SyncEngine {
logger.debug(`SyncEngine: Incremental sync since ${since.toISOString()}`);
// Get enabled object types to filter incremental sync
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const enabledTypeNames = new Set(enabledTypes.map(et => et.objectTypeName));
if (enabledTypeNames.size === 0) {
logger.debug('SyncEngine: No enabled object types, skipping incremental sync');
return { success: false, updatedCount: 0 };
}
// Fetch updated objects from Jira
const updatedObjects = await jiraAssetsClient.getUpdatedObjectsSince(since, this.batchSize);
@@ -361,15 +599,49 @@ class SyncEngine {
return { success: true, updatedCount: 0 };
}
let updatedCount = 0;
// Schema discovery must be manually triggered via API endpoints
// No automatic discovery
let updatedCount = 0;
const processedIds = new Set<string>(); // Track processed objects for recursive sync
// Filter updated objects to only process enabled object types
// Use recursive processing to handle nested references
for (const jiraObj of updatedObjects) {
const parsed = jiraAssetsClient.parseObject(jiraObj);
const parsed = await jiraAssetsClient.parseObject(jiraObj);
if (parsed) {
const typeName = parsed._objectType as CMDBObjectTypeName;
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
updatedCount++;
// Only sync if this object type is enabled
if (!enabledTypeNames.has(typeName)) {
logger.debug(`SyncEngine: Skipping ${typeName} in incremental sync - not enabled`);
continue;
}
// Get raw entry for recursive processing
const objectId = parsed.id;
try {
const entry = await jiraAssetsClient.getObjectEntry(objectId);
if (entry) {
// Use recursive storeObjectTree to process object and all nested references
const result = await this.storeObjectTree(entry, typeName, processedIds);
if (result.objectsStored > 0) {
updatedCount++;
logger.debug(`SyncEngine: Incremental sync processed ${objectId}: ${result.objectsStored} objects, ${result.relationsExtracted} relations`);
}
} else {
// Fallback to linear processing if raw entry not available
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
updatedCount++;
}
} catch (error) {
logger.warn(`SyncEngine: Failed to get raw entry for ${objectId}, using fallback`, error);
// Fallback to linear processing
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
updatedCount++;
}
}
}
@@ -397,6 +669,7 @@ class SyncEngine {
/**
* Trigger a sync for a specific object type
* Only syncs if the object type is enabled in configuration
* Allows concurrent syncs for different types, but blocks if:
* - A full sync is in progress
* - An incremental sync is in progress
@@ -413,10 +686,19 @@ class SyncEngine {
throw new Error(`Sync already in progress for ${typeName}`);
}
// Check if this type is enabled in configuration
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const enabledType = enabledTypes.find(et => et.objectTypeName === typeName);
if (!enabledType) {
throw new Error(`Object type ${typeName} is not enabled for syncing. Please enable it in the Schema Configuration settings page.`);
}
this.syncingTypes.add(typeName);
try {
return await this.syncObjectType(typeName);
return await this.syncConfiguredObjectType(enabledType);
} finally {
this.syncingTypes.delete(typeName);
}
@@ -424,20 +706,39 @@ class SyncEngine {
/**
* Force sync a single object
* Only syncs if the object type is enabled in configuration
* If the object was deleted from Jira, it will be removed from the local cache
* Uses recursive processing to store nested referenced objects
*/
async syncObject(typeName: CMDBObjectTypeName, objectId: string): Promise<boolean> {
try {
const jiraObj = await jiraAssetsClient.getObject(objectId);
if (!jiraObj) return false;
// Check if this type is enabled in configuration
const { schemaConfigurationService } = await import('./schemaConfigurationService.js');
const enabledTypes = await schemaConfigurationService.getEnabledObjectTypes();
const isEnabled = enabledTypes.some(et => et.objectTypeName === typeName);
const parsed = jiraAssetsClient.parseObject(jiraObj);
if (!parsed) return false;
if (!isEnabled) {
logger.warn(`SyncEngine: Cannot sync object ${objectId} - type ${typeName} is not enabled for syncing`);
return false;
}
await cacheStore.upsertObject(typeName, parsed);
await cacheStore.extractAndStoreRelations(typeName, parsed);
// Schema discovery must be manually triggered via API endpoints
// No automatic discovery
// Get raw ObjectEntry for recursive processing
const entry = await jiraAssetsClient.getObjectEntry(objectId);
if (!entry) return false;
// Use recursive storeObjectTree to process object and all nested references
const processedIds = new Set<string>();
const result = await this.storeObjectTree(entry, typeName, processedIds);
return true;
if (result.objectsStored > 0) {
logger.info(`SyncEngine: Synced object ${objectId} recursively: ${result.objectsStored} objects, ${result.relationsExtracted} relations`);
return true;
}
return false;
} catch (error) {
// If object was deleted from Jira, remove it from our cache
if (error instanceof JiraObjectNotFoundError) {

View File

@@ -0,0 +1,616 @@
/**
* User Service
*
* Handles user CRUD operations, password management, email verification, and role assignment.
*/
import bcrypt from 'bcrypt';
import { randomBytes } from 'crypto';
import { logger } from './logger.js';
import { getAuthDatabase } from './database/migrations.js';
import { emailService } from './emailService.js';
const SALT_ROUNDS = 10;
const isPostgres = (): boolean => {
return process.env.DATABASE_TYPE === 'postgres' || process.env.DATABASE_TYPE === 'postgresql';
};
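This flag drives boolean parameter binding throughout the service: SQLite has no native boolean type, so flags are bound as `0`/`1` there, while PostgreSQL receives real booleans. A minimal sketch of that convention (the `boolParam` helper is an illustration, not a function from the source):

```typescript
// Mirrors the `isPostgres() ? value : (value ? 1 : 0)` pattern used
// when binding boolean columns such as is_active and email_verified.
function boolParam(value: boolean, postgres: boolean): boolean | number {
  return postgres ? value : (value ? 1 : 0);
}

console.log(boolParam(true, true));   // true  (PostgreSQL: native boolean)
console.log(boolParam(true, false));  // 1     (SQLite: integer flag)
console.log(boolParam(false, false)); // 0
```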
export interface User {
id: number;
email: string;
username: string;
password_hash: string;
display_name: string | null;
is_active: boolean;
email_verified: boolean;
email_verification_token: string | null;
password_reset_token: string | null;
password_reset_expires: string | null;
created_at: string;
updated_at: string;
last_login: string | null;
}
export interface CreateUserInput {
email: string;
username: string;
password?: string;
display_name?: string;
send_invitation?: boolean;
}
export interface UpdateUserInput {
email?: string;
username?: string;
display_name?: string;
is_active?: boolean;
}
class UserService {
/**
* Hash a password
*/
async hashPassword(password: string): Promise<string> {
return bcrypt.hash(password, SALT_ROUNDS);
}
/**
* Verify a password
*/
async verifyPassword(password: string, hash: string): Promise<boolean> {
return bcrypt.compare(password, hash);
}
/**
* Generate a secure random token
*/
generateToken(): string {
return randomBytes(32).toString('hex');
}
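`generateToken` relies on Node's CSPRNG: 32 random bytes hex-encoded yield a 64-character token, which is what ends up in `email_verification_token` and `password_reset_token`. Demonstrated standalone:

```typescript
import { randomBytes } from 'node:crypto';

// 32 cryptographically random bytes, hex-encoded -> 64 lowercase hex chars
const token = randomBytes(32).toString('hex');

console.log(token.length);                  // 64
console.log(/^[0-9a-f]{64}$/.test(token)); // true
```

Using `node:crypto` rather than `Math.random()` matters here: these tokens gate password resets and account invitations, so they must be unguessable.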
/**
* Create a new user
*/
async createUser(input: CreateUserInput): Promise<User> {
const db = getAuthDatabase();
const now = new Date().toISOString();
try {
// Check if email or username already exists
const existingEmail = await db.queryOne<User>(
'SELECT id FROM users WHERE email = ?',
[input.email]
);
if (existingEmail) {
throw new Error('Email already exists');
}
const existingUsername = await db.queryOne<User>(
'SELECT id FROM users WHERE username = ?',
[input.username]
);
if (existingUsername) {
throw new Error('Username already exists');
}
// Hash password if provided
let passwordHash = '';
if (input.password) {
passwordHash = await this.hashPassword(input.password);
} else {
// Generate a temporary password hash (user will set password via invitation)
passwordHash = await this.hashPassword(this.generateToken());
}
// Generate email verification token
const emailVerificationToken = this.generateToken();
// Insert user
await db.execute(
`INSERT INTO users (
email, username, password_hash, display_name,
is_active, email_verified, email_verification_token,
created_at, updated_at
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
[
input.email,
input.username,
passwordHash,
input.display_name || null,
isPostgres() ? true : 1,
isPostgres() ? false : 0,
emailVerificationToken,
now,
now,
]
);
const user = await db.queryOne<User>(
'SELECT * FROM users WHERE email = ?',
[input.email]
);
if (!user) {
throw new Error('Failed to create user');
}
// Send invitation email if requested
if (input.send_invitation && !input.password) {
await this.sendInvitation(user.id);
}
logger.info(`User created: ${user.email}`);
return user;
} finally {
await db.close();
}
}
/**
* Get user by ID
*/
async getUserById(id: number): Promise<User | null> {
const db = getAuthDatabase();
try {
return await db.queryOne<User>(
'SELECT * FROM users WHERE id = ?',
[id]
);
} finally {
await db.close();
}
}
/**
* Get user by email
*/
async getUserByEmail(email: string): Promise<User | null> {
const db = getAuthDatabase();
try {
return await db.queryOne<User>(
'SELECT * FROM users WHERE email = ?',
[email]
);
} finally {
await db.close();
}
}
/**
* Get user by username
*/
async getUserByUsername(username: string): Promise<User | null> {
const db = getAuthDatabase();
try {
return await db.queryOne<User>(
'SELECT * FROM users WHERE username = ?',
[username]
);
} finally {
await db.close();
}
}
/**
* Get all users
*/
async getAllUsers(): Promise<User[]> {
const db = getAuthDatabase();
try {
return await db.query<User>(
'SELECT * FROM users ORDER BY created_at DESC'
);
} finally {
await db.close();
}
}
/**
* Update user
*/
async updateUser(id: number, input: UpdateUserInput): Promise<User> {
const db = getAuthDatabase();
const now = new Date().toISOString();
try {
const updates: string[] = [];
const values: any[] = [];
if (input.email !== undefined) {
// Check if email already exists for another user
const existing = await db.queryOne<User>(
'SELECT id FROM users WHERE email = ? AND id != ?',
[input.email, id]
);
if (existing) {
throw new Error('Email already exists');
}
updates.push('email = ?');
values.push(input.email);
}
if (input.username !== undefined) {
// Check if username already exists for another user
const existing = await db.queryOne<User>(
'SELECT id FROM users WHERE username = ? AND id != ?',
[input.username, id]
);
if (existing) {
throw new Error('Username already exists');
}
updates.push('username = ?');
values.push(input.username);
}
if (input.display_name !== undefined) {
updates.push('display_name = ?');
values.push(input.display_name);
}
if (input.is_active !== undefined) {
updates.push('is_active = ?');
values.push(isPostgres() ? input.is_active : (input.is_active ? 1 : 0));
}
if (updates.length === 0) {
const user = await this.getUserById(id);
if (!user) {
throw new Error('User not found');
}
return user;
}
updates.push('updated_at = ?');
values.push(now);
values.push(id);
await db.execute(
`UPDATE users SET ${updates.join(', ')} WHERE id = ?`,
values
);
const user = await this.getUserById(id);
if (!user) {
throw new Error('User not found');
}
logger.info(`User updated: ${user.email}`);
return user;
} finally {
await db.close();
}
}
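`updateUser` builds its `UPDATE` statement dynamically: one `col = ?` fragment and one value per provided field, with the `id` appended last for the `WHERE` clause. A minimal sketch of that builder pattern (`buildUpdate` is an illustration, not a function from the source, and it assumes field names are trusted — only values are parameterized):

```typescript
// Assemble a parameterized UPDATE from a partial field map.
// Keys must come from trusted code, never user input.
function buildUpdate(
  table: string,
  fields: Record<string, unknown>,
  id: number
): { sql: string; values: unknown[] } {
  const updates = Object.keys(fields).map(k => `${k} = ?`);
  const values: unknown[] = Object.values(fields);
  values.push(id); // WHERE id = ? binds last
  return { sql: `UPDATE ${table} SET ${updates.join(', ')} WHERE id = ?`, values };
}

const { sql, values } = buildUpdate('users', { email: 'a@b.c', is_active: 1 }, 7);
console.log(sql);    // UPDATE users SET email = ?, is_active = ? WHERE id = ?
console.log(values); // [ 'a@b.c', 1, 7 ]
```

The real method adds per-field uniqueness checks and an `updated_at` timestamp before executing, but the SET-clause assembly is the same shape.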
/**
* Delete user
*/
async deleteUser(id: number): Promise<boolean> {
const db = getAuthDatabase();
try {
const result = await db.execute(
'DELETE FROM users WHERE id = ?',
[id]
);
logger.info(`User deleted: ${id}`);
return result > 0;
} finally {
await db.close();
}
}
/**
* Update user password
*/
async updatePassword(id: number, newPassword: string): Promise<void> {
const db = getAuthDatabase();
const now = new Date().toISOString();
try {
const passwordHash = await this.hashPassword(newPassword);
await db.execute(
'UPDATE users SET password_hash = ?, password_reset_token = NULL, password_reset_expires = NULL, updated_at = ? WHERE id = ?',
[passwordHash, now, id]
);
logger.info(`Password updated for user: ${id}`);
} finally {
await db.close();
}
}
/**
* Generate and store password reset token
*/
async generatePasswordResetToken(email: string): Promise<string | null> {
const user = await this.getUserByEmail(email);
if (!user) {
// Don't reveal whether the user exists
return null;
}
const db = getAuthDatabase();
try {
const token = this.generateToken();
const expiresAt = new Date(Date.now() + 60 * 60 * 1000).toISOString(); // 1 hour
await db.execute(
'UPDATE users SET password_reset_token = ?, password_reset_expires = ? WHERE id = ?',
[token, expiresAt, user.id]
);
// Store in email_tokens table as well
await db.execute(
`INSERT INTO email_tokens (user_id, token, type, expires_at, used, created_at)
VALUES (?, ?, ?, ?, ?, ?)`,
[user.id, token, 'password_reset', expiresAt, isPostgres() ? false : 0, new Date().toISOString()]
);
// Send password reset email
await emailService.sendPasswordResetEmail(user.email, token, user.display_name || undefined);
return token;
} finally {
await db.close();
}
}
/**
* Reset password using token
*/
async resetPasswordWithToken(token: string, newPassword: string): Promise<boolean> {
const db = getAuthDatabase();
try {
// Check token in email_tokens table
const tokenRecord = await db.queryOne<{ user_id: number; expires_at: string; used: boolean }>(
`SELECT user_id, expires_at, used FROM email_tokens
WHERE token = ? AND type = 'password_reset' AND used = ?`,
[token, isPostgres() ? false : 0]
);
if (!tokenRecord) {
return false;
}
// Check if expired
if (new Date(tokenRecord.expires_at) < new Date()) {
return false;
}
// Update password
await this.updatePassword(tokenRecord.user_id, newPassword);
// Mark token as used
await db.execute(
'UPDATE email_tokens SET used = ? WHERE token = ?',
[isPostgres() ? true : 1, token]
);
logger.info(`Password reset completed for user: ${tokenRecord.user_id}`);
return true;
} finally {
await db.close();
}
}
/**
* Verify email with token
*/
async verifyEmail(token: string): Promise<boolean> {
const db = getAuthDatabase();
try {
const user = await db.queryOne<User>(
'SELECT * FROM users WHERE email_verification_token = ?',
[token]
);
if (!user) {
return false;
}
const now = new Date().toISOString();
await db.execute(
'UPDATE users SET email_verified = ?, email_verification_token = NULL, updated_at = ? WHERE id = ?',
[isPostgres() ? true : 1, now, user.id]
);
logger.info(`Email verified for user: ${user.email}`);
return true;
} finally {
await db.close();
}
}
/**
* Manually verify email address (admin action)
*/
async manuallyVerifyEmail(id: number): Promise<void> {
const db = getAuthDatabase();
const now = new Date().toISOString();
try {
await db.execute(
'UPDATE users SET email_verified = ?, email_verification_token = NULL, updated_at = ? WHERE id = ?',
[isPostgres() ? true : 1, now, id]
);
logger.info(`Email manually verified for user: ${id}`);
} finally {
await db.close();
}
}
/**
* Send invitation email
*/
async sendInvitation(userId: number): Promise<boolean> {
const db = getAuthDatabase();
try {
const user = await this.getUserById(userId);
if (!user) {
return false;
}
const token = this.generateToken();
const expiresAt = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000).toISOString(); // 7 days
// Store invitation token
await db.execute(
`INSERT INTO email_tokens (user_id, token, type, expires_at, used, created_at)
VALUES (?, ?, ?, ?, ?, ?)`,
[userId, token, 'invitation', expiresAt, isPostgres() ? false : 0, new Date().toISOString()]
);
// Send invitation email
return await emailService.sendInvitationEmail(
user.email,
token,
user.display_name || undefined
);
} finally {
await db.close();
}
}
/**
* Validate invitation token
*/
async validateInvitationToken(token: string): Promise<User | null> {
const db = getAuthDatabase();
try {
const tokenRecord = await db.queryOne<{ user_id: number; expires_at: string; used: boolean }>(
`SELECT user_id, expires_at, used FROM email_tokens
WHERE token = ? AND type = 'invitation' AND used = ?`,
[token, isPostgres() ? false : 0]
);
if (!tokenRecord) {
return null;
}
// Check if expired
if (new Date(tokenRecord.expires_at) < new Date()) {
return null;
}
return await this.getUserById(tokenRecord.user_id);
} finally {
await db.close();
}
}
/**
* Accept invitation and set password
*/
async acceptInvitation(token: string, password: string): Promise<User | null> {
const db = getAuthDatabase();
try {
const user = await this.validateInvitationToken(token);
if (!user) {
return null;
}
// Update password
await this.updatePassword(user.id, password);
// Mark token as used
await db.execute(
'UPDATE email_tokens SET used = ? WHERE token = ?',
[isPostgres() ? true : 1, token]
);
// Activate user and verify email
const now = new Date().toISOString();
await db.execute(
'UPDATE users SET is_active = ?, email_verified = ?, updated_at = ? WHERE id = ?',
[isPostgres() ? true : 1, isPostgres() ? true : 1, now, user.id]
);
return await this.getUserById(user.id);
} finally {
await db.close();
}
}
/**
* Update last login timestamp
*/
async updateLastLogin(id: number): Promise<void> {
const db = getAuthDatabase();
const now = new Date().toISOString();
try {
await db.execute(
'UPDATE users SET last_login = ? WHERE id = ?',
[now, id]
);
} finally {
await db.close();
}
}
/**
* Get user roles
*/
async getUserRoles(userId: number): Promise<Array<{ id: number; name: string; description: string | null }>> {
const db = getAuthDatabase();
try {
return await db.query<{ id: number; name: string; description: string | null }>(
`SELECT r.id, r.name, r.description
FROM roles r
INNER JOIN user_roles ur ON r.id = ur.role_id
WHERE ur.user_id = ?`,
[userId]
);
} finally {
await db.close();
}
}
/**
* Assign role to user
*/
async assignRole(userId: number, roleId: number): Promise<boolean> {
const db = getAuthDatabase();
const now = new Date().toISOString();
try {
await db.execute(
`INSERT INTO user_roles (user_id, role_id, assigned_at)
VALUES (?, ?, ?)
ON CONFLICT(user_id, role_id) DO NOTHING`,
[userId, roleId, now]
);
return true;
} catch (error: any) {
// Fallback for engines/drivers without ON CONFLICT support (e.g. older SQLite)
if (error.message?.includes('UNIQUE constraint')) {
return false; // Already assigned
}
throw error;
} finally {
await db.close();
}
}
/**
* Remove role from user
*/
async removeRole(userId: number, roleId: number): Promise<boolean> {
const db = getAuthDatabase();
try {
const result = await db.execute(
'DELETE FROM user_roles WHERE user_id = ? AND role_id = ?',
[userId, roleId]
);
return result > 0;
} finally {
await db.close();
}
}
}
export const userService = new UserService();
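The `ON CONFLICT ... DO NOTHING` upsert with a unique-constraint fallback used by `assignRole` above is a portable-insert pattern worth isolating. A minimal, hedged sketch with a stand-in `execute` function (the real adapter's signature may differ):

```typescript
// Illustrative only: attempt the native upsert first; on drivers/engines
// that reject ON CONFLICT, a duplicate insert surfaces as a UNIQUE error.
type Execute = (sql: string, params: unknown[]) => Promise<number>;

async function insertRoleIgnoringDuplicates(
  execute: Execute,
  userId: number,
  roleId: number
): Promise<boolean> {
  try {
    await execute(
      `INSERT INTO user_roles (user_id, role_id, assigned_at)
       VALUES (?, ?, ?)
       ON CONFLICT(user_id, role_id) DO NOTHING`,
      [userId, roleId, new Date().toISOString()]
    );
    return true;
  } catch (error: any) {
    if (error?.message?.includes('UNIQUE constraint')) {
      return false; // already assigned
    }
    throw error;
  }
}
```

Note that when `ON CONFLICT` is supported, a duplicate insert silently succeeds and the function returns `true` — the same behavior as `assignRole` in the code above.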


@@ -0,0 +1,298 @@
/**
* User Settings Service
*
* Manages user-specific settings including Jira PAT, AI features, and API keys.
*/
import { logger } from './logger.js';
import { getAuthDatabase } from './database/migrations.js';
import { encryptionService } from './encryptionService.js';
import { config } from '../config/env.js';
const isPostgres = (): boolean => {
return process.env.DATABASE_TYPE === 'postgres' || process.env.DATABASE_TYPE === 'postgresql';
};
export interface UserSettings {
user_id: number;
jira_pat: string | null;
jira_pat_encrypted: boolean;
ai_enabled: boolean;
ai_provider: string | null;
ai_api_key: string | null;
web_search_enabled: boolean;
tavily_api_key: string | null;
updated_at: string;
}
export interface UpdateUserSettingsInput {
jira_pat?: string;
ai_enabled?: boolean;
ai_provider?: 'openai' | 'anthropic';
ai_api_key?: string;
web_search_enabled?: boolean;
tavily_api_key?: string;
}
class UserSettingsService {
/**
* Get user settings
*/
async getUserSettings(userId: number): Promise<UserSettings | null> {
const db = getAuthDatabase();
try {
const settings = await db.queryOne<UserSettings>(
'SELECT * FROM user_settings WHERE user_id = ?',
[userId]
);
if (!settings) {
// Create default settings
return await this.createDefaultSettings(userId);
}
// Decrypt sensitive fields if encrypted
if (settings.jira_pat && settings.jira_pat_encrypted && encryptionService.isConfigured()) {
try {
settings.jira_pat = await encryptionService.decrypt(settings.jira_pat);
} catch (error) {
logger.error('Failed to decrypt Jira PAT:', error);
settings.jira_pat = null;
}
}
if (settings.ai_api_key && encryptionService.isConfigured()) {
try {
settings.ai_api_key = await encryptionService.decrypt(settings.ai_api_key);
} catch (error) {
logger.error('Failed to decrypt AI API key:', error);
settings.ai_api_key = null;
}
}
if (settings.tavily_api_key && encryptionService.isConfigured()) {
try {
settings.tavily_api_key = await encryptionService.decrypt(settings.tavily_api_key);
} catch (error) {
logger.error('Failed to decrypt Tavily API key:', error);
settings.tavily_api_key = null;
}
}
return settings;
} finally {
await db.close();
}
}
/**
* Create default settings for user
*/
async createDefaultSettings(userId: number): Promise<UserSettings> {
const db = getAuthDatabase();
const now = new Date().toISOString();
try {
await db.execute(
`INSERT INTO user_settings (
user_id, jira_pat, jira_pat_encrypted, ai_enabled, ai_provider,
ai_api_key, web_search_enabled, tavily_api_key, updated_at
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
[
userId,
null,
isPostgres() ? true : 1,
isPostgres() ? false : 0,
null,
null,
isPostgres() ? false : 0,
null,
now,
]
);
return await this.getUserSettings(userId) as UserSettings;
} finally {
await db.close();
}
}
/**
* Update user settings
*/
async updateUserSettings(userId: number, input: UpdateUserSettingsInput): Promise<UserSettings> {
const db = getAuthDatabase();
const now = new Date().toISOString();
try {
// Ensure settings exist
let settings = await this.getUserSettings(userId);
if (!settings) {
settings = await this.createDefaultSettings(userId);
}
const updates: string[] = [];
const values: any[] = [];
if (input.jira_pat !== undefined) {
let encryptedPat: string | null = null;
if (input.jira_pat) {
if (encryptionService.isConfigured()) {
encryptedPat = await encryptionService.encrypt(input.jira_pat);
} else {
// Store unencrypted if encryption not configured (development)
encryptedPat = input.jira_pat;
}
}
updates.push('jira_pat = ?');
updates.push('jira_pat_encrypted = ?');
values.push(encryptedPat);
values.push(encryptionService.isConfigured() ? (isPostgres() ? true : 1) : (isPostgres() ? false : 0));
}
if (input.ai_enabled !== undefined) {
updates.push('ai_enabled = ?');
values.push(isPostgres() ? input.ai_enabled : (input.ai_enabled ? 1 : 0));
}
if (input.ai_provider !== undefined) {
updates.push('ai_provider = ?');
values.push(input.ai_provider);
}
if (input.ai_api_key !== undefined) {
let encryptedKey: string | null = null;
if (input.ai_api_key) {
if (encryptionService.isConfigured()) {
encryptedKey = await encryptionService.encrypt(input.ai_api_key);
} else {
encryptedKey = input.ai_api_key;
}
}
updates.push('ai_api_key = ?');
values.push(encryptedKey);
}
if (input.web_search_enabled !== undefined) {
updates.push('web_search_enabled = ?');
values.push(isPostgres() ? input.web_search_enabled : (input.web_search_enabled ? 1 : 0));
}
if (input.tavily_api_key !== undefined) {
let encryptedKey: string | null = null;
if (input.tavily_api_key) {
if (encryptionService.isConfigured()) {
encryptedKey = await encryptionService.encrypt(input.tavily_api_key);
} else {
encryptedKey = input.tavily_api_key;
}
}
updates.push('tavily_api_key = ?');
values.push(encryptedKey);
}
if (updates.length === 0) {
return settings;
}
updates.push('updated_at = ?');
values.push(now);
values.push(userId);
await db.execute(
`UPDATE user_settings SET ${updates.join(', ')} WHERE user_id = ?`,
values
);
logger.info(`User settings updated for user: ${userId}`);
return await this.getUserSettings(userId) as UserSettings;
} finally {
await db.close();
}
}
/**
* Validate Jira PAT by testing connection
*/
async validateJiraPat(userId: number, pat?: string): Promise<boolean> {
try {
const settings = await this.getUserSettings(userId);
const tokenToTest = pat || settings?.jira_pat;
if (!tokenToTest) {
return false;
}
// Test connection to Jira
const testUrl = `${config.jiraHost}/rest/api/2/myself`;
const response = await fetch(testUrl, {
headers: {
'Authorization': `Bearer ${tokenToTest}`,
'Accept': 'application/json',
},
});
return response.ok;
} catch (error) {
logger.error('Jira PAT validation failed:', error);
return false;
}
}
/**
* Get Jira PAT status
*/
async getJiraPatStatus(userId: number): Promise<{ configured: boolean; valid: boolean }> {
const settings = await this.getUserSettings(userId);
const configured = !!settings?.jira_pat;
if (!configured) {
return { configured: false, valid: false };
}
const valid = await this.validateJiraPat(userId);
return { configured: true, valid };
}
/**
* Check if AI features are enabled for user
*/
async isAiEnabled(userId: number): Promise<boolean> {
const settings = await this.getUserSettings(userId);
return !!settings?.ai_enabled; // coerce: SQLite stores booleans as 0/1
}
/**
* Get AI provider for user
*/
async getAiProvider(userId: number): Promise<'openai' | 'anthropic' | null> {
const settings = await this.getUserSettings(userId);
return (settings?.ai_provider as 'openai' | 'anthropic') || null;
}
/**
* Get AI API key for user
*/
async getAiApiKey(userId: number): Promise<string | null> {
const settings = await this.getUserSettings(userId);
return settings?.ai_api_key || null;
}
/**
* Check if web search is enabled for user
*/
async isWebSearchEnabled(userId: number): Promise<boolean> {
const settings = await this.getUserSettings(userId);
return !!settings?.web_search_enabled; // coerce: SQLite stores booleans as 0/1
}
/**
* Get Tavily API key for user
*/
async getTavilyApiKey(userId: number): Promise<string | null> {
const settings = await this.getUserSettings(userId);
return settings?.tavily_api_key || null;
}
}
export const userSettingsService = new UserSettingsService();
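The `isPostgres() ? true : 1` / `? false : 0` ternaries repeated throughout this service could be centralized in a single helper. A minimal sketch (the name `toDbBool` is hypothetical; `isPostgres` mirrors the check defined at the top of the file):

```typescript
// Hypothetical helper to normalize booleans for the active database driver:
// Postgres accepts real booleans, while SQLite stores them as 0/1 integers.
const isPostgres = (): boolean =>
  process.env.DATABASE_TYPE === 'postgres' || process.env.DATABASE_TYPE === 'postgresql';

function toDbBool(value: boolean): boolean | number {
  return isPostgres() ? value : (value ? 1 : 0);
}

// With DATABASE_TYPE unset (SQLite assumed), booleans map to integers.
console.log(toDbBool(true), toDbBool(false));
```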


@@ -34,6 +34,7 @@ export interface ApplicationListItem {
id: string;
key: string;
name: string;
searchReference?: string | null; // Search reference for matching
status: ApplicationStatus | null;
applicationFunctions: ReferenceValue[]; // Multiple functions supported
governanceModel: ReferenceValue | null;
@@ -88,6 +89,11 @@ export interface ApplicationDetails {
applicationManagementTAM?: ReferenceValue | null; // Application Management - TAM
technischeArchitectuur?: string | null; // URL to Technical Architecture document (Attribute ID 572)
dataCompletenessPercentage?: number; // Data completeness percentage (0-100)
reference?: string | null; // Reference field (Enterprise Architect GUID)
confluenceSpace?: string | null; // Confluence Space URL
supplierTechnical?: ReferenceValue | null; // Supplier Technical
supplierImplementation?: ReferenceValue | null; // Supplier Implementation
supplierConsultancy?: ReferenceValue | null; // Supplier Consultancy
}
// Search filters


@@ -0,0 +1,43 @@
/**
* Helper functions for Express request query and params
*/
import { Request } from 'express';
/**
* Get a query parameter as a string, handling both string and string[] types
*/
export function getQueryString(req: Request, key: string): string | undefined {
const value = req.query[key];
if (value === undefined) return undefined;
if (Array.isArray(value)) return value[0] as string;
return value as string;
}
/**
* Get a query parameter as a number, handling both string and string[] types
*/
export function getQueryNumber(req: Request, key: string, defaultValue?: number): number {
const value = getQueryString(req, key);
if (value === undefined) return defaultValue ?? 0;
const parsed = parseInt(value, 10);
return isNaN(parsed) ? (defaultValue ?? 0) : parsed;
}
/**
* Get a query parameter as a boolean
*/
export function getQueryBoolean(req: Request, key: string, defaultValue = false): boolean {
const value = getQueryString(req, key);
if (value === undefined) return defaultValue;
return value === 'true' || value === '1';
}
/**
* Get a route parameter as a string, handling both string and string[] types
*/
export function getParamString(req: Request, key: string): string {
const value = req.params[key];
if (Array.isArray(value)) return value[0] as string;
return value as string;
}
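A quick usage illustration of the helpers above, redefined standalone against a minimal stand-in for Express's `Request` (the real code would pass an actual `Request`):

```typescript
// Minimal Request-like shape: Express query values can be string | string[].
type ReqLike = { query: Record<string, string | string[] | undefined> };

function getQueryString(req: ReqLike, key: string): string | undefined {
  const value = req.query[key];
  if (value === undefined) return undefined;
  return Array.isArray(value) ? value[0] : value;
}

function getQueryNumber(req: ReqLike, key: string, defaultValue?: number): number {
  const value = getQueryString(req, key);
  if (value === undefined) return defaultValue ?? 0;
  const parsed = parseInt(value, 10);
  return Number.isNaN(parsed) ? (defaultValue ?? 0) : parsed;
}

const req: ReqLike = { query: { page: ['2', '3'], limit: 'abc' } };
// Repeated keys (?page=2&page=3) collapse to the first value:
console.log(getQueryNumber(req, 'page')); // 2
// Unparseable values fall back to the default:
console.log(getQueryNumber(req, 'limit', 25)); // 25
```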

docker-compose.dev.yml Normal file

@@ -0,0 +1,21 @@
services:
postgres:
image: postgres:15-alpine
container_name: cmdb-postgres-dev
environment:
POSTGRES_DB: cmdb_insight
POSTGRES_USER: cmdb
POSTGRES_PASSWORD: cmdb-dev
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U cmdb"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
volumes:
postgres_data:


@@ -0,0 +1,58 @@
version: '3.8'
services:
backend:
image: zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:latest
environment:
- NODE_ENV=production
- PORT=3001
env_file:
- .env.production
volumes:
- backend_data:/app/data
restart: unless-stopped
networks:
- internal
healthcheck:
test: ["CMD", "node", "-e", "require('http').get('http://localhost:3001/health', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
frontend:
image: zdlas.azurecr.io/cmdb-insight/frontend:latest
depends_on:
- backend
restart: unless-stopped
networks:
- internal
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/"]
interval: 30s
timeout: 10s
retries: 3
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
- nginx_cache:/var/cache/nginx
depends_on:
- frontend
- backend
restart: unless-stopped
networks:
- internal
volumes:
backend_data:
nginx_cache:
networks:
internal:
driver: bridge


@@ -1,10 +1,8 @@
version: '3.8'
services:
postgres:
image: postgres:15-alpine
environment:
POSTGRES_DB: cmdb
POSTGRES_DB: cmdb_insight
POSTGRES_USER: cmdb
POSTGRES_PASSWORD: cmdb-dev
ports:
@@ -30,12 +28,12 @@ services:
- DATABASE_TYPE=postgres
- DATABASE_HOST=postgres
- DATABASE_PORT=5432
- DATABASE_NAME=cmdb
- DATABASE_NAME=cmdb_insight
- DATABASE_USER=cmdb
- DATABASE_PASSWORD=cmdb-dev
# Optional Jira/AI variables (set in .env file or environment)
- JIRA_HOST=${JIRA_HOST}
- JIRA_PAT=${JIRA_PAT}
- JIRA_SCHEMA_ID=${JIRA_SCHEMA_ID}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
volumes:
- ./backend/src:/app/src


@@ -0,0 +1,140 @@
# Authentication System Environment Variables
This document describes the new environment variables required for the authentication and authorization system.
## Application Branding
```env
# Application name displayed throughout the UI
APP_NAME=CMDB Insight
# Application tagline/subtitle displayed in header and login pages
APP_TAGLINE=Management console for Jira Assets
# Copyright text displayed in the footer (use {year} as placeholder for current year)
APP_COPYRIGHT=© {year} Zuyderland Medisch Centrum
```
**Note:** The `{year}` placeholder in `APP_COPYRIGHT` will be automatically replaced with the current year. If not set, defaults to `© {current_year} Zuyderland Medisch Centrum`.
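The placeholder substitution can be done in one line; a hypothetical sketch (the actual substitution lives in the application code):

```typescript
// Expand the {year} placeholder documented above; the fallback string
// matches the documented default.
const template = process.env.APP_COPYRIGHT ?? '© {year} Zuyderland Medisch Centrum';
const copyright = template.replace('{year}', String(new Date().getFullYear()));
console.log(copyright);
```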
## Email Configuration (Nodemailer)
```env
# SMTP Configuration
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=your-email@example.com
SMTP_PASSWORD=your-password
SMTP_FROM=noreply@example.com
```
## Encryption
```env
# Encryption Key (32 bytes, base64 encoded)
# Generate with: openssl rand -base64 32
ENCRYPTION_KEY=your-32-byte-encryption-key-base64
```
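A 32-byte base64 key like the one above is what AES-256-GCM expects. A minimal, hedged sketch of how an encryption service *might* use it — illustrative only, not the actual `encryptionService` code (a random key stands in for the env var here):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// In production this would be: Buffer.from(process.env.ENCRYPTION_KEY!, 'base64')
const key = randomBytes(32);

// Encrypt with AES-256-GCM; pack iv (12 bytes) + auth tag (16 bytes) + ciphertext.
function encrypt(plaintext: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const enc = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), enc]).toString('base64');
}

// Reverse the packing and verify the auth tag before returning plaintext.
function decrypt(payload: string): string {
  const buf = Buffer.from(payload, 'base64');
  const decipher = createDecipheriv('aes-256-gcm', key, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28));
  return Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]).toString('utf8');
}
```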
## Local Authentication
```env
# Enable local authentication (email/password)
LOCAL_AUTH_ENABLED=true
# Allow public registration (optional, default: false)
REGISTRATION_ENABLED=false
```
## Password Requirements
```env
# Password minimum length
PASSWORD_MIN_LENGTH=8
# Password complexity requirements
PASSWORD_REQUIRE_UPPERCASE=true
PASSWORD_REQUIRE_LOWERCASE=true
PASSWORD_REQUIRE_NUMBER=true
PASSWORD_REQUIRE_SPECIAL=false
```
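The variables above can be wired into a validator along these lines; a hedged sketch (the env parsing and names are illustrative, not the app's actual implementation):

```typescript
// Illustrative password-policy check driven by the documented env vars.
interface PasswordPolicy {
  minLength: number;
  requireUppercase: boolean;
  requireLowercase: boolean;
  requireNumber: boolean;
  requireSpecial: boolean;
}

const policy: PasswordPolicy = {
  minLength: parseInt(process.env.PASSWORD_MIN_LENGTH ?? '8', 10),
  requireUppercase: (process.env.PASSWORD_REQUIRE_UPPERCASE ?? 'true') === 'true',
  requireLowercase: (process.env.PASSWORD_REQUIRE_LOWERCASE ?? 'true') === 'true',
  requireNumber: (process.env.PASSWORD_REQUIRE_NUMBER ?? 'true') === 'true',
  requireSpecial: (process.env.PASSWORD_REQUIRE_SPECIAL ?? 'false') === 'true',
};

function validatePassword(pw: string, p: PasswordPolicy): boolean {
  if (pw.length < p.minLength) return false;
  if (p.requireUppercase && !/[A-Z]/.test(pw)) return false;
  if (p.requireLowercase && !/[a-z]/.test(pw)) return false;
  if (p.requireNumber && !/[0-9]/.test(pw)) return false;
  if (p.requireSpecial && !/[^A-Za-z0-9]/.test(pw)) return false;
  return true;
}
```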
## Session Configuration
```env
# Session duration in hours
SESSION_DURATION_HOURS=24
```
## Initial Admin User
```env
# Create initial administrator user (optional)
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=SecurePassword123!
ADMIN_USERNAME=admin
ADMIN_DISPLAY_NAME=Administrator
```
## Complete Example
```env
# Email Configuration
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=your-email@gmail.com
SMTP_PASSWORD=your-app-password
SMTP_FROM=noreply@example.com
# Encryption (paste the output of: openssl rand -base64 32 — shell substitution does not work in a .env file)
ENCRYPTION_KEY=your-generated-key-here
# Local Auth
LOCAL_AUTH_ENABLED=true
REGISTRATION_ENABLED=false
# Password Requirements
PASSWORD_MIN_LENGTH=8
PASSWORD_REQUIRE_UPPERCASE=true
PASSWORD_REQUIRE_LOWERCASE=true
PASSWORD_REQUIRE_NUMBER=true
PASSWORD_REQUIRE_SPECIAL=false
# Session
SESSION_DURATION_HOURS=24
# Initial Admin
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=ChangeMe123!
ADMIN_USERNAME=admin
ADMIN_DISPLAY_NAME=Administrator
```
## Important Notes
### User-Specific Configuration (REMOVED from ENV)
The following environment variables have been **REMOVED** from the codebase and are **NOT** configurable via environment variables:
- `JIRA_PAT`: **Configure in User Settings > Jira PAT**
- `ANTHROPIC_API_KEY`: **Configure in User Settings > AI Settings**
- `OPENAI_API_KEY`: **Configure in User Settings > AI Settings**
- `TAVILY_API_KEY`: **Configure in User Settings > AI Settings**
**These are now user-specific settings only.** Each user must configure their own API keys in their profile settings. This provides:
- Better security (keys not in shared config files)
- Per-user API key management
- Individual rate limiting per user
- Better audit trails
- Encrypted storage in the database
### Required Configuration
- `SESSION_SECRET`: Should be a secure random string in production (generate with `openssl rand -base64 32`)
- `ENCRYPTION_KEY`: Must be exactly 32 bytes when base64 decoded (generate with `openssl rand -base64 32`)
### Application Branding
- The `{year}` placeholder in `APP_COPYRIGHT` will be automatically replaced with the current year


@@ -1,272 +0,0 @@
# Azure Deployment - Infrastructure Summary
## Application Overview
**Zuyderland CMDB GUI** - Web application for classifying and managing application components in Jira Assets.
### Technology Stack
- **Backend**: Node.js 20 (Express, TypeScript)
- **Frontend**: React 18 (Vite, TypeScript)
- **Database**: SQLite (cache layer, ~20MB, no backup needed - synced from Jira)
- **Containerization**: Docker
- **Authentication**: Jira OAuth 2.0 or Personal Access Token
- **Users**: Max. 20 colleagues
---
## Infrastructure Requirements
### 1. Compute Resources
**Recommended: Azure App Service (Basic Tier)**
- **App Service Plan**: B1 (1 vCPU, 1.75GB RAM) - **sufficient for 20 users**
- 2 Web Apps: Backend + Frontend (sharing the same App Service Plan)
- **Cost**: ~€15-25/month
- **Advantages**: Simple, managed service, sufficient for small teams
**Alternative: Azure Container Instances (ACI) - if you prefer containers**
- 2 containers: Backend + Frontend
- Backend: 1 vCPU, 2GB RAM
- Frontend: 0.5 vCPU, 1GB RAM
- **Cost**: ~€30-50/month
- **Drawback**: Fewer managed features than App Service
### 2. Database & Storage
**Option A: PostgreSQL (Recommended) ⭐**
- **Azure Database for PostgreSQL**: Flexible Server Basic tier (B1ms)
- **Database**: ~20MB (current size, room for growth)
- **Cost**: ~€20-30/month
- **Advantages**: Identical dev/prod stack, better concurrency, connection pooling
**Option B: SQLite (Current situation)**
- **SQLite Database**: ~20MB (in Azure Storage)
- **Azure Storage Account**: Standard LRS (Hot tier)
- **Cost**: ~€1-3/month
- **Drawbacks**: Limited concurrency, no connection pooling
**Logs**: ~500MB-1GB/month (Application Insights)
### 3. Networking
**Requirements:**
- **HTTPS**: SSL/TLS certificate (Let's Encrypt or Azure App Service Certificate)
- **DNS**: Subdomain (e.g. `cmdb.zuyderland.nl`)
- **Firewall**: Inbound ports 80/443, outbound to the Jira API
- **Load Balancer**: Azure Application Gateway (optional, for HA)
**Network Security:**
- Private endpoints (optional, for extra security)
- Network Security Groups (NSG)
- Azure Firewall (optional)
### 4. Secrets Management
**Azure Key Vault** for:
- `JIRA_OAUTH_CLIENT_SECRET`
- `SESSION_SECRET`
- `ANTHROPIC_API_KEY`
- `JIRA_PAT` (if used)
**Cost**: ~€1-5/month
### 5. Monitoring & Logging
**Azure Monitor:**
- Application Insights (Basic tier - free up to 5GB/month)
- Log Analytics Workspace (pay-as-you-go)
- Alerts for health checks and errors
**Cost**: ~€0-20/month (often free for small apps on the Basic tier)
### 6. Backup & Disaster Recovery
**No backup required** - data is synchronized from Jira Assets, so a backup is unnecessary.
The SQLite database is a cache layer that can be rebuilt via sync.
---
## Deployment Architecture
### Recommended: Azure App Service (Basic Tier)
**Simple setup for small teams (20 users):**
```
┌─────────────────────────────────────┐
│ Azure App Service (B1 Plan) │
│ │
│ ┌──────────┐ ┌──────────┐ │
│ │ Frontend │ │ Backend │ │
│ │ Web App │ │ Web App │ │
│ └──────────┘ └────┬─────┘ │
└─────────────────────────┼──────────┘
┌─────────────┴─────────────┐
│ │
┌───────▼──────┐ ┌────────────▼────┐
│ Azure Storage│ │ Azure Key Vault │
│ (SQLite DB) │ │ (Secrets) │
└──────────────┘ └─────────────────┘
┌───────▼──────┐
│ Application │
│ Insights │
│ (Basic/FREE) │
└──────────────┘
```
**Note**: Application Gateway is not needed for 20 users - App Service has built-in SSL and load balancing.
---
## Security Considerations
### 1. Authentication
- **Jira OAuth 2.0**: Users authenticate via Jira
- **Session Management**: Sessions in-memory (consider Azure Redis Cache for production)
### 2. Network Security
- **HTTPS Only**: All traffic over HTTPS
- **CORS**: Only allowed from the configured frontend URL
- **Rate Limiting**: 100 requests/minute per IP (configurable)
### 3. Data Security
- **Secrets**: All secrets in Azure Key Vault
- **Database**: SQLite database in Azure Storage (encrypted at rest)
- **In Transit**: TLS 1.2+ for all communication
### 4. Compliance
- **Logging**: All API calls logged (no PII)
- **Audit Trail**: Changes to applications are logged
- **Data Residency**: Data stays in Azure West Europe (or desired region)
---
## External Dependencies
### 1. Jira Assets API
- **Endpoint**: `https://jira.zuyderland.nl`
- **Authentication**: OAuth 2.0 or Personal Access Token
- **Rate Limits**: Respect Jira API rate limits
- **Network**: Outbound HTTPS to Jira (port 443)
### 2. AI API (Optional)
- **Anthropic Claude API**: For AI classification features
- **Network**: Outbound HTTPS to `api.anthropic.com`
---
## Deployment Steps
### 1. Create Azure Resources
```bash
# Resource Group
az group create --name rg-cmdb-gui --location westeurope
# App Service Plan (Basic B1 - sufficient for 20 users)
az appservice plan create --name plan-cmdb-gui --resource-group rg-cmdb-gui --sku B1
# Web Apps (share the same plan - saves costs)
az webapp create --name cmdb-backend --resource-group rg-cmdb-gui --plan plan-cmdb-gui
az webapp create --name cmdb-frontend --resource-group rg-cmdb-gui --plan plan-cmdb-gui
# Key Vault
az keyvault create --name kv-cmdb-gui --resource-group rg-cmdb-gui --location westeurope
# Storage Account (for the SQLite database - only with the SQLite option)
az storage account create --name stcmdbgui --resource-group rg-cmdb-gui --location westeurope --sku Standard_LRS
```
**With PostgreSQL (Recommended):**
```bash
# PostgreSQL Database (Flexible Server)
az postgres flexible-server create \
--resource-group rg-cmdb-gui \
--name psql-cmdb-gui \
--location westeurope \
--admin-user cmdbadmin \
--admin-password <secure-password-from-key-vault> \
--sku-name Standard_B1ms \
--tier Burstable \
--storage-size 32 \
--version 15
# Create the database
az postgres flexible-server db create \
--resource-group rg-cmdb-gui \
--server-name psql-cmdb-gui \
--database-name cmdb
```
### 2. Configuration
- Environment variables via App Service Configuration
- Secrets via Key Vault references
- SSL certificate via App Service Certificate or Let's Encrypt
### 3. CI/CD
- **Azure DevOps Pipelines** or **GitHub Actions**
- Automatic deployment on push to the main branch
- Deployment slots for zero-downtime updates
---
## Cost Estimate (Monthly)
**For 20 users - Basic setup:**
**With SQLite (current setup):**
| Component | Estimate |
|-----------|----------|
| App Service Plan (B1) | €15-25 |
| Storage Account | €1-3 |
| Key Vault | €1-2 |
| Application Insights (Basic) | €0-5 |
| **Total** | **€17-35/month** |
**With PostgreSQL (recommended):**
| Component | Estimate |
|-----------|----------|
| App Service Plan (B1) | €15-25 |
| PostgreSQL Database (B1ms) | €20-30 |
| Key Vault | €1-2 |
| Application Insights (Basic) | €0-5 |
| **Total** | **€36-62/month** |
*Includes: SSL certificate (free via App Service), basic monitoring*
**Note**: With the Basic tier and free Application Insights this can even stay under €20/month.
**Backup**: Not needed - data is synchronized from Jira Assets.
---
## Questions for the Infrastructure Team
1. **DNS & Domain**: Can we get a subdomain? (e.g. `cmdb.zuyderland.nl`)
2. **SSL Certificate**: Azure App Service Certificate or Let's Encrypt via certbot?
3. **Network**: Do we need VPN/ExpressRoute or direct internet access?
4. **Firewall Rules**: What outbound access is needed? (Jira API, Anthropic API)
5. **Monitoring**: Do we use the existing Azure Monitor setup or a separate workspace?
6. **Backup**: Not needed - the SQLite database is a cache layer; data is synchronized from Jira Assets
7. **Disaster Recovery**: Data can be re-synchronized from Jira (no backup required)
8. **Compliance**: Are there specific compliance requirements? (ISO 27001, NEN 7510)
9. **Scaling**: Not needed - max. 20 users, the Basic tier is sufficient
10. **Maintenance Windows**: When can we deploy updates?
---
## Next Steps
1. **Kick-off Meeting**: Discuss architecture and requirements
2. **Proof of Concept**: Deploy to Azure App Service (test environment)
3. **Security Review**: Security team review of the configuration
4. **Load Testing**: Test under the expected load
5. **Production Deployment**: Go-live with monitoring
---
## Contact & Documentation
- **Application Code**: [Git Repository]
- **Deployment Guide**: `PRODUCTION-DEPLOYMENT.md`
- **API Documentation**: `/api/config` endpoint


@@ -0,0 +1,240 @@
# Azure Resources Overview
Quick reference of all Azure resources needed for CMDB Insight deployment.
## 📋 Resources Summary
| Resource Type | Resource Name | Purpose | SKU/Tier | Estimated Cost | Shared? |
|--------------|---------------|---------|----------|----------------|--------|
| **Resource Group** | `rg-cmdb-insight-prod` | Container for all resources | - | Free | No |
| **Container Registry** | `yourcompanyacr` | Store Docker images (can be shared) | Basic/Standard | €5-20/month | ✅ Yes |
| **PostgreSQL Database** | `cmdb-postgres-prod` | Production database | Standard_B1ms | €20-30/month | No |
| **Key Vault** | `kv-cmdb-insight-prod` | Store secrets securely | Standard | €1-2/month | No |
| **App Service Plan** | `plan-cmdb-insight-prod` | Hosting plan | B1 | €15-25/month | No |
| **App Service (Backend)** | `cmdb-backend-prod` | Backend API | - | Included in plan | No |
| **App Service (Frontend)** | `cmdb-frontend-prod` | Frontend web app | - | Included in plan | No |
| **Application Insights** | `appi-cmdb-insight-prod` | Monitoring & logging | Basic | €0-5/month | No |
**Total Estimated Cost: €41-82/month** (depending on ACR tier and usage)
**💡 Note**: Container Registry can be **shared across multiple applications**. The repository name (`cmdb-insight`) separates this app from others. If you already have an ACR, reuse it to save costs!
---
## 🔗 Resource Dependencies
```
Resource Group (App-specific)
├── PostgreSQL Database
│ └── Stores: Application data
├── Key Vault
│ └── Stores: Secrets (JIRA tokens, passwords, etc.)
├── Application Insights
│ └── Monitors: Backend & Frontend apps
└── App Service Plan
├── Backend App Service
│ ├── Pulls from: Shared ACR (cmdb-insight/backend:latest)
│ ├── Connects to: PostgreSQL
│ ├── Reads from: Key Vault
│ └── Sends logs to: Application Insights
└── Frontend App Service
├── Pulls from: Shared ACR (cmdb-insight/frontend:latest)
└── Connects to: Backend App Service
Shared Resources (can be in separate resource group)
└── Container Registry (ACR) ← Shared across multiple applications
├── cmdb-insight/ ← This application
│ ├── backend:latest
│ └── frontend:latest
├── other-app/ ← Other applications
│ └── api:latest
└── shared-services/ ← Shared images
└── nginx:latest
```
---
## 🌐 Endpoints
After deployment, your application will be available at:
- **Frontend**: `https://cmdb-frontend-prod.azurewebsites.net`
- **Backend API**: `https://cmdb-backend-prod.azurewebsites.net/api`
- **Health Check**: `https://cmdb-backend-prod.azurewebsites.net/api/health`
If custom domain is configured:
- **Frontend**: `https://cmdb.yourcompany.com`
- **Backend API**: `https://api.cmdb.yourcompany.com` (or subdomain of your choice)
---
## 🔐 Required Secrets
These secrets should be stored in Azure Key Vault:
| Secret Name | Description | Example |
|-------------|-------------|---------|
| `JiraPat` | Jira Personal Access Token (if using PAT auth) | `ATATT3xFfGF0...` |
| `SessionSecret` | Session encryption secret | `a1b2c3d4e5f6...` (32+ chars) |
| `JiraOAuthClientId` | Jira OAuth Client ID | `OAuthClientId123` |
| `JiraOAuthClientSecret` | Jira OAuth Client Secret | `OAuthSecret456` |
| `DatabasePassword` | PostgreSQL admin password | `SecurePassword123!` |
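The secrets above can be loaded with the Azure CLI; a hedged sketch (vault name per this guide, placeholder values must be replaced with your real credentials):

```shell
KEY_VAULT="kv-cmdb-insight-prod"

# Generate a random 64-hex-char session secret (satisfies the 32+ chars requirement)
SESSION_SECRET=$(openssl rand -hex 32)

az keyvault secret set --vault-name "$KEY_VAULT" --name SessionSecret --value "$SESSION_SECRET"
az keyvault secret set --vault-name "$KEY_VAULT" --name JiraOAuthClientId --value "<your-client-id>"
az keyvault secret set --vault-name "$KEY_VAULT" --name JiraOAuthClientSecret --value "<your-client-secret>"
az keyvault secret set --vault-name "$KEY_VAULT" --name DatabasePassword --value "<your-db-password>"
```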
---
## 📊 Resource Sizing Recommendations
### For 20 Users (Current)
| Resource | Recommended SKU | Alternative |
|----------|----------------|-------------|
| App Service Plan | B1 (1 vCore, 1.75GB RAM) | B2 if experiencing slowness |
| PostgreSQL | Standard_B1ms (1 vCore, 2GB RAM) | Standard_B2s for growth |
| Container Registry | Basic (10GB) | Standard for production |
| Key Vault | Standard | Standard (only option) |
### For 50+ Users (Future Growth)
| Resource | Recommended SKU | Notes |
|----------|----------------|-------|
| App Service Plan | B2 or S1 | Better performance |
| PostgreSQL | Standard_B2s (2 vCores, 4GB RAM) | More concurrent connections |
| Container Registry | Standard (100GB) | More storage, geo-replication |
---
## 🔄 Update/Deployment Flow
1. **Code Changes** → Push to repository
2. **CI/CD Pipeline** → Builds Docker images
3. **Push to ACR** → Images stored in Container Registry
4. **Restart App Services** → Pulls new images from ACR
5. **Application Updates** → New version live
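Steps 2-3 can also be reproduced locally; a minimal sketch, assuming the ACR name used in the cleanup section below (`cmdbinsightacr`) and hypothetical `./backend` and `./frontend` Dockerfile directories:

```shell
ACR_NAME="cmdbinsightacr"            # ACR name as used elsewhere in this guide
REGISTRY="${ACR_NAME}.azurecr.io"
TAG="latest"

# Authenticate Docker against the registry (requires a logged-in Azure CLI)
az acr login --name "$ACR_NAME"

docker build -t "${REGISTRY}/cmdb-insight/backend:${TAG}" ./backend
docker push "${REGISTRY}/cmdb-insight/backend:${TAG}"

docker build -t "${REGISTRY}/cmdb-insight/frontend:${TAG}" ./frontend
docker push "${REGISTRY}/cmdb-insight/frontend:${TAG}"
```

After the push, step 4 (restarting the App Services, shown below) pulls the new images.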
### Manual Deployment
```bash
# Restart apps to pull latest images
az webapp restart --name cmdb-backend-prod --resource-group rg-cmdb-insight-prod
az webapp restart --name cmdb-frontend-prod --resource-group rg-cmdb-insight-prod
```
---
## 🛡️ Security Configuration
### Network Security
- **HTTPS Only**: Enabled on both App Services
- **Database Firewall**: Restricted to Azure services (can be further restricted)
- **Key Vault Access**: Managed Identity only (no shared keys)
### Authentication
- **App Services**: Managed Identity for ACR and Key Vault access
- **Database**: Username/password (stored in Key Vault)
- **Application**: Jira OAuth 2.0 or Personal Access Token
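Granting the backend's Managed Identity read access to Key Vault can be sketched as follows (resource names per this guide; `get list` is the minimum secret permission set for reading):

```shell
RESOURCE_GROUP="rg-cmdb-insight-prod"
BACKEND_APP="cmdb-backend-prod"
KEY_VAULT="kv-cmdb-insight-prod"

# Enable a system-assigned managed identity on the backend app
az webapp identity assign --name "$BACKEND_APP" --resource-group "$RESOURCE_GROUP"

# Capture the identity's principal ID
PRINCIPAL_ID=$(az webapp identity show --name "$BACKEND_APP" \
  --resource-group "$RESOURCE_GROUP" --query principalId -o tsv)

# Allow that identity to read secrets (no shared keys involved)
az keyvault set-policy --name "$KEY_VAULT" \
  --object-id "$PRINCIPAL_ID" --secret-permissions get list
```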
---
## 📈 Monitoring & Logging
### Application Insights
- **Metrics**: Response times, request rates, errors
- **Logs**: Application logs, exceptions, traces
- **Alerts**: Configured for downtime, errors, performance issues
### Access Logs
```bash
# Backend logs
az webapp log tail --name cmdb-backend-prod --resource-group rg-cmdb-insight-prod
# Frontend logs
az webapp log tail --name cmdb-frontend-prod --resource-group rg-cmdb-insight-prod
```
---
## 🔧 Configuration Files
### Environment Variables (Backend)
- `NODE_ENV=production`
- `PORT=3001`
- `DATABASE_TYPE=postgres`
- `DATABASE_URL` (from Key Vault)
- `JIRA_HOST=https://jira.zuyderland.nl`
- `JIRA_AUTH_METHOD=oauth`
- `JIRA_OAUTH_CLIENT_ID` (from Key Vault)
- `JIRA_OAUTH_CLIENT_SECRET` (from Key Vault)
- `JIRA_OAUTH_CALLBACK_URL`
- `SESSION_SECRET` (from Key Vault)
- `FRONTEND_URL`
- `APPINSIGHTS_INSTRUMENTATIONKEY`
### Environment Variables (Frontend)
- `VITE_API_URL` (points to backend API)
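The settings marked "(from Key Vault)" can be wired up with App Service Key Vault references instead of plain values; a hedged sketch (vault and app names per this guide, only a subset of settings shown):

```shell
RESOURCE_GROUP="rg-cmdb-insight-prod"
BACKEND_APP="cmdb-backend-prod"
KEY_VAULT="kv-cmdb-insight-prod"

# Key Vault reference syntax: App Service resolves the secret at startup
SESSION_REF="@Microsoft.KeyVault(SecretUri=https://${KEY_VAULT}.vault.azure.net/secrets/SessionSecret/)"

az webapp config appsettings set \
  --name "$BACKEND_APP" --resource-group "$RESOURCE_GROUP" \
  --settings NODE_ENV=production PORT=3001 DATABASE_TYPE=postgres \
             "SESSION_SECRET=$SESSION_REF"
```

Resolving Key Vault references requires the Managed Identity access described in the Security Configuration section above.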
---
## 🗑️ Cleanup (If Needed)
To delete all resources:
```bash
# Delete entire resource group (deletes all resources)
az group delete --name rg-cmdb-insight-prod --yes --no-wait
# Or delete individual resources
az acr delete --name cmdbinsightacr --resource-group rg-cmdb-insight-prod
az postgres flexible-server delete --name cmdb-postgres-prod --resource-group rg-cmdb-insight-prod
az keyvault delete --name kv-cmdb-insight-prod --resource-group rg-cmdb-insight-prod
az appservice plan delete --name plan-cmdb-insight-prod --resource-group rg-cmdb-insight-prod
```
**⚠️ Warning**: This will permanently delete all resources and data. Make sure you have backups if needed.
---
## 📞 Quick Commands Reference
```bash
# Set variables
RESOURCE_GROUP="rg-cmdb-insight-prod"
BACKEND_APP="cmdb-backend-prod"
FRONTEND_APP="cmdb-frontend-prod"
# Check app status
az webapp show --name $BACKEND_APP --resource-group $RESOURCE_GROUP --query state
# View logs
az webapp log tail --name $BACKEND_APP --resource-group $RESOURCE_GROUP
# Restart apps
az webapp restart --name $BACKEND_APP --resource-group $RESOURCE_GROUP
az webapp restart --name $FRONTEND_APP --resource-group $RESOURCE_GROUP
# List all resources
az resource list --resource-group $RESOURCE_GROUP --output table
# Get app URLs
echo "Frontend: https://${FRONTEND_APP}.azurewebsites.net"
echo "Backend: https://${BACKEND_APP}.azurewebsites.net/api"
```
---
## 📚 Related Documentation
- **`AZURE-NEW-SUBSCRIPTION-SETUP.md`** - Complete step-by-step setup guide
- **`AZURE-APP-SERVICE-DEPLOYMENT.md`** - Detailed App Service deployment
- **`AZURE-ACR-SETUP.md`** - ACR setup and usage
- **`AZURE-QUICK-REFERENCE.md`** - Quick reference guide
- **`PRODUCTION-DEPLOYMENT.md`** - General production deployment
---
**Last Updated**: 2025-01-21


@@ -0,0 +1,231 @@
# Azure DevOps Service Connection - Authentication Type
## 🎯 Recommendation for Your Situation
**For CMDB Insight with Azure Container Registry:**
### ✅ **Service Principal** (Recommended) ⭐
**Why:**
- ✅ Always works and is reliable
- ✅ Most widely supported option
- ✅ Easy to configure
- ✅ Works perfectly with Azure Container Registry
- ✅ No special requirements
---
## 📊 Comparing the Options
### Option 1: **Service Principal** ⭐ **RECOMMENDED**
**How it works:**
- Azure DevOps automatically creates a Service Principal in Azure AD
- The Service Principal is granted access to your Azure Container Registry
- Azure DevOps uses these credentials to log in to ACR
**Advantages:**
- ✅ **Simple** - Azure DevOps handles everything automatically
- ✅ **Reliable** - Always works, no special configuration needed
- ✅ **Secure** - Credentials are stored securely in Azure DevOps
- ✅ **Most widely supported** - The default option for most scenarios
- ✅ **Works with all Azure services** - Not just ACR
**Disadvantages:**
- ❌ Creates a Service Principal in Azure AD (but this is normal and safe)
**When to use:**
- ✅ **Your situation** - Azure DevOps Services (cloud) with Azure Container Registry
- ✅ Most scenarios
- ✅ When you want simple, reliable authentication
- ✅ The default choice for new service connections
**Configuration:**
- Azure DevOps handles everything automatically
- You only need to select your Azure subscription and ACR
- Azure DevOps creates the Service Principal and grants it the right permissions
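Under the hood this is roughly equivalent to creating the service principal yourself and granting it push rights on the registry; a hedged sketch (the service principal name is illustrative, the ACR name is the one referenced in this guide):

```shell
ACR_NAME="zdlas"   # the ACR referenced in this guide
ACR_ID=$(az acr show --name "$ACR_NAME" --query id -o tsv)

# Create a service principal scoped to the registry with push rights
# (AcrPush also implies pull)
az ad sp create-for-rbac \
  --name "sp-cmdb-acr-push" \
  --role AcrPush \
  --scopes "$ACR_ID"

echo "Registry login server: ${ACR_NAME}.azurecr.io"
```

With the Service Principal option, Azure DevOps performs these steps for you when the connection is saved.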
---
### Option 2: **Managed Service Identity (MSI)**
**How it works:**
- Uses a Managed Identity of the Azure DevOps host itself
- No credentials needed - Azure manages everything
- Only works if Azure DevOps has a Managed Identity
**Advantages:**
- ✅ No credentials to manage
- ✅ Rotated automatically by Azure
- ✅ More modern approach
**Disadvantages:**
- ❌ **Only works with Azure DevOps Server (on-premises)** with a Managed Identity
- ❌ **Does NOT work with Azure DevOps Services (cloud)** - This is important!
- ❌ Requires special configuration
- ❌ Less flexible
**When to use:**
- ✅ Azure DevOps Server (on-premises) with a Managed Identity
- ✅ When you don't want to manage credentials
- ❌ **NOT for Azure DevOps Services (cloud)** - This does not work!
**For your situation:** ❌ **Not suitable** - You use Azure DevOps Services (cloud), not on-premises
---
### Option 3: **Workload Identity Federation**
**How it works:**
- A more modern approach without secrets
- Uses federated identity (OIDC)
- Azure DevOps obtains a token from Azure AD without storing credentials
**Advantages:**
- ✅ No secrets stored
- ✅ More modern, more secure approach
- ✅ Automatic token management
**Disadvantages:**
- ❌ **Not yet fully supported** for all scenarios
- ❌ Can be more complex to configure
- ❌ Requires specific Azure AD configuration
- ❌ May not be available in all Azure DevOps organizations
**When to use:**
- ✅ When you want the most modern security features
- ✅ When your organization supports Workload Identity Federation
- ✅ For new projects with no need for legacy support
- ❌ **Not recommended if you want a simple setup**
**For your situation:** ⚠️ **Possibly available, but Service Principal is simpler**
---
## 🔍 Analysis of Your Situation
**Your setup:**
- ✅ Azure DevOps Services (cloud) - `dev.azure.com`
- ✅ Azure Container Registry - `zdlas.azurecr.io`
- ✅ A simple setup is desired
- ✅ Reliable authentication is required
**Conclusion:** ✅ **Service Principal is perfect!**
**Why not the other options:**
- ❌ **Managed Service Identity**: Does not work with Azure DevOps Services (cloud)
- ⚠️ **Workload Identity Federation**: Possibly available, but more complex than necessary
---
## 📋 Checklist: Which Choice?
### Choose **Service Principal** if:
- [x] You use Azure DevOps Services (cloud) ✅
- [x] You want a simple setup ✅
- [x] You need reliable authentication ✅
- [x] You want the standard, well-supported option ✅
- [x] You use Azure Container Registry ✅
**→ Your situation: ✅ Choose Service Principal!**
### Choose **Managed Service Identity** if:
- [ ] You use Azure DevOps Server (on-premises)
- [ ] You have configured a Managed Identity
- [ ] You don't want to manage credentials
**→ Your situation: ❌ Not suitable**
### Choose **Workload Identity Federation** if:
- [ ] You want the most modern security features
- [ ] Your organization supports it
- [ ] You have no need for legacy support
- [ ] You are willing to do extra configuration
**→ Your situation: ⚠️ Possible, but not necessary**
---
## 🔧 Configuration Steps (Service Principal)
When you choose **Service Principal**:
1. **Select Azure Subscription**
   - Pick your Azure subscription from the dropdown
2. **Select Azure Container Registry**
   - Pick your ACR (`zdlas`) from the dropdown
3. **Service Connection Name**
   - Enter: `zuyderland-cmdb-acr-connection`
   - ⚠️ **Important**: This name must match `dockerRegistryServiceConnection` in `azure-pipelines.yml`!
4. **Security**
   - Azure DevOps automatically creates a Service Principal
   - The Service Principal is automatically granted the right permissions (AcrPush role)
   - Credentials are stored securely in Azure DevOps
5. **Save**
   - Click "Save" or "Verify and save"
   - Azure DevOps automatically tests the connection
**✅ Done!** No further configuration needed.
---
## 🔄 Can I Switch Later?
**Yes, but:**
- You can always create a new service connection with a different authentication type
- You will then have to update the pipeline variables accordingly
- Service Principal is usually the best choice, so switching is rarely necessary
---
## 💡 My Recommendation
**For CMDB Insight:**
### ✅ **Choose Service Principal** ⭐
**Why:**
1. ✅ **Works perfectly** - The standard option for Azure DevOps Services
2. ✅ **Simple** - Azure DevOps handles everything automatically
3. ✅ **Reliable** - The most tested and best supported option
4. ✅ **Secure** - Credentials are managed safely by Azure DevOps
5. ✅ **Perfect for your situation** - Cloud Azure DevOps + Azure Container Registry
**You do not need:**
- ❌ Managed Service Identity (does not work with cloud Azure DevOps)
- ❌ Workload Identity Federation (more complex than necessary)
**Setup:**
1. Choose **Service Principal**
2. Select your subscription and ACR
3. Enter a name: `zuyderland-cmdb-acr-connection`
4. Save
**Done!**
---
## 📚 More Information
- [Azure DevOps Service Connections](https://learn.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints)
- [Service Principal Authentication](https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure)
- [Managed Service Identity](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/)
- [Workload Identity Federation](https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#workload-identity-federation)
---
## 🎯 Conclusion
**Choose: Service Principal**
This is the best choice for:
- ✅ Azure DevOps Services (cloud)
- ✅ Azure Container Registry
- ✅ A simple, reliable setup
- ✅ The standard, well-supported option
You can always try another option later if you want, but Service Principal is usually the best choice.