UI styling improvements: dashboard headers and navigation

- Restore blue PageHeader on Dashboard (/app-components)
- Update homepage (/) with subtle header design without blue bar
- Add uniform PageHeader styling to application edit page
- Fix Rapporten link on homepage to point to /reports overview
- Improve header descriptions spacing for better readability
2026-01-21 03:24:56 +01:00
parent e276e77fbc
commit cdee0e8819
138 changed files with 24551 additions and 3352 deletions


@@ -85,7 +85,7 @@
## 🎯 Recommendation for Your Situation
-**For Zuyderland CMDB GUI (20 users, corporate environment):**
+**For CMDB Insight (20 users, corporate environment):**
### Option A: **"Unsecure"** (Recommended) ⭐
@@ -124,7 +124,7 @@ acrName: 'zuyderlandcmdbacr-abc123' # With hash!
```yaml
# docker-compose.prod.acr.yml
-image: zuyderlandcmdbacr-abc123.azurecr.io/zuyderland-cmdb-gui/backend:latest
+image: zuyderlandcmdbacr-abc123.azurecr.io/cmdb-insight/backend:latest
```
```bash


@@ -2,7 +2,7 @@
## 🎯 Recommendation for Your Situation
-**For Zuyderland CMDB GUI (20 users, corporate tool, production):**
+**For CMDB Insight (20 users, corporate tool, production):**
### ✅ **RBAC Registry Permissions** (Recommended) ⭐
@@ -80,7 +80,7 @@
## 🔍 Analysis of Your Situation
**Your setup:**
-- 2 repositories: `zuyderland-cmdb-gui/backend` and `zuyderland-cmdb-gui/frontend`
+- 2 repositories: `cmdb-insight/backend` and `cmdb-insight/frontend`
- 20 users (small team)
- Corporate tool (internal users)
- Production environment
@@ -157,7 +157,7 @@ With RBAC Registry Permissions you can assign these roles:
## 🎯 My Recommendation
-**For Zuyderland CMDB GUI:**
+**For CMDB Insight:**
### ✅ **Choose RBAC Registry Permissions** ⭐


@@ -2,7 +2,7 @@
## 🎯 Recommendation for Your Situation
-**For Zuyderland CMDB GUI (20 users, corporate tool, production):**
+**For CMDB Insight (20 users, corporate tool, production):**
### ✅ **Basic SKU** (Recommended) ⭐
@@ -198,7 +198,7 @@
## 💡 My Recommendation
-**For Zuyderland CMDB GUI:**
+**For CMDB Insight:**
### ✅ **Choose Basic SKU** ⭐


@@ -80,7 +80,7 @@ variables:
# Change this to your ACR name
acrName: 'zuyderlandcmdbacr' # ← Your ACR name here
-repositoryName: 'zuyderland-cmdb-gui'
+repositoryName: 'cmdb-insight'
# Service connection name (created in the next step)
dockerRegistryServiceConnection: 'zuyderland-cmdb-acr-connection'
@@ -124,7 +124,7 @@ This connection gives Azure DevOps access to your ACR.
2. Click **Pipelines** (in the left menu)
3. Click **"New pipeline"** or **"Create Pipeline"**
4. Choose **"Azure Repos Git"** (or wherever your code lives)
-5. Select your repository: **"Zuyderland CMDB GUI"**
+5. Select your repository: **"CMDB Insight"**
6. Choose **"Existing Azure Pipelines YAML file"**
7. Select:
- **Branch**: `main`
@@ -141,8 +141,8 @@ This connection gives Azure DevOps access to your ACR.
1. Go to your **Container Registry** (`zuyderlandcmdbacr`)
2. Click **"Repositories"**
3. You should see:
-   - `zuyderland-cmdb-gui/backend`
-   - `zuyderland-cmdb-gui/frontend`
+   - `cmdb-insight/backend`
+   - `cmdb-insight/frontend`
4. Click a repository to see its tags (e.g. `latest`, `123`)
### Via Azure CLI:
@@ -151,10 +151,10 @@ This connection gives Azure DevOps access to your ACR.
az acr repository list --name zuyderlandcmdbacr
# List tags for backend
-az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/backend
+az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/backend
# List tags for frontend
-az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/frontend
+az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/frontend
```
---
@@ -166,8 +166,8 @@ az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmd
az acr login --name zuyderlandcmdbacr
# Pull images
-docker pull zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:latest
-docker pull zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:latest
+docker pull zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:latest
+docker pull zuyderlandcmdbacr.azurecr.io/cmdb-insight/frontend:latest
# Test run (with docker-compose)
docker-compose -f docker-compose.prod.acr.yml pull


@@ -1,11 +1,11 @@
# Azure App Service Deployment - Step-by-Step Guide 🚀
-Complete deployment guide for Zuyderland CMDB GUI to Azure App Service.
+Complete deployment guide for CMDB Insight to Azure App Service.
## 📋 Prerequisites
- Azure CLI installed and configured (`az login`)
-- Docker images in ACR: `zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest` and `frontend:latest`
+- Docker images in ACR: `zdlas.azurecr.io/cmdb-insight/backend:latest` and `frontend:latest`
- The Azure DevOps pipeline works (images are built automatically)
---
@@ -38,14 +38,14 @@ az webapp create \
--name cmdb-backend-prod \
--resource-group rg-cmdb-gui-prod \
--plan plan-cmdb-gui-prod \
-  --deployment-container-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest
+  --deployment-container-image-name zdlas.azurecr.io/cmdb-insight/backend:latest
# Frontend
az webapp create \
--name cmdb-frontend-prod \
--resource-group rg-cmdb-gui-prod \
--plan plan-cmdb-gui-prod \
-  --deployment-container-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest
+  --deployment-container-image-name zdlas.azurecr.io/cmdb-insight/frontend:latest
```
### Step 4: ACR Authentication
@@ -70,13 +70,13 @@ az role assignment create --assignee $FRONTEND_PRINCIPAL_ID --role AcrPull --sco
az webapp config container set \
--name cmdb-backend-prod \
--resource-group rg-cmdb-gui-prod \
-  --docker-custom-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest \
+  --docker-custom-image-name zdlas.azurecr.io/cmdb-insight/backend:latest \
--docker-registry-server-url https://zdlas.azurecr.io
az webapp config container set \
--name cmdb-frontend-prod \
--resource-group rg-cmdb-gui-prod \
-  --docker-custom-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest \
+  --docker-custom-image-name zdlas.azurecr.io/cmdb-insight/frontend:latest \
--docker-registry-server-url https://zdlas.azurecr.io
```


@@ -1,6 +1,6 @@
# Azure Container Registry - Docker Images Build & Push Guide
-This guide describes how to build Docker images and push them to Azure Container Registry (ACR) for the Zuyderland CMDB GUI application.
+This guide describes how to build Docker images and push them to Azure Container Registry (ACR) for the CMDB Insight application.
## 📋 Table of Contents
@@ -93,7 +93,7 @@ chmod +x scripts/build-and-push-azure.sh
**Environment Variables:**
```bash
export ACR_NAME="zuyderlandcmdbacr"
-export REPO_NAME="zuyderland-cmdb-gui"
+export REPO_NAME="cmdb-insight"
./scripts/build-and-push-azure.sh 1.0.0
```
@@ -106,7 +106,7 @@ az acr login --name zuyderlandcmdbacr
# Set variables
ACR_NAME="zuyderlandcmdbacr"
REGISTRY="${ACR_NAME}.azurecr.io"
-REPO_NAME="zuyderland-cmdb-gui"
+REPO_NAME="cmdb-insight"
VERSION="1.0.0"
# Build backend
@@ -157,7 +157,7 @@ Adjust the variables in `azure-pipelines.yml` to your settings:
```yaml
variables:
acrName: 'zuyderlandcmdbacr' # Your ACR name
-repositoryName: 'zuyderland-cmdb-gui'
+repositoryName: 'cmdb-insight'
dockerRegistryServiceConnection: 'zuyderland-cmdb-acr-connection'
```
@@ -187,7 +187,7 @@ version: '3.8'
services:
backend:
-    image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:latest
+    image: zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:latest
environment:
- NODE_ENV=production
- PORT=3001
@@ -206,7 +206,7 @@ services:
start_period: 40s
frontend:
-    image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:latest
+    image: zuyderlandcmdbacr.azurecr.io/cmdb-insight/frontend:latest
depends_on:
- backend
restart: unless-stopped
@@ -249,10 +249,10 @@ For production deployments, use specific versions:
```yaml
backend:
-  image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:v1.0.0
+  image: zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:v1.0.0
frontend:
-  image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:v1.0.0
+  image: zuyderlandcmdbacr.azurecr.io/cmdb-insight/frontend:v1.0.0
```
### Pull and Deploy
@@ -310,10 +310,10 @@ ACR has a retention policy for old images:
```bash
# Set a retention policy (e.g. keep the last 10 tags)
-az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/backend --orderby time_desc --top 10
+az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/backend --orderby time_desc --top 10
# Delete old tags (manually or via a policy)
-az acr repository delete --name zuyderlandcmdbacr --image zuyderland-cmdb-gui/backend:old-tag
+az acr repository delete --name zuyderlandcmdbacr --image cmdb-insight/backend:old-tag
```
### 4. Multi-Stage Builds
@@ -326,8 +326,8 @@ For faster builds, use the build cache:
```bash
# Build with cache
-docker build --cache-from zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:latest \
-  -t zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:new-tag \
+docker build --cache-from zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:latest \
+  -t zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:new-tag \
-f backend/Dockerfile.prod ./backend
```
@@ -356,7 +356,7 @@ cat ~/.docker/config.json
docker build --progress=plain -t test-image -f backend/Dockerfile.prod ./backend
# Check local images
-docker images | grep zuyderland-cmdb-gui
+docker images | grep cmdb-insight
```
### Push Errors
@@ -369,7 +369,7 @@ az acr check-health --name zuyderlandcmdbacr
az acr repository list --name zuyderlandcmdbacr
# View repository tags
-az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/backend
+az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/backend
```
### Azure DevOps Pipeline Errors


@@ -2,7 +2,7 @@
## Application Overview
-**Zuyderland CMDB GUI** - Web application for classifying and managing application components in Jira Assets.
+**CMDB Insight** - Web application for classifying and managing application components in Jira Assets.
### Technology Stack
- **Backend**: Node.js 20 (Express, TypeScript)


@@ -92,7 +92,7 @@ variables:
# Change this to your ACR name
acrName: 'zuyderlandcmdbacr' # ← Your ACR name here
-repositoryName: 'zuyderland-cmdb-gui'
+repositoryName: 'cmdb-insight'
# Change this to the service connection name you just created
dockerRegistryServiceConnection: 'zuyderland-cmdb-acr-connection' # ← Your service connection name
@@ -115,7 +115,7 @@ git push origin main
2. Click **Pipelines** (in the left menu)
3. Click **"New pipeline"** or **"Create Pipeline"**
4. Choose **"Azure Repos Git"** (or wherever your code lives)
-5. Select your repository: **"Zuyderland CMDB GUI"** (or your repo name)
+5. Select your repository: **"CMDB Insight"** (or your repo name)
6. Choose **"Existing Azure Pipelines YAML file"**
7. Select:
- **Branch**: `main`
@@ -142,8 +142,8 @@ The pipeline starts automatically and will:
**Expected output:**
```
-Backend Image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:123
-Frontend Image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:123
+Backend Image: zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:123
+Frontend Image: zuyderlandcmdbacr.azurecr.io/cmdb-insight/frontend:123
```
---
@@ -154,8 +154,8 @@ Frontend Image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:123
1. Go to your **Container Registry** (`zuyderlandcmdbacr`)
2. Click **"Repositories"**
3. You should see:
-   - `zuyderland-cmdb-gui/backend`
-   - `zuyderland-cmdb-gui/frontend`
+   - `cmdb-insight/backend`
+   - `cmdb-insight/frontend`
4. Click a repository to see its tags (e.g. `latest`, `123`)
**Via Azure CLI:**
@@ -164,10 +164,10 @@ Frontend Image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:123
az acr repository list --name zuyderlandcmdbacr
# List tags for backend
-az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/backend
+az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/backend
# List tags for frontend
-az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/frontend
+az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/frontend
```
---
@@ -229,7 +229,7 @@ Now that your images are in Azure Container Registry, you can deploy them:
# Create and configure the Web App
az webapp create --name cmdb-backend --resource-group rg-cmdb-gui --plan plan-cmdb-gui
az webapp config container set --name cmdb-backend --resource-group rg-cmdb-gui \
-  --docker-custom-image-name zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:latest \
+  --docker-custom-image-name zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:latest \
--docker-registry-server-url https://zuyderlandcmdbacr.azurecr.io
```
@@ -252,7 +252,7 @@ docker-compose -f docker-compose.prod.acr.yml up -d
az container create \
--resource-group rg-cmdb-gui \
--name cmdb-backend \
-  --image zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:latest \
+  --image zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:latest \
--registry-login-server zuyderlandcmdbacr.azurecr.io \
--registry-username <acr-username> \
--registry-password <acr-password>
@@ -282,7 +282,7 @@ az acr login --name zuyderlandcmdbacr
**Listing Images:**
```bash
az acr repository list --name zuyderlandcmdbacr
-az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/backend
+az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/backend
```
**Triggering the Pipeline Manually:**


@@ -19,11 +19,11 @@ If Azure DevOps cannot find your repository when creating a pipeline
3. **In the pipeline wizard:**
   - Search for the repository by its exact name
   - Or try a few variations:
-     - `Zuyderland CMDB GUI`
-     - `zuyderland-cmdb-gui`
+     - `CMDB Insight`
+     - `cmdb-insight`
     - `ZuyderlandCMDBGUI`
-**Your repository name should be:** `Zuyderland CMDB GUI` (with spaces)
+**Your repository name should be:** `CMDB Insight` (with spaces)
---
@@ -65,10 +65,10 @@ If Azure DevOps cannot find your repository when creating a pipeline
**Solution:**
1. **Check the project name** (top left)
-   - Should be: **"JiraAssetsCMDB"**
+   - Should be: **"cmdb"**
2. **If you are in a different project:**
   - Click the project dropdown (top left)
-   - Select **"JiraAssetsCMDB"**
+   - Select **"cmdb"**
3. **Try creating the pipeline again**
---
@@ -90,7 +90,7 @@ If Azure DevOps cannot find your repository when creating a pipeline
**Solution:**
1. **Go to Repos** (in the left menu)
2. **Check whether your repository is visible**
-   - You should see: `Zuyderland CMDB GUI` (or your repo name)
+   - You should see: `CMDB Insight` (or your repo name)
3. **If the repository does not exist:**
   - You first need to push the code to Azure DevOps
   - Or create the repository in Azure DevOps
@@ -111,12 +111,12 @@ If Azure DevOps cannot find your repository when creating a pipeline
1. **Check the repository URL:**
```
-git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/JiraAssetsCMDB/Zuyderland%20CMDB%20GUI
+git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/cmdb/cmdb-insight
```
2. **Push your code:**
```bash
-cd /Users/berthausmans/Documents/Development/zuyderland-cmdb-gui
+cd /Users/berthausmans/Documents/Development/cmdb-insight
git push azure main
```
@@ -131,13 +131,13 @@ If Azure DevOps cannot find your repository when creating a pipeline
1. **Go to Repos** (in the left menu)
2. **Click "New repository"** or the "+" icon
3. **Fill in:**
-   - **Repository name**: `Zuyderland CMDB GUI`
+   - **Repository name**: `CMDB Insight`
- **Type**: Git
4. **Create**
5. **Push your code:**
```bash
-cd /Users/berthausmans/Documents/Development/zuyderland-cmdb-gui
-git remote add azure git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/JiraAssetsCMDB/Zuyderland%20CMDB%20GUI
+cd /Users/berthausmans/Documents/Development/cmdb-insight
+git remote add azure git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/cmdb/cmdb-insight
git push azure main
```
@@ -150,8 +150,8 @@ If Azure DevOps cannot find your repository when creating a pipeline
1. **In the pipeline wizard:**
   - Choose **"Other Git"** (instead of "Azure Repos Git")
2. **Fill in:**
-   - **Repository URL**: `git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/JiraAssetsCMDB/Zuyderland%20CMDB%20GUI`
-   - Or HTTPS: `https://ZuyderlandMedischCentrum@dev.azure.com/ZuyderlandMedischCentrum/JiraAssetsCMDB/_git/Zuyderland%20CMDB%20GUI`
+   - **Repository URL**: `git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/cmdb/cmdb-insight`
+   - Or HTTPS: `https://ZuyderlandMedischCentrum@dev.azure.com/ZuyderlandMedischCentrum/cmdb/_git/cmdb-insight`
3. **Branch**: `main`
4. **Continue**
@@ -166,21 +166,21 @@ If Azure DevOps cannot find your repository when creating a pipeline
### 1. Check Whether the Repository Exists
1. Go to **Repos** (in the left menu)
-2. Check that you see `Zuyderland CMDB GUI`
+2. Check that you see `CMDB Insight`
3. Click it and check that your code is there
### 2. Check Repository URL
**In Terminal:**
```bash
-cd /Users/berthausmans/Documents/Development/zuyderland-cmdb-gui
+cd /Users/berthausmans/Documents/Development/cmdb-insight
git remote -v
```
**You should see:**
```
-azure git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/JiraAssetsCMDB/Zuyderland%20CMDB%20GUI (fetch)
-azure git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/JiraAssetsCMDB/Zuyderland%20CMDB%20GUI (push)
+azure git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/cmdb/cmdb-insight (fetch)
+azure git@ssh.dev.azure.com:v3/ZuyderlandMedischCentrum/cmdb/cmdb-insight (push)
```
### 3. Check Whether the Code Has Been Pushed
@@ -206,7 +206,7 @@ git push azure main
**Try in this order:**
1. ✅ **Check Repos** - Go to Repos and check that your repository exists
-2. ✅ **Check the project name** - Make sure you are in the "JiraAssetsCMDB" project
+2. ✅ **Check the project name** - Make sure you are in the "cmdb" project
3. ✅ **Refresh the page** - Sometimes a simple refresh helps
4. ✅ **Push code** - If the repository is empty, push your code
5. ✅ **Use "Other Git"** - As a workaround
@@ -223,7 +223,7 @@ git push azure main
2. **If the repository is empty:**
```bash
-cd /Users/berthausmans/Documents/Development/zuyderland-cmdb-gui
+cd /Users/berthausmans/Documents/Development/cmdb-insight
git push azure main
```
@@ -242,7 +242,7 @@ git push azure main
If nothing works:
-1. **Check that you are in the right project** (JiraAssetsCMDB)
+1. **Check that you are in the right project** (cmdb)
2. **Check that the repository exists** (Repos → Files)
3. **Push your code** to Azure DevOps
4. **Use "Other Git"** as a workaround


@@ -2,7 +2,7 @@
## 🎯 At a Glance
-**Application**: Zuyderland CMDB GUI (Node.js + React web app)
+**Application**: CMDB Insight (Node.js + React web app)
**Goal**: Hosting in Azure App Service
**Users**: Max. 20 colleagues
**Estimated cost**: €18-39/month (Basic tier)


@@ -171,11 +171,11 @@ Subject: Azure Container Registry request - CMDB GUI Project
Dear IT Team,
-For the Zuyderland CMDB GUI project, we need an Azure Container Registry
+For the CMDB Insight project, we need an Azure Container Registry
for hosting Docker images.
Details:
-- Project: Zuyderland CMDB GUI
+- Project: CMDB Insight
- Registry name: zuyderlandcmdbacr (or per your naming convention)
- SKU: Basic (for development/production)
- Resource Group: rg-cmdb-gui


@@ -2,7 +2,7 @@
## 🎯 Recommendation for Your Situation
-**For Zuyderland CMDB GUI with Azure Container Registry:**
+**For CMDB Insight with Azure Container Registry:**
### ✅ **Service Principal** (Recommended) ⭐
@@ -184,7 +184,7 @@ When you choose **Service Principal**:
## 💡 My Recommendation
-**For Zuyderland CMDB GUI:**
+**For CMDB Insight:**
### ✅ **Choose Service Principal** ⭐

docs/DATA-INTEGRITY-PLAN.md (new file, 313 lines)

@@ -0,0 +1,313 @@
# Data Integrity Plan - Preventing Broken References
## Problem
Broken references occur when `attribute_values.reference_object_id` points to objects that do not exist in the `objects` table. This can happen when:
1. Objects are deleted from Jira but their references remain
2. The sync is incomplete (not all related objects have been synchronized)
3. Objects are removed from the cache but their references remain
4. Race conditions occur during sync
## Strategy: Multi-layer Approach
### Layer 1: Prevention During Sync (Highest Priority)
#### 1.1 Referenced Object Validation During Sync
**Goal**: Ensure all referenced objects exist before storing references
**Implementation**:
- Add validation in `extractAndStoreRelations()` and `normalizeObjectWithDb()`
- For each reference: check whether the target object exists in the cache
- If the target does not exist: fetch the object from Jira first
- If the object does not exist in Jira: do NOT store the reference (or mark it as "missing")
**Code location**: `backend/src/services/normalizedCacheStore.ts`
#### 1.2 Deep Sync for Referenced Objects
**Goal**: Automatically sync all referenced objects during object sync
**Implementation**:
- When an object is synced, identify all referenced objects
- Queue these referenced objects for sync (if they have not been synced recently)
- Run the sync in dependency order (sync referenced objects first)
**Code location**: `backend/src/services/syncEngine.ts`
#### 1.3 Transactional Reference Storage
**Goal**: Ensure the object and its references are stored atomically
**Implementation**:
- Use database transactions for object + references
- Roll back if a referenced object does not exist
- Validate all references before committing
**Code location**: `backend/src/services/normalizedCacheStore.ts`
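The atomic store-or-rollback behaviour described in 1.3 can be sketched without a real database; here a `Map` snapshot stands in for BEGIN/ROLLBACK/COMMIT on the `objects` + `attribute_values` tables, and all names are illustrative, not the project's actual API:

```typescript
// Hypothetical reference shape: one attribute value pointing at another object.
type Ref = { attributeId: string; referenceObjectId: string };

// Validates every reference before "commit"; any missing target rolls back the
// whole write, so the store never ends up with a dangling reference_object_id.
function storeObjectWithReferences(
  store: Map<string, Ref[]>, // stands in for the objects + attribute_values tables
  objectId: string,
  refs: Ref[]
): boolean {
  const snapshot = new Map(store); // "BEGIN": remember state for rollback
  store.set(objectId, []);
  for (const ref of refs) {
    if (!store.has(ref.referenceObjectId)) {
      // "ROLLBACK": referenced object missing, restore the snapshot
      store.clear();
      for (const [k, v] of snapshot) store.set(k, v);
      return false;
    }
    store.get(objectId)!.push(ref);
  }
  return true; // "COMMIT"
}
```

In the real service the same shape would use the PostgreSQL client's `BEGIN`/`COMMIT`/`ROLLBACK` instead of a snapshot.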
### Layer 2: Database Constraints (Where Possible)
#### 2.1 Foreign Key Constraints for References
**Goal**: Database-level validation of references
**Current situation**:
- `attribute_values.reference_object_id` has NO foreign key constraint
- This is intentional, because referenced objects may not be in the cache
**Option A: Soft Foreign Key (Recommended)**
- Add a CHECK constraint that validates that reference_object_id is NULL OR exists in objects
- This requires a database trigger or function
**Option B: Staging Table**
- Store new references in a staging table first
- Validate and migrate only valid references to attribute_values
- Clean up the staging table periodically
**Implementation**: Database migration + trigger/function
#### 2.2 Database Triggers for Cleanup
**Goal**: Automatic cleanup when objects are deleted
**Implementation**:
- Trigger on DELETE from objects
- Automatically delete or nullify all attribute_values with reference_object_id = deleted.id
- Log cleanup actions for audit
**Code location**: Database migration
### Layer 3: Validation and Cleanup Procedures
#### 3.1 Periodic Validation Job
**Goal**: Detect and repair broken references automatically
**Implementation**:
- Daily/nightly job that detects all broken references
- Try to fetch the referenced objects from Jira
- If the object exists: sync it and repair the reference
- If the object does not exist: delete the reference (or mark it as "deleted")
**Code location**: `backend/src/services/dataIntegrityService.ts` (new)
**Scheduling**: Via cron job or scheduled task
#### 3.2 Manual Cleanup Endpoint
**Goal**: Admin tool for repairing broken references
**Implementation**:
- POST `/api/data-validation/repair-broken-references`
- Options:
  - `mode: 'delete'` - Delete broken references
  - `mode: 'fetch'` - Try to fetch the objects from Jira
  - `mode: 'dry-run'` - Show what would happen without making changes
**Code location**: `backend/src/routes/dataValidation.ts`
#### 3.3 Reference Validation During Object Retrieval
**Goal**: Validate and repair references when objects are retrieved
**Implementation**:
- In `reconstructObject()`: check all references
- If a reference is broken: try to fetch it from Jira
- If the fetch succeeds: update the cache and the reference
- If the fetch fails: mark it as "missing" in the response
**Code location**: `backend/src/services/normalizedCacheStore.ts`
### Layer 4: Sync Improvements
#### 4.1 Dependency-Aware Sync Order
**Goal**: Sync objects in the right order (dependencies first)
**Implementation**:
- Analyze the schema to build a dependency graph
- Sync object types in dependency order
- For example: sync "Application Function" before "Application Component"
**Code location**: `backend/src/services/syncEngine.ts`
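The dependency order in 4.1 can be computed with a topological sort over the schema's reference attributes; a minimal sketch (the `TypeDep` shape and the type names are illustrative assumptions, not the project's schema model):

```typescript
// Hypothetical shape: each object type lists the types it references.
type TypeDep = { name: string; dependsOn: string[] };

// Kahn's algorithm: returns type names so that every type appears after
// the types it references (dependencies first). Throws on cycles, which
// a real sync engine would need a fallback for.
function syncOrder(types: TypeDep[]): string[] {
  const inDegree = new Map<string, number>(types.map(t => [t.name, 0] as [string, number]));
  const dependents = new Map<string, string[]>(); // dep -> types that reference it
  for (const t of types) {
    for (const dep of t.dependsOn) {
      inDegree.set(t.name, (inDegree.get(t.name) ?? 0) + 1);
      dependents.set(dep, [...(dependents.get(dep) ?? []), t.name]);
    }
  }
  const queue = types.filter(t => inDegree.get(t.name) === 0).map(t => t.name);
  const order: string[] = [];
  while (queue.length > 0) {
    const name = queue.shift()!;
    order.push(name);
    for (const next of dependents.get(name) ?? []) {
      const remaining = inDegree.get(next)! - 1;
      inDegree.set(next, remaining);
      if (remaining === 0) queue.push(next);
    }
  }
  if (order.length !== types.length) {
    throw new Error('Cycle in schema references; fall back to a flat sync order');
  }
  return order;
}
```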
#### 4.2 Batch Sync with Reference Resolution
**Goal**: Sync batches of objects and resolve all references
**Implementation**:
- Collect all referenced object IDs during a batch sync
- Fetch all referenced objects in parallel
- Validate all references before committing the batch
**Code location**: `backend/src/services/syncEngine.ts`
#### 4.3 Incremental Sync with Deletion Detection
**Goal**: Detect deleted objects during incremental sync
**Implementation**:
- Compare cached objects with Jira objects
- Identify objects that exist in the cache but not in Jira
- Delete these objects (the cascade removes their references)
**Code location**: `backend/src/services/syncEngine.ts`
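The cache-versus-Jira comparison in 4.3 boils down to a set difference over object IDs; a minimal sketch (the function name is illustrative):

```typescript
// Returns the IDs that exist in the cache but were not seen in Jira's
// current object listing; these are candidates for deletion, which
// cascades to their stored references.
function detectDeletedObjects(cachedIds: string[], jiraIds: string[]): string[] {
  const seen = new Set(jiraIds);
  return cachedIds.filter(id => !seen.has(id));
}
```

Using a `Set` keeps the comparison O(n) even for large caches; the result would be handed to the delete path rather than removed inline during the sync loop.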
### Layer 5: Monitoring and Alerting
#### 5.1 Metrics and Dashboard
**Goal**: Monitor data integrity in real time
**Implementation**:
- Track the broken references count over time
- Alert when the count rises above a threshold
- Dashboard chart: broken references trend
**Code location**: `backend/src/routes/dataValidation.ts` + dashboard
#### 5.2 Sync Health Checks
**Goal**: Validate data integrity after every sync
**Implementation**:
- After a sync: check for broken references
- Log a warning if broken references are found
- Optional: auto-repair during sync
**Code location**: `backend/src/services/syncEngine.ts`
#### 5.3 Audit Logging
**Goal**: Track all data integrity actions
**Implementation**:
- Log when broken references are repaired
- Log when objects are deleted
- Log when references are added/removed
**Code location**: Logger service
## Implementation Priorities
### Phase 1: Quick Wins (Week 1)
1. ✅ Manual cleanup endpoint (`/api/data-validation/repair-broken-references`)
2. ✅ Periodic validation job (daily)
3. ✅ Metrics in dashboard
### Phase 2: Prevention (Weeks 2-3)
4. Referenced object validation during sync
5. Deep sync for referenced objects
6. Transactional reference storage
### Phase 3: Database Level (Week 4)
7. Database triggers for cleanup
8. Soft foreign key constraints (where possible)
### Phase 4: Advanced (Week 5+)
9. Dependency-aware sync order
10. Incremental sync with deletion detection
11. Auto-repair during object retrieval
## Technical Details
### Referenced Object Validation Pattern
```typescript
async function validateAndFetchReference(
  referenceObjectId: string,
  targetType?: string
): Promise<{ exists: boolean; object?: CMDBObject }> {
  // 1. Check cache first
  const cached = await cacheStore.getObjectById(referenceObjectId);
  if (cached) return { exists: true, object: cached };

  // 2. Try to fetch from Jira
  try {
    const jiraObj = await jiraAssetsClient.getObject(referenceObjectId);
    if (jiraObj) {
      // Parse and cache
      const parsed = jiraAssetsClient.parseObject(jiraObj);
      if (parsed) {
        await cacheStore.upsertObject(parsed._objectType, parsed);
        return { exists: true, object: parsed };
      }
    }
  } catch (error) {
    if (error instanceof JiraObjectNotFoundError) {
      return { exists: false };
    }
    throw error;
  }
  return { exists: false };
}
```
### Cleanup Procedure
```typescript
async function repairBrokenReferences(mode: 'delete' | 'fetch' | 'dry-run'): Promise<{
  total: number;
  repaired: number;
  deleted: number;
  failed: number;
}> {
  const brokenRefs = await cacheStore.getBrokenReferences(10000, 0);
  let repaired = 0;
  let deleted = 0;
  let failed = 0;

  for (const ref of brokenRefs) {
    if (mode === 'fetch') {
      try {
        // Try to fetch from Jira
        const result = await validateAndFetchReference(ref.reference_object_id);
        if (result.exists) {
          repaired++;
        } else {
          // Object doesn't exist, delete reference
          await cacheStore.deleteAttributeValue(ref.object_id, ref.attribute_id);
          deleted++;
        }
      } catch {
        // Jira or cache error: count it and continue with the next reference
        failed++;
      }
    } else if (mode === 'delete') {
      await cacheStore.deleteAttributeValue(ref.object_id, ref.attribute_id);
      deleted++;
    }
    // dry-run: just count
  }

  return { total: brokenRefs.length, repaired, deleted, failed };
}
```
## Success Criteria
- ✅ Broken references count < 1% of the total number of references
- ✅ No new broken references during normal sync
- ✅ Auto-repair within 24 hours of detection
- ✅ Monitoring dashboard shows real-time integrity status
- ✅ Sync performance remains acceptable (< 10% overhead)
## Monitoring
### Key Metrics
- `broken_references_count` - Total number of broken references
- `broken_references_rate` - Percentage of the total number of references
- `reference_repair_success_rate` - Percentage of successfully repaired references
- `sync_integrity_check_duration` - Time taken by the integrity check
- `objects_with_broken_refs` - Number of objects with broken references
### Alerts
- 🔴 Critical: > 5% broken references
- 🟡 Warning: > 1% broken references
- 🟢 Info: Broken references count changed
## Rollout Plan
1. **Week 1**: Implement cleanup tools and monitoring
2. **Week 2**: Repair existing broken references
3. **Week 3**: Implement preventive measures
4. **Week 4**: Test and monitor
5. **Week 5**: Database-level constraints (optional)
## Risks and Mitigation
| Risk | Impact | Mitigation |
|--------|--------|-----------|
| Sync performance degradation | High | Batch processing, parallel fetching, caching |
| Database locks during cleanup | Medium | Off-peak scheduling, batch size limits |
| False positives in validation | Low | Dry-run mode, manual review |
| Jira API rate limits | Medium | Rate limiting, exponential backoff |
## Documentation Updates
- [ ] Update sync engine documentation
- [ ] Update data validation dashboard documentation
- [ ] Create a runbook for broken references cleanup
- [ ] Update the API documentation for the new endpoints


@@ -0,0 +1,211 @@
# Database-Driven Schema Implementation Plan
## Overview
This plan describes the migration from static schema files to a fully database-driven approach in which:
1. The schema is fetched dynamically from the Jira Assets API
2. The schema is stored in the PostgreSQL database
3. The data model and data validation pages are built from the database
4. TypeScript types are generated from the database (manually)
## Architecture
```
┌──────────────────┐
│ Jira Assets API  │  (Authoritative Source)
└────────┬─────────┘
         │ Schema Discovery
         ▼
┌──────────────────┐
│ Schema Discovery │  (Jira API → Database)
│     Service      │
└────────┬─────────┘
         │ Store in DB
         ▼
┌──────────────────┐
│  PostgreSQL DB   │  (Cached Schema)
│  - object_types  │
│  - attributes    │
└────────┬─────────┘
         │ Serve to Frontend
         ▼
┌──────────────────┐
│  API Endpoints   │  (/api/schema)
└────────┬─────────┘
         │ Code Generation
         ▼
┌──────────────────┐
│ TypeScript Types │  (Manually generated)
└──────────────────┘
```
## Database Schema
The database already has the required tables in `normalized-schema.ts`:
- `object_types` - Object type definitions
- `attributes` - Attribute definitions per object type
**No extra tables needed!** We use the existing structure.
## Implementation Steps
### Step 1: Adapt the Schema Discovery Service ✅
**Current situation:**
- `schemaDiscoveryService` reads its data from the static `OBJECT_TYPES` file
**New situation:**
- `schemaDiscoveryService` fetches the schema directly from the Jira Assets API
- Uses the `JiraSchemaFetcher` logic (from `generate-schema.ts`)
- Stores the schema in the database tables
**Files:**
- `backend/src/services/schemaDiscoveryService.ts` - Adapt to make API calls
### Step 2: Schema Cache Service ✅
**New service:**
- In-memory cache with a 5-minute TTL
- Cache invalidation on schema updates
- Fast responses for the `/api/schema` endpoint
**Files:**
- `backend/src/services/schemaCacheService.ts` - New file
### Stap 3: Schema API Endpoint Migreren ✅
**Huidige situatie:**
- `/api/schema` endpoint leest van statische `OBJECT_TYPES` file
**Nieuwe situatie:**
- `/api/schema` endpoint leest van database (via cache)
- Gebruikt `schemaCacheService` voor performance
**Bestanden:**
- `backend/src/routes/schema.ts` - Aanpassen om database te gebruiken
### Stap 4: Code Generation Script ✅
**Nieuwe functionaliteit:**
- Script dat database schema → TypeScript types genereert
- Handmatig uitvoerbaar via CLI command
- Genereert: `jira-schema.ts`, `jira-types.ts`
**Bestanden:**
- `backend/scripts/generate-types-from-db.ts` - Nieuw bestand
- `package.json` - NPM script toevoegen
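De kern van zo'n generator kan als volgt geschetst worden. De rij-structuur (`AttributeRow`) en de type-mapping zijn aannames ter illustratie, niet de werkelijke database-kolommen:

```typescript
// Illustratieve schets: database-rijen omzetten naar TypeScript interface source.
interface AttributeRow {
  object_type_name: string;
  attr_name: string;
  data_type: string;
}

// Aangenomen mapping van Jira attribute types naar TypeScript types.
const TYPE_MAP: Record<string, string> = {
  Text: "string",
  Integer: "number",
  Boolean: "boolean",
  DateTime: "string",
};

function generateInterface(typeName: string, rows: AttributeRow[]): string {
  const fields = rows
    .filter((r) => r.object_type_name === typeName)
    .map((r) => `  ${r.attr_name}: ${TYPE_MAP[r.data_type] ?? "unknown"};`)
    .join("\n");
  return `export interface ${typeName} {\n${fields}\n}`;
}
```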
### Stap 5: Datavalidatie Pagina Migreren ✅
**Huidige situatie:**
- Gebruikt mogelijk statische schema files
**Nieuwe situatie:**
- Volledig database-driven
- Gebruikt `schemaDiscoveryService` voor schema data
**Bestanden:**
- `backend/src/routes/dataValidation.ts` - Controleren en aanpassen indien nodig
### Stap 6: Database Indexes ✅
**Toevoegen:**
- Indexes voor snelle schema queries
- Performance optimalisatie
**Bestanden:**
- `backend/src/services/database/normalized-schema.ts` - Indexes toevoegen
### Stap 7: CLI Command voor Schema Discovery ✅
**Nieuwe functionaliteit:**
- Handmatige trigger voor schema discovery
- Bijvoorbeeld: `npm run discover-schema`
**Bestanden:**
- `backend/scripts/discover-schema.ts` - Nieuw bestand
- `package.json` - NPM script toevoegen
## API Endpoints
### GET /api/schema
**Huidig:** Leest van statische files
**Nieuw:** Leest van database (via cache)
**Response format:** Ongewijzigd (backward compatible)
### POST /api/schema/discover (Nieuw)
**Functionaliteit:** Handmatige trigger voor schema discovery
**Gebruik:** Admin endpoint voor handmatige schema refresh
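De handler-logica van dit endpoint kan ruwweg zo geschetst worden, los van de HTTP-laag. Alle namen zijn aannames:

```typescript
// Geschetste handler-logica voor POST /api/schema/discover (namen zijn aannames).
// Discovery draait eerst, daarna wordt de cache geïnvalideerd zodat
// GET /api/schema de nieuwe data uit de database leest.
interface DiscoveryResult {
  objectTypes: number;
  attributes: number;
}

async function handleSchemaDiscover(
  discovery: { run(): Promise<DiscoveryResult> },
  cache: { invalidate(): void },
): Promise<{ ok: boolean } & DiscoveryResult> {
  const result = await discovery.run(); // Jira API → database
  cache.invalidate();                   // forceer verse reads uit de database
  return { ok: true, ...result };
}
```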
## Code Generation
### Script: `generate-types-from-db.ts`
**Input:** Database schema (object_types, attributes)
**Output:**
- `backend/src/generated/jira-schema.ts`
- `backend/src/generated/jira-types.ts`
**Uitvoering:** Handmatig via `npm run generate-types`
## Migratie Strategie
1. **Parallelle implementatie:** Nieuwe code naast oude code
2. **Feature flag:** Optioneel om tussen oude/nieuwe aanpak te switchen
3. **Testing:** Uitgebreide tests voor schema discovery
4. **Handmatige migratie:** Breaking changes worden handmatig opgelost
## Performance Overwegingen
- **In-memory cache:** 5 minuten TTL voor schema endpoints
- **Database indexes:** Voor snelle queries op object_types en attributes
- **Lazy loading:** Schema wordt alleen geladen wanneer nodig
## Breaking Changes
- **Geen fallback:** Als database schema niet beschikbaar is, werkt niets
- **TypeScript errors:** Bij schema wijzigingen ontstaan compile errors
- **Handmatige fix:** Developers lossen errors handmatig op
## Testing Checklist
- [ ] Schema discovery van Jira API werkt
- [ ] Schema wordt correct opgeslagen in database
- [ ] `/api/schema` endpoint retourneert database data
- [ ] Cache werkt correct (TTL, invalidation)
- [ ] Code generation script werkt
- [ ] Datamodel pagina toont database data
- [ ] Datavalidatie pagina toont database data
- [ ] Handmatige schema discovery trigger werkt
## Rollout Plan
1. **Fase 1:** Schema discovery service aanpassen (API calls)
2. **Fase 2:** Schema cache service implementeren
3. **Fase 3:** API endpoints migreren
4. **Fase 4:** Code generation script maken
5. **Fase 5:** Testing en validatie
6. **Fase 6:** Oude statische files verwijderen (na handmatige migratie)
## Risico's en Mitigatie
| Risico | Impact | Mitigatie |
|--------|--------|-----------|
| Jira API niet beschikbaar | Hoog | Geen fallback - downtime acceptabel |
| Schema wijzigingen | Medium | TypeScript errors - handmatig oplossen |
| Performance issues | Laag | Cache + indexes |
| Data migratie fouten | Medium | Uitgebreide tests |
## Success Criteria
✅ Schema wordt dynamisch opgehaald van Jira API
✅ Schema wordt opgeslagen in database
✅ Datamodel pagina toont database data
✅ Datavalidatie pagina toont database data
✅ Code generation script werkt
✅ Handmatige schema discovery werkt
✅ Performance is acceptabel (< 1s voor schema endpoint)

View File

@@ -0,0 +1,197 @@
# Database Normalisatie Voorstel
## Huidige Probleem
De huidige database structuur heeft duplicatie en is niet goed genormaliseerd:
1. **`object_types`** tabel bevat:
- `jira_type_id`, `type_name`, `display_name`, `description`, `sync_priority`, `object_count`
2. **`configured_object_types`** tabel bevat:
- `schema_id`, `schema_name`, `object_type_id`, `object_type_name`, `display_name`, `description`, `object_count`, `enabled`
**Problemen:**
- Duplicatie van `display_name`, `description`, `object_count`
- Geen expliciete relatie tussen schemas en object types
- `schema_name` wordt opgeslagen in elke object type row (niet genormaliseerd)
- Verwarring tussen `object_type_name` en `type_name`
- Twee tabellen die dezelfde informatie bevatten
## Voorgestelde Genormaliseerde Structuur
### 1. `schemas` Tabel
```sql
CREATE TABLE IF NOT EXISTS schemas (
id SERIAL PRIMARY KEY, -- Auto-increment PK
jira_schema_id TEXT NOT NULL UNIQUE, -- Jira schema ID (bijv. "6", "8")
name TEXT NOT NULL, -- Schema naam (bijv. "Application Management")
description TEXT, -- Optionele beschrijving
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
```
**Doel:** Centrale opslag van alle Jira Assets schemas.
### 2. `object_types` Tabel (Aangepast)
```sql
CREATE TABLE IF NOT EXISTS object_types (
id SERIAL PRIMARY KEY, -- Auto-increment PK
schema_id INTEGER NOT NULL REFERENCES schemas(id) ON DELETE CASCADE,
jira_type_id INTEGER NOT NULL, -- Jira object type ID
type_name TEXT NOT NULL UNIQUE, -- PascalCase type name (bijv. "ApplicationComponent")
display_name TEXT NOT NULL, -- Original Jira name (bijv. "Application Component")
description TEXT, -- Optionele beschrijving
sync_priority INTEGER DEFAULT 0, -- Sync prioriteit
object_count INTEGER DEFAULT 0, -- Aantal objecten in Jira
enabled BOOLEAN NOT NULL DEFAULT FALSE, -- KEY CHANGE: enabled flag hier!
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
UNIQUE(schema_id, jira_type_id) -- Een object type kan maar 1x per schema voorkomen
);
```
**Doel:** Alle object types met hun schema relatie en enabled status.
### 3. `attributes` Tabel (Ongewijzigd)
```sql
CREATE TABLE IF NOT EXISTS attributes (
id SERIAL PRIMARY KEY,
jira_attr_id INTEGER NOT NULL,
object_type_name TEXT NOT NULL REFERENCES object_types(type_name) ON DELETE CASCADE,
-- ... rest blijft hetzelfde
);
```
## Voordelen van Genormaliseerde Structuur
1. **Geen Duplicatie:**
- Schema informatie staat maar 1x in `schemas` tabel
- Object type informatie staat maar 1x in `object_types` tabel
- `enabled` flag staat direct bij object type
2. **Duidelijke Relaties:**
- Foreign key `schema_id` maakt relatie expliciet
- Database constraints zorgen voor data integriteit
3. **Eenvoudigere Queries:**
```sql
-- Alle enabled object types met hun schema
SELECT ot.*, s.name as schema_name
FROM object_types ot
JOIN schemas s ON ot.schema_id = s.id
WHERE ot.enabled = TRUE;
```
4. **Minder Verwarring:**
- Geen `object_type_name` vs `type_name` meer
- Geen `configured_object_types` vs `object_types` meer
- Eén bron van waarheid
5. **Eenvoudigere Migratie:**
- `configured_object_types` kan worden verwijderd
- Data kan worden gemigreerd naar nieuwe structuur
## Migratie Plan
1. **Nieuwe Tabellen Aanmaken:**
- `schemas` tabel
- `object_types` tabel aanpassen (toevoegen `schema_id`, `enabled`)
2. **Data Migreren:**
- Unieke schemas uit `configured_object_types` naar `schemas`
- Object types uit `configured_object_types` naar `object_types` met juiste `schema_id` FK
- `enabled` flag overnemen
3. **Foreign Keys Aanpassen:**
- `attributes.object_type_name` blijft verwijzen naar `object_types.type_name`
- `objects.object_type_name` blijft verwijzen naar `object_types.type_name`
4. **Code Aanpassen:**
- `schemaConfigurationService` aanpassen voor nieuwe structuur
- `schemaDiscoveryService` aanpassen voor nieuwe structuur
- `schemaCacheService` aanpassen voor JOIN met `schemas`
5. **Oude Tabel Verwijderen:**
- `configured_object_types` tabel verwijderen na migratie
## Impact op Bestaande Code
### Services die aangepast moeten worden:
1. **`schemaConfigurationService.ts`:**
- `discoverAndStoreSchemasAndObjectTypes()` - eerst schemas opslaan, dan object types
- `getConfiguredObjectTypes()` - JOIN met schemas
- `setObjectTypeEnabled()` - direct op object_types.enabled
- `getEnabledObjectTypes()` - WHERE enabled = TRUE
2. **`schemaDiscoveryService.ts`:**
- Moet ook schemas en object types in nieuwe structuur opslaan
- Moet `enabled` flag respecteren
3. **`schemaCacheService.ts`:**
- `fetchFromDatabase()` - JOIN met schemas voor schema naam
- Filter op `object_types.enabled = TRUE`
4. **`syncEngine.ts`:**
- Gebruikt al `getEnabledObjectTypes()` - blijft werken na aanpassing service
## SQL Migratie Script
```sql
-- Stap 1: Maak schemas tabel
CREATE TABLE IF NOT EXISTS schemas (
id SERIAL PRIMARY KEY,
jira_schema_id TEXT NOT NULL UNIQUE,
name TEXT NOT NULL,
description TEXT,
discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
-- Stap 2: Voeg schema_id en enabled toe aan object_types
ALTER TABLE object_types
ADD COLUMN IF NOT EXISTS schema_id INTEGER REFERENCES schemas(id) ON DELETE CASCADE,
ADD COLUMN IF NOT EXISTS enabled BOOLEAN NOT NULL DEFAULT FALSE;
-- Stap 3: Migreer data
-- Eerst: unieke schemas
INSERT INTO schemas (jira_schema_id, name, description, discovered_at, updated_at)
SELECT DISTINCT
schema_id as jira_schema_id,
schema_name as name,
NULL as description,
MIN(discovered_at) as discovered_at,
MAX(updated_at) as updated_at
FROM configured_object_types
GROUP BY schema_id, schema_name
ON CONFLICT(jira_schema_id) DO NOTHING;
-- Dan: object types met schema_id FK
UPDATE object_types ot
SET
schema_id = s.id,
enabled = COALESCE(
(SELECT enabled FROM configured_object_types cot
WHERE cot.object_type_id = ot.jira_type_id
AND cot.schema_id = s.jira_schema_id
LIMIT 1),
FALSE
)
FROM schemas s
WHERE EXISTS (
SELECT 1 FROM configured_object_types cot
WHERE cot.object_type_id = ot.jira_type_id
AND cot.schema_id = s.jira_schema_id
);
-- Stap 4: Verwijder oude tabel (na verificatie)
-- DROP TABLE IF EXISTS configured_object_types;
```
## Conclusie
De genormaliseerde structuur is veel cleaner, elimineert duplicatie, en maakt queries eenvoudiger. De `enabled` flag staat nu direct bij het object type, wat logischer is.

View File

@@ -0,0 +1,310 @@
# Complete Database Reset Guide
This guide explains how to force a complete reset of the database so that the cache/data is built completely from scratch.
## Overview
There are three main approaches to resetting the database:
1. **API-based reset** (Recommended) - Clears cache via API and triggers rebuild
2. **Database volume reset** - Completely removes PostgreSQL volume and recreates
3. **Manual SQL reset** - Direct database commands to clear all data
## Option 1: API-Based Reset (Recommended)
This is the cleanest approach as it uses the application's built-in endpoints.
### Using the Script
```bash
# Run the automated reset script
./scripts/reset-and-rebuild.sh
```
The script will:
1. Clear all cached data via `DELETE /api/cache/clear`
2. Trigger a full sync via `POST /api/cache/sync`
3. Monitor the process
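The same clear-then-sync sequence can be sketched in TypeScript, with the HTTP client injected so the order of calls is testable. The endpoint paths follow this guide; everything else (names, response shapes) is an assumption:

```typescript
// Rough sketch of the reset sequence — not the actual script implementation.
type HttpCall = (
  url: string,
  init: { method: string; headers: Record<string, string> },
) => Promise<unknown>;

async function resetAndRebuild(baseUrl: string, token: string, call: HttpCall): Promise<string[]> {
  const headers = {
    "Content-Type": "application/json",
    Authorization: `Bearer ${token}`,
  };
  const performed: string[] = [];

  // Step 1: clear cached objects, attribute values and relations
  await call(`${baseUrl}/api/cache/clear`, { method: "DELETE", headers });
  performed.push("cache cleared");

  // Step 2: trigger a full sync (runs asynchronously on the backend)
  await call(`${baseUrl}/api/cache/sync`, { method: "POST", headers });
  performed.push("sync triggered");

  return performed;
}
```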
### Manual API Calls
If you prefer to do it manually:
```bash
# Set your backend URL
BACKEND_URL="http://localhost:3001"
API_URL="$BACKEND_URL/api"
# Step 1: Clear all cache
curl -X DELETE "$API_URL/cache/clear" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN"
# Step 2: Trigger full sync
curl -X POST "$API_URL/cache/sync" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN"
```
**Note:** You need to be authenticated. Either:
- Use a valid JWT token from your session
- Or authenticate via the UI first and copy the token from browser dev tools
### Via the UI
1. Navigate to **Settings → Cache Management**
2. Click **"Clear All Cache"** button
3. Click **"Full Sync"** button
4. Monitor progress in the same page
## Option 2: Database Volume Reset (Nuclear Option)
This completely removes the PostgreSQL database and recreates it from scratch.
### Using the Reset Script
```bash
# Run the PostgreSQL reset script
./scripts/reset-postgres.sh
```
This will:
1. Stop containers
2. Remove the PostgreSQL volume (deletes ALL data)
3. Restart PostgreSQL with a fresh database
4. Wait for PostgreSQL to be ready
**⚠️ Warning:** This deletes ALL data including:
- All cached CMDB objects
- All relations
- All attribute values
- Schema discovery cache
- Sync metadata
### Manual Volume Reset
```bash
# Step 1: Stop containers
docker-compose down
# Step 2: Remove PostgreSQL volume
docker volume ls | grep postgres
docker volume rm cmdb-insight_postgres_data
# Step 3: Start PostgreSQL again
docker-compose up -d postgres
# Step 4: Wait for PostgreSQL to be ready
docker-compose exec postgres pg_isready -U cmdb
```
### After Volume Reset
After resetting the volume, you need to:
1. **Start the backend** - The schema will be created automatically:
```bash
docker-compose up -d backend
```
2. **Check logs** to verify schema creation:
```bash
docker-compose logs -f backend
```
You should see:
- `NormalizedCacheStore: Database schema initialized`
- `SchemaDiscovery: Schema discovery complete`
3. **Trigger a full sync** to rebuild data:
- Via UI: Settings → Cache Management → Full Sync
- Via API: `POST /api/cache/sync`
- Or use the reset script: `./scripts/reset-and-rebuild.sh`
## Option 3: Manual SQL Reset
If you want to clear data but keep the database structure:
### For PostgreSQL
```bash
# Connect to database
docker-compose exec postgres psql -U cmdb -d cmdb_cache
# Clear all data (keeps schema)
TRUNCATE TABLE attribute_values CASCADE;
TRUNCATE TABLE object_relations CASCADE;
TRUNCATE TABLE objects CASCADE;
TRUNCATE TABLE sync_metadata CASCADE;
TRUNCATE TABLE schema_cache CASCADE;
TRUNCATE TABLE schema_mappings CASCADE;
# Exit
\q
```
Then trigger a full sync:
```bash
curl -X POST "http://localhost:3001/api/cache/sync" \
-H "Authorization: Bearer YOUR_TOKEN"
```
### For SQLite (if using SQLite)
```bash
# Connect to backend container
docker-compose exec backend sh
# Clear all data
sqlite3 /app/data/cmdb-cache.db <<EOF
DELETE FROM attribute_values;
DELETE FROM object_relations;
DELETE FROM objects;
DELETE FROM sync_metadata;
DELETE FROM schema_cache;
DELETE FROM schema_mappings;
VACUUM;
EOF
```
## What Gets Reset?
### Cleared by `DELETE /api/cache/clear`:
- ✅ All cached CMDB objects (`objects` table)
- ✅ All attribute values (`attribute_values` table)
- ✅ All relations (`object_relations` table)
- ❌ Schema cache (kept for faster schema discovery)
- ❌ Schema mappings (kept for configuration)
- ❌ User data and classifications (separate database)
### Cleared by Volume Reset:
- ✅ Everything above
- ✅ Schema cache
- ✅ Schema mappings
- ✅ All database structure (recreated on next start)
## Verification
After reset, verify the database is empty and ready for rebuild:
```bash
# Check object counts (should be 0)
docker-compose exec postgres psql -U cmdb -d cmdb_cache -c "
SELECT
(SELECT COUNT(*) FROM objects) as objects,
(SELECT COUNT(*) FROM attribute_values) as attributes,
(SELECT COUNT(*) FROM object_relations) as relations;
"
# Check sync status
curl "$BACKEND_URL/api/cache/status" \
-H "Authorization: Bearer YOUR_TOKEN"
```
## Troubleshooting
### Authentication Issues
If you get authentication errors:
1. **Get a token from the UI:**
- Log in via the UI
- Open browser dev tools → Network tab
- Find any API request → Copy the `Authorization` header value
- Use it in your curl commands
2. **Or use the UI directly:**
- Navigate to Settings → Cache Management
- Use the buttons there (no token needed)
### Backend Not Running
```bash
# Check if backend is running
docker-compose ps backend
# Start backend if needed
docker-compose up -d backend
# Check logs
docker-compose logs -f backend
```
### Sync Not Starting
Check that Jira credentials are configured:
```bash
# Check environment variables
docker-compose exec backend env | grep JIRA
# Required:
# - JIRA_HOST
# - JIRA_SERVICE_ACCOUNT_TOKEN (for sync operations)
```
### Database Connection Issues
```bash
# Test PostgreSQL connection
docker-compose exec postgres pg_isready -U cmdb
# Check database exists
docker-compose exec postgres psql -U cmdb -l
# Check connection from backend
docker-compose exec backend node -e "
const { createDatabaseAdapter } = require('./src/services/database/factory.js');
const db = createDatabaseAdapter();
db.query('SELECT 1').then(() => console.log('OK')).catch(e => console.error(e));
"
```
## Complete Reset Workflow
For a complete "green field" reset:
```bash
# 1. Stop everything
docker-compose down
# 2. Remove PostgreSQL volume (nuclear option)
docker volume rm cmdb-insight_postgres_data
# 3. Start PostgreSQL
docker-compose up -d postgres
# 4. Wait for PostgreSQL
sleep 5
docker-compose exec postgres pg_isready -U cmdb
# 5. Start backend (creates schema automatically)
docker-compose up -d backend
# 6. Wait for backend to initialize
sleep 10
docker-compose logs backend | grep "initialization complete"
# 7. Clear cache and trigger sync (via script)
./scripts/reset-and-rebuild.sh
# OR manually via API
curl -X DELETE "http://localhost:3001/api/cache/clear" \
-H "Authorization: Bearer YOUR_TOKEN"
curl -X POST "http://localhost:3001/api/cache/sync" \
-H "Authorization: Bearer YOUR_TOKEN"
```
## Quick Reference
| Method | Speed | Data Loss | Schema Loss | Recommended For |
|--------|-------|-----------|-------------|-----------------|
| API Clear + Sync | Fast | Cache only | No | Regular resets |
| Volume Reset | Medium | Everything | Yes | Complete rebuild |
| SQL TRUNCATE | Fast | Cache only | No | Quick clear |
## Related Documentation
- [Local PostgreSQL Reset](./LOCAL-POSTGRES-RESET.md) - Detailed PostgreSQL reset guide
- [Local Development Setup](./LOCAL-DEVELOPMENT-SETUP.md) - Initial setup guide
- [Database Schema](./DATABASE-DRIVEN-SCHEMA-IMPLEMENTATION-PLAN.md) - Schema documentation

View File

@@ -0,0 +1,136 @@
# Database Tables Audit
**Date:** 2026-01-09
**Purpose:** Verify all database tables are actually being used and clean up unused ones.
## Summary
✅ **All active tables are in use**
✅ **Removed:** `cached_objects` (legacy, replaced by normalized schema)
## Table Usage Status
### ✅ Active Tables (Normalized Schema)
These tables are part of the current normalized schema and are actively used:
1. **`schemas`** - ✅ USED
- Stores Jira Assets schema metadata
- Used by: `SchemaRepository`, `schemaDiscoveryService`, `schemaConfigurationService`
2. **`object_types`** - ✅ USED
- Stores discovered object types from Jira schema
- Used by: `SchemaRepository`, `schemaDiscoveryService`, `schemaCacheService`, `schemaConfigurationService`
3. **`attributes`** - ✅ USED
- Stores discovered attributes per object type
- Used by: `SchemaRepository`, `schemaDiscoveryService`, `schemaCacheService`, `queryBuilder`
4. **`objects`** - ✅ USED
- Stores minimal object metadata
- Used by: `normalizedCacheStore`, `ObjectCacheRepository`, `QueryService`, `dataIntegrityService`
5. **`attribute_values`** - ✅ USED
- EAV pattern for storing all attribute values
- Used by: `normalizedCacheStore`, `ObjectCacheRepository`, `QueryService`, `queryBuilder`, `dataIntegrityService`
6. **`object_relations`** - ✅ USED
- Stores relationships between objects
- Used by: `normalizedCacheStore`, `ObjectCacheRepository`, `QueryService`, `dataIntegrityService`, `DebugController`
7. **`sync_metadata`** - ✅ USED
- Tracks sync state
- Used by: `normalizedCacheStore`, `cacheStore.old.ts` (legacy)
8. **`schema_mappings`** - ⚠️ DEPRECATED but still in use
- Maps object types to schema IDs
- **Status:** Marked as DEPRECATED in schema comments, but still actively used by `schemaMappingService`
- **Note:** According to refactor plan, this should be removed after consolidation
- Used by: `schemaMappingService`, `jiraAssetsClient`, `jiraAssets`, `dataValidation` routes
### ✅ Active Tables (Classification Database)
9. **`classification_history`** - ✅ USED
- Stores AI classification history
- Used by: `databaseService`, `classifications` routes, `dashboard` routes
10. **`session_state`** - ✅ USED
- Stores session state for classification workflow
- Used by: `databaseService`
### ✅ Active Tables (Authentication & Authorization)
11. **`users`** - ✅ USED
- User accounts
- Used by: `userService`, `authService`, `auth` routes
12. **`roles`** - ✅ USED
- Role definitions
- Used by: `roleService`, `roles` routes
13. **`permissions`** - ✅ USED
- Permission definitions
- Used by: `roleService`, `roles` routes
14. **`role_permissions`** - ✅ USED
- Junction table: roles ↔ permissions
- Used by: `roleService`
15. **`user_roles`** - ✅ USED
- Junction table: users ↔ roles
- Used by: `roleService`, `userService`
16. **`user_settings`** - ✅ USED
- Per-user settings (PAT, AI keys, etc.)
- Used by: `userSettingsService`
17. **`sessions`** - ✅ USED
- User sessions (OAuth and local auth)
- Used by: `authService`
18. **`email_tokens`** - ✅ USED
- Email verification and password reset tokens
- Used by: `userService`
## ❌ Removed Tables
### `cached_objects` - REMOVED (Legacy)
**Status:** ❌ Removed from schema generation
**Reason:** Replaced by normalized schema (`objects` + `attribute_values` tables)
**Previous Usage:**
- Only used by deprecated `cacheStore.old.ts`
- Old schema stored all object data as JSON in a single table
- New normalized schema uses EAV pattern for better querying and flexibility
**Cleanup Actions:**
- ✅ Removed from `generate-schema.ts` script
- ✅ Removed from `backend/src/generated/db-schema.sql`
- ✅ Removed from `backend/src/generated/db-schema-postgres.sql`
- ✅ Added comment in migration script explaining it's only for legacy data migration
**Migration Note:**
- Migration script (`migrate-sqlite-to-postgres.ts`) still references `cached_objects` for migrating old data
- This is intentional - needed for one-time migration from old schema to new schema
## Notes
### Schema Mappings Table
The `schema_mappings` table is marked as DEPRECATED in the schema comments but is still actively used. According to the refactor plan (`docs/REFACTOR-PHASE-2B-3-STATUS.md`), this table should be removed after consolidation of schema services. For now, it remains in use and should not be deleted.
### Legacy Files
The following files still reference old schema but are deprecated:
- `backend/src/services/cacheStore.old.ts` - Old cache store implementation
- `backend/src/generated/db-schema.sql` - Legacy schema (now cleaned up)
- `backend/src/generated/db-schema-postgres.sql` - Legacy schema (now cleaned up)
## Conclusion
✅ **All 18 active tables are in use**
✅ **1 unused table (`cached_objects`) has been removed from schema generation**
⚠️ **1 table (`schema_mappings`) is marked deprecated but still in use - keep for now**
The database schema is now clean with only actively used tables defined in the schema generation scripts.

View File

@@ -1,4 +1,4 @@
# Deployment Advies - Zuyderland CMDB GUI 🎯
# Deployment Advies - CMDB Insight 🎯
**Datum:** {{ vandaag }}
**Aanbeveling:** Azure App Service (Basic Tier)
@@ -119,14 +119,14 @@ az webapp create \
--name cmdb-backend-prod \
--resource-group rg-cmdb-gui-prod \
--plan plan-cmdb-gui-prod \
--deployment-container-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest
--deployment-container-image-name zdlas.azurecr.io/cmdb-insight/backend:latest
# Frontend Web App
az webapp create \
--name cmdb-frontend-prod \
--resource-group rg-cmdb-gui-prod \
--plan plan-cmdb-gui-prod \
--deployment-container-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest
--deployment-container-image-name zdlas.azurecr.io/cmdb-insight/frontend:latest
```
---
@@ -186,14 +186,14 @@ az role assignment create \
az webapp config container set \
--name cmdb-backend-prod \
--resource-group rg-cmdb-gui-prod \
--docker-custom-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest \
--docker-custom-image-name zdlas.azurecr.io/cmdb-insight/backend:latest \
--docker-registry-server-url https://zdlas.azurecr.io
# Frontend
az webapp config container set \
--name cmdb-frontend-prod \
--resource-group rg-cmdb-gui-prod \
--docker-custom-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest \
--docker-custom-image-name zdlas.azurecr.io/cmdb-insight/frontend:latest \
--docker-registry-server-url https://zdlas.azurecr.io
```

View File

@@ -6,8 +6,8 @@ Je Docker images zijn succesvol gebouwd en gepusht naar Azure Container Registry
- ✅ Azure Container Registry (ACR): `zdlas.azurecr.io`
- ✅ Docker images gebouwd en gepusht:
- `zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest`
- `zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest`
- `zdlas.azurecr.io/cmdb-insight/backend:latest`
- `zdlas.azurecr.io/cmdb-insight/frontend:latest`
- ✅ Azure DevOps Pipeline: Automatische builds bij push naar `main`
- ✅ Docker Compose configuratie: `docker-compose.prod.acr.yml`
@@ -112,19 +112,19 @@ az acr login --name zdlas
az acr repository list --name zdlas --output table
# List tags voor backend
az acr repository show-tags --name zdlas --repository zuyderland-cmdb-gui/backend --output table
az acr repository show-tags --name zdlas --repository cmdb-insight/backend --output table
# List tags voor frontend
az acr repository show-tags --name zdlas --repository zuyderland-cmdb-gui/frontend --output table
az acr repository show-tags --name zdlas --repository cmdb-insight/frontend --output table
```
**Verwachte output:**
```
REPOSITORY TAG CREATED
zuyderland-cmdb-gui/backend latest ...
zuyderland-cmdb-gui/backend 88764 ...
zuyderland-cmdb-gui/frontend latest ...
zuyderland-cmdb-gui/frontend 88764 ...
cmdb-insight/backend latest ...
cmdb-insight/backend 88764 ...
cmdb-insight/frontend latest ...
cmdb-insight/frontend 88764 ...
```
---
@@ -136,9 +136,9 @@ zuyderland-cmdb-gui/frontend 88764 ...
```yaml
services:
backend:
image: zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest
image: zdlas.azurecr.io/cmdb-insight/backend:latest
frontend:
image: zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest
image: zdlas.azurecr.io/cmdb-insight/frontend:latest
```
**Let op:** De huidige configuratie gebruikt `zuyderlandcmdbacr.azurecr.io` - pas dit aan naar `zdlas.azurecr.io` als dat je ACR naam is.
@@ -224,7 +224,7 @@ VITE_API_URL=https://your-backend-url.com/api
5. **Clone repository en deploy:**
```bash
git clone <your-repo-url>
cd zuyderland-cmdb-gui
cd cmdb-insight
# Update docker-compose.prod.acr.yml met juiste ACR naam
# Maak .env.production aan

View File

@@ -0,0 +1,76 @@
# Docker Compose Warnings Oplossen
## Warnings
Je ziet mogelijk deze warnings:
```
WARN[0000] The "JIRA_PAT" variable is not set. Defaulting to a blank string.
WARN[0000] The "ANTHROPIC_API_KEY" variable is not set. Defaulting to a blank string.
WARN[0000] docker-compose.yml: the attribute `version` is obsolete
```
## Oplossingen
### 1. Version Attribute (Opgelost)
De `version: '3.8'` regel is verwijderd uit `docker-compose.yml` omdat het obsolete is in nieuwere versies van Docker Compose.
### 2. Environment Variables
De warnings over ontbrekende environment variables zijn **normaal** als je geen `.env` bestand hebt of als deze variabelen niet gezet zijn.
**Optie A: Maak een `.env` bestand (Aanbevolen)**
```bash
# Kopieer .env.example naar .env
cp .env.example .env
# Edit .env en vul de waarden in
nano .env
```
**Optie B: Zet variabelen in je shell**
```bash
export JIRA_HOST=https://jira.zuyderland.nl
export JIRA_PAT=your_token
export JIRA_SCHEMA_ID=your_schema_id
export ANTHROPIC_API_KEY=your_key
docker-compose up
```
**Optie C: Negeer de warnings (OK voor development)**
De warnings zijn **niet kritisch**. De applicatie werkt ook zonder deze variabelen (je kunt dan alleen geen Jira sync doen of AI features gebruiken).
## .env Bestand Voorbeeld
```env
# Jira Assets
JIRA_HOST=https://jira.zuyderland.nl
JIRA_PAT=your_personal_access_token
JIRA_SCHEMA_ID=your_schema_id
# AI (optioneel)
ANTHROPIC_API_KEY=your_anthropic_key
```
## Verificatie
Na het aanmaken van `.env`:
```bash
# Check of warnings weg zijn
docker-compose config
# Start containers
docker-compose up -d
```
## Notitie
- `.env` staat in `.gitignore` - wordt niet gecommit
- Gebruik `.env.example` als template
- Voor productie: gebruik Azure App Service environment variables of Key Vault

View File

@@ -1,6 +1,6 @@
# Gitea Docker Container Registry - Deployment Guide
Deze guide beschrijft hoe je Gitea gebruikt als Docker Container Registry voor het deployen van de Zuyderland CMDB GUI applicatie in productie.
Deze guide beschrijft hoe je Gitea gebruikt als Docker Container Registry voor het deployen van de CMDB Insight applicatie in productie.
## 📋 Inhoudsopgave

View File

@@ -0,0 +1,593 @@
# Green Field Deployment Guide
## Overzicht
Deze guide beschrijft hoe je de applicatie volledig opnieuw deployt met de nieuwe genormaliseerde database structuur. Aangezien het een green field deployment is, kunnen we alles schoon opzetten.
---
## Stap 1: Database Setup
### Option A: PostgreSQL (Aanbevolen voor productie)
#### 1.1 Azure Database for PostgreSQL
```bash
# Via Azure Portal of CLI
az postgres flexible-server create \
--resource-group <resource-group> \
--name <server-name> \
--location <location> \
--admin-user <admin-user> \
--admin-password <admin-password> \
--sku-name Standard_B1ms \
--tier Burstable \
--version 14
```
#### 1.2 Database Aanmaken
```sql
-- Connect to PostgreSQL
CREATE DATABASE cmdb_cache;
CREATE DATABASE cmdb_classifications;
-- Create user (optional, can use admin user)
CREATE USER cmdb_user WITH PASSWORD 'secure_password';
GRANT ALL PRIVILEGES ON DATABASE cmdb_cache TO cmdb_user;
GRANT ALL PRIVILEGES ON DATABASE cmdb_classifications TO cmdb_user;
```
#### 1.3 Connection String
```env
DATABASE_TYPE=postgres
DATABASE_URL=postgresql://cmdb_user:secure_password@<server-name>.postgres.database.azure.com:5432/cmdb_cache?sslmode=require
CLASSIFICATIONS_DATABASE_URL=postgresql://cmdb_user:secure_password@<server-name>.postgres.database.azure.com:5432/cmdb_classifications?sslmode=require
```
### Option B: SQLite (Voor development/testing)
```env
DATABASE_TYPE=sqlite
# Database files worden automatisch aangemaakt in backend/data/
```
---
## Stap 2: Environment Variables
### 2.1 Basis Configuratie
Maak `.env` bestand in project root:
```env
# Server
PORT=3001
NODE_ENV=production
FRONTEND_URL=https://your-domain.com
# Database (zie Stap 1)
DATABASE_TYPE=postgres
DATABASE_URL=postgresql://...
# Jira Assets
JIRA_HOST=https://jira.zuyderland.nl
JIRA_SCHEMA_ID=<your_schema_id>
JIRA_SERVICE_ACCOUNT_TOKEN=<service_account_token>
# Jira Authentication Method
JIRA_AUTH_METHOD=oauth
# OAuth Configuration (als JIRA_AUTH_METHOD=oauth)
JIRA_OAUTH_CLIENT_ID=<client_id>
JIRA_OAUTH_CLIENT_SECRET=<client_secret>
JIRA_OAUTH_CALLBACK_URL=https://your-domain.com/api/auth/callback
JIRA_OAUTH_SCOPES=READ WRITE
# Session
SESSION_SECRET=<generate_secure_random_string>
# AI (configureer per gebruiker in profile settings)
# ANTHROPIC_API_KEY, OPENAI_API_KEY, TAVILY_API_KEY worden per gebruiker ingesteld
```
### 2.2 Session Secret Genereren
```bash
# Generate secure random string
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
```
---
## Stap 3: Schema Discovery
### 3.1 Schema Genereren
```bash
cd backend
npm run generate-schema
```
This command:
- Fetches the schema from the Jira Assets API
- Generates `backend/src/generated/jira-schema.ts`
- Generates `backend/src/generated/jira-types.ts`
### 3.2 Populating the Schema in the Database
On the application's first start:
- `schemaDiscoveryService` reads `OBJECT_TYPES` from the generated schema
- It populates the `object_types` and `attributes` tables
- This happens automatically during initialization
---
## Step 4: Application Build
### 4.1 Installing Dependencies
```bash
# Root
npm install
# Backend
cd backend
npm install
# Frontend
cd frontend
npm install
```
### 4.2 Build
```bash
# Backend
cd backend
npm run build
# Frontend
cd frontend
npm run build
```
---
## Step 5: Database Initialization
### 5.1 Automatic Initialization
On first start:
1. The normalized schema is created
2. Schema discovery runs
3. The tables are populated with object types and attributes
### 5.2 Manual Verification (Optional)
```sql
-- Check object types
SELECT * FROM object_types ORDER BY sync_priority;
-- Check attributes
SELECT COUNT(*) FROM attributes;
-- Check per type
SELECT object_type_name, COUNT(*) as attr_count
FROM attributes
GROUP BY object_type_name;
```
---
## Step 6: Data Sync
### 6.1 Initial Sync
```bash
# Via the API (after deployment)
curl -X POST https://your-domain.com/api/cache/sync \
-H "Authorization: Bearer <token>"
```
Or via the application:
- Go to Settings → Cache Management
- Click "Full Sync"
### 6.2 Checking Sync Status
```bash
curl https://your-domain.com/api/cache/status \
-H "Authorization: Bearer <token>"
```
---
## Step 7: Docker Deployment
### 7.1 Build Images
```bash
# Backend
docker build -t cmdb-backend:latest -f backend/Dockerfile .
# Frontend
docker build -t cmdb-frontend:latest -f frontend/Dockerfile .
```
### 7.2 Docker Compose (Production)
```yaml
# docker-compose.prod.yml
version: '3.8'
services:
backend:
image: cmdb-backend:latest
environment:
- DATABASE_TYPE=postgres
- DATABASE_URL=${DATABASE_URL}
- JIRA_HOST=${JIRA_HOST}
- JIRA_SCHEMA_ID=${JIRA_SCHEMA_ID}
- JIRA_SERVICE_ACCOUNT_TOKEN=${JIRA_SERVICE_ACCOUNT_TOKEN}
- SESSION_SECRET=${SESSION_SECRET}
ports:
- "3001:3001"
frontend:
image: cmdb-frontend:latest
ports:
- "80:80"
depends_on:
- backend
nginx:
image: nginx:alpine
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
ports:
- "443:443"
depends_on:
- frontend
- backend
```
### 7.3 Starting
```bash
docker-compose -f docker-compose.prod.yml up -d
```
---
## Step 8: Azure App Service Deployment
### 8.1 Azure Container Registry
```bash
# Login
az acr login --name <registry-name>
# Tag images
docker tag cmdb-backend:latest <registry-name>.azurecr.io/cmdb-backend:latest
docker tag cmdb-frontend:latest <registry-name>.azurecr.io/cmdb-frontend:latest
# Push
docker push <registry-name>.azurecr.io/cmdb-backend:latest
docker push <registry-name>.azurecr.io/cmdb-frontend:latest
```
### 8.2 App Service Configuration
**Backend App Service:**
- Container: `<registry-name>.azurecr.io/cmdb-backend:latest`
- Environment variables: all `.env` variables
- Port: 3001
**Frontend App Service:**
- Container: `<registry-name>.azurecr.io/cmdb-frontend:latest`
- Environment variables: `VITE_API_URL=https://backend-app.azurewebsites.net`
### 8.3 Deployment via Azure DevOps
See `azure-pipelines.yml` for the CI/CD pipeline.
---
## Step 9: Verification
### 9.1 Health Checks
```bash
# Backend health
curl https://backend-app.azurewebsites.net/health
# Frontend
curl https://frontend-app.azurewebsites.net
```
### 9.2 Database Verification
```sql
-- Check object count
SELECT object_type_name, COUNT(*) as count
FROM objects
GROUP BY object_type_name;
-- Check attribute values
SELECT COUNT(*) FROM attribute_values;
-- Check relations
SELECT COUNT(*) FROM object_relations;
```
### 9.3 Functional Testing
1. **Login** - test authentication
2. **Dashboard** - check that data is displayed
3. **Application List** - test the filters
4. **Application Detail** - test the edit functionality
5. **Sync** - test a manual sync
---
## Step 10: Monitoring & Maintenance
### 10.1 Logs
```bash
# Azure App Service logs
az webapp log tail --name <app-name> --resource-group <resource-group>
# Docker logs
docker-compose logs -f backend
```
### 10.2 Database Monitoring
```sql
-- Database size
SELECT pg_database_size('cmdb_cache');
-- Table sizes
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
-- Index usage
SELECT
schemaname,
tablename,
indexname,
idx_scan as index_scans
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;
```
### 10.3 Performance Monitoring
- Monitor query performance
- Check sync duration
- Monitor database connections
- Check memory usage
---
## Troubleshooting
### Database Connection Issues
```bash
# Test connection
psql $DATABASE_URL -c "SELECT 1"
# Check firewall rules (Azure)
az postgres flexible-server firewall-rule list \
--resource-group <resource-group> \
--name <server-name>
```
### Schema Discovery Fails
```bash
# Check Jira connection
curl -H "Authorization: Bearer $JIRA_SERVICE_ACCOUNT_TOKEN" \
$JIRA_HOST/rest/insight/1.0/objectschema/list
# Regenerate schema
cd backend
npm run generate-schema
```
### Sync Issues
```bash
# Check sync status
curl https://your-domain.com/api/cache/status
# Manual sync for specific type
curl -X POST https://your-domain.com/api/cache/sync/ApplicationComponent
```
---
## Rollback Plan
If there are problems:
1. **Stop the application**
2. **Revert the code** (git)
3. **Restart the application**
Since this is a green-field deployment, no data migration is needed for rollback.
---
## Post-Deployment Checklist
- [ ] Database connection works
- [ ] Schema discovery succeeded
- [ ] First sync completed
- [ ] All object types synced
- [ ] Queries work correctly
- [ ] Filters work
- [ ] Edit functionality works
- [ ] Authentication works
- [ ] Logs are visible
- [ ] Monitoring is set up
---
## Performance Tips
1. **Database Indexes** - created automatically
2. **Connection Pooling** - the PostgreSQL adapter uses a pool (max 20)
3. **Query Optimization** - use `queryWithFilters()` for filtered queries
4. **Sync Frequency** - incremental sync every 30 seconds (configurable)
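To illustrate why `queryWithFilters()` is preferred: pushing the filter into SQL against the normalized tables avoids loading every object into JavaScript first. A sketch of the kind of query it might issue — the `attribute_values` column names here are assumptions based on the verification queries in this guide, not the actual schema:

```sql
-- Find objects of a given type whose attribute matches a value.
-- Assumed columns: objects(id, object_type_name),
--                  attribute_values(object_id, attribute_name, value).
SELECT o.*
FROM objects o
JOIN attribute_values av ON av.object_id = o.id
WHERE o.object_type_name = 'ApplicationComponent'
  AND av.attribute_name = 'Status'
  AND av.value = 'Active';
```

Because `attribute_values` is indexed, the database can resolve this without a full scan, which is what makes the SQL path much faster than in-memory filtering.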
---
## Security Checklist
- [ ] `SESSION_SECRET` is strong and unique
- [ ] Database credentials are secure
- [ ] HTTPS is enabled
- [ ] CORS is configured correctly
- [ ] OAuth callback URL is correct
- [ ] Environment variables are not committed
---
## Extra Tips & Best Practices
### Database Performance
1. **Connection Pooling**
   - The PostgreSQL adapter automatically uses connection pooling (max 20)
   - Monitor pool usage in production
2. **Query Optimization**
   - Use `queryWithFilters()` for filtered queries (much faster)
   - Indexes are created automatically
   - Monitor slow queries
3. **Sync Performance**
   - Batch size: 50 objects per batch (configurable via `JIRA_API_BATCH_SIZE`)
   - Incremental sync: every 30 seconds (configurable via `SYNC_INCREMENTAL_INTERVAL_MS`)
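These two tunables map directly to environment variables; a sketch showing the defaults mentioned above:

```env
# Sync tuning (defaults shown; larger batches risk Jira rate limiting)
JIRA_API_BATCH_SIZE=50
SYNC_INCREMENTAL_INTERVAL_MS=30000
```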
### Monitoring
1. **Application Logs**
   - Check for schema discovery errors
   - Monitor sync duration
   - Check query performance
2. **Database Metrics**
   - Table size growth
   - Index usage
   - Connection pool usage
3. **Jira API**
   - Monitor rate limiting
   - Check API response times
   - Monitor sync success rate
### Backup Strategy
1. **Database Backups**
   - Azure PostgreSQL: automatic daily backups
   - SQLite: make periodic copies of the `.db` files
2. **Configuration Backup**
   - Back up the `.env` file (securely!)
   - Document all environment variables
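For the SQLite case, the periodic copy can be a small cron-able script. A sketch — the source and backup paths are assumptions, adjust them to your layout:

```shell
#!/bin/sh
# Sketch: copy each SQLite .db file to a timestamped backup.
# SRC_DIR/BACKUP_DIR are assumed defaults; override via environment.
SRC_DIR="${SRC_DIR:-backend/data}"
BACKUP_DIR="${BACKUP_DIR:-backups}"
STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
for db in "$SRC_DIR"/*.db; do
  [ -e "$db" ] || continue   # skip when no .db files are present
  cp "$db" "$BACKUP_DIR/$(basename "$db" .db)-$STAMP.db"
done
```

Run it from cron (e.g. daily) and prune old copies to keep disk usage bounded.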
### Scaling Considerations
1. **Database**
   - PostgreSQL can scale (vertical scaling)
   - Consider read replicas for large datasets
2. **Application**
   - Stateless design - can scale horizontally
   - Session storage in the database (scalable)
3. **Cache**
   - The normalized structure is efficient
   - Indexes ensure good performance
### Troubleshooting Common Issues
#### Issue: Schema Discovery Fails
**Symptom:** Error at startup, no object types in the database
**Solution:**
```bash
# Check Jira connection
curl -H "Authorization: Bearer $JIRA_SERVICE_ACCOUNT_TOKEN" \
$JIRA_HOST/rest/insight/1.0/objectschema/list
# Regenerate schema
cd backend
npm run generate-schema
# Restart application
```
#### Issue: Sync is Slow
**Symptom:** A full sync takes a long time
**Solution:**
- Check Jira API response times
- Increase the batch size (but not too much - rate limiting)
- Check the database connection pool
#### Issue: Queries Are Slow
**Symptom:** Filters are sluggish
**Solution:**
- Check that the indexes exist: `\d+ attribute_values` in PostgreSQL
- Use `queryWithFilters()` instead of JavaScript filtering
- Check the query execution plan
#### Issue: High Memory Usage
**Symptom:** The application uses a lot of memory
**Solution:**
- Normalized storage uses less memory than JSONB
- Check whether the old cacheStore is still used anywhere
- Monitor object reconstruction (can cause N+1 queries)
### Development vs Production
**Development:**
- SQLite is fine for testing
- Local database in `backend/data/`
- No SSL needed
**Production:**
- PostgreSQL is recommended
- Azure Database for PostgreSQL
- SSL required
- Connection pooling
- Monitoring enabled
### Migrating from Development to Production
1. **Schema is identical** - no migration needed
2. **Data sync** - the first sync fetches everything from Jira
3. **Environment variables** - update them for production
4. **OAuth callback URL** - update it to the production domain
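Concretely, these points usually boil down to a few `.env` overrides; a sketch using the variables from Step 2 (domains and credentials are placeholders):

```env
NODE_ENV=production
FRONTEND_URL=https://your-domain.com
DATABASE_TYPE=postgres
DATABASE_URL=postgresql://cmdb_user:secure_password@<server-name>.postgres.database.azure.com:5432/cmdb_cache?sslmode=require
JIRA_OAUTH_CALLBACK_URL=https://your-domain.com/api/auth/callback
```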
---
**End of Guide**


@@ -0,0 +1,634 @@
# Cursor AI Prompt: Jira Assets Schema Synchronization
## Context
This application syncs Jira Assets (Data Center) data to a local database with a generic structure. Your task is to review, implement, and/or modify the schema synchronization feature that fetches the complete Jira Assets configuration structure.
## Objective
Implement or verify the schema sync functionality that extracts the complete Jira Assets schema structure using the REST API. This includes:
- Object Schemas
- Object Types (with hierarchy)
- Object Type Attributes (field definitions)
**Note:** This task focuses on syncing the *structure/configuration* only, not the actual object data.
---
## API Reference
### Base URL
```
{JIRA_BASE_URL}/rest/assets/1.0
```
### Authentication
- HTTP Basic Authentication (username + password/API token)
- All requests require `Accept: application/json` header
---
## Required API Endpoints & Response Structures
### 1. List All Schemas
```
GET /rest/assets/1.0/objectschema/list
```
**Response Structure:**
```json
{
"objectschemas": [
{
"id": 1,
"name": "IT Assets",
"objectSchemaKey": "IT",
"status": "Ok",
"description": "IT Asset Management Schema",
"created": "2024-01-15T10:30:00.000Z",
"updated": "2024-01-20T14:45:00.000Z",
"objectCount": 1500,
"objectTypeCount": 25
}
]
}
```
**Fields to Store:**
| Field | Type | Description |
|-------|------|-------------|
| id | integer | Primary identifier |
| name | string | Schema name |
| objectSchemaKey | string | Unique key (e.g., "IT") |
| status | string | Schema status |
| description | string | Optional description |
| created | datetime | Creation timestamp |
| updated | datetime | Last modification |
| objectCount | integer | Total objects in schema |
| objectTypeCount | integer | Total object types |
---
### 2. Get Schema Details
```
GET /rest/assets/1.0/objectschema/:id
```
**Response Structure:**
```json
{
"id": 1,
"name": "IT Assets",
"objectSchemaKey": "IT",
"status": "Ok",
"description": "IT Asset Management Schema",
"created": "2024-01-15T10:30:00.000Z",
"updated": "2024-01-20T14:45:00.000Z",
"objectCount": 1500,
"objectTypeCount": 25
}
```
---
### 3. Get Object Types (Flat List)
```
GET /rest/assets/1.0/objectschema/:id/objecttypes/flat
```
**Response Structure:**
```json
[
{
"id": 10,
"name": "Hardware",
"type": 0,
"description": "Physical hardware assets",
"icon": {
"id": 1,
"name": "Computer",
"url16": "/rest/assets/1.0/icon/1/16",
"url48": "/rest/assets/1.0/icon/1/48"
},
"position": 0,
"created": "2024-01-15T10:30:00.000Z",
"updated": "2024-01-20T14:45:00.000Z",
"objectCount": 500,
"parentObjectTypeId": null,
"objectSchemaId": 1,
"inherited": false,
"abstractObjectType": false
},
{
"id": 11,
"name": "Computer",
"type": 0,
"description": "Desktop and laptop computers",
"icon": {
"id": 2,
"name": "Laptop",
"url16": "/rest/assets/1.0/icon/2/16",
"url48": "/rest/assets/1.0/icon/2/48"
},
"position": 0,
"created": "2024-01-15T10:35:00.000Z",
"updated": "2024-01-20T14:50:00.000Z",
"objectCount": 200,
"parentObjectTypeId": 10,
"objectSchemaId": 1,
"inherited": true,
"abstractObjectType": false
}
]
```
**Fields to Store:**
| Field | Type | Description |
|-------|------|-------------|
| id | integer | Primary identifier |
| name | string | Object type name |
| type | integer | Type classification (0=normal) |
| description | string | Optional description |
| icon | object | Icon details (id, name, url16, url48) |
| position | integer | Display position in hierarchy |
| created | datetime | Creation timestamp |
| updated | datetime | Last modification |
| objectCount | integer | Number of objects of this type |
| parentObjectTypeId | integer/null | Parent type ID (null if root) |
| objectSchemaId | integer | Parent schema ID |
| inherited | boolean | Whether attributes are inherited |
| abstractObjectType | boolean | Whether type is abstract (no direct objects) |
---
### 4. Get Object Type Details
```
GET /rest/assets/1.0/objecttype/:id
```
**Response Structure:** Same as individual item in the flat list above.
---
### 5. Get Object Type Attributes
```
GET /rest/assets/1.0/objecttype/:id/attributes
```
**Response Structure:**
```json
[
{
"id": 100,
"objectType": {
"id": 11,
"name": "Computer"
},
"name": "Name",
"label": true,
"type": 0,
"description": "Asset name/label",
"defaultType": {
"id": 0,
"name": "Text"
},
"typeValue": null,
"typeValueMulti": [],
"additionalValue": null,
"referenceType": null,
"referenceObjectTypeId": null,
"referenceObjectType": null,
"editable": true,
"system": true,
"sortable": true,
"summable": false,
"indexed": true,
"minimumCardinality": 1,
"maximumCardinality": 1,
"suffix": "",
"removable": false,
"hidden": false,
"includeChildObjectTypes": false,
"uniqueAttribute": false,
"regexValidation": null,
"iql": null,
"options": "",
"position": 0
},
{
"id": 101,
"objectType": {
"id": 11,
"name": "Computer"
},
"name": "Serial Number",
"label": false,
"type": 0,
"description": "Device serial number",
"defaultType": {
"id": 0,
"name": "Text"
},
"typeValue": null,
"typeValueMulti": [],
"additionalValue": null,
"referenceType": null,
"referenceObjectTypeId": null,
"referenceObjectType": null,
"editable": true,
"system": false,
"sortable": true,
"summable": false,
"indexed": true,
"minimumCardinality": 0,
"maximumCardinality": 1,
"suffix": "",
"removable": true,
"hidden": false,
"includeChildObjectTypes": false,
"uniqueAttribute": true,
"regexValidation": "^[A-Z0-9]{10,20}$",
"iql": null,
"options": "",
"position": 1
},
{
"id": 102,
"objectType": {
"id": 11,
"name": "Computer"
},
"name": "Assigned User",
"label": false,
"type": 2,
"description": "User assigned to this asset",
"defaultType": null,
"typeValue": "SHOW_ON_ASSET",
"typeValueMulti": [],
"additionalValue": null,
"referenceType": null,
"referenceObjectTypeId": null,
"referenceObjectType": null,
"editable": true,
"system": false,
"sortable": true,
"summable": false,
"indexed": true,
"minimumCardinality": 0,
"maximumCardinality": 1,
"suffix": "",
"removable": true,
"hidden": false,
"includeChildObjectTypes": false,
"uniqueAttribute": false,
"regexValidation": null,
"iql": null,
"options": "",
"position": 2
},
{
"id": 103,
"objectType": {
"id": 11,
"name": "Computer"
},
"name": "Location",
"label": false,
"type": 1,
"description": "Physical location of the asset",
"defaultType": null,
"typeValue": null,
"typeValueMulti": [],
"additionalValue": null,
"referenceType": {
"id": 1,
"name": "Reference",
"description": "Standard reference",
"color": "#0052CC",
"url16": null,
"removable": false,
"objectSchemaId": 1
},
"referenceObjectTypeId": 20,
"referenceObjectType": {
"id": 20,
"name": "Location",
"objectSchemaId": 1
},
"editable": true,
"system": false,
"sortable": true,
"summable": false,
"indexed": true,
"minimumCardinality": 0,
"maximumCardinality": 1,
"suffix": "",
"removable": true,
"hidden": false,
"includeChildObjectTypes": true,
"uniqueAttribute": false,
"regexValidation": null,
"iql": "objectType = Location",
"options": "",
"position": 3
},
{
"id": 104,
"objectType": {
"id": 11,
"name": "Computer"
},
"name": "Status",
"label": false,
"type": 7,
"description": "Current asset status",
"defaultType": null,
"typeValue": "1",
"typeValueMulti": ["1", "2", "3"],
"additionalValue": null,
"referenceType": null,
"referenceObjectTypeId": null,
"referenceObjectType": null,
"editable": true,
"system": false,
"sortable": true,
"summable": false,
"indexed": true,
"minimumCardinality": 1,
"maximumCardinality": 1,
"suffix": "",
"removable": true,
"hidden": false,
"includeChildObjectTypes": false,
"uniqueAttribute": false,
"regexValidation": null,
"iql": null,
"options": "",
"position": 4
}
]
```
**Attribute Fields to Store:**
| Field | Type | Description |
|-------|------|-------------|
| id | integer | Attribute ID |
| objectType | object | Parent object type {id, name} |
| name | string | Attribute name |
| label | boolean | Is this the label/display attribute |
| type | integer | Attribute type (see type reference below) |
| description | string | Optional description |
| defaultType | object/null | Default type info {id, name} for type=0 |
| typeValue | string/null | Type-specific configuration |
| typeValueMulti | array | Multiple type values (e.g., allowed status IDs) |
| additionalValue | string/null | Additional configuration |
| referenceType | object/null | Reference type details for type=1 |
| referenceObjectTypeId | integer/null | Target object type ID for references |
| referenceObjectType | object/null | Target object type details |
| editable | boolean | Can values be edited |
| system | boolean | Is system attribute (Name, Key, Created, Updated) |
| sortable | boolean | Can sort by this attribute |
| summable | boolean | Can sum values (numeric types) |
| indexed | boolean | Is indexed for search |
| minimumCardinality | integer | Minimum required values (0=optional, 1=required) |
| maximumCardinality | integer | Maximum values (-1=unlimited, 1=single) |
| suffix | string | Display suffix (e.g., "GB", "USD") |
| removable | boolean | Can attribute be deleted |
| hidden | boolean | Is hidden from default view |
| includeChildObjectTypes | boolean | Include child types in reference selection |
| uniqueAttribute | boolean | Must values be unique |
| regexValidation | string/null | Validation regex pattern |
| iql | string/null | IQL/AQL filter for reference selection |
| options | string | Additional options (CSV for Select type) |
| position | integer | Display order position |
---
## Attribute Type Reference
### Main Types (type field)
| Type | Name | Description | Uses defaultType |
|------|------|-------------|------------------|
| 0 | Default | Uses defaultType for specific type | Yes |
| 1 | Object | Reference to another Assets object | No |
| 2 | User | Jira user reference | No |
| 3 | Confluence | Confluence page reference | No |
| 4 | Group | Jira group reference | No |
| 5 | Version | Jira version reference | No |
| 6 | Project | Jira project reference | No |
| 7 | Status | Status type reference | No |
### Default Types (defaultType.id when type=0)
| ID | Name | Description |
|----|------|-------------|
| 0 | Text | Single-line text |
| 1 | Integer | Whole number |
| 2 | Boolean | True/False checkbox |
| 3 | Double | Decimal number |
| 4 | Date | Date only (no time) |
| 5 | Time | Time only (no date) |
| 6 | DateTime | Date and time |
| 7 | URL | Web link |
| 8 | Email | Email address |
| 9 | Textarea | Multi-line text |
| 10 | Select | Dropdown selection (options in `options` field) |
| 11 | IP Address | IP address format |
---
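The two tables above translate directly into a lookup; a sketch for resolving an attribute's concrete type name:

```typescript
// Main types from the `type` field.
const MAIN_TYPES: Record<number, string> = {
  0: 'Default', 1: 'Object', 2: 'User', 3: 'Confluence',
  4: 'Group', 5: 'Version', 6: 'Project', 7: 'Status',
};

// Default types from `defaultType.id`, used only when type=0.
const DEFAULT_TYPES: Record<number, string> = {
  0: 'Text', 1: 'Integer', 2: 'Boolean', 3: 'Double',
  4: 'Date', 5: 'Time', 6: 'DateTime', 7: 'URL',
  8: 'Email', 9: 'Textarea', 10: 'Select', 11: 'IP Address',
};

/** Resolve the concrete type name: type=0 delegates to defaultType. */
function describeAttributeType(type: number, defaultTypeId?: number): string {
  if (type === 0) {
    return defaultTypeId !== undefined
      ? DEFAULT_TYPES[defaultTypeId] ?? 'Unknown default type'
      : 'Default';
  }
  return MAIN_TYPES[type] ?? 'Unknown type';
}
```

For example, an attribute with `type: 0, defaultType: { id: 10 }` resolves to `Select`, while `type: 1` is an `Object` reference.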
## Implementation Requirements
### 1. Sync Flow
Implement the following synchronization flow:
```
┌─────────────────────────────────────────────────────────────┐
│ Schema Sync Process │
├─────────────────────────────────────────────────────────────┤
│ │
│ 1. GET /objectschema/list │
│ └── Store/Update all schemas in local DB │
│ │
│ 2. For each schema: │
│ ├── GET /objectschema/:id │
│ │ └── Update schema details │
│ │ │
│ └── GET /objectschema/:id/objecttypes/flat │
│ └── Store/Update all object types │
│ │
│ 3. For each object type: │
│ ├── GET /objecttype/:id (optional, for latest details) │
│ │ │
│ └── GET /objecttype/:id/attributes │
│ └── Store/Update all attributes │
│ │
│ 4. Clean up orphaned records (deleted in Jira) │
│ │
└─────────────────────────────────────────────────────────────┘
```
### 2. Database Operations
For each entity type, implement:
- **Upsert logic**: Insert new records, update existing ones based on Jira ID
- **Soft delete or cleanup**: Handle items that exist locally but not in Jira anymore
- **Relationship mapping**: Maintain foreign key relationships (schema → object types → attributes)
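In PostgreSQL, the upsert can use `INSERT ... ON CONFLICT`; a sketch for object types — the local column names are assumptions, adapt them to your actual schema:

```sql
-- Upsert an object type keyed by its Jira ID.
INSERT INTO object_types (id, name, parent_object_type_id, object_schema_id, updated)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT (id) DO UPDATE SET
  name = EXCLUDED.name,
  parent_object_type_id = EXCLUDED.parent_object_type_id,
  object_schema_id = EXCLUDED.object_schema_id,
  updated = EXCLUDED.updated;
```

The same pattern applies to schemas and attributes; the conflict target is always the Jira-side ID, which keeps repeated syncs idempotent.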
### 3. Rate Limiting
Implement rate limiting to avoid overloading the Jira server:
- Add 100-200ms delay between API requests
- Implement exponential backoff on 429 (Too Many Requests) responses
- Maximum 3-5 concurrent requests if using parallel processing
```typescript
// Example rate limiting implementation
const delay = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));
async function fetchWithRateLimit<T>(url: string): Promise<T> {
await delay(150); // 150ms between requests
const response = await fetch(url, { headers: getAuthHeaders() });
if (response.status === 429) {
const retryAfter = parseInt(response.headers.get('Retry-After') || '5', 10);
await delay(retryAfter * 1000);
return fetchWithRateLimit(url);
}
if (!response.ok) {
throw new Error(`Request failed: ${response.status} for ${url}`);
}
return response.json();
}
```
### 4. Error Handling
Handle these scenarios:
- **401 Unauthorized**: Invalid credentials
- **403 Forbidden**: Insufficient permissions
- **404 Not Found**: Schema/Type deleted during sync
- **429 Too Many Requests**: Rate limited (implement backoff)
- **5xx Server Errors**: Retry with exponential backoff
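For the 5xx case, a retry helper with exponential backoff might look like this (delay values are illustrative, tune them for your environment):

```typescript
// Sketch: retry a transient operation with exponential backoff.
// Delays grow as baseDelayMs * 2^attempt (e.g. 500ms, 1s, 2s).
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

Wrap each API call in `retryWithBackoff` so a brief Jira outage does not abort the whole sync; permanent errors (401/403) should not be retried and are better thrown immediately from `fn`.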
### 5. Progress Tracking
Implement progress reporting:
- Total schemas to process
- Current schema being processed
- Total object types to process
- Current object type being processed
- Estimated time remaining (optional)
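The optional time estimate can be derived from the counters above; a minimal sketch assuming object types take roughly uniform time to process:

```typescript
// Estimate the completion time from elapsed time and progress so far.
function estimateCompletion(
  startedAt: Date,
  objectTypesCompleted: number,
  objectTypesTotal: number,
  now: Date = new Date(),
): Date | undefined {
  if (objectTypesCompleted === 0 || objectTypesTotal === 0) return undefined;
  const elapsedMs = now.getTime() - startedAt.getTime();
  const msPerType = elapsedMs / objectTypesCompleted;
  const remainingMs = msPerType * (objectTypesTotal - objectTypesCompleted);
  return new Date(now.getTime() + remainingMs);
}
```

Since object types vary in size, the estimate is rough; weighting by `objectCount` per type would be more accurate if that data is already synced.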
---
## Code Structure Suggestions
### Service/Repository Pattern
```
src/
├── services/
│ └── jira-assets/
│ ├── JiraAssetsApiClient.ts # HTTP client with auth & rate limiting
│ ├── SchemaSyncService.ts # Main sync orchestration
│ ├── ObjectTypeSyncService.ts # Object type sync logic
│ └── AttributeSyncService.ts # Attribute sync logic
├── repositories/
│ ├── SchemaRepository.ts # Schema DB operations
│ ├── ObjectTypeRepository.ts # Object type DB operations
│ └── AttributeRepository.ts # Attribute DB operations
└── models/
├── Schema.ts # Schema entity/model
├── ObjectType.ts # Object type entity/model
└── ObjectTypeAttribute.ts # Attribute entity/model
```
### Sync Service Interface
```typescript
interface SchemaSyncService {
/**
* Sync all schemas and their complete structure
* @returns Summary of sync operation
*/
syncAll(): Promise<SyncResult>;
/**
* Sync a single schema by ID
* @param schemaId - Jira schema ID
*/
syncSchema(schemaId: number): Promise<SyncResult>;
/**
* Get sync status/progress
*/
getProgress(): SyncProgress;
}
interface SyncResult {
success: boolean;
schemasProcessed: number;
objectTypesProcessed: number;
attributesProcessed: number;
errors: SyncError[];
duration: number; // milliseconds
}
interface SyncProgress {
status: 'idle' | 'running' | 'completed' | 'failed';
currentSchema?: string;
currentObjectType?: string;
schemasTotal: number;
schemasCompleted: number;
objectTypesTotal: number;
objectTypesCompleted: number;
startedAt?: Date;
estimatedCompletion?: Date;
}
```
---
## Validation Checklist
After implementation, verify:
- [ ] All schemas are fetched from `/objectschema/list`
- [ ] Schema details are updated from `/objectschema/:id`
- [ ] All object types are fetched for each schema from `/objectschema/:id/objecttypes/flat`
- [ ] Object type hierarchy (parentObjectTypeId) is preserved
- [ ] All attributes are fetched for each object type from `/objecttype/:id/attributes`
- [ ] Attribute types are correctly mapped (type + defaultType)
- [ ] Reference attributes store referenceObjectTypeId and referenceType
- [ ] Status attributes store typeValueMulti (allowed status IDs)
- [ ] Rate limiting prevents 429 errors
- [ ] Error handling covers all failure scenarios
- [ ] Sync can be resumed after failure
- [ ] Orphaned local records are handled (deleted in Jira)
- [ ] Foreign key relationships are maintained
- [ ] Timestamps (created, updated) are stored correctly
---
## Testing Scenarios
1. **Initial sync**: Empty local database, full sync from Jira
2. **Incremental sync**: Existing data, detect changes
3. **Schema added**: New schema created in Jira
4. **Schema deleted**: Schema removed from Jira
5. **Object type added**: New type in existing schema
6. **Object type moved**: Parent changed in hierarchy
7. **Attribute added/modified/removed**: Changes to type attributes
8. **Large schema**: Schema with 50+ object types, 500+ attributes
9. **Network failure**: Handle timeouts and retries
10. **Rate limiting**: Handle 429 responses gracefully
---
## Notes
- The `/objectschema/:id/objecttypes/flat` endpoint returns ALL object types in one call, which is more efficient than fetching hierarchically
- The `label` field on attributes indicates which attribute is used as the display name for objects
- System attributes (system=true) are: Name, Key, Created, Updated - these exist on all object types
- The `iql` field on reference attributes contains the filter query for selecting valid reference targets
- The `options` field on Select type attributes (type=0, defaultType.id=10) contains comma-separated options


@@ -0,0 +1,177 @@
# Local Development Setup
## PostgreSQL Only (Recommended for Local Development)
For local development you only need PostgreSQL. The backend and frontend run locally on your MacBook.
### Start PostgreSQL
```bash
# Start only PostgreSQL
docker-compose -f docker-compose.dev.yml up -d
# Check status
docker-compose -f docker-compose.dev.yml ps
# Check logs
docker-compose -f docker-compose.dev.yml logs -f postgres
```
### Connection String
```env
DATABASE_TYPE=postgres
DATABASE_URL=postgresql://cmdb:cmdb-dev@localhost:5432/cmdb_cache
```
Or with individual variables:
```env
DATABASE_TYPE=postgres
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_NAME=cmdb_cache
DATABASE_USER=cmdb
DATABASE_PASSWORD=cmdb-dev
```
### Stop PostgreSQL
```bash
docker-compose -f docker-compose.dev.yml down
```
### Reset Database
```bash
# Stop and remove the volume
docker-compose -f docker-compose.dev.yml down -v
# Start again
docker-compose -f docker-compose.dev.yml up -d
```
### Connect to Database
```bash
# Via psql
docker-compose -f docker-compose.dev.yml exec postgres psql -U cmdb -d cmdb_cache
# Or directly
psql postgresql://cmdb:cmdb-dev@localhost:5432/cmdb_cache
```
### Useful Commands
```bash
# List databases
docker-compose -f docker-compose.dev.yml exec postgres psql -U cmdb -c "\l"
# List tables
docker-compose -f docker-compose.dev.yml exec postgres psql -U cmdb -d cmdb_cache -c "\dt"
# Check database size
docker-compose -f docker-compose.dev.yml exec postgres psql -U cmdb -d cmdb_cache -c "
SELECT pg_size_pretty(pg_database_size('cmdb_cache')) as size;
"
# Count objects
docker-compose -f docker-compose.dev.yml exec postgres psql -U cmdb -d cmdb_cache -c "
SELECT object_type_name, COUNT(*)
FROM objects
GROUP BY object_type_name;
"
```
## Full Stack (Alternative)
If you want to run the whole stack in Docker:
```bash
docker-compose up -d
```
This starts:
- PostgreSQL
- Backend (in Docker)
- Frontend (in Docker)
## Backend Development
With only PostgreSQL running:
```bash
# In backend directory
cd backend
npm install
npm run dev
```
The backend runs at `http://localhost:3001`
## Frontend Development
```bash
# In frontend directory
cd frontend
npm install
npm run dev
```
The frontend runs at `http://localhost:5173`
## Environment Variables
Create a `.env` file in the root:
```env
# Database (for the backend)
DATABASE_TYPE=postgres
DATABASE_URL=postgresql://cmdb:cmdb-dev@localhost:5432/cmdb_cache
# Jira (optional)
JIRA_HOST=https://jira.zuyderland.nl
JIRA_PAT=your_token
JIRA_SCHEMA_ID=your_schema_id
# AI (optional)
ANTHROPIC_API_KEY=your_key
```
## Troubleshooting
### Port Already in Use
If port 5432 is already in use:
```yaml
# In docker-compose.dev.yml, change:
ports:
- "5433:5432" # Use 5433 locally
```
Then update your `.env`:
```env
DATABASE_PORT=5433
```
### Connection Refused
```bash
# Check whether the container is running
docker ps | grep postgres
# Check logs
docker-compose -f docker-compose.dev.yml logs postgres
# Test connection
docker-compose -f docker-compose.dev.yml exec postgres pg_isready -U cmdb
```
### Database Not Found
The database is created automatically on the backend's first start. Or create it manually:
```bash
docker-compose -f docker-compose.dev.yml exec postgres psql -U cmdb -c "CREATE DATABASE cmdb_cache;"
```


@@ -0,0 +1,185 @@
# PostgreSQL Database Reset (Lokaal)
## Quick Reset
To fully reset the PostgreSQL database (green-field simulation):
```bash
# Option 1: Use the reset script
./scripts/reset-postgres.sh
```
## Manual Reset
If you prefer to do it manually:
### Step 1: Stop the Containers
```bash
docker-compose down
```
### Step 2: Remove the PostgreSQL Volume
```bash
# Check volumes
docker volume ls | grep postgres
# Remove the volume (this deletes ALL data!)
docker volume rm cmdb-insight_postgres_data
```
**Warning:** This deletes all data permanently!
### Step 3: Restart the Containers
```bash
docker-compose up -d postgres
```
### Step 4: Wait Until PostgreSQL Is Ready
```bash
# Check status
docker-compose ps
# Test connection
docker-compose exec postgres pg_isready -U cmdb
```
### Step 5: Create the Databases (Optional)
The application creates the databases automatically, but you can also create them manually:
```bash
docker-compose exec postgres psql -U cmdb -c "CREATE DATABASE cmdb_cache;"
docker-compose exec postgres psql -U cmdb -c "CREATE DATABASE cmdb_classifications;"
```
## Verification
After the reset, check that everything works:
```bash
# Connect to database
docker-compose exec postgres psql -U cmdb -d cmdb_cache
# Check tables (should be empty)
\dt
# Exit
\q
```
## What Happens During a Reset?
1. **All data is deleted** - all tables, objects, and relations
2. **The volume is removed** - the PostgreSQL data directory is wiped
3. **Fresh database** - the next start uses a clean database
4. **The schema is created automatically** - on the backend's first start
## After the Reset
1. **Start the backend:**
```bash
docker-compose up -d backend
```
2. **Check logs:**
```bash
docker-compose logs -f backend
```
You should see:
- "NormalizedCacheStore: Database schema initialized"
- "SchemaDiscovery: Schema discovery complete"
3. **Generate the schema (if needed):**
```bash
docker-compose exec backend npm run generate-schema
```
4. **Start a sync:**
- Via the UI: Settings → Cache Management → Full Sync
- Or via the API: `POST /api/cache/sync`
## Troubleshooting
### Volume Not Found
```bash
# Check all volumes
docker volume ls
# Look for the postgres volume
docker volume ls | grep postgres
```
### Database Already Exists
If you get an error saying the database already exists:
```bash
# Drop and recreate
docker-compose exec postgres psql -U cmdb -c "DROP DATABASE IF EXISTS cmdb_cache;"
docker-compose exec postgres psql -U cmdb -c "CREATE DATABASE cmdb_cache;"
```
### Connection Issues
```bash
# Check whether PostgreSQL is running
docker-compose ps postgres
# Check logs
docker-compose logs postgres
# Test connection
docker-compose exec postgres pg_isready -U cmdb
```
## Environment Variables
Make sure your `.env` file is correct:
```env
DATABASE_TYPE=postgres
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_NAME=cmdb_cache
DATABASE_USER=cmdb
DATABASE_PASSWORD=cmdb-dev
```
Or use a connection string:
```env
DATABASE_TYPE=postgres
DATABASE_URL=postgresql://cmdb:cmdb-dev@postgres:5432/cmdb_cache
```
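The two configuration styles above are equivalent. A minimal sketch of how a backend might resolve them, preferring `DATABASE_URL` when set (illustrative logic only — the actual backend may resolve these differently):

```typescript
// Prefer DATABASE_URL; otherwise assemble one from the discrete variables.
// Hypothetical helper, not the application's real configuration code.
function resolveDatabaseUrl(env: Record<string, string | undefined>): string {
  if (env.DATABASE_URL) return env.DATABASE_URL;
  const { DATABASE_USER: u, DATABASE_PASSWORD: p, DATABASE_HOST: h,
          DATABASE_PORT: port = "5432", DATABASE_NAME: db } = env;
  return `postgresql://${u}:${p}@${h}:${port}/${db}`;
}

const url = resolveDatabaseUrl({
  DATABASE_USER: "cmdb", DATABASE_PASSWORD: "cmdb-dev",
  DATABASE_HOST: "postgres", DATABASE_PORT: "5432", DATABASE_NAME: "cmdb_cache",
});
// url: postgresql://cmdb:cmdb-dev@postgres:5432/cmdb_cache
```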
## Quick Commands
```bash
# Everything in one go
docker-compose down && \
docker volume rm cmdb-insight_postgres_data && \
docker-compose up -d postgres && \
sleep 5 && \
docker-compose exec postgres pg_isready -U cmdb
# Check database size (after sync)
docker-compose exec postgres psql -U cmdb -d cmdb_cache -c "
SELECT
pg_size_pretty(pg_database_size('cmdb_cache')) as size;
"
# List all tables
docker-compose exec postgres psql -U cmdb -d cmdb_cache -c "\dt"
# Count objects
docker-compose exec postgres psql -U cmdb -d cmdb_cache -c "
SELECT object_type_name, COUNT(*)
FROM objects
GROUP BY object_type_name;
"
```
@@ -148,7 +148,7 @@ az acr credential show --name zdlas
3. **Selecteer Repository**
- Kies **"Azure Repos Git"** (of waar je code staat)
- Selecteer je repository: **"Zuyderland CMDB GUI"** (of jouw repo naam)
- Selecteer je repository: **"CMDB Insight"** (of jouw repo naam)
4. **Kies YAML File**
- Kies **"Existing Azure Pipelines YAML file"**
@@ -189,11 +189,11 @@ De pipeline zal:
2. **Bekijk Repositories**
- Klik op **"Repositories"** (links in het menu)
- Je zou moeten zien:
- `zuyderland-cmdb-gui/backend`
- `zuyderland-cmdb-gui/frontend`
- `cmdb-insight/backend`
- `cmdb-insight/frontend`
3. **Bekijk Tags**
- Klik op een repository (bijv. `zuyderland-cmdb-gui/backend`)
- Klik op een repository (bijv. `cmdb-insight/backend`)
- Je zou tags moeten zien:
- `latest`
- `123` (of build ID nummer)
@@ -207,10 +207,10 @@ De pipeline zal:
az acr repository list --name zuyderlandcmdbacr
# Lijst tags voor backend
az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/backend --orderby time_desc
az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/backend --orderby time_desc
# Lijst tags voor frontend
az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmdb-gui/frontend --orderby time_desc
az acr repository show-tags --name zuyderlandcmdbacr --repository cmdb-insight/frontend --orderby time_desc
```
### In Azure DevOps:
@@ -224,8 +224,8 @@ az acr repository show-tags --name zuyderlandcmdbacr --repository zuyderland-cmd
- Bekijk de logs per stap
- Bij success zie je:
```
Backend Image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:123
Frontend Image: zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:123
Backend Image: zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:123
Frontend Image: zuyderlandcmdbacr.azurecr.io/cmdb-insight/frontend:123
```
---
@@ -281,8 +281,8 @@ Als alles goed is gegaan, heb je nu:
- ✅ Docker images gebouwd en gepusht naar ACR
**Je images zijn nu beschikbaar op:**
- Backend: `zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/backend:latest`
- Frontend: `zuyderlandcmdbacr.azurecr.io/zuyderland-cmdb-gui/frontend:latest`
- Backend: `zuyderlandcmdbacr.azurecr.io/cmdb-insight/backend:latest`
- Frontend: `zuyderlandcmdbacr.azurecr.io/cmdb-insight/frontend:latest`
---
File diff suppressed because it is too large
@@ -1,6 +1,6 @@
# Productie Deployment Guide
Deze guide beschrijft hoe je de Zuyderland CMDB GUI applicatie veilig en betrouwbaar in productie kunt draaien.
Deze guide beschrijft hoe je de CMDB Insight applicatie veilig en betrouwbaar in productie kunt draaien.
## 📋 Inhoudsopgave
@@ -39,14 +39,14 @@ az webapp create \
--name cmdb-backend-prod \
--resource-group rg-cmdb-gui \
--plan plan-cmdb-gui \
--deployment-container-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest
--deployment-container-image-name zdlas.azurecr.io/cmdb-insight/backend:latest
# Frontend Web App
az webapp create \
--name cmdb-frontend-prod \
--resource-group rg-cmdb-gui \
--plan plan-cmdb-gui \
--deployment-container-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest
--deployment-container-image-name zdlas.azurecr.io/cmdb-insight/frontend:latest
```
### Stap 3: Configureer ACR Authentication
@@ -70,7 +70,7 @@ az acr update \
az webapp config container set \
--name cmdb-backend-prod \
--resource-group rg-cmdb-gui \
--docker-custom-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/backend:latest \
--docker-custom-image-name zdlas.azurecr.io/cmdb-insight/backend:latest \
--docker-registry-server-url https://zdlas.azurecr.io \
--docker-registry-server-user $(az acr credential show --name zdlas --query username -o tsv) \
--docker-registry-server-password $(az acr credential show --name zdlas --query passwords[0].value -o tsv)
@@ -78,7 +78,7 @@ az webapp config container set \
az webapp config container set \
--name cmdb-frontend-prod \
--resource-group rg-cmdb-gui \
--docker-custom-image-name zdlas.azurecr.io/zuyderland-cmdb-gui/frontend:latest \
--docker-custom-image-name zdlas.azurecr.io/cmdb-insight/frontend:latest \
--docker-registry-server-url https://zdlas.azurecr.io \
--docker-registry-server-user $(az acr credential show --name zdlas --query username -o tsv) \
--docker-registry-server-password $(az acr credential show --name zdlas --query passwords[0].value -o tsv)
@@ -190,7 +190,7 @@ docker login zdlas.azurecr.io -u <username> -p <password>
```bash
# Clone repository
git clone <your-repo-url>
cd zuyderland-cmdb-gui
cd cmdb-insight
# Maak .env.production aan
nano .env.production
@@ -0,0 +1,283 @@
# Refactor Phase 2B + 3: Implementation Status
**Date:** 2025-01-XX
**Status:** ✅ Phase 2B Complete - New Architecture Implemented
**Next:** Phase 3 - Migration & Cleanup
## Summary
The new refactored architecture is fully implemented and wired behind the feature flag `USE_V2_API=true`. All new services, repositories, and API controllers are in place.
---
## ✅ Completed Components
### Infrastructure Layer (`/infrastructure`)
1. **`infrastructure/jira/JiraAssetsClient.ts`** ✅
- Pure HTTP API client (no business logic)
- Methods: `getObject()`, `searchObjects()`, `updateObject()`, `getSchemas()`, `getObjectTypes()`, `getAttributes()`
- Token management (service account for reads, user PAT for writes)
- Returns `ObjectEntry` from domain types
### Domain Layer (`/domain`)
1. **`domain/jiraAssetsPayload.ts`** ✅ (Phase 2A)
- Complete API payload contract
- Type guards: `isReferenceValue()`, `isSimpleValue()`, `hasAttributes()`
2. **`domain/syncPolicy.ts`** ✅
- `SyncPolicy` enum (ENABLED, REFERENCE_ONLY, SKIP)
- Policy resolution logic
### Repository Layer (`/repositories`)
1. **`repositories/SchemaRepository.ts`** ✅
- Schema CRUD: `upsertSchema()`, `getAllSchemas()`
- Object type CRUD: `upsertObjectType()`, `getEnabledObjectTypes()`, `getObjectTypeByJiraId()`
- Attribute CRUD: `upsertAttribute()`, `getAttributesForType()`, `getAttributeByFieldName()`
2. **`repositories/ObjectCacheRepository.ts`** ✅
- Object CRUD: `upsertObject()`, `getObject()`, `getObjectByKey()`, `deleteObject()`
- Attribute value CRUD: `upsertAttributeValue()`, `batchUpsertAttributeValues()`, `getAttributeValues()`, `deleteAttributeValues()`
- Relations: `upsertRelation()`, `deleteRelations()`
- Queries: `getObjectsByType()`, `countObjectsByType()`
### Service Layer (`/services`)
1. **`services/PayloadProcessor.ts`** ✅
- **Recursive reference processing** with visited-set cycle detection
- Processes `ObjectEntry` and `ReferencedObject` recursively (level2, level3, etc.)
- **CRITICAL**: Only replaces attributes if `attributes[]` array is present
- Extracts relations from references
- Normalizes to EAV format
2. **`services/SchemaSyncService.ts`** ✅
- Syncs schemas from Jira API: `syncAllSchemas()`
- Discovers and stores object types and attributes
- Returns enabled types for sync orchestration
3. **`services/ObjectSyncService.ts`** ✅
- Full sync: `syncObjectType()` - syncs all objects of an enabled type
- Incremental sync: `syncIncremental()` - syncs objects updated since timestamp
- Single object sync: `syncSingleObject()` - for refresh operations
- Recursive processing via `PayloadProcessor`
- Respects `SyncPolicy` (ENABLED vs REFERENCE_ONLY)
4. **`services/QueryService.ts`** ✅
- Universal query builder (DB → TypeScript)
- `getObject()` - reconstruct single object
- `getObjects()` - list objects of type
- `countObjects()` - count by type
- `searchByLabel()` - search by label
5. **`services/RefreshService.ts`** ✅
- Force-refresh-on-read with deduplication
- Locking mechanism prevents duplicate refresh operations
- Timeout protection (30s)
6. **`services/WriteThroughService.ts`** ✅
- Write-through updates: Jira API → DB cache
- Builds Jira update payload from field updates
- Uses same normalization logic as sync
7. **`services/ServiceFactory.ts`** ✅
- Singleton factory for all services
- Initializes all dependencies
- Single entry point: `getServices()`
### API Layer (`/api`)
1. **`api/controllers/ObjectsController.ts`** ✅
- `GET /api/v2/objects/:type` - List objects
- `GET /api/v2/objects/:type/:id?refresh=true` - Get object (with force refresh)
- `PUT /api/v2/objects/:type/:id` - Update object
2. **`api/controllers/SyncController.ts`** ✅
- `POST /api/v2/sync/schemas` - Sync all schemas
- `POST /api/v2/sync/objects` - Sync all enabled types
- `POST /api/v2/sync/objects/:typeName` - Sync single type
3. **`api/routes/v2.ts`** ✅
- V2 routes mounted at `/api/v2`
- Feature flag: `USE_V2_API=true` enables routes
- All routes require authentication
### Integration (`/backend/src/index.ts`)
✅ V2 routes wired with feature flag
✅ Token management for new `JiraAssetsClient`
✅ Backward compatible with old services
---
## 🔧 Key Features Implemented
### 1. Recursive Reference Processing ✅
- **Cycle detection**: Visited set using `objectId:objectKey` keys
- **Recursive expansion**: Processes `referencedObject.attributes[]` (level2, level3, etc.)
- **Preserves shallow objects**: Doesn't wipe attributes if `attributes[]` absent
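The visited-set mechanism can be sketched as follows (simplified shapes — the real `PayloadProcessor` works on full `ObjectEntry`/`ReferencedObject` payloads):

```typescript
// Simplified stand-in for a referenced object with optional nested references.
interface RefObject {
  objectId: string;
  objectKey: string;
  attributes?: { refs?: RefObject[] }[];
}

// Walk references recursively; the visited set keyed on objectId:objectKey
// guarantees each object is processed once, even in cyclic graphs.
function processObject(obj: RefObject, visited = new Set<string>()): string[] {
  const key = `${obj.objectId}:${obj.objectKey}`;
  if (visited.has(key)) return []; // cycle detected: already processed
  visited.add(key);
  const processed = [key];
  // Recurse only when attributes[] is present (shallow objects stay untouched)
  for (const attr of obj.attributes ?? []) {
    for (const ref of attr.refs ?? []) {
      processed.push(...processObject(ref, visited));
    }
  }
  return processed;
}

// A → B → A cycle terminates after exactly two visits
const a: RefObject = { objectId: "1", objectKey: "APP-1", attributes: [] };
const b: RefObject = { objectId: "2", objectKey: "SRV-2", attributes: [{ refs: [a] }] };
a.attributes = [{ refs: [b] }];
const order = processObject(a);
```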
### 2. Sync Policy Enforcement ✅
- **ENABLED**: Full sync with all attributes
- **REFERENCE_ONLY**: Cache minimal metadata for references
- **SKIP**: Unknown types skipped
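A minimal sketch of how such a policy could be resolved (illustrative names — the actual `syncPolicy.ts` resolution logic may differ):

```typescript
// Illustrative SyncPolicy enum mirroring the three cases above.
enum SyncPolicy {
  ENABLED = "ENABLED",
  REFERENCE_ONLY = "REFERENCE_ONLY",
  SKIP = "SKIP",
}

// enabledTypes: configured for full sync; knownTypes: discovered in the schema.
function resolvePolicy(
  typeName: string,
  enabledTypes: Set<string>,
  knownTypes: Set<string>,
): SyncPolicy {
  if (enabledTypes.has(typeName)) return SyncPolicy.ENABLED;
  if (knownTypes.has(typeName)) return SyncPolicy.REFERENCE_ONLY;
  return SyncPolicy.SKIP; // unknown types are skipped
}

const enabled = new Set(["ApplicationComponent"]);
const known = new Set(["ApplicationComponent", "Server"]);
```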
### 3. Attribute Replacement Logic ✅
**CRITICAL RULE**: Only replaces attributes if `attributes[]` array is present in API response.
```typescript
if (shouldCacheAttributes) {
// attributes[] present - full replace
await deleteAttributeValues(objectId);
await batchUpsertAttributeValues(...);
}
// If attributes[] absent - keep existing attributes
```
### 4. Write-Through Updates ✅
1. Build Jira update payload
2. Send to Jira Assets API
3. Fetch fresh data
4. Update DB cache using same normalization
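The ordering matters: Jira stays the source of truth, and the cache is only updated from freshly fetched data. A runnable sketch of that order of operations with in-memory stand-ins (hypothetical names, not the real `WriteThroughService` API):

```typescript
type Fields = Record<string, string>;

// Stand-in for the Jira Assets client: an in-memory "remote" store.
const jira = {
  store: { name: "Old" } as Fields,
  update(payload: Fields) { Object.assign(this.store, payload); },
  fetch(): Fields { return { ...this.store }; },
};
const dbCache = new Map<string, Fields>();

function writeThrough(objectId: string, updates: Fields): Fields {
  jira.update(updates);         // 1+2. build the payload and send it to Jira first
  const fresh = jira.fetch();   // 3. fetch fresh data back from Jira
  dbCache.set(objectId, fresh); // 4. update the DB cache from that same source
  return fresh;
}

const result = writeThrough("42", { name: "New" });
```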
### 5. Force Refresh with Deduping ✅
- Lock mechanism prevents duplicate refreshes
- Timeout protection (30s)
- Concurrent reads allowed
---
## 📁 File Structure
```
backend/src/
├── domain/
│ ├── jiraAssetsPayload.ts ✅ Phase 2A
│ └── syncPolicy.ts ✅ New
├── infrastructure/
│ └── jira/
│ └── JiraAssetsClient.ts ✅ New (pure API)
├── repositories/
│ ├── SchemaRepository.ts ✅ New
│ └── ObjectCacheRepository.ts ✅ New
├── services/
│ ├── PayloadProcessor.ts ✅ New (recursive)
│ ├── SchemaSyncService.ts ✅ New
│ ├── ObjectSyncService.ts ✅ New
│ ├── QueryService.ts ✅ New
│ ├── RefreshService.ts ✅ New
│ ├── WriteThroughService.ts ✅ New
│ └── ServiceFactory.ts ✅ New
└── api/
├── controllers/
│ ├── ObjectsController.ts ✅ New
│ └── SyncController.ts ✅ New
└── routes/
└── v2.ts ✅ New
```
---
## 🚀 Usage (Feature Flag)
### Enable V2 API
```bash
# .env
USE_V2_API=true
```
### New Endpoints
```
GET /api/v2/objects/:type # List objects
GET /api/v2/objects/:type/:id?refresh=true # Get object (with refresh)
PUT /api/v2/objects/:type/:id # Update object
POST /api/v2/sync/schemas # Sync all schemas
POST /api/v2/sync/objects # Sync all enabled types
POST /api/v2/sync/objects/:typeName # Sync single type
```
---
## ✅ API Payload Contract Compliance
All services correctly handle:
- ✅ `objectEntries[]` → `ObjectEntry[]`
- ✅ `ObjectEntry.attributes[]` → `ObjectAttribute[]` (optional)
- ✅ `ObjectAttribute.objectAttributeValues[]` → `ObjectAttributeValue` union
- ✅ `ReferencedObject.attributes[]` → recursive (level2+)
- ✅ Cycle detection with visited sets
- ✅ **CRITICAL**: Don't wipe attributes if `attributes[]` is absent on shallow objects
---
## 🧪 Testing Status
**Compilation**: ✅ New code compiles without errors (pre-existing TypeScript config issues are unrelated)
**Ready for Testing**:
1. Enable `USE_V2_API=true`
2. Test new endpoints
3. Verify recursive reference processing
4. Verify attribute replacement logic
5. Verify write-through updates
---
## 📋 Next Steps (Phase 3)
### Step 1: Test V2 API ✅ (Ready)
- [ ] Enable feature flag
- [ ] Test schema sync endpoint
- [ ] Test object sync endpoint
- [ ] Test object read endpoint
- [ ] Test object write endpoint
- [ ] Verify recursive references processed
- [ ] Verify attribute replacement logic
### Step 2: Migrate Existing Endpoints
After V2 API is validated:
- [ ] Update `routes/objects.ts` to use new services
- [ ] Update `routes/cache.ts` to use new services
- [ ] Update `routes/schema.ts` to use new services
### Step 3: Delete Old Code
After migration complete:
- [ ] Delete `services/jiraAssets.ts` (merge remaining business logic first)
- [ ] Delete `services/jiraAssetsClient.ts` (replaced by infrastructure client)
- [ ] Delete `services/cacheStore.old.ts`
- [ ] Delete `services/normalizedCacheStore.ts` (replace with repositories)
- [ ] Delete `services/queryBuilder.ts` (functionality in QueryService)
- [ ] Delete `services/schemaDiscoveryService.ts` (replaced by SchemaSyncService)
- [ ] Delete `services/schemaCacheService.ts` (merged into SchemaRepository)
- [ ] Delete `services/schemaConfigurationService.ts` (functionality moved to SchemaRepository)
- [ ] Delete `services/schemaMappingService.ts` (deprecated)
- [ ] Delete `services/syncEngine.ts` (replaced by ObjectSyncService)
- [ ] Delete `services/cmdbService.ts` (functionality split into QueryService + WriteThroughService + RefreshService)
---
## ⚠️ Important Notes
1. **No Functional Changes Yet**: Old code still runs in parallel
2. **Feature Flag Required**: V2 API only active when `USE_V2_API=true`
3. **Token Management**: New client receives tokens from middleware (same as old)
4. **Database Schema**: Uses existing normalized EAV schema (no migration needed)
---
**End of Phase 2B + 3 Implementation Status**
@@ -0,0 +1,116 @@
# Schema Discovery and Data Loading Flow
## Overview
This document describes the logical flow for setting up and using CMDB Insight, from schema discovery to data viewing.
## Implementation Verification
**Data Structure Alignment**: The data structure from Jira Assets REST API **matches** the discovered schema structure. There are **no fallbacks or guessing** - the system uses the discovered schema exclusively.
### How It Works
1. **Schema Discovery** (`/api/schema-configuration/discover`):
- Discovers all schemas and object types from Jira Assets API
- Stores them in the `schemas` and `object_types` database tables
2. **Attribute Discovery** (automatic after schema discovery):
- `schemaDiscoveryService.discoverAndStoreSchema()` fetches attributes for each object type
- Stores attributes in the `attributes` table with:
- `jira_attr_id` - Jira's attribute ID
- `field_name` - Our internal field name (camelCase)
- `attr_type` - Data type (text, reference, integer, etc.)
- `is_multiple` - Whether it's a multi-value field
3. **Data Loading** (when syncing):
- `jiraAssetsClient.parseObject()` uses the discovered schema from the database
- Maps Jira API attributes to our field names using `jira_attr_id` matching
- **No fallbacks** - if a type is not discovered, the object is skipped (returns null)
### Code Flow
```
Schema Discovery → Database (attributes table) → schemaCacheService.getSchema()
→ OBJECT_TYPES_CACHE → jiraAssetsClient.parseObject() → CMDBObject
```
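A sketch of this schema-driven mapping (simplified shapes — `parseObject` here is illustrative, not the actual client method signature):

```typescript
// Discovered-schema attribute definition and a raw Jira attribute value.
interface AttrDef { jiraAttrId: string; fieldName: string; }
interface JiraAttr { objectTypeAttributeId: string; values: string[]; }

function parseObject(
  typeName: string,
  attrs: JiraAttr[],
  schema: Map<string, AttrDef[]>, // discovered schema, keyed by type name
): Record<string, string> | null {
  const defs = schema.get(typeName);
  if (!defs) return null; // type not discovered → object is skipped, no fallback
  const obj: Record<string, string> = {};
  for (const a of attrs) {
    // Map Jira attributes to internal field names via jira_attr_id matching
    const def = defs.find(d => d.jiraAttrId === a.objectTypeAttributeId);
    if (def) obj[def.fieldName] = a.values[0];
  }
  return obj;
}

const schema = new Map([
  ["ApplicationComponent", [{ jiraAttrId: "134", fieldName: "name" }]],
]);
const parsed = parseObject(
  "ApplicationComponent",
  [{ objectTypeAttributeId: "134", values: ["CRM"] }],
  schema,
);
const unknown = parseObject("Mystery", [], schema);
```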
## User Flow
### Step 1: Schema Discovery & Configuration
**Page**: `/settings/schema-configuration`
1. **Discover Schemas**: Click "Ontdek Schemas & Object Types"
- Fetches all schemas and object types from Jira Assets
- Stores them in the database
2. **Configure Object Types**: Enable/disable which object types to sync
- By default, all object types are disabled
- Enable the object types you want to sync
3. **Manual Sync**: Click "Nu synchroniseren" (in CacheStatusIndicator)
- Loads data from Jira Assets REST API
- Uses the discovered schema structure to map attributes
- Stores objects in the normalized database
### Step 2: View Data Structure
**Page**: `/settings/data-model`
- View the discovered schema structure
- See object types, attributes, and relationships
- Verify that the structure matches your Jira Assets configuration
### Step 3: Validate Data
**Page**: `/settings/data-validation`
- Validate data integrity
- Check for missing or invalid data
- Debug data loading issues
### Step 4: Use Application Components
**Pages**: `/application/overview`, `/app-components`, etc.
- View and manage application components
- All data uses the discovered schema structure
## Navigation Structure
The navigation menu follows this logical flow:
1. **Setup** - Initial configuration
- Schema Configuratie (discover + configure + sync)
- Datamodel Overzicht (view structure)
2. **Data** - Data management
- Datamodel (view structure)
- Data Validatie (validate data)
3. **Application Component** - Application management
- Dashboard, Overzicht, FTE Calculator
4. **Rapporten** - Reports and analytics
5. **Apps** - Additional tools
- BIA Sync
6. **Instellingen** - Advanced configuration
- Data Completeness Config
- FTE Config
7. **Beheer** - Administration
- Users, Roles, Debug
## Key Points
- ✅ **No guessing**: The system uses the discovered schema exclusively
- ✅ **No fallbacks**: If a type is not discovered, its objects are skipped
- ✅ **Schema-driven**: All data mapping uses the discovered schema structure
- ✅ **Database-driven**: The schema is stored in the database, not hardcoded
## Troubleshooting
If data is not loading correctly:
1. **Check Schema Discovery**: Ensure schemas and object types are discovered
2. **Check Configuration**: Ensure at least one object type is enabled
3. **Check Attributes**: Verify that attributes are discovered (check `/settings/data-model`)
4. **Check Logs**: Look for "Unknown object type" or "Type definition not found" warnings
@@ -1,4 +1,4 @@
# ZiRA Classificatie Tool - Technische Specificatie
# CMDB Insight - Technische Specificatie
## Projectoverzicht
@@ -23,7 +23,7 @@ Ontwikkelen van een interactieve tool voor het classificeren van applicatiecompo
```
┌─────────────────────────────────────────────────────────────────┐
ZiRA Classificatie Tool
CMDB Insight
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌─────────────────┐ │
@@ -870,7 +870,7 @@ interface ReferenceOptions {
## Project Structuur
```
zira-classificatie-tool/
cmdb-insight/
├── package.json
├── .env.example
├── README.md

docs/refactor-plan.md (new file, 801 lines)
@@ -0,0 +1,801 @@
# Refactor Plan - Phase 1: Architecture Analysis
**Created:** 2025-01-XX
**Status:** Phase 1 - Analysis Only (No functional changes)
## Executive Summary
This document provides a comprehensive analysis of the current architecture and a plan for refactoring the CMDB Insight codebase to improve maintainability, reduce duplication, and establish clearer separation of concerns.
**Scope:** This is Phase 1 - analysis and planning only. No code changes will be made in this phase.
---
## Table of Contents
1. [Current Architecture Map](#current-architecture-map)
2. [Pain Points & Duplication](#pain-points--duplication)
3. [Target Architecture](#target-architecture)
4. [Migration Steps](#migration-steps)
5. [Explicit Deletion List](#explicit-deletion-list)
6. [API Payload Contract & Recursion Insights](#api-payload-contract--recursion-insights)
---
## Current Architecture Map
### File/Folder Structure
```
backend/src/
├── services/
│ ├── jiraAssets.ts # High-level Jira Assets service (business logic, ~3454 lines)
│ ├── jiraAssetsClient.ts # Low-level Jira Assets API client (~646 lines)
│ ├── schemaDiscoveryService.ts # Discovers schema from Jira API (~520 lines)
│ ├── schemaCacheService.ts # Caches schema metadata
│ ├── schemaConfigurationService.ts # Manages enabled object types
│ ├── schemaMappingService.ts # Maps object types to schema IDs
│ ├── syncEngine.ts # Background sync service (full/incremental) (~630 lines)
│ ├── normalizedCacheStore.ts # EAV pattern DB store (~1695 lines)
│ ├── cmdbService.ts # Universal schema-driven CMDB service (~531 lines)
│ ├── queryBuilder.ts # Dynamic SQL query builder (~278 lines)
│ ├── cacheStore.old.ts # Legacy cache store (deprecated)
│ └── database/
│ ├── normalized-schema.ts # DB schema definitions (Postgres/SQLite)
│ ├── factory.ts # Database adapter factory
│ ├── interface.ts # Database adapter interface
│ ├── postgresAdapter.ts # PostgreSQL adapter
│ ├── sqliteAdapter.ts # SQLite adapter
│ ├── migrate-to-normalized-schema.ts
│ └── fix-object-types-constraints.ts
├── routes/
│ ├── applications.ts # Application-specific endpoints (~780 lines)
│ ├── objects.ts # Generic object endpoints (~185 lines)
│ ├── cache.ts # Cache/sync endpoints (~165 lines)
│ ├── schema.ts # Schema endpoints (~107 lines)
│ └── schemaConfiguration.ts # Schema configuration endpoints
├── generated/
│ ├── jira-types.ts # Generated TypeScript types (~934 lines)
│ └── jira-schema.ts # Generated schema metadata (~895 lines)
└── scripts/
├── discover-schema.ts # Schema discovery CLI
├── generate-types-from-db.ts # Type generation from DB (~485 lines)
└── generate-schema.ts # Legacy schema generation
```
### Module Responsibilities
#### 1. Jira Assets API Client Calls
**Primary Files:**
- `services/jiraAssetsClient.ts` - Low-level HTTP client
- Methods: `getObject()`, `searchObjects()`, `getAllObjectsOfType()`, `updateObject()`, `parseObject()`
- Handles authentication (service account token for reads, user PAT for writes)
- API detection (Data Center vs Cloud)
- Object parsing from Jira format to CMDB format
- `services/jiraAssets.ts` - High-level business logic wrapper
- Application-specific methods (e.g., `getApplications()`, `updateApplication()`)
- Dashboard data aggregation
- Reference data caching
- Team dashboard calculations
- Legacy API methods
**Dependencies:**
- Uses `schemaCacheService` for type lookups
- Uses `schemaMappingService` for schema ID resolution
#### 2. Schema Discovery/Sync
**Primary Files:**
- `services/schemaDiscoveryService.ts`
- Discovers object types from Jira API (`/objectschema/{id}/objecttypes/flat`)
- Discovers attributes for each object type (`/objecttype/{id}/attributes`)
- Stores schema in database (`object_types`, `attributes` tables)
- Provides lookup methods: `getAttribute()`, `getAttributesForType()`, `getObjectType()`
- `services/schemaCacheService.ts`
- Caches schema from database
- Provides runtime schema access
- `services/schemaConfigurationService.ts`
- Manages enabled/disabled object types
- Schema-to-object-type mapping
- Configuration validation
- `services/schemaMappingService.ts`
- Maps object type names to schema IDs
- Legacy compatibility
**Scripts:**
- `scripts/discover-schema.ts` - CLI tool to trigger schema discovery
- `scripts/generate-types-from-db.ts` - Generates TypeScript types from database
#### 3. Object Sync/Import
**Primary Files:**
- `services/syncEngine.ts`
- `fullSync()` - Syncs all enabled object types
- `incrementalSync()` - Periodic sync of updated objects
- `syncType()` - Sync single object type
- `syncObject()` - Sync single object
- Uses `jiraAssetsClient.getAllObjectsOfType()` for fetching
- Uses `normalizedCacheStore.batchUpsertObjects()` for storage
**Flow:**
1. Fetch objects from Jira via `jiraAssetsClient`
2. Parse objects via `jiraAssetsClient.parseObject()`
3. Store objects via `normalizedCacheStore.batchUpsertObjects()`
4. Extract relations via `normalizedCacheStore.extractAndStoreRelations()`
#### 4. DB Normalization Store (EAV)
**Primary Files:**
- `services/normalizedCacheStore.ts` (~1695 lines)
- **Storage:** `normalizeObject()`, `batchUpsertObjects()`, `upsertObject()`
- **Retrieval:** `getObject()`, `getObjects()`, `reconstructObject()`, `loadAttributeValues()`
- **Relations:** `extractAndStoreRelations()`, `getRelatedObjects()`, `getReferencingObjects()`
- **Query:** `queryWithFilters()` (uses `queryBuilder`)
- `services/database/normalized-schema.ts`
- Defines EAV schema: `objects`, `attributes`, `attribute_values`, `object_relations`
**EAV Pattern:**
- `objects` table: Minimal metadata (id, objectKey, label, type, timestamps)
- `attributes` table: Schema metadata (jira_attr_id, field_name, type, is_multiple, etc.)
- `attribute_values` table: Actual values (text_value, number_value, boolean_value, reference_object_id, array_index)
- `object_relations` table: Extracted relationships (source_id, target_id, attribute_id)
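Reconstruction from these tables can be sketched with in-memory stand-ins for the rows (hypothetical field names, mirroring the EAV columns above):

```typescript
// One row of attribute_values: typed value columns plus array_index
// to preserve the order of multi-value attributes.
interface ValueRow {
  objectId: string;
  attributeId: string;
  textValue?: string;
  numberValue?: number;
  arrayIndex: number;
}

// Stand-ins for the attributes and attribute_values tables.
const attributes = new Map([["a1", "name"], ["a2", "tags"]]);
const attributeValues: ValueRow[] = [
  { objectId: "o1", attributeId: "a1", textValue: "CRM", arrayIndex: 0 },
  { objectId: "o1", attributeId: "a2", textValue: "prod", arrayIndex: 0 },
  { objectId: "o1", attributeId: "a2", textValue: "linux", arrayIndex: 1 },
];

function reconstruct(objectId: string): Record<string, string | string[]> {
  const out: Record<string, string | string[]> = {};
  const rows = attributeValues
    .filter(r => r.objectId === objectId)
    .sort((x, y) => x.arrayIndex - y.arrayIndex); // keep multi-value order
  for (const r of rows) {
    const field = attributes.get(r.attributeId)!;
    const v = r.textValue ?? String(r.numberValue);
    const prev = out[field];
    // Single value stays a string; repeated rows collapse into an array
    out[field] = prev === undefined ? v : ([] as string[]).concat(prev, v);
  }
  return out;
}

const obj = reconstruct("o1");
```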
#### 5. Backend API Endpoints
**Primary Files:**
- `routes/applications.ts` - Application-specific endpoints
- `POST /applications/search` - Search with filters
- `GET /applications/:id` - Get application details
- `PUT /applications/:id` - Update application
- `GET /applications/:id/related/:type` - Get related objects
- Dashboard endpoints (`/team-dashboard`, `/team-portfolio-health`)
- `routes/objects.ts` - Generic object endpoints
- `GET /objects` - List supported types
- `GET /objects/:type` - Get all objects of type
- `GET /objects/:type/:id` - Get single object
- `GET /objects/:type/:id/related/:relationType` - Get related objects
- `routes/cache.ts` - Cache management
- `POST /cache/sync` - Trigger full sync
- `POST /cache/sync/:objectType` - Sync single type
- `POST /cache/refresh-application/:id` - Refresh single object
- `routes/schema.ts` - Schema endpoints
- `GET /schema` - Get schema metadata
- `GET /schema/types` - List object types
- `GET /schema/types/:type` - Get type definition
**Service Layer:**
- Routes delegate to `cmdbService`, `dataService`, `syncEngine`
- `cmdbService` provides unified interface (read/write with conflict detection)
- `dataService` provides application-specific business logic
#### 6. Query Builder (Object Reconstruction)
**Primary Files:**
- `services/queryBuilder.ts`
- `buildWhereClause()` - Builds WHERE conditions from filters
- `buildFilterCondition()` - Handles different attribute types (text, reference, number, etc.)
- `buildOrderBy()` - ORDER BY clause
- `buildPagination()` - LIMIT/OFFSET clause
**Usage:**
- Used by `normalizedCacheStore.queryWithFilters()` to build dynamic SQL
- Handles complex filters (exact match, exists, contains, reference filters)
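Such a builder typically targets the EAV layout with one `EXISTS` subquery per filter; a sketch with parameter placeholders (illustrative, not the actual `queryBuilder` API):

```typescript
// A single filter on one attribute; only two operators for illustration.
interface Filter { attributeId: string; op: "eq" | "contains"; value: string; }

function buildWhereClause(filters: Filter[]): { sql: string; params: string[] } {
  const clauses: string[] = [];
  const params: string[] = [];
  for (const f of filters) {
    // Each filter becomes an EXISTS over attribute_values, joined with AND.
    const cmp = f.op === "eq" ? "av.text_value = ?" : "av.text_value LIKE ?";
    clauses.push(
      "EXISTS (SELECT 1 FROM attribute_values av " +
      "WHERE av.object_id = o.id AND av.attribute_id = ? AND " + cmp + ")",
    );
    params.push(f.attributeId, f.op === "eq" ? f.value : `%${f.value}%`);
  }
  return { sql: clauses.join(" AND "), params };
}

const { sql, params } = buildWhereClause([
  { attributeId: "a1", op: "eq", value: "CRM" },
  { attributeId: "a2", op: "contains", value: "prod" },
]);
```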
#### 7. Generated Types/Reflect Scripts
**Primary Files:**
- `scripts/generate-types-from-db.ts`
- Reads from `object_types` and `attributes` tables
- Generates `generated/jira-types.ts` (TypeScript interfaces)
- Generates `generated/jira-schema.ts` (Schema metadata with lookup maps)
**Generated Output:**
- `jira-types.ts`: TypeScript interfaces for each object type (e.g., `ApplicationComponent`, `Server`)
- `jira-schema.ts`: `OBJECT_TYPES` record, lookup maps (`TYPE_ID_TO_NAME`, `JIRA_NAME_TO_TYPE`), helper functions
---
## Pain Points & Duplication
### 1. Dual API Clients (jiraAssets.ts vs jiraAssetsClient.ts)
**Issue:** Two separate services handling Jira API calls:
- `jiraAssetsClient.ts` - Low-level, focused on API communication
- `jiraAssets.ts` - High-level, contains business logic + API calls
**Problems:**
- Duplication of API request logic
- Inconsistent error handling
- Mixed concerns (business logic + infrastructure)
- `jiraAssets.ts` is huge (~3454 lines) and hard to maintain
**Location:**
- `backend/src/services/jiraAssets.ts` - Contains both API calls and business logic
- `backend/src/services/jiraAssetsClient.ts` - Clean separation but incomplete
### 2. Schema Discovery/Caching Duplication
**Issue:** Multiple services handling schema metadata:
- `schemaDiscoveryService.ts` - Discovers and stores schema
- `schemaCacheService.ts` - Caches schema from DB
- `schemaConfigurationService.ts` - Manages enabled types
- `schemaMappingService.ts` - Maps types to schema IDs
**Problems:**
- Unclear boundaries between services
- Potential for stale cache
- Complex initialization dependencies
**Location:**
- `backend/src/services/schema*.ts` files
### 3. Mixed Responsibilities in normalizedCacheStore.ts
**Issue:** Large file (~1695 lines) handling multiple concerns:
- Database operations (EAV storage/retrieval)
- Object reconstruction (TypeScript object building)
- Reference resolution (fetching missing referenced objects)
- Relation extraction
**Problems:**
- Hard to test individual concerns
- Difficult to optimize specific operations
- Violates single responsibility principle
**Location:**
- `backend/src/services/normalizedCacheStore.ts`
### 4. Application-Specific Logic in Generic Services
**Issue:** Application-specific business logic scattered:
- `routes/applications.ts` - Application-specific endpoints (~780 lines)
- `services/dataService.ts` - Application business logic
- `services/jiraAssets.ts` - Application aggregation logic
- `services/cmdbService.ts` - Generic service used by applications
**Problems:**
- Hard to extend to other object types
- Tight coupling between routes and services
- Business logic mixed with data access
**Location:**
- `backend/src/routes/applications.ts`
- `backend/src/services/dataService.ts`
### 5. Type Generation Pipeline Complexity
**Issue:** Multiple scripts and services involved in type generation:
- `scripts/discover-schema.ts` - Triggers schema discovery
- `services/schemaDiscoveryService.ts` - Discovers schema
- `scripts/generate-types-from-db.ts` - Generates TypeScript files
- `generated/jira-types.ts` - Generated output (must be regenerated when schema changes)
**Problems:**
- Unclear workflow
- Manual steps required
- Generated files can get out of sync
**Location:**
- `backend/scripts/discover-schema.ts`
- `backend/scripts/generate-types-from-db.ts`
- `backend/src/generated/*.ts`
### 6. Legacy Code (cacheStore.old.ts)
**Issue:** Old cache store still present in codebase:
- `services/cacheStore.old.ts` - Deprecated implementation
**Problems:**
- Confusing for new developers
- Takes up space
- No longer used
**Location:**
- `backend/src/services/cacheStore.old.ts`
### 7. Inconsistent Error Handling
**Issue:** Different error handling patterns across services:
- Some use try/catch with logger
- Some throw errors
- Some return null/undefined
- Inconsistent error messages
**Problems:**
- Hard to debug issues
- Inconsistent API responses
- No centralized error handling
**Location:**
- Throughout codebase
---
## Target Architecture
### Domain/Infrastructure/Services/API Separation
```
backend/src/
├── domain/ # Domain models & business logic
│ ├── cmdb/
│ │ ├── Object.ts # CMDBObject base interface
│ │ ├── ObjectType.ts # ObjectTypeDefinition
│ │ ├── Attribute.ts # AttributeDefinition
│ │ └── Reference.ts # ObjectReference
│ ├── schema/
│ │ ├── Schema.ts # Schema domain model
│ │ └── SchemaDiscovery.ts # Schema discovery business logic
│ └── sync/
│ ├── SyncEngine.ts # Sync orchestration logic
│ └── SyncStrategy.ts # Sync strategies (full, incremental)
├── infrastructure/ # External integrations & infrastructure
│ ├── jira/
│ │ ├── JiraAssetsClient.ts # Low-level HTTP client (pure API calls)
│ │ ├── JiraAssetsApi.ts # API contract definitions
│ │ └── JiraResponseParser.ts # Response parsing utilities
│ └── database/
│ ├── adapters/ # Database adapters (Postgres, SQLite)
│ ├── schema/ # Schema definitions
│ └── migrations/ # Database migrations
├── services/ # Application services (use cases)
│ ├── cmdb/
│ │ ├── CmdbReadService.ts # Read operations
│ │ ├── CmdbWriteService.ts # Write operations with conflict detection
│ │ └── CmdbQueryService.ts # Query operations
│ ├── schema/
│ │ ├── SchemaService.ts # Schema CRUD operations
│ │ └── SchemaDiscoveryService.ts # Schema discovery orchestration
│ └── sync/
│ └── SyncService.ts # Sync orchestration
├── repositories/ # Data access layer
│ ├── ObjectRepository.ts # Object CRUD (uses EAV store)
│ ├── AttributeRepository.ts # Attribute value access
│ ├── RelationRepository.ts # Relationship access
│ └── SchemaRepository.ts # Schema metadata access
├── stores/ # Storage implementations
│ ├── NormalizedObjectStore.ts # EAV pattern implementation
│ ├── ObjectReconstructor.ts # Object reconstruction from EAV
│ └── RelationExtractor.ts # Relation extraction logic
├── api/ # HTTP API layer
│ ├── routes/
│ │ ├── objects.ts # Generic object endpoints
│ │ ├── schema.ts # Schema endpoints
│ │ └── sync.ts # Sync endpoints
│ ├── handlers/ # Request handlers (thin layer)
│ │ ├── ObjectHandler.ts
│ │ ├── SchemaHandler.ts
│ │ └── SyncHandler.ts
│ └── middleware/ # Auth, validation, etc.
├── queries/ # Query builders
│ ├── ObjectQueryBuilder.ts # SQL query construction
│ └── FilterBuilder.ts # Filter condition builder
└── scripts/ # CLI tools
├── discover-schema.ts # Schema discovery CLI
└── generate-types.ts # Type generation CLI
```
### Key Principles
1. **Domain Layer**: Pure business logic, no infrastructure dependencies
2. **Infrastructure Layer**: External integrations (Jira API, database)
3. **Services Layer**: Orchestrates domain logic and infrastructure
4. **Repository Layer**: Data access abstraction
5. **Store Layer**: Storage implementations (EAV, caching)
6. **API Layer**: Thin HTTP handlers that delegate to services
---
## Migration Steps
### Step 1: Extract Jira API Client (Infrastructure)
**Goal:** Create pure infrastructure client with no business logic
1. Consolidate `jiraAssetsClient.ts` and `jiraAssets.ts` API methods into single `JiraAssetsClient`
2. Extract API contract types to `infrastructure/jira/JiraAssetsApi.ts`
3. Move response parsing to `infrastructure/jira/JiraResponseParser.ts`
4. Remove business logic from API client (delegate to services)
**Files to Create:**
- `infrastructure/jira/JiraAssetsClient.ts`
- `infrastructure/jira/JiraAssetsApi.ts`
- `infrastructure/jira/JiraResponseParser.ts`
**Files to Modify:**
- `services/jiraAssets.ts` - Remove API calls, keep business logic
- `services/jiraAssetsClient.ts` - Merge into infrastructure client
**Files to Delete:**
- None (yet - deprecate old files after migration)
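As a sketch of the intended split, the consolidated client could handle transport concerns only and delegate parsing to `JiraResponseParser`. The endpoint path, config shape, and injected-transport design below are illustrative assumptions, not the current implementation:

```typescript
// Hypothetical pure infrastructure client: URL building + transport only,
// no business logic, no response parsing.
export interface JiraAssetsClientConfig {
  baseUrl: string;  // assumed base URL of the Assets workspace API
  apiToken: string;
}

// Injected transport keeps the client free of HTTP-library choices
// and makes it unit-testable without network I/O.
export type HttpGet = (
  url: string,
  headers: Record<string, string>,
) => Promise<unknown>;

export class JiraAssetsClient {
  constructor(
    private readonly config: JiraAssetsClientConfig,
    private readonly httpGet: HttpGet,
  ) {}

  // URL construction is a separate method so it can be tested without I/O.
  // The "/v1/object/aql" path is an assumption for illustration.
  buildSearchUrl(qlQuery: string, includeAttributesDeep = 2): string {
    const params = new URLSearchParams({
      qlQuery,
      includeAttributesDeep: String(includeAttributesDeep),
    });
    return `${this.config.baseUrl}/v1/object/aql?${params.toString()}`;
  }

  async search(qlQuery: string): Promise<unknown> {
    // Transport only; turning the raw payload into domain objects is
    // the job of JiraResponseParser.
    return this.httpGet(this.buildSearchUrl(qlQuery), {
      Authorization: `Bearer ${this.config.apiToken}`,
    });
  }
}
```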
### Step 2: Extract Schema Domain & Services
**Goal:** Separate schema discovery business logic from infrastructure
1. Create `domain/schema/` with domain models
2. Move schema discovery logic to `services/schema/SchemaDiscoveryService.ts`
3. Consolidate schema caching in `services/schema/SchemaService.ts`
4. Remove duplication between `schemaCacheService`, `schemaConfigurationService`, `schemaMappingService`
**Files to Create:**
- `domain/schema/Schema.ts`
- `services/schema/SchemaService.ts`
- `services/schema/SchemaDiscoveryService.ts`
**Files to Modify:**
- `services/schemaDiscoveryService.ts` - Split into domain + service
- `services/schemaCacheService.ts` - Merge into SchemaService
- `services/schemaConfigurationService.ts` - Merge into SchemaService
- `services/schemaMappingService.ts` - Merge into SchemaService
**Files to Delete:**
- `services/schemaCacheService.ts` (after merge)
- `services/schemaMappingService.ts` (after merge)
### Step 3: Extract Repository Layer
**Goal:** Abstract data access from business logic
1. Create `repositories/ObjectRepository.ts` - Interface for object CRUD
2. Create `repositories/AttributeRepository.ts` - Interface for attribute access
3. Create `repositories/RelationRepository.ts` - Interface for relationships
4. Implement repositories using `NormalizedObjectStore`
**Files to Create:**
- `repositories/ObjectRepository.ts`
- `repositories/AttributeRepository.ts`
- `repositories/RelationRepository.ts`
- `repositories/SchemaRepository.ts`
**Files to Modify:**
- `services/normalizedCacheStore.ts` - Extract repository implementations
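A possible shape for the repository abstraction is sketched below. The interface names mirror the plan above, but the method set and the `CMDBObject` shape are illustrative; an in-memory implementation is included because it lets services be unit-tested without the EAV store:

```typescript
// Illustrative domain shape; the real CMDBObject lives in domain/cmdb/.
export interface CMDBObject {
  id: string;
  objectTypeId: string;
  label: string;
  attributes: Record<string, unknown>;
}

// Hypothetical repository contract; NormalizedObjectStore would provide
// the production implementation.
export interface ObjectRepository {
  findById(id: string): Promise<CMDBObject | null>;
  findByType(objectTypeId: string): Promise<CMDBObject[]>;
  upsert(obj: CMDBObject): Promise<void>;
  delete(id: string): Promise<void>;
}

// Minimal in-memory implementation for unit tests.
export class InMemoryObjectRepository implements ObjectRepository {
  private objects = new Map<string, CMDBObject>();

  async findById(id: string) { return this.objects.get(id) ?? null; }
  async findByType(objectTypeId: string) {
    return [...this.objects.values()].filter(o => o.objectTypeId === objectTypeId);
  }
  async upsert(obj: CMDBObject) { this.objects.set(obj.id, obj); }
  async delete(id: string) { this.objects.delete(id); }
}
```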
### Step 4: Extract Store Implementations
**Goal:** Separate storage implementations from business logic
1. Extract EAV storage to `stores/NormalizedObjectStore.ts`
2. Extract object reconstruction to `stores/ObjectReconstructor.ts`
3. Extract relation extraction to `stores/RelationExtractor.ts`
**Files to Create:**
- `stores/NormalizedObjectStore.ts` - EAV storage/retrieval
- `stores/ObjectReconstructor.ts` - TypeScript object reconstruction
- `stores/RelationExtractor.ts` - Relation extraction from objects
**Files to Modify:**
- `services/normalizedCacheStore.ts` - Split into store classes
### Step 5: Extract Query Builders
**Goal:** Separate query construction from execution
1. Move `queryBuilder.ts` to `queries/ObjectQueryBuilder.ts`
2. Extract filter building to `queries/FilterBuilder.ts`
**Files to Create:**
- `queries/ObjectQueryBuilder.ts`
- `queries/FilterBuilder.ts`
**Files to Modify:**
- `services/queryBuilder.ts` - Move to queries/
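To make the separation concrete, a `FilterBuilder` could accumulate conditions and emit a parameterized WHERE clause (using Postgres-style `$n` placeholders), leaving execution to the caller. The API below is a sketch, not the current `queryBuilder.ts`:

```typescript
// Illustrative FilterBuilder: builds a parameterized WHERE clause,
// never interpolates values into SQL (avoids injection).
type Param = string | number | boolean;

export class FilterBuilder {
  private conditions: string[] = [];
  private params: Param[] = [];

  equals(column: string, value: Param): this {
    this.params.push(value);
    this.conditions.push(`${column} = $${this.params.length}`);
    return this;
  }

  like(column: string, pattern: string): this {
    this.params.push(pattern);
    this.conditions.push(`${column} ILIKE $${this.params.length}`);
    return this;
  }

  build(): { where: string; params: Param[] } {
    const where = this.conditions.length
      ? `WHERE ${this.conditions.join(" AND ")}`
      : "";
    return { where, params: this.params };
  }
}
```

`ObjectQueryBuilder` would then compose the SELECT/JOIN skeleton and delegate filtering to this builder.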
### Step 6: Extract CMDB Services
**Goal:** Separate read/write/query concerns
1. Create `services/cmdb/CmdbReadService.ts` - Read operations
2. Create `services/cmdb/CmdbWriteService.ts` - Write operations with conflict detection
3. Create `services/cmdb/CmdbQueryService.ts` - Query operations
**Files to Create:**
- `services/cmdb/CmdbReadService.ts`
- `services/cmdb/CmdbWriteService.ts`
- `services/cmdb/CmdbQueryService.ts`
**Files to Modify:**
- `services/cmdbService.ts` - Split into read/write/query services
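The conflict detection in `CmdbWriteService` could follow an optimistic-concurrency pattern: the caller sends the `updated` timestamp it last saw, and the write is rejected if the stored value has moved on. This is a sketch under assumed names and a timestamp-based strategy, not the current code:

```typescript
// Illustrative stored shape; the real one comes from the EAV store.
interface StoredObject {
  id: string;
  updated: string; // ISO timestamp of last write
  attributes: Record<string, unknown>;
}

export class ConflictError extends Error {}

export class CmdbWriteService {
  // A Map stands in for the repository to keep the sketch self-contained.
  constructor(private readonly store: Map<string, StoredObject>) {}

  update(
    id: string,
    knownUpdated: string, // the `updated` value the caller last read
    attributes: Record<string, unknown>,
  ): StoredObject {
    const current = this.store.get(id);
    if (!current) throw new Error(`Object ${id} not found`);
    if (current.updated !== knownUpdated) {
      // Someone wrote in between: surface a conflict instead of
      // silently overwriting their change.
      throw new ConflictError(`Object ${id} changed since ${knownUpdated}`);
    }
    const next = { ...current, attributes, updated: new Date().toISOString() };
    this.store.set(id, next);
    return next;
  }
}
```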
### Step 7: Extract Sync Service
**Goal:** Separate sync orchestration from storage
1. Create `domain/sync/SyncEngine.ts` - Sync business logic
2. Create `services/sync/SyncService.ts` - Sync orchestration
**Files to Create:**
- `domain/sync/SyncEngine.ts`
- `services/sync/SyncService.ts`
**Files to Modify:**
- `services/syncEngine.ts` - Split into domain + service
### Step 8: Refactor API Routes
**Goal:** Thin HTTP handlers delegating to services
1. Create `api/handlers/` directory
2. Move route logic to handlers
3. Routes become thin wrappers around handlers
**Files to Create:**
- `api/handlers/ObjectHandler.ts`
- `api/handlers/SchemaHandler.ts`
- `api/handlers/SyncHandler.ts`
**Files to Modify:**
- `routes/applications.ts` - Extract handlers
- `routes/objects.ts` - Extract handlers
- `routes/cache.ts` - Extract handlers
- `routes/schema.ts` - Extract handlers
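"Thin" here means the handler holds no business logic: it maps a service result to a status and body, and the route adapter maps HTTP specifics to a plain call. A framework-agnostic sketch (names like `ObjectService` and `HandlerResult` are illustrative):

```typescript
// Illustrative handler result, independent of Express/Fastify/etc.
interface HandlerResult {
  status: number;
  body: unknown;
}

// Minimal service contract the handler depends on.
interface ObjectService {
  getById(id: string): Promise<{ id: string; label: string } | null>;
}

export class ObjectHandler {
  constructor(private readonly service: ObjectService) {}

  // The handler only translates the service result into an HTTP shape;
  // all lookup logic lives in the service.
  async getObject(id: string): Promise<HandlerResult> {
    const obj = await this.service.getById(id);
    if (!obj) return { status: 404, body: { error: "Object not found" } };
    return { status: 200, body: obj };
  }
}
```

A route then becomes a one-liner that extracts `req.params.id`, calls `handler.getObject(id)`, and writes `result.status`/`result.body`.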
### Step 9: Clean Up Legacy Code
**Goal:** Remove deprecated files
**Files to Delete:**
- `services/cacheStore.old.ts`
- Deprecated service files after migration complete
### Step 10: Update Type Generation
**Goal:** Simplify type generation workflow
1. Consolidate type generation logic
2. Add automatic type generation on schema discovery
3. Update documentation
**Files to Modify:**
- `scripts/generate-types-from-db.ts` - Enhance with auto-discovery
- `scripts/discover-schema.ts` - Auto-generate types after discovery
---
## Explicit Deletion List
### Phase 2 (After Migration Complete)
1. **`backend/src/services/cacheStore.old.ts`**
- Reason: Legacy implementation, replaced by `normalizedCacheStore.ts`
- Deprecation date: TBD
2. **`backend/src/services/jiraAssets.ts`** (after extracting business logic)
- Reason: API calls moved to infrastructure layer, business logic to services
- Replacement: `infrastructure/jira/JiraAssetsClient.ts` + `services/cmdb/Cmdb*Service.ts`
3. **`backend/src/services/schemaCacheService.ts`** (after consolidation)
- Reason: Merged into `services/schema/SchemaService.ts`
4. **`backend/src/services/schemaMappingService.ts`** (after consolidation)
- Reason: Merged into `services/schema/SchemaService.ts`
5. **`backend/scripts/generate-schema.ts`** (if still present)
- Reason: Replaced by `generate-types-from-db.ts`
### Notes
- Keep old files until migration is complete and tested
- Mark as deprecated with `@deprecated` JSDoc comments
- Add migration guide for each deprecated file
---
## API Payload Contract & Recursion Insights
### Jira Assets API Payload Structure
The Jira Assets API returns objects with the following nested structure:
```typescript
interface JiraAssetsSearchResponse {
objectEntries: JiraAssetsObject[]; // Top-level array of objects
// ... pagination metadata
}
interface JiraAssetsObject {
id: number;
objectKey: string;
label: string;
objectType: {
id: number;
name: string;
};
attributes: JiraAssetsAttribute[]; // Array of attributes
updated?: string;
created?: string;
}
interface JiraAssetsAttribute {
objectTypeAttributeId: number;
objectTypeAttribute?: {
id: number;
name: string;
};
objectAttributeValues: Array<{ // Union type of value representations
value?: string; // For scalar values (text, number, etc.)
displayValue?: string; // Human-readable value
referencedObject?: { // For reference attributes
id: number;
objectKey: string;
label: string;
// ⚠️ CRITICAL: referencedObject may include attributes (level 2)
attributes?: JiraAssetsAttribute[]; // Recursive structure
};
status?: { // For status attributes
name: string;
};
// ... other type-specific fields
}>;
}
```
### Key Insights
#### 1. Recursive Structure
**Issue:** `referencedObject` may include `attributes[]` array (level 2 recursion).
**Current Handling:**
- `jiraAssetsClient.ts` uses `includeAttributesDeep=2` parameter
- This causes referenced objects to include their attributes
- Referenced objects' attributes may themselves contain referenced objects (level 3, 4, etc.)
- **Cycles are possible** (Object A references Object B, Object B references Object A)
**Current Code Location:**
- `backend/src/services/jiraAssetsClient.ts:222` - `includeAttributesDeep=2`
- `backend/src/services/jiraAssetsClient.ts:259-260` - Search with deep attributes
- `backend/src/services/jiraAssetsClient.ts:285-286` - POST search with deep attributes
**Impact:**
- Response payloads can be very large (deeply nested)
- Memory usage increases with depth
- Parsing becomes more complex
#### 2. Shallow Referenced Objects
**Issue:** When `attributes[]` is absent on a shallow `referencedObject`, **do not wipe attributes**.
**Current Behavior:**
- Some code paths may clear attributes if `attributes` is missing
- This is incorrect - absence of `attributes` array does not mean "no attributes"
- It simply means "attributes not included in this response"
**Critical Rule:**
```typescript
// ❌ WRONG: Don't do this
if (!referencedObject.attributes) {
referencedObject.attributes = []; // This wipes existing attributes!
}
// ✅ CORRECT: Preserve existing attributes if missing from response
if (referencedObject.attributes === undefined) {
// Don't modify - attributes simply not included in this response
// Keep any existing attributes that were previously loaded
}
```
**Code Locations to Review:**
- `backend/src/services/jiraAssetsClient.ts:parseObject()` - Object parsing
- `backend/src/services/jiraAssetsClient.ts:parseAttributeValue()` - Reference parsing
- `backend/src/services/normalizedCacheStore.ts:loadAttributeValues()` - Reference reconstruction
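One way to enforce the rule at a single point is an attribute-preserving merge: only replace stored attributes when the incoming payload actually carries them. The helper below is a sketch with illustrative types, not existing code:

```typescript
// Illustrative shape of a referenced object as it arrives from the API.
interface RefObject {
  id: number;
  label: string;
  attributes?: unknown[]; // absent on shallow payloads
}

// Merge an incoming referenced object into what we already have.
// Key rule: `attributes === undefined` means "not included in this
// response", NOT "object has no attributes".
export function mergeReferencedObject(
  existing: RefObject | undefined,
  incoming: RefObject,
): RefObject {
  if (incoming.attributes !== undefined) {
    return incoming; // deep payload: its attributes are authoritative
  }
  // Shallow payload: refresh identity fields, keep previously loaded attributes.
  return { ...incoming, attributes: existing?.attributes };
}
```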
#### 3. Attribute Values Union Type
**Issue:** `objectAttributeValues` is a union type - different value representations based on attribute type.
**Value Types:**
- Scalar (text, number, boolean): `{ value?: string, displayValue?: string }`
- Reference: `{ referencedObject?: { id, objectKey, label, attributes? } }`
- Status: `{ status?: { name: string } }`
- Date/Datetime: `{ value?: string }` (ISO string)
**Current Handling:**
- `jiraAssetsClient.ts:parseAttributeValue()` uses switch on `attrDef.type`
- Different parsing logic for each type
- Reference types extract `referencedObject`, others use `value` or `displayValue`
**Code Location:**
- `backend/src/services/jiraAssetsClient.ts:521-628` - `parseAttributeValue()` method
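The type-switch described above can be sketched as follows. The entry shape mirrors the payload contract earlier in this section; the exact field precedence in the real `parseAttributeValue()` may differ:

```typescript
// Illustrative single-entry parser: picks the right field from the
// objectAttributeValues union based on the attribute's declared type.
interface AttributeValueEntry {
  value?: string;
  displayValue?: string;
  referencedObject?: { id: number; objectKey: string; label: string };
  status?: { name: string };
}

type AttrType = "text" | "number" | "boolean" | "date" | "reference" | "status";

export function parseValue(type: AttrType, entry: AttributeValueEntry): unknown {
  switch (type) {
    case "reference":
      // Only the identity triplet is guaranteed; nested attributes may or
      // may not be present depending on includeAttributesDeep.
      return entry.referencedObject ?? null;
    case "status":
      return entry.status?.name ?? null;
    case "number":
      return entry.value !== undefined ? Number(entry.value) : null;
    case "boolean":
      return entry.value === "true";
    default:
      // text / date: prefer the raw value, fall back to displayValue
      return entry.value ?? entry.displayValue ?? null;
  }
}
```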
#### 4. Cycles and Recursion Depth
**Issue:** Recursive references can create cycles.
**Examples:**
- Application A references Team X
- Team X references Application A (via some attribute)
- This creates a cycle at depth 2
**Current Handling:**
- No explicit cycle detection
- `includeAttributesDeep=2` limits depth but doesn't prevent cycles
- Potential for infinite loops during reconstruction
**Recommendation:**
- Add cycle detection during object reconstruction
- Use visited set to track processed object IDs
- Limit recursion depth explicitly (not just via API parameter)
**Code Locations:**
- `backend/src/services/normalizedCacheStore.ts:loadAttributeValues()` - Reference resolution
- `backend/src/services/normalizedCacheStore.ts:reconstructObject()` - Object reconstruction
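The recommended visited-set approach can be sketched as below. `RawObject`/`ResolvedObject` are illustrative stand-ins for the real EAV types; the point is the termination guarantee, not the exact shapes:

```typescript
// Illustrative flattened record, as it might come out of the EAV store.
interface RawObject {
  id: string;
  label: string;
  referenceIds: string[]; // ids of referenced objects
}

interface ResolvedObject {
  id: string;
  label: string;
  references: ResolvedObject[];
}

// Cycle-safe reconstruction: a visited set breaks cycles, and an explicit
// depth budget bounds the work independently of the API's
// includeAttributesDeep parameter.
export function reconstruct(
  id: string,
  lookup: Map<string, RawObject>,
  maxDepth = 2,
  visited: Set<string> = new Set(),
): ResolvedObject | null {
  const raw = lookup.get(id);
  if (!raw) return null;
  if (visited.has(id) || maxDepth <= 0) {
    // Cycle hit or depth budget spent: return a shallow stub
    // instead of recursing forever.
    return { id: raw.id, label: raw.label, references: [] };
  }
  visited.add(id);
  const references = raw.referenceIds
    .map(refId => reconstruct(refId, lookup, maxDepth - 1, visited))
    .filter((r): r is ResolvedObject => r !== null);
  return { id: raw.id, label: raw.label, references };
}
```

With the Application A ↔ Team X example above, the second visit to A resolves to a stub, so reconstruction always terminates.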
### Refactoring Considerations
1. **Create dedicated parser module** for handling recursive payloads
2. **Add cycle detection** utility
3. **Separate shallow vs deep parsing** logic
4. **Preserve attribute state** when attributes array is absent
5. **Document recursion depth limits** clearly
---
## Appendix: Module Dependency Graph
```
┌─────────────────────────────────────────────────────────┐
│                       API Routes                        │
│   (applications.ts, objects.ts, cache.ts, schema.ts)    │
└────────────┬────────────────────────────────────────────┘
             ▼
┌─────────────────────────────────────────────────────────┐
│                  Application Services                   │
│           (cmdbService.ts, dataService.ts)              │
└────────────┬────────────────────────────────────────────┘
    ┌────────┴────────┐
    ▼                 ▼
┌────────────┐  ┌──────────────────────────────┐
│   Sync     │  │   Normalized Cache Store     │
│   Engine   │  │   (EAV Pattern)              │
└─────┬──────┘  └──────────┬───────────────────┘
      │                    │
      │                    ▼
      │         ┌─────────────────┐
      │         │  Query Builder  │
      │         └─────────────────┘
      ▼
┌─────────────────────────────────────────┐
│           Jira Assets Client            │
│  (jiraAssetsClient.ts, jiraAssets.ts)   │
└────────────┬────────────────────────────┘
             ▼
┌─────────────────────────────────────────┐
│            Schema Services              │
│  (schemaDiscovery, schemaCache, etc.)   │
└─────────────────────────────────────────┘
```
---
## Next Steps (Phase 2)
1. Review and approve this plan
2. Create detailed task breakdown for each migration step
3. Set up feature branch for refactoring
4. Implement changes incrementally with tests
5. Update documentation as we go
---
**End of Phase 1 - Analysis Document**