Health Check and Verification

After deployment, verify every service is running correctly before declaring the environment ready.

Service Health Checks

Run these commands to check each service. A healthy response is {"status":"UP"}.

bash
# Core service
curl -s http://localhost:18020/prometheus/actuator/health

# Support services
curl -s http://localhost:18030/email-service/actuator/health
curl -s http://localhost:18040/file-service/actuator/health
curl -s http://localhost:18050/schedule/actuator/health

# Partner integrations (if deployed)
curl -s http://localhost:18080/partner/channel/slash/actuator/health
curl -s http://localhost:18090/partner/channel/stripe/actuator/health

# Wallet services (if deployed)
curl -s http://localhost:18060/actuator/health
curl -s http://localhost:18070/actuator/health

Readiness Probe

The readiness endpoint checks all downstream dependencies:

bash
curl -s http://localhost:18020/prometheus/actuator/health/readiness | python3 -m json.tool

Expected output includes component checks:

json
{
  "status": "UP",
  "components": {
    "db": { "status": "UP" },
    "redis": { "status": "UP" },
    "rabbit": { "status": "UP" }
  }
}

If any component shows DOWN, check the corresponding middleware container.
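
To pinpoint the failing dependency quickly, the readiness JSON can be filtered for non-UP components. A minimal sketch, assuming the response has been saved to a file (`readiness.json` is a hypothetical filename, and the sample body below is fabricated for the demo):

```shell
# Hypothetical sample: a readiness response saved from the curl above.
cat > readiness.json <<'EOF'
{"status":"DOWN","components":{"db":{"status":"UP"},"redis":{"status":"DOWN"},"rabbit":{"status":"UP"}}}
EOF

# List every component that is not UP.
python3 - <<'EOF'
import json

data = json.load(open("readiness.json"))
down = [name for name, comp in data.get("components", {}).items()
        if comp.get("status") != "UP"]
print("DOWN components:", ", ".join(down) or "none")
EOF
# → DOWN components: redis
```

In this example, `redis` is reported DOWN, so the Redis container is the first thing to inspect.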

Quick Status Script

Check all services at once:

bash
#!/bin/bash
SERVICES=(
  "18020|app-prometheus|/prometheus"
  "18030|support-email|/email-service"
  "18040|support-file|/file-service"
  "18050|support-schedule|/schedule"
  # Uncomment optional services as needed:
  # "18080|partner-slash|/partner/channel/slash"
  # "18090|partner-stripe|/partner/channel/stripe"
  # "18060|support-tron-wallet|"
  # "18070|support-solana-wallet|"
)

for svc in "${SERVICES[@]}"; do
  IFS='|' read -r port name path <<< "$svc"
  status=$(curl -sf "http://localhost:$port$path/actuator/health" 2>/dev/null | grep -o '"status":"[A-Z]*"' | head -1)
  if [ -n "$status" ]; then
    echo "✓ $name ($port): $status"
  else
    echo "✗ $name ($port): UNREACHABLE"
  fi
done
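
The script splits each pipe-delimited registry entry with `IFS` and a bash here-string; a standalone demo of that parsing step:

```shell
# Demo: split one registry entry into port, name, and context path
# (requires bash for the <<< here-string).
svc="18020|app-prometheus|/prometheus"
IFS='|' read -r port name path <<< "$svc"
echo "$name -> port $port, path $path"
# → app-prometheus -> port 18020, path /prometheus
```

Entries with an empty third field (the wallet services) yield an empty `$path`, so the URL degrades cleanly to `http://localhost:$port/actuator/health`.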

Smoke Test: End-to-End Login Flow

After all health checks pass, verify the full API flow works:

Step 1: Get Public Key (Secure Channel)

bash
curl -s http://localhost:18020/prometheus/actuator/health
# Confirm UP first, then:

curl -s -X GET http://localhost:18020/prometheus/web/v1/system/secure-channel/public-key \
  -H "X-PORTAL-ACCESS-CODE: <your-portal-access-code>"

Expected: RSA public key in response.

Step 2: Create Secure Channel Session

bash
curl -s -X POST http://localhost:18020/prometheus/web/v1/system/secure-channel/session \
  -H "Content-Type: application/json" \
  -H "X-PORTAL-ACCESS-CODE: <your-portal-access-code>" \
  -d '{"clientPublicKey": "<base64-encoded-client-public-key>"}'

Expected: Session ID and server public key.
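
The `clientPublicKey` field requires a client-side RSA keypair. A minimal sketch with openssl; the 2048-bit key size and base64-of-PEM encoding are assumptions here — match whatever format the secure-channel endpoint actually expects:

```shell
# Generate a client RSA keypair (2048 bits is an assumption).
openssl genrsa -out client.key 2048

# Extract the public key in PEM form.
openssl rsa -in client.key -pubout -out client.pub

# Base64-encode the public key for the JSON payload
# (base64 over PEM is an assumption; the server may expect DER).
base64 < client.pub | tr -d '\n' > client.pub.b64
```

The contents of `client.pub.b64` would then be pasted into the `clientPublicKey` field of the session request above.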

Step 3: Initiate Login

bash
curl -s -X POST http://localhost:18020/prometheus/web/v1/system/auth/login/initiate \
  -H "Content-Type: application/json" \
  -H "X-PORTAL-ACCESS-CODE: <your-portal-access-code>" \
  -d '{"rawRequestBody": "<encrypted-payload>"}'

Expected: Login session with MFA challenge or direct JWT.

TIP

The smoke test requires a valid portal access code and encrypted request body. For a simpler check, the health and readiness endpoints are sufficient to confirm the system is operational.
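
For scripted pipelines, that simpler check can be codified as a tiny gate that fails when the health body is not UP. A sketch; the helper name `is_up` is made up for this example:

```shell
# is_up: read a health JSON body on stdin, succeed only if status is UP.
is_up() {
  grep -q '"status":"UP"'
}

# Intended CI usage (endpoint from the health-check section above):
#   curl -sf http://localhost:18020/prometheus/actuator/health | is_up || exit 1

# Local demo against canned responses:
echo '{"status":"UP"}'   | is_up && echo "gate passed"
echo '{"status":"DOWN"}' | is_up || echo "gate failed"
# → gate passed
# → gate failed
```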

Log Inspection

View Recent Logs

bash
# Last 100 lines
docker logs --tail 100 slaunchx-app-prometheus-test

# Follow logs in real-time
docker logs -f slaunchx-app-prometheus-test

# Filter for errors
docker logs slaunchx-app-prometheus-test 2>&1 | grep -i "error\|exception" | tail -20
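
The filter above uses BRE alternation (`\|`) inside double quotes, so one grep catches both error lines and Java stack traces. A quick demo on synthetic log lines:

```shell
# Demo of the error/exception filter on synthetic log output.
printf '%s\n' \
  'INFO  Started application in 4.2s' \
  'ERROR Connection refused: MySQL' \
  'WARN  Slow query detected' \
  'java.lang.NullPointerException at ...' \
| grep -i "error\|exception"
# → ERROR Connection refused: MySQL
# → java.lang.NullPointerException at ...
```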

Common Startup Errors

| Error Pattern | Cause | Fix |
| --- | --- | --- |
| Connection refused: MySQL | MySQL not running or wrong host | Check DB_HOST matches the MySQL container name |
| NOAUTH Authentication required | Redis password missing | Set REDIS_PASSWORD in the env file |
| ACCESS_REFUSED | RabbitMQ credentials wrong | Check RABBITMQ_USERNAME / RABBITMQ_PASSWORD |
| Flyway migration failed | Schema conflict | Check migration files; may need repair |
| EncryptionKeyNotFoundException | Missing master key | Set SLAUNCHX_SECURITY_ENCRYPTION_MASTER_KEYS_K1 |
| Port already in use | Another container on the same port | docker ps to find the conflicting container |

Container Status Check

bash
# List all SlaunchX containers
docker ps -a --filter "name=slaunchx-" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Check for crashed containers
docker ps -a --filter "name=slaunchx-" --filter "status=exited" --format "{{.Names}}: {{.Status}}"

Monitoring Integration

Uptime Kuma

If Uptime Kuma is deployed (port 3001), add HTTP monitors for each service:

| Monitor | URL | Interval |
| --- | --- | --- |
| app-prometheus | http://localhost:18020/prometheus/actuator/health | 60s |
| support-email | http://localhost:18030/email-service/actuator/health | 60s |
| support-file | http://localhost:18040/file-service/actuator/health | 60s |
| support-schedule | http://localhost:18050/schedule/actuator/health | 60s |
| MySQL | TCP check localhost:18306 | 60s |
| Redis | TCP check localhost:18379 | 60s |
| RabbitMQ | HTTP http://localhost:18672/api/healthchecks/node | 60s |

Verification Checklist

  • [ ] All service health endpoints return {"status":"UP"}
  • [ ] Readiness probe shows all components UP (db, redis, rabbit)
  • [ ] No error/exception in startup logs
  • [ ] All containers in running status (none exited)
  • [ ] (Optional) Smoke test login flow succeeds
  • [ ] (Optional) Uptime Kuma monitors configured

Internal Handbook