# Health Check and Verification
After deployment, verify every service is running correctly before declaring the environment ready.
## Service Health Checks

Run these commands to check each service. A healthy response is `{"status":"UP"}`.
```bash
# Core service
curl -s http://localhost:18020/prometheus/actuator/health

# Support services
curl -s http://localhost:18030/email-service/actuator/health
curl -s http://localhost:18040/file-service/actuator/health
curl -s http://localhost:18050/schedule/actuator/health

# Partner integrations (if deployed)
curl -s http://localhost:18080/partner/channel/slash/actuator/health
curl -s http://localhost:18090/partner/channel/stripe/actuator/health

# Wallet services (if deployed)
curl -s http://localhost:18060/actuator/health
curl -s http://localhost:18070/actuator/health
```

## Readiness Probe
The readiness endpoint checks all downstream dependencies:
```bash
curl -s http://localhost:18020/prometheus/actuator/health/readiness | python3 -m json.tool
```

Expected output includes component checks:
```json
{
  "status": "UP",
  "components": {
    "db": { "status": "UP" },
    "redis": { "status": "UP" },
    "rabbit": { "status": "UP" }
  }
}
```

If any component shows `DOWN`, check the corresponding middleware container.
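To surface only the failing components without scanning the full JSON, a small filter can be appended to the readiness call. A minimal sketch, using a hard-coded sample payload in place of the live `curl` output:

```bash
# Sample readiness payload; in practice, pipe the curl output above instead
readiness='{"status":"DOWN","components":{"db":{"status":"UP"},"redis":{"status":"DOWN"},"rabbit":{"status":"UP"}}}'

# Print each component whose status is not UP
echo "$readiness" | python3 -c '
import json, sys
for name, comp in json.load(sys.stdin).get("components", {}).items():
    if comp.get("status") != "UP":
        print(name, "is", comp.get("status"))
'
```

For the sample payload this prints `redis is DOWN`, pointing straight at the container to inspect.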
## Quick Status Script

Check all services at once:
```bash
#!/bin/bash
SERVICES=(
  "18020|app-prometheus|/prometheus"
  "18030|support-email|/email-service"
  "18040|support-file|/file-service"
  "18050|support-schedule|/schedule"
  # Uncomment optional services as needed:
  # "18080|partner-slash|/partner/channel/slash"
  # "18090|partner-stripe|/partner/channel/stripe"
  # "18060|support-tron-wallet|"
  # "18070|support-solana-wallet|"
)

for svc in "${SERVICES[@]}"; do
  IFS='|' read -r port name path <<< "$svc"
  status=$(curl -sf "http://localhost:$port$path/actuator/health" 2>/dev/null | grep -o '"status":"[A-Z]*"' | head -1)
  if [ -n "$status" ]; then
    echo "✓ $name ($port): $status"
  else
    echo "✗ $name ($port): UNREACHABLE"
  fi
done
```

## Smoke Test: End-to-End Login Flow
After all health checks pass, verify the full API flow works.

### Step 1: Get Public Key (Secure Channel)
```bash
curl -s http://localhost:18020/prometheus/actuator/health
# Confirm UP first, then:
curl -s -X GET http://localhost:18020/prometheus/web/v1/system/secure-channel/public-key \
  -H "X-PORTAL-ACCESS-CODE: <your-portal-access-code>"
```

Expected: RSA public key in response.
### Step 2: Create Secure Channel Session
```bash
curl -s -X POST http://localhost:18020/prometheus/web/v1/system/secure-channel/session \
  -H "Content-Type: application/json" \
  -H "X-PORTAL-ACCESS-CODE: <your-portal-access-code>" \
  -d '{"clientPublicKey": "<base64-encoded-client-public-key>"}'
```

Expected: Session ID and server public key.
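One way to produce a value for the `<base64-encoded-client-public-key>` placeholder is with OpenSSL. This is a sketch only; the key size and encoding shown here (2048-bit RSA, DER, base64) are assumptions, so confirm the format the server actually expects:

```bash
# Hypothetical client keypair for the clientPublicKey field (format is an assumption)
openssl genrsa -out client_key.pem 2048 2>/dev/null

# Export the public key as DER and base64-encode it on a single line
openssl rsa -in client_key.pem -pubout -outform DER 2>/dev/null | base64 | tr -d '\n' > client_pub.b64
```

The contents of `client_pub.b64` would then be substituted into the `clientPublicKey` field, while `client_key.pem` stays on the client to decrypt the server's responses.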
### Step 3: Initiate Login
```bash
curl -s -X POST http://localhost:18020/prometheus/web/v1/system/auth/login/initiate \
  -H "Content-Type: application/json" \
  -H "X-PORTAL-ACCESS-CODE: <your-portal-access-code>" \
  -d '{"rawRequestBody": "<encrypted-payload>"}'
```

Expected: Login session with MFA challenge or direct JWT.
> **TIP**
> The smoke test requires a valid portal access code and encrypted request body. For a simpler check, the health and readiness endpoints are sufficient to confirm the system is operational.
## Log Inspection

### View Recent Logs
```bash
# Last 100 lines
docker logs --tail 100 slaunchx-app-prometheus-test

# Follow logs in real-time
docker logs -f slaunchx-app-prometheus-test

# Filter for errors
docker logs slaunchx-app-prometheus-test 2>&1 | grep -i "error\|exception" | tail -20
```

### Common Startup Errors
| Error Pattern | Cause | Fix |
|---|---|---|
| `Connection refused: MySQL` | MySQL not running or wrong host | Check `DB_HOST` matches MySQL container name |
| `NOAUTH Authentication required` | Redis password missing | Set `REDIS_PASSWORD` in env file |
| `ACCESS_REFUSED` | RabbitMQ credentials wrong | Check `RABBITMQ_USERNAME` / `RABBITMQ_PASSWORD` |
| `Flyway migration failed` | Schema conflict | Check migration files; may need repair |
| `EncryptionKeyNotFoundException` | Missing master key | Set `SLAUNCHX_SECURITY_ENCRYPTION_MASTER_KEYS_K1` |
| `Port already in use` | Another container on same port | `docker ps` to find conflicting container |
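The table above can be turned into a quick triage helper that matches a captured log line against the known patterns. A minimal sketch; the matching is intentionally loose, and the patterns and fixes are taken directly from the table:

```bash
# Map a log line to a likely fix, based on the startup-error table above
diagnose() {
  case "$1" in
    *"Connection refused"*)           echo "Check DB_HOST matches the MySQL container name" ;;
    *NOAUTH*)                         echo "Set REDIS_PASSWORD in the env file" ;;
    *ACCESS_REFUSED*)                 echo "Check RABBITMQ_USERNAME / RABBITMQ_PASSWORD" ;;
    *Flyway*)                         echo "Check migration files; may need repair" ;;
    *EncryptionKeyNotFoundException*) echo "Set SLAUNCHX_SECURITY_ENCRYPTION_MASTER_KEYS_K1" ;;
    *"already in use"*)               echo "Run docker ps to find the conflicting container" ;;
    *)                                echo "No known pattern matched" ;;
  esac
}

diagnose "NOAUTH Authentication required."
```

For the example line this prints the Redis fix; a suspicious line from `docker logs` can be passed in the same way.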
## Container Status Check
```bash
# List all SlaunchX containers
docker ps -a --filter "name=slaunchx-" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Check for crashed containers
docker ps -a --filter "name=slaunchx-" --filter "status=exited" --format "{{.Names}}: {{.Status}}"
```

## Monitoring Integration
### Uptime Kuma
If Uptime Kuma is deployed (port 3001), add HTTP monitors for each service:
| Monitor | URL | Interval |
|---|---|---|
| app-prometheus | http://localhost:18020/prometheus/actuator/health | 60s |
| support-email | http://localhost:18030/email-service/actuator/health | 60s |
| support-file | http://localhost:18040/file-service/actuator/health | 60s |
| support-schedule | http://localhost:18050/schedule/actuator/health | 60s |
| MySQL | TCP check localhost:18306 | 60s |
| Redis | TCP check localhost:18379 | 60s |
| RabbitMQ | HTTP http://localhost:18672/api/healthchecks/node | 60s |
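The MySQL and Redis monitors above are plain TCP checks; the same probe can be run from a shell without Uptime Kuma using bash's `/dev/tcp` virtual paths. A sketch (requires bash, not plain `sh`):

```bash
# Return "open" or "closed" for host:port, mirroring a TCP monitor
tcp_check() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

tcp_check localhost 18306   # MySQL port from the table above
```

This is handy for scripting the middleware checks alongside the HTTP health probes; `nc -z` is an equivalent alternative where netcat is installed.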
## Verification Checklist
- [ ] All service health endpoints return `{"status":"UP"}`
- [ ] Readiness probe shows all components UP (db, redis, rabbit)
- [ ] No error/exception in startup logs
- [ ] All containers in `running` status (none `exited`)
- [ ] (Optional) Smoke test login flow succeeds
- [ ] (Optional) Uptime Kuma monitors configured