Epic 1: Implementation Summary

Overview

Epic 1 implements the core kernel and infrastructure of the Go Platform, including database layer with Ent ORM, health monitoring, metrics, error handling, HTTP server, and OpenTelemetry tracing. This epic provides the foundation for all future modules and services.

Completed Stories

1.1 Enhanced Dependency Injection Container

  • Extended DI container with providers for all core services
  • Database, health, metrics, error bus, and HTTP server providers
  • Lifecycle management for all services
  • CoreModule() exports all core services

1.2 Database Layer with Ent ORM

  • Ent schema for User, Role, Permission, AuditLog entities
  • Many-to-many relationships (User-Role, Role-Permission)
  • Database client wrapper with connection pooling
  • Automatic migrations on startup
  • PostgreSQL support with connection management

1.3 Health Monitoring and Metrics System

  • Health check registry with extensible checkers
  • Database health checker
  • Prometheus metrics with HTTP instrumentation
  • /healthz, /ready, and /metrics endpoints

1.4 Error Handling and Error Bus

  • Channel-based error bus with background consumer
  • ErrorPublisher interface
  • Panic recovery middleware
  • Error context preservation

1.5 HTTP Server Foundation

  • Gin-based HTTP server
  • Comprehensive middleware stack:
    • Request ID generation
    • Structured logging
    • Panic recovery with error bus
    • Prometheus metrics
    • CORS support
  • Core routes registration
  • Graceful shutdown

1.6 OpenTelemetry Distributed Tracing

  • Tracer initialization with stdout (dev) and OTLP (prod) exporters
  • HTTP request instrumentation
  • Trace ID correlation in logs
  • Configurable tracing

Prerequisites

Before verifying Epic 1, ensure you have:

  1. Docker and Docker Compose installed
  2. PostgreSQL client (optional, for direct database access)
  3. Go 1.24+ installed
  4. curl or similar HTTP client for testing endpoints

Setup Instructions

1. Start PostgreSQL Database

The project includes a docker-compose.yml file for easy database setup:

# Start PostgreSQL container
docker-compose up -d postgres

# Verify container is running
docker-compose ps

# Check database logs
docker-compose logs postgres

The database will be available at:

  • Host: localhost
  • Port: 5432
  • Database: goplt
  • User: goplt
  • Password: goplt_password

2. Configure Database Connection

Update config/default.yaml or set environment variable:

# Option 1: Edit config/default.yaml
# Set database.dsn to:
database:
  dsn: "postgres://goplt:goplt_password@localhost:5432/goplt?sslmode=disable"

# Option 2: Set environment variable
export DATABASE_DSN="postgres://goplt:goplt_password@localhost:5432/goplt?sslmode=disable"

3. Build and Run the Application

# Build the application
make build

# Or build directly
go build -o bin/platform ./cmd/platform

# Run the application
./bin/platform

# Or run directly
go run ./cmd/platform/main.go

The application will:

  1. Load configuration
  2. Initialize logger
  3. Connect to database
  4. Run migrations (create tables)
  5. Start HTTP server on port 8080

Verification Instructions

1. Verify Database Connection and Migrations

Option A: Using Application Logs

When you start the application, you should see:

  • Database connection successful
  • Migrations executed (tables created)

Option B: Using PostgreSQL Client

# Connect to database
docker exec -it goplt-postgres psql -U goplt -d goplt

# List tables (should see users, roles, permissions, audit_logs, and the join tables)

\dt

# Check a specific table structure
\d users
\d roles
\d permissions
\d audit_logs

# Exit psql
\q

Option C: Using SQL Query

# Execute SQL query
docker exec -it goplt-postgres psql -U goplt -d goplt -c "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';"

# Expected output should include:
# - users
# - roles
# - permissions
# - audit_logs
# - user_roles
# - role_permissions

2. Verify Health Endpoints

# Test liveness probe (should return 200)
curl http://localhost:8080/healthz

# Expected response:
# {"status":"healthy"}

# Test readiness probe (should return 200 if database is connected)
curl http://localhost:8080/ready

# Expected response:
# {"status":"healthy","components":[{"name":"database","status":"healthy"}]}

# If database is not connected, you'll see:
# {"status":"unhealthy","components":[{"name":"database","status":"unhealthy","error":"..."}]}

3. Verify Metrics Endpoint

# Get Prometheus metrics
curl http://localhost:8080/metrics

# Expected output should include:
# - http_request_duration_seconds
# - http_requests_total
# - http_errors_total
# - go_* (Go runtime metrics)
# - process_* (Process metrics)

4. Verify HTTP Server Functionality

# Make a request to trigger logging and metrics
curl -v http://localhost:8080/healthz

# Check application logs for:
# - Request ID in logs
# - Structured JSON logs
# - Request method, path, status, duration

5. Verify Error Handling

Panic recovery is wired into the middleware stack, so any panic in a request handler is caught automatically and published to the error bus. There is no dedicated endpoint for triggering a failure; instead, watch the application logs for error bus messages when errors occur.

6. Verify OpenTelemetry Tracing

Development Mode (stdout)

When tracing.enabled: true and environment: development, traces are exported to stdout:

# Start the application and make requests
curl http://localhost:8080/healthz

# Check application stdout for trace output
# Should see JSON trace spans with:
# - Trace ID
# - Span ID
# - Operation name
# - Attributes (method, path, status, etc.)

Verify Trace ID in Logs

# Make a request
curl http://localhost:8080/healthz

# Check application logs for trace_id and span_id fields
# Example log entry:
# {"level":"info","msg":"HTTP request","method":"GET","path":"/healthz","status":200,"trace_id":"...","span_id":"..."}

7. Verify Database Operations

Test Database Write

You can test database operations by creating a simple test script or using the database client directly. For now, verify that migrations worked (see Verification 1).

Test Database Health Check

# The /ready endpoint includes database health check
curl http://localhost:8080/ready

# If healthy, you'll see database component status: "healthy"

Testing the Database

Direct Database Testing

  1. Connect to Database:
docker exec -it goplt-postgres psql -U goplt -d goplt
  2. Verify Tables Exist:
SELECT table_name 
FROM information_schema.tables 
WHERE table_schema = 'public' 
ORDER BY table_name;
  3. Check Table Structures:
-- Check users table
\d users

-- Check relationships
\d user_roles
\d role_permissions
  4. Test Insert Operation (manual test):
-- Note: Ent generates UUIDs, so we'd need to use the Ent client
-- This is just to verify the schema is correct
-- Actual inserts should be done through the application/Ent client

Using Application to Test Database

The database is automatically tested through:

  1. Migrations: Run on startup - if they succeed, schema is correct
  2. Health Check: /ready endpoint tests database connectivity
  3. Connection Pool: Database client manages connections automatically

Docker Compose Commands

# Start database
docker-compose up -d postgres

# Stop database
docker-compose stop postgres

# Stop and remove containers
docker-compose down

# Stop and remove containers + volumes (WARNING: deletes data)
docker-compose down -v

# View database logs
docker-compose logs -f postgres

# Access database shell
docker exec -it goplt-postgres psql -U goplt -d goplt

# Check container status
docker-compose ps

Common Issues and Solutions

Issue: Database connection fails

Symptoms: Application fails to start, error about database connection

Solutions:

  1. Ensure PostgreSQL container is running: docker-compose ps
  2. Check database DSN in config: postgres://goplt:goplt_password@localhost:5432/goplt?sslmode=disable
  3. Verify port 5432 is not in use: lsof -i :5432
  4. Check database logs: docker-compose logs postgres

Issue: Migrations fail

Symptoms: Error during startup about migrations

Solutions:

  1. Ensure database is accessible
  2. Check database user has proper permissions
  3. Verify Ent schema is correct: go generate ./internal/ent
  4. Check for existing tables that might conflict

Issue: Health check fails

Symptoms: /ready endpoint returns unhealthy

Solutions:

  1. Verify database connection
  2. Check database health: docker-compose ps
  3. Review application logs for specific error

Issue: Metrics not appearing

Symptoms: /metrics endpoint is empty or missing metrics

Solutions:

  1. Make some HTTP requests first (metrics are collected per request)
  2. Verify Prometheus registry is initialized
  3. Check middleware is registered correctly

Issue: Traces not appearing

Symptoms: No trace output in logs

Solutions:

  1. Verify tracing.enabled: true in config
  2. Check environment is set correctly (development = stdout, production = OTLP)
  3. Make HTTP requests to generate traces

Expected Application Output

When running successfully, you should see logs like:

{"level":"info","msg":"Application starting","component":"bootstrap"}
{"level":"info","msg":"Database migrations completed"}
{"level":"info","msg":"HTTP server listening","addr":"0.0.0.0:8080"}

When making requests:

{"level":"info","msg":"HTTP request","method":"GET","path":"/healthz","status":200,"duration_ms":5,"request_id":"...","trace_id":"...","span_id":"..."}

Next Steps

After verifying Epic 1:

  1. All core infrastructure is in place
  2. Database is ready for Epic 2 (Authentication & Authorization)
  3. HTTP server is ready for API endpoints
  4. Observability is ready for production monitoring

Proceed to Epic 2 to implement authentication and authorization features.