docs: add mkdocs, update links, add architecture documentation

2025-11-05 07:44:21 +01:00
parent 6a17236474
commit 54a047f5dc
351 changed files with 3482 additions and 10 deletions

View File

@@ -0,0 +1,37 @@
# ADR-0001: Go Module Path
## Status
Accepted
## Context
The project needs a Go module path that uniquely identifies the platform. This path will be used:
- In `go.mod` file
- For importing packages within the project
- For module dependencies
- For future module publishing
## Decision
Use `git.dcentral.systems/toolz/goplt` as the Go module path.
**Rationale:**
- Matches the organization's Git hosting structure
- Follows Go module naming conventions
- Clearly identifies the project as a Go platform tool
- Prevents naming conflicts with other modules
## Consequences
### Positive
- Clear, descriptive module path
- Aligns with organization's infrastructure
- Easy to identify in dependency graphs
### Negative
- Requires access to `git.dcentral.systems` for module resolution
- May need to configure GOPRIVATE/GONOPROXY if using private registry
### Implementation Notes
- Initialize module: `go mod init git.dcentral.systems/toolz/goplt`
- Update all import paths in code to use this module path
- Configure `.git/config` or Go environment variables if needed for private module access

View File

@@ -0,0 +1,39 @@
# ADR-0002: Go Version
## Status
Accepted
## Context
Go releases new versions regularly with new features, performance improvements, and security fixes. We need to choose a Go version that:
- Provides necessary features for the platform
- Has good ecosystem support
- Is stable and production-ready
- Supports required tooling (plugins, etc.)
## Decision
Use **Go 1.24.3** as the minimum required version for the platform.
**Rationale:**
- Latest stable version available
- Provides all required features for the platform
- Ensures compatibility with modern Go tooling
- Supports all planned features (modules, plugins, generics)
## Consequences
### Positive
- Access to latest Go features and performance improvements
- Better security with latest patches
- Modern tooling support
### Negative
- Requires developers to have Go 1.24.3+ installed
- CI/CD must use compatible Go version
- May limit compatibility with some older dependencies (if any)
### Implementation Notes
- Specify in `go.mod`: `go 1.24.3` (the `go` directive accepts a patch version since Go 1.21, so this enforces the stated minimum)
- Document in `README.md` and CI configuration
- Update `.github/workflows/ci.yml` to use `actions/setup-go@v5` with version `1.24.3`
- Add version check script if needed

View File

@@ -0,0 +1,49 @@
# ADR-0003: Dependency Injection Framework
## Status
Accepted
## Context
The platform requires dependency injection to:
- Manage service lifecycle
- Wire dependencies between components
- Support module system initialization
- Handle graceful shutdown
- Provide testability through dependency substitution
Options considered:
1. **uber-go/fx** - Runtime dependency injection with lifecycle management
2. **uber-go/dig** - Compile-time dependency injection
3. **Manual constructor injection** - No framework, explicit wiring
## Decision
Use **uber-go/fx** (v1.23.0+) as the dependency injection framework.
**Rationale:**
- Provides lifecycle management (OnStart/OnStop hooks) crucial for services
- Supports module-based architecture through fx.Option composition
- Runtime dependency resolution with compile-time type safety
- Excellent for modular monolith architecture
- Well-documented and actively maintained
- Used by major Go projects (Uber, etc.)
## Consequences
### Positive
- Clean lifecycle management for services
- Easy module composition via fx.Option
- Graceful shutdown handling built-in
- Test-friendly with fx.Options for test overrides
### Negative
- Runtime reflection overhead (minimal)
- Learning curve for developers unfamiliar with fx
- Slightly more complex error messages on dependency resolution failures
### Implementation Notes
- Install: `go get go.uber.org/fx@v1.23.0`
- Create `internal/di/container.go` with fx.New()
- Use fx.Provide() for service registration
- Use fx.Invoke() for initialization tasks
- Leverage fx.Lifecycle for service startup/shutdown
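
A minimal sketch of how these pieces fit together; the `NewServer` constructor and port are illustrative, not part of the platform:

```go
package main

import (
    "context"
    "net/http"

    "go.uber.org/fx"
)

// NewServer constructs an HTTP server and ties it to the fx lifecycle.
func NewServer(lc fx.Lifecycle) *http.Server {
    srv := &http.Server{Addr: ":8080"}
    lc.Append(fx.Hook{
        OnStart: func(ctx context.Context) error {
            go srv.ListenAndServe() // serve in the background
            return nil
        },
        OnStop: func(ctx context.Context) error {
            return srv.Shutdown(ctx) // graceful shutdown
        },
    })
    return srv
}

func main() {
    fx.New(
        fx.Provide(NewServer),
        fx.Invoke(func(*http.Server) {}), // force construction of the server
    ).Run()
}
```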

View File

@@ -0,0 +1,50 @@
# ADR-0004: Configuration Management Library
## Status
Accepted
## Context
The platform needs a configuration system that:
- Supports hierarchical configuration (defaults → files → env → secrets)
- Handles multiple formats (YAML, JSON, env vars)
- Provides type-safe access to configuration values
- Supports environment-specific overrides
- Can integrate with secret managers (future)
Options considered:
1. **spf13/viper** - Comprehensive configuration management
2. **envconfig** - Environment variable only
3. **koanf** - Lightweight configuration library
4. **Standard library + manual parsing** - No external dependency
## Decision
Use **spf13/viper** (v1.18.0+) with **spf13/cobra** (v1.8.0+) for configuration management.
**Rationale:**
- Industry standard for Go configuration management
- Supports multiple sources (files, env vars, flags)
- Hierarchical configuration with precedence rules
- Easy integration with Cobra for CLI commands
- Well-documented and widely used
- Supports future secret manager integration
## Consequences
### Positive
- Flexible configuration loading from multiple sources
- Easy to add new configuration sources
- Type-safe access methods
- Environment variable support via automatic env binding
### Negative
- Additional dependency
- Viper can be verbose for simple use cases
- Some learning curve for advanced features
### Implementation Notes
- Install: `go get github.com/spf13/viper@v1.18.0` and `github.com/spf13/cobra@v1.8.0`
- Create `pkg/config/config.go` interface to abstract Viper
- Implement `internal/config/viper_config.go` as concrete implementation
- Load order: `default.yaml` → `development.yaml`/`production.yaml` → env vars → secrets (future)
- Use typed getters (GetString, GetInt, GetBool) for type safety
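
A minimal sketch of the layered load order above; the `GOPLT` env prefix and the `config/` path are assumptions:

```go
package config

import (
    "strings"

    "github.com/spf13/viper"
)

// Load reads default.yaml, merges the environment-specific file, then
// lets environment variables override everything.
func Load(env string) (*viper.Viper, error) {
    v := viper.New()
    v.SetConfigType("yaml")
    v.AddConfigPath("config")

    v.SetConfigName("default")
    if err := v.ReadInConfig(); err != nil {
        return nil, err
    }

    v.SetConfigName(env) // e.g. "development" or "production"
    if err := v.MergeInConfig(); err != nil {
        return nil, err
    }

    v.SetEnvPrefix("GOPLT")
    v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
    v.AutomaticEnv()
    return v, nil
}
```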

View File

@@ -0,0 +1,50 @@
# ADR-0005: Logging Framework
## Status
Accepted
## Context
The platform requires structured logging that:
- Supports multiple log levels
- Provides structured output (JSON for production)
- Allows adding contextual fields
- Performs well under load
- Integrates with observability tools
Options considered:
1. **go.uber.org/zap** - High-performance structured logging
2. **rs/zerolog** - Zero-allocation logger
3. **sirupsen/logrus** - Structured logger (maintenance mode)
4. **Standard library log** - Basic logging (insufficient)
## Decision
Use **go.uber.org/zap** (v1.26.0+) as the logging framework.
**Rationale:**
- Industry standard for high-performance Go applications
- Excellent structured logging with field support
- Very low overhead (designed for high-throughput systems)
- JSON output for production, human-readable for development
- Strong ecosystem integration
- Actively maintained by Uber
## Consequences
### Positive
- High performance (low latency, high throughput)
- Rich structured logging with fields
- Easy integration with observability tools
- Configurable output formats (JSON/console)
### Negative
- Slightly more verbose API than standard library
- Requires wrapping for common use cases (we'll abstract via interface)
### Implementation Notes
- Install: `go get go.uber.org/zap@v1.26.0`
- Create `pkg/logger/logger.go` interface to abstract zap
- Implement `internal/logger/zap_logger.go` as concrete implementation
- Use JSON encoder for production, console encoder for development
- Support request-scoped fields via context
- Export global logger via `pkg/logger` package
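
A minimal constructor sketch switching encoders by environment:

```go
package logger

import "go.uber.org/zap"

// New returns a JSON-encoded logger in production and a human-readable
// console logger in development, as decided above.
func New(production bool) (*zap.Logger, error) {
    if production {
        return zap.NewProduction() // JSON output, info level and above
    }
    return zap.NewDevelopment() // console output, debug level and above
}
```

Callers then attach structured fields, e.g. `log.Info("server started", zap.Int("port", 8080))`.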

View File

@@ -0,0 +1,50 @@
# ADR-0006: HTTP Framework
## Status
Accepted
## Context
The platform needs an HTTP framework for:
- REST API endpoints
- Middleware support (auth, logging, metrics)
- Request/response handling
- Route registration from modules
- Integration with observability tools
Options considered:
1. **gin-gonic/gin** - Fast, feature-rich HTTP web framework
2. **gorilla/mux** - Lightweight router
3. **go-chi/chi** - Lightweight, idiomatic router
4. **net/http** (standard library) - No external dependency
## Decision
Use **gin-gonic/gin** (v1.9.1+) as the HTTP framework.
**Rationale:**
- Fast performance (comparable to net/http)
- Rich middleware ecosystem
- Excellent for REST APIs
- Easy route grouping (useful for modules)
- Good OpenTelemetry integration support
- Widely used and well-documented
- Recommended in playbook-golang.md
## Consequences
### Positive
- High performance
- Easy middleware chaining
- Route grouping supports module architecture
- Good ecosystem support
### Negative
- Additional dependency (though lightweight)
- Slight learning curve for developers unfamiliar with Gin
### Implementation Notes
- Install: `go get github.com/gin-gonic/gin@v1.9.1`
- Create router in `internal/server/server.go`
- Use route groups for module isolation: `r.Group("/api/v1/blog")`
- Add middleware stack: logging, recovery, metrics, auth (later)
- Support graceful shutdown via fx lifecycle
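
A minimal router sketch with the middleware stack and a module route group (the handler body is illustrative):

```go
package server

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

// NewRouter builds the base engine, attaches the common middleware, and
// shows a module-owned route group under /api/v1/blog.
func NewRouter() *gin.Engine {
    r := gin.New()
    r.Use(gin.Logger(), gin.Recovery())

    blog := r.Group("/api/v1/blog")
    blog.GET("/posts", func(c *gin.Context) {
        c.JSON(http.StatusOK, gin.H{"posts": []string{}})
    })
    return r
}
```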

View File

@@ -0,0 +1,82 @@
# ADR-0007: Project Directory Structure
## Status
Accepted
## Context
The project needs a clear, scalable directory structure that:
- Follows Go best practices
- Separates public interfaces from implementations
- Supports modular architecture
- Is maintainable and discoverable
- Aligns with Go community standards
## Decision
Adopt a **standard Go project layout** with **internal/** and **pkg/** separation:
```
goplt/
├── cmd/
│ └── platform/ # Application entry point
├── internal/ # Private implementation code
│ ├── di/ # Dependency injection
│ ├── registry/ # Module registry
│ ├── pluginloader/ # Plugin loader (optional)
│ ├── config/ # Config implementation
│ ├── logger/ # Logger implementation
│ └── infra/ # Infrastructure adapters
├── pkg/ # Public interfaces (exported)
│ ├── config/ # ConfigProvider interface
│ ├── logger/ # Logger interface
│ ├── module/ # IModule interface
│ ├── auth/ # Auth interfaces (Phase 2)
│ ├── perm/ # Permission DSL (Phase 2)
│ └── infra/ # Infrastructure interfaces
├── modules/ # Feature modules
│ └── blog/ # Sample module (Phase 4)
├── config/ # Configuration files
│ ├── default.yaml
│ ├── development.yaml
│ └── production.yaml
├── api/ # OpenAPI specs
├── scripts/ # Build/test scripts
├── docs/ # Documentation
│ └── adr/ # Architecture Decision Records
├── ops/ # Operations (Grafana dashboards, etc.)
├── .github/
│ └── workflows/
│ └── ci.yml
├── Dockerfile
├── docker-compose.yml
├── docker-compose.test.yml
└── go.mod
```
**Rationale:**
- `internal/` prevents external packages from importing implementation details
- `pkg/` exposes only interfaces that modules need
- `cmd/` follows Go standard for application entry points
- `modules/` clearly separates feature modules
- `config/` centralizes configuration files
- Separates concerns and supports clean architecture
## Consequences
### Positive
- Clear separation of concerns
- Prevents circular dependencies
- Easy to navigate and understand
- Aligns with Go community standards
- Supports modular architecture
### Negative
- Slightly more directories than minimal structure
- Requires discipline to maintain boundaries
### Implementation Notes
- Initialize with `go mod init git.dcentral.systems/toolz/goplt`
- Create all directories upfront in Phase 0
- Document structure in `README.md`
- Enforce boundaries via `internal/` package visibility
- Use `go build ./...` to verify structure

View File

@@ -0,0 +1,57 @@
# ADR-0008: Error Handling Strategy
## Status
Accepted
## Context
Go's error handling philosophy requires explicit error checking. We need a consistent approach for:
- Error creation and wrapping
- Error propagation
- Error classification (domain vs infrastructure)
- Error reporting (logging, monitoring)
- HTTP error responses
## Decision
Adopt a **wrapped error pattern** with **structured error types**:
1. **Error Wrapping**: Use `fmt.Errorf("context: %w", err)` for error wrapping
2. **Error Types**: Define custom error types for domain errors
3. **Error Classification**: Distinguish between:
- Domain errors (business logic failures)
- Infrastructure errors (external system failures)
- Validation errors (input validation failures)
4. **Error Context**: Always wrap errors with context about where they occurred
**Rationale:**
- Follows Go 1.13+ error wrapping best practices
- Enables error inspection with `errors.Is()` and `errors.As()`
- Maintains error chains for debugging
- Allows structured error handling
## Consequences
### Positive
- Full error traceability through call stack
- Can inspect and handle specific error types
- Better debugging with error context
- Aligns with Go best practices
### Negative
- Requires discipline to wrap errors consistently
- Can be verbose in some cases
### Implementation Notes
- Always wrap errors: `return nil, fmt.Errorf("failed to load config: %w", err)`
- Create error types for domain errors:
```go
type ConfigError struct {
    Key   string
    Cause error
}

func (e *ConfigError) Error() string { ... }
func (e *ConfigError) Unwrap() error { return e.Cause }
```
- Use `errors.Is()` and `errors.As()` for error checking
- Log errors with context before returning
- Map domain errors to HTTP status codes in handlers
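
For illustration, a self-contained sketch of the wrap-then-inspect flow described above; the `loadPort` function and `server.port` key are hypothetical:

```go
package main

import (
    "errors"
    "fmt"
)

// ConfigError mirrors the domain error type sketched above.
type ConfigError struct {
    Key   string
    Cause error
}

func (e *ConfigError) Error() string { return fmt.Sprintf("config key %q: %v", e.Key, e.Cause) }
func (e *ConfigError) Unwrap() error { return e.Cause }

func loadPort() error {
    return fmt.Errorf("failed to load config: %w",
        &ConfigError{Key: "server.port", Cause: errors.New("missing")})
}

func main() {
    err := loadPort()
    var cfgErr *ConfigError
    if errors.As(err, &cfgErr) { // unwraps through the fmt.Errorf wrapper
        fmt.Println("offending key:", cfgErr.Key)
    }
}
```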

View File

@@ -0,0 +1,56 @@
# ADR-0009: Context Key Types
## Status
Accepted
## Context
The platform will use `context.Context` to propagate request-scoped values such as:
- User ID (from authentication)
- Request ID (for tracing)
- Tenant ID (for multi-tenancy)
- Logger instance (with request-scoped fields)
Go best practices recommend using typed keys instead of string keys to avoid collisions.
## Decision
Use **typed context keys** for all context values:
```go
type contextKey string
const (
    userIDKey    contextKey = "user_id"
    requestIDKey contextKey = "request_id"
    tenantIDKey  contextKey = "tenant_id"
    loggerKey    contextKey = "logger"
)
```
**Rationale:**
- Prevents key collisions between packages
- Type-safe access to context values
- Aligns with Go best practices (see `context.WithValue` documentation)
- Makes context usage explicit and discoverable
## Consequences
### Positive
- Type-safe context access
- Prevents accidental key collisions
- Clear intent in code
- Better IDE support
### Negative
- Slightly more verbose than string keys
- Requires defining keys upfront
### Implementation Notes
- Create `pkg/context/keys.go` with all context key definitions
- Provide helper functions for setting/getting values:
```go
func WithUserID(ctx context.Context, userID string) context.Context
func UserIDFromContext(ctx context.Context) (string, bool)
```
- Use in middleware and services
- Document all context keys and their usage
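
A minimal sketch of the helper pair, assuming the package is named `ctxkeys` so it does not shadow the standard `context` package:

```go
package ctxkeys

import "context"

type contextKey string

const userIDKey contextKey = "user_id"

// WithUserID stores the authenticated user ID in the context.
func WithUserID(ctx context.Context, userID string) context.Context {
    return context.WithValue(ctx, userIDKey, userID)
}

// UserIDFromContext retrieves the user ID, reporting whether it was set.
func UserIDFromContext(ctx context.Context) (string, bool) {
    id, ok := ctx.Value(userIDKey).(string)
    return id, ok
}
```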

View File

@@ -0,0 +1,50 @@
# ADR-0010: CI/CD Platform
## Status
Accepted
## Context
The platform needs a CI/CD system for:
- Automated testing on pull requests
- Code quality checks (linting, formatting)
- Building binaries and Docker images
- Publishing artifacts
- Running integration tests
Options considered:
1. **GitHub Actions** - Native GitHub integration
2. **GitLab CI** - If using GitLab
3. **Jenkins** - Self-hosted option
4. **CircleCI** - Cloud-based CI/CD
## Decision
Use **GitHub Actions** for CI/CD pipeline.
**Rationale:**
- Native integration with GitHub repositories
- Free for public repos, reasonable for private
- Rich ecosystem of actions
- Easy to configure with YAML
- Good documentation and community support
- Recommended in playbook-golang.md
## Consequences
### Positive
- Easy setup and configuration
- Good GitHub integration
- Large action marketplace
- Free for public repositories
### Negative
- Tied to GitHub (if migrating Git hosts, need to migrate CI)
- Limited customization compared to self-hosted solutions
### Implementation Notes
- Create `.github/workflows/ci.yml`
- Use `actions/setup-go@v5` for Go setup
- Configure caching for Go modules
- Run: linting, unit tests, integration tests, build
- Use `actions/cache@v4` for module caching
- Add build matrix if needed for multiple Go versions (future)

View File

@@ -0,0 +1,53 @@
# ADR-0011: Code Generation Tools
## Status
Accepted
## Context
The platform will use code generation for:
- Permission constants from module manifests
- Ent ORM code generation
- Mock generation for testing
- OpenAPI client/server code (future)
We need to decide on tooling and workflow.
## Decision
Use **standard Go generation tools** with `go generate`:
1. **Ent ORM**: `entgo.io/ent/cmd/ent` for schema code generation
2. **Mocks**: `github.com/vektra/mockery/v2` or `github.com/golang/mock/mockgen`
3. **Permissions**: Custom `scripts/generate-permissions.go`
4. **OpenAPI**: `github.com/deepmap/oapi-codegen` (future)
**Workflow:**
- Use `//go:generate` directives in source files
- Run `go generate ./...` before commits
- Document in `Makefile` with `make generate` target
- CI should verify generated code is up-to-date
**Rationale:**
- Standard Go tooling, well-supported
- `go generate` is the idiomatic way to run code generation
- Easy to integrate into CI/CD
- Reduces manual code maintenance
## Consequences
### Positive
- Automated code generation reduces errors
- Consistent code style
- Easy to maintain
- Standard Go workflow
### Negative
- Requires developers to run generation before commits
- Generated code must be committed (or verified in CI)
- Slight learning curve for new developers
### Implementation Notes
- Add `//go:generate` directives where needed
- Create `Makefile` target: `make generate`
- Add CI step to verify generated code: `go generate ./... && git diff --exit-code`
- Document in `CONTRIBUTING.md`

View File

@@ -0,0 +1,62 @@
# ADR-0012: Logger Interface Design
## Status
Accepted
## Context
We're using zap for logging, but want to abstract it behind an interface for:
- Testability (mock logger in tests)
- Flexibility (could swap implementations)
- Module compatibility (modules use interface, not concrete type)
We need to decide on the interface design.
## Decision
Create a **simple logger interface** that mirrors zap's API pattern while hiding the concrete types behind `pkg/logger`:
```go
type Field interface {
    // Field represents a key-value pair for structured logging
}

type Logger interface {
    Debug(msg string, fields ...Field)
    Info(msg string, fields ...Field)
    Warn(msg string, fields ...Field)
    Error(msg string, fields ...Field)
    With(fields ...Field) Logger
}
```
**Implementation:**
- Use `zap.Field` as the Field type (no abstraction needed for now)
- Provide helper functions in `pkg/logger` for creating fields:
```go
func String(key, value string) Field
func Int(key string, value int) Field
func Error(err error) Field
```
**Rationale:**
- Simple interface that modules can depend on
- Matches zap's usage patterns
- Easy to test with mock implementations
- Allows future swap if needed (though unlikely)
## Consequences
### Positive
- Clean abstraction for modules
- Testable with mocks
- Simple API for modules to use
### Negative
- Slight indirection overhead
- Need to maintain interface compatibility
### Implementation Notes
- Define interface in `pkg/logger/logger.go`
- Implement in `internal/logger/zap_logger.go`
- Export helper functions in `pkg/logger/fields.go`
- Modules import `pkg/logger`, not `internal/logger`
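
A sketch of how the interface, field helpers, and zap-backed implementation could line up; everything sits in one package here for brevity, whereas the platform splits it across `pkg/logger` and `internal/logger`, and the `Field = zap.Field` alias is one way to realize "use zap.Field as the Field type":

```go
package logger

import "go.uber.org/zap"

type Field = zap.Field

type Logger interface {
    Debug(msg string, fields ...Field)
    Info(msg string, fields ...Field)
    Warn(msg string, fields ...Field)
    Error(msg string, fields ...Field)
    With(fields ...Field) Logger
}

// Field helpers re-export zap's constructors.
func String(key, value string) Field  { return zap.String(key, value) }
func Int(key string, value int) Field { return zap.Int(key, value) }
func Error(err error) Field           { return zap.Error(err) }

// zapLogger is the concrete implementation backed by *zap.Logger.
type zapLogger struct{ l *zap.Logger }

func (z *zapLogger) Debug(msg string, fields ...Field) { z.l.Debug(msg, fields...) }
func (z *zapLogger) Info(msg string, fields ...Field)  { z.l.Info(msg, fields...) }
func (z *zapLogger) Warn(msg string, fields ...Field)  { z.l.Warn(msg, fields...) }
func (z *zapLogger) Error(msg string, fields ...Field) { z.l.Error(msg, fields...) }
func (z *zapLogger) With(fields ...Field) Logger       { return &zapLogger{l: z.l.With(fields...)} }
```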

View File

@@ -0,0 +1,54 @@
# ADR-0013: Database ORM Selection
## Status
Accepted
## Context
The platform needs a database ORM/library that:
- Supports PostgreSQL (primary database)
- Provides type-safe query building
- Supports code generation (reduces boilerplate)
- Handles migrations
- Supports relationships (many-to-many, etc.)
- Integrates cleanly with Go tooling (`go generate`) and observability instrumentation
Options considered:
1. **entgo.io/ent** - Code-generated, type-safe ORM
2. **gorm.io/gorm** - Feature-rich ORM with reflection
3. **sqlx** - Lightweight wrapper around database/sql
4. **Standard library database/sql** - No ORM, raw SQL
## Decision
Use **entgo.io/ent** as the primary ORM for the platform.
**Rationale:**
- Code generation provides compile-time type safety
- Excellent schema definition and migration support
- Strong relationship modeling
- Good performance (no reflection at runtime)
- Active development and good documentation
- Recommended in playbook-golang.md
- Easy to integrate with OpenTelemetry
## Consequences
### Positive
- Type-safe queries eliminate runtime errors
- Schema changes are explicit and versioned
- Code generation reduces boilerplate
- Good migration support
- Strong relationship support
### Negative
- Requires code generation step (`go generate`)
- Learning curve for developers unfamiliar with Ent
- Less flexible than raw SQL for complex queries
- Generated code must be committed or verified in CI
### Implementation Notes
- Install: `go get entgo.io/ent/cmd/ent`
- Initialize schema: `go run entgo.io/ent/cmd/ent init User Role Permission`
- Use `//go:generate` directives for code generation
- Run migrations on startup via `client.Schema.Create()`
- Create wrapper in `internal/infra/database/client.go` for DI injection
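
An illustrative schema of the kind `ent init` scaffolds; the `User` fields shown are assumptions, not the platform's final schema:

```go
package schema

import (
    "time"

    "entgo.io/ent"
    "entgo.io/ent/schema/field"
)

// User defines the users table.
type User struct {
    ent.Schema
}

// Fields declares the typed columns; ent generates the query builders.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("email").Unique(),
        field.String("password_hash").Sensitive(),
        field.Time("created_at").Default(time.Now),
    }
}
```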

View File

@@ -0,0 +1,52 @@
# ADR-0014: Health Check Implementation
## Status
Accepted
## Context
The platform needs health check endpoints for:
- Kubernetes liveness probes (`/healthz`)
- Kubernetes readiness probes (`/ready`)
- Monitoring and alerting
- Load balancer health checks
Health checks should be:
- Fast and lightweight
- Check critical dependencies (database, cache, etc.)
- Provide clear status indicators
## Decision
Implement **custom health check registry** with composable checkers:
1. **Liveness endpoint** (`/healthz`): Always returns 200 if process is running
2. **Readiness endpoint** (`/ready`): Checks all registered health checkers
3. **Health check interface**: `type HealthChecker interface { Check(ctx context.Context) error }`
4. **Registry pattern**: Modules can register additional health checkers
**Rationale:**
- Custom implementation gives full control
- Composable design allows modules to add checks
- Simple interface is easy to test
- No external dependency for basic functionality
- Can extend with Prometheus metrics later
## Consequences
### Positive
- Lightweight and fast
- Extensible by modules
- Easy to test
- Clear separation of liveness vs readiness
### Negative
- Need to implement ourselves (though simple)
- Must maintain the registry
### Implementation Notes
- Create `pkg/health/health.go` interface
- Implement `internal/health/registry.go` with checker map
- Register core checkers: database, cache (if enabled)
- Add endpoints to HTTP router
- Return JSON response: `{"status": "ok", "checks": {...}}`
- Consider timeout (e.g., 5 seconds) for readiness checks
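
A minimal registry sketch under the interface named above; method names beyond `Check` are assumptions:

```go
package health

import (
    "context"
    "sync"
)

// HealthChecker is the interface from the decision above.
type HealthChecker interface {
    Check(ctx context.Context) error
}

// Registry holds named checkers registered by core and modules.
type Registry struct {
    mu       sync.RWMutex
    checkers map[string]HealthChecker
}

func NewRegistry() *Registry {
    return &Registry{checkers: make(map[string]HealthChecker)}
}

func (r *Registry) Register(name string, c HealthChecker) {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.checkers[name] = c
}

// Check runs every registered checker and reports per-check status,
// which the /ready handler can render as JSON.
func (r *Registry) Check(ctx context.Context) map[string]error {
    r.mu.RLock()
    defer r.mu.RUnlock()
    results := make(map[string]error, len(r.checkers))
    for name, c := range r.checkers {
        results[name] = c.Check(ctx)
    }
    return results
}
```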

View File

@@ -0,0 +1,55 @@
# ADR-0015: Error Bus Implementation
## Status
Accepted
## Context
The platform needs a centralized error handling mechanism for:
- Capturing panics and errors
- Logging errors consistently
- Sending errors to external services (Sentry, etc.)
- Avoiding error handling duplication
Options considered:
1. **Channel-based in-process bus** - Simple, Go-idiomatic
2. **Event bus integration** - Use existing event bus
3. **Direct logging** - No bus, direct integration
4. **External service integration** - Direct to Sentry
## Decision
Implement a **channel-based error bus** with pluggable sinks:
1. **Error bus interface**: `type ErrorPublisher interface { Publish(err error) }`
2. **Channel-based implementation**: Background goroutine consumes errors from channel
3. **Pluggable sinks**: Logger (always), Sentry (optional, Phase 6)
4. **Panic recovery middleware**: Automatically publishes panics to error bus
**Rationale:**
- Simple, idiomatic Go pattern
- Non-blocking error publishing (buffered channel)
- Decouples error capture from error handling
- Easy to add new sinks (Sentry, logging, metrics)
- Can be extended to use event bus later if needed
## Consequences
### Positive
- Centralized error handling
- Non-blocking (doesn't slow down request path)
- Easy to extend with new sinks
- Consistent error handling across the platform
### Negative
- Additional goroutine overhead (minimal)
- Must ensure error bus doesn't become bottleneck
### Implementation Notes
- Create `pkg/errorbus/errorbus.go` interface
- Implement `internal/errorbus/channel_bus.go`:
- Buffered channel (e.g., size 100)
- Background goroutine consumes errors
- Multiple sinks (logger, optional Sentry)
- Add panic recovery middleware that publishes to bus
- Register in DI container as singleton
- Monitor channel size to detect error storms
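
A minimal sketch of the buffered channel bus; the `Sink` function type is an assumption standing in for the logger and (later) Sentry sinks:

```go
package errorbus

import "log"

// Sink consumes published errors.
type Sink func(err error)

// ChannelBus decouples error capture from error handling.
type ChannelBus struct {
    ch    chan error
    sinks []Sink
}

func New(buffer int, sinks ...Sink) *ChannelBus {
    b := &ChannelBus{ch: make(chan error, buffer), sinks: sinks}
    go b.run() // background consumer
    return b
}

func (b *ChannelBus) run() {
    for err := range b.ch {
        for _, sink := range b.sinks {
            sink(err)
        }
    }
}

// Publish never blocks the request path; a full buffer signals an
// error storm and the error is dropped.
func (b *ChannelBus) Publish(err error) {
    select {
    case b.ch <- err:
    default:
        log.Println("errorbus: buffer full, dropping error")
    }
}
```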

View File

@@ -0,0 +1,56 @@
# ADR-0016: OpenTelemetry Observability Strategy
## Status
Accepted
## Context
The platform needs distributed tracing and observability for:
- Request tracing across services/modules
- Performance monitoring
- Debugging production issues
- Integration with observability tools (Jaeger, Grafana, etc.)
Options considered:
1. **OpenTelemetry** - Industry standard, vendor-neutral
2. **Zipkin** - Older standard, less ecosystem support
3. **Custom tracing** - Build our own
4. **No tracing** - Only logs and metrics
## Decision
Use **OpenTelemetry (OTEL)** for all observability:
1. **Tracing**: Distributed tracing with spans
2. **Metrics**: Prometheus-compatible metrics
3. **Logs**: Structured logs with trace correlation
4. **Export**: OTLP collector for production, stdout for development
**Rationale:**
- Industry standard, vendor-neutral
- Excellent Go SDK support
- Integrates with major observability tools
- Supports metrics, traces, and logs
- Recommended in playbook-golang.md
- Future-proof (not locked to specific vendor)
## Consequences
### Positive
- Vendor-neutral (can switch backends)
- Rich ecosystem and tooling
- Excellent Go SDK
- Supports all observability signals
### Negative
- Learning curve for OpenTelemetry concepts
- Slight overhead (minimal with sampling)
- Requires OTLP collector or compatible backend
### Implementation Notes
- Install: `go.opentelemetry.io/otel` and contrib packages
- Initialize TracerProvider in `internal/observability/tracer.go`
- Use HTTP instrumentation middleware: `otelhttp.NewHandler()`
- Add database instrumentation via Ent interceptor
- Export to stdout for development, OTLP for production
- Include trace ID in structured logs
- Configure sampling for production (e.g., 10% or adaptive)
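
A minimal development-mode initialization sketch using the stdout exporter; production would swap in the OTLP exporter and a sampler:

```go
package observability

import (
    "context"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
    sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// InitTracer wires a stdout span exporter; the returned shutdown
// function flushes pending spans and should run on exit.
func InitTracer() (func(context.Context) error, error) {
    exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
    if err != nil {
        return nil, err
    }
    tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
    otel.SetTracerProvider(tp)
    return tp.Shutdown, nil
}
```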

View File

@@ -0,0 +1,55 @@
# ADR-0017: JWT Token Strategy
## Status
Accepted
## Context
The platform needs authentication tokens that:
- Are stateless (no server-side session storage)
- Support role and permission claims
- Can be revoked when needed (a known challenge for stateless tokens)
- Have appropriate lifetimes
- Support multi-tenancy (tenant ID in claims)
Token strategies considered:
1. **Short-lived access tokens + long-lived refresh tokens** - Industry standard
2. **Single long-lived tokens** - Simple but insecure
3. **Short-lived tokens only** - Secure but poor UX
4. **Session-based** - Stateful, requires storage
## Decision
Use **short-lived access tokens + long-lived refresh tokens**:
1. **Access tokens**: 15 minutes lifetime, contain user ID, roles, tenant ID
2. **Refresh tokens**: 7 days lifetime, stored in database (for revocation)
3. **Token format**: JWT with claims: `sub` (user ID), `roles`, `tenant_id`, `exp`
4. **Revocation**: Refresh tokens stored in DB, can be revoked/deleted
**Rationale:**
- Industry best practice (OAuth2/OIDC pattern)
- Good balance of security and UX
- Access tokens can't be revoked (short lifetime mitigates risk)
- Refresh tokens can be revoked (stored in DB)
- Supports stateless authentication for most requests
## Consequences
### Positive
- Secure (short access token lifetime)
- Good UX (refresh tokens prevent frequent re-login)
- Stateless for most requests (access tokens)
- Supports revocation (refresh tokens)
### Negative
- Requires refresh token storage (DB table)
- More complex than single token
- Need to handle token refresh flow
### Implementation Notes
- Use `github.com/golang-jwt/jwt/v5` for JWT handling
- Store refresh tokens in `refresh_tokens` table (user_id, token_hash, expires_at)
- Generate access tokens with HS256 or RS256 signing
- Include roles in token claims (not just role IDs)
- Validate token signature and expiration on each request
- Refresh endpoint validates refresh token and issues new access token
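
A minimal HS256 access-token sketch carrying the claims listed above; secret management and the refresh-token flow are out of scope here:

```go
package auth

import (
    "time"

    "github.com/golang-jwt/jwt/v5"
)

// NewAccessToken issues a 15-minute access token with the user, tenant,
// and role claims described in the decision.
func NewAccessToken(secret []byte, userID, tenantID string, roles []string) (string, error) {
    now := time.Now()
    claims := jwt.MapClaims{
        "sub":       userID,
        "tenant_id": tenantID,
        "roles":     roles,
        "iat":       now.Unix(),
        "exp":       now.Add(15 * time.Minute).Unix(),
    }
    token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
    return token.SignedString(secret)
}
```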

View File

@@ -0,0 +1,53 @@
# ADR-0018: Password Hashing Algorithm
## Status
Accepted
## Context
The platform needs to securely store user passwords. Requirements:
- Resist brute-force attacks
- Resist rainbow table attacks
- Future-proof against advances in computing
- Reasonable performance (not too slow)
Options considered:
1. **bcrypt** - Battle-tested, widely used
2. **argon2id** - Modern, memory-hard, recommended by OWASP
3. **scrypt** - Memory-hard, good alternative
4. **PBKDF2** - Older standard, less secure
## Decision
Use **argon2id** for password hashing with recommended parameters:
- **Algorithm**: argon2id (variant)
- **Memory**: 64 MB (65536 KB)
- **Iterations**: 3 (time cost)
- **Parallelism**: 4 (number of threads)
- **Salt length**: 16 bytes (random, unique per password)
**Rationale:**
- Recommended by OWASP for new applications
- Memory-hard algorithm (resistant to GPU/ASIC attacks)
- Good balance of security and performance
- Future-proof design
- Supported by the maintained `golang.org/x/crypto/argon2` package
## Consequences
### Positive
- Strong security guarantees
- Memory-hard (resistant to hardware attacks)
- OWASP recommended
- Well-supported Go implementation (`golang.org/x/crypto`)
### Negative
- Slightly slower than bcrypt (acceptable trade-off)
- Requires tuning parameters for production
### Implementation Notes
- Use `golang.org/x/crypto/argon2` package
- Store hash in format: `$argon2id$v=19$m=65536,t=3,p=4$salt$hash`
- Use `crypto/rand` for salt generation
- Verify passwords by recomputing the hash with the stored salt/parameters and comparing with `subtle.ConstantTimeCompare` (the `argon2` package does not ship a verify helper)
- Consider increasing parameters for high-security environments
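
A hashing/verification sketch using the parameters above; since `golang.org/x/crypto/argon2` only derives keys, verification recomputes the hash and compares in constant time (a full implementation would parse the parameters out of the encoded string):

```go
package password

import (
    "crypto/rand"
    "crypto/subtle"
    "encoding/base64"
    "fmt"

    "golang.org/x/crypto/argon2"
)

// Hash derives a key with argon2id (m=64MB, t=3, p=4) and a random salt.
func Hash(password string) (string, error) {
    salt := make([]byte, 16)
    if _, err := rand.Read(salt); err != nil {
        return "", err
    }
    key := argon2.IDKey([]byte(password), salt, 3, 64*1024, 4, 32)
    return fmt.Sprintf("$argon2id$v=19$m=65536,t=3,p=4$%s$%s",
        base64.RawStdEncoding.EncodeToString(salt),
        base64.RawStdEncoding.EncodeToString(key)), nil
}

// Verify recomputes the key with the stored salt and compares it to the
// stored key in constant time.
func Verify(password string, salt, expected []byte) bool {
    key := argon2.IDKey([]byte(password), salt, 3, 64*1024, 4, 32)
    return subtle.ConstantTimeCompare(key, expected) == 1
}
```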

View File

@@ -0,0 +1,57 @@
# ADR-0019: Permission DSL Format
## Status
Accepted
## Context
The platform needs a permission system that:
- Is extensible by modules
- Prevents typos and errors (compile-time safety)
- Supports hierarchical permissions
- Is easy to understand and use
Permission formats considered:
1. **String format**: `"module.resource.action"` - Simple, flexible
2. **Enum/Constants**: Type-safe but less flexible
3. **Hierarchical tree**: Complex but powerful
4. **Bitmask**: Efficient but hard to read
## Decision
Use **string-based permission format** with **code-generated constants**:
1. **Format**: `"{module}.{resource}.{action}"`
- Examples: `blog.post.create`, `user.read`, `system.health.check`
2. **Code generation**: Generate constants from `module.yaml` files
3. **Type safety**: `type Permission string` with generated constants
4. **Validation**: Compile-time constants prevent typos
**Rationale:**
- Simple and readable
- Easy to extend (modules define in manifest)
- Code generation provides compile-time safety
- Flexible (modules can define any format)
- Hierarchical structure is intuitive
- Easy to parse and match
## Consequences
### Positive
- Simple and intuitive format
- Compile-time safety via code generation
- Easy to extend by modules
- Human-readable
- Flexible for various permission models
### Negative
- String comparisons (minimal performance impact)
- Requires code generation step
- Potential for permission string conflicts (mitigated by module prefix)
### Implementation Notes
- Define `type Permission string` in `pkg/perm/perm.go`
- Create code generator: `scripts/generate-permissions.go`
- Scan `modules/*/module.yaml` for permissions
- Generate constants in `pkg/perm/generated.go`
- Use `//go:generate` directive
- Validate format: `^[a-z0-9]+(\.[a-z0-9]+)*$` (lowercase, dots)
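
A sketch of the generated constants and the format check; the specific constants are illustrative of what the generator would emit from `modules/blog/module.yaml`:

```go
package perm

import "regexp"

// Permission is the string-based type from the decision above.
type Permission string

// Illustrative generated constants.
const (
    BlogPostCreate Permission = "blog.post.create"
    BlogPostRead   Permission = "blog.post.read"
    BlogPostDelete Permission = "blog.post.delete"
)

var format = regexp.MustCompile(`^[a-z0-9]+(\.[a-z0-9]+)*$`)

// Valid reports whether the permission matches the required format.
func (p Permission) Valid() bool { return format.MatchString(string(p)) }
```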

View File

@@ -0,0 +1,63 @@
# ADR-0020: Audit Logging Storage
## Status
Accepted
## Context
The platform needs to audit all security-relevant actions:
- User logins and authentication attempts
- Permission changes
- Data modifications
- Administrative actions
Audit logs must be:
- Immutable (append-only)
- Queryable
- Performant (don't slow down operations)
- Compliant with audit requirements
Storage options considered:
1. **PostgreSQL table** - Simple, queryable, transactional
2. **Elasticsearch** - Excellent for searching, but additional dependency
3. **File-based logs** - Simple but hard to query
4. **External audit service** - Overkill for initial version
## Decision
Store audit logs in **PostgreSQL append-only table** with JSON metadata:
1. **Table structure**: `audit_logs` with columns:
- `id`, `actor_id`, `action`, `target_id`, `metadata` (JSONB), `timestamp`
2. **Append-only**: No UPDATE or DELETE operations
3. **JSON metadata**: Flexible storage for additional context
4. **Indexing**: Index on `actor_id`, `action`, `timestamp` for queries
**Rationale:**
- Simple (no additional infrastructure)
- Queryable via SQL
- Transactional (consistent with other data)
- JSONB provides flexibility for metadata
- Can migrate to Elasticsearch later if needed
- Good performance for typical audit volumes
## Consequences
### Positive
- Simple implementation
- Queryable via SQL
- No additional infrastructure
- Transactional consistency
- Can archive old logs if needed
### Negative
- Adds load to primary database
- May need archiving strategy for large volumes
- Less powerful search than Elasticsearch
### Implementation Notes
- Create `audit_logs` table via Ent schema
- Use JSONB for metadata column (PostgreSQL-specific)
- Add indexes: `(actor_id, timestamp)`, `(action, timestamp)`
- Implement async logging (optional, via channel) for high throughput
- Consider partitioning by date for large volumes
- Add retention policy (e.g., archive after 1 year)
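
An illustrative Ent schema sketch for the table and indexes described above; the column list follows the decision, while the defaults are assumptions:

```go
package schema

import (
    "time"

    "entgo.io/ent"
    "entgo.io/ent/schema/field"
    "entgo.io/ent/schema/index"
)

// AuditLog models the append-only audit_logs table.
type AuditLog struct {
    ent.Schema
}

func (AuditLog) Fields() []ent.Field {
    return []ent.Field{
        field.String("actor_id"),
        field.String("action"),
        field.String("target_id").Optional(),
        field.JSON("metadata", map[string]any{}).Optional(),
        field.Time("timestamp").Default(time.Now).Immutable(),
    }
}

func (AuditLog) Indexes() []ent.Index {
    return []ent.Index{
        index.Fields("actor_id", "timestamp"),
        index.Fields("action", "timestamp"),
    }
}
```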

View File

@@ -0,0 +1,54 @@
# ADR-0021: Module Loading Strategy
## Status
Accepted
## Context
The platform needs to support pluggable modules. Two approaches:
1. **Static registration** - Modules compiled into binary
2. **Dynamic plugin loading** - Load `.so` files at runtime
Each has trade-offs for development, CI, and production.
## Decision
Support **both approaches** with **static registration as primary**:
1. **Static registration (primary)**:
- Modules register via `init()` function
- Imported via `import _ "module/pkg"` in main
- Works everywhere (Windows, Linux, macOS)
- Compile-time type safety
2. **Dynamic plugin loading (optional)**:
- Support via Go `plugin` package
- Load `.so` files from `./plugins/` directory
- Only for production scenarios requiring hot-swap
- Linux/macOS only (Go plugin limitation)
**Rationale:**
- Static registration is simpler and more reliable
- Works in CI/CD (no plugin compilation needed)
- Compile-time safety catches errors early
- Dynamic loading provides flexibility for specific use cases
- Modules can choose their approach
## Consequences
### Positive
- Flexible: static for most cases, dynamic when needed
- Static registration works everywhere
- Compile-time safety with static
- Hot-swap capability with dynamic (Linux/macOS)
### Negative
- Two code paths to maintain
- Dynamic plugins have version compatibility constraints
- Plugin debugging is harder
### Implementation Notes
- Implement static registry in `internal/registry/registry.go`
- Modules register via: `registry.Register(Module)` in `init()`
- Implement plugin loader in `internal/pluginloader/plugin_loader.go` (optional)
- Document when to use each approach
- Validate plugin version compatibility if using dynamic loading
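
A minimal static-registry sketch; the narrowed `Module` interface here stands in for the full `IModule`:

```go
package registry

import (
    "fmt"
    "sync"
)

// Module is the subset of IModule the registry needs.
type Module interface {
    Name() string
    Dependencies() []string
}

var (
    mu      sync.Mutex
    modules = map[string]Module{}
)

// Register is called from a module's init() when the module package is
// pulled in via a blank import in main.
func Register(m Module) {
    mu.Lock()
    defer mu.Unlock()
    if _, dup := modules[m.Name()]; dup {
        panic(fmt.Sprintf("registry: module %q registered twice", m.Name()))
    }
    modules[m.Name()] = m
}

// All returns the registered modules for dependency resolution.
func All() []Module {
    mu.Lock()
    defer mu.Unlock()
    out := make([]Module, 0, len(modules))
    for _, m := range modules {
        out = append(out, m)
    }
    return out
}
```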

View File

@@ -0,0 +1,56 @@
# ADR-0022: Cache Implementation
## Status
Accepted
## Context
The platform needs caching for:
- Performance optimization (reduce database load)
- Frequently accessed data (user permissions, roles)
- Session data (optional)
- Query results
Options considered:
1. **Redis** - Industry standard, feature-rich
2. **In-memory cache** - Simple, no external dependency
3. **Memcached** - Simple, but less features than Redis
4. **No cache** - Simplest, but poor performance at scale
## Decision
Use **Redis** as the primary cache with **in-memory fallback**:
1. **Primary**: Redis for production
2. **Fallback**: In-memory cache for development/testing
3. **Interface abstraction**: `Cache` interface allows swapping implementations
4. **Use cases**: Permission lookups, role assignments, query caching
**Rationale:**
- Industry standard, widely supported
- Rich feature set (TTL, pub/sub, etc.)
- Can be shared across instances (multi-instance deployments)
- Good performance
- Easy to abstract behind interface
## Consequences
### Positive
- High performance
- Shared across instances
- Rich feature set
- Easy to scale horizontally
- Abstraction allows swapping implementations
### Negative
- Additional infrastructure dependency
- Network latency (minimal with proper setup)
- Need to handle Redis failures gracefully
### Implementation Notes
- Install: `github.com/redis/go-redis/v9`
- Create `pkg/infra/cache/cache.go` interface
- Implement `internal/infra/cache/redis_cache.go`
- Implement `internal/infra/cache/memory_cache.go` for fallback
- Use connection pooling
- Handle Redis failures gracefully (fallback or error)
- Configure TTLs appropriately (e.g., 5 minutes for permissions)
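
A minimal sketch of the interface and the go-redis-backed implementation; the method set is intentionally reduced to Get/Set:

```go
package cache

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

// Cache is the abstraction both the Redis and in-memory implementations satisfy.
type Cache interface {
    Get(ctx context.Context, key string) (string, error)
    Set(ctx context.Context, key, value string, ttl time.Duration) error
}

// redisCache wraps a go-redis client behind the Cache interface.
type redisCache struct{ client *redis.Client }

func NewRedisCache(addr string) Cache {
    return &redisCache{client: redis.NewClient(&redis.Options{Addr: addr})}
}

func (c *redisCache) Get(ctx context.Context, key string) (string, error) {
    return c.client.Get(ctx, key).Result()
}

func (c *redisCache) Set(ctx context.Context, key, value string, ttl time.Duration) error {
    return c.client.Set(ctx, key, value, ttl).Err()
}
```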

View File

@@ -0,0 +1,59 @@
# ADR-0023: Event Bus Implementation
## Status
Accepted
## Context
The platform needs an event bus for:
- Module-to-module communication
- Decoupled event publishing
- Event sourcing (optional, future)
- Integration with external systems
Options considered:
1. **In-process channel-based bus** - Simple, for development/testing
2. **Kafka** - Production-grade, scalable
3. **RabbitMQ** - Alternative message broker
4. **Redis pub/sub** - Simple but less reliable
## Decision
Support **dual implementation** with **in-process primary, Kafka for production**:
1. **In-process bus (default)**:
- Channel-based implementation
- Used for development, testing, small deployments
- Simple, no external dependencies
2. **Kafka bus (production)**:
- Full Kafka integration via `segmentio/kafka-go`
- Producer/consumer groups
- Configurable via environment (switch implementation)
**Rationale:**
- In-process bus is simple for development
- Kafka provides production-grade reliability and scalability
- Interface abstraction allows swapping
- Modules don't need to know which implementation
- Can start simple and scale up
## Consequences
### Positive
- Simple for development (no Kafka needed)
- Scalable for production (Kafka)
- Flexible (can choose implementation)
- Modules are decoupled from implementation
### Negative
- Two implementations to maintain
- Need to ensure interface covers both use cases
- Kafka adds infrastructure complexity
### Implementation Notes
- Create `pkg/eventbus/eventbus.go` interface
- Implement `internal/infra/bus/inprocess_bus.go` (channel-based)
- Implement `internal/infra/bus/kafka_bus.go` (Kafka)
- Select implementation via config
- Support both sync and async event publishing
- Handle errors gracefully (retry, dead letter queue)
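
A minimal sketch of the in-process implementation behind a shared interface; the `Event`/`Handler` shapes are assumptions, and the Kafka variant would satisfy the same surface:

```go
package eventbus

import (
    "context"
    "sync"
)

type Event struct {
    Topic   string
    Payload []byte
}

type Handler func(ctx context.Context, e Event)

// InProcessBus fans events out to subscribers in-memory.
type InProcessBus struct {
    mu       sync.RWMutex
    handlers map[string][]Handler
}

func NewInProcessBus() *InProcessBus {
    return &InProcessBus{handlers: make(map[string][]Handler)}
}

func (b *InProcessBus) Subscribe(topic string, h Handler) {
    b.mu.Lock()
    defer b.mu.Unlock()
    b.handlers[topic] = append(b.handlers[topic], h)
}

// Publish delivers asynchronously; retries and dead-lettering are
// omitted in this sketch.
func (b *InProcessBus) Publish(ctx context.Context, e Event) {
    b.mu.RLock()
    defer b.mu.RUnlock()
    for _, h := range b.handlers[e.Topic] {
        go h(ctx, e)
    }
}
```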

View File

@@ -0,0 +1,56 @@
# ADR-0024: Background Job Scheduler
## Status
Accepted
## Context
The platform needs background job processing for:
- Periodic tasks (cron jobs)
- Asynchronous processing
- Long-running operations
- Retry logic for failed jobs
Options considered:
1. **asynq (Redis-based)** - Simple, feature-rich
2. **cron + custom queue** - Build our own
3. **Kafka consumers** - Use event bus
4. **External service** - AWS SQS, etc.
## Decision
Use **asynq** (Redis-backed) for job scheduling:
1. **Cron jobs**: `github.com/robfig/cron/v3` for periodic tasks
2. **Job queue**: `github.com/hibiken/asynq` for async jobs
3. **Storage**: Redis (shared with cache)
4. **Features**: Retries, backoff, job status tracking
**Rationale:**
- Simple, Redis-backed (no new infrastructure)
- Good Go library support
- Built-in retry and backoff
- Job status tracking
- Easy to integrate
- Can scale horizontally (multiple workers)
## Consequences
### Positive
- Simple (uses existing Redis)
- Feature-rich (retries, backoff)
- Good performance
- Easy to scale
- Job status tracking
### Negative
- Tied to Redis (but we're already using it)
- Requires Redis to be available
### Implementation Notes
- Install: `github.com/hibiken/asynq` and `github.com/robfig/cron/v3`
- Create `pkg/scheduler/scheduler.go` interface
- Implement `internal/infra/scheduler/asynq_scheduler.go`
- Register jobs in `internal/infra/scheduler/job_registry.go`
- Start worker in fx lifecycle
- Configure retry policies (exponential backoff)
- Add job monitoring endpoint
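
An enqueue/worker sketch with asynq; the task type and handler body are illustrative:

```go
package jobs

import (
    "context"
    "log"

    "github.com/hibiken/asynq"
)

const TypeEmailWelcome = "email:welcome" // hypothetical task type

// EnqueueWelcomeEmail publishes an async job with retry configured.
func EnqueueWelcomeEmail(client *asynq.Client, userID string) error {
    task := asynq.NewTask(TypeEmailWelcome, []byte(userID))
    _, err := client.Enqueue(task, asynq.MaxRetry(5))
    return err
}

// RunWorker registers handlers and runs the worker, typically started
// from the fx lifecycle.
func RunWorker(redisAddr string) error {
    srv := asynq.NewServer(
        asynq.RedisClientOpt{Addr: redisAddr},
        asynq.Config{Concurrency: 10},
    )
    mux := asynq.NewServeMux()
    mux.HandleFunc(TypeEmailWelcome, func(ctx context.Context, t *asynq.Task) error {
        log.Printf("sending welcome email to user %s", t.Payload())
        return nil
    })
    return srv.Run(mux)
}
```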

View File

@@ -0,0 +1,49 @@
# ADR-0025: Multi-tenancy Model
## Status
Accepted
## Context
The platform may need multi-tenancy support for SaaS deployments. Options:
1. **Shared database with tenant_id column** - Single DB, row-level isolation
2. **Schema-per-tenant** - Single DB, separate schemas
3. **Database-per-tenant** - Separate databases
Each has trade-offs for isolation, performance, and operational complexity.
## Decision
Use **shared database with tenant_id column** (optional feature):
1. **Model**: Single PostgreSQL database with `tenant_id` column on tenant-scoped tables
2. **Isolation**: Row-level via Ent interceptors (automatic filtering)
3. **Tenant resolution**: From header (`X-Tenant-ID`), subdomain, or JWT claim
4. **Optional**: Can be disabled for single-tenant deployments
**Rationale:**
- Simplest operational model (single database)
- Good performance (can index tenant_id)
- Easy to implement (Ent interceptors)
- Can migrate to schema-per-tenant later if needed
- Flexible (can support both single and multi-tenant)
## Consequences
### Positive
- Simple operations (single database)
- Good performance with proper indexing
- Easy to implement
- Flexible (optional feature)
### Negative
- Requires careful query design (ensure tenant_id filtering)
- Data isolation at application level (not database level)
- Potential for data leakage if bugs occur
### Implementation Notes
- Make tenant_id optional (nullable) for single-tenant mode
- Add Ent interceptor to automatically filter by tenant_id
- Resolve tenant from context via middleware
- Add tenant_id to JWT claims
- Document tenant isolation guarantees
- Consider adding tenant_id to all tenant-scoped tables
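
A sketch of header-based tenant resolution feeding the context value an Ent interceptor would read; the key type and helper names are assumptions:

```go
package tenancy

import (
    "context"

    "github.com/gin-gonic/gin"
)

type ctxKey string

const tenantIDKey ctxKey = "tenant_id"

// Middleware resolves the tenant from the X-Tenant-ID header; JWT-claim
// and subdomain resolution would layer on the same idea.
func Middleware() gin.HandlerFunc {
    return func(c *gin.Context) {
        if tenant := c.GetHeader("X-Tenant-ID"); tenant != "" {
            ctx := context.WithValue(c.Request.Context(), tenantIDKey, tenant)
            c.Request = c.Request.WithContext(ctx)
        }
        c.Next()
    }
}

// TenantFromContext is what a query interceptor would call to scope queries.
func TenantFromContext(ctx context.Context) (string, bool) {
    id, ok := ctx.Value(tenantIDKey).(string)
    return id, ok
}
```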

View File

@@ -0,0 +1,54 @@
# ADR-0026: Error Reporting Service
## Status
Accepted
## Context
The platform needs error reporting for:
- Production error tracking
- Stack trace collection
- Error aggregation and analysis
- Integration with monitoring
Options considered:
1. **Sentry** - Popular, feature-rich
2. **Rollbar** - Alternative error tracking
3. **Custom solution** - Build our own
4. **Logs only** - No external service
## Decision
Use **Sentry** for error reporting (optional, configurable):
1. **Integration**: Via error bus sink
2. **Configuration**: Sentry DSN from config
3. **Context**: Include user ID, trace ID, module name
4. **Optional**: Can be disabled for development
**Rationale:**
- Industry standard error tracking
- Excellent Go SDK
- Rich features (release tracking, grouping, etc.)
- Good free tier
- Easy to integrate
## Consequences
### Positive
- Excellent error tracking
- Rich context and grouping
- Easy integration
- Good free tier
### Negative
- External dependency
- Additional cost at scale
- Privacy considerations (data sent to Sentry)
### Implementation Notes
- Install: `github.com/getsentry/sentry-go`
- Create Sentry sink for error bus
- Configure via environment variable
- Include context: user ID, trace ID, module name
- Set up release tracking
- Configure sampling for high-volume deployments
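
A sketch of a Sentry sink that could be registered on the error bus from ADR-0015; the constructor shape is an assumption:

```go
package errorbus

import "github.com/getsentry/sentry-go"

// NewSentrySink initializes the Sentry SDK from the configured DSN and
// returns a sink function the error bus can call for each error.
func NewSentrySink(dsn string) (func(error), error) {
    if err := sentry.Init(sentry.ClientOptions{Dsn: dsn}); err != nil {
        return nil, err
    }
    return func(err error) {
        sentry.CaptureException(err)
    }, nil
}
```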

View File

@@ -0,0 +1,54 @@
# ADR-0027: Rate Limiting Strategy
## Status
Accepted
## Context
The platform needs rate limiting to:
- Prevent abuse and DoS attacks
- Protect against brute-force attacks
- Ensure fair resource usage
- Comply with API usage policies
Rate limiting strategies:
1. **Per-user rate limiting** - Based on authenticated user
2. **Per-IP rate limiting** - Based on client IP
3. **Fixed rate limiting** - Global limits
4. **Distributed rate limiting** - Shared state across instances
## Decision
Implement **multi-level rate limiting**:
1. **Per-user rate limiting**: For authenticated requests (e.g., 100 req/min)
2. **Per-IP rate limiting**: For all requests (e.g., 1000 req/min)
3. **Storage**: Redis for distributed rate limiting
4. **Algorithm**: Token bucket or sliding window
**Rationale:**
- Multi-level provides defense in depth
- Per-user prevents abuse by authenticated users
- Per-IP protects against unauthenticated abuse
- Redis enables distributed rate limiting (multi-instance)
- Token bucket provides smooth rate limiting
## Consequences
### Positive
- Multi-layer protection
- Works with multiple instances
- Configurable per endpoint
- Standard approach
### Negative
- Requires Redis (or shared state)
- Additional latency (minimal)
- Need to handle Redis failures gracefully
### Implementation Notes
- Use `github.com/ulule/limiter/v3` library
- Configure limits in config file
- Store rate limit state in Redis
- Return `X-RateLimit-*` headers
- Handle Redis failures gracefully (fail open or closed based on config)
- Configure different limits for different endpoints
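
A per-IP middleware sketch with ulule/limiter using the in-memory store; the distributed case described above would use the library's Redis store instead:

```go
package middleware

import (
    "github.com/gin-gonic/gin"
    "github.com/ulule/limiter/v3"
    mgin "github.com/ulule/limiter/v3/drivers/middleware/gin"
    "github.com/ulule/limiter/v3/drivers/store/memory"
)

// PerIPLimiter allows 1000 requests per minute per client IP and sets
// the X-RateLimit-* headers on responses.
func PerIPLimiter() (gin.HandlerFunc, error) {
    rate, err := limiter.NewRateFromFormatted("1000-M")
    if err != nil {
        return nil, err
    }
    instance := limiter.New(memory.NewStore(), rate)
    return mgin.NewMiddleware(instance), nil
}
```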

View File

@@ -0,0 +1,67 @@
# ADR-0028: Testing Strategy
## Status
Accepted
## Context
The platform needs a comprehensive testing strategy:
- Unit tests for individual components
- Integration tests for full flows
- Contract tests for API compatibility
- Load tests for performance
Testing tools and approaches vary in complexity and coverage.
## Decision
Adopt a **multi-layered testing approach**:
1. **Unit tests**:
- Tool: Standard `testing` package + `testify`
- Coverage: >80% for core modules
- Mocks: `mockery` or `mockgen`
- Fast execution (< 1 second)
2. **Integration tests**:
- Tool: `testcontainers-go` for Docker-based services
- Coverage: End-to-end flows (auth, modules, etc.)
- Infrastructure: PostgreSQL, Redis, Kafka via testcontainers
- Tagged: `//go:build integration`
3. **Contract tests**:
- Tool: OpenAPI validator (`kin-openapi`)
- Coverage: API request/response validation
- Optional: Pact for service contracts
4. **Load tests**:
- Tool: k6 or vegeta
- Coverage: Critical endpoints (auth, API)
- Performance benchmarks
**Rationale:**
- Comprehensive coverage across layers
- Fast feedback with unit tests
- Realistic testing with integration tests
- API compatibility with contract tests
- Performance validation with load tests
## Consequences
### Positive
- High confidence in code quality
- Fast unit tests for quick feedback
- Realistic integration tests
- API compatibility guaranteed
### Negative
- Integration tests are slower
- Requires Docker for testcontainers
- More complex CI setup
### Implementation Notes
- Use `testify` for assertions: `require` and `assert`
- Generate mocks with `mockery` or `mockgen`
- Create test helpers in `internal/testutil/`
- Use test tags: `go test -tags=integration ./...`
- Run integration tests in separate CI job
- Document testing approach in `CONTRIBUTING.md`
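
A small unit-test sketch showing the `require`/`assert` split (`require` aborts the test on failure, `assert` records it and continues); the permission strings reused from ADR-0019 are only example data:

```go
package perm_test

import (
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

func TestPermissionFormat(t *testing.T) {
    perms := []string{"blog.post.create", "user.read"}
    require.NotEmpty(t, perms)
    for _, p := range perms {
        assert.Regexp(t, `^[a-z0-9]+(\.[a-z0-9]+)*$`, p)
    }
}
```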

View File

@@ -0,0 +1,86 @@
# Architecture Decision Records (ADRs)
This directory contains Architecture Decision Records (ADRs) for the Go Platform project.
## What are ADRs?
ADRs document important architectural decisions made during the project. They help:
- Track why decisions were made
- Understand the context and constraints
- Review decisions when requirements change
- Onboard new team members
## ADR Format
Each ADR follows this structure:
- **Status**: Proposed | Accepted | Rejected | Superseded
- **Context**: The situation that led to the decision
- **Decision**: What was decided
- **Consequences**: Positive and negative impacts
## ADR Index
### Phase 0: Project Setup & Foundation
- [ADR-0001: Go Module Path](./0001-go-module-path.md) - Module path: `git.dcentral.systems/toolz/goplt`
- [ADR-0002: Go Version](./0002-go-version.md) - Go 1.24.3
- [ADR-0003: Dependency Injection Framework](./0003-dependency-injection-framework.md) - uber-go/fx
- [ADR-0004: Configuration Management](./0004-configuration-management.md) - spf13/viper + cobra
- [ADR-0005: Logging Framework](./0005-logging-framework.md) - go.uber.org/zap
- [ADR-0006: HTTP Framework](./0006-http-framework.md) - gin-gonic/gin
- [ADR-0007: Project Directory Structure](./0007-project-directory-structure.md) - Standard Go layout with internal/pkg separation
- [ADR-0008: Error Handling Strategy](./0008-error-handling-strategy.md) - Wrapped errors with typed errors
- [ADR-0009: Context Key Types](./0009-context-key-types.md) - Typed context keys
- [ADR-0010: CI/CD Platform](./0010-ci-cd-platform.md) - GitHub Actions
- [ADR-0011: Code Generation Tools](./0011-code-generation-tools.md) - go generate workflow
- [ADR-0012: Logger Interface Design](./0012-logger-interface-design.md) - Logger interface abstraction
### Phase 1: Core Kernel & Infrastructure
- [ADR-0013: Database ORM Selection](./0013-database-orm.md) - entgo.io/ent
- [ADR-0014: Health Check Implementation](./0014-health-check-implementation.md) - Custom health check registry
- [ADR-0015: Error Bus Implementation](./0015-error-bus-implementation.md) - Channel-based error bus with pluggable sinks
- [ADR-0016: OpenTelemetry Observability Strategy](./0016-opentelemetry-observability.md) - OpenTelemetry for tracing, metrics, logs
### Phase 2: Authentication & Authorization
- [ADR-0017: JWT Token Strategy](./0017-jwt-token-strategy.md) - Short-lived access tokens + long-lived refresh tokens
- [ADR-0018: Password Hashing Algorithm](./0018-password-hashing.md) - argon2id
- [ADR-0019: Permission DSL Format](./0019-permission-dsl-format.md) - String-based format with code generation
- [ADR-0020: Audit Logging Storage](./0020-audit-logging-storage.md) - PostgreSQL append-only table with JSONB metadata
### Phase 3: Module Framework
- [ADR-0021: Module Loading Strategy](./0021-module-loading-strategy.md) - Static registration (primary) + dynamic plugin loading (optional)
### Phase 5: Infrastructure Adapters
- [ADR-0022: Cache Implementation](./0022-cache-implementation.md) - Redis with in-memory fallback
- [ADR-0023: Event Bus Implementation](./0023-event-bus-implementation.md) - In-process bus (default) + Kafka (production)
- [ADR-0024: Background Job Scheduler](./0024-job-scheduler.md) - asynq (Redis-backed) + cron
- [ADR-0025: Multi-tenancy Model](./0025-multitenancy-model.md) - Shared database with tenant_id column (optional)
### Phase 6: Observability & Production Readiness
- [ADR-0026: Error Reporting Service](./0026-error-reporting-service.md) - Sentry (optional, configurable)
- [ADR-0027: Rate Limiting Strategy](./0027-rate-limiting-strategy.md) - Multi-level (per-user + per-IP) with Redis
### Phase 7: Testing, Documentation & CI/CD
- [ADR-0028: Testing Strategy](./0028-testing-strategy.md) - Multi-layered (unit, integration, contract, load)
## Adding New ADRs
When making a new architectural decision:
1. Create a new file: `XXXX-short-title.md` (next sequential number)
2. Follow the ADR template
3. Update this README with the new entry
4. Set status to "Proposed" initially
5. Update to "Accepted" after review/approval
## References
- [ADR Template](https://adr.github.io/madr/)
- [Documenting Architecture Decisions](https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions)

View File

@@ -0,0 +1,561 @@
# Module Architecture
This document details the architecture of modules, how they are structured, how they interact with the core platform, and how multiple modules work together.
## Table of Contents
- [Module Structure](#module-structure)
- [Module Interface](#module-interface)
- [Module Lifecycle](#module-lifecycle)
- [Module Dependencies](#module-dependencies)
- [Module Communication](#module-communication)
- [Module Data Isolation](#module-data-isolation)
- [Module Examples](#module-examples)
## Module Structure
Every module follows a consistent structure that separates concerns and enables clean integration with the platform.
```mermaid
graph TD
subgraph "Module Structure"
Manifest[module.yaml<br/>Manifest]
subgraph "Public API (pkg/)"
ModuleInterface[IModule Interface]
ModuleTypes[Public Types]
end
subgraph "Internal Implementation (internal/)"
API[API Handlers]
Service[Domain Services]
Repo[Repositories]
Domain[Domain Models]
end
subgraph "Database Schema"
EntSchema[Ent Schemas]
Migrations[Migrations]
end
end
Manifest --> ModuleInterface
ModuleInterface --> API
API --> Service
Service --> Repo
Repo --> Domain
Repo --> EntSchema
EntSchema --> Migrations
style Manifest fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style ModuleInterface fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Service fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Module Directory Structure
```
modules/blog/
├── go.mod # Module dependencies
├── module.yaml # Module manifest
├── pkg/
│ └── module.go # IModule implementation
├── internal/
│ ├── api/
│ │ └── handler.go # HTTP handlers
│ ├── domain/
│ │ ├── post.go # Domain entities
│ │ └── post_repo.go # Repository interface
│ ├── service/
│ │ └── post_service.go # Business logic
│ └── ent/
│ ├── schema/
│ │ └── post.go # Ent schema
│ └── migrate/ # Migrations
└── tests/
└── integration_test.go
```
## Module Interface
All modules must implement the `IModule` interface to integrate with the platform.
```mermaid
classDiagram
class IModule {
<<interface>>
+Name() string
+Version() string
+Dependencies() []string
+Init() fx.Option
+Migrations() []MigrationFunc
+OnStart(ctx) error
+OnStop(ctx) error
}
class BlogModule {
+Name() string
+Version() string
+Dependencies() []string
+Init() fx.Option
+Migrations() []MigrationFunc
}
class BillingModule {
+Name() string
+Version() string
+Dependencies() []string
+Init() fx.Option
+Migrations() []MigrationFunc
}
IModule <|.. BlogModule
IModule <|.. BillingModule
```
### IModule Interface
```go
type IModule interface {
    // Name returns a unique, human-readable identifier
    Name() string

    // Version returns the module version (semantic versioning)
    Version() string

    // Dependencies returns list of required modules (e.g., ["core >= 1.0.0"])
    Dependencies() []string

    // Init returns fx.Option that registers all module services
    Init() fx.Option

    // Migrations returns database migration functions
    Migrations() []func(*ent.Client) error

    // OnStart is called during application startup (optional)
    OnStart(ctx context.Context) error

    // OnStop is called during graceful shutdown (optional)
    OnStop(ctx context.Context) error
}
```
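
For illustration, a minimal module satisfying this interface; the `Migrations` method is omitted because its signature depends on the module's generated Ent client, and `NewPostService` is a placeholder constructor:

```go
package blog

import (
    "context"

    "go.uber.org/fx"
)

// Module is an illustrative blog module.
type Module struct{}

func (Module) Name() string           { return "blog" }
func (Module) Version() string        { return "1.0.0" }
func (Module) Dependencies() []string { return []string{"core >= 1.0.0"} }

// Init registers the module's services with the DI container.
func (Module) Init() fx.Option {
    return fx.Options(
        fx.Provide(NewPostService), // hypothetical constructor
    )
}

func (Module) OnStart(ctx context.Context) error { return nil }
func (Module) OnStop(ctx context.Context) error  { return nil }

// PostService stands in for the module's domain service.
type PostService struct{}

func NewPostService() *PostService { return &PostService{} }
```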
## Module Lifecycle
Modules go through a well-defined lifecycle from discovery to shutdown.
```mermaid
stateDiagram-v2
[*] --> Discovered: Module found
Discovered --> Validated: Check dependencies
Validated --> Loaded: Load module
Loaded --> Initialized: Call Init()
Initialized --> Migrated: Run migrations
Migrated --> Started: Call OnStart()
Started --> Running: Module active
Running --> Stopping: Shutdown signal
Stopping --> Stopped: Call OnStop()
Stopped --> [*]
Validated --> Rejected: Dependency check fails
Rejected --> [*]
```
### Module Initialization Sequence
```mermaid
sequenceDiagram
participant Main
participant Loader
participant Registry
participant Module
participant DI
participant Router
participant DB
participant Scheduler
Main->>Loader: DiscoverModules()
Loader->>Registry: Scan for modules
Registry-->>Loader: Module list
loop For each module
Loader->>Module: Load module
Module->>Registry: Register module
Registry->>Registry: Validate dependencies
end
Main->>Registry: GetAllModules()
Registry->>Registry: Resolve dependencies (topological sort)
Registry-->>Main: Ordered module list
Main->>DI: Create fx container
loop For each module (in dependency order)
Main->>Module: Init()
Module->>DI: fx.Provide(services)
Module->>Router: Register routes
Module->>Scheduler: Register jobs
Module->>DB: Register migrations
end
Main->>DB: Run migrations (core first)
Main->>DI: Start container
Main->>Module: OnStart() (optional)
Main->>Router: Start HTTP server
```
## Module Dependencies
Modules can depend on other modules, creating a dependency graph that must be resolved.
```mermaid
graph TD
Core[Core Kernel]
Blog[Blog Module]
Billing[Billing Module]
Analytics[Analytics Module]
Notifications[Notification Module]
Blog --> Core
Billing --> Core
Analytics --> Core
Notifications --> Core
Analytics --> Blog
Analytics --> Billing
Billing --> Blog
Notifications --> Blog
Notifications --> Billing
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Blog fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Billing fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Dependency Resolution
```mermaid
graph LR
subgraph "Module Dependency Graph"
M1[Module A<br/>depends on: Core]
M2[Module B<br/>depends on: Core, Module A]
M3[Module C<br/>depends on: Core, Module B]
Core[Core Kernel]
end
subgraph "Resolved Load Order"
Step1[1. Core Kernel]
Step2[2. Module A]
Step3[3. Module B]
Step4[4. Module C]
end
Core --> M1
M1 --> M2
M2 --> M3
Step1 --> Step2
Step2 --> Step3
Step3 --> Step4
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Step1 fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
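The load order can be computed with a topological sort (Kahn's algorithm) over the declared dependencies. The sketch below assumes plain module names and ignores version constraints such as `>= 1.0.0`; it is an illustration of the algorithm, not the platform's loader.
```go
// Sketch of resolving module load order with Kahn's algorithm.
package loader

import "fmt"

type Module interface {
	Name() string
	Dependencies() []string
}

func ResolveOrder(mods []Module) ([]Module, error) {
	byName := map[string]Module{}
	indegree := map[string]int{}
	dependents := map[string][]string{} // dependency -> modules that require it

	for _, m := range mods {
		byName[m.Name()] = m
		indegree[m.Name()] = 0
	}
	for _, m := range mods {
		for _, dep := range m.Dependencies() {
			if _, ok := byName[dep]; !ok {
				return nil, fmt.Errorf("module %q depends on unknown module %q", m.Name(), dep)
			}
			indegree[m.Name()]++
			dependents[dep] = append(dependents[dep], m.Name())
		}
	}

	// Start with modules that have no unresolved dependencies.
	var queue []string
	for name, d := range indegree {
		if d == 0 {
			queue = append(queue, name)
		}
	}

	var order []Module
	for len(queue) > 0 {
		name := queue[0]
		queue = queue[1:]
		order = append(order, byName[name])
		for _, next := range dependents[name] {
			indegree[next]--
			if indegree[next] == 0 {
				queue = append(queue, next)
			}
		}
	}
	if len(order) != len(mods) {
		return nil, fmt.Errorf("dependency cycle detected among modules")
	}
	return order, nil
}
```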
## Module Communication
Modules communicate through well-defined interfaces provided by the core platform.
### Communication Patterns
```mermaid
graph TB
subgraph "Communication Patterns"
Direct[Direct Service Calls<br/>via DI]
Events[Event Bus<br/>Publish/Subscribe]
Shared[Shared Interfaces<br/>Core Services]
end
subgraph "Module A"
AService[Service A]
AHandler[Handler A]
end
subgraph "Core Services"
EventBus[Event Bus]
AuthService[Auth Service]
CacheService[Cache Service]
end
subgraph "Module B"
BService[Service B]
BHandler[Handler B]
end
AHandler --> AService
AService --> AuthService
AService --> CacheService
AService -->|Publish| EventBus
EventBus -->|Subscribe| BService
BService --> AuthService
BHandler --> BService
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style AService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style BService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Event-Driven Communication
```mermaid
sequenceDiagram
participant BlogModule
participant EventBus
participant AnalyticsModule
participant NotificationModule
participant AuditModule
BlogModule->>EventBus: Publish("blog.post.created", event)
EventBus->>AnalyticsModule: Deliver event
EventBus->>NotificationModule: Deliver event
EventBus->>AuditModule: Deliver event
AnalyticsModule->>AnalyticsModule: Track post creation
NotificationModule->>NotificationModule: Send notification
AuditModule->>AuditModule: Log audit entry
```
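A self-contained sketch of this pattern is shown below, using a tiny synchronous in-process bus. The `Event` and `EventHandler` shapes mirror the Event Bus interface described in the Module Requirements document; everything else (topic name, payload, the bus itself) is illustrative.
```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Minimal stand-ins for the platform's event bus contract.
type Event struct {
	ID        string
	Type      string
	Source    string
	Timestamp time.Time
	Data      map[string]any
}

type EventHandler func(ctx context.Context, event Event) error

// inProcessBus is a tiny synchronous bus, enough to show the pattern.
type inProcessBus struct{ handlers map[string][]EventHandler }

func newInProcessBus() *inProcessBus {
	return &inProcessBus{handlers: map[string][]EventHandler{}}
}

func (b *inProcessBus) Subscribe(topic string, h EventHandler) error {
	b.handlers[topic] = append(b.handlers[topic], h)
	return nil
}

func (b *inProcessBus) Publish(ctx context.Context, topic string, ev Event) error {
	for _, h := range b.handlers[topic] {
		if err := h(ctx, ev); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	bus := newInProcessBus()

	// Analytics module subscribes during its initialization.
	_ = bus.Subscribe("blog.post.created", func(ctx context.Context, ev Event) error {
		fmt.Println("analytics: tracking post", ev.Data["post_id"])
		return nil
	})

	// Blog module publishes after persisting a post.
	_ = bus.Publish(context.Background(), "blog.post.created", Event{
		Type:      "blog.post.created",
		Source:    "blog",
		Timestamp: time.Now().UTC(),
		Data:      map[string]any{"post_id": "abc123"},
	})
}
```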
## Module Data Isolation
Modules can have their own database tables while sharing core tables.
```mermaid
erDiagram
USERS ||--o{ USER_ROLES : has
ROLES ||--o{ USER_ROLES : assigned_to
ROLES ||--o{ ROLE_PERMISSIONS : has
PERMISSIONS ||--o{ ROLE_PERMISSIONS : assigned_to
BLOG_POSTS {
string id PK
string author_id FK
string title
string content
timestamp created_at
}
BILLING_SUBSCRIPTIONS {
string id PK
string user_id FK
string plan
timestamp expires_at
}
USERS ||--o{ BLOG_POSTS : creates
USERS ||--o{ BILLING_SUBSCRIPTIONS : subscribes
AUDIT_LOGS {
string id PK
string actor_id
string action
string target_id
jsonb metadata
}
USERS ||--o{ AUDIT_LOGS : performs
```
### Multi-Tenancy Data Isolation
```mermaid
graph TB
subgraph "Single Database"
subgraph "Core Tables"
Users[users<br/>tenant_id]
Roles[roles<br/>tenant_id]
end
subgraph "Blog Module Tables"
Posts[blog_posts<br/>tenant_id]
Comments[blog_comments<br/>tenant_id]
end
subgraph "Billing Module Tables"
Subscriptions[billing_subscriptions<br/>tenant_id]
Invoices[billing_invoices<br/>tenant_id]
end
end
subgraph "Query Filtering"
EntInterceptor[Ent Interceptor]
TenantFilter[WHERE tenant_id = ?]
end
Users --> EntInterceptor
Posts --> EntInterceptor
Subscriptions --> EntInterceptor
EntInterceptor --> TenantFilter
style EntInterceptor fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
```
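A possible implementation of this filtering, assuming Ent's interceptors feature is enabled, is sketched below. The generated `intercept` package path, the tenant-ID context helper, and the `tenant_id` column name are assumptions to be adapted to the project's generated code.
```go
// Sketch of automatic tenant filtering with an Ent interceptor.
package db

import (
	"context"
	"fmt"

	"entgo.io/ent/dialect/sql"

	"github.com/yourorg/platform/ent"           // generated Ent client (illustrative path)
	"github.com/yourorg/platform/ent/intercept" // generated interceptors package (illustrative path)
)

// withTenantFilter appends `WHERE tenant_id = ?` to every query the client runs.
func withTenantFilter(client *ent.Client, tenantFromCtx func(context.Context) (string, bool)) {
	client.Intercept(
		intercept.TraverseFunc(func(ctx context.Context, q intercept.Query) error {
			tenantID, ok := tenantFromCtx(ctx)
			if !ok {
				return fmt.Errorf("missing tenant in request context")
			}
			q.WhereP(sql.FieldEQ("tenant_id", tenantID))
			return nil
		}),
	)
}
```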
## Module Examples
### Example: Blog Module
```mermaid
graph TB
subgraph "Blog Module"
BlogHandler[Blog Handler<br/>/api/v1/blog/posts]
BlogService[Post Service]
PostRepo[Post Repository]
PostEntity[Post Entity]
end
subgraph "Core Services Used"
AuthService[Auth Service]
AuthzService[Authorization Service]
EventBus[Event Bus]
AuditService[Audit Service]
CacheService[Cache Service]
end
subgraph "Database"
PostsTable[(blog_posts)]
end
BlogHandler --> AuthService
BlogHandler --> AuthzService
BlogHandler --> BlogService
BlogService --> PostRepo
BlogService --> EventBus
BlogService --> AuditService
BlogService --> CacheService
PostRepo --> PostsTable
PostRepo --> PostEntity
style BlogModule fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AuthService fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Module Integration Example
```mermaid
graph LR
subgraph "Request Flow"
Request[HTTP Request<br/>POST /api/v1/blog/posts]
Auth[Auth Middleware]
Authz[Authz Middleware]
Handler[Blog Handler]
Service[Blog Service]
Repo[Blog Repository]
DB[(Database)]
end
subgraph "Side Effects"
EventBus[Event Bus]
Audit[Audit Log]
Cache[Cache]
end
Request --> Auth
Auth --> Authz
Authz --> Handler
Handler --> Service
Service --> Repo
Repo --> DB
Service --> EventBus
Service --> Audit
Service --> Cache
style Request fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Service fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Module Registration Flow
```mermaid
flowchart TD
Start([Application Start]) --> LoadManifests["Load module.yaml files"]
LoadManifests --> ValidateDeps["Validate dependencies"]
ValidateDeps -->|Valid| SortModules["Topological sort modules"]
ValidateDeps -->|Invalid| Error([Error: Missing dependencies])
SortModules --> CreateDI["Create DI container"]
CreateDI --> RegisterCore["Register core services"]
RegisterCore --> LoopModules{"More modules?"}
LoopModules -->|Yes| LoadModule["Load module"]
LoadModule --> CallInit["Call module.Init()"]
CallInit --> RegisterServices["Register module services"]
RegisterServices --> RegisterRoutes["Register module routes"]
RegisterRoutes --> RegisterJobs["Register module jobs"]
RegisterJobs --> RegisterMigrations["Register module migrations"]
RegisterMigrations --> LoopModules
LoopModules -->|No| RunMigrations["Run all migrations"]
RunMigrations --> StartModules["Call OnStart() for each module"]
StartModules --> StartServer["Start HTTP server"]
StartServer --> Running([Application Running])
Running --> Shutdown([Shutdown Signal])
Shutdown --> StopServer["Stop HTTP server"]
StopServer --> StopModules["Call OnStop() for each module"]
StopModules --> Cleanup["Cleanup resources"]
Cleanup --> End([Application Stopped])
style Start fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Running fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Error fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Module Permissions Integration
Modules declare permissions that are automatically integrated into the permission system.
```mermaid
graph TB
subgraph "Permission Generation"
Manifest["module.yaml<br/>permissions: array"]
Generator["Permission Generator"]
GeneratedCode["pkg/perm/generated.go"]
end
subgraph "Permission Resolution"
Request["HTTP Request"]
AuthzMiddleware["Authz Middleware"]
PermissionResolver["Permission Resolver"]
UserRoles["User Roles"]
RolePermissions["Role Permissions"]
Response["HTTP Response"]
end
Manifest --> Generator
Generator --> GeneratedCode
GeneratedCode --> PermissionResolver
Request --> AuthzMiddleware
AuthzMiddleware --> PermissionResolver
PermissionResolver --> UserRoles
PermissionResolver --> RolePermissions
UserRoles --> PermissionResolver
RolePermissions --> PermissionResolver
PermissionResolver --> AuthzMiddleware
AuthzMiddleware --> Response
classDef generation fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
classDef resolution fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
class Manifest,Generator,GeneratedCode generation
class PermissionResolver resolution
```
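The sketch below illustrates what the generated constants might look like; the exact file layout and identifier names are illustrative.
```go
// Sketch of code-generated permission constants (derived from module.yaml).
package perm

// Permission follows the "{module}.{resource}.{action}" format.
type Permission string

// Generated from the blog module's manifest.
const (
	BlogPostCreate  Permission = "blog.post.create"
	BlogPostRead    Permission = "blog.post.read"
	BlogPostPublish Permission = "blog.post.publish"
)
```
A module handler would then guard its action with something like `authz.Authorize(ctx, perm.BlogPostCreate)` and return `403 Forbidden` when the check fails.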
## Next Steps
- [Module Requirements](./module-requirements.md) - Detailed requirements for each module
- [Component Relationships](./component-relationships.md) - How components interact
- [System Architecture](./architecture.md) - Overall system architecture

View File

@@ -0,0 +1,590 @@
# System Architecture
This document provides a comprehensive overview of the Go Platform architecture, including system components, their relationships, and how modules integrate with the core platform.
## Table of Contents
- [High-Level Architecture](#high-level-architecture)
- [Layered Architecture](#layered-architecture)
- [Module System Architecture](#module-system-architecture)
- [Component Relationships](#component-relationships)
- [Data Flow](#data-flow)
- [Deployment Architecture](#deployment-architecture)
## High-Level Architecture
The Go Platform follows a **modular monolith** architecture that can evolve into microservices. The platform consists of a core kernel and pluggable feature modules.
```mermaid
graph TB
subgraph "Go Platform"
Core[Core Kernel]
Module1[Module 1<br/>Blog]
Module2[Module 2<br/>Billing]
Module3[Module N<br/>Custom]
end
subgraph "Infrastructure"
DB[(PostgreSQL)]
Cache[(Redis)]
Queue[Kafka/Event Bus]
Storage[S3/Blob Storage]
end
subgraph "External Services"
OIDC[OIDC Provider]
Email[Email Service]
Sentry[Sentry]
end
Core --> DB
Core --> Cache
Core --> Queue
Core --> Storage
Core --> OIDC
Core --> Email
Core --> Sentry
Module1 --> Core
Module2 --> Core
Module3 --> Core
Module1 --> DB
Module2 --> DB
Module3 --> DB
Module1 --> Queue
Module2 --> Queue
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Module1 fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Module2 fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Module3 fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Layered Architecture
The platform follows a **clean/hexagonal architecture** with clear separation of concerns across layers.
```mermaid
graph TD
subgraph "Presentation Layer"
HTTP[HTTP/REST API]
GraphQL[GraphQL API]
CLI[CLI Interface]
end
subgraph "Application Layer"
AuthMiddleware[Auth Middleware]
AuthzMiddleware[Authorization Middleware]
RateLimit[Rate Limiting]
Handlers[Request Handlers]
end
subgraph "Domain Layer"
Services[Domain Services]
Entities[Domain Entities]
Policies[Business Policies]
end
subgraph "Infrastructure Layer"
Repos[Repositories]
CacheAdapter[Cache Adapter]
EventBus[Event Bus]
Jobs[Scheduler/Jobs]
end
subgraph "Core Kernel"
DI[DI Container]
Config[Config Manager]
Logger[Logger]
Metrics[Metrics]
Health[Health Checks]
end
HTTP --> AuthMiddleware
GraphQL --> AuthMiddleware
CLI --> AuthMiddleware
AuthMiddleware --> AuthzMiddleware
AuthzMiddleware --> RateLimit
RateLimit --> Handlers
Handlers --> Services
Services --> Entities
Services --> Policies
Services --> Repos
Services --> CacheAdapter
Services --> EventBus
Services --> Jobs
Repos --> DB[(Database)]
CacheAdapter --> Cache[(Redis)]
EventBus --> Queue[(Kafka)]
Services --> DI
Repos --> DI
Handlers --> DI
DI --> Config
DI --> Logger
DI --> Metrics
DI --> Health
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Services fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Repos fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Module System Architecture
Modules are the building blocks of the platform. Each module can register services, routes, permissions, and background jobs.
```mermaid
graph TB
subgraph Lifecycle["Module Lifecycle"]
Discover["1. Discover Modules"]
Load["2. Load Module"]
Validate["3. Validate Dependencies"]
Init["4. Initialize Module"]
Start["5. Start Module"]
end
subgraph Registration["Module Registration"]
Static["Static Registration<br/>via init()"]
Dynamic["Dynamic Loading<br/>via .so files"]
end
subgraph Components["Module Components"]
Routes["HTTP Routes"]
Services["Services"]
Repos["Repositories"]
Perms["Permissions"]
Jobs["Background Jobs"]
Migrations["Database Migrations"]
end
Discover --> Load
Load --> Static
Load --> Dynamic
Static --> Validate
Dynamic --> Validate
Validate --> Init
Init --> Routes
Init --> Services
Init --> Repos
Init --> Perms
Init --> Jobs
Init --> Migrations
Routes --> Start
Services --> Start
Repos --> Start
Perms --> Start
Jobs --> Start
Migrations --> Start
classDef lifecycle fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
classDef registration fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
classDef components fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
classDef start fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
class Discover,Load,Validate,Init lifecycle
class Static,Dynamic registration
class Routes,Services,Repos,Perms,Jobs,Migrations components
class Start start
```
### Module Initialization Sequence
```mermaid
sequenceDiagram
participant Main
participant Loader
participant Registry
participant Module
participant DI
participant Router
participant DB
Main->>Loader: LoadModules()
Loader->>Registry: Discover modules
Registry-->>Loader: List of modules
loop For each module
Loader->>Module: Load module
Module->>Registry: Register(module)
Registry->>Registry: Validate dependencies
end
Main->>Registry: GetAllModules()
Registry-->>Main: Ordered module list
Main->>DI: Create container
loop For each module
Main->>Module: Init()
Module->>DI: Provide services
Module->>Router: Register routes
Module->>DB: Register migrations
end
Main->>DB: Run migrations
Main->>Router: Start HTTP server
```
## Component Relationships
This diagram shows how core components interact with each other and with modules.
```mermaid
graph TB
subgraph "Core Kernel Components"
ConfigMgr[Config Manager]
LoggerService[Logger Service]
DI[DI Container]
ModuleLoader[Module Loader]
HealthRegistry[Health Registry]
MetricsRegistry[Metrics Registry]
ErrorBus[Error Bus]
EventBus[Event Bus]
end
subgraph "Security Components"
AuthService[Auth Service]
AuthzService[Authorization Service]
TokenProvider[Token Provider]
PermissionResolver[Permission Resolver]
AuditService[Audit Service]
end
subgraph "Infrastructure Components"
DBClient[Database Client]
CacheClient[Cache Client]
Scheduler[Scheduler]
Notifier[Notifier]
end
subgraph "Module Components"
ModuleRoutes[Module Routes]
ModuleServices[Module Services]
ModuleRepos[Module Repositories]
end
DI --> ConfigMgr
DI --> LoggerService
DI --> ModuleLoader
DI --> HealthRegistry
DI --> MetricsRegistry
DI --> ErrorBus
DI --> EventBus
DI --> AuthService
DI --> AuthzService
DI --> DBClient
DI --> CacheClient
DI --> Scheduler
DI --> Notifier
AuthService --> TokenProvider
AuthzService --> PermissionResolver
AuthzService --> AuditService
ModuleServices --> DBClient
ModuleServices --> CacheClient
ModuleServices --> EventBus
ModuleServices --> AuthzService
ModuleRepos --> DBClient
ModuleRoutes --> AuthzService
Scheduler --> CacheClient
Notifier --> EventBus
ErrorBus --> LoggerService
ErrorBus --> Sentry
style DI fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style AuthService fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style ModuleServices fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Data Flow
### Request Processing Flow
```mermaid
sequenceDiagram
participant Client
participant Router
    participant AuthMW as Auth Middleware
    participant AuthzMW as Authz Middleware
    participant RateLimit as Rate Limiter
participant Handler
participant Service
participant Repo
participant DB
participant Cache
participant EventBus
participant Audit
Client->>Router: HTTP Request
Router->>AuthMW: Extract JWT
AuthMW->>AuthMW: Validate token
AuthMW->>Router: Add user to context
Router->>AuthzMW: Check permissions
AuthzMW->>AuthzMW: Resolve permissions
AuthzMW->>Router: Authorized
Router->>RateLimit: Check rate limits
RateLimit->>Cache: Get rate limit state
Cache-->>RateLimit: Rate limit status
RateLimit->>Router: Within limits
Router->>Handler: Process request
Handler->>Service: Business logic
Service->>Cache: Check cache
Cache-->>Service: Cache miss
Service->>Repo: Query data
Repo->>DB: Execute query
DB-->>Repo: Return data
Repo-->>Service: Domain entity
Service->>Cache: Store in cache
Service->>EventBus: Publish event
Service->>Audit: Record action
Service-->>Handler: Response data
Handler-->>Router: HTTP response
Router-->>Client: JSON response
```
### Module Event Flow
```mermaid
graph LR
subgraph "Module A"
AService[Service A]
AHandler[Handler A]
end
subgraph "Event Bus"
Bus[Event Bus]
end
subgraph "Module B"
BService[Service B]
BHandler[Handler B]
end
subgraph "Module C"
CService[Service C]
end
AHandler --> AService
AService -->|Publish Event| Bus
Bus -->|Subscribe| BService
Bus -->|Subscribe| CService
BService --> BHandler
CService --> CService
```
## Deployment Architecture
### Development Deployment
```mermaid
graph TB
subgraph "Developer Machine"
IDE[IDE/Editor]
Go[Go Runtime]
Docker[Docker]
end
subgraph "Local Services"
App[Platform App<br/>:8080]
DB[(PostgreSQL<br/>:5432)]
Redis[(Redis<br/>:6379)]
Kafka[Kafka<br/>:9092]
end
IDE --> Go
Go --> App
App --> DB
App --> Redis
App --> Kafka
Docker --> DB
Docker --> Redis
Docker --> Kafka
```
### Production Deployment
```mermaid
graph TB
subgraph "Load Balancer"
LB[Load Balancer<br/>HTTPS]
end
subgraph "Platform Instances"
App1[Platform Instance 1]
App2[Platform Instance 2]
App3[Platform Instance N]
end
subgraph "Database Cluster"
Primary[(PostgreSQL<br/>Primary)]
Replica[(PostgreSQL<br/>Replica)]
end
subgraph "Cache Cluster"
Redis1[(Redis<br/>Master)]
Redis2[(Redis<br/>Replica)]
end
subgraph "Message Queue"
Kafka1[Kafka Broker 1]
Kafka2[Kafka Broker 2]
Kafka3[Kafka Broker 3]
end
subgraph "Observability"
Prometheus[Prometheus]
Grafana[Grafana]
Jaeger[Jaeger]
Loki[Loki]
end
subgraph "External Services"
Sentry[Sentry]
S3[S3 Storage]
end
LB --> App1
LB --> App2
LB --> App3
App1 --> Primary
App2 --> Primary
App3 --> Primary
App1 --> Replica
App2 --> Replica
App3 --> Replica
App1 --> Redis1
App2 --> Redis1
App3 --> Redis1
App1 --> Kafka1
App2 --> Kafka2
App3 --> Kafka3
App1 --> Prometheus
App2 --> Prometheus
App3 --> Prometheus
Prometheus --> Grafana
App1 --> Jaeger
App2 --> Jaeger
App3 --> Jaeger
App1 --> Loki
App2 --> Loki
App3 --> Loki
App1 --> Sentry
App2 --> Sentry
App3 --> Sentry
App1 --> S3
App2 --> S3
App3 --> S3
style LB fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Primary fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Redis1 fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Core Kernel Components
The core kernel provides the foundation for all modules. Each component has specific responsibilities:
### Component Responsibilities
```mermaid
mindmap
root((Core Kernel))
Configuration
Load configs
Environment vars
Secret management
Dependency Injection
Service registration
Lifecycle management
Module wiring
Logging
Structured logs
Request correlation
Log levels
Observability
Metrics
Tracing
Health checks
Security
Authentication
Authorization
Audit logging
Module System
Module discovery
Module loading
Dependency resolution
```
## Module Integration Points
Modules integrate with the core through well-defined interfaces:
```mermaid
graph TB
subgraph "Core Kernel Interfaces"
IConfig[ConfigProvider]
ILogger[Logger]
IAuth[Authenticator]
IAuthz[Authorizer]
IEventBus[EventBus]
ICache[Cache]
IBlobStore[BlobStore]
IScheduler[Scheduler]
INotifier[Notifier]
end
subgraph "Module Implementation"
Module[Feature Module]
ModuleServices[Module Services]
ModuleRoutes[Module Routes]
end
Module --> IConfig
Module --> ILogger
ModuleServices --> IAuth
ModuleServices --> IAuthz
ModuleServices --> IEventBus
ModuleServices --> ICache
ModuleServices --> IBlobStore
ModuleServices --> IScheduler
ModuleServices --> INotifier
ModuleRoutes --> IAuthz
style IConfig fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Module fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
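In practice a module service receives these interfaces through its constructor, which the module registers with `fx.Provide` in `Init()`. The sketch below assumes hypothetical `pkg/...` package paths for the core interfaces; only the wiring pattern is the point.
```go
// Sketch of a module service depending only on core interfaces.
package blog

import (
	"context"

	"github.com/yourorg/platform/pkg/authz"    // hypothetical package paths
	"github.com/yourorg/platform/pkg/cache"
	"github.com/yourorg/platform/pkg/eventbus"
	"github.com/yourorg/platform/pkg/logger"
)

type PostService struct {
	log    logger.Logger
	authz  authz.Authorizer
	cache  cache.Cache
	events eventbus.EventBus
}

// NewPostService is registered with fx.Provide in the module's Init();
// the DI container supplies the core implementations at startup.
func NewPostService(log logger.Logger, az authz.Authorizer, c cache.Cache, bus eventbus.EventBus) *PostService {
	return &PostService{log: log, authz: az, cache: c, events: bus}
}

func (s *PostService) Publish(ctx context.Context, postID string) error {
	if err := s.authz.Authorize(ctx, "blog.post.publish"); err != nil {
		return err
	}
	s.log.Info("post published", logger.String("post_id", postID))
	return s.events.Publish(ctx, "blog.post.published", eventbus.Event{Type: "blog.post.published"})
}
```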
## Next Steps
- [Module Architecture](./architecture-modules.md) - Detailed module architecture and design
- [Module Requirements](./module-requirements.md) - Requirements for each module
- [Component Relationships](./component-relationships.md) - Detailed component interactions
- [ADRs](../adr/README.md) - Architecture Decision Records

View File

@@ -0,0 +1,23 @@
/* Full width content */
.md-content {
max-width: 100% !important;
}
.md-main__inner {
max-width: 100% !important;
}
.md-grid {
max-width: 100% !important;
}
/* Ensure content area uses full width while keeping readable line length */
.md-content__inner {
max-width: 100%;
}
/* Let the container span the full width as well */
.md-container {
max-width: 100%;
}

View File

@@ -0,0 +1,455 @@
# Component Relationships
This document details how different components of the Go Platform interact with each other, including dependency relationships, data flow, and integration patterns.
## Table of Contents
- [Core Component Dependencies](#core-component-dependencies)
- [Module to Core Integration](#module-to-core-integration)
- [Service Interaction Patterns](#service-interaction-patterns)
- [Data Flow Patterns](#data-flow-patterns)
- [Dependency Graph](#dependency-graph)
## Core Component Dependencies
The core kernel components have well-defined dependencies that form the foundation of the platform.
```mermaid
graph TD
subgraph "Foundation Layer"
Config[Config Manager]
Logger[Logger Service]
end
subgraph "DI Layer"
DI[DI Container]
end
subgraph "Infrastructure Layer"
DB[Database Client]
Cache[Cache Client]
EventBus[Event Bus]
Scheduler[Scheduler]
end
subgraph "Security Layer"
Auth[Auth Service]
Authz[Authz Service]
Audit[Audit Service]
end
subgraph "Observability Layer"
Metrics[Metrics Registry]
Health[Health Registry]
Tracer[OpenTelemetry Tracer]
end
Config --> Logger
Config --> DI
Logger --> DI
DI --> DB
DI --> Cache
DI --> EventBus
DI --> Scheduler
DI --> Auth
DI --> Authz
DI --> Audit
DI --> Metrics
DI --> Health
DI --> Tracer
Auth --> DB
Authz --> DB
Authz --> Cache
Audit --> DB
DB --> Tracer
Cache --> Tracer
EventBus --> Tracer
style Config fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style DI fill:#50c878,stroke:#2e7d4e,stroke-width:3px,color:#fff
style Auth fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Module to Core Integration
Modules integrate with core services through well-defined interfaces and dependency injection.
```mermaid
graph LR
subgraph "Feature Module"
ModuleHandler[Module Handler]
ModuleService[Module Service]
ModuleRepo[Module Repository]
end
subgraph "Core Services"
AuthService[Auth Service]
AuthzService[Authz Service]
EventBusService[Event Bus]
CacheService[Cache Service]
AuditService[Audit Service]
LoggerService[Logger Service]
end
subgraph "Infrastructure"
DBClient[Database Client]
CacheClient[Cache Client]
QueueClient[Message Queue]
end
ModuleHandler --> AuthService
ModuleHandler --> AuthzService
ModuleHandler --> ModuleService
ModuleService --> ModuleRepo
ModuleService --> EventBusService
ModuleService --> CacheService
ModuleService --> AuditService
ModuleService --> LoggerService
ModuleRepo --> DBClient
CacheService --> CacheClient
EventBusService --> QueueClient
style ModuleService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AuthService fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style DBClient fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Service Interaction Patterns
### Authentication Flow
```mermaid
sequenceDiagram
participant Client
participant Router
participant AuthMiddleware
participant AuthService
participant TokenProvider
participant UserRepo
participant DB
Client->>Router: POST /api/v1/auth/login
Router->>AuthMiddleware: Extract credentials
AuthMiddleware->>AuthService: Authenticate(email, password)
AuthService->>UserRepo: FindByEmail(email)
UserRepo->>DB: Query user
DB-->>UserRepo: User data
UserRepo-->>AuthService: User entity
AuthService->>AuthService: Verify password
AuthService->>TokenProvider: GenerateAccessToken(user)
AuthService->>TokenProvider: GenerateRefreshToken(user)
TokenProvider-->>AuthService: Tokens
AuthService->>DB: Store refresh token
AuthService-->>AuthMiddleware: Auth response
AuthMiddleware-->>Router: Tokens
Router-->>Client: JSON response with tokens
```
### Authorization Flow
```mermaid
sequenceDiagram
participant Request
participant AuthzMiddleware
participant Authorizer
participant PermissionResolver
participant Cache
participant UserRepo
participant RoleRepo
participant DB
Request->>AuthzMiddleware: HTTP request + permission
AuthzMiddleware->>Authorizer: Authorize(ctx, permission)
Authorizer->>Authorizer: Extract user from context
Authorizer->>PermissionResolver: HasPermission(user, permission)
PermissionResolver->>Cache: Check cache
Cache-->>PermissionResolver: Cache miss
PermissionResolver->>UserRepo: GetUserRoles(userID)
UserRepo->>DB: Query user_roles
DB-->>UserRepo: Role IDs
UserRepo-->>PermissionResolver: Roles
PermissionResolver->>RoleRepo: GetRolePermissions(roleIDs)
RoleRepo->>DB: Query role_permissions
DB-->>RoleRepo: Permissions
RoleRepo-->>PermissionResolver: Permission list
PermissionResolver->>PermissionResolver: Check if permission in list
PermissionResolver->>Cache: Store in cache
PermissionResolver-->>Authorizer: Has permission: true/false
Authorizer-->>AuthzMiddleware: Authorized or error
AuthzMiddleware-->>Request: Continue or 403
```
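A simplified sketch of this cache-then-database lookup is shown below. The repository and cache shapes are reduced stand-ins for the interfaces described in these documents, permissions are plain strings, and the cache key follows the `user:{user_id}:permissions` convention from the Cache Module requirements.
```go
package authz

import (
	"context"
	"encoding/json"
	"time"
)

// Minimal stand-ins for the interfaces described in this documentation.
type Cache interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
}

type UserRepo interface {
	GetUserRoles(ctx context.Context, userID string) ([]string, error)
}

type RoleRepo interface {
	GetRolePermissions(ctx context.Context, roleIDs []string) ([]string, error)
}

type PermissionResolver struct {
	cache Cache
	users UserRepo
	roles RoleRepo
}

// HasPermission implements the cache-then-database lookup from the sequence above.
func (r *PermissionResolver) HasPermission(ctx context.Context, userID, perm string) (bool, error) {
	key := "user:" + userID + ":permissions" // key format from the Cache Module requirements

	var perms []string
	if raw, err := r.cache.Get(ctx, key); err == nil && len(raw) > 0 {
		_ = json.Unmarshal(raw, &perms) // fall through to the database on decode failure
	}

	if len(perms) == 0 {
		roleIDs, err := r.users.GetUserRoles(ctx, userID)
		if err != nil {
			return false, err
		}
		if perms, err = r.roles.GetRolePermissions(ctx, roleIDs); err != nil {
			return false, err
		}
		if raw, err := json.Marshal(perms); err == nil {
			_ = r.cache.Set(ctx, key, raw, 5*time.Minute) // best-effort cache fill
		}
	}

	for _, p := range perms {
		if p == perm {
			return true, nil
		}
	}
	return false, nil
}
```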
### Event Publishing Flow
```mermaid
sequenceDiagram
participant ModuleService
participant EventBus
participant Kafka
participant Subscriber1
participant Subscriber2
ModuleService->>EventBus: Publish(topic, event)
EventBus->>EventBus: Serialize event
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
Kafka->>Subscriber1: Deliver event
Kafka->>Subscriber2: Deliver event
Subscriber1->>Subscriber1: Process event
Subscriber2->>Subscriber2: Process event
```
## Data Flow Patterns
### Request to Response Flow
```mermaid
graph LR
Client[Client] -->|HTTP Request| LB[Load Balancer]
LB -->|Route| Server1[Instance 1]
LB -->|Route| Server2[Instance 2]
Server1 --> AuthMW[Auth Middleware]
Server1 --> AuthzMW[Authz Middleware]
Server1 --> RateLimit[Rate Limiter]
Server1 --> Handler[Request Handler]
Server1 --> Service[Domain Service]
Server1 --> Cache[Cache Check]
Server1 --> Repo[Repository]
Server1 --> DB[(Database)]
Service --> EventBus[Event Bus]
Service --> Audit[Audit Log]
Handler -->|Response| Server1
Server1 -->|HTTP Response| LB
LB -->|Response| Client
style Server1 fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Service fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Caching Flow
```mermaid
graph TD
Request[Service Request] --> CacheCheck{Cache Hit?}
CacheCheck -->|Yes| CacheGet[Get from Cache]
CacheCheck -->|No| DBQuery[Query Database]
DBQuery --> DBResponse[Database Response]
DBResponse --> CacheStore[Store in Cache]
CacheStore --> Return[Return Data]
CacheGet --> Return
style CacheCheck fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style CacheGet fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style DBQuery fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Dependency Graph
Complete dependency graph showing all components and their relationships.
```mermaid
graph TB
subgraph "Application Entry"
Main[Main Application]
end
subgraph "Core Kernel"
Config[Config]
Logger[Logger]
DI[DI Container]
ModuleLoader[Module Loader]
end
subgraph "Security"
Auth[Auth]
Authz[Authz]
Identity[Identity]
Audit[Audit]
end
subgraph "Infrastructure"
DB[Database]
Cache[Cache]
EventBus[Event Bus]
Scheduler[Scheduler]
BlobStore[Blob Store]
Notifier[Notifier]
end
subgraph "Observability"
Metrics[Metrics]
Health[Health]
Tracer[Tracer]
ErrorBus[Error Bus]
end
subgraph "Module"
ModuleHandler[Module Handler]
ModuleService[Module Service]
ModuleRepo[Module Repo]
end
Main --> Config
Main --> Logger
Main --> DI
Main --> ModuleLoader
Config --> Logger
Config --> DI
DI --> Auth
DI --> Authz
DI --> Identity
DI --> Audit
DI --> DB
DI --> Cache
DI --> EventBus
DI --> Scheduler
DI --> BlobStore
DI --> Notifier
DI --> Metrics
DI --> Health
DI --> Tracer
DI --> ErrorBus
Auth --> Identity
Auth --> DB
Authz --> Identity
Authz --> Cache
Authz --> Audit
Audit --> DB
Audit --> Logger
ModuleLoader --> DI
ModuleHandler --> Auth
ModuleHandler --> Authz
ModuleService --> ModuleRepo
ModuleService --> EventBus
ModuleService --> Cache
ModuleService --> Audit
ModuleRepo --> DB
Scheduler --> Cache
Notifier --> EventBus
ErrorBus --> Logger
ErrorBus --> Sentry
DB --> Tracer
Cache --> Tracer
EventBus --> Tracer
style Main fill:#4a90e2,stroke:#2e5c8a,stroke-width:4px,color:#fff
style DI fill:#50c878,stroke:#2e7d4e,stroke-width:3px,color:#fff
style ModuleService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Component Interaction Matrix
| Component | Depends On | Used By |
|-----------|-----------|---------|
| Config | None | All components |
| Logger | Config | All components |
| DI Container | Config, Logger | All components |
| Auth Service | Identity, DB | Auth Middleware, Modules |
| Authz Service | Identity, Cache, Audit | Authz Middleware, Modules |
| Identity Service | DB, Cache, Notifier | Auth, Authz, Modules |
| Database Client | Config, Logger, Tracer | All repositories |
| Cache Client | Config, Logger | Authz, Scheduler, Modules |
| Event Bus | Config, Logger, Tracer | Modules, Notifier |
| Scheduler | Cache, Logger | Modules |
| Error Bus | Logger | All components (via panic recovery) |
## Integration Patterns
### Module Service Integration
```mermaid
graph TB
subgraph "Module Layer"
Handler[HTTP Handler]
Service[Domain Service]
Repo[Repository]
end
subgraph "Core Services"
Auth[Auth Service]
Authz[Authz Service]
EventBus[Event Bus]
Cache[Cache]
Audit[Audit]
end
subgraph "Infrastructure"
DB[(Database)]
Redis[(Redis)]
Kafka[Kafka]
end
Handler --> Auth
Handler --> Authz
Handler --> Service
Service --> Repo
Service --> EventBus
Service --> Cache
Service --> Audit
Repo --> DB
Cache --> Redis
EventBus --> Kafka
Audit --> DB
style Service fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Auth fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style DB fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Cross-Module Communication
```mermaid
graph LR
subgraph "Module A"
AService[Service A]
end
subgraph "Module B"
BService[Service B]
end
subgraph "Core Services"
EventBus[Event Bus]
Authz[Authz Service]
Cache[Cache]
end
AService -->|Direct Call| Authz
AService -->|Publish Event| EventBus
EventBus -->|Subscribe| BService
AService -->|Cache Access| Cache
BService -->|Cache Access| Cache
style AService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style BService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
```
## Next Steps
- [System Architecture](./architecture.md) - Overall system architecture
- [Module Architecture](./architecture-modules.md) - Module design and integration
- [Module Requirements](./module-requirements.md) - Detailed module requirements

74
docs/content/index.md Normal file
View File

@@ -0,0 +1,74 @@
# Go Platform Documentation
Welcome to the Go Platform documentation! This is a plugin-friendly SaaS/Enterprise platform built with Go.
## What is Go Platform?
Go Platform is a modular, extensible platform designed to support multiple business domains through a plugin architecture. It provides:
- **Core Kernel**: Foundation services including authentication, authorization, configuration, logging, and observability
- **Module Framework**: Plugin system for extending functionality
- **Infrastructure Adapters**: Support for databases, caching, event buses, and job scheduling
- **Security-by-Design**: Built-in JWT authentication, RBAC/ABAC authorization, and audit logging
- **Observability**: OpenTelemetry integration for tracing, metrics, and logging
## Documentation Structure
### 📋 Overview
- **[Requirements](requirements.md)**: High-level architectural principles and requirements
- **[Implementation Plan](plan.md)**: Phased implementation plan with timelines
- **[Playbook](playbook.md)**: Detailed implementation guide and best practices
### 🏛️ Architecture
- **[Architecture Overview](architecture.md)**: System architecture with diagrams
- **[Module Architecture](architecture-modules.md)**: Module system design and integration
- **[Module Requirements](module-requirements.md)**: Detailed requirements for each module
- **[Component Relationships](component-relationships.md)**: Component interactions and dependencies
### 🏗️ Architecture Decision Records (ADRs)
All architectural decisions are documented in [ADR records](adr/README.md), organized by implementation phase:
- **Phase 0**: Project Setup & Foundation
- **Phase 1**: Core Kernel & Infrastructure
- **Phase 2**: Authentication & Authorization
- **Phase 3**: Module Framework
- **Phase 5**: Infrastructure Adapters
- **Phase 6**: Observability & Production Readiness
- **Phase 7**: Testing, Documentation & CI/CD
### 📝 Implementation Tasks
Detailed task definitions for each phase are available in the [Stories section](stories/README.md):
- Phase 0: Project Setup & Foundation
- Phase 1: Core Kernel & Infrastructure
- Phase 2: Authentication & Authorization
- Phase 3: Module Framework
- Phase 4: Sample Feature Module (Blog)
- Phase 5: Infrastructure Adapters
- Phase 6: Observability & Production Readiness
- Phase 7: Testing, Documentation & CI/CD
- Phase 8: Advanced Features & Polish (Optional)
## Quick Start
1. Review the [Requirements](requirements.md) to understand the platform goals
2. Check the [Implementation Plan](plan.md) for the phased approach
3. Follow the [Playbook](playbook.md) for implementation details
4. Refer to [ADRs](adr/README.md) when making architectural decisions
5. Use the [Task Stories](stories/README.md) as a checklist for implementation
## Key Principles
- **Clean/Hexagonal Architecture**: Clear separation between core and plugins
- **Modular Monolith**: Start simple, evolve to microservices if needed
- **Plugin-First Design**: Extensible architecture supporting static and dynamic modules
- **Security-by-Design**: Built-in authentication, authorization, and audit capabilities
- **Observability**: Comprehensive logging, metrics, and tracing
- **API-First**: OpenAPI/GraphQL schema generation
## Contributing
When contributing to the platform:
1. Review relevant ADRs before making architectural decisions
2. Follow the task structure defined in the Stories
3. Update documentation as you implement features
4. Ensure all tests pass before submitting changes

View File

@@ -0,0 +1,816 @@
# Module Requirements
This document provides detailed requirements for each module in the Go Platform, including interfaces, responsibilities, and integration points.
## Table of Contents
- [Core Kernel Modules](#core-kernel-modules)
- [Security Modules](#security-modules)
- [Infrastructure Modules](#infrastructure-modules)
- [Feature Modules](#feature-modules)
## Core Kernel Modules
### Configuration Module
**Purpose**: Hierarchical configuration management with support for multiple sources.
**Requirements**:
- Load configuration from YAML files (default, environment-specific)
- Support environment variable overrides
- Support secret manager integration (AWS Secrets Manager, Vault)
- Type-safe configuration access
- Configuration validation
**Interface**:
```go
type ConfigProvider interface {
Get(key string) any
Unmarshal(v any) error
GetString(key string) string
GetInt(key string) int
GetBool(key string) bool
GetStringSlice(key string) []string
GetDuration(key string) time.Duration
IsSet(key string) bool
}
```
**Implementation**:
- Uses `github.com/spf13/viper` for configuration loading
- Load order: `default.yaml` → `{env}.yaml` → environment variables → secrets
- Supports nested configuration keys (e.g., `server.port`)
**Configuration Schema**:
```yaml
environment: development
server:
port: 8080
host: "0.0.0.0"
timeout: 30s
database:
driver: "postgres"
dsn: ""
max_connections: 25
max_idle_connections: 5
logging:
level: "info"
format: "json"
output: "stdout"
cache:
enabled: true
ttl: 5m
```
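For illustration, a consumer might read typed values like this; the `config` package path is hypothetical and the keys match the schema above.
```go
package httpserver

import (
	"fmt"
	"net/http"

	"github.com/yourorg/platform/pkg/config" // hypothetical home of ConfigProvider
)

// newHTTPServer shows typed access through ConfigProvider; cfg is supplied
// by the DI container at startup.
func newHTTPServer(cfg config.ConfigProvider) *http.Server {
	addr := fmt.Sprintf("%s:%d", cfg.GetString("server.host"), cfg.GetInt("server.port"))
	return &http.Server{
		Addr:         addr,
		ReadTimeout:  cfg.GetDuration("server.timeout"),
		WriteTimeout: cfg.GetDuration("server.timeout"),
	}
}
```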
**Dependencies**: None (foundation module)
---
### Logging Module
**Purpose**: Structured logging with support for multiple outputs and log levels.
**Requirements**:
- Structured JSON logging for production
- Human-readable logging for development
- Support for log levels (debug, info, warn, error)
- Request-scoped fields (request_id, user_id, trace_id)
- Contextual logging (with fields)
- Performance: minimal overhead
**Interface**:
```go
type Field interface{}
type Logger interface {
Debug(msg string, fields ...Field)
Info(msg string, fields ...Field)
Warn(msg string, fields ...Field)
Error(msg string, fields ...Field)
Fatal(msg string, fields ...Field)
With(fields ...Field) Logger
}
// Helper functions
func String(key, value string) Field
func Int(key string, value int) Field
func Error(err error) Field
func Duration(key string, value time.Duration) Field
```
**Implementation**:
- Uses `go.uber.org/zap` for high-performance logging
- JSON encoder for production, console encoder for development
- Global logger instance accessible via `pkg/logger`
- Request-scoped logger via context
**Example Usage**:
```go
logger.Info("User logged in",
logger.String("user_id", userID),
logger.String("ip", ipAddress),
logger.Duration("duration", duration),
)
```
**Dependencies**: Configuration Module
---
### Dependency Injection Module
**Purpose**: Service registration and lifecycle management.
**Requirements**:
- Service registration via constructors
- Lifecycle management (OnStart/OnStop hooks)
- Dependency resolution
- Service overrides for testing
- Module-based service composition
**Implementation**:
- Uses `go.uber.org/fx` for dependency injection
- Core services registered in `internal/di/core_module.go`
- Modules register services via `fx.Provide()` in `Init()`
- Lifecycle hooks via `fx.Lifecycle`
**Core Module Structure**:
```go
var CoreModule = fx.Options(
fx.Provide(ProvideConfig),
fx.Provide(ProvideLogger),
fx.Provide(ProvideDatabase),
fx.Provide(ProvideHealthCheckers),
fx.Provide(ProvideMetrics),
fx.Provide(ProvideErrorBus),
fx.Provide(ProvideEventBus),
// ... other core services
)
```
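As an illustration of the lifecycle hooks, a long-running component provided by a module can be started and stopped through `fx.Lifecycle`; here an `*http.Server` stands in for any such service, and the function would be wired with `fx.Invoke`.
```go
package di

import (
	"context"
	"net/http"

	"go.uber.org/fx"
)

// registerHTTPServer shows how a provided service hooks into the fx lifecycle.
func registerHTTPServer(lc fx.Lifecycle, srv *http.Server) {
	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			go srv.ListenAndServe() // start serving without blocking startup
			return nil
		},
		OnStop: func(ctx context.Context) error {
			return srv.Shutdown(ctx) // drain in-flight requests on shutdown
		},
	})
}
```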
**Dependencies**: Configuration Module, Logging Module
---
### Health & Metrics Module
**Purpose**: Health checks and metrics collection.
**Requirements**:
- Liveness endpoint (`/healthz`)
- Readiness endpoint (`/ready`)
- Metrics endpoint (`/metrics`) in Prometheus format
- Composable health checkers
- Custom metrics support
**Interface**:
```go
type HealthChecker interface {
Check(ctx context.Context) error
}
type HealthRegistry interface {
Register(name string, checker HealthChecker)
Check(ctx context.Context) map[string]error
}
```
**Core Health Checkers**:
- Database connectivity
- Redis connectivity
- Kafka connectivity (if enabled)
- Disk space
- Memory usage
**Metrics**:
- HTTP request duration (histogram)
- HTTP request count (counter)
- Database query duration (histogram)
- Cache hit/miss ratio (gauge)
- Error count (counter)
**Dependencies**: Configuration Module, Logging Module
---
### Error Bus Module
**Purpose**: Centralized error handling and reporting.
**Requirements**:
- Non-blocking error publishing
- Multiple error sinks (logger, Sentry)
- Error context preservation
- Panic recovery integration
**Interface**:
```go
type ErrorPublisher interface {
Publish(err error)
PublishWithContext(ctx context.Context, err error)
}
```
**Implementation**:
- Channel-based error bus
- Background goroutine consumes errors
- Pluggable sinks (logger, Sentry)
- Context extraction (user_id, trace_id, module)
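A minimal channel-based implementation satisfying this interface could look like the sketch below; sink wiring (logger, Sentry) and buffer sizing are illustrative.
```go
package errbus

import "context"

// Sink receives errors from the bus (e.g. a logger sink or a Sentry sink).
type Sink func(ctx context.Context, err error)

// Bus is a minimal channel-based, non-blocking error publisher.
type Bus struct {
	ch    chan error
	sinks []Sink
}

func New(buffer int, sinks ...Sink) *Bus {
	b := &Bus{ch: make(chan error, buffer), sinks: sinks}
	go b.consume() // background goroutine fans errors out to sinks
	return b
}

func (b *Bus) Publish(err error) {
	select {
	case b.ch <- err:
	default: // never block the caller; drop when the buffer is full
	}
}

func (b *Bus) PublishWithContext(ctx context.Context, err error) {
	// A fuller implementation would capture request-scoped fields
	// (user_id, trace_id, module) from ctx before publishing.
	b.Publish(err)
}

func (b *Bus) consume() {
	for err := range b.ch {
		for _, sink := range b.sinks {
			sink(context.Background(), err)
		}
	}
}
```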
**Dependencies**: Logging Module
---
## Security Modules
### Authentication Module
**Purpose**: User authentication via JWT tokens.
**Requirements**:
- JWT access token generation (short-lived, 15 minutes)
- JWT refresh token generation (long-lived, 7 days)
- Token validation and verification
- Token claims extraction
- Refresh token storage and revocation
**Interface**:
```go
type Authenticator interface {
GenerateAccessToken(userID string, roles []string, tenantID string) (string, error)
GenerateRefreshToken(userID string) (string, error)
VerifyToken(token string) (*TokenClaims, error)
RevokeRefreshToken(tokenHash string) error
}
type TokenClaims struct {
UserID string
Roles []string
TenantID string
ExpiresAt time.Time
IssuedAt time.Time
}
```
**Token Format**:
- Algorithm: HS256 or RS256
- Claims: `sub` (user ID), `roles`, `tenant_id`, `exp`, `iat`
- Refresh tokens stored in database with hash
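A hedged sketch of access-token generation with `github.com/golang-jwt/jwt/v5` and HS256 is shown below; claim names follow the token format above, while secret handling and error wrapping are simplified.
```go
package auth

import (
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// generateAccessToken signs a short-lived HS256 token with the claims listed above.
func generateAccessToken(secret []byte, userID, tenantID string, roles []string) (string, error) {
	now := time.Now()
	claims := jwt.MapClaims{
		"sub":       userID,
		"roles":     roles,
		"tenant_id": tenantID,
		"iat":       now.Unix(),
		"exp":       now.Add(15 * time.Minute).Unix(), // short-lived access token
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(secret)
}
```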
**Endpoints**:
- `POST /api/v1/auth/login` - Authenticate and get tokens
- `POST /api/v1/auth/refresh` - Refresh access token
- `POST /api/v1/auth/logout` - Revoke refresh token
**Dependencies**: Identity Module, Configuration Module
---
### Authorization Module
**Purpose**: Role-based and attribute-based access control.
**Requirements**:
- Permission-based authorization
- Role-to-permission mapping
- User-to-role assignment
- Permission caching
- Context-aware authorization
**Interface**:
```go
type PermissionResolver interface {
HasPermission(ctx context.Context, userID string, perm Permission) (bool, error)
GetUserPermissions(ctx context.Context, userID string) ([]Permission, error)
}
type Authorizer interface {
Authorize(ctx context.Context, perm Permission) error
}
```
**Permission Format**:
- String format: `"{module}.{resource}.{action}"`
- Examples: `blog.post.create`, `user.read`, `system.health.check`
- Code-generated constants for type safety
**Authorization Flow**:
```mermaid
sequenceDiagram
participant Request
participant AuthzMiddleware
participant Authorizer
participant PermissionResolver
participant Cache
participant DB
Request->>AuthzMiddleware: HTTP request with permission
AuthzMiddleware->>Authorizer: Authorize(ctx, permission)
Authorizer->>Authorizer: Extract user from context
Authorizer->>PermissionResolver: HasPermission(user, permission)
PermissionResolver->>Cache: Check cache
Cache-->>PermissionResolver: Cache miss
PermissionResolver->>DB: Load user roles
PermissionResolver->>DB: Load role permissions
DB-->>PermissionResolver: Permissions
PermissionResolver->>Cache: Store in cache
PermissionResolver-->>Authorizer: Has permission: true/false
Authorizer-->>AuthzMiddleware: Authorized or error
AuthzMiddleware-->>Request: Continue or 403
```
**Dependencies**: Identity Module, Cache Module
---
### Identity Module
**Purpose**: User and role management.
**Requirements**:
- User CRUD operations
- Password hashing (argon2id)
- Email verification
- Password reset flow
- Role management
- Permission management
- User-role assignment
**Interfaces**:
```go
type UserRepository interface {
FindByID(ctx context.Context, id string) (*User, error)
FindByEmail(ctx context.Context, email string) (*User, error)
Create(ctx context.Context, u *User) error
Update(ctx context.Context, u *User) error
Delete(ctx context.Context, id string) error
List(ctx context.Context, filters UserFilters) ([]*User, error)
}
type UserService interface {
Register(ctx context.Context, email, password string) (*User, error)
VerifyEmail(ctx context.Context, token string) error
ResetPassword(ctx context.Context, email string) error
ChangePassword(ctx context.Context, userID, oldPassword, newPassword string) error
UpdateProfile(ctx context.Context, userID string, updates UserUpdates) error
}
type RoleRepository interface {
FindByID(ctx context.Context, id string) (*Role, error)
Create(ctx context.Context, r *Role) error
Update(ctx context.Context, r *Role) error
Delete(ctx context.Context, id string) error
AssignPermissions(ctx context.Context, roleID string, permissions []Permission) error
AssignToUser(ctx context.Context, userID string, roleIDs []string) error
}
```
**User Entity**:
- ID (UUID)
- Email (unique, verified)
- Password hash (argon2id)
- Email verified (boolean)
- Created at, updated at
- Tenant ID (optional, for multi-tenancy)
**Role Entity**:
- ID (UUID)
- Name (unique)
- Description
- Created at
- Permissions (many-to-many)
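For the argon2id requirement above, a minimal hashing sketch using `golang.org/x/crypto/argon2` could look like this; the cost parameters and the encoded storage format are illustrative defaults.
```go
package identity

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/argon2"
)

// hashPassword derives an argon2id hash; the cost parameters below are
// common defaults and should be tuned per deployment.
func hashPassword(password string) (string, error) {
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		return "", err
	}
	hash := argon2.IDKey([]byte(password), salt, 1, 64*1024, 4, 32)
	// Store salt and hash together so the password can be verified later.
	return fmt.Sprintf("argon2id$%s$%s",
		base64.RawStdEncoding.EncodeToString(salt),
		base64.RawStdEncoding.EncodeToString(hash)), nil
}
```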
**Dependencies**: Database Module, Notification Module, Cache Module
---
### Audit Module
**Purpose**: Immutable audit logging of security-relevant actions.
**Requirements**:
- Append-only audit log
- Actor tracking (user ID)
- Action tracking (what was done)
- Target tracking (what was affected)
- Metadata storage (JSON)
- Correlation IDs
- High-performance writes
**Interface**:
```go
type Auditor interface {
Record(ctx context.Context, action AuditAction) error
Query(ctx context.Context, filters AuditFilters) ([]AuditEntry, error)
}
type AuditAction struct {
ActorID string
Action string // e.g., "user.created", "role.assigned"
TargetID string
Metadata map[string]any
IPAddress string
UserAgent string
}
```
**Audit Log Schema**:
- ID (UUID)
- Actor ID (user ID)
- Action (string)
- Target ID (resource ID)
- Metadata (JSONB)
- Timestamp
- Request ID
- IP Address
- User Agent
**Automatic Audit Events**:
- User login/logout
- Password changes
- Role assignments
- Permission grants
- Data modifications (configurable)
**Dependencies**: Database Module, Logging Module
---
## Infrastructure Modules
### Database Module
**Purpose**: Database access and ORM functionality.
**Requirements**:
- PostgreSQL support (primary)
- Connection pooling
- Transaction support
- Migration management
- Query instrumentation (OpenTelemetry)
- Multi-tenancy support (tenant_id filtering)
**Implementation**:
- Uses `entgo.io/ent` for code generation
- Ent schemas for all entities
- Migration runner on startup
- Connection pool configuration
**Database Client Interface**:
```go
type DatabaseClient interface {
Client() *ent.Client
Migrate(ctx context.Context) error
Close() error
HealthCheck(ctx context.Context) error
}
```
**Connection Pooling**:
- Max connections: 25
- Max idle connections: 5
- Connection lifetime: 5 minutes
- Idle timeout: 10 minutes
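A sketch of applying these pool settings and handing the connection to the generated Ent client is shown below; the pgx driver and the generated package path are assumptions.
```go
package db

import (
	"database/sql"
	"time"

	"entgo.io/ent/dialect"
	entsql "entgo.io/ent/dialect/sql"
	_ "github.com/jackc/pgx/v5/stdlib" // Postgres driver (assumption)

	"github.com/yourorg/platform/ent" // generated Ent client (illustrative path)
)

// openClient applies the pool settings above and wraps the pool in Ent.
func openClient(dsn string) (*ent.Client, error) {
	db, err := sql.Open("pgx", dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(25)
	db.SetMaxIdleConns(5)
	db.SetConnMaxLifetime(5 * time.Minute)
	db.SetConnMaxIdleTime(10 * time.Minute)

	drv := entsql.OpenDB(dialect.Postgres, db)
	return ent.NewClient(ent.Driver(drv)), nil
}
```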
**Multi-Tenancy**:
- Automatic tenant_id filtering via Ent interceptors
- Tenant-aware queries
- Tenant isolation at application level
**Dependencies**: Configuration Module, Logging Module
---
### Cache Module
**Purpose**: Distributed caching with Redis.
**Requirements**:
- Key-value storage
- TTL support
- Distributed caching (shared across instances)
- Cache invalidation
- Fallback to in-memory cache
**Interface**:
```go
type Cache interface {
Get(ctx context.Context, key string) ([]byte, error)
Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
Delete(ctx context.Context, key string) error
Exists(ctx context.Context, key string) (bool, error)
Increment(ctx context.Context, key string) (int64, error)
}
```
**Use Cases**:
- User permissions caching
- Role assignments caching
- Session data
- Rate limiting state
- Query result caching (optional)
**Cache Key Format**:
- `user:{user_id}:permissions`
- `role:{role_id}:permissions`
- `session:{session_id}`
- `ratelimit:{user_id}:{endpoint}`
**Dependencies**: Configuration Module, Logging Module
---
### Event Bus Module
**Purpose**: Event-driven communication between modules.
**Requirements**:
- Publish/subscribe pattern
- Topic-based routing
- In-process bus (development)
- Kafka bus (production)
- Error handling and retries
- Event ordering (per partition)
**Interface**:
```go
type EventBus interface {
Publish(ctx context.Context, topic string, event Event) error
Subscribe(topic string, handler EventHandler) error
Unsubscribe(topic string) error
}
type Event struct {
ID string
Type string
Source string
Timestamp time.Time
Data map[string]any
}
type EventHandler func(ctx context.Context, event Event) error
```
**Core Events**:
- `platform.user.created`
- `platform.user.updated`
- `platform.user.deleted`
- `platform.role.assigned`
- `platform.role.revoked`
- `platform.permission.granted`
**Event Flow**:
```mermaid
graph LR
Publisher[Module Publisher]
Bus[Event Bus]
Subscriber1[Module Subscriber 1]
Subscriber2[Module Subscriber 2]
Subscriber3[Module Subscriber 3]
Publisher -->|Publish| Bus
Bus -->|Deliver| Subscriber1
Bus -->|Deliver| Subscriber2
Bus -->|Deliver| Subscriber3
```
**Dependencies**: Configuration Module, Logging Module
---
### Scheduler Module
**Purpose**: Background job processing and cron scheduling.
**Requirements**:
- Cron job scheduling
- Async job queuing
- Job retries with backoff
- Job status tracking
- Concurrency control
- Job persistence
**Interface**:
```go
type Scheduler interface {
Cron(spec string, job JobFunc) error
Enqueue(queue string, payload any) error
EnqueueWithRetry(queue string, payload any, retries int) error
}
type JobFunc func(ctx context.Context) error
```
**Implementation**:
- Uses `github.com/robfig/cron/v3` for cron jobs
- Uses `github.com/hibiken/asynq` for job queuing
- Redis-backed job queue
- Job processor with worker pool
**Example Jobs**:
- Cleanup expired tokens (daily)
- Send digest emails (weekly)
- Generate reports (monthly)
- Data archival (custom schedule)
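The sketch below shows one way to register a cron job and enqueue an async task with these libraries; queue names, schedules, payloads, and the Redis address are illustrative.
```go
package jobs

import (
	"encoding/json"

	"github.com/hibiken/asynq"
	"github.com/robfig/cron/v3"
)

// registerJobs wires a periodic cleanup job and enqueues an async task.
func registerJobs(redisAddr string) error {
	// Cron: run token cleanup every day at 03:00.
	c := cron.New()
	if _, err := c.AddFunc("0 3 * * *", cleanupExpiredTokens); err != nil {
		return err
	}
	c.Start()

	// Async queue: enqueue an email digest task with retries.
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: redisAddr})
	payload, _ := json.Marshal(map[string]string{"period": "weekly"})
	task := asynq.NewTask("email:digest", payload)
	_, err := client.Enqueue(task, asynq.MaxRetry(5), asynq.Queue("default"))
	return err
}

func cleanupExpiredTokens() {
	// Delete expired refresh tokens; implementation omitted.
}
```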
**Dependencies**: Cache Module (Redis), Logging Module
---
### Blob Storage Module
**Purpose**: File and blob storage abstraction.
**Requirements**:
- File upload
- File download
- File deletion
- Signed URL generation
- Versioning support (optional)
**Interface**:
```go
type BlobStore interface {
Upload(ctx context.Context, key string, data []byte, contentType string) error
Download(ctx context.Context, key string) ([]byte, error)
Delete(ctx context.Context, key string) error
GetSignedURL(ctx context.Context, key string, ttl time.Duration) (string, error)
Exists(ctx context.Context, key string) (bool, error)
}
```
**Implementation**:
- AWS S3 adapter (primary)
- Local file system adapter (development)
- GCS adapter (optional)
**Key Format**:
- `{module}/{resource_type}/{resource_id}/{filename}`
- Example: `blog/posts/abc123/image.jpg`
**Dependencies**: Configuration Module, Logging Module
---
### Notification Module
**Purpose**: Multi-channel notifications (email, SMS, push).
**Requirements**:
- Email sending (SMTP, AWS SES)
- SMS sending (Twilio, optional)
- Push notifications (FCM, APNs, optional)
- Webhook notifications
- Template support
- Retry logic
**Interface**:
```go
type Notifier interface {
SendEmail(ctx context.Context, to, subject, body string) error
SendEmailWithTemplate(ctx context.Context, to, template string, data map[string]any) error
SendSMS(ctx context.Context, to, message string) error
SendPush(ctx context.Context, deviceToken string, payload PushPayload) error
SendWebhook(ctx context.Context, url string, payload map[string]any) error
}
```
**Email Templates**:
- Email verification
- Password reset
- Welcome email
- Notification digest
**Dependencies**: Configuration Module, Logging Module, Event Bus Module
---
## Feature Modules
### Blog Module (Example)
**Purpose**: Blog post management functionality.
**Requirements**:
- Post CRUD operations
- Comment system (optional)
- Author-based access control
- Post publishing workflow
- Tag/category support
**Permissions**:
- `blog.post.create`
- `blog.post.read`
- `blog.post.update`
- `blog.post.delete`
- `blog.post.publish`
**Routes**:
- `POST /api/v1/blog/posts` - Create post
- `GET /api/v1/blog/posts` - List posts
- `GET /api/v1/blog/posts/:id` - Get post
- `PUT /api/v1/blog/posts/:id` - Update post
- `DELETE /api/v1/blog/posts/:id` - Delete post
**Domain Model**:
```go
type Post struct {
ID string
Title string
Content string
AuthorID string
Status PostStatus // draft, published, archived
CreatedAt time.Time
UpdatedAt time.Time
PublishedAt *time.Time
}
```
**Events Published**:
- `blog.post.created`
- `blog.post.updated`
- `blog.post.published`
- `blog.post.deleted`
**Dependencies**: Core Kernel, Identity Module, Event Bus Module
---
## Module Integration Matrix
```mermaid
graph TB
subgraph "Core Kernel (Required)"
Config[Config]
Logger[Logger]
DI[DI Container]
Health[Health]
end
subgraph "Security (Required)"
Auth[Auth]
Authz[Authz]
Identity[Identity]
Audit[Audit]
end
subgraph "Infrastructure (Optional)"
DB[Database]
Cache[Cache]
EventBus[Event Bus]
Scheduler[Scheduler]
BlobStore[Blob Store]
Notifier[Notifier]
end
subgraph "Feature Modules"
Blog[Blog]
Billing[Billing]
Custom[Custom Modules]
end
Config --> Logger
Config --> DI
DI --> Health
DI --> Auth
DI --> Authz
DI --> Identity
DI --> Audit
DI --> DB
DI --> Cache
DI --> EventBus
DI --> Scheduler
DI --> BlobStore
DI --> Notifier
Auth --> Identity
Authz --> Identity
Authz --> Audit
Blog --> Auth
Blog --> Authz
Blog --> DB
Blog --> EventBus
Blog --> Cache
Billing --> Auth
Billing --> Authz
Billing --> DB
Billing --> EventBus
Billing --> Cache
Custom --> Auth
Custom --> Authz
Custom --> DB
style Config fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Auth fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Blog fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Next Steps
- [Component Relationships](./component-relationships.md) - Detailed component interactions
- [System Architecture](./architecture.md) - Overall system architecture
- [Module Architecture](./architecture-modules.md) - Module design and integration

1485
docs/content/plan.md Normal file

File diff suppressed because it is too large Load Diff

620
docs/content/playbook.md Normal file
View File

@@ -0,0 +1,620 @@
# GoPlatform Boilerplate Playbook
**"Plugin-friendly SaaS/Enterprise Platform – Go Edition"**
## 1⃣ ARCHITECTURAL IMPERATIVES (Go-flavoured)
| Principle | Go-specific rationale | Enforcement Technique |
|-----------|-----------------------|------------------------|
| **Clean / Hexagonal Architecture** | Go's package-level visibility (`internal/`) naturally creates a *boundary* between core and plugins. | Keep all **domain** code in `internal/domain`, expose only **interfaces** in `pkg/`. |
| **Dependency Injection (DI) via Constructors** | Go avoids reflection-heavy containers; compile-time wiring is preferred. | Use **uber-go/fx** (runtime graph) *or* **uber-go/dig** for optional runtime DI. For a lighter-weight solution, use plain **constructor injection** with a small **registry**. |
| **Modular Monolith → Microservice-ready** | A single binary is cheap in Go; later you can extract modules into separate services without breaking APIs. | Each module lives in its own Go **module** (`go.mod`) under `./modules/*`. The core loads them via the **Go plugin** system *or* static registration (preferred for CI stability). |
| **Plugin-first design** | Go's `plugin` package allows runtime loading of compiled `.so` files (Linux/macOS). | Provide an **IModule** interface and a **loader** that discovers `*.so` files (or compiled-in modules for CI). |
| **API-First (OpenAPI + gin/gorilla)** | Guarantees language-agnostic contracts. | Generate server stubs from an `openapi.yaml` stored in `api/`. |
| **Security-by-Design** | Go's static typing makes it easy to keep auth data out of the request flow. | Central middleware for JWT verification + context-based user propagation. |
| **Observability (OpenTelemetry)** | The Go ecosystem ships a first-class OTEL SDK. | Instrument HTTP, DB, queues, and custom events automatically. |
| **Configuration-as-Code** | Viper + Cobra give hierarchical config and flag parsing. | Load defaults → file → env → secret manager (AWS Secrets Manager / Vault). |
| **Testing & CI** | `go test` is fast; Testcontainers via **testcontainers-go** can spin up DB, Redis, Kafka. | CI pipeline runs unit, integration, and contract tests on each PR. |
| **Semantic Versioning & Compatibility** | Go modules already enforce version constraints. | Core declares **minimal required versions** in `go.mod` and uses `replace` for local dev. |
---
## 2⃣ CORE KERNEL (What every Go platform must ship)
| Module | Public Interfaces (exported from `pkg/`) | Recommended Packages | Brief Implementation Sketch |
|--------|-------------------------------------------|----------------------|------------------------------|
| **Config** | `type ConfigProvider interface { Get(key string) any; Unmarshal(v any) error }` | `github.com/spf13/viper` | Load defaults (`config/default.yaml`), then env overrides, then an optional secret store. |
| **Logger** | `type Logger interface { Debug(msg string, fields ...Field); Info(...); Error(...); With(fields ...Field) Logger }` | `go.uber.org/zap` (or `zerolog`) | Global logger is created in `cmd/main.go`; exported via `pkg/logger`. |
| **DI / Service Registry** | `type Container interface { Provide(constructor any) error; Invoke(fn any) error }` | `go.uber.org/dig` (or `fx` for lifecycle) | Core creates a `dig.New()` container, registers core services, then calls `container.Invoke(app.Start)`. |
| **Health & Metrics** | `type HealthChecker interface { Check(ctx context.Context) error }` | `github.com/prometheus/client_golang/prometheus`, `github.com/heptiolabs/healthcheck` | Expose `/healthz`, `/ready`, `/metrics`. |
| **Error Bus** | `type ErrorPublisher interface { Publish(err error) }` | Simple channel-based implementation + optional Sentry (`github.com/getsentry/sentry-go`) | Core registers a singleton `ErrorBus`. |
| **Auth (JWT + OIDC)** | `type Authenticator interface { GenerateToken(userID string, roles []string) (string, error); VerifyToken(token string) (*TokenClaims, error) }` | `github.com/golang-jwt/jwt/v5`, `github.com/coreos/go-oidc` | Token claims embed `sub`, `roles`, `tenant_id`. Middleware adds `User` to `context.Context`. |
| **Authorization (RBAC/ABAC)** | `type Authorizer interface { Authorize(ctx context.Context, perm Permission) error }` | Custom DSL, `github.com/casbin/casbin/v2` (optional) | Permission format: `"module.resource.action"`; core ships a simple in-memory resolver and a `casbin` adapter. |
| **Audit** | `type Auditor interface { Record(ctx context.Context, act AuditAction) error }` | Write to append-only table (Postgres) or Elastic via `olivere/elastic` | Audits include `actorID`, `action`, `targetID`, `metadata`. |
| **Event Bus** | `type EventBus interface { Publish(ctx context.Context, ev Event) error; Subscribe(topic string, handler EventHandler) }` | `github.com/segmentio/kafka-go` (for production) + in-process fallback | Core ships an **in-process bus** used by tests and a **Kafka bus** for real deployments. |
| **Persistence (Repository)** | `type UserRepo interface { FindByID(id string) (*User, error); Create(u *User) error; … }` | `entgo.io/ent` (codegen ORM) **or** `gorm.io/gorm` | Core provides an `EntClient` wrapper that implements all core repos. |
| **Scheduler / Background Jobs** | `type Scheduler interface { Cron(spec string, job JobFunc) error; Enqueue(q string, payload any) error }` | `github.com/robfig/cron/v3`, `github.com/hibiken/asynq` (Redis-backed) | Expose a `JobRegistry` where modules can register periodic jobs. |
| **Notification** | `type Notifier interface { Send(ctx context.Context, n Notification) error }` | `github.com/go-mail/mail` (SMTP), `github.com/aws/aws-sdk-go-v2/service/ses`, FCM/APNs client SDKs (for push) | Core supplies an `EmailNotifier` and a `WebhookNotifier`. |
| **Multi-tenancy (optional)** | `type TenantResolver interface { Resolve(ctx context.Context) (tenantID string, err error) }` | Header / subdomain parser + JWT claim scanner | Tenant ID is stored in request context and automatically added to SQL queries via Ent's `Client` interceptor. |
All *public* interfaces live under `pkg/` so that plugins can import them without pulling in implementation details. The concrete implementations stay in `internal/` (or separate go.mod modules) and are **registered with the container** during bootstrap.
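As a hedged illustration of that split (package and constructor names below are assumptions), bootstrap code can bind a concrete `internal` implementation to its exported interface with fx:

```go
// internal/bootstrap/core.go: a minimal sketch, assuming a zap-based Logger
// implementation in internal/logging and the Logger interface in pkg/logger.
package bootstrap

import (
	"go.uber.org/fx"

	"github.com/yourorg/platform/internal/logging"
	"github.com/yourorg/platform/pkg/logger"
)

// CoreModule exposes only interfaces to the rest of the application;
// feature modules never import internal/logging directly.
var CoreModule = fx.Options(
	fx.Provide(
		fx.Annotate(
			logging.NewZapLogger,      // assumed constructor returning *logging.ZapLogger
			fx.As(new(logger.Logger)), // bind the concrete type to the exported interface
		),
	),
)
```

The same `fx.Annotate`/`fx.As` pattern applies to the ConfigProvider, EventBus, and repository implementations.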
---
## 3⃣ MODULE (PLUGIN) FRAMEWORK
### 3.1 Interface that every module must implement
```go
// pkg/module/module.go
package module
import (
	"go.uber.org/fx"

	"github.com/yourorg/platform/internal/ent" // generated Ent client referenced by Migrations
)
// IModule is the contract a feature plugin must fulfil.
type IModule interface {
	// Name returns a unique, human-readable identifier.
Name() string
// Init registers the module's services, routes, jobs, and permissions.
// The fx.Options returned are merged into the app's lifecycle.
Init() fx.Option
// Migrations returns a slice of Ent migration functions (or raw SQL) that
// the core will run when the platform starts.
Migrations() []func(*ent.Client) error
}
```
### 3.2 Registration Mechanics
Two ways to get a module into the platform:
| Approach | When to use | Pros | Cons |
|----------|-------------|------|------|
| **Static registration**: each module imports `core` and calls `module.Register(MyBlogModule{})` in its own `init()` | Development, CI (no `.so` needed) | Simpler, works on Windows; compile-time type safety | Requires recompiling the binary for new modules |
| **Runtime `plugin` loading**: compile each module as a plugin with `go build -buildmode=plugin -o blog.so ./modules/blog` | Production SaaS where clients drop new modules, or separate microservice extraction | Hot-swap without rebuild | Only works on Linux/macOS; plugins must be compiled with same Go version & same `go.mod` replace graph; debugging harder |
**Static registration example**
```go
// internal/registry/registry.go
package registry
import (
"sync"
"github.com/yourorg/platform/pkg/module"
)
var (
mu sync.Mutex
modules = make(map[string]module.IModule)
)
func Register(m module.IModule) {
mu.Lock()
defer mu.Unlock()
if _, ok := modules[m.Name()]; ok {
panic("module already registered: " + m.Name())
}
modules[m.Name()] = m
}
func All() []module.IModule {
mu.Lock()
defer mu.Unlock()
out := make([]module.IModule, 0, len(modules))
for _, m := range modules {
out = append(out, m)
}
return out
}
```
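Because `internal/registry` is only importable from inside the platform module, the actual `Register` calls are easiest to keep in the platform's own `cmd` package; a sketch (the file name is an assumption):

```go
// cmd/platform/modules_static.go: static registration of compiled-in modules.
package main

import (
	blog "github.com/yourorg/blog/pkg" // sample module from §4
	"github.com/yourorg/platform/internal/registry"
)

func init() {
	// Module is the exported variable declared by the blog package (§4.2).
	registry.Register(blog.Module)
}
```

For local development the platform's `go.mod` would additionally require the blog module, typically via a `replace` directive, mirroring the advice in §10.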
**Plugin loader skeleton**
```go
// internal/pluginloader/loader.go
package pluginloader
import (
	"fmt"
	"plugin"

	"github.com/yourorg/platform/pkg/module"
)
func Load(path string) (module.IModule, error) {
p, err := plugin.Open(path)
if err != nil {
return nil, err
}
sym, err := p.Lookup("Module")
if err != nil {
return nil, err
}
mod, ok := sym.(module.IModule)
if !ok {
return nil, fmt.Errorf("invalid module type")
}
return mod, nil
}
```
> **Tip:** Ship a tiny CLI (`platformctl modules list`) that scans `./plugins/*.so`, loads each via `Load`, and prints `Name()`. This is a great sanity check for ops.
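A hedged sketch of that CLI (argument handling stripped to the minimum; a real `platformctl` would likely use Cobra subcommands):

```go
// cmd/platformctl/main.go: lists the plugins found in ./plugins (or the directory given as the first argument).
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/yourorg/platform/internal/pluginloader"
)

func main() {
	dir := "./plugins"
	if len(os.Args) > 1 {
		dir = os.Args[1]
	}
	paths, err := filepath.Glob(filepath.Join(dir, "*.so"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
		os.Exit(1)
	}
	for _, p := range paths {
		mod, err := pluginloader.Load(p)
		if err != nil {
			fmt.Printf("%-40s LOAD ERROR: %v\n", filepath.Base(p), err)
			continue
		}
		fmt.Printf("%-40s %s\n", filepath.Base(p), mod.Name())
	}
}
```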
### 3.3 Permissions DSL (compile-time safety)
```go
// pkg/perm/perm.go
package perm
type Permission string
func (p Permission) String() string { return string(p) }
var (
// Core permissions
SystemHealthCheck Permission = "system.health.check"
	// Blog module (generated by a small go:generate script)
BlogPostCreate Permission = "blog.post.create"
BlogPostRead Permission = "blog.post.read"
BlogPostUpdate Permission = "blog.post.update"
BlogPostDelete Permission = "blog.post.delete"
)
```
A **codegen** tool (`go generate ./...`) can scan each module's `module.yaml` for declared actions and emit a single `perm.go` file, guaranteeing no duplicate strings.
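A sketch of such a generator, assuming `gopkg.in/yaml.v3`, a `permissions:` list in every `module.yaml`, and an output file of `pkg/perm/perm_gen.go` (all of these are assumptions):

```go
// tools/genperm/main.go: run from the repository root, e.g. via a Makefile target or go:generate.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"gopkg.in/yaml.v3"
)

type manifest struct {
	Permissions []string `yaml:"permissions"`
}

func main() {
	files, err := filepath.Glob("modules/*/module.yaml")
	if err != nil {
		panic(err)
	}
	var b strings.Builder
	b.WriteString("// Code generated by genperm. DO NOT EDIT.\npackage perm\n\nvar (\n")
	seen := map[string]bool{}
	for _, f := range files {
		raw, err := os.ReadFile(f)
		if err != nil {
			panic(err)
		}
		var m manifest
		if err := yaml.Unmarshal(raw, &m); err != nil {
			panic(err)
		}
		for _, p := range m.Permissions {
			if seen[p] {
				panic("duplicate permission declared: " + p)
			}
			seen[p] = true
			// "blog.post.create" -> BlogPostCreate
			var name string
			for _, part := range strings.Split(p, ".") {
				if part != "" {
					name += strings.ToUpper(part[:1]) + part[1:]
				}
			}
			fmt.Fprintf(&b, "\t%s Permission = %q\n", name, p)
		}
	}
	b.WriteString(")\n")
	if err := os.WriteFile("pkg/perm/perm_gen.go", []byte(b.String()), 0o644); err != nil {
		panic(err)
	}
}
```

Wired into the build, the generator fails loudly on duplicate permission strings instead of letting them slip into production.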
---
## 4⃣ SAMPLE FEATURE MODULE: **Blog**
```
modules/
└─ blog/
├─ go.mod # (module github.com/yourorg/blog)
├─ module.yaml
├─ internal/
│ ├─ api/
│ │ └─ handler.go
│ ├─ domain/
│ │ ├─ post.go
│ │ └─ post_repo.go
│ └─ service/
│ └─ post_service.go
└─ pkg/
└─ module.go
```
### 4.1 `module.yaml`
```yaml
name: blog
version: 0.1.0
dependencies:
- core >= 1.3.0
permissions:
- blog.post.create
- blog.post.read
- blog.post.update
- blog.post.delete
routes:
- method: POST
path: /api/v1/blog/posts
permission: blog.post.create
- method: GET
path: /api/v1/blog/posts/:id
permission: blog.post.read
```
### 4.2 Go implementation
```go
// pkg/module.go
package blog
import (
	"context"

	"go.uber.org/fx"

	"github.com/yourorg/platform/internal/ent" // Ent client; must be exposed outside internal/ for out-of-tree modules
	"github.com/yourorg/platform/pkg/module"
)

type BlogModule struct{}

// Compile-time check that BlogModule satisfies the platform contract.
var _ module.IModule = BlogModule{}
func (b BlogModule) Name() string { return "blog" }
func (b BlogModule) Init() fx.Option {
return fx.Options(
// Register repository implementation
fx.Provide(NewPostRepo),
// Register service layer
fx.Provide(NewPostService),
// Register HTTP handlers (using Gin)
fx.Invoke(RegisterHandlers),
		// Register permissions (optional, just for documentation)
fx.Invoke(RegisterPermissions),
)
}
func (b BlogModule) Migrations() []func(*ent.Client) error {
// Ent migration generated in internal/ent/migrate
return []func(*ent.Client) error{
func(c *ent.Client) error { return c.Schema.Create(context.Background()) },
}
}
// Export a variable for the plugin loader
var Module BlogModule
```
**Handler registration (Gin example)**
```go
// internal/api/handler.go
package api
import (
"github.com/gin-gonic/gin"
"github.com/yourorg/blog/internal/service"
"github.com/yourorg/platform/pkg/perm"
"github.com/yourorg/platform/pkg/auth"
)
func RegisterHandlers(r *gin.Engine, svc *service.PostService, authz auth.Authorizer) {
grp := r.Group("/api/v1/blog")
grp.Use(auth.AuthMiddleware()) // verifies JWT, injects user in context
// POST /posts
grp.POST("/posts", func(c *gin.Context) {
if err := authz.Authorize(c.Request.Context(), perm.BlogPostCreate); err != nil {
c.JSON(403, gin.H{"error": "forbidden"})
return
}
// decode request, call svc.Create, return 201…
})
// GET /posts/:id (similar)
}
```
**Repository using Ent**
```go
// internal/domain/post_repo.go
package domain
import (
	"context"

	"github.com/yourorg/platform/internal/ent"
)
type PostRepo struct{ client *ent.Client }
func NewPostRepo(client *ent.Client) *PostRepo { return &PostRepo{client} }
func (r *PostRepo) Create(ctx context.Context, p *Post) (*Post, error) {
entPost, err := r.client.Post.
Create().
SetTitle(p.Title).
SetContent(p.Content).
SetAuthorID(p.AuthorID).
Save(ctx)
if err != nil {
return nil, err
}
	// fromEnt maps the Ent entity back to the module's domain type (see the sketch after this block)
	return fromEnt(entPost), nil
}
```
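The `fromEnt` helper and the domain `Post` type are referenced but not shown; a hedged sketch (field names assume a matching Ent schema, and the `internal/ent` import mirrors the repository example above):

```go
// internal/domain/post.go
package domain

import (
	"time"

	"github.com/yourorg/platform/internal/ent"
)

// Post is the module's own view of a blog post, kept free of Ent types so the
// service and API layers never depend on the ORM directly.
type Post struct {
	ID        int
	Title     string
	Content   string
	AuthorID  string
	CreatedAt time.Time
}

// fromEnt maps the generated Ent entity onto the domain type.
func fromEnt(p *ent.Post) *Post {
	return &Post{
		ID:        p.ID, // Ent uses int IDs by default; adjust if the schema uses UUIDs
		Title:     p.Title,
		Content:   p.Content,
		AuthorID:  p.AuthorID,
		CreatedAt: p.CreatedAt,
	}
}
```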
> **Result:** Adding a new feature is just a matter of creating a new folder under `modules/`, implementing `module.IModule`, registering routes, permissions and migrations. The core automatically wires everything together.
---
## 5⃣ INFRASTRUCTURE ADAPTERS (swappable, per-environment)
| Concern | Implementation (Go) | Where it lives |
|---------|---------------------|----------------|
| **Database** | `entgo.io/ent` (codegen) | `internal/infra/ent/` |
| **Cache** | `github.com/redis/go-redis/v9` | `internal/infra/cache/` |
| **Message Queue** | `github.com/segmentio/kafka-go` (Kafka) **or** `github.com/hibiken/asynq` (Redis) | `internal/infra/bus/` |
| **Blob Storage** | `github.com/aws/aws-sdk-go-v2/service/s3` (or GCS) | `internal/infra/blob/` |
| **Email** | `github.com/go-mail/mail` (SMTP) | `internal/infra/email/` |
| **SMS / Push** | Twilio SDK, Firebase Cloud Messaging | `internal/infra/notify/` |
| **Secret Store** | AWS Secrets Manager (`aws-sdk-go-v2`) or HashiCorp Vault (`github.com/hashicorp/vault/api`) | `internal/infra/secret/` |
All adapters expose an **interface** in `pkg/infra/…` and are registered in the DI container as **singletons**.
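For example, a blob-storage contract might look like the sketch below (the method set is an assumption); the S3/GCS adapters in `internal/infra/blob/` implement it and are bound once with `fx.Provide` + `fx.As`, exactly like the core services:

```go
// pkg/infra/blob.go: the public contract feature modules depend on.
package infra

import (
	"context"
	"io"
	"time"
)

// BlobStore abstracts object storage; concrete adapters (S3, GCS, local disk)
// live in internal/infra/blob and are registered as singletons in the container.
type BlobStore interface {
	Put(ctx context.Context, bucket, key string, r io.Reader) error
	Get(ctx context.Context, bucket, key string) (io.ReadCloser, error)
	SignedURL(ctx context.Context, bucket, key string, ttl time.Duration) (string, error)
	Delete(ctx context.Context, bucket, key string) error
}
```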
---
## 6⃣ OBSERVABILITY STACK
| Layer | Library | What it does |
|-------|---------|--------------|
| **Tracing** | `go.opentelemetry.io/otel`, `go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp` | Auto-instrument HTTP, DB (Ent plugin), Kafka, Redis |
| **Metrics** | `github.com/prometheus/client_golang/prometheus` | Counter/Histogram per request, DB latency, job execution |
| **Logging** | `go.uber.org/zap` (structured JSON) | Global logger, requestscoped fields (`request_id`, `user_id`, `tenant_id`) |
| **Error Reporting** | `github.com/getsentry/sentry-go` (optional) | Capture panics & errors, link to trace ID |
| **Dashboard** | Grafana + Prometheus + Loki (logs) | Provide ready-made dashboards in `ops/` folder |
**Instrumentation example (HTTP)**
```go
import (
	"log"
	"net/http"

	"github.com/gin-gonic/gin"
	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	r := gin.New()
	// Wrap the entire router with OTEL middleware (gin.Engine implements http.Handler)
	wrapped := otelhttp.NewHandler(r, "http-server")
	log.Fatal(http.ListenAndServe(":8080", wrapped))
}
```
**Metrics middleware**
```go
// requestDuration is a *prometheus.HistogramVec labelled by method, path and status
// (its registration is sketched after this block).
func PromMetrics() gin.HandlerFunc {
return func(c *gin.Context) {
start := time.Now()
c.Next()
duration := time.Since(start).Seconds()
method := c.Request.Method
path := c.FullPath()
status := fmt.Sprintf("%d", c.Writer.Status())
requestDuration.WithLabelValues(method, path, status).Observe(duration)
}
}
```
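The `requestDuration` collector referenced above is assumed to live in the same package as the middleware; a registration sketch:

```go
import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// Histogram backing the middleware above; metric name and buckets are illustrative choices.
var requestDuration = promauto.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds",
		Help:    "HTTP request latency partitioned by method, path and status.",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"method", "path", "status"},
)
```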
---
## 7⃣ CONFIGURATION & ENVIRONMENT
```
config/
├─ default.yaml # baseline values
├─ development.yaml
├─ production.yaml
└─ secrets/ # gitignored, loaded via secret manager
```
**Bootstrap**
```go
func LoadConfig() (*Config, error) {
v := viper.New()
v.SetConfigName("default")
v.AddConfigPath("./config")
if err := v.ReadInConfig(); err != nil { return nil, err }
env := v.GetString("environment") // dev / prod / test
v.SetConfigName(env)
	if err := v.MergeInConfig(); err != nil { return nil, err } // env-specific file overrides defaults
v.AutomaticEnv() // env vars win
// optional: secret manager overlay
return &Config{v}, nil
}
```
All services receive a `*Config` via DI.
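A hedged wiring sketch of that hand-off (it assumes `Config` embeds `*viper.Viper` so `GetString` is promoted, the DSN lives under `database.dsn`, and a Postgres driver is blank-imported elsewhere):

```go
app := fx.New(
	fx.Provide(LoadConfig), // *Config becomes available to every constructor
	fx.Provide(func(cfg *Config) (*sql.DB, error) {
		// Constructors simply declare *Config as a parameter; fx injects it.
		return sql.Open("postgres", cfg.GetString("database.dsn"))
	}),
	fx.Invoke(func(db *sql.DB) error { return db.Ping() }), // fail fast on bad config
)
app.Run()
```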
---
## 8⃣ CI / CD PIPELINE (GitHub Actions)
```yaml
name: CI
on:
push:
branches: [main]
pull_request:
jobs:
build:
runs-on: ubuntu-latest
env:
GO111MODULE: on
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.22'
- name: Cache Go modules
uses: actions/cache@v4
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
- name: Install Tools
run: |
go install github.com/vektra/mockery/v2@latest
go install github.com/golang/mock/mockgen@latest
      - name: Lint
        run: |
          go install honnef.co/go/tools/cmd/staticcheck@latest
          staticcheck ./...
- name: Unit Tests
run: |
go test ./... -cover -race -short
- name: Integration Tests (Docker Compose)
run: |
docker compose -f ./docker-compose.test.yml up -d
go test ./... -tags=integration -count=1
docker compose -f ./docker-compose.test.yml down
- name: Build Binaries
run: |
go build -ldflags="-X main.version=${{ github.sha }}" -o bin/platform ./cmd/platform
- name: Publish Docker Image
uses: docker/build-push-action@v5
with:
context: .
push: ${{ github.ref == 'refs/heads/main' }}
tags: ghcr.io/yourorg/platform:${{ github.sha }}
```
**Key points**
* `go test ./... -tags=integration` runs tests that spin up Postgres, Redis, Kafka via **Testcontainers** (`github.com/testcontainers/testcontainers-go`).
* Linting via `staticcheck` (or `golangci-lint`); the older `golint` is deprecated.
* Docker image built from the compiled binary (multi-stage: `golang:1.22-alpine` → `scratch` or `distroless`).
* Semantic release can be added on top (`semantic-release` action) to tag releases automatically.
---
## 9⃣ TESTING STRATEGY
| Test type | Tools | Typical coverage |
|-----------|-------|------------------|
| **Unit** | `testing`, `github.com/stretchr/testify`, `github.com/golang/mock` | Individual services, repositories (use in-memory DB or mocks). |
| **Integration** | `testcontainers-go` for Postgres, Redis, Kafka; real Ent client | End-to-end request → DB → event bus flow. |
| **Contract** | `pact-go` or **OpenAPI** validator middleware (`github.com/getkin/kin-openapi`) | Guarantees that modules do not break the published API. |
| **Load / Stress** | `k6` or `vegeta` scripts in `perf/` | Verify that auth middleware adds < 2ms latency per request. |
| **Security** | `gosec`, `gitleaks` for secret detection, OWASP ZAP for API scan | Detect hardcoded secrets, SQL injection risk. |
**Example integration test skeleton**
```go
func TestCreatePost_Integration(t *testing.T) {
ctx := context.Background()
// Spin up a PostgreSQL container
pg, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
ContainerRequest: testcontainers.ContainerRequest{
Image: "postgres:15-alpine",
Env: map[string]string{"POSTGRES_PASSWORD": "secret", "POSTGRES_DB": "test"},
ExposedPorts: []string{"5432/tcp"},
WaitingFor: wait.ForListeningPort("5432/tcp"),
},
Started: true,
})
require.NoError(t, err)
defer pg.Terminate(ctx)
	// Build DSN from the container's mapped host:port
	hostPort, err := pg.Endpoint(ctx, "")
	require.NoError(t, err)
	dsn := fmt.Sprintf("postgres://postgres:secret@%s/test?sslmode=disable", hostPort)
	// Initialize Ent client against that DSN (requires a blank import of a Postgres driver, e.g. github.com/lib/pq)
	client, err := ent.Open("postgres", dsn)
require.NoError(t, err)
defer client.Close()
// Run schema migration
err = client.Schema.Create(ctx)
require.NoError(t, err)
// Build the whole app using fx, injecting the test client
app := fx.New(
core.ProvideAll(),
fx.Provide(func() *ent.Client { return client }),
blog.Module.Init(),
// ...
)
// Start the app, issue a HTTP POST request through httptest.Server,
// assert 201 and DB row existence.
}
```
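One hedged way to finish the test body sketched above, assuming the wired server listens on `:8080` and the test environment substitutes a permissive `Authorizer`:

```go
	// Start the fx app and stop it when the test ends.
	startCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()
	require.NoError(t, app.Start(startCtx))
	defer app.Stop(context.Background())

	// Exercise the HTTP API.
	resp, err := http.Post(
		"http://localhost:8080/api/v1/blog/posts",
		"application/json",
		strings.NewReader(`{"title":"hello","content":"first post"}`),
	)
	require.NoError(t, err)
	defer resp.Body.Close()
	require.Equal(t, http.StatusCreated, resp.StatusCode)

	// Assert the row was persisted.
	n, err := client.Post.Query().Count(ctx)
	require.NoError(t, err)
	require.Equal(t, 1, n)
```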
---
## 10⃣ COMMON PITFALLS & SOLUTIONS (Go-centric)
| Pitfall | Symptom | Remedy |
|---------|----------|--------|
| **Circular imports** (core module) | `import cycle not allowed` build error | Keep **interfaces only** in `pkg/`. Core implements, modules depend on the interface only. |
| **Too-big binary** (all modules compiled in) | Long build times, memory pressure on CI | Use **Go plugins** for truly optional modules; keep core binary minimal. |
| **Version mismatch with plugins** | Panic: `plugin was built with a different version of package X` | Enforce a **single `replace` directive** for core in each module's `go.mod` (e.g., `replace github.com/yourorg/platform => ../../platform`). |
| **Context leakage** (request data not passed) | Logs missing `user_id`, permission checks use zero-value user | Always store user/tenant info in `context.Context` via middleware; provide helper `auth.FromContext(ctx)`. |
| **Ent migrations out-of-sync** | Startup fails with "column does not exist" | Run `go generate ./...` (ent codegen) and `ent/migrate` automatically at boot **after all module migrations are collected**. |
| **Hardcoded permission strings** | Typos go unnoticed → 403 bugs | Use the **generated `perm` package** (see §3.3) and reference constants everywhere. |
| **Blocking I/O in request path** | 500ms latency spikes | Offload long-running work to **asynq jobs** or **Kafka consumers**; keep request handlers thin. |
| **Over-exposed HTTP handlers** | Missing auth middleware → open endpoint | Wrap the router with a **global security middleware** that checks a whitelist and enforces `Authorizer` for everything else. |
| **Memory leaks with goroutine workers** | Leak after many module reloads | Use **fx.Lifecycle** to start/stop background workers; always cancel contexts on `Stop` (see the sketch below). |
| **Testing with real DB slows CI** | Pipeline > 10 min | Use **testcontainers** in parallel jobs, cache Docker images, or use in-memory SQLite for unit tests and only run DB-heavy tests in a dedicated job. |
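For the goroutine-worker row, a lifecycle-managed background worker typically looks like this sketch (the logger and tick interval are placeholders):

```go
func runWorker(lc fx.Lifecycle, log *zap.Logger) {
	ctx, cancel := context.WithCancel(context.Background())
	done := make(chan struct{})

	lc.Append(fx.Hook{
		OnStart: func(context.Context) error {
			go func() {
				defer close(done)
				ticker := time.NewTicker(30 * time.Second)
				defer ticker.Stop()
				for {
					select {
					case <-ctx.Done():
						return // cancelled on Stop, so the goroutine exits cleanly
					case <-ticker.C:
						log.Debug("worker tick") // periodic work goes here
					}
				}
			}()
			return nil
		},
		OnStop: func(stopCtx context.Context) error {
			cancel() // signal the loop to stop
			select {
			case <-done: // wait for the goroutine to drain
				return nil
			case <-stopCtx.Done():
				return stopCtx.Err()
			}
		},
	})
}
```

Registered via `fx.Invoke(runWorker)`, the worker starts with the application and is guaranteed to be cancelled and drained on shutdown.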
---
## 11⃣ QUICK-START STEPS (What to code first)
1. **Bootstrap repo**
```bash
mkdir platform && cd platform
go mod init github.com/yourorg/platform
mkdir cmd internal pkg config modules
touch cmd/main.go
```
2. **Create core container** (`internal/di/container.go`) using `dig`. Register Config, Logger, DB, EventBus, Authenticator, Authorizer.
3. **Add Auth middleware** (`pkg/auth/middleware.go`) that extracts JWT, validates claims, injects `User` into context.
4. **Implement Permission DSL** (`pkg/perm/perm.go`) and a **simple inmemory resolver** (`pkg/perm/resolver.go`).
5. **Write `module` interface** (`pkg/module/module.go`) and a **static registry** (`internal/registry/registry.go`).
6. **Add first plugin**: copy the **Blog** module skeleton from §4.
`cd modules/blog && go mod init github.com/yourorg/blog && go run ../..` (build & test).
7. **Wire everything in `cmd/main.go`** using `fx.New(...)`:
```go
app := fx.New(
core.Module, // registers core services
fx.Invoke(registry.RegisterAllModules), // loads static modules
fx.Invoke(func(lc fx.Lifecycle, r *gin.Engine) {
lc.Append(fx.Hook{
OnStart: func(context.Context) error {
go r.Run(":8080")
return nil
},
OnStop: func(ctx context.Context) error { return nil },
})
}),
)
app.Run()
```
8. **Run integration test** (`go test ./... -tags=integration`) to sanitycheck DB + routes.
9. **Add CI workflow** (copy the `.github/workflows/ci.yml` from §8) and push.
10. **Publish first Docker image** (`docker build -t ghcr.io/yourorg/platform:dev .`).
After step 10 you have a **complete, production-grade scaffolding** that:
* authenticates users via JWT (with optional OIDC),
* authorizes actions through a compile-time-checked permission DSL,
* logs/metrics/traces out of the box,
* lets any team drop a new `*.so` or static module under `modules/` to add resources,
* ships with a working CI pipeline and a ready-to-run Docker image.
---
## 12⃣ REFERENCE IMPLEMENTATION (public)
If you prefer to start from a **real open-source baseline**, check out the following community projects that already adopt most of the ideas above:
| Repo | Highlights |
|------|------------|
| `github.com/gomicroservices/cleanarch` | Clean-architecture skeleton, fx DI, Ent ORM, JWT auth |
| `github.com/ory/hydra` | Full OIDC provider (good for auth reference) |
| `github.com/segmentio/kafka-go` examples | Event-bus integration |
| `github.com/ThreeDotsLabs/watermill` | Pub/sub abstraction usable as bus wrapper |
| `github.com/hibiken/asynq` | Background job framework (Redis based) |
Fork one, strip the business logic, and rename the packages to match *your* `github.com/yourorg/platform` namespace.
---
## 13⃣ FINAL CHECKLIST (before you ship)
- [ ] Core modules compiled & registered in `internal/di`.
- [ ] `module.IModule` interface and static registry in place.
- [ ] JWT auth + middleware + `User` context helper.
- [ ] Permission constants generated & compiled in `pkg/perm`.
- [ ] Ent schema + migration runner that aggregates all module migrations.
- [ ] OpenTelemetry tracer/provider wired at process start.
- [ ] Prometheus metrics endpoint (`/metrics`).
- [ ] Health endpoints (`/healthz`, `/ready`).
- [ ] Dockerfile (multi-stage) and `docker-compose.yml` for dev.
- [ ] GitHub Actions pipeline (CI) passes all tests.
- [ ] Sample plugin (Blog) builds, loads, registers routes, and passes integration test.
- [ ] Documentation: `README.md`, `docs/architecture.md`, `docs/extension-points.md`.
> **Congratulations!** You now have a **robust, extensible Go platform boilerplate** that can be the foundation for any SaaS, internal toolset, or microservice ecosystem you wish to build. Happy coding! 🚀

View File

@@ -0,0 +1,319 @@
# Requirements
## HIGH-LEVEL ARCHITECTURAL PRINCIPLES
| Principle | Why it matters for a modular platform | How to enforce it |
|-----------|----------------------------------------|-------------------|
| **Separation of Concerns (SoC)** | Keeps core services (auth, audit, config) independent from business modules. | Use **layered** or **hexagonal/clean-architecture** boundaries. |
| **Domain-Driven Design (DDD) Bounded Contexts** | Allows each module to own its own model & rules while sharing a common identity kernel. | Define a **Core Context** (Identity, Security, Infrastructure) and **Feature Contexts** (Billing, CMS, Chat, …). |
| **Modular Monolith → Microservice-ready** | Start simple (single process) but keep each module in its own package so you can later split to services if needed. | Package each module as an **independent library** with its own **DI container module** and **routing**. |
| **Plugin / Extension-point model** | Enables customers or internal teams to drop new features without touching core code. | Export **well-defined interfaces** (e.g., `IUserProvider`, `IPermissionResolver`, `IModuleInitializer`). |
| **API-First** | Guarantees that any UI (web, mobile, CLI) can be built on top of the same contract. | Publish **OpenAPI/GraphQL schema** as part of the build artefact. |
| **Security-by-Design** | The platform will hold user credentials, roles and possibly PII. | Centralize **authentication**, **authorization**, **audit**, **rate-limiting**, **CORS**, **CSP**, **secure defaults**. |
| **Observability** | You need to know when a plugin crashes or misbehaves. | Provide **structured logging**, **metrics**, **tracing**, **health checks**, **central error bus**. |
| **CI/CD-Ready Boilerplate** | Keeps the framework maintainable and encourages best-practice adoption. | Include **GitHub Actions / GitLab CI** pipelines that run lint, unit, integration, contract tests, and a publish step. |
| **Configuration-as-Code** | Each module may need its own settings but you want a single source of truth. | Use a **hierarchical config system** (environment > file > secret store). |
| **Multi-tenancy (optional)** | If you plan SaaS, the core must be ready to isolate data per tenant. | Provide **tenant-aware repository abstractions** and **tenant-scoped middleware**. |
| **Versioning & Compatibility** | Modules evolve; the core should never break existing plugins. | Adopt **semantic versioning** + **backwards-compatibility shim layer** (e.g., deprecation warnings). |
---
## LAYERED / HEXAGONAL BLUEPRINT
```
+---------------------------------------------------+
| Presentation |
| (REST/GraphQL/ gRPC Controllers, UI SDKs, CLI) |
+-------------------+-------------------------------+
| Application Services (Usecases) |
| - orchestrate core & feature modules |
| - enforce policies (RBAC, ratelimit) |
+-------------------+-------------------------------+
| Domain (Business Logic) |
| - Core Entities (User, Role, Permission) |
| - Domain Services (PasswordPolicy, TokenMgr) |
+-------------------+-------------------------------+
| Infrastructure / Adapters |
| - Persistence (ORM/NoSQL Repositories) |
| - External services (Email, SMS, OIDC) |
| - Event bus (Kafka/Rabbit, Inprocess) |
| - File storage, Cache, Search |
+---------------------------------------------------+
| Platform Core (Kernel) |
| - DI container, Module loader, Config manager |
| - Crosscutting concerns (Logging, Metrics) |
| - Security subsystem (AuthN/AuthZ, Token Issuer)|
+---------------------------------------------------+
```
*All layers talk to each other **through interfaces** defined in the Core kernel. Feature modules implement those interfaces and register themselves at startup.*
---
## REQUIRED BASE MODULES (THE “CORE KERNEL”)
| Module | Core responsibilities | Public API / Extension points |
|--------|-----------------------|--------------------------------|
| **Identity Management** | - User CRUD (local + federation) <br>- Password hashing, reset flow <br>- Email/Phone verification | `IUserRepository`, `IUserService`, `IExternalIdpProvider` |
| **Roles & Permissions** | - Role hierarchy (role → permission set) <br>- Permission definitions (string, enum, policy) <br>- Dynamic permission evaluation (ABAC) | `IPermissionResolver`, `IRoleService` |
| **Authentication** | - JWT / OAuth2 Access & Refresh tokens <br>- OpenID Connect Provider (optional) <br>- Session Management (stateless + optional stateful) | `IAuthService`, `ITokenProvider` |
| **Authorization Middleware** | - Enforce RBAC/ABAC on each request <br>- Policy DSL (e.g., `hasPermission('project.read') && resource.ownerId == user.id`) | `IAuthorizationHandler` |
| **Audit & Activity Log** | - Immutable log of security-relevant actions <br>- Correlation IDs, actor, target, timestamp | `IAuditSink`, `IAuditService` |
| **Configuration** | - Hierarchical sources (env, files, secret manager) <br>- Validation schema (JSONSchema / Yup / JSR380) | `IConfigProvider` |
| **Health & Metrics** | - `/healthz`, `/ready`, `/metrics` endpoints <br>- Export to Prometheus, Grafana, CloudWatch | `IHealthCheck`, `IMetricsRegistry` |
| **Error Handling** | - Centralized error objects, stacktrace masking <br>- Automatic mapping to HTTP status <br>- Exception-to-event publishing | `IErrorMapper`, `IErrorBus` |
| **Logging** | - Structured JSON logs (level, requestId, userId) <br>- pluggable sinks (stdout, file, ELK, Cloud Logging) | `ILoggerFactory` |
| **Notification** | - Email, SMS, Push (FCM/APNs), Webhooks <br>- Queue-backed delivery, retries | `INotificationService` |
| **File / Blob Storage** | - Abstracted bucket API (upload, version, signed URLs) | `IBlobStore` |
| **Scheduler / Background Jobs** | - Cron-like tasks, queue workers, retry/backoff policies | `IJobScheduler`, `IJobProcessor` |
| **Internationalization (i18n)** | - Message catalog, locale negotiation, runtime translation | `I18nService` |
| **API Gateway (optional)** | - Rate-limit, request/response transformation, API-key handling, request routing to modules | `IGatewayPlugin` |
| **Multi-tenancy (optional)** | - Tenant identification (subdomain, header, JWT claim) <br>- Tenant-scoped data isolation primitives | `ITenantResolver`, `ITenantContext` |
> **Tip:** Pack each module as a **separate NPM / Maven / NuGet / Go module** with its own `package.json` / `pom.xml` etc. The platform's **bootstrapper** loads every module that implements `IModuleInitializer` (or similar) and calls `ConfigureServices` / `RegisterRoutes`.
---
## EXTENSION-POINT DESIGN (HOW PLUGINS HOOK IN)
1. **Module Manifest**: a tiny JSON/YAML file (`module.yaml`) that declares:
- Module name, version, dependencies (core ≥ 1.2.0, other modules)
- Public routes (e.g., `/api/v1/blog/**`)
- Required permissions (auto-generated from source annotations)
- UI assets (static folder, React component entry point)
2. **Bootstrap Interface**
```ts
export interface IModuleInitializer {
/**
* Called during platform startup.
* Register services, routes, policies, background jobs.
*/
init(app: IApplicationBuilder, container: IServiceContainer): Promise<void>;
}
```
3. **Dependency Injection (DI) Conventions**
- Core registers **contracts** (`IUserRepository`, `IPermissionResolver`) as **singletons**.
- Modules register **implementations** with a **named scope** (e.g., `UserRepository:Local`).
- Override is possible via **module ordering** or explicit `container.override(...)`.
4. **Policy/Permission Extension**
```ts
// core lib
export type Permission = `${string}.${string}`; // e.g., "blog.post.create"
// module
export const BLOG_PERMS = {
POST_CREATE: 'blog.post.create',
POST_READ: 'blog.post.read',
POST_UPDATE: 'blog.post.update',
POST_DELETE: 'blog.post.delete',
} as const;
```
5. **Event Bus & Hooks**
- Central **topic**: `platform.*` (user.created, role.assigned, tenant.created)
- Modules can **publish** and **subscribe** via `IEventBus`.
- Provide **synchronous guard hooks** (`beforeUserCreate`, `afterRoleDelete`) for validation & side-effects.
6. **UI Plugin System**
- Serve a **manifest** at `/modules` that frontend bundles read to render navigation.
- Encourage **Web Component / Module Federation** pattern for SPA integration.
---
## SAMPLE REPOSITORY LAYOUT (language-agnostic)
```
/platform-root
├─ /core # ---- Kernel / Base modules ----
│ ├─ /auth
│ │ ├─ src/
│ │ └─ package.json
│ ├─ /identity
│ ├─ /authorization
│ ├─ /audit
│ ├─ /config
│ ├─ /logging
│ ├─ /metrics
│ └─ index.ts (exports all core APIs)
├─ /modules # ---- Feature plugins ----
│ ├─ /blog
│ │ ├─ module.yaml # manifest
│ │ ├─ src/
│ │ │ ├─ BlogController.ts
│ │ │ ├─ BlogService.ts
│ │ │ └─ BlogModule.ts (implements IModuleInitializer)
│ │ └─ package.json
│ │
│ ├─ /billing
│ └─ /chat
├─ /infra # ---- Infrastructure adapters ----
│ ├─ /orm (typeorm/hibernate/EFCore etc.)
│ ├─ /cache (redis)
│ ├─ /queue (rabbit/kafka)
│ └─ /storage (s3/azureblob)
├─ /gateway (optional APIgateway layer)
├─ /scripts # build / lint / test helpers
├─ /ci
│ └─ github-actions.yml
├─ /docs
│ └─ architecture.md
├─ package.json (or pom.xml / go.mod)
└─ README.md
```
### How it boots
```ts
// platform-root/src/main.ts
import { createApp } from '@core/app';
import { loadModules } from '@core/module-loader';
import { CoreModule } from '@core';
async function bootstrap() {
const app = await createApp();
// 1⃣ Load core kernel (DI, config, logger)
await app.register(CoreModule);
// 2⃣ Dynamically discover all `module.yaml` under /modules
const modules = await loadModules(__dirname + '/modules');
// 3⃣ Initialise each module (order can be defined in manifest)
for (const mod of modules) {
await mod.instance.init(app.builder, app.container);
}
// 4⃣ Start HTTP / gRPC server
await app.listen(process.env.PORT || 3000);
}
bootstrap().catch(err => {
console.error('❌ Platform failed to start', err);
process.exit(1);
});
```
---
## KEY DECISIONS YOU MUST TAKE EARLY
| Decision | Options | Implications |
|----------|---------|--------------|
| **Language / Runtime** | Node.js (NestJS, Fastify), Java (Spring Boot), .NET (ASP.NET Core), Go (Gin/Fiber), Python (FastAPI) | Affects DI framework, module packaging, community libs for auth/OIDC, testing. |
| **Persistence Strategy** | Relational (PostgreSQL, MySQL) + optional NoSQL (Mongo, Dynamo) | Choose an ORM/Repository pattern that can be swapped per module. |
| **Auth Protocol** | JWT + Refresh, OAuth2 Authorization Server, OpenID Connect Provider, or integrate with external IdP (Keycloak, Auth0) | Influences token lifetimes, revocation strategy, multi-tenant support. |
| **Event Bus** | In-process EventEmitter (for monolith) → Kafka/Rabbit for scaling | Must expose both sync and async hooks. |
| **Module Packaging** | NPM packages (private registry) / Maven artifacts / Docker images (for microservice extraction) | Define a *semantic version* policy (core ≥1.0.0 never forces breaking changes on plugins). |
| **Multi-tenancy Model** | Single DB with tenant_id column (shared), Schema-per-tenant, or DB-per-tenant | Affects repository base class and migrations tooling. |
| **Internationalisation** | i18next (frontend) + ICU messages in backend, or .NET Resource files | Choose a format that can be merged from modules at build time. |
| **CI/CD** | GitHub Actions + Docker Buildx + semantic-release | Automate publishing of core + modules to same artifact registry. |
| **Testing Strategy** | Unit (Jest, JUnit, xUnit), Integration (Testcontainers), Contract (Pact) | Provide a **core testing harness** that loads a dummy module and asserts the contract of each extension point. |
---
## COMMON PITFALLS & HOW TO AVOID THEM
| Pitfall | Symptoms | Fix / Guardrail |
|---------|----------|-----------------|
| **Tight coupling of modules to core implementation** | Module imports internal ORM classes, fails on core upgrade. | Expose **only interfaces** (`IUserRepository`) from core and keep the concrete implementation as a private package. |
| **Hardcoded permission strings** | Duplicate names across modules, typos cause silent authorisation bypass. | Provide a **Permission Builder DSL** (`Permission.define('blog.post', ['create', 'read'])`) that generates constants and registers them automatically. |
| **Global state in modules** | Tests interfere with each other, memory leaks when hot-reloading. | Enforce **stateless services**; keep per-request scoped data (e.g., via DI context). |
| **Schema migrations clash** | Two modules try to add the same column or foreign key. | Adopt a **central migration orchestrator** (e.g., Flyway/DBMate) that loads migration scripts from each module in alphabetical order. |
| **Authorization checks omitted in new routes** | Security hole for new plugin routes. | Provide a **base controller class** that auto-applies `Authorize` filter, or a compile-time lint rule that checks every exported route for a permission annotation. |
| **Vendor lock-in to a particular IdP** | Hard to replace Keycloak later. | Keep **IdP adapters** behind an `IIdentityProvider` interface; ship at least two (local DB + OIDC). |
| **Unbounded background jobs** | Queue overflow, OOM, duplicate processing. | Use a **job-scheduler abstraction** that caps concurrency, persists state, and provides `@Retry` decorator. |
| **Insufficient observability** | You can't tell which module caused latency spikes. | Tag every log/metric with `module=<moduleName>` automatically via middleware. |
| **Version drift between core and modules** | Module built against core 1.0 fails on core 1.5. | Publish a **core compatibility matrix** and enforce `peerDependencies` in package.json; CI should fail on mismatched ranges. |
---
## QUICK START GUIDE (What to Build First)
1. **Create the Core Kernel**
- Set up DI container, config loader, logger, health/metrics endpoint.
- Scaffold `IUserRepository`, `IPermissionResolver`, `ITokenProvider`.
2. **Implement Identity & Auth**
- Choose JWT + Refresh + optional OpenID Connect.
- Add password hashing (bcrypt/argon2) and email verification flow.
3. **Add Role/Permission Engine**
- Simple RBAC matrix with an extensible `Permission` type.
- Provide an admin UI (or API only) to manage roles.
4. **Set Up Event Bus & Audit**
- Publish `user.created`, `role.granted` events.
- Store audit entries in an appendonly table (or log to Elastic).
5. **Build the Module Loader**
- Scan `modules/*/module.yaml`, load via `require()`/classpath.
- Register each `IModuleInitializer`.
6. **Create a Sample Feature Module**, e.g., **Blog**
- Define its own entities (`Post`, `Comment`).
- Register routes (`/api/v1/blog/posts`).
- Declare required permissions (`blog.post.create`).
7. **Write Integration Tests**
- Spin up an in-memory DB (SQLite or H2).
- Load core + blog module, assert that a user without `blog.post.create` receives 403.
8. **Add CI Pipeline**
- Lint → Unit → Integration (Docker Compose with DB + Redis).
- On tag, publish `core` and `blog` packages to your private registry.
9. **Document Extension Points**
- Provide a **Developer Handbook** (README + `docs/extension-points.md`).
10. **Iterate**: add Notification, Scheduler, Multi-tenancy, API Gateway as needed.
---
## TOOLS & LIBRARIES (starter suggestions per stack)
| Stack | Core | Auth | DI / Module | Event Bus | ORM | Validation | Testing |
|-------|------|------|-------------|-----------|-----|------------|---------|
| **Node (TypeScript)** | NestJS (or Fastify + `awilix`) | `@nestjs/passport`, `passport-jwt`, `openid-client` | NestJS dynamic modules or `@nestjs-modules/mailer` | `@nestjs/event-emitter` or `KafkaJS` | TypeORM / Prisma | `class-validator` + `class-transformer` | Jest + `supertest`, Testcontainers |
| **Java** | Spring Boot | Spring Security + `spring-boot-starter-oauth2-resource-server` | Spring Boot `@Configuration` + `ImportBeanDefinitionRegistrar` | Spring Cloud Stream (Kafka) | JPA / Hibernate | Bean Validation (Hibernate Validator) | JUnit5 + Testcontainers |
| **.NET 8** | ASP.NET Core | `Microsoft.AspNetCore.Authentication.JwtBearer` | `IHostedService` + `Scrutor` for module discovery | MassTransit (Rabbit/Kafka) | EF Core | FluentValidation | xUnit + Testcontainers (Docker) |
| **Go** | Echo / Fiber | `golang.org/x/oauth2` + `github.com/golang-jwt/jwt/v5` | `uber-go/fx` for DI, module registration | `segmentio/kafka-go` | GORM / Ent | `go-playground/validator` | Testify + Dockertest |
| **Python** | FastAPI | `fastapi-users` / `Authlib` | `pluggy` (pytest plugins) or custom loader | `aiokafka` | SQLModel / Tortoise ORM | Pydantic | Pytest + pytestasyncio, Testcontainers |
Pick the stack you're most comfortable with; the concepts stay identical.
---
## TL;DR: What You Must Deliver
| Layer | Musthave components | Why |
|-------|----------------------|-----|
| **Core Kernel** | Config, Logger, DI, Health, Metrics, Error Bus | Foundation for any module. |
| **Security** | Auth (JWT/OIDC), Authorization (RBAC + ABAC), Audit | Guarantees secure, traceable access. |
| **User & Role Management** | User CRUD, Password reset, Role ↔ Permission matrix | The “identity” piece everyone will reuse. |
| **Extension System** | `IModuleInitializer`, `module.yaml`, EventBus, Permission DSL | Enables plugins without touching core. |
| **Infrastructure Adapters** | DB repo, Cache, Queue, Blob storage, Email/SMS | Keeps core agnostic to any concrete tech. |
| **Observability** | Structured logs, Prometheus metrics, OpenTelemetry traces | You can monitor each module individually. |
| **DevOps Boilerplate** | CI pipelines, Dockerfiles, Semantic-release, Docs | Makes the framework production-ready out-of-the-box. |
| **Sample Feature Module** | (e.g., Blog) to show how to add routes, permissions, DB entities | Provides a reference implementation for future developers. |
When you scaffold those pieces **once**, any downstream team can drop a new folder that follows the `module.yaml` contract, implement the initializer, add its own tables & APIs, and instantly get:
* secure authentication,
* rolebased authorization,
* logging/metrics,
* unified config,
* CI-ready testing,
* optional multi-tenant isolation.
That's the foundation of a **robust, future-proof platform boilerplate**. Happy building! 🚀

View File

@@ -0,0 +1,62 @@
# Complete Task List
This document provides a comprehensive list of all tasks across all phases. Each task has a corresponding detailed file in the phase-specific directories.
## Task Organization
Tasks are organized by phase and section. Each task file follows the naming convention: `{section}.{subtask}-{description}.md`
## Phase 0: Project Setup & Foundation
### 0.1 Repository Bootstrap
- [0.1.1 - Initialize Go Module](./phase0/0.1.1-initialize-go-module.md)
- [0.1.2 - Create Directory Structure](./phase0/0.1.2-create-directory-structure.md)
- [0.1.3 - Add Gitignore](./phase0/0.1.3-add-gitignore.md)
- [0.1.4 - Create Initial README](./phase0/0.1.4-create-initial-readme.md)
### 0.2 Configuration System
- [0.2.1 - Install Configuration Dependencies](./phase0/0.2.1-install-config-dependencies.md)
- [0.2.2 - Create Config Interface](./phase0/0.2.2-create-config-interface.md)
- [0.2.3 - Implement Config Loader](./phase0/0.2.3-implement-config-loader.md)
- [0.2.4 - Create Configuration Files](./phase0/0.2.4-create-configuration-files.md)
### 0.3 Logging Foundation
- [0.3.1 - Install Logging Dependencies](./phase0/0.3.1-install-logging-dependencies.md)
- See [Phase 0 README](./phase0/README.md) for remaining tasks
### 0.4 Basic CI/CD Pipeline
- See [Phase 0 README](./phase0/README.md) for tasks
### 0.5 Dependency Injection Setup
- See [Phase 0 README](./phase0/README.md) for tasks
## Phase 1-8 Tasks
Detailed task files for Phases 1-8 are being created. See individual phase README files:
- [Phase 1 README](./phase1/README.md) - Core Kernel & Infrastructure
- [Phase 2 README](./phase2/README.md) - Authentication & Authorization
- [Phase 3 README](./phase3/README.md) - Module Framework
- [Phase 4 README](./phase4/README.md) - Sample Feature Module (Blog)
- [Phase 5 README](./phase5/README.md) - Infrastructure Adapters
- [Phase 6 README](./phase6/README.md) - Observability & Production Readiness
- [Phase 7 README](./phase7/README.md) - Testing, Documentation & CI/CD
- [Phase 8 README](./phase8/README.md) - Advanced Features & Polish
## Task Status Tracking
To track task completion:
1. Update the Status field in each task file
2. Update checkboxes in the main plan.md
3. Reference task IDs in commit messages: `[0.1.1] Initialize Go module`
4. Link GitHub issues to tasks if using issue tracking
## Generating Missing Task Files
A script is available to generate task files from plan.md:
```bash
cd docs/tasks
python3 generate_tasks.py
```
Note: Manually review and refine generated task files as needed.

View File

@@ -0,0 +1,63 @@
# Implementation Tasks
This directory contains detailed task definitions for each phase of the Go Platform implementation.
## Task Organization
Tasks are organized by phase, with each major task section having its own detailed file:
### Phase 0: Project Setup & Foundation
- [Phase 0 Tasks](./phase0/README.md) - All Phase 0 tasks
### Phase 1: Core Kernel & Infrastructure
- [Phase 1 Tasks](./phase1/README.md) - All Phase 1 tasks
### Phase 2: Authentication & Authorization
- [Phase 2 Tasks](./phase2/README.md) - All Phase 2 tasks
### Phase 3: Module Framework
- [Phase 3 Tasks](./phase3/README.md) - All Phase 3 tasks
### Phase 4: Sample Feature Module (Blog)
- [Phase 4 Tasks](./phase4/README.md) - All Phase 4 tasks
### Phase 5: Infrastructure Adapters
- [Phase 5 Tasks](./phase5/README.md) - All Phase 5 tasks
### Phase 6: Observability & Production Readiness
- [Phase 6 Tasks](./phase6/README.md) - All Phase 6 tasks
### Phase 7: Testing, Documentation & CI/CD
- [Phase 7 Tasks](./phase7/README.md) - All Phase 7 tasks
### Phase 8: Advanced Features & Polish (Optional)
- [Phase 8 Tasks](./phase8/README.md) - All Phase 8 tasks
## Task Status
Each task file includes:
- **Task ID**: Unique identifier (e.g., `0.1.1`)
- **Title**: Descriptive task name
- **Phase**: Implementation phase
- **Status**: Pending | In Progress | Completed | Blocked
- **Priority**: High | Medium | Low
- **Dependencies**: Tasks that must complete first
- **Description**: Detailed requirements
- **Acceptance Criteria**: How to verify completion
- **Implementation Notes**: Technical details and references
- **Related ADRs**: Links to relevant architecture decisions
## Task Tracking
Tasks can be tracked using:
- GitHub Issues (linked from tasks)
- Project boards
- Task management tools
- Direct commit messages referencing task IDs
## Task Naming Convention
Tasks follow the format: `{phase}.{section}.{subtask}`
Example: `0.1.1` = Phase 0, Section 1 (Repository Bootstrap), Subtask 1

View File

@@ -0,0 +1,54 @@
# Task Template
Use this template for creating new task files.
## Metadata
- **Task ID**: {phase}.{section}.{subtask}
- **Title**: {Descriptive Task Name}
- **Phase**: {Phase Number} - {Phase Name}
- **Section**: {Section Number}.{Section Name}
- **Status**: Pending | In Progress | Completed | Blocked
- **Priority**: High | Medium | Low
- **Estimated Time**: {time estimate}
- **Dependencies**: {task IDs that must complete first}
## Description
{Clear description of what needs to be done}
## Requirements
- {Requirement 1}
- {Requirement 2}
- {Requirement 3}
## Implementation Steps
1. {Step 1}
2. {Step 2}
3. {Step 3}
## Acceptance Criteria
- [ ] {Criterion 1}
- [ ] {Criterion 2}
- [ ] {Criterion 3}
## Related ADRs
- [ADR-XXXX: {ADR Title}](../../adr/XXXX-adr-title.md)
## Implementation Notes
- {Note 1}
- {Note 2}
- {Note 3}
## Testing
```bash
# Test commands
go test ./...
```
## Files to Create/Modify
- `path/to/file.go` - {Description}
- `path/to/file_test.go` - {Description}
## References
- {Link to relevant documentation}
- {Link to example code}

View File

@@ -0,0 +1,219 @@
#!/usr/bin/env python3
"""
Generate task files from plan.md
This script parses the plan.md file and creates detailed task files for each task.
"""
import re
import os
from pathlib import Path
def parse_plan(plan_path):
"""Parse plan.md and extract tasks"""
with open(plan_path, 'r') as f:
content = f.read()
tasks = []
current_phase = None
current_section = None
subtask_num = 1
lines = content.split('\n')
i = 0
while i < len(lines):
line = lines[i].rstrip()
# Phase header
phase_match = re.match(r'^## Phase (\d+):', line)
if phase_match:
current_phase = int(phase_match.group(1))
subtask_num = 1 # Reset subtask counter for new phase
i += 1
continue
# Section header (e.g., "#### 0.1 Repository Bootstrap")
section_match = re.match(r'^#### (\d+\.\d+)', line)
if section_match:
current_section = section_match.group(1)
subtask_num = 1 # Reset subtask counter for new section
i += 1
continue
# Task item (checkbox) - must match exactly
task_match = re.match(r'^- \[ \] (.+)', line)
if task_match and current_phase is not None and current_section is not None:
task_desc = task_match.group(1).strip()
# Handle tasks that end with colon (might have code block or list following)
code_block = ""
# Skip empty lines and code blocks
if i + 1 < len(lines):
next_line = lines[i + 1].strip()
if next_line.startswith('```'):
# Extract code block
j = i + 2
while j < len(lines) and not lines[j].strip().startswith('```'):
code_block += lines[j] + '\n'
j += 1
i = j + 1
                else:
                    # Next line is regular text or another task; just advance past the task line
                    i += 1
else:
i += 1
# Only add if we have valid phase and section
if current_phase is not None and current_section is not None:
tasks.append({
'phase': current_phase,
'section': current_section,
'subtask': subtask_num,
'description': task_desc,
'code': code_block.strip()
})
subtask_num += 1
continue
i += 1
return tasks
def create_task_file(task, output_dir):
"""Create a task markdown file"""
phase_dir = output_dir / f"phase{task['phase']}"
phase_dir.mkdir(exist_ok=True)
task_id = f"{task['section']}.{task['subtask']}"
# Create safe filename
safe_desc = re.sub(r'[^\w\s-]', '', task['description'])[:50].strip().replace(' ', '-').lower()
filename = f"{task_id}-{safe_desc}.md"
filepath = phase_dir / filename
# Generate content
content = f"""# Task {task_id}: {task['description']}
## Metadata
- **Task ID**: {task_id}
- **Title**: {task['description']}
- **Phase**: {task['phase']} - {get_phase_name(task['phase'])}
- **Section**: {task['section']}
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
{task['description']}
## Requirements
- {task['description']}
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task {task_id} is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```
"""
if task['code']:
content += f"\n## Code Reference\n\n```go\n{task['code']}\n```\n"
with open(filepath, 'w') as f:
f.write(content)
return filepath
def get_phase_name(phase_num):
"""Get phase name from number"""
phases = {
0: "Project Setup & Foundation",
1: "Core Kernel & Infrastructure",
2: "Authentication & Authorization",
3: "Module Framework",
4: "Sample Feature Module (Blog)",
5: "Infrastructure Adapters",
6: "Observability & Production Readiness",
7: "Testing, Documentation & CI/CD",
8: "Advanced Features & Polish"
}
return phases.get(phase_num, "Unknown")
def main():
script_dir = Path(__file__).parent
plan_path = script_dir.parent / "plan.md"
output_dir = script_dir
if not plan_path.exists():
print(f"Error: {plan_path} not found")
return
print(f"Parsing {plan_path}...")
try:
tasks = parse_plan(plan_path)
print(f"Found {len(tasks)} tasks")
if len(tasks) == 0:
print("Warning: No tasks found. Check the plan.md format.")
return
created = 0
skipped = 0
for task in tasks:
try:
task_id = f"{task['section']}.{task['subtask']}"
# Determine filepath before creating
phase_dir = output_dir / f"phase{task['phase']}"
phase_dir.mkdir(exist_ok=True)
# Create safe filename
safe_desc = re.sub(r'[^\w\s-]', '', task['description'])[:50].strip().replace(' ', '-').lower()
filename = f"{task_id}-{safe_desc}.md"
filepath = phase_dir / filename
# Check if file already exists (skip if so)
if filepath.exists() and filepath.stat().st_size > 100:
skipped += 1
continue
# Create the file
create_task_file(task, output_dir)
created += 1
if created % 10 == 0:
print(f"Created {created} task files...")
except Exception as e:
print(f"Error creating task {task.get('section', '?')}.{task.get('subtask', '?')}: {e}")
import traceback
traceback.print_exc()
print(f"\nCreated {created} new task files")
if skipped > 0:
print(f"Skipped {skipped} existing task files")
print(f"Total tasks processed: {len(tasks)}")
except Exception as e:
print(f"Error: {e}")
import traceback
traceback.print_exc()
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,47 @@
# Task 0.1.1: Initialize Go Module
## Metadata
- **Task ID**: 0.1.1
- **Title**: Initialize Go Module
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.1 Repository Bootstrap
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5 minutes
- **Dependencies**: None
## Description
Initialize the Go module with the correct module path for the platform.
## Requirements
- Use module path: `git.dcentral.systems/toolz/goplt`
- Go version: 1.24.3
- Ensure `go.mod` file is created correctly
## Implementation Steps
1. Run `go mod init git.dcentral.systems/toolz/goplt` in the project root
2. Verify `go.mod` file is created with correct module path
3. Set Go version in `go.mod`: `go 1.24`
## Acceptance Criteria
- [ ] `go.mod` file exists in project root
- [ ] Module path is `git.dcentral.systems/toolz/goplt`
- [ ] Go version is set to `1.24`
- [ ] `go mod verify` passes
## Related ADRs
- [ADR-0001: Go Module Path](../../adr/0001-go-module-path.md)
- [ADR-0002: Go Version](../../adr/0002-go-version.md)
## Implementation Notes
- Ensure the module path matches the organization's Git hosting structure
- The module path will be used for all internal imports
- Update any documentation that references placeholder module paths
## Testing
```bash
# Verify module initialization
go mod verify
go mod tidy
```


View File

@@ -0,0 +1,77 @@
# Task 0.1.2: Create directory structure:
## Metadata
- **Task ID**: 0.1.2
- **Title**: Create directory structure:
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.1
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create directory structure:
## Requirements
- Create directory structure:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 0.1.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```
## Code Reference
```text
platform/
├── cmd/
└── platform/ # Main entry point
├── internal/ # Private implementation code
├── di/ # Dependency injection container
├── registry/ # Module registry
├── pluginloader/ # Plugin loader (optional)
└── infra/ # Infrastructure adapters
├── pkg/ # Public interfaces (exported)
├── config/ # ConfigProvider interface
├── logger/ # Logger interface
├── module/ # IModule interface
├── auth/ # Auth interfaces
├── perm/ # Permission DSL
└── infra/ # Infrastructure interfaces
├── modules/ # Feature modules
└── blog/ # Sample Blog module (Phase 4)
├── config/ # Configuration files
├── default.yaml
├── development.yaml
└── production.yaml
├── api/ # OpenAPI specs
├── scripts/ # Build/test scripts
├── docs/ # Documentation
├── ops/ # Operations (Grafana dashboards, etc.)
├── .github/
└── workflows/
└── ci.yml
├── Dockerfile
├── docker-compose.yml
├── docker-compose.test.yml
└── go.mod
```

View File

@@ -0,0 +1,56 @@
# Task 0.1.3: Add Gitignore
## Metadata
- **Task ID**: 0.1.3
- **Title**: Add Gitignore
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.1 Repository Bootstrap
- **Status**: Pending
- **Priority**: Medium
- **Estimated Time**: 5 minutes
- **Dependencies**: 0.1.1
## Description
Create a comprehensive `.gitignore` file for Go projects that excludes build artifacts, dependencies, IDE files, and sensitive data.
## Requirements
- Ignore Go build artifacts
- Ignore dependency caches
- Ignore IDE-specific files
- Ignore environment-specific files
- Ignore secrets and sensitive data
## Implementation Steps
1. Create `.gitignore` in project root
2. Add standard Go ignores:
- `*.exe`, `*.exe~`, `*.dll`, `*.so`, `*.dylib`
- `*.test`, `*.out`
- `go.work`, `go.work.sum`
3. Add IDE ignores:
- `.vscode/`, `.idea/`, `*.swp`, `*.swo`
4. Add environment ignores:
- `.env`, `.env.local`, `config/secrets/`
5. Add OS ignores:
- `.DS_Store`, `Thumbs.db`
6. Add build artifacts:
- `bin/`, `dist/`, `tmp/`
## Acceptance Criteria
- [ ] `.gitignore` file exists
- [ ] Common Go artifacts are ignored
- [ ] IDE files are ignored
- [ ] Sensitive files are ignored
- [ ] Test with `git status` to verify
## Implementation Notes
- Use standard Go `.gitignore` templates
- Ensure `config/secrets/` is ignored (for secret files)
- Consider adding `*.log` for log files
## Testing
```bash
# Verify gitignore works
git status
# Should not show build artifacts or IDE files
```


View File

@@ -0,0 +1,63 @@
# Task 0.1.4: Create Initial README
## Metadata
- **Task ID**: 0.1.4
- **Title**: Create Initial README
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.1 Repository Bootstrap
- **Status**: Pending
- **Priority**: Medium
- **Estimated Time**: 20 minutes
- **Dependencies**: 0.1.1
## Description
Create an initial `README.md` file that provides an overview of the project, its purpose, architecture, and quick start instructions.
## Requirements
- Project overview and description
- Architecture overview
- Quick start guide
- Links to documentation
- Build and run instructions
## Implementation Steps
1. Create `README.md` in project root
2. Add project title and description
3. Add architecture overview section
4. Add quick start instructions
5. Add links to documentation (`docs/`)
6. Add build and run commands
7. Add contribution guidelines (placeholder)
## Acceptance Criteria
- [ ] `README.md` exists
- [ ] Project overview is clear
- [ ] Quick start instructions are present
- [ ] Links to documentation work
- [ ] Build instructions are accurate
## Implementation Notes
- Keep README concise but informative
- Update as project evolves
- Include badges (build status, etc.) later
- Reference ADRs for architecture decisions
## Content Structure
```markdown
# Go Platform (goplt)
[Description]
## Architecture
[Overview]
## Quick Start
[Instructions]
## Documentation
[Links]
## Development
[Setup instructions]
```

View File

@@ -0,0 +1,47 @@
# Task 0.2.1: Install Configuration Dependencies
## Metadata
- **Task ID**: 0.2.1
- **Title**: Install Configuration Dependencies
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.2 Configuration System
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5 minutes
- **Dependencies**: 0.1.1
## Description
Install Viper and Cobra packages for configuration management and CLI support.
## Requirements
- Install `github.com/spf13/viper` v1.18.0+
- Install `github.com/spf13/cobra` v1.8.0+
- Add to `go.mod` with proper version constraints
## Implementation Steps
1. Run `go get github.com/spf13/viper@v1.18.0`
2. Run `go get github.com/spf13/cobra@v1.8.0`
3. Run `go mod tidy` to update dependencies
4. Verify packages in `go.mod`
## Acceptance Criteria
- [ ] Viper is listed in `go.mod`
- [ ] Cobra is listed in `go.mod`
- [ ] `go mod verify` passes
- [ ] Dependencies are properly versioned
## Related ADRs
- [ADR-0004: Configuration Management](../../adr/0004-configuration-management.md)
## Implementation Notes
- Use specific versions for reproducibility
- Consider using `go get -u` for latest patch versions
- Document version choices in ADR
## Testing
```bash
go mod verify
go list -m github.com/spf13/viper
go list -m github.com/spf13/cobra
```

View File

@@ -0,0 +1,59 @@
# Task 0.2.2: Create Config Interface
## Metadata
- **Task ID**: 0.2.2
- **Title**: Create Config Interface
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.2 Configuration System
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 15 minutes
- **Dependencies**: 0.2.1
## Description
Create the `ConfigProvider` interface in `pkg/config/` to abstract configuration access. This interface will be used by all modules and services.
## Requirements
- Define interface in `pkg/config/config.go`
- Include methods for type-safe access
- Support nested configuration keys
- Support unmarshaling into structs
## Implementation Steps
1. Create `pkg/config/config.go`
2. Define `ConfigProvider` interface:
```go
type ConfigProvider interface {
Get(key string) any
Unmarshal(v any) error
GetString(key string) string
GetInt(key string) int
GetBool(key string) bool
GetStringSlice(key string) []string
}
```
3. Add package documentation
4. Export interface for use by modules
## Acceptance Criteria
- [ ] `pkg/config/config.go` exists
- [ ] `ConfigProvider` interface is defined
- [ ] Interface methods match requirements
- [ ] Package documentation is present
- [ ] Interface compiles without errors
## Related ADRs
- [ADR-0004: Configuration Management](../../adr/0004-configuration-management.md)
## Implementation Notes
- Interface should be minimal and focused
- Additional methods can be added later if needed
- Consider adding `GetDuration()` for time.Duration values
- Consider adding `IsSet(key string) bool` to check if key exists
## Testing
```bash
go build ./pkg/config
go vet ./pkg/config
```

View File

@@ -0,0 +1,60 @@
# Task 0.2.3: Implement Config Loader
## Metadata
- **Task ID**: 0.2.3
- **Title**: Implement Config Loader
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.2 Configuration System
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 30 minutes
- **Dependencies**: 0.2.1, 0.2.2
## Description
Implement the Viper-based configuration loader in `internal/config/` that implements the `ConfigProvider` interface. This loader will handle hierarchical configuration loading from files and environment variables.
## Requirements
- Implement `ConfigProvider` interface using Viper
- Load configuration in order: defaults → environment-specific → env vars
- Support YAML configuration files
- Support environment variable overrides
- Provide placeholder for secret manager integration (Phase 6)
## Implementation Steps
1. Create `internal/config/config.go`:
- Implement `ConfigProvider` interface
- Wrap Viper instance
- Implement all interface methods
2. Create `internal/config/loader.go`:
- `LoadConfig()` function
- Load `config/default.yaml` as baseline
- Merge environment-specific YAML (development/production)
- Apply environment variable overrides
- Set up automatic environment variable binding
3. Add error handling for missing config files
4. Add logging for configuration loading
## Acceptance Criteria
- [ ] `internal/config/config.go` implements `ConfigProvider`
- [ ] `internal/config/loader.go` has `LoadConfig()` function
- [ ] Configuration loads from `config/default.yaml`
- [ ] Environment-specific configs are merged correctly
- [ ] Environment variables override file values
- [ ] Errors are handled gracefully
## Related ADRs
- [ADR-0004: Configuration Management](../../adr/0004-configuration-management.md)
## Implementation Notes
- Use Viper's `SetConfigName()` and `AddConfigPath()`
- Use `MergeInConfig()` for environment-specific files
- Use `AutomaticEnv()` for environment variable binding
- Set environment variable prefix (e.g., `GOPLT_`)
- Use `SetEnvKeyReplacer()` to replace dots with underscores
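Putting those notes together, a minimal sketch of the loader (the `GOPLT_` prefix follows the example above, error handling is simplified, and the real code would wrap the returned Viper instance in the `ConfigProvider` implementation from `internal/config/config.go`):
```go
package config

import (
	"fmt"
	"strings"

	"github.com/spf13/viper"
)

// LoadConfig reads config/default.yaml, merges config/<env>.yaml if present,
// and lets GOPLT_* environment variables override file values.
func LoadConfig(env string) (*viper.Viper, error) {
	v := viper.New()
	v.SetConfigType("yaml")
	v.AddConfigPath("config")

	// Baseline configuration.
	v.SetConfigName("default")
	if err := v.ReadInConfig(); err != nil {
		return nil, fmt.Errorf("read default config: %w", err)
	}

	// Environment-specific overrides (a missing file is not fatal).
	if env != "" {
		v.SetConfigName(env)
		if err := v.MergeInConfig(); err != nil {
			if _, notFound := err.(viper.ConfigFileNotFoundError); !notFound {
				return nil, fmt.Errorf("merge %s config: %w", env, err)
			}
		}
	}

	// Environment variable overrides, e.g. GOPLT_SERVER_PORT -> server.port.
	v.SetEnvPrefix("GOPLT")
	v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
	v.AutomaticEnv()

	return v, nil
}
```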
## Testing
```bash
# Test config loading
go test ./internal/config -v
```

View File

@@ -0,0 +1,67 @@
# Task 0.2.4: Create Configuration Files
## Metadata
- **Task ID**: 0.2.4
- **Title**: Create Configuration Files
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.2 Configuration System
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 15 minutes
- **Dependencies**: 0.1.2
## Description
Create the baseline configuration YAML files that define the default configuration structure for the platform.
## Requirements
- Create `config/default.yaml` with baseline values
- Create `config/development.yaml` with development overrides
- Create `config/production.yaml` with production overrides
- Define configuration schema for all core services
## Implementation Steps
1. Create `config/default.yaml`:
```yaml
environment: development
server:
port: 8080
host: "0.0.0.0"
database:
driver: "postgres"
dsn: ""
logging:
level: "info"
format: "json"
```
2. Create `config/development.yaml`:
- Override logging level to "debug"
- Add development-specific settings
3. Create `config/production.yaml`:
- Override logging level to "warn"
- Add production-specific settings
4. Document configuration options
## Acceptance Criteria
- [ ] `config/default.yaml` exists with complete structure
- [ ] `config/development.yaml` exists
- [ ] `config/production.yaml` exists
- [ ] All configuration files are valid YAML
- [ ] Configuration structure is documented
## Related ADRs
- [ADR-0004: Configuration Management](../../adr/0004-configuration-management.md)
## Implementation Notes
- Use consistent indentation (2 spaces)
- Add comments for unclear configuration options
- Use environment variables for sensitive values (DSN, secrets)
- Consider adding validation schema later
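A sketch of the two override files from steps 2 and 3 (only the values named above; everything else stays in `default.yaml`):
```yaml
# config/development.yaml
logging:
  level: "debug"

# config/production.yaml (separate file)
logging:
  level: "warn"
```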
## Testing
```bash
# Validate YAML syntax
yamllint config/*.yaml
# or
python3 -c "import yaml; yaml.safe_load(open('config/default.yaml'))"
```

View File

@@ -0,0 +1,40 @@
# Task 0.2.5: Add `internal/config/loader.go` with `LoadConfig()` function
## Metadata
- **Task ID**: 0.2.5
- **Title**: Add `internal/config/loader.go` with `LoadConfig()` function
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.2
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Add `internal/config/loader.go` with `LoadConfig()` function
## Requirements
- Add `internal/config/loader.go` with `LoadConfig()` function
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 0.2.5 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,33 @@
# Task 0.3.1: Install Logging Dependencies
## Metadata
- **Task ID**: 0.3.1
- **Title**: Install Logging Dependencies
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.3 Logging Foundation
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5 minutes
- **Dependencies**: 0.1.1
## Description
Install the Zap logging library for structured logging.
## Requirements
- Install `go.uber.org/zap` v1.26.0+
- Add to `go.mod` with proper version constraints
## Implementation Steps
1. Run `go get go.uber.org/zap@v1.26.0`
2. Run `go mod tidy`
3. Verify package in `go.mod`
## Acceptance Criteria
- [ ] Zap is listed in `go.mod`
- [ ] Version is v1.26.0 or later
- [ ] `go mod verify` passes
## Related ADRs
- [ADR-0005: Logging Framework](../../adr/0005-logging-framework.md)
- [ADR-0012: Logger Interface Design](../../adr/0012-logger-interface-design.md)

View File

@@ -0,0 +1,52 @@
# Task 0.3.2: Create `pkg/logger/logger.go` interface:
## Metadata
- **Task ID**: 0.3.2
- **Title**: Create `pkg/logger/logger.go` interface:
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.3
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `pkg/logger/logger.go` interface:
## Requirements
- Create `pkg/logger/logger.go` interface:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 0.3.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```
## Code Reference
```go
type Logger interface {
Debug(msg string, fields ...Field)
Info(msg string, fields ...Field)
Warn(msg string, fields ...Field)
Error(msg string, fields ...Field)
With(fields ...Field) Logger
}
```
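The `Field` type is referenced above but not defined in this snippet. One minimal option, an assumption rather than the final design from ADR-0012, is to alias Zap's field type so call sites stay allocation-free, at the cost of coupling the public API to Zap:
```go
package logger

import "go.uber.org/zap"

// Field is the structured key/value pair passed to Logger methods.
// Aliasing zap.Field is an assumption; a custom struct would decouple the
// public API from Zap but needs a conversion in the implementation.
type Field = zap.Field

// Re-exported constructors so callers never import zap directly.
var (
	String = zap.String
	Int    = zap.Int
	Bool   = zap.Bool
	Err    = zap.Error
)
```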

View File

@@ -0,0 +1,40 @@
# Task 0.3.3: Implement `internal/logger/zap_logger.go`:
## Metadata
- **Task ID**: 0.3.3
- **Title**: Implement `internal/logger/zap_logger.go`:
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.3
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Implement `internal/logger/zap_logger.go`:
## Requirements
- Implement `internal/logger/zap_logger.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 0.3.3 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
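Until real notes land, a minimal sketch of the adapter, assuming `pkg/logger` exposes the `Logger` interface from task 0.3.2 and a `Field` type that aliases `zap.Field` (a distinct `Field` type would need a conversion in each method):
```go
package logger

import (
	"go.uber.org/zap"

	pkglog "git.dcentral.systems/toolz/goplt/pkg/logger"
)

// ZapLogger adapts *zap.Logger to the pkg/logger.Logger interface.
type ZapLogger struct {
	l *zap.Logger
}

// NewZapLogger builds a production-style logger at the given level
// (e.g. "debug"); unparseable levels fall back to info.
func NewZapLogger(level string) (*ZapLogger, error) {
	cfg := zap.NewProductionConfig()
	if err := cfg.Level.UnmarshalText([]byte(level)); err != nil {
		cfg.Level = zap.NewAtomicLevelAt(zap.InfoLevel)
	}
	zl, err := cfg.Build()
	if err != nil {
		return nil, err
	}
	return &ZapLogger{l: zl}, nil
}

func (z *ZapLogger) Debug(msg string, fields ...pkglog.Field) { z.l.Debug(msg, fields...) }
func (z *ZapLogger) Info(msg string, fields ...pkglog.Field)  { z.l.Info(msg, fields...) }
func (z *ZapLogger) Warn(msg string, fields ...pkglog.Field)  { z.l.Warn(msg, fields...) }
func (z *ZapLogger) Error(msg string, fields ...pkglog.Field) { z.l.Error(msg, fields...) }

// With returns a child logger carrying the extra fields.
func (z *ZapLogger) With(fields ...pkglog.Field) pkglog.Logger {
	return &ZapLogger{l: z.l.With(fields...)}
}
```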
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 0.3.4: Add request ID middleware helper (Gin middleware)
## Metadata
- **Task ID**: 0.3.4
- **Title**: Add request ID middleware helper (Gin middleware)
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.3
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Add request ID middleware helper (Gin middleware)
## Requirements
- Add request ID middleware helper (Gin middleware)
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 0.3.4 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
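A common shape for the middleware; the `X-Request-ID` header name, the context key, and the `github.com/google/uuid` dependency are assumptions, not decisions recorded here:
```go
package middleware

import (
	"github.com/gin-gonic/gin"
	"github.com/google/uuid"
)

const RequestIDHeader = "X-Request-ID"

// RequestID reuses an incoming X-Request-ID or generates one, stores it in
// the Gin context for handlers and loggers, and echoes it on the response.
func RequestID() gin.HandlerFunc {
	return func(c *gin.Context) {
		id := c.GetHeader(RequestIDHeader)
		if id == "" {
			id = uuid.NewString()
		}
		c.Set("request_id", id)
		c.Writer.Header().Set(RequestIDHeader, id)
		c.Next()
	}
}
```
Downstream, the structured logger can pull `request_id` from the context so every log line for a request carries the same ID.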
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 0.4.1: Create `.github/workflows/ci.yml`:
## Metadata
- **Task ID**: 0.4.1
- **Title**: Create `.github/workflows/ci.yml`:
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.4
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `.github/workflows/ci.yml`:
## Requirements
- Create `.github/workflows/ci.yml`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 0.4.1 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
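A minimal sketch of the workflow (action and Go versions are assumptions to pin against the project's chosen toolchain; a golangci-lint step can replace `go vet` later):
```yaml
name: ci
on:
  push:
    branches: [main]
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.24.x"
      - name: Lint
        run: go vet ./...
      - name: Build
        run: go build ./cmd/platform
      - name: Test
        run: go test ./...
```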
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 0.4.2: Add `Makefile` with common commands:
## Metadata
- **Task ID**: 0.4.2
- **Title**: Add `Makefile` with common commands:
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.4
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Add `Makefile` with common commands:
## Requirements
- Add `Makefile` with common commands:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 0.4.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
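A starting point with the usual targets (target names and flags are assumptions; recipe lines must be tab-indented):
```makefile
.PHONY: build test lint run clean

BINARY := bin/platform

build:
	go build -o $(BINARY) ./cmd/platform

test:
	go test ./...

lint:
	go vet ./...

run: build
	./$(BINARY)

clean:
	rm -rf bin/
```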
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 0.5.1: Install `go.uber.org/fx`
## Metadata
- **Task ID**: 0.5.1
- **Title**: Install `go.uber.org/fx`
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.5
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Install `go.uber.org/fx`
## Requirements
- Install `go.uber.org/fx`
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 0.5.1 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 0.5.2: Create `internal/di/container.go`:
## Metadata
- **Task ID**: 0.5.2
- **Title**: Create `internal/di/container.go`:
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.5
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `internal/di/container.go`:
## Requirements
- Create `internal/di/container.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 0.5.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
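As a sketch (function and provider names are placeholders), the container can simply compose fx options that later tasks fill in:
```go
package di

import "go.uber.org/fx"

// CoreOptions groups the providers every binary needs. The commented
// constructors are placeholders until the corresponding tasks land.
func CoreOptions() fx.Option {
	return fx.Options(
		fx.Provide(
		// config.LoadConfig,
		// logger.NewZapLogger,
		),
	)
}

// NewApp builds the fx application from the core options plus any extra
// options supplied by the caller (tests, future feature modules).
func NewApp(extra ...fx.Option) *fx.App {
	return fx.New(append([]fx.Option{CoreOptions()}, extra...)...)
}
```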
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 0.5.3: Create `cmd/platform/main.go` skeleton:
## Metadata
- **Task ID**: 0.5.3
- **Title**: Create `cmd/platform/main.go` skeleton:
- **Phase**: 0 - Project Setup & Foundation
- **Section**: 0.5
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `cmd/platform/main.go` skeleton:
## Requirements
- Create `cmd/platform/main.go` skeleton:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 0.5.3 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
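A possible skeleton, assuming the `di.NewApp` helper sketched under task 0.5.2:
```go
package main

import "git.dcentral.systems/toolz/goplt/internal/di"

func main() {
	// NewApp composes the core fx options; Run blocks until SIGINT/SIGTERM
	// and then executes OnStop hooks so the platform shuts down gracefully.
	di.NewApp().Run()
}
```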
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,47 @@
# Phase 0: Project Setup & Foundation
## Overview
Initialize repository structure, set up Go modules and basic tooling, create configuration management foundation, and establish CI/CD skeleton.
## Tasks
### 0.1 Repository Bootstrap
- [0.1.1 - Initialize Go Module](./0.1.1-initialize-go-module.md)
- [0.1.2 - Create Directory Structure](./0.1.2-create-directory-structure.md)
- [0.1.3 - Add Gitignore](./0.1.3-add-gitignore.md)
- [0.1.4 - Create Initial README](./0.1.4-create-initial-readme.md)
### 0.2 Configuration System
- [0.2.1 - Install Configuration Dependencies](./0.2.1-install-config-dependencies.md)
- [0.2.2 - Create Config Interface](./0.2.2-create-config-interface.md)
- [0.2.3 - Implement Config Loader](./0.2.3-implement-config-loader.md)
- [0.2.4 - Create Configuration Files](./0.2.4-create-configuration-files.md)
### 0.3 Logging Foundation
- [0.3.1 - Install Logging Dependencies](./0.3.1-install-logging-dependencies.md)
- [0.3.2 - Create Logger Interface](./0.3.2-create-logger-interface.md) - Create `pkg/logger/logger.go` interface
- [0.3.3 - Implement Zap Logger](./0.3.3-implement-zap-logger.md) - Implement `internal/logger/zap_logger.go`
- [0.3.4 - Add Request ID Middleware](./0.3.4-add-request-id-middleware.md) - Create Gin middleware for request IDs
### 0.4 Basic CI/CD Pipeline
- [0.4.1 - Create GitHub Actions Workflow](./0.4.1-create-github-actions-workflow.md)
- [0.4.2 - Create Makefile](./0.4.2-create-makefile.md)
### 0.5 Dependency Injection Setup
- [0.5.1 - Install FX Dependency](./0.5.1-install-fx-dependency.md)
- [0.5.2 - Create DI Container](./0.5.2-create-di-container.md)
- [0.5.3 - Create Main Entry Point](./0.5.3-create-main-entry-point.md)
## Deliverables Checklist
- [ ] Repository structure in place
- [ ] Configuration system loads YAML files and env vars
- [ ] Structured logging works
- [ ] CI pipeline runs linting and builds binary
- [ ] Basic DI container initialized
## Acceptance Criteria
- `go build ./cmd/platform` succeeds
- `go test ./...` runs (even if tests are empty)
- CI pipeline passes on empty commit
- Config loads from `config/default.yaml`

View File

@@ -0,0 +1,40 @@
# Task 1.1.1: Extend `internal/di/container.go`:
## Metadata
- **Task ID**: 1.1.1
- **Title**: Extend `internal/di/container.go`:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.1
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Extend `internal/di/container.go`:
## Requirements
- Extend `internal/di/container.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.1.1 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.1.2: Create `internal/di/providers.go`:
## Metadata
- **Task ID**: 1.1.2
- **Title**: Create `internal/di/providers.go`:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.1
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `internal/di/providers.go`:
## Requirements
- Create `internal/di/providers.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.1.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.1.3: Add `internal/di/core_module.go`:
## Metadata
- **Task ID**: 1.1.3
- **Title**: Add `internal/di/core_module.go`:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.1
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Add `internal/di/core_module.go`:
## Requirements
- Add `internal/di/core_module.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.1.3 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.2.1: Install `entgo.io/ent/cmd/ent`
## Metadata
- **Task ID**: 1.2.1
- **Title**: Install `entgo.io/ent/cmd/ent`
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.2
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Install `entgo.io/ent/cmd/ent`
## Requirements
- Install `entgo.io/ent/cmd/ent`
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.2.1 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,46 @@
# Task 1.2.2: Initialize Ent schema:
## Metadata
- **Task ID**: 1.2.2
- **Title**: Initialize Ent schema:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.2
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Initialize Ent schema:
## Requirements
- Initialize Ent schema:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.2.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```
## Code Reference
```bash
go run entgo.io/ent/cmd/ent init User Role Permission AuditLog
```

View File

@@ -0,0 +1,40 @@
# Task 1.2.3: Define core entities in `internal/ent/schema/`:
## Metadata
- **Task ID**: 1.2.3
- **Title**: Define core entities in `internal/ent/schema/`:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.2
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Define core entities in `internal/ent/schema/`:
## Requirements
- Define core entities in `internal/ent/schema/`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.2.3 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
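For orientation, a `User` schema in the usual Ent style (the fields are illustrative, not the final model; `Role`, `Permission`, and `AuditLog` follow the same pattern):
```go
package schema

import (
	"time"

	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
	ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("email").Unique().NotEmpty(),
		field.String("password_hash").Sensitive(),
		field.Time("created_at").Default(time.Now),
	}
}
```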
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.2.4: Generate Ent code: `go generate ./internal/ent`
## Metadata
- **Task ID**: 1.2.4
- **Title**: Generate Ent code: `go generate ./internal/ent`
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.2
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Generate Ent code: `go generate ./internal/ent`
## Requirements
- Generate Ent code: `go generate ./internal/ent`
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.2.4 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.2.5: Create `internal/infra/database/client.go`:
## Metadata
- **Task ID**: 1.2.5
- **Title**: Create `internal/infra/database/client.go`:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.2
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `internal/infra/database/client.go`:
## Requirements
- Create `internal/infra/database/client.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.2.5 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
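A plausible shape for the constructor (the Postgres driver choice and running migrations at startup are assumptions; `internal/ent` is the package generated in task 1.2.4):
```go
package database

import (
	"context"
	"fmt"

	_ "github.com/lib/pq" // Postgres driver; pgx via the stdlib interface is an alternative

	"git.dcentral.systems/toolz/goplt/internal/ent"
)

// NewClient opens an Ent client for the configured driver/DSN and runs the
// schema migration so the connection is verified at startup.
func NewClient(ctx context.Context, driver, dsn string) (*ent.Client, error) {
	client, err := ent.Open(driver, dsn)
	if err != nil {
		return nil, fmt.Errorf("open database: %w", err)
	}
	if err := client.Schema.Create(ctx); err != nil {
		client.Close()
		return nil, fmt.Errorf("run migrations: %w", err)
	}
	return client, nil
}
```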
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.2.6: Add database config to `config/default.yaml`
## Metadata
- **Task ID**: 1.2.6
- **Title**: Add database config to `config/default.yaml`
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.2
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Add database config to `config/default.yaml`
## Requirements
- Add database config to `config/default.yaml`
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.2.6 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.3.1: Install `github.com/prometheus/client_golang/prometheus`
## Metadata
- **Task ID**: 1.3.1
- **Title**: Install `github.com/prometheus/client_golang/prometheus`
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.3
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Install `github.com/prometheus/client_golang/prometheus`
## Requirements
- Install `github.com/prometheus/client_golang/prometheus`
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.3.1 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.3.2: Install `github.com/heptiolabs/healthcheck` (optional, or custom)
## Metadata
- **Task ID**: 1.3.2
- **Title**: Install `github.com/heptiolabs/healthcheck` (optional, or custom)
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.3
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Install `github.com/heptiolabs/healthcheck` (optional, or custom)
## Requirements
- Install `github.com/heptiolabs/healthcheck` (optional, or custom)
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.3.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,48 @@
# Task 1.3.3: Create `pkg/health/health.go` interface:
## Metadata
- **Task ID**: 1.3.3
- **Title**: Create `pkg/health/health.go` interface:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.3
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `pkg/health/health.go` interface:
## Requirements
- Create `pkg/health/health.go` interface:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.3.3 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```
## Code Reference
```go
type HealthChecker interface {
Check(ctx context.Context) error
}
```

View File

@@ -0,0 +1,40 @@
# Task 1.3.4: Implement `internal/health/registry.go`:
## Metadata
- **Task ID**: 1.3.4
- **Title**: Implement `internal/health/registry.go`:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.3
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Implement `internal/health/registry.go`:
## Requirements
- Implement `internal/health/registry.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.3.4 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
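A simple registry that fans out to every registered checker (type and method names are assumptions; `HealthChecker` is the interface from 1.3.3):
```go
package health

import (
	"context"
	"fmt"
	"sync"

	"git.dcentral.systems/toolz/goplt/pkg/health"
)

// Registry tracks named HealthCheckers and runs them on demand.
type Registry struct {
	mu     sync.RWMutex
	checks map[string]health.HealthChecker
}

func NewRegistry() *Registry {
	return &Registry{checks: map[string]health.HealthChecker{}}
}

// Register adds or replaces a named checker, e.g. "database".
func (r *Registry) Register(name string, c health.HealthChecker) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.checks[name] = c
}

// Check runs every checker and returns the first failure, wrapped with its name.
func (r *Registry) Check(ctx context.Context) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	for name, c := range r.checks {
		if err := c.Check(ctx); err != nil {
			return fmt.Errorf("%s: %w", name, err)
		}
	}
	return nil
}
```
The `/ready` handler can then return 503 whenever `Check` fails and 200 otherwise.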
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.3.5: Create `internal/metrics/metrics.go`:
## Metadata
- **Task ID**: 1.3.5
- **Title**: Create `internal/metrics/metrics.go`:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.3
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `internal/metrics/metrics.go`:
## Requirements
- Create `internal/metrics/metrics.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.3.5 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
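A typical starting set of HTTP metrics (metric names and labels are assumptions):
```go
package metrics

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// HTTPRequestsTotal counts requests by method, route, and status code.
var HTTPRequestsTotal = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "Total number of HTTP requests processed.",
	},
	[]string{"method", "route", "status"},
)

// HTTPRequestDuration observes request latency in seconds.
var HTTPRequestDuration = promauto.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds",
		Help:    "HTTP request latency distribution.",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"method", "route"},
)
```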
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.3.6: Add `/metrics` endpoint (Prometheus format)
## Metadata
- **Task ID**: 1.3.6
- **Title**: Add `/metrics` endpoint (Prometheus format)
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.3
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Add `/metrics` endpoint (Prometheus format)
## Requirements
- Add `/metrics` endpoint (Prometheus format)
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.3.6 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
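With the client library installed, exposing the endpoint on the Gin router is a one-liner (placing it in `internal/server` is an assumption):
```go
package server

import (
	"github.com/gin-gonic/gin"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// RegisterMetrics mounts the Prometheus text-format handler at /metrics.
func RegisterMetrics(r *gin.Engine) {
	r.GET("/metrics", gin.WrapH(promhttp.Handler()))
}
```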
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.3.7: Register endpoints in main HTTP router
## Metadata
- **Task ID**: 1.3.7
- **Title**: Register endpoints in main HTTP router
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.3
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Register endpoints in main HTTP router
## Requirements
- Register endpoints in main HTTP router
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.3.7 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,48 @@
# Task 1.4.1: Create `pkg/errorbus/errorbus.go` interface:
## Metadata
- **Task ID**: 1.4.1
- **Title**: Create `pkg/errorbus/errorbus.go` interface:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.4
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `pkg/errorbus/errorbus.go` interface:
## Requirements
- Create `pkg/errorbus/errorbus.go` interface:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.4.1 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```
## Code Reference
```go
type ErrorPublisher interface {
Publish(err error)
}
```

View File

@@ -0,0 +1,40 @@
# Task 1.4.2: Implement `internal/errorbus/channel_bus.go`:
## Metadata
- **Task ID**: 1.4.2
- **Title**: Implement `internal/errorbus/channel_bus.go`:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.4
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Implement `internal/errorbus/channel_bus.go`:
## Requirements
- Implement `internal/errorbus/channel_bus.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.4.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
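One minimal channel-backed implementation of the `ErrorPublisher` interface from 1.4.1 (the buffer size and drop-on-full policy are assumptions):
```go
package errorbus

// ChannelBus publishes errors onto a buffered channel that a background
// consumer (logging, alerting) drains. It satisfies pkg/errorbus.ErrorPublisher.
type ChannelBus struct {
	ch chan error
}

// NewChannelBus creates a bus with the given buffer size.
func NewChannelBus(size int) *ChannelBus {
	return &ChannelBus{ch: make(chan error, size)}
}

// Publish never blocks the caller; if the buffer is full the error is dropped.
// Blocking, or counting drops in a metric, are reasonable alternatives.
func (b *ChannelBus) Publish(err error) {
	if err == nil {
		return
	}
	select {
	case b.ch <- err:
	default:
	}
}

// Errors exposes the channel for the consumer goroutine started at boot.
func (b *ChannelBus) Errors() <-chan error {
	return b.ch
}
```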
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.4.3: Add panic recovery middleware that publishes to error bus
## Metadata
- **Task ID**: 1.4.3
- **Title**: Add panic recovery middleware that publishes to error bus
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.4
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Add panic recovery middleware that publishes to error bus
## Requirements
- Add panic recovery middleware that publishes to error bus
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.4.3 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
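A sketch of the Gin recovery middleware wired to the error bus (`ErrorPublisher` is the interface from 1.4.1; the response shape is an assumption):
```go
package middleware

import (
	"fmt"
	"net/http"

	"github.com/gin-gonic/gin"

	"git.dcentral.systems/toolz/goplt/pkg/errorbus"
)

// Recovery converts panics into 500 responses and reports them on the error bus.
func Recovery(bus errorbus.ErrorPublisher) gin.HandlerFunc {
	return func(c *gin.Context) {
		defer func() {
			if r := recover(); r != nil {
				bus.Publish(fmt.Errorf("panic in %s %s: %v", c.Request.Method, c.Request.URL.Path, r))
				c.AbortWithStatus(http.StatusInternalServerError)
			}
		}()
		c.Next()
	}
}
```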
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.4.4: Register error bus in DI container
## Metadata
- **Task ID**: 1.4.4
- **Title**: Register error bus in DI container
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.4
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Register error bus in DI container
## Requirements
- Register error bus in DI container
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.4.4 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.5.1: Install `github.com/gin-gonic/gin`
## Metadata
- **Task ID**: 1.5.1
- **Title**: Install `github.com/gin-gonic/gin`
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.5
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Install `github.com/gin-gonic/gin`
## Requirements
- Install `github.com/gin-gonic/gin`
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.5.1 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.5.2: Create `internal/server/server.go`:
## Metadata
- **Task ID**: 1.5.2
- **Title**: Create `internal/server/server.go`:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.5
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `internal/server/server.go`:
## Requirements
- Create `internal/server/server.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.5.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.5.3: Wire HTTP server into fx lifecycle:
## Metadata
- **Task ID**: 1.5.3
- **Title**: Wire HTTP server into fx lifecycle:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.5
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Wire HTTP server into fx lifecycle:
## Requirements
- Wire HTTP server into fx lifecycle:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.5.3 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
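A sketch of the lifecycle wiring (the listen address is hard-coded here; the real code would read `server.host` and `server.port` from the config provider):
```go
package server

import (
	"context"
	"net/http"

	"github.com/gin-gonic/gin"
	"go.uber.org/fx"
)

// Register starts the HTTP server when fx starts and stops it gracefully on shutdown.
func Register(lc fx.Lifecycle, router *gin.Engine) {
	srv := &http.Server{Addr: ":8080", Handler: router}

	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			// Serve in the background so OnStart returns promptly.
			go func() {
				_ = srv.ListenAndServe()
			}()
			return nil
		},
		OnStop: func(ctx context.Context) error {
			return srv.Shutdown(ctx)
		},
	})
}
```
Adding `fx.Invoke(server.Register)` to the container options is then enough for the server to participate in the application lifecycle.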
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.5.4: Update `cmd/platform/main.go` to use fx lifecycle
## Metadata
- **Task ID**: 1.5.4
- **Title**: Update `cmd/platform/main.go` to use fx lifecycle
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.5
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Update `cmd/platform/main.go` to use fx lifecycle
## Requirements
- Update `cmd/platform/main.go` to use fx lifecycle
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.5.4 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.6.1: Install OpenTelemetry packages:
## Metadata
- **Task ID**: 1.6.1
- **Title**: Install OpenTelemetry packages:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.6
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Install OpenTelemetry packages:
## Requirements
- Install OpenTelemetry packages:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.6.1 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.6.2: Create `internal/observability/tracer.go`:
## Metadata
- **Task ID**: 1.6.2
- **Title**: Create `internal/observability/tracer.go`:
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.6
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `internal/observability/tracer.go`:
## Requirements
- Create `internal/observability/tracer.go`:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.6.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
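A minimal sketch using the OTLP/gRPC exporter (exporter choice and function name are assumptions; the endpoint comes from the standard `OTEL_EXPORTER_OTLP_*` environment variables):
```go
package observability

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// InitTracer registers a global tracer provider backed by an OTLP exporter.
// The returned function flushes and shuts the provider down; call it during
// graceful shutdown.
func InitTracer(ctx context.Context) (func(context.Context) error, error) {
	exp, err := otlptracegrpc.New(ctx)
	if err != nil {
		return nil, err
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	otel.SetTracerProvider(tp)
	return tp.Shutdown, nil
}
```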
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.6.3: Add HTTP instrumentation middleware
## Metadata
- **Task ID**: 1.6.3
- **Title**: Add HTTP instrumentation middleware
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.6
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Add HTTP instrumentation middleware
## Requirements
- Add HTTP instrumentation middleware
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.6.3 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
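The contrib package for Gin already provides this; wiring it in is one `Use` call (the service name "goplt" is an assumption):
```go
package server

import (
	"github.com/gin-gonic/gin"
	"go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin"
)

// NewRouter returns a Gin engine that opens a server span for every request
// using the globally registered tracer provider.
func NewRouter() *gin.Engine {
	r := gin.New()
	r.Use(otelgin.Middleware("goplt"))
	return r
}
```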
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,40 @@
# Task 1.6.4: Add trace context propagation to requests
## Metadata
- **Task ID**: 1.6.4
- **Title**: Add trace context propagation to requests
- **Phase**: 1 - Core Kernel & Infrastructure
- **Section**: 1.6
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Add trace context propagation to requests
## Requirements
- Add trace context propagation to requests
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 1.6.4 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,64 @@
# Phase 1: Core Kernel & Infrastructure
## Overview
Implement dependency injection container, set up database (Ent ORM), create health and metrics endpoints, implement error bus, and add basic HTTP server with middleware.
## Tasks
### 1.1 Dependency Injection Container
- [1.1.1 - Extend DI Container](./1.1.1-extend-internaldicontainergo.md)
- [1.1.2 - Create DI Providers](./1.1.2-create-internaldiprovidersgo.md)
- [1.1.3 - Add Core Module](./1.1.3-add-internaldicore_modulego.md)
### 1.2 Database Setup (Ent)
- [1.2.1 - Install Ent](./1.2.1-install-entgoioentcmdent.md)
- [1.2.2 - Initialize Ent Schema](./1.2.2-initialize-ent-schema.md)
- [1.2.3 - Define Core Entities](./1.2.3-define-core-entities-in-internalentschema.md)
- [1.2.4 - Generate Ent Code](./1.2.4-generate-ent-code-go-generate-internalent.md)
- [1.2.5 - Create Database Client](./1.2.5-create-internalinfradatabaseclientgo.md)
- [1.2.6 - Add Database Config](./1.2.6-add-database-config-to-configdefaultyaml.md)
### 1.3 Health & Metrics
- [1.3.1 - Install Prometheus](./1.3.1-install-githubcomprometheusclient_golangprometheus.md)
- [1.3.2 - Install Health Check](./1.3.2-install-githubcomheptiolabshealthcheck-optional-or.md)
- [1.3.3 - Create Health Interface](./1.3.3-create-pkghealthhealthgo-interface.md)
- [1.3.4 - Implement Health Registry](./1.3.4-implement-internalhealthregistrygo.md)
- [1.3.5 - Create Metrics](./1.3.5-create-internalmetricsmetricsgo.md)
- [1.3.6 - Add Metrics Endpoint](./1.3.6-add-metrics-endpoint-prometheus-format.md)
- [1.3.7 - Register Endpoints](./1.3.7-register-endpoints-in-main-http-router.md)
### 1.4 Error Bus
- [1.4.1 - Create Error Bus Interface](./1.4.1-create-pkgerrorbuserrorbusgo-interface.md)
- [1.4.2 - Implement Channel Bus](./1.4.2-implement-internalerrorbuschannel_busgo.md)
- [1.4.3 - Add Panic Recovery Middleware](./1.4.3-add-panic-recovery-middleware-that-publishes-to-er.md)
- [1.4.4 - Register Error Bus](./1.4.4-register-error-bus-in-di-container.md)
### 1.5 HTTP Server Foundation
- [1.5.1 - Install Gin](./1.5.1-install-githubcomgin-gonicgin.md)
- [1.5.2 - Create Server](./1.5.2-create-internalserverservergo.md)
- [1.5.3 - Wire HTTP Server](./1.5.3-wire-http-server-into-fx-lifecycle.md)
- [1.5.4 - Update Main Entry Point](./1.5.4-update-cmdplatformmaingo-to-use-fx-lifecycle.md)
### 1.6 Observability (OpenTelemetry)
- [1.6.1 - Install OpenTelemetry](./1.6.1-install-opentelemetry-packages.md)
- [1.6.2 - Create Tracer](./1.6.2-create-internalobservabilitytracergo.md)
- [1.6.3 - Add HTTP Instrumentation](./1.6.3-add-http-instrumentation-middleware.md)
- [1.6.4 - Add Trace Context Propagation](./1.6.4-add-trace-context-propagation-to-requests.md)
## Deliverables Checklist
- [ ] DI container with all core services registered
- [ ] Database schema defined with Ent
- [ ] Health check endpoints working
- [ ] Metrics endpoint exposed
- [ ] Error bus implemented and integrated
- [ ] HTTP server with middleware stack
- [ ] OpenTelemetry tracing integrated
## Acceptance Criteria
- `GET /healthz` returns 200
- `GET /ready` checks database connectivity
- `GET /metrics` returns Prometheus metrics
- HTTP requests are logged with structured logging
- Panic recovery middleware catches and reports errors
- OpenTelemetry traces are generated for HTTP requests

View File

@@ -0,0 +1,40 @@
# Task 2.1.1: Install `github.com/golang-jwt/jwt/v5`
## Metadata
- **Task ID**: 2.1.1
- **Title**: Install `github.com/golang-jwt/jwt/v5`
- **Phase**: 2 - Authentication & Authorization
- **Section**: 2.1
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Install `github.com/golang-jwt/jwt/v5`
## Requirements
- Install `github.com/golang-jwt/jwt/v5`
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 2.1.1 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```

View File

@@ -0,0 +1,56 @@
# Task 2.1.2: Create `pkg/auth/auth.go` interfaces:
## Metadata
- **Task ID**: 2.1.2
- **Title**: Create `pkg/auth/auth.go` interfaces:
- **Phase**: 2 - Authentication & Authorization
- **Section**: 2.1
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: TBD
- **Dependencies**: TBD
## Description
Create `pkg/auth/auth.go` interfaces:
## Requirements
- Create `pkg/auth/auth.go` interfaces:
## Implementation Steps
1. TODO: Add implementation steps
2. TODO: Add implementation steps
3. TODO: Add implementation steps
## Acceptance Criteria
- [ ] Task 2.1.2 is completed
- [ ] All requirements are met
- [ ] Code compiles and tests pass
## Related ADRs
- See relevant ADRs in `docs/adr/`
## Implementation Notes
- TODO: Add implementation notes
## Testing
```bash
# TODO: Add test commands
go test ./...
```
## Code Reference
```go
type Authenticator interface {
GenerateToken(userID string, roles []string, tenantID string) (string, error)
VerifyToken(token string) (*TokenClaims, error)
}
type TokenClaims struct {
UserID string
Roles []string
TenantID string
ExpiresAt time.Time
}
```
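For a sense of how these interfaces map onto `jwt/v5`, a hedged sketch of token generation with HS256 (the signing method, claim names, and TTL handling are assumptions for a later task):
```go
package auth

import (
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// claims carries the platform fields from TokenClaims plus the registered claims.
type claims struct {
	Roles    []string `json:"roles"`
	TenantID string   `json:"tenant_id"`
	jwt.RegisteredClaims
}

// generateToken shows one way GenerateToken could be backed by HS256 signing.
func generateToken(secret []byte, userID string, roles []string, tenantID string, ttl time.Duration) (string, error) {
	c := claims{
		Roles:    roles,
		TenantID: tenantID,
		RegisteredClaims: jwt.RegisteredClaims{
			Subject:   userID,
			ExpiresAt: jwt.NewNumericDate(time.Now().Add(ttl)),
			IssuedAt:  jwt.NewNumericDate(time.Now()),
		},
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, c).SignedString(secret)
}
```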

Some files were not shown because too many files have changed in this diff.