Compare commits

...

3 Commits

403 changed files with 9839 additions and 13958 deletions

.dockerignore Normal file

@@ -0,0 +1,14 @@
# Docker ignore file for MkDocs build
site/
.mkdocs_cache/
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
venv/
env/
ENV/
.git/
.gitignore

.gitignore vendored Normal file

@@ -0,0 +1,24 @@
# MkDocs build output
docs/site/
docs/.mkdocs_cache/
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
venv/
env/
ENV/
# Go (if not already present)
*.exe
*.exe~
*.dll
*.so
*.dylib
*.test
*.out
go.work

Makefile Normal file

@@ -0,0 +1,106 @@
.PHONY: help docs-install docs-serve docs-build docs-deploy docs-clean docs-validate
.PHONY: docs-docker-build docs-docker-serve docs-docker-build-site docs-docker-clean docs-docker-compose-up docs-docker-compose-down
.PHONY: docs build-docs clean-docs docs-docker
# Default target
help:
@echo "Available targets:"
@echo ""
@echo "Local commands (require Python/pip):"
@echo " make docs-install - Install MkDocs dependencies"
@echo " make docs-serve - Serve documentation locally (http://127.0.0.1:8000)"
@echo " make docs-build - Build static documentation site"
@echo " make docs-deploy - Deploy documentation to GitHub Pages"
@echo " make docs-clean - Clean build artifacts"
@echo " make docs-validate - Validate MkDocs configuration"
@echo ""
@echo "Docker commands (no Python installation required):"
@echo " make docs-docker-build - Build Docker image for MkDocs"
@echo " make docs-docker-serve - Serve documentation using Docker"
@echo " make docs-docker-build-site - Build static site in Docker container"
@echo " make docs-docker-clean - Clean Docker images and containers"
@echo " make docs-docker-compose-up - Start docs server with docker-compose"
@echo " make docs-docker-compose-down - Stop docs server with docker-compose"
@echo ""
@echo "Documentation shortcuts:"
@echo " make docs - Alias for docs-serve"
@echo " make build-docs - Alias for docs-build"
@echo " make docs-docker - Alias for docs-docker-serve"
# Install MkDocs and dependencies
docs-install:
@echo "Installing MkDocs dependencies..."
cd docs && pip install -r requirements.txt
# Serve documentation locally with auto-reload
docs-serve:
@echo "Starting MkDocs development server..."
@echo "Documentation will be available at http://127.0.0.1:8000"
cd docs && mkdocs serve
# Build static documentation site
docs-build:
@echo "Building static documentation site..."
cd docs && mkdocs build
@echo "Build complete! Output is in the 'docs/site/' directory"
# Deploy documentation to GitHub Pages
docs-deploy:
@echo "Deploying documentation to GitHub Pages..."
cd docs && mkdocs gh-deploy
# Clean build artifacts
docs-clean:
@echo "Cleaning MkDocs build artifacts..."
rm -rf docs/site/
rm -rf docs/.mkdocs_cache/
@echo "Clean complete!"
# Validate MkDocs configuration
docs-validate:
@echo "Validating MkDocs configuration..."
cd docs && mkdocs build --strict
@echo "Configuration is valid!"
# Docker commands
docs-docker-build:
@echo "Building Docker image for MkDocs..."
cd docs && docker build -f Dockerfile -t goplt-docs:latest .
docs-docker-serve: docs-docker-build
@echo "Starting MkDocs development server in Docker..."
@echo "Documentation will be available at http://127.0.0.1:8000"
cd docs && docker run --rm -it \
-p 8000:8000 \
-v "$$(pwd):/docs:ro" \
goplt-docs:latest
docs-docker-build-site: docs-docker-build
@echo "Building static documentation site in Docker..."
cd docs && docker run --rm \
-v "$$(pwd):/docs:ro" \
-v "$$(pwd)/site:/docs/site" \
goplt-docs:latest mkdocs build
@echo "Build complete! Output is in the 'docs/site/' directory"
docs-docker-clean:
@echo "Cleaning Docker images and containers..."
-docker stop goplt-docs 2>/dev/null || true
-docker rm goplt-docs 2>/dev/null || true
-docker rmi goplt-docs:latest 2>/dev/null || true
@echo "Clean complete!"
docs-docker-compose-up:
@echo "Starting MkDocs server with docker-compose..."
@echo "Documentation will be available at http://127.0.0.1:8000"
cd docs && docker-compose -f docker-compose.yml up --build
docs-docker-compose-down:
@echo "Stopping MkDocs server..."
cd docs && docker-compose -f docker-compose.yml down
# Convenience aliases
docs: docs-serve
build-docs: docs-build
clean-docs: docs-clean
docs-docker: docs-docker-serve

docs/.dockerignore Normal file

@@ -0,0 +1,14 @@
# Docker ignore file for MkDocs build
site/
.mkdocs_cache/
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
venv/
env/
ENV/
.git/
.gitignore

docs/Dockerfile Normal file

@@ -0,0 +1,26 @@
# Dockerfile for MkDocs documentation
FROM python:3.11-slim
WORKDIR /docs
# Install system dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements first for better caching
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy all documentation files
COPY . .
# Expose MkDocs default port
EXPOSE 8000
# Default command: serve documentation
CMD ["mkdocs", "serve", "--dev-addr=0.0.0.0:8000"]

docs/MKDOCS_README.md Normal file

@@ -0,0 +1,163 @@
# MkDocs Setup
This project uses [MkDocs](https://www.mkdocs.org/) with the [Material theme](https://squidfunk.github.io/mkdocs-material/) for documentation.
All documentation tooling files are self-contained in the `docs/` directory.
## Prerequisites
You can run MkDocs in two ways:
**Option 1: Local Python Installation**
- Python 3.8 or higher
- pip (Python package manager)
**Option 2: Docker (Recommended - No Python installation needed)**
- Docker and Docker Compose
## Installation
1. Install the required Python packages:
```bash
cd docs
pip install -r requirements.txt
```
Or if you prefer using a virtual environment:
```bash
cd docs
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
## Usage
You can use either the Makefile commands from the project root (recommended) or run MkDocs directly from the `docs/` directory.
### Using Makefile (Recommended)
From the project root, use the Makefile commands:
```bash
# Using Docker (easiest, no Python needed)
make docs-docker-compose-up
# or
make docs-docker
# Using local Python
make docs-install # Install dependencies
make docs-serve # Serve documentation
make docs-build # Build static site
make docs-deploy # Deploy to GitHub Pages
# Show all available commands
make help
```
The documentation will be available at `http://127.0.0.1:8000` with live reload enabled.
### Using Docker Directly
From the `docs/` directory:
```bash
cd docs
# Using docker-compose (easiest)
docker-compose up --build
# Using Docker directly
docker build -t goplt-docs:latest .
docker run --rm -it -p 8000:8000 -v "$(pwd):/docs:ro" goplt-docs:latest
# Build static site
docker run --rm -v "$(pwd):/docs:ro" -v "$(pwd)/site:/docs/site" goplt-docs:latest mkdocs build
```
### Using MkDocs Directly
From the `docs/` directory:
```bash
cd docs
# Serve documentation locally with auto-reload
mkdocs serve
# Build static site
mkdocs build
# Deploy to GitHub Pages
mkdocs gh-deploy
```
## Configuration
The main configuration file is `docs/mkdocs.yml`. Key settings:
- **Site name and metadata**: Configured at the top
- **Theme**: Material theme with light/dark mode support
- **Navigation**: Defined in the `nav` section
- **Plugins**: Search and git revision date plugins enabled
- **Markdown extensions**: Various extensions for code highlighting, tables, etc.
- **docs_dir**: Set to `content` - markdown files are in the `content/` subdirectory
## Project Structure
```
goplt/
├── docs/ # Documentation directory (self-contained)
│ ├── mkdocs.yml # MkDocs configuration
│ ├── requirements.txt # Python dependencies
│ ├── Dockerfile # Docker image for MkDocs
│ ├── docker-compose.yml # Docker Compose configuration
│ ├── .dockerignore # Docker ignore patterns
│ ├── MKDOCS_README.md # This file
│ ├── content/ # Documentation content (markdown files)
│ │ ├── index.md # Home page
│ │ ├── requirements.md # Requirements document
│ │ ├── plan.md # Implementation plan
│ │ ├── playbook.md # Implementation playbook
│ │ ├── adr/ # Architecture Decision Records
│ │ └── stories/ # Implementation tasks
│ └── site/ # Generated site (gitignored)
└── Makefile # Makefile with docs commands (runs from root)
```
## Adding New Documentation
1. Add markdown files to the `docs/content/` directory
2. Update `docs/mkdocs.yml` to add entries to the `nav` section
3. Run `make docs-serve` or `make docs-docker` from project root to preview changes
## Customization
### Theme Colors
Edit the `theme.palette` section in `docs/mkdocs.yml` to change colors.
### Navigation
Update the `nav` section in `docs/mkdocs.yml` to reorganize or add new pages.
### Plugins
Add or remove plugins in the `plugins` section of `docs/mkdocs.yml`.
## Troubleshooting
- **Import errors**: Make sure all dependencies are installed: `cd docs && pip install -r requirements.txt`
- **Navigation issues**: Check that file paths in `docs/mkdocs.yml` match actual file locations
- **Build errors**: Run `mkdocs build --verbose` from the `docs/` directory for detailed error messages
- **Docker issues**: Make sure Docker is running and you have permissions to run containers
## Benefits of Self-Contained Docs
- All documentation tooling is in one place
- Easy to share or move the docs directory
- Doesn't pollute the project root
- Can be versioned or deployed independently


@@ -29,11 +29,11 @@ goplt/
│ ├── config/ # ConfigProvider interface
│ ├── logger/ # Logger interface
│ ├── module/ # IModule interface
│ ├── auth/ # Auth interfaces (Phase 2)
│ ├── perm/ # Permission DSL (Phase 2)
│ ├── auth/ # Auth interfaces (Epic 2)
│ ├── perm/ # Permission DSL (Epic 2)
│ └── infra/ # Infrastructure interfaces
├── modules/ # Feature modules
│ └── blog/ # Sample module (Phase 4)
│ └── blog/ # Sample module (Epic 4)
├── config/ # Configuration files
│ ├── default.yaml
│ ├── development.yaml
@@ -75,7 +75,7 @@ goplt/
### Implementation Notes
- Initialize with `go mod init git.dcentral.systems/toolz/goplt`
- Create all directories upfront in Phase 0
- Create all directories upfront in Epic 0
- Document structure in `README.md`
- Enforce boundaries via `internal/` package visibility
- Use `go build ./...` to verify structure


@@ -21,7 +21,7 @@ Implement a **channel-based error bus** with pluggable sinks:
1. **Error bus interface**: `type ErrorPublisher interface { Publish(err error) }`
2. **Channel-based implementation**: Background goroutine consumes errors from channel
3. **Pluggable sinks**: Logger (always), Sentry (optional, Phase 6)
3. **Pluggable sinks**: Logger (always), Sentry (optional, Epic 6)
4. **Panic recovery middleware**: Automatically publishes panics to error bus
**Rationale:**


@@ -0,0 +1,87 @@
# ADR-0029: Microservices Architecture
## Status
Accepted
## Context
The platform needs services that can scale independently, team autonomy, and flexible deployment. A microservices architecture provides these benefits from day one, and the complexity of supporting both monolith and microservices modes is unnecessary.
## Decision
Design the platform as **microservices architecture from day one**:
1. **Service-Based Architecture**: All modules are independent services:
- Each module is a separate service with its own process
- Services communicate via gRPC (primary) or HTTP (fallback)
- Service client interfaces for all inter-service communication
- No direct in-process calls between services
2. **Service Registry**: Central registry for service discovery:
- All services register on startup
- Service discovery via registry
- Health checking and automatic deregistration
- Support for Consul, etcd, or Kubernetes service discovery
3. **Communication Patterns**:
- **Synchronous**: gRPC service calls (primary), HTTP/REST (fallback)
- **Asynchronous**: Event bus via Kafka
- **Shared State**: Cache (Redis) and Database (PostgreSQL)
4. **Service Boundaries**: Each module is an independent service:
- Independent Go modules (`go.mod`)
- Own database schema (via Ent)
- Own API routes
- Own process and deployment
- Can be scaled independently
5. **Development Simplification**: For local development, multiple services can run in the same process, but they still communicate via service clients (no direct calls)
## Consequences
### Positive
- **Simplified Architecture**: Single architecture pattern, no dual-mode complexity
- **Independent Scaling**: Scale individual services based on load
- **Team Autonomy**: Teams can own and deploy their services independently
- **Technology Diversity**: Different services can use different tech stacks (future)
- **Fault Isolation**: Failure in one service doesn't bring down entire platform
- **Deployment Flexibility**: Deploy services independently
- **Clear Boundaries**: Service boundaries are explicit from the start
### Negative
- **Network Latency**: Inter-service calls have network overhead
- **Distributed System Challenges**: Need to handle network failures, retries, timeouts
- **Service Discovery Overhead**: Additional infrastructure needed
- **Debugging Complexity**: Distributed tracing becomes essential
- **Data Consistency**: Cross-service transactions become challenging
- **Development Setup**: More complex local development (multiple services)
### Mitigations
- **Service Mesh**: Use service mesh (Istio, Linkerd) for advanced microservices features
- **API Gateway**: Central gateway for routing and cross-cutting concerns
- **Event Sourcing**: Use events for eventual consistency
- **Circuit Breakers**: Implement circuit breakers for resilience
- **Comprehensive Observability**: OpenTelemetry, metrics, logging essential
- **Docker Compose**: Simplify local development with docker-compose
- **Development Mode**: Run multiple services in same process for local dev (still use service clients)
## Implementation Strategy
### Service Client Interfaces (Epic 1)
- Define service client interfaces for all core services
- All inter-service communication goes through interfaces
### Service Registry (Epic 3)
- Create service registry interface
- Implement service discovery
- Support for Consul, Kubernetes service discovery
### gRPC Services (Epic 5)
- Implement gRPC service definitions
- Create gRPC servers for all services
- Create gRPC clients for service communication
- HTTP clients as fallback option
## References
- [Service Abstraction Pattern](https://microservices.io/patterns/data/service-per-database.html)
- [Service Discovery Patterns](https://microservices.io/patterns/service-registry.html)
- [gRPC Documentation](https://grpc.io/docs/)


@@ -0,0 +1,73 @@
# ADR-0030: Service Communication Strategy
## Status
Accepted
## Context
Services need to communicate with each other in a microservices architecture. All communication must go through well-defined interfaces that support network calls.
## Decision
Use a **service client-based communication strategy**:
1. **Service Client Interfaces** (Primary for synchronous calls):
- Define interfaces in `pkg/services/` for all services
- All implementations are network-based:
- `internal/services/grpc/client/` - gRPC clients (primary)
- `internal/services/http/client/` - HTTP clients (fallback)
2. **Event Bus** (Primary for asynchronous communication):
- Distributed via Kafka
- Preferred for cross-service communication
- Event-driven architecture for loose coupling
3. **Shared Infrastructure** (For state):
- Redis for cache and distributed state
- PostgreSQL for persistent data
- Kafka for events
## Service Client Pattern
```go
// Interface in pkg/services/
type IdentityServiceClient interface {
GetUser(ctx context.Context, id string) (*User, error)
CreateUser(ctx context.Context, user *User) (*User, error)
}
// gRPC implementation (primary)
type grpcIdentityClient struct {
conn *grpc.ClientConn
client pb.IdentityServiceClient
}
// HTTP implementation (fallback)
type httpIdentityClient struct {
baseURL string
httpClient *http.Client
}
```
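To make the pattern concrete, the gRPC client's method body might look roughly like the following. This is a sketch only: `pb.GetUserRequest`, the getter names, and the field mapping are assumptions about protoc-generated code, not code from the repository.

```go
// GetUser implements IdentityServiceClient over gRPC. The pb types are
// assumed to come from protoc-generated code; User is the domain type
// exposed in pkg/services.
func (c *grpcIdentityClient) GetUser(ctx context.Context, id string) (*User, error) {
	resp, err := c.client.GetUser(ctx, &pb.GetUserRequest{Id: id})
	if err != nil {
		return nil, err // resilience (retries, circuit breaking) wraps this call
	}
	// Map the wire representation to the domain type used by callers.
	return &User{ID: resp.GetId(), Email: resp.GetEmail()}, nil
}
```

Callers depend only on `IdentityServiceClient`, so swapping the gRPC implementation for the HTTP fallback requires no changes at the call site.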
## Development Mode
For local development, multiple services can run in the same process, but they still communicate via service clients (gRPC or HTTP) - no direct in-process calls. This ensures the architecture is consistent.
## Consequences
### Positive
- **Unified Interface**: Consistent interface across all services
- **Easy Testing**: Can mock service clients
- **Type Safety**: gRPC provides type-safe contracts
- **Clear Boundaries**: Service boundaries are explicit
- **Scalability**: Services can be scaled independently
### Negative
- **Network Overhead**: All calls go over network
- **Interface Evolution**: Changes require coordination
- **Versioning**: Need service versioning strategy
- **Development Complexity**: More setup required for local development
## Implementation
- All services use gRPC clients (primary)
- HTTP clients as fallback option
- Service registry for service discovery
- Circuit breakers and retries for resilience


@@ -20,7 +20,7 @@ Each ADR follows this structure:
## ADR Index
### Phase 0: Project Setup & Foundation
### Epic 0: Project Setup & Foundation
- [ADR-0001: Go Module Path](./0001-go-module-path.md) - Module path: `git.dcentral.systems/toolz/goplt`
- [ADR-0002: Go Version](./0002-go-version.md) - Go 1.24.3
@@ -35,40 +35,45 @@ Each ADR follows this structure:
- [ADR-0011: Code Generation Tools](./0011-code-generation-tools.md) - go generate workflow
- [ADR-0012: Logger Interface Design](./0012-logger-interface-design.md) - Logger interface abstraction
### Phase 1: Core Kernel & Infrastructure
### Epic 1: Core Kernel & Infrastructure
- [ADR-0013: Database ORM Selection](./0013-database-orm.md) - entgo.io/ent
- [ADR-0014: Health Check Implementation](./0014-health-check-implementation.md) - Custom health check registry
- [ADR-0015: Error Bus Implementation](./0015-error-bus-implementation.md) - Channel-based error bus with pluggable sinks
- [ADR-0016: OpenTelemetry Observability Strategy](./0016-opentelemetry-observability.md) - OpenTelemetry for tracing, metrics, logs
### Phase 2: Authentication & Authorization
### Epic 2: Authentication & Authorization
- [ADR-0017: JWT Token Strategy](./0017-jwt-token-strategy.md) - Short-lived access tokens + long-lived refresh tokens
- [ADR-0018: Password Hashing Algorithm](./0018-password-hashing.md) - argon2id
- [ADR-0019: Permission DSL Format](./0019-permission-dsl-format.md) - String-based format with code generation
- [ADR-0020: Audit Logging Storage](./0020-audit-logging-storage.md) - PostgreSQL append-only table with JSONB metadata
### Phase 3: Module Framework
### Epic 3: Module Framework
- [ADR-0021: Module Loading Strategy](./0021-module-loading-strategy.md) - Static registration (primary) + dynamic plugin loading (optional)
### Phase 5: Infrastructure Adapters
### Epic 5: Infrastructure Adapters
- [ADR-0022: Cache Implementation](./0022-cache-implementation.md) - Redis with in-memory fallback
- [ADR-0023: Event Bus Implementation](./0023-event-bus-implementation.md) - In-process bus (default) + Kafka (production)
- [ADR-0024: Background Job Scheduler](./0024-job-scheduler.md) - asynq (Redis-backed) + cron
- [ADR-0025: Multi-tenancy Model](./0025-multitenancy-model.md) - Shared database with tenant_id column (optional)
### Phase 6: Observability & Production Readiness
### Epic 6: Observability & Production Readiness
- [ADR-0026: Error Reporting Service](./0026-error-reporting-service.md) - Sentry (optional, configurable)
- [ADR-0027: Rate Limiting Strategy](./0027-rate-limiting-strategy.md) - Multi-level (per-user + per-IP) with Redis
### Phase 7: Testing, Documentation & CI/CD
### Epic 7: Testing, Documentation & CI/CD
- [ADR-0028: Testing Strategy](./0028-testing-strategy.md) - Multi-layered (unit, integration, contract, load)
### Architecture & Scaling
- [ADR-0029: Microservices Architecture](./0029-microservices-architecture.md) - Microservices architecture from day one
- [ADR-0030: Service Communication Strategy](./0030-service-communication-strategy.md) - Service client abstraction and communication patterns
## Adding New ADRs
When making a new architectural decision:


@@ -0,0 +1,585 @@
# Module Architecture
This document details the architecture of modules, how they are structured, how they interact with the core platform, and how multiple modules work together.
## Table of Contents
- [Module Structure](#module-structure)
- [Module Interface](#module-interface)
- [Module Lifecycle](#module-lifecycle)
- [Module Dependencies](#module-dependencies)
- [Module Communication](#module-communication)
- [Module Data Isolation](#module-data-isolation)
- [Module Examples](#module-examples)
## Module Structure
Every module follows a consistent structure that separates concerns and enables clean integration with the platform.
```mermaid
graph TD
subgraph "Module Structure"
Manifest[module.yaml<br/>Manifest]
subgraph "Public API (pkg/)"
ModuleInterface[IModule Interface]
ModuleTypes[Public Types]
end
subgraph "Internal Implementation (internal/)"
API[API Handlers]
Service[Domain Services]
Repo[Repositories]
Domain[Domain Models]
end
subgraph "Database Schema"
EntSchema[Ent Schemas]
Migrations[Migrations]
end
end
Manifest --> ModuleInterface
ModuleInterface --> API
API --> Service
Service --> Repo
Repo --> Domain
Repo --> EntSchema
EntSchema --> Migrations
style Manifest fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style ModuleInterface fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Service fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Module Directory Structure
```
modules/blog/
├── go.mod # Module dependencies
├── module.yaml # Module manifest
├── pkg/
│ └── module.go # IModule implementation
├── internal/
│ ├── api/
│ │ └── handler.go # HTTP handlers
│ ├── domain/
│ │ ├── post.go # Domain entities
│ │ └── post_repo.go # Repository interface
│ ├── service/
│ │ └── post_service.go # Business logic
│ └── ent/
│ ├── schema/
│ │ └── post.go # Ent schema
│ └── migrate/ # Migrations
└── tests/
└── integration_test.go
```
## Module Interface
All modules must implement the `IModule` interface to integrate with the platform.
```mermaid
classDiagram
class IModule {
<<interface>>
+Name() string
+Version() string
+Dependencies() []string
+Init() fx.Option
+Migrations() []MigrationFunc
+OnStart(ctx) error
+OnStop(ctx) error
}
class BlogModule {
+Name() string
+Version() string
+Dependencies() []string
+Init() fx.Option
+Migrations() []MigrationFunc
}
class BillingModule {
+Name() string
+Version() string
+Dependencies() []string
+Init() fx.Option
+Migrations() []MigrationFunc
}
IModule <|.. BlogModule
IModule <|.. BillingModule
```
### IModule Interface
```go
type IModule interface {
// Name returns a unique, human-readable identifier
Name() string
// Version returns the module version (semantic versioning)
Version() string
// Dependencies returns list of required modules (e.g., ["core >= 1.0.0"])
Dependencies() []string
// Init returns fx.Option that registers all module services
Init() fx.Option
// Migrations returns database migration functions
Migrations() []func(*ent.Client) error
// OnStart is called during application startup (optional)
OnStart(ctx context.Context) error
// OnStop is called during graceful shutdown (optional)
OnStop(ctx context.Context) error
}
```
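For concreteness, a minimal module satisfying this interface might look like the sketch below. `NewPostService` and `NewPostRepository` are hypothetical constructors, not code from the repository.

```go
// BlogModule is an illustrative IModule implementation.
type BlogModule struct{}

func (m *BlogModule) Name() string           { return "blog" }
func (m *BlogModule) Version() string        { return "1.0.0" }
func (m *BlogModule) Dependencies() []string { return []string{"core >= 1.0.0"} }

// Init wires the module's services into the fx container.
func (m *BlogModule) Init() fx.Option {
	return fx.Module("blog",
		fx.Provide(NewPostService, NewPostRepository),
	)
}

// Migrations is empty in this sketch; a real module returns its Ent
// migration functions here.
func (m *BlogModule) Migrations() []func(*ent.Client) error { return nil }

func (m *BlogModule) OnStart(ctx context.Context) error { return nil }
func (m *BlogModule) OnStop(ctx context.Context) error  { return nil }
```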
## Module Lifecycle
Modules go through a well-defined lifecycle from discovery to shutdown.
```mermaid
stateDiagram-v2
[*] --> Discovered: Module found
Discovered --> Validated: Check dependencies
Validated --> Loaded: Load module
Loaded --> Initialized: Call Init()
Initialized --> Migrated: Run migrations
Migrated --> Started: Call OnStart()
Started --> Running: Module active
Running --> Stopping: Shutdown signal
Stopping --> Stopped: Call OnStop()
Stopped --> [*]
Validated --> Rejected: Dependency check fails
Rejected --> [*]
```
### Module Initialization Sequence
```mermaid
sequenceDiagram
participant Main
participant Loader
participant Registry
participant Module
participant DI
participant Router
participant DB
participant Scheduler
Main->>Loader: DiscoverModules()
Loader->>Registry: Scan for modules
Registry-->>Loader: Module list
loop For each module
Loader->>Module: Load module
Module->>Registry: Register module
Registry->>Registry: Validate dependencies
end
Main->>Registry: GetAllModules()
Registry->>Registry: Resolve dependencies (topological sort)
Registry-->>Main: Ordered module list
Main->>DI: Create fx container
loop For each module (in dependency order)
Main->>Module: Init()
Module->>DI: fx.Provide(services)
Module->>Router: Register routes
Module->>Scheduler: Register jobs
Module->>DB: Register migrations
end
Main->>DB: Run migrations (core first)
Main->>DI: Start container
Main->>Module: OnStart() (optional)
Main->>Router: Start HTTP server
```
## Module Dependencies
Modules can depend on other modules, creating a dependency graph that must be resolved.
```mermaid
graph TD
Core[Core Kernel]
Blog[Blog Module]
Billing[Billing Module]
Analytics[Analytics Module]
Notifications[Notification Module]
Blog --> Core
Billing --> Core
Analytics --> Core
Notifications --> Core
Analytics --> Blog
Analytics --> Billing
Billing --> Blog
Notifications --> Blog
Notifications --> Billing
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Blog fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Billing fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Dependency Resolution
```mermaid
graph LR
subgraph "Module Dependency Graph"
M1[Module A<br/>depends on: Core]
M2[Module B<br/>depends on: Core, Module A]
M3[Module C<br/>depends on: Core, Module B]
Core[Core Kernel]
end
subgraph "Resolved Load Order"
Step1[1. Core Kernel]
Step2[2. Module A]
Step3[3. Module B]
Step4[4. Module C]
end
Core --> M1
M1 --> M2
M2 --> M3
Step1 --> Step2
Step2 --> Step3
Step3 --> Step4
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Step1 fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
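The "resolved load order" step above is a topological sort. A sketch of it with Kahn's algorithm follows; this is illustrative, not the platform's actual loader code, and assumes the `errors` package is imported.

```go
// loadOrder resolves module load order with Kahn's algorithm.
// deps maps a module name to the names of modules it depends on.
func loadOrder(deps map[string][]string) ([]string, error) {
	indegree := map[string]int{}
	dependents := map[string][]string{}
	for m, ds := range deps {
		if _, ok := indegree[m]; !ok {
			indegree[m] = 0
		}
		for _, d := range ds {
			indegree[m]++
			dependents[d] = append(dependents[d], m)
			if _, ok := indegree[d]; !ok {
				indegree[d] = 0
			}
		}
	}
	var queue, order []string
	for m, n := range indegree {
		if n == 0 {
			queue = append(queue, m) // no dependencies: load first
		}
	}
	for len(queue) > 0 {
		m := queue[0]
		queue = queue[1:]
		order = append(order, m)
		for _, dep := range dependents[m] {
			if indegree[dep]--; indegree[dep] == 0 {
				queue = append(queue, dep)
			}
		}
	}
	if len(order) != len(indegree) {
		return nil, errors.New("dependency cycle detected among modules")
	}
	return order, nil
}
```

Applied to the graph above, `loadOrder` emits the core first, then Module A, Module B, and Module C, matching the resolved order in the diagram.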
## Module Communication
Modules (services) communicate through service client interfaces. All inter-service communication uses gRPC (primary) or HTTP (fallback).
### Communication Patterns
```mermaid
graph TB
subgraph "Communication Patterns"
ServiceClients[Service Clients<br/>gRPC/HTTP]
Events[Event Bus<br/>Kafka]
Shared[Shared Infrastructure<br/>Redis, PostgreSQL]
end
subgraph "Blog Service"
BlogService[Blog Service]
BlogHandler[Blog Handler]
end
subgraph "Service Clients"
AuthClient[Auth Service Client]
IdentityClient[Identity Service Client]
AuthzClient[Authz Service Client]
end
subgraph "Core Services"
EventBus[Event Bus]
AuthService[Auth Service<br/>:8081]
IdentityService[Identity Service<br/>:8082]
end
subgraph "Analytics Service"
AnalyticsService[Analytics Service]
end
BlogHandler --> BlogService
BlogService -->|gRPC| AuthClient
BlogService -->|gRPC| IdentityClient
BlogService -->|gRPC| AuthzClient
BlogService -->|Publish| EventBus
EventBus -->|Subscribe| AnalyticsService
AuthClient --> AuthService
IdentityClient --> IdentityService
AuthzClient --> IdentityService
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style BlogService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AnalyticsService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style ServiceClients fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Event-Driven Communication
```mermaid
sequenceDiagram
participant BlogModule
participant EventBus
participant AnalyticsModule
participant NotificationModule
participant AuditModule
BlogModule->>EventBus: Publish("blog.post.created", event)
EventBus->>AnalyticsModule: Deliver event
EventBus->>NotificationModule: Deliver event
EventBus->>AuditModule: Deliver event
AnalyticsModule->>AnalyticsModule: Track post creation
NotificationModule->>NotificationModule: Send notification
AuditModule->>AuditModule: Log audit entry
```
## Module Data Isolation
Modules can have their own database tables while sharing core tables.
```mermaid
erDiagram
USERS ||--o{ USER_ROLES : has
ROLES ||--o{ USER_ROLES : assigned_to
ROLES ||--o{ ROLE_PERMISSIONS : has
PERMISSIONS ||--o{ ROLE_PERMISSIONS : assigned_to
BLOG_POSTS {
string id PK
string author_id FK
string title
string content
timestamp created_at
}
BILLING_SUBSCRIPTIONS {
string id PK
string user_id FK
string plan
timestamp expires_at
}
USERS ||--o{ BLOG_POSTS : creates
USERS ||--o{ BILLING_SUBSCRIPTIONS : subscribes
AUDIT_LOGS {
string id PK
string actor_id
string action
string target_id
jsonb metadata
}
USERS ||--o{ AUDIT_LOGS : performs
```
### Multi-Tenancy Data Isolation
```mermaid
graph TB
subgraph "Single Database"
subgraph "Core Tables"
Users[users<br/>tenant_id]
Roles[roles<br/>tenant_id]
end
subgraph "Blog Module Tables"
Posts[blog_posts<br/>tenant_id]
Comments[blog_comments<br/>tenant_id]
end
subgraph "Billing Module Tables"
Subscriptions[billing_subscriptions<br/>tenant_id]
Invoices[billing_invoices<br/>tenant_id]
end
end
subgraph "Query Filtering"
EntInterceptor[Ent Interceptor]
TenantFilter[WHERE tenant_id = ?]
end
Users --> EntInterceptor
Posts --> EntInterceptor
Subscriptions --> EntInterceptor
EntInterceptor --> TenantFilter
style EntInterceptor fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
```
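The idea behind the interceptor can be sketched with a hand-rolled predicate helper; the context key, helper names, and SQL placeholder style below are illustrative assumptions (the platform would wire this into Ent's interceptor support), and `errors` and `fmt` are assumed imported.

```go
// tenantKey is a hypothetical context key carrying the current tenant ID.
type tenantKey struct{}

// TenantFromContext extracts the tenant set by upstream middleware.
func TenantFromContext(ctx context.Context) (string, bool) {
	id, ok := ctx.Value(tenantKey{}).(string)
	return id, ok
}

// scopeToTenant appends the tenant predicate that every module query must
// carry; placing this in one interceptor means repositories cannot forget it.
func scopeToTenant(ctx context.Context, query string, args []any) (string, []any, error) {
	tenant, ok := TenantFromContext(ctx)
	if !ok {
		return "", nil, errors.New("no tenant in request context")
	}
	query += fmt.Sprintf(" AND tenant_id = $%d", len(args)+1)
	return query, append(args, tenant), nil
}
```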
## Module Examples
### Example: Blog Module
```mermaid
graph TB
subgraph "Blog Module"
BlogHandler[Blog Handler<br/>/api/v1/blog/posts]
BlogService[Post Service]
PostRepo[Post Repository]
PostEntity[Post Entity]
end
subgraph "Service Clients"
AuthClient[Auth Service Client<br/>gRPC]
AuthzClient[Authz Service Client<br/>gRPC]
IdentityClient[Identity Service Client<br/>gRPC]
AuditClient[Audit Service Client<br/>gRPC]
end
subgraph "Core Services"
EventBus[Event Bus<br/>Kafka]
CacheService[Cache Service<br/>Redis]
end
subgraph "Database"
PostsTable[(blog_posts)]
end
BlogHandler --> BlogService
BlogService -->|gRPC| AuthClient
BlogService -->|gRPC| AuthzClient
BlogService -->|gRPC| IdentityClient
BlogService -->|gRPC| AuditClient
BlogService --> PostRepo
BlogService --> EventBus
BlogService --> CacheService
PostRepo --> PostsTable
PostRepo --> PostEntity
style BlogModule fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Module Integration Example
```mermaid
graph LR
subgraph "Request Flow"
Request[HTTP Request<br/>POST /api/v1/blog/posts]
Auth[Auth Middleware]
Authz[Authz Middleware]
Handler[Blog Handler]
Service[Blog Service]
Repo[Blog Repository]
DB[(Database)]
end
subgraph "Service Clients"
AuthClient[Auth Service Client]
IdentityClient[Identity Service Client]
AuthzClient[Authz Service Client]
end
subgraph "Side Effects"
EventBus[Event Bus]
AuditClient[Audit Service Client]
Cache[Cache]
end
Request --> Auth
Auth --> Authz
Authz --> Handler
Handler --> Service
Service --> Repo
Repo --> DB
Service -->|gRPC| AuthClient
Service -->|gRPC| IdentityClient
Service -->|gRPC| AuthzClient
Service -->|gRPC| AuditClient
Service --> EventBus
Service --> Cache
style Request fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Service fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style ServiceClients fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Module Registration Flow
```mermaid
flowchart TD
Start([Application Start]) --> LoadManifests["Load module.yaml files"]
LoadManifests --> ValidateDeps["Validate dependencies"]
ValidateDeps -->|Valid| SortModules["Topological sort modules"]
ValidateDeps -->|Invalid| Error([Error: Missing dependencies])
SortModules --> CreateDI["Create DI container"]
CreateDI --> RegisterCore["Register core services"]
RegisterCore --> LoopModules{"More modules?"}
LoopModules -->|Yes| LoadModule["Load module"]
LoadModule --> CallInit["Call module.Init()"]
CallInit --> RegisterServices["Register module services"]
RegisterServices --> RegisterRoutes["Register module routes"]
RegisterRoutes --> RegisterJobs["Register module jobs"]
RegisterJobs --> RegisterMigrations["Register module migrations"]
RegisterMigrations --> LoopModules
LoopModules -->|No| RunMigrations["Run all migrations"]
RunMigrations --> StartModules["Call OnStart() for each module"]
StartModules --> StartServer["Start HTTP server"]
StartServer --> Running([Application Running])
Running --> Shutdown([Shutdown Signal])
Shutdown --> StopServer["Stop HTTP server"]
StopServer --> StopModules["Call OnStop() for each module"]
StopModules --> Cleanup["Cleanup resources"]
Cleanup --> End([Application Stopped])
style Start fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Running fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Error fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Module Permissions Integration
Modules declare permissions that are automatically integrated into the permission system.
```mermaid
graph TB
subgraph "Permission Generation"
Manifest["module.yaml<br/>permissions: array"]
Generator["Permission Generator"]
GeneratedCode["pkg/perm/generated.go"]
end
subgraph "Permission Resolution"
Request["HTTP Request"]
AuthzMiddleware["Authz Middleware"]
PermissionResolver["Permission Resolver"]
UserRoles["User Roles"]
RolePermissions["Role Permissions"]
Response["HTTP Response"]
end
Manifest --> Generator
Generator --> GeneratedCode
GeneratedCode --> PermissionResolver
Request --> AuthzMiddleware
AuthzMiddleware --> PermissionResolver
PermissionResolver --> UserRoles
PermissionResolver --> RolePermissions
UserRoles --> PermissionResolver
RolePermissions --> PermissionResolver
PermissionResolver --> AuthzMiddleware
AuthzMiddleware --> Response
classDef generation fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
classDef resolution fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
class Manifest,Generator,GeneratedCode generation
class PermissionResolver resolution
```
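As an illustration, a `permissions:` list in a module's `module.yaml` might generate constants along these lines; the file header, package, and permission names are invented for the example.

```go
// Code generated from modules/blog/module.yaml. DO NOT EDIT. (illustrative)
package perm

const (
	BlogPostCreate = "blog.post.create"
	BlogPostRead   = "blog.post.read"
	BlogPostUpdate = "blog.post.update"
	BlogPostDelete = "blog.post.delete"
)
```

Handlers then reference typed constants instead of raw strings, so a typo in a permission name fails at compile time rather than at authorization time.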
## Next Steps
- [Module Requirements](./module-requirements.md) - Detailed requirements for each module
- [Component Relationships](./component-relationships.md) - How components interact
- [System Architecture](./architecture.md) - Overall system architecture


@@ -0,0 +1,753 @@
# System Architecture
This document provides a comprehensive overview of the Go Platform architecture, including system components, their relationships, and how modules integrate with the core platform.
## Table of Contents
- [High-Level Architecture](#high-level-architecture)
- [Layered Architecture](#layered-architecture)
- [Module System Architecture](#module-system-architecture)
- [Component Relationships](#component-relationships)
- [Data Flow](#data-flow)
- [Deployment Architecture](#deployment-architecture)
## High-Level Architecture
The Go Platform follows a **microservices architecture** where each module is an independent service:
- **Core Services**: Authentication, Identity, Authorization, Audit, etc.
- **Feature Services**: Blog, Billing, Analytics, etc. (modules)
- **Infrastructure Services**: Cache, Event Bus, Scheduler, etc.
All services communicate via gRPC (primary) or HTTP (fallback), with service discovery via a service registry. Services share infrastructure (PostgreSQL, Redis, Kafka) but are independently deployable and scalable.
```mermaid
graph TB
subgraph "Go Platform"
Core[Core Kernel]
Module1[Module 1<br/>Blog]
Module2[Module 2<br/>Billing]
Module3[Module N<br/>Custom]
end
subgraph "Infrastructure"
DB[(PostgreSQL)]
Cache[(Redis)]
Queue[Kafka/Event Bus]
Storage[S3/Blob Storage]
end
subgraph "External Services"
OIDC[OIDC Provider]
Email[Email Service]
Sentry[Sentry]
end
Core --> DB
Core --> Cache
Core --> Queue
Core --> Storage
Core --> OIDC
Core --> Email
Core --> Sentry
Module1 --> Core
Module2 --> Core
Module3 --> Core
Module1 --> DB
Module2 --> DB
Module3 --> DB
Module1 --> Queue
Module2 --> Queue
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Module1 fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Module2 fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Module3 fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Layered Architecture
The platform follows a **clean/hexagonal architecture** with clear separation of concerns across layers.
```mermaid
graph TD
subgraph "Presentation Layer"
HTTP[HTTP/REST API]
GraphQL[GraphQL API]
CLI[CLI Interface]
end
subgraph "Application Layer"
AuthMiddleware[Auth Middleware]
AuthzMiddleware[Authorization Middleware]
RateLimit[Rate Limiting]
Handlers[Request Handlers]
end
subgraph "Domain Layer"
Services[Domain Services]
Entities[Domain Entities]
Policies[Business Policies]
end
subgraph "Infrastructure Layer"
Repos[Repositories]
CacheAdapter[Cache Adapter]
EventBus[Event Bus]
Jobs[Scheduler/Jobs]
end
subgraph "Core Kernel"
DI[DI Container]
Config[Config Manager]
Logger[Logger]
Metrics[Metrics]
Health[Health Checks]
end
HTTP --> AuthMiddleware
GraphQL --> AuthMiddleware
CLI --> AuthMiddleware
AuthMiddleware --> AuthzMiddleware
AuthzMiddleware --> RateLimit
RateLimit --> Handlers
Handlers --> Services
Services --> Entities
Services --> Policies
Services --> Repos
Services --> CacheAdapter
Services --> EventBus
Services --> Jobs
Repos --> DB[(Database)]
CacheAdapter --> Cache[(Redis)]
EventBus --> Queue[(Kafka)]
Services --> DI
Repos --> DI
Handlers --> DI
DI --> Config
DI --> Logger
DI --> Metrics
DI --> Health
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Services fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Repos fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Module System Architecture
Modules are the building blocks of the platform. Each module can register services, routes, permissions, and background jobs.
```mermaid
graph TB
subgraph Lifecycle["Module Lifecycle"]
Discover["1. Discover Modules"]
Load["2. Load Module"]
Validate["3. Validate Dependencies"]
Init["4. Initialize Module"]
Start["5. Start Module"]
end
subgraph Registration["Module Registration"]
Static["Static Registration<br/>via init()"]
Dynamic["Dynamic Loading<br/>via .so files"]
end
subgraph Components["Module Components"]
Routes["HTTP Routes"]
Services["Services"]
Repos["Repositories"]
Perms["Permissions"]
Jobs["Background Jobs"]
Migrations["Database Migrations"]
end
Discover --> Load
Load --> Static
Load --> Dynamic
Static --> Validate
Dynamic --> Validate
Validate --> Init
Init --> Routes
Init --> Services
Init --> Repos
Init --> Perms
Init --> Jobs
Init --> Migrations
Routes --> Start
Services --> Start
Repos --> Start
Perms --> Start
Jobs --> Start
Migrations --> Start
classDef lifecycle fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
classDef registration fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
classDef components fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
classDef start fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
class Discover,Load,Validate,Init lifecycle
class Static,Dynamic registration
class Routes,Services,Repos,Perms,Jobs,Migrations components
class Start start
```
### Module Initialization Sequence
```mermaid
sequenceDiagram
participant Main
participant Loader
participant Registry
participant Module
participant DI
participant Router
participant DB
Main->>Loader: LoadModules()
Loader->>Registry: Discover modules
Registry-->>Loader: List of modules
loop For each module
Loader->>Module: Load module
Module->>Registry: Register(module)
Registry->>Registry: Validate dependencies
end
Main->>Registry: GetAllModules()
Registry-->>Main: Ordered module list
Main->>DI: Create container
loop For each module
Main->>Module: Init()
Module->>DI: Provide services
Module->>Router: Register routes
Module->>DB: Register migrations
end
Main->>DB: Run migrations
Main->>Router: Start HTTP server
```
## Component Relationships
This diagram shows how core components interact with each other and with modules.
```mermaid
graph TB
subgraph "Core Kernel Components"
ConfigMgr[Config Manager]
LoggerService[Logger Service]
DI[DI Container]
ModuleLoader[Module Loader]
HealthRegistry[Health Registry]
MetricsRegistry[Metrics Registry]
ErrorBus[Error Bus]
EventBus[Event Bus]
end
subgraph "Security Components"
AuthService[Auth Service]
AuthzService[Authorization Service]
TokenProvider[Token Provider]
PermissionResolver[Permission Resolver]
AuditService[Audit Service]
end
subgraph "Infrastructure Components"
DBClient[Database Client]
CacheClient[Cache Client]
Scheduler[Scheduler]
Notifier[Notifier]
end
subgraph "Module Components"
ModuleRoutes[Module Routes]
ModuleServices[Module Services]
ModuleRepos[Module Repositories]
end
DI --> ConfigMgr
DI --> LoggerService
DI --> ModuleLoader
DI --> HealthRegistry
DI --> MetricsRegistry
DI --> ErrorBus
DI --> EventBus
DI --> AuthService
DI --> AuthzService
DI --> DBClient
DI --> CacheClient
DI --> Scheduler
DI --> Notifier
AuthService --> TokenProvider
AuthzService --> PermissionResolver
AuthzService --> AuditService
ModuleServices --> DBClient
ModuleServices --> CacheClient
ModuleServices --> EventBus
ModuleServices --> AuthzService
ModuleRepos --> DBClient
ModuleRoutes --> AuthzService
Scheduler --> CacheClient
Notifier --> EventBus
ErrorBus --> LoggerService
ErrorBus --> Sentry
style DI fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style AuthService fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style ModuleServices fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Data Flow
### Request Processing Flow
```mermaid
sequenceDiagram
participant Client
participant Router
participant AuthMW[Auth Middleware]
participant AuthzMW[Authz Middleware]
participant RateLimit[Rate Limiter]
participant Handler
participant Service
participant Repo
participant DB
participant Cache
participant EventBus
participant Audit
Client->>Router: HTTP Request
Router->>AuthMW: Extract JWT
AuthMW->>AuthMW: Validate token
AuthMW->>Router: Add user to context
Router->>AuthzMW: Check permissions
AuthzMW->>AuthzMW: Resolve permissions
AuthzMW->>Router: Authorized
Router->>RateLimit: Check rate limits
RateLimit->>Cache: Get rate limit state
Cache-->>RateLimit: Rate limit status
RateLimit->>Router: Within limits
Router->>Handler: Process request
Handler->>Service: Business logic
Service->>Cache: Check cache
Cache-->>Service: Cache miss
Service->>Repo: Query data
Repo->>DB: Execute query
DB-->>Repo: Return data
Repo-->>Service: Domain entity
Service->>Cache: Store in cache
Service->>EventBus: Publish event
Service->>Audit: Record action
Service-->>Handler: Response data
Handler-->>Router: HTTP response
Router-->>Client: JSON response
```
### Module Event Flow
```mermaid
graph LR
subgraph "Module A"
AService[Service A]
AHandler[Handler A]
end
subgraph "Event Bus"
Bus[Event Bus]
end
subgraph "Module B"
BService[Service B]
BHandler[Handler B]
end
subgraph "Module C"
CService[Service C]
end
AHandler --> AService
AService -->|Publish Event| Bus
Bus -->|Subscribe| BService
Bus -->|Subscribe| CService
BService --> BHandler
CService --> CService
```
## Deployment Architecture
### Development Deployment
```mermaid
graph TB
subgraph "Developer Machine"
IDE[IDE/Editor]
Go[Go Runtime]
Docker[Docker]
end
subgraph "Local Services"
App[Platform App<br/>:8080]
DB[(PostgreSQL<br/>:5432)]
Redis[(Redis<br/>:6379)]
Kafka[Kafka<br/>:9092]
end
IDE --> Go
Go --> App
App --> DB
App --> Redis
App --> Kafka
Docker --> DB
Docker --> Redis
Docker --> Kafka
```
### Production Deployment
```mermaid
graph TB
subgraph "Load Balancer"
LB[Load Balancer<br/>HTTPS]
end
subgraph "Platform Instances"
App1[Platform Instance 1]
App2[Platform Instance 2]
App3[Platform Instance N]
end
subgraph "Database Cluster"
Primary[(PostgreSQL<br/>Primary)]
Replica[(PostgreSQL<br/>Replica)]
end
subgraph "Cache Cluster"
Redis1[(Redis<br/>Master)]
Redis2[(Redis<br/>Replica)]
end
subgraph "Message Queue"
Kafka1[Kafka Broker 1]
Kafka2[Kafka Broker 2]
Kafka3[Kafka Broker 3]
end
subgraph "Observability"
Prometheus[Prometheus]
Grafana[Grafana]
Jaeger[Jaeger]
Loki[Loki]
end
subgraph "External Services"
Sentry[Sentry]
S3[S3 Storage]
end
LB --> App1
LB --> App2
LB --> App3
App1 --> Primary
App2 --> Primary
App3 --> Primary
App1 --> Replica
App2 --> Replica
App3 --> Replica
App1 --> Redis1
App2 --> Redis1
App3 --> Redis1
App1 --> Kafka1
App2 --> Kafka2
App3 --> Kafka3
App1 --> Prometheus
App2 --> Prometheus
App3 --> Prometheus
Prometheus --> Grafana
App1 --> Jaeger
App2 --> Jaeger
App3 --> Jaeger
App1 --> Loki
App2 --> Loki
App3 --> Loki
App1 --> Sentry
App2 --> Sentry
App3 --> Sentry
App1 --> S3
App2 --> S3
App3 --> S3
style LB fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Primary fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Redis1 fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Core Kernel Components
The core kernel provides the foundation for all modules. Each component has specific responsibilities:
### Component Responsibilities
```mermaid
mindmap
root((Core Kernel))
Configuration
Load configs
Environment vars
Secret management
Dependency Injection
Service registration
Lifecycle management
Module wiring
Logging
Structured logs
Request correlation
Log levels
Observability
Metrics
Tracing
Health checks
Security
Authentication
Authorization
Audit logging
Module System
Module discovery
Module loading
Dependency resolution
```
## Module Integration Points
Modules integrate with the core through well-defined interfaces:
```mermaid
graph TB
subgraph "Core Kernel Interfaces"
IConfig[ConfigProvider]
ILogger[Logger]
IAuth[Authenticator]
IAuthz[Authorizer]
IEventBus[EventBus]
ICache[Cache]
IBlobStore[BlobStore]
IScheduler[Scheduler]
INotifier[Notifier]
end
subgraph "Module Implementation"
Module[Feature Module]
ModuleServices[Module Services]
ModuleRoutes[Module Routes]
end
Module --> IConfig
Module --> ILogger
ModuleServices --> IAuth
ModuleServices --> IAuthz
ModuleServices --> IEventBus
ModuleServices --> ICache
ModuleServices --> IBlobStore
ModuleServices --> IScheduler
ModuleServices --> INotifier
ModuleRoutes --> IAuthz
style IConfig fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Module fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Microservices Architecture
The platform is designed as **microservices from day one**, with each module being an independent service.
### Service Architecture
```mermaid
graph TB
subgraph "API Gateway"
Gateway[API Gateway<br/>Routing & Auth]
end
subgraph "Core Services"
AuthSvc[Auth Service<br/>:8081]
IdentitySvc[Identity Service<br/>:8082]
AuthzSvc[Authz Service<br/>:8083]
AuditSvc[Audit Service<br/>:8084]
end
subgraph "Feature Services"
BlogSvc[Blog Service<br/>:8091]
BillingSvc[Billing Service<br/>:8092]
AnalyticsSvc[Analytics Service<br/>:8093]
end
subgraph "Infrastructure"
DB[(PostgreSQL)]
Cache[(Redis)]
Queue[Kafka]
Registry[Service Registry]
end
Gateway --> AuthSvc
Gateway --> IdentitySvc
Gateway --> BlogSvc
Gateway --> BillingSvc
AuthSvc --> IdentitySvc
AuthSvc --> Registry
BlogSvc --> AuthzSvc
BlogSvc --> IdentitySvc
BlogSvc --> Registry
BillingSvc --> IdentitySvc
BillingSvc --> Registry
AuthSvc --> DB
IdentitySvc --> DB
BlogSvc --> DB
BillingSvc --> DB
AuthSvc --> Cache
BlogSvc --> Cache
BillingSvc --> Cache
BlogSvc --> Queue
BillingSvc --> Queue
AnalyticsSvc --> Queue
style Gateway fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Registry fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Service Communication
All inter-service communication uses service client interfaces:
```mermaid
graph TB
subgraph "Service Client Interface"
Interface[Service Interface<br/>pkg/services/]
end
subgraph "Implementations"
GRPC[gRPC Client<br/>Primary]
HTTP[HTTP Client<br/>Fallback]
end
subgraph "Service Registry"
Registry[Service Registry<br/>Discovery & Resolution]
end
Interface --> GRPC
Interface --> HTTP
Registry --> GRPC
Registry --> HTTP
style Interface fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Registry fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Service Communication Patterns
The platform uses three communication patterns:
1. **Synchronous Service Calls** (via Service Clients):
- gRPC calls (primary) - type-safe, efficient
- HTTP/REST calls (fallback) - for external integration
- All calls go through service client interfaces
- Service discovery via registry
2. **Asynchronous Events** (via Event Bus):
- Distributed via Kafka
- Preferred for cross-service communication
- Event-driven architecture for loose coupling
3. **Shared Infrastructure** (For state):
- Redis for cache and distributed state
- PostgreSQL for persistent data
- Kafka for events
### Service Registry
The service registry enables service discovery and resolution:
```mermaid
graph TB
subgraph "Service Registry"
Registry[Service Registry Interface]
Consul[Consul Registry]
K8s[K8s Service Discovery]
Etcd[etcd Registry]
end
subgraph "Services"
AuthSvc[Auth Service]
IdentitySvc[Identity Service]
BlogSvc[Blog Service]
end
Registry --> Consul
Registry --> K8s
Registry --> Etcd
Consul --> AuthSvc
K8s --> IdentitySvc
Etcd --> BlogSvc
style Registry fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
```
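The registry interface itself can be sketched as follows; method names and signatures are illustrative, and concrete implementations would wrap Consul, etcd, or Kubernetes endpoints.

```go
// ServiceRegistry is a sketch of the discovery interface.
type ServiceRegistry interface {
	// Register announces a service instance and begins health reporting.
	Register(ctx context.Context, name, addr string) error
	// Deregister removes the instance, e.g. on graceful shutdown.
	Deregister(ctx context.Context, name, addr string) error
	// Resolve returns the currently healthy addresses for a service name.
	Resolve(ctx context.Context, name string) ([]string, error)
}
```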
### Scaling Strategy
#### Independent Service Scaling
- Scale individual services based on load
- Independent resource allocation
- Independent deployment
- Better resource utilization
- Team autonomy
#### Development Mode
- For local development, multiple services can run in the same process
- Services still communicate via gRPC/HTTP (no direct calls)
- Docker Compose for easy local setup
- Maintains microservices architecture even in development
## Next Steps
- [Module Architecture](./architecture-modules.md) - Detailed module architecture and design
- [Module Requirements](./module-requirements.md) - Requirements for each module
- [Component Relationships](./component-relationships.md) - Detailed component interactions
- [ADRs](../adr/README.md) - Architecture Decision Records
- [ADR-0029: Microservices Architecture](../adr/0029-microservices-architecture.md) - Microservices strategy
- [ADR-0030: Service Communication](../adr/0030-service-communication-strategy.md) - Communication patterns


@@ -0,0 +1,23 @@
/* Full width content */
.md-content {
max-width: 100% !important;
}
.md-main__inner {
max-width: 100% !important;
}
.md-grid {
max-width: 100% !important;
}
/* Ensure content area uses full width while keeping readable line length */
.md-content__inner {
max-width: 100%;
}
/* Adjust container padding for better full-width experience */
.md-container {
max-width: 100%;
}


@@ -0,0 +1,481 @@
# Component Relationships
This document details how different components of the Go Platform interact with each other, including dependency relationships, data flow, and integration patterns.
## Table of Contents
- [Core Component Dependencies](#core-component-dependencies)
- [Module to Core Integration](#module-to-core-integration)
- [Service Interaction Patterns](#service-interaction-patterns)
- [Data Flow Patterns](#data-flow-patterns)
- [Dependency Graph](#dependency-graph)
## Core Component Dependencies
The core kernel components have well-defined dependencies that form the foundation of the platform.
```mermaid
graph TD
subgraph "Foundation Layer"
Config[Config Manager]
Logger[Logger Service]
end
subgraph "DI Layer"
DI[DI Container]
end
subgraph "Infrastructure Layer"
DB[Database Client]
Cache[Cache Client]
EventBus[Event Bus]
Scheduler[Scheduler]
end
subgraph "Security Layer"
Auth[Auth Service]
Authz[Authz Service]
Audit[Audit Service]
end
subgraph "Observability Layer"
Metrics[Metrics Registry]
Health[Health Registry]
Tracer[OpenTelemetry Tracer]
end
Config --> Logger
Config --> DI
Logger --> DI
DI --> DB
DI --> Cache
DI --> EventBus
DI --> Scheduler
DI --> Auth
DI --> Authz
DI --> Audit
DI --> Metrics
DI --> Health
DI --> Tracer
Auth --> DB
Authz --> DB
Authz --> Cache
Audit --> DB
DB --> Tracer
Cache --> Tracer
EventBus --> Tracer
style Config fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style DI fill:#50c878,stroke:#2e7d4e,stroke-width:3px,color:#fff
style Auth fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Module to Core Integration
Modules (services) integrate with core services through service client interfaces. All communication uses gRPC or HTTP.
```mermaid
graph LR
subgraph "Feature Service (e.g., Blog)"
ModuleHandler[Module Handler]
ModuleService[Module Service]
ModuleRepo[Module Repository]
end
subgraph "Service Clients"
AuthClient[Auth Service Client]
AuthzClient[Authz Service Client]
IdentityClient[Identity Service Client]
AuditClient[Audit Service Client]
end
subgraph "Core Services"
AuthService[Auth Service<br/>:8081]
AuthzService[Authz Service<br/>:8083]
IdentityService[Identity Service<br/>:8082]
AuditService[Audit Service<br/>:8084]
EventBusService[Event Bus<br/>Kafka]
CacheService[Cache Service<br/>Redis]
end
subgraph "Infrastructure"
DBClient[Database Client]
CacheClient[Cache Client]
QueueClient[Message Queue]
end
ModuleHandler --> ModuleService
ModuleService -->|gRPC| AuthClient
ModuleService -->|gRPC| AuthzClient
ModuleService -->|gRPC| IdentityClient
ModuleService -->|gRPC| AuditClient
ModuleService --> ModuleRepo
ModuleService --> EventBusService
ModuleService --> CacheService
AuthClient --> AuthService
AuthzClient --> AuthzService
IdentityClient --> IdentityService
AuditClient --> AuditService
ModuleRepo --> DBClient
CacheService --> CacheClient
EventBusService --> QueueClient
style ModuleService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AuthService fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style DBClient fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style ServiceClients fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Service Interaction Patterns
### Authentication Flow
```mermaid
sequenceDiagram
participant Client
participant Router
participant AuthMiddleware
participant AuthService
participant TokenProvider
participant UserRepo
participant DB
Client->>Router: POST /api/v1/auth/login
Router->>AuthMiddleware: Extract credentials
AuthMiddleware->>AuthService: Authenticate(email, password)
AuthService->>UserRepo: FindByEmail(email)
UserRepo->>DB: Query user
DB-->>UserRepo: User data
UserRepo-->>AuthService: User entity
AuthService->>AuthService: Verify password
AuthService->>TokenProvider: GenerateAccessToken(user)
AuthService->>TokenProvider: GenerateRefreshToken(user)
TokenProvider-->>AuthService: Tokens
AuthService->>DB: Store refresh token
AuthService-->>AuthMiddleware: Auth response
AuthMiddleware-->>Router: Tokens
Router-->>Client: JSON response with tokens
```
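A minimal sketch of the access-token step, assuming `github.com/golang-jwt/jwt/v5` and HMAC signing; the real TokenProvider, claim layout, and signing algorithm are specified by ADR-0017, not here.

```go
import (
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// GenerateAccessToken issues a short-lived token. Secret handling and the
// claim set are illustrative assumptions.
func GenerateAccessToken(userID string, secret []byte) (string, error) {
	claims := jwt.MapClaims{
		"sub": userID,
		"iat": time.Now().Unix(),
		"exp": time.Now().Add(15 * time.Minute).Unix(),
	}
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
	return token.SignedString(secret)
}
```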
### Authorization Flow
```mermaid
sequenceDiagram
participant Request
participant AuthzMiddleware
participant Authorizer
participant PermissionResolver
participant Cache
participant UserRepo
participant RoleRepo
participant DB
Request->>AuthzMiddleware: HTTP request + permission
AuthzMiddleware->>Authorizer: Authorize(ctx, permission)
Authorizer->>Authorizer: Extract user from context
Authorizer->>PermissionResolver: HasPermission(user, permission)
PermissionResolver->>Cache: Check cache
Cache-->>PermissionResolver: Cache miss
PermissionResolver->>UserRepo: GetUserRoles(userID)
UserRepo->>DB: Query user_roles
DB-->>UserRepo: Role IDs
UserRepo-->>PermissionResolver: Roles
PermissionResolver->>RoleRepo: GetRolePermissions(roleIDs)
RoleRepo->>DB: Query role_permissions
DB-->>RoleRepo: Permissions
RoleRepo-->>PermissionResolver: Permission list
PermissionResolver->>PermissionResolver: Check if permission in list
PermissionResolver->>Cache: Store in cache
PermissionResolver-->>Authorizer: Has permission: true/false
Authorizer-->>AuthzMiddleware: Authorized or error
AuthzMiddleware-->>Request: Continue or 403
```
### Event Publishing Flow
```mermaid
sequenceDiagram
participant ModuleService
participant EventBus
participant Kafka
participant Subscriber1
participant Subscriber2
ModuleService->>EventBus: Publish(topic, event)
EventBus->>EventBus: Serialize event
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
Kafka->>Subscriber1: Deliver event
Kafka->>Subscriber2: Deliver event
Subscriber1->>Subscriber1: Process event
Subscriber2->>Subscriber2: Process event
```
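A sketch of the publish step, assuming `github.com/segmentio/kafka-go` as the client library; the platform only mandates Kafka, not a particular Go client.

```go
import (
	"context"
	"encoding/json"

	"github.com/segmentio/kafka-go"
)

// Publish serializes the event and sends it to the writer's topic.
func Publish(ctx context.Context, w *kafka.Writer, event any) error {
	payload, err := json.Marshal(event)
	if err != nil {
		return err
	}
	return w.WriteMessages(ctx, kafka.Message{Value: payload})
}
```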
## Data Flow Patterns
### Request to Response Flow
```mermaid
graph LR
Client[Client] -->|HTTP Request| LB[Load Balancer]
LB -->|Route| Server1[Instance 1]
LB -->|Route| Server2[Instance 2]
Server1 --> AuthMW[Auth Middleware]
Server1 --> AuthzMW[Authz Middleware]
Server1 --> RateLimit[Rate Limiter]
Server1 --> Handler[Request Handler]
Server1 --> Service[Domain Service]
Server1 --> Cache[Cache Check]
Server1 --> Repo[Repository]
Server1 --> DB[(Database)]
Service --> EventBus[Event Bus]
Service --> Audit[Audit Log]
Handler -->|Response| Server1
Server1 -->|HTTP Response| LB
LB -->|Response| Client
style Server1 fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Service fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Caching Flow
```mermaid
graph TD
Request[Service Request] --> CacheCheck{Cache Hit?}
CacheCheck -->|Yes| CacheGet[Get from Cache]
CacheCheck -->|No| DBQuery[Query Database]
DBQuery --> DBResponse[Database Response]
DBResponse --> CacheStore[Store in Cache]
CacheStore --> Return[Return Data]
CacheGet --> Return
style CacheCheck fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style CacheGet fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style DBQuery fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Dependency Graph
Complete dependency graph showing all components and their relationships.
```mermaid
graph TB
subgraph "Application Entry"
Main[Main Application]
end
subgraph "Core Kernel"
Config[Config]
Logger[Logger]
DI[DI Container]
ModuleLoader[Module Loader]
end
subgraph "Security"
Auth[Auth]
Authz[Authz]
Identity[Identity]
Audit[Audit]
end
subgraph "Infrastructure"
DB[Database]
Cache[Cache]
EventBus[Event Bus]
Scheduler[Scheduler]
BlobStore[Blob Store]
Notifier[Notifier]
end
subgraph "Observability"
Metrics[Metrics]
Health[Health]
Tracer[Tracer]
ErrorBus[Error Bus]
end
subgraph "Module"
ModuleHandler[Module Handler]
ModuleService[Module Service]
ModuleRepo[Module Repo]
end
Main --> Config
Main --> Logger
Main --> DI
Main --> ModuleLoader
Config --> Logger
Config --> DI
DI --> Auth
DI --> Authz
DI --> Identity
DI --> Audit
DI --> DB
DI --> Cache
DI --> EventBus
DI --> Scheduler
DI --> BlobStore
DI --> Notifier
DI --> Metrics
DI --> Health
DI --> Tracer
DI --> ErrorBus
Auth --> Identity
Auth --> DB
Authz --> Identity
Authz --> Cache
Authz --> Audit
Audit --> DB
Audit --> Logger
ModuleLoader --> DI
ModuleHandler --> ModuleService
ModuleService --> ModuleRepo
ModuleService -->|gRPC| Auth
ModuleService -->|gRPC| Authz
ModuleService -->|gRPC| Identity
ModuleService -->|gRPC| Audit
ModuleService --> EventBus
ModuleService --> Cache
ModuleRepo --> DB
Scheduler --> Cache
Notifier --> EventBus
ErrorBus --> Logger
ErrorBus --> Sentry
DB --> Tracer
Cache --> Tracer
EventBus --> Tracer
style Main fill:#4a90e2,stroke:#2e5c8a,stroke-width:4px,color:#fff
style DI fill:#50c878,stroke:#2e7d4e,stroke-width:3px,color:#fff
style ModuleService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Component Interaction Matrix
| Component | Depends On | Used By |
|-----------|-----------|---------|
| Config | None | All components |
| Logger | Config | All components |
| DI Container | Config, Logger | All components |
| Auth Service | Identity, DB | Auth Middleware, Modules |
| Authz Service | Identity, Cache, Audit | Authz Middleware, Modules |
| Identity Service | DB, Cache, Notifier | Auth, Authz, Modules |
| Database Client | Config, Logger, Tracer | All repositories |
| Cache Client | Config, Logger | Authz, Scheduler, Modules |
| Event Bus | Config, Logger, Tracer | Modules, Notifier |
| Scheduler | Cache, Logger | Modules |
| Error Bus | Logger | All components (via panic recovery) |
## Integration Patterns
### Module Service Integration
```mermaid
graph TB
subgraph "Module Layer"
Handler[HTTP Handler]
Service[Domain Service]
Repo[Repository]
end
subgraph "Core Services"
Auth[Auth Service]
Authz[Authz Service]
EventBus[Event Bus]
Cache[Cache]
Audit[Audit]
end
subgraph "Infrastructure"
DB[(Database)]
Redis[(Redis)]
Kafka[Kafka]
end
Handler --> Auth
Handler --> Authz
Handler --> Service
Service --> Repo
Service --> EventBus
Service --> Cache
Service --> Audit
Repo --> DB
Cache --> Redis
EventBus --> Kafka
Audit --> DB
style Service fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Auth fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style DB fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Cross-Service Communication
```mermaid
graph LR
subgraph "Blog Service"
BlogService[Blog Service]
end
subgraph "Analytics Service"
AnalyticsService[Analytics Service]
end
subgraph "Service Clients"
AuthzClient[Authz Service Client]
IdentityClient[Identity Service Client]
end
subgraph "Core Services"
EventBus[Event Bus<br/>Kafka]
AuthzService[Authz Service<br/>:8083]
IdentityService[Identity Service<br/>:8082]
Cache[Cache<br/>Redis]
end
BlogService -->|gRPC| AuthzClient
BlogService -->|gRPC| IdentityClient
BlogService -->|Publish Event| EventBus
EventBus -->|Subscribe| AnalyticsService
BlogService -->|Cache Access| Cache
AnalyticsService -->|Cache Access| Cache
AuthzClient --> AuthzService
IdentityClient --> IdentityService
style BlogService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AnalyticsService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style ServiceClients fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Next Steps
- [System Architecture](./architecture.md) - Overall system architecture
- [Module Architecture](./architecture-modules.md) - Module design and integration
- [Module Requirements](./module-requirements.md) - Detailed module requirements

74
docs/content/index.md Normal file
View File

@@ -0,0 +1,74 @@
# Go Platform Documentation
Welcome to the Go Platform documentation! This is a plugin-friendly SaaS/Enterprise platform built with Go.
## What is Go Platform?
Go Platform is a modular, extensible platform designed to support multiple business domains through a plugin architecture. It provides:
- **Core Kernel**: Foundation services including authentication, authorization, configuration, logging, and observability
- **Module Framework**: Plugin system for extending functionality
- **Infrastructure Adapters**: Support for databases, caching, event buses, and job scheduling
- **Security-by-Design**: Built-in JWT authentication, RBAC/ABAC authorization, and audit logging
- **Observability**: OpenTelemetry integration for tracing, metrics, and logging
## Documentation Structure
### 📋 Overview
- **[Requirements](requirements.md)**: High-level architectural principles and requirements
- **[Implementation Plan](plan.md)**: Epic-based implementation plan with timelines
- **[Playbook](playbook.md)**: Detailed implementation guide and best practices
### 🏛️ Architecture
- **[Architecture Overview](architecture.md)**: System architecture with diagrams
- **[Module Architecture](architecture-modules.md)**: Module system design and integration
- **[Module Requirements](module-requirements.md)**: Detailed requirements for each module
- **[Component Relationships](component-relationships.md)**: Component interactions and dependencies
### 🏗️ Architecture Decision Records (ADRs)
All architectural decisions are documented in [ADR records](adr/README.md), organized by implementation epic:
- **Epic 0**: Project Setup & Foundation
- **Epic 1**: Core Kernel & Infrastructure
- **Epic 2**: Authentication & Authorization
- **Epic 3**: Module Framework
- **Epic 5**: Infrastructure Adapters
- **Epic 6**: Observability & Production Readiness
- **Epic 7**: Testing, Documentation & CI/CD
### 📝 Implementation Tasks
Detailed task definitions for each epic are available in the [Stories section](stories/README.md):
- Epic 0: Project Setup & Foundation
- Epic 1: Core Kernel & Infrastructure
- Epic 2: Authentication & Authorization
- Epic 3: Module Framework
- Epic 4: Sample Feature Module (Blog)
- Epic 5: Infrastructure Adapters
- Epic 6: Observability & Production Readiness
- Epic 7: Testing, Documentation & CI/CD
- Epic 8: Advanced Features & Polish (Optional)
## Quick Start
1. Review the [Requirements](requirements.md) to understand the platform goals
2. Check the [Implementation Plan](plan.md) for the epic-based approach
3. Follow the [Playbook](playbook.md) for implementation details
4. Refer to [ADRs](adr/README.md) when making architectural decisions
5. Use the [Task Stories](stories/README.md) as a checklist for implementation
## Key Principles
- **Clean/Hexagonal Architecture**: Clear separation between core and plugins
- **Microservices Architecture**: Each module is an independent service from day one
- **Plugin-First Design**: Extensible architecture supporting static and dynamic modules
- **Security-by-Design**: Built-in authentication, authorization, and audit capabilities
- **Observability**: Comprehensive logging, metrics, and tracing
- **API-First**: OpenAPI/GraphQL schema generation
## Contributing
When contributing to the platform:
1. Review relevant ADRs before making architectural decisions
2. Follow the task structure defined in the Stories
3. Update documentation as you implement features
4. Ensure all tests pass before submitting changes

View File

@@ -0,0 +1,816 @@
# Module Requirements
This document provides detailed requirements for each module in the Go Platform, including interfaces, responsibilities, and integration points.
## Table of Contents
- [Core Kernel Modules](#core-kernel-modules)
- [Security Modules](#security-modules)
- [Infrastructure Modules](#infrastructure-modules)
- [Feature Modules](#feature-modules)
## Core Kernel Modules
### Configuration Module
**Purpose**: Hierarchical configuration management with support for multiple sources.
**Requirements**:
- Load configuration from YAML files (default, environment-specific)
- Support environment variable overrides
- Support secret manager integration (AWS Secrets Manager, Vault)
- Type-safe configuration access
- Configuration validation
**Interface**:
```go
type ConfigProvider interface {
Get(key string) any
Unmarshal(v any) error
GetString(key string) string
GetInt(key string) int
GetBool(key string) bool
GetStringSlice(key string) []string
GetDuration(key string) time.Duration
IsSet(key string) bool
}
```
**Implementation**:
- Uses `github.com/spf13/viper` for configuration loading
- Load order: `default.yaml` → `{env}.yaml` → environment variables → secrets
- Supports nested configuration keys (e.g., `server.port`)
**Configuration Schema**:
```yaml
environment: development
server:
port: 8080
host: "0.0.0.0"
timeout: 30s
database:
driver: "postgres"
dsn: ""
max_connections: 25
max_idle_connections: 5
logging:
level: "info"
format: "json"
output: "stdout"
cache:
enabled: true
ttl: 5m
```
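For illustration, a minimal loader following this order (the secret-manager step is omitted; the `config/` search path, function name, and error handling are assumptions, and a thin adapter would still wrap `*viper.Viper` to satisfy `ConfigProvider` exactly):
```go
package config

import (
	"fmt"
	"strings"

	"github.com/spf13/viper"
)

// LoadConfig loads config/default.yaml, merges config/{env}.yaml on top,
// then lets environment variables override (e.g. SERVER_PORT -> server.port).
func LoadConfig(env string) (*viper.Viper, error) {
	v := viper.New()
	v.AddConfigPath("config")
	v.SetConfigName("default")
	if err := v.ReadInConfig(); err != nil {
		return nil, fmt.Errorf("read default config: %w", err)
	}
	v.SetConfigName(env) // "development", "production", ...
	if err := v.MergeInConfig(); err != nil {
		return nil, fmt.Errorf("merge %s config: %w", env, err)
	}
	v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
	v.AutomaticEnv()
	return v, nil
}
```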
**Dependencies**: None (foundation module)
---
### Logging Module
**Purpose**: Structured logging with support for multiple outputs and log levels.
**Requirements**:
- Structured JSON logging for production
- Human-readable logging for development
- Support for log levels (debug, info, warn, error)
- Request-scoped fields (request_id, user_id, trace_id)
- Contextual logging (with fields)
- Performance: minimal overhead
**Interface**:
```go
type Field interface{}
type Logger interface {
Debug(msg string, fields ...Field)
Info(msg string, fields ...Field)
Warn(msg string, fields ...Field)
Error(msg string, fields ...Field)
Fatal(msg string, fields ...Field)
With(fields ...Field) Logger
}
// Helper functions
func String(key, value string) Field
func Int(key string, value int) Field
func Error(err error) Field
func Duration(key string, value time.Duration) Field
```
**Implementation**:
- Uses `go.uber.org/zap` for high-performance logging
- JSON encoder for production, console encoder for development
- Global logger instance accessible via `pkg/logger`
- Request-scoped logger via context
**Example Usage**:
```go
logger.Info("User logged in",
logger.String("user_id", userID),
logger.String("ip", ipAddress),
logger.Duration("duration", duration),
)
```
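One common way to realize the request-scoped logger is a context key; a sketch under that assumption (`Default()` as a global accessor is illustrative, not a documented API):
```go
package logger

import "context"

type ctxKey struct{}

// IntoContext returns a context carrying a request-scoped logger.
func IntoContext(ctx context.Context, l Logger) context.Context {
	return context.WithValue(ctx, ctxKey{}, l)
}

// FromContext returns the request-scoped logger, falling back to the global one.
func FromContext(ctx context.Context) Logger {
	if l, ok := ctx.Value(ctxKey{}).(Logger); ok {
		return l
	}
	return Default() // assumed global accessor from pkg/logger
}
```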
**Dependencies**: Configuration Module
---
### Dependency Injection Module
**Purpose**: Service registration and lifecycle management.
**Requirements**:
- Service registration via constructors
- Lifecycle management (OnStart/OnStop hooks)
- Dependency resolution
- Service overrides for testing
- Module-based service composition
**Implementation**:
- Uses `go.uber.org/fx` for dependency injection
- Core services registered in `internal/di/core_module.go`
- Modules register services via `fx.Provide()` in `Init()`
- Lifecycle hooks via `fx.Lifecycle`
**Core Module Structure**:
```go
var CoreModule = fx.Options(
fx.Provide(ProvideConfig),
fx.Provide(ProvideLogger),
fx.Provide(ProvideDatabase),
fx.Provide(ProvideHealthCheckers),
fx.Provide(ProvideMetrics),
fx.Provide(ProvideErrorBus),
fx.Provide(ProvideEventBus),
// ... other core services
)
```
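A sketch of a provider with lifecycle hooks, assuming an HTTP server is wired this way (the server and field names are illustrative; `fx.Lifecycle` and `fx.Hook` are the actual FX APIs):
```go
import (
	"context"
	"net/http"

	"go.uber.org/fx"
)

// ProvideHTTPServer registers start/stop hooks with the FX lifecycle.
// ConfigProvider and Logger are the interfaces defined in this document.
func ProvideHTTPServer(lc fx.Lifecycle, cfg ConfigProvider, log Logger) *http.Server {
	srv := &http.Server{Addr: ":" + cfg.GetString("server.port")}
	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			log.Info("starting HTTP server", String("addr", srv.Addr))
			go srv.ListenAndServe() // startup errors would be routed to the error bus
			return nil
		},
		OnStop: func(ctx context.Context) error { return srv.Shutdown(ctx) },
	})
	return srv
}
```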
**Dependencies**: Configuration Module, Logging Module
---
### Health & Metrics Module
**Purpose**: Health checks and metrics collection.
**Requirements**:
- Liveness endpoint (`/healthz`)
- Readiness endpoint (`/ready`)
- Metrics endpoint (`/metrics`) in Prometheus format
- Composable health checkers
- Custom metrics support
**Interface**:
```go
type HealthChecker interface {
Check(ctx context.Context) error
}
type HealthRegistry interface {
Register(name string, checker HealthChecker)
Check(ctx context.Context) map[string]error
}
```
**Core Health Checkers**:
- Database connectivity
- Redis connectivity
- Kafka connectivity (if enabled)
- Disk space
- Memory usage
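The database checker from this list can be a few lines; a sketch assuming a `*sql.DB` handle and the `HealthChecker` interface above:
```go
import (
	"context"
	"database/sql"
	"time"
)

type dbChecker struct{ db *sql.DB }

func (c dbChecker) Check(ctx context.Context) error {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()
	return c.db.PingContext(ctx)
}

// Registered at startup, e.g.: registry.Register("database", dbChecker{db: db})
```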
**Metrics**:
- HTTP request duration (histogram)
- HTTP request count (counter)
- Database query duration (histogram)
- Cache hit/miss ratio (gauge)
- Error count (counter)
**Dependencies**: Configuration Module, Logging Module
---
### Error Bus Module
**Purpose**: Centralized error handling and reporting.
**Requirements**:
- Non-blocking error publishing
- Multiple error sinks (logger, Sentry)
- Error context preservation
- Panic recovery integration
**Interface**:
```go
type ErrorPublisher interface {
Publish(err error)
PublishWithContext(ctx context.Context, err error)
}
```
**Implementation**:
- Channel-based error bus
- Background goroutine consumes errors
- Pluggable sinks (logger, Sentry)
- Context extraction (user_id, trace_id, module)
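A minimal sketch of such a channel-based bus (buffer size and sink signature are assumptions; `PublishWithContext` is omitted):
```go
type errorBus struct {
	ch    chan error
	sinks []func(error) // e.g. a logger sink and a Sentry sink
}

func NewErrorBus(sinks ...func(error)) *errorBus {
	b := &errorBus{ch: make(chan error, 1024), sinks: sinks}
	go func() { // background consumer fans errors out to all sinks
		for err := range b.ch {
			for _, sink := range b.sinks {
				sink(err)
			}
		}
	}()
	return b
}

// Publish never blocks the caller: on overflow the error is dropped
// rather than stalling request handling.
func (b *errorBus) Publish(err error) {
	select {
	case b.ch <- err:
	default:
	}
}
```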
**Dependencies**: Logging Module
---
## Security Modules
### Authentication Module
**Purpose**: User authentication via JWT tokens.
**Requirements**:
- JWT access token generation (short-lived, 15 minutes)
- JWT refresh token generation (long-lived, 7 days)
- Token validation and verification
- Token claims extraction
- Refresh token storage and revocation
**Interface**:
```go
type Authenticator interface {
GenerateAccessToken(userID string, roles []string, tenantID string) (string, error)
GenerateRefreshToken(userID string) (string, error)
VerifyToken(token string) (*TokenClaims, error)
RevokeRefreshToken(tokenHash string) error
}
type TokenClaims struct {
UserID string
Roles []string
TenantID string
ExpiresAt time.Time
IssuedAt time.Time
}
```
**Token Format**:
- Algorithm: HS256 or RS256
- Claims: `sub` (user ID), `roles`, `tenant_id`, `exp`, `iat`
- Refresh tokens stored in database with hash
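A sketch of access-token generation with `github.com/golang-jwt/jwt/v5` under these claim names (whether the platform uses this library, and HS256 vs RS256, is a configuration decision):
```go
import (
	"time"

	"github.com/golang-jwt/jwt/v5"
)

func generateAccessToken(secret []byte, userID string, roles []string, tenantID string) (string, error) {
	now := time.Now()
	claims := jwt.MapClaims{
		"sub":       userID,
		"roles":     roles,
		"tenant_id": tenantID,
		"iat":       now.Unix(),
		"exp":       now.Add(15 * time.Minute).Unix(), // short-lived access token
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(secret)
}
```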
**Endpoints**:
- `POST /api/v1/auth/login` - Authenticate and get tokens
- `POST /api/v1/auth/refresh` - Refresh access token
- `POST /api/v1/auth/logout` - Revoke refresh token
**Dependencies**: Identity Module, Configuration Module
---
### Authorization Module
**Purpose**: Role-based and attribute-based access control.
**Requirements**:
- Permission-based authorization
- Role-to-permission mapping
- User-to-role assignment
- Permission caching
- Context-aware authorization
**Interface**:
```go
type PermissionResolver interface {
HasPermission(ctx context.Context, userID string, perm Permission) (bool, error)
GetUserPermissions(ctx context.Context, userID string) ([]Permission, error)
}
type Authorizer interface {
Authorize(ctx context.Context, perm Permission) error
}
```
**Permission Format**:
- String format: `"{module}.{resource}.{action}"`
- Examples: `blog.post.create`, `user.read`, `system.health.check`
- Code-generated constants for type safety
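The generated constants and their use could look like this sketch (the generator's output layout is an assumption):
```go
// Code generated from module manifests (illustrative layout).
package perm

type Permission string

const (
	BlogPostCreate Permission = "blog.post.create"
	BlogPostRead   Permission = "blog.post.read"
	UserRead       Permission = "user.read"
)
```
Handlers then authorize declaratively, e.g. `if err := authorizer.Authorize(ctx, perm.BlogPostCreate); err != nil { ... }`, and the middleware translates the error into HTTP 403.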
**Authorization Flow**:
```mermaid
sequenceDiagram
participant Request
participant AuthzMiddleware
participant Authorizer
participant PermissionResolver
participant Cache
participant DB
Request->>AuthzMiddleware: HTTP request with permission
AuthzMiddleware->>Authorizer: Authorize(ctx, permission)
Authorizer->>Authorizer: Extract user from context
Authorizer->>PermissionResolver: HasPermission(user, permission)
PermissionResolver->>Cache: Check cache
Cache-->>PermissionResolver: Cache miss
PermissionResolver->>DB: Load user roles
PermissionResolver->>DB: Load role permissions
DB-->>PermissionResolver: Permissions
PermissionResolver->>Cache: Store in cache
PermissionResolver-->>Authorizer: Has permission: true/false
Authorizer-->>AuthzMiddleware: Authorized or error
AuthzMiddleware-->>Request: Continue or 403
```
**Dependencies**: Identity Module, Cache Module
---
### Identity Module
**Purpose**: User and role management.
**Requirements**:
- User CRUD operations
- Password hashing (argon2id)
- Email verification
- Password reset flow
- Role management
- Permission management
- User-role assignment
**Interfaces**:
```go
type UserRepository interface {
FindByID(ctx context.Context, id string) (*User, error)
FindByEmail(ctx context.Context, email string) (*User, error)
Create(ctx context.Context, u *User) error
Update(ctx context.Context, u *User) error
Delete(ctx context.Context, id string) error
List(ctx context.Context, filters UserFilters) ([]*User, error)
}
type UserService interface {
Register(ctx context.Context, email, password string) (*User, error)
VerifyEmail(ctx context.Context, token string) error
ResetPassword(ctx context.Context, email string) error
ChangePassword(ctx context.Context, userID, oldPassword, newPassword string) error
UpdateProfile(ctx context.Context, userID string, updates UserUpdates) error
}
type RoleRepository interface {
FindByID(ctx context.Context, id string) (*Role, error)
Create(ctx context.Context, r *Role) error
Update(ctx context.Context, r *Role) error
Delete(ctx context.Context, id string) error
AssignPermissions(ctx context.Context, roleID string, permissions []Permission) error
AssignToUser(ctx context.Context, userID string, roleIDs []string) error
}
```
**User Entity**:
- ID (UUID)
- Email (unique, verified)
- Password hash (argon2id)
- Email verified (boolean)
- Created at, updated at
- Tenant ID (optional, for multi-tenancy)
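The argon2id hash referenced above can be derived with `golang.org/x/crypto/argon2`; a simplified sketch (parameters and the salt$hash encoding are illustrative, not the platform's tuned values):
```go
import (
	"crypto/rand"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/argon2"
)

func hashPassword(password string) (string, error) {
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		return "", err
	}
	// time=1, memory=64MiB, threads=4, keyLen=32: common starting parameters
	key := argon2.IDKey([]byte(password), salt, 1, 64*1024, 4, 32)
	return fmt.Sprintf("%s$%s",
		base64.RawStdEncoding.EncodeToString(salt),
		base64.RawStdEncoding.EncodeToString(key)), nil
}
```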
**Role Entity**:
- ID (UUID)
- Name (unique)
- Description
- Created at
- Permissions (many-to-many)
**Dependencies**: Database Module, Notification Module, Cache Module
---
### Audit Module
**Purpose**: Immutable audit logging of security-relevant actions.
**Requirements**:
- Append-only audit log
- Actor tracking (user ID)
- Action tracking (what was done)
- Target tracking (what was affected)
- Metadata storage (JSON)
- Correlation IDs
- High-performance writes
**Interface**:
```go
type Auditor interface {
Record(ctx context.Context, action AuditAction) error
Query(ctx context.Context, filters AuditFilters) ([]AuditEntry, error)
}
type AuditAction struct {
ActorID string
Action string // e.g., "user.created", "role.assigned"
TargetID string
Metadata map[string]any
IPAddress string
UserAgent string
}
```
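Recording an entry from a service might look like this (variable names are illustrative):
```go
err := auditor.Record(ctx, AuditAction{
	ActorID:   currentUserID,
	Action:    "role.assigned",
	TargetID:  roleID,
	Metadata:  map[string]any{"granted_by": adminID},
	IPAddress: clientIP,
})
```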
**Audit Log Schema**:
- ID (UUID)
- Actor ID (user ID)
- Action (string)
- Target ID (resource ID)
- Metadata (JSONB)
- Timestamp
- Request ID
- IP Address
- User Agent
**Automatic Audit Events**:
- User login/logout
- Password changes
- Role assignments
- Permission grants
- Data modifications (configurable)
**Dependencies**: Database Module, Logging Module
---
## Infrastructure Modules
### Database Module
**Purpose**: Database access and ORM functionality.
**Requirements**:
- PostgreSQL support (primary)
- Connection pooling
- Transaction support
- Migration management
- Query instrumentation (OpenTelemetry)
- Multi-tenancy support (tenant_id filtering)
**Implementation**:
- Uses `entgo.io/ent` for code generation
- Ent schemas for all entities
- Migration runner on startup
- Connection pool configuration
**Database Client Interface**:
```go
type DatabaseClient interface {
Client() *ent.Client
Migrate(ctx context.Context) error
Close() error
HealthCheck(ctx context.Context) error
}
```
**Connection Pooling**:
- Max connections: 25
- Max idle connections: 5
- Connection lifetime: 5 minutes
- Idle timeout: 10 minutes
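Applying these pool settings when constructing the Ent client, sketched with the pgx stdlib driver (`ent` is the generated package under `internal/ent`; the driver choice is an assumption):
```go
import (
	"database/sql"
	"time"

	"entgo.io/ent/dialect"
	entsql "entgo.io/ent/dialect/sql"
	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver
)

func newEntClient(dsn string) (*ent.Client, error) {
	db, err := sql.Open("pgx", dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(25)
	db.SetMaxIdleConns(5)
	db.SetConnMaxLifetime(5 * time.Minute)
	db.SetConnMaxIdleTime(10 * time.Minute)
	return ent.NewClient(ent.Driver(entsql.OpenDB(dialect.Postgres, db))), nil
}
```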
**Multi-Tenancy**:
- Automatic tenant_id filtering via Ent interceptors
- Tenant-aware queries
- Tenant isolation at application level
**Dependencies**: Configuration Module, Logging Module
---
### Cache Module
**Purpose**: Distributed caching with Redis.
**Requirements**:
- Key-value storage
- TTL support
- Distributed caching (shared across instances)
- Cache invalidation
- Fallback to in-memory cache
**Interface**:
```go
type Cache interface {
Get(ctx context.Context, key string) ([]byte, error)
Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
Delete(ctx context.Context, key string) error
Exists(ctx context.Context, key string) (bool, error)
Increment(ctx context.Context, key string) (int64, error)
}
```
**Use Cases**:
- User permissions caching
- Role assignments caching
- Session data
- Rate limiting state
- Query result caching (optional)
**Cache Key Format**:
- `user:{user_id}:permissions`
- `role:{role_id}:permissions`
- `session:{session_id}`
- `ratelimit:{user_id}:{endpoint}`
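A Redis-backed sketch of the `Get`/`Set` half of the interface using `github.com/redis/go-redis/v9` (the miss sentinel is an assumption):
```go
import (
	"context"
	"errors"
	"time"

	"github.com/redis/go-redis/v9"
)

var ErrCacheMiss = errors.New("cache: miss")

type redisCache struct{ rdb *redis.Client }

func (c *redisCache) Get(ctx context.Context, key string) ([]byte, error) {
	b, err := c.rdb.Get(ctx, key).Bytes()
	if errors.Is(err, redis.Nil) {
		return nil, ErrCacheMiss
	}
	return b, err
}

func (c *redisCache) Set(ctx context.Context, key string, value []byte, ttl time.Duration) error {
	return c.rdb.Set(ctx, key, value, ttl).Err()
}
```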
**Dependencies**: Configuration Module, Logging Module
---
### Event Bus Module
**Purpose**: Event-driven communication between modules.
**Requirements**:
- Publish/subscribe pattern
- Topic-based routing
- In-process bus (development)
- Kafka bus (production)
- Error handling and retries
- Event ordering (per partition)
**Interface**:
```go
type EventBus interface {
Publish(ctx context.Context, topic string, event Event) error
Subscribe(topic string, handler EventHandler) error
Unsubscribe(topic string) error
}
type Event struct {
ID string
Type string
Source string
Timestamp time.Time
Data map[string]any
}
type EventHandler func(ctx context.Context, event Event) error
```
**Core Events**:
- `platform.user.created`
- `platform.user.updated`
- `platform.user.deleted`
- `platform.role.assigned`
- `platform.role.revoked`
- `platform.permission.granted`
**Event Flow**:
```mermaid
graph LR
Publisher[Module Publisher]
Bus[Event Bus]
Subscriber1[Module Subscriber 1]
Subscriber2[Module Subscriber 2]
Subscriber3[Module Subscriber 3]
Publisher -->|Publish| Bus
Bus -->|Deliver| Subscriber1
Bus -->|Deliver| Subscriber2
Bus -->|Deliver| Subscriber3
```
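The in-process bus for development can be a guarded map of handlers; a sketch against the interface above (the Kafka bus would add partitioning, retries, and async delivery):
```go
import (
	"context"
	"sync"
)

type inProcBus struct {
	mu       sync.RWMutex
	handlers map[string][]EventHandler
}

func NewInProcBus() *inProcBus {
	return &inProcBus{handlers: make(map[string][]EventHandler)}
}

func (b *inProcBus) Subscribe(topic string, h EventHandler) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.handlers[topic] = append(b.handlers[topic], h)
	return nil
}

func (b *inProcBus) Publish(ctx context.Context, topic string, event Event) error {
	b.mu.RLock()
	hs := b.handlers[topic]
	b.mu.RUnlock()
	for _, h := range hs {
		if err := h(ctx, event); err != nil {
			return err // synchronous, fail-fast; fine for development
		}
	}
	return nil
}
```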
**Dependencies**: Configuration Module, Logging Module
---
### Scheduler Module
**Purpose**: Background job processing and cron scheduling.
**Requirements**:
- Cron job scheduling
- Async job queuing
- Job retries with backoff
- Job status tracking
- Concurrency control
- Job persistence
**Interface**:
```go
type Scheduler interface {
Cron(spec string, job JobFunc) error
Enqueue(queue string, payload any) error
EnqueueWithRetry(queue string, payload any, retries int) error
}
type JobFunc func(ctx context.Context) error
```
**Implementation**:
- Uses `github.com/robfig/cron/v3` for cron jobs
- Uses `github.com/hibiken/asynq` for job queuing
- Redis-backed job queue
- Job processor with worker pool
**Example Jobs**:
- Cleanup expired tokens (daily)
- Send digest emails (weekly)
- Generate reports (monthly)
- Data archival (custom schedule)
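Bridging the `Cron` method onto `robfig/cron/v3` could look like this sketch (struct fields and logging are assumptions):
```go
import (
	"context"

	"github.com/robfig/cron/v3"
)

type scheduler struct {
	c   *cron.Cron
	log Logger
}

func (s *scheduler) Cron(spec string, job JobFunc) error {
	_, err := s.c.AddFunc(spec, func() {
		if err := job(context.Background()); err != nil {
			s.log.Error("cron job failed", Error(err))
		}
	})
	return err
}

// e.g. daily token cleanup at midnight: sched.Cron("0 0 * * *", cleanupExpiredTokens)
```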
**Dependencies**: Cache Module (Redis), Logging Module
---
### Blob Storage Module
**Purpose**: File and blob storage abstraction.
**Requirements**:
- File upload
- File download
- File deletion
- Signed URL generation
- Versioning support (optional)
**Interface**:
```go
type BlobStore interface {
Upload(ctx context.Context, key string, data []byte, contentType string) error
Download(ctx context.Context, key string) ([]byte, error)
Delete(ctx context.Context, key string) error
GetSignedURL(ctx context.Context, key string, ttl time.Duration) (string, error)
Exists(ctx context.Context, key string) (bool, error)
}
```
**Implementation**:
- AWS S3 adapter (primary)
- Local file system adapter (development)
- GCS adapter (optional)
**Key Format**:
- `{module}/{resource_type}/{resource_id}/{filename}`
- Example: `blog/posts/abc123/image.jpg`
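The development adapter can sit on the local file system; a partial sketch of the interface above (signed URLs are not meaningful locally and are omitted):
```go
import (
	"context"
	"os"
	"path/filepath"
)

type fsStore struct{ root string }

func (s *fsStore) Upload(ctx context.Context, key string, data []byte, contentType string) error {
	path := filepath.Join(s.root, filepath.Clean(key))
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644) // contentType is ignored locally
}

func (s *fsStore) Download(ctx context.Context, key string) ([]byte, error) {
	return os.ReadFile(filepath.Join(s.root, filepath.Clean(key)))
}
```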
**Dependencies**: Configuration Module, Logging Module
---
### Notification Module
**Purpose**: Multi-channel notifications (email, SMS, push).
**Requirements**:
- Email sending (SMTP, AWS SES)
- SMS sending (Twilio, optional)
- Push notifications (FCM, APNs, optional)
- Webhook notifications
- Template support
- Retry logic
**Interface**:
```go
type Notifier interface {
SendEmail(ctx context.Context, to, subject, body string) error
SendEmailWithTemplate(ctx context.Context, to, template string, data map[string]any) error
SendSMS(ctx context.Context, to, message string) error
SendPush(ctx context.Context, deviceToken string, payload PushPayload) error
SendWebhook(ctx context.Context, url string, payload map[string]any) error
}
```
**Email Templates**:
- Email verification
- Password reset
- Welcome email
- Notification digest
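The SMTP path of `SendEmail` can be as small as this sketch with the standard library (struct fields are assumptions; SES would be a second adapter):
```go
import (
	"context"
	"fmt"
	"net/smtp"
)

type smtpNotifier struct {
	addr, host, user, password, from string // addr is host:port
}

func (n *smtpNotifier) SendEmail(ctx context.Context, to, subject, body string) error {
	msg := []byte(fmt.Sprintf("From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n%s",
		n.from, to, subject, body))
	auth := smtp.PlainAuth("", n.user, n.password, n.host)
	return smtp.SendMail(n.addr, auth, n.from, []string{to}, msg)
}
```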
**Dependencies**: Configuration Module, Logging Module, Event Bus Module
---
## Feature Modules
### Blog Module (Example)
**Purpose**: Blog post management functionality.
**Requirements**:
- Post CRUD operations
- Comment system (optional)
- Author-based access control
- Post publishing workflow
- Tag/category support
**Permissions**:
- `blog.post.create`
- `blog.post.read`
- `blog.post.update`
- `blog.post.delete`
- `blog.post.publish`
**Routes**:
- `POST /api/v1/blog/posts` - Create post
- `GET /api/v1/blog/posts` - List posts
- `GET /api/v1/blog/posts/:id` - Get post
- `PUT /api/v1/blog/posts/:id` - Update post
- `DELETE /api/v1/blog/posts/:id` - Delete post
**Domain Model**:
```go
type Post struct {
ID string
Title string
Content string
AuthorID string
Status PostStatus // draft, published, archived
CreatedAt time.Time
UpdatedAt time.Time
PublishedAt *time.Time
}
```
**Events Published**:
- `blog.post.created`
- `blog.post.updated`
- `blog.post.published`
- `blog.post.deleted`
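Publishing `blog.post.created` from the domain service might look like this sketch against the Event Bus interface (`PostRepository`, field names, and the use of `github.com/google/uuid` are assumptions):
```go
import (
	"context"
	"time"

	"github.com/google/uuid"
)

type postService struct {
	repo PostRepository // assumed repository interface for posts
	bus  EventBus       // Event Bus interface from the Event Bus Module
}

func (s *postService) Create(ctx context.Context, p *Post) error {
	if err := s.repo.Create(ctx, p); err != nil {
		return err
	}
	return s.bus.Publish(ctx, "blog.post.created", Event{
		ID:        uuid.NewString(),
		Type:      "blog.post.created",
		Source:    "blog",
		Timestamp: time.Now(),
		Data:      map[string]any{"post_id": p.ID, "author_id": p.AuthorID},
	})
}
```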
**Dependencies**: Core Kernel, Identity Module, Event Bus Module
---
## Module Integration Matrix
```mermaid
graph TB
subgraph "Core Kernel (Required)"
Config[Config]
Logger[Logger]
DI[DI Container]
Health[Health]
end
subgraph "Security (Required)"
Auth[Auth]
Authz[Authz]
Identity[Identity]
Audit[Audit]
end
subgraph "Infrastructure (Optional)"
DB[Database]
Cache[Cache]
EventBus[Event Bus]
Scheduler[Scheduler]
BlobStore[Blob Store]
Notifier[Notifier]
end
subgraph "Feature Modules"
Blog[Blog]
Billing[Billing]
Custom[Custom Modules]
end
Config --> Logger
Config --> DI
DI --> Health
DI --> Auth
DI --> Authz
DI --> Identity
DI --> Audit
DI --> DB
DI --> Cache
DI --> EventBus
DI --> Scheduler
DI --> BlobStore
DI --> Notifier
Auth --> Identity
Authz --> Identity
Authz --> Audit
Blog --> Auth
Blog --> Authz
Blog --> DB
Blog --> EventBus
Blog --> Cache
Billing --> Auth
Billing --> Authz
Billing --> DB
Billing --> EventBus
Billing --> Cache
Custom --> Auth
Custom --> Authz
Custom --> DB
style Config fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Auth fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Blog fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Next Steps
- [Component Relationships](./component-relationships.md) - Detailed component interactions
- [System Architecture](./architecture.md) - Overall system architecture
- [Module Architecture](./architecture-modules.md) - Module design and integration

1662
docs/content/plan.md Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -1,6 +1,6 @@
# Requirements
## 1 HIGH-LEVEL ARCHITECTURAL PRINCIPLES
## HIGH-LEVEL ARCHITECTURAL PRINCIPLES
| Principle | Why it matters for a modular platform | How to enforce it |
|-----------|----------------------------------------|-------------------|
@@ -18,7 +18,7 @@
---
## 2 LAYERED / HEXAGONAL BLUEPRINT
## LAYERED / HEXAGONAL BLUEPRINT
```
+---------------------------------------------------+
@@ -50,7 +50,7 @@
---
## 3 REQUIRED BASE MODULES (THE “CORE KERNEL”)
## REQUIRED BASE MODULES (THE “CORE KERNEL”)
| Module | Core responsibilities | Public API / Extension points |
|--------|-----------------------|--------------------------------|
@@ -74,7 +74,7 @@
---
## 4 EXTENSION-POINT DESIGN (HOW PLUGINS HOOK IN)
## EXTENSION-POINT DESIGN (HOW PLUGINS HOOK IN)
1. **Module Manifest**: a tiny JSON/YAML file (`module.yaml`) that declares:
- Module name, version, dependencies (core ≥ 1.2.0, other modules)
@@ -123,7 +123,7 @@
---
## 5 SAMPLE REPOSITORY LAYOUT (language-agnostic)
## SAMPLE REPOSITORY LAYOUT (language-agnostic)
```
/platform-root
@@ -206,7 +206,7 @@ bootstrap().catch(err => {
---
## 6 KEY DECISIONS YOU MUST TAKE EARLY
## KEY DECISIONS YOU MUST TAKE EARLY
| Decision | Options | Implications |
|----------|---------|--------------|
@@ -222,7 +222,7 @@ bootstrap().catch(err => {
---
## 7 COMMON PITFALLS & HOW TO AVOID THEM
## COMMON PITFALLS & HOW TO AVOID THEM
| Pitfall | Symptoms | Fix / Guardrail |
|---------|----------|-----------------|
@@ -238,7 +238,7 @@ bootstrap().catch(err => {
---
## 8 QUICK START GUIDE (What to Build First)
## QUICK START GUIDE (What to Build First)
1. **Create the Core Kernel**
- Set up DI container, config loader, logger, health/metrics endpoint.
@@ -280,7 +280,7 @@ bootstrap().catch(err => {
---
## 9 TOOLS & LIBRARIES (starter suggestions per stack)
## TOOLS & LIBRARIES (starter suggestions per stack)
| Stack | Core | Auth | DI / Module | Event Bus | ORM | Validation | Testing |
|-------|------|------|-------------|-----------|-----|------------|---------|
Pick the stack you're most comfortable with; the concepts stay identical.
---
## 🎉 TL;DR - What You Must Deliver
## TL;DR - What You Must Deliver
| Layer | Must-have components | Why |
|-------|----------------------|-----|

View File

@@ -0,0 +1,62 @@
# Complete Task List
This document provides a comprehensive list of all tasks across all epics. Each task has a corresponding detailed file in the epic-specific directories.
## Task Organization
Tasks are organized by epic and section. Each task file follows the naming convention: `{epic}.{section}.{subtask}-{description}.md`
## Epic 0: Project Setup & Foundation
### 0.1 Repository Bootstrap
- [0.1.1 - Initialize Go Module](./epic0/0.1.1-initialize-go-module.md)
- [0.1.2 - Create Directory Structure](./epic0/0.1.2-create-directory-structure.md)
- [0.1.3 - Add Gitignore](./epic0/0.1.3-add-gitignore.md)
- [0.1.4 - Create Initial README](./epic0/0.1.4-create-initial-readme.md)
### 0.2 Configuration System
- [0.2.1 - Install Configuration Dependencies](./epic0/0.2.1-install-config-dependencies.md)
- [0.2.2 - Create Config Interface](./epic0/0.2.2-create-config-interface.md)
- [0.2.3 - Implement Config Loader](./epic0/0.2.3-implement-config-loader.md)
- [0.2.4 - Create Configuration Files](./epic0/0.2.4-create-configuration-files.md)
### 0.3 Logging Foundation
- [0.3.1 - Install Logging Dependencies](./epic0/0.3.1-install-logging-dependencies.md)
- See [Epic 0 README](./epic0/README.md) for remaining tasks
### 0.4 Basic CI/CD Pipeline
- See [Epic 0 README](./epic0/README.md) for tasks
### 0.5 Dependency Injection Setup
- See [Epic 0 README](./epic0/README.md) for tasks
## Epic 1-8 Tasks
Detailed task files for Epics 1-8 are being created. See individual epic README files:
- [Epic 1 README](./epic1/README.md) - Core Kernel & Infrastructure
- [Epic 2 README](./epic2/README.md) - Authentication & Authorization
- [Epic 3 README](./epic3/README.md) - Module Framework
- [Epic 4 README](./epic4/README.md) - Sample Feature Module (Blog)
- [Epic 5 README](./epic5/README.md) - Infrastructure Adapters
- [Epic 6 README](./epic6/README.md) - Observability & Production Readiness
- [Epic 7 README](./epic7/README.md) - Testing, Documentation & CI/CD
- [Epic 8 README](./epic8/README.md) - Advanced Features & Polish
## Task Status Tracking
To track task completion:
1. Update the Status field in each task file
2. Update checkboxes in the main plan.md
3. Reference task IDs in commit messages: `[0.1.1] Initialize Go module`
4. Link GitHub issues to tasks if using issue tracking
## Generating Missing Task Files
A script is available to generate task files from plan.md:
```bash
cd docs/tasks
python3 generate_tasks.py
```
Note: Manually review and refine generated task files as needed.

View File

@@ -0,0 +1,63 @@
# Implementation Tasks
This directory contains detailed task definitions for each epic of the Go Platform implementation.
## Task Organization
Tasks are organized by epic, with each major task section having its own detailed file:
### Epic 0: Project Setup & Foundation
- [Epic 0 Tasks](./epic0/README.md) - All Epic 0 tasks
### Epic 1: Core Kernel & Infrastructure
- [Epic 1 Tasks](./epic1/README.md) - All Epic 1 tasks
### Epic 2: Authentication & Authorization
- [Epic 2 Tasks](./epic2/README.md) - All Epic 2 tasks
### Epic 3: Module Framework
- [Epic 3 Tasks](./epic3/README.md) - All Epic 3 tasks
### Epic 4: Sample Feature Module (Blog)
- [Epic 4 Tasks](./epic4/README.md) - All Epic 4 tasks
### Epic 5: Infrastructure Adapters
- [Epic 5 Tasks](./epic5/README.md) - All Epic 5 tasks
### Epic 6: Observability & Production Readiness
- [Epic 6 Tasks](./epic6/README.md) - All Epic 6 tasks
### Epic 7: Testing, Documentation & CI/CD
- [Epic 7 Tasks](./epic7/README.md) - All Epic 7 tasks
### Epic 8: Advanced Features & Polish (Optional)
- [Epic 8 Tasks](./epic8/README.md) - All Epic 8 tasks
## Task Status
Each task file includes:
- **Task ID**: Unique identifier (e.g., `0.1.1`)
- **Title**: Descriptive task name
- **Epic**: Implementation epic
- **Status**: Pending | In Progress | Completed | Blocked
- **Priority**: High | Medium | Low
- **Dependencies**: Tasks that must complete first
- **Description**: Detailed requirements
- **Acceptance Criteria**: How to verify completion
- **Implementation Notes**: Technical details and references
- **Related ADRs**: Links to relevant architecture decisions
## Task Tracking
Tasks can be tracked using:
- GitHub Issues (linked from tasks)
- Project boards
- Task management tools
- Direct commit messages referencing task IDs
## Task Naming Convention
Tasks follow the format: `{epic}.{section}.{subtask}`
Example: `0.1.1` = Epic 0, Section 1 (Repository Bootstrap), Subtask 1

View File

@@ -0,0 +1,181 @@
# Story Consolidation Guide
## Overview
The stories have been reworked from granular, task-based items into meaningful, cohesive stories that solve complete problems. Each story now represents a complete feature or capability that can be tested end-to-end.
## Transformation Pattern
### Before (Granular Tasks)
- ❌ "Install dependency X"
- ❌ "Create file Y"
- ❌ "Add function Z"
- ❌ Multiple tiny tasks that don't deliver value alone
### After (Meaningful Stories)
- ✅ "Configuration Management System" - Complete config system with interface, implementation, and files
- ✅ "JWT Authentication System" - Complete auth with tokens, middleware, and endpoints
- ✅ "Database Layer with Ent ORM" - Complete database setup with entities, migrations, and client
## Story Structure
Each consolidated story follows this structure:
1. **Goal** - High-level objective
2. **Description** - What problem it solves
3. **Deliverables** - Complete list of what will be delivered
4. **Acceptance Criteria** - How we know it's done
5. **Implementation Steps** - High-level steps (not micro-tasks)
## Epic 0 - Completed Examples
### 0.1 Project Initialization and Repository Structure
**Consolidates:** Go module init, directory structure, .gitignore, README
### 0.2 Configuration Management System
**Consolidates:** Install viper, create interface, implement loader, create config files
### 0.3 Structured Logging System
**Consolidates:** Install zap, create interface, implement logger, request ID middleware
### 0.4 CI/CD Pipeline
**Consolidates:** GitHub Actions workflow, Makefile creation
### 0.5 DI and Application Bootstrap
**Consolidates:** Install FX, create DI container, create main.go
## Remaining Epics - Consolidation Pattern
### Epic 1: Core Kernel
- **1.1 Enhanced DI Container** - All DI providers and extensions
- **1.2 Database Layer** - Complete Ent setup with entities and migrations
- **1.3 Health & Metrics** - Complete monitoring system
- **1.4 Error Handling** - Complete error bus system
- **1.5 HTTP Server** - Complete server with all middleware
- **1.6 OpenTelemetry** - Complete tracing setup
- **1.7 Service Client Interfaces** - Complete service abstraction layer
### Epic 2: Authentication & Authorization
- **2.1 JWT Authentication System** - Complete auth with tokens
- **2.2 Identity Management** - Complete user lifecycle
- **2.3 RBAC System** - Complete permission system
- **2.4 Role Management API** - Complete role management
- **2.5 Audit Logging** - Complete audit system
- **2.6 Database Seeding** - Complete seeding system
### Epic 3: Module Framework
- **3.1 Module Interface & Registry** - Complete module system
- **3.2 Permission Code Generation** - Complete code gen system
- **3.3 Module Loader** - Complete loading and initialization
- **3.4 Module CLI** - Complete CLI tool
- **3.5 Service Registry** - Complete service discovery
### Epic 4: Sample Blog Module
- **4.1 Complete Blog Module** - Full module with CRUD, permissions, API
### Epic 5: Infrastructure Adapters
- **5.1 Cache System** - Complete Redis cache
- **5.2 Event Bus** - Complete event system
- **5.3 Blob Storage** - Complete S3 storage
- **5.4 Email Notification** - Complete email system
- **5.5 Scheduler & Jobs** - Complete job system
- **5.6 Secret Store** - Complete secret management
- **5.7 gRPC Services** - Complete gRPC service definitions and clients
### Epic 6: Observability
- **6.1 Enhanced OpenTelemetry** - Complete tracing
- **6.2 Error Reporting** - Complete Sentry integration
- **6.3 Enhanced Logging** - Complete log correlation
- **6.4 Metrics Expansion** - Complete metrics
- **6.5 Grafana Dashboards** - Complete dashboards
- **6.6 Rate Limiting** - Complete rate limiting
- **6.7 Security Hardening** - Complete security
- **6.8 Performance Optimization** - Complete optimizations
### Epic 7: Testing & Documentation
- **7.1 Unit Testing** - Complete test suite
- **7.2 Integration Testing** - Complete integration tests
- **7.3 Documentation** - Complete docs
- **7.4 CI/CD Enhancement** - Complete pipeline
- **7.5 Docker & Deployment** - Complete deployment setup
### Epic 8: Advanced Features
- **8.1 OIDC Support** - Complete OIDC
- **8.2 GraphQL API** - Complete GraphQL
- **8.3 Additional Modules** - Complete sample modules
- **8.4 Performance** - Complete optimizations
## Creating New Story Files
When creating story files for remaining epics, follow this template:
```markdown
# Story X.Y: [Meaningful Title]
## Metadata
- **Story ID**: X.Y
- **Title**: [Complete Feature Name]
- **Epic**: X - [Epic Name]
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: [hours]
- **Dependencies**: [story IDs]
## Goal
[High-level objective - what problem does this solve?]
## Description
[What this story delivers as a complete capability]
## Deliverables
- [Complete list of deliverables - not just files, but complete features]
- [Interface definitions]
- [Implementations]
- [API endpoints]
- [Integration points]
## Implementation Steps
1. [High-level step 1]
2. [High-level step 2]
3. [High-level step 3]
## Acceptance Criteria
- [ ] [End-to-end testable criteria]
- [ ] [Feature works completely]
- [ ] [Integration works]
## Related ADRs
- [ADR links]
## Implementation Notes
- [Important considerations]
## Testing
[How to test the complete feature]
## Files to Create/Modify
- [List of files]
```
## Key Principles
1. **Each story solves a complete problem** - Not just "install X" or "create file Y"
2. **Stories are testable end-to-end** - You can verify the complete feature works
3. **Stories deliver business value** - Even infrastructure stories solve complete problems
4. **Stories are independent where possible** - Can be worked on separately
5. **Stories have clear acceptance criteria** - You know when they're done
## Next Steps
1. Update remaining epic README files to reference consolidated stories
2. Create story files for remaining epics following the pattern
3. Update plan.md to complete all epics with story-based structure
4. Remove old granular task files (or archive them)
## Benefits of This Approach
- **Better Planning**: Stories represent complete features
- **Clearer Progress**: You can see complete features being delivered
- **Better Testing**: Each story can be tested end-to-end
- **Reduced Overhead**: Fewer story files to manage
- **More Meaningful**: Stories solve real problems, not just tasks

View File

@@ -0,0 +1,162 @@
# Story 0.1: Project Initialization and Repository Structure
## Metadata
- **Story ID**: 0.1
- **Title**: Project Initialization and Repository Structure
- **Epic**: 0 - Project Setup & Foundation
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 2-3 hours
- **Dependencies**: None
## Goal
Establish a properly structured Go project with all necessary directories, configuration files, and documentation that follows Go best practices and supports the platform's modular architecture.
## Description
This story covers the complete project initialization, including Go module setup, directory structure creation, and initial documentation. The project structure must support the microservices architecture with clear separation between core services, feature services (modules), and infrastructure.
## Deliverables
### 1. Go Module Initialization
- Initialize Go module with correct module path: `git.dcentral.systems/toolz/goplt`
- Set Go version to 1.24 in `go.mod`
- Verify module initialization with `go mod verify`
### 2. Complete Directory Structure
Create the following directory structure:
```
platform/
├── cmd/
│ └── platform/ # Main entry point
├── internal/ # Private implementation code
│ ├── di/ # Dependency injection container
│ ├── registry/ # Module registry
│ ├── pluginloader/ # Plugin loader (optional)
│ ├── config/ # Config implementation
│ ├── logger/ # Logger implementation
│ ├── infra/ # Infrastructure adapters
│ └── ent/ # Ent ORM schemas
├── pkg/ # Public interfaces (exported)
│ ├── config/ # ConfigProvider interface
│ ├── logger/ # Logger interface
│ ├── module/ # IModule interface
│ ├── auth/ # Auth interfaces
│ ├── perm/ # Permission DSL
│ └── infra/ # Infrastructure interfaces
├── modules/ # Feature modules
│ └── blog/ # Sample Blog module (Epic 4)
├── config/ # Configuration files
│ ├── default.yaml
│ ├── development.yaml
│ └── production.yaml
├── api/ # OpenAPI specs
├── scripts/ # Build/test scripts
├── docs/ # Documentation
├── ops/ # Operations (Grafana dashboards, etc.)
├── .github/
│ └── workflows/
│ └── ci.yml
├── Dockerfile
├── docker-compose.yml
├── docker-compose.test.yml
├── .gitignore
├── README.md
└── go.mod
```
### 3. .gitignore Configuration
- Exclude build artifacts (`bin/`, `dist/`)
- Exclude Go build cache
- Exclude IDE files (`.vscode/`, `.idea/`, etc.)
- Exclude test coverage files
- Exclude dependency directories
- Exclude environment-specific files
### 4. Initial README.md
Create comprehensive README with:
- Project overview and purpose
- Architecture overview
- Quick start guide
- Development setup instructions
- Directory structure explanation
- Links to documentation
- Contributing guidelines
### 5. Basic Documentation Structure
- Set up `docs/` directory
- Create architecture documentation placeholder
- Create API documentation structure
## Implementation Steps
1. **Initialize Go Module**
```bash
go mod init git.dcentral.systems/toolz/goplt
```
- Verify `go.mod` is created
- Set Go version constraint
2. **Create Directory Structure**
- Create all directories listed above
- Ensure proper nesting and organization
- Add placeholder `.gitkeep` files in empty directories if needed
3. **Configure .gitignore**
- Add Go-specific ignore patterns
- Add IDE-specific patterns
- Add OS-specific patterns
- Add build artifact patterns
4. **Create README.md**
- Write comprehensive project overview
- Document architecture principles
- Provide setup instructions
- Include example commands
5. **Verify Structure**
- Run `go mod verify`
- Check all directories exist
- Verify .gitignore works
- Test README formatting
## Acceptance Criteria
- [ ] `go mod init` creates module with correct path `git.dcentral.systems/toolz/goplt`
- [ ] Go version is set to `1.24` in `go.mod`
- [ ] All directories from the structure are in place
- [ ] `.gitignore` excludes build artifacts, dependencies, and IDE files
- [ ] `README.md` provides clear project overview and setup instructions
- [ ] Project structure matches architecture documentation
- [ ] `go mod verify` passes
- [ ] Directory structure follows Go best practices
## Related ADRs
- [ADR-0001: Go Module Path](../../adr/0001-go-module-path.md)
- [ADR-0002: Go Version](../../adr/0002-go-version.md)
- [ADR-0007: Project Directory Structure](../../adr/0007-project-directory-structure.md)
## Implementation Notes
- The module path should match the organization's Git hosting structure
- All internal packages must use `internal/` prefix to ensure they are not importable by external modules
- Public interfaces in `pkg/` should be minimal and well-documented
- Empty directories can have `.gitkeep` files to ensure they are tracked in Git
- The directory structure should be documented in the README
## Testing
```bash
# Verify module initialization
go mod verify
go mod tidy
# Check directory structure
tree -L 3 -a
# Verify .gitignore
git status
```
## Files to Create/Modify
- `go.mod` - Go module definition
- `README.md` - Project documentation
- `.gitignore` - Git ignore patterns
- All directory structure as listed above

View File

@@ -0,0 +1,171 @@
# Story 0.2: Configuration Management System
## Metadata
- **Story ID**: 0.2
- **Title**: Configuration Management System
- **Epic**: 0 - Project Setup & Foundation
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 4-6 hours
- **Dependencies**: 0.1
## Goal
Implement a flexible configuration system that loads settings from YAML files, environment variables, and supports type-safe access. The system must be injectable via DI and usable by all modules.
## Description
This story implements a complete configuration management system using Viper that provides a clean interface for accessing configuration values. The system supports multiple configuration sources (YAML files, environment variables) with proper precedence and type-safe accessors.
## Deliverables
### 1. Configuration Interface (`pkg/config/config.go`)
Define `ConfigProvider` interface with:
- `Get(key string) any` - Get any value
- `Unmarshal(v any) error` - Unmarshal into struct
- `GetString(key string) string` - Type-safe string getter
- `GetInt(key string) int` - Type-safe int getter
- `GetBool(key string) bool` - Type-safe bool getter
- `GetStringSlice(key string) []string` - Type-safe slice getter
- `GetDuration(key string) time.Duration` - Type-safe duration getter
- `IsSet(key string) bool` - Check if key exists
### 2. Viper Implementation (`internal/config/config.go`)
Implement `ConfigProvider` using Viper:
- Load `config/default.yaml` as baseline
- Merge environment-specific YAML files (development/production)
- Apply environment variable overrides (uppercase with underscores)
- Support nested configuration keys (dot notation)
- Provide all type-safe getters
- Support unmarshaling into structs
- Handle configuration validation errors
### 3. Configuration Loader (`internal/config/loader.go`)
- `LoadConfig(env string) (ConfigProvider, error)` function
- Environment detection (development/production)
- Configuration file discovery
- Validation of required configuration keys
- Error handling and reporting
### 4. Configuration Files
Create configuration files:
**`config/default.yaml`** - Base configuration:
```yaml
environment: development
server:
port: 8080
host: "0.0.0.0"
read_timeout: 30s
write_timeout: 30s
database:
driver: "postgres"
dsn: ""
max_connections: 25
max_idle_connections: 5
logging:
level: "info"
format: "json"
output: "stdout"
```
**`config/development.yaml`** - Development overrides:
```yaml
environment: development
logging:
level: "debug"
format: "console"
```
**`config/production.yaml`** - Production overrides:
```yaml
environment: production
logging:
level: "warn"
format: "json"
```
### 5. DI Integration
- Provider function for ConfigProvider
- Register in DI container
- Make configurable via FX
## Implementation Steps
1. **Install Dependencies**
```bash
go get github.com/spf13/viper@v1.18.0
go get github.com/spf13/cobra@v1.8.0
```
2. **Create Configuration Interface**
- Define `ConfigProvider` interface in `pkg/config/config.go`
- Add package documentation
- Export interface for use by modules
3. **Implement Viper Configuration**
- Create `internal/config/config.go`
- Implement all interface methods
- Handle configuration loading and merging
- Support nested keys and environment variables
4. **Create Configuration Loader**
- Implement `LoadConfig()` function
- Add environment detection logic
- Add validation logic
- Handle errors gracefully
5. **Create Configuration Files**
- Create `config/default.yaml` with base configuration
- Create `config/development.yaml` with dev overrides
- Create `config/production.yaml` with prod overrides
- Ensure proper YAML structure
6. **Integrate with DI**
- Create provider function
- Register in DI container
- Test injection
## Acceptance Criteria
- [ ] `ConfigProvider` interface is defined and documented
- [ ] Viper implementation loads YAML files successfully
- [ ] Environment variables override YAML values
- [ ] Type-safe getters work correctly (string, int, bool, etc.)
- [ ] Configuration can be unmarshaled into structs
- [ ] Nested keys work with dot notation
- [ ] Configuration system is injectable via DI container
- [ ] All modules can access configuration through interface
- [ ] Configuration validation works
- [ ] Error handling is comprehensive
## Related ADRs
- [ADR-0004: Configuration Management](../../adr/0004-configuration-management.md)
## Implementation Notes
- Use Viper's automatic environment variable support (uppercase with underscores)
- Support both single-level and nested configuration keys
- Consider adding configuration schema validation in future
- Environment variable format: `SERVER_PORT`, `DATABASE_DSN`, etc.
- Configuration files should be in YAML for readability
- Support secret manager integration placeholder (Epic 6)
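The override mapping comes almost for free from Viper; a sketch of the relevant lines:
```go
v := viper.New()
v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
v.AutomaticEnv()
// Now SERVER_PORT overrides server.port and DATABASE_DSN overrides database.dsn.
```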
## Testing
```bash
# Test configuration loading
go test ./internal/config/...
# Test type-safe getters
go test ./internal/config/... -run TestGetters
# Test environment variable overrides
export SERVER_PORT=9090
go run cmd/platform/main.go
```
## Files to Create/Modify
- `pkg/config/config.go` - Configuration interface
- `internal/config/config.go` - Viper implementation
- `internal/config/loader.go` - Configuration loader
- `config/default.yaml` - Base configuration
- `config/development.yaml` - Development configuration
- `config/production.yaml` - Production configuration
- `internal/di/providers.go` - Add config provider

View File

@@ -0,0 +1,136 @@
# Story 0.3: Structured Logging System
## Metadata
- **Story ID**: 0.3
- **Title**: Structured Logging System
- **Epic**: 0 - Project Setup & Foundation
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 4-6 hours
- **Dependencies**: 0.1, 0.2
## Goal
Implement a production-ready logging system with structured JSON output, request correlation, and configurable log levels that can be used by all modules.
## Description
This story implements a complete logging system using Zap that provides structured logging, request correlation via request IDs, and context-aware logging. The system must support both development (human-readable) and production (JSON) formats.
## Deliverables
### 1. Logger Interface (`pkg/logger/logger.go`)
Define `Logger` interface with:
- `Debug(msg string, fields ...Field)` - Debug level logging
- `Info(msg string, fields ...Field)` - Info level logging
- `Warn(msg string, fields ...Field)` - Warning level logging
- `Error(msg string, fields ...Field)` - Error level logging
- `With(fields ...Field) Logger` - Create child logger with fields
- `WithContext(ctx context.Context) Logger` - Create logger with context fields
- `Field` type for structured fields
- Package-level convenience functions
### 2. Zap Implementation (`internal/logger/zap_logger.go`)
Implement `Logger` interface using Zap:
- Structured JSON logging for production mode
- Human-readable console logging for development mode
- Configurable log levels (debug, info, warn, error)
- Request-scoped fields support
- Context-aware logging (extract request ID, user ID from context)
- Field mapping to Zap fields
- Error stack trace support
### 3. Request ID Middleware (`internal/logger/middleware.go`)
Gin middleware for request correlation:
- Generate unique request ID per HTTP request
- Add request ID to request context
- Add request ID to all logs within request context
- Return request ID in response headers (`X-Request-ID`)
- Support for existing request IDs in headers
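A sketch of that middleware with Gin and `github.com/google/uuid` (the context key handling is simplified):
```go
import (
	"github.com/gin-gonic/gin"
	"github.com/google/uuid"
)

func RequestID() gin.HandlerFunc {
	return func(c *gin.Context) {
		id := c.GetHeader("X-Request-ID") // honor an existing request ID
		if id == "" {
			id = uuid.NewString()
		}
		c.Writer.Header().Set("X-Request-ID", id)
		c.Set("request_id", id) // real code would also stash it in the request context for the logger
		c.Next()
	}
}
```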
### 4. Global Logger Export (`pkg/logger/global.go`)
- Export default logger instance
- Package-level convenience functions
- Thread-safe logger access
### 5. DI Integration
- Provider function for Logger
- Register in DI container
- Make configurable via FX
## Implementation Steps
1. **Install Dependencies**
```bash
go get go.uber.org/zap@v1.26.0
```
2. **Create Logger Interface**
- Define `Logger` interface in `pkg/logger/logger.go`
- Define `Field` type for structured fields
- Add package documentation
3. **Implement Zap Logger**
- Create `internal/logger/zap_logger.go`
- Implement all interface methods
- Support both JSON and console encoders
- Handle log levels and field mapping
4. **Create Request ID Middleware**
- Create `internal/logger/middleware.go`
- Implement Gin middleware
- Generate and propagate request IDs
- Add to response headers
5. **Add Global Logger**
- Create `pkg/logger/global.go`
- Export default logger
- Add convenience functions
6. **Integrate with DI**
- Create provider function
- Register in DI container
- Test injection
## Acceptance Criteria
- [ ] `Logger` interface is defined and documented
- [ ] Zap implementation supports JSON and console formats
- [ ] Log levels are configurable and respected
- [ ] Request IDs are generated and included in all logs
- [ ] Request ID middleware works with Gin
- [ ] Context-aware logging extracts request ID and user ID
- [ ] Logger can be injected via DI container
- [ ] All modules can use logger through interface
- [ ] Request correlation works across service boundaries
- [ ] Structured fields work correctly
## Related ADRs
- [ADR-0005: Logging Framework](../../adr/0005-logging-framework.md)
- [ADR-0012: Logger Interface Design](../../adr/0012-logger-interface-design.md)
## Implementation Notes
- Use Zap's production and development presets
- Request IDs should be UUIDs or similar unique identifiers
- Context should be propagated through all service calls
- Log levels should be configurable via configuration system
- Consider adding log sampling for high-volume production
- Support for log rotation and file output (future enhancement)
## Testing
```bash
# Test logger interface
go test ./pkg/logger/...
# Test Zap implementation
go test ./internal/logger/...
# Test request ID middleware
go test ./internal/logger/... -run TestRequestIDMiddleware
```
## Files to Create/Modify
- `pkg/logger/logger.go` - Logger interface
- `pkg/logger/global.go` - Global logger export
- `internal/logger/zap_logger.go` - Zap implementation
- `internal/logger/middleware.go` - Request ID middleware
- `internal/di/providers.go` - Add logger provider
- `config/default.yaml` - Add logging configuration

View File

@@ -0,0 +1,126 @@
# Story 0.4: CI/CD Pipeline and Development Tooling
## Metadata
- **Story ID**: 0.4
- **Title**: CI/CD Pipeline and Development Tooling
- **Epic**: 0 - Project Setup & Foundation
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 3-4 hours
- **Dependencies**: 0.1
## Goal
Establish automated testing, linting, and build processes with a developer-friendly Makefile that enables efficient development workflow.
## Description
This story sets up the complete CI/CD pipeline using GitHub Actions and provides a comprehensive Makefile with common development commands. The pipeline should run on every push and pull request, ensuring code quality and buildability.
## Deliverables
### 1. GitHub Actions Workflow (`.github/workflows/ci.yml`)
Complete CI pipeline with:
- Go 1.24 setup
- Go module caching for faster builds
- Linting with golangci-lint or staticcheck
- Unit tests execution
- Test coverage reporting
- Binary build validation
- Code formatting validation (gofmt)
- Artifact uploads for build outputs
### 2. Comprehensive Makefile
Developer-friendly Makefile with commands:
- `make test` - Run all tests
- `make test-coverage` - Run tests with coverage report
- `make lint` - Run linters
- `make fmt` - Format code
- `make fmt-check` - Check code formatting
- `make build` - Build platform binary
- `make clean` - Clean build artifacts
- `make docker-build` - Build Docker image
- `make docker-run` - Run Docker container
- `make generate` - Run code generation
- `make verify` - Verify code (fmt, lint, test)
- `make help` - Show available commands
### 3. Linter Configuration
- `.golangci.yml` or similar linter config
- Configured rules and exclusions
- Reasonable defaults for Go projects
### 4. Pre-commit Hooks (Optional)
- Git hooks for formatting and linting
- Prevent committing unformatted code
## Implementation Steps
1. **Create GitHub Actions Workflow**
- Create `.github/workflows/ci.yml`
- Set up Go environment
- Configure module caching
- Add linting step
- Add testing step
- Add build step
- Add artifact uploads
2. **Create Makefile**
- Define common variables (GO, BINARY_NAME, etc.)
- Add test target
- Add lint target
- Add build target
- Add format targets
- Add Docker targets
- Add help target
3. **Configure Linter**
- Install golangci-lint or configure staticcheck
- Create linter configuration file
- Set up reasonable rules
4. **Test CI Pipeline**
- Push changes to trigger CI
- Verify all steps pass
- Check artifact uploads
## Acceptance Criteria
- [ ] CI pipeline runs on every push and PR
- [ ] All linting checks pass
- [ ] Tests run successfully (even if empty initially)
- [ ] Binary builds successfully
- [ ] Docker image builds successfully
- [ ] Makefile commands work as expected
- [ ] CI pipeline fails fast on errors
- [ ] Code formatting is validated
- [ ] Test coverage is reported
- [ ] Artifacts are uploaded correctly
## Related ADRs
- [ADR-0010: CI/CD Platform](../../adr/0010-ci-cd-platform.md)
## Implementation Notes
- Use Go 1.24 in CI to match project requirements
- Cache Go modules to speed up CI runs
- Use golangci-lint for comprehensive linting
- Set up test coverage threshold (e.g., 80%)
- Make sure CI fails on any error
- Consider adding security scanning (gosec) in future
- Docker builds should use multi-stage builds
## Testing
```bash
# Test Makefile commands
make test
make lint
make build
make clean
# Test CI locally (using act or similar)
act push
```
## Files to Create/Modify
- `.github/workflows/ci.yml` - GitHub Actions workflow
- `Makefile` - Development commands
- `.golangci.yml` - Linter configuration (optional)
- `.git/hooks/pre-commit` - Pre-commit hooks (optional)

View File

@@ -0,0 +1,122 @@
# Story 0.5: Dependency Injection and Application Bootstrap
## Metadata
- **Story ID**: 0.5
- **Title**: Dependency Injection and Application Bootstrap
- **Epic**: 0 - Project Setup & Foundation
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 4-5 hours
- **Dependencies**: 0.1, 0.2, 0.3
## Goal
Set up dependency injection container using Uber FX and create the application entry point that initializes the platform with proper lifecycle management.
## Description
This story implements the dependency injection system using Uber FX and creates the main application entry point. The DI container will manage service lifecycle, dependencies, and provide a clean way to wire services together.
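As a rough sketch of the bootstrap shape (provider wiring is elided; `registerLifecycle` is a hypothetical name used only to show the hook pattern):
```go
// cmd/platform/main.go (sketch; real providers come from internal/di)
package main

import (
	"context"

	"go.uber.org/fx"
)

func main() {
	app := fx.New(
		// In the real entry point, the DI providers would be listed here.
		fx.Invoke(registerLifecycle),
	)
	// Run blocks until SIGINT/SIGTERM, then runs OnStop hooks (graceful shutdown).
	app.Run()
}

// registerLifecycle shows the OnStart/OnStop hook shape used by FX.
func registerLifecycle(lc fx.Lifecycle) {
	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			// Start servers, open connections, etc.
			return nil
		},
		OnStop: func(ctx context.Context) error {
			// Drain connections, flush logs, etc.
			return nil
		},
	})
}
```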
## Deliverables
### 1. DI Container (`internal/di/container.go`)
FX-based dependency injection container:
- Initialize FX container
- Register Config and Logger providers
- Basic lifecycle hooks (OnStart, OnStop)
- Support service overrides for testing
- Graceful shutdown handling
### 2. DI Providers (`internal/di/providers.go`)
Provider functions for core services:
- `ProvideConfig() fx.Option` - Configuration provider
- `ProvideLogger() fx.Option` - Logger provider
- Provider functions return FX options for easy composition
### 3. Application Entry Point (`cmd/platform/main.go`)
Main application bootstrap:
- Load configuration
- Initialize DI container with core services
- Set up basic application lifecycle
- Start minimal HTTP server (placeholder for Epic 1)
- Handle graceful shutdown (SIGINT, SIGTERM)
- Proper error handling and logging
### 4. Core Module (`internal/di/core_module.go`)
Optional: Export core module as FX option:
- `CoreModule() fx.Option` - Provides all core services
- Easy to compose with future modules
## Implementation Steps
1. **Install Dependencies**
```bash
go get go.uber.org/fx@latest
```
2. **Create DI Container**
- Create `internal/di/container.go`
- Initialize FX app
- Set up lifecycle hooks
- Add graceful shutdown
3. **Create Provider Functions**
- Create `internal/di/providers.go`
- Implement `ProvideConfig()` function
- Implement `ProvideLogger()` function
- Return FX options
4. **Create Application Entry Point**
- Create `cmd/platform/main.go`
- Load configuration
- Initialize FX app with providers
- Set up signal handling
- Start minimal server (placeholder)
- Handle shutdown gracefully
5. **Test Application**
- Verify application starts
- Verify graceful shutdown works
- Test service injection
## Acceptance Criteria
- [ ] DI container initializes successfully
- [ ] Config and Logger are provided via DI
- [ ] Application starts and runs
- [ ] Application shuts down gracefully on signals
- [ ] Lifecycle hooks work correctly
- [ ] Services can be overridden for testing
- [ ] Application compiles and runs successfully
- [ ] Error handling is comprehensive
- [ ] Logging works during startup/shutdown
## Related ADRs
- [ADR-0003: Dependency Injection Framework](../../adr/0003-dependency-injection-framework.md)
## Implementation Notes
- Use FX for dependency injection and lifecycle management
- Support graceful shutdown with context cancellation
- Handle SIGINT and SIGTERM signals
- Log startup and shutdown events
- Make services easily testable via interfaces
- Consider adding health check endpoint in future
- Support for service overrides is important for testing
## Testing
```bash
# Test application startup
go run cmd/platform/main.go
# Test graceful shutdown
# Start app, then send SIGTERM
kill -TERM <pid>
# Test DI container
go test ./internal/di/...
```
## Files to Create/Modify
- `internal/di/container.go` - DI container
- `internal/di/providers.go` - Provider functions
- `internal/di/core_module.go` - Core module (optional)
- `cmd/platform/main.go` - Application entry point
- `go.mod` - Add FX dependency

View File

@@ -0,0 +1,46 @@
# Epic 0: Project Setup & Foundation
## Overview
Initialize repository structure with proper Go project layout, implement configuration management system, establish structured logging system, set up CI/CD pipeline and development tooling, and bootstrap dependency injection and application entry point.
## Stories
### 0.1 Project Initialization and Repository Structure
- [Story: 0.1 - Project Initialization](./0.1-project-initialization.md)
- **Goal:** Establish a properly structured Go project with all necessary directories, configuration files, and documentation.
- **Deliverables:** Go module initialization, complete directory structure, .gitignore, comprehensive README.md
### 0.2 Configuration Management System
- [Story: 0.2 - Configuration Management System](./0.2-configuration-management-system.md)
- **Goal:** Implement a flexible configuration system that loads settings from YAML files, environment variables, and supports type-safe access.
- **Deliverables:** ConfigProvider interface, Viper implementation, configuration files, DI integration
### 0.3 Structured Logging System
- [Story: 0.3 - Structured Logging System](./0.3-structured-logging-system.md)
- **Goal:** Implement a production-ready logging system with structured JSON output, request correlation, and configurable log levels.
- **Deliverables:** Logger interface, Zap implementation, request ID middleware, context-aware logging
### 0.4 CI/CD Pipeline and Development Tooling
- [Story: 0.4 - CI/CD Pipeline and Development Tooling](./0.4-cicd-pipeline.md)
- **Goal:** Establish automated testing, linting, and build processes with a developer-friendly Makefile.
- **Deliverables:** GitHub Actions workflow, comprehensive Makefile, build automation
### 0.5 Dependency Injection and Application Bootstrap
- [Story: 0.5 - Dependency Injection and Application Bootstrap](./0.5-di-and-bootstrap.md)
- **Goal:** Set up dependency injection container using Uber FX and create the application entry point that initializes the platform.
- **Deliverables:** DI container, FX providers, application entry point, lifecycle management
## Deliverables Checklist
- [ ] Repository structure in place
- [ ] Configuration system loads YAML files and env vars
- [ ] Structured logging works
- [ ] CI pipeline runs linting and builds binary
- [ ] Basic DI container initialized
## Acceptance Criteria
- `go build ./cmd/platform` succeeds
- `go test ./...` runs (even if tests are empty)
- CI pipeline passes on empty commit
- Config loads from `config/default.yaml`
- Logger can be injected and used
- Application starts and shuts down gracefully

View File

@@ -0,0 +1,95 @@
# Story 1.1: Enhanced Dependency Injection Container
## Metadata
- **Story ID**: 1.1
- **Title**: Enhanced Dependency Injection Container
- **Epic**: 1 - Core Kernel & Infrastructure
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 3-4 hours
- **Dependencies**: 0.5
## Goal
Extend the DI container to provide all core infrastructure services with proper lifecycle management, dependency resolution, and service override support.
## Description
This story extends the basic DI container to support all core services including database, health checks, metrics, and error bus. The container must handle service initialization order, lifecycle management, and provide a clean way to override services for testing.
## Deliverables
### 1. Extended DI Container (`internal/di/container.go`)
- Registration of all core services
- Lifecycle management via FX
- Service override support for testing
- Dependency resolution
- Error handling during initialization
### 2. Provider Functions (`internal/di/providers.go`)
Complete provider functions for all core services:
- `ProvideConfig() fx.Option` - Configuration provider
- `ProvideLogger() fx.Option` - Logger provider
- `ProvideDatabase() fx.Option` - Ent database client provider
- `ProvideHealthCheckers() fx.Option` - Health check registry provider
- `ProvideMetrics() fx.Option` - Prometheus metrics registry provider
- `ProvideErrorBus() fx.Option` - Error bus provider
### 3. Core Module (`internal/di/core_module.go`)
- Export `CoreModule() fx.Option` that provides all core services
- Easy composition with future modules
- Single import point for core services
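A sketch of the `CoreModule` export, assuming each core service has a plain constructor; the stub `Config` and `Logger` types stand in for the real services so the snippet compiles on its own:
```go
// internal/di/core_module.go (sketch)
package di

import "go.uber.org/fx"

// Stub types and constructors stand in for the real core services (assumptions).
type Config struct{}
type Logger struct{}

func newConfig() *Config            { return &Config{} }
func newLogger(cfg *Config) *Logger { return &Logger{} }

// CoreModule bundles every core provider into a single importable option.
func CoreModule() fx.Option {
	return fx.Options(
		fx.Provide(newConfig),
		fx.Provide(newLogger),
		// Database, health, metrics, and error-bus providers are added the same way.
	)
}
```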
## Implementation Steps
1. **Extend Container**
- Update `internal/di/container.go`
- Add support for all core services
- Implement lifecycle hooks
2. **Create Provider Functions**
- Create `internal/di/providers.go`
- Implement all provider functions
- Handle dependencies correctly
3. **Create Core Module**
- Create `internal/di/core_module.go`
- Export CoreModule function
- Document usage
4. **Test Integration**
- Verify all services are provided
- Test service overrides
- Test lifecycle hooks
## Acceptance Criteria
- [ ] All core services are provided via DI container
- [ ] Services are initialized in correct dependency order
- [ ] Lifecycle hooks work for all services
- [ ] Services can be overridden for testing
- [ ] DI container compiles without errors
- [ ] CoreModule can be imported and used
- [ ] Error handling works during initialization
## Related ADRs
- [ADR-0003: Dependency Injection Framework](../../adr/0003-dependency-injection-framework.md)
## Implementation Notes
- Use FX's dependency injection features
- Ensure proper initialization order
- Support service overrides via FX options
- Handle errors gracefully during startup
- Document provider functions
## Testing
```bash
# Test DI container
go test ./internal/di/...
# Test service injection
go run cmd/platform/main.go
```
## Files to Create/Modify
- `internal/di/container.go` - Extended container
- `internal/di/providers.go` - Provider functions
- `internal/di/core_module.go` - Core module export

View File

@@ -0,0 +1,144 @@
# Story 1.2: Database Layer with Ent ORM
## Metadata
- **Story ID**: 1.2
- **Title**: Database Layer with Ent ORM
- **Epic**: 1 - Core Kernel & Infrastructure
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 6-8 hours
- **Dependencies**: 1.1
## Goal
Set up a complete database layer using Ent ORM with core domain entities, migrations, and connection management.
## Description
This story implements the complete database layer using Ent ORM. It includes defining core domain entities (User, Role, Permission, AuditLog), setting up migrations, configuring connection pooling, and creating a database client that integrates with the DI container.
## Deliverables
### 1. Ent Schema Initialization
- Initialize Ent schema in `internal/ent/`
- Set up code generation
### 2. Core Domain Entities (`internal/ent/schema/`)
Define core entities:
- **User** (`user.go`): ID, email, password_hash, verified, created_at, updated_at
- **Role** (`role.go`): ID, name, description, created_at
- **Permission** (`permission.go`): ID, name (format: "module.resource.action")
- **AuditLog** (`audit_log.go`): ID, actor_id, action, target_id, metadata (JSON), timestamp
- **Relationships**:
- `role_permissions.go` - Many-to-many between Role and Permission
- `user_roles.go` - Many-to-many between User and Role
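A sketch of the `User` schema from the list above; the exact field options (defaults, sensitivity, indexes) are assumptions:
```go
// internal/ent/schema/user.go (sketch; edges to Role/AuditLog go in Edges(), omitted here)
package schema

import (
	"time"

	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
	ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("email").Unique().NotEmpty(),
		field.String("password_hash").Sensitive(), // excluded from String()/JSON output
		field.Bool("verified").Default(false),
		field.Time("created_at").Default(time.Now).Immutable(),
		field.Time("updated_at").Default(time.Now).UpdateDefault(time.Now),
	}
}
```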
### 3. Generated Ent Code
- Run `go generate ./internal/ent`
- Verify generated code compiles
- Type-safe database operations
### 4. Database Client (`internal/infra/database/client.go`)
- `NewEntClient(dsn string) (*ent.Client, error)` function
- Connection pooling configuration:
- Max connections
- Max idle connections
- Connection lifetime
- Idle timeout
- Migration runner wrapper
- Database health check integration
- Graceful connection closing
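A pooling sketch for `NewEntClient`, assuming the pgx driver and a generated package at `goplt/internal/ent` (both assumptions); the pool values are placeholders that would come from the configuration system:
```go
// internal/infra/database/client.go (sketch)
package database

import (
	"database/sql"
	"time"

	"entgo.io/ent/dialect"
	entsql "entgo.io/ent/dialect/sql"
	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver

	"goplt/internal/ent" // assumed path of the generated Ent package
)

// NewEntClient opens a pooled Postgres connection and wraps it in an Ent client.
func NewEntClient(dsn string) (*ent.Client, error) {
	db, err := sql.Open("pgx", dsn)
	if err != nil {
		return nil, err
	}
	// Placeholder pool settings; load these from config in practice.
	db.SetMaxOpenConns(25)
	db.SetMaxIdleConns(5)
	db.SetConnMaxLifetime(time.Hour)
	db.SetConnMaxIdleTime(5 * time.Minute)
	drv := entsql.OpenDB(dialect.Postgres, db)
	return ent.NewClient(ent.Driver(drv)), nil
}
```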
### 5. Database Configuration
- Add database config to `config/default.yaml`:
- Connection string (DSN)
- Connection pool settings
- Migration settings
- Driver configuration
### 6. DI Integration
- Provider function for database client
- Register in DI container
- Lifecycle management (close on shutdown)
## Implementation Steps
1. **Install Ent**
```bash
go get entgo.io/ent/cmd/ent
```
2. **Initialize Ent Schema**
```bash
go run entgo.io/ent/cmd/ent init User Role Permission AuditLog
```
3. **Define Core Entities**
- Create schema files for each entity
- Define fields and relationships
- Add indexes where needed
4. **Generate Ent Code**
```bash
go generate ./internal/ent
```
5. **Create Database Client**
- Create `internal/infra/database/client.go`
- Implement connection management
- Add migration runner
- Add health check
6. **Add Configuration**
- Update `config/default.yaml`
- Add database configuration section
7. **Integrate with DI**
- Create provider function
- Register in container
- Test connection
## Acceptance Criteria
- [ ] Ent schema compiles and generates code successfully
- [ ] Database client connects to PostgreSQL
- [ ] Core entities can be created and queried
- [ ] Migrations run successfully on startup
- [ ] Connection pooling is configured correctly
- [ ] Database health check works
- [ ] All entities have proper indexes and relationships
- [ ] Database client is injectable via DI
- [ ] Connections are closed gracefully on shutdown
## Related ADRs
- [ADR-0013: Database ORM](../../adr/0013-database-orm.md)
## Implementation Notes
- Use Ent for type-safe database operations
- Configure connection pooling appropriately
- Run migrations on application startup
- Add proper indexes for performance
- Handle database connection errors gracefully
- Support for database migrations in future epics
## Testing
```bash
# Test Ent schema generation
go generate ./internal/ent
go build ./internal/ent
# Test database connection
go test ./internal/infra/database/...
# Test migrations
go run cmd/platform/main.go
```
## Files to Create/Modify
- `internal/ent/schema/user.go` - User entity
- `internal/ent/schema/role.go` - Role entity
- `internal/ent/schema/permission.go` - Permission entity
- `internal/ent/schema/audit_log.go` - AuditLog entity
- `internal/ent/schema/role_permissions.go` - Relationship
- `internal/ent/schema/user_roles.go` - Relationship
- `internal/infra/database/client.go` - Database client
- `internal/di/providers.go` - Add database provider
- `config/default.yaml` - Add database config

View File

@@ -0,0 +1,126 @@
# Story 1.3: Health Monitoring and Metrics System
## Metadata
- **Story ID**: 1.3
- **Title**: Health Monitoring and Metrics System
- **Epic**: 1 - Core Kernel & Infrastructure
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5-6 hours
- **Dependencies**: 1.1, 1.2
## Goal
Implement comprehensive health checks and Prometheus metrics for monitoring platform health and performance.
## Description
This story creates a complete health monitoring system with liveness and readiness probes, and a comprehensive Prometheus metrics system for tracking HTTP requests, database queries, and errors.
## Deliverables
### 1. Health Check System
- **HealthChecker Interface** (`pkg/health/health.go`):
- `HealthChecker` interface with `Check(ctx context.Context) error` method
- Health status types
- **Health Registry** (`internal/health/registry.go`):
- Thread-safe registry of health checkers
- Register multiple health checkers
- Aggregate health status
- `GET /healthz` endpoint (liveness probe)
- `GET /ready` endpoint (readiness probe with database check)
- Individual component health checks
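A sketch collapsing the interface (`pkg/health`) and registry (`internal/health`) into one file for brevity; the aggregate `Check` shape is an assumption:
```go
// Sketch of the HealthChecker interface and a thread-safe registry.
package health

import (
	"context"
	"sync"
)

// HealthChecker reports whether a single component is healthy.
type HealthChecker interface {
	Check(ctx context.Context) error
}

// Registry is a thread-safe collection of named health checkers.
type Registry struct {
	mu       sync.RWMutex
	checkers map[string]HealthChecker
}

func NewRegistry() *Registry {
	return &Registry{checkers: make(map[string]HealthChecker)}
}

func (r *Registry) Register(name string, c HealthChecker) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.checkers[name] = c
}

// Check runs every registered checker and returns per-component results;
// the /ready handler would map any non-nil error to a 503.
func (r *Registry) Check(ctx context.Context) map[string]error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	out := make(map[string]error, len(r.checkers))
	for name, c := range r.checkers {
		out[name] = c.Check(ctx)
	}
	return out
}
```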
### 2. Prometheus Metrics System
- **Metrics Registry** (`internal/metrics/metrics.go`):
- Prometheus registry setup
- HTTP request duration histogram
- HTTP request counter (by method, path, status code)
- Database query duration histogram (via Ent interceptor)
- Error counter (by type)
- Custom metrics support
- **Metrics Endpoint**:
- `GET /metrics` endpoint (Prometheus format)
- Proper content type headers
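A sketch of the two core HTTP collectors; metric names follow Prometheus conventions but are assumptions:
```go
// internal/metrics/metrics.go (sketch)
package metrics

import "github.com/prometheus/client_golang/prometheus"

// Metrics bundles the registry with the core HTTP collectors.
type Metrics struct {
	Registry *prometheus.Registry
	Duration *prometheus.HistogramVec
	Requests *prometheus.CounterVec
}

// New builds a fresh registry with the request histogram and counter registered.
func New() *Metrics {
	m := &Metrics{
		Registry: prometheus.NewRegistry(),
		Duration: prometheus.NewHistogramVec(prometheus.HistogramOpts{
			Name:    "http_request_duration_seconds",
			Help:    "HTTP request latency in seconds.",
			Buckets: prometheus.DefBuckets,
		}, []string{"method", "path", "status"}),
		Requests: prometheus.NewCounterVec(prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "HTTP requests by method, path, and status code.",
		}, []string{"method", "path", "status"}),
	}
	m.Registry.MustRegister(m.Duration, m.Requests)
	return m
}
```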
### 3. Database Health Check
- Database connectivity check
- Connection pool status
- Query execution test
### 4. Integration
- Integration with HTTP server
- Integration with DI container
- Middleware for automatic metrics collection
## Implementation Steps
1. **Install Dependencies**
```bash
go get github.com/prometheus/client_golang/prometheus
```
2. **Create Health Check Interface**
- Create `pkg/health/health.go`
- Define HealthChecker interface
3. **Implement Health Registry**
- Create `internal/health/registry.go`
- Implement registry and endpoints
4. **Create Metrics System**
- Create `internal/metrics/metrics.go`
- Define all metrics
- Create registry
5. **Add Database Health Check**
- Implement database health checker
- Register with health registry
6. **Integrate with HTTP Server**
- Add health endpoints
- Add metrics endpoint
- Add metrics middleware
7. **Integrate with DI**
- Create provider functions
- Register in container
## Acceptance Criteria
- [ ] `/healthz` returns 200 when service is alive
- [ ] `/ready` checks database connectivity and returns appropriate status
- [ ] `/metrics` exposes Prometheus metrics in correct format
- [ ] All HTTP requests are measured
- [ ] Database queries are instrumented
- [ ] Metrics are registered in DI container
- [ ] Health checks can be extended by modules
- [ ] Metrics follow Prometheus naming conventions
## Related ADRs
- [ADR-0014: Health Check Implementation](../../adr/0014-health-check-implementation.md)
## Implementation Notes
- Use Prometheus client library
- Follow Prometheus naming conventions
- Health checks should be fast (< 1 second)
- Metrics should have appropriate labels
- Consider adding custom business metrics in future
## Testing
```bash
# Test health endpoints
curl http://localhost:8080/healthz
curl http://localhost:8080/ready
# Test metrics endpoint
curl http://localhost:8080/metrics
# Test metrics collection
go test ./internal/metrics/...
```
## Files to Create/Modify
- `pkg/health/health.go` - Health checker interface
- `internal/health/registry.go` - Health registry
- `internal/metrics/metrics.go` - Metrics system
- `internal/server/server.go` - Add endpoints
- `internal/di/providers.go` - Add providers

View File

@@ -0,0 +1,103 @@
# Story 1.4: Error Handling and Error Bus
## Metadata
- **Story ID**: 1.4
- **Title**: Error Handling and Error Bus
- **Epic**: 1 - Core Kernel & Infrastructure
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 4-5 hours
- **Dependencies**: 1.1, 1.3
## Goal
Implement centralized error handling with an error bus that captures, logs, and optionally reports all application errors.
## Description
This story creates a complete error handling system with an error bus that captures all errors, logs them with context, and provides a foundation for future error reporting integrations (like Sentry).
## Deliverables
### 1. Error Bus Interface (`pkg/errorbus/errorbus.go`)
- `ErrorPublisher` interface with `Publish(err error)` method
- Error context support
- Error categorization
### 2. Channel-Based Error Bus (`internal/errorbus/channel_bus.go`)
- Buffered channel for error publishing
- Background goroutine consumes errors
- Logs all errors with context (request ID, user ID, etc.)
- Error aggregation
- Optional: Sentry integration placeholder (Epic 6)
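A sketch of the buffered-channel bus; the drop-on-full policy and the standard-library `log` fallback are assumptions (the real implementation would log through the platform logger with request context):
```go
// internal/errorbus/channel_bus.go (sketch)
package errorbus

import "log"

// Bus publishes errors without blocking the caller.
type Bus struct {
	ch chan error
}

func New(buffer int) *Bus {
	b := &Bus{ch: make(chan error, buffer)}
	go b.consume() // background consumer logs everything published
	return b
}

// Publish enqueues err; if the buffer is full the error is dropped rather
// than blocking the request path.
func (b *Bus) Publish(err error) {
	select {
	case b.ch <- err:
	default:
	}
}

func (b *Bus) consume() {
	for err := range b.ch {
		log.Printf("error bus: %v", err)
	}
}
```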
### 3. Panic Recovery Middleware
- Recovers from panics in HTTP handlers
- Publishes panics to error bus
- Returns appropriate HTTP error responses (500)
- Preserves error context
### 4. Integration
- Integration with DI container
- Integration with HTTP middleware stack
- Integration with logger
## Implementation Steps
1. **Create Error Bus Interface**
- Create `pkg/errorbus/errorbus.go`
- Define ErrorPublisher interface
2. **Implement Channel-Based Error Bus**
- Create `internal/errorbus/channel_bus.go`
- Implement buffered channel
- Implement background consumer
- Add error logging
3. **Create Panic Recovery Middleware**
- Create middleware for Gin
- Recover from panics
- Publish to error bus
- Return error responses
4. **Integrate with DI**
- Create provider function
- Register in container
5. **Integrate with HTTP Server**
- Add panic recovery middleware
- Test error handling
## Acceptance Criteria
- [ ] Errors are captured and logged via error bus
- [ ] Panics are recovered and logged
- [ ] HTTP handlers return proper error responses
- [ ] Error bus is injectable via DI
- [ ] Error context (request ID, user ID) is preserved
- [ ] Background error consumer works correctly
- [ ] Error bus doesn't block request handling
## Related ADRs
- [ADR-0015: Error Bus Implementation](../../adr/0015-error-bus-implementation.md)
- [ADR-0026: Error Reporting Service](../../adr/0026-error-reporting-service.md)
## Implementation Notes
- Use buffered channels to prevent blocking
- Background goroutine should handle errors asynchronously
- Preserve error context (stack traces, request IDs)
- Consider error rate limiting in future
- Placeholder for Sentry integration in Epic 6
## Testing
```bash
# Test error bus
go test ./internal/errorbus/...
# Test panic recovery
# Trigger panic in handler and verify recovery
```
## Files to Create/Modify
- `pkg/errorbus/errorbus.go` - Error bus interface
- `internal/errorbus/channel_bus.go` - Error bus implementation
- `internal/server/middleware.go` - Panic recovery middleware
- `internal/di/providers.go` - Add error bus provider

View File

@@ -0,0 +1,122 @@
# Story 1.5: HTTP Server Foundation with Middleware Stack
## Metadata
- **Story ID**: 1.5
- **Title**: HTTP Server Foundation with Middleware Stack
- **Epic**: 1 - Core Kernel & Infrastructure
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 6-8 hours
- **Dependencies**: 1.1, 1.3, 1.4
## Goal
Create a production-ready HTTP server with comprehensive middleware for security, observability, and error handling.
## Description
This story implements a complete HTTP server using Gin with a comprehensive middleware stack including request ID generation, structured logging, panic recovery, metrics collection, CORS, and graceful shutdown.
## Deliverables
### 1. HTTP Server (`internal/server/server.go`)
- Gin router initialization
- Server configuration (port, host, timeouts)
- Graceful shutdown handling
### 2. Comprehensive Middleware Stack
- **Request ID Generator**: Unique ID per request
- **Structured Logging**: Log all requests with context
- **Panic Recovery**: Recover panics → error bus
- **Prometheus Metrics**: Collect request metrics
- **CORS Support**: Configurable CORS headers
- **Request Timeout**: Handle request timeouts
- **Response Compression**: Gzip compression for responses
### 3. Core Route Registration
- `GET /healthz` - Liveness probe
- `GET /ready` - Readiness probe
- `GET /metrics` - Prometheus metrics
### 4. FX Lifecycle Integration
- HTTP server starts on `OnStart` hook
- Graceful shutdown on `OnStop` hook (drains connections)
- Port configuration from config system
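A sketch of the FX hook wiring, assuming the `*http.Server` is built elsewhere and injected:
```go
// Sketch of starting/stopping the HTTP server via FX lifecycle hooks.
package server

import (
	"context"
	"net"
	"net/http"

	"go.uber.org/fx"
)

// Register starts the server on OnStart and drains it on OnStop.
func Register(lc fx.Lifecycle, srv *http.Server) {
	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			ln, err := net.Listen("tcp", srv.Addr)
			if err != nil {
				return err
			}
			go srv.Serve(ln) // serve in the background after the listener is bound
			return nil
		},
		OnStop: func(ctx context.Context) error {
			// Shutdown waits for in-flight requests up to ctx's deadline.
			return srv.Shutdown(ctx)
		},
	})
}
```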
### 5. Integration
- Integration with main application entry point
- Integration with all middleware systems
## Implementation Steps
1. **Install Dependencies**
```bash
go get github.com/gin-gonic/gin
```
2. **Create HTTP Server**
- Create `internal/server/server.go`
- Initialize Gin router
- Configure server settings
3. **Implement Middleware**
- Request ID middleware
- Logging middleware
- Panic recovery middleware
- Metrics middleware
- CORS middleware
- Timeout middleware
- Compression middleware
4. **Register Core Routes**
- Health endpoints
- Metrics endpoint
5. **Integrate with FX**
- Add lifecycle hooks
- Handle graceful shutdown
6. **Test Server**
- Verify server starts
- Test all endpoints
- Test graceful shutdown
## Acceptance Criteria
- [ ] HTTP server starts successfully
- [ ] All middleware executes in correct order
- [ ] Request IDs are generated and logged
- [ ] Metrics are collected for all requests
- [ ] Panics are recovered and handled
- [ ] Graceful shutdown works correctly
- [ ] Server is configurable via config system
- [ ] CORS is configurable per environment
- [ ] All core endpoints work correctly
## Related ADRs
- [ADR-0006: HTTP Framework](../../adr/0006-http-framework.md)
## Implementation Notes
- Use Gin for HTTP routing
- Middleware order is important
- Support graceful shutdown with connection draining
- CORS should be configurable per environment
- Consider adding rate limiting in future (Epic 6)
## Testing
```bash
# Test server startup
go run cmd/platform/main.go
# Test endpoints
curl http://localhost:8080/healthz
curl http://localhost:8080/ready
curl http://localhost:8080/metrics
# Test graceful shutdown
# Send SIGTERM and verify graceful shutdown
```
## Files to Create/Modify
- `internal/server/server.go` - HTTP server
- `internal/server/middleware.go` - Middleware functions
- `internal/di/providers.go` - Add server provider
- `config/default.yaml` - Add server configuration

View File

@@ -0,0 +1,117 @@
# Story 1.6: OpenTelemetry Distributed Tracing
## Metadata
- **Story ID**: 1.6
- **Title**: OpenTelemetry Distributed Tracing
- **Epic**: 1 - Core Kernel & Infrastructure
- **Status**: Pending
- **Priority**: Medium
- **Estimated Time**: 5-6 hours
- **Dependencies**: 1.1, 1.5
## Goal
Integrate OpenTelemetry for distributed tracing across the platform to enable observability in production.
## Description
This story implements OpenTelemetry tracing for HTTP requests and database queries, enabling distributed tracing across the platform. Traces will be exported to stdout in development and to an OTLP collector in production.
## Deliverables
### 1. OpenTelemetry Setup (`internal/observability/tracer.go`)
- TracerProvider initialization
- Export to stdout (development mode)
- Export to OTLP collector (production mode)
- Trace context propagation
- Resource attributes (service name, version, etc.)
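A development-mode sketch using the stdout exporter; the `service.name` value is an assumption, and the production path would swap in an `otlptracehttp` exporter behind the same `TracerProvider`:
```go
// internal/observability/tracer.go (sketch, development mode)
package observability

import (
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// InitTracer wires a TracerProvider that prints spans to stdout.
func InitTracer() (*sdktrace.TracerProvider, error) {
	exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		return nil, err
	}
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exp),
		sdktrace.WithResource(resource.NewSchemaless(
			attribute.String("service.name", "goplt"), // assumed service name
		)),
	)
	otel.SetTracerProvider(tp) // spans created via otel.Tracer(...) now export here
	return tp, nil
}
```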
### 2. HTTP Instrumentation Middleware
- Automatic span creation for HTTP requests
- Trace context propagation via headers
- Span attributes (method, path, status code, duration)
- Error recording in spans
### 3. Database Instrumentation
- Ent interceptor for database queries
- Query spans with timing and parameters
- Database operation attributes
### 4. Integration with Logger
- Include trace ID in logs
- Correlate logs with traces
- Span context in structured logs
### 5. Configuration
- Tracing configuration in config files
- Enable/disable tracing
- Export endpoint configuration
## Implementation Steps
1. **Install Dependencies**
```bash
go get go.opentelemetry.io/otel
go get go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
go get go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp
```
2. **Create Tracer Setup**
- Create `internal/observability/tracer.go`
- Initialize TracerProvider
- Set up exporters
3. **Add HTTP Instrumentation**
- Create HTTP middleware
- Instrument Gin router
- Add trace context propagation
4. **Add Database Instrumentation**
- Create Ent interceptor
- Instrument database queries
- Add query attributes
5. **Integrate with Logger**
- Extract trace ID from context
- Add trace ID to logs
6. **Add Configuration**
- Add tracing config
- Configure export endpoints
## Acceptance Criteria
- [ ] HTTP requests create OpenTelemetry spans
- [ ] Database queries are traced
- [ ] Trace context propagates across service boundaries
- [ ] Trace IDs are included in logs
- [ ] Traces export correctly to configured backend
- [ ] Tracing works in both development and production modes
- [ ] Tracing has minimal performance impact
- [ ] Spans have appropriate attributes
## Related ADRs
- [ADR-0016: OpenTelemetry Observability](../../adr/0016-opentelemetry-observability.md)
## Implementation Notes
- Use OpenTelemetry Go SDK
- Support both stdout and OTLP exporters
- Trace context should propagate via HTTP headers
- Consider sampling for high-volume production
- Minimize performance impact
## Testing
```bash
# Test tracing
go run cmd/platform/main.go
# Make requests and verify traces
curl http://localhost:8080/healthz
# Check trace export (stdout or OTLP)
```
## Files to Create/Modify
- `internal/observability/tracer.go` - Tracer setup
- `internal/server/middleware.go` - Add tracing middleware
- `internal/infra/database/client.go` - Add tracing interceptor
- `internal/logger/zap_logger.go` - Add trace ID to logs
- `config/default.yaml` - Add tracing configuration

View File

@@ -0,0 +1,114 @@
# Story 1.7: Service Client Interfaces
## Metadata
- **Story ID**: 1.7
- **Title**: Service Client Interfaces
- **Epic**: 1 - Core Kernel & Infrastructure
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 4-6 hours
- **Dependencies**: 1.1, 1.2, 2.1, 2.2
## Goal
Create service client interfaces for all core services to enable microservices communication. All inter-service communication will go through these interfaces.
## Description
This story implements the foundation for microservices architecture by creating service client interfaces for all core services. These interfaces will be implemented as gRPC clients (primary) or HTTP clients (fallback), ensuring all services communicate via network calls.
## Deliverables
### 1. Service Client Interfaces (`pkg/services/`)
Define service client interfaces for all core services:
- `IdentityServiceClient` - User and identity operations
- `AuthServiceClient` - Authentication operations
- `AuthzServiceClient` - Authorization operations
- `PermissionServiceClient` - Permission resolution
- `AuditServiceClient` - Audit logging
- `CacheServiceClient` - Cache operations (if needed)
- `EventBusClient` - Event publishing (already abstracted)
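A sketch of one client interface; the method set and the transport-level `User` struct are assumptions to be matched against the Identity service's real operations:
```go
// pkg/services/identity.go (sketch)
package services

import "context"

// User is a transport-level view of a platform user.
type User struct {
	ID    string
	Email string
}

// IdentityServiceClient is implemented by gRPC (primary) and HTTP (fallback)
// clients. Every call takes a context so timeouts and cancellation propagate
// across the network boundary.
type IdentityServiceClient interface {
	GetUser(ctx context.Context, id string) (*User, error)
	GetUserByEmail(ctx context.Context, email string) (*User, error)
}
```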
### 2. Service Client Factory (`internal/services/factory.go`)
Factory pattern for creating service clients:
- Create gRPC clients (primary)
- Create HTTP clients (fallback)
- Support service registry integration
- Handle client lifecycle and connection pooling
### 3. Configuration
- Service client configuration in `config/default.yaml`:
```yaml
services:
default_protocol: grpc # grpc, http
registry:
type: consul # consul, kubernetes, etcd
consul:
address: localhost:8500
```
### 4. DI Integration
- Provider functions for service clients
- Register in DI container
- Support service client injection
## Implementation Steps
1. **Define Service Client Interfaces**
- Create `pkg/services/identity.go`
- Create `pkg/services/auth.go`
- Create `pkg/services/authz.go`
- Define all interface methods
- Design for network calls (context, timeouts, errors)
2. **Create Service Factory**
- Create `internal/services/factory.go`
- Implement gRPC client creation
- Implement HTTP client creation (fallback)
- Support service registry integration
3. **Add Configuration**
- Add service configuration
- Support protocol selection (gRPC/HTTP)
- Service registry configuration
4. **Update Core Services**
- Services expose gRPC servers
- Services use service clients for inter-service calls
- No direct in-process calls between services
## Acceptance Criteria
- [ ] Service client interfaces are defined for all core services
- [ ] Service factory creates gRPC clients
- [ ] Service factory creates HTTP clients (fallback)
- [ ] Service clients are injectable via DI
- [ ] Configuration supports protocol selection
- [ ] Service clients are testable and mockable
- [ ] All inter-service communication goes through service clients
## Related ADRs
- [ADR-0029: Microservices Architecture](../../adr/0029-microservices-architecture.md)
- [ADR-0030: Service Communication Strategy](../../adr/0030-service-communication-strategy.md)
## Implementation Notes
- Interfaces should match existing service methods
- Use context for all operations
- Support cancellation and timeouts
- Design for network calls (retries, circuit breakers)
- gRPC will be implemented in Epic 5, but interfaces are defined here
## Testing
```bash
# Test service clients
go test ./internal/services/...
# Test service factory
go test ./internal/services/ -run Factory
```
## Files to Create/Modify
- `pkg/services/identity.go` - Identity service client interface
- `pkg/services/auth.go` - Auth service client interface
- `pkg/services/authz.go` - Authz service client interface
- `internal/services/factory.go` - Service client factory
- `internal/di/providers.go` - Add service client providers
- `config/default.yaml` - Add service configuration

View File

@@ -0,0 +1,58 @@
# Epic 1: Core Kernel & Infrastructure
## Overview
Extend DI container to support all core services, implement database layer with Ent ORM, build health monitoring and metrics system, create error handling and error bus, establish HTTP server with comprehensive middleware stack, and integrate OpenTelemetry for distributed tracing.
## Stories
### 1.1 Enhanced Dependency Injection Container
- [Story: 1.1 - Enhanced DI Container](./1.1-enhanced-di-container.md)
- **Goal:** Extend the DI container to provide all core infrastructure services with proper lifecycle management.
- **Deliverables:** Extended DI container, provider functions for all services, core module export
### 1.2 Database Layer with Ent ORM
- [Story: 1.2 - Database Layer](./1.2-database-layer.md)
- **Goal:** Set up a complete database layer using Ent ORM with core domain entities, migrations, and connection management.
- **Deliverables:** Ent schema, core entities, database client, migrations, connection pooling
### 1.3 Health Monitoring and Metrics System
- [Story: 1.3 - Health & Metrics](./1.3-health-metrics-system.md)
- **Goal:** Implement comprehensive health checks and Prometheus metrics for monitoring platform health and performance.
- **Deliverables:** Health check system, Prometheus metrics, health endpoints, metrics endpoint
### 1.4 Error Handling and Error Bus
- [Story: 1.4 - Error Handling](./1.4-error-handling.md)
- **Goal:** Implement centralized error handling with an error bus that captures, logs, and optionally reports all application errors.
- **Deliverables:** Error bus interface, channel-based implementation, panic recovery middleware
### 1.5 HTTP Server Foundation with Middleware Stack
- [Story: 1.5 - HTTP Server](./1.5-http-server.md)
- **Goal:** Create a production-ready HTTP server with comprehensive middleware for security, observability, and error handling.
- **Deliverables:** HTTP server, comprehensive middleware stack, core routes, FX lifecycle integration
### 1.6 OpenTelemetry Distributed Tracing
- [Story: 1.6 - OpenTelemetry](./1.6-opentelemetry.md)
- **Goal:** Integrate OpenTelemetry for distributed tracing across the platform to enable observability in production.
- **Deliverables:** OpenTelemetry setup, HTTP instrumentation, database instrumentation, trace-log correlation
### 1.7 Service Client Interfaces
- [Story: 1.7 - Service Client Interfaces](./1.7-service-abstraction-layer.md)
- **Goal:** Create service client interfaces for all core services to enable microservices communication.
- **Deliverables:** Service client interfaces, service factory, configuration
## Deliverables Checklist
- [ ] DI container with all core services
- [ ] Database client with Ent schema
- [ ] Health and metrics endpoints functional
- [ ] Error bus captures and logs errors
- [ ] HTTP server with middleware stack
- [ ] Basic observability with OpenTelemetry
- [ ] Service client interfaces for microservices
## Acceptance Criteria
- `GET /healthz` returns 200
- `GET /ready` checks DB connectivity
- `GET /metrics` exposes Prometheus metrics
- Panic recovery logs errors via error bus
- Database migrations run on startup
- HTTP requests are traced with OpenTelemetry

View File

@@ -0,0 +1,139 @@
# Story 2.1: JWT Authentication System
## Metadata
- **Story ID**: 2.1
- **Title**: JWT Authentication System
- **Epic**: 2 - Authentication & Authorization
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 6-8 hours
- **Dependencies**: 1.2, 1.5
## Goal
Implement a complete JWT-based authentication system with access tokens, refresh tokens, and secure token management.
## Description
This story implements the complete JWT authentication system including token generation, verification, authentication middleware, and login/refresh endpoints. The system supports short-lived access tokens and long-lived refresh tokens for secure authentication.
## Deliverables
### 1. Authentication Interfaces (`pkg/auth/auth.go`)
- `Authenticator` interface for token generation and verification
- `TokenClaims` struct with user ID, roles, tenant ID, expiration
- Token validation utilities
### 2. JWT Implementation (`internal/auth/jwt_auth.go`)
- Generate short-lived access tokens (15 minutes default)
- Generate long-lived refresh tokens (7 days default)
- Token signature verification using HMAC or RSA
- Token expiration validation
- Claims extraction and validation
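An HMAC sketch using `golang-jwt/jwt/v5`; the claim field names, the fixed 15-minute TTL, and raw-secret handling are assumptions:
```go
// internal/auth/jwt_auth.go (sketch)
package auth

import (
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// TokenClaims carries the platform's identity data inside the JWT.
type TokenClaims struct {
	UserID   string   `json:"uid"`
	Roles    []string `json:"roles"`
	TenantID string   `json:"tid"`
	jwt.RegisteredClaims
}

// GenerateAccessToken signs a short-lived access token (15 minutes here).
func GenerateAccessToken(secret []byte, claims TokenClaims) (string, error) {
	claims.ExpiresAt = jwt.NewNumericDate(time.Now().Add(15 * time.Minute))
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(secret)
}

// VerifyToken checks the signature and expiry, returning the parsed claims.
func VerifyToken(secret []byte, raw string) (*TokenClaims, error) {
	tok, err := jwt.ParseWithClaims(raw, &TokenClaims{}, func(t *jwt.Token) (any, error) {
		return secret, nil
	})
	if err != nil {
		return nil, err
	}
	return tok.Claims.(*TokenClaims), nil
}
```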
### 3. Authentication Middleware (`internal/auth/middleware.go`)
- Extract JWT from `Authorization: Bearer <token>` header
- Verify token validity (signature and expiration)
- Inject authenticated user into request context
- Helper function: `auth.FromContext(ctx) *User`
- Handle authentication errors appropriately
### 4. Authentication Endpoints
- `POST /api/v1/auth/login` - Authenticate user and return tokens
- Validate email and password
- Return access + refresh tokens
- Log login attempts
- `POST /api/v1/auth/refresh` - Refresh access token using refresh token
- Validate refresh token
- Issue new access token
- Optionally rotate refresh token
### 5. gRPC Server (Microservices)
- Expose gRPC server for authentication service
- gRPC service definition in `api/proto/auth.proto`
- gRPC server implementation in `internal/auth/grpc/server.go`
- Service registration in service registry
### 6. Integration
- Integration with DI container
- Use `IdentityServiceClient` for user operations (if Identity service is separate)
- Integration with HTTP server
- Integration with user repository
- Integration with audit logging
## Implementation Steps
1. **Install Dependencies**
```bash
go get github.com/golang-jwt/jwt/v5
```
2. **Create Authentication Interfaces**
- Create `pkg/auth/auth.go`
- Define Authenticator interface
- Define TokenClaims struct
3. **Implement JWT Authentication**
- Create `internal/auth/jwt_auth.go`
- Implement token generation
- Implement token verification
- Handle token expiration
4. **Create Authentication Middleware**
- Create `internal/auth/middleware.go`
- Implement token extraction
- Implement token verification
- Inject user into context
5. **Create Authentication Endpoints**
- Create login handler
- Create refresh handler
- Add routes to HTTP server
6. **Integrate with DI**
- Create provider function
- Register in container
## Acceptance Criteria
- [ ] Users can login and receive access and refresh tokens
- [ ] Access tokens expire after configured duration
- [ ] Refresh tokens can be used to obtain new access tokens
- [ ] Invalid tokens are rejected with appropriate errors
- [ ] Authenticated user is available in request context
- [ ] Login attempts are logged (success and failure)
- [ ] Token secrets are configurable
- [ ] Token claims include user ID, roles, and tenant ID
## Related ADRs
- [ADR-0017: JWT Token Strategy](../../adr/0017-jwt-token-strategy.md)
- [ADR-0029: Microservices Architecture](../../adr/0029-microservices-architecture.md)
- [ADR-0030: Service Communication Strategy](../../adr/0030-service-communication-strategy.md)
## Implementation Notes
- Use JWT v5 library
- Support both HMAC and RSA signing
- Token secrets should be configurable
- Consider token blacklisting for logout (future enhancement)
- Refresh tokens should be stored securely (database or cache)
## Testing
```bash
# Test authentication
go test ./internal/auth/...
# Test login endpoint
curl -X POST http://localhost:8080/api/v1/auth/login \
-H "Content-Type: application/json" \
-d '{"email":"user@example.com","password":"password"}'
# Test refresh endpoint
curl -X POST http://localhost:8080/api/v1/auth/refresh \
-H "Authorization: Bearer <refresh_token>"
```
## Files to Create/Modify
- `pkg/auth/auth.go` - Authentication interfaces
- `internal/auth/jwt_auth.go` - JWT implementation
- `internal/auth/middleware.go` - Authentication middleware
- `internal/auth/handler.go` - Authentication handlers
- `internal/di/providers.go` - Add auth provider
- `config/default.yaml` - Add JWT configuration

View File

@@ -0,0 +1,82 @@
# Story 2.2: Identity Management System
## Metadata
- **Story ID**: 2.2
- **Title**: Identity Management System
- **Epic**: 2 - Authentication & Authorization
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 8-10 hours
- **Dependencies**: 1.2, 2.1
## Goal
Build a complete user identity management system with registration, email verification, password management, and user CRUD operations.
## Description
This story implements the complete user identity management system including user registration, email verification, password reset, password change, and user profile management. All operations are secured and audited.
## Deliverables
### 1. Identity Interfaces (`pkg/identity/identity.go`)
- `UserRepository` interface for user data access
- `UserService` interface for user business logic
- User domain models
### 2. User Repository (`internal/identity/user_repo.go`)
- CRUD operations using Ent
- Password hashing (bcrypt or argon2)
- Email uniqueness validation
- User lookup by ID and email
- User search and pagination
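A bcrypt sketch for the hashing mentioned above (argon2 would follow the same shape); the default cost is an assumption:
```go
// Password hashing helpers for the user repository (sketch).
package identity

import "golang.org/x/crypto/bcrypt"

// HashPassword derives a one-way hash for storage in password_hash.
func HashPassword(plain string) (string, error) {
	h, err := bcrypt.GenerateFromPassword([]byte(plain), bcrypt.DefaultCost)
	return string(h), err
}

// CheckPassword compares a login attempt against the stored hash.
func CheckPassword(hash, plain string) bool {
	return bcrypt.CompareHashAndPassword([]byte(hash), []byte(plain)) == nil
}
```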
### 3. User Service (`internal/identity/user_service.go`)
- User registration with email verification token generation
- Email verification flow
- Password reset flow (token-based, time-limited)
- Password change with old password verification
- User profile updates
- User deletion (soft delete option)
### 4. User Management API Endpoints
- `POST /api/v1/users` - Register new user
- `GET /api/v1/users/:id` - Get user profile (authorized)
- `PUT /api/v1/users/:id` - Update user profile (authorized)
- `DELETE /api/v1/users/:id` - Delete user (admin only)
- `POST /api/v1/users/verify-email` - Verify email with token
- `POST /api/v1/users/reset-password` - Request password reset
- `POST /api/v1/users/change-password` - Change password
### 5. gRPC Server (Microservices)
- Expose gRPC server for identity service
- gRPC service definition in `api/proto/identity.proto`
- gRPC server implementation in `internal/identity/grpc/server.go`
- Service registration in service registry
### 6. Integration
- Integration with email notification system (Epic 5 placeholder)
- Integration with audit logging
- Integration with authentication system
- Identity service is an independent service that can be deployed separately
## Acceptance Criteria
- [ ] Users can register with email and password
- [ ] Passwords are securely hashed
- [ ] Email verification tokens are generated and validated
- [ ] Password reset flow works end-to-end
- [ ] Users can update their profiles
- [ ] User operations require proper authentication
- [ ] All user actions are audited
- [ ] Email uniqueness is enforced
## Related ADRs
- [ADR-0018: Password Hashing](../../adr/0018-password-hashing.md)
- [ADR-0029: Microservices Architecture](../../adr/0029-microservices-architecture.md)
- [ADR-0030: Service Communication Strategy](../../adr/0030-service-communication-strategy.md)
## Files to Create/Modify
- `pkg/identity/identity.go` - Identity interfaces
- `internal/identity/user_repo.go` - User repository
- `internal/identity/user_service.go` - User service
- `internal/identity/handler.go` - User handlers
- `internal/di/providers.go` - Add identity providers

View File

@@ -0,0 +1,70 @@
# Story 2.3: Role-Based Access Control (RBAC) System
## Metadata
- **Story ID**: 2.3
- **Title**: Role-Based Access Control (RBAC) System
- **Epic**: 2 - Authentication & Authorization
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 6-8 hours
- **Dependencies**: 1.2, 2.1
## Goal
Implement a complete RBAC system with permissions, role management, and authorization middleware.
## Description
This story implements the complete RBAC system including permission definitions, permission resolution, authorization checking, and middleware for protecting routes.
## Deliverables
### 1. Permission System (`pkg/perm/perm.go`)
- `Permission` type (string format: "module.resource.action")
- Core permission constants (system, user, role permissions)
- Permission validation utilities
### 2. Permission Resolver (`pkg/perm/resolver.go` & `internal/perm/in_memory_resolver.go`)
- `PermissionResolver` interface
- Implementation that loads user roles and permissions from database
- Permission checking with caching
- Permission inheritance via roles
### 3. Authorization System (`pkg/auth/authz.go` & `internal/auth/rbac_authorizer.go`)
- `Authorizer` interface
- RBAC authorizer implementation
- Extract user from context
- Check permissions
- Return authorization errors
### 4. Authorization Middleware
- `RequirePermission(perm Permission) gin.HandlerFunc` decorator
- Integration with route registration
- Proper error responses for unauthorized access
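A sketch of the decorator; the story's version takes only the permission and closes over an injected authorizer, so the explicit `Authorizer` parameter here is a simplification:
```go
// Sketch of the RequirePermission middleware for Gin.
package auth

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// Permission mirrors the "module.resource.action" string type from pkg/perm.
type Permission string

// Authorizer checks whether the user on the request holds a permission.
type Authorizer interface {
	Allowed(c *gin.Context, p Permission) bool
}

// RequirePermission aborts with 403 unless the authenticated user holds p.
func RequirePermission(az Authorizer, p Permission) gin.HandlerFunc {
	return func(c *gin.Context) {
		if !az.Allowed(c, p) {
			c.AbortWithStatusJSON(http.StatusForbidden, gin.H{"error": "forbidden"})
			return
		}
		c.Next()
	}
}
```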
### 5. gRPC Server (Microservices)
- Expose gRPC server for authorization service
- gRPC service definition in `api/proto/authz.proto`
- gRPC server implementation in `internal/auth/grpc/authz_server.go`
- Service registration in service registry
- Uses `IdentityServiceClient` for user operations
## Acceptance Criteria
- [ ] Permissions are defined and can be checked
- [ ] Users inherit permissions through roles
- [ ] Authorization middleware protects routes
- [ ] Unauthorized requests return 403 errors
- [ ] Permission checks are cached for performance
- [ ] Permission system is extensible by modules
## Related ADRs
- [ADR-0019: Permission DSL Format](../../adr/0019-permission-dsl-format.md)
- [ADR-0029: Microservices Architecture](../../adr/0029-microservices-architecture.md)
- [ADR-0030: Service Communication Strategy](../../adr/0030-service-communication-strategy.md)
## Files to Create/Modify
- `pkg/perm/perm.go` - Permission types
- `pkg/perm/resolver.go` - Permission resolver interface
- `internal/perm/in_memory_resolver.go` - Permission resolver implementation
- `pkg/auth/authz.go` - Authorization interface
- `internal/auth/rbac_authorizer.go` - RBAC authorizer
- `internal/auth/middleware.go` - Add authorization middleware

View File

@@ -0,0 +1,64 @@
# Story 2.4: Role Management API
## Metadata
- **Story ID**: 2.4
- **Title**: Role Management API
- **Epic**: 2 - Authentication & Authorization
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5-6 hours
- **Dependencies**: 1.2, 2.3
## Goal
Provide complete API for managing roles, assigning permissions to roles, and assigning roles to users.
## Description
This story implements the complete role management API allowing administrators to create, update, and delete roles, assign permissions to roles, and assign roles to users.
## Deliverables
### 1. Role Repository (`internal/identity/role_repo.go`)
- CRUD operations for roles
- Assign permissions to roles (many-to-many)
- Assign roles to users (many-to-many)
- List roles with permissions
- List users with roles
### 2. Role Management API Endpoints
- `POST /api/v1/roles` - Create new role
- `GET /api/v1/roles` - List all roles (with pagination)
- `GET /api/v1/roles/:id` - Get role details with permissions
- `PUT /api/v1/roles/:id` - Update role
- `DELETE /api/v1/roles/:id` - Delete role
- `POST /api/v1/roles/:id/permissions` - Assign permissions to role
- `DELETE /api/v1/roles/:id/permissions/:permId` - Remove permission from role
- `POST /api/v1/users/:id/roles` - Assign roles to user
- `DELETE /api/v1/users/:id/roles/:roleId` - Remove role from user
### 3. Authorization and Validation
- All endpoints protected (admin only)
- Input validation
- Error handling
### 4. gRPC Server (Microservices)
- Expose role management via existing Authz service gRPC server
- Role management methods in `api/proto/authz.proto`
- Service registration in service registry
## Acceptance Criteria
- [ ] Admin users can create and manage roles
- [ ] Permissions can be assigned to roles
- [ ] Roles can be assigned to users
- [ ] Role changes affect user permissions immediately
- [ ] All role operations are audited
- [ ] API endpoints are protected with proper permissions
## Related ADRs
- [ADR-0029: Microservices Architecture](../../adr/0029-microservices-architecture.md)
- [ADR-0030: Service Communication Strategy](../../adr/0030-service-communication-strategy.md)
## Files to Create/Modify
- `internal/identity/role_repo.go` - Role repository
- `internal/identity/role_handler.go` - Role handlers
- `internal/server/routes.go` - Add role routes

View File

@@ -0,0 +1,74 @@
# Story 2.5: Audit Logging System
## Metadata
- **Story ID**: 2.5
- **Title**: Audit Logging System
- **Epic**: 2 - Authentication & Authorization
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5-6 hours
- **Dependencies**: 1.2, 2.1
## Goal
Implement comprehensive audit logging that records all security-sensitive actions for compliance and security monitoring.
## Description
This story implements a complete audit logging system that records all authenticated actions with full context including actor, action, target, and metadata.
## Deliverables
### 1. Audit Interface (`pkg/audit/audit.go`)
- `Auditor` interface with `Record(ctx, action)` method
- `AuditAction` struct with actor, action, target, metadata
### 2. Audit Implementation (`internal/audit/ent_auditor.go`)
- Write audit logs to `audit_log` table
- Capture actor from request context
- Include request metadata (ID, IP, user agent, timestamp)
- Store action details and target information
- Support JSON metadata for flexible logging
### 3. Audit Middleware
- Intercept all authenticated requests
- Record action (HTTP method + path)
- Extract user and request context
- Store audit log entry
### 4. gRPC Server (Microservices)
- Expose gRPC server for audit service
- gRPC service definition in `api/proto/audit.proto`
- gRPC server implementation in `internal/audit/grpc/server.go`
- Service registration in service registry
### 5. Integration
- Integration with authentication endpoints
- Log login attempts (success and failure)
- Log password changes
- Log role assignments and removals
- Log permission changes
- Log user registration
### 6. Audit Log Query API
- `GET /api/v1/audit-logs` - Query audit logs with filters (admin only)
- Support filtering by actor, action, date range
- Pagination support
## Acceptance Criteria
- [ ] All authenticated actions are logged
- [ ] Audit logs include complete context (actor, action, target, metadata)
- [ ] Audit logs are immutable (no updates/deletes)
- [ ] Audit logs can be queried and filtered
- [ ] Audit logging has minimal performance impact
- [ ] Audit logs are stored securely
## Related ADRs
- [ADR-0020: Audit Logging Storage](../../adr/0020-audit-logging-storage.md)
- [ADR-0029: Microservices Architecture](../../adr/0029-microservices-architecture.md)
- [ADR-0030: Service Communication Strategy](../../adr/0030-service-communication-strategy.md)
## Files to Create/Modify
- `pkg/audit/audit.go` - Audit interface
- `internal/audit/ent_auditor.go` - Audit implementation
- `internal/audit/middleware.go` - Audit middleware
- `internal/audit/handler.go` - Audit query handler

View File

@@ -0,0 +1,57 @@
# Story 2.6: Database Seeding and Initialization
## Metadata
- **Story ID**: 2.6
- **Title**: Database Seeding and Initialization
- **Epic**: 2 - Authentication & Authorization
- **Status**: Pending
- **Priority**: Medium
- **Estimated Time**: 3-4 hours
- **Dependencies**: 1.2, 2.3, 2.4
## Goal
Provide database seeding functionality to create initial admin user, default roles, and core permissions.
## Description
This story implements a seeding system that creates the initial admin user, default roles (admin, user, guest), and assigns core permissions to enable the platform to be used immediately after setup.
## Deliverables
### 1. Seed Script (`internal/seed/seed.go`)
- Create default admin user (if one doesn't exist)
- Create default roles (admin, user, guest)
- Assign core permissions to roles
- Set up initial role hierarchy
- Idempotent operations (safe to run multiple times)
### 2. Seed Command (`cmd/seed/main.go`)
- Command-line interface for seeding
- Configuration via environment variables
- Dry-run mode
- Verbose logging
### 3. Integration
- Optional: Auto-seed on first startup in development
- Manual seeding in production
- Integration with application startup
## Acceptance Criteria
- [ ] Seed script creates admin user successfully
- [ ] Default roles are created with proper permissions
- [ ] Seeding is idempotent (can run multiple times safely)
- [ ] Seed script can be run via CLI
- [ ] Admin user can login and manage system
## Related ADRs
- [ADR-0029: Microservices Architecture](../../adr/0029-microservices-architecture.md)
## Implementation Notes
- Seeding is typically done once per environment
- Can be run as a separate service or as part of deployment
- Uses service clients (e.g., `IdentityServiceClient` for user creation) when it needs to call other services
## Files to Create/Modify
- `internal/seed/seed.go` - Seed functions
- `cmd/seed/main.go` - Seed command
- `Makefile` - Add seed command

View File

@@ -0,0 +1,52 @@
# Epic 2: Authentication & Authorization
## Overview
Implement complete JWT-based authentication system, build comprehensive identity management with user lifecycle, create role-based access control (RBAC) system, implement authorization middleware and permission checks, add comprehensive audit logging for security compliance, and provide database seeding for initial setup. All core services (Auth, Identity, Authz, Audit) are independent microservices that expose gRPC servers and register with the service registry.
## Stories
### 2.1 JWT Authentication System
- [Story: 2.1 - JWT Authentication](./2.1-jwt-authentication.md)
- **Goal:** Implement a complete JWT-based authentication system with access tokens, refresh tokens, and secure token management.
- **Deliverables:** Authentication interfaces, JWT implementation, authentication middleware, login/refresh endpoints
### 2.2 Identity Management System
- [Story: 2.2 - Identity Management](./2.2-identity-management.md)
- **Goal:** Build a complete user identity management system with registration, email verification, password management, and user CRUD operations.
- **Deliverables:** Identity interfaces, user repository, user service, user management API endpoints
### 2.3 Role-Based Access Control (RBAC) System
- [Story: 2.3 - RBAC System](./2.3-rbac-system.md)
- **Goal:** Implement a complete RBAC system with permissions, role management, and authorization middleware.
- **Deliverables:** Permission system, permission resolver, authorization system, authorization middleware
### 2.4 Role Management API
- [Story: 2.4 - Role Management](./2.4-role-management.md)
- **Goal:** Provide complete API for managing roles, assigning permissions to roles, and assigning roles to users.
- **Deliverables:** Role repository, role management API endpoints, authorization and validation
### 2.5 Audit Logging System
- [Story: 2.5 - Audit Logging](./2.5-audit-logging.md)
- **Goal:** Implement comprehensive audit logging that records all security-sensitive actions for compliance and security monitoring.
- **Deliverables:** Audit interface, audit implementation, audit middleware, audit log query API
### 2.6 Database Seeding and Initialization
- [Story: 2.6 - Database Seeding](./2.6-database-seeding.md)
- **Goal:** Provide database seeding functionality to create the initial admin user, default roles, and core permissions.
- **Deliverables:** Seed script, seed command, integration with application startup
## Deliverables Checklist
- [ ] JWT authentication with access/refresh tokens
- [ ] User CRUD with email verification
- [ ] Role and permission management
- [ ] Authorization middleware
- [ ] Audit logging for all actions
- [ ] Seed script for initial data
## Acceptance Criteria
- User can register and log in
- JWT tokens are validated on protected routes
- Users without permission get 403
- All actions are logged in audit table
- Admin can create roles and assign permissions
- Integration test: user without permission cannot access protected resource


@@ -0,0 +1,85 @@
# Story 3.1: Module System Interface and Registry
## Metadata
- **Story ID**: 3.1
- **Title**: Module System Interface and Registry
- **Epic**: 3 - Module Framework
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5-6 hours
- **Dependencies**: 1.1, 2.3
## Goal
Design and implement the complete module system interface with registration, dependency resolution, and lifecycle management.
## Description
This story creates the foundation of the module system by defining the module interface, manifest structure, and registry. The system must support module registration, dependency validation, and lifecycle hooks.
## Deliverables
### 1. Module Interface (`pkg/module/module.go`)
- `IModule` interface with:
- `Name() string` - Module name
- `Version() string` - Module version
- `Dependencies() []string` - Module dependencies
- `Init() fx.Option` - FX options for module initialization
- `Migrations() []func(*ent.Client) error` - Database migrations
- Optional lifecycle hooks: `OnStart(ctx context.Context) error` and `OnStop(ctx context.Context) error` (see the interface sketch below)
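Transcribed into Go, the interface might read as follows; the `ent` import path is a placeholder for the project's generated package, and splitting the optional hooks into `Startable`/`Stoppable` interfaces is one possible design:

```go
package module

import (
	"context"

	"go.uber.org/fx"

	// Placeholder: Ent generates a per-project package; substitute the real path.
	"github.com/example/goplt/internal/ent"
)

// IModule is the contract every module must satisfy.
type IModule interface {
	Name() string                          // unique module name
	Version() string                       // semantic version, e.g. "1.2.0"
	Dependencies() []string                // modules that must initialize first
	Init() fx.Option                       // providers contributed to the fx container
	Migrations() []func(*ent.Client) error // schema migrations, run in dependency order
}

// Optional lifecycle hooks; the loader detects them via type assertion.
type Startable interface {
	OnStart(ctx context.Context) error
}

type Stoppable interface {
	OnStop(ctx context.Context) error
}
```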
### 2. Module Manifest (`pkg/module/manifest.go`)
- `Manifest` struct with:
- Name, Version, Dependencies
- Permissions list
- Routes definition
- `module.yaml` schema definition
- Manifest parsing and validation
### 3. Module Registry (`internal/registry/registry.go`)
- Thread-safe module map
- `Register(m IModule)` function
- `All() []IModule` function
- `Get(name string) (IModule, error)` function
- Dependency validation (check dependencies are satisfied)
- Duplicate name detection
- Version compatibility checking
- Dependency cycle detection (see the registry sketch below)
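A sketch of the registry core with duplicate detection and dependency validation; the `Module` interface is narrowed here so the snippet stays self-contained (version and cycle checks would layer on top):

```go
package registry

import (
	"fmt"
	"sync"
)

// Module is the subset of IModule the registry needs for this sketch.
type Module interface {
	Name() string
	Dependencies() []string
}

var (
	mu      sync.RWMutex
	modules = map[string]Module{}
)

// Register adds a module, rejecting duplicate names.
func Register(m Module) error {
	mu.Lock()
	defer mu.Unlock()
	if _, dup := modules[m.Name()]; dup {
		return fmt.Errorf("module %q already registered", m.Name())
	}
	modules[m.Name()] = m
	return nil
}

// Validate checks that every declared dependency is itself registered.
func Validate() error {
	mu.RLock()
	defer mu.RUnlock()
	for name, m := range modules {
		for _, dep := range m.Dependencies() {
			if _, ok := modules[dep]; !ok {
				return fmt.Errorf("module %q depends on missing module %q", name, dep)
			}
		}
	}
	return nil
}

// Get returns a registered module by name.
func Get(name string) (Module, error) {
	mu.RLock()
	defer mu.RUnlock()
	m, ok := modules[name]
	if !ok {
		return nil, fmt.Errorf("module %q not registered", name)
	}
	return m, nil
}
```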
## Implementation Steps
1. **Create Module Interface**
- Create `pkg/module/module.go`
- Define IModule interface
- Add lifecycle hooks
2. **Create Module Manifest**
- Create `pkg/module/manifest.go`
- Define Manifest struct
- Define module.yaml schema
3. **Create Module Registry**
- Create `internal/registry/registry.go`
- Implement thread-safe registry
- Add validation logic
4. **Test Registration**
- Test module registration
- Test dependency validation
- Test duplicate detection
## Acceptance Criteria
- [ ] Modules can register via `registry.Register()`
- [ ] Registry validates dependencies
- [ ] Registry prevents duplicate registrations
- [ ] Module interface is extensible
- [ ] Dependency cycles are detected
- [ ] Version compatibility is checked
## Related ADRs
- [ADR-0021: Module Loading Strategy](../../adr/0021-module-loading-strategy.md)
## Files to Create/Modify
- `pkg/module/module.go` - Module interface
- `pkg/module/manifest.go` - Module manifest
- `internal/registry/registry.go` - Module registry


@@ -0,0 +1,65 @@
# Story 3.2: Permission Code Generation System
## Metadata
- **Story ID**: 3.2
- **Title**: Permission Code Generation System
- **Epic**: 3 - Module Framework
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 4-5 hours
- **Dependencies**: 3.1
## Goal
Create automated permission code generation from module manifests to ensure type-safe permission constants.
## Description
This story implements a code generation system that scans module manifests and generates type-safe permission constants, ensuring permissions are defined in one place and used consistently throughout the codebase.
## Deliverables
### 1. Permission Generation Script (`scripts/generate-permissions.go`)
- Scan all `modules/*/module.yaml` files
- Extract permissions from manifests
- Generate `pkg/perm/generated.go` with Permission constants
- Support for multiple modules
- Error handling and validation
- Format generated code
### 2. Go Generate Integration
- `//go:generate` directive in `pkg/perm/perm.go`
- Automatic generation on build
- Integration with the build process (example directive and generated output below)
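The wiring could look like this; the constant names and paths below are illustrative, not prescribed by the story:

```go
// pkg/perm/perm.go (hand-written)
package perm

//go:generate go run ../../scripts/generate-permissions.go

// Permission is a typed permission code, so a misspelled permission
// fails to compile instead of failing at runtime.
type Permission string
```

Running `go generate ./...` would then rewrite `pkg/perm/generated.go` along these lines:

```go
// Code generated by scripts/generate-permissions.go. DO NOT EDIT.
package perm

const (
	BlogPostCreate Permission = "blog.post.create"
	BlogPostRead   Permission = "blog.post.read"
)
```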
### 3. Makefile Integration
- `make generate` command
- Run permission generation
- Verify generated code
## Implementation Steps
1. **Create Generation Script**
- Create `scripts/generate-permissions.go`
- Implement YAML parsing
- Implement code generation
2. **Add Go Generate Directive**
- Add directive to `pkg/perm/perm.go`
- Test generation
3. **Update Makefile**
- Add `make generate` command
- Integrate with build process
## Acceptance Criteria
- [ ] Permission constants are generated from `module.yaml`
- [ ] Generated code is type-safe
- [ ] Code generation runs automatically
- [ ] Permissions follow naming convention
- [ ] Multiple modules are supported
- [ ] Generated code is properly formatted
## Files to Create/Modify
- `scripts/generate-permissions.go` - Generation script
- `pkg/perm/perm.go` - Add go:generate directive
- `Makefile` - Add generate command


@@ -0,0 +1,83 @@
# Story 3.3: Module Loader and Initialization
## Metadata
- **Story ID**: 3.3
- **Title**: Module Loader and Initialization
- **Epic**: 3 - Module Framework
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 6-8 hours
- **Dependencies**: 3.1, 1.2
## Goal
Implement module loading (static and dynamic) with dependency resolution and automatic initialization.
## Description
This story implements the complete module loading system that discovers modules, resolves dependencies, initializes them in the correct order, and runs their migrations. It supports both static registration (preferred) and dynamic plugin loading.
## Deliverables
### 1. Module Loader (`internal/pluginloader/loader.go`)
- Support static registration (preferred method)
- Optional: Go plugin loading (`.so` files)
- Module discovery from `modules/*/module.yaml`
- Loader interface for extensibility
### 2. Static Loader (`internal/pluginloader/static_loader.go`)
- Load modules via blank (`_`) side-effect imports
- Collect all registered modules
- Module discovery and registration
### 3. Optional Plugin Loader (`internal/pluginloader/plugin_loader.go`)
- Scan `./plugins/*.so` files
- Load via `plugin.Open()`
- Extract and validate module symbols
- Version compatibility checking
### 4. Module Initializer (`internal/module/initializer.go`)
- Collect all registered modules
- Resolve dependency order (topological sort; see the sketch below)
- Call each module's `Init()` and collect the returned fx.Option
- Merge all options into the main fx container
- Run migrations in dependency order
- Handle errors gracefully
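The dependency-order step can be a depth-first topological sort that doubles as the cycle detector; a self-contained sketch:

```go
package module

import "fmt"

// sortByDependencies returns module names in initialization order
// (dependencies first) and reports cycles. deps maps a module name
// to the names it depends on.
func sortByDependencies(deps map[string][]string) ([]string, error) {
	const (
		unvisited = iota
		visiting
		done
	)
	state := make(map[string]int, len(deps))
	order := make([]string, 0, len(deps))

	var visit func(name string) error
	visit = func(name string) error {
		switch state[name] {
		case done:
			return nil
		case visiting:
			return fmt.Errorf("dependency cycle through %q", name)
		}
		state[name] = visiting
		for _, dep := range deps[name] {
			if err := visit(dep); err != nil {
				return err
			}
		}
		state[name] = done
		order = append(order, name) // all dependencies were emitted first
		return nil
	}

	for name := range deps {
		if err := visit(name); err != nil {
			return nil, err
		}
	}
	return order, nil
}
```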
### 5. FX Lifecycle Integration
- Call `OnStart()` during app startup
- Call `OnStop()` during graceful shutdown
- Proper error handling
## Implementation Steps
1. **Create Module Loader**
- Create `internal/pluginloader/loader.go`
- Define loader interface
2. **Implement Static Loader**
- Create `internal/pluginloader/static_loader.go`
- Implement static module loading
3. **Implement Module Initializer**
- Create `internal/module/initializer.go`
- Implement dependency resolution
- Implement initialization
4. **Integrate with FX**
- Add lifecycle hooks
- Test initialization
## Acceptance Criteria
- [ ] Modules load in correct dependency order
- [ ] Module migrations run automatically
- [ ] Module initialization integrates with FX
- [ ] Lifecycle hooks work correctly
- [ ] Dependency cycles are detected and reported
- [ ] Errors are handled gracefully
## Files to Create/Modify
- `internal/pluginloader/loader.go` - Loader interface
- `internal/pluginloader/static_loader.go` - Static loader
- `internal/pluginloader/plugin_loader.go` - Plugin loader (optional)
- `internal/module/initializer.go` - Module initializer
- `internal/di/container.go` - Integrate module initialization


@@ -0,0 +1,62 @@
# Story 3.4: Module Management CLI Tool
## Metadata
- **Story ID**: 3.4
- **Title**: Module Management CLI Tool
- **Epic**: 3 - Module Framework
- **Status**: Pending
- **Priority**: Medium
- **Estimated Time**: 4-5 hours
- **Dependencies**: 3.1, 3.3
## Goal
Provide CLI tooling for managing modules, validating dependencies, and testing module loading.
## Description
This story creates a CLI tool that allows developers and operators to manage modules, validate dependencies, test module loading, and inspect module information.
## Deliverables
### 1. CLI Tool (`cmd/platformctl/main.go`)
- `platformctl modules list` - List all loaded modules with versions
- `platformctl modules validate` - Validate module dependencies
- `platformctl modules test <module>` - Test module loading
- `platformctl modules info <module>` - Show module details
- `platformctl modules dependencies <module>` - Show module dependencies
- Command-line argument parsing
- Error handling and user-friendly output
### 2. Makefile Integration
- `make install-cli` - Install CLI tool
- `make cli` - Build CLI tool
- `make cli-test` - Test CLI tool
## Implementation Steps
1. **Create CLI Tool**
- Create `cmd/platformctl/main.go`
- Use cobra for CLI framework
- Implement commands
2. **Implement Commands**
- List command
- Validate command
- Test command
- Info command
3. **Add to Makefile**
- Add build commands
- Add install commands
## Acceptance Criteria
- [ ] CLI tool lists all modules
- [ ] Dependency validation works
- [ ] Module testing works
- [ ] CLI is installable and usable
- [ ] Commands provide helpful output
- [ ] Error messages are clear
## Files to Create/Modify
- `cmd/platformctl/main.go` - CLI tool
- `Makefile` - Add CLI commands


@@ -0,0 +1,138 @@
# Story 3.5: Service Registry and Discovery
## Metadata
- **Story ID**: 3.5
- **Title**: Service Registry and Discovery
- **Epic**: 3 - Module Framework
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5-6 hours
- **Dependencies**: 1.7, 3.1
## Goal
Implement a service registry that enables service discovery for microservices, allowing services to locate and communicate with each other.
## Description
This story creates a service registry system supporting Consul and Kubernetes service discovery. The registry enables service discovery, health checking, and automatic service registration.
## Deliverables
### 1. Service Registry Interface (`pkg/services/registry.go`)
- `ServiceRegistry` interface with:
- `Register(service ServiceInfo) error` - Register a service
- `Deregister(serviceID string) error` - Deregister a service
- `Discover(serviceName string) ([]ServiceInfo, error)` - Discover services
- `GetService(serviceName string) (ServiceInfo, error)` - Get specific service
- `ListServices() ([]ServiceInfo, error)` - List all services
- `HealthCheck(serviceID string) error` - Check service health
### 2. Service Info Structure
- `ServiceInfo` struct with:
- ID, Name, Version
- Address (host:port)
- Protocol (local, grpc, http)
- Health status
- Metadata (see the Go transcription below)
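Transcribed into Go, the interface and struct above might read as follows (the field names for health status and metadata are one possible reading):

```go
package services

// ServiceInfo describes one registered service instance.
type ServiceInfo struct {
	ID       string            // unique instance ID
	Name     string            // logical service name, e.g. "identity"
	Version  string            // semantic version
	Address  string            // host:port
	Protocol string            // "local", "grpc", or "http"
	Healthy  bool              // last known health status
	Metadata map[string]string // free-form key/value pairs
}

// ServiceRegistry abstracts the concrete backend (Consul, Kubernetes, etcd).
type ServiceRegistry interface {
	Register(service ServiceInfo) error
	Deregister(serviceID string) error
	Discover(serviceName string) ([]ServiceInfo, error)
	GetService(serviceName string) (ServiceInfo, error)
	ListServices() ([]ServiceInfo, error)
	HealthCheck(serviceID string) error
}
```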
### 3. Consul Registry (`internal/services/registry/consul.go`)
- Consul integration (primary for production)
- Service registration and discovery
- Health checking
- Automatic service registration
### 4. Kubernetes Service Discovery (`internal/services/registry/kubernetes.go`)
- Kubernetes service discovery
- Service health checking
- Automatic service registration via K8s services
### 5. Service Registration
- Auto-register services on startup
- Health check endpoints
- Graceful deregistration on shutdown
### 6. Configuration
- Registry configuration in `config/default.yaml`:
```yaml
service_registry:
  type: consul # consul, kubernetes, etcd
  consul:
    address: localhost:8500
  kubernetes:
    namespace: default
  etcd:
    endpoints:
      - localhost:2379
```
### 7. Integration
- Integrate with service factory
- Auto-register core services
- Support module service registration
## Implementation Steps
1. **Create Service Registry Interface**
- Create `pkg/services/registry.go`
- Define ServiceRegistry interface
- Define ServiceInfo struct
2. **Implement Consul Registry**
- Create `internal/services/registry/consul.go`
- Implement Consul integration
- Add health checking
3. **Implement Kubernetes Registry**
- Create `internal/services/registry/kubernetes.go`
- Implement K8s service discovery
- Add health checking
4. **Add Service Registration**
- Auto-register services on startup
- Add health check endpoints
- Handle graceful shutdown
5. **Add Configuration**
- Add registry configuration
- Support multiple registry types
6. **Integrate with Service Factory**
- Use registry for service discovery
- Resolve services via registry
## Acceptance Criteria
- [ ] Service registry interface is defined
- [ ] Consul registry works correctly
- [ ] Kubernetes registry works correctly
- [ ] Services are auto-registered on startup
- [ ] Service discovery works
- [ ] Health checking works
- [ ] Registry is configurable
- [ ] Graceful deregistration works
## Related ADRs
- [ADR-0029: Microservices Architecture](../../adr/0029-microservices-architecture.md)
- [ADR-0030: Service Communication Strategy](../../adr/0030-service-communication-strategy.md)
## Implementation Notes
- Consul is the primary registry for production
- Kubernetes service discovery for K8s deployments
- Health checks should be lightweight
- Support service versioning
## Testing
```bash
# Test service registry
go test ./internal/services/registry/...
# Test service discovery
go test ./internal/services/registry/... -run TestDiscovery
```
## Files to Create/Modify
- `pkg/services/registry.go` - Service registry interface
- `internal/services/registry/consul.go` - Consul registry
- `internal/services/registry/kubernetes.go` - Kubernetes registry
- `internal/services/factory.go` - Integrate with registry
- `internal/di/providers.go` - Add registry provider
- `config/default.yaml` - Add registry configuration


@@ -0,0 +1,48 @@
# Epic 3: Module Framework
## Overview
Design and implement complete module system interface, build module registry with dependency resolution, create permission code generation from module manifests, implement module loader supporting static and dynamic loading, add module lifecycle management and initialization, and provide CLI tooling for module management.
## Stories
### 3.1 Module System Interface and Registry
- [Story: 3.1 - Module System Interface](./3.1-module-system-interface.md)
- **Goal:** Design and implement the complete module system interface with registration, dependency resolution, and lifecycle management.
- **Deliverables:** Module interface, module manifest, module registry
### 3.2 Permission Code Generation System
- [Story: 3.2 - Permission Code Generation](./3.2-permission-code-generation.md)
- **Goal:** Create automated permission code generation from module manifests to ensure type-safe permission constants.
- **Deliverables:** Permission generation script, Go generate integration, Makefile integration
### 3.3 Module Loader and Initialization
- [Story: 3.3 - Module Loader](./3.3-module-loader.md)
- **Goal:** Implement module loading (static and dynamic) with dependency resolution and automatic initialization.
- **Deliverables:** Module loader, static loader, plugin loader, module initializer, FX lifecycle integration
### 3.4 Module Management CLI Tool
- [Story: 3.4 - Module CLI](./3.4-module-cli.md)
- **Goal:** Provide CLI tooling for managing modules, validating dependencies, and testing module loading.
- **Deliverables:** CLI tool, Makefile integration
### 3.5 Service Registry and Discovery
- [Story: 3.5 - Service Registry](./3.5-service-registry.md)
- **Goal:** Implement a service registry that enables service discovery for microservices.
- **Deliverables:** Service registry interface, Consul registry, Kubernetes registry, service registration
## Deliverables Checklist
- [ ] Module interface and registration system
- [ ] Static module registry working
- [ ] Permission code generation tool
- [ ] Module loader with dependency resolution
- [ ] Module initialization in main app
- [ ] CLI tool for module management
- [ ] Service registry for discovery
## Acceptance Criteria
- Modules can register via `registry.Register()`
- Permission constants are generated from `module.yaml`
- Modules load in correct dependency order
- Module migrations run on startup
- `platformctl modules list` shows all modules
- Integration test: load multiple modules and verify initialization


@@ -0,0 +1,169 @@
# Story 4.1: Complete Blog Module
## Metadata
- **Story ID**: 4.1
- **Title**: Complete Blog Module
- **Epic**: 4 - Sample Feature Module (Blog)
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 10-12 hours
- **Dependencies**: 3.1, 3.2, 3.3, 2.3
## Goal
Create a complete sample blog module to demonstrate the framework, showing how to add routes, permissions, database entities, and services. This serves as a reference implementation for future developers.
## Description
This story implements a complete blog module with blog posts, CRUD operations, proper authorization, and integration with the core platform. The module demonstrates all aspects of module development including domain models, repositories, services, API handlers, and module registration.
## Deliverables
### 1. Blog Module Structure
- Create `modules/blog/` directory with proper structure:
```
modules/blog/
├── go.mod
├── module.yaml
├── internal/
│ ├── api/
│ │ └── handler.go
│ ├── domain/
│ │ ├── post.go
│ │ └── post_repo.go
│ ├── service/
│ │ └── post_service.go
│ └── ent/
│ └── schema/
│ └── post.go
└── pkg/
└── module.go
```
- Initialize `go.mod` for blog module
### 2. Module Manifest (`modules/blog/module.yaml`)
- Define module metadata (name, version, dependencies)
- Define permissions (`blog.post.create`, `blog.post.read`, `blog.post.update`, `blog.post.delete`)
- Define routes with permission requirements
### 3. Blog Domain Model
- `Post` domain entity in `modules/blog/internal/domain/post.go`
- Ent schema in `modules/blog/internal/ent/schema/post.go`:
- Fields: title, content, author_id (FK to user)
- Indexes: author_id, created_at
- Timestamps: created_at, updated_at
- Generate Ent code for blog module
### 4. Blog Repository
- `PostRepository` interface in `modules/blog/internal/domain/post_repo.go`
- Implementation using Ent client (shared from core)
- CRUD operations: Create, FindByID, FindByAuthor, Update, Delete
- Pagination support
### 5. Blog Service
- `PostService` in `modules/blog/internal/service/post_service.go`
- Business logic for creating/updating posts
- Validation (title length, content requirements)
- Authorization checks (author can only update own posts)
- Uses service clients for inter-service communication:
- `IdentityServiceClient` - to get user information
- `AuthzServiceClient` - for authorization checks
- `AuditServiceClient` - for audit logging
### 6. Blog API Handlers
- API handlers in `modules/blog/internal/api/handler.go`:
- `POST /api/v1/blog/posts` - Create post
- `GET /api/v1/blog/posts/:id` - Get post
- `GET /api/v1/blog/posts` - List posts (with pagination)
- `PUT /api/v1/blog/posts/:id` - Update post
- `DELETE /api/v1/blog/posts/:id` - Delete post
- Use authorization middleware for all endpoints
- Register handlers in module's `Init()`
### 7. Blog Module Implementation
- Module implementation in `modules/blog/pkg/module.go`:
- Implement IModule interface
- Define Init() fx.Option
- Define Migrations()
- Register module in init()
### 8. Integration
- Update main `go.mod` to include blog module
- Import blog module in `cmd/platform/main.go`
- Run permission generation: `make generate`
- Verify blog permissions are generated
### 9. Tests
- Integration test in `modules/blog/internal/api/handler_test.go`:
- Test creating post with valid permission
- Test creating post without permission (403)
- Test updating own post vs other's post
- Test pagination
- Unit tests for service and repository
## Implementation Steps
1. **Create Module Structure**
- Create directory structure
- Initialize go.mod
2. **Create Module Manifest**
- Create module.yaml
- Define permissions and routes
3. **Create Domain Model**
- Create Post entity
- Create Ent schema
- Generate Ent code
4. **Create Repository**
- Create repository interface
- Implement using Ent
5. **Create Service**
- Create service with business logic
- Add validation and authorization
6. **Create API Handlers**
- Create handlers
- Add authorization middleware
- Register routes
7. **Create Module Implementation**
- Implement IModule interface
- Register module
8. **Integrate with Platform**
- Import module in main
- Generate permissions
- Test integration
9. **Add Tests**
- Create integration tests
- Create unit tests
## Acceptance Criteria
- [ ] Blog module loads on platform startup
- [ ] `POST /api/v1/blog/posts` requires `blog.post.create` permission
- [ ] User can create, read, update, delete posts
- [ ] Authorization enforced (users can only edit own posts)
- [ ] Integration test: full CRUD flow works
- [ ] Audit logs record all blog actions
- [ ] Permissions are generated correctly
- [ ] Module migrations run on startup
## Related ADRs
- [ADR-0029: Microservices Architecture](../../adr/0029-microservices-architecture.md)
- [ADR-0030: Service Communication Strategy](../../adr/0030-service-communication-strategy.md)
- See module framework ADRs
## Files to Create/Modify
- `modules/blog/module.yaml` - Module manifest
- `modules/blog/go.mod` - Module dependencies
- `modules/blog/internal/domain/post.go` - Domain model
- `modules/blog/internal/ent/schema/post.go` - Ent schema
- `modules/blog/internal/domain/post_repo.go` - Repository
- `modules/blog/internal/service/post_service.go` - Service
- `modules/blog/internal/api/handler.go` - API handlers
- `modules/blog/pkg/module.go` - Module implementation
- `go.mod` - Add blog module
- `cmd/platform/main.go` - Import blog module


@@ -0,0 +1,31 @@
# Epic 4: Sample Feature Module (Blog)
## Overview
Create a complete sample module (Blog) to demonstrate the framework, showing how to add routes, permissions, database entities, and services. The Blog module is an independent service that uses service clients to communicate with core services. Provide reference implementation for future developers.
## Stories
### 4.1 Complete Blog Module
- [Story: 4.1 - Blog Module](./4.1-blog-module.md)
- **Goal:** Create a complete sample blog module to demonstrate the framework.
- **Deliverables:** Complete blog module with CRUD operations, permissions, database entities, services, API handlers, and integration tests
## Deliverables Checklist
- [ ] Blog module directory structure created
- [ ] Module manifest defines permissions and routes
- [ ] Blog post domain model defined
- [ ] Ent schema for blog posts created
- [ ] Repository implements CRUD operations
- [ ] Service layer implements business logic
- [ ] API endpoints for blog posts working
- [ ] Module integrated with core platform
- [ ] Integration tests passing
## Acceptance Criteria
- Blog module can be registered with core platform
- Permissions are generated for blog module
- CRUD operations work for blog posts
- API endpoints require proper authentication
- Module migrations run on startup
- Blog posts are associated with users
- Authorization enforced (users can only edit own posts)


@@ -0,0 +1,64 @@
# Story 5.1: Cache System (Redis)
## Metadata
- **Story ID**: 5.1
- **Title**: Cache System (Redis)
- **Epic**: 5 - Infrastructure Adapters
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 4-5 hours
- **Dependencies**: 1.1
## Goal
Implement a complete Redis-based caching system with a clean interface that can be swapped for other cache implementations.
## Description
This story implements a Redis cache adapter with a clean interface, allowing modules to cache data efficiently. The cache system supports TTL, key-based operations, and optional cache middleware for HTTP responses.
## Deliverables
### 1. Cache Interface (`pkg/infra/cache/cache.go`)
- `Cache` interface with:
- `Get(ctx context.Context, key string) ([]byte, error)`
- `Set(ctx context.Context, key string, value []byte, ttl time.Duration) error`
- `Delete(ctx context.Context, key string) error`
- `Exists(ctx context.Context, key string) (bool, error)`
- `Clear(ctx context.Context) error`
### 2. Redis Implementation (`internal/infra/cache/redis_cache.go`)
- Redis client setup
- Connection pooling
- All interface methods implemented
- Error handling
- Connection health checks (see the sketch below)
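A sketch of the Redis-backed implementation, assuming `github.com/redis/go-redis/v9` as the client library (the story does not mandate one); `Exists` and `Clear` would follow the same pattern:

```go
package cache

import (
	"context"
	"errors"
	"time"

	"github.com/redis/go-redis/v9"
)

// ErrMiss lets callers distinguish an absent key from a transport error.
var ErrMiss = errors.New("cache: miss")

type RedisCache struct {
	client *redis.Client
}

func NewRedisCache(addr string) *RedisCache {
	return &RedisCache{client: redis.NewClient(&redis.Options{Addr: addr})}
}

func (c *RedisCache) Get(ctx context.Context, key string) ([]byte, error) {
	b, err := c.client.Get(ctx, key).Bytes()
	if errors.Is(err, redis.Nil) {
		return nil, ErrMiss
	}
	return b, err
}

func (c *RedisCache) Set(ctx context.Context, key string, value []byte, ttl time.Duration) error {
	return c.client.Set(ctx, key, value, ttl).Err()
}

func (c *RedisCache) Delete(ctx context.Context, key string) error {
	return c.client.Del(ctx, key).Err()
}
```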
### 3. Configuration
- Redis config in `config/default.yaml`:
- Connection URL
- Pool settings
- Default TTL
### 4. DI Integration
- Provider function for Cache
- Register in DI container
### 5. Optional Cache Middleware
- HTTP response caching middleware
- Configurable cache keys
- TTL per route
## Acceptance Criteria
- [ ] Cache interface is defined
- [ ] Redis implementation works correctly
- [ ] Cache operations (get, set, delete) work
- [ ] TTL is respected
- [ ] Cache is injectable via DI
- [ ] Configuration is loaded from config
- [ ] Optional middleware works
## Files to Create/Modify
- `pkg/infra/cache/cache.go` - Cache interface
- `internal/infra/cache/redis_cache.go` - Redis implementation
- `internal/di/providers.go` - Add cache provider
- `config/default.yaml` - Add Redis config


@@ -0,0 +1,69 @@
# Story 5.2: Event Bus System
## Metadata
- **Story ID**: 5.2
- **Title**: Event Bus System
- **Epic**: 5 - Infrastructure Adapters
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 6-8 hours
- **Dependencies**: 1.1
## Goal
Implement a complete event bus system supporting both in-process (for development/testing) and Kafka (for production) with publish/subscribe capabilities.
## Description
This story implements an event bus that allows modules to publish and subscribe to events. It supports both in-process channels for development and Kafka for production, with a clean interface that makes the implementation swappable.
## Deliverables
### 1. Event Bus Interface (`pkg/eventbus/eventbus.go`)
- `EventBus` interface with:
- `Publish(ctx context.Context, topic string, event Event) error`
- `Subscribe(topic string, handler EventHandler) error`
- `Unsubscribe(topic string) error`
- `Event` and `EventHandler` types
### 2. In-Process Bus (`internal/infra/bus/inprocess_bus.go`)
- Channel-based in-process bus
- Used for testing and development
- Thread-safe implementation (see the sketch below)
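A minimal channel-based bus: each subscriber gets a buffered channel drained by its own goroutine, so publishers do not block on slow handlers until a buffer fills. The `Event`/`EventHandler` shapes are a guess at the interface types:

```go
package bus

import (
	"context"
	"sync"
)

// Event carries a topic plus an opaque payload.
type Event struct {
	Topic   string
	Payload []byte
}

type EventHandler func(ctx context.Context, e Event) error

// InProcessBus fans events out over per-subscriber channels.
type InProcessBus struct {
	mu   sync.RWMutex
	subs map[string][]chan Event
}

func NewInProcessBus() *InProcessBus {
	return &InProcessBus{subs: map[string][]chan Event{}}
}

func (b *InProcessBus) Subscribe(topic string, h EventHandler) error {
	ch := make(chan Event, 16)
	b.mu.Lock()
	b.subs[topic] = append(b.subs[topic], ch)
	b.mu.Unlock()
	go func() {
		for e := range ch {
			// Handler errors are dropped here; a real bus would log them.
			_ = h(context.Background(), e)
		}
	}()
	return nil
}

func (b *InProcessBus) Publish(ctx context.Context, topic string, e Event) error {
	b.mu.RLock()
	defer b.mu.RUnlock()
	for _, ch := range b.subs[topic] {
		ch <- e
	}
	return nil
}
```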
### 3. Kafka Bus (`internal/infra/bus/kafka_bus.go`)
- Kafka producer for publishing
- Consumer groups for subscribing
- Error handling and retries
- Connection management
### 4. Core Events
- Define core platform events:
- `platform.user.created`
- `platform.user.updated`
- `platform.role.assigned`
- `platform.permission.granted`
### 5. Configuration
- Kafka config in `config/default.yaml`
- Bus selection (in-process vs Kafka)
### 6. DI Integration
- Provider function for EventBus
- Register in DI container
- Switchable via config
## Acceptance Criteria
- [ ] Event bus interface is defined
- [ ] In-process bus works for development
- [ ] Kafka bus works for production
- [ ] Events can be published and subscribed
- [ ] Bus is swappable via config
- [ ] Error handling works correctly
- [ ] Core events are defined
## Files to Create/Modify
- `pkg/eventbus/eventbus.go` - Event bus interface
- `internal/infra/bus/inprocess_bus.go` - In-process implementation
- `internal/infra/bus/kafka_bus.go` - Kafka implementation
- `internal/di/providers.go` - Add event bus provider
- `config/default.yaml` - Add Kafka config


@@ -0,0 +1,65 @@
# Story 5.3: Blob Storage System
## Metadata
- **Story ID**: 5.3
- **Title**: Blob Storage System
- **Epic**: 5 - Infrastructure Adapters
- **Status**: Pending
- **Priority**: Medium
- **Estimated Time**: 5-6 hours
- **Dependencies**: 1.1, 1.5
## Goal
Implement a complete blob storage system using S3 with a clean interface for file upload, download, and management.
## Description
This story implements S3-based blob storage with support for file uploads, downloads, signed URLs, and file deletion. It includes an API endpoint for file uploads.
## Deliverables
### 1. Blob Storage Interface (`pkg/infra/blob/blob.go`)
- `BlobStore` interface with:
- `Upload(ctx context.Context, key string, data []byte) error`
- `Download(ctx context.Context, key string) ([]byte, error)`
- `Delete(ctx context.Context, key string) error`
- `GetSignedURL(ctx context.Context, key string, ttl time.Duration) (string, error)`
- `Exists(ctx context.Context, key string) (bool, error)`
### 2. S3 Implementation (`internal/infra/blob/s3_store.go`)
- AWS S3 client setup
- All interface methods implemented
- Error handling
- Content type detection (signed-URL sketch below)
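The signed-URL method is the least obvious part; a sketch using the AWS SDK for Go v2 presign client, with bucket wiring simplified:

```go
package blob

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// S3Store implements the BlobStore interface on top of the AWS SDK v2.
type S3Store struct {
	bucket  string
	presign *s3.PresignClient
}

func NewS3Store(client *s3.Client, bucket string) *S3Store {
	return &S3Store{bucket: bucket, presign: s3.NewPresignClient(client)}
}

// GetSignedURL returns a time-limited download URL so clients can
// fetch objects without holding AWS credentials.
func (s *S3Store) GetSignedURL(ctx context.Context, key string, ttl time.Duration) (string, error) {
	req, err := s.presign.PresignGetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	}, s3.WithPresignExpires(ttl))
	if err != nil {
		return "", err
	}
	return req.URL, nil
}
```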
### 3. File Upload API
- `POST /api/v1/files/upload` - Upload file
- File validation
- Size limits
- Content type validation
### 4. Configuration
- S3 config in `config/default.yaml`:
- Bucket name
- Region
- Credentials (or use IAM role)
### 5. DI Integration
- Provider function for BlobStore
- Register in DI container
## Acceptance Criteria
- [ ] Blob storage interface is defined
- [ ] S3 implementation works correctly
- [ ] Files can be uploaded and downloaded
- [ ] Signed URLs are generated correctly
- [ ] File upload API works
- [ ] Configuration is loaded from config
- [ ] Blob store is injectable via DI
## Files to Create/Modify
- `pkg/infra/blob/blob.go` - Blob storage interface
- `internal/infra/blob/s3_store.go` - S3 implementation
- `internal/infra/blob/handler.go` - File upload handler
- `internal/di/providers.go` - Add blob store provider
- `config/default.yaml` - Add S3 config


@@ -0,0 +1,67 @@
# Story 5.4: Email Notification System
## Metadata
- **Story ID**: 5.4
- **Title**: Email Notification System
- **Epic**: 5 - Infrastructure Adapters
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5-6 hours
- **Dependencies**: 1.1, 2.2
## Goal
Implement a complete email notification system with SMTP support, HTML email templates, and integration with identity management.
## Description
This story implements email notifications using SMTP with support for HTML emails and templates. It integrates with the identity service to send verification and password reset emails.
## Deliverables
### 1. Notification Interface (`pkg/notification/notification.go`)
- `Notifier` interface with:
- `SendEmail(ctx context.Context, to, subject, body string) error`
- `SendHTMLEmail(ctx context.Context, to, subject, htmlBody, textBody string) error`
- `SendSMS(ctx context.Context, to, message string) error` (placeholder)
### 2. SMTP Implementation (`internal/infra/email/smtp_notifier.go`)
- SMTP client setup
- HTML email support
- Email templates for:
- Email verification
- Password reset
- Welcome email
### 3. Integration with Identity Service
- Send verification email on registration
- Send password reset email
- Send welcome email
### 4. Configuration
- Email config in `config/default.yaml`:
- SMTP server
- Port
- Username/password
- From address
### 5. DI Integration
- Provider function for Notifier
- Register in DI container
## Acceptance Criteria
- [ ] Notification interface is defined
- [ ] SMTP implementation works correctly
- [ ] HTML emails are sent successfully
- [ ] Email templates work
- [ ] Verification emails are sent on registration
- [ ] Password reset emails are sent
- [ ] Configuration is loaded from config
- [ ] Notifier is injectable via DI
## Files to Create/Modify
- `pkg/notification/notification.go` - Notification interface
- `internal/infra/email/smtp_notifier.go` - SMTP implementation
- `internal/infra/email/templates.go` - Email templates
- `internal/identity/user_service.go` - Integrate email sending
- `internal/di/providers.go` - Add notifier provider
- `config/default.yaml` - Add email config


@@ -0,0 +1,74 @@
# Story 5.5: Scheduler and Background Jobs System
## Metadata
- **Story ID**: 5.5
- **Title**: Scheduler and Background Jobs System
- **Epic**: 5 - Infrastructure Adapters
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 6-8 hours
- **Dependencies**: 1.1, 5.1
## Goal
Implement a complete scheduler and background job system with cron jobs, job queues, retries, and job status tracking.
## Description
This story implements a scheduler system using Asynq (Redis-backed) that supports cron jobs for periodic tasks and job queues for background processing. Jobs can be registered from modules and tracked.
## Deliverables
### 1. Scheduler Interface (`pkg/scheduler/scheduler.go`)
- `Scheduler` interface with:
- `Cron(spec string, job JobFunc) error` - Schedule cron job
- `Enqueue(queue string, payload any) error` - Enqueue job
- `RegisterJob(name string, handler JobHandler) error` - Register job handler
### 2. Asynq Implementation (`internal/infra/scheduler/asynq_scheduler.go`)
- Redis-backed job queue
- Cron jobs for periodic tasks
- Job retries and backoff
- Job status tracking
- Job result storage (see the sketch below)
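A compressed sketch of how Asynq's pieces fit together: a scheduler that enqueues a cron task and a worker server that processes it. The task name and cron spec are illustrative:

```go
package scheduler

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

const typeTokenCleanup = "maintenance:token_cleanup"

// Run wires the scheduler (cron producer) and the worker server
// (consumer) against the same Redis instance.
func Run(redisAddr string) error {
	redisOpt := asynq.RedisClientOpt{Addr: redisAddr}

	// Worker side: map task types to handlers.
	mux := asynq.NewServeMux()
	mux.HandleFunc(typeTokenCleanup, func(ctx context.Context, t *asynq.Task) error {
		log.Println("cleaning up expired tokens")
		return nil // a real handler would delete expired rows here
	})

	// Producer side: enqueue the cleanup task every night at 03:00.
	sched := asynq.NewScheduler(redisOpt, nil)
	if _, err := sched.Register("0 3 * * *", asynq.NewTask(typeTokenCleanup, nil)); err != nil {
		return err
	}
	go func() {
		if err := sched.Run(); err != nil {
			log.Fatal(err)
		}
	}()

	// Retries with exponential backoff are built into the server.
	srv := asynq.NewServer(redisOpt, asynq.Config{Concurrency: 10})
	return srv.Run(mux)
}
```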
### 3. Job Registry (`internal/infra/scheduler/job_registry.go`)
- Register jobs from modules
- Start job processor on app startup
- Job lifecycle management
### 4. Example Jobs
- Cleanup expired tokens (daily)
- Send digest emails (weekly)
- Database cleanup tasks
### 5. Job Monitoring API
- `GET /api/v1/jobs/status` - Job status endpoint
- Job history and statistics
### 6. Configuration
- Scheduler config in `config/default.yaml`:
- Redis connection (shared with cache)
- Concurrency settings
- Retry settings
### 7. DI Integration
- Provider function for Scheduler
- Register in DI container
## Acceptance Criteria
- [ ] Scheduler interface is defined
- [ ] Cron jobs can be scheduled
- [ ] Jobs can be enqueued
- [ ] Jobs are processed correctly
- [ ] Job retries work
- [ ] Job status is tracked
- [ ] Example jobs run on schedule
- [ ] Job monitoring API works
## Files to Create/Modify
- `pkg/scheduler/scheduler.go` - Scheduler interface
- `internal/infra/scheduler/asynq_scheduler.go` - Asynq implementation
- `internal/infra/scheduler/job_registry.go` - Job registry
- `internal/infra/scheduler/jobs.go` - Example jobs
- `internal/di/providers.go` - Add scheduler provider
- `config/default.yaml` - Add scheduler config


@@ -0,0 +1,68 @@
# Story 5.6: Secret Store Integration
## Metadata
- **Story ID**: 5.6
- **Title**: Secret Store Integration
- **Epic**: 5 - Infrastructure Adapters
- **Status**: Pending
- **Priority**: Medium
- **Estimated Time**: 4-5 hours
- **Dependencies**: 0.2
## Goal
Implement secret store integration supporting HashiCorp Vault and AWS Secrets Manager for secure secret management.
## Description
This story implements secret store adapters that can retrieve secrets from external secret management systems, with integration into the configuration system.
## Deliverables
### 1. Secret Store Interface (`pkg/infra/secret/secret.go`)
- `SecretStore` interface with:
- `GetSecret(ctx context.Context, key string) (string, error)`
- `GetSecrets(ctx context.Context, prefix string) (map[string]string, error)`
### 2. Vault Implementation (`internal/infra/secret/vault_store.go`)
- HashiCorp Vault client
- Support KV v2 secrets
- Authentication (token, AppRole)
- Secret caching
### 3. AWS Secrets Manager (`internal/infra/secret/aws_secrets.go`)
- AWS Secrets Manager client
- Secret retrieval
- Secret caching
### 4. Configuration Integration
- Integrate with config loader
- Overlay secrets on top of file/env config
- Load secrets lazily (cache)
- Secret key resolution (see the overlay sketch below)
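One way to overlay secrets onto file/env config is to resolve references at load time; the `secret://` scheme below is an assumption for illustration, not part of the story:

```go
package config

import (
	"context"
	"strings"
)

// SecretStore matches the interface defined in pkg/infra/secret.
type SecretStore interface {
	GetSecret(ctx context.Context, key string) (string, error)
}

// OverlaySecrets walks flattened config values and resolves any value
// of the form "secret://<key>" through the secret store, so files and
// env vars never contain the secret itself.
func OverlaySecrets(ctx context.Context, values map[string]string, store SecretStore) error {
	for k, v := range values {
		ref, ok := strings.CutPrefix(v, "secret://")
		if !ok {
			continue // plain value, leave untouched
		}
		resolved, err := store.GetSecret(ctx, ref)
		if err != nil {
			return err
		}
		values[k] = resolved
	}
	return nil
}
```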
### 5. Configuration
- Secret store config in `config/default.yaml`:
- Provider (vault, aws, none)
- Connection settings
- Cache settings
### 6. DI Integration
- Provider function for SecretStore
- Register in DI container (optional, via config)
## Acceptance Criteria
- [ ] Secret store interface is defined
- [ ] Vault implementation works
- [ ] AWS Secrets Manager implementation works
- [ ] Secrets are loaded into config
- [ ] Secret caching works
- [ ] Configuration integration works
- [ ] Secret store is optional (can be disabled)
## Files to Create/Modify
- `pkg/infra/secret/secret.go` - Secret store interface
- `internal/infra/secret/vault_store.go` - Vault implementation
- `internal/infra/secret/aws_secrets.go` - AWS implementation
- `internal/config/loader.go` - Integrate secret loading
- `internal/di/providers.go` - Add secret store provider
- `config/default.yaml` - Add secret store config


@@ -0,0 +1,150 @@
# Story 5.7: gRPC Service Definitions and Clients
## Metadata
- **Story ID**: 5.7
- **Title**: gRPC Service Definitions and Clients
- **Epic**: 5 - Infrastructure Adapters
- **Status**: Pending
- **Priority**: Medium
- **Estimated Time**: 8-10 hours
- **Dependencies**: 1.7, 3.5
## Goal
Implement gRPC service definitions and clients to enable microservices communication, allowing modules to be extracted as independent services.
## Description
This story implements gRPC service definitions for core services and gRPC clients that implement the service client interfaces. This enables modules to communicate with services over the network when deployed as microservices.
## Deliverables
### 1. gRPC Service Definitions (`api/proto/`)
- Define Protocol Buffer files for core services:
- `identity.proto` - Identity service
- `auth.proto` - Authentication service
- `authz.proto` - Authorization service
- `permission.proto` - Permission service
- `audit.proto` - Audit service
- Use protobuf v3
- Include proper message definitions
- Include service definitions
### 2. gRPC Server Implementations (`internal/services/grpc/server/`)
- Implement gRPC servers for each service:
- `identity_server.go` - Identity gRPC server
- `auth_server.go` - Auth gRPC server
- `authz_server.go` - Authz gRPC server
- Server implementations wrap existing services
- Error handling and validation
- Request/response conversion
### 3. gRPC Client Implementations (`internal/services/grpc/client/`)
- Implement gRPC clients that satisfy service client interfaces:
- `grpc_identity_client.go` - Identity gRPC client
- `grpc_auth_client.go` - Auth gRPC client
- `grpc_authz_client.go` - Authz gRPC client
- Connection pooling
- Retry logic
- Circuit breaker support
- Timeout handling (see the client sketch below)
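A sketch of a client wrapper that adds per-call timeouts on top of the generated stub; `GetUserRequest`/`GetUserResponse` stand in for the protoc-generated types, and retries or circuit breaking would wrap `GetUser` the same way:

```go
package client

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// Stand-ins for the protoc-generated request/response and stub types.
type GetUserRequest struct{ ID string }
type GetUserResponse struct{ Email string }

type identityStub interface {
	GetUser(ctx context.Context, in *GetUserRequest, opts ...grpc.CallOption) (*GetUserResponse, error)
}

// GRPCIdentityClient adapts the raw stub to the platform's service
// client interface, adding a per-call timeout so one slow service
// cannot stall its callers.
type GRPCIdentityClient struct {
	stub    identityStub
	timeout time.Duration
}

// Dial opens a connection; TLS credentials would replace the insecure
// ones outside development. grpc.NewClient supersedes grpc.Dial in
// recent grpc-go releases.
func Dial(target string) (*grpc.ClientConn, error) {
	return grpc.NewClient(target, grpc.WithTransportCredentials(insecure.NewCredentials()))
}

func (c *GRPCIdentityClient) GetUserEmail(ctx context.Context, id string) (string, error) {
	ctx, cancel := context.WithTimeout(ctx, c.timeout)
	defer cancel()
	resp, err := c.stub.GetUser(ctx, &GetUserRequest{ID: id})
	if err != nil {
		return "", err
	}
	return resp.Email, nil
}
```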
### 4. gRPC Server Setup
- gRPC server initialization
- Service registration
- Health check service
- Reflection service (development)
- Integration with HTTP server (gRPC-Gateway optional)
### 5. Code Generation
- `Makefile` target for protobuf generation
- Generate Go code from `.proto` files
- Generate gRPC server and client stubs
### 6. Configuration
- gRPC configuration in `config/default.yaml`:
```yaml
grpc:
  enabled: false # Enable gRPC server
  port: 9090
  reflection: true # Enable reflection (dev)
```
### 7. Integration
- Integrate with service factory
- Support switching between local and gRPC clients
- Service registry integration for gRPC services
## Implementation Steps
1. **Install Dependencies**
```bash
go get google.golang.org/grpc
go get google.golang.org/protobuf
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
```
2. **Define Protocol Buffers**
- Create `api/proto/` directory
- Define `.proto` files for each service
- Define messages and services
3. **Generate gRPC Code**
- Create `Makefile` target
- Generate Go code from protobuf
4. **Implement gRPC Servers**
- Create server implementations
- Wrap existing services
- Handle errors and validation
5. **Implement gRPC Clients**
- Create client implementations
- Implement service client interfaces
- Add connection management
6. **Integrate with Service Factory**
- Update factory to support gRPC clients
- Add gRPC server startup
## Acceptance Criteria
- [ ] gRPC service definitions are created
- [ ] gRPC servers are implemented
- [ ] gRPC clients implement service interfaces
- [ ] Service factory can create gRPC clients
- [ ] gRPC services can be enabled via configuration
- [ ] Code generation works
- [ ] gRPC clients work with service registry
## Related ADRs
- [ADR-0029: Microservices Architecture](../../adr/0029-microservices-architecture.md)
- [ADR-0030: Service Communication Strategy](../../adr/0030-service-communication-strategy.md)
## Implementation Notes
- Use protobuf v3
- Support both unary and streaming RPCs
- Implement proper error handling
- Add OpenTelemetry instrumentation
- Support service versioning
## Testing
```bash
# Generate protobuf code
make generate-proto
# Test gRPC servers
go test ./internal/services/grpc/server/...
# Test gRPC clients
go test ./internal/services/grpc/client/...
```
## Files to Create/Modify
- `api/proto/identity.proto` - Identity service definition
- `api/proto/auth.proto` - Auth service definition
- `api/proto/authz.proto` - Authz service definition
- `internal/services/grpc/server/` - gRPC server implementations
- `internal/services/grpc/client/` - gRPC client implementations
- `internal/services/factory.go` - Add gRPC client support
- `Makefile` - Add protobuf generation
- `config/default.yaml` - Add gRPC configuration


@@ -0,0 +1,58 @@
# Epic 5: Infrastructure Adapters
## Overview
Implement infrastructure adapters (cache, queue, blob storage, email), make adapters swappable via interfaces, add scheduler/background jobs system, and implement event bus (in-process and Kafka).
## Stories
### 5.1 Cache System (Redis)
- [Story: 5.1 - Cache System](./5.1-cache-system.md)
- **Goal:** Implement a complete Redis-based caching system with a clean interface.
- **Deliverables:** Cache interface, Redis implementation, configuration, DI integration
### 5.2 Event Bus System
- [Story: 5.2 - Event Bus](./5.2-event-bus.md)
- **Goal:** Implement a complete event bus system supporting both in-process and Kafka.
- **Deliverables:** Event bus interface, in-process bus, Kafka bus, core events
### 5.3 Blob Storage System
- [Story: 5.3 - Blob Storage](./5.3-blob-storage.md)
- **Goal:** Implement a complete blob storage system using S3.
- **Deliverables:** Blob storage interface, S3 implementation, file upload API
### 5.4 Email Notification System
- [Story: 5.4 - Email Notification](./5.4-email-notification.md)
- **Goal:** Implement a complete email notification system with SMTP support.
- **Deliverables:** Notification interface, SMTP implementation, email templates, identity integration
### 5.5 Scheduler and Background Jobs System
- [Story: 5.5 - Scheduler & Jobs](./5.5-scheduler-jobs.md)
- **Goal:** Implement a complete scheduler and background job system.
- **Deliverables:** Scheduler interface, Asynq implementation, job registry, example jobs
### 5.6 Secret Store Integration
- [Story: 5.6 - Secret Store](./5.6-secret-store.md)
- **Goal:** Implement secret store integration supporting Vault and AWS Secrets Manager.
- **Deliverables:** Secret store interface, Vault implementation, AWS implementation, config integration
### 5.7 gRPC Service Definitions and Clients
- [Story: 5.7 - gRPC Services](./5.7-grpc-services.md)
- **Goal:** Implement gRPC service definitions and clients to enable microservices communication.
- **Deliverables:** gRPC service definitions, gRPC servers, gRPC clients, code generation
## Deliverables Checklist
- [ ] Cache adapter (Redis) working
- [ ] Event bus (in-process and Kafka) functional
- [ ] Blob storage (S3) adapter
- [ ] Email notification system
- [ ] Scheduler and background jobs
- [ ] Secret store integration (optional)
- [ ] gRPC service definitions and clients
## Acceptance Criteria
- Cache stores and retrieves data correctly
- Events are published and consumed
- Files can be uploaded and downloaded
- Email notifications are sent
- Background jobs run on schedule
- Integration test: full infrastructure stack works


@@ -0,0 +1,72 @@
# Story 6.1: Enhanced Observability
## Metadata
- **Story ID**: 6.1
- **Title**: Enhanced Observability
- **Epic**: 6 - Observability & Production Readiness
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 6-8 hours
- **Dependencies**: 1.6, 5.2, 5.1
## Goal
Enhance observability with full OpenTelemetry integration, expanded Prometheus metrics, and improved logging with request correlation.
## Description
This story enhances the observability system by completing OpenTelemetry integration with all infrastructure components, expanding Prometheus metrics, and improving logging with better correlation and structured fields.
## Deliverables
### 1. Complete OpenTelemetry Integration
- Export traces to Jaeger/OTLP collector
- Add database instrumentation (Ent interceptor)
- Add Kafka instrumentation
- Add Redis instrumentation
- Create custom spans:
- Module initialization spans
- Background job spans
- Event publishing spans
- Trace context propagation:
- Include trace ID in logs
- Propagate across HTTP calls
- Include in error reports
### 2. Prometheus Metrics Expansion
- Add more metrics:
- Database connection pool stats
- Cache hit/miss ratio
- Event bus publish/consume rates
- Background job execution times
- Module-specific metrics (via module interface)
- Create metric labels:
- `module` label for module metrics
- `tenant_id` label (if multi-tenant)
- `status` label for error rates
### 3. Enhanced Logging
- Add structured fields:
- `user_id` from context
- `tenant_id` from context
- `module` name for module logs
- `trace_id` from OpenTelemetry
- Create log aggregation config:
- JSON format for production
- Human-readable for development
- Support for Loki/CloudWatch/ELK
## Acceptance Criteria
- [ ] Traces are exported and visible in Jaeger
- [ ] All infrastructure components are instrumented
- [ ] Trace IDs are included in logs
- [ ] Metrics are expanded with new dimensions
- [ ] Logs include all correlation fields
- [ ] Log aggregation works correctly
## Files to Create/Modify
- `internal/observability/tracer.go` - Enhanced tracing
- `internal/infra/database/client.go` - Add tracing
- `internal/infra/cache/redis_cache.go` - Add tracing
- `internal/infra/bus/kafka_bus.go` - Add tracing
- `internal/metrics/metrics.go` - Expanded metrics
- `internal/logger/zap_logger.go` - Enhanced logging


@@ -0,0 +1,53 @@
# Story 6.2: Error Reporting (Sentry)
## Metadata
- **Story ID**: 6.2
- **Title**: Error Reporting (Sentry)
- **Epic**: 6 - Observability & Production Readiness
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 4-5 hours
- **Dependencies**: 1.4
## Goal
Add comprehensive error reporting with Sentry integration that captures errors with full context.
## Description
This story integrates Sentry for error reporting, sending all errors from the error bus to Sentry with complete context including trace IDs, user information, and module context.
## Deliverables
### 1. Sentry Integration
- Install and configure Sentry SDK
- Integrate with error bus:
- Send errors to Sentry
- Include trace ID in Sentry events
- Add user context (user ID, email)
- Add module context (module name)
- Sentry middleware:
- Capture panics
- Capture HTTP errors (4xx, 5xx)
- Configure Sentry DSN via config
### 2. Error Context Enhancement
- Enrich errors with:
- Request context
- User information
- Module information
- Stack traces
- Environment information
## Acceptance Criteria
- [ ] Errors are reported to Sentry with context
- [ ] Panics are captured and reported
- [ ] HTTP errors are captured
- [ ] Trace IDs are included in Sentry events
- [ ] User context is included
- [ ] Sentry DSN is configurable
## Files to Create/Modify
- `internal/errorbus/sentry_bus.go` - Sentry integration
- `internal/server/middleware.go` - Sentry middleware
- `internal/di/providers.go` - Add Sentry provider
- `config/default.yaml` - Add Sentry config


@@ -0,0 +1,46 @@
# Story 6.3: Grafana Dashboards
## Metadata
- **Story ID**: 6.3
- **Title**: Grafana Dashboards
- **Epic**: 6 - Observability & Production Readiness
- **Status**: Pending
- **Priority**: Medium
- **Estimated Time**: 4-5 hours
- **Dependencies**: 1.3, 6.1
## Goal
Create comprehensive Grafana dashboards for monitoring platform health, performance, and errors.
## Description
This story creates Grafana dashboard JSON files that visualize platform metrics, health, and performance data from Prometheus.
## Deliverables
### 1. Grafana Dashboards (`ops/grafana/dashboards/`)
- `platform-overview.json` - Overall health dashboard
- `http-metrics.json` - HTTP request metrics
- `database-metrics.json` - Database performance
- `module-metrics.json` - Per-module metrics
- `error-rates.json` - Error tracking
- Dashboard setup documentation
### 2. Documentation
- Document dashboard setup in `docs/operations.md`
- Dashboard import instructions
- Metric explanation
## Acceptance Criteria
- [ ] All dashboards are created
- [ ] Dashboards display correct metrics
- [ ] Dashboard setup is documented
- [ ] Dashboards can be imported into Grafana
## Files to Create/Modify
- `ops/grafana/dashboards/platform-overview.json`
- `ops/grafana/dashboards/http-metrics.json`
- `ops/grafana/dashboards/database-metrics.json`
- `ops/grafana/dashboards/module-metrics.json`
- `ops/grafana/dashboards/error-rates.json`
- `docs/operations.md` - Dashboard documentation


@@ -0,0 +1,53 @@
# Story 6.4: Rate Limiting
## Metadata
- **Story ID**: 6.4
- **Title**: Rate Limiting
- **Epic**: 6 - Observability & Production Readiness
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 4-5 hours
- **Dependencies**: 1.5, 5.1
## Goal
Implement rate limiting to prevent API abuse and ensure fair resource usage.
## Description
This story implements rate limiting middleware that limits requests per user and per IP address, with configurable limits per endpoint.
## Deliverables
### 1. Rate Limiting Middleware
- Per-user rate limiting
- Per-IP rate limiting
- Configurable limits per endpoint
- Rate limit storage (Redis)
- Return `X-RateLimit-*` headers (see the sketch below)
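A fixed-window sketch backed by Redis `INCR`/`EXPIRE`, assuming go-redis v9; production code might prefer a sliding window or a Lua script so the increment and expiry are atomic:

```go
package ratelimit

import (
	"context"
	"fmt"
	"net/http"
	"strconv"
	"time"

	"github.com/redis/go-redis/v9"
)

// Limiter implements a fixed-window counter: the first hit in a window
// creates the key with a TTL, later hits increment it.
type Limiter struct {
	rdb    *redis.Client
	limit  int64
	window time.Duration
}

func NewLimiter(rdb *redis.Client, limit int64, window time.Duration) *Limiter {
	return &Limiter{rdb: rdb, limit: limit, window: window}
}

func (l *Limiter) Allow(ctx context.Context, key string) (remaining int64, ok bool, err error) {
	// Bucket the key by window number so old windows expire on their own.
	windowKey := fmt.Sprintf("rl:%s:%d", key, time.Now().Unix()/int64(l.window.Seconds()))
	n, err := l.rdb.Incr(ctx, windowKey).Result()
	if err != nil {
		return 0, false, err
	}
	if n == 1 {
		l.rdb.Expire(ctx, windowKey, l.window) // first hit starts the clock
	}
	return max(l.limit-n, 0), n <= l.limit, nil
}

// Middleware keys the limit on the client address; a real deployment
// would also key on the authenticated user ID.
func (l *Limiter) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		remaining, ok, err := l.Allow(r.Context(), r.RemoteAddr)
		if err != nil {
			next.ServeHTTP(w, r) // fail open if Redis is unavailable
			return
		}
		w.Header().Set("X-RateLimit-Limit", strconv.FormatInt(l.limit, 10))
		w.Header().Set("X-RateLimit-Remaining", strconv.FormatInt(remaining, 10))
		if !ok {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```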
### 2. Configuration
- Rate limit config in `config/default.yaml`:
```yaml
rate_limiting:
  enabled: true
  per_user: 100/minute
  per_ip: 1000/minute
```
### 3. Integration
- Integrate with HTTP server
- Add to middleware stack
- Error responses for rate limit exceeded
## Acceptance Criteria
- [ ] Rate limiting prevents abuse
- [ ] Per-user limits work correctly
- [ ] Per-IP limits work correctly
- [ ] Rate limit headers are returned
- [ ] Configuration is flexible
- [ ] Rate limits are stored in Redis
## Files to Create/Modify
- `internal/server/middleware.go` - Rate limiting middleware
- `internal/infra/ratelimit/limiter.go` - Rate limiter implementation
- `config/default.yaml` - Add rate limit config


@@ -0,0 +1,54 @@
# Story 6.5: Security Hardening
## Metadata
- **Story ID**: 6.5
- **Title**: Security Hardening
- **Epic**: 6 - Observability & Production Readiness
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5-6 hours
- **Dependencies**: 1.5
## Goal
Add comprehensive security hardening including security headers, input validation, and request size limits.
## Description
This story implements security best practices including security headers, input validation, request size limits, and SQL injection protection.
## Deliverables
### 1. Security Headers Middleware
- `X-Content-Type-Options: nosniff`
- `X-Frame-Options: DENY`
- `X-XSS-Protection: 1; mode=block`
- `Strict-Transport-Security` (if HTTPS)
- `Content-Security-Policy` (see the middleware sketch below)
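A minimal `net/http` middleware setting the headers above; the CSP value is a starting point to tighten per deployment:

```go
package server

import "net/http"

// SecurityHeaders sets baseline hardening headers on every response.
// HSTS is only meaningful over HTTPS, so it is applied conditionally.
func SecurityHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		h := w.Header()
		h.Set("X-Content-Type-Options", "nosniff")
		h.Set("X-Frame-Options", "DENY")
		h.Set("X-XSS-Protection", "1; mode=block")
		h.Set("Content-Security-Policy", "default-src 'self'")
		if r.TLS != nil {
			h.Set("Strict-Transport-Security", "max-age=63072000; includeSubDomains")
		}
		next.ServeHTTP(w, r)
	})
}
```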
### 2. Request Size Limits
- Max body size (10MB default)
- Max header size
- Configurable limits
### 3. Input Validation
- Use `github.com/go-playground/validator`
- Validate all request bodies
- Sanitize user inputs
- Validation error responses
### 4. SQL Injection Protection
- Use parameterized queries (Ent already does this)
- Add linter rule to prevent raw SQL
- Security scanning
## Acceptance Criteria
- [ ] Security headers are present
- [ ] Request size limits are enforced
- [ ] Input validation works
- [ ] SQL injection protection is in place
- [ ] Security headers are configurable
## Files to Create/Modify
- `internal/server/middleware.go` - Security headers middleware
- `internal/server/validation.go` - Input validation
- `config/default.yaml` - Add security config


@@ -0,0 +1,53 @@
# Story 6.6: Performance Optimization
## Metadata
- **Story ID**: 6.6
- **Title**: Performance Optimization
- **Epic**: 6 - Observability & Production Readiness
- **Status**: Pending
- **Priority**: Medium
- **Estimated Time**: 6-8 hours
- **Dependencies**: 1.2, 5.1
## Goal
Optimize platform performance through database connection pooling, query optimization, response compression, and caching strategies.
## Description
This story implements performance optimizations including database connection pooling, query optimization, response compression, and strategic caching.
## Deliverables
### 1. Database Connection Pooling
- Configure max connections
- Configure idle timeout
- Monitor pool stats
- Connection health checks
### 2. Query Optimization
- Add indexes for common queries
- Use database query logging (development)
- Add slow query detection
- Query performance monitoring
### 3. Response Compression
- Gzip middleware for large responses
- Configurable compression levels
- Content type filtering
### 4. Caching Strategy
- Cache frequently accessed data (user permissions, roles)
- Cache invalidation strategies
- Cache warming
## Acceptance Criteria
- [ ] Database connection pooling is optimized
- [ ] Query performance is improved
- [ ] Response compression works
- [ ] Caching strategy is effective
- [ ] Performance meets SLA (< 100ms p95 for auth endpoints)
## Files to Create/Modify
- `internal/infra/database/client.go` - Connection pooling
- `internal/server/middleware.go` - Compression middleware
- `internal/perm/in_memory_resolver.go` - Add caching


@@ -0,0 +1,55 @@
# Epic 6: Observability & Production Readiness
## Overview
Enhance observability with full OpenTelemetry integration, add comprehensive error reporting (Sentry), create Grafana dashboards, improve logging with request correlation, add rate limiting and security hardening, and optimize performance.
## Stories
### 6.1 Enhanced Observability
- [Story: 6.1 - Enhanced Observability](./6.1-enhanced-observability.md)
- **Goal:** Enhance observability with full OpenTelemetry integration, comprehensive Prometheus metrics, and improved logging.
- **Deliverables:** Complete OpenTelemetry integration, expanded metrics, enhanced logging
### 6.2 Error Reporting (Sentry)
- [Story: 6.2 - Error Reporting](./6.2-error-reporting.md)
- **Goal:** Add comprehensive error reporting with Sentry integration.
- **Deliverables:** Sentry integration, error context enhancement
### 6.3 Grafana Dashboards
- [Story: 6.3 - Grafana Dashboards](./6.3-grafana-dashboards.md)
- **Goal:** Create comprehensive Grafana dashboards for monitoring.
- **Deliverables:** Grafana dashboard JSON files, documentation
### 6.4 Rate Limiting
- [Story: 6.4 - Rate Limiting](./6.4-rate-limiting.md)
- **Goal:** Implement rate limiting to prevent API abuse.
- **Deliverables:** Rate limiting middleware, configuration
### 6.5 Security Hardening
- [Story: 6.5 - Security Hardening](./6.5-security-hardening.md)
- **Goal:** Add comprehensive security hardening.
- **Deliverables:** Security headers, input validation, request limits
### 6.6 Performance Optimization
- [Story: 6.6 - Performance Optimization](./6.6-performance-optimization.md)
- **Goal:** Optimize platform performance.
- **Deliverables:** Connection pooling, query optimization, compression, caching
## Deliverables Checklist
- [ ] Full OpenTelemetry integration
- [ ] Sentry error reporting
- [ ] Enhanced logging with correlation
- [ ] Comprehensive Prometheus metrics
- [ ] Grafana dashboards
- [ ] Rate limiting
- [ ] Security hardening
- [ ] Performance optimizations
## Acceptance Criteria
- Traces are exported and visible in Jaeger
- Errors are reported to Sentry with context
- Logs include request IDs and trace IDs
- Metrics are exposed and scraped by Prometheus
- Rate limiting prevents abuse
- Security headers are present
- Performance meets SLA (< 100ms p95 for auth endpoints)

View File

@@ -0,0 +1,83 @@
# Story 7.1: Comprehensive Testing Suite
## Metadata
- **Story ID**: 7.1
- **Title**: Comprehensive Testing Suite
- **Epic**: 7 - Testing, Documentation & CI/CD
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 10-12 hours
- **Dependencies**: All previous epics
## Goal
Achieve comprehensive test coverage with unit tests, integration tests, and contract tests.
## Description
This story implements a complete testing suite with >80% code coverage, integration tests using testcontainers, and contract tests for API validation.
## Deliverables
### 1. Unit Tests
- Achieve >80% code coverage for core modules:
- Config loader
- Logger
- Auth service
- Permission resolver
- Module registry
- Use `github.com/stretchr/testify` for assertions
- Use `go.uber.org/mock` (the maintained fork of the archived `github.com/golang/mock`) or `mockery` for mocks
- Test helpers:
- `testutil.NewTestDB()` - In-memory SQLite for tests
- `testutil.NewTestUser()` - Create test user
- `testutil.NewTestContext()` - Context with user
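The helpers listed above might look like this; the SQLite driver follows the in-memory choice named in the list, while the package layout, user type, and context key are assumptions:

```go
package testutil

import (
	"context"
	"database/sql"
	"testing"

	_ "github.com/mattn/go-sqlite3" // assumed SQLite driver
)

// NewTestDB opens an in-memory SQLite database that is closed when the test ends.
func NewTestDB(t *testing.T) *sql.DB {
	t.Helper()
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		t.Fatalf("open test db: %v", err)
	}
	t.Cleanup(func() { db.Close() })
	return db
}

// User stands in for the platform's real user type.
type User struct {
	ID    string
	Email string
}

func NewTestUser(t *testing.T) *User {
	t.Helper()
	return &User{ID: "test-user-id", Email: "test@example.com"}
}

type ctxKey struct{}

// NewTestContext returns a context carrying the given test user.
func NewTestContext(t *testing.T, u *User) context.Context {
	t.Helper()
	return context.WithValue(context.Background(), ctxKey{}, u)
}
```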
### 2. Integration Tests
- Install `github.com/testcontainers/testcontainers-go`
- Create integration test suite:
- Full HTTP request flow
- Database operations
- Event bus publishing/consuming
- Background job execution
- Test scenarios:
- User registration → login → API access
- Role assignment → permission check
- Module loading and initialization
- Multi-module interaction
- Create `docker-compose.test.yml`:
- PostgreSQL
- Redis
- Kafka (optional)
- Add test tags: `//go:build integration`
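A skeleton of a tagged integration test using testcontainers-go, under the assumptions that Postgres backs the flow and that the image tag and credentials below are placeholders:

```go
//go:build integration

package integration

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestUserFlow(t *testing.T) {
	ctx := context.Background()
	pg, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "postgres:16-alpine", // illustrative image
			Env:          map[string]string{"POSTGRES_PASSWORD": "test"},
			ExposedPorts: []string{"5432/tcp"},
			WaitingFor:   wait.ForListeningPort("5432/tcp"),
		},
		Started: true,
	})
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() { _ = pg.Terminate(ctx) })

	// The registration → login → API access scenario would run here
	// against a server wired to the container's connection string.
}
```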
### 3. Contract Tests
- Install `github.com/pact-foundation/pact-go` (optional)
- Create API contract tests:
- Verify API responses match OpenAPI spec
- Test backward compatibility
- Use OpenAPI validator:
- Install `github.com/getkin/kin-openapi`
- Validate request/response against OpenAPI spec
- Generate OpenAPI spec from code annotations
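Request validation with kin-openapi could look roughly like the sketch below; the spec path, endpoint, and choice of the `gorillamux` router are assumptions:

```go
package contract

import (
	"context"
	"net/http"
	"testing"

	"github.com/getkin/kin-openapi/openapi3"
	"github.com/getkin/kin-openapi/openapi3filter"
	"github.com/getkin/kin-openapi/routers/gorillamux"
)

func TestRequestMatchesSpec(t *testing.T) {
	loader := openapi3.NewLoader()
	doc, err := loader.LoadFromFile("openapi.yaml") // assumed spec location
	if err != nil {
		t.Fatal(err)
	}
	if err := doc.Validate(context.Background()); err != nil {
		t.Fatal(err)
	}
	router, err := gorillamux.NewRouter(doc)
	if err != nil {
		t.Fatal(err)
	}

	// Illustrative endpoint; real tests would iterate recorded requests.
	req, err := http.NewRequest(http.MethodGet, "http://localhost:8080/api/v1/users", nil)
	if err != nil {
		t.Fatal(err)
	}
	route, pathParams, err := router.FindRoute(req)
	if err != nil {
		t.Fatal(err)
	}
	if err := openapi3filter.ValidateRequest(context.Background(), &openapi3filter.RequestValidationInput{
		Request:    req,
		PathParams: pathParams,
		Route:      route,
	}); err != nil {
		t.Fatalf("request does not match spec: %v", err)
	}
}
```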
### 4. Load Testing
- Create `perf/` directory with k6 scripts:
- `perf/auth-load.js` - Login endpoint load test
- `perf/api-load.js` - General API load test
- Document performance benchmarks:
- Request latency (p50, p95, p99)
- Throughput (requests/second)
- Resource usage (CPU, memory)
## Acceptance Criteria
- [ ] All tests pass in CI
- [ ] Code coverage >80%
- [ ] Integration tests work with testcontainers
- [ ] Contract tests validate API
- [ ] Load tests are documented
## Files to Create/Modify
- `internal/testutil/` - Test utilities
- `docker-compose.test.yml` - Test containers
- `perf/` - Load test scripts
- All test files across the codebase

View File

@@ -0,0 +1,68 @@
# Story 7.2: Complete Documentation
## Metadata
- **Story ID**: 7.2
- **Title**: Complete Documentation
- **Epic**: 7 - Testing, Documentation & CI/CD
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 8-10 hours
- **Dependencies**: All previous epics
## Goal
Create comprehensive documentation covering architecture, API, operations, and developer guides.
## Description
This story creates complete documentation including README, architecture docs, API docs, operations guides, and code examples.
## Deliverables
### 1. Core Documentation
- **README.md**:
- Quick start guide
- Architecture overview
- Installation instructions
- Development setup
- **docs/architecture.md**:
- System architecture diagram
- Module system explanation
- Extension points
- **docs/extension-points.md**:
- How to create a module
- Permission system
- Event bus usage
- Background jobs
### 2. API Documentation
- **docs/api.md**:
- API endpoints documentation
- Authentication flow
- Error codes
- Request/response examples
### 3. Operations Documentation
- **docs/operations.md**:
- Deployment guide
- Monitoring setup
- Troubleshooting
- Grafana dashboards
### 4. Code Examples
- `examples/` directory with sample modules
- Code comments and godoc
- Tutorial examples
## Acceptance Criteria
- [ ] Documentation is complete and accurate
- [ ] All major features are documented
- [ ] Code examples work
- [ ] Documentation is accessible
## Files to Create/Modify
- `README.md` - Main documentation
- `docs/architecture.md` - Architecture docs
- `docs/extension-points.md` - Extension guide
- `docs/api.md` - API documentation
- `docs/operations.md` - Operations guide
- `examples/` - Code examples

View File

@@ -0,0 +1,51 @@
# Story 7.3: CI/CD Pipeline Enhancement
## Metadata
- **Story ID**: 7.3
- **Title**: CI/CD Pipeline Enhancement
- **Epic**: 7 - Testing, Documentation & CI/CD
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 5-6 hours
- **Dependencies**: 7.1
## Goal
Enhance CI/CD pipeline with comprehensive testing, security scanning, and release automation.
## Description
This story enhances the CI/CD pipeline to run comprehensive tests, perform security scanning, and automate releases with Docker image builds.
## Deliverables
### 1. Enhanced CI Pipeline
- Update `.github/workflows/ci.yml`:
- Run unit tests with coverage
- Run integration tests (with testcontainers)
- Run linters (golangci-lint, gosec)
- Generate coverage report
- Upload artifacts
### 2. Release Workflow
- Add release workflow:
- Semantic versioning
- Tag releases
- Build and push Docker images
- Generate changelog
### 3. Security Scanning
- `gosec` for security issues
- Dependabot for dependency updates
- Trivy for container scanning
## Acceptance Criteria
- [ ] CI pipeline runs all tests
- [ ] Coverage reports are generated
- [ ] Security scanning works
- [ ] Release workflow works
- [ ] Docker images are built and pushed
## Files to Create/Modify
- `.github/workflows/ci.yml` - Enhanced CI
- `.github/workflows/release.yml` - Release workflow
- `.github/dependabot.yml` - Dependabot config

View File

@@ -0,0 +1,77 @@
# Story 7.4: Docker and Deployment
## Metadata
- **Story ID**: 7.4
- **Title**: Docker and Deployment
- **Epic**: 7 - Testing, Documentation & CI/CD
- **Status**: Pending
- **Priority**: High
- **Estimated Time**: 6-8 hours
- **Dependencies**: All previous epics
## Goal
Create production-ready Docker images and comprehensive deployment guides.
## Description
This story creates multi-stage Dockerfiles, Docker Compose files, and deployment guides for various platforms.
## Deliverables
### 1. Docker Images
- Create multi-stage `Dockerfile`:
- Build stage with Go
- Runtime stage (distroless)
- Health checks
- Proper layer caching
### 2. Docker Compose
- Create `docker-compose.yml` for development:
- Platform service
- PostgreSQL
- Redis
- Kafka (optional)
- Create `docker-compose.prod.yml` for production
### 3. Deployment Guides
- **docs/deployment/kubernetes.md**:
- Kubernetes manifests
- Helm chart (optional)
- Service definitions
- ConfigMap and Secret management
- **docs/deployment/docker.md**:
- Docker Compose deployment
- Environment variables
- Volume mounts
- **docs/deployment/cloud.md**:
- AWS/GCP/Azure deployment notes
- Managed service integration
- Load balancer configuration
### 4. Developer Experience
- Create `Makefile` with common tasks:
- `make dev` - Start dev environment
- `make test` - Run tests
- `make lint` - Run linters
- `make generate` - Generate code
- `make docker-build` - Build Docker image
- `make migrate` - Run migrations
- Add development scripts:
- `scripts/dev.sh` - Start all services
- `scripts/test.sh` - Run test suite
- `scripts/seed.sh` - Seed test data
- Create `.env.example` with all config variables
## Acceptance Criteria
- [ ] Docker images build and run successfully
- [ ] Docker Compose works for development
- [ ] Deployment guides are tested
- [ ] New developers can set up environment in <30 minutes
## Files to Create/Modify
- `Dockerfile` - Multi-stage build
- `docker-compose.yml` - Development compose
- `docker-compose.prod.yml` - Production compose
- `docs/deployment/` - Deployment guides
- `Makefile` - Enhanced with more commands
- `scripts/` - Development scripts

View File

@@ -0,0 +1,42 @@
# Epic 7: Testing, Documentation & CI/CD
## Overview
Deliver comprehensive test coverage (unit, integration, contract), complete documentation, a production-ready CI/CD pipeline, Docker images with deployment guides, and developer tooling.
## Stories
### 7.1 Comprehensive Testing Suite
- [Story: 7.1 - Testing Suite](./7.1-testing-suite.md)
- **Goal:** Achieve comprehensive test coverage with unit tests, integration tests, and contract tests.
- **Deliverables:** Unit tests (>80% coverage), integration tests, contract tests, load tests
### 7.2 Complete Documentation
- [Story: 7.2 - Documentation](./7.2-documentation.md)
- **Goal:** Create comprehensive documentation covering architecture, API, operations, and developer guides.
- **Deliverables:** README, architecture docs, API docs, operations guides, code examples
### 7.3 CI/CD Pipeline Enhancement
- [Story: 7.3 - CI/CD Enhancement](./7.3-cicd-enhancement.md)
- **Goal:** Enhance CI/CD pipeline with comprehensive testing, security scanning, and release automation.
- **Deliverables:** Enhanced CI pipeline, release workflow, security scanning
### 7.4 Docker and Deployment
- [Story: 7.4 - Docker & Deployment](./7.4-docker-deployment.md)
- **Goal:** Create production-ready Docker images and comprehensive deployment guides.
- **Deliverables:** Docker images, Docker Compose, deployment guides, developer tooling
## Deliverables Checklist
- [ ] >80% test coverage
- [ ] Integration test suite
- [ ] Complete documentation
- [ ] Production CI/CD pipeline
- [ ] Docker images and deployment guides
- [ ] Developer tooling and scripts
## Acceptance Criteria
- All tests pass in CI
- Code coverage >80%
- Documentation is complete and accurate
- Docker images build and run successfully
- Deployment guides are tested
- New developers can set up environment in <30 minutes

View File

@@ -0,0 +1,47 @@
# Story 8.1: OpenID Connect (OIDC) Support
## Metadata
- **Story ID**: 8.1
- **Title**: OpenID Connect (OIDC) Support
- **Epic**: 8 - Advanced Features & Polish
- **Status**: Pending
- **Priority**: Low
- **Estimated Time**: 6-8 hours
- **Dependencies**: 2.1
## Goal
Add OpenID Connect (OIDC) support for external identity providers and OIDC provider capabilities.
## Description
This story implements OIDC client support for validating tokens from external IdPs and optional OIDC provider functionality.
## Deliverables
### 1. OIDC Client Support
- Install `github.com/coreos/go-oidc/v3`
- Validate tokens from external IdP
- Map claims to internal user
- Integration with authentication system
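Token validation with go-oidc might look like this minimal sketch; the issuer URL and client ID are placeholders, and the claim-to-user mapping is left as a stub:

```go
package auth

import (
	"context"
	"fmt"

	"github.com/coreos/go-oidc/v3/oidc"
)

// VerifyExternalToken validates an ID token issued by an external IdP.
func VerifyExternalToken(ctx context.Context, rawIDToken string) error {
	provider, err := oidc.NewProvider(ctx, "https://idp.example.com") // placeholder issuer
	if err != nil {
		return fmt.Errorf("discover provider: %w", err)
	}
	verifier := provider.Verifier(&oidc.Config{ClientID: "goplt"}) // placeholder client ID
	idToken, err := verifier.Verify(ctx, rawIDToken)
	if err != nil {
		return fmt.Errorf("verify token: %w", err)
	}
	// Map external claims to an internal user.
	var claims struct {
		Sub   string `json:"sub"`
		Email string `json:"email"`
	}
	if err := idToken.Claims(&claims); err != nil {
		return fmt.Errorf("parse claims: %w", err)
	}
	// Look up or create the internal user by claims.Sub / claims.Email here.
	return nil
}
```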
### 2. OIDC Provider (Optional)
- Discovery endpoint
- JWKS endpoint
- Token endpoint
- UserInfo endpoint
### 3. Documentation
- Document OIDC setup in `docs/auth.md`
- Configuration examples
- Integration guide
## Acceptance Criteria
- [ ] OIDC client validates external tokens
- [ ] Claims are mapped to internal users
- [ ] OIDC provider works (if implemented)
- [ ] Documentation is complete
## Files to Create/Modify
- `internal/auth/oidc_client.go` - OIDC client
- `internal/auth/oidc_provider.go` - OIDC provider (optional)
- `docs/auth.md` - OIDC documentation

Some files were not shown because too many files have changed in this diff