docs: update dead links

2025-11-05 11:00:36 +01:00
parent b4f8875a0e
commit 66b0c3b40d
20 changed files with 126 additions and 83 deletions


@@ -0,0 +1,585 @@
# Module Architecture
This document details module architecture: how modules are structured, how they interact with the core platform, and how multiple modules work together.
## Table of Contents
- [Module Structure](#module-structure)
- [Module Interface](#module-interface)
- [Module Lifecycle](#module-lifecycle)
- [Module Dependencies](#module-dependencies)
- [Module Communication](#module-communication)
- [Module Data Isolation](#module-data-isolation)
- [Module Examples](#module-examples)
## Module Structure
Every module follows a consistent structure that separates concerns and enables clean integration with the platform.
```mermaid
graph TD
subgraph "Module Structure"
Manifest[module.yaml<br/>Manifest]
subgraph "Public API (pkg/)"
ModuleInterface[IModule Interface]
ModuleTypes[Public Types]
end
subgraph "Internal Implementation (internal/)"
API[API Handlers]
Service[Domain Services]
Repo[Repositories]
Domain[Domain Models]
end
subgraph "Database Schema"
EntSchema[Ent Schemas]
Migrations[Migrations]
end
end
Manifest --> ModuleInterface
ModuleInterface --> API
API --> Service
Service --> Repo
Repo --> Domain
Repo --> EntSchema
EntSchema --> Migrations
style Manifest fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style ModuleInterface fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Service fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Module Directory Structure
```
modules/blog/
├── go.mod # Module dependencies
├── module.yaml # Module manifest
├── pkg/
│ └── module.go # IModule implementation
├── internal/
│ ├── api/
│ │ └── handler.go # HTTP handlers
│ ├── domain/
│ │ ├── post.go # Domain entities
│ │ └── post_repo.go # Repository interface
│ ├── service/
│ │ └── post_service.go # Business logic
│ └── ent/
│ ├── schema/
│ │ └── post.go # Ent schema
│ └── migrate/ # Migrations
└── tests/
└── integration_test.go
```
## Module Interface
All modules must implement the `IModule` interface to integrate with the platform.
```mermaid
classDiagram
class IModule {
<<interface>>
+Name() string
+Version() string
+Dependencies() []string
+Init() fx.Option
+Migrations() []MigrationFunc
+OnStart(ctx) error
+OnStop(ctx) error
}
class BlogModule {
+Name() string
+Version() string
+Dependencies() []string
+Init() fx.Option
+Migrations() []MigrationFunc
}
class BillingModule {
+Name() string
+Version() string
+Dependencies() []string
+Init() fx.Option
+Migrations() []MigrationFunc
}
IModule <|.. BlogModule
IModule <|.. BillingModule
```
### IModule Interface
```go
type IModule interface {
// Name returns a unique, human-readable identifier
Name() string
// Version returns the module version (semantic versioning)
Version() string
// Dependencies returns list of required modules (e.g., ["core >= 1.0.0"])
Dependencies() []string
// Init returns fx.Option that registers all module services
Init() fx.Option
// Migrations returns database migration functions
Migrations() []func(*ent.Client) error
// OnStart is called during application startup (optional)
OnStart(ctx context.Context) error
// OnStop is called during graceful shutdown (optional)
OnStop(ctx context.Context) error
}
```
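For illustration, here is a minimal module implementing a trimmed-down version of the interface. The `fx.Option` and `ent`-typed methods are omitted so the sketch stays self-contained, and `BlogModule` with its return values is hypothetical:
```go
package main

import (
	"context"
	"fmt"
)

// Module is a trimmed-down IModule for illustration only: the real
// interface also exposes Init() fx.Option and Migrations().
type Module interface {
	Name() string
	Version() string
	Dependencies() []string
	OnStart(ctx context.Context) error
	OnStop(ctx context.Context) error
}

// BlogModule is a hypothetical implementation.
type BlogModule struct{}

func (m *BlogModule) Name() string                      { return "blog" }
func (m *BlogModule) Version() string                   { return "1.0.0" }
func (m *BlogModule) Dependencies() []string            { return []string{"core >= 1.0.0"} }
func (m *BlogModule) OnStart(ctx context.Context) error { return nil }
func (m *BlogModule) OnStop(ctx context.Context) error  { return nil }

func main() {
	var m Module = &BlogModule{} // compile-time check: BlogModule satisfies Module
	fmt.Println(m.Name(), m.Version(), m.Dependencies())
}
```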
## Module Lifecycle
Modules go through a well-defined lifecycle from discovery to shutdown.
```mermaid
stateDiagram-v2
[*] --> Discovered: Module found
Discovered --> Validated: Check dependencies
Validated --> Loaded: Load module
Loaded --> Initialized: Call Init()
Initialized --> Migrated: Run migrations
Migrated --> Started: Call OnStart()
Started --> Running: Module active
Running --> Stopping: Shutdown signal
Stopping --> Stopped: Call OnStop()
Stopped --> [*]
Validated --> Rejected: Dependency check fails
Rejected --> [*]
```
### Module Initialization Sequence
```mermaid
sequenceDiagram
participant Main
participant Loader
participant Registry
participant Module
participant DI
participant Router
participant DB
participant Scheduler
Main->>Loader: DiscoverModules()
Loader->>Registry: Scan for modules
Registry-->>Loader: Module list
loop For each module
Loader->>Module: Load module
Module->>Registry: Register module
Registry->>Registry: Validate dependencies
end
Main->>Registry: GetAllModules()
Registry->>Registry: Resolve dependencies (topological sort)
Registry-->>Main: Ordered module list
Main->>DI: Create fx container
loop For each module (in dependency order)
Main->>Module: Init()
Module->>DI: fx.Provide(services)
Module->>Router: Register routes
Module->>Scheduler: Register jobs
Module->>DB: Register migrations
end
Main->>DB: Run migrations (core first)
Main->>DI: Start container
Main->>Module: OnStart() (optional)
Main->>Router: Start HTTP server
```
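The per-module loop in the middle of the sequence can be sketched as follows. The `Module` struct here is a simplified stand-in; in the real platform `Init()` returns an `fx.Option` that the DI container consumes:
```go
package main

import "fmt"

// Module is a simplified stand-in for IModule.
type Module struct {
	Name string
	Init func() error // stands in for Init() fx.Option
}

// initAll calls Init() on each module in the already-resolved
// dependency order, stopping at the first failure.
func initAll(ordered []Module) error {
	for _, m := range ordered {
		if err := m.Init(); err != nil {
			return fmt.Errorf("init %s: %w", m.Name, err)
		}
		fmt.Println("initialized", m.Name)
	}
	return nil
}

func main() {
	mods := []Module{
		{Name: "core", Init: func() error { return nil }},
		{Name: "blog", Init: func() error { return nil }},
	}
	if err := initAll(mods); err != nil {
		panic(err)
	}
}
```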
## Module Dependencies
Modules can depend on other modules, creating a dependency graph that must be resolved.
```mermaid
graph TD
Core[Core Kernel]
Blog[Blog Module]
Billing[Billing Module]
Analytics[Analytics Module]
Notifications[Notification Module]
Blog --> Core
Billing --> Core
Analytics --> Core
Notifications --> Core
Analytics --> Blog
Analytics --> Billing
Billing --> Blog
Notifications --> Blog
Notifications --> Billing
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Blog fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Billing fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Dependency Resolution
```mermaid
graph LR
subgraph "Module Dependency Graph"
M1[Module A<br/>depends on: Core]
M2[Module B<br/>depends on: Core, Module A]
M3[Module C<br/>depends on: Core, Module B]
Core[Core Kernel]
end
subgraph "Resolved Load Order"
Step1[1. Core Kernel]
Step2[2. Module A]
Step3[3. Module B]
Step4[4. Module C]
end
Core --> M1
M1 --> M2
M2 --> M3
Step1 --> Step2
Step2 --> Step3
Step3 --> Step4
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Step1 fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
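A dependency resolver along these lines can be implemented with Kahn's algorithm; the module names below are illustrative:
```go
package main

import "fmt"

// resolveOrder returns a load order in which every module appears
// after all of its dependencies, or an error on a dependency cycle.
func resolveOrder(deps map[string][]string) ([]string, error) {
	indegree := make(map[string]int)
	dependents := make(map[string][]string)
	for mod, ds := range deps {
		if _, ok := indegree[mod]; !ok {
			indegree[mod] = 0
		}
		for _, d := range ds {
			indegree[mod]++
			dependents[d] = append(dependents[d], mod)
			if _, ok := indegree[d]; !ok {
				indegree[d] = 0
			}
		}
	}
	var queue, order []string
	for mod, n := range indegree {
		if n == 0 {
			queue = append(queue, mod) // modules with no unmet dependencies
		}
	}
	for len(queue) > 0 {
		mod := queue[0]
		queue = queue[1:]
		order = append(order, mod)
		for _, dep := range dependents[mod] {
			indegree[dep]--
			if indegree[dep] == 0 {
				queue = append(queue, dep)
			}
		}
	}
	if len(order) != len(indegree) {
		return nil, fmt.Errorf("dependency cycle detected")
	}
	return order, nil
}

func main() {
	order, err := resolveOrder(map[string][]string{
		"moduleA": {"core"},
		"moduleB": {"core", "moduleA"},
		"moduleC": {"core", "moduleB"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(order) // core first, then A, B, C
}
```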
## Module Communication
Modules (services) communicate through service client interfaces. All inter-service communication uses gRPC (primary) or HTTP (fallback).
### Communication Patterns
```mermaid
graph TB
subgraph "Communication Patterns"
ServiceClients[Service Clients<br/>gRPC/HTTP]
Events[Event Bus<br/>Kafka]
Shared[Shared Infrastructure<br/>Redis, PostgreSQL]
end
subgraph "Blog Service"
BlogService[Blog Service]
BlogHandler[Blog Handler]
end
subgraph "Service Clients"
AuthClient[Auth Service Client]
IdentityClient[Identity Service Client]
AuthzClient[Authz Service Client]
end
subgraph "Core Services"
EventBus[Event Bus]
AuthService[Auth Service<br/>:8081]
IdentityService[Identity Service<br/>:8082]
end
subgraph "Analytics Service"
AnalyticsService[Analytics Service]
end
BlogHandler --> BlogService
BlogService -->|gRPC| AuthClient
BlogService -->|gRPC| IdentityClient
BlogService -->|gRPC| AuthzClient
BlogService -->|Publish| EventBus
EventBus -->|Subscribe| AnalyticsService
AuthClient --> AuthService
IdentityClient --> IdentityService
AuthzClient --> IdentityService
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style BlogService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AnalyticsService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style ServiceClients fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Event-Driven Communication
```mermaid
sequenceDiagram
participant BlogModule
participant EventBus
participant AnalyticsModule
participant NotificationModule
participant AuditModule
BlogModule->>EventBus: Publish("blog.post.created", event)
EventBus->>AnalyticsModule: Deliver event
EventBus->>NotificationModule: Deliver event
EventBus->>AuditModule: Deliver event
AnalyticsModule->>AnalyticsModule: Track post creation
NotificationModule->>NotificationModule: Send notification
AuditModule->>AuditModule: Log audit entry
```
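In production the bus is backed by Kafka; the fan-out itself can be sketched with an in-memory bus (topic names and handlers here are illustrative):
```go
package main

import "fmt"

// Event is a minimal event envelope.
type Event struct {
	Topic   string
	Payload map[string]any
}

// Bus fans each published event out to every subscriber of its topic.
type Bus struct {
	subscribers map[string][]func(Event)
}

func NewBus() *Bus {
	return &Bus{subscribers: make(map[string][]func(Event))}
}

func (b *Bus) Subscribe(topic string, handler func(Event)) {
	b.subscribers[topic] = append(b.subscribers[topic], handler)
}

func (b *Bus) Publish(e Event) {
	for _, h := range b.subscribers[e.Topic] {
		h(e)
	}
}

func main() {
	bus := NewBus()
	bus.Subscribe("blog.post.created", func(e Event) {
		fmt.Println("analytics: track", e.Payload["id"])
	})
	bus.Subscribe("blog.post.created", func(e Event) {
		fmt.Println("audit: log", e.Payload["id"])
	})
	bus.Publish(Event{Topic: "blog.post.created", Payload: map[string]any{"id": "42"}})
}
```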
## Module Data Isolation
Modules can have their own database tables while sharing core tables.
```mermaid
erDiagram
USERS ||--o{ USER_ROLES : has
ROLES ||--o{ USER_ROLES : assigned_to
ROLES ||--o{ ROLE_PERMISSIONS : has
PERMISSIONS ||--o{ ROLE_PERMISSIONS : assigned_to
BLOG_POSTS {
string id PK
string author_id FK
string title
string content
timestamp created_at
}
BILLING_SUBSCRIPTIONS {
string id PK
string user_id FK
string plan
timestamp expires_at
}
USERS ||--o{ BLOG_POSTS : creates
USERS ||--o{ BILLING_SUBSCRIPTIONS : subscribes
AUDIT_LOGS {
string id PK
string actor_id
string action
string target_id
jsonb metadata
}
USERS ||--o{ AUDIT_LOGS : performs
```
### Multi-Tenancy Data Isolation
```mermaid
graph TB
subgraph "Single Database"
subgraph "Core Tables"
Users[users<br/>tenant_id]
Roles[roles<br/>tenant_id]
end
subgraph "Blog Module Tables"
Posts[blog_posts<br/>tenant_id]
Comments[blog_comments<br/>tenant_id]
end
subgraph "Billing Module Tables"
Subscriptions[billing_subscriptions<br/>tenant_id]
Invoices[billing_invoices<br/>tenant_id]
end
end
subgraph "Query Filtering"
EntInterceptor[Ent Interceptor]
TenantFilter[WHERE tenant_id = ?]
end
Users --> EntInterceptor
Posts --> EntInterceptor
Subscriptions --> EntInterceptor
EntInterceptor --> TenantFilter
style EntInterceptor fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
```
## Module Examples
### Example: Blog Module
```mermaid
graph TB
subgraph "Blog Module"
BlogHandler[Blog Handler<br/>/api/v1/blog/posts]
BlogService[Post Service]
PostRepo[Post Repository]
PostEntity[Post Entity]
end
subgraph "Service Clients"
AuthClient[Auth Service Client<br/>gRPC]
AuthzClient[Authz Service Client<br/>gRPC]
IdentityClient[Identity Service Client<br/>gRPC]
AuditClient[Audit Service Client<br/>gRPC]
end
subgraph "Core Services"
EventBus[Event Bus<br/>Kafka]
CacheService[Cache Service<br/>Redis]
end
subgraph "Database"
PostsTable[(blog_posts)]
end
BlogHandler --> BlogService
BlogService -->|gRPC| AuthClient
BlogService -->|gRPC| AuthzClient
BlogService -->|gRPC| IdentityClient
BlogService -->|gRPC| AuditClient
BlogService --> PostRepo
BlogService --> EventBus
BlogService --> CacheService
PostRepo --> PostsTable
PostRepo --> PostEntity
style BlogService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AuthClient fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Module Integration Example
```mermaid
graph LR
subgraph "Request Flow"
Request[HTTP Request<br/>POST /api/v1/blog/posts]
Auth[Auth Middleware]
Authz[Authz Middleware]
Handler[Blog Handler]
Service[Blog Service]
Repo[Blog Repository]
DB[(Database)]
end
subgraph "Service Clients"
AuthClient[Auth Service Client]
IdentityClient[Identity Service Client]
AuthzClient[Authz Service Client]
end
subgraph "Side Effects"
EventBus[Event Bus]
AuditClient[Audit Service Client]
Cache[Cache]
end
Request --> Auth
Auth --> Authz
Authz --> Handler
Handler --> Service
Service --> Repo
Repo --> DB
Service -->|gRPC| AuthClient
Service -->|gRPC| IdentityClient
Service -->|gRPC| AuthzClient
Service -->|gRPC| AuditClient
Service --> EventBus
Service --> Cache
style Request fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Service fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AuthClient fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Module Registration Flow
```mermaid
flowchart TD
Start([Application Start]) --> LoadManifests["Load module.yaml files"]
LoadManifests --> ValidateDeps["Validate dependencies"]
ValidateDeps -->|Valid| SortModules["Topological sort modules"]
ValidateDeps -->|Invalid| Error([Error: Missing dependencies])
SortModules --> CreateDI["Create DI container"]
CreateDI --> RegisterCore["Register core services"]
RegisterCore --> LoopModules{"More modules?"}
LoopModules -->|Yes| LoadModule["Load module"]
LoadModule --> CallInit["Call module.Init()"]
CallInit --> RegisterServices["Register module services"]
RegisterServices --> RegisterRoutes["Register module routes"]
RegisterRoutes --> RegisterJobs["Register module jobs"]
RegisterJobs --> RegisterMigrations["Register module migrations"]
RegisterMigrations --> LoopModules
LoopModules -->|No| RunMigrations["Run all migrations"]
RunMigrations --> StartModules["Call OnStart() for each module"]
StartModules --> StartServer["Start HTTP server"]
StartServer --> Running([Application Running])
Running --> Shutdown([Shutdown Signal])
Shutdown --> StopServer["Stop HTTP server"]
StopServer --> StopModules["Call OnStop() for each module"]
StopModules --> Cleanup["Cleanup resources"]
Cleanup --> End([Application Stopped])
style Start fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Running fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Error fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Module Permissions Integration
Modules declare permissions that are automatically integrated into the permission system.
```mermaid
graph TB
subgraph "Permission Generation"
Manifest["module.yaml<br/>permissions: array"]
Generator["Permission Generator"]
GeneratedCode["pkg/perm/generated.go"]
end
subgraph "Permission Resolution"
Request["HTTP Request"]
AuthzMiddleware["Authz Middleware"]
PermissionResolver["Permission Resolver"]
UserRoles["User Roles"]
RolePermissions["Role Permissions"]
Response["HTTP Response"]
end
Manifest --> Generator
Generator --> GeneratedCode
GeneratedCode --> PermissionResolver
Request --> AuthzMiddleware
AuthzMiddleware --> PermissionResolver
PermissionResolver --> UserRoles
PermissionResolver --> RolePermissions
UserRoles --> PermissionResolver
RolePermissions --> PermissionResolver
PermissionResolver --> AuthzMiddleware
AuthzMiddleware --> Response
classDef generation fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
classDef resolution fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
class Manifest,Generator,GeneratedCode generation
class PermissionResolver resolution
```
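The generator's output might look roughly like the following; the permission names, the inline role table, and `hasPermission` are hypothetical stand-ins for the generated `pkg/perm` code and the database-backed resolver:
```go
package main

import "fmt"

// Hypothetical output of the permission generator in pkg/perm/generated.go,
// derived from the permissions array in module.yaml.
const (
	PermBlogPostCreate = "blog.post.create"
	PermBlogPostDelete = "blog.post.delete"
)

// rolePermissions maps roles to granted permissions (normally resolved
// from the role_permissions table; illustrated inline here).
var rolePermissions = map[string][]string{
	"editor": {PermBlogPostCreate},
	"admin":  {PermBlogPostCreate, PermBlogPostDelete},
}

// hasPermission reports whether any of the user's roles grants perm.
func hasPermission(roles []string, perm string) bool {
	for _, role := range roles {
		for _, p := range rolePermissions[role] {
			if p == perm {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(hasPermission([]string{"editor"}, PermBlogPostDelete)) // false: editors cannot delete
}
```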
## Next Steps
- [Module Requirements](./module-requirements.md) - Detailed requirements for each module
- [Component Relationships](./component-relationships.md) - How components interact
- [System Architecture](./architecture.md) - Overall system architecture


@@ -0,0 +1,753 @@
# System Architecture
This document provides a comprehensive overview of the Go Platform architecture, including system components, their relationships, and how modules integrate with the core platform.
## Table of Contents
- [High-Level Architecture](#high-level-architecture)
- [Layered Architecture](#layered-architecture)
- [Module System Architecture](#module-system-architecture)
- [Component Relationships](#component-relationships)
- [Data Flow](#data-flow)
- [Deployment Architecture](#deployment-architecture)
## High-Level Architecture
The Go Platform follows a **microservices architecture** where each module is an independent service:
- **Core Services**: Authentication, Identity, Authorization, Audit, etc.
- **Feature Services**: Blog, Billing, Analytics, etc. (modules)
- **Infrastructure Services**: Cache, Event Bus, Scheduler, etc.
All services communicate via gRPC (primary) or HTTP (fallback), with service discovery via a service registry. Services share infrastructure (PostgreSQL, Redis, Kafka) but are independently deployable and scalable.
```mermaid
graph TB
subgraph "Go Platform"
Core[Core Kernel]
Module1[Module 1<br/>Blog]
Module2[Module 2<br/>Billing]
Module3[Module N<br/>Custom]
end
subgraph "Infrastructure"
DB[(PostgreSQL)]
Cache[(Redis)]
Queue[Kafka/Event Bus]
Storage[S3/Blob Storage]
end
subgraph "External Services"
OIDC[OIDC Provider]
Email[Email Service]
Sentry[Sentry]
end
Core --> DB
Core --> Cache
Core --> Queue
Core --> Storage
Core --> OIDC
Core --> Email
Core --> Sentry
Module1 --> Core
Module2 --> Core
Module3 --> Core
Module1 --> DB
Module2 --> DB
Module3 --> DB
Module1 --> Queue
Module2 --> Queue
style Core fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Module1 fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Module2 fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Module3 fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Layered Architecture
The platform follows a **clean/hexagonal architecture** with clear separation of concerns across layers.
```mermaid
graph TD
subgraph "Presentation Layer"
HTTP[HTTP/REST API]
GraphQL[GraphQL API]
CLI[CLI Interface]
end
subgraph "Application Layer"
AuthMiddleware[Auth Middleware]
AuthzMiddleware[Authorization Middleware]
RateLimit[Rate Limiting]
Handlers[Request Handlers]
end
subgraph "Domain Layer"
Services[Domain Services]
Entities[Domain Entities]
Policies[Business Policies]
end
subgraph "Infrastructure Layer"
Repos[Repositories]
CacheAdapter[Cache Adapter]
EventBus[Event Bus]
Jobs[Scheduler/Jobs]
end
subgraph "Core Kernel"
DI[DI Container]
Config[Config Manager]
Logger[Logger]
Metrics[Metrics]
Health[Health Checks]
end
HTTP --> AuthMiddleware
GraphQL --> AuthMiddleware
CLI --> AuthMiddleware
AuthMiddleware --> AuthzMiddleware
AuthzMiddleware --> RateLimit
RateLimit --> Handlers
Handlers --> Services
Services --> Entities
Services --> Policies
Services --> Repos
Services --> CacheAdapter
Services --> EventBus
Services --> Jobs
Repos --> DB[(Database)]
CacheAdapter --> Cache[(Redis)]
EventBus --> Queue[(Kafka)]
Services --> DI
Repos --> DI
Handlers --> DI
DI --> Config
DI --> Logger
DI --> Metrics
DI --> Health
style DI fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Services fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Repos fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Module System Architecture
Modules are the building blocks of the platform. Each module can register services, routes, permissions, and background jobs.
```mermaid
graph TB
subgraph Lifecycle["Module Lifecycle"]
Discover["1. Discover Modules"]
Load["2. Load Module"]
Validate["3. Validate Dependencies"]
Init["4. Initialize Module"]
Start["5. Start Module"]
end
subgraph Registration["Module Registration"]
Static["Static Registration<br/>via init()"]
Dynamic["Dynamic Loading<br/>via .so files"]
end
subgraph Components["Module Components"]
Routes["HTTP Routes"]
Services["Services"]
Repos["Repositories"]
Perms["Permissions"]
Jobs["Background Jobs"]
Migrations["Database Migrations"]
end
Discover --> Load
Load --> Static
Load --> Dynamic
Static --> Validate
Dynamic --> Validate
Validate --> Init
Init --> Routes
Init --> Services
Init --> Repos
Init --> Perms
Init --> Jobs
Init --> Migrations
Routes --> Start
Services --> Start
Repos --> Start
Perms --> Start
Jobs --> Start
Migrations --> Start
classDef lifecycle fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
classDef registration fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
classDef components fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
classDef start fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
class Discover,Load,Validate,Init lifecycle
class Static,Dynamic registration
class Routes,Services,Repos,Perms,Jobs,Migrations components
class Start start
```
### Module Initialization Sequence
```mermaid
sequenceDiagram
participant Main
participant Loader
participant Registry
participant Module
participant DI
participant Router
participant DB
Main->>Loader: LoadModules()
Loader->>Registry: Discover modules
Registry-->>Loader: List of modules
loop For each module
Loader->>Module: Load module
Module->>Registry: Register(module)
Registry->>Registry: Validate dependencies
end
Main->>Registry: GetAllModules()
Registry-->>Main: Ordered module list
Main->>DI: Create container
loop For each module
Main->>Module: Init()
Module->>DI: Provide services
Module->>Router: Register routes
Module->>DB: Register migrations
end
Main->>DB: Run migrations
Main->>Router: Start HTTP server
```
## Component Relationships
This diagram shows how core components interact with each other and with modules.
```mermaid
graph TB
subgraph "Core Kernel Components"
ConfigMgr[Config Manager]
LoggerService[Logger Service]
DI[DI Container]
ModuleLoader[Module Loader]
HealthRegistry[Health Registry]
MetricsRegistry[Metrics Registry]
ErrorBus[Error Bus]
EventBus[Event Bus]
end
subgraph "Security Components"
AuthService[Auth Service]
AuthzService[Authorization Service]
TokenProvider[Token Provider]
PermissionResolver[Permission Resolver]
AuditService[Audit Service]
end
subgraph "Infrastructure Components"
DBClient[Database Client]
CacheClient[Cache Client]
Scheduler[Scheduler]
Notifier[Notifier]
end
subgraph "Module Components"
ModuleRoutes[Module Routes]
ModuleServices[Module Services]
ModuleRepos[Module Repositories]
end
DI --> ConfigMgr
DI --> LoggerService
DI --> ModuleLoader
DI --> HealthRegistry
DI --> MetricsRegistry
DI --> ErrorBus
DI --> EventBus
DI --> AuthService
DI --> AuthzService
DI --> DBClient
DI --> CacheClient
DI --> Scheduler
DI --> Notifier
AuthService --> TokenProvider
AuthzService --> PermissionResolver
AuthzService --> AuditService
ModuleServices --> DBClient
ModuleServices --> CacheClient
ModuleServices --> EventBus
ModuleServices --> AuthzService
ModuleRepos --> DBClient
ModuleRoutes --> AuthzService
Scheduler --> CacheClient
Notifier --> EventBus
ErrorBus --> LoggerService
ErrorBus --> Sentry
style DI fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style AuthService fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style ModuleServices fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Data Flow
### Request Processing Flow
```mermaid
sequenceDiagram
participant Client
participant Router
participant AuthMW[Auth Middleware]
participant AuthzMW[Authz Middleware]
participant RateLimit[Rate Limiter]
participant Handler
participant Service
participant Repo
participant DB
participant Cache
participant EventBus
participant Audit
Client->>Router: HTTP Request
Router->>AuthMW: Extract JWT
AuthMW->>AuthMW: Validate token
AuthMW->>Router: Add user to context
Router->>AuthzMW: Check permissions
AuthzMW->>AuthzMW: Resolve permissions
AuthzMW->>Router: Authorized
Router->>RateLimit: Check rate limits
RateLimit->>Cache: Get rate limit state
Cache-->>RateLimit: Rate limit status
RateLimit->>Router: Within limits
Router->>Handler: Process request
Handler->>Service: Business logic
Service->>Cache: Check cache
Cache-->>Service: Cache miss
Service->>Repo: Query data
Repo->>DB: Execute query
DB-->>Repo: Return data
Repo-->>Service: Domain entity
Service->>Cache: Store in cache
Service->>EventBus: Publish event
Service->>Audit: Record action
Service-->>Handler: Response data
Handler-->>Router: HTTP response
Router-->>Client: JSON response
```
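The cache interaction in the middle of the flow is the classic cache-aside pattern: check the cache, and on a miss load from the repository and populate the cache before returning. A minimal sketch with a map standing in for Redis:
```go
package main

import "fmt"

// getPost implements cache-aside: return from cache on a hit,
// otherwise load from the database and store for next time.
func getPost(id string, cache map[string]string, loadFromDB func(string) string) (string, bool) {
	if v, ok := cache[id]; ok {
		return v, true // cache hit
	}
	v := loadFromDB(id)
	cache[id] = v // populate for subsequent requests
	return v, false
}

func main() {
	cache := map[string]string{}
	dbCalls := 0
	load := func(id string) string {
		dbCalls++
		return "post-" + id
	}
	v, hit := getPost("42", cache, load)
	fmt.Println(v, hit) // post-42 false (miss, loaded from DB)
	v, hit = getPost("42", cache, load)
	fmt.Println(v, hit, dbCalls) // post-42 true 1 (served from cache)
}
```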
### Module Event Flow
```mermaid
graph LR
subgraph "Module A"
AService[Service A]
AHandler[Handler A]
end
subgraph "Event Bus"
Bus[Event Bus]
end
subgraph "Module B"
BService[Service B]
BHandler[Handler B]
end
subgraph "Module C"
CService[Service C]
end
AHandler --> AService
AService -->|Publish Event| Bus
Bus -->|Subscribe| BService
Bus -->|Subscribe| CService
BService --> BHandler
```
## Deployment Architecture
### Development Deployment
```mermaid
graph TB
subgraph "Developer Machine"
IDE[IDE/Editor]
Go[Go Runtime]
Docker[Docker]
end
subgraph "Local Services"
App[Platform App<br/>:8080]
DB[(PostgreSQL<br/>:5432)]
Redis[(Redis<br/>:6379)]
Kafka[Kafka<br/>:9092]
end
IDE --> Go
Go --> App
App --> DB
App --> Redis
App --> Kafka
Docker --> DB
Docker --> Redis
Docker --> Kafka
```
### Production Deployment
```mermaid
graph TB
subgraph "Load Balancer"
LB[Load Balancer<br/>HTTPS]
end
subgraph "Platform Instances"
App1[Platform Instance 1]
App2[Platform Instance 2]
App3[Platform Instance N]
end
subgraph "Database Cluster"
Primary[(PostgreSQL<br/>Primary)]
Replica[(PostgreSQL<br/>Replica)]
end
subgraph "Cache Cluster"
Redis1[(Redis<br/>Master)]
Redis2[(Redis<br/>Replica)]
end
subgraph "Message Queue"
Kafka1[Kafka Broker 1]
Kafka2[Kafka Broker 2]
Kafka3[Kafka Broker 3]
end
subgraph "Observability"
Prometheus[Prometheus]
Grafana[Grafana]
Jaeger[Jaeger]
Loki[Loki]
end
subgraph "External Services"
Sentry[Sentry]
S3[S3 Storage]
end
LB --> App1
LB --> App2
LB --> App3
App1 --> Primary
App2 --> Primary
App3 --> Primary
App1 --> Replica
App2 --> Replica
App3 --> Replica
App1 --> Redis1
App2 --> Redis1
App3 --> Redis1
App1 --> Kafka1
App2 --> Kafka2
App3 --> Kafka3
App1 --> Prometheus
App2 --> Prometheus
App3 --> Prometheus
Prometheus --> Grafana
App1 --> Jaeger
App2 --> Jaeger
App3 --> Jaeger
App1 --> Loki
App2 --> Loki
App3 --> Loki
App1 --> Sentry
App2 --> Sentry
App3 --> Sentry
App1 --> S3
App2 --> S3
App3 --> S3
style LB fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Primary fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Redis1 fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Core Kernel Components
The core kernel provides the foundation for all modules. Each component has specific responsibilities:
### Component Responsibilities
```mermaid
mindmap
root((Core Kernel))
Configuration
Load configs
Environment vars
Secret management
Dependency Injection
Service registration
Lifecycle management
Module wiring
Logging
Structured logs
Request correlation
Log levels
Observability
Metrics
Tracing
Health checks
Security
Authentication
Authorization
Audit logging
Module System
Module discovery
Module loading
Dependency resolution
```
## Module Integration Points
Modules integrate with the core through well-defined interfaces:
```mermaid
graph TB
subgraph "Core Kernel Interfaces"
IConfig[ConfigProvider]
ILogger[Logger]
IAuth[Authenticator]
IAuthz[Authorizer]
IEventBus[EventBus]
ICache[Cache]
IBlobStore[BlobStore]
IScheduler[Scheduler]
INotifier[Notifier]
end
subgraph "Module Implementation"
Module[Feature Module]
ModuleServices[Module Services]
ModuleRoutes[Module Routes]
end
Module --> IConfig
Module --> ILogger
ModuleServices --> IAuth
ModuleServices --> IAuthz
ModuleServices --> IEventBus
ModuleServices --> ICache
ModuleServices --> IBlobStore
ModuleServices --> IScheduler
ModuleServices --> INotifier
ModuleRoutes --> IAuthz
style IConfig fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Module fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Microservices Architecture
The platform is designed as **microservices from day one**, with each module being an independent service.
### Service Architecture
```mermaid
graph TB
subgraph "API Gateway"
Gateway[API Gateway<br/>Routing & Auth]
end
subgraph "Core Services"
AuthSvc[Auth Service<br/>:8081]
IdentitySvc[Identity Service<br/>:8082]
AuthzSvc[Authz Service<br/>:8083]
AuditSvc[Audit Service<br/>:8084]
end
subgraph "Feature Services"
BlogSvc[Blog Service<br/>:8091]
BillingSvc[Billing Service<br/>:8092]
AnalyticsSvc[Analytics Service<br/>:8093]
end
subgraph "Infrastructure"
DB[(PostgreSQL)]
Cache[(Redis)]
Queue[Kafka]
Registry[Service Registry]
end
Gateway --> AuthSvc
Gateway --> IdentitySvc
Gateway --> BlogSvc
Gateway --> BillingSvc
AuthSvc --> IdentitySvc
AuthSvc --> Registry
BlogSvc --> AuthzSvc
BlogSvc --> IdentitySvc
BlogSvc --> Registry
BillingSvc --> IdentitySvc
BillingSvc --> Registry
AuthSvc --> DB
IdentitySvc --> DB
BlogSvc --> DB
BillingSvc --> DB
AuthSvc --> Cache
BlogSvc --> Cache
BillingSvc --> Cache
BlogSvc --> Queue
BillingSvc --> Queue
AnalyticsSvc --> Queue
style Gateway fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Registry fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Service Communication
All inter-service communication uses service client interfaces:
```mermaid
graph TB
subgraph "Service Client Interface"
Interface[Service Interface<br/>pkg/services/]
end
subgraph "Implementations"
GRPC[gRPC Client<br/>Primary]
HTTP[HTTP Client<br/>Fallback]
end
subgraph "Service Registry"
Registry[Service Registry<br/>Discovery & Resolution]
end
Interface --> GRPC
Interface --> HTTP
Registry --> GRPC
Registry --> HTTP
style Interface fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Registry fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Service Communication Patterns
The platform uses three communication patterns:
1. **Synchronous Service Calls** (via Service Clients):
- gRPC calls (primary) - type-safe, efficient
- HTTP/REST calls (fallback) - for external integration
- All calls go through service client interfaces
- Service discovery via registry
2. **Asynchronous Events** (via Event Bus):
- Distributed via Kafka
- Preferred for cross-service communication
- Event-driven architecture for loose coupling
3. **Shared Infrastructure** (For state):
- Redis for cache and distributed state
- PostgreSQL for persistent data
- Kafka for events
### Service Registry
The service registry enables service discovery and resolution:
```mermaid
graph TB
subgraph "Service Registry"
Registry[Service Registry Interface]
Consul[Consul Registry]
K8s[K8s Service Discovery]
Etcd[etcd Registry]
end
subgraph "Services"
AuthSvc[Auth Service]
IdentitySvc[Identity Service]
BlogSvc[Blog Service]
end
Registry --> Consul
Registry --> K8s
Registry --> Etcd
Consul --> AuthSvc
K8s --> IdentitySvc
Etcd --> BlogSvc
style Registry fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
```
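The registry interface can be sketched as follows, with an in-memory implementation standing in for the Consul, Kubernetes, or etcd backends (method names are illustrative):
```go
package main

import "fmt"

// ServiceRegistry is the discovery interface; production backends
// include Consul, Kubernetes service discovery, and etcd.
type ServiceRegistry interface {
	Register(name, addr string)
	Resolve(name string) (string, error)
}

// memRegistry is an in-memory implementation for tests and local development.
type memRegistry struct {
	addrs map[string]string
}

func newMemRegistry() *memRegistry {
	return &memRegistry{addrs: make(map[string]string)}
}

func (r *memRegistry) Register(name, addr string) { r.addrs[name] = addr }

func (r *memRegistry) Resolve(name string) (string, error) {
	addr, ok := r.addrs[name]
	if !ok {
		return "", fmt.Errorf("service %q not registered", name)
	}
	return addr, nil
}

func main() {
	var reg ServiceRegistry = newMemRegistry()
	reg.Register("auth", "localhost:8081")
	addr, err := reg.Resolve("auth")
	fmt.Println(addr, err)
}
```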
### Scaling Strategy
#### Independent Service Scaling
- Scale individual services based on load
- Independent resource allocation
- Independent deployment
- Better resource utilization
- Team autonomy
#### Development Mode
- For local development, multiple services can run in the same process
- Services still communicate via gRPC/HTTP (no direct calls)
- Docker Compose for easy local setup
- Maintains microservices architecture even in development
## Next Steps
- [Module Architecture](./architecture-modules.md) - Detailed module architecture and design
- [Module Requirements](./module-requirements.md) - Requirements for each module
- [Component Relationships](./component-relationships.md) - Detailed component interactions
- [ADRs](../adr/README.md) - Architecture Decision Records
- [ADR-0029: Microservices Architecture](../adr/0029-microservices-architecture.md) - Microservices strategy
- [ADR-0030: Service Communication](../adr/0030-service-communication-strategy.md) - Communication patterns


@@ -0,0 +1,481 @@
# Component Relationships
This document details how different components of the Go Platform interact with each other, including dependency relationships, data flow, and integration patterns.
## Table of Contents
- [Core Component Dependencies](#core-component-dependencies)
- [Module to Core Integration](#module-to-core-integration)
- [Service Interaction Patterns](#service-interaction-patterns)
- [Data Flow Patterns](#data-flow-patterns)
- [Dependency Graph](#dependency-graph)
## Core Component Dependencies
The core kernel components have well-defined dependencies that form the foundation of the platform.
```mermaid
graph TD
subgraph "Foundation Layer"
Config[Config Manager]
Logger[Logger Service]
end
subgraph "DI Layer"
DI[DI Container]
end
subgraph "Infrastructure Layer"
DB[Database Client]
Cache[Cache Client]
EventBus[Event Bus]
Scheduler[Scheduler]
end
subgraph "Security Layer"
Auth[Auth Service]
Authz[Authz Service]
Audit[Audit Service]
end
subgraph "Observability Layer"
Metrics[Metrics Registry]
Health[Health Registry]
Tracer[OpenTelemetry Tracer]
end
Config --> Logger
Config --> DI
Logger --> DI
DI --> DB
DI --> Cache
DI --> EventBus
DI --> Scheduler
DI --> Auth
DI --> Authz
DI --> Audit
DI --> Metrics
DI --> Health
DI --> Tracer
Auth --> DB
Authz --> DB
Authz --> Cache
Audit --> DB
DB --> Tracer
Cache --> Tracer
EventBus --> Tracer
style Config fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style DI fill:#50c878,stroke:#2e7d4e,stroke-width:3px,color:#fff
style Auth fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Module to Core Integration
Modules (services) integrate with core services through service client interfaces. All communication uses gRPC or HTTP.
```mermaid
graph LR
subgraph "Feature Service (e.g., Blog)"
ModuleHandler[Module Handler]
ModuleService[Module Service]
ModuleRepo[Module Repository]
end
subgraph ServiceClients["Service Clients"]
AuthClient[Auth Service Client]
AuthzClient[Authz Service Client]
IdentityClient[Identity Service Client]
AuditClient[Audit Service Client]
end
subgraph "Core Services"
AuthService[Auth Service<br/>:8081]
AuthzService[Authz Service<br/>:8083]
IdentityService[Identity Service<br/>:8082]
AuditService[Audit Service<br/>:8084]
EventBusService[Event Bus<br/>Kafka]
CacheService[Cache Service<br/>Redis]
end
subgraph "Infrastructure"
DBClient[Database Client]
CacheClient[Cache Client]
QueueClient[Message Queue]
end
ModuleHandler --> ModuleService
ModuleService -->|gRPC| AuthClient
ModuleService -->|gRPC| AuthzClient
ModuleService -->|gRPC| IdentityClient
ModuleService -->|gRPC| AuditClient
ModuleService --> ModuleRepo
ModuleService --> EventBusService
ModuleService --> CacheService
AuthClient --> AuthService
AuthzClient --> AuthzService
IdentityClient --> IdentityService
AuditClient --> AuditService
ModuleRepo --> DBClient
CacheService --> CacheClient
EventBusService --> QueueClient
style ModuleService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AuthService fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style DBClient fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style ServiceClients fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Service Interaction Patterns
### Authentication Flow
```mermaid
sequenceDiagram
participant Client
participant Router
participant AuthMiddleware
participant AuthService
participant TokenProvider
participant UserRepo
participant DB
Client->>Router: POST /api/v1/auth/login
Router->>AuthMiddleware: Extract credentials
AuthMiddleware->>AuthService: Authenticate(email, password)
AuthService->>UserRepo: FindByEmail(email)
UserRepo->>DB: Query user
DB-->>UserRepo: User data
UserRepo-->>AuthService: User entity
AuthService->>AuthService: Verify password
AuthService->>TokenProvider: GenerateAccessToken(user)
AuthService->>TokenProvider: GenerateRefreshToken(user)
TokenProvider-->>AuthService: Tokens
AuthService->>DB: Store refresh token
AuthService-->>AuthMiddleware: Auth response
AuthMiddleware-->>Router: Tokens
Router-->>Client: JSON response with tokens
```
### Authorization Flow
```mermaid
sequenceDiagram
participant Request
participant AuthzMiddleware
participant Authorizer
participant PermissionResolver
participant Cache
participant UserRepo
participant RoleRepo
participant DB
Request->>AuthzMiddleware: HTTP request + permission
AuthzMiddleware->>Authorizer: Authorize(ctx, permission)
Authorizer->>Authorizer: Extract user from context
Authorizer->>PermissionResolver: HasPermission(user, permission)
PermissionResolver->>Cache: Check cache
Cache-->>PermissionResolver: Cache miss
PermissionResolver->>UserRepo: GetUserRoles(userID)
UserRepo->>DB: Query user_roles
DB-->>UserRepo: Role IDs
UserRepo-->>PermissionResolver: Roles
PermissionResolver->>RoleRepo: GetRolePermissions(roleIDs)
RoleRepo->>DB: Query role_permissions
DB-->>RoleRepo: Permissions
RoleRepo-->>PermissionResolver: Permission list
PermissionResolver->>PermissionResolver: Check if permission in list
PermissionResolver->>Cache: Store in cache
PermissionResolver-->>Authorizer: Has permission: true/false
Authorizer-->>AuthzMiddleware: Authorized or error
AuthzMiddleware-->>Request: Continue or 403
```
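The cache-backed permission resolution above can be sketched in Go. This is a minimal in-memory stand-in, not the platform's real API: `permissionCache` takes the place of Redis, and `loadFromDB` stands in for the `user_roles` and `role_permissions` queries.

```go
package main

import (
	"fmt"
	"sync"
)

// permissionCache is a simplified stand-in for the Redis-backed cache.
type permissionCache struct {
	mu    sync.RWMutex
	perms map[string][]string // userID -> permissions
}

func (c *permissionCache) get(userID string) ([]string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	p, ok := c.perms[userID]
	return p, ok
}

func (c *permissionCache) set(userID string, perms []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.perms[userID] = perms
}

// resolvePermissions mirrors the flow above: check the cache first,
// fall back to the role/permission lookup, then populate the cache.
func resolvePermissions(cache *permissionCache, userID string, loadFromDB func(string) []string) []string {
	if perms, ok := cache.get(userID); ok {
		return perms // cache hit
	}
	perms := loadFromDB(userID) // cache miss: user_roles + role_permissions
	cache.set(userID, perms)
	return perms
}

func hasPermission(perms []string, want string) bool {
	for _, p := range perms {
		if p == want {
			return true
		}
	}
	return false
}

func main() {
	cache := &permissionCache{perms: map[string][]string{}}
	loads := 0
	loadFromDB := func(userID string) []string {
		loads++
		return []string{"blog.post.read", "blog.post.create"}
	}
	p1 := resolvePermissions(cache, "u1", loadFromDB)
	p2 := resolvePermissions(cache, "u1", loadFromDB) // served from cache
	fmt.Println(hasPermission(p1, "blog.post.create"), hasPermission(p2, "blog.post.delete"), loads)
	// → true false 1
}
```

The second resolution never touches the database, which is exactly why the diagram stores the permission list in cache before returning.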
### Event Publishing Flow
```mermaid
sequenceDiagram
participant ModuleService
participant EventBus
participant Kafka
participant Subscriber1
participant Subscriber2
ModuleService->>EventBus: Publish(topic, event)
EventBus->>EventBus: Serialize event
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
Kafka->>Subscriber1: Deliver event
Kafka->>Subscriber2: Deliver event
Subscriber1->>Subscriber1: Process event
Subscriber2->>Subscriber2: Process event
```
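The fan-out semantics of the publishing flow can be illustrated with an in-process sketch. Channels stand in for Kafka topics here; the names and the buffered-channel delivery are illustrative only, assuming each subscriber receives its own copy of every event on the topic.

```go
package main

import "fmt"

// bus is a toy in-process stand-in for the Kafka-backed event bus:
// every subscriber to a topic receives its own copy of each event.
type bus struct {
	subs map[string][]chan string
}

func newBus() *bus { return &bus{subs: map[string][]chan string{}} }

func (b *bus) subscribe(topic string) <-chan string {
	ch := make(chan string, 8)
	b.subs[topic] = append(b.subs[topic], ch)
	return ch
}

func (b *bus) publish(topic, event string) {
	for _, ch := range b.subs[topic] {
		ch <- event // fan-out: each subscriber gets the event
	}
}

func main() {
	b := newBus()
	s1 := b.subscribe("blog.post.created")
	s2 := b.subscribe("blog.post.created")
	b.publish("blog.post.created", `{"post_id":"42"}`)
	fmt.Println(<-s1)
	fmt.Println(<-s2)
}
```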
## Data Flow Patterns
### Request to Response Flow
```mermaid
graph LR
Client[Client] -->|HTTP Request| LB[Load Balancer]
LB -->|Route| Server1[Instance 1]
LB -->|Route| Server2[Instance 2]
Server1 --> AuthMW[Auth Middleware]
AuthMW --> AuthzMW[Authz Middleware]
AuthzMW --> RateLimit[Rate Limiter]
RateLimit --> Handler[Request Handler]
Handler --> Service[Domain Service]
Service --> Cache[Cache Check]
Service --> Repo[Repository]
Repo --> DB[(Database)]
Service --> EventBus[Event Bus]
Service --> Audit[Audit Log]
Handler -->|Response| Server1
Server1 -->|HTTP Response| LB
LB -->|Response| Client
style Server1 fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Service fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Caching Flow
```mermaid
graph TD
Request[Service Request] --> CacheCheck{Cache Hit?}
CacheCheck -->|Yes| CacheGet[Get from Cache]
CacheCheck -->|No| DBQuery[Query Database]
DBQuery --> DBResponse[Database Response]
DBResponse --> CacheStore[Store in Cache]
CacheStore --> Return[Return Data]
CacheGet --> Return
style CacheCheck fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style CacheGet fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style DBQuery fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
## Dependency Graph
Complete dependency graph showing all components and their relationships.
```mermaid
graph TB
subgraph "Application Entry"
Main[Main Application]
end
subgraph "Core Kernel"
Config[Config]
Logger[Logger]
DI[DI Container]
ModuleLoader[Module Loader]
end
subgraph "Security"
Auth[Auth]
Authz[Authz]
Identity[Identity]
Audit[Audit]
end
subgraph "Infrastructure"
DB[Database]
Cache[Cache]
EventBus[Event Bus]
Scheduler[Scheduler]
BlobStore[Blob Store]
Notifier[Notifier]
end
subgraph "Observability"
Metrics[Metrics]
Health[Health]
Tracer[Tracer]
ErrorBus[Error Bus]
end
subgraph "Module"
ModuleHandler[Module Handler]
ModuleService[Module Service]
ModuleRepo[Module Repo]
end
Main --> Config
Main --> Logger
Main --> DI
Main --> ModuleLoader
Config --> Logger
Config --> DI
DI --> Auth
DI --> Authz
DI --> Identity
DI --> Audit
DI --> DB
DI --> Cache
DI --> EventBus
DI --> Scheduler
DI --> BlobStore
DI --> Notifier
DI --> Metrics
DI --> Health
DI --> Tracer
DI --> ErrorBus
Auth --> Identity
Auth --> DB
Authz --> Identity
Authz --> Cache
Authz --> Audit
Audit --> DB
Audit --> Logger
ModuleLoader --> DI
ModuleHandler --> ModuleService
ModuleService --> ModuleRepo
ModuleService -->|gRPC| Auth
ModuleService -->|gRPC| Authz
ModuleService -->|gRPC| Identity
ModuleService -->|gRPC| Audit
ModuleService --> EventBus
ModuleService --> Cache
ModuleRepo --> DB
Scheduler --> Cache
Notifier --> EventBus
ErrorBus --> Logger
ErrorBus --> Sentry
DB --> Tracer
Cache --> Tracer
EventBus --> Tracer
style Main fill:#4a90e2,stroke:#2e5c8a,stroke-width:4px,color:#fff
style DI fill:#50c878,stroke:#2e7d4e,stroke-width:3px,color:#fff
style ModuleService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Component Interaction Matrix
| Component | Depends On | Used By |
|-----------|-----------|---------|
| Config | None | All components |
| Logger | Config | All components |
| DI Container | Config, Logger | All components |
| Auth Service | Identity, DB | Auth Middleware, Modules |
| Authz Service | Identity, Cache, Audit | Authz Middleware, Modules |
| Identity Service | DB, Cache, Notifier | Auth, Authz, Modules |
| Database Client | Config, Logger, Tracer | All repositories |
| Cache Client | Config, Logger | Authz, Scheduler, Modules |
| Event Bus | Config, Logger, Tracer | Modules, Notifier |
| Scheduler | Cache, Logger | Modules |
| Error Bus | Logger | All components (via panic recovery) |
## Integration Patterns
### Module Service Integration
```mermaid
graph TB
subgraph "Module Layer"
Handler[HTTP Handler]
Service[Domain Service]
Repo[Repository]
end
subgraph "Core Services"
Auth[Auth Service]
Authz[Authz Service]
EventBus[Event Bus]
Cache[Cache]
Audit[Audit]
end
subgraph "Infrastructure"
DB[(Database)]
Redis[(Redis)]
Kafka[Kafka]
end
Handler --> Auth
Handler --> Authz
Handler --> Service
Service --> Repo
Service --> EventBus
Service --> Cache
Service --> Audit
Repo --> DB
Cache --> Redis
EventBus --> Kafka
Audit --> DB
style Service fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style Auth fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style DB fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Cross-Service Communication
```mermaid
graph LR
subgraph "Blog Service"
BlogService[Blog Service]
end
subgraph "Analytics Service"
AnalyticsService[Analytics Service]
end
subgraph ServiceClients["Service Clients"]
AuthzClient[Authz Service Client]
IdentityClient[Identity Service Client]
end
subgraph "Core Services"
EventBus[Event Bus<br/>Kafka]
AuthzService[Authz Service<br/>:8083]
IdentityService[Identity Service<br/>:8082]
Cache[Cache<br/>Redis]
end
BlogService -->|gRPC| AuthzClient
BlogService -->|gRPC| IdentityClient
BlogService -->|Publish Event| EventBus
EventBus -->|Subscribe| AnalyticsService
BlogService -->|Cache Access| Cache
AnalyticsService -->|Cache Access| Cache
AuthzClient --> AuthzService
IdentityClient --> IdentityService
style BlogService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AnalyticsService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style ServiceClients fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Next Steps
- [System Architecture](./architecture.md) - Overall system architecture
- [Module Architecture](./architecture-modules.md) - Module design and integration
- [Module Requirements](./module-requirements.md) - Detailed module requirements

View File

@@ -0,0 +1,423 @@
# Data Flow Patterns
## Purpose
This document describes how data flows through the Go Platform system, covering request/response flows, event flows, cache patterns, and observability data collection.
## Overview
Data flows through the platform in multiple patterns depending on the type of operation. Understanding these patterns helps in debugging, performance optimization, and system design decisions.
## Key Concepts
- **Request Flow**: Data flow from HTTP request to response
- **Event Flow**: Asynchronous data flow through event bus
- **Cache Flow**: Data flow through caching layers
- **Observability Flow**: Telemetry data collection and export
## Request/Response Data Flow
### Standard HTTP Request Flow
Complete data flow from HTTP request to response.
```mermaid
graph TD
Start[HTTP Request] --> Auth[Authentication]
Auth -->|Valid| Authz[Authorization]
Auth -->|Invalid| Error1[401 Response]
Authz -->|Authorized| Handler[Request Handler]
Authz -->|Unauthorized| Error2[403 Response]
Handler --> Service[Domain Service]
Service --> Cache{Cache Check}
Cache -->|Hit| CacheData[Return Cached Data]
Cache -->|Miss| Repo[Repository]
Repo --> DB[(Database)]
DB --> Repo
Repo --> Service
Service --> CacheStore[Update Cache]
Service --> EventBus[Publish Events]
Service --> Audit[Audit Log]
Service --> Metrics[Update Metrics]
Service --> Handler
Handler --> Response[HTTP Response]
CacheData --> Response
Error1 --> Response
Error2 --> Response
Response --> Client[Client]
style Auth fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
style Service fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Cache fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Request Data Transformation
How request data is transformed as it flows through the system.
```mermaid
sequenceDiagram
participant Client
participant Handler
participant Service
participant Repo
participant DB
Client->>Handler: HTTP Request (JSON)
Handler->>Handler: Parse JSON
Handler->>Handler: Validate request
Handler->>Handler: Convert to DTO
Handler->>Service: Business DTO
Service->>Service: Business logic
Service->>Service: Domain entity
Service->>Repo: Domain entity
Repo->>Repo: Convert to DB model
Repo->>DB: SQL query
DB-->>Repo: DB result
Repo->>Repo: Convert to domain entity
Repo-->>Service: Domain entity
Service->>Service: Business logic
Service->>Service: Response DTO
Service-->>Handler: Response DTO
Handler->>Handler: Convert to JSON
Handler-->>Client: HTTP Response (JSON)
```
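The Handler-side transformation (parse JSON, validate, convert to a domain entity) can be sketched as follows. `createPostRequest` and `post` are illustrative type names, not the platform's actual DTOs.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

// createPostRequest is the wire-level DTO; post is the domain entity.
type createPostRequest struct {
	Title string `json:"title"`
	Body  string `json:"body"`
}

type post struct {
	Title     string
	Body      string
	CreatedAt time.Time
}

// toDomain validates the DTO and converts it to a domain entity,
// mirroring the Handler -> Service hand-off in the sequence above.
func toDomain(raw []byte) (post, error) {
	var req createPostRequest
	if err := json.Unmarshal(raw, &req); err != nil {
		return post{}, err
	}
	if strings.TrimSpace(req.Title) == "" {
		return post{}, fmt.Errorf("title is required")
	}
	return post{Title: req.Title, Body: req.Body, CreatedAt: time.Now()}, nil
}

func main() {
	p, err := toDomain([]byte(`{"title":"Hello","body":"First post"}`))
	fmt.Println(p.Title, err) // → Hello <nil>
	_, err = toDomain([]byte(`{"title":" "}`))
	fmt.Println(err)
}
```

The repository layer performs the symmetric conversion between domain entities and database models before and after each query.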
## Event Data Flow
### Event Publishing Flow
How events are published and flow through the event bus.
```mermaid
graph LR
Publisher[Event Publisher] --> Serialize[Serialize Event]
Serialize --> Metadata[Add Metadata]
Metadata --> EventBus[Event Bus]
EventBus --> Topic[Kafka Topic]
Topic --> Subscriber1[Subscriber 1]
Topic --> Subscriber2[Subscriber 2]
Topic --> SubscriberN[Subscriber N]
Subscriber1 --> Process1[Process Event]
Subscriber2 --> Process2[Process Event]
SubscriberN --> ProcessN[Process Event]
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Topic fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Event Data Transformation
How event data is transformed during publishing and consumption.
```mermaid
sequenceDiagram
participant Publisher
participant EventBus
participant Kafka
participant Subscriber
Publisher->>Publisher: Domain event
Publisher->>EventBus: Publish(event)
EventBus->>EventBus: Serialize to JSON
EventBus->>EventBus: Add metadata:<br/>trace_id, user_id,<br/>timestamp, source
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
Kafka->>Subscriber: Deliver event
Subscriber->>Subscriber: Deserialize JSON
Subscriber->>Subscriber: Extract metadata
Subscriber->>Subscriber: Domain event
Subscriber->>Subscriber: Process event
```
## Cache Data Flow
### Cache-Aside Pattern Flow
How data flows through cache using the cache-aside pattern.
```mermaid
graph TD
Start[Service Request] --> Check{Cache Hit?}
Check -->|Yes| GetCache[Get from Cache]
Check -->|No| GetDB[Query Database]
GetCache --> Deserialize[Deserialize Data]
Deserialize --> Return[Return Data]
GetDB --> DB[(Database)]
DB --> DBData[Database Result]
DBData --> Serialize[Serialize Data]
Serialize --> StoreCache[Store in Cache]
StoreCache --> Return
style Check fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style StoreCache fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
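The cache-aside read path above can be reduced to a few lines. A plain map stands in for Redis here; a real implementation would add TTLs, serialization, and error handling.

```go
package main

import "fmt"

// getPost demonstrates the cache-aside read path: try the cache, fall
// back to the database, then populate the cache for subsequent reads.
func getPost(cache map[string]string, id string, queryDB func(string) string) string {
	if v, ok := cache[id]; ok {
		return v // cache hit
	}
	v := queryDB(id) // cache miss
	cache[id] = v    // store for subsequent reads
	return v
}

func main() {
	cache := map[string]string{}
	dbCalls := 0
	queryDB := func(id string) string {
		dbCalls++
		return "post-" + id
	}
	fmt.Println(getPost(cache, "1", queryDB))
	fmt.Println(getPost(cache, "1", queryDB), dbCalls) // second read hits cache
	// → post-1
	// → post-1 1
}
```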
### Cache Invalidation Flow
How cache is invalidated when data changes.
```mermaid
sequenceDiagram
participant Service
participant Repository
participant DB
participant Cache
Service->>Repository: Update entity
Repository->>DB: Update database
DB-->>Repository: Update complete
Repository->>Cache: Invalidate(key)
Cache->>Cache: Remove from cache
Cache-->>Repository: Invalidated
Repository-->>Service: Update complete
Note over Service,Cache: Next read will fetch from DB and cache
```
### Cache Write-Through Pattern
How data is written through cache to database.
```mermaid
sequenceDiagram
participant Service
participant Cache
participant Repository
participant DB
Service->>Cache: Write data
Cache->>Cache: Store in cache
Cache->>Repository: Write to database
Repository->>DB: Insert/Update
DB-->>Repository: Success
Repository-->>Cache: Write complete
Cache-->>Service: Data written
```
## Observability Data Flow
### Tracing Data Flow
How distributed tracing data flows through the system.
```mermaid
graph TD
Request[HTTP Request] --> Trace[Start Trace]
Trace --> Span1[HTTP Span]
Span1 --> Service[Service Call]
Service --> Span2[Service Span]
Span2 --> DB[Database Query]
DB --> Span3[DB Span]
Span2 --> gRPC[gRPC Call]
gRPC --> Span4[gRPC Span]
Span3 --> Aggregate[Collect Spans]
Span4 --> Aggregate
Aggregate --> Export[Export to Collector]
Export --> Collector[OpenTelemetry Collector]
Collector --> Backend[Backend Storage]
style Trace fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Aggregate fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Metrics Data Flow
How metrics are collected and exported.
```mermaid
sequenceDiagram
participant Service
participant MetricsRegistry
participant Exporter
participant Prometheus
participant Grafana
Service->>Service: Business operation
Service->>MetricsRegistry: Increment counter
Service->>MetricsRegistry: Record duration
Service->>MetricsRegistry: Set gauge
MetricsRegistry->>MetricsRegistry: Aggregate metrics
Prometheus->>Exporter: Scrape metrics
Exporter->>MetricsRegistry: Get metrics
MetricsRegistry-->>Exporter: Metrics data
Exporter-->>Prometheus: Prometheus format
Prometheus->>Prometheus: Store metrics
Grafana->>Prometheus: Query metrics
Prometheus-->>Grafana: Metrics data
Grafana->>Grafana: Render dashboard
```
### Log Data Flow
How logs flow through the system to various sinks.
```mermaid
graph TD
Service[Service] --> Logger[Logger]
Logger --> Format[Format Log]
Format --> Output[Output Log]
Output --> Stdout[stdout]
Output --> File[File]
Output --> LogCollector[Log Collector]
LogCollector --> Elasticsearch[Elasticsearch]
LogCollector --> CloudLogging[Cloud Logging]
Stdout --> Container[Container Logs]
style Logger fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style LogCollector fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Audit Data Flow
How audit logs flow through the system.
```mermaid
sequenceDiagram
participant Service
participant AuditClient
participant AuditService
participant DB
participant Archive
Service->>Service: Security-sensitive action
Service->>AuditClient: Record audit log
AuditClient->>AuditClient: Build audit entry:<br/>actor, action, target,<br/>metadata, timestamp
AuditClient->>AuditService: Store audit log
AuditService->>AuditService: Validate entry
AuditService->>AuditService: Ensure immutability
AuditService->>DB: Insert audit log
DB-->>AuditService: Log stored
AuditService->>Archive: Archive old logs
Archive->>Archive: Long-term storage
Note over Service,Archive: Audit logs are immutable
```
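The audit entry and the immutability guarantee can be sketched as an append-only store. Names are illustrative, not the Audit Service's real schema; immutability here comes from simply not exposing update or delete operations.

```go
package main

import (
	"fmt"
	"time"
)

// auditEntry captures the fields the audit client records.
type auditEntry struct {
	Actor     string
	Action    string
	Target    string
	Metadata  map[string]string
	Timestamp time.Time
}

type auditStore struct{ entries []auditEntry }

// Record validates and appends; there is deliberately no update or
// delete method, so stored entries cannot be mutated through the API.
func (s *auditStore) Record(e auditEntry) error {
	if e.Actor == "" || e.Action == "" {
		return fmt.Errorf("audit entry requires actor and action")
	}
	if e.Timestamp.IsZero() {
		e.Timestamp = time.Now().UTC()
	}
	s.entries = append(s.entries, e)
	return nil
}

func main() {
	store := &auditStore{}
	_ = store.Record(auditEntry{
		Actor:  "user-9",
		Action: "blog.post.delete",
		Target: "post-42",
	})
	fmt.Println(len(store.entries), store.entries[0].Action)
	// → 1 blog.post.delete
}
```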
## Cross-Service Data Flow
### Inter-Service Request Flow
How data flows when services communicate via service clients.
```mermaid
sequenceDiagram
participant ServiceA
participant ServiceClient
participant ServiceRegistry
participant ServiceB
participant DB
ServiceA->>ServiceClient: Call service method
ServiceClient->>ServiceRegistry: Discover service
ServiceRegistry-->>ServiceClient: Service endpoint
ServiceClient->>ServiceB: gRPC request
ServiceB->>ServiceB: Process request
ServiceB->>DB: Query data
DB-->>ServiceB: Data
ServiceB->>ServiceB: Business logic
ServiceB-->>ServiceClient: gRPC response
ServiceClient-->>ServiceA: Return data
```
### Service-to-Service Event Flow
How events flow between services.
```mermaid
graph LR
ServiceA[Service A] -->|Publish| EventBus[Event Bus]
EventBus -->|Route| ServiceB[Service B]
EventBus -->|Route| ServiceC[Service C]
ServiceB -->|Publish| EventBus
EventBus -->|Route| ServiceD[Service D]
ServiceC -->|Publish| EventBus
EventBus -->|Route| ServiceE[Service E]
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
## Data Flow Patterns Summary
### Request Flow Pattern
- **Path**: Client → HTTP → Handler → Service → Repository → Database
- **Response**: Database → Repository → Service → Handler → HTTP → Client
- **Side Effects**: Cache updates, event publishing, audit logging, metrics
### Event Flow Pattern
- **Path**: Publisher → Event Bus → Kafka → Subscribers
- **Characteristics**: Asynchronous, eventual consistency, decoupled
### Cache Flow Pattern
- **Read**: Cache → (miss) → Database → Cache
- **Write**: Service → Database → Cache invalidation
- **Characteristics**: Performance optimization, cache-aside pattern
### Observability Flow Pattern
- **Tracing**: Service → OpenTelemetry → Collector → Backend
- **Metrics**: Service → Metrics Registry → Prometheus → Grafana
- **Logs**: Service → Logger → Collector → Storage
## Integration Points
These data flow patterns integrate with:
- **[System Behavior Overview](system-behavior.md)**: How data flows fit into system behavior
- **[Service Orchestration](service-orchestration.md)**: How data flows between services
- **[Module Integration Patterns](module-integration-patterns.md)**: How data flows through modules
- **[Operational Scenarios](operational-scenarios.md)**: Data flow in specific scenarios
- **[Component Relationships](component-relationships.md)**: Component-level data flow
## Related Documentation
- [System Behavior Overview](system-behavior.md) - System-level behavior
- [Service Orchestration](service-orchestration.md) - Service coordination
- [Module Integration Patterns](module-integration-patterns.md) - Module integration
- [Operational Scenarios](operational-scenarios.md) - Operational flows
- [Component Relationships](component-relationships.md) - Component dependencies
- [Architecture Overview](architecture.md) - System architecture

View File

@@ -0,0 +1,410 @@
# Module Integration Patterns
## Purpose
This document explains how modules integrate with the core platform, focusing on module discovery, initialization, service integration, and communication patterns rather than detailed implementation.
## Overview
Modules are independent services that extend the platform's functionality. They integrate with the core platform through well-defined interfaces, service clients, and a standardized initialization process. Each module operates as an independent service while leveraging core platform capabilities.
## Key Concepts
- **Module**: Independent service providing specific functionality
- **Module Manifest**: YAML file defining module metadata and configuration
- **Module Interface**: Standard interface all modules implement
- **Service Clients**: Abstraction for inter-service communication
- **Module Registry**: Registry tracking all loaded modules
## Module Discovery Process
Modules are discovered automatically during application startup by scanning module directories.
```mermaid
sequenceDiagram
participant Main
participant ModuleLoader
participant FileSystem
participant ModuleManifest
participant ModuleRegistry
Main->>ModuleLoader: DiscoverModules()
ModuleLoader->>FileSystem: Scan modules/ directory
FileSystem-->>ModuleLoader: Module directories
loop For each module directory
ModuleLoader->>FileSystem: Read module.yaml
FileSystem-->>ModuleLoader: Module manifest
ModuleLoader->>ModuleManifest: Parse manifest
ModuleManifest-->>ModuleLoader: Module metadata
ModuleLoader->>ModuleRegistry: Register module
ModuleRegistry->>ModuleRegistry: Validate manifest
ModuleRegistry->>ModuleRegistry: Check dependencies
ModuleRegistry-->>ModuleLoader: Module registered
end
ModuleLoader->>ModuleRegistry: Resolve dependencies
ModuleRegistry->>ModuleRegistry: Build dependency graph
ModuleRegistry->>ModuleRegistry: Order modules
ModuleRegistry-->>ModuleLoader: Ordered module list
ModuleLoader-->>Main: Module list ready
```
### Discovery Steps
1. **Directory Scanning**: Scan `modules/` directory for module subdirectories
2. **Manifest Loading**: Load `module.yaml` from each module directory
3. **Manifest Parsing**: Parse manifest to extract metadata
4. **Dependency Extraction**: Extract module dependencies from manifest
5. **Module Registration**: Register module in module registry
6. **Dependency Resolution**: Build dependency graph and order modules
7. **Validation**: Validate all dependencies are available
## Module Initialization Flow
Modules are initialized in dependency order, ensuring all dependencies are available before module initialization.
```mermaid
sequenceDiagram
participant Main
participant ModuleRegistry
participant Module
participant DI
participant Router
participant ServiceRegistry
participant DB
Main->>ModuleRegistry: GetOrderedModules()
ModuleRegistry-->>Main: Ordered module list
loop For each module (dependency order)
Main->>Module: Init()
Module->>DI: Provide services
DI->>DI: Register module services
DI-->>Module: Services registered
Module->>Router: Register routes
Router->>Router: Add route handlers
Router-->>Module: Routes registered
Module->>DB: Register migrations
DB->>DB: Store migration info
DB-->>Module: Migrations registered
Module->>ServiceRegistry: Register service
ServiceRegistry->>ServiceRegistry: Register with registry
ServiceRegistry-->>Module: Service registered
Module->>Module: OnStart hook (optional)
Module-->>Main: Module initialized
end
Main->>DB: Run migrations
DB->>DB: Execute in dependency order
DB-->>Main: Migrations complete
Main->>Router: Start HTTP server
Main->>ServiceRegistry: Start service discovery
```
### Initialization Phases
1. **Dependency Resolution**: Determine module initialization order
2. **DI Registration**: Register module services in the DI container
3. **Route Registration**: Register HTTP routes
4. **Migration Registration**: Register database migrations
5. **Service Discovery Registration**: Register the module as a service in the service registry
6. **Lifecycle Hooks**: Execute OnStart hooks if defined
7. **Migration Execution**: Run migrations in dependency order
8. **Server Startup**: Start HTTP and gRPC servers
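The dependency-resolution phase can be sketched as a depth-first topological sort over the dependencies declared in each `module.yaml`. This is a simplified sketch; the real loader would also report which modules participate in a cycle.

```go
package main

import "fmt"

// orderModules orders modules so that every module initializes after
// its dependencies, detecting cycles along the way.
func orderModules(deps map[string][]string) ([]string, error) {
	const (
		unvisited = 0
		visiting  = 1
		done      = 2
	)
	state := map[string]int{}
	var order []string
	var visit func(string) error
	visit = func(m string) error {
		switch state[m] {
		case done:
			return nil
		case visiting:
			return fmt.Errorf("dependency cycle involving %q", m)
		}
		state[m] = visiting
		for _, d := range deps[m] {
			if err := visit(d); err != nil {
				return err
			}
		}
		state[m] = done
		order = append(order, m) // dependencies already appended
		return nil
	}
	for m := range deps {
		if err := visit(m); err != nil {
			return nil, err
		}
	}
	return order, nil
}

func main() {
	order, err := orderModules(map[string][]string{
		"analytics": {"blog"},
		"blog":      {"identity"},
		"identity":  {},
	})
	fmt.Println(order, err)
	// → [identity blog analytics] <nil>
}
```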
## Module Service Integration
Modules integrate with core services through service client interfaces, ensuring all communication goes through well-defined abstractions.
```mermaid
graph TB
subgraph "Module Service"
ModuleHandler[Module Handler]
ModuleService[Module Service]
ModuleRepo[Module Repository]
end
subgraph "Service Clients"
AuthClient[Auth Service Client]
IdentityClient[Identity Service Client]
AuthzClient[Authz Service Client]
AuditClient[Audit Service Client]
end
subgraph "Core Services"
AuthService[Auth Service<br/>:8081]
IdentityService[Identity Service<br/>:8082]
AuthzService[Authz Service<br/>:8083]
AuditService[Audit Service<br/>:8084]
end
subgraph "Infrastructure"
EventBus[Event Bus]
Cache[Cache]
DB[(Database)]
end
ModuleHandler --> ModuleService
ModuleService --> ModuleRepo
ModuleRepo --> DB
ModuleService -->|gRPC| AuthClient
ModuleService -->|gRPC| IdentityClient
ModuleService -->|gRPC| AuthzClient
ModuleService -->|gRPC| AuditClient
AuthClient --> AuthService
IdentityClient --> IdentityService
AuthzClient --> AuthzService
AuditClient --> AuditService
ModuleService --> EventBus
ModuleService --> Cache
style ModuleService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AuthClient fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Service Integration Points
1. **Authentication**: Use Auth Service Client for token validation
2. **Identity**: Use Identity Service Client for user operations
3. **Authorization**: Use Authz Service Client for permission checks
4. **Audit**: Use Audit Service Client for audit logging
5. **Event Bus**: Publish and subscribe to events
6. **Cache**: Use cache for performance optimization
7. **Database**: Direct database access via repositories
## Module Data Management
Modules manage their own data while sharing database infrastructure.
```mermaid
graph TD
subgraph "Module A"
ModuleA[Module A Service]
RepoA[Module A Repository]
SchemaA[Module A Schema<br/>blog_posts]
end
subgraph "Module B"
ModuleB[Module B Service]
RepoB[Module B Repository]
SchemaB[Module B Schema<br/>billing_subscriptions]
end
subgraph "Shared Database"
DB[(PostgreSQL)]
end
subgraph "Migrations"
MigrationA[Module A Migrations]
MigrationB[Module B Migrations]
end
ModuleA --> RepoA
RepoA --> SchemaA
SchemaA --> DB
ModuleB --> RepoB
RepoB --> SchemaB
SchemaB --> DB
MigrationA --> DB
MigrationB --> DB
style ModuleA fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style ModuleB fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style DB fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Data Isolation Patterns
1. **Schema Isolation**: Each module has its own database schema
2. **Table Prefixing**: Module tables prefixed with module name
3. **Migration Isolation**: Each module manages its own migrations
4. **Shared Database**: Modules share database instance but not schemas
5. **Cross-Module Queries**: Use service clients, not direct SQL joins
## Module Permission System
Modules register permissions that are automatically integrated into the platform's permission system.
```mermaid
sequenceDiagram
participant Module
participant ModuleManifest
participant PermissionGenerator
participant PermissionRegistry
participant AuthzService
Module->>ModuleManifest: Define permissions
ModuleManifest->>ModuleManifest: permissions:<br/>blog.post.create, blog.post.read,<br/>blog.post.update, blog.post.delete
Module->>PermissionGenerator: Generate permission code
PermissionGenerator->>PermissionGenerator: Parse manifest
PermissionGenerator->>PermissionGenerator: Generate constants
PermissionGenerator-->>Module: Permission code generated
Module->>PermissionRegistry: Register permissions
PermissionRegistry->>PermissionRegistry: Validate format
PermissionRegistry->>PermissionRegistry: Store permissions
PermissionRegistry-->>Module: Permissions registered
AuthzService->>PermissionRegistry: Resolve permissions
PermissionRegistry-->>AuthzService: Permission list
AuthzService->>AuthzService: Check permissions
```
### Permission Registration Flow
1. **Permission Definition**: Define permissions in `module.yaml`
2. **Code Generation**: Generate permission constants from manifest
3. **Permission Registration**: Register permissions during module initialization
4. **Permission Validation**: Validate permission format and uniqueness
5. **Permission Resolution**: Permissions available for authorization checks
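The format and uniqueness validation steps can be sketched as below. The `module.resource.action` pattern is assumed from the manifest example above; the exact rules the platform enforces may differ.

```go
package main

import (
	"fmt"
	"regexp"
)

// permissionPattern assumes the module.resource.action convention
// (e.g. blog.post.create) with lowercase segments.
var permissionPattern = regexp.MustCompile(`^[a-z][a-z0-9_]*\.[a-z][a-z0-9_]*\.[a-z][a-z0-9_]*$`)

// registerPermissions validates format and uniqueness, mirroring the
// PermissionRegistry step of the flow.
func registerPermissions(registry map[string]bool, perms []string) error {
	for _, p := range perms {
		if !permissionPattern.MatchString(p) {
			return fmt.Errorf("invalid permission format: %q", p)
		}
		if registry[p] {
			return fmt.Errorf("duplicate permission: %q", p)
		}
		registry[p] = true
	}
	return nil
}

func main() {
	registry := map[string]bool{}
	err := registerPermissions(registry, []string{
		"blog.post.create", "blog.post.read", "blog.post.update", "blog.post.delete",
	})
	fmt.Println(len(registry), err) // → 4 <nil>
	fmt.Println(registerPermissions(registry, []string{"Blog.Post.Create"}))
}
```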
## Module Communication Patterns
Modules communicate with each other through event bus and service clients.
```mermaid
graph TB
subgraph "Module A"
ServiceA[Module A Service]
end
subgraph "Module B"
ServiceB[Module B Service]
end
subgraph "Event Bus"
EventBus[Event Bus<br/>Kafka]
end
subgraph "Service Clients"
ClientA[Service Client A]
ClientB[Service Client B]
end
subgraph "Module C"
ServiceC[Module C Service]
end
ServiceA -->|Publish Event| EventBus
EventBus -->|Subscribe| ServiceB
EventBus -->|Subscribe| ServiceC
ServiceA -->|gRPC Call| ClientA
ClientA --> ServiceB
ServiceB -->|gRPC Call| ClientB
ClientB --> ServiceC
style ServiceA fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style ServiceB fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Communication Patterns
#### Event-Based Communication
```mermaid
sequenceDiagram
participant ModuleA
participant EventBus
participant ModuleB
participant ModuleC
ModuleA->>EventBus: Publish event
EventBus->>EventBus: Route to subscribers
EventBus->>ModuleB: Deliver event
EventBus->>ModuleC: Deliver event
ModuleB->>ModuleB: Process event
ModuleC->>ModuleC: Process event
Note over ModuleB,ModuleC: Events processed independently
```
#### Service Client Communication
```mermaid
sequenceDiagram
participant ModuleA
participant Client
participant ServiceRegistry
participant ModuleB
ModuleA->>Client: Call service method
Client->>ServiceRegistry: Discover Module B
ServiceRegistry-->>Client: Module B endpoint
Client->>ModuleB: gRPC call
ModuleB->>ModuleB: Process request
ModuleB-->>Client: Response
Client-->>ModuleA: Return result
```
## Module Route Registration
Modules register their HTTP routes with the platform's router.
```mermaid
sequenceDiagram
participant Module
participant Router
participant AuthzMiddleware
participant ModuleHandler
Module->>Router: Register routes
Module->>Router: Define route: /api/v1/blog/posts
Module->>Router: Define permission: blog.post.create
Module->>Router: Define handler: CreatePostHandler
Router->>Router: Create route
Router->>AuthzMiddleware: Register permission check
Router->>Router: Attach handler
Router->>Router: Route registered
Note over Router: Routes are registered with<br/>permission requirements
```
### Route Registration Process
1. **Route Definition**: Module defines routes in `Init()` method
2. **Permission Association**: Routes associated with required permissions
3. **Handler Registration**: Handlers registered with router
4. **Middleware Attachment**: Authorization middleware automatically attached
5. **Route Activation**: Routes available when HTTP server starts
## Integration Points
These module integration patterns connect with:
- **[System Behavior Overview](system-behavior.md)**: How modules participate in system bootstrap
- **[Service Orchestration](service-orchestration.md)**: How modules operate as services
- **[Operational Scenarios](operational-scenarios.md)**: Module behavior in specific scenarios
- **[Architecture Modules](architecture-modules.md)**: Detailed module architecture
## Related Documentation
- [System Behavior Overview](system-behavior.md) - System-level behavior
- [Service Orchestration](service-orchestration.md) - Service coordination
- [Operational Scenarios](operational-scenarios.md) - Module usage scenarios
- [Architecture Modules](architecture-modules.md) - Module architecture details
- [Module Requirements](module-requirements.md) - Module requirements and interfaces

# Module Requirements
This document provides detailed requirements for each module in the Go Platform, including interfaces, responsibilities, and integration points.
## Table of Contents
- [Core Kernel Modules](#core-kernel-modules)
- [Security Modules](#security-modules)
- [Infrastructure Modules](#infrastructure-modules)
- [Feature Modules](#feature-modules)
## Core Kernel Modules
### Configuration Module
**Purpose**: Hierarchical configuration management with support for multiple sources.
**Requirements**:
- Load configuration from YAML files (default, environment-specific)
- Support environment variable overrides
- Support secret manager integration (AWS Secrets Manager, Vault)
- Type-safe configuration access
- Configuration validation
**Interface**:
```go
type ConfigProvider interface {
Get(key string) any
Unmarshal(v any) error
GetString(key string) string
GetInt(key string) int
GetBool(key string) bool
GetStringSlice(key string) []string
GetDuration(key string) time.Duration
IsSet(key string) bool
}
```
**Implementation**:
- Uses `github.com/spf13/viper` for configuration loading
- Load order: `default.yaml``{env}.yaml` → environment variables → secrets
- Supports nested configuration keys (e.g., `server.port`)
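Viper handles dotted-key resolution internally; as a rough illustration (not the real implementation), nested lookup amounts to walking the configuration tree one key segment at a time:

```go
package main

import (
	"fmt"
	"strings"
)

// lookup resolves a dotted key like "server.port" against nested maps,
// mirroring in spirit how viper resolves nested configuration keys.
func lookup(cfg map[string]any, key string) (any, bool) {
	var cur any = cfg
	for _, part := range strings.Split(key, ".") {
		m, ok := cur.(map[string]any)
		if !ok {
			return nil, false // path descends into a non-map value
		}
		cur, ok = m[part]
		if !ok {
			return nil, false // key segment missing
		}
	}
	return cur, true
}

func main() {
	cfg := map[string]any{
		"server": map[string]any{"port": 8080, "host": "0.0.0.0"},
	}
	v, ok := lookup(cfg, "server.port")
	fmt.Println(v, ok)
}
```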
**Configuration Schema**:
```yaml
environment: development
server:
port: 8080
host: "0.0.0.0"
timeout: 30s
database:
driver: "postgres"
dsn: ""
max_connections: 25
max_idle_connections: 5
logging:
level: "info"
format: "json"
output: "stdout"
cache:
enabled: true
ttl: 5m
```
**Dependencies**: None (foundation module)
---
### Logging Module
**Purpose**: Structured logging with support for multiple outputs and log levels.
**Requirements**:
- Structured JSON logging for production
- Human-readable logging for development
- Support for log levels (debug, info, warn, error)
- Request-scoped fields (request_id, user_id, trace_id)
- Contextual logging (with fields)
- Performance: minimal overhead
**Interface**:
```go
type Field interface{}
type Logger interface {
Debug(msg string, fields ...Field)
Info(msg string, fields ...Field)
Warn(msg string, fields ...Field)
Error(msg string, fields ...Field)
Fatal(msg string, fields ...Field)
With(fields ...Field) Logger
}
// Helper functions
func String(key, value string) Field
func Int(key string, value int) Field
func Error(err error) Field
func Duration(key string, value time.Duration) Field
```
**Implementation**:
- Uses `go.uber.org/zap` for high-performance logging
- JSON encoder for production, console encoder for development
- Global logger instance accessible via `pkg/logger`
- Request-scoped logger via context
**Example Usage**:
```go
logger.Info("User logged in",
logger.String("user_id", userID),
logger.String("ip", ipAddress),
logger.Duration("duration", duration),
)
```
**Dependencies**: Configuration Module
---
### Dependency Injection Module
**Purpose**: Service registration and lifecycle management.
**Requirements**:
- Service registration via constructors
- Lifecycle management (OnStart/OnStop hooks)
- Dependency resolution
- Service overrides for testing
- Module-based service composition
**Implementation**:
- Uses `go.uber.org/fx` for dependency injection
- Core services registered in `internal/di/core_module.go`
- Modules register services via `fx.Provide()` in `Init()`
- Lifecycle hooks via `fx.Lifecycle`
**Core Module Structure**:
```go
var CoreModule = fx.Options(
fx.Provide(ProvideConfig),
fx.Provide(ProvideLogger),
fx.Provide(ProvideDatabase),
fx.Provide(ProvideHealthCheckers),
fx.Provide(ProvideMetrics),
fx.Provide(ProvideErrorBus),
fx.Provide(ProvideEventBus),
// ... other core services
)
```
**Dependencies**: Configuration Module, Logging Module
---
### Health & Metrics Module
**Purpose**: Health checks and metrics collection.
**Requirements**:
- Liveness endpoint (`/healthz`)
- Readiness endpoint (`/ready`)
- Metrics endpoint (`/metrics`) in Prometheus format
- Composable health checkers
- Custom metrics support
**Interface**:
```go
type HealthChecker interface {
Check(ctx context.Context) error
}
type HealthRegistry interface {
Register(name string, checker HealthChecker)
Check(ctx context.Context) map[string]error
}
```
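A minimal registry satisfying this interface might look as follows; the `CheckFunc` adapter and the `healthy`/`failing` helpers are illustrative conveniences, not platform types:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

type HealthChecker interface {
	Check(ctx context.Context) error
}

// CheckFunc adapts a plain function to HealthChecker,
// in the style of http.HandlerFunc.
type CheckFunc func(ctx context.Context) error

func (f CheckFunc) Check(ctx context.Context) error { return f(ctx) }

// healthy and failing build trivial checkers for examples.
func healthy() HealthChecker {
	return CheckFunc(func(ctx context.Context) error { return nil })
}

func failing(msg string) HealthChecker {
	return CheckFunc(func(ctx context.Context) error { return errors.New(msg) })
}

// registry runs every registered checker and reports results by name —
// the shape a /ready endpoint needs to build its response.
type registry struct {
	checkers map[string]HealthChecker
}

func newRegistry() *registry { return &registry{checkers: map[string]HealthChecker{}} }

func (r *registry) Register(name string, c HealthChecker) { r.checkers[name] = c }

func (r *registry) Check(ctx context.Context) map[string]error {
	out := make(map[string]error, len(r.checkers))
	for name, c := range r.checkers {
		out[name] = c.Check(ctx)
	}
	return out
}

func main() {
	reg := newRegistry()
	reg.Register("database", healthy())
	reg.Register("redis", failing("connection refused"))
	for name, err := range reg.Check(context.Background()) {
		fmt.Printf("%s: %v\n", name, err)
	}
}
```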
**Core Health Checkers**:
- Database connectivity
- Redis connectivity
- Kafka connectivity (if enabled)
- Disk space
- Memory usage
**Metrics**:
- HTTP request duration (histogram)
- HTTP request count (counter)
- Database query duration (histogram)
- Cache hit/miss ratio (gauge)
- Error count (counter)
**Dependencies**: Configuration Module, Logging Module
---
### Error Bus Module
**Purpose**: Centralized error handling and reporting.
**Requirements**:
- Non-blocking error publishing
- Multiple error sinks (logger, Sentry)
- Error context preservation
- Panic recovery integration
**Interface**:
```go
type ErrorPublisher interface {
Publish(err error)
PublishWithContext(ctx context.Context, err error)
}
```
**Implementation**:
- Channel-based error bus
- Background goroutine consumes errors
- Pluggable sinks (logger, Sentry)
- Context extraction (user_id, trace_id, module)
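A sketch of such a channel-based bus, assuming a drop-on-full policy to keep the non-blocking guarantee (the real sinks would be the logger and Sentry adapters):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Sink consumes errors; the platform plugs in the logger and Sentry here.
type Sink func(err error)

// ErrorBus accepts errors without blocking the caller: a background
// goroutine fans each error out to every sink.
type ErrorBus struct {
	ch chan error
	wg sync.WaitGroup
}

func NewErrorBus(buffer int, sinks ...Sink) *ErrorBus {
	b := &ErrorBus{ch: make(chan error, buffer)}
	b.wg.Add(1)
	go func() {
		defer b.wg.Done()
		for err := range b.ch {
			for _, s := range sinks {
				s(err)
			}
		}
	}()
	return b
}

// Publish never blocks: if the buffer is full the error is dropped
// rather than stalling the request path.
func (b *ErrorBus) Publish(err error) {
	select {
	case b.ch <- err:
	default:
	}
}

// Close drains buffered errors and waits for the consumer to finish.
func (b *ErrorBus) Close() {
	close(b.ch)
	b.wg.Wait()
}

func main() {
	bus := NewErrorBus(16, func(err error) { fmt.Println("sink saw:", err) })
	bus.Publish(errors.New("db timeout"))
	bus.Close()
}
```

Dropping on overflow trades completeness for latency; a real implementation would at least count dropped errors in a metric.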
**Dependencies**: Logging Module
---
## Security Modules
### Authentication Module
**Purpose**: User authentication via JWT tokens.
**Requirements**:
- JWT access token generation (short-lived, 15 minutes)
- JWT refresh token generation (long-lived, 7 days)
- Token validation and verification
- Token claims extraction
- Refresh token storage and revocation
**Interface**:
```go
type Authenticator interface {
GenerateAccessToken(userID string, roles []string, tenantID string) (string, error)
GenerateRefreshToken(userID string) (string, error)
VerifyToken(token string) (*TokenClaims, error)
RevokeRefreshToken(tokenHash string) error
}
type TokenClaims struct {
UserID string
Roles []string
TenantID string
ExpiresAt time.Time
IssuedAt time.Time
}
```
**Token Format**:
- Algorithm: HS256 or RS256
- Claims: `sub` (user ID), `roles`, `tenant_id`, `exp`, `iat`
- Refresh tokens stored in database with hash
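For illustration only, an HS256 token with these claims can be signed and verified by hand using the standard library; production code should rely on a maintained JWT library rather than this sketch:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

// signHS256 builds header.payload.signature, each part base64url-encoded.
func signHS256(secret []byte, claims map[string]any) string {
	enc := func(v any) string {
		b, _ := json.Marshal(v)
		return base64.RawURLEncoding.EncodeToString(b)
	}
	header := enc(map[string]string{"alg": "HS256", "typ": "JWT"})
	payload := enc(claims)
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(header + "." + payload))
	sig := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	return header + "." + payload + "." + sig
}

// verifyHS256 checks the signature in constant time and returns the claims.
// (A full verifier would also validate exp, iat, and the header alg.)
func verifyHS256(secret []byte, token string) (map[string]any, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return nil, fmt.Errorf("malformed token")
	}
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(parts[0] + "." + parts[1]))
	want := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	if !hmac.Equal([]byte(want), []byte(parts[2])) {
		return nil, fmt.Errorf("bad signature")
	}
	raw, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return nil, err
	}
	var claims map[string]any
	if err := json.Unmarshal(raw, &claims); err != nil {
		return nil, err
	}
	return claims, nil
}

func main() {
	secret := []byte("dev-only-secret")
	tok := signHS256(secret, map[string]any{
		"sub": "user-123", "tenant_id": "t1",
		"iat": time.Now().Unix(), "exp": time.Now().Add(15 * time.Minute).Unix(),
	})
	claims, err := verifyHS256(secret, tok)
	fmt.Println(claims["sub"], err)
}
```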
**Endpoints**:
- `POST /api/v1/auth/login` - Authenticate and get tokens
- `POST /api/v1/auth/refresh` - Refresh access token
- `POST /api/v1/auth/logout` - Revoke refresh token
**Dependencies**: Identity Module, Configuration Module
---
### Authorization Module
**Purpose**: Role-based and attribute-based access control.
**Requirements**:
- Permission-based authorization
- Role-to-permission mapping
- User-to-role assignment
- Permission caching
- Context-aware authorization
**Interface**:
```go
type PermissionResolver interface {
HasPermission(ctx context.Context, userID string, perm Permission) (bool, error)
GetUserPermissions(ctx context.Context, userID string) ([]Permission, error)
}
type Authorizer interface {
Authorize(ctx context.Context, perm Permission) error
}
```
**Permission Format**:
- String format: `"{module}.{resource}.{action}"`
- Examples: `blog.post.create`, `identity.user.read`, `system.health.check`
- Code-generated constants for type safety
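In code, the generated constants and a membership check might look like this (constant names are illustrative of the generated output, not taken from the codebase):

```go
package main

import "fmt"

// Permission follows the "{module}.{resource}.{action}" convention.
// Constants like these would normally be code-generated per module.
type Permission string

const (
	PermBlogPostCreate Permission = "blog.post.create"
	PermBlogPostRead   Permission = "blog.post.read"
)

// permissionSet answers membership queries in O(1); the real
// PermissionResolver would populate it from the user's roles.
type permissionSet map[Permission]struct{}

func newPermissionSet(perms ...Permission) permissionSet {
	s := make(permissionSet, len(perms))
	for _, p := range perms {
		s[p] = struct{}{}
	}
	return s
}

func (s permissionSet) Has(p Permission) bool {
	_, ok := s[p]
	return ok
}

func main() {
	granted := newPermissionSet(PermBlogPostRead)
	fmt.Println(granted.Has(PermBlogPostRead), granted.Has(PermBlogPostCreate))
}
```

Typed constants keep typos out of handlers: `granted.Has("blog.psot.create")` still compiles, but code review only has to audit the generated constants, not every call site string.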
**Authorization Flow**:
```mermaid
sequenceDiagram
participant Request
participant AuthzMiddleware
participant Authorizer
participant PermissionResolver
participant Cache
participant DB
Request->>AuthzMiddleware: HTTP request with permission
AuthzMiddleware->>Authorizer: Authorize(ctx, permission)
Authorizer->>Authorizer: Extract user from context
Authorizer->>PermissionResolver: HasPermission(user, permission)
PermissionResolver->>Cache: Check cache
Cache-->>PermissionResolver: Cache miss
PermissionResolver->>DB: Load user roles
PermissionResolver->>DB: Load role permissions
DB-->>PermissionResolver: Permissions
PermissionResolver->>Cache: Store in cache
PermissionResolver-->>Authorizer: Has permission: true/false
Authorizer-->>AuthzMiddleware: Authorized or error
AuthzMiddleware-->>Request: Continue or 403
```
**Dependencies**: Identity Module, Cache Module
---
### Identity Module
**Purpose**: User and role management.
**Requirements**:
- User CRUD operations
- Password hashing (argon2id)
- Email verification
- Password reset flow
- Role management
- Permission management
- User-role assignment
**Interfaces**:
```go
type UserRepository interface {
FindByID(ctx context.Context, id string) (*User, error)
FindByEmail(ctx context.Context, email string) (*User, error)
Create(ctx context.Context, u *User) error
Update(ctx context.Context, u *User) error
Delete(ctx context.Context, id string) error
List(ctx context.Context, filters UserFilters) ([]*User, error)
}
type UserService interface {
Register(ctx context.Context, email, password string) (*User, error)
VerifyEmail(ctx context.Context, token string) error
ResetPassword(ctx context.Context, email string) error
ChangePassword(ctx context.Context, userID, oldPassword, newPassword string) error
UpdateProfile(ctx context.Context, userID string, updates UserUpdates) error
}
type RoleRepository interface {
FindByID(ctx context.Context, id string) (*Role, error)
Create(ctx context.Context, r *Role) error
Update(ctx context.Context, r *Role) error
Delete(ctx context.Context, id string) error
AssignPermissions(ctx context.Context, roleID string, permissions []Permission) error
AssignToUser(ctx context.Context, userID string, roleIDs []string) error
}
```
**User Entity**:
- ID (UUID)
- Email (unique, verified)
- Password hash (argon2id)
- Email verified (boolean)
- Created at, updated at
- Tenant ID (optional, for multi-tenancy)
**Role Entity**:
- ID (UUID)
- Name (unique)
- Description
- Created at
- Permissions (many-to-many)
**Dependencies**: Database Module, Notification Module, Cache Module
---
### Audit Module
**Purpose**: Immutable audit logging of security-relevant actions.
**Requirements**:
- Append-only audit log
- Actor tracking (user ID)
- Action tracking (what was done)
- Target tracking (what was affected)
- Metadata storage (JSON)
- Correlation IDs
- High-performance writes
**Interface**:
```go
type Auditor interface {
Record(ctx context.Context, action AuditAction) error
Query(ctx context.Context, filters AuditFilters) ([]AuditEntry, error)
}
type AuditAction struct {
ActorID string
Action string // e.g., "user.created", "role.assigned"
TargetID string
Metadata map[string]any
IPAddress string
UserAgent string
}
```
**Audit Log Schema**:
- ID (UUID)
- Actor ID (user ID)
- Action (string)
- Target ID (resource ID)
- Metadata (JSONB)
- Timestamp
- Request ID
- IP Address
- User Agent
**Automatic Audit Events**:
- User login/logout
- Password changes
- Role assignments
- Permission grants
- Data modifications (configurable)
**Dependencies**: Database Module, Logging Module
---
## Infrastructure Modules
### Database Module
**Purpose**: Database access and ORM functionality.
**Requirements**:
- PostgreSQL support (primary)
- Connection pooling
- Transaction support
- Migration management
- Query instrumentation (OpenTelemetry)
- Multi-tenancy support (tenant_id filtering)
**Implementation**:
- Uses `entgo.io/ent` for code generation
- Ent schemas for all entities
- Migration runner on startup
- Connection pool configuration
**Database Client Interface**:
```go
type DatabaseClient interface {
Client() *ent.Client
Migrate(ctx context.Context) error
Close() error
HealthCheck(ctx context.Context) error
}
```
**Connection Pooling**:
- Max connections: 25
- Max idle connections: 5
- Connection lifetime: 5 minutes
- Idle timeout: 10 minutes
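These defaults map directly onto the standard library's `*sql.DB` pool knobs, which Ent's SQL driver sits on top of; `PoolSettings` is an illustrative wrapper, not a platform type:

```go
package main

import (
	"database/sql"
	"time"
)

// PoolSettings mirrors the defaults listed above.
type PoolSettings struct {
	MaxOpen     int
	MaxIdle     int
	MaxLifetime time.Duration
	MaxIdleTime time.Duration
}

func DefaultPoolSettings() PoolSettings {
	return PoolSettings{
		MaxOpen:     25,
		MaxIdle:     5,
		MaxLifetime: 5 * time.Minute,
		MaxIdleTime: 10 * time.Minute,
	}
}

// Apply configures the *sql.DB connection pool.
func (s PoolSettings) Apply(db *sql.DB) {
	db.SetMaxOpenConns(s.MaxOpen)
	db.SetMaxIdleConns(s.MaxIdle)
	db.SetConnMaxLifetime(s.MaxLifetime)
	db.SetConnMaxIdleTime(s.MaxIdleTime)
}
```

Bounding connection lifetime matters behind load balancers and PgBouncer-style poolers: it forces the pool to cycle connections instead of holding stale ones indefinitely.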
**Multi-Tenancy**:
- Automatic tenant_id filtering via Ent interceptors
- Tenant-aware queries
- Tenant isolation at application level
**Dependencies**: Configuration Module, Logging Module
---
### Cache Module
**Purpose**: Distributed caching with Redis.
**Requirements**:
- Key-value storage
- TTL support
- Distributed caching (shared across instances)
- Cache invalidation
- Fallback to in-memory cache
**Interface**:
```go
type Cache interface {
Get(ctx context.Context, key string) ([]byte, error)
Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
Delete(ctx context.Context, key string) error
Exists(ctx context.Context, key string) (bool, error)
Increment(ctx context.Context, key string) (int64, error)
}
```
**Use Cases**:
- User permissions caching
- Role assignments caching
- Session data
- Rate limiting state
- Query result caching (optional)
**Cache Key Format**:
- `user:{user_id}:permissions`
- `role:{role_id}:permissions`
- `session:{session_id}`
- `ratelimit:{user_id}:{endpoint}`
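The in-memory fallback can satisfy the same `Cache` interface. A minimal sketch covering three of the five methods (process-local, so unsuitable for sharing state across instances):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"time"
)

var ErrCacheMiss = errors.New("cache: miss")

type entry struct {
	value     []byte
	expiresAt time.Time
}

// memoryCache is a process-local fallback with per-key TTLs.
// Expired entries are detected lazily on Get.
type memoryCache struct {
	mu    sync.RWMutex
	items map[string]entry
}

func newMemoryCache() *memoryCache { return &memoryCache{items: map[string]entry{}} }

func (c *memoryCache) Set(ctx context.Context, key string, value []byte, ttl time.Duration) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = entry{value: value, expiresAt: time.Now().Add(ttl)}
	return nil
}

func (c *memoryCache) Get(ctx context.Context, key string) ([]byte, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	e, ok := c.items[key]
	if !ok || time.Now().After(e.expiresAt) {
		return nil, ErrCacheMiss
	}
	return e.value, nil
}

func (c *memoryCache) Delete(ctx context.Context, key string) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.items, key)
	return nil
}

func main() {
	c := newMemoryCache()
	ctx := context.Background()
	_ = c.Set(ctx, "user:42:permissions", []byte(`["blog.post.read"]`), 5*time.Minute)
	v, err := c.Get(ctx, "user:42:permissions")
	fmt.Println(string(v), err)
}
```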
**Dependencies**: Configuration Module, Logging Module
---
### Event Bus Module
**Purpose**: Event-driven communication between modules.
**Requirements**:
- Publish/subscribe pattern
- Topic-based routing
- In-process bus (development)
- Kafka bus (production)
- Error handling and retries
- Event ordering (per partition)
**Interface**:
```go
type EventBus interface {
Publish(ctx context.Context, topic string, event Event) error
Subscribe(topic string, handler EventHandler) error
Unsubscribe(topic string) error
}
type Event struct {
ID string
Type string
Source string
Timestamp time.Time
Data map[string]any
}
type EventHandler func(ctx context.Context, event Event) error
```
**Core Events**:
- `platform.user.created`
- `platform.user.updated`
- `platform.user.deleted`
- `platform.role.assigned`
- `platform.role.revoked`
- `platform.permission.granted`
**Event Flow**:
```mermaid
graph LR
Publisher[Module Publisher]
Bus[Event Bus]
Subscriber1[Module Subscriber 1]
Subscriber2[Module Subscriber 2]
Subscriber3[Module Subscriber 3]
Publisher -->|Publish| Bus
Bus -->|Deliver| Subscriber1
Bus -->|Deliver| Subscriber2
Bus -->|Deliver| Subscriber3
```
**Dependencies**: Configuration Module, Logging Module
---
### Scheduler Module
**Purpose**: Background job processing and cron scheduling.
**Requirements**:
- Cron job scheduling
- Async job queuing
- Job retries with backoff
- Job status tracking
- Concurrency control
- Job persistence
**Interface**:
```go
type Scheduler interface {
Cron(spec string, job JobFunc) error
Enqueue(queue string, payload any) error
EnqueueWithRetry(queue string, payload any, retries int) error
}
type JobFunc func(ctx context.Context) error
```
**Implementation**:
- Uses `github.com/robfig/cron/v3` for cron jobs
- Uses `github.com/hibiken/asynq` for job queuing
- Redis-backed job queue
- Job processor with worker pool
**Example Jobs**:
- Cleanup expired tokens (daily)
- Send digest emails (weekly)
- Generate reports (monthly)
- Data archival (custom schedule)
**Dependencies**: Cache Module (Redis), Logging Module
---
### Blob Storage Module
**Purpose**: File and blob storage abstraction.
**Requirements**:
- File upload
- File download
- File deletion
- Signed URL generation
- Versioning support (optional)
**Interface**:
```go
type BlobStore interface {
Upload(ctx context.Context, key string, data []byte, contentType string) error
Download(ctx context.Context, key string) ([]byte, error)
Delete(ctx context.Context, key string) error
GetSignedURL(ctx context.Context, key string, ttl time.Duration) (string, error)
Exists(ctx context.Context, key string) (bool, error)
}
```
**Implementation**:
- AWS S3 adapter (primary)
- Local file system adapter (development)
- GCS adapter (optional)
**Key Format**:
- `{module}/{resource_type}/{resource_id}/{filename}`
- Example: `blog/posts/abc123/image.jpg`
**Dependencies**: Configuration Module, Logging Module
---
### Notification Module
**Purpose**: Multi-channel notifications (email, SMS, push).
**Requirements**:
- Email sending (SMTP, AWS SES)
- SMS sending (Twilio, optional)
- Push notifications (FCM, APNs, optional)
- Webhook notifications
- Template support
- Retry logic
**Interface**:
```go
type Notifier interface {
SendEmail(ctx context.Context, to, subject, body string) error
SendEmailWithTemplate(ctx context.Context, to, template string, data map[string]any) error
SendSMS(ctx context.Context, to, message string) error
SendPush(ctx context.Context, deviceToken string, payload PushPayload) error
SendWebhook(ctx context.Context, url string, payload map[string]any) error
}
```
**Email Templates**:
- Email verification
- Password reset
- Welcome email
- Notification digest
**Dependencies**: Configuration Module, Logging Module, Event Bus Module
---
## Feature Modules
### Blog Module (Example)
**Purpose**: Blog post management functionality.
**Requirements**:
- Post CRUD operations
- Comment system (optional)
- Author-based access control
- Post publishing workflow
- Tag/category support
**Permissions**:
- `blog.post.create`
- `blog.post.read`
- `blog.post.update`
- `blog.post.delete`
- `blog.post.publish`
**Routes**:
- `POST /api/v1/blog/posts` - Create post
- `GET /api/v1/blog/posts` - List posts
- `GET /api/v1/blog/posts/:id` - Get post
- `PUT /api/v1/blog/posts/:id` - Update post
- `DELETE /api/v1/blog/posts/:id` - Delete post
**Domain Model**:
```go
type Post struct {
ID string
Title string
Content string
AuthorID string
Status PostStatus // draft, published, archived
CreatedAt time.Time
UpdatedAt time.Time
PublishedAt *time.Time
}
```
**Events Published**:
- `blog.post.created`
- `blog.post.updated`
- `blog.post.published`
- `blog.post.deleted`
**Dependencies**: Core Kernel, Identity Module, Event Bus Module
---
## Module Integration Matrix
```mermaid
graph TB
subgraph "Core Kernel (Required)"
Config[Config]
Logger[Logger]
DI[DI Container]
Health[Health]
end
subgraph "Security (Required)"
Auth[Auth]
Authz[Authz]
Identity[Identity]
Audit[Audit]
end
subgraph "Infrastructure (Optional)"
DB[Database]
Cache[Cache]
EventBus[Event Bus]
Scheduler[Scheduler]
BlobStore[Blob Store]
Notifier[Notifier]
end
subgraph "Feature Modules"
Blog[Blog]
Billing[Billing]
Custom[Custom Modules]
end
Config --> Logger
Config --> DI
DI --> Health
DI --> Auth
DI --> Authz
DI --> Identity
DI --> Audit
DI --> DB
DI --> Cache
DI --> EventBus
DI --> Scheduler
DI --> BlobStore
DI --> Notifier
Auth --> Identity
Authz --> Identity
Authz --> Audit
Blog --> Auth
Blog --> Authz
Blog --> DB
Blog --> EventBus
Blog --> Cache
Billing --> Auth
Billing --> Authz
Billing --> DB
Billing --> EventBus
Billing --> Cache
Custom --> Auth
Custom --> Authz
Custom --> DB
style Config fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Auth fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Blog fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
## Next Steps
- [Component Relationships](./component-relationships.md) - Detailed component interactions
- [System Architecture](./architecture.md) - Overall system architecture
- [Module Architecture](./architecture-modules.md) - Module design and integration

# Operational Scenarios
## Purpose
This document describes common operational scenarios in the Go Platform, focusing on how different components interact to accomplish specific tasks.
## Overview
Operational scenarios illustrate how the platform handles common use cases such as user authentication, authorization checks, event processing, and background job execution. Each scenario shows the complete flow from initiation to completion.
## Key Concepts
- **Scenario**: A specific operational use case
- **Flow**: Sequence of interactions to accomplish the scenario
- **Components**: Services and modules involved in the scenario
- **State Changes**: How system state changes during the scenario
## Authentication and Authorization Flows
### User Authentication Flow
Complete flow of user logging in and receiving authentication tokens.
```mermaid
sequenceDiagram
participant User
participant Client
participant AuthService
participant IdentityService
participant DB
participant TokenProvider
participant AuditService
User->>Client: Enter credentials
Client->>AuthService: POST /api/v1/auth/login
AuthService->>AuthService: Validate request format
AuthService->>IdentityService: Verify credentials
IdentityService->>DB: Query user by email
DB-->>IdentityService: User data
IdentityService->>IdentityService: Verify password hash
IdentityService-->>AuthService: Credentials valid
AuthService->>TokenProvider: Generate access token
TokenProvider->>TokenProvider: Create JWT claims
TokenProvider-->>AuthService: Access token
AuthService->>TokenProvider: Generate refresh token
TokenProvider->>DB: Store refresh token hash
DB-->>TokenProvider: Token stored
TokenProvider-->>AuthService: Refresh token
AuthService->>AuditService: Log login
AuditService->>DB: Store audit log
AuditService-->>AuthService: Logged
AuthService-->>Client: Access + Refresh tokens
Client-->>User: Authentication successful
```
### Authorization Check Flow
How the system checks if a user has permission to perform an action.
```mermaid
sequenceDiagram
participant Handler
participant AuthzMiddleware
participant AuthzService
participant PermissionResolver
participant Cache
participant DB
participant IdentityService
Handler->>AuthzMiddleware: Check permission
AuthzMiddleware->>AuthzMiddleware: Extract user from context
AuthzMiddleware->>AuthzService: Authorize(user, permission)
AuthzService->>Cache: Check permission cache
Cache-->>AuthzService: Cache miss
AuthzService->>PermissionResolver: Resolve permissions
PermissionResolver->>IdentityService: Get user roles
IdentityService->>DB: Query user roles
DB-->>IdentityService: User roles
IdentityService-->>PermissionResolver: Roles list
PermissionResolver->>DB: Query role permissions
DB-->>PermissionResolver: Permissions
PermissionResolver->>PermissionResolver: Aggregate permissions
PermissionResolver-->>AuthzService: User permissions
AuthzService->>AuthzService: Check permission in list
AuthzService->>Cache: Store in cache
AuthzService-->>AuthzMiddleware: Authorized/Unauthorized
alt Authorized
AuthzMiddleware-->>Handler: Continue
else Unauthorized
AuthzMiddleware-->>Handler: 403 Forbidden
end
```
### Permission Resolution Flow
How user permissions are resolved from roles and cached for performance.
```mermaid
graph TD
Start[Permission Check] --> Cache{Cache Hit?}
Cache -->|Yes| Return[Return Cached Permissions]
Cache -->|No| GetRoles[Get User Roles]
GetRoles --> DB1[(Database)]
DB1 --> Roles[User Roles]
Roles --> GetPermissions[Get Role Permissions]
GetPermissions --> DB2[(Database)]
DB2 --> Permissions[Role Permissions]
Permissions --> Aggregate[Aggregate Permissions]
Aggregate --> StoreCache[Store in Cache]
StoreCache --> Return
Return --> Check[Check Permission]
Check -->|Has Permission| Allow[Allow Access]
Check -->|No Permission| Deny[Deny Access]
style Cache fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Aggregate fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Data Access Patterns
### Cache-Aside Pattern
How data is accessed with caching to improve performance.
```mermaid
sequenceDiagram
participant Service
participant Cache
participant Repository
participant DB
Service->>Cache: Get(key)
Cache-->>Service: Cache miss
Service->>Repository: Find by ID
Repository->>DB: Query database
DB-->>Repository: Data
Repository-->>Service: Domain entity
Service->>Cache: Set(key, entity)
Cache-->>Service: Cached
Service-->>Service: Return entity
Note over Service,Cache: Next request will hit cache
```
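The same flow in code, with the cache and repository abstracted as plain functions so the pattern stands alone (`getUser` and its parameters are illustrative, not the platform's API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// User is a stand-in domain entity.
type User struct {
	ID, Email string
}

// getUser implements cache-aside: try the cache, fall back to the
// loader (repository), then populate the cache for the next caller.
func getUser(id string,
	cacheGet func(key string) ([]byte, bool),
	cachePut func(key string, value []byte),
	load func(id string) (User, error),
) (User, error) {
	key := "user:" + id
	if raw, ok := cacheGet(key); ok {
		var u User
		if err := json.Unmarshal(raw, &u); err == nil {
			return u, nil // cache hit
		}
		// Corrupt entry: fall through and reload from the source of truth.
	}
	u, err := load(id)
	if err != nil {
		return User{}, err
	}
	raw, _ := json.Marshal(u)
	cachePut(key, raw)
	return u, nil
}

func main() {
	store := map[string][]byte{}
	loads := 0
	load := func(id string) (User, error) { loads++; return User{ID: id, Email: "a@b.c"}, nil }
	get := func(k string) ([]byte, bool) { v, ok := store[k]; return v, ok }
	put := func(k string, v []byte) { store[k] = v }
	getUser("42", get, put, load)
	getUser("42", get, put, load) // second call served from cache
	fmt.Println("loads:", loads)
}
```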
### Write-Through Cache Pattern
How data writes are synchronized with cache.
```mermaid
sequenceDiagram
participant Service
participant Cache
participant Repository
participant DB
Service->>Repository: Save entity
Repository->>DB: Insert/Update
DB-->>Repository: Success
Repository->>Cache: Invalidate(key)
Cache->>Cache: Remove from cache
Cache-->>Repository: Invalidated
Repository-->>Service: Entity saved
Note over Service,Cache: Cache invalidated on write
```
## Event Processing Scenarios
### Event Publishing and Consumption
How events are published and consumed across services.
```mermaid
sequenceDiagram
participant Publisher
participant EventBus
participant Kafka
participant Subscriber1
participant Subscriber2
Publisher->>Publisher: Business event occurs
Publisher->>EventBus: Publish(event)
EventBus->>EventBus: Serialize event
EventBus->>EventBus: Add metadata
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
EventBus-->>Publisher: Event published
Kafka->>Subscriber1: Deliver event
Subscriber1->>Subscriber1: Deserialize event
Subscriber1->>Subscriber1: Process event
Subscriber1->>Subscriber1: Update state
Subscriber1-->>Kafka: Acknowledge
Kafka->>Subscriber2: Deliver event
Subscriber2->>Subscriber2: Deserialize event
Subscriber2->>Subscriber2: Process event
Subscriber2->>Subscriber2: Update state
Subscriber2-->>Kafka: Acknowledge
```
### Event-Driven Workflow
How multiple services coordinate through events.
```mermaid
graph LR
A[Service A] -->|Publish Event A| EventBus[Event Bus]
EventBus -->|Subscribe| B[Service B]
EventBus -->|Subscribe| C[Service C]
B -->|Publish Event B| EventBus
C -->|Publish Event C| EventBus
EventBus -->|Subscribe| D[Service D]
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
## Background Processing Scenarios
### Background Job Scheduling
How background jobs are scheduled and executed.
```mermaid
sequenceDiagram
participant Scheduler
participant JobQueue
participant Worker
participant Service
participant DB
participant EventBus
Scheduler->>Scheduler: Cron trigger
Scheduler->>JobQueue: Enqueue job
JobQueue->>JobQueue: Store job definition
JobQueue-->>Scheduler: Job enqueued
Worker->>JobQueue: Poll for jobs
JobQueue-->>Worker: Job definition
Worker->>Worker: Lock job
Worker->>Service: Execute job
Service->>DB: Update data
Service->>EventBus: Publish events
Service-->>Worker: Job complete
Worker->>JobQueue: Mark complete
JobQueue->>JobQueue: Remove job
alt Job fails
Worker->>JobQueue: Mark failed
JobQueue->>JobQueue: Schedule retry
end
```
### Job Retry Flow
How failed jobs are retried with exponential backoff.
```mermaid
stateDiagram-v2
[*] --> Pending: Job created
Pending --> Running: Worker picks up
Running --> Success: Job completes
Running --> Failed: Job fails
Failed --> RetryScheduled: Schedule retry
RetryScheduled --> Waiting: Wait (exponential backoff)
Waiting --> Pending: Retry time reached
Failed --> MaxRetries: Max retries reached
MaxRetries --> DeadLetter: Move to dead letter
Success --> [*]
DeadLetter --> [*]
```
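The "Wait (exponential backoff)" transition typically computes a doubling delay with a cap; a minimal sketch:

```go
package main

import (
	"fmt"
	"time"
)

// retryDelay computes the wait before retry attempt n (0-based):
// base doubled per attempt, capped so late retries stay bounded.
func retryDelay(attempt int, base, max time.Duration) time.Duration {
	d := base << uint(attempt) // base * 2^attempt
	if d > max || d <= 0 {     // <= 0 guards against shift overflow
		return max
	}
	return d
}

func main() {
	for n := 0; n < 6; n++ {
		fmt.Println(n, retryDelay(n, time.Second, 30*time.Second))
	}
}
```

Real queues (asynq included) usually add jitter on top, so a burst of failures does not retry in lockstep.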
## Configuration Management Scenarios
### Configuration Reload Flow
How configuration is reloaded without service restart.
```mermaid
sequenceDiagram
participant Admin
participant ConfigService
participant ConfigManager
participant Services
participant SecretStore
Admin->>ConfigService: Update configuration
ConfigService->>SecretStore: Fetch secrets (if needed)
SecretStore-->>ConfigService: Secrets
ConfigService->>ConfigManager: Reload configuration
ConfigManager->>ConfigManager: Validate configuration
ConfigManager->>ConfigManager: Merge with defaults
ConfigManager->>Services: Notify config change
Services->>Services: Update configuration
Services-->>ConfigManager: Config updated
ConfigManager-->>ConfigService: Reload complete
ConfigService-->>Admin: Configuration reloaded
```
## Audit Logging Flow
How all security-sensitive actions are logged.
```mermaid
sequenceDiagram
participant Service
participant AuditClient
participant AuditService
participant DB
Service->>Service: Security-sensitive action
Service->>AuditClient: Record audit log
AuditClient->>AuditClient: Extract context
AuditClient->>AuditClient: Build audit entry
AuditClient->>AuditService: Store audit log
AuditService->>AuditService: Validate audit entry
AuditService->>DB: Insert audit log
DB-->>AuditService: Log stored
AuditService-->>AuditClient: Audit logged
AuditClient-->>Service: Continue
Note over Service,DB: Audit logs are immutable
```
## Database Migration Flow
How database migrations are executed during module initialization.
```mermaid
sequenceDiagram
participant Main
participant ModuleRegistry
participant Module
participant MigrationRunner
participant DB
Main->>ModuleRegistry: Get modules (dependency order)
ModuleRegistry-->>Main: Ordered modules
loop For each module
Main->>Module: Get migrations
Module-->>Main: Migration list
end
Main->>MigrationRunner: Run migrations
MigrationRunner->>DB: Check migration table
DB-->>MigrationRunner: Existing migrations
loop For each pending migration
MigrationRunner->>DB: Start transaction
MigrationRunner->>DB: Execute migration
DB-->>MigrationRunner: Migration complete
MigrationRunner->>DB: Record migration
MigrationRunner->>DB: Commit transaction
end
MigrationRunner-->>Main: Migrations complete
```
## Integration Points
The scenarios in this document integrate with:
- **[System Behavior Overview](system-behavior.md)**: How these scenarios fit into overall system behavior
- **[Service Orchestration](service-orchestration.md)**: How services coordinate in these scenarios
- **[Module Integration Patterns](module-integration-patterns.md)**: How modules participate in these scenarios
- **[Data Flow Patterns](data-flow-patterns.md)**: Detailed data flow in these scenarios
## Related Documentation
- [System Behavior Overview](system-behavior.md) - System-level behavior
- [Service Orchestration](service-orchestration.md) - Service coordination
- [Module Integration Patterns](module-integration-patterns.md) - Module integration
- [Data Flow Patterns](data-flow-patterns.md) - Data flow details
- [Architecture Overview](architecture.md) - System architecture

# Service Orchestration
## Purpose
This document explains how services work together in the Go Platform's microservices architecture, focusing on service lifecycle management, discovery, communication patterns, and failure handling.
## Overview
The Go Platform consists of multiple independent services that communicate via service clients (gRPC/HTTP) and share infrastructure components. Services are discovered and registered through a service registry, enabling dynamic service location and health monitoring.
## Key Concepts
- **Service**: Independent process providing specific functionality
- **Service Registry**: Central registry for service discovery (Consul, Kubernetes, etcd)
- **Service Client**: Abstraction for inter-service communication
- **Service Discovery**: Process of locating services by name
- **Service Health**: Health status of a service (healthy, unhealthy, degraded)
## Service Lifecycle Management
Services follow a well-defined lifecycle from startup to shutdown.
```mermaid
stateDiagram-v2
[*] --> Starting: Service starts
Starting --> Registering: Initialize services
Registering --> StartingServer: Register with service registry
StartingServer --> Running: Start HTTP/gRPC servers
Running --> Healthy: Health checks pass
Running --> Unhealthy: Health checks fail
Unhealthy --> Running: Health checks recover
Healthy --> Degrading: Dependency issues
Degrading --> Healthy: Dependencies recover
Degrading --> Unhealthy: Critical failure
Running --> ShuttingDown: Receive shutdown signal
ShuttingDown --> Deregistering: Stop accepting requests
Deregistering --> Stopped: Deregister from registry
Stopped --> [*]
```
### Lifecycle States
1. **Starting**: Service is initializing, loading configuration
2. **Registering**: Service registers with service registry
3. **Starting Server**: HTTP and gRPC servers starting
4. **Running**: Service is running and processing requests
5. **Healthy**: All health checks passing
6. **Unhealthy**: Health checks failing
7. **Degrading**: Service operational but with degraded functionality
8. **Shutting Down**: Service received shutdown signal
9. **Deregistering**: Service removing itself from registry
10. **Stopped**: Service has stopped
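The states and transitions above can be encoded as a small state machine. The following Go sketch is illustrative only (the type and constant names are assumptions, not the platform's actual API); it validates transitions against the diagram:

```go
package main

import "fmt"

// State models the lifecycle states described above.
type State string

const (
	Starting       State = "starting"
	Registering    State = "registering"
	StartingServer State = "starting_server"
	Running        State = "running"
	Healthy        State = "healthy"
	Unhealthy      State = "unhealthy"
	Degrading      State = "degrading"
	ShuttingDown   State = "shutting_down"
	Deregistering  State = "deregistering"
	Stopped        State = "stopped"
)

// transitions encodes the edges of the state diagram above.
var transitions = map[State][]State{
	Starting:       {Registering},
	Registering:    {StartingServer},
	StartingServer: {Running},
	Running:        {Healthy, Unhealthy, ShuttingDown},
	Healthy:        {Degrading, Running},
	Unhealthy:      {Running},
	Degrading:      {Healthy, Unhealthy},
	ShuttingDown:   {Deregistering},
	Deregistering:  {Stopped},
}

// CanTransition reports whether moving from one state to the next
// follows an edge of the diagram.
func CanTransition(from, next State) bool {
	for _, s := range transitions[from] {
		if s == next {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(CanTransition(Running, ShuttingDown)) // true
	fmt.Println(CanTransition(Stopped, Running))      // false
}
```

Guarding state changes with such a table makes invalid transitions (e.g. jumping from `Stopped` back to `Running`) fail fast instead of corrupting lifecycle bookkeeping.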
## Service Discovery and Registration
Services automatically register themselves with the service registry on startup and deregister on shutdown.
```mermaid
sequenceDiagram
participant Service
participant ServiceRegistry
    participant Registry as Consul/K8s
participant Client
Service->>ServiceRegistry: Register(serviceInfo)
ServiceRegistry->>Registry: Register service
Registry->>Registry: Store service info
Registry-->>ServiceRegistry: Registration confirmed
ServiceRegistry-->>Service: Service registered
Note over Service: Service starts health checks
loop Every health check interval
Service->>ServiceRegistry: Update health status
ServiceRegistry->>Registry: Update health
end
Client->>ServiceRegistry: Discover(serviceName)
ServiceRegistry->>Registry: Query services
Registry-->>ServiceRegistry: Service list
ServiceRegistry->>ServiceRegistry: Filter healthy services
ServiceRegistry->>ServiceRegistry: Load balance
ServiceRegistry-->>Client: Service endpoint
Client->>Service: Connect via gRPC/HTTP
Service->>ServiceRegistry: Deregister()
ServiceRegistry->>Registry: Remove service
Registry-->>ServiceRegistry: Service removed
```
### Service Registration Process
1. **Service Startup**: Service initializes and loads configuration
2. **Service Info Creation**: Create service info with name, version, address, protocol
3. **Registry Registration**: Register service with the backing registry (Consul, Kubernetes, or etcd)
4. **Health Check Setup**: Start health check endpoint
5. **Health Status Updates**: Periodically update health status in registry
6. **Service Discovery**: Clients query registry for service endpoints
7. **Load Balancing**: Registry returns healthy service instances
8. **Service Deregistration**: On shutdown, remove service from registry
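Steps 2, 6, and 7 can be sketched with a minimal in-memory registry. This is a stand-in for Consul/Kubernetes/etcd, not the platform's real client; the `ServiceInfo` field names are assumptions for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// ServiceInfo carries what a service reports at registration time.
type ServiceInfo struct {
	Name     string
	Version  string
	Address  string
	Protocol string // "grpc" or "http"
	Healthy  bool
}

// Registry is a minimal in-memory stand-in for the backing registry.
type Registry struct {
	mu       sync.Mutex
	services map[string][]ServiceInfo
}

func NewRegistry() *Registry {
	return &Registry{services: map[string][]ServiceInfo{}}
}

func (r *Registry) Register(info ServiceInfo) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.services[info.Name] = append(r.services[info.Name], info)
}

// Discover returns only healthy instances, as step 7 describes;
// a load balancer would then pick one of them.
func (r *Registry) Discover(name string) []ServiceInfo {
	r.mu.Lock()
	defer r.mu.Unlock()
	var healthy []ServiceInfo
	for _, s := range r.services[name] {
		if s.Healthy {
			healthy = append(healthy, s)
		}
	}
	return healthy
}

func main() {
	reg := NewRegistry()
	reg.Register(ServiceInfo{Name: "blog", Version: "1.2.0", Address: "10.0.0.5:9090", Protocol: "grpc", Healthy: true})
	reg.Register(ServiceInfo{Name: "blog", Address: "10.0.0.6:9090", Protocol: "grpc", Healthy: false})
	fmt.Println(len(reg.Discover("blog"))) // 1: the unhealthy instance is filtered out
}
```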
## Service Communication Patterns
Services communicate through well-defined patterns using service clients.
```mermaid
graph TB
subgraph "Service A"
ServiceA[Service A Handler]
ClientA[Service Client]
end
subgraph "Service Registry"
Registry[Service Registry]
end
subgraph "Service B"
ServiceB[Service B Handler]
ServerB[gRPC Server]
end
subgraph "Service C"
ServiceC[Service C Handler]
end
subgraph "Event Bus"
EventBus[Event Bus<br/>Kafka]
end
ServiceA -->|Discover| Registry
Registry -->|Service B endpoint| ClientA
ClientA -->|gRPC Call| ServerB
ServerB --> ServiceB
ServiceB -->|Response| ClientA
ServiceA -->|Publish Event| EventBus
EventBus -->|Subscribe| ServiceC
ServiceC -->|Process Event| ServiceC
style ClientA fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style ServerB fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style EventBus fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Communication Patterns
#### Synchronous Communication (gRPC/HTTP)
```mermaid
sequenceDiagram
participant Client
participant ServiceClient
participant Registry
participant Service
Client->>ServiceClient: Call service method
ServiceClient->>Registry: Discover service
Registry-->>ServiceClient: Service endpoint
ServiceClient->>Service: gRPC/HTTP call
Service->>Service: Process request
Service-->>ServiceClient: Response
ServiceClient-->>Client: Return result
alt Service unavailable
ServiceClient->>Registry: Retry discovery
Registry-->>ServiceClient: Alternative endpoint
ServiceClient->>Service: Retry call
end
```
#### Asynchronous Communication (Event Bus)
```mermaid
sequenceDiagram
participant Publisher
participant EventBus
participant Kafka
participant Subscriber1
participant Subscriber2
Publisher->>EventBus: Publish event
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
Kafka->>Subscriber1: Deliver event
Kafka->>Subscriber2: Deliver event
Subscriber1->>Subscriber1: Process event
Subscriber2->>Subscriber2: Process event
Note over Subscriber1,Subscriber2: Events processed independently
```
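The fan-out in the diagram (every subscriber of a topic receives every event) can be sketched with an in-process bus. This is a stand-in for the Kafka-backed event bus, delivering synchronously for brevity where the real bus delivers asynchronously:

```go
package main

import "fmt"

// Event is a minimal event envelope; real events also carry metadata
// such as trace IDs.
type Event struct {
	Topic   string
	Payload string
}

// Bus fans each published event out to all subscribers of its topic.
type Bus struct {
	subscribers map[string][]func(Event)
}

func NewBus() *Bus { return &Bus{subscribers: map[string][]func(Event){}} }

func (b *Bus) Subscribe(topic string, handler func(Event)) {
	b.subscribers[topic] = append(b.subscribers[topic], handler)
}

func (b *Bus) Publish(e Event) {
	for _, h := range b.subscribers[e.Topic] {
		h(e) // the real bus would deliver via Kafka, asynchronously
	}
}

func main() {
	bus := NewBus()
	var got []string
	bus.Subscribe("post.created", func(e Event) { got = append(got, "indexer:"+e.Payload) })
	bus.Subscribe("post.created", func(e Event) { got = append(got, "notifier:"+e.Payload) })
	bus.Publish(Event{Topic: "post.created", Payload: "42"})
	fmt.Println(got) // both subscribers processed the event independently
}
```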
## Service Dependency Graph
Services have dependencies that determine startup ordering and communication patterns.
```mermaid
graph TD
subgraph "Core Services"
Identity[Identity Service]
Auth[Auth Service]
Authz[Authz Service]
Audit[Audit Service]
end
subgraph "Feature Services"
Blog[Blog Service]
Billing[Billing Service]
Analytics[Analytics Service]
end
subgraph "Infrastructure Services"
Registry[Service Registry]
EventBus[Event Bus]
Cache[Cache Service]
end
Auth --> Identity
Auth --> Registry
Authz --> Identity
Authz --> Cache
Authz --> Audit
Audit --> Registry
Blog --> Authz
Blog --> Identity
Blog --> Audit
Blog --> Registry
Blog --> EventBus
Blog --> Cache
Billing --> Authz
Billing --> Identity
Billing --> Registry
Billing --> EventBus
Analytics --> EventBus
Analytics --> Registry
style Identity fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Auth fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Blog fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Dependency Types
1. **Hard Dependencies**: Service cannot start without dependency (e.g., Auth depends on Identity)
2. **Soft Dependencies**: Service can start but with degraded functionality
3. **Runtime Dependencies**: Dependencies discovered at runtime via service registry
## Service Health and Failure Handling
Services continuously report their health status, enabling automatic failure detection and recovery.
```mermaid
graph TD
Service[Service] --> HealthCheck[Health Check Endpoint]
HealthCheck --> CheckDB[Check Database]
HealthCheck --> CheckCache[Check Cache]
HealthCheck --> CheckDeps[Check Dependencies]
CheckDB -->|Healthy| Aggregate[Aggregate Health]
CheckCache -->|Healthy| Aggregate
CheckDeps -->|Healthy| Aggregate
Aggregate -->|All Healthy| Healthy[Healthy Status]
Aggregate -->|Degraded| Degraded[Degraded Status]
Aggregate -->|Unhealthy| Unhealthy[Unhealthy Status]
Healthy --> Registry[Update Registry]
Degraded --> Registry
Unhealthy --> Registry
Registry --> LoadBalancer[Load Balancer]
LoadBalancer -->|Healthy Only| RouteTraffic[Route Traffic]
LoadBalancer -->|Unhealthy| NoTraffic[No Traffic]
style Healthy fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Degraded fill:#ffa500,stroke:#ff8c00,stroke-width:2px,color:#fff
style Unhealthy fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
### Health Check Types
1. **Liveness Check**: Service process is running
2. **Readiness Check**: Service is ready to accept requests
3. **Dependency Checks**: Database, cache, and other dependencies are accessible
4. **Business Health**: Service-specific health indicators
### Failure Handling Strategies
#### Circuit Breaker Pattern
```mermaid
stateDiagram-v2
[*] --> Closed: Service healthy
Closed --> Open: Failure threshold exceeded
Open --> HalfOpen: Timeout period
HalfOpen --> Closed: Success
HalfOpen --> Open: Failure
```
#### Retry Strategy
```mermaid
sequenceDiagram
participant Client
participant Service
Client->>Service: Request
Service-->>Client: Failure
Client->>Client: Wait (exponential backoff)
Client->>Service: Retry 1
Service-->>Client: Failure
Client->>Client: Wait (exponential backoff)
Client->>Service: Retry 2
Service-->>Client: Success
```
#### Service Degradation
When a service dependency fails, the service may continue operating with degraded functionality:
- **Cache Unavailable**: Service continues but without caching
- **Event Bus Unavailable**: Service continues but events are queued
- **Non-Critical Dependency Fails**: Service continues with reduced features
## Service Scaling Scenarios
Services can be scaled independently based on load and requirements.
```mermaid
graph TB
subgraph "Load Balancer"
LB[Load Balancer]
end
subgraph "Service Instances"
Instance1[Service Instance 1<br/>Healthy]
Instance2[Service Instance 2<br/>Healthy]
Instance3[Service Instance 3<br/>Starting]
Instance4[Service Instance 4<br/>Unhealthy]
end
subgraph "Service Registry"
Registry[Service Registry]
end
subgraph "Infrastructure"
DB[(Database)]
Cache[(Cache)]
end
LB -->|Discover| Registry
Registry -->|Healthy Instances| LB
LB --> Instance1
LB --> Instance2
LB -.->|No Traffic| Instance3
LB -.->|No Traffic| Instance4
Instance1 --> DB
Instance2 --> DB
Instance3 --> DB
Instance4 --> DB
Instance1 --> Cache
Instance2 --> Cache
style Instance1 fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Instance2 fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Instance3 fill:#ffa500,stroke:#ff8c00,stroke-width:2px,color:#fff
style Instance4 fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
### Scaling Patterns
1. **Horizontal Scaling**: Add more service instances
2. **Vertical Scaling**: Increase resources for existing instances
3. **Auto-Scaling**: Automatically scale based on metrics
4. **Load-Based Routing**: Route traffic to healthy instances only
## Integration Points
This service orchestration integrates with:
- **[System Behavior Overview](system-behavior.md)**: How services behave during startup and operation
- **[Module Integration Patterns](module-integration-patterns.md)**: How modules are loaded as services
- **[Operational Scenarios](operational-scenarios.md)**: Service interaction in specific scenarios
- **[Architecture Overview](architecture.md)**: Overall system architecture
## Related Documentation
- [System Behavior Overview](system-behavior.md) - System-level behavior
- [Module Integration Patterns](module-integration-patterns.md) - Module service integration
- [Operational Scenarios](operational-scenarios.md) - Service interaction scenarios
- [Architecture Overview](architecture.md) - System architecture
- [ADR-0029: Microservices Architecture](../adr/0029-microservices-architecture.md) - Architecture decision
- [ADR-0030: Service Communication Strategy](../adr/0030-service-communication-strategy.md) - Communication patterns
# System Behavior Overview
## Purpose
This document provides a high-level explanation of how the Go Platform behaves end-to-end, focusing on system-level operations, flows, and interactions rather than implementation details.
## Overview
The Go Platform is a microservices-based system where each module operates as an independent service. Services communicate via gRPC (primary) or HTTP (fallback), share infrastructure components (PostgreSQL, Redis, Kafka), and are orchestrated through service discovery and dependency injection.
## Key Concepts
- **Services**: Independent processes that can be deployed and scaled separately
- **Service Clients**: Abstraction layer for inter-service communication
- **Service Registry**: Central registry for service discovery
- **Event Bus**: Asynchronous communication channel for events
- **DI Container**: Dependency injection container managing service lifecycle
## Application Bootstrap Sequence
The platform follows a well-defined startup sequence that ensures all services are properly initialized and registered.
```mermaid
sequenceDiagram
participant Main
participant Config
participant Logger
participant DI
participant Registry
participant ModuleLoader
participant ServiceRegistry
participant HTTP
participant gRPC
Main->>Config: Load configuration
Config-->>Main: Config ready
Main->>Logger: Initialize logger
Logger-->>Main: Logger ready
Main->>DI: Create DI container
DI->>DI: Register core services
DI-->>Main: DI container ready
Main->>ModuleLoader: Discover modules
ModuleLoader->>ModuleLoader: Scan module directories
ModuleLoader->>ModuleLoader: Load module.yaml files
ModuleLoader-->>Main: Module list
Main->>Registry: Register modules
Registry->>Registry: Resolve dependencies
Registry->>Registry: Order modules
Registry-->>Main: Ordered modules
loop For each module
Main->>Module: Initialize module
Module->>DI: Register services
Module->>Registry: Register routes
Module->>Registry: Register migrations
end
Main->>Registry: Run migrations
Registry->>Registry: Execute in dependency order
Main->>ServiceRegistry: Register service
ServiceRegistry->>ServiceRegistry: Register with Consul/K8s
ServiceRegistry-->>Main: Service registered
Main->>gRPC: Start gRPC server
Main->>HTTP: Start HTTP server
HTTP-->>Main: Server ready
gRPC-->>Main: Server ready
Main->>DI: Start lifecycle
DI->>DI: Execute OnStart hooks
DI-->>Main: All services started
```
### Bootstrap Phases
1. **Configuration Loading**: Load YAML files, environment variables, and secrets
2. **Foundation Services**: Initialize logger, config provider, DI container
3. **Module Discovery**: Scan and load module manifests
4. **Dependency Resolution**: Build dependency graph and order modules
5. **Module Initialization**: Initialize each module in dependency order
6. **Database Migrations**: Run migrations in dependency order
7. **Service Registration**: Register service with service registry
8. **Server Startup**: Start HTTP and gRPC servers
9. **Lifecycle Hooks**: Execute OnStart hooks for all services
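Phase 4 (dependency resolution and ordering) is essentially a topological sort of the module graph. A minimal depth-first sketch, with illustrative module names (the platform's real resolver would also detect cycles and report missing dependencies):

```go
package main

import "fmt"

// Order returns the modules sorted so that every module comes after
// all of its dependencies (depth-first topological sort).
func Order(deps map[string][]string) []string {
	visited := map[string]bool{}
	var ordered []string
	var visit func(name string)
	visit = func(name string) {
		if visited[name] {
			return
		}
		visited[name] = true
		for _, d := range deps[name] {
			visit(d) // dependencies are appended before the module itself
		}
		ordered = append(ordered, name)
	}
	for name := range deps {
		visit(name)
	}
	return ordered
}

func main() {
	deps := map[string][]string{
		"identity": {},
		"auth":     {"identity"},
		"blog":     {"auth", "identity"},
	}
	fmt.Println(Order(deps)) // identity before auth before blog
}
```

The same ordering is then reused for phase 6: migrations run module by module in this sequence.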
## Request Processing Pipeline
Every HTTP request flows through a standardized pipeline that ensures security, observability, and proper error handling.
```mermaid
graph TD
Start([HTTP Request]) --> Auth[Authentication Middleware]
Auth -->|Valid Token| Authz[Authorization Middleware]
Auth -->|Invalid Token| Error1[401 Unauthorized]
Authz -->|Authorized| RateLimit[Rate Limiting]
Authz -->|Unauthorized| Error2[403 Forbidden]
RateLimit -->|Within Limits| Tracing[OpenTelemetry Tracing]
RateLimit -->|Rate Limited| Error3[429 Too Many Requests]
Tracing --> Handler[Request Handler]
Handler --> Service[Domain Service]
Service --> Cache{Cache Check}
Cache -->|Hit| Return[Return Cached Data]
Cache -->|Miss| Repo[Repository]
Repo --> DB[(Database)]
DB --> Repo
Repo --> Service
Service --> CacheStore[Update Cache]
Service --> EventBus[Publish Events]
Service --> Audit[Audit Logging]
Service --> Metrics[Update Metrics]
Service --> Handler
Handler --> Tracing
Tracing --> Response[HTTP Response]
Error1 --> Response
Error2 --> Response
Error3 --> Response
Return --> Response
style Auth fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
style Authz fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
style Service fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Request Processing Stages
1. **Authentication**: Extract and validate JWT token, add user to context
2. **Authorization**: Check user permissions for requested resource
3. **Rate Limiting**: Enforce per-user and per-IP rate limits
4. **Tracing**: Start/continue distributed trace
5. **Handler Processing**: Execute request handler
6. **Service Logic**: Execute business logic
7. **Data Access**: Query database or cache
8. **Side Effects**: Publish events, audit logs, update metrics
9. **Response**: Return HTTP response with tracing context
## Event-Driven Interactions
The platform uses an event bus for asynchronous communication between services, enabling loose coupling and scalability.
```mermaid
sequenceDiagram
participant Publisher
participant EventBus
participant Kafka
participant Subscriber1
participant Subscriber2
Publisher->>EventBus: Publish(event)
EventBus->>EventBus: Serialize event
EventBus->>EventBus: Add metadata (trace_id, user_id)
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
Kafka->>Subscriber1: Deliver event
Kafka->>Subscriber2: Deliver event
Subscriber1->>Subscriber1: Process event
Subscriber1->>Subscriber1: Update state
Subscriber1->>Subscriber1: Emit new events (optional)
Subscriber2->>Subscriber2: Process event
Subscriber2->>Subscriber2: Update state
Note over Subscriber1,Subscriber2: Events processed asynchronously
```
### Event Processing Flow
1. **Event Publishing**: Service publishes event to event bus
2. **Event Serialization**: Event is serialized with metadata
3. **Event Distribution**: Event bus distributes to Kafka topic
4. **Event Consumption**: Subscribers consume events from Kafka
5. **Event Processing**: Each subscriber processes event independently
6. **State Updates**: Subscribers update their own state
7. **Cascade Events**: Subscribers may publish new events
## Background Job Processing
Background jobs are scheduled and processed asynchronously, enabling long-running tasks and scheduled operations.
```mermaid
sequenceDiagram
participant Scheduler
participant JobQueue
participant Worker
participant Service
participant DB
participant EventBus
Scheduler->>JobQueue: Enqueue job
JobQueue->>JobQueue: Store job definition
Worker->>JobQueue: Poll for jobs
JobQueue-->>Worker: Job definition
Worker->>Worker: Start job execution
Worker->>Service: Execute job logic
Service->>DB: Update data
Service->>EventBus: Publish events
Service-->>Worker: Job complete
Worker->>JobQueue: Mark job complete
alt Job fails
Worker->>JobQueue: Mark job failed
JobQueue->>JobQueue: Schedule retry
end
```
### Background Job Flow
1. **Job Scheduling**: Jobs scheduled via cron or programmatically
2. **Job Enqueueing**: Job definition stored in job queue
3. **Job Polling**: Workers poll queue for available jobs
4. **Job Execution**: Worker executes job logic
5. **Job Completion**: Job marked as complete or failed
6. **Job Retry**: Failed jobs retried with exponential backoff
## Error Recovery and Resilience
The platform implements multiple layers of error handling to ensure system resilience.
```mermaid
graph TD
Error[Error Occurs] --> Handler{Error Handler}
Handler -->|Business Error| BusinessError[Business Error Handler]
Handler -->|System Error| SystemError[System Error Handler]
Handler -->|Panic| PanicHandler[Panic Recovery]
BusinessError --> ErrorBus[Error Bus]
SystemError --> ErrorBus
PanicHandler --> ErrorBus
ErrorBus --> Logger[Logger]
ErrorBus --> Sentry[Sentry]
ErrorBus --> Metrics[Metrics]
BusinessError --> Response[HTTP Response]
SystemError --> Response
PanicHandler --> Response
Response --> Client[Client]
style Error fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
style ErrorBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Error Handling Layers
1. **Panic Recovery**: Middleware catches panics and prevents crashes
2. **Error Classification**: Errors classified as business or system errors
3. **Error Bus**: Central error bus collects all errors
4. **Error Logging**: Errors logged with full context
5. **Error Reporting**: Critical errors reported to Sentry
6. **Error Metrics**: Errors tracked in metrics
7. **Error Response**: Appropriate HTTP response returned
## System Shutdown Sequence
The platform implements graceful shutdown to ensure data consistency and proper resource cleanup.
```mermaid
sequenceDiagram
participant Signal
participant Main
participant HTTP
participant gRPC
participant ServiceRegistry
participant DI
participant Workers
participant DB
Signal->>Main: SIGTERM/SIGINT
Main->>HTTP: Stop accepting requests
HTTP->>HTTP: Wait for active requests
HTTP-->>Main: HTTP server stopped
Main->>gRPC: Stop accepting connections
gRPC->>gRPC: Wait for active calls
gRPC-->>Main: gRPC server stopped
Main->>ServiceRegistry: Deregister service
ServiceRegistry->>ServiceRegistry: Remove from registry
ServiceRegistry-->>Main: Service deregistered
Main->>Workers: Stop workers
Workers->>Workers: Finish current jobs
Workers-->>Main: Workers stopped
Main->>DI: Stop lifecycle
DI->>DI: Execute OnStop hooks
DI->>DI: Close connections
DI->>DB: Close DB connections
DI-->>Main: Services stopped
Main->>Main: Exit
```
### Shutdown Phases
1. **Signal Reception**: Receive SIGTERM or SIGINT
2. **Stop Accepting Requests**: HTTP and gRPC servers stop accepting new requests
3. **Wait for Active Requests**: Wait for in-flight requests to complete
4. **Service Deregistration**: Remove service from service registry
5. **Worker Shutdown**: Stop background workers gracefully
6. **Lifecycle Hooks**: Execute OnStop hooks for all services
7. **Resource Cleanup**: Close database connections, release resources
8. **Application Exit**: Exit application cleanly
## Health Check and Monitoring Flow
Health checks and metrics provide visibility into system health and performance.
```mermaid
graph TD
    HealthEndpoint["/healthz"] --> HealthRegistry[Health Registry]
HealthRegistry --> CheckDB[Check Database]
HealthRegistry --> CheckCache[Check Cache]
HealthRegistry --> CheckEventBus[Check Event Bus]
CheckDB -->|Healthy| Aggregate[Aggregate Results]
CheckCache -->|Healthy| Aggregate
CheckEventBus -->|Healthy| Aggregate
Aggregate -->|All Healthy| Response200[200 OK]
Aggregate -->|Unhealthy| Response503[503 Service Unavailable]
    MetricsEndpoint["/metrics"] --> MetricsRegistry[Metrics Registry]
MetricsRegistry --> Prometheus[Prometheus Format]
Prometheus --> ResponseMetrics[Metrics Response]
style HealthRegistry fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style MetricsRegistry fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Health Check Components
- **Liveness Check**: Service is running (process health)
- **Readiness Check**: Service is ready to accept requests (dependency health)
- **Dependency Checks**: Database, cache, event bus connectivity
- **Metrics Collection**: Request counts, durations, error rates
- **Metrics Export**: Prometheus-formatted metrics
## Integration Points
This system behavior integrates with:
- **[Service Orchestration](service-orchestration.md)**: How services coordinate during startup and operation
- **[Module Integration Patterns](module-integration-patterns.md)**: How modules integrate during bootstrap
- **[Operational Scenarios](operational-scenarios.md)**: Specific operational flows and use cases
- **[Data Flow Patterns](data-flow-patterns.md)**: Detailed data flow through the system
- **[Architecture Overview](architecture.md)**: System architecture and component relationships
## Related Documentation
- [Architecture Overview](architecture.md) - System architecture
- [Service Orchestration](service-orchestration.md) - Service coordination
- [Module Integration Patterns](module-integration-patterns.md) - Module integration
- [Operational Scenarios](operational-scenarios.md) - Common operational flows
- [Component Relationships](component-relationships.md) - Component dependencies