docs: add system specifications

2025-11-05 09:51:47 +01:00
parent ace9678f6c
commit b4f8875a0e
7 changed files with 2030 additions and 0 deletions


@@ -0,0 +1,423 @@
# Data Flow Patterns
## Purpose
This document describes how data flows through the Go Platform system, covering request/response flows, event flows, cache patterns, and observability data collection.
## Overview
Data flows through the platform in multiple patterns depending on the type of operation. Understanding these patterns helps in debugging, performance optimization, and system design decisions.
## Key Concepts
- **Request Flow**: Data flow from HTTP request to response
- **Event Flow**: Asynchronous data flow through event bus
- **Cache Flow**: Data flow through caching layers
- **Observability Flow**: Telemetry data collection and export
## Request/Response Data Flow
### Standard HTTP Request Flow
Complete data flow from HTTP request to response.
```mermaid
graph TD
Start[HTTP Request] --> Auth[Authentication]
Auth -->|Valid| Authz[Authorization]
Auth -->|Invalid| Error1[401 Response]
Authz -->|Authorized| Handler[Request Handler]
Authz -->|Unauthorized| Error2[403 Response]
Handler --> Service[Domain Service]
Service --> Cache{Cache Check}
Cache -->|Hit| CacheData[Return Cached Data]
Cache -->|Miss| Repo[Repository]
Repo --> DB[(Database)]
DB --> Repo
Repo --> Service
Service --> CacheStore[Update Cache]
Service --> EventBus[Publish Events]
Service --> Audit[Audit Log]
Service --> Metrics[Update Metrics]
Service --> Handler
Handler --> Response[HTTP Response]
CacheData --> Response
Error1 --> Response
Error2 --> Response
Response --> Client[Client]
style Auth fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
style Service fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Cache fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Request Data Transformation
How request data is transformed as it flows through the system.
```mermaid
sequenceDiagram
participant Client
participant Handler
participant Service
participant Repo
participant DB
Client->>Handler: HTTP Request (JSON)
Handler->>Handler: Parse JSON
Handler->>Handler: Validate request
Handler->>Handler: Convert to DTO
Handler->>Service: Business DTO
Service->>Service: Business logic
Service->>Service: Domain entity
Service->>Repo: Domain entity
Repo->>Repo: Convert to DB model
Repo->>DB: SQL query
DB-->>Repo: DB result
Repo->>Repo: Convert to domain entity
Repo-->>Service: Domain entity
Service->>Service: Business logic
Service->>Service: Response DTO
Service-->>Handler: Response DTO
Handler->>Handler: Convert to JSON
Handler-->>Client: HTTP Response (JSON)
```
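The same transformation chain can be sketched in Go. The types below (`CreatePostRequest`, `Post`, `postRow`, `PostResponse`) are illustrative only; they show how the payload changes shape at the handler, service, and repository boundaries, not the platform's actual types.
```go
package blog

import "time"

// Handler layer: the JSON body is parsed into a request DTO.
type CreatePostRequest struct {
	Title string `json:"title"`
	Body  string `json:"body"`
}

// Service layer: the domain entity used by business logic.
type Post struct {
	ID        string
	Title     string
	Body      string
	CreatedAt time.Time
}

// Repository layer: the database model mapped to a table row.
type postRow struct {
	ID        string    `db:"id"`
	Title     string    `db:"title"`
	Body      string    `db:"body"`
	CreatedAt time.Time `db:"created_at"`
}

// toEntity converts a database row back into a domain entity on the way out.
func (r postRow) toEntity() Post {
	return Post{ID: r.ID, Title: r.Title, Body: r.Body, CreatedAt: r.CreatedAt}
}

// Handler layer: the response DTO serialized back to JSON for the client.
type PostResponse struct {
	ID      string    `json:"id"`
	Title   string    `json:"title"`
	Created time.Time `json:"created"`
}
```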
## Event Data Flow
### Event Publishing Flow
How events are published and flow through the event bus.
```mermaid
graph LR
Publisher[Event Publisher] --> Serialize[Serialize Event]
Serialize --> Metadata[Add Metadata]
Metadata --> EventBus[Event Bus]
EventBus --> Topic[Kafka Topic]
Topic --> Subscriber1[Subscriber 1]
Topic --> Subscriber2[Subscriber 2]
Topic --> SubscriberN[Subscriber N]
Subscriber1 --> Process1[Process Event]
Subscriber2 --> Process2[Process Event]
SubscriberN --> ProcessN[Process Event]
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Topic fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Event Data Transformation
How event data is transformed during publishing and consumption.
```mermaid
sequenceDiagram
participant Publisher
participant EventBus
participant Kafka
participant Subscriber
Publisher->>Publisher: Domain event
Publisher->>EventBus: Publish(event)
EventBus->>EventBus: Serialize to JSON
EventBus->>EventBus: Add metadata (trace_id, user_id, timestamp, source)
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
Kafka->>Subscriber: Deliver event
Subscriber->>Subscriber: Deserialize JSON
Subscriber->>Subscriber: Extract metadata
Subscriber->>Subscriber: Domain event
Subscriber->>Subscriber: Process event
```
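A minimal Go sketch of the publishing side, assuming a narrow `Publisher` interface; the envelope fields mirror the metadata listed above but are not the platform's actual wire format.
```go
package events

import (
	"context"
	"encoding/json"
	"time"
)

// Envelope wraps the serialized domain event with metadata before it is sent to Kafka.
type Envelope struct {
	TraceID   string          `json:"trace_id"`
	UserID    string          `json:"user_id"`
	Timestamp time.Time       `json:"timestamp"`
	Source    string          `json:"source"`
	Payload   json.RawMessage `json:"payload"`
}

// Publisher is an assumed interface the event bus would expose to modules.
type Publisher interface {
	Publish(ctx context.Context, topic string, data []byte) error
}

// PublishEvent serializes the event, attaches metadata, and hands it to the bus.
func PublishEvent(ctx context.Context, bus Publisher, topic, source, traceID, userID string, event any) error {
	payload, err := json.Marshal(event)
	if err != nil {
		return err
	}
	data, err := json.Marshal(Envelope{
		TraceID:   traceID,
		UserID:    userID,
		Timestamp: time.Now().UTC(),
		Source:    source,
		Payload:   payload,
	})
	if err != nil {
		return err
	}
	return bus.Publish(ctx, topic, data)
}
```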
## Cache Data Flow
### Cache-Aside Pattern Flow
How data flows through cache using the cache-aside pattern.
```mermaid
graph TD
Start[Service Request] --> Check{Cache Hit?}
Check -->|Yes| GetCache[Get from Cache]
Check -->|No| GetDB[Query Database]
GetCache --> Deserialize[Deserialize Data]
Deserialize --> Return[Return Data]
GetDB --> DB[(Database)]
DB --> DBData[Database Result]
DBData --> Serialize[Serialize Data]
Serialize --> StoreCache[Store in Cache]
StoreCache --> Return
style Check fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style StoreCache fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
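In Go, the read path above reduces to a few lines. `Cache` and `UserRepository` are assumed interfaces standing in for the platform's abstractions; the key format and TTL are illustrative.
```go
package store

import (
	"context"
	"encoding/json"
	"time"
)

type User struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// Cache and UserRepository stand in for the platform's cache and repository abstractions.
type Cache interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
}

type UserRepository interface {
	FindByID(ctx context.Context, id string) (User, error)
}

// GetUser is the cache-aside read: check the cache, fall back to the database
// on a miss, then populate the cache for the next request.
func GetUser(ctx context.Context, c Cache, repo UserRepository, id string) (User, error) {
	key := "user:" + id
	if raw, err := c.Get(ctx, key); err == nil {
		var u User
		if err := json.Unmarshal(raw, &u); err == nil {
			return u, nil // cache hit
		}
	}
	u, err := repo.FindByID(ctx, id) // cache miss: query the database
	if err != nil {
		return User{}, err
	}
	if raw, err := json.Marshal(u); err == nil {
		_ = c.Set(ctx, key, raw, 5*time.Minute) // best-effort cache fill
	}
	return u, nil
}
```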
### Cache Invalidation Flow
How cache is invalidated when data changes.
```mermaid
sequenceDiagram
participant Service
participant Repository
participant DB
participant Cache
Service->>Repository: Update entity
Repository->>DB: Update database
DB-->>Repository: Update complete
Repository->>Cache: Invalidate(key)
Cache->>Cache: Remove from cache
Cache-->>Repository: Invalidated
Repository-->>Service: Update complete
Note over Service,Cache: Next read will fetch from DB and cache
```
### Cache Write-Through Pattern
How data is written through cache to database.
```mermaid
sequenceDiagram
participant Service
participant Cache
participant Repository
participant DB
Service->>Cache: Write data
Cache->>Cache: Store in cache
Cache->>Repository: Write to database
Repository->>DB: Insert/Update
DB-->>Repository: Success
Repository-->>Cache: Write complete
Cache-->>Service: Data written
```
## Observability Data Flow
### Tracing Data Flow
How distributed tracing data flows through the system.
```mermaid
graph TD
Request[HTTP Request] --> Trace[Start Trace]
Trace --> Span1[HTTP Span]
Span1 --> Service[Service Call]
Service --> Span2[Service Span]
Span2 --> DB[Database Query]
DB --> Span3[DB Span]
Span2 --> gRPC[gRPC Call]
gRPC --> Span4[gRPC Span]
Span3 --> Aggregate[Collect Spans]
Span4 --> Aggregate
Aggregate --> Export[Export to Collector]
Export --> Collector[OpenTelemetry Collector]
Collector --> Backend[Backend Storage]
style Trace fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Aggregate fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Metrics Data Flow
How metrics are collected and exported.
```mermaid
sequenceDiagram
participant Service
participant MetricsRegistry
participant Exporter
participant Prometheus
participant Grafana
Service->>Service: Business operation
Service->>MetricsRegistry: Increment counter
Service->>MetricsRegistry: Record duration
Service->>MetricsRegistry: Set gauge
MetricsRegistry->>MetricsRegistry: Aggregate metrics
Prometheus->>Exporter: Scrape metrics
Exporter->>MetricsRegistry: Get metrics
MetricsRegistry-->>Exporter: Metrics data
Exporter-->>Prometheus: Prometheus format
Prometheus->>Prometheus: Store metrics
Grafana->>Prometheus: Query metrics
Prometheus-->>Grafana: Metrics data
Grafana->>Grafana: Render dashboard
```
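The collection and scrape steps can be sketched with the Prometheus Go client (`prometheus/client_golang`); the metric names and labels below are illustrative, not the platform's actual metric set.
```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	requestsTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "http_requests_total", Help: "Total HTTP requests."},
		[]string{"method", "path", "status"},
	)
	requestDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{Name: "http_request_duration_seconds", Help: "Request latency."},
		[]string{"method", "path"},
	)
)

func main() {
	prometheus.MustRegister(requestsTotal, requestDuration)

	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		w.Write([]byte("ok"))
		// The service updates registry-backed collectors during the request.
		requestsTotal.WithLabelValues(r.Method, r.URL.Path, "200").Inc()
		requestDuration.WithLabelValues(r.Method, r.URL.Path).Observe(time.Since(start).Seconds())
	})

	// Prometheus scrapes this endpoint; Grafana then queries Prometheus.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```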
### Log Data Flow
How logs flow through the system to various sinks.
```mermaid
graph TD
Service[Service] --> Logger[Logger]
Logger --> Format[Format Log]
Format --> Output[Output Log]
Output --> Stdout[stdout]
Output --> File[File]
Output --> LogCollector[Log Collector]
LogCollector --> Elasticsearch[Elasticsearch]
LogCollector --> CloudLogging[Cloud Logging]
Stdout --> Container[Container Logs]
style Logger fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style LogCollector fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Audit Data Flow
How audit logs flow through the system.
```mermaid
sequenceDiagram
participant Service
participant AuditClient
participant AuditService
participant DB
participant Archive
Service->>Service: Security-sensitive action
Service->>AuditClient: Record audit log
AuditClient->>AuditClient: Build audit entry (actor, action, target, metadata, timestamp)
AuditClient->>AuditService: Store audit log
AuditService->>AuditService: Validate entry
AuditService->>AuditService: Ensure immutability
AuditService->>DB: Insert audit log
DB-->>AuditService: Log stored
AuditService->>Archive: Archive old logs
Archive->>Archive: Long-term storage
Note over Service,Archive: Audit logs are immutable
```
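A sketch of what an audit entry might look like as a Go type, matching the fields in the diagram; the platform's actual schema and client API may differ.
```go
package audit

import (
	"context"
	"time"
)

// Entry carries the fields an audit record holds per the flow above.
type Entry struct {
	Actor     string            `json:"actor"`     // who performed the action
	Action    string            `json:"action"`    // e.g. "user.role.assign"
	Target    string            `json:"target"`    // the affected resource
	Metadata  map[string]string `json:"metadata"`  // additional context
	Timestamp time.Time         `json:"timestamp"` // when the action occurred
}

// Client is the narrow interface a service would use to record entries.
// The audit service itself validates each entry and stores it immutably.
type Client interface {
	Record(ctx context.Context, e Entry) error
}
```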
## Cross-Service Data Flow
### Inter-Service Request Flow
How data flows when services communicate via service clients.
```mermaid
sequenceDiagram
participant ServiceA
participant ServiceClient
participant ServiceRegistry
participant ServiceB
participant DB
ServiceA->>ServiceClient: Call service method
ServiceClient->>ServiceRegistry: Discover service
ServiceRegistry-->>ServiceClient: Service endpoint
ServiceClient->>ServiceB: gRPC request
ServiceB->>ServiceB: Process request
ServiceB->>DB: Query data
DB-->>ServiceB: Data
ServiceB->>ServiceB: Business logic
ServiceB-->>ServiceClient: gRPC response
ServiceClient-->>ServiceA: Return data
```
### Service-to-Service Event Flow
How events flow between services.
```mermaid
graph LR
ServiceA[Service A] -->|Publish| EventBus[Event Bus]
EventBus -->|Route| ServiceB[Service B]
EventBus -->|Route| ServiceC[Service C]
ServiceB -->|Publish| EventBus
EventBus -->|Route| ServiceD[Service D]
ServiceC -->|Publish| EventBus
EventBus -->|Route| ServiceE[Service E]
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
## Data Flow Patterns Summary
### Request Flow Pattern
- **Path**: Client → HTTP → Handler → Service → Repository → Database
- **Response**: Database → Repository → Service → Handler → HTTP → Client
- **Side Effects**: Cache updates, event publishing, audit logging, metrics
### Event Flow Pattern
- **Path**: Publisher → Event Bus → Kafka → Subscribers
- **Characteristics**: Asynchronous, eventual consistency, decoupled
### Cache Flow Pattern
- **Read**: Cache → (miss) → Database → Cache
- **Write**: Service → Database → Cache invalidation
- **Characteristics**: Performance optimization, cache-aside pattern
### Observability Flow Pattern
- **Tracing**: Service → OpenTelemetry → Collector → Backend
- **Metrics**: Service → Metrics Registry → Prometheus → Grafana
- **Logs**: Service → Logger → Collector → Storage
## Integration Points
This data flow patterns document integrates with:
- **[System Behavior Overview](system-behavior.md)**: How data flows fit into system behavior
- **[Service Orchestration](service-orchestration.md)**: How data flows between services
- **[Module Integration Patterns](module-integration-patterns.md)**: How data flows through modules
- **[Operational Scenarios](operational-scenarios.md)**: Data flow in specific scenarios
- **[Component Relationships](component-relationships.md)**: Component-level data flow
## Related Documentation
- [System Behavior Overview](system-behavior.md) - System-level behavior
- [Service Orchestration](service-orchestration.md) - Service coordination
- [Module Integration Patterns](module-integration-patterns.md) - Module integration
- [Operational Scenarios](operational-scenarios.md) - Operational flows
- [Component Relationships](component-relationships.md) - Component dependencies
- [Architecture Overview](architecture.md) - System architecture


@@ -25,6 +25,13 @@ Go Platform is a modular, extensible platform designed to support multiple busin
- **[Module Requirements](module-requirements.md)**: Detailed requirements for each module
- **[Component Relationships](component-relationships.md)**: Component interactions and dependencies
### 📐 System Specifications
- **[System Behavior Overview](system-behavior.md)**: How the system behaves end-to-end
- **[Service Orchestration](service-orchestration.md)**: How services work together
- **[Module Integration Patterns](module-integration-patterns.md)**: How modules integrate with the platform
- **[Operational Scenarios](operational-scenarios.md)**: Common operational flows and use cases
- **[Data Flow Patterns](data-flow-patterns.md)**: How data flows through the system
### 🏗️ Architecture Decision Records (ADRs)
All architectural decisions are documented in [ADR records](adr/README.md), organized by implementation epic:
- **Epic 0**: Project Setup & Foundation


@@ -0,0 +1,410 @@
# Module Integration Patterns
## Purpose
This document explains how modules integrate with the core platform, focusing on module discovery, initialization, service integration, and communication patterns rather than detailed implementation.
## Overview
Modules are independent services that extend the platform's functionality. They integrate with the core platform through well-defined interfaces, service clients, and a standardized initialization process. Each module operates as an independent service while leveraging core platform capabilities.
## Key Concepts
- **Module**: Independent service providing specific functionality
- **Module Manifest**: YAML file defining module metadata and configuration
- **Module Interface**: Standard interface all modules implement
- **Service Clients**: Abstraction for inter-service communication
- **Module Registry**: Registry tracking all loaded modules
## Module Discovery Process
Modules are discovered automatically during application startup by scanning module directories.
```mermaid
sequenceDiagram
participant Main
participant ModuleLoader
participant FileSystem
participant ModuleManifest
participant ModuleRegistry
Main->>ModuleLoader: DiscoverModules()
ModuleLoader->>FileSystem: Scan modules/ directory
FileSystem-->>ModuleLoader: Module directories
loop For each module directory
ModuleLoader->>FileSystem: Read module.yaml
FileSystem-->>ModuleLoader: Module manifest
ModuleLoader->>ModuleManifest: Parse manifest
ModuleManifest-->>ModuleLoader: Module metadata
ModuleLoader->>ModuleRegistry: Register module
ModuleRegistry->>ModuleRegistry: Validate manifest
ModuleRegistry->>ModuleRegistry: Check dependencies
ModuleRegistry-->>ModuleLoader: Module registered
end
ModuleLoader->>ModuleRegistry: Resolve dependencies
ModuleRegistry->>ModuleRegistry: Build dependency graph
ModuleRegistry->>ModuleRegistry: Order modules
ModuleRegistry-->>ModuleLoader: Ordered module list
ModuleLoader-->>Main: Module list ready
```
### Discovery Steps
1. **Directory Scanning**: Scan `modules/` directory for module subdirectories
2. **Manifest Loading**: Load `module.yaml` from each module directory
3. **Manifest Parsing**: Parse manifest to extract metadata
4. **Dependency Extraction**: Extract module dependencies from manifest
5. **Module Registration**: Register module in module registry
6. **Dependency Resolution**: Build dependency graph and order modules
7. **Validation**: Validate all dependencies are available
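As a sketch of steps 2–3, a loader could map `module.yaml` onto a Go struct using `gopkg.in/yaml.v3`; the field set below is an assumption based on the concepts in this document, not the real manifest schema.
```go
package loader

import (
	"os"

	"gopkg.in/yaml.v3"
)

// Manifest mirrors the information the loader reads from each module.yaml.
type Manifest struct {
	Name         string   `yaml:"name"`
	Version      string   `yaml:"version"`
	Dependencies []string `yaml:"dependencies"`
	Permissions  []string `yaml:"permissions"`
}

// LoadManifest reads and parses one module's manifest file.
func LoadManifest(path string) (Manifest, error) {
	var m Manifest
	data, err := os.ReadFile(path)
	if err != nil {
		return m, err
	}
	if err := yaml.Unmarshal(data, &m); err != nil {
		return m, err
	}
	return m, nil
}
```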
## Module Initialization Flow
Modules are initialized in dependency order, ensuring each module's dependencies are fully initialized before the module itself.
```mermaid
sequenceDiagram
participant Main
participant ModuleRegistry
participant Module
participant DI
participant Router
participant ServiceRegistry
participant DB
Main->>ModuleRegistry: GetOrderedModules()
ModuleRegistry-->>Main: Ordered module list
loop For each module (dependency order)
Main->>Module: Init()
Module->>DI: Provide services
DI->>DI: Register module services
DI-->>Module: Services registered
Module->>Router: Register routes
Router->>Router: Add route handlers
Router-->>Module: Routes registered
Module->>DB: Register migrations
DB->>DB: Store migration info
DB-->>Module: Migrations registered
Module->>ServiceRegistry: Register service
ServiceRegistry->>ServiceRegistry: Register with registry
ServiceRegistry-->>Module: Service registered
Module->>Module: OnStart hook (optional)
Module-->>Main: Module initialized
end
Main->>DB: Run migrations
DB->>DB: Execute in dependency order
DB-->>Main: Migrations complete
Main->>Router: Start HTTP server
Main->>ServiceRegistry: Start service discovery
```
### Initialization Phases
1. **Dependency Resolution**: Determine module initialization order
2. **DI Registration**: Register module services in the DI container
3. **Route Registration**: Register HTTP routes
4. **Migration Registration**: Register database migrations
5. **Service Registry Registration**: Register the module as a service in the service registry
6. **Lifecycle Hooks**: Execute OnStart hooks if defined
7. **Migration Execution**: Run migrations in dependency order
8. **Server Startup**: Start HTTP and gRPC servers
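A hypothetical Go interface capturing these phases; the platform's real module interface may differ in names and signatures.
```go
package module

import "context"

// Module is a sketch of the standard interface every module might implement.
type Module interface {
	// Name and Dependencies drive dependency resolution and ordering.
	Name() string
	Dependencies() []string

	// Init registers services, routes, and migrations with the platform.
	Init(ctx context.Context, platform Platform) error

	// Optional lifecycle hooks executed after initialization and before shutdown.
	OnStart(ctx context.Context) error
	OnStop(ctx context.Context) error
}

// Platform is the set of integration points handed to a module at Init time.
// The concrete signatures are placeholders for the DI container, router, and
// migration registry described in this document.
type Platform interface {
	ProvideService(name string, constructor any)
	RegisterRoute(method, path, permission string, handler any)
	RegisterMigration(version string, up, down string)
}
```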
## Module Service Integration
Modules integrate with core services through service client interfaces, ensuring all communication goes through well-defined abstractions.
```mermaid
graph TB
subgraph "Module Service"
ModuleHandler[Module Handler]
ModuleService[Module Service]
ModuleRepo[Module Repository]
end
subgraph "Service Clients"
AuthClient[Auth Service Client]
IdentityClient[Identity Service Client]
AuthzClient[Authz Service Client]
AuditClient[Audit Service Client]
end
subgraph "Core Services"
AuthService[Auth Service<br/>:8081]
IdentityService[Identity Service<br/>:8082]
AuthzService[Authz Service<br/>:8083]
AuditService[Audit Service<br/>:8084]
end
subgraph "Infrastructure"
EventBus[Event Bus]
Cache[Cache]
DB[(Database)]
end
ModuleHandler --> ModuleService
ModuleService --> ModuleRepo
ModuleRepo --> DB
ModuleService -->|gRPC| AuthClient
ModuleService -->|gRPC| IdentityClient
ModuleService -->|gRPC| AuthzClient
ModuleService -->|gRPC| AuditClient
AuthClient --> AuthService
IdentityClient --> IdentityService
AuthzClient --> AuthzService
AuditClient --> AuditService
ModuleService --> EventBus
ModuleService --> Cache
style ModuleService fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style AuthClient fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Service Integration Points
1. **Authentication**: Use Auth Service Client for token validation
2. **Identity**: Use Identity Service Client for user operations
3. **Authorization**: Use Authz Service Client for permission checks
4. **Audit**: Use Audit Service Client for audit logging
5. **Event Bus**: Publish and subscribe to events
6. **Cache**: Use cache for performance optimization
7. **Database**: Direct database access via repositories
## Module Data Management
Modules manage their own data while sharing database infrastructure.
```mermaid
graph TD
subgraph "Module A"
ModuleA[Module A Service]
RepoA[Module A Repository]
SchemaA[Module A Schema<br/>blog_posts]
end
subgraph "Module B"
ModuleB[Module B Service]
RepoB[Module B Repository]
SchemaB[Module B Schema<br/>billing_subscriptions]
end
subgraph "Shared Database"
DB[(PostgreSQL)]
end
subgraph "Migrations"
MigrationA[Module A Migrations]
MigrationB[Module B Migrations]
end
ModuleA --> RepoA
RepoA --> SchemaA
SchemaA --> DB
ModuleB --> RepoB
RepoB --> SchemaB
SchemaB --> DB
MigrationA --> DB
MigrationB --> DB
style ModuleA fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style ModuleB fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style DB fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Data Isolation Patterns
1. **Schema Isolation**: Each module has its own database schema
2. **Table Prefixing**: Module tables prefixed with module name
3. **Migration Isolation**: Each module manages its own migrations
4. **Shared Database**: Modules share database instance but not schemas
5. **Cross-Module Queries**: Use service clients, not direct SQL joins
## Module Permission System
Modules register permissions that are automatically integrated into the platform's permission system.
```mermaid
sequenceDiagram
participant Module
participant ModuleManifest
participant PermissionGenerator
participant PermissionRegistry
participant AuthzService
Module->>ModuleManifest: Define permissions
ModuleManifest->>ModuleManifest: permissions: blog.post.create, blog.post.read, blog.post.update, blog.post.delete
Module->>PermissionGenerator: Generate permission code
PermissionGenerator->>PermissionGenerator: Parse manifest
PermissionGenerator->>PermissionGenerator: Generate constants
PermissionGenerator-->>Module: Permission code generated
Module->>PermissionRegistry: Register permissions
PermissionRegistry->>PermissionRegistry: Validate format
PermissionRegistry->>PermissionRegistry: Store permissions
PermissionRegistry-->>Module: Permissions registered
AuthzService->>PermissionRegistry: Resolve permissions
PermissionRegistry-->>AuthzService: Permission list
AuthzService->>AuthzService: Check permissions
```
### Permission Registration Flow
1. **Permission Definition**: Define permissions in `module.yaml`
2. **Code Generation**: Generate permission constants from manifest
3. **Permission Registration**: Register permissions during module initialization
4. **Permission Validation**: Validate permission format and uniqueness
5. **Permission Resolution**: Permissions available for authorization checks
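The output of step 2 (code generation) might look like the following; the package and constant names are illustrative, not the platform's actual generated code.
```go
// Package blogperm would be generated from the permissions section of module.yaml.
package blogperm

// Generated permission constants for the blog module.
const (
	PostCreate = "blog.post.create"
	PostRead   = "blog.post.read"
	PostUpdate = "blog.post.update"
	PostDelete = "blog.post.delete"
)

// All lists every permission the module registers during initialization.
var All = []string{PostCreate, PostRead, PostUpdate, PostDelete}
```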
## Module Communication Patterns
Modules communicate with each other through event bus and service clients.
```mermaid
graph TB
subgraph "Module A"
ServiceA[Module A Service]
end
subgraph "Module B"
ServiceB[Module B Service]
end
subgraph "Event Bus"
EventBus[Event Bus<br/>Kafka]
end
subgraph "Service Clients"
ClientA[Service Client A]
ClientB[Service Client B]
end
subgraph "Module C"
ServiceC[Module C Service]
end
ServiceA -->|Publish Event| EventBus
EventBus -->|Subscribe| ServiceB
EventBus -->|Subscribe| ServiceC
ServiceA -->|gRPC Call| ClientA
ClientA --> ServiceB
ServiceB -->|gRPC Call| ClientB
ClientB --> ServiceC
style ServiceA fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style ServiceB fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Communication Patterns
#### Event-Based Communication
```mermaid
sequenceDiagram
participant ModuleA
participant EventBus
participant ModuleB
participant ModuleC
ModuleA->>EventBus: Publish event
EventBus->>EventBus: Route to subscribers
EventBus->>ModuleB: Deliver event
EventBus->>ModuleC: Deliver event
ModuleB->>ModuleB: Process event
ModuleC->>ModuleC: Process event
Note over ModuleB,ModuleC: Events processed independently
```
#### Service Client Communication
```mermaid
sequenceDiagram
participant ModuleA
participant Client
participant ServiceRegistry
participant ModuleB
ModuleA->>Client: Call service method
Client->>ServiceRegistry: Discover Module B
ServiceRegistry-->>Client: Module B endpoint
Client->>ModuleB: gRPC call
ModuleB->>ModuleB: Process request
ModuleB-->>Client: Response
Client-->>ModuleA: Return result
```
## Module Route Registration
Modules register their HTTP routes with the platform's router.
```mermaid
sequenceDiagram
participant Module
participant Router
participant AuthzMiddleware
participant ModuleHandler
Module->>Router: Register routes
Module->>Router: Define route: /api/v1/blog/posts
Module->>Router: Define permission: blog.post.create
Module->>Router: Define handler: CreatePostHandler
Router->>Router: Create route
Router->>AuthzMiddleware: Register permission check
Router->>Router: Attach handler
Router->>Router: Route registered
Note over Router: Routes are registered with<br/>permission requirements
```
### Route Registration Process
1. **Route Definition**: Module defines routes in `Init()` method
2. **Permission Association**: Routes associated with required permissions
3. **Handler Registration**: Handlers registered with router
4. **Middleware Attachment**: Authorization middleware automatically attached
5. **Route Activation**: Routes become available when the HTTP server starts
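A sketch of steps 1–4 in Go; the `Router` interface and handler signatures are assumptions, and the router is expected to attach the authorization middleware based on the permission string.
```go
package blog

import (
	"context"
	"net/http"
)

// Router is an assumed registration interface; the platform's real router may differ.
type Router interface {
	RegisterRoute(method, path, permission string, handler http.HandlerFunc)
}

type BlogModule struct{}

// Init registers routes together with the permission each route requires.
func (m *BlogModule) Init(ctx context.Context, r Router) error {
	r.RegisterRoute(http.MethodPost, "/api/v1/blog/posts", "blog.post.create", m.createPost)
	r.RegisterRoute(http.MethodGet, "/api/v1/blog/posts", "blog.post.read", m.listPosts)
	return nil
}

func (m *BlogModule) createPost(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusCreated)
}

func (m *BlogModule) listPosts(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
}
```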
## Integration Points
This module integration patterns document integrates with:
- **[System Behavior Overview](system-behavior.md)**: How modules participate in system bootstrap
- **[Service Orchestration](service-orchestration.md)**: How modules operate as services
- **[Operational Scenarios](operational-scenarios.md)**: Module behavior in specific scenarios
- **[Architecture Modules](architecture-modules.md)**: Detailed module architecture
## Related Documentation
- [System Behavior Overview](system-behavior.md) - System-level behavior
- [Service Orchestration](service-orchestration.md) - Service coordination
- [Operational Scenarios](operational-scenarios.md) - Module usage scenarios
- [Architecture Modules](architecture-modules.md) - Module architecture details
- [Module Requirements](module-requirements.md) - Module requirements and interfaces


@@ -0,0 +1,406 @@
# Operational Scenarios
## Purpose
This document describes common operational scenarios in the Go Platform, focusing on how different components interact to accomplish specific tasks.
## Overview
Operational scenarios illustrate how the platform handles common use cases such as user authentication, authorization checks, event processing, and background job execution. Each scenario shows the complete flow from initiation to completion.
## Key Concepts
- **Scenario**: A specific operational use case
- **Flow**: Sequence of interactions to accomplish the scenario
- **Components**: Services and modules involved in the scenario
- **State Changes**: How system state changes during the scenario
## Authentication and Authorization Flows
### User Authentication Flow
Complete flow of user logging in and receiving authentication tokens.
```mermaid
sequenceDiagram
participant User
participant Client
participant AuthService
participant IdentityService
participant DB
participant TokenProvider
participant AuditService
User->>Client: Enter credentials
Client->>AuthService: POST /api/v1/auth/login
AuthService->>AuthService: Validate request format
AuthService->>IdentityService: Verify credentials
IdentityService->>DB: Query user by email
DB-->>IdentityService: User data
IdentityService->>IdentityService: Verify password hash
IdentityService-->>AuthService: Credentials valid
AuthService->>TokenProvider: Generate access token
TokenProvider->>TokenProvider: Create JWT claims
TokenProvider-->>AuthService: Access token
AuthService->>TokenProvider: Generate refresh token
TokenProvider->>DB: Store refresh token hash
DB-->>TokenProvider: Token stored
TokenProvider-->>AuthService: Refresh token
AuthService->>AuditService: Log login
AuditService->>DB: Store audit log
AuditService-->>AuthService: Logged
AuthService-->>Client: Access + Refresh tokens
Client-->>User: Authentication successful
```
### Authorization Check Flow
How the system checks if a user has permission to perform an action.
```mermaid
sequenceDiagram
participant Handler
participant AuthzMiddleware
participant AuthzService
participant PermissionResolver
participant Cache
participant DB
participant IdentityService
Handler->>AuthzMiddleware: Check permission
AuthzMiddleware->>AuthzMiddleware: Extract user from context
AuthzMiddleware->>AuthzService: Authorize(user, permission)
AuthzService->>Cache: Check permission cache
Cache-->>AuthzService: Cache miss
AuthzService->>PermissionResolver: Resolve permissions
PermissionResolver->>IdentityService: Get user roles
IdentityService->>DB: Query user roles
DB-->>IdentityService: User roles
IdentityService-->>PermissionResolver: Roles list
PermissionResolver->>DB: Query role permissions
DB-->>PermissionResolver: Permissions
PermissionResolver->>PermissionResolver: Aggregate permissions
PermissionResolver-->>AuthzService: User permissions
AuthzService->>AuthzService: Check permission in list
AuthzService->>Cache: Store in cache
AuthzService-->>AuthzMiddleware: Authorized/Unauthorized
alt Authorized
AuthzMiddleware-->>Handler: Continue
else Unauthorized
AuthzMiddleware-->>Handler: 403 Forbidden
end
```
### Permission Resolution Flow
How user permissions are resolved from roles and cached for performance.
```mermaid
graph TD
Start[Permission Check] --> Cache{Cache Hit?}
Cache -->|Yes| Return[Return Cached Permissions]
Cache -->|No| GetRoles[Get User Roles]
GetRoles --> DB1[(Database)]
DB1 --> Roles[User Roles]
Roles --> GetPermissions[Get Role Permissions]
GetPermissions --> DB2[(Database)]
DB2 --> Permissions[Role Permissions]
Permissions --> Aggregate[Aggregate Permissions]
Aggregate --> StoreCache[Store in Cache]
StoreCache --> Return
Return --> Check[Check Permission]
Check -->|Has Permission| Allow[Allow Access]
Check -->|No Permission| Deny[Deny Access]
style Cache fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style Aggregate fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
## Data Access Patterns
### Cache-Aside Pattern
How data is accessed with caching to improve performance.
```mermaid
sequenceDiagram
participant Service
participant Cache
participant Repository
participant DB
Service->>Cache: Get(key)
Cache-->>Service: Cache miss
Service->>Repository: Find by ID
Repository->>DB: Query database
DB-->>Repository: Data
Repository-->>Service: Domain entity
Service->>Cache: Set(key, entity)
Cache-->>Service: Cached
Service-->>Service: Return entity
Note over Service,Cache: Next request will hit cache
```
### Cache Invalidation on Write
How the cache is kept consistent when data is written: the write goes to the database and the cached entry is invalidated.
```mermaid
sequenceDiagram
participant Service
participant Cache
participant Repository
participant DB
Service->>Repository: Save entity
Repository->>DB: Insert/Update
DB-->>Repository: Success
Repository->>Cache: Invalidate(key)
Cache->>Cache: Remove from cache
Cache-->>Repository: Invalidated
Repository-->>Service: Entity saved
Note over Service,Cache: Cache invalidated on write
```
## Event Processing Scenarios
### Event Publishing and Consumption
How events are published and consumed across services.
```mermaid
sequenceDiagram
participant Publisher
participant EventBus
participant Kafka
participant Subscriber1
participant Subscriber2
Publisher->>Publisher: Business event occurs
Publisher->>EventBus: Publish(event)
EventBus->>EventBus: Serialize event
EventBus->>EventBus: Add metadata
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
EventBus-->>Publisher: Event published
Kafka->>Subscriber1: Deliver event
Subscriber1->>Subscriber1: Deserialize event
Subscriber1->>Subscriber1: Process event
Subscriber1->>Subscriber1: Update state
Subscriber1-->>Kafka: Acknowledge
Kafka->>Subscriber2: Deliver event
Subscriber2->>Subscriber2: Deserialize event
Subscriber2->>Subscriber2: Process event
Subscriber2->>Subscriber2: Update state
Subscriber2-->>Kafka: Acknowledge
```
### Event-Driven Workflow
How multiple services coordinate through events.
```mermaid
graph LR
A[Service A] -->|Publish Event A| EventBus[Event Bus]
EventBus -->|Subscribe| B[Service B]
EventBus -->|Subscribe| C[Service C]
B -->|Publish Event B| EventBus
C -->|Publish Event C| EventBus
EventBus -->|Subscribe| D[Service D]
style EventBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
## Background Processing Scenarios
### Background Job Scheduling
How background jobs are scheduled and executed.
```mermaid
sequenceDiagram
participant Scheduler
participant JobQueue
participant Worker
participant Service
participant DB
participant EventBus
Scheduler->>Scheduler: Cron trigger
Scheduler->>JobQueue: Enqueue job
JobQueue->>JobQueue: Store job definition
JobQueue-->>Scheduler: Job enqueued
Worker->>JobQueue: Poll for jobs
JobQueue-->>Worker: Job definition
Worker->>Worker: Lock job
Worker->>Service: Execute job
Service->>DB: Update data
Service->>EventBus: Publish events
Service-->>Worker: Job complete
Worker->>JobQueue: Mark complete
JobQueue->>JobQueue: Remove job
alt Job fails
Worker->>JobQueue: Mark failed
JobQueue->>JobQueue: Schedule retry
end
```
### Job Retry Flow
How failed jobs are retried with exponential backoff.
```mermaid
stateDiagram-v2
[*] --> Pending: Job created
Pending --> Running: Worker picks up
Running --> Success: Job completes
Running --> Failed: Job fails
Failed --> RetryScheduled: Schedule retry
RetryScheduled --> Waiting: Wait (exponential backoff)
Waiting --> Pending: Retry time reached
Failed --> MaxRetries: Max retries reached
MaxRetries --> DeadLetter: Move to dead letter
Success --> [*]
DeadLetter --> [*]
```
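The backoff between retries can be computed as below; the base delay, cap, and jitter fraction are illustrative defaults, not the platform's configuration.
```go
package jobs

import (
	"math/rand"
	"time"
)

// nextRetryDelay doubles the delay on every attempt, caps it, and adds jitter
// so retries from many workers do not line up. After the maximum number of
// attempts the job moves to the dead-letter queue instead of retrying.
func nextRetryDelay(attempt int) time.Duration {
	const (
		base     = 1 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 0; i < attempt && d < maxDelay; i++ {
		d *= 2
	}
	if d > maxDelay {
		d = maxDelay
	}
	jitter := time.Duration(rand.Int63n(int64(d / 4))) // up to +25% jitter
	return d + jitter
}
```
For attempts 0, 1, 2, … this yields roughly 1s, 2s, 4s, … up to the five-minute cap.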
## Configuration Management Scenarios
### Configuration Reload Flow
How configuration is reloaded without service restart.
```mermaid
sequenceDiagram
participant Admin
participant ConfigService
participant ConfigManager
participant Services
participant SecretStore
Admin->>ConfigService: Update configuration
ConfigService->>SecretStore: Fetch secrets (if needed)
SecretStore-->>ConfigService: Secrets
ConfigService->>ConfigManager: Reload configuration
ConfigManager->>ConfigManager: Validate configuration
ConfigManager->>ConfigManager: Merge with defaults
ConfigManager->>Services: Notify config change
Services->>Services: Update configuration
Services-->>ConfigManager: Config updated
ConfigManager-->>ConfigService: Reload complete
ConfigService-->>Admin: Configuration reloaded
```
## Audit Logging Flow
How all security-sensitive actions are logged.
```mermaid
sequenceDiagram
participant Service
participant AuditClient
participant AuditService
participant DB
Service->>Service: Security-sensitive action
Service->>AuditClient: Record audit log
AuditClient->>AuditClient: Extract context
AuditClient->>AuditClient: Build audit entry
AuditClient->>AuditService: Store audit log
AuditService->>AuditService: Validate audit entry
AuditService->>DB: Insert audit log
DB-->>AuditService: Log stored
AuditService-->>AuditClient: Audit logged
AuditClient-->>Service: Continue
Note over Service,DB: Audit logs are immutable
```
## Database Migration Flow
How database migrations are executed during module initialization.
```mermaid
sequenceDiagram
participant Main
participant ModuleRegistry
participant Module
participant MigrationRunner
participant DB
Main->>ModuleRegistry: Get modules (dependency order)
ModuleRegistry-->>Main: Ordered modules
loop For each module
Main->>Module: Get migrations
Module-->>Main: Migration list
end
Main->>MigrationRunner: Run migrations
MigrationRunner->>DB: Check migration table
DB-->>MigrationRunner: Existing migrations
loop For each pending migration
MigrationRunner->>DB: Start transaction
MigrationRunner->>DB: Execute migration
DB-->>MigrationRunner: Migration complete
MigrationRunner->>DB: Record migration
MigrationRunner->>DB: Commit transaction
end
MigrationRunner-->>Main: Migrations complete
```
## Integration Points
This operational scenarios document integrates with:
- **[System Behavior Overview](system-behavior.md)**: How these scenarios fit into overall system behavior
- **[Service Orchestration](service-orchestration.md)**: How services coordinate in these scenarios
- **[Module Integration Patterns](module-integration-patterns.md)**: How modules participate in these scenarios
- **[Data Flow Patterns](data-flow-patterns.md)**: Detailed data flow in these scenarios
## Related Documentation
- [System Behavior Overview](system-behavior.md) - System-level behavior
- [Service Orchestration](service-orchestration.md) - Service coordination
- [Module Integration Patterns](module-integration-patterns.md) - Module integration
- [Data Flow Patterns](data-flow-patterns.md) - Data flow details
- [Architecture Overview](architecture.md) - System architecture


@@ -0,0 +1,403 @@
# Service Orchestration
## Purpose
This document explains how services work together in the Go Platform's microservices architecture, focusing on service lifecycle management, discovery, communication patterns, and failure handling.
## Overview
The Go Platform consists of multiple independent services that communicate via service clients (gRPC/HTTP) and share infrastructure components. Services are discovered and registered through a service registry, enabling dynamic service location and health monitoring.
## Key Concepts
- **Service**: Independent process providing specific functionality
- **Service Registry**: Central registry for service discovery (Consul, Kubernetes, etcd)
- **Service Client**: Abstraction for inter-service communication
- **Service Discovery**: Process of locating services by name
- **Service Health**: Health status of a service (healthy, unhealthy, degraded)
## Service Lifecycle Management
Services follow a well-defined lifecycle from startup to shutdown.
```mermaid
stateDiagram-v2
[*] --> Starting: Service starts
Starting --> Registering: Initialize services
Registering --> StartingServer: Register with service registry
StartingServer --> Running: Start HTTP/gRPC servers
Running --> Healthy: Health checks pass
Running --> Unhealthy: Health checks fail
Unhealthy --> Running: Health checks recover
Healthy --> Degrading: Dependency issues
Degrading --> Healthy: Dependencies recover
Degrading --> Unhealthy: Critical failure
Running --> ShuttingDown: Receive shutdown signal
ShuttingDown --> Deregistering: Stop accepting requests
Deregistering --> Stopped: Deregister from registry
Stopped --> [*]
```
### Lifecycle States
1. **Starting**: Service is initializing, loading configuration
2. **Registering**: Service registers with service registry
3. **Starting Server**: HTTP and gRPC servers starting
4. **Running**: Service is running and processing requests
5. **Healthy**: All health checks passing
6. **Unhealthy**: Health checks failing
7. **Degrading**: Service operational but with degraded functionality
8. **Shutting Down**: Service received shutdown signal
9. **Deregistering**: Service removing itself from registry
10. **Stopped**: Service has stopped
## Service Discovery and Registration
Services automatically register themselves with the service registry on startup and deregister on shutdown.
```mermaid
sequenceDiagram
participant Service
participant ServiceRegistry
participant Registry as Consul/K8s
participant Client
Service->>ServiceRegistry: Register(serviceInfo)
ServiceRegistry->>Registry: Register service
Registry->>Registry: Store service info
Registry-->>ServiceRegistry: Registration confirmed
ServiceRegistry-->>Service: Service registered
Note over Service: Service starts health checks
loop Every health check interval
Service->>ServiceRegistry: Update health status
ServiceRegistry->>Registry: Update health
end
Client->>ServiceRegistry: Discover(serviceName)
ServiceRegistry->>Registry: Query services
Registry-->>ServiceRegistry: Service list
ServiceRegistry->>ServiceRegistry: Filter healthy services
ServiceRegistry->>ServiceRegistry: Load balance
ServiceRegistry-->>Client: Service endpoint
Client->>Service: Connect via gRPC/HTTP
Service->>ServiceRegistry: Deregister()
ServiceRegistry->>Registry: Remove service
Registry-->>ServiceRegistry: Service removed
```
### Service Registration Process
1. **Service Startup**: Service initializes and loads configuration
2. **Service Info Creation**: Create service info with name, version, address, protocol
3. **Registry Registration**: Register the service with Consul, Kubernetes, or etcd
4. **Health Check Setup**: Start health check endpoint
5. **Health Status Updates**: Periodically update health status in registry
6. **Service Discovery**: Clients query registry for service endpoints
7. **Load Balancing**: Registry returns healthy service instances
8. **Service Deregistration**: On shutdown, remove service from registry
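A Go sketch of steps 2, 3, 5, and 8 behind a registry-agnostic interface; the field names and the ten-second update interval are assumptions.
```go
package registry

import (
	"context"
	"time"
)

// ServiceInfo carries the fields mentioned in step 2 above (illustrative names).
type ServiceInfo struct {
	Name     string // e.g. "blog"
	Version  string // e.g. "1.4.0"
	Address  string // host:port the service listens on
	Protocol string // "grpc" or "http"
}

// Registry abstracts Consul, Kubernetes, or etcd behind a single interface.
type Registry interface {
	Register(ctx context.Context, info ServiceInfo) error
	Deregister(ctx context.Context, name string) error
	UpdateHealth(ctx context.Context, name string, healthy bool) error
}

// RegisterAndMaintain registers the service, keeps its health status fresh,
// and deregisters when the context is cancelled on shutdown.
func RegisterAndMaintain(ctx context.Context, r Registry, info ServiceInfo, healthy func() bool) error {
	if err := r.Register(ctx, info); err != nil {
		return err
	}
	go func() {
		t := time.NewTicker(10 * time.Second)
		defer t.Stop()
		for {
			select {
			case <-ctx.Done():
				_ = r.Deregister(context.Background(), info.Name)
				return
			case <-t.C:
				_ = r.UpdateHealth(ctx, info.Name, healthy())
			}
		}
	}()
	return nil
}
```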
## Service Communication Patterns
Services communicate through well-defined patterns using service clients.
```mermaid
graph TB
subgraph "Service A"
ServiceA[Service A Handler]
ClientA[Service Client]
end
subgraph "Service Registry"
Registry[Service Registry]
end
subgraph "Service B"
ServiceB[Service B Handler]
ServerB[gRPC Server]
end
subgraph "Service C"
ServiceC[Service C Handler]
end
subgraph "Event Bus"
EventBus[Event Bus<br/>Kafka]
end
ServiceA -->|Discover| Registry
Registry -->|Service B endpoint| ClientA
ClientA -->|gRPC Call| ServerB
ServerB --> ServiceB
ServiceB -->|Response| ClientA
ServiceA -->|Publish Event| EventBus
EventBus -->|Subscribe| ServiceC
ServiceC -->|Process Event| ServiceC
style ClientA fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style ServerB fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
style EventBus fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Communication Patterns
#### Synchronous Communication (gRPC/HTTP)
```mermaid
sequenceDiagram
participant Client
participant ServiceClient
participant Registry
participant Service
Client->>ServiceClient: Call service method
ServiceClient->>Registry: Discover service
Registry-->>ServiceClient: Service endpoint
ServiceClient->>Service: gRPC/HTTP call
Service->>Service: Process request
Service-->>ServiceClient: Response
ServiceClient-->>Client: Return result
alt Service unavailable
ServiceClient->>Registry: Retry discovery
Registry-->>ServiceClient: Alternative endpoint
ServiceClient->>Service: Retry call
end
```
#### Asynchronous Communication (Event Bus)
```mermaid
sequenceDiagram
participant Publisher
participant EventBus
participant Kafka
participant Subscriber1
participant Subscriber2
Publisher->>EventBus: Publish event
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
Kafka->>Subscriber1: Deliver event
Kafka->>Subscriber2: Deliver event
Subscriber1->>Subscriber1: Process event
Subscriber2->>Subscriber2: Process event
Note over Subscriber1,Subscriber2: Events processed independently
```
## Service Dependency Graph
Services have dependencies that determine startup ordering and communication patterns.
```mermaid
graph TD
subgraph "Core Services"
Identity[Identity Service]
Auth[Auth Service]
Authz[Authz Service]
Audit[Audit Service]
end
subgraph "Feature Services"
Blog[Blog Service]
Billing[Billing Service]
Analytics[Analytics Service]
end
subgraph "Infrastructure Services"
Registry[Service Registry]
EventBus[Event Bus]
Cache[Cache Service]
end
Auth --> Identity
Auth --> Registry
Authz --> Identity
Authz --> Cache
Authz --> Audit
Audit --> Registry
Blog --> Authz
Blog --> Identity
Blog --> Audit
Blog --> Registry
Blog --> EventBus
Blog --> Cache
Billing --> Authz
Billing --> Identity
Billing --> Registry
Billing --> EventBus
Analytics --> EventBus
Analytics --> Registry
style Identity fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
style Auth fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Blog fill:#7b68ee,stroke:#5a4fcf,stroke-width:2px,color:#fff
```
### Dependency Types
1. **Hard Dependencies**: Service cannot start without dependency (e.g., Auth depends on Identity)
2. **Soft Dependencies**: Service can start but with degraded functionality
3. **Runtime Dependencies**: Dependencies discovered at runtime via service registry
## Service Health and Failure Handling
Services continuously report their health status, enabling automatic failure detection and recovery.
```mermaid
graph TD
Service[Service] --> HealthCheck[Health Check Endpoint]
HealthCheck --> CheckDB[Check Database]
HealthCheck --> CheckCache[Check Cache]
HealthCheck --> CheckDeps[Check Dependencies]
CheckDB -->|Healthy| Aggregate[Aggregate Health]
CheckCache -->|Healthy| Aggregate
CheckDeps -->|Healthy| Aggregate
Aggregate -->|All Healthy| Healthy[Healthy Status]
Aggregate -->|Degraded| Degraded[Degraded Status]
Aggregate -->|Unhealthy| Unhealthy[Unhealthy Status]
Healthy --> Registry[Update Registry]
Degraded --> Registry
Unhealthy --> Registry
Registry --> LoadBalancer[Load Balancer]
LoadBalancer -->|Healthy Only| RouteTraffic[Route Traffic]
LoadBalancer -->|Unhealthy| NoTraffic[No Traffic]
style Healthy fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Degraded fill:#ffa500,stroke:#ff8c00,stroke-width:2px,color:#fff
style Unhealthy fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
### Health Check Types
1. **Liveness Check**: Service process is running
2. **Readiness Check**: Service is ready to accept requests
3. **Dependency Checks**: Database, cache, and other dependencies are accessible
4. **Business Health**: Service-specific health indicators
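A readiness handler that aggregates dependency checks could look like the sketch below; the two-second timeout and the JSON response body are illustrative choices, not the platform's health registry API.
```go
package health

import (
	"context"
	"encoding/json"
	"net/http"
	"time"
)

// Check is a single dependency probe (database, cache, downstream service).
type Check func(ctx context.Context) error

// Handler aggregates the registered checks into one readiness response:
// 200 when every dependency is reachable, 503 otherwise.
func Handler(checks map[string]Check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
		defer cancel()

		status := http.StatusOK
		results := make(map[string]string, len(checks))
		for name, check := range checks {
			if err := check(ctx); err != nil {
				results[name] = "unhealthy: " + err.Error()
				status = http.StatusServiceUnavailable
			} else {
				results[name] = "healthy"
			}
		}
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(status)
		_ = json.NewEncoder(w).Encode(results)
	}
}
```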
### Failure Handling Strategies
#### Circuit Breaker Pattern
```mermaid
stateDiagram-v2
[*] --> Closed: Service healthy
Closed --> Open: Failure threshold exceeded
Open --> HalfOpen: Timeout period
HalfOpen --> Closed: Success
HalfOpen --> Open: Failure
```
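As a sketch, the state machine above fits in a small Go type; the thresholds are illustrative, and a production breaker would typically track rolling failure windows and limit half-open probes.
```go
package resilience

import (
	"errors"
	"sync"
	"time"
)

// Breaker is a minimal circuit breaker matching the state diagram above:
// Closed -> Open after too many consecutive failures, Open -> HalfOpen after
// a cooldown, HalfOpen -> Closed on success or back to Open on failure.
type Breaker struct {
	mu        sync.Mutex
	failures  int
	openUntil time.Time

	Threshold int           // consecutive failures before opening
	Cooldown  time.Duration // how long to stay open
}

var ErrOpen = errors.New("circuit breaker is open")

func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if time.Now().Before(b.openUntil) {
		b.mu.Unlock()
		return ErrOpen // Open: fail fast without calling the dependency
	}
	b.mu.Unlock()

	err := fn() // Closed or HalfOpen: try the call
	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.Threshold {
			b.openUntil = time.Now().Add(b.Cooldown) // trip to Open
			b.failures = 0
		}
		return err
	}
	b.failures = 0 // success closes the breaker again
	return nil
}
```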
#### Retry Strategy
```mermaid
sequenceDiagram
participant Client
participant Service
Client->>Service: Request
Service-->>Client: Failure
Client->>Client: Wait (exponential backoff)
Client->>Service: Retry 1
Service-->>Client: Failure
Client->>Client: Wait (exponential backoff)
Client->>Service: Retry 2
Service-->>Client: Success
```
#### Service Degradation
When a service dependency fails, the service may continue operating with degraded functionality:
- **Cache Unavailable**: Service continues but without caching
- **Event Bus Unavailable**: Service continues but events are queued
- **Non-Critical Dependency Fails**: Service continues with reduced features
## Service Scaling Scenarios
Services can be scaled independently based on load and requirements.
```mermaid
graph TB
subgraph "Load Balancer"
LB[Load Balancer]
end
subgraph "Service Instances"
Instance1[Service Instance 1<br/>Healthy]
Instance2[Service Instance 2<br/>Healthy]
Instance3[Service Instance 3<br/>Starting]
Instance4[Service Instance 4<br/>Unhealthy]
end
subgraph "Service Registry"
Registry[Service Registry]
end
subgraph "Infrastructure"
DB[(Database)]
Cache[(Cache)]
end
LB -->|Discover| Registry
Registry -->|Healthy Instances| LB
LB --> Instance1
LB --> Instance2
LB -.->|No Traffic| Instance3
LB -.->|No Traffic| Instance4
Instance1 --> DB
Instance2 --> DB
Instance3 --> DB
Instance4 --> DB
Instance1 --> Cache
Instance2 --> Cache
style Instance1 fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Instance2 fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style Instance3 fill:#ffa500,stroke:#ff8c00,stroke-width:2px,color:#fff
style Instance4 fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
```
### Scaling Patterns
1. **Horizontal Scaling**: Add more service instances
2. **Vertical Scaling**: Increase resources for existing instances
3. **Auto-Scaling**: Automatically scale based on metrics
4. **Load-Based Routing**: Route traffic to healthy instances only
## Integration Points
This service orchestration document integrates with:
- **[System Behavior Overview](system-behavior.md)**: How services behave during startup and operation
- **[Module Integration Patterns](module-integration-patterns.md)**: How modules are loaded as services
- **[Operational Scenarios](operational-scenarios.md)**: Service interaction in specific scenarios
- **[Architecture Overview](architecture.md)**: Overall system architecture
## Related Documentation
- [System Behavior Overview](system-behavior.md) - System-level behavior
- [Module Integration Patterns](module-integration-patterns.md) - Module service integration
- [Operational Scenarios](operational-scenarios.md) - Service interaction scenarios
- [Architecture Overview](architecture.md) - System architecture
- [ADR-0029: Microservices Architecture](adr/0029-microservices-architecture.md) - Architecture decision
- [ADR-0030: Service Communication Strategy](adr/0030-service-communication-strategy.md) - Communication patterns


@@ -0,0 +1,375 @@
# System Behavior Overview
## Purpose
This document provides a high-level explanation of how the Go Platform behaves end-to-end, focusing on system-level operations, flows, and interactions rather than implementation details.
## Overview
The Go Platform is a microservices-based system where each module operates as an independent service. Services communicate via gRPC (primary) or HTTP (fallback), share infrastructure components (PostgreSQL, Redis, Kafka), and are orchestrated through service discovery and dependency injection.
## Key Concepts
- **Services**: Independent processes that can be deployed and scaled separately
- **Service Clients**: Abstraction layer for inter-service communication
- **Service Registry**: Central registry for service discovery
- **Event Bus**: Asynchronous communication channel for events
- **DI Container**: Dependency injection container managing service lifecycle
## Application Bootstrap Sequence
The platform follows a well-defined startup sequence that ensures all services are properly initialized and registered.
```mermaid
sequenceDiagram
participant Main
participant Config
participant Logger
participant DI
participant Registry
participant ModuleLoader
participant ServiceRegistry
participant HTTP
participant gRPC
Main->>Config: Load configuration
Config-->>Main: Config ready
Main->>Logger: Initialize logger
Logger-->>Main: Logger ready
Main->>DI: Create DI container
DI->>DI: Register core services
DI-->>Main: DI container ready
Main->>ModuleLoader: Discover modules
ModuleLoader->>ModuleLoader: Scan module directories
ModuleLoader->>ModuleLoader: Load module.yaml files
ModuleLoader-->>Main: Module list
Main->>Registry: Register modules
Registry->>Registry: Resolve dependencies
Registry->>Registry: Order modules
Registry-->>Main: Ordered modules
loop For each module
Main->>Module: Initialize module
Module->>DI: Register services
Module->>Registry: Register routes
Module->>Registry: Register migrations
end
Main->>Registry: Run migrations
Registry->>Registry: Execute in dependency order
Main->>ServiceRegistry: Register service
ServiceRegistry->>ServiceRegistry: Register with Consul/K8s
ServiceRegistry-->>Main: Service registered
Main->>gRPC: Start gRPC server
Main->>HTTP: Start HTTP server
HTTP-->>Main: Server ready
gRPC-->>Main: Server ready
Main->>DI: Start lifecycle
DI->>DI: Execute OnStart hooks
DI-->>Main: All services started
```
### Bootstrap Phases
1. **Configuration Loading**: Load YAML files, environment variables, and secrets
2. **Foundation Services**: Initialize logger, config provider, DI container
3. **Module Discovery**: Scan and load module manifests
4. **Dependency Resolution**: Build dependency graph and order modules
5. **Module Initialization**: Initialize each module in dependency order
6. **Database Migrations**: Run migrations in dependency order
7. **Service Registration**: Register service with service registry
8. **Server Startup**: Start HTTP and gRPC servers
9. **Lifecycle Hooks**: Execute OnStart hooks for all services
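Phase 4 is essentially a topological sort over the module dependency graph. The sketch below is self-contained and uses hypothetical module names; the platform's actual resolver may differ.
```go
package main

import "fmt"

// orderModules performs a topological sort of modules by their declared
// dependencies, so each module initializes after everything it depends on.
// It returns an error on missing or cyclic dependencies.
func orderModules(deps map[string][]string) ([]string, error) {
	const (
		unvisited = iota
		visiting
		done
	)
	state := make(map[string]int, len(deps))
	ordered := make([]string, 0, len(deps))

	var visit func(name string) error
	visit = func(name string) error {
		switch state[name] {
		case done:
			return nil
		case visiting:
			return fmt.Errorf("dependency cycle involving %q", name)
		}
		state[name] = visiting
		for _, dep := range deps[name] {
			if _, ok := deps[dep]; !ok {
				return fmt.Errorf("module %q depends on unknown module %q", name, dep)
			}
			if err := visit(dep); err != nil {
				return err
			}
		}
		state[name] = done
		ordered = append(ordered, name)
		return nil
	}

	for name := range deps {
		if err := visit(name); err != nil {
			return nil, err
		}
	}
	return ordered, nil
}

func main() {
	// blog depends on identity and authz; authz depends on identity.
	order, err := orderModules(map[string][]string{
		"identity": nil,
		"authz":    {"identity"},
		"blog":     {"identity", "authz"},
	})
	fmt.Println(order, err) // e.g. [identity authz blog] <nil>
}
```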
## Request Processing Pipeline
Every HTTP request flows through a standardized pipeline that ensures security, observability, and proper error handling.
```mermaid
graph TD
Start([HTTP Request]) --> Auth[Authentication Middleware]
Auth -->|Valid Token| Authz[Authorization Middleware]
Auth -->|Invalid Token| Error1[401 Unauthorized]
Authz -->|Authorized| RateLimit[Rate Limiting]
Authz -->|Unauthorized| Error2[403 Forbidden]
RateLimit -->|Within Limits| Tracing[OpenTelemetry Tracing]
RateLimit -->|Rate Limited| Error3[429 Too Many Requests]
Tracing --> Handler[Request Handler]
Handler --> Service[Domain Service]
Service --> Cache{Cache Check}
Cache -->|Hit| Return[Return Cached Data]
Cache -->|Miss| Repo[Repository]
Repo --> DB[(Database)]
DB --> Repo
Repo --> Service
Service --> CacheStore[Update Cache]
Service --> EventBus[Publish Events]
Service --> Audit[Audit Logging]
Service --> Metrics[Update Metrics]
Service --> Handler
Handler --> Tracing
Tracing --> Response[HTTP Response]
Error1 --> Response
Error2 --> Response
Error3 --> Response
Return --> Response
style Auth fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
style Authz fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
style Service fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
```
### Request Processing Stages
1. **Authentication**: Extract and validate JWT token, add user to context
2. **Authorization**: Check user permissions for requested resource
3. **Rate Limiting**: Enforce per-user and per-IP rate limits
4. **Tracing**: Start/continue distributed trace
5. **Handler Processing**: Execute request handler
6. **Service Logic**: Execute business logic
7. **Data Access**: Query database or cache
8. **Side Effects**: Publish events, audit logs, update metrics
9. **Response**: Return HTTP response with tracing context
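Stages 1–4 are typically composed as middleware around the handler. The sketch below shows the composition idea with standard-library HTTP types only; real authentication, authorization, rate-limiting, and tracing middleware would replace these stubs.
```go
package main

import (
	"log"
	"net/http"
)

type Middleware func(http.Handler) http.Handler

// chain applies middlewares so the first one listed runs first.
func chain(h http.Handler, mws ...Middleware) http.Handler {
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}

func authenticate(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") == "" {
			http.Error(w, "unauthorized", http.StatusUnauthorized) // 401 path
			return
		}
		next.ServeHTTP(w, r)
	})
}

func logRequests(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s", r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	// authenticate runs first, then request logging, then the handler.
	log.Fatal(http.ListenAndServe(":8080", chain(handler, authenticate, logRequests)))
}
```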
## Event-Driven Interactions
The platform uses an event bus for asynchronous communication between services, enabling loose coupling and scalability.
```mermaid
sequenceDiagram
participant Publisher
participant EventBus
participant Kafka
participant Subscriber1
participant Subscriber2
Publisher->>EventBus: Publish(event)
EventBus->>EventBus: Serialize event
EventBus->>EventBus: Add metadata (trace_id, user_id)
EventBus->>Kafka: Send to topic
Kafka-->>EventBus: Acknowledged
Kafka->>Subscriber1: Deliver event
Kafka->>Subscriber2: Deliver event
Subscriber1->>Subscriber1: Process event
Subscriber1->>Subscriber1: Update state
Subscriber1->>Subscriber1: Emit new events (optional)
Subscriber2->>Subscriber2: Process event
Subscriber2->>Subscriber2: Update state
Note over Subscriber1,Subscriber2: Events processed asynchronously
```
### Event Processing Flow
1. **Event Publishing**: Service publishes event to event bus
2. **Event Serialization**: Event is serialized with metadata
3. **Event Distribution**: Event bus distributes to Kafka topic
4. **Event Consumption**: Subscribers consume events from Kafka
5. **Event Processing**: Each subscriber processes event independently
6. **State Updates**: Subscribers update their own state
7. **Cascade Events**: Subscribers may publish new events
## Background Job Processing
Background jobs are scheduled and processed asynchronously, enabling long-running tasks and scheduled operations.
```mermaid
sequenceDiagram
participant Scheduler
participant JobQueue
participant Worker
participant Service
participant DB
participant EventBus
Scheduler->>JobQueue: Enqueue job
JobQueue->>JobQueue: Store job definition
Worker->>JobQueue: Poll for jobs
JobQueue-->>Worker: Job definition
Worker->>Worker: Start job execution
Worker->>Service: Execute job logic
Service->>DB: Update data
Service->>EventBus: Publish events
Service-->>Worker: Job complete
Worker->>JobQueue: Mark job complete
alt Job fails
Worker->>JobQueue: Mark job failed
JobQueue->>JobQueue: Schedule retry
end
```
### Background Job Flow
1. **Job Scheduling**: Jobs scheduled via cron or programmatically
2. **Job Enqueueing**: Job definition stored in job queue
3. **Job Polling**: Workers poll queue for available jobs
4. **Job Execution**: Worker executes job logic
5. **Job Completion**: Job marked as complete or failed
6. **Job Retry**: Failed jobs retried with exponential backoff
## Error Recovery and Resilience
The platform implements multiple layers of error handling to ensure system resilience.
```mermaid
graph TD
Error[Error Occurs] --> Handler{Error Handler}
Handler -->|Business Error| BusinessError[Business Error Handler]
Handler -->|System Error| SystemError[System Error Handler]
Handler -->|Panic| PanicHandler[Panic Recovery]
BusinessError --> ErrorBus[Error Bus]
SystemError --> ErrorBus
PanicHandler --> ErrorBus
ErrorBus --> Logger[Logger]
ErrorBus --> Sentry[Sentry]
ErrorBus --> Metrics[Metrics]
BusinessError --> Response[HTTP Response]
SystemError --> Response
PanicHandler --> Response
Response --> Client[Client]
style Error fill:#ff6b6b,stroke:#c92a2a,stroke-width:2px,color:#fff
style ErrorBus fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Error Handling Layers
1. **Panic Recovery**: Middleware catches panics and prevents crashes
2. **Error Classification**: Errors classified as business or system errors
3. **Error Bus**: Central error bus collects all errors
4. **Error Logging**: Errors logged with full context
5. **Error Reporting**: Critical errors reported to Sentry
6. **Error Metrics**: Errors tracked in metrics
7. **Error Response**: Appropriate HTTP response returned
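Layer 1 (panic recovery) amounts to a deferred recover in a middleware. The sketch below uses only the standard library; the Sentry and metrics hooks are indicated in comments rather than implemented.
```go
package main

import (
	"log"
	"net/http"
)

// recoverPanics converts a panic into a 500 response and a log entry instead
// of crashing the process. Error-bus, Sentry, and metrics reporting would
// hook in at the same spot.
func recoverPanics(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if rec := recover(); rec != nil {
				log.Printf("panic handling %s %s: %v", r.Method, r.URL.Path, rec)
				http.Error(w, "internal server error", http.StatusInternalServerError)
			}
		}()
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/boom", func(w http.ResponseWriter, r *http.Request) {
		panic("something went wrong") // recovered by the middleware
	})
	log.Fatal(http.ListenAndServe(":8080", recoverPanics(mux)))
}
```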
## System Shutdown Sequence
The platform implements graceful shutdown to ensure data consistency and proper resource cleanup.
```mermaid
sequenceDiagram
participant Signal
participant Main
participant HTTP
participant gRPC
participant ServiceRegistry
participant DI
participant Workers
participant DB
Signal->>Main: SIGTERM/SIGINT
Main->>HTTP: Stop accepting requests
HTTP->>HTTP: Wait for active requests
HTTP-->>Main: HTTP server stopped
Main->>gRPC: Stop accepting connections
gRPC->>gRPC: Wait for active calls
gRPC-->>Main: gRPC server stopped
Main->>ServiceRegistry: Deregister service
ServiceRegistry->>ServiceRegistry: Remove from registry
ServiceRegistry-->>Main: Service deregistered
Main->>Workers: Stop workers
Workers->>Workers: Finish current jobs
Workers-->>Main: Workers stopped
Main->>DI: Stop lifecycle
DI->>DI: Execute OnStop hooks
DI->>DI: Close connections
DI->>DB: Close DB connections
DI-->>Main: Services stopped
Main->>Main: Exit
```
### Shutdown Phases
1. **Signal Reception**: Receive SIGTERM or SIGINT
2. **Stop Accepting Requests**: HTTP and gRPC servers stop accepting new requests
3. **Wait for Active Requests**: Wait for in-flight requests to complete
4. **Service Deregistration**: Remove service from service registry
5. **Worker Shutdown**: Stop background workers gracefully
6. **Lifecycle Hooks**: Execute OnStop hooks for all services
7. **Resource Cleanup**: Close database connections, release resources
8. **Application Exit**: Exit application cleanly
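A standard-library sketch of phases 1–3 and 8: catch SIGTERM/SIGINT, stop accepting new requests, and give in-flight requests a grace period before exiting. The 30-second grace period is an illustrative default; deregistration and worker shutdown would slot in between these steps in the real platform.
```go
package main

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080", Handler: http.DefaultServeMux}

	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	<-ctx.Done() // wait for the shutdown signal

	// Give active requests up to 30 seconds to finish.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := srv.Shutdown(shutdownCtx); err != nil {
		log.Printf("graceful shutdown failed: %v", err)
	}
	log.Println("server stopped")
}
```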
## Health Check and Monitoring Flow
Health checks and metrics provide visibility into system health and performance.
```mermaid
graph TD
HealthEndpoint["/healthz"] --> HealthRegistry[Health Registry]
HealthRegistry --> CheckDB[Check Database]
HealthRegistry --> CheckCache[Check Cache]
HealthRegistry --> CheckEventBus[Check Event Bus]
CheckDB -->|Healthy| Aggregate[Aggregate Results]
CheckCache -->|Healthy| Aggregate
CheckEventBus -->|Healthy| Aggregate
Aggregate -->|All Healthy| Response200[200 OK]
Aggregate -->|Unhealthy| Response503[503 Service Unavailable]
MetricsEndpoint["/metrics"] --> MetricsRegistry[Metrics Registry]
MetricsRegistry --> Prometheus[Prometheus Format]
Prometheus --> ResponseMetrics[Metrics Response]
style HealthRegistry fill:#50c878,stroke:#2e7d4e,stroke-width:2px,color:#fff
style MetricsRegistry fill:#4a90e2,stroke:#2e5c8a,stroke-width:2px,color:#fff
```
### Health Check Components
- **Liveness Check**: Service is running (process health)
- **Readiness Check**: Service is ready to accept requests (dependency health)
- **Dependency Checks**: Database, cache, event bus connectivity
- **Metrics Collection**: Request counts, durations, error rates
- **Metrics Export**: Prometheus-formatted metrics
## Integration Points
This system behavior document integrates with:
- **[Service Orchestration](service-orchestration.md)**: How services coordinate during startup and operation
- **[Module Integration Patterns](module-integration-patterns.md)**: How modules integrate during bootstrap
- **[Operational Scenarios](operational-scenarios.md)**: Specific operational flows and use cases
- **[Data Flow Patterns](data-flow-patterns.md)**: Detailed data flow through the system
- **[Architecture Overview](architecture.md)**: System architecture and component relationships
## Related Documentation
- [Architecture Overview](architecture.md) - System architecture
- [Service Orchestration](service-orchestration.md) - Service coordination
- [Module Integration Patterns](module-integration-patterns.md) - Module integration
- [Operational Scenarios](operational-scenarios.md) - Common operational flows
- [Component Relationships](component-relationships.md) - Component dependencies


@@ -84,6 +84,12 @@ nav:
- Module Architecture: architecture-modules.md
- Module Requirements: module-requirements.md
- Component Relationships: component-relationships.md
- System Specifications:
  - System Behavior Overview: system-behavior.md
  - Service Orchestration: service-orchestration.md
  - Module Integration Patterns: module-integration-patterns.md
  - Operational Scenarios: operational-scenarios.md
  - Data Flow Patterns: data-flow-patterns.md
- Architecture Decision Records:
  - ADR Overview: adr/README.md
  - "Epic 0: Project Setup & Foundation":