ADR-0029: Microservices Architecture
Status
Accepted
Context
The platform needs to scale independently, support team autonomy, and enable flexible deployment. A microservices architecture provides these benefits from day one, and the complexity of supporting both monolith and microservices modes is unnecessary.
Decision
Design the platform as a microservices architecture from day one:
- Service-Based Architecture: All modules are independent services (see the client-interface sketch after this list):
  - Each module is a separate service with its own process
  - Services communicate via gRPC (primary) or HTTP (fallback)
  - Service client interfaces for all inter-service communication
  - No direct in-process calls between services
- Service Registry: Central registry for service discovery:
  - All services register on startup
  - Service discovery via the registry
  - Health checking and automatic deregistration
  - Support for Consul, etcd, or Kubernetes service discovery
- Communication Patterns (see the event-bus sketch after this list):
  - Synchronous: gRPC service calls (primary), HTTP/REST (fallback)
  - Asynchronous: Event bus via Kafka
  - Shared State: Cache (Redis) and Database (PostgreSQL)
- Service Boundaries: Each module is an independent service:
  - Independent Go modules (go.mod)
  - Own database schema (via Ent)
  - Own API routes
  - Own process and deployment
  - Can be scaled independently
- Development Simplification: For local development, multiple services can run in the same process, but they still communicate via service clients (no direct calls)
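To make the service-client rule concrete, here is a minimal sketch in Go. The names (`userclient`, `User`, the two implementations) are illustrative assumptions, not the platform's actual code; the point is that callers depend only on the interface, so the same caller works whether the user service is reached over gRPC or runs in the same process during local development.

```go
package userclient

import (
	"context"
	"errors"
)

// User is an illustrative payload type, not the platform's real model.
type User struct {
	ID    string
	Email string
}

// Client is the service client interface for a (hypothetical) user
// service. Other services depend on this interface only, never on the
// user service's internal packages.
type Client interface {
	GetUser(ctx context.Context, id string) (*User, error)
}

// GRPCClient would wrap the generated gRPC stub (plus an HTTP fallback)
// in a real deployment; the transport is elided in this sketch.
type GRPCClient struct{ /* conn, generated stub, ... */ }

func (c *GRPCClient) GetUser(ctx context.Context, id string) (*User, error) {
	return nil, errors.New("userclient: transport not wired in this sketch")
}

// InProcessClient adapts the user service when it runs in the same
// process for local development. Callers still go through Client, so
// there are no direct cross-service calls.
type InProcessClient struct {
	Service interface {
		GetUser(ctx context.Context, id string) (*User, error)
	}
}

func (c *InProcessClient) GetUser(ctx context.Context, id string) (*User, error) {
	return c.Service.GetUser(ctx, id)
}
```

Which implementation gets injected is a wiring decision, which is what keeps the single-process development mode from reintroducing direct calls.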
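The asynchronous path can be sketched the same way. The snippet below uses github.com/segmentio/kafka-go purely as an illustration; the topic name and event shape are assumptions, and the platform may standardize on a different Kafka client.

```go
package events

import (
	"context"
	"encoding/json"

	"github.com/segmentio/kafka-go"
)

// UserCreated is an illustrative domain event.
type UserCreated struct {
	UserID string `json:"user_id"`
	Email  string `json:"email"`
}

// Publisher writes domain events to the Kafka event bus; downstream
// services consume the topic instead of being called synchronously.
type Publisher struct {
	writer *kafka.Writer
}

func NewPublisher(brokers []string, topic string) *Publisher {
	return &Publisher{writer: &kafka.Writer{
		Addr:     kafka.TCP(brokers...),
		Topic:    topic, // e.g. "user.created" (assumed topic name)
		Balancer: &kafka.LeastBytes{},
	}}
}

func (p *Publisher) PublishUserCreated(ctx context.Context, evt UserCreated) error {
	payload, err := json.Marshal(evt)
	if err != nil {
		return err
	}
	return p.writer.WriteMessages(ctx, kafka.Message{
		Key:   []byte(evt.UserID), // keying by aggregate ID preserves per-user ordering
		Value: payload,
	})
}
```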
Consequences
Positive
- Simplified Architecture: Single architecture pattern, no dual-mode complexity
- Independent Scaling: Scale individual services based on load
- Team Autonomy: Teams can own and deploy their services independently
- Technology Diversity: Different services can use different tech stacks (future)
- Fault Isolation: Failure in one service doesn't bring down entire platform
- Deployment Flexibility: Deploy services independently
- Clear Boundaries: Service boundaries are explicit from the start
Negative
- Network Latency: Inter-service calls have network overhead
- Distributed System Challenges: Need to handle network failures, retries, timeouts
- Service Discovery Overhead: Additional infrastructure needed
- Debugging Complexity: Distributed tracing becomes essential
- Data Consistency: Cross-service transactions become challenging
- Development Setup: More complex local development (multiple services)
Mitigations
- Service Mesh: Use a service mesh (Istio, Linkerd) for advanced microservices features
- API Gateway: Central gateway for routing and cross-cutting concerns
- Event Sourcing: Use events for eventual consistency
- Circuit Breakers: Implement circuit breakers for resilience (see the sketch after this list)
- Comprehensive Observability: OpenTelemetry, metrics, logging essential
- Docker Compose: Simplify local development with docker-compose
- Development Mode: Run multiple services in same process for local dev (still use service clients)
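To illustrate the circuit-breaker mitigation, here is a deliberately minimal breaker in Go. The thresholds and the missing half-open bookkeeping are simplifications; in practice a library such as sony/gobreaker, or the service mesh's own policies, would cover this.

```go
package resilience

import (
	"context"
	"errors"
	"sync"
	"time"
)

// ErrOpen is returned while the breaker is rejecting calls.
var ErrOpen = errors.New("circuit breaker is open")

// Breaker trips after maxFailures consecutive errors and rejects calls
// until cooldown has elapsed; the first call after the cooldown acts as
// a probe. A production breaker would track richer state.
type Breaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int
	cooldown    time.Duration
	openedAt    time.Time
}

func NewBreaker(maxFailures int, cooldown time.Duration) *Breaker {
	return &Breaker{maxFailures: maxFailures, cooldown: cooldown}
}

func (b *Breaker) Do(ctx context.Context, call func(context.Context) error) error {
	b.mu.Lock()
	if b.failures >= b.maxFailures && time.Since(b.openedAt) < b.cooldown {
		b.mu.Unlock()
		return ErrOpen // fail fast instead of piling onto a sick service
	}
	b.mu.Unlock()

	err := call(ctx)

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.openedAt = time.Now() // (re)open the breaker
		}
		return err
	}
	b.failures = 0 // success closes the breaker again
	return nil
}
```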
Implementation Strategy
Phase 1: Service Client Interfaces (Phase 1)
- Define service client interfaces for all core services
- All inter-service communication goes through interfaces
Phase 2: Service Registry (Phase 3)
- Create service registry interface (a possible shape is sketched below)
- Implement service discovery
- Support for Consul, Kubernetes service discovery
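One possible shape for that registry interface, with assumed type and method names (each backend such as Consul, etcd, or Kubernetes would provide its own implementation):

```go
package registry

import "context"

// Instance describes one running copy of a service.
type Instance struct {
	ServiceName string
	Address     string // host:port
	Healthy     bool
}

// Registry is implemented by each discovery backend (Consul, etcd,
// Kubernetes). Services register on startup and deregister on shutdown;
// callers resolve peers through Lookup instead of hard-coded addresses.
type Registry interface {
	Register(ctx context.Context, inst Instance) error
	Deregister(ctx context.Context, inst Instance) error
	Lookup(ctx context.Context, serviceName string) ([]Instance, error)
}
```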
Phase 3: gRPC Services (Phase 5)
- Implement gRPC service definitions
- Create gRPC servers for all services
- Create gRPC clients for service communication
- HTTP clients as fallback option (see the sketch below)
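As a sketch of how a gRPC-first client with an HTTP fallback might be structured in Go: the generated stub, proto package (`orderpb`), endpoint paths, and payload shape are assumptions, and only the REST fallback path is actually exercised here.

```go
package orderclient

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// Order is an illustrative payload type.
type Order struct {
	ID    string `json:"id"`
	Total int64  `json:"total_cents"`
}

// Client prefers gRPC and keeps the service's REST endpoint as a fallback.
type Client struct {
	conn    *grpc.ClientConn // used by the generated stub in real code
	httpURL string
	httpc   *http.Client
}

func New(grpcTarget, httpURL string) (*Client, error) {
	// grpc.NewClient sets the connection up lazily; real code would also
	// add interceptors for tracing, retries, and deadlines.
	conn, err := grpc.NewClient(grpcTarget,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		conn = nil // degrade to HTTP-only mode
	}
	return &Client{conn: conn, httpURL: httpURL, httpc: http.DefaultClient}, nil
}

func (c *Client) GetOrder(ctx context.Context, id string) (*Order, error) {
	// Primary path: in real code the generated stub would be called here,
	// e.g. orderpb.NewOrderServiceClient(c.conn).GetOrder(...), where
	// orderpb is a hypothetical generated package. This sketch only shows
	// the HTTP/REST fallback.
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		fmt.Sprintf("%s/v1/orders/%s", c.httpURL, id), nil)
	if err != nil {
		return nil, err
	}
	resp, err := c.httpc.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("orderclient: unexpected status %d", resp.StatusCode)
	}
	var o Order
	if err := json.NewDecoder(resp.Body).Decode(&o); err != nil {
		return nil, err
	}
	return &o, nil
}
```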