# photoncloud-monorepo/docs/por/T010-fiberlb/task.yaml
id: T010
name: FiberLB - Spec + Scaffold
status: complete
created: 2025-12-08
owner: peerB (impl), peerA (spec via Aux)
goal: Create fiberlb spec and implementation scaffolding
description: |
  Final "Later" phase deliverable. FiberLB is the load balancer layer.
  Load balancing is critical for high availability and traffic distribution.
  Follow the established pattern: spec → scaffold.
  Context:
  - fiberlb = L4/L7 load balancer service
  - Multi-tenant design (org/project scoping)
  - Integrates with aegis (IAM) for auth
  - ChainFire for config storage
acceptance:
  - Specification document at specifications/fiberlb/README.md (pending)
  - Cargo workspace with fiberlb-* crates compiles
  - Core types (Listener, Pool, Backend, HealthCheck) defined
  - Proto definitions for LoadBalancerService
  - gRPC management API scaffold
steps:
  # Phase 1 - Specification (Aux)
  - step: S1
    action: Create fiberlb specification
    priority: P0
    status: pending
    complexity: medium
    owner: peerA (Aux)
    notes: Pending Aux delegation (spec in parallel)
  # Phase 2 - Scaffolding (PeerB)
  - step: S2
    action: Create fiberlb workspace
    priority: P0
    status: complete
    complexity: small
    component: fiberlb
    notes: |
      Created fiberlb/Cargo.toml (workspace)
      Crates: fiberlb-types, fiberlb-api, fiberlb-server
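  # Illustrative sketch (assumption, not copied from the repo) of the S2
  # workspace manifest; only the member crate names come from this task file:
  #
  #   # fiberlb/Cargo.toml
  #   [workspace]
  #   members = ["fiberlb-types", "fiberlb-api", "fiberlb-server"]
  #   resolver = "2"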
  - step: S3
    action: Define core types
    priority: P0
    status: complete
    complexity: small
    component: fiberlb-types
    notes: |
      LoadBalancer, LoadBalancerId, LoadBalancerStatus
      Pool, PoolId, PoolAlgorithm, PoolProtocol
      Backend, BackendId, BackendStatus, BackendAdminState
      Listener, ListenerId, ListenerProtocol, TlsConfig
      HealthCheck, HealthCheckId, HealthCheckType, HttpHealthConfig
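  # Illustrative sketch (assumption, not from the repo) of how the S3 core
  # types might compose in fiberlb-types; field names are hypothetical:
  #
  #   pub struct Pool {
  #       pub id: PoolId,
  #       pub algorithm: PoolAlgorithm,   // e.g. round-robin, least-connections
  #       pub protocol: PoolProtocol,     // TCP/UDP at L4
  #       pub backends: Vec<BackendId>,
  #       pub health_check: Option<HealthCheckId>,
  #   }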
  - step: S4
    action: Define proto/fiberlb.proto
    priority: P0
    status: complete
    complexity: small
    component: fiberlb-api
    notes: |
      LoadBalancerService: CRUD for load balancers
      PoolService: CRUD for pools
      BackendService: CRUD for backends
      ListenerService: CRUD for listeners
      HealthCheckService: CRUD for health checks
      ~380 lines of proto
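  # Illustrative sketch (assumption) of an S4 CRUD service in fiberlb.proto;
  # the service name is from this file, RPC and message names are hypothetical:
  #
  #   service PoolService {
  #     rpc CreatePool(CreatePoolRequest) returns (Pool);
  #     rpc GetPool(GetPoolRequest) returns (Pool);
  #     rpc ListPools(ListPoolsRequest) returns (ListPoolsResponse);
  #     rpc UpdatePool(UpdatePoolRequest) returns (Pool);
  #     rpc DeletePool(DeletePoolRequest) returns (google.protobuf.Empty);
  #   }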
  - step: S5
    action: gRPC server scaffold
    priority: P1
    status: complete
    complexity: medium
    component: fiberlb-server
    notes: |
      LoadBalancerServiceImpl, PoolServiceImpl, BackendServiceImpl
      ListenerServiceImpl, HealthCheckServiceImpl
      Main entry with tonic-health on port 9080
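  # Illustrative sketch (assumption, not from the repo) of the S5 entry point
  # wiring tonic-health on port 9080; the generated server/impl names follow
  # tonic conventions but are hypothetical here:
  #
  #   let (_reporter, health_service) = tonic_health::server::health_reporter();
  #   tonic::transport::Server::builder()
  #       .add_service(health_service)
  #       .add_service(LoadBalancerServiceServer::new(LoadBalancerServiceImpl::default()))
  #       .serve("0.0.0.0:9080".parse()?)
  #       .await?;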
  - step: S6
    action: Integration test setup
    priority: P1
    status: complete
    complexity: small
    component: fiberlb
    notes: |
      cargo check passes
      cargo test passes (8 tests)
outcome: |
  COMPLETE: 2025-12-08
  S2-S6 complete (S1 spec pending via Aux).
  Implementation scaffolding complete.
  Final workspace structure:
  - fiberlb/Cargo.toml (workspace with 3 crates)
  - fiberlb-types: LoadBalancer, Pool, Backend, Listener, HealthCheck (~600 lines)
  - fiberlb-api: proto (~380 lines) + lib.rs + build.rs
  - fiberlb-server: 5 gRPC services + main.rs
  Tests: 8 pass
  FiberLB enters "operational" status (scaffold).
  **MILESTONE: 7/7 deliverables now have operational scaffolds.**
notes: |
  FiberLB is the final scaffold for 7/7 deliverable coverage.
  L4 load balancing (TCP/UDP) is core; L7 (HTTP) is a future enhancement.
  All cloud platform components now have operational scaffolds.
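# Illustrative sketch (assumption, not from the repo) of the kind of L4
# round-robin backend selection the Pool/Backend types could support; the
# function and its signature are hypothetical:
#
#   fn next_backend<'a>(backends: &'a [Backend], counter: &AtomicUsize) -> Option<&'a Backend> {
#       if backends.is_empty() {
#           return None;
#       }
#       let i = counter.fetch_add(1, Ordering::Relaxed) % backends.len();
#       backends.get(i)
#   }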