Add PrismNet-backed PlasmaVMC matrix coverage
Some checks failed
Nix CI / filter (push) Failing after 1s
Nix CI / gate () (push) Has been skipped
Nix CI / gate (shared crates) (push) Has been skipped
Nix CI / build () (push) Has been skipped
Nix CI / ci-status (push) Failing after 1s

centra 2026-03-28 03:14:11 +09:00
parent e1a5d394e5
commit 9d21e2da95
Signed by: centra
GPG key ID: 0C09689D20B25ACA
6 changed files with 281 additions and 23 deletions


@@ -1,7 +1,7 @@
 # Component Matrix
 PhotonCloud is intended to validate meaningful service combinations, not only a single all-on deployment.
-This page separates the compositions that are already exercised by the VM-cluster harness from the next combinations that still need dedicated automation.
+This page summarizes the compositions that are exercised by the VM-cluster harness today.
 ## Validated Control Plane
@@ -18,9 +18,11 @@ These combinations justify the existence of the network services as composable p
 ## Validated VM Hosting Layer
+- `plasmavmc + prismnet`
 - `plasmavmc + lightningstor`
 - `plasmavmc + coronafs`
 - `plasmavmc + coronafs + lightningstor`
+- `plasmavmc + prismnet + coronafs + lightningstor`
 This split keeps mutable VM volumes on CoronaFS and immutable VM images on LightningStor object storage.
@@ -40,11 +42,6 @@ This split keeps mutable VM volumes on CoronaFS and immutable VM images on Light
 - `creditservice + iam`
 - `deployer + iam + chainfire`
-## Next Compositions To Automate
-- `plasmavmc + prismnet`
-- `plasmavmc + prismnet + coronafs + lightningstor`
 ## Validation Direction
 The VM cluster harness now exposes:
@@ -54,4 +51,4 @@ nix run ./nix/test-cluster#cluster -- matrix
 nix run ./nix/test-cluster#cluster -- fresh-matrix
 ```
-`fresh-matrix` is the publishable path because it rebuilds the host-side VM images before validating the composed service scenarios.
+`fresh-matrix` is the publishable path because it rebuilds the host-side VM images before validating the composed service scenarios, including PrismNet-backed PlasmaVMC guests.


@@ -25,7 +25,7 @@ nix run ./nix/test-cluster#cluster -- fresh-bench-storage
 Use these three commands as the release-facing local proof set:
 - `fresh-smoke`: whole-cluster readiness, core behavior, and fault injection
-- `fresh-matrix`: composed service scenarios such as `prismnet + flashdns + fiberlb` and VM hosting bundles
+- `fresh-matrix`: composed service scenarios such as `prismnet + flashdns + fiberlb` and PrismNet-backed VM hosting bundles with `plasmavmc + coronafs + lightningstor`
 - `fresh-bench-storage`: CoronaFS local-vs-shared-volume throughput, cross-worker volume visibility, and LightningStor large/small-object throughput capture
 ## Operational Commands

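The three release-facing commands listed above can be chained into one local wrapper. This is a minimal sketch: the `nix run` invocations are taken from the doc, while the `run` helper and `DRY_RUN` guard are assumptions added so the sketch can be exercised without a running cluster.

```shell
#!/usr/bin/env bash
# Sketch: run the release-facing proof set in order, stopping on the first failure.
set -euo pipefail

: "${DRY_RUN:=1}"  # assumption: default to printing commands instead of running them

run() {
  if [[ "${DRY_RUN}" == "1" ]]; then
    printf 'would run: %s\n' "$*"
  else
    "$@"
  fi
}

run nix run ./nix/test-cluster#cluster -- fresh-smoke
run nix run ./nix/test-cluster#cluster -- fresh-matrix
run nix run ./nix/test-cluster#cluster -- fresh-bench-storage
```

With `DRY_RUN=0` the wrapper executes the real commands, and `set -e` makes a failing `fresh-smoke` skip the later, more expensive stages.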

@@ -9,6 +9,7 @@ All VM images are built on the host in a single Nix invocation and then booted a
 - 3-node control-plane formation for `chainfire`, `flaredb`, and `iam`
 - control-plane service health for `prismnet`, `flashdns`, `fiberlb`, `plasmavmc`, `lightningstor`, and `k8shost`
 - worker-node `plasmavmc` and `lightningstor` startup
+- PrismNet port binding for PlasmaVMC guests, including lifecycle cleanup on VM deletion
 - nested KVM inside worker VMs by booting an inner guest with `qemu-system-x86_64 -accel kvm`
 - gateway-node `apigateway`, `nightlight`, and minimal `creditservice` startup
 - host-forwarded access to the API gateway and NightLight HTTP surfaces
@@ -59,7 +60,7 @@ Preferred entrypoint for publishable verification: `nix run ./nix/test-cluster#c
 `make cluster-smoke` is a convenience wrapper for the same clean host-build VM validation flow.
-`nix run ./nix/test-cluster#cluster -- matrix` reuses the current running cluster to exercise composed service scenarios such as `prismnet + flashdns + fiberlb`, VM hosting with `plasmavmc + coronafs + lightningstor`, the Kubernetes-style hosting bundle, and API-gateway-mediated `nightlight` / `creditservice` flows.
+`nix run ./nix/test-cluster#cluster -- matrix` reuses the current running cluster to exercise composed service scenarios such as `prismnet + flashdns + fiberlb`, PrismNet-backed VM hosting with `plasmavmc + prismnet + coronafs + lightningstor`, the Kubernetes-style hosting bundle, and API-gateway-mediated `nightlight` / `creditservice` flows.
 Preferred entrypoint for publishable matrix verification: `nix run ./nix/test-cluster#cluster -- fresh-matrix`


@@ -450,6 +450,112 @@ create_prismnet_vpc_with_retry() {
   done
 }
+prismnet_get_port_json() {
+  local token="$1"
+  local org_id="$2"
+  local project_id="$3"
+  local subnet_id="$4"
+  local port_id="$5"
+  grpcurl -plaintext \
+    -H "authorization: Bearer ${token}" \
+    -import-path "${PRISMNET_PROTO_DIR}" \
+    -proto "${PRISMNET_PROTO}" \
+    -d "$(jq -cn --arg org "${org_id}" --arg project "${project_id}" --arg subnet "${subnet_id}" --arg id "${port_id}" '{orgId:$org, projectId:$project, subnetId:$subnet, id:$id}')" \
+    127.0.0.1:15081 prismnet.PortService/GetPort
+}
+wait_for_prismnet_port_binding() {
+  local token="$1"
+  local org_id="$2"
+  local project_id="$3"
+  local subnet_id="$4"
+  local port_id="$5"
+  local vm_id="$6"
+  local timeout="${7:-${HTTP_WAIT_TIMEOUT}}"
+  local deadline=$((SECONDS + timeout))
+  local port_json=""
+  while true; do
+    if port_json="$(prismnet_get_port_json "${token}" "${org_id}" "${project_id}" "${subnet_id}" "${port_id}" 2>/dev/null || true)"; then
+      if [[ -n "${port_json}" ]] && printf '%s' "${port_json}" | jq -e --arg vm "${vm_id}" '
+        .port.deviceId == $vm and .port.deviceType == "DEVICE_TYPE_VM"
+      ' >/dev/null 2>&1; then
+        printf '%s\n' "${port_json}"
+        return 0
+      fi
+    fi
+    if (( SECONDS >= deadline )); then
+      die "timed out waiting for PrismNet port ${port_id} to bind to VM ${vm_id}"
+    fi
+    sleep 2
+  done
+}
+wait_for_prismnet_port_detachment() {
+  local token="$1"
+  local org_id="$2"
+  local project_id="$3"
+  local subnet_id="$4"
+  local port_id="$5"
+  local timeout="${6:-${HTTP_WAIT_TIMEOUT}}"
+  local deadline=$((SECONDS + timeout))
+  local port_json=""
+  while true; do
+    if port_json="$(prismnet_get_port_json "${token}" "${org_id}" "${project_id}" "${subnet_id}" "${port_id}" 2>/dev/null || true)"; then
+      if [[ -n "${port_json}" ]] && printf '%s' "${port_json}" | jq -e '
+        (.port.deviceId // "") == "" and
+        ((.port.deviceType // "") == "DEVICE_TYPE_NONE" or (.port.deviceType // "") == "DEVICE_TYPE_UNSPECIFIED")
+      ' >/dev/null 2>&1; then
+        printf '%s\n' "${port_json}"
+        return 0
+      fi
+    fi
+    if (( SECONDS >= deadline )); then
+      die "timed out waiting for PrismNet port ${port_id} to detach"
+    fi
+    sleep 2
+  done
+}
+wait_for_vm_network_spec() {
+  local token="$1"
+  local get_vm_json="$2"
+  local port_id="$3"
+  local subnet_id="$4"
+  local mac_address="$5"
+  local ip_address="$6"
+  local vm_port="${7:-15082}"
+  local timeout="${8:-${HTTP_WAIT_TIMEOUT}}"
+  local deadline=$((SECONDS + timeout))
+  local vm_json=""
+  while true; do
+    if vm_json="$(try_get_vm_json "${token}" "${get_vm_json}" "${vm_port}" 2>/dev/null || true)"; then
+      if [[ -n "${vm_json}" ]] && printf '%s' "${vm_json}" | jq -e \
+        --arg port "${port_id}" \
+        --arg subnet "${subnet_id}" \
+        --arg mac "${mac_address}" \
+        --arg ip "${ip_address}" '
+        (.spec.network // []) | any(
+          .portId == $port and
+          .subnetId == $subnet and
+          .macAddress == $mac and
+          .ipAddress == $ip
+        )
+      ' >/dev/null 2>&1; then
+        printf '%s\n' "${vm_json}"
+        return 0
+      fi
+    fi
+    if (( SECONDS >= deadline )); then
+      die "timed out waiting for VM network spec to reflect PrismNet port ${port_id}"
+    fi
+    sleep 2
+  done
+}
 build_link() {
   printf '%s/build-%s' "$(vm_dir)" "$1"
 }
@@ -3582,11 +3688,12 @@ validate_lightningstor_distributed_storage() {
 validate_vm_storage_flow() {
   log "Validating PlasmaVMC image import, shared-volume execution, and cross-node migration"
-  local iam_tunnel="" ls_tunnel="" vm_tunnel="" coronafs_tunnel=""
+  local iam_tunnel="" prism_tunnel="" ls_tunnel="" vm_tunnel="" coronafs_tunnel=""
   local node04_coronafs_tunnel="" node05_coronafs_tunnel=""
   local current_worker_coronafs_port="" peer_worker_coronafs_port=""
   local vm_port=15082
   iam_tunnel="$(start_ssh_tunnel node01 15080 50080)"
+  prism_tunnel="$(start_ssh_tunnel node01 15081 50081)"
   ls_tunnel="$(start_ssh_tunnel node01 15086 50086)"
   vm_tunnel="$(start_ssh_tunnel node01 "${vm_port}" 50082)"
   coronafs_tunnel="$(start_ssh_tunnel node01 15088 "${CORONAFS_API_PORT}")"
@@ -3594,7 +3701,32 @@ validate_vm_storage_flow() {
   node05_coronafs_tunnel="$(start_ssh_tunnel node05 35088 "${CORONAFS_API_PORT}")"
   local image_source_path=""
   local node01_proto_root="/var/lib/plasmavmc/test-protos"
+  local vpc_id="" subnet_id="" port_id="" port_ip="" port_mac=""
   cleanup_vm_storage_flow() {
+    if [[ -n "${token:-}" && -n "${port_id:-}" && -n "${subnet_id:-}" ]]; then
+      grpcurl -plaintext \
+        -H "authorization: Bearer ${token}" \
+        -import-path "${PRISMNET_PROTO_DIR}" \
+        -proto "${PRISMNET_PROTO}" \
+        -d "$(jq -cn --arg org "${org_id:-}" --arg project "${project_id:-}" --arg subnet "${subnet_id}" --arg id "${port_id}" '{orgId:$org, projectId:$project, subnetId:$subnet, id:$id}')" \
+        127.0.0.1:15081 prismnet.PortService/DeletePort >/dev/null 2>&1 || true
+    fi
+    if [[ -n "${token:-}" && -n "${subnet_id:-}" && -n "${vpc_id:-}" ]]; then
+      grpcurl -plaintext \
+        -H "authorization: Bearer ${token}" \
+        -import-path "${PRISMNET_PROTO_DIR}" \
+        -proto "${PRISMNET_PROTO}" \
+        -d "$(jq -cn --arg org "${org_id:-}" --arg project "${project_id:-}" --arg vpc "${vpc_id}" --arg id "${subnet_id}" '{orgId:$org, projectId:$project, vpcId:$vpc, id:$id}')" \
+        127.0.0.1:15081 prismnet.SubnetService/DeleteSubnet >/dev/null 2>&1 || true
+    fi
+    if [[ -n "${token:-}" && -n "${vpc_id:-}" ]]; then
+      grpcurl -plaintext \
+        -H "authorization: Bearer ${token}" \
+        -import-path "${PRISMNET_PROTO_DIR}" \
+        -proto "${PRISMNET_PROTO}" \
+        -d "$(jq -cn --arg org "${org_id:-}" --arg project "${project_id:-}" --arg id "${vpc_id}" '{orgId:$org, projectId:$project, id:$id}')" \
+        127.0.0.1:15081 prismnet.VpcService/DeleteVpc >/dev/null 2>&1 || true
+    fi
     if [[ -n "${image_source_path}" && "${image_source_path}" != /nix/store/* ]]; then
       ssh_node node01 "rm -f ${image_source_path}" >/dev/null 2>&1 || true
     fi
@@ -3603,6 +3735,7 @@ validate_vm_storage_flow() {
     stop_ssh_tunnel node01 "${coronafs_tunnel}"
     stop_ssh_tunnel node01 "${vm_tunnel}"
     stop_ssh_tunnel node01 "${ls_tunnel}"
+    stop_ssh_tunnel node01 "${prism_tunnel}"
     stop_ssh_tunnel node01 "${iam_tunnel}"
   }
   trap cleanup_vm_storage_flow RETURN
@@ -3615,6 +3748,38 @@ validate_vm_storage_flow() {
   local token
   token="$(issue_project_admin_token 15080 "${org_id}" "${project_id}" "${principal_id}")"
+  log "Matrix case: PlasmaVMC + PrismNet"
+  vpc_id="$(create_prismnet_vpc_with_retry \
+    "${token}" \
+    "${org_id}" \
+    "${project_id}" \
+    "vm-network-vpc" \
+    "vm storage matrix networking" \
+    "10.62.0.0/16" | jq -r '.vpc.id')"
+  [[ -n "${vpc_id}" && "${vpc_id}" != "null" ]] || die "failed to create PrismNet VPC for PlasmaVMC matrix"
+  subnet_id="$(grpcurl -plaintext \
+    -H "authorization: Bearer ${token}" \
+    -import-path "${PRISMNET_PROTO_DIR}" \
+    -proto "${PRISMNET_PROTO}" \
+    -d "$(jq -cn --arg vpc "${vpc_id}" '{vpcId:$vpc, name:"vm-network-subnet", description:"vm storage matrix subnet", cidrBlock:"10.62.10.0/24", gatewayIp:"10.62.10.1", dhcpEnabled:true}')" \
+    127.0.0.1:15081 prismnet.SubnetService/CreateSubnet | jq -r '.subnet.id')"
+  [[ -n "${subnet_id}" && "${subnet_id}" != "null" ]] || die "failed to create PrismNet subnet for PlasmaVMC matrix"
+  local prismnet_port_response
+  prismnet_port_response="$(grpcurl -plaintext \
+    -H "authorization: Bearer ${token}" \
+    -import-path "${PRISMNET_PROTO_DIR}" \
+    -proto "${PRISMNET_PROTO}" \
+    -d "$(jq -cn --arg org "${org_id}" --arg project "${project_id}" --arg subnet "${subnet_id}" '{orgId:$org, projectId:$project, subnetId:$subnet, name:"vm-network-port", description:"vm storage matrix port", ipAddress:""}')" \
+    127.0.0.1:15081 prismnet.PortService/CreatePort)"
+  port_id="$(printf '%s' "${prismnet_port_response}" | jq -r '.port.id')"
+  port_ip="$(printf '%s' "${prismnet_port_response}" | jq -r '.port.ipAddress')"
+  port_mac="$(printf '%s' "${prismnet_port_response}" | jq -r '.port.macAddress')"
+  [[ -n "${port_id}" && "${port_id}" != "null" ]] || die "failed to create PrismNet port for PlasmaVMC matrix"
+  [[ -n "${port_ip}" && "${port_ip}" != "null" ]] || die "PrismNet port ${port_id} did not return an IP address"
+  [[ -n "${port_mac}" && "${port_mac}" != "null" ]] || die "PrismNet port ${port_id} did not return a MAC address"
   ensure_lightningstor_bucket 15086 "${token}" "plasmavmc-images" "${org_id}" "${project_id}"
   wait_for_lightningstor_write_quorum 15086 "${token}" "plasmavmc-images" "PlasmaVMC image import"
@@ -3764,6 +3929,8 @@ EOS
     --arg org "${org_id}" \
     --arg project "${project_id}" \
     --arg image_id "${image_id}" \
+    --arg subnet_id "${subnet_id}" \
+    --arg port_id "${port_id}" \
     '{
       name:$name,
       orgId:$org,
@@ -3788,6 +3955,14 @@ EOS
           bus:"DISK_BUS_VIRTIO",
           cache:"DISK_CACHE_WRITEBACK"
         }
+      ],
+      network:[
+        {
+          id:"tenant0",
+          subnetId:$subnet_id,
+          portId:$port_id,
+          model:"NIC_MODEL_VIRTIO_NET"
+        }
       ]
     }
   }'
@@ -3845,6 +4020,8 @@ EOS
     current_worker_coronafs_port=35088
     peer_worker_coronafs_port=25088
   fi
+  wait_for_vm_network_spec "${token}" "${get_vm_json}" "${port_id}" "${subnet_id}" "${port_mac}" "${port_ip}" "${vm_port}" >/dev/null
+  wait_for_prismnet_port_binding "${token}" "${org_id}" "${project_id}" "${subnet_id}" "${port_id}" "${vm_id}" >/dev/null
   grpcurl -plaintext \
     -H "authorization: Bearer ${token}" \
@@ -3872,7 +4049,7 @@ EOS
     sleep 2
   done
-  log "Matrix case: PlasmaVMC + CoronaFS"
+  log "Matrix case: PlasmaVMC + PrismNet + CoronaFS + LightningStor"
   local volume_id="${vm_id}-root"
   local data_volume_id="${vm_id}-data"
   local volume_path="${CORONAFS_VOLUME_ROOT}/${volume_id}.raw"
@@ -4108,6 +4285,7 @@ EOS
   (( $(printf '%s' "${data_volume_state_json}" | jq -r '.lastFlushedAttachmentGeneration // 0') < next_data_attachment_generation )) || die "data volume ${data_volume_id} unexpectedly reported destination flush before post-migration stop"
   root_attachment_generation="${next_root_attachment_generation}"
   data_attachment_generation="${next_data_attachment_generation}"
+  wait_for_prismnet_port_binding "${token}" "${org_id}" "${project_id}" "${subnet_id}" "${port_id}" "${vm_id}" >/dev/null
   grpcurl -plaintext \
     -H "authorization: Bearer ${token}" \
@@ -4235,6 +4413,7 @@ EOS
     fi
     sleep 2
   done
+  wait_for_prismnet_port_detachment "${token}" "${org_id}" "${project_id}" "${subnet_id}" "${port_id}" >/dev/null
   ssh_node "${node_id}" "bash -lc '[[ ! -d $(printf '%q' "$(vm_runtime_dir_path "${vm_id}")") ]]'"
   ssh_node node01 "bash -lc '[[ ! -f ${volume_path} ]]'"
@@ -4283,6 +4462,28 @@ EOS
     die "shared-fs VM data volume unexpectedly persisted to LightningStor object storage"
   fi
+  grpcurl -plaintext \
+    -H "authorization: Bearer ${token}" \
+    -import-path "${PRISMNET_PROTO_DIR}" \
+    -proto "${PRISMNET_PROTO}" \
+    -d "$(jq -cn --arg org "${org_id}" --arg project "${project_id}" --arg subnet "${subnet_id}" --arg id "${port_id}" '{orgId:$org, projectId:$project, subnetId:$subnet, id:$id}')" \
+    127.0.0.1:15081 prismnet.PortService/DeletePort >/dev/null
+  port_id=""
+  grpcurl -plaintext \
+    -H "authorization: Bearer ${token}" \
+    -import-path "${PRISMNET_PROTO_DIR}" \
+    -proto "${PRISMNET_PROTO}" \
+    -d "$(jq -cn --arg org "${org_id}" --arg project "${project_id}" --arg vpc "${vpc_id}" --arg id "${subnet_id}" '{orgId:$org, projectId:$project, vpcId:$vpc, id:$id}')" \
+    127.0.0.1:15081 prismnet.SubnetService/DeleteSubnet >/dev/null
+  subnet_id=""
+  grpcurl -plaintext \
+    -H "authorization: Bearer ${token}" \
+    -import-path "${PRISMNET_PROTO_DIR}" \
+    -proto "${PRISMNET_PROTO}" \
+    -d "$(jq -cn --arg org "${org_id}" --arg project "${project_id}" --arg id "${vpc_id}" '{orgId:$org, projectId:$project, id:$id}')" \
+    127.0.0.1:15081 prismnet.VpcService/DeleteVpc >/dev/null
+  vpc_id=""
   grpcurl -plaintext \
     -H "authorization: Bearer ${token}" \
     -import-path "${PLASMAVMC_PROTO_DIR}" \

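The three `wait_for_*` helpers added in this file share one poll-until-deadline shape. Below is a standalone sketch of that pattern; `die` is stubbed and the grpcurl/jq predicate is replaced by an arbitrary command, both assumptions so the sketch runs outside the harness.

```shell
# Stub for the harness's die helper (assumption for standalone use).
die() { printf 'error: %s\n' "$1" >&2; exit 1; }

# Poll an arbitrary predicate command until it succeeds or the deadline passes,
# mirroring the SECONDS-based deadline arithmetic used by the wait_for_* helpers.
wait_until() {
  local timeout="$1"; shift
  local deadline=$((SECONDS + timeout))
  while true; do
    if "$@"; then
      return 0
    fi
    if (( SECONDS >= deadline )); then
      die "timed out waiting for: $*"
    fi
    sleep 1
  done
}

# Example predicate: a marker file standing in for "port bound to VM".
marker="$(mktemp)"
wait_until 5 test -f "${marker}" && echo "condition met"
rm -f "${marker}"
```

In the harness the predicate additionally captures and re-prints the matching JSON; this sketch keeps only the retry/deadline skeleton.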

@@ -4,21 +4,41 @@ use prismnet_api::proto::{
     port_service_client::PortServiceClient, GetPortRequest, AttachDeviceRequest,
     DetachDeviceRequest,
 };
+use tonic::metadata::MetadataValue;
 use tonic::transport::Channel;
 /// PrismNET client wrapper
 pub struct PrismNETClient {
+    auth_token: String,
     port_client: PortServiceClient<Channel>,
 }
 impl PrismNETClient {
     /// Create a new PrismNET client
-    pub async fn new(endpoint: String) -> Result<Self, Box<dyn std::error::Error>> {
+    pub async fn new(
+        endpoint: String,
+        auth_token: String,
+    ) -> Result<Self, Box<dyn std::error::Error>> {
         let channel = Channel::from_shared(endpoint)?
             .connect()
             .await?;
         let port_client = PortServiceClient::new(channel);
-        Ok(Self { port_client })
+        Ok(Self {
+            auth_token,
+            port_client,
+        })
+    }
+    fn request_with_auth<T>(
+        auth_token: &str,
+        payload: T,
+    ) -> Result<tonic::Request<T>, Box<dyn std::error::Error>> {
+        let mut request = tonic::Request::new(payload);
+        let token_value = MetadataValue::try_from(auth_token)?;
+        request
+            .metadata_mut()
+            .insert("x-photon-auth-token", token_value);
+        Ok(request)
     }
     /// Get port details
@@ -29,12 +49,12 @@ impl PrismNETClient {
         subnet_id: &str,
         port_id: &str,
     ) -> Result<prismnet_api::proto::Port, Box<dyn std::error::Error>> {
-        let request = tonic::Request::new(GetPortRequest {
+        let request = Self::request_with_auth(&self.auth_token, GetPortRequest {
             org_id: org_id.to_string(),
             project_id: project_id.to_string(),
             subnet_id: subnet_id.to_string(),
             id: port_id.to_string(),
-        });
+        })?;
         let response = self.port_client.get_port(request).await?;
         Ok(response.into_inner().port.ok_or("Port not found in response")?)
     }
@@ -49,14 +69,14 @@ impl PrismNETClient {
         device_id: &str,
         device_type: i32,
     ) -> Result<(), Box<dyn std::error::Error>> {
-        let request = tonic::Request::new(AttachDeviceRequest {
+        let request = Self::request_with_auth(&self.auth_token, AttachDeviceRequest {
             org_id: org_id.to_string(),
             project_id: project_id.to_string(),
             subnet_id: subnet_id.to_string(),
             port_id: port_id.to_string(),
             device_id: device_id.to_string(),
             device_type,
-        });
+        })?;
         self.port_client.attach_device(request).await?;
         Ok(())
     }
@@ -69,13 +89,40 @@ impl PrismNETClient {
         subnet_id: &str,
         port_id: &str,
     ) -> Result<(), Box<dyn std::error::Error>> {
-        let request = tonic::Request::new(DetachDeviceRequest {
+        let request = Self::request_with_auth(&self.auth_token, DetachDeviceRequest {
             org_id: org_id.to_string(),
             project_id: project_id.to_string(),
             subnet_id: subnet_id.to_string(),
             port_id: port_id.to_string(),
-        });
+        })?;
         self.port_client.detach_device(request).await?;
         Ok(())
     }
 }
+#[cfg(test)]
+mod tests {
+    use super::*;
+    #[test]
+    fn request_with_auth_adds_internal_token_header() {
+        let request = PrismNETClient::request_with_auth(
+            "test-token",
+            GetPortRequest {
+                org_id: "org".to_string(),
+                project_id: "project".to_string(),
+                subnet_id: "subnet".to_string(),
+                id: "port".to_string(),
+            },
+        )
+        .expect("request metadata should be constructible");
+        assert_eq!(
+            request
+                .metadata()
+                .get("x-photon-auth-token")
+                .and_then(|value| value.to_str().ok()),
+            Some("test-token")
+        );
+    }
+}


@@ -70,6 +70,7 @@ const ACTION_VOLUME_DELETE: &str = "compute:volumes:delete";
 const NODE_ENDPOINT_LABEL: &str = "plasmavmc_endpoint";
 const FAILOVER_META_KEY: &str = "failover_at";
 const FAILOVER_TARGET_KEY: &str = "failover_target";
+const PRISMNET_VM_DEVICE_TYPE: i32 = prismnet_api::proto::DeviceType::Vm as i32;
 const STORE_OP_TIMEOUT: Duration = Duration::from_secs(5);
 /// VM Service implementation
@@ -1479,7 +1480,8 @@ impl VmServiceImpl {
             return Ok(());
         };
-        let mut client = PrismNETClient::new(endpoint.clone()).await?;
+        let auth_token = self.issue_internal_token(&vm.org_id, &vm.project_id).await?;
+        let mut client = PrismNETClient::new(endpoint.clone(), auth_token).await?;
         for net_spec in &mut vm.spec.network {
             if let (Some(ref subnet_id), Some(ref port_id)) =
@@ -1498,7 +1500,7 @@ impl VmServiceImpl {
                     Some(port.ip_address.clone())
                 };
-                // Attach VM to port (DeviceType::Vm = 1)
+                // Attach VM to the PrismNET port using the generated enum value.
                 client
                     .attach_device(
                         &vm.org_id,
@@ -1506,7 +1508,7 @@ impl VmServiceImpl {
                         subnet_id,
                         port_id,
                         &vm.id.to_string(),
-                        1, // DeviceType::Vm
+                        PRISMNET_VM_DEVICE_TYPE,
                     )
                     .await?;
@@ -1530,7 +1532,8 @@ impl VmServiceImpl {
             return Ok(());
         };
-        let mut client = PrismNETClient::new(endpoint.clone()).await?;
+        let auth_token = self.issue_internal_token(&vm.org_id, &vm.project_id).await?;
+        let mut client = PrismNETClient::new(endpoint.clone(), auth_token).await?;
         for net_spec in &vm.spec.network {
             if let (Some(ref subnet_id), Some(ref port_id)) =
@@ -1926,6 +1929,15 @@ mod tests {
             DiskCache::Writeback
         );
     }
+    #[test]
+    fn prismnet_vm_device_type_matches_generated_proto_enum() {
+        assert_eq!(
+            PRISMNET_VM_DEVICE_TYPE,
+            prismnet_api::proto::DeviceType::Vm as i32
+        );
+        assert_ne!(PRISMNET_VM_DEVICE_TYPE, prismnet_api::proto::DeviceType::None as i32);
+    }
 }
 impl StateSink for VmServiceImpl {