[BUG] Redfish: Attribute Links/ManagedBy is missing from the resource #70
Hi @lentzi90, thanks for the feedback! I will try to set up BMO and Ironic in my environment to see which Redfish endpoints Ironic requires to go through the provisioning workflow. In the meantime, if you know what those endpoints are, it would be great if you could post them here directly to save some time (for instance, https://www.dmtf.org/sites/default/files/standards/documents/DSP2046_2024.2.html#manager-1191). I'm sure this is due to some missing Redfish endpoints, as the implemented ones are limited (I have only tested with Tinkerbell). By the way, do you think your work could somehow become an e2e test case for KubeVirtBMC?
Thank you!
I'm hoping to use kubevirtbmc in BMO e2e tests eventually. Currently I am trying it out to find any blockers or missing features. It does look promising and would make our e2e test setup much simpler. I'm not sure if it would make sense to test kubevirtbmc itself in this way, but it would certainly be possible! We currently test Redfish, IPMI, and Redfish with virtual media. The tests go through things like power on/off, reboot, changing the boot device order, and of course virtual media.
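For context, the power and reboot cases boil down to standard Redfish actions. A minimal sketch of what a power-on request looks like (the host and credentials are placeholders):

```sh
# Standard Redfish ComputerSystem.Reset action.
# <bmc-host> and admin:password are placeholders for the actual endpoint/credentials.
curl -k -u admin:password \
  -H "Content-Type: application/json" \
  -X POST "https://<bmc-host>/redfish/v1/Systems/1/Actions/ComputerSystem.Reset" \
  -d '{"ResetType": "On"}'
```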
Thanks for sharing. But I have to say that the virtual media function, as tracked in #44, has not been implemented yet. At the current stage, I suggest using iPXE to do the provisioning. OTOH, I was told by a Redfish expert that there is an interop validator on the DMTF website and an interoperability profile provided by the Ironic project. With these at hand, we have a canonical way to test KubeVirtBMC's conformance and see whether it matches Ironic's workflow.
Hi, we are having the same issue. We are also trying to use kubevirtbmc with the baremetal-operator. It would be really great if this could be implemented.
@lentzi90 I had a hard time pulling things together and making them work correctly. I was basically following the Metal3 quick-start guide and the guide in your GitHub repo, but I couldn't get the VM under provisioning to query the correct endpoint for the kernel and initrd files.
I found that you seem to have worked on similar stuff before (metal3-io/ironic-image#468). Could you provide me with some insights? Thank you.
I managed to get it working by forcing it to use legacy PXE boot (`bootMode: legacy` in the BMH spec). With this setup working, however, I ended up with a successful BMH inspection without encountering the issue you mentioned originally. Here's the BMH resource I used/ended up with:
I won't be able to verify whether my patch fixes the original issue if I cannot reproduce it first. What are your BMO and Ironic versions? I followed the quick start guide, which I believe is pretty outdated, with BMO v0.5.1 and Ironic v24.0.0.
OTOH, I ran the validator against KubeVirtBMC's Redfish service and got a report that showed all the required things we haven't implemented; indeed, the `Links/ManagedBy` attribute is among them. The report is in HTML format, so I can't attach it here directly. Below are the steps to run the validation and generate a report. I'm putting a note here in case someone wants to check it later :)
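Roughly, the setup looks like this (a sketch assuming the DMTF Redfish-Interop-Validator and the interop profile shipped in the Ironic source tree; the exact profile filename and version may differ):

```sh
# Get the DMTF interop validator and its dependencies.
git clone https://github.com/DMTF/Redfish-Interop-Validator.git
cd Redfish-Interop-Validator
pip install -r requirements.txt

# Fetch Ironic's interoperability profile (filename/version is an assumption;
# see the redfish-interop-profiles directory in the ironic repo for the current one).
curl -LO https://opendev.org/openstack/ironic/raw/branch/master/redfish-interop-profiles/OpenStackIronicProfile.v1_1_0.json
```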
Finally, run the validator with the config and profile:
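Something along these lines (flags per the validator's README; the endpoint and credentials are placeholders, and a config file can be passed instead of CLI flags):

```sh
# Validate the KubeVirtBMC Redfish service against Ironic's interop profile.
# --ip takes the service endpoint; -u/-p are the BMC credentials (placeholders here).
python3 RedfishInteropValidator.py \
  --ip https://<kubevirtbmc-redfish-endpoint> \
  -u admin -p password \
  --logdir ./logs \
  OpenStackIronicProfile.v1_1_0.json
```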
The report will be in the log directory (`./logs` by default).
Thank you for this, @starbops!
Ok, so I realize I cannot use virtual media; the attribute error came when using virtualmedia. Indeed, it has now changed to another error with v0.5.1, so some progress I guess! 🙂 I am now trying to switch to Redfish without virtual media, but I'm having a hard time figuring out how to configure PXE booting. Did you use Multus to be able to do this over the host network? Could you perhaps share the VirtualMachine YAML showing how you did that?
Absolutely. I can successfully PXE-boot the virtual machine and install the target OS without issues. My setup runs on a single-node Harvester cluster, so I can quickly get a KubeVirt-ready environment and deploy the Metal3 stack in a separate Kubernetes cluster inside a virtual machine, but setting up the environment from the ground up works fine too. Since KubeVirtBMC needs to communicate with the API server where the virtual machines live, I installed it on the underlying Harvester cluster. I then spun up two virtual machines: one is our target machine for provisioning, and the other hosts everything mentioned in the quick-start guide, including the KinD cluster.
Both virtual machines are attached to a Multus bridge network, which is basically the same L2 network as the Harvester node. That means they are all in the same subnet.

**VirtualMachine manifest**

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
annotations:
harvesterhci.io/vmRunStrategy: Always
harvesterhci.io/volumeClaimTemplates: '[{"metadata":{"name":"virtual-metal-rootdisk"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"10Gi"}},"volumeMode":"Block","storageClassName":"harvester-longhorn"}}]'
kubevirt.io/latest-observed-api-version: v1
kubevirt.io/storage-observed-api-version: v1
network.harvesterhci.io/ips: '[]'
creationTimestamp: "2025-03-01T08:06:45Z"
finalizers:
- kubevirtbmc-virtualmachine-controller
- kubevirt.io/virtualMachineControllerFinalize
- wrangler.cattle.io/VMController.CleanupPVCAndSnapshot
generation: 42
labels:
harvesterhci.io/creator: harvester
harvesterhci.io/os: linux
name: virtual-metal
namespace: adhoc
resourceVersion: "1752062"
uid: 170c15be-add7-47dd-bcfe-1cea5d76eb18
spec:
running: true
template:
metadata:
annotations:
harvesterhci.io/sshNames: '[]'
creationTimestamp: null
labels:
harvesterhci.io/vmName: virtual-metal
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: network.harvesterhci.io/mgmt
operator: In
values:
- "true"
architecture: amd64
domain:
cpu:
cores: 2
sockets: 1
threads: 1
devices:
disks:
- bootOrder: 1
disk:
bus: virtio
name: rootdisk
interfaces:
- bridge: {}
macAddress: ea:e3:66:d4:ca:4b
model: virtio
name: default
features:
acpi:
enabled: true
machine:
type: q35
memory:
guest: 4Gi
resources:
limits:
cpu: "2"
memory: 4Gi
requests:
cpu: 125m
memory: 2730Mi
evictionStrategy: LiveMigrateIfPossible
hostname: virtual-metal
networks:
- multus:
networkName: default/net-48
name: default
terminationGracePeriodSeconds: 120
volumes:
- name: rootdisk
persistentVolumeClaim:
claimName: virtual-metal-rootdisk
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2025-03-07T09:03:14Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: null
status: "True"
type: LiveMigratable
created: true
desiredGeneration: 42
observedGeneration: 42
printableStatus: Running
ready: true
runStrategy: Always
volumeSnapshotStatuses:
- enabled: false
name: rootdisk
    reason: 2 matching VolumeSnapshotClasses for harvester-longhorn
```

**BareMetalHost manifest**

```console
NAME     STATE         CONSUMER   ONLINE   ERROR   AGE
bml-01   provisioned              true             55m
ubuntu@aio:~$ kubectl get bmh bml-01 -o yaml
```

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"metal3.io/v1alpha1","kind":"BareMetalHost","metadata":{"annotations":{},"name":"bml-01","namespace":"default"},"spec":{"bmc":{"address":"redfish://adhoc-virtual-metal-
creationTimestamp: "2025-03-07T08:59:31Z"
finalizers:
- baremetalhost.metal3.io
generation: 2
name: bml-01
namespace: default
resourceVersion: "192942"
uid: 4a7cefab-76b9-4d73-a51c-09b0d0926f52
spec:
architecture: x86_64
automatedCleaningMode: metadata
bmc:
address: redfish://adhoc-virtual-metal-virtbmc.192.168.48.100.sslip.io/redfish/v1/Systems/1
credentialsName: bml-01
disableCertificateVerification: true
bootMACAddress: ea:e3:66:d4:ca:4b
bootMode: legacy
image:
checksum: http://192.168.48.58/SHA256SUMS
checksumType: sha256
format: qcow2
url: http://192.168.48.58/jammy-server-cloudimg-amd64.img
online: true
rootDeviceHints:
deviceName: /dev/vda
userData:
name: user-data
namespace: default
status:
errorCount: 0
errorMessage: ""
goodCredentials:
credentials:
name: bml-01
namespace: default
credentialsVersion: "192488"
hardware:
cpu:
arch: x86_64
count: 2
flags:
- 3dnowprefetch
- abm
- adx
- aes
- apic
- arat
- arch_capabilities
- avx
- avx2
- bmi1
- bmi2
- clflush
- cmov
- constant_tsc
- cpuid
- cpuid_fault
- cx16
- cx8
- de
- ept
- ept_ad
- erms
- f16c
- flexpriority
- fma
- fpu
- fsgsbase
- fxsr
- hle
- ht
- hypervisor
- ibpb
- ibrs
- invpcid
- lahf_lm
- lm
- mca
- mce
- md_clear
- mmx
- movbe
- msr
- mtrr
- nopl
- nx
- pae
- pat
- pcid
- pclmulqdq
- pdpe1gb
- pge
- pni
- popcnt
- pse
- pse36
- pti
- rdrand
- rdseed
- rdtscp
- rep_good
- rtm
- sep
- smap
- smep
- ss
- ssbd
- sse
- sse2
- sse4_1
- sse4_2
- ssse3
- stibp
- syscall
- tpr_shadow
- tsc
- tsc_adjust
- tsc_deadline_timer
- tsc_known_freq
- umip
- vme
- vmx
- vnmi
- vpid
- x2apic
- xsave
- xsaveopt
- xtopology
model: Intel Core Processor (Broadwell, IBRS)
firmware:
bios:
date: 04/01/2014
vendor: SeaBIOS
version: rel-1.16.0-0-gd239552c-rebuilt.opensuse.org
hostname: localhost.localdomain
nics:
- ip: 192.168.48.65
mac: ea:e3:66:d4:ca:4b
model: 0x1af4 0x0001
name: enp1s0
pxe: true
- ip: fe80::d325:1b78:5142:bfee%enp1s0
mac: ea:e3:66:d4:ca:4b
model: 0x1af4 0x0001
name: enp1s0
pxe: true
ramMebibytes: 4096
storage:
- alternateNames:
- /dev/vda
- /dev/disk/by-path/pci-0000:07:00.0
name: /dev/disk/by-path/pci-0000:07:00.0
rotational: true
sizeBytes: 10737418240
type: HDD
vendor: "0x1af4"
systemVendor:
manufacturer: KubeVirt
productName: None
hardwareProfile: unknown
lastUpdated: "2025-03-07T09:03:23Z"
operationHistory:
deprovision:
end: null
start: null
inspect:
end: "2025-03-07T09:02:22Z"
start: "2025-03-07T08:59:41Z"
provision:
end: "2025-03-07T09:03:23Z"
start: "2025-03-07T09:02:22Z"
register:
end: "2025-03-07T08:59:41Z"
start: "2025-03-07T08:59:31Z"
operationalStatus: OK
poweredOn: true
provisioning:
ID: 829d8b96-61d9-4fb5-aa9f-e7da7a339d0a
bootMode: legacy
image:
checksum: http://192.168.48.58/SHA256SUMS
checksumType: sha256
format: qcow2
url: http://192.168.48.58/jammy-server-cloudimg-amd64.img
rootDeviceHints:
deviceName: /dev/vda
state: provisioned
triedCredentials:
credentials:
name: bml-01
namespace: default
    credentialsVersion: "192488"
```

I noticed that you seem to install everything in the same Kubernetes cluster. That's convenient, because Metal3 can reach KubeVirtBMC directly via Service names, so there's no need to set up an ingress controller. Maybe I should try it. This kind of setup is typical for testing environments, which is exactly the pain point KubeVirtBMC aims to resolve.
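For the in-cluster variant, the BMH could point straight at the Kubernetes Service instead of an external hostname. A minimal sketch, assuming a hypothetical Service name and namespace (check what KubeVirtBMC actually creates for the VM):

```yaml
# Hypothetical in-cluster Redfish address; the actual Service name and
# namespace depend on how KubeVirtBMC exposes the per-VM BMC.
spec:
  bmc:
    address: redfish://virtual-metal-virtbmc.kubevirtbmc-system.svc.cluster.local/redfish/v1/Systems/1
    credentialsName: bml-01
    disableCertificateVerification: true
```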
Awesome, thank you! I think I will be able to progress with this, but it will likely be next week before I have time.
I'm not completely sure this will actually work. I thought it would be easy with virtual media and just the in-cluster network, but I am not sure PXE would work well like that. In fact, I already started reverting to the host network and will probably go that route first, since it is closer to what we normally do.
Good luck with that! I'll close this issue since the link attributes have been added to the Redfish resource. Feel free to re-open it if you find something still missing, or let me know what I can do here. Thank you.
**Describe the bug**
I am trying to use kubevirtbmc with Bare Metal Operator and Ironic.
So far I have managed to get the connection and authentication to work, i.e. Ironic is able to reach the Redfish API.
However, it then gets stuck because it expects to find the `Links/ManagedBy` attribute on the System. Here is how it looks in the logs:
I tried looking through the code to see what could be missing. If I understand correctly, there are definitely Managers so I guess they would just need to be linked, perhaps here.
Does that sound correct?
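For reference, this is roughly what a conformant ComputerSystem looks like with the back-reference in place (the paths are illustrative, following the DSP2046 schema):

```json
{
  "@odata.id": "/redfish/v1/Systems/1",
  "Id": "1",
  "Name": "Virtual Metal",
  "Links": {
    "ManagedBy": [
      { "@odata.id": "/redfish/v1/Managers/1" }
    ]
  }
}
```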
**To Reproduce**
I have pushed my WIP changes here.
Deploying BMO/Ironic is unfortunately still a bit complicated, which is why I link to the manifests rather than reproducing them here.
Steps to reproduce the behavior:

1. Deploy kubevirtbmc and create a `virtualmachinebmc` for the KubeVirt VM.
2. Deploy BMO and Ironic using the linked manifests, then create a BareMetalHost pointing at the Redfish endpoint.
3. The BareMetalHost goes through the `registering` phase, confirming access to the BMC.
4. It then gets stuck in `inspecting`.
More details can be found in the logs of the Ironic pod, specifically the `ironic` container (not `ironic-http`).
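A sketch of how one might pull those logs (the namespace and deployment name are assumptions; adjust for how Ironic was deployed):

```sh
# The pod runs several containers; the relevant logs come from "ironic".
# Namespace and deployment name are assumptions, not the documented defaults.
kubectl logs -n baremetal-operator-system deploy/ironic -c ironic -f
```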
**Expected behavior**
BMO/Ironic should be able to control the KubeVirt VMs through the Redfish interface set up by kubevirtbmc.