

systemd/network: Ignore mana VFs on Azure #106

Merged
jepio merged 1 commit into flatcar-master from azure-mana-vf
Sep 15, 2023

Conversation

@jepio
Member

@jepio jepio commented Sep 12, 2023

Ignore mana VFs on Azure

Azure is introducing a new generation of NICs that are architected the same way as the current Mellanox-based ones: there is a synthetic (vmbus) NIC for handling some control traffic (including DHCP) and a VF that is enslaved to the synthetic NIC. We therefore need to exclude the devices managed by the mana driver from IP management.

How to use

[ describe what reviewers need to do in order to validate this PR ]

Testing done

No testing; just got a LISA bug report.

  • Changelog entries added in the respective changelog/ directory (user-facing change, bug fix, security fix, update)
  • Inspected CI output for image differences: /boot and /usr size, packages, list files for any missing binaries, kernel modules, config files, etc.

Azure is introducing a new generation of NICs that are architected the same
way as the current Mellanox-based ones: there is a synthetic (vmbus) NIC for
handling some control traffic (including DHCP) and a VF that is enslaved to the
synthetic NIC. We therefore need to exclude the devices managed by the mana
driver from IP management.

Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
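
The diff itself is not shown in this view. Based on the description, the change presumably adds the mana driver to the existing "leave Azure VFs unmanaged" match rule; a minimal sketch (the file name and the pre-existing driver list are assumptions, not taken from the diff):

```ini
# yy-azure-sriov.network (file name assumed): on Azure, leave SR-IOV VF NICs
# unmanaged so that only the synthetic hv_netvsc NIC receives IP configuration.
[Match]
KernelCommandLine=flatcar.oem.id=azure
# "mana" is newly added alongside the (assumed) existing Mellanox VF drivers.
Driver=mlx4_en mlx5_en mana

[Link]
Unmanaged=yes
```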
Member

@pothos pothos left a comment


Reminds me that this file is something afterburn could generate based on the MAC address instead of having to chase driver changes.

@pothos
Member

pothos commented Sep 12, 2023

> Reminds me that this file is something afterburn could generate based on the MAC address instead of having to chase driver changes.

Existing issue for that: flatcar/Flatcar#554

@jepio
Member Author

jepio commented Sep 13, 2023

interesting - my initial thought was "i can implement that" but then i realised we also need it to work for hotplugged interfaces, so we'd need to re-run afterburn if we did this.

how about i just configure the list to say Driver=!hv_netvsc?

@pothos
Member

pothos commented Sep 13, 2023

> interesting - my initial thought was "i can implement that" but then i realised we also need it to work for hotplugged interfaces, so we'd need to re-run afterburn if we did this.

Ah, the metadata server output will change then? Retriggering afterburn from udev could work in that case.

> how about i just configure the list to say Driver=!hv_netvsc?

We still need to restrict the matching to "meant to be physical" NICs. If they are all PCI devices, that can work by matching on the path or similar; otherwise we could take the restriction list from the default rule, since it targets the same type of NICs.
So, something like this?

# Ignore SR-IOV interface on Azure, since it'll be transparently bonded
# to the synthetic interface
[Match]
KernelCommandLine=flatcar.oem.id=azure
# With NetworkManager, Azure uses a udev rule matching DRIVERS=="hv_pci".
# This won't work with networkd because it only checks the driver of the
# device itself, not its parents. All we can do instead is set the non-hv_netvsc
# NIC to be Unmanaged. In the output of networkctl you should see only
# one virtual NIC configured and the "physical" NIC be bonded to it and set
# Unmanaged.
Name=*
Type=!loopback bridge tunnel vxlan wireguard
Driver=!veth dummy hv_netvsc

[Link]
Unmanaged=yes

@pothos
Member

pothos commented Sep 15, 2023

FYI, also needs a PR for bootengine:dracut/03flatcar-network/yy-azure-sriov.network and bootengine:dracut/03flatcar-network/yy-azure-sriov-coreos.network (yeah, two files here and two files there to keep in sync…)

@jepio
Member Author

jepio commented Sep 15, 2023

Opened the second PR here:
