My desktop is a MacPro6,1 (the "black can"). Here are my GRUB command-line arguments for Pie:
intel_iommu=on pci=hpbussize=10,hpmemsize=2M,nocrs,realloc
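For completeness, these are applied the usual way through /etc/default/grub, roughly like this (I'm assuming a Debian/Ubuntu-style GRUB layout; paths and the regenerate command may differ on other distros):

$ sudoedit /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on pci=hpbussize=10,hpmemsize=2M,nocrs,realloc"
$ sudo update-grub    # or: sudo grub-mkconfig -o /boot/grub/grub.cfg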
Even though I force a realloc, I still see the following BAR errors:
[ 0.710600] pci 0000:ad:01.0: BAR 13: no space for [io size 0x1000]
[ 0.710601] pci 0000:ad:01.0: BAR 13: failed to assign [io size 0x1000]
[ 0.710602] pci 0000:ad:00.0: BAR 14: no space for [mem size 0x00100000]
[ 0.710603] pci 0000:ad:00.0: BAR 14: failed to assign [mem size 0x00100000]
[ 0.710604] pci 0000:ad:01.0: BAR 14: no space for [mem size 0x00100000]
[ 0.710604] pci 0000:ad:01.0: BAR 14: failed to assign [mem size 0x00100000]
[ 0.710605] pci 0000:ad:01.0: BAR 13: no space for [io size 0x1000]
[ 0.710606] pci 0000:ad:01.0: BAR 13: failed to assign [io size 0x1000]
[ 0.710607] pci 0000:ae:00.0: BAR 0: no space for [mem size 0x00008000 64bit]
[ 0.710608] pci 0000:ae:00.0: BAR 0: failed to assign [mem size 0x00008000 64bit]
[ 0.710609] pci 0000:ae:00.0: BAR 0: no space for [mem size 0x00008000 64bit]
[ 0.710609] pci 0000:ae:00.0: BAR 0: failed to assign [mem size 0x00008000 64bit]
[ 0.710642] pci 0000:af:00.0: BAR 5: no space for [mem size 0x00000200]
After the realloc, some BAR memory and I/O allocations do succeed, but others still conflict or fail:
[ 0.710910] pci 0000:cb:00.0: BAR 4: [io 0xb000-0xb01f] conflicts with 0000:a9:00.0 [io 0xb000-0xb01f]
[ 0.710910] pci 0000:cb:00.0: BAR 4: failed to assign [io size 0x0020]
[ 0.710911] pci 0000:cb:00.0: BAR 0: no space for [io size 0x0008]
[ 0.711081] pci 0000:02:00.0: BAR 4: assigned [io 0x2000-0x20ff]
[ 0.711092] pci 0000:06:00.0: BAR 4: assigned [io 0x3000-0x30ff]
I know what I need to do, but I do not see how to implement it: I need to enable PCI addressing above 4 GB. Most forum posts say to enable this in the BIOS (often called "Above 4G Decoding"), but this is a UEFI Mac with no firmware setup screen.
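In case it helps anyone diagnose this, these are the standard checks I can run and post output from, to show what memory and I/O windows the firmware actually hands the Thunderbolt bridges (device addresses taken from the dmesg above; I'm assuming lspci and /proc/iomem are the right places to look):

$ sudo lspci -vv -s ad:00.0 | grep -i "behind bridge"
$ sudo lspci -vv -s ad:01.0 | grep -i "behind bridge"
$ sudo grep -i "pci bus" /proc/iomem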
Any help would be appreciated. I am phasing out macOS. Thanks.
BTW, my eGPU works under macOS. It's connected via Thunderbolt.
Everything works on Linux except allocating resources for my Thunderbolt-attached GPU. My Thunderbolt-attached JBODs are all accessible.
The NVIDIA driver even installs; there just aren't enough memory and I/O resources for the card to function properly, and nvidia-smi fails to detect the device.
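If it's useful, I can also post the GPU's BAR state and driver binding. A rough sketch of what I'd run (assuming the usual lspci/dmesg tooling and that the eGPU is the only NVIDIA device on the bus):

$ sudo lspci -nnk | grep -iA8 nvidia
$ sudo dmesg | grep -iE "nvidia|NVRM"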