MSI Interrupt Initialization
1. What is an MSI interrupt?
Message Signaled Interrupts (MSI) is an interrupt mechanism introduced in PCI 2.2 and later extended by MSI-X.
The defining feature of MSI and MSI-X is that the device signals an interrupt by performing a memory write transaction to a system-defined address, writing an agreed-upon data value; that write is what tells the CPU an interrupt has occurred. The main benefit is that interrupts are no longer tied to the traditional interrupt pins, so the number of interrupts is no longer limited by them.
2. MSI in the PCI specification
The MSI and MSI-X registers are implemented as capability structures in the PCI configuration space.
The MSI capability structure is quite simple.
In its simplest form it holds a 32-bit message address and a 16-bit message data value. Depending on flags in the Message Control register, the address may instead be 64 bits wide, and optional mask/pending fields may be present for masking individual MSI vectors.
To support more interrupt vectors, the MSI-X capability structure instead holds the location of a vector table (addressed through one of the device's BARs).
(The format of the address and data is architecture-specific. In the simplest implementation the address is a fixed address and the data is the interrupt vector number assigned to the device; when the root complex sees data written to that address, it notifies the CPU that the corresponding MSI interrupt has arrived. These fields can also be subdivided further, e.g. for additional interrupt checking.)
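For orientation, here is a rough C sketch of the two layouts described above. The field widths follow the PCI 2.2/3.0 capability layout; the structs are only a reading aid (real code accesses these registers with pci_read_config_*/pci_write_config_* at the capability offset), and the names are illustrative:
--------------------------------------------------------------------------------
/* Simplest MSI capability: 32-bit address, no per-vector masking. */
struct msi_cap_32 {
        u8  cap_id;             /* 0x05 = PCI_CAP_ID_MSI */
        u8  next_ptr;           /* offset of the next capability */
        u16 msg_control;        /* enable, multi-message, 64-bit, masking flags */
        u32 msg_address;        /* address the device writes to */
        u16 msg_data;           /* data value the device writes */
};
/* Full variant: 64-bit address with per-vector masking. */
struct msi_cap_64_masked {
        u8  cap_id;
        u8  next_ptr;
        u16 msg_control;
        u32 msg_address_lo;
        u32 msg_address_hi;
        u16 msg_data;
        u16 reserved;
        u32 mask_bits;          /* one mask bit per vector */
        u32 pending_bits;       /* read-only pending status, one bit per vector */
};
--------------------------------------------------------------------------------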
3. MSI interrupt handling in Linux 2.6.26
Take x86 as the example.
The typical flow is: the device driver checks whether its device has the MSI or MSI-X capability, and if so calls pci_enable_msi or pci_enable_msix from drivers/pci/msi.c.
MSI is used as the example here; MSI-X is somewhat more involved, but the principle is the same.
--------------------------------------------------------------------------------
int pci_enable_msi(struct pci_dev* dev)
{
        int status;

        status = pci_msi_check_device(dev, 1, PCI_CAP_ID_MSI);
        if (status)
                return status;

        WARN_ON(!!dev->msi_enabled);

        /* Check whether driver already requested for MSI-X irqs */
        if (dev->msix_enabled) {
                printk(KERN_INFO "PCI: %s: Can't enable MSI. "
                        "Device already has MSI-X enabled\n",
                        pci_name(dev));
                return -EINVAL;
        }
        status = msi_capability_init(dev);
        return status;
}
--------------------------------------------------------------------------------
After these initial checks, the real work happens in msi_capability_init:
--------------------------------------------------------------------------------
static int msi_capability_init(struct pci_dev *dev)
{
        struct msi_desc *entry;
        int pos, ret;
        u16 control;

        msi_set_enable(dev, 0); /* Ensure msi is disabled as I set it up */

        pos = pci_find_capability(dev, PCI_CAP_ID_MSI);
        pci_read_config_word(dev, msi_control_reg(pos), &control);
        /* MSI Entry Initialization */
        entry = alloc_msi_entry();
        if (!entry)
                return -ENOMEM;

        entry->msi_attrib.type = PCI_CAP_ID_MSI;
        entry->msi_attrib.is_64 = is_64bit_address(control);
        entry->msi_attrib.entry_nr = 0;
        entry->msi_attrib.maskbit = is_mask_bit_support(control);
        entry->msi_attrib.masked = 1;
        entry->msi_attrib.default_irq = dev->irq;       /* Save IOAPIC IRQ */
        entry->msi_attrib.pos = pos;
        if (is_mask_bit_support(control)) {
                entry->mask_base = (void __iomem *)(long)msi_mask_bits_reg(pos,
                                is_64bit_address(control));
        }
        entry->dev = dev;
        if (entry->msi_attrib.maskbit) {
                unsigned int maskbits, temp;
                /* All MSIs are unmasked by default, Mask them all */
                pci_read_config_dword(dev,
                        msi_mask_bits_reg(pos, is_64bit_address(control)),
                        &maskbits);
                temp = (1 << multi_msi_capable(control));
                temp = ((temp - 1) & ~temp);
                maskbits |= temp;
                pci_write_config_dword(dev,
                        msi_mask_bits_reg(pos, is_64bit_address(control)),
                        maskbits);
                entry->msi_attrib.maskbits_mask = temp;
        }
        list_add_tail(&entry->list, &dev->msi_list);

        /* Configure MSI capability structure */
        ret = arch_setup_msi_irqs(dev, 1, PCI_CAP_ID_MSI);
        if (ret) {
                msi_free_irqs(dev);
                return ret;
        }

        /* Set MSI enabled bits */
        pci_intx_for_msi(dev, 0);
        msi_set_enable(dev, 1);
        dev->msi_enabled = 1;

        dev->irq = entry->irq;
        return 0;
}
--------------------------------------------------------------------------------
The function first fills in the msi_desc structure according to the flags in the Message Control register. The two most important items, the message address and data, are filled in by arch_setup_msi_irqs, which for plain MSI ends up calling arch_setup_msi_irq for the single entry.
On x86 this function lives in arch/x86/kernel/io_apic_32.c:
arch_setup_msi_irq
--------------------------------------------------------------------------------
int arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc)
{
        struct msi_msg msg;
        int irq, ret;

        irq = create_irq();
        if (irq < 0)
                return irq;

        ret = msi_compose_msg(dev, irq, &msg);
        if (ret < 0) {
                destroy_irq(irq);
                return ret;
        }

        set_irq_msi(irq, desc);
        write_msi_msg(irq, &msg);

        set_irq_chip_and_handler_name(irq, &msi_chip, handle_edge_irq,
                                      "edge");

        return 0;
}
--------------------------------------------------------------------------------
First, an interrupt number is allocated for the device with create_irq().
The message address and data are then composed from that interrupt number by msi_compose_msg.
Looking inside that function shows how x86 encodes these fields:
msg->address_hi = MSI_ADDR_BASE_HI;
msg->address_lo = MSI_ADDR_BASE_LO |
        ((INT_DEST_MODE == 0) ?
                MSI_ADDR_DEST_MODE_PHYSICAL :
                MSI_ADDR_DEST_MODE_LOGICAL) |
        ((INT_DELIVERY_MODE != dest_LowestPrio) ?
                MSI_ADDR_REDIRECTION_CPU :
                MSI_ADDR_REDIRECTION_LOWPRI) |
        MSI_ADDR_DEST_ID(dest);

msg->data = MSI_DATA_TRIGGER_EDGE |
        MSI_DATA_LEVEL_ASSERT |
        ((INT_DELIVERY_MODE != dest_LowestPrio) ?
                MSI_DATA_DELIVERY_FIXED :
                MSI_DATA_DELIVERY_LOWPRI) |
        MSI_DATA_VECTOR(vector);
The exact values assigned here differ between architectures; the differences are mainly in these attribute bits.
The composed message is then recorded in the relevant data structures:
set_irq_msi -- attaches the msi_desc to the irq descriptor for this interrupt number.
write_msi_msg -- writes the address and data into the device's MSI capability registers (a simplified sketch of this config-space write follows below).
set_irq_chip_and_handler_name -- registers the flow handler; after the common interrupt entry point, MSI device interrupts are dispatched through handle_edge_irq.
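The sketch below shows roughly what that config-space write amounts to for an MSI (not MSI-X) function. The offsets follow the PCI-specification capability layout rather than the kernel's helper macros, and error handling and masking are omitted:
--------------------------------------------------------------------------------
/* Illustrative only: push the composed address/data into the MSI capability.
 * 'pos' is the offset returned by pci_find_capability(dev, PCI_CAP_ID_MSI). */
static void msi_write_message_sketch(struct pci_dev *dev, int pos,
                                     const struct msi_msg *msg, int is_64bit)
{
        pci_write_config_dword(dev, pos + 4, msg->address_lo);
        if (is_64bit) {
                pci_write_config_dword(dev, pos + 8, msg->address_hi);
                pci_write_config_word(dev, pos + 12, msg->data);
        } else {
                pci_write_config_word(dev, pos + 8, msg->data);
        }
}
--------------------------------------------------------------------------------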
This concludes the MSI interrupt setup flow.
For reference, a backtrace captured while the mlx4_core driver probes its device shows this path being exercised (here via pci_enable_msix):
--------------------------------------------------------------------------------
[<ffffffff8119465f>] write_msi_msg+0x21/0x26
[<ffffffff8102284c>] arch_setup_msi_irqs+0x13f/0x205
[<ffffffff81194f07>] pci_enable_msix+0x335/0x361
[<ffffffffa00544a3>] __mlx4_init_one+0x4c4/0x901 [mlx4_core]
[<ffffffffa005ad7d>] mlx4_init_one+0x41/0x46 [mlx4_core]
[<ffffffff8118d7c1>] local_pci_probe+0x17/0x1b
[<ffffffff8118e40f>] pci_device_probe+0xc5/0xf1
[<ffffffff812178a9>] ? driver_sysfs_add+0x52/0x78
[<ffffffff81217a04>] driver_probe_device+0xb7/0x13b
[<ffffffff81217ae5>] __driver_attach+0x5d/0x80
[<ffffffff81217a88>] ? __driver_attach+0x0/0x80
[<ffffffff81216dc1>] bus_for_each_dev+0x4e/0x7f
[<ffffffff81217855>] driver_attach+0x21/0x23
[<ffffffff8121743b>] bus_add_driver+0xf9/0x266
[<ffffffff81217e8a>] driver_register+0xa3/0x11a
[<ffffffff8118e68b>] __pci_register_driver+0x55/0xb1
[<ffffffffa006509e>] mlx4_init+0x6d/0xab [mlx4_core]
[<ffffffff8100906a>] do_one_initcall+0x5f/0x14f
[<ffffffff81077cb6>] sys_init_module+0xd0/0x22a
[<ffffffff8100bb9b>] system_call_fastpath+0x16/0x1b
--------------------------------------------------------------------------------
Requesting the interrupts (request_irq)
You should register multiple handlers. With MSI you get a block of consecutive vectors, while MSI-X gives you a table with an individual address and data value for each interrupt vector.
For MSI:
request_irq(my_pci_dev->irq, irq_handler_0, ...);
request_irq(my_pci_dev->irq + 1, irq_handler_1, ...);
request_irq(my_pci_dev->irq + 2, irq_handler_2, ...);
For MSI-X:
request_irq(my_pci_dev->pMsixEntries[0].vector, irq_handler_0, ...);
request_irq(my_pci_dev->pMsixEntries[1].vector, irq_handler_1, ...);
request_irq(my_pci_dev->pMsixEntries[2].vector, irq_handler_2, ...);
In my driver code, the MSI irq is registered like this:
err = pci_enable_msi(my_pci_dev);
err = request_irq(my_pci_dev->irq, irq_handler, 0, "PCI_FPGA_CARD", NULL);
and the irq_handler is defined like this:
static irqreturn_t irq_handler(int irq, void *dev_id)
{
        printk(KERN_INFO "(irq_handler): Called\n");
        return IRQ_HANDLED;
}
Q2: With the three kernel functions above, how can we get the message "001"?
Q3: The PCI device supports up to 8 MSI vectors. To use all 8 vectors, which of the two fragments below should I use, or is neither correct?
err = pci_enable_msi_block(my_pci_dev,8);
err = request_irq(my_pci_dev->irq, irq_handler, 0, "PCI_FPGA_CARD", NULL);
or
err = pci_enable_msi(my_pci_dev);
err = request_irq(my_pci_dev->irq, irq_handler_0, 0, "PCI_FPGA_CARD", NULL);
err = request_irq(my_pci_dev->irq, irq_handler_1, 0, "PCI_FPGA_CARD", NULL);
err = request_irq(my_pci_dev->irq, irq_handler_2, 0, "PCI_FPGA_CARD", NULL);
err = request_irq(my_pci_dev->irq, irq_handler_3, 0, "PCI_FPGA_CARD", NULL);
err = request_irq(my_pci_dev->irq, irq_handler_4, 0, "PCI_FPGA_CARD", NULL);
err = request_irq(my_pci_dev->irq, irq_handler_5, 0, "PCI_FPGA_CARD", NULL);
err = request_irq(my_pci_dev->irq, irq_handler_6, 0, "PCI_FPGA_CARD", NULL);
err = request_irq(my_pci_dev->irq, irq_handler_7, 0, "PCI_FPGA_CARD", NULL);
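Neither fragment is correct as written: the second one registers eight handlers on the same IRQ number, and the first enables eight vectors but only attaches a handler to the first of them. The sketch below shows how multi-MSI registration looked on kernels that exported pci_enable_msi_block() (it relies on the consecutive-vector behaviour mentioned above; irq_handler and my_pci_dev are the names used in the question, and error handling is simplified):
--------------------------------------------------------------------------------
/* Sketch only: enable a block of 8 MSI vectors and register a handler
 * for each one. Assumes a kernel that still provides pci_enable_msi_block(). */
static int setup_eight_msi_vectors(struct pci_dev *my_pci_dev)
{
        int i, err;

        err = pci_enable_msi_block(my_pci_dev, 8);
        if (err)
                return err;     /* < 0: error, > 0: fewer vectors available */

        /* Multi-MSI vectors are consecutive, starting at dev->irq. */
        for (i = 0; i < 8; i++) {
                err = request_irq(my_pci_dev->irq + i, irq_handler, 0,
                                  "PCI_FPGA_CARD", my_pci_dev);
                if (err)
                        goto unwind;
        }
        return 0;

unwind:
        while (--i >= 0)
                free_irq(my_pci_dev->irq + i, my_pci_dev);
        pci_disable_msi(my_pci_dev);
        return err;
}
--------------------------------------------------------------------------------
On current kernels the equivalent is pci_alloc_irq_vectors(pdev, 8, 8, PCI_IRQ_MSI) followed by request_irq(pci_irq_vector(pdev, i), ...), as described in the HOWTO below.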
4. The MSI Driver Guide HOWTO
Authors: Tom L Nguyen, Martine Silbermann, Matthew Wilcox
Copyright: 2003, 2008 Intel Corporation
4.1. About this guide
This guide describes the basics of Message Signaled Interrupts (MSIs), the advantages of using MSI over traditional interrupt mechanisms, how to change your driver to use MSI or MSI-X and some basic diagnostics to try if a device doesn’t support MSIs.
4.2. What are MSIs?
A Message Signaled Interrupt is a write from the device to a special address which causes an interrupt to be received by the CPU.
The MSI capability was first specified in PCI 2.2 and was later enhanced in PCI 3.0 to allow each interrupt to be masked individually. The MSI-X capability was also introduced with PCI 3.0. It supports more interrupts per device than MSI and allows interrupts to be independently configured.
Devices may support both MSI and MSI-X, but only one can be enabled at a time.
4.3. Why use MSIs?
There are three reasons why using MSIs can give an advantage over traditional pin-based interrupts.
Pin-based PCI interrupts are often shared amongst several devices. To support this, the kernel must call each interrupt handler associated with an interrupt, which leads to reduced performance for the system as a whole. MSIs are never shared, so this problem cannot arise.
When a device writes data to memory, then raises a pin-based interrupt, it is possible that the interrupt may arrive before all the data has arrived in memory (this becomes more likely with devices behind PCI-PCI bridges). In order to ensure that all the data has arrived in memory, the interrupt handler must read a register on the device which raised the interrupt. PCI transaction ordering rules require that all the data arrive in memory before the value may be returned from the register. Using MSIs avoids this problem as the interrupt-generating write cannot pass the data writes, so by the time the interrupt is raised, the driver knows that all the data has arrived in memory.
PCI devices can only support a single pin-based interrupt per function. Often drivers have to query the device to find out what event has occurred, slowing down interrupt handling for the common case. With MSIs, a device can support more interrupts, allowing each interrupt to be specialised to a different purpose. One possible design gives infrequent conditions (such as errors) their own interrupt which allows the driver to handle the normal interrupt handling path more efficiently. Other possible designs include giving one interrupt to each packet queue in a network card or each port in a storage controller.
4.4. How to use MSIs
PCI devices are initialised to use pin-based interrupts. The device driver has to set up the device to use MSI or MSI-X. Not all machines support MSIs correctly, and for those machines, the APIs described below will simply fail and the device will continue to use pin-based interrupts.
4.4.1. Include kernel support for MSIs
To support MSI or MSI-X, the kernel must be built with the CONFIG_PCI_MSI option enabled. This option is only available on some architectures, and it may depend on some other options also being set. For example, on x86, you must also enable X86_UP_APIC or SMP in order to see the CONFIG_PCI_MSI option.
4.4.2. Using MSI
Most of the hard work is done for the driver in the PCI layer. The driver simply has to request that the PCI layer set up the MSI capability for this device.
To automatically use MSI or MSI-X interrupt vectors, use the following function:
int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
unsigned int max_vecs, unsigned int flags);
which allocates up to max_vecs interrupt vectors for a PCI device. It returns the number of vectors allocated or a negative error. If the device requires a minimum number of vectors, the driver can pass a min_vecs argument set to this limit, and the PCI core will return -ENOSPC if it can’t meet the minimum number of vectors.
The flags argument is used to specify which type of interrupt can be used by the device and the driver (PCI_IRQ_LEGACY, PCI_IRQ_MSI, PCI_IRQ_MSIX). A convenient short-hand (PCI_IRQ_ALL_TYPES) is also available to ask for any possible kind of interrupt. If the PCI_IRQ_AFFINITY flag is set, pci_alloc_irq_vectors() will spread the interrupts around the available CPUs.
To get the Linux IRQ numbers passed to request_irq() and free_irq() and the vectors, use the following function:
int pci_irq_vector(struct pci_dev *dev, unsigned int nr);
Any allocated resources should be freed before removing the device using the following function:
void pci_free_irq_vectors(struct pci_dev *dev);
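As a rough illustration of how these calls typically fit together in a driver's probe path (my_handler, my_setup_interrupts and the "my-device" name are placeholders, and error handling is abbreviated):
static irqreturn_t my_handler(int irq, void *data);     /* hypothetical handler */

static int my_setup_interrupts(struct pci_dev *pdev)
{
        int i, err, nvec;

        /* Ask for up to 32 vectors of any supported type. */
        nvec = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_ALL_TYPES);
        if (nvec < 0)
                return nvec;

        /* Register one handler per allocated vector. */
        for (i = 0; i < nvec; i++) {
                err = request_irq(pci_irq_vector(pdev, i), my_handler, 0,
                                  "my-device", pdev);
                if (err)
                        goto err_free;
        }
        return 0;

err_free:
        while (--i >= 0)
                free_irq(pci_irq_vector(pdev, i), pdev);
        pci_free_irq_vectors(pdev);
        return err;
}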
If a device supports both MSI-X and MSI capabilities, this API will use the MSI-X facilities in preference to the MSI facilities. MSI-X supports any number of interrupts between 1 and 2048. In contrast, MSI is restricted to a maximum of 32 interrupts (and must be a power of two). In addition, the MSI interrupt vectors must be allocated consecutively, so the system might not be able to allocate as many vectors for MSI as it could for MSI-X. On some platforms, MSI interrupts must all be targeted at the same set of CPUs whereas MSI-X interrupts can all be targeted at different CPUs.
If a device supports neither MSI-X nor MSI, it will fall back to a single legacy IRQ vector.
The typical usage of MSI or MSI-X interrupts is to allocate as many vectors as possible, likely up to the limit supported by the device. If nvec is larger than the number supported by the device it will automatically be capped to the supported limit, so there is no need to query the number of vectors supported beforehand:
nvec = pci_alloc_irq_vectors(pdev, 1, nvec, PCI_IRQ_ALL_TYPES);
if (nvec < 0)
        goto out_err;
If a driver is unable or unwilling to deal with a variable number of MSI interrupts it can request a particular number of interrupts by passing that number to pci_alloc_irq_vectors() function as both ‘min_vecs’ and ‘max_vecs’ parameters:
ret = pci_alloc_irq_vectors(pdev, nvec, nvec, PCI_IRQ_ALL_TYPES);
if (ret < 0)
        goto out_err;
The most notorious example of the request type described above is enabling the single MSI mode for a device. It could be done by passing two 1s as ‘min_vecs’ and ‘max_vecs’:
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
if (ret < 0)
        goto out_err;
Some devices might not support using legacy line interrupts, in which case the driver can specify that only MSI or MSI-X is acceptable:
nvec = pci_alloc_irq_vectors(pdev, 1, nvec, PCI_IRQ_MSI | PCI_IRQ_MSIX);
if (nvec < 0)
        goto out_err;
4.4.3. Legacy APIs
The following old APIs to enable and disable MSI or MSI-X interrupts should not be used in new code:
pci_enable_msi() /* deprecated */
pci_disable_msi() /* deprecated */
pci_enable_msix_range() /* deprecated */
pci_enable_msix_exact() /* deprecated */
pci_disable_msix() /* deprecated */
Additionally there are APIs to provide the number of supported MSI or MSI-X vectors: pci_msi_vec_count() and pci_msix_vec_count(). In general these should be avoided in favor of letting pci_alloc_irq_vectors() cap the number of vectors. If you have a legitimate special use case for the count of vectors we might have to revisit that decision and add a pci_nr_irq_vectors() helper that handles MSI and MSI-X transparently.
4.4.4. Considerations when using MSIs
4.4.4.1. Spinlocks
Most device drivers have a per-device spinlock which is taken in the interrupt handler. With pin-based interrupts or a single MSI, it is not necessary to disable interrupts (Linux guarantees the same interrupt will not be re-entered). If a device uses multiple interrupts, the driver must disable interrupts while the lock is held. If the device sends a different interrupt, the driver will deadlock trying to recursively acquire the spinlock. Such deadlocks can be avoided by using spin_lock_irqsave() or spin_lock_irq() which disable local interrupts and acquire the lock (see Documentation/kernel-hacking/locking.rst).
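A minimal sketch of the recommended pattern, assuming a device with multiple MSI vectors; my_dev and my_msi_handler are hypothetical names:
struct my_dev {
        spinlock_t lock;
        /* ... state shared between the device's interrupt handlers ... */
};

static irqreturn_t my_msi_handler(int irq, void *data)
{
        struct my_dev *dev = data;
        unsigned long flags;

        /* Disable local interrupts while holding the lock so that another
         * of this device's vectors cannot fire on this CPU and deadlock
         * trying to take the same lock. */
        spin_lock_irqsave(&dev->lock, flags);
        /* ... handle the event signalled by this vector ... */
        spin_unlock_irqrestore(&dev->lock, flags);

        return IRQ_HANDLED;
}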
4.4.5. How to tell whether MSI/MSI-X is enabled on a device
Using ‘lspci -v’ (as root) may show some devices with “MSI”, “Message Signalled Interrupts” or “MSI-X” capabilities. Each of these capabilities has an ‘Enable’ flag which is followed with either “+” (enabled) or “-” (disabled).
4.5. MSI quirks
Several PCI chipsets or devices are known not to support MSIs. The PCI stack provides three ways to disable MSIs:
- globally
- on all devices behind a specific bridge
- on a single device
4.5.1. Disabling MSIs globally
Some host chipsets simply don’t support MSIs properly. If we’re lucky, the manufacturer knows this and has indicated it in the ACPI FADT table. In this case, Linux automatically disables MSIs. Some boards don’t include this information in the table and so we have to detect them ourselves. The complete list of these is found near the quirk_disable_all_msi() function in drivers/pci/quirks.c.
If you have a board which has problems with MSIs, you can pass pci=nomsi on the kernel command line to disable MSIs on all devices. It would be in your best interests to report the problem to linux-pci@vger.kernel.org including a full ‘lspci -v’ so we can add the quirks to the kernel.
4.5.2. Disabling MSIs below a bridge
Some PCI bridges are not able to route MSIs between busses properly. In this case, MSIs must be disabled on all devices behind the bridge.
Some bridges allow you to enable MSIs by changing some bits in their PCI configuration space (especially the Hypertransport chipsets such as the nVidia nForce and Serverworks HT2000). As with host chipsets, Linux mostly knows about them and automatically enables MSIs if it can. If you have a bridge unknown to Linux, you can enable MSIs in configuration space using whatever method you know works, then enable MSIs on that bridge by doing:
echo 1 > /sys/bus/pci/devices/$bridge/msi_bus
where $bridge is the PCI address of the bridge you’ve enabled (eg 0000:00:0e.0).
To disable MSIs, echo 0 instead of 1. Changing this value should be done with caution as it could break interrupt handling for all devices below this bridge.
Again, please notify linux-pci@vger.kernel.org of any bridges that need special handling.
4.5.3. Disabling MSIs on a single device
Some devices are known to have faulty MSI implementations. Usually this is handled in the individual device driver, but occasionally it’s necessary to handle this with a quirk. Some drivers have an option to disable use of MSI. While this is a convenient workaround for the driver author, it is not good practice, and should not be emulated.
4.5.4. Finding why MSIs are disabled on a device
From the above three sections, you can see that there are many reasons why MSIs may not be enabled for a given device. Your first step should be to examine your dmesg carefully to determine whether MSIs are enabled for your machine. You should also check your .config to be sure you have enabled CONFIG_PCI_MSI.
Then, ‘lspci -t’ gives the list of bridges above a device. Reading /sys/bus/pci/devices/*/msi_bus will tell you whether MSIs are enabled (1) or disabled (0). If 0 is found in any of the msi_bus files belonging to bridges between the PCI root and the device, MSIs are disabled.
It is also worth checking the device driver to see whether it supports MSIs. For example, it may contain calls to pci_alloc_irq_vectors() with the PCI_IRQ_MSI or PCI_IRQ_MSIX flags.