Linux drivers: about resources (struct resource)


Copied from: https://blog.csdn.net/qq_16777851/article/details/82975057

Resources are described in Linux with the following flag definitions:

/*
* IO resources have these defined flags.
*/
#define IORESOURCE_BITS 0x000000ff /* Bus-specific bits */

#define IORESOURCE_TYPE_BITS 0x00001f00 /* Resource type */
#define IORESOURCE_IO 0x00000100 /* PCI/ISA I/O ports */
#define IORESOURCE_MEM 0x00000200
#define IORESOURCE_REG 0x00000300 /* Register offsets */
#define IORESOURCE_IRQ 0x00000400
#define IORESOURCE_DMA 0x00000800
#define IORESOURCE_BUS 0x00001000

#define IORESOURCE_PREFETCH 0x00002000 /* No side effects */
#define IORESOURCE_READONLY 0x00004000
#define IORESOURCE_CACHEABLE 0x00008000
#define IORESOURCE_RANGELENGTH 0x00010000
#define IORESOURCE_SHADOWABLE 0x00020000

#define IORESOURCE_SIZEALIGN 0x00040000 /* size indicates alignment */
#define IORESOURCE_STARTALIGN 0x00080000 /* start field is alignment */

#define IORESOURCE_MEM_64 0x00100000
#define IORESOURCE_WINDOW 0x00200000 /* forwarded by bridge */
#define IORESOURCE_MUXED 0x00400000 /* Resource is software muxed */

#define IORESOURCE_EXCLUSIVE 0x08000000 /* Userland may not map this resource */
#define IORESOURCE_DISABLED 0x10000000
#define IORESOURCE_UNSET 0x20000000 /* No address assigned yet */
#define IORESOURCE_AUTO 0x40000000
#define IORESOURCE_BUSY 0x80000000 /* Driver has marked this resource busy */

/* PCI control bits. Shares IORESOURCE_BITS with above PCI ROM. */
#define IORESOURCE_PCI_FIXED (1<<4) /* Do not move resource */
The most commonly used of these are the following:

#define IORESOURCE_IO 0x00000100 /* PCI/ISA I/O ports */
#define IORESOURCE_MEM 0x00000200
#define IORESOURCE_REG 0x00000300 /* Register offsets */
#define IORESOURCE_IRQ 0x00000400
#define IORESOURCE_DMA 0x00000800
#define IORESOURCE_BUS 0x00001000
In this post we mainly look at the first three kinds of resources: IO, MEM and REG.

Almost every peripheral is driven by reading and writing registers on the device, and a peripheral's registers are usually mapped at consecutive addresses. Depending on the CPU architecture, there are two ways of addressing I/O ports:

(1) I/O-mapped (port I/O)

Typically, an x86 processor implements a separate address space just for peripherals, called the "I/O address space" or "I/O port space". The CPU accesses locations in this space with dedicated I/O instructions (the x86 IN and OUT instructions). From the kernel's point of view this space can only be reached through dedicated accessor functions; at the hardware level the CPU needs special instructions or a special access mode, and you cannot simply dereference a pointer into it. Embedded platforms generally have no separate I/O address space at all.
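As a quick illustration (not from the original post): on x86 a driver touches port space only through the kernel's accessor functions. The sketch below is hypothetical; the port base 0x378 and the register meanings are made up, but request_region()/release_region() and inb()/outb() are the standard interfaces:

#include <linux/module.h>
#include <linux/ioport.h>
#include <linux/io.h>

#define FOO_PORT_BASE 0x378   /* hypothetical port-I/O base */
#define FOO_PORT_LEN  4

static int __init foo_port_init(void)
{
    u8 status;

    /* tell the kernel we own these ports so nobody else claims them */
    if (!request_region(FOO_PORT_BASE, FOO_PORT_LEN, "foo"))
        return -EBUSY;

    outb(0x01, FOO_PORT_BASE);          /* write a (made-up) command register */
    status = inb(FOO_PORT_BASE + 1);    /* read a (made-up) status register */
    pr_info("foo: status %#x\n", status);

    release_region(FOO_PORT_BASE, FOO_PORT_LEN);
    return 0;
}
module_init(foo_port_init);
MODULE_LICENSE("GPL");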

(2) Memory-mapped I/O

A CPU with a RISC instruction set (MIPS, ARM, PowerPC and so on) usually implements only a single physical address space. In that case the physical addresses of the peripheral's I/O ports are mapped into the memory address space and become part of it. The CPU can then access a device register just like a memory location, with no need for dedicated I/O instructions.

If you search the kernel sources you will find that only a handful of registers are described with IORESOURCE_REG; the vast majority are described with IORESOURCE_IO or IORESOURCE_MEM.

On our ARM platforms registers are accessed in exactly the same way as memory, so register resources on ARM are normally declared as IORESOURCE_MEM.

In the kernel a resource is described by the structure below; each instance can also be linked, as a node, into a resource tree.

/*
* Resources are tree-like, allowing
* nesting etc..
*/
struct resource {
resource_size_t start; /* start of this resource range */
resource_size_t end; /* end of this resource range */
const char *name; /* name of the resource, handy for the user */
unsigned long flags; /* which kind of resource this is */
struct resource *parent, *sibling, *child; /* links that make it a node in the tree */
};
For example, to pass a range of registers to a specific driver as a resource:

static struct resource xxxx = {
/* register address range */
.start = 0x04014000,
.end = 0x04014003,
.flags = IORESOURCE_MEM,
};
In general, the physical addresses of a peripheral's I/O resources are known while the system is running: they are fixed by the hardware and can be looked up in the datasheet. The CPU, however, normally has no virtual addresses set up for them, so a driver cannot access an I/O region through its physical address directly. The region must first be mapped into the kernel's virtual address space (through the page tables); only then can the driver access the I/O memory through the resulting kernel virtual addresses. Linux declares ioremap() in io.h to map the physical address of an I/O memory resource into the kernel's virtual address space.

As a rule, I/O memory must be requested before it is mapped, and I/O ports must likewise be requested (claimed) first. Requesting an I/O port tells the kernel that you intend to access it, so the kernel will not let anyone else claim the same port; two users driving the same hardware at the same time is asking for trouble.
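As a minimal sketch of that request-then-map sequence for I/O memory (reusing the 0x04014000 example above; the function names and the register access are made up):

#include <linux/ioport.h>
#include <linux/io.h>

#define XXXX_PHYS 0x04014000
#define XXXX_LEN  4

static void __iomem *xxxx_base;

static int xxxx_map(void)
{
    u32 val;

    /* 1. claim the physical range so no other driver grabs it */
    if (!request_mem_region(XXXX_PHYS, XXXX_LEN, "xxxx"))
        return -EBUSY;

    /* 2. map it into the kernel's virtual address space */
    xxxx_base = ioremap(XXXX_PHYS, XXXX_LEN);
    if (!xxxx_base) {
        release_mem_region(XXXX_PHYS, XXXX_LEN);
        return -ENOMEM;
    }

    /* 3. access the registers through the mapping */
    val = readl(xxxx_base);
    (void)val;
    return 0;
}

static void xxxx_unmap(void)
{
    iounmap(xxxx_base);
    release_mem_region(XXXX_PHYS, XXXX_LEN);
}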

Let's take the platform bus adding a MEM resource as an example and analyze how a resource gets added to the resource tree.

/**
* platform_device_register - add a platform-level device
* @pdev: platform device we're adding
*/
int platform_device_register(struct platform_device *pdev)
{
device_initialize(&pdev->dev);
arch_setup_pdev_archdata(pdev);
return platform_device_add(pdev);
}
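For context, a board file would typically wrap the example resource from earlier in a platform_device and hand it to platform_device_register(); a sketch, with every name (xxxx_device, "xxxx", xxxx_board_init) made up:

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

static struct resource xxxx_resources[] = {
    {
        .start = 0x04014000,
        .end   = 0x04014003,
        .flags = IORESOURCE_MEM,
    },
};

static struct platform_device xxxx_device = {
    .name          = "xxxx",
    .id            = -1,    /* PLATFORM_DEVID_NONE: no instance number */
    .resource      = xxxx_resources,
    .num_resources = ARRAY_SIZE(xxxx_resources),
};

static int __init xxxx_board_init(void)
{
    return platform_device_register(&xxxx_device);
}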


/**
* device_initialize - init device structure.
* @dev: device.
*
* This prepares the device for use by other layers by initializing
* its fields.
* It is the first half of device_register(), if called by
* that function, though it can also be called separately, so one
* may use @dev's fields. In particular, get_device()/put_device()
* may be used for reference counting of @dev after calling this
* function.
*
* All fields in @dev must be initialized by the caller to 0, except
* for those explicitly set to some other value. The simplest
* approach is to use kzalloc() to allocate the structure containing
* @dev.
*
* NOTE: Use put_device() to give up your reference instead of freeing
* @dev directly once you have called this function.
*/
void device_initialize(struct device *dev)
{
/* initialize the generic device fields */
dev->kobj.kset = devices_kset;
kobject_init(&dev->kobj, &device_ktype);
INIT_LIST_HEAD(&dev->dma_pools);
mutex_init(&dev->mutex);
lockdep_set_novalidate_class(&dev->mutex);
spin_lock_init(&dev->devres_lock);
INIT_LIST_HEAD(&dev->devres_head);
device_pm_init(dev);
set_dev_node(dev, -1);
}

/**
* arch_setup_pdev_archdata - Allow manipulation of archdata before its used
* @pdev: platform device
*
* This is called before platform_device_add() such that any pdev_archdata may
* be setup before the platform_notifier is called. So if a user needs to
* manipulate any relevant information in the pdev_archdata they can do:
*
* platform_device_alloc()
* ... manipulate ...
* platform_device_add()
*
* And if they don't care they can just call platform_device_register() and
* everything will just work out.
*/
void __weak arch_setup_pdev_archdata(struct platform_device *pdev)
{
/* weak hook, reserved for architectures that need it */
}
 

/**
* platform_device_add - add a platform device to device hierarchy
* @pdev: platform device we're adding
*
* This is part 2 of platform_device_register(), though may be called
* separately _iff_ pdev was allocated by platform_device_alloc().
*/
int platform_device_add(struct platform_device *pdev)
{
int i, ret;

if (!pdev)
return -EINVAL;

if (!pdev->dev.parent)
pdev->dev.parent = &platform_bus; /* default parent is the platform bus device */

pdev->dev.bus = &platform_bus_type; /* the device hangs off the platform bus */

switch (pdev->id) {
default: /* caller supplied an explicit device id */
dev_set_name(&pdev->dev, "%s.%d", pdev->name, pdev->id);
break;
case PLATFORM_DEVID_NONE: /* -1: no device id needed */
dev_set_name(&pdev->dev, "%s", pdev->name);
break;
case PLATFORM_DEVID_AUTO: /* let the bus allocate the device id */
/*
* Automatically allocated device ID. We mark it as such so
* that we remember it must be freed, and we append a suffix
* to avoid namespace collision with explicit IDs.
*/
ret = ida_simple_get(&platform_devid_ida, 0, 0, GFP_KERNEL);
if (ret < 0)
goto err_out;
pdev->id = ret;
pdev->id_auto = true;
dev_set_name(&pdev->dev, "%s.%d.auto", pdev->name, pdev->id);
break;
}
/* Insert this device's resources into the resource tree; this fails if another device has already claimed them */
for (i = 0; i < pdev->num_resources; i++) {
struct resource *p, *r = &pdev->resource[i];

if (r->name == NULL)
r->name = dev_name(&pdev->dev); /* unnamed resources take the device name */

p = r->parent;
if (!p) { /* no parent set: if this is an IORESOURCE_MEM or IORESOURCE_IO resource, insert it into the corresponding resource tree */
if (resource_type(r) == IORESOURCE_MEM)
p = &iomem_resource; /* root of the iomem resource tree */
else if (resource_type(r) == IORESOURCE_IO)
p = &ioport_resource; /* root of the ioport resource tree */
}

/* p must exist here; by default only IO and MEM resources get added to a resource tree */
if (p && insert_resource(p, r)) {
dev_err(&pdev->dev, "failed to claim resource %d\n", i);
ret = -EBUSY;
goto failed;
}
}

pr_debug("Registering platform device '%s'. Parent at %s\n",
dev_name(&pdev->dev), dev_name(pdev->dev.parent));

/* Add the device: sysfs node, notifications, driver matching, etc. */
ret = device_add(&pdev->dev);
if (ret == 0)
return ret;

failed:
if (pdev->id_auto) {
ida_simple_remove(&platform_devid_ida, pdev->id);
pdev->id = PLATFORM_DEVID_AUTO;
}

while (--i >= 0) {
struct resource *r = &pdev->resource[i];
if (r->parent)
release_resource(r);
}

err_out:
return ret;
}
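On the driver side, the matching platform driver's probe routine later fetches those resources back. A minimal sketch, assuming a hypothetical driver bound to the "xxxx" device above (devm_ioremap_resource() is analyzed at the end of this post):

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int xxxx_probe(struct platform_device *pdev)
{
    struct resource *res;
    void __iomem *base;

    /* fetch the first MEM resource that platform_device_add() inserted */
    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    if (!res)
        return -ENODEV;

    dev_info(&pdev->dev, "registers at %pR\n", res);

    /* request + ioremap in one managed call */
    base = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(base))
        return PTR_ERR(base);

    return 0;
}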
 

這里我們先分析一下資源樹的總信息,后面分析如何把資源加入資源樹。(kernel/resource.c)

struct resource ioport_resource = {
.name = "PCI IO",
.start = 0,
.end = IO_SPACE_LIMIT,
.flags = IORESOURCE_IO,
};

struct resource iomem_resource = {
.name = "PCI mem",
.start = 0,
.end = -1,
.flags = IORESOURCE_MEM,
};
On a 32-bit processor the iomem resource, i.e. the "PCI mem" resource, therefore spans 0 to 0xffffffff.

The ioport limit, on the other hand, is a macro; searching for it turns up:

/*
* This is the limit of PC card/PCI/ISA IO space, which is by default
* 64K if we have PC card, PCI or ISA support. Otherwise, default to
* zero to prevent ISA/PCI drivers claiming IO space (and potentially
* oopsing.)
*
* Only set this larger if you really need inb() et.al. to operate over
* a larger address space. Note that SOC_COMMON ioremaps each sockets
* IO space area, and so inb() et.al. must be defined to operate as per
* readb() et.al. on such platforms.
*/
#ifndef IO_SPACE_LIMIT
#if defined(CONFIG_PCMCIA_SOC_COMMON) || defined(CONFIG_PCMCIA_SOC_COMMON_MODULE)
#define IO_SPACE_LIMIT ((resource_size_t)0xffffffff)
#elif defined(CONFIG_PCI) || defined(CONFIG_ISA) || defined(CONFIG_PCCARD)
#define IO_SPACE_LIMIT ((resource_size_t)0xffff)
#else
#define IO_SPACE_LIMIT ((resource_size_t)0)
#endif
#endif
Its value is non-zero only when PC-card, PCI or ISA support is configured; otherwise it is 0.

In other words, without those PCI-related config options you cannot describe resources with IORESOURCE_IO; registering them into the resource tree would be bound to fail.

 

Now let's look at how the platform bus inserts a MEM range into the resource tree. (Note up front that the kernel inserts resources into an n-ary tree.)

/**
* insert_resource - Inserts a resource in the resource tree
* @parent: parent of the new resource
* @new: new resource to insert
*
* Returns 0 on success, -EBUSY if the resource can't be inserted.
*/
int insert_resource(struct resource *parent, struct resource *new)
{
struct resource *conflict;

conflict = insert_resource_conflict(parent, new);
return conflict ? -EBUSY : 0;
}

/**
* insert_resource_conflict - Inserts resource in the resource tree
* @parent: parent of the new resource
* @new: new resource to insert
*
* Returns 0 on success, conflict resource if the resource can't be inserted.
*
* This function is equivalent to request_resource_conflict when no conflict
* happens. If a conflict happens, and the conflicting resources
* entirely fit within the range of the new resource, then the new
* resource is inserted and the conflicting resources become children of
* the new resource.
*/
struct resource *insert_resource_conflict(struct resource *parent, struct resource *new)
{
struct resource *conflict;

write_lock(&resource_lock);
conflict = __insert_resource(parent, new);
write_unlock(&resource_lock);
return conflict;
}
 

 

/*
* Insert a resource into the resource tree. If successful, return NULL,
* otherwise return the conflicting resource (compare to __request_resource())
*/
static struct resource * __insert_resource(struct resource *parent, struct resource *new)
{
struct resource *first, *next;

for (;; parent = first) {
first = __request_resource(parent, new);
if (!first)
return first;

if (first == parent)
return first;
if (WARN_ON(first == new)) /* duplicated insertion */
return first;

if ((first->start > new->start) || (first->end < new->end))
break;
if ((first->start == new->start) && (first->end == new->end))
break;
}

for (next = first; ; next = next->sibling) {
/* Partial overlap? Bad, and unfixable */
if (next->start < new->start || next->end > new->end)
return next;
if (!next->sibling)
break;
if (next->sibling->start > new->end)
break;
}

new->parent = parent;
new->sibling = next->sibling;
new->child = first;

next->sibling = NULL;
for (next = first; next; next = next->sibling)
next->parent = new;

if (parent->child == first) {
parent->child = new;
} else {
next = parent->child;
while (next->sibling != first)
next = next->sibling;
next->sibling = new;
}
return NULL;
}

/* Return the conflict entry if you can't request it */
static struct resource * __request_resource(struct resource *root, struct resource *new)
{
resource_size_t start = new->start;
resource_size_t end = new->end;
struct resource *tmp, **p;

if (end < start) /* invalid range: end before start */
return root;
if (start < root->start) /* requested range not within the root resource */
return root;
if (end > root->end) /* requested range not within the root resource */
return root;
p = &root->child;
for (;;) {
tmp = *p;
if (!tmp || tmp->start > end) {
new->sibling = tmp;
*p = new;
new->parent = root;
return NULL;
}
p = &tmp->sibling;
if (tmp->end < start)
continue;
return tmp;
}
}
Let's start by working through __insert_resource with concrete examples.

Assume we are requesting from iomem_resource, whose overall range is 0 to 0xffffffff, and that no device has claimed anything from it yet.
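Concretely, resource A in the walkthrough below could be declared and inserted like this (a sketch; the variable and function names are made up):

#include <linux/ioport.h>

/* resource "A": 0xa0000000-0xafffffff */
static struct resource res_a = {
    .name  = "A",
    .start = 0xa0000000,
    .end   = 0xafffffff,
    .flags = IORESOURCE_MEM,
};

static int __init res_a_init(void)
{
    /* 0 on success, -EBUSY if it cannot be inserted (see the analysis below) */
    return insert_resource(&iomem_resource, &res_a);
}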

 

First, suppose we request the range 0xa0000000-0xafffffff; call this resource A.

三個判斷通過后,    p = &root->child,因為root資源還沒被申請所以child為NULL,即tmp = *p = NULL,因為滿足for下面的第一個if的!tmp而進入if里面,此時new->sibling = tmp = NULL, *p = new 即root的child指向new,這個資源的parent 指向root.

In other words, A ends up linked directly under root (root->child = A, A->parent = root, A->sibling = NULL).

 

 

之后我們假設四種情況,在上圖的基礎上請求一段資源

1. The requested end is below A's start (0x50000000-0x5fffffff).

2. The requested start is above A's end (0xc0000000-0xcfffffff).

3. The requested start is below A's start but the requested end falls inside A (the end lands in the middle of A: 0x9fff0000-0xa000ffff).

4. The requested start falls inside A but the requested end is beyond A's end (the start lands in the middle of A: 0xafff0000-0xb000ffff).

 

Case 1: [0x50000000-0x5fffffff]

Because the requested end is below A's start, tmp->start > end holds and we enter the first if inside the for loop: the new node becomes root->child and its sibling points to A.

 

Case 2: [0xc0000000-0xcfffffff]

The first pass does not satisfy the first if inside the for loop, so the following lines run:

for (;;) {
tmp = *p;
if (!tmp || tmp->start > end) {
new->sibling = tmp;
*p = new;
new->parent = root;
return NULL;
}
p = &tmp->sibling;
if (tmp->end < start)
continue;
return tmp;
}
Here tmp points to A, so p now points at A->sibling (which is NULL); and because A->end = 0xafffffff is below 0xc0000000, we hit continue and go around the loop again.

This time p points at &A->sibling, whose value is NULL, so tmp is NULL and we enter the if: the new node is linked in as A's sibling, with its parent set to root.

 

 

Case 3: [0x9fff0000-0xa000ffff]

Again the first pass does not satisfy the first if inside the for loop, so the following lines run:

for (;;) {
tmp = *p;
if (!tmp || tmp->start > end) {
new->sibling = tmp;
*p = new;
new->parent = root;
return NULL;
}
p = &tmp->sibling;
if (tmp->end < start)
continue;
return tmp;
}
Here tmp points to A. Because A->end (0xafffffff) < start (0x9fff0000) does not hold, we return tmp, i.e. A, and what hangs under root is left unchanged.

Case 4: [0xafff0000-0xb000ffff]

Likewise, A->end (0xafffffff) < start (0xafff0000) does not hold, so we return A itself.

 

 

Let's analyze one more situation: starting from the case-2 state (A at 0xa0000000-0xafffffff plus B2 at 0xc0000000-0xcfffffff), request 0xa0000000-0xbfffffff.

1. On entering the for loop, tmp->start (0xa0000000) > end (0xbfffffff) does not hold, so the statements below run.

2. Then tmp->end (0xafffffff) < start (0xa0000000) does not hold either, so we return tmp, i.e. A.

 

Summary

Looking at the four cases above: as long as the range being requested does not overlap anything already hanging at that level of the tree, it can be inserted.

(注意這里插入失敗返回的是失敗位置的,這點很重要,后面要用到)

 

這里我們做個小實驗,使用下面命令查看一下資源。

cat /proc/iomem
[root@linux]/# cat /proc/iomem
30000000-42ffffff : System RAM
30008000-3039ad13 : Kernel code
303be000-3042e7bb : Kernel data
43800000-4fffffff : System RAM
88000000-88000000 : dm9000
88000000-88000000 : dm9000
88000004-88000007 : dm9000
88000004-88000007 : dm9000
e0900000-e0900fff : dma-pl330.0
e0900000-e0900fff : dma-pl330.0
e0a00000-e0a00fff : dma-pl330.1
e0a00000-e0a00fff : dma-pl330.1
e1100000-e11000ff : samsung-spdif
e1600000-e160001f : s5pv210-keypad
e1700000-e17000ff : s3c64xx-ts
e1700000-e17000ff : samsung-adc-v3
e1800000-e1800fff : s3c2440-i2c.0
e1a00000-e1a00fff : s3c2440-i2c.2
e2200000-e22000ff : samsung-ac97
e2500000-e2500fff : samsung-pwm
e2700000-e27003ff : s3c2410-wdt
e2800000-e28000ff : s3c64xx-rtc
e2900000-e29000ff : s5pv210-uart.0
e2900000-e29000ff : s5pv210-uart
e2900400-e29004ff : s5pv210-uart.1
e2900400-e29004ff : s5pv210-uart
e2900800-e29008ff : s5pv210-uart.2
e2900800-e29008ff : s5pv210-uart
e2900c00-e2900cff : s5pv210-uart.3
e2900c00-e2900cff : s5pv210-uart
e8200000-e8203fff : s5pv210-pata.0
eb000000-eb000fff : s3c-sdhci.0
eb100000-eb100fff : s3c-sdhci.1
eb200000-eb200fff : s3c-sdhci.2
eb300000-eb300fff : s3c-sdhci.3
ec000000-ec01ffff : s3c-hsotg
eee30000-eee300ff : samsung-i2s.0
f1700000-f170ffff : s5p-mfc
f8000000-f8003fff : s5pv210-fb
fab00000-fab00fff : s3c2440-i2c.1
fb200000-fb200fff : s5pv210-fimc.0
fb300000-fb300fff : s5pv210-fimc.1
fb400000-fb400fff : s5pv210-fimc.2
fb600000-fb600fff : s5p-jpeg.0
[root@linux]/#
Note the indentation above: an indented entry belongs to (is a child of) the entry above it.

 

注意我這里紅框圈出來的這部分。

此時我們插入下圖這段資源

 

結果如下圖所示

 

四個uart的資源都屬於了myled資源里面了(注意縮進)
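A test module that reproduces this could look roughly like the following. The start address and the name myled come from the experiment above; the length (0x1000, enough to cover all four UART blocks) and everything else are made up:

#include <linux/module.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

/* one MEM resource spanning the four s5pv210-uart register blocks */
static struct resource myled_resources[] = {
    {
        .name  = "myled",
        .start = 0xe2900000,
        .end   = 0xe2900fff,
        .flags = IORESOURCE_MEM,
    },
};

static struct platform_device myled_device = {
    .name          = "myled",
    .id            = -1,
    .resource      = myled_resources,
    .num_resources = ARRAY_SIZE(myled_resources),
};

static int __init myled_init(void)
{
    /* platform_device_add() -> insert_resource() adopts the UARTs as children */
    return platform_device_register(&myled_device);
}

static void __exit myled_exit(void)
{
    /* this releases the myled resource from the tree again */
    platform_device_unregister(&myled_device);
}

module_init(myled_init);
module_exit(myled_exit);
MODULE_LICENSE("GPL");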

Now rmmod the module and look again:

[root@linux]/# rmmod /drivers/source_dev.ko
[root@linux]/# cat /proc/iomem
30000000-42ffffff : System RAM
30008000-3039ad13 : Kernel code
303be000-3042e7bb : Kernel data
43800000-4fffffff : System RAM
88000000-88000000 : dm9000
88000000-88000000 : dm9000
88000004-88000007 : dm9000
88000004-88000007 : dm9000
e0900000-e0900fff : dma-pl330.0
e0900000-e0900fff : dma-pl330.0
e0a00000-e0a00fff : dma-pl330.1
e0a00000-e0a00fff : dma-pl330.1
e1100000-e11000ff : samsung-spdif
e1600000-e160001f : s5pv210-keypad
e1700000-e17000ff : s3c64xx-ts
e1700000-e17000ff : samsung-adc-v3
e1800000-e1800fff : s3c2440-i2c.0
e1a00000-e1a00fff : s3c2440-i2c.2
e2200000-e22000ff : samsung-ac97
e2500000-e2500fff : samsung-pwm
e2700000-e27003ff : s3c2410-wdt
e2800000-e28000ff : s3c64xx-rtc
e8200000-e8203fff : s5pv210-pata.0
eb000000-eb000fff : s3c-sdhci.0
eb100000-eb100fff : s3c-sdhci.1
eb200000-eb200fff : s3c-sdhci.2
eb300000-eb300fff : s3c-sdhci.3
ec000000-ec01ffff : s3c-hsotg
eee30000-eee300ff : samsung-i2s.0
f1700000-f170ffff : s5p-mfc
f8000000-f8003fff : s5pv210-fb
fab00000-fab00fff : s3c2440-i2c.1
fb200000-fb200fff : s5pv210-fimc.0
fb300000-fb300fff : s5pv210-fimc.1
fb400000-fb400fff : s5pv210-fimc.2
fb600000-fb600fff : s5p-jpeg.0
[root@linux]/#
Notice that the e2900000 resources can no longer be found at all; that raises another question.

 

 

 

So a resource that overlaps already-registered ones can indeed be inserted; what we saw on removal is not handled here but elsewhere. Let's keep analyzing.

 

 

 

 

Next let's go back over the caller of __request_resource, i.e. __insert_resource, with comments added:

/*
* Insert a resource into the resource tree. If successful, return NULL,
* otherwise return the conflicting resource (compare to __request_resource())
*/
static struct resource * __insert_resource(struct resource *parent, struct resource *new)
{
struct resource *first, *next;

/* Note this is a loop: on a conflict it keeps updating parent and retrying, unless the return is NULL or new is invalid relative to parent */
for (;; parent = first) {
first = __request_resource(parent, new); /* NULL means it was inserted */
if (!first)
return first; /* NULL: inserted, return directly */

if (first == parent) /* new lies outside the root range or is invalid */
return first;
if (WARN_ON(first == new)) /* duplicated insertion: the same resource object inserted twice */
return first;

/* the conflicting node starts above new->start, or ends below new->end (new sticks out on at least one side) */
if ((first->start > new->start) || (first->end < new->end))
break;
/* new has exactly the same range as an existing resource */
if ((first->start == new->start) && (first->end == new->end))
break;
}

/* reaching here means we broke out: new overlaps something already in the tree */
for (next = first; ; next = next->sibling) {
/* Partial overlap? Bad, and unfixable */
if (next->start < new->start || next->end > new->end)
return next;
if (!next->sibling) /* end of this level: everything from first onwards fits inside new */
break;
if (next->sibling->start > new->end) /* the next sibling starts beyond new, so new's span of children ends here */
break;
}

new->parent = parent; /* new hangs under the node we were inserting into */
new->sibling = next->sibling; /* new's sibling is the first node beyond its range */
new->child = first; /* the resources that fall inside new become its children, starting at first */

next->sibling = NULL;
for (next = first; next; next = next->sibling)
next->parent = new;

if (parent->child == first) {
parent->child = new;
} else {
next = parent->child;
while (next->sibling != first)
next = next->sibling;
next->sibling = new;
}
return NULL;
}

Note:

1. The duplicated insertion warning above means the very same resource object (A) was inserted twice, whereas ((first->start == new->start) && (first->end == new->end)) means a different device is requesting a range that has already been claimed.

 

 

2. In the ((first->start > new->start) || (first->end < new->end)) check, the two conditions are OR'ed: new sticking out past the conflicting node on either side is enough to break out of the loop.

 

Next, starting again from state 2 above (A plus B2) and this time following __insert_resource, we analyze three more cases.

 

 

First, insert C1 [0xa0000000-0xbfffffff] again, this time through the higher-level function.

1. As before, __request_resource reports a conflict; the node it returns is A (0xa0000000-0xafffffff), the first node that intersects C1.

for (;; parent = first) {
first = __request_resource(parent, new);
if (!first)
return first;

if (first == parent)
return first;
if (WARN_ON(first == new)) /* duplicated insertion */
return first;
// first->start 0xa0000000 > new->start 0xa0000000? no; first->end 0xafffffff < new->end 0xbfffffff? yes -> break
if ((first->start > new->start) || (first->end < new->end))
break;
if ((first->start == new->start) && (first->end == new->end))
break;
}
2. The loop then breaks out because the third if is satisfied (first->end 0xafffffff < new->end 0xbfffffff); at this point parent = root and first = A.

3. The code below then walks the level starting at A. A is not a partial overlap (it lies entirely inside C1), and B2->start (0xc0000000) is greater than C1's end (0xbfffffff), so we break out. The insertion therefore succeeds: A becomes C1's child and B2 stays as C1's sibling, which is exactly the nesting described in the insert_resource_conflict() comment.

for (next = first; ; next = next->sibling) {
/* Partial overlap? Bad, and unfixable */
/* A (0xa0000000-0xafffffff) lies entirely inside C1, so this does not trigger */
if (next->start < new->start || next->end > new->end)
return next;
if (!next->sibling)
break;
if (next->sibling->start > new->end)
break;
}
 

 

 

Next, again starting from state 2 above (just A and B2), insert a new resource C2 (0xb0000000-0xbfffffff).

1. On the first pass through __request_resource, tmp->start (0xa0000000) > end (0xbfffffff) does not hold, so we move on to tmp->end (0xafffffff) < start (0xb0000000), which does hold, so we continue. Now tmp is B2, and since tmp->start (0xc0000000) > end (0xbfffffff), the new node can be inserted right here and NULL is returned. The result: root->child = A, A->sibling = C2, C2->sibling = B2.

 

 

 

 

 

接着,我們繼續上面的基礎上插入和C2資源相同的資源C2_【0xb000,0000~0xbfff,ffff】。

1. Enter __request_resource with root and C2_. p = &root->child, so tmp is A; tmp->start (0xa0000000) > end (0xbfffffff) does not hold, so we fall through, and this time tmp->end (0xafffffff) < start (0xb0000000) holds, so we continue. Now tmp is C2. Again tmp->start (0xb0000000) > end (0xbfffffff) does not hold, so we reach the last if; but because C2 and C2_ cover the same range, tmp->end < start is false too, and we return C2.

for (;;) {
tmp = *p;
if (!tmp || tmp->start > end) {
new->sibling = tmp;
*p = new;
new->parent = root;
return NULL;
}
p = &tmp->sibling;
if (tmp->end < start)
continue;
return tmp;
}
2. Back in __insert_resource this satisfies exactly the last condition below (equal start and equal end), so we break out of the for loop; at this point first is C2 and parent is root.

for (;; parent = first) {
first = __request_resource(parent, new);
if (!first)
return first;

if (first == parent)
return first;
if (WARN_ON(first == new)) /* duplicated insertion */
return first;

if ((first->start > new->start) || (first->end < new->end))
break;
if ((first->start == new->start) && (first->end == new->end))
break;
}
3. Now the second loop runs the comparisons. Because C2 and C2_ cover the same range, the partial-overlap if does not trigger; C2's sibling (B2) exists, so we reach the last check, and since B2->start (0xc0000000) is greater than C2_'s end (0xbfffffff) we can break out. At this point first is C2, next is C2, and parent is still root.

for (next = first; ; next = next->sibling) {
/* Partial overlap? Bad, and unfixable */
if (next->start < new->start || next->end > new->end)
return next;
if (!next->sibling)
break;
if (next->sibling->start > new->end)
break;
}
4. So first is C2, next is C2 and parent is root (in step 2 we left the first for loop through the equality check, so we never went around it again).

new->parent = parent; /* C2_->parent = root */
new->sibling = next->sibling; /* C2_->sibling = C2->sibling = B2 */
new->child = first; /* C2_->child = C2 */

next->sibling = NULL; /* next->sibling = C2->sibling = NULL */

/* the for loop below then runs as follows:
next = C2;
C2->parent = C2_;
next = C2->sibling = NULL (it was set to NULL three lines above);

loop exits
*/
for (next = first; next; next = next->sibling)
next->parent = new;

看一下執行到這里的數據結構是怎樣的


We can see that the new C2_ has become the parent of the old C2. A parent resource always spans at least as much as its children, and when an identical resource is added, the newly added one becomes the parent of the existing one. This matches what we observed earlier in /proc/iomem.

 

接下來我們把后面的代碼分析完。

5. parent->child is A while first is C2, so they are not equal here. (As an aside: they would be equal if the resource being inserted covered the same range as A.)

In that case the new node (call it A_) would have to become parent->child, so that the whole tree can still be traversed correctly afterwards.

if (parent->child == first) {
parent->child = new;
} else {
next = parent->child; /* start searching from A */
while (next->sibling != first) /* walk until we find first, i.e. C2 (here found on the first step) */
next = next->sibling;
next->sibling = new; /* A->sibling = C2_ */
}
return NULL;


As you can see, the result is a clean tree structure: starting from root, every node can be reached in order.

 

這里再對比我們之前的分析,做個小結:

1. The first time through __insert_resource, parent is root; __request_resource is called, and root->child is the lowest-addressed resource on that level (A in our example above).

2.之后檢查new的范圍,和是否要插入的在root范圍內,這一次因為root的start和end范圍很大,只要參數不傳錯都是可以成功的,但對其它層可能就會不滿足的,即返回傳入的節點root。

3.之后對child所屬的這一層(相同的parent為同一層,同層之間用sibling連接)從小到大查找是不是,如果小於本層的某個資源,且和本層的資源沒重疊,則直接插入,否則退出,返回(如果是沒找到,則返回的是本層的最大者的地址,否則如果是重疊了,則返回的是重疊的那個資源的地址)。

4. Back in __insert_resource: if the insertion succeeded (NULL came back), we are done. Otherwise the conflict is examined: if new sticks out past the conflicting node on either side, or matches it exactly, we break out of the loop; if new fits entirely inside the conflict, parent is set to that node and we go back to step 2, one level deeper.

5.之后檢查new,如果和本層的某一個資源部分重疊,則不能插入,結束。如果本層查找結束,或小於本層的某一個資源,則跳出。

6. At this point new fully contains at least one existing resource. new takes the place of those contained resources on their level, the contained resources are re-attached as new's children, and their parent pointers are all updated to point at new.

7.最后,把new的同層的前一個節點的sibling指向new

 

這里需要注意的是,加入我們掛接了某個重復資源myled,則之前的資源就掛載它的child里面了

 

And if we release the myled resource, the resources that were hanging underneath it disappear along with it.

 

 

A single resource can have several resources nested under it; after the module is unloaded they are all gone from /proc/iomem.

 

 

 

Building on this, what happens if we register a resource that only partially overlaps an existing one? The result: registration fails (insert_resource() returns -EBUSY and platform_device_add() prints "failed to claim resource").

 

 

最后,我對幾個常用的資源注冊函數進行分析

 

void __iomem *devm_ioremap_resource(struct device *dev, struct resource *res);
void __iomem *devm_request_and_ioremap(struct device *dev,
struct resource *res);

void __iomem *devm_request_and_ioremap(struct device *dev,
struct resource *res)
{
void __iomem *dest_ptr;

dest_ptr = devm_ioremap_resource(dev, res);
if (IS_ERR(dest_ptr))
return NULL;

return dest_ptr;
}
 

 

/**
* devm_ioremap_resource() - check, request region, and ioremap resource
* @dev: generic device to handle the resource for
* @res: resource to be handled
*
* Checks that a resource is a valid memory region, requests the memory region
* and ioremaps it either as cacheable or as non-cacheable memory depending on
* the resource's flags. All operations are managed and will be undone on
* driver detach.
*
* Returns a pointer to the remapped memory or an ERR_PTR() encoded error code
* on failure. Usage example:
*
* res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
* base = devm_ioremap_resource(&pdev->dev, res);
* if (IS_ERR(base))
* return PTR_ERR(base);
*/
void __iomem *devm_ioremap_resource(struct device *dev, struct resource *res)
{
resource_size_t size;
const char *name;
void __iomem *dest_ptr;

BUG_ON(!dev);

/* on ARM we normally only ioremap MEM resources; bail out for anything else */
if (!res || resource_type(res) != IORESOURCE_MEM) {
dev_err(dev, "invalid resource\n");
return IOMEM_ERR_PTR(-EINVAL);
}

size = resource_size(res); /* size of the resource */
name = res->name ?: dev_name(dev); /* an unnamed resource takes the device name */

if (!devm_request_mem_region(dev, res->start, size, name)) { /* request (claim) the region */
dev_err(dev, "can't request region for resource %pR\n", res);
return IOMEM_ERR_PTR(-EBUSY);
}

/* ioremap: map the physical range to kernel virtual addresses */
if (res->flags & IORESOURCE_CACHEABLE)
dest_ptr = devm_ioremap(dev, res->start, size);
else
dest_ptr = devm_ioremap_nocache(dev, res->start, size);

if (!dest_ptr) {
dev_err(dev, "ioremap failed for resource %pR\n", res);
devm_release_mem_region(dev, res->start, size);
dest_ptr = IOMEM_ERR_PTR(-ENOMEM);
}

return dest_ptr;
}


/* size of an I/O resource */
static inline resource_size_t resource_size(const struct resource *res)
{
return res->end - res->start + 1;
}
 

#define devm_request_mem_region(dev,start,n,name) \
__devm_request_region(dev, &iomem_resource, (start), (n), (name))

/*
* Managed region resource
*/
struct region_devres {
struct resource *parent;
resource_size_t start;
resource_size_t n;
};


struct resource * __devm_request_region(struct device *dev,
struct resource *parent, resource_size_t start,
resource_size_t n, const char *name)
{
struct region_devres *dr = NULL;
struct resource *res;

/* allocate a devres entry to track this region (devm_region_release is the release callback) */
dr = devres_alloc(devm_region_release, sizeof(struct region_devres),
GFP_KERNEL);
if (!dr)
return NULL;

/* record what we are about to request */
dr->parent = parent;
dr->start = start;
dr->n = n;

/* actually request the region */
res = __request_region(parent, start, n, name, 0);
if (res)
devres_add(dev, dr); /* attach the devres entry to the device */
else
devres_free(dr);

return res;
}



static DECLARE_WAIT_QUEUE_HEAD(muxed_resource_wait);

/**
* __request_region - create a new busy resource region
* @parent: parent resource descriptor
* @start: resource start address
* @n: resource region size
* @name: reserving caller's ID string
* @flags: IO resource flags
*/
struct resource * __request_region(struct resource *parent,
resource_size_t start, resource_size_t n,
const char *name, int flags)
{
DECLARE_WAITQUEUE(wait, current);
struct resource *res = alloc_resource(GFP_KERNEL);

if (!res)
return NULL;

/* fill in the new resource */
res->name = name;
res->start = start;
res->end = start + n - 1;
res->flags = resource_type(parent);
res->flags |= IORESOURCE_BUSY | flags; /* note that BUSY is set by default */

write_lock(&resource_lock);

for (;;) {
struct resource *conflict;

/* try to insert (as analyzed earlier, NULL means it went in) */
conflict = __request_resource(parent, res);
if (!conflict)
break;
if (conflict != parent) { /* the conflict lies inside the range we asked about: if it is not busy, descend into it and retry; if it is busy, fall through and possibly wait */
if (!(conflict->flags & IORESOURCE_BUSY)) {
parent = conflict;
continue;
}
}
/* if this region is muxed between several users, put ourselves on the wait queue and sleep until it becomes free */
if (conflict->flags & flags & IORESOURCE_MUXED) {
add_wait_queue(&muxed_resource_wait, &wait);
write_unlock(&resource_lock);
set_current_state(TASK_UNINTERRUPTIBLE);
schedule(); /* give up the CPU until we are woken */
remove_wait_queue(&muxed_resource_wait, &wait); /* woken up: the region is free again, leave the wait queue */
write_lock(&resource_lock);
continue;
}
/* Uhhuh, that didn't work out.. */
free_resource(res);
res = NULL; /* returning NULL means the request failed */
break;
}
write_unlock(&resource_lock);
return res; /* return the resource we obtained (or NULL) */
}
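The IORESOURCE_MUXED branch above is what request_muxed_region() relies on; a hypothetical sketch of a Super-I/O style user (the port number and register meaning are made up):

#include <linux/ioport.h>
#include <linux/io.h>

#define SIO_BASE 0x2e   /* hypothetical shared index/data port pair */

static int sio_read(u8 reg, u8 *val)
{
    /* sleeps until any other IORESOURCE_MUXED holder releases the ports */
    if (!request_muxed_region(SIO_BASE, 2, "my-sio"))
        return -EBUSY;

    outb(reg, SIO_BASE);        /* select a register via the index port */
    *val = inb(SIO_BASE + 1);   /* read it back through the data port */

    release_region(SIO_BASE, 2);
    return 0;
}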

devres_add() links the entry into the device's devres list, so when the device is unregistered the resource, and the memory used to track it, are released automatically.

void devres_add(struct device *dev, void *res)
{
struct devres *dr = container_of(res, struct devres, data);
unsigned long flags;

spin_lock_irqsave(&dev->devres_lock, flags);
add_dr(dev, &dr->node);
spin_unlock_irqrestore(&dev->devres_lock, flags);
}

static void add_dr(struct device *dev, struct devres_node *node)
{
devres_log(dev, node, "ADD");
BUG_ON(!list_empty(&node->entry));
list_add_tail(&node->entry, &dev->devres_head);
}
Note, looking at __devm_request_region once more, that the release function is bound at request time (devres_alloc(devm_region_release, ...)); when the device is later torn down, the resource is released through that callback.

struct resource * __devm_request_region(struct device *dev,
struct resource *parent, resource_size_t start,
resource_size_t n, const char *name)
{
struct region_devres *dr = NULL;
struct resource *res;

dr = devres_alloc(devm_region_release, sizeof(struct region_devres),
GFP_KERNEL);
if (!dr)
return NULL;

dr->parent = parent;
dr->start = start;
dr->n = n;

res = __request_region(parent, start, n, name, 0);
if (res)
devres_add(dev, dr);
else
devres_free(dr);

return res;

}

————————————————
Copyright notice: this is an original article by CSDN blogger "to_run_away", licensed under CC 4.0 BY-SA; please include the original link and this notice when reposting.
Original link: https://blog.csdn.net/qq_16777851/article/details/82975057

