Linux Memory Management (8): malloc


Series: Linux Memory Management

Keywords: malloc, brk, VMA, VM_LOCKED, normal page, special page


The malloc() function is a core function wrapped by the C library; the system call behind it is brk().
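As a quick user-space illustration (a minimal sketch; the printed addresses vary with ASLR and the libc in use), the program break that brk() adjusts can be observed with sbrk(0):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    void *before = sbrk(0);              /* current program break */
    if (sbrk(4096) == (void *)-1)        /* grow the heap by one page, i.e. one brk() call */
        return 1;
    void *after = sbrk(0);
    printf("program break: %p -> %p\n", before, after);
    return 0;
}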

1. brk implementation

To understand how brk is implemented, we first need to know how the process's user address space is laid out, and which fields of struct mm_struct describe the code segment, the data segment and the heap.

brk is also built on VMAs: it finds a suitable range of virtual address space, creates a new VMA, and inserts it into the VMA red-black tree and linked list.

First, look at the code-segment and data-segment fields in mm_struct, together with the Linux memory management overview diagram.

Since the stack grows towards lower addresses and the heap grows towards higher addresses, the stack's start address start_stack and the heap's end address brk change over time. Between the stack and the heap lies the mmap mapping region.

struct mm_struct {
...
    unsigned long start_code, end_code, start_data, end_data;-----the code segment runs from start_code to end_code; the data segment from start_data to end_data.
    unsigned long start_brk, brk, start_stack;--------------------the heap starts at start_brk and currently ends at brk; the stack starts at start_stack.
    unsigned long arg_start, arg_end, env_start, env_end;---------start and end addresses of the argument list and the environment; both regions sit at the very top of the stack.
...
}
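From user space, the [start_brk, brk) range shows up as the [heap] line in /proc/self/maps. A minimal sketch that prints it (the heap exists here because fopen itself allocates from it):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The "[heap]" region corresponds to [mm->start_brk, mm->brk). */
    FILE *f = fopen("/proc/self/maps", "r");
    char line[256];

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        if (strstr(line, "[heap]"))
            fputs(line, stdout);
    fclose(f);
    return 0;
}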

malloc is an interface implemented by libc; it allocates memory mainly through the sys_brk system call.

SYSCALL_DEFINE1(brk, unsigned long, brk)
{
    unsigned long retval;
    unsigned long newbrk, oldbrk;
    struct mm_struct *mm = current->mm;
    unsigned long min_brk;
    bool populate;

    down_write(&mm->mmap_sem);

#ifdef CONFIG_COMPAT_BRK
    /*
     * CONFIG_COMPAT_BRK can still be overridden by setting
     * randomize_va_space to 2, which will still cause mm->start_brk
     * to be arbitrarily shifted
     */
    if (current->brk_randomized)
        min_brk = mm->start_brk;
    else
        min_brk = mm->end_data;---------------determine the lowest permitted heap address min_brk.
#else
    min_brk = mm->start_brk;
#endif
    if (brk < min_brk)-----------------------the requested brk is invalid: it falls inside the data segment.
        goto out;

    /*
     * Check against rlimit here. If this check is done later after the test
     * of oldbrk with newbrk then it can escape the test and let the data
     * segment grow beyond its set limit the in case where the limit is
     * not page aligned -Ram Gupta
     */
    if (check_data_rlimit(rlimit(RLIMIT_DATA), brk, mm->start_brk,
                  mm->end_data, mm->start_data))---------------if RLIMIT_DATA is not RLIM_INFINITY, ensure the data segment plus the brk area does not exceed RLIMIT_DATA.
        goto out;

    newbrk = PAGE_ALIGN(brk);
    oldbrk = PAGE_ALIGN(mm->brk);------------------------------page-align both addresses.
    if (oldbrk == newbrk)
        goto set_brk;

    /* Always allow shrinking brk. */
    if (brk <= mm->brk) {
        if (!do_munmap(mm, newbrk, oldbrk-newbrk))-------------brk decreased: the heap is shrinking, so release that part with do_munmap.
            goto set_brk;
        goto out;
    }

    /* Check against existing mmap mappings. */
    if (find_vma_intersection(mm, oldbrk, newbrk+PAGE_SIZE))---look for a VMA intersecting the requested range; if one exists, the address space past the old boundary is already in use, so give up.
        goto out;

    /* Ok, looks good - let it rip. */
    if (do_brk(oldbrk, newbrk-oldbrk) != oldbrk)---------------reserve the virtual address space.
        goto out;

set_brk:
    mm->brk = brk;
    populate = newbrk > oldbrk && (mm->def_flags & VM_LOCKED) != 0;---check whether the flags contain VM_LOCKED, normally set by mlockall.
    up_write(&mm->mmap_sem);
    if (populate)
        mm_populate(oldbrk, newbrk - oldbrk);-------------------------allocate physical pages immediately.
    return brk;

out:
    retval = mm->brk;
    up_write(&mm->mmap_sem);
    return retval;
}

do_brk performs anonymous mappings only, reserving len bytes of virtual address space starting at addr.

do_brk first checks whether there is enough virtual address space, then looks for a VMA insertion point and checks whether the range can be merged into a neighbouring VMA. If no merge is possible, it creates a new VMA and links it into mm->mmap.

/*
 *  this is really a simplified "do_mmap".  it only handles
 *  anonymous maps.  eventually we may be able to do some
 *  brk-specific accounting here.
 */
static unsigned long do_brk(unsigned long addr, unsigned long len)
{
    struct mm_struct *mm = current->mm;
    struct vm_area_struct *vma, *prev;
    unsigned long flags;
    struct rb_node **rb_link, *rb_parent;
    pgoff_t pgoff = addr >> PAGE_SHIFT;
    int error;

    len = PAGE_ALIGN(len);
    if (!len)
        return addr;

    flags = VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;

    error = get_unmapped_area(NULL, addr, len, 0, MAP_FIXED);---------------------------check whether there is enough virtual address space; this code is tightly coupled to the architecture.
    if (error & ~PAGE_MASK)
        return error;

    error = mlock_future_check(mm, mm->def_flags, len);
    if (error)
        return error;

    /*
     * mm->mmap_sem is required to protect against another thread
     * changing the mappings in case we sleep.
     */
    verify_mm_writelocked(mm);

    /*
     * Clear old maps.  this also does some error checking for us
     */
 munmap_back:
    if (find_vma_links(mm, addr, addr + len, &prev, &rb_link, &rb_parent)) {----------walk the process's VMA red-black tree to find a suitable insertion point for addr.
        if (do_munmap(mm, addr, len))
            return -ENOMEM;
        goto munmap_back;
    }

    /* Check against address space limits *after* clearing old maps... */
    if (!may_expand_vm(mm, len >> PAGE_SHIFT))
        return -ENOMEM;

    if (mm->map_count > sysctl_max_map_count)
        return -ENOMEM;

    if (security_vm_enough_memory_mm(mm, len >> PAGE_SHIFT))
        return -ENOMEM;

    /* Can we just expand an old private anonymous mapping? */
    vma = vma_merge(mm, prev, addr, addr + len, flags,------------------------------see whether the range can be merged with a VMA adjacent to addr.
                    NULL, NULL, pgoff, NULL);
    if (vma)
        goto out;

    /*
     * create a vma struct for an anonymous mapping
     */
    vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);----------------------------no merge is possible, so create a new VMA covering [addr, addr+len).
    if (!vma) {
        vm_unacct_memory(len >> PAGE_SHIFT);
        return -ENOMEM;
    }

    INIT_LIST_HEAD(&vma->anon_vma_chain);
    vma->vm_mm = mm;
    vma->vm_start = addr;
    vma->vm_end = addr + len;
    vma->vm_pgoff = pgoff;
    vma->vm_flags = flags;
    vma->vm_page_prot = vm_get_page_prot(flags);
    vma_link(mm, vma, prev, rb_link, rb_parent);------------------------------------link the new VMA into the mm->mmap list and the red-black tree.
out:
    perf_event_mmap(vma);
    mm->total_vm += len >> PAGE_SHIFT;
    if (flags & VM_LOCKED)
        mm->locked_vm += (len >> PAGE_SHIFT);
    vma->vm_flags |= VM_SOFTDIRTY;
    return addr;
}

From arch_pick_mmap_layout we can see that current->mm->get_unmapped_area corresponds to arch_get_unmapped_area_topdown (the top-down mmap layout), so the get_unmapped_area call above ends up in arch_get_unmapped_area_topdown:

unsigned long
arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
            const unsigned long len, const unsigned long pgoff,
            const unsigned long flags)
{
...
    info.flags = VM_UNMAPPED_AREA_TOPDOWN;
    info.length = len;
    info.low_limit = FIRST_USER_ADDRESS;
    info.high_limit = mm->mmap_base;
    info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
    info.align_offset = pgoff << PAGE_SHIFT;
    addr = vm_unmapped_area(&info);

    /*
     * A failed mmap() very likely causes application failure,
     * so fall back to the bottom-up function here. This scenario
     * can happen with large stack limits and large mmap()
     * allocations.
     */
    if (addr & ~PAGE_MASK) {
        VM_BUG_ON(addr != -ENOMEM);
        info.flags = 0;
        info.low_limit = mm->mmap_base;
        info.high_limit = TASK_SIZE;
        addr = vm_unmapped_area(&info);
    }

    return addr;
}

  

2. The VM_LOCKED case

When the VM_LOCKED flag is set, physical pages must be allocated and mapped for this range of the process's virtual address space right away.
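In user space this path is normally reached through mlockall(), which sets VM_LOCKED in mm->def_flags so that subsequent brk growth is populated immediately. A minimal sketch (it needs CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK):

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    /* MCL_FUTURE marks future mappings, including heap growth via brk,
     * as VM_LOCKED, so the brk syscall above ends up calling mm_populate(). */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }
    char *p = malloc(64 * 1024);   /* should already be backed by physical pages */
    if (p)
        p[0] = 1;                  /* no demand fault expected on locked memory */
    munlockall();
    free(p);
    return 0;
}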

mm_populate calls __mm_populate to fault in the pages, with ignore_errors set.

/*
 * __mm_populate - populate and/or mlock pages within a range of address space.
 *
 * This is used to implement mlock() and the MAP_POPULATE / MAP_LOCKED mmap
 * flags. VMAs must be already marked with the desired vm_flags, and
 * mmap_sem must not be held.
 */
int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
{
    struct mm_struct *mm = current->mm;
    unsigned long end, nstart, nend;
    struct vm_area_struct *vma = NULL;
    int locked = 0;
    long ret = 0;

    VM_BUG_ON(start & ~PAGE_MASK);
    VM_BUG_ON(len != PAGE_ALIGN(len));
    end = start + len;

    for (nstart = start; nstart < end; nstart = nend) {----------------------------starting from start, look up each VMA with find_vma().
        /*
         * We want to fault in pages for [nstart; end) address range.
         * Find first corresponding VMA.
         */
        if (!locked) {
            locked = 1;
            down_read(&mm->mmap_sem);
            vma = find_vma(mm, nstart);
        } else if (nstart >= vma->vm_end)
            vma = vma->vm_next;
        if (!vma || vma->vm_start >= end)
            break;
        /*
         * Set [nstart; nend) to intersection of desired address
         * range with the first VMA. Also, skip undesirable VMA types.
         */
        nend = min(end, vma->vm_end);
        if (vma->vm_flags & (VM_IO | VM_PFNMAP))
            continue;
        if (nstart < vma->vm_start)
            nstart = vma->vm_start;
        /*
         * Now fault in a range of pages. __mlock_vma_pages_range()
         * double checks the vma flags, so that it won't mlock pages
         * if the vma was already munlocked.
         */
        ret = __mlock_vma_pages_range(vma, nstart, nend, &locked);------------------allocate physical memory for the vma.
        if (ret < 0) {
            if (ignore_errors) {
                ret = 0;
                continue;    /* continue at next VMA */
            }
            ret = __mlock_posix_error_return(ret);
            break;
        }
        nend = nstart + ret * PAGE_SIZE;
        ret = 0;
    }
    if (locked)
        up_read(&mm->mmap_sem);
    return ret;    /* 0 or negative error code */
}

 

__mlock_vma_pages_range allocates physical pages for the given virtual address range of the vma:

/**
 * __mlock_vma_pages_range() -  mlock a range of pages in the vma.
 * @vma:   target vma
 * @start: start address
 * @end:   end address
 * @nonblocking:
 *
 * This takes care of making the pages present too.
 *
 * return 0 on success, negative error code on error.
 *
 * vma->vm_mm->mmap_sem must be held.
 *
 * If @nonblocking is NULL, it may be held for read or write and will
 * be unperturbed.
 *
 * If @nonblocking is non-NULL, it must held for read only and may be
 * released.  If it's released, *@nonblocking will be set to 0.
 */
long __mlock_vma_pages_range(struct vm_area_struct *vma,
        unsigned long start, unsigned long end, int *nonblocking)
{
    struct mm_struct *mm = vma->vm_mm;
    unsigned long nr_pages = (end - start) / PAGE_SIZE;
    int gup_flags;

    VM_BUG_ON(start & ~PAGE_MASK);
    VM_BUG_ON(end   & ~PAGE_MASK);
    VM_BUG_ON_VMA(start < vma->vm_start, vma);
    VM_BUG_ON_VMA(end   > vma->vm_end, vma);
    VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_sem), mm);------------------------sanity checks.

    gup_flags = FOLL_TOUCH | FOLL_MLOCK;
    /*
     * We want to touch writable mappings with a write fault in order
     * to break COW, except for shared mappings because these don't COW
     * and we would not want to dirty them for nothing.
     */
    if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
        gup_flags |= FOLL_WRITE;

    /*
     * We want mlock to succeed for regions that have any permissions
     * other than PROT_NONE.
     */
    if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC))
        gup_flags |= FOLL_FORCE;

    /*
     * We made sure addr is within a VMA, so the following will
     * not result in a stack expansion that recurses back here.
     */
    return __get_user_pages(current, mm, start, nr_pages, gup_flags,----------allocate physical memory for the process address space and establish the mappings.
                NULL, NULL, nonblocking);
}

 

The gup_flags masks are defined as follows:

#define FOLL_WRITE    0x01    /* check pte is writable */
#define FOLL_TOUCH    0x02    /* mark page accessed */
#define FOLL_GET    0x04    /* do get_page on page */
#define FOLL_DUMP    0x08    /* give error on hole if it would be zero */
#define FOLL_FORCE    0x10    /* get_user_pages read/write w/o permission */
#define FOLL_NOWAIT    0x20    /* if a disk transfer is needed, start the IO
                 * and return without waiting upon it */
#define FOLL_MLOCK    0x40    /* mark page as mlocked */
#define FOLL_SPLIT    0x80    /* don't return transhuge pages, split them */
#define FOLL_HWPOISON    0x100    /* check page is hwpoisoned */
#define FOLL_NUMA    0x200    /* force NUMA hinting page fault */
#define FOLL_MIGRATION    0x400    /* wait for page to replace migration entry */
#define FOLL_TRIED    0x800    /* a retry, previous pass started an IO */

 

__get_user_pages is a very important memory allocation function; it allocates physical memory for user address space.

/**
 * __get_user_pages() - pin user pages in memory
 * @tsk:    task_struct of target task
 * @mm:        mm_struct of target mm
 * @start:    starting user address
 * @nr_pages:    number of pages from start to pin
 * @gup_flags:    flags modifying pin behaviour
 * @pages:    array that receives pointers to the pages pinned.
 *        Should be at least nr_pages long. Or NULL, if caller
 *        only intends to ensure the pages are faulted in.
 * @vmas:    array of pointers to vmas corresponding to each page.
 *        Or NULL if the caller does not require them.
 * @nonblocking: whether waiting for disk IO or mmap_sem contention
 *
 * Returns number of pages pinned. This may be fewer than the number
 * requested. If nr_pages is 0 or negative, returns 0. If no pages
 * were pinned, returns -errno. Each page returned must be released
 * with a put_page() call when it is finished with. vmas will only
 * remain valid while mmap_sem is held.
 *
 * Must be called with mmap_sem held.  It may be released.  See below.
 *
 * __get_user_pages walks a process's page tables and takes a reference to
 * each struct page that each user address corresponds to at a given
 * instant. That is, it takes the page that would be accessed if a user
 * thread accesses the given user virtual address at that instant.
 *
 * This does not guarantee that the page exists in the user mappings when
 * __get_user_pages returns, and there may even be a completely different
 * page there in some cases (eg. if mmapped pagecache has been invalidated
 * and subsequently re faulted). However it does guarantee that the page
 * won't be freed completely. And mostly callers simply care that the page
 * contains data that was valid *at some point in time*. Typically, an IO
 * or similar operation cannot guarantee anything stronger anyway because
 * locks can't be held over the syscall boundary.
 *
 * If @gup_flags & FOLL_WRITE == 0, the page must not be written to. If
 * the page is written to, set_page_dirty (or set_page_dirty_lock, as
 * appropriate) must be called after the page is finished with, and
 * before put_page is called.
 *
 * If @nonblocking != NULL, __get_user_pages will not wait for disk IO
 * or mmap_sem contention, and if waiting is needed to pin all pages,
 * *@nonblocking will be set to 0.  Further, if @gup_flags does not
 * include FOLL_NOWAIT, the mmap_sem will be released via up_read() in
 * this case.
 *
 * A caller using such a combination of @nonblocking and @gup_flags
 * must therefore hold the mmap_sem for reading only, and recognize
 * when it's been released.  Otherwise, it must be held for either
 * reading or writing and will not be released.
 *
 * In most cases, get_user_pages or get_user_pages_fast should be used
 * instead of __get_user_pages. __get_user_pages should be used only if
 * you need some special @gup_flags.
 */
long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
        unsigned long start, unsigned long nr_pages,
        unsigned int gup_flags, struct page **pages,
        struct vm_area_struct **vmas, int *nonblocking)
{
    long i = 0;
    unsigned int page_mask;
    struct vm_area_struct *vma = NULL;

    if (!nr_pages)
        return 0;

    VM_BUG_ON(!!pages != !!(gup_flags & FOLL_GET));

    /*
     * If FOLL_FORCE is set then do not force a full fault as the hinting
     * fault information is unrelated to the reference behaviour of a task
     * using the address space
     */
    if (!(gup_flags & FOLL_FORCE))
        gup_flags |= FOLL_NUMA;

    do {
        struct page *page;
        unsigned int foll_flags = gup_flags;
        unsigned int page_increm;

        /* first iteration or cross vma bound */
        if (!vma || start >= vma->vm_end) {
            vma = find_extend_vma(mm, start);------------------------------look up the VMA; if vma->vm_start lies above the lookup address start, try to extend the vma so that vma->vm_start is pushed down to start.
            if (!vma && in_gate_area(mm, start)) {
                int ret;
                ret = get_gate_page(mm, start & PAGE_MASK,
                        gup_flags, &vma,
                        pages ? &pages[i] : NULL);
                if (ret)
                    return i ? : ret;
                page_mask = 0;
                goto next_page;
            }

            if (!vma || check_vma_flags(vma, gup_flags))
                return i ? : -EFAULT;
            if (is_vm_hugetlb_page(vma)) {
                i = follow_hugetlb_page(mm, vma, pages, vmas,
                        &start, &nr_pages, i,
                        gup_flags);
                continue;
            }
        }
retry:
        /*
         * If we have a pending SIGKILL, don't keep faulting pages and
         * potentially allocating memory.
         */
        if (unlikely(fatal_signal_pending(current)))-----------------------a pending SIGKILL means there is no point continuing to fault in pages and allocate memory; bail out.
            return i ? i : -ERESTARTSYS;
        cond_resched();----------------------------------------------------give the scheduler a chance to run if a reschedule is due.
        page = follow_page_mask(vma, start, foll_flags, &page_mask);-------check whether this virtual address in the vma already has physical memory mapped.
        if (!page) {
            int ret;
            ret = faultin_page(tsk, vma, start, &foll_flags,
                    nonblocking);
            switch (ret) {
            case 0:
                goto retry;
            case -EFAULT:
            case -ENOMEM:
            case -EHWPOISON:
                return i ? i : ret;
            case -EBUSY:
                return i;
            case -ENOENT:
                goto next_page;
            }
            BUG();
        }
        if (IS_ERR(page))
            return i ? i : PTR_ERR(page);
        if (pages) {-------------------------------------------------------flush the caches associated with the page.
            pages[i] = page;
            flush_anon_page(vma, page, start);
            flush_dcache_page(page);
            page_mask = 0;
        }
next_page:
        if (vmas) {
            vmas[i] = vma;
            page_mask = 0;
        }
        page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);
        if (page_increm > nr_pages)
            page_increm = nr_pages;
        i += page_increm;
        start += page_increm * PAGE_SIZE;
        nr_pages -= page_increm;
    } while (nr_pages);
    return i;
}

 

2.1 Normal pages and special pages

vm_normal_page() returns the struct page of a normally mapped page based on its pte.

Pages with certain special mappings have no struct page returned for them: such pages are not meant to participate in memory-management activities such as page reclaim, page migration and KSM.

The kernel sets the software-defined PTE_SPECIAL bit with the pte_mkspecial() macro. Its main users are:

  • the kernel's zero page
  • the many drivers that use remap_pfn_range() to map kernel pages into user space; the VMAs used by these programs typically have (VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP) set
  • vm_insert_page()/vm_insert_pfn(), which map kernel pages into user space

vm_normal_page() divides pages into two camps: normal pages and special pages.

  1. Normal pages are ordinarily mapped pages, such as anonymous pages, page-cache pages and shared-memory pages.
  2. Special pages are unusually mapped pages that should not participate in reclaim or merging; a condensed sketch of the check follows this list. Examples:
    • VM_IO: mappings of I/O device memory
    • VM_PFNMAP: pure PFN mappings with no struct page behind them
    • VM_MIXEDMAP: mixed mappings that may contain both struct-page-backed and raw PFN entries
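For reference, here is the pte_special() path of vm_normal_page(), abridged from mm/memory.c of kernels from this era (the !HAVE_PTE_SPECIAL variant and some error handling are omitted; details vary slightly across versions):

struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
                pte_t pte)
{
    unsigned long pfn = pte_pfn(pte);

    if (HAVE_PTE_SPECIAL) {
        if (likely(!pte_special(pte)))
            goto check_pfn;
        if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
            return NULL;                       /* special: no struct page */
        if (!is_zero_pfn(pfn))
            print_bad_pte(vma, addr, pte, NULL);
        return NULL;
    }
    /* ... !HAVE_PTE_SPECIAL: special-ness is inferred from vm_flags ... */
check_pfn:
    if (unlikely(pfn > highest_memmap_pfn)) {
        print_bad_pte(vma, addr, pte, NULL);
        return NULL;
    }
    if (is_zero_pfn(pfn))
        return NULL;                           /* the zero page is special too */
    return pfn_to_page(pfn);                   /* a normal page */
}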

3. malloc call flow

The key call chain is:

get_user_pages -> follow_page -> vm_normal_page

4. A malloc/free example with the C library

The system calls behind malloc and free in the C library are brk/mmap and munmap.

brk allocates memory in the heap region; mmap allocates memory between the heap and the stack.

The following example illustrates nicely how the heap grows and shrinks, and when the mmap region is used, for malloc and free calls of different sizes.

 

Case 1: for allocations below 128K, malloc uses brk, pushing _edata towards higher addresses (only virtual space is allocated, with no physical memory behind it and therefore no initialisation; the first read/write triggers a page fault, and only then does the kernel allocate the physical memory and establish the virtual-to-physical mapping), as shown below:


 

1. When the process starts, its (virtual) memory space has the initial layout shown in figure 1.
      The mmap region for mapped files (e.g. libc-2.2.93.so and other data files) lies between the heap and the stack; for simplicity, mapped files are omitted from the figures.
      The _edata pointer (defined inside glibc) points to the highest address of the data segment.
2. After the process calls A = malloc(30K), the memory space looks like figure 2:
      malloc issues the brk system call, pushing _edata up by 30K; that completes the virtual memory allocation.
      You might ask: is moving _edata up by 30K really all it takes?
      Indeed: _edata+30K only allocates virtual addresses. Block A still has no physical pages behind it. Only when the process first reads or writes A does a page fault occur, at which point the kernel allocates the physical pages for A.
      If A is allocated with malloc and never accessed, its physical pages are never allocated.
3. After the process calls B = malloc(40K), the memory space looks like figure 3.

 

Case 2: for allocations above 128K, malloc uses mmap, finding a free block between the heap and the stack (an independent region, initialised to zero), as shown below:

 

4. After the process calls C = malloc(200K), the memory space looks like figure 4:
      By default, when a malloc request exceeds 128K (tunable via M_MMAP_THRESHOLD), the allocator no longer pushes the _edata pointer; instead it uses the mmap system call to allocate a block of virtual memory between the heap and the stack.
      The main reason is this:
      memory allocated with brk can only be returned to the kernel once all memory above it has been freed (e.g. A cannot be released before B is; this is how heap fragmentation arises, and trimming is described below), whereas memory allocated with mmap can be released on its own.
      There are further pros and cons; interested readers can study the malloc source in glibc.
5. After the process calls D = malloc(100K), the memory space looks like figure 5.
6. After the process calls free(C), C's virtual memory and physical memory are released together.

 

 

7. After the process calls free(B), as shown in figure 7:
        B's virtual and physical memory are not released, because there is only the single _edata pointer; if it were pulled back, what would happen to D?
        B's memory can of course be reused: if another 40K request arrives now, malloc will very likely hand B straight back.
8. After the process calls free(D), as shown in figure 8:
        B and D join up into one 140K free block.
9. By default:
        when the free memory at the top of the heap exceeds 128K (tunable via M_TRIM_THRESHOLD), the allocator performs a trim operation.
        During the free in the previous step it notices that the free memory at the top exceeds 128K, so it trims the heap, giving figure 9.

These steps show clearly how malloc affects the brk heap and the mmap region of the process address space.
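The behaviour above can be reproduced with a small test program (a sketch; the exact thresholds are tunable with mallopt(M_MMAP_THRESHOLD, ...) and vary across glibc versions):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *brk0 = sbrk(0);               /* program break before any allocation */

    char *a = malloc(30 * 1024);        /* small: served from the brk heap */
    char *b = malloc(40 * 1024);
    printf("break grew by %ld bytes\n", (long)((char *)sbrk(0) - (char *)brk0));

    char *c = malloc(200 * 1024);       /* > M_MMAP_THRESHOLD: served by mmap */
    printf("a=%p b=%p (heap)  c=%p (mmap region)\n",
           (void *)a, (void *)b, (void *)c);

    free(c);                            /* munmap: returned to the kernel at once */
    free(b);                            /* kept on the allocator's free lists... */
    free(a);                            /* ...the top of the heap is trimmed only
                                           once it exceeds M_TRIM_THRESHOLD */
    return 0;
}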

 

