Commit 64c353864e3f7ccba0ade1bd6f562f9a3bc7e68d

Authored by Linus Torvalds

Merge branch 'for-v3.12' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping

Pull DMA mapping update from Marek Szyprowski:
 "This contains an addition of Device Tree support for reserved memory
  regions (Contiguous Memory Allocator is one of the drivers for it) and
  changes required by the KVM extensions for PowerPC architecture"

* 'for-v3.12' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
  ARM: init: add support for reserved memory defined by device tree
  drivers: of: add initialization code for dma reserved memory
  drivers: of: add function to scan fdt nodes given by path
  drivers: dma-contiguous: clean source code and prepare for device tree

Showing 15 changed files

Documentation/devicetree/bindings/memory.txt
  1 +*** Memory binding ***
  2 +
  3 +The /memory node provides basic information about the address and size
  4 +of the physical memory. This node is usually filled or updated by the
  5 +bootloader, depending on the actual memory configuration of the given
  6 +hardware.
  7 +
  8 +The memory layout is described by the following node:
  9 +
  10 +/ {
  11 + #address-cells = <(n)>;
  12 + #size-cells = <(m)>;
  13 + memory {
  14 + device_type = "memory";
  15 + reg = <(baseaddr1) (size1)
  16 + (baseaddr2) (size2)
  17 + ...
  18 + (baseaddrN) (sizeN)>;
  19 + };
  20 + ...
  21 +};
  22 +
  23 +A memory node follows the typical device tree rules for the "reg" property:
  24 +n: number of cells used to store base address value
  25 +m: number of cells used to store size value
  26 +baseaddrX: defines a base address of the defined memory bank
  27 +sizeX: the size of the defined memory bank
  28 +
  29 +
  30 +More than one memory bank can be defined.
  31 +
  32 +
  33 +*** Reserved memory regions ***
  34 +
  35 +In the /memory/reserved-memory node one can create child nodes describing
  36 +particular reserved (excluded from normal use) memory regions. Such
  37 +memory regions are usually intended for special use by various device
  38 +drivers. Typical examples are contiguous memory allocations and memory
  39 +shared with another operating system running on the same board. These
  40 +special memory regions may depend on the board configuration and the
  41 +devices used on the target system.
  42 +
  43 +Parameters for each memory region can be encoded into the device tree
  44 +with the following convention:
  45 +
  46 +[(label):] (name) {
  47 + compatible = "linux,contiguous-memory-region", "reserved-memory-region";
  48 + reg = <(address) (size)>;
  49 + (linux,default-contiguous-region);
  50 +};
  51 +
  52 +compatible: one or more of:
  53 + - "linux,contiguous-memory-region" - enables binding of this
  54 + region to Contiguous Memory Allocator (special region for
  55 + contiguous memory allocations, shared with movable system
  56 + memory, Linux kernel-specific).
  57 + - "reserved-memory-region" - if this compatibility is defined,
  58 + the given region is assigned for exclusive use by the
  59 + respective devices.
  60 +
  61 +reg: standard property defining the base address and size of
  62 + the memory region
  63 +
  64 +linux,default-contiguous-region: property indicating that the region
  65 + is the default region for all contiguous memory
  66 + allocations, Linux specific (optional)
  67 +
  68 +Specifying the base address is optional. To use autoconfiguration of
  69 +the base address, specify '0' as the base address in the 'reg'
  70 +property.
  71 +
  72 +The /memory/reserved-memory node must contain the same #address-cells
  73 +and #size-cells value as the root node.
  74 +
  75 +
  76 +*** Device node's properties ***
  77 +
  78 +Once regions in the /memory/reserved-memory node have been defined, they
  79 +may be referenced by other device nodes. Bindings that wish to reference
  80 +memory regions should explicitly document their use of the following
  81 +property:
  82 +
  83 +memory-region = <&phandle_to_defined_region>;
  84 +
  85 +This property indicates that the device driver should use the memory
  86 +region pointed to by the given phandle.
  87 +
  88 +
  89 +*** Example ***
  90 +
  91 +This example defines a memory layout with 4 banks. 3 contiguous regions
  92 +are defined for the Linux kernel: one default region for all device
  93 +drivers (named contig_region, base address autoconfigured, 64MiB), one
  94 +dedicated to the framebuffer device (labelled display_region, placed at
  95 +0x78000000, 8MiB) and one for multimedia processing (labelled
  96 +multimedia_region, placed at 0x77000000, 64MiB). 'display_region' is
  97 +assigned to the fb@12300000 device for DMA memory allocations (drivers
  98 +will use CMA if available, or exclusive DMA usage otherwise).
  99 +'multimedia_region' is assigned to the scaler@12500000 and
  100 +codec@12600000 devices for contiguous allocations when CMA is enabled.
  101 +
  102 +The reason for creating a separate region for the framebuffer device is
  103 +to match the framebuffer base address to the one configured by the
  104 +bootloader, so that no glitches appear on the displayed boot logo once
  105 +the Linux kernel drivers start. The scaler and codec drivers share
  106 +their memory allocations.
  107 +
  108 +/ {
  109 + #address-cells = <1>;
  110 + #size-cells = <1>;
  111 +
  112 + /* ... */
  113 +
  114 + memory {
  115 + reg = <0x40000000 0x10000000
  116 + 0x50000000 0x10000000
  117 + 0x60000000 0x10000000
  118 + 0x70000000 0x10000000>;
  119 +
  120 + reserved-memory {
  121 + #address-cells = <1>;
  122 + #size-cells = <1>;
  123 +
  124 + /*
  125 + * global autoconfigured region for contiguous allocations
  126 + * (used only with Contiguous Memory Allocator)
  127 + */
  128 + contig_region@0 {
  129 + compatible = "linux,contiguous-memory-region";
  130 + reg = <0x0 0x4000000>;
  131 + linux,default-contiguous-region;
  132 + };
  133 +
  134 + /*
  135 + * special region for framebuffer
  136 + */
  137 + display_region: region@78000000 {
  138 + compatible = "linux,contiguous-memory-region", "reserved-memory-region";
  139 + reg = <0x78000000 0x800000>;
  140 + };
  141 +
  142 + /*
  143 + * special region for multimedia processing devices
  144 + */
  145 + multimedia_region: region@77000000 {
  146 + compatible = "linux,contiguous-memory-region";
  147 + reg = <0x77000000 0x4000000>;
  148 + };
  149 + };
  150 + };
  151 +
  152 + /* ... */
  153 +
  154 + fb0: fb@12300000 {
  155 + status = "okay";
  156 + memory-region = <&display_region>;
  157 + };
  158 +
  159 + scaler: scaler@12500000 {
  160 + status = "okay";
  161 + memory-region = <&multimedia_region>;
  162 + };
  163 +
  164 + codec: codec@12600000 {
  165 + status = "okay";
  166 + memory-region = <&multimedia_region>;
  167 + };
  168 +};
arch/arm/include/asm/dma-contiguous.h
... ... @@ -5,7 +5,6 @@
5 5 #ifdef CONFIG_DMA_CMA
6 6  
7 7 #include <linux/types.h>
8   -#include <asm-generic/dma-contiguous.h>
9 8  
10 9 void dma_contiguous_early_fixup(phys_addr_t base, unsigned long size);
11 10  
arch/arm/mm/init.c
... ... @@ -17,6 +17,7 @@
17 17 #include <linux/nodemask.h>
18 18 #include <linux/initrd.h>
19 19 #include <linux/of_fdt.h>
  20 +#include <linux/of_reserved_mem.h>
20 21 #include <linux/highmem.h>
21 22 #include <linux/gfp.h>
22 23 #include <linux/memblock.h>
... ... @@ -377,6 +378,8 @@
377 378 /* reserve any platform specific memblock areas */
378 379 if (mdesc->reserve)
379 380 mdesc->reserve();
  381 +
  382 + early_init_dt_scan_reserved_mem();
380 383  
381 384 /*
382 385  * reserve memory for DMA contiguous allocations,
arch/x86/include/asm/dma-contiguous.h
... ... @@ -4,7 +4,6 @@
4 4 #ifdef __KERNEL__
5 5  
6 6 #include <linux/types.h>
7   -#include <asm-generic/dma-contiguous.h>
8 7  
9 8 static inline void
10 9 dma_contiguous_early_fixup(phys_addr_t base, unsigned long size) { }
drivers/base/dma-contiguous.c
... ... @@ -96,7 +96,7 @@
96 96 #endif
97 97  
98 98 /**
99   - * dma_contiguous_reserve() - reserve area for contiguous memory handling
  99 + * dma_contiguous_reserve() - reserve area(s) for contiguous memory handling
100 100 * @limit: End address of the reserved memory (optional, 0 for any).
101 101 *
102 102 * This function reserves memory from early allocator. It should be
103 103  
104 104  
105 105  
106 106  
... ... @@ -124,22 +124,29 @@
124 124 #endif
125 125 }
126 126  
127   - if (selected_size) {
  127 + if (selected_size && !dma_contiguous_default_area) {
128 128 pr_debug("%s: reserving %ld MiB for global area\n", __func__,
129 129 (unsigned long)selected_size / SZ_1M);
130 130  
131   - dma_declare_contiguous(NULL, selected_size, 0, limit);
  131 + dma_contiguous_reserve_area(selected_size, 0, limit,
  132 + &dma_contiguous_default_area);
132 133 }
133 134 };
134 135  
135 136 static DEFINE_MUTEX(cma_mutex);
136 137  
137   -static int __init cma_activate_area(unsigned long base_pfn, unsigned long count)
  138 +static int __init cma_activate_area(struct cma *cma)
138 139 {
139   - unsigned long pfn = base_pfn;
140   - unsigned i = count >> pageblock_order;
  140 + int bitmap_size = BITS_TO_LONGS(cma->count) * sizeof(long);
  141 + unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
  142 + unsigned i = cma->count >> pageblock_order;
141 143 struct zone *zone;
142 144  
  145 + cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
  146 +
  147 + if (!cma->bitmap)
  148 + return -ENOMEM;
  149 +
143 150 WARN_ON_ONCE(!pfn_valid(pfn));
144 151 zone = page_zone(pfn_to_page(pfn));
145 152  
146 153  
147 154  
148 155  
149 156  
150 157  
151 158  
152 159  
153 160  
154 161  
155 162  
156 163  
157 164  
... ... @@ -153,92 +160,53 @@
153 160 }
154 161 init_cma_reserved_pageblock(pfn_to_page(base_pfn));
155 162 } while (--i);
  163 +
156 164 return 0;
157 165 }
158 166  
159   -static struct cma * __init cma_create_area(unsigned long base_pfn,
160   - unsigned long count)
161   -{
162   - int bitmap_size = BITS_TO_LONGS(count) * sizeof(long);
163   - struct cma *cma;
164   - int ret = -ENOMEM;
  167 +static struct cma cma_areas[MAX_CMA_AREAS];
  168 +static unsigned cma_area_count;
165 169  
166   - pr_debug("%s(base %08lx, count %lx)\n", __func__, base_pfn, count);
167   -
168   - cma = kmalloc(sizeof *cma, GFP_KERNEL);
169   - if (!cma)
170   - return ERR_PTR(-ENOMEM);
171   -
172   - cma->base_pfn = base_pfn;
173   - cma->count = count;
174   - cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
175   -
176   - if (!cma->bitmap)
177   - goto no_mem;
178   -
179   - ret = cma_activate_area(base_pfn, count);
180   - if (ret)
181   - goto error;
182   -
183   - pr_debug("%s: returned %p\n", __func__, (void *)cma);
184   - return cma;
185   -
186   -error:
187   - kfree(cma->bitmap);
188   -no_mem:
189   - kfree(cma);
190   - return ERR_PTR(ret);
191   -}
192   -
193   -static struct cma_reserved {
194   - phys_addr_t start;
195   - unsigned long size;
196   - struct device *dev;
197   -} cma_reserved[MAX_CMA_AREAS] __initdata;
198   -static unsigned cma_reserved_count __initdata;
199   -
200 170 static int __init cma_init_reserved_areas(void)
201 171 {
202   - struct cma_reserved *r = cma_reserved;
203   - unsigned i = cma_reserved_count;
  172 + int i;
204 173  
205   - pr_debug("%s()\n", __func__);
206   -
207   - for (; i; --i, ++r) {
208   - struct cma *cma;
209   - cma = cma_create_area(PFN_DOWN(r->start),
210   - r->size >> PAGE_SHIFT);
211   - if (!IS_ERR(cma))
212   - dev_set_cma_area(r->dev, cma);
  174 + for (i = 0; i < cma_area_count; i++) {
  175 + int ret = cma_activate_area(&cma_areas[i]);
  176 + if (ret)
  177 + return ret;
213 178 }
  179 +
214 180 return 0;
215 181 }
216 182 core_initcall(cma_init_reserved_areas);
217 183  
218 184 /**
219   - * dma_declare_contiguous() - reserve area for contiguous memory handling
220   - * for particular device
221   - * @dev: Pointer to device structure.
222   - * @size: Size of the reserved memory.
223   - * @base: Start address of the reserved memory (optional, 0 for any).
  185 + * dma_contiguous_reserve_area() - reserve custom contiguous area
  186 + * @size: Size of the reserved area (in bytes).
  187 + * @base: Base address of the reserved area (optional, use 0 for any).
224 188 * @limit: End address of the reserved memory (optional, 0 for any).
  189 + * @res_cma: Pointer to store the created cma region.
225 190 *
226   - * This function reserves memory for specified device. It should be
227   - * called by board specific code when early allocator (memblock or bootmem)
228   - * is still activate.
  191 + * This function reserves memory from early allocator. It should be
  192 + * called by arch specific code once the early allocator (memblock or bootmem)
  193 + * has been activated and all other subsystems have already allocated/reserved
  194 + * memory. This function can be used to create custom reserved areas for
  195 + * specific devices.
229 196 */
230   -int __init dma_declare_contiguous(struct device *dev, phys_addr_t size,
231   - phys_addr_t base, phys_addr_t limit)
  197 +int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
  198 + phys_addr_t limit, struct cma **res_cma)
232 199 {
233   - struct cma_reserved *r = &cma_reserved[cma_reserved_count];
  200 + struct cma *cma = &cma_areas[cma_area_count];
234 201 phys_addr_t alignment;
  202 + int ret = 0;
235 203  
236 204 pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
237 205 (unsigned long)size, (unsigned long)base,
238 206 (unsigned long)limit);
239 207  
240 208 /* Sanity checks */
241   - if (cma_reserved_count == ARRAY_SIZE(cma_reserved)) {
  209 + if (cma_area_count == ARRAY_SIZE(cma_areas)) {
242 210 pr_err("Not enough slots for CMA reserved regions!\n");
243 211 return -ENOSPC;
244 212 }
... ... @@ -256,7 +224,7 @@
256 224 if (base) {
257 225 if (memblock_is_region_reserved(base, size) ||
258 226 memblock_reserve(base, size) < 0) {
259   - base = -EBUSY;
  227 + ret = -EBUSY;
260 228 goto err;
261 229 }
262 230 } else {
... ... @@ -266,7 +234,7 @@
266 234 */
267 235 phys_addr_t addr = __memblock_alloc_base(size, alignment, limit);
268 236 if (!addr) {
269   - base = -ENOMEM;
  237 + ret = -ENOMEM;
270 238 goto err;
271 239 } else {
272 240 base = addr;
... ... @@ -277,10 +245,11 @@
277 245 * Each reserved area must be initialised later, when more kernel
278 246 * subsystems (like slab allocator) are available.
279 247 */
280   - r->start = base;
281   - r->size = size;
282   - r->dev = dev;
283   - cma_reserved_count++;
  248 + cma->base_pfn = PFN_DOWN(base);
  249 + cma->count = size >> PAGE_SHIFT;
  250 + *res_cma = cma;
  251 + cma_area_count++;
  252 +
284 253 pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
285 254 (unsigned long)base);
286 255  
... ... @@ -289,7 +258,7 @@
289 258 return 0;
290 259 err:
291 260 pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
292   - return base;
  261 + return ret;
293 262 }
294 263  
295 264 /**
drivers/of/Kconfig
... ... @@ -74,5 +74,11 @@
74 74 depends on MTD
75 75 def_bool y
76 76  
  77 +config OF_RESERVED_MEM
  78 + depends on OF_FLATTREE && (DMA_CMA || (HAVE_GENERIC_DMA_COHERENT && HAVE_MEMBLOCK))
  79 + def_bool y
  80 + help
  81 + Initialization code for DMA reserved memory
  82 +
77 83 endmenu # OF
drivers/of/Makefile
... ... @@ -9,4 +9,5 @@
9 9 obj-$(CONFIG_OF_PCI) += of_pci.o
10 10 obj-$(CONFIG_OF_PCI_IRQ) += of_pci_irq.o
11 11 obj-$(CONFIG_OF_MTD) += of_mtd.o
  12 +obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
drivers/of/fdt.c
... ... @@ -545,6 +545,82 @@
545 545 return of_fdt_match(initial_boot_params, node, compat);
546 546 }
547 547  
  548 +struct fdt_scan_status {
  549 + const char *name;
  550 + int namelen;
  551 + int depth;
  552 + int found;
  553 + int (*iterator)(unsigned long node, const char *uname, int depth, void *data);
  554 + void *data;
  555 +};
  556 +
  557 +/**
  558 + * fdt_scan_node_by_path - iterator for of_scan_flat_dt_by_path function
  559 + */
  560 +static int __init fdt_scan_node_by_path(unsigned long node, const char *uname,
  561 + int depth, void *data)
  562 +{
  563 + struct fdt_scan_status *st = data;
  564 +
  565 + /*
  566 + * if scan at the requested fdt node has been completed,
  567 + * return -ENXIO to abort further scanning
  568 + */
  569 + if (depth <= st->depth)
  570 + return -ENXIO;
  571 +
  572 + /* requested fdt node has been found, so call iterator function */
  573 + if (st->found)
  574 + return st->iterator(node, uname, depth, st->data);
  575 +
  576 + /* check if the scanning automaton is entering the next level of fdt nodes */
  577 + if (depth == st->depth + 1 &&
  578 + strncmp(st->name, uname, st->namelen) == 0 &&
  579 + uname[st->namelen] == 0) {
  580 + st->depth += 1;
  581 + if (st->name[st->namelen] == 0) {
  582 + st->found = 1;
  583 + } else {
  584 + const char *next = st->name + st->namelen + 1;
  585 + st->name = next;
  586 + st->namelen = strcspn(next, "/");
  587 + }
  588 + return 0;
  589 + }
  590 +
  591 + /* scan next fdt node */
  592 + return 0;
  593 +}
  594 +
  595 +/**
  596 + * of_scan_flat_dt_by_path - scan flattened tree blob and call callback on each
  597 + * child of the given path.
  598 + * @path: path to start searching for children
  599 + * @it: callback function
  600 + * @data: context data pointer
  601 + *
  602 + * This function is used to scan the flattened device-tree starting from the
  603 + * node given by path. It is used to extract information (like reserved
  604 + * memory), which is required during early boot before we can unflatten the tree.
  605 + */
  606 +int __init of_scan_flat_dt_by_path(const char *path,
  607 + int (*it)(unsigned long node, const char *name, int depth, void *data),
  608 + void *data)
  609 +{
  610 + struct fdt_scan_status st = {path, 0, -1, 0, it, data};
  611 + int ret = 0;
  612 +
  613 + if (initial_boot_params)
  614 + ret = of_scan_flat_dt(fdt_scan_node_by_path, &st);
  615 +
  616 + if (!st.found)
  617 + return -ENOENT;
  618 + else if (ret == -ENXIO) /* scan has been completed */
  619 + return 0;
  620 + else
  621 + return ret;
  622 +}
  623 +
548 624 #ifdef CONFIG_BLK_DEV_INITRD
549 625 /**
550 626 * early_init_dt_check_for_initrd - Decode initrd location from flat tree
drivers/of/of_reserved_mem.c
  1 +/*
  2 + * Device tree based initialization code for reserved memory.
  3 + *
  4 + * Copyright (c) 2013 Samsung Electronics Co., Ltd.
  5 + * http://www.samsung.com
  6 + * Author: Marek Szyprowski <m.szyprowski@samsung.com>
  7 + *
  8 + * This program is free software; you can redistribute it and/or
  9 + * modify it under the terms of the GNU General Public License as
  10 + * published by the Free Software Foundation; either version 2 of the
  11 + * License, or (at your option) any later version.
  12 + */
  13 +
  14 +#include <asm/dma-contiguous.h>
  15 +
  16 +#include <linux/memblock.h>
  17 +#include <linux/err.h>
  18 +#include <linux/of.h>
  19 +#include <linux/of_fdt.h>
  20 +#include <linux/of_platform.h>
  21 +#include <linux/mm.h>
  22 +#include <linux/sizes.h>
  23 +#include <linux/mm_types.h>
  24 +#include <linux/dma-contiguous.h>
  25 +#include <linux/dma-mapping.h>
  26 +#include <linux/of_reserved_mem.h>
  27 +
  28 +#define MAX_RESERVED_REGIONS 16
  29 +struct reserved_mem {
  30 + phys_addr_t base;
  31 + unsigned long size;
  32 + struct cma *cma;
  33 + char name[32];
  34 +};
  35 +static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
  36 +static int reserved_mem_count;
  37 +
  38 +static int __init fdt_scan_reserved_mem(unsigned long node, const char *uname,
  39 + int depth, void *data)
  40 +{
  41 + struct reserved_mem *rmem = &reserved_mem[reserved_mem_count];
  42 + phys_addr_t base, size;
  43 + int is_cma, is_reserved;
  44 + unsigned long len;
  45 + const char *status;
  46 + __be32 *prop;
  47 +
  48 + is_cma = IS_ENABLED(CONFIG_DMA_CMA) &&
  49 + of_flat_dt_is_compatible(node, "linux,contiguous-memory-region");
  50 + is_reserved = of_flat_dt_is_compatible(node, "reserved-memory-region");
  51 +
  52 + if (!is_reserved && !is_cma) {
  53 + /* ignore node and scan next one */
  54 + return 0;
  55 + }
  56 +
  57 + status = of_get_flat_dt_prop(node, "status", &len);
  58 + if (status && strcmp(status, "okay") != 0) {
  59 + /* ignore disabled node and scan next one */
  60 + return 0;
  61 + }
  62 +
  63 + prop = of_get_flat_dt_prop(node, "reg", &len);
  64 + if (!prop || (len < (dt_root_size_cells + dt_root_addr_cells) *
  65 + sizeof(__be32))) {
  66 + pr_err("Reserved mem: node %s, incorrect \"reg\" property\n",
  67 + uname);
  68 + /* ignore node and scan next one */
  69 + return 0;
  70 + }
  71 + base = dt_mem_next_cell(dt_root_addr_cells, &prop);
  72 + size = dt_mem_next_cell(dt_root_size_cells, &prop);
  73 +
  74 + if (!size) {
  75 + /* ignore node and scan next one */
  76 + return 0;
  77 + }
  78 +
  79 + pr_info("Reserved mem: found %s, memory base %lx, size %ld MiB\n",
  80 + uname, (unsigned long)base, (unsigned long)size / SZ_1M);
  81 +
  82 + if (reserved_mem_count == ARRAY_SIZE(reserved_mem))
  83 + return -ENOSPC;
  84 +
  85 + rmem->base = base;
  86 + rmem->size = size;
  87 + strlcpy(rmem->name, uname, sizeof(rmem->name));
  88 +
  89 + if (is_cma) {
  90 + struct cma *cma;
  91 + if (dma_contiguous_reserve_area(size, base, 0, &cma) == 0) {
  92 + rmem->cma = cma;
  93 + reserved_mem_count++;
  94 + if (of_get_flat_dt_prop(node,
  95 + "linux,default-contiguous-region",
  96 + NULL))
  97 + dma_contiguous_set_default(cma);
  98 + }
  99 + } else if (is_reserved) {
  100 + if (memblock_remove(base, size) == 0)
  101 + reserved_mem_count++;
  102 + else
  103 + pr_err("Failed to reserve memory for %s\n", uname);
  104 + }
  105 +
  106 + return 0;
  107 +}
  108 +
  109 +static struct reserved_mem *get_dma_memory_region(struct device *dev)
  110 +{
  111 + struct device_node *node;
  112 + const char *name;
  113 + int i;
  114 +
  115 + node = of_parse_phandle(dev->of_node, "memory-region", 0);
  116 + if (!node)
  117 + return NULL;
  118 +
  119 + name = kbasename(node->full_name);
  120 + for (i = 0; i < reserved_mem_count; i++)
  121 + if (strcmp(name, reserved_mem[i].name) == 0)
  122 + return &reserved_mem[i];
  123 + return NULL;
  124 +}
  125 +
  126 +/**
  127 + * of_reserved_mem_device_init() - assign reserved memory region to given device
  128 + *
  129 + * This function assigns the memory region pointed to by the "memory-region"
  130 + * device tree property to the given device.
  131 + */
  132 +void of_reserved_mem_device_init(struct device *dev)
  133 +{
  134 + struct reserved_mem *region = get_dma_memory_region(dev);
  135 + if (!region)
  136 + return;
  137 +
  138 + if (region->cma) {
  139 + dev_set_cma_area(dev, region->cma);
  140 + pr_info("Assigned CMA %s to %s device\n", region->name,
  141 + dev_name(dev));
  142 + } else {
  143 + if (dma_declare_coherent_memory(dev, region->base, region->base,
  144 + region->size, DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE) != 0)
  145 + pr_info("Declared reserved memory %s to %s device\n",
  146 + region->name, dev_name(dev));
  147 + }
  148 +}
  149 +
  150 +/**
  151 + * of_reserved_mem_device_release() - release reserved memory device structures
  152 + *
  153 + * This function releases structures allocated for memory region handling for
  154 + * the given device.
  155 + */
  156 +void of_reserved_mem_device_release(struct device *dev)
  157 +{
  158 + struct reserved_mem *region = get_dma_memory_region(dev);
  159 + if (region && !region->cma)
  160 + dma_release_declared_memory(dev);
  161 +}
  162 +
  163 +/**
  164 + * early_init_dt_scan_reserved_mem() - create reserved memory regions
  165 + *
  166 + * This function reserves memory from the early allocator for the exclusive
  167 + * device use regions defined in the device tree. It should be called by
  168 + * arch specific code once the early allocator (memblock) has been activated
  169 + * and all other subsystems have already allocated/reserved memory.
  170 + */
  171 +void __init early_init_dt_scan_reserved_mem(void)
  172 +{
  173 + of_scan_flat_dt_by_path("/memory/reserved-memory",
  174 + fdt_scan_reserved_mem, NULL);
  175 +}
drivers/of/platform.c
... ... @@ -21,6 +21,7 @@
21 21 #include <linux/of_device.h>
22 22 #include <linux/of_irq.h>
23 23 #include <linux/of_platform.h>
  24 +#include <linux/of_reserved_mem.h>
24 25 #include <linux/platform_device.h>
25 26  
26 27 const struct of_device_id of_default_bus_match_table[] = {
... ... @@ -218,6 +219,8 @@
218 219 dev->dev.bus = &platform_bus_type;
219 220 dev->dev.platform_data = platform_data;
220 221  
  222 + of_reserved_mem_device_init(&dev->dev);
  223 +
221 224 /* We do not fill the DMA ops for platform devices by default.
222 225 * This is currently the responsibility of the platform code
223 226 * to do such, possibly using a device notifier
... ... @@ -225,6 +228,7 @@
225 228  
226 229 if (of_device_add(dev) != 0) {
227 230 platform_device_put(dev);
  231 + of_reserved_mem_device_release(&dev->dev);
228 232 return NULL;
229 233 }
230 234  
include/asm-generic/dma-contiguous.h
1   -#ifndef ASM_DMA_CONTIGUOUS_H
2   -#define ASM_DMA_CONTIGUOUS_H
3   -
4   -#ifdef __KERNEL__
5   -#ifdef CONFIG_CMA
6   -
7   -#include <linux/device.h>
8   -#include <linux/dma-contiguous.h>
9   -
10   -static inline struct cma *dev_get_cma_area(struct device *dev)
11   -{
12   - if (dev && dev->cma_area)
13   - return dev->cma_area;
14   - return dma_contiguous_default_area;
15   -}
16   -
17   -static inline void dev_set_cma_area(struct device *dev, struct cma *cma)
18   -{
19   - if (dev)
20   - dev->cma_area = cma;
21   - if (!dev && !dma_contiguous_default_area)
22   - dma_contiguous_default_area = cma;
23   -}
24   -
25   -#endif
26   -#endif
27   -
28   -#endif
include/linux/device.h
... ... @@ -737,7 +737,7 @@
737 737  
738 738 struct dma_coherent_mem *dma_mem; /* internal for coherent mem
739 739 override */
740   -#ifdef CONFIG_CMA
  740 +#ifdef CONFIG_DMA_CMA
741 741 struct cma *cma_area; /* contiguous memory area for dma
742 742 allocations */
743 743 #endif
include/linux/dma-contiguous.h
... ... @@ -67,10 +67,54 @@
67 67  
68 68 extern struct cma *dma_contiguous_default_area;
69 69  
  70 +static inline struct cma *dev_get_cma_area(struct device *dev)
  71 +{
  72 + if (dev && dev->cma_area)
  73 + return dev->cma_area;
  74 + return dma_contiguous_default_area;
  75 +}
  76 +
  77 +static inline void dev_set_cma_area(struct device *dev, struct cma *cma)
  78 +{
  79 + if (dev)
  80 + dev->cma_area = cma;
  81 +}
  82 +
  83 +static inline void dma_contiguous_set_default(struct cma *cma)
  84 +{
  85 + dma_contiguous_default_area = cma;
  86 +}
  87 +
70 88 void dma_contiguous_reserve(phys_addr_t addr_limit);
71   -int dma_declare_contiguous(struct device *dev, phys_addr_t size,
72   - phys_addr_t base, phys_addr_t limit);
73 89  
  90 +int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
  91 + phys_addr_t limit, struct cma **res_cma);
  92 +
  93 +/**
  94 + * dma_declare_contiguous() - reserve area for contiguous memory handling
  95 + * for particular device
  96 + * @dev: Pointer to device structure.
  97 + * @size: Size of the reserved memory.
  98 + * @base: Start address of the reserved memory (optional, 0 for any).
  99 + * @limit: End address of the reserved memory (optional, 0 for any).
  100 + *
  101 + * This function reserves memory for the specified device. It should be
  102 + * called by board specific code while the early allocator (memblock or
  103 + * bootmem) is still active.
  104 + */
  105 +
  106 +static inline int dma_declare_contiguous(struct device *dev, phys_addr_t size,
  107 + phys_addr_t base, phys_addr_t limit)
  108 +{
  109 + struct cma *cma;
  110 + int ret;
  111 + ret = dma_contiguous_reserve_area(size, base, limit, &cma);
  112 + if (ret == 0)
  113 + dev_set_cma_area(dev, cma);
  114 +
  115 + return ret;
  116 +}
  117 +
74 118 struct page *dma_alloc_from_contiguous(struct device *dev, int count,
75 119 unsigned int order);
76 120 bool dma_release_from_contiguous(struct device *dev, struct page *pages,
77 121  
... ... @@ -80,7 +124,21 @@
80 124  
81 125 #define MAX_CMA_AREAS (0)
82 126  
  127 +static inline struct cma *dev_get_cma_area(struct device *dev)
  128 +{
  129 + return NULL;
  130 +}
  131 +
  132 +static inline void dev_set_cma_area(struct device *dev, struct cma *cma) { }
  133 +
  134 +static inline void dma_contiguous_set_default(struct cma *cma) { }
  135 +
83 136 static inline void dma_contiguous_reserve(phys_addr_t limit) { }
  137 +
  138 +static inline int dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
  139 + phys_addr_t limit, struct cma **res_cma) {
  140 + return -ENOSYS;
  141 +}
84 142  
85 143 static inline
86 144 int dma_declare_contiguous(struct device *dev, phys_addr_t size,
include/linux/of_fdt.h
... ... @@ -90,6 +90,9 @@
90 90 extern int of_flat_dt_is_compatible(unsigned long node, const char *name);
91 91 extern int of_flat_dt_match(unsigned long node, const char *const *matches);
92 92 extern unsigned long of_get_flat_dt_root(void);
  93 +extern int of_scan_flat_dt_by_path(const char *path,
  94 + int (*it)(unsigned long node, const char *name, int depth, void *data),
  95 + void *data);
93 96  
94 97 extern int early_init_dt_scan_chosen(unsigned long node, const char *uname,
95 98 int depth, void *data);
include/linux/of_reserved_mem.h
  1 +#ifndef __OF_RESERVED_MEM_H
  2 +#define __OF_RESERVED_MEM_H
  3 +
  4 +#ifdef CONFIG_OF_RESERVED_MEM
  5 +void of_reserved_mem_device_init(struct device *dev);
  6 +void of_reserved_mem_device_release(struct device *dev);
  7 +void early_init_dt_scan_reserved_mem(void);
  8 +#else
  9 +static inline void of_reserved_mem_device_init(struct device *dev) { }
  10 +static inline void of_reserved_mem_device_release(struct device *dev) { }
  11 +static inline void early_init_dt_scan_reserved_mem(void) { }
  12 +#endif
  13 +
  14 +#endif /* __OF_RESERVED_MEM_H */