path: root/core/arch/arm/mm/core_mmu_v7.c
Age  Commit message  Author
2019-02-01  core_mmu: introduce mmu partitions  (Volodymyr Babchuk)
For virtualization support we need to have multiple mmu partitions, one partition per virtual machine. A partition holds information about page tables, ASID, etc. When OP-TEE switches to another partition, it effectively changes how it sees memory. In this way it is possible to have multiple memory layouts with different shared buffers and TAs mapped, even with different .bss and .data sections. If virtualization is disabled, then only one default partition exists and it is impossible to allocate more. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2018-05-16  Remove license notice from STMicroelectronics files  (Etienne Carriere)
For a while now, the source file license info has been defined by SPDX identifiers. We can safely remove the verbose license text from the files that are owned either only by STMicroelectronics or only by both Linaro and STMicroelectronics. Signed-off-by: Etienne Carriere <etienne.carriere@st.com> Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2018-04-03  core: more flexible ta mapping  (Jens Wiklander)
Replaces the current fixed array of TA map entries where some indexes have a special meaning. The new structures and functions dealing with this have a vm_ prefix instead of the old tee_mmu_ prefix. struct tee_ta_region is replaced by struct vm_region, which is now stored in a linked list, using the new TEE_MATTR bits to identify special regions. struct tee_mmu_info is replaced by struct vm_info, which now keeps the head of the linked list of regions. Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2018-02-27  core: armv7: core_init_mmu_regs() init contextidr  (Jens Wiklander)
The value of CONTEXTIDR is initially undefined; initialize it with a sane value. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Tested-by: Jordan Rhee <jordanrh@microsoft.com> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2018-02-16  mmu: implement generic mmu initialization  (Volodymyr Babchuk)
This patch adds the function core_mmu_map_region(), which maps a given memory region. This function is generic in the sense that it can map memory for both short and long descriptor formats, as it uses primitives provided by core_mmu_v7 and core_mmu_lpae. Also, this function tries to use the largest mapping blocks possible. For example, if a memory region is not aligned to PGDIR_SIZE but spans multiple pgdirs, core_mmu_map_region() will map most of this region with large blocks, and only the start/end will be mapped with small pages. As core_mmu_map_region() provides all means needed for MMU initialization, we can drop the mmu-specific code in core_mmu_v7.c and core_mmu_lpae.c. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
2018-02-16  mmu: replace _prepare_small_page_mapping with _entry_to_finer_grained  (Volodymyr Babchuk)
core_mmu_prepare_small_page_mapping() just prepares a table for the next level if there were no mappings already. core_mmu_entry_to_finer_grained() will do the same even if something is already mapped there. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
2018-01-10  Add SPDX license identifiers  (Jerome Forissier)
Adds one SPDX-License-Identifier line [1] to each source file that contains license text. Generated by [2]: spdxify.py --add-spdx optee_os/ The scancode tool [3] was used to double check the license matching code in the Python script. All the licenses detected by scancode are either detected by spdxify.py, have no SPDX identifier, or are false matches. Link: [1] https://spdx.org/licenses/ Link: [2] https://github.com/jforissier/misc/blob/f7b56c8/spdxify.py Link: [3] https://github.com/nexB/scancode-toolkit Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Joakim Bech <joakim.bech@linaro.org>
2018-01-10  core: user mode translation table  (Jens Wiklander)
Adds a second translation table to be used while in user mode, containing the user mode mapping and a minimal kernel mapping. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Andrew Davis <andrew.davis@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2018-01-10  core: refactor ASID management  (Jens Wiklander)
Refactors Address Space Identifier management. The field in struct user_ta_ctx is moved into struct tee_mmu_info and renamed to asid. Allocation is refactored internally with asid_alloc() and asid_free() functions, based on bitstring.h macros. ASIDs start at 2 and are always even numbers. ASIDs with the lowest bit set are reserved as the second ASID when ASIDs are used in pairs. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Andrew Davis <andrew.davis@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-10-10  Revert "core: core_mmu_v7: core_mmu_get_user_pgdir: remove duplicated code"  (Jerome Forissier)
This reverts commit 3eb2ba74961b. core_mmu_set_info_table() sets tbl_info->num_entries to NUM_L1_ENTRIES, not NUM_UL1_ENTRIES. So the removed code was actually not a duplicate. Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-10-10  core: core_mmu_v7: core_mmu_get_user_pgdir: remove duplicated code  (Peng Fan)
core_mmu_set_info_table() already sets num_entries, no need to set it again. Signed-off-by: Peng Fan <peng.fan@nxp.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-10-06  core:mmu: rename _divide_block into _prepare_small_page_mapping  (Etienne Carriere)
The core_mmu_divide_block() name is misleading. core_mmu_prepare_small_page_mapping() is used to allocate the required mmu table(s) and init the mmu so that a full pgdir area can be used to map 4kB small pages from a single table entry (mmu descriptor). Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmaill.com>
2017-10-06  core: core_mmu_divide_block shall not unmap memory  (Etienne Carriere)
Since the function is not expected to unmap anything, it should simply check that nothing is mapped instead of restoring the previous mapping on the overall pgdir entries. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (b2260) Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmaill.com>
2017-10-06  core: fix core_mmu_divide_block() against 32bit mmu constraint  (Etienne Carriere)
core_mmu_divide_block() is used to prepare an MMU pgdir mapping table when no pgdir MMU table was previously allocated/filled to map a small-page-mapped address range. On non-LPAE (32bit mmu), a pgdir entry cannot be used to map both secure and non-secure pages. The pgdir descriptor defines whether the small pages inside the pgdir range will map secure or non-secure memory. Hence the core_mmu_divide_block() function takes an extra argument: the target secure/non-secure attribute of the pgdir. This argument is unused on LPAE mapping as a pgdir can hold secure as well as non-secure entries. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmaill.com>
2017-10-06  core/mmu v7: move code inside core_mmu_divide_block  (Etienne Carriere)
This change allocates the L2 table only once it is useful. It allows exiting the function without having to free the L2 table if something fails before the L2 table content is filled. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmaill.com>
2017-09-18  core: mmu: export map_memarea_sections  (Peng Fan)
Export map_memarea_sections. We need an mmu table dedicated to the low power feature, so export map_memarea_sections to create that section mapping. Signed-off-by: Peng Fan <peng.fan@nxp.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-09-04  core: bugfix core_mmu_user_mapping_is_active()  (Jens Wiklander)
Fixes race in both v7 and lpae versions of core_mmu_user_mapping_is_active() by temporarily disabling interrupts. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU v8) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-08-29  core: add MEM_AREA_PAGER_VASPACE  (Jens Wiklander)
Adds MEM_AREA_PAGER_VASPACE which is used to create empty translation tables as needed for the pager. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-06-26  core: arm: rename TLB maintenance files  (Etienne Carriere)
ssvce_aXX.S and tz_ssvce.h now only provide TLB maintenance support. This change renames the source and header files accordingly. Signed-off-by: Etienne Carriere <etienne.carriere@st.com> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-06-26  core: fix TODOs related to TLB maintenance in the pager  (Etienne Carriere)
Invalidate TLB entries for the target references instead of invalidating the whole tables. Some changes affect places where several references are modified and must be invalidated in the TLBs. This change aims at reducing the synchronization barriers required before/after the TLB maintenance operations. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Hikey AArch{32,64} pager)
2017-06-26  core: deprecate core_tlb_maintenance()  (Etienne Carriere)
The core_tlb_maintenance() indirection is not useful. This function is now deprecated and one shall call the tlbi_xxx() functions directly instead. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-06-16  core: clarify end of static mapping table  (Etienne Carriere)
Replace the remaining code relying on a null size value for detecting the end of the static mapping table with a test on the type value. This is made consistent between the lpae and non-lpae implementations. Rename MEM_AREA_NOTYPE into MEM_AREA_END as it is dedicated to this specific purpose. The faulty core_mmu_get_type_by_pa() can return MEM_AREA_MAXTYPE in invalid cases. Add a comment highlighting that null sized entries are not filled in the static mapping directives table. Forgive the trick on level_index_m'sk to fit in the 80 chars/line. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-06-15  core: fix core_init_mmu_tables() loop  (Viktor Signayevskiy)
Fixes the terminating condition of the for loop in core_init_mmu_tables() to rely on mm[n].type instead of mm[n].size. Fixes: https://github.com/OP-TEE/issue/1602 Signed-off-by: Victor Signaevskyi <piligrim2007@meta.ua> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> [jf: wrap commit description] Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
2017-05-30  core: remove __early_bss  (Jerome Forissier)
Initialize the .bss section early from assembler, before entering C code. As a result, the __early_bss qualifier is not needed anymore. Remove it, as well as the related symbols (__early_bss_start and __early_bss_end). This makes the code simpler and hence easier to maintain, at the expense of initialization time, since .bss is cleared before CPU caches are turned on (and doing it later would mean some C functions have been called already). Here are some performance numbers measured on HiKey. The "memset" column measures the time it takes to clear .bss in C, without this patch. The "assembly" column reports the time taken by the clear_bss loop in this patch. Timings were performed using CNTPCT. Worst case is a ~1 ms overhead in boot time.

               memset():      | assembly:
               ms (bytes)     | ms (bytes)
--------------+---------------+--------------
Aarch64        0.30 (72824)   | 0.08 (73528)
Aarch32        0.27 (65016)   | 1.24 (65408)
Aarch32/pager  0.03 (11328)   | 0.23 (11736)

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (QEMU) Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey 32/64) Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey/pager) Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-05-29  core: use only KEEP*() macros for dependencies  (Jens Wiklander)
Replaces the last $(entries-unpaged) and $(entries-init) which are hard coded in link.mk with KEEP_* annotations inside the source files instead. Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (HiKey pager) Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU pager) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-05-19  core: MEM_AREA_TEE_RAM_RW_DATA identifies core read/write data memory  (Etienne Carriere)
This change prepares the split of executable memory from read/write memory. Depending on the configuration, the core private memory "TEE RAM" (heap, stack, ...) will either be in TEE_RAM or TEE_RAM_RW. MEM_AREA_TEE_RAM_RW_DATA abstracts the memory used for core data. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-05-19  core: non-LPAE reuse xlat table when possible at static map init  (Etienne Carriere)
When a level2 translation table is already used for a virtual mapping range, allow the core to reuse it to extend the mapping in the same virtual region. map_memarea() now maps a single given "memory map area" based on MMU root table knowledge. Each non-LPAE level2 table is reset to zero when allocated, since map_page_memarea() now only fills the mapped virtual range in the target MMU table. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-05-16  core: fix non-LPAE mapping  (Etienne Carriere)
Fixes: c6c69797168c ("mm: add new VA region for dynamic shared buffers") Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Tested-by: Andrew F. Davis <afd@ti.com> (am43xx)
2017-04-28  core_mmu: add page map/unmap functions  (Volodymyr Babchuk)
These functions allow mapping a list of physical pages at a specified virtual memory address. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> [jf: remove braces {} around single statement block] Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
2017-04-28  mm: add new VA region for dynamic shared buffers  (Volodymyr Babchuk)
This region will be used later to dynamically map shared buffers provided by Normal World. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-04-18  core_mmu_v7: Allow cache memory attributes to match non-SMP Linux  (Andrew F. Davis)
On non-SMP ARM Linux the default cache policy is inner/outer write-back, no write-allocate, not shareable. When compiled with SMP support the policy is updated to inner/outer write-back, write-allocate, shareable. OP-TEE makes the assumption that SMP will be enabled; allow overriding this for the non-SMP cases. Signed-off-by: Andrew F. Davis <afd@ti.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-04-18  core_mmu_v7: Rename index to normal cached memory  (Andrew F. Davis)
The index into the cache attribute registers for device memory is called ATTR_DEVICE_INDEX, but normal cached memory is referred to as ATTR_IWBWA_OWBWA_INDEX, which implies a specific caching type. This is not always the type of cache we will use. Rename it to the more generic ATTR_NORMAL_CACHED_INDEX. Signed-off-by: Andrew F. Davis <afd@ti.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-04-12  core: debug trace the number of xlat tables used  (Etienne Carriere)
These debug traces can be quite handy to monitor the number of translation tables effectively used at runtime. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-04-12  core: debug trace for non-LPAE mapping  (Etienne Carriere)
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-04-12  core: fix non-LPAE mapping against unmapped areas  (Etienne Carriere)
Core defines some virtual memory that should not be mapped by default. Yet core_mmu_v7.c loads a non-null descriptor in the MMU tables for such memory: the attributes are null (which makes the page effectively not mapped) but a meaningless non-null physical page address is loaded. This change makes core load a truly null descriptor for such unmapped areas. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-12-27  core_mmu: add core_mmu_divide_block() function  (Volodymyr Babchuk)
This function divides an L1/L2 translation table entry into L2/L3 entries. It can be used when we need a finer mapping than is currently possible. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-12-27  core_mmu_v7: slight refactoring to look like core_mmu_lpae  (Volodymyr Babchuk)
This patch makes core_mmu_v7.c look similar to core_mmu_lpae.c:
- ARMv7-specific definitions were moved from core_mmu_defs.h to the .c file
- core_mmu_defs.h was removed, because it stored definitions only for v7
- core_mmu_alloc_l2() now really allocates l2 pages
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-12-05  core: add core_mmu_get_user_pgdir()  (Jens Wiklander)
Adds core_mmu_get_user_pgdir() to fill in a struct core_mmu_table_info describing the page directory used for user TAs. Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-12-05  core: bugfix core_mmu_get_entry_primitive()  (Jens Wiklander)
Fixes both implementations of core_mmu_get_entry_primitive() to correctly report TEE_MATTR_TABLE. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-10-28  Remove legacy tee_common_unpg.h  (Jens Wiklander)
Removes legacy file core/include/kernel/tee_common_unpg.h and updates with new types etc as needed. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (b2260) Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-10-17  core_mmu: fix the ttb pa address setting  (Zeng Tao)
Use the real physical address to set the MMU TTBR, and don't rely on the platform mapping. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Zeng Tao <prime.zeng@hisilicon.com> [Rebased on top of master] Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
2016-09-28  core: add support for paging of user TAs  (Jens Wiklander)
Enables support for paging of user TAs if CFG_PAGED_USER_TA is y. Acked-by: David Brown <david.brown@linaro.org> Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey) Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU 7) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-08-10  core: review assert and panic traces  (Etienne Carriere)
Replace a few "{ EMSG(...); panic(); }" with "panic(...);". Disable file/line/func debug traces in panic() logs when CFG_TEE_CORE_DEBUG is disabled. Change __assert_log() to use EMSG_RAW() so as not to pollute the trace with __assert_log() internals (duplicated file/line/func traces). Change assert() to use a low/high verbosity mode upon CFG_TEE_CORE_DEBUG as panic() does. Change assert() to also trace the C function where the assertion failed. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jen.wiklander@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (QEMU)
2016-08-09  core: remove TEE_ASSERT()  (Etienne Carriere)
TEE_ASSERT() can be confusing compared to assert(), as assert() can be disabled through NDEBUG while TEE_ASSERT() can't. Instead one should explicitly implement "if (cond) { panic(); }". This patch removes several inclusions of tee_common_unpg.h as it used to define the TEE_ASSERT() that has been removed. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jen.wiklander@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (QEMU)
2016-08-09  assert/panic: fix misuse of assert/panic  (Etienne Carriere)
Currently the implementation of the assert() macro does not expand to a no-op when NDEBUG is defined. This will be done in a later change. Before that, fix misuses of assert() and TEE_ASSERT():
- Correct misplaced assert() that should panic() whatever NDEBUG.
- Correct misplaced TEE_ASSERT() that should simply assert().
Also clean up many inclusions of "assert.h" and a few calls of assert(). Signed-off-by: Jens Wiklander <jen.wiklander@linaro.org> Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (QEMU)
2016-07-05  core: core_mmu_populate_user_map() arguments  (Jens Wiklander)
Replaces the struct tee_mmu_info *mmu argument for core_mmu_populate_user_map() with struct user_ta_ctx *utc instead. This affects a few other mmu functions too. Reviewed-by: Joakim Bech <joakim.bech@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-07-05  core: add core_mmu_{set,get}_entry_primitive()  (Jens Wiklander)
Adds core_mmu_set_entry_primitive() and core_mmu_get_entry_primitive() and moves core_mmu_set_entry() and core_mmu_get_entry() to generic translation table code. Reviewed-by: Joakim Bech <joakim.bech@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-06-15  core: non-linear mapping of secure world devices  (Jens Wiklander)
This patch introduces non-linear mapping of secure world devices, that is, the physical and virtual addresses of a device can differ. Reviewed-by: Joakim Bech <joakim.bech@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-05-16  core: remove kmap interface  (Jens Wiklander)
Removes kmap interface as the secure DDR memory is mapped already. Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Pascal Brand <pascal.brand@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2016-04-13  core: arm: mm: v7: use mm->va to locate the entry of ttb  (Peng Fan)
Use mm->va to locate the entry of the ttb; we should not use mm->pa, because the va may not be the same as the pa. Signed-off-by: Peng Fan <van.freenix@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>