path: root/core/arch/arm/include/mm/core_mmu.h
2019-05-02  core: introduce CFG_CORE_RESERVED_SHM  (Jens Wiklander)
Introduces CFG_CORE_RESERVED_SHM which, if set to y, enables reserved shared memory and otherwise disables support for it. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2019-05-02  core: introduce CFG_CORE_DYN_SHM  (Jens Wiklander)
Introduces CFG_CORE_DYN_SHM which, if set to y, enables dynamic shared memory and otherwise disables support for it. In contrast with CFG_DYN_SHM_CAP it actually removes the support instead of merely omitting to report it. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2019-02-13  core_mmu: do not restrict device memory mapping to PGDIR_SIZE granularity  (Jerome Forissier)
Device memory registered via register_phys_mem() is currently rounded up/down to CORE_MMU_PGDIR_SIZE (1 MiB, or 2 MiB for LPAE). This is not needed and possibly incorrect for SoCs that define I/O memory maps with regions aligned on a small page (4 KiB), because using a larger granularity could result in overlaps between secure and non-secure mappings. This could cause issues depending on the type of memory firewall used by the SoC and its configuration. In any case, memory types other than MEM_AREA_IO_{SEC,NSEC} *can* be mapped with small page granularity using register_phys_mem(), so the situation is a bit inconsistent. This commit removes the rounding by default and provides a new macro: register_phys_mem_pgdir(). Platforms that still need to use PGDIR_SIZE granularity (typically because it consumes less page table space) need to replace register_phys_mem() by register_phys_mem_pgdir(). In order to avoid any functional change in platform code, all calls to register_phys_mem() with device memory are replaced with register_phys_mem_pgdir(). In addition, CORE_MMU_DEVICE_SIZE is removed and replaced with CORE_MMU_PGDIR_SIZE since there is no unique mapping size for device memory anymore. Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reported-by: Zeng Tao <prime.zeng@hisilicon.com> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
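For illustration, a hedged sketch of how a platform might register a device after this change; MY_UART_BASE and MY_UART_SIZE are hypothetical platform macros, and the exact arguments should be checked against core_mmu.h:

    /* Small-page (4 KiB) granularity mapping of a hypothetical UART */
    register_phys_mem(MEM_AREA_IO_SEC, MY_UART_BASE, MY_UART_SIZE);

    /* Same device, but rounded to pgdir granularity (1 MiB, or 2 MiB for
     * LPAE) to save page table memory */
    register_phys_mem_pgdir(MEM_AREA_IO_SEC, MY_UART_BASE, MY_UART_SIZE);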
2019-02-01  core_mmu: add core_mmu_init_virtualization() function  (Volodymyr Babchuk)
This function will be called at OP-TEE initialization to configure the memory subsystem of the virtualization framework. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2019-02-01  core_mmu: introduce mmu partitions  (Volodymyr Babchuk)
For virtualization support we need multiple mmu partitions, one partition per virtual machine. A partition holds information about page tables, ASID, etc. When OP-TEE switches to another partition, it effectively changes how it sees memory. In this way it is possible to have multiple memory layouts with different shared buffers and TAs mapped, even with different .bss and .data sections. If virtualization is disabled, only one default partition exists and it is impossible to allocate more. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2019-02-01  core_mmu: add MEM_AREA_SEC_RAM_OVERALL memory type  (Volodymyr Babchuk)
This memory type describes a mapping that covers all secure memory as a flat mapping, so it is possible to access any portion of secure memory at any time. It will be used with virtualization extensions. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
2019-02-01  virt: add nexus memory area  (Volodymyr Babchuk)
This patch is the first in a series of patches that split OP-TEE RW memory into two regions: nexus memory and TEE memory. Nexus memory will always be mapped and will be used to store all data that is vital for the OP-TEE core and is not bound to virtual guests. TEE memory holds data specific to a certain guest. There will be a TEE memory bank for every guest and it will be mapped into the OP-TEE address space only during a call from that guest. This patch adds nexus memory and moves the stacks into it. It also provides the __nex_bss and __nex_data macros, so one can easily set the right section for a variable. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
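A minimal sketch of how the new section macros might be used; the variable names below are hypothetical and only illustrate placing core-global state in nexus memory:

    /* Lives in nexus .bss: zero-initialized, always mapped, shared by all guests */
    static unsigned int total_guest_calls __nex_bss;

    /* Lives in nexus .data: initialized, always mapped */
    static unsigned int nexus_ready __nex_data = 1;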
2018-11-27  core: base memory registration on scatter array  (Jens Wiklander)
The register_*() macros are now implemented using a scatter array. Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2018-07-04  asid: move asid allocator from tee_mmu.c to core_mmu.c  (Volodymyr Babchuk)
ASIDs will be allocated for individual virtual guests, so the allocator should reside in a more generic place. The comment for MMU_NUM_ASIDS was also updated. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
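As a rough illustration of what a generic ASID allocator looks like (a sketch, not OP-TEE's actual implementation; the MMU_NUM_ASIDS value and bitmap layout are assumptions):

    #include <stdint.h>

    #define MMU_NUM_ASIDS   64              /* assumed pool size */

    static uint64_t asid_bitmap;            /* bit n set => ASID n in use */

    static int asid_alloc(void)
    {
            int n;

            for (n = 1; n < MMU_NUM_ASIDS; n++)     /* keep ASID 0 reserved */
                    if (!(asid_bitmap & ((uint64_t)1 << n))) {
                            asid_bitmap |= (uint64_t)1 << n;
                            return n;
                    }
            return -1;                      /* pool exhausted */
    }

    static void asid_free(int n)
    {
            asid_bitmap &= ~((uint64_t)1 << n);
    }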
2018-05-16  Remove license notice from STMicroelectronics files  (Etienne Carriere)
For a while now, source file license information has been defined by SPDX identifiers. We can safely remove the verbose license text from the files that are owned by either only STMicroelectronics or only both Linaro and STMicroelectronics. Signed-off-by: Etienne Carriere <etienne.carriere@st.com> Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2018-05-15  core: generic RAM layout  (Etienne Carriere)
Include mm/generic_ram_layout.h at the top of platform_config.h to get TEE_RAM_*, TEE_TA_*, TEE_SHMEM_*, etc. defined from generic configuration directives. See the description in the head comments of generic_ram_layout.h. Suggested-by: Jordan Rhee <jordanrh@microsoft.com> Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
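A hedged sketch of what a platform_config.h relying on the generic layout could look like; the CFG_TZDRAM_*/CFG_SHMEM_* directive names and all values are assumptions here, and generic_ram_layout.h remains the authoritative reference for the expected inputs:

    /* Example secure DDR and static SHM layout (values are made up) */
    #define CFG_TZDRAM_START   0x0e000000
    #define CFG_TZDRAM_SIZE    0x01e00000
    #define CFG_SHMEM_START    0x0fe00000
    #define CFG_SHMEM_SIZE     0x00200000

    /* Derives TEE_RAM_*, TEE_TA_*, TEE_SHMEM_* from the directives above */
    #include <mm/generic_ram_layout.h>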
2018-04-25  core: remove CFG_ prefix from CFG_TEE_LOAD_ADDR  (Etienne Carriere)
TEE_LOAD_ADDR is now local to source files. It is set to the CFG_TEE_LOAD_ADDR value, if defined, only for the platforms that previously allowed the build to override the value. A few platforms hardcoded CFG_TEE_LOAD_ADDR; this change preserves these configurations. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2018-04-25  core: remove CFG_ prefix from TEE_RAM_START/VA_SIZE/PH_SIZE  (Etienne Carriere)
Almost all platforms currently define these directives from within the source code, through platform_config.h. These values do not need to be configuration directives with the CFG_ prefix. This change renames these macros so that they do not mess with the platform configuration directives.

  Old macro label        New macro label
  CFG_TEE_RAM_START      TEE_RAM_START
  CFG_TEE_RAM_VA_SIZE    TEE_RAM_VA_SIZE
  CFG_TEE_RAM_PH_SIZE    TEE_RAM_PH_SIZE

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2018-03-02  core: add ddr overall register  (Edison Ai)
register_ddr() is used to register the overall DDR address range. SDP memories, static SHM, secure DDR and so on need it to fix the problem of ranges that intersect with the overall DDR. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Edison Ai <edison.ai@arm.com>
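For illustration, a hedged example of registering the overall DDR ranges from platform code; the base addresses and sizes below are hypothetical:

    /* Two DDR banks of a hypothetical SoC */
    register_ddr(0x40000000, 0x40000000);   /* 1 GiB at 0x40000000 */
    register_ddr(0x80000000, 0x80000000);   /* 2 GiB at 0x80000000 */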
2018-03-02  core: rename register_nsec_ddr() to register_dynamic_shm()  (Edison Ai)
register_nsec_ddr() is actually only used to register dynamic, physically non-contiguous SHM, so renaming it to register_dynamic_shm() is clearer. Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Edison Ai <edison.ai@arm.com>
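The rename is purely mechanical at call sites; a hedged before/after sketch, with DRAM0_BASE/DRAM0_SIZE standing in for whatever macros a platform already uses:

    /* Before: register_nsec_ddr(DRAM0_BASE, DRAM0_SIZE); */
    /* After, same arguments, clearer intent: */
    register_dynamic_shm(DRAM0_BASE, DRAM0_SIZE);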
2018-02-16  mmu: replace _prepare_small_page_mapping with _entry_to_finer_grained  (Volodymyr Babchuk)
core_mmu_prepare_small_page_mapping() just prepares the table for the next level if there were no mappings already. core_mmu_entry_to_finer_grained() will do the same even if something is already mapped there. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
2018-01-18  Fix compiler warning with register_sdp_mem()  (Jerome Forissier)
Fixes the following warning/error when CFG_SECURE_DATA_PATH is disabled: $ make PLATFORM=hikey CFG_SECURE_DATA_PATH=n ... core/arch/arm/mm/core_mmu.c:90:61: error: ISO C does not allow extra ';' outside of a function [-Werror=pedantic] register_sdp_mem(CFG_TEE_SDP_MEM_BASE, CFG_TEE_SDP_MEM_SIZE); ^ cc1: all warnings being treated as errors Fixes: 2d9ed57b6bd8 ("Define register_sdp_mem() only when CFG_SECURE_DATA_PATH is defined") Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
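One common way to silence this class of warning is to make the disabled variant of the macro expand to a harmless declaration, so the caller's trailing ';' stays legal C. A sketch of that pattern only (not necessarily the exact fix in this commit), with the paste helpers defined locally:

    #define __SDP_PASTE2(a, b)  a##b
    #define __SDP_PASTE(a, b)   __SDP_PASTE2(a, b)

    #ifdef CFG_SECURE_DATA_PATH
    #define register_sdp_mem(addr, size)    __register_sdp_mem(addr, size)
    #else
    /* Expand to an unused dummy declaration that consumes the ';' */
    #define register_sdp_mem(addr, size) \
            static const char __SDP_PASTE(__sdp_dummy_, __COUNTER__) \
                    __attribute__((unused))
    #endif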
2018-01-17  Define register_sdp_mem() only when CFG_SECURE_DATA_PATH is defined  (Victor Chong)
Suggested-by: Jerome Forissier <jerome.forissier@linaro.org> Suggested-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Victor Chong <victor.chong@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
2018-01-10  Add SPDX license identifiers  (Jerome Forissier)
Adds one SPDX-License-Identifier line [1] to each source file that contains license text. Generated by [2]: spdxify.py --add-spdx optee_os/ The scancode tool [3] was used to double check the license matching code in the Python script. All the licenses detected by scancode are either detected by spdxify.py, or have no SPDX identifier, or are false matches. Link: [1] https://spdx.org/licenses/ Link: [2] https://github.com/jforissier/misc/blob/f7b56c8/spdxify.py Link: [3] https://github.com/nexB/scancode-toolkit Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Joakim Bech <joakim.bech@linaro.org>
2018-01-10  core: user mode translation table  (Jens Wiklander)
Adds a second translation table to be used while in user mode, containing the user mode mapping and a minimal kernel mapping. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Andrew Davis <andrew.davis@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2018-01-10  core: make core_mmu.h asm friendly  (Jens Wiklander)
Makes core_mmu.h assembly friendly by excluding C code with #ifndef ASM Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Andrew Davis <andrew.davis@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
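The pattern is simply to fence the C-only parts of the header so assembly files can still include it for the constants; a hedged sketch (the declarations shown are placeholders, not the real contents of core_mmu.h):

    /* Constants usable from both C and assembly */
    #define CORE_MMU_PGDIR_SHIFT    20      /* illustrative value */

    #ifndef ASM
    /* C-only part, skipped when the header is included from assembly (ASM defined) */
    struct tee_mmap_region;
    void example_core_mmu_dump_map(void);   /* hypothetical prototype */
    #endif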
2017-11-09  core: add MEM_AREA_TEE_ASAN  (Jens Wiklander)
Adds MEM_AREA_TEE_ASAN which is used, when the pager is enabled, to map the memory used by the address sanitizer if it is enabled. Currently this only works in configurations with the pager where emulated SRAM is used. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-10-12  core: add register_phys_mem_ul()  (Jens Wiklander)
Adds register_phys_mem_ul() which must be used (for compatibility with CFG_CORE_LARGE_PHYS_ADDR=y) when the input address and size are based on symbols generated in the link script. Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
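A hedged usage sketch; the link-script symbol and size macro below are hypothetical, the point is only that addresses derived from link symbols go through the _ul variant:

    /* Symbol provided by the link script (hypothetical name) */
    extern const uint8_t __example_rx_start[];

    register_phys_mem_ul(MEM_AREA_TEE_RAM_RX,
                         (unsigned long)__example_rx_start,
                         EXAMPLE_RX_SIZE /* hypothetical size macro */);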
2017-10-06  core: mmu: rename _divide_block into _prepare_small_page_mapping  (Etienne Carriere)
The core_mmu_divide_block() label is misleading. core_mmu_prepare_small_page_mapping() is used to allocate the required mmu table(s) and init the mmu so that a full pgdir area can be used to map 4kB small pages from a single table entry (mmu descriptor). Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmaill.com>
2017-10-06  core: fix core_mmu_divide_block() against 32bit mmu constraint  (Etienne Carriere)
core_mmu_divide_block() is used to prepare a MMU pgdir mapping table when no pgdir MMU table was previously allocated/filled to map a small page mapped address range. On non-LPAE (32bit mmu), a pgdir entry cannot be used to map both secure and non-secure pages. The pgdir descriptor defines whether the small pages inside the pgdir range will map secure or non-secure memory. Hence the core_mmu_divide_block() function takes an extra argument: the target secure/non-secure attribute of the pgdir. This argument is unused on LPAE mappings as a pgdir can hold secure as well as non-secure entries. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmaill.com>
2017-09-18  core: mmu: export map_memarea_sections  (Peng Fan)
Export map_memarea_sections. We need an mmu table dedicated to the low power feature, so export map_memarea_sections to create that section mapping. Signed-off-by: Peng Fan <peng.fan@nxp.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-09-14  core: introduce TEE_RAM_VA_START and TEE_TEXT_VA_START  (Zeng Tao)
The current OP-TEE implementation depends on the identity mapping, and CFG_TEE_RAM_START and CFG_TEE_LOAD_ADDR are used as both physical and virtual addresses, which is not extensible. This patch introduces virtual address counterparts of these two macros as a base for non-identity mapping. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Zeng Tao <prime.zeng@hisilicon.com>
2017-08-29  core: add MEM_AREA_PAGER_VASPACE  (Jens Wiklander)
Adds MEM_AREA_PAGER_VASPACE which is used to create empty translation tables as needed for the pager. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-08-29  core: add CORE_MMU_PGDIR_LEVEL  (Jens Wiklander)
Adds the define CORE_MMU_PGDIR_LEVEL which indicates the level used for page directories. Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-06-26  core: fix TODOs related to TLB maintenance in the pager  (Etienne Carriere)
Invalidate TLBs for target references instead of invalidating the whole tables. Some changes affect places where several references are modified and must be invalidated in the TLBs. This change aims at lowering the synchronization barrier required before/after the TLB maintenance operations. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Hikey AArch{32,64} pager)
2017-06-26  core: deprecate core_tlb_maintenance()  (Etienne Carriere)
The core_tlb_maintenance() indirection is not useful. This function is now deprecated and one shall call the tlbi_xxx() functions directly instead. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-06-22  core: assign non-sec DDR configuration from DT  (Jens Wiklander)
Assigns the non-secure DDR configuration from the device tree if CFG_DT=y. Any DDR configuration already present from register_nsec_ddr() is overridden. Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-06-16  core: clarify end of static mapping table  (Etienne Carriere)
Move the remaining code relying on a null size value for detecting the end of the static mapping table to a test on the type value. This is made consistent between lpae and non-lpae implementations. Rename MEM_AREA_NOTYPE into MEM_AREA_END as it is dedicated to this specific purpose. A faulty core_mmu_get_type_by_pa() can return MEM_AREA_MAXTYPE on invalid cases. Add a comment highlighting that null sized entries are not filled in the static mapping directives table. Forgive the trick on level_index_mask to fit in the 80 chars/line. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-06-02  core_mmu: add non-secure DDR ranges support  (Volodymyr Babchuk)
This patch adds a new macro `register_nsec_ddr` which allows platform code to register non-secure memory ranges. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-05-30  Remove CFG_SMALL_PAGE_USER_TA=n  (Jens Wiklander)
Removes CFG_SMALL_PAGE_USER_TA and keeps the code that was activated by CFG_SMALL_PAGE_USER_TA=y. This means that CFG_SMALL_PAGE_USER_TA=n, which resulted in TAs being mapped using 1 MiB or 2 MiB granularity, is removed. Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-05-19  core: exclusive writable/executable attribute in core mapping  (Etienne Carriere)
Make executable memory non-writable and writable memory non-executable. Effective upon CFG_CORE_RWDATA_NOEXEC=y; the default configuration enables this directive. If CFG_CORE_RWDATA_NOEXEC is enabled, the read-only sections are mapped read-only/executable while the read/write memories are mapped read/write/non-executable. Potentially 4KB of secure RAM are wasted due to the page alignment between unpaged text/rodata and unpaged read/write data. If CFG_CORE_RWDATA_NOEXEC is disabled, all text/rodata/data/... sections of the core are mapped read/write/executable; both code and rodata are mapped together without alignment constraint. Hence all "ro" is defined inside the "rx" related area: __vcore_init_ro_size is either 0 or the effective init "ro" size. As init sections are mapped read-only, the core won't be able to fill the trailing content of the last init page. Hence __init_end and __init_size are page aligned. The core must premap all physical memory as readable to allow moving the hash tables to the allocated buffer during core inits. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey) Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_virt) Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_armv8) Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (b2260)
2017-05-19  core: MEM_AREA_TEE_RAM_RW_DATA identifies core read/write data memory  (Etienne Carriere)
This change prepares the split of executable memory from read/write memory. Depending on configuration, the core private memory "TEE RAM" (heap, stack, ...) will either be in TEE_RAM or TEE_RAM_RW. The MEM_AREA_TEE_RAM_RW_DATA type abstracts the memory used for core data. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-05-19  core: introduce TEE_RAM_RX/_RO/_RW memory areas  (Etienne Carriere)
Define new memory type IDs for the core private memory:
  - MEM_AREA_TEE_RAM_RX defines read-only/executable memory.
  - MEM_AREA_TEE_RAM_RO defines read-only/non-executable memory.
  - MEM_AREA_TEE_RAM_RW defines read/write/non-executable memory.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
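Conceptually these IDs are just additional members of the core memory type enumeration; an illustrative fragment only (member order and the surrounding members are not shown here):

    enum teecore_memtypes {
            /* ... */
            MEM_AREA_TEE_RAM_RX,    /* core text: read-only, executable */
            MEM_AREA_TEE_RAM_RO,    /* core rodata: read-only, non-executable */
            MEM_AREA_TEE_RAM_RW,    /* core data/bss/heap: read/write, non-executable */
            /* ... */
    };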
2017-05-19  core: core static mapping should not mandate a default mapped SHM  (Etienne Carriere)
The default SHM is a physically contiguous memory area that is mapped inside the core by default. Before this change, the core mapping mandated the registration of a NSEC_SHM area. Other core layers may mandate such memory, but there should not be any constraint in the static mapping initialisation of the core; the other layers already check that this area is defined when they require it. As a side effect, the change updates core_mmu_is_shm_cached() so that it reflects the cache attribute defined for MEM_AREA_NSEC_SHM. The mapped memory reference map_nsec_shm is now useless and can be removed. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-05-15  core: unexport deprecated core_va2pa_helper()  (Etienne Carriere)
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-05-04  core: mm: add missing entry in teecore_memtype_name()  (Jerome Forissier)
teecore_memtype_name() does not handle MEM_AREA_SHM_VASPACE. Add it. Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-04-28  core_mmu: add page map/unmap functions  (Volodymyr Babchuk)
These functions allow mapping a list of physical pages at a specified virtual memory address. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> [jf: remove braces {} around single statement block] Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
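The interface is roughly of this shape, based only on the commit description; a hedged sketch, with core_mmu.h remaining the authority for the exact prototypes:

    /* Map num_pages physical pages contiguously at vstart with the
     * attributes of the given memory type */
    TEE_Result core_mmu_map_pages(vaddr_t vstart, paddr_t *pages,
                                  size_t num_pages,
                                  enum teecore_memtypes memtype);

    /* Undo a previous core_mmu_map_pages() */
    void core_mmu_unmap_pages(vaddr_t vstart, size_t num_pages);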
2017-04-28  mm: add new VA region for dynamic shared buffers  (Volodymyr Babchuk)
This region will be used later to dynamically map shared buffers provided by Normal World. Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-04-19  core: mm: print memory type name instead of numerical value  (Jerome Forissier)
Improve the legibility of the memory manager debug traces by converting the memory types to strings before printing them in dump_mmap_table(), add_phys_mem() and add_va_space(). Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
2017-04-12  core: default define stack alignment and core vmem size  (Etienne Carriere)
Default define CFG_TEE_RAM_VA_SIZE if it is not defined by the platform. As a side effect of bringing a default value for CFG_TEE_RAM_VA_SIZE into core_mmu.h, and thus including 'platform_config.h', the macro STACK_ALIGNMENT defined in user_ta.c must not conflict with the macro defined by the platform. Hence this change also default defines STACK_ALIGNMENT if it is not defined by the platform. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-04-12  core: fix register_XXX_mem() against physical address  (Etienne Carriere)
Use __COUNTER__ instead of the registered physical address to generate the label of the structure defined by the macros __register_phys_mem() and __register_sdp_mem(). Before this change, since the "addr" argument was used in the label, one could not use these macros with an address that is the result of a local operation. I.e., this implementation was not possible: __register_phys_mem(<any-id>, ROUNDUP(<addr>, <value>), <size>); and one needed to use a temporary macro for the address computation: #define MY_BASE_ADDRESS ROUNDUP(<addr>, <value>) __register_phys_mem(<any-id>, MY_BASE_ADDRESS, <size>); Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
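The underlying technique is ordinary token pasting with __COUNTER__ so that every expansion yields a unique identifier regardless of the address expression; a generic sketch of the technique (the struct, section and macro names below are made up, not OP-TEE's):

    #define _PASTE2(a, b)   a##b
    #define _PASTE(a, b)    _PASTE2(a, b)

    struct example_phys_mem { int type; unsigned long addr; unsigned long size; };

    #define example_register_phys_mem(type, addr, size) \
            static const struct example_phys_mem \
            _PASTE(__example_phys_mem_, __COUNTER__) \
            __attribute__((used, section("example_phys_mem_section"))) = \
                    { (type), (addr), (size) }

    /* An address computed by a local expression now works as an argument */
    example_register_phys_mem(0, 0x10000000 + 0x1000, 0x1000);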
2017-03-17  core: identify SDP memories through memory objects  (Etienne Carriere)
SDP memory objects are used to identify memref parameters related to Secure-Data-Path. This change creates SDP memory objects during inits. Each mobj identifies a registered SDP memory. SDP memory objects are not mapped in the core by default, hence a default null virtual address. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
2017-03-17  core: platform registers SDP memories  (Etienne Carriere)
Secure data path support is conditioned by CFG_SECURE_DATA_PATH. The core statically defines the SDP shared memory objects through the macro register_sdp_shm(). The configuration directives CFG_TEE_SDP_SHM_BASE/_SIZE allow registering a "default" SDP memory area from the generic implementation. SDP memories are not mapped in the OP-TEE core by default, hence their locations are not tested against the OP-TEE memory layout. This change verifies the SDP memories layout against the OP-TEE memory mapping. This is mandatory to prevent false identification of memory references when referring only to the list of the registered SDP memories while identifying a memory reference (later changes for SDP support). Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
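For illustration, registering the default SDP area from platform code would look roughly like the line quoted in the later warning-fix commit; the CFG_TEE_SDP_MEM_BASE/_SIZE values are platform-provided, and the macro name evolved over time (register_sdp_shm() here, register_sdp_mem() in later commits):

    /* Register the platform's default secure data path memory */
    register_sdp_mem(CFG_TEE_SDP_MEM_BASE, CFG_TEE_SDP_MEM_SIZE);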
2017-03-16  core: minor cleaning in cache resources  (Etienne Carriere)
Do not hard code the enumerated 'cache_op' IDs. Remove unsupported (and unused) WRITE_BUFFER_DRAIN operation. Deprecate L2CACHE_xxx operation IDs and use the already existing DCACHE_xxx operation IDs instead. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
2017-03-16  core: rename cache_maintenance_l1() into cache_op_inner()  (Etienne Carriere)
Rename cache_maintenance_l1() into cache_op_inner() to prevent confusion, as the function targets the inner cache and not only the level 1 cache. Fix the return type of cache_op_inner(). Suggested-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>