path: root/tools/opt
2018-02-02  Merging r323155:  (Hans Wennborg)
------------------------------------------------------------------------
r323155 | chandlerc | 2018-01-22 23:05:25 +0100 (Mon, 22 Jan 2018) | 133 lines

Introduce the "retpoline" x86 mitigation technique for variant #2 of the speculative execution vulnerabilities disclosed today, specifically identified by CVE-2017-5715, "Branch Target Injection", which is one of the two halves of Spectre.

Summary: First, we need to explain the core of the vulnerability. Note that this is a very incomplete description; please see the Project Zero blog post for details: https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html

The basis for branch target injection is to direct speculative execution of the processor to some "gadget" of executable code by poisoning the prediction of indirect branches with the address of that gadget. The gadget in turn contains an operation that provides a side channel for reading data. Most commonly, this will look like a load of secret data followed by a branch on the loaded value and then a load of some predictable cache line. The attacker then uses timing of the processor's cache to determine which direction the branch took *in the speculative execution*, and in turn what one bit of the loaded value was. Due to the nature of these timing side channels and the branch predictor on Intel processors, this allows an attacker to leak data only accessible to a privileged domain (like the kernel) back into an unprivileged domain.

The goal is simple: avoid generating code which contains an indirect branch that could have its prediction poisoned by an attacker. In many cases, the compiler can simply use directed conditional branches and a small search tree. LLVM already has support for lowering switches in this way and the first step of this patch is to disable jump-table lowering of switches and introduce a pass to rewrite explicit indirectbr sequences into a switch over integers.

However, there is no fully general alternative to indirect calls. We introduce a new construct we call a "retpoline" to implement indirect calls in a non-speculatable way. It can be thought of loosely as a trampoline for indirect calls which uses the RET instruction on x86. Further, we arrange for a specific call->ret sequence which ensures the processor predicts the return to go to a controlled, known location. The retpoline then "smashes" the return address pushed onto the stack by the call with the desired target of the original indirect call. The result is a predicted return to the next instruction after a call (which can be used to trap speculative execution within an infinite loop) and an actual indirect branch to an arbitrary address.

On 64-bit x86 ABIs, this is especially easy to do in the compiler by using a guaranteed scratch register to pass the target into this device. For 32-bit ABIs there isn't a guaranteed scratch register and so several different retpoline variants are introduced to use a scratch register if one is available in the calling convention and to otherwise use direct stack push/pop sequences to pass the target address. This "retpoline" mitigation is fully described in the following blog post: https://support.google.com/faqs/answer/7625886

We also support a target feature that disables emission of the retpoline thunk by the compiler to allow for custom thunks if users want them. These are particularly useful in environments like kernels that routinely do hot-patching on boot and want to hot-patch their thunk to different code sequences.
They can write this custom thunk and use `-mretpoline-external-thunk` *in addition* to `-mretpoline`. In this case, on x86-64 the thunk names must be:
```
__llvm_external_retpoline_r11
```
or on 32-bit:
```
__llvm_external_retpoline_eax
__llvm_external_retpoline_ecx
__llvm_external_retpoline_edx
__llvm_external_retpoline_push
```
And the target of the retpoline is passed in the named register, or in the case of the `push` suffix, on the top of the stack via a `pushl` instruction.

There is one other important source of indirect branches in x86 ELF binaries: the PLT. These patches also include support for LLD to generate PLT entries that perform a retpoline-style indirection.

The only other indirect branches remaining that we are aware of are from precompiled runtimes (such as crt0.o and similar). The ones we have found are not really attackable, and so we have not focused on them here, but eventually these runtimes should also be replicated for retpoline-ed configurations for completeness.

For kernels or other freestanding or fully static executables, the compiler switch `-mretpoline` is sufficient to fully mitigate this particular attack. For dynamic executables, you must compile *all* libraries with `-mretpoline` and additionally link the dynamic executable and all shared libraries with LLD and pass `-z retpolineplt` (or use similar functionality from some other linker). We strongly recommend also using `-z now` as non-lazy binding allows the retpoline-mitigated PLT to be substantially smaller.

When manually applying transformations similar to `-mretpoline` to the Linux kernel we observed very small performance hits to applications running typical workloads, and relatively minor hits (approximately 2%) even for extremely syscall-heavy applications. This is largely due to the small number of indirect branches that occur in performance-sensitive paths of the kernel.

When using these patches on statically linked applications, especially C++ applications, you should expect to see a much more dramatic performance hit. For microbenchmarks that are switch, indirect-, or virtual-call heavy we have seen overheads ranging from 10% to 50%. However, real-world workloads exhibit substantially lower performance impact. Notably, techniques such as PGO and ThinLTO dramatically reduce the impact of hot indirect calls (by speculatively promoting them to direct calls) and allow optimized search trees to be used to lower switches. If you need to deploy these techniques in C++ applications, we *strongly* recommend that you ensure all hot call targets are statically linked (avoiding PLT indirection) and use both PGO and ThinLTO. Well-tuned servers using all of these techniques saw 5% - 10% overhead from the use of retpoline.

We will add detailed documentation covering these components in subsequent patches, but wanted to make the core functionality available as soon as possible. Happy for more code review, but we'd really like to get these patches landed and backported ASAP for obvious reasons. We're planning to backport this to both 6.0 and 5.0 release streams and get a 5.0 release with just this cherry picked ASAP for distros and vendors.

This patch is the work of a number of people over the past month: Eric, Reid, Rui, and myself. I'm mailing it out as a single commit due to the time-sensitive nature of landing this and the need to backport it. Huge thanks to everyone who helped out here, and everyone at Intel who helped out in discussions about how to craft this.
Also, credit goes to Paul Turner (at Google, but not an LLVM contributor) for much of the underlying retpoline design.

Reviewers: echristo, rnk, ruiu, craig.topper, DavidKreitzer

Subscribers: sanjoy, emaste, mcrosier, mgorny, mehdi_amini, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D41723
------------------------------------------------------------------------

git-svn-id: https://llvm.org/svn/llvm-project/llvm/branches/release_60@324067 91177308-0d34-0410-b5e6-96231b3b80d8
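For illustration only (not part of the commit message): a custom external thunk of the kind described above could be provided as module-level inline asm in a C++ translation unit. The sketch below follows the published retpoline pattern (call over a speculation trap, smash the pushed return address, then ret); the exact sequence any particular kernel ships may differ.
```
// Hedged sketch of a custom __llvm_external_retpoline_r11 thunk, for use with
// -mretpoline-external-thunk. Illustrative only, not authoritative.
asm(".globl __llvm_external_retpoline_r11\n"
    "__llvm_external_retpoline_r11:\n"
    "  call 1f\n"           // push a return address pointing at the trap below
    "0:\n"
    "  pause\n"             // speculation trap: mispredicted speculative
    "  lfence\n"            //   execution spins harmlessly here
    "  jmp 0b\n"
    "1:\n"
    "  mov %r11, (%rsp)\n"  // smash the return address with the real target
    "  ret\n");             // "return" to the call target held in %r11
```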
2017-12-08  [Debugify] Add a pass to test debug info preservation  (Vedant Kumar)
The Debugify pass synthesizes debug info for IR. It's paired with a CheckDebugify pass which determines how much of the original debug info is preserved. These passes make it easier to create targeted tests for debug info preservation.

Here is the Debugify algorithm:

    NextLine = 1
    for (Instruction &I : M)
      attach DebugLoc(NextLine++) to I

    NextVar = 1
    for (Instruction &I : M)
      if (canAttachDebugValue(I))
        attach dbg.value(NextVar++) to I

The CheckDebugify pass expects contiguous ranges of DILocations and DILocalVariables. If it fails to find all of the expected debug info, it prints a specific error to stderr which can be FileChecked. This was discussed on llvm-dev in the thread: "Passes to add/validate synthetic debug info" Differential Revision: https://reviews.llvm.org/D40512 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@320202 91177308-0d34-0410-b5e6-96231b3b80d8
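For illustration (not from the commit): a minimal C++ sketch of the line-numbering half of the algorithm above, assuming each function already carries a DISubprogram; the real pass also builds the compile unit and subprograms with DIBuilder and emits the dbg.value intrinsics.
```
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/Module.h"
using namespace llvm;

// Attach synthetic, monotonically increasing line numbers to every
// instruction, mirroring the "NextLine" loop in the commit message.
static void attachSyntheticLines(Module &M) {
  unsigned NextLine = 1;
  for (Function &F : M) {
    DISubprogram *SP = F.getSubprogram(); // assumed to exist already
    if (!SP)
      continue;
    for (Instruction &I : instructions(F))
      I.setDebugLoc(DILocation::get(M.getContext(), NextLine++, /*Column=*/1, SP));
  }
}
```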
2017-12-05  [CMake] Use PRIVATE in target_link_libraries for executables  (Shoaib Meenai)
We currently use target_link_libraries without an explicit scope specifier (INTERFACE, PRIVATE or PUBLIC) when linking executables. Dependencies added in this way apply to both the target and its dependents, i.e. they become part of the executable's link interface and are transitive. Transitive dependencies generally don't make sense for executables, since you wouldn't normally be linking against an executable. This also causes issues for generating install export files when using LLVM_DISTRIBUTION_COMPONENTS. For example, clang has a lot of LLVM library dependencies, which are currently added as interface dependencies. If clang is in the distribution components but the LLVM libraries it depends on aren't (which is a perfectly legitimate use case if the LLVM libraries are being built static and there are therefore no run-time dependencies on them), CMake will complain about the LLVM libraries not being in the export set when attempting to generate the install export file for clang. This is reasonable behavior on CMake's part, and the right thing is for LLVM's build system to explicitly use PRIVATE dependencies for executables. Unfortunately, CMake doesn't allow you to mix and match the keyword and non-keyword target_link_libraries signatures for a single target; i.e., if a single call to target_link_libraries for a particular target uses one of the INTERFACE, PRIVATE, or PUBLIC keywords, all other calls must also be updated to use those keywords. This means we must do this change in a single shot. I also fully expect to have missed some instances; I tested by enabling all the projects in the monorepo (except dragonegg), and configuring both with and without shared libraries, on both Darwin and Linux, but I'm planning to rely on the buildbots for other configurations (since it should be pretty easy to fix those). Even after this change, we still have a lot of target_link_libraries calls that don't specify a scope keyword, mostly for shared libraries. I'm thinking about addressing those in a follow-up, but that's a separate change IMO. Differential Revision: https://reviews.llvm.org/D40823 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@319840 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-27  Rename CommandFlags.h -> CommandFlags.def  (David Blaikie)
This isn't a real header: it includes static functions and had external-linkage variables (though this change makes them static, since that's what they should be), so it can't be included more than once in a program. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@319082 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-14  Rename CountingFunctionInserter and use for both mcount and cygprofile calls, before and after inlining  (Hans Wennborg)
Clang implements the -finstrument-functions flag inherited from GCC, which inserts calls to __cyg_profile_func_{enter,exit} on function entry and exit. This is useful for getting a trace of how the functions in a program are executed. Normally, the calls remain even if a function is inlined into another function, but it is useful to be able to turn this off for users who are interested in a lower-level trace, i.e. one that reflects what functions are called post-inlining. (We use this to generate link order files for Chromium.) LLVM already has a pass for inserting similar instrumentation calls to mcount(), which it does after inlining. This patch renames and extends that pass to handle calls both to mcount and the cygprofile functions, before and/or after inlining as controlled by function attributes. Differential Revision: https://reviews.llvm.org/D39287 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@318195 91177308-0d34-0410-b5e6-96231b3b80d8
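For illustration (not from the commit): the hooks inserted by -finstrument-functions have the signatures documented for GCC and Clang, and a user-supplied tracer, with the hooks themselves excluded from instrumentation, might look like this minimal sketch.
```
#include <cstdio>

// Called on entry to every instrumented function.
extern "C" __attribute__((no_instrument_function))
void __cyg_profile_func_enter(void *this_fn, void *call_site) {
  std::fprintf(stderr, "enter %p (from %p)\n", this_fn, call_site);
}

// Called on exit from every instrumented function.
extern "C" __attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *this_fn, void *call_site) {
  std::fprintf(stderr, "exit  %p (from %p)\n", this_fn, call_site);
}
```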
2017-11-03  re-land "[ExpandMemCmp] Split ExpandMemCmp from CodeGen into its own pass."  (Clement Courbet)
Fix undefined references: ExpandMemCmp belongs to CodeGen/, not Scalar/. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317318 91177308-0d34-0410-b5e6-96231b3b80d8
2017-10-24  [opt] Initialize WriteBitcode pass.  (Michael Kruse)
Probably due to a change in how some pass initializes its dependencies, the -write-bitcode pass (Bitcode/Writer/BitcodeWriterPass.cpp) is not initialized in opt anymore and therefore not usable with `opt -write-bitcode`. Explicitly call initializeWriteBitcodePassPass() to make it available in opt again. Differential Revision: https://reviews.llvm.org/D39223 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@316464 91177308-0d34-0410-b5e6-96231b3b80d8
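For illustration (not from the commit): the fix boils down to one extra explicit initialization call in opt's startup code, roughly as sketched below; the header locations and the surrounding initialization calls are assumptions, not a copy of opt.cpp.
```
#include "llvm/InitializePasses.h"
#include "llvm/PassRegistry.h"
using namespace llvm;

void initOptPassesSketch() {
  PassRegistry &Registry = *PassRegistry::getPassRegistry();
  initializeCore(Registry);
  // The fix: make sure -write-bitcode resolves to an initialized pass again.
  initializeWriteBitcodePassPass(Registry);
}
```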
2017-10-12  Revert "TargetMachine: Merge TargetMachine and LLVMTargetMachine"  (Matthias Braun)
Reverting to investigate layering effects of MCJIT not linking libCodeGen but using TargetMachine::getNameWithPrefix() breaking the lldb bots. This reverts commit r315633. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@315637 91177308-0d34-0410-b5e6-96231b3b80d8
2017-10-12  TargetMachine: Merge TargetMachine and LLVMTargetMachine  (Matthias Braun)
Merge LLVMTargetMachine into TargetMachine.

- There is no in-tree target anymore that just implements TargetMachine but not LLVMTargetMachine.
- It should still be possible to stub out all the various functions in case a target does not want to use lib/CodeGen.
- This simplifies the code and avoids methods ending up in the wrong interface.

Differential Revision: https://reviews.llvm.org/D38489 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@315633 91177308-0d34-0410-b5e6-96231b3b80d8
2017-10-02  Move the stripping of invalid debug info from the Verifier to AutoUpgrade.  (Adrian Prantl)
This came out of a recent discussion on llvm-dev (https://reviews.llvm.org/D38042). Currently the Verifier will strip the debug info metadata from a module if it finds the debug info to be malformed. This feature is very valuable since it allows us to improve the Verifier by making it stricter without breaking compatibility, but arguably the Verifier pass should not be modifying the IR. This patch moves the stripping of broken debug info into AutoUpgrade (UpgradeDebugInfo to be precise), which is a much better location for this since the stripping of malformed debug info (i.e., debug info produced by older, buggy versions of Clang) is a (harsh) form of AutoUpgrade. This change is mostly NFC in nature; the one big difference is the behavior when LLVM module passes are introducing malformed debug info. Prior to this patch, a NoAsserts build would have printed a warning and stripped the debug info; after this patch, the Verifier will report a fatal error. I believe this behavior is actually more desirable anyway. Differential Revision: https://reviews.llvm.org/D38184 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@314699 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-23  [Support] Rename tool_output_file to ToolOutputFile, NFC  (Reid Kleckner)
This class isn't similar to anything from the STL, so it shouldn't use the STL naming conventions. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@314050 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-31  [Analysis] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes  (Eugene Zelenko)
Also affected in files (NFC). git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312289 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-20  Keep Optimization Remark Yaml in NewPM  (Sam Elliott)
Summary: The New Pass Manager infrastructure was forgetting to keep around the optimization remark yaml file that the compiler might have been producing. This meant setting the option to '-' for stdout worked, but setting it to a filename didn't give file output (presumably it was deleted because compilation didn't explicitly keep it). This change just ensures that the file is kept if compilation succeeds. So far I have updated one of the optimization remark output tests to add a version with the new pass manager. It is my intention for this patch to also include changes to all tests that use `-opt-remark-output=` but I wanted to get the code patch ready for review while I was making all those changes. Fixes https://bugs.llvm.org/show_bug.cgi?id=33951 Reviewers: anemet, chandlerc Reviewed By: anemet, chandlerc Subscribers: javed.absar, chandlerc, fhahn, llvm-commits Differential Revision: https://reviews.llvm.org/D36906 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311271 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-04  [Polly][PM] Register polly passes with the opt tool for the new-pm path  (Philip Pfaffe)
Summary: When polly is linked into the tools because of the LLVM_POLLY_LINK_INTO_TOOLS option being set, we need to register its passes with the PassBuilder. Because polly is linked in, we can directly call its callback registration method, which registers the appropriate callbacks with the new PM's PassBuilder. This essentially follows exactly the way it worked with the legacy PM. Reviewers: grosser, chandlerc, bollu Reviewed By: grosser Subscribers: pollydev, llvm-commits Differential Revision: https://reviews.llvm.org/D36273 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310043 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-03  Delete Default and JITDefault code models  (Rafael Espindola)
IMHO it is an antipattern to have an enum value that is Default. At any given piece of code it is not clear if we have to handle Default or if it has already been mapped to a concrete value. In this case in particular, only the target can do the mapping and it is nice to make sure it is always done. This deletes the two default enum values of CodeModel and uses an explicit Optional<CodeModel> when it is possible that it is unspecified. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309911 91177308-0d34-0410-b5e6-96231b3b80d8
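For illustration (not from the commit): a small sketch of the resulting API shape, where the code model stays unset until a target maps it to a concrete value; the helper name below is illustrative, not an actual LLVM function.
```
#include "llvm/ADT/Optional.h"
#include "llvm/Support/CodeGen.h"
using namespace llvm;

// Illustrative mapping from "possibly unspecified" to a concrete model.
CodeModel::Model mapCodeModel(Optional<CodeModel::Model> CM) {
  return CM ? *CM : CodeModel::Small; // the target-specific default goes here
}
```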
2017-07-26  Make new PM honor -fdebug-info-for-profiling  (Dehao Chen)
Summary: The new PM needs to invoke the add-discriminators pass when building with -fdebug-info-for-profiling. Reviewers: chandlerc, davidxl Reviewed By: chandlerc Subscribers: sanjoy, llvm-commits Differential Revision: https://reviews.llvm.org/D35744 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309121 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-26  Add test coverage for new PM PGOOpt handling.  (Dehao Chen)
Summary: This patch adds flags and tests to cover the PGOOpt handling logic in the new PM. Reviewers: chandlerc, davide Reviewed By: chandlerc Subscribers: sanjoy, llvm-commits Differential Revision: https://reviews.llvm.org/D35807 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309076 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-11  [PM] Another post-commit fix in NewPMDriver  (Philip Pfaffe)
There were two errors in the parsing of opt's command line options for extension point pipelines. The EP callbacks are not supposed to return a value. To check the pipeline text for correctness, I now try to parse it into a temporary PM object, and print a message on failure. This solves the compile-time error for the lambda return type, and also correctly handles unparsable pipelines. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@307649 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-10  [PM] Fix a warning.  (Philip Pfaffe)
The DebugLogging argument was unused in the EP callbacks registration. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@307536 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-10  [PM] Fix r307532: Get rid of a dangling reference.  (Philip Pfaffe)
An escaping lambda's by-reference capture of a local variable caused a dangling reference. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@307534 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-10  [PM] Enable registration of out-of-tree passes with PassBuilder  (Philip Pfaffe)
Summary: This patch adds a callback registration API to the PassBuilder, enabling registering out-of-tree passes with it. Through the Callback API, callers may register callbacks with the various stages at which passes are added into pass managers, including parsing of a pass pipeline as well as at extension points within the default -O pipelines. Registering utilities like `require<>` and `invalidate<>` needs to be handled manually by the caller, but a helper is provided. Additionally, adding passes at pipeline extension points is exposed through the opt tool. This patch adds a `-passes-ep-X` commandline option for every extension point X, which opt parses into pipelines inserted into that extension point. Reviewers: chandlerc Reviewed By: chandlerc Subscribers: lksbhm, grosser, davide, mehdi_amini, llvm-commits, mgorny Differential Revision: https://reviews.llvm.org/D33464 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@307532 91177308-0d34-0410-b5e6-96231b3b80d8
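For illustration (not from the commit): a hedged sketch of what registering an out-of-tree module pass through the callback API described above can look like; the pass and its pipeline name are made up.
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"
using namespace llvm;

// A trivial out-of-tree pass (placeholder).
struct MyModulePass : PassInfoMixin<MyModulePass> {
  PreservedAnalyses run(Module &, ModuleAnalysisManager &) {
    return PreservedAnalyses::all();
  }
};

// Hook the pass into textual pipeline parsing, e.g. -passes=my-module-pass.
void registerMyPasses(PassBuilder &PB) {
  PB.registerPipelineParsingCallback(
      [](StringRef Name, ModulePassManager &MPM,
         ArrayRef<PassBuilder::PipelineElement>) {
        if (Name == "my-module-pass") {
          MPM.addPass(MyModulePass());
          return true;
        }
        return false;
      });
}
```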
2017-06-30  [ORE] Add diagnostics hotness threshold  (Brian Gesiak)
Summary: Add an option to prevent diagnostics that do not meet a minimum hotness threshold from being output. When generating optimization remarks for large codebases with a ton of cold code paths, this option can be used to limit the optimization remark output to a reasonable size. Discussion of this change can be read here: http://lists.llvm.org/pipermail/llvm-dev/2017-June/114377.html Reviewers: anemet, davidxl, hfinkel Reviewed By: anemet Subscribers: qcolombet, javed.absar, fhahn, eraman, llvm-commits Differential Revision: https://reviews.llvm.org/D34867 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306912 91177308-0d34-0410-b5e6-96231b3b80d8
2017-06-30  [ORE] Unify spelling as "diagnostics hotness"  (Brian Gesiak)
Summary: To enable profile hotness information in diagnostics output, Clang takes the option `-fdiagnostics-show-hotness` -- that's "diagnostics", with an "s" at the end. Clang also defines `CodeGenOptions::DiagnosticsWithHotness`. LLVM, on the other hand, defines `LLVMContext::getDiagnosticHotnessRequested` -- that's "diagnostic", not "diagnostics". It's a small difference, but it's confusing, typo-inducing, and frustrating. Add a new method with the spelling "diagnostics", and "deprecate" the old spelling. Reviewers: anemet, davidxl Reviewed By: anemet Subscribers: llvm-commits, mehdi_amini Differential Revision: https://reviews.llvm.org/D34864 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306848 91177308-0d34-0410-b5e6-96231b3b80d8
2017-06-01  [ThinLTO] Migrate ThinLTOBitcodeWriter to the new PM.  (Tim Shen)
Summary: Also see D33429 for other ThinLTO + New PM related changes. Reviewers: davide, chandlerc, tejohnson Subscribers: mehdi_amini, Prazek, cfe-commits, inglorion, llvm-commits, eraman Differential Revision: https://reviews.llvm.org/D33525 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304378 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-18  [LegacyPassManager] Remove TargetMachine constructors  (Francis Visoiu Mistrih)
This provides a new way to access the TargetMachine through TargetPassConfig, as a dependency. The patterns replaced here are:

* Passes handling a null TargetMachine call `getAnalysisIfAvailable<TargetPassConfig>`.
* Passes not handling a null TargetMachine `addRequired<TargetPassConfig>` and call `getAnalysis<TargetPassConfig>`.
* MachineFunctionPasses now use MF.getTarget().
* Remove all the TargetMachine constructors.
* Remove INITIALIZE_TM_PASS.

This fixes a crash when running `llc -start-before prologepilog`. PEI needs StackProtector, which gets constructed without a TargetMachine by the pass manager. The StackProtector pass doesn't handle the case where there is no TargetMachine, so it segfaults. Related to PR30324.

Differential Revision: https://reviews.llvm.org/D33222 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@303360 91177308-0d34-0410-b5e6-96231b3b80d8
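For illustration (not from the commit): a hedged sketch of the first replacement pattern listed above, for a legacy pass that can tolerate a missing TargetMachine; the pass itself is a placeholder, not an in-tree pass.
```
#include "llvm/CodeGen/TargetPassConfig.h"
#include "llvm/IR/Function.h"
#include "llvm/Pass.h"
#include "llvm/Target/TargetMachine.h"
using namespace llvm;

namespace {
struct ExamplePass : public FunctionPass {
  static char ID;
  ExamplePass() : FunctionPass(ID) {}

  bool runOnFunction(Function &F) override {
    const TargetMachine *TM = nullptr;
    // Optional dependency: only use the TM if a TargetPassConfig is present.
    if (auto *TPC = getAnalysisIfAvailable<TargetPassConfig>())
      TM = &TPC->getTM<TargetMachine>();
    (void)TM; // a real pass would consult TM here
    return false;
  }
};
} // end anonymous namespace

char ExamplePass::ID = 0;
```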
2017-05-15  [X86] Relocate code of replacement of subtarget unsupported masked memory intrinsics to run also on -O0 option.  (Ayman Musa)
Currently, when masked load, store, gather or scatter intrinsics are used, we check in the CodeGenPrepare pass if the subtarget supports these intrinsics, and if not we replace them with scalar code - this is a functional transformation, not an optimization (not optional). The CodeGenPrepare pass does not run when the optimization level is set to CodeGenOpt::None (-O0). A functional transformation should run with all optimization levels, so here I created a new pass which runs on all optimization levels and does no more than this transformation. Differential Revision: https://reviews.llvm.org/D32487 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@303050 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-10  Add a late IR expansion pass for the experimental reduction intrinsics.  (Amara Emerson)
This pass uses a new target hook to decide whether or not to expand a particular intrinsic to the shufflevector sequence. Differential Revision: https://reviews.llvm.org/D32245 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302631 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-10  [CodeGen] Split SafeStack into a LegacyPass and a utility. NFC.  (Ahmed Bougacha)
This lets the pass focus on gathering the required analyses, and the utility class focus on the transformation. Differential Revision: https://reviews.llvm.org/D31303 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302609 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-26  Turn DISubprogram into a variable-length node.  (Adrian Prantl)
DISubprogram currently has 10 pointer operands, several of which are often nullptr. This patch reduces the amount of memory allocated by DISubprogram by rearranging the operands such that containing type, template params, and thrown types come last, and are only allocated when they are non-null (or followed by non-null operands). This patch also eliminates the entirely unused DisplayName operand. This saves up to 4 pointer operands per DISubprogram. (I tried measuring the effect on peak memory usage on an LTO link of an X86 llc, but the results were very noisy). This reapplies r301498 with an attempted workaround for g++. Differential Revision: https://reviews.llvm.org/D32560 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301501 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-26  Revert "Turn DISubprogram into a variable-length node."  (Adrian Prantl)
This reverts commit r301498 while investigating bot breakage. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301499 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-26  Turn DISubprogram into a variable-length node.  (Adrian Prantl)
DISubprogram currently has 10 pointer operands, several of which are often nullptr. This patch reduces the amount of memory allocated by DISubprogram by rearranging the operands such that containing type, template params, and thrown types come last, and are only allocated when they are non-null (or followed by non-null operands). This patch also eliminates the entirely unused DisplayName operand. This saves up to 4 pointer operands per DISubprogram. (I tried measuring the effect on peak memory usage on an LTO link of an X86 llc, but the results were very noisy). git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301498 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-23  [ThinLTO] Add support for emitting minimized bitcode for thin link  (Teresa Johnson)
Summary: The cumulative size of the bitcode files for a very large application can be huge, particularly with -g. In a distributed build environment, all of these files must be sent to the remote build node that performs the thin link step, and this can exceed size limits. The thin link actually only needs the summary along with a bitcode symbol table. Until we have a proper bitcode symbol table, simply stripping the debug metadata results in significant size reduction. Add support for an option to additionally emit minimized bitcode modules, just for use in the thin link step, which for now just strips all debug metadata. I plan to add a cc1 option so this can be invoked easily during the compile step. However, care must be taken to ensure that these minimized thin link bitcode files produce the same index as with the original bitcode files, as these original bitcode files will be used in the backends. Specifically: 1) The module hash used for caching is typically produced by hashing the written bitcode, and we want to include the hash that would correspond to the original bitcode file. This is because we want to ensure that changes in the stripped portions affect caching. Added plumbing to emit the same module hash in the minimized thin link bitcode file. 2) The module paths in the index are constructed from the module ID of each thin linked bitcode, and typically is automatically generated from the input file path. This is the path used for finding the modules to import from, and obviously we need this to point to the original bitcode files. Added gold-plugin support to take a suffix replacement during the thin link that is used to override the identifier on the MemoryBufferRef constructed from the loaded thin link bitcode file. The assumption is that the build system can specify that the minimized bitcode file has a name that is similar but uses a different suffix (e.g. out.thinlink.bc instead of out.o). Added various tests to ensure that we get identical index files out of the thin link step. Reviewers: mehdi_amini, pcc Subscribers: Prazek, llvm-commits Differential Revision: https://reviews.llvm.org/D31027 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298638 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-21  Do not inline hot callsites for samplepgo in thinlto compile phase.  (Dehao Chen)
Summary: SamplePGO passes will be invoked twice in a ThinLTO build: once in the compile phase and once in the backend. We want to make sure the IR in the second phase matches the hot part of the profile, thus we do not want to inline hot callsites in the first phase. Reviewers: tejohnson, eraman Reviewed By: tejohnson Subscribers: mehdi_amini, llvm-commits, Prazek Differential Revision: https://reviews.llvm.org/D31201 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298428 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-17  opt: Rename -default-data-layout flag to -data-layout and make it always override the layout.  (Peter Collingbourne)
There isn't much point in a flag that only works if the data layout is empty. Differential Revision: https://reviews.llvm.org/D30014 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295468 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-28  Cleanup dump() functions.  (Matthias Braun)
We had various variants of defining dump() functions in LLVM. Normalize them (this should just consistently implement the things discussed in http://lists.llvm.org/pipermail/cfe-dev/2014-January/034323.html).

For reference:

- Public headers should just declare the dump() method but not use LLVM_DUMP_METHOD or #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP).
- The definition of a dump method should look like this:

    #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
    LLVM_DUMP_METHOD void MyClass::dump() {
      // print stuff to dbgs()...
    }
    #endif

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293359 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-26  Replace addEarlyAsPossiblePasses callback with adjustPassManager  (Stanislav Mekhanoshin)
This change introduces adjustPassManager target callback giving a target an opportunity to tweak PassManagerBuilder before pass managers are populated. This generalizes and replaces addEarlyAsPossiblePasses target callback. In particular that can be used to add custom passes to extension points other than EP_EarlyAsPossible. Differential Revision: https://reviews.llvm.org/D28336 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293189 91177308-0d34-0410-b5e6-96231b3b80d8
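For illustration (not from the commit): the body of a target's adjustPassManager override essentially registers extensions on the builder; a hedged sketch follows, where the pass factory and the helper name are placeholders rather than real LLVM symbols.
```
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Pass.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"
using namespace llvm;

Pass *createMyCustomPass(); // placeholder factory for an out-of-tree pass

// Roughly what a target's adjustPassManager(PassManagerBuilder &) override
// would do: add a custom pass at a chosen extension point.
void addMyTargetExtensions(PassManagerBuilder &Builder) {
  Builder.addExtension(
      PassManagerBuilder::EP_EarlyAsPossible,
      [](const PassManagerBuilder &, legacy::PassManagerBase &PM) {
        PM.add(createMyCustomPass());
      });
}
```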
2017-01-11  [PM] Separate the LoopAnalysisManager from the LoopPassManager and move the latter to the Transforms library.  (Chandler Carruth)
While the loop PM uses an analysis to form the IR units, the current plan is to have the PM itself establish and enforce both loop simplified form and LCSSA. This would be a layering violation in the analysis library. Fundamentally, the idea behind the loop PM is to *transform* loops in addition to running passes over them, so it really seemed like the most natural place to sink this was into the transforms library. We can't just move *everything* because we also have loop analyses that rely on a subset of the invariants. So this patch splits the loop infrastructure into the analysis management that has to be part of the analysis library, and the transform-aware pass manager. This also required splitting the loop analyses' printer passes out to the transforms library, which makes sense to me as running these will transform the code into LCSSA in theory. I haven't split the unittest though because testing one component without the other seems nearly intractable. Differential Revision: https://reviews.llvm.org/D28452 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291662 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-16  IPO: Introduce ThinLTOBitcodeWriter pass.  (Peter Collingbourne)
This pass prepares a module containing type metadata for ThinLTO by splitting it into regular and thin LTO parts if possible, and writing both parts to a multi-module bitcode file. Modules that do not contain type metadata are written unmodified as a single module. All globals with type metadata are added to the regular LTO module, and the rest are added to the thin LTO module. Differential Revision: https://reviews.llvm.org/D27324 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289899 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-19  Change setDiagnosticsOutputFile to take a unique_ptr from a raw pointer (NFC)  (Mehdi Amini)
Summary: This makes it explicit that ownership is taken. Also replace all `new` with make_unique<> at call sites. Reviewers: anemet Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D26884 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287449 91177308-0d34-0410-b5e6-96231b3b80d8
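For illustration (not from the commit): a hedged sketch of the new call-site shape, with the context taking ownership of the yaml::Output through a unique_ptr; the helper function is made up, and stream setup is reduced to a parameter.
```
#include "llvm/ADT/STLExtras.h"        // llvm::make_unique
#include "llvm/IR/LLVMContext.h"
#include "llvm/Support/YAMLTraits.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

// Attach a YAML remark stream to the context; ownership is passed explicitly.
void attachRemarkStream(LLVMContext &Ctx, raw_ostream &OS) {
  Ctx.setDiagnosticsOutputFile(make_unique<yaml::Output>(OS));
}
```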
2016-11-19  [CMake] opt depends on intrinsics_gen  (Chris Bieneman)
AnalysisWrappers.cpp has the following include chain:

    llvm/Analysis/CallGraph.h
    llvm/IR/CallSite.h
    llvm/IR/Attributes.h
    llvm/IR/Attributes.gen

This means opt needs to depend on intrinsics_gen. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287433 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-14  Restore "[ThinLTO] Prevent exporting of locals used/defined in module level asm"  (Teresa Johnson)
This restores the rest of r286297 (part was restored in r286475). Specifically, it restores the part requiring adding a dependency from the Analysis to Object library (downstream use changed to correctly model split BitReader vs BitWriter libraries). Original description of this part of patch follows: Module level asm may also contain defs of values. We need to prevent export of any refs to local values defined in module level asm (e.g. a ref in normal IR), since that also requires renaming/promotion of the local. To do that, the summary index builder looks at all values in the module level asm string that are not marked Weak or Global, which is exactly the set of locals that are defined. A summary is created for each of these local defs and flagged as NoRename. This required adding handling to the BitcodeWriter to look at GV declarations to see if they have a summary (rather than skipping them all). Finally, added an assert to IRObjectFile::CollectAsmUndefinedRefs to ensure that an MCAsmParser is available, otherwise the module asm parse would silently fail. Initialized the asm parser in the opt tool for use in testing this fix. Fixes PR30610. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286844 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-09  Revert "[ThinLTO] Prevent exporting of locals used/defined in module level asm"  (Mehdi Amini)
This reverts commit r286297. Introduces a dependency from libAnalysis to libObject, which I missed during the review. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286329 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-08  [ThinLTO] Prevent exporting of locals used/defined in module level asm  (Teresa Johnson)
Summary: This patch uses the same approach added for inline asm in r285513 to similarly prevent promotion/renaming of locals used or defined in module level asm. All static global values defined in normal IR and used in module level asm should be included on either the llvm.used or llvm.compiler.used global. The former were already being flagged as NoRename in the summary, and I've simply added llvm.compiler.used values to this handling. Module level asm may also contain defs of values. We need to prevent export of any refs to local values defined in module level asm (e.g. a ref in normal IR), since that also requires renaming/promotion of the local. To do that, the summary index builder looks at all values in the module level asm string that are not marked Weak or Global, which is exactly the set of locals that are defined. A summary is created for each of these local defs and flagged as NoRename. This required adding handling to the BitcodeWriter to look at GV declarations to see if they have a summary (rather than skipping them all). Finally, added an assert to IRObjectFile::CollectAsmUndefinedRefs to ensure that an MCAsmParser is available, otherwise the module asm parse would silently fail. Initialized the asm parser in the opt tool for use in testing this fix. Fixes PR30610. Reviewers: mehdi_amini Subscribers: johanengelen, krasin, llvm-commits Differential Revision: https://reviews.llvm.org/D26146 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286297 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-02  Fix Clang-tidy readability-redundant-string-cstr warnings  (Malcolm Parsons)
Reviewers: beanz, lattner, jlebar Subscribers: jholewinski, llvm-commits, mehdi_amini Differential Revision: https://reviews.llvm.org/D26235 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285832 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-30  [Polly] Remove the unused POLLY_LINK_LIBS for linking polly into tools  (Hongbin Zheng)
Differential Revision: https://reviews.llvm.org/D25861 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285514 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-01  Use StringRef in Pass/PassManager APIs (NFC)  (Mehdi Amini)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@283004 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-27  Output optimization remarks in YAML  (Adam Nemet)
(Re-committed after moving the template specialization under the yaml namespace. GCC was complaining about this.)

This allows various presentation of this data using an external tool. This was first recommended here[1].

As an example, consider this module:

    1 int foo();
    2 int bar();
    3
    4 int baz() {
    5   return foo() + bar();
    6 }

The inliner generates these missed-optimization remarks today (the hotness information is pulled from PGO):

    remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
    remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)

Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:

    --- !Missed
    Pass: inline
    Name: NotInlined
    DebugLoc: { File: /tmp/s.c, Line: 5, Column: 10 }
    Function: baz
    Hotness: 30
    Args:
      - Callee: foo
      - String: will not be inlined into
      - Caller: baz
    ...
    --- !Missed
    Pass: inline
    Name: NotInlined
    DebugLoc: { File: /tmp/s.c, Line: 5, Column: 18 }
    Function: baz
    Hotness: 30
    Args:
      - Callee: bar
      - String: will not be inlined into
      - Caller: baz
    ...

This is a summary of the high-level decisions:

* There is a new streaming interface to emit optimization remarks. E.g. for the inliner remark above:

    ORE.emit(DiagnosticInfoOptimizationRemarkMissed(DEBUG_TYPE, "NotInlined", &I)
             << NV("Callee", Callee) << " will not be inlined into "
             << NV("Caller", CS.getCaller()) << setIsVerbose());

  NV stands for named value and allows the YAML client to process a remark using its name (NotInlined) and the named arguments (Callee and Caller) without parsing the text of the message. Subsequent patches will update ORE users to use the new streaming API.

* I am using YAML I/O for writing the YAML file. YAML I/O requires you to specify reading and writing at once but reading is highly non-trivial for some of the more complex LLVM types. Since it's not clear that we (ever) want to use LLVM to parse this YAML file, the code supports and asserts that we're writing only. On the other hand, I did experiment with this: the class hierarchy starting at DiagnosticInfoOptimizationBase can be mapped back from the YAML generated here (see D24479).

* The YAML stream is stored in the LLVM context.

* In the example, we can probably further specify the IR value used, i.e. print "Function" rather than "Value".

* As before, hotness is computed in the analysis pass instead of DiagnosticInfo. This avoids the layering problem since BFI is in Analysis while DiagnosticInfo is in IR.

[1] https://reviews.llvm.org/D19678#419445

Differential Revision: https://reviews.llvm.org/D24587

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@282539 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-27  Revert "Output optimization remarks in YAML"  (Adam Nemet)
This reverts commit r282499. The GCC bots are failing. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@282503 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-27  Output optimization remarks in YAML  (Adam Nemet)
This allows various presentation of this data using an external tool. This was first recommended here[1].

As an example, consider this module:

    1 int foo();
    2 int bar();
    3
    4 int baz() {
    5   return foo() + bar();
    6 }

The inliner generates these missed-optimization remarks today (the hotness information is pulled from PGO):

    remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
    remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)

Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:

    --- !Missed
    Pass: inline
    Name: NotInlined
    DebugLoc: { File: /tmp/s.c, Line: 5, Column: 10 }
    Function: baz
    Hotness: 30
    Args:
      - Callee: foo
      - String: will not be inlined into
      - Caller: baz
    ...
    --- !Missed
    Pass: inline
    Name: NotInlined
    DebugLoc: { File: /tmp/s.c, Line: 5, Column: 18 }
    Function: baz
    Hotness: 30
    Args:
      - Callee: bar
      - String: will not be inlined into
      - Caller: baz
    ...

This is a summary of the high-level decisions:

* There is a new streaming interface to emit optimization remarks. E.g. for the inliner remark above:

    ORE.emit(DiagnosticInfoOptimizationRemarkMissed(DEBUG_TYPE, "NotInlined", &I)
             << NV("Callee", Callee) << " will not be inlined into "
             << NV("Caller", CS.getCaller()) << setIsVerbose());

  NV stands for named value and allows the YAML client to process a remark using its name (NotInlined) and the named arguments (Callee and Caller) without parsing the text of the message. Subsequent patches will update ORE users to use the new streaming API.

* I am using YAML I/O for writing the YAML file. YAML I/O requires you to specify reading and writing at once but reading is highly non-trivial for some of the more complex LLVM types. Since it's not clear that we (ever) want to use LLVM to parse this YAML file, the code supports and asserts that we're writing only. On the other hand, I did experiment with this: the class hierarchy starting at DiagnosticInfoOptimizationBase can be mapped back from the YAML generated here (see D24479).

* The YAML stream is stored in the LLVM context.

* In the example, we can probably further specify the IR value used, i.e. print "Function" rather than "Value".

* As before, hotness is computed in the analysis pass instead of DiagnosticInfo. This avoids the layering problem since BFI is in Analysis while DiagnosticInfo is in IR.

[1] https://reviews.llvm.org/D19678#419445

Differential Revision: https://reviews.llvm.org/D24587

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@282499 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-07  [opt] Remove an unused argument to runPassPipeline().  (Davide Italiano)
I have plans to use this API also in libLTO (and maybe lld). git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280770 91177308-0d34-0410-b5e6-96231b3b80d8