Closed caselitz closed 6 years ago
The mechanism to detect a language is solely based on the name of the compiler executable, e.g.

- C --> gcc, cc, cl, icl, ...
- C++ --> g++, c++

So I guess the third one is: com.nvidia.cuda.toolchain.language.cu/c/c++ --> nvcc.

Please provide some more information to detect the com.nvidia.cuda.toolchain.language.cu/c/c++ languages. You will find that information in the compile_commands.json file which is generated by cmake.

And please confirm that com.nvidia.cuda.toolchain.language.cu/c/c++ actually are language IDs that can be fed to the CDT indexer and will expose the expected behavior. (I could not find any source code of NsightEE.)
> The mechanism to detect a language is solely based on the name of the compiler executable, e.g.
>
> - C --> gcc, cc, cl, icl, ...
> - C++ --> g++, c++
>
> So I guess the third one is: com.nvidia.cuda.toolchain.language.cu/c/c++ --> nvcc.
>
> Please provide some more information to detect the com.nvidia.cuda.toolchain.language.cu/c/c++ languages. You will find that information in the compile_commands.json file which is generated by cmake.
In the first example of the compile_commands.json I posted in the mailing list you can see that /usr/local/cuda-9.1/bin/nvcc is the compiler being used. I'm not sure what "more information" you need compared to what I've posted - everything else is basically a duplicate of the given examples.
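For reference, an entry in the compile_commands.json generated by cmake has roughly this shape; the directory and file paths here are invented, only the compiler path is taken from the example above:

```json
[
  {
    "directory": "/home/user/project/build",
    "command": "/usr/local/cuda-9.1/bin/nvcc -I/home/user/project/include -c /home/user/project/src/kernel.cu -o kernel.o",
    "file": "/home/user/project/src/kernel.cu"
  }
]
```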
> And please confirm that com.nvidia.cuda.toolchain.language.cu/c/c++ actually are language IDs that can be fed to the CDT indexer and will expose the expected behavior. (I could not find any source code of NsightEE.)
I do think that they are language IDs because they appear in the Entries Languages column (if I select the CUDA Linux toolchain as the current toolchain or add the nvcc compiler to the "Used tools" manually) like this:
[Unspecified][id=com.nvidia.cuda.toolchain.language.cuda.c]
[Unspecified][id=com.nvidia.cuda.toolchain.language.cuda.cu]
[Unspecified][id=com.nvidia.cuda.toolchain.language.cuda.c++]
I'm not familiar enough with this stuff to confirm that they "can be fed to the CDT indexer and will expose the expected behavior" (if you tell me how to check, I am happy to test).
Maybe also relevant, in the "Language Mappings" only the following seems to be added by the Plugin:
com.nvidia.cuda.ide.editor.cudac
From what I understand, for building, the current Toolchain does not matter (only the current Builder "CMake Builder (GNU Make)" and the CMakeLists.txt do - and I do not have any issue here). But the current Toolchain does matter for indexing, which is my problem. Is it correct that the selected current Toolchain only chooses a preset of "Used tools", and that this is what actually matters? Because even when I choose the CUDA Linux toolchain and add the "GCC C++ Compiler" to the "Used tools", the language "GNU C++" appears in the Entries Languages column, for which the CMAKE_EXPORT_COMPILE_COMMANDS Parser works and includes etc. get properly resolved in .cc files (but not in .cu files, probably because these are not linked to the "GNU C++" language).
> (I could not find any source code of NsightEE.)
I'm afraid this is proprietary NVIDIA stuff. :( Btw, just to point out again, I am using the Nsight Eclipse Plugin. I'm not sure what the difference to this is, except that the URL contains "nsightee" instead of "nsight" and it is supposed to work only with an "Eclipse environment (v4.4 and v4.5)" instead of "vanilla Eclipse 4.4 or later". I feel what I'm using is the newer version of their plugin and their documentation is inconsistent. Yet another thing is the "Nsight Eclipse Edition" WITHOUT the plugin, which is a complete Eclipse modified and shipped by NVIDIA.
> I'm not familiar enough with this stuff to confirm that they "can be fed to the CDT indexer and will expose the expected behavior" (if you tell me how to check, I am happy to test).
The indexer is consulted by the source code editor to get, for example, the include paths. The C editor, for instance, queries the indexer for C include paths by specifying a language ID (to get only the entries for the C language). The CUDA source editor will query the indexer by passing its own language ID (possibly 'com.nvidia.cuda.toolchain.language.cu', but that's a guess). The parser has to specify the proper ID to feed the include paths for CUDA to the indexer; otherwise the CUDA editor will not find them, they will not show up in the Entries Languages column, and 'Jump to declaration' on a macro that originates from a header file below the include path will not work.
I already asked Nvidia support to confirm the language ID, but they did not respond yet.
> But the current Toolchain does matter for indexing, which is my problem. Is it correct that the selected current Toolchain only chooses a preset of "Used tools", and that this is what actually matters?
I agree, the tools specified in the toolchain should not affect editor behavior. I played around and deleted each tool entry in the toolchain settings. But after I deleted the 'GCC C Compiler' entry, the C-editor was no longer aware of any C include paths and macros. The same holds for the 'GCC C++ Compiler' entry. So it seems you have to add the NVCC compiler there, too.
I suggest to just try the CUDA language ID. I added an entry for nvcc which uses com.nvidia.cuda.toolchain.language.cu as the language ID. You may try it by adding the direct update site URL https://bintray.com/15knots/p2-zip/download_file?file_path=cmake4eclipse-1.10.0.zip. If it is working, we guessed the proper language ID.
Not sure if that is helpful here, but I looked into how the editor is chosen - apparently by File Association which can be Content Type based. The Nsight Plugin seems to modify the Content Types as follows:
+ Text
+ C Source File (*.c)
- C Header File (*.h)
+ C++ Source File (*.C, *.c++, *.cc, *.cpp, *.cxx)
+ C++ Header File (*.cuh, *.cuinc, *.h, *.hcu, *.hh, *.hpp, *.hxx, *.inc)
- CUDA Header Content Type (*.cuh, *.h, *.hcu)
- CUDA C Source File (*.cu)
- CUDA C Source File (*.cu)
By default the Cuda Editor seems to be used only for *.cu files, i.e., CUDA C Source Files (no idea why this appears two times in the list).
> I already asked Nvidia support to confirm the language ID, but they did not respond yet.
Let's hope they give you an (official) statement, but I kind of doubt it. :(
> I agree, the tools specified in the toolchain should not affect editor behavior. I played around and deleted each tool entry in the toolchain settings. But after I deleted the 'GCC C Compiler' entry, the C-editor was no longer aware of any C include paths and macros. The same holds for the 'GCC C++ Compiler' entry. So it seems you have to add the NVCC compiler there, too.
Yes. So my conclusion is, what matters (at least for indexing) are the "Used tools", not the "Current toolchain" (which might be only a preset for the set of "Used tools").
> I suggest to just try the CUDA language ID. I added an entry for nvcc which uses com.nvidia.cuda.toolchain.language.cu as the language ID. You may try it by adding the direct update site URL https://bintray.com/15knots/p2-zip/download_file?file_path=cmake4eclipse-1.10.0.zip. If it is working, we guessed the proper language ID.
Unfortunately it does not - same behavior as before (the CMAKE_EXPORT_COMPILE_COMMANDS Parser is still not showing up for the three cuda.c/cu/c++ languages in the Settings Entries). Maybe we should try com.nvidia.cuda.toolchain.language.c or com.nvidia.cuda.toolchain.language.c++. Or why can't we take all three? I think it is possible that one file contains multiple languages, e.g., C, C++, and CUDA code (see the sketch below).
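To illustrate the mixed-language point, a minimal, made-up .cu file: the kernel definition and the <<<...>>> launch are CUDA-specific syntax, while the rest is ordinary host C++ in the same translation unit.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// CUDA-specific device code ...
__global__ void scale(float *v, float f) {
  v[threadIdx.x] *= f;
}

// ... next to ordinary host C++ code in the same .cu file.
int main() {
  float *v = nullptr;
  cudaMallocManaged(&v, 4 * sizeof(float));  // unified memory, CUDA runtime API
  for (int i = 0; i < 4; ++i) v[i] = 1.0f;
  scale<<<1, 4>>>(v, 2.0f);                  // CUDA kernel launch syntax
  cudaDeviceSynchronize();
  std::printf("v[0] = %f\n", v[0]);
  cudaFree(v);
  return 0;
}
```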
> The CUDA source editor will query the indexer by passing its own language ID (possibly 'com.nvidia.cuda.toolchain.language.cu', but that's a guess). The parser has to specify the proper ID to feed the include paths for CUDA to the indexer; otherwise the CUDA editor will not find them, they will not show up in the Entries Languages column, and 'Jump to declaration' on a macro that originates from a header file below the include path will not work.
```
       | -------- lang id -------> |         | <-------- lang id ------- |
Editor |                           | Indexer |                           | Parser
       | <--- path etc for id ---- |         | <--- path etc for id ---- |
```
Is that summarized correctly? So we need to guess the lang id that the Cuda Editor is sending to the Indexer, so that you can put this value in the lang id that the Parser sends to the Indexer? And can the Editor and Parser send multiple different lang ids?
What I still don't understand is the relation between Editor and file type *.* (or does this Content Type somehow matter here?). What I observed: Thanks to the CMAKE_EXPORT_COMPILE_COMMANDS Parser, includes can be resolved in a .cc file but NOT in a .cuh file, even though both are opened in the C/C++-Editor.
> ```
>        | -------- lang id -------> |         | <-------- lang id ------- |
> Editor |                           | Indexer |                           | Parser
>        | <--- path etc for id ---- |         | <--- path etc for id ---- |
> ```
>
> Is that summarized correctly? So we need to guess the lang id that the Cuda Editor is sending to the Indexer, so that you can put this value in the lang id that the Parser sends to the Indexer?
Yes, that's my assumption. At least the C/C++ editors work that way. But without any info from NVIDIA or the source code of their builtin specs detector or editor, I cannot make this work (although it should be trivial).
> What I still don't understand is the relation between Editor and file type *.* (or does this Content Type somehow matter here?).
The content type should not matter. It is used to open the appropriate editor if you double-click a source file.
> Thanks to the CMAKE_EXPORT_COMPILE_COMMANDS Parser, includes can be resolved in a .cc file but NOT in a .cuh file, even though both are opened in the C/C++-Editor.
I can just guess here: maybe you have to tell the C/C++ editor to recognize *.cuh as a C header file (content type), or maybe you can make it work by adding a language mapping. Consult the CDT forum for more precise info.
Try the 1.10.0 version mentioned above. It's a guess of the lang ID, but if it works, we're done.
I did, but it didn't help. See the end of my second last post.
> But without any info from NVIDIA or the source code of their builtin specs detector or editor, I cannot make this work (although it should be trivial).
I see your point. But can't we at least try com.nvidia.cuda.toolchain.language.c or com.nvidia.cuda.toolchain.language.c++? Or why can't we take all three?
Could you open the plugin jar that contains the class com.nvidia.cuda.toolchain.internal.NVCCBuiltinSpecsDetector and send me the plugin.xml file in there per PM? It would help in reverse-engineering.
I think there is no PM on GitHub? So I just post it here - I hope that's the right one.
Container: com.nvidia.cuda.toolchain_9.1.0.201711011803.jar
File: plugin.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
<extension
point="org.eclipse.cdt.managedbuilder.core.buildDefinitions">
<managedBuildRevision
fileVersion="4.0.0">
</managedBuildRevision>
<toolChain
configurationEnvironmentSupplier="com.nvidia.cuda.toolchain.internal.CUDAToolkitEnvironmentSupplier"
configurationMacroSupplier="com.nvidia.cuda.toolchain.internal.CudaToolkitMacroSupplier"
id="com.nvidia.cuda.toolchain.baseToolchain"
isAbstract="true"
languageSettingsProviders="com.nvidia.cuda.toolchain.CUDACProvider"
name="CUDA base toolchain"
supportsManagedBuild="true"
targetTool="nvcc.linker;nvcc.archiver">
<builder
id="com.nvidia.cuda.toolchain.builder"
isAbstract="false"
isVariableCaseSensitive="false"
name="CUDA Toolkit Builder"
superClass="cdt.managedbuild.target.gnu.builder">
</builder>
<tool
command="${nvcc}"
id="nvcc.linker"
commandLineGenerator="com.nvidia.cuda.toolchain.internal.LinkerCommandLineGenerator"
isAbstract="false"
name="NVCC linker"
natureFilter="both"
outputFlag="-o">
<supportedProperties>
<property
id="org.eclipse.cdt.build.core.buildArtefactType">
<value
id="org.eclipse.cdt.build.core.buildArtefactType.exe">
</value>
<value
id="org.eclipse.cdt.build.core.buildArtefactType.sharedLib">
</value>
</property>
</supportedProperties>
<inputType
buildVariable="OBJS"
id="com.nvidia.cuda.toolchain.nvcc.linker.input"
multipleOfType="true"
sources="o"
superClass="cdt.managedbuild.tool.gnu.cpp.linker.input">
<additionalInput
kind="additionalinputdependency"
paths="$(USER_OBJS)">
</additionalInput>
<additionalInput
kind="additionalinput"
paths="$(LIBS)">
</additionalInput>
</inputType>
<outputType
outputs=""
buildVariable="EXECUTABLES"
id="com.nvidia.cuda.toolchain.nvcc.linker.output">
<enablement
type="ALL">
<checkOption
isRegex="false"
optionId="com.nvidia.cuda.toolchain.linker.shared"
value="false">
</checkOption>
</enablement>
</outputType>
<outputType
outputPrefix="lib"
buildVariable="LIBRARIES"
id="com.nvidia.cuda.toolchain.nvcc.linker.output.so">
<enablement
type="ALL">
<checkOption
isRegex="false"
optionId="com.nvidia.cuda.toolchain.linker.shared"
value="true">
</checkOption>
</enablement>
</outputType>
<optionCategory
id="com.nvidia.cuda.toolchain.linker.category.libraries"
name="Libraries">
</optionCategory>
<option
name="CUDA Runtime Library"
category="com.nvidia.cuda.toolchain.linker.category.libraries"
id="com.nvidia.cuda.toolchain.linker.cudart"
valueType="enumerated">
<enumeratedOptionValue
name="Static"
command="--cudart=static"
id="com.nvidia.cuda.toolchain.linker.cudart.static">
</enumeratedOptionValue>
<enumeratedOptionValue
name="Shared"
command="--cudart=shared"
id="com.nvidia.cuda.toolchain.linker.cudart.shared">
</enumeratedOptionValue>
</option>
<option superClass="gnu.cpp.link.option.libs"
category="com.nvidia.cuda.toolchain.linker.category.libraries" id="nvcc.linker.libs">
</option>
<option superClass="gnu.cpp.link.option.paths"
category="com.nvidia.cuda.toolchain.linker.category.libraries" id="nvcc.linker.paths">
</option>
<optionCategory
id="com.nvidia.cuda.toolchain.linker.category.misc"
name="Miscellaneous">
</optionCategory>
<option
browseType="file"
category="com.nvidia.cuda.toolchain.linker.category.misc"
command="-ccbin"
commandGenerator="com.nvidia.cuda.toolchain.internal.CommandSpaceValueGenerator"
defaultValue="${ccbin}"
id="com.nvidia.cuda.toolchain.linker.ccbin"
isAbstract="false"
name="Path to the host compiler"
valueType="string">
</option>
<option
category="com.nvidia.cuda.toolchain.linker.category.misc"
commandGenerator="com.nvidia.cuda.toolchain.internal.LinkWithOpenGLOptionCommandGenerator"
defaultValue="false"
id="nvcc.linker.option.linkgl"
isAbstract="false"
name="Link with OpenGL libraries"
resourceFilter="all"
valueType="boolean">
</option>
<option superClass="gnu.cpp.link.option.other"
category="com.nvidia.cuda.toolchain.linker.category.misc" id="nvcc.linker.other">
</option>
<option superClass="gnu.cpp.link.option.userobjs"
category="com.nvidia.cuda.toolchain.linker.category.misc" id="nvcc.linker.userobjs">
</option>
<optionCategory
id="com.nvidia.cuda.toolchain.linker.category.shared"
name="Shared Library Settings">
</optionCategory>
<option
defaultValue="false"
id="com.nvidia.cuda.toolchain.linker.shared"
isAbstract="false"
command="--shared"
category="com.nvidia.cuda.toolchain.linker.category.shared"
name="Build shared library"
resourceFilter="all"
valueType="boolean">
<enablement
attribute="defaultValue"
extensionAdjustment="false"
type="CONTAINER_ATTRIBUTE"
value="true">
<checkBuildProperty
property="org.eclipse.cdt.build.core.buildArtefactType"
value="org.eclipse.cdt.build.core.buildArtefactType.sharedLib">
</checkBuildProperty>
</enablement>
</option>
<enablement
type="ALL">
<not>
<checkBuildProperty
property="org.eclipse.cdt.build.core.buildArtefactType"
value="org.eclipse.cdt.build.core.buildArtefactType.staticLib">
</checkBuildProperty>
</not>
</enablement>
</tool>
<tool
command="${nvcc}"
id="nvcc.archiver"
isAbstract="false"
name="NVCC archiver"
natureFilter="both"
outputFlag="-lib -o">
<supportedProperties>
<property
id="org.eclipse.cdt.build.core.buildArtefactType">
<value
id="org.eclipse.cdt.build.core.buildArtefactType.staticLib"></value>
</property>
</supportedProperties>
<inputType
buildVariable="OBJS"
id="com.nvidia.cuda.toolchain.nvcc.archiver.input"
multipleOfType="true"
sources="o"
superClass="cdt.managedbuild.tool.gnu.cpp.linker.input">
</inputType>
<outputType
buildVariable="ARCHIVES"
id="com.nvidia.cuda.toolchain.nvcc.archiver.output.ar"
outputPrefix="lib">
</outputType>
<enablement
type="ALL">
<checkBuildProperty
property="org.eclipse.cdt.build.core.buildArtefactType"
value="org.eclipse.cdt.build.core.buildArtefactType.staticLib">
</checkBuildProperty>
</enablement>
</tool>
<tool
command="${nvcc}"
errorParsers="com.nvidia.cuda.toolchain.nvccErrorParser;org.eclipse.cdt.core.GCCErrorParser"
id="nvcc.compiler"
isAbstract="false"
name="NVCC Compiler"
natureFilter="both"
outputFlag="-c -o">
<inputType
dependencyCalculator="com.nvidia.cuda.toolchain.internal.NVCCManagedDependencyGenerator"
id="com.nvidia.cuda.toolchain.input.c"
languageId="com.nvidia.cuda.toolchain.language.cuda.c"
sourceContentType="org.eclipse.cdt.core.cSource"
primaryInput="true"
scannerConfigDiscoveryProfileId="com.nvidia.cuda.ide.build.NVCCPerProjectProfile|org.eclipse.cdt.managedbuilder.core.GCCManagedMakePerProjectProfileC|org.eclipse.cdt.make.core.GCCStandardMakePerFileProfile"
sources="c">
</inputType>
<inputType
dependencyCalculator="com.nvidia.cuda.toolchain.internal.NVCCManagedDependencyGenerator"
dependencyContentType="com.nvidia.cuda.toolchain.CUDAHeaderContentType"
dependencyExtensions="h,cuh,hcu"
id="com.nvidia.cuda.toolchain.input.cu"
languageId="com.nvidia.cuda.toolchain.language.cuda.cu"
primaryInput="true"
sourceContentType="com.nvidia.cuda.toolchain.CUDAContentType"
scannerConfigDiscoveryProfileId="com.nvidia.cuda.ide.build.NVCCPerProjectProfile"
sources="cu">
</inputType>
<inputType
dependencyCalculator="com.nvidia.cuda.toolchain.internal.NVCCManagedDependencyGenerator"
id="com.nvidia.cuda.toolchain.input.cpp"
languageId="com.nvidia.cuda.toolchain.language.cuda.c++"
sourceContentType="org.eclipse.cdt.core.cxxSource"
primaryInput="true"
scannerConfigDiscoveryProfileId="com.nvidia.cuda.ide.build.NVCCPerProjectProfile|org.eclipse.cdt.managedbuilder.core.GCCManagedMakePerProjectProfileCPP|org.eclipse.cdt.make.core.GCCStandardMakePerFileProfile"
sources="cpp">
</inputType>
<outputType
buildVariable="OBJS"
id="com.nvidia.cuda.toolchain.output.o"
outputs="o"
primaryOutput="true">
</outputType>
<optionCategory
id="com.nvidia.cuda.toolchain.compiler.category.dialect"
name="Dialect">
</optionCategory>
<option
name="Language standard"
category="com.nvidia.cuda.toolchain.compiler.category.dialect"
id="com.nvidia.cuda.toolchain.compiler.dialect"
valueType="enumerated">
<enumeratedOptionValue
name="Default"
command=""
id="com.nvidia.cuda.toolchain.compiler.dialect.default">
</enumeratedOptionValue>
<enumeratedOptionValue
name="C++11"
command="-std=c++11"
id="com.nvidia.cuda.toolchain.compiler.dialect.cpp11">
</enumeratedOptionValue>
<enumeratedOptionValue
name="C++14"
command="-std=c++14"
id="com.nvidia.cuda.toolchain.compiler.dialect.cpp14">
</enumeratedOptionValue>
</option>
<option
category="com.nvidia.cuda.toolchain.compiler.category.dialect"
command="--expt-relaxed-constexpr"
id="com.nvidia.cuda.toolchain.compiler.relaxed-constexpr"
isAbstract="false"
name="Allow host code to invoke __device__ constexpr functions"
valueType="boolean">
</option>
<option
category="com.nvidia.cuda.toolchain.compiler.category.dialect"
command="--expt-extended-lambda"
id="com.nvidia.cuda.toolchain.compiler.extended-lambda"
isAbstract="false"
name="Allow __host__, __device__ annotations in lambda declaration"
valueType="boolean">
</option>
<optionCategory
id="com.nvidia.cuda.toolchain.compiler.category.preprocessor"
name="Preprocessor">
</optionCategory>
<option superClass="gnu.cpp.compiler.option.preprocessor.def"
category="com.nvidia.cuda.toolchain.compiler.category.preprocessor" id="nvcc.compiler.def.symbols">
</option>
<option superClass="gnu.cpp.compiler.option.preprocessor.undef"
category="com.nvidia.cuda.toolchain.compiler.category.preprocessor" id="nvcc.compiler.undef.symbol">
</option>
<optionCategory
id="com.nvidia.cuda.toolchain.compiler.category.includes"
name="Includes">
</optionCategory>
<option superClass="gnu.cpp.compiler.option.include.paths"
category="com.nvidia.cuda.toolchain.compiler.category.includes" id="nvcc.compiler.include.paths">
</option>
<option superClass="gnu.cpp.compiler.option.include.files"
category="com.nvidia.cuda.toolchain.compiler.category.includes" id="nvcc.compiler.include.files">
</option>
<optionCategory
id="com.nvidia.cuda.toolchain.compiler.category.optimization"
name="Optimization">
</optionCategory>
<option
name="Optimization level"
category="com.nvidia.cuda.toolchain.compiler.category.optimization"
id="com.nvidia.cuda.toolchain.compiler.optimization"
valueType="enumerated">
<enumeratedOptionValue
id="com.nvidia.cuda.toolchain.compiler.optimization.default"
name="Default">
</enumeratedOptionValue>
<enumeratedOptionValue
name="O1"
command="-O1"
id="com.nvidia.cuda.toolchain.compiler.optimization.level1">
</enumeratedOptionValue>
<enumeratedOptionValue
name="O2"
command="-O2"
id="com.nvidia.cuda.toolchain.compiler.optimization.level2">
</enumeratedOptionValue>
<enumeratedOptionValue
name="O3"
command="-O3"
id="com.nvidia.cuda.toolchain.compiler.optimization.level3">
</enumeratedOptionValue>
<enablement
attribute="defaultValue"
extensionAdjustment="false"
type="CONTAINER_ATTRIBUTE"
value="com.nvidia.cuda.toolchain.compiler.optimization.level3">
<checkBuildProperty
property="org.eclipse.cdt.build.core.buildType"
value="org.eclipse.cdt.build.core.buildType.release">
</checkBuildProperty>
</enablement>
</option>
<option category="com.nvidia.cuda.toolchain.compiler.category.optimization"
command="-maxrregcount ${VALUE}" id="nvcc.compiler.maxregcount"
isAbstract="false" name="Maximum number of registers (-maxrregcount)"
tip="Specify the maximum amount of registers that GPU functions can use. Until a function-specific limit, a higher value will generally increase the performance of individual GPU threads that execute this function. However, because thread registers are allocated from a global register pool on each GPU, a higher value of this option will also reduce the maximum thread block size, thereby reducing the amount of thread parallelism. Hence, a good maxrregcount value is the result of a trade-off.
If this option is not specified, then no maximum is assumed. Otherwise the specified value will be rounded to the next multiple of 4 registers until the GPU specific maximum of 128 registers."
valueType="string">
</option>
<option category="com.nvidia.cuda.toolchain.compiler.category.optimization"
command="--use_fast_math" defaultValue="false"
id="nvcc.compiler.usefastmath" isAbstract="false"
name="Make use of fast math library (-use_fast_math)"
tip="Make use of fast math library. -use_fast_math implies -ftz=true -prec-div=false -prec-sqrt=false."
valueType="boolean">
</option>
<option category="com.nvidia.cuda.toolchain.compiler.category.optimization"
command="-ftz true" defaultValue="false" id="nvcc.compiler.ftz"
isAbstract="false" name="Flush denormal values to zero in single-precision FP operations (-ftz)"
tip="When performing single-precision floating-point operations, flush denormal values to zero or preserve denormal values. -use_fast_math implies --ftz=true."
valueType="boolean">
<enablement type="UI_ENABLEMENT">
<checkOption isRegex="false"
optionId="nvcc.compiler.usefastmath" value="false">
</checkOption>
</enablement>
</option>
<option category="com.nvidia.cuda.toolchain.compiler.category.optimization"
commandFalse="-prec-div false" defaultValue="true"
id="nvcc.compiler.precdiv" isAbstract="false"
name="Use IEEE round-to-nearest mode for precision FP division (-prec-div)"
tip="For single-precision floating-point division and reciprocals, use IEEE round-to-nearest mode or use a faster approximation. -use_fast_math implies --prec-div=false."
valueType="boolean">
<enablement type="UI_ENABLEMENT">
<checkOption isRegex="false"
optionId="nvcc.compiler.usefastmath" value="false">
</checkOption>
</enablement>
</option>
<option category="com.nvidia.cuda.toolchain.compiler.category.optimization"
commandFalse="-prec-sqrt false" defaultValue="true"
id="nvcc.compiler.precsqrt" isAbstract="false"
name="Use IEEE round-to-nearest mode for precision FP sqrt (-prec-sqrt)"
tip="For single-precision floating-point square root, use IEEE round-to-nearest mode or use a faster approximation. -use_fast_math implies --prec-sqrt=false."
valueType="boolean">
<enablement type="UI_ENABLEMENT">
<checkOption isRegex="false"
optionId="nvcc.compiler.usefastmath" value="false">
</checkOption>
</enablement>
</option>
<option category="com.nvidia.cuda.toolchain.compiler.category.optimization"
commandFalse="-fmad false" defaultValue="true"
id="nvcc.compiler.fmad" isAbstract="false"
name="Contract FP multiplies and adds/subtracts into FP multiply-add operations (-fmad)"
tip="Enables (disables) the contraction of floating-point multiplies and adds/subtracts into floating-point multiply-add operations (FMAD, FFMA or DFMA)."
valueType="boolean">
<enablement type="UI_ENABLEMENT">
<checkOption isRegex="false"
optionId="nvcc.compiler.usefastmath" value="false">
</checkOption>
</enablement>
</option>
<optionCategory
id="com.nvidia.cuda.toolchain.compiler.category.debugging"
name="Debugging">
</optionCategory>
<option
category="com.nvidia.cuda.toolchain.compiler.category.debugging"
command="--device-debug"
id="com.nvidia.cuda.toolchain.compiler.device-debug"
isAbstract="false"
name="Generate debug information for device code (-G)"
valueType="boolean">
<enablement
attribute="defaultValue"
type="CONTAINER_ATTRIBUTE"
value="true">
<checkBuildProperty
property="org.eclipse.cdt.build.core.buildType"
value="org.eclipse.cdt.build.core.buildType.debug"></checkBuildProperty>
</enablement>
</option>
<option
category="com.nvidia.cuda.toolchain.compiler.category.debugging"
command="--debug"
id="com.nvidia.cuda.toolchain.compiler.debug"
isAbstract="false"
name="Generate debug information for host code (-g)"
valueType="boolean">
<enablement
attribute="defaultValue"
type="CONTAINER_ATTRIBUTE"
value="true">
<checkBuildProperty
property="org.eclipse.cdt.build.core.buildType"
value="org.eclipse.cdt.build.core.buildType.debug">
</checkBuildProperty>
</enablement>
</option>
<option
category="com.nvidia.cuda.toolchain.compiler.category.debugging"
command="--generate-line-info"
id="com.nvidia.cuda.toolchain.compiler.lineinfo"
isAbstract="false"
name="Generate line-number information for device code (-lineinfo)"
valueType="boolean">
</option>
<optionCategory
id="com.nvidia.cuda.toolchain.compiler.category.cuda"
name="CUDA">
</optionCategory>
<option
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_30,code=sm_30"
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_30_sass"
isAbstract="false"
name="Generate SM 3.0 SASS"
value="true"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_32_sass"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_32,code=sm_32"
isAbstract="false"
name="Generate SM 3.2 SASS"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_35_sass"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_35,code=sm_35"
isAbstract="false"
name="Generate SM 3.5 SASS"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_50_sass"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_50,code=sm_50"
isAbstract="false"
name="Generate SM 5.0 SASS"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_53_sass"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_53,code=sm_53"
isAbstract="false"
name="Generate SM 5.3 SASS"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_60_sass"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_60,code=sm_60"
isAbstract="false"
name="Generate SM 6.0 SASS"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_61_sass"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_61,code=sm_61"
isAbstract="false"
name="Generate SM 6.1 SASS"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_62_sass"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_62,code=sm_62"
isAbstract="false"
name="Generate SM 6.2 SASS"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_70_sass"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_70,code=sm_70"
isAbstract="false"
name="Generate SM 7.0 SASS"
valueType="boolean">
</option>
<option
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_30,code=compute_30"
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_30_ptx"
isAbstract="false"
name="Generate SM 3.0 PTX"
value="true"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_35_ptx"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_35,code=compute_35"
isAbstract="false"
name="Generate SM 3.5 PTX"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_50_ptx"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_50,code=compute_50"
isAbstract="false"
name="Generate SM 5.0 PTX"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_53_ptx"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_53,code=compute_53"
isAbstract="false"
name="Generate SM 5.3 PTX"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_60_ptx"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_60,code=compute_60"
isAbstract="false"
name="Generate SM 6.0 PTX"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_61_ptx"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_61,code=compute_61"
isAbstract="false"
name="Generate SM 6.1 PTX"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_62_ptx"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_62,code=compute_62"
isAbstract="false"
name="Generate SM 6.2 PTX"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.cuda.sm_70_ptx"
category="com.nvidia.cuda.toolchain.compiler.category.cuda"
command="-gencode arch=compute_70,code=compute_70"
isAbstract="false"
name="Generate SM 7.0 PTX"
valueType="boolean">
</option>
<optionCategory
id="com.nvidia.cuda.toolchain.compiler.category.misc"
name="Miscellaneous">
</optionCategory>
<option
id="com.nvidia.cuda.toolchain.compiler.verbose"
category="com.nvidia.cuda.toolchain.compiler.category.misc"
command="--verbose"
isAbstract="false"
name="Verbose (-v)"
valueType="boolean">
</option>
<option
id="com.nvidia.cuda.toolchain.compiler.keep"
category="com.nvidia.cuda.toolchain.compiler.category.misc"
command="--keep"
isAbstract="false"
name="Keep all intermediate files (-keep)"
valueType="boolean">
</option>
<option
category="com.nvidia.cuda.toolchain.compiler.category.misc"
command="-Xcompiler -fPIC"
defaultValue="false"
id="nvcc.compiler.pic"
superClass="gnu.cpp.compiler.option.other.pic"
valueType="boolean">
<enablement
attribute="value"
type="CONTAINER_ATTRIBUTE"
value="true">
<checkBuildProperty
property="org.eclipse.cdt.build.core.buildArtefactType"
value="org.eclipse.cdt.build.core.buildArtefactType.sharedLib">
</checkBuildProperty>
</enablement>
<enablement
attribute="value"
type="CONTAINER_ATTRIBUTE"
value="false">
<not>
<checkBuildProperty
property="org.eclipse.cdt.build.core.buildArtefactType"
value="org.eclipse.cdt.build.core.buildArtefactType.sharedLib">
</checkBuildProperty>
</not>
</enablement>
</option>
<option category="com.nvidia.cuda.toolchain.compiler.category.misc"
command="-w" defaultValue="false"
id="nvcc.compiler.disableWarnings" isAbstract="false"
name="Inhibit all warning messages (-w)"
valueType="boolean">
</option>
<option
browseType="directory"
category="com.nvidia.cuda.toolchain.compiler.category.misc"
command="--keep-dir "
commandGenerator="com.nvidia.cuda.toolchain.internal.CommandSpaceValueGenerator"
id="com.nvidia.cuda.toolchain.compiler.keepdir"
isAbstract="false"
name="Intermediate files keep directory"
valueType="string">
</option>
<option
browseType="file"
category="com.nvidia.cuda.toolchain.compiler.category.misc"
command="-ccbin"
commandGenerator="com.nvidia.cuda.toolchain.internal.CommandSpaceValueGenerator"
defaultValue="${ccbin}"
id="com.nvidia.cuda.toolchain.compiler.ccbin"
isAbstract="false"
name="Path to the host compiler"
valueType="string">
</option>
</tool>
</toolChain>
<toolChain
id="com.nvidia.cuda.toolchain.linuxToolchain"
isAbstract="false"
name="CUDA Linux toolchain"
osList="linux"
superClass="com.nvidia.cuda.toolchain.baseToolchain">
<targetPlatform
archList="all"
binaryParser="org.eclipse.cdt.core.ELF"
id="com.nvidia.cuda.toolchain.targetLinuxPlatform"
isAbstract="false"
osList="linux">
</targetPlatform>
</toolChain>
<toolChain
id="com.nvidia.cuda.toolchain.darwinToolchain"
isAbstract="false"
name="CUDA MacOS X toolchain"
osList="macosx"
superClass="com.nvidia.cuda.toolchain.baseToolchain">
<targetPlatform
archList="all"
binaryParser="org.eclipse.cdt.core.MachO64"
id="com.nvidia.cuda.toolchain.targetDarwinPlatform"
isAbstract="false"
osList="macosx">
</targetPlatform>
</toolChain>
<!-- Executable project types -->
<projectType
buildArtefactType="org.eclipse.cdt.build.core.buildArtefactType.exe"
id="com.nvidia.cuda.toolchain.linux.build.exe"
isAbstract="false"
isTest="false">
<configuration
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.debug"
id="com.nvidia.cuda.toolchain.linux.configuration.debug"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Debug">
<toolChain
id="com.nvidia.cuda.toolchain.linux.debugToolchain"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.linuxToolchain">
</toolChain>
</configuration>
<configuration
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.release"
id="com.nvidia.cuda.toolchain.linux.configuration.release"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Release">
<toolChain
id="com.nvidia.cuda.toolchain.linux.releaseToolchain"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.linuxToolchain">
</toolChain>
</configuration>
</projectType>
<projectType
buildArtefactType="org.eclipse.cdt.build.core.buildArtefactType.exe"
id="com.nvidia.cuda.toolchain.darwin.build.exe"
isAbstract="false"
isTest="false">
<configuration
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.debug"
id="com.nvidia.cuda.toolchain.darwin.configuration.debug"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Debug">
<toolChain
id="com.nvidia.cuda.toolchain.darwin.debugToolchain"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.darwinToolchain">
</toolChain>
</configuration>
<configuration
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.release"
id="com.nvidia.cuda.toolchain.darwin.configuration.release"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Release">
<toolChain
id="com.nvidia.cuda.toolchain.darwin.releaseToolchain"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.darwinToolchain">
</toolChain>
</configuration>
</projectType>
<!-- Shared library project types -->
<projectType
buildArtefactType="org.eclipse.cdt.build.core.buildArtefactType.sharedLib"
id="com.nvidia.cuda.toolchain.linux.build.so"
isAbstract="false"
isTest="false">
<configuration
artifactExtension="so"
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.debug"
id="com.nvidia.cuda.toolchain.linux.configuration.so.debug"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Debug">
<toolChain
id="com.nvidia.cuda.toolchain.linux.debugToolchain.so"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.linuxToolchain">
</toolChain>
</configuration>
<configuration
artifactExtension="so"
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.release"
id="com.nvidia.cuda.toolchain.linux.configuration.so.release"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Release">
<toolChain
id="com.nvidia.cuda.toolchain.linux.releaseToolchain.so"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.linuxToolchain">
</toolChain>
</configuration>
</projectType>
<projectType
buildArtefactType="org.eclipse.cdt.build.core.buildArtefactType.sharedLib"
id="com.nvidia.cuda.toolchain.darwin.build.so"
isAbstract="false"
isTest="false">
<configuration
artifactExtension="dylib"
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.debug"
id="com.nvidia.cuda.toolchain.darwin.configuration.so.debug"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Debug">
<toolChain
id="com.nvidia.cuda.toolchain.darwin.debugToolchain.so"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.darwinToolchain">
</toolChain>
</configuration>
<configuration
artifactExtension="dylib"
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.release"
id="com.nvidia.cuda.toolchain.darwin.configuration.so.release"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Release">
<toolChain
id="com.nvidia.cuda.toolchain.darwin.releaseToolchain.so"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.darwinToolchain">
</toolChain>
</configuration>
</projectType>
<!-- Static library project types -->
<projectType
buildArtefactType="org.eclipse.cdt.build.core.buildArtefactType.staticLib"
id="com.nvidia.cuda.toolchain.linux.build.ar"
isAbstract="false"
isTest="false">
<configuration
artifactExtension="a"
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.debug"
id="com.nvidia.cuda.toolchain.linux.configuration.ar.debug"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Debug">
<toolChain
id="com.nvidia.cuda.toolchain.linux.debugToolchain.ar"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.linuxToolchain">
</toolChain>
</configuration>
<configuration
artifactExtension="a"
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.release"
id="com.nvidia.cuda.toolchain.linux.configuration.ar.release"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Release">
<toolChain
id="com.nvidia.cuda.toolchain.linux.releaseToolchain.ar"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.linuxToolchain">
</toolChain>
</configuration>
</projectType>
<projectType
buildArtefactType="org.eclipse.cdt.build.core.buildArtefactType.staticLib"
id="com.nvidia.cuda.toolchain.darwin.build.ar"
isAbstract="false"
isTest="false">
<configuration
artifactExtension="a"
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.debug"
id="com.nvidia.cuda.toolchain.darwin.configuration.ar.debug"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Debug">
<toolChain
id="com.nvidia.cuda.toolchain.darwin.debugToolchain.ar"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.darwinToolchain">
</toolChain>
</configuration>
<configuration
artifactExtension="a"
buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.release"
id="com.nvidia.cuda.toolchain.darwin.configuration.ar.release"
languageSettingsProviders="org.eclipse.cdt.ui.UserLanguageSettingsProvider;org.eclipse.cdt.core.ReferencedProjectsLanguageSettingsProvider;org.eclipse.cdt.managedbuilder.core.MBSLanguageSettingsProvider;${Toolchain};"
name="Release">
<toolChain
id="com.nvidia.cuda.toolchain.darwin.releaseToolchain.ar"
isAbstract="false"
superClass="com.nvidia.cuda.toolchain.darwinToolchain">
</toolChain>
</configuration>
</projectType>
</extension>
<extension
point="org.eclipse.cdt.core.templateAssociations">
<template
id="org.eclipse.cdt.build.core.templates.HelloWorldCAnsiProject">
<toolChain
id="com.nvidia.cuda.toolchain.linuxToolchain">
</toolChain>
<toolChain
id="com.nvidia.cuda.toolchain.darwinToolchain">
</toolChain>
</template>
<template
id="org.eclipse.cdt.build.core.templates.HelloWorldCCProject">
<toolChain
id="com.nvidia.cuda.toolchain.linuxToolchain">
</toolChain>
<toolChain
id="com.nvidia.cuda.toolchain.darwinToolchain">
</toolChain>
</template>
</extension>
<extension
id="id1"
name="name"
point="org.eclipse.cdt.core.ErrorParser">
<errorparser
id="com.nvidia.cuda.toolchain.nvccErrorParser"
name="NVCC error parser">
<pattern
description-expr="$3"
eat-processed-line="true"
file-expr="$1"
line-expr="$2"
regex="(.*?)\((\d+)\): error: (identifier \"(.*)\" is undefined)"
severity="Error"
variable-expr="$4">
</pattern>
<pattern
description-expr="$3"
eat-processed-line="true"
file-expr="$1"
line-expr="$2"
regex="(.*?)\((\d+)\): error: (.*)"
severity="Error">
</pattern>
</errorparser>
</extension>
<extension
id="com.nvidia.cuda.ide.build.NVCCPerProjectProfile"
name="NVCC Scanner Info per project profile"
point="org.eclipse.cdt.make.core.ScannerConfigurationDiscoveryProfile">
<scannerInfoCollector
class="com.nvidia.cuda.toolchain.internal.SICollector"
scope="project"/>
<buildOutputProvider>
<open/>
<scannerInfoConsoleParser class="org.eclipse.cdt.make.internal.core.scannerconfig.gnu.GCCScannerInfoConsoleParser"/>
</buildOutputProvider>
<scannerInfoProvider providerId="nvcc">
<run
command="${cuda_tk_bin}/nvcc"
arguments="-dryrun"
class="com.nvidia.cuda.toolchain.internal.NvccSpecsRunSIProvider"/>
<scannerInfoConsoleParser class="com.nvidia.cuda.toolchain.internal.NvccSpecsConsoleParser"/>
</scannerInfoProvider>
<scannerInfoProvider providerId="specsFile">
<run
arguments="-E -Xcompiler -P -Xcompiler -v -Xcompiler -dD ${plugin_state_location}/${specs_file}"
command="${cuda_tk_bin}/nvcc"
class="com.nvidia.cuda.toolchain.internal.GccNvccRunSpecsProvider"/>
<scannerInfoConsoleParser class="org.eclipse.cdt.make.internal.core.scannerconfig.gnu.GCCSpecsConsoleParser"/>
</scannerInfoProvider>
</extension>
<extension
point="org.eclipse.core.variables.dynamicVariables">
<variable
description="Path to NVCC compiler"
name="nvcc"
resolver="com.nvidia.cuda.toolchain.internal.DynamicVariableResolver"
supportsArgument="false">
</variable>
<variable
description="Path to the CC compiler binary"
name="ccbin"
resolver="com.nvidia.cuda.toolchain.internal.DynamicVariableResolver">
</variable>
<variable
description="CUDA toolkit by name bin directory location"
name="cuda_tk_bin"
resolver="com.nvidia.cuda.toolchain.internal.DynamicVariableResolver"
supportsArgument="true">
</variable>
<variable
description="CUDA toolkit samples directory location"
name="cuda_samples_dir"
resolver="com.nvidia.cuda.toolchain.internal.DynamicVariableResolver"
supportsArgument="true">
</variable>
<variable
description="CUDA toolkit samples common libraries directory location"
name="cuda_samples_common_lib_dir"
resolver="com.nvidia.cuda.toolchain.internal.DynamicVariableResolver">
</variable>
</extension>
<extension
point="org.eclipse.core.contenttype.contentTypes">
<content-type
base-type="org.eclipse.cdt.core.cxxSource"
file-extensions="cu"
id="CUDAContentType"
name="CUDA C Source File"
priority="high">
</content-type>
<content-type
base-type="org.eclipse.cdt.core.cxxHeader"
file-extensions="h,cuh,hcu"
id="CUDAHeaderContentType"
name="CUDA Header Content Type"
priority="high">
</content-type>
</extension>
<extension
point="org.eclipse.cdt.core.LanguageSettingsProvider">
<provider
class="com.nvidia.cuda.toolchain.internal.NVCCBuiltinSpecsDetector"
id="com.nvidia.cuda.toolchain.CUDACProvider"
name="Nvcc Builtins provider"
parameter="${COMMAND} ${FLAGS} -E -Xcompiler -P -Xcompiler -v -Xcompiler -dD "${INPUTS}""
prefer-non-shared="true">
<language-scope
id="com.nvidia.cuda.toolchain.language.cuda.c">
</language-scope>
<language-scope
id="com.nvidia.cuda.toolchain.language.cuda.c++">
</language-scope>
<language-scope
id="com.nvidia.cuda.toolchain.language.cuda.cu">
</language-scope>
</provider>
</extension>
<!-- <extension
point="org.eclipse.cdt.core.templates">
<template
id="com.nvidia.cuda.toolchain.saxpy.c.template"
location="templates/saxpyc/template.xml"
projectType="org.eclipse.cdt.build.core.buildArtefactType.exe">
</template>
</extension> -->
<extension
point="org.eclipse.ui.preferencePages">
<page
class="com.nvidia.cuda.toolchain.internal.CUDAToolkitPreferencePage"
id="com.nvidia.cuda.toolchain.preferencePage"
name="CUDA">
</page>
</extension>
<extension
point="org.eclipse.cdt.core.language">
<language
class="org.eclipse.cdt.core.dom.ast.gnu.cpp.GPPLanguage"
id="com.nvidia.cuda.toolchain.CUDALangugage"
name="CUDA">
<contentType
id="com.nvidia.cuda.toolchain.CUDAContentType">
</contentType>
<contentType
id="com.nvidia.cuda.toolchain.CUDAHeaderContentType">
</contentType>
</language>
</extension>
<extension
point="org.eclipse.ui.propertyPages">
<page
category="org.eclipse.cdt.managedbuilder.ui.properties.Page_head_build"
class="com.nvidia.cuda.toolchain.internal.CUDAToolkitPropertyPage"
id="com.nvidia.cuda.toolchain.propertyPage"
name="CUDA Toolkit">
<filter
name="projectNature"
value="org.eclipse.cdt.core.cnature">
</filter>
<enabledWhen>
<adapt
type="org.eclipse.core.resources.IProject">
</adapt>
</enabledWhen>
</page>
</extension>
</plugin>
```
```xml
      point="org.eclipse.cdt.core.LanguageSettingsProvider">
      <provider
            class="com.nvidia.cuda.toolchain.internal.NVCCBuiltinSpecsDetector"
            id="com.nvidia.cuda.toolchain.CUDACProvider"
            name="Nvcc Builtins provider"
            parameter="${COMMAND} ${FLAGS} -E -Xcompiler -P -Xcompiler -v -Xcompiler -dD "${INPUTS}""
            prefer-non-shared="true">
         <language-scope
               id="com.nvidia.cuda.toolchain.language.cuda.c">
         </language-scope>
         <language-scope
               id="com.nvidia.cuda.toolchain.language.cuda.c++">
         </language-scope>
         <language-scope
               id="com.nvidia.cuda.toolchain.language.cuda.cu">
```
This is exactly the language ID I was looking for ^^^^^. The cmake4eclipse version you tried has com.nvidia.cuda.toolchain.language.cu. Please try the new version at URL https://bintray.com/15knots/p2-zip/download_file?file_path=cmake4eclipse-1.10.0.zip.
You mean com.nvidia.cuda.toolchain.CUDACProvider?
> Please try the new version at URL https://bintray.com/15knots/p2-zip/download_file?file_path=cmake4eclipse-1.10.0.zip.
Unfortunately it is still not working, i.e., the CMAKE_EXPORT_COMPILE_COMMANDS Parser only appears for GNU C and GNU C++ but not for com.nvidia.cuda.toolchain.language.cuda.c/cu/c++.
Add the nvcc compiler to the Tool Chain: used tools list.
I have. All the used tools from the current toolchain Linux GCC, plus NVCC linker, compiler, archiver. Otherwise the three entries in the language column (com.nvidia.cuda.toolchain.language.cuda.c/cu/c++) would also not appear.
I tried to reproduce locally and installed a recent version of Nsight and cmake. But this version of Nsight has no file com.nvidia.cuda.toolchain_9.1.0.201711011803.jar but com.nvidia.cuda.ide.toolchains_9.1.0.201711040225.jar, which is a different thing. So it looks like NVIDIA removed its own language settings provider (NVCCBuiltinSpecsDetector).

Anyway, I tried a different solution: the 'CMake Build Output Parser' recognizes nvcc as a C compiler and sends the appropriate language ID to the indexer. In my case, an include directory /usr/local/cuda-9.1/include/ was showing up, although the CU source editor is still complaining about an unresolved inclusion on <cuda.h>.
> I tried to reproduce locally and installed a recent version of Nsight and cmake. But this version of Nsight has no file com.nvidia.cuda.toolchain_9.1.0.201711011803.jar but com.nvidia.cuda.ide.toolchains_9.1.0.201711040225.jar, which is a different thing. So it looks like NVIDIA removed its own language settings provider (NVCCBuiltinSpecsDetector).
I don't think so. As I pointed out before, there are two different things:

1. the "Nsight Eclipse Edition", a complete Eclipse modified and shipped by NVIDIA, and
2. the "Nsight Eclipse Plugin", which is installed into a vanilla Eclipse.

Probably since you use 1 and I use 2, there are different files. And NVIDIA might use different ways to provide the settings, because they can modify the whole source code in 1, while in 2 they need to rely on what can be realized with an Eclipse plugin. So I don't think they removed the NVCCBuiltinSpecsDetector; it is only existent in 2 and is not (was never) in 1.
The Nvcc Builtins provider works fine for me; CUDA headers get resolved in the CU source editor. What's not working is that other library includes (include paths specified via cmake) get resolved, probably because the CMAKE_EXPORT_COMPILE_COMMANDS Parser only appears for the GNU C/C++ languages and not for the [ Unspecified ] [com.nvidia.cuda.toolchain.language.cuda.c/cu/c++] languages in the Entries tab.
If the 'CMake Build Output Parser' works for you as expected, I know which language ID to pass to the indexer.

OTOH, I noticed that if I add CUDA as a language in CMakeLists.txt, cmake prints a warning ("Manually-specified variables were not used by the project: CMAKE_EXPORT_COMPILE_COMMANDS") and no compile_commands.json file gets generated, which effectively disables the parser.
> If the 'CMake Build Output Parser' works for you as expected
It does NOT. It looks like the 'CMake Build Output Parser (deprecated)' shows the same behavior as the 'CMAKE_EXPORT_COMPILE_COMMANDS Parser': they only appear for the GNU C/C++ languages and not for the [ Unspecified ] [com.nvidia.cuda.toolchain.language.cuda.c/cu/c++] languages in the Entries tab.

What works for me is the 'Nvcc Builtins provider' - it appears for the [ Unspecified ] [com.nvidia.cuda.toolchain.language.cuda.c/cu/c++] languages in the Entries tab (but not for GNU C/C++, which should be fine as it only provides CUDA-specific stuff).
> OTOH, I noticed that if I add CUDA as a language in CMakeLists.txt, cmake prints a warning ("Manually-specified variables were not used by the project: CMAKE_EXPORT_COMPILE_COMMANDS") and no compile_commands.json file gets generated, which effectively disables the parser.
I do not have this behavior (having project(projectname CXX CUDA) in my CMakeLists.txt) - I don't get the warning and the compile_commands.json gets properly generated.

Are you using the Nsight Eclipse Edition or the Nsight Eclipse Plugin (which I use)? If we are using different things, that might explain the different behavior.
I am using the Nsight Eclipse Plugin. My CMakeLists.txt also had find_package(CUDA REQUIRED); after removing that, the compile_commands.json gets generated (see the sketch below).
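For reference, a hedged, minimal sketch of the CMakeLists.txt setup that ended up working in this thread; the project and file names (myproject, main.cc, kernel.cu) are invented:

```cmake
cmake_minimum_required(VERSION 3.8)  # 3.8 added first-class CUDA language support

# Enable CUDA as a language here; combining this with find_package(CUDA REQUIRED)
# suppressed the generation of compile_commands.json in the setup above.
project(myproject CXX CUDA)

# Usually injected on the cmake command line (-DCMAKE_EXPORT_COMPILE_COMMANDS=ON),
# but it can also be set here; effective with the Makefile and Ninja generators.
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)

add_executable(app main.cc kernel.cu)
```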
I changed the parser to recognize nvcc as a C compiler (like gcc). Now include paths defined in CMakeLists.txt show up in the Includes folder in the Project Explorer view and in the Entries tab (as the GNU C language). Would this satisfy your needs?
Sounds good. Can you provide a test-cmake4eclipse.zip again, so I can test it on Monday? Thanks for your effort!
BTW:
> What's not working is that other library includes (include paths specified via cmake) get resolved, probably because the CMAKE_EXPORT_COMPILE_COMMANDS Parser only appears for the GNU C/C++ languages and not for the [ Unspecified ] [com.nvidia.cuda.toolchain.language.cuda.c/cu/c++] languages in the Entries tab.
From my point of view, the Nvcc Builtins provider seems to be broken somehow; it should not show these [ Unspecified ] entries but a human-readable name instead. So I would not care too much if the include paths specified via cmake do not show up there.

But regardless of what is shown in the Entries tab, the editor should work with include paths specified via cmake if you press F3 (open declaration), and it should know about preprocessor defines (no grey background in conditionally compiled code sections).
I created a sample project to verify that the editor is working as expected: https://github.com/15knots/cmake4eclipse-sample-projects
Playing around with the sample project above and changing the language ID, I found the following:

- With one language ID: MACRO_FROM_COMMANDLINE is not recognized (see the gray background in the screenshot), but config.h and MACRO_FROM_C_INCLPATH_PRJ (defined in config.h) are. [Screenshot showing CUDA editor]
- With the CUDA language ID: MACRO_FROM_COMMANDLINE is recognized, as are config.h and MACRO_FROM_C_INCLPATH_PRJ (defined in config.h). [Screenshot showing CUDA editor]
So I guess the CUDA language ID is what we need, regardless of what the Entries tab is showing.
The plugin version on bintray uses the CUDA language ID. Please verify my findings.
Yes, the new version on bintray works. Nice, thank you! :) Now the CMAKE_EXPORT_COMPILE_COMMANDS Parser appears for the [ Unspecified ] [com.nvidia.cuda.toolchain.language.cuda.c/cu/c++] languages in the Entries tab, and for ...cuda.cu it also displays the include paths etc.
However, in my project not all include paths work. I looked at the compile_commands.json and I guess the problem is related to parsing it. While it works for -I<path>, it does not for -isystem=<path>. It looks like cmake generates -isystem <path> for C++ and -isystem=<path> for nvcc (see the illustration below). Maybe fixing this can be combined with https://github.com/15knots/cmake4eclipse/pull/67.
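To illustrate the difference, two hypothetical command entries from such a compile_commands.json (all paths made up); note the space-separated -isystem for the C++ compiler versus the =-joined form for nvcc:

```json
[
  {
    "directory": "/home/user/project/build",
    "command": "/usr/bin/c++ -isystem /opt/mylib/include -c /home/user/project/app.cc -o app.o",
    "file": "/home/user/project/app.cc"
  },
  {
    "directory": "/home/user/project/build",
    "command": "/usr/local/cuda-9.1/bin/nvcc -isystem=/opt/mylib/include -c /home/user/project/kernel.cu -o kernel.o",
    "file": "/home/user/project/kernel.cu"
  }
]
```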
Quick question: I saw that you merged https://github.com/15knots/cmake4eclipse/pull/67, but does this directly resolve the issue I mentioned? I'm not sure how the parsing works, but I'm afraid that the = will end up in the include path?
No. It's just to allow whitespace between -isystem and /inc/path. File an issue for that (nvcc specific).
This refers to: https://groups.google.com/forum/#!topic/cmake4eclipse-users/u4SO8kF0nJ8
This is a...
Brief Description
CMAKE_EXPORT_COMPILE_COMMANDS Parser should work in combination with the Nsight plugin
What is the expected behavior?
The CMAKE_EXPORT_COMPILE_COMMANDS Parser should appear for the CUDA languages (com.nvidia.cuda.toolchain.language.cu/c/c++) in Entries|Settings Entries, the include paths etc. should be parsed, and library includes should be resolvable in .cu and .cc (and so forth) files.
What behavior are you observing?
It does NOT appear, include paths are not parsed, and includes cannot be resolved.
Provide the steps to reproduce the issue, if applicable:
Useful Information
- cmake4eclipse version: 1.9.1
- Which OS do you use: Ubuntu 16.04
- Cmake version: 3.10.2