klausler opened 5 years ago
I'd guess you are unaware that Fortran 2003 contained an optional Part 3 that was conditional compilation. I'm not sure any vendor implemented it, there was little interest, and it got withdrawn in F2008. I really don't see us going back there.
The reality is that people use cpp (or a variant) and that seems to work for most everyone. The better question to ask is "what are the use cases for a preprocessor, and can better language design satisfy that need?" Look at the C interop stuff, for example - it eliminates a large swath of what preprocessors were used for. A proper generics feature would eliminate more.
I am aware of CoCo and how poorly it fared. The fact remains, C-like preprocessing is a real-world feature that is available in all compilers. Fortran would be more portable if the language acknowledged the existence of preprocessing and defined a standardized portable subset of behavior.
@klausler thanks for bringing this up. Related to this is the fact that the default behavior is to not use a preprocessor for .f90 files and to use it for .F90 files. So in practice one must rename a .f90 file to .F90 in order for the preprocessor to be applied automatically, which is annoying.
Why not standardize a subset of the current behavior, and automatically apply it to .f90 files?
@sblionel's counterpoint is valid though --- just as C++ is moving away from using the preprocessor by adding language features, Fortran is moving in that direction too. So I think the counterargument is to not standardize a preprocessor, but rather to improve language features so that the preprocessor is not needed.
Besides templates, one common use case for a preprocessor that I have seen in many codes is a custom ASSERT macro, which is empty in Release mode, and in Debug mode checks the condition and prints out the filename and line number. I have created #70 for this.
Keep in mind that the Fortran standard knows nothing about .f90 files or source files in general. It would be a broad expansion to try to legislate behavior based on file types. Note also that some operating systems are not case-sensitive for file names.
I don't think that the current state of preprocessing is broken enough to warrant the standard trying to get involved.
The standard doesn't have any concept of source files, much less source file names. The f18 compiler always recognizes and follows preprocessing directives and applies macro replacement, ignoring the source file name suffix, since its preprocessing capabilities are built in to its source prescanning and normalization phase, and are essentially free when not used.
I absolutely agree that some common subset of preprocessing behavior should be standardized. This is the one major part of Fortran that every compiler provides that is not covered by the standard language document; but perhaps improving portability of Fortran programs across vendors, writing interoperable header files usable by Fortran and C++, or providing safe guarantees of portability into the future are no longer primary objectives.
As part of defining f18's preprocessing behavior, I performed a survey of various Fortran compilers and wrote a collection of tests to check how their preprocessors interacted with difficult Fortran features like line continuation, fixed form, &c. The current state of the art is far more fragmented than I expected to find (see my link above for details and a compiler comparison table), and none of the existing compilers seemed to stand out as a model to be followed.
EDIT: The standard does have a concept of source files in the context of INCLUDE lines, of course; my first sentence was too broad.
@sblionel yes, the standard currently does not have a concept of source files, but perhaps it should. I see this GitHub repository as broader than what is strictly covered by the standard today --- because we might decide in the future to include some such things (such as more details about source files) in the standard.
And I must say I agree with @klausler on this. There is a lot that Fortran should standardize and improve. Perhaps it does not need to go into the standard itself, but then let there be a document that we all agree upon; we do not need to call it the "standard" (perhaps we can call it a "vendor recommendation"), but it will achieve what we want: improving portability across vendors.
I am not sure whether pre-processing must necessarily be implemented within the compiler, or be standardized at all. Using an appropriate interpreter language (e.g. Python) it is possible to implement a pre-processor satisfying all requirements @klausler formulated (and even much more) within a single file. You add this one file to your project, and you can build your project with all Fortran compilers, as your on-board pre-processor makes sure that the compiler only sees standard-conforming source files. You will of course have to have the interpreter around whenever the project is built, but by choosing something widespread such as Python, that would be the case on almost all systems automatically.
Disclaimer: I may be biased as I myself also wrote such a one file pre-processor (Fypp) which apparently has found its way into several Fortran projects.
@aradi I think you having written Fypp perfectly proves that there is a need for a preprocessor. Contrary to what @sblionel said, it seems that generic programming will not be around for years, whereas a good preprocessor (so not cpp...) can solve 95% of the use cases for generic programming.
@aradi Thanks for the link, I wasn't aware of Fypp. Its syntax seems incompatible with the other preprocessors though. I can see that it has more features, so that's probably the reason. But having the preprocessor syntax standardized I think is valuable.
@certik The syntax is different from the usual cpp-derived pre-processors to make sure nobody tries to run those on files meant to be processed by Fypp. :wink: (Also, it allows better escaping and ensures better prevention of unwanted substitutions, which is sometimes tricky with cpp-based approaches.)
@gronki Fypp was actually written in order to allow for easy generation of templates, so yes, a pre-processor can help to work around (but not solve) many of the generic programming needs. Still, I am not sure whether it is a good idea to use "standardized pre-processor based workarounds" for generics, as we will then be stuck with them for the next few decades. :smile:
@gronki , I never suggested a timeframe for generics. But there is a lot of resistance to adding features that paper over shorter-term problems. But there is an existing preprocessor solution that works, why complicate issues with trying to wedge preprocessing into the standard? I'd prefer the energy and cycles to be put into solving the language issues that make people reach for a preprocessor.
Depends whether you believe that full generics will make it into Fortran. I would rather end this absolute insanity of copy-pasting the same code by using some simple but decent preprocessing language than wait 10 years.
@sblionel sure, you have a point. I agree that including it in the core standard could be a waste of resources. But what about a TS, just like it was with CoCo? Is there any idea or information as to why CoCo "lost" to cpp (despite an implementation being available)? I didn't see anything wrong with it other than that it just didn't take off.
In terms of priorities, I think I agree with @sblionel that it makes sense to invest our efforts in getting generics into the standard, rather than prioritize a short term solution over the long term. That's a good point.
@gronki, CoCo was before my time on the committee. All I know is that vendors didn't implement it and users didn't ask for it - they continued to use cpp.
CoCo was an optional part of the standard, not a TS. A TS has the expectation that it will be incorporated in a future standard largely unchanged. Our experience so far with optional standard parts is that neither users nor implementors are all that interested in them.
I will also trot out my oft-used point that everything that goes into the standard (and a TS or optional part is no different) has a cost, in terms of the resources and time of the committee members.
Thank you for the great explanation! I did not realize the difference between an optional part and a TS.
It's not either/or, and people have work to do today. Fortran compilers have preprocessors, people use them, and code portability would benefit from standardizing them to the extent that they can be.
The preprocessors of Fortran compilers are in fact standardized by the cpp standard. I think this is more than sufficient. (E.g. C18 (ISO/IEC 9899:2018), 6.10 Preprocessing directives.)
Running Fortran through a "bare" cpp that doesn't know about Fortran commentary, line continuations, column-73 fixed-form line truncation, the CHARACTER concatenation operator, and built-in INCLUDE lines will produce poor results. Real production Fortran compilers either use a modified cpp or implement a cpp-like facility internally. The ways in which the Fortran features that I just mentioned interact with the preprocessors' implementations of their features (directive processing, macro replacement) show too much variation across Fortran compilers, and that is why they need standardization by a committee that should take code portability seriously.
Just for clarity, is this proposal focusing on standardizing the behavior of existing preprocessors (cpp, fpp) rather than extending or adding new features for more robust code conversion or metaprogramming facilities (i.e., the latter should be posted elsewhere)?
I do not exclude the addition of new features to a standardized preprocessor in my concept, but I don't know of any particular new feature that I would want that isn't already implemented in at least one compiler. What do you have in mind?
I think, in case generics do not make it into the standard, loop constructs would be useful to generate various specific cases for a given template.
We are using that a lot for creating library functions with the same functionality but different data types / kinds. (For example wrapping MPI-functions as in MPIFX). We use currently Fypp for that (shipping Fypp with each project), but would be more than happy to change it to any other pre-processor language, provided we can be sure, each compiler can deal with it.
As for code conversion, my immediate use case is to iterate over multiple symbols to generate codes from a templated one. For example I often have a pattern like
open( newunit= data % file_foo, file="foo.dat" )
open( newunit= data % file_bar, file="bar.dat" )
open( newunit= data % file_baz, file="baz.dat" )
! similar lines follow
close( data % file_foo )
close( data % file_bar )
close( data % file_baz )
! similar lines follow
to open or close files for each type component. Although those lines are the same except for property names (like foo), I cannot write them conveniently at once (with standard Fortran/cpp/fpp). A similar situation occurs when doing some operation for all (or some of) type components, e.g. scaling by some factor
data % ene_foo = data % ene_foo * scale
data % ene_bar = data % ene_bar * scale
data % ene_baz = data % ene_baz * scale
! similar lines follow
If a preprocessor supports iteration over symbols, I may be able to write, e.g.
#for x in [ foo, bar, baz, ... ]
data % ene_$x = data % ene_$x * scale
#endfor
(where I suppose "$" is interpreted as in Bash). Fypp already has this facility
interface myfunc
#:for dtype in ['real', 'dreal', 'complex', 'dcomplex']
module procedure myfunc_${dtype}$
#:endfor
end interface myfunc
and Julia also uses such loops over symbols sometimes, e.g. https://github.com/JuliaLang/julia/blob/master/base/math.jl#L1100 https://github.com/JuliaLang/julia/blob/master/base/math.jl#L548 https://github.com/JuliaLang/julia/blob/master/base/math.jl#L382
I think it would be useful if standard Fortran or preprocessor will support such a feature somehow, if Fortran generics (discussed on Github) may not cover such a feature.
Apart from the feature request, I am a bit concerned that extensive use of "#" or "$" can make code very noisy or cryptic (the worst case of which might be Pe*l??), which I hope can be avoided (if possible...). In particular, if cpp/fpp requires the directive to start from column 1 with "#", the code may become less readable (as I often feel for codes with a lot of "#ifdef MPI").
Another feature request for a preprocessor (according to StackOverflow) might be to support output of multi-line processed sources (without using semicolon).
Fortran Preprocessor Macro with Newline https://stackoverflow.com/questions/59309458/fortran-preprocessor-macro-with-newline
The reason he or she does not want to use semicolons is fear of a 132-character line limit, which is something that a compiler with a built-in preprocessing stage should enforce before macro expansion, not after.
Still, I can think of scenarios where passing multiline arguments to macros would be useful. Thinking about macro-based unit test systems (as in Google Test or Catch for C++), you would need the ability to pass multi-line arguments to macros in Fortran.
Multi-line arguments are a different problem from multi-statement expansions. Both should work; specifically, Fortran line continuations should be usable within macro invocations.
I hate preprocessors and yet have written several; so I do not think cpp(1) is sufficient, even as a modified "Fortran-safe" version such as fpp. That type of preprocessing is usually most useful for hiding portability issues between different systems and compilers, which I find I need far less than in the past, partly because of more standardized OSes, and partly because of Fortran standardization and the inclusion of the ISO_C_BINDING module (a lot of cpp processing had to do with interfacing with C in my case). More recently I find I use preprocessing to make up for these missing Fortran features:
MULTI-FILE FILES: being able to have blocks of text in the same file for plain-text documentation, Fortran, and related C code and unit tests.
DEFINING CHARACTER VARIABLES WITH PLAIN BLOCKS OF TEXT (i.e. "here" documents): being able to define strings with blocks of plain text, e.g.
help=```
a block of text without any
other need for quotes or continuation characters and so on.
>>>_scratch1
test data
<<<
RUNNING CODE THROUGH OTHER UTILITIES such as bash, sed, m4, ... to loop over routines to create generics; bash is even more commonly available than Python, very standardized, and surprisingly good as a preprocessor, and a large number of people are familiar with it.
It GREATLY helps me to keep my documentation (usually in markdown syntax, sometimes in HTML or LaTeX) right in the file with my Fortran and C code and test scripts; my intent is not to say everyone should use one of my preprocessors, but to say that preprocessing is still useful for making up for missing features and customizing the development environment in useful ways.
Even using cpp as an example, it can be very useful just to do something like
#ifdef DOCUMENT
NAME
my routine -- it does this stuff
SYNOPSIS
my_routine(parm1, parm2, parm3)
DESCRIPTION
everything you ever wanted to know about my
routine
EXAMPLE
bet you wish there was an example here
#elif defined(FORTRAN)
subroutine my_routine
end subroutine my_routine
#elif defined(CCODE)
my_routine_c(){
}
#endif
It is then easy to make scripts or makefile rules to extract the documentation and run it through man2txt to make a man page, turn it into comments and include it with the Fortran code in a .f90 file, and put the C code into a .c file. That can be a really useful use of a preprocessor. Note that m4 can be used to do all kinds of preprocessing.
More to the topic: preprocessing is often still desirable, especially for helping to simulate missing Fortran capabilities like templating and for allowing flat block text; and if anyone is going to create a standard preprocessor, cpp(1) is not a model I would recommend. People put up with using cpp for Fortran pre-processing for lack of simple alternatives; they do not find it sufficient (or even safe).
If we can't get J3 to standardize minimum requirements on this essential and universally implemented feature, perhaps the community can at least give some guidance to implementors. I would like f18 to predefine a macro that specifies the edition of the Fortran standard that it implements, something like #define __FORTRAN_STD__ 2018. Is that the best name, or can anyone suggest something better?
@klausler yes, we should come up with conventions as a community.
Will this macro change if you tell the compiler to stick with F2003, for example (like the -std=f2003 option for gfortran)? How are you imagining users would use this macro?
This proposal seems unworkable to me. It means that a compiler that implements all but one minor feature of F2018 can't say it supports F2018. When you have a compiler such as gfortran that isn't even full F2003, yet has features from F2008 and F2018, what would you have it define?
As I wrote above. J3 did standardize a preprocessor and it was roundly ignored by the community. I have also observed that programmers are often misinformed about the revision of the standard they are using.
I would like to see preprocessors die and would not want to put anything in the standard that encourages their use. They're useful today because the language lacks features such as robust generics/templates, but work in that area is progressing with some 202X features helping. Past use to deal with C interoperability is no longer necessary. If you feel you need to write different code for different compilers, it would better to use the greatest common subset and enhance that as the compilers you use catch up. The alternative feels like a testing and maintenance nightmare to me.
A common strategy I have seen is to not use a feature that isn't supported in at least three compilers. I observe that this is likely to be less of a problem over the coming years as I see compilers catching up to the standard much more quickly than in the past 5-10 years.
I don't know of any use cases for limiting the compiler to an older standard, apart from wanting pre-2003 allocatable array assignment semantics, and there are better syntactic solutions (viz., A(:)=... rather than A=...) for that use case. Do you?
The use for a __FORTRAN_STD__ predefined macro would be to make code that uses newer features conditional, for those cases where a less preferable approach that works with older standards could go in an #else case. It could also appear in programs that absolutely depend on newer features, to provide more useful error messages:
#if __FORTRAN_STD__ < 2008
#error This program uses coarrays; get a newer Fortran compiler.
#endif
That's something that I often do with C++ code that uses C++2017 features.
I'm not proposing anything for the standard here. I understand that you don't want to standardize preprocessing, and that you get to decide whether preprocessing is standardized or not. Fine.
But preprocessing is still used in real codes by real users, it's part of every Fortran implementation, and I would like to provide the best implementation of preprocessing for them that I can in f18 in the absence of guidance from a standard.
I understand that you don't want to standardize preprocessing, and that you get to decide whether preprocessing is standardized or not.
No, I don't get to decide. I am just one vote among all WG5. But as I have said, we already did standardize preprocessing (though this happened before I was on the committee) and it was ignored and has now been dropped from the standard. Nothing I or WG5 say will stop people from using preprocessing using the tools (cpp) they are already using.
Does any production Fortran compiler actually use cpp? It doesn't interact well with line continuation, line truncation, Hollerith, or (especially) INCLUDE. I don't know of a compiler that preprocesses with a stock cpp.
CoCo wasn't rejected by users and implementors because people didn't need or want preprocessing. CoCo was a failure because it was gratuitously different from the C-like preprocessing and local tooling that people were already using, and it wasn't a better solution (in fact, it's really weird and ugly).
I don't know what every compiler uses, but I often see cpp invoked with an option that better handles Fortran. ifort has its own fpp that accepts cpp directives. I think pretty much every Fortran compiler in common use has something similar.
My point was that people are already using cpp or a cpp-like preprocessor that they already have. I agree that CoCo was "gratuitously different", but what the users told us was that cpp (or something cpp-like) was working for them.
And if the common subset of the behaviors of Fortran-aware preprocessors (and built-in preprocessing phases) were to be documented, then both users and implementors would know what's portable and what's not. This is exactly the sort of thing that should be in a de jure standard. But that's not going to happen, and the best I could do for f18 was to determine that common subset myself, figure out the most reasonable behavior in edge cases where compilers differ, and ask users for guidance. If you have better advice for an implementor, I'm all ears.
EDIT: See here for a table of preprocessing behaviors of various compilers, using fixed and free form samples in this directory. As one can see, things are not terribly compatible today, but there is a common portable subset.
@klausler I agree with you and I think the best we can do is to get a community / vendors consensus of what should be supported and document it. Most production Fortran codes that I have seen use macros in some form, and thus compilers must support them.
Thank you for taking the lead on that in the document you shared.
Here is a permanent link to the preprocessor documentation:
We are currently figuring out how to add preprocessor support for LFortran. I can see that in Flang it is integrated into the compiler. I don't know if it is feasible to pre-process ahead of time (de-coupled from the compiler) and keep line numbers consistent, this is also relevant:
Last, if the preprocessor is not integrated into the Fortran compiler, new Fortran continuation line markers should be introduced into the final text.
That would be my preferable approach, but I assume the down side are worse error messages and possibly it is slower?
In f18 the first phase of compilation is called prescanning. It reads the original source file, expands any INCLUDE or #include files, and normalizes the source in many ways (preprocessing directives, macro expansion, line continuation, comment removal, space insertion for fixed-form Hollerith, space removal / collapsing, case lowering, &c.) to construct a big contiguous string in memory. This string is what the parser parses, and it makes parsing so much easier. Each byte in that string can be mapped to its original source byte or macro expansion or whatever by means of an index data structure. In the parser and semantics we just use const char * pointers to represent source locations in messages and the name strings of symbols, and those get mapped back to source locations for contextual error message reporting later.
I think it would be much more important that all compilers accept and process #line directives in the source code. Then people can use whatever pre-processor suits their purpose best. As long as the pre-processor emits those directives (as, for example, Fypp does if requested), the user would always get correct error messages.
I think the #line directives only ensure the correct line is being reported, but if your pre-processor expands a macro, it changes the line itself, so the compiler will report an error on the expanded line, which is not what the user sees before calling the pre-processor.
No, if the source file is around, the compiler will show the right line. You can test it yourself.
test.F90:
#:def ASSERT(cond)
#:if defined("DEBUG")
$:cond
#:endif
#:enddef
program test
implicit none
#! expression is incorrect to trigger compiler error
@:ASSERT(1 ?= 2)
end program test
Executing
fypp -n -DDEBUG test.F90 > test.f90; gfortran test.f90
you obtain the error message:
test.F90:11:3:
11 | @:ASSERT(1 ?= 2)
| 1
Error: Invalid character in name at (1)
My bad, you are right. The only issue would happen if there is a syntax error in the expanded ASSERT macro, wouldn't it? Like this:
@:ASSERT(1 ?= 2)
I would expect it to show an incorrect column number.
If the error occurs in the expanded text, the error message can be indeed confusing. E.g.
#:def ASSERT(cond)
if (.invalid. ${cond}$) error stop "Assert failed"
#:enddef
program test
implicit none
@:ASSERT(1 == 2)
end program test
with
fypp -n test.F90 > test.f90; gfortran test.f90
results in
test.F90:9:6:
9 | @:ASSERT(1 == 2)
| 1
Error: Unknown operator ‘invalid’ at (1)
In this case, one would have to drop the line marker generation as with
fypp test.F90 > test.f90; gfortran test.f90
to obtain
test.f90:5:6:
5 | if (.invalid. 1 == 2) error stop "Assert failed"
| 1
Error: Unknown operator ‘invalid’ at (1)
But this is independent of whether the pre-processor is external or built into the compiler. Do you show the original line or the expanded line (or both) when the error occurs in expanded code? Whichever strategy one goes for, it can be equally realized with built-in as well as with external pre-processors (provided they generate line marker directives and the compiler understands them).
@aradi I am glad you posted here; I think you are right. Indeed the compiler could know about the pre-processor, as a black box, and it could show errors either in the expanded form or the unexpanded form, and in each case it would show the correct line.
How would it know the line comes from a macro expansion? Well, I guess once it found the line with the error in the expanded form, it can compare the unexpanded line (from the #line directive) and if it differs, it can show both, i.e. the error can look something like this:
test.f90:5:6:
5 | if (.invalid. 1 == 2) error stop "Assert failed"
| 1
Error: Unknown operator ‘invalid’ at (1)
test.F90:9:6:
9 | @:ASSERT(1 == 2)
| 2
Note: the line at (1) where the error happens came from a macro expansion at (2)
If the line does not differ, then it can simply show the unexpanded form, as that will be the one which users see in their files.
I think this might be a very acceptable approach, with the advantage that we can use different pre-processors, such as fypp.
Summary of the black box approach:
I can still see some potential advantages of integrating the pre-processor more deeply with the compiler: clang, for example, gives you error messages almost as if macros were part of the language itself. But the black box approach is not bad, and one can implement both.
@certik I fully agree. Yes, the column number will be incorrect in the unexpanded form. And yes, a tight integration can give even deeper insights. But that assumes the existence of a well defined (standardized) pre-processor language which all Fortran compilers implement and follow, and which covers all the pre-processing needs people may come up with. In the mean time, the line directives can serve as a "bridging technology", allowing the usage of custom pre-processors.
I created an issue at https://gitlab.com/lfortran/lfortran/-/issues/281 to implement this in LFortran.
f18 -fsyntax-only ppdemo.f90
./ppdemo.f90:2:10: error: Actual argument for 'x=' has bad type 'CHARACTER(1)'
print *, CALL(sin, 'abc')
^^^^^^^^^^^^^^^^
./header.h:1:1: in a macro defined here
#define CALL(f,x) f(x)
^^
./ppdemo.f90:1:1: included here
include "header.h"
^^^^^^^^^^^^^^^^^^
that expanded to:
sin( 'abc')
^
f18: Semantic errors in ppdemo.f90
I think that it's necessary to have an integrated preprocessing facility in the same part of the compiler that's handling INCLUDE statements, line continuation, case normalization, &c. It's not hard to implement and it should be standardized.
Most Fortran compilers support some form of source code preprocessing using a syntax similar to the preprocessing directives and macro references in C/C++. The behavior of the preprocessing features in the various Fortran compilers varies quite a bit (see https://github.com/flang-compiler/f18/blob/master/documentation/Preprocessing.md for a summary of the situation). To improve code portability, the Fortran standard should accept the existence of preprocessing, and standardize the behaviors that are common and/or most useful.
@gklimowicz edit: The more recent link is Preprocessing.md.