nim-lang / RFCs

A repository for your Nim proposals.

Refactor bad code, build a testable specification and write a compiler guide #478

Open ringabout opened 1 year ago

ringabout commented 1 year ago

The Neon Genesis Evolution Plan

Though the North Sea is far, one may yet reach it riding the whirlwind; though the morning has passed, it is not too late to mend at dusk. -- Preface to the Tengwang Pavilion (滕王阁序). It is better late than never.

Introduction

The lack of compiler documentation and of a language specification with elaborate, testable examples is a long-standing obstacle that hinders the Nim language from gaining the popularity it deserves. Compiler documentation gives developers an overview of the compiler internals and helps them step into the development of the Nim compiler. A language specification clarifies how the language works. Armed with detailed testable examples, it can prevent regressions and act as a precise guide for developers, demonstrating that each feature works as described.

This RFC is intended to be progressive and to grow gradually. The work should start with commenting the compiler code. A testable specification can be rolled out at the same time. Finally, the comments and the specification should help form a guide summarizing how the Nim compiler works.

The tasks are divided into three steps: comment the compiler code, build a testable specification and write a compiler guide.

Comment the compiler code

Compiler code without comments tends to be hard to understand. Comments help developers gain insight into why the code is written the way it is and how it is meant to work. Comments can become obsolete and misleading as time goes by; however, it is worthwhile to update them periodically, since the code and its comments will be read thousands of times.

The Nim CI should have a measure that calculates the comment ratio of the code. Ideally it should measure the comment ratio per pull request. The comment ratio is defined as comment lines / total lines (excluding empty lines).
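A minimal sketch of such a measure (assuming a comment line is any line whose first non-space character is `#`; the actual CI tooling, and whether only `##` doc comments should count, is left open by this RFC):

```nim
import std/[os, strutils]

# Comment ratio = comment lines / total non-empty lines, per the definition above.
# Heuristic: a comment line is one whose first non-space character is '#'.
proc commentRatio(path: string): float =
  var total, comments = 0
  for rawLine in readFile(path).splitLines:
    let line = rawLine.strip()
    if line.len == 0:
      continue
    inc total
    if line.startsWith("#"):
      inc comments
  result = if total == 0: 0.0 else: comments / total

when isMainModule:
  for path in walkDirRec("compiler"):
    if path.endsWith(".nim"):
      echo path, ": ", commentRatio(path)
```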

Contributions to the compiler documentation should come in the form of pull requests; each pull request should comment a particular module or piece of functionality in the Nim compiler.

Build a testable specification

A specification should describe what should work and what should fail, with abundant testable examples. It should cover as many cases as possible.

As a convention, a describe block can be used. The implementation starts from:

```nim
template describe*(x: string, body: typed) =
  block:
    body
```

It may be extended for better documentation generation in the future. The testable specification for the Nim manual should be put under the tests/spec/manual directory. A simple example is given below:

```nim
# manual_bool_type.nim
import stdtest/specutils

describe "The internal integer value of a Boolean type":
  describe "0 for false":
    doAssert ord(false) == 0
  describe "1 for true":
    doAssert ord(true) == 1

describe "A Boolean type is an Ordinal type":
  doAssert bool is Ordinal
  doAssert true is Ordinal
  doAssert false is Ordinal

describe "The size of a Boolean type is one byte":
  doAssert sizeof(bool) == 1
  doAssert sizeof(true) == 1
  doAssert sizeof(false) == 1
```
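The specification should also cover what must fail; one possible convention (a sketch, not something prescribed by this RFC) is to assert that an offending snippet does not compile:

```nim
# Self-contained: the minimal `describe` from above plus one "should fail" case.
template describe(x: string, body: typed) =
  block:
    body

describe "A Boolean type has no `+` operator":
  doAssert not compiles(true + false)
```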

These examples should generate pretty HTML documentation. They can be linked from the Nim manual or included in it in a foldable form (folded by default).
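One conceivable way to wire this up (a sketch only; the RFC leaves the actual design open) is to let `describe` itself expose its descriptions to a documentation generator, for instance behind a define:

```nim
# Hypothetical extension of `describe` (not part of the RFC): when compiled with
# -d:specOutline, every block echoes its description at compile time, so an
# external tool can collect an outline of the specification.
template describe*(x: string, body: typed) =
  when defined(specOutline):
    static: echo x
  block:
    body
```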

Write a compiler guide

A compiler guide helps developers learn the internals of the Nim compiler without wasting time reading the compiler code. What is a magic function? How are Nim's methods implemented? Is it efficient to use & to concatenate multiple strings? These are questions that a compiler guide should attempt to answer. The guide should focus on explaining such terms and the gist of their implementation.

For instance:

The `&` proc is a magic proc used to concatenate strings and chars. The Nim compiler applies optimizations to make it perform as well as the in-place version.
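For comparison, the hand-written in-place version alluded to here would look roughly like this (a sketch; `name` is only a placeholder variable):

```nim
let name = "Nim"  # placeholder value for the example
# Preallocate the result once, then append each piece in place.
var s = newStringOfCap("Hello ".len + name.len + ", how do you feel".len + 1)
s.add "Hello "
s.add name
s.add ", how do you feel"
s.add 'z'
```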

There are four overloads of the `&` proc in the system module.

```nim
proc `&`*(x: string, y: char): string {.magic: "ConStrStr", noSideEffect.}
proc `&`*(x, y: char): string {.magic: "ConStrStr", noSideEffect.}
proc `&`*(x, y: string): string {.magic: "ConStrStr", noSideEffect.}
proc `&`*(x: char, y: string): string {.magic: "ConStrStr", noSideEffect.}
```

All of them have the same magic mConStrStr, which is needed in the optimization phases.

s = "Hello " & name & ", how do you feel" & 'z'

Here is the AST of the right-hand-side expression:

```
StmtList
  Infix
    Ident "&"
    Infix
      Ident "&"
      Infix
        Ident "&"
        StrLit "Hello "
        Ident "name"
      StrLit ", how do you feel"
    CharLit 122
```
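A dump like the one above can be reproduced with `dumpTree` from `std/macros` (a sketch; `name` does not need to be declared because the macro's argument is untyped):

```nim
import std/macros

# Prints the parse tree of the expression at compile time.
dumpTree:
  "Hello " & name & ", how do you feel" & 'z'
```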


- [x] [Is it efficient to use `&` to concatenate multiple strings?](https://ringabout.github.io/neon/concatenate)
- [ ] todo ...
Varriount commented 1 year ago

I wholeheartedly agree with this. And in fact (if someone tells me how), I'm willing to put up a bounty of up to $1,000 USD, depending on the amount of work done.

PhilippMDoerner commented 1 year ago

> Compiler code without comments tends to be hard to understand. Comments help developers gain insight into why the code is written the way it is and how it is meant to work. Comments can become obsolete and misleading as time goes by; however, it is worthwhile to update them periodically, since the code and its comments will be read thousands of times.

Just as a forewarning, since this has been a massive sticking point for folks at my workplace: is the intent to go beyond doc comments into comments inside of macros/procs/templates? My experience is that comments within functions get outdated first, as somebody updates something that doesn't change the outcome of a function but how it achieves it. Doc comments have a somewhat better rate of staying current and correct. That's why I tend to steer away from comments that aren't doc comments.

I haven't looked at the compiler, but is it an option to have the comments bit only be about doc comments?
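For reference, the distinction in Nim (a minimal illustration; `frobnicate` is a made-up proc): `##` doc comments are extracted by `nim doc`, while plain `#` comments live only in the source.

```nim
proc frobnicate(x: int): int =
  ## Doc comment: picked up by `nim doc` and shown in the generated HTML.
  # Plain comment: explains the implementation, visible only in the source.
  result = x * 2
```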

ringabout commented 1 year ago

Doc comments would still need to explain every quirk in the procs because the Nim compiler is too complex. Some procs have hundreds or even thousands of lines. It is painful to go through all that code without comments.

Nim does generate documentation for the compiler procs => https://nim-lang.org/docs/compiler/astalgo.html

Vindaar commented 1 year ago

> I wholeheartedly agree with this. And in fact (if someone tells me how), I'm willing to put up a bounty of up to $1,000 USD, depending on the amount of work done.

I'd pitch in as well (with a smaller amount I can afford).

PhilippMDoerner commented 1 year ago

> I wholeheartedly agree with this. And in fact (if someone tells me how), I'm willing to put up a bounty of up to $1,000 USD, depending on the amount of work done.

> I'd pitch in as well (with a smaller amount I can afford).

I actually missed this one. Given that I'm already throwing money into the ecosystem, might as well pitch into this bit as well. Which...err... leads me to the question if we have a mechanism to deal with such an idea of a "community sponsored bounty" or if we best handle this all individually.

ringabout commented 1 year ago

> if we have a mechanism to deal with such an idea of a "community sponsored bounty" or if we best handle this all individually.

There is. See the bounty for Interfaces proposal => https://github.com/nim-lang/RFCs/issues/39#issuecomment-1018706344

mratsim commented 1 year ago

In terms of documentation, one of the scariest parts of the compiler is anything that does semantic checking (sem*.nim) and also sigmatch. I dread going there.

I also missed the part about the closure iterator transformation. It's an important area for async, laden with bugs due to the complex interactions with defer/try+except/blocks ...

haxscramper commented 1 year ago

> The Nim CI should have a measure that calculates the comment ratio of the code.

Current breakdown of the compiler documentation comment ratio (single-hash comments are not considered documentation):

image

ringabout commented 1 year ago

I'm going to push the setup to serve as an example for later updates once this plan is accepted.

haxscramper commented 1 year ago

Another obvious statistic while we are at it: this heatmap shows the correlation of edits between different parts of the project code. The left side shows the total number of edits of each file (at the time of analysis). Each cell in the heatmap represents the percentage of times the file at the bottom was edited together with the one on the left.

Left ordered by total number of edits

image

Or by total number of co-edits with different files

image
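For context, co-edit counts like these can be approximated directly from git history; a rough sketch (assuming `git log --name-only` output; this is not how code_forensics is actually implemented):

```nim
import std/[osproc, strutils, tables]

# Count how often pairs of files show up in the same commit. A heatmap cell
# would then be coEdits(pair) / edits(left file).
let log = execProcess("git log --pretty=format:@commit@ --name-only")

var
  edits = initCountTable[string]()
  coEdits = initCountTable[string]()
  batch: seq[string]

proc flush(batch: var seq[string]) =
  for f in batch:
    edits.inc f
  for i in 0 ..< batch.len:
    for j in i + 1 ..< batch.len:
      coEdits.inc batch[i] & " + " & batch[j]
  batch.setLen 0

for line in log.splitLines:
  if line == "@commit@":
    flush batch
  elif line.strip.len > 0:
    batch.add line.strip
flush batch

coEdits.sort()  # descending by count
var shown = 0
for pair, n in coEdits:
  echo pair, ": ", n
  inc shown
  if shown == 15: break
```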

A compiler description in general should also include guidelines about the relationships between different parts of the compiler - not just who calls whom in what order, but what changes are introduced in the semantic layer, how types defined in one part are used in another, and how to improve certain parts. People come to the project with questions, and it is important to anticipate common ones, such as "how do I improve error message X", "there is an RFC Z, I really like it and want to help implement it, where do I start", "I changed something in place A and now things break for test B - what might be wrong", and so on.

Varriount commented 1 year ago

@haxscramper Just curious - how did you produce those heat maps and charts?

haxscramper commented 1 year ago

I'm writing a tool for this kind of analysis: https://github.com/haxscramper/code_forensics - for now it is a proof of concept, but I'm working on more stats, such as average commit message length.

image

Files edited together 100 times or more

image

150 times or more

image

75 times or more

image

Someone might say it is obvious, but for someone who just came to the project, this kind of structural description can be very valuable.

Araq commented 1 year ago

I prefer massive refactorings over massive documentation efforts. I know that we can do both, but if I have to choose, I take "refactorings".

But the biggest elephant in the room is probably "phase ordering issues". We need a set of refactorings so that the transformation pipeline is simple and correct. Then "who calls whom" might also be clean enough so that it doesn't need that much documentation.

ringabout commented 1 year ago

I agree that commenting is not the right approach. It is better to have a clear code structure. The point "Rewrite bad code; don't write comments" from https://www.youtube.com/watch?v=NLebZ3XT92E&t=308s seems fair.

The refactorings should have been part one. I would like to investigate refactorings that make the compiler code more readable and more accessible to newcomers.

haxscramper commented 1 year ago

sigmatch.paramTypesMatch has a total of >>NINE<< one-character variables - m, f, a, x, y, z, c, r, i - all involved in a complex best-candidate argument search algorithm; that's about a third of the English alphabet. Good to know the algorithm's complexity has such a robust limiting mechanism - at most you can write code that is three times as complex, not more - so I'm not sure we even need to talk about refactoring, to be honest.

Araq commented 1 year ago

@haxscramper I don't understand. A refactoring can be as simple as "rename a variable".

haxscramper commented 1 year ago

For the record - it was a joke, although maybe someone can interpret it as an example starting point, or as an example of how code should not be written. I was just blown away by the sheer number of one-character variables in what appears to be one of the most important procedures in one of the most complex parts of the compiler (semantic analysis). This appears to be a common theme in sem, with a proliferation of undescriptive names that make it harder to intuit what the code was meant to do.

ringabout commented 1 year ago

> Rewrite the codegen to be based on a stream of tokens, not based on Ropes. (Memory consumption, performance.)

I might work on it in a few days, since it doesn't sound too difficult to me. I suppose it does the reverse of what the lexer does.
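For context, the idea in a toy sketch (not the compiler's actual design): the code generator appends small string tokens to a flat sequence and joins them once at the end, instead of building and concatenating Ropes:

```nim
import std/strutils

# Generated code is collected as a flat stream of string tokens.
var tokens: seq[string]

proc emit(parts: varargs[string]) =
  for p in parts:
    tokens.add p

emit "int ", "main", "(void) {\n"
emit "  return ", "0", ";\n"
emit "}\n"

# A single join/write at the very end replaces repeated Rope concatenations.
echo tokens.join()
```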