exercism / nim-representer

MIT License

Feature: upgrade macro evaluation strategy to use `nimscripter` #37

Closed ynfle closed 2 years ago

ynfle commented 2 years ago

This upgrades the representer to use https://github.com/beef331/nimscripter.

This module is a bridge between NimScript (a subset of Nim that runs in the VM) and compiled Nim. It embeds an interpreter into the executable and doesn't require a Nim compiler to be on the PATH. The primary advantage is that the bridge is built in, so there is no need to recompile to create a representation for every exercise; the representer uses Nim's macros, which only function in the VM. Without this bridge, the macros still only run in the VM, but the executable has to be recompiled every time.
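A minimal sketch of what such a bridge can look like. This is hedged: it follows the `loadScript`/`invoke` pattern from nimscripter's README, but the script path, proc name, and exact signatures here are assumptions, not the representer's real layout.

```nim
# Hypothetical sketch of an embedded-VM bridge via nimscripter.
# The script path and the exported proc name are assumptions.
import std/options
import nimscripter

# Compile the NimScript file once into the embedded interpreter;
# no external `nim` binary is needed at this point.
let intr = loadScript(NimScriptPath("scripts/representer.nims"))
assert intr.isSome, "script failed to compile in the VM"

# Call a proc defined inside the script and marshal the result back
# to compiled code. `invoke` takes (interpreter, procName, args...,
# returnType) in nimscripter's documented pattern.
let representation = intr.get.invoke(getTestableRepresentation,
                                     readFile("exercise.nim"),
                                     returnType = string)
echo representation
```

Because the interpreter lives inside the already-compiled binary, a new exercise only means loading a new script, not recompiling the representer itself.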

This also greatly simplifies the testing process: the representation no longer needs to be created at compile time and injected in. Rather, it can be created at runtime via the VM bridge.

CI: update to use version 1.6.6 and refactor the representer

https://github.com/jiro4989/setup-nim-action is used to install the desired Nim version via choosenim.

Caching is used so the dependency packages don't have to be redownloaded on every run; the cache is invalidated when the nimble file changes. This is the example used in the setup-nim-action repo's README.md.
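For illustration, a workflow along these lines implements the setup plus caching described above. This is a sketch based on the setup-nim-action README pattern; the action versions, cache path, and nimble commands are assumptions, not copied from this repo's actual workflow file.

```yaml
# Hypothetical CI sketch; versions and paths are assumptions.
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Cache nimble packages; the key includes a hash of the nimble
      # file, so the cache is invalidated when dependencies change.
      - name: Cache nimble packages
        uses: actions/cache@v3
        with:
          path: ~/.nimble
          key: ${{ runner.os }}-nimble-${{ hashFiles('*.nimble') }}
          restore-keys: ${{ runner.os }}-nimble-

      # Install the pinned Nim version via choosenim.
      - uses: jiro4989/setup-nim-action@v1
        with:
          nim-version: '1.6.6'

      - run: nimble build -y
      - run: nimble test -y
```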

Closes #9 as not relevant

ynfle commented 2 years ago

Please review, although I need to look over everything again

ErikSchierboom commented 2 years ago

@ynfle Is there any performance difference between this version and the previous one?

ynfle commented 2 years ago

Note: docopt was added as a dependency here because previously there was only compilation, so runtime arguments weren't possible. This can be split into a separate PR or a separate commit.

ynfle commented 2 years ago

> @ynfle Is there any performance difference between this version and the previous one?

Yes. On my machine it is about a 2x performance boost: from ~1.6s on main to ~0.75s with the latest commit on this PR.

As more normalizations are added, I think there will be less of a performance boost, as the primary difference isn't in the evaluation of the AST etc. but rather the startup time and the "other things" the compiler does which we don't need and which aren't embedded in our executable. The previous strategy was to recompile every time.

ynfle commented 2 years ago

A fairer comparison would be a NimScript script, which wouldn't need to be recompiled every time. A prototype with the following script ran in about ~0.95s, which is only a ~1.25x speed-up instead of 2x:

```nim
import std/[json, macros, os, strformat, strutils]
import representer/[mapping, normalizations, types]

proc createRepresentation*(file: string): tuple[tree: NimNode, map: IdentMap] =
  var map: IdentMap
  let code = parseStmt(file)
  result = (tree: code.normalizeStmtList(map), map: map)

proc getTestableRepresentation*(contents: string, switch = false): SerializedRepresentation =
  let (tree, map) = createRepresentation(contents)
  result = (repr tree, $(if switch: %map.switchKeysValues else: %map))

proc getFileContents*(fileName: string): string = readFile fileName

func underSlug(s: string): string = s.replace('-', '_')

proc main() =
  let (tree, map) = getTestableRepresentation(getFileContents(paramStr(3) / underSlug(paramStr(2)) & ".nim"))
  if paramCount() == 5:
    let outDir = paramStr(4)
    writeFile outDir / "mapping.json", $map.parseJson
    writeFile outDir / "representation.txt", tree
  echo &"{tree = }\n{map.parseJson.pretty = }"

main()
```

ynfle commented 2 years ago

@ee7 @ErikSchierboom Which size level should this PR be labeled? x:size/massive?

ErikSchierboom commented 2 years ago

We don't yet have official guidelines, but I'm fairly certain this would count as massive.

P.S. You should probably get into the habit of using the x:rep labels to assign reputation, as they're superseding the x:size labels for reputation awarding (both work for backwards compatibility).