jrburke opened this issue 11 years ago
We have this issue. We are coding in TypeScript and would like to use r.js to bundle/minify. However, we lose the sourcemaps for debugging purposes. I've always been in the "old school" of generating "PDBs" even for production. There are bugs that can only be reproduced in production environments. If you lose sourcemap support with release code, debugging becomes much harder.
+1 for sourcemap chaining.
Ugh, bad commit comment. That commit is for #460.
Ran out of time for this release window, but more thoughts on this:
1) `load.fromText(text, opts)`, with `opts.sourceMap` being the source map (a rough sketch of the loader plugin side is below). During dev, just tack it inline to the evaled code along with the sourceURL?
2) During a build, the optimizer writes the source map to the file system for multifile optimization. For single file optimization, hold on to it until the end and save it out separately(?). For `out: function(){}`, pass it in an opts object as the second arg, under a `sourceMap` property.
3) Make sure loader plugin `writeFile` works out too.
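A minimal sketch of how point 1 might look from the loader plugin side, assuming the proposed `opts.sourceMap` form (not a shipped API) and a placeholder `compile()` standing in for whatever transpiler is in use:

```js
// Hypothetical transpiling loader plugin using the *proposed*
// load.fromText(text, opts) form; opts.sourceMap is the idea above,
// not an existing requirejs API.
define(function () {
    // Placeholder for a real transpiler; assume it returns the generated JS
    // plus a v3 source map object for it.
    function compile(source, name) {
        return {
            js: source,
            sourceMap: { version: 3, file: name + '.js', sources: [name + '.coffee'], names: [], mappings: '' }
        };
    }

    return {
        load: function (name, req, onload, config) {
            req(['text!' + name + '.coffee'], function (source) {
                var compiled = compile(source, name);
                // Proposed: hand the map to the loader/optimizer along with the text.
                onload.fromText(compiled.js, { sourceMap: compiled.sourceMap });
            });
        }
    };
});
```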
Perhaps just look for an existing fileName.js.map file, then use that as the input source map for the next stage of the transform.
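A rough standalone sketch of that idea, assuming the uglify-js 2.x `minify` API of that era (`fromString`, `inSourceMap`, `outSourceMap`, with `inSourceMap` taking a path to the existing map); the file names are only illustrative:

```js
var fs = require('fs');
var UglifyJS = require('uglify-js'); // 2.x-era API assumed

var srcFile = 'main.js';               // output of the transpiler
var existingMap = srcFile + '.map';    // map the transpiler wrote, if any
var options = { fromString: true, outSourceMap: 'main.min.js.map' };

if (fs.existsSync(existingMap)) {
    // Chain: the minified output's map will point back through the
    // transpiler's map to the original source.
    options.inSourceMap = existingMap;
}

var result = UglifyJS.minify(fs.readFileSync(srcFile, 'utf8'), options);
// Depending on the uglify version, the sourceMappingURL comment may need to
// be appended to result.code manually.
fs.writeFileSync('main.min.js', result.code);
fs.writeFileSync('main.min.js.map', result.map);
```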
Apologies if I'm misunderstanding something, but shouldn't it be possible to completely automatically figure out the source map for a given file? If it's a source-mapped file, then it has a comment saying where the source map is, no?
+1 for this issue!
I'm working with .coffee amd-modules, using a grunt task to compile those to .js amd-modules, each with their own .js to .coffee source-map.
Afterwards, I combine those amd-modules into components using r.js, each component being based on the dependency tree of a certain file. When using uglify2 with sourcemap support enabled, this results in a .js file and a sourcemap for each component, the sourcemap mapping the built .js file to the batch of .js files concatenated into it.
I was thinking about somehow using the uglify2 option in-source-map. Just to clarify: the required files are first concatenated by r.js, and the resulting file is then optimized using uglify/uglify2/closure?
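For reference, the r.js side of a setup like that would roughly be the build config below; as far as I know `generateSourceMaps` only works with `optimize: 'uglify2'` and requires `preserveLicenseComments: false` (names and paths are just illustrative):

```js
// build.js -- the .coffee files are assumed to already be compiled to .js
// (each with its own .js.map) before r.js runs.
({
    baseUrl: 'compiled',           // directory holding the transpiled .js modules
    name: 'main',
    out: 'dist/main.js',
    optimize: 'uglify2',
    generateSourceMaps: true,      // emits dist/main.js.map alongside the bundle
    preserveLicenseComments: false // required when generateSourceMaps is true
})
```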
Why is this necessary? Like @pelotom mentioned, the map filename should be in the compiled output, no?
@matthewwithanm Nevermind, it is in the compiled output.
Neat! Seems to me that using that is much less fragile than making assumptions about filenames.
There, now we check each js file for its existing map instead of making assumptions about file names.
```js
uglify2: function (fileName, fileContents, outFileName, keepLines, config) {
    var result, existingMap, resultMap, finalMap, sourceIndex,
        uconfig = {},
        existingMapPath = outFileName + '.map',
        baseName = fileName && fileName.split('/').pop();

    config = config || {};

    // Look for a sourceMappingURL comment left behind by a transpiler.
    var mapUrlPattern = /#\s*sourceMappingURL=([a-zA-Z0-9_.-]+)/,
        mapUrlMatch = fileContents ? mapUrlPattern.exec(fileContents) : null;

    // If the file references its own map, resolve that name next to the
    // output file instead of assuming outFileName + '.map'.
    if (mapUrlMatch) {
        existingMapPath = existingMapPath.split('/').slice(0, -1).join('/') +
            '/' + mapUrlMatch[1];
    }
```
How does this help getting sourcemaps for transpiled languages?
That finds the sourcemap if the transpiler provides an existing sourcemap. That's just the beginning of the uglify2 function (around line 23363 of the built r.js). That function uses the inSourceMap as part of the uglify config object.
```js
    lang.mixin(uconfig, config, true);
    uconfig.fromString = true;

    if (config.generateSourceMaps && outFileName) {
        uconfig.outSourceMap = baseName;

        // Feed the transpiler's map to uglify so the generated map chains
        // back to the original source.
        if (file.exists(existingMapPath)) {
            uconfig.inSourceMap = existingMapPath;
            existingMap = JSON.parse(file.readFile(existingMapPath));
        }
    }
```
I'm just making it so that it uses the reference from the javascript file instead of making assumptions about file naming.
Should I do a pull request for this?
> That finds the sourcemap if the transpiler provides an existing sourcemap. That's just the beginning of the uglify2 function (around line 23363 of the built r.js). That function uses the inSourceMap as part of the uglify config object.
I thought r.js builds the dependency tree first, then concatenates the files and optimizes the concatenated file afterwards. How does this help when the concatenated file consists of dozens of javascript files that each have their own sourcemap from transpiling?
It might not help there, but it does help when minifying files without concatenating them. What I do know is that uglify runs on each js file individually. Also, the output source map for the concatenated file includes all of the individual sources: `"sources":["../../require.js","../../bootstrap.js","main.js","../../ref.js","../../jquery-1.10.2.js","../../jquery-ui-1.8.20.js","../../jquery.validate.js","../../knockout-3.0.0.debug.js","../../lodash.js","../../gridmvc.js","../../jquery.unobtrusive-ajax.js"]`
Wait a second - isn't building a dependency tree and concatenating all files from that tree into a single one the main purpose of the r.js optimizer?
I always thought the minification step using uglify or closure compiler comes afterwards - please correct me if I'm wrong.
What's the point of optimizing a single file without dependencies?
Perhaps it was for a future implementation where it uses uglify's inSourceMap, and then figures out how to map that to concatenated files. I'm not entirely sure if r.js is currently capable of this, but this code is using the inSourceMap config option for uglify2, so I tried to make it more robust for later if not now.
I think there are two use cases here:
1) Developer used a requirejs loader plugin to handle transpiling.
2) Developer used a command line tool to generate the transpiled JS.
It sounds like looking for a comment pointing to an existing source map may help case 2, but may not help case 1, because that could be a "single JS file" style of build: there is no intermediate file output, just going from source to final JS file.
What would really help me for this ticket is to have a nice, simple test case for each of those, using just r.js and maybe one other command line call for case 2 (no grunt/gulp, etc.) that generates typical output I could use to test any changes to address the ticket. Part of the difficulty is that I do not use transpiled languages like this, so I do not have enough in-house examples to draw from.
@jrburke for case two it should be enough to require a file which was optimized with uglify before
@ooxi for case 2, I would prefer to have something that had multiple files in another format that have been converted to .js files before the optimizer is called. I feel like it will be closer to how the code will be used, vs using uglifyjs as a stand-in for that case. So something in coffeescript for example.
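For instance, a minimal case-2 fixture could look something like this (the layout, names, and coffee flags are just an assumed example, not something that exists in the repo):

```js
// Assumed layout:
//   src/main.coffee, src/dep.coffee
//
// Transpile step (plain CLI, no grunt/gulp):
//   coffee --compile --map --output www/js src/*.coffee   -> .js + .js.map files
//
// Optimize step:
//   node r.js -o build.js
//
// build.js:
({
    baseUrl: 'www/js',
    name: 'main',
    out: 'www/js/main-built.js',
    optimize: 'uglify2',
    generateSourceMaps: true,
    preserveLicenseComments: false
})
```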
Thinking a little higher level, what if every js tool followed the following rule:
Either:

- The tool writes its output to a new file: e.g. coffeescript compiles `foo.coffee` to `foo.js`, or with uglify, the source file is `foo.js` and the generated file is `foo.min.js`. If the tool does this, then the tool should name the source map by appending `.map` to the output filename, e.g. `foo.js.map` for coffeescript, or `foo.min.js.map` for uglify.
- Or the tool replaces the input source file with its output, renaming the original (as requirejs does). In that case, the tool should append `.map` to the new name of the input source file. So, requirejs names its source map `foo.js.src.js.map`. In this case, `foo.js` will point to `foo.js.src.js.map`, which will point to `foo.js.src.js`, which will point to `foo.js.map`, which will point to `foo.coffee`.

Pretty much every other tool complies with the first case above; the exception is requirejs. If it were to comply with the second, then I believe no tools would step on each other's source maps.
Of course, it's not conventional since the source map file is named according to the original source file, not the output source file. But then, requirejs is already not conventional compared to other tools, since it renames the original source file, and replaces it with the output file.
Thanks for putting thought into managing the tool flow; I'll see how I can integrate that. For the requirejs case, it keeps the names the same because that allows the rest of the project (the HTML, the requirejs.config() used in the project) to continue to work without modifications.
Yes, I understand why requirejs is different, and I think that's fine.
We're working on a new web assets pipeline for Play Framework right now, so we're in a good position to test and give feedback on anything you need. We've also done a lot of thinking about how all the different possible tools in the pipeline might fit together, including source maps, so we'd be happy to give feedback on any ideas you have.
As far as I can see, we're dealing with two basic scenarios (factoring out JavaScript transpiled in loader plugins).
As r.js seems to utilize mozilla/source-map, mozilla/source-map#105 could probably help.
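For what it's worth, the chaining itself can be done with that library's `applySourceMap`; a minimal sketch outside of r.js, assuming a pre-0.7 source-map where the consumer constructor is synchronous:

```js
var sourceMap = require('source-map');

// Rewrites the optimizer's map (transpiled js -> bundled js) so it points all
// the way back to the original source, using the transpiler's map
// (original -> transpiled js).
function chainMaps(optimizerMap, transpilerMap, transpiledFileName) {
    var generator = sourceMap.SourceMapGenerator.fromSourceMap(
        new sourceMap.SourceMapConsumer(optimizerMap)
    );
    generator.applySourceMap(
        new sourceMap.SourceMapConsumer(transpilerMap),
        transpiledFileName // as it appears in the optimizer map's "sources"
    );
    return generator.toJSON();
}
```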
The first scenario doesn't work that well. See #672 (the `names` attribute is always empty).
+1
+1
Though it seems that this is a dead issue, +1 from me too.
Has anyone come up with a work-around? Specifically, I'm in scenario 2 (Developer used a command line tool to generate the transpiled JS.)
@david0178418 This tool can create source maps directly back to original source. https://github.com/Rich-Harris/sorcery
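From its README, basic usage is roughly the following (treat the details as approximate):

```js
var sorcery = require('sorcery');

// Follows the sourceMappingURL chain from the built file back through each
// intermediate map and writes a flattened map pointing at the originals.
sorcery.load('dist/main-built.js').then(function (chain) {
    chain.write();
});
```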
> Perhaps just look for an existing fileName.js.map file, then use that as the input source map for the next stage of the transform.
Prefer using fileName.js.map, with the .js extension, since it is conceivable down the road that there could be both a fileName.html.map and a fileName.js.map. I suppose as a stopgap, also look for fileName.map, assuming that it is the map that goes with the JS.
While that would work for "whole directory" optimizations, for single file optimizations, where for example just out= is used (or in the browser, out = function (){}), it would be nice to pass the maps through the code flow without having to touch disk.
In jrburke/require-cs#42, @guybedford suggested perhaps something passed to load.fromText as a way to do this.
The "use existing .map" file change may be easy to support for 2.1.7, not sure about the other path yet.