Closed: Telroshan closed this 1 year ago
It happened here: https://github.com/bigskysoftware/htmx/pull/1338
Thanks @alexpetros! Ok, so we have to keep this href reference.
It didn't occur to me at first, but I get why `npm t` is problematic here.
`npx serve` starts a server that we access at http://localhost:3000, so we're running the tests from http://localhost:3000/ and the URLs are fine for hx-boost here because we have an HTTP URL.
`npm test`, on the other hand, runs the tests directly against the HTML file, as the command in package.json resolves to `mocha-chrome test/index.html`.
Now, if I open the test/index.html file directly in my browser, I get those exact same 4 hx-boost errors!
That's because the URL the tests are run from is then a file URL, not an HTTP one, e.g. file:///E:/htmx/test/index.html.
May I ask how the pipeline is set up, @alexpetros? Would it be conceivable to introduce an `npx serve` call and run the tests on a local server with an HTTP URL, rather than against the HTML file itself (and a file URL)? Would the pipeline even support such a thing?
So for example, considering a Windows environment, the following works in my case:
"test": "START /B npx serve & mocha-chrome http://localhost:3000/test/",
Chaining with a single & so that even if the server is already running (and thus the start fails), the tests still run.
Instead of currently:
"test": "mocha-chrome test/index.html",
It first starts the server in the background, then runs the tests. With this approach, `npm t` works fine:
616 passing (4s)
1 pending
The above only works in a Windows environment though; I don't know which environment the pipeline runs in, so this may need to be adjusted to bash syntax
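For reference, a minimal sketch of what a bash-friendly equivalent might look like; it assumes serve's default port of 3000 and an sh-compatible CI shell, neither of which I've verified against the actual pipeline:

```sh
# Hedged sketch (assumes serve defaults to port 3000 and the CI shell is sh/bash):
# a single & backgrounds the server and starts the tests immediately.
npx serve & mocha-chrome http://localhost:3000/test/
```

Note that this races with server startup and leaves the background server running after the tests finish; a wrapper such as the start-server-and-test package would handle both, but that goes beyond what is being proposed here.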
To argue a bit for this approach: conceptually, I think it makes more sense to run the tests against a server-based page (HTTP), since we are precisely mocking an HTTP server to run most of our tests
Let me know what you think
> The above only works in a Windows environment though; I don't know which environment the pipeline runs in, so this may need to be adjusted to bash syntax
My local environment is an M1 MacBook Pro, and the CI runs on Ubuntu, so I'm guessing that what you're experiencing is a Windows issue. I really need to get a Windows machine.
> Now, if I open the test/index.html file directly in my browser, I get those exact same 4 hx-boost errors!
This does not happen on MacOS! So we know that something we're doing with the file URLs is broken on Windows.
> To argue a bit for this approach: conceptually, I think it makes more sense to run the tests against a server-based page (HTTP), since we are precisely mocking an HTTP server to run most of our tests
I actually sort of disagree. These are HTML pages, and they should work everywhere HTML does. The fact that 4 out of 619 tests don't work when run from file URLs, and only on Windows, suggests to me we're doing something incorrectly with either those 4 tests or the thing that they're testing.
So if we take a closer look at #1338, this change was made to reduce cache misses for the same URL referenced differently (i.e. with trailing slashes). This is an optimization, so reverting it is not a breaking change, especially since it caused a bug. We also don't have to revert the entire change: the history support can stay, I think we just need to remove the URL normalizing for hx-boost.
I'm not convinced that was a particularly worthwhile optimization anyway, but for now let's just revert this single line and see if the tests pass:
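A sketch of the line in question, paraphrased from the descriptions in this thread rather than copied from the htmx source; the boostedPath wrapper and the getRawAttribute body shown here are illustrative assumptions:

```js
// Illustrative paraphrase, not the actual htmx source.
// getRawAttribute is assumed to read the attribute exactly as written.
function getRawAttribute(elt, name) {
  return elt.getAttribute && elt.getAttribute(name);
}

// Hypothetical wrapper around the single line under discussion in boostElement.
function boostedPath(elt) {
  // #1338 switched this to the resolved property: return elt.href;
  // which turns into a file:// URL when the page is opened directly from disk.
  // The proposed revert reads the attribute as the author wrote it and lets
  // the browser resolve it when the request is actually made:
  return getRawAttribute(elt, "href");
}
```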
Oh yeah look at that: https://github.com/alexpetros/htmx/actions/runs/6014239150/job/16313551736
Okay, I can't get these tests to run reliably on Windows Server in the GitHub runner (lmao), but I do think that the change I suggested (and the corresponding change to that hx-disinherit test) should resolve it.
Oh, didn't know that Windows was acting differently there! It's actually a nice thing that you have a Mac on your end, this way we can test all sides lol (I don't own one)
Btw, I tried to run the tests in Windows' Ubuntu bash (WSL), but for some reason it's failing, complaining about a server being unavailable (even though I explicitly ask it to run against the HTML file, thus without a server; maybe it's a WSL issue? Whatever)
> I actually sort of disagree. These are HTML pages, and they should work everywhere HTML does. The fact that 4 out of 619 tests don't work when run from file URLs, and only on Windows, suggests to me we're doing something incorrectly with either those 4 tests or the thing that they're testing.
I would say that htmx is about server-side communication and HTTP requests, so does it make sense to run the tests in an environment that htmx isn't expected to run in anyway (i.e. not an HTTP environment, no server)?
In that sense, I'd find it more logical to adjust the pipeline to run a server (and replicate a "real environment", or probably better called an "expected environment"), rather than reverting a feature that was providing some cache optimization to htmx (the latter could be impactful in a production environment and thus affect end users, as opposed to the current bug that only impacts the tests)
Again, the tests are mocking a fake server, and mocking is about replicating what you would expect in production; since we're replicating a local server here, it makes sense imo to actually provide that local server, which automatically fixes the URLs.
If the tests were hitting absolute URLs (imagine they were hitting an absolute path such as https://htmx.org), then it would make perfect sense to me to keep it a serverless HTML file, but in the current situation (hitting relative URLs, i.e. a local server, a server serving the page itself), I'm a bit skeptical about this approach.
> I would say that htmx is about server-side communication and HTTP requests, so does it make sense to run the tests in an environment that htmx isn't expected to run in anyway (i.e. not an HTTP environment, no server)?
Why would you limit it this way? If htmx works cross-platform on files, you could write, for instance, a rich interface for browsing a local knowledge wiki. HTML works this way, why not htmx? We're going to ditch the compatibility with file systems we've already achieved for the sake of a history cache optimization that only applies to hx-boost links that happen to have the same relative resource name?
More importantly, I think this "optimization" is wrong, both because it does not optimize the thing it claims to optimize and because it introduces unexpected behavior. Let's say I have two anchors:
<a href="/profile">My Profile</a>
<a href="/profile" hx-boost="true">My Profile</a>
What this line of code in the diff does is effectively replace that second anchor with:
<a href="https://example.com/some/qualified/url/profile" hx-boost="true">My Profile</a>
In most cases the behavior is identical, but clearly it's not identical in all cases, and there's absolutely no reason to introduce additional normalization on top of what the browser will already do, especially if it risks behavior that deviates from HTML (since the promise of hx-boost is that we're extending the semantics of the anchor element to allow for partial DOM updates).
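To make the "normalization the browser already does" point concrete, here is a small illustration of the difference between the href property and the raw attribute; the example URLs are hypothetical:

```js
// The href *property* is resolved by the browser against the page's base URL,
// while the raw attribute keeps exactly what the author wrote.
var a = document.createElement("a");
a.setAttribute("href", "/profile");

console.log(a.getAttribute("href")); // "/profile", as written
console.log(a.href);                 // fully qualified, e.g. "https://example.com/profile",
                                     // or "file:///E:/profile" if the page was opened from E:\ on disk
```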
Also, and I'm a little less confident about this, but as best I can tell all the history stuff happens after the request is made, so this doesn't even "prevent potential key collisions", unless he's referring to some internal browser mechanics that I'm unaware of. So we can change this line, reclaim hx-boost file behavior on Windows, and not de-optimize anything.
Hmm, I think I've expressed myself poorly here.
> We're going to ditch the compatibility with file systems
No, not at all! What I'm suggesting here absolutely does not make htmx incompatible with file systems! The issue is that we're testing a setup with relative URLs AS IF there were a local server behind it. The browser, natively, through the href property, sees that the page is not running on a server but is accessed through a file URL, and thus resolves the relative URL to a file system URL. This is the expected behaviour; that's exactly what you would get if you were running htmx from the file system, it is supposed to happen this way.
So the issue here, and what I'm (probably poorly) trying to say, is that the way we test is wrong. We're testing as if we had a local server, but accessing the test file without a local server to serve it.
Does that make more sense?
> introduce additional normalization on top of what the browser will already do
"what the browser will already do" is what it does the href property, it's not an additional normalization here, as if you were to click that /relative_url
from a HTML file, it would indeed redirect you to a file URL, so the behavior here is actually correct & expected
I feel like my English is really getting in the way here, so sorry if it's messy, but what I'm trying to say is that there is a dissonance between what we test and how we test it
Your English is great, I know exactly what you mean :)
I also wasn't clear enough that I basically agree with you about the way we test—doing so from a server would be more accurate to the majority use-case, and at the very least something we should do in addition to our existing test harness.
However, that's a secondary concern to whether or not the referenced commit is correct, and I think that particular commit isn't.
"what the browser will already do" is what it does the href property, it's not an additional normalization here, as if you were to click that /relative_url from a HTML file, it would indeed redirect you to a file URL, so the behavior here is actually correct & expected
By definition it's an additional normalization, because we change the URL before passing it to XMLHttpRequest(). So whether or not the browser would interpret this URL the same way if it were passed, un-normalized, to XMLHttpRequest(), it is very much an additional manual step.
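As a small illustration of that point (the /test URL here is just the one used in the test suite):

```js
// XMLHttpRequest resolves a relative URL against the page's base URL on its
// own, so resolving it in JavaScript beforehand is an extra, manual step.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/test"); // the browser resolves "/test" here, at request time
xhr.onload = function () {
  console.log(xhr.responseURL); // the fully resolved URL, e.g. http://localhost:3000/test
};
xhr.send();
```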
And as we can see from the test suite, it's a normalization that breaks the functionality of hx-boost if you're referencing relative file URLs on Windows—a corner case, but a real one. So while I agree with you that in theory it shouldn't change anything, it definitely does.
That to me is already disqualifying, but I'm even more inclined to remove it because I'm pretty confident that it doesn't actually optimize anything. It's meant to prevent key collisions, but those keys (URLs) are derived from the response, not the request. The commit also added a function to normalize response URLs, and that appears to be fully sufficient for the optimization it's trying to perform.
> And as we can see from the test suite, it's a normalization that breaks the functionality of hx-boost if you're referencing relative file URLs on Windows
Does it though? The breaking thing is that we're mocking an HTTP URL (/test), but in that exact same context, on Windows (because on Linux & Mac, file URLs look like relative HTTP URLs, like /var, /etc or whatever), we're in a file system environment, so the URLs are expected to be file URLs and not HTTP ones
If I run the following on Windows, i.e. a relative file URL in an expected file system environment (as opposed to the tests, which expect an HTTP URL in a file system environment, hence the dissonance):
<a href="/folder/subfolder/file.txt" hx-boost="true">Click me</a>
It correctly sends a request for E://folder/subfolder/file.txt, so it doesn't seem to break anything to me (apart from a test that isn't running in the environment we pretend it to be in, so I wouldn't consider it a bug)
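For illustration, the same resolution expressed with the standard URL parser, which is what the href property reflects; the base URLs below are made up:

```js
// A root-relative href resolved against file:// base URLs. With a Windows
// drive letter in the base, the drive letter is carried over; on a Unix-style
// path it simply becomes an absolute file path, which is why it resembles a
// relative HTTP URL on Linux/macOS.
console.log(new URL("/folder/subfolder/file.txt", "file:///E:/htmx/test/index.html").href);
// "file:///E:/folder/subfolder/file.txt"
console.log(new URL("/folder/subfolder/file.txt", "file:///home/me/htmx/test/index.html").href);
// "file:///folder/subfolder/file.txt"
```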
Look at it this way: this additional logic to normalize the URL with JavaScript removes the browser's built-in resiliency and error-correction. Why would we do that? You're correct that if our test URLs were more precise it wouldn't be a problem, but the fact that the browser can figure that out on its own is a feature of the browser, and precisely the kind of thing that htmx is designed to take advantage of.
Instead, we can just remove this additional logic, from which we derive no clear benefit, and let the browser interpret that URL in exactly the same manner it would interpret a normal href value.
I just PRed the change and I think that if you look at that line of code in isolation it makes a lot more sense to remove it.
I mean, I agree with you, I just hope so, as I'm not 100% confident that this is not going to cause issues in some cases. I don't know the codebase well enough to tell for sure that we're not relying anywhere on some URL that isn't the response's normalized one, as you mention in the PR, and that this isn't going to break something that isn't covered by the test suite. Maybe I'm being overly cautious here though
Fixed by PR #1742, closing
As discussed in #1687, also mentioned in #1650
`npx serve` works without problems. `npm run test`, though, raises 4 errors related to hx-boost.
Adding logs to htmx, I found out the following. Take this test for example, `hx-boost/handles basic anchor properly`: the test is failing here because the request fails. It doesn't call the mocked server's endpoint, it tries to access a file URL instead, thus no swapping occurs.
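For readers unfamiliar with the suite, this is roughly the shape such a test takes against the mocked (sinon-style) server; the actual htmx test and its helpers differ, so treat this as a hedged sketch:

```js
// Hedged sketch of a boosted-anchor test against a sinon fake server,
// not the actual htmx test code.
it("handles basic anchor properly", function () {
  var server = sinon.fakeServer.create();
  server.respondWith("GET", "/test", "Boosted!");

  var div = document.createElement("div");
  div.setAttribute("hx-boost", "true");
  div.setAttribute("hx-target", "this");
  div.innerHTML = '<a id="a1" href="/test">Click me</a>';
  document.body.appendChild(div);
  htmx.process(div); // let htmx wire up the boosted anchor

  div.querySelector("#a1").click();
  server.respond(); // only answers if the request URL actually matched "/test"

  div.innerHTML.should.equal("Boosted!");
  server.restore();
});
```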
So it turns out that the `/test` URL, which is meant to be caught by the mocked server, resolves to a local file URL instead.
I've noticed a line in htmx core (in `function boostElement`) about that: replacing the current `.href` property access with `getRawAttribute(elt, "href")` gets all the tests to pass. There's a single test still failing, but it's explicitly checking href against the URL, so it's normal that it fails here, given that href's value is the issue.
@alexpetros suggested using git blame to find out what the initial issue was that led to using the `href` property access instead of getting the raw attribute value. I'm going to try that and update this issue with what I find out.