Closed · MatteoGioioso closed this issue 2 years ago.
I don't recommend using subfolder proxying. Your Strapi should be on its own sub-domain, like `api.example.com`.
Your front end also should not live in the public folder of Strapi and should have its own virtual host config.
What do you mean? My frontend lives in a separate container.
I tried with a subdomain and it is still not working, with more or less the same error:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://frontend:3000;
    }
}

server {
    listen 80;
    server_name admin.example.com;

    location / {
        proxy_pass http://api:1337;
    }
}
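For anyone copying the vhost config above: it only sets `proxy_pass`, while most nginx reverse-proxy setups (including the ones in Strapi's deployment guides) also forward the host and client headers. A sketch of the admin vhost with those standard directives added; the websocket lines are only needed if something behind the proxy upgrades connections:

```nginx
server {
    listen 80;
    server_name admin.example.com;

    location / {
        proxy_pass http://api:1337;
        # Forward the original host and client address to Strapi
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Only needed if the upstream uses websockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```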
Everything loads correctly, but when I try to navigate to admin.example.com/admin I get "cannot find `main.js`" and "`vendor.dll`" errors.
Ok, after 2 days I have solved it 🙏. Strapi was unable to find the frontend files because I had kept the host as `localhost`.
api:
  build: strapi/
  environment:
    - HOST=www.subdomain.yourdomain.com # <-- change to your host
    - NODE_ENV=production
  volumes:
and then in your strapi config `server.json`:

{
  "host": "www.subdomain.yourdomain.com",  <-- change to your host
  "port": "${process.env.PORT || 1337}",
  "production": true,
  "proxy": {
    "enabled": false
  },
  ...
fuck finally! thank you @MatteoGioioso
@MatteoGioioso where did you change this? which file?

api:
  build: strapi/
  environment:
@hzrcan that should be the docker-compose.yml
@MatteoGioioso Tnx! But is a subfolder impossible? I'd still prefer that.
@mb89 There is official documentation about it, which involves both Nginx and Strapi configuration, but I haven't yet succeeded in making it work.

EDIT: Sub-Folder-Split still didn't work for me, but Sub-Folder-Unified did! 👍 Strapi v3.0.5, docker-compose, Nginx v1.19.1. The configuration is exactly the same as on the official page.
@needleshaped Tnx, missed the tabs! I got Sub-Folder-Split to work as well. You can't use `/admin` unfortunately, but `/dashboard` or `/cms` works.
/etc/nginx/sites-available/strapi.conf:
...
# Strapi API
location /api/ {
    rewrite ^/api/(.*)$ /$1 break;
    proxy_pass http://localhost:1337;
    ...

# Strapi Dashboard
location /cms {
    proxy_pass http://localhost:1337/cms;
    ...
config/server.js:
...
url: 'https://example.com/api',
admin: {
    url: 'https://example.com/cms',
},
...
> I don't recommend using subfolder proxying. Your Strapi should be on its own sub-domain like `api.example.com`. Your front end also should not live in the public folder of Strapi and should have its own virtual host config.

Hi! I am a beginner in strapi, could you please explain a bit more? What do you mean by "Your front end also should not live in the public folder of Strapi and should have its own virtual host config"?
This issue is quite old and we offer some sample configs for working with split frontend and backend here: https://strapi.io/documentation/v3.x/getting-started/deployment.html#optional-software-guides

But to answer your question, Strapi is a Headless CMS, so it's designed to run on its own without also trying to serve a frontend. Depending on your frontend, it's better to offload that to an actual web server (Nginx, Apache, Caddy, Traefik, etc.). Some frontend frameworks can also run as a service (aka SSR or Server-Side Rendering) and likewise should be proxied by Nginx.
I don't know if that helps anyone, but it took me 6 hours until I found out to delete the `/build` and/or `.cache` folder after changing `config/server.js#admin.url` to `https://localhost/dashboard`.
An alternative is to use `yarn build --clean` or `npm run build -- --clean`, which will do that for you.
I know this is an old issue... but ouch! I can confirm this is still the case. `strapi build` does a whole webpack build and wants to hard-code all the urls and paths into the build.
This means (option 1) if you are trying to manage your builds/deploys with a CI, you need to make your FULL production ENV (secrets included, I suppose) available to the CI... because you don't know, as a user, which of the values in `./config/` are getting baked into the build. The CI needs to know ahead of time where this strapi will be deployed so it can provide the right ENV for the build. And the CI build is specific to a target machine/env.
Or (option 2) you just use the CI to do what it can (e.g. `node ci` testing and `tar`) and then you have to have your provisioning code run `strapi build` on the end server once the ENV has been exposed. This also means the CI probably needs to include all dev dependencies in the build going to the server. This long-running webpack build blocks the rest of provisioning and leads to unpredictable go-lives. As a matter of best practice, we try to never do builds on production servers (including `yarn install --frozen-lockfile` or `npm ci` or building source packages). In strapi's case it appears that we have to deploy to the server with both production and dev dependencies to be able to build (also something we try to avoid... in strapi's case the dev deps are apparently production deps). This also means that we can't guarantee the build will work on the CI before deploying to the production server... which can easily lead to deployment of broken builds we won't find out about until after deployment. So in this option a staging production server is mandatory as a way of testing that the builds even work (since we can't do this in the CI).
The last (option 3) is manual deployment and giving users ssh access to production to manually manage/update strapi on production boxes. For security and data privacy reasons we try to strongly limit any SSH access to production machines and the access that is there is for "reading" (debugging) and users are not generally supposed to be making changes as there is a strong chance (if they even have access) their changes will conflict or be overwritten by our server management/provisioning code. So this option means that strapi servers cannot be handled in the same way as other production api and webservers...means losing out on a lot of automated monitoring, nginx stats, etc.
The underlying problem here (well, for us) is that config is hard-coded into the webpack build (maybe other things?) at the `strapi build` stage... which requires the build machine to know all the details (secret production ENV) about the destination production server. I'm sure fixing this is more complicated than using directory-relative urls in the source. Of course, when starting strapi in production, if expected/required config (ENV vars loaded by files in `./config/`) isn't there or doesn't validate, then I would expect the production service to error and not start. This is a production/provisioning error and not a CI/app level error in my opinion.
Another thing... apparently mounting strapi under an nginx location block with a prefix (e.g. `/api`) is supported... but we have to do an nginx rewrite to strip the `/api` from the prefix? This is in the unified example but not called out anywhere as necessary or important. And apparently we need to do this even though we have already informed strapi about the path where it is mounted (public url)... so it should be able to manage without the rewrite, I would think? To add to the awesomeness... `/ap` gives an nginx 404, `/api` gives a strapi 404, and `/api/` gives the strapi "hello world" page.
Am I missing something here on strapi deployment/devops or is this project really not compatible with devops environments? It is okay if strapi is really designed for non-devops users who manually manage a small number of servers (e.g. the pm2 people)...even if that means it isn't really compatible with enterprise deployment environments and controls.
And we are just trying to avoid having strapi modify state (db, uploads, etc.) outside of known locations that we can manage backup/restore/migrate/etc. We are getting closer. Here is what we came up with, for anyone else that has the same concerns or requirements...
The webpack build needs to happen on the CI (for us) and needs to know its own base path as well as the path to get to the API.
Strapi assumes that if your public admin url isn't absolute (starts with `http...`), it should be treated as relative to the api url (e.g. `/admin` gets turned into `/api/admin`, whereas `https://example.com/admin` will not be messed with).

There is a whole lot of trimming slashes (leading/trailing) and re-prepending slashes going on, as well as special-case logic based on whether or not you pass an absolute url.
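To illustrate the observed behavior (this is a hypothetical re-statement, not Strapi's actual code): a non-absolute admin URL gets resolved against the API URL, with slash trimming on both sides:

```javascript
// Hypothetical re-implementation of the behavior described above:
// absolute admin URLs pass through untouched; relative ones are joined
// onto the API URL after trimming stray slashes.
function resolveAdminUrl(apiUrl, adminUrl) {
  if (/^https?:\/\//.test(adminUrl)) {
    return adminUrl; // absolute URL: "will not be messed with"
  }
  const api = apiUrl.replace(/\/+$/, '');     // drop trailing slashes
  const admin = adminUrl.replace(/^\/+/, ''); // drop leading slashes
  return `${api}/${admin}`;
}

// resolveAdminUrl('https://example.com/api', '/admin')
//   -> 'https://example.com/api/admin'
```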
Our solution on this was to patch `node_modules/strapi-utils/lib/config.js` in the CI after `{yarn,npm} install` but before `strapi build`. In particular this line: https://github.com/strapi/strapi/blob/v3.5.2/packages/strapi-utils/lib/config.js#L38

This allows us to avoid having to tell our CI or the `strapi build` the absolute urls where the resulting build will be deployed. And it allows us to re-use the same build (tarball) for multiple deployments at different urls, as long as they all use the same uri convention (e.g. `/admin` and `/api`).
Of course the target server that consumes/deploys the tarball needs to know about this convention, and nginx needs to be set up to match. So there is some coupling between CI and production, as not everything can be passed as runtime config (the build config). The strapi systemd service (passes env and runs `strapi start`) needs to be informed as well.

This allows us to use the same tarball (domain-name independent) for both staging and production servers and not have to run `strapi build` at all on target machines.
After looking through the code, the only ENV needed for `strapi build` is `server.url`, `server.admin.url`, and `NODE_ENV=production`. The strapi server config is also set with ENV using the `env` helper described in the docs (`url: env('PUBLIC_URL', 'https://www.example.com/public-url-not-set'),`).
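For reference, the documented `env()` helper boils down to a defaulting lookup on `process.env`. A minimal sketch; Strapi's real helper also has typed variants (`env.int`, `env.bool`, ...) not shown here:

```javascript
// Minimal sketch of the env() helper pattern: return the variable if set,
// otherwise fall back to the provided default.
const env = (key, defaultValue) =>
  process.env[key] !== undefined ? process.env[key] : defaultValue;

// Usage, as in the thread:
// url: env('PUBLIC_URL', 'https://www.example.com/public-url-not-set'),
```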
uploads: move the data dir (`./public`) out of the app dir

I'm new to strapi (a few days) and just learning what it does... some of these things are probably obvious if you already know how strapi works. Uploads (local uploads plugin) are (by default) saved relative to the app path (`./public`). When a new build (full app dir) comes from the CI, that gets blown away (bye bye image files). So we also moved this data dir outside of the app dir.
Now it can be managed separately from the deploy of the `strapi build` app. It wasn't obvious how to do this, but in the end we found we could override this via `./config/middleware.js` and then set the `DATA_DIR` via env vars.
// config/middleware.js
// WARNING! This file does not support the env helper like other files in ./config,
// so we have to set defaults ourselves.
module.exports = {
  settings: {
    public: {
      path: process.env.DATA_DIR || './public',
    },
  },
};
So now we are in a state where we only have to manage the data_dir and db backups/restores and the strapi application folder can be managed completely with git including CI builds.
plugins
We are a little worried about people potentially installing plugins using the production UI instead of having site developers install/test them on their own machines and then commit them to our strapi project staging branch...because the next deployment from the CI will also wipe out any plugins that have been added (the entire app dir/build is deployed as a unit).
I noticed that plugin presence seems to be determined by looking at package name prefixes in package.json... so it might be better to have installed-plugin state stored in the database and have a configurable `PLUGIN_DIR` (like `DATA_DIR` for uploads) that can live outside the main app dir.
Alternatively or in addition.... having an easy config switch to turn off marketplace (no plugin installs allowed when enabled) would be great. It just disables the UI feature and developers have to manage plugin install and testing and then commit that config to git.
auto-update
During CI (again after `install` but before `strapi build`) we customize a few things, including turning off the update notification.
/bin/echo -e \
"export const LOGIN_LOGO = null;\nexport const SHOW_TUTORIALS = false;\nexport const SETTINGS_BASE_URL = '/settings';\nexport const STRAPI_UPDATE_NOTIF = false;" \
> node_modules/strapi-admin/admin/src/config.js
We don't want strapi to behave like wordpress where the only way to track/manage changes is make frequent backups of a giant directory and diff them. So we expect that updates will be done by developers locally, tested and then committed to git where they will be built/deployed. We set STRAPI_UPDATE_NOTIF = false;
but it would be nice to be confident that ability to update via the api/admin-ui is really disabled.
telemetry
Despite setting `STRAPI_TELEMETRY_DISABLED=true` for both the `strapi build` on the CI (admin webpack build) and the strapi service/app (API) on the destination server... we still saw a lot of calls going out to analytics.strapi.io. We added another patch to the CI to just replace `analytics.strapi.io` with `analytics.strapi.io.test` in the files where we had located analytics calls.
Perhaps that ENV is outdated and we need to remove the UUID from package.json instead to accomplish this? Or perhaps telemetry is different than the analytics calls?
> Another thing... apparently mounting strapi under an nginx location block with a prefix (e.g. `/api`) is supported... but we have to do an nginx rewrite to strip the `/api` from the prefix? This is in the unified example but not called out anywhere as necessary or important. And apparently we need to do this even though we have already informed strapi about the path where it is mounted (public url)... so it should be able to manage without the rewrite, I would think? To add to the awesomeness... `/ap` gives an nginx 404, `/api` gives a strapi 404, and `/api/` gives the strapi "hello world" page.
This is caused by how koa-router looks for routes: it does regex matching, and with a prefix you would have to manually update all of the prefixes for every model via the `routes.json`. We opted not to mess with the koa router to handle sub-folder based proxying.
Thanks for the clear explanation.
A little feedback as a new-to-strapi (3 days) user reading the docs to do an enterprise deployment. Just some things I think are important requirements or warnings that could be called out better in the docs. Sorry if I missed them and they are there.
- subdir-{unified,split}: the trailing slash on api calls matters (`/api/`, not `/api`). Shouldn't be a problem since api calls are generated by code, not users... but still an important detail to mention.
- subdomain deployments mean managing a separate host per environment: `staging-api.example.com`, `staging-www.example.com`, `api.example.com`, `www.example.com`.
- `strapi build` must be run with complete production config/environment in order to build properly. So this config/env needs to be available in the CI, or the build needs to be run on a configured target production server.
So the doc example (subfolder-unified) uses this rewrite in the `/api` nginx location block...
rewrite ^/api/(.*)$ /$1 break;
Anyone see an issue with a small modification to make the trailing slash optional on the original uri, but the leading slash mandatory on the rewritten one?
rewrite ^/api/?(.*)$ /$1 break;
Unless I am missing something, the tweaked rewrite will result in the following which I can't imagine breaking anything in terms of strapi uri expectations. Not a big deal just nice not to have to worry about whether there is a trailing slash or not. Not sure if there is a meaningful performance penalty as I generally avoid doing rewrites if I can.
| Orig. URI | Rewritten |
| --- | --- |
| `/api` | `/` |
| `/api/` | `/` |
| `/api/foo` | `/foo` |
| `/apifoo` | `/foo` |
| `/api?foo=bar` | `/?foo=bar` |
| `/api/?foo=bar` | `/?foo=bar` |
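A quick way to check those mappings: nginx applies `rewrite` to the URI without the query string and re-appends the args afterwards, so the tweaked pattern can be emulated like this (an emulation for testing the regex, not anything nginx actually runs):

```javascript
// Emulate: rewrite ^/api/?(.*)$ /$1 break;
// nginx matches the regex against the path only, then re-appends the
// query string to the rewritten result.
function emulateRewrite(uri) {
  const q = uri.indexOf('?');
  const path = q === -1 ? uri : uri.slice(0, q);
  const args = q === -1 ? '' : uri.slice(q);
  return path.replace(/^\/api\/?(.*)$/, '/$1') + args;
}
```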
@mattpr no that makes perfect sense, can you open a PR on our documentation repo: https://github.com/strapi/documentation
Please ignore the contribution guide and use the following PR branch for your base, as we are planning to merge it later this week and it's part of a massive restructure project: https://github.com/strapi/documentation/pull/154
I'm going to transfer this issue over to the docs repo and reopen it pending that suggestion for the configs. You may also want to check the HAProxy rewrite as well.
Closing this issue fixed via pull request #157
Thank you for your contribution!
> localhost

Is www required if using a subdomain??
> Is www required if using a subdomain??
No it's not
See: https://github.com/strapi/documentation/issues/156#issuecomment-787069579
Informations
What is the current behavior? I am running `next.js` and `strapi.js` together in two separate docker containers with `docker-compose`. I want Nginx to redirect all the requests from `www.mydomain.com/admin` to the strapi admin page. This is my nginx config:

and my `server.json`:

Everything works fine and the app starts and works normally. However, when I try to access the admin page of strapi I get the following error:
What is the expected behavior? The admin page should load normally.
Suggested solutions