Open ryancco opened 4 years ago
I think it makes sense to be able to switch between the two depending on use-case.
I've noticed that using separate-vendor: true allows the deployment to succeed, but I'm not convinced the database is available and working; it always seems to return the fallback location. I wonder if perhaps the storage/app directory isn't made available to a deployment. It may need to be moved to resources or something.
Did you have any success getting it working?
Yeah, I have found that downloading the database to the resources directory seems to make it work.
'maxmind_database' => [
    'class' => \Torann\GeoIP\Services\MaxMindDatabase::class,
    'database_path' => resource_path('geoip.mmdb'),
    'update_url' => sprintf('https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-City&license_key=%s&suffix=tar.gz', env('MAXMIND_LICENSE_KEY')),
    'locales' => ['en'],
],
And call geoip:update as a build step.
build:
    - "composer install --no-dev"
    - "php artisan optimize"
    - "php artisan geoip:update"
    - "yarn && yarn run prod && rm -rf node_modules"
I use the database path, but that would also work. The important part is including it in your build step. I just recently revisited my GitHub actions workflow and, for either you or anyone else looking to use this workaround, let me save you some time I lost working through this...
Assuming the vapor.yml file has been configured with the build step mentioned before:
    - php artisan geoip:update
Your deployment step should look something like this (I've removed a few lines specific to my workflow):
- name: Deploy to production
  env:
    VAPOR_API_TOKEN: ${{ secrets.VAPOR_API_TOKEN }}
    CACHE_DRIVER: array
    MAXMIND_LICENSE_KEY: ${{ secrets.MAXMIND_LICENSE_KEY }}
  run: ./vendor/bin/vapor deploy production
This would also apply if you're running any integration tests against this package, or for whatever reason need it to function properly during your test suite. You'll need to specify the array cache driver (as the file driver does not support tagging) and be sure to have your MaxMind license key set as an environment variable so that geoip:update is able to fetch the mmdb.
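For the test suite itself, here is a minimal sketch of forcing the tagging-capable array store (the setUp hook on your base TestCase is an assumption; setting CACHE_DRIVER=array in phpunit.xml achieves the same thing):

// tests/TestCase.php (sketch)
protected function setUp(): void
{
    parent::setUp();

    // The package tags its cached lookups, so tests need a cache store
    // that supports tagging; the array driver does, the file driver does not.
    config(['cache.default' => 'array']);
}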
Worth noting here too - if you're only interested in the request country and using Cloudflare (or can use Cloudflare) they can expose the request country as a request header.
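For example, a minimal sketch of reading that header in Laravel (this assumes Cloudflare's IP Geolocation feature is enabled, which adds a CF-IPCountry header to proxied requests):

// Anywhere you have the request (sketch)
$country = request()->header('CF-IPCountry'); // two-letter code, e.g. "US"; "XX"/"T1" for unknown or Tor traffic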
I'm running into the same issue; has anyone tried attaching EFS to Vapor and using that path for the database?
Because when using the resources (or database) path, it will store the database there, but as the Vapor docs say:
When running an application in a serverless environment, you may not store files permanently on the local filesystem, since you can never be sure that the same serverless "container" will be used on a subsequent request.
Does that keep working when you go over the 1000 concurrent connections (the default) and Lambda scales up? I would think the database would then be gone again.
I tried including the database in the deploy steps but, due to the Lambda size limit, could not follow through with it. I'm certain it would work consistently if paired with the Docker runtime.
@denniskrol I'm currently exploring using EFS to store the MaxMind database file. It seems to be working fine; I've essentially turned my config into the following:
'maxmind_database' => [
    'class' => \Torann\GeoIP\Services\MaxMindDatabase::class,
    'database_path' => env('MAXMIND_STORAGE_PATH', storage_path('app/geoip.mmdb')),
    'update_url' => sprintf('https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-City&license_key=%s&suffix=tar.gz', env('MAXMIND_LICENSE_KEY')),
    'locales' => ['en'],
],
That way I can still work with it locally, and on Vapor I point the path to something like /mnt/app/geoip.mmdb. Still, I'm cautious to call it a solution due to the limits on the payload size a Lambda function can handle, aside from the fact that I'm no expert on streams, which I feel could be a solution if that were to become an issue.
Also, as a side note (mainly because I didn't know): request()->server('REMOTE_ADDR') will not work on Vapor; because requests come through a proxy, you have to use request()->ip().
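A quick sketch of the difference:

// On Vapor the Lambda sits behind a proxy layer (sketch)
$raw = request()->server('REMOTE_ADDR'); // the proxy's address, not the visitor's
$ip  = request()->ip();                  // resolves the real client IP via the trusted proxy headers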
Completely forgot to update this.
I went for a dual Vapor and EC2 with EFS solution. My project leans heavily on Laravel jobs and they were 2 to 3 times slower on Vapor than on an EC2 server. I run the same code on EC2 as on Vapor and use the EC2 server just for (scheduled) jobs, which also means the EC2 instance updates the MaxMind DB and stores it on EFS.
For Vapor, the database_path in the config is set to "/mnt/local/geoip.mmdb"; for EC2, database_path is set to "/mnt/efs/geoip.mmdb" (where I mounted the EFS storage).
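Keeping the database fresh is then just a scheduled command on the EC2 side; a minimal sketch, assuming the package's geoip:update command and the standard console kernel:

// app/Console/Kernel.php (sketch)
protected function schedule(Schedule $schedule)
{
    // Only EC2 writes to the EFS mount, so it owns the database refresh.
    $schedule->command('geoip:update')->weekly();
}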
You can now get city, latitude, and longitude using dynamic Cloudflare headers (not through Workers, so it's free). If this is roughly all you need, it's a great alternative to using GeoIP in your Laravel app. You can see the available fields here and how to add an HTTP request header modification rule here.
A requirement of Laravel Vapor is that the compressed project size is < 45MB. While the GeoLite2-City database is simply too large for this on its own, being ~69MB uncompressed, the GeoLite2-Country database works just fine, sitting at just 4MB uncompressed.
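For reference, pulling the smaller database is only a change to the edition_id in the update URL; a sketch based on the config shown earlier (the service would still need the code change described below to call the right reader method):

'maxmind_database' => [
    'class' => \Torann\GeoIP\Services\MaxMindDatabase::class,
    'database_path' => storage_path('app/geoip.mmdb'),
    'update_url' => sprintf('https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-Country&license_key=%s&suffix=tar.gz', env('MAXMIND_LICENSE_KEY')),
    'locales' => ['en'],
],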
Obviously, this solution would break compatibility with other services, and the public API would then return either a partial Location response object or an entirely new response object. However, if this were a configurable option for the maxmind_database service, it would be an intentional, visible configuration option that would allow support for Laravel Vapor.
The code changes necessary in the MaxMindDatabaseService would be minimal: instead of always calling the city method on the Reader object, we would call country when the configuration option is set to country.
Given that the response from the country database really only consists of iso_code, I would think it would be fine to opt for a partially complete Location object consisting of only that. Again, this would be an intentional configuration change available to users who do not need or cannot use the entire city database in environments such as Laravel Vapor.
This is more of an idea-gathering exercise to see whether it would be supported/accepted, or whether there's a workaround already out there. I'm more than willing to PR the changes if this approach is accepted.
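A rough sketch of the shape that change could take (the edition option name is hypothetical, and $this->config() stands in for however the service reads its own configuration; Reader::city() and Reader::country() are the actual GeoIP2 reader methods):

// Inside the MaxMindDatabaseService locate logic (sketch)
$reader = new \GeoIp2\Database\Reader($this->config('database_path'), ['en']);

$record = $this->config('edition', 'city') === 'country'
    ? $reader->country($ip)   // GeoLite2-Country: country-level data only
    : $reader->city($ip);     // GeoLite2-City: full city/coordinate detail

return new Location([
    'ip'       => $ip,
    'iso_code' => $record->country->isoCode,
    // city, lat/lon, etc. are simply unavailable with the country edition
]);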