Humanity’s calling for the next hundred years is restoring natural beauty to Mother Earth. We believe that the planet is more of a being to be in communion with, rather than a resource to be extracted. To assist in this mission, we are providing a platform to help with global pollution and litter cleanup. This work belongs to the future of the human race, and therefore, we are using technology to provide community support for the cleanup effort. Its host of features is valuable not just to our users, but to the planet itself.
We are advancing humanity’s mission of waste and plastic-pollution cleanup for the protection of Nature from harm and to improve the lives of human and non-human inhabitants. We provide a hub for the mission of cleaning up the planet for many different individuals and organizations, and we aim to be the ultimate resource and center of the cleanup effort.
We are a team of deeply devoted environmentalists who have a passion for restoring natural beauty. The planet is our common home: we borrow it from our children and inherit it from our parents. Caring for our common home with all living things will call forth into the future a life with less war, famine, destruction, climate disaster, hate, and division.
Software development live sessions happen on our public discord channel.
Litter Map is also a registered nonprofit organization with open board meetings on discord.
This repository is the cloud native back-end for the Litter Map application.
First install the requirements.
There are multiple ways to install a version of sam-cli, but if in doubt, try installing it with pip as a user package:
pip install --user aws-sam-cli
If you don't have an AWS account, create one.
Using your root AWS account, create a new user with programmatic access to the AWS API. Make sure this user has membership in a group with AdministratorAccess privileges.
Select the created user on the users page, and in the Security credentials tab choose Create access key. Select Show to see the secret access key. You will not have access to the key again through the web interface, so make sure to store it in a secure location. The reference guide explains AWS security credentials and best practices in detail.
With this information on hand, configure your AWS credentials with the access key and secret key along with the desired deployment region by running:
aws configure
Your credentials will now be stored in the local file ~/.aws/credentials, and the AWS command line tools will be able to execute commands on behalf of this account.
If you've already done that before (e.g., in the context of another deployment), take a look at how to create and switch between named profiles. It is assumed that separate instances (testing, staging, production) will be deployed under their own separate user accounts. In this case, run:
aws configure --profile <profile-name>
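For example (the profile name below is hypothetical; most AWS and SAM tools also honor the AWS_PROFILE environment variable, so a profile can be selected once per shell session):

```shell
# Create a named profile once (interactive prompts, same as `aws configure`):
#   aws configure --profile littermap-staging

# Then select it for the current shell session; most AWS and SAM commands
# will pick it up automatically (profile name here is hypothetical):
export AWS_PROFILE=littermap-staging
echo "Using AWS profile: $AWS_PROFILE"
```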
If this is a fresh clone of this source code repository, prepare the configuration files by running:
./init-config
Fetch the latest package of the image scaling lambda function as described in "Provide a built package", because it is a compiled binary that is not included with the source code.
Prepare the stack template and function code for deployment:
sam build
(what this does)

Deploy the stack (ignore values you don't know for now):
sam deploy -g
(what this does)

Carefully note the values returned in the Outputs section. You will need them to configure the front-end client.
Authorize outside network access to the database (by default, access is restricted by security group rules):
./manage rds-db-authorize-remote-access
Perform first-time initialization on the littermap database:
./manage rds-db-init
This will initialize PostGIS and create the tables and access roles.
Take note of the geometry_type_objectid value in the output. It must be supplied to the stack after every database initialization, so redeploy the stack now and manually specify the DBGeometryTypeOID parameter:
sam deploy -g
If you forget the oid, you can retrieve it by running:
./manage rds-db-run "SELECT oid FROM pg_type WHERE typname='geometry';" reader
To enable signing in with Google, create OAuth credentials in the Google Cloud console:

- For Application type, choose Web application
- Under Authorized redirect URIs, depending on which domains you will be using to test the application, add the appropriate URLs with a path of /api/auth/google:
  - https:// + domain name + /api/auth/google (e.g., for production)
  - https://localhost:9999/api/auth/google (for local testing)
  - https://m2cgu13qmkahry.cloudfront.net/api/auth/google (for testing with the CloudFront CDN)
  - https://91kfuezk29.execute-api.us-east-1.amazonaws.com/api/auth/google (for testing the API directly)
- Take note of the Client ID and Client Secret values
- Run sam deploy -g and specify those values when prompted

Each lambda function is packaged and deployed as a separate service, which means they do not all have to be implemented using the same technology stack. While a lambda function written entirely in one of the supported interpreted languages (JavaScript, Python, Ruby) requires the appropriate runtime interpreter on the remote machine to execute it, a native lambda is designed to be run directly by the CPU. If the lambda function executable or any of its dependencies needs to be provided as a binary, it will need to be built and packaged.
To provide a native binary lambda deployment package, there are two options: provide an already built package, or build it from source inside a container (described below).

There is currently one native lambda: scale-image (it is currently still an early version).

If you have a built package ready, just place scale-image.zip into functions/scale-image/build/. The build/ directory may need to be created.
Native lambdas can be built inside a specialized container that has the appropriate reproducible build environment that is isolated from your host system.
Make sure you've got Docker installed.
The build environment can currently be built for one of two 64-bit CPU architectures: x86 or arm. Since all deployed lambda functions are set to require the newer ARM CPU (due to its cost effectiveness), to build a package that will execute when deployed, it must be built and packaged together with its native linked libraries inside an arm build environment container.
At this time, the only available build environment definition is for building lambdas from C++ source code using the official AWS C++ SDK and AWS C++ Runtime libraries. It also includes the libvips high performance image processing library. Additional build environments can be developed in the future that will allow building lambdas based on other technology stacks.
Either of the build environments (or both) can be built with:
./manage make-cpp-build-environment arm
./manage make-cpp-build-environment x86
If the architecture is not explicitly specified, an environment for the same architecture as your host machine is built by default.
Even if the deployed lambdas are specified to require an arm machine, an x86 build environment may come in handy for iterating during development if you are developing on an x86 machine, because the native build process is much faster.
If the build environment isn't the same as your host machine's native architecture, Docker will run it using user space emulation and building the image may take an hour or longer. If it doesn't work out of the box, it may require having qemu installed along with binary format support.
Once you have one or both of these environments built, they should be listed with:
docker images
Now, to build scale-image for the arm architecture:
./manage build-cpp-function scale-image arm
If the build process completes successfully, it will produce a deployment-ready zip package at functions/scale-image/build/scale-image.zip
.
The serverless stack includes S3 buckets for hosting the front-end and user-uploaded media content. To deploy a version of the front-end:

./manage frontend-prepare

Edit publish/config.json to configure it, then:

./manage frontend-publish

If you turned on EnableCDN, the front-end will now be available through the CloudFront CDN endpoint.
To deploy the latest version:
./manage frontend-update
./manage frontend-publish
To deploy a specific branch or commit:
./manage frontend-update <branch-or-commit>
./manage frontend-publish
Don't forget to edit publish/config.json before publishing.
In the following instructions, replace $BASE with the API URL, which looks something like:
https://2lrvdv0r03.execute-api.us-east-1.amazonaws.com/api/
Or with the CloudFront URL (if deployed with the CDN):
https://d224hq3ddavbz.cloudfront.net/api/
The active URL for the deployed API can be viewed by running:
./manage list-api-urls
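The examples below are easiest to run with $BASE exported in your shell, for instance (the URL is illustrative; substitute the one printed by ./manage list-api-urls, without a trailing slash so paths like $BASE/add compose cleanly):

```shell
# Substitute the API URL reported for your own deployment (this one is an example)
export BASE="https://2lrvdv0r03.execute-api.us-east-1.amazonaws.com/api"

# Requests below are made against paths under $BASE, e.g.:
echo "$BASE/id/1"
```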
Add a location (anonymous submit must be enabled):
echo '{"lat":22.3126,"lon":114.0413,"description":"Whoa!","level":99,"images":[]}' | http -v POST $BASE/add
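Since the request body is plain JSON, it can help to stage it in a shell variable and sanity-check it before sending (jq, suggested later in this document, is optional here; the submit command is the same one shown above):

```shell
# Stage the request body so typos are easy to spot before POSTing
payload='{"lat":22.3126,"lon":114.0413,"description":"Whoa!","level":99,"images":[]}'
echo "$payload"

# Optional sanity check that the body parses as JSON (requires jq):
#   echo "$payload" | jq -e . > /dev/null && echo "payload ok"

# Submit it (same endpoint as above):
#   echo "$payload" | http -v POST $BASE/add
```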
Retrieve a location:
http -v $BASE/id/1
Log in (with Google):
$BASE/login/google
Add a location as a logged-in user (get the session value from the Set-Cookie response header):

echo '{"lat":-26.049,"lon":31.714,"description":"Whoa!","level":99,"images":[]}' | http -v POST $BASE/add "Cookie:session=cbdaf7784f85381b96a219c7"
Log out:
$BASE/logout
View today's event log (in UTC):
./manage event-log
View event log for any particular day (specified in UTC):
./manage event-log 2021-12-25
Export the API schema:
./manage api-export
Perform arbitrary queries on the database by running:
./manage rds-db-run <query> [<user>]
(user is: admin, writer, reader; default: admin)

For example, retrieve all the stored locations with:
./manage rds-db-run 'SELECT * FROM world.locations;' reader
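Since the query is just a string argument, it can be composed in the shell first; for example, a count of stored locations (the table name comes from the example above; the invocation is left commented here because it needs a deployed database):

```shell
# Compose a read-only query against the world.locations table shown above
query='SELECT COUNT(*) FROM world.locations;'
echo "$query"

# Run it with the least-privileged role:
#   ./manage rds-db-run "$query" reader
```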
The database can be completely reset by running:
./manage rds-db-init
Connect to the database as an administrator (you must have PostgreSQL installed to have the psql utility):
./manage rds-db-connect
To see that you're logged into the database system as the correct user:
select current_user;
Show all tables in the world schema (as opposed to the public schema):
\dt world.*
Show all locations stored in the locations table:
select * from world.locations;
Type \help to see available database commands.
To save money while not using the database during development, it can be temporarily hibernated with:
./manage rds-db-hibernate
Wake it back up with:
./manage rds-db-wake
To take this service down, run:
./manage stack-delete
(what this does)

If that doesn't go smoothly, troubleshoot the issue or delete the stack in the CloudFormation dashboard.
The general procedure for changeset deployment after making changes is:
sam build && sam deploy
However, during development it can be much quicker to use sam sync. See:
sam sync --help
For a better understanding, read:
sam build --help
After making adjustments to the stack definition in template.yml, optionally check it for errors with:
sam validate
If no errors are found, prepare the deployment files and then deploy the changes with:
sam build
sam deploy
After running sam build, the intermediate template is available at .aws-sam/build/template.yaml.
To change any parameter values before deployment, run sam deploy -g
.
To learn more about the deployment process and options run:
sam build -h
sam deploy -h
Be aware that deleting or changing the properties of individual running resources manually (e.g., in the AWS dashboard) will result in stack drift and can create difficulties that must be resolved in order to manage the stack with sam.
For quick iteration, create shell aliases for:
sam build && sam deploy
sam build && sam deploy -g
sam build && sam deploy --no-confirm-changeset
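For example (the alias names are just suggestions):

```shell
# Suggested aliases for the common build-and-deploy loops (names are arbitrary)
alias sbd='sam build && sam deploy'
alias sbdg='sam build && sam deploy -g'
alias sbdq='sam build && sam deploy --no-confirm-changeset'

alias sbd   # prints the definition to confirm it was set
```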
Check JavaScript code for errors with ./manage lint before deploying changes to functions.

Colorize JSON output with jq, for example: aws iam get-user | jq
Database used to store locations
Amazon RDS is a scalable relational database service that supports the PostgreSQL engine.
Database engine used to store user profiles, sessions, and event logs
DynamoDB is a fast and flexible NoSQL database that is simple by design but challenging to master. If used correctly, it will scale to terabytes and beyond with no performance degradation.
Copyright (C) Litter Map contributors (see AUTHORS.md)
This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License along with this program. If not, see https://www.gnu.org/licenses/.