The U.S. COVID-19 Atlas provided county-level visualizations and analytics to reveal a more detailed pandemic landscape with local hotspots of surging COVID cases that were missed by state-level data. The Atlas is live at: https://USCovidAtlas.org.
For more information about additional datasets used in the Atlas, see our Data page. Detailed data documentation for the different variables and data sources is available in the data-docs folder.
Because there is no single validated source for county-level COVID cases and deaths for real-time analysis, we incorporate multiple datasets from multiple projects to allow for comparisons.
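As an illustration of why multiple sources matter, the sketch below compares county counts across two hypothetical source tables keyed by FIPS code. The source names, column names, and values are illustrative assumptions, not the Atlas's actual schema:

```python
import pandas as pd

# Two hypothetical county-level sources keyed by FIPS code; the column
# and source names here are illustrative, not the Atlas's actual schema.
source_a = pd.DataFrame({"fips": ["17031", "17043"], "cases_a": [500, 120]})
source_b = pd.DataFrame({"fips": ["17031", "17043"], "cases_b": [510, 118]})

# Outer-join on FIPS so counties missing from either source remain visible,
# then compute the per-county discrepancy between the two sources.
merged = source_a.merge(source_b, on="fips", how="outer")
merged["diff"] = merged["cases_a"] - merged["cases_b"]
```

The outer join is the important design choice: an inner join would silently drop counties that one source reports and the other misses, which is exactly the discrepancy worth surfacing.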
We also include information from the following datasets:
Previously used Datasets:
To access raw 1P3A data, you must contact 1P3A directly for a token.
Not all cases in the 1P3A data can be assigned to a particular county; see the following list (updated as new data comes in every day).
For a complete breakdown about the methods used in the Atlas, see our Methods page.
The hotspot detection (a Local Indicator of Spatial Association, or LISA) is powered by GeoDa. We also use many other GeoDa features, including natural breaks classification and cartogram techniques. See below for how to apply these methods to reproduce the results using the above datasets.
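To make the hotspot statistic concrete, here is a minimal NumPy sketch of the Local Moran's I that underlies GeoDa's LISA analysis. This is a simplified illustration, not the Atlas's actual implementation: GeoDa additionally runs permutation-based significance testing and classifies each location into high-high, low-low, high-low, or low-high clusters.

```python
import numpy as np

def rook_weights(n):
    """Row-standardized rook-contiguity weights for an n x n grid."""
    size = n * n
    W = np.zeros((size, size))
    for r in range(n):
        for c in range(n):
            i = r * n + c
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    W[i, rr * n + cc] = 1.0
    return W / W.sum(axis=1, keepdims=True)

def local_moran(y, W):
    """Local Moran's I_i = z_i * (W z)_i / m2, where m2 = sum(z^2) / n."""
    z = y - y.mean()
    m2 = (z ** 2).sum() / len(y)
    return z * (W @ z) / m2

# A 3x3 grid with a high-value cluster in the top-left corner.
y = np.array([10, 10, 0,
              10, 10, 0,
               0,  0, 0], dtype=float)
I = local_moran(y, rook_weights(3))
# Positive I_i: the cell resembles its neighbors (high-high or low-low cluster);
# negative I_i: the cell is a spatial outlier relative to its neighbors.
```

In this toy grid, the corner cells of the high cluster and the far low corner get positive values (clusters), while the low cells bordering the high cluster get negative values (outliers).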
More information about the GeoDa project can be found here.
The US Covid Atlas open-science collaboration comprised a coalition of research partners that have been integral to developing and expanding the Covid Atlas to meet the needs of health practitioners, planners, researchers, and the public.
Check out the Team page for more information about the many contributors to the Atlas: https://uscovidatlas.org/about#team.
The Advisory page details information about the Community Advisory Board: https://uscovidatlas.org/about#advisory.
There are multiple resources to learn more about the data, methods, technical infrastructure, and more at the main Covid Atlas site:
If you have a question regarding a specific dataset, please contact the dataset author(s) directly. If you have any questions regarding the Atlas, contact us via: https://uscovidatlas.org/contact
Please cite us according to how you used the US Covid Atlas:
Website: Marynia Kolak, Qinyun Lin, Dylan Halpern, Susan Paykin, Aresha Martinez-Cardoso, and Xun Li. The US Covid Atlas, 2022. Center for Spatial Data Science at University of Chicago. https://www.uscovidatlas.org
Published Work of beta Version: Kolak, Marynia, Xun Li, Qinyun Lin, Ryan Wang, Moksha Menghaney, Stephanie Yang, and Vidal Anguiano Jr. "The US COVID Atlas: A dynamic cyberinfrastructure surveillance system for interactive exploration of the pandemic." Transactions in GIS 25, no. 4 (2021): 1741-1765.
Codebase of beta Version: Xun Li, Qinyun Lin, Marynia Kolak, Robert Martin, Stephanie Yang, Moksha Menghaney, Ari Israel, Ryan Wang, Vidal Anguiano Jr., Erin Abbott, Dylan Halpern, Sihan Mao. (2020, October 12). GeoDaCenter/covid: beta (Version beta). Zenodo. http://doi.org/10.5281/zenodo.4081869
Repositories
URLs
- `yarn build`, or `yarn netlify-build` to include pre-build data fetching and parsing
- `yarn docs`
There are various other branch deploys on the US Covid Atlas web hosting (netlify) that are not publicly listed.
This project was bootstrapped with Create React App.
REACT_APP_MAPBOX_ACCESS_TOKEN=<token>
Enter your Mapbox token (it must have access to the resources that are hard-coded into the `style.json` and `style_light.json` files).
REACT_APP_ALERT_POPUP_FLAG=false
Just leave this "false".
Variables to connect with the Covid Stories content:
REACT_APP_EMAIL_FORM_URL=
REACT_APP_STORIES_PUBLIC_URL=
The following are all related to Google BigQuery credentials:
BIGQUERY_PROJECT_ID=
BIGQUERY_CLIENT_ID=
BIGQUERY_CLIENT_EMAIL=
BIGQUERY_CLIENT_X509_CERT_URL=
BIGQUERY_SECRET_KEY=
BIGQUERY_SECRET_KEY_ID=
All of the above variables (and perhaps a couple of others) must also exist in the Netlify environment. If the `data-pull-1.yml` workflow is enabled, the BigQuery variables must also be added to this repository's list of secrets.
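The `BIGQUERY_*` variables correspond to fields of a standard Google service-account JSON key. As a hedged sketch (the function name and exact mapping are assumptions inferred from the variable names, not taken from the Atlas codebase), a script could assemble them into a credentials-info dict:

```python
import os

# Map the BIGQUERY_* environment variables onto the fields of a standard
# Google service-account credentials dict. This mapping is an assumption
# inferred from the variable names; the Atlas workflows may differ.
def bigquery_credentials_info(env=None):
    env = os.environ if env is None else env
    return {
        "type": "service_account",
        "project_id": env["BIGQUERY_PROJECT_ID"],
        "client_id": env["BIGQUERY_CLIENT_ID"],
        "client_email": env["BIGQUERY_CLIENT_EMAIL"],
        "client_x509_cert_url": env["BIGQUERY_CLIENT_X509_CERT_URL"],
        # Private keys stored in env vars often carry escaped newlines.
        "private_key": env["BIGQUERY_SECRET_KEY"].replace("\\n", "\n"),
        "private_key_id": env["BIGQUERY_SECRET_KEY_ID"],
    }
```

A dict of this shape can be passed to the google-cloud-bigquery client via `bigquery.Client.from_service_account_info(...)`, though whether the Atlas scripts do exactly this is an assumption.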
1. `cp .env.example .env` to create a local .env file, then update values as needed
2. `npm i -g yarn`
3. `yarn` to install dependencies
4. `yarn fetch-data` to fetch the latest data
5. `yarn start` to start the app

In the project directory, you can run:
yarn fetch-data
Updates the data in the public data directory as required by the frontend application.
yarn start
Runs the app in the development mode.\
Open http://localhost:3000 to view it in the browser.

The page will reload if you make edits.\
You will also see any lint errors in the console.
yarn docs
Generates the JSDoc site, output to the `jsdocs` folder. See the `jsdoc` folder for configuration.
yarn test
Launches the test runner in the interactive watch mode.\
See the section about running tests for more information.
yarn build
Builds the app for production to the `build` folder.\
It correctly bundles React in production mode and optimizes the build for the best performance.

The build is minified and the filenames include the hashes.\
Your app is ready to be deployed!
See the section about deployment for more information.
yarn eject
Note: this is a one-way operation. Once you `eject`, you can't go back!

If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project.

Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own.

You don't have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it.