chopchop505 opened 4 years ago

Any plans for adding a layer for US Census Tracts?

No short term plans, but adding more data to EMS is always on our radar.
This dataset is quite "heavy" in terms of data size, judging by the per-state ZIP file sizes listed here.
Sending this dataset to the browser as-is would be a challenge in most situations. We should serve the tracts by state, but even then a single state can reach 17 MB (North Carolina), which, converted to TopoJSON and simplified by 90%, still yields a file of almost 8 MB.
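For a rough sense of how that conversion can be reproduced, here is a minimal sketch using the mapshaper CLI; mapshaper itself and the file names are my own assumptions rather than anything from the thread, and keeping 10% of the vertices corresponds to the ~90% simplification mentioned above.

npm install -g mapshaper
# tl_2019_37_tract.shp is a hypothetical unzipped North Carolina tract shapefile;
# -simplify 10% keeps ~10% of the vertices, i.e. roughly 90% simplification
mapshaper tl_2019_37_tract.shp -simplify 10% -o format=topojson nc_tracts.topo.json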
I think the best way to do this would be to ingest the Census tract shapefiles into Elasticsearch using GDAL and then display them in Elastic Maps as documents. However, Elastic Maps can currently only load a maximum of 10,000 features, so you may want to set a minimum zoom level (e.g. 8) to avoid running into that limit in high-density areas such as New York City.
You can still apply term joins to create choropleth maps in Elastic Maps with Document layers just as if you were using a layer from Elastic Maps Service.
Here's how I accomplished this.
1) Build and install GDAL from master (required for Elasticsearch 7). I used these instructions to build, but YMMV (a rough build sketch follows this list).
2) Download all the Census tract zip files into the current directory with wget -r -nH --cut-dirs=4 ftp://ftp2.census.gov/geo/tiger/TIGER2019/TRACT/*
3) Run this GDAL ogr2ogr command to ingest each file in the directory into Elasticsearch:
   for FILE in *.zip; do ogr2ogr -lco NOT_ANALYZED_FIELDS={ALL} -overwrite ES:http://localhost:9200 /vsizip/./$FILE; done
4) Create an index pattern in Kibana like "tl2019_tract" (the checks after this list show how to see which indices ogr2ogr actually created).
5) Add a new Documents layer to Elastic Maps from the "tl2019_tract" index pattern.
6) Set the minimum zoom level to 8.
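For step 1, here is a rough sketch of a source build, assuming the autotools-based build GDAL used at the time; the paths and configure flags may differ in your checkout, so treat the linked instructions as authoritative.

git clone https://github.com/OSGeo/gdal.git
cd gdal                     # adjust if configure lives in a subdirectory of your checkout
./configure --with-curl     # the Elasticsearch driver needs libcurl
make -j"$(nproc)"
sudo make install
ogrinfo --formats | grep -i elastic   # confirm the Elasticsearch driver is available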
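For steps 3 and 4, a couple of quick checks against Elasticsearch can confirm what ogr2ogr created and how many features each index holds relative to the 10,000-feature limit; the tl* index pattern below is an assumption based on the TIGER file names, so adjust it to whatever _cat/indices reports.

# list the indices ogr2ogr created
curl -s 'http://localhost:9200/_cat/indices/tl*?v'
# total document (feature) count across those indices
curl -s 'http://localhost:9200/tl*/_count?pretty'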
You might also want to refer to our blog post about using GDAL with Elasticsearch.