Open ajtucker opened 5 years ago
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 177 lines...]
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in <listcomp>(.0)
     71         if hasattr(self.catalog, 'dataset') and len(self.catalog.dataset) > 1 and len(self.distributions) == 0:
     72             md = md + f'## {self.catalog.title}\n\nThis is a catalog of datasets; choose one from the following:\n\n'
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

AttributeError: 'PMDDataset' object has no attribute 'label'
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 44e9de3b8d79f950200d5c1a7c17086730ef58ce61679ec1037f1d3c411580f3
$ docker rm -f 44e9de3b8d79f950200d5c1a7c17086730ef58ce61679ec1037f1d3c411580f3
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
Looks like it's the scraper. We're missing a mandatory .label attribute. Not sure if the failure is because the attribute is mandatory and it's missing, or because the mandatory check failed and we're accessing the equivalent of a nil pointer. Needs investigating.
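For reference, here's a minimal sketch (not the gssutils source) of the failing pattern and a defensive variant. The listing code at scrape.py line 73 builds one bullet per catalogued dataset with f'* {d.label}', so a single PMDDataset that never had a label set is enough to raise the AttributeError above. The fallback attribute names below are assumptions:

def list_datasets(catalog):
    # Guard each access: some catalogue entries may have no label at all.
    lines = []
    for d in getattr(catalog, 'dataset', []):
        label = getattr(d, 'label', None) or getattr(d, 'title', None) or '(unlabelled dataset)'
        lines.append(f'* {label}')
    return '\n'.join(lines)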
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 177 lines...]
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in <listcomp>(.0)
     71         if hasattr(self.catalog, 'dataset') and len(self.catalog.dataset) > 1 and len(self.distributions) == 0:
     72             md = md + f'## {self.catalog.title}\n\nThis is a catalog of datasets; choose one from the following:\n\n'
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

AttributeError: 'PMDDataset' object has no attribute 'label'
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 c6a34c3a81f2531d16a450b5f549b20abddec3211ee05f0de92725d927295475
$ docker rm -f c6a34c3a81f2531d16a450b5f549b20abddec3211ee05f0de92725d927295475
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 177 lines...]
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in <listcomp>(.0)
     71         if hasattr(self.catalog, 'dataset') and len(self.catalog.dataset) > 1 and len(self.distributions) == 0:
     72             md = md + f'## {self.catalog.title}\n\nThis is a catalog of datasets; choose one from the following:\n\n'
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

AttributeError: 'PMDDataset' object has no attribute 'label'
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 c55e870835af912ac68c7f80361aed3f6aeed0e7d6e54f46017deba07059c2e4
$ docker rm -f c55e870835af912ac68c7f80361aed3f6aeed0e7d6e54f46017deba07059c2e4
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 177 lines...]
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in <listcomp>(.0)
     71         if hasattr(self.catalog, 'dataset') and len(self.catalog.dataset) > 1 and len(self.distributions) == 0:
     72             md = md + f'## {self.catalog.title}\n\nThis is a catalog of datasets; choose one from the following:\n\n'
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

AttributeError: 'PMDDataset' object has no attribute 'label'
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 9a51066b752393e87287080531f4504303e9a10ee8d1d5ee6075147b4d125aba
$ docker rm -f 9a51066b752393e87287080531f4504303e9a10ee8d1d5ee6075147b4d125aba
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 184 lines...]
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in <listcomp>(.0)
     71         if hasattr(self.catalog, 'dataset') and len(self.catalog.dataset) > 1 and len(self.distributions) == 0:
     72             md = md + f'## {self.catalog.title}\n\nThis is a catalog of datasets; choose one from the following:\n\n'
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

AttributeError: 'PMDDataset' object has no attribute 'label'
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 7531828ce07e5db2c9cc7eaf4fcdb5a5483cf9bd44238296714c672ddef158dd
$ docker rm -f 7531828ce07e5db2c9cc7eaf4fcdb5a5483cf9bd44238296714c672ddef158dd
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 178 lines...]
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in <listcomp>(.0)
     71         if hasattr(self.catalog, 'dataset') and len(self.catalog.dataset) > 1 and len(self.distributions) == 0:
     72             md = md + f'## {self.catalog.title}\n\nThis is a catalog of datasets; choose one from the following:\n\n'
---> 73             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     74         else:
     75             if hasattr(self.dataset, 'label'):

AttributeError: 'PMDDataset' object has no attribute 'label'
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 8e9f409d13b7f1807cf2d95a8d716e54c58819e2665f772a4222d22128dac2b6
$ docker rm -f 8e9f409d13b7f1807cf2d95a8d716e54c58819e2665f772a4222d22128dac2b6
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 178 lines...]
---> 77             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     78         else:
     79             if hasattr(self.dataset, 'label'):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in <listcomp>(.0)
     75         if hasattr(self.catalog, 'dataset') and len(self.catalog.dataset) > 1 and len(self.distributions) == 0:
     76             md = md + f'## {self.catalog.title}\n\nThis is a catalog of datasets; choose one from the following:\n\n'
---> 77             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     78         else:
     79             if hasattr(self.dataset, 'label'):

AttributeError: 'PMDDataset' object has no attribute 'label'
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 e1d166b4e30bce0bf6fd680c450f32e55d2c1d77b9376d128fdf2459a7adcf81
$ docker rm -f e1d166b4e30bce0bf6fd680c450f32e55d2c1d77b9376d128fdf2459a7adcf81
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
No changes No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 174 lines...]
---> 77             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     78         else:
     79             if hasattr(self.dataset, 'label'):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in <listcomp>(.0)
     75         if hasattr(self.catalog, 'dataset') and len(self.catalog.dataset) > 1 and len(self.distributions) == 0:
     76             md = md + f'## {self.catalog.title}\n\nThis is a catalog of datasets; choose one from the following:\n\n'
---> 77             md = md + '\n'.join([f'* {d.label}' for d in self.catalog.dataset])
     78         else:
     79             if hasattr(self.dataset, 'label'):

AttributeError: 'PMDDataset' object has no attribute 'label'
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 2d8cfe293dce8a1056c0a5da6f87682d6d10fdca6df9e04e385d22347aab089d
$ docker rm -f 2d8cfe293dce8a1056c0a5da6f87682d6d10fdca6df9e04e385d22347aab089d
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
No changes No changes No changes
Blocked by GSS-Cogs/gss-utils#11
@VTula2000 , I've fixed the GSS-Cogs/gss-utils#11 issue, so the HMRC uktradeinfo.com scraper can now pick up the spreadsheets again.
The issue was, I think, that the Alcohol Bulletin had dropped off the list of recent datasets as it is now a few months old. The scraper now follows the links off to the archive pages and scrapes those for the links to the downloads.
You'll need to do something like this:
scraper = Scraper('https://www.uktradeinfo.com/Statistics/Pages/TaxAndDutyBulletins.aspx')
scraper
You'll get back a list of datasets and will have to choose the alcohol one as follows:
scraper.select_dataset(title='Alcohol Duty')
scraper
Then you can just grab the latest one:
scraper.distribution(latest=True)
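If more than one distribution still matches the filter, the call raises FilterError: 'more than one match for given filter(s)', which is what the later build reports below show. Narrowing the filter with an extra keyword should help; the title value and the idea that distributions can be filtered on it are assumptions here, not confirmed API behaviour:

# Hypothetical narrowing of the filter; check the actual distribution titles first.
dist = scraper.distribution(title='Alcohol Bulletin', latest=True)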
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 182 lines...]
--> 156         return Scraper._filter_one(self.distributions, **kwargs)
    157 
    158     def set_base_uri(self, uri):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in _filter_one(things, **kwargs)
    140                 return max(matches, key=lambda d: d.issued)
    141             else:
--> 142                 raise FilterError('more than one match for given filter(s)')
    143         elif len(matches) == 0:
    144             raise FilterError('nothing matches given filter(s)')

FilterError: more than one match for given filter(s)
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 17fc92e3cb59df3373e3d00e9930cf50cdbd193b1e753df00f2ac0f328ce8bca
$ docker rm -f 17fc92e3cb59df3373e3d00e9930cf50cdbd193b1e753df00f2ac0f328ce8bca
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
No changes No changes No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 180 lines...]
--> 156         return Scraper._filter_one(self.distributions, **kwargs)
    157 
    158     def set_base_uri(self, uri):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in _filter_one(things, **kwargs)
    140                 return max(matches, key=lambda d: d.issued)
    141             else:
--> 142                 raise FilterError('more than one match for given filter(s)')
    143         elif len(matches) == 0:
    144             raise FilterError('nothing matches given filter(s)')

FilterError: more than one match for given filter(s)
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 1bf1509d9f2124562ef779afa076b3d223e45609bb0dc98ef09251b096116041
$ docker rm -f 1bf1509d9f2124562ef779afa076b3d223e45609bb0dc98ef09251b096116041
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
No changes No changes No changes
No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 177 lines...]
--> 152         return Scraper._filter_one(self.distributions, **kwargs)
    153
    154     def set_base_uri(self, uri):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in _filter_one(things, **kwargs)
    134                 return max(matches, key=lambda d: d.issued)
    135             else:
--> 136                 raise FilterError('more than one match for given filter(s)')
    137     elif len(matches) == 0:
    138         raise FilterError('nothing matches given filter(s)')

FilterError: more than one match for given filter(s)
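For anyone picking this up: the failure happens because the scraper's distribution lookup now matches more than one candidate. Below is a minimal sketch of the filtering behaviour visible in the traceback above, as a reconstruction for illustration only; apart from FilterError, the two error messages and the d.issued sort key, the names and the 'latest' handling are assumptions, not the actual gssutils source:

class FilterError(Exception):
    pass

def filter_one(things, latest=False, **criteria):
    # Keep only candidates whose attributes equal every filter value given.
    matches = [t for t in things
               if all(getattr(t, key, None) == value for key, value in criteria.items())]
    if len(matches) > 1:
        if latest:
            # Several matches are tolerated only when the caller asked for the latest one.
            return max(matches, key=lambda d: d.issued)
        raise FilterError('more than one match for given filter(s)')
    elif len(matches) == 0:
        raise FilterError('nothing matches given filter(s)')
    return matches[0]

On that reading, the HMRC alcohol bulletin page now exposes two or more distributions that satisfy whatever filter the transform passes, so the lookup can no longer pick one unambiguously.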
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 956d181370d8a424616834151ce24bf6b2876e1abcda6ef812bd4b85c91ef5e4
$ docker rm -f 956d181370d8a424616834151ce24bf6b2876e1abcda6ef812bd4b85c91ef5e4
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[Vamshi] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[Vamshi] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[Vamshi] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[Vamshi] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[Vamshi] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[Vamshi] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
No changes No changes No changes
No changes No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 172 lines...]
--> 152         return Scraper._filter_one(self.distributions, **kwargs)
    153
    154     def set_base_uri(self, uri):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in _filter_one(things, **kwargs)
    134                 return max(matches, key=lambda d: d.issued)
    135             else:
--> 136                 raise FilterError('more than one match for given filter(s)')
    137     elif len(matches) == 0:
    138         raise FilterError('nothing matches given filter(s)')

FilterError: more than one match for given filter(s)
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 7280f16176491e20c93a10e6b1e1d6427001b8a475e5f529ed74380b5b5afef0
$ docker rm -f 7280f16176491e20c93a10e6b1e1d6427001b8a475e5f529ed74380b5b5afef0
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[Vamshi] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[Vamshi] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[Vamshi] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[Vamshi] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[Vamshi] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[Vamshi] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
No changes No changes No changes
No changes No changes No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 175 lines...]
--> 152         return Scraper._filter_one(self.distributions, **kwargs)
    153
    154     def set_base_uri(self, uri):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in _filter_one(things, **kwargs)
    134                 return max(matches, key=lambda d: d.issued)
    135             else:
--> 136                 raise FilterError('more than one match for given filter(s)')
    137     elif len(matches) == 0:
    138         raise FilterError('nothing matches given filter(s)')

FilterError: more than one match for given filter(s)
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 42ec34522f6ad8f6ac5aa4da852711b94da30bba4df5da85811254fa64f989dc
$ docker rm -f 42ec34522f6ad8f6ac5aa4da852711b94da30bba4df5da85811254fa64f989dc
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[Vamshi] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[Vamshi] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[Vamshi] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[Vamshi] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[Vamshi] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[Vamshi] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
No changes No changes No changes
No changes No changes No changes No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 176 lines...]
--> 152         return Scraper._filter_one(self.distributions, **kwargs)
    153
    154     def set_base_uri(self, uri):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in _filter_one(things, **kwargs)
    134                 return max(matches, key=lambda d: d.issued)
    135             else:
--> 136                 raise FilterError('more than one match for given filter(s)')
    137     elif len(matches) == 0:
    138         raise FilterError('nothing matches given filter(s)')

FilterError: more than one match for given filter(s)
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 b97ce2fb0544adc5c8093bbfb76e0017e9725d51e5ea9be7d88bca0f2769a702
$ docker rm -f b97ce2fb0544adc5c8093bbfb76e0017e9725d51e5ea9be7d88bca0f2769a702
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[Vamshi] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[Vamshi] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[Vamshi] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[Vamshi] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[Vamshi] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[Vamshi] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
No changes No changes No changes
No changes No changes No changes No changes No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
[...truncated 172 lines...]
--> 152         return Scraper._filter_one(self.distributions, **kwargs)
    153
    154     def set_base_uri(self, uri):

/usr/local/lib/python3.7/site-packages/gssutils/scrape.py in _filter_one(things, **kwargs)
    134                 return max(matches, key=lambda d: d.issued)
    135             else:
--> 136                 raise FilterError('more than one match for given filter(s)')
    137     elif len(matches) == 0:
    138         raise FilterError('nothing matches given filter(s)')

FilterError: more than one match for given filter(s)
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // script
[Pipeline] }
$ docker stop --time=1 8892105e3b5759c1415fdb6914af4f0b4e0d9f052641bcb042860e62e019c013
$ docker rm -f 8892105e3b5759c1415fdb6914af4f0b4e0d9f052641bcb042860e62e019c013
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate CSV)
Stage "Validate CSV" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Upload Tidy Data)
Stage "Upload Tidy Data" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test draft dataset)
Stage "Test draft dataset" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[Vamshi] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[Vamshi] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[Vamshi] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[Vamshi] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[Vamshi] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[Vamshi] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
No changes
[Alex Tucker] cc2bca87d857753755272fa5aaa08034ec2428f6 - Add template for ref_* pipelines.
[Alex Tucker] fd491cc35f5f1a690f227435207d77378bce174e - Also validate columns.csv and components.csv.
No changes
[Alex Tucker] 5f5d4b46d7d338090ecfa2d02ede7ae8c0a498da - Ignore RequestAborted exceptions in empty_cache/sync_search calls.
[Alex Tucker] ef905fd5f2db85565dc4eb0bde049534e4846b0f - Use --no-verbose to cut down line length of csvlint output, so it can be
No changes No changes No changes
No changes No changes No changes No changes No changes No changes
Build 'GSS_data/Health/HMRC_alcohol_bulletin' is failing!
Last 50 lines of build output:
Changes since last successful build:
[Alex Tucker] 540ba45d5aa50426e9e1d3f7a1ec0411ddc164d4 - Write lock URI needs to have prepended.
[Alex Tucker] 6290fc536b2603617b26728abc883ac9169f23bf - Missed a missing /v1/status.
[Alex Tucker] 5ee1bd86aebac936c0390926b177426944210e69 - Add optional argument for dataset metadata trig file.
[Alex Tucker] a75d492bb510a698967fb5411843dc5422b88016 - Make metadata param relative to CWD.
No changes No changes No changes No changes
[Alex Tucker] d304a5acc8766e52976306021fa62c6ab7bae04c - Add template transformation pipeline.
[Alex Tucker] 7eda5c2f5f1a251462cd0a2820b201263fd36e11 - Add getDraftsetEndpoint
[Alex Tucker] 7c9c4e98b4ee0c634379ab5253c3293b426e7301 - Add SPARQL tests step and publish results.
[Alex Tucker] 806951bf2ee882eb722ed68d14041b7f0e928fea - SPARQL tests need union-with-live=true
No changes
[Alex Tucker] 562618a49dcb291061554092fc7c58b7877ca7c7 - Use template pipeline.
[Alex Tucker] d747d17b909dea7f689436a623263ff531631c94 - Don't trigger GDP-tests as they're now run on the unpublished draftset.
[Alex Tucker] 57b7cae64f2422e744b548d8533c87ea96b0c7b0 - Template for multiple datasets.
[Alex Tucker] 554340c2f38e69909c77daea277188852640b339 - Disambiguate use of 'dataset'
[Alex Tucker] 1452178f3d07debcb12de21cb3782810370ff2c0 - Use $refFamily
No changes
[vtula2000] b511c6e69ce4e95a2d6e67070938f8a2c185578f - Measure Type
[Alex Tucker] 0ab4797047cdc3a26e01fe2cfd6da3da461c57f3 - Try to always update Trello card.
[vtula2000] ccfa6327e0d3737b7e4f7ebde2fcf07ab05721e4 - update
[vtula2000] 692878d0f7140c3bf6ef9d3ba6be49dae0ed2abe - Error
[vtula2000] e8fe4ab3160e22ef5cccde2382616bfa20be6fb2 - error
[vtula2000] f85aa4afc47ab3c8b0806132bb01ac34e48fbb6f - error
[vtula2000] 0b2f3ff86d2e1776447f79494871ae0b1a12f506 - Error
[Alex Tucker] 73c61fabce4edbe6898f9e26ba0f1b3d9edc79fd - Use jupytext to keep only Python scripts. Update per-tab notebooks to use the latest spreadsheet.
[Alex Tucker] 2c3a2d2dcad68f95080061335209f3d7241b6af9 - If main.py exists, convert to notebook using jupytext.
[Alex Tucker] 1acd29462408540f9fbc087e739c4dd458b20af7 - Convert any .py files to .ipynb using jupytext.
No changes No changes No changes
[Alex Tucker] 0ab9caa46fd9a9ce37f92f6ff1886ef7d0fad336 - Use GitHub issues rather than updating a Trello card.
[Alex Tucker] b57d083813b089ab8107b2342f26cf94bf589f8b - Use gsscogs Docker images.
View full output
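A likely fix on the transform side is to narrow the lookup so that only one distribution matches, or to ask explicitly for the most recently issued one. Rough sketch only, assuming the Scraper exposes a distribution(**kwargs) helper that forwards to _filter_one as the traceback suggests; the landing-page URL and keyword arguments below are illustrative guesses, not taken from the failing notebook:

from gssutils import Scraper

# Hypothetical example: the URL and the filter keywords are placeholders.
scraper = Scraper('https://www.gov.uk/government/statistics/alcohol-bulletin')

# Either narrow the filter so exactly one distribution matches...
dist = scraper.distribution(
    mediaType='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')

# ...or, if supported, pick the most recently issued of several matches.
dist = scraper.distribution(latest=True)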