A fully-searchable and accessible archive of court data including growing repositories of opinions, oral arguments, judges, judicial financial records, and federal filings.
A command to update case names using metadata from datasets. It updates all possible names, not just those whose source is Resource or a source combined with Resource.
You can specify a delay between updates to avoid issues with Redis (updating the case names will trigger indexing):
docker exec -it cl-django python /opt/courtlistener/manage.py update_resource_casenames --filepath /opt/courtlistener/cl/assets/media/federal_3d.csv --delay 0.1
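The effect of `--delay` can be sketched as a simple throttled loop. This is an illustrative helper, not the command's actual implementation; the function name and row handling are hypothetical.

```python
import time


def update_in_sequence(rows, delay=0.0):
    """Update rows one at a time, pausing between updates.

    Hypothetical sketch: each case-name update triggers an indexing
    task, so the pause keeps those tasks from flooding Redis.
    """
    updated = 0
    for row in rows:
        # ... perform the actual case-name update here ...
        updated += 1
        if delay:
            time.sleep(delay)  # throttle indexing traffic
    return updated
```

With `--delay 0.1` as in the command above, each update is followed by a 0.1-second pause.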
You can perform a dry run to verify that everything works as expected:
docker exec -it cl-django python /opt/courtlistener/manage.py update_resource_casenames --filepath /opt/courtlistener/cl/assets/media/federal_3d.csv --dry-run
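The dry-run pattern typically looks like the following sketch: describe the change that would be made, and only persist it when the flag is off. The helper name and row fields here are hypothetical, not taken from the real command.

```python
def apply_update(row, dry_run=False):
    """Return a description of the change; persist only when dry_run is False.

    Hypothetical helper illustrating the --dry-run pattern.
    """
    description = f"case {row['id']}: {row['old']!r} -> {row['new']!r}"
    if dry_run:
        # Report what would happen without touching the database.
        return f"DRY RUN - would update {description}"
    # In the real command, the model instance would be saved here.
    return f"updated {description}"
```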
You can control the chunk size used when reading the CSV to avoid memory issues:
docker exec -it cl-django python /opt/courtlistener/manage.py update_resource_casenames --filepath /opt/courtlistener/cl/assets/media/federal_3d.csv --chunk-size 100000
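Chunked reading keeps memory bounded because only one slice of the file is materialized at a time. A minimal sketch with the standard library (the function name and column layout are assumptions, not the command's actual code):

```python
import csv
from itertools import islice


def iter_csv_chunks(filepath, chunk_size=100_000):
    """Yield lists of rows, at most chunk_size each, so the whole
    CSV never has to fit in memory at once (illustrative sketch)."""
    with open(filepath, newline="") as f:
        reader = csv.DictReader(f)
        while True:
            chunk = list(islice(reader, chunk_size))
            if not chunk:
                break
            yield chunk
```

Each yielded chunk can then be processed and discarded before the next one is read, which is why a smaller `--chunk-size` trades speed for a lower memory footprint.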