We currently do not have a programmatic way to extract information from the crash narrative that accompanies CR3s. The crash narrative typically provides more descriptive information about where, how, and why a crash occurred. Implementing an optical character recognition (OCR) solution would let us convert the image-based narrative to searchable text, allowing users to perform keyword searches.
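Once narrative text has been extracted (by OCR or otherwise), the keyword search could be as simple as a case-insensitive whole-word match over the extracted strings. A minimal sketch, using invented placeholder narratives and a hypothetical `keyword_search` helper:

```python
import re

def keyword_search(narratives, keyword):
    """Return IDs of narratives containing the keyword (case-insensitive, whole word)."""
    pattern = re.compile(r"\b" + re.escape(keyword) + r"\b", re.IGNORECASE)
    return [crash_id for crash_id, text in narratives.items() if pattern.search(text)]

# Placeholder OCR output keyed by a hypothetical crash ID -- not real data.
narratives = {
    "crash-001": "Unit 1 was traveling northbound when it struck a bicyclist in the crosswalk.",
    "crash-002": "Driver of Unit 2 failed to yield right of way at the intersection.",
}

print(keyword_search(narratives, "bicyclist"))  # → ['crash-001']
```

In practice the extracted text would live in the database alongside the CR3 record, but the matching logic would look much the same.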
Migrated from atd-vz-data #464