Open stevage opened 10 years ago
I'm not sure if this ever got to the point of fully working, but it does sketch out most of the components.
To answer your question, you can use the CSVImport plugin and map the WKT information to the Coverage field (you may need the NeatlineFeatures plugin to use this). Then you can pull the Omeka items into Neatline.
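For reference, and purely as an illustration (the column names and coordinates here are made up), a CSV that CSVImport could consume might look something like this, with the WKT column mapped to DC Coverage during import:

```csv
Title,Coverage
"Flinders Street Station","POINT(144.9672 -37.8183)"
"Melbourne Museum","POINT(144.9717 -37.8033)"
```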
Thanks, I did try that first, but couldn't make Neatline do anything with the Coverage field (and yes, I had my data in WKT format). I have the Neatline Features plugin installed, but don't see any special behaviour. So it just ends up in the normal DC Coverage field, remaining as WKT text, and Neatline ignores it:
(In this screenshot, I had the WKT format wrong [a comma instead of a space], but after fixing that, the result is the same.)
Wondering whether I'm missing a step? Are you able to elaborate at all?
Hey @stevage,
I was just debugging a similar problem that someone else is having, and this is a bug in the logic that Neatline uses to pull the coverage values out of the Omeka items. When Neatline Features is installed, Neatline will only match coverages that were created using the point-and-click interface provided by Neatline Features, at the expense of "raw" WKT strings in the DC Coverage element.
This is unnecessary and super-unintuitive, and I will fix it ASAP (ticket here). For now, though, the solution is to just disable Neatline Features, which should make everything work as expected.
Let me know if this fixes the problem. Best, David
Thanks, that does seem to work. And any workaround is better than none. I seem to have a new problem now though - the imported points are all clustered very close to (0,0):
Hmm, looking closer I see NeatlineFeatures generates WKT like this: "POINT(16137863.140746 -4554328.3464246)" for lat/lon roughly -38/145. What SRS is this? Is there any way to get it to accept lat/lons rather than a projected system? I was really hoping for users to be able to import georeferenced spreadsheets, and I'm not sure how projection would fit into our workflow.
It uses the projection of the base map. The OSM base map uses EPSG:3857 (the old 900913). Depending on what you're using to process the points, you're most likely generating EPSG:4326. The conversion is pretty straightforward, and this post may be of some help: http://www.scholarslab.org/digital-humanities/geocoding-for-neatline-part-i/
If you're pushing these out to a CSV file (and can't change the projection in the software you're using), you can take the output and convert the points to their meter equivalent:
def degrees_to_meters(lon, lat)
  half_circumference = 20037508.34
  x = lon * half_circumference / 180
  y = Math.log(Math.tan((90 + lat) * Math::PI / 360)) / (Math::PI / 180)
  y = y * half_circumference / 180
  return [x, y]
end
This is Ruby, but should be straightforward to port to any other language...
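Going the other direction (e.g. to sanity-check the meter values that Neatline Features produces) is just the inverse of the function above. A minimal sketch, again in Ruby and again assuming the spherical-Mercator (EPSG:3857) approximation:

```ruby
HALF_CIRCUMFERENCE = 20037508.34

# Inverse of degrees_to_meters: EPSG:3857 meters back to lon/lat degrees.
def meters_to_degrees(x, y)
  lon = x * 180 / HALF_CIRCUMFERENCE
  lat = Math.atan(Math.exp(y * 180 / HALF_CIRCUMFERENCE * Math::PI / 180)) * 360 / Math::PI - 90
  [lon, lat]
end
```

Feeding it the values from the WKT above should get you back to roughly lon 145, lat -37.8.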
PS. @erochest has a fix for importing the coverage when NeatlineFeatures is installed (https://github.com/scholarslab/Neatline/commit/001ffb232c92465bf59f9ce4d3122f59e0342af6). We'll have a release of Neatline out that fixes this issue next week.
Ok, that's useful, thanks. Any chance of Neatline supporting lat/lon natively? Spreadsheets with lat/lon columns are pretty common in the tools I use, such as TileMill and CartoDB, and are produced by some georeferencing tools. So, typically, lat/longs are created directly in some kind of spreadsheet (eg, by non-technical users typing them in after looking them up on a tool like http://stevage.github.io/latlon) - they're not "exported" from somewhere, so there's no opportunity for re-projecting.
Would the NeatlineFeatures plugin help? It allows you to find the points you're looking for (much like the geocoder you created).
The hard part here is how MySQL actually handles spatial data (WKT format). You could create something like the CSVImport tool that could construct the WKT from various datums, but this could get quite difficult for complex geometries.
Well, the general use case is this one:
Of course, that might not be a use case you're interested in :)
I understand the pain of supporting multiple datums, but I sort of think raw lat/long is a broadly useful enough case to deserve the privilege of special treatment?
We've seen workflows like this, which is why I wrote that series of posts on how to handle it (Geocoding for Neatline Part I and Part II), but it does require a bit of fiddling that can be quite intimidating for novice users. I think some facility for importing externally generated content would be quite nice for users, but I think it would be better to actually import these as Omeka items that could then be used with the Neatline plugin.
If you're interested in pursuing this, we're more than happy to advise (and send/accept pull requests)!
Wow, great blog posts. Your approach is obviously fairly programmy, whereas we're avoiding teaching any coding at all - it's hard to teach "a little bit" of code.
I'm not quite sure what distinction you're making between "importing externally generated content" and "import[ing] these as Omeka items", but either way would be fine.
Anyway, it sounds like this tool doesn't quite do that yet, and I'm really just evaluating options at this point, so let's not worry too much about it.
Hi there! I'm writing to ask if there's been further progress on this.
We're just getting started on a mapping project here at Univ. of South Carolina and are planning to use Neatline, but we may be pushing the envelope of the software's intent a bit. We're hoping to use historical maps as a visualization/search tool for documents (all relating to 19C Scotland), and so one of our main goals is to correlate descriptions, which exist in many different documents, to the same places in maps. So, for example, a user might click on a county of Scotland and get all the text in our collection that refers to that county. Or, instead of a county, they might look up a historical figure and find all the places where that person appears. Right now we're at the early stage of marking up our text with XML tags that identify descriptions of places and persons, and we're thinking about how to incorporate this into Neatline.
My tentative idea is to create an Omeka item for every actual historical object/place/person and a separate record each time that item is described in our collection. (This approach seems consistent with what I've read are best practices in GIS but it seems to push against the boundaries for Neatline.) This means that we could have dozens, hundreds, or even thousands of distinct Neatline records for each Omeka item, each of which will always have a different text "body" and, sometimes, a different set of geo-coordinates.
It could be that we're just making the mistake of trying to use software designed for small exhibits for a large-ish GIS database. In any case, if we do use Neatline, it'll be important for us to find a way to hack into the records, because we won't be able to batch import from Omeka items.
So, my question to all on this thread. Any progress since a year ago?
Thanks for your attention. Neatline is an amazing project.
Hi @michaelgavin ,
I don't know that Neatline is going to be a great fit for what you want to do. For one thing, it won't handle the use case that you describe where the user clicks on a feature and opens up information about several items. Each feature currently belongs to one item.
Omeka and Neatline are really designed for doing exhibits, not maintaining and curating repositories.
However, we do have some thoughts about systems that might help you. Why don't you drop us an email at scholarslab@virginia.edu and we can discuss it more there.
Best,
Dear Eric,
(I'm looping Nicole Dudley into this, because she's one of our collaborators at U of Iowa, who has had some similar questions.)
Thanks for getting back to me. Yes, please do forward thoughts about systems you think might help. We just got Neatline up and running and, you're right, it doesn't look like the user interface will easily give access to the range of material we're gathering.
Our goal is to identify, correlate, and make available many different references to places that appear in our collection, both in documents and in historical maps. We'd like to make the text searchable by place, such that, for example, a user might be able to select a location and find all the references to that location in our collection. We're beginning with a small collection of maps and documents, but already we're pushing the limits of the interface.
Please do let us know if you have ideas for something better. If nothing else, Neatline has been great for helping us think through what we want to do.
MG
Hi,
So we've been working on a solution to do what you describe. It's using Blacklight with the GeoBlacklight plugin. Everything's backed by Solr, so it's easily searchable across a number of fields, and it should scale very nicely.
I'm glad that Neatline's been useful to you, though, even if it is only to help you think through your requirements.
Best,
Thanks for letting me know. GeoBlacklight looks very interesting. I wonder if implementation is as easy as they make it sound.
@michaelgavin The "hard" part is just normalizing whatever metadata schema you have for Solr. Most everything else is pretty straightforward.
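To illustrate what that normalizing step can look like (a minimal sketch; the field names below are invented for illustration and are not the actual GeoBlacklight schema), it often boils down to flattening each item's metadata into a key/value document that Solr can index:

```ruby
# Hypothetical sketch: flatten an item's metadata into a Solr-style
# document hash. Field names here are made up, not GeoBlacklight's.
def to_solr_doc(item)
  {
    'id'         => item[:id],
    'title_s'    => item[:title],
    'subject_sm' => Array(item[:subjects]),  # multi-valued facet field
    'geom_s'     => item[:wkt]               # geometry for spatial search
  }
end
```

The real work is deciding, field by field, how your schema (Dublin Core, TEI place tags, etc.) maps onto fields like these.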
Sorry to abuse the issue system, but I'm just wondering what the status of this plugin is? I installed it, but didn't get the "which exhibit do you want to import into?" screen. Clicking the plugin name goes here instead:
If this plugin isn't being developed, are there other ways to import lat/long data?