Open rudokemper opened 8 months ago
I started to respond here and got a little brainstormy. I think it would be helpful to brainstorm in a group. With that said, below is the beginning of my aborted attempt to respond...
I have to constantly remind myself this is not a tool/method to label data. This is 100% a method to display ALREADY labeled data. So this means that the user does not need to be able to add any info while looking at photos. It is purely a tool to display information already generated and imported into some centralized database.
To me the map component is less interesting until there are lots of locations. Camera trap (and acoustic) datasets are often made up of 1000s of media files from each site, which makes it more challenging to map in a meaningful way beyond simple markers (e.g., the heat map idea might not work until there are 100s or 1000s of locations that have been monitored). It also strays into the realm of science, which might be wise to avoid in the near future, because if you are trying to map animals you need an appropriate survey design plus sampling execution. Anyway, not the point here.
I think a map with markers that, on click, pulls the "metadata" about the site/deployments at that site (and maybe a species list and number of images with each species), then links to a filterable gallery of images. Filtering the gallery on any combination of data fields seems important. I could see a few different pages to start.
I am going to stop here. I think I might have missed the point and now I am just wandering through my messy thoughts.
All makes sense to me, @abfleishman. Thanks for your thoughts!
With some caution, I want to note that we could make it possible to add / edit labels. On the change detection side, we are hearing from our partners that it will be useful for them to add metadata about alerts and do things like relate alerts (and Mapeo data) together. So we could also add a POST endpoint for camera traps to add / update data. But I'm quite sure we'd be missing a significant amount of functionality from Timelapse or other tools that facilitate the labeling process, and we shouldn't aim to reinvent the wheel here. This CT view should remain a light touch option for basic visualization.
This blueprint for GuardianConnector Explorer assumes that Frizzle (or other integration tools) has processed camera trap JSON/XML data, storing records in a database and generating browser-ready media attachments.
A `/api/[table]/cameratraps` endpoint returns a paginated `cameraTrapData` array, consisting of objects representing individual camera trap records. Each record has, at minimum, a unique ID, camera trap device ID, device coordinates, timestamp, and a media field with one or more filenames that can be accessed by appending them to a `MEDIA_BASE_URI` config variable. Additional metadata fields (e.g., scientific name, Indigenous name, animal type) may also be present, as created by tools like Timelapse.

The API accepts query parameters for filtering, such as `?ct_id=` to retrieve data matching a specific camera trap device ID.
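The record shape and media URL construction described above could be sketched as follows; field names beyond the listed minimum are illustrative, not part of the blueprint:

```typescript
// Hypothetical shape of one object in the paginated cameraTrapData array.
// Records may also carry extra metadata fields created by tools like
// Timelapse (e.g., scientific name, Indigenous name, animal type).
interface CameraTrapRecord {
  id: string;                    // unique record ID
  ctId: string;                  // camera trap device ID
  coordinates: [number, number]; // device [longitude, latitude]
  timestamp: string;             // ISO 8601
  media: string[];               // filenames, resolved against MEDIA_BASE_URI
}

// Resolve a media filename against the MEDIA_BASE_URI config variable,
// tolerating a trailing slash on the base URI.
function mediaUrl(baseUri: string, filename: string): string {
  return `${baseUri.replace(/\/$/, "")}/${filename}`;
}
```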
A new page route `cameratraps/[tablename].vue` manages paginated GET requests to the API endpoint, which returns `cameraTrapsData`. This data is passed as a prop to a `CameraTrapsView` component.
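A minimal sketch of the paginated request the page route might make; the `page` and `limit` parameter names and the default page size are assumptions, not from the blueprint:

```typescript
// Build the URL for one page of camera trap records from a given table.
// Assumed pagination parameters: page, limit.
function cameraTrapsUrl(tablename: string, page: number, limit = 25): string {
  return `/api/${tablename}/cameratraps?page=${page}&limit=${limit}`;
}

// Fetch one page and return the records array from the response body.
async function fetchCameraTrapPage(
  tablename: string,
  page: number,
): Promise<unknown[]> {
  const res = await fetch(cameraTrapsUrl(tablename, page));
  if (!res.ok) throw new Error(`API error ${res.status}`);
  const body = await res.json();
  return body.cameraTrapsData;
}
```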
### `CameraTrapsView` component

The `CameraTrapsView` component has two child components, `CameraTrapsFilter` and `CameraTrapsCards`.
On Desktop, `CameraTrapsFilter` occupies ~25% of the top of the screen while `CameraTrapsCards` uses the remaining 75%. On Mobile, the layout may adjust to a 50/50 split.

On both viewports, `CameraTrapsFilter` includes a toggle arrow to move the component off-screen at the top or bring it back; on Mobile, it is off-screen by default. `CameraTrapsView` has event listeners to handle emitted events from the child components, triggering state changes.
Desktop view, roughly:
```
--------------------------------------
| |filter| |filter| |xxxxxxx| |
| |filter| |filter| |minimap| |
| |filter| |filter| |xxxxxxx| |
-------------------v------------------
| |
| --------CameraTrapsCard--------- |
| | | |
| | |title| | |
| | | |
| | -----CameraTrapsMedia----- | |
| | |xxxxxxxxxxxxxxxxxxxxxxxx| | |
| | |xxxxxxxxxxxxxxxxxxxxxxxx| | |
| | |xxxxxxxxxxxxxxxxxxxxxxxx| | |
| | |xxxxxxxxxxxxxxxxxxxxxxxx| | |
| | |xxxxxxxxxxxxxxxxxxxxxxxx| | |
| | |xxxxxxxxxxxxxxxxxxxxxxxx| | |
| | -------------------------- | |
| | | |
| | |field| |field| |field| | |
| | |field| |field| |field| ... | |
| -------------------------------- |
| |
| --------CameraTrapsCard--------- |
| | | |
--------------------------------------
```
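The event handling described above (filter updates from a child component, plus the toggle arrow) could be sketched as plain state transitions on the view; the event handler names and state shape are illustrative:

```typescript
// Hypothetical state held by CameraTrapsView.
type Filters = Record<string, string>;

interface ViewState {
  filters: Filters;        // active filter selections
  filterPanelOpen: boolean; // CameraTrapsFilter visibility (off-screen on Mobile by default)
}

// Handle a filter-change event emitted by CameraTrapsFilter.
function onFilterChanged(state: ViewState, field: string, value: string): ViewState {
  return { ...state, filters: { ...state.filters, [field]: value } };
}

// Handle the toggle arrow that moves the filter panel off-screen or back.
function onTogglePanel(state: ViewState): ViewState {
  return { ...state, filterPanelOpen: !state.filterPanelOpen };
}
```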
### `CameraTrapsCards` child component

`CameraTrapsCards` receives `cameraTrapsData` as a prop and iterates over this data to create a `CameraTrapsCard` child component for each object.
Each `CameraTrapsCard` renders media and record metadata elegantly, similar to the `ViewSidebar` component but spanning the page width, with the layout adjusted accordingly. `CameraTrapsCard` also displays a title column, configurable in the view, such as an Indigenous name of an animal.

`CameraTrapsCards` is paginated, similar to the `GalleryView` component. Scrolling emits a request upstream to fetch more data from the API.
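The scroll-driven pagination could be reduced to a pure predicate deciding when to emit the fetch-more request; the threshold, state shape, and function name are assumptions:

```typescript
// Hypothetical pagination state tracked alongside the cards list.
interface PageState {
  page: number;
  loading: boolean;   // a fetch is already in flight
  exhausted: boolean; // the API returned fewer records than requested
}

// True when the user has scrolled within `threshold` pixels of the bottom
// and no fetch is pending, i.e. time to emit the request upstream.
function shouldFetchMore(
  scrollTop: number,
  viewportHeight: number,
  contentHeight: number,
  state: PageState,
  threshold = 200,
): boolean {
  if (state.loading || state.exhausted) return false;
  return scrollTop + viewportHeight >= contentHeight - threshold;
}
```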
### `CameraTrapsFilter` child component

`CameraTrapsFilter` presents a series of filter options, such as dropdowns, multiple selects, or checkboxes. Sample filters may include animal type, camera trap ID, and media type. When filters are updated, an event is emitted to `CameraTrapsView` and the `cameratraps/[tablename].vue` page to request matching data from the API.
On Desktop, the component conditionally renders `CameraTrapsFilterMinimap`, a Mapbox GL JS map showing camera traps as icons, zoomed to their full extent. Clicking a camera trap updates the camera trap ID filter field. The map can toggle to fullscreen.
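A sketch of turning the active filter selections into the API's query parameters, such as `?ct_id=`; any parameter name other than `ct_id` is illustrative. A minimap marker click would feed into the same mechanism by setting the `ct_id` filter and refetching:

```typescript
// Serialize active filters into a query string, skipping unset fields.
// Produces e.g. "?ct_id=CT-07&animal_type=tapir" (animal_type is a
// hypothetical field name).
function filterQuery(filters: Record<string, string | undefined>): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filters)) {
    if (value) params.set(key, value);
  }
  const qs = params.toString();
  return qs ? `?${qs}` : "";
}
```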
- How should we handle multiple media files for the same animal sighting incident? Should we merge these upstream into one record (e.g., in Timelapse or Frizzle) and use a carousel design to let users view multiple media attachments per record?
I would vote for collapsing by "event", but FYI sometimes the first image may not show much and so the image stack might show just the nose of an animal at first. I wonder if we would want to be able to mark the "cover photo" for each event so that once you review you could indicate which photo stays on top. (this is all bonus stuff and probably can be ignored)
> I wonder if we would want to be able to mark the "cover photo" for each event so that once you review you could indicate which photo stays on top.
I understand your point. I like this idea, and it reminds me of the "Top pick" checkbox in Timelapse that you added to a partner config. We could do this by having a db column `cover_media` which stores the filename of a single file, and if set, use that as the first media render, and then proceed in timestamp order.
We'll have to think about how this column is set upstream, but in my experience, most communities do this anyway as part of their CT processing workflows - they always want to find the very best visuals to show off in presentations or social media.
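The proposed `cover_media` ordering could be sketched as follows; the `MediaFile` shape is an assumption:

```typescript
// One media attachment belonging to an event.
interface MediaFile {
  filename: string;
  timestamp: string; // ISO 8601
}

// Order an event's media: if cover_media is set, that file renders first,
// then the rest follow in timestamp order.
function orderMedia(files: MediaFile[], coverMedia?: string): MediaFile[] {
  const sorted = [...files].sort((a, b) =>
    a.timestamp.localeCompare(b.timestamp),
  );
  if (!coverMedia) return sorted;
  return [
    ...sorted.filter((f) => f.filename === coverMedia),
    ...sorted.filter((f) => f.filename !== coverMedia),
  ];
}
```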
Some of our users are starting to collect camera trap data, consisting of a media file (image / video) and tabular metadata (such as location and site name).
I would like to invite some preliminary thoughts on how to visualize these on guardianconnector-views. This would be for exploration or visualization of data alone - we are not considering deriving any scientific or analytical insights about camera trap data at this time.
I can imagine that our users might like to explore this data via a Gallery and Map template - see the repo readme for screenshot examples of these existing view templates. (And you can view these on the BCM demo deployment, if you have access.)
However, are there any specific features that would be helpful to build in to these templates for camera traps? Are there other ways of exploring camera trap data that are different from these Gallery and Map templates?
For example, what comes to my mind is that we could...
Would love your ideas @abfleishman @IamJeffG @mmckown and anyone else!