This is a pre-req for uploading ETD stub MARC records.
The consumer (e.g. hydra_etd) should not need to know all of these calls, but should be able to call one method; the client can take care of the steps below.
Uploading a MARC record to FOLIO involves at least three API calls.
1) Create an uploadDefinition with file definitions (minimally, filenames) as the body:

/data-import/uploadDefinitions

Docs: AWS docs, FileUploadApi docs
Headers: {"Accept": "application/json, text/plain", "Content-Type": "application/json", "x-okapi-token": session_token}

Body: { "fileDefinitions": [ {"name": "before_ils.marc"} ] }

From the response, save `id` as the `uploadDefinitionId` used in the URI of the next two calls. Save `fileDefinitions.id` for use in the URI in the next step to upload the file.

2) Upload files
/data-import/uploadDefinitions/{upload_definition_id}/files/{file_def_id}

Docs: AWS Docs, FileUploadApi docs
Headers: {"Accept": "application/json, text/plain", "Content-Type": "application/octet-stream", "x-okapi-token": session_token}
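Steps 1 and 2 can be sketched as request-building helpers. This is an illustrative sketch only: it assumes both calls are POSTs and that `base_url` and `session_token` come from an existing Okapi session; all URLs and ids are placeholders.

```python
import json
import urllib.request


def create_upload_definition_request(base_url, session_token, filenames):
    """Step 1: create an uploadDefinition listing the files to be uploaded."""
    body = {"fileDefinitions": [{"name": name} for name in filenames]}
    return urllib.request.Request(
        f"{base_url}/data-import/uploadDefinitions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Accept": "application/json, text/plain",
            "Content-Type": "application/json",
            "x-okapi-token": session_token,
        },
        method="POST",
    )


def upload_file_request(base_url, session_token, upload_definition_id, file_def_id, marc_bytes):
    """Step 2: send the raw MARC bytes for one fileDefinition as octet-stream."""
    url = f"{base_url}/data-import/uploadDefinitions/{upload_definition_id}/files/{file_def_id}"
    return urllib.request.Request(
        url,
        data=marc_bytes,
        headers={
            "Accept": "application/json, text/plain",
            "Content-Type": "application/octet-stream",
            "x-okapi-token": session_token,
        },
        method="POST",
    )


create_req = create_upload_definition_request("https://okapi.example.edu", "token", ["before_ils.marc"])
# urllib.request.urlopen(create_req) would return the uploadDefinition JSON; save
# its "id" (the uploadDefinitionId) and fileDefinitions[0]["id"] for the next call.
upload_req = upload_file_request("https://okapi.example.edu", "token", "upload-def-id", "file-def-id", b"...")
```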
3) Request to process files

This step requires sending a jobProfileId. This should probably be a parameter, since there will be one used for ETD stubs and likely a different one for releaseWF. The param could be a hash including an `id` and a `name`, since both will need to be included in the request body. For example: {"id": "ae0a94d0-1f8e-4177-bcf9-c3a90e4c9429", "name": "ETDs New"}

/data-import/uploadDefinitions/{upload_definition_id}/processFiles

Docs: AWS Docs, FileProcessingAPI
Headers: {"Accept": "application/json, text/plain", "Content-Type": "application/json", "x-okapi-token": session_token}

The JSON body should have a key "uploadDefinition" whose value is the response returned in step (1), and a key "jobProfileInfo" with the UUID, name, and dataType of the job profile. The one we're using is the "ETDs New" job profile @ahafele created for us:

{"uploadDefinition": upload_definition_response, "jobProfileInfo": {"id": "ae0a94d0-1f8e-4177-bcf9-c3a90e4c9429", "name": "ETDs New", "dataType": "MARC"}}
Returns a 204 if the data import successfully started. (The response does not indicate whether the import completed or whether there were errors.)
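Step 3 can be sketched the same way. This assumes processFiles is a POST; `upload_definition` below is a stand-in for the JSON returned by step 1, and the job profile hash would be the parameter described above.

```python
import json
import urllib.request

# Placeholder inputs: the real uploadDefinition comes back from step 1.
job_profile = {"id": "ae0a94d0-1f8e-4177-bcf9-c3a90e4c9429", "name": "ETDs New", "dataType": "MARC"}
upload_definition = {"id": "upload-def-id", "fileDefinitions": [{"id": "file-def-id"}]}


def process_files_request(base_url, session_token, upload_definition, job_profile):
    """Step 3: ask FOLIO to start the data import for the uploaded files."""
    body = {"uploadDefinition": upload_definition, "jobProfileInfo": job_profile}
    return urllib.request.Request(
        f"{base_url}/data-import/uploadDefinitions/{upload_definition['id']}/processFiles",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Accept": "application/json, text/plain",
            "Content-Type": "application/json",
            "x-okapi-token": session_token,
        },
        method="POST",
    )


req = process_files_request("https://okapi.example.edu", "token", upload_definition, job_profile)
# A 204 response means the import started; completion must be checked separately (step 4).
```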
4) Check status of job

To check the status of the job, get the `jobExecutionId` from the uploadDefinition and GET /metadata-provider/jobSummary/{job_id}
A successful response will be something like:
Unsuccessful:
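In code, the status check might look like the following sketch (base URL, token, and the job execution id are placeholders; the id itself comes from the uploadDefinition as described above).

```python
import urllib.request


def job_summary_request(base_url, session_token, job_execution_id):
    """Step 4: GET /metadata-provider/jobSummary/{job_id} to check the import."""
    return urllib.request.Request(
        f"{base_url}/metadata-provider/jobSummary/{job_execution_id}",
        headers={
            "Accept": "application/json, text/plain",
            "x-okapi-token": session_token,
        },
        method="GET",
    )


req = job_summary_request("https://okapi.example.edu", "token", "job-exec-id")
# urllib.request.urlopen(req) would return the summary JSON to inspect for errors.
```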
See example scripts

This can be used to load records successfully, by setting up an ~/.okapi file and changing the jobProfileInfo in the script in step 3 (@lwrubel has a working example).

Another example (haven't tested this one, but it has the same methods):