-
Currently, import relies on the imported data coming from the same Azure account.
Suggestion: use presigned URLs to provide the importing account with authorization to read the contents.
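A presigned URL bundles the authorization into the URL itself, so the importing account needs no credentials of its own. A rough illustration of the idea in Python (this is the general HMAC-signed-URL pattern, not the actual Azure SAS or S3 SigV4 scheme; the key and helper names are hypothetical):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Hypothetical signing key held by the storage-account owner.
SECRET = b"storage-account-key"

def presign(url: str, expires_in: int = 3600) -> str:
    """Return a URL carrying its own expiry and HMAC signature."""
    expires = int(time.time()) + expires_in
    payload = f"{url}?expires={expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{url}?{urlencode({'expires': expires, 'sig': sig})}"

def verify(signed_url: str) -> bool:
    """Storage side: recompute the signature and check the expiry."""
    base, _, query = signed_url.partition("?")
    params = dict(pair.split("=", 1) for pair in query.split("&"))
    payload = f"{base}?expires={params['expires']}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["sig"]) and int(params["expires"]) > time.time()
```

The owner of the source account would generate such a URL per object and hand it to the importer, which can then read without holding any account keys.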
-
Support hooks that run for long periods of time (hours instead of seconds) without blocking the branch for writes.
-
The createDirectoryMarkerIfEmptyDirectory function checks for the existence of a
directory marker, then creates it if needed. lakeFS supports creating an
object only if it does not already exist. Use that!
In fac…
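For illustration, the single-call semantics being suggested look like this (an in-memory dict stands in for the object store; in lakeFS the check-and-create happens server-side in one atomic call, which is the point of the suggestion):

```python
def create_if_absent(store: dict, key: str, value: bytes) -> bool:
    """Create key only if it is not already present; return True if created.

    Modeled as one call: there is no window between the existence
    check and the write for another writer to slip into.
    """
    if key in store:
        return False
    store[key] = value
    return True

markers = {}
create_if_absent(markers, "some/dir/", b"")   # creates the marker
create_if_absent(markers, "some/dir/", b"")   # no-op: already exists
```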
-
Irrespective of the format of files under the specific prefix, the hook always fails.
> TODO: Need to evaluate the other hooks as well.
-
I believe there's a race condition in Graveler that could, in some cases, lead to lost writes.
This is a design flaw with [graveler.Set()](https://github.com/treeverse/lakeFS/blob/7377b6e1fc3c5968dcce…
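A deterministic simulation of the general read-modify-write hazard (this illustrates the class of bug, not Graveler's actual code path; names are illustrative):

```python
# The branch's staged entries, modeled as a single value that writers
# read, modify, and write back wholesale.
store = {"entries": frozenset()}

# Step 1: two concurrent writers each take a snapshot before either commits.
snapshot_a = store["entries"]
snapshot_b = store["entries"]

# Step 2: each commits a value derived from its (now stale) snapshot.
store["entries"] = snapshot_a | {"write-A"}   # A commits
store["entries"] = snapshot_b | {"write-B"}   # B overwrites: write-A is lost
```

The usual remedy is a compare-and-swap on commit: reject the second write because the value changed since its snapshot was taken, forcing that writer to retry on top of the new state.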
-
If the given URL to import from is itself a file, e.g. `s3://bucket/some/path/to/key/ -> file`, lakeFS will truncate the given URL and iterate over the keys past it (including it), so it will als…
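The mechanics: object-store listing is a lexicographic scan from a start key, so starting at the truncated key also picks up sibling keys that merely share it as a string prefix. A sketch (key names are hypothetical):

```python
keys = [
    "some/path/to/key",         # the file itself
    "some/path/to/key/child",   # a key under the given prefix
    "some/path/to/key2",        # sibling that merely shares the string prefix
    "some/path/to/other",
]

def scan_from(start: str):
    """Lexicographic scan from a start key, as object-store listing does."""
    return [k for k in sorted(keys) if k >= start]

def list_prefix(prefix: str):
    """Keep only keys actually under the prefix."""
    return [k for k in sorted(keys) if k.startswith(prefix)]

scan_from("some/path/to/key")
# also yields "some/path/to/key2" and "some/path/to/other",
# which are outside the intended import scope
```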
-
My app:
* Creates many objects (using the GetPhysicalAddress / linkPhysicalAddress API) on a path .../_temporary/...
* Stages each again on another path on the same branch using the StageObject
A…
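For context, the two steps sketched against a stand-in client (the class and method names below are hypothetical stubs, not the real lakefs_client signatures):

```python
class FakeLakeFS:
    """Stand-in for the lakeFS staging API, tracking logical -> physical mappings."""
    def __init__(self):
        self.objects = {}   # (branch, path) -> physical address
        self._counter = 0

    def get_physical_address(self, repo, branch, path):
        # Hand out a fresh address in the underlying store to upload to.
        self._counter += 1
        return f"s3://underlying-bucket/data/{self._counter}"

    def link_physical_address(self, repo, branch, path, physical_address):
        # Register the uploaded data under the logical path.
        self.objects[(branch, path)] = physical_address

    def stage_object(self, repo, branch, path, physical_address):
        # Stage the same physical data under another logical path.
        self.objects[(branch, path)] = physical_address

fs = FakeLakeFS()
addr = fs.get_physical_address("repo", "main", "_temporary/part-0")
fs.link_physical_address("repo", "main", "_temporary/part-0", addr)
# Re-stage the same physical object under its final path on the same branch.
fs.stage_object("repo", "main", "final/part-0", addr)
```

Note that both logical paths now point at the same physical address, which is what makes the subsequent behavior interesting.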
-
Adjust the documentation where necessary to follow the guidelines provided by AWS, so that lakeFS complies with AWS' requirements to become an ATP.
-
Calling get-commits throws when the parents field is missing, which is OK for the initial commit IDs, i.e. the commit ID of the initial branch creation.
```
lakefs_client.exceptions.ApiTypeError: Inval…
```
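One client-side workaround, operating on the raw commit JSON before the typed model parses it (`normalize_commit` is a hypothetical helper, not part of lakefs_client):

```python
def normalize_commit(commit: dict) -> dict:
    """Default a missing 'parents' to [] so the initial commit parses like any other."""
    out = dict(commit)
    out.setdefault("parents", [])
    return out

initial = {"id": "c0", "message": "repository created"}   # no 'parents' key
normalize_commit(initial)["parents"]   # empty list instead of a type error
```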
-
On more advanced versions of lakeFS (probably >= v1.0.0), we would like to remove the logic that tries to fill the generation field in the DB when loading old dumps. It means we will no longer support lo…