NVIDIA / aistore

AIStore: scalable storage for AI applications
https://aistore.nvidia.com
MIT License

Build for arm64 #177

Closed KhanMechAI closed 2 weeks ago

KhanMechAI commented 1 month ago

Describe the feature

I'd like to run this on an NVIDIA Jetson, so an arm64 build would be great.

Use Case

Embedded storage

Proposed Solution

Modify the platform-specific APIs to use generic Unix APIs.

Other Information

No response

Acknowledgements

AIStore build (latest, v3.22, ...)

latest

Environment details (OS name and version, etc.)

L4T 35.1

gaikwadabhishek commented 1 month ago

This feature request will not be prioritized at this time, as it does not align with our current goal of scalable storage. Our primary focus is storing petascale datasets and serving them at high throughput for deep learning.

While we may consider supporting this feature in the future, our team currently has other objectives/issues that require immediate attention.

KhanMechAI commented 2 weeks ago

I may be able to find some time to help. Any chance you have a rough idea of what might need to be done? Would a naive "fix it till it builds" approach be OK, or would I need a better grasp of the code/design?

alex-aizman commented 2 weeks ago

That'd be awesome if you give it a shot! The first step is crystal clear: port the following few assorted sources to arm64:

$ find . -type f -name "*_linux.go"
./fs/fs_linux.go
./cmn/cos/err_linux.go
./sys/proc_linux.go
./sys/api_linux.go
./sys/mem_linux.go
./sys/cpu_linux.go
./ios/diskstats_linux.go
./ios/dutils_linux.go
./ios/fsutils_linux.go

In the end, we should see the same filenames with an "_arm" suffix in place of "_linux".

Compare with the macOS port that we do have: find . -type f -name "*_darwin.go"