lizelive opened this issue 3 years ago
Do you have any way for folks to reproduce this issue? Is it unique to Azure, or happening in other environments as well?
It looks like the issue is in reading the file from the filesystem, not in go-ipfs itself or its internal datastore, but if there's a reproducible case this is much easier to investigate.
It's easily reproduced on azure. I can give you some credentials to a file store if that would help.
Yes, I am running into this problem too.
My way to reproduce it is to deploy IPFS as a Docker container and use an Azure file store to add space to /.
2021-05-05T17:24:09.044Z ERROR engine blockstore.GetSize(QmcXW7gydQwur59SEt6jY4Dxo5qqCVM1RKmwofpvQnowh8) error: stat /data/ipfs/blocks/BZ/AFYBEIGSZNFA6E5LBBYGIVIO2P3EYCNB7OET6RZ5TOGPQPJHBL7RRTFBZ4.data: interrupted system call
2021-05-05T17:24:12.321Z ERROR engine blockstore.GetSize(QmRR22joqkyQAgVaEHaLS5LdPxL8DBoTdPQ3Z4pRUPnep2) error: stat /data/ipfs/blocks/XU/AFYBEIBNWF67KUIOMQLEE4WJZNDH4WSMGYXASXZJCZIZEVYZXUOMGHFXUM.data: interrupted system call
2021-05-05T17:24:14.286Z ERROR engine blockstore.GetSize(QmZRzRzZUTxPjNkuAPGv91eSG7uPhy3huiAtEYBit8rmRX) error: stat /data/ipfs/blocks/R6/AFYBEIFEZ3BJWCI4FY3O2YBDTYRK35FYWWCSJP2AAVJBY46LBMCJNSRR6Y.data: interrupted system call
2021-05-05T17:24:15.749Z ERROR engine blockstore.GetSize(bafk2bzacea3u6gnvmj7sspyuuozxg3j5w6kcv4izdgl6terydob2nieo5dxng) error: stat /data/ipfs/blocks/XN/AFK2BZACEA3U6GNVMJ7SSPYUUOZXG3J5W6KCV4IZDGL6TERYDOB2NIEO5DXNG.data: interrupted system call
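The `interrupted system call` in these logs is EINTR from `stat(2)`. Go 1.14 introduced signal-based goroutine preemption, which is known to surface EINTR from slow network filesystems such as Azure Files, and per the version info below go-ipfs 0.7.0 is built with go1.14.4. A minimal sketch of the usual retry-on-EINTR workaround (the `statRetry` helper is hypothetical, not go-ipfs's actual code):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

// statRetry retries os.Stat when the underlying syscall is interrupted
// by a signal (EINTR), which is what the blockstore.GetSize errors above
// report. Illustration only, not go-ipfs's actual blockstore code.
func statRetry(path string) (os.FileInfo, error) {
	for {
		fi, err := os.Stat(path)
		if err != nil && errors.Is(err, syscall.EINTR) {
			continue // interrupted before completion; just retry
		}
		return fi, err
	}
}

func main() {
	if _, err := statRetry("/tmp"); err != nil {
		fmt.Println("stat failed:", err)
	} else {
		fmt.Println("stat ok")
	}
}
```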
```yaml
apiVersion: '2019-12-01'
type: Microsoft.ContainerInstance/containerGroups
location: westus
name: ipfs
properties:
  containers:
  - name: ipfs-node1
    properties:
      environmentVariables: []
      image: ipfs/go-ipfs
      ports:
      - port: 4001
      - port: 5001
      - port: 8080
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 2
      volumeMounts:
      - mountPath: /data/ipfs
        name: dataipfs
      - mountPath: /export
        name: exportipfs
  osType: Linux
  restartPolicy: Always
  ipAddress:
    type: Public
    ports:
    - port: 4001
    - port: 5001
    - port: 8080
    dnsNameLabel: ipfsn1
  volumes:
  - name: dataipfs
    azureFile:
      shareName: sndatan1
      storageAccountName: sanipfsnode1
      storageAccountKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  - name: exportipfs
    azureFile:
      shareName: sndatan1
      storageAccountName: sanipfsnode1
      storageAccountKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
tags: {}
```
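Since go-ipfs here is built with Go 1.14, one possible mitigation (an assumption to verify, not a confirmed fix) is disabling the Go runtime's signal-based async preemption via the standard `GODEBUG` environment variable, which is what generates these interrupted system calls. In the container group YAML above, the empty `environmentVariables` list would become:

```yaml
environmentVariables:
- name: GODEBUG
  value: asyncpreemptoff=1
```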
Version information:
go-ipfs version: 0.7.0
Repo version: 10
System version: amd64/linux
Golang version: go1.14.4
Description:
ipfs add -r -Q -p /mnt/dataset/
Error: read /mnt/dataset/sample.jpg: interrupted system call
I mounted some more drives and am copying the files over first, and that seems to be working, but I am wondering what I am doing wrong. I would prefer not to need that extra copy stage in the future, as these are pretty big datasets and it adds several hours.