Open Thermi opened 8 years ago
Ok, I solved it:
1) Changed the file system type to ext4.
2) Mounted it to the image:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongodb-test
spec:
  replicas: 1
  # selector identifies the set of Pods that this
  # replication controller is responsible for managing
  selector:
    app: mongo
  # podTemplate defines the 'cookie cutter' used for creating
  # new pods when necessary
  template:
    metadata:
      labels:
        # Important: these labels need to match the selector above
        # The api server enforces this constraint.
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: mongo-port
          volumeMounts:
            - name: mongo-persistent-db
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-db
          awsElasticBlockStore:
            volumeID: <volume_id> # or aws://<region_id>/<volume_id>
            fsType: ext4
```
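The "changed the file system type to ext4" step above amounts to formatting the volume before Kubernetes mounts it. A minimal sketch using a loopback image file as a stand-in for the EBS device (the real device path, e.g. something under `/dev/` reported by `lsblk` on the node, is an assumption):

```shell
# Stand-in for the attached EBS device: a small disposable image file.
# On a real node you would run mkfs.ext4 against the block device instead.
truncate -s 64M /tmp/ebs-test.img

# Format as ext4 (-F because the target is a regular file, -q for quiet)
mkfs.ext4 -Fq /tmp/ebs-test.img

# Verify the filesystem type matches fsType: ext4 in the volume spec
blkid /tmp/ebs-test.img
```

On the real volume, running `mkfs.ext4` and then `blkid` against the device should report `TYPE="ext4"`, matching the `fsType` in the manifest.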
I don't believe there are plans to include xfsprogs in our hyperkube image, which is why I believe you're seeing the "file not in $PATH" message.
\cc @aaronlevy
The recommended FS to run MongoDB on is XFS. MongoDB is a very widespread database, so please include xfsprogs in the hyperkube image.
You could try bind-mounting in /usr/sbin/mkfs.xfs (I'm assuming that's the binary it's expecting), but it has a few shared library dependencies that I'm not sure are present in the hyperkube image.
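If you try the bind-mount route, you can list those shared library dependencies up front with `ldd` (the `/usr/sbin` path is the usual xfsprogs install location, not confirmed for every distro):

```shell
# Print the shared libraries mkfs.xfs is linked against; every .so shown
# would also need to be present (or bind-mounted) inside the hyperkube image.
ldd /usr/sbin/mkfs.xfs
```

A statically linked mkfs.xfs would sidestep this entirely; for a static binary, `ldd` reports "not a dynamic executable".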
I've changed title / labels to track this as a feature request.
I would be +1 on adding XFS as well for its dynamic inode allocation. Use case: a Prometheus instance on an iSCSI PV on-prem. With ext4 you run out of inodes real quick, and oversizing the volume to get more inodes is not a great workaround.
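For the ext4 inode-exhaustion case above, `df -i` shows the problem, and `mkfs.ext4 -i` is the knob at format time (the 4096 bytes-per-inode figure and the `/dev/xvdf` device below are purely illustrative assumptions):

```shell
# Inode usage: ext4 allocates a fixed inode table at mkfs time, so
# IUse% can hit 100 while df -h still shows plenty of free space.
df -i /

# At format time, a smaller bytes-per-inode ratio yields more inodes:
#   mkfs.ext4 -i 4096 /dev/xvdf
# XFS avoids the problem by allocating inodes dynamically.
```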
Aren't xfsprogs in hyperkube now? I remember seeing it when I was still running on AWS.
On my 1.7 hyperkube: `mkfs.bfs mkfs.cramfs mkfs.ext2 mkfs.ext3 mkfs.ext4 mkfs.ext4dev mkfs.minix`
On hyperkube 1.10.1 (`quay.io/coreos/hyperkube:v1.10.1_coreos.0`): `mkfs.bfs mkfs.cramfs mkfs.ext2 mkfs.ext3 mkfs.ext4 mkfs.minix mkfs.xfs`
Looks like it's ok now; going to give it a try!
Hello,
It seems like some executable is missing on CoreOS version 1032.0.0.
Trying to run a ReplicationController whose pods require a mounted EBS volume just gives these errors:
This is the ReplicationController YAML file:
How can this be fixed? Which executable does it want? Shouldn't a bind mount be enough?