88

This was discussed by k8s maintainers in https://github.com/kubernetes/kubernetes/issues/7438#issuecomment-97148195:

Allowing users to ask for a specific PV breaks the separation between them

I don't buy that. We allow users to choose a node. It's not the common case, but it exists for a reason.

How did it end? What's the intended way to have >1 PV's and PVC's like the one in https://github.com/kubernetes/kubernetes/tree/master/examples/nfs?

We use NFS, and PersistentVolume is a handy abstraction because we can keep the server IP and the path there. But a PersistentVolumeClaim binds to any PV with sufficient size, which prevents us from reusing a specific path.

I can set volumeName in the PVC spec block (see https://github.com/kubernetes/kubernetes/pull/7529), but it makes no difference.

10 Answers

101

There is a way to pre-bind PVs to PVCs today, here is an example showing how:

  1. Create a PV object with a ClaimRef field referencing a PVC that you will subsequently create:
     $ kubectl create -f pv.yaml
     persistentvolume "pv0003" created
    
    where pv.yaml contains:
     apiVersion: v1
     kind: PersistentVolume
     metadata:
       name: pv0003
     spec:
       storageClassName: ""
       capacity:
         storage: 5Gi
       accessModes:
         - ReadWriteOnce
       persistentVolumeReclaimPolicy: Retain
       claimRef:
         namespace: default
         name: myclaim
       nfs:
         path: /tmp
         server: 172.17.0.2
    
  2. Then create the PVC with the same name (the corresponding create command is shown after this list):
     kind: PersistentVolumeClaim
     apiVersion: v1
     metadata:
       name: myclaim
     spec:
       storageClassName: ""
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 5Gi
    
  3. The PV and PVC should be bound immediately:
     $ kubectl get pvc
     NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
     myclaim   Bound     pv0003    5Gi        RWO           4s
     $ ./cluster/kubectl.sh get pv
     NAME      CAPACITY   ACCESSMODES   STATUS    CLAIM             REASON    AGE
     pv0003    5Gi        RWO           Bound     default/myclaim             57s
    

11 Comments

That's an ambitious ticket for sure. Is it possible that Volume Selectors could pre-date the architecture overhaul, given that it can be done regardless of which proposal is implemented?
Yep. See github.com/kubernetes/kubernetes/issues/18359, it will tackle the implementation independently.
#18359 would be perfect. With 1.2 due soon I guess we'll have to hope for this to make it into 1.3?
Yep, we're aiming for 1.3
This doesn't work. When I create the PV as indicated, the claimRef is silently ignored, so the PV is just like any other.
64

It can be done using the keyword volumeName:

For example:

apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "claimapp80"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "10Gi"
  volumeName: "app080"

will claim the specific PV app080.

2 Comments

You should probably set storageClassName to "" too. See this example cloud.google.com/kubernetes-engine/docs/how-to/…
What happens if the PV doesn't exist (and dynamic provisioning is enabled)? Will it create the PV with that name or blow up?
20

It is better to specify both volumeName in the PVC and claimRef in the PV.

Using storageClassName: manual in both the PV and the PVC lets them bind to each other, but it does not guarantee a specific pairing if there are many manual PVs and PVCs.

Specifying a volumeName in your PVC does not prevent a different PVC from binding to the specified PV before yours does. Your claim will remain Pending until the PV is Available.

Specifying a claimRef in a PV does not prevent the specified PVC from being bound to a different PV. The PVC is free to choose another PV to bind to according to the normal binding process. Therefore, to avoid these scenarios and ensure your claim gets bound to the volume you want, you must ensure that both volumeName and claimRef are specified.
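A minimal sketch of that combination (the PV/PVC names and the NFS backend here are assumed for illustration, not from the original question):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: reserved-pv
spec:
  storageClassName: ""
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:                  # reserves this PV for one specific PVC
    kind: PersistentVolumeClaim
    namespace: default
    name: reserved-pvc
  nfs:                       # assumed backend; the same idea works with any volume plugin
    path: /exports/data
    server: 192.168.0.10
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: reserved-pvc
  namespace: default
spec:
  storageClassName: ""
  volumeName: reserved-pv    # the claim, in turn, names the PV explicitly
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi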

You can tell that your setting of volumeName and/or claimRef influenced the matching and binding process by inspecting a Bound PV and PVC pair for the pv.kubernetes.io/bound-by-controller annotation. The PVs and PVCs where you set the volumeName and/or claimRef yourself will have no such annotation, but ordinary PVs and PVCs will have it set to "yes".
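One way to check (using the PV and PVC names from the first answer) is to dump the objects and look for the annotation:

 $ kubectl get pv pv0003 -o yaml | grep bound-by-controller
 $ kubectl get pvc myclaim -o yaml | grep bound-by-controller

According to the passage above, a pre-bound pair prints nothing here, while a pair matched by the controller shows pv.kubernetes.io/bound-by-controller: "yes".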

When a PV has its claimRef set to some PVC name and namespace, and is reclaimed according to a Retain reclaim policy, its claimRef will remain set to the same PVC name and namespace even if the PVC or the whole namespace no longer exists.

source: https://docs.openshift.com/container-platform/3.11/dev_guide/persistent_volumes.html

2 Comments

in your first sentence - the claimRef is specified in the PV, not the PVC...
In the next major version of OpenShift (4.x), I cannot find this explanation anymore, probably removed. Refer to the K8s docs instead, which describe the same procedure of matching the two using volumeName and claimRef.
13

The storageClassName in the PV and the PVC should be the same. Add the PersistentVolume name as volumeName in the PVC to bind the PVC to a specific PV.

like:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 40Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-name
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-name

1 Comment

I'm getting error unknown field "volumeName" in io.k8s.api.core.v1.ResourceRequirements.
4

Yes, you can provide the volumeName in the PVC. It will bind exactly to the PV named in volumeName (the rest of the spec still has to be compatible).
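One way to double-check which PV a bound claim actually ended up on (the claim name here is assumed):

 $ kubectl get pvc myclaim -o jsonpath='{.spec.volumeName}'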

Comments

4

Per the documentation:


The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.

By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.

The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.


Even though StatefulSets are the resource typically used to manage stateful applications, I have modified the PV/PVC example they provide into a Deployment example (with nginx; compatible with at least minikube, please comment/edit regarding compatibility with cloud providers):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
spec:
  volumeName: foo-pv
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
#  name: bar-pv
spec:
  storageClassName: "standard"
  capacity:
    storage: "1Gi"
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/test/"
#  claimRef:
#    name: foo-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
          - name: nginx-storage
            mountPath: /test/
      volumes:
      - name: nginx-storage
        persistentVolumeClaim:
          claimName: foo-pvc

So, to elaborate on the documentation: if you fill in the PV's spec.claimRef.name (as foo-pvc or anything else), your PVC will sit Pending and won't bind. (I left it commented out.) You will notice, however, if you inspect the PV after creation with kubectl edit pv foo-pv, that the control plane has set claimRef itself once the binding happens.

Also, I left an alternate PV metadata.name (commented out); you can switch the comment line and see that the PVC will not bind if the value doesn't match what it specified with spec.volumeName.

Additional note: I found that if you have not created /mnt/test before deploying the above, it will create that folder when you write to it from inside the container.
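For reference, a claimRef that readers report working for pre-binding (see the comment below) spells out the kind and namespace in addition to the name; placed under the PV's spec, and reusing the names from the example above, it would look like this:

  claimRef:
    kind: PersistentVolumeClaim
    name: foo-pvc
    namespace: default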

1 Comment

I ran into what you are saying. However, I found that the reason was that in the PV's claimRef, it needs two other fields. Namespace of the PVC, and Kind as in: claimRef: kind: PersistentVolumeClaim name: foo-pvc namespace: default
3

Now we can use storageClassName (at least since Kubernetes 1.7.x).

See detail https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage

Copied the sample code here as well:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

3 Comments

Can the storageClassName be anything (i.e., user-defined), or must it be manual or selected from a particular set?
A StorageClass only binds to a "class" of PVs, not a specific one.
What if there is more than one PV?
2

I don't think @jayme's edit to the original answer is forward compatible.

Though only documented as a proposal, label selectors in PVCs seem to work with Kubernetes 1.3.0.

I've written an example that defines two volumes that are identical except in labels. Both would satisfy any of the claims, but when claims specify

selector:
    matchLabels:
      id: test2

it is evident that one of the dependent pods won't start, and the test1 PV stays unbound.
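The full volumetest.yml is not inlined in this answer, but a minimal sketch of the idea (names, sizes, and hostPath locations assumed; the original also adds pods that mount the claims, omitted here) is two labelled PVs plus a claim that selects one of them:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
  labels:
    id: test1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/test1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
  labels:
    id: test2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/test2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      id: test2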

This can be tested, for example in minikube, with:

$ kubectl create -f volumetest.yml
$ sleep 5
$ kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
volumetest1                       1/1       Running   0          8m
volumetest1-conflict              0/1       Pending   0          8m
$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   STATUS      CLAIM          REASON    AGE
pv1       1Gi        RWO           Available                            8m
pv2       1Gi        RWO           Bound       default/test             8m

1 Comment

I think that PetSet (kubernetes.io/docs/user-guide/petset) in Kubernetes 1.3.0 solves deterministic mapping without label selectors. It addresses my original use cases. Pods from Replication Controllers or Deployments were not really meant to have persistent storage.
2

How about a label selector, as described in the Kubernetes docs:

Claims can specify a label selector to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields:

matchLabels - the volume must have a label with this value
matchExpressions - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist.
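For example, a claim using matchExpressions rather than matchLabels might look like this (the label key and values are assumed for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: selective-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchExpressions:
      - key: environment       # assumed label key set on the candidate PVs
        operator: In
        values:
          - staging
          - dev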

Comments

0

Here's my IBM Cloud Example, using the Block storage plugin.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nexus-data # Example: my-persistent-volume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Gi # Example: 20Gi
  csi:
    driver: vpc.block.csi.ibm.io
    fsType: ext4
    volumeAttributes:
      iops: # Example: "3000"
      volumeId: # Example: a1a11a1a-a111-1111-1a11-1111a11a1a11
      zone: # Example: "eu-de-3"
    volumeHandle: # Example: a1a11a1a-a111-1111-1a11-1111a11a1a11
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: nexus
    name: nexus-data-claim
  storageClassName: ""
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexus-data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: ""

Comments
