47

I set up Kubernetes on CoreOS on bare metal using the generic install scripts. It's running the current stable release, 1298.6.0, with Kubernetes version 1.5.4.

We'd like to have a highly available master setup, but we don't have enough hardware at this time to dedicate three servers to serving only as Kubernetes masters, so I would like to allow user pods to be scheduled on the Kubernetes master. I set --register-schedulable=true in /etc/systemd/system/kubelet.service, but the master still showed up as SchedulingDisabled.
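
For context, the change amounted to passing the flag on the kubelet command line in the unit file, roughly like this (excerpt only; the wrapper path and the remaining flags come from the install scripts and are omitted here):

# /etc/systemd/system/kubelet.service (excerpt)
[Service]
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --register-schedulable=true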

I tried to add the settings needed to also register the node as a worker: adding worker TLS certs to /etc/kubernetes/ssl, adding those settings to kubelet.service, adding an /etc/kubernetes/worker-kubeconfig.yaml that pointed to those certs, and adding that information to /etc/kubernetes/manifests/kube-proxy.yaml. I used my existing nodes as a template for what to add. This registered another node under the master's hostname, and then both it and the original master node showed up as NotReady,SchedulingDisabled.

This question indicates that scheduling pods on the master node should be possible, but I can find barely anything else on the subject.

11 Answers

61

If you are using Kubernetes 1.7 and above:

kubectl taint node mymasternode node-role.kubernetes.io/master:NoSchedule-
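
To confirm the taint is actually gone (using the same mymasternode placeholder), you can print the node's remaining taints; if nothing comes back, the node has no taints left:

kubectl get node mymasternode -o jsonpath='{.spec.taints}'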

4 Comments

I believe this command should end with a minus to actually remove the taint from the master. Right?
Should be the accepted answer. @VictorG - Yes, it should be kubectl taint node dashboard2.pvi.com node-role.kubernetes.io/master:NoSchedule-
Starting with 1.20 the command should be: kubectl taint node mymasternode node-role.kubernetes.io/control-plane:NoSchedule- See more: kubernetes.io/docs/reference/labels-annotations-taints/…
"Suggested edit queue is full" when trying to edit answer. @Pascal is correct and should be added to this answer. Version <1.20 vs >1.20. Console output to add: taint "node-role.kubernetes.io/master:NoSchedule" not found
16

node-role.kubernetes.io/master

is deprecated in favor of:

node-role.kubernetes.io/control-plane

Official Kubernetes documentation: node-role-kubernetes-io-master

So for versions v1.20 and later, the solution is:

kubectl taint node <master-node> node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint node <master-node> node-role.kubernetes.io/master:NoSchedule-
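
If you have several control-plane nodes, you can also target them by the standard control-plane label instead of by name (a sketch; the "not found" error kubectl prints for a taint a node doesn't have is harmless):

kubectl taint nodes -l node-role.kubernetes.io/control-plane node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint nodes -l node-role.kubernetes.io/control-plane node-role.kubernetes.io/master:NoSchedule-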

3 Comments

Upvoted, but just to point out: on my v1.24.0 installation I had both the master and control-plane taints, both with the NoSchedule effect, and both needed to be removed.
Thank you, yes I forgot to mention that both taints should be removed.
Tested on v1.33.2. Only the first line is needed. kubectl taint node <master-node> node-role.kubernetes.io/control-plane:NoSchedule-
13

Use the command below to untaint all masters (it removes the master taint from every node that has it):

kubectl taint nodes --all node-role.kubernetes.io/master-

11

First, get the name of the master

kubectl get nodes

NAME     STATUS   ROLES    AGE   VERSION
yasin   Ready    master   11d   v1.13.4

As we can see, there is one node named yasin and its role is master. If we want to use it as a worker as well, we should run:

kubectl taint nodes yasin node-role.kubernetes.io/master-

7

For anyone using kops on AWS: I wanted to enable scheduling of Pods on the master.

$ kubectl get nodes -owide was giving me this output:

NAME                                          STATUS
...
...
ip-1**-**-**-***.********.compute.internal    Ready                      node
ip-1**-**-**-***.********.master.internal     Ready,SchedulingDisabled   master
                                                    ^^^^^^^^^^^^^^^^^^
ip-1**-**-**-***.********.compute.internal    Ready                      node
...
...

And $ kubectl describe nodes ip-1**-**-**-***.********.master.internal:

...
...
Taints:             <none>
Unschedulable:      true
...                 ^^^^
...

Patching the master with this command:

$ kubectl patch node MASTER_NAME -p "{\"spec\":{\"unschedulable\":false}}"

worked for me and scheduling of Pods is now enabled.

Ref: https://github.com/kubernetes/kops/issues/639#issuecomment-287015882
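
This is the same field that kubectl cordon/uncordon toggles, so an equivalent way to achieve the same result is:

kubectl uncordon MASTER_NAME

and you can inspect the current value with:

kubectl get node MASTER_NAME -o jsonpath='{.spec.unschedulable}'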

2 Comments

This also works on the OP's bare-metal (e.g., "the-hard-way") installations. The patch also persists across kubelet upgrades, so extra points for that! Thanks.
Worked beautifully!

4

I don't know why the master node shows up as NotReady; it shouldn't. Try executing kubectl describe node mymasternode to find out.

The SchedulingDisabled status is because the master node is tainted with dedicated=master:NoSchedule.

Execute this command against all your masters to remove the taint:

kubectl taint nodes mymasternode dedicated-

To understand why that works, read up on taints and tolerations.
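
As an alternative to untainting the node, you can leave the taint in place and add a matching toleration only to the pods you want to run on the master. A sketch, assuming the dedicated=master:NoSchedule taint mentioned above, to be placed under the pod's spec:

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "master"
  effect: "NoSchedule"

Only pods carrying this toleration become schedulable on the master; everything else still stays off it.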

2

Allow scheduling of pods on the master

kubectl taint node --all node-role.kubernetes.io/master:NoSchedule-

Verify the master isn't tainted

kubectl describe node | egrep -i taint

Taints: <none>

Schedule and run a test pod on the master

kubectl run -it  busybox-$RANDOM --image=busybox --restart=Never -- date
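
To confirm the test pod actually landed on the master, check the NODE column afterwards:

kubectl get pods -o wide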

This answer is a combination of other SO answers, from Victor G, Aryak Sengupta, and others.

0

Another way is to list the taints on all nodes and untaint the one that is tainted.

root@lab-a:~# kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.taints}"
{
  "name": "lab-a",
  "taints": null
}
{
  "name": "lab-b",
  "taints": [
    {
      "effect": "NoSchedule",
      "key": "node-role.kubernetes.io/master"
    }
  ]
}

lab-a does not have any taint, so we untaint lab-b:

root@lab-a:~# k taint node lab-b node-role.kubernetes.io/master:NoSchedule-
node/lab-b untainted

Install jq on Ubuntu with: apt-get install jq (k above is just an alias for kubectl).
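
If jq isn't available, a plain jsonpath query gives roughly the same overview:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'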

0

Since OpenShift 4.x, CoreOS is directly integrated into the Kubernetes configuration; you can make all masters schedulable this way:

# set the field spec.mastersSchedulable to true
$ oc patch schedulers.config.openshift.io cluster --type json \
     -p '[{"op": "add", "path": "/spec/mastersSchedulable", "value": true}]'

or using

oc edit schedulers.config.openshift.io cluster 

and edit the field

spec:
    mastersSchedulable: true
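
You can verify that the change took effect with a standard field query (same field as above):

oc get schedulers.config.openshift.io cluster -o jsonpath='{.spec.mastersSchedulable}'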

0

The answer is

kubectl taint nodes --all node-role.kubernetes.io/master-

according to: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#control-plane-node-isolation

0
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-
