
microk8s is running on a single node. The problem appeared after the pods consumed all of the node's resources at one point; some time later I was able to remove some of the deployments to free resources. I then cleared some disk space, and there no longer seems to be any shortage on the node. I also tried sudo microk8s refresh-certs -e server.crt, but it had no effect, and microk8s reset is not working either.

light@o-node0:~$ df -h

Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.2G  1.4M  1.2G   1% /run
/dev/sda2        32G   25G  5.0G  84% /
tmpfs           5.8G     0  5.8G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.2G  4.0K  1.2G   1% /run/user/1000
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/77d356835002abd4049efff2d03037c12cd85b4f9f9b4921453f010a26346c39/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/d142b8ad0aa537ca063ea119ad1d2d03f9740ebb8480733e8d21d96a10a0c814/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/4676759f00412a3460949d456fa2a1ce0f470d951147e4f3fd348903f26e9e36/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/9df2580c5ae4059e742694372b865e500d29ee2c5b3e2a0ea7419b691e204e21/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/4865028305e22b20928f2522e352566158bcdf1f5f24af2bc866d16027df201e/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/1e71700eea92b63dbc6d601eef0ce87a8e0479d95123eea279023049c6772f8c/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/19cc93f3a098463812d8e9afae31420e49c57fad4aa5b41d2028f8332f36d3f5/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/deab0e6fd1abf3d2c168c9b017943e7946df4911e75aaaef5cfff94abbb464c9/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/38cc5ecc2bacb7b7e4a38528c0c41ca7af86d1e08c4819db02ec78ecf52bb43d/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/6b0341ae0c6e4c57954dc6aaaf2df2e3649ad18d80e47535967c34ee0630ffe8/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/4135bef3d2b7ea52547f791f5958bdfb62d6bd79da6491ad4fb7ecbd64834689/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/a8b631dca64fe6f930509eb24df6d3857f080c147fe21f3de8bcf8029464b2a3/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/5694115e9b557f916353ff83d8975b3e54fe0c6cfaad681167d5f093b2565d1d/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/b752c2bb3632e7a3055aa675e1e531fe8afd6ecc5bb6eaa8899a94bf24c7242c/shm
shm              64M   16K   64M   1% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/ba191faa7d9c6d1ff5fbde3def6eaa5611128c0458b5cc795feef027aa3c3bbe/shm
shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/dfe3de233cc8b1f3aeebc98eccadc58cd71a4604613ed2058ceaa9688ebac472/shm
light@o-node0:~$ cat /proc/meminfo
MemTotal:       11965968 kB
MemFree:         2358904 kB
MemAvailable:   11017092 kB

sudo journalctl -u snap.microk8s.daemon-kubelite

Mar 28 07:21:18 o-node0 microk8s.daemon-kubelite[1449518]: I0328 07:21:18.145439 1449518 daemon.go:65] Starting API Server
Mar 28 07:21:18 o-node0 microk8s.daemon-kubelite[1449518]: I0328 07:21:18.147024 1449518 server.go:560] external host was not specified, using 192.168.125.74
Mar 28 07:21:18 o-node0 microk8s.daemon-kubelite[1449518]: W0328 07:21:18.147050 1449518 authentication.go:520] AnonymousAuth is not allowed with the AlwaysA>
Mar 28 07:21:18 o-node0 microk8s.daemon-kubelite[1449518]: I0328 07:21:18.147876 1449518 server.go:168] Version: v1.26.1
Mar 28 07:21:18 o-node0 microk8s.daemon-kubelite[1449518]: I0328 07:21:18.147898 1449518 server.go:170] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 28 07:21:18 o-node0 microk8s.daemon-kubelite[1449518]: Error: missing content for serving cert "serving-cert::/var/snap/microk8s/4595/certs/server.crt::/>
Mar 28 07:21:18 o-node0 microk8s.daemon-kubelite[1449518]: F0328 07:21:18.148156 1449518 daemon.go:67] API Server exited missing content for serving cert "se>
Mar 28 07:21:18 o-node0 systemd[1]: snap.microk8s.daemon-kubelite.service: Main process exited, code=exited, status=255/EXCEPTION
Mar 28 07:21:18 o-node0 systemd[1]: snap.microk8s.daemon-kubelite.service: Failed with result 'exit-code'.
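
The kubelite error above says the content of the serving certificate is missing. A minimal sanity check against the path from that log line (revision 4595 is taken from the log; server.key is only assumed to be the name of the matching key file):

# Confirm the serving cert and (assumed) key file exist and are not empty
ls -l /var/snap/microk8s/4595/certs/server.crt /var/snap/microk8s/4595/certs/server.key

# Confirm the certificate still parses and is inside its validity window
sudo openssl x509 -in /var/snap/microk8s/4595/certs/server.crt -noout -subject -dates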

sudo journalctl -u snap.microk8s.daemon-cluster-agent

Mar 11 05:41:19 o-node0 systemd[1]: Stopped Service for snap application microk8s.daemon-cluster-agent.
Mar 11 05:41:19 o-node0 systemd[1]: Started Service for snap application microk8s.daemon-cluster-agent.
Mar 11 05:41:19 o-node0 microk8s.daemon-cluster-agent[3688771]: /snap/microk8s/4595/run-cluster-agent-with-args: line 12: /snap/microk8s/4595/bin/uname: No such file or directory
Mar 11 05:41:19 o-node0 systemd[1]: snap.microk8s.daemon-cluster-agent.service: Main process exited, code=exited, status=127/n/a
Mar 11 05:41:19 o-node0 systemd[1]: snap.microk8s.daemon-cluster-agent.service: Failed with result 'exit-code'.
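
The cluster-agent failure looks different: its wrapper script cannot find /snap/microk8s/4595/bin/uname inside the mounted snap. As a quick sanity check (the path is copied from the log line above), verify whether that file is actually present and which snap revision is active:

ls -l /snap/microk8s/4595/bin/uname
snap list microk8s
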
light@o-node0:~$ microk8s inspect

Inspecting system
Inspecting Certificates
Inspecting services
 FAIL:  Service snap.microk8s.daemon-cluster-agent is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-cluster-agent
  Service snap.microk8s.daemon-containerd is running
 FAIL:  Service snap.microk8s.daemon-kubelite is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-kubelite
  Service snap.microk8s.daemon-k8s-dqlite is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy openSSL information to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting dqlite
  Inspect dqlite

1 Answer

While storage is one of the resources pods need when they are deployed, CPU and memory matter as well, and when the disk fills up, memory and CPU utilization typically rise too. From the data you provided, it is possible that your memory is being held in cache and is not being released.

 

Data taken from your output (converted from kB to GB for readability):

Total Memory: 11.965968 GB

Free Memory:  2.358904 GB

Available Memory: 11.017092 GB
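
Your numbers already point in that direction: MemFree is low while MemAvailable is almost as large as MemTotal, which normally means the difference is sitting in reclaimable page cache rather than being leaked by a process. A quick way to confirm this, using only standard tools:

free -h
grep -E '^(MemFree|MemAvailable|Cached):' /proc/meminfo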

 

Clear the cached memory, check for any leftover microk8s processes with `ps -ef`, kill those processes, and then start the microk8s services again.
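
A rough sketch of those steps; the drop_caches write and the snap restart are standard commands, and <pid> is a placeholder for whatever ps actually reports:

# Drop the page cache (harmless, but subsequent reads come from disk again)
sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

# Look for leftover microk8s / kubelite processes
ps -ef | grep -i microk8s | grep -v grep

# Kill any stragglers by PID (replace <pid> with the values shown by ps)
sudo kill <pid>

# Restart the services
sudo snap restart microk8s    # or: microk8s stop && microk8s start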


1 Comment

@noname7619 Is your issue resolved? Please report back if you are still facing problems.
