Load balancing does not work correctly when scaling pods

What happened:

When I ran a stress test against nginx, the nginx deployment scaled up, but the newly created nginx pods did not receive any load. If I stop the stress test for two minutes, all pods start serving traffic normally again. (Screenshot of the uneven per-pod load omitted.)
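
For what it's worth, this is roughly how I confirm the symptom while the stress test is running (a sketch; it assumes the Service and release are both named nginx in the test-p1 namespace, matching the Helm values below):

# kubectl -n test-p1 get endpoints nginx
# kubectl -n test-p1 top pods

If the symptom matches what is described above, the new pod IPs are already listed in the endpoints output while kubectl top shows them at close to zero CPU and the original pods stay saturated.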

What you expected to happen:

Once a pod is created by the HPA and is running, it should participate in load balancing normally.

How to reproduce it (as minimally and precisely as possible):

Create bitnami/nginx using Helm (an install sketch follows the values below):

# helm get values nginx -ntest-p1
USER-SUPPLIED VALUES:
autoscaling:
  enabled: false
  maxReplicas: 40
  minReplicas: 1
  targetCPU: 30
  targetMemory: 30
resources:
  limits:
    cpu: 200m
    memory: 128Mi
  requests:
    cpu: 200m
    memory: 128Mi
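
For completeness, the release can be installed roughly like this (a sketch: it assumes the values above are saved to a local values.yaml and that the bitnami chart repository still needs to be added):

# helm repo add bitnami https://charts.bitnami.com/bitnami
# helm install nginx bitnami/nginx -n test-p1 --create-namespace -f values.yaml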

Anything else we need to know?:

My test tool is http_load. If I start a stress test while the deployment is scaling down, it reports a "no route to host" error.
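
The test was driven with something like the following (a sketch: urls.txt, the target URL, and the -parallel/-seconds numbers are placeholders rather than the exact values from my run):

# echo "http://<nginx-service-cluster-ip>/" > urls.txt
# http_load -parallel 200 -seconds 300 urls.txt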

Environment:

Kubernetes version (use kubectl version): v1.18.8-aliyun.1

Cloud provider or hardware configuration: aliyun

OS (e.g: cat /etc/os-release): Alibaba Cloud Linux (Aliyun Linux) 2.1903 LTS (Hunting Beagle)

Kernel (e.g. uname -a): Linux 4.19.91-23.al7.x86_64 #1 SMP Tue Mar 23 18:02:34 CST 2021 x86_64 x86_64 x86_64 GNU/Linux

Network plugin and version (if this is a network-related bug): flannel:v0.11.0.2-g6e46593e-aliyun

Others:

kube-proxy runs in ipvs mode; everything else uses the default configuration.
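
If it helps with debugging, the IPVS state on a worker node can be inspected with ipvsadm to check whether the new pod IPs are added as real servers behind the Service (a sketch; <service-cluster-ip> is a placeholder for the nginx Service cluster IP):

# ipvsadm -Ln | grep -A 10 "<service-cluster-ip>"
# ipvsadm -Lnc | grep "<service-cluster-ip>" | head

The first command lists the virtual server and its real servers; the second lists the active IPVS connection entries for that address.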

Same issue on GitHub: https://github.com/kubernetes/kubernetes/issues/101887


