I have deployed a C# API on a Kubernetes cluster.
As far as I understand, the traffic should flow like this: GET HTTP request -> node (NodePort 30000) -> service (port 80) -> pod / C# API (targetPort 8080).
My Docker image exposes port 8080:
FROM our-registry/dotnet/sdk:8.0 AS build
WORKDIR /app
# Copy the project file and restore any dependencies (use .csproj for the project name)
COPY MyApi/MyApi/*.csproj ./
RUN dotnet restore
# Copy the rest of the application code
COPY MyApi/MyApi/. ./
# Publish the application
ARG BUILD_CONFIG=Release
RUN echo "Building with configuration: ${BUILD_CONFIG}"
RUN dotnet publish -c ${BUILD_CONFIG} -o out
# Build the runtime image
FROM our-registry/dotnet/aspnet:8.0 AS runtime
WORKDIR /app
COPY --from=build /app/out ./
# Expose the port your application will run on
EXPOSE 8080
# Start the application
ENTRYPOINT ["dotnet", "MyApi.dll"]
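As a quick sanity check outside the cluster, you can run the image directly and hit Swagger on the mapped port (the image name/tag here is an assumption, since the question doesn't show it):

docker run --rm -p 8080:8080 our-registry/my-api:latest
curl -v http://localhost:8080/swagger/index.html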
My Kubernetes api-service.yaml is set up like this:
apiVersion: v1
kind: Service
metadata:
  name: my-api-service
  namespace: somenamespace
spec:
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30000
  type: NodePort
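One thing worth verifying is that the selector matches the pod's labels; if it doesn't, the service has no endpoints and connections to the NodePort fail:

kubectl get endpoints my-api-service -n somenamespace

If the ENDPOINTS column is empty, the app: my-api selector does not match the labels on the running pod.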
My C# API launch settings set up port 8080, and it works fine locally in Debug/Release on that port:
{
  "profiles": {
    "http": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "swagger",
      "environmentVariables": {
        ...
      },
      "dotnetRunMessages": true,
      "applicationUrl": "http://localhost:8080"
    },
    "https": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "swagger",
      "environmentVariables": {
        ...
      },
      "dotnetRunMessages": true,
      "applicationUrl": "https://localhost:7084;http://localhost:8080"
    },
    ...
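Note that launchSettings.json only applies to local development (dotnet run / the IDE); the published container ignores it. Inside the container the binding comes from environment variables — the pod logs further down show ASPNETCORE_URLS overriding HTTP_PORTS and binding http://0.0.0.0:8080. If that variable is set in the Deployment, it would look roughly like this (the Deployment is not shown in the question, so names here are assumptions):

containers:
  - name: my-api
    image: our-registry/my-api:latest
    ports:
      - containerPort: 8080
    env:
      - name: ASPNETCORE_URLS
        value: "http://0.0.0.0:8080"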
On the cluster, the service is running:
kubectl get svc -n somenamespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-api-service NodePort 10.*.*.* <none> 80:30000/TCP 3h56m
The pod is running too:
kubectl get pods -n somenamespace -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-apipod-*********-***** 1/1 Running 0 138m 10.*.*.* somenode <none> <none>
I checked inside the pod, and the app is listening on 8080:
kubectl exec -it my-apipod-*********-***** -n somenamespace -- bash
...
root@my-apipod-*********-*****:/app# netstat -tulnp | grep LISTEN
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 1/dotnet
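From that same shell you can confirm the app actually answers on that port (assuming curl is installed in the image; the aspnet base image doesn't ship it by default):

curl -v http://localhost:8080/swagger/index.html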
Getting the node's IP:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
somenode Ready worker 108d v1.28.11+rke2r1 192.168.1.23 192.168.1.23 Ubuntu 20.04.6 LTS 5.4.0-200-generic containerd://1.7.17-k3s1
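It can also help to test the NodePort from the node itself, which takes any external firewall or routing out of the picture (SSH access to the node is assumed):

curl -v http://localhost:30000/swagger/index.html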
I try to connect to that node's IP on nodePort 30000:
curl -X GET http://192.168.1.23:30000/swagger/index.html -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 192.168.1.23:30000...
* Connected to 192.168.1.23 (192.168.1.23) port 30000 (#0)
> GET /swagger/index.html HTTP/1.1
> Host: 192.168.1.23:30000
> User-Agent: curl/7.81.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
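To narrow down where the reset happens, you can bypass the NodePort entirely and port-forward straight to the pod (pod name elided as in the outputs above):

kubectl port-forward -n somenamespace pod/my-apipod-*********-***** 8080:8080
curl -v http://localhost:8080/swagger/index.html

If this works, the app and pod are fine, and the problem is in the service/NodePort path.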
I am not sure what else to check. I ran my API code (.NET) locally, and Swagger is working and accessible there.
Thanks for your help
[edit]
When I make requests like these in the browser:
http://192.168.1.23:30000
http://192.168.1.23:30000/swagger/index.html
nothing appears in the logs beyond the startup output:
kubectl logs -f -n somenamespace my-apipod-*********-*****
warn: Microsoft.AspNetCore.Hosting.Diagnostics[15]
Overriding HTTP_PORTS '8080' and HTTPS_PORTS ''. Binding to values defined by URLS instead 'http://0.0.0.0:8080'.
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://0.0.0.0:8080
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app
This appears if I try HTTPS:
warn: Microsoft.AspNetCore.HttpsPolicy.HttpsRedirectionMiddleware[3]
Failed to determine the https port for redirect.
So the API seems to run fine.
First step of troubleshooting: run a tcpdump on the node while you send a request, to see whether packets for port 30000 arrive and which side resets the connection.
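A sketch of that capture, run on the node itself (the interface and the need for sudo are assumptions about your environment):

sudo tcpdump -i any -nn tcp port 30000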
You can also watch the application logs (kubectl logs, as shown above) while sending requests.
A connection reset is an indication that the connection is reaching your API endpoint, but an error is causing it to reset the connection.
The answer was to create an Ingress configuration.
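For reference, a minimal Ingress of the kind that resolved this might look like the following (the ingress class, host, and path are assumptions; RKE2 typically ships with an nginx ingress controller):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api-ingress
  namespace: somenamespace
spec:
  ingressClassName: nginx
  rules:
    - host: my-api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api-service
                port:
                  number: 80

With the Ingress in place, the service can be a plain ClusterIP, since traffic enters through the ingress controller instead of the NodePort.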