Health Check Packages

https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks

Example code in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // "self" always reports healthy as long as the process can serve the request.
    IHealthChecksBuilder healthChecks = services.AddHealthChecks()
        .AddCheck("self", () => HealthCheckResult.Healthy());

    // Register a URL check, tagged "services", for each configured dependency.
    // Guard against the section being missing, in which case Get returns null.
    IEnumerable<HealthCheckUrl> healthCheckUrls =
        Configuration.GetSection("HealthCheckUrls").Get<IEnumerable<HealthCheckUrl>>()
        ?? Enumerable.Empty<HealthCheckUrl>();

    foreach (HealthCheckUrl healthCheckUrl in healthCheckUrls)
        healthChecks.AddUrlGroup(new Uri(healthCheckUrl.Url), healthCheckUrl.Name, HealthStatus.Unhealthy, new[] { "services" });
}
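The HealthCheckUrl type above is assumed to be a simple POCO with string Name and Url properties, bound from a configuration section like the following (the section and property names come from the code above; the entries themselves are illustrative):

```json
{
  "HealthCheckUrls": [
    { "Name": "reports", "Url": "http://reports/api/status/self" },
    { "Name": "identity", "Url": "http://identity/api/status/self" }
  ]
}
```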

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Liveness: only the in-process "self" check.
    app.UseHealthChecks("/api/status/self", new HealthCheckOptions
    {
        Predicate = r => r.Name == "self"
    });

    // Readiness: every check tagged "services" (the URL group checks).
    app.UseHealthChecks("/api/status/services", new HealthCheckOptions
    {
        Predicate = r => r.Tags.Contains("services")
    });

    // Endpoint routing (the legacy UseRouter API predates 3.x and
    // doesn't mix with UseRouting).
    app.UseRouting();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGet("api/status", async context =>
        {
            context.Response.StatusCode = StatusCodes.Status200OK;
            await context.Response.WriteAsync(JsonConvert.SerializeObject(new { name = "Cerium Reporting API Gateway", status = "online" }));
        });
    });
}

The two endpoints are /api/status/self and /api/status/services. The self endpoint is a liveness check that Kubernetes can use to establish whether the service is still alive and responding to requests. The services endpoint serves as a readiness check: it verifies that all of the service's dependencies are online. During a rolling update, Kubernetes only destroys the previous containers once the new ones report ready, so a failing readiness check halts the rollout.
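Once the service is running, the two endpoints can be exercised by hand (host and port here are illustrative; by default the health checks middleware writes the aggregate status as plain text, e.g. Healthy with a 200 or Unhealthy with a 503):

```sh
# Liveness - passes as long as the process is up.
curl -i http://localhost:5000/api/status/self

# Readiness - passes only when every configured dependency URL responds.
curl -i http://localhost:5000/api/status/services
```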

Kubernetes Configuration

Liveness and Readiness Checks in Kubernetes

Example deployment with liveness and readiness config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: reportinggateway 
spec:
  replicas: 2
  selector:
    matchLabels:
      app: reportinggateway
  template:
    metadata:
      labels:
        app: reportinggateway
    spec:
      containers:
      - name: reportinggateway
        image: devcontainerreg1804-01.corp.ceriumnetworks.com:5000/reportinggateway:${buildid}
        ports:
        - name: http
          containerPort: 80
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: DevelopmentKubernetes
        livenessProbe:
          httpGet:
            path: /api/status/self
            port: 80
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 15
        readinessProbe:
          httpGet:
            path: /api/status/services
            port: 80
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 15
      imagePullSecrets:
      - name: dockerkey
      nodeSelector:
        kubernetes.io/os: linux
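The probes above rely on the Kubernetes defaults for the remaining knobs (timeoutSeconds: 1, failureThreshold: 3). Since the readiness check fans out to several dependency URLs, it may be worth setting these explicitly; a sketch of the readiness probe with the extra fields:

```yaml
        readinessProbe:
          httpGet:
            path: /api/status/services
            port: 80
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 15
          timeoutSeconds: 5      # allow time for the URL group to call each dependency
          failureThreshold: 3    # consecutive failures before the pod is marked NotReady
```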

Azure DevOps Configuration

Azure DevOps can be configured to treat a deployment that hasn't rolled out within a time limit as a failed deploy. In that case the containers' readiness checks aren't passing, so the service failed to deploy properly:

[Screenshot: Azure DevOps deployment task configuration]

Here, the "timeout for rollout status" setting determines how long to wait for the deployment to roll out before marking the deploy as a failure.
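This rollout check can also be run by hand against the cluster with kubectl (the deployment name comes from the manifest above; the timeout value is illustrative):

```sh
# Blocks until the rollout completes, or fails after the timeout -
# e.g. when the new pods never pass their readiness probe.
kubectl rollout status deployment/reportinggateway --timeout=120s
```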