Observability through logs: Centralized logging with Loki & querying via LogQL

OpenShift Observability

Red Hat® OpenShift® Observability is a comprehensive set of observability capabilities that provides deep insights into the performance and health of OpenShift-based applications and infrastructure across any footprint: the public cloud, on-prem, and edge.

Red Hat OpenShift Observability provides real-time visibility, monitoring, and analysis of various system metrics, logs, traces, and events to help you quickly diagnose and troubleshoot issues before they impact your applications or end users.

It streamlines metrics, traces and logs, while aggregating and transporting your data. Red Hat OpenShift Observability allows you to gain visibility into your clusters through efficient user interfaces and empowers your teams to make data-driven decisions.

With this balanced approach built on five pillars, you can monitor your workloads and optimize your infrastructure seamlessly.

log 2
log 1

OpenShift Logging

With OpenShift Container Platform, you can deploy OpenShift Logging to aggregate all the logs from your OpenShift Container Platform cluster, such as node system audit logs, application container logs, and infrastructure logs. OpenShift Logging aggregates these logs from throughout your cluster and stores them in a default log store.

OpenShift Logging aggregates the following types of logs:

  • application - Container logs generated by user applications running in the cluster, except infrastructure container applications.

  • infrastructure - Logs generated by infrastructure components running in the cluster and OpenShift Container Platform nodes, such as journal logs. Infrastructure components are pods that run in the openshift-*, kube-*, or default projects.

  • audit - Logs generated by auditd, the node audit system, which are stored in the /var/log/audit/audit.log file, and the audit logs from the Kubernetes apiserver and the OpenShift apiserver.

log 3

Logging resources

  • ClusterLogging (CL) - After the Operators are installed, you create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support the logging. The ClusterLogging CR deploys the collector and forwarder, which currently are both implemented by a daemonset running on each node. The Red Hat OpenShift Logging Operator watches the ClusterLogging CR and adjusts the logging deployment accordingly.

  • ClusterLogForwarder (CLF) - Generates collector configuration to forward logs per user configuration. The collector is based on Vector.

  • LokiStack - Controls the Loki cluster as log store and the web proxy with OpenShift Container Platform authentication integration to enforce multi-tenancy.
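To make these resources more concrete, here is a minimal sketch of a ClusterLogForwarder that forwards all three log types to the default in-cluster LokiStack store. This is illustrative only; field names follow the logging 5.x API, and your installed version may differ:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    # Forward application, infrastructure, and audit logs
    # to the default log store (the LokiStack managed by the operator).
    - name: all-to-default
      inputRefs:
        - application
        - infrastructure
        - audit
      outputRefs:
        - default
```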

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI.

The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster.

Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.

Need more details about OpenShift Logging? See the official OpenShift Logging documentation.

Review Application Code and Deployment Configurations

The sample workload used in this lab consists of 3 different applications:

  • frontend - A simple web application developed with Node.js that acts as the frontend.

  • simple-go - A Go application that provides APIs for the frontend application.

  • backend - A Java application developed with the Quarkus framework that acts as the backend.

Review these applications' deployment configurations as well as their code to understand how logging works.

Application Deployment Configurations

  • Go to the Git repository at https://github.com/rhthsa/developer-advocacy-2025/tree/main/config to see the deployment configurations.

    log 6
  • Review the Deployment and Service resources of the backend application in the backend.yaml file.

    log 7
  • Review the Deployment and Service resources of the simple-go application in the simple-go.yaml file.

    log 8
  • In the Deployment resource in the simple-go.yaml file, you should see an environment variable named BACKEND; its value is the backend application endpoint that the simple-go application calls.

    log 9
  • Review the Deployment, Service and Route resources of the frontend application in the frontend.yaml file.

    log 10
  • In the Deployment resource in the frontend.yaml file, you should see an environment variable named BACKEND_URL; its value is the simple-go application endpoint that the frontend application calls.

    log 11
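To illustrate the wiring described above, here is a trimmed sketch of how such an environment variable typically appears in a Deployment. The image reference and the endpoint value are illustrative; check the actual *.yaml files in the repository for the real names and endpoints:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: frontend:latest   # illustrative image reference
          env:
            # The frontend calls the simple-go service via this URL.
            - name: BACKEND_URL
              value: http://simple-go:8080
```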

Application Code

  • Go to the Git repository at https://gitlab.com/ocp-demo/backend_quarkus to see the backend application code.

    log 12
  • Open the BackendResource.java file in the /code/src/main/java/com/example/quarkus directory. Then search for logger.info - these lines of code print log messages to stdout (standard output).

    log 14
  • Go to the Git repository at https://github.com/rhthsa/simple-rest-go to see the simple-go application code.

    log 15
  • Open the main.go file and search for log.Printf and accessLogger.Printf - these calls print log messages to stdout.

    log 16
    log 17
  • Go to the Git repository at https://gitlab.com/ocp-demo/frontend-js to see the frontend application code.

    log 18
  • Open the server.js file, search for logger.info - these lines of code will print log messages to stdout.

    log 19
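All three applications follow the same pattern: write log lines to stdout and let the platform's collector pick them up per container. The sketch below shows that pattern in Go, in the spirit of simple-go's accessLogger.Printf calls; it is a hypothetical illustration, not the app's actual code:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// formatAccessLog builds an access-log line similar in spirit to the
// accessLogger.Printf calls in simple-go (illustrative format only).
func formatAccessLog(method, path string, status int) string {
	return fmt.Sprintf("access: method=%s path=%s status=%d", method, path, status)
}

func main() {
	// Writing to stdout is all a containerized app needs to do:
	// OpenShift's log collector tails each container's stdout/stderr,
	// so no log files or agents are required inside the container.
	logger := log.New(os.Stdout, "", log.LstdFlags)
	logger.Println(formatAccessLog("GET", "/api/greeting", 200))
}
```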

Deploy Sample Applications

Next, we’re going to deploy the 3 applications whose code and deployment configurations we’ve just reviewed. We’ll deploy them using the deployment configurations defined as YAML in the *.yaml files we’ve seen earlier.

  • In the OpenShift console, switch to the userX-observe project that matches your username.

    log 4
  • Click the + button at the top right corner of the console, then select Import YAML.

    log 5
  • Copy all the YAML from the backend.yaml file in https://github.com/rhthsa/developer-advocacy-2025/tree/main/config and paste it into the YAML editor in the OpenShift console.

    log 20
  • Click Create, and wait until all resources are created successfully.

    log 21
  • Select the Topology menu on the left. The backend Deployment should appear.

    log 22
  • Click the + button at the top right corner of the console, then select Import YAML. Copy all the YAML from the simple-go.yaml file in https://github.com/rhthsa/developer-advocacy-2025/tree/main/config and paste it into the YAML editor in the OpenShift console.

    log 23
  • Click Create, and wait until all resources are created successfully.

    log 24
  • Select the Topology menu on the left. The simple-go Deployment should appear.

    log 25
  • Click the + button at the top right corner of the console, then select Import YAML. Copy all the YAML from the frontend.yaml file in https://github.com/rhthsa/developer-advocacy-2025/tree/main/config and paste it into the YAML editor in the OpenShift console.

    log 26
  • Click Create, and wait until all resources are created successfully.

    log 27
  • Select the Topology menu on the left. The frontend Deployment should appear.

    log 28

View Individual Application Logs

  • Select the backend Deployment, go to the Resources tab on the right panel, then right-click the View logs link and select Open Link in New Tab to view the application logs in a separate tab.

    log 30
  • You should see the backend application logs in the Logs tab on the Pod Details page.

    log 31
  • Go back to the web browser tab showing the Topology view. Select the simple-go Deployment and repeat the same steps above to view its application logs.

    log 32
  • You should see the simple-go application logs in the Logs tab on the Pod Details page.

    log 33
  • Go back to the web browser tab showing the Topology view. Select the frontend Deployment and repeat the same steps to view its application logs.

    log 34
  • You should see the frontend application logs in the Logs tab on the Pod Details page.

    log 35
  • Go back to the web browser tab showing the Topology view. Click the arrow icon on the frontend Deployment to open the application URL.

    log 36
  • You should see the response from the frontend application; try refreshing the web browser 2-3 times.

    log 37
  • Review the frontend application logs in the web browser tab we opened earlier. You should see new log messages from the application.

    log 39

    If the logs don’t show up, check whether the log stream is paused. If so, click the play button.

    log 38
  • Review the simple-go application logs in the web browser tab we opened earlier. You should see new log messages from the application.

    log 40
  • Review the backend application logs in the web browser tab we opened earlier. You should see new log messages from the application.

    log 41
  • Restart the backend application: select the backend Deployment and go to the Details tab on the right panel. Then click the v button to scale the Pod down to 0 and wait until it finishes scaling down.

    log 42
    log 43
  • Click the ^ button to scale the Pod back up to 1.

    log 44
  • Go to the Resources tab and view the backend application logs again.

    log 45
  • You should see that all the previous logs are gone; there are only new logs from the Pod we’ve just scaled up.

    log 46

Query Application Logs using OpenShift Logging

In the previous section, you learned how to view application logs in the OpenShift web console. You also saw that when an application container/Pod is restarted, all the previous logs disappear.

The question is: how can you view all the previous application logs for troubleshooting and diagnostics?

Luckily, OpenShift Logging provides a log management stack for the applications in your OpenShift cluster. You can store your applications’ logs in the cluster for a short-term period, e.g. 15 - 30 days, then ship older logs to an external log management stack, e.g. EFK/ELK, for long-term persistence.
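As a sketch, the short-term retention period can be declared on the LokiStack custom resource. The names here (the secret, storage class, and retention value) are illustrative, and the retention fields assume a Loki Operator version that supports them:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small
  storage:
    secret:
      name: logging-loki-s3   # object storage secret (illustrative name)
      type: s3
  storageClassName: gp3-csi   # illustrative storage class
  tenants:
    mode: openshift-logging
  limits:
    global:
      retention:
        days: 15   # keep logs in-cluster for 15 days
```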

OpenShift Logging comes with a web console plug-in that provides a UI to view and query the logs stored in the OpenShift cluster.

  • Go to the Observe menu and select the Logs tab to view logs from all applications in a particular project.

    log 47
  • Try filtering by Pods: select all backend Pods, then click the Run Query button.

    log 48
  • Click the Show Resources or Hide Resources link to show/hide each log’s metadata, e.g. the Namespace/Project, Pod, and Container from which the log was produced.

    log 49
  • Now let’s try querying logs with the LogQL language. First, click Clear all filters to clear all filters, then click Show Query to open the log query input.

    Copy this query and paste it into the input text box, then click the Run Query button. This query returns all logs from the backend container in the userX-observe namespace/project.

    Change userX-observe in the query to match your username.

    { log_type="application", kubernetes_namespace_name="userX-observe", kubernetes_container_name=~"backend" } | json
    log 50
  • Next, try another query to find all logs coming from the BackendResource.java class.

    Change userX-observe in the query to match your username.

    { log_type="application", kubernetes_namespace_name="userX-observe" } |= "BackendResource" | json
    log 51
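Beyond label selectors and line filters, LogQL can also filter on fields extracted by the json parser and run simple metric queries. The examples below are sketches: the level field assumes your application emits JSON logs containing it, and as before you should change userX-observe to match your username.

```
{ log_type="application", kubernetes_namespace_name="userX-observe" } | json | level="error"

sum(count_over_time({ log_type="application", kubernetes_namespace_name="userX-observe" }[5m]))
```

The first query keeps only log lines whose parsed level field equals error; the second counts how many application log lines the project produced over each 5-minute window, which is handy for spotting logging spikes.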

Summary

OpenShift’s logging capabilities offer significant benefits for containerized applications and overall platform management. These benefits include centralized log collection, enhanced security and compliance, and streamlined troubleshooting and debugging. OpenShift’s logging features streamline the development process by providing a consistent and standardized approach to logging.