
Key metrics for monitoring Tomcat

What's this blog post about?

In order to effectively monitor your Apache Tomcat server, you should focus on tracking metrics related to request throughput, error rates, JVM memory usage, and thread pool utilization. These metrics provide valuable insight into the performance of the applications running on Tomcat and help you identify potential issues with the server or its underlying infrastructure. In Part 1 of this series, we covered which key metrics you should monitor for Apache Tomcat servers. In this second part, we'll walk through how to configure Tomcat and enable JMX remote access for metric collection using tools such as JavaMelody and JConsole, and demonstrate how to use these monitoring tools to visualize and analyze your collected metric data.

Enabling JMX remote access

Before you can start collecting metrics from your Apache Tomcat server, you need to enable JMX remote access. This allows other applications (such as JavaMelody or JConsole) to connect remotely over a network connection and query the MBeans registered on the JMX MBean server. To configure JMX remote access for Tomcat, follow these steps:

1. Open your Apache Tomcat installation directory.
2. Navigate to the `conf` subdirectory.
3. Edit the `server.xml` configuration file using a text editor of your choice (e.g., Notepad++).
4. Locate the following comment block within the `<Server>` element:

```xml
<!--Uncomment this to disable session persistence across Tomcat restarts-->
<!--<Engine defaultHost="localhost" name="Catalina">-->
<!--<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>-->
<!--</Engine>-->
```

5. Uncomment the `<Engine>`, `<Cluster>`, and closing `</Engine>` elements as shown below:

```xml
<Engine defaultHost="localhost" name="Catalina">
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
</Engine>
```

6. Update the `<Server>` element as follows to enable JMX remote access:

```xml
<Server port="${jpda.port:8000}" shutdown="SHUTDOWN">
  <JmxRemoteLifecycleListener />
</Server>
```

7. Save and close the `server.xml` file.

JMX remote access is disabled by default for security reasons. Enabling it allows other applications to connect over the network and query the MBeans registered on the JMX MBean server. In this example configuration, the default port for JMX remote connections is set to 8000 (you can change this value as needed).

You should also ensure that your Apache Tomcat server is configured with an appropriate username and password for accessing JMX remotely. To do this, follow these steps:

1. Open your Apache Tomcat installation directory.
2. Navigate to the `conf` subdirectory.
3. Edit the `tomcat-users.xml` configuration file using a text editor of your choice (e.g., Notepad++).
4. Add the following `<user>` element within the existing `<tomcat-users>` element:

```xml
<user username="admin" password="secret" roles="manager-gui,manager-jmx"/>
```

5. Save and close the `tomcat-users.xml` file.

In this example configuration, we create a new user named "admin" with the password "secret". This user is granted two management roles: manager-gui (which allows access to Tomcat's web management interface) and manager-jmx (which allows access to the manager application's JMX proxy interface). You can adjust these role assignments as needed based on your specific security requirements.
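To confirm that the MBean server is reachable before wiring up a monitoring tool, you can connect to it with a small JMX client. The sketch below is illustrative only: the hostname `tomcat-host`, the port 8000, and the `TomcatJmxConnect` class name are hypothetical placeholders, and the credentials map assumes your JMX connector enforces the username and password configured above (omit it if remote JMX is unauthenticated).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TomcatJmxConnect {
    public static void main(String[] args) throws Exception {
        // Hypothetical host and port -- replace with your Tomcat server and JMX remote port.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://tomcat-host:8000/jmxrmi");

        // Credentials are only needed if JMX authentication is enabled.
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] {"admin", "secret"});

        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // List every Catalina MBean to confirm Tomcat's metrics are reachable.
            Set<ObjectName> names = mbsc.queryNames(new ObjectName("Catalina:*"), null);
            names.forEach(System.out::println);
        }
    }
}
```

Listing the `Catalina:*` MBeans is a quick way to verify that the ThreadPool, Executor, and GlobalRequestProcessor MBeans referenced later in this post are available to your monitoring tools.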
Configuring JavaMelody for Apache Tomcat monitoring

JavaMelody is an open source Java application performance monitoring tool that provides real-time visibility into the behavior of applications running on a server such as Apache Tomcat. It includes built-in support for tracking metrics related to request throughput, error rates, JVM memory usage, and thread pool utilization. To configure JavaMelody for monitoring your Apache Tomcat server, follow these steps:

1. Download the latest version of JavaMelody from its official website (https://javamelody.org/).
2. Extract the downloaded ZIP archive to a temporary directory on your local machine.
3. Copy the `java-melody.jar` file located in the extracted `lib/` subdirectory into your Apache Tomcat installation's `lib` subdirectory (e.g., `TOMCAT_HOME\lib\`).
4. Add the following `<context-param>` element to your Apache Tomcat server's web application deployment descriptor (`web.xml`), located in the `conf` subdirectory:

```xml
<context-param>
  <param-name>readonly</param-name>
  <param-value>true</param-value>
</context-param>
```

5. Save and close the `web.xml` file. By default, JavaMelody runs in read-only mode to prevent unauthorized modifications to your application's codebase or configuration files. In this example configuration, we set the value of the `readonly` parameter to "true" (you can change this value as needed).
6. Restart your Apache Tomcat server for these changes to take effect.

Once you have completed these steps, you should be able to access JavaMelody's monitoring interface by navigating to the following URL in a web browser: `http://<TOMCAT_SERVER_IP>:8080/monitoring`. You will need to authenticate using the username and password combination that you specified earlier when configuring JMX remote access for your Apache Tomcat server.

Using JavaMelody to monitor Apache Tomcat metrics

After configuring JavaMelody to monitor your Apache Tomcat server, you can use its built-in features to visualize and analyze the collected metric data. The main dashboard provides an overview of key performance indicators (KPIs) such as request throughput, error rates, JVM memory usage, and thread pool utilization. In the example image above, we are graphing response times for each log status, and then viewing the individual logs with a Warn status. In Part 2, we'll look at how to use a monitoring tool to collect and query this metric data from both JMX and access logs.

Metric to alert on: maxTime

Max processing time indicates the maximum amount of time it takes for the server to process one request (from the time an available thread starts processing the request to the time it returns a response). Its value updates whenever the server detects a longer request processing time than the current maxTime. This metric doesn't include detailed information about a request, its status, or URL path, so to get a better understanding of the max processing time for individual requests and specific types of requests, you will need to analyze your access logs. A spike in processing time for a single request could indicate that a JSP page isn't loading or an associated process (such as a database query) is taking too long to complete. Since some of these issues could be caused by operations outside of Tomcat, it's important to monitor the Tomcat server alongside all of the other services that make up your infrastructure. This helps to ensure you don't overlook other operations or processes that are also critical for running your application.
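If you want to check this value outside of a dashboard, maxTime is exposed as an attribute of the connector's GlobalRequestProcessor MBean. Here is a minimal sketch of reading it over JMX; the JMX endpoint, the connector name "http-nio-8080", and the `MaxTimeCheck` class name are assumptions to adapt to your own configuration.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MaxTimeCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint; adjust host/port to match your setup.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://tomcat-host:8000/jmxrmi");

        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // The connector name depends on your server.xml configuration.
            ObjectName grp = new ObjectName(
                    "Catalina:type=GlobalRequestProcessor,name=\"http-nio-8080\"");

            long maxTime = ((Number) mbsc.getAttribute(grp, "maxTime")).longValue();
            long requestCount = ((Number) mbsc.getAttribute(grp, "requestCount")).longValue();

            System.out.printf("maxTime=%d ms over %d requests%n", maxTime, requestCount);
        }
    }
}
```

Because maxTime only updates when a slower request is observed, pair it with your access logs to identify which specific requests drove a spike.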
Thread pool metrics

Throughput metrics help you gauge how well your server is handling traffic. Because each request relies on a thread for processing, monitoring Tomcat's thread resources is also important. Threads determine the maximum number of requests the Tomcat server can handle simultaneously. Since the number of available threads directly affects how efficiently Tomcat can process requests, monitoring thread usage is important for understanding request throughput and processing times for the server.

Tomcat manages the workload for processing requests with worker threads and tracks thread usage for each connector under either the ThreadPool MBean type, or the Executor MBean type if you are using an executor. The Executor MBean type represents the thread pool created for use across multiple connectors, while the ThreadPool MBean type shows metrics for each individual Tomcat connector's thread pool. Depending on how you are prioritizing server loads for your applications, you may be managing a single connector, multiple connectors, or using an executor thread pool to manage another group of connectors within the same server. It's important to note that Tomcat maps ThreadPool's currentThreadsBusy to the Executor's activeCount and maxThreads to maximumPoolSize, meaning that the metrics to watch remain the same between the two.

ThreadPool

| JMX Attribute | Description | MBean | Metric Type |
| --- | --- | --- | --- |
| currentThreadsBusy | The number of threads currently processing requests | Catalina:type=ThreadPool,name="http-nio-8080" | Resource: Utilization |
| maxThreads | The maximum number of threads to be created by the connector and made available for requests | Catalina:type=ThreadPool,name="http-nio-8080" | Resource: Utilization |

Executor

| JMX Attribute | Description | MBean | Metric Type |
| --- | --- | --- | --- |
| activeCount | The number of active threads in the thread pool | Catalina:type=Executor,name="http-nio-8080" | Resource: Utilization |
| maximumPoolSize | The maximum number of threads available in the thread pool | Catalina:type=Executor,name="http-nio-8080" | Resource: Utilization |

Metric to watch: currentThreadsBusy/activeCount

The currentThreadsBusy (ThreadPool) and activeCount (Executor) metrics tell you how many threads from the connector's pool are currently processing requests. As your server receives requests, Tomcat will launch more worker threads if there are not enough existing threads to cover the workload, until it reaches the maximum number of threads you set for the pool. This maximum is represented by maxThreads for a connector's thread pool and maximumPoolSize for an executor. Any subsequent requests are placed in a queue until a thread becomes available, and if the queue becomes full, the server will refuse new requests until threads become available. It's important to watch the number of busy threads to ensure it doesn't reach the value set for maxThreads; if it consistently hits this cap, you may need to adjust the maximum number of threads allotted to the connector.

With a monitoring tool, you can calculate the number of idle threads by comparing the current thread count to the number of busy threads. The balance of idle vs. busy threads provides a good measure for fine-tuning your server. If your server has too many idle threads, it may not be managing the thread pool efficiently. If this is the case, you can lower the minSpareThreads value for your connector, which sets the minimum number of threads that should always be available in a pool (active or idle). Adjusting this value based on your application's traffic will ensure there is an appropriate balance between busy and idle threads.
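To see how these attributes fit together, the following sketch reads the busy, current, and maximum thread counts for a single connector over JMX and derives the idle-thread count and pool utilization. The JMX endpoint, connector name, and `ThreadPoolCheck` class name are hypothetical placeholders.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ThreadPoolCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint and connector name -- adjust for your deployment.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://tomcat-host:8000/jmxrmi");

        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName pool = new ObjectName(
                    "Catalina:type=ThreadPool,name=\"http-nio-8080\"");

            int busy = ((Number) mbsc.getAttribute(pool, "currentThreadsBusy")).intValue();
            int current = ((Number) mbsc.getAttribute(pool, "currentThreadCount")).intValue();
            int max = ((Number) mbsc.getAttribute(pool, "maxThreads")).intValue();

            // Idle threads exist in the pool but are not currently processing a request.
            int idle = current - busy;
            double utilization = max > 0 ? (double) busy / max : 0.0;

            System.out.printf("busy=%d idle=%d max=%d utilization=%.0f%%%n",
                    busy, idle, max, utilization * 100);
        }
    }
}
```

A utilization value that repeatedly approaches 100 percent is the signal described above that the pool may need a higher maxThreads setting, or that requests are piling up in the queue.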
Fine-tuning Tomcat thread usage

Not having the appropriate number of threads is one of the more common causes of Tomcat server issues, and adjusting thread usage is an easy way to address the problem. You can fine-tune thread usage by adjusting three key parameters for a connector's thread pool based on anticipated web traffic: maxThreads, minSpareThreads, and acceptCount. If you are using an executor, you can instead adjust maxThreads, minSpareThreads, and maxQueueSize. Below is an example of the server's configuration for a single connector:

```xml
<Connector port="8443"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="<DESIRED_MAX_THREADS>"
           acceptCount="<DESIRED_ACCEPT_COUNT>"
           minSpareThreads="<DESIRED_MIN_SPARETHREADS>">
</Connector>
```

Or for an executor:

```xml
<Executor name="tomcatThreadPool"
          namePrefix="catalina-exec-"
          maxThreads="<DESIRED_MAX_THREADS>"
          minSpareThreads="<DESIRED_MIN_SPARETHREADS>"
          maxQueueSize="<DESIRED_QUEUE_SIZE>"/>

<Connector executor="tomcatThreadPool"
           port="8080"
           protocol="HTTP/1.1"
           connectionTimeout="20000">
</Connector>

<Connector executor="tomcatThreadPool"
           port="8091"
           protocol="HTTP/1.1"
           connectionTimeout="20000">
</Connector>
```

If these parameters are set too low, the server will not have enough threads to manage the number of incoming requests, which could lead to longer queues and increased request latency. An influx of requests that the server can't adequately process can max out both the worker threads and the request queue, causing queued requests to time out if they wait longer than the value set for the server's connectionTimeout. Setting maxThreads or minSpareThreads too high, on the other hand, increases your server's startup time, and running a larger number of threads consumes more server resources.

If processing time increases as traffic to your server increases, you can start addressing the issue by increasing the number of maxThreads available for a connector, which increases the number of worker threads available to process requests. If you still notice slow request processing times after increasing the maxThreads parameter, your server's hardware may not be equipped to manage the growing number of worker threads processing incoming requests. In this case, you may need to increase server memory or CPU.

While monitoring thread usage, it's important to also keep track of errors that could indicate that your server is misconfigured or overloaded. For example, Tomcat will throw a RejectedExecutionException if the executor queue is full and can't accept another incoming request. You will see an entry in Tomcat's server log (`/logs/Catalina.XXXX-XX-XX.log`) similar to the following example:

```
WARNING: Socket processing request was rejected for: <socket handle>
java.util.concurrent.RejectedExecutionException: Work queue full.
    at org.apache.catalina.core.StandardThreadExecutor.execute
    at org.apache.tomcat.util.net.(Apr/Nio)Endpoint.processSocketWithOptions
    at org.apache.tomcat.util.net.(Apr/Nio)Endpoint$Acceptor.run
    at java.lang.Thread.run
```
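If you want a quick way to spot these rejections without reading the log by hand, a small scan of the server log works. This is only a sketch: the log path (including the dated file name) is an assumption, and the strings it matches mirror the example entry above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class QueueFullLogCheck {
    public static void main(String[] args) throws IOException {
        // Hypothetical log path -- point this at your Tomcat instance's dated server log.
        Path log = Paths.get("/opt/tomcat/logs/catalina.2018-12-10.log");

        try (Stream<String> lines = Files.lines(log)) {
            // Count log entries that indicate the executor rejected work.
            long rejected = lines
                    .filter(line -> line.contains("RejectedExecutionException")
                            || line.contains("Work queue full"))
                    .count();

            System.out.println("Rejected work entries found: " + rejected);
            if (rejected > 0) {
                System.out.println("Consider raising maxThreads/maxQueueSize or adding capacity.");
            }
        }
    }
}
```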
Errors

Errors indicate an issue with the Tomcat server itself, a host, a deployed application, or an application servlet. This includes errors generated when the Tomcat server runs out of memory, can't find a requested file or servlet, or is unable to serve a JSP due to syntax errors in the servlet codebase.

| JMX Attribute/Log Metric | Description | MBean/Log Pattern | Metric Type | Availability |
| --- | --- | --- | --- | --- |
| errorCount | The number of errors generated by server requests | Catalina:type=GlobalRequestProcessor,name="http-bio-8888" | Work: Error | JMX |
| OutOfMemoryError | Indicates the JVM has run out of memory | N/A | Work: Error | Tomcat logs |
| Server-side errors (5xx) | Indicates the server is not able to process a request | %s | Work: Error | Access logs |
| Client-side errors (4xx) | Indicates an issue with the client's request | %s | Work: Error | Access logs |

While the errorCount metric alone doesn't provide any insight into the types of errors Tomcat is generating, it can provide a high-level view of potential issues that warrant investigation. You'll need to supplement this metric with additional information from your access logs to get a clearer picture of the types of errors your users are encountering when interacting with the applications running on Tomcat.
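To supplement errorCount with status-code detail, you can tally client- and server-side errors directly from an access log. The sketch below assumes a common/combined access log format and a hypothetical log path; adjust the pattern if you have customized Tomcat's access log valve.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class AccessLogErrorCount {
    // Matches the status code that follows the quoted request line in a
    // common/combined-format access log entry, e.g. "GET / HTTP/1.1" 404 1024
    private static final Pattern STATUS = Pattern.compile("\"\\s+(\\d{3})\\s+");

    public static void main(String[] args) throws IOException {
        // Hypothetical path -- adjust to your access log location and naming pattern.
        Path log = Paths.get("/opt/tomcat/logs/localhost_access_log.2018-12-10.txt");

        long clientErrors = 0;
        long serverErrors = 0;

        try (Stream<String> lines = Files.lines(log)) {
            for (String line : (Iterable<String>) lines::iterator) {
                Matcher m = STATUS.matcher(line);
                if (!m.find()) {
                    continue; // skip lines that don't look like access log entries
                }
                int status = Integer.parseInt(m.group(1));
                if (status >= 500) {
                    serverErrors++;
                } else if (status >= 400) {
                    clientErrors++;
                }
            }
        }

        System.out.printf("4xx=%d 5xx=%d%n", clientErrors, serverErrors);
    }
}
```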
JVM memory usage metrics

In order to effectively monitor your Apache Tomcat server, you should also track metrics related to JVM memory usage. These metrics provide valuable insight into how efficiently your applications are utilizing available system resources (such as CPU and RAM), and help identify potential issues with the server or its underlying infrastructure. To configure JavaMelody for monitoring JVM memory usage metrics, follow these steps:

1. Open your Apache Tomcat installation directory.
2. Navigate to the `conf` subdirectory.
3. Edit the `context.xml` configuration file using a text editor of your choice (e.g., Notepad++).
4. Add the following `<Parameter>` element within the existing `<Context>` element:

```xml
<Parameter name="jmxEnabled" value="true"/>
```

5. Save and close the `context.xml` file. This enables JMX so that monitoring tools such as JavaMelody and JConsole can query the MBeans registered on the JMX MBean server. In this example configuration, we set the value of the `jmxEnabled` parameter to "true" (you can change this value as needed).
6. Restart your Apache Tomcat server for these changes to take effect.

Once you have completed these steps, you should be able to access JavaMelody's monitoring interface by navigating to the following URL in a web browser: `http://<TOMCAT_SERVER_IP>:8080/monitoring`. You will need to authenticate using the username and password combination that you specified earlier when configuring JMX remote access for your Apache Tomcat server.
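The JVM memory data that JavaMelody graphs is also exposed directly by the JVM's own platform MBeans, so you can sample it over the same JMX connection configured earlier if you prefer. Below is a minimal sketch, reusing the hypothetical `tomcat-host:8000` endpoint and a hypothetical `HeapUsageCheck` class name.

```java
import java.lang.management.MemoryUsage;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HeapUsageCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint -- reuse the host/port configured for JMX remote access.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://tomcat-host:8000/jmxrmi");

        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // HeapMemoryUsage is exposed by the JVM itself, independent of Tomcat or JavaMelody.
            CompositeData heap = (CompositeData) mbsc.getAttribute(
                    new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
            MemoryUsage usage = MemoryUsage.from(heap);

            System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
                    usage.getUsed() / (1024 * 1024),
                    usage.getCommitted() / (1024 * 1024),
                    usage.getMax() / (1024 * 1024));
        }
    }
}
```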
Using JavaMelody to monitor JVM memory usage metrics

After configuring JavaMelody to monitor JVM memory usage metrics, you can use its built-in features to visualize and analyze the collected metric data. The main dashboard provides an overview of key performance indicators (KPIs) such as request throughput, error rates, JVM memory usage, and thread pool utilization. In the example image above, we are graphing JVM memory usage metrics for our Apache Tomcat server instance running on a local development machine. In Part 2, we'll look at how to use a monitoring tool to collect and query this metric data from both JMX and access logs.

Company
Datadog

Date published
Dec. 10, 2018

Author(s)
Mallory Mooney

Word count
5553

Language
English

Hacker News points
None found.


By Matt Makai. 2021-2024.