apache nobody process and cpu high load
Hello, for the past week one of my servers has had a couple of httpd processes owned by "nobody" that are using a lot of CPU. If I restart Apache everything goes back to normal until the load starts climbing again.
Does anyone know how to identify who is using those processes?
Thanks
Hey there! All Apache processes will be owned by the user "nobody", so seeing that is normal. There isn't an easy way to identify which website is causing that behavior other than checking the logs from that time and seeing if anything matches up. If this is a shared hosting environment I would *strongly* recommend looking into CloudLinux, as that would allow you to set CPU and RAM limits for each account on the machine, and it will track accounts that reach their limits.
Thanks @cPRex, you're right, the processes with an owner were lsphp, not httpd. I already have CloudLinux, but it wasn't helpful for identifying the user behind those Apache processes. I was finally able to identify the customer who owns these Apache processes that run for days by using "lsof -p PID".
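For anyone else hunting the owner of a busy process, a sketch of this kind of inspection; `PID=$$` (the current shell) is a stand-in so the example runs anywhere — substitute the busy httpd/lsphp PID:

```shell
# Substitute the PID of the busy httpd/lsphp process; $$ is used here
# only so the sketch is runnable as-is.
PID=$$
readlink /proc/$PID/cwd                  # working directory, often the docroot
tr '\0' '\n' < /proc/$PID/cmdline; echo  # exact command line, NUL-separated in /proc
# lsof lists every file the process has open, which usually names the site:
#   lsof -p $PID
```

The `/proc` entries work on any Linux box even when lsof is not installed.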
I'm glad to hear you found a good resolution!
Hello! Although I now know whose process it is, I don't know why this happens. Is there a way to keep Apache processes from lasting forever? In this case I have three Apache processes from the same client that have been active for several days.
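Not a fix anyone in this thread confirmed, but Apache's worker/event MPMs can be told to recycle child processes after a set number of connections. A minimal sketch for the global configuration; the value 10000 is an arbitrary example, not a recommendation:

```apache
# Recycle each child process after it has handled this many connections.
# 0 (the default in many builds) means children are never recycled.
MaxConnectionsPerChild 10000
```

On Apache versions before 2.3.9 the same directive was called MaxRequestsPerChild. Note this bounds a child's lifetime in connections served, not wall-clock time.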
If you run "ps aux | grep httpd", does it show the process name as "/usr/sbin/httpd -k start"? If so, are you also running nginx?
Hello, yes, it shows as httpd -k start, and I'm not using nginx. I will move the customer to another server to see if it shows the same behavior. In this list the problem is PID 3387516:
[root@dallas ~]# ps aux | grep httpd
nobody    158119  0.6  0.3 4590624 125756 ?     Sl   06:56   0:55 /usr/sbin/httpd -k start
nobody    160954  0.5  0.3 4590624 124900 ?     Sl   07:00   0:47 /usr/sbin/httpd -k start
nobody    162132  0.5  0.3 4590624 125676 ?     Sl   07:00   0:51 /usr/sbin/httpd -k start
nobody    162470  0.6  0.3 4590624 127572 ?     Sl   07:00   0:54 /usr/sbin/httpd -k start
nobody    189656  0.6  0.3 4590624 124772 ?     Sl   07:30   0:42 /usr/sbin/httpd -k start
nobody    190290  0.5  0.3 4590624 122876 ?     Sl   07:30   0:40 /usr/sbin/httpd -k start
nobody    258331  0.7  0.3 4590624 120360 ?     Sl   08:30   0:23 /usr/sbin/httpd -k start
root      329353  0.0  0.0  112812    968 pts/1 S+   09:24   0:00 grep --color=auto httpd
root     3385237  0.0  0.1  375976  64192 ?     Ss   Feb06   0:09 /usr/sbin/httpd -k start
nobody   3387516 84.8  0.4 4590612 151056 ?     Sl   Feb06 860:26 /usr/sbin/httpd -k start
nobody   4085450  0.0  0.1  374720  48116 ?     S    02:06   0:18 /usr/sbin/httpd -k start
nobody   4085455  0.0  0.1  375976  48148 ?     S    02:06   0:01 /usr/sbin/httpd -k start
[root@dallas ~]#
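As a side note, sorting by CPU makes an outlier like that jump out without scanning the whole grep output. A sketch using standard procps options, nothing cPanel-specific:

```shell
# Top httpd processes by CPU, with their age (etime) alongside;
# --sort=-pcpu puts the heaviest consumer first. Keeps the header row.
ps -eo pid,user,etime,pcpu,rss,args --sort=-pcpu | awk 'NR==1 || /httpd/' | head -n 10
```

The etime column shows at a glance which high-CPU workers have also been alive for days.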
If the issue is isolated to just one user, this is an excellent use case for CloudLinux, as that is specifically designed to keep a single user from overloading an entire system.
I have CloudLinux on my servers. Do you think I should open a ticket with them about this? Thanks
Do you have CageFS installed?
Yes, I have it installed and working. After the latest changes I made in "Global Apache Configuration" I have not had any httpd process running for hours. Maybe it's fixed? I reset all values to their defaults. I had the "Symlink Protection" option enabled; after reading the warning about that option I suspect it may have had something to do with the problem. I'll check tomorrow whether it happens again, or close the issue.
Sadly, the problem is back.
I don't think Apache processes being old is an issue. My personal server, with very little traffic, has several:
# ps aux | grep http
root    1314  0.0  0.4 225216 9612 ?  Ss  16:10  0:00 /usr/sbin/httpd -k start
nobody  2497  0.0  0.2 226436 5964 ?  S   16:18  0:00 /usr/sbin/httpd -k start
nobody  2498  0.0  0.2 226436 5964 ?  S   16:18  0:00 /usr/sbin/httpd -k start
nobody  2499  0.0  0.2 226436 5968 ?  S   16:18  0:00 /usr/sbin/httpd -k start
nobody  2500  0.0  0.2 226436 5964 ?  S   16:18  0:00 /usr/sbin/httpd -k start
nobody  2501  0.0  0.2 226436 5964 ?  S   16:18  0:00 /usr/sbin/httpd -k start
Those are all at least three hours old. If your processes are causing load on the server, that's something else, but it's not due to their age.
In my case, the processes that have been running a long time are also using 100% CPU. When I kill them, the load goes back to normal.
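Before killing a spinning worker, it can be worth a quick look at what it is doing. A sketch; `PID=$$` is a stand-in so it runs anywhere — point it at the stuck httpd PID (and strace needs root):

```shell
PID=$$   # stand-in; substitute the stuck httpd PID
# Field 3 of /proc/PID/stat is the process state: R = running on a CPU,
# S = sleeping. A process pinned at 100% CPU will usually show R.
awk '{print "state:", $3}' /proc/$PID/stat
# A 5-second syscall summary often shows whether it is looping in
# userspace (few syscalls) or thrashing a descriptor (many):
#   timeout 5 strace -c -p $PID
```

Capturing this before restarting Apache gives support something concrete to look at.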
It's time to open a ticket with our team then so we can take a look.
Done #94529836
Thanks - I'm following along on my end now.
Our team was able to confirm that the server isn't experiencing high load, as the load rarely gets above 4 and you have an 8-core CPU. We also determined you are running the Apache worker MPM, which is why you are seeing those processes linger.
Hello! The server really is experiencing high load once there are more than 6 stuck httpd processes, which happens within a couple of days. This is how it looked just before I restarted httpd (load graph attached). CloudLinux support suggested it may be a bug in Apache 2.4.55-1. I don't know what to do; can I safely downgrade Apache? Thanks
You'd have to monitor the system and see exactly what those processes are and how much traffic is happening then. With just the graph, we'd only be guessing at what could be happening on the machine.
Yes, I'm monitoring everything. I'll update the ticket when there are enough processes stuck in "Gracefully finishing" so support can see it again. My guess is that they didn't check anything because they didn't consider the load a concern, and that was only because I had recently restarted Apache.
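If mod_status is enabled, the "Gracefully finishing" workers can be counted straight from the scoreboard, which makes the ticket updates easy to quantify. A sketch on a sample scoreboard line; on a live server, replace the echo with `curl -s http://localhost/server-status?auto`:

```shell
# Each "G" in the mod_status scoreboard is a worker stuck in
# "Gracefully finishing". The sample stands in for real output.
sample='Scoreboard: __GG_W.G'
echo "$sample" | awk -F': ' '/^Scoreboard/ {print gsub(/G/, "", $2)}'   # prints 3
```

awk's gsub() returns the number of substitutions it made, so deleting every "G" from the scoreboard field doubles as counting them.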
Hello! I can confirm that disabling mod_http2 "fixed" the problem. In the last 24 hours we have had the best load in weeks and no stuck "G" processes without restarting Apache. I don't know why this doesn't happen on my other servers with the module enabled, but I hope this helps someone else with the same problem.
Confirming this issue also affected my server, running CentOS 7: a recurring issue of 5 or 6 "nobody" processes taking up over 100% CPU each. Kill them, restart Apache, and they just reappeared. Examining what the processes were doing just showed usual low-level website activity, serving pages, files, etc. Removed mod_http2 from Apache via EA4 and problem solved. Thanks to benito for posting this.
I'm glad I could help. In my case I have removed HTTP/2 even on servers that did not have this specific problem of hanging processes. The truth is that on shared servers HTTP/2 consumes a lot of resources, and I have not had any complaints from clients about not having it.
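For anyone who would rather switch HTTP/2 off than uninstall the module, Apache's Protocols directive (2.4.17+) controls what gets negotiated. A minimal sketch for the global configuration or an include file:

```apache
# Advertise only HTTP/1.1; with mod_http2 still loaded, the h2/h2c
# protocols are simply no longer offered to clients.
Protocols http/1.1
```

This is reversible by restoring "Protocols h2 http/1.1", whereas removing the package (as above) achieves the same end less flexibly.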
I am having the same issue with load, but half of it is coming from MySQL and the other half from Apache. I am going to try this fix. However, I was wondering how important mod_http2 is? Obviously I want to run the sites on HTTP/2 or 3, so are there any negative effects of uninstalling this module? I am using nginx as a front-end proxy and also Cloudflare, if that matters.
Yes, with the module removed you will lose HTTP/2 support on the server side. I can't say for sure how Cloudflare would react to having it enabled on their end but not on the server.