CloudLinux Speed Limit
I've been doing some testing with CloudLinux and the LVE limits and I'm not getting the results/protection I was expecting.
I'm testing on a 2 vCPU machine with 4GB memory.
I've got Grafana hooked up so I can see the load over time.
The 1m server load is typically sitting around 0.8 at idle.
I'm then using Apache ab (ApacheBench) to send 1,000 requests, 5 at a time, to a specific website which has an LVE speed limit of 50%.
If I understand things correctly, this should allow the website to use up to 50% of a single vCPU, which should take the 1m load to a little under 1.5 or so (roughly the 0.8 idle load plus 0.5 for the capped site).
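For reference, the test is essentially the following from the shell (the URL below is a placeholder, not the actual site):

    # 1,000 requests total, 5 concurrent, against the LVE-limited site
    ab -n 1000 -c 5 https://example.com/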
However, what I'm seeing consistently is that the load keeps climbing, going as high as 3.5 until the requests stop, i.e. the server is being overloaded.
In the LVE current usage tab it shows the speed at 50% for that user along with very reasonable memory usage, IOPS, EP, etc.
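In case it's useful, the same figures can be cross-checked from the shell, assuming the standard CloudLinux CLI tools are installed:

    # Show the configured LVE limits (SPEED, memory, EP, etc.) for all accounts
    lvectl list

    # Live per-LVE usage, similar to top, to watch the cap while ab is running
    lveps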
For what it's worth, this is running on PHP 8.1 and has php-fpm enabled.
Thanks
-
To test this a little further, I've tried this against another site which is just a simple PHP file without any DB connections, etc. This handles the requests significantly faster; however, if I throw 10,000 requests at it, the 1m load jumps right up to 7 or 8. This makes me think that LVE isn't doing its job. It feels like it would be incredibly easy for someone to pull up the Apache ab tool and pretty much take down my server. Is something wrong here, or is it just that there's quite a bit of overhead in handling these requests (even if they're being rejected), so a 2 vCPU system just can't handle it regardless?
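For anyone wanting to reproduce the second test, it was essentially the following, sampling the load while ab runs (the URL is a placeholder, and the concurrency is assumed to be the same 5 as before):

    # 10,000 requests, 5 concurrent, against the plain PHP page
    ab -n 10000 -c 5 https://example.com/simple.php &
    AB_PID=$!

    # Print /proc/loadavg (1m/5m/15m load averages first) every 5 seconds until ab finishes
    while kill -0 "$AB_PID" 2>/dev/null; do
        cat /proc/loadavg
        sleep 5
    done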
-
That's a VPS, right? Are you the owner or in control of the real machine behind it? I would do this kind of test only on a dedicated machine, so there are no other VMs sharing the resources.
-
I also think a dedicated machine would give more accurate results, but I wouldn't necessarily expect a direct correlation between the speed limit and server load. It would be best to reach out to CloudLinux directly about this issue, as they'd be able to give you more specifics on how all of that gets calculated behind the scenes, if that's something you're interested in.
-
I haven't tested CloudLinux in a while, but when I did, I got a little agitated with it. Apparently, if you set your limits too low, it's counterproductive: the system works harder at keeping accounts under those limits, to the point that it harms overall server performance. I thought the purpose of having such limit controls was so that cheap hosting accounts got fewer resources (lower limits) than higher-priced hosting accounts. What I found was that you had to set the floor of the limits (the lowest values) so high that there was basically no incentive to sell any higher-priced / higher-resourced accounts. You could try to sell higher-priced / higher-limit accounts, and maybe other people are better at marketing that (in fact I'm sure they are), but the lowest limit settings worked for at least 90% of all accounts, if not 99%. And if every account on the server has the same limits, then what's the point of controlling those limits? So we ended up cutting the CloudLinux licenses; they weren't doing anything.

I don't know if that is specifically what the OP is running into. I can't remember where I was setting the limits, but for some reason, being unable to set the Speed Limit to under 100% seems to ring a bell.
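In case anyone wants to experiment, per-account speed limits can be adjusted from the CLI along these lines; the username and value are placeholders, assuming the standard lvectl tool is available:

    # Cap an account at 50% of one core, then review the resulting limits
    lvectl set-user exampleuser --speed=50%
    lvectl list-user exampleuser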
-
This is running on a DigitalOcean droplet. I've now upgraded it so it's not using any shared CPUs, i.e. it has dedicated CPUs, and I've increased the spec. I'm now seeing somewhat more predictable results; however, it still seems that I'm able to push the server load significantly higher than I would like just by sending a bunch of requests. I've migrated a bunch of sites over to it, so I'll keep an eye on how it performs in the real world.