Providing VPS to customers
Hello,
I am entertaining the idea of providing VPSes to customers. Assuming I have all the hardware, redundancy, etc., I have some questions.
Let's say my server has two Xeon E5-2699 v4 22-core processors (44 cores, 88 threads total). And let's say each processor had 64GB of RAM (128GB total).
Generally, how many users could I provide VPSes for? If each user had 2 cores, for example, does that mean I could only have 22 users? Or, depending on the virtualization software I go for (KVM, for example), could I provide a lot more VPSes to a lot more users? Can they share the CPU cores, essentially? I have the same question for RAM. I'm sure not all VPSes would be using the maximum amount of RAM allocated to them. Is there some sort of formula to figure out how many users I could safely provide VPSes to?
I had the same question for fiber, but on another website, someone said with 100Mbps up, you could safely provide VPSes to 128 customers. To me, that'd imply 1,000Mbps up would mean 1,280 customers. This is the formula they give:
We can calculate the number of simultaneous hits (visitors) according to the link speed. If you want to dedicate a decent bandwidth to each visitor, for example 100KBytes/s per visitor, a 100Mbps link can handle 128 simultaneous connections (100Mbps / 8 = 12.5MBps; 12.5MBps * 1024 = 12800KBps; 12800KBps / 100KBps = 128).
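That arithmetic can be sketched as a small calculation. The 100 KB/s-per-visitor figure is just the assumption from the quote above, not a recommendation:

```python
# Rough capacity estimate from the quoted formula: how many simultaneous
# visitors a link can serve at a target per-visitor bandwidth.

def max_visitors(link_mbps: float, kbytes_per_visitor: float = 100) -> int:
    # Mbps -> MB/s (divide by 8), then MB/s -> KB/s (multiply by 1024),
    # then divide by the per-visitor allowance.
    link_kbytes_per_s = link_mbps / 8 * 1024
    return int(link_kbytes_per_s / kbytes_per_visitor)

print(max_visitors(100))   # 128, matching the quoted example
print(max_visitors(1000))  # 1280 on a 1Gbps uplink
```

Note this only bounds *simultaneous* full-rate transfers; real visitors are bursty, so the practical customer count can be much higher.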
Is that right though? I rent a VPS from Linode and I'm told I have:
4 GB RAM
2 CPU Cores
48 GB SSD Storage
3 TB Transfer
40Gbps Network In
1000Mbps Network Out
With 40Gbps in, I take it that means 40Gbps download (inbound to the VPS), and 1,000Mbps (or 1Gbps) upload (outbound). I don't see how they can charge only $20. Here, when I call about a dedicated fiber line, 1,000Mbps up and 1,000Mbps down is $2,500 a month. How can companies provide 40Gbps just to me? Or don't they?

I thought SSDs weren't a really good idea for servers. I went with enterprise-grade 10,000RPM 6Gbps SAS drives (but I'm going to upgrade to 12Gbps SAS drives). My controller supports 12Gbps SAS drives. It's the P440ar SAS controller. The documentation on it says:
Eight (8) SAS physical links equally distributed across 2 internal x4 Mini-SAS connectors
12Gb/s SAS (1200 MB/s theoretical bandwidth per physical lane) on ProLiant Gen9
x8 6Gb/s SAS physical links (compatible with 6Gb/s SATA)
2 GiBytes 72-bit wide DDR3-1866MHz flash backed write cache provides up to 12.8GB/s maximum cache bandwidth
PCI Express Gen3 x8 link width
Read ahead caching
Write-back caching
Maybe I'm misunderstanding this, but I think that means I can install eight 12Gb/s SAS drives and each drive will be able to operate at a sustained transfer speed of 12Gb/s. If I want more drives without losing that speed, I'll need to either upgrade the controller or purchase another controller to add alongside it. Does that sound right?

I'm just wondering why I see a lot of VPS companies saying they have SSD drives and not SAS. Are SSDs faster and more reliable? I was under the impression the 12Gbps SAS drives were faster than the SSDs out there. When I go to upgrade, I want to make sure I make the right decision and purchase the best drives I can get. Any help would be greatly appreciated. Anyone out there renting out VPSes who can share some of their knowledge? Thank you.
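One thing worth separating here is link speed versus drive speed: 12Gb/s is the SAS *interface* rate per lane, while a 10,000RPM spinning drive typically sustains only on the order of 200 MB/s off the platters, far below what the lane can carry. A back-of-envelope sanity check (the ~200 MB/s HDD and ~1000 MB/s SSD per-drive figures below are ballpark assumptions, not specs):

```python
# Back-of-envelope: where the bottleneck sits with 8 drives on a
# P440ar-class controller (8 SAS lanes at 12Gb/s, PCIe Gen3 x8 host link).

SAS_LANE_MBPS = 1200        # 12Gb/s SAS ~= 1200 MB/s per lane (per the spec sheet)
PCIE_GEN3_X8_MBPS = 7880    # ~985 MB/s per PCIe Gen3 lane * 8 lanes, approximate
HDD_SUSTAINED_MBPS = 200    # assumption: typical 10k RPM SAS HDD sequential rate

def aggregate_throughput(num_drives: int,
                         per_drive_mbps: int = HDD_SUSTAINED_MBPS) -> int:
    """Usable throughput is the minimum of drive, lane, and host-link limits."""
    lane_limit = min(num_drives, 8) * SAS_LANE_MBPS
    drive_limit = num_drives * per_drive_mbps
    return min(drive_limit, lane_limit, PCIE_GEN3_X8_MBPS)

# Eight 10k RPM HDDs: the drives themselves are the bottleneck (~1600 MB/s),
# nowhere near the ~9600 MB/s the eight SAS lanes could carry.
print(aggregate_throughput(8))        # 1600
# Eight fast SAS SSDs at ~1000 MB/s each would hit the PCIe host link first.
print(aggregate_throughput(8, 1000))  # 7880
```

So under these assumptions the controller is nowhere near being the limit for spinning SAS drives; per-drive media speed is, which is a big part of why SSDs win for VPS workloads (especially on random I/O, which this sequential sketch doesn't even capture).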
Hi, Generally, how many users could I provide VPSes for? If each user had 2 cores, for example, does that mean I could only have 22 users? Or, depending on the virtualization software I go for (KVM, for example), could I provide a lot more VPSes to a lot more users? Can they share the CPU cores, essentially? I have the same question for RAM. I'm sure not all VPSes would be using the maximum amount of RAM allocated to them. Is there some sort of formula to figure out how many users I could safely provide VPSes to? -> You can share the cores. A virtual core is created from a physical core and assigned to the VPS. Not so much on KVM, but on OpenVZ you can overcommit this if you want, because with OpenVZ the core given to a VPS is not fully dedicated to it and is only used when required, so the same core can be used by another VPS when the first one isn't using it.
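As a sketch of how that sharing works out numerically: hosts commonly allocate more virtual cores than physical threads and rely on guests being idle most of the time. The 3:1 ratio below is purely an illustrative assumption; safe ratios depend entirely on the workload mix:

```python
# Illustrative vCPU overcommit math for the 2x E5-2699 v4 box
# (44 physical cores, 88 hardware threads). The overcommit ratio is an
# assumption for illustration, not a recommendation.

PHYSICAL_THREADS = 88

def max_guests(vcpus_per_guest: int, overcommit_ratio: float = 3.0) -> int:
    return int(PHYSICAL_THREADS * overcommit_ratio // vcpus_per_guest)

print(max_guests(2, 1.0))  # 44  -- no sharing: one vCPU per hardware thread
print(max_guests(2, 3.0))  # 132 -- with 3:1 overcommit
```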
What virtualization software tends to be more popular for this type of thing? Do people like me tend to go with OpenVZ, or do they go for KVM? I see there are a bunch of choices, some even from Oracle, I believe. I'm guessing the RAM works the same way. Any rough idea how many customers we could safely provide VPSes to with 20 up / 60 down, 100 up / down, and 1000 up / down fiber? The 20 / 60 would be a shared fiber line. The other two would be dedicated, just for me. Thanks.
Hi, The network resources are shared among the VPSes, so whatever maximum you have on your main hardware server, you can give almost the same to each VPS too. Since they are shared, each VPS will use the resource when the others are not using it.
Thank you for the reply 24x7server, but what's a good formula to figure out how many customers I can offer services to with X amount of RAM? I don't want to "over-sell". Let's say I have 128GB of RAM installed, with a total of 44 cores. I offer each client 2 cores and 4GB of RAM, just as an example. How could I calculate a safe maximum number of clients? At a minimum, I'd be able to have 22 clients (2 cores each), but with virtualization software like KVM (or OpenVZ), I should be able to provide VPSes to more than 22 clients. But how do I figure out how many more (ignoring bandwidth for now)? Thank you.
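One common way to frame such a formula: the safe client count is the minimum of what CPU and RAM each allow, with a separate overcommit ratio per resource (RAM is usually overcommitted far more conservatively than CPU). All the ratios and the hypervisor reservation below are illustrative assumptions, not vendor guidance:

```python
# Hypothetical capacity formula: the host supports the minimum of what CPU
# and RAM each allow, after reserving some RAM for the hypervisor itself.

def safe_clients(total_cores: int = 44, total_ram_gb: int = 128,
                 cores_per_client: int = 2, ram_per_client_gb: int = 4,
                 cpu_overcommit: float = 3.0, ram_overcommit: float = 1.2,
                 host_reserved_gb: int = 8) -> int:
    cpu_limit = int(total_cores * cpu_overcommit // cores_per_client)
    ram_limit = int((total_ram_gb - host_reserved_gb) * ram_overcommit
                    // ram_per_client_gb)
    return min(cpu_limit, ram_limit)

# With these assumed ratios, RAM (36 clients) binds before CPU (66 clients).
print(safe_clients())  # 36
```

The useful takeaway is the shape of the formula, min() across resources, rather than the specific ratios, which you'd tune by monitoring real utilization.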
Hello, You may also want to consider consulting with the sales teams for each of the virtualization software products you are considering, as they might be able to provide more technical detail about how many VPS accounts you could host. From the cPanel perspective, you may find this document helpful: Installation Guide - System Requirements - Documentation - cPanel Documentation. It's a list of all the virtual environments we support. Additionally, check out the "Hardware Requirements" section of that same document if you'd like to see the minimum RAM/processor requirements for a cPanel installation. Thank you.
That link is wonderful! There are a few more questions I've got, but I don't know where to ask them; that's why I came here. I have four Ethernet ports on the server. There's also an iLO port, but that won't be used for customers. How do people normally set up a server like this? Let's say, to start with, I have 20 public IPv4 addresses and I'm looking at using the server to set up 20 separate VPSes for customers. Do I just assign 5 public IP addresses to each interface?

I have a Cat 6 48-port managed switch that supports layer 3 routing. I haven't bought a router yet and I don't have the fiber uplink yet. I have the switch, the rack, the patch panel, and a few other odds and ends, but I still need to purchase more equipment and set up some redundancy. Do I just configure the router that I buy (probably some sort of rack-mountable Cisco router with a fiber transceiver) to tell it which interface the various IP addresses I'm assigned are located on, and then just let the switch route the traffic to one of the NICs? I have the switch configured with two VLANs, one for business, one for residential. They both have private IP addresses, so I'll have to do something with NAT, or maybe assign a public IP address to the switch, I dunno. It's been a while since I played with enterprise-grade hardware and it's all a bit fuzzy. Thanks for all the help, guys.
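On the IP question: the usual pattern isn't to split public addresses across physical NICs, but to bridge (or bond) the NICs and route the whole public block to the guests over that bridge, one address per VPS. As a small sketch of planning that allocation (the subnet below is a documentation range and the naming is purely hypothetical):

```python
# Hypothetical allocation of 20 public IPv4 addresses, one per guest,
# from a single routed block. 198.51.100.0/27 is a documentation prefix
# standing in for whatever block the ISP actually assigns.
import ipaddress

block = ipaddress.ip_network("198.51.100.0/27")  # /27 = 30 usable hosts
guests = [f"vps{n:02d}" for n in range(1, 21)]

# Skip the first usable address, assumed here to be the gateway on the bridge.
hosts = list(block.hosts())
assignments = dict(zip(guests, hosts[1:21]))

for name, ip in list(assignments.items())[:3]:
    print(name, ip)
```

The exact bridge setup (and whether the guests see the public IP directly or get it routed) depends on the hypervisor you pick, which is why the advice below to check the virtualization vendor's networking documentation is sound.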
Hello, I believe the network setup could depend on the specific virtualization software you choose. I recommend consulting the documentation for the virtualization software (e.g. Virtuozzo) to see the recommended setup. Thank you.
Okay, I will do that then. I'm leaning towards KVM. I'm going to create a new thread, though, and try to create a poll to see who uses what, to get a better feel for which way to go, essentially. Thanks!
I'm just wondering why I see a lot of VPS companies saying they have SSD drives and not SAS. Are SSDs faster and more reliable? I was under the impression, the 12Gbps SAS drives were faster than the SSD drives out there. When I go to upgrade, I want to make sure I make the right decision and purchase the best hard drives I can get. Any help would be greatly appreciated. Anyone out there renting out VPSes that can share some of their knowledge with us? Thank you.
Regarding storage media: SSD is going to provide the best read/write performance per hardware cost, i.e. the best performance per dollar. That said, SAS drives compare favorably to SATA drives in terms of performance, and favorably to SSDs in terms of durability. SAS drives can be quite expensive to obtain, however, and most hosting providers will choose the cost benefits of SATA or the performance benefits of SSD, often combining the two: SSD for the operating system and databases, and SATA for backups and other "inactive" files. We've been using SAS drives in some of our older infrastructure for years with no issues, but with the ever-decreasing cost of SSD storage and the ever-increasing popularity of "SSD hosting", we've been employing the combined storage strategy described above on new deployments. If you are familiar with KVM, it's a good choice for hosting cPanel VPS servers.
I just asked you about KVM in the other thread I created (the one with the poll). So do you find that the average customer prefers solid state for the VPS they rent? I know I've seen a lot of hype about SSDs, and I'm wondering if a good majority of customers just don't know what a SAS drive is. Maybe they think SSD is faster or something? We've been going for enterprise-grade SAS drives because of the speed and reliability, thinking that really good, fast drives would draw in more customers. Maybe we need to rethink the storage medium a bit. In your experience, are the VPSes hosted on SAS drives more popular than those on SSDs? Or is it the other way around? Thanks!
Hello all, on behalf of Spork Schivago, I would like to say thank you for the wonderful suggestions; they helped me learn new things.