How much IO can I expect from shared hosting?


Quote Originally Posted by ChaoscripT
I also agree with @HostXNow; many customers (including me) are in a sort of race to find the best-performing web hosting company.
I agree that the words “unlimited” and “cheap” aren’t attractive anymore. I’ve seen companies advertise “unlimited space” many times. Really? You have an unlimited amount of space?
Or there are companies that advertise “unlimited cPanel accounts”. Yeah, sure: with cPanel’s price increases, that’s really possible.

We (as customers) forget to check other things, such as:

– How long the web hosting company has existed

– Average ticket response time (yes, it’s good to check during out-of-office hours to see the real response time)

– Price of the hosting

– How many resources the company gives (CPU/RAM/IO)

I know that most companies give 1 CPU / 1 GB RAM / 10 MB/s IO,

But I think good value (not a rule of thumb or anything) is 2 CPU / 2 GB RAM / 40-50 MB/s IO. What do you think? (Of course, the website needs to be properly coded and well optimized.)

Regards.

I was one of the first to offer high-resource plans of 2 CPUs / 2 GB memory / 30 MB/s disk IO several years ago, before it took off, but that was when all you needed was cPanel/CloudLinux. There are now too many other costs from software vendors, plus regular price increases from the likes of cPanel/WHM, which has changed things a lot. Offering 2 CPUs / 2 GB memory / 30 MB/s disk IO for around £5.00/month is no longer sustainable. And they say operating/hardware/software costs go down!? Nonsense.

Not only that: at first you offer high resources and many customers don’t use anything too taxing, so all is good. But then either customers start getting careless about what they host (through no fault of their own, but because of unrealistic demand from some users) and their CPU/memory usage climbs too high, or you get individuals, or even competitors, crafting attacks to make as many accounts as possible use maximum CPU/memory and overload the server, which can cause a lot of trouble.

Thinking about it now, you could do a reverse IP lookup on one of a provider’s servers to get a list of the domains hosted there, then craft an attack that generates high traffic to all of those sites, overloading the server. The provider would soon stop offering a ridiculous amount of resources for pennies then! That would teach them.

So I wouldn’t say it’s best for a provider to offer 2 CPUs / 2 GB memory / 50 MB/s as the default/only plan solely to attract more sales, or the provider may run into the issues I mentioned. Sure, you can offer 2 CPUs / 2 GB memory / 50 MB/s, but you should charge accordingly in case many customers start using 100% of the resources allocated to them, whether through legitimate usage or other means.

A provider can oversell bandwidth A LOT, and they can oversell disk space a fair bit. They can even oversell memory and disk IO to some extent, but be careful when it comes to CPU, which is one of the most used/expensive resources from a provider’s perspective.
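To make that concrete, here is a back-of-the-envelope oversell check. All the server and plan numbers below are hypothetical, purely to illustrate which resource gets oversold hardest when every account is sold the same big allocation:

```python
# Back-of-the-envelope oversell check. All figures are made up for illustration.

server = {"cpu_cores": 32, "ram_gb": 128, "io_mb_s": 2000}   # physical capacity
plan   = {"cpu_cores": 2,  "ram_gb": 2,   "io_mb_s": 20}     # per-account allocation
accounts = 100                                               # accounts on the box

def oversell_ratio(resource):
    """Total allocated / physical capacity; > 1.0 means the resource is oversold."""
    return accounts * plan[resource] / server[resource]

for resource in server:
    print(f"{resource}: {oversell_ratio(resource):.2f}x")
# cpu_cores: 6.25x, ram_gb: 1.56x, io_mb_s: 1.00x
# CPU ends up the most heavily oversold, which is exactly where trouble starts.
```

The ratios only matter if customers actually hit their limits at the same time, which is the scenario discussed above.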

Offer a few different plans, e.g.:

1 CPU / 1 GB memory / 10 MB/s

2 CPU / 2 GB memory / 20 MB/s

3 CPU / 3 GB memory / 30 MB/s

But make sure you charge accordingly for them. Don’t oversell the shared CPU too much, or it can cause a lot of issues.

When it comes to CPU on a shared/reseller hosting service, a provider would not like it if 10-20 accounts started using 100% CPU 24/7. Some misconfigured sites can do exactly that and cause a lot of wait on the server, which can negatively affect all customers’ sites hosted there when the provider uses silly resource limits.

Now, if that happened but the provider was charging a realistic/reasonable price for a lower CPU allocation, i.e. 1 CPU, and 10-20 accounts caused constant high resource usage, it wouldn’t be as much of an issue. However, with one of those providers who have recently started offering unrealistic/ridiculous amounts of CPU for pennies, the kind you’d expect from a VPS/dedicated server with around 4-12 CPUs, the same scenario could become a massive issue for the provider and every customer on the server.
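To put rough numbers on that scenario (all figures hypothetical), a quick sketch of how fast runaway accounts saturate a box depending on the per-account CPU limit:

```python
# Hypothetical worst case: N accounts each pinned at their per-account CPU limit.
physical_cores = 16

def contention(accounts_at_limit, cpu_limit_per_account):
    """Demanded cores / physical cores; > 1.0 means the server is saturated."""
    return accounts_at_limit * cpu_limit_per_account / physical_cores

print(contention(10, 1))  # 0.625 -> 10 runaways on a modest 1-CPU plan still fit
print(contention(10, 4))  # 2.5   -> same 10 runaways on a "VPS-sized" 4-CPU plan
```

With 1-CPU plans the same misbehaving accounts only demand a fraction of the server; with the oversized plans they demand several times its capacity.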

Customers trying to get a ton of CPU/mem for pennies are putting the stability and performance of their sites at risk and contributing to the crazy offerings advertised by various providers lately.

It’s like going back to the days before providers had CloudLinux, when they had to monitor things manually and allow any account to use up to 25% of total server resources for short periods.

Many providers could end up going that route again. The difference now is that throttling can be done automatically with CloudLinux, and more advanced tools such as centralised monitoring let providers track server usage more efficiently.

