Building a social networking website with PHP and MySQL requires solid coding skills, but it’s definitely doable. You’ll need to set up a MySQL database for user data, posts, and interactions, then use PHP to handle user authentication, profiles, and messaging. Frameworks like Laravel or CMS solutions can speed things up. However, if coding everything from scratch sounds overwhelming, platforms like SocialEngine provide a ready-made solution with powerful customization options, letting you create a fully functional social network without deep coding knowledge. Whether you build from scratch or use a platform, focus on user experience, security, and scalability.
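As a sketch, the core MySQL tables for such a site might look like this (table and column names are illustrative, not taken from any particular product):

```sql
-- Illustrative schema for a minimal social network.
CREATE TABLE users (
  id            INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  email         VARCHAR(255) NOT NULL UNIQUE,
  password_hash VARCHAR(255) NOT NULL,  -- store a hash (e.g. bcrypt), never the raw password
  created_at    TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE posts (
  id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  user_id    INT UNSIGNED NOT NULL,
  body       TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (user_id) REFERENCES users(id)
);

CREATE TABLE follows (
  follower_id INT UNSIGNED NOT NULL,
  followee_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (follower_id, followee_id),
  FOREIGN KEY (follower_id) REFERENCES users(id),
  FOREIGN KEY (followee_id) REFERENCES users(id)
);
```

Messaging, likes, and comments follow the same pattern: a table with foreign keys back to `users` and, where relevant, `posts`.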
Determining the best VPS hosting provider can be tricky because it largely depends on your specific needs.
Generally, here are the most important factors to take into account when choosing the right VPS provider for you:
- Performance and reliability – the ideal VPS provider should offer high-quality hardware, reliable virtualization technology like KVM, and ample resources that match your needs.
- Scalability and upgradability – ideally, your VPS provider should have multiple plans with different resource capacities to accommodate your needs now and when those needs grow in the future. Also, the upgrade process should be seamless to maintain service uptime.
- Value for money – the best VPS company offers the best value, providing the most features, added extras, and service satisfaction at a reasonable price. Also, watch for hidden costs and fees for extras that other providers include in their packages by default.
- Reputation and reviews – a reputable and highly-rated VPS hosting provider is likely to offer good service based on its service-level agreement (SLA). You can check review websites like Trustpilot, media coverage, or forums like Quora.
- Security features – a VPS hosting plan should include robust security features, such as a firewall, malware scanner, and automatic backups, as standard.
- Compatibility and control – your chosen VPS provider should support the software and operating system of your choice. Also, check if it offers full root access to ensure you can change various aspects of your server.
The recommendations
Based on the info above, here are some of the best VPS hosting providers and who they are best suited for:
- Hostinger – an excellent choice for people looking for the overall best value. It has various built-in features, heaps of added extras, 24/7 support and excellent performance.
- IONOS – suitable for users who are tight on budget and looking for cheap VPS plans.
- Liquid Web – a good choice for enterprises that want VPS hosting with customizable resources and Windows support.
- Bluehost – ideal if you want to use cPanel for simpler management since its plans include it.
There’s no way to meaningfully answer this.
Both PHP and MySQL can easily handle ~250 queries per second on average, but it all depends on what those queries are.
An indexed SELECT with a relatively small dataset? Cake.
A terrible multi-join monster with bad optimization? You’ll be lucky to do 2 a second.
The answer to your question, as written, is “yes” and you barely need any hardware to do it. If you write a stupidly simple PHP app that runs a SELECT NOW() it’ll run great!
If you want to handle a lot of real-time users, your best choice is an event-based (non-blocking) server. This approach keeps slow, wasteful, or unresponsive operations from killing performance: instead of dedicating a thread to each connection, you register callbacks on an event loop, which lowers per-connection overhead. Here's a list of projects that support this level of concurrency:
Servers and libraries:
- Node.js (Javascript Server) - http://nodejs.org/
- Twisted (Python Library) - http://twistedmatrix.com/trac/
- Tornado (Python Server) - http://www.tornadoweb.org/
- EventMachine (Ruby Library) - http://rubyeventmachine.com/
- Gevent (Python Library) - http://www.gevent.org/
- Eventlet (Python Library) - http://eventlet.net/
- Scale Stack (C++ Framework) - http://scalestack.org/
More info:
- How To Node - http://howtonode.org/
- Node Tuts - http://nodetuts.com/
- Understanding the node.js event loop - http://blog.mixu.net/2011/02/01/understanding-the-node-js-event-loop/
- Scale Stack vs node.js vs Twisted vs Eventlet - http://oddments.org/?p=494
- Benchmarking Tornado vs. Twisted Web vs. Tornado on Twisted - http://programmingzen.com/2009/09/13/benchmarking-tornado-vs-twisted-web-vs-tornado-on-twisted/
- EventMachine: scalable non-blocking i/o in ruby - http://www.scribd.com/doc/28253878/EventMachine-scalable-non-blocking-i-o-in-ruby
- Asynchronous I/O - http://en.wikipedia.org/wiki/Asynchronous_I/O
Quora related threads:
- What is Node.js good for?
- Who should I follow on Twitter to learn more about node.js?
- How does Node.js compare against Tornado?
- What is Tornado (web framework)?
- What is the easiest way to understand the implications of the differences between threaded code and evented code?
- What are the easiest to use, most robust solutions for creating highly-available, non-blocking webservices?
- How does IO concurrency work in node.js despite the whole app running in a single thread?
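For a concrete feel of the event-loop model these projects share, here is a minimal echo server in modern Python's standard asyncio (asyncio postdates the list above, but the idea is the same; this sketch is not tied to any of the listed projects):

```python
import asyncio

async def handle(reader, writer):
    # One coroutine per connection; the loop multiplexes them on a single thread.
    data = await reader.readline()          # non-blocking: yields to the event loop
    writer.write(b"echo: " + data)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)  # port 0 = pick a free port
    port = server.sockets[0].getsockname()[1]

    # Act as our own client to exercise the server.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hi\n")
    reply = await reader.readline()
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

print(asyncio.run(main()))                  # prints b'echo: hi\n'
```

No thread is ever blocked waiting on the socket; while one connection waits for I/O, the loop serves others, which is exactly why this style scales to many concurrent clients.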
That's less than 250 requests per second. Yes, PHP and MySQL can handle that quite easily on very simple hardware, and still spend most of their time sleeping.
That said: it is also possible to craft abominations of database queries that cause everything to slow down.
You should be using a CDN (content delivery network, like Akamai) for this.
This is a pretty serious load from a bandwidth perspective. If you have 25,000 people concurrently streaming (for example) at 5Mbps (a reasonable guess for 1080), then you need a 125,000Mbps or a 125 gigabit outbound connection. That’s super-duper expensive.
At the same time, 25k concurrent http streams, even if there were no bandwidth constraints, would be a serious load on any equipment you have. A single server wouldn’t cut it, and you’d need pretty high-end load balancers and routers to handle it too.
It’s also the wrong way to design this application. At this scale, you would write your application to run on multiple load-balanced servers, but pull all of your assets from a CDN. A CDN places assets like videos closer to users so that less bandwidth is used overall, and especially less on the origin side.
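The bandwidth arithmetic above is easy to check (the 5 Mbps per-viewer figure is just the rough 1080p estimate used in this answer):

```python
viewers = 25_000
mbps_per_stream = 5            # rough per-viewer bitrate for 1080p, as assumed above

total_mbps = viewers * mbps_per_stream
total_gbps = total_mbps / 1_000
print(total_mbps, total_gbps)  # 125000 Mbps, i.e. a 125 Gbps egress pipe
```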
You don't need to handle 1,000 users simultaneously: you need to build something and ship it and start the process of discovering what you can build that will attract that many users. Seriously: don't even start worrying about that kind of scale until you know you're going to need it.
(That said, there's no harm in keeping scaling issues in mind while you're building the software - try to avoid making dumb architectural decisions that will be impossible to scale later. But deciding on your 1,000+ simultaneous user serving architecture before you've Built Something People Want is a huge waste of your time).
Most car insurance companies are kind of banking on you not noticing that they’re overcharging you. But unlike the olden days where everything was done through an agent, there are now several ways to reduce your insurance bills online. Here are a few ways:
1. Take 2 minutes to compare your rates
Here’s the deal: your current car insurance company is probably charging you more than you should be paying. Don’t waste your time going from one insurance site to another trying to find a better deal.
Instead, use a site like Coverage.com, which lets you compare all of your options in one place.
Coverage.com is one of the biggest online insurance marketplaces in the U.S., offering quotes from over 175 different carriers. Just answer a few quick questions about yourself and you could find out you’re eligible to save up to $600+ a year - here.
2. Use your driving skills to drop your rate
Not every company will do this, but several major brand insurance companies like Progressive, Allstate, and State Farm offer programs that let you use a dash cam, GPS, or mobile app to track your driving habits and reduce your rates. Typically you just have to do it for a month, and then they’ll drop your rate.
You can find a list of insurance companies that offer this option - here.
3. Fight speeding tickets and traffic infractions
A lot of people don’t realize that hiring a lawyer to fight your traffic violations can keep your record clean. The lawyer fee oftentimes pays for itself because you don’t end up with an increase in your insurance. In some cities, a traffic lawyer might only cost $75 per infraction. I’ve had a few tickets for 20+ over the speed limit that never hit my record. Keep this in mind any time you get pulled over.
4. Work with a car insurance company that rewards you for your loyalty
Sticking with the same car insurance provider should pay off, right? Unfortunately, many companies don’t truly value your loyalty. Instead of rewarding you for staying with them, they quietly increase your rates over time.
But it doesn’t have to be this way. Some insurers actually reward long-term customers with better deals and additional perks. By switching to a company that values loyalty - like one of the loyalty rewarding options on this site - you can enjoy real benefits, like lower premiums, better discounts, and added coverage options tailored just for you.
5. Find Out If Your Car Insurance Has Been Overcharging You
You can’t count on your car insurance provider to give you the best deal—they’re counting on you not checking around.
That’s where a tool like SavingsPro can help. You can compare rates from several top insurers at once and let them pitch you a better price.
Did you recently move? Buy a new car? Get a little older? These changes can mean better rates, and SavingsPro makes it easy to see if switching providers could save you money.
All it takes is a few minutes to answer these questions about your car and driving habits. You’ll quickly see if it’s time to cancel your current insurance and switch to a more affordable plan.
These are small, simple moves that can help you manage your car insurance properly. If you'd like to support my work, feel free to use the links in this post—they help me continue creating valuable content. Alternatively, you can search for other great options through Google if you prefer to explore independently.
I am 100% certain it can. You can also use C/C++, Python, Ruby, Perl, or whatever. For Node you will have to watch out for CPU-intensive tasks and for memory usage. But as long as single user requests are not hogging a whole CPU and all available memory, you can scale the number of processes per server, and scale the number of servers. At that kind of scale, a lot of things matter besides choice of language. And the core of Node is written in C++, so if you have I/O-intensive requests, then what matters in the first place is network, database, and cache speed.
If you want to run a website with about 20 million unique and/or returning users per month, you will need a server with enough storage space, RAM, and security to handle that much traffic; a site at that scale consumes serious resources.
So, I think a dedicated server will work, because it will give you the storage room, bandwidth, and uptime you need to handle that many users per month.
Also, there's no question that you can grow with this kind of server.
I have helped a few good companies host this kind of business, and trust me, their experiences have been great. If you want to get this much traffic or even more, ResellerClub's dedicated servers will work well for you.
In fact, I think you should check out their website and look at their plans to see which one fits your needs best.
1. Overpaying on Auto Insurance
Believe it or not, the average American family still overspends by $461/year¹ on car insurance.
Sometimes it’s even worse: I switched carriers last year and saved literally $1,300/year.
Here’s how to quickly see how much you’re being overcharged (takes maybe a couple of minutes):
- Pull up Coverage.com – it’s a free site that will compare offers for you
- Answer the questions on the page
- It’ll spit out a bunch of insurance offers for you.
That’s literally it. You’ll likely save yourself a bunch of money.
2. Overlooking How Much You Can Save When Shopping Online
Many people overpay when shopping online simply because price-checking across sites is time-consuming. Here is a free browser extension that can help you save money by automatically finding the better deals.
- Auto-apply coupon codes – This friendly browser add-on instantly applies any available valid coupon codes at checkout, helping you find better discounts without searching for codes.
- Compare prices across stores – If a better deal is found, it alerts you before you spend more than necessary.
Capital One Shopping users saved over $800 million in the past year; check it out here if you are interested.
Disclosure: Capital One Shopping compensates us when you get the browser extension through our links.
3. Not Investing in Real Estate (Starting at Just $20)
Real estate has long been a favorite investment of the wealthy, but owning property has often felt out of reach for many—until now.
With platforms like Ark7, you can start investing in rental properties with as little as $20 per share.
- Hands-off management – Ark7 takes care of everything, from property upkeep to rent collection.
- Seamless experience – Their award-winning app makes investing easy and efficient.
- Consistent passive income – Rental profits are automatically deposited into your account every month.
Now, you can build your own real estate portfolio without needing a fortune. Ready to get started? Explore Ark7’s properties today.
4. Wasting Time on Unproductive Habits
As a rule of thumb, I’d ignore most sites that claim to pay for surveys, but a few legitimate ones actually offer decent payouts.
I usually use Survey Junkie. You basically just get paid to give your opinions on different products/services, etc. Perfect for multitasking while watching TV!
- Earn $100+ monthly – Complete just three surveys a day to reach $100 per month, or four or more to boost your earnings to $130.
- Millions Paid Out – Survey Junkie members earn over $55,000 daily, with total payouts exceeding $76 million.
- Join 20M+ Members – Be part of a thriving community of over 20 million people earning extra cash through surveys.
With over $1.6 million paid out monthly, Survey Junkie lets you turn spare time into extra cash. Sign up today and start earning from your opinions!
5. Paying off credit card debt on your own
If you have over $10,000 in credit card debt, a debt relief program could help you lower your total debt by an average of 23%.
- Lower your total debt – National Debt Relief works with creditors to negotiate and settle your debt for less than you owe.
- One affordable monthly payment – Instead of managing multiple bills, consolidate your payments into one simple, structured plan.
- No upfront fees – You only pay once your debt is successfully reduced and settled, ensuring a risk-free way to tackle financial burdens.
Simple as that. You’ll likely end up paying less than you owed and could be debt free in 12-24 months. Here’s a link to National Debt Relief.
6. Overspending on Mortgages
Overpaying on your mortgage can cost you, but securing the best rate is easy with Bankrate’s Mortgage Comparison Tool.
- Compare Competitive Rates – Access top mortgage offers from trusted lenders.
- Personalized results – Get tailored recommendations based on your financial profile.
- Expert resources – Use calculators to estimate monthly payments and long-term savings.
Don’t let high rates limit your financial flexibility. Explore Bankrate’s Mortgage Comparison Tool today and find the right mortgage for your dream home!
7. Ignoring Home Equity
Your home can be one of your most valuable financial assets, yet many homeowners miss out on opportunities to leverage its equity. Bankrate’s Best Home Equity Options helps you find the right loan for renovations, debt consolidation, or unexpected expenses.
- Discover top home equity loans and HELOCs – Access competitive rates and terms tailored to your needs.
- Expert tools – Use calculators to estimate equity and project monthly payments.
- Guided decision-making – Get insights to maximize your home’s value while maintaining financial stability.
Don’t let your home’s value go untapped. Explore Bankrate’s Best Home Equity Options today and make your equity work for you!
8. Missing Out on Smart Investing
With countless options available, navigating investments can feel overwhelming. Bankrate’s Best Investing Options curates top-rated opportunities to help you grow your wealth with confidence.
- Compare investments – Explore stocks, ETFs, bonds, and more to build a diversified portfolio.
- Tailored insights – Get tailored advice to match your financial goals and risk tolerance.
- Maximize returns – Learn strategies to optimize investments and minimize risks.
Take control of your financial future. Explore Bankrate’s Best Investing Options and start building a stronger portfolio today!
You can opt for a cloud server with 1,000 vCPUs, 198 GB of RAM, and 100 TB of storage, and then upgrade the resources as your user count grows.
The “10,000 users” metric is not meaningful. You need to understand a few things before it’s possible to answer this. What language and architecture you have is the entire problem, as well as how many page loads those 10k concurrent users are doing.
For example, if you have 10k concurrent users that are loading real content that’s server-side rendered (like a pure Rails or PHP architecture) with MySQL or Postgres on the backend then no, not a chance.
If you use a CDN and ship React apps out to the browser, and only make API calls to services written in Rust with a tiny in-memory data store, then yes, you can probably do it. Assuming we’re talking about workstation-class PCs (no Epycs or Xeons): if you put in a Threadripper with as much RAM as it can take and a bunch of SSDs (six or so) to get your IOPS up, then architect your database correctly, I think you could probably do it.
But without benchmarking the app nobody can really tell you. I have services that do 3k requests per second and run great, and I have other services that die with more than 10 rps.
As the others say, it’s a complex question.
The number of users doesn’t really matter. If they never log in, you can have a single small server with a database of 20 million logins. But if just 10% of the users log in daily, you have 2 million users every day. And again, what really matters is the number of requests they make.
A site like Facebook makes a lot of requests. When you scroll down the page it fetches new content, so it results in many small requests. Let’s assume each user is on the site for one hour a day. During that hour, that average user will make 20 requests a minute or 1200 requests an hour. We multiply that by the 2 million users and we get 2.4 Billion requests a day. Ouch.
2.4 Billion requests / 24 hours / 60 minutes / 60 seconds = 27,778 requests a second. If we then estimate the average request to take 50 milliseconds of compute time on a single core, it means that each server can do 20 requests a second which means we need 1389 servers to handle this. Now this will be lowered significantly when we factor in things like caching, multi core servers and so on. On the other hand, that’s only application servers. You also need database and all the other stuff that makes a site.
According to Number of active users at Facebook over the years then Facebook had 608 million users at the end of 2010. Also in 2010 was the last public number of servers in facebook which was 60,000 servers. If we do the math (60000/608*20), that’s 1974 servers for 20 million users. By modern standards it’s probably more optimised, but it can give you an idea.
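The back-of-envelope above can be written out explicitly (all inputs are the rough assumptions from this answer, not measurements):

```python
import math

daily_users = 20_000_000 // 10          # assume 10% of 20M users log in daily
requests_per_user = 20 * 60             # 20 requests/minute for one hour online
daily_requests = daily_users * requests_per_user
print(daily_requests)                   # 2400000000, i.e. 2.4 billion/day

rps = daily_requests / (24 * 60 * 60)
print(round(rps))                       # 27778 requests/second

per_server_rps = 1000 // 50             # 50 ms of single-core compute -> 20 req/s
servers = math.ceil(rps / per_server_rps)
print(servers)                          # 1389 application servers

# Cross-check with Facebook's 2010 figures: 60,000 servers for 608M users
print(round(60_000 / 608 * 20))         # ~1974 servers per 20M users
```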
I answered a similar question over here: Quora User's answer to How much would it cost for cloud hosting for a Snapchat like social media app with 5 million users?
By “shared server” I assume you mean a MySQL server with multiple databases using it. If you can’t use connection pooling (which is the first thing you should try), then get the MySQL admin to bump max_connections to something more interesting and restart mysqld. It defaults to a relatively small number (the docs say 151).
We ship 1200 for most production nodes because that meets our needs. Be aware that this changes your max RAM calculations, but it’s rarely a problem on OLTP loads. You may also have to tweak the max file handles for the mysql user if you’re running some enhanced security.
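A sketch of the relevant my.cnf change (the value is an example; size it against your available RAM as noted above):

```ini
# /etc/mysql/my.cnf (exact location varies by distro)
[mysqld]
max_connections = 1200   ; default is 151; each connection costs memory
```

On recent MySQL versions max_connections is also a dynamic variable (`SET GLOBAL max_connections = 1200;`), so the restart can often be skipped, though the config file still needs the change to survive the next restart.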
Technically speaking, scaling an app for 10k concurrent users, or supporting concurrency beyond 10k users, depends entirely on the application and its requirements. First answer questions like:
- What does this application do?
- How many database requests are required per user?
- How many static assets need to be served?
- Is this a real-time application where database values are refreshed or updated within seconds? Am I trying to read the latest data while more records are being inserted or updated?
- Is management okay with vertical scaling, or insisting on horizontal scaling?
- What are my server’s limitations?
There are many more aspects that need to be taken care of before actually jumping in and changing configurations (especially in production). Since you have specifically asked about 10k concurrent connections, I am assuming you are in production already.
Vertical scaling to support 10k concurrency is easy: all you have to do is increase system resources, i.e. servers, load balancers, RAM, and disks. All of it can be done in a day.
Horizontal scaling means staying within each machine’s limits while still scaling the application out across more machines. This is considerably more difficult and requires expertise.
Now, let’s take as an example an arbitrary application which we are going to horizontally scale.
Web Server
Which web server is being used? Apache? Nginx? Both are good and have their own sets of advantages. There are numerous configurations available online that tell these web servers how many worker and child processes to create, how many should stay idle, and after how much time a persistent connection is recycled. In a nutshell, configuring your web server to support what you need should be your TOP PRIORITY.
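For example, the equivalent knobs in Nginx look like this (the numbers are illustrative and should be tuned for your hardware and workload):

```nginx
# nginx.conf - worker tuning (illustrative values)
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # connections each worker may hold open
}

http {
    keepalive_timeout 30s;      # how long idle persistent connections live
}
```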
Cores available in your Server
The cores available on your server are the primary concern: the more cores you have, the easier it is to scale.
Utilizing RAM
Technically, your application will consume RAM depending on its requirements. But since we are breaking the large application into smaller parts, each residing on a small server of its own, you may end up with lots of free RAM on the primary servers; keep that headroom for traffic spikes.
Use PHP accelerators
The first thing to do is install a PHP accelerator; OPcache and APC are my favorites. Dedicate enough RAM so they live happily, and keep an eye on the cache hit and cache miss ratio. If your hit rate is above 90%, you are good.
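A minimal php.ini sketch for OPcache along those lines (the values are illustrative):

```ini
; php.ini - OPcache settings (illustrative values)
opcache.enable=1
opcache.memory_consumption=256   ; MB dedicated to the opcode cache
opcache.max_accelerated_files=20000
opcache.validate_timestamps=1    ; set to 0 in production for maximum speed
```

The hit ratio mentioned above can be read at runtime via PHP’s `opcache_get_status()` function.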
Lots of Small Servers vs. One Big Server
My main philosophy is to keep lots of smaller servers with limited responsibilities. I never go for one very big server and ask it to do 10x jobs. Instead, break your application so that certain jobs run on certain servers (if possible). This will also help you identify real bottlenecks. Many DevOps engineers spend numerous hours just trying to figure out the real cause of a problem; if they know what’s not working, they can directly address the issue on a specific pool of servers rather than hunting for it.
Content Delivery Network
You can use Akamai, MaxCDN, or any CDN of your choice to serve static assets. Ensure that your CDN cache hit ratio is above 95%; once done, this will drastically reduce the load on your servers. Even highly scalable apps sometimes suffer because a large number of static assets are served from the origin server.
CloudFlare, Incapsula, Torbit, Amazon to the rescue
Services like CloudFlare and Incapsula help prevent spam bots from hammering your server. You may think you are handling 10k concurrent users, but once you use them you will realize that 10–30% of that traffic was bad traffic filtered out by CloudFlare’s algorithms. Plus, they also provide CDN, caching, and other important web goodies.
Squid or Varnish
Use Squid or Varnish (the better one), a “web application accelerator also known as a caching HTTP reverse proxy”. It greatly reduces load by answering repeated requests straight from cache instead of hitting your application.
Redis or Memcached
Both Memcached and Redis are super fast key-value stores that can also hold PHP sessions. Configure one of them as your session handler so that sessions are shared across all of your web servers.
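With the phpredis extension installed, pointing PHP’s session handler at Redis is a two-line php.ini change (the host and port here are examples):

```ini
; php.ini - store sessions in Redis via the phpredis extension
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```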
Load Balancing
Keep more than one load balancer to ensure that load balancer itself never becomes a bottleneck.
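As a sketch of the first half of this advice, an nginx balancer spreading traffic over two PHP app servers might look like this (addresses are illustrative); redundancy for the balancer itself is typically added with a second balancer and a floating IP via keepalived/VRRP:

```nginx
# Two app servers behind one balancer
upstream php_backend {
    least_conn;              # send new requests to the least-busy server
    server 10.0.0.11:80;     # app server 1 (illustrative address)
    server 10.0.0.12:80;     # app server 2
}

server {
    listen 80;
    location / {
        proxy_pass http://php_backend;
    }
}
```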
Vertical and Horizontal Sharding
If you are using a NoSQL database like MongoDB, and if you see a need, go for horizontal/vertical sharding to fetch data quickly.
If you are on an RDBMS like MySQL, ensure you have an index on every column that appears in your WHERE clauses. For example:
- SELECT account_name FROM TABLE WHERE email = 'XXX@XXX.XXX'
You should have an index on the email column.
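Concretely, the fix is a CREATE INDEX statement, and EXPLAIN confirms it is used (the table and index names below are illustrative):

```sql
-- Without an index, this query scans the whole table:
SELECT account_name FROM accounts WHERE email = 'XXX@XXX.XXX';

-- Adding an index on the email column turns it into a fast lookup:
CREATE INDEX idx_accounts_email ON accounts (email);

-- Verify with EXPLAIN: the query plan should now reference idx_accounts_email
-- instead of a full table scan
EXPLAIN SELECT account_name FROM accounts WHERE email = 'XXX@XXX.XXX';
```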
Apart from all of that, keep a vigilant eye on your servers' utilization and on the access and error logs, and figure out whether there is a bottleneck somewhere. Services like New Relic will help you manage this like a pro.
I hope this is enough to give you a real-world idea of scaling applications.
Pro tip: make sure NO "not found" (404) assets are being served from your server; they can cause immense load. Developers sometimes overlook that a certain background image (included via CSS) is not available on the server, and this causes a good amount of load.
Probably not. That is even assuming that the VPS server is not the database server.
NOTE: What the client does with the URL it receives is uninteresting, as long as the VPS server is not involved in playing the video.
Configuration: Some VPS servers are not configured for 10,000 concurrent TCP sessions. Your system needs sessions both for clients and for connections to the DBMS.
CPU capacity: Assume that the process from receipt of a request at the VPS server, until sending the reply spans, on average, r ms, e.g. 50 ms. According to Little’s Law, if the concurrency level is 10,000, then the transaction arrival rate is 10,000/.05, or 200,000 requests per second. Adjust the arithmetic for your expected transaction duration.
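The Little's Law arithmetic above can be sketched as a quick calculation (the 50 ms service time is the answer's assumed value; adjust it for your workload):

```python
# Little's Law: L = lambda * W, so lambda = L / W
# L = concurrency level, W = average time a request spends in the system
concurrency = 10_000      # concurrent requests in flight
service_time_s = 0.050    # 50 ms average request duration (assumed)

arrival_rate = concurrency / service_time_s
print(arrival_rate)  # 200000.0 requests per second
```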
- Is traffic from and to the clients encrypted? That takes additional CPU, especially at TLS session initiation time.
I/O Capacity: Does the provider promise that the VPS server can handle 800,000 I/O per second on a sustained basis? Assuming that all messages fit within one Ethernet frame, that number results from 4 I/O’s per transaction: Receive request; query DBMS; get response from DBMS; send reply. That assumes long running TCP sessions (or UDP) to clients and long running pooled database connections to the database server. Session initiation, termination, and encryption involve additional messages.
Transfer: Assuming that each transaction consumes 256 bytes of "Transfer", the 32 TB of Transfer would be consumed in about 7 days of transactions at the specified rate. If the 32 TB of Transfer is per month, then this would be insufficient. If Transfer is also consumed by traffic between the VPS server and the database server, then instead of 32 TB being consumed in 7 days, it might be consumed in 4.
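The "about 7 days" figure follows directly from the earlier numbers (using decimal terabytes, 1 TB = 10^12 bytes):

```python
# How long 32 TB of transfer lasts at the computed request rate
rate_rps = 200_000     # requests/second, from the Little's Law estimate
bytes_per_txn = 256    # assumed transfer per transaction
transfer_tb = 32

bytes_per_day = rate_rps * bytes_per_txn * 86_400  # ~4.42 TB/day
days = transfer_tb * 1e12 / bytes_per_day
print(round(days, 1))  # ~7.2 days
```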
How much slower? Obviously, PHP is the middleman that constructs the query based on the web-request arguments (which requires parsing the PHP source code every time), executes the MySQL query, and returns the result. It probably opens a connection to MySQL beforehand every time and closes it after running the query, because PHP does not know how many queries you will want to execute. Doing so typically takes additional time, not to mention that the web server might be handling other web requests at the same time. Directly connecting to a MySQL database is typically reserved for internal use for security reasons, and multiple queries can then be processed on a single open connection, thus speeding up the response time.
Your question is a bit too broad to answer with certainty. However, if you expect potential rapid scaling up, then you may want to look at a cloud solution (instead of dedicated) as then it's fairly easy to rapidly resize your resources up and down as demand requires.
I’d run some load tests on some virtual machines. And if you’re comfortable with Java, include it in your benchmarking too.
Not at all; all these big names might charge you a small sum at the initial stage, but the renewal cost on these platforms is insanely high. I use Hostnoc for my business, and they work just fine; they charge what they promise.
I have used big hosts such as HostGator and Bluehost, but all proved to be insanely costly. I finally settled with Hostnoc.
Any language or framework can be made to scale up: obviously not with a single server, but with many servers handling requests. A poorly suited system like Python can only handle one request at a time per process (due to its "global interpreter lock") and requires many worker processes on each physical server. Most other languages and frameworks don't have this restriction and can be scaled more easily. A single Node server might be able to handle 100K concurrent requests and can easily be scaled across multiple servers to many millions.
There is no definite answer without knowing your web application.
It is quite possible to handle that much traffic if your application has a small memory footprint, a very small database, efficient MySQL queries, and so on.
On the other hand, your server will not be able to handle even 1,000 connections/sec if your application has huge memory usage, a very big database, bad MySQL indexing, and inefficient MySQL queries.
You could look at the AWS calculator. Consider the following costs:
- Network download/upload (you need very good bandwidth)
- Storage server (block storage: 7 MB average per user × 10,000 users ≈ 70 GB)
- Replication (another 7 MB × 10,000 ≈ 70 GB if you are considering one replica)
- Backups
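The storage estimate in the list above is simple multiplication (the 7 MB per-user figure is the answer's assumption):

```python
# Rough storage estimate: average per-user footprint times user count
users = 10_000
avg_user_storage_mb = 7   # assumed average storage per user, in MB

primary_gb = users * avg_user_storage_mb / 1000   # ~70 GB primary
with_replica_gb = primary_gb * 2                  # one full replica doubles it
print(primary_gb, with_replica_gb)  # 70.0 140.0
```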
10,000 at once, in a day, a week, a year? Which?
Building the server is the easy part: a decent CPU and loads of RAM, preferably Linux-based. The real challenge with 10k users is bandwidth. As you say you are building a server from a PC, I will assume that you're self-hosting… well, your ISP probably isn't going to support it.
I’d say probably not, unless you do some optimization: caching, PHP page-load caching, enough RAM to store the DB in memory, etc.
The good news is that typically you don’t have to worry about it until your website “becomes popular” :)
A2A: For a dynamic PHP website that is doing nothing special, how much CPU and memory is needed for about 1000 concurrent users at the same time, about 5 page views per user in a time frame of 10-15 seconds?
This is always a "how long is a piece of string" question. Build the site properly with the best queries you can, a well-configured database, and intelligent caching, and the load would be nominal on a modern physical or virtual machine.
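To put a rough number on the scenario in the question, the implied request rate works out like this (taking the midpoint of the 10-15 second window):

```python
# Back-of-envelope request rate for 1000 concurrent users,
# 5 page views each, over roughly 10-15 seconds
users = 1_000
views_per_user = 5
window_s = 12.5   # midpoint of the 10-15 second window (assumed)

requests_per_s = users * views_per_user / window_s
print(requests_per_s)  # 400.0
```

Around 400 requests/second is well within reach of a single well-tuned modern machine, which is why the answer calls the load nominal.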
If you are reasonably competent and have a passing understanding of computer hardware, you can build a PC.
However, commercial servers should be built of server quality hardware. You might find somewhere to host a home built server, but these are rare these days. You need better metrics to understand what resources your 10,000 users might need and how to manage growth of those requirements. Realistically, leasing a commercial server is probably going to require less outlay and have more capability for expansion.
Don’t go with GoDaddy.
GoDaddy is one of the worst hosting providers, and it is not just us; every experienced blogger and website owner will tell you the same.
Look for some other alternatives as there are a lot of better Web hosting providers.
The life cycle of a website can be enigmatic.
Starting a website, at launch, it may be a bit rough. It may not behave well; you may hit issues you did not have in testing. That, or your marketing may have worked too well. Over-provisioning will soak up quite a bit of that.
Web servers need not work very hard. However, if something were to go wrong, it might have an impact on other processes.
This is particularly true of Java/Tomcat implementations. It is by no means exclusive. Expensive can be good, providing you can resize as required and walk away from that expense.
You may not yet be ready for a separate database. Nor for load balancers other than WAF.
If your hotfixes are frequent (intra-day drops), you don't want 3 weeks of change-control lead time or C-suite approvals getting in your way. A soft launch may be your intent.
You’re not going to be running a social app where your ADARU is 20k, with a broad, active content pool and user-generated content, on top of a database like Oracle or SQL; which you probably know, but I just want to clarify that those dark days are no longer how things are done.
Most likely, you’ll contract with AWS, Microsoft Azure Cloud, Google Cloud Platform or a smaller operation that offers on-demand scale, proprietary platforms with AI, ML, analytics and the whole Home Depot of tools for just about anything you want your app to do, know or learn.
How your app architecture ends up is best left to your devs to decide, based on their experience and understanding of each platform. But the costs are so low (and the platforms so agile) that it makes sense to build on their architecture. If your 20k daily user count suddenly becomes 100k for sustained periods, you do nothing: no provisioning servers, no bandwidth contracts, no upgrading the data collections.
Google / AWS / MSFT just make it happen. 20k -> 100k -> 1 billion? Done. (Your monthly fees will certainly see a substantial bump, but it’s not a bad problem to have…)
- Google Cloud Platform ANTHOS seems right for you
- Their documentation, case studies, samples, and onboarding are brilliant…
It depends on your hosting.
If it’s classic hosting, this is an example script with the limits and the filename you need to use.
PHP upload limits on shared hosting
What filename does my PHP initialization file need to use?
If you’re using cPanel, then you can literally just click buttons to do it; however, if you need more memory, you’ll need to make a file and basically follow the steps above for a classic hosting account.
View or change your PHP version in cPanel hosting?
If you’re using Windows or Plesk, then you have to make a file and script it out.
What filename does my PHP initialization file need to use?
Remember, the script in the help article is just an example, so you might have to play around a bit, but it shouldn’t be too hard. I always recommend Linux/cPanel if you’re going to use PHP. Windows/Plesk is only recommended if you’re going to be using MSSQL, ASP, ASP.NET, or other languages that only work on the Windows platform.
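For reference, the kind of limits file being described usually looks like this; the directive names are standard PHP, but the values below are illustrative and your host may cap them lower (on shared hosting the file is typically `php.ini` or `.user.ini` in your site root, depending on the host):

```ini
; Raise the upload and memory limits for a PHP site
upload_max_filesize = 64M
post_max_size = 64M        ; must be >= upload_max_filesize
memory_limit = 256M
max_execution_time = 300   ; seconds allowed per request
```

After changing it, check `phpinfo()` to confirm the new values actually took effect.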