I have a WordPress website with a 4 GB MySQL database and 30,000 posts. The average response time is 1.5 seconds without caching. I was getting 2 requests per second on a 4-CPU / 8 GB machine, and when I moved to an 8-CPU / 32 GB machine, nothing changed: still 2 requests per second, and most of the RAM is used for caching. So I thought that instead of having one bigger machine, setting up 4 smaller machines would be better for concurrency. Is this correct? Can anybody suggest a better solution? I use Apache, PHP 8.2, and MySQL on the same server.
- Without knowledge of why you're not getting the performance you expect, you're only guessing. A bigger server is a good guess for a (temporary) solution in many cases. That didn't work out? Then horizontal scaling is also a good guess (but know that it is usually a fairly complicated solution to get right). I would not go that route based on only a guess. Please add in-depth monitoring, learn exactly what your current bottleneck is, and get data and numbers. Then start with the black art of tuning: make adjustments, collect more data, revert or keep each change, and investigate further. – HBruijn Aug 03 '23 at 10:14
- Also, some MySQL optimization might make one hell of a difference; with default settings it's dead slow. At the end of the day you will need to do some profiling and testing. – Zac67 Aug 03 '23 at 10:28
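A quick way to see whether the defaults are the problem is to compare the configured InnoDB buffer pool with how often reads actually hit it. The sketch below is only an illustration: the connection credentials are placeholders, and the ~99% hit-rate rule of thumb is an assumption, not something stated in the thread.

```php
<?php
// Illustrative check (placeholder credentials): is the InnoDB buffer pool undersized?
$db = new mysqli('127.0.0.1', 'root', 'secret');

// Configured buffer pool size (MySQL's default is only 128 MB).
$size = $db->query("SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size'")->fetch_row()[1];

// Logical read requests vs. reads that had to go to disk.
$status = [];
$res = $db->query("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'");
while ($row = $res->fetch_row()) {
    $status[$row[0]] = (float) $row[1];
}
$hitRate = 1 - $status['Innodb_buffer_pool_reads'] / max(1, $status['Innodb_buffer_pool_read_requests']);

printf("buffer pool: %.0f MB, read hit rate: %.2f%%\n", $size / 1048576, $hitRate * 100);
// On a 4 GB database, a hit rate well below ~99% usually suggests the pool is too small.
```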
- Additional DB information request, please: OS and version? RAM size, number of cores, any SSD or NVMe devices on the MySQL host server? Post the TEXT data on justpaste.it and share the links. From your SSH root login, text results of: A) SELECT COUNT(*) FROM information_schema.tables; B) SHOW GLOBAL STATUS; after a minimum of 24 hours of uptime C) SHOW GLOBAL VARIABLES; D) SHOW FULL PROCESSLIST; E) STATUS; (not SHOW STATUS, just STATUS;) G) SHOW ENGINE INNODB STATUS; for server workload tuning analysis to provide concurrency suggestions. – Wilson Hauck Sep 25 '23 at 17:04
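For anyone who wants to gather most of that output in one pass, a minimal sketch follows; the connection details and the /tmp output paths are placeholders, and STATUS; still has to be run from the mysql prompt separately since it is a client command rather than an SQL statement.

```php
<?php
// Hypothetical collector for the diagnostics requested above (placeholder credentials and paths).
$db = new mysqli('127.0.0.1', 'root', 'secret');

$queries = [
    'table_count'   => 'SELECT COUNT(*) FROM information_schema.tables',
    'global_status' => 'SHOW GLOBAL STATUS',       // best after 24+ hours of uptime
    'global_vars'   => 'SHOW GLOBAL VARIABLES',
    'processlist'   => 'SHOW FULL PROCESSLIST',
    'innodb_status' => 'SHOW ENGINE INNODB STATUS',
];

foreach ($queries as $name => $sql) {
    $out = fopen("/tmp/{$name}.txt", 'w');
    $res = $db->query($sql);
    while ($row = $res->fetch_row()) {
        fputcsv($out, $row, "\t");   // tab-separated, one line per result row
    }
    fclose($out);
}
```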
- @doraemon If you still need assistance, please consider posting the additional DB information requested on Sep 25, 2023 at 17:04; analysis and concurrency suggestions will be provided. – Wilson Hauck Oct 08 '23 at 07:43
- @WilsonHauck This was for one of our client's sites, and they finally made the decision this week to move to our services. I will report back later, as I will work on this project next week. Thank you for your kind interest in helping me. Actually, my problem was pre-warming the cache: if a website has 100,000 posts and you can't do concurrency, you will need 100,000 seconds to warm the cache. That means about 27 hours :) – doraemon Oct 08 '23 at 10:12
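To make the concurrency point concrete, here is a rough sketch of warming the cache with parallel requests via curl_multi. The sitemap URL, the example.com domain, and the concurrency of 20 are assumptions; the site's real post URLs and a safe level of parallelism will differ.

```php
<?php
// Illustrative cache pre-warmer: read post URLs from a sitemap (assumed location)
// and request them in parallel batches instead of one at a time.
$urls = [];
foreach (simplexml_load_file('https://example.com/wp-sitemap-posts-post-1.xml') as $entry) {
    $urls[] = (string) $entry->loc;
}

$concurrency = 20;   // assumed; keep it low enough that the backend can keep up
foreach (array_chunk($urls, $concurrency) as $batch) {
    $mh = curl_multi_init();
    $handles = [];
    foreach ($batch as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // fetching the page populates the cache
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }
    do {                                  // run the whole batch in parallel
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);
    foreach ($handles as $ch) {
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
}
```

At roughly 1 second per uncached page, 20 parallel requests would bring 100,000 posts from about 27 hours down to on the order of 5,000 seconds (around an hour and a half), provided the server can actually sustain that concurrency.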
- @doraemon Here to assist. Thanks – Wilson Hauck Oct 08 '23 at 13:28
- @doraemon From practical experience, even after spending hours loading 100,000 posts into the cache, less than 10% of them will be needed by anyone logging in over the next 5 days. It would be better to avoid this overhead and concentrate on serving requested historical posts with the least amount of effort. If you could gather the data requested above from the system being retired, we would be able to better guide you in tuning your new instance so it performs better from day 1 after importing the existing 100,000 posts. Please view my profile. – Wilson Hauck Oct 08 '23 at 14:22