Recommended memory configuration for many SQL Server instances per server

I am currently managing a server cluster for a SaaS system. We currently have 6 SQL servers running MSSQL 2008 R2. Each server is a quad core virtual machine with 16GB of RAM provisioned on a server rack. We populate each server with the maximum of 50 instances allowed by SQL Server. Each instance will be accessed sporadically by a maximum of 10 clients simultaneously, and each database will in general only be around 100MB to 500MB in size.

Each instance was installed without setting memory limits, but what we have found is that the first instances tend to use too much memory, leaving later instances (in start-up order) with barely 200 MB to work with, and the server OS with less than 1% of physical memory available. This seems to cause excessive disk swapping and latency issues.

What is the recommended way to split memory allocation in this case? Is there a formula to determine the approximate minimum memory one instance needs, based on the number of clients and the database size? Could I set a maximum of 300 MB per instance and be done with it?

What is the edition of SQL Server (Express, Standard, Enterprise)? – ypercubeᵀᴹ yesterday
It is MS SQL 2008 R2 SP3 Enterprise. – Drunken Code Monkey yesterday
@ypercubeᵀᴹ @Drunken Code Monkey: did you ever consider rethinking your licensing-cost versus hardware-cost ratio, given this setup? – Tom V yesterday
@TomV Not really my problem, I just work here... :) – Drunken Code Monkey yesterday

Each server is a quad core virtual machine with 16GB of RAM provisioned on a server rack.

We populate each server with the maximum of 50 instances allowed by SQL Server. Each instance will be accessed sporadically by a maximum of 10 clients simultaneously, and each database will in general only be around 100MB to 500MB in size.

IMHO, your total RAM is too low. Please read my answer (with relevant links) to SQL Server Maximum and Minimum memory configuration; the recommendations change when you have multiple instances of SQL Server running on a single host.

Capping SQL Server max memory on a multi-instance server is a balancing act, and max server memory applies only to the buffer pool. If SQL Server needs memory beyond that, it will take it.
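
For reference, capping a single instance from T-SQL looks roughly like this (a minimal sketch; the 300 MB cap comes from the question, and the 128 MB floor is an arbitrary assumption, not a recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the buffer pool for this instance (value in MB).
EXEC sp_configure 'max server memory (MB)', 300;
-- Optional floor so the instance is not trimmed to nothing under pressure (assumed value).
EXEC sp_configure 'min server memory (MB)', 128;
RECONFIGURE;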

You can even use Lock Pages in Memory (I would still opt for more memory before enabling LPM in your case).

As a starting point, cap max server memory on every instance so that the sum of the caps plus the OS reservation fits within the 16 GB available, then adjust individual instances based on observed usage.

As a side note, you should monitor CPU, memory and disk utilization per client, and charge clients based on that usage as well. Alternatively, you can move to Azure :-)
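
As a rough sketch of the memory side of that monitoring, something like the query below can be run against each instance and collected centrally (sys.dm_os_process_memory is available in SQL Server 2008 R2; the columns shown are a subset chosen for this purpose):

SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       memory_utilization_percentage,
       locked_page_allocations_kb,   -- > 0 indicates Lock Pages in Memory is in effect
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;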

No amount of upvotes can state how much I agree with this answer! – Zane yesterday
@Zane Especially the first sentence after the quote: "your total RAM is too low". – Tom V 23 hours ago
RAM is scalable; we can reprovision the VMs at any time (the rack has 256 GB), but the issues we have been having are due to the OS getting starved, not the instances running out of memory. I will keep the suggestion to increase RAM in mind if we still run into performance issues after the OS memory problem is solved. – Drunken Code Monkey 4 hours ago

Set a maximum of 300 MB per instance and be done with it. Seriously, you can monitor it with something like the query below to determine which instances are candidates for a little more or a little less memory.

SELECT DB_NAME(database_id) AS [Database Name],
       COUNT(*) * 8 / 1024.0 AS [Cached Size (MB)]
FROM sys.dm_os_buffer_descriptors
WHERE database_id > 4          -- exclude system databases
  AND database_id <> 32767     -- exclude ResourceDB
GROUP BY DB_NAME(database_id)
ORDER BY [Cached Size (MB)] DESC
OPTION (RECOMPILE);

From here: Memory utilization per database - SQL Server

Good article here: http://strictlysql.blogspot.com/2013/05/how-to-size-sql-server-memory-why-is-it.html

Allocating memory to the OS (from the article):

1 GB of memory reserved for the operating system
1 GB for every 4 GB of RAM from 4 to 16 GB
1 GB for every 8 GB of RAM above 16 GB

For a 16 GB server that reserves about 4 GB for the OS (1 GB base plus 1 GB for each of the three 4 GB blocks between 4 and 16 GB). Split the remaining ~12 GB evenly across the 50 instances, approximately 250 MB each.

One configuration you may also consider is turning on "optimize for ad hoc workloads". This tells SQL Server not to cache the full query plan for a query until it has been executed at least twice; the first execution stores only a small plan stub. This keeps ad-hoc, single-use queries from taking up this limited memory.
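
Enabling it is a one-liner per instance (a sketch; it is an instance-level, advanced sp_configure option):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cache only a plan stub on first execution; the full plan is cached on the second.
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;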

Also, you can minimize the overhead of the transaction log (chiefly its growth on disk) by setting each database to the simple recovery model. You should only do this if, in the event of a failure, restoring from the last full or differential backup is acceptable. You can read about the other limitations here: https://msdn.microsoft.com/en-us/library/ms189275.aspx.
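
For a given database that would look like this (a sketch; "ClientDb01" is a placeholder name, not one from the question):

-- Switch a database to the simple recovery model; the log is truncated at checkpoints.
ALTER DATABASE [ClientDb01] SET RECOVERY SIMPLE;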

I think that's fair until you see a reason to change it, particularly if these are individual clients who are in all other respects equal.

Your answer is neither "the answer" nor is it correct. I suggest you post it as a comment instead. – Shanky yesterday
The other problem is that this entails constant micromanagement. The accounts are created automatically from a web site script, and the SQL instances are installed from an unattended install script. I realize that in a perfect world one size never fits all, but since each instance's workload is comparable (and light), I would prefer to just set the configuration in the install script and forget about it. I still monitor the servers; I just don't want to have to constantly monitor each instance. – Drunken Code Monkey yesterday
I think that really supports your inclination to set them all the same unless you find a reason (via your monitoring) to do differently. They sound 'clone-ish', so I think you're justified in treating them as such. And, if these are paying clients, I'd think they deserve their 'fair share' of memory and should not be limited at the expense of other clients, unless limiting their memory doesn't limit their DB performance. – Gerald Patriowski yesterday
The question remains: how do you usually arrive at the winning number in the end? I said 300 MB purely as a figure, because 300 MB x 50 ~= 15 GB, leaving 1 GB for the OS. Is that enough? Should I instead provision a minimum and an absolute maximum, and leave the rest floating? – Drunken Code Monkey yesterday
@DrunkenCodeMonkey, see my updated answer. – Gerald Patriowski yesterday
