Does anyone know any rules of thumb (or better) with regards to hardware requirements for a dedicated distributor server? i.e. Based on estimate metrics (e.g. number of subscribers, publishers, tables, average table size, average update frequency, etc) is there some way to calculate a minimum RAM/CPU requirement for the distributor?
We’ll likely be using SQL Server 2012 for the distributor, though the publishers and subscribers are currently a mix of 2008 and 2008 R2.
The distributor will be clustered to ensure high availability. We’ll go active/active to make optimal use of resources, sending half the publishers to one node and half to the other, but we’ll ensure each node on its own meets the minimum spec so that, should one node go down, the solution can still cope.
We’ve been using replication to allow our international BI system to pull data from our transactional systems without impacting their performance, with the BI servers hosting the distributor. Going forwards we’re hoping to move distribution onto a separate server, so that we can provide replication as a service both to the BI team and to other teams that need it (e.g. local BI teams from various countries, and our in-house monitoring system). We expect to see an increase in the number of tables being replicated, as well as in the number of subscribers and publishers (as we’ll be including our non-production systems too).
Thanks in advance.
asked Aug 13 ’12 at 12:04
migrated from stackoverflow.com Aug 13 ’12 at 12:09
For anyone viewing this looking for similar info: we’re going with 4 cores / 12 GB RAM to begin with. These will be virtual machines, so we can monitor and tweak as needed. I’ll try to work out a rule based on what we see over the next few months to help guide others. – JohnLBevan Aug 16 ’12 at 13:23
Running on a VM is a good idea, as it’s hard to define resource requirements up front, but in most cases distributors are quite lightweight. The main factors are the number of subscribers, the transaction count, and the number of articles in replication.
Some tips to help:
- Only replicate what you need to; this includes stored procedures, views, etc.
- Using pull subscriptions will reduce CPU and memory requirements on the distributor.
  - This is harder to implement in Web edition, but possible with scripts.
- If you have a lot of push agents running, allow for more memory outside of SQL Server, as the agents run as separate processes.
  - Consider increasing the desktop heap size if you use push and have a lot of agents (this is a registry change).
- Choose the right type of replication.
  - Snapshot is light on the distributor, as it doesn’t have the overhead of the log reader or clean-up jobs, but it’s heavier on the network and the subscriber when pushing out.
- Enterprise edition doesn’t provide any real performance benefit on the distributor.
  - I hoped that merry-go-round scans would help, but no; the pain is mostly in CPU when problems occur.
  - You may find it helpful to reserve some cores for the network, as large distributors can get overwhelmed during imports or clean-up jobs and struggle to get the data out.
  - I’m talking 1,000 subscribers and 100,000 articles (not my idea).
- Throwing CPU at the distributor rarely fixes a high-CPU issue. Treat the distribution database like any other database and tune it (I believe Denny Cherry has an article which recommends some indexes).
  - High CPU can also be caused by locking and deadlocking during the clean-up job, or by large transactions failing to be delivered, causing the subscriber agents to scan the entire table. You may find that reducing the frequency of the clean-up job actually helps overall throughput, though it could increase latency.
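To illustrate the pull tip: a pull subscription is created at the subscriber, which moves the distribution agent (and its CPU/memory cost) off the distributor. A minimal sketch using the standard system procedures — all server, database, and publication names here are placeholders for illustration:

```sql
-- Run at the SUBSCRIBER, in the subscription database.
EXEC sp_addpullsubscription
    @publisher    = N'PUBSRV01',
    @publisher_db = N'SalesDB',
    @publication  = N'SalesPub';

-- The agent job that does the work now lives on the subscriber,
-- taking that load off the distributor.
EXEC sp_addpullsubscription_agent
    @publisher    = N'PUBSRV01',
    @publisher_db = N'SalesDB',
    @publication  = N'SalesPub',
    @distributor  = N'DISTSRV01',
    @job_login    = NULL,  -- run under the SQL Server Agent service account
    @job_password = NULL;
```

Note the publisher side still needs a matching registration (`sp_addsubscription` with `@subscription_type = N'pull'`) for the subscription to be recognised.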
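For the clean-up frequency tip, the knob is the SQL Server Agent schedule on the distribution clean-up job. A hedged sketch — the job and schedule names below are the usual SQL Server defaults but may differ on your distributor, so check `msdb.dbo.sysjobs` first:

```sql
-- By default the clean-up job runs every 10 minutes; stretching the
-- interval can reduce lock contention, at the cost of the distribution
-- database holding more undelivered commands (and higher latency).
USE msdb;
EXEC dbo.sp_update_jobschedule
    @job_name             = N'Distribution clean up: distribution',
    @name                 = N'Replication agent schedule.', -- default name; verify
    @freq_subday_type     = 4,   -- units: minutes
    @freq_subday_interval = 30;  -- every 30 minutes instead of 10
```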
In a nutshell: if replication is configured correctly, updates are batched properly (don’t update millions of rows in one transaction), and the subscribers are up to date, then the distributor is happy.
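On batching those updates: each statement commits as its own transaction, so the log reader picks up many small transactions instead of one huge one. A minimal T-SQL pattern (table and column names are hypothetical):

```sql
-- Update in chunks rather than millions of rows at once.
DECLARE @batch INT = 10000;
WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) dbo.Orders
    SET    Status = 'Archived'
    WHERE  Status = 'Closed';

    IF @@ROWCOUNT < @batch BREAK;  -- last, partial batch done
END
```

Each loop iteration becomes a separate replicated transaction, which keeps the distribution database and the subscriber agents from choking on a single giant command set.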
answered Mar 3 ’16 at 13:06