This guide provides comprehensive strategies and techniques for optimizing the performance of Microsoft Identity Manager 2016 Synchronization Service in enterprise environments.
Performance Fundamentals
1. Architecture Overview
Key Performance Components:
- Synchronization Engine: Core processing engine
- Management Agents: Data source connectors
- Metaverse Database: Central identity store
- Connector Spaces: Staging areas for data
- Rule Extensions: Custom business logic
2. Performance Factors
Primary Influences:
- Data Volume: Number of objects and attributes
- Network Latency: Connectivity to data sources
- Database Performance: SQL Server optimization
- System Resources: CPU, memory, and disk I/O
- Synchronization Logic: Complexity of transformations
System Requirements and Sizing
1. Hardware Recommendations
Small Environment (< 50,000 objects):
- CPU: 4 cores @ 2.4 GHz minimum
- Memory: 8 GB RAM minimum
- Storage: 100 GB available space
- Network: 1 Gbps connection
Medium Environment (50,000 - 500,000 objects):
- CPU: 8 cores @ 2.8 GHz
- Memory: 16 GB RAM
- Storage: 500 GB available space
- Network: 1 Gbps dedicated connection
Large Environment (> 500,000 objects):
- CPU: 16+ cores @ 3.0 GHz
- Memory: 32+ GB RAM
- Storage: 1+ TB SSD storage
- Network: 10 Gbps connection
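To see where an existing server falls within these tiers, a quick inventory check helps; a minimal PowerShell sketch using standard CIM classes (adapt the output to your own sizing process):
# Sketch: report logical cores and installed memory to compare against the sizing tiers above
$cs  = Get-CimInstance -ClassName Win32_ComputerSystem
$cpu = Get-CimInstance -ClassName Win32_Processor
[pscustomobject]@{
    LogicalCores = ($cpu | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum
    MemoryGB     = [math]::Round($cs.TotalPhysicalMemory / 1GB, 1)
}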
2. Database Server Sizing
SQL Server Requirements:
-- Example SQL Server configuration (adjust values to the server's RAM and workload)
-- Enable advanced options so the settings below can be changed
sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
-- Memory allocation: cap SQL Server memory, leaving headroom for the OS
-- and, on a co-located server, for the synchronization service itself
sp_configure 'max server memory (MB)', 8192
GO
RECONFIGURE
GO
-- Parallelism settings
sp_configure 'max degree of parallelism', 4
GO
RECONFIGURE
GO
-- Cost threshold for parallelism
sp_configure 'cost threshold for parallelism', 25
GO
RECONFIGURE
GO
3. Storage Considerations
Database File Placement:
- Data Files: High-performance SSD storage
- Log Files: Separate fast storage
- TempDB: Dedicated high-speed storage
- Backup Files: Network-attached storage
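To confirm where the data, log, and TempDB files actually live, query sys.master_files; a minimal sketch using the SqlServer PowerShell module (the instance name is an example):
# Sketch: list file placement for the sync database and TempDB ("SQL01" is an example instance)
Invoke-Sqlcmd -ServerInstance "SQL01" -Query @"
SELECT DB_NAME(database_id) AS DatabaseName, type_desc, name,
       physical_name, size * 8 / 1024 AS SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) IN ('FIMSynchronizationService', 'tempdb');
"@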
Database Optimization
1. Index Management
Index Fragmentation Assessment:
-- Check index fragmentation
SELECT
OBJECT_NAME(ips.object_id) AS TableName,
i.name AS IndexName,
ips.avg_fragmentation_in_percent,
ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') ips
INNER JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10
AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC
Index Maintenance:
-- Rebuild heavily fragmented indexes (index_id > 0 skips heaps, whose NULL name would void the string)
DECLARE @sql NVARCHAR(MAX) = ''
SELECT @sql = @sql +
'ALTER INDEX [' + i.name + '] ON [' + OBJECT_SCHEMA_NAME(ips.object_id) + '].[' + OBJECT_NAME(ips.object_id) + '] REBUILD;' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') ips
INNER JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
AND ips.page_count > 1000
AND i.index_id > 0
EXEC sp_executesql @sql
2. Database Maintenance
Regular Maintenance Tasks:
-- Update statistics on the core synchronization tables
UPDATE STATISTICS [mms_metaverse] WITH FULLSCAN
UPDATE STATISTICS [mms_connectorspace] WITH FULLSCAN
-- Check database consistency
DBCC CHECKDB('FIMSynchronizationService') WITH NO_INFOMSGS
-- Shrink the log file only after abnormal growth; routine shrinking just forces
-- the log to regrow and fragments the file
DBCC SHRINKFILE('FIMSynchronizationService_Log', 1024)
3. Query Optimization
Performance Monitoring:
-- Monitor expensive queries
SELECT TOP 20
qs.execution_count,
qs.total_logical_reads,
qs.total_logical_writes,
qs.total_worker_time,
qs.total_elapsed_time,
qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(st.text)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE '%mms_%'
ORDER BY qs.total_elapsed_time DESC
Synchronization Optimization
1. Management Agent Configuration
Connection Optimization:
Connection Pool Settings:
- Maximum Connections: 10-20 (adjust based on source system capacity)
- Connection Timeout: 30-60 seconds
- Command Timeout: 300 seconds (5 minutes)
- Enable Connection Pooling: True
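For connectors that accept an ADO.NET connection string, or for rule extensions that open their own connections, the settings above map to connection-string keywords along these lines; a sketch only, with example server and database names:
# Sketch: connection-string values matching the pooling guidance above (names are examples)
$builder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder
$builder["Data Source"]         = "HRSQL01"
$builder["Initial Catalog"]     = "HRDatabase"
$builder["Integrated Security"] = $true
$builder["Pooling"]             = $true
$builder["Max Pool Size"]       = 20
$builder["Connect Timeout"]     = 60
$builder.ConnectionString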
Batch Size Optimization:
Recommended Batch Sizes:
- Active Directory: 500-1000 objects
- SQL Database: 100-500 objects
- LDAP: 200-500 objects
- Web Services: 50-200 objects
2. Filtering Strategies
Effective Filtering:
# Active Directory filter examples
# Exclude disabled accounts (the NOT operator must wrap a parenthesized filter)
(!(userAccountControl:1.2.840.113556.1.4.803:=2))
# Exclude system and service accounts
(!(sAMAccountName=krbtgt))
(!(sAMAccountName=*$))
# Combine multiple conditions
(&(!(userAccountControl:1.2.840.113556.1.4.803:=2))(!(sAMAccountName=krbtgt))(!(sAMAccountName=*$)))
# Note: distinguishedName cannot be matched with wildcards in an LDAP filter;
# scope the connector to specific OUs (for example OU=Employees,DC=contoso,DC=com)
# through the AD management agent's container selection instead.
SQL Database Filtering:
-- Efficient WHERE clauses (in practice, select only the columns the MA actually imports instead of *)
SELECT * FROM Users
WHERE IsActive = 1
AND LastModified > @LastRunTime
AND Department IN ('Sales', 'Marketing', 'Engineering')
ORDER BY LastModified
-- An index on (IsActive, LastModified) keeps this predicate from scanning the whole table
3. Run Profile Optimization
Optimal Run Sequence (a scripted example follows the scheduling list below):
- Delta Import: Quick detection of changes
- Delta Synchronization: Process changes only
- Export: Apply changes to target systems
- Confirming Delta Import: Verify that exported changes reached the target system
Scheduling Considerations:
Recommended Schedule:
- Delta Sync: Every 15-30 minutes during business hours
- Full Sync: Weekly during maintenance window
- Password Sync: Real-time or every 5 minutes
- Cleanup: Daily during off-hours
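As referenced above, the run sequence can be scripted against the synchronization service's WMI provider so a scheduler drives it end to end; a minimal sketch, where the management agent name and run profile names are examples and must match what is defined on the server:
# Sketch: run a delta cycle for one management agent via the MIIS WMI provider
$ns = "root\MicrosoftIdentityIntegrationServer"
$ma = Get-WmiObject -Namespace $ns -Class MIIS_ManagementAgent -Filter "Name='Contoso AD MA'"
foreach ($profileName in "Delta Import", "Delta Synchronization", "Export", "Delta Import") {
    $result = $ma.Execute($profileName)
    Write-Output ("{0}: {1}" -f $profileName, $result.ReturnValue)
    if ($result.ReturnValue -ne "success") { break }   # stop the cycle if a step fails
}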
Memory Management
1. JVM Configuration (if applicable)
Java-based Connectors:
JVM Memory Settings:
-Xms1024m
-Xmx4096m
-XX:NewRatio=3
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
2. .NET Memory Optimization
Web Application Settings (web.config values apply to web-based MIM components such as the MIM Portal, not to the synchronization engine itself):
<!-- Web.config optimizations -->
<system.web>
<compilation debug="false" targetFramework="4.5" />
<httpRuntime
maxRequestLength="1048576"
executionTimeout="3600"
enableVersionHeader="false" />
</system.web>
3. Rule Extension Optimization
Memory-Efficient Code:
using System;
using System.Collections.Concurrent;
using System.Configuration;
using System.Data.SqlClient;
using Microsoft.MetadirectoryServices;

public class OptimizedRuleExtension : IMASynchronization
{
    // Cache lookups in memory so each unique employee triggers at most one database call;
    // ConcurrentDictionary keeps the cache safe if flows run concurrently
    private static readonly ConcurrentDictionary<string, string> _departmentCache =
        new ConcurrentDictionary<string, string>();

    // Single connection string; ADO.NET pools the underlying connections
    private static readonly string _connectionString =
        ConfigurationManager.ConnectionStrings["HR"].ConnectionString;

    public void MapAttributesForImport(string FlowRuleName, CSEntry csentry, MVEntry mventry)
    {
        switch (FlowRuleName)
        {
            case "LookupDepartment":
                // Serve repeated lookups from the cache instead of the database
                string employeeId = csentry["employeeID"].StringValue;
                if (!string.IsNullOrEmpty(employeeId))
                {
                    string department = _departmentCache.GetOrAdd(employeeId, LookupDepartmentFromDB);
                    if (!string.IsNullOrEmpty(department))
                    {
                        mventry["department"].StringValue = department;
                    }
                }
                break;
        }
    }

    private string LookupDepartmentFromDB(string employeeId)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Department FROM Employees WHERE EmployeeID = @EmpId", connection))
        {
            command.Parameters.AddWithValue("@EmpId", employeeId);
            connection.Open();
            return command.ExecuteScalar()?.ToString();
        }
    }

    // Other IMASynchronization members (Initialize, Terminate, ShouldProjectToMV, and so on) are
    // omitted for brevity; the cache lives for the lifetime of the process, so clear it in
    // Terminate() if stale values are a concern.
}
Network Optimization
1. Connection Configuration
LDAP Optimization:
LDAP Settings:
- Use LDAPS (636) when possible
- Enable connection pooling
- Configure appropriate timeout values
- Use paged results for large queries
- Implement connection retry logic
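Paged results in particular matter for large directories; a minimal sketch of a paged query with System.DirectoryServices, where the search root, filter, and page size are examples:
# Sketch: paged LDAP query (search root and page size are examples)
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.SearchRoot = [ADSI]"LDAP://OU=Employees,DC=contoso,DC=com"
$searcher.Filter = "(&(objectCategory=person)(objectClass=user))"
$searcher.PageSize = 1000                               # stream results in pages instead of one large response
$searcher.ClientTimeout = [TimeSpan]::FromSeconds(60)   # fail fast on an unresponsive domain controller
$results = $searcher.FindAll()
Write-Output ("Returned {0} objects" -f $results.Count)
$results.Dispose()                                      # release the underlying search handle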
Network Tuning:
# Windows network optimization (run from an elevated prompt)
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global rss=enabled
# TCP Chimney Offload is deprecated and ignored on Windows Server 2016 and later;
# enable it only on older operating systems
netsh int tcp set global chimney=enabled
2. Bandwidth Management
Traffic Shaping:
- Prioritize synchronization traffic
- Implement QoS policies
- Schedule large operations during off-peak hours
- Use compression when available
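On Windows Server, a policy-based QoS rule can tag the sync engine's traffic so the network honors its priority; a minimal sketch, where the policy name and DSCP value are examples and the marking must also be honored by the network infrastructure:
# Sketch: tag traffic from the sync engine process with a DSCP value (46 is an example)
New-NetQosPolicy -Name "MIM Sync Traffic" -AppPathNameMatchCondition "miisserver.exe" -DSCPAction 46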
Monitoring and Measurement
1. Performance Counters
Key Metrics to Monitor:
# PowerShell script to collect performance data
$counters = @(
"\FIM Synchronization Service(*)\Objects Remaining",
"\FIM Synchronization Service(*)\Objects Processed per Second",
"\FIM Synchronization Service(*)\Total Connector Space Objects",
"\FIM Synchronization Service(*)\Total Metaverse Objects",
"\Process(miisserver)\% Processor Time",
"\Process(miisserver)\Working Set",
"\Memory\Available MBytes",
"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
"\Network Interface(*)\Bytes Total/sec"
)
# Collect data every 30 seconds for 1 hour
Get-Counter -Counter $counters -SampleInterval 30 -MaxSamples 120 |
Export-Counter -Path "C:\PerfLogs\MIM_Performance.csv" -FileFormat CSV
2. SQL Server Monitoring
Database Performance Metrics:
-- Monitor wait statistics
SELECT TOP 20
wait_type,
wait_time_ms,
signal_wait_time_ms,
wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms,
waiting_tasks_count,
wait_time_ms / waiting_tasks_count AS avg_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
'CLR_SEMAPHORE', 'LAZYWRITER_SLEEP', 'RESOURCE_QUEUE',
'SLEEP_TASK', 'SLEEP_SYSTEMTASK', 'WAITFOR'
)
AND waiting_tasks_count > 0
ORDER BY wait_time_ms DESC
3. Custom Monitoring Scripts
Automated Health Checks:
function Get-MIMSyncPerformance {
    param(
        [int]$ThresholdCPU = 80,
        [int]$ThresholdMemoryMB = 4096,
        [int]$ThresholdQueueLength = 10
    )
    $results = @{}
    # CPU usage, averaged over three 5-second samples
    $cpu = Get-Counter "\Process(miisserver)\% Processor Time" -SampleInterval 5 -MaxSamples 3
    $avgCPU = ($cpu.CounterSamples | Measure-Object -Property CookedValue -Average).Average
    $results.CPUUsage = [math]::Round($avgCPU, 2)
    $results.CPUAlert = $avgCPU -gt $ThresholdCPU
    # Memory usage: working set of the sync engine process
    $memory = Get-Counter "\Process(miisserver)\Working Set" -SampleInterval 1 -MaxSamples 1
    $memoryMB = [math]::Round($memory.CounterSamples[0].CookedValue / 1MB, 2)
    $results.MemoryUsageMB = $memoryMB
    $results.MemoryAlert = $memoryMB -gt $ThresholdMemoryMB
    # Queue length: objects remaining, summed across all management agents
    $queue = Get-Counter "\FIM Synchronization Service(*)\Objects Remaining" -SampleInterval 1 -MaxSamples 1
    $queueLength = ($queue.CounterSamples | Measure-Object -Property CookedValue -Sum).Sum
    $results.QueueLength = $queueLength
    $results.QueueAlert = $queueLength -gt $ThresholdQueueLength
    return $results
}
# Run health check
$healthCheck = Get-MIMSyncPerformance
Write-Output "CPU Usage: $($healthCheck.CPUUsage)%"
Write-Output "Memory Usage: $($healthCheck.MemoryUsageMB) MB"
Write-Output "Queue Length: $($healthCheck.QueueLength)"
Scalability Strategies
1. Horizontal Scaling
Multi-Server Deployment:
- Primary Sync Server: Main synchronization engine
- Secondary Sync Server: Disaster recovery and load sharing
- Database Clustering: SQL Server Always On Availability Groups
- Load Balancing: Distribute management agents across servers
2. Partition Management
Large Dataset Handling:
Partitioning Strategies:
- Geographic partitioning (by location)
- Organizational partitioning (by department)
- Alphabetical partitioning (by name ranges)
- Temporal partitioning (by creation date)
3. Caching Strategies
Data Caching Implementation:
using System;
using System.Runtime.Caching;

public class CacheManager
{
    private static readonly MemoryCache _cache = new MemoryCache("MIMCache");
    private static readonly object _lockObject = new object();

    public static T GetOrSet<T>(string key, Func<T> getItem, TimeSpan? expiry = null)
    {
        var item = _cache.Get(key);
        if (item == null)
        {
            // Double-checked locking so only one caller populates a missing entry
            lock (_lockObject)
            {
                item = _cache.Get(key);
                if (item == null)
                {
                    item = getItem();
                    _cache.Set(key, item, new CacheItemPolicy
                    {
                        AbsoluteExpiration = DateTimeOffset.Now.Add(expiry ?? TimeSpan.FromMinutes(15)),
                        Priority = CacheItemPriority.Default
                    });
                }
            }
        }
        return (T)item;
    }
}
Troubleshooting Performance Issues
1. Common Performance Problems
Slow Import Operations:
- Cause: Large datasets without proper filtering
- Solution: Implement delta imports and filtering
- Monitoring: Track import duration and object counts
Memory Leaks:
- Cause: Improper resource disposal in rule extensions
- Solution: Implement proper using statements and disposal patterns
- Monitoring: Track memory usage over time
Database Blocking:
- Cause: Long-running transactions
- Solution: Optimize queries and implement proper indexing
- Monitoring: Monitor blocking sessions and wait statistics
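For the blocking scenario above, the sessions involved can be listed straight from the DMVs; a minimal sketch using the SqlServer PowerShell module (the instance name is an example):
# Sketch: show sessions currently blocked in the sync database ("SQL01" is an example instance)
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "FIMSynchronizationService" -Query @"
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
"@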
2. Performance Tuning Checklist
System Level:
- [ ] Adequate hardware resources allocated
- [ ] SQL Server properly configured
- [ ] Network connectivity optimized
- [ ] Antivirus exclusions configured
Application Level:
- [ ] Appropriate filtering implemented
- [ ] Batch sizes optimized
- [ ] Connection pooling enabled
- [ ] Rule extensions optimized
Database Level:
- [ ] Indexes properly maintained
- [ ] Statistics up to date
- [ ] Query performance acceptable
- [ ] Backup strategy efficient
Best Practices Summary
1. Design for Performance
Architecture Decisions:
- Plan for growth and scalability
- Implement proper separation of concerns
- Use appropriate design patterns
- Consider future requirements
2. Operational Excellence
Monitoring and Maintenance:
- Establish performance baselines
- Implement proactive monitoring
- Regular maintenance procedures
- Capacity planning processes
3. Continuous Improvement
Performance Optimization Lifecycle:
- Measure: Establish current performance metrics
- Analyze: Identify bottlenecks and constraints
- Optimize: Implement targeted improvements
- Validate: Confirm performance gains
- Monitor: Ensure sustained performance
- Repeat: Continuously refine and improve
Conclusion
Optimizing MIM 2016 Synchronization Service performance requires a holistic approach encompassing hardware, software, network, and operational considerations. By implementing the strategies and techniques outlined in this guide, organizations can achieve optimal performance while maintaining reliability and scalability for their identity management infrastructure.
Regular monitoring, proactive maintenance, and continuous optimization are essential for sustaining high performance in production environments. The key to success lies in understanding the specific requirements of your environment and systematically addressing each performance factor.
Related Topics
- MIM 2016 Synchronization Service Overview: Architecture and components
- Active Directory Integration: AD-specific optimizations
- Rule Extensions Development: Performance-oriented coding
- Troubleshooting Guide: Performance problem resolution
- SQL Synchronization Guide: Database optimization techniques