Performance at Scale
The SAM engine scales well in most environments but has predictable bottlenecks at the high end. This page covers what slows down, what to do about it, and what is normal.
What's Normal
For typical environments:
| Environment Size | Typical Calculation Time |
|---|---|
| 100 assets | Seconds |
| 1,000 assets | < 1 minute |
| 10,000 assets | 5-10 minutes |
| 50,000 assets | 30-60 minutes |
| 100,000+ assets | Several hours; consider partitioning |
These are full-period recalculations. Single-period recalculations (current month only) are typically much faster because fewer transactions are deleted and rebuilt.
What Slows Down First
When the calculation slows down, these are the bottlenecks, in order of impact:
- Number of consuming assets — every asset is processed for every product
- Number of products — each product is a separate calculation iteration
- Number of license records — for each product, every license is a candidate
- Calculation period length — recalculating a year takes ~12× a single month
- Complexity of the rules — added affinity dimensions add per-pair scoring cost
The first two dominate. An environment with 50,000 assets and 100 products will have 5 million asset-product evaluations per calculation.
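The arithmetic behind that claim can be sketched as a toy cost model. This is illustrative only; `evaluations` and `full_period_cost` are hypothetical names, not engine APIs, and the real per-pair cost varies with rule complexity and data shape.

```python
# Toy cost model for a full recalculation. Illustrative only: the real
# engine's per-pair cost varies with rule complexity and data shape.

def evaluations(assets: int, products: int) -> int:
    """Every consuming asset is evaluated for every product."""
    return assets * products

def full_period_cost(pair_count: int, months: int) -> int:
    """Recalculating N months costs roughly N times one month."""
    return pair_count * months

# The example from the text: 50,000 assets x 100 products.
pairs = evaluations(50_000, 100)
print(pairs)                        # 5000000 asset-product evaluations
print(full_period_cost(pairs, 12))  # a full-year recalculation is ~12x that
```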
Performance Levers
Restrict the Calculation Period
The biggest single lever. Recalculating only the current open period is dramatically faster than recalculating the whole year.
Set the scheduled job to use Start Date = first of current month. Historical periods are not touched (and do not need to be — see Recalculation Behavior).
Run Off-Peak
The calculation is database-intensive. Schedule it during a window when normal user activity is low (overnight). See Scheduling and Monitoring.
Index the Hot Tables
The engine touches:
- `Asset` (heavily: every iteration filters and joins)
- `SoftwareTransaction` (the ledger)
- `AssetDetailSoftware` (the consumption source)
- `AssetDependency` (direct assignments)
- `LicenseType` (lookup)
- Engine-internal per-period working tables
Make sure the indexes shipped with xAssets are present. If a DBA has tuned them away, performance suffers.
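One way to audit this is to pull the (table, index) pairs from SQL Server's `sys.indexes` catalog view and diff them against the expected set. The checker below is a minimal sketch: the names in `EXPECTED` are placeholders, not the actual shipped index names, and fetching the rows from the database is left to whatever driver you use.

```python
# Hypothetical audit helper: compare (table, index) pairs present in the
# database against the set expected on the SAM hot tables. The names in
# EXPECTED are placeholders, not the actual shipped index names.
EXPECTED = {
    ("Asset", "IX_Asset_Lookup"),
    ("SoftwareTransaction", "IX_SoftwareTransaction_Period"),
    ("AssetDetailSoftware", "IX_AssetDetailSoftware_Asset"),
}

def missing_indexes(present: set[tuple[str, str]],
                    expected: set[tuple[str, str]] = EXPECTED) -> list[tuple[str, str]]:
    """Return expected indexes that are absent, sorted for stable output."""
    return sorted(expected - present)

# Example: SoftwareTransaction's index was dropped by a DBA.
present = {("Asset", "IX_Asset_Lookup"),
           ("AssetDetailSoftware", "IX_AssetDetailSoftware_Asset")}
print(missing_indexes(present))
# [('SoftwareTransaction', 'IX_SoftwareTransaction_Period')]
```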
Watch for Runaway Affinity
Adding too many affinity rules — or rules that join across large temp tables — can slow each iteration. The default rule set is performance-tuned. Custom additions (especially scoping patterns) need to be tested at scale before going to production.
If a calculation slows down significantly after a rule change, the rule is the suspect.
Archive Old Periods
Per-period engine data accumulates over time. If storage growth becomes a concern, archiving very old periods can help — but this is destructive (loses historical position for the archived range) and should only be done with a deliberate retention policy.
When the Calculation Times Out
Some environments see the calculation hit a SQL Server timeout. The fixes, from simplest to most involved:
- Increase the SQL command timeout in xAssets settings (see Configuration Guide).
- Restrict the period. A monthly recalculation is much less likely to time out than a yearly one.
- Process in batches. For very large environments, consider partitioning by manufacturer — recalculate Microsoft products in one job, Adobe in another, etc.
The third option requires custom job orchestration and is only worth it for genuinely large estates.
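That orchestration can be sketched as below. Note that `run_calculation` is a stand-in for however your environment triggers a scoped recalculation; xAssets does not ship this function, and the batch groupings are illustrative.

```python
# Sketch of manufacturer-partitioned batch orchestration. `run_calculation`
# is a placeholder for whatever triggers a recalculation scoped to a set
# of manufacturers in your environment; it is not an xAssets API.
MANUFACTURER_BATCHES = [
    ["Microsoft"],
    ["Adobe"],
    ["Oracle", "IBM"],  # smaller estates can share a batch
]

def run_in_batches(run_calculation, batches=MANUFACTURER_BATCHES):
    """Run one scoped calculation per batch, sequentially, so each run's
    asset-product evaluation count stays bounded."""
    return [run_calculation(manufacturers=batch) for batch in batches]

# Example with a dummy runner that just echoes its scope:
print(run_in_batches(lambda manufacturers: manufacturers))
# [['Microsoft'], ['Adobe'], ['Oracle', 'IBM']]
```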
Memory Use
The engine builds in-memory data structures during processing. For typical environments this is bounded; for very large ones (hundreds of thousands of assets) it can become significant.
If you see the calculation process consuming many gigabytes of memory:
- Confirm the host has enough RAM
- Consider running on a dedicated database/calculation host rather than co-located with the web tier
- Consider the period restriction approach (smaller scope = less data in memory)
Discovery and Load Performance
A separate concern from the calculation: the upstream Discovery → Recognition → Load Now pipeline can also slow down at scale. Specifically:
- FixADPDates processing on bulk discovery can be slow when many new assets arrive at once
- Recognition processing scales with the number of new titles per load
These are platform concerns; see the IT Asset Management Guide for discovery performance guidance.
Monitoring Performance Over Time
For a running production environment, track:
- Calculation duration per scheduled run (chart the trend)
- Asset count growth over time
- Database storage growth attributable to SAM data
A sudden jump in calculation duration without a matching asset count jump indicates something has changed — usually a rule edit, a data quality issue, or a database statistics issue.
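That heuristic is easy to automate against the run history. A minimal sketch, assuming the history is a list of (duration, asset count) pairs; the thresholds are illustrative and should be tuned against your own baseline.

```python
# Flag runs whose duration jumped without a matching asset-count jump.
# Thresholds are illustrative; tune against your own baseline.

def flag_anomalies(runs, duration_jump=1.5, asset_growth=1.1):
    """runs: list of (duration_minutes, asset_count), oldest first.
    Returns the runs that got much slower without commensurate growth."""
    flagged = []
    for prev, cur in zip(runs, runs[1:]):
        slower = cur[0] / prev[0] >= duration_jump
        grew = cur[1] / prev[1] >= asset_growth
        if slower and not grew:
            flagged.append(cur)  # suspect: rule edit, data quality, or stats
    return flagged

history = [(30, 10_000), (32, 10_200), (60, 10_300)]
print(flag_anomalies(history))  # [(60, 10300)]
```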
Related Reading
- Scheduling and Monitoring
- Recalculation Behavior — why period restriction works
- Customizing the Calculation: Testing Rule Changes — performance testing for new rules