Performance and scaling guide

This document provides performance and sizing guidance for Optimizely B2B Commerce to assist customers with capacity planning and performance tuning, offering applicable reference architectures along with baseline metrics and best-practice recommendations.

Prerequisites

The following is an overview of the core components, or subsystems, that make up the B2B Commerce e-commerce platform, along with prerequisites for supporting assets such as the operating system, application dependencies, and hardware.

Disclaimer

The B2B Commerce platform was designed to operate in a wide variety of IT environments and integration scenarios. Ongoing maintenance activities of the environment, including but not limited to web server log monitoring, database backups, application configuration, and other routine tasks, are not covered in this document. These activities are, however, part of on-premises hosting scenarios, and the reader is expected to have general familiarity with all administrative aspects of a client-server application deployed on the Microsoft technology stack.

The performance and sizing guidelines in this document are directional in nature and should not be interpreted as performance guarantees, since actual overall application performance depends on a variety of external factors, such as third-party web service dependencies, network connectivity, application code, and so on. These factors are not controllable at the deployment and infrastructure level, which is what this guide is intended to cover.

System Components

  • Web Server - Handles the web traffic and hosts the B2B Commerce MVC Application
  • Integration Service - Service endpoint to support integration scenarios and data processing activities
  • Database Server - Houses all of B2B Commerce database tables in a single database
  • WIS Server - Hosts the WIS agent, which processes back-end integration jobs by connecting to external resources such as ERP, PIM, CRM, or other line-of-business applications and data repositories. The WIS agent works together with the integration service on the web server to process data integrations such as customer data, product data, inventory data, and order data

While it is possible to install and configure all of these subsystems on a single machine (server), a typical production deployment requires them to be installed and configured on distinct machines. Machines in this context refer to physical servers or virtualized resources such as hypervisor-based virtual machines or cloud (AWS/Azure) instances. The actual IT infrastructure landscape will include additional devices such as firewalls, routers, Active Directory, and other elements not depicted here.

Hardware Specification

Hardware specifications are provided as reference points and for guidance. Specifics depend on the operating system selected and the anticipated amount of processing. When in doubt, consult Microsoft's documented system requirements for the operating system in use.

The following are drivers of computing and memory needs and can be considered in advanced tuning efforts:

  • Size of product catalog:
    • Less than 100,000 SKUs: small catalog > less computing/memory intensive
    • Between 100,000 and 1,000,000 SKUs: mid-sized catalog > medium computing/memory needs
    • More than 1,000,000 SKUs: large catalog > upper bound computing/memory needs
  • Number of Attributes and complexity of product data model:
    • Few attributes > lower computing/memory needs
    • Many attributes and complex pricing tables (pricing matrix) > higher computing/memory needs
  • Number of customer (user) records:
    • Less than 50,000 records: small user base > less computing/memory intensive
    • Between 50,000 and 500,000 records: mid-sized user base > medium computing/memory needs
    • More than 500,000 records: large user base > upper bound computing/memory needs
  • Number of images, video, products, customers:
    • Low volumes of records in DB > less storage on DB server
    • Low number of images/video (or use of CDN) > less storage on web server
    • High volumes of records (> 500,000/entity) > more storage on DB server
    • High volumes of images/video on web servers > more storage on web server
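Taken together, these drivers suggest a simple tiering heuristic. The following sketch is purely illustrative (the function and tier names are not part of the product; only the thresholds come from the list above):

```python
def sizing_tier(sku_count, user_count):
    """Classify a deployment into a rough sizing tier using the catalog
    and user-base thresholds listed above (illustrative helper only)."""
    def tier(value, small_max, mid_max):
        if value < small_max:
            return "small"
        if value <= mid_max:
            return "mid"
        return "large"

    catalog = tier(sku_count, 100_000, 1_000_000)   # SKUs in the catalog
    users = tier(user_count, 50_000, 500_000)       # customer (user) records
    # Size for the more demanding of the two drivers.
    rank = {"small": 0, "mid": 1, "large": 2}
    return max(catalog, users, key=lambda t: rank[t])
```

For example, a deployment with 80,000 SKUs but 600,000 user records sizes as "large": capacity should follow the more demanding driver.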

Web Server

Component Specification
Memory 16 to 24 GB
Computing 3.1 GHz (64-bit processor) or faster multi-core
Disk Space 30 to 80 GB

Database Server

Component Specification
Memory 16 to 32 GB
Computing 3.1 GHz (64-bit processor) or faster multi-core
Disk Space 120 to 200 GB (if data is stored here)

WIS Server

Component Specification
Memory 16 to 24 GB
Computing 3.1 GHz (64-bit processor) or faster multi-core
Disk Space 30 to 80 GB

Software Requirements

Software requirements are summarized for the three different server types typically found in a B2B Commerce enterprise deployment that makes use of back-office integrations to the ERP, OMS, or PIM systems.

Web Server

Area Specification Comments
Operating System Windows Server 2012
  Windows Server 2012 R2 Preferred
  Windows Server 2016
IIS IIS 8.5 (Integrated Mode) On Windows Server 2012 R2
  IIS 10.0 (Integrated Mode) On Windows Server 2016
.NET Framework 4.5.2
ASP.NET MVC ASP.NET MVC 5.2.3

Database Server

Area Specification Comments
Operating System Windows Server 2012 R2 Preferred
  Windows Server 2016  
Database SQL Server 2012 Standard or Enterprise preferred*
  SQL Server 2014 Standard or Enterprise*
  SQL Server 2016 Standard or Enterprise*
* Standard or Enterprise editions support the high availability options discussed below.

SQL Server Standard Edition is appropriate for customers who seek to use replication or database mirroring to transfer data from the primary database to secondary databases for database synchronization in redundancy and high availability scenarios.

Replication uses the publish-subscribe model: a primary server, referred to as the Publisher, distributes data to one or more secondary servers, or Subscribers. Replication enables real-time availability and scalability across these servers. It supports filtering to provide a subset of data at Subscribers and allows for partitioned updates. Subscribers are online and available for reporting or other functions, without query recovery. SQL Server offers three types of replication: snapshot, transactional, and merge. Transactional replication provides the lowest latency and is usually used when high availability is needed.

Database mirroring is primarily a software solution for increasing database availability. Mirroring is implemented on a per-database basis and works only with databases that use the full recovery model; the simple and bulk-logged recovery models do not support database mirroring.

Always-On Failover Clustering is available in the Standard and Enterprise editions of SQL Server and requires additional shared disk storage capacity in the form of a SAN or NAS. Failover clustering provides high-availability support for an entire instance of SQL Server.

A failover cluster is a combination of one or more nodes, or servers, with two or more shared disks. Applications are each installed into a Microsoft Cluster Service (MSCS) cluster group, known as a resource group. At any time, each resource group is owned by only one node in the cluster. The application service has a virtual name that is independent of the node names and is referred to as the failover cluster instance name. An application connects to the failover cluster instance by referencing the failover cluster instance name, without having to know which node hosts the instance.

WIS Server

Area Specification Comments
Operating System Windows Server 2012 R2 Preferred
  Windows Server 2016
.NET Framework 4.5.2

Configurations

  • B2B Commerce websites are reached on ports 80 (HTTP) and 443 (HTTPS)
  • Web servers communicate to Database servers through TCP (default port 1433)

  • WIS Server must be able to communicate with the load-balanced endpoint of the web integration service through port 443.
  • WIS Server must be able to access all back-office systems it is integrating with such as ERP, PIM, and so on.

  • B2B Commerce uses standard SMTP mail for sending transactional emails, alerts, and so on, and therefore must have access to the SMTP server via TCP port 25. External MTAs are supported as long as they provide authenticated and non-authenticated SMTP. SMTP settings are available from the B2B Commerce Application Settings (within the Admin Console) and from the IIS web.config settings.
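For reference, SMTP delivery in an ASP.NET application can also be configured through the standard .NET mailSettings section of web.config; the host, port, sender, and credentials below are placeholders, and the exact settings surfaced by B2B Commerce may differ:

```xml
<!-- Placeholder values; adjust host, sender, and credentials as needed -->
<system.net>
  <mailSettings>
    <smtp from="noreply@example.com" deliveryMethod="Network">
      <!-- TCP port 25 per the guidance above -->
      <network host="smtp.example.com" port="25"
               userName="smtpUser" password="smtpPassword" />
    </smtp>
  </mailSettings>
</system.net>
```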

Reference Architectures

In this section, we introduce three distinct reference architectures that provide representative topologies illustrating how the B2B Commerce subsystems can be deployed to meet client requirements. The proposed architectures are not meant to be prescriptive. Instead, these topologies are intended to give IT administrators and system engineers reference points when sizing for their particular audiences and the anticipated usage patterns of one or more e-commerce sites hosted within B2B Commerce. Optimizely has, however, confirmed that the presented architectures are real-world examples of existing clients.

Minimum Topology

As indicated earlier in this document, it is highly recommended, for production purposes, that B2B Commerce subsystems be deployed to dedicated machines. Therefore the following topology is considered the minimum production type deployment.

Reference Metrics

Customers using this type of topology typically have low order volumes and a limited number of registered customers. Their online channel is either new, or they are running a dedicated pilot with a limited number of their business partners. Some customers in phase I of their online presence, who do not yet enable online purchasing, also use this topology to bring their product catalog online, running mostly a pure content site without transactions.

  • # of Orders per Month < 500,000
  • # of Registered Customers (User Profiles) < 600,000
  • # of SKUs in the Catalog < 500,000

Limitations

Customers who choose to deploy B2B Commerce in this model need to be aware that this deployment does not provide redundancy and is therefore susceptible to outages, possibly leading to lost revenue and a poor online experience for their customers. Optimizely does not recommend this topology when mission-critical, revenue-generating transactions are at the heart of the web presence.

Usage Scenarios

The use of deployments following this minimal production deployment architecture should be limited to scenarios where:

  • A single website is hosted within B2B Commerce
  • No commercial transactions take place on the website (no checkout, no order placement).
  • Website is primarily focused around presenting content to users such as online catalogs and pure content sites.
  • Websites with minimal transactions, where the online e-commerce channel is not a primary source of revenue or the system is not qualified as "mission critical".
  • Pilots and POCs, testing the system with a dedicated pool of audiences or onboarding a particular user group.

Production Topology

The production topology represents the typical production-grade deployment configuration most of Insite's customers adopt when they launch their B2B and B2C e-commerce business on B2B Commerce, unless they already have an established e-commerce practice that requires additional scale to support millions of transactions per day, in which case they should consider the Enterprise topology discussed in the next section.

The production topology accomplishes a few critical improvements over the Minimum topology introduced in the previous section:

  1. Redundancy is ensured on the web server and database level due to clustered web servers and database servers (or replication). A load balancer (hardware preferred) is responsible for balancing incoming traffic across the two web servers. Distributed File System Replication can be configured between the web servers to manage shared files such as web application files that are part of the code deployment and other shared assets, such as product images, spec sheets, and so on.
  2. The integration service has been placed on a dedicated web server that resides in the DMZ and has the sole responsibility to process any integration jobs in conjunction with the server-side Windows Integration Server (WIS), a core component of B2B Commerce backend integration. This is recommended especially if integration patterns call for real-time data flows or frequent loads between the website and the backend systems. By using a dedicated Integration server, the web servers are not impacted when integration jobs are executed.
  3. On the database server level, replication or clustering is used to support high availability.
  4. While not depicted here, one should also consider redundancy for the Integration server, a concept introduced in the upcoming Enterprise Topology section of this document.

Reference Metrics

Customers using this type of topology typically have substantial order volumes and increasing numbers of registered customers. Their online channel is established, and a mission-critical system with 98% uptime or better is required.

  • # of Orders per Month up to 2,000,000 (1.5 orders/second)
  • # of Registered Customers (User Profiles) up to 10,000,000
  • # of SKUs in the Catalog up to 1,000,000

Usage Scenarios

Following are typical usage scenarios supported by the production topology:

  • Multi-website deployments where B2B Commerce houses several (up to 30) dedicated e-commerce websites
  • Mid to large scale customer base
  • Integration to ERP systems and/or other back-office systems such as Order Management, PIM, CRM, and so on.

Enterprise Topology

As the focus shifts from ensuring a redundant and resilient architecture (more up-time), to a scalable architecture that supports higher (or spiking) load patterns, more throughput, and overall larger volumes of transactions (orders), the concept of redundant web servers can be scaled out accordingly. This permits Enterprise customers of B2B Commerce to adjust the deployment and B2B Commerce subsystems to meet even the most ambitious performance requirements.

In this topology the following areas have been scaled-out to achieve higher capacity:

  1. Additional web servers have been provisioned allowing load-balancing between four dedicated web servers, each part of a larger DFSR scenario.
  2. The database server is clustered with two or more nodes.
  3. Redundant Integration servers are load balanced to process integration jobs in collaboration with WIS servers in the back-end.
  4. Multiple WIS servers (behind the firewall) can be configured to support large scale integration scenarios. For instance, one WIS server may handle any data connectivity to ERP, while another dedicated WIS Server enriches product data in B2B Commerce from a dedicated PIM system.

Reference Metrics

Customers using this type of topology operate multi-site, multi-geography based large scale e-commerce properties that represent a major revenue channel for the organization.

  • # of Orders per Month > 2,000,000 (3 orders/second)
  • # of Registered Customers (User Profiles) > 10,000,000
  • # of SKUs in the Catalog > 1,000,000 and complex business rules

Usage Scenarios

Following are typical usage scenarios supported by the enterprise topology:

  • Multi-website deployments where B2B Commerce houses many dedicated e-commerce websites.
  • Large scale customer base with complex backend integrations and real-time requirements
  • Integration to ERP systems and/or other back-office systems such as Order Management, PIM, CRM, and so on.
  • Complex business rules and multi-faceted relationships between customers (organizations) and buyers (users)
  • Staged purchasing and approval workflows for requisition orders
  • Highly seasonal business where capacity for peak periods needs to be ensured

Guide

In this section of the document, we discuss additional considerations for sizing and scaling B2B Commerce. These discussions are intended to provide the interested reader with supplemental information about the .NET-based technology stack and related concepts from Microsoft's best-practice recommendations pertaining to performance.

Optimizely Technology Stack

B2B Commerce is built on top of the Microsoft .NET technology stack, which comprises the following tiered application patterns:

  • Database Tier: On the persistence layer, B2B Commerce is optimized for Microsoft SQL Server 2012.
    • NHibernate is used as Object Relational Mapping technology which can be configured with a dedicated second-level cache configuration such as Redis.
  • The Business Tier is comprised of POCOs and Services that make up the Insite.Model and Insite.Domain pillars all coded in C# and follow the IOC pattern using Microsoft's Unity container. NHibernate's 2nd level cache can be turned on to support object-caching via web.config settings.

    • Insite.Services represents a facade and key-entry point for the Application Programming Interface (API) against which customizers are developing when building extensions and customizations to the platform.

  • The Presentation tier is an ASP.NET MVC application that directly sits on top of the Insite.Services facade. Full page and partial page caching support is available as part of the ASP.NET framework and as part of B2B Commerce.
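As an illustration of the second-level cache mentioned above, NHibernate's object cache is enabled through configuration properties along these lines (a hedged sketch: the provider class shown comes from the commonly used NHibernate.Caches.Redis package and is an assumption; substitute whichever cache provider is actually deployed):

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- Turn on the second-level (object) cache and the query cache -->
    <property name="cache.use_second_level_cache">true</property>
    <property name="cache.use_query_cache">true</property>
    <!-- Assumed provider; replace with the deployed cache provider -->
    <property name="cache.provider_class">NHibernate.Caches.Redis.RedisCacheProvider, NHibernate.Caches.Redis</property>
  </session-factory>
</hibernate-configuration>
```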

Guidelines

The following discusses three major focus areas that are applicable when scaling an application such as B2B Commerce:

  1. Specialization
  2. Optimization
  3. Distribution

Specialization

B2B Commerce is comprised of a set of subsystems that can be deployed independently. This supports the goal of specialization. The objective here is to break apart the application into smaller pieces to isolate dedicated processes onto dedicated machines and computing power to achieve a more scalable and performant system overall.

Consider the following areas when deploying B2B Commerce:

  1. Dedicated machines for web server, database server, integration service, and integration server are recommended (see Production Topology).
  2. Consider moving static assets such as images and/or video to dedicated Content Delivery Networks (CDNs).
    1. Some customers prefer to deliver all product images from services such as Scene7 or Akamai CDN.
    2. B2B Commerce has built-in support for image deployments to AWS and Azure CDNs.
    3. Video often may be delivered through platforms such as Scene7, Brightcove, YouTube, or others.
  3. Use of dedicated hardware devices for SSL encryption

Optimization

Optimization can and should be applied on several levels:

  1. Network Infrastructure:
    1. Use hardware load balancers for web servers
    2. Use SSL offloading wherever possible
    3. Deploy firewalls with at least 1 Gbps throughput
    4. Deploy switches with at least 1 Gbps throughput
    5. Use IIS compression for static assets
  2. Storage Setup:
    1. Install two or more host bus adapters in database server and use PowerPath/multipath I/O.
    2. Use mesh-networking topology for SAN fiber switches
    3. Use a SAN that is optimized for write operations
  3. Computing / Memory:
    1. Adjust computing and memory capacity based on performance counters and monitoring results representing true traffic and load patterns
  4. Understand business requirements as these relate to caching and cache-invalidation settings
    1. Adjust cache expiration and invalidation strategies accordingly
    2. Consider Edge caching such as Akamai
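As a concrete example of the IIS compression recommendation, static compression and client-side caching of static assets can be enabled in web.config; this is generic IIS configuration, not B2B Commerce specific:

```xml
<system.webServer>
  <!-- Compress static assets (requires the IIS static compression feature) -->
  <urlCompression doStaticCompression="true" doDynamicCompression="false" />
  <staticContent>
    <!-- Let clients cache static assets for 7 days -->
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
  </staticContent>
</system.webServer>
```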

Distribution

To implement distribution you need to add servers, duplicate the application across them, and implement load balancing. For load balancing, you can use Network Load Balancing (NLB), a service included with all editions of Windows Server. However, hardware load balancing devices are recommended for production environments while NLB may be used in development or QA environments.

Scale Up vs Scale Out

B2B Commerce supports both scaling up and scaling out. The following presents a brief overview of the two approaches and when to use each.

Up

With this approach, existing hardware capacity is increased. Existing hardware components, such as a CPU, might be replaced with faster ones, or one might add new hardware components, such as additional memory or solid state drives (SSD). The key hardware components that affect performance and scalability are CPU, memory, disk, and network adapters. An upgrade could also entail replacing existing servers with new servers.

Out

With this approach, more servers are added to the farm to spread application processing load across multiple machines. Doing so increases the overall processing capacity of the system.

Pros and Cons

Scaling up is a simple option and one that can be cost effective. It does not introduce additional maintenance and support costs. However, any single points of failure remain, a risk that should only be accepted in the most minimalistic deployments (refer to Minimum Topology).

Beyond a certain threshold, adding more hardware to the existing servers may not produce the desired results. For an application to scale up effectively, the underlying framework, runtime, and computer architecture must also scale up. B2B Commerce is specifically optimized for multi-threading when it comes to integration processing.

Scaling out enables the addition of more servers in anticipation of further growth, and provides the flexibility to take a server in the farm offline for upgrades with relatively little impact on the cluster. In general, the ability of an application to scale out depends more on its architecture than on the underlying infrastructure. B2B Commerce supports scaling out in various areas, such as the web server role, the database server role, and both integration roles: the website side of the integration as well as the back-office side of the integration processor. Each of these roles can be scaled out independently.

Miscellaneous

Additional performance tuning tips are provided here:

  • Database should be configured in a MSSQL cluster or mirrored pair (active/passive or active/active) for failover
  • Database should be configured to use separate volumes for operating system, MSSQL data files, MSSQL log files, and MSSQL backups
  • MSSQL data files should be placed on a RAID 10 volume, ideally on a SAN utilizing more than 4 spindles.
  • MSSQL temp DB files should be configured one-per-processor-core of the database server
  • MSSQL data file auto-growth may be set to 100 or 250 MB (minimum - depending on overall database size)
  • Database Server Instance should be dedicated to B2B Commerce and the SQL Service should be configured to use 60 to 80% of the system memory
  • While possible, it is not recommended that the B2B Commerce database be installed on the same machine or database instance that is used by other internal, backend, or other applications.
  • Web Server should be dedicated to B2B Commerce, and the IIS application pool should be set to use 60 to 80% of the system memory
  • Redundant hardware firewalls / load balancers configured for automatic failover
  • Use dynamic DNS for multi-site hosting (in case primary hosting facility is unavailable)
  • If using virtualized machines, it is recommended that multiple physical servers be used to eliminate dependency on the availability of a single physical server when high availability requirements exist.
  • Anti-virus software should not be running and scanning SQL volumes.
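Several of the database recommendations above map to straightforward T-SQL. The following sketch is illustrative only: the memory value assumes a 32 GB server (roughly 75%, within the 60 to 80% guidance), and the file path is a placeholder for a dedicated tempdb volume:

```sql
-- Cap SQL Server memory per the 60-80% guidance (example: 24 GB on a 32 GB server)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 24576;
RECONFIGURE;

-- Add a tempdb data file (one data file per processor core, per the guidance
-- above; the path is a placeholder)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 1024MB);
```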

Ongoing Effort

As application usage continues to increase and the transacted business volume expands, performance tuning is an ongoing effort. A critical step in this process is to measure existing capacity and performance and align the results with anticipated growth assumptions and metrics. Application testing and profiling are techniques used to determine where to spend capacity adjustment efforts. Some of our customers use tools such as Gomez, DynaTrace, or HP LoadRunner for these initiatives.

Performance Testing

As part of Insite's ongoing efforts to ensure that our product can perform and scale according to customer needs and growth dynamics, the product verification team has conducted a variety of load and throughput testing scenarios. These tests were specifically aimed at understanding what hardware configurations would be required to obtain constant throughput behavior of the system as the number of concurrent users increases. That is, our core objective was to establish the hardware configurations suited to support the same ratio of orders placed to users on the site as user traffic increased. Our target was an average of around 14 orders per user per hour.

The tests were all conducted within a dedicated AWS environment using a set of distinct configurations to address the following performance and scalability questions:

  1. When applying load on the system, how many orders can be processed concurrently until system boundaries are hit?
  2. How many concurrent users can transact on the system before the system begins to degrade in terms of response time and ability to process transactions (here: completed page loads)?
  3. How linearly does throughput scale as we add web servers in horizontal scaling scenarios?

We measured the impact of a variety of configurations to emulate real-life load while maintaining laboratory conditions with respect to externalities and the test harness.

Test Harness

JMeter was used as the performance testing tool generating the necessary load on the system. The test cases covered two explicit objectives:

  • Concurrency simulation, which included wait times between pages based on reference values gathered from reviewing user behavior patterns on several live production websites (incorporating think/read time).
  • Load simulation, which represents brute force order submission into the system to understand throughput and order taking capacity excluding any wait factors.
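The difference between the two objectives comes down to pacing. The hourly transaction rate of a concurrency simulation can be approximated as a function of think time, as in this illustrative sketch (the think-time and page-load values are assumptions, not measured figures):

```python
def transactions_per_hour(users, think_time_s, page_load_s):
    """Approximate hourly page transactions for a concurrency simulation:
    each virtual user completes one page every (think time + page load)
    seconds. A load simulation is the degenerate case with zero think time."""
    return users * 3600.0 / (think_time_s + page_load_s)

# For example, 110 users with an assumed ~17 s think time and ~1 s page
# load would generate about 22,000 transactions/hour.
print(transactions_per_hour(110, 17, 1))  # 22000.0
```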

Transactions, for this review, are defined as the number of completed page loads. The tests simulate a variety of search and browse paths, with 1 out of 4 users following an ordering path. During the ordering path, products are added to the cart and the checkout process is followed through to order placement. The checkout process does not rely on external interfaces for payment, freight, or tax calculations, in order to eliminate these externalities.

To further ensure laboratory conditions, edge caching and other output caching were disabled. This means that the test outcomes should not be regarded as final performance characteristics of the product, because these can be greatly enhanced by enabling the various caching strategies. For the purpose of this testing, however, the bare system capacity is being tested, not the overall experience.

Test Results

The test results were achieved using different configurations:

Configuration Specification # Concurrent Users on the Site # Page Transactions / Hour # Orders Captured / Hour
Low 1 web server (7.5 GB, 4 virtual cores), SQL (15 GB, 4 virtual cores) 110 21,960 1,542
Medium 4 web servers (7.5 GB, 4 virtual cores each), SQL (15 GB, 4 virtual cores) 550 78,120 7,704
High 8 web servers (7.5 GB, 4 virtual cores each), SQL (15 GB, 4 virtual cores) 1,110 151,560 14,970
  • Low Configuration: Using a single web server, the system was able to process 110 concurrent users who generated an order volume of 1,542 orders/hour. That is 14 orders/hour/user on average.
  • Medium Configuration: Using 4 web servers of the same kind, the system was able to process 550 concurrent users who generated an order volume of 7,704 orders/hour. That is again 14 orders/hour/user on average: 4x the hardware supported 5x the users and 5x the order volume.
  • High Configuration: Doubling the hardware again to 8 web servers, the system handled 2x the user load but showed only a 1.94x increase in orders, meaning a slight degradation in the system's ability to capture orders at linear user increments is observable, all other things being constant.
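The per-user throughput observations above can be verified with a few lines of arithmetic over the reported figures:

```python
# Figures taken from the test-result table above.
results = {
    # configuration: (web servers, concurrent users, orders/hour)
    "low":    (1, 110, 1_542),
    "medium": (4, 550, 7_704),
    "high":   (8, 1_110, 14_970),
}

for name, (servers, users, orders) in results.items():
    print(f"{name}: {orders / users:.1f} orders/hour/user")
# low: 14.0, medium: 14.0, high: 13.5
```

Per-user throughput stays roughly constant at 14 orders/hour/user, with a slight drop in the High configuration.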

Conclusions

The laboratory testing has helped establish that B2B Commerce can be scaled out (horizontally) by gradually adding web server resources to support increasing volumes of users, transactions, and orders on the site. The tests were executed in an AWS environment using compute-optimized c3.xlarge instances with 4 vCPUs and 7.5 GB of memory each.

This should provide a good reference point for clients who are interested in self-hosting and decide to leverage the Amazon infrastructure. For clients hosting on-premises on physical or virtualized servers managed within their own facilities, the testing results above can be translated accordingly, depending on the specific site and infrastructure conditions applicable on a client-by-client basis.