How to Run an SFA Pilot

Most sales force automation (SFA) pilots are designed to succeed. That is the problem. When an organisation runs a pilot with its most cooperative reps during a quiet month and measures only whether data was entered, the pilot produces a positive result that does not predict real-world performance. A good pilot is designed to find problems - because finding them before full rollout is far cheaper than finding them after.

Why Most Pilots Fail to Predict Deployment Reality

The common pilot design failure follows a predictable pattern: select 3 to 5 reps who are already tech-comfortable, run the pilot during a low-activity period, and measure adoption by counting logins and form submissions. At the end of 2 weeks, adoption looks good, the vendor is satisfied, and the procurement decision is made.

Then full rollout happens. The reps who struggle with mobile interfaces reject the system. The sync failures that only surface under high transaction volumes appear for the first time. The manager dashboards that looked clean during a quiet period fall behind during a peak month. The problems were always there - the pilot just wasn’t designed to find them.

Who to Include in the Pilot Group

Include 5 to 8 reps from your most complex, highest-volume territory. Not your most cooperative reps. Not the ones who volunteered. The reps who handle the highest outlet density, the most SKU variety, and the most challenging distribution conditions.

The reason is straightforward: if the system works for your hardest use case, it will work for your easier ones. If it only works for your easiest use case, you have learned nothing about production performance.

Also include the territory manager for the pilot group as an active participant - not an observer. The manager dashboard is half the system. Testing it separately from the rep-side pilot produces an incomplete picture.

How Long to Run It

Minimum 4 weeks. This is not negotiable.

The first week is novelty. Reps are engaged simply because something is new. Data entry is high, enthusiasm is present, and everything looks fine.

The second week is where friction starts to surface. Reps begin defaulting to old habits for tasks they find awkward in the new system. Sync issues start appearing as the volume of data builds up. The manager begins navigating the dashboard daily and discovering gaps.

Weeks three and four reveal whether the system has stabilised into a sustainable workflow or whether adoption is quietly declining. These are the weeks that predict long-term performance.

Anything shorter than 4 weeks produces first-week impressions, not deployment-quality data.

Order Capture Time

Target: under 60 seconds for a known outlet with a repeat order. Time this directly by accompanying a rep on a visit during week 2. If capturing an order takes 3 to 5 minutes, the interface is too slow for a 25-outlet day and adoption will collapse.
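You can extend the same check beyond the one accompanied visit if the SFA tool exports order events with start and submit timestamps - a capability to confirm with the vendor, not a given. A minimal sketch, assuming a CSV export with hypothetical order_started_at and order_submitted_at columns:

```python
import csv
from datetime import datetime
from statistics import median

def capture_times_seconds(path):
    # One row per captured order; the timestamp column names are assumptions -
    # adjust them to whatever your SFA export actually provides.
    times = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            started = datetime.fromisoformat(row["order_started_at"])
            submitted = datetime.fromisoformat(row["order_submitted_at"])
            times.append((submitted - started).total_seconds())
    return times

times = capture_times_seconds("pilot_orders.csv")  # hypothetical export file
print(f"median capture time: {median(times):.0f}s (target: under 60s)")
print(f"orders taking over 3 minutes: {sum(t > 180 for t in times)} of {len(times)}")
```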

Strike Rate

The percentage of planned visits that are completed within the planned visit window. A strike rate below 80% during the pilot indicates either beat plan problems, system friction that is slowing reps down, or both.
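The arithmetic itself is trivial; the value is in tracking it every week of the pilot. A sketch using an invented visit log rather than any particular SFA product's export format:

```python
# Each planned visit: (outlet_id, completed_within_window) - illustrative structure only.
planned_visits = [
    ("OUT-001", True),
    ("OUT-002", True),
    ("OUT-003", False),  # visited, but outside the planned window
    ("OUT-004", True),
]

completed_on_time = sum(1 for _, on_time in planned_visits if on_time)
strike_rate = completed_on_time / len(planned_visits)

print(f"strike rate: {strike_rate:.0%}")
if strike_rate < 0.80:
    print("below 80%: investigate beat plan quality and system friction")
```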

Rep Satisfaction - Weekly, Honest Conversation

Not a survey. A direct conversation: “What is frustrating you about this system?” Ask this after week 1, week 2, and week 4. Write down the specific answers. The issues reps raise at week 1 should be smaller by week 4. If they are the same issues or growing, the system is not adapting to the workflow.
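One lightweight way to keep yourself honest about this is to record each week's answers verbatim and check whether the week-1 list is actually shrinking. A sketch, with the issue labels purely illustrative:

```python
# Issues raised at each check-in, recorded as the reps phrased them (labels invented).
issues = {
    1: {"photo upload slow on 3G", "search can't find new outlets", "login expires mid-visit"},
    2: {"photo upload slow on 3G", "login expires mid-visit"},
    4: {"photo upload slow on 3G"},
}

unresolved = issues[1] & issues[4]   # week-1 issues still being raised at week 4
new_issues = issues[4] - issues[1]   # problems that only appeared later

print(f"week-1 issues still open at week 4: {sorted(unresolved)}")
print(f"issues that appeared after week 1: {sorted(new_issues)}")
# If the unresolved set is not shrinking, the system is not adapting to the workflow.
```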

Manager Dashboard Usage

Is the territory manager opening the dashboard daily without being asked? Ask them: “Did you call or message any rep this week to ask about their visit status?” If yes, the dashboard is not delivering the visibility it promised - the manager is supplementing it with manual check-ins, which is the behaviour SFA should eliminate.

Secondary Sales Accuracy

Take the SFA-reported secondary sales figures for each pilot outlet and compare them against your existing method of recording secondary sales (distributor reports, manual counts, or however you currently do it). The two numbers should converge within 2 to 3 weeks as reps get comfortable with capture. A persistent gap of more than 10 to 15% indicates a data capture problem that will scale badly at full rollout.
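A minimal reconciliation sketch, assuming you can get both figures for each pilot outlet into a simple mapping (the outlet IDs and numbers here are invented):

```python
# Secondary sales per outlet for the same period, from two sources (figures illustrative).
sfa_reported = {"OUT-001": 42_000, "OUT-002": 18_500, "OUT-003": 9_800}
existing_method = {"OUT-001": 44_000, "OUT-002": 23_000, "OUT-003": 10_100}

for outlet, baseline in existing_method.items():
    sfa_value = sfa_reported.get(outlet, 0)
    gap = abs(sfa_value - baseline) / baseline
    flag = "  <-- investigate" if gap > 0.15 else ""
    print(f"{outlet}: SFA {sfa_value:>7,}  baseline {baseline:>7,}  gap {gap:.0%}{flag}")
```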

The Week-4 Debrief

At the end of week 4, run a structured debrief session with the pilot reps. Ask one question: “What was frustrating?” Not what worked. Not what was useful. What was frustrating.

Adoption dies in the friction points, not in the features. A rep who finds the system broadly useful but has three specific workflows that are slow or awkward will eventually revert to manual methods for those workflows, and then for adjacent workflows, and eventually for everything. The debrief is where those friction points surface - before they become an enterprise-wide adoption problem.

Document every friction point raised. This list has two uses.

What a Passing Pilot Looks Like

A pilot passes if it can demonstrate two things:

  1. Reps complete more orders per day than they did before the pilot. If order capture volume has not increased - even modestly - by week 4, the system is not accelerating field execution. (A minimal calculation sketch follows below.)

  2. Managers can answer territory questions without calling anyone. Ask the manager: “What was your coverage rate yesterday?” If they can answer from the dashboard without calling a rep, the system is delivering management visibility.

If either of these criteria fails, the pilot has failed regardless of how many other metrics look positive.
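For the first criterion, the check is simple arithmetic once you have a pre-pilot baseline for orders per rep per day. A sketch with invented figures:

```python
# Average orders captured per rep per working day (illustrative figures).
baseline_orders_per_day = {"rep_a": 14.2, "rep_b": 11.8, "rep_c": 16.5}
week4_orders_per_day = {"rep_a": 15.1, "rep_b": 13.0, "rep_c": 16.2}

improved = [r for r in baseline_orders_per_day
            if week4_orders_per_day[r] > baseline_orders_per_day[r]]

overall_before = sum(baseline_orders_per_day.values()) / len(baseline_orders_per_day)
overall_after = sum(week4_orders_per_day.values()) / len(week4_orders_per_day)

print(f"reps with higher daily order volume: {len(improved)} of {len(baseline_orders_per_day)}")
print(f"group average: {overall_before:.1f} -> {overall_after:.1f} orders/day")
# Criterion 1 passes only if the week-4 figure is above the pre-pilot baseline.
```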

Use the Friction List Before You Sign

Document the friction points before signing a full rollout contract. Use them as negotiation leverage for configuration changes.

Every SFA vendor will tell you their system is highly configurable. The pilot gives you specific, real-world examples of where the current configuration doesn’t fit your workflow. Bring those examples to the contract negotiation: “Before we sign, we need these 4 friction points resolved in the configuration. Here’s what resolution looks like for each one.”

This is not adversarial. A vendor that is confident in their system will engage with specific configuration requests. A vendor that resists them is telling you something important about how the implementation relationship will go.