What the score looks like
Every tool page on fewertools shows its Stack Score at the top of the verdict block. Here is the badge, with a sample breakdown:
The number on the left is the composite score. The pill is the tier. The bar visualises the score against 100. The line underneath names the three signals that produced it.
The tier ladder
Tiers are buckets, not the score itself. Two tools with very different breakdowns can land in the same tier. The tier is a quick read for "should I shortlist this." The score is for finer-grained comparison.
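The bucketing described above can be sketched in a few lines. Only the "Solid" band (55 to 69) is stated on tool pages; the other tier names and cutoffs below are illustrative placeholders, not the real ladder.

```python
# Score-to-tier bucketing, ordered from highest floor to lowest.
# Only the "Solid" band (55-69) is confirmed; the rest is hypothetical.
TIERS = [
    (85, "Top pick"),  # hypothetical tier name and cutoff
    (70, "Strong"),    # hypothetical tier name and cutoff
    (55, "Solid"),     # the band stated on tool pages
    (0,  "Pass"),      # hypothetical tier name
]

def tier_for(score: int) -> str:
    """Map a composite 0-100 score onto its tier bucket."""
    for floor, name in TIERS:
        if score >= floor:
            return name
    raise ValueError("score must be between 0 and 100")
```

Because tiers are buckets, `tier_for(56)` and `tier_for(69)` both return "Solid" even though the underlying scores differ by 13 points.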
The formula
Three components add up to 100. Each one rewards a different kind of evidence.
Verdict. The hands-on rating from our review of the tool. This is the dominant signal because everything else (ownership, pricing) only matters once we know the tool actually works.
Ownership. Founder-led tools historically deliver lower pricing risk and slower price creep. Private-equity-owned tools historically deliver the opposite. Acquired tools sit in between because the trajectory is uncertain. We track this in our public ownership ledger.
Pricing trajectory. Drawn from our Pulse log of pricing events. Recent hikes pull the score down. Recent drops or new free tiers push it up. A long stretch with no events is treated as neutral, because no news is genuinely no news.
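As a minimal sketch of how the three components add up to 100: the per-component maximums below are illustrative assumptions (the exact point split is not published here), with the verdict carrying the dominant weight as described above.

```python
# Hypothetical point split across the three components; the only
# constraints taken from the text are that they sum to 100 and that
# the verdict dominates.
VERDICT_MAX = 60    # assumed weight
OWNERSHIP_MAX = 25  # assumed weight
PRICING_MAX = 15    # assumed weight

def stack_score(verdict: float, ownership: float, pricing: float) -> int:
    """Compute the composite score.

    Each argument is a 0.0-1.0 fraction of that component's maximum,
    e.g. ownership=1.0 for founder-led, lower for PE-owned.
    """
    raw = (verdict * VERDICT_MAX
           + ownership * OWNERSHIP_MAX
           + pricing * PRICING_MAX)
    return round(raw)
```

A perfect verdict with uncertain ownership and a neutral pricing log would land somewhere in the middle of the range, which is the point: no single signal can carry a tool to the top on its own.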
Why these three
A score is only useful if the inputs are things the user actually cares about. We started with a longer list (community size, integration count, GitHub stars, market share) and cut every signal that did not change a buying decision for a solo or bootstrapped founder.
What survived:
- Verdict answers "is the tool any good." Without that nothing else matters.
- Ownership answers "is the tool likely to still be the same product in two years." This is where most ranking sites refuse to take a position.
- Pricing trajectory answers "is my bill about to go up." This is where the editorial layer earns its keep, because it is unique to fewertools.
If we cannot defend a signal as decision-changing, it does not go in.
What the score is not
- Not a popularity score. Traffic, GitHub stars, and Twitter mentions are not inputs. The number does not move when a tool gets press.
- Not a vendor survey. No tool company is ever asked for input on its own ranking. There is no questionnaire and no opt-in.
- Not paid. No tool has paid for placement or for a higher score. Our affiliate links never change a verdict, and verdicts are the dominant input.
- Not category-aware yet. The score is absolute. A "Solid" CRM and a "Solid" form builder both sit at 55 to 69. We may add category-relative variants later, but the absolute number is the canonical one for now.
How often it updates
The score is recomputed whenever any of its inputs change. In practice that means:
- A new Pulse drop for a tool re-runs the pricing component within a day.
- An ownership change (acquisition, IPO, founder exit) re-runs the ownership component immediately.
- A re-review or verdict flip re-runs the verdict component on the next deploy.
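The trigger table above amounts to a simple event-to-component mapping. The event and component names here are hypothetical labels for illustration, not the actual pipeline.

```python
# Which component each kind of input change re-runs.
# Event names and component keys are assumed for this sketch.
TRIGGERS = {
    "pulse_event": "pricing",         # new Pulse entry, within a day
    "ownership_change": "ownership",  # acquisition, IPO, founder exit
    "re_review": "verdict",           # re-review or verdict flip, next deploy
}

def component_to_rerun(event_type: str) -> str:
    """Return the score component invalidated by an input event."""
    return TRIGGERS[event_type]
```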
Every tool page shows its current score. There is no archived "this tool used to score 82" history yet, but we are adding score history alongside pricing history for every tool with enough events.
Disagreements
If you think a score is wrong, two things are useful: tell us which input you would change (verdict, ownership, or pricing trajectory) and what evidence would support the change. The formula is fixed, but the inputs are editorial and we update them when we are wrong.
Send the evidence to hello@fewertools.com or open an issue against the public site. We log corrections in the changelog.