
How We Test Tools

By Jake Mercer


A detailed, transparent breakdown of how every tool earns its score on ToolShed Tested. This is the data behind our recommendations.

Category-Specific Testing Protocols

Different tool categories demand different testing approaches. Below is a detailed breakdown of how we test each major product category on ToolShed Tested.

Cordless Drills Testing Protocol

Test 1 — Torque Measurement: We measure maximum torque output using a calibrated digital torque meter attached to a 1/2″ hex adapter. Three measurements are averaged to produce the final torque figure. We compare measured torque against manufacturer claims and note any discrepancies.
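The reduction is simple averaging plus a comparison against the spec sheet. Here is the arithmetic in miniature; the readings and the claimed figure below are made up for illustration, not measured results:

```python
# Reduce three torque-meter readings (ft-lbs) to a final figure and
# compare against the manufacturer's claim. All values are illustrative.
readings = [61.8, 62.4, 61.5]   # three meter readings for one drill
claimed = 65.0                  # manufacturer's spec-sheet figure

measured = sum(readings) / len(readings)
gap_pct = (measured - claimed) / claimed * 100
print(f"measured {measured:.1f} ft-lbs ({gap_pct:+.1f}% vs. claim)")
```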

Test 2 — Endurance Run: Using a fresh, fully charged battery (manufacturer-recommended size), we drill 7/8″ holes through construction-grade SPF 2×4 lumber at a consistent feed rate until the battery is depleted. The total number of holes completed is recorded. The test is run twice and the two counts are averaged.

Test 3 — Speed Test: Time to drill through standardized materials: 3/4″ pine (softwood), 3/4″ red oak (hardwood), and 1/8″ mild steel plate using appropriate bits at each material’s recommended speed.

Test 4 — Chuck Retention: We test bit slippage by driving a 1/2″ spade bit into oak at maximum torque. The chuck is hand-tightened only (no wrench). Any bit rotation relative to the chuck is measured and recorded.

Impact Drivers Testing Protocol

Test 1 — Driving Speed: Time to drive 3″ #10 construction screws into SPF 2×4 lumber. Twenty screws are driven and the average time per screw is recorded.

Test 2 — Lag Bolt Test: We drive 5/16″ × 3″ lag bolts into treated southern yellow pine (SYP) — a dense, demanding application. Success rate and driving time are recorded.

Test 3 — Endurance: Continuous driving of 3″ deck screws into SPF lumber on a single battery charge. Total screws driven is the primary metric.

Test 4 — Mode Effectiveness: For multi-speed drivers, we test each mode’s suitability: low speed for delicate electronics screws, medium for general fastening, high for structural work. We note whether modes effectively prevent overdriving or cam-out.

Circular Saws Testing Protocol

Test 1 — Cross-Cut Speed: Time to cross-cut a 2×4 SPF board at 90° and 45° bevel. Five cuts are timed and averaged.

Test 2 — Rip Cut Endurance: Continuous rip cuts through 3/4″ plywood on a single battery charge. Total linear feet is recorded.

Test 3 — Cut Quality: Cross-cut faces are examined under 10× magnification and compared against reference samples. Tear-out, surface roughness, and edge quality are evaluated on a 1-5 scale.

Test 4 — Accuracy: Miter angle accuracy is measured with a digital protractor after making 10 cuts at various settings. Maximum deviation from set angle is recorded.

Miter Saws Testing Protocol

Test 1 — Angle Accuracy: We set the miter to 0°, 22.5°, 45°, and the maximum angle, make five cuts at each setting, and measure actual angle deviation with a digital protractor accurate to 0.1°. Results are averaged per setting.
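For readers who want to follow the math, here is the per-setting reduction sketched out; the deviation values are invented for illustration:

```python
# Average the measured deviation (degrees from the set angle) for each
# miter setting. The five deviations per setting are illustrative.
cuts = {
    0.0:  [0.1, 0.0, 0.1, 0.2, 0.1],
    22.5: [0.2, 0.3, 0.1, 0.2, 0.2],
    45.0: [0.3, 0.2, 0.4, 0.3, 0.3],
}

for setting, deviations in cuts.items():
    avg = sum(deviations) / len(deviations)
    print(f"{setting:5.1f}° setting: average deviation {avg:.2f}°")
```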

Test 2 — Cut Quality: Cross-cuts through red oak at 90° and 45° are examined for tear-out, surface quality, and squareness. We check 90° cuts with a precision square to verify blade perpendicularity.

Test 3 — Dust Collection: We weigh sawdust captured in the collection bag/port versus sawdust that escapes, calculating a capture percentage. Testing uses identical material types and cut volumes.
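The capture percentage is simply captured weight over total sawdust produced; a minimal sketch with invented weights:

```python
# Dust-collection capture rate: captured weight over total sawdust
# produced. The gram figures are illustrative.
captured_g = 412.0   # sawdust weighed from the bag/port
escaped_g = 64.0     # sawdust recovered from the bench and floor

capture_pct = captured_g / (captured_g + escaped_g) * 100
print(f"capture rate: {capture_pct:.1f}%")   # -> 86.6%
```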

Test 4 — Fence Deflection: Using a dial indicator, we measure fence movement under hand pressure to evaluate rigidity. Any deflection over 0.005″ is noted as a concern for precision work.

Angle Grinders Testing Protocol

Test 1 — Material Removal Rate: Using a standard grinding disc, we grind mild steel plate for 60 seconds at consistent pressure. The plate is weighed before and after to calculate grams removed per minute.
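Because the grind runs exactly 60 seconds, the rate works out to the weight difference itself; the sketch below uses invented weights:

```python
# Material removal rate from before/after plate weights. Values are
# illustrative; a 60-second grind makes g/min equal the weight lost.
before_g = 1482.6
after_g = 1431.9
grind_time_s = 60.0

rate_g_per_min = (before_g - after_g) / (grind_time_s / 60.0)
print(f"removal rate: {rate_g_per_min:.1f} g/min")   # -> 50.7 g/min
```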

Test 2 — Cutting Speed: Time to cut through 1/4″ × 2″ mild steel flat bar using a standard cut-off disc. Five cuts are timed and averaged.

Test 3 — Safety Features: We evaluate brake stopping time (from trigger release to a full disc stop), kickback protection engagement, and anti-restart functionality under controlled conditions.

Test 4 — Runtime: Intermittent grinding (30 seconds on, 10 seconds off) on a fresh battery charge until the battery is depleted. Total active grinding time is recorded.
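Converting completed duty cycles into active grinding time is straightforward; the cycle counts below are illustrative, not measured results:

```python
# Active grinding time from a 30s-on/10s-off duty cycle. Cycle counts
# are illustrative.
on_s, off_s = 30, 10    # duty cycle: 30 seconds grinding, 10 resting
full_cycles = 47        # complete on/off cycles before depletion
partial_on_s = 12       # grinding seconds in the final, cut-short cycle

active_s = full_cycles * on_s + partial_on_s
print(f"active grinding time: {active_s // 60} min {active_s % 60} s")
```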

Cross-Category Benchmarks

To enable fair comparisons across brands and price points, we maintain standardized benchmarks:

  • Materials: All wood tests use kiln-dried SPF (spruce-pine-fir) 2×4 lumber from the same supplier. Hardwood tests use red oak boards from a consistent source. Metal tests use A36 mild steel plate.
  • Batteries: All runtime tests use manufacturer-recommended battery sizes, fully charged. Batteries are charged using manufacturer chargers in a climate-controlled environment (68-72°F).
  • Bits and Blades: Unless testing tool-specific accessories, we use identical aftermarket bits and blades (Diablo/Freud) across all brands to isolate tool performance from accessory quality.
  • Environment: Workshop testing occurs at 65-75°F, 40-60% relative humidity. Temperature and humidity are logged during each session.

Data Collection & Analysis

All test data is recorded in real time using digital logging where possible (torque meters, tachometers) and manual recording for count-based tests (holes, screws, cuts). Each quantitative test is performed a minimum of three times, and the average is reported as the final result. Outliers (more than 2 standard deviations from the mean) are discarded and the test is repeated.
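In code, the reduction rule looks roughly like this; the run values are invented, and note that a flagged run triggers a repeat rather than being silently dropped:

```python
# Sketch of the repeat-and-average reduction with the 2-standard-
# deviation outlier rule. The run values are illustrative.
import statistics

def reduce_runs(runs):
    """Return (reported mean, runs flagged for repeat)."""
    mean = statistics.mean(runs)
    sd = statistics.stdev(runs)
    flagged = [r for r in runs if abs(r - mean) > 2 * sd]
    kept = [r for r in runs if abs(r - mean) <= 2 * sd]
    return statistics.mean(kept), flagged

final, rerun = reduce_runs([48.2, 47.9, 48.5, 48.0, 48.3, 47.7, 48.4, 65.0])
print(f"reported: {final:.1f}, flagged for repeat: {rerun}")
```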

Our rating system converts raw performance data into a 5-point scale using category-specific thresholds. A drill scoring 4.5/5 in performance has measurably outperformed 90% of drills in its price class across our standardized tests.
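The conversion itself is a simple threshold lookup. The thresholds below are hypothetical, shown only to illustrate the shape of the mapping; the real values differ per category and price class:

```python
# Hypothetical thresholds converting a raw drill-torque result into the
# 5-point scale. Real thresholds vary by category and price class.
DRILL_TORQUE_THRESHOLDS = [   # (minimum ft-lbs, score)
    (70.0, 5.0), (60.0, 4.5), (50.0, 4.0), (40.0, 3.5), (30.0, 3.0),
]

def score(measured, thresholds, floor=2.5):
    for minimum, points in thresholds:
        if measured >= minimum:
            return points
    return floor

print(score(61.9, DRILL_TORQUE_THRESHOLDS))   # -> 4.5
```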

Continuous Improvement

Our testing methodology evolves as tools evolve. We review and update our protocols annually to ensure they remain relevant to current technology. When new tool features emerge (smart connectivity, adaptive speed control), we develop new tests to evaluate them fairly.

We welcome feedback on our methodology. If you believe we’re testing something incorrectly or missing an important metric, we want to hear about it.