This question has the makings of another "Torque Vs. Horsepower" debate, guys!
Nonetheless, here are a couple of thought starters:
o Friction depends on the normal force (at right angles to the surfaces) and the coefficient of friction between the surfaces. That's why it's harder to drag a diesel block across the shop floor than an aluminum LS Chev one. Clearly, when you're transmitting more torque through a gear pair, etc., there is a greater contact force and thus more friction; as the RPM rises, that translates into more power lost (rough numbers in the first sketch after this list).
o This is also why, with your car on a chassis dyno (to simplify, let's say with a manual tranny), you could take the plugs out and rotate the engine, driveline, and inertia roller with a breaker bar, even though those same components absorb hundreds of horsepower at high load and RPM.
o A really precise analysis of all the drivetrain losses would likely reveal a range of characteristic behaviors vs. speed and load: some fixed, some a direct percentage of RPM and/or load, and some increasing at a power greater than one (e.g., squared). Consider a 4,000 RPM stall torque converter with lock-up. At 1,000 RPM, it absorbs essentially 100% of the engine's power. Between there and lock-up, the loss varies with RPM and engine torque. After lock-up, it becomes a much smaller factor (see the second sketch after this list).
o Obviously, just as on an engine dyno, everything we can do to standardize the test conditions [tire pressures, the actual (test-specific?) tires, hold-down strap tension and angles, all lubricant temps, atmospheric correction factors, etc.] will help with consistency, but with many more factors in play than on an engine dyno, the opportunities for variation will remain higher.
o Bottom line, an approximate percentage loss is the best we can do for the general case, and unless dyno racing is the sole object, it serves the purpose of showing proportional changes as the tuning process proceeds (see the third sketch below).
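To put rough numbers on the friction point above, here's a minimal Python sketch. Everything in it (the friction coefficient, mesh radius, sliding fraction, and the torque/RPM cases) is a made-up, illustrative value, not data from any particular gearbox; the only point is that the lost power scales with both the transmitted torque and the shaft speed.

```python
# Rough sketch (not a real gear-mesh model): friction force scales with the
# contact (normal) force, and the power lost scales with that force times the
# sliding speed, so losses grow with both load and RPM.
# All numbers below are made-up, illustrative values.

import math

mu = 0.05                 # assumed effective friction coefficient at the mesh
mesh_radius_m = 0.04      # assumed effective radius where sliding occurs (m)

def friction_loss_watts(torque_nm, rpm, slide_fraction=0.1):
    """Estimate power lost to sliding friction for a single gear pair.

    torque_nm      -- torque being transmitted (sets the contact force)
    rpm            -- shaft speed
    slide_fraction -- assumed fraction of pitch-line speed that is sliding
    """
    contact_force = torque_nm / mesh_radius_m           # normal-ish tooth force
    friction_force = mu * contact_force                 # F = mu * N
    pitch_line_speed = (rpm * 2 * math.pi / 60) * mesh_radius_m
    sliding_speed = slide_fraction * pitch_line_speed
    return friction_force * sliding_speed               # P = F * v

for torque, rpm in [(100, 1000), (100, 6000), (400, 6000)]:
    print(f"{torque} N·m @ {rpm} rpm -> ~{friction_loss_watts(torque, rpm):.0f} W lost")
```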
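And here's a toy model of the "range of characteristic behaviors" idea: a fixed term, terms proportional to RPM and to load, a term growing with RPM squared, and a crude before/after lock-up converter term. Every coefficient is a placeholder chosen for illustration, not a measured value.

```python
# Toy loss model, not a validated one: total drivetrain loss as a mix of a
# fixed term, terms proportional to RPM and to load, a term growing faster
# than RPM (squared, e.g. oil churning), and a torque-converter term that
# dominates below lock-up. All coefficients are placeholders.

def drivetrain_loss_hp(engine_hp, rpm, locked_up):
    fixed = 2.0                        # seal/bearing drag, roughly constant (hp)
    linear = 0.002 * rpm               # grows directly with speed (hp)
    load_pct = 0.04 * engine_hp        # grows with transmitted load (hp)
    squared = 1.5e-7 * rpm ** 2        # grows faster than RPM (hp)
    if locked_up:
        converter = 0.01 * engine_hp   # small residual loss after lock-up
    else:
        converter = 0.20 * engine_hp   # large, RPM/torque-dependent loss below lock-up
    return fixed + linear + load_pct + squared + converter

for rpm, hp, locked in [(1000, 60, False), (3500, 300, False), (5500, 450, True)]:
    loss = drivetrain_loss_hp(hp, rpm, locked)
    print(f"{rpm} rpm, {hp} hp in, locked={locked}: ~{loss:.0f} hp lost ({100*loss/hp:.0f}%)")
```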
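Finally, a small example of the "approximate percentage loss" approach. The 15% figure is an assumption for illustration, not a measured number; the takeaway is that, as long as the loss percentage stays roughly constant between runs, the proportional gain you see at the wheels mirrors the gain at the crank.

```python
# Simple illustration of using a flat percentage loss for tuning comparisons.
# The 15% figure is assumed, not measured: a proportional change at the wheels
# implies the same proportional change at the crank if the loss fraction holds.

ASSUMED_LOSS = 0.15   # assumed drivetrain loss fraction

def crank_hp_from_wheel(wheel_hp, loss=ASSUMED_LOSS):
    return wheel_hp / (1.0 - loss)

before_wheel = 320.0
after_wheel = 352.0   # after a tuning change

print(f"Before: ~{crank_hp_from_wheel(before_wheel):.0f} hp at the crank")
print(f"After:  ~{crank_hp_from_wheel(after_wheel):.0f} hp at the crank")
print(f"Gain at the wheels: {100*(after_wheel/before_wheel - 1):.1f}%")
```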
Felix, qui potuit rerum cognoscere causas.
Happy is he who can discover the causes of things.