From reading your posts, I get the impression that you are unusually
objective in your observations and descriptions of your automotive
experiences, or at least you really try to be objective. (IMO, that is a
good thing). The only point where I see less than an attempt at total
objectivity is your point that one size truck is "just right" while another
may be too large or too small. IMO, there is demand for the spectrum of
truck sizes and capabilities and the profit opportunities they provide to
the automakers. The reason for the demand may be totally subjective, like
why someone who hauls 2 bicycles would need a full size truck instead of a
smaller one, but the demand is (with higher gas prices, maybe was) still there.
I have no statistical basis for my opinion, but I think that people who have
a favorable impression of a product are more likely to overlook a design
feature or vehicle characteristic that they might not overlook in a vehicle
that they do not have as favorable an impression of. I think Toyota and Honda
have benefited greatly from this phenomenon, if it exists. An example of
this phenomenon is the piston slap that some people have complained about.
The manufacturing and assembly methods that Toyota uses result in very little variation, and under the same operating conditions and maintenance
history, 2 Toyotas of the same model are very likely to experience the same
problems or lack of problems, which means that the noise that some people
are complaining about and some people do not complain about is likely there
in most, if not all, of those models. People who love their cars or trucks
are probably less likely to count the noise as a problem on the survey than
people who are indifferent or are very picky.
They're not asking, "How do you like it?" They are asking a different
question, "How many problems have you had with it?" I could just love my
new Prius even though I had a problem with the power steering pump, a leak in the trunk, and a cracked windshield. Or I might hate it even
though it has had no problems.
This is why I can't possibly see how J.D. Power surveys are useful in
determining anything other than initial quality which is what they are
designed to measure. And I actually am a big believer in Porsches, I
just don't think that J.D. Power results mean squat to anyone who's
going to keep their car after the warranty runs out.
replace "roosters" with "cox" to reply.
J.D. Power has a lot of importance. Look at all the advertising revenue it generates. It is also very important to know what the best car is for those who keep them for 3 to 6 months. If, OTOH, you intend to keep your car for
5 or 15 years, it has no meaning at all.
JD Powers also has a survey that addresses longer-term reliability (3 years). I suspect this is about as long as is meaningful. After three years I suspect owner treatment of the vehicles becomes a significant factor in reliability.
I've never had much respect for the CR survey results. I've answered them
for years, but think doing so is largely a waste of time. The survey is far
from random and they collect too little information to make the broad
pronouncements given in the magazines. The little circles they display in
the magazine are also misleading. They overemphasize the differences between models.
The extremes cancel each other out and should not affect
the average in any significant way, assuming the sample size
is large enough.
Sample size per year-model seems about the same for the IQS
and CR surveys. Power is not as forthcoming, IMO, about
sample size per vehicle.
The page at http://www.jdpower.com/autos/car-ratings/ says Power used input from 97,000 car owners for the IQS.
The input covers I guess over 100 different models. (I am
too lazy to count them all up.) So there's input of maybe
around 1000 owners for each model.
J.D. Power's 2007 dependability ratings (for three year old
cars, asking about problems in the last 12 months) use input
from a paltry 53,000 car owners.
CR uses input from 1 million owners, covering 1100
model-years for the past decade. So CR is using the input of
about 1000 owners per model-year. So I'd guesstimate that
CR's input is of higher statistical significance for any
given model-year. Take a few years running where the model
design is known not to have changed a lot, and CR is of much
higher statistical significance.
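To make that back-of-the-envelope arithmetic concrete, here is a minimal Python sketch; the owner and model counts are the rough figures quoted in this thread, not per-model numbers published by either survey:

# Rough per-model sample sizes implied by the figures quoted above.
# All counts are approximations from this thread, not published per-model data.
def owners_per_model(total_owners, models):
    """Average number of survey responses per model (or model-year)."""
    return total_owners / models

jdp_iqs = owners_per_model(97_000, 100)      # J.D. Power IQS, spread over roughly 100 models
jdp_vds = owners_per_model(53_000, 100)      # J.D. Power dependability survey (model count assumed)
cr_avg  = owners_per_model(1_000_000, 1100)  # CR, 1100 model-years over the past decade

print(f"J.D. Power IQS:   ~{jdp_iqs:.0f} owners per model")
print(f"J.D. Power VDS:   ~{jdp_vds:.0f} owners per model")
print(f"Consumer Reports: ~{cr_avg:.0f} owners per model-year")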
Sure, the editorial comments are a start and at least as
good as anecdotal reports here.
I suppose the prudent course is to form a "meta-study" of
both the J.D. Power survey and CR's survey.
Which hints at a big CR strength: presentation of the data. With CR
you can quickly see the entire history of each system in each model.
You can quickly spot the year they fixed the transmission or whether
manufacturer X has problems with the first model year of a new design.
It is not random at all. They only survey CR readers, and then only readers
who wish to respond. I've always felt this biases the results of the CR
survey to match the editorial opinions of the CR staff. In recent years CR has done a better job of massaging the results, but I still think they are biased.
And why do you think the JD Power survey is useless? It is a true random
survey. They collect much more information than CR does.
And you know this because? Does it ever bother you that different model years of a particular model, which should have essentially the same parts, get vastly different reliability ratings in some categories from year to year?
OK, what exactly do they mean.....I mean besides Excellent, Very Good, Good,
Fair and Poor. For '07 cars, the average problem rate for the worst category
(Body Integrity) was only 3%. What do you suppose the accuracy of the CR
Survey is? I'll bet it is a lot worse than 3%.
So, CR surveys a select group that is more likely than the general population to agree with their opinions, they don't provide data on the number of vehicles of a particular type surveyed, or even what "average" means, yet you think they are highly accurate.....
J.D. Power also only surveys those who wish to respond. I
can't see how the self-selection is any worse.
What motive would CR editors have to massage what CR readers report?
"Editorial" is way too strong a descriptor for the quality
reviews of the cars (not the matrices of reader experiences)
that CR testers perform. The tests the CR staff does have results all over the map. Sometimes Ford gets a good rating,
sometimes VW, and so on.
The reader surveys OTOH consistently rate Toyota and Honda
as the best makes of cars.
but I still think they are biased.
Not for Hondas and Toyotas, with the exception of an occasional new design, like the Toyota Tundra c. 2004.
Sounds like you have been reading the articles. I do not have the April issue handy, but what the circles mean is spelled out in the key there.
See my post to Jeff. The "accuracy" of the CR surveys should
be better than that of J.D. Power's dependability survey,
because the sample size per model appears to be larger.
(Neither J.D. Power nor CR gives the exact number of owners per model surveyed.) You can still argue CR reader bias, I suppose. Though, come on, what does that mean here? CR readers are no more likely to ignore car problems than anyone
else, are they? Or do we want to sample car owners who get a
breakdown and ignore the car for the next two years? Or
those who do not like to maintain their car? You do realize
those who do not follow the maintenance schedule throw every
damn thing off when it comes to surveys, right?
It's mostly going to be differences between two models that are statistically significant, meaning it's reasonable to conclude that a car randomly chosen from the population of one model will perform X better than a car from another model with a worse rating.
Nor does J.D. Power state exactly how much input it had for each model.
Plus, for dependability J.D. Power looks only at three-year-old cars, by all indications from a sample arguably as
self-selected as CR's.
Not true. For instance, for 2002-2006 Camrys, the quality of the suspension varied from very good to excellent, almost at random. The fuel system
went from very good to excellent to good without any significant changes to
the design. So did the ratings of body hardware. For some reason, the '03 has a worse cooling system than any other year (but according to the parts catalog, the parts are the same....). I suppose you are going to point out
that changes from very good to excellent are trivial, but then that is my point. The differences are trivial, probably well within the accuracy of the
survey. CR takes poorly collected data (not random, poor questions),
massages it, and presents it as little circles that really don't mean
anything. At least JD Powers gives you a number (number of problems reported
per 100 vehicles) and at least they start out with a random sample. I
suppose you should stay away from any vehicle with solid black circles, but
how many fall into that category? Do you really think there is much
difference between vehicles that rate good or better?
A large but biased sample is not going to give better results.
Have you completed a CR survey? There is a fair amount of room for interpretation of the questions.
So how much statistical difference is there between an Accord and a Camry?
CR predicts a new Camry will have worse than average reliability. A new
Accord will have better than average reliability. What does that mean? If I
buy a Camry instead of an Accord am I likely to have one more problem, or
two, or ten, or twenty? If you can't tell me from the CR predictions, what
good are they? At least if you look at the JD Power numbers you can get an
idea that the spread between vehicles is very small, much smaller than CR's reporting methods suggest. In the latest initial quality survey, the
difference between the best vehicle manufacturer (Porsche) and the worst
(Mini) was 0.8 problems per vehicle. In the 2007 Vehicle Dependability Study, the difference between the most dependable manufacturers (Buick and Lexus) and the least dependable (Land Rover) was 2.5 problems. This should
tell you that the differences are down in the noise range, and the little
circles that CR uses are trying to divide up very trivial differences into 5
categories. If you start with data that is poorly collected and then try to
use it to indicate trivial distinctions, you are not being fair. At least
with JD Powers, you can see for yourself that most cars are pretty good. I
have no problem with people claiming Land Rovers are less reliable than
Lexi, but I doubt the difference is near as significant as Lexus owners
would like to think.
JD Powers starts out with a random sample. CR starts out with their own self-selected subscribers.
Oh my god, good to excellent.
I think the consistency of the almost-all-red (meaning good-to-excellent) reliability matrices for Hondas and Toyotas speaks for itself. Black circles are rare for
them. I am not posting for your benefit. You're dug into a
political belief here. I am posting for others'. Go to CR
and go to J.D. Power. Just do not go to J.D. Power by itself.
You have proved no more bias in CR than in J.D. Power,
either in its questions or in the group it samples.
CR's million owners surveyed per year over ten years trumps J.D. Power's hogwash 3-year-old vehicle survey of some 53,000 owners.
And you know there is a tiny margin of error because? CR may or may
not have a "huge" sample for a particular vehicle. Saying "millions"
sounds impressive, but millions (actually 1.3 million responses for
2007) spread over 10 years of different models implies that some
models may only get a few responses (hundreds or less). CR doesn't
include results below a certain level, but what level is that? The
average number of respondents for a particular year/model is probably
around 500. Do you really think this is enough to provide a tiny
margin of error?
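As a sanity check of that estimate, a quick sketch; the 1.3 million figure is the total cited above, while the number of distinct models per year is my own rough assumption, since CR does not publish that breakdown:

# Average responses per year/model implied by CR's 2007 survey total.
total_responses = 1_300_000   # cited above for the 2007 survey
years = 10                    # the survey covers roughly the past decade of models
models_per_year = 260         # assumed; CR does not publish this breakdown

avg_per_model_year = total_responses / (years * models_per_year)
print(f"~{avg_per_model_year:.0f} responses per year/model")  # ~500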
No, but it is my opinion that people who subscribe to CR are likely to
be biased towards agreeing with CR's opinion and tend to color their
responses to match. I am not saying they are lying, or deliberately misstating the results, just that they are likely to shade their responses to match the CR opinions. When working with relatively small
numbers of responses for a particular model from a select group (CR
subscribers), small errors can appear to be significant when you boil
them down to the little circles. In fact, I suspect that many times
the differences are very small. CR seems to resist publishing the raw numbers. For comparisons, they go so far as to show differences as a percentage of variation from the average for categories of vehicles. This
is potentially just as misleading as the little circles. For instance,
in the small SUV category, the Honda Element has a predicted reliability around 70% better than the average small SUV. The Dodge Nitro has a
predicted reliability of 195% worse than the average small SUV. So no
one should buy a Nitro because it is 265% less reliable than an
Element - right? But what does this really mean? Suppose the average
small SUV has 1 problem. This would imply that the average Honda Element would have about 0.3 problems (or 30 problems per hundred) and that the average Nitro would have almost three problems. Furthermore, what exactly constitutes a problem? The CR survey leaves a lot of latitude
to the respondents, and then they don't even let us know how they
factor different levels of problems into the overall reliability.
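To spell out the arithmetic in that example, here is a minimal sketch; it assumes that CR's "percent better/worse than average" scales the category's average problem rate linearly (CR does not actually document this), and the 1-problem baseline is just the hypothetical used above:

# Convert CR-style "percent better/worse than average" into problems per vehicle.
# Assumes linear scaling of the category average, which CR does not document.
def implied_problem_rate(avg_problems, pct_vs_average):
    """pct_vs_average > 0 means better than average, < 0 means worse."""
    return avg_problems * (1 - pct_vs_average / 100)

avg_small_suv = 1.0  # hypothetical: 1 problem per vehicle for the average small SUV

element = implied_problem_rate(avg_small_suv, 70)    # "70% better"  -> ~0.3 problems
nitro   = implied_problem_rate(avg_small_suv, -195)  # "195% worse"  -> ~2.95 problems

print(f"Element: ~{element:.2f} problems per vehicle")
print(f"Nitro:   ~{nitro:.2f} problems per vehicle")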
So this means they have even less good data for a particular model,
making it even less likely the statistical error is "tiny."
A little bit of homework is appropriate before one slanders.
This figure is reported in the annual issue and also at the
CR web site.
And on Impala the range is from poor to very poor (mostly the latter.)
Doesn't sound like there is any trouble distinguishing which of these
vehicles has a more reliable suspension system.
And Impala ranges from good to poor. I think you are having trouble
seeing the forest because all the trees are in the way. Step back and
look at the big picture.
So did the ratings of body hardware. For some reason, the '03 has a worse cooling system than any other year...
Have you ever heard of a bad batch of parts? Changing suppliers? To
be honest with you, I am looking at the 2008 CR survey right now and
2003 Camrys are the same as 2002 and 2004.
With no breakdown of what those problems are.
None if you are dealing with Toyota or Honda. If you look at GM,
Chrysler, Mercedes, Kia, Nissan, Ford and VW, there is a wide
selection of models to choose from.
For brevity, I snipped Gordon's helpful observations.
Of course, CR does too, as has been noted.
I too think this is one of the big advantages of the CR
survey. J.D. Power has only three categories (plus
"overall"). CR has 17! It is very important to me to know
whether a tranny has been problematic and whether the problem is "major" or "minor", or whether it is electrical, "major engine", "minor engine", etc. CR evaluates this.
Maybe you saw this already, but for others, here is an FAQ
on the CR survey that I think is very helpful:
http://www.consumerreports.org/cro/cars/new-cars/auto-test/consumer-reports-car-reliability-faq-8-06/overview/0608_consumer-reports-carreliability-faq_ov.htm
It puts the average sample per model-year between 200 and
400, which is less than I estimated, with some model-years
having several thousand samples, and some having less than
100. The latter's results are excluded from publication.
The CR FAQ also notes that it is the differences between
models where there is statistical significance. Again,
that's key. Because the fact is that a rate estimated from a sample of about 1000 carries a real margin of error, up to roughly +/- 3 percentage points. (One sees this margin of error in political polls all the time. Political poll takers aim for around 1000 "hits" so they can report an MOE of about 3%.) So CE White is correct with his
concern about reading any individual chart "too precisely."
But his concern will also apply to the J.D. Power survey.
One has to look at the differences between models, instead,
among other things.
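For anyone who wants to check these margin-of-error claims, here is a minimal sketch using the standard large-sample formula for a sampled proportion; note that the familiar +/- 3 point figure is the worst case (a 50% rate, as in political polls), and the margin is smaller for low problem rates:

import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p observed in a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case, the political-poll scenario: 50% rate, 1000 respondents -> about +/- 3.1 points
print(f"p=0.50, n=1000: +/- {margin_of_error(0.50, 1000):.1%}")

# A 1% problem rate with the same sample size -> about +/- 0.6 points
print(f"p=0.01, n=1000: +/- {margin_of_error(0.01, 1000):.1%}")

# The ~3% problem rate mentioned earlier, at CR's typical 200-400 responses per model-year
print(f"p=0.03, n=300:  +/- {margin_of_error(0.03, 300):.1%}")  # about +/- 1.9 points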
Please let me know where I can find the "numbers." I have the magazine and an on-line subscription. I've never seen raw numbers. It is my opinion that CR does their very best to obscure the actual source of their data and to overemphasize minor differences. If they
actually have the raw numbers available somewhere, maybe I would
change my opinion.
And then they don't tell you the numbers, instead they feed them to
some internal CR process that obscures the raw data and outputs
meaningless little circles. Plus, they allow the respondent a lot of
leeway in deciding what is minor and what is major.
http://www.consumerreports.org/cro/cars/new-cars/auto-test/consumer-reports-car-reliability-faq-8-06/overview/0608_consumer-reports-carreliability-faq_ov.htm
It puts the average sample per model-year between 200 and 400...
Thanks for posting this. It confirms my worst fears. CR is making very fine distinctions from poorly collected data. The FAQ tries to spin
this as being useful, but clearly the little circles are even less
meaningful than I thought. In many cases they are giving vehicles a
poor rating based on a reported problem rate 4% greater than average.
There is no way the CR survey has an accuracy of +/-3% for most of the
vehicles listed (the typical vehicle has 200 to 400 responses; they
allow data to be reported with as few as 100 responses). This means
the little circles are at best worthless for many vehicles. I suppose
for high volume vehicles there may be some validity, but still the
difference between an excellent and poor rating is at best very small.
Probably so small as to be insignificant compared to other factors if
people knew how small the difference truly is. My sister just
purchased a RAV4, mainly because it had such good reliability ratings.
If I had told her it was at best likely to have 4% fewer problems than
an Escape, which she could have bought for thousands less, I suspect
she might have considered the Escape (especially since my younger sister has a 7-year-old Escape that has been trouble free).
Actually I agree that my concerns apply to JD Powers as well. But at
least JD Powers gives you the raw data (problems per 100 vehicles).
From that I can infer that most vehicles are very close in quality. CR
on the other hand gives you little circles that imply great
difference, when in fact they are actually very minor in most cases. I
find this to be a misleading approach.
We discussed this already. Look at the key for the circles
in the April issue.
The notion that what the CR circles tell us are the
/differences between/ models, and not a statistically
meaningful problem rate for each model-year, is not easy for
a lot of people to grasp. Yet it's a well-known statistical
concept. Most often it's the /difference/ in two averages
that is most meaningful, and not the averages themselves.
This is like an Internet mythology. Without your citing specific
instances where this is the case, it is pretty hard to respond. As
far as I can see, related vehicles usually have very similar ratings.
The average model year had about 7000 responses. A 1% failure rate
represents 70 respondents (typically) who reported a problem. My
guesstimate is this is a lot better than a 3% margin of error.
The opinions are irrelevant. The question is, did you have to repair
the transmission last year, yes or no? If the survey is inaccurate,
it has produced some uncanny results. For example: Honda, of course,
has a stellar repair record - traditionally neck and neck with Toyota
for best in the world. Yet one year, CR reported that one feature on
one Honda model had the worst repair record in the survey. That would
seem to indicate that the survey respondents weren't influenced by brand loyalty.