A few "hard data" questions in the survey don't turn the overall
survey into a hard data gathering exercise. The main questions in the
survey are clearly eliciting an opinion. Questions like - "If you had
any problems with your car in the past 12 months that you considered
SERIOUS because of cost, failure, safety or downtime, select the
appropriate box(es)...." or "How satisfied are you with this vehicle
with respect to each of the following factors?" The answers to these
questions are opinions, not facts. The chief information gathered is
all based on the opinions of the reader. This makes it "clearly an
opinion survey." When the pollsters come around and ask me how GWB is
doing, they also ask me my age and race. Just because they ask me some
hard data questions, does that make my answers to the other questions
hard data?
You mean the data gathered where they asked the respondents to decide
what is "SERIOUS?"
No, I am saying that CR readers tend to spit back the opinions of the
magazine. They have been told repeatedly that car A is very reliable,
so when they are asked to report SERIOUS problems with car A, they
tend to think problems with such a reliable car can't be serious.
I am not saying it is a massive case of deliberate misreporting.
Because of the limited nature of the survey, very small shifts can
appear very significant when the data is reduced to the little
circles. In my opinion, the data gathering method is unscientific and
the reporting methods can make it appear that there are great
differences when in fact, there may be no statistically valid
differences at all between component systems of two different cars.
This means that very subtle biases can skew the results and make it
appear there are significant differences when in fact there are none.
It is not just that the samples are too small, they are both too small
and from a non-random group.
For years Buicks have shown up better in reliability ratings (like JD
Powers) than other GM products (at times even better than Cadillac).
Why? In most cases the basic parts of the car are exactly the same as
Pontiacs and Oldsmobiles and they are built on the same assembly lines
by the same workers. There is no reason to believe that Buicks are
more reliable than Oldsmobiles, but if you look at JD Powers surveys,
Buicks always come out much better. This also used to be the case with
CR surveys, but in recent years it appears that CR started blending
the data for similar cars sold by different divisions to avoid having
to explain this phenomenon.
You have to be kidding. When a new Toyota comes out the CR editors
will say they expect it to be reliable. When a new Buick comes out
they will say it is a new model and the reliability is unknown.
The reliability reports are based on the respondents deciding what is
"SERIOUS." This is an opinion-based question. Cars from all
manufacturers have very few problems of any type (according to JD
Powers the average new car has 1.24 problems, and the average 3 year
old car has 2.27 problems). It only takes a subtle shift in
determining what is "SERIOUS" to make large differences in the survey.
For all I know, Toyota owners might be overly picky and over report
problems compared to Buick owners. The CR reporting methods (the
little circles) can make tiny differences appear to be very
significant.
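That effect can be illustrated with simple arithmetic. This is a minimal sketch with made-up numbers (nothing here comes from CR's actual data): suppose two brands have identical true problem rates, and only the owners' willingness to label a problem "SERIOUS" differs slightly.

```python
def reported_rate(p_problem, p_call_serious):
    """Expected fraction of owners reporting a SERIOUS problem:
    an owner must both have a problem and judge it serious."""
    return p_problem * p_call_serious

# Made-up numbers: both brands have the same true problem rate (20%);
# only the owners' judgement threshold for "SERIOUS" differs.
picky = reported_rate(0.20, 0.50)      # owners quick to call a problem serious
forgiving = reported_rate(0.20, 0.35)  # owners slow to call a problem serious
print(round(picky, 2), round(forgiving, 2))  # 0.1 0.07
```

Identical cars, yet the "picky" brand shows a reported serious-problem rate roughly 40% higher, purely from the judgement call the survey leaves to the respondent.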
The reviews are full of opinions. For instance:
"This large, front-wheel-drive four-door sedan replaced the LeSabre
and the Park Avenue. The standard power plant is a rough-sounding
3.8-liter V6. A potent 4.6-liter, 275-hp V8 powers the CXS. The
four-speed automatic transmission shifts smoothly enough. The Lucerne
has a quiet, comfortable ride, especially at low speeds. Handling is
not agile and the steering lacks feedback. It has a tendency to
fishtail easily at its limits. Stability control is only available on
the CXS V8. Braking is unimpressive. The back seat is roomier than the
one in the LaCrosse. First-year reliability has been above average."
"rough-sounding 3.8-liter V6"
"the four-speed automatic transmission shifts smoothly enough"
"has a quiet, comfortable ride"
"Braking is unimpressive"
"The Mercury Milan and similar Ford Fusion are new midsized sedans
that are very agile and satisfying to drive. These sedans rate even
higher in our testing than the Mazda6 sedan on which they're based.
They have a sporty feel; the V6 and automatic transmission make a
smooth and responsive powertrain. The four-cylinder is a bit coarse.
The interior is well made and space is generous."
"are new midsized sedans that are very agile and satisfying to drive"
"They have a sporty feel"
"The four-cylinder is a bit coarse"
"The interior is well made and space is generous"
"The redesigned Toyota Camry is roomy, quiet, has a comfortable ride,
and is refined. The addition of a telescoping steering column is a
plus. Power comes from a strong 3.5-liter V6 mated to a six-speed
automatic transmission. It returns 23 mpg overall, just one mpg less
than the four-cylinder. The base 2.4-liter four-cylinder is also
responsive and relatively refined. A four-cylinder hybrid version
returned an impressive 34 mpg overall. Handling is responsive and
secure but not sporty. The interior is spacious, with reclining rear
seats in the high-end XLE. Curtain air bags are standard, but
stability control remains optional."
"is roomy, quiet, has a comfortable ride, and is refined"
"......a strong 3.5-liter V6"
"The base 2.4-liter four-cylinder is also responsive and relatively refined"
"Handling is responsive and secure but not sporty"
"The interior is spacious"
etc., etc., etc.
The reviews are full of opinions.
You don't think the whole continual retesting of the Suzuki Samurai
was an attempt to skew the ratings? Every newspaper / magazine / TV
news program / etc that I am aware of skews their content to please /
attract the readers / viewers. You are naive if you don't think CR
does as well.
People keep buying Motor Trend, and I am sure most of the people who
buy it understand that "Car / Truck / SUV of the Year" is up for sale
to the highest bidder. People keep watching NBC's Dateline, yet most
people know they faked the whole Chevrolet exploding gas tank report.
60 Minutes has told so many lies it is hard to imagine they have any
credibility, yet it remains a very popular program. I think most
people know WWE is scripted, but it still remains very popular. I
think CR is biased towards certain makes, but I still enjoy reading
it. Heck I often even agree with their opinions. I thought the Samurai
was a death trap, but I also thought CR went over the line in trying
to prove it.
I never said it was a hard data gathering exercise, just that it's not
simply an opinion poll.
How do you determine which questions are the "main" ones?
I said so in my post.
The only thing that makes it "chief information" is your opinion, which is
clearly biased. I mentioned that I was referring specifically to the
reliability data--which means the only relevant questions in the poll are
the ones about reliability.
There's a problem with your analogy. If the poll results included the
average age and racial data of the persons taking the poll, and some people
were only interested in the average age and race of people who were polled
rather than what they thought of GWB, then absolutely those are hard data
questions pertaining to the information at which those folks are looking.
Yes. I trust the law of averages because as I mentioned before, the results
have enough internal consistency to be believed. When two different makes
and models of a car get rated, the results are usually pretty similar if not
always exactly the same. If they were at different ends of the spectrum,
that would be another thing altogether. I expect some variation because of
the small sample sizes, however if you look at general trends you get a
picture based on more data.
Your basic argument is that the owners of certain companies' cars will use
significantly more restrictive criteria for determining whether or not a
repair problem is "serious." You haven't provided any evidence or reasoning
that supports your claim, IMO.
Does CR express the opinion that certain cars are very reliable? From what
I've seen, they don't. They might mention that a car is expected to be
reliable based on the poll results, maybe that's what you mean. But then we
get to a case of the chicken or the egg. Did CR first tell its readers that
certain cars are more reliable and then start sending out polls? Or did
they send out the polls and then mention the results? I don't have any
issues from before they started collecting repair data from readers, so I
can't check.
That's your opinion, but it doesn't make sense to me. It would make more
sense to me that if people shell out money for a car that they are EXPECTING
to be ultra-reliable and then end up having to repair it, they'll report
even the most *minor* incidents as serious.
I would agree with you there and to some extent what you are saying based on
that point. Expecting people being polled to use their own judgement on
"serious" repairs is a bad way to collect data, because when you ask
judgement-based questions, the smaller samples are bound to be less
accurate. However I disagree with your other conclusion, which is that the
results would be skewed in favor of certain brands and/or against others. I
think they would just have a wider range of accuracy (meaning less
precision).
Yep, I agree again--I want to see:
1. A much larger survey response.
2. No relying on opinion or judgement questions! They should be asking
specific questions like "What was repaired?" and "How much did the repair
cost?"
3. The results should reflect the actual numbers. For example, what
percentage of owners of 1997 Corollas with 100,000 to 130,000 miles on the
vehicle had to repair their vehicle's transmission last year, and on average
how much was spent per repair job?
I'm saying the polling and reporting procedures are flawed, not that they
don't provide a reasonably reliable (if somewhat vague) indication of a
car's reliability.
Can you give me an example of a Buick and its twin Olds vehicle that this
happened with? No offense, but I would like to look it up myself--I am
pretty sure I can find past issues of CR from before they started combining
the data for similar models.
By the way, are you sure they did start blending differently-branded
instances of the same basic vehicle? I notice that the Vibe is separate
from the Matrix in the reliability records, and from what I understand they
are basically the same vehicle.
I haven't seen that, but then I don't usually pay attention to statements
like that, so they very well could. However, I believe they used to say
that about Mercedes-Benz autos as well, because they usually topped the CR
reliability records... yet now, you find Mercedes cars at the bottom of the
reliability charts, while Toyotas have consistently remained at the top.
Unless CR started telling Mercedes owners that their cars were very
unreliable, this would indicate that people don't tend to minimize their
repair problems based on CR reliability reports, otherwise Mercedes owners
would still be considering most or all their repairs to be "not serious" and
not reporting them.
Good point about the goofy circles. However, they also list percentages--at
least on their web site.
Yep. I never said they weren't. However, they aren't "opinion pieces"
because they also contain factual data, like "the rear seats fold all the
way down to make a completely flat cargo area" etc. An opinion piece would
not have any use for statements like that. They are reviews, which contain
a mix of factual data and descriptions of their experience, which have to be
opinions. For example, you can measure how loud engine noise is, but you
can't measure how an engine *sounds* or how a shifter *feels* in any way
that would make sense to people. You have to use your best judgement, which
is of course opinion. As I've said, I have found that their descriptions of
such things tends to agree with what I have found. If they say one car has
an engine sound that's "rough" and don't say that about another car, it's
obviously a judgement but again I have in the past agreed with most of their
judgements so I tend to trust them.
That's yet another opinion. I'd say you're naive if you think they do,
because CR isn't any newspaper/magazine/TV program etc--their existence is
based on the trust that they are truthful and objective. If they break that
trust, no one has any reason to buy their magazine, subscribe to their web
site, etc--and this isn't the case with most newspapers, magazines, TV
programs, etc--it's the opposite for them in fact. People pick which news
station, newspaper, fashion magazine, etc based on it telling what they want
to hear--but with CR it's different. The purpose of CR is to give the
reader information which will help them spend their money on purchases they
won't later regret making. And that's pretty much all it's for.
Some do, some don't, most don't care--they just want to see their own car
praised so they can feel smart for buying it. If their car isn't one of the
favored ones, then they don't get Motor Trend. I suppose some people might
base their buying decisions on what MT says, but I feel sorry for them.
Right, because it's entertainment. It's supposed to be true, but even if it
isn't, it's still interesting. CR is about the most boring, driest read you
can buy. I have trouble forcing myself to even read the reviews of things
I'm interested in. Without impeccable credibility, I'd pay quite a bit if I
had to just to *avoid* reading CR.
You actually enjoy reading CR. I am astounded, flabbergasted, amazed and
befuddled. I guess we all have our own tastes. I can't believe there are
enough people who enjoy reading CR that they could make it without
credibility. I mean good grief, it would be like reading weather reports
for places that don't exist.
I look forward to your response.
Did you not read the actual questions? The reliability data is partially
based on the respondent deciding what is SERIOUS. This is the respondent's
opinion.
But if you are trying to decide what vehicle is reliable, you want reliable
reliability data. CR is giving you the CR reader's opinion of the
reliability of their cars. These are not the same thing. One is based on
hard facts (dollars spent, hours out of service, etc.), the other is based
on whether people felt they were inconvenienced.
Garbage in, garbage out. Too small a sample size; non-random samples, etc.
They depend on people remembering problems, and deciding they were serious.
If you average the opinions of all the RNC members, you would conclude GWB
is doing a great job.
I guess I missed your evidence that they don't. No doubt this is
speculation, but I don't think you can make the opposite claim (i.e., that
people are totally objective when filling out the survey - for one thing,
they have no clear guide what should be considered serious).
I think the opposite is true in many cases. I think people that buy cars
because they are said to be reliable tend to not want to admit they made a
mistake if the car turns out to be unreliable. Humans have a hard time
admitting mistakes (even me).
And I am saying that CR is reporting a very limited group's opinion of the
reliability of a car, not the actual reliability.
Try around 1992. Or just go look at the JD Powers Survey results on line to
see how results are skewed by expectations (JDP definitely does not average
results across brands). According to the JDP 2005 Vehicle Dependability
Study, the average 3 year old Buick has 1.63 problems, the average 3 year
old Oldsmobile has 2.42. That is a pretty significant difference (over 30%).
CR rates the 2006 models exactly the same (suspicious?). JD Power gave them
radically different ratings (Vibe got 3 balls for manufacturing quality,
Matrix got 4.5; Vibe got 2 balls for design quality, Matrix got 3).
Mercedes is so far down the list CR would have no credibility if they
continued to claim they were reliable. Besides, these days which audience is
it more important for CR to please, Toyota buyers or Mercedes buyers?
Percentage of what? The only percentages I see are in the graphs where they
rate cars compared to the CR average "score." How is the score calculated?
They don't actually tell (at least as far as I can see). They take a lot of
pride in mentioning that they are basing the ratings on 1.3 million vehicles
spanning three year models. But since they are rating over 300 models, over
3 model years, the "average" model/year only has 1,300 data points - and
that is the average. Many of the models must have only a few hundred. CR
could just publish the raw results and then we could decide for ourselves.
They could put it online for a minimal cost. I wonder why they don't do it.
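To put the "few hundred data points" concern in numbers, here is a minimal margin-of-error sketch. The 8% problem rate is an assumed illustrative figure, not anything CR publishes:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% confidence margin of error for a problem rate p
    estimated from n survey responses."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed 8% problem rate, at the "average" 1,300 responses
# per model/year versus a few hundred.
for n in (1300, 300):
    moe = margin_of_error(0.08, n)
    print(f"n = {n}: +/- {moe * 100:.1f} percentage points")
```

At 1,300 responses the rate is only known to about +/- 1.5 percentage points, and at 300 responses to about +/- 3.1, which leaves plenty of room for the little circles to imply distinctions the data cannot support.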
Even your example could be an example of bias. Choosing which facts to
present involves making an editorial decision - which is another way of
saying, expressing an opinion. CR decides what is important to report on.
They love ESC. I think it is over hyped and not worth the cost. If I was
writing the article it wouldn't be a factor. One thing that has always
bugged me is the entry and exit from cars. I hate Crown Victorias because
they are too low, which makes them hard for me (tall/large) to get into. My
2001 Mustang was easier to get in than my Mother's Grand Marquis. I rarely
see this sort of thing mentioned, and it is often a problem for me with
Japanese cars. I have to practically fall into my Sister's Civic because it
is so low to the ground. Selective reporting is as much a bias factor as
stating a clearly identifiable opinion.
Yes I expressed an opinion. Here is another - I think you are naive if you
believe what you just wrote. Do you think people who subscribe to CR need
CR's opinions on a monthly basis to make a purchase? If they only read CR to
gather information for an occasional purchase, why wouldn't they just buy
the yearly buying guide, or pick-up the occasional issue that addresses
their next big purchase, or read it at the library. People subscribe to CR
for the same sort of reasons they subscribe to Car and Driver, or Popular
Science, or People. People like to have information. People like to read
things. I enjoy CR, I like to read their opinions, even when I don't agree
with them, and they do include interesting features.
So only Motor Trend buyers want to see their car praised? I think you just
swung over to my side of the discussion. --> Toyota owners like to see their
cars praised, so they subscribe to CR. CR needs to keep those Toyota owners
happy, so they continue to praise Toyotas.
The Suzuki Samurai episode was hardly an example of "impeccable
credibility." Or the recent baby seat test fiasco. I've seen them do some
really stupid tests. I find some articles boring, but still find enough
interesting to keep subscribing. I'd just as soon read CR's car comparisons
as Car and Drivers. I just wish CR would test a few more interesting cars
(I am still waiting for the Ferrari / Aston Martin comparison).
If the only people who subscribed to CR were people who were making a major
purchase, CR would already be out of business. CR needs the faithful
subscriber base to stay in business. In fact, they are far more beholden to
their subscriber than publications that accept advertising. C&D can afford
to tick off a few subscribers as long as they keep the big advertisers
happy. I wonder what would happen next month if CR had a road test that
trashed the new Camry.
I wanted to make a few more comments about the reliability
"percentages" shown on the CR website.
There are inconsistencies in these "percentages" that should make you
wonder about the value of the CR reliability data. For instance:
A Honda Accord Hybrid is shown as being around 75% better than
average. A 4 cylinder Accord is about 60% better than average. A V-6
Accord is only about 40% better than average. Do you really think
there is that much difference in the reliability between the three
"types" of Accords? Of course if the difference was only very small
then this would make sense, but then the difference probably would not
be statistically significant, which of course makes the whole
comparison an exercise in making an almost non-existent difference look
significant.
Another Case -
A V-6 Camry is a little over 40% better than average. A 4 cylinder
Camry is only average or a little worse. Do you think a V-6 Camry is
significantly more reliable than a 4 cylinder Camry? I don't. Yet CR's
percentages are structured in such a way as to make it appear that the
V-6 Camrys are far more reliable. And why would a Lexus ES350 be close
to 100% more reliable than average, if the V-6 Camry is only 40% more
reliable than average and the Avalon is only 35% more reliable than
average? Despite the much higher cost, the basic underpinnings of the
ES350 are still standard Toyota components. I cannot believe there is
a significant difference in ACTUAL reliability (as opposed to the
Customer's opinion of reliability). And while you are looking at
Toyotas, look at the Solara - 4 cylinder Solaras are actually rated as
being more reliable than V-6 Solaras (50% to 40%). This wouldn't be
particularly significant, except it is completely at odds with the
comparison of 4 cylinder and V-6 Camrys. They share drivetrains. You
would assume if 4 cylinder Camrys were much less reliable than V-6
Camrys, 4 cylinder Solaras would be much less reliable than V-6
Solaras - but that is not what CR's data shows.
This fits into my contention that CR's survey is poorly constructed.
It is not random and there are too few data points to make the results
meaningful. They take this bad data, and then overemphasize the
differences. What are probably very small and possibly statistically
insignificant differences are presented in such a way as to make it
appear they represent important differences.
So you ask - why would CR do this? Easy! If they printed results where
the reliability of most cars was "average" or very close to average
with no meaningful differences, who would bother to read the magazine?
They are doing exactly the same thing other media outlets do - hyping
the story to attract Customers (in this case subscribers). It is
little different than Dateline using rockets to make sure the Chevy
truck would burn in their infamous story.
Yes, that's what it says on their web site. Thirty-five percent seems like
a huge range of difference for the same car with different drivetrains!
Also, their site does say "Note that the average stretches 20 points on
either side of the zero line, so it's possible for a car to have an average
Predicted Reliability Rating even if its bar is in the negative zone." To
me, this means that the cars are fairly close--in other words, if a
difference of forty points separates the high and low ends of the average
field, I would expect that a difference between any two cars of less than
forty points would be meaningless. I believe you said something to this
effect before. Again, your point.
Not if they are all the same basic car, made in the same factories, and in
the same class for comparison. If these criteria are met (which they could
be--I don't know) then I'd agree the wide variance would need to be
explained--otherwise I would agree that there would be some doubt as to
whether or not the numbers are meaningful. Good point.
Well, first I'd like to thank you for providing examples. I understand much
better what you're getting at and agree that the percentages aren't nearly
as helpful as I had thought. From this point on, I doubt if I'd let a
difference of sixty or eighty points or less influence my buying decision.
More than eighty points would make a difference to me, though.
Once again, I disagree that the commonality of the poll participants being
CR subscribers would skew the result in favor of, say, Toyota consistently
enough to show much better than average reliability for at least 25 years--I
think that over time the results would even out more. I believe the
companies that get consistently high ratings in the CR polls do make better
cars--though how much better is anyone's guess. It might not even be
significant.
For example, if 73 of 1000 Accord owners have to make major repairs within
the first four years after the warranty expires and 86 of 1000 Impala owners
have to do the same (given equal definitions for the word "major"), should
it even be a factor in my buying decision that I have (if my math is
correct) a 1.3 percent greater likelihood of having to make a major repair
within the first four years after the warranty expires with the Impala than
with the Accord? Yes, but only a *slight* one. If I like the cars equally
otherwise and they are the same price, I would go with the Accord, but if
the Impala is $150 cheaper, I might just get that. That 1.3% increased risk
seems like a fair tradeoff for $150. If that translates to an 80-point
difference on CR's scale, then the scale doesn't do me much good. (This is
an example, all the numbers are made up.)
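Running those same made-up numbers through a standard pooled two-proportion z-test shows how little a 73-vs-86 gap means at that sample size (a sketch with the invented figures above, not a claim about real Accord or Impala data):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-statistic: is the gap between two
    repair rates distinguishable from sampling noise?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Made-up example from above: 86/1000 Impalas vs 73/1000 Accords
z = two_proportion_z(86, 1000, 73, 1000)
print(f"z = {z:.2f}")  # z = 1.07, below the ~1.96 cutoff for p < 0.05
```

In other words, even with a full 1,000 responses per model that difference could easily be noise, so a large-looking spread on CR's scale built from a few hundred responses deserves skepticism.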
Of course, CR doesn't post the raw data, so we can't make such a
determination for ourselves, even if the sample is statistically valid.
I don't agree that the data has been proven to be bad, but the small sample
combined with questionable and mysterious methods for displaying it do make
me re-think my whole attitude toward their reliability reports.
That's a valid line of reasoning for why they would do this, and the fact
that their percent range is odd and potentially misleading combined with the
fact that they don't release their raw data (do they?) makes their chart
highly suspect *at best*.
One thing I've noticed is that most US cab companies which own their
vehicles and rent them to the drivers buy American cars. Since the cab
companies are paying to repair these vehicles, I would guess they would be
strongly inclined to buy Japanese cars instead--if Japanese cars lasted
longer and needed fewer repairs than American cars. It would save them a
bundle, since cabs get driven a lot of miles and need lots of fixing. For other
fleets I would suspect image concerns, for example I suspect Geek Squad
chose VW Bugs for image alone, but when it comes to cabs most people just
want one that's available or will be soon.
So how do you explain the very poor reliability ratings that CR
subscribers have given to the new Nissan Quest and Nissan Titan, which
scored highly in tests, or the above-average reliability rating for
the low-scoring Chevy Impala?
I said they "tend" to spit back opinions. Since I think the survey is
poorly constructed and not statistically valid, I expect inconsistent
results to be the norm. And I was not referring to driving quality, I
was talking about reliability. I am confident that BMWs score really
well in terms of driving qualities, but they aren't particularly
reliable. Besides CR has consistently mentioned that the Titan and
Quest are not reliable. And interestingly, the V-8 Impala has poor
reliability. The V-6 is average. I don't see how you can call the
Impala "low scoring" since CR "recommends" the V-6 Impala.
If CR "recommends" the V-6 Impala, does the CR "groupthink" now include
Chevrolet?
Get a grip. Among other things, CR probably tested various iterations of
their survey and found that asking the customers for "serious" problems by
the customers' own lights didn't deviate significantly from CR supplying
some criteria for what was "serious" and what was not. So, they made the
form simpler and probably increased response by making the questions easier
to answer.
Further, CR is, to some extent, an opinion-maker. But around here, you're
going to find people who have taken CR's advice and found that it has helped
put money in their pocket. This is not our opinion, this is our experience.
And if you want to whine about bias and groupthink and so forth, I'd suggest
you try some other web auto forums (not even UseNet) because there's plenty
of them out there that have, for instance, an apparent pathological hatred
of domestic cars that leaves any possible anti-domestic bias on CR's part in
the dust.
I realize that, but there are many other examples, and if respondents
to the reliability survey are biased in favor of cars that score well
in CR's tests, why do they often say they're highly satisfied with
cars that they say are very unreliable or dissatisfied with cars that
they say are reliable?
Different assembly factories? It's been said that Nissan's least-
reliable vehicles all come from one factory in Mississippi, and in the
case of my old Ford Escort, CR said the sedans, which were made at
their Hermosillo, Mexico factory, were more reliable than the
hatchbacks, which were assembled in a US factory.
It got 63 points, putting it into the lower range of the "Very Good"
category, and ranked 18th out of 23 "Family Sedans Over $25,000". In
this category only the bottom-ranked Pontiac Grand Prix wasn't
recommended among cars with average or better reliability records.
The GP was more reliable than average but scored only 38 points,
putting it in the upper range of the "Fair" range.
That's where I think you're wrong, I haven't seen any evidence of CR readers
tending to spit back opinions nor any reasoning to suggest they would.
That's where I'd agree with you. So it doesn't really matter if CR readers
are sheep to the CR shepherd or whether they tend to be independent
thinkers; the samples are too small to be reliable for individual models.
I wouldn't say they are the norm. For example, the Matrix and Vibe are very
close, probably within five points. Other different makes of the same
vehicle seem to be very close as well. That's actually unexpected given the
sample sizes. Plus, if you were correct and the survey results were usually
inconsistent, then certain auto makers (like Toyota) would never have stayed
consistently near the top for the last 25 years at least, while others have
gone from near the top to near the bottom (like Mercedes). I believe this
shows that the results do *in general* show which makes are producing more
reliable models over the years and which ones less so, but that doesn't
matter if the differences aren't significant.
I don't follow that logic at all. They won't be subscribing to CU if
they don't more or less agree with the editorial stance of the
magazine. For example, the magazine has pushed many social programs
for the past decade - to the point where they lost some subscribers,
such as myself, who didn't care to be lectured to by some bozo who's
supposed to be testing clothes dryers for me. So whoever subscribes is
statistically likely to be of a like mind to the CR editors on many
subjects. Therefore, their survey WILL be biased. It's been noted
many times that if you look at vehicles that are built on the same
assembly line but given different "names", they will often have
different ratings and virtually always the "American" version will get
lower ratings than the "imported" version.
On Wed, 18 Apr 2007 00:24:42 GMT, "Art"
The selection is very important, not just the sample size. The CR survey
is asking a bunch of more or less like-minded people (after all, those who
don't like CR for whatever reason are not subscribing to it generally
speaking) what they think if they want to share their thoughts. It is like
sending out a questionnaire to neo-cons asking how Bush Jr is doing
as president. They can fill it out or just toss it in the trash. Odds are
the results are going to be skewed.
My father has been subscribing for a long time, despite complaining
about their "stupid socialism".
That didn't seem to be the case with the Toyota Matrix/Pontiac Vibe,
either in test scores or reliability ratings, or with the Toyota
Corolla/Chevy Nova/Geo Prizm/Chevy Prizm, which were built
simultaneously on the same assembly line in California, although with
cosmetic and mechanical differences (like the ABS).
I stopped putting much stock in their car ratings when they went through
a cycle in how they rated cars. For a few years, maybe a decade or
more ago, they started including a cost index of some sort to go
along with the trouble index. So that Mercedes that looked like it
rarely had any problems suddenly got black dots for its cost of
repair index. Some of their other favorites started to get those
black dots too. Some of the domestics that had not quite as good of
scores in the trouble index showed up pretty good in the cost index.
That only lasted a couple years and they printed some half-assed
nonsense about how it wasn't really a good way to evaluate vehicles
and dropped that rating. There is absolutely no doubt in my mind that
both the editors and the readers of CR have an anti-US bias.
A bias does not imply that they are outright making things up. If the
Ford Fusion / Mercury Milan / Lincoln Zephyr really did well in the
survey, they can't just say the results were bad. But when they write
the reviews they can either not emphasize that or emphasize other
areas (either good or bad) to create a biased impression.
The devil's advocate in me is suspicious of the great rating for the
Fusion. I think it is possible that CR has trashed US vehicles to the
point that the only CR readers that still buy domestic brands are hard
core Ford/Chevy/etc buyers. When these hard core buyers get the survey
they shade their answers to the point that the results are too good.
Of course I have thought that hard core foreign brand owners have been
doing that for years. Think about it. Compared to domestic competitors
Toyotas are generally over priced by hundreds, even thousands of
dollars. For cars in similar categories, they also are often smaller.
So why would someone buy less car for more money? Gas mileage and
reliability are two rational reasons for doing so. So if you go out
and spend more for less car because it is supposedly more reliable,
isn't it likely you'd want to validate your reasons for buying the
more expensive car and maybe shade your answers to the survey?
Wouldn't you feel stupid if the Camry you bought because it was so
reliable was actually less reliable than a Fusion which costs
thousands less - oh wait, it is. I guess I am wrong, all vehicle
owners are completely honest and Fusions are more reliable than Camrys
and are a far better value. Toyota sales will soon crash and Ford
will rule the world. Ford's problems are solved. Rational Consumer
Reports reading car buyers will quit buying over priced average
reliability Camrys and start buying lower priced more reliable
Fusions...sure that is going to happen.