This was also posted to www.bobistheoilguy.com ,
but accidentally to the "Diesel" forum. I've asked the moderators to move
it to the "Gasoline" forum.
My 1991 Acura Integra has (currently) about 343,600 miles on it. At this
mileage, it's sort of inevitable that oil usage is going to be a lot higher
than it was when the engine had fewer miles on it.
After a scare four years ago (the last time I posted in BITOG), I decided
to make a near-science of monitoring my oil consumption, so I could find
out how much oil I was REALLY using.
The results of two years of testing have led me to several conclusions:
1) ambient air temperatures make no difference in consumption, but age of the oil does;
2) usage cycles up-and-down regularly, for no apparent reason;
3) it takes a long time and many test-cycles to determine a trend.
The chart is here:
I did this chart for "summer" only, which covers from about mid-April to
late November; outside those dates the winter tires go on and mess up
the method on account of their slightly larger diameter.
I've tried to be as consistent and as accurate as possible with my testing
methods, but I think 17 cycles covering approximately 35,000 miles and 24
months is enough to detect the engine's actual consumption.
Hope this is of interest to some here.
On 5/28/2010 7:13 PM Tegger spake these words of knowledge:
It's of interest to me; over this time frame, you averaged about 1660
miles per quart of oil. That's not bad, given the wear on your engine.
A lot of work, and good information. And something the rest of us can
measure oil consumption against.
I wish there were more folks doing this level of information-gathering.
"Everything has its wonders, even darkness and silence, and I learn,
whatever state I may be in, therein to be content." -- Helen Keller
snipped-for-privacy@FreeNet.Carleton.CA (M.A. Stewart) wrote in
They once actually did have a blind man in their "reasonably priced car".
He'd lost his sight in a bomb-blast in some Third World country when he was
with the British military.
He and Jeremy Clarkson did /many/ test laps, with "Jezza" riding along;
they'd worked out a rally-type system with Jeremy calling directions as navigator.
The blind guy did not finish last.
Tony Harding ( firstname.lastname@example.org) writes:
I have only seen a few of the shows. But I do remember briefly seeing
on the tower of lap times, close to the bottom (and above a few others...
poor folks), the pasted-on 'Blind Man' lap times. I didn't believe it at
first... but later I saw it again, and thought it was a riot (a politically
incorrect riot of course). I had assumed that they had possibly
blindfolded a person to get the lap times.
So do you think Helen Keller could turn better lap times than the 'poor
folks' below the Blind Man (hypothetically)? What about Patty Duke?
Could she turn better lap times?
I think the above represents a well-done study.
I too would think that oil consumption would be pretty consistent over
the 41k miles. But taking into account errors in measurement,
consumption perhaps is pretty consistent. For example, using the chart
linked above, the reader can surmise that the average consumption was
about 1660 miles per quart (mpq) with a range from 1500 to 1850 mpq,
or +/- about 10%. Reading error for each measurement taken would maybe
be around 0.04 quart ( = about one-half a gradation = about one-half
mm on the calibration chart next to which you held the dipstick).
Temperature effects might cause an error in reading of around +/- 0.03
quart (= .0005 per degree F * 20 degree F * 3 quarts). You monitored
every 1k miles or so, so I gather you did two readings of the dipstick
per data point. Each reading could be off by 0.07 quart. Hence any
given data point might be off by around 2*(.04+.03) quart = 0.14 quart
or around 10% error, consistent with what your readings were. Hence I
would say the variation you found might be explained by the
measurement error involved in doing readings. Hence the average number
you found is useful.
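That error budget can be laid out as a quick calculation. The 0.04 and 0.03 quart figures are the ones quoted above; the ~1.4 qt consumed per cycle used for the percentage is my own assumption, chosen only to match the ~10% claim:

```python
# Rough error budget for one oil-consumption data point, per the figures above.

reading_error = 0.04             # qt: ~half a gradation misread on the stick
temp_error = 0.0005 * 20 * 3     # qt: 0.0005/degF * 20 degF spread * 3 qt sump

# Two dipstick readings bracket each interval, so both terms count twice.
per_point_error = 2 * (reading_error + temp_error)

# Assumed ~1.4 qt consumed per cycle (my number, to express this as a percent).
relative_error = per_point_error / 1.4

print(f"error per data point: {per_point_error:.2f} qt (~{relative_error:.0%})")
```

This reproduces the 0.14 qt / ~10% figure quoted in the post.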
I see from the "Bob is the Oil Guy" (BITOG) thread you used Mobil 1, a
synthetic oil, for the entire study. I think this may be relevant for
some readers; e.g., those who are still using non-synthetic oil might
expect higher oil consumption for an engine the same age as yours.
(Anyone wanting to see Tegger's thread at BITOG should just do a BITOG
keyword search for "Tegger.")
I'm a bit surprised that nobody in BITOG has brought up this extremely
important point. I was fully expecting to be called-out on reading
errors, but nobody's done that yet except you.
As you point out, the obvious uncontrolled variable here is my dipstick-
level-recording method: It's a simple visual comparison between dipstick
and chart, which is subject to some guesswork as to the exact position
of the "meniscus".
Of course, the level doesn't go in increments that match the markings on
my chart, but is usually in between the marks. Is it closer to one mark
than the other? Do I call it 6.3? 6.4? It takes very little change in
what I record to make a big difference in the final mileage number.
That's part of the reason I need very many records in the dataset; it's
the only way of materially reducing the inevitable contamination caused
by reading errors.
I take at least four readings each time I check, just to be
certain I'm reading the level correctly. Even then there is some
"fudge", since the oil runs down the stick fairly quickly, especially in
warmer weather. I need to make sure I check the level quickly, before
the level at the "meniscus" thins-out so far that I have difficulty
locating the top of the "meniscus".
I had been using M1 for some time before my decision to keep precise
records, and kept using it just to eliminate a variable.
I have, though, also discovered that M1, over more than 100,000 miles,
produces FAR less varnish than Castrol GTX did over the same mileage.
I've been sufficiently impressed with M1 that I wish to keep using it.
Hi, I am not sure what you mean by "called-out," but I want to be
clear that I was not trying to criticize your work. Rather I was
curious about the variation and whether it might be explained by
measurement error. I think the variation is not meaningful and is to
be expected. Your study is /extremely/ well done. I doubt any other
DIY-er could get a narrower range for measured oil consumption. Your
study is instructive in a few ways, in fact.
I guess I might also keep in mind that generally, oil consumption is
going to rise. As you know. So one cannot take an average of too long
a time period and have it be meaningful. Rather something like the
average over every 50k miles seems like it would be helpful to folks
as a measure perhaps of how much wear the engine is seeing.
Do you mind if I post a link to your chart and a summary of it at
snip for brevity
Yes, I understand: Lots of data points help average out any reading errors.
By "called-out", I meant I expected somebody to bring up the issue of
measurement accuracy when reading off the dipstick. The entire work
depends on that, of course. And nobody but you brought it up. That
doesn't say much for BITOG, frankly.
I'm complimenting you, is ultimately what I meant.
That is very possible. I considered the possibility myself, but had no
real way of compensating for it outside of large sample size.
I'm no statistician, but I've done enough testing and offhand
statistical analysis--both personally and as part of my job--to realize
that variability is the norm in statistics, and can be very difficult to
both account for, and to correct for, when attempting to determine
trends and tendencies within the numbers.
I'm not totally sure what accounts for the variation. As you point out,
it may well have to do simply with the vagaries of my vision and with my
judgments as to the exact fractional positioning of a particular reading between the marks.
I found that if I considered a reading as being 0.1 of a mark higher or
lower, that made as much as 500 miles per quart difference in the final
analysis. I had no way of correcting for that other than sample size.
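That sensitivity is easy to see with assumed numbers (the 0.65 qt burned per 1,000 miles and the 0.1 qt misread below are mine, purely for illustration):

```python
# How much a small dipstick misread moves the miles-per-quart figure.
# Assumed: ~0.65 qt actually burned over a 1,000-mile check interval,
# with the recorded usage off by as much as 0.1 qt either way.

miles = 1000.0
true_used = 0.65                          # qt actually consumed (assumed)

mpq_if_under = miles / (true_used - 0.1)  # recorded 0.1 qt too little used
mpq_if_over = miles / (true_used + 0.1)   # recorded 0.1 qt too much used

swing = mpq_if_under - mpq_if_over        # roughly a 500 mi/qt spread
print(f"{mpq_if_under:.0f} vs {mpq_if_over:.0f} mi/qt (swing ~{swing:.0f})")
```

A misread that small is enough to move the final number by the ~500 mi/qt mentioned above.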
When calculating gas mileage, one faces the very same variability problem.
My reading-records were conservative, normally erring on the side of
higher consumption. IOW, if I wasn't quite sure, I'd guess on the higher
side (6.4 usage versus 6.3, etc.).
If what I'm seeing is accurate, I believe consumption rises very slowly
over the years. I believe that the reason some people see a sudden large
hike in usage is because they never check their oil. And then one day
they do, and it's a LOT lower than they expected. They conclude from
this that it all-of-a-sudden started happening.
Please do. When posting to BITOG, I was hoping for somebody to chime in
with meaningful technical criticism and advice. So far, you're the only
one. Maybe we can get others in hondatech.
Hey, you're doing peer-review! Cool!
Your whole PDF is almost too ridiculous to even comment on.
Your explanation of your methodology is rather confused. You claim to
take measurements every 1000 miles, which doesn't jibe with the mileage
numbers shown on the chart. You cover a period of 41000 miles. That
should mean (if you took a reading every 1000 miles) 42 measurements,
but your chart only shows the results of 17 measurements. So what about
the other 25 measurements that should be there but you don't show?
The 17 points on the chart don't show data; they show some dubious
calculation you made from whatever the data might have been. The data itself
is probably also dubious. You may have a mark on your dipstick for when the
sump is full and a mark for when the oil is down a quart, but that doesn't
mean that halfway between those two marks is where the level sits at
half a quart low. This would be due to the irregular shape of the oil pan.
Using your method is like using the gas gauge to calculate gas
mileage. With that type of bogus measuring someone might conclude: Gee
for some reason I always get 40 mpg on from full to half a tank and
then get 20 mpg on the second half of the tank.
All you have really proved is that with the first reading of the
dipstick you get some reading plus or minus one notch on the scale. And
with the second reading you get plus or minus some other notch on the
scale. You consistently get to around some point on the dipstick at 1000
miles and consistently get to around some other point at 2000 miles. And
that is about all you can say conclusively.
Here is what it looks like to me.
1) At 3000 miles you change the oil.
2) After 1000 miles you infer from your flawed measuring system that it
has used about .65 quarts at that point which means it went 1540
miles/qt. In reality the engine has really only used .5 quarts at that
point in the cycle which means it has really gone 2000 mi./qt.
3) At 2000 miles you check the oil and infer from your measurements that
it has used .55 quarts which means in that interval it has gone 1800
miles/qt. But in reality it has used .7 quarts on this part of the
cycle, which means it has actually gone 1430 mi/qt in this interval. At
this point the engine has used a little over a quart on the dipstick and
so you add a fresh quart.
4) At 3000 miles your chart indicates you don't do any measuring or
recording of data; you just change the oil and repeat the
foolishness from step 1).
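The bias alleged in steps 1 through 4 can be sketched numerically. Everything below is invented (the exponent simply stands in for a tapered sump); the point is only that if the dipstick marks are not linear in volume, a perfectly steady true consumption rate will read as two different apparent rates in the first and second 1,000 miles:

```python
# Hypothetical non-linear dipstick: the level drops faster for the first
# quart gone than for the second, as it would with a tapered sump.

def marks_below_full(quarts_below_full):
    # Invented calibration mapping true volume gone to "quart marks" read.
    return quarts_below_full ** 0.8

# True consumption: a dead-steady 0.6 qt per 1,000 miles (about 1667 mi/qt).
apparent_first = marks_below_full(0.6)                    # reading at 1,000 mi
apparent_second = marks_below_full(1.2) - apparent_first  # reading at 2,000 mi

mpq_first = 1000 / apparent_first    # looks worse than the true rate
mpq_second = 1000 / apparent_second  # looks better than the true rate

print(f"apparent: {mpq_first:.0f} then {mpq_second:.0f} mi/qt (true ~1667)")
```

Under this assumption the first interval reads worse and the second reads better than the true steady rate, which is the pattern the critic attributes to the method itself.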
The fact that you repeat the same bogus methodology over and over and
get somewhat the same results repeatedly - this you interpret as being
meaningful. That appears to be obvious. You're not much good at accurately
collecting or reporting meaningful data either.
The chart says near the bottom: "Not all 1,000 mile intervals were
recorded (some were missed for various reasons)." This happens in
experiments all the time. When it does, the question is: Are there
still enough sample points for statistical meaning? IMO, yes.
Tegger stated at the BITOG site that after "the first level check I
top it back up. Then I top it back up after the second check. At each
check-time it's down a bit more than a pint, depending."
Hence before he adds oil each time, he must be reading near the same
point on the dipstick each time. He might be a little off, but given
he knows how much he adds to bring the level to full, I doubt he is
far off. Given how often oil checking is discussed here, it would not
surprise me if he checked for linearity on his Integra's dipstick.
Some dipsticks apparently are pretty linear from low to full. Not all,
but some. At any rate, give him a chance to answer.
Someone at honda-tech.com suggested he instead measure how much oil is
needed to bring the level back to the full mark. Presumably he is
adding oil very carefully to top off the level, and so the volume
could be measured.
That doesn't even begin to answer my question. Why is most of the data
missing? Why isn't there any data for how much was used between 2000
and 3000 miles? If there was any serious attempt at accuracy, there
could have been an accurate accounting of the total consumption for the
entire oil change interval plus an accurate accounting of how much was
added along the way. This doesn't even begin to look like a credible study.
Well that is a rather important detail to suddenly introduce after the
methodology has been questioned.
So what is all this nonsense about measuring ticks on the dipstick? Why
not accurately measure the amount added to get it back to the known full mark?
If he does know that actual amount added to get it back to full, why
would the calculations be made from the dipstick readings? It makes no
sense at all.
I'm a little suspicious of anyone who leaves out the most important
details and then when the deficiency is noted they suddenly come up with
a story. If they understood the importance from the beginning, then why
would they not have mentioned it from the beginning?
The most important detail that is excluded is why this study looks
like it is just cherry-picked data. I can see no good excuse why all the
data was not included. Nor is there any reason to believe that the 40%
of the data that was given is representative of what all the data would
have shown if it had been collected and reported using credible methods.
I do not see any data missing. There are seventeen sample points. Oil
used and miles driven were estimated for each one. The interval
between data points is reasonable.
How come you reject Tegger's statement that he did not take
measurements for every single 1000 mile interval?
If there was any serious attempt at accuracy, there
What you propose above is double counting of data.
I would cut someone a little slack for not writing this on the chart
that was linked. The guy is a volunteer, donating his time and labor,
after all. Plus Tegger is taking questions and answering them (well
those that are posed with civility, anyway). Or maybe others should
criticize those who do not follow Tegger's suggestion earlier to go to
the BITOG site?
Sure, when stated with civility, this is a good question.
All the data was included, it is just not at the interval you want (an
interval that does not make sense, insofar as the credibility of the
study is concerned).
Data does not have to be gathered at exactly the same interval for a
study to be credible.
How come you say he made that statement? All I saw was a statement that
not all the 1000-mile checks were recorded. So what was the basis for
not recording some 1000-mile intervals? Or more important, why were those
17 dipstick checks chosen? I can speculate all sorts of possibilities.
Beyond that, the mileage shown doesn't jibe with 1000-mile intervals. I
see intervals of 2550, 1200, 1430, 1390. So what is one to make of that,
given the statement that the oil was being checked at 1000-mile intervals?
No, what I propose is using data that has a likelihood of being accurate
versus data whose accuracy is extremely doubtful.
It doesn't matter how much time and effort was expended. The quantity of
effort is not what makes the results credible. I would expect the
results of data collected correctly in a single 3000-mile interval
to be more meaningful than doing it incorrectly for 13 oil changes.
Asking nicely makes the question easy to ignore. Calling the study
ridiculous won't change anything; the criticism will still be ignored, so
it makes no difference at all.
The bottom line: if the amount of oil consumed was not measured in a
reliable, accurate way, then there really isn't anything meaningful to be
drawn from the data.
I'm not the one who is claiming the oil was being checked every 1000
miles. There are only 3 or 4 data points on the chart that come close
to corresponding with a 1000-mile sampling interval. There is no way of
knowing what that or any of the other discrepancies mean except that the
study is just far too sloppy in its methods to expect the results to
mean anything at all.
We haven't seen any data. NADA - NONE - ZIP. There is no data in that
chart. What we have are the results of calculations made from what we
have been told is the data. Both the calculations methods and data
collection methods appear to be highly suspect. It looks like it may be
accurate to say that this engine uses quart every 1400-1700 miles. Any
conclusions beyond that simple one are extremely doubtful.
I suspect his engine is like most other engines - the rate of oil
consumption increases the longer the oil is in the crankcase. However,
you would need extremely accurate careful laboratory testing procedures
to be able to detect any change in rate of consumption in the first 2000
miles after a fresh oil change. The fact that he is consistently finding
a difference in the rate of oil consumption between the first and second
1000 miles is very strong indication that the testing procedure itself
is probably introducing this bias.
If someone says that for the last 42000 miles they have added a quart
of oil at 1600 miles and when they go to change the oil at 3000 miles
the engine is almost down to the add mark on the dipstick, I would then
surmise that engine uses a quart of oil about every 1600 miles.
When I see this chart I surmise that this guy has managed to create a
method that maximizes the uncertainty in what should be a fairly simple
thing to keep track of. His engine uses maybe a quart every 1400 miles
or maybe it's 1700 miles to a quart. Who knows? That is pretty sad to
go to all that extra trouble and end up knowing less as a result. It's
like an old farmer whose method to determine how many cows he has is to
count all the legs and tails he sees in a field and divide by 5. You may
think that method is absolutely brilliant and I respond that it is
utterly ridiculous. The reality for the farmer who has quite a few cows
is he never really is sure how many cows are in the pasture.
I would be suspicious of anyone claiming that they knew what their oil
consumption is within 5% over a 42k mile interval. Why? Because
driving conditions likely play a role as much as measurement error.
I think folks who are not as careful as Tegger should be prepared for
variations in measurement of around 25%. They should not read anything
into supposed increases of say 25% until they have monitored oil
consumption over a long period.
Sure a study could always be done better. But this one is in fact very
well done. Failure to concede this subtracts from a person's
credibility, in my eyes.
There is talk on the net of sudden increases in oil consumption, after
which consumption drops by a lot. But such observations may in fact
be due to mere measurement error. I think this study helps support that view.
You and I came to the same conclusion about what the study said, per
the quotations in my last post.
This study and subsequent reporting may be careful. Nobody said it
wasn't careful. You can take a shit very carefully, but you still have
nothing but turds to show for it.
The variations of 25% are a direct result of the invalid methodology
used. If you simply figure out how much the engine used over the entire
oil change interval (3000 miles in this case) you will get a reliable
result. If you use 1.8 quarts in 3000 miles traveled that is 1670mi/qt.
If you use 2 quarts in 3200 miles that would be 1600 mi/qt. Simply
accounting for all the oil used and all the miles traveled and dividing
will give an accurate result and eliminate the wild variations. Doing it
the way shown on the PDF file is a lot of extra work that does nothing but
make the result less accurate and more uncertain.
It is just plain stupid to do it by measuring small increments on the
dipstick over varying mile intervals and then averaging the results of
those mi./qt. calculations. To illustrate the stupidity - let's suppose
he had read his dipstick at intervals of 200-400 miles and calculated
the mi/qt for each of those varying mileage intervals and then averaged
those mi/qt calculations. You may call that being more careful, but
would it be more accurate? No it would not! It would produce an even
less accurate result. I suggest you enroll in some remedial math
education if you can't understand why that methodology is invalid.
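The objection about averaging per-interval ratios can be checked directly. The numbers below are invented: a steady true rate of 1 qt per 1,600 miles, twelve 1,000-mile checks, and 0.1 qt reading errors that alternate in sign so they cancel exactly over the run. Because mi/qt is a reciprocal of the measured quantity, the average of interval ratios comes out biased high even though the errors sum to zero, while total miles over total quarts recovers the true figure:

```python
# Mean-of-ratios vs. ratio-of-totals with noisy dipstick readings.

true_mpq = 1600.0
intervals = [1000.0] * 12        # twelve 1,000-mile check intervals (assumed)
noise = [0.1, -0.1] * 6          # qt reading errors that cancel exactly
noisy_used = [m / true_mpq + n for m, n in zip(intervals, noise)]

# Method A (criticized above): mi/qt for each small interval, then average.
mean_of_ratios = sum(m / u for m, u in zip(intervals, noisy_used)) / len(intervals)

# Method B (suggested above): total miles divided by total quarts used.
ratio_of_totals = sum(intervals) / sum(noisy_used)

print(f"mean of ratios:  {mean_of_ratios:.0f} mi/qt")   # biased high, ~1642
print(f"ratio of totals: {ratio_of_totals:.0f} mi/qt")  # recovers 1600
```

Shorter intervals make the noise a larger fraction of each reading, so the averaged-ratio figure gets worse as the intervals shrink, which is the point being made about 200-400 mile checks.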
That's very funny.
WTF??? You are now claiming that the value of this study is that it is
an excellent example of how one can introduce measurement error?? What
purpose does that serve?