When this occurs, the usual reaction is to blame
either the technology or the oil analysis lab for having failed to warn
of impending doom. In extreme cases, the temptation may be to seek out a
different lab which, rightly or wrongly, is billed as “better” than the
incumbent lab.
Steps to Designing a World-Class Oil Analysis Program:
Step No. 1. Initial Program Setup
A program designed around this type of sound footing requires
development of an overall oil analysis strategy in conjunction with a Failure Mode and Effects Analysis (FMEA). This is often performed as
part of a more comprehensive Reliability-Centered Maintenance (RCM)
program.
The FMEA process
looks at each critical asset and, based on component type, application
and historical failures, allows test slates, sampling frequencies, and
targets and limits to be selected. These items should address the most
likely or prevalent root causes of failure.
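To make this concrete, the sketch below (in Python) shows one way an FMEA-derived plan - test slates, sampling intervals and alarm limits per asset - might be captured. The asset IDs, tests and limits are hypothetical illustrations, not recommendations for any particular machine.

```python
# A minimal sketch of an FMEA-derived oil analysis plan.
# Asset IDs, test slates, intervals and limits are hypothetical.

fmea_plan = {
    "GB-3456": {  # gearbox
        "failure_modes": ["gear tooth wear", "water ingress"],
        "test_slate": ["elemental spectroscopy", "Karl Fischer water",
                       "viscosity @ 40C", "analytical ferrography"],
        "sample_interval_days": 30,
        "limits": {"water_ppm": 200, "iron_ppm": 50},
    },
    "HP-1201": {  # hydraulic pump
        "failure_modes": ["abrasive wear", "valve stiction"],
        "test_slate": ["particle count (ISO 4406)", "viscosity @ 40C"],
        "sample_interval_days": 14,
        "limits": {"iso_code_max": (16, 14, 11)},
    },
}

def test_slate_for(asset_id: str) -> list[str]:
    """Return the tests the FMEA calls for on a given asset."""
    return fmea_plan[asset_id]["test_slate"]
```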
Now let’s think about the maximum permissible water, which should be set
as a goal-based limit from an FMEA and criticality assessment. If the
plant FMEA indicates a need to keep water content below 200 ppm (0.02
percent), then the only option to trend water content over time to
ensure compliance would be the Karl Fischer test, because Fourier
transform infrared (FTIR) spectroscopy is typically insensitive below
500 to 1,000 ppm.
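The logic here is simple enough to express in a few lines. The sketch below encodes the method choice just described; the 500-ppm FTIR floor reflects the figure above, while the helper function itself is purely illustrative, not a lab procedure.

```python
# Sketch: picking a water test that can trend against a goal-based limit.

FTIR_WATER_FLOOR_PPM = 500  # FTIR typically insensitive below this

def water_test_for(limit_ppm: float) -> str:
    """Choose a water-content test able to resolve the limit."""
    if limit_ppm < FTIR_WATER_FLOOR_PPM:
        return "Karl Fischer titration"  # resolves low-ppm water
    return "FTIR"

print(water_test_for(200))  # -> Karl Fischer titration
```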
Step No. 2. Sampling Strategy
Of all the factors involved in developing an effective program,
sampling strategy has perhaps the single largest impact on success or
failure. With oil analysis, the adage “garbage in, garbage out”
definitely applies. While most oil analysis labs can provide advice on
where and how to sample different components, the ultimate
responsibility for sampling strategy must rest on the end user’s
shoulders.
Take the real-life example of a reliability engineer at a plywood plant
who had outright rejected oil analysis as an effective
condition-monitoring technique.
His misguided belief was based on the
notion that because the plant he worked at had experienced four
hydraulic pump failures in the past two years, none of which had been
picked up by oil analysis, the technology simply did not work. But is it
really the technology that’s at fault?
When the same engineer was asked from where in the system he was taking
the sample, he seemed genuinely shocked that anyone would sample a
hydraulic system anywhere other than the reservoir.
However, by doing
so, any wear debris from a failing pump would show up in the oil sample
bottle only after finding its way through the system - past valve
blocks, untold numbers of actuators and a 3-micron return line filter -
and into a 5,000-gallon reservoir, where it would be heavily diluted.
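A back-of-the-envelope calculation shows just how severe that dilution is. Assuming, purely for illustration, that a single gram of wear metal escapes the filter and mixes into the 5,000-gallon reservoir of oil at a typical density of about 0.87 kg/L:

```python
# Back-of-the-envelope dilution estimate with illustrative numbers.

GALLONS_TO_LITERS = 3.785
reservoir_liters = 5000 * GALLONS_TO_LITERS   # ~18,925 L
oil_mass_g = reservoir_liters * 0.87 * 1000   # ~16.5 million g of oil
debris_g = 1.0                                # hypothetical escaped debris

ppm = debris_g / oil_mass_g * 1e6
print(f"{ppm:.3f} ppm")   # ~0.061 ppm - far below typical alarm limits
```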
Step No. 3. Data Logging and Sample Analysis
Assuming the sampling strategy is correct
and the program has been designed based on sound reliability engineering
goals, it is now up to the lab to ensure the sample provides the
necessary information.
The first stage is to make sure the sample, and the subsequent data,
are logged against the correct unit ID so that trend analysis and
rate-of-change limits can be applied.
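As a simple illustration, a rate-of-change limit might be applied as in the sketch below. The iron readings and threshold are hypothetical (real limits come from the FMEA in Step No. 1); the point is that the check is only meaningful if every sample lands in the same trend record.

```python
# Sketch of a rate-of-change check on trended wear-metal data.

def exceeds_rate_of_change(readings, max_delta: float) -> bool:
    """Flag any jump between successive samples above max_delta."""
    return any(b - a > max_delta for a, b in zip(readings, readings[1:]))

iron_ppm_history = [12, 14, 13, 15, 34]   # final jump is +19 ppm
print(exceeds_rate_of_change(iron_ppm_history, max_delta=10))  # True
```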
That is the lab’s responsibility, right? But what if two successive
samples are labeled slightly differently? For example, suppose two
samples are labeled with the unit IDs GB-3456 and 3456.
While logic might tell us that the prefix GB
simply means “gearbox,” imagine the difficulty the lab faces, confronted
with as many as 2,000 samples daily.
While carelessness and
inattentiveness on the part of the lab are inexcusable, it is incumbent
on the end user to ensure the consistency of information that is logged
and used for diagnostic interpretation.
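One practical safeguard is to normalize unit IDs before samples ever leave the plant. The sketch below shows one hypothetical convention; the prefix rules would need to match the site’s actual naming scheme.

```python
# Sketch: normalizing unit IDs before logging, so "GB-3456",
# "gb 3456" and "3456" all trend under one record.
import re

def normalize_unit_id(raw: str, default_prefix: str = "GB") -> str:
    """Canonicalize a unit ID to PREFIX-NNNN form."""
    m = re.match(r"^\s*([A-Za-z]*)[\s-]*(\d+)\s*$", raw)
    if not m:
        raise ValueError(f"unrecognized unit ID: {raw!r}")
    prefix = (m.group(1) or default_prefix).upper()
    return f"{prefix}-{m.group(2)}"

for raw in ("GB-3456", "gb 3456", "3456"):
    print(normalize_unit_id(raw))   # all print GB-3456
```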
Once the sample has been properly set up at the lab, the actual sample
analysis is next. This is an area where end users are definitely at the
mercy of the lab and its quality assurance (QA) and quality control (QC)
procedures. For example, how does the lab sequence tests? If the lab is
requested to run a particle count, does it perform this test first to
minimize the possibility of further lab procedures contaminating the
sample, or is it left until last?
How often does the
lab run QA samples - samples of known chemical composition inserted in
the daily run to ensure test instruments are within acceptable QC
limits? Does it run them every 10 samples, every 50, or not at all?
What happens if a QA sample fails? Does the lab retest the customer
samples back to the last QA sample that passed, or does it simply
recalibrate the instrument and move on?
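The retest-to-last-good-standard policy described above can be sketched as follows. The 10-sample interval and the stubbed-out test functions are placeholders, and a real lab would also rerun the QA standard after recalibrating.

```python
# Sketch of the QA policy: insert a known standard every N customer
# samples; on a failure, retest back to the last standard that passed.

QA_INTERVAL = 10

def run_batch(samples, run_test, qa_passes):
    """Process samples with periodic QA standards."""
    window = []                      # samples since last good QA
    for sample in samples:
        run_test(sample)
        window.append(sample)
        if len(window) == QA_INTERVAL:
            if not qa_passes():      # QA failed: recalibrate, then
                for s in window:     # retest the whole window
                    run_test(s)
            window.clear()           # either way, start a new window

# Example: 25 dummy samples, tests and QA stubbed out with lambdas.
run_batch(range(25), run_test=lambda s: None, qa_passes=lambda: True)
```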
What about the technicians who are actually running the tests? Are they
high school graduates who have been hired as cheap labor, or are they
chemical technicians or degreed chemists?
What about training specific
to used oil analysis? Have the lab techs been sent to any training
courses, and have they obtained any industry-recognized qualifications
such as the Laboratory Lubricant Analyst (LLA) certification from the
International Council for Machinery Lubrication (ICML)?
Step No. 4. Data Diagnosis and Prognosis
Diagnostic and prognostic interpretation of the data is perhaps the
step where the most antagonistic relationship can develop between the
lab and its customers.
For some customers, there is a misguided belief
that for a $10 oil sample, they should receive a report that indicates
which widget is failing, why it is failing and how long that widget can
be left in service before failure will occur. If only it were that
simple!
The lab’s role is to evaluate the data so that complex chemical
concepts, such as acid number or the presence of dark metallo-oxides,
make sense to people who may have many years of maintenance experience
but haven’t taken a chemistry class since high school.
The lab cannot be expected to know - unless it is specifically informed
- that a particular component has been running hot for a few months,
that the process generates thrust loading on the bearings, or that a new
seal was recently installed on a specific component that is now showing
signs of excess water in the oil sample.
Evaluating data and making meaningful condition-based monitoring (CBM)
decisions is a symbiotic process. The end user needs the lab
diagnosticians’ expertise to make sense of the data, while the lab needs
the in-plant expertise of the end user who is intimately familiar with
each component, its functionality, and what maintenance or process
changes may have occurred recently that could impact the oil analysis
data.
Likewise, evaluating data in a vacuum, without other supporting
technologies such as vibration analysis and thermography, can also
detract from the effectiveness of the CBM process.
Step No. 5. Performance Tracking and Cost-Benefit Analysis
Oil analysis is most effective when it is used to track metrics or
benchmarks set forth in the planning stage. For example, the goal may be
to improve the overall fluid cleanliness levels in the plant’s
hydraulic press by using improved filtration. In this case, oil analysis
- and specifically the particle count data - becomes a performance
metric that can be used to measure compliance with the stated
reliability goals.
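As a sketch, compliance tracking against such a cleanliness goal might look like the following. The ISO 4406 target and the monthly readings are hypothetical examples, not recommended limits.

```python
# Sketch: trending particle count data against a stated cleanliness goal.

TARGET = (16, 14, 11)   # hypothetical ISO 4406 goal for the hydraulic press

def meets_target(iso_code: tuple[int, int, int]) -> bool:
    """True when every range number is at or below the goal."""
    return all(actual <= goal for actual, goal in zip(iso_code, TARGET))

monthly_counts = [(19, 17, 14), (18, 15, 12), (16, 14, 11)]
print([meets_target(c) for c in monthly_counts])
# [False, False, True] - cleanliness improving after the new filters
```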
Metrics provide accountability, not just for those directly involved
with the oil analysis program, but for the whole plant, sending a clear
message that lubrication and oil analysis are an important part of the
plant’s strategy for achieving both maintenance and production
objectives.
The final stage is to evaluate, typically on an annual
basis, the effectiveness of the oil analysis program.