Wednesday, June 20, 2012

DESIGNING RELIABILITY INTO COMPONENTS WITH QUALITY FACTOR

Design for reliability is an emerging discipline concerned with building reliability into components from the outset. It encompasses a set of tools and practices and prescribes the order in which an organization should deploy them to drive reliability into its components and operations. Typically, the first step in the reliability process is to set the system's reliability requirements; reliability must be "designed in" rather than added afterwards. During system design, these top-level reliability requirements are then allocated to subsystems by design engineers working together against a specific plan and model.
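As a minimal sketch of the allocation step (the post does not name a method, so equal apportionment is assumed here), each of n subsystems in series can be assigned the n-th root of the system-level target:

```python
# Hypothetical illustration: equal-apportionment allocation of a
# top-level reliability target across n subsystems in series.
# The reliability of a series system is the product of the subsystem
# reliabilities, so each subsystem gets the n-th root of the target.

def allocate_equal(system_target: float, n_subsystems: int) -> float:
    """Return the reliability each of n series subsystems must meet."""
    return system_target ** (1.0 / n_subsystems)

if __name__ == "__main__":
    target = 0.95          # system-level reliability requirement
    n = 4                  # number of subsystems in series
    r_i = allocate_equal(target, n)
    print(f"Each subsystem must achieve R >= {r_i:.4f}")  # ~0.9873
```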
A reliability plan and model uses block diagrams to provide a graphical or mathematical means of evaluating the relationships between the different components of the system. The models incorporate predictions based on parts/component counts and on resistance to failure and degradation taken from real-time data. The physics-of-failure design technique relies on understanding the physical processes of stress, strength, and failure at a very detailed level, together with statistical methods; the component and the system can then be redesigned to reduce the probability of failure. The reliability of components is, of course, governed by the various mechanisms through which they may fail, and these mechanisms depend mainly on the design, manufacturing processes, and operating environment of the devices. To reduce the incidence of failure, the modes of failure must be identified and studied, and the information fed back to the design and production teams for corrective action. In addition to this feedback, it is important that the user be supplied with information on the overall reliability of the devices under normal operating conditions. It is often necessary, however, to represent such conditions with other, more convenient reliability test conditions (e.g. thermal stress), so it is essential that these tests be related to operating conditions in a known manner.
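As an illustrative sketch of the block-diagram arithmetic (the post describes the models only in general terms), series blocks multiply their reliabilities, while parallel redundant blocks combine through their unreliabilities:

```python
from functools import reduce

# Illustrative reliability-block-diagram arithmetic (not the author's
# specific model): series blocks multiply reliabilities, while parallel
# (redundant) blocks combine unreliabilities.

def series(*reliabilities: float) -> float:
    """All blocks must survive: R = R1 * R2 * ... * Rn."""
    return reduce(lambda acc, r: acc * r, reliabilities, 1.0)

def parallel(*reliabilities: float) -> float:
    """At least one block must survive: R = 1 - (1-R1)(1-R2)...(1-Rn)."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), reliabilities, 1.0)

if __name__ == "__main__":
    # Two redundant pumps (0.90 each) feeding a single controller (0.99).
    r_system = series(parallel(0.90, 0.90), 0.99)
    print(f"System reliability: {r_system:.4f}")  # 0.9801
```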
These latter tests are the accelerated life tests carried out to evaluate device reliability with the help of quality factor analysis. The correlation between such tests (quality factor tests) and operating conditions may be determined from the wear-out mechanisms of the devices, where the activation energies for these mechanisms are obtained from a series of step-stress and overstress tests. The distribution plots from these types of tests are frequently distorted by random or systematic failures and may even be swamped by the dominant failure modes. In addition, new failure modes may be introduced by the stresses of the life test itself, which means that several failure modes can be considered and determined with the help of the quality factor metric, through the use of the Technological Inheritance Coefficient.
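The post names thermal stress but no specific life model; the Arrhenius relation is a common assumption for relating accelerated thermal tests to operating conditions, with the activation energy Ea taken from step-stress and overstress testing as described above:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Acceleration factor AF = exp[(Ea/k) * (1/T_use - 1/T_stress)],
    with temperatures converted to kelvin.  ea_ev is the activation
    energy (eV) extracted from step-stress/overstress tests."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

if __name__ == "__main__":
    # Assumed values: Ea = 0.7 eV, field use at 55 C, test at 125 C.
    af = arrhenius_acceleration(0.7, 55.0, 125.0)
    print(f"Acceleration factor: {af:.1f}")  # 1 test hour ~ AF field hours
```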
In this case, stress is combined with other surface quality parameters, such as surface roughness, surface wavelength, surface hardness, surface concentration factor, and others, to form a single quality factor metric. A single failure mechanism characterized by the quality factor will give straight lines on normal probability plots. It is therefore both necessary and now possible for the physics of the failure mechanisms and their levels of activity to be considered together during real-time operation. It is common practice to employ screening procedures, such as microscopic examination and "burn-in" programs, to enable immediate rejection of defective components or to accelerate gross failure potential within a short time. Component quality factor analysis with the technological inheritance model offers a cost-effective way to combine stress tests, microscopic tests, and physics-of-failure tests on a single platform for reliability design, as well as for lifetime and life-cycle cost evaluation and assessment of components.
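The post does not give the formula behind the combined metric, so the following is a purely hypothetical sketch: each surface quality parameter is normalized against a target value, and the scores are folded into one number with a weighted geometric mean.

```python
import math

# Purely hypothetical aggregation of surface quality parameters into a
# single quality factor metric; the actual Technological Inheritance
# Coefficient formulation is not given in the post.  Each parameter is
# normalized to (0, 1], then combined as a weighted geometric mean.

def quality_factor(params: dict[str, float],
                   targets: dict[str, float],
                   weights: dict[str, float]) -> float:
    """Weighted geometric mean of normalized parameter scores."""
    total_w = sum(weights.values())
    log_sum = 0.0
    for name, value in params.items():
        # score <= 1, equal to 1 when the measured value meets its target
        score = min(targets[name] / value, value / targets[name])
        log_sum += weights[name] * math.log(score)
    return math.exp(log_sum / total_w)

if __name__ == "__main__":
    measured = {"roughness_um": 0.9, "wavelength_um": 12.0, "hardness_hv": 640.0}
    targets  = {"roughness_um": 0.8, "wavelength_um": 10.0, "hardness_hv": 650.0}
    weights  = {"roughness_um": 2.0, "wavelength_um": 1.0, "hardness_hv": 1.0}
    print(f"Quality factor: {quality_factor(measured, targets, weights):.3f}")
```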

