| I once worked with a medical device design engineer who, although talented enough, lacked the subtle yet indispensable skill of verbal communication. He had no concise, organized approach to his projects, and his problem solving was one-sided: his only aim was to satisfy what he personally felt was important. Creative problem solving and brainstorming with customers about what they desired did not fall within his repertoire. As a result, customer complaints and a long string of product failures eventually cost him his position.
Where specifically had he failed? The net result of his approach was that he designed devices that did not deliver the results customers desired. They also tended to be unnecessarily expensive to produce, unreliable to operate, or difficult to service. Everyone concerned with the product was often dissatisfied, from customers to service technicians. This caused the company we worked for to incur considerable expense to rectify his design errors. The company also lost some of its customer base to competitors. Sadly, none of this would have happened if my coworker had used a systems engineering approach in designing his projects.
Before we get any further into a discussion on systems engineering, let’s get a handle on what is meant by a system. In a nutshell, a system is a combination of interacting components that are organized to achieve one or more specific purposes. The components can be tools, machine parts, electronics, people, or any combination thereof. For example, hundreds of parts can be combined by a manufacturer into a system to form a medical device such as an x-ray film developing machine, the end result of which is to produce a film of diagnostic quality.
The system part of Systems Engineering stays true to this definition. It is an interdisciplinary approach to complex engineering projects which guides all activities during the course of a product’s life cycle, from conception to production. While doing so it will integrate and monitor work processes between all departments involved, with a constant eye towards optimization of processes and reduction of costs in order to satisfy stakeholder requirements.
A key objective of systems engineering is to produce systems that satisfy stakeholder needs by producing reliable, cost effective, and safe products capable of performing tasks as designated by the customer. Within the medical device arena stakeholders include patients, nurses, doctors, the US Food and Drug Administration (FDA), device service technicians, device dealers, as well as the device manufacturer.
Next time we’ll begin our exploration of how systems engineering addresses the medical device design process with a discussion on the first of its five stages, known as Concept.
| Imagine going on a diet and not having a scale to check your progress, or going to the doctor and not having your temperature taken. Feedback is important in our daily lives, and industry benefits from it, too.
Generally speaking, feedback, or monitoring, is a tool that provides relevant information on a timely basis as to whether things are working as they were intended to. It’s an indispensable tool within the food manufacturing industry. Without it, entire plants could be erected exposing workers to injury and consumers to bacteria-laden products. It’s just plain common sense to monitor activities all along the way, starting with the design process. Now let’s see how monitoring is applied in HACCP Design Principle No. 4.
Principle 4: Establish critical control point monitoring requirements. – Monitoring activities are necessary to ensure that the critical limits set at each critical control point (CCP) under Principle 3, discussed last week, are working as intended. In other words, if the engineer identifies significant risks in the design of a piece of food processing equipment and establishes critical limits at CCPs to eliminate those risks, then the CCPs must be monitored to confirm that the risks have actually been eliminated.
Monitoring can and should be performed in food manufacturing plants by a variety of personnel, including design engineers, the manager of the engineering department, production line workers, maintenance workers, and quality control inspectors. For example, engineering department procedures in a food manufacturing plant should require the engineering manager to monitor CCPs established by the staff during the design of food processing equipment and production lines. Monitoring would include reviewing the design engineer’s plans, checking things like assumptions made concerning processes, calculations, material selections, and proposed physical dimensions.
In short, monitoring should be a part of nearly every process, starting with the review of design documents, mechanical and electrical drawings, validation test data for machine prototypes, and technical specifications for mechanical and electrical components. This monitoring would be conducted by the engineering manager during all phases of the design process and before the finished equipment is turned over to the production department to start production.
To illustrate, suppose the engineering manager is reviewing the logic in a programmable controller for a cooker on a production line. She discovers a problem with the lower critical limits established by her engineer at a CCP in the design of a cooker temperature control loop. You see, the time and temperature in the logic are sufficient to thoroughly cook the smaller cuts of meat in most of the products that will be made on the line; however, the larger cuts will be undercooked. The time and temperature settings within the logic are insufficient to account for the difference.
This situation illustrates the fact that monitoring does no good unless feedback is provided with immediacy. In our example, the design engineer who first established the CCP and the critical limits was not informed in a timely manner of the difference in cooking times that different size meats would require, resulting in the writing of erroneous software logic. Fortunately, continued monitoring by the engineering manager caught the error, leading her to provide feedback about it to the design engineer, who can then make the necessary corrections to the software.
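The corrected controller logic can be sketched in a few lines: instead of one fixed setting, the lower critical limits are looked up by cut size. The cut categories and numbers below are illustrative assumptions for the example, not actual regulatory figures.

```python
# Illustrative lower critical limits by meat cut size (hypothetical
# values, not regulatory figures) -- larger cuts need more time and heat.
CRITICAL_LIMITS = {
    "small": {"min_temp_f": 160, "min_minutes": 20},
    "large": {"min_temp_f": 165, "min_minutes": 45},
}

def cook_cycle_ok(cut_size: str, temp_f: float, minutes: float) -> bool:
    """Return True if the cook cycle meets the lower critical limits
    for the given cut size."""
    limits = CRITICAL_LIMITS[cut_size]
    return temp_f >= limits["min_temp_f"] and minutes >= limits["min_minutes"]

# A setting adequate for small cuts fails for large cuts -- the very
# error the engineering manager caught during her review.
print(cook_cycle_ok("small", 160, 20))  # True
print(cook_cycle_ok("large", 160, 20))  # False
```

A single hard-coded time and temperature would silently pass the small cuts and undercook the large ones; making the limits explicit per category is what the monitoring feedback led the design engineer to correct.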
Next week we’ll see what design engineers do with the feedback they’ve received, as seen through the eyes of HACCP Principle 5, covering the establishment of corrective actions.
| How do parents make life safer and healthier for their kids? One of the ways is to impose limits on things like roaming distance within the neighborhood, curfews, and insisting that you eat your vegetables. Just common sense, right? Let’s take a look at some more of it.
Limits are also necessary within the food manufacturing industry. Let’s take a look at Hazard Analysis and Critical Control Point (HACCP) Principle No. 3 to see how they’re established and why.
Principle 3: Establish critical limits for each critical control point. – You can think of a critical limit as a boundary of safety for each critical control point (CCP). So how do you determine that boundary of safety? It’s difficult to generalize, but if you’ve ever watched the TV show Hoarders, you have an excellent example of one that has not only been breached, but torn asunder.
In order to prevent things in the commercial food industry from getting anywhere near Hoarders bad, maximum and minimum values are set in place, representing safeguards to physical, biological, and chemical parameters at play within the industry. Critical limits can be obtained from regulatory standards and guidelines, scientific literature, experimental studies, as well as information provided by consultants. These critical limits come into play with issues as varied as machine design, raw material temperatures, and overall safe processing times.
How could the hoarders let things get so bad? If you listen carefully, you’ll hear bits of information that provide a clue. They’ll say it started with a few things falling to the floor which they didn’t feel like picking up and it escalated from there.
Now all of us live within environments which differ as to their cleanliness, but by and large we live in spaces we feel comfortable in and consider reasonably clean. We don’t all habitually move stoves and refrigerators to clean, for example. But if we were so inclined, refrigerators do come with front access panels that are easily removed. Trouble is, the space they provide access to often isn’t large enough to accommodate hands and a vacuum cleaner nozzle comfortably. You can imagine how frustrating, and potentially dangerous to public health, it would be to have commercial machinery that provided such limited access for cleaning.
To cope with this problem design engineers institute minimum and maximum parameters, such as in the critical limit dimensions of a removable cover. Their guideline would ensure that enough space is provided so that personnel can fully access all aspects of machinery with tools for cleaning. That same cover can also have established maximum critical limits, so that dimensions aren’t too large and heavy to be manipulated by hand. Human nature being what it is, something that is too difficult to remove may be “forgotten” and parts of the machine may never get cleaned.
Raw meats and many types of produce can contain hazards like salmonella, E. coli, and other nasty critters that are dangerous to human health. One of the ways the commercial food industry works to ensure that these contaminants aren’t unleashed on the public is to install programmable control systems into processing machinery that cook the meat at an established minimum temperature for a minimum amount of time. Utilizing this type of temperature control in conjunction with an established maximum cooking parameter for temperature and time will virtually eliminate the possibility of overcooked or burnt food products. When you buy that frozen dinner, in most cases it’s completely cooked, but it’s a rarity to find it’s been burned.
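The idea of bounding a process parameter from both sides can be reduced to a simple window check. The temperature and time windows below are illustrative assumptions for the sketch, not published limits.

```python
# Hypothetical min/max critical limits for a cook step: the minimum
# ensures pathogens are killed, the maximum prevents burnt product.
COOK_TEMP_F = (165, 185)   # (minimum, maximum) temperature window
COOK_MINUTES = (30, 60)    # (minimum, maximum) time window

def within_limits(value: float, limits: tuple) -> bool:
    """Return True if value falls inside the [min, max] critical limits."""
    lo, hi = limits
    return lo <= value <= hi

def cook_step_ok(temp_f: float, minutes: float) -> bool:
    """Check both critical-limit windows for a cook step."""
    return within_limits(temp_f, COOK_TEMP_F) and within_limits(minutes, COOK_MINUTES)

print(cook_step_ok(170, 45))  # True  -- safely cooked, not burnt
print(cook_step_ok(150, 45))  # False -- below the minimum limit: undercooked
print(cook_step_ok(200, 45))  # False -- above the maximum limit: risks burning
```

The same two-sided pattern applies to the removable cover example: a minimum dimension guarantees cleaning access, a maximum keeps the cover light enough to handle.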
Another situation in which critical limits are utilized is in the maintenance of machinery, such as when they limit the number of hours a machine can be operated before it is shut down for servicing.
Next week we’ll move on to Principle No. 4 and see how it establishes monitoring requirements for each CCP.
| What would you do if you heard an unfamiliar sound coming from your water heater? If you’re like most people you’d make a mental note to keep an eye on it, but ignore it for the most part. Unfortunately, this less than proactive approach often results in water heater floods. As an engineer, I’m more likely than the general population to investigate the cause of the water heater’s sound and proactively seek a remedy before a real problem has a chance to develop.
The FDA’s Hazard Analysis Critical Control Point (HACCP) seeks to accomplish the same with regard to food production. As discussed last week with regard to HACCP Principle 1, those involved in designing food processing equipment must proactively analyze designs to identify potential food safety hazards. Now let’s see how common sense is once again employed through Principle 2, guiding design engineers to take control of situations where hazards have been identified through Principle 1.
HACCP Principle 2: Identify critical control points. A critical control point (CCP) is a step in the design process at which a control can be most effectively introduced to prevent or eliminate hazards. In this context a “control” would be a design revision to eliminate hazards identified during the Principle 1 stage. We will once again use the two examples introduced in last week’s blog discussion on Principle 1.
In our first example, hazard analysis revealed that food can accumulate in a food processing machine in areas where cleaning is difficult or impossible. This accumulation would eventually rot and fall into uncontaminated food passing through production lines. Design engineers would work to address this contamination hazard by identifying a CCP within the design process, that is, the best place where a preventative measure can be added to the machine setup to facilitate removal of the accumulation. At that CCP, measures can be taken to change the machine’s design. Perhaps all that is needed to correct the situation is to include easy to remove access covers.
In our second example hazard analysis revealed that the metal tooling as designed for our food production machine was too fragile and would not withstand the repeated forces imposed on it by the mass production process. This design flaw presents a strong possibility that metal parts will break off and enter food on the line. To correct this situation, design engineers must once again identify the juncture within the design process at which a CCP is identified. There, a preventative measure can most effectively be introduced, enabling more robust metal to be used in the tooling.
The previous two examples illustrate CCPs being utilized within the design process. CCPs can also be introduced outside the design process, as when they are identified during the course of training procedures involving the operation, cleaning, and general maintenance of equipment and production lines. And an excellent way of implementing this approach is to have design engineers collaborate with operating and maintenance staff. Working together, they are best able to identify key elements to be addressed and make note of them within written procedures.
Now that we have identified some examples of CCPs within the design process, we can move on to HACCP Principle 3 and how it guides design engineers to establish critical limits for each CCP.
| Imagine a doctor not washing his hands in between baby deliveries. Unbelievable but true, this was a widespread practice up until the last century, when infections, followed by the deaths of newborns, were an all-too-common occurrence in hospitals across the United States. It took an observant nurse to put two and two together after watching many physicians go from delivery room to delivery room, mother to mother, without washing their hands. Once hand washing in between deliveries was made mandatory, the incidence of infection and death in newborns plummeted.
Why wasn’t this simple and common sense solution instituted earlier? Was it ignorance, negligence, laziness, or a combination thereof that kept doctors from washing up? Whatever the root cause of this ridiculous oversight, it remains a fact of history. Common sense was finally employed, and babies’ lives saved.
The same common sense is at play in the FDA’s Hazard Analysis Critical Control Point (HACCP) policy, which was developed to ensure the safe production of commercial food products. Like the observant nurse who played watchdog to doctors’ poor hygiene practices and became the catalyst for improved hospital procedures that remain in place today, HACCP policy embodies a proactive strategy: hazards are identified and assessed, and control measures are then developed to prevent, reduce, or eliminate them.
In this article, we’ll begin to explore how engineers design food processing equipment and production lines in accordance with the seven HACCP principles. You will note that here, once again, the execution of common sense can solve many problems.
Principle 1: Conduct a hazard analysis. – Those involved in designing food processing equipment and production lines must proactively analyze designs to identify potential food safety hazards. If the hazard analysis reveals contaminants are likely to find their way into food products, then preventive measures are put in place in the form of design revisions.
For example, suppose a food processing machine is designed and hazard analysis reveals that food can accumulate in areas where cleaning is difficult or impossible. This accumulation will rot with time, and the bacteria-laden glop can fall onto uncontaminated food passing through production lines.
As another example, a piece of metal tooling may have been designed with the intent to form food products into a certain shape, but hazard analysis reveals that the tooling is too fragile and cannot withstand the repeated forces imposed on it by the mass production process. There is a strong likelihood that small metal parts can break off and enter the food on the line.
Next time we’ll move on to HACCP Principle 2 and see how design engineers control problems identified during the hazard analysis performed pursuant to Principle 1.
| Perhaps you’ve heard of the non-reciprocal wine and sewage principle. I’m not sure where it originated, but it states that if you add a cup of wine to a barrel of sewage, you still get a barrel of sewage. No brainer, right? Well, consider the flip side. If you add a cup of sewage to a barrel of wine, you also get nothing more than a barrel of sewage. In other words, a small amount of contamination goes a long way.
The premise of this principle also applies within the food manufacturing industry. If you were to add uncontaminated food to garbage, you would just get more garbage, and if you add garbage to food… well, you get it. The term garbage can encompass an endless variety of contaminants, such as broken glass, metal shavings, nuts, bolts, plastic fibers, grease, broken machine parts, errant human body parts, and on and on. Although the FDA does allow for certain levels of natural contaminants, like insect parts and rodent hairs, consumers are never pleased when undesirable elements enter their food supply. It could even be dangerous.
When design engineers create food processing machinery and production lines, they must be on the lookout for potential risks of contamination hazards. They must also provide a quick means of mitigation, before contaminants can enter into commercial production. A systematic approach provides the best means of addressing these needs, allowing for a pre-emptive method to ensure food safety. Checklists and procedural policy set in place for these reasons will enable design engineers to identify, assess, and control risks before they turn into hazards. This is where Hazard Analysis Critical Control Point (HACCP) planning comes in.
To address these needs, the FDA has set up the HACCP (pronounced, “hass-up”) system, defined as “…a management system in which food safety is addressed through the analysis and control of biological, chemical, and physical hazards from raw material production, procurement and handling, to manufacturing, distribution and consumption of the finished product.”
HACCP is the outgrowth of FDA current Good Manufacturing Practices (cGMP), which are set out in the Code of Federal Regulations pertaining to commercial food processors and manufacturers, Title 21, Part 110, entitled, “current Good Manufacturing Practice in Manufacturing, Packing or Holding Human Food.” Every commercial food processor, regardless of size, must implement a cGMP/HACCP quality assurance program to comply with these regulations.
HACCP is a proactive strategy where hazards are identified, assessed, and then control measures developed to prevent, reduce, or eliminate potential hazards. A key element of HACCP involves prevention of food contamination during all phases of manufacturing, and way before the finished food product undergoes quality inspection. This strategy extends into the food manufacturing equipment and production line design process as well.
Next time we’ll continue our look at HACCP and how its seven principles are used by design engineers to prevent food product contamination.
| Did you ever hear the saying, “garbage in, garbage out”? Perhaps you’ve used it yourself at times, as when your teenager insists on writing their 20-page term paper the night before it’s due. Parents, having the benefit of decades of life experience, know that the outcome of a last-ditch effort of this type will most likely not turn out well.
This wisdom also applies particularly well to the medical device manufacturing process. The FDA is like the parent in this instance, mandating that Design Transfer Procedures be in place to avert the types of disasters which might ensue if the “garbage” philosophy were carried out. Meant to ensure that medical device designs are correctly translated into production specifications for manufacturing, Design Transfer Procedures keep those directly involved with the manufacturing process in check. It is absolutely vital that those involved in manufacturing receive accurate and complete information.
Imagine what would happen if an engineer provided a manufacturer with faulty design information. Components could be made to the wrong specifications or of a material that proves toxic to the application. These errors range in negative effect from being costly in terms of dollars wasted to perhaps costing someone their life.
A Design Transfer Procedure would ensure that a variety of mishaps do not occur during the transfer process. The procedure is typically overseen by the medical device company’s management. For example, a Design Transfer Procedure would lay out responsibilities of supervisors and managers to make sure the latest revision of electrical schematics, bills of materials, Gerber files, and quality testing procedures are received by the manufacturer of a device’s printed circuit boards. It’s important that the order is received in a timely manner so as not to hold up the manufacturing process. However, it’s much more important that the printed circuit board is made properly, the correct electrical components are placed on it in the correct orientation, and it is tested to make sure it doesn’t malfunction after assembly.
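At its core, part of that transfer check reduces to comparing what was sent against the latest approved revision of each document. The file names and revision letters below are invented for illustration; a real procedure would draw them from the company’s document control system.

```python
# Latest approved revisions in the company's design records (hypothetical).
approved = {
    "schematic.pdf": "C",
    "bom.xlsx": "D",
    "gerbers.zip": "B",
    "test_procedure.doc": "A",
}

# Revisions actually sent to the printed circuit board manufacturer
# (hypothetical) -- note the stale bill of materials.
sent = {
    "schematic.pdf": "C",
    "bom.xlsx": "C",
    "gerbers.zip": "B",
    "test_procedure.doc": "A",
}

def transfer_errors(approved: dict, sent: dict) -> list:
    """Return documents that are missing or not at the latest revision."""
    errors = []
    for doc, rev in approved.items():
        if sent.get(doc) != rev:
            errors.append(doc)
    return sorted(errors)

print(transfer_errors(approved, sent))  # ['bom.xlsx']
```

Catching the stale bill of materials here, before the boards are built, is exactly the kind of mishap a Design Transfer Procedure exists to prevent.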
Design Change Procedures basically ensure that when changes are necessary, the medical device company follows all the procedures for Design and Development Planning, Design Input, Design Output, Design Review, Design Verification, and Design Validation. Once the changes are reviewed, validated, verified, and approved, they can be incorporated into the original device design. This is where the Design Change Procedure must dovetail with the Design Transfer Procedure to make sure the correct information is provided to the company’s management staff in the procurement, manufacturing, product service, and warehouse departments. This is to make sure they can keep component vendors on track with the changes, maintain sufficient inventory of the changed components, put the right components in the device during assembly, and properly support repair technicians in the field.
Yet another aspect of Design Controls promulgated by the FDA comes into play with the establishment of procedures for maintaining a Design History File (DHF). The DHF contains all documentation created during the life cycle of the project, from conception to completion and on into market introduction, sometimes beyond. The DHF Procedure sets up protocols for collection and organization of information about the medical device design, starting with design documentation and covering the gamut from design changes, to validation testing, to design verification, and on to design review. All this is done to ensure that the initial product design was developed in accordance with the original design plan and overall product design requirements.
| Recently my wife was on a quest to make the perfect pound cake, but before she put butter to flour she did her research. What’s the best butter? Best flour? Eventually she came up with a recipe she felt would prove to be the Queen of all pound cakes. After the recipe came reviews by her test panel, or family members, including myself. Questions were asked, such as, When you first bite into a pound cake, do you want to be aware of vanilla or lemon? It was only then that she would begin to combine ingredients for the final mouth watering product. Very much this same procedure is used when coming up with a new medical device.
Previously we’ve discussed FDA requirements for medical devices as they concern design controls with respect to design and development planning and design input procedures. We’ll now focus on requirements for Design Output, Design Verification, and Design Review Procedures.
Design Output and Design Verification Procedures go hand in hand to ensure that design output is properly documented, organized, reviewed, and evaluated in light of design input. What this means is that medical device companies must scrutinize and evaluate what is going into the design process, then make a comparison to what is coming out. The design is ultimately verified when all requirements for the medical device as previously set out have been met.
“Design output” is just another name for work product after major phases of the design project are completed, such as when my wife determined which butter would produce the best pound cake. Design output typically takes the form of specifications, notes, calculations, computer programs, mechanical drawings, electrical schematics, printed circuit board (PCB) layouts, bills of materials (BOM), mockups, prototypes, test data, and test reports. These are then utilized by people outside engineering circles to manufacture components and assemble them into a final product.
Design Review Procedures ensure that the design output is evaluated by others not directly involved in or responsible for the design work product, much as when family members served as a reviewing committee for my wife’s inquiries into taste preferences in pound cake. Sometimes she’d even ask a friend or neighbor to put their two cents in, and companies, too, will at times go outside and hire consultants to perform this function. By so doing, unbiased opinions are sought out, in the hopes that this fresh set of eyes will be more likely to spot errors, omissions, and misinterpretations that could prove disastrous if put into play. Design reviews are typically conducted after each major phase of a design project is complete.
Just as a recipe that looks good on paper may not necessarily taste good, a device design will often seem to work perfectly on paper, then prove otherwise when its manufacture begins or it’s used in the field. Ideally bugs are worked out before the product hits production and, later, the marketplace. Design Validation Procedures make use of prototypes for testing and careful evaluation under simulated or actual use conditions. Does the design safely meet requirements for intended use? Does it conform with industry standards? If not, there’ll be a lot of wasted “dough” going into the trash — pun intended!
Next time we’ll explore FDA requirements for Design Transfer and Design Changes. We’ll also talk about procedures for Design History Files.
| Have you ever had the divine experience of remodeling a major-use room of your house? Was the general contractor you employed able to understand what you wanted, plan out the work according to your requirements, and finish the job to your satisfaction? Maybe you had the unfortunate experience of hiring one that forgot your requirements, made things up as they went along, and stuck you with a room that looked awful, violated building codes, and didn’t meet your needs.
Now imagine what would happen if a medical device company took this haphazard approach to designing new products. Suppose the company’s engineers ignored the input of regulatory, marketing, procurement, quality control, and manufacturing staff? What if they chose not to follow applicable industry standards for performance and safety? And what if they failed to check design calculations or test prototypes for errors before putting the device into production and introducing it to the marketplace? The result is likely to be unfavorable, just like your contractor forgetting that you wanted a black granite countertop, not a beige one.
To help eliminate painful and costly scenarios such as these, the FDA requires that medical device manufacturers establish and maintain procedures to control the design of Class II and III devices, and even some Class I. This requirement for a system of design controls is part of the Quality System Regulation (QSR) under Title 21 of the Code of Federal Regulations. In case you’re not too familiar with the Code of Federal Regulations, Title 21 gives the FDA legal authority to regulate food, drugs, and medical devices in the United States.
So what falls under the purview of FDA design controls? Well, the FDA requires that a medical device company develop procedures for Design and Development Planning, Design Input, Design Output, Design Review, Design Verification, Design Validation, Design Transfer, Design Changes, and the Design History File.
For now, let’s focus on Design and Development Planning and Design Input Procedures.
In the Design and Development Planning Procedure companies must carefully plan who will be involved in each phase of product development, as well as how they will interact, all in an effort to ensure that information flows and design requirements are met. The right pool of people would include design engineers, in addition to those employees responsible for making sure that regulations are complied with and those who are charged with securing intellectual property rights to the design. Then there are those who must acquire the physical materials required to manufacture the device and those who will do the actual manufacturing of it. Also, those responsible for quality control, marketing, sales, and product service should be involved. Perhaps others should be involved as well. Mind you, the Design and Development Planning Procedures are not set in stone. They must be regularly reviewed and updated as the project evolves.
Now let’s talk about Design Input, which is just another term for a design requirement. These inputs can come from inside or outside the company. An example of a requirement coming from within is when Marketing stipulates that the manufacturing cost of the device must not exceed $150 in order to maintain an acceptable profit margin and remain competitive in the marketplace. A design requirement coming from outside the company might be an industry standard imposing specific requirements, such as that the device be designed to protect its electronics from radio frequency interference.
Next time we’ll continue our discussion on medical device design by exploring Design Output, Design Verification, and Design Review Procedures.
For the last couple of weeks we’ve been discussing FDA medical device risk classifications, namely Classes I, II, and III. We also began discussing the FDA system of regulatory controls governing each class, starting with General Controls. This week we’ll examine the more stringent guidelines that come into play within Special Controls.
As you would imagine from the name, Special Controls come into play when General Controls aren’t deemed to be sufficient to deal with the situation. Class II and III medical devices, because they pose a higher level of risk to patients than Class I, generally require more FDA supervision than mere General Controls. These devices tend to fall under the auspices of Special Controls. Special Controls include things like special labeling requirements, complying with mandatory performance standards, and perhaps requiring that a manufacturer conduct a Post Market Surveillance (PMS) study. In case you’re wondering, a PMS may be required by the FDA to collect data after a medical device is sold, should there be any unexpected adverse events involving the device. A study of this data would aid in an investigation to determine the number of events, the cause of the events, and how to correct any problems that led to the events.
Let’s look at some examples of how Special Controls apply to Class II medical devices. One example would be a cranial molding helmet. These helmets are often used with infants to reshape their skulls into becoming more symmetric. Due to the nature of this device’s application on such a delicate patient, Special Controls include a requirement for special labeling. In this case, the labeling must include warnings to physicians and parents that precautions must be taken during its application to protect patients from possible injury, including eye trauma and impairments of brain growth.
Another example would be sutures. Yes, they are considered to be Class II medical devices. In this case, Special Controls require that sutures meet “mandatory performance standards.” What are mandatory performance standards? Well, they generally include industry consensus standards for particular medical devices, based on industry-wide accepted guidelines to ensure proper product performance. In this example, industry standards for suture material contain specific guidelines as to material composition, diameter, mechanical strength, and biocompatibility. Adherence to these standards provides the highest assurance that sewn incisions won’t break open when the suture is stressed, and that the suture material won’t cause some sort of adverse reaction with the patient’s skin.
As specific as Special Controls can be, they are sometimes not enough. On these occasions the FDA states, “Class III devices are those for which insufficient information exists to assure safety and effectiveness solely through General or Special Controls.” Under these circumstances more regulatory control may be imposed. This is the case when dealing with medical devices directly responsible for supporting/sustaining human life, such as a cardiac defibrillator.
One such FDA control method that goes beyond Special Controls is the requirement to submit a Premarket Approval (PMA) application to the FDA for approval. The PMA is subject to the most stringent FDA requirements. As a part of the PMA process a company must demonstrate the safety and effectiveness of a new medical device design by producing data and documentation obtained during “adequate and well-controlled” clinical trials.
In our series on FDA Classifications for Medical Devices we have merely scratched the surface. Depending on the device in question there may be a myriad of other considerations, so please consult the FDA’s web site for the complete picture: http://www.fda.gov/MedicalDevices/default.htm.