The Future State of Computer Validation, Part II: Increasing the Efficiency of Computer Validation Practices


Pharmaceutical Technology Europe, September 2003, Volume 15, Issue 9

Part I of this article was published in the March 2003 issue of 21 CFR Part 11: Compliance and Beyond. In this issue, Part II discusses the potential advances and changes that must be made for computer validation to remain innovative and relevant to the industry.

The objective of this article is to look ahead at the future state of computer validation principles based on current events and trends in regulations, business practices and technology. This crystal ball approach will prepare the industry for the future by establishing a current-state, best-practice foundation of computer validation principles and by improving computer validation practices.

Improving the future of computer validation infrastructure

With the increased need to conduct computer validation comes a need to improve the efficiency with which it is performed. Better efficiency requires a better infrastructure for computer validation practices. In addition to an existing computer validation programme that may already be established (for example, an inventory list of the systems, a master plan on the validation status of the systems, a prioritization of how the systems will be validated and procedures for conducting computer validation), the following are some conceptual approaches that can be considered for building the future infrastructure (please note that these concepts are simply food for thought).

Use of a commercial off-the-shelf (COTS) screening requirements document

The current practice of preparing a full and thorough requirements document before system purchase may hamper the effort to purchase a system expediently. In addition, new systems acquisition today mainly involves the purchase of COTS software, which typically means there is an existing client base and, perhaps, some familiarity with the product. The concept of screening requirements is the use of a high-level requirements document to select a vendor, which enables a purchase order to be issued and the system acquisition process to begin. Once the vendor is selected, the strategy is to work with the vendor to acquire or develop the specifications document. The screening requirements process can only be considered for purchasing a system that meets all of the following criteria:

  • a COTS system (one that is not a customized, contracted or an in-house developed system)

  • not a new system (that is, it is not version 1.0; there is an established, regulated client base)

  • a system for which the principle of functionality and operation is already known (for example, a high performance liquid chromatography system)

  • a similar system is already available in-house

  • a system for which the requirement specifications can be furnished by or co-developed with the vendor.
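
These criteria translate naturally into a simple eligibility check. The following sketch is purely illustrative - the field names and structure are assumptions, not an established tool - but it shows how the all-criteria-must-be-met rule could be captured:

    # Hypothetical sketch: check whether a candidate purchase qualifies for
    # the COTS screening requirements route. All field names are assumptions.
    def qualifies_for_screening(system: dict) -> bool:
        criteria = [
            system.get("is_cots", False),                  # not custom, contracted or in-house
            not system.get("is_first_release", True),      # established, regulated client base
            system.get("functionality_known", False),      # e.g., an HPLC system
            system.get("similar_system_in_house", False),
            system.get("vendor_can_supply_specs", False),  # furnished or co-developed specs
        ]
        return all(criteria)  # ALL criteria must be met

    candidate = {"is_cots": True, "is_first_release": False,
                 "functionality_known": True, "similar_system_in_house": True,
                 "vendor_can_supply_specs": True}
    print(qualifies_for_screening(candidate))  # True -> screening route may be considered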

As with other types of software systems, a risk factor is involved in the use of this screening requirements concept. Those risks may vary from system to system, from project to project and from company to company. From a computer validation perspective, the main risk to consider is whether the concept can still meet the basic objective of computer validation: providing documented evidence that gives a high degree of assurance that the system reliably and consistently does what it is designed to do. When this concept is applied and the above criteria are met, that basic objective can still be achieved. The risk arising from this approach can be mitigated by ensuring that the system is qualified to meet business needs (for example, through testing). Hence, incorporating and meeting business needs should be part of validating the system.

Process for evaluating the need to do a vendor audit

This section is not intended to introduce a concept regarding how a vendor audit should be conducted, but rather to introduce the concept of determining whether there is a need to conduct a vendor audit (or the type and level of the audit). Having such a process will define a consistent approach for evaluating vendors and expedite the decision making process.

The following are some of the questions that can be considered for this process:

  • is the system COTS or custom developed (contracted out or developed in-house)?

  • does the acquisition involve the purchase of a single system or multiple systems?

  • will the system be implemented at one location or multiple locations?

  • will the system be deployed as a standalone system or interfaced to other systems?

  • is the system new and recently marketed or has it already been in the market for an extended period of time?

  • has the purchaser had experience with this system?

  • has the purchaser had experience with the vendor?

  • will the vendor be used just for this one particular project or for future projects?

  • has the vendor been previously audited by your company or does the vendor have an audit on file with the Audit Repository Center?

  • can the vendor make high quality development and validation documentation available?

  • does the vendor have an established market history (is it financially stable)?

  • does the vendor have quality certifications?

  • does the system perform complex GxP functions (for example, you may not want to do an audit of a vendor for simple electronic balances)?

Other factors can also be considered and a weighting factor can be applied to each of those factors. The totalled result can be used to determine whether a vendor audit should be conducted. Having this type of process in place will expedite the process of computer validation by providing guidance as to which vendors must be audited.
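
To make the weighting idea concrete, consider the following sketch. The factors, weights and threshold shown are illustrative assumptions only; each company would define and calibrate its own:

    # Hypothetical weighted scoring for the vendor audit decision; the
    # weights and threshold below are illustrative only.
    AUDIT_FACTORS = {
        "custom_developed":        5,  # COTS scores 0, custom scores 5
        "multiple_systems":        2,
        "multiple_locations":      2,
        "interfaced":              3,
        "new_to_market":           3,
        "no_purchaser_experience": 2,
        "no_prior_audit_on_file":  3,
        "complex_gxp_functions":   5,
    }
    AUDIT_THRESHOLD = 12  # assumed cut-off; at or above this, conduct an audit

    def audit_needed(answers: dict) -> bool:
        score = sum(w for factor, w in AUDIT_FACTORS.items() if answers.get(factor))
        return score >= AUDIT_THRESHOLD

    # A standalone, well-established COTS balance would score low (no audit);
    # a new, custom, interfaced GxP system scores high (audit needed).
    print(audit_needed({"custom_developed": True, "interfaced": True,
                        "new_to_market": True, "complex_gxp_functions": True}))  # True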

Process for accepting third party certification

In addition to considering the need for vendor audits to inspect the quality of the product, alternatives - such as third party certification - may be evaluated. The introduction by the US Food and Drug Administration (FDA) of a systems approach to inspection may facilitate the use of third party certification to evaluate the quality of the software without performing a vendor audit. Third party certification bodies are independent external quality assurance agencies that certify the quality of a product by conducting audits.39 These bodies use technically qualified auditors who have been specially trained and subjected to professional selection. Having this type of process in place would expedite computer validation by potentially avoiding long discussions regarding the quality of the product; instead, the audit can concentrate on the capability of the product to meet user needs.

It should be noted that FDA does not currently recognize any third party certification, including ISO 9001 certification. The industry must work with FDA to establish a level of trust in such a scheme before it can be relied on to reduce the validation workload. In the meantime, the PDA vendor audit effort (Technical Report 32) and the establishment of a vendor audit package repository centre39 should be considered as a means of satisfying the supplier audit requirement.

Availability of a corporate data dictionary

To increase the efficiency of decision making and information availability, the need for systems integration (for example, between chromatographic systems and LIMS or between LIMS and MRP) will increase and become more prominent in the future.

To anticipate this need, one has to consider the development of a corporate data dictionary standard (for example, establishing standard definitions for terms such as sample, item code and unit of measurement, and creating a naming convention for products or data elements that will be shared between the computer systems). Building the data dictionary can start whenever a new system is deployed, or from the existing systems by collecting data dictionary items from them. The collected items can then be compiled and cleaned (that is, eliminating redundancies and selecting a primary data item preference) to provide a controlled version of the data dictionary. When another system is being considered for deployment or acquisition, the dictionary can be used to assess whether the data elements already exist or a new element must be introduced to the library. The data dictionary can also support requirements development and aid system or vendor selection; for example, it can help determine the complexity of integrating or interfacing new and existing systems.
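
As a rough illustration of the compile-and-clean step and the subsequent lookup, consider the following sketch (the structure and terms are assumptions):

    # Hypothetical data dictionary build: merge terms collected from several
    # systems, eliminate redundancies and support lookups for new systems.
    def compile_dictionary(per_system_terms: dict) -> dict:
        master = {}
        for system, terms in per_system_terms.items():
            for name, definition in terms.items():
                key = name.strip().lower()          # naming convention: lower case
                master.setdefault(key, definition)  # first definition becomes primary
        return master

    collected = {
        "LIMS": {"Sample": "portion of material under test", "Item Code": "SKU"},
        "MRP":  {"item code": "SKU", "Unit of Measurement": "kg, L, each"},
    }
    master = compile_dictionary(collected)
    # When deploying a new system, check whether its elements already exist:
    print("sample" in master)        # True -> reuse the standard definition
    print("batch number" in master)  # False -> a new element must be introduced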

Simplifying the computer validation approach

Currently, almost all instruments and equipment being purchased contain a microprocessor, even those as simple as a pH meter. A simple microprocessor-based system does not always require the same level of validation (or qualification) effort as a more complex system. Hence, a simple conceptual process for determining the level of validation (or qualification) effort is needed. This concept is based on the premise that the required effort can be categorized as calibration, qualification or validation. For example, laboratory instruments or equipment that meet all of the following criteria can be categorized as needing calibration only:

  • the equipment or instrument is a COTS system

  • instruments or equipment not attached to an external computer or PC

  • any software is embedded in the equipment or instrument, and only the vendor can load or change the software

  • no data are retained or stored by the equipment or instrument other than the current data analysis measurement

  • no data can be stored and retrieved, changed and stored again

  • the equipment or instrument is of the standalone type and not interfaced or integrated to other systems (if it is interfaced or integrated to other systems, then the total integrated or interfaced system must be considered when determining the level of validation or qualification).

In addition to calibration, qualification must also be performed if any of the following factors apply to the operation or use of the equipment or instrument:

  • calibration must be run whenever a sample analysis is performed (that is, calibration is not periodic or performed on a preventive maintenance schedule)

  • final output (or result) of the equipment or instrument requires data processing or calculation based on the input of another sample or other data parameters obtained during the analysis of that additional sample

  • the equipment or instrument controls or monitors switches or other functions that are needed for that equipment to operate.

If an instrument or equipment does not fall into these categories (that is, calibration only or calibration plus qualification), then validation is required. The level of validation can then be divided based on whether the system is custom developed or COTS. For a custom system, a vendor audit, design specifications, programming standards and a source code walk-through and review may have to be considered and included as part of the validation activities. For COTS packages, much of this work will have already been done by the vendor (and verified through an audit). The criteria described above offer a conceptual depiction of the types of issues that must be considered when attempting to determine an appropriate level of validation activities.
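
The categorization logic can be expressed as a simple decision routine. The sketch below is a conceptual illustration only; the criteria flags are assumptions drawn from the lists above:

    # Hypothetical decision routine: calibration only, calibration plus
    # qualification, or full validation, per the criteria above.
    def validation_level(s: dict) -> str:
        calibration_only = all([
            s.get("is_cots", False),
            not s.get("attached_to_pc", True),
            s.get("software_vendor_locked", False),  # only vendor loads/changes software
            not s.get("retains_data", True),         # only current measurement held
            not s.get("data_editable", True),        # no store/retrieve/change/store
            s.get("standalone", False),              # not interfaced or integrated
        ])
        if not calibration_only:
            return "validation"  # custom vs COTS then sets the depth of validation
        needs_qualification = any([
            s.get("per_run_calibration", False),
            s.get("derived_results", False),         # output computed from other inputs
            s.get("controls_other_functions", False),
        ])
        return "calibration + qualification" if needs_qualification else "calibration"

    ph_meter = {"is_cots": True, "attached_to_pc": False, "software_vendor_locked": True,
                "retains_data": False, "data_editable": False, "standalone": True}
    print(validation_level(ph_meter))  # "calibration"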

Validation data and test sets retention for regression data analysis

During validation, consider analysing and selecting data and test sets that were used for the testing. The data and test sets will be useful for future testing activities that can take advantage of regression analysis testing. The data and test sets should be able to form the baseline of the system's current operational state and be used for ongoing support activities. These data and test sets should be meaningful in testing the important system functions. Hence, if there is a need to verify an important system function, this regression analysis can save time by eliminating the need to develop new data sets for testing. For example, regression testing might be considered for verifying the availability and operation of certain functions during an upgrade that should not affect the operation of those functions. This approach is even more desirable in light of the fact that regulators typically want to see the system pass the same challenges after a change as it did originally. Besides upgrades, patches or fixes (whether for computer hardware, the operating system, application program or third party programs), the regression analysis test can also be considered as part of a disaster recovery programme, for integration with other systems, for periodic review of the system's validation status and for troubleshooting activities.
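
One way to retain the baseline is to store each test case with its input data and originally accepted result, so the same challenges can be replayed after an upgrade or patch. A minimal sketch (the record layout and the function under test are hypothetical):

    # Hypothetical regression baseline: retained validation test sets are
    # replayed after a change and compared with the originally accepted results.
    BASELINE = [
        {"name": "assay_calculation", "input": (98.7, 100.0), "expected": 98.7},
        {"name": "percent_recovery",  "input": (49.9, 50.0),  "expected": 99.8},
    ]

    def percent_result(found, nominal):
        return round(found / nominal * 100, 1)  # system function under test

    def run_regression(baseline) -> bool:
        failures = [case["name"] for case in baseline
                    if percent_result(*case["input"]) != case["expected"]]
        if failures:
            print("Regression failures:", failures)
        return not failures

    print(run_regression(BASELINE))  # True -> system passes the same challenges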

Statistical sampling approach for change control

Change control is a validation activity that has traditionally been resource intensive. Assuming that each change control request takes 2 h (this is an assumption - in actuality each request may take longer), and assuming there are three changes per system per year, having 100 systems means that change control will require 600 resource hours per year. Hence, a process to improve the efficiency of change control should be considered.

The concept considered here is statistical sampling to review the change process associated with systems undergoing identical change. The sampling of these systems should be based on sound statistics or currently accepted regulatory practices (for example, √n + 1, as in the sampling methods for raw materials, might be applicable to identical computer systems undergoing exactly the same change). It should be noted that this approach is not universally applicable, and its suitability should be considered for each situation. For example, it might be suitable for a client–server system with several identical clients (hardware, software, application, environment), but it may not be suitable for systems traditionally validated on an individual basis (for example, tablet presses). If this type of approach is selected, one could suggest two types of tests as a minimum for the client–server example above:

  • full testing on the selected sample

  • testing required outside the selected sample.

An additional aspect to consider when applying statistical sampling is the need to predefine the acceptability of similar systems. If the systems are not identical, then consideration must be given to what is an acceptable delta for the differences between those similar systems. This approach will be more effective if there are more similar systems, making standardization of systems a key factor in operating the business. Some of the deltas that one can consider may include the differences in software such as operating systems, third party tools, application programs, versions, patches and fixes, and the deltas in hardware, equipment, instruments or other peripherals that are components of the system. Great care must be taken in justifying an acceptable delta.
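
As a worked example of the sampling rule mentioned above: for 100 identical clients undergoing the same change, √n + 1 selects 11 for full testing. The sketch below illustrates the arithmetic only; as noted later, professional statistical support should confirm any real plan:

    import math
    import random

    # Hypothetical sampling plan for identical systems undergoing the same
    # change; sqrt(n) + 1 is borrowed from raw-material sampling as in the text.
    def sample_size(n_identical_systems: int) -> int:
        return math.isqrt(n_identical_systems) + 1

    def select_sample(system_ids: list) -> list:
        k = sample_size(len(system_ids))
        return random.sample(system_ids, k)

    clients = [f"client-{i:03d}" for i in range(100)]
    print(sample_size(len(clients)))   # 11 -> full testing on these
    print(select_sample(clients)[:3])  # the remainder get the testing defined
                                       # outside the selected sample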

Other approaches can be used to improve the change control process (for example, defining a clear boundary between the different levels of documentation required for GMP and non-GMP hybrid types of systems). Running regression analysis tests when a non-GMP function is changed could also be considered - regression tests may take only a couple of minutes rather than the 2 h in the example given at the beginning of this section. Another consideration is the availability of a configuration management tool that can aid a faster determination process when evaluating the effect of a change. The configuration management tool can maintain the traceability of the validation documents. Facile traceability (for example, requirements to specifications, specifications to test sets and operating procedures to training requirements) can have a significant effect on the effort to implement change. When a requirement or a function is changed, the effect of the change can be traced to the specifications, test sets and operating procedures that may require updating. Knowing the effect may allow a better evaluation of the acceptance of the change and also a better estimate of the resources required to implement the change.

Finally, statistical sampling can also be considered as part of a testing strategy for projects implementing or deploying multiple systems that are the same or very similar (that is, within an acceptable delta). A similar approach, sometimes referred to as matrix validation, is used in the context of validating manufacturing equipment and processes.

When statistical sampling is used, it is recommended that professional statistical support is used rather than relying on ad hoc advice. It is vital that statistical techniques are used appropriately.

Disaster recovery and business continuity planning

The increased use of systems means an increase in a business' operational dependency on a system's performance and availability. Preparing for a system's unavailability means planning how to recover the system and also how to continue the business operation while the system is unavailable. Some factors to consider for this type of plan may include the following:

  • how to continue business operations when the computer system is not available, also known as contingency planning (note that the more reliant the business becomes on automated processes, the less likely it is that manual operations can be substituted when there is a disruption in service)

  • the criteria regarding how a disaster can be declared or undeclared and who has the authority to make those decisions (note that a disaster can be either a hardware or software failure, or a natural one)

  • how the alternative system can be qualified in an expedient manner yet maintain data integrity (this includes manual back-up processes, which are problematic because many business operation units may no longer have the knowledge or the tools [such as paper forms] to run their processes without the aid of computers)

  • how data generated or transactions executed while the system is unavailable can be incorporated into the recovered system.
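
For the last point, one common pattern (offered here only as an illustrative sketch, not a prescribed method) is to journal the manual transactions captured during downtime and replay them, in order, into the recovered system:

    from datetime import datetime, timezone

    # Hypothetical downtime journal: manual transactions captured while the
    # system is unavailable, then replayed into the recovered system in order.
    journal = []

    def record_manual_entry(operator: str, action: str, payload: dict):
        journal.append({"when": datetime.now(timezone.utc).isoformat(),
                        "operator": operator, "action": action, "payload": payload})

    def replay_into_recovered_system(apply_fn):
        for entry in sorted(journal, key=lambda e: e["when"]):
            apply_fn(entry)  # each replayed entry should be verified for integrity

    record_manual_entry("jdoe", "log_sample", {"sample_id": "S-001", "result": 98.7})
    replay_into_recovered_system(lambda e: print("replaying", e["action"], e["payload"]))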

The level of periodic review and gap analysis

Whether it is a periodic review, audit, validation evaluation or gap analysis, the basic objective of all these activities is the same: to evaluate the validation status and find the areas where validation (and possibly regulatory compliance) can be improved. With the increased number of systems, this activity may require substantial resources. Hence, a process for determining the level of this activity based on the system's risk is worth considering.

The approach for the levels of review can generally be categorized into the following:

  • the availability of expected documents such as validation plan, specifications, installation qualification (IQ), operational qualification (OQ) and performance qualification (PQ). (The expected documents themselves can be listed based on the applicable computer validation standard operating procedures [SOPs].)

  • the availability of expected sections within each document (for example, software section of an IQ document)

  • the availability of expected elements within each section of the document (for example, the software section of an IQ document contains the elements of operating system and application program names and version numbers)

  • the availability of expected functions being verified in the validation exercise (for example, the system access and security function or other functions being used according to the operational procedure for the system)

  • the availability of evidence of conformance to good documentation practices (for example, appropriate signatures and dates)

  • the availability of appropriate support documentation for system management, security procedures, back-up and restore and problem management.

These levels of review can then be applied according to the system's risk. Various factors can be considered in assessing the system's risk. The risk factors are mostly related to the degree of regulatory exposure and the importance of the system's ability to support business operations. The following are just some of the risk factors one can consider:

  • the effect of the system on product safety, identity, strength, purity and quality - whether direct or indirect

  • the interval since the previous review or when the last validation activity was conducted

  • the number of instances of the system (that is, whether there is only one system or several of these systems)

  • the product coverage of the system being used (that is, whether the system is used for only one product or multiple products)

  • the number of system users

  • the number of changes that have occurred, supported by evidence that the changes have been managed appropriately

  • the historical compliance of the system (that is, the number of observations documented in previous reviews)

  • the number of problems that have been attributed to the system (for example, batch records may give an indication that there has been process deviation because of a problem traceable to the system).

Weighting factors can then be applied to those various types of risk, and the calculated level of risk can be correlated to the level of review that must be performed. This type of review approach will enable resources to be focussed and allocated accordingly. In addition, by assigning a unique colour or number to each review finding on the basis of the level of review, this periodic review approach can also be used as a management tool (for example, assigning the colour red to every finding that indicates a document is not available, or yellow to findings that indicate an expected section of a document is not available). The compiled results can then be placed in a table, whereby systems with the highest number of red flags have a lower degree of compliance and may need those findings addressed immediately.
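
The weighting and colour-flagging ideas can be sketched together as follows; the weights, levels and thresholds are placeholders, not recommendations:

    # Hypothetical review scoring: weighted risk factors determine the review
    # level; findings are then flagged by the level at which they were found.
    RISK_WEIGHTS = {"direct_product_impact": 5, "years_since_review": 2,
                    "multiple_products": 2, "many_users": 1,
                    "unmanaged_changes": 3, "prior_observations": 3}

    FLAG_BY_FINDING = {"document_missing": "red", "section_missing": "yellow",
                       "element_missing": "green"}

    def review_level(risks: dict) -> str:
        score = sum(w for k, w in RISK_WEIGHTS.items() if risks.get(k))
        return "full" if score >= 8 else "standard" if score >= 4 else "basic"

    def flag_findings(findings: list) -> dict:
        tally = {"red": 0, "yellow": 0, "green": 0}
        for f in findings:
            tally[FLAG_BY_FINDING[f]] += 1
        return tally  # systems with the most red flags are addressed first

    print(review_level({"direct_product_impact": True, "prior_observations": True}))
    print(flag_findings(["document_missing", "section_missing", "document_missing"]))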

System retirement approach

As more and more systems are deployed, the number of systems that will be retired also increases. Hence, it is important to establish a process for system retirement before the need arises. The main consideration for system retirement is the accessibility and disposition of data from the system being retired. It must be established whether the data from the system can also be retired or must remain available for future access. On the basis of the data disposition, one approach that can be considered is to classify the retirement process as "full retirement" or "semi-retirement." Full retirement means every part of the system is retired, including the data, software, hardware and equipment. Semi-retirement means only part of the system is retired; data will typically be kept available for current or future operation. The semi-retirement process is then based on the alternatives for keeping the data accessible. Points to consider when determining these alternatives may depend upon the portability of the data, cost and system capability (risk analysis factors), including the following:

  • Conformance of the data to a standard data format (that is, the degree to which the data are non-proprietary to the system being retired and can be used in the new system).

  • The location and accessibility of the data. With the advances of distributed data architecture, it may well be that data or records are compiled from several databases residing in various systems (for example, batch record data might be pulled from MRP, MES and LIMS).

  • The type of future data processing or data readability requirements. For example, if the data are incompatible with the future system, the rendering of data to a different electronic format might be a viable option (such as using Adobe Acrobat as the rendering tool to port the data into a portable document format [PDF]) or one can simply print hard copies.

  • Future system capability to accommodate the data. This point should also be considered and verified. A simple example is the issues surrounding the migration of documents created by an earlier version of Microsoft Word to a newer version.

  • Consideration should also be given to other typical data migration issues (for example, how the future system addresses truncation or rounding of numeric fields).

  • Preservation of associated metadata (for example, audit trails, time stamps and a myriad of other details that 21 CFR Part 11 requires to be tracked).

The above items are just some of the factors to be considered when retiring a system. Without doubt, Part 11 regulatory requirements add complexity to the data migration issues (for example, having the need to maintain a link between the electronic record and the electronic signature).
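
Migration issues such as truncation or rounding can be caught with a field-by-field comparison between source and migrated records. A hedged sketch (field names and tolerance are assumptions):

    # Hypothetical migration verification: compare numeric fields before and
    # after migration to detect truncation or rounding introduced by the
    # future system.
    def find_numeric_deltas(source: dict, migrated: dict, tolerance: float = 0.0):
        deltas = []
        for field, old_value in source.items():
            new_value = migrated.get(field)
            if isinstance(old_value, float) and new_value is not None:
                if abs(old_value - new_value) > tolerance:
                    deltas.append((field, old_value, new_value))
        return deltas

    source_record   = {"assay": 99.987654, "ph": 7.41}
    migrated_record = {"assay": 99.9877,   "ph": 7.41}  # target stores 4 decimals
    print(find_numeric_deltas(source_record, migrated_record))
    # [('assay', 99.987654, 99.9877)] -> rounding must be assessed and justified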

Modularizing the validation document library

Leveraging documents from existing validations is a typical approach to expediting the preparation of validation documents. This leveraging process is usually performed by searching a library of available documents, selecting the documents that can be used and then customizing them to conform to the system being validated. It is the customization phase that usually takes the most resources; therefore, the process for customizing the documents is the area requiring improvement. One approach to consider is modularizing the validation documents into components and objects, which can then be selected and compiled into the desired and appropriate validation documents. Some of the factors to consider in modularizing the validation documents are

  • the use of a common style and format (for example, the title page, table of contents, headers and footers, and pagination method), and the use of common terminology and phrases

  • the use of common templates for such documents as user requirements, functional specifications and test scripts

  • the use of common categories based on the system's operational location (for example, manufacturing, laboratory or information systems)

  • the use of common terminology (including naming conventions) and categories within those operational locations (for example, tablet coating equipment, HPLC and databases)

  • the use of common test cases (or test scenarios) within those systems (for example, security test, report output verification and alarm tests)

  • the practice of standardizing common modular components within those systems (for example, the controller, PC or autosampler for HPLC). Hence, substituting the same module into another system will reduce the amount of validation needed because the module has been qualified individually, and the system as a whole may only require a limited set of validation activities.

Once a strategy or plan for modularizing the validation document library is established, additional improvements can also be made (for example, creating a validation document compiler to allow a 'cut-and-paste' or 'click-and-drag' mechanism to aid in the creation of the documents). This standardization practice has the added benefit of simplifying the task of reviewing validation documentation and can have a significant, positive effect on the review cycle time.
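
The validation document compiler mentioned above could begin as little more than the ordered assembly of standard modules. A minimal sketch, with assumed module names:

    # Hypothetical document compiler: assemble a validation document from a
    # library of standardized, pre-approved modules.
    MODULE_LIBRARY = {
        "title_page":    "Title page (common style and format)",
        "security_test": "Test case: system access and security",
        "report_test":   "Test case: report output verification",
        "alarm_test":    "Test case: alarm tests",
        "hplc_autosampler_iq": "IQ module: HPLC autosampler (pre-qualified)",
    }

    def compile_document(module_names: list) -> str:
        sections = [MODULE_LIBRARY[name] for name in module_names]  # KeyError if a
        return "\n\n".join(sections)           # module is missing from the library

    oq_for_hplc = compile_document(
        ["title_page", "hplc_autosampler_iq", "security_test", "alarm_test"])
    print(oq_for_hplc)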

21 CFR Part 11 implementation consideration points

The previous concepts cover several areas of a system development life cycle: acquisition, vendor audit, requirements, validation approach, change control, regression testing, periodic review and system retirement. This section deals specifically with the implementation of 21 CFR Part 11. The typical route for implementing Part 11 is to assess the system, develop a remedial action plan that identifies the potential solutions, budget the plan and execute it. However, there are additional points worth considering:

  • The availability of a position paper. This paper should provide the company's position regarding issues related to Part 11 implementation (for example, defining the typewriter interpretation and the company's position when classifying what is considered a typewriter application). Other examples are acceptable password practices, server time setting practices and data migration practices.

  • The availability of a compiled list of third party vendors, offering functionality as required by Part 11, and the acceptance of those vendor solutions.

  • The availability of generic Part 11 protocols to offer a consistent approach to verifying compliance with the rule, yet allowing protocols to be customized for a specific system.

  • The formation of a data integrity evaluation team, whose objective is to co-ordinate and evaluate potential data integrity issues related to the electronic records and electronic signatures requirement. For example, when a virus or worm has infected the system, the team must evaluate the integrity of the data that are residing in the affected system.

  • For budgeting purposes, the availability of an approach for estimating the cost of remediating the systems, such as evaluating the overhead cost inherent in every system (for example, systems to be remedied may require the following overhead elements: revising SOPs, retraining users and qualifying the Part 11 functions).
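
For the budgeting point, the overhead elements can be rolled into a simple per-system cost model, as sketched below; every figure is a placeholder, not an estimate:

    # Hypothetical Part 11 remediation budget: per-system remediation cost
    # plus the overhead inherent in every system. All figures are placeholders.
    OVERHEAD = {"revise_sops": 2000, "retrain_users": 3000,
                "qualify_part11_functions": 5000}

    def remediation_budget(systems: dict) -> int:
        per_system_overhead = sum(OVERHEAD.values())
        return sum(cost + per_system_overhead for cost in systems.values())

    systems = {"LIMS": 40000, "chromatography_data_system": 25000}
    print(remediation_budget(systems))  # 40000 + 25000 + 2 * 10000 = 85000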

It is not the objective of this article to cover comprehensively all Part 11 aspects to be considered, but merely to provide some examples that clarify the objective of this section: to consider the potential issues when implementing the regulation and the need to address those issues in a cohesive manner.

Conclusion

Increasing the efficiency of validation activities is a key factor for success in the future of computer validation. Many other methods and concepts are available for increasing the efficiency of computer validation, but they are too numerous to be included in this article. For example, consolidating the IQ, OQ and PQ into one document can help expedite the validation exercise by decreasing the number of signatures needed for review and approval. Another example is the development of a checklist for reviewing and accepting a vendor-provided validation document. In some cases, the use of automated test tools may also help expedite the completion of validation, although this must be carefully considered because such tools also introduce significant complexity to the process of test scripting. Another success factor is to stay abreast of developments in regulations, technology and computer validation by joining professional organizations such as GAMP and PDA.

Among the many other potential concepts and approaches, it is hoped that those offered in this article provide some help and benefit in decreasing the need to apply future smoke-jumping methodology to meet computer validation needs. Open discussion and anonymous questions can be posted in the message section of www.ComputerValidation.com.
