Examples of low-risk systems include word processing systems that are used, for example, to generate validation records. The
reasons for relegating these systems to the low-risk category include the relatively low likelihood that they would have errors,
the likelihood that errors would be detected by proofreading, and, in this case, the likelihood that such errors would have
no direct impact on product quality or patient safety.
Once the risk level is identified, validation steps can be defined. Risk-level information is used for considerations such as the following:
- In what detail do we specify the system? For example, for a low-risk system we prepare only a high-level system description, while for high-risk systems we develop detailed system requirement specifications.
- How extensively do we test the computer system? For example, high-risk systems will be tested under normal and high-load conditions. Test cases should be linked to the requirement specifications.
- How much equipment redundancy do we need? For example, for high-risk systems we should have validated, redundant hardware
for all components. For medium-risk systems, redundancy of the most critical components is enough, and for low-risk systems,
there is no need for redundancy.
- How frequently must we back up data generated by the system? While a daily backup is a must for high-risk systems, a weekly incremental backup is sufficient for low-risk systems.
- What type of vendor assessment is required? For example, high-risk systems will require vendor audits, while, for medium- and low-risk systems, an audit checklist and documented experience with the vendor should be enough.
- Which requirements of Part 11 should be implemented in the computer system? For example, for high-risk systems, computer-generated audit trails should be implemented, while for low-risk systems a paper-based, manual audit trail is enough.
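The risk-based decisions above can be summarized as a simple lookup table that an organization might document in its validation master plan. The sketch below is illustrative only: the control values for the high- and low-risk rows follow the examples in the text, while the medium-risk entries that the text does not specify are marked as assumptions.

```python
# Illustrative mapping of risk level to validation controls, following the
# examples in the text. Entries marked "assumption" are not given in the
# text; each organization must define its own controls per risk category.
VALIDATION_CONTROLS = {
    "high": {
        "specification": "detailed system requirement specifications",
        "testing": "normal and high-load conditions, linked to requirements",
        "redundancy": "validated, redundant hardware for all components",
        "backup": "daily",
        "vendor_assessment": "vendor audit",
        "audit_trail": "computer-generated",
    },
    "medium": {
        "specification": "system requirement specifications",   # assumption
        "testing": "normal load conditions",                    # assumption
        "redundancy": "most critical components only",
        "backup": "daily",                                      # assumption
        "vendor_assessment": "audit checklist and documented experience",
        "audit_trail": "computer-generated where available",    # assumption
    },
    "low": {
        "specification": "high-level system description",
        "testing": "key functions",                             # assumption
        "redundancy": "none required",
        "backup": "weekly incremental",
        "vendor_assessment": "audit checklist and documented experience",
        "audit_trail": "paper-based, manual",
    },
}

def controls_for(risk_level: str) -> dict:
    """Return the validation controls defined for a given risk level."""
    return VALIDATION_CONTROLS[risk_level.lower()]

print(controls_for("high")["backup"])  # -> daily
```

Keeping the table in one place makes it easy to apply the same controls consistently to every system assigned to a given risk category.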
Validation tasks should be defined for each phase, from planning through specification setting, vendor qualification, installation, testing, and ongoing system control.
The tasks should be consistent within an organization for each risk category. They should be well documented and be included
either in the risk management master plan or in the validation master plan.
Table IV summarizes examples with validation activities for each validation phase and task.
Table IV: Examples of validation tasks.
For some validation tasks other factors should be considered besides the impact of the system on product quality. One such
factor is vendor qualification. The question of how much to invest depends on two factors: product risk and vendor risk. Factors
that impact vendor risk include:
- experience with the vendor (software quality, responsiveness and quality of support);
- size of the company;
- company history;
- representation and recognition in the industry, e.g., bio/pharma;
- expertise with (FDA) regulations;
- future outlook, e.g., how likely the company is to stay in business.
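One simple way to operationalize these vendor-risk factors is a rating checklist whose average score feeds the decision on assessment depth. The sketch below is an assumption for illustration: the 1-to-5 rating scale and the factor wording are not prescribed by any regulation, and the factor names paraphrase the list above.

```python
# Illustrative vendor-risk checklist. The factors paraphrase the list in the
# text; the 1 (low risk) to 5 (high risk) rating scale is an assumption made
# for this sketch, not a prescribed scoring scheme.
VENDOR_RISK_FACTORS = [
    "experience with the vendor (software quality, support)",
    "size of the company",
    "company history",
    "representation and recognition in the industry",
    "expertise with (FDA) regulations",
    "future outlook / likelihood of staying in business",
]

def vendor_risk_score(ratings: dict) -> float:
    """Average the 1-5 risk rating assigned to each factor."""
    missing = [f for f in VENDOR_RISK_FACTORS if f not in ratings]
    if missing:
        raise ValueError(f"unrated factors: {missing}")
    return sum(ratings[f] for f in VENDOR_RISK_FACTORS) / len(VENDOR_RISK_FACTORS)

# Example: a well-known vendor with an uncertain future outlook.
ratings = {f: 2 for f in VENDOR_RISK_FACTORS}
ratings["future outlook / likelihood of staying in business"] = 4
print(round(vendor_risk_score(ratings), 2))  # -> 2.33
```

The resulting vendor-risk score can then be combined with the product-risk level, so that a high-risk product from a high-risk vendor triggers a full audit while lower combinations justify a checklist-based assessment.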
Table IV is recommended as a starting point for a commercial, networked off-the-shelf (OTS) system with user-specific configurations (i.e., network configurations). Such an automated system would fall into category four, as defined by GAMP (1). This table can be extended to a third dimension to include systems or software that have been developed for a specific user (GAMP category five) and to include systems that do not require any user-specific configurations (GAMP category three). GAMP categories indicate the level of system customization. The extent of validation is lower for systems in GAMP category three and higher for systems in GAMP category five.