The Paperless Validation Journey: Terminology Matters
In the last blog, we covered a portion of a recent presentation by Tx3’s own Dori Gonzalez-Acevedo on “The Evolution of Paperless Validation,” and I thought it would be a good idea to expand on that a bit. The portion covered in the last post (view it here) discussed shifting the scales of a validation and testing maturity model to an “Optimized” state. With much already available on the topic of testing maturity models, Dori felt it was necessary to explain an often underrepresented component of testing maturity relative to regulated systems in Life Sciences: the correlating validation maturity model.
Needless to say, I highly recommend going back and reading that content.
Today I would like to again leverage Dori’s content to provide a bit more context, or food for thought, on this topic. After all, we are talking about a very complex landscape, fraught with changing methodologies, new technologies, industry regulations, and organizational considerations that must be taken into account. Each of these areas is broad in nature and filled with its own set of nuances, so being equipped with an understanding of the terminology and underlying components required to move through the various levels of the testing and validation maturity models we discussed in the last post is crucial.
Now, let’s move past the preamble and get to the good stuff. From here on out I will be relaying and conveying more of Dori’s excellent content and I would encourage you to reach out to her on LinkedIn if you have any questions or would like to discuss any of this in more detail.
Every day we work with different companies and it seems that many of those companies have different meanings for the same words. They also often have different levels of understanding of common terminology used throughout the industry. This is further complicated by the fact that across the industry, key methodologies have become more complex and continue to evolve and change.
Words matter. Definitions and terminology matter. As a general rule of thumb, it is best to stick to industry organizations like ISPE and IEEE for common industry terminology and definitions. Before we move too far down the road of advancing testing and validation maturity models, let’s make sure that we level-set some common terminology and modeling concepts as you move forward on a paperless validation journey.
There are three methodologies that are most critical to your paperless validation journey and process: Risk, Testing, and Validation.
In terms of risk methodologies, teams need to make sure that they have a proper understanding of the difference between the levels of risk and what is required for each. Let’s start with system-based risk.
One can successfully implement a system-based risk program, where risk is analyzed for a variety of elements at the system, solution, or application level. Based on that analysis, decisions are made that dictate which predetermined validation deliverables need or need not occur. Further, a risk program can be expanded to include requirements-based risk assessments, where individual or groups of requirements can be evaluated and risk-rated accordingly. Once completed, the predetermined level of testing or documentation can be performed.
Teams can take their risk programs one step further and perform risk-based testing, which is based on a variety of categories, types, or technical complexity. Based on those predetermined criteria, appropriate testing can be executed and captured.
Let’s expand on one such criterion - technical complexity. This is determining whether a solution - or part of a solution - is delivered in one of three ways:
- Out of the box: This is generally where an application or solution is installed and used straight out of the box and is pre-validated or pre-tested. In this scenario, teams can generally leverage the software vendor’s SDLC documentation to cover the bulk of their SDLC and validation documentation requirements.
- Configurable Solutions: This is a solution with configuration changes applied, or at least applied to parts of the solution/application. Here, most of the core software does not change, but there are configurable components based on individual business requirements. The testing of configurable solutions may be limited only to configuration verification, but this is dependent on the level of configuration complexity. It is also dependent on independent vendor assessments and audits from the associated software vendors.
- Custom Development: This is something seen less frequently these days, but this area comes with the highest level of complexity and the highest level of risk. For custom developed or heavily customized solutions, full SDLC testing is recommended.
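To illustrate the idea of predetermined, risk-based testing outcomes, here is a minimal sketch in Python. The category names and the testing activities in the lookup table are hypothetical examples for demonstration, not an official Tx3 or industry classification:

```python
# Illustrative sketch: mapping a solution's technical-complexity category
# to a predetermined testing scope. Category names and testing activities
# are hypothetical, chosen only to mirror the three delivery models above.

TESTING_SCOPE = {
    "out_of_the_box": ["leverage vendor SDLC documentation", "installation verification"],
    "configurable": ["configuration verification", "targeted functional testing"],
    "custom": ["full SDLC testing", "unit testing", "functional testing",
               "user requirements testing"],
}

def recommended_testing(complexity: str) -> list[str]:
    """Return the predetermined testing activities for a complexity category."""
    try:
        return TESTING_SCOPE[complexity]
    except KeyError:
        raise ValueError(f"Unknown complexity category: {complexity!r}")

print(recommended_testing("configurable"))
# → ['configuration verification', 'targeted functional testing']
```

The value of predetermining these outcomes is consistency: once the risk criteria are agreed upon, the testing scope follows mechanically rather than being renegotiated per project.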
There is another growing area of risk criteria that’s inherent in application management, and that’s the associated change release management process which can be impacted by application ownership models.
- On-Prem: If applications are hosted on-prem and internally managed, certain risk mitigations may still apply, but your organization retains full control over them.
- Off-Prem: An application or solution is managed off-prem by a third party. In this case, you may still have partial control over some of your procedures, or at least be able to have the third-party vendor adhere to your defined procedures.
- SaaS: Finally, there is a full SaaS model where the software vendor is not only managing the product but also administering the management of the entire end-to-end process. In this scenario, organizations have the least amount of control. There is contractual control, but no actual application or infrastructure control.
Risk Methodologies are fairly common in our industry, but how you define and apply risk matters as well. The onus is on the organization to adequately define and apply appropriate risk profiles, and ultimately to supply verification documentation based on those defined risk profiles, regardless of where the application resides.
Testing Methodologies have changed dramatically in recent years, as has how we think about testing. Here, I would like to cover several areas, spanning from legacy practices (still utilized at many organizations) to some of the more modern approaches - as well as the variants within each.
- Waterfall Testing: This is the traditional V-model testing process, including User Requirements Testing, Functional/System Testing, and Design/Unit Testing. It is a very robust software testing methodology still utilized by many teams. This is a staged approach where each testing cycle needs to be complete before the next level of testing can begin.
- One thing to note here is that these are different types of tests, each targeting a different level of associated requirements and specifications. However, we often see organizations fail to truly apply this differentiation. Rather, they re-utilize the same test in different environments (DEV, QA, and PRD) and just call it something different. This should be avoided.
- Hybrid/Iterative Testing: We see this getting much more traction in Life Sciences these days. This is a combination of Waterfall and Agile methodologies, where one can combine a staged, cyclical approach with a gated approach. In this approach, requirements and specifications are more flexible: they can be drafted - or nearly complete - and testing cycles can still be performed during configuration, which allows requirements to be revised prior to locking them down for “Formal” testing. This is important, as there is often a big differentiation between “Informal” and “Formal” testing cycles, or “Modular” testing, in a hybrid model.
- Agile Testing: I don’t want to get too deep into it here as there are a ton of great resources covering this topic (including several of our blogs if you would like to check them out), but in short, Agile is a testing approach that aligns with an Agile development methodology. In this process, testing is embedded continuously in development and quality practices and is executed in sprints, rather than in a pre-defined, start-to-finish project with a clear segmentation of duties through each phase of said project.
Lastly, within each of these Testing Methodologies, there are variants that can be utilized, including manual testing, automated testing, and DevOps or CI model testing. All of these can coexist to a varying extent in each of the defined methodologies, but it’s important to note the difference between each of these three approaches.
In manual testing, tests are generally manually written and subsequently manually executed, often with little or no use of testing tools (but not always). In automation, testing tools are used to build automation frameworks and generate automated tests to support high frequency, repeatable testing requirements. In DevOps or CI, there is a continuous integration of testing being applied throughout the SDLC process.
Each of these types are valid entry points within a Testing Methodology and require understanding and consideration when developing an overall paperless validation process. It should also be understood that it’s not intended for each of these to be utilized in a mutually exclusive fashion. Rather, they should be logically combined and leveraged in such a way that makes sense within the context of the Testing Methodology that is best suited for the application under test.
For the purposes of this post, I don't want to spend too much time here, as we covered this pretty extensively in the previous blog. However, I would like to quickly point out that some of the validation methodology terms we referred to in that post describe differentiations between different aspects of those methodologies. For example:
- Paper or wet ink signatures: This is the most rudimentary form of validation and verification documentation. Manually defined, managed, and executed on physical pieces of paper, this approach was widely utilized prior to the advent of 21 CFR Part 11, but is still in effect at many organizations that have not yet incorporated electronic signatures or electronic records into their CSV processes.
- Electronic paper/documents: While potentially "paperless," this may still rely on scanned paper documents, or may have electronic document and e-signature capabilities incorporated. Either way, this is still a document-reliant process, prone to many of the same limitations as paper documents and their static nature.
- Paperless validation: While the process above may be "paperless", when we at Tx3 are talking about paperless validation, we are referring to a truly paperless process in which there is no reliance on traditional documents, whether they be paper or electronic. This is where we get into a "data-driven" model (further explanation below).
The second differentiation is between:
- Traditional CSV: Primarily based in a Waterfall testing methodology and documented through traditional validation documentation, or what we define as "document-centric" (see below). Many teams still operate under this approach, but the reliance on documents and the focus around the documentation itself rather than the quality of targeted, risk-based testing, is increasingly becoming an impediment as teams try to adapt to and implement more modern testing and SDLC methodologies and tools.
- Computer Software Assurance (CSA): Gaining much buzz over the last year or so, CSA draws a differentiation between non-product CSV and product CSV. I don't want to get too deep into CSA here as it is a full topic in and of itself, but in short, this is a variant of validation/verification testing for non-product CSV that puts the emphasis on applying appropriate risk and quality testing, rather than blanket testing where the primary focus has been hijacked by the documentation.
The third dimension of the validation methodologies is the distinction between document-centric and data-driven models.
- Document-centric: When we talk about document-centric, we aren’t just talking about paper documents, but electronic as well. Historically, documents have been the sole focus of many validation programs and still are in many cases. In some cases, we have seen validation programs that have a multitude of documents for a single application that need to be managed over the course of time. These are generally manually managed, with manual inputs and manual traceability, all of which are time consuming and prone to human error.
- Data-Driven: In contrast, if one can focus on the data and enable a data-driven model, one can then utilize a technology-driven and controlled process in which the automated or systematic tool allows the capturing of electronic records and electronic signatures while managing the process automatically. This allows team members to focus on the actual data, the actual inputs, rather than the document formatting, or manually captured traceability, or overall management of those documents.
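To make the contrast concrete, here is a minimal sketch of what a data-driven model captures: each test execution becomes a structured record with built-in traceability back to requirements, so a traceability matrix can be derived automatically instead of maintained by hand. All field and record names below are hypothetical, for illustration only:

```python
# Minimal sketch of a data-driven validation record. Field names and IDs
# are hypothetical; the point is that each execution is a structured record
# with traceability and an electronic signature, not a formatted document.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TestExecutionRecord:
    test_id: str
    requirement_ids: list[str]  # traceability captured as data, not a manual matrix
    result: str                 # e.g. "pass" / "fail"
    executed_by: str            # electronic signature of the tester
    executed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def trace_matrix(records: list[TestExecutionRecord]) -> dict[str, list[str]]:
    """Derive requirement-to-test traceability automatically from the records."""
    matrix: dict[str, list[str]] = {}
    for rec in records:
        for req in rec.requirement_ids:
            matrix.setdefault(req, []).append(rec.test_id)
    return matrix

records = [
    TestExecutionRecord("TC-001", ["REQ-10", "REQ-11"], "pass", "j.secola"),
    TestExecutionRecord("TC-002", ["REQ-11"], "pass", "j.secola"),
]
print(trace_matrix(records))
# → {'REQ-10': ['TC-001'], 'REQ-11': ['TC-001', 'TC-002']}
```

Because traceability is computed from the records rather than transcribed into a document, it can never drift out of sync with the actual test executions - which is exactly the error class that manual, document-centric traceability is prone to.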
All the elements within each of the validation, testing, and risk methodologies matter. How your organization defines them, and how they integrate into an overall paperless validation process is critical.
Wrap-Up and Resources
With something as precise as modernizing validation and testing practices in a regulated environment, the terminology used, and how it is applied, is crucial. If you read the last post, you’ll know that we go into much more detail on the actual nuts and bolts of aligning a validation maturity model with a testing maturity model, but I thought it was important to more clearly define some of the factors that play into those initiatives (again, read the last post here for a more in-depth look).
Of course, this is only a segment of a broader conversation. If you haven’t yet, I would highly recommend watching Dori’s full presentation here:
Connect with Dori on LinkedIn.
View the previous blog to learn how all this applies to Testing and Validation Maturity Models here.
Jason Secola manages content marketing and channel activities at Tx3 Services and has been with the company since 2016. Jason began working with the larger portion of the existing Tx3 team dating back to 2007 when he got his first start in the world of application testing and later began a focus on testing in a regulated environment. He currently resides near Sacramento, CA.