Computer Systems Validation (CSV) is time-consuming, expensive, stressful, headache-inducing and inefficient, and it lags well behind modern technology trends. Nonetheless, it is an important and required exercise that must encompass the entire application lifecycle. Unfortunately, the process that many teams have built or inherited to meet CSV requirements does little to alleviate these issues and often exacerbates them.
The paper-based, document-driven methodology is the most rudimentary form of CSV, and it remains in use to this day. Widely adopted in the early days of validation, it persists on many teams simply because it’s what they’ve always done and “it works”. It does “work”, but the reality is that this approach offers little value beyond allowing compliance to be achieved, and merely “allowing” for compliance does not mean that CSV is being optimized for compliance. Nor does it add real business value or contribute to continuous improvement in software quality.
Anyone who has worked with paper-based, document-driven CSV knows that it is prone to inefficiency, high cost, risk and human error. It is a manually intensive process rooted in outdated methodologies that requires a burdensome amount of maintenance. Yes, tools like Microsoft Word and Excel have aided the documentation process itself, but they provide no controls or mechanisms to enforce adherence to defined compliance procedures. As many teams have experienced, this leaves the door wide open for procedural breakdowns: missed steps, skipped approval signatures and documentation that is not captured appropriately.
No matter how well defined or rigorous an organization’s CSV process may be, it is difficult to enforce without appropriate governance, and human error becomes a very real problem in the paper-based methodology. From a compliance standpoint, this can certainly lead to costly mistakes and regulatory breaches, but even at the project level this approach handcuffs teams and creates operational inefficiency throughout the application lifecycle.
Projects may be delayed while a requirement sits on a desk awaiting approval from a business owner who is on vacation. QA notices after test execution that the test author also acted as the approver, so the approval must be re-routed and the test re-executed. A rejection may be kicked back with no reason assigned, requiring that the rejector be tracked down to find out what needs to be fixed before the next step can proceed. Over the course of a single project these issues start to pile up, but over tens, hundreds or even thousands of projects they become a glaring problem with massive negative implications for a team’s efficacy and, ultimately, an organization’s bottom line. As the issues pile up, so do the stacks of paper.
While all of this is going on, mountains of paper documents (which hopefully contain all the required steps and signatures) are piling up. That paper must be organized, compiled into binders, put in boxes and packed away into a storage facility. Soon, we have something that closely resembles the closing scene of Raiders of the Lost Ark, as the camera pans out to reveal a seemingly endless cache of crates full of documentation.
In today’s age this seems like an almost unfathomable way to manage data, but it is the reality for many teams in the Life Sciences field. What do you do when you need to access and reference that data? What happens when the arbiter of all those records leaves the company unexpectedly? When it’s time for an audit, how long does it take to prepare the required documentation for the auditor? How much longer to scour those documents when the auditor requests additional information or verification? The answer is found only in more tedious, manual effort on top of an already tedious, manual process.
Moving to paperless validation may be the answer. No more shipping paper documents for review. No more wet signatures. Innovate and modernize CSV by eliminating paper and moving to electronic review and approval and an electronic system of record. Don’t jump straight in, though. Life Sciences teams should be aware that not all electronic approaches are created equal, and some are certainly not optimal.
Jason Secola is the Support Sales Manager at Tx3 Services and has been with the company since 2016. He began working with much of the existing Tx3 team back in 2007, when he got his start in application testing and later focused on testing in regulated environments.
He currently resides near Sacramento, CA.