Volume 23, Issue 11
PTE interviews Leigh Jordan, UK Sales Manager at Cognex UK Ltd, about the importance of code readability when using anti-counterfeiting solutions such as 2D Data Matrix codes.
As companies increasingly adopt anti-counterfeiting solutions, particularly mass serialisation strategies, what other factors do they need to take into account besides the actual security solution itself?
Code readability is critical to ensure full compliance. If a code cannot be read, then perfectly good products will be scrapped unnecessarily. Verification is also important, as it ensures that all codes can still be read once products are released into the field. When assessing and monitoring the quality of barcodes, the important considerations are:
The clear quiet zone (the space around the barcode) is crucial. The best-practice minimum requirement for a quiet zone is a clear area two cells wide around the periphery of a 2D code, or ten times the width of the narrowest bar on a 1D barcode. In practical terms, however, codes can now be read with smaller quiet zones than this, and smaller zones are often used in practice, although this is not recommended.
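The sizing rules above can be expressed as a simple calculation. The helper below is a hypothetical illustration, using only the minimums quoted here (two cells for 2D, ten times the narrowest bar for 1D):

```python
def min_quiet_zone_mm(symbology: str, module_mm: float) -> float:
    """Return the best-practice minimum quiet zone width in mm.

    symbology: "2d" (e.g. Data Matrix) or "1d" (linear barcode).
    module_mm: cell size for a 2D code, or narrowest bar width for a 1D code.
    """
    if symbology == "2d":
        return 2 * module_mm   # clear area two cells wide around the periphery
    if symbology == "1d":
        return 10 * module_mm  # ten times the width of the narrowest bar
    raise ValueError(f"unknown symbology: {symbology!r}")

# A Data Matrix printed with 0.3 mm cells needs a 0.6 mm quiet zone
print(min_quiet_zone_mm("2d", 0.3))
```

Note how quickly the 1D requirement grows: a linear barcode with 0.25 mm bars already needs a 2.5 mm quiet zone, which is one reason 2D codes suit small pharmaceutical packs.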
The data content within the code and the data formatting need to be correct. Otherwise, once the code is decoded, the information will not make sense to the software system reading in the data: garbage in = garbage out! This is why organisations such as GS1 (a not-for-profit organisation dedicated to global standards and solutions) have invested a lot of time and effort in education on the correct formatting elements and the correct amount of data to include.
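As a rough illustration of the "garbage in = garbage out" point, the sketch below parses a GS1 element string of the kind typically encoded in a pharmaceutical Data Matrix (GTIN, expiry, batch, serial). The Application Identifier table is deliberately trimmed to four common AIs and the FNC1 handling is simplified; a real implementation would follow the full GS1 General Specifications.

```python
GS = "\x1d"  # ASCII Group Separator, standing in for FNC1 after variable-length fields

# Trimmed AI table: field name and fixed length (None = variable-length)
AIS = {
    "01": ("GTIN", 14),     # fixed 14 digits
    "17": ("EXPIRY", 6),    # fixed 6 digits, YYMMDD
    "10": ("BATCH", None),  # variable length, FNC1-terminated
    "21": ("SERIAL", None), # variable length, FNC1-terminated
}

def parse_gs1(data: str) -> dict:
    """Parse a GS1 element string into {field_name: value}."""
    out, i = {}, 0
    while i < len(data):
        ai = data[i:i + 2]
        if ai not in AIS:
            raise ValueError(f"unknown AI {ai!r} at position {i}")
        name, length = AIS[ai]
        i += 2
        if length is not None:           # fixed-length field
            out[name] = data[i:i + length]
            i += length
        else:                            # variable-length: read up to GS or end
            end = data.find(GS, i)
            end = len(data) if end == -1 else end
            out[name] = data[i:end]
            i = end + 1                  # skip the separator if present
    return out

fields = parse_gs1("0109506000117843172405311012345" + GS + "21ABC123")
```

If the formatting is wrong, for example a GTIN truncated to 13 digits, the parser reads the following AI out of position and every downstream field is garbage, which is exactly the failure mode described above.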
It’s also important to bear in mind that different system setups are not consistent, so code quality scores can vary between systems. Everyone forgets that they are not measuring the code itself, but an image of the code: any inconsistency in the setup will affect that image.
How do on-line verification solutions compare with off-line systems in terms of speed and accuracy?
Off-line systems are available ‘out of the box’, whereas on-line systems need to be customised to ensure that parameters such as lighting, stand-off and setup are correct. On-line systems are bespoke to the environment they work within. If there is an issue with the amount of space available, then the system will be configured accordingly, such as with the correct lens and lighting. The goal of an on-line verifier is to provide quality feedback on the code without disrupting the production flow of the line.
An off-line verifier verifies to a set standard, which will dictate the exact lighting and lensing setups. These may also call for the code to be read at a variety of angles, depending on the standards being adhered to, and this is physically impossible in an on-line setting. To summarise, on-line verification provides easier integration into a production environment, but off-line verification provides full compliance with verification standards.
What works well is a combination of both: using an off-line verifier on an ad-hoc basis, such as measuring the quality of the code once a day or once a week, and then correlating the results with the on-line verifier. If the on-line verifier score then varies significantly, we know that the part has moved out of compliance.
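One way to operationalise this correlation is to treat the periodic off-line grade as a baseline and raise an alert when recent on-line scores drift away from it. The sketch below is illustrative only: the 0.0-4.0 grade scale (in the style of ISO/IEC 15415 grading) and the drift threshold are assumptions, not values specified by Cognex or any standard.

```python
def check_drift(offline_grade: float, online_grades: list[float],
                max_drift: float = 0.5) -> bool:
    """Return True if the average of recent on-line scores has drifted
    away from the off-line baseline by more than max_drift grade points.

    Grades are assumed to be on an ISO/IEC 15415-style 0.0-4.0 scale;
    the 0.5 threshold is an illustrative choice, not a standard value.
    """
    avg_online = sum(online_grades) / len(online_grades)
    return abs(avg_online - offline_grade) > max_drift

# Off-line verifier measured grade 3.5; recent on-line scores are slipping
print(check_drift(3.5, [3.4, 3.0, 2.8, 2.6]))  # average 2.95, drift 0.55 -> True
```

In practice the threshold would be tuned so that an alert triggers early enough to adjust the marking process before codes fall out of compliance.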
How do you think verification solutions will advance in the coming years as the pharma industry adopts more anti-counterfeiting solutions?
As standards evolve, verification solutions will adapt to follow suit and ensure that they are met. Solutions will have to become more flexible to accommodate the different methods of marking parts and the variety of surfaces that parts are marked on. This will ultimately come back to the standards organisations, which will determine which metrics, tolerances, lighting and conditions they sanction as approved.
In terms of the systems that read the codes, these are becoming faster and more compact, with higher resolution and greater built-in intelligence.