Today, LIMS is in the fourth generation of its evolution, and many organizations must upgrade, supplement, or modify their LIMS to ensure optimum interoperability with the overall corporate IS infrastructure. Selecting the best LIMS architecture for an organization is more critical than ever to the successful management of LIMS implementation and maintenance. The task is complicated further because many of the terms used to describe very different LIMS and system architectures are used interchangeably. This article clarifies the terms as well as the capabilities of potential solutions.
From first-generation LIMS to web functionality
A LIMS connects the analytical instruments in the laboratory to one or more workstations or personal computers (PCs). Analytical instruments, such as chromatographs, collect sample data, which are then forwarded to a PC, or further to a server, where the data are organized into meaningful information. This information is then stored to be mined by the laboratory and other departments, or sorted and organized into the report format required. A full-featured LIMS will manage the laboratory data collected from sample login through reporting of results and will address the needs of research and development (R&D) departments all the way through to quality management laboratories.
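The login-to-report flow described above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's API; the class and field names (`Sample`, `MiniLIMS`, `login`, `record`, `report`) are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    """A logged-in laboratory sample and its measured results."""
    sample_id: str
    results: dict = field(default_factory=dict)  # analyte -> measured value

class MiniLIMS:
    """Toy sketch of the LIMS flow: sample login -> data collection -> report."""
    def __init__(self):
        self.samples = {}

    def login(self, sample_id):
        # Register the sample so incoming instrument data has a home.
        self.samples[sample_id] = Sample(sample_id)

    def record(self, sample_id, analyte, value):
        # In practice these values arrive from an instrument interface.
        self.samples[sample_id].results[analyte] = value

    def report(self, sample_id):
        # Organize the stored data into a simple report format.
        s = self.samples[sample_id]
        lines = [f"Report for {s.sample_id}"]
        lines += [f"  {analyte}: {value}" for analyte, value in s.results.items()]
        return "\n".join(lines)

lims = MiniLIMS()
lims.login("S-001")
lims.record("S-001", "pH", 7.2)
print(lims.report("S-001"))
```

A real system adds instrument drivers, a relational database behind `record`, and audit trails, but the same login/collect/report shape underlies each generation discussed below.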
The first commercial LIMS was developed and introduced in the 1980s by analytical instrument manufacturers. This first-generation LIMS placed laboratory functions onto a single centralized computer, which offered greater laboratory productivity and functionality as well as the first automated reporting capabilities. These systems were quickly followed by second-generation LIMS, which used third-party commercial relational databases (RDBs) to provide application-specific solutions. Most relied on minicomputers such as the Digital VAX, but PC-based solutions emerged soon afterwards.
The increase in computer-processing speed, the enhancements in third-party software capabilities, and the reduction in PC, workstation, and minicomputer costs paralleled the introduction of the commercial LIMS. These advances drove a migration away from proprietary commercial systems toward open systems that emphasized user configurability rather than customization by the vendor. By the time third-generation technology was introduced in the 1990s, LIMS combined the PC's easy-to-use interface and standardized desktop tools with the power and security of minicomputer servers in a client–server configuration. That is, the architecture splits data processing between a series of clients and a database server that runs all, or part of, the relational database management system. In the mid-1990s, fourth-generation LIMS decentralized the client–server architecture further, optimizing resource sharing and network throughput by enabling processing to be performed anywhere on the network. When the Internet took off in 1996, the first web-enabled LIMS was soon introduced, followed by web-based and thin-client solutions.
Most of these thin-client solutions were developed in Java, and most web-based systems on Microsoft's .NET platform. Some of these web-based systems leverage eXtensible Markup Language (XML) to transmit data between traditional client/server architectures and the .NET framework to offer a web-based application.
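To illustrate the kind of XML payload such systems exchange between tiers, the following sketch uses Python's standard library to serialize and re-parse a sample result. The element and attribute names (`Sample`, `Result`, `analyte`) are invented for the example, not a standard LIMS schema:

```python
import xml.etree.ElementTree as ET

# Build an illustrative sample-result document on the sending tier.
sample = ET.Element("Sample", id="S-001")
result = ET.SubElement(sample, "Result", analyte="pH")
result.text = "7.2"

# Serialize to text for transmission over the wire.
payload = ET.tostring(sample, encoding="unicode")
print(payload)

# The receiving tier parses the same payload back into structured data.
parsed = ET.fromstring(payload)
print(parsed.get("id"), parsed.find("Result").get("analyte"), parsed.find("Result").text)
```

Because the payload is plain text with a self-describing structure, either end of the exchange, a Java thin client or a .NET web application, can produce or consume it without sharing the other's binary formats.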
Thick-client, web-enabled, web-based, and thin-client LIMS
Understanding the differences between thick-client, web-enabled, web-based, and thin-client LIMS is challenging because the terms are often used interchangeably, adding to the confusion and making it difficult to reach a well-informed decision when choosing the "right" LIMS. These terms actually apply to very different types of platforms, and using them interchangeably can prove costly when organizations discover that their new software does not deploy or function as expected.