Recent developments in advanced imaging modalities such as magnetic resonance imaging (MRI), multidetector computed tomography (CT) and hybrid modalities such as positron emission tomography-CT (PET-CT) bring new dimensions to diagnostic investigations of cardiovascular disease. These modalities extend the existing wide range of image-based diagnostic procedures that are commonly used in cardiology. More than in any other discipline, cardiologists rely extensively on imaging techniques for decision support and for patient management. In addition, unlike other disciplines, cardiovascular investigations require specialised image analysis techniques for extraction of quantitative data for accurate evaluation of cardiac and vascular physiological and functional alterations. With the increase in accuracy of the information provided by new imaging modalities, the complexity of image analysis and quantitative measurement tools has also increased significantly.
With the growing complexity of image-based diagnostic techniques comes the challenge of designing processing and analysis tools that match the needs and requirements of cardiologists in their daily clinical practice. Not only must these tools be accurate and simple for non-computer-savvy users to operate, but they must also be implemented in ways that ensure fast and efficient processing of very large data sets. Cardiac images tend to come in high spatial and temporal resolution, where the time dimension is essential for the assessment of the heart in motion and of time-varying phenomena such as blood flow and perfusion. This fourth dimension of the data results in very large volumes of image data that require extensive storage and processing capabilities. Furthermore, with the ability to visualise and extract metabolic and functional phenomena with molecular imaging techniques such as PET, it is now possible to add a fifth dimension to the data, representing regional biological parameters measured in vivo for each segment of tissue. The graphical representation of such complex multidimensional data also represents an important challenge for the designers of new computer-aided diagnostic platforms.
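A back-of-the-envelope calculation shows how quickly the fourth dimension inflates storage requirements. The acquisition parameters below are illustrative assumptions, not values from any particular scanner:

```python
# Rough storage estimate for a dynamic (4-D) cardiac study.
# All acquisition parameters are illustrative assumptions.

def study_size_bytes(matrix_x, matrix_y, slices, phases, bytes_per_voxel=2):
    """Size of one dynamic series: one 3-D volume per cardiac phase."""
    return matrix_x * matrix_y * slices * phases * bytes_per_voxel

# e.g. a 256 x 256 matrix, 100 slices, 20 cardiac phases, 16-bit voxels
size = study_size_bytes(256, 256, 100, 20)
print(f"{size / 1024**2:.0f} MiB per series")  # -> 250 MiB per series
```

Multiply that by several sequences per study, and by the five or six modalities a single clinical case may involve, and the demands on storage and processing become clear.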
In clinical reality, cardiologists also need to communicate the results of their investigations to colleagues and physicians of other disciplines, and not infrequently to the patients themselves. It is therefore necessary to be able to visualise the images in a way that can be easily shown to physicians who do not have the expertise and training to interpret images in their native format. 3-D rendered images, as well as dynamic cine movies (4-D images), are becoming the methods of choice for communicating results between cardiology specialists and clinicians, as well as to patients. Because advanced rendering workstations are quite expensive, however, they are often available only in radiology departments and remain inaccessible to the wider community of users.
The need for convenient and low-cost image display and visualisation tools is being met by open source software programs, developed by the users themselves and distributed free of charge under open source licensing agreements. These custom-made platforms are often more suitable for general users than their commercial counterparts.
New Imaging Modalities Require New Tools
Cardiac MRI has rapidly become a major tool for the assessment of cardiac anatomy and function. A complete cardiac study consists of several sequences of images acquired in either static or dynamic mode. All images are acquired in multiple oblique planes in different orientations along the axes of the heart. Proper visualisation requires not only cross-reference identification of the planes, but also side-by-side dynamic display of multiple sequences. Furthermore, functional evaluation of cardiac wall motion requires segmental analysis of myocardial contraction and wall thickening, as well as measurement of global parameters such as volumes and ejection fraction. With specific imaging sequences it is also possible to measure flow patterns through the different chambers of the heart and across the valves and the major vessels. Advanced image-analysis techniques are required to extract and quantify multiple functional parameters from the images. Proper visualisation of the results includes colour-coded mapping of the measured parameters onto the images, or geometric diagrams of the spatial distribution of each parameter. The most commonly used schematic representation of functional parameters measured along the left ventricle is the 'polar map', in which short-axis cross-sections of the ventricle are displayed as concentric circles, with the apex at the centre and the last slice at the base of the ventricle as the most peripheral circle. This graphic simplification, originally developed for reporting results of myocardial scintigraphy in nuclear medicine, has been adopted by numerous other imaging modalities to identify a variety of functional parameters measured in different segments of the heart.
Such polar maps have the merit of standardising the geographic partition of different segments of the left ventricle in a pre-defined number of sectors (the most commonly used maps are based on 17 or 22 sectors) that facilitate the comparison of different parameters measured by different imaging techniques. It is, for example, easy to correlate anomalies in myocardial perfusion measured by a single photon emission computed tomography (SPECT) or PET scan with segmental wall motion anomalies measured with MRI. The division of the ventricle into an identical number of segments in each imaging modality is a simpler and more convenient way to geographically map the different parameters than three-dimensional image fusion.
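The standardised partition behind the 17-sector map can be sketched in a few lines. The sketch below follows the common scheme of six basal, six mid-cavity and four apical sectors plus the apical cap; the angular convention (sectors counted from 0 degrees in fixed-width steps) is a simplifying assumption for illustration, not a prescription of any particular vendor's layout:

```python
# Illustrative sketch of a 17-sector left-ventricle model:
# 6 basal + 6 mid-cavity (60-degree sectors) + 4 apical
# (90-degree sectors) + 1 apical cap. The angular reference
# is an assumption made for this example.

def lv17_sector(level, angle_deg):
    """Return the sector number (1-17) for a myocardial sample.

    level: 'basal', 'mid', 'apical' or 'apex'
    angle_deg: circumferential position in degrees
    """
    angle = angle_deg % 360
    if level == 'basal':
        return 1 + int(angle // 60)    # sectors 1-6
    if level == 'mid':
        return 7 + int(angle // 60)    # sectors 7-12
    if level == 'apical':
        return 13 + int(angle // 90)   # sectors 13-16
    if level == 'apex':
        return 17                      # apical cap
    raise ValueError(f"unknown level: {level}")

print(lv17_sector('basal', 30))    # -> 1
print(lv17_sector('apical', 100))  # -> 14
```

Because every modality assigns its measurements to the same numbered sectors, a perfusion defect from SPECT and a wall-motion anomaly from MRI can be compared sector by sector without any 3-D registration step.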
With the emergence of new hybrid imaging modalities, such as PET/CT, SPECT/CT and soon-to-come PET/MRI imaging devices, data of complementary imaging techniques can be acquired concomitantly in perfectly co-registered co-ordinates. Proper visualisation of such data requires sophisticated 3-D and 4-D rendering techniques.
Navigating the Fifth Dimension
The interactive display of 3-D data that varies in time (the fourth dimension) and includes fused colour-coded functional parameters, with blending between the data of co-registered modalities, adds a further dimension, often referred to as the fifth dimension. Such metabolic data requires special software tools that allow users to 'navigate' through it easily. Moreover, these data sets, which are usually extremely large, require very high-performance graphic visualisation platforms. Traditionally, only high-end workstations provided by medical imaging manufacturers were capable of handling such large data sets with the necessary software and hardware. Such workstations were available only in limited numbers in specialised imaging departments and were usually not easily accessible to other physicians such as surgeons and clinical practitioners. With the rapid evolution in performance of the latest generation of consumer-market desktop computers, it is possible today to develop high-performance image-processing software on standard off-the-shelf hardware. Unfortunately, most vendors have maintained a segregation of different software tools dedicated to different imaging modalities. Cardiologists today rely on multiple imaging procedures for each given clinical case; it is not uncommon to have to combine results from five or six imaging modalities for a single clinical problem. Traditional imaging techniques such as echocardiography, contrast angiography and radionuclide scintigraphy are compared with results extracted from cardiac MRI and CT studies. The challenge lies not only in the ability of a single imaging platform to handle all these modalities simultaneously, but also in the ability to extract quantitative measurements in a consistent way from the different data sets.
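The blending step itself is conceptually simple: for two perfectly co-registered volumes it reduces to a weighted per-voxel combination. The sketch below uses hypothetical normalised intensities on a single slice; real platforms blend in colour space after applying transfer functions to each modality:

```python
# Minimal sketch of per-voxel alpha blending between two perfectly
# co-registered data sets (e.g. a CT volume and a colour-coded PET
# overlay). Intensities are assumed normalised to [0, 1].

def blend(anatomical, functional, alpha=0.5):
    """Weighted per-voxel combination of two co-registered arrays."""
    if len(anatomical) != len(functional):
        raise ValueError("volumes must be co-registered (same size)")
    return [alpha * f + (1 - alpha) * a
            for a, f in zip(anatomical, functional)]

ct_slice  = [0.2, 0.4, 0.8]   # hypothetical CT intensities
pet_slice = [0.9, 0.1, 0.5]   # hypothetical PET uptake values
print(blend(ct_slice, pet_slice, alpha=0.4))
```

In an interactive viewer, the `alpha` weight is typically bound to a slider, letting the user fade continuously between the anatomical reference and the functional overlay.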
Users need to be able to measure morphological and functional parameters, such as ejection fraction, regional wall motion or perfusion abnormalities, consistently across the different modalities. This has always been a limitation of commercially available solutions, which are often driven by vendors specialising in particular imaging techniques without an adequate understanding of images obtained from other modalities.
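The formula behind one such global parameter, the ejection fraction, is identical whichever modality supplied the volumes; the inter-modality discrepancies arise from how the end-diastolic and end-systolic volumes are measured, not from the computation itself. A minimal sketch with illustrative volumes:

```python
# Ejection fraction from end-diastolic (EDV) and end-systolic (ESV)
# volumes. The formula is modality-independent; the hard part, and
# the source of inter-modality discrepancies, is measuring EDV and
# ESV consistently in the first place.

def ejection_fraction(edv_ml, esv_ml):
    """EF (%) = stroke volume / end-diastolic volume * 100."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("implausible ventricular volumes")
    return (edv_ml - esv_ml) / edv_ml * 100

# Illustrative values: EDV 120 ml, ESV 50 ml
print(f"EF = {ejection_fraction(120, 50):.1f}%")  # -> EF = 58.3%
```

A platform that extracts EDV and ESV with the same segmentation conventions from MRI, CT and echocardiography will therefore yield directly comparable ejection fractions, which is precisely what single-modality tools fail to guarantee.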
In recent years we have seen the emergence of multimodality solutions from academic groups and from specialised companies that focus specifically on bringing together software solutions traditionally dedicated to single imaging modalities. The popularity of open source software, developed by communities of users and distributed freely, has taken over the traditional software market driven by large commercial companies. Open solutions developed and designed by the users themselves can grow and expand more rapidly than their commercial counterparts. One example of such development that has grown rapidly over the last couple of years is the OSIRIX image visualisation platform (http://homepage.mac.com/rossetantoine/osirix/), developed through a collaboration between the University of California in Los Angeles (UCLA) and the University Hospital of Geneva (HUG) and distributed free of charge under an open source licence. The goal of our project is to develop a completely new software platform that allows users to navigate efficiently and conveniently through large sets of multidimensional data without the need for expensive high-end hardware or software. In our design we wanted to take advantage of the significant progress in the performance and flexibility of 3-D animation on personal computers, driven mostly by the computer graphics and game industries. We also elected to build our system on new open source software libraries emerging from the scientific community. The most successful of these are the Visualization Toolkit, or VTK (http://public.kitware.com/VTK/), and the Insight Toolkit, or ITK (http://itk.org/), from the Insight Software Consortium, which provide rendering and image-processing tools for medical applications. These open source toolkits offer powerful functions for performing complex image manipulations, and excellent performance for real-time 3-D image visualisation.
Our objective was to incorporate these powerful tools into a new graphical user interface (GUI) better suited to clinical applications and the interpretation of large multidimensional data sets. We made special efforts to design a software platform on which a new generation of multidimensional viewers can be developed easily and quickly, viewers that could replace many of the functions currently available only on high-end, expensive 3-D workstations.
Teaching Old Docs New Tricks
The remaining challenge is the wide distribution of advanced image-processing and visualisation tools to clinicians and referring physicians. It is becoming necessary to provide access to these tools outside imaging departments such as radiology and cardiology. This also requires that these tools be tailored to the different categories of users who are not necessarily experts in image processing and computer platforms but have a genuine need to be able to display, visualise and navigate through the image data sets to better understand the results of the diagnostic procedures.
It is also the task of the radiologists and cardiologists to format and pre-arrange the image data to be easily accessible and understandable by their colleagues. With multidimensional data and dynamic display techniques, we are far from the traditional diagnostic report associated with printed static images on a sheet of film. In this context, the image visualisation software becomes a communication platform between physicians of different disciplines. This paradigm shift also requires appropriate training and support of a larger community of users. It is also important that the software user interface be designed to be easily accessible and useable by non-computer experts.
In our project of developing and widely distributing OSIRIX, we experienced the tremendous potential of the open source paradigm within a community of specialists and professional users who were extremely helpful and instrumental in making this new platform successful. The rapid growth of OSIRIX features and tools was driven by the large community of users who contributed to its constant improvement in response to real practical needs. It is a perfect example of software development driven directly by users and tailored to their specific needs. The most challenging task for our development team was to keep pace with, and co-ordinate, the development of new tools while maintaining the robustness and stability of the whole program. Thanks to the wealth of feedback and input from thousands of users around the world, it was possible to release software updates and new features at a rate that far exceeded the rate of software updates in the industry.