
2014

Smell, Lung Cancer, Electronic Nose and Trained Dogs

De Lema Bruno, Adjounian Haroution, Costa Castany Magda, Ionescu Radu

 

Journal of Lung, Pulmonary & Respiratory Research

 

Lung cancer (LC) represents a problem of great magnitude for medical systems due to its morbidity and mortality, and also because of the huge human and economic efforts and costs that it entails. In Europe alone, the average lifetime cost of lung cancer patients ranges between €46,000 and €61,000 per patient. Secondary prevention, which consists of mass screening of the high-risk population, could be of great benefit. For this aim to become a reality in the future, a potentially inexpensive and non-invasive approach to LC (pre)diagnosis is emerging. This possibility relies on the detection of volatile biomarkers emitted from cell membranes. Tumor growth is accompanied by gene changes that may lead to oxidative stress, and the peroxidation of the cell membrane species causes volatile biomarkers to be emitted. Some of these biomarkers appear in distinctively different mixture compositions, depending on whether a cell is healthy or cancerous. These volatile biomarkers can be detected, among other ways, through the analysis of exhaled breath, because the cancer-related changes in blood chemistry are reflected in measurable changes in the breath through exchange via the lung. Importantly, these volatiles or their metabolic products are transmitted to the alveolar exhaled breath even at the very onset of the disease. There are several methods that can be applied to analyze the exhaled breath for the identification of a specific pattern of volatile biomarkers related to the target medical condition. We discuss the feasibility of three strategies, and the possible synergistic effect of their concomitant application, as a powerful pre-estimation tool for mass screening of the LC high-risk population:
a. Spectrometric techniques such as gas chromatography coupled with mass spectrometry (GC-MS), which rely on reference libraries of analyte mass spectra to structurally identify and track the analytes in gaseous samples.
b. Electronic nose (e-nose), which consists of an array of chemical gas sensors specifically trained for the target application by means of a pattern recognition algorithm (a minimal sketch of this stage follows the list).
c. Trained sniffer dogs, which can be conditioned to recognize the characteristic odor of LC in breath samples.
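
Strategy (b) hinges on pattern recognition over raw sensor-array responses. As a minimal, hypothetical sketch of that stage (the abstract does not specify an algorithm; the array size, sample counts, and the scaling + PCA + SVM pipeline below are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per breath sample, one column per gas sensor in the e-nose array
# y: 1 for LC patients, 0 for healthy controls (labels from clinical diagnosis)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))          # placeholder for 60 samples, 16 sensors
y = rng.integers(0, 2, size=60)        # placeholder labels

# Standardize sensor responses, compress to a few principal components,
# then train a classifier on the reduced "smellprint".
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
model.fit(X[:40], y[:40])              # train on the first 40 samples
print(model.score(X[40:], y[40:]))     # held-out accuracy
```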

Improving the performance of the Prony method using a wavelet domain filter for MRI denoising

Rodney Jaramillo, Marianela Lentini, Marco Paluszny

 

Computational and Mathematical Methods in Medicine, Volume 2014, Article ID 810680

 

The Prony methods are used for exponential fitting. We use a variant of the Prony method for abnormal brain tissue detection in sequences of T2-weighted magnetic resonance images. Here, MR images are considered to be affected only by Rician noise, and a new wavelet-domain bilateral filtering process is implemented to reduce the noise in the images. This filter is a modification of Kazubek's algorithm, and we use synthetic images to show the ability of the new procedure to suppress noise and to compare its performance with that of the original filter, using quantitative and qualitative criteria. The tissue classification process is illustrated on a real sequence of T2 MR images, with the filter applied to each image before the variant of the Prony method is used.
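
The classical Prony step underlying the variant can be sketched as follows; this is the textbook method for fitting a sum of p exponentials (the paper's variant and its wavelet-domain preprocessing are not reproduced here):

```python
import numpy as np

def prony_fit(y, p):
    """Fit y[n] ~ sum_k a_k * z_k**n, n = 0..N-1, with p exponential modes."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # 1) Linear prediction: y[n] = -(c1*y[n-1] + ... + cp*y[n-p]) for n >= p
    M = np.column_stack([y[p - j:N - j] for j in range(1, p + 1)])
    c = np.linalg.lstsq(M, -y[p:], rcond=None)[0]
    # 2) The poles z_k are the roots of z^p + c1*z^(p-1) + ... + cp
    z = np.roots(np.concatenate(([1.0], c)))
    # 3) Amplitudes a_k from a Vandermonde least-squares fit
    V = z[None, :] ** np.arange(N)[:, None]
    a = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
    return a, z

# Quick check on a noiseless two-exponential signal
n = np.arange(32)
y = 3.0 * 0.9**n + 2.0 * 0.5**n
a, z = prony_fit(y, 2)
print(np.round(z.real, 3))   # poles ~ [0.9, 0.5] (order may vary)
```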

Tissue classification in oncological PET/CT images

Jhonalbert Aponte, David Grande, Wuilian Torres, Miguel Martín-Landrove

 

Ingeniería y Ciencias Aplicadas: Modelos Matemáticos y Computacionales, E. Dávila, J. Del Río, M. Cerrolaza, R. Chacón (Eds.) SVMNI 2014

 

Oncological PET/CT is a powerful combination of molecular and structural imaging that allows for early full-body cancer detection and for further monitoring of treatment and disease evolution. Recently, oncological PET/CT has been proposed for the assessment of tumor contouring in treatment planning, which implies the need for a reliable method for image integration or fusion. In the present work, tumor PET/CT images are analyzed by a segmentation method, k-means clustering, combining the information coming from the PET images, through the Standardized Uptake Value (SUV), and from the CT images, through the CT number or linear attenuation coefficient, allowing for tissue classification and image segmentation. The results are used for Gross Target Volume (GTV) assessment, as a guide in medical practice for SUV level selection in tumor contouring for targeted treatment applications, such as radiation therapy. SUV distributions for different tumor lesions are also obtained and used to establish reference values in diagnostic PET/CT.
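
A minimal sketch of the two-feature clustering step described above, assuming co-registered PET (SUV) and CT (CT-number) volumes are already available as NumPy arrays; the feature standardization and the number of clusters are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def classify_tissues(suv, ct, n_clusters=4):
    """Cluster voxels on (SUV, CT number) pairs; returns a label volume."""
    X = np.column_stack([suv.ravel(), ct.ravel()])
    X = StandardScaler().fit_transform(X)       # put both features on one scale
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    return labels.reshape(suv.shape)
```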

Geometry of tumor growth in brain

Miguel Martín-Landrove, Francisco Torres-Hoyos

 

Ingeniería y Ciencias Aplicadas: Modelos Matemáticos y Computacionales, E. Dávila, J. Del Río, M. Cerrolaza, R. Chacón (Eds.) SVMNI 2014

 

Tumor growth can be characterized by using scaling analysis methods performed on the tumor interface; the procedure yields key parameters that assign the growth geometry to different universality classes. In the present work, results obtained by scaling analysis are shown for brain tumor lesions of primary origin, either malignant or benign, and for metastases. To evaluate different proposed models for tumor growth in the brain, several growth simulations of primary brain tumors or gliomas were performed assuming a simple growth model described by a reaction-diffusion differential equation, in this context a proliferation-invasion equation. The proliferation term was of the logistic type, to take into account the limitation of nutrient and oxygen resources on tumor cells. To account for the differences between grey and white matter in the diffusion parameter, the simulations used the brain tissue database provided by BrainWeb. Simulations were performed for different ratios between the diffusion parameter (invasion) and the reaction parameter (proliferation), covering growth conditions from low-grade gliomas up to high-grade gliomas (glioblastoma multiforme). The scaling analysis results reveal a close correspondence to results previously obtained on tumor magnetic resonance images, which suggests that the simple model used for the computer simulations describes glioma growth in the brain in an appropriate manner, and that its use can potentially be extended to describe brain metastases.
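
A minimal 2-D sketch of the proliferation-invasion update described above, assuming a precomputed diffusivity map D (e.g., higher in white matter than in grey matter, derived from the BrainWeb labels); the explicit scheme and parameter names are illustrative, not the paper's code:

```python
import numpy as np

def fisher_kpp_step(u, D, rho, dt, dx):
    """One explicit Euler step of du/dt = div(D grad u) + rho*u*(1 - u),
    with u the cell density normalized to carrying capacity and zero-flux
    boundaries. Stability requires roughly dt < dx**2 / (4 * D.max())."""
    Dx = 0.5 * (D[1:, :] + D[:-1, :])          # face-centered diffusivities
    Dy = 0.5 * (D[:, 1:] + D[:, :-1])
    Fx = Dx * (u[1:, :] - u[:-1, :]) / dx      # diffusive fluxes across faces
    Fy = Dy * (u[:, 1:] - u[:, :-1]) / dx
    div = np.zeros_like(u)
    div[:-1, :] += Fx / dx                     # flux balance per cell
    div[1:, :] -= Fx / dx
    div[:, :-1] += Fy / dx
    div[:, 1:] -= Fy / dx
    return u + dt * (div + rho * u * (1.0 - u))
```

Varying the ratio of D to rho spans the invasion-dominated to proliferation-dominated regimes mentioned in the abstract.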

MRI spatial distortion evaluation and assessment for stereotactic radiosurgery

José Mielgo, Miguel Martín-Landrove, Wuilian Torres

 

Ingeniería y Ciencias Aplicadas: Modelos Matemáticos y Computacionales, E. Dávila, J. Del Río, M. Cerrolaza, R. Chacón (Eds.) SVMNI 2014

 

Gamma Knife intracranial stereotactic radiosurgery is a high-precision technique that requires a high degree of accuracy in target localization by means of medical imaging. In particular, MRI introduces spatial distortions due to the RF inhomogeneity of the antennas and the inhomogeneous tissue magnetization of the brain. Gamma Knife equipment relies almost exclusively on MRI for target localization, so the generation of an atlas of MRI spatial distortion could be very useful in deciding whether an additional stereotactic CT acquisition is necessary. In the present work, an image registration method is proposed to evaluate the spatial distortion map in MR images, assuming that CT images are less distorted than MR images and can be used as a suitable template for image registration. Initially, the MR and CT images are registered by means of a rigid transformation computed through an Iterative Closest Point algorithm, using the external or fiducial marks present in both stereotactic data sets as input data. Further registration is performed by non-rigid transformations based on quadratic B-spline thin-plate-like functions, using normalized mutual information as the similarity function. Results indicate relatively high spatial distortions at the skull bone–encephalic mass and corpus callosum–encephalic mass interfaces, where a local magnetic field gradient due to tissue inhomogeneity is present. Therefore, for lesions located in those zones it is mandatory to have a stereotactic CT acquisition. Future work will address the development of an appropriate atlas of patient-dependent MRI spatial distortion.
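
The registration's similarity function, normalized mutual information, can be sketched from a joint intensity histogram; the Studholme-style normalization (H(A)+H(B))/H(A,B) and the bin count are assumptions, since the abstract does not give details:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B) from a joint intensity histogram.
    a, b: overlapping intensity arrays from the two registered images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()

    def entropy(p):
        p = p[p > 0]                     # drop empty bins before taking logs
        return -np.sum(p * np.log(p))

    return (entropy(pab.sum(axis=1)) + entropy(pab.sum(axis=0))) / entropy(pab)
```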

A simple approach to account for cell latency and necrosis in a brain tumor growth model

Johan Rojas, Rixy Plata, Miguel Martín-Landrove

 

Ingeniería y Ciencias Aplicadas: Modelos Matemáticos y Computacionales, E. Dávila, J. Del Río, M. Cerrolaza, R. Chacón (Eds.) SVMNI 2014

 

Brain tumors have been successfully modeled by a reaction-diffusion equation, where cellular invasion is assumed to be of the diffusion type, parameterized by a tensor similar to the molecular diffusion tensor that can be measured by diffusion-weighted MRI, and the proliferation term is taken to be of the logistic type. The model has been extended to take into account therapy terms, such as chemotherapy and radiotherapy, which in general act differently depending on the activity state of the tumor cells. It is well known that, as a consequence of tumor growth, an important fraction of tumor cells, deprived of nutrients and oxygen, fall into a latent state in which neither proliferation nor invasion occurs. The latent and necrotic states are determined by the concentration levels of nutrients and oxygen. Assuming a diffusive model for nutrient and oxygen concentration, effective latency and necrotic radii can be established, i.e., cells deep inside the tumor, at a distance from the tumor interface greater than these radii, are either latent or necrotic. In order to take into account changes in cell states, elementary volumes are tagged appropriately: 0 tags volumes corresponding to tissues other than white and grey matter; 1 tags volumes corresponding to white and grey matter; 2 tags volumes for which the cellular concentration is above 90% of the maximum concentration established by the logistic proliferation model. Volumes tagged 1 and 2 are accessible to the differential equation, while those tagged 0 are not. Latent and necrotic volumes are tagged 3 and 4, respectively. When therapy terms are included, all tags except 0 and 4 are reversible, since latent cells can become active due to therapy-induced death of active cells. Simulations were performed using code developed in MATLAB.
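
A minimal sketch of the tagging step described above, assuming the cell-density field u and the effective radii are given; the tag constants follow the abstract, while the distance-transform shortcut for "depth inside the tumor" is an illustrative assumption (the paper's own simulations were written in MATLAB):

```python
import numpy as np
from scipy import ndimage

# Tags from the abstract: 0 non-brain, 1 grey/white matter, 2 active tumor,
# 3 latent, 4 necrotic
OUT, BRAIN, ACTIVE, LATENT, NECROTIC = 0, 1, 2, 3, 4

def retag(tags, u, dx, r_lat, r_nec, thresh=0.9):
    """Re-tag elementary volumes after a growth step. u is the cell density
    normalized to its maximum; r_lat < r_nec are the effective latency and
    necrotic radii (depths measured from the tumor interface)."""
    new = tags.copy()
    bulk = u >= thresh                                   # tag-2 condition
    new[(tags == BRAIN) & bulk] = ACTIVE
    depth = ndimage.distance_transform_edt(bulk) * dx    # depth inside tumor
    new[bulk & (depth > r_lat)] = LATENT
    new[bulk & (depth > r_nec)] = NECROTIC
    new[tags == NECROTIC] = NECROTIC                     # tag 4 is irreversible
    new[tags == OUT] = OUT                               # tag 0 is inaccessible
    return new
```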

Evolution rules of deterministic cellular automata for multichannel segmentation of brain tumors in MRI

Antonio Rueda Toicen, Rhadamés Carmona, Miguel Martín-Landrove, Wuilian Torres

 

Ingeniería y Ciencias Aplicadas: Modelos Matemáticos y Computacionales, E. Dávila, J. Del Río, M. Cerrolaza, R. Chacón (Eds.) SVMNI 2014

 

Image segmentation is the process of partitioning an image into groups of pixels or voxels that share a common characteristic. High-sensitivity and high-precision segmentations of cerebral tumors in magnetic resonance images are necessary for the safe planning of radiosurgical treatment. GrowCut is an image segmentation method based on a cellular automaton that simulates the competitive growth of several bacterial colonies in the image space. We present a group of automaton evolution rules for image segmentation derived from GrowCut, and a quantitative comparison of the segmentations of brain tumors in multichannel magnetic resonance images achieved through these rules in GPU implementations.
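
The classical GrowCut rule from which the paper's variants are derived can be sketched as a synchronous update in which each cell is attacked by its neighbors; this is a single-channel CPU sketch, whereas the paper's rules are multichannel and GPU-based:

```python
import numpy as np

def growcut_step(C, label, theta):
    """One synchronous step of the classical GrowCut automaton.
    C: feature image (one channel here), label: current cell labels,
    theta: cell strengths in [0, 1]."""
    g_norm = float(np.ptp(C)) or 1.0       # max feature distance, normalizes g
    new_label, new_theta = label.copy(), theta.copy()
    H, W = label.shape
    for y in range(H):
        for x in range(W):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # von Neumann
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    g = 1.0 - abs(float(C[ny, nx]) - float(C[y, x])) / g_norm
                    if g * theta[ny, nx] > new_theta[y, x]:    # neighbor attacks
                        new_label[y, x] = label[ny, nx]        # and conquers
                        new_theta[y, x] = g * theta[ny, nx]
    return new_label, new_theta
```

Iterating this step until no cell changes hands yields the final segmentation; seed pixels start with theta = 1 and their user-assigned labels.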

Gross-gesture recognition of upper limbs for physical rehabilitation

Jordan Ojeda, Esmitt Ramírez J., Francisco Moreno, Omaira Rodríguez

 

Ingeniería y Ciencias Aplicadas: Modelos Matemáticos y Computacionales, E. Dávila, J. Del Río, M. Cerrolaza, R. Chacón (Eds.) SVMNI 2014

 

Nowadays, the use of modern computational technologies in rehabilitation processes has grown considerably in health care centers. These technologies open a broad range of new paradigms that improve the rehabilitation process: robotic hardware, virtual reality systems, and others. In particular, virtual reality systems are notable for offering a high degree of interaction with the user based on real-time responsive actions. In rehabilitation, these systems are offered as modern strategies in which a patient performs a set of therapy activities, presented as integration tasks, through games or simulations. Several health care centers are using these strategies as part of regular therapy, because the treatment time is shorter than with standard approaches. If a therapy is focused on the upper limbs, a set of specialized gestures is necessary for the total recovery of patients. In this paper we present an effective solution for the capture and recognition of upper-limb movements based on gross motor skills. Our proposal integrates gross corporal gestures as the main user interface in an entire platform for the physical rehabilitation of children with motor disabilities in the upper limbs. It is designed around a Microsoft Kinect as low-cost motion-capture hardware. Several gestures were implemented to test our proposal, giving excellent results.
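
The abstract does not detail the recognition algorithm; as a generic, hypothetical illustration of matching a captured upper-limb trajectory against stored gesture templates, a dynamic time warping (DTW) comparison over Kinect joint-angle sequences might look like this:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two joint-angle sequences.
    a, b: arrays of shape (frames, features), e.g. elbow/shoulder angles
    extracted from Kinect skeleton data."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(sequence, templates, threshold):
    """Return the name of the closest stored gesture, or None."""
    name, d = min(((k, dtw_distance(sequence, t)) for k, t in templates.items()),
                  key=lambda kv: kv[1])
    return name if d < threshold else None
```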

Design and development of a low-cost rehabilitation data glove

Jordan Ojeda, Esmitt Ramírez J., Christiam Mena, Omaira Rodríguez

 

Ingeniería y Ciencias Aplicadas: Modelos Matemáticos y Computacionales, E. Dávila, J. Del Río, M. Cerrolaza, R. Chacón (Eds.) SVMNI 2014

 

A stroke may cause disabilities such as dysfunctions of motor skills. In particular, many patients present chronic deficits in the upper limbs, specifically in the hands. The main treatment for this kind of condition is physical rehabilitation therapy, which consists of a set of physical activities to recover the complete mobility of the affected limbs. Nowadays, one of the treatments based on modern technologies is the use of rehabilitation data gloves. Such a glove is a device connected to a computer that sends information about all finger movements and their spatial position. Many such devices have been used in rehabilitation systems focused on retraining and relearning fine motor skills. In Latin America, few companies are dedicated to producing and distributing rehabilitation data gloves, which makes these devices expensive and difficult to acquire. In this paper we propose the design and development of a low-cost rehabilitation data glove that achieves all the functionalities required for therapeutic usage. The hardware of our proposal is based on low-cost materials, and the software is based on gesture recognition. The rehabilitation data glove allows the detection of flexion/extension degrees for each finger, determining the finger positions of a patient at a given time. Our solution incorporates an API to provide the detection of gestures. Results show a precise, real-time capture of the movement.
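
As a hypothetical illustration of the flexion/extension detection such a glove provides (the calibration values, function names, and the toy gesture rule below are invented for the sketch; the paper's actual API is not reproduced):

```python
# Per-finger linear calibration from raw flex-sensor readings to a flexion
# angle in degrees. raw_flat / raw_fist are hypothetical readings recorded
# once per user with the hand fully open and fully closed.
def to_angle(raw, raw_flat, raw_fist, max_angle=90.0):
    """Map a raw ADC reading onto [0, max_angle] degrees of flexion."""
    t = (raw - raw_flat) / (raw_fist - raw_flat)
    return max_angle * min(max(t, 0.0), 1.0)      # clamp to the valid range

def is_fist(angles, threshold=60.0):
    """Toy gesture rule: all five fingers flexed past the threshold."""
    return all(a > threshold for a in angles)

# e.g. angles = [to_angle(r, 210, 840) for r in latest_readings]
```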

Modeling human tissues: An efficient integrated methodology

Miguel Cerrolaza, Giovana Gavidia, Eduardo Soudah, Miguel Martín-Landrove

 

Biomedical Engineering: Applications, Basis and Communications, Vol. 26, No. 1 (2014) 1450012

 

Geometric models of human body organs are obtained from imaging techniques like computed tomography (CT) and magnetic resonance imaging (MRI) that allow an accurate visualization of the inner body, thus providing relevant information about their structure and pathologies. Next, these models are used to generate surface and volumetric meshes, which can be used further for visualization, measurement, biomechanical simulation, rapid prototyping and prosthesis design. However, going from geometric models to numerical models is not an easy task: it is necessary to apply image-processing techniques to deal with the complexity of human tissues and to obtain more simplified geometric models, thus reducing the complexity of the subsequent numerical analysis. In this work, an integrated and efficient methodology to obtain models of soft tissues, like the grey and white matter of the brain, and hard tissues, like jaw and spine bones, is proposed. The methodology is based on image-processing algorithms chosen according to certain characteristics of the tissue: type, intensity profiles and boundary quality. First, low-quality images are improved by using enhancement algorithms to reduce image noise and to increase the contrast of structures. Then, hybrid segmentation for tissue identification is applied through a multi-stage approach. Finally, the obtained models are resampled and exported in formats readable by computer-aided design (CAD) tools. In CAD environments, these data are used to generate discrete models using the finite element method (FEM) or other numerical methods like the boundary element method (BEM). Results have shown that the proposed methodology is useful and versatile for obtaining accurate geometric models that can be used in several clinical cases to extract relevant quantitative and qualitative information.
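
A compressed sketch of the pipeline's three stages (enhancement, segmentation, mesh generation) using scikit-image; the Gaussian and Otsu choices stand in for the paper's hybrid multi-stage approach and are purely illustrative:

```python
import numpy as np
from skimage import filters, measure

def volume_to_surface(vol):
    """Denoise a scalar volume, segment one tissue, and extract a surface
    mesh ready for export toward CAD/FEM tools."""
    smoothed = filters.gaussian(vol, sigma=1.0)            # noise reduction
    mask = smoothed > filters.threshold_otsu(smoothed)     # crude segmentation
    # Marching cubes turns the binary mask into a triangulated surface
    verts, faces, normals, values = measure.marching_cubes(
        mask.astype(np.float32), level=0.5)
    return verts, faces
```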

Stability and Response of Polygenic Traits to Stabilizing Selection and Mutation

Harold P. de Vladar, Nick Barton

 

Genetics, Vol. 197, 749–767 June 2014

 

When polygenic traits are under stabilizing selection, many different combinations of alleles allow close adaptation to the optimum. If alleles have equal effects, all combinations that result in the same deviation from the optimum are equivalent. Furthermore, the genetic variance that is maintained by mutation–selection balance is 2m/S per locus, where m is the mutation rate and S the strength of stabilizing selection. In reality, alleles vary in their effects, making the fitness landscape asymmetric and complicating analysis of the equilibria. We show that the resulting genetic variance depends on the fraction of alleles near fixation, which contribute 2m/S per locus, and on the total mutational effects of alleles that are at intermediate frequency. The interplay between stabilizing selection and mutation leads to a sharp transition: alleles with effects smaller than a threshold value of 2√(m/S) remain polymorphic, whereas those with larger effects are fixed. The genetic load in equilibrium is less than for traits with equal effects, and the fitness equilibria are more similar. We find that if the optimum is displaced, alleles with effects close to the threshold value sweep first, and their rate of increase is bounded by √(m/S). Long-term response in general leads to well-adapted traits, unlike the case of equal effects, which often ends at a suboptimal fitness peak. However, the particular peaks to which populations converge are extremely sensitive to the initial states and to the speed of the shift of the optimum trait value.
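
Restating the abstract's key quantities in display form (m is the per-locus mutation rate, S the strength of stabilizing selection, and γ an allele's effect on the trait; no new results, just the abstract's formulas collected):

```latex
% Quantities stated in the abstract
V_g \;\approx\; \frac{2m}{S}
  \quad \text{(variance contributed per nearly fixed locus)},
\qquad
\hat{\gamma} \;=\; 2\sqrt{\tfrac{m}{S}}
  \quad \text{(alleles with } \gamma < \hat{\gamma} \text{ remain polymorphic)},
\qquad
\text{sweep rate} \;\lesssim\; \sqrt{\tfrac{m}{S}} .
```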

RDF-ization of DICOM Medical Images towards Linked Health Data Cloud

Andrés Tello, Alexandra La Cruz, Víctor Saquicela, Mauricio Espinoza, Maria-Esther Vidal

VI Latin American Congress on Biomedical Engineering CLAIB 2014, Paraná, Argentina, 29-31 October 2014, pp. 757-760

 

This paper proposes a novel strategy for automatically semantifying DICOM medical images (RDF-ization). We define an architecture that involves processes for extracting, anonymizing, and serializing the metadata comprised in DICOM medical images into RDF/XML. These processes allow for semantically enriching and sharing the metadata of DICOM medical files through the Linked Health Data cloud, thereby providing enhanced query capabilities with respect to those offered by current PACS environments, while exploiting all the advantages of the Linking Open Data (LOD) cloud and Semantic Web technologies.
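
A minimal sketch of the extract-anonymize-serialize chain using pydicom and rdflib; the vocabulary namespace, the tag blacklist, and the subject-URI scheme are assumptions, and the paper's actual architecture and mappings are richer:

```python
import pydicom
from rdflib import Graph, Literal, Namespace, URIRef

DCM = Namespace("http://example.org/dicom#")            # hypothetical vocabulary
PHI = {"PatientName", "PatientID", "PatientBirthDate"}  # identifiers to drop

def dicom_to_rdf(path, subject_uri):
    """Read one DICOM file, strip identifying tags, emit RDF/XML."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    g = Graph()
    s = URIRef(subject_uri)
    for elem in ds:
        # keep simple, non-sequence elements whose keyword is not PHI
        if elem.keyword and elem.keyword not in PHI and elem.VR != "SQ":
            g.add((s, DCM[elem.keyword], Literal(str(elem.value))))
    return g.serialize(format="xml")                    # RDF/XML serialization
```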

Quality control of the breast ca treatments on HDR brachytherapy with TLD-100

F. Torres Hoyos, N. De La Espriella Vélez, A. Sánchez Caraballo

Revista Mexicana de Física 60 (2014) 409–413

 

An anthropomorphic phantom, a female trunk, was built with a natural bone structure coated with an experimental glycerin- and water-based material called JJT, used to build soft tissue equivalent to human muscle, and a polymer (styrofoam) to build the lung as the critical organ, in order to simulate the treatment of breast cancer with high-dose-rate (HDR) brachytherapy and Ir-192 sources. The treatments were planned and calculated for the critical organ, the lung, and for a lesion of 2 cm in diameter in the breast, with the MicroSelectron HDR system and the Plato Brachytherapy V14.1 software from Nucletron (Netherlands), which uses the standard radiotherapy protocol for brachytherapy treatments. The doses were measured experimentally with TLD-100 (LiF:Mg,Ti) dosimeters, previously calibrated with less than 5% uncertainty, placed in the same positions and organs mentioned above. The dosimeter readout was carried out in a Harshaw TLD 4500 reader. The results obtained for the calculated treatments, using the standard simulator, and for the experimental measurements with TLD-100 show high concordance, agreeing on average to within ±1.1%, which turns the process into a quality control for this type of treatment.
