In this paper we extend the features of our open-source R package, which enables modern research methodology for astronomers through deep analyses of data and algorithms. Genetic Algorithm (GA) methods were added to this computational project, which consists of a NoSQL database and intelligent numerical algorithms searching for the most extreme extragalactic objects. In the future we will accelerate the GA with OpenCL and CUDA kernels to enable the discovery of very rare astronomical objects as well as the classification and clustering of well-known ones. The greater the computational power, the better the search and the astronomical databases that can be created, and the more improvements can be introduced into parametrization algorithms for future massive surveys.
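As a hedged illustration of the GA-based search added to the project, the sketch below evolves a simple two-parameter cut that isolates "extreme" objects in a synthetic catalogue; the fitness function, parameter names and data are assumptions made for illustration only, not the package's actual interface.

```r
# Minimal, illustrative genetic algorithm in base R (assumed fitness and data):
# evolve a cut in a 2-D parameter space that keeps few but extreme objects.
set.seed(1)
objects <- data.frame(ew = rnorm(500, 10, 3), flux = rnorm(500, 1, 0.2))

fitness <- function(ch) {
  # ch = c(ew_cut, flux_cut); reward cuts that select few but high-EW objects
  sel <- objects$ew > ch[1] & objects$flux < ch[2]
  if (!any(sel)) return(-Inf)
  mean(objects$ew[sel]) - 0.1 * sum(sel)
}

pop_size <- 40; n_gen <- 100
pop <- cbind(runif(pop_size, 5, 20), runif(pop_size, 0.5, 1.5))

for (g in seq_len(n_gen)) {
  fit     <- apply(pop, 1, fitness)
  parents <- pop[sample(pop_size, pop_size, replace = TRUE,
                        prob = rank(fit)), , drop = FALSE]   # rank selection
  mates   <- parents[sample(pop_size), ]
  cross   <- matrix(runif(pop_size) < 0.7, pop_size, 2)      # arithmetic crossover
  children <- ifelse(cross, (parents + mates) / 2, parents)
  children <- children + matrix(rnorm(pop_size * 2, 0, 0.1), pop_size, 2)  # mutation
  pop <- children
}
best <- pop[which.max(apply(pop, 1, fitness)), ]
cat("best cut: ew >", round(best[1], 2), " flux <", round(best[2], 2), "\n")
```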
The application of data mining techniques can greatly advance astronomical research. Astronomical object databases are very large, so proper dataset preprocessing is needed. This paper introduces quasar clustering based on parameterization data and discusses its significance, implements several clustering methods, and compares their advantages and disadvantages.
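To make the clustering step concrete, here is a minimal R sketch that applies two common methods (k-means and Ward hierarchical clustering) to an invented table of quasar parameters; the column names and values are placeholders, not the paper's actual parameterization.

```r
# Illustrative clustering of quasar parameterization data in R (invented columns).
set.seed(42)
quasars <- data.frame(
  ew_civ     = c(rnorm(50, 30, 5),     rnorm(50, 5, 2)),     # equivalent width
  fwhm_hbeta = c(rnorm(50, 4000, 500), rnorm(50, 9000, 800)),
  alpha_ox   = rnorm(100, -1.5, 0.2)
)

scaled <- scale(quasars)                               # preprocessing: z-score

km <- kmeans(scaled, centers = 2, nstart = 25)         # partitioning method
hc <- hclust(dist(scaled), method = "ward.D2")         # hierarchical method
hc_groups <- cutree(hc, k = 2)

table(kmeans = km$cluster, hclust = hc_groups)         # compare the two partitions
```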
In this paper we describe our open source R package that will enable new modern research methodology for astronomers with data and algorithms deep analyses. It is using OpenCL accelerated column-oriented R environment. It will provide a basis to develop the computational project made of database and intelligent numerical algorithms searching for most extreme extragalactic objects and carrying out analyses of AGN. In this project self-learning numerical algorithms will be based on the newest methods of artificial intelligence and typical astronomical analyses approach, which will be first time programmed in R in such the big scope.
This paper describes the implementation of a system consisting of a mobile application and RESTful architecture server intended for the analysis and presentation of quasars' spectrum. It also depicts the quasar's characteristics and significance to the scientific community, the source for acquiring astronomical objects' spectral data, used software solutions as well as presents the aspect of Cloud Computing and various possible deployment configurations.
Obtaining interesting celestial objects from tens of thousands or even millions of recorded optical-ultraviolet spectra depends not only on the data quality but also on the accuracy of spectrum decomposition. Additionally, rapidly growing data volumes demand higher computing power and/or more efficient algorithm implementations. In this paper we speed up the process of subtracting iron transitions and fitting Gaussian functions to emission peaks using C++ and OpenCL methods together with a NoSQL database. We also implemented typical astronomical peak-detection methods and compared them with our previous hybrid methods implemented in CUDA.
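The accelerated pipeline described above is written in C++/OpenCL, but the Gaussian-fitting step it performs can be sketched in plain R; the synthetic spectrum and starting values below are assumptions used only for illustration.

```r
# Illustrative fit of a single Gaussian to an emission peak (synthetic data).
set.seed(7)
wave <- seq(4800, 5000, by = 0.5)                       # wavelength grid [Angstrom]
flux <- 1 + 3 * exp(-(wave - 4902)^2 / (2 * 4^2)) + rnorm(length(wave), 0, 0.05)

fit <- nls(flux ~ c0 + a * exp(-(wave - mu)^2 / (2 * sigma^2)),
           start = list(c0 = 1, a = 2, mu = 4900, sigma = 5))
coef(fit)   # recovered continuum level, amplitude, center and width
```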
With the help of the silicon industry, microfluidic processors utilizing nano-membrane valves, pumps and microreactors were invented. These so-called labs-on-a-chip, combined with molecular computing, create molecular systems-on-a-chip. This work presents a new approach to the implementation of molecular inference systems. It requires a unique representation of signals by DNA molecules. The main part of this work covers the concept of logic gates based on typical genetic engineering reactions. The presented method allows logic gates with many inputs to be constructed and executed in the same number of elementary operations, regardless of the number of input signals. Every microreactor of the lab-on-a-chip performs one unique operation on input molecules and can be connected by dataflow output-input connections to other microreactors.
In this paper we present an interface between the Pogamut 3 platform and the Clojure programming language. Clojure is a state-of-the-art functional language with roots in Lisp. Pogamut 3 is a framework that simplifies the creation of embodied agents. Our goal was to introduce Clojure code into our agents' logic. Simple emergent behavior of a group of agents was implemented using Clojure code. The performance of Clojure code called from the Pogamut platform was measured.
In this paper we explore the problem of intelligent agent beliefs. We model agent beliefs using multimodal logic of belief, precisely the KD45(m) system, implemented as a directed graph depicting Kripke semantics. We present a card game engine application which allows multiple agents to connect to a given game session and play the card game. A simplified version of the popular Saboteur card game is used as an example. The implementation was done in the Java language using the following libraries and applications: Apache Mina and LWJGL.
Although recent years have brought massive astronomical surveys which have been extensively searched, there are still many mysteries buried in the data. We attempt to extract objects with untypical emission lines, especially those with weak or absent emission but without significant absorption. For that purpose we created a database containing quasar spectra for quick access, together with peak-detection code in the R environment, which we describe in this article.
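A minimal sketch of the kind of peak detection the article describes, written in R; the median-filter continuum, window size and 3-sigma threshold are arbitrary illustrative choices, not the parameters of our actual code.

```r
# Illustrative peak detection on a synthetic quasar-like spectrum in R.
set.seed(3)
wave <- seq(3800, 7000, by = 1)
flux <- 1 + 0.5 * exp(-(wave - 4861)^2 / (2 * 8^2)) +   # weak Hbeta-like line
        rnorm(length(wave), 0, 0.03)

continuum <- runmed(flux, k = 101)                 # coarse continuum estimate
residual  <- flux - continuum
noise     <- mad(residual)

is_max <- c(FALSE, diff(sign(diff(residual))) == -2, FALSE)  # local maxima
peaks  <- which(is_max & residual > 3 * noise)               # 3-sigma threshold
wave[peaks]
```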
Standardization of the diagnostic process of insomnia is a highly important task in clinical practice, epidemiological studies and the assessment of treatment outcomes. In this paper we describe relationships between standard surveys within cluster groups sharing the same degree of insomnia.
Finding interesting celestial objects among tens of thousands or even millions of records of raw data is not an easy task. In this paper we speed up this process with Thrust, a high-level NVIDIA CUDA C++ template library, which makes our database with its R interface much more efficient.
Insomnia is generally defined as a subjective report of difficulty falling asleep, difficulty staying asleep, early awakening, or nonrestorative sleep. It is one of the most common health complaints among the general population. In this paper we try to find relationships between different insomnia cases and predisposing, precipitating, and perpetuating factors, followed by pharmacological treatment.
Hormone parameters were determined in the serum of young addicted men in order to compare them with those obtained from a group of healthy subjects. Three groups were investigated, named the opiate, mixed and control groups. Statistical and data mining methods were applied to identify significant differences. The R environment was used for all computations. The determination of hormone parameters provides important information about the impact of addiction.
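A hedged sketch of the kind of group comparison described, assuming a long-format data frame with one hormone value per subject; the variable names and the Kruskal-Wallis/Wilcoxon choice are illustrative assumptions, not the study's actual analysis.

```r
# Illustrative comparison of a hormone parameter across three groups (simulated).
set.seed(11)
sera <- data.frame(
  group        = rep(c("opiates", "mixed", "control"), each = 30),
  testosterone = c(rnorm(30, 3.1, 0.8), rnorm(30, 3.5, 0.9), rnorm(30, 5.2, 1.0))
)

kruskal.test(testosterone ~ group, data = sera)              # global difference
pairwise.wilcox.test(sera$testosterone, sera$group,
                     p.adjust.method = "holm")               # which groups differ
aggregate(testosterone ~ group, data = sera, FUN = median)   # descriptive summary
```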
Nowadays computers analyze medical data at almost every diagnosis and treatment step. We develop new technology which gives us better and more precise diagnoses. We chose esophageal high-resolution manometry with impedance (HRMI), which has been considered a "gold standard" test for esophageal motility. HRMI is the next generation of manometry, more sensitive and accurate than EFT. The examination allows physicians to get information about esophageal peristalsis, the amplitude and duration of esophageal contractions, and liquid/viscous bolus transit time from the mouth to the stomach. In 2008 we examined 80 patients using "old" EFT manometry and in 2009 another 80 patients using high-resolution manometry (HRMI). Every patient underwent manometry, endoscopy and X-ray examination. We asked about symptoms, which we correlated with the data from EFT and HRMI. We tried to find a good algorithm for this purpose in order to build a simple and helpful tool for physicians to make the right diagnosis and treatment decision. The connection between data and symptoms seems clear, but finding a good algorithm for the given data is the main problem.
In this paper we explore the problem of communication and coordination in a team of intelligent game bots (i.e. embodied agents). We present a tactical decision-making system controlling the behavior of an autonomous bot, followed by the concept of a team tactical decision-making system controlling a team of intelligent bots. The algorithms introduced have been implemented in the Java language by means of the Pogamut 2 framework, interfacing the bot logic with the Unreal Tournament 2004 virtual environment.
In this paper, data mining methods were applied to investigate features determining high-quality pork meat. The aim of the study was to analyse the determinants of pork meat quality defined in relation to HDL and LDL cholesterol concentration, plasma leptin, triglycerides, plasma glucose and serum. The research was carried out on 54 pigs originating from crossbreeding of Naima sows with boars of the P76-PenArLan hybrid line. Meat quality parameters were evaluated in samples derived from the Longissimus (LD) muscle taken behind the last rib, on the basis of pH value, meat colour, drip loss, RTN, intramuscular fat and glycolytic potential. The results of this study were processed using the R environment and show that cluster and regression analysis can be a useful tool for in-depth analysis of the determinants of pig meat quality in homogeneous pig populations. However, the question of the determinants of the level of glycogen and fat in meat requires further research.
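To illustrate the combination of cluster and regression analysis mentioned above, the R sketch below runs both on simulated meat-quality traits; the variable names (ph45, drip_loss, imf, leptin, glucose) are placeholders, not the study's measured variables.

```r
# Illustrative cluster and regression analysis of meat-quality traits (simulated).
set.seed(5)
pigs <- data.frame(
  ph45      = rnorm(54, 6.2, 0.2),      # pH measured post mortem
  drip_loss = rnorm(54, 4.5, 1.2),      # %
  imf       = rnorm(54, 2.0, 0.5),      # intramuscular fat, %
  leptin    = rnorm(54, 3.0, 0.8),      # plasma leptin
  glucose   = rnorm(54, 90, 10)
)

hc     <- hclust(dist(scale(pigs)), method = "ward.D2")   # cluster analysis
groups <- cutree(hc, k = 3)
table(groups)

model <- lm(imf ~ leptin + glucose + ph45, data = pigs)   # regression analysis
summary(model)
```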
Blood pressure in childhood and adolescence is an important indicator of good health and a strong predictor of BP in adulthood. Genetic susceptibility, environmental and socioeconomic factors are related to lifestyle, obesity and cardiovascular risk, including elevated BP. Increased body mass index is strictly correlated with BP, and obesity and overweight are the main intermediate phenotypes of childhood hypertension. However, despite the current obesity epidemic, available data do not fully support the hypothesis that it has resulted in an increase of BP in children. We analysed data obtained from 7591 children participating in a nation-wide health survey using data mining methodology. The results reveal relationships of obesity and high blood pressure with school environment characteristics.
Nowadays computers successfully analyze medical data, giving results used for further treatment. Every year we develop new technology which gives us better and more precise diagnoses. We chose esophageal manometry (EFT), which has been considered a "gold standard" test for the evaluation of esophageal motility. EFT allows physicians to get information about esophageal peristalsis, the amplitude and duration of esophageal contractions, and liquid/viscous bolus transit time from the mouth to the stomach. We examined 80 patients during 2008. Every patient underwent EFT, endoscopy and X-ray examination. It was important to ask about symptoms, which we correlated with the EFT data. We tried to find a good algorithm for this task in order to build a simple and helpful tool for physicians to make the right diagnosis. The connection between data and symptoms seems clear, but finding a good algorithm for the given data is the main problem.
Electrical bioimpedance is one of the methods used to assess hydration status in hemodialyzed patients. It is also used to assess the hydration level of peritoneally dialysed patients, patients diagnosed with neoplastic diseases, patients after organ transplantations and those infected with HIV. Measurement sets were obtained from two groups, named the control group (healthy volunteers) and the test group (hemodialyzed patients). Z-scored, discretized data and data retrieval results were computed in the R language environment in order to find a simple rule for recognizing health problems. The executed experiments confirm the possibility of creating good classifiers for detecting the proper patient from medical data sets, but only with prior training.
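A sketch of the described workflow (z-scoring, discretization, training a simple classifier) in R on simulated bioimpedance values; the variables, cut points and tree model are illustrative assumptions, not the paper's actual rules.

```r
# Illustrative z-scoring, discretization and rule learning on bioimpedance data.
# All variables and values are simulated placeholders.
library(rpart)
set.seed(21)
bio <- data.frame(
  group      = factor(rep(c("control", "hemodialyzed"), each = 40)),
  resistance = c(rnorm(40, 520, 40), rnorm(40, 430, 50)),   # ohm
  reactance  = c(rnorm(40, 55, 8),   rnorm(40, 38, 9))      # ohm
)

bio[, c("resistance", "reactance")] <- scale(bio[, c("resistance", "reactance")])  # z-score
bio$res_bin <- cut(bio$resistance, breaks = 3, labels = c("low", "mid", "high"))   # discretize

tree <- rpart(group ~ res_bin + reactance, data = bio, method = "class")
print(tree)   # the learned splits act as simple decision rules
```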
The main concept of molecular computing depends on DNA self-assembly abilities and on modifying DNA with the help of enzymes during genetic operations. In typical DNA computing, a sequence of operations executed in parallel on DNA strings is called an algorithm, which is also determined by a model of the DNA strings. This methodology resembles a soft-hardware specialized architecture, driven here by heating, cooling and enzymes, especially the polymerases used for copying strings. As described in this paper, the properties of Taq polymerase are changed by modifying its DNA sequence in such a way that the polymerase side activities, together with the peptide chains responsible for destroying amplified strings, are cut off. This introduces the next level of molecular computing. The succession of genetic operation execution and the given molecule model with designed nucleotide sequences produce computation results and additionally modify the enzymes, which directly influence the computation process. The information flow begins to circulate. Additionally, such optimized enzymes are more suitable for nanoconstruction because they have only the desired characteristics. An experiment was proposed to confirm the feasibility of the suggested implementation.
DNA computing provides a new molecular mechanism for storing and processing information. DNA macrostructures are the basis of specially designed algorithms realized by so-called soft-hardware applications. To obtain these structures, a special DNA sequence design tool is required. In this paper a comparison of two such computer programs is provided. In our program, a custom genetic algorithm with new hybrid operators was used to create a set of DNA chains. The second program, written by Winfree, makes random changes using a given set of short constant forbidden fragments.
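To give a flavour of the forbidden-fragment criterion, the R sketch below scores candidate DNA chains by counting occurrences of forbidden fragments, the kind of penalty a genetic algorithm's fitness function could minimise; the fragment set and sequences are arbitrary examples, not those used in either program.

```r
# Illustrative scoring of candidate DNA sequences against forbidden fragments.
forbidden <- c("GGGG", "AAAA", "GCGC")

count_forbidden <- function(seq, frags) {
  # count non-overlapping occurrences of each forbidden fragment in the sequence
  sum(vapply(frags, function(f) {
    hits <- gregexpr(f, seq, fixed = TRUE)[[1]]
    if (hits[1] == -1) 0L else length(hits)
  }, integer(1)))
}

candidates <- c("ATCGGCATTAGCAT", "ATGGGGATAAAAGT", "CATGCGCATGCGCA")
penalties  <- vapply(candidates, count_forbidden, integer(1), frags = forbidden)
penalties   # lower is better; a GA would minimise this over generations
```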