Article Title | Author(s) | Conference | Abstract | Purchase Article |
---|---|---|---|---|
Mohammad Zeiaee, Mohammad Reza Jahed-Motlagh
|
14th Annual International Conference of the Computer Society of Iran
|
Portfolio optimization under the classic mean-variance
framework of Markowitz must be revised, as variance
fails to be a good risk measure. This is especially true
when asset returns are not normally distributed. In this
paper, we utilize Value at Risk (VaR) as the risk measure
and use Historical Simulation (HS) to obtain an
acceptable estimate of the VaR. A well-known
multi-objective evolutionary approach, NSGA-II, is then
employed to address the inherent bi-objective problem.
The method is tested on past return data for 12 assets
listed on the Tehran Stock Exchange (TSE). A comparison
of the obtained results shows that the proposed method
offers high-quality solutions and a wide range of
risk-return trade-offs.
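As an illustration of the risk measure described in this abstract, below is a minimal sketch of a Historical Simulation VaR estimate and of the two objectives a bi-objective optimizer such as NSGA-II would trade off. The 95% confidence level and the function names are illustrative assumptions, and the NSGA-II search itself is not shown.

```python
import numpy as np

def historical_var(returns, weights, alpha=0.95):
    """VaR by Historical Simulation: the empirical (1 - alpha) quantile
    of past portfolio returns, reported as a positive loss."""
    portfolio_returns = returns @ weights   # (T,) historical portfolio returns
    return -np.quantile(portfolio_returns, 1.0 - alpha)

def objectives(returns, weights, alpha=0.95):
    """The two quantities traded off by the bi-objective search:
    expected return (to maximize) and historical VaR (to minimize)."""
    return float(np.mean(returns @ weights)), historical_var(returns, weights, alpha)
```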
|
||
Mohamad Alishahi, Mehdi Ravakhah, Baharak Shakeriaski, Mahmud Naghibzade
|
14th Annual International Conference of the Computer Society of Iran
|
One of the most effective ways to extract knowledge
from large information resources is to apply data
mining methods. Since the amount of information on
the Internet is exploding, XML documents are widely
used because of their many advantages. Knowledge
extraction from XML documents is a way to provide
more usable results. XCLS is one of the most
efficient algorithms for clustering XML documents. In
this paper we present a new algorithm for clustering
XML documents; it improves on XCLS and addresses its
shortcomings. We implemented both algorithms and
evaluated their clustering quality and running time on
the same data sets. In both respects, the new algorithm
performs better.
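As a rough, generic illustration of structure-based XML clustering (this is not XCLS or the authors' improved algorithm), each document below is represented by the set of its root-to-element tag paths and grouped by Jaccard similarity; the 0.5 threshold is an assumed value.

```python
from xml.etree import ElementTree as ET

def tag_paths(xml_text):
    """Collect the set of root-to-element tag paths of an XML document."""
    paths, stack = set(), []
    def walk(node):
        stack.append(node.tag)
        paths.add("/".join(stack))
        for child in node:
            walk(child)
        stack.pop()
    walk(ET.fromstring(xml_text))
    return paths

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster(documents, threshold=0.5):
    """Greedy clustering: a document joins the first cluster whose
    representative path set is similar enough, else starts a new one."""
    clusters = []                      # list of (representative_paths, members)
    for doc in documents:
        p = tag_paths(doc)
        for rep, members in clusters:
            if jaccard(p, rep) >= threshold:
                members.append(doc)
                break
        else:
            clusters.append((p, [doc]))
    return [members for _, members in clusters]
```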
|
||
Zahra Toony, Hedieh Sajedi, Mansour Jamzad
|
14th Annual International Conference of the Computer Society of Iran
|
Recently, an image hiding technique based on block
texture similarity has been proposed, in which blocks
of the secret image are compared with blocks of a set
of cover images, and the cover image whose blocks are
most similar to those of the secret image is selected
as the best candidate for concealing it. In this paper,
we propose a new image hiding method in which the
secret image is first coded using a fuzzy
coding/decoding method. The fuzzy coder compresses
each block of the secret image into a smaller block.
After compressing the secret image in this way, we
hide it in a cover image. Hiding a smaller secret
image causes less distortion in the stego-image (the
image that carries the secret image or data), so a
higher quality stego-image is obtained. Consequently,
the proposed method provides a higher embedding rate
and enhanced security.
|
||
S.A. Hosseini Amereii, M.M. Homayounpour
|
14th Annual International Conference of the Computer Society of Iran
|
Two popular and well-performing approaches to
language identification (LID) are Phone Recognition
followed by Language Modeling (PRLM) and Parallel
PRLM (PPRLM). In this paper, we report several
improvements in phone recognition that reduce the
error rate in PRLM- and PPRLM-based LID systems. In
our previous paper, we introduced the APRLM approach,
which reduces the error rate by about 1.3% in LID
tasks. Here, we suggest another solution that
outperforms APRLM. This new LID approach is named
Generalized PRLM (GPRLM). Several language
identification experiments were conducted, and the
proposed improvements were evaluated on the OGI-MLTS
corpus. Our results show that GPRLM outperforms PPRLM
and APRLM by about 2.5% and 1.2%, respectively, in two
language classification tasks.
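As a sketch of the language-modeling half of a PRLM system (not the authors' APRLM/GPRLM setup), phone strings emitted by a phone recognizer can be scored by one bigram model per language and the best-scoring language chosen; the add-one smoothing and data layout are illustrative assumptions.

```python
import math
from collections import Counter

def train_bigram(phone_seqs):
    """Train a bigram phonotactic model from lists of phone symbols."""
    bigrams, contexts = Counter(), Counter()
    for seq in phone_seqs:
        seq = ["<s>"] + list(seq)
        contexts.update(seq[:-1])
        bigrams.update(zip(seq, seq[1:]))
    return bigrams, contexts

def log_prob(seq, model, vocab_size):
    """Add-one-smoothed bigram log-probability of a phone sequence."""
    bigrams, contexts = model
    seq = ["<s>"] + list(seq)
    return sum(math.log((bigrams[(a, b)] + 1) / (contexts[a] + vocab_size))
               for a, b in zip(seq, seq[1:]))

def identify(seq, models, vocab_size):
    """Pick the language whose bigram model scores the phone sequence highest."""
    return max(models, key=lambda lang: log_prob(seq, models[lang], vocab_size))
```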
|
||
Omid Khayat, Javad Razjouyan, Hadi ChahkandiNejad, Mahdi Mohammad Abadi, Mohammad Mehdi
|
14th Annual International Conference of the Computer Society of Iran
|
This paper introduces a revisited hybrid algorithm for
function approximation: a simple and fast learning
algorithm that automates structure and parameter
identification simultaneously based on input-target
samples. First, without any need for clustering, the
initial structure of the network with a specified
number of rules is established; then a training
process based on the error on the remaining training
samples is applied to obtain a more precise model.
After the network structure is identified, an
optimization step based on an error criterion is
performed to tune the obtained parameters of the
premise and consequent parts. Finally, comprehensive
comparisons with other approaches demonstrate that the
proposed algorithm is superior in terms of compact
structure, convergence speed, memory usage, and
learning efficiency.
|
||
Ahmad Ali Abin, Mehran Fotouhi, Shohreh Kasaei
|
14th Annual International Conference of the Computer Society of Iran
|
This paper presents a new segmentation method for
color images that relies on soft and hard segmentation
processes. In the soft segmentation process, a
cellular learning automaton (CLA) analyzes the input
image and draws together the pixels belonging to each
region to generate a soft segmented image. Adjacency
and texture information are taken into account in this
stage. The soft segmented image is then fed to the
hard segmentation process to generate the final
segmentation result. As the proposed method is based
on CLA, it can adapt to its environment after some
iterations. This adaptive behavior leads to a semi
content-based segmentation process that performs well
even in the presence of noise. Experimental results
show the effectiveness of the proposed segmentation
method.
|
||
Vahid Khatibi, Gholam Ali Montazer
|
14th Annual International Conference of the Computer Society of Iran
|
In this paper, a novel inference engine named the
fuzzy-evidential hybrid engine is proposed, based on
the Dempster-Shafer theory of evidence and fuzzy set
theory. The hybrid engine operates in two phases. In
the first phase, it models the vagueness of the input
information through fuzzy sets; then, after extracting
a fuzzy rule set for the problem, it applies the fuzzy
inference rules to the acquired fuzzy sets to produce
the first-phase results. In the second phase, the
results of the previous stage are taken as basic
beliefs for the problem propositions, and the belief
and plausibility functions (the belief interval) are
set accordingly. Information gathered from different
sources provides diverse basic beliefs, which should
be fused to produce an integrative result. For this
purpose, evidential combination rules are used to
perform the information fusion. Applied to coronary
heart disease (CHD) risk assessment, the proposed
engine yielded an 86 percent accuracy rate in CHD risk
prediction.
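For the evidential combination step mentioned above, a minimal sketch of Dempster's rule of combination for two basic belief assignments over frozenset-valued focal elements is given below. The example masses about a {low, high} CHD-risk frame are hypothetical, not the paper's data.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: pointwise products of masses on intersecting focal
    elements, renormalized by the non-conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

def belief(m, hypothesis):        # Bel(A): mass of all subsets of A
    return sum(w for s, w in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):  # Pl(A): mass of all sets intersecting A
    return sum(w for s, w in m.items() if s & hypothesis)

# usage with two hypothetical sources of evidence
m1 = {frozenset({"high"}): 0.6, frozenset({"low", "high"}): 0.4}
m2 = {frozenset({"high"}): 0.7, frozenset({"low"}): 0.1,
      frozenset({"low", "high"}): 0.2}
fused = dempster_combine(m1, m2)
```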
|
||
Vahid Khatibi, Gholam Ali Montazer
|
14th Annual International Conference of the Computer Society of Iran
|
One of the toughest challenges in medical diagnosis
is uncertainty handling. The recognition of intestinal
bacteria such as Salmonella and Shigella, which cause
typhoid fever and dysentery, respectively, is one such
challenging problem for microbiologists. In this paper,
we take an intelligent approach to the bacteria
classification problem by using five similarity
measures of fuzzy sets (FSs) and intuitionistic fuzzy
sets (IFSs) to examine their capability to handle
uncertainty in medical pattern recognition. Finally,
the recognition rates of the measures are calculated,
among which the IFS Mitchell and Hausdorff similarity
measures score the best results, with 95.27% and
94.48% recognition rates, respectively. In contrast,
the FS Euclidean distance yields only an 85%
recognition rate.
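For reference, below is a minimal sketch of two of the simpler measures in this family: a normalized Euclidean similarity for ordinary fuzzy sets, and one for intuitionistic fuzzy sets that also accounts for the hesitation degree (in the style of the Szmidt-Kacprzyk distance). These are standard textbook forms, not necessarily the exact measures evaluated in the paper.

```python
import numpy as np

def fs_euclidean_similarity(mu_a, mu_b):
    """Similarity of two fuzzy sets given as membership vectors."""
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    return 1.0 - np.sqrt(np.mean((mu_a - mu_b) ** 2))

def ifs_euclidean_similarity(a, b):
    """a, b: arrays of shape (n, 2) holding (membership, non-membership)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pi_a, pi_b = 1 - a.sum(axis=1), 1 - b.sum(axis=1)   # hesitation degrees
    d2 = ((a[:, 0] - b[:, 0]) ** 2 + (a[:, 1] - b[:, 1]) ** 2
          + (pi_a - pi_b) ** 2)
    return 1.0 - np.sqrt(np.mean(d2) / 2.0)
```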
|
||
Ali B. Hashemi, M.R. Meybodi
|
14th Annual International Conference of the Computer Society of Iran
|
PSO, like many stochastic search methods, is very
sensitive to parameter settings, as modifying a single
parameter may have a large effect. In this paper, we
propose a new learning automata-based approach for
adaptive PSO parameter selection. In this approach,
three learning automata are utilized to determine the
values of the parameters in the particle velocity
update, namely the inertia weight and the cognitive
and social components. Experimental results show that,
compared to other schemes such as SPSO, PSO-IW,
PSO-TVAC, PSO-LP, DAPSO, GPSO, and DCPSO, the proposed
algorithms have the same or even higher ability to
find better local minima. In addition, the proposed
algorithms converge to the stopping criteria
significantly faster than most of the PSO algorithms.
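For context, the sketch below shows the standard PSO velocity and position update whose three parameters (inertia weight w, cognitive coefficient c1, social coefficient c2) the learning automata are used to adapt; here they are simply fixed illustrative values.

```python
import numpy as np

def update_particle(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49, rng=None):
    """One standard PSO step for a single particle (illustrative constants)."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```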
|
||
Ali B. Hashemi, M.R. Meybodi
|
14th Annual International Conference of the Computer Society of Iran
|
Real-world optimization problems are usually dynamic,
in that the local optima of the problem change over
time. Hence, in these problems the goal is not only to
find the global optimum but also to track its changes.
In this paper, we propose a variant of cellular PSO, a
hybrid model of particle swarm optimization and
cellular automata, which addresses dynamic
optimization. In the proposed model, the population is
split among the cells of a cellular automaton embedded
in the search space. Each cell can contain a specified
number of particles in order to preserve the diversity
of the swarm. Moreover, we utilize the exploration
capability of quantum particles in order to find the
positions of new local optima quickly: after a change
in the environment is detected, some of the particles
in each cell change their role from standard to
quantum particles for a few iterations. Experimental
results on the moving peaks benchmark show that the
proposed algorithm outperforms mQSO, a well-known
multi-swarm model for dynamic optimization, in many
environments.
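As an illustration of the quantum-particle behavior referred to above, one common formulation (used by mQSO-style quantum particles) re-samples the particle uniformly inside a hypersphere around the swarm attractor instead of applying the usual velocity update. The cloud radius here is an assumed value.

```python
import numpy as np

def quantum_move(attractor, r_cloud=1.0, rng=None):
    """Sample a new position uniformly inside a ball of radius r_cloud
    centered on the attractor (e.g., the cell's best position)."""
    rng = rng or np.random.default_rng()
    d = attractor.shape[0]
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)          # uniform direction
    radius = r_cloud * rng.random() ** (1.0 / d)    # uniform within the ball
    return attractor + radius * direction
```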
|
||
Ehsan Safavieh, Amin Gheibi, Mohammadreza Abolghasemi, Ali Mohades
|
14th Annual International Conference of the Computer Society of Iran
|
Particle Swarm Optimization (PSO) is a nature-inspired
optimization method that is widely used nowadays. In
this paper we propose a new dynamic geometric
neighborhood for PSO based on the Voronoi diagram. The
Voronoi diagram is a natural geometric way to
determine neighbors in a set of particles; it seems
that in a realistic swarm, particles take their
Voronoi neighbors into account.
A comparison is also made between the performance of
some traditional methods for choosing neighbors and
the new dynamic geometric methods, such as Voronoi and
dynamic Euclidean neighborhoods. This comparison shows
that PSO with a geometric neighborhood can achieve
better accuracy overall, especially when the optimum
lies outside the initial range.
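A minimal sketch of how such a dynamic Voronoi neighborhood can be computed: two particles are Voronoi neighbors exactly when their cells share a facet, which is equivalent to being adjacent in the Delaunay triangulation of the particle positions, so the neighborhood can be read off scipy's Delaunay structure and recomputed whenever the particles move. The helper names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.spatial import Delaunay

def voronoi_neighbors(positions):
    """Map each particle index to the indices of its Voronoi neighbors."""
    tri = Delaunay(positions)
    indptr, indices = tri.vertex_neighbor_vertices
    return {i: indices[indptr[i]:indptr[i + 1]]
            for i in range(len(positions))}

def neighborhood_best(positions, fitness, i, neighbors):
    """Best position among particle i and its Voronoi neighbors (minimization)."""
    candidates = np.append(neighbors[i], i)
    return positions[candidates[np.argmin(fitness[candidates])]]
```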
|
||
Ahmad Ali Abin, Mehran Fotouhi, Shohreh Kasaei
|
14th Annual International Conference of the Computer Society of Iran
|
In recent years, processing images that contain human
faces has attracted growing research interest because
of the establishment and development of automatic
methods, especially in security applications,
compression, and perceptual user interfaces. In this
paper, a new method is proposed for multiple face
detection and tracking in video frames. The proposed
method uses skin color, edge and shape information,
face detection, and dynamic movement analysis of faces
for more accurate real-time multiple face detection
and tracking. One of the main advantages of the
proposed method is its robustness against common
challenges in face tracking such as scaling, rotation,
scene changes, fast movements, and partial occlusions.
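For the skin-color cue mentioned above, a minimal sketch is to convert RGB to YCbCr (BT.601) and threshold the chrominance channels. The threshold box (77..127 for Cb, 133..173 for Cr) is a common rule of thumb from the literature, not necessarily the rule used by the authors.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of likely skin pixels for an (H, W, 3) RGB image."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```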
|
||
Monireh Abdoos, Nasser Mozayani, Ahmad Akbari
|
14th Annual International Conference of the Computer Society of Iran
|
In this paper, we present a new measure for evaluating
similarity changes in a multi-agent system. The
similarity of the agents changes during the learning
process; these differences arise from the composition
or decomposition of agent sets. The presented measure
defines the change in the homogeneity of agents under
composition and decomposition. The utility of the
metric is demonstrated in an experimental evaluation
of multi-agent foraging. The results show that when
the similarity difference takes a positive value, the
performance grows rapidly.
|
||
A. Mashhadi Kashtiban, M. Alinia Ahandani
|
14th Annual International Conference of the Computer Society of Iran
|
In this paper we propose several methods for
partitioning, the process of grouping members of the
population into different memeplexes, in the shuffled
frog leaping (SFL) algorithm. The proposed methods
divide the population according to the value of the
cost function, the geometric position of the members,
or purely random partitioning. The proposed methods
are evaluated on several low- and high-dimensional
benchmark functions. The results on low-dimensional
functions demonstrate that the geometric partitioning
methods have the best success rate and the fastest
performance. On high-dimensional functions, using the
geometric partitioning methods in the partitioning
stage of the SFL algorithm also leads to a better
success rate, but these methods are more time
consuming than the other partitioning methods.
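For context, the sketch below shows the classic fitness-based partitioning used in SFL, to which the proposed geometric and random schemes are alternatives: frogs are ranked by cost and dealt into the memeplexes round-robin, so every memeplex receives frogs of all quality levels.

```python
import numpy as np

def fitness_partition(population, costs, n_memeplexes):
    """Classic SFL partitioning: sort by cost (best first) and deal the
    ranked frogs into memeplexes in round-robin fashion."""
    order = np.argsort(costs)
    memeplexes = [[] for _ in range(n_memeplexes)]
    for rank, idx in enumerate(order):
        memeplexes[rank % n_memeplexes].append(population[idx])
    return memeplexes
```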
|
||
H. Davoudi, E. Kabir
|
14th Annual International Conference of the Computer Society of Iran
|
Keystroke dynamics-based authentication (KDA) verifies
users via their typing patterns. To authenticate users
based on their typing samples, one must measure the
resemblance between a typing sample and the training
samples of a user, regardless of the text typed. In
this paper, a measure is proposed for the distance
between a typing sample and a set of samples from a
user. For each digraph, histogram-based density
estimation is used to find the pdf of its duration
time. This measure is combined with another measure
based on the distance between two samples.
Experimental results show a considerable decrease in
FAR while FRR remains constant.
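A minimal sketch of the histogram-based density estimate mentioned above: for one digraph, the pdf of its duration is estimated from a user's training durations, and a new sample is scored by the log-density of its durations. The bin count and floor probability are illustrative assumptions.

```python
import numpy as np

def digraph_pdf(train_durations, bins=20):
    """Histogram-based pdf estimate of one digraph's duration time."""
    hist, edges = np.histogram(train_durations, bins=bins, density=True)
    def pdf(t):
        i = np.searchsorted(edges, t, side="right") - 1
        return hist[i] if 0 <= i < len(hist) else 0.0
    return pdf

def sample_score(sample_durations, pdf, floor=1e-6):
    """Average negative log-density: smaller means closer to the user's model."""
    return -np.mean([np.log(max(pdf(t), floor)) for t in sample_durations])
```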
|
||
Heshaam Faili
|
14th Annual International Conference of the Computer Society of Iran
|
The increased domain of locality offered by
tree-adjoining grammars (TAG) encourages some
researchers to use them as a modeling formalism in
their language applications. However, parsing with a
rich grammar like TAG faces two main obstacles: low
parsing speed and a large number of ambiguous
syntactic parses. We use a statistical shallow parsing
idea in the TAG formalism, named supertagging, which
enriches the standard POS tags with syntactic
information about the sentence. In this paper, an
error-driven method for approaching a full parse from
partial parses based on the TAG formalism is
presented. These partial parses result from a
supertagger followed by a simple heuristic-based light
parser named the lightweight dependency analyzer
(LDA). Like other error-driven methods, the process of
generating the deep parses can be divided into two
phases, error detection and error correction, and in
each phase different completion heuristics are applied
to the partial parses. Experiments on the Penn
Treebank show considerable improvements in parsing
time and in the disambiguation process.
|
||
Mohsen Rohani, Alireza Nasiri Avanaki
|
14th Annual International Conference of the Computer Society of Iran
|
A watermarking method in the DCT domain is modified
to achieve better imperceptibility. Particle Swarm
Optimization (PSO) is used to find the best DCT
coefficients for embedding the watermark sequence, and
the Structural Similarity Index (SSIM) is used as the
fitness function in order to obtain a watermarked
image with the best possible quality.
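A minimal sketch of such a fitness evaluation is given below: a candidate set of DCT coefficient positions (as a PSO particle might encode) receives a simple additive ±alpha embedding, and the SSIM between the cover and the watermarked image is returned as the fitness to maximize. The additive embedding and the alpha value are assumptions for illustration, not necessarily the authors' scheme; it uses scipy's DCT and scikit-image's SSIM.

```python
import numpy as np
from scipy.fftpack import dct, idct
from skimage.metrics import structural_similarity as ssim

def dct2(a):
    return dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(a):
    return idct(idct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

def fitness(cover, watermark_bits, coeff_positions, alpha=5.0):
    """Embed +/-alpha into the chosen DCT coefficients and score the result
    by SSIM against the cover image (higher is better quality)."""
    C = dct2(cover.astype(float))
    for bit, (r, c) in zip(watermark_bits, coeff_positions):
        C[r, c] += alpha if bit else -alpha
    stego = np.clip(idct2(C), 0, 255)
    return ssim(cover.astype(float), stego, data_range=255)
```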
|
||
Ali Nouri, Hooman Nikmehr
|
14th Annual International Conference of the Computer Society of Iran
|
In a quest for modeling human brain, we are going to introduce a brain model based on a general framework for brain called Memory-Prediction Framework. The model is a hierarchical Bayesian structure that uses Reservoir Computing methods as the state-of-the-art and the most biological plausible Temporal Sequence Processing method for online and unsupervised learning. So, the model is called Hierarchical Bayesian Reservoir Memory (HBRM). HBRM uses a simple stochastic gradient descent learning algorithm to learn and organize common multi-scale spatio-temporal patterns/features of the input signals in a hierarchical structure in an unsupervised manner to provide robust and real-time prediction of future inputs. We suggest HBRM as a real-time high-dimensional stream processing model for the basic brain computations. In this paper we will describe the model and assess its prediction accuracy in a simulated real-world environment.
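As a rough illustration of the reservoir computing component mentioned above (a generic echo state network, not HBRM itself), the sketch below drives a fixed random recurrent reservoir with a toy signal and trains a linear readout for one-step-ahead prediction; the reservoir size, spectral radius, and ridge parameter are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1

def run_reservoir(u_seq):
    """Collect reservoir states for an input sequence (tanh state update)."""
    x, states = np.zeros(n_res), []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# one-step-ahead prediction of a toy signal with a ridge-regression readout
u = np.sin(np.arange(2000) * 0.05)
X, y = run_reservoir(u[:-1]), u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out                                    # predictions of u[t+1]
```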
|
||
A. R. Khanteymoori, M. M. Homayounpour, M. B. Menhaj
|
14th Annual International Conference of the Computer Society of Iran
|
This paper describes the theory and implementation
of dynamic Bayesian networks in the context of speaker
identification. Dynamic Bayesian networks provide a
succinct and expressive graphical language for
factoring joint probability distributions, and we begin
by presenting the structures that are appropriate for
doing speaker identification in clean and noisy
environments. This approach is notable because it
expresses an identification system using only the
concepts of random variables and conditional
probabilities. We present illustrative experiments in
both clean and noisy environments and our
experiments show that this new approach is very
promising in the field of speaker identification.
|
||
A. R. Khanteymoori, M. B. Menhaj, M. M. Homayounpour
|
14th Annual International Conference of the Computer Society of Iran
|
A new structure learning approach for Bayesian
networks (BNs) based on asexual reproduction
optimization (ARO) is proposed in this paper. ARO can
essentially be considered an evolutionary algorithm
that mathematically models the budding mechanism of
asexual reproduction. In ARO, a parent produces a bud
through a reproduction operator; thereafter the parent
and its bud compete to survive according to a
performance index obtained from the underlying
objective function of the optimization problem, and
the fitter individual survives. The proposed method is
applied to real-world and benchmark applications, and
its effectiveness is demonstrated through computer
simulation. The simulation results show that ARO
outperforms the GA: it finds better structures and
converges faster. Finally, the ARO performance is
analyzed statistically.
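A minimal sketch of the ARO loop as described in this abstract, written for a generic continuous objective rather than the paper's discrete BN structure search: a single parent produces a bud via a reproduction operator (here, a Gaussian perturbation, an illustrative assumption), and the fitter of the two survives.

```python
import numpy as np

def aro_minimize(objective, x0, n_iter=1000, step=0.1, rng=None):
    """Parent produces a bud; parent and bud compete; the fitter survives."""
    rng = rng or np.random.default_rng(0)
    parent = np.asarray(x0, float)
    parent_cost = objective(parent)
    for _ in range(n_iter):
        bud = parent + step * rng.standard_normal(parent.shape)  # reproduction
        bud_cost = objective(bud)
        if bud_cost < parent_cost:          # competition: fitter individual wins
            parent, parent_cost = bud, bud_cost
    return parent, parent_cost

# usage: minimize the sphere function
best_x, best_f = aro_minimize(lambda x: float(np.sum(x ** 2)), np.ones(5))
```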
|