Paper Title | Author(s) | Conference | Abstract |
---|---|---|---|
Hadis Mohseni, Shohreh Kasaei
|
14th Annual International Conference of the Computer Society of Iran
|
Discriminative subspace analysis is a popular approach
for a variety of applications. There is a growing
interest in subspace learning techniques for face
recognition. Principal component analysis (PCA) and eigenfaces are two important subspace analysis methods that have been widely applied in a variety of areas. However, the excessive dimensionality of the data space often causes the curse-of-dimensionality dilemma, high computational cost, and sometimes the singularity problem. In this paper, a new supervised discriminative subspace analysis is presented that encodes each face image as a high-order general tensor. As the face space can be considered a nonlinear submanifold embedded in the tensor space, the Tucker tensor decomposition is used, which effectively decomposes this sparse space. The performance of the proposed method is compared with that of eigenface, Fisherface, tensor LPP, and ORO4×2 on the ORL and Weizmann databases. Experimental results show the superiority of the proposed method.
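
As a rough illustration of the Tucker machinery the abstract relies on (not the authors' full supervised method), the sketch below computes a truncated higher-order SVD, a simple non-iterative form of the Tucker decomposition, on a small image tensor with plain NumPy; the data, ranks, and function names are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move the chosen axis to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product of tensor T with matrix M of shape (r, T.shape[mode])."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from the unfoldings' SVDs, then the core."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_dot(core, U.T, mode)
    return core, factors

# Example: decompose a stack of 10 (placeholder) 32x32 face images
faces = np.random.rand(10, 32, 32)
core, factors = hosvd(faces, ranks=(5, 8, 8))
print(core.shape)   # (5, 8, 8)
```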
|
||
Alborz Moghaddam, Ehsanollah Kabir
|
14th Annual International Conference of the Computer Society of Iran
|
Web access prediction has attracted significant
attention in recent years. Web prefetching and some
personalization systems use prediction algorithms. Most
current applications that predict a user's next web page have an offline component that does the data preparation and an online component that provides personalized content to the users based on their current navigational activities. In this paper, we present an online prediction model that has no offline component, fits in memory, and achieves good prediction accuracy. Our algorithm is based on the LZ78 and LZW algorithms, adapted for modeling user navigation on the web. Our model reduces the computational complexity that is a serious problem in developing online prediction systems. A performance evaluation is presented using real web logs. It shows that our model needs much less memory than the PPM family of algorithms while maintaining good prediction accuracy.
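
The following is a minimal sketch of an LZ78-style trie predictor in the spirit of the model described above; the paper's exact update and prediction rules (and its LZW variant) are not given in the abstract, so the details here are assumptions.

```python
class Node:
    """Trie node: children keyed by page name, plus a visit count."""
    def __init__(self):
        self.children = {}
        self.count = 0

class LZ78Predictor:
    def __init__(self):
        self.root = Node()
        self.current = self.root

    def update(self, page):
        """LZ78-style update: extend the current phrase if possible,
        otherwise add the page as a new phrase and restart at the root."""
        child = self.current.children.get(page)
        if child is None:
            child = Node()
            self.current.children[page] = child
            child.count += 1
            self.current = self.root
        else:
            child.count += 1
            self.current = child

    def predict(self):
        """Predict the most frequent continuation of the current phrase."""
        if not self.current.children:
            return None
        return max(self.current.children.items(),
                   key=lambda kv: kv[1].count)[0]

# Toy navigation session
model = LZ78Predictor()
for page in ["home", "news", "home", "news", "home"]:
    model.update(page)
print(model.predict())   # -> "news"
```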
|
||
Hoda Bahonar, Nasrollah M. Charkari
|
14th Annual International Conference of the Computer Society of Iran
|
In this paper, we propose a method for selecting the symmetry axis of the eye region from two or more candidates. We propose a region-based deformable template matching method built on two newly defined operations: intensity-based 2-clustering and edge shadowing. The results demonstrate the effectiveness of our method for extracting eye, eyebrow, and nose templates. The parameters of these templates can be used as feature vectors in low-bit-rate transmission. Evaluation of the proposed method on an Iranian database shows an accuracy of 99% for feature region extraction and 86% on average for feature template extraction.
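
The abstract names an "intensity-based 2-clustering" operation without defining it; purely as a hedged illustration, the sketch below performs a plain two-cluster (dark/bright) split of a patch's gray levels with one-dimensional 2-means. The edge-shadowing operation and the template matching itself are outside this sketch.

```python
import numpy as np

def intensity_2_clustering(patch, iters=20):
    """Split a gray-level patch into a dark and a bright cluster (1-D 2-means)."""
    v = patch.ravel().astype(float)
    centers = np.array([v.min(), v.max()])   # start from the intensity extremes
    for _ in range(iters):
        labels = np.abs(v[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = v[labels == k].mean()
    return labels.reshape(patch.shape), centers
```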
|
||
M. Valizadeh, M. Komeili, N. Armanfard, E. Kabir
|
14th Annual International Conference of the Computer Society of Iran
|
This paper presents an efficient algorithm for
adaptive binarization of degraded document images.
Document binarization algorithms suffer from poor
and variable contrast in document images. We propose
a contrast-independent binarization algorithm that does not require any parameter setting by the user. Therefore, it can handle various types of degraded document images. The proposed algorithm involves two consecutive stages. In the first stage, independent of the contrast between foreground and background, some parts of each character are extracted; in the second stage, the gray levels of the foreground and background are estimated locally. For each pixel, the average of the estimated foreground and background gray levels is defined as the threshold. In extensive experiments, the proposed binarization algorithm demonstrates superior performance against four well-known binarization algorithms on a set of degraded document images captured with a camera.
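
A minimal sketch of the second stage described above, under the assumption that stage one has already produced a rough mask of character pixels (the `stroke_mask` input is hypothetical): foreground and background gray levels are estimated locally with box filters and each pixel is thresholded at their average.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def stage_two_binarize(gray, stroke_mask, win=75):
    """Per-pixel threshold = average of locally estimated foreground and
    background gray levels (the window size is an illustrative choice)."""
    g = gray.astype(float)
    fg = stroke_mask.astype(float)
    eps = 1e-6
    fg_mean = uniform_filter(g * fg, win) / (uniform_filter(fg, win) + eps)
    bg_mean = uniform_filter(g * (1 - fg), win) / (uniform_filter(1 - fg, win) + eps)
    threshold = (fg_mean + bg_mean) / 2.0
    return g < threshold          # text assumed darker than background
```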
|
||
A. R. Koushki, M. Nosrati Maralloo, C. Lucas, A. Kalhor
|
14th Annual International Conference of the Computer Society of Iran
|
One of the important requirements for operational
planning of electrical utilities is the prediction of
hourly load up to several days, known as Short Term
Load Forecasting (STLF). Considering the effect of its accuracy on system security and on economic aspects, there is ongoing interest in applying new approaches to this task. Recently, neuro-fuzzy modeling has been applied successfully to nonlinear time series prediction in various applications. This paper presents a neuro-fuzzy model for short-term load forecasting. The model is identified through the Locally Linear Model Tree (LoLiMoT) learning algorithm and is compared to a multilayer perceptron and to Kohonen Classification and Intervention Analysis. The models are trained and assessed on load data extracted from the EUNITE network competition.
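
The model family that LoLiMoT identifies is a blend of local linear models weighted by normalized Gaussian validity functions; the sketch below evaluates such a model for one input vector. Shapes and variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def llnf_predict(x, centers, sigmas, weights):
    """Locally linear neuro-fuzzy model output.
        x:       (d,)     input (e.g. lagged loads, temperature, hour)
        centers: (M, d)   centers of the local models
        sigmas:  (M, d)   widths of the Gaussian validity functions
        weights: (M, d+1) [bias, linear coefficients] of each local model
    """
    act = np.exp(-0.5 * np.sum(((x - centers) / sigmas) ** 2, axis=1))
    phi = act / np.sum(act)                      # normalized validity functions
    local = weights[:, 0] + weights[:, 1:] @ x   # each local linear model's output
    return float(phi @ local)                    # blended prediction
```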
|
||
Parastoo Didari, Behrad Babai, Azadeh Shakery
|
14th Annual International Conference of the Computer Society of Iran
|
Text retrieval engines, such as search engines,
always return a list of documents in response to a given
query. Existing evaluations of text retrieval algorithms
mostly use the precision and recall of the returned list of documents as the main quality measures of a search engine. In this paper, we propose a novel approach for comparing the different algorithms adopted by different search engines and evaluating their performance. In our approach, the results of each algorithm are treated as an inter-related set of documents, and the effectiveness of the algorithm is evaluated based on the degree of relation within that set. After verifying the correctness of the evaluation measure by examining the results of two retrieval algorithms, BM25 and pivoted normalization, and comparing these results with an ideal ranking, we compare the results of these algorithms and investigate the impact of major factors such as stemming on the results of the suggested algorithm. The effectiveness of the proposed method is confirmed by the experimental results.
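
The abstract scores a ranking by the degree of relation among the returned documents; as one hedged instantiation of that idea, the sketch below scores a result set by the average pairwise cosine similarity of its documents' term vectors. The paper's actual relatedness measure is not specified in the abstract, so this is only an assumption.

```python
import numpy as np

def coherence_score(doc_vectors):
    """Average pairwise cosine similarity of the returned documents.
    Expects at least two documents, each as a term-weight vector."""
    V = np.asarray(doc_vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    sims = V @ V.T
    n = len(V)
    return (sims.sum() - n) / (n * (n - 1))      # mean of off-diagonal entries
```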
|
||
A. Shirvani, H. Chegini, S. Setayeshi, C. Lucas
|
14th Annual International Conference of the Computer Society of Iran
|
Polynomials are one of the most powerful functions
that have been used in many fields of mathematics
such as curve fitting and regression. Low order
polynomials are desired for their smoothness, good local approximation, and interpolation. Being smooth, they can locally approximate almost any differentiable function. This means that where linear functions fail in approximation (e.g., where the first-order term of the Taylor expansion vanishes), polynomial functions can be used for local approximation, so that better estimates are achieved at extrema. In this paper, the application of polynomial kernel functions in locally linear neurofuzzy models is presented. Using polynomial kernels in the local models yields better local approximations in the prediction of chaotic time series such as Mackey-Glass and enhances the capability of the neurofuzzy network.
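
As a small illustration of replacing the purely linear local models with polynomial ones, the sketch below builds a second-order feature expansion that a local model could be fitted on; the paper's exact kernel construction is an assumption left open here.

```python
import numpy as np

def poly2_features(x):
    """[1, x_1..x_d, x_i*x_j for i <= j]: second-order expansion of an input."""
    x = np.asarray(x, dtype=float)
    quad = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate(([1.0], x, quad))

# A local model then becomes w @ poly2_features(x) instead of w @ [1, x]
print(poly2_features([2.0, 3.0]))   # [1. 2. 3. 4. 6. 9.]
```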
|
||
Atefeh Torkaman, Nasrollah Moghaddam Charkari, Mahnaz Aghaeipour
|
14th Annual International Conference of the Computer Society of Iran
|
Classification is a well known task in data mining
and machine learning that aims to predict the class of
items as accurately as possible. A well-planned data classification system makes essential data easy to find. An object is classified into one of the categories, called classes, according to features that separate the classes well. In effect, classification maps an object to its class label. Many studies have used different learning algorithms to classify data, such as neural networks and decision trees.
In this paper, a new classification approach based on cooperative games is proposed. A cooperative game is a branch of game theory that consists of a set of players and a characteristic function specifying the value created by different subsets (coalitions) of the players. To find the classes in the classification process, objects can be imagined as players in a game, and the classes are separated according to the values obtained by these players. This approach can be used to classify a population according to the contributions of its members; in other words, it applies equally to different types of data. In this paper, a special case in medical diagnosis is studied: 304 samples taken from human leukemia tissue, each consisting of 17 attributes that represent different CD markers related to leukemia, were analyzed. These samples were collected from different types of leukemia at the Iran Blood Transfusion Organization (IBTO). The obtained results demonstrate that cooperative games are very promising for direct use in classification.
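
For readers unfamiliar with the cooperative-game machinery referred to above, the sketch below computes exact Shapley values for a tiny set of players from a characteristic function v. How the paper maps leukemia samples and CD-marker attributes onto players and onto v is not described in the abstract, so this is only a generic illustration.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values for a small player set.
    v maps a frozenset of players to the value that coalition creates."""
    players = list(players)
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)   # marginal contribution
            coalition = coalition | {p}
    return {p: val / len(perms) for p, val in phi.items()}

# Toy characteristic function: a coalition's value is the square of its size
print(shapley_values([1, 2, 3], lambda s: len(s) ** 2))   # each player gets 3.0
```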
|
||
M. Komeili, N. Armanfard, M. Valizadeh, E. Kabir
|
14th Annual International Conference of the Computer Society of Iran
|
In this paper we propose a new integration method for
multi-feature object tracking in a particle filter
framework. We divide particles into separate clusters.
All particles within a cluster measure a specific
feature. The number of particles within a cluster is proportional to the reliability of the associated feature. We apply a compensation stage that neutralizes the effect of the mean particle weight within each cluster. This compensation stage balances the concentration of particles around local maxima, so particles are distributed more effectively in the scene. The proposed method provides both effective hypothesis generation and effective hypothesis evaluation. Experimental results over a set of real-world sequences demonstrate the better performance of our method compared with common feature integration methods.
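
A hedged sketch of the two ideas named above: the compensation stage (dividing each particle's weight by its cluster's mean so that no single feature's likelihood scale dominates) and reliability-proportional allocation of the particle budget. Variable names and the exact normalization are assumptions.

```python
import numpy as np

def compensate_and_allocate(weights, cluster_ids, reliabilities, n_particles):
    """Return compensated, renormalized particle weights and per-cluster
    particle counts proportional to each feature's reliability."""
    weights = np.asarray(weights, dtype=float)
    cluster_ids = np.asarray(cluster_ids)
    out = np.empty_like(weights)
    for c in np.unique(cluster_ids):
        idx = cluster_ids == c
        out[idx] = weights[idx] / weights[idx].mean()   # compensation stage
    out /= out.sum()                                    # renormalize all weights
    rel = np.asarray(reliabilities, dtype=float)
    counts = np.round(n_particles * rel / rel.sum()).astype(int)
    return out, counts
```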
|
||
M. Nosrati Maralloo, A. R. Koushki, C. Lucas, A. Kalhor
|
14th Annual International Conference of the Computer Society of Iran
|
Long-term forecasting of load demand is necessary
for the correct operation of electric utilities. There is
an ongoing interest in applying new approaches to this task. Recently, neurofuzzy modeling has been applied successfully to nonlinear time series prediction in various applications. This paper presents a neurofuzzy model for long-term load forecasting. The model is identified through the Locally Linear Model Tree (LoLiMoT) learning algorithm and is compared to a multilayer perceptron and a hierarchical hybrid neural model (HHNM). The models are trained and assessed on load data extracted from a North American electric utility.
|
||
N. Armanfard, M. Valizadeh, M. Komeili, E. Kabir
|
14th Annual International Conference of the Computer Society of Iran
|
In this paper we propose a new approach for text
region extraction in camera-captured document
images. The Texture-Edge Descriptor (TED) is utilized for text region extraction. TED is an 8-bit binary number whose bits are structural. These structural bits, together with the special characteristics of text regions in document images, make TED an appropriate descriptor for text region extraction. Applying the well-known water flow method to the text regions extracted by TED results in fast, good-quality document image binarization.
Experimental results demonstrate the effectiveness of
our method for text region extraction and document
image binarization.
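
The abstract does not define TED's eight structural bits; purely as an illustration of an 8-bit neighborhood code of that kind, the sketch below sets one bit per 8-neighbor whenever the intensity step toward that neighbor exceeds a threshold. The actual bit definition of TED is an assumption left open here.

```python
def ted_like_descriptor(gray, y, x, t=10):
    """8-bit code for pixel (y, x) of a 2-D gray image; (y, x) must not lie
    on the image border. One bit per 8-neighbor, set when the intensity
    difference toward that neighbor exceeds threshold t."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if abs(int(gray[y + dy, x + dx]) - int(gray[y, x])) > t:
            code |= 1 << bit
    return code
```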
|
||
M. Valizadeh, M. Komeili, E. Kabir, N. Armanfard
|
14th Annual International Conference of the Computer Society of Iran
|
In this paper, we present a novel hybrid algorithm
for binarization of badly illuminated document images.
This algorithm locally enhances the document image
and makes the gray levels of text and background
pixels separable. Afterward, a simple global binarization algorithm binarizes the enhanced image. The enhancement process is a novel method that uses a separate transformation function to map the gray level of each pixel into a new domain. For each pixel, the transformation function is determined using the gray levels of its neighboring pixels. The proposed binarization algorithm is robust to a wide variety of degraded document images. Evaluation over a set of
degraded document images illustrates the effectiveness
of our proposed binarization algorithm.
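
A minimal sketch of the hybrid scheme described above, assuming a local min-max stretch as the per-pixel transformation (the paper's actual transformation function is not given in the abstract), followed by a single global threshold.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def enhance_then_binarize(gray, win=51, T=0.5):
    """Locally enhance a badly illuminated document image, then apply one
    global threshold. Window size and threshold are illustrative choices."""
    g = gray.astype(float)
    lo = minimum_filter(g, size=win)
    hi = maximum_filter(g, size=win)
    enhanced = (g - lo) / (hi - lo + 1e-6)   # per-pixel local transformation
    return enhanced < T                       # simple global binarization
```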
|
||
M. Valizadeh, N. Armanfard, M. Komeili, E. Kabir
|
14th Annual International Conference of the Computer Society of Iran
|
In this paper, we present a novel hybrid algorithm
for binarization of badly illuminated document images.
This algorithm locally enhances the document image
and makes the gray levels of text and background
pixels separable. Afterward, a simple global binarization algorithm binarizes the enhanced image. The enhancement process is a novel method that uses a separate transformation function to map the gray level of each pixel into a new domain. For each pixel, the transformation function is determined using the gray levels of its neighboring pixels. The proposed binarization algorithm is robust to a wide variety of degraded document images. Evaluation over a set of
degraded document images illustrates the effectiveness
of our proposed binarization algorithm.
|
||
M. Komeili, M. Valizadeh, N. Armanfard, E. Kabir
|
14th Annual International Conference of the Computer Society of Iran
|
In this paper, a fuzzy inference system is designed by which the reliability of features can be measured. The reliability determines the discriminative power of a feature in separating the target from the background. We focus our attention on the design of the membership functions. Based on a rational analysis of the information available in a particle filter-based tracking process, we infer a coarse estimate of the membership functions. This is followed by a fine-tuning stage using a genetic algorithm. Color, edge, texture, and TED features are used in the current work, but extension to a larger number of features is straightforward.
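
As a toy illustration of a fuzzy inference system of the kind described above, the sketch below maps a single separability input in [0, 1] to a reliability score through three trapezoidal membership functions and a weighted-average defuzzification; the actual inputs, rule base, and GA-tuned parameters of the paper are assumptions here.

```python
import numpy as np

def trap_mf(x, a, b, c, d):
    """Trapezoidal membership function (a common choice of shape)."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (d - x) / (d - c + 1e-9)), 0.0, 1.0)

def feature_reliability(separability):
    """Map target/background separability in [0, 1] to a reliability score."""
    low  = trap_mf(separability, -0.1, 0.0, 0.2, 0.4)
    mid  = trap_mf(separability,  0.2, 0.4, 0.6, 0.8)
    high = trap_mf(separability,  0.6, 0.8, 1.0, 1.1)
    w = np.array([low, mid, high])
    # weighted average of the rule consequents (0.1, 0.5, 0.9)
    return float(w @ np.array([0.1, 0.5, 0.9]) / (w.sum() + 1e-9))

print(feature_reliability(0.9))   # -> 0.9 (a highly reliable feature)
```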
|
||
N. Armanfard, M. Komeili, M. Valizadeh, E. Kabir
|
14th Annual International Conference of the Computer Society of Iran
|
Background modeling is one of the most important
parts of visual surveillance systems. Most background models are pixel-based; they extract the detailed shape of moving objects but are very sensitive to non-stationary scenes. In many applications there is no need to detect the detailed shape of moving objects, so some researchers use block-based methods instead of pixel-based ones, as they are less sensitive to local movements. These two methods are complementary to each other. We propose an efficient hierarchical method in which the block-level information is utilized intelligently to improve the efficiency and robustness of the pixel level. Experimental results
demonstrate the effectiveness of the algorithm when
applied in different outdoor and indoor environments.
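
A hedged sketch of the hierarchical idea: a cheap block-level test flags the blocks that differ noticeably from the background model, and the more sensitive pixel-level model is then run only inside those blocks. Block size, threshold, and the simple frame-difference test are illustrative assumptions, not the paper's block model.

```python
import numpy as np

def block_gate(frame, background, block=16, tau=15.0):
    """Return a boolean mask marking the blocks where the pixel-level model
    should run. frame and background are 2-D grayscale arrays of equal size;
    partial border blocks are skipped for brevity."""
    h, w = frame.shape
    diff = np.abs(frame.astype(float) - background.astype(float))
    gate = np.zeros((h, w), dtype=bool)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            if diff[y:y + block, x:x + block].mean() > tau:
                gate[y:y + block, x:x + block] = True
    return gate
```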
|
||
Bahareh Atoufi, Ali Zakerolhosseini, Caro Lucas
|
14th Annual International Conference of the Computer Society of Iran
|
Being able to predict an upcoming seizure can greatly improve the quality of patients' lives, since a prediction system can warn them to avoid risky activities. Here, a locally linear neuro-fuzzy model is used to predict the EEG time series. Subsequently, this model is used in combination with Singular Spectrum Analysis (SSA) for prediction. Afterward, an information-theoretic criterion is used to select a reliable subset of input variables that contain more information about the target signal. A comparison of the three methods shows, on one hand, that SSA enables our prediction model to extract the main patterns of the EEG signal and greatly improves the prediction accuracy. On the other hand, applying channel selection to the model yields more accurate predictions. It is shown that the fusion of certain signals provides more information about the target and considerably improves the prediction ability.
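
A compact sketch of the Singular Spectrum Analysis step mentioned above: embed the series in a trajectory matrix, keep the leading singular components, and reconstruct by diagonal averaging. The window length and the number of retained components are illustrative choices, not the paper's settings.

```python
import numpy as np

def ssa_reconstruct(x, L=50, k=3):
    """Reconstruct the main patterns of series x (length N >= L) from its
    k leading SSA components."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix (L x K)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]                      # rank-k approximation
    rec = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(L):                                    # diagonal (Hankel) averaging
        for j in range(K):
            rec[i + j] += Xk[i, j]
            cnt[i + j] += 1
    return rec / cnt
```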
|
||
Zeinab Zeinalpour Tabrizi, Behrouz Minaei Bidgoli, Mahmud Fathi
|
14th Annual International Conference of the Computer Society of Iran
|
Video processing based on pattern recognition methods and machine vision is an interesting research field that attracts many researchers. In this paper, we propose a novel method for video summarization using a genetic algorithm based on information theory. Our method relies on mutual information for video summarization. This information-theoretic measure provides better results because it captures the inter-frame information. We show that it is a suitable measure for summarizing a video while maintaining its integrity.
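
The inter-frame measure the abstract builds on can be illustrated as the mutual information between the gray-level distributions of two consecutive frames; the sketch below estimates it from a joint histogram. The paper's exact estimator and the GA-based selection of key frames are not reproduced here.

```python
import numpy as np

def frame_mutual_information(f1, f2, bins=64):
    """Mutual information (in bits) between the gray levels of two frames,
    estimated from their joint histogram (8-bit images assumed)."""
    joint, _, _ = np.histogram2d(f1.ravel(), f2.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```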
|
||
Sepideh Jabbari, Hassan Ghassemian
|
14th Annual International Conference of the Computer Society of Iran
|
In this paper, we address the Heart Sound signal
modeling problem. The approach taken is based on
sparse and redundant representations on an
overcomplete dictionary. We apply matching pursuit
(MP) and orthogonal matching pursuit (OMP) on two
sets of normal and pathological phonocardiograms
(PCGs). The dictionary consists of classical Gabor wavelets, i.e., time-frequency atoms that are the product of a sinusoid and a Gaussian window function. The normalized root-mean-square error (NRMSE) was computed between the original and the reconstructed signals. The results show that the OMP method is well suited to the transient and complex properties of PCGs, as it yielded excellent NRMSE values of around 1.61% for normal sounds and 5.19% for pathological murmurs.
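
To make the reconstruction procedure above concrete, here is a plain orthogonal matching pursuit over a generic dictionary, plus one common NRMSE definition; the Gabor atom construction and the exact normalization the authors used are assumptions left open.

```python
import numpy as np

def omp(D, x, n_atoms):
    """Greedy OMP: D has unit-norm atoms as columns. Pick the atom most
    correlated with the residual, then refit all chosen atoms by least squares."""
    residual = x.astype(float).copy()
    chosen = []
    for _ in range(n_atoms):
        chosen.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, chosen], x, rcond=None)
        residual = x - D[:, chosen] @ coeffs
    return chosen, coeffs

def nrmse(x, x_hat):
    """RMS error normalized by the RMS of the original signal (one common
    convention; the paper's normalization may differ)."""
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```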
|
||
Ali Nodehi, Mohamad Tayarani, Fariborz Mahmoudi
|
14th Annual International Conference of the Computer Society of Iran
|
The Quantum Evolutionary Algorithm (QEA) is a novel optimization algorithm that uses a probabilistic representation of solutions and is highly suitable for combinatorial problems such as the knapsack problem. Fractal image compression is a well-known problem in the class of NP-hard problems. Genetic algorithms are widely used for fractal image compression, but QEA has not yet been applied to this kind of problem. This paper uses a novel Functional Sized Population Quantum Evolutionary Algorithm for fractal image compression. Experimental results show that the proposed algorithm has better performance than GA and conventional fractal image compression algorithms.
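
To make the Q-bit representation mentioned above concrete, the sketch below observes (collapses) a small Q-bit individual into a classical bit string; the rotation-gate update and the fractal range/domain coding itself are beyond this sketch.

```python
import numpy as np

def observe(q):
    """A Q-bit individual is a vector of amplitude pairs (alpha, beta) with
    alpha^2 + beta^2 = 1; observation collapses each Q-bit to a classical
    bit, which is 1 with probability beta^2."""
    return (np.random.rand(len(q)) < q[:, 1] ** 2).astype(int)

# A 5-Q-bit individual initialized to the uniform superposition
q = np.full((5, 2), 1 / np.sqrt(2))
print(observe(q))   # e.g. [0 1 1 0 1]
```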
|