Amitkumar Baburao Ranit1 and Sangita R. Gudadhe2, 1Assistant Professor, Department of Civil Engineering, Prof Ram Meghe College of Engineering & Management, Badnera, Amravati, Maharashtra, India, 2Assistant Professor, Department of Computer Science & Engineering, Sipna College of Engineering & Technology, Amravati, Maharashtra, India
Flood events and the damage they cause are increasing in number, harming human lives, slowing economic growth, destroying property, and degrading the environment. Flood losses reduce the assets of households, communities, and societies through the destruction of standing crops, dwellings, infrastructure, machinery, and buildings, apart from the tragic loss of life. Early warning is a key element of disaster risk reduction; the basic benefit of an early flood warning system is an increased lead time for warnings at locations subject to flood risk. Because the degree and scale of flood hazards are increasing massively with the changing climate, we forecast floods using machine learning (ML) methods, an application of artificial intelligence (AI) that gives systems the ability to learn and improve automatically from experience without being explicitly programmed. To build the machine learning prediction model, historical records of flood events are used together with real-time cumulative data from a number of rain gauges or other sensing devices for various return periods. The data sources are traditionally rainfall and water level, measured either by ground rain gauges or by relatively new satellite remote sensing technologies. Flood prediction is carried out using an ARIMA model and a Raspberry Pi model; these two models help us predict whether a flood event will occur. To apply the models, we developed code in the Python language; the models take real-time data as input and provide the predicted outflow as output. The use of the ARIMA model and Raspberry Pi techniques improves the quality of the dataset, which has contributed greatly to forecast accuracy, dramatically increases the generalization ability of the models to issue timely signals, and decreases the uncertainty of prediction.
Developing early warning systems can contribute to mitigating flood risks and saving lives through effective utilization of these new models.
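The ARIMA family of forecasters used above can be illustrated with a minimal pure-Python AR(1) sketch: fit a first-order autoregression to gauge readings, roll it forward, and flag a warning when a predicted level crosses a threshold. The model order, the threshold, and the `water_levels` readings below are hypothetical placeholders, not the paper's actual configuration or data.

```python
# Minimal AR(1) forecaster, a simplified member of the ARIMA family.

def fit_ar1(series):
    """Estimate c and phi for x[t] = c + phi * x[t-1] by least squares."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    c = my - phi * mx
    return c, phi

def forecast(series, steps, threshold):
    """Roll the AR(1) model forward and flag a warning if any
    predicted level exceeds the flood threshold."""
    c, phi = fit_ar1(series)
    level = series[-1]
    preds = []
    for _ in range(steps):
        level = c + phi * level
        preds.append(level)
    return preds, any(p > threshold for p in preds)

water_levels = [2.0, 2.1, 2.3, 2.6, 3.0, 3.5, 4.1]  # hypothetical gauge readings (m)
preds, warn = forecast(water_levels, steps=3, threshold=5.0)
print(preds, warn)
```

A production system would of course use a full ARIMA implementation with differencing and moving-average terms, but the rolled-forward prediction loop has the same shape.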
Hydrological Data, Metrological Data, ANN, ARIMA, Machine Learning, Artificial Intelligence, Raspberry Pi Model.
Husaelddin Balla, Sarah Jane Delaney and Marisa Llorens Salvador, School of Computer Science, Technological University Dublin, Dublin, Ireland
Recently, the majority of sentiment analysis researchers have focused on aspect-based sentiment analysis because it delivers in-depth analysis with more accurate results compared with traditional sentiment analysis. In this paper, we propose an interactive learning approach to tackle a target-based sentiment analysis task for the Arabic language. The proposed IA-LSTM model uses an interactive attention-based mechanism to force the model to focus on different parts (targets) of a sentence. We investigate the ability to use targets, left context, and right context, and model them separately to learn their own representations via interactive modeling. We evaluated our model on two different datasets: an Arabic hotel review dataset and an Arabic book review dataset. The results demonstrate the effectiveness of this interactive modeling technique, which enhances the model's performance.
Natural Language Processing, Sentiment Analysis, Arabic SA, Deep Learning, Opinion Mining.
Mozhgan Saeidi, Department of Computer Science, Dalhousie University, Canada
Word sense disambiguation is one of the challenging tasks in Natural Language Processing (NLP). This paper treats the text ambiguity problem as a classification task and presents a model for it using transformers. Transformers have shown improvements in the most recent solutions for different NLP tasks. In this text ambiguity task, we seek the correct meaning for each word in the text. We show that using pre-trained transformer models improves the accuracy of the architecture. Our model decides how well the target word in a context corresponds to the given sense. Our experiments also show how the data augmentation technique can improve performance on the NLP task.
Word Sense Disambiguation, Text Ambiguity, Transformers, Pre-trained Language Models.
Afef Saihi, Department of Industrial Engineering, American University of Sharjah, Sharjah, UAE
Autism spectrum disorder (ASD) is a neurodevelopmental disorder associated with challenges in communication, social interaction, and repetitive behaviors. Getting a clear diagnosis for a child is necessary for starting early intervention and having access to therapy services. However, many barriers hinder the screening of these children for autism at an early stage, which might further delay access to therapeutic interventions. One promising direction for improving the efficiency and accuracy of ASD detection in toddlers is the use of machine learning techniques to build classifiers that serve the purpose. This paper contributes to this area and uses the data developed by Dr. Fadi Fayez Thabtah to train and test various machine learning classifiers for early ASD screening. Based on various attributes, three models have been trained and compared: a C4.5 decision tree, a random forest, and a neural network. All three models achieved very good accuracy on the testing data; however, the neural network outperformed the other two. This work contributes to the early screening of toddlers by helping identify those who have ASD traits and should pursue formal clinical diagnosis.
Autism Spectrum Disorder, Screening, Machine learning, Decision tree, Random forest, Neural network, Classifier, Accuracy.
Vittalis Ayu, Department of Informatics, Sanata Dharma University, Indonesia
Mobile crowdsensing has become a new paradigm that enables citizens to participate in the sensing process by voluntarily gathering data from their smartphones to accomplish a given task. However, performing the sensing task generates lots of data, resulting in varying quality of the sensed data and high sensing cost in terms of resource consumption. This has become a major concern in mobile crowdsensing, as the mobile nodes that act as crowdsensors have limited resources. Moreover, an opportunistic mobile crowdsensing mechanism does not require user involvement, so the data collection process must be autonomous and intelligent enough to sense the data in the right context. This means that context-awareness is also important in opportunistic crowdsensing to maintain sensed data quality. In this mini review, we revisit the possibilities for enhancing the mobile crowdsensing mechanism. We argue that improving the data collection process, including context-awareness, can optimize in-node data availability and sensed data quality. We also argue that optimizing inter-node data exchange mechanisms will increase the quality of the in-node data. Furthermore, smartphones, being tied to their human owners, reflect the physical and social behavior of humans. We believe that considering contexts such as human social relations and human mobility patterns can benefit the optimization strategies.
Mobile Crowdsensing, Data Quality, Context-awareness, Social Relation, Mobility Pattern.
Batool Madani, Department of Industrial Engineering, American University of Sharjah, Sharjah, UAE
The food delivery business is shifting to online platforms to reach a wider range of customers. This transformation has gained high attention in recent years because of the availability of customizable ordering experiences, easy payment methods, fast delivery, and other features. As a result, the competition between online food delivery providers has intensified, and they must have a better understanding of their customers' needs, allowing them to predict future purchasing decisions. The incorporation of predictive models can help providers understand their customers. In this study, a dataset collected from 388 online food delivery consumers in Bangalore, India was used to predict their purchasing decisions. Four prediction models are considered: CART and C4.5 decision trees, random forest, and rule-based classifiers, and their accuracy in providing the correct class label is evaluated. The findings show that all models perform similarly, but C4.5 outperforms them all with a prediction accuracy of 91.67%.
Food Delivery Industry, Purchasing Prediction, Machine Learning, Decision Trees, Random Forest, Rule-Based Classifier.
Mozhgan Saeidi, Norbert Zeh, Evangelos Milios, Department of Computer Science, Canada
The global effort to control situations related to COVID-19 depends on recognizing locations where people are in need. If we can identify the different groups of people living under overwhelming conditions and seeking help, it becomes easier to assist them, from the most urgent cases to the least urgent. In this work, we analyze groups of people in need in each geographic location based on different features. Our system ranks the recognized groups in each location from most to least urgent and visualizes this ranking. The ranking helps community planning services assist people and solve problems caused by the current pandemic. Our approach is data-adaptable, meaning it can use datasets from various locations to generate its ranking. The dataset used in this work covers the city of Montreal because of its availability. Our next contribution is classifying the features of these groups for better estimation of the required help. For this classification task, we used binary and multi-class classification, choosing the suitable classifier based on the data feature, e.g., the former for attributes like gender and the latter for features like age group.
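The ranking step described above can be sketched as a score-and-sort over groups of people in need. The groups, the feature names, and the scoring weights below are hypothetical illustrations, not the paper's actual features or data.

```python
# Sketch of the ranking step: groups at each location are ordered from
# most to least urgent by a weighted score (hypothetical weights).

def urgency_score(group):
    # Heavier weight on severity of need, lighter weight on group size.
    return 0.7 * group["severity"] + 0.3 * group["size"] / 100

def rank_groups(groups):
    """Return groups sorted from most urgent to least urgent."""
    return sorted(groups, key=urgency_score, reverse=True)

groups = [
    {"name": "A", "severity": 0.9, "size": 40},
    {"name": "B", "severity": 0.4, "size": 90},
    {"name": "C", "severity": 0.8, "size": 70},
]
ranked = [g["name"] for g in rank_groups(groups)]
print(ranked)
```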
Covid-19, Geographical location, Longitude, Latitude, Classification, Visualization.
Philipp Bolte1, Ulf Witkowski1 and Rolf Morgenstern2, 1Department of Electronics and Circuit Technology, South Westphalia University of Applied Sciences, Soest, Germany, 2Department of Agriculture, South Westphalia University of Applied Sciences, Soest, Germany
In agriculture, it is becoming more and more important to have detailed data, e.g. about weather and soil quality, not only in large-scale classic crop farming applications but also for urban agriculture. This paper proposes a modular wireless sensor node that can be used in a centralized data acquisition scenario. A centralized approach, in this case multiple sensor nodes and a single gateway or a set of gateways, can be easily installed even without local infrastructure such as a mains supply. The sensor node integrates a LoRaWAN radio module that allows long-range wireless data transmission and low-power battery operation for several months at reasonable module cost. The developed wireless sensor node is an open system with a focus on easy adaptation to new sensors and applications. The proposed system is evaluated in terms of transmission range, battery runtime and sensor data accuracy.
Wireless Sensor Node, LoRa Communication, Real-Time Environmental Monitoring, Urban Agriculture.
Humera Batool1*, Weiyu Li2, Asif Nawaz3, Waheed Yousuf Ramay4 and Lixin Tian5, 1School of Mathematical Sciences, Nanjing Normal University, Nanjing, Jiangsu, China, 2School of Mathematical Sciences, Suzhou University of Science and Technology, Suzhou, China, 3College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China, 4Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Pakistan, 5Center for Energy Development and Environmental Protection, Jiangsu University, Zhenjiang, China, 5Research Centre of Energy-interdependent Behavior and Strategy, Nanjing Normal University, Nanjing, China
In past decades, the application of machine learning to predicting housing prices has been scarce compared with the internet, economy, industry and other fields. House price prediction and forecasting is an arduous task. To carry out this forecasting, a house price prediction model is constructed using a Deep Neural Network (DNN), built with the Keras platform and the Python programming language, together with a decision tree. Mean squared error (MSE) and root mean squared error (RMSE) were used to validate model performance. In this study, we used house prices and their attribute data from 2012 onwards in Nanjing city, the provincial capital of Jiangsu province. As a case study, the two districts Jianye and Xuanwu on Line 2 of the Nanjing metro were selected, with houses at proximities of 500 meters, 500-800 meters and 800-1200 meters respectively. Our results depict that various house attributes and proximity to a metro station have a high positive impact on house prices. The results showed that the DNN is a highly efficient predictor, achieving RMSE of 0.184 and 0.377 for the Xuanwu and Jianye districts respectively. Applying the Deep Neural Network minimized the difference between predicted and actual prices as measured by mean squared error, and applying the decision tree also helped predict housing prices precisely. Our proposed model could facilitate and enhance the use of DNNs in house price prediction problems. In the future, house price prediction could be improved further using machine learning algorithms like random forest and CNNs or RNNs.
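The MSE and RMSE validation metrics used above can be computed directly; the sketch below shows the calculation on hypothetical predicted and actual prices, not the paper's data.

```python
import math

# MSE and RMSE as used to validate the price models.

def mse(actual, predicted):
    """Mean squared error between actual and predicted values."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error: the square root of the MSE."""
    return math.sqrt(mse(actual, predicted))

actual    = [3.2, 2.8, 4.1, 3.9]   # hypothetical prices (e.g. 10k CNY per m^2)
predicted = [3.0, 2.9, 4.3, 3.8]
print(round(mse(actual, predicted), 4), round(rmse(actual, predicted), 4))
```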
House price prediction, Deep Neural Network (DNN), Decision tree, Machine learning.
Oluwatobi Oyinlola1∗, Kayalvizhi Jayavel2 and Didacienne Mukanyiligira1, 1African Center Of Excellence in Internet of Things (ACEIoT), University of Rwanda, Kigali, Rwanda, 2Department of Information Technology, SRM Institute of Science and Technology, India
Digital bracelets and wristwatches, which include functionalities like calculators and unit converters, have been commercially available for decades. Due to technological advancements, they are now equipped with data-transfer facilities and are becoming multifunctional. This paper presents the design of a smart bracelet that can identify pairs of people during handshakes and exchange their contact details with each other. An advanced system was implemented to detect pairs of handshakes happening concurrently, since multiple pairs of people may shake hands at the same time in a large gathering. A peak detection algorithm and a top-k algorithm were used to identify the matching handshakes by processing the data. Communication links were established using Bluetooth Low Energy (BLE) technology to exchange the contact information of people shaking hands with each other. Finally, a smart bracelet with multiple functionalities was manufactured and its performance was evaluated.
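The peak-detection step used to spot the oscillation of a handshake in accelerometer data can be sketched as a simple local-maximum scan. The window, the threshold, and the sample signal below are hypothetical; the paper's exact algorithm and parameters may differ.

```python
# Sketch of threshold-based peak detection on accelerometer magnitudes.

def detect_peaks(signal, threshold):
    """Return indices of local maxima that exceed `threshold`."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] > signal[i - 1]
                and signal[i] >= signal[i + 1]):
            peaks.append(i)
    return peaks

# Hypothetical accelerometer magnitudes (g): a handshake produces
# several repeated peaks as the hands oscillate up and down.
mags = [1.0, 1.1, 2.4, 1.2, 1.0, 2.6, 1.3, 1.1, 2.5, 1.2, 1.0]
peaks = detect_peaks(mags, threshold=2.0)
is_handshake = len(peaks) >= 3   # several oscillations within the window
print(peaks, is_handshake)
```

A top-k comparison across nearby bracelets would then match the timestamps of these peaks to pair up the two parties of each handshake.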
Accelerometer, Bluetooth, Gesture recognition, System-on-chip.
Reena Malik and Sonal Trivedi, Chitkara Business School, Chitkara University, Punjab, India
The retail sector is transforming rapidly, propelled by rising household income, technological advancements, e-commerce, and increased expectations. New innovative technologies are being used by retailers in order to provide a seamless and unique shopping experience to the customer. The Internet of Things is one of the technologies creating competitive advantage in the world of retailing, and smart retailing is now the trend for catering to enhanced customer expectations. This study focuses on the concept and applications of IoT, especially in the field of retailing, using secondary data. The study also focuses on the brands that are using IoT and benefiting in terms of increased market share, customer satisfaction, retention and loyalty.
Automatic retail, Future retail, Internet of things, Digital, digital retailing, customer satisfaction, retail transformation, smart retailing.
David Noever, Samantha E. Miller Noever, PeopleTec, Inc., 4901 Corporate Drive NW, Huntsville, AL, USA
A malicious firmware update may prove devastating to a host of embedded devices that make up the Internet of Things (IoT) and that typically lack the same security verifications now applied to full operating systems. This work converts the binary headers into 1024-pixel thumbnail images to train a deep neural network and distinguish benign and malicious variants. One outcome of this image conversion enables contact with the vast machine learning literature applied to digit recognition (MNIST). Another result indicates that greater than 90% accurate classifications prove possible using image-based convolutional neural networks when combined with transfer learning methods.
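The binary-to-image step described above can be sketched in a few lines: the first 1024 bytes of a firmware header become a 32x32 grayscale thumbnail, one byte per pixel, the same shape used by MNIST-style classifiers. The sample bytes below are synthetic, and the padding choice is an assumption for illustration.

```python
# Sketch: convert a firmware header's leading bytes into a 32x32
# grayscale thumbnail (a list of 32 rows of 32 pixel values, 0-255).

def header_to_thumbnail(blob, side=32):
    """Truncate/zero-pad to side*side bytes and reshape into pixel rows."""
    n = side * side
    data = blob[:n].ljust(n, b"\x00")   # zero-pad headers shorter than 1024 bytes
    return [list(data[r * side:(r + 1) * side]) for r in range(side)]

blob = bytes(range(256)) * 4            # synthetic 1024-byte "header"
img = header_to_thumbnail(blob)
print(len(img), len(img[0]), img[0][0], img[1][0])
```

The resulting 32x32 arrays can then be fed to any image-classification pipeline, which is what makes transfer learning from digit-recognition models possible.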
Neural Networks, Internet of Things, Image Classification, Firmware, MNIST Benchmark.
Emanuel Marx, Chair of Digital Industrial Service Systems, Friedrich-Alexander-Universität Erlangen-Nürnberg, Fuerther Strasse 248, Nuremberg, Germany
Modeling smart services is in danger of becoming a Babylonian jumble of languages. Several resource configurations owned by stakeholders with individual principles form complex service systems, and smart products, acting as boundary objects, unite multiple disciplines, each with its own modeling languages and development approaches. Despite the increasing significance of smart service systems in research and practice, practical means to specify their architecture from both a service and a technical perspective are scarce. This absence of conceptual guidance thwarts an efficient and meaningful design process, and service development and smart product development are often isolated from each other. To fill this void, we propose the domain-specific modeling language ServML. ServML extends the established Systems Modeling Language (SysML), which focuses on complex mechatronic systems, by adding domain-specific service constructs. Our results comprise a meta-model describing the conceptual constructs (abstract syntax), graphical symbols as a possible representation (concrete syntax), and a demonstration of a real-life smart service system based on an online case study. Our results support researchers and practitioners equally in designing industrial smart service systems from both a business and a technological perspective.
Smart service system, Smart product, Domain-specific modeling language, SysML.
Min Guk I. Chi, Bachelor of Business Administration, S P Jain School of Global Management
The premise that Active Queue Management (AQM) is effective in both quantitative and qualitative settings in residential and enterprise networks has repeatedly been established in multiple academic papers and private studies addressing bufferbloat, characterized as excessive latency caused by heavy network utilization. However, awareness and understanding of bufferbloat mitigation are largely absent in the Philippine Internet of Things space, except among enthusiasts willing to take the time to examine the concept and its benefits. Hence, this paper examines possible reasons why AQM is not widely adopted by Philippine consumers and industries for increasing productivity during the COVID-19 pandemic: a lack of basic understanding of bufferbloat and its implications, the complexity of the concept, the know-how required for implementation being far too high, and the lack of perceived benefit among the country's existing telecommunications players.
Active Queue Management, Consumer Adoption, COVID-19, Bufferbloat.
Yew Kee Wong
In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people’s lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships of AI, big data and IoT, as well as the opportunities provided by the applications in various operational domains.
Artificial Intelligence, Big Data, IoT, Digital Transformation Revolution, Machine Learning.
Asaju Christine and Hima Vadapalli, Department of Computer Science, University of the Witwatersrand, Johannesburg, South Africa
A key problem identified in online classes is the lack of direct, timely, and effective communication and feedback to teachers compared with traditional face-to-face classes. Studies have shown that facial emotion expression plays an important role in communication, especially in understanding the learning affects of students. This work considers a deep learning approach to improve on the current setup in the online learning domain through facial emotion expression recognition and estimation of a learner's learning affects. The study leverages facial emotion recognition using a ResNet50 pre-trained CNN for feature extraction and a Long Short-Term Memory (LSTM) network for classifying the emotions using the extended DISFA dataset. The work achieved an accuracy of 95% on the validation dataset, compared with the state-of-the-art approach. It is expected that this will provide feedback to teachers and lead to improvements in the online platform.
Online learning, Deep learning, facial emotion recognition, Learning Affects.
Ali Asghar Anvary Rostamy, Professor of Finance, Tarbiat Modares University, Tehran, Iran
This study measures and compares the capabilities of a statistical ARIMA model and three intelligent models, namely an artificial neural network, a fuzzy neural network, and a genetic algorithm, in predicting stock returns of companies listed on the Tehran Stock Exchange from 2013 to 2019. The three main hypotheses considered in this research are that the forecasting capabilities of the proposed models differ significantly, that the intelligent models outperform the traditional statistical ARIMA model, and that technical analysis makes fewer errors than fundamental analysis in predicting stock returns. The diversified sample of this research consists of 18 manufacturing companies from different industries. Excel, SPSS, and MATLAB software were used to analyse the data. The results indicate that the artificial neural network, fuzzy neural network, and genetic algorithm make fewer errors than the ARIMA method. In other words, all of the proposed intelligent models significantly outperform the unintelligent statistical ARIMA method.
Stock Returns, Forecasting, Intelligent Models, Iranian Stock Market.
Naveen Kumar, Mathematics Division of University Institute of Sciences (UIS), Chandigarh University, Gharuan, Mohali-140413, Punjab, India
The coronavirus epidemic came to light on 31 December 2019, when China notified the World Health Organization of a series of pneumonia cases of unclear origin in Wuhan City, Hubei Province. The novel coronavirus disease (COVID-19) triggered clusters of lethal pneumonia with a clinical appearance extremely close to that of severe acute respiratory syndrome (SARS). COVID-19 is a viral disorder transmissible from an infected person to healthy individuals. COVID-19 causes only minor illness in some individuals but can prove harmful in other cases; therefore, it is very necessary to treat the possibility of infection seriously. COVID-19 has quickly flourished into a global health epidemic that affects a substantial number of people worldwide, as illustrated by the World Health Organization (WHO). Consequently, to stop this outbreak in an environment where no vaccinations are accessible to citizens, awareness of the virus's sensitivity is an efficient means to mitigate the spread of coronavirus. The structural views and evaluations of the novel coronavirus are explored in the early parts of this article. The numerous modes of transmission and social situations are also illuminated to minimize COVID-19. The effects of social distancing are identified as preventive steps against COVID-19 in the last portion of this article.
Corona virus, COVID-19, SARS-CoV-1, SARS-CoV-2, Social distancing.
Esam Alzahrani1,2 and Leon Jololian1, 1Department of Electrical and Computer Engineering, University of Alabama at Birmingham, Birmingham, AL, USA, 2Department of Computer Engineering, Al-Baha University, Alaqiq, Saudi Arabia
Forensic author profiling plays an important role in indicating possible profiles for suspects. Among the many automated solutions recently proposed for author profiling, transfer learning outperforms many other state-of-the-art techniques in natural language processing. Nevertheless, this sophisticated technique has yet to be fully exploited for author profiling. At the same time, whereas current methods of author profiling, all largely based on feature engineering, have spawned significant variation in each model used, transfer learning usually requires a preprocessed text to be fed into the model. Considering the variations in potential preprocessing techniques, we conducted an experimental study that involved applying five such techniques to measure each technique's effect while using the BERT model, chosen for being one of the most-used stock pretrained models. In our five experiments, we found that BERT achieves the best accuracy in predicting the gender of the author when no preprocessing technique is applied.
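Preprocessing pipelines of the kind compared above can be composed from small, interchangeable steps; the three steps below (lowercasing, punctuation stripping, whitespace collapsing) are illustrative stand-ins, not a reproduction of the paper's five techniques.

```python
import re
import string

# Illustrative text-preprocessing steps, composed into a pipeline so that
# individual techniques can be toggled on or off for comparison.

def lowercase(text):
    return text.lower()

def strip_punctuation(text):
    return text.translate(str.maketrans("", "", string.punctuation))

def collapse_whitespace(text):
    return re.sub(r"\s+", " ", text).strip()

def preprocess(text, steps):
    """Apply each preprocessing step in order."""
    for step in steps:
        text = step(text)
    return text

raw = "  Forensic  Author-Profiling,  with BERT!  "
cleaned = preprocess(raw, [lowercase, strip_punctuation, collapse_whitespace])
print(cleaned)
```

Running the same downstream model on the output of each pipeline variant (including the identity pipeline, i.e. no preprocessing) is what allows the per-technique effect to be measured.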
Authorship profiling, NLP, digital forensics, transfer learning.
Imran N. Junejo, Zayed University, Dubai, 19282, U.A.E
We address the problem of Pedestrian Attribute Recognition (PAR) in this paper. Owing to the presence of surveillance cameras in almost all outdoor and indoor public spaces, keeping an eye on pedestrians is a sought-after task with many useful applications. The problem entails recognizing attributes such as age group, clothing style, accessories, footwear style, etc. This is a multi-label problem and challenging even for human observers. We propose using a convolutional neural network (CNN) with trainable Gabor wavelet (TGW) layers. The proposed layers are learnable and adapt to the dataset for better recognition. The proposed multi-branch neural network is a mix of TGW and convolutional layers, and we show its effectiveness on a public dataset.
Gabor Wavelets, Convolutional Neural Networks, Pedestrian Attributes.
Hassan Saleh Mahdi, Department of English, College of Science and Arts, Balqarn, University of Bisha, Bisha, 61922, Saudi Arabia & Hodeida University, Hodeida, Yemen
The integration of the internet into translation creates several opportunities for translators. This study aims at examining the impact of using web-based translation on translating religious texts. The study followed a quasi-experimental design. Sixty students enrolled in the English Department, University of Bisha, Saudi Arabia, participated in this study. The participants were divided randomly into three groups (i.e., a words group, a sentences group and a passages group). The data was collected through a translation test and a questionnaire. The results indicated that web-based translation is more beneficial in translating words than in translating sentences or passages. In addition, web-based translation is more beneficial when words are translated from English into Arabic as well as from Arabic into English. The results from the questionnaire revealed positive attitudes towards using web-based translation in the process of translation.
web-based translation, religious texts, translation, decontextualized translation, contextualized translation.
Anuradha Hewa Siribaddana, Shashipraba Perera, Umesh Rathnayaka and Nethmini De Silva, Department of Software Engineering Sri Lanka Institute of Information Technology, Malabe, Sri Lanka
There are different kinds of recommendation systems available to suggest products to customers across different systems. Recommendation systems (i.e., recommendation engines) are software tools and techniques which provide suggestions to users to support their decision-making processes. In this research paper, we present a system based on machine learning to recommend library books. This is an effective and efficient system to enhance the performance of library readers. Searching for a book in the library can be a time-consuming and tedious task for university students, which could be a reason for students to use the internet for their studies instead of referring to library books. This research work aims at introducing effective solutions for university students to make maximum use of library books for their academics. This method is expected to be helpful for all library users in a university: students, lecturers and all other library users will benefit from the proposed recommendation system. Image processing, natural language processing, data mining techniques and machine learning techniques are proposed to be used in this system. These techniques provide a proper content management system, personal history management, recommendations based on user preferences and given search criteria, and artificial intelligence to capture user plans for book reading, which constitute the main subsystems of the web application. Experimental results indicate logistic regression as the best model trained, reaching a promising testing accuracy of 89.48% in the rating/review system.
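A logistic-regression scorer of the kind reported for the rating/review subsystem can be sketched as a weighted sum passed through a sigmoid. The feature names, weights, and bias below are hypothetical illustrations; the paper trains its model on real library data.

```python
import math

# Minimal logistic-regression scorer for a book recommendation.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_recommend(features, weights, bias):
    """Probability that a user would rate the book positively."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)

features = [0.8, 0.6, 1.0]   # e.g. genre match, past-rating score, availability
weights  = [1.5, 2.0, 0.5]   # hypothetical learned weights
prob = predict_recommend(features, weights, bias=-1.0)
print(prob > 0.5)
```

Books whose predicted probability crosses a chosen cutoff (e.g. 0.5) would be surfaced to the user, ordered by score.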
Machine Learning, Natural Language Processing, Image processing, Optical Character Recognition, Chatbot, Recommendation Systems.
Hosna Ghandeharioun, Department of Electrical and Biomedical Engineering, Khorasan Institute of Higher Education, Mashhad, Iran
Automatic detection of obstructive sleep apnea (OSA) is in great demand. OSA is one of the most prevalent diseases of the current century and an established comorbidity of Covid-19. OSA is characterized by complete or partial breathing pauses during sleep. According to medical observations, if OSA remains unrecognized and untreated, it may lead to both physical and mental complications. The gold standard of OSA detection is the time-consuming and expensive method of polysomnography (PSG). Developing an online home-based method for simple and economic detection of OSA is an effective way to speed up the detection and referral of patients to sleep clinics. In addition, it can serve as a controller for therapeutic/assistive devices. In this paper, several configurations for online OSA detection are proposed. The best configuration uses both ECG and SpO2 signals for feature extraction and MI analysis for feature reduction. Various methods of supervised machine learning are exploited for classification. Finally, to reach the best result, the classifiers most successful in sensitivity and specificity are combined in groups of three members using four different combination methods. The proposed method has advantages such as limited use of biological signals, automatic detection, an online working scheme, and uniform and acceptable performance (over 85%) across all the employed databases. These advantages have not previously been integrated together in published methods.
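One simple member of the classifier-combination family explored above is majority voting over three classifiers' per-epoch labels. The predictions below are hypothetical, not outputs of the paper's trained models, and the paper's four combination methods may differ from this one.

```python
from collections import Counter

# Fuse three classifiers' apnea/normal labels by majority vote.

def majority_vote(*prediction_lists):
    """Per-position majority label across an odd number of classifiers."""
    fused = []
    for labels in zip(*prediction_lists):
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused

clf_a = [1, 0, 1, 1, 0]   # 1 = apnea epoch, 0 = normal (hypothetical)
clf_b = [1, 1, 0, 1, 0]
clf_c = [0, 0, 1, 1, 1]
fused = majority_vote(clf_a, clf_b, clf_c)
print(fused)
```

With an odd number of voters there are no ties, and a vote can outperform each individual classifier whenever their errors fall on different epochs.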
Obstructive Sleep Apnea, Supervised Machine Learning, Feature Reduction, Classifier Combination, Biomedical Signal Processing.
Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China
In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision-making domains.
Artificial Intelligence, Machine Learning, Big Data Analysis.
Asmaa Hakami, Raneem Alqarni and Mahila Almutairi, Department of Computer Science, King Abdulaziz University, Jeddah, Saudi Arabia
Generating poems is not an easy task for humans, so if a computer is able to create a poem, this indicates the development of computer creativity. Many models have been used to create poems, but few of them address the Arabic language, and coherence of meaning and theme between the lines of an entire poem remains unclear. In this paper, character-based LSTM, Markov-LSTM, and pre-trained GPT-2 models were used to create Arabic praise poems, and their results were compared in terms of accuracy of meaning and accuracy of the model. The results of both the Markov-LSTM and the pre-trained GPT-2 were similar in terms of meaning and BLEU score. However, there is a weakness in terms of meaning in the character-based LSTM model because it creates new words.
Arabic Poems, Markov, GPT-2, Deep Neural Networks, Natural Language Processing.
Bencheng Wei, Smith School of Business, Queen’s University, Kingston, Ontario
Understanding the intent behind email/chat between customers and customer service agents has become a crucial problem nowadays due to an exponential increase in the use of the internet by people from different cultures and educational backgrounds. More importantly, the explosion of e-commerce has led to a significant increase in text conversation between customers and agents. In this paper, we propose an approach to mining the conversational intents behind this textual data. Using a customer service dataset, we train unsupervised text representation models using CBOW and Skip-gram, and then develop an intent mapping model that ranks the pre-defined intents based on cosine similarity between sentence embeddings and intent embeddings. Topic-modelling techniques are used to define intents, and domain experts are also involved to interpret the topic-modelling results. With this approach, we can get a good understanding of the user intentions behind the unlabelled customer service textual data.
Intent Mining, Topic Modelling, Natural Language Processing, Text Similarity, Deep Learning.
Shimirwa Aline Valerie and Jian Xu, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
Extractive summarization aims to select the most important sentences or words from a document to generate a summary. Traditional summarization approaches have relied extensively on features manually designed by humans. In this paper, based on a recurrent neural network equipped with the attention mechanism, we propose a data-driven technique. We set up a general framework that consists of a hierarchical sentence encoder and an attention-based sentence extractor. The framework allows us to establish various extractive summarization models and explore them. Comprehensive experiments are conducted on two benchmark datasets, and experimental results show that training extractive models based on Reward Augmented Maximum Likelihood (RAML) can improve the model's generalization capability. We also find that the complicated components of state-of-the-art extractive models do not attain better performance than simpler ones. We hope that our work can give more hints for future research on extractive text summarization.
Extractive summarization, Recurrent neural networks, Attention mechanism, Maximum Likelihood Estimation, Reward Augmented Maximum Likelihood.
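The core of an attention-based sentence extractor like the one described above can be sketched in a few lines: score each encoded sentence against a document-level query vector, softmax the scores, and keep the top-k sentences. The vectors below are hypothetical stand-ins for the hierarchical encoder's outputs:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_select(sentence_vecs, query_vec, k=2):
    """Score each sentence by dot-product attention against a document-level
    query vector, then return the indices of the k highest-weighted sentences,
    kept in original document order."""
    scores = [sum(q * s for q, s in zip(query_vec, vec)) for vec in sentence_vecs]
    weights = softmax(scores)
    ranked = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    return sorted(ranked[:k])

# Hypothetical 2-d sentence encodings; in the paper these come from a
# hierarchical RNN sentence encoder.
sents = [[0.9, 0.1], [0.2, 0.2], [0.8, 0.3], [0.1, 0.9]]
query = [1.0, 0.2]
print(attention_select(sents, query, k=2))  # → [0, 2]
```

The RAML training objective changes how the extractor's parameters are learned, not this selection step.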
Shaolin Hu1,2,*, Wenqiang Jiang2, Jiahui Liang2, Jian Li3, 1Guangdong University of Petrochemical Technology, Maoming 525000, China, 2Xi’an University of Technology, Xi’an 710048, China, 3Norinco Group Test and Measuring Academy, Huayin 714200, China
If a radar is used to track a highly dynamic target, its tracking measurement data may inevitably include outliers. In order to avoid the adverse effects of outliers on target positioning, it is necessary to check the rationality of the radar measurement data. This paper establishes a set of new methods that use the measurement data of a single optical theodolite to verify the rationality of radar measurement data. Using optical theodolite measurement data to verify radar measurement data is a well-established and widely used method, but usually two or more optical theodolites are required to achieve this goal. The difficulty of using the measurement data of a single optical theodolite to verify radar measurement data is that the optical theodolite can only provide two kinds of data (azimuth and elevation), while the radar simultaneously measures three (distance, azimuth and elevation); more importantly, the reference datums of the theodolite and the radar are different. This paper skillfully overcomes the above-mentioned problems and realizes the data rationality test in three different practical cases: in case 1, whether the radar data is normal is unknown; in case 2, the distance in the radar measurement data is known to be normal, but whether the angle measurements are normal is unknown; in case 3, the angle measurements in the radar measurement data are known to be normal, but whether the range measurement is normal is unknown. Simulation results show that the correct rate is more than 95% if the optical theodolite measurement data is of high precision, and that more than 90% accuracy of outlier recognition can still be achieved even if the theodolite measurement data contains small measurement errors.
Rationality Test, Outliers, Theodolite, Radar.
Btissame MAHI and Youssef FAKIR, Laboratory of Information Processing and Decision Support, Department of Computer Science, Faculty of Science and Technology, Sultan Moulay Slimane University, PO Box 523, Béni Mellal, Morocco
Clustering algorithms can be classified as hierarchy-based, density-based, grid-based and model-based. Clustering algorithms have been studied and applied in many different areas, such as the analysis of institutions' academic performance. In this article, we cover partition-based clustering algorithms for land conflict data. The k-means approach is compared to other partition clustering algorithms, namely Partitioning Around Medoids (PAM) and Fuzzy C-Means clustering (FCM). The results of the experiments indicate that the number of clusters and the amount of data can influence the performance of the algorithms.
Clustering, partition, Fuzzy C-Means clustering (FCM), K-Means, Partitioning Around Medoids (PAM).
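Of the three partition algorithms compared above, k-means is the simplest to sketch: alternate between assigning points to their nearest centroid and moving each centroid to its cluster mean (Lloyd's algorithm). The toy 2-feature data below is invented for illustration, not the paper's land-conflict dataset:

```python
def kmeans(points, k, iters=50):
    """Lloyd's algorithm with a simple deterministic initialization
    (real implementations would use k-means++ or random restarts)."""
    step = max(1, len(points) // k)
    centroids = [list(points[i * step]) for i in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = [sum(x) / len(cl) for x in zip(*cl)]
    labels = []
    for p in points:
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(d.index(min(d)))
    return labels, centroids

# Two well-separated toy groups standing in for records projected to 2 features.
data = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [8.0, 8.1], [7.9, 8.0], [8.1, 7.9]]
labels, _ = kmeans(data, k=2)
print(labels)  # first three points share one label, last three the other
```

PAM differs in that centroids are restricted to actual data points (medoids), and FCM in that each point gets a fractional membership in every cluster rather than a hard label.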
Wenxiang Lin and Desheng Liu, Science and Technology on Complex Electronic System Simulation Laboratory, Space Engineering University, Beijing, China
With the improvement of computing power, the shortcomings of genetic mining algorithms have been mitigated, and they have become a research hotspot. The ETM algorithm, however, falls short in noise processing: it cannot mine low-frequency effective paths while removing noise. In response, this paper proposes an ETM fusion algorithm based on log classification, which mines process models for high-frequency and low-frequency logs separately and then fuses them into a complete process model based on the alignment idea. Verification on simulated event log data shows that the improved algorithm achieves more complete process model mining than the original ETM algorithm.
Process mining, ETM, log classification.
Rachid Sabre, Biogéosciences (UMR CNRS/uB 6282), University of Burgundy, 26, Bd Docteur Petitjean, Dijon, France
This work focuses on the symmetric alpha-stable process with continuous time, frequently used in modeling signals with indefinitely growing variance when the spectral measure is mixed: the sum of a continuous measure and a discrete measure. The objective of this paper is to estimate the spectral density of the continuous part from discrete observations of the signal. To this end, we propose a method based on smoothing the observations via the Jackson polynomial kernel, using two spectral windows, taking into account the width of the interval where the spectral density is non-zero, and sampling at periodic instants. This technique avoids the aliasing phenomenon encountered when the estimation is made from discrete observations of a continuous-time process.
Spectral density, stable processes, periodogram, smoothing estimate, aliasing.
L. E. Mendoza1 and J. D. Fernández2, 1Department of Telecommunications, Group GIBUP, University of Pamplona, 2Group on Systems Applied to Industry, Pontifical Bolivarian University, Medellín
The automatic detection of specific areas in medical images using mathematical techniques has been growing significantly, due to the applications it allows. This article presents results in the automatic segmentation of the cerebral corpus callosum in cerebral magnetic resonance imaging using deep learning. 1450 images were used for training, each with a resolution of 512×512. A conditioning stage was developed to modify the contrast of the image, remove irrelevant information and perform pattern extraction using the wavelet transform. The results show the segmentation of the corpus callosum with an accuracy of 99.514%. The system was validated with 415 images.
Deep learning, magnetic resonance image, cerebral corpus callosum, image segmentation.
Petar Prvulović1, Jelena Vasiljevic2 and Dhinaharan Nagamalai3, 1Department of Computer Engineering, School of Computing, Union University, Belgrade, Serbia, 2University of Belgrade, Institute Mihajlo Pupin, 3Kalasalingam Academy of Research and Education, India
This paper explains a method used to detect the presence of impulse noise in a set of scanned documents as part of OCR preprocessing. As the document set is to be processed at large scale, the primary concern of the noise detection method was efficiency within existing project constraints. Following the nature of the noise, the method seeks to detect its presence in document margins. The method works in two stages. The first stage is margin detection, based on color spectrum analysis. The second stage is noise recognition in margin samples, based on a pixel contrast score. The resulting implementation proved efficient both in terms of detection accuracy and algorithmic complexity.
Impulse noise, Noise detection, Margin detection.
Xiaohan Feng1 and Makoto Murakami2, 1Graduate School of Information Sciences and Arts, Toyo University, Kawagoe, Saitama, Japan, 2Dept. of Information Sciences and Arts, Toyo University, Kawagoe, Saitama, Japan
The information explosion makes it easier to ignore information that requires social attention, and news games can make that information stand out. There is also considerable research showing that people are more likely to remember narrative content, and virtual environments can increase the amount of information a person can recall. If these elements are blended together, they may help people remember important information. This research aims to provide directional results for researchers interested in combining VR and narrative, enumerating the advantages and limitations of using text or non-text plot prompts in news games. It also provides hints for the use of virtual environments as learning platforms in news games. The research method is to first develop a theoretical derivation, then create sample news games, and then compare the experimental data across the samples to test the theory. The research compares and analyzes survey data from a VR game that presents its story in non-text format (Group VR), a non-VR game that presents its story in non-text format (Group NVR), a VR game that presents its story in text (Group VRIT), and a non-VR game that presents its story in text (Group NVRIT). This paper describes the experiment. The results show that among the four groups, the means by which subjects remember the most information is a VR news game with a storyline. There is a positive correlation between subjects' experience and confidence in recognizing memories, and empathy is positively correlated with the correctness of memories. In addition, the effects of "VR", "experience", and "presenting a story in text or video" on the percentage of correct answers differed depending on the type of question.
Virtual reality, narratology, news games, interactive.
Xinglong Zhu, Yifan Wang, Danni Ai, Tianyu Fu, Jingfan Fan*, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
Object tracking based on ultrasound image navigation can effectively reduce damage to healthy tissues in radiotherapy. In this study, we propose a deep Siamese network based on feature fusion. While adopting MobileNetV2 as the backbone, we introduce an unsupervised training strategy to enrich the volume of samples. We predict the location of the target with a region proposal network module and refine the results using a non-maximum suppression (NMS)-based postprocessing algorithm. We evaluate our algorithm on the Challenge on Liver Ultrasound Tracking (CLUST) dataset and a self-collected dataset, which demonstrates the motivation for our improvements and the effectiveness of the algorithm.
Ultrasound tracking, Siamese network, Respiratory motion estimation, One-shot learning.
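The NMS postprocessing step mentioned above is a standard greedy procedure; a minimal sketch follows. The boxes and scores are invented toy values, not outputs of the authors' network:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box that overlaps it by more than `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: the near-duplicate of box 0 is suppressed
```

In a region proposal network pipeline, this prunes overlapping candidate windows so that one prediction per target survives.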
Vivek Ramakrishnan1 and D. J. Pete2, 1Research Scholar, Department of Electronics Engineering, Datta Meghe College of Engineering, Sector-3, Airoli, Navi Mumbai - 400708, India, 2Professor and Head, Department of Electronics Engineering, Datta Meghe College of Engineering, Sector-3, Airoli, Navi Mumbai - 400708, India
Combining images with different exposure settings is of prime importance in the field of computational photography. Both transform-domain and filtering-based approaches are possible for fusing multiple exposure images to obtain a well-exposed image. We propose a Discrete Cosine Transform (DCT)-based approach for fusing multiple exposure images. The input image stack is processed in the transform domain by an averaging operation, and the inverse transform is performed on the averaged image to generate the fusion of the multiple exposure images. Our experimental observations lead us to the conjecture that the obtained DCT coefficients are indicators of the well-exposedness, contrast and saturation parameters specified in the traditional exposure fusion approach, and that the averaging corresponds to equal weights assigned to the DCT coefficients in this non-parametric, non-pyramidal approach to fusing the multiple exposure stack.
Discrete, Exposure, Cosine, Fusion, Coefficients, Transform, Contrast, Saturation, Weights.
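The averaging operation described above can be sketched for 1-D scanlines with a hand-rolled DCT-II and its inverse (real images use the 2-D transform). Note that since the DCT is linear, equal-weight averaging of coefficients is equivalent to pixelwise averaging; the transform domain matters once unequal, content-dependent weights are used. The two scanlines below are invented toy values:

```python
import math

def dct(x):
    """Unnormalized DCT-II of a real sequence."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

def idct(X):
    """Exact inverse of the DCT-II above."""
    N = len(X)
    return [(X[0] + 2 * sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) / N
            for n in range(N)]

def fuse(exposures):
    """Average the DCT coefficients of all exposures, then invert."""
    coeffs = [dct(img) for img in exposures]
    avg = [sum(c[k] for c in coeffs) / len(coeffs) for k in range(len(coeffs[0]))]
    return idct(avg)

# Two hypothetical 1-D scanlines from an under- and an over-exposed shot.
under = [10.0, 20.0, 30.0, 40.0]
over = [90.0, 80.0, 70.0, 60.0]
fused = fuse([under, over])
print([round(v, 6) for v in fused])  # → [50.0, 50.0, 50.0, 50.0]
```

A production implementation would use scipy.fft.dct/idct rather than the explicit sums shown here.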
Cem Ata Baykara1, Ilgin Safak2 and Kübra Kalkan3, 1Department of Computer Science, Ozyegin University, Istanbul, Turkey, 2Fibabanka R&D Center, Istanbul, Turkey, 3Department of Computer Science, Ozyegin University, Istanbul, Turkey
The rapid growth of the Internet of Things (IoT) has changed how people perform digital transactions in their everyday lives. Due to the increasing number of diverse and connected devices used in both public and private networks, security and privacy have gained the utmost importance. However, security is often achieved at the expense of efficiency and computational resources, where not all IoT devices are equipped with sufficient computational power. This paper proposes a new lightweight handshake protocol implemented on top of the Constrained Application Protocol (CoAP) that can be used in device discovery, autonomously and securely managing (adding or removing) devices of any computational complexity and ensuring the IoT network security. A Physical Unclonable Function (PUF) is utilized for the session key generation in the proposed handshake protocol. The CoAP server performs real-time device discovery using the proposed handshake protocol and autonomous anomaly detection using machine learning algorithms to ensure the security of the IoT network. IoT devices displaying anomalous behaviour are autonomously blacklisted by the CoAP server and not allowed to join the network or removed from the IoT network. Simulation results show that among the machine learning algorithms, the stacking classifier performs with the highest overall anomaly detection accuracy of 99.98%.
IoT Networks, Network Security, Handshake Protocols, Anomaly Detection, Machine Learning.
Ziqin Huang, Department of Computer Science and Technology, East China Normal University, Shanghai, China
An e-voting system is a system that employs electronic means to cast and count the votes of different voters, in which privacy is the pivotal issue; hence a number of mechanisms and tools have been proposed to detect and mitigate privacy leaks. At present, e-voting systems usually rely on a trusted third party to maintain system operation, such that the security and privacy of the e-voting system immediately disappear once the trusted third party is subverted. To eliminate the security concern posed by trusted third parties, this work proposes a novel decentralized and anonymous e-voting scheme with voter self-verification based on the public Ethereum blockchain. Concretely, there is no trusted third party in the system: each voter can learn the results and verify them by relying on the Ethereum blockchain, where each voter first applies for voting qualification and then broadcasts the voted ballot to the blockchain network. In the system, we use the UDP protocol to achieve anonymous transmission and employ a linkable ring signature to prevent repeated voting. Finally, we give an implementation of the e-voting system and carry out a variety of analyses to demonstrate its correctness and feasibility.
Ethereum, E-Voting, Linkable ring signature.
Symphorien Monsia and Sami Faiz, LTSIRS, University of Tunis El Manar, Tunis 1005, Tunisia
This paper describes HFL4BDA (High Functional Language for Big Data Analytics), a functional high-level query system for interactive big data analytics. The main features of HFL4BDA are: (1) it is based on HiFun, a functional query language, (2) it is multiparadigm and thus integrates several data processing frameworks such as SQL, Hadoop, Spark and Flink, (3) it allows the graphical definition of analytical queries, the automatic transcription of these abstract queries to the low-level frameworks underlying the HFL4BDA and the optimization of analytical queries to improve their execution performance at the physical level using the rewrite method provided by its query language, (4) it is easy to use and is also scalable. What makes our HFL4BDA unique is its highly scalable, multiparadigm architecture with a single dialect or data query language. Our approach is validated with different datasets and 13 representative queries that demonstrate the usability of the query language and evaluate the benefits of query optimization.
Big Data Analytics, Functional Language, High-Level Languages, Query Execution Plans, Query Mapping, Query Optimization, Query Rewriting.
Mr. Kamal Kumar Gola, S. Chakraborty, B. Gupta, Faculty of Engineering, TMU, Moradabad, India
In recent times, underwater acoustic sensor networks (UWSNs) have been proposed to explore underwater resources and to accumulate scientific data from aquatic environments. UWSNs face several challenges, including the void node problem, low bandwidth, the hotspot problem, security issues, finding energy-efficient paths, high propagation delay and high energy consumption. The constraints of the underwater environment create particular challenges for the design of routing algorithms for UWSNs, one of which is the free mobility of sensor nodes with water currents. This, along with asymmetric acoustic propagation characteristics, may lead to network partitioning, leaving one or more nodes unable to connect to the remaining network and consequently unable to report their sensed data. Moreover, forwarding packets efficiently to the surface sink with energy-constrained sensor devices is surely the key challenge faced by UWSNs. This work focuses on open issues and research challenges of UWSNs, serving as a guide for researchers to conduct future research.
Underwater acoustic sensor networks, ocean environment, void node, hotspot problem, energy consumption, delay.