Expert Finding by Capturing Organisational Knowledge from Legacy Documents
Organisations capitalise on their best knowledge through the improvement of shared expertise, which leads to a higher level of productivity and competency. The recognition of the need to foster the sharing of expertise has led to the development of expert finder systems that hold pointers to experts who possess specific knowledge in organisations. This paper discusses an approach to locating an expert through the application of information retrieval and analysis processes to an organisation’s existing information resources, with specific reference to the engineering design domain. The approach taken was realised through an expert finder system framework. It enables the relationships of heterogeneous information sources with experts to be factored into the modelling of individuals’ expertise. These valuable relationships are typically ignored by existing expert finder systems, which focus only on how documents relate to their content. The developed framework also provides an architecture that can be easily adapted to different organisational environments. In addition, it allows users to access the expertise recognition logic, giving them greater trust in systems implemented using this framework. The framework was applied to a real-world application and evaluated within a major engineering company.
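As a rough illustration of factoring document–expert relationships into expertise modelling, the sketch below scores experts by combining a document's relevance to a query with the strength of the expert's relationship to that document. All names, relationship types and weights are hypothetical and are not taken from the paper's framework.

```python
# Hypothetical sketch: rank experts for a query by combining document relevance
# with the strength of each expert's relationship to the document.
from collections import defaultdict

# Relationship weights are illustrative assumptions, not values from the paper.
RELATION_WEIGHTS = {"author": 1.0, "reviewer": 0.6, "referenced_contact": 0.3}

def rank_experts(query_terms, documents, links):
    """documents: {doc_id: set_of_terms}; links: [(expert, doc_id, relation)]."""
    # Simple term-overlap relevance; a real system would use an IR engine.
    relevance = {
        doc_id: len(query_terms & terms) / max(len(query_terms), 1)
        for doc_id, terms in documents.items()
    }
    scores = defaultdict(float)
    for expert, doc_id, relation in links:
        scores[expert] += relevance.get(doc_id, 0.0) * RELATION_WEIGHTS.get(relation, 0.1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_experts(
    {"turbine", "blade", "fatigue"},
    {"d1": {"turbine", "blade", "stress"}, "d2": {"gearbox", "fatigue"}},
    [("alice", "d1", "author"), ("bob", "d1", "reviewer"), ("bob", "d2", "author")],
))
```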
Transparent authentication methodology in electronic education
In the context of online assessment in e-learning, a problem arises when a student taking an exam wishes to cheat by handing over personal credentials to someone else to take their place. Another problem is that there is no method for signing digital content as it is being produced in a computerised environment. Our proposed solution is to digitally sign the participant’s work by embedding voice samples in the transcript paper at regular intervals. In this investigation, we have demonstrated that a transparent steganographic methodology provides an innovative and practical solution for achieving continuous authentication in an online educational environment, through the successful insertion and extraction of audio digital signatures.
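A minimal sketch of the kind of scheme described, under the assumption that the signature is a hash of a captured voice sample hidden in the transcript text with zero-width characters (a common text-steganography device; the paper does not specify this particular embedding). Function names and the sample are illustrative.

```python
# Assumed scheme, not the paper's exact method: embed a hash of a captured voice
# sample into a text transcript using zero-width characters, then extract it
# again to verify continuous authorship at regular intervals.
import hashlib

ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / non-joiner encode bits 0 / 1

def embed_signature(transcript: str, voice_sample: bytes) -> str:
    digest = hashlib.sha256(voice_sample).digest()        # stand-in for a signed sample
    bits = "".join(f"{byte:08b}" for byte in digest)
    marker = "".join(ZW0 if b == "0" else ZW1 for b in bits)
    return transcript + marker                            # hidden, visually unchanged

def extract_signature(stego_text: str) -> bytes:
    bits = "".join("0" if ch == ZW0 else "1" for ch in stego_text if ch in (ZW0, ZW1))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

sample = b"raw PCM audio captured at minute 5"
stego = embed_signature("Answer to question 3: ...", sample)
assert extract_signature(stego) == hashlib.sha256(sample).digest()
```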
Towards an integrated model for citizen adoption of E-government services in developing countries: A Saudi Arabia case study
This paper considers the challenges that face the widespread adoption of E-government in developing countries, using Saudi Arabia as our case study. E-government can be defined based on an existing set of requirements. In this paper we define E-government as a matrix of stakeholders: governments to governments, governments to business and governments to citizens, using information and communications technology to deliver and consume services. E-government has been implemented for a considerable time in developed countries. However, E-government services still face many challenges to their implementation and general adoption in developing countries. Therefore, this paper presents an integrated model for ascertaining the intention to adopt E-government services, thereby aiding governments in assessing what is required to increase adoption.
On a Service-Oriented Approach for an Engineering Knowledge Desktop
Increasingly, manufacturing companies are shifting their focus from selling products to providing services. As a result, when designing new products, engineers must increasingly consider the life cycle costs in addition to any design requirements. To identify possible areas of concern, designers are required to consult existing maintenance information from identical products. However, in a large engineering company, the amount of information available is significant and comes in a wide range of formats. This paper presents a prototype knowledge desktop suitable for the design engineer. The Engineering Knowledge Desktop analyses and suggests relevant information from ontologically marked-up heterogeneous web resources. It is designed using a Service-Oriented Architecture, with an ontology to mediate between Web Services. It has been delivered to the user community for evaluation.
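As an illustration of consulting ontologically marked-up resources, the sketch below queries a tiny RDF graph for maintenance reports concerning a design component, using rdflib and SPARQL. The namespace, classes and properties (ex:MaintenanceReport, ex:concernsComponent, ex:severity) are invented for the example and are not the Engineering Knowledge Desktop's actual ontology.

```python
# Illustrative only: find maintenance documents relevant to a design concept
# in an ontologically marked-up store.
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/eng#> .
ex:report42 a ex:MaintenanceReport ;
    ex:concernsComponent ex:CombustorLiner ;
    ex:severity "high" .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

query = """
PREFIX ex: <http://example.org/eng#>
SELECT ?report ?severity WHERE {
    ?report a ex:MaintenanceReport ;
            ex:concernsComponent ex:CombustorLiner ;
            ex:severity ?severity .
}
"""
for report, severity in g.query(query):
    print(report, severity)   # candidate documents to surface to the designer
```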
Data Mining to Support Engineering Design Decision
The design and maintenance of an aero-engine generates a significant amount of documentation. When designing new engines, engineers must draw on knowledge gained from the maintenance of existing engines to identify possible areas of concern. Firstly, this paper investigates the use of advanced business intelligence techniques to solve the problem of knowledge transfer from maintenance to design of aero-engines. Based on data availability and quality, various models were deployed. An association model was used to uncover hidden trends among parts involved in maintenance events. Classification techniques comprising various algorithms were employed to determine the severity of events. Causes of high-severity events that lead to major financial loss were traced with the help of summarisation techniques. Secondly, this paper compares and evaluates the business intelligence approach to solving the problem of knowledge transfer with solutions available from the Semantic Web. The results obtained demonstrate a compelling need for data mining support on RDF/OWL-based warehoused data.
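For a flavour of the association-model step, the sketch below mines co-occurring parts from a handful of toy maintenance events using apriori and association rules from mlxtend. The part names, support and confidence thresholds are illustrative assumptions, not values from the study.

```python
# Hedged sketch: association-rule mining over toy maintenance events.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

events = [  # each maintenance event lists the parts it involved
    ["fuel_pump", "seal", "gasket"],
    ["fuel_pump", "seal"],
    ["bearing", "seal"],
    ["fuel_pump", "gasket"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(events).transform(events), columns=te.columns_)

itemsets = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```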
Sensitivity and parameter-estimation precision for alternate LISA configurations
We describe a simple framework to assess the LISA scientific performance (more specifically, its sensitivity and expected parameter-estimation precision for prescribed gravitational-wave signals) under the assumption of failure of one or two inter-spacecraft laser measurements (links) and of one to four intra-spacecraft laser measurements. We apply the framework to the simple case of measuring the LISA sensitivity to monochromatic circular binaries, and the LISA parameter-estimation precision for the gravitational-wave polarization angle of these systems. Compared to the six-link baseline configuration, the five-link case is characterized by a small loss in signal-to-noise ratio (SNR) in the high-frequency section of the LISA band; the four-link case shows a reduction by a factor of sqrt(2) at low frequencies, and by up to ~2 at high frequencies. The uncertainty in the estimate of polarization, as computed in the Fisher-matrix formalism, also worsens when moving from six to five, and then to four links: this can be explained by the reduced SNR available in those configurations (except for observations shorter than three months, where five and six links do better than four even with the same SNR). In addition, we prove (for generic signals) that the SNR and Fisher matrix are invariant with respect to the choice of a basis of TDI observables; rather, they depend only on which inter-spacecraft and intra-spacecraft measurements are available.
Comment: 17 pages, 4 EPS figures, IOP style, corrected CQG version
The 'Parekh Report' - national identities without nations and nationalism
‘Multiculturalists’ often advocate national identities. Yet few study the ways in which ‘multiculturalists’ do so, and in this article I will help to fill this gap. I will show that the Commission for Multi-Ethnic Britain’s report reflects a previously unnoticed way of thinking about the nature and worth of national identities that the Commission’s chair, and prominent political theorist, Bhikhu Parekh, had been developing since the 1970s. This way of thinking will be shown to avoid the questionable ways in which conservative and liberal nationalists discuss the nature and worth of national identities while offering an alternative way to do so. I will thus show that a report that was once criticised for the way it discussed national identities reflects how ‘multiculturalists’ think about national identities in a distinct and valuable way that has gone unrecognised.
Studying stellar binary systems with the Laser Interferometer Space Antenna using Delayed Rejection Markov chain Monte Carlo methods
Bayesian analysis of LISA data sets based on Markov chain Monte Carlo methods has been shown to be a challenging problem, in part due to the complicated structure of the likelihood function, consisting of several isolated local maxima that dramatically reduce the efficiency of the sampling techniques. Here we introduce a new fully Markovian algorithm, a Delayed Rejection Metropolis-Hastings Markov chain Monte Carlo method, to efficiently explore these kinds of structures, and we demonstrate its performance on selected LISA data sets containing a known number of stellar-mass binary signals embedded in Gaussian stationary noise.
Comment: 12 pages, 4 figures, accepted in CQG (GWDAW-13 proceedings)
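The sketch below shows one delayed-rejection Metropolis-Hastings step with symmetric Gaussian proposals: a bold first-stage jump (useful for hopping between isolated maxima) followed, on rejection, by a more cautious second-stage proposal accepted with the delayed-rejection ratio that preserves detailed balance. Step sizes, the toy bimodal target and all names are illustrative assumptions, not the paper's implementation.

```python
# Minimal delayed-rejection Metropolis-Hastings step (symmetric Gaussian proposals).
import numpy as np

def dr_mh_step(x, log_post, sigma1, sigma2, rng):
    lp_x = log_post(x)

    # Stage 1: bold proposal, standard Metropolis acceptance.
    y1 = x + sigma1 * rng.standard_normal(x.shape)
    lp_y1 = log_post(y1)
    a1 = min(1.0, np.exp(lp_y1 - lp_x))
    if rng.random() < a1:
        return y1

    # Stage 2: cautious proposal, accepted with the delayed-rejection ratio
    # that preserves detailed balance (Tierney & Mira; Haario et al.).
    y2 = x + sigma2 * rng.standard_normal(x.shape)
    lp_y2 = log_post(y2)
    a1_rev = min(1.0, np.exp(lp_y1 - lp_y2))                  # alpha_1(y2 -> y1)
    if a1_rev >= 1.0:
        return x                                              # numerator vanishes
    log_q1_fwd = -np.sum((y1 - x) ** 2) / (2 * sigma1 ** 2)   # q1(x  -> y1)
    log_q1_rev = -np.sum((y1 - y2) ** 2) / (2 * sigma1 ** 2)  # q1(y2 -> y1)
    log_a2 = (lp_y2 + log_q1_rev + np.log(1 - a1_rev)) \
             - (lp_x + log_q1_fwd + np.log(1 - a1))
    if rng.random() < np.exp(min(0.0, log_a2)):
        return y2
    return x

# Toy bimodal target to mimic isolated likelihood maxima.
rng = np.random.default_rng(0)
log_post = lambda th: np.logaddexp(-0.5 * np.sum((th - 3) ** 2),
                                   -0.5 * np.sum((th + 3) ** 2))
chain = [np.zeros(2)]
for _ in range(5000):
    chain.append(dr_mh_step(chain[-1], log_post, sigma1=4.0, sigma2=0.5, rng=rng))
```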
LISA Data Analysis using MCMC methods
The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50,000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analyses and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we super-cool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions.
Comment: 14 pages, 7 figures
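The sketch below illustrates the simulated-annealing idea in isolation: a Metropolis-Hastings sampler whose likelihood is tempered by an inverse-temperature schedule that starts hot, settles at beta = 1 for sampling, and then "super-cools" (beta > 1) to concentrate on the maximum-likelihood point. The schedule, target and prior are made-up assumptions for illustration; the actual analysis uses the F-statistic likelihood described above.

```python
# Likelihood tempering ("heat, then cool, then super-cool") for a random-walk MH sampler.
import numpy as np

def tempered_mh(log_like, log_prior, theta0, n_steps, step, beta_schedule, rng):
    theta = np.asarray(theta0, dtype=float)
    ll = log_like(theta)
    for n in range(n_steps):
        beta = beta_schedule(n)                    # inverse "temperature" on the likelihood
        prop = theta + step * rng.standard_normal(theta.shape)
        ll_prop = log_like(prop)
        log_a = beta * (ll_prop - ll) + log_prior(prop) - log_prior(theta)
        if np.log(rng.random() + 1e-300) < log_a:
            theta, ll = prop, ll_prop
    return theta

# Example schedule: hot start (beta < 1), standard sampling (beta = 1),
# then super-cooling (beta > 1) to home in on the maximum-likelihood point.
def schedule(n, n_heat=2000, n_sample=6000):
    if n < n_heat:
        return 0.1 + 0.9 * n / n_heat
    if n < n_heat + n_sample:
        return 1.0
    return 1.0 + 0.01 * (n - n_heat - n_sample)

rng = np.random.default_rng(1)
log_like = lambda th: -0.5 * np.sum((th - np.array([1.0, -2.0])) ** 2)
log_prior = lambda th: 0.0                         # flat prior over the search region
best = tempered_mh(log_like, log_prior, [0.0, 0.0], 12000, 0.3, schedule, rng)
```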
Can Modus Vivendi Save Liberalism from Moralism? A Critical Assessment of John Gray’s Political Realism
This chapter assesses John Gray’s modus vivendi-based justification for liberalism. I argue that his approach is preferable to the more orthodox deontological or teleological justificatory strategies, at least because of the way it can deal with the problem of diversity. But I then show that this is not good news for liberalism, for grounding liberal political authority in a modus vivendi undermines liberalism’s aspiration to occupy a privileged normative position vis-à-vis other kinds of regimes. So modus vivendi can save liberalism from moralism, but at a cost many liberals will not be prepared to pay.
