549 research outputs found
An Integrated Stereo Vision and Fuzzy Logic Controller for Following Vehicles in an Unstructured Environment
Time series forecasting using a TSK fuzzy system tuned with simulated annealing
In this paper, a combination of a Takagi-Sugeno-Kang (TSK) fuzzy system and simulated annealing is used to predict well-known time series by searching for the best configuration of the fuzzy system. Simulated annealing is used to optimise the parameters of the antecedent and consequent parts of the fuzzy system rules. The results of the proposed method are encouraging, indicating that simulated annealing and fuzzy logic combine well in time series prediction.
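The combination the abstract describes can be sketched in a few lines. Everything below is illustrative: the two-rule TSK system, the Gaussian antecedents, the sine-wave series standing in for the paper's benchmark series, and the cooling schedule are assumptions, not the paper's actual configuration.

```python
import math
import random

random.seed(0)

# Toy target series; a sine wave keeps the sketch self-contained.
series = [math.sin(0.1 * t) for t in range(200)]
pairs = list(zip(series[:-1], series[1:]))

def tsk_predict(params, x):
    """Two-rule first-order TSK system.
    params = [c1, s1, a1, b1, c2, s2, a2, b2]; each rule reads
    IF x is Gaussian(c, s) THEN y = a * x + b."""
    c1, s1, a1, b1, c2, s2, a2, b2 = params
    w1 = math.exp(-((x - c1) / s1) ** 2)   # antecedent firing strengths
    w2 = math.exp(-((x - c2) / s2) ** 2)
    return (w1 * (a1 * x + b1) + w2 * (a2 * x + b2)) / (w1 + w2 + 1e-12)

def mse(params):
    return sum((tsk_predict(params, x) - y) ** 2 for x, y in pairs) / len(pairs)

# Simulated annealing over both antecedent (c, s) and consequent (a, b)
# parameters, as in the paper's setup.
start = [-1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0]
params, cost = list(start), mse(start)
best, best_cost = list(start), cost
temp = 1.0
while temp > 1e-3:
    cand = [p + random.gauss(0, 0.1) for p in params]
    cand[1] = max(abs(cand[1]), 0.05)   # keep Gaussian spreads positive
    cand[5] = max(abs(cand[5]), 0.05)
    c = mse(cand)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if c < cost or random.random() < math.exp((cost - c) / temp):
        params, cost = cand, c
        if c < best_cost:
            best, best_cost = list(cand), c
    temp *= 0.995
```

Because worse moves are sometimes accepted, the current cost is not monotone; tracking the best-so-far configuration is what makes the search useful.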
Real-time evolution of an embedded controller for an autonomous helicopter
In this paper we evolve the parameters of a proportional, integral, and derivative (PID) controller for an unstable, complex and nonlinear system. The individuals of the applied genetic algorithm (GA) are evaluated on the actual system rather than on a simulation of it, thus avoiding the "reality gap". This also removes the need for the formal model identification that implementing a simulator would require. It does, however, call for the GA to be approached in an unusual way, since we must consider aspects that do not arise when an artificially consistent simulator is used for fitness evaluation. Although elitism is used in the GA, the algorithm exhibits no monotonic increase in fitness. Instead, we show that the GA's individuals converge towards more robust solutions.
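The GA-with-elitism loop can be sketched as follows. Since the paper's fitness evaluations run on the real helicopter, a toy first-order plant stands in here; the population size, mutation scheme, gain bounds, and plant model are all assumptions for illustration.

```python
import random

random.seed(1)

def fitness(gains):
    """Score a (Kp, Ki, Kd) triple by tracking a unit setpoint.
    The paper evaluates individuals on the real helicopter; a toy
    first-order plant stands in here so the sketch is runnable."""
    kp, ki, kd = gains
    x, integ, prev_err, cost, dt = 0.0, 0.0, 1.0, 0.0, 0.05
    for _ in range(200):
        err = 1.0 - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        x += (-x + u) * dt                    # toy plant dynamics
        x = max(min(x, 1e6), -1e6)            # guard against unstable gains
        prev_err = err
        cost += min(abs(err), 10.0) * dt      # bounded tracking cost
    return -cost                              # higher is better

# Genetic algorithm with elitism: the best individuals survive unchanged,
# so the elite's fitness never decreases across generations.
pop = [[random.uniform(0.0, 5.0) for _ in range(3)] for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:4]
    children = [
        [min(max(g + random.gauss(0, 0.3), 0.0), 10.0)
         for g in random.choice(elite)]
        for _ in range(len(pop) - len(elite))
    ]
    pop = elite + children

best = max(pop, key=fitness)
```

On the real system the fitness of a fixed individual varies between evaluations, which is the paper's point about non-monotonic fitness even under elitism; the deterministic toy plant above does not reproduce that noise.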
Type-2 fuzzy alpha-cuts
Type-2 fuzzy logic systems make use of type-2 fuzzy sets. To deliver useful type-2 fuzzy logic applications we need to be able to perform meaningful operations on these sets, and these operations should also be practically tractable. However, type-2 fuzzy sets suffer the shortcoming of being complex by definition. Indeed, the third dimension, which is the source of extra parameters, is itself the origin of extra computational cost. The quest for a representation that allows practical systems to be implemented is the motivation for our work. In this paper we define the alpha-cut decomposition theorem for type-2 fuzzy sets, a new representation analogous to the alpha-cut representation of type-1 fuzzy sets and the extension principle. We show that this new decomposition theorem forms a methodology for extending mathematical concepts from crisp sets to type-2 fuzzy sets directly. In the process of developing this theory we also define a generalisation that allows us to extend operations from interval type-2 fuzzy sets, or interval-valued fuzzy sets, to type-2 fuzzy sets. These results will enable more applications of type-2 fuzzy sets by exploiting the parallelism that this research affords.
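For reference, the standard type-1 decomposition theorem that the paper generalises states that a fuzzy set is recoverable from its alpha-cuts:

```latex
% Alpha-cut of a type-1 fuzzy set A on universe X, with membership \mu_A:
A_\alpha = \{\, x \in X : \mu_A(x) \ge \alpha \,\}
% Decomposition (representation) theorem:
\mu_A(x) = \sup_{\alpha \in (0,1]} \alpha \cdot \chi_{A_\alpha}(x),
\qquad \chi_{A_\alpha}(x) =
\begin{cases} 1 & x \in A_\alpha \\ 0 & \text{otherwise} \end{cases}
```

The paper's contribution is the analogous statement where each cut of a type-2 set is itself interval-valued.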
An effective named entity similarity metric for comparing data from multiple sources with varying syntax
This paper describes and demonstrates a named entity similarity metric developed for, and currently in use by, the FuzzyPhoto project. The presented metric is effective at comparing named entity data in and across syntax-less data schemas such as are often encountered in GLAM collections. The efficiency of the approach was compared to an existing named entity similarity metric and is shown to be a significant improvement when comparing messy named entity data. Publisher Statement: This is a pre-copyedited, author-produced version of an article accepted for publication in Digital Scholarship in the Humanities following peer review. The version of record, Croft, D., Brown, S. & Coupland, S. 2016, 'An effective Named Entity similarity metric for use with syntax independent data', Digital Scholarship in the Humanities, vol. 32, no. 4, pp. 779-787, is available online at: https://dx.doi.org/10.1093/llc/fqw03
On Nie-Tan operator and type-reduction of interval type-2 fuzzy sets
Type-reduction of type-2 fuzzy sets is considered a defuzzification bottleneck because of the computational complexity involved in the process. In this research, we prove that the closed-form Nie-Tan operator, which outputs the average of the upper and lower bounds of the footprint of uncertainty, is an accurate method for defuzzifying interval type-2 fuzzy sets.
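The closed-form operator the abstract refers to can be written down directly for a sampled domain; the domain and membership values below are illustrative, not taken from the paper.

```python
def nie_tan(xs, upper, lower):
    """Nie-Tan defuzzification of a sampled interval type-2 fuzzy set:
    the crisp output is the centroid of the average of the upper and
    lower membership functions."""
    num = sum(x * (u + l) for x, u, l in zip(xs, upper, lower))
    den = sum(u + l for u, l in zip(upper, lower))
    return num / den

xs    = [0.0, 1.0, 2.0, 3.0, 4.0]
upper = [0.1, 0.6, 1.0, 0.6, 0.1]   # upper bound of the footprint of uncertainty
lower = [0.0, 0.3, 0.7, 0.3, 0.0]   # lower bound

print(nie_tan(xs, upper, lower))    # symmetric set -> centroid at 2.0
```

Unlike iterative type-reduction (e.g. Karnik-Mendel), this is a single pass over the samples, which is why it removes the bottleneck.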
Slug Damage and Control of Slugs in Horticultural Crops
Slugs can cause severe damage in horticultural crops. Slug activity, slug damage, and control strategies differ considerably between countries and regions in Europe.
The brochure summarizes recent research on novel methods of slug control.
Elliptic membership functions and the modeling uncertainty in type-2 fuzzy logic systems as applied to time series prediction
In this paper, our aim is to compare and contrast various ways of modeling uncertainty using the different type-2 fuzzy membership functions available in the literature. In particular, we focus on a novel type-2 fuzzy membership function, the "elliptic membership function". After briefly explaining the motivation behind the elliptic membership function, we analyse the uncertainty distribution along its support and compare its uncertainty-modeling capability with that of existing membership functions. We also show how elliptic membership functions perform in fuzzy arithmetic. In addition to its advantages over existing type-2 fuzzy membership functions, such as having decoupled parameters for its support and width, this novel membership function behaves similarly to the Gaussian and triangular membership functions under addition and multiplication. Finally, we have tested the prediction capability of elliptic membership functions using interval type-2 fuzzy logic systems on the US Dollar/Euro exchange rate prediction problem. Throughout the simulation studies, an extreme learning machine is used to train the interval type-2 fuzzy logic system. The results show that, in addition to the advantages mentioned above, elliptic membership functions achieve prediction accuracy comparable to that of Gaussian and triangular membership functions.
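One formulation of the elliptic membership function found in the type-2 literature is mu(x) = (1 - |(x - c)/d|^a)^(1/a) on the support [c - d, c + d]; treat this as an assumed sketch rather than the paper's exact definition. It illustrates the decoupling the abstract mentions: the support is fixed by c and d alone, while the exponent a shapes the function between the endpoints.

```python
def elliptic_mf(x, c, d, a):
    """Elliptic membership function: centre c, half-width d, exponent a.
    Formulation assumed from the type-2 literature; the support
    [c - d, c + d] does not depend on a."""
    if abs(x - c) >= d:
        return 0.0
    return (1.0 - abs((x - c) / d) ** a) ** (1.0 / a)

# An interval type-2 set can pair two exponents on the same support:
# a larger exponent gives the (wider) upper membership function, a
# smaller one the lower.  The values 1.5 and 0.7 are illustrative.
xs    = [x / 10.0 for x in range(11)]
upper = [elliptic_mf(x, 0.5, 0.5, 1.5) for x in xs]
lower = [elliptic_mf(x, 0.5, 0.5, 0.7) for x in xs]
```

Note that both bounds reach 1 at the centre and 0 at the support endpoints, so the footprint of uncertainty closes at both ends regardless of the exponents chosen.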
Assessing the Provenance of Student Coursework
The Higher Education sector is mobilising vast resources in its response to the use of Generative AI in student coursework. This response includes institutional policies, training for staff and students, and AI detection tools. This paper is concerned with one aspect of this fast-moving area: the assessment of the provenance of a piece of written student coursework. The question of the provenance of student work is a surprisingly complex one which, in truth, can only ever be answered by the student themselves. As academics we must understand the difference between checking for plagiarism and checking for generative AI use. When assessing a student's possible use of generative AI there is no ground truth to test against, which makes the detection of AI use a completely different problem from plagiarism detection. A range of AI detection tools are available, some of which have been adopted within the sector. Some of these tools have high detection rates; however, most suffer from false positive rates that would mean institutions falsely accusing hundreds of students per year of committing academic offences.
This paper explores a different approach to this problem which complements the use of AI detection tools. Rather than examining the work submitted by a student, the author examines the creation and editing of that work over time. This gives an understanding of how a piece of work was written and, most importantly, how it has been edited. Inspecting a document's history requires that it is written on a cloud-based platform with version history enabled. The author has created a tool which sits on top of the cloud-based platform and integrates with the virtual learning environment. The tool records each time a student digitally touches their work, along with the changes made. The tool's interface gives an overview for a cohort, with the ability to delve more deeply into an individual submission.
The result is an easily accessible interactive history of a document during its development, giving some provenance to that document. This history of construction and editing shows how a piece of written work has been crafted over time, providing useful evidence of academic practice. Data on the points where students digitally touch their work can also be useful beyond questions of academic practice. The author gives an example of using a data-driven approach to give formative feedback and discusses how data-driven approaches could become common in teaching practice.
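As a rough illustration of the kind of data such a tool might collect, the sketch below derives an edit timeline from hypothetical version-history snapshots; the snapshot format and field names are invented for illustration and are not the tool's actual data model.

```python
from datetime import datetime

# Hypothetical snapshot log as a cloud platform's version history might
# expose it: (ISO timestamp, document length in characters).
snapshots = [
    ("2024-03-01T10:00", 0),
    ("2024-03-01T11:30", 850),
    ("2024-03-05T09:15", 2300),
    ("2024-03-05T21:40", 2100),   # net deletion: an editing pass
    ("2024-03-06T08:05", 3200),
]

def edit_timeline(snaps):
    """Summarise when, and by how much, a document changed between versions."""
    events = []
    for (t0, n0), (t1, n1) in zip(snaps, snaps[1:]):
        events.append({
            "when": datetime.fromisoformat(t1),
            "delta_chars": n1 - n0,
            "kind": "addition" if n1 >= n0 else "editing/deletion",
        })
    return events

for e in edit_timeline(snapshots):
    print(e["when"].date(), e["delta_chars"], e["kind"])
```

Even this minimal summary distinguishes a document drafted over several sessions, with visible editing passes, from one that appears fully formed in a single event.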
