
    Self-Optimizing Mechanisms for EMF Reduction in Heterogeneous Networks

    This paper focuses on exposure to Radio Frequency (RF) Electromagnetic Fields (EMF) and on optimization methods to reduce it. Within the FP7 LEXNET project, an Exposure Index (EI) has been defined that aggregates the essential components that impact exposure to EMF. The EI includes, among others, the downlink (DL) exposure induced by base stations (BSs) and access points, the uplink (UL) exposure induced by the devices in communication, and the corresponding exposure time. Motivated by the EI definition, this paper develops a stochastic-approximation-based self-optimizing algorithm that dynamically adapts the network to reduce the EI in a heterogeneous network with macro and small cells. It is argued that increasing the small cells' coverage can, to a certain extent, reduce the EI, but beyond a certain limit will deteriorate DL QoS. A load balancing algorithm is formulated that adapts the small cells' coverage based on UL loads and a DL QoS indicator. A proof of convergence of the algorithm is provided, and its performance in terms of EI reduction is illustrated through extensive numerical simulations.
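    To make the flavour of such an algorithm concrete, the following is a minimal sketch (not the LEXNET algorithm itself) of a stochastic-approximation update for a small cell's coverage bias: the bias grows while the macro cell carries more uplink load than the small cell, and is pulled back whenever a DL QoS indicator drops below a floor. The load and QoS models, step sizes and thresholds are illustrative assumptions.

```python
import random

def adapt_small_cell_bias(steps=200, step_size=0.05, qos_floor=0.8, seed=1):
    """Toy stochastic-approximation update for a small cell's coverage bias.

    The bias (in dB) is increased while the macro cell carries more UL load
    than the small cell, and decreased whenever the DL QoS indicator falls
    below qos_floor. Load/QoS models below are illustrative placeholders."""
    rng = random.Random(seed)
    bias_db = 0.0
    for n in range(1, steps + 1):
        # Noisy observations of UL loads and of a DL QoS indicator.
        macro_ul_load = 0.7 - 0.03 * bias_db + rng.gauss(0.0, 0.05)
        small_ul_load = 0.3 + 0.03 * bias_db + rng.gauss(0.0, 0.05)
        dl_qos = 1.0 - 0.02 * bias_db ** 2 + rng.gauss(0.0, 0.02)

        gain = step_size / n ** 0.6   # decreasing Robbins-Monro step size
        if dl_qos < qos_floor:
            bias_db -= gain * 5.0     # protect DL QoS: shrink coverage
        else:
            bias_db += gain * (macro_ul_load - small_ul_load)  # balance UL load
    return bias_db

if __name__ == "__main__":
    print("converged coverage bias (dB):", round(adapt_small_cell_bias(), 2))
```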

    Handling uncertainty in information extraction

    This position paper proposes an interactive approach for developing information extractors based on the ontology definition process, with knowledge about the possible (in)correctness of annotations. We discuss the problem of managing and manipulating probabilistic dependencies.

    Information Extraction, Data Integration, and Uncertain Data Management: The State of The Art

    Information extraction, data integration, and uncertain data management are different research areas that have received considerable attention over the last two decades. Much research has tackled these areas individually. However, information extraction systems should be integrated with data integration methods to make use of the extracted information. Handling uncertainty in the extraction and integration process is an important issue for enhancing the quality of the data in such integrated systems. This article presents the state of the art of these research areas, shows their common ground, and discusses how to integrate information extraction and data integration under the umbrella of uncertainty management.

    Neogeography: The Challenge of Channelling Large and Ill-Behaved Data Streams

    Neogeography is the combination of user-generated data and experiences with mapping technologies. In this article we present a research project to extract valuable structured information with a geographic component from unstructured user-generated text in wikis, forums, or SMS messages. The extracted information is then integrated to form collective knowledge about a certain domain. This structured information can further be used to help users from the same domain obtain information through a simple question answering system. The project intends to help workers' communities in developing countries share their knowledge, providing a simple and cheap way to contribute and to benefit using the available communication technology.

    Named Entity Extraction and Disambiguation: The Reinforcement Effect.

    Named entity extraction and disambiguation have received much attention in recent years. Typical fields addressing these topics are information retrieval, natural language processing, and the Semantic Web. Although these topics are highly dependent on each other, almost no existing work examines this dependency. The aim of this paper is to examine the dependency and show how one affects the other, and vice versa. We conducted experiments on a set of descriptions of holiday homes with the aim of extracting and disambiguating toponyms as a representative example of named entities. We experimented with three approaches to disambiguation with the purpose of inferring the country of the holiday home. We examined how the effectiveness of extraction influences the effectiveness of disambiguation, and reciprocally, how filtering out ambiguous names (an activity that depends on the disambiguation process) improves the effectiveness of extraction. Since this, in turn, may improve the effectiveness of disambiguation again, it shows that extraction and disambiguation may reinforce each other.

    Concept Extraction Challenge: University of Twente at #MSM2013

    Twitter messages are a potentially rich source of continuously and instantly updated information. The shortness and informality of such messages are challenges for Natural Language Processing tasks. In this paper we present a hybrid approach to Named Entity Extraction (NEE) and Classification (NEC) for tweets. The system combines Conditional Random Fields (CRF) and Support Vector Machines (SVM) in a hybrid way to achieve better results. For named entity type classification, we used the AIDA disambiguation system [YosefHBSW11] to disambiguate the extracted named entities and hence find their types.
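    As a rough illustration of how such a hybrid can be wired together (the paper's exact features, training procedure, and AIDA integration are not shown), the sketch below merges candidate spans proposed by a CRF extractor and an SVM extractor, then types the surviving mentions with a pluggable disambiguation function. The stub predictors and toy labels at the bottom are purely hypothetical.

```python
from typing import Callable, List, Tuple

Span = Tuple[int, int]  # (start token index, end token index), end exclusive

def hybrid_extract(tokens: List[str],
                   crf_spans: Callable[[List[str]], List[Span]],
                   svm_spans: Callable[[List[str]], List[Span]],
                   type_of: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Union the candidate spans from both extractors, prefer the longer span
    when two candidates overlap, then assign a type to each kept mention."""
    candidates = sorted(set(crf_spans(tokens)) | set(svm_spans(tokens)),
                        key=lambda s: (s[0], -(s[1] - s[0])))
    selected, last_end = [], -1
    for start, end in candidates:
        if start >= last_end:          # drop candidates overlapping a kept span
            selected.append((start, end))
            last_end = end
    return [(" ".join(tokens[s:e]), type_of(" ".join(tokens[s:e])))
            for s, e in selected]

# Toy usage with stub predictors standing in for trained CRF/SVM models.
if __name__ == "__main__":
    toks = "visiting New York with IBM colleagues".split()
    crf = lambda t: [(1, 3)]                       # "New York"
    svm = lambda t: [(1, 2), (4, 5)]               # "New", "IBM"
    typer = lambda m: {"New York": "LOC", "IBM": "ORG"}.get(m, "MISC")
    print(hybrid_extract(toks, crf, svm, typer))
```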

    Unsupervised improvement of named entity extraction in short informal context using disambiguation clues

    Short context messages (like tweets and SMS messages) are a potentially rich source of continuously and instantly updated information. The shortness and informality of such messages are challenges for Natural Language Processing tasks. Most efforts in this direction rely on machine learning techniques, which are expensive in terms of data collection and training. In this paper we present an unsupervised, Semantic Web-driven approach that improves the extraction process by using clues from the disambiguation process. For extraction we used a simple knowledge-base matching technique combined with a clustering-based approach for disambiguation. Experimental results on a self-collected set of tweets (as an example of short context messages) show improved extraction results when using unsupervised feedback from the disambiguation process.
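    A toy sketch of this extract-disambiguate-feedback loop is given below. It uses a handful of hard-coded knowledge-base entries and a crude pairwise-coherence score as a stand-in for the paper's clustering-based disambiguation, so all names and data are illustrative assumptions rather than the actual pipeline.

```python
from typing import Dict, List

# Toy knowledge base: surface form -> candidate entities (all invented).
KB: Dict[str, set] = {
    "paris": {"Paris_France", "Paris_Texas"},
    "eiffel tower": {"Eiffel_Tower"},
    "texas": {"Texas"},
}
# Pairs of entities considered coherent with each other (also invented).
RELATED = {frozenset({"Paris_France", "Eiffel_Tower"}),
           frozenset({"Paris_Texas", "Texas"})}

def kb_match(tokens: List[str], max_len: int = 2) -> List[str]:
    """Extraction step: naive n-gram lookup against the knowledge base."""
    mentions = []
    for n in range(max_len, 0, -1):
        for i in range(len(tokens) - n + 1):
            gram = " ".join(tokens[i:i + n]).lower()
            if gram in KB:
                mentions.append(gram)
    return mentions

def disambiguate(mentions: List[str]) -> Dict[str, str]:
    """Disambiguation step: pick the candidate most coherent with the
    candidates of the other mentions (stand-in for the clustering step)."""
    chosen = {}
    for m in mentions:
        others = [e for o in mentions if o != m for e in KB[o]]
        chosen[m] = max(KB[m],
                        key=lambda ent: sum(frozenset({ent, e}) in RELATED
                                            for e in others))
    return chosen

def extract_with_feedback(text: str) -> Dict[str, str]:
    """Feedback step: mentions whose linked entity stays incoherent with all
    other linked entities are treated as false positives and dropped."""
    linked = disambiguate(kb_match(text.split()))
    kept = {m: e for m, e in linked.items()
            if any(frozenset({e, v}) in RELATED
                   for k, v in linked.items() if k != m)}
    return kept or linked

if __name__ == "__main__":
    print(extract_with_feedback("I saw the Eiffel Tower in Paris yesterday"))
```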

    Enhancement of fish production in a reservoir after partitioning by dikes through community participation

    A 70-acre reservoir at Basurhat, Noakhali, Bangladesh was partitioned by dikes into four manageable large ponds to increase fish production, under the supervision of the local community through a society owned by 40 people. Pangas (Pangasius hypophthalmus) was stocked at 20,000/acre, followed by fry and fingerlings of different fishes such as catla (Catla catla), rohu (Labeo rohita), mrigal (Cirrhina mrigala), grass carp (Ctenopharyngodon idella), bighead carp (Aristichthys nobilis), silver carp (Hypophthalmichthys molitrix), common carp (Cyprinus carpio) and rajpunti (Puntius gonionotus) at 500/acre. Feed containing 25% protein was applied twice daily and adjusted fortnightly. After 8 months, all fishes weighed 0.80-2.10 kg except rajpunti (150-200 g) and tilapia (150-220 g), and a total of 25 tons of fish was harvested, five times higher than the previous production under single ownership. Fish production increased after partitioning the reservoir with dikes owing to proper management and control.