An evaluation of biotic ligand models predicting acute copper toxicity to Daphnia magna in wastewater effluent
The toxicity of Cu to Daphnia magna was investigated in a series of 48-h immobilization assays in effluents from four wastewater treatment works. The assay results were compared with median effective concentration (EC50) forecasts produced by the HydroQual biotic ligand model (BLM), the refined D. magna BLM, and a modified BLM that was constructed by integrating the refined D. magna biotic ligand characterization with the Windermere humic aqueous model (WHAM) VI geochemical speciation model, which also accommodated additional effluent characteristics as model inputs. The results demonstrated that all the BLMs were capable of predicting toxicity to within a factor of two, and that the modified BLM produced the most accurate toxicity forecasts. The refined D. magna BLM offered the most robust assessment of toxicity in that it did not rely on the inclusion of effluent characteristics or optimization of the dissolved organic carbon active fraction to produce forecasts that were accurate to within a factor of two. The results also suggested that the biotic ligand stability constant for Na may be a poor approximation of the mechanisms governing the influence of Na where concentrations exceed the range within which the stability constant was determined. These findings support the use of BLMs for the establishment of site-specific water quality standards in waters that contain a substantial amount of wastewater effluent, but reinforce the need for regulators to scrutinize the composition of models, their thermodynamic and biotic ligand parameters, and the limitations of those parameters.
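A minimal sketch (Python, not the authors' code) of the acceptance criterion used throughout this abstract: a forecast is judged adequate if it falls within a factor of two of the observed 48-h EC50. All paired values below are hypothetical.

```python
# Sketch of the "within a factor of two" criterion; values are hypothetical.

def within_factor_of_two(observed_ec50: float, predicted_ec50: float) -> bool:
    """True if the predicted EC50 is within a factor of two of the observed one."""
    ratio = predicted_ec50 / observed_ec50
    return 0.5 <= ratio <= 2.0

# Hypothetical observed/predicted EC50 pairs (ug Cu/L) for four effluents.
pairs = [(45.0, 60.0), (30.0, 70.0), (52.0, 48.0), (38.0, 20.0)]
for obs, pred in pairs:
    print(f"observed={obs:5.1f} predicted={pred:5.1f} "
          f"within 2x: {within_factor_of_two(obs, pred)}")
```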
Geographically weighted correspondence matrices for local error reporting and change analyses: mapping the spatial distribution of errors and change
This letter describes and applies generic methods for generating local measures from the correspondence table. These were developed by integrating the functionality of two existing R packages: gwxtab and diffeR. They demonstrate how spatially explicit accuracy and error measures can be generated from local geographically weighted correspondence matrices, for example to compare classified and reference data (predicted and observed) for error analyses, and classes at times t1 and t2 for change analyses. The approaches in this letter extend earlier work that considered the measures derived from correspondence matrices in the context of generalized linear models and probability. Here the methods compute local, geographically weighted correspondence matrices, from which local statistics are directly calculated: in this case, a selection of the overall and categorical difference measures proposed by Pontius and Millones (2011) and Pontius and Santacruz (2014), as well as spatially distributed estimates of kappa coefficients and User and Producer accuracies. The discussion reflects on the use of the correspondence matrix in remote sensing research, the philosophical underpinnings of local rather than global approaches for modelling landscape processes, and the potential policy and scientific benefits that local approaches support.
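A minimal sketch of the core idea, written in Python rather than the R packages (gwxtab, diffeR) the letter builds on: each (predicted, observed) pair contributes to a local correspondence matrix with a weight that decays with distance from the kernel centre, and local accuracy statistics are read directly from that matrix. The Gaussian kernel, the bandwidth and the synthetic data are illustrative assumptions.

```python
import numpy as np

def gw_correspondence(xy, pred, obs, centre, bandwidth, n_classes):
    """Kernel-weighted (Gaussian) correspondence matrix at `centre`."""
    d = np.linalg.norm(xy - centre, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # distance-decay weights
    m = np.zeros((n_classes, n_classes))
    for wi, p, o in zip(w, pred, obs):
        m[p, o] += wi                            # rows: predicted, cols: observed
    return m

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(200, 2))          # synthetic point locations
obs = rng.integers(0, 3, size=200)               # observed (reference) classes
pred = np.where(rng.random(200) < 0.8, obs, rng.integers(0, 3, size=200))

m = gw_correspondence(xy, pred, obs, centre=np.array([50.0, 50.0]),
                      bandwidth=20.0, n_classes=3)
overall = np.trace(m) / m.sum()                  # local overall accuracy
users = np.diag(m) / m.sum(axis=1)               # local User's accuracy per class
producers = np.diag(m) / m.sum(axis=0)           # local Producer's accuracy per class
print(overall, users, producers)
```

Mapping these statistics then amounts to repeating the calculation with the kernel centred at each location on a grid.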
Pharmaceuticals in soils of lower income countries: Physico-chemical fate and risks from wastewater irrigation.
Population growth, increasing affluence, and greater access to medicines have led to an increase in active pharmaceutical ingredients (APIs) entering sewerage networks. In areas with high wastewater reuse, residual quantities of APIs may enter soils via irrigation with treated, partially treated, or untreated wastewater and sludge. Wastewater used for irrigation is currently not included in chemical environmental risk assessments and requires further consideration in areas with high water reuse. This study critically assesses the contemporary understanding of the occurrence and fate of APIs in soils of low and lower-middle income countries (LLMICs) in order to contribute to the development of risk assessments for APIs in LLMICs. The physico-chemical properties of APIs and soils vary greatly globally, affecting API fate, bioaccumulation and toxicity. The impact of pH, clay and organic matter on the fate of ionisable organic compounds is discussed in detail. This study highlights occurrence data, partitioning and degradation coefficients for APIs in soil:porewater systems, API usage data in LLMICs, and removal rates within sewage treatment plants (where treatment is used) as key areas where data are required to inform robust environmental risk assessment methodologies.
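To make the pH effect on ionisable APIs concrete, the sketch below applies the Henderson-Hasselbalch relation to a monoprotic weak acid; the pKa (roughly that of ibuprofen) and the soil pH values are illustrative, not data from the review.

```python
# Ionised fraction of a monoprotic weak acid at a given pH
# (Henderson-Hasselbalch). Ionisation strongly affects sorption to
# clay and organic matter, hence API mobility in irrigated soils.

def fraction_ionised_acid(pH: float, pKa: float) -> float:
    """Deprotonated (ionised) fraction of a monoprotic weak acid."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in (4.5, 6.0, 7.5):                     # agriculturally relevant soil pH span
    f = fraction_ionised_acid(pH, pKa=4.9)     # illustrative, ibuprofen-like pKa
    print(f"pH {pH}: {f:.0%} ionised")
```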
Proposed Environmental Quality Standards for Phenol in Water
This report presents the proposed Environmental Quality Standards (EQSs) for phenol in water, prepared for the National Rivers Authority and published by the Environment Agency in 1995. The report reviews the properties and uses of phenol, its fate, behaviour and reported concentrations in the environment, and critically assesses the available data on its toxicity and bioaccumulation. This information is used to derive EQSs for the protection of fresh and saltwater life and for the abstraction of water to potable supply. Phenol is widely used as a chemical intermediate, and the main sources of phenol in the environment are anthropogenic; phenol may also be formed during the natural decomposition of organic material. The persistence of phenol in the aquatic environment is low, with biodegradation being the main degradation process (half-lives of hours to days). Phenol is moderately toxic to aquatic organisms and its potential to bioaccumulate is low.
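A small illustration (not from the report) of what "half-lives of hours to days" implies for persistence, assuming first-order biodegradation with C(t) = C0·exp(−ln(2)·t/t½); the half-life values are hypothetical points within the stated range.

```python
import math

def remaining_fraction(t_hours: float, half_life_hours: float) -> float:
    """Fraction of the initial concentration remaining after t_hours,
    assuming first-order decay."""
    return math.exp(-math.log(2) * t_hours / half_life_hours)

for t_half in (12.0, 72.0):                    # hours-to-days half-life range
    print(f"t1/2 = {t_half:5.1f} h:",
          f"{remaining_fraction(7 * 24, t_half):.2%} left after one week")
```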
Spatial methods for infectious disease outbreak investigations: systematic literature review
Investigations of infectious disease outbreaks are conventionally framed in terms of person, time and place. Although geographic information systems have increased the range of tools available, spatial analyses are used relatively infrequently. We conducted a systematic review of published reports of outbreak investigations worldwide to estimate the prevalence of spatial methods, describe the techniques applied and explore their utility. We identified 80 reports using spatial methods published between 1979 and 2013, ca 0.4% of the total number of published outbreaks. Environmental or waterborne infections were the most commonly investigated, and most reports were from the United Kingdom. A range of techniques were used, including simple dot maps, cluster analyses and modelling approaches. Spatial tools were usefully applied throughout investigations, from initial confirmation of the outbreak to describing and analysing cases and communicating findings. They provided valuable insights that led to public health actions, but there is scope for much wider implementation and development of new methods.
Comparison of Data Fusion Methods Using Crowdsourced Data in Creating a Hybrid Forest Cover Map
Data fusion represents a powerful way of integrating individual sources of information to produce a better output than could be achieved by any of the individual sources on their own. This paper focuses on the fusion of different land cover products derived from remote sensing. In the past, many different methods have been applied without regard to their relative merit. In this study, we compared some of the most commonly used methods to develop a hybrid forest cover map by combining available land cover/forest products and crowdsourced data on forest cover obtained through the Geo-Wiki project. The methods include nearest neighbour, naive Bayes, logistic regression and geographically-weighted logistic regression (GWR), as well as classification and regression trees (CART). We ran the comparison experiments using two data types: presence/absence of forest in a grid cell, and percentage of forest cover in a grid cell. In general, there was little difference between the methods. However, GWR was found to perform better than the other tested methods in areas with high disagreement between the inputs.
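A hedged sketch of this kind of comparison, using scikit-learn on synthetic data: four of the listed classifiers predict presence/absence of forest in a grid cell from hypothetical input products (geographically-weighted logistic regression is omitted here, since it requires a spatial kernel).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((500, 3))   # e.g. forest fractions reported by three input products
y = (X.mean(axis=1) + rng.normal(0, 0.1, 500) > 0.5).astype(int)  # presence/absence

models = {
    "nearest neighbour": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(),
    "CART": DecisionTreeClassifier(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold accuracy
    print(f"{name:20s} {acc:.3f}")
```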
Determining Biodegradation Kinetics of Hydrocarbons at Low Concentrations: Covering 5 and 9 Orders of Magnitude of Kow and Kaw
Accurate attribute mapping from volunteered geographic information: issues of volunteer quantity and quality
Crowdsourcing is a popular means of acquiring data, but the use of such data is limited by concerns with its quality. This is evident within cartography and the geographical sciences more generally, with the quality of volunteered geographic information (VGI) recognized as a major challenge to address if the full potential of citizen sensing in mapping applications is to be realized. Here, a means to characterize the quality of volunteers, based only on the data they contribute, was used to explore issues connected with the quantity and quality of volunteers for attribute mapping. The focus was on data in the form of annotations or class labels provided by volunteers who visually interpreted an attribute, land cover, from a series of satellite sensor images. A latent class model was found to provide accurate characterizations of the quality of volunteers in terms of the accuracy of their labelling, irrespective of the number of cases that they labelled. The accuracy with which a volunteer could be characterized tended to increase with the number of volunteers contributing, but was typically good at all but small numbers of volunteers. Moreover, the ability to characterize volunteers in terms of the quality of their labelling could be used constructively; for example, volunteers could be ranked by quality, and a subset then selected as input to a subsequent mapping task. This was particularly important as an identified subset of volunteers could undertake a task more accurately than when part of a larger group of volunteers. The results highlight that both the quantity and quality of volunteers need consideration, and that the use of VGI may be enhanced through information on the quality of the volunteers derived entirely from the data provided, without any additional information.
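The abstract does not specify the latent class model; the sketch below shows one standard formulation of the idea, a Dawid-Skene-style EM estimate that recovers each volunteer's labelling accuracy from the labels alone, demonstrated on synthetic data.

```python
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """labels: (n_items, n_volunteers) int array; -1 marks a missing label.
    Returns item-class posteriors and per-volunteer confusion matrices."""
    n_items, n_vols = labels.shape
    post = np.ones((n_items, n_classes)) / n_classes
    for i in range(n_items):                       # initialise from vote shares
        seen = labels[i][labels[i] >= 0]
        if seen.size:
            post[i] = np.bincount(seen, minlength=n_classes) / seen.size
    for _ in range(n_iter):
        # M-step: expected confusion matrix per volunteer, plus class priors.
        conf = np.full((n_vols, n_classes, n_classes), 1e-6)
        for v in range(n_vols):
            for i in np.where(labels[:, v] >= 0)[0]:
                conf[v, :, labels[i, v]] += post[i]
        conf /= conf.sum(axis=2, keepdims=True)
        prior = post.mean(axis=0)
        # E-step: item-class posteriors given current confusion matrices.
        log_post = np.zeros((n_items, n_classes)) + np.log(prior)
        for v in range(n_vols):
            mask = labels[:, v] >= 0
            log_post[mask] += np.log(conf[v, :, labels[mask, v]])
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
    return post, conf

# Synthetic demonstration: 8 volunteers with different (unknown) accuracies.
rng = np.random.default_rng(2)
true = rng.integers(0, 3, size=300)
quality = rng.uniform(0.55, 0.95, size=8)
labels = np.where(rng.random((300, 8)) < quality,
                  true[:, None], rng.integers(0, 3, size=(300, 8)))
post, conf = dawid_skene(labels, n_classes=3)
print(np.round(conf.diagonal(axis1=1, axis2=2).mean(axis=1), 2))  # estimated quality
```

Ranking volunteers by the diagonal of their estimated confusion matrices gives the quality ordering described in the abstract, without reference data.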
Geographically weighted evidence combination approaches for combining discordant and inconsistent volunteered geographical information
There is much interest in being able to combine crowdsourced data. One of the critical issues in the information sciences is how to combine data or information that are discordant or inconsistent in some way. Many previous approaches have taken a majority-rules approach under the assumption that most people are correct most of the time. This paper analyses crowdsourced land cover data generated by the Geo-Wiki initiative in order to infer the land cover present at locations on a 50 km grid. It compares four evidence combination approaches (Dempster-Shafer, Bayes, Fuzzy Sets and Possibility) applied under a geographically weighted kernel with the geographically weighted average approach applied in many current Geo-Wiki analyses. A geographically weighted approach uses a moving kernel under which local analyses are undertaken; the contribution (or salience) of each data point to the analysis is weighted by its distance to the kernel centre, reflecting Tobler's first law of geography. A series of analyses were undertaken using different kernel sizes (or bandwidths). Each of the geographically weighted evidence combination methods generated spatially distributed measures of belief in hypotheses associated with the presence of individual land cover classes at each location on the grid. These were compared with GlobCover, a global land cover product. The results from the geographically weighted average approach in general had higher correspondence with the reference data, and this increased with bandwidth. However, for some classes other evidence combination approaches had higher correspondences, possibly because of greater ambiguity over class conceptualisations and/or lower densities of crowdsourced data. The outputs also allowed the beliefs in each class to be mapped. The differences between the soft and the crisp maps are clearly associated with the logics of each evidence combination approach and, of course, the different questions that they ask of the data. The results show that discordant data can be combined (rather than being removed from analysis) and that data integrated in this way can be parameterised by different measures of belief uncertainty. The discussion highlights a number of critical areas for future research.
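A minimal sketch of the baseline method, the geographically weighted average: binary crowd judgements on the presence of a class are averaged under a Gaussian kernel whose weights decay with distance from the grid location, echoing Tobler's first law. The kernel form, bandwidths and data are illustrative assumptions, not the Geo-Wiki implementation.

```python
import numpy as np

def gw_average(xy, votes, centre, bandwidth):
    """Distance-weighted mean of votes (1 = class present) at `centre`."""
    d = np.linalg.norm(xy - centre, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
    return float(np.sum(w * votes) / np.sum(w))

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(400, 2))          # contribution locations
votes = (xy[:, 0] > 50).astype(float)            # say the class occurs in the east
for bw in (5.0, 20.0, 50.0):                     # varying kernel bandwidth
    print(bw, round(gw_average(xy, votes, np.array([60.0, 50.0]), bw), 3))
```

Widening the bandwidth smooths the belief surface toward the global average, which is consistent with the bandwidth sensitivity reported in the abstract.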
