
    Thresholds of terrestrial nutrient loading for the development of eutrophication episodes in a coastal embayment in the Aegean Sea

    Thresholds of terrestrial nutrient loading (inorganic N and P) for the development of eutrophication episodes were estimated in an enclosed embayment, the Gulf of Kalloni, in the Aegean, Eastern Mediterranean. Terrestrial loading was quantified by a watershed runoff model taking into account land use, geomorphology, sewerage, and industrial and animal-farming by-products. Eutrophication episodes were assessed using an existing scale for Aegean coastal waters based on chl a, and the nutrient concentrations (N and P) necessary for the development of such episodes were defined using a probabilistic procedure. Finally, to link the nutrient loading arriving at the gulf with the resulting nutrient enrichment of the marine ecosystem, three loading factors developed by Vollenweider for lake and marine ecosystems were applied. The first assumes no exchange between the embayment and the open sea, whereas the other two take water renewal time into account. Only the threshold for inorganic nitrogen estimated by the first factor was exceeded in the study area, during February, after a strong rainfall event coinciding with a eutrophication episode observed in the interior of the gulf, implying that the waters of the gulf are rather confined and the receiving body operates as a lake. The degree of confinement was further examined by studying the temperature, salinity, and density distributions inside the gulf and across the channel connecting the gulf to the open sea. Incoming freshwater from the watershed during winter results in the formation of a dilute surface layer of low salinity and density, clearly isolated from the open sea. The nutrients from river inputs are diluted into this isolated water mass, and the eutrophication threshold for nitrogen is exceeded. Although phosphorus loading was also high during winter, the corresponding limits were never exceeded. The proposed methodology establishes a quantitative relationship between terrestrial nutrient loading and the development of eutrophication episodes in coastal embayments, provided that information on the physical setting of the system is available. Such cause-and-effect relationships can be invaluable tools for managers and decision makers in the framework of Integrated Coastal Zone Management.
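
    As a rough illustration of how Vollenweider-style loading factors translate terrestrial loading into an expected in-basin concentration, the sketch below compares a closed-basin (lake-like) factor against one that accounts for water renewal time, then checks the result against an episode threshold. The parameter values, the threshold, and the exact functional forms are illustrative assumptions, not the paper's calibrated quantities.

        import math

        def conc_closed_basin(annual_load_g_m2, mean_depth_m):
            # First factor in the text: no exchange with the open sea,
            # so the areal load simply accumulates over the water column.
            return annual_load_g_m2 / mean_depth_m  # g m^-3

        def conc_with_renewal(annual_load_g_m2, mean_depth_m, renewal_time_yr):
            # A common Vollenweider-type steady state: the areal hydraulic
            # load q_s = z / tau flushes nutrients out of the basin.
            q_s = mean_depth_m / renewal_time_yr
            return annual_load_g_m2 / (q_s * (1.0 + math.sqrt(renewal_time_yr)))

        THRESHOLD_DIN = 0.04  # g N m^-3, hypothetical episode threshold
        for label, conc in [("closed basin", conc_closed_basin(0.8, 10.0)),
                            ("with renewal", conc_with_renewal(0.8, 10.0, 0.5))]:
            print(f"{label}: {conc:.3f} g m^-3, exceeds: {conc > THRESHOLD_DIN}")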

    Lumpy species coexistence arises robustly in fluctuating resource environments

    The effect of life-history traits on resource competition outcomes is well understood in the context of a constant resource supply. However, almost all natural systems are subject to fluctuations of resources driven by cyclical processes such as seasonality and tidal hydrology. To understand community composition, it is therefore imperative to study the impact of resource fluctuations on interspecies competition. We adapted a well-established resource-competition model to show that fluctuations in the inflow concentrations of two limiting resources lead to the survival of species in clumps along the trait axis, consistent with observations of “lumpy coexistence” [Scheffer M, van Nes EH (2006) Proc Natl Acad Sci USA 103:6230–6235]. A complex dynamic pattern in the available ambient resources arose very early in the self-organization process and dictated the locations of clumps along the trait axis by creating niches that promoted the growth of species with specific traits. This dynamic pattern emerged as the combined result of fluctuations in the inflow of resources and their consumption by the most competitive species, which accumulated the bulk of biomass early in assemblage organization. Clumps emerged robustly across a range of periodicities, phase differences, and amplitudes. Given the real-world ubiquity of asynchronous fluctuations in limiting resources, our findings imply that assemblage organization in clumps should be a common feature in nature.
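
    A minimal sketch of this kind of model is given below: Monod growth under Liebig's law of the minimum in a chemostat whose two resource inflow concentrations fluctuate out of phase, with species spread along a trait axis that trades off the two half-saturation constants. All parameter values are invented for illustration and do not reproduce the paper's simulations.

        import numpy as np

        n_species, dt, steps = 20, 0.02, 100_000
        D = 0.25                                    # dilution rate
        trait = np.linspace(0.1, 0.9, n_species)    # trait axis (trade-off)
        K = np.stack([0.2 + trait, 1.2 - trait])    # half-saturations, shape (2, n)
        c = np.array([[0.05], [0.05]])              # resource use per unit growth
        N = np.full(n_species, 0.1)                 # biomasses
        R = np.array([5.0, 5.0])                    # ambient resources

        for step in range(steps):
            t = step * dt
            # Asynchronous (antiphase) fluctuating inflow concentrations.
            S = np.array([10 + 8 * np.sin(2 * np.pi * t / 40),
                          10 + 8 * np.sin(2 * np.pi * t / 40 + np.pi)])
            mu = np.minimum(R[0] / (K[0] + R[0]), R[1] / (K[1] + R[1]))
            growth = mu * N
            N = np.maximum(N + dt * N * (mu - D), 0)
            R = np.maximum(R + dt * (D * (S - R) - (c * growth).sum(axis=1)), 0)

        print("surviving trait positions:", np.round(trait[N > 1e-3], 2))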

    Decisions, Counterfactual Explanations and Strategic Behavior

    As data-driven predictive models are increasingly used to inform decisions, it has been argued that decision makers should provide explanations that help individuals understand what would have to change for these decisions to be beneficial. However, there has been little discussion of the possibility that individuals may use such counterfactual explanations to invest effort strategically and maximize their chances of receiving a beneficial decision. In this paper, our goal is to find policies and counterfactual explanations that are optimal in terms of utility in such a strategic setting. We first show that, given a predefined policy, the problem of finding the optimal set of counterfactual explanations is NP-hard. Then, we show that the corresponding objective is nondecreasing and submodular, which allows a standard greedy algorithm to enjoy approximation guarantees. In addition, we show that the problem of jointly finding both the optimal policy and the set of counterfactual explanations reduces to maximizing a non-monotone submodular function. As a result, we can use a recent randomized algorithm to solve the problem, which also offers approximation guarantees. Finally, we demonstrate that, by incorporating a matroid constraint into the problem formulation, we can increase the diversity of the optimal set of counterfactual explanations and incentivize individuals across the whole spectrum of the population to self-improve. Experiments on synthetic and real lending and credit card data illustrate our theoretical findings and show that the counterfactual explanations and decision policies found by our algorithms achieve higher utility than several competitive baselines.
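
    The standard greedy algorithm mentioned above is easy to state; the sketch below runs it on a toy coverage-style objective (which individuals are "covered" by at least one offered explanation). The objective and the data are stand-ins for illustration, not the paper's utility function.

        def greedy_max(candidates, utility, k):
            # Greedily add the element with the largest marginal gain; for a
            # nondecreasing submodular objective this enjoys a (1 - 1/e) guarantee.
            chosen = set()
            for _ in range(k):
                gains = {e: utility(chosen | {e}) - utility(chosen)
                         for e in candidates - chosen}
                best = max(gains, key=gains.get, default=None)
                if best is None or gains[best] <= 0:
                    break
                chosen.add(best)
            return chosen

        # Toy data: each candidate explanation "covers" some individuals.
        covers = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5}, "D": {1, 5, 6}}
        utility = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
        print(greedy_max(set(covers), utility, k=2))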

    On the Within-Group Discrimination of Screening Classifiers

    Screening classifiers are increasingly used to identify qualified candidates in a variety of selection processes. In this context, it has been recently shown that, if a classifier is calibrated, one can identify the smallest set of candidates which contains, in expectation, a desired number of qualified candidates using a threshold decision rule. This lends support to focusing on calibration as the only requirement for screening classifiers. In this paper, we argue that screening policies that use calibrated classifiers may suffer from an understudied type of within-group discrimination -- they may discriminate against qualified members within demographic groups of interest. Further, we argue that this type of discrimination can be avoided if classifiers satisfy within-group monotonicity, a natural monotonicity property within each of the groups. Then, we introduce an efficient post-processing algorithm based on dynamic programming to minimally modify a given calibrated classifier so that its probability estimates satisfy within-group monotonicity. We validate our algorithm using US Census survey data and show that within-group monotonicity can often be achieved at a small cost in terms of prediction granularity and shortlist size.
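
    The paper's post-processing algorithm is a dynamic program; as a simpler stand-in that conveys the idea of minimally repairing probability estimates so they are monotone within a group, the sketch below uses pool-adjacent-violators (isotonic regression). This is an illustrative substitute, not the paper's algorithm or its guarantees.

        def pav(values):
            # Pool adjacent violators: least-squares nondecreasing fit.
            blocks = []  # each block: [mean, weight]
            for v in values:
                blocks.append([v, 1.0])
                while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
                    m2, w2 = blocks.pop()
                    m1, w1 = blocks.pop()
                    blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
            out = []
            for mean, w in blocks:
                out.extend([mean] * int(w))
            return out

        # Estimates for one group, ordered by the classifier's ranking: a
        # higher-ranked member should not receive a lower probability estimate.
        print(pav([0.2, 0.5, 0.4, 0.4, 0.7]))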

    Counterfactual Explanations in Sequential Decision Making Under Uncertainty

    Methods to find counterfactual explanations have predominantly focused on one-step decision-making processes. In this work, we initiate the development of methods to find counterfactual explanations for decision-making processes in which multiple, dependent actions are taken sequentially over time. We start by formally characterizing a sequence of actions and states using finite-horizon Markov decision processes and the Gumbel-Max structural causal model. Building upon this characterization, we formally state the problem of finding counterfactual explanations for sequential decision-making processes. In our problem formulation, the counterfactual explanation specifies an alternative sequence of actions, differing in at most k actions from the observed sequence, that could have led the observed process realization to a better outcome. Then, we introduce a polynomial-time algorithm based on dynamic programming to build a counterfactual policy that is guaranteed to always provide the optimal counterfactual explanation on every possible realization of the counterfactual environment dynamics. We validate our algorithm using both synthetic and real data from cognitive behavioral therapy and show that the counterfactual explanations our algorithm finds can provide valuable insights to enhance sequential decision making under uncertainty.
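
    A small sketch of the Gumbel-Max counterfactual step this characterization relies on, under simplifying assumptions: a single state with a categorical transition distribution per action, posterior noise sampled via the standard truncated-Gumbel construction, and toy probabilities. The paper's algorithm builds a full dynamic program on top of this step; none of the numbers below come from it.

        import numpy as np

        rng = np.random.default_rng(0)

        def posterior_gumbels(logp, observed):
            # Sample Gumbel noise g conditioned on argmax(logp + g) == observed,
            # using the truncated-Gumbel construction.
            top = rng.gumbel(loc=np.logaddexp.reduce(logp))        # value of the max
            vals = -np.log(np.exp(-top) + np.exp(-rng.gumbel(loc=logp)))
            vals[observed] = top
            return vals - logp                                     # the noise itself

        # Toy transition distributions for two actions in one state.
        P = {0: np.array([0.7, 0.2, 0.1]), 1: np.array([0.1, 0.3, 0.6])}
        observed_action, observed_next = 0, 1
        noise = posterior_gumbels(np.log(P[observed_action]), observed_next)
        # Replay the same noise under the alternative action (the SCM assumption).
        print("counterfactual next state:", int(np.argmax(np.log(P[1]) + noise)))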

    Group Testing under Superspreading Dynamics

    Testing is recommended for all close contacts of confirmed COVID-19 patients. However, existing group testing methods are oblivious to the circumstances of contagion provided by contact tracing. Here, we build upon a well-known semi-adaptive pool testing method, Dorfman's method with imperfect tests, and derive a simple group testing method based on dynamic programming that is specifically designed to use the information provided by contact tracing. Experiments using a variety of reproduction numbers and dispersion levels, including those estimated in the context of the COVID-19 pandemic, show that the pools found using our method result in a significantly lower number of tests than those found using the standard Dorfman method, especially when the number of contacts of an infected individual is small. Moreover, our results show that our method can be more beneficial when secondary infections are highly overdispersed.
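
    A simplified reconstruction of the idea, assuming independent per-contact infection probabilities (e.g., informed by contact tracing), imperfect test sensitivity and specificity, and pools restricted to consecutive contacts after sorting by risk. The exact scheme and parameter values in the paper differ.

        import math

        def expected_tests(probs, sensitivity=0.85, specificity=0.98):
            # Dorfman stage cost: one pooled test, plus individual retests
            # whenever the pooled test comes back positive.
            if len(probs) == 1:
                return 1.0
            p_all_negative = math.prod(1 - p for p in probs)
            q_positive = (sensitivity * (1 - p_all_negative)
                          + (1 - specificity) * p_all_negative)
            return 1.0 + q_positive * len(probs)

        def optimal_pools(probs):
            # DP over contacts sorted by risk: pools are consecutive segments.
            probs = sorted(probs, reverse=True)
            n = len(probs)
            dp, cut = [0.0] + [math.inf] * n, [0] * (n + 1)
            for i in range(1, n + 1):
                for j in range(i):
                    cost = dp[j] + expected_tests(probs[j:i])
                    if cost < dp[i]:
                        dp[i], cut[i] = cost, j
            pools, i = [], n
            while i > 0:
                pools.append(probs[cut[i]:i])
                i = cut[i]
            return dp[n], pools[::-1]

        cost, pools = optimal_pools([0.4, 0.35, 0.05, 0.04, 0.03, 0.02])
        print(f"expected tests: {cost:.2f}, pools: {pools}")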

    Species extinctions strengthen the relationship between biodiversity and resource use efficiency

    Evidence from terrestrial ecosystems indicates that biodiversity relates to ecosystem functions (BEF), but this relationship varies in its strength, in part as a function of habitat connectivity and fragmentation. In primary producers, common proxies of ecosystem function include productivity and resource use efficiency. In aquatic primary producers, macroecological studies have observed variance in the BEF relationship, with ecosystems of lower richness showing stronger BEF relationships. However, aquatic ecosystems are less affected by habitat fragmentation than terrestrial systems, and the mechanism underlying this BEF variance has been largely overlooked. Here, we provide a mechanistic explanation of BEF variance using a trait-based numerical model parameterized for phytoplankton. Resource supply in our model fluctuates recurrently, as in many coastal systems. Our findings show that following an extinction event, the BEF relationship can be driven by the species that are the most efficient resource users. Specifically, in species-rich assemblages, increased redundancy of efficient resource users minimizes the risk of losing function following an extinction event. In species-poor assemblages, by contrast, low redundancy of efficient resource users increases the risk of losing ecosystem function following extinctions. Furthermore, our findings are corroborated by observations from large-scale field studies on phytoplankton.
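
    A toy numerical illustration of the redundancy argument, under the strong simplifying assumption that community function is set by the single most efficient resource user: richer assemblages lose less function when that species goes extinct, because a near-equally efficient species remains. This is a caricature of the mechanism, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(1)

        def mean_function_loss(richness, trials=2000):
            losses = []
            for _ in range(trials):
                eff = rng.uniform(0.2, 1.0, size=richness)  # resource use efficiencies
                best = eff.max()                            # function before extinction
                runner_up = np.partition(eff, -2)[-2]       # function after losing the best
                losses.append((best - runner_up) / best)
            return float(np.mean(losses))

        for r in (2, 4, 8, 16, 32):
            print(f"richness {r:2d}: mean relative function loss {mean_function_loss(r):.3f}")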

    Is Your LLM Overcharging You? Tokenization, Transparency, and Incentives

    State-of-the-art large language models require specialized hardware and substantial energy to operate. As a consequence, cloud-based services that provide access to large language models have become very popular. In these services, the price users pay for an output provided by a model depends on the number of tokens the model uses to generate it -- they pay a fixed price per token. In this work, we show that this pricing mechanism creates a financial incentive for providers to strategize and misreport the (number of) tokens a model used to generate an output, and users cannot prove, or even know, whether a provider is overcharging them. However, we also show that, if an unfaithful provider is obliged to be transparent about the generative process used by the model, misreporting optimally without raising suspicion is hard. Nevertheless, as a proof of concept, we introduce an efficient heuristic algorithm that allows providers to significantly overcharge users without raising suspicion, highlighting the vulnerability of users under the current pay-per-token pricing mechanism. Further, to completely eliminate the financial incentive to strategize, we introduce a simple incentive-compatible token pricing mechanism. Under this mechanism, the price users pay for an output provided by a model depends on the number of characters of the output -- they pay a fixed price per character. Along the way, to illustrate and complement our theoretical results, we conduct experiments with several large language models from the Llama, Gemma, and Ministral families, and input prompts from the LMSYS Chatbot Arena platform.
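
    As a toy illustration of the incentive at stake: many token sequences can decode to the same output string, so a provider paid per token could report a longer-than-necessary tokenization, whereas a per-character price depends only on the decoded output. The tokenizations and prices below are invented.

        # Two tokenizations that decode to the same output string.
        honest = ["hello"]
        inflated = ["h", "e", "l", "l", "o"]
        assert "".join(honest) == "".join(inflated) == "hello"

        PRICE_PER_TOKEN, PRICE_PER_CHAR = 0.002, 0.0005  # invented prices
        for name, toks in [("honest", honest), ("inflated", inflated)]:
            out = "".join(toks)
            print(f"{name}: pay-per-token bill {len(toks) * PRICE_PER_TOKEN:.4f}, "
                  f"pay-per-character bill {len(out) * PRICE_PER_CHAR:.4f}")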