719 research outputs found

    Agriculture intensifies soil moisture decline in Northern China

    Northern China is one of the most densely populated regions in the world. Agricultural activities have intensified since the 1980s to provide food security to the country. However, this intensification has likely contributed to an increasing scarcity of water resources, which may in turn endanger food security. Based on in-situ measurements of soil moisture collected in agricultural plots during 1983–2012, we find that topsoil (0–50 cm) volumetric water content during the growing season has declined significantly (p < 0.01), with a trend of −0.011 to −0.015 m³ m⁻³ per decade. Observed discharge declines for the three large river basins are consistent with the effects of agricultural intensification, although other factors (e.g. dam construction) have likely contributed to these trends. Practices such as fertilizer application have favoured biomass growth and increased transpiration rates, thereby reducing available soil water. In addition, the rapid proliferation of water-expensive crops (e.g., maize) and the expansion of the area dedicated to food production have also contributed to soil drying. Adoption of alternative agricultural practices that can meet the immediate food demand without compromising future water resources seems critical for the sustainability of the food production system.
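    A minimal sketch of how a decadal drying trend of this kind could be estimated from growing-season soil moisture records. The series below is synthetic and the assumed trend value is a placeholder; this is not the authors' data or analysis code.

```python
# Illustrative only: estimate a linear trend (m^3 m^-3 per decade) and its
# significance from yearly growing-season mean volumetric water content.
# The data here are synthetic; a real analysis would use the in-situ records.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1983, 2013)                 # 1983-2012, matching the study period
true_trend_per_year = -0.0013                 # ~ -0.013 m^3 m^-3 per decade (assumed)
vwc = 0.30 + true_trend_per_year * (years - years[0]) + rng.normal(0, 0.01, years.size)

res = stats.linregress(years, vwc)            # ordinary least-squares trend
trend_per_decade = res.slope * 10.0

print(f"trend: {trend_per_decade:+.4f} m^3 m^-3 per decade, p = {res.pvalue:.3g}")
```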

    One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning

    We present Generalized LoRA (GLoRA), an advanced approach for universal parameter-efficient fine-tuning tasks. Enhancing Low-Rank Adaptation (LoRA), GLoRA employs a generalized prompt module to optimize pre-trained model weights and adjust intermediate activations, providing more flexibility and capability across diverse tasks and datasets. Moreover, GLoRA facilitates efficient parameter adaptation through a scalable, modular, layer-wise structure search that learns an individual adapter for each layer. Originating from a unified mathematical formulation, GLoRA exhibits strong transfer learning, few-shot learning, and domain generalization abilities, as it adapts to new tasks not only through weights but also through additional dimensions such as activations. Comprehensive experiments demonstrate that GLoRA outperforms all previous methods on natural, specialized, and structured vision benchmarks, achieving superior accuracy with fewer parameters and computations. Applied to LLaMA-1 and LLaMA-2, the proposed method also shows considerable improvements over the original LoRA in the language domain. Furthermore, our structural re-parameterization design ensures that GLoRA incurs no extra inference cost, rendering it a practical solution for resource-limited applications. Code and models are available at: https://github.com/Arnav0400/ViT-Slim/tree/master/GLoRA
    Comment: Technical report. v2: Add LLaMA-1&2 results.
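    A simplified PyTorch sketch of the general idea described in the abstract: a frozen pre-trained weight augmented by a low-rank additive update plus learned scale/shift terms that can be folded back into a single dense matrix, so inference cost is unchanged. This is an illustrative reading of the abstract, not the authors' implementation (see the linked repository for that); all class and parameter names here are made up.

```python
# Illustrative sketch (not the official GLoRA code): a linear layer whose frozen
# weight W0 is adapted by (i) a low-rank additive update, (ii) a learned scaling
# of W0, and (iii) a learned bias shift. All terms fold into one dense weight,
# so no extra inference-time cost is incurred.
import torch
import torch.nn as nn

class GeneralizedLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.weight0 = nn.Parameter(base.weight.detach().clone(), requires_grad=False)
        self.bias0 = nn.Parameter(base.bias.detach().clone(), requires_grad=False)
        out_f, in_f = self.weight0.shape
        # Trainable adaptation terms (names are placeholders)
        self.lora_A = nn.Parameter(torch.zeros(out_f, rank))
        self.lora_B = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.weight_scale = nn.Parameter(torch.zeros(out_f, 1))   # per-row scaling of W0
        self.bias_shift = nn.Parameter(torch.zeros(out_f))        # additive bias shift

    def merged_weight(self):
        # Structural re-parameterization: fold all terms into one dense matrix.
        return self.weight0 + self.weight_scale * self.weight0 + self.lora_A @ self.lora_B

    def forward(self, x):
        return nn.functional.linear(x, self.merged_weight(), self.bias0 + self.bias_shift)

layer = GeneralizedLoRALinear(nn.Linear(64, 64), rank=4)
print(layer(torch.randn(2, 64)).shape)   # torch.Size([2, 64])
```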

    ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy

    Modern computer vision offers a great variety of models to practitioners, and selecting a model from multiple options for specific applications can be challenging. Conventionally, competing model architectures and training protocols are compared by their classification accuracy on ImageNet. However, this single metric does not fully capture performance nuances critical for specialized tasks. In this work, we conduct an in-depth comparative analysis of model behaviors beyond ImageNet accuracy, for both ConvNet and Vision Transformer architectures, each across supervised and CLIP training paradigms. Although our selected models have similar ImageNet accuracies and compute requirements, we find that they differ in many other aspects: types of mistakes, output calibration, transferability, and feature invariance, among others. This diversity in model characteristics, not captured by traditional metrics, highlights the need for more nuanced analysis when choosing among different models. Our code is available at https://github.com/kirill-vish/Beyond-INet. Project page: https://kirill-vish.github.io/beyond-imagenet-accuracy
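    One of the behaviours compared beyond accuracy is output calibration. A minimal sketch, under my own assumptions, of how expected calibration error (ECE) can be measured from a model's softmax outputs; the arrays below are random stand-ins for real predictions, and this is not the paper's own evaluation code (that lives in the linked repository).

```python
# Illustrative only: expected calibration error (ECE) from predicted class
# probabilities and ground-truth labels, using equal-width confidence bins.
import numpy as np

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray, n_bins: int = 15) -> float:
    confidences = probs.max(axis=1)            # confidence of the predicted class
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(accuracies[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap           # weight by the fraction of samples in the bin
    return ece

# Random stand-in data (1000 samples, 10 classes) just to show the call.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 10, size=1000)
print(f"ECE = {expected_calibration_error(probs, labels):.3f}")
```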

    Search for the Lepton Flavor Violation Process $J/\psi \to e\mu$ at BESIII

    We search for the lepton-flavor-violating decay of the $J/\psi$ into an electron and a muon using $(225.3 \pm 2.8) \times 10^{6}$ $J/\psi$ events collected with the BESIII detector at the BEPCII collider. Four candidate events are found in the signal region, consistent with background expectations. An upper limit on the branching fraction of $\mathcal{B}(J/\psi \to e\mu) < 1.5 \times 10^{-7}$ (90% C.L.) is obtained.
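    A minimal numerical sketch of how a counting-experiment result of this kind is typically turned into a branching-fraction upper limit. The expected background and detection efficiency below are placeholder values not quoted in the abstract, so the number this prints is illustrative only and is not the published limit.

```python
# Illustrative only: 90% C.L. upper limit on the signal yield for a Poisson
# counting experiment with known background (Helene-style Bayesian formula),
# converted to a branching-fraction limit. Background and efficiency are assumed.
from scipy.stats import poisson
from scipy.optimize import brentq

n_obs = 4                 # candidate events in the signal region (from the abstract)
b_exp = 4.0               # ASSUMED expected background, not quoted in the abstract
n_jpsi = 225.3e6          # number of J/psi events (from the abstract)
efficiency = 0.25         # ASSUMED signal efficiency, placeholder value
cl = 0.90

def coverage(s_up: float) -> float:
    # P(N <= n_obs | s_up + b) / P(N <= n_obs | b) equals 1 - CL at the limit.
    return poisson.cdf(n_obs, s_up + b_exp) / poisson.cdf(n_obs, b_exp) - (1.0 - cl)

s_upper = brentq(coverage, 0.0, 50.0)            # signal-yield upper limit
bf_upper = s_upper / (efficiency * n_jpsi)       # branching-fraction upper limit
print(f"N_sig < {s_upper:.2f}  =>  B < {bf_upper:.2e} at {cl:.0%} C.L. (illustrative)")
```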