
    Growth Response of Pleurotus spp. on Different Basal Media and Different pH Levels

    Five isolates of Pleurotus spp., viz. Pleurotus florida, P. sajor-caju, P. eous, P. flabellatus, and Pleurotus sp., were cultured on PDA media and maintained on PDA slants. All the isolated species were tested for biomass production in various media, viz. Richard’s Broth, Asthana and Hawker’s, Czapek’s Dox, Potato Dextrose, and Malt Extract Broth. The maximum biomass was recorded on Potato Dextrose medium for P. florida (1.86 g), and the minimum on Asthana and Hawker’s medium for P. florida and P. eous (1.15 g). All the isolated species were also tested for the effect of pH variation over a range of pH 3 to 8. The maximum biomass was recorded at pH 5.0; biomass gradually decreased in more acidic conditions, and higher pH likewise did not favour growth or biomass production.

    ProKnow: Process Knowledge for Safety Constrained and Explainable Question Generation for Mental Health Diagnostic Assistance

    Current Virtual Mental Health Assistants (VMHAs) provide counseling and suggestive care. They refrain from patient diagnostic assistance because they lack training on safety-constrained and specialized clinical process knowledge (ProKnow). In this work, we define ProKnow as an ordered set of information that maps to evidence-based guidelines or categories of conceptual understanding held by experts in a domain. We also introduce a new dataset of diagnostic conversations guided by safety constraints and the ProKnow that healthcare professionals use (ProKnow-data). We develop a method for natural language question generation (NLG) that collects diagnostic information from the patient interactively (ProKnow-algo). We demonstrate the limitations of using state-of-the-art large-scale language models (LMs) on this dataset. ProKnow-algo models process knowledge by explicitly modeling safety, knowledge capture, and explainability. LMs with ProKnow-algo generated 89% safer questions in the depression and anxiety domains. Further, without ProKnow-algo, the generated questions did not adhere to the clinical process knowledge in ProKnow-data; in comparison, ProKnow-algo-based generations yield a 96% reduction in averaged squared rank error. The explainability of the generated questions is assessed by computing similarity with concepts in depression and anxiety knowledge bases. Overall, irrespective of the type of LM, ProKnow-algo achieved an average 82% improvement over simple pre-trained LMs on safety, explainability, and process-guided question generation. We qualitatively and quantitatively evaluate the efficacy of ProKnow-algo by introducing three new evaluation metrics for safety, explainability, and process-knowledge adherence. For reproducibility, we will make ProKnow-data and the code repository of ProKnow-algo publicly available upon acceptance.
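
    The averaged squared rank error mentioned above can be illustrated with a minimal sketch: it scores how far each generated question's position deviates from its position in the expert-defined process order. The question topics, ordering, and function name below are hypothetical illustrations, not taken from the ProKnow release.

```python
# Minimal sketch with hypothetical data: adherence to a clinical process order
# is scored as the mean squared difference between the rank at which a question
# was generated and its rank in the expert-defined process.

def mean_squared_rank_error(generated_order, reference_order):
    """Average of squared rank differences; 0 means perfect adherence."""
    ref_rank = {q: i for i, q in enumerate(reference_order)}
    errors = [(i - ref_rank[q]) ** 2 for i, q in enumerate(generated_order)]
    return sum(errors) / len(errors)

reference = ["sleep", "appetite", "mood", "self-harm"]   # expert-defined order (hypothetical)
generated = ["mood", "sleep", "appetite", "self-harm"]   # order produced by a language model
print(mean_squared_rank_error(generated, reference))     # -> 1.5
```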

    Resonant phonons: localization in a structurally ordered crystal

    Phonon localization is a phenomenon that influences numerous material properties in condensed matter physics. Anderson localization gives rise to localized atomic-scale phonon interferences in disordered lattices, with an influence limited to high-frequency phonons having wavelengths comparable to the size of a randomly perturbed unit cell. Here we computationally expose a form of phonon localization induced by augmenting a crystalline material with intrinsic phonon nanoresonators whose feature sizes can be smaller or larger than the phonon wavelengths but must be relatively small compared to the phonon mean free paths. This mechanism is deterministic and takes place within numerous discrete narrow-frequency bands spread throughout the full spectrum, with central frequencies controlled by design. For demonstration, we run molecular dynamics simulations of all-silicon nanopillared membranes at room temperature and apply narrowband wave packets to the underlying thermalized environment as an excitation at precisely the frequencies where resonant hybridizations are evident in the anharmonic phonon band structure. Upon comparison to other frequency ranges where the nanostructure does not exhibit local resonances, significant intrinsic spatial phonon localization along the direction of transport is explicitly observed. Furthermore, the energy exchange with external sources is minimized at the resonant frequencies. We conclude by making a direct comparison with Anderson localization, highlighting the superiority of the resonant phonons on both sides of the interference frequency limit.

    Visual Hallucination: Definition, Quantification, and Prescriptive Remediations

    The troubling rise of hallucination presents perhaps the most significant impediment to the advancement of responsible AI. In recent times, considerable research has focused on detecting and mitigating hallucination in Large Language Models (LLMs). However, hallucination is also quite prevalent in Vision-Language Models (VLMs). In this paper, we offer a fine-grained discourse on profiling VLM hallucination based on two tasks: i) image captioning, and ii) Visual Question Answering (VQA). We delineate eight fine-grained orientations of visual hallucination: i) Contextual Guessing, ii) Identity Incongruity, iii) Geographical Erratum, iv) Visual Illusion, v) Gender Anomaly, vi) VLM as Classifier, vii) Wrong Reading, and viii) Numeric Discrepancy. We curate Visual HallucInation eLiciTation (VHILT), a publicly available dataset comprising 2,000 samples generated using eight VLMs across the two tasks of captioning and VQA, along with human annotations for the aforementioned categories.
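
    As a rough illustration of how one annotated sample in such a dataset could be represented, the record layout below is our own assumption and not the published VHILT schema; only the eight category names come from the abstract.

```python
# Hypothetical record layout for one annotated sample; field names are our
# assumptions, not the released VHILT schema. Requires Python 3.10+ for "str | None".
from dataclasses import dataclass

CATEGORIES = [
    "Contextual Guessing", "Identity Incongruity", "Geographical Erratum",
    "Visual Illusion", "Gender Anomaly", "VLM as Classifier",
    "Wrong Reading", "Numeric Discrepancy",
]

@dataclass
class VisualHallucinationSample:
    image_id: str          # identifier of the source image
    task: str              # "captioning" or "vqa"
    model: str             # which of the eight VLMs produced the output
    question: str | None   # present only for VQA samples
    generated_text: str    # model caption or answer
    category: str          # one of CATEGORIES, assigned by human annotators

sample = VisualHallucinationSample(
    image_id="img_0001", task="vqa", model="some-vlm",
    question="How many people are in the photo?",
    generated_text="There are five people.", category="Numeric Discrepancy",
)
```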

    TDLR: Top (Semantic)-Down (Syntactic) Language Representation

    Language understanding involves processing text with both the grammatical and the common-sense contexts of the text fragments. The text “I went to the grocery store and brought home a car” requires both grammatical context (syntactic) and common-sense context (semantic) to capture the oddity of the sentence. Contextualized text representations learned by Language Models (LMs) are expected to capture a variety of syntactic and semantic contexts from large training corpora. Recent work such as ERNIE has shown that infusing knowledge contexts, where they are available, into LMs results in significant performance gains on General Language Understanding Evaluation (GLUE) benchmark tasks. However, to our knowledge, no knowledge-aware model has attempted to infuse knowledge through top-down, semantics-driven syntactic processing (e.g., from common sense to grammar) and operated directly on the attention mechanism that LMs leverage to learn the data context. We propose a learning framework, Top-Down Language Representation (TDLR), to infuse common-sense semantics into LMs. In our implementation, we build on BERT for its rich syntactic knowledge and use the knowledge graphs ConceptNet and WordNet to infuse semantic knowledge.
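
    As an illustrative sketch only (not the authors' TDLR implementation), one way to operate directly on the attention mechanism is to add a bias to self-attention scores for token pairs that a knowledge graph such as ConceptNet marks as related; the function below assumes PyTorch and a precomputed relatedness mask.

```python
# Illustrative sketch only, not the authors' TDLR implementation: bias raw
# self-attention scores so that token pairs related in a common-sense knowledge
# graph (e.g., ConceptNet) attend to each other more strongly.
import math
import torch

def knowledge_biased_attention(q, k, v, related_mask, bias=1.0):
    """q, k, v: (seq, dim) tensors; related_mask: (seq, seq) boolean matrix."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # standard scaled dot-product
    scores = scores + bias * related_mask.float()    # additive semantic bias (our assumption)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

# Toy usage: 4 tokens, 8-dim vectors, tokens 1 and 3 marked as related.
q = k = v = torch.randn(4, 8)
mask = torch.zeros(4, 4, dtype=torch.bool)
mask[1, 3] = mask[3, 1] = True
out = knowledge_biased_attention(q, k, v, mask)
```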

    A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models

    As Large Language Models (LLMs) continue to advance in their ability to write human-like text, a key challenge remains around their tendency to hallucinate: generating content that appears factual but is ungrounded. This issue of hallucination is arguably the biggest hindrance to safely deploying these powerful LLMs into real-world production systems that impact people's lives. The journey toward widespread adoption of LLMs in practical settings heavily relies on addressing and mitigating hallucinations. Unlike traditional AI systems focused on limited tasks, LLMs have been exposed to vast amounts of online text data during training. While this allows them to display impressive language fluency, it also means they are capable of extrapolating information from the biases in training data, misinterpreting ambiguous prompts, or modifying information to align superficially with the input. This becomes hugely alarming when we rely on language generation capabilities for sensitive applications, such as summarizing medical records, financial analysis reports, etc. This paper presents a comprehensive survey of over 32 techniques developed to mitigate hallucination in LLMs. Notable among these are Retrieval Augmented Generation (Lewis et al., 2021), Knowledge Retrieval (Varshney et al., 2023), CoNLI (Lei et al., 2023), and CoVe (Dhuliawala et al., 2023). Furthermore, we introduce a detailed taxonomy categorizing these methods based on various parameters, such as dataset utilization, common tasks, feedback mechanisms, and retriever types. This classification helps distinguish the diverse approaches specifically designed to tackle hallucination issues in LLMs. Additionally, we analyze the challenges and limitations inherent in these techniques, providing a solid foundation for future research in addressing hallucinations and related phenomena within the realm of LLMs.
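
    Among the surveyed families, the abstract highlights Retrieval Augmented Generation; the self-contained sketch below illustrates only the basic idea of grounding a prompt in retrieved passages before generation. The toy corpus, the word-overlap retriever, and the generate() stub are placeholders, not any particular library's API.

```python
# Self-contained sketch of the retrieval-augmented generation idea: ground the
# prompt in retrieved passages before generation.

def retrieve(query, corpus, k=2):
    """Rank passages by naive word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q_words & set(p.lower().split())), reverse=True)
    return ranked[:k]

def generate(prompt):
    return f"<LLM output conditioned on: {prompt[:60]}...>"  # stand-in for a real LLM call

corpus = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
]
question = "When was the Eiffel Tower completed?"
context = "\n".join(retrieve(question, corpus))
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```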

    Tailorable deformation and crushing behavior of 3D printed multilayered tetra-chiral lattices: Experiments and finite element modeling

    This study investigates the tunable auxetic and crushing characteristics of a set of novel, multilayered tetra-chiral (TC) lattices through a combination of experimental testing and finite element (FE) modeling, aiming to uncover how the mechanical properties are affected by layer stacking. Utilizing Digital Light Processing (DLP) to fabricate multilayered lattices from PlasGray photoresin across two distinct length scales, we observe length scale-dependent material properties, which prompt a more in-depth examination of the deformation and failure characteristics of multilayered lattices. The experimental results form the basis for subsequent FE analysis, employing a Drucker–Prager material model combined with ductile damage criteria to investigate the effects of layering on stiffness, strength, and energy absorption characteristics. By tuning the architectural parameters within the FE simulations, we expand the design space, aiming to uncover configurations that exhibit superior mechanical performance. The results indicate that, for a constant mass, bi-layered and multi-layered TC structures exhibit enhanced energy absorption, with increases of 114% and 149%, respectively. This study advances the understanding of layered tetra-chiral lattices and provides a foundation for future efforts to optimize the design and functionality of multilayered structures in various engineering applications.
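
    Crushing energy absorption of the kind compared above is commonly quantified as the area under the force-displacement curve; the short sketch below shows that calculation with made-up sample data and an assumed specimen mass, and is not taken from the paper's FE workflow.

```python
# Sketch: crushing energy absorption as the area under the force-displacement
# curve (trapezoidal rule). The data points and the 50 g specimen mass are
# made up for illustration only.
import numpy as np

displacement_mm = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # crush displacement
force_kN = np.array([0.0, 2.5, 3.0, 2.8, 3.5])          # measured reaction force

energy_J = np.trapz(force_kN * 1e3, displacement_mm * 1e-3)  # N * m = J
specific_energy = energy_J / 0.050                            # J per kg, assuming 50 g of lattice mass
print(f"absorbed energy: {energy_J:.2f} J, SEA: {specific_energy:.1f} J/kg")
```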

    Exploring the Relationship between LLM Hallucinations and Prompt Linguistic Nuances: Readability, Formality, and Concreteness

    As Large Language Models (LLMs) have advanced, they have brought forth new challenges, with one of the prominent issues being LLM hallucination. While various mitigation techniques are emerging to address hallucination, it is equally crucial to delve into its underlying causes. Consequently, in this preliminary exploratory investigation, we examine how linguistic factors in prompts, specifically readability, formality, and concreteness, influence the occurrence of hallucinations. Our experimental results suggest that prompts characterized by greater formality and concreteness tend to result in reduced hallucination. However, the outcomes pertaining to readability are somewhat inconclusive, showing a mixed pattern.
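
    Readability, one of the three prompt attributes studied, is often quantified with the Flesch Reading Ease formula; the sketch below is a minimal implementation with a crude syllable heuristic and is not necessarily the metric used in the paper.

```python
# Minimal Flesch Reading Ease implementation; the syllable counter is a rough
# heuristic (groups of consecutive vowels), so scores are approximate.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(flesch_reading_ease("Explain photosynthesis. Keep it simple."))  # lower = harder to read
```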

    The Troubling Emergence of Hallucination in Large Language Models--An Extensive Definition, Quantification, and Prescriptive Remediations

    The recent advancements in Large Language Models (LLMs) have garnered widespread acclaim for their remarkable emerging capabilities. However, the issue of hallucination has emerged in parallel as a by-product, posing significant concerns. While some recent endeavors have been made to identify and mitigate different types of hallucination, there has been a limited emphasis on the nuanced categorization of hallucination and associated mitigation methods. To address this gap, we offer a fine-grained discourse on profiling hallucination based on its degree, orientation, and category, along with offering strategies for alleviation. As such, we define two overarching orientations of hallucination: (i) factual mirage (FM) and (ii) silver lining (SL). To provide a more comprehensive understanding, both orientations are further sub-categorized into intrinsic and extrinsic, with three degrees of severity: (i) mild, (ii) moderate, and (iii) alarming. We also meticulously categorize hallucination into six types: (i) acronym ambiguity, (ii) numeric nuisance, (iii) generated golem, (iv) virtual voice, (v) geographic erratum, and (vi) time wrap. Furthermore, we curate HallucInation eLiciTation (HILT), a publicly available dataset comprising 75,000 samples generated using 15 contemporary LLMs, along with human annotations for the aforementioned categories. Finally, to establish a method for quantifying hallucination, and to offer a comparative spectrum that allows us to evaluate and rank LLMs based on their vulnerability to producing hallucinations, we propose the Hallucination Vulnerability Index (HVI). Amidst the extensive deliberations on policy-making for regulating AI development, it is of utmost importance to assess and measure which LLMs are more vulnerable to hallucination. We firmly believe that HVI holds significant value as a tool for the wider NLP community, with the potential to serve as a rubric in AI-related policy-making. In conclusion, we propose two solution strategies for mitigating hallucinations.
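
    The abstract does not give the HVI formula, so the sketch below only illustrates the general idea of aggregating annotated hallucination instances, weighted by severity, into a single comparable score per LLM; the weights and data are invented for illustration.

```python
# Illustrative only: aggregate per-output hallucination annotations, weighted
# by severity, into one score per model. The weights and annotations below are
# invented; the paper's actual HVI formula is not given in the abstract.

SEVERITY_WEIGHT = {"mild": 1, "moderate": 2, "alarming": 3}

def vulnerability_score(annotations):
    """annotations: list of (category, severity) pairs for one model's outputs."""
    return sum(SEVERITY_WEIGHT[severity] for _, severity in annotations)

model_annotations = {
    "model_a": [("numeric nuisance", "mild"), ("time wrap", "alarming")],
    "model_b": [("generated golem", "moderate")],
}
ranking = sorted(model_annotations, key=lambda m: vulnerability_score(model_annotations[m]))
print(ranking)  # least to most vulnerable under this toy scoring
```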

    FACTOID: FACtual enTailment fOr hallucInation Detection

    The widespread adoption of Large Language Models (LLMs) has facilitated numerous benefits. However, hallucination is a significant concern. In response, Retrieval Augmented Generation (RAG) has emerged as a highly promising paradigm to improve LLM outputs by grounding them in factual information. RAG relies on textual entailment (TE) or similar methods to check whether the text produced by LLMs is supported or contradicted by the retrieved documents. This paper argues that conventional TE methods are inadequate for spotting hallucinations in content generated by LLMs. For instance, consider a prompt about the USA's stance on the Ukraine war. The AI-generated text states, "... U.S. President Barack Obama says the U.S. will not put troops in Ukraine ..." However, during the war the U.S. president is Joe Biden, which contradicts factual reality. Moreover, current TE systems are unable to accurately annotate the given text and identify the exact portion that is contradicted. To address this, we introduce a new type of TE called Factual Entailment (FE), which aims to detect factual inaccuracies in content generated by LLMs while also highlighting the specific text segment that contradicts reality. We present FACTOID (FACtual enTailment fOr hallucInation Detection), a benchmark dataset for FE. We propose a multi-task learning (MTL) framework for FE, incorporating state-of-the-art (SoTA) long text embeddings such as e5-mistral-7b-instruct, along with GPT-3, SpanBERT, and RoFormer. The proposed MTL architecture for FE achieves an average 40% improvement in accuracy on the FACTOID benchmark compared to SoTA TE methods. As FE automatically detects hallucinations, we assess 15 modern LLMs and rank them using our proposed Auto Hallucination Vulnerability Index (HVI_auto). This index quantifies and offers a comparative scale to evaluate and rank LLMs according to their hallucinations.
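
    The Factual Entailment interface described above (a support/contradiction label plus the offending span) can be sketched as follows; the matching rule is a deliberately naive placeholder and bears no relation to the paper's multi-task model.

```python
# Toy sketch of a factual-entailment check: return a support/contradiction
# label plus the offending terms. The capitalized-word heuristic is a naive
# placeholder, not the paper's multi-task learning model.

def factual_entailment(generated: str, reference: str):
    """Return ('support' | 'contradiction', offending_terms_or_None)."""
    key_terms = [w.strip(",.") for w in generated.split() if w[:1].isupper()]  # crude entity proxy
    missing = [t for t in key_terms if t not in reference]
    if missing:
        return "contradiction", missing
    return "support", None

reference = "Joe Biden is the U.S. President during the war in Ukraine."
generated = "U.S. President Barack Obama says the U.S. will not put troops in Ukraine."
print(factual_entailment(generated, reference))  # -> ('contradiction', ['Barack', 'Obama'])
```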