
    The UN Guiding Principles on Business and Human Rights and the Human Rights of Workers to Form or Join Trade Unions and to Bargain Collectively

    This document is part of a digital collection provided by the Martin P. Catherwood Library, ILR School, Cornell University, pertaining to the effects of globalization on the workplace worldwide. Special emphasis is placed on labor rights, working conditions, labor market changes, and union organizing.

    A Comprehensive Review of the 2016 ASHA Code of Ethics

    The American Speech-Language-Hearing Association (ASHA) initially implemented a Code of Ethics in 1952, and has periodically revisited the content of the document with revisions to reflect the expanding scope of practice within speech-language pathology and audiology and to clarify certain concepts. Code revision is a cyclical, mandated task of the ASHA Board of Ethics conducted to assure accuracy, currency, and completeness of this most important document (Solomon-Rice & O’Rourke, 2016). The current version of the Code of Ethics (2016) was modified from the previous version (2010r), with an updated preamble, definitions of related vocabulary, and re-organized language in the principles. The new code, which supports collaboration, competence, and responsibility, serves as the ethical underpinning for students and clinical fellows, practicing clinicians, researchers, supervisors, and administrators. It is incumbent on ASHA members to encode this information and incorporate ethical practices across the span of their careers. The current article summarizes the changes between the 2010r and 2016 versions of the ASHA Code of Ethics for practicing speech-language pathologists and audiologists and for students studying in these fields. Managers may benefit from this tutorial in order to be familiar with the standards by which their speech-language pathologists and audiologists must abide. Official clarification regarding the ASHA Code of Ethics should be directed to the ASHA Director of Ethics at [email protected].

    Fuzzy Bayesian inference

    Bayesian methods provide a formalism for reasoning about partial beliefs under conditions of uncertainty. Given a set of exhaustive and mutually exclusive hypotheses, one can compute the probability of a hypothesis for given evidence using the Bayesian inversion formula. In Bayesian inference, the evidence could be a single atomic proposition or a multi-valued one. For multi-valued evidence, the values could be discrete, continuous, or fuzzy. For continuous-valued evidence, the density functions used in Bayesian inference are difficult to determine in many practical situations; complicated laboratory testing and advanced statistical techniques are required to estimate the parameters of the assumed type of distribution. Using the proposed fuzzy Bayesian approach, a formulation is derived to estimate the density function from the conditional probabilities of the fuzzy-supported values. It avoids the complicated testing and analysis, and it does not require the assumption of a particular type of distribution. The estimated density function in this approach is proved to conform to two axioms of probability theory. An example is provided in the paper.
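
    The Bayesian inversion the abstract refers to, extended to fuzzy-supported evidence, can be sketched in a few lines of Python. This is only a minimal reading of the approach: the likelihood of a crisp observation is approximated by mixing the conditional probabilities of the fuzzy values in proportion to their memberships, and the posterior then follows from Bayes' rule. The fuzzy sets, membership functions, priors, and conditional probabilities below are illustrative assumptions, not values from the paper.

    # A minimal, hypothetical sketch of Bayes' rule with fuzzy-supported evidence.
    def tri(x, a, b, c):
        """Triangular membership function peaking at b on the support [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Membership functions of the fuzzy evidence values (assumed shapes).
    membership = {
        "low":    lambda x: tri(x, 0.0, 2.0, 5.0),
        "medium": lambda x: tri(x, 2.0, 5.0, 8.0),
        "high":   lambda x: tri(x, 5.0, 8.0, 10.0),
    }

    # Priors over two exhaustive, mutually exclusive hypotheses (assumed).
    prior = {"H1": 0.6, "H2": 0.4}

    # Conditional probabilities P(fuzzy value | hypothesis) (assumed).
    cond = {
        "H1": {"low": 0.7, "medium": 0.2, "high": 0.1},
        "H2": {"low": 0.1, "medium": 0.3, "high": 0.6},
    }

    def likelihood(x, h):
        """Approximate p(x | h) by membership-weighted mixing of P(fuzzy value | h)."""
        weights = {f: mu(x) for f, mu in membership.items()}
        total = sum(weights.values())
        if total == 0.0:
            return 0.0
        return sum(w * cond[h][f] for f, w in weights.items()) / total

    def posterior(x):
        """Bayesian inversion: P(h | x) is proportional to p(x | h) * P(h)."""
        joint = {h: likelihood(x, h) * p for h, p in prior.items()}
        evidence = sum(joint.values())
        return {h: j / evidence for h, j in joint.items()}

    print(posterior(6.5))  # with these numbers: roughly {'H1': 0.33, 'H2': 0.67}

    With the assumed numbers, an observation of 6.5 shifts belief from the prior (0.6, 0.4) towards H2, since the fuzzy values it supports ("medium" and "high") are more probable under H2.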

    Effect of roughness on vertical dispersion coefficient over idealized urban street canyons under neutral stratification

    Ground-level pollutants (e.g. vehicular emissions) are the primary pollutant sources affecting public health and living quality in many modern compact cities. Thus, it is necessary to estimate the pollutant concentration and distribution in urban areas in a fast and reliable manner for better urban planning. The Gaussian plume dispersion model is commonly used in practice. However, one of its major parameters, the dispersion coefficient, often overlooks the effect of surface roughness, so its accuracy in urban applications is in doubt. In the presence of large-scale roughness elements, the calculation of pollutant distribution in the urban boundary layer (UBL) would be prone to error. Our previous studies, using ...
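
    For context, the standard Gaussian plume formula in which the vertical dispersion coefficient σz appears can be written down directly; the sketch below is a generic textbook form with ground reflection, not the roughness-corrected parameterization this work develops. The power-law coefficients for σy and σz are placeholders chosen only to make the example run.

    # A generic sketch of the standard Gaussian plume model with ground
    # reflection, showing where the vertical dispersion coefficient enters.
    # The power-law coefficients (ay, by, az, bz) are placeholders, not the
    # roughness-dependent values this work derives.
    import math

    def sigma(x, a, b):
        """Power-law dispersion coefficient sigma = a * x**b (illustrative form)."""
        return a * x ** b

    def plume_concentration(x, y, z, Q, u, H, ay=0.2, by=0.9, az=0.1, bz=0.8):
        """Concentration at (x, y, z) downwind of a point source.

        Q: source strength, u: mean wind speed, H: effective source height.
        """
        sy, sz = sigma(x, ay, by), sigma(x, az, bz)
        lateral = math.exp(-y ** 2 / (2.0 * sy ** 2))
        vertical = (math.exp(-(z - H) ** 2 / (2.0 * sz ** 2))
                    + math.exp(-(z + H) ** 2 / (2.0 * sz ** 2)))  # ground reflection
        return Q / (2.0 * math.pi * u * sy * sz) * lateral * vertical

    # Example: ground-level concentration 200 m downwind of a near-ground source.
    print(plume_concentration(x=200.0, y=0.0, z=0.0, Q=1.0, u=2.0, H=1.0))

    Because the predicted concentration scales with 1/(σy σz), any error in the vertical dispersion coefficient propagates directly into the predicted ground-level concentration, which is why overlooking surface roughness matters.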

    On plume dispersion over two-dimensional urban-like idealized roughness elements with height variation

    A series of large-eddy simulation (LES) models over two-dimensional (2D) urban-like idealized roughness elements with height variation was performed. Results show that building-height variability (BHV) could enhance the aerodynamic resistance of urban surfaces. Both the air exchange rate (ACH) and the vertical dispersion coefficient σz increase with increasing friction factor, implying that the air quality in both street canyons and the urban boundary layer (UBL) could be improved by increasing the surface roughness via BHV. In addition, the parameters used in the estimates of the dispersion coefficient are modified substantially by the friction factor, suggesting that the friction factor could be used to parameterize the dispersion coefficient of the urban Gaussian plume model.

    Pollutant removal, dispersion, and entrainment over two-dimensional idealized street canyons

    Idealized two-dimensional (2D) street canyon models of unity building-height-to-street-width (aspect) ratio are employed to examine pollutant transport over hypothetical urban areas. The results show that pollutant removal is mainly governed by atmospheric turbulence when pollutant sources exist in the street canyons. Numerous decelerating, uprising air masses are located at roof level, implying that the pollutant is removed from the street canyons to the urban boundary layer (UBL) by ejections. For street canyons without a pollutant source, the removal by ejections is limited, leading to insignificant turbulent pollutant removal. The roof-level turbulent kinetic energy (TKE) distribution demonstrates that its production is governed not by local wind shear but by the descending TKE from the UBL. In the UBL, the pollutant disperses rapidly over the buildings, exhibiting a Gaussian-plume shape. The vertical pollutant profiles illustrate self-similar behavior in the downstream region. Future studies will focus on the characteristic plume shape over 2D idealized street canyons of different aspect ratios.
    The 13th International Conference on Wind Engineering (ICWE13), Amsterdam, The Netherlands, 10-15 July 2011

    A methodology for determining the resolvability of multiple vehicle occlusion in a monocular traffic image sequence

    This paper proposes a knowledge-based methodology for determining the resolvability of N occluded vehicles seen in a monocular image sequence. The resolvability of each vehicle is determined by: firstly, deriving the relationship between the camera position and the number of vertices of a projected cuboid on the image; secondly, finding the direction of the edges of the projected cuboid in the image; and thirdly, modeling the maximum number of occluded cuboid edges beyond which the occluded cuboid is irresolvable. The proposed methodology has been tested rigorously on a number of real-world monocular traffic image sequences involving multiple-vehicle occlusions, and is found to be able to successfully determine the number of occluded vehicles as well as the resolvability of each vehicle. We believe the proposed methodology will form the foundation for a more accurate traffic flow estimation and recognition system.
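
    The first step, relating the camera position to the number of projected cuboid vertices, can be illustrated with a small geometric sketch. For a convex cuboid, a face is visible exactly when its outward normal points towards the camera, and the visible vertices are those lying on at least one visible face; a generic oblique view therefore exposes three faces and seven vertices. The cuboid dimensions and camera position below are arbitrary illustrative values, and the code is an assumed reconstruction of the idea rather than the paper's formulation.

    # Hypothetical sketch: count the vertices of a vehicle cuboid that a camera
    # at a given position can see.  For a convex cuboid, a face is visible when
    # its outward normal points towards the camera.
    def cuboid_vertices(w, l, h):
        """Axis-aligned cuboid with one corner at the origin."""
        return [(x, y, z) for x in (0.0, w) for y in (0.0, l) for z in (0.0, h)]

    def visible_vertex_count(w, l, h, camera):
        faces = {  # outward normal -> test selecting the vertices of that face
            (1, 0, 0): lambda v: v[0] == w, (-1, 0, 0): lambda v: v[0] == 0.0,
            (0, 1, 0): lambda v: v[1] == l, (0, -1, 0): lambda v: v[1] == 0.0,
            (0, 0, 1): lambda v: v[2] == h, (0, 0, -1): lambda v: v[2] == 0.0,
        }
        verts = cuboid_vertices(w, l, h)
        visible = set()
        for normal, on_face in faces.items():
            face_verts = [v for v in verts if on_face(v)]
            p = face_verts[0]
            towards_camera = sum(n * (c - q) for n, c, q in zip(normal, camera, p))
            if towards_camera > 0:  # face turned towards the camera
                visible.update(face_verts)
        return len(visible)

    # An oblique elevated view exposes three faces of the cuboid -> 7 vertices.
    print(visible_vertex_count(2.0, 4.5, 1.5, camera=(10.0, 10.0, 8.0)))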

    Hosted Lecture: Brigadier Feroz Khan on Pakistan's Nuclear Weapons

    Naval Postgraduate School, Center for Contemporary Conflict (CCC)

    A method for vehicle count in the presence of multiple-vehicle occlusions in traffic images

    This paper proposes a novel method for accurately counting the number of vehicles that are involved in multiple-vehicle occlusions, based on the resolvability of each occluded vehicle, as seen in a monocular traffic image sequence. Assuming that the occluded vehicles are segmented from the road background by a previously proposed vehicle segmentation method and that a deformable model is geometrically fitted onto the occluded vehicles, the proposed method first deduces the number of vertices per individual vehicle from the camera configuration. Second, a contour description model is utilized to describe the direction of the contour segments with respect to the vanishing points, from which the individual contour descriptions and the vehicle count are determined. Third, it assigns a resolvability index to each occluded vehicle based on a resolvability model, from which each occluded vehicle model is resolved and the vehicle dimensions are measured. The proposed method has been tested on 267 sets of real-world monocular traffic images containing 3074 vehicles with multiple-vehicle occlusions and is found to be 100% accurate in calculating the vehicle count, in comparison with human inspection. By comparing the estimated dimensions of the resolved generalized deformable model of the vehicle with the actual dimensions published by the manufacturers, the root-mean-square errors for width, length, and height estimation are found to be 48, 279, and 76 mm, respectively. © 2007 IEEE.
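
    The resolvability index itself is not specified in the abstract, so the following is only a hypothetical illustration of the kind of bookkeeping it describes: each occluded vehicle is represented by its projected cuboid model, an index is derived from how many of the expected model edges remain visible, and vehicles whose index clears a threshold are counted as resolvable. The data structure, the edge-fraction index, and the 0.5 threshold are all assumptions made for this sketch, not the paper's actual resolvability model.

    # Hypothetical resolvability bookkeeping for occluded vehicles.
    from dataclasses import dataclass

    @dataclass
    class ProjectedCuboid:
        visible_edges: int  # edges of the projected cuboid still seen in the image
        total_edges: int    # edges the camera configuration says should be visible

    def resolvability_index(cuboid):
        """Fraction of expected model edges that survive the occlusion (assumed index)."""
        return cuboid.visible_edges / cuboid.total_edges

    def count_resolvable(cuboids, threshold=0.5):
        """Count vehicles whose occluded model can still be geometrically fitted."""
        return sum(1 for c in cuboids if resolvability_index(c) >= threshold)

    # Example: three mutually occluding vehicles, one too heavily occluded.
    scene = [ProjectedCuboid(7, 9), ProjectedCuboid(5, 9), ProjectedCuboid(2, 9)]
    print(count_resolvable(scene))  # -> 2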

    A Natural Language Processing Based Internet Agent

    Searching for useful information is difficult because of the information overloading problem. Technological advances, notably the World-Wide Web (WWW), allow every ordinary information owner to offer information online for others to access and retrieve. However, this also makes up a global information system that is extremely large-scale, diverse, and dynamic. Internet agents and Internet search engines have been used to deal with such problems, but the search results are usually not very relevant to what a user wants, since most of them use simple keyword matching. In this paper, we propose a natural language processing based agent (NIAGENT) that understands a user's natural-language query. NIAGENT not only cooperates with a meta Internet search engine in order to increase the recall of web pages but also analyzes the contents of the referenced documents to increase precision. Moreover, the proposed agent is autonomous, light-weight, and multithreaded. The architectural design also represents an interesting application of a distributed and cooperative computing paradigm. A prototype of NIAGENT, implemented in Java, shows its promise to find more useful information than keyword-based searching.
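
    The recall-then-precision strategy the abstract describes, cooperating with a meta search engine and then analyzing document contents, can be caricatured in a few lines of Python. The stub search back-ends, the bag-of-words tokenizer, and the overlap score below are illustrative assumptions; NIAGENT's actual natural language analysis is not reproduced here.

    # Illustrative caricature of meta-search (recall) plus content re-ranking (precision).
    from collections import Counter

    def tokenize(text):
        return [w for w in text.lower().split() if w.isalpha()]

    def overlap_score(query, document):
        """Simple bag-of-words overlap between the query and a page's text."""
        q, d = Counter(tokenize(query)), Counter(tokenize(document))
        return sum(min(q[w], d[w]) for w in q)

    def meta_search(query, backends):
        """Union the results of several back-ends, then re-rank by content."""
        candidates = {}
        for backend in backends:
            for url, text in backend(query):
                candidates.setdefault(url, text)  # the union increases recall
        return sorted(candidates.items(),
                      key=lambda item: overlap_score(query, item[1]),
                      reverse=True)               # re-ranking increases precision

    # Stub back-ends standing in for real search engines.
    def engine_a(query):
        return [("http://example.org/a", "natural language processing for internet agents"),
                ("http://example.org/b", "cooking recipes and kitchen tips")]

    def engine_b(query):
        return [("http://example.org/c", "keyword matching versus natural language queries")]

    print(meta_search("natural language query", [engine_a, engine_b]))

    Unioning the back-end results raises recall, while the content-based re-ranking pushes pages that merely matched a keyword down the list, which is the division of labour the abstract attributes to NIAGENT.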