312 research outputs found

    Adult English Learners' Perceptions of their Pronunciation and Linguistic Self-Confidence

    Second language pronunciation research often focuses on intelligibility from the perspective of native speakers. However, few studies focus on English learners' (ELs) perceptions of pronunciation, and few examine linguistic self-confidence (LSC). This study explores advanced-level adult ELs' perceptions of their own pronunciation and the relationship between those perceptions and LSC. Inspiration for this study comes from ELs in my classes and Tracy Derwing's 2003 study. This mixed methods study utilized an initial questionnaire followed by individual interviews. Results suggest that adult ELs perceive that English pronunciation affects quality of life in a variety of ways. Results also suggest that a relationship exists between ELs' perceptions of their own pronunciation and LSC, though its extent is unclear. LSC is a highly changeable construct that is affected by personal, cultural, and social elements as well as the context of the communicative situation.

    Anatomically Constrained Implicit Face Models

    Coordinate-based implicit neural representations have gained rapid popularity in recent years, as they have been used successfully in image, geometry, and scene modeling tasks. In this work, we present a novel use case for such implicit representations in the context of learning anatomically constrained face models. Actor-specific anatomically constrained face models are the state of the art in both facial performance capture and performance retargeting. Despite their practical success, these anatomical models are slow to evaluate and often require extensive data capture to be built. We propose the anatomical implicit face model: an ensemble of implicit neural networks that jointly learn to model the facial anatomy and the skin surface with high fidelity, and that can readily be used as a drop-in replacement for conventional blendshape models. Given an arbitrary set of skin surface meshes of an actor and only a neutral shape with estimated skull and jaw bones, our method can recover a dense anatomical substructure that constrains every point on the facial surface. We demonstrate the usefulness of our approach in several tasks, including shape fitting, shape editing, and performance retargeting.
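
    To make the idea above concrete, the following is a minimal, hypothetical sketch of a coordinate-based implicit network of the general kind the abstract describes, written in PyTorch. The inputs (a neutral-surface point plus an expression code), the outputs (a skin displacement and a scalar anatomical-constraint value), and the layer sizes are illustrative assumptions, not the authors' actual architecture.

        # Hypothetical sketch of a coordinate-based implicit face field.
        # Inputs, outputs, and layer sizes are assumptions for illustration only.
        import torch
        import torch.nn as nn

        class ImplicitFaceField(nn.Module):
            def __init__(self, code_dim=32, hidden=256):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(3 + code_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 4),  # 3D skin displacement + scalar bone-distance value
                )

            def forward(self, points, code):
                # points: (N, 3) neutral-surface coordinates; code: (code_dim,) expression code
                x = torch.cat([points, code.expand(points.shape[0], -1)], dim=-1)
                out = self.net(x)
                return out[:, :3], out[:, 3]  # per-point displacement, constraint value

    In such a setup, the displacements deform the neutral mesh, while the constraint value can be penalized during training so that every surface point stays consistent with the estimated skull and jaw, which is the role the anatomical substructure plays in the abstract.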

    Network Visualization Literacy: Task, Context, and Layout

    Thesis (Ph.D.) - Indiana University, School of Informatics, Computing, and Engineering, 2018
    Information visualization as a practice is becoming increasingly global, conducted by and distributed to increasingly diverse stakeholder groups. Visualizations are being viewed in casual contexts and for a variety of purposes. The use of network visualizations has likewise increased in recent years, in part because network visualizations have properties that are applicable to datasets ranging from academic journal and patent citations to molecular interactions to the movement of refugees across national borders. Unlike charts based on numerical or categorical axes, common network visualizations operate under a set of rules that are largely unexplained to the users of the diagrams. For example, unlike axis-based charts, there is no stable reference system across node-link diagrams: the same dataset can produce many visualizations that look very different from each other, depending on the choice of layout algorithm, rotation, data thresholding, and so on. Research on the skills required to interpret network visualizations, and on the prevalence of those skills, has typically been small in scale, limited to a small group of users or a limited set of visualization design choices. With the broadening of the audiences for visualizations and the dissemination of more sophisticated visualization types, a detailed examination of the typical skills of a novice viewer of network visualizations is crucial to the development of appropriate and successful visualizations. This dissertation advances our understanding of network visualization literacy by studying the performance of both novices and experts in network science on a variety of network analysis tasks and datasets, using a variety of visualization designs. The empirical results will provide a baseline for understanding network visualization usage and will offer advice to visualization designers on the design features that best support particular tasks.

    Draper Station Analysis Tool

    Draper Station Analysis Tool (DSAT) is a computer program, built on commercially available software, for simulating and analyzing complex dynamic systems. Heretofore used in designing and verifying guidance, navigation, and control systems of the International Space Station, DSAT has a modular architecture that lends itself to modification for application to spacecraft or terrestrial systems. DSAT consists of user-interface, data-structures, simulation-generation, analysis, plotting, documentation, and help components. DSAT automates the construction of simulations and the process of analysis. DSAT provides a graphical user interface (GUI), plus a Web-enabled interface, similar to the GUI, that enables a remotely located user to gain access to the full capabilities of DSAT via the Internet and Web-browser software. Data structures are used to define the GUI, the Web-enabled interface, simulations, and analyses. Three data structures define the type of analysis to be performed: closed-loop simulation, frequency response, and/or stability margins. DSAT can be executed on almost any workstation, desktop, or laptop computer. DSAT provides better than an order-of-magnitude improvement in cost, schedule, and risk assessment for simulation-based design and verification of complex dynamic systems.
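
    The abstract names three data structures that define the type of analysis to be performed. Purely as an illustration of that configuration idea, and not the actual DSAT schema, here is a hypothetical Python sketch of analysis-definition records; all field names, analysis identifiers, and model names are assumptions.

        # Hypothetical analysis-definition records, illustrating the idea of data
        # structures that select an analysis type; not the actual DSAT schema.
        from dataclasses import dataclass, field

        @dataclass
        class AnalysisSpec:
            kind: str          # "closed_loop", "frequency_response", or "stability_margins"
            model: str         # name of the simulation model to analyze
            parameters: dict = field(default_factory=dict)  # e.g. duration, frequency range

        specs = [
            AnalysisSpec("closed_loop", "iss_gnc", {"duration_s": 600.0}),
            AnalysisSpec("frequency_response", "iss_gnc", {"f_min_hz": 0.01, "f_max_hz": 10.0}),
            AnalysisSpec("stability_margins", "iss_gnc"),
        ]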

    YebC Modulates OspC and VlsE Inverse Regulation and VlsE Expression in Persistent Lyme Disease

    Background & Hypothesis: Lyme disease, caused by the bacterium Borrelia burgdorferi, is the most common vector-borne infectious disease in the United States. Although easily treated with antibiotics, undiagnosed cases may develop into persistent infections with complications including Lyme carditis, neuroborreliosis, & arthritis. VlsE antigen variation is one of the major mechanisms employed by B. burgdorferi to establish persistent infection. We hypothesize that YebC modulates VlsE expression and antigen variation, enabling the shift from acute to persistent infection. Materials & Methods: C3H/HeN or C3H/SCID mice were infected with the B. burgdorferi strain 5A4NP1, yebC mutant, and yebC complement at a dose of 105 or 106 spirochetes. Mice were sacrificed at days 7, 30, 60, and 90 post-infection and tissue samples were subjected to RNA and DNA extraction. Results: YebC levels were closely associated with the upregulation of vlsE and the downregulation of ospC in vitro and in vivo. The yebC mutant displayed loss of infectivity in C3H/HeN mice, and reduced VlsE antigen variation. Conclusion & Impact: This data demonstrates that YebC of B burgdorferi can regulate the frequency of vlsE recombination and modulates the inverse regulation of OspC and VlsE. This new factor may serve as an avenue for developing drugs which can target vlsE recombination to combat complications of persistent Lyme disease

    From Measure to Leisure: Extending Theory on Technology in the Workplace

    The values present both in modern organizations and in research on these organizations reflect the organizational culture that has developed gradually over time. For example, research on organizations regularly focuses on the aspects of work that can be most easily quantified, such as the hierarchy within the organization or the physical arrangement of the office. Less defined aspects of organizations, such as the support for visibility and reflection, are more difficult to study and potentially less valued by the organizational culture. Similarly, the scientific management movement that spurred the Industrial Revolution is a very visible example of the high value that has been assigned to quantifiable efficiency within the workplace itself. Though the scientific management movement was soon contradicted by findings that showed the importance of psychological factors such as individual recognition, the ultimate response within organizations was to quantify additional aspects of the work environment, with varying degrees of success. The values that give efficiency and quantification this prominence in the workplace and in organizational research also impact the design and use of computing technology in the workplace. Computing has become a significant element in the modern organization, but the accepted role for computing technologies is often restricted to the automation of analytic tasks formerly accomplished by workers. In this way, computing technology becomes a surrogate for a human brain, attempting to model the way a specific type of work has traditionally been done. The mental processes involved in work, however, are not simply analytical. David Levy (2005) contends that the excess of information available for analysis in contemporary work environments cannot be meaningfully processed without allowing workers time for reflection and contemplation. This time may help workers draw connections that are still difficult for computers, or it may provide workers with opportunities for collaboration and diversification. The elevation of the importance of visibility and reflection within the workplace may have more success if undertaken in conjunction with the installation of technology designed for this purpose. Because current organizational studies typically omit activities with complex motivations, initial studies on the subject must gather data for the purpose of grounded (inductive) theory generation. The study described herein addresses traditional organizational research topics as well as the presence and use of non-task-based activities in the workplace. The study takes a broad look at a university department encompassing approximately 60 individuals, utilizing surveys and interviews to collect a variety of background information. As an additional intervention, a prototype technology device with ludic intentions was introduced to the department, and its use provided further insight into the role of technology in the workplace. Ultimately, a series of testable hypotheses are proposed to guide further research into visibility and reflection in the workplace.

    Knotty Articulations: Professors and Preservice Teachers on Teaching Literacy in Urban Schools

    In this qualitative study, we examined preservice teachers' articulations of what it meant to teach literacy in urban settings and the roles that we as university instructors played in their understandings of the terms urban, literacy, and teacher. We framed the study within extant studies of teacher education and research on metaphors. Data indicated that the participants metaphorically constructed literacy as an object that could be passed from teacher to student and that was often missing, hidden, or buried in urban settings. Implications of the study suggest that faculty members are one factor among several important influences in preservice teachers becoming professionals, and the metaphors faculty use in teaching preservice teachers deserve careful consideration.

    Infinite 3D Landmarks: Improving Continuous 2D Facial Landmark Detection

    In this paper, we examine three important issues in the practical use of state-of-the-art facial landmark detectors and show how a combination of specific architectural modifications can directly improve their accuracy and temporal stability. First, many facial landmark detectors require face normalization as a preprocessing step, which is accomplished by a separately trained neural network that crops and resizes the face in the input image. There is no guarantee that this pre-trained network performs the optimal face normalization for landmark detection. We instead analyze the use of a spatial transformer network that is trained alongside the landmark detector in an unsupervised manner, jointly learning optimal face normalization and landmark detection. Second, we show that modifying the output head of the landmark predictor to infer landmarks in a canonical 3D space can further improve accuracy. To convert the predicted 3D landmarks into screen space, we additionally predict the camera intrinsics and head pose from the input image. As a side benefit, this allows us to predict the 3D face shape from a given image using only 2D landmarks as supervision, which is useful for determining landmark visibility, among other things. Finally, when training a landmark detector on multiple datasets at the same time, annotation inconsistencies across datasets force the network to produce a suboptimal average. We propose to add a semantic correction network to address this issue. This additional lightweight neural network is trained alongside the landmark detector, without requiring any additional supervision. While the insights of this paper can be applied to most common landmark detectors, we specifically target a recently proposed continuous 2D landmark detector to demonstrate how each of our additions leads to meaningful improvements over the state of the art on standard benchmarks.
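
    As one concrete illustration of the projection step mentioned above (mapping predicted canonical 3D landmarks to screen space using predicted head pose and camera intrinsics), here is a minimal, hypothetical sketch using a standard pinhole camera model. The function name, array shapes, and the assumption of a simple rigid pose are ours for illustration and are not the paper's implementation.

        # Hypothetical pinhole projection of canonical 3D landmarks to screen space,
        # given a predicted head pose (R, t) and camera intrinsics K.
        import numpy as np

        def project_landmarks(landmarks_3d, R, t, K):
            """landmarks_3d: (N, 3); R: (3, 3) rotation; t: (3,) translation; K: (3, 3)."""
            cam = landmarks_3d @ R.T + t      # canonical space -> camera space
            uvw = cam @ K.T                   # apply camera intrinsics
            return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> (N, 2) pixel coordinates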

    Engaging Researchers in Data Dialogues: Designing Collaborative Programming to Promote Research Data Sharing

    A range of regulatory pressures emanating from funding agencies and scholarly journals increasingly encourages researchers to engage in formal data sharing practices. As academic libraries continue to refine their role in supporting researchers in this data sharing space, one particular challenge has been finding new ways to meaningfully engage with campus researchers. Libraries help shape norms and encourage data sharing through education and training, and there has been significant growth in the services these institutions are able to provide and the ways in which library staff are able to collaborate and communicate with researchers. Evidence also suggests that within disciplines, normative pressures and expectations around professional conduct have a significant impact on data sharing behaviors (Kim and Adler 2015; Sigit Sayogo and Pardo 2013; Zenk-Moltgen et al. 2018). Duke University Libraries' Research Data Management program has recently centered part of its outreach strategy on leveraging peer networks and social modeling to encourage and normalize robust data sharing practices among campus researchers. The program has hosted two panel discussions on issues related to data management—specifically, data sharing and research reproducibility. This paper reflects on some lessons learned from these outreach efforts and outlines next steps.

    Fast Nonlinear Least Squares Optimization of Large-Scale Semi-Sparse Problems

    Many problems in computer graphics and vision can be formulated as a nonlinear least squares optimization problem, for which numerous off-the-shelf solvers are readily available. Depending on the structure of the problem, however, existing solvers may be more or less suitable, and in some cases the solution comes at the cost of lengthy convergence times. One such case is semi-sparse optimization problems, emerging for example in localized facial performance reconstruction, where the nonlinear least squares problem can be composed of hundreds of thousands of cost functions, each one involving many of the optimization parameters. While such problems can be solved with existing solvers, the computation time can severely hinder the applicability of these methods. We introduce a novel iterative solver for nonlinear least squares optimization of large-scale semi-sparse problems. We use the nonlinear Levenberg-Marquardt method to locally linearize the problem in parallel, based on its first-order approximation. Then, we decompose the linear problem into small blocks, using the local Schur complement, leading to a more compact linear system without loss of information. The resulting system is dense, but its size is small enough to be solved using a parallel direct method in a short amount of time. The main benefit of such an approach is that the overall optimization process is entirely parallel and scalable, making it suitable for mapping onto graphics hardware (GPU). By using our minimizer, results are obtained up to one order of magnitude faster than with other existing solvers, without sacrificing the generality and the accuracy of the model. We provide a detailed analysis of our approach and validate our results with the application of performance-based facial capture using a recently proposed anatomical local face deformation model.
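
    To illustrate the Schur-complement reduction the abstract relies on, here is a generic, hypothetical sketch of one damped Gauss-Newton/Levenberg-Marquardt step in which one block of parameters is eliminated, leaving a small dense system that is then solved directly. The split into "global" and "local" parameters, the dense matrices, and all names are illustrative assumptions, not the paper's actual block structure (where the eliminated block is block-diagonal and processed in parallel).

        # Hypothetical Schur-complement step for a damped normal-equations system;
        # the global/local split and dense matrices are assumptions for illustration.
        import numpy as np

        def schur_solve(JtJ, Jtr, n_global, lam=1e-3):
            """Solve (JtJ + lam*I) dx = -Jtr, eliminating all but the first n_global params."""
            H = JtJ + lam * np.eye(JtJ.shape[0])   # Levenberg-Marquardt damping
            A = H[:n_global, :n_global]            # global-global block
            B = H[:n_global, n_global:]            # global-local coupling
            C = H[n_global:, n_global:]            # local-local block (block-diagonal in practice)
            b1, b2 = -Jtr[:n_global], -Jtr[n_global:]

            C_inv_Bt = np.linalg.solve(C, B.T)     # done per small block in a real semi-sparse system
            C_inv_b2 = np.linalg.solve(C, b2)
            S = A - B @ C_inv_Bt                   # compact dense Schur complement
            x1 = np.linalg.solve(S, b1 - B @ C_inv_b2)   # solve the reduced dense system
            x2 = C_inv_b2 - C_inv_Bt @ x1                # back-substitute the eliminated update
            return np.concatenate([x1, x2])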