Hip Hop DJs and the Evolution of Technology: Cultural Exchange, Innovation, and Democratization
Review of Hip Hop DJs and the Evolution of Technology: Cultural Exchange, Innovation, and Democratization, by André Sirois (2016)
Computer technologies and institutional memory
NASA programs for manned space flight are in their 27th year. Scientists and engineers who worked continuously on the development of aerospace technology during that period are approaching retirement, and the resulting loss to the organization will be considerable. Although this problem is general to the NASA community, it is explored here in terms of the institutional memory and technical expertise of a single individual in the Man-Systems division. The expert's main domain, spacecraft lighting, became the subject area for analysis in these studies. The report starts with an analysis of the cumulative expertise and institutional memory of technical employees of organizations such as NASA. A set of solutions to this problem is examined and found inadequate. Two solutions are then investigated at length: hypertext and expert systems. Illustrative examples of hypertext and expert-system representations of spacecraft lighting are provided. These computer technologies can be used to ameliorate the problem of the loss of invaluable personnel.
Propagation of Input Uncertainty in Presence of Model-Form Uncertainty: A Multi-fidelity Approach for CFD Applications
Proper quantification and propagation of uncertainties in computational
simulations are of critical importance. This issue is especially challenging
for CFD applications. A particular obstacle for uncertainty quantifications in
CFD problems is the large model discrepancies associated with the CFD models
used for uncertainty propagation. Neglecting or improperly representing the
model discrepancies leads to inaccurate and distorted uncertainty distribution
for the Quantities of Interest. High-fidelity models, being accurate yet
expensive, can accommodate only a small ensemble of simulations and thus lead
to large interpolation errors and/or sampling errors; low-fidelity models can
propagate a large ensemble, but can introduce large modeling errors. In this
work, we propose a multi-model strategy to account for the influences of model
discrepancies in uncertainty propagation and to reduce their impact on the
predictions. Specifically, we take advantage of CFD models of multiple
fidelities to estimate the model discrepancies associated with the
lower-fidelity model in the parameter space. A Gaussian process is adopted to
construct the model discrepancy function, and a Bayesian approach is used to
infer the discrepancies and corresponding uncertainties in the regions of the
parameter space where the high-fidelity simulations are not performed. The
proposed multi-model strategy combines information from models with different
fidelities and computational costs, and is of particular relevance for CFD
applications, where a hierarchy of models with a wide range of complexities
exists. Several examples of relevance to CFD applications are performed to
demonstrate the merits of the proposed strategy. Simulation results suggest
that, by combining low- and high-fidelity models, the proposed approach
produces better results than either model can achieve individually.
Comment: 18 pages, 8 figures
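To make the multi-fidelity idea concrete, the sketch below is a minimal illustration only, not the paper's implementation: it fits a Gaussian process to the discrepancy between a high-fidelity and a low-fidelity model observed at a handful of parameter points, then corrects the cheap model across the parameter space and reports the posterior standard deviation as a (partial) measure of the remaining model-form uncertainty. The toy models, kernel, and training locations are all assumptions.

```python
# Minimal multi-fidelity sketch: learn the discrepancy between a cheap
# low-fidelity model and a sparsely sampled high-fidelity model with a
# Gaussian process, then correct low-fidelity predictions elsewhere.
# The toy models and kernel choices below are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def low_fidelity(x):             # cheap, biased surrogate
    return np.sin(8.0 * x)

def high_fidelity(x):            # expensive "truth", sampled sparsely
    return np.sin(8.0 * x) + 0.3 * x**2 - 0.1

# A small ensemble of high-fidelity runs at selected parameter values.
x_hf = np.linspace(0.0, 1.0, 6).reshape(-1, 1)
delta_train = (high_fidelity(x_hf) - low_fidelity(x_hf)).ravel()

# GP prior over the discrepancy function delta(x).
gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=0.2),
    normalize_y=True,
)
gp.fit(x_hf, delta_train)

# Correct the low-fidelity model over the whole parameter space; the GP
# posterior std quantifies uncertainty where no high-fidelity run exists.
x_query = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
delta_mean, delta_std = gp.predict(x_query, return_std=True)
corrected = low_fidelity(x_query).ravel() + delta_mean

print("max posterior std of the inferred discrepancy:", delta_std.max())
```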
Shuffling a Stacked Deck: The Case for Partially Randomized Ranking of Search Engine Results
In-degree, PageRank, number of visits and other measures of Web page
popularity significantly influence the ranking of search results by modern
search engines. The assumption is that popularity is closely correlated with
quality, a more elusive concept that is difficult to measure directly.
Unfortunately, the correlation between popularity and quality is very weak for
newly-created pages that have yet to receive many visits and/or in-links.
Worse, since discovery of new content is largely done by querying search
engines, and because users usually focus their attention on the top few
results, newly-created but high-quality pages are effectively "shut out," and
it can take a very long time before they become popular.
We propose a simple and elegant solution to this problem: the introduction of
a controlled amount of randomness into search result ranking methods. Doing so
offers new pages a chance to prove their worth, although clearly using too much
randomness will degrade result quality and annul any benefits achieved. Hence
there is a tradeoff between exploration to estimate the quality of new pages
and exploitation of pages already known to be of high quality. We study this
tradeoff both analytically and via simulation, in the context of an economic
objective function based on aggregate result quality amortized over time. We
show that a modest amount of randomness leads to improved search results.
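As a rough illustration of the exploration/exploitation mix described above, the following sketch reserves a small fraction of the top-k slots for randomly promoted new pages while keeping the rest ordered by popularity. The mixing rule, the epsilon parameter, and the page tuples are assumptions for illustration, not the paper's exact scheme.

```python
# Sketch of partially randomized ranking: keep most slots for the usual
# popularity-based ordering, but reserve a fraction of the top-k for
# randomly promoted (e.g. newly created) pages so they get a chance to
# prove their quality. Epsilon and the data layout are illustrative.
import random

def partially_randomized_ranking(pages, k=10, epsilon=0.2, seed=None):
    """pages: list of (page_id, popularity_score, is_new) tuples."""
    rng = random.Random(seed)
    ranked = sorted(pages, key=lambda p: p[1], reverse=True)

    n_random = int(round(epsilon * k))      # slots given to exploration
    head = ranked[:k - n_random]            # exploitation: top by score
    rest = ranked[k - n_random:]

    # Prefer promoting new pages; fall back to any remaining page.
    new_pool = [p for p in rest if p[2]] or rest
    promoted = rng.sample(new_pool, min(n_random, len(new_pool)))
    return head + promoted

# Example: ten established pages plus three new ones with no in-links yet.
pages = [(f"old{i}", 100 - i, False) for i in range(10)]
pages += [(f"new{i}", 1, True) for i in range(3)]
top = partially_randomized_ranking(pages, k=10, epsilon=0.2, seed=0)
print([pid for pid, _, _ in top])
```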
A constrained pressure-temperature residual (CPTR) method for non-isothermal multiphase flow in porous media
For both isothermal and thermal petroleum reservoir simulation, the
Constrained Pressure Residual (CPR) method is the industry-standard
preconditioner. This method is a two-stage process involving the solution of a
restricted pressure system. While initially designed for the isothermal case,
CPR is also the standard for thermal cases. However, its treatment of the
energy conservation equation does not incorporate heat diffusion, which is
often dominant in thermal cases. In this paper, we present an extension of CPR:
the Constrained Pressure-Temperature Residual (CPTR) method, where a restricted
pressure-temperature system is solved in the first stage. In previous work, we
introduced a block preconditioner with an efficient Schur complement
approximation for a pressure-temperature system. Here, we extend this method
for multiphase flow as the first stage of CPTR. The algorithmic performance of
different two-stage preconditioners is evaluated for reservoir simulation test
cases.
Comment: 28 pages, 2 figures. Sources/sinks description in arXiv:1902.0009
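To illustrate the two-stage structure only, the sketch below builds a CPR/CPTR-style preconditioner: stage one solves a restricted subsystem (pressure alone for CPR, pressure plus temperature for the CPTR variant) and stage two applies an ILU sweep on the full system. The interleaved unknown ordering, the direct solve standing in for AMG on the restricted system, and the ILU settings are assumptions for a toy problem, not the paper's configuration.

```python
# Two-stage constrained-residual preconditioner in the spirit of CPR/CPTR:
# stage 1 corrects the restricted (pressure, or pressure-temperature)
# subsystem; stage 2 applies ILU on the full system to the updated residual.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_two_stage_preconditioner(A, sub_dofs):
    """Return a LinearOperator applying the two-stage preconditioner.

    A        : full sparse Jacobian, n x n
    sub_dofs : indices of the restricted unknowns (pressure only, or
               pressure and temperature for the CPTR-style variant)
    """
    n = A.shape[0]
    R = sp.identity(n, format="csr")[sub_dofs, :]   # restriction operator
    A_sub = (R @ A @ R.T).tocsc()                   # restricted system
    sub_solve = spla.factorized(A_sub)              # stand-in for AMG
    ilu = spla.spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)

    def apply(r):
        x1 = R.T @ sub_solve(R @ r)                 # stage 1: subsystem correction
        return x1 + ilu.solve(r - A @ x1)           # stage 2: ILU on the rest
    return spla.LinearOperator((n, n), matvec=apply)

# Tiny synthetic example: 3 unknowns per cell (p, T, s) on 20 cells,
# interleaved ordering (an assumption for this toy setup).
ncell, nvar = 20, 3
n = ncell * nvar
rng = np.random.default_rng(0)
A = (sp.random(n, n, density=0.05, random_state=0) + 4.0 * sp.identity(n)).tocsr()
b = rng.standard_normal(n)

# CPTR-style: restrict to the pressure and temperature unknowns.
pt_dofs = np.sort(np.concatenate([np.arange(0, n, nvar), np.arange(1, n, nvar)]))
M = make_two_stage_preconditioner(A, pt_dofs)
x, info = spla.gmres(A, b, M=M)
print("GMRES converged:", info == 0)
```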
Relaxations for inference in restricted Boltzmann machines
We propose a relaxation-based approximate inference algorithm that samples
near-MAP configurations of a binary pairwise Markov random field. We experiment
on MAP inference tasks in several restricted Boltzmann machines. We also use
our underlying sampler to estimate the log-partition function of restricted
Boltzmann machines and compare against other sampling-based methods.
Comment: ICLR 2014 workshop track submission
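The abstract does not spell out the relaxation itself, so the sketch below is background only: it sets up the standard binary RBM energy and a naive coordinate-ascent (ICM-style) search for a near-MAP configuration, i.e. the kind of baseline a proposed sampler would be compared against. It is not the paper's relaxation-based method, and all sizes and weights are made up for illustration.

```python
# Background sketch: binary RBM energy and a greedy ICM-style search for
# a near-MAP configuration. NOT the paper's relaxation-based sampler.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 20, 10
W = 0.5 * rng.standard_normal((n_vis, n_hid))   # pairwise weights
b = 0.1 * rng.standard_normal(n_vis)            # visible biases
c = 0.1 * rng.standard_normal(n_hid)            # hidden biases

def energy(v, h):
    """E(v, h) = -v^T W h - b^T v - c^T h for binary v, h."""
    return -(v @ W @ h + b @ v + c @ h)

def near_map_icm(n_sweeps=50):
    """Each layer greedily takes the values minimizing the energy given
    the other layer (i.e. threshold its total input at zero)."""
    v = rng.integers(0, 2, n_vis)
    h = rng.integers(0, 2, n_hid)
    for _ in range(n_sweeps):
        h = (W.T @ v + c > 0).astype(int)   # best h given v
        v = (W @ h + b > 0).astype(int)     # best v given h
    return v, h

v, h = near_map_icm()
print("energy of the configuration found:", energy(v, h))
```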
