Managing Large Scale Project Analysis Teams through a Web Accessible Database
Large-scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design-and-analysis-cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures
"We're on the Same Page": A Usability Study of Secure Email Using Pairs of Novice Users
Secure email is increasingly being touted as usable by novice users, with a
push for adoption based on recent concerns about government surveillance. To
determine whether secure email is ready for grassroots adoption, we employ a
laboratory user study that recruits pairs of novices to install and use several
of the latest systems to exchange secure messages. We present quantitative and
qualitative results from 25 pairs of novice users as they use Pwm, Tutanota,
and Virtru. Participants report being more at ease with this type of study and
better able to cope with mistakes since both participants are "on the same
page". We find that users prefer integrated solutions over depot-based
solutions, and that tutorials are important in helping first-time users. Hiding
the details of how a secure email system provides security can lead to a lack
of trust in the system. Participants expressed a desire to use secure email,
but few wanted to use it regularly and most were unsure of when they might use
it.
Comment: 34th Annual ACM Conference on Human Factors in Computing Systems (CHI 2016)
System-of-Systems Technology-Portfolio-Analysis Tool
Advanced Technology Life-cycle Analysis System (ATLAS) is a system-of-systems technology-portfolio-analysis software tool. ATLAS affords capabilities to (1) compare estimates of the mass and cost of an engineering system based on competing technological concepts; (2) estimate life-cycle costs of an outer-space-exploration architecture for a specified technology portfolio; (3) collect data on state-of-the-art and forecasted technology performance, and on operations and programs; and (4) calculate an index of the relative programmatic value of a technology portfolio. ATLAS facilitates analysis by providing a library of analytical spreadsheet models for a variety of systems. A single analyst can assemble a representation of a system of systems from the models and build a technology portfolio. Each system model estimates mass, and life-cycle costs are estimated by a common set of cost models. Other components of ATLAS include graphical-user-interface (GUI) software, algorithms for calculating the aforementioned index, a technology database, a report generator, and a form generator for creating the GUI for the system models. At the time of this reporting, ATLAS is a prototype, embodied in Microsoft Excel and several thousand lines of Visual Basic for Applications that run on both Windows and Macintosh computers
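The ATLAS pattern described above (per-system mass models feeding a common cost model, rolled up into a portfolio index) can be sketched in a few lines. This is an illustrative assumption of how such a tool is structured; the model names, coefficients, and the index definition are hypothetical, not ATLAS's own.

```python
# Hypothetical sketch of the ATLAS pattern: each system model estimates
# mass, a shared cost model converts mass to life-cycle cost, and an
# index summarizes the relative programmatic value of a portfolio.
# All names and coefficients are illustrative assumptions.

def lander_mass(payload_kg, tech_factor):
    """Toy system model: dry mass scales with payload; technology improves it."""
    return 1000 + 2.5 * payload_kg * tech_factor

def habitat_mass(crew, tech_factor):
    """Toy system model for a crewed habitat."""
    return 4000 + 1500 * crew * tech_factor

def life_cycle_cost(mass_kg, cost_per_kg=0.05):
    """Common cost model shared by every system (cost in $M)."""
    return mass_kg * cost_per_kg

def portfolio_index(system_masses, baseline_cost):
    """Relative value: baseline cost over portfolio cost (>1 means cheaper)."""
    total = sum(life_cycle_cost(m) for m in system_masses)
    return baseline_cost / total

baseline = [lander_mass(500, 1.0), habitat_mass(4, 1.0)]
advanced = [lander_mass(500, 0.8), habitat_mass(4, 0.8)]  # better technology
base_cost = sum(life_cycle_cost(m) for m in baseline)
print(round(portfolio_index(advanced, base_cost), 3))
```

The point of the structure is the one the abstract emphasizes: mass estimation lives in per-system models, while costing is centralized so competing technology portfolios are compared on a common basis.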
Three Dimensional Computer Graphics Federates for the 2012 Smackdown Simulation
The Simulation Interoperability Standards Organization (SISO) Smackdown is a two-year-old annual event held at the 2012 Spring Simulation Interoperability Workshop (SIW). A primary objective of the Smackdown event is to provide college students with hands-on experience in developing distributed simulations using High Level Architecture (HLA). Participating for the second time, the University of Alabama in Huntsville (UAHuntsville) deployed four federates: two simulated a communications server and a lunar communications satellite with a radio, and the other two generated 3D computer graphics displays for the communication satellite constellation and for the surface-based lunar resupply mission. Using the Light-Weight Java Graphics Library, the satellite display federate presented a lunar-texture-mapped sphere of the moon and four Telemetry Data Relay Satellites (TDRS), which received object attributes from the lunar communications satellite federate to drive their motion. The surface mission display federate was an enhanced version of the federate developed by ForwardSim, Inc. for the 2011 Smackdown simulation. Enhancements included a dead-reckoning algorithm and a visual indication of which communication satellite was in line of sight of Hadley Rille. This paper concentrates on these two federates by describing the functions, algorithms, HLA object attributes received from other federates, development experiences, and recommendations for future participating Smackdown teams
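The dead-reckoning enhancement mentioned above can be illustrated with a minimal sketch: between HLA attribute updates, a display federate extrapolates an object's position from the last received state. The constant-velocity model and all names here are assumptions for illustration, not the federate's actual code.

```python
# Minimal dead-reckoning sketch: extrapolate position between HLA
# attribute updates using the last received position and velocity.
# First-order (constant-velocity) model; names are illustrative.
from dataclasses import dataclass

@dataclass
class LastUpdate:
    t: float        # simulation time of the last attribute update (s)
    pos: tuple      # (x, y, z) position in meters
    vel: tuple      # (vx, vy, vz) velocity in m/s

def dead_reckon(state: LastUpdate, t_now: float) -> tuple:
    """p(t) = p0 + v0 * (t - t0), applied per axis."""
    dt = t_now - state.t
    return tuple(p + v * dt for p, v in zip(state.pos, state.vel))

state = LastUpdate(t=10.0, pos=(100.0, 0.0, 0.0), vel=(5.0, 0.0, -1.0))
print(dead_reckon(state, 12.0))   # -> (110.0, 0.0, -2.0)
```

Display federates commonly use this to keep motion smooth at frame rate while attribute updates arrive at a much lower rate.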
Diamond Dicing
In OLAP, analysts often select an interesting sample of the data. For
example, an analyst might focus on products bringing revenues of at least
100,000 dollars, or on shops having sales greater than 400,000 dollars. However,
current systems cannot apply both thresholds simultaneously, selecting only the
products and shops that jointly satisfy them. For
such purposes, we introduce the diamond cube operator, filling a gap among
existing data warehouse operations.
Because of the interaction between dimensions, the computation of diamond
cubes is challenging. We compare and test various algorithms on large data sets
of more than 100 million facts. We find that while it is possible to implement
diamonds in SQL, it is inefficient. Indeed, our custom implementation can be a
hundred times faster than popular database engines (including a row-store and a
column-store).
Comment: 29 pages
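The interaction between dimensions that the abstract highlights can be sketched as an iterative pruning loop: drop every attribute value that misses its dimension's threshold, and repeat, since deleting a shop can push a product below its own threshold. This is a hedged illustration of the operator's semantics with made-up data; the paper's actual algorithms are more sophisticated.

```python
# Sketch of diamond-cube semantics: keep only products and shops whose
# aggregate revenue meets their dimension's threshold, iterating to a
# fixed point because deletions in one dimension affect the other.
from collections import defaultdict

def diamond(facts, thr_product, thr_shop):
    """facts: iterable of (product, shop, revenue). Returns surviving facts."""
    facts = list(facts)
    while True:
        by_product, by_shop = defaultdict(float), defaultdict(float)
        for p, s, r in facts:
            by_product[p] += r
            by_shop[s] += r
        kept = [(p, s, r) for p, s, r in facts
                if by_product[p] >= thr_product and by_shop[s] >= thr_shop]
        if len(kept) == len(facts):   # fixed point: nothing left to prune
            return kept
        facts = kept

facts = [("tv", "north", 120), ("tv", "south", 40),
         ("radio", "north", 50), ("phone", "south", 20)]
print(diamond(facts, thr_product=100, thr_shop=100))
# -> [('tv', 'north', 120)]
```

Note the cascade: "radio" and "phone" miss the product threshold, which drops "south" below the shop threshold, which in turn removes the ("tv", "south") fact, exactly the cross-dimension interaction that makes the computation challenging.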
Lessons Learned from Deploying an Analytical Task Management Database
Defining requirements, missions, technologies, and concepts for space exploration involves multiple levels of organizations, teams of people with complementary skills, and analytical models and simulations. Analytical activities range from filling a To-Be-Determined (TBD) in a requirement to creating animations and simulations of exploration missions. In a program as large as returning to the Moon, there are hundreds of simultaneous analysis activities. A way to manage and integrate efforts of this magnitude is to deploy a centralized database that provides the capability to define tasks, identify resources, describe products, schedule deliveries, and generate a variety of reports. This paper describes a web-accessible task management system and explains the lessons learned during the development and deployment of the database. Through the database, managers and team leaders can define tasks, establish review schedules, assign teams, link tasks to specific requirements, identify products, and link the task data records to external repositories that contain the products. Data filters and spreadsheet export utilities provide a powerful capability to create custom reports. Import utilities provide a means to populate the database from previously filled form files. Within a four-month period, a small team analyzed requirements, developed a prototype, conducted multiple system demonstrations, and deployed a working system supporting hundreds of users across the aerospace community. Open-source technologies and agile software development techniques, applied by a skilled team, enabled this impressive achievement. Topics in the paper cover the web application technologies, agile software development, an overview of the system's functions and features, dealing with increasing scope, and deploying new versions of the system
Hubble Space Telescope Planetary Camera Images of NGC 1316
We present HST Planetary Camera V and I~band images of the central region of
the peculiar giant elliptical galaxy NGC 1316. The inner profile is well fit by
a nonisothermal core model with a core radius of 0.41" +/- 0.02" (34 pc). At an
assumed distance of 16.9 Mpc, the deprojected luminosity density reaches \sim
2.0 \times 10^3 L_{\sun} pc^{-3}.
Outside the inner two or three arcseconds, a constant mass-to-light ratio is
found to fit the observed line width measurements. The
line width measurements of the center indicate the existence of either a
central dark object of mass 2 \times 10^9 M_{\sun}, an increase in the
stellar mass-to-light ratio by at least a factor of two for the inner few
arcseconds, or perhaps increasing radial orbit anisotropy towards the center.
The mass-to-light ratio run in the center of NGC 1316 resembles that of many
other giant ellipticals, some of which are known from other evidence to harbor
central massive dark objects (MDO's).
We also examine twenty globular clusters associated with NGC 1316 and report
their brightnesses, colors, and limits on tidal radii. The brightest cluster
has a luminosity of 9.9 \times 10^6 L_{\sun}, and the
faintest detectable cluster has a luminosity of 2.4 \times 10^5 L_{\sun}.
The globular clusters are just barely resolved, but their core
radii are too small to be measured. The tidal radii in this region appear to be
35 pc. Although this galaxy seems to have undergone a substantial merger
in the recent past, young globular clusters are not detected.
Comment: 21 pages, latex, postscript figures available at
ftp://delphi.umd.edu/pub/outgoing/eshaya/fornax
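The quoted core radius can be checked with the small-angle relation: an angle of 0.41" at the assumed distance of 16.9 Mpc corresponds to about 34 pc, as the abstract states. A short sketch of the arithmetic:

```python
# Small-angle conversion check for the numbers quoted in the abstract:
# physical size = distance * angle (in radians).
ARCSEC_PER_RAD = 206265.0   # arcseconds in one radian

def angular_to_physical(theta_arcsec, distance_pc):
    """Physical size (pc) subtended by theta_arcsec at distance_pc."""
    return distance_pc * theta_arcsec / ARCSEC_PER_RAD

r_core = angular_to_physical(0.41, 16.9e6)   # 0.41" at 16.9 Mpc
print(round(r_core))   # -> 34 (pc), matching the abstract
```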
The Faint End of the Luminosity Function and Low Surface Brightness Galaxies
SHELS (Smithsonian Hectospec Lensing Survey) is a dense redshift survey
covering a 4 square degree region to a limiting R = 20.6. In the construction
of the galaxy catalog and in the acquisition of spectroscopic targets, we paid
careful attention to the survey completeness for lower surface brightness dwarf
galaxies. Thus, although the survey covers a small area, it is a robust basis
for computation of the slope of the faint end of the galaxy luminosity function
to a limiting M_R = -13.3 + 5logh. We calculate the faint end slope in the
R-band for the subset of SHELS galaxies with redshifts in the range 0.02 <= z
< 0.1, SHELS_{0.1}. This sample contains 532 galaxies with R< 20.6 and with a
median surface brightness within the half light radius of SB_{50,R} = 21.82 mag
arcsec^{-2}. We used this sample to make one of the few direct measurements of
the dependence of the faint end of the galaxy luminosity function on surface
brightness. For the sample as a whole the faint end slope, alpha = -1.31 +/-
0.04, is consistent with both the Blanton et al. (2005b) analysis of the SDSS
and the Liu et al. (2008) analysis of the COSMOS field. This consistency is
impressive given the very different approaches of these three surveys. A
magnitude-limited sample of 135 galaxies with optical spectroscopic redshifts
with mean half-light surface brightness, SB_{50,R} >= 22.5 mag arcsec^{-2} is
unique to SHELS_{0.1}. The faint end slope is alpha_{22.5} = -1.52+/- 0.16.
SHELS_{0.1} shows that lower surface brightness objects dominate the faint end
slope of the luminosity function in the field, underscoring the importance of
surface brightness limits in evaluating measurements of the faint end slope and
its evolution.Comment: 34 pages, 13 figures, 3 tables, Astronomical Journal, in press
(updated based on review
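What the faint-end slope alpha means can be illustrated with a Schechter luminosity function: faintward of the characteristic magnitude, the space density of galaxies scales roughly as 10^(-0.4(alpha+1)M), so a steeper (more negative) alpha means relatively more faint galaxies. The sketch below uses the two slopes quoted in the abstract; the value of M* and the normalization are assumed for illustration, not fitted by SHELS.

```python
# Illustration of the faint-end slope alpha in a Schechter luminosity
# function expressed in absolute magnitudes. M_star and phi_star are
# assumed values for illustration only.
import math

def schechter_mag(M, alpha, M_star=-20.5, phi_star=1.0):
    """Schechter function in magnitudes (arbitrary normalization)."""
    x = 10 ** (-0.4 * (M - M_star))          # L / L*
    return 0.4 * math.log(10) * phi_star * x ** (alpha + 1) * math.exp(-x)

# Steeper alpha => relatively more faint galaxies: compare the density
# at M = -14 to that at M = -18 for the two slopes from the abstract.
for alpha in (-1.31, -1.52):
    ratio = schechter_mag(-14, alpha) / schechter_mag(-18, alpha)
    print(alpha, round(ratio, 2))
```

With alpha = -1.52 the faint-to-bright density ratio is roughly twice what it is with alpha = -1.31, which is the sense in which low surface brightness objects "dominate" the faint end.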
