Adventure Mode: A Speculative Rideshare Design
Most smart city projections presume efficiency, predictability, and control as core design principles for smart transportation. Adventure Mode is a speculative design proposal, developed as part of a research project with a major automotive company, that proposes uses and interactions for Autonomous Vehicles (AVs) and rideshare services that defy these normative presumptions. Adventure Mode reframes the focus of moving vehicles from destination-based experiences to journey-based ones, raising the probability of unexpected encounters and anonymous play in increasingly predictable and predicted urban environments. It embraces submission to algorithmic decision-making and chance as a ludic modality in human-computer interaction and urban artificial intelligence.
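As a rough illustration of what "submission to algorithmic decision-making and chance" could look like in practice, the following Python sketch picks a randomized, journey-based route within a rider's time budget. All names and data here (CANDIDATE_STOPS, pick_adventure_route) are hypothetical illustrations, not part of the Adventure Mode proposal itself:

```python
import random

# Hypothetical sketch: instead of routing to a fixed destination, the rider
# cedes waypoint choice to algorithmic chance, bounded by a time budget.
CANDIDATE_STOPS = [
    "night market", "riverside path", "mural alley",
    "observation deck", "late-night diner",
]

def pick_adventure_route(time_budget_min, stops=CANDIDATE_STOPS, max_stops=3):
    """Return a randomized sequence of (waypoint, dwell-minutes) pairs."""
    n = random.randint(1, max_stops)
    route = random.sample(stops, n)
    # Dwell times are also drawn at random, so the journey, not the
    # destination, structures the ride.
    dwell = [random.randint(5, max(5, time_budget_min // n)) for _ in route]
    return list(zip(route, dwell))

print(pick_adventure_route(time_budget_min=60))
```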
Human-Vehicle Interfaces: The Power of Vehicle Movement Gestures in Human Road User Coordination
Autonomous vehicles will have to coordinate their behavior with human road users such as drivers and pedestrians. The majority of recently proposed solutions for autonomous vehicle-to-human communication consist of introducing additional visual cues (such as lights, text, and pictograms) either on the car's exterior or as projections on the road. We argue that potential shortcomings in the visibility (due to light conditions or placement on the vehicle) and immediate understandability (learned, directive) of many of these cues make them alone insufficient for mediating multi-party interactions in the busy intersections of day-to-day traffic. Our observations of real-world human road user behavior in urban intersections indicate that movement in context is a central method of communication for coordination among drivers and pedestrians. The observed movement patterns gain meaning when seen within the context of road geometry, current road activity, and culture. While all movement communicates the intention of the driver, we highlight the use of movement as gesture, performed for the specific purpose of communicating to other road users, and we give examples of how these gestures influence traffic interactions. An awareness and understanding of the effect and importance of movement gestures in day-to-day traffic interactions is needed for developers of autonomous vehicles to design forms of human-vehicle communication that are effective and scalable in multi-party interactions.
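To make the claim that movement gains meaning in context concrete, here is a minimal Python sketch in which the same motion pattern is read differently depending on the surrounding situation. The thresholds, class names, and gesture labels are illustrative assumptions, not findings or methods from the paper:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_mps: float           # current speed
    decel_mps2: float          # deceleration (positive = slowing)
    dist_to_crossing_m: float  # distance to the pedestrian crossing

def interpret_movement(state: VehicleState, pedestrian_waiting: bool) -> str:
    """Read the same motion as gesture or plain maneuver depending on context."""
    slowing_early = state.decel_mps2 > 0.5 and state.dist_to_crossing_m > 10
    creeping = state.speed_mps < 1.0 and state.dist_to_crossing_m < 5
    if pedestrian_waiting and slowing_early:
        return "yield gesture: early, visible deceleration invites crossing"
    if pedestrian_waiting and creeping:
        return "assertive gesture: creeping forward claims the gap"
    if slowing_early:
        return "plain maneuver: slowing, but no addressee in context"
    return "no gesture read"

print(interpret_movement(VehicleState(4.0, 0.8, 15.0), pedestrian_waiting=True))
```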
An Integrative Approach to Understanding Flight Crew Activity
In this paper, we describe an integrative approach to understanding flight crew activity. Our approach combines contemporary innovations in cognitive science theory with a new suite of methods for measuring, analyzing, and visualizing the activities of commercial airline flight crews in interaction with the complex automated systems found on the modern flight deck. Our unit of analysis is the multiparty, multimodal activity system. We installed a variety of recording devices in high-fidelity flight simulators to produce rich, multistream time-series data sets. The complexity of such data sets and the need for manual coding of high-level events make large-scale analysis prohibitively expensive. We break through this analysis bottleneck by using our newly developed integrated software system called ChronoViz, which supports visualization and analysis of multiple sources of time-coded data, including multiple sources of high-definition video, simulation data, transcript data, paper notes, and eye gaze data. Four examples of flight crew activity serve to illustrate the methods, the theory, and the kinds of findings that are now possible in the study of flight crew interaction with flight deck automation.
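The core idea behind this kind of multistream analysis is aligning independently time-coded streams on a single timeline. The Python sketch below shows that idea with invented flight-deck data; it is not ChronoViz's actual API, and the stream contents are hypothetical:

```python
import heapq

# Each stream is a time-sorted list of (seconds, source, event) tuples.
transcript = [(12.4, "transcript", "CAPT: set flaps five"),
              (15.1, "transcript", "FO: flaps five")]
gaze       = [(12.6, "gaze", "FO fixates flap lever"),
              (14.8, "gaze", "CAPT fixates primary flight display")]
sim_events = [(15.0, "simulator", "flap handle moved to 5")]

def merge_streams(*streams):
    """Yield events from all already-sorted streams in timestamp order."""
    yield from heapq.merge(*streams)

for t, source, event in merge_streams(transcript, gaze, sim_events):
    print(f"{t:6.1f}s  [{source:10s}] {event}")
```

Once the streams share a timeline, higher-level coordination events (such as a callout followed by a gaze shift and a control input) can be coded and compared across crews.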
