342 research outputs found
Inferring geometric constraints in human demonstrations
This paper presents an approach for inferring geometric constraints in human
demonstrations. In our method, geometric constraint models are built to create
representations of kinematic constraints such as fixed point, axial rotation,
prismatic motion, planar motion and others across multiple degrees of freedom.
Our method infers geometric constraints using both kinematic and force/torque
information. The approach first fits all the constraint models using kinematic
information and evaluates them individually using position, force and moment
criteria. Our approach does not require information about the constraint type
or contact geometry; it can determine both simultaneously. We present
experimental evaluations using instrumented tongs that show how constraints can
be robustly inferred in recordings of human demonstrations.
Comment: 2nd Conference on Robot Learning (CoRL 2018). Accepted. 14 pages, 5 figures.
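As an illustration of the fit-then-evaluate idea described above, the following minimal Python sketch fits three candidate constraint models (fixed point, line, plane) to recorded positions and accepts the most constrained one with a small residual. The function names, the tolerance, and the use of position-only residuals are illustrative assumptions; the paper's force and moment criteria are omitted here.

```python
# A minimal sketch, not the paper's implementation: fit candidate geometric
# constraint models to recorded positions and accept the most constrained one
# whose kinematic residual is small. Force/moment evaluation is omitted.
import numpy as np

def fit_fixed_point(P):
    """Fixed-point model: all samples coincide; residual = spread about the mean."""
    return np.linalg.norm(P - P.mean(axis=0), axis=1).mean()

def fit_line(P):
    """Prismatic/axial model: samples lie on a line (best fit via PCA)."""
    Q = P - P.mean(axis=0)
    d = np.linalg.svd(Q, full_matrices=False)[2][0]      # dominant direction
    return np.linalg.norm(Q - np.outer(Q @ d, d), axis=1).mean()

def fit_plane(P):
    """Planar-motion model: samples lie on a plane (normal = weakest PCA axis)."""
    Q = P - P.mean(axis=0)
    n = np.linalg.svd(Q, full_matrices=False)[2][-1]     # plane normal
    return np.abs(Q @ n).mean()

def infer_constraint(P, tol=0.02):
    """Return the most constrained candidate whose residual falls below tol."""
    for name, fit in (("fixed_point", fit_fixed_point),
                      ("line", fit_line),
                      ("plane", fit_plane)):
        residual = fit(P)
        if residual < tol:
            return name, residual
    return "unconstrained", None

# Noisy samples swept over a plane are classified as planar motion.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
P = np.c_[xy, 0.01 * rng.standard_normal(200)]           # z ≈ 0 plane
print(infer_constraint(P))
```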
Characterizing Input Methods for Human-to-robot Demonstrations
Human demonstrations are important in a range of robotics applications, and
are created with a variety of input methods. However, the design space for
these input methods has not been extensively studied. In this paper, focusing
on demonstrations of hand-scale object manipulation tasks to robot arms with
two-finger grippers, we identify distinct usage paradigms in robotics that
utilize human-to-robot demonstrations, extract abstract features that form a
design space for input methods, and characterize existing input methods as well
as a novel input method that we introduce, the instrumented tongs. We detail
the design specifications for our method and present a user study that compares
it against three common input methods: free-hand manipulation, kinesthetic
guidance, and teleoperation. Study results show that instrumented tongs provide
high-quality demonstrations and a positive experience for the demonstrator
while offering good correspondence to the target robot.
Comment: 2019 ACM/IEEE International Conference on Human-Robot Interaction (HRI).
N-ary implicit blends with topology control
Constructive implicit surfaces are attractive for modeling and animation because they seamlessly handle shapes with complex and dynamic topology. However, the way they merge shapes is difficult to control. This paper introduces a solution: an improved blend operator that provides control over how topology changes are handled. It is based on a correction applied to the standard blending operator, the sum. Building on summation preserves the n-ary nature of the blend, providing the simplicity of arbitrary (e.g., flat) construction trees and segmentation invariance. The correction is based on a projection to a reference case in the variation space defined by the field and the norm of its gradient. It exposes a single parameter whose tuning yields effects ranging from avoiding topological combination, through merging only during overlap, to merging at a distance. Dynamically adjusting the parameter enables context-dependent effects. Applications range from skeleton-based modeling, where shapes keep the topology of their skeleton, to objects that change topology during animation with controllable merging. We illustrate the latter with Manga-style hair, where merging depends on the angle between hair wisps.
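To make the role of the summation baseline and the single tuning parameter concrete, here is a small Python sketch that blends compactly supported fields with the standard n-ary sum and a hypothetical one-parameter damping that interpolates towards a max-like union. This stand-in correction, the falloff function, and all names are assumptions for illustration; they are not the paper's projection-based correction.

```python
# Hedged illustration: n-ary sum blending of scalar fields with a placeholder
# single-parameter control. k = 0 is the plain sum (maximal merging); k = 1 is
# a max-like union (no merging at a distance). Not the paper's operator.
import numpy as np

def sphere_field(center, radius):
    """Compactly supported blobby field: 1 at the center, 0 at distance 2*radius."""
    center = np.asarray(center, dtype=float)
    def f(p):
        d = np.linalg.norm(np.asarray(p, dtype=float) - center, axis=-1) / (2.0 * radius)
        d = np.clip(d, 0.0, 1.0)
        return (1.0 - d ** 2) ** 2
    return f

def blend(fields, p, k=0.5):
    """n-ary blend of an arbitrary list of fields, evaluated at points p."""
    values = np.stack([f(p) for f in fields], axis=0)
    total, strongest = values.sum(axis=0), values.max(axis=0)
    return (1.0 - k) * total + k * strongest

# Two overlapping blobs sampled along the x axis; iso-value 0.45 defines the surface.
blobs = [sphere_field((-0.6, 0.0), 0.5), sphere_field((0.6, 0.0), 0.5)]
xs = np.linspace(-1.5, 1.5, 7)
pts = np.stack([xs, np.zeros_like(xs)], axis=-1)
for k in (0.0, 1.0):
    inside = blend(blobs, pts, k) >= 0.45
    print(k, inside.astype(int))
```

At k = 0 the two blobs bridge at the iso-value, while at k = 1 they stay separate, mimicking in a crude way the controllable merging the abstract describes.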
How Do We Evaluate the Quality of Computational Editing Systems?
A problem common to all researchers in the field of virtual cinematography and editing is assessing the quality of their systems' output. There is a pressing need for appropriate evaluations of proposed models and techniques. Indeed, although papers are often accompanied by example videos, showing subjective results and occasionally providing qualitative comparisons with other methods or with human-created movies, they generally lack an extensive evaluation. The goal of this paper is to survey evaluation methodologies that have been used in the past, to review a range of other interesting methodologies, and to raise a number of questions about how we could better evaluate and compare future systems.
Adding dynamics to sketch-based character animations
Cartoonists and animators often use lines of action to emphasize dynamics in character poses. In this paper, we propose a physically based model that simulates the motion of the line of action, producing rich motion from simple drawings. Our method is decomposed into three steps. Based on user-provided strokes, we forward-simulate 2D elastic motion. To ensure continuity across keyframes, we retarget the forward simulations to the drawn strokes. Finally, we synthesize a 3D character motion matching the dynamic line. Because the line can move freely like an elastic band, new questions arise about its relationship to the body over time: the line may move faster and leave body parts behind, or it may slide slowly towards other body parts for support. We conjecture that the artist seeks to maximize how much of the line is filled by the character's body while respecting basic realism constraints such as balance. Based on these insights, we provide a method that synthesizes 3D character motion given discontinuously constrained body parts that the user specifies at key moments.
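The first step above (forward simulation of 2D elastic motion from a drawn stroke) can be sketched as a damped mass-spring chain. The spring model, gravity, the pinned first node, and the integrator below are generic assumptions rather than the paper's formulation; retargeting and 3D matching are not shown.

```python
# Hedged sketch of forward-simulating a drawn 2D line of action as an elastic
# chain (symplectic Euler, unit masses). Generic choices, not the paper's model.
import numpy as np

def simulate_stroke(points, steps=200, dt=0.01,
                    stiffness=200.0, damping=2.0, gravity=(0.0, -9.8)):
    x = np.asarray(points, dtype=float).copy()          # (n, 2) node positions
    v = np.zeros_like(x)                                # node velocities
    rest = np.linalg.norm(np.diff(x, axis=0), axis=1)   # segment rest lengths
    g = np.asarray(gravity, dtype=float)
    frames = [x.copy()]
    for _ in range(steps):
        f = np.tile(g, (len(x), 1))                     # gravity on every node
        seg = x[1:] - x[:-1]
        length = np.linalg.norm(seg, axis=1, keepdims=True)
        dirs = seg / np.maximum(length, 1e-9)
        spring = stiffness * (length - rest[:, None]) * dirs
        f[:-1] += spring                                # pull segment ends together
        f[1:] -= spring
        f -= damping * v
        f[0] = 0.0                                      # pin the first stroke node
        v += dt * f
        x += dt * v
        frames.append(x.copy())
    return frames                                       # line of action over time

# A horizontal stroke pinned at one end swings down under gravity.
stroke = np.stack([np.linspace(0.0, 1.0, 10), np.zeros(10)], axis=-1)
motion = simulate_stroke(stroke)
print(motion[-1][-1])   # final position of the free end
```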
Space-time sketching of character animation
We present a space-time abstraction for the sketch-based design of character animation. It allows animators to draft a fully coordinated motion using a single stroke called the space-time curve (STC). From the STC we compute a dynamic line of action (DLOA) that drives the motion of a 3D character through projective constraints. Our dynamic models for the line's motion are entirely geometric, require no pre-existing data, and allow full artistic control. The resulting DLOA can be refined by over-sketching strokes along the space-time curve, or by composing another DLOA on top, allowing complex motions to be controlled with few strokes. Additionally, the resulting dynamic line of action can be applied to arbitrary body parts or characters. To match a 3D character to the 2D line over time, we introduce a robust matching algorithm based on closed-form solutions, yielding a tight match while allowing squash and stretch of the character's skeleton. Our experiments show that space-time sketching has the potential to bring animation design within the reach of beginners while saving time for skilled artists.
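As a rough illustration of extracting a moving line from a single stroke, the sketch below slides a fixed-arc-length window along the space-time curve, yielding one line pose per frame. This reading, the window fraction, and the helper names are assumptions; the paper's projective constraints and closed-form 2D-to-3D matching are not reproduced.

```python
# Hedged sketch: derive a sequence of line-of-action polylines from one stroke
# by sliding a window along it. An illustrative reading, not the paper's method.
import numpy as np

def arc_length_resample(curve, n):
    """Resample a polyline to n points spaced evenly by arc length."""
    curve = np.asarray(curve, dtype=float)
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    target = np.linspace(0.0, s[-1], n)
    return np.stack([np.interp(target, s, curve[:, d])
                     for d in range(curve.shape[1])], axis=-1)

def dloa_from_stc(stc, frames=24, line_fraction=0.25, samples_per_line=8):
    """One line pose per frame: a window covering a fixed fraction of the
    stroke's arc length slides from the start of the stroke to its end."""
    dense = arc_length_resample(stc, 400)
    window = int(len(dense) * line_fraction)
    starts = np.linspace(0, len(dense) - window, frames).astype(int)
    return [arc_length_resample(dense[i:i + window], samples_per_line)
            for i in starts]

# A sweeping C-shaped stroke: the extracted line travels and bends along it.
t = np.linspace(0.0, np.pi, 100)
stc = np.stack([t, np.sin(t)], axis=-1)
lines = dloa_from_stc(stc)
print(len(lines), lines[0][0], lines[-1][-1])
```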
