33,761 research outputs found

    Discussion of "EQUI-energy sampler" by Kou, Zhou and Wong

    Discussion of "EQUI-energy sampler" by Kou, Zhou and Wong [math.ST/0507080]
    Comment: Published at http://dx.doi.org/10.1214/009053606000000506 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    CR-Analogue of Siu's $\partial\bar{\partial}$-formula and Applications to the Rigidity Problem for Pseudo-Hermitian Harmonic Maps

    We give several versions of Siu's $\partial\bar{\partial}$-formula for maps from a strictly pseudoconvex pseudo-Hermitian manifold $(M^{2m+1}, \theta)$ into a K\"ahler manifold $(N^n, g)$. We also define and study the notion of pseudo-Hermitian harmonicity for maps from $M$ into $N$. In particular, we prove a CR version of Siu's Rigidity Theorem for pseudo-Hermitian harmonic maps from a pseudo-Hermitian manifold with vanishing Webster torsion into a K\"ahler manifold having strongly negative curvature.
    Comment: 10 pages, to appear in Proc. Amer. Math. Soc.
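    For orientation, the Kähler prototype behind such results has the following schematic shape; this is a sketch of the classical Siu $\partial\bar{\partial}$-Bochner identity and of how it is typically used, not the paper's CR version, and the grouping of the right-hand terms is our assumption.

% Schematic Siu \partial\bar\partial-Bochner identity for a harmonic map
% f: (M, omega) -> (N, h) between Kähler manifolds (the exact tensorial
% form of the two right-hand terms is an assumption of this sketch).
\[
  \partial\bar{\partial}\Big( h_{\alpha\bar{\beta}}\,
      \bar{\partial} f^{\alpha} \wedge \overline{\bar{\partial} f^{\beta}} \Big)
  \;=\;
  \underbrace{Q_{\nabla}(f)}_{\text{first-order, sign-definite term}}
  \;+\;
  \underbrace{Q_{R^{N}}(f)}_{\text{curvature-of-}N\text{ term}} .
\]
% Wedging with omega^{m-2} and integrating over a compact M annihilates the
% left-hand side by Stokes' theorem; when R^N is strongly negative, both
% right-hand terms have a fixed sign and must vanish, forcing
% \bar\partial f = 0 (holomorphicity) wherever f has sufficiently high rank.

    In the CR setting of the paper, the vanishing-Webster-torsion hypothesis presumably plays the role that the Kähler condition on the source plays in this argument.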

    Synthesizing Dynamic Patterns by Spatial-Temporal Generative ConvNet

    Video sequences contain rich dynamic patterns, such as dynamic texture patterns that exhibit stationarity in the temporal domain, and action patterns that are non-stationary in either the spatial or the temporal domain. We show that a spatial-temporal generative ConvNet can be used to model and synthesize dynamic patterns. The model defines a probability distribution on the video sequence, whose log probability is given by a spatial-temporal ConvNet consisting of multiple layers of spatial-temporal filters that capture spatial-temporal patterns at different scales. The model can be learned from the training video sequences by an "analysis by synthesis" learning algorithm that iterates the following two steps. Step 1 synthesizes video sequences from the currently learned model. Step 2 then updates the model parameters based on the difference between the synthesized video sequences and the observed training sequences. We show that the learning algorithm can synthesize realistic dynamic patterns.
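    Below is a minimal, hypothetical PyTorch sketch of this analysis-by-synthesis loop; the toy architecture, Langevin step size, and tensor shapes are illustrative assumptions, not the paper's implementation.

# Minimal analysis-by-synthesis sketch for an energy-based
# spatial-temporal ConvNet (illustrative assumptions throughout).
import torch
import torch.nn as nn

class STConvNet(nn.Module):
    """Tiny spatial-temporal ConvNet scoring a video clip.

    f_theta(v) is an unnormalized log probability, i.e. p_theta(v) ~ exp(f_theta(v)).
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(3, 5, 5), stride=2, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, v):                    # v: (B, 3, T, H, W)
        h = self.net(v).mean(dim=(2, 3, 4))  # global average pooling
        return self.head(h).squeeze(-1)      # scalar score per clip

def langevin_synthesis(model, v, n_steps=20, step=0.01):
    """Step 1: synthesize clips from the current model via Langevin dynamics."""
    v = v.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(model(v).sum(), v)[0]
        v = (v + 0.5 * step ** 2 * grad
               + step * torch.randn_like(v)).detach().requires_grad_(True)
    return v.detach()

model = STConvNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
observed = torch.randn(4, 3, 8, 32, 32)      # stand-in training clips

for it in range(10):
    synthesized = langevin_synthesis(model, torch.randn_like(observed))
    # Step 2: update theta on the difference between synthesized and
    # observed clips (stochastic gradient of the log-likelihood).
    loss = model(synthesized).mean() - model(observed).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

    Minimizing this loss raises the score of observed clips while lowering that of synthesized ones, which is exactly the gradient comparison the two-step description above calls for.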

    Interpretable Convolutional Neural Networks

    This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify the knowledge representations in the high conv-layers of a CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. No annotations of object parts or textures are needed to supervise the learning process; instead, the interpretable CNN automatically assigns an object part to each filter in a high conv-layer during learning. The method can be applied to CNNs with various structures. The clear knowledge representation in an interpretable CNN helps people understand the logic inside the CNN, i.e., which patterns the CNN bases its decisions on. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs.
    Comment: In this version, we release the website of the code. Compared to the previous version, we have corrected all values of location instability in Tables 3--6 by dividing them by sqrt(2), i.e., a = a/sqrt(2). These revisions do NOT diminish the superior performance of our method, because we make the same correction to the location-instability values of all baselines.
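    As a rough illustration of pushing each filter toward a single, part-like response, the sketch below adds a single-peak penalty on a high conv-layer's feature maps; this Gaussian-mask stand-in is our simplification, not the paper's mutual-information template loss, and the mask width and usage are assumptions.

# Simplified single-peak regularizer in the spirit of interpretable filters
# (a stand-in for the paper's template loss; sigma and the loss weight
# are illustrative assumptions).
import torch

def single_peak_penalty(feat, sigma=2.0):
    """Penalize each filter's activation mass far from its strongest response.

    feat: (B, C, H, W) post-ReLU feature maps of a high conv-layer.
    Encourages each filter to fire on one localized (part-like) region.
    """
    B, C, H, W = feat.shape
    flat = feat.flatten(2)                     # (B, C, H*W)
    idx = flat.argmax(dim=2)                   # peak location per feature map
    py, px = idx // W, idx % W                 # (B, C) peak coordinates
    ys = torch.arange(H, dtype=feat.dtype).view(1, 1, H, 1)
    xs = torch.arange(W, dtype=feat.dtype).view(1, 1, 1, W)
    d2 = (ys - py.view(B, C, 1, 1).to(feat.dtype)) ** 2 \
       + (xs - px.view(B, C, 1, 1).to(feat.dtype)) ** 2
    mask = torch.exp(-d2 / (2 * sigma ** 2))   # Gaussian bump at the peak
    # Activation outside the bump is penalized; inside it is free.
    return ((1.0 - mask) * feat).mean()

# Usage: add the penalty to the task loss for the layer(s) being interpreted.
feat = torch.relu(torch.randn(2, 8, 14, 14))
loss = single_peak_penalty(feat)
print(loss.item())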