Combining Traditional Marketing and Viral Marketing with Amphibious Influence Maximization
In this paper, we propose the amphibious influence maximization (AIM) model
that combines traditional marketing via content providers and viral marketing
to consumers in social networks in a single framework. In AIM, a set of content
providers and consumers form a bipartite network while consumers also form
their social network, and influence propagates from the content providers to
consumers and among consumers in the social network following the independent
cascade model. An advertiser needs to select a subset of seed content providers
and a subset of seed consumers, such that the influence from the seed providers
passing through the seed consumers could reach a large number of consumers in
the social network in expectation.
We prove that the AIM problem is NP-hard to approximate to within any
constant factor via a reduction from Feige's k-prover proof system for 3-SAT5.
We also give evidence that even when the social network graph is trivial (i.e.
has no edges), a polynomial time constant factor approximation for AIM is
unlikely. However, when we assume that the weighted bi-adjacency matrix that
describes the influence of content providers on consumers is of constant rank,
a common assumption often used in recommender systems, we provide a
polynomial-time algorithm that achieves an approximation ratio of
(1 - 1/e - ε) for any (polynomially small) ε > 0. Our algorithmic results
still hold for a more general model where cascades in the social network
follow a general monotone and submodular function.

Comment: An extended abstract appeared in the Proceedings of the 16th ACM
Conference on Economics and Computation (EC), 2015
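The cascade dynamics underlying the AIM model can be sketched in a few lines. This is a generic simulation of the independent cascade model, not the authors' seed-selection algorithm; the toy bipartite-plus-social graph, node names, and edge probabilities are illustrative assumptions.

```python
import random

def independent_cascade(graph, seeds, rng=random.Random(0)):
    """Simulate one run of the independent cascade model.

    graph: dict mapping node -> list of (neighbor, activation_probability).
    seeds: iterable of initially active nodes.
    Returns the set of all nodes active when the cascade stops.
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v, p in graph.get(u, []):
                # Each edge (u, v) gets exactly one activation attempt.
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

# Hypothetical toy network: provider c1 reaches consumer u1, who spreads
# influence through the consumer social network (probabilities illustrative).
g = {
    "c1": [("u1", 1.0)],
    "u1": [("u2", 1.0), ("u3", 0.0)],
    "u2": [("u3", 1.0)],
}
print(sorted(independent_cascade(g, {"c1"})))  # ['c1', 'u1', 'u2', 'u3']
```

With 0/1 probabilities the run is deterministic: influence passes from the seed provider through the seed consumer and onward through the social network, which is exactly the two-hop structure the AIM objective measures in expectation.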
Boundary curves of surfaces with the 4-plane property
Let M be an orientable and irreducible 3-manifold whose boundary is an
incompressible torus. Suppose that M does not contain any closed nonperipheral
embedded incompressible surfaces. We will show in this paper that the immersed
surfaces in M with the 4-plane property can realize only finitely many boundary
slopes. Moreover, we will show that only finitely many Dehn fillings of M can
yield 3-manifolds with nonpositive cubings. This gives the first examples of
hyperbolic 3-manifolds that cannot admit any nonpositive cubings.

Comment: Published in Geometry and Topology at
http://www.maths.warwick.ac.uk/gt/GTVol6/paper21.abs.htm
An algorithm to detect laminar 3-manifolds
We show that there are algorithms to determine if a 3-manifold contains an
essential lamination or a Reebless foliation.

Comment: Published by Geometry and Topology at
http://www.maths.warwick.ac.uk/gt/GTVol7/paper8.abs.htm
Learning Incoherent Subspaces: Classification via Incoherent Dictionary Learning
In this article we present the supervised iterative projections and rotations (s-ipr) algorithm, a method for learning discriminative incoherent subspaces from data. We derive s-ipr as a supervised extension of our previously proposed iterative projections and rotations (ipr) algorithm for incoherent dictionary learning, and we employ it to learn incoherent subspaces that model signals belonging to different classes. We test our method as a feature transform for supervised classification, first by visualising transformed features from a synthetic dataset and from the ‘iris’ dataset, then by using the resulting features in a classification experiment.
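As a rough illustration of the incoherence notion that dictionary- and subspace-learning methods of this kind aim to reduce, one can measure the mutual coherence of a set of atoms. This sketch is not the s-ipr algorithm itself; the function name and test dictionaries are assumptions for illustration.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm atoms
    (columns) of the dictionary D: 0 means perfectly incoherent."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)          # Gram matrix of normalised atoms
    np.fill_diagonal(G, 0.0)       # ignore each atom's self-correlation
    return float(G.max())

print(mutual_coherence(np.eye(3)))                   # 0.0 (orthonormal basis)
print(mutual_coherence(np.array([[1.0, 1.0],
                                 [0.0, 0.0]])))      # 1.0 (duplicated atom)
```

Methods like ipr iteratively transform the dictionary to push this quantity down while preserving the fit to the data.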
Theory of the Lattice Boltzmann Equation: Symmetry properties of Discrete Velocity Sets
In the lattice Boltzmann equation, continuous particle velocity space is replaced by a finite-dimensional discrete set. The number of linearly independent velocity moments in a lattice Boltzmann model cannot exceed the number of discrete velocities. Thus, finite dimensionality introduces linear dependencies among the moments that do not exist in the exact continuous theory. Given a discrete velocity set, it is important to know exactly to what order the moments are free of these dependencies. Elementary group theory is applied to the solution of this problem. It is found that by decomposing the velocity set into subsets that transform among themselves under an appropriate symmetry group, it becomes relatively straightforward to assess the behavior of moments in the theory. The construction of some standard two- and three-dimensional models is reviewed from this viewpoint, and procedures for constructing some new higher-dimensional models are suggested.
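The linear dependencies among moments are easy to exhibit numerically. A minimal sketch using the standard D2Q9 velocity set (on this set every component satisfies cx**3 == cx, so higher moments collapse onto lower ones and the moment space has rank at most 9); this is a numerical illustration, not the group-theoretic analysis of the paper:

```python
import numpy as np
from itertools import product

# The standard D2Q9 discrete velocity set.
velocities = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)]

def moment_vector(m, n):
    """The velocity moment of order (m, n) as a vector over the 9 velocities."""
    return np.array([cx ** m * cy ** n for cx, cy in velocities], dtype=float)

# Stack every moment up to order 4 in each component: 25 moment vectors,
# but since cx**3 == cx for components in {-1, 0, 1}, at most 9 of them
# can be linearly independent.
M = np.array([moment_vector(m, n) for m, n in product(range(5), repeat=2)])
print(M.shape, np.linalg.matrix_rank(M))   # (25, 9) 9
```

The rank saturates at the number of discrete velocities, matching the abstract's statement that the number of independent moments cannot exceed it.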
The palaeobiogeographical spread of the acritarch Veryhachium in the Early and Middle Ordovician and its impact on biostratigraphical applications
The genus Veryhachium Deunff, 1954, is one of the most frequently documented acritarch genera, being recorded from the Early Ordovician to the Neogene. Detailed investigations show that Veryhachium species first appeared near the South Pole in the earliest part of the Tremadocian (Early Ordovician). The genus was present at high palaeolatitudes (generally >60° S) on the Gondwanan margin during the Tremadocian before spreading to lower palaeolatitudes on the Gondwanan margin and other palaeocontinents (Avalonia and Baltica) during the Floian. It became cosmopolitan in the Middle and Late Ordovician. Although useful for distinguishing Ordovician from Cambrian strata, the diachronous first appearance data of Veryhachium morphotypes mean that they should be used with caution for long-distance correlation.
A characterization of the multivariate excess wealth ordering
In this paper, some new properties of the upper-corrected orthant of a random vector are proved. The univariate right-spread or excess wealth function, introduced by Fernández-Ponce et al. (1996), is extended to multivariate random vectors, and some properties of this multivariate function are studied. This function was later used to define the excess wealth ordering by Shaked and Shanthikumar (1998) and Fernández-Ponce et al. (1998). The multivariate excess wealth function enables us to define a new stochastic comparison that is weaker than the multivariate dispersion orderings. Also, some properties relating the multivariate excess wealth order to stochastic dependence are described.
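For reference, the univariate right-spread (excess wealth) function being generalised is W(p) = E[(X - F^{-1}(p))_+]. A minimal empirical sketch of this univariate version (the paper's multivariate extension via the upper-corrected orthant is more involved and is not reproduced here):

```python
import numpy as np

def excess_wealth(sample, p):
    """Empirical right-spread (excess wealth) function
    W(p) = E[(X - F^{-1}(p))_+] for a univariate sample."""
    x = np.asarray(sample, dtype=float)
    q = np.quantile(x, p)                      # empirical p-quantile F^{-1}(p)
    return float(np.mean(np.clip(x - q, 0.0, None)))

# Mean exceedance above the empirical median of a small sample.
print(excess_wealth([1.0, 2.0, 3.0, 4.0], 0.5))   # 0.5
```

Intuitively, W(p) measures the expected wealth in excess of the p-quantile, so distributions with heavier right tails have uniformly larger excess wealth functions, which is what the associated stochastic order compares.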
Multiscale Finite-Difference-Diffusion-Monte-Carlo Method for Simulating Dendritic Solidification
We present a novel hybrid computational method to simulate accurately
dendritic solidification in the low undercooling limit where the dendrite tip
radius is one or more orders of magnitude smaller than the characteristic
spatial scale of variation of the surrounding thermal or solutal diffusion
field. The first key feature of this method is an efficient multiscale
diffusion Monte-Carlo (DMC) algorithm which allows off-lattice random walkers
to take longer and concomitantly rarer steps with increasing distance away from
the solid-liquid interface. As a result, the computational cost of evolving the
large scale diffusion field becomes insignificant when compared to that of
calculating the interface evolution. The second key feature is that random
walks are only permitted outside of a thin liquid layer surrounding the
interface. Inside this layer and in the solid, the diffusion equation is solved
using a standard finite-difference algorithm that is interfaced with the DMC
algorithm using the local conservation law for the diffusing quantity. Here we
combine this algorithm with a previously developed phase-field formulation of
the interface dynamics and demonstrate that it can accurately simulate
three-dimensional dendritic growth in a previously unreachable range of low
undercoolings that is of direct experimental relevance.

Comment: RevTeX, 16 pages, 10 eps figures, submitted to J. Comp. Phys.
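The first key feature, distance-dependent step lengths, can be caricatured in one dimension. This is a toy sketch of the idea only: the scaling rule (half the distance to the interface, floored at a base step) and the unit diffusivity are illustrative assumptions, not the authors' scheme.

```python
import random

def multiscale_step(position, interface=0.0, base_step=0.01,
                    rng=random.Random(1)):
    """Take one off-lattice random-walk step whose length grows with the
    walker's distance from the interface. A jump of length L advances
    diffusive time by L**2 / 2 (unit diffusivity), so long steps are
    correspondingly rare per unit time -- the essence of the multiscale
    DMC idea: far-field diffusion costs almost nothing to evolve.
    """
    d = abs(position - interface)
    step = max(base_step, 0.5 * d)       # toy rule: step length ~ distance
    dt = step ** 2 / 2.0                 # diffusion time advanced by this jump
    return position + rng.choice((-1.0, 1.0)) * step, dt

# A walker far from the interface covers diffusion time in a few big jumps.
pos, t = 10.0, 0.0
for _ in range(5):
    pos, dt = multiscale_step(pos)
    t += dt
print(pos, t)
```

Near the interface the step length collapses to base_step, which is where the abstract's finite-difference layer would take over via the local conservation law.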
Self-Consistent Asset Pricing Models
We discuss the foundations of factor or regression models in the light of the
self-consistency condition that the market portfolio (and more generally the
risk factors) is (are) constituted of the assets whose returns it is (they are)
supposed to explain. As already reported in several articles, self-consistency
implies correlations between the return disturbances. As a consequence, the
alpha's and beta's of the factor model are unobservable. Self-consistency leads
to renormalized beta's with zero effective alpha's, which are observable with
standard OLS regressions. Analytical derivations and numerical simulations show
that, for arbitrary choices of the proxy which are different from the true
market portfolio, a modified linear regression holds with a non-zero value
at the origin between an asset's return and the proxy's return.
Self-consistency also introduces ``orthogonality'' and ``normality'' conditions
linking the beta's, alpha's (as well as the residuals) and the weights of the
proxy portfolio. Two diagnostics based on these orthogonality and normality
conditions are implemented on a basket of 323 assets which have been components
of the S&P500 in the period from Jan. 1990 to Feb. 2005. These two diagnostics
show interesting departures from dynamical self-consistency starting about 2
years before the end of the Internet bubble. Finally, the factor decomposition
with the self-consistency condition derives a risk-factor decomposition in the
multi-factor case which is identical to the principal components analysis
(PCA), thus providing a direct link between model-driven and data-driven
constructions of risk factors.

Comment: 36 pages with 8 figures. Large version with 6 appendices for the
Proceedings of the 5th International Conference APFS (Applications of Physics
in Financial Analysis), June 29 - July 1, 2006, Torino
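The self-consistency constraint linking the beta's and the proxy weights can be verified numerically: if the proxy is an exact weighted sum of the assets, the weighted average of the OLS betas is identically one. A minimal sketch with toy i.i.d. returns (this illustrates the constraint only, not the paper's two diagnostics or its S&P500 analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_obs = 5, 20000
w = np.full(n_assets, 1.0 / n_assets)    # equal weights for the proxy

# Toy i.i.d. asset returns; the "market" proxy is built from the assets
# themselves, which is exactly the self-consistency condition.
r = rng.normal(size=(n_obs, n_assets))
m = r @ w                                 # proxy (market) return series

# OLS slope of each asset's return on the proxy (zero-mean, no intercept).
betas = (r.T @ m) / (m @ m)

# Since sum_i w_i * r_i = m, it follows that sum_i w_i * beta_i = 1 exactly,
# whatever the individual returns look like.
print(w @ betas)                          # 1 up to floating-point rounding
```

The identity w @ betas = ((r @ w) @ m) / (m @ m) = 1 holds by construction, which is why self-consistency constrains the betas (and, with intercepts, the alphas and residuals) rather than leaving them free parameters.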
