906 research outputs found

    Optimal Auctions vs. Anonymous Pricing: Beyond Linear Utility

    The revenue optimal mechanism for selling a single item to agents with independent but non-identically distributed values is complex for agents with linear utility (Myerson, 1981) and has no closed-form characterization for agents with non-linear utility (cf. Alaei et al., 2012). Nonetheless, for linear utility agents satisfying a natural regularity property, Alaei et al. (2018) showed that simply posting an anonymous price is an e-approximation. We give a parameterization of the regularity property that extends to agents with non-linear utility and show that the approximation bound of anonymous pricing for regular agents approximately extends to agents that satisfy this approximate regularity property. We apply this approximation framework to prove that anonymous pricing is a constant approximation to the revenue optimal single-item auction for agents with public-budget utility, private-budget utility, and (a special case of) risk-averse utility. Comment: Appeared at EC 201
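    The e-approximation claim above can be sanity-checked numerically. Below is a minimal Monte Carlo sketch (not from the paper), assuming hypothetical exponential value distributions, which are regular; it compares Myerson's optimal revenue, E[max(0, max_i phi_i(v_i))], against the revenue of the best anonymous posted price.

```python
# Monte Carlo sketch: optimal auction revenue vs. best anonymous posted price
# for independent, non-identical exponential (hence regular) value distributions.
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([1.0, 2.0, 4.0])          # hypothetical rate parameters, one per agent
samples = rng.exponential(1.0 / rates, size=(200_000, len(rates)))

# Myerson's optimal revenue = E[ max(0, max_i phi_i(v_i)) ],
# with phi(v) = v - (1 - F(v)) / f(v) = v - 1/rate for an exponential distribution.
virtual = samples - 1.0 / rates
opt_rev = np.maximum(virtual.max(axis=1), 0.0).mean()

# Anonymous pricing revenue: p * Pr[max_i v_i >= p], maximized over a price grid.
highest = samples.max(axis=1)
prices = np.linspace(0.01, 5.0, 500)
anon_rev = max(p * (highest >= p).mean() for p in prices)

print(f"optimal auction revenue ~ {opt_rev:.3f}")
print(f"anonymous pricing revenue ~ {anon_rev:.3f}  (ratio {opt_rev / anon_rev:.2f}, at most e for regular agents)")
```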

    Simple Mechanisms for Non-linear Agents

    We consider agents with non-linear preferences given by private values and private budgets. We quantify the extent to which posted pricing approximately optimizes welfare and revenue for a single agent. We give a reduction framework that extends the approximation of multi-agent pricing-based mechanisms from linear utility to non-linear utility. This reduction framework is broadly applicable, as Alaei et al. (2012) have shown that mechanisms for linear agents can generally be interpreted as pricing-based mechanisms. We give example applications of the framework to oblivious posted pricing (e.g., Chawla et al., 2010), sequential posted pricing (e.g., Yan, 2011), and virtual surplus maximization (Myerson, 1981).
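    As a toy illustration of the pricing-based mechanisms discussed above, the sketch below simulates a sequential posted-price mechanism for agents with private values and private budgets. The uniform distributions, the fixed price of 0.5, and the accept-if-affordable behavioral rule are my own assumptions, not the paper's construction.

```python
# Illustrative sketch only: sequential posted pricing with budget-constrained agents.
import numpy as np

def sequential_posted_pricing(values, budgets, prices):
    """Offer agents a take-it-or-leave-it price in sequence; the first acceptance wins."""
    for value, budget, price in zip(values, budgets, prices):
        # A budgeted agent can only accept if the price is both desirable and affordable.
        if value >= price and budget >= price:
            return price          # revenue from the sale
    return 0.0                    # item goes unsold

rng = np.random.default_rng(1)
n_agents, trials = 4, 50_000
revenue = np.mean([
    sequential_posted_pricing(rng.uniform(0, 1, n_agents),
                              rng.uniform(0, 1, n_agents),
                              prices=[0.5] * n_agents)
    for _ in range(trials)
])
print(f"expected revenue of the posted-price sketch ~ {revenue:.3f}")
```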

    A Survey on Deep Clustering: From the Prior Perspective

    Facilitated by the powerful feature extraction ability of neural networks, deep clustering has achieved great success in analyzing high-dimensional and complex real-world data. The performance of deep clustering methods is affected by various factors such as network structures and learning objectives. However, as pointed out in this survey, the essence of deep clustering lies in the incorporation and utilization of prior knowledge, which is largely ignored by existing works. From pioneering deep clustering methods based on data structure assumptions to recent contrastive clustering methods based on data augmentation invariances, the development of deep clustering intrinsically corresponds to the evolution of prior knowledge. In this survey, we provide a comprehensive review of deep clustering methods by categorizing them into six types of prior knowledge. We find that, in general, the innovation in priors follows two trends, namely, i) from mining to constructing, and ii) from internal to external. Besides, we provide a benchmark on five widely-used datasets and analyze the performance of methods with diverse priors. By providing a novel prior-knowledge perspective, we hope this survey can offer new insights and inspire future research in the deep clustering community.
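    As one concrete example of the augmentation-invariance prior mentioned above, the numpy sketch below (my own illustration, not taken from the survey) penalizes disagreement between the soft cluster assignments of two augmented views of the same samples.

```python
# Minimal sketch of an "augmentation invariance" prior for deep clustering:
# two augmented views of the same sample should get similar soft cluster assignments.
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def invariance_loss(logits_view1, logits_view2):
    """Cross-entropy between the soft cluster assignments of two augmented views."""
    p = softmax(logits_view1)               # assignments for view 1
    q = softmax(logits_view2)               # assignments for view 2
    return -np.mean(np.sum(p * np.log(q + 1e-12), axis=1))

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 5))                       # hypothetical cluster logits (8 samples, 5 clusters)
noisy = base + 0.1 * rng.normal(size=base.shape)     # a second, perturbed "view"
print(f"invariance loss between views: {invariance_loss(base, noisy):.3f}")
```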

    Geometric Interaction Augmented Graph Collaborative Filtering

    Graph-based collaborative filtering captures the essential and abundant collaborative signals in high-order interactions, and has thus received increasing research interest. Conventionally, the embeddings of users and items are defined in Euclidean space and propagated over the interaction graphs. Meanwhile, recent works point out that high-order interactions naturally form tree-like structures, on which hyperbolic models thrive. However, interaction graphs inherently exhibit hybrid and nested geometric characteristics, and existing single-geometry models are inadequate to fully capture such sophisticated topological patterns. In this paper, we propose to model the user-item interactions in a hybrid geometric space, in which the merits of Euclidean and hyperbolic spaces are jointly exploited to learn expressive representations. Experimental results on public datasets validate the effectiveness of our proposal.
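    To make the hybrid-geometry idea concrete, the sketch below (an illustration under my own assumptions, not the paper's model) scores a user-item pair with both a Euclidean distance and a Poincare-ball (curvature -1) distance obtained via the exponential map at the origin, then mixes the two scores with a hypothetical fusion weight.

```python
# Illustration only: score a user-item pair in Euclidean and hyperbolic space, then combine.
import numpy as np

def exp_map_origin(v):
    """Project a Euclidean (tangent) vector onto the Poincare ball via the exponential map at the origin."""
    norm = np.linalg.norm(v) + 1e-12
    return np.tanh(norm) * v / norm

def poincare_distance(u, v):
    """Geodesic distance on the Poincare ball with curvature -1."""
    diff = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * diff / denom)

rng = np.random.default_rng(0)
user, item = rng.normal(scale=0.1, size=8), rng.normal(scale=0.1, size=8)

euclidean_score = -np.linalg.norm(user - item)                        # closer => higher score
hyperbolic_score = -poincare_distance(exp_map_origin(user), exp_map_origin(item))
hybrid_score = 0.5 * euclidean_score + 0.5 * hyperbolic_score         # hypothetical fusion weight
print(f"hybrid user-item score: {hybrid_score:.3f}")
```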

    A Critical Role for CaMKII in Behavioral Timescale Synaptic Plasticity in Hippocampal CA1 Pyramidal Neurons

    Behavioral timescale synaptic plasticity (BTSP) is a type of non-Hebbian synaptic plasticity reported to underlie place field formation. Despite this important function, the molecular mechanisms underlying BTSP are poorly understood. The α-calcium-calmodulin-dependent protein kinase II (αCaMKII) is activated by synaptic transmission-mediated calcium influx, and its subsequent phosphorylation is central to synaptic plasticity. Because the activity of αCaMKII is known to outlast the event triggering phosphorylation, we hypothesized that it could mediate the extended timescale of BTSP. To examine the role of αCaMKII in BTSP, we performed whole-cell in vivo and in vitro recordings in CA1 pyramidal neurons from mice engineered with a point mutation at the autophosphorylation site (T286A) causing accelerated signaling kinetics. Here, we demonstrate a profound deficit in synaptic plasticity, strongly suggesting that αCaMKII signaling is required for BTSP. This study elucidates part of the molecular mechanism of BTSP and provides insight into the function of αCaMKII in place cell formation and ultimately learning and memory.

    Deployment Prior Injection for Run-time Calibratable Object Detection

    With a strong alignment between the training and test distributions, object relations used as a context prior facilitate object detection. Yet this prior turns into a harmful, and often unavoidable, training-set bias when the test distribution shifts across space and time. Moreover, existing detectors cannot incorporate a deployment context prior at test time without a parameter update. Such a capability requires the model to explicitly learn representations that are disentangled with respect to the context prior. To achieve this, we introduce an additional graph input to the detector, where the graph represents the deployment context prior and its edge values represent object relations. The detector is then trained, with a modified training objective, to bind its behavior to this graph. As a result, during the test phase, any suitable deployment context prior can be injected into the detector via graph edits, calibrating, or "re-biasing", the detector towards the given prior at run-time without a parameter update. Even if the deployment prior is unknown, the detector can self-calibrate using a deployment prior approximated from its own predictions. Comprehensive experimental results on the COCO dataset, as well as cross-dataset testing on the Objects365 dataset, demonstrate the effectiveness of the run-time calibratable detector.
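    As a toy picture of prior injection at test time (not the paper's graph-based architecture), the sketch below re-weights per-class detection scores with an editable co-occurrence "prior graph"; the class set, the edge values, and the mixing knob alpha are all hypothetical.

```python
# Toy sketch: re-bias per-class detection scores at run-time with an editable prior graph.
import numpy as np

classes = ["person", "surfboard", "snowboard"]          # hypothetical label set
# Deployment prior graph: edge [i, j] ~ strength of the relation between class i and j.
beach_prior = np.array([[1.0, 0.9, 0.1],
                        [0.9, 1.0, 0.0],
                        [0.1, 0.0, 1.0]])

def rebias(scores, prior, alpha=0.5):
    """Mix raw detector scores with prior-propagated scores (alpha is a hypothetical knob)."""
    context = prior @ scores / prior.sum(axis=1)        # prior-weighted average of class evidence
    return (1 - alpha) * scores + alpha * context

raw_scores = np.array([0.80, 0.40, 0.45])               # detector unsure: surfboard vs. snowboard
print(dict(zip(classes, np.round(rebias(raw_scores, beach_prior), 3))))
# With the "beach" prior injected, the surfboard score overtakes the snowboard score.
```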