
    PyTimeVar: A Python Package for Trending Time-Varying Time Series Models

    Time-varying regression models with trends are commonly used to analyze long-term tendencies and evolving relationships in data. However, statistical inference for parameter paths is challenging, and recent literature has proposed various bootstrap methods to address this issue. Despite this, no software package in any language has yet offered the recently developed tools for conducting inference in time-varying regression models. We propose PyTimeVar, a Python package that implements nonparametric estimation along with multiple new bootstrap-assisted inference methods. It provides a range of bootstrap techniques for constructing pointwise confidence intervals and simultaneous bands for parameter curves. Additionally, the package includes four widely used methods for modeling trends and time-varying relationships. This allows users to compare different approaches within a unified environment.
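    To make the setting concrete, the sketch below shows the kind of nonparametric estimator such a package wraps: a local linear (kernel-weighted) fit of the coefficient path in y_t = x_t' beta(t/n) + eps_t. This is a generic illustration in plain NumPy under my own assumptions (Epanechnikov kernel, fixed bandwidth); the function name is hypothetical and it is not PyTimeVar's actual API.

```python
import numpy as np

def local_linear_tv_coef(y, X, h, kernel=lambda u: 0.75 * np.maximum(1 - u**2, 0.0)):
    """Local linear estimate of beta(t/n) in y_t = x_t' beta(t/n) + eps_t.

    y : (n,) response, X : (n, k) regressors, h : bandwidth in rescaled time.
    Returns an (n, k) array whose row t is the estimated coefficient path at t/n.
    """
    n, k = X.shape
    grid = np.arange(1, n + 1) / n
    beta_hat = np.empty((n, k))
    for i, tau in enumerate(grid):
        w = kernel((grid - tau) / h)
        # Local linear design: coefficient levels plus local slopes around tau.
        Z = np.hstack([X, X * (grid - tau)[:, None]])
        WZ = Z * w[:, None]
        coef, *_ = np.linalg.lstsq(WZ.T @ Z, WZ.T @ y, rcond=None)
        beta_hat[i] = coef[:k]  # keep the level part only
    return beta_hat

# Simulated example: intercept and one regressor with smoothly varying coefficients.
rng = np.random.default_rng(0)
n = 500
t = np.arange(1, n + 1) / n
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.column_stack([np.sin(2 * np.pi * t), 1 + 0.5 * t])
y = np.sum(X * beta_true, axis=1) + 0.3 * rng.normal(size=n)
paths = local_linear_tv_coef(y, X, h=0.1)
```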

    Bootstrap inference for linear time-varying coefficient models in locally stationary time series

    Time-varying coefficient models can capture evolving relationships. However, constructing asymptotic confidence bands for coefficient curves in these models is challenging due to slow convergence rates and the presence of various nuisance parameters. A residual-based sieve bootstrap method has recently been proposed to address these issues. While it successfully produces confidence bands with accurate empirical coverage, its applicability is restricted to strictly stationary processes. We introduce a new bootstrap scheme, the local blockwise wild bootstrap (LBWB), that allows for locally stationary processes. The LBWB can replicate the distribution of the parameter estimates while automatically accounting for nuisance parameters. An extensive simulation study reveals the superior performance of the LBWB compared to various benchmark approaches. It also shows the potential applicability of the LBWB in broader scenarios, including time-varying cointegrating models. We then examine herding effects in the Chinese renewable energy market using the proposed methods. Our findings strongly support the presence of herding behavior before 2016, aligning with earlier studies. However, contrary to previous research, we find no significant evidence of herding from around 2018 to 2021. Online supplementary materials are available for this article.
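    The snippet below illustrates the blockwise wild bootstrap idea in NumPy: residuals are perturbed with one random multiplier per local block, so short-range dependence within a block is preserved in the bootstrap sample. It is a generic sketch under my own assumptions (Gaussian multipliers, non-overlapping blocks), not the authors' exact LBWB scheme.

```python
import numpy as np

def blockwise_wild_bootstrap(fitted, residuals, block_len, n_boot, seed=None):
    """Generate bootstrap samples y* = fitted + xi_b * residual, where every
    block of `block_len` consecutive residuals shares one N(0,1) multiplier.

    A generic blockwise wild bootstrap; the actual LBWB involves additional
    localization details not reproduced here.
    """
    rng = np.random.default_rng(seed)
    n = len(fitted)
    n_blocks = int(np.ceil(n / block_len))
    samples = np.empty((n_boot, n))
    for b in range(n_boot):
        xi = rng.standard_normal(n_blocks)          # one multiplier per block
        multipliers = np.repeat(xi, block_len)[:n]  # expand to sample length
        samples[b] = fitted + multipliers * residuals
    return samples
```

    In practice each bootstrap sample would be re-estimated with the same nonparametric estimator, and pointwise intervals or simultaneous bands would be read off the quantiles of the re-estimated coefficient paths.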

    Instance-Conditioned Adaptation for Large-scale Generalization of Neural Combinatorial Optimization

    The neural combinatorial optimization (NCO) approach has shown great potential for solving routing problems without the requirement of expert knowledge. However, existing constructive NCO methods cannot directly solve large-scale instances, which significantly limits their application prospects. To address these crucial shortcomings, this work proposes a novel Instance-Conditioned Adaptation Model (ICAM) for better large-scale generalization of neural combinatorial optimization. In particular, we design a powerful yet lightweight instance-conditioned adaptation module for the NCO model to generate better solutions for instances across different scales. In addition, we develop an efficient three-stage reinforcement learning-based training scheme that enables the model to learn cross-scale features without any labeled optimal solution. Experimental results show that our proposed method is capable of obtaining excellent results with a very fast inference time in solving Traveling Salesman Problems (TSPs) and Capacitated Vehicle Routing Problems (CVRPs) across different scales. To the best of our knowledge, our model achieves state-of-the-art performance among all RL-based constructive methods for TSP and CVRP with up to 1,000 nodes.
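    The abstract does not detail the adaptation module, so the PyTorch snippet below is only a schematic guess at what instance conditioning can look like: instance-level features (here a scalar such as the log problem size) are mapped to a temperature and a distance penalty that reshape the decoder's attention logits. The module name and design are hypothetical and are not ICAM's actual architecture.

```python
import torch
import torch.nn as nn

class InstanceConditionedAdapter(nn.Module):
    """Toy instance-conditioned adaptation: an MLP maps instance-level
    features to a temperature and a distance penalty applied to the
    decoder's attention logits. Schematic illustration only."""

    def __init__(self, feat_dim: int = 1, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, logits, node_dist, instance_feats):
        # logits:         (batch, n_nodes) raw compatibility scores
        # node_dist:      (batch, n_nodes) distance from the current node
        # instance_feats: (batch, feat_dim), e.g. log of the instance size
        temp, penalty = self.mlp(instance_feats).chunk(2, dim=-1)  # (batch, 1) each
        return logits * torch.exp(temp) - torch.relu(penalty) * node_dist
```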

    Self-Improved Learning for Scalable Neural Combinatorial Optimization

    The end-to-end neural combinatorial optimization (NCO) method shows promising performance in solving complex combinatorial optimization problems without the need for expert design. However, existing methods struggle with large-scale problems, hindering their practical applicability. To overcome this limitation, this work proposes a novel Self-Improved Learning (SIL) method for better scalability of neural combinatorial optimization. Specifically, we develop an efficient self-improvement mechanism that enables direct model training on large-scale problem instances without any labeled data. Powered by an innovative local reconstruction approach, this method can iteratively generate better solutions by itself as pseudo-labels to guide efficient model training. In addition, we design a linear-complexity attention mechanism for the model to efficiently handle large-scale combinatorial problem instances with low computation overhead. Comprehensive experiments on the Travelling Salesman Problem (TSP) and the Capacitated Vehicle Routing Problem (CVRP) with up to 100K nodes in both uniform and real-world distributions demonstrate the superior scalability of our method.
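    A minimal, model-free sketch of the local reconstruction idea follows: a small segment of the current tour is re-solved (here by a simple reversal move standing in for the neural model) and kept only if it shortens the tour; in SIL, the improved tours would then serve as pseudo-labels for training the constructive model. The helper names and the specific move are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def tour_length(coords, tour):
    # Length of the closed tour visiting coords[tour] in order.
    closed = coords[np.r_[tour, tour[0]]]
    return float(np.linalg.norm(np.diff(closed, axis=0), axis=1).sum())

def local_reconstruct(coords, tour, seg_len, rng):
    """Re-solve one small segment of the tour (here: a reversal move) and
    keep the change only if the tour gets shorter."""
    n = len(tour)
    i = rng.integers(0, n - seg_len)
    candidate = tour.copy()
    candidate[i:i + seg_len] = candidate[i:i + seg_len][::-1]
    if tour_length(coords, candidate) < tour_length(coords, tour):
        return candidate
    return tour

# Self-improvement on one instance: the current solution is iteratively refined
# by local reconstruction; the refined tours would act as pseudo-labels.
rng = np.random.default_rng(0)
coords = rng.random((200, 2))
tour = np.arange(200)
for _ in range(5000):
    tour = local_reconstruct(coords, tour, seg_len=20, rng=rng)
```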

    Large Language Model for Multi-objective Evolutionary Optimization

    Multi-objective evolutionary algorithms (MOEAs) are major methods for solving multi-objective optimization problems (MOPs). Many MOEAs have been proposed over the past decades, and their search operators typically require careful handcrafting with domain knowledge. Recently, some attempts have been made to replace the manually designed operators in MOEAs with learning-based operators (e.g., neural network models). However, much effort is still required for designing and training such models, and the learned operators might not generalize well on new problems. To tackle the above challenges, this work investigates a novel approach that leverages a powerful large language model (LLM) to design MOEA operators. With proper prompt engineering, we successfully let a general LLM serve as a black-box search operator for the decomposition-based MOEA (MOEA/D) in a zero-shot manner. In addition, by learning from the LLM's behavior, we further design an explicit white-box operator with randomness and propose a new version of decomposition-based MOEA, termed MOEA/D-LO. Experimental studies on different test benchmarks show that our proposed method can achieve competitive performance with widely used MOEAs. It is also promising that the operator, learned from only a few instances, generalizes robustly to unseen problems with quite different patterns and settings. The results reveal the potential benefits of using pre-trained LLMs in the design of MOEAs.
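    As a sketch of the zero-shot idea, the snippet below builds a prompt from a few parent solutions and their objective values for one MOEA/D subproblem and parses the LLM's reply as a new candidate. The prompt wording, the weighted-sum scalarization, and the `llm` callable are assumptions for illustration, not the paper's actual prompt or operator.

```python
import numpy as np

def llm_crossover_prompt(parents, objs, weight):
    """Build a zero-shot prompt asking an LLM to propose a new candidate for
    one MOEA/D subproblem (weighted-sum scalarization assumed here)."""
    lines = [
        "You are an optimizer. Minimize the weighted sum of objectives "
        f"with weights {weight.tolist()}.",
        "Here are current solutions and their objective values:",
    ]
    for x, f in zip(parents, objs):
        lines.append(f"x = {np.round(x, 4).tolist()}, f(x) = {np.round(f, 4).tolist()}")
    lines.append("Propose one new solution as a comma-separated list of numbers only.")
    return "\n".join(lines)

def llm_operator(parents, objs, weight, llm):
    """`llm` is any user-supplied callable mapping a prompt string to a reply
    string (e.g. a thin wrapper around an API client); hypothetical here."""
    reply = llm(llm_crossover_prompt(parents, objs, weight))
    return np.array([float(v) for v in reply.split(",")])
```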