    Heterogeneous Unit Clustering for Efficient Operational Flexibility Modeling for Strategic Models

    The increasing penetration of wind generation has led to significant improvements in unit commitment models. However, long-term capacity planning methods have not been similarly modified to address the challenges of a system with a large fraction of generation from variable sources. Designing future capacity mixes with adequate flexibility requires an embedded approximation of the unit commitment problem to capture operating constraints. Here we propose a method, based on clustering units, for a simplified unit commitment model with dramatic improvements in solution time that enable its use as a submodel within a capacity expansion framework. Heterogeneous clustering speeds computation by aggregating similar but non-identical units, thereby replacing large numbers of binary commitment variables with fewer integers that still capture individual unit decisions and constraints. We demonstrate the trade-off between accuracy and run time for different levels of aggregation. A numeric example using an ERCOT-based 205-unit system illustrates that careful aggregation introduces errors of 0.05-0.9% across several metrics while solving several orders of magnitude faster (400x) than traditional binary formulations; further aggregation increases errors slightly (~2x) while providing further speedup (2000x). We also compare other simplifications that can provide an additional order of magnitude speed-up for some problems.
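    A minimal sketch of the clustered commitment idea, in our own notation rather than the paper's exact formulation: if cluster c contains N_c similar units with per-unit output limits P^{min}_c and P^{max}_c, the per-unit binary commitment variables can be replaced by a single integer variable per cluster,

        u_{c,t} \in \{0, 1, \ldots, N_c\}                                   % units of cluster c online in hour t
        P^{min}_c \, u_{c,t} \;\le\; p_{c,t} \;\le\; P^{max}_c \, u_{c,t}   % aggregate cluster output within committed limits
        u_{c,t} - u_{c,t-1} \;=\; s_{c,t} - d_{c,t}, \qquad s_{c,t}, d_{c,t} \ge 0   % startups and shutdowns counted per cluster

    which replaces N_c binary variables per hour with one integer while preserving per-unit capacity and minimum-output limits.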

    Impact of unit commitment constraints on generation expansion planning with renewables

    Growing use of renewables pushes thermal generators against operating constraints (e.g., ramping, minimum output, and operating reserves) that are traditionally ignored in expansion planning models. We show how including such unit-commitment-derived details can significantly change energy production and the optimal capacity mix. We introduce a method for efficiently combining unit commitment and generation expansion planning into a single mixed-integer optimization model. Our formulation groups generators into categories, allowing integer commitment states from zero to the installed capacity. This formulation scales well, runs much faster (e.g., 5000×) than individual plant binary decisions, and makes the combined model computationally tractable for large systems (hundreds of generators) at hourly time resolution (8760 hours) using modern solvers on a personal computer. We show that ignoring these constraints during planning can result in a sub-optimal capacity mix with significantly higher operating costs (17%) and carbon emissions (39%) and/or an inability to meet emissions targets.
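    A minimal Pyomo-style sketch of this coupling, using our own variable names and placeholder data rather than the authors' code: an integer build variable bounds an integer hourly commitment variable for each generator category, so planning and unit commitment sit in one model.

        # Sketch only: placeholder categories, capacities, and horizon; the objective,
        # demand balance, ramping, and reserve constraints are omitted for brevity.
        import pyomo.environ as pyo

        m = pyo.ConcreteModel()
        m.C = pyo.Set(initialize=["coal", "ccgt", "ct"])        # generator categories (placeholders)
        m.T = pyo.RangeSet(1, 24)                               # hours modeled (8760 in the paper)

        cap = {"coal": 500.0, "ccgt": 400.0, "ct": 100.0}       # MW per unit (placeholder values)
        pmin = {"coal": 0.4, "ccgt": 0.3, "ct": 0.2}            # minimum stable output fraction (placeholders)

        m.build = pyo.Var(m.C, domain=pyo.NonNegativeIntegers)          # units built per category
        m.commit = pyo.Var(m.C, m.T, domain=pyo.NonNegativeIntegers)    # units online per hour
        m.power = pyo.Var(m.C, m.T, domain=pyo.NonNegativeReals)        # aggregate output, MW

        # Commitment can never exceed the number of installed units: this single
        # constraint ties hourly operations to the expansion decision.
        m.commit_le_build = pyo.Constraint(
            m.C, m.T, rule=lambda m, c, t: m.commit[c, t] <= m.build[c])

        # Aggregate output respects committed capacity and minimum stable output.
        m.max_out = pyo.Constraint(
            m.C, m.T, rule=lambda m, c, t: m.power[c, t] <= cap[c] * m.commit[c, t])
        m.min_out = pyo.Constraint(
            m.C, m.T, rule=lambda m, c, t: m.power[c, t] >= pmin[c] * cap[c] * m.commit[c, t])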

    Effects of a Distributed Computing Architecture on the Emerald Nanosatellite Development Process

    Building satellites with greater capabilities on shorter timelines requires changes in development approach. Relative to previous satellite projects in Stanford’s Space Systems Development Laboratory (SSDL), the Emerald Nanosatellite system is highly complex. Its mission requires numerous experiments and relatively sophisticated subsystem capabilities. Developing this system on a short two-year timeline required a new development approach to simplify system integration. As a result, the Emerald development team adopted a modular distributed computing architecture. While this decision imposed many changes on Emerald’s design process, the benefits of the distributed architecture for system integration and testing justified its selection. This approach has already affected the early stages of engineering model integration and is expected to provide flexibility throughout construction and integration of the flight hardware. In addition, the distributed architecture developed for the Emerald project will provide a useful tool for future development efforts in the SSDL and the small satellite development community.

    A Distributed Computing Architecture for Small Satellite and Multi-Spacecraft Missions

    Distributed computing architectures offer numerous advantages in the development of complex devices and systems. This paper describes the design, implementation, and testing of a distributed computing architecture for low-cost small satellite and multi-spacecraft missions. This system is composed of a network of PICmicro® microcontrollers linked together by an I2C serial data communication bus. The system also supports sensor and component integration via Dallas 1-wire and RS232 standards. A configuration control processor serves as the external gateway for communication to the ground and other satellites in the network; this processor runs a multitasking real-time operating system and an advanced production rule system for on-board autonomy. The data handling system allows for direct command and data routing between distinct hardware components and software tasks. This capability naturally extends to distributed control between spacecraft subsystems, between constellation satellites, and between the space and ground segments. This paper describes the technical design of the aforementioned features. It also reviews the use of this system as part of the two-satellite Emerald and QUEST university small satellite missions.
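    As a purely illustrative sketch (not the Emerald flight code, and PIC firmware would be written in C or assembly rather than Python), address-based command and data routing between hardware components and software tasks can be pictured as packets dispatched to handlers registered per node address:

        # Toy model of direct command/data routing on a shared bus; addresses,
        # commands, and node names are hypothetical.
        from dataclasses import dataclass
        from typing import Callable, Dict

        @dataclass
        class Packet:
            src: int        # source node address on the bus
            dst: int        # destination node address
            command: str    # command or telemetry identifier
            payload: bytes  # raw data bytes

        class Bus:
            """Delivers packets to whichever handler is registered at the destination address."""

            def __init__(self) -> None:
                self._handlers: Dict[int, Callable[[Packet], None]] = {}

            def register(self, address: int, handler: Callable[[Packet], None]) -> None:
                self._handlers[address] = handler

            def send(self, pkt: Packet) -> None:
                handler = self._handlers.get(pkt.dst)
                if handler is None:
                    raise KeyError(f"no node registered at address 0x{pkt.dst:02X}")
                handler(pkt)

        bus = Bus()
        bus.register(0x10, lambda p: print("power node received", p.command))
        bus.send(Packet(src=0x01, dst=0x10, command="READ_VOLTAGE", payload=b""))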

    Incorporating operational flexibility into electric generation planning: impacts and methods for system design and policy analysis

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Engineering Systems Division, 2013. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 253-272). By Bryan S. Palmintier. Ph.D.
    This dissertation demonstrates how flexibility in hourly electricity operations can impact long-term planning and analysis for future power systems, particularly those with substantial variable renewables (e.g., wind) or strict carbon policies. Operational flexibility describes a power system's ability to respond to predictable and unexpected changes in generation or demand. Planning and policy models have traditionally not directly captured the technical operating constraints that determine operational flexibility. However, as demonstrated in this dissertation, this capability becomes increasingly important with the greater flexibility required by significant renewables (>=20%) and the decreased flexibility inherent in some low-carbon generation technologies. Incorporating flexibility can significantly change optimal generation and energy mixes, lower system costs, improve policy impact estimates, and enable system designs capable of meeting strict regulatory targets. Methodologically, this work presents a new clustered formulation that tractably combines a range of normally distinct power system models, from hourly unit-commitment operations to long-term generation planning. This formulation groups similar generators into clusters to reduce problem size, while still retaining the individual unit constraints required to accurately capture operating reserves and other flexibility drivers. In comparisons against traditional unit commitment formulations, errors were generally less than 1% while run times decreased by several orders of magnitude (e.g., 5000x). Extensive numeric simulations using a realistic Texas-based power system show that ignoring flexibility can underestimate carbon emissions by 50% or result in significant load and wind shedding to meet environmental regulations. Contributions of this dissertation include: 1. Demonstrating that operational flexibility can have an important impact on power system planning, and describing when and how these impacts occur; 2. Demonstrating that a failure to account for operational flexibility can result in undesirable outcomes for both utility planners and policy analysts; and 3. Extending the state of the art for electric power system models by introducing a tractable method for incorporating unit-commitment-based operational flexibility at full 8760-hour resolution directly into planning optimization. Together, these results encourage and offer a new flexibility-aware approach for capacity planning and accompanying policy design that can enable cleaner, less expensive electric power systems for the future.

    Switch 2.0: A Modern Platform for Planning High-Renewable Power Systems

    This paper describes Switch 2.0, an open-source modeling platform for planning transitions to low-emission electric power grids, designed to satisfy 21st-century grid planning requirements. Switch is capable of long-, medium- and short-term planning of investments and operations with conventional or smart grids, integrating large shares of renewable power, storage and/or demand response. Applications include integrated resource planning, investment planning, economic and policy analyses, as well as basic research. Potential users include researchers, educators, industry and regulators. Switch formulates generation and transmission capacity planning as a mixed-integer linear program in which investment and operation are co-optimized across sampled time series during multiple investment periods. High-resolution production cost modeling is supported by freezing investment decisions and including longer time series and more operational detail. Modeling features include unit commitment, part-load efficiency, planning and operating reserves, fuel supply curves, storage, hydroelectric networks, policy constraints and demand response. Switch has a modular architecture that allows users to flexibly compose models by choosing built-in modules à la carte or by writing custom modules. This paper describes the software architecture and model formulation of Switch 2.0 and provides a case study in which the model was used to identify the best options for obtaining load-shifting and reserve services from batteries and demand response in a 100% renewable power system.
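    As a hedged illustration of that modular composition (the hook name follows the module pattern described in the Switch 2.0 paper, but everything else below is a placeholder of ours, not part of the package), a user-written module contributes its components to the shared Pyomo model when it is listed alongside the built-in modules:

        # Hypothetical custom module sketch; verify hook names against the installed
        # switch_model package before relying on them. All names here are placeholders.
        import pyomo.environ as pyo

        def define_components(model):
            # Placeholder policy parameter: minimum renewable share of annual energy.
            model.min_renewable_share = pyo.Param(
                within=pyo.PercentFraction, default=0.8)
            # A real module would add constraints referencing the sets and variables
            # created by Switch's built-in investment and dispatch modules.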

    A review of power distribution test feeders in the United States and the need for synthetic representative networks

    Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for the development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks that are published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but their scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks, with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. This both aids researchers in choosing suitable test networks for their needs and highlights opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.

    Smart-DS: synthetic models for advanced, realistic testing: distribution systems and scenarios

    The National Renewable Energy Laboratory (NREL), in collaboration with the Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain) and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) high-quality, realistic, synthetic distribution network models, and 2) advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using the reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of the synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and coordination with the GRID DATA repository teams to house these datasets for public access will also be discussed.