Metallic Slit-Plate Dampers: Damage Evaluation with Metal Magnetic Memory Technique and Application to Structures with Rocking Columns
Inelastic deformation of metallic materials is one of the most effective mechanisms for dissipating the energy input to a structure by an earthquake. Metallic dampers are special devices that exploit this source of energy dissipation, proving to be a cost-efficient solution for the seismic protection of structures. Two important issues arise when implementing metallic dampers in real structures: (1) inelastic deformations cause damage that must be quantified after an earthquake to decide upon their eventual replacement; and (2) dampers must possess an energy dissipation capacity large enough to endure severe earthquakes. This paper focuses on a particular type of metallic damper consisting of slit plates made of stainless steel, applied to reinforced concrete frames with rocking columns at the first story. In particular, a new damage index based on the metal magnetic memory (MMM) method is proposed and validated experimentally to quantify the damage of slit-plate dampers subjected to cyclic loading. Further, the seismic response of a frame with rocking columns incorporating the damper is obtained to demonstrate that it can endure severe earthquakes without failing, and to emphasize the relevance of the proposed MMM damage index, which would make its replacement after a severe earthquake unnecessary.

The authors thank the PREDITEST Company, from the Czech Republic, and in particular Svoboda, for their support with the MMM equipment, measurements, and scientific discussions.

This research was funded by the Consejería de Economía, Innovación, Ciencia y Empleo, Junta de Andalucía, grant number TEP-02429, and by the Ministerio de Economía, Industria y Competitividad, Gobierno de España, grant number BIA2017-88814-R, and received funds from the European Union (Fonds Européen de Développement Régional). The APC was funded by the Ministerio de Economía, Industria y Competitividad, Gobierno de España, grant number BIA2017-88814-R.
Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws
Large language model (LLM) scaling laws are empirical formulas that estimate changes in model quality as a result of increasing parameter count and training data. However, these formulas, including the popular DeepMind Chinchilla scaling laws, neglect the cost of inference. We modify the Chinchilla scaling laws to calculate the optimal LLM parameter count and pre-training data size to train and deploy a model of a given quality and inference demand. We conduct our analysis both in terms of a compute budget and real-world costs, and find that LLM researchers expecting reasonably large inference demand (~1B requests) should train models smaller and longer than Chinchilla-optimal. Furthermore, we train 47 models of varying sizes and parameter counts to validate our formula and find that model quality continues to improve as we scale tokens per parameter to extreme ranges (up to 10,000). Finally, we ablate the procedure used to fit the Chinchilla scaling law coefficients and find that developing scaling laws only from data collected at typical token/parameter ratios overestimates the impact of additional tokens at these extreme ranges.

Comment: 16 pages, 7 figures. To appear in the 41st International Conference on Machine Learning, 2024.
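
The kind of calculation the abstract describes can be sketched numerically: under the standard Chinchilla parametric loss L(N, D) = E + A/N^alpha + B/D^beta, with the usual approximations of 6ND training FLOPs and 2N FLOPs per inference token, one can sweep model sizes, solve for the training tokens needed to reach a target loss, and pick the size that minimizes lifetime (training plus inference) compute. The sketch below is illustrative only: the coefficients are the approximate published Chinchilla fits, the function name and the inference-token figure are assumptions, and the paper's actual analysis (real-world costs, refit coefficients) is more involved.

```python
import numpy as np

# Approximate published Chinchilla fit for L(N, D) = E + A/N**alpha + B/D**beta
# (illustrative values, not the coefficients refit in this paper).
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def inference_adjusted_optimum(target_loss, inference_tokens):
    """Find (N, D) minimizing lifetime FLOPs = 6*N*D (training) + 2*N*T_inf
    (inference), subject to reaching L(N, D) == target_loss."""
    best = None
    for N in np.logspace(8, 12, 4000):            # parameter counts: 100M .. 1T
        gap = target_loss - E - A / N**alpha       # loss budget left for the data term
        if gap <= 0:
            continue                               # this N cannot reach the target loss
        D = (B / gap) ** (1 / beta)                # training tokens forced by the constraint
        flops = 6 * N * D + 2 * N * inference_tokens
        if best is None or flops < best[0]:
            best = (flops, N, D)
    return best

# Example: target loss of 2.0 nats, ~1B requests at an assumed ~2k tokens each.
flops, N, D = inference_adjusted_optimum(2.0, 1e9 * 2000)
print(f"N ≈ {N:.3g} params, D ≈ {D:.3g} tokens, D/N ≈ {D/N:.0f}, FLOPs ≈ {flops:.3g}")
```

Raising the assumed inference-token volume shifts the minimum toward smaller N and larger D, which is the qualitative "smaller and longer than Chinchilla-optimal" conclusion the abstract states.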
