Towards a simplified definition of Function Points
Background. COSMIC Function Points and traditional Function Points (i.e., IFPUG Function Points and more recent variations, such as NESMA and FISMA) are probably the best known and most widely used Functional Size Measurement methods. The relationship between the two kinds of Function Points still needs to be investigated. If traditional Function Points could be accurately converted into COSMIC Function Points and vice versa, then measuring one kind would also yield the other, and the two could be used interchangeably. Several studies have been performed to evaluate whether a correlation or a conversion function between the two measures exists. Specifically, it has been suggested that the relationship between traditional Function Points and COSMIC Function Points may not be linear, i.e., the value of COSMIC Function Points seems to increase more than proportionally with traditional Function Points.
Objective. This paper aims at verifying this hypothesis using available datasets that collect both FP and CFP size measures.
Method. Rigorous statistical analysis techniques are used, specifically Piecewise Linear Regression, whose applicability conditions are systematically checked. The Piecewise Linear Regression curve is a series of interconnected segments. In this paper, we focused on Piecewise Linear Regression curves composed of two segments. We also used Linear and Parabolic Regressions, to check if and to what extent Piecewise Linear Regression may provide an advantage over other regression techniques. We used two categories of regression techniques: Ordinary Least Squares regression is based on the usual minimization of the sum of squares of the residuals, or, equivalently, on the minimization of the average squared residual; Least Median of Squares regression is a robust regression technique that is based on the minimization of the median squared residual. Using a robust regression technique helps filter out the excessive influence of outliers.
Results. The analysis of the relationship between traditional Function Points and COSMIC Function Points based on the aforementioned data analysis techniques yields statistically valid models. However, different datasets lead to different results. In practice, we obtained statistically valid linear, piecewise linear, and non-linear conversion formulas for several datasets. In general, none of these is better than the others in a statistically significant manner.
Conclusions. Practitioners interested in the conversion of FP measures into CFP (or vice versa) cannot just pick a conversion model and be sure that it will yield the best results. All the regression models we tested provide good results with some datasets. In practice, all the models described in the paper (in particular, both linear and non-linear ones) should be evaluated in order to identify the ones that are best suited for the specific dataset at hand.
Lavazza, L.; Morasca, S.; Robiolo, G.
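The two-segment piecewise linear model mentioned in the Method section can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's datasets or exact procedure; the breakpoint grid, coefficients, and noise level are all invented for the example.

```python
import numpy as np

def fit_piecewise_linear(x, y, candidate_breaks):
    """Fit y = b0 + b1*x + b2*max(0, x - c), trying each candidate
    breakpoint c and keeping the ordinary-least-squares fit with the
    smallest sum of squared residuals."""
    best = None
    for c in candidate_breaks:
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ beta - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best  # (sse, breakpoint, coefficients)

# Synthetic data: CFP grows faster than FP above a (known) breakpoint of 100
rng = np.random.default_rng(0)
fp = rng.uniform(20, 300, 200)
cfp = 1.0 * fp + 0.5 * np.maximum(0.0, fp - 100) + rng.normal(0, 2, 200)

sse, brk, beta = fit_piecewise_linear(fp, cfp, np.linspace(40, 260, 111))
print(f"breakpoint ~ {brk:.0f}, slopes {beta[1]:.2f} and {beta[1] + beta[2]:.2f}")
```

The hinge-term formulation (`max(0, x - c)`) keeps the two segments continuous at the breakpoint, which is the usual constraint for two-segment piecewise linear regression.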
SOFTENG 2023: the ninth international conference on advances and trends in software engineering
The Ninth International Conference on Advances and Trends in Software Engineering (SOFTENG 2023), held between April 24th and April 28th, 2023, continued a series of events focusing on the challenging aspects of software development and deployment, across the whole life-cycle.
Software engineering exhibits challenging dimensions in the light of new applications, devices, and services. Mobility, user-centric development, smart-devices, e-services, ambient environments, e-health and wearable/implantable devices pose specific challenges for specifying software requirements and developing reliable and safe software. Specific software interfaces, agile organization and software dependability require particular approaches for software security, maintainability, and sustainability.
We take here the opportunity to warmly thank all the members of the SOFTENG 2023 technical program committee, as well as all the reviewers. The creation of such a high-quality conference program would not have been possible without their involvement. We also kindly thank all the authors who dedicated much of their time and effort to contribute to SOFTENG 2023. We truly believe that, thanks to all these efforts, the final conference program consisted of top-quality contributions. We also thank the members of the SOFTENG 2023 organizing committee for their help in handling the logistics of this event.
We hope that SOFTENG 2023 was a successful international forum for the exchange of ideas and results between academia and industry and for the promotion of progress in the field of software engineering.
Hepatic Macrosteatosis Is Partially Converted to Microsteatosis by Melatonin Supplementation in ob/ob Mice with Non-Alcoholic Fatty Liver Disease
Obesity is a common risk factor for non-alcoholic fatty liver disease (NAFLD). Currently, there are no specific treatments against NAFLD; it is therefore worth examining any molecule with potential benefits against this condition, and melatonin has emerged as a molecule that influences metabolic dysfunctions. The aim of this study was to determine whether melatonin would function against NAFLD, studying morphological, ultrastructural, and metabolic markers that characterize the liver of ob/ob mice.
Software Development and Maintenance Effort Estimation Using Function Points and Simpler Functional Measures
Functional size measures are widely used for estimating software development effort. After the introduction of Function Points, a few “simplified” measures have been proposed, aiming to make measurement simpler and applicable when fully detailed software specifications are not yet available. However, some practitioners believe that, when considering “complex” projects, traditional Function Point measures support more accurate estimates than simpler functional size measures, which do not account for greater-than-average complexity. In this paper, we aim to produce evidence that confirms or disproves such a belief via an empirical study that separately analyzes projects that involved development from scratch and projects that involved extensions and modifications of existing software. Our analysis shows that there is no evidence that traditional Function Points are generally better at estimating more complex projects than simpler measures, although some differences appear in specific conditions. Another result of this study is that functional size metrics (both traditional and simplified) do not seem to effectively account for software complexity, as estimation accuracy decreases with increasing complexity, regardless of the functional size metric used. To improve effort estimation, researchers should look for a way of measuring software complexity that can be used in effort models together with (traditional or simplified) functional size measures.
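A minimal sketch of size-based effort estimation of the kind studied here: a log-log regression of effort against functional size, with the Mean Absolute Residual (MAR) as an accuracy indicator. The size/effort figures below are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical data: functional size (e.g., in FP) and actual effort (person-hours)
size = np.array([120, 250, 90, 400, 310, 175, 520, 60, 230, 145], dtype=float)
effort = np.array([900, 2100, 650, 3900, 2700, 1300, 5200, 420, 1900, 1050], dtype=float)

# Fit a log-log linear model, i.e., effort = a * size^b
b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
pred = np.exp(log_a) * size ** b

# Mean Absolute Residual: a common accuracy measure for effort models
mar = float(np.mean(np.abs(effort - pred)))
print(f"effort ~ {np.exp(log_a):.2f} * size^{b:.2f}, MAR = {mar:.0f} person-hours")
```

An exponent b greater than 1 would indicate diseconomies of scale (effort growing faster than size), which is one way the "complex projects cost more per size unit" belief can be examined on real datasets.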
Using Locally Weighted Regression to Estimate the Functional Size of Software: an Empirical Study
In software engineering, measuring software functional size via the IFPUG (International Function Point Users Group) Function Point Analysis using the standard manual process can be a long and expensive activity, which is possible only when functional user requirements are known completely and in detail. To solve this problem, several early estimation methods have been proposed and have become de facto standard processes. Among these, a prominent one is High-level Function Point Analysis. Recently, the Simple Function Point method has been released by IFPUG; although it is a proper measurement method, it has a great level of convertibility to traditional Function Points and may be used as an estimation method. Both High-level Function Point Analysis and the Simple Function Point method skip the activities needed to weight data and transaction functions, thus enabling lightweight measurement based on coarse-grained requirements specifications. This makes the process faster and cheaper, but yields approximate measures. The accuracy of these methods has been evaluated, also via large-scale empirical studies, showing that the yielded approximate measures are sufficiently accurate for practical usage. In this paper, locally weighted regression is applied to the problem outlined above. This empirical study shows that estimates obtained via locally weighted regression are more accurate than those obtained via High-level Function Point Analysis, but are not substantially better than those yielded by alternative estimation methods using linear regression. The Simple Function Point method appears to yield measures that are well correlated with those obtained via standard measurement. In conclusion, locally weighted regression appears to be effective and accurate enough for estimating software functional size.
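A locally weighted regression predictor of the kind evaluated here can be sketched in a few lines. This is a simplified single-predictor LOESS on hypothetical data; the tricube kernel and the neighbourhood fraction are common choices, not necessarily those of the study.

```python
import numpy as np

def loess_predict(x_train, y_train, x0, frac=0.5):
    """Locally weighted linear regression: fit a weighted least-squares
    line on the frac*n nearest neighbours of x0, with tricube weights
    that give more influence to points close to x0."""
    n = len(x_train)
    k = max(3, int(np.ceil(frac * n)))
    d = np.abs(x_train - x0)
    idx = np.argsort(d)[:k]          # indices of the k nearest neighbours
    h = d[idx].max()                 # bandwidth: distance to the farthest neighbour
    w = (1.0 - (d[idx] / h) ** 3) ** 3   # tricube kernel
    sw = np.sqrt(w)
    X = np.column_stack([np.ones(k), x_train[idx]])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y_train[idx] * sw, rcond=None)
    return beta[0] + beta[1] * x0

# Hypothetical example: coarse early size counts vs. final detailed measures,
# with a mildly non-linear relation that a single global line would miss
coarse = np.linspace(10, 200, 40)
detailed = 1.2 * coarse + 0.002 * coarse ** 2
est = loess_predict(coarse, detailed, 100.0, frac=0.3)
```

Because each prediction fits its own local line, the method adapts to curvature in the size relationship without requiring a global functional form, which is its main appeal over plain linear regression.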
An empirical evaluation of the “cognitive complexity” measure as a predictor of code understandability
Background: Code that is difficult to understand is also difficult to inspect and maintain and ultimately causes increased costs. Therefore, it would be greatly beneficial to have source code measures that are related to code understandability. Many “traditional” source code measures, including for instance Lines of Code and McCabe’s Cyclomatic Complexity, have been used to identify hard-to-understand code. In addition, the “Cognitive Complexity” measure was introduced in 2018 with the specific goal of improving the ability to evaluate code understandability.
Aims: The goals of this paper are to assess whether (1) “Cognitive Complexity” is better correlated with code understandability than traditional measures, and (2) the availability of the “Cognitive Complexity” measure improves the performance (i.e., the accuracy) of code understandability prediction models.
Method: We carried out an empirical study, in which we reused code understandability measures used in several previous studies. We first built Support Vector Regression models of understandability vs. code measures, and we then compared the performance of models that use “Cognitive Complexity” against the performance of models that do not.
Results: “Cognitive Complexity” appears to be correlated to code understandability approximately as much as traditional measures, and the performance of models that use “Cognitive Complexity” is extremely close to the performance of models that use only traditional measures.
Conclusions: The “Cognitive Complexity” measure does not appear to fulfill the promise of being a significant improvement over previously proposed measures, as far as code understandability prediction is concerned.
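The model-comparison protocol described in the Method section, Support Vector Regression with and without the additional measure, can be sketched roughly as follows using scikit-learn. The data are synthetic and all variable names and coefficients are invented for illustration; the study's actual features, datasets, and tuning differ.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 120
loc = rng.uniform(10, 500, n)           # Lines of Code (synthetic)
cyc = rng.uniform(1, 30, n)             # Cyclomatic Complexity (synthetic)
cog = 0.8 * cyc + rng.normal(0, 2, n)   # stand-in "Cognitive Complexity", correlated with cyc
underst = 0.01 * loc + 0.2 * cyc + rng.normal(0, 1, n)  # understandability score

X_trad = np.column_stack([loc, cyc])         # traditional measures only
X_full = np.column_stack([loc, cyc, cog])    # plus the extra measure

def mean_cv_r2(X, y):
    """Cross-validated R^2 of an SVR model with standardized features."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10))
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

r2_trad = mean_cv_r2(X_trad, underst)
r2_full = mean_cv_r2(X_full, underst)
print(f"R^2 traditional: {r2_trad:.2f}, with extra measure: {r2_full:.2f}")
```

When the added feature is largely redundant with an existing one, as in this synthetic setup, the two cross-validated scores come out very close, which mirrors the kind of comparison reported in the Results section.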
An Evaluation of Function Point Counting Based on Measurement-Oriented Models
OBJECTIVE: It is well known that Function Point Analysis suffers from several problems. In particular, the measurement criteria and procedure are not defined precisely. Even the object of the measurement is not defined precisely: it is given by whatever set of documents and information represents the user requirements. As a consequence, measurement needs to be performed by an "expert", who can compensate for the lack of precision of the method with knowledge of common practices and interpretations. The paper aims at evaluating a methodology for function point measurement based on the representation of the system through UML models: this methodology aims at providing a precise definition of the object of the measurement, as well as of the measurement procedure and rules. METHODS: An experimental application of the methodology is presented. A set of analysts (having different degrees of experience) were trained in the methodology and were then given the same requirements to model. The resulting models were measured by a few measurers, also trained in UML model-based counting. RESULTS: The results show that the variability of the FP measure is small compared to the one obtained by applying "plain" FPA, as described in the literature. More precisely, whereas the influence of the modeller on the result appears to be negligible (i.e., a counter gets the same results from different models of the same application), the variability due to the measurer is more significant (i.e., different counters get different results from the same model), but still small when compared to the results reported in the literature on FPA. CONCLUSIONS: The number of data points that we were able to collect was not large enough to allow reliable conclusions from a rigorous statistical viewpoint. Nevertheless, the results of the experiment tend to confirm that the considered technique noticeably decreases the variability of FP measures.
An Investigation of the Users’ Perception of OSS Quality
The quality of Open Source Software (OSS) is generally much debated. Some state that it is generally higher than that of closed-source counterparts, while others are more skeptical. The authors have collected the opinions of users concerning the quality of 44 OSS products in a systematic manner, so that it is now possible to present the actual opinions of real users about the quality of OSS products. Among the results reported in the paper are: the distribution of trustworthiness of OSS based on our survey; a comparison of the trustworthiness of the surveyed products with respect to both open- and closed-source competitors; and the identification of the qualities that affect the perception of trustworthiness, based on rigorous statistical analysis.
A Conceptual Basis for Feature Engineering
The gulf between the user and the developer perspectives leads to difficulties in producing successful software systems. Users are focused on the problem domain, where the system's features are the primary concern. Developers are focused on the solution domain, where the system's life-cycle artifacts are key. Presently, there is little understanding of how to narrow this gulf.
