Where is the productivity gain in dynamically typed languages?
Dynamically vs. statically typed language studies [closed]
Are there any studies on the effectiveness of statically or dynamically typed languages?
- Programmer productivity measurements
- Error rate
Ideally also taking into account the impact of using unit tests.
I've seen a lot of discussions about the merits of both sides, but I wonder if anyone has done a study on it.
Some suggested reading:
Not exactly on static typing, but related:
Some interesting articles or essays on the topic, or on static analysis of programs in general:
And for those who would be wondering what this is about:
I doubt any of these will give you a definitive answer, however, as none of them is exactly the study you're looking for. They are interesting reads nonetheless.
WARNING: Some of these links are unreliable, and others go through the portals of various computing societies that charge non-members for access. Sorry, I tried to find multiple links for each item, but the results are not as good as I would like.
Just yesterday I found this study: Unit testing isn't enough. You need static typing too.
Basically, the author used a tool that can automatically convert a project from a dynamically typed language to a statically typed one (Python to Haskell).
He then selected a number of open-source Python projects that also had a reasonable number of unit tests and automatically converted them to Haskell.
The Haskell translation revealed a number of type-related errors that the unit tests had not detected.
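To make the finding concrete, here is an illustrative sketch (my own toy example, not code from the study) of a type bug that unit tests with full line coverage can miss, but that a static type checker would reject:

```python
# Illustrative: a type bug that line-coverage unit tests can miss,
# but a static type checker catches.

def average(xs):
    if not xs:
        return "n/a"  # inconsistent return type: str instead of float
    return sum(xs) / len(xs)

def report(xs):
    # Crashes with TypeError when xs is empty, because round()
    # receives the string "n/a" instead of a number.
    return round(average(xs), 2)

# These tests cover every line of average() and pass...
assert average([2, 4]) == 3.0
assert average([]) == "n/a"
# ...yet report([]) still fails at runtime. With an annotation such as
# `def average(xs: list[float]) -> float`, a checker like mypy would
# flag the "n/a" branch before any test ran.
```

The unit tests exercise both branches and pass, but the type inconsistency only blows up in a caller that was never tested with an empty list, which is exactly the class of bug the Python-to-Haskell translation surfaced.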
- Link to the discussion of the ACM article "An experiment about static and dynamic type systems" (2010) by Stefan Hanenberg (linked in an earlier answer by Lorin Hochstein).
- Conclusion: Productivity (for comparable quality) was higher in the dynamically typed language.
- Possible biases / validity problems: The subjects were all students. Also, the variety of programming tasks was limited (subjects were asked to implement a scanner and a parser).
- ACM article "Do Programming Languages Affect Productivity?" (2007) by Delorey, Knutson, and Chun.
- Possible biases / validity problems: No quality measure (e.g., bugs discovered after release). No measure of reliability (is software written in statically typed languages more reliable?). Sample bias: all projects were taken from open-source CVS repositories. Also, no distinction between weakly and strongly typed languages (e.g., pointer use).
- Master's thesis "Empirical study of software productivity and quality" (2008) by Michael F. Siok.
- Conclusion: The choice of programming language has no significant influence on productivity or quality. It does, however, influence staffing costs and the "quality of the overall software project portfolio".
- Possible biases / validity problems: Limited to the avionics domain. The programming languages compared may all have been statically typed. I have not read the thesis, so I cannot assess its rigor.
My opinion: while there is weak evidence that dynamically typed languages are more productive, it is inconclusive. (1) Many factors were not controlled for. (2) There are too few studies. (3) There has been little or no discussion of what a suitable test methodology would even look like.
Here's a starting point:
The paper challenges the common wisdom that, all other things being equal, programmers write the same number of lines of code per unit time regardless of language. In other words, the work should serve as empirical evidence that mechanical productivity (lines of code written) is not a good measure of functional productivity and needs to be normalized at least by language.
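As a toy illustration of the normalization argument (the numbers are made up, not taken from the paper): if lines-per-day output is roughly constant but languages differ in how many lines one unit of functionality takes, raw LOC/day says nothing about functional output.

```python
# Hypothetical numbers, purely to illustrate why LOC/day must be
# normalized per language before comparing productivity.
loc_per_day = 100                              # assumed constant across languages
loc_per_feature = {"Java": 50, "Python": 15}   # made-up expansion ratios

for lang, ratio in loc_per_feature.items():
    # Same mechanical output, very different functional output.
    print(f"{lang}: {loc_per_day / ratio:.1f} features/day")
```

Under these (invented) ratios the two programmers write identical amounts of code per day, yet differ more than threefold in features delivered, which is exactly why unnormalized LOC comparisons mislead.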
I found Static vs. dynamic languages: a literature review, which lists studies on the subject and gives a nice summary of each.
Here is the summary:
Of the controlled experiments, only three show an effect large enough to have any practical significance: the Prechelt study comparing C, C++, Java, Perl, Python, Rexx, and Tcl; the Endrikat study comparing Java and Dart; and Cooley's experiment with VHDL and Verilog. Unfortunately, they all have issues that make it difficult to draw a really strong conclusion.
In the Prechelt study, the populations differed between the dynamically and statically typed languages, and the conditions for the tasks were also different. There was a follow-up study asking Lispers to come up with their own solutions to the problem, which compared people like Darius Bacon to random students. The follow-up to the follow-up literally involves comparing code from Peter Norvig to code from random college students.
In the Endrikat study, they specifically picked a task where they thought static typing would make a difference, and they drew their subjects from a population where everyone had taken classes using the statically typed language. They do not comment on whether the students had experience with the dynamically typed language, but it seems safe to assume that most or all of them had less of it.
Cooley's experiment was one of the few that drew people from a non-student population, which is great. But, as with all the other experiments, the task was a trivial toy task. While it seems damning that none of the VHDL (the statically typed language) participants were able to complete the task on time, it is extremely unusual to want to complete a hardware design in 1.5 hours anywhere outside of a school project. You could argue that a large task can be broken down into many smaller tasks, but a plausible counter-argument is that VHDL has fixed costs that can be amortized across many tasks.
As for the rest of the experiments, the main takeaway is that, under the specific circumstances described in the studies, any effect is minor, if it exists at all.
Moving on to the case studies, the two bug-finding case studies make interesting reading, but they don't really argue for or against types. One shows that transcribing Python programs into Haskell finds a number of bugs of unknown, non-zero severity that might not be found by unit tests targeting line coverage. The pair of Erlang papers shows that static analysis can find some bugs that are quite hard to find by testing, some of which are severe.
As a user, I find it handy when my compiler gives me an error message before I have to run a separate static analysis tool, but that benefit is marginal, possibly even smaller than the effect sizes of the controlled studies listed above.
I found the 0install case study (which compared various languages against Python and ended up choosing OCaml) one of the more interesting things I came across, but it's subjective, and everyone will interpret it differently.
This fits my impression (in my little corner of the world, ACL2, Isabelle/HOL, and PVS are the most commonly used provers, and it makes sense that people would prefer more automation when solving problems in industry), but that, too, is subjective.
And then there are the studies that mine data from existing projects. Unfortunately, I couldn't find any that attempted to determine causation (e.g., by finding an appropriate instrumental variable), so they only measure correlations. Some of the correlations are unexpected, but there isn't enough information available to determine why.
The only data-mining study that presents data potentially interesting enough for further exploration is Smallshire's review of Python bugs, but there isn't enough information about the methodology to figure out what his study really means, and it's not clear why he hinted at data for other languages without presenting that data.
Some notable omissions from the studies: large-scale studies with experienced programmers, let alone studies with large numbers of "good" or "bad" programmers; studies of anything approaching a significant project (at the places I've worked, a three-month project would be considered small, but that's many times larger than any project used in a controlled study); studies using "modern" statically typed languages; gradual/optional typing; modern mainstream IDEs (like VS and Eclipse); modern radical IDEs (like Light Table); old-school editors (like Emacs and vim); maintenance of a non-trivial codebase; maintenance in a realistic environment; maintenance of a codebase you're already familiar with; etc.
If you look at the internet commentary on these studies, most of it cites them to justify one point of view or the other. The Prechelt study on dynamic vs. static, along with the Lisp follow-ups, is a perennial favorite of dynamic-language advocates, and the GitHub mining study has been trending lately among functional programmers.
I honestly don't think static or dynamic typing is the real question.
I think there are two parameters that should come first:
- Proficiency level in the language: the more experienced you are, the more you know about the "pitfalls" and the more likely you are to avoid them or spot them easily. This also applies to the particular application/program you are working on.
- Testing: I love static typing (hell, I love programming in C++ :p), but there is only so much a compiler/static analyzer can do for you. It's simply impossible to have confidence in a program without testing it. And I'm all for fuzz testing (where applicable), because you just can't think of all the possible combinations of inputs yourself.
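The fuzz-testing point can be sketched in a few lines (a minimal random-input fuzzer, assuming the property we check, a hypothetical `clamp` function staying within its bounds, is the invariant we care about):

```python
# Minimal fuzz-testing sketch: instead of hand-picking test cases,
# throw many random inputs at a function and check an invariant.
import random

def clamp(x, lo, hi):
    """Restrict x to the closed interval [lo, hi]."""
    return max(lo, min(x, hi))

random.seed(0)  # reproducible runs
for _ in range(10_000):
    x = random.uniform(-1e6, 1e6)
    lo = random.uniform(-1e3, 1e3)
    hi = lo + random.uniform(0, 1e3)  # guarantee lo <= hi
    # Invariant: the result always lies within [lo, hi].
    assert lo <= clamp(x, lo, hi) <= hi
```

Real fuzzers (and property-based tools like Hypothesis) are far more sophisticated, but the core idea is exactly this: generate inputs you would never have thought of, and verify a property rather than a specific output.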
Once you are comfortable with the language, writing code comes easily and errors are easy to spot.
If you write decoupled code and test each function extensively, you will produce solid code and thus be productive (because you cannot be considered productive without accounting for the quality of the product, can you?).
I would therefore argue that the static vs. dynamic debate, as far as productivity is concerned, is rather moot, or at least largely dominated by other considerations.
Here are a few:
Stefan Hanenberg. 2010. An experiment about static and dynamic type systems: doubts about the positive impact of static type systems on development time. In Proceedings of the ACM International Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA '10). ACM, New York, NY, USA, 22-35. DOI=10.1145/1869459.1869462 http://doi.acm.org/10.1145/1869459.1869462
Daniel P. Delorey, Charles D. Knutson, and Scott Chun, "Do Programming Languages Affect Productivity? A Case Study Using Data from Open Source Projects," FLOSS, p. 8, First International Workshop on Emerging Trends in FLOSS Research and Development (FLOSS '07: ICSE Workshops 2007), 2007.
Daly, M.; Sazawal, V.; Foster, J.: Work in Progress: An Empirical Study of Static Typing in Ruby, Workshop on Evaluation and Usability of Programming Languages and Tools (PLATEAU) at Onward! 2009.
Lutz Prechelt and Walter F. Tichy. 1998. A Controlled Experiment to Assess the Benefits of Procedure Argument Type Checking. IEEE Trans. Softw. Eng. 24, 4 (April 1998), 302-312. DOI=10.1109/32.677186 http://dx.doi.org/10.1109/32.677186