The great static vs. dynamic typing debate
I will frame this using just two languages: Java and Ruby. I can make it even simpler: there really is no debate as far as I am concerned. Certain classes of large mission-critical systems are best implemented in Java for reasons of type safety (avoiding the obscure bugs that slip past unit, functional, and integration testing and surface long after a system is deployed), runtime efficiency, and scalability across many CPU cores. There is a much larger class of systems that are not "mission critical," where it is more important to develop and deploy quickly and to minimize development and maintenance costs: I use Ruby for this type of development. Equally skilled programmers will almost always develop faster in Ruby than in Java: there is less code to write and read for the same functionality because of dynamic typing, blocks, and so on. It is not simply a matter of less code: statically typed languages like Scala are very terse, but I believe they are still slower to develop in.
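As a small illustration of the conciseness argument (my own sketch, not from any particular project): summing the squares of the even numbers in a list is one chained expression in Ruby, where the equivalent Java of that era would need an explicit loop, an accumulator variable, and type declarations.

```ruby
numbers = [1, 2, 3, 4, 5, 6]

# select, map, and inject each take a block; no type declarations,
# no loop boilerplate.
sum_of_even_squares = numbers.select { |n| n.even? }
                             .map    { |n| n * n }
                             .inject(0) { |acc, n| acc + n }

puts sum_of_even_squares  # prints 56
```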
The problem with using only Java is that for some systems you will spend more on development and maintenance than the extra safety is worth. The problem with using only a dynamic language like Ruby is that for some types of systems it is simply not appropriate, for the reasons I have already mentioned.
So, I don't think that there should be any debate at all - but perhaps a difference of opinion on where to draw the line when deciding which type of programming language to use for a specific project. Every programmer should be very familiar with at least two or three languages: each one provides a different point of view and a different way of thinking about solving problems.
Being able to use the right tool for the right job is the reason to know at least one dynamic and one statically typed language very well. I would also argue that at least a small amount of time should be spent understanding what new programming languages offer. A very long time ago, when I transitioned from C to C++ (strong type checking), I was young enough and naive enough to think that C++ was the be-all and end-all of programming languages. I really liked C++, writing five C++ books and spending a lot of time mentoring C++ and object-oriented design. When Java was first released, I made a very quick transition to it. Looking back on my long-term use of C++, in retrospect the only projects where C++ was superior were Nintendo Ultra 64 game development and some VR work, where runtime efficiency was important. I have not worked on anything in ten years that would be better done in C++, because of the great speed improvements in the JVM.
We will obviously see many more exciting (and perhaps practical :-) new trends in programming languages in the next 20 years, and perhaps we will some day reach the point when it no longer makes sense to master two or three different programming languages at any given time. If the debate over when to use statically vs. dynamically typed languages ever ends, it will probably be because program analysis tools have become good enough to give dynamic languages the same level of safety.
It's important to understand that static type systems don't prevent bugs, they only prevent "type errors". Type errors are not necessarily the same as errors. Type errors are "just" violations of the type system. Sometimes you want violations of the type system (think about casting, for example), and sometimes you want to express things that are not even possible with static typing (think about removing methods from a running program). On the other hand, you can get real errors even if the static types of a program are correct. So there is no actual correlation between type errors and real errors, there is just the hope that there is such a correlation.
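(To make the last point concrete, here is a minimal Ruby sketch of removing a method from a running program; the class and method names are made up for the example.)

```ruby
class Greeter
  def hello
    "hello"
  end
end

g = Greeter.new
puts g.hello  # prints "hello"

# Remove the method while the program is running -- a change to
# Greeter that no static type system can express.
Greeter.send(:remove_method, :hello)

begin
  g.hello
rescue NoMethodError
  puts "hello is gone"  # prints "hello is gone"
end
```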
I have actually been able to show, with three examples, that a static type system - especially Java's type system - can introduce bugs into a program that wouldn't exist in a dynamically typed language. See the paper Dynamic vs. Static Typing - A Pattern-Based Analysis.
Efficiency is also not an issue. Languages like Self, Strongtalk, Scheme and Common Lisp have shown several times that you can have both dynamic typing and efficiency in the same system.
There are type systems that give you better guarantees with regard to correctness, but Java's type system is definitely not one of them.
Hello Pascal, thanks for your comments!
I like the workshop name where your paper appeared: PostJava workshop :-)
I especially liked the discussion of exceptions in your paper.
-Mark
Just curious, are you writing a book on Ruby on Rails? If so, when is it going to be released?
I'd be very interested to see your ideas on implementing a large scale data management and mining project semantically with a Rails application. I hope that's something you would cover in the book.
Hello Matt,
My Ruby book project was canceled over a year ago.
Re: the semantic web with Ruby and Rails: probably a good fit, except that so many great semantic web libraries are written in Java. You might want to go with JRuby so you can use the Java libraries.
I spend most of my time programming in Ruby, with Common Lisp and Java being runners-up. Almost everything that I have done with web apps in the last couple of years has been with Rails.