I’ve got 405 MB of 3D seismic data from Teapot Dome sitting in my file cache, and I want to give you a quick view of some of its summary statistics. How long do you think you should have to wait? If you’re working in Excel, you might be happy with a few minutes. A .NET programmer — used to endless database calls and virtual machines in his line of work — wouldn’t be too surprised at a few seconds, or tens of seconds. Long enough to fire up a spinny cursor and send you to Facebook, or whatever your work-day sin is.
A little back-of-the-envelope arithmetic suggests that if you’re just doing a few simple operations per input value, you ought to be able to get within an order of magnitude or so of the speed of the memory bus to your CPU. (Actually hitting it will likely require some exotic programming and some luck.) I have 1600 MHz DDR3 RAM in my laptop: about 12.8 GB/s of bandwidth. The best case is therefore 405 MB ÷ 12.8 GB/s, or about 32 ms. There’s no need for a spinny cursor at that speed. What a delightful possibility!
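To make that floor concrete, here’s a tiny C++ sketch of the arithmetic. The 405 MB and 12.8 GB/s figures are the ones quoted above, not measurements, and the program is just the division spelled out.

```cpp
#include <cstdio>

int main()
{
    // Back-of-the-envelope memory-bandwidth floor: the time just to stream
    // the whole file through RAM once, using the figures quoted in the text.
    const double data_mb      = 405.0;     // Teapot Dome SEG-Y volume
    const double bandwidth_mb = 12800.0;   // DDR3-1600 peak, in MB/s
    std::printf("memory-bound floor: %.1f ms\n",
                1000.0 * data_mb / bandwidth_mb);   // prints ~31.6 ms
    return 0;
}
```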
This is the first in a series of articles looking at a simple benchmark in languages advertised for scientific computing. To say that we’re benchmarking the languages themselves is of course silly: most languages have at least two or three popular compilers and runtimes; most have far more than that. Every time a fool on the Internet publishes a blog article claiming “MY FOO PROGRAM IS 28.65% FASTER THAN YOUR BAR PROGRAM”, some other fool decides to use FOO for that reason alone. Especially if the blog he read it on looked nice and had pretty pictures. I’ll try not to be either of those fools.
What I hoped to understand from this exercise was the relative difficulty of achieving high-speed computation in each of the languages without resorting to hand-coded assembly, third-party libraries, and the like.
For fun, I wrote a simple conversion benchmark from IBM/370 floating point to IEEE floating point, which is the bulk of number crunching involved when reading a SEG-Y seismic tape so you can do something with it later. (The standardization of this format pre-dates the standardization of floating point arithmetic by the IEEE.) I wrote the benchmark first in F#, since I was on a .NET kick, then in C++ to see how close to the metal I could get, and finally in Julia to see whether it could compete like they said it would.
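For the curious, here’s a minimal sketch of that conversion in C++ (illustrative only, not the benchmark code from this series). IBM/370 single precision packs a sign bit, a 7-bit excess-64 base-16 exponent, and a 24-bit fraction with no hidden bit, so the value is (-1)^s × 0.fraction × 16^(exponent - 64). The function name and structure here are mine, chosen for clarity.

```cpp
#include <cmath>
#include <cstdint>

// Illustrative IBM System/370 -> IEEE 754 single-precision conversion.
// Input is the raw 32-bit IBM word, already byte-swapped to host order.
// Layout: 1 sign bit | 7-bit excess-64 base-16 exponent | 24-bit fraction.
float ibm370_to_ieee(std::uint32_t ibm)
{
    if ((ibm & 0x7fffffffu) == 0)                 // IBM zero (either sign)
        return 0.0f;

    const float sign = (ibm & 0x80000000u) ? -1.0f : 1.0f;
    const int   exp  = static_cast<int>((ibm >> 24) & 0x7fu) - 64;
    const float frac = static_cast<float>(ibm & 0x00ffffffu) / 16777216.0f; // 2^24

    return sign * std::ldexp(frac, 4 * exp);      // 16^exp == 2^(4*exp)
}
```

Fast readers typically do this with pure bit manipulation rather than a library call per sample, but the arithmetic above is just the definition of the format — a handful of shifts, masks, and a scale, which is why the memory bus rather than the ALU sets the ceiling.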
I was mostly interested in speed, not style. You’ll see the algorithm is the same in each one, lifted from a friendly newsgroup poster. The dark beating heart of scientific code is — if you want it fast — always a little ugly and imperative.
Why these three languages? Scientific software is always overrun by drudge work. That radon transform is just a few paragraphs of (clever, optimized, hard-won) code, but what about all the I/O handlers, file formats, job schedulers, graphics, provenance trackers, data serialization, and everything else? A higher-level language starts to sound promising. You want something capable of easy abstraction, but also of easy interoperability with the world of C and FORTRAN.
F# qualifies as high level, but as part of the .NET ecosystem it has graceful interaction with C code. Also, the F# community has been on a tear recently, building up its capabilities for scientific computing, especially in statistics and data science. (Check out FsLab; they’ve got interesting things cooking.) This presumably comes from its use in the financial sector in New York & London.
C++ is the go-to language for production-level scientific code in nearly every domain. It’s capable of high levels of abstraction, and its compilers produce some of the fastest code out there. Most important libraries are exposed via C++. But successfully using high-level abstractions is not the language’s core strength: there are just too many details to keep track of at once. (My most recent favorite, found by Palladium extraordinaire Brandon, is std::decay, but I digress.)
Julia is a newer language attempting to bridge the gap between high-level computing (its metaprogramming has a strong Lisp heritage) and high-speed computing (it advertises efficient native code generation). If it has macros (macros!), supports distributed computing, and has great prototyping environments in place, count me interested!
Where’s Python? There’s no shortage of material out there about Python, especially as regards subsurface computing. Python is a great choice for technical computing due to its huge ecosystem of curated, interoperable libraries, but it is best described as a hybrid system: Python for scripting, and C++ for heavy lifting. Fine for lots of things, but perhaps difficult if you’re building new algorithms and aiming for interactive performance, as opposed to ad hoc exploration. I’d rather find a language I can do it all in. (Important caveat: the Python community understands this drive as well, and is building Python implementations that can run at high speeds: see Numba in particular.)
Next week: we’ll jump into the F# code.