It’s been appraisals season here at Dayshape, and so I’ve been reflecting on my work this past year.
In one of our appraisal questions, we were asked: “What personal strengths help you to do your job effectively?”
In my answer, I noted that my career path wasn’t the “traditional” software engineering path. I spent ten years writing software as an academic, with a very different tech stack, and a very different set of aims. Not to mention a very different audience.
Much has been said about the programming skills of scientists in the last few years – in particular about code written to model epidemics, which has since been applied to COVID-19. There’s been some fair criticism, and frankly some ridiculous blether.
I’ve been in industry for three years now, and my varied experience has taught me some important truths about how software is written and used in three quite different arenas: as part of the scientific endeavour, in industry, and as a component of developing government policy.
Disclaimer: I’m about to generalise a lot, but I think there’s value in doing so.
Software for Science
– The developer is the user is the developer
– The code is not the product
– The code doesn’t always have to work, but when it does it better be fast
– The code’s half-life is very short
– The tech stack is as limited as possible
– Testing is a long, arduous process and it largely shouldn’t be
Scientists are, for the most part, journeyman programmers. With few exceptions, scientists learn to code without much attention to the underlying formalism of computer science. If you want to confirm this, ask a computational astrophysicist to name a design pattern, or describe what a lambda function is. Most will give you blank stares.
And that’s kind of the point. The software is not the ultimate goal of the scientist – the product they are trying to create is knowledge. The software is something you squeeze to make knowledge come out. It might be in a partially broken state after this squeezing, but you don’t need to fix it straight away. It’s not like anyone else is going to be using it.
This explains why scientific code has the shape it does. In search of knowledge, the code has been written and rewritten a number of times. There’s a very strong feedback loop between the user and the developer – they’re usually the same person. Even if the user didn’t write the first iteration of the code, they’ve probably looked at the source code themselves and perhaps tweaked it a bit.
Because of this connection, the tech stack is usually very limited. The code is usually executed from the command line – why write a frontend or a GUI? The code is not the product. It will be written in a single language that can be compiled for maximum optimisation. This is why FORTRAN still exists and is widely used: it’s very fast, and was designed for heavy numerical work on machines with very little memory or CPU. Don’t forget, Python snobs, a lot of your fancy libraries run on compiled FORTRAN and C…
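To make that last point concrete, here’s a minimal sketch (the function names are my own, and it assumes NumPy is installed) of why those “fancy libraries” are fast: the pure-Python version runs every iteration in the interpreter, while NumPy hands the same arithmetic to compiled C – and, for its linear algebra routines, to Fortran BLAS/LAPACK.

```python
import numpy as np

# A pure-Python loop: every iteration is executed by the interpreter.
def sum_of_squares_python(values):
    total = 0.0
    for v in values:
        total += v * v
    return total

# The NumPy version delegates the whole loop to compiled code.
def sum_of_squares_numpy(values):
    arr = np.asarray(values, dtype=float)
    return float(np.dot(arr, arr))

# Same answer either way; the difference is who runs the loop.
data = list(range(1000))
assert abs(sum_of_squares_python(data) - sum_of_squares_numpy(data)) < 1e-6
```

On large arrays the NumPy version is typically orders of magnitude faster, precisely because the hot loop never touches the interpreter.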
This leanness of scientific software is to be commended. One valid criticism of modern enterprise software, especially web apps, is the incredible bloat that comes with modern frameworks. I’ve been working on a personal code project, to which I added a very basic frontend last week. Just adding one dependency resulted in a node_modules folder that was half a gigabyte!
In many cases, once scientific software has been utilised to produce a scientific result, it will rarely be used again. Weep for the poor graduate student who must unearth this crumpled ball of text, six years later, and try to deduce what the hell is happening, sans comments or documentation. Version control is far from widespread in academic circles. I didn’t learn git until well after submitting my PhD.
A common cry from software engineers looking at scientific code is: “where are the unit tests?”.
A common response from some scientific programmers is: “what’s a unit test?” (a small, automated check that a single piece of code behaves as expected).
There are lots of reasons for this. The main one is that, yes, most scientific code is written without knowledge of unit tests or test-driven development. I can point to at least one bug I fixed during the first year of my PhD that, had I written a unit test, I could have identified in about a week. Instead, it took me nearly six months.
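For the scientists wondering what such a test looks like in practice, here’s a minimal sketch – the helper function and its sign convention are invented for illustration, not taken from my actual PhD code:

```python
import math

# Hypothetical helper from an analysis script: convert a flux ratio
# into an astronomical magnitude difference.
def magnitude_difference(flux_a: float, flux_b: float) -> float:
    # The brighter source (larger flux) has the *smaller* magnitude,
    # hence the leading minus sign -- exactly the kind of sign
    # convention a unit test pins down before it costs you six months.
    return -2.5 * math.log10(flux_a / flux_b)

# Two one-line unit tests:
assert abs(magnitude_difference(1.0, 1.0)) < 1e-12         # equal fluxes: zero difference
assert abs(magnitude_difference(10.0, 1.0) + 2.5) < 1e-12  # 10x brighter: -2.5 magnitudes
```

Each assertion takes seconds to write and runs in microseconds; the bug it catches can otherwise hide for months.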
Another reason is that the really important tests the code needs to pass are not always so readily boiled down. If I’m writing a stellar atmosphere code, there are some unit tests one can write for the smallest components (e.g., gravity, radiation), but the test you really want to write is: “can I reproduce the X et al (1974) model?”. This test is extremely fuzzy in its definition – what does “reproduce” mean? A trained human will be able to tell fairly quickly, but machines will take longer, and training a machine to do so is time and effort that the academic cannot afford. This is why scientific software still undergoes huge amounts of apparently needless manual testing. Again, the code is not the product. A scientist’s reputation lies with their results, not with their code quality.
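One way to make such a fuzzy test automatic is to pin down “reproduce” as agreement within a tolerance. A sketch, assuming NumPy – the grey-atmosphere model is a stand-in for a real stellar atmosphere code, and the “reference” numbers are invented, not digitised from any actual paper:

```python
import numpy as np

# Stand-in for a real stellar atmosphere code: a simple
# grey-atmosphere T(tau) relation, purely for illustration.
def model_temperature_profile(optical_depths, t_eff=5777.0):
    tau = np.asarray(optical_depths, dtype=float)
    return t_eff * (0.75 * (tau + 2.0 / 3.0)) ** 0.25

# Invented values standing in for a digitised table from the
# reference paper we want to reproduce.
reference_depths = np.array([0.1, 0.5, 1.0, 2.0])
reference_temps = np.array([5030.0, 5590.0, 6110.0, 6870.0])

def test_reproduces_reference_model():
    ours = model_temperature_profile(reference_depths)
    # "Reproduce" operationalised as: agree within 2% at every depth.
    assert np.allclose(ours, reference_temps, rtol=0.02)

test_reproduces_reference_model()
```

The hard part is not the code but choosing the tolerance: too tight and the test fails on harmless numerical noise, too loose and it stops meaning “reproduce” at all.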
I’m glad to say that’s changing. The drive for open-source code has forced adoption of version control, and the idea that the code could be the product is slowly beginning to take hold – at least in the younger generation of academics. This approach is arguably yielding better science, but it’s still not clear that academics who effectively become full-time software engineers are being fully recompensed for their effort from a career point of view.
Software for Enterprise
– The developer rarely meets the user
– The code is the product
– The code might just live forever
– The code has to work all the time
– Testing is a much better defined process
Enterprise software is written by engineers whose principal function is to write code. Don’t get me wrong, there’s a lot more to the job, especially as engineers become more senior and take on leadership roles, product decisions, and client-facing roles.
The point I’m making is that in industry, code is not the means to the end – it is the end. The code is the product. Software companies exist to produce code that makes money.
And this drives the crucial differences. It needs to work all the time; it needs to be fast; it needs to be supported on a long-term basis, for potentially millions of users who will never meet one of the engineers or vice versa. It needs to be usable by someone who hasn’t seen the source code. Hell, it needs to be usable by someone who doesn’t know what “source code” means. Also, the source code is proprietary!
Usability, reliability, and speed are so important that testing is a huge part of enterprise software. Unit testing, integration testing, visual testing, UI testing, regression testing, manual testing… Whole teams of engineers are assigned to Quality Assurance, i.e. making sure the code does what it’s supposed to. This is not a trivial task – enterprise software is enormous, complex, and, in comparison to scientific software, ancient. One thing I enjoy about my job is stumbling across code written years ago by a mysterious developer who seemed to have written most of the critical superstructure, leaving only cryptic comments like “ho ho” in their wake. In our case, his name was Thanos. It was inevitable.
Unit testing also has to ensure that we have correctly captured business logic. For the scientists reading this: think of business logic as the laws of physics, except written by your users – the company’s economic model, for example. For the code to be profitable, we need confidence that the business logic is correctly captured in every circumstance.
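As a sketch of what a business-logic test looks like, here’s an invented billing rule – the rule and the function name are hypothetical, not from any real product – with each assertion pinning down one clause exactly as the business stated it:

```python
import math

# Hypothetical business rule: consultants bill in 15-minute
# increments, always rounding up, with a one-hour minimum charge.
def billable_minutes(worked_minutes: int) -> int:
    if worked_minutes <= 0:
        return 0
    rounded_up = math.ceil(worked_minutes / 15) * 15
    return max(rounded_up, 60)

# One test per clause of the rule:
assert billable_minutes(0) == 0    # no work, no charge
assert billable_minutes(10) == 60  # the one-hour minimum applies
assert billable_minutes(61) == 75  # rounds up to the next 15 minutes
assert billable_minutes(75) == 75  # exact increments are unchanged
```

When the business changes the rule, these tests are the first thing that fails – which is precisely their job.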
Grumblers at the back, your grumbles are mostly valid. I’ve generalised a lot so far, and these two descriptions are more like caricatures at two ends of a spectrum. There are definitely signs that scientific software is taking on the features I ascribed to enterprise software – examples include yt and astropy, amongst many, many others.
Equally, not all commercial software follows the enterprise model – mobile gaming apps are often very homebrew, even slapdash, and despite this are still fabulously profitable, even if only for a brief period.
Software for Policy
– No one is the user
– The code isn’t the product, but it also is
– The code has to work
And then we come to software for policy. This starts out life as scientific software, being used to produce knowledge. That knowledge is then used to formulate advice that drives policy. As a result, the critical individuals in this process – the decision makers – do not use the code. Do you think the COBRA committee ran the Imperial COVID-19 Model themselves?
And here comes a paradox. The code is not the product – the product is advice. But for that advice to be accepted, trust is required. And how do you build trust in this situation? All you can do is publish the code you are using, opening yourself up to scrutiny.
And this is where the two approaches to developing software (science vs industry) collide. Being open with your code is doomed to backfire if using your software requires intimate familiarity with the source code and comfort with the command line, with no frontend to guide lay users.
Worse, if you open up your code to professional engineers who have never had to program under the incentives and stresses you did, they’re going to point out all the ways they would have done it differently. Code review is another critical function of software engineering, and another one I’d like to see ported into scientific programming – but even then, I doubt that most industrial engineers would really understand the context behind the design decisions of scientific programmers, and vice versa.
This is where I think teams like the Imperial COVID-19 team (and others) fall down. Simply publishing the code opened them up to severe criticism. They would have done better to:
a) Publish a web app that ran the code to encourage the public to explore its results, not its source code, or
b) Publish a “better” version with more rigorous engineering practices, or
c) Invite the engineers in the general public to review the code. An invited review would have focused the criticism to a sensible forum, while actually making the code better. This is one of the reasons that open source is so powerful and successful as a philosophy.
In the world of software, context trumps code every single time. Software is an incredibly flexible, powerful tool that can be built and used for all sorts of different reasons. Yes, there are some good standard practices all software engineers should be encouraged to develop (unit tests, code review) but it’s also important to remember what the software is for.
Programmers on all sides have much to learn from each other. Injecting industrial rigour and engineering professionalism into scientific programming has obvious benefits for academia, but I think industrial software has a lot to learn from the minimal, highly performant code that scientists produce on a regular basis to expand our understanding of the Universe.
If you have a passion for software, then we’d like to hear from you. Check out our vacancies and join our team at the fastest growing tech company in Scotland.