Happy crashing

The fan turns loud, the casing heats up, you can feel the hot air flowing out of the ventilation slits, and then: zap – the laptop shuts down, obliterating all unsaved changes you might have! Who would be happy about this happening? Well, teenage me was, in a roguish sense-of-achievement kind of way (guilty excitement), as I also admitted when Uma interviewed me recently. I did have my own desktop PC, but using my mom’s laptop was way fancier.

Image from Blue Screen of Death on Wikipedia

Learning Delphi as my first programming language in 8th grade kicked off a series of coding experiments. Back then I hardly knew anything about proper programming, or about concepts such as “Big O” notation, which is “used to classify algorithms according to how their running time or space requirements grow as the input size grows”. So my loops were very inefficient and at times actually just open-ended (infinite) – something like the sketch below. Dear computer, draw this outwardly rotating spiral until… well, until you can’t anymore. Those were the moments when my mom’s laptop surrendered. I knew this was not a good thing to happen, especially not regularly. But at the same time it made me feel weirdly powerful and proud that my code was so exhausting for this machine that it just gave up. It might be a misuse of the word empathy, but this experience gave me some sort of empathy for computing machines that do heavy work. On a side note: nowadays I no longer strive to crash computers by sending them into infinite loops.
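Reconstructed from memory and translated into Java (the original was Delphi, and the print statement stands in for a drawing call), such an open-ended loop might have looked roughly like this:

```java
// A minimal sketch of the kind of open-ended loop described above:
// the radius keeps growing and there is no exit condition, so the
// program only stops when the machine (or the user) gives up.
public class EndlessSpiral {
    public static void main(String[] args) {
        double angle = 0.0;
        double radius = 0.0;
        while (true) {                                  // "until you can't anymore"
            double x = radius * Math.cos(angle);
            double y = radius * Math.sin(angle);
            System.out.printf("%.2f %.2f%n", x, y);     // stand-in for a drawing call
            angle += 0.1;                               // rotate...
            radius += 0.01;                             // ...and spiral outwards, forever
        }
    }
}
```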


Image from Mandelbrot set on Wikipedia

Coding beautiful things

The moment of first seeing the Mandelbrot set on screen, as a result of my few lines of code, was absolutely stunning. Fractal beauty emerging from relatively simple math. Next up were rotating psychedelic Fibonacci spirals, the Lorenz attractor (as can also be seen in said interview) and so forth. The big hype about chaos theory was over by that time. But not for me: I was reliving the fascination and magic of it, watching these miraculous, intricate and beautiful shapes appear on my screen.
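For the curious, here is a minimal sketch of those “few lines” – the classic escape-time iteration z → z² + c behind the Mandelbrot set, rendered as ASCII art to stay self-contained (not my original code):

```java
// A minimal sketch of the escape-time algorithm: for each point c in the
// complex plane, iterate z -> z^2 + c and check how quickly |z| escapes.
// Points that never escape (within the iteration limit) belong to the set.
public class Mandelbrot {
    public static void main(String[] args) {
        for (double im = -1.2; im <= 1.2; im += 0.06) {
            StringBuilder row = new StringBuilder();
            for (double re = -2.1; re <= 0.9; re += 0.03) {
                double zr = 0, zi = 0;
                int n = 0;
                while (zr * zr + zi * zi <= 4 && n < 50) {  // escape test
                    double tmp = zr * zr - zi * zi + re;     // real part of z^2 + c
                    zi = 2 * zr * zi + im;                   // imaginary part of z^2 + c
                    zr = tmp;
                    n++;
                }
                row.append(n == 50 ? '#' : ' ');             // '#' = inside the set
            }
            System.out.println(row);
        }
    }
}
```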

Consider the time it takes me to think about an algorithm, read up on it, make sketches to understand the corner cases, type it down and correct it over multiple iterations, versus the time it takes to compile and execute it. Human awareness time vs. computation time: if I didn’t include factors that slow down my rotating spiral, it would be over in the blink of an eye and my “slow” human perception would have missed it altogether.
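As a hypothetical illustration of such a slow-down factor, a simple per-frame delay stretches an animation to a pace the eye can follow (renderSpiralFrame is a made-up stand-in for the actual drawing code):

```java
// A minimal sketch of a deliberate slow-down: without the sleep, all
// 600 frames would be computed faster than the eye could register them.
public class HumanPaced {
    public static void main(String[] args) throws InterruptedException {
        for (int frame = 0; frame < 600; frame++) {
            // renderSpiralFrame(frame);   // hypothetical drawing call
            Thread.sleep(16);              // ~60 frames per second, human-paced
        }
    }
}
```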


Image from Wikimedia

Back in the old days

Sometimes I envy the old computer scientists who were around in the early days of computing, when punch cards were still in use and programming meant wiring cables on gigantic machines. I imagine this must have felt much closer to human dimensions, in both time and physicality. You had to engage quite physically with these beasts, and the computation times were still within somewhat comprehensible factors of what one could do manually with pen and paper. Ever since, we have been climbing Moore’s law, and computation times are now light years away from manual calculation. A stupendous number of operations takes place before you can even pick up a pen.

Layer after layer, programming has been abstracted away from the zeros and ones, each layer doing a piece of translation work, so that you can now conveniently write something as sweet as Java code and not care much about all the many things taking place on lower levels when you run it.
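A toy illustration of that convenience; the layer list in the comments is of course a rough simplification:

```java
// One sweet, high-level line of Java – and a rough sketch of the
// translation layers it passes through before becoming zeros and ones.
public class Layers {
    public static void main(String[] args) {
        int sum = java.util.stream.IntStream.rangeClosed(1, 100).sum();
        System.out.println(sum); // 5050
        // javac  -> compiles this to JVM bytecode
        // JVM    -> interprets it, JIT-compiles hot paths to machine code
        // OS     -> schedules the process and manages its memory
        // CPU    -> finally executes the actual instructions
    }
}
```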


Taming infinity

Most computational demands we fire up every day when using a browser or a text editor are negligible. However, as soon as you enter the enchanting world of scientific computing, computation times can easily explode towards infinity if you don’t tame your algorithms properly. That’s why it is sometimes crucial to inspect algorithms with respect to their complexity classes. Luckily Florian is really good with those and keeps an eye on them when we make additions or changes to the code. Does the workload go up linearly with the input size, or exponentially, or worse? This can literally make the difference between finishing in (milli)seconds and finishing in, let’s say, the age of the universe (see the sketch below). Isn’t that crazy? Theoretical computation times can be stretched and compressed on scales that vastly eclipse anything we can relate to with our human cycles and rhythms.

The task for whoever crafts an algorithm is, of course, to bring things onto scales that are reasonable for the environments and time frames we operate and live in. In our crowd:it code there are numerous loops behind which infinity is looming; only clever thinking and solid research bring them into usable time scales. Infinity was fought back every time it showed its smirking face. And the quest to optimise existing algorithms or replace them with better ones continues, of course. It is quite possible that improvements in key spots reduce the overall computation time of a simulation considerably. Currently it is realistic that a simulation of a big scenario takes hours or even days. How did this kind of time scale come about, though? Why is it not seconds, why is it not millennia?
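To put rough numbers on this, here is a back-of-the-envelope sketch; the 10⁹ operations per second are just an assumed machine speed:

```java
// A toy comparison of complexity classes: at an assumed 1e9 operations
// per second, linear work on n = 100 inputs is over instantly, while
// 2^n operations would outlast the age of the universe (~1.4e10 years).
public class ComplexityScales {
    public static void main(String[] args) {
        double opsPerSecond = 1e9;  // assumption, roughly one modern core
        int n = 100;
        double linearSeconds = n / opsPerSecond;
        double exponentialSeconds = Math.pow(2, n) / opsPerSecond;
        double exponentialYears = exponentialSeconds / (365.25 * 24 * 3600);
        System.out.printf("O(n):   %.1e seconds%n", linearSeconds);    // ~1e-7 s
        System.out.printf("O(2^n): %.1e seconds, ~%.1e years%n",
                exponentialSeconds, exponentialYears);                 // ~4e13 years
    }
}
```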

Chewing gum time

It hits me from time to time (pun intended) how time is this chewing-gum dimension in the world of computation, especially in our field of simulations. Running a large scenario with many pedestrians might take all night to complete, even though it might only be 15 minutes of real time that are being simulated. On the other hand, if you did the same sophisticated math by hand that the computer did for you, you wouldn’t be able to finish it within your lifetime! You see how wildly time is bent here? You seek to simulate the real thing, and the simulation takes many times longer than the event itself. But once you have your simulation results, you can replay them much faster than real time, to get a quicker sense of what’s going on. And this again might help if the real event takes place one day in the future: unclogged evacuation routes that our simulation results helped argue for save precious time and keep everyone safe.
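With made-up but plausible numbers, the bending of time looks like this:

```java
// A back-of-the-envelope sketch of the "chewing gum" time scales above.
// All numbers are illustrative assumptions, not measurements.
public class ChewingGumTime {
    public static void main(String[] args) {
        double simulatedMinutes = 15;      // real-world time being simulated
        double computeMinutes   = 10 * 60; // an overnight run, ~10 hours
        double replayMinutes    = 3;       // sped-up playback of the results
        System.out.printf("Computing the scenario: %.0fx slower than real time%n",
                computeMinutes / simulatedMinutes);   // 40x
        System.out.printf("Replaying the results:  %.0fx faster than real time%n",
                simulatedMinutes / replayMinutes);    // 5x
    }
}
```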


By Benjamin Degenhart, software developer at accu:rate

Header photos by InstaWalli and Uroš Jovičić on Unsplash.