jrising: (Default)
[personal profile] jrising
I really need to get back in the habit of making thorough notes shortly after the Salon-- I'm losing too many good discussion threads. One of our biggest topics at the Salon concerned recent changes in programming, which I've wanted to write about for a while. Here are my thoughts on it, informed by the Salon discussion, plus some other discussion topics below. Feel free to remind me of other topics in the comments, and I'll record what I remember about them.

Programming has changed enormously since computers were invented. I don't just mean that assembly gave way to higher-level procedural languages which gave way to object-oriented languages, although that mirrors the shift I'm interested in. In the days before C, programming languages had a fairly small, well-defined collection of building blocks, and it was the programmer's responsibility to construct whatever they needed. With the shift to libraries and then to object-oriented languages, the programmer's job has become more to connect pieces constructed by other people.

The pieces are also changing. They're becoming more intelligent, more communicative, and more accepting of ambiguity. Programmers have realized the power in-- and the need for-- type-fluidity. Currently that's instantiated in typeless languages, but these still form a kind of antithesis, waiting for a new synthesis with traditional typed languages.

The things we're programming are different too. The programmer is no longer a craftsman. In the past, people designed programs to do a certain thing well. Now, people realize that they are really engineering experiences or "ways of understanding". We like one program over another not because it does something better, but because it allows us to conceive of our task differently.

Which is exactly what different programming languages themselves do. With plug-in designs, programs themselves are allowing users to construct the context for their own experience.

The way we think of technology is in such incredible flux right now. With web 2.0 ideas (participatory, dynamic content; new kinds of social networking), the internet is changing and becoming the necessary context of all computer use. With mobile devices, the personal computer, our interface to it, and the ways we use it are changing. In another 10 years, programming will be vastly different; in another 20, it probably won't exist, as we currently conceive it.

Anyway, we also talked about Digital Rights Management, specifically relating to Apple's decision to drop DRM-protection tying iTunes to iPods, and how artists should be "rewarded" for their work. And we talked about the nature of Salons, and the possibility of having a kind of "party-salon", which is more like the kind of gathering that was found in Paris.

Date: 2007-02-24 07:42 pm (UTC)
jducoeur: (Default)
From: [personal profile] jducoeur
This is related to the "JIT" concept that has emerged in recent years. JIT refers to "Just In Time" compilation, and it's really where the interesting work is nowadays.

Basically, the idea is that compilers only have limited information to work from, because they have a static understanding of the behaviour of the system. But in fact, you can learn vastly more at *runtime*, by observing the actual behaviour and re-optimizing based on that. For example, if you can see the operation of your loops in action, you can sometimes determine that particular loops are worth unrolling. Also, you know more about the specific operating environment, and precisely how to optimize not just for this architecture, but for this *machine*.
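(For the curious: here's a rough by-hand sketch of what loop unrolling looks like -- a JIT applies this kind of transformation automatically to loops it observes running hot. The class and method names are purely illustrative, not from any real system.)

```java
// Sketch of loop unrolling: the transformation a JIT applies
// automatically to hot loops. All names here are illustrative.
public class UnrollDemo {
    // Straightforward version: one add, one bounds check, and one
    // branch per element.
    static long sumSimple(int[] a) {
        long total = 0;
        for (int i = 0; i < a.length; i++) {
            total += a[i];
        }
        return total;
    }

    // Unrolled by a factor of four: fewer loop-counter updates and
    // branches per element, at the cost of larger code.
    static long sumUnrolled(int[] a) {
        long total = 0;
        int i = 0;
        int limit = a.length - 3;
        for (; i < limit; i += 4) {
            total += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
        }
        for (; i < a.length; i++) { // handle the leftover elements
            total += a[i];
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(sumSimple(data));   // prints 499500
        System.out.println(sumUnrolled(data)); // prints 499500
    }
}
```

The point of doing this at runtime rather than at compile time is that the JIT can unroll only the loops it has actually seen dominating the profile, instead of guessing statically.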

Hence the rise of JITs. They're particularly a feature of semi-compiled languages like Java and C#, which "compile" into an intermediate bytecode language. In the early days, that bytecode was interpreted at runtime, which was why Java had a reputation for being slow. But nowadays, the bytecode is further compiled at runtime, transformed into real machine language as the program is loading. Furthermore, the most sophisticated systems go further, *re*-compiling as the program is running to re-optimize it based on the observed behaviour.

The result is [livejournal.com profile] jrising's observation. Basically, while it's true that the *theoretical* limit of performance comes from assembler being written by someone who precisely understands the underlying architecture, the number of people who are good enough to program at that level is vanishingly small. (You might be one, but you're unusual in that respect -- in thirty years of programming, I doubt I've known half-a-dozen people who were that good at the low level, and they were all specialists in writing videogame renderers.)

*Most* C programmers write code that is efficient but not optimal, and the best JITs are now claiming (I haven't reality-checked the numbers, but the claim is common and plausible) to be able to significantly outperform all but the best, by performing extremely deep on-the-fly optimizations of things like memory caching. To get optimal performance, you need to be optimizing not just the code but the memory organization, and that is damned hard stuff to do by hand. No high-level languages make it at all straightforward, and most make it more or less impossible, but the JIT, which is watching the behaviour and adjusting on the fly, can do a pretty good job of it, moving stuff around so that memory that is being used together tends to be on related pages and getting cached together. That produces superior memory-access time and improved overall performance, since the program is spending more time hitting the L1 cache instead of RAM.
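(The memory-layout effect is easy to demonstrate even by hand. This sketch -- all names illustrative, nothing from a real JIT -- sums the same 2D array twice: the row-order walk touches memory sequentially and stays in the cache, while the column-order walk strides across rows and keeps missing it. A JIT doing layout optimization is essentially trying to turn the second access pattern into the first without the programmer's help.)

```java
// Sketch of cache-friendly vs cache-hostile traversal of the same data.
// Java stores a long[][] as rows of contiguous arrays, so walking
// row-by-row touches memory sequentially, while walking column-by-column
// strides across rows and defeats the cache. Names are illustrative.
public class CacheDemo {
    static long sumRowMajor(long[][] m) {
        long total = 0;
        for (int r = 0; r < m.length; r++)
            for (int c = 0; c < m[r].length; c++)
                total += m[r][c];        // sequential access
        return total;
    }

    static long sumColMajor(long[][] m) {
        long total = 0;
        for (int c = 0; c < m[0].length; c++)
            for (int r = 0; r < m.length; r++)
                total += m[r][c];        // strided access
        return total;
    }

    public static void main(String[] args) {
        int n = 2000;
        long[][] m = new long[n][n];
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
                m[r][c] = r + c;

        long t0 = System.nanoTime();
        long s1 = sumRowMajor(m);
        long t1 = System.nanoTime();
        long s2 = sumColMajor(m);
        long t2 = System.nanoTime();

        // Same answer, very different memory behaviour: the row-major
        // walk is typically several times faster on a large array.
        System.out.printf("row-major: %d ms, col-major: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```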

I don't know the embedded space well, as I mentioned before, so it's possible that this stuff simply doesn't matter as much where you are -- you're likely building the programs to be as cached as possible to begin with. But the number of applications where that is possible is *quite* small: most programs are simply too big and complex to make that level of memory control possible by hand. So the only practical way to get good memory optimization is to let the computer take care of it, with a sophisticated JIT working hand-in-glove with the hardware to keep the cache filled appropriately.

Caveat: all the above is talking rather above my level -- I work with this stuff, and stay reasonably up on the literature, but it's not a topic I know well. (In my space, I care far more about scalability than performance, so I just don't *care* much about this kind of cycle optimization. Threading is vastly more important to my life.) So this is my understanding of the state of the art in a field that is advancing quite rapidly...
