I really need to get back in the habit of making thorough notes shortly after the Salon-- I'm losing too many good discussion threads. One of our biggest topics at the Salon concerned recent changes in programming, which I've wanted to write about for a while. Here are my thoughts on it, informed by the Salon discussion, plus some other discussion topics below. Feel free to remind me of other topics in the comments, and I'll record what I remember about them.
Programming has changed enormously since computers were invented. I don't just mean that assembly gave way to higher-level procedural languages, which gave way to object-oriented languages, although that mirrors the shift I'm interested in. In the days before C, programming languages had a fairly small, well-defined collection of building blocks, and it was the programmer's responsibility to construct whatever they needed. With the shift to libraries and then to object-oriented languages, the programmer's job has become more about connecting pieces constructed by other people.
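To make that concrete, here's a toy Python example of the newer style (the example itself is mine, purely for illustration): almost nothing is built from scratch, and the work is mostly wiring together pieces someone else wrote.

    # A toy example of the newer style: word-frequency counting built by
    # connecting standard-library pieces rather than hand-rolling them.
    import re
    from collections import Counter

    def top_words(text, n=3):
        # re tokenizes, Counter tallies, most_common sorts -- all pieces
        # someone else built; my job is mostly the connections.
        words = re.findall(r"[a-z']+", text.lower())
        return Counter(words).most_common(n)

    print(top_words("the quick brown fox jumps over the lazy dog, says the fox"))

In the pre-library style, every one of those steps -- tokenizing, counting, sorting -- would have been written out by hand.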
The pieces are also changing. They're becoming more intelligent, more communicative, and more accepting of ambiguity. Programmers have realized the power in-- and the need for-- type fluidity. Currently that's instantiated in dynamically typed ("typeless") languages, but these still form a kind of antithesis, waiting for a new synthesis with traditional typed languages.
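Here's a tiny Python sketch of the fluidity I mean (Python standing in for the typeless languages; the shapes are just an invented example):

    # A duck-typed "total area": the function never names a type; anything
    # that answers .area() works, which is the kind of fluidity I mean.
    class Circle:
        def __init__(self, r):
            self.r = r
        def area(self):
            return 3.14159 * self.r ** 2

    class Square:
        def __init__(self, s):
            self.s = s
        def area(self):
            return self.s ** 2

    def total_area(shapes):
        # No declared interface, no casts -- just send the message and
        # let whatever object shows up answer it.
        return sum(shape.area() for shape in shapes)

    print(total_area([Circle(1), Square(2)]))

A traditional typed language would demand an explicit interface up front; the eventual synthesis, whatever it looks like, will presumably let you have both.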
The things we're programming are different too. The programmer is no longer a craftsman. In the past, people designed programs to do a certain thing well. Now, people realize that they are really engineering experiences or "ways of understanding". We like one program over another not because it does something better, but because it allows us to conceive of our task differently.
Which is exactly what different programming languages do for the programmer. And with plug-in designs, programs themselves are letting users construct the context for their own experience.
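A plug-in design can be sketched in a handful of lines; here's a toy Python version (the hook names are invented, just to make the idea concrete):

    # A minimal plug-in registry: the host defines the hook, and users
    # (or third parties) supply the pieces that shape their experience.
    PLUGINS = {}

    def plugin(name):
        def register(func):
            PLUGINS[name] = func
            return func
        return register

    @plugin("shout")
    def shout(text):
        return text.upper() + "!"

    @plugin("reverse")
    def reverse(text):
        return text[::-1]

    def render(text, enabled):
        # The host just runs whichever plug-ins the user chose to enable.
        for name in enabled:
            text = PLUGINS[name](text)
        return text

    print(render("hello salon", ["shout", "reverse"]))

The host program ships with no opinion about what "shout" or "reverse" should be; the plug-ins a user chooses are what determine the experience.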
The way we think of technology is in such incredible flux right now. With Web 2.0 ideas (participatory, dynamic content; new kinds of social networking), the internet is changing and becoming the necessary context of all computer use. With mobile devices, the personal computer, our interfaces to it, and the ways we use it are all changing. In another 10 years, programming will be vastly different; in another 20, it probably won't exist as we currently conceive it.
Anyway, we also talked about Digital Rights Management, specifically relating to Apple's decision to drop the DRM protection tying iTunes to iPods, and how artists should be "rewarded" for their work. And we talked about the nature of Salons, and the possibility of having a kind of "party-salon", which is more like the kind of gathering that was found in Paris.
Date: 2007-02-22 02:28 pm (UTC)
Still, I have to expect that even there hand-programming is *eventually* going to become problematic. The thing is, cycle efficiency per core is ceasing to be the gating question: the nature of computer architecture is taking a radical left turn, starting last year. After years of everyone knowing that multi-core would eventually become necessary, there was a rather sudden consensus that last year was the time -- that single-core architectures had reached their limit, and the only way to squeeze out more speed was to go multi-core.
More relevant to your point, multi-core is basically what's driving per-watt efficiencies now, as well. Part of what's been driving up the energy cost per unit of speed has been the relentless march of on-chip optimization, and those optimizations are horribly expensive. So instead, everyone is making a real leap, to more, simpler cores on each chip. Those cores are both significantly cooler and slower than the ones that preceded them; in theory, the speed is being made up for by the fact that there are more of them.
In the short run, I don't expect that to change your life dramatically: you'll just hand-code to the separate cores. But eventually, I have to question whether that's going to be practical. You can hand-code to four cores without real difficulty, but making efficient use of, say, 80 of them (and they are talking about numbers like that in the not *terribly* distant future) seems less plausible to me. I don't know the embedded world *nearly* as well as I do the personal/server space, but it feels to me like a paradigm shift is going to become a flat necessity eventually.
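To make the contrast concrete, here's a rough sketch (Python, purely illustrative -- the names are mine) of the kind of higher-level model I mean: you describe the work once, and the runtime spreads it over however many cores the machine happens to have.

    # Rough sketch: describe the work, and let the pool map it onto
    # whatever cores exist -- 4 or 80, this code doesn't change.
    from multiprocessing import Pool, cpu_count

    def crunch(n):
        # Stand-in for a per-item computation.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool(cpu_count()) as pool:
            results = pool.map(crunch, range(10000, 10010))
        print(len(results), "results computed on", cpu_count(), "cores")

Hand-assigning work to each of 80 cores, by contrast, doesn't scale as a human activity.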
If most of the world's programmers migrate to higher-level languages, I would refer back to my original question:
Does this mean that I will be forever pigeon-holed, most valued for an archaic set of skills that is no longer taught?
The short answer is yes; indeed, it's probably largely so already. If you're operating at the C/assembler level, I'd guess that most current graduates really can't relate to what you do. (It kind of threw me when I started to realize that most of the kids coming out with CS degrees had never done *any* assembler, but it's true, and they regard C as quaint if they know it at all.)
Date: 2007-02-22 03:04 pm (UTC)
- The system architecture already uses multiple (many) cores - only the inter-core bandwidth is improving. On-chip, we do hand-code to a couple of cores, but then often the same images will be shared amongst several cores in a multi-core chip because they will be performing the same functions in parallel. Whether the core aggregating the data is handling 4 or 80 sets of streams is just a matter of bandwidth and memory.
- Deterministic processing (desirable in telecom) requires that core n perform x and y - and only x and y. You wouldn't believe how skeptical and freaked out some of our larger customers were when we made it that much less deterministic by adding *cache* to our chips.
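To give a flavor of what deterministic means here, a rough sketch (Python on Linux, purely illustrative -- nothing like what actually runs on our chips): each worker is pinned to a single core and runs one fixed job, and nothing else ever lands on that core.

    # Illustrative only: pin each worker process to one core and give it
    # a single fixed job, so core n does x and y -- and only x and y.
    import os
    from multiprocessing import Process

    def filter_stream():
        pass  # stand-in for one core's fixed function

    def aggregate_streams():
        pass  # stand-in for another core's fixed function

    def worker(core, job):
        os.sched_setaffinity(0, {core})  # Linux-only: bind this process to that core
        job()                            # the core's one and only task

    if __name__ == "__main__":
        assignments = [(0, filter_stream), (1, aggregate_streams)]
        procs = [Process(target=worker, args=(c, j)) for c, j in assignments]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

Adding anything that makes a core's timing less predictable -- even a cache -- undermines that guarantee, which is why the customers balked.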
So we certainly do trail other technology, but it's because our bottom line is dictated by how many chips we can sell - which is in turn determined by our pricing (yield from the fab), our time to market (vs. our competitors), and our efficiency (MIPS/watt).
We have never employed on-chip optimization, as it would adversely affect both TTM and efficiency. Sadly, our compiler team has not yet produced the perfect compiler. ;-)
It's an interesting problem. As an engineer I'm very interested in optimization problems (in general, I like to make things efficient), so it seems to be a good fit for me. I had done some assembly and C in college, but I think those courses have been largely replaced by Java/C++ in many schools. It's obvious that C/assembly are not going to go away - and though I am in something of a niche, the talent pool is most likely going to shrink as demand grows. At least, that's what my bank account hopes for. =)