About to ride to the bus station this morning; greeted by another nice rainbow:
Brought to us by the laws of physics. No pot of gold evident. Pity.
Happy Darwin Day 2016!
Freedom of thought is best promoted by the gradual illumination of men’s minds which follows from the advance of science. (Charles Darwin)
From so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved. (Charles Darwin, Origin of Species)
Meanwhile, let creationism talk itself into oblivion.
It has often and confidently been asserted, that man’s origin can never be known; but ignorance more frequently begets confidence than does knowledge; it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science. (Charles Darwin, The Descent of Man)
MIT Artificial Intelligence (AI) pioneer Marvin Minsky died at the age of 88 on January 24th in Boston.
See this EE Times blog post for a good summary.
Marvin Minsky’s participation in the 1956 Dartmouth Conference along with John McCarthy, Nathaniel Rochester, and Claude Shannon gave rise to the term artificial intelligence. While considerable progress has been made in the domain of machine intelligence, Minsky’s book The Emotion Machine deals with some of the more recalcitrant aspects of human-level intelligence.
His work ranged widely, from early work on neural networks (perceptrons) and connectionism to symbolic or classical AI, including expert systems. I took university classes in classical AI and enjoyed experimenting with neural networks in the 90s, but my knowledge representation Master’s thesis was very much in the symbolic camp.
Minsky provided advice for the movie 2001: A Space Odyssey regarding the delusional HAL 9000 computer. He made this remark about the film:
Kubrick’s vision seemed to be that humans are doomed, whereas Clarke’s is that humans are moving on to a better stage of evolution.
I’ll end with more Minsky quotes that provide some insight into this influential man’s thought processes. I particularly like the final tongue-in-cheek comment.
No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.
If you just have a single problem to solve, then fine, go ahead and use a neural network. But if you want to do science and understand how to choose architectures, or how to go to a new problem, you have to understand what different architectures can and cannot do.
I believed in realism, as summarized by John McCarthy’s comment to the effect that if we worked really hard, we’d have an intelligent system in from four to four hundred years.
Peter Naur received the ACM Turing award in 2005 for “…fundamental contributions to programming language design and the definition of Algol 60, to compiler design, and to the art and practice of computer programming”.
He is best known as the original editor of the Algol 60 Report, and as the “N” in BNF, or Backus-Naur Form (with John Backus, of Fortran fame), first used to describe the syntax of Algol 60. Naur objected to the name and thought BNF should denote Backus Normal Form instead. Nevertheless, MacLennan (1983), in Principles of Programming Languages: Evaluation and Implementation, notes the following about the connection between BNF and Naur:
Peter Naur, then the editor of the Algol Bulletin, was surprised because Backus’s definition of Algol-58 did not agree with his interpretation of the Algol-58 report. He took this as an indication that a more precise method of describing syntax was required and prepared some samples of a variant of the Backus notation. As a result, this notation was adopted for the Algol-60 report…
I gave examples of BNF from the Report, along with Algol code fragments, in a talk to the Australian Computer Society marking the 50th anniversary of Algol 60. Compiler construction tools like lex & yacc arose from the creation of BNF, and variations such as EBNF (with which I have spent more time) led to more expressive, concise programming language grammars and still more powerful tools.
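For readers who haven’t met BNF, its flavour can be conveyed with the Report’s own production for identifiers. Here’s a minimal sketch in Python (my illustration here, not from the talk) that reads the recursive rule directly as code:

```python
# The Algol 60 Report defines identifiers with the BNF production:
#   <identifier> ::= <letter> | <identifier><letter> | <identifier><digit>
# i.e. a letter followed by any mix of letters and digits. A direct
# (if naive) transliteration of the recursion into a recogniser:
def is_identifier(s: str) -> bool:
    if len(s) == 1:
        return s.isalpha()                           # <letter>
    return len(s) > 1 and is_identifier(s[:-1]) and (
        s[-1].isalpha() or s[-1].isdigit())          # <identifier><letter> | <identifier><digit>

for token in ["q", "sum1", "9x", ""]:
    print(repr(token), is_identifier(token))
```

A real lex/yacc-style tool generates this kind of recogniser (and much more) from the grammar itself; the point here is only how directly a BNF production maps onto code.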
Alan Perlis commented in 1978, with a pun on the begin and end keywords used to delimit code blocks, that:
Algol’s is the linguistic form most widely used in describing the new exotic algorithms…Where important new linguistic inventions have occurred, they have been imbedded usually within an Algol framework, e.g. records, classes, definitions of types and their operations,…, modules. Algol has become so ingrained in our thinking about programming that we almost automatically base investigations in abstract programming on an Algol representation to isolate, define, and explicate our ideas…It was a noble begin but never intended to be a satisfactory end.
Others have remarked upon the contribution of Algol:
Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors. (1980 Turing Award Lecture, C.A.R. Hoare)
Lisp and Algol, are built around a kernel that seems as natural as a branch of mathematics. (Metamagical Themas, Douglas Hofstadter)
Algol 60 lives on in the genes of Scheme and Pascal. (SICP, Abelson & Sussman)
Block structure, lexical scope, and recursion are just a few features that no Pascal, C/C++, Python, Java, or C# programmer would find surprising. Naur and his collaborators played a large part in shaping the programming languages we think and code in today. Scheme is a dialect of Lisp (and Lisp predates Algol), yet it was ultimately influenced by Algol too, e.g. in its lexical scoping and in its “language report” document approach (see the Revised Report on the Algorithmic Language Scheme).
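To show how deeply these Algol-era features are baked into today’s languages, here’s a small Python sketch (my own, not drawn from any of the sources above) of lexical scope and recursion:

```python
# Lexical scope: the inner function resolves 'count' in its enclosing
# definition, not in whoever happens to call it (dynamic scope).
def make_counter(start):
    count = start
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

# Recursion: a routine may call itself -- routine in any Algol
# descendant today, but a notable feature when Algol 60 appeared.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

c = make_counter(10)
print(c(), c())        # 11 12
print(factorial(5))    # 120
```

That neither example raises an eyebrow now is precisely the point: the surprising has become the assumed.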
As Perlis alludes to above, many of the beneficiaries of the descendants of Algol are unaware of how their language constrains the way in which they think about programs and programming, the Sapir-Whorf hypothesis in action.
The late Dutch computer scientist Edsger Dijkstra remarked:
Several friends of mine, when asked to suggest a date of birth for Computing Science, came up with January 1960, precisely because it was Algol 60 that showed the first ways in which automatic computing could and should and did become a topic of academic concern.
Naur started his career as an astronomer, but changed his profession after encountering computers. He was not fond of the idea of programming as a branch of mathematics and saw it as very much a human activity, the sub-title of a 1992 book (Computing: A Human Activity) by Naur. Section 1.4 entitled Programming as Theory Building challenges the simplistic Agile mantra that source code is all that matters, whereas in fact, like JPEG image files, it can be seen as a lossy format, a distillation of a programmer’s thoughts with lost context, mitigated only in part by appropriate comments (an art form in itself).
In Programming as Theory Building, Naur outlines scenarios relating to writing a compiler for similar languages and the development of a large real-time industrial monitoring system. In his words:
The conclusion seems inescapable that at least with certain kinds of large programs, the continued adaption, modification, and correction of errors in them, is essentially dependent on a certain kind of knowledge possessed by a group of programmers who are closely and continuously connected with them.
Naur was offered a professorship in computer science at the University of Copenhagen in 1969. He thought that computer science was fundamentally about the nature and use of data, didn’t much like the term itself, and coined datalogy, which he felt had a more human orientation, giving rise to what has been called the Copenhagen interpretation of computer science (as opposed to the one in quantum physics).
Later in his career, Peter Naur was critical of contemporary conceptions of science and philosophy, developed a theory of human thinking called Synapse-State Theory of Mental Life, contrasted human vs computer thinking (see this video) and rejected the possibility of strong AI. I may not agree with all of Naur’s ideas in this area, but consider them worth hearing.
As an aside, this Y-Combinator post about Naur’s passing makes the interesting observation that a number of widely used programming languages other than Algol 60 have Danish origins, e.g.
It has occurred to me increasingly in the last few years that many of our field’s pioneers are reaching the end of their lives. Even confining oneself to those associated with programming language design and implementation, since 2010, at least the following have died:
Before I started writing this post, I knew about Naur’s association with Algol 60 and BNF, but that’s about all. Now I’ve discovered that, like so many pioneers, he had depths I have only just started to explore, even if confining myself to his work Computing: A Human Activity.
Most Friday nights, Christian street preachers and pamphleteers inhabit Adelaide’s Rundle Mall. One pamphlet offered to me recently had the title The Final Flicker.
In summary, the pamphlet makes the following assertions:
The first two are self-evident: we’re going to die but we don’t know when. For anyone who has lost someone close, the third is not hard to fathom either. If anything, it’s patronising and pedantic. Everyone dies. Welcome to Life.
Point 4 says that the Bible has all the answers about life and that friends and scientists don’t. This is a bold claim indeed and needs to be justified.
That which can be asserted without evidence can be dismissed without evidence. (Christopher Hitchens)
The fifth point declares that the purpose of life is only to prepare for death. Really? How depressing. Another unjustified claim unworthy of further attention. Nothing to see here. Move along…
Hmm. Wait. If these people really believed what they said, namely that the purpose of life is only to prepare for death, then why wait? Why not just end their lives now? I suppose the counter claim will be that suicide is a sin. Phew!
EDIT: After reading this post, a friend pointed out that since all sins should be forgiven, even this is not really an objection.
Another objection a Christian apologist may raise is: time is needed for such preparation. But how much preparation and of what kind? If it’s just a matter of believing something, well, anyone can do that, at anytime. If life is a moral training ground, and salvation comes from good works, then sure, that would take time. But, skipping to the end, point 20 says:
All you have to do is repent, turn from your sin, trust Him as your Saviour and you will be saved.
So no good works are required, just turning away from sin and having faith.
Higher up the list again: point 6 says that the Bible is clear that after death we go to Heaven or Hell.
What biblical verse declares this so unambiguously? The pamphlet is keen to point to specific verses to “back up” other points. Why not this one, given its obvious importance?
Perhaps it should quote Matthew 25:41. Want to see what that would mean in practice? Read points 7 to 10 again, view as much of The Thinking Atheist’s video Burn Victims as you can, and then ask yourself whether any aspect of a god who would send one of its own creatures to such an unimaginably hideous place could ever be considered good, just or righteous in any meaningful sense.
Point 11 brings us to John 3:16, the idea that if we just believe in God, we won’t be punished for our sins eternally but will have a better, eternal life. That brings us back to the question I raised above: how much preparation is necessary and of what kind? Well, if we just have to believe, then we can end it all at any time! Right?
Surely this is all just too much like a game…
God could simply declare that everyone can come to the eternal party. Apparently this god requires the attention and adoration of its creatures, yet an all-powerful god should want for nothing. Right?
Point 12 declares that “God is just, so must punish sins”. That’s like me saying that I have a strong sense of morality, so I should punish those who don’t, or at least those who do “wrong”. Oh, I forgot. I’m not a god… Apparently, you need to have created a universe to be able to call yourself “just”.
All other points (12 to 17) are in need of evidence, not the least of which:
The person who is certain, and who claims divine warrant for his certainty, belongs now to the infancy of our species. (Christopher Hitchens)
In the end, the essence of the pamphlet is this:
The only positive thing I can say about any of this is that at least the pamphleteers are being consistent regarding core Christian claims, rather than adhering to some watered down theology consisting of only a vague notion of god, like many liberal denominations. That’s not to say anything about the veracity of the fundamentalist’s claims of course.
One particularly obnoxious old idea is Pascal’s wager, the “argument” that it is in our best interest to assume that God exists (but which one? there are so many to choose from), to avoid the possibility of eternal punishment.
If God does not exist, the thinking goes, nothing has been lost, right?
Wrong! A life of pointless servitude can be avoided if a person recognises the distinct possibility that monotheism is an off-by-one error, i.e. that there is no evidence that any god exists, whether some version of the Judaeo-Christian god or any other, so that the correct number of gods is not one but zero.
Based upon the available evidence, this is all an atheist claims. My son noted this short animation recently, which makes a pretty compelling case for the off-by-one error.
In fact, “atheism” is a term that should not even exist. No one ever needs to identify himself as a “non-astrologer” or a “non-alchemist.”… Atheism is nothing more than the noises reasonable people make in the presence of unjustified religious beliefs. (Sam Harris, Letter to a Christian Nation)
The Universe revealed by Science is rich enough. We don’t need to add our own unfounded complexity. Science and engineering have created the modern world that so many of us are fortunate to live in and are, along with critical thinking more generally, the only hope for solving our biggest problems.
If religious instruction were not allowed until the child had attained the age of reason, we would be living in a quite different world. (Christopher Hitchens)
I get that people are afraid to die and find the idea of losing those they care about difficult to bear. The deep-felt desire for an afterlife is, I think, at the heart of most religions, whether openly acknowledged or not.
However, given the challenges to our way of life from climate change and dogmatic thinking, it’s not okay to retreat into The Dark like frightened children.
Come on people, grow up! We are not at the centre of things.
I’ll end with another quote from Hitchens, who has said it all better than I ever could:
The only position that leaves me with no cognitive dissonance is atheism. It is not a creed. Death is certain, replacing both the siren-song of Paradise and the dread of Hell. Life on this earth, with all its mystery and beauty and pain, is then to be lived far more intensely: we stumble and get up, we are sad, confident, insecure, feel loneliness and joy and love. There is nothing more; but I want nothing more.
In the 90s I was pretty happy if you sat me down with a C compiler and vi. Any programmer exposed to a Unix variant of some kind will know about the sometimes quirky but very powerful editor, vi. At UTAS as a computer systems officer (“jack of all trades, master of none”: coding, soldering, networking, software/hardware installation), I taught vi and other Unix-related topics to academics in short courses; later, as a junior academic, I continued to use it whenever I was on a Unix system. It may sound a bit sad, but two of the happiest weeks of my working life were spent in a portable office (terrapin style) with a DEC VT220 terminal, SunOS (or it may have been Solaris by then) Unix, a C compiler and vi, writing the core of a student enrolment system at UTAS that was still in use for several years after I left.
I also occasionally used emacs and, on pre-OS X Macs, whatever editor the IDE (integrated development environment, although I suspect they weren’t called that at the time) provided, for example THINK C.
On the Amiga I used an editor that came with AmigaDOS (ED or EDIT was, I think, the name), MicroEMACS, and at least one port of vi. These, especially the fairly minimalist MicroEMACS, were perfectly fine for developing my ACE Amiga BASIC compiler.
I’ve also had a go at writing a simple editor or two, including one for a PICK mainframe email system I developed as an undergraduate project.
Upon finding the need to leave UTAS (due to university funding cuts around 1996), I was offered work as a programmer and sysadmin with a Tasmanian Internet Service Provider (ISP), when things were just getting started in that area. Being paid to code in C and to learn Perl kept me happy for a while; although I knew from the start that I’d eventually want more than ISP work, it was a great opportunity. Best of all, I was able to provide employment there for one of my former UTAS students after I left, giving him a start in the game. There, as always, was vi, this time on FreeBSD and BSDi Unix systems.
When I started working for Motorola (and later Freescale Semiconductor) in 1997, I found a fairly even split between use of vi and emacs there (under Solaris), along with a high degree of religious adherence to one or the other, the kind of zeal that still accompanies the adherents of particular programming languages. I’ve always had a fascination for the LISP programming language, so emacs with its in-built LISP interpreter won points on that front, along with specific modes of use in the Motorola/Freescale environment.
These days I’m a bit of a generalist when it comes to both editors and programming languages (C, C++, Java, Python, R etc), although I have my favourites and those less favoured. That’s the subject of another post, I think.
On any given day, I could find myself using vi, emacs, Eclipse, Visual Studio, PyCharm or various other IDEs. On Unix (okay, Linux systems now) or cygwin, I have for many years tended to use vi (okay, vim, its now dominant incarnation) for quick edits when I want an editor right now, and emacs for more complex editing. Despite the power of modern IDEs, they are, like modern operating systems, often slow resource hogs, and tend to leave me a bit cold. At least some of them have emacs and vi modes for their editors. There are other newcomer editors under Windows and Unix that, while fine, don’t compel me to use them. To be fair to IDEs though, if you need non-trivial source-level debugging, they’re hard to beat. Having said that, I’ll still break out a command-line debugger when it’s appropriate.
Now, as a software engineer with CSIRO, especially in Linux high performance computing (HPC) cluster environments coding in C++, I find myself using vi, or in particular gvim (“graphical vi improved”), more and more again. Once more I’m really loving its power and simplicity, including tags for source code navigation, split windows, and all the keyboard shortcut goodness that has always made vi fast and productive to work with. It’s also better from a resource usage point of view on an HPC system.
Maybe this is partly motivated by a nostalgic streak, but mostly arises from a pragmatic approach.
Anyway, sometimes improving on the past is not so easy or at best only incremental.
My observation, 9.124 ± 0.031 V, is shown under the cross-hairs in the images: Visual and Johnson V together, and V alone.
Minimum should be happening around about now (~Aug 27).
I have images from Aug 25 that I’ll process this weekend. The conditions were less than ideal, but I managed to get some data before the clouds became a persistent problem.
I hope to take some more images this weekend.
Thanks to Peter Williams for prompting me to consider making observations of BL Tel which is nicely positioned high in the evening night sky now.
In Star Trek IV: The Voyage Home, Captain Kirk amusingly refers to late 20th century America as a primitive and paranoid culture. The same could be said of many countries then and now, some more than others of course, especially in recent times.
Like many (but not all, that’s for sure) in Australia, I’m saddened and disturbed by today’s executions. In comments about the event I’ve seen people conflate two distinct issues:
The (flawed) legal machine that eventually led to the firing squad has nothing to say about morality. There’s no ethical content to be found in The Law. Ethics must come first to inform The Law.
In my view, the death penalty is the sign of a primitive culture or at least some aspects of that culture.
Back in the Star Trek universe, a rare culture might be thought ready to join the club (Federation), but many are thought to be too primitive in one sense or another at a particular point in time.
At the moment, I think of the country in question as being like one of those primitive cultures: not yet ready to join the club due to an inability to see the moral harm of the death penalty and an unwillingness to engage in rational conversation about it. I’m sure there are many individuals in that country who do see the problem, which is why it’s important to distinguish a country’s citizens from those who purport to run it.
If we didn’t worship the almighty dollar so much and had the strength of our convictions, we might consider imposing economic sanctions, declaring the country unfit to join the club and suggesting: “perhaps one day, when you have grown up and apologised for your barbarism, we will consider trading with you again”.
Of course, sanctions are not without harmful effects, at least on ordinary people.
There’s no simple response, but there must be one, and it must be clear.