Is there intelligent life somewhere else in the universe, beyond the confines of our own galaxy? All we know so far - conspiracy theories notwithstanding - is that no intelligent being has made contact with us. Which, if you think about it, seems kind of strange. All those millions of light years, a near infinity of space, and human beings are all there is?
There is one particularly pessimistic but plausible explanation for the intergalactic silence: there may well be (or have been) other intelligent life in the universe but we have received no messages from them because intelligent species tend to destroy themselves before they become technologically advanced enough to send those kinds of messages. If some natural disaster - star death, massive asteroid strike - doesn't get them, then they will destroy themselves, aided by their own massively powerful technology. This theory became popular in the 1950s, when the threat of totally destructive nuclear war hung heavy over the people of the US, Europe and the USSR.
So is there anything to this idea? And if so, what does it mean for us?
These questions may sound like the stuff of science fiction, but a number of prestigious academic institutions are taking them seriously. Last week, the University of Cambridge made headlines with its announcement of the new Centre for the Study of Existential Risk. Led by the professor of cosmology Sir Martin Rees, the department will examine the key threats to humankind's continued existence. The University of Oxford already has the Future of Humanity Institute, which brings together philosophers and scientists to study the same question.
Central to the concerns of Rees and his colleagues is the idea that humankind may be overrun by our own technology. That includes threats posed by biotechnology and nanotechnology as well as those posed by the rise of intelligent machines. What happens, asked the philosopher Huw Price, the co-founder of the centre, when "intelligence escapes from the constraints of biology" and we are surrounded by intelligent machines "that are not malicious but machines whose interests don't include us"? In other words, we may one day wake up and find that our computers have decided to destroy us.
As Price concedes, the seriousness of that risk is hard to quantify. But the trend for academic study of this question is itself a testament to the vast transformation that connective technology is effecting in our societies, our lives and our consciousness.
The timescales are hard to quantify, too. The futurist Ray Kurzweil thinks that by 2045, computers will achieve self-awareness; that, says Kurzweil, will constitute a fundamental breaking point in the history of humanity that he and others call "the singularity". It may sound far-fetched, but so did Kurzweil's prediction in 1990 that a computer would be the best chess player in the world by 1998. In 1997, IBM's Deep Blue beat the world chess champion Garry Kasparov.
So are we to be wiped out by our computers? Most, including Kurzweil, think it's unlikely. Perhaps, instead, we will become our computers and, freed from the need to exist as carbon-based life forms, will live as pure consciousness inside computer-generated virtual realities.
Of course, we may already be doing that. But that's a subject for another column.
David Mattin is the lead strategist at trendwatching.com