Abu Dhabi, UAE, Tuesday 25 July 2017

Meet the facebots

The UAE University's Interactive Robots and Media Laboratory is pushing the boundaries of artificial intelligence.

Of the two of them, Ibn Sina is the looker. Which isn't saying much: his beard is wiry and his skin looks waxy, which makes sense since it is made of flexible plastic. In repose his expression is fierce, though, admittedly, he can give a warm smile when instructed. And his ghutra is elegantly arranged. Still, he isn't bright. Sarah is the brains of the operation. Alas, you can see why she needs a frontman. Despite being billed as a "facebot", she barely has a face at all.

"This is something you can quote," Dr Nikolaus Mavridis tells me, holding the back of Ibn Sina's head open so I can inspect the servomotors whirring away inside his creation. "This project actually created the first Arabic-speaking interactive humanoid." I find this easy to believe. Harder to credit, though it appears to be true, is the notion that Mavridis, working at the Interactive Robots and Media Lab at UAE University, might not have created the first robot with a Facebook account. Look for yourself: an army of droids, of varying degrees of fictionality, beat them to it. Still, Sarah is almost certainly the first robot with an account who knows how to use it. She has 79 online friends at the moment. From January, when the experiment is thrown open to the world, she will be considering invitations from all comers.

She also has offline friends, and friends whom she knows both on Facebook and in that world of bodies and things which has come to be known as "meatspace". For it would be incorrect to say Sarah has no face whatsoever. She has a pair of eyes and knows how to use them. That is, she knows how to know you. Sarah, Dr Mavridis says, "has a vision system that can recognise faces and objects... And actually the detector and recogniser can recognise both pictures coming from the camera of the robot and pictures that are on Facebook." One unnerving consequence of this is that Sarah can find photos of you online and then tag them. She can also address you by name when she meets you in the real world.

At the moment, that's about all she can say: she greets you, tells you a piece of scripted news about the Media Lab and asks if you have any questions she might be able to answer at a later date. To be honest, it's disconcerting enough being greeted by name by what looks like a laptop and pair of binoculars mounted on a hand truck. In the future, however, she'll be able to pass on news about other members of your Facebook network, either by reading their feeds or reporting on her own interactions with them.

"The experimental hypothesis," Mavridis explains, "is if we get humans and robots to have shared memories and shared friends... then we're going to get better human-robot relationships."

And why would we want those? Mavridis's response is not, on the face of it, reassuring. "We expect to have quite a change in the ecology of beings in planet Earth in the next 20 to 50 years," he says. "We're going to have many more daily interactions not only with other humans and with machines that are not so intelligent, but... with robots, with virtual characters, with internet organisms. And also there's going to be a huge mix of things with biological together with cognitive prosthetics, mechanical prosthetics. The ultimate purpose is to do research which is going to enable this situation... to be one which is a harmonious and beneficial symbiosis."

Unnerving as this vision might sound, it has serious backing. Mavridis's research group is one of eight projects looking into human-robot interactions to receive significant funding from Microsoft. It is also, he says with pride, "the only one outside the Americas".

The doctor himself is Greek. He grew up in Salonika, the son of an astronomer father who taught him to work through subjects in a methodical fashion. "We had discussions over coffee, during sunsets," he remembers. "Basically what he taught me how to do is find the next question.
I was getting all this positive feedback whenever I was asking a question that would be interesting and sort of directly related to the previous topic."

Even today, he is slightly startling for the methodical fashion in which he approaches conversation. His answers to the most tentative enquiry seem to start with him counting items off on his fingers and end with a pre-prepared multimedia presentation. When I ask him about his childhood, he shows me a PowerPoint slideshow. It's slightly disconcerting.

The story it tells, however, is a common one in the comp-sci and engineering universe. "I really loved toy pianos and stuff like that," Mavridis says. "I would play with them, open them up to see how they work. And then I would try to make different sounds by changing things inside them." He laughs: "So that's exactly the predictor." He quickly graduated to electronic equipment and then early computers. "I think you remember this one," he says nostalgically, showing me a slide of a Sinclair ZX Spectrum. He made his first database when he was 14. From then on, his future course "was pretty obvious".

A curious influence in all these early investigations was Gyro Gearloose, an anthropomorphic chicken and inventor from the Disney universe who was assisted by a robot called Little Helper. Gyro was "my childhood hero", Mavridis says. He was still thinking in terms of the chicken and his robot friend when he started work on the Ripley project at the Massachusetts Institute of Technology (MIT), helping to build "grounded situation models" that would enable a robot to map its surroundings.

Ripley was a celebrity by the standards of robotics research projects; at any rate, the US's National Public Radio took an interest. He didn't look like much - a camera and claw mounted on a flexible arm. But as Mavridis wrote on his website at the time, Ripley had the makings of "a conversational helping hand, ie the avatar of the Little Helper".
And of course: "I became the avatar of Gyro."

The word "conversational" in the earlier quote might seem surprising here. Ripley didn't know many words: just spatial and colour terms, and the names for a few three-dimensional shapes. But unlike other conversation programmes, he arguably knew what he was saying.

Compare the most famous chatterbot the world has yet seen: Eliza, developed in the 1960s as a parody of a non-directional psychotherapist. Eliza followed a simple script (very simple: the entire programme only took up about 200 lines of code), which would start by asking interlocutors to state their problem. The computer would then rephrase each statement they made as a question, or else offer some bland prompt - "Please, go on," or "Why do you say that?" Apparently Eliza's creator, Joseph Weizenbaum, chose to make her a therapist in order to avoid having to give her any real-world knowledge, which sounds like a gratuitous dig until you happen to catch Eliza on a good day. Then, the resemblance is remarkable.

As Mavridis says: "The big problem with things like Eliza is there's no real understanding in any sense taking place. These things are basically black boxes. They only have text input and text output... It doesn't have any knowledge of its physical surroundings, and no knowledge of the social context or anything else like that."

Ripley was different. If you asked him where the blue object on the table in front of him was, he could tell you about it. If you then removed the object, he could offer a good guess about where it had gone. His ability to respond to questions was tied to his awareness of his physical situation as an embodied agent. "All of the work that we've been doing before," says Mavridis, "had to do with getting language out of the black box, and getting it to connect to the world."

On one level, this is simply engineering.
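Eliza's whole routine - reflect each statement back as a question, or fall back on a stock prompt - fits in a few lines. The sketch below is illustrative only: the patterns and templates are invented stand-ins, not Weizenbaum's original script.

```python
import random
import re

# Illustrative Eliza-style rules: each pattern maps to a template that
# reflects the user's own words back as a question. These are made-up
# examples of the technique, not Weizenbaum's actual rules.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# Bland prompts for when no pattern matches.
FALLBACKS = ["Please, go on.", "Why do you say that?", "I see."]

def eliza_reply(statement: str) -> str:
    """Rephrase the statement as a question, or offer a bland prompt."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!"))
    return random.choice(FALLBACKS)

print(eliza_reply("I am worried about robots"))
# -> Why do you say you are worried about robots?
```

Notice that the reply is assembled purely by shuffling the input text around; at no point does anything in the program represent what "worried" or "robots" means.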
But it also bears on some of the deepest philosophical issues in computer science - which is to say, on some of the deepest questions there are. Can computers learn language? Can they really be said to "know" anything? For Mavridis, the issue is "a symbol-grounding problem - and it's very much related to the Chinese Room experiment".

Some background. In 1980, the American philosopher John Searle ventured an argument which suggested that machines can't have knowledge in any ordinary human sense. The Chinese Room is perhaps the most famous thought experiment proposed by a living scholar. It has all the hallmarks of a classic: it's simple to relate, powerfully intuitive, and plays havoc with lots of common beliefs.

Here it is. Suppose you can't speak Chinese. You are made to sit in a cell into which cards are posted through a letterbox. The cards have Chinese characters painted on them, and you must look the characters up in a big manual, see what symbols they are paired with, then paint the new symbols on the reverse of the cards and post them back through the door. To the Chinese people outside the cell, it seems like you are having a perfectly sensible conversation - a bit slow, perhaps, but intelligible enough. Yet at no time do you understand what is being said. How could you? You don't speak Chinese. Indeed, Searle submits, nothing in your cell speaks Chinese. The impression of intelligence is an illusion. And the situation with Eliza, as with every other machine which simply executes a programme, seems to be just the same.

A standard response to Searle's argument is to say that it isn't a fair comparison. What if the room had windows, and the manual required you to match the ideograms up with objects that became visible outside your cell? What if, when you found the manual entry for certain symbols, it asked you in English about your memory of previous events and then recommended different Chinese replies depending on what had gone before?
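In the original, windowless room, the occupant's entire procedure is a table lookup. A minimal sketch, with a two-entry "manual" invented for illustration:

```python
# A toy "Chinese Room": the occupant matches each incoming card
# against a manual and posts back whatever the manual pairs with it.
# The symbol pairs are invented for illustration; the point is that
# the lookup works without any grasp of what the symbols mean.
MANUAL = {
    "你好吗": "我很好",   # "How are you?" -> "I am fine"
    "你是谁": "我是人",   # "Who are you?" -> "I am a person"
}

def room_reply(card: str) -> str:
    """Look the card up in the manual; understanding never enters into it."""
    return MANUAL.get(card, "请再说一遍")  # fallback: "Please say that again"

print(room_reply("你好吗"))  # -> 我很好
```

Give the room windows and a memory, as the objection above proposes, and the manual starts to reach outside itself - which is where the argument gets interesting.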
So adapted, it might seem that the Chinese Room really does, in some sense, speak Chinese. The problem isn't with the hardware. It's just that the software doesn't capture what language is.

"In traditional natural-language processing," Mavridis explains, "there is the illusion that you can represent meaning in an adequate way for an embodied agent through a semantic network". That is, the significance of each word you know is supposedly determined by the relation it stands in to every other item in your vocabulary. On this view, linguistic meaning could be captured by a thorough enough dictionary - perhaps something resembling the manual in the Chinese Room.

Yet the idea appears to have some implausible consequences. For one thing, unless every English speaker happens to know exactly the same collection of words, they would turn out to be speaking different languages, because the network diagrams that mapped their vocabularies wouldn't match up. From time to time, of course, we all have experiences which suggest that something of this sort may be going on. Anyone who has read a typical UAE tenancy agreement will know what it's like to find oneself wrestling with familiar words in an alien idiolect. All the same, it seems a disadvantage to the theory that it says your entire language changes every time you pick up a new word.

For Mavridis, however, the issue is less the fragility of this view of meaning (one which is sometimes called "semantic holism") than the fact that it just doesn't seem to connect to the world. "The big question," he says, "is whether we think this representation will be getting out from language-like or logic-like entities and whether we're actually grounding out to something that's external to the whole system." Eliza treated language as an essentially dictionary-based affair; consequently she never got out of Searle's imaginary cell. But, says Mavridis: "If you include whatever kind of remote media component, or sensory component...
then you are basically sort of getting out of the box." Hence Ripley, with his capacity to identify and track physical objects. And hence Sarah, with her ability to recognise individuals when she meets them in person and remember her encounters with them afterwards. Sarah has a line into the world, a line into the past, and a way of making sense of both.

Oddly enough, when Eliza was unveiled, many of her first patients reported falling in love with her. She was a good listener, I suppose. Still, when Sarah really gets out of her box - that is, when she's unleashed on the wider Facebook community in January 2010 - it would be a shame if she couldn't do better than that. After all, unlike Eliza, she's got things to talk about.

As I leave the media lab, Ibn Sina fixes me with a glassy stare. His jaw is slack and the back of his head is still hanging open. Sarah stands quietly beside him, her twin cameras tilted upwards at a quizzical angle. She looks a bit like a microcephalic version of the robot from Short Circuit. On reflection, perhaps she's better off with her own face after all.

elake@thenational.ae